In recent work [1], we developed a distributed stochastic multi-armed contextual bandit algorithm to learn optimal actions when the contexts are unknown, and M agents work collaboratively under the coordination of a central server to minimize the total regret. In our model, the agents observe only the context distribution; the exact context remains unknown to them. Such a situation arises, for instance, when the context itself is a noisy measurement or comes from a prediction mechanism.
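
As a rough illustration of this setting (not the algorithm from [1]), the sketch below runs a LinUCB-style index on the expected context features per arm, with a single set of server-side sufficient statistics shared by all agents; the names, problem sizes, reward model, and the synchronous aggregation are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: each agent sees only a distribution over d-dimensional
# context features for each of K arms, summarized here by the mean feature
# vector per arm. A LinUCB-style index is computed from the expected features,
# and agents' sufficient statistics (A, b) are pooled at the server.

d, K, M = 5, 4, 3            # feature dim, arms, agents (illustrative sizes)
alpha = 1.0                  # exploration weight (assumed hyperparameter)
rng = np.random.default_rng(0)

# Server-side sufficient statistics for ridge regression.
A = np.eye(d)
b = np.zeros(d)

def choose_arm(mean_features):
    """Pick the arm with the largest UCB index using expected features."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b
    ucb = [f @ theta + alpha * np.sqrt(f @ A_inv @ f) for f in mean_features]
    return int(np.argmax(ucb))

for t in range(100):
    for agent in range(M):
        # Each agent observes only the mean of the context distribution per arm.
        mean_features = rng.normal(size=(K, d))
        a = choose_arm(mean_features)
        reward = rng.normal()                    # placeholder reward feedback
        # Agent's local update, aggregated by the server (synchronous here).
        A += np.outer(mean_features[a], mean_features[a])
        b += reward * mean_features[a]
```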

Most real-world multi-agent tasks exhibit sparse interaction: agents interact with each other in a limited number of crucial states while largely acting independently. Effectively modeling this sparse interaction and leveraging the learned interaction structure to guide agents' learning can improve the efficiency of multi-agent reinforcement learning algorithms. However, it remains unclear how to identify these interactive states solely through trial and error in current multi-agent tasks.
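
One common heuristic for spotting such states, shown here only as an illustrative sketch and not as the method proposed in this work, is to flag states where the joint-action value deviates from what the agents' independent value estimates would predict; the function name, inputs, and threshold below are hypothetical.

```python
# Illustrative heuristic (not this paper's criterion): mark a state as an
# "interaction state" when the joint-action value differs noticeably from the
# sum of the agents' independent value estimates. q_joint and q_indep_* are
# assumed to be learned elsewhere (e.g., by tabular Q-learning).

def interaction_states(q_joint, q_indep_a, q_indep_b, threshold=0.5):
    """Return the set of states where coordination appears to matter.

    q_joint:      dict mapping state -> max joint-action value
    q_indep_a/b:  dicts mapping state -> max individual-action value per agent
    """
    flagged = set()
    for s in q_joint:
        gap = abs(q_joint[s] - (q_indep_a.get(s, 0.0) + q_indep_b.get(s, 0.0)))
        if gap > threshold:      # threshold is an illustrative hyperparameter
            flagged.add(s)
    return flagged
```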

Traditional Time-series Anomaly Detection (TAD) methods often struggle with the composite nature of complex time-series data and a diverse array of anomalies. We introduce TADNet, an end-to-end TAD model that leverages Seasonal-Trend Decomposition to link various types of anomalies to specific decomposition components, thereby simplifying the analysis of complex time-series and enhancing detection performance. Our training methodology, which includes pre-training on a synthetic dataset followed by fine-tuning, strikes a balance between effective decomposition and precise anomaly detection.
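
The core idea of tying anomalies to decomposition components can be illustrated with an off-the-shelf seasonal-trend decomposition. The sketch below uses statsmodels' STL and a robust z-score on the residual component; this is a simplification rather than TADNet's learned decomposition, and the synthetic series, period, and threshold are assumptions.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

# Minimal illustration: decompose a series into trend, seasonal, and residual
# components, then flag points whose residual is far from typical.
rng = np.random.default_rng(1)
t = np.arange(500)
series = 0.01 * t + np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
series[200] += 3.0                       # inject a point anomaly

res = STL(series, period=50).fit()       # seasonal-trend decomposition
resid = res.resid

# Robust z-score on the residual component.
mad = np.median(np.abs(resid - np.median(resid)))
z = 0.6745 * (resid - np.median(resid)) / mad
anomalies = np.where(np.abs(z) > 5)[0]   # threshold chosen for illustration
print(anomalies)
```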

Sequential recommendation aims to predict the next item a user is most interested in by capturing the behavior patterns embedded in the user's historical interaction sequence. However, most existing methods struggle to model the fine-grained dependencies embedded in users' various periodic behavior patterns and the heterogeneous dependencies across multiple behavior types. To address these challenges, we propose a Filter-enhanced Hypergraph Transformer framework for Multi-Behavior Sequential Recommendation (FHT-MB).
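
As a rough sketch of the "filter-enhanced" ingredient only (the hypergraph transformer part is omitted), the layer below applies a learnable frequency-domain filter to a behavior-sequence embedding, in the spirit of FFT-based filtering layers for sequential recommendation; the class name, dimensions, and residual/normalization choices are illustrative assumptions, not the FHT-MB implementation.

```python
import torch
import torch.nn as nn

class FrequencyFilterLayer(nn.Module):
    """Learnable frequency-domain filter over a sequence of item embeddings."""

    def __init__(self, max_len: int, hidden: int):
        super().__init__()
        # One learnable complex filter per frequency bin and hidden channel.
        self.filter = nn.Parameter(
            torch.randn(max_len // 2 + 1, hidden, dtype=torch.cfloat) * 0.02
        )
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):                     # x: (batch, seq_len, hidden)
        spec = torch.fft.rfft(x, dim=1)       # to the frequency domain
        spec = spec * self.filter             # element-wise learnable filtering
        out = torch.fft.irfft(spec, n=x.size(1), dim=1)
        return self.norm(out + x)             # residual connection

# Usage on a toy batch of behavior-sequence embeddings.
layer = FrequencyFilterLayer(max_len=50, hidden=64)
y = layer(torch.randn(8, 50, 64))
```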

This work proposes a model for continual learning on tasks involving temporal sequences, specifically human motions. It improves on a recently proposed brain-inspired replay model (BI-R) by building a biologically inspired conditional temporal variational autoencoder (BI-CTVAE), which instantiates a latent mixture of Gaussians for class representation. We investigate a novel continual-learning-to-generate (CL2Gen) scenario in which the model generates motion sequences of different classes. The generative accuracy of the model is tested over a set of tasks.
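
A compact sketch of the general idea of a conditional temporal VAE with one Gaussian latent component per class is given below; the GRU encoder/decoder, layer sizes, and loss weighting are illustrative assumptions and this is not the BI-CTVAE implementation.

```python
import torch
import torch.nn as nn

class ConditionalTemporalVAE(nn.Module):
    """Temporal VAE whose latent prior has a learnable mean per class."""

    def __init__(self, n_classes=10, feat=63, hidden=128, latent=32):
        super().__init__()
        self.encoder = nn.GRU(feat, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # One learnable prior mean per class (mixture-of-Gaussians prior).
        self.prior_mu = nn.Embedding(n_classes, latent)
        self.decoder = nn.GRU(latent, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, feat)

    def forward(self, x, y):                       # x: (B, T, feat), y: (B,)
        _, h = self.encoder(x)                     # h: (1, B, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to the class-specific prior component N(prior_mu[y], I).
        kl = 0.5 * torch.sum(
            torch.exp(logvar) + (mu - self.prior_mu(y)) ** 2 - 1.0 - logvar, dim=1
        ).mean()
        z_seq = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec, _ = self.decoder(z_seq)
        return self.readout(dec), kl

model = ConditionalTemporalVAE()
x, y = torch.randn(4, 60, 63), torch.randint(0, 10, (4,))
recon, kl = model(x, y)
loss = ((recon - x) ** 2).mean() + 1e-3 * kl       # illustrative loss weighting
```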

A number of machine learning applications involve time series prediction, and in some cases additional information about dynamical constraints on the target time series may be available. For instance, it might be known that the desired quantity cannot change faster than some rate or that the rate is dependent on some known factors. However, incorporating these constraints into deep learning models, such as recurrent neural networks, is not straightforward.
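
One simple way to bake a hard rate limit into a recurrent predictor, shown below as a minimal sketch under the assumption that the constraint has the form |y_t - y_{t-1}| <= max_rate, is to have the network output a bounded increment at each step rather than the value itself; the module and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class RateLimitedRNN(nn.Module):
    """GRU predictor whose output cannot change faster than max_rate per step."""

    def __init__(self, in_dim=8, hidden=64, max_rate=0.1):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.max_rate = max_rate

    def forward(self, x, y0):                      # x: (B, T, in_dim), y0: (B, 1)
        h, _ = self.rnn(x)
        # tanh bounds each step's change to [-max_rate, max_rate].
        deltas = self.max_rate * torch.tanh(self.head(h))   # (B, T, 1)
        return y0.unsqueeze(1) + torch.cumsum(deltas, dim=1)

model = RateLimitedRNN()
y = model(torch.randn(2, 20, 8), torch.zeros(2, 1))          # (2, 20, 1)
```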

Deep neural networks (DNNs) allow digital receivers to learn to operate in complex environments. However, a DNN infers reliably only when the statistical relationship it encounters at inference resembles the one under which it was trained. This property is a major drawback of DNN-aided receivers in dynamic communication systems, whose input-output relationship varies over time. In such setups, DNN-aided receivers may need to be retrained periodically, which conventionally involves excessive pilot signaling at the cost of reduced spectral efficiency.
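
The conventional pilot-based retraining loop described above can be sketched as follows; the receiver architecture, pilot block size, and optimizer settings are placeholders chosen for illustration rather than a specific receiver design.

```python
import torch
import torch.nn as nn

# Schematic of pilot-based retraining: whenever the channel is assumed to have
# changed, the receiver fine-tunes on the freshly received pilot block.
receiver = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 4))  # toy 4-symbol demapper
optimizer = torch.optim.Adam(receiver.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def retrain_on_pilots(rx_pilots, tx_labels, steps=50):
    """Fine-tune the receiver on known pilot symbols (costs spectral efficiency)."""
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(receiver(rx_pilots), tx_labels)
        loss.backward()
        optimizer.step()

for block in range(10):                     # successive coherence blocks
    rx_pilots = torch.randn(128, 2)         # placeholder received pilot observations
    tx_labels = torch.randint(0, 4, (128,)) # known transmitted pilot symbols
    retrain_on_pilots(rx_pilots, tx_labels)
    # ...the adapted receiver then demaps the data portion of the block.
```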

This work proposes a novel and scalable reinforcement learning approach for routing in ad-hoc wireless networks. In most previous reinforcement-learning-based routing methods, the links in the network are assumed to be fixed, and a different agent is trained for
