Most real-world multi-agent tasks exhibit sparse interaction: agents interact with one another in only a limited number of crucial states and otherwise act largely independently. Effectively modeling this sparse interaction and using the learned interaction structure to guide each agent's learning can improve the efficiency of multi-agent reinforcement learning algorithms. However, it remains unclear how to identify these interactive states purely through trial and error in current multi-agent tasks.
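To make the idea of "interactive states" concrete, below is a minimal illustrative sketch of one plausible heuristic: flag a state as interactive when an agent's value estimate changes substantially once the other agent's state is taken into account. The value tables, threshold, and `interaction_states` helper are all illustrative assumptions for this sketch, not the method described above.

```python
import numpy as np

# Hypothetical heuristic for spotting "interaction states": compare an
# agent's value estimate conditioned only on its own local state with an
# estimate that also conditions on the other agent's state. States where
# the two disagree strongly are candidates for explicit coordination.

rng = np.random.default_rng(0)
n_states = 10
v_local = rng.normal(size=n_states)               # V(s_i): local-only estimate
v_joint = rng.normal(size=(n_states, n_states))   # V(s_i, s_j): joint estimate

def interaction_states(v_local, v_joint, threshold=1.0):
    """Return local states whose value depends strongly on the other
    agent's state, measured by the spread of the joint estimate."""
    spread = np.abs(v_joint - v_local[:, None]).max(axis=1)
    return np.flatnonzero(spread > threshold)

print(interaction_states(v_local, v_joint))
```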

Deep reinforcement learning (DRL) can learn control policies for many complicated tasks, but its power has not yet been fully brought to bear on multi-agent settings. Independent learning, in which each agent treats the others as part of the environment and learns its own policy without modeling theirs, is a simple way to apply DRL to multi-agent tasks. However, because the agents' policies change as learning proceeds, the environment is non-stationary from each agent's perspective, which makes conventional DRL methods inefficient.
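The independent-learning setup can be illustrated with a minimal tabular sketch: two independent Q-learners sharing a toy environment, each updating only from its own observations and rewards. The `toy_step` dynamics, reward scheme, and hyperparameters are assumptions made up for this sketch; the same structure applies when the tables are replaced by deep networks.

```python
import numpy as np

# Independent Q-learning sketch: each agent keeps its own Q-table and
# treats the other agent as part of the environment. Because both
# policies change during training, each agent's effective environment
# is non-stationary.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def toy_step(state, a0, a1):
    """Hypothetical joint transition: the outcome depends on both
    actions, but each learner only sees its own reward signal."""
    next_state = (state + a0 + a1) % N_STATES
    reward = 1.0 if a0 == a1 else -0.1  # coordinating pays off
    return next_state, reward

q_tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(2)]

state = 0
for _ in range(10_000):
    # Each agent acts epsilon-greedily with respect to its own Q-table.
    actions = [
        int(rng.integers(N_ACTIONS)) if rng.random() < EPS
        else int(np.argmax(q[state]))
        for q in q_tables
    ]
    next_state, reward = toy_step(state, *actions)
    # Independent updates: no agent models the other's policy.
    for q, a in zip(q_tables, actions):
        td_target = reward + GAMMA * q[next_state].max()
        q[state, a] += ALPHA * (td_target - q[state, a])
    state = next_state
```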
