LEMURS: Learning Distributed Multi-robot Interactions


Eduardo Sebastián, Thai Duong, Nikolay Atanasov, Eduardo Montijano and Carlos Sagués

Departamento de Informática e Ingeniería de Sistemas,
Universidad de Zaragoza
Department of Electrical and Computer Engineering,
University of California, San Diego

Under review, IEEE ICRA 2023

[Paper]
[Pronunciation]
[Code]


This paper presents LEMURS, an algorithm for learning scalable multi-robot control policies from cooperative task demonstrations. We propose a port-Hamiltonian description of the multi-robot system to exploit universal physical constraints in interconnected systems and achieve closed-loop stability. We represent a multi-robot control policy using an architecture that combines self-attention mechanisms and neural ordinary differential equations. The former handles time-varying communication in the robot team, while the latter respects the continuous-time robot dynamics. Our representation is distributed by construction, enabling the learned control policies to be deployed in robot teams of different sizes. We demonstrate that LEMURS can learn interactions and cooperative behaviors from demonstrations of multi-agent navigation and flocking tasks.
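The closed-loop stability claim rests on the standard passivity argument for port-Hamiltonian systems. A brief sketch, using generic notation rather than the paper's own symbols:

```latex
% Port-Hamiltonian dynamics with skew-symmetric interconnection J
% and positive-semidefinite dissipation R:
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla_x H(x),
\qquad J = -J^\top, \quad R \succeq 0.
% Along trajectories the Hamiltonian (energy) never increases,
% because the skew-symmetric part contributes nothing:
\dot{H} = \nabla_x H^\top (J - R)\,\nabla_x H
        = -\,\nabla_x H^\top R\,\nabla_x H \;\le\; 0,
% so H serves as a Lyapunov-like function for the closed loop.
```

This is why parameterizing the learned policy in port-Hamiltonian form, as the paper proposes, builds stability into the representation rather than enforcing it after training.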


Paper

Eduardo Sebastián, Thai Duong, Nikolay Atanasov, Eduardo Montijano and Carlos Sagués

LEMURS: Learning Distributed Multi-Robot Interactions

Under review, IEEE ICRA 2023.

[pdf]    

Overview



Details and Multimedia




Proposed architecture for LEMURS. Robot i receives information from its neighbors. Then, the self-attention module processes the data to obtain the port-Hamiltonian terms. With this information, robot i computes the learned control policy through an ordinary differential equation solver.
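As an illustration of this pipeline, below is a minimal NumPy sketch of one robot's update. It is not the authors' implementation: the weight matrices, the quadratic Hamiltonian, and the single Euler step (standing in for the ODE solver) are hypothetical placeholders.

```python
import numpy as np

def softmax(s):
    """Numerically stable softmax."""
    e = np.exp(s - s.max())
    return e / e.sum()

def lemurs_like_step(x_i, neighbor_states, Wq, Wk, Wv, dt=0.05, damping=0.5):
    """One illustrative update for robot i.

    x_i             : (2n,) state of robot i (positions stacked on velocities)
    neighbor_states : (k, 2n) states received from the k current neighbors
    Wq, Wk, Wv      : placeholder attention matrices (learned in practice)
    """
    # 1) Self-attention over neighbor states: scaled dot-product scores.
    q = Wq @ x_i
    keys = neighbor_states @ Wk.T
    att = softmax(keys @ q / np.sqrt(q.size))
    target = att @ (neighbor_states @ Wv.T)  # attention-weighted neighbor feature

    # 2) Port-Hamiltonian terms: skew-symmetric interconnection J,
    #    positive-semidefinite dissipation R, and the gradient of a toy
    #    Hamiltonian H(x) = 0.5 * ||x - target||^2.
    n = x_i.size // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])
    R = damping * np.eye(2 * n)
    dH = x_i - target

    # 3) One explicit-Euler step of dx/dt = (J - R) dH/dx; LEMURS instead
    #    integrates the flow with a neural-ODE solver.
    return x_i + dt * (J - R) @ dH
```

Because each robot runs this update using only its neighbors' states, the computation is distributed by construction, which is what allows deployment on teams of a different size than the training demonstrations.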


LEMURS can learn control policies that replicate cooperative tasks from observed demonstrations alone. For instance, LEMURS can learn how to navigate and avoid collisions under a fixed communication topology (left) or a time-varying topology (center), and how to flock (right). (Top) Trajectories from demonstrations, (bottom) learned control policies.


The resulting control policies generalize to any number of robots in the team, while preserving closed-loop stability guarantees and a distributed communication topology.


We can train LEMURS with demonstrations from 4 robots and deploy it in teams of more than 60 robots while still achieving successful cooperative behavior.


There is no significant difference in training or deployment performance when the size of the team changes.


LEMURS outperforms other learning methods in capturing the complex interactions from the task demonstrations.


Code


 [github]


Citation


If you find our paper or code useful for your research, please cite our work as follows.

E. Sebastián, T. Duong, N. Atanasov, E. Montijano, C. Sagués. LEMURS: Learning Distributed Multi-Robot Interactions. Under review, IEEE ICRA 2023.

@article{sebastian22LEMURS,
  author  = {Eduardo Sebasti\'{a}n and Thai Duong and Nikolay Atanasov and Eduardo Montijano and Carlos Sag\"{u}\'{e}s},
  title   = {{LEMURS: Learning Distributed Multi-robot Interactions}},
  journal = {arXiv preprint arXiv:2209.09702},
  year    = {2022}
}




Acknowledgements

This work has been supported by NSF grant CCF-2112665 (TILOS), ONR Global grant N62909-19-1-2027, Spanish projects PID2021-125514NB-I00, PID2021-124137OB-I00, and PGC2018-098719-B-I00 (MCIU/AEI/FEDER, UE), DGA T45-20R, and Spanish grants FPU19-05700 and EST22/00253.

This webpage template was borrowed from https://thaipduong.github.io/SE3HamDL/.