MALO Lab

In the Multi-Agent Learning and Optimization (MALO) Lab, we study distributed algorithms for learning and optimization over multi-agent networks. Specifically, we design rules that allow a group of autonomous agents, each holding only local information, to collaboratively achieve global objectives through local computation and local communication.

The figure above shows two typical architectures for distributed computation: centralized (left) versus decentralized (right). MALO Lab focuses on the decentralized architecture, which requires no central controller and offers greater flexibility, robustness, and lower communication overhead.
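As a small, self-contained illustration of what local computation and local communication can accomplish without a central controller, the sketch below (a hypothetical toy example, not code from our publications) runs classical average consensus on a five-agent ring: each agent repeatedly averages its own value with its two neighbors' values, and every agent ends up with the network-wide average.

```python
import numpy as np

# Toy sketch of decentralized average consensus on an undirected ring.
# Each agent holds one private number and talks only to its two neighbors;
# there is no central server. (Illustrative example, not a lab algorithm.)

n = 5
values = np.array([2.0, 7.0, 1.0, 4.0, 6.0])   # private local measurements
target = values.mean()                          # what a central server would compute

# Doubly stochastic mixing matrix: weight 1/3 on self and on each neighbor.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = values.copy()
for _ in range(100):
    x = W @ x            # one round of purely local averaging per agent

print(np.allclose(x, target))    # True: every agent recovers the global average
```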

Applications of our research include large-scale distributed machine learning, resource allocation in networks, multi-robot coordination, and decentralized estimation, among others.

Our research is interdisciplinary in nature and spans several areas including network science, optimization, control theory, and machine learning.

Current Projects

Distributed Optimization over General Directed Networks

Network topology plays a central role in the design and analysis of decentralized algorithms for learning and optimization over multi-agent networks. Most existing works consider undirected networks, where information exchange is bidirectional. In many real-world applications, however, some links only support one-way communication, and the resulting network is directed. This project focuses on designing novel algorithms for distributed learning and optimization over general directed networks.

An illustration of distributed optimization over a directed network.
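To make the directed setting concrete, here is a hedged sketch of push-sum based distributed optimization (in the spirit of the classical subgradient-push method, not necessarily the algorithms developed in this project). The three-agent graph, local quadratic objectives, and step sizes are made-up illustrative choices.

```python
import numpy as np

# Sketch of push-sum based distributed optimization over a directed graph
# (classical subgradient-push style; illustrative assumptions throughout).
# Local quadratics f_i(z) = 0.5 * (z - b_i)^2, so the global minimizer is mean(b).

b = np.array([1.0, 5.0, 9.0])
n = len(b)

# Directed communication: agent j sends to out_neighbors[j] (including itself).
out_neighbors = {0: [0, 1, 2], 1: [1, 2], 2: [2, 0]}
A = np.zeros((n, n))                     # column-stochastic mixing matrix
for j, outs in out_neighbors.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)        # j splits its mass over its out-neighbors

x = np.zeros(n)        # push-sum numerators
y = np.ones(n)         # push-sum weights, which correct the one-way imbalance
z = x / y              # each agent's running estimate of the minimizer

for t in range(1, 5001):
    alpha = 1.0 / t                      # diminishing step size
    w = A @ x                            # mix numerators along directed links
    y = A @ y                            # mix the de-biasing weights the same way
    z = w / y                            # de-biased local estimates
    x = w - alpha * (z - b)              # local (sub)gradient step at z_i

print(np.round(z, 2))    # all entries approach the global minimizer mean(b) = 5.0
```

The auxiliary variable y_i is what distinguishes the directed setting: with only column-stochastic weights available, plain averaging would be biased toward agents with few out-neighbors, and dividing by y_i removes that bias.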

Related works:

Asymptotic Network Independence in Distributed Stochastic Gradient Methods

Distributed stochastic gradient methods are the workhorse for solving large-scale machine learning problems over networks. To accelerate convergence and improve the scalability of such methods, this project studies the asymptotic network independence property of distributed stochastic gradient algorithms: asymptotically, the distributed algorithm performs as well as its centralized counterpart and is unaffected by the network topology. Moreover, the project will develop algorithms with shorter transient times for reaching this network-independent regime.

See this paper for an introduction to asymptotic network independence in distributed optimization.
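For a concrete baseline, below is a minimal sketch of vanilla decentralized SGD (DSGD), the kind of method whose network independence is studied; the ring topology, quadratic objectives, noise model, and step sizes are toy assumptions for illustration only. Each agent mixes its iterate with its neighbors' and then takes a step along a noisy local gradient; after an initial transient, the network-averaged iterate behaves much like centralized SGD on all of the data.

```python
import numpy as np

# Toy sketch of decentralized SGD (DSGD) on an undirected ring.
# Local objectives f_i(x) = 0.5 * (x - b_i)^2 with additive gradient noise;
# the network-wide optimum is mean(b). Illustrative assumptions throughout.

rng = np.random.default_rng(0)
n = 8
b = 3.0 * rng.normal(size=n)               # heterogeneous local data

# Doubly stochastic mixing matrix: 1/3 on self and on each ring neighbor.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.zeros(n)                            # one scalar iterate per agent
for t in range(1, 20001):
    alpha = 1.0 / (t + 10)                 # diminishing step size
    noisy_grad = (x - b) + 0.5 * rng.normal(size=n)   # stochastic local gradients
    x = W @ x - alpha * noisy_grad         # mix with neighbors, then local SGD step

print(abs(x.mean() - b.mean()))            # small: the agents track the global optimum
print(x.max() - x.min())                   # small: the agents also reach consensus
```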

Related works:

Communication-Efficient Decentralized Learning Methods

Existing theory in distributed multi-agent optimization mainly concerns the number of numerical operations required to reach a given accuracy, i.e., the computational complexity. In a distributed setting, however, communication costs are non-negligible and can be the key factor limiting an algorithm’s performance, especially on large-scale networks. To reduce communication costs, this project explores communication-efficient algorithms that require minimal information exchange between agents while preserving satisfactory computational complexity. One effort in this direction is to incorporate communication compression techniques into the design of distributed optimization algorithms, as sketched below.
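As one concrete pattern (a hedged sketch, not this project's algorithm), the snippet below combines a top-k sparsification operator with a compressed consensus step in the spirit of error-compensated schemes such as CHOCO-GOSSIP: each agent transmits only a compressed correction to a publicly tracked copy of its iterate, so every message carries k coordinates instead of the full d-dimensional vector. All parameters (topology, dimension, k, gamma) are illustrative assumptions.

```python
import numpy as np

# Sketch of communication compression in decentralized averaging, in the spirit
# of error-compensated compressed gossip (e.g., CHOCO-GOSSIP). Hypothetical setup.

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out                      # only k values + k indices need to be transmitted

rng = np.random.default_rng(1)
n, d, k, gamma = 5, 20, 4, 0.02     # agents, dimension, sparsity, consensus step size

# Doubly stochastic mixing matrix for an undirected ring.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = rng.normal(size=(n, d))         # local iterates (one row per agent)
x_hat = np.zeros_like(x)            # public copies, kept in sync by the compressed messages
initial_gap = np.max(np.abs(x - x.mean(axis=0)))

for _ in range(10000):
    # Each agent broadcasts a compressed correction to its public copy.
    q = np.array([top_k(x[i] - x_hat[i], k) for i in range(n)])
    x_hat += q
    # The consensus step uses only the public (compressed) copies.
    x += gamma * (W @ x_hat - x_hat)

final_gap = np.max(np.abs(x - x.mean(axis=0)))
print(initial_gap, final_gap)       # the disagreement shrinks markedly for small enough gamma
```

Transmitting corrections to a shared reference copy, rather than the compressed iterate itself, is what keeps the compression error from accumulating over iterations.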

Related works:

Funding Support

We gratefully acknowledge funding support from The Chinese University of Hong Kong, Shenzhen, Shenzhen Research Institute of Big Data, Shenzhen Institute of Artificial Intelligence and Robotics for Society, and the National Natural Science Foundation of China.