Robust Distributed Gradient Descent with Arbitrary Number of Byzantine Attackers
- Submitted by:
- Xinyang Cao
- Last updated:
- 14 April 2018 - 12:40am
- Document Type:
- Poster
- Document Year:
- 2018
- Presenters:
- Xinyang Cao
- Paper Code:
- 1822
Due to the growth of modern dataset sizes and the desire to harness the computing power of multiple machines, there has been a recent surge of interest in the design of distributed machine learning algorithms. However, distributed algorithms are sensitive to Byzantine attackers, who can send falsified data to prevent the algorithms from converging or to steer them toward a value of the attackers' choice. Our novel algorithm can deal with an arbitrary number of Byzantine attackers.
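The poster itself does not spell out the algorithm, but the threat model it describes can be illustrated with a minimal sketch. The snippet below is a generic robust-aggregation example (coordinate-wise median, a common defense in this literature), not the authors' method: a single Byzantine worker sends a falsified gradient that ruins naive averaging, while the median stays close to the honest gradients.

```python
import numpy as np

# Hypothetical illustration: a generic robust aggregation rule
# (coordinate-wise median), NOT the algorithm from this poster.

def aggregate_mean(grads):
    # Naive averaging: one Byzantine gradient can shift the result arbitrarily.
    return np.mean(grads, axis=0)

def aggregate_median(grads):
    # Coordinate-wise median: resistant to extreme falsified gradients.
    return np.median(grads, axis=0)

rng = np.random.default_rng(0)
# Six honest workers report gradients near [1.0, -2.0].
honest = [np.array([1.0, -2.0]) + 0.01 * rng.standard_normal(2)
          for _ in range(6)]
# One Byzantine worker reports a huge falsified gradient.
byzantine = [np.array([1e6, 1e6])]
grads = honest + byzantine

print(aggregate_mean(grads))    # dragged far from the honest gradients
print(aggregate_median(grads))  # remains close to [1.0, -2.0]
```

In the mean, the attacker's contribution scales with its magnitude, so convergence can be blocked or redirected; the median discards such outliers coordinate by coordinate.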