Robust Distributed Gradient Descent with Arbitrary Number of Byzantine Attackers

Citation Author(s):
Lifeng Lai
Submitted by:
Xinyang Cao
Last updated:
14 April 2018 - 12:40am
Document Type:
Poster
Document Year:
2018
Presenters:
Xinyang Cao
Paper Code:
1822

Due to the growth of modern dataset sizes and the desire to harness the computing power of multiple machines, there has been a recent surge of interest in the design of distributed machine learning algorithms. However, distributed algorithms are vulnerable to Byzantine attackers, who can send falsified data to prevent the algorithms from converging or to steer them toward a value of the attackers' choice. Our novel algorithm can tolerate an arbitrary number of Byzantine attackers.
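
To illustrate the setting, the sketch below simulates one filtering-style defense in which the parameter server holds a small trusted dataset, computes its own noisy reference gradient, and accepts only worker gradients that lie close to it; because acceptance does not depend on majority voting among workers, even a majority of Byzantine workers cannot steer the update. The linear-regression setup, the threshold value, and the attack model are all illustrative assumptions for this sketch, not details taken from the poster.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: linear regression with loss ||Xw - y||^2 / (2n).
d, n_workers, n_per_worker = 5, 10, 100
w_true = rng.normal(size=d)

# Each worker holds a local shard of data.
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per_worker, d))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

# Assumption: the server keeps a small trusted dataset from which it
# can compute a noisy reference gradient of its own.
X0 = rng.normal(size=(20, d))
y0 = X0 @ w_true + 0.01 * rng.normal(size=20)

def grad(X, y, w):
    return X.T @ (X @ w - y) / len(y)

byzantine = set(range(6))    # 6 of 10 workers are attackers (a majority)
w = np.zeros(d)
step, threshold = 0.1, 1.0   # threshold on distance to the reference gradient

for t in range(200):
    ref = grad(X0, y0, w)    # server's noisy reference gradient
    accepted = []
    for i, (X, y) in enumerate(shards):
        g = grad(X, y, w)
        if i in byzantine:
            # Falsified gradient aimed at reversing the descent direction.
            g = -10.0 * g + 5.0 * rng.normal(size=d)
        # Accept only gradients close to the reference; honest gradients
        # concentrate around the true gradient, so the filter holds no
        # matter how many workers are Byzantine.
        if np.linalg.norm(g - ref) <= threshold:
            accepted.append(g)
    # Fall back to the reference gradient if every worker is rejected.
    agg = np.mean(accepted, axis=0) if accepted else ref
    w -= step * agg

print("estimation error:", np.linalg.norm(w - w_true))

Running the sketch, the estimation error shrinks toward the noise floor despite 6 of the 10 workers being Byzantine, whereas plain gradient averaging would diverge under the same attack.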
