
Despite the importance of distributed learning, few fully distributed support vector machines exist. In this paper, we not only provide a fully distributed nonlinear SVM but also propose the first distributed constrained-form SVM. In the fully distributed setting, a dataset is spread among networked agents that cannot divulge, let alone centralize, their data, and that can communicate only with their neighbors in the network. Our strategy combines two algorithms: the Douglas-Rachford algorithm and the projection-gradient method.
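To make the second of these building blocks concrete, here is a minimal, centralized sketch of the projection-gradient method, which alternates a gradient step with a projection onto the constraint set. This is an illustration of the generic iteration only, not the paper's distributed algorithm; the quadratic objective and unit-ball constraint are assumptions chosen for the example.

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    """Generic projection-gradient iteration: x <- P_C(x - step * grad(x))."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Illustrative problem (an assumption, not from the paper):
# minimize ||x - c||^2 subject to ||x|| <= 1.
c = np.array([3.0, 4.0])
grad = lambda x: 2.0 * (x - c)                        # gradient of the objective
project = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto the unit ball
x_star = projected_gradient(grad, project, np.zeros(2))
# The constrained minimizer is c rescaled to the ball boundary, c / ||c||.
```

Here the step size is kept below 1/L (L = 2 is the gradient's Lipschitz constant), which is the standard condition for convergence of this iteration.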


We address for the first time the question of how networked agents can collaboratively fit a Morozov-regularized linear model when each agent knows a summand of the regression data. This question generalizes previously studied data-splitting scenarios, which require that the data be partitioned among the agents. To answer it, we introduce a class of network-structured problems that contains the regularization problem, and, using the Douglas-Rachford splitting algorithm, we develop a distributed algorithm for solving problems in this class.