Distributed and Cooperative Learning (MLR-DIST)

Decentralized Optimization with Non-Identical Sampling in Presence of Stragglers


We consider decentralized consensus optimization when workers sample data from non-identical distributions and perform variable amounts of work due to slow nodes known as stragglers. The problems of non-identical distributions and of variable amounts of work have previously been studied separately; in our work we analyze them together under a unified system model. We propose combining worker outputs weighted by the amount of work completed by each.
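As a rough sketch of the proposed combining rule (the deadline setup and all names below are illustrative assumptions, not the paper's notation): each worker reports the sum of the mini-batch gradients it completed before a deadline, and the aggregator divides the pooled sums by the total work done, which is equivalent to weighting each worker's output by its share of the completed work.

```python
import numpy as np

def combine_by_work(grad_sums, batches_done):
    """Combine worker outputs weighted by completed work: dividing the
    pooled gradient sums by the total mini-batch count equals weighting
    each worker's average gradient by its share of the work."""
    return sum(grad_sums) / sum(batches_done)

# Toy usage: 3 workers computing 5-dimensional gradients within a fixed
# time window; worker 2 is a straggler and finishes only 1 mini-batch.
rng = np.random.default_rng(0)
batches = [4, 4, 1]
grad_sums = [rng.normal(size=5) * b for b in batches]
print(combine_by_work(grad_sums, batches))
```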

Paper Details

Authors: Tharindu Adikari, Stark C. Draper
Submitted On: 14 May 2020 - 10:40am

Document Files

Slides for the talk

Cite

Tharindu Adikari and Stark C. Draper, "Decentralized Optimization with Non-Identical Sampling in Presence of Stragglers," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5302

CONSENSUS-BASED DISTRIBUTED CLUSTERING FOR IOT


Clustering is a common technique for statistical data analysis and has been widely used in many fields. When data are collected via a distributed network or stored in a distributed manner, data analysis algorithms have to be designed in a distributed fashion. This paper investigates clustering with distributed data. To address distributed-network challenges including data volume, communication latency, and information security, we propose a distributed clustering algorithm in which each IoT device may hold data from multiple clusters.
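The consensus primitive underlying such algorithms can be sketched as follows (a minimal illustration, not the paper's exact method; the mixing matrix W and the toy network are assumptions): each device repeatedly averages its local cluster-centroid estimates with those of its neighbors, so all devices converge to network-wide estimates without moving raw data.

```python
import numpy as np

def consensus_step(centroids, W):
    """One consensus round: each device replaces its local centroid
    estimates by a weighted average of its neighbors' estimates.
    centroids: (n_devices, n_clusters, dim); W: doubly stochastic
    mixing matrix respecting the network topology."""
    return np.einsum('ij,jkd->ikd', W, centroids)

# Toy usage: 3 devices, 2 clusters in 2-D; repeated mixing drives all
# devices to the network-wide average of the initial estimates.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
rng = np.random.default_rng(1)
local = rng.normal(size=(3, 2, 2))
for _ in range(50):
    local = consensus_step(local, W)
print(local[0])
```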

Paper Details

Authors: Hui Chen, Hao Yu, Shengjie Zhao, Qingjiang Shi
Submitted On: 14 May 2020 - 1:20am

Document Files

ICASSP_slides_paper_3420.pdf

Cite

Hui Chen, Hao Yu, Shengjie Zhao and Qingjiang Shi, "CONSENSUS-BASED DISTRIBUTED CLUSTERING FOR IOT," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5222

COOPERATIVE LEARNING VIA FEDERATED DISTILLATION OVER FADING CHANNELS


Cooperative training methods for distributed machine learning are typically based on the exchange of local gradients or local model parameters; the latter approach is known as Federated Learning (FL). Federated Distillation (FD), a recently proposed alternative with reduced communication overhead, instead exchanges only averaged model outputs.
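To make the communication saving concrete, here is a minimal sketch of the FD-style exchange (illustrative only; function names and shapes are our assumptions): instead of uploading model weights, each device uploads its average output vector per label, and the server averages these small summaries into global distillation targets.

```python
import numpy as np

def local_average_outputs(outputs, labels, n_classes):
    """Per-label average model output on one device: the small summary
    FD exchanges in place of full model parameters."""
    avg = np.zeros((n_classes, outputs.shape[1]))
    for c in range(n_classes):
        sel = labels == c
        if sel.any():
            avg[c] = outputs[sel].mean(axis=0)
    return avg

def global_targets(uploads):
    """Server side: average the per-label summaries across devices;
    devices then regularize local training toward these targets."""
    return np.mean(uploads, axis=0)

# Toy usage: 4 devices, 3 classes, 3-dimensional model outputs.
rng = np.random.default_rng(2)
uploads = [local_average_outputs(rng.normal(size=(60, 3)),
                                 rng.integers(0, 3, size=60), 3)
           for _ in range(4)]
print(global_targets(uploads).shape)  # (3, 3): tiny vs. full model size
```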

Paper Details

Authors: Joonhyuk Kang
Submitted On: 13 May 2020 - 11:03pm

Document Files

[ICASSP]JHA_Cooperative Learning via Federated Distillation over Fading Channels.pdf

Cite

Joonhyuk Kang, "COOPERATIVE LEARNING VIA FEDERATED DISTILLATION OVER FADING CHANNELS," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5211

Artificial Intelligence based region of interest enhanced video compression

Paper Details

Authors: Palanivel Guruvareddiar, Praveen Prasad
Submitted On: 30 March 2020 - 3:35pm

Document Files

DCC_AI_based_video_compression.doc

Cite

Palanivel Guruvareddiar and Praveen Prasad, "Artificial Intelligence based region of interest enhanced video compression," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5066

Non-Asymptotic Rates for Communication Efficient Distributed Zeroth Order Strongly Convex Optimization


This paper focuses on the problem of communication-efficient distributed zeroth-order minimization of a sum of strongly convex loss functions. Specifically, we develop distributed stochastic optimization methods for zeroth-order strongly convex optimization that are based on an adaptive probabilistic sparsifying communication protocol.
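The zeroth-order building block can be illustrated with the standard two-point random-directions estimator (a generic sketch, not necessarily the exact estimator or protocol of the paper): gradients are approximated from function values alone, and the resulting noisy gradients are what nodes exchange over the probabilistically sparsified links.

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-4, rng=None):
    """Two-point random-directions gradient estimate: only values of f
    are queried, never derivatives."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.normal(size=x.shape)                     # random direction
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Toy usage on a strongly convex quadratic; the true gradient at x is x.
f = lambda x: 0.5 * np.dot(x, x)
x = np.ones(4)
print(zeroth_order_grad(f, x))   # noisy estimate of the (smoothed) gradient
```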

Paper Details

Authors: Anit Kumar Sahu, Dusan Jakovetic, Dragana Bajovic, Soummya Kar
Submitted On: 27 November 2018 - 6:55pm

Document Files

globalsip_talk.pdf

Cite

Anit Kumar Sahu, Dusan Jakovetic, Dragana Bajovic and Soummya Kar, "Non-Asymptotic Rates for Communication Efficient Distributed Zeroth Order Strongly Convex Optimization," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3818

Delayed Weight Update for Faster Convergence in Data-parallel Deep Learning

Paper Details

Authors: Haruki Mori, Yuki Miyauchi, Kazuki Yamada, Shintaro Izumi, Masahiko Yoshimoto, Hiroshi Kawaguchi
Submitted On: 23 November 2018 - 3:41am

Document Files

slide_globalsip.pdf

Cite

Haruki Mori, Yuki Miyauchi, Kazuki Yamada, Shintaro Izumi, Masahiko Yoshimoto and Hiroshi Kawaguchi, "Delayed Weight Update for Faster Convergence in Data-parallel Deep Learning," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3739

FAST DECENTRALIZED LEARNING VIA HYBRID CONSENSUS ADMM


The present work introduces the hybrid consensus alternating direction method of multipliers (H-CADMM), a novel framework for optimization over networks that unifies existing distributed optimization approaches, including centralized (C-CADMM) and decentralized (D-CADMM) consensus ADMM. H-CADMM provides a flexible tool that leverages the underlying graph topology to achieve a desirable sweet spot between node-to-node communication overhead and rate of convergence, thereby alleviating known limitations of both C-CADMM and D-CADMM.
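For concreteness, here is a minimal sketch of the fully decentralized special case (D-CADMM) that H-CADMM subsumes, on a toy quadratic whose consensus optimum is the network average; the updates follow the standard simplified decentralized consensus-ADMM recursion, and the hybrid method interpolates between this and the centralized variant.

```python
import numpy as np

def dcadmm_quadratic(a, A, c=1.0, iters=300):
    """D-CADMM for min_x sum_i 0.5*||x - a_i||^2 over a connected graph
    (closed-form local updates; every node converges to mean(a)).
    a: (n, d) local data; A: (n, n) 0/1 adjacency matrix."""
    n, d = a.shape
    deg = A.sum(axis=1, keepdims=True).astype(float)  # degrees |N_i|
    x = np.zeros((n, d))
    alpha = np.zeros((n, d))                          # dual variables
    for _ in range(iters):
        # Primal step: uses own and neighbors' previous iterates.
        x = (a - alpha + c * (deg * x + A @ x)) / (1 + 2 * c * deg)
        # Dual step: driven by disagreement with neighbors.
        alpha = alpha + c * (deg * x - A @ x)
    return x

# Toy usage: 4 nodes on a ring; every node converges to mean(a).
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
a = np.arange(8.0).reshape(4, 2)
print(dcadmm_quadratic(a, A)[0], a.mean(axis=0))
```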

Paper Details

Authors: Meng Ma, Athanasios N. Nikolakopoulos, Georgios B. Giannakis
Submitted On: 13 April 2018 - 4:18pm

Document Files

H-CADMM

Cite

Meng Ma, Athanasios N. Nikolakopoulos and Georgios B. Giannakis, "FAST DECENTRALIZED LEARNING VIA HYBRID CONSENSUS ADMM," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2760

TRAINING SAMPLE SELECTION FOR DEEP LEARNING OF DISTRIBUTED DATA


The success of deep learning, in the form of multi-layer neural networks, depends critically on the volume and variety of training data. Its potential is greatly compromised when training data originate in a geographically distributed manner and are subject to bandwidth constraints. This paper presents a data sampling approach to deep learning that carefully discriminates among locally available training samples based on their relative importance.
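One plausible instantiation of "relative importance" (an assumption on our part; the paper's exact score may differ) is the current training loss: under a bandwidth budget, a node forwards only its most surprising samples to the trainer.

```python
import numpy as np

def select_by_importance(losses, budget):
    """Return indices of the `budget` locally available samples with the
    highest current loss, one plausible importance score."""
    return np.argsort(losses)[::-1][:budget]

# Toy usage: a node may upload only 3 of its 10 samples this round.
rng = np.random.default_rng(3)
losses = rng.exponential(size=10)
print(select_by_importance(losses, budget=3))
```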

Paper Details

Authors: Zheng Jiang, Xiaoqing Zhu, Wai-tian Tan, Rob Liston
Submitted On: 15 September 2017 - 3:49pm

Document Files

Poster presentation for Paper #2847

Cite

Zheng Jiang, Xiaoqing Zhu, Wai-tian Tan and Rob Liston, "TRAINING SAMPLE SELECTION FOR DEEP LEARNING OF DISTRIBUTED DATA," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2159

A Projection-free Decentralized Algorithm for Non-convex Optimization


This paper considers a decentralized projection-free algorithm for non-convex optimization in high dimension. More specifically, we propose a Decentralized Frank-Wolfe (DeFW) algorithm that is suitable when high-dimensional optimization constraints are difficult to handle by conventional projection/proximal-based gradient descent methods. We present conditions under which the DeFW algorithm converges to a stationary point and prove that the rate of convergence is as fast as ${\cal O}( 1/\sqrt{T} )$, where $T$ is the number of iterations.
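The projection-free aspect is easiest to see in the basic centralized Frank-Wolfe step, sketched below for an l1-ball constraint (an illustrative setting; DeFW additionally averages iterates and gradient estimates across the network): feasibility is maintained through a cheap linear minimization oracle rather than a projection.

```python
import numpy as np

def lmo_l1(grad, radius):
    """Linear minimization oracle over the l1 ball: pick the single
    coordinate with the largest gradient magnitude (no projection)."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_f, x0, radius, T=200):
    x = x0
    for t in range(T):
        gamma = 2.0 / (t + 2)             # standard step size
        s = lmo_l1(grad_f(x), radius)
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Toy usage: minimize ||x - b||^2 over the l1 ball of radius 1;
# the solution is the soft-thresholded point (1, 0, 0).
b = np.array([2.0, -0.5, 0.1])
print(frank_wolfe(lambda x: 2 * (x - b), np.zeros(3), 1.0).round(3))
```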

Paper Details

Authors: Anna Scaglione, Jean Lafond, Eric Moulines
Submitted On: 7 December 2016 - 11:58pm

Document Files

ncvx_globalsip16.pdf

Cite

Anna Scaglione, Jean Lafond and Eric Moulines, "A Projection-free Decentralized Algorithm for Non-convex Optimization," IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1381

Distributed Sequence Prediction: A consensus+innovations approach


This paper focuses on the problem of distributed sequence prediction in a network of sparsely interconnected agents, where agents collaborate to achieve provably reasonable predictive performance. An expert-assisted online learning algorithm in a distributed setup of the consensus+innovations form is proposed, in which the agents update their weights for the experts' predictions by simultaneously processing the latest network losses (innovations) and the cumulative losses obtained from neighboring agents (consensus).
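A minimal sketch of such a consensus+innovations weight update (illustrative, with a fully connected toy network; the paper's precise recursion and guarantees differ): each agent mixes neighbors' log-weights, which amounts to consensus on cumulative losses, then discounts each expert by its latest local loss.

```python
import numpy as np

def distributed_exp_weights(losses, W, eta=0.5):
    """losses: (T, n_agents, n_experts) per-round expert losses observed
    by each agent; W: row-stochastic mixing matrix of the network."""
    T, n, k = losses.shape
    logw = np.zeros((n, k))     # log-weights track -eta * cumulative loss
    for t in range(T):
        logw = W @ logw - eta * losses[t]   # consensus, then innovations
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

# Toy usage: 3 agents, 2 experts; expert 0 incurs smaller losses, so all
# agents' weights concentrate on it.
rng = np.random.default_rng(4)
W = np.full((3, 3), 1 / 3)
L = rng.uniform(size=(200, 3, 2))
L[:, :, 0] *= 0.2
print(distributed_exp_weights(L, W).round(3))
```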

Paper Details

Authors: Anit Kumar Sahu, Soummya Kar
Submitted On: 6 December 2016 - 3:37am

Document Files

GlobalSIP_talk.pdf

Cite

Anit Kumar Sahu and Soummya Kar, "Distributed Sequence Prediction: A consensus+innovations approach," IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1357
