ID: cs/0601089

Distributed Kernel Regression: An Algorithm for Training Collaboratively

January 20, 2006


Similar papers 4

Max-Diversity Distributed Learning: Theory and Algorithms

December 19, 2018

89% Match
Yong Liu, Jian Li, Weiping Wang
Machine Learning

We study the risk performance of distributed learning for regularized empirical risk minimization with a fast convergence rate, substantially improving the error analysis of existing divide-and-conquer based distributed learning. An interesting theoretical finding is that the larger the diversity of each local estimate is, the tighter the risk bound is. This theoretical analysis motivates us to devise an effective max-diversity distributed learning algorithm (MDD). Ex...
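As context, a minimal sketch of the divide-and-conquer kernel ridge regression baseline that this kind of error analysis refines: partition the data across m workers, fit a local estimator on each partition, and average the local predictions. The Gaussian kernel, the regularization value, and all function names are illustrative assumptions, not the paper's MDD algorithm.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Pairwise Gaussian (RBF) kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_krr_fit(X, y, lam=1e-2, gamma=1.0):
    """Fit kernel ridge regression on one partition; return the dual coefficients."""
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return X, alpha

def dc_krr_predict(models, X_test, gamma=1.0):
    """Divide-and-conquer prediction: uniform average of the local estimates."""
    preds = [gaussian_kernel(X_test, Xb, gamma) @ ab for Xb, ab in models]
    return np.mean(preds, axis=0)

# Toy usage: split n samples across m workers, fit locally, average predictions.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(600)
m = 4
models = [local_krr_fit(Xb, yb) for Xb, yb in zip(np.array_split(X, m), np.array_split(y, m))]
y_hat = dc_krr_predict(models, X[:5])
```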


Communication-Efficient Parallel Block Minimization for Kernel Machines

August 5, 2016

89% Match
Cho-Jui Hsieh, Si Si, Inderjit S. Dhillon
Machine Learning

Kernel machines often yield superior predictive performance on various tasks; however, they suffer from severe computational challenges. In this paper, we show how to overcome this computational challenge by speeding up kernel machines. In particular, we develop a parallel block minimization framework for solving kernel machines, including kernel SVM and kernel logistic regression. Our framework proceeds by dividing the problem into smaller subproblems by forming a block-diagonal ...
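A hedged sketch of the block-minimization idea, applied to the kernel ridge regression dual (0.5 * a^T (K + lam*I) a - a^T y) rather than the SVM objective treated in the paper: samples are split into blocks and each step re-solves one block's subproblem with the other blocks held fixed. The paper's framework runs such updates in parallel with communication; this serial toy only shows the block structure, and all names and parameter values are illustrative.

```python
import numpy as np

def block_min_kernel_ridge(K, y, lam=1e-2, n_blocks=4, n_epochs=10):
    """Block coordinate minimization of the kernel ridge dual
       0.5 * a^T (K + lam*I) a - a^T y, one sample block at a time."""
    n = len(y)
    alpha = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(n_epochs):
        for b in blocks:
            rest = np.setdiff1d(np.arange(n), b)
            # Residual target for this block, holding the other blocks fixed.
            r = y[b] - K[np.ix_(b, rest)] @ alpha[rest]
            alpha[b] = np.linalg.solve(K[np.ix_(b, b)] + lam * np.eye(len(b)), r)
    return alpha

# Toy usage with a Gaussian kernel matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
alpha = block_min_kernel_ridge(K, y)
```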


A Decentralized Framework for Kernel PCA with Projection Consensus Constraints

November 29, 2022

89% Match
Fan He, Ruikai Yang, ... , Xiaolin Huang
Distributed, Parallel, and Cluster Computing

This paper studies kernel PCA in a decentralized setting, where data are observed in a distributed fashion, with full features at local nodes, and a fusion center is prohibited. Compared with linear PCA, the use of kernels brings challenges to the design of decentralized consensus optimization: the local projection directions are data-dependent. As a result, the consensus constraint in distributed linear PCA is no longer valid. To overcome this problem, we propose a projection consensus c...
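To make concrete why the linear-PCA consensus constraint breaks down, here is a minimal sketch of the kernel PCA step a single node would run on its local data: the principal directions exist only as coefficient expansions over that node's own samples, so directions computed at different nodes are not directly comparable vectors. The Gaussian kernel and the centering are standard kernel PCA choices, not anything specific to the paper.

```python
import numpy as np

def local_kernel_pca(X, n_components=2, gamma=1.0):
    """Standard kernel PCA on one node's data.  The returned 'directions' are
       coefficient vectors over the local samples, which is why nodes cannot
       simply average them as in distributed linear PCA."""
    n = len(X)
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    J = np.eye(n) - np.ones((n, n)) / n          # centering in feature space
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    # Scale eigenvectors so that projections have the usual kernel-PCA normalization.
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return alphas, X

# Toy usage on one node's local samples.
rng = np.random.default_rng(9)
X_local = rng.standard_normal((80, 4))
alphas, basis = local_kernel_pca(X_local)
```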


Collaboratively Learning Linear Models with Structured Missing Data

July 22, 2023

89% Match
Chen Cheng, Gary Cheng, John Duchi
Machine Learning
Distributed, Parallel, and Cluster Computing

We study the problem of collaboratively learning least squares estimates for $m$ agents. Each agent observes a different subset of the features, e.g., data collected from sensors of varying resolution. Our goal is to determine how to coordinate the agents in order to produce the best estimator for each agent. We propose a distributed, semi-supervised algorithm Collab, consisting of three steps: local training, aggregation, and distribution. Our proce...
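The following is only a schematic of the local-training / aggregation / distribution pattern named above, under an assumption added for illustration: each agent fits ordinary least squares on its observed feature subset, a coordinator averages coefficients coordinate-wise over the agents that observe each feature, and the result is sent back. It is not the paper's Collab estimator, and every function name is hypothetical.

```python
import numpy as np

def local_train(X_obs, y):
    """Step 1: each agent fits least squares on the features it observes."""
    w, *_ = np.linalg.lstsq(X_obs, y, rcond=None)
    return w

def aggregate(local_ws, feature_sets, d):
    """Step 2: coordinator averages coefficients per feature, over the agents
       that actually observe that feature."""
    sums, counts = np.zeros(d), np.zeros(d)
    for w, feats in zip(local_ws, feature_sets):
        sums[feats] += w
        counts[feats] += 1
    return np.divide(sums, np.maximum(counts, 1))

def distribute(w_global, feature_sets):
    """Step 3: each agent receives the coordinates it can use."""
    return [w_global[feats] for feats in feature_sets]

# Toy usage: 3 agents, 5 features, each agent sees a different subset.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
feature_sets = [np.array([0, 1, 2]), np.array([1, 2, 3]), np.array([0, 3, 4])]
data = []
for feats in feature_sets:
    X = rng.standard_normal((100, 5))
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    data.append((X[:, feats], y))
local_ws = [local_train(Xo, y) for Xo, y in data]
w_global = aggregate(local_ws, feature_sets, d=5)
per_agent = distribute(w_global, feature_sets)
```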


Greedy Sparsity-Promoting Algorithms for Distributed Learning

October 14, 2014

89% Match
Symeon Chouvardas, Gerasimos Mileounis, ... , Sergios Theodoridis
Information Theory

This paper focuses on the development of novel greedy techniques for distributed learning under sparsity constraints. Greedy techniques have been widely used in centralized systems due to their low computational requirements and their relatively good performance in estimating sparse parameter vectors/signals. The paper reports two new algorithms in the context of sparsity-aware learning. In both cases, the goal is first to identify the support set of the unk...
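For orientation, a minimal sketch of the centralized greedy primitive such schemes build on, Orthogonal Matching Pursuit: each step adds the column most correlated with the residual to the support set and refits by least squares on that support. The distributed algorithms in the paper cooperate over a network; this sketch shows only support identification and is not their algorithm.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily estimate a k-sparse x with y ~ A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Refit by least squares on the current support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = x_s
        residual = y - A @ x
    return x, sorted(support)

# Toy usage: recover a 3-sparse vector from 40 noisy measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 80]] = [1.5, -2.0, 0.7]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat, supp = omp(A, y, k=3)
```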


Distributed soft thresholding for sparse signal recovery

January 10, 2013

89% Match
Chiara Ravazzi, Sophie M. Fosson, Enrico Magli
Information Theory
Distributed, Parallel, and Cluster Computing
Optimization and Control

In this paper, we address the problem of distributed sparse recovery of signals acquired via compressed measurements in a sensor network. We propose a new class of distributed algorithms to solve Lasso regression problems when communication to a fusion center is not possible, e.g., due to communication cost or privacy reasons. More precisely, we introduce a distributed iterative soft thresholding algorithm (DISTA) that consists of three steps: an averaging step, a gradie...
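A hedged sketch of a consensus-plus-soft-thresholding iteration of the kind described: each node averages its neighbors' estimates through a doubly stochastic mixing matrix, takes a gradient step on its local least-squares term, and applies soft thresholding. The step size, mixing matrix, and update order here are illustrative choices, not necessarily the exact DISTA recursion.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_ista(A_list, y_list, W, lam=0.1, step=None, n_iter=300):
    """Each node i holds (A_i, y_i); W is a doubly stochastic mixing matrix.
       Iterate: consensus averaging -> local gradient step -> soft thresholding."""
    n_nodes, d = len(A_list), A_list[0].shape[1]
    if step is None:  # conservative common step size (Frobenius norm bound)
        step = 1.0 / max(np.linalg.norm(A) ** 2 for A in A_list)
    X = np.zeros((n_nodes, d))
    for _ in range(n_iter):
        X_avg = W @ X                                          # averaging step
        grads = np.stack([A.T @ (A @ x - y)                    # local gradient step
                          for A, y, x in zip(A_list, y_list, X_avg)])
        X = soft_threshold(X_avg - step * grads, step * lam)   # thresholding step
    return X

# Toy usage: 4 nodes on a ring, shared 2-sparse signal.
rng = np.random.default_rng(4)
x_true = np.zeros(30)
x_true[[3, 20]] = [1.0, -1.5]
A_list = [rng.standard_normal((15, 30)) for _ in range(4)]
y_list = [A @ x_true + 0.01 * rng.standard_normal(15) for A in A_list]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X_hat = distributed_ista(A_list, y_list, W)
```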


Lightweight Distributed Gaussian Process Regression for Online Machine Learning

May 11, 2021

89% Match
Zhenyuan Yuan, Minghui Zhu
Machine Learning
Multiagent Systems

In this paper, we study the problem where a group of agents aim to collaboratively learn a common static latent function through streaming data. We propose a lightweight distributed Gaussian process regression (GPR) algorithm that is cognizant of agents' limited capabilities in communication, computation and memory. Each agent independently runs agent-based GPR using local streaming data to predict test points of interest; then the agents collaboratively execute distributed G...
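As a loose illustration of the local-prediction-then-fusion structure (not the paper's algorithm), the sketch below has each agent run standard GP regression on its own data and then fuses the agents' predictive means by inverse-variance weighting, a common product-of-experts style rule. The kernel, noise level, and fusion rule are assumptions made for the example.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    return np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ell**2)

def local_gpr_predict(X, y, X_test, noise=0.1):
    """Standard GP regression on one agent's local data."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf(X_test, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(X_test, X_test).diagonal() - (v**2).sum(axis=0) + noise**2
    return mean, var

def fuse_predictions(means, variances):
    """Inverse-variance (precision) weighted fusion of the agents' predictions."""
    precisions = 1.0 / np.stack(variances)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mean = fused_var * (precisions * np.stack(means)).sum(axis=0)
    return fused_mean, fused_var

# Toy usage: 3 agents, each with its own chunk of noisy samples of sin(x).
rng = np.random.default_rng(5)
X_test = np.linspace(-3, 3, 50)[:, None]
locals_ = []
for _ in range(3):
    X = rng.uniform(-3, 3, size=(25, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(25)
    locals_.append(local_gpr_predict(X, y, X_test))
mu, var = fuse_predictions([m for m, _ in locals_], [v for _, v in locals_])
```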


A Diffusion Kernel LMS algorithm for nonlinear adaptive networks

February 7, 2016

89% Match
Symeon Chouvardas, Moez Draief
Information Theory
Systems and Control

This work presents a distributed algorithm for nonlinear adaptive learning. In particular, a set of nodes obtain measurements, sequentially one per time step, which are related via a nonlinear function; their goal is to collectively minimize a cost function by employing a diffusion-based Kernel Least Mean Squares (KLMS) scheme. The algorithm follows the Adapt-Then-Combine mode of cooperation. Moreover, the theoretical properties of the algorithm are studied and it is proved that und...
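A hedged sketch of the Adapt-Then-Combine pattern for a kernel LMS filter, under a simplification not taken from the paper: all nodes share a fixed dictionary of kernel centers, so their coefficient vectors live in a common space and can be combined by neighborhood averaging. Adapt: each node takes an LMS step on its own new sample; Combine: each node averages its neighborhood's intermediate estimates. All names and parameter values are illustrative.

```python
import numpy as np

def kernel_features(x, centers, gamma=1.0):
    """Gaussian-kernel features of an input w.r.t. a fixed, shared dictionary."""
    return np.exp(-gamma * ((centers - x) ** 2).sum(axis=1))

def atc_diffusion_klms(streams, centers, C, mu=0.2, gamma=1.0):
    """streams[i] is a list of (x_t, d_t) pairs for node i; C is a row-stochastic
       combination matrix over the network. Returns per-node coefficient vectors."""
    n_nodes, D = len(streams), len(centers)
    theta = np.zeros((n_nodes, D))
    for t in range(len(streams[0])):
        # Adapt: local kernel-LMS step at every node on its own sample.
        psi = np.empty_like(theta)
        for i, stream in enumerate(streams):
            x_t, d_t = stream[t]
            phi = kernel_features(x_t, centers, gamma)
            err = d_t - theta[i] @ phi
            psi[i] = theta[i] + mu * err * phi
        # Combine: every node averages its neighborhood's intermediate estimates.
        theta = C @ psi
    return theta

# Toy usage: 3 nodes observing y = sin(x) + noise, fully connected weights.
rng = np.random.default_rng(6)
centers = np.linspace(-3, 3, 25)[:, None]
streams = [[(rng.uniform(-3, 3, size=1), None) for _ in range(400)] for _ in range(3)]
streams = [[(x, np.sin(x[0]) + 0.05 * rng.standard_normal()) for x, _ in s] for s in streams]
C = np.array([[0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]])
theta = atc_diffusion_klms(streams, centers, C)
```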


Communication Efficient Distributed Optimization using an Approximate Newton-type Method

December 30, 2013

89% Match
Ohad Shamir, Nathan Srebro, Tong Zhang
Machine Learning
Optimization and Control

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as on...
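For the quadratic case highlighted above, a minimal sketch of one reading of an approximate Newton-type round: every machine preconditions the exact global gradient with its own local Hessian plus a damping term, and the resulting steps are averaged. The ridge objective, damping value, and function names are assumptions for the example, not the paper's exact update.

```python
import numpy as np

def approx_newton_ridge(parts, lam=0.1, mu=0.1, n_rounds=10):
    """parts is a list of (X_i, y_i) held by the machines.  Each round:
       (1) compute the exact global gradient of the ridge objective at w,
       (2) each machine takes a Newton-like step using its *local* Hessian,
       (3) average the machines' steps."""
    d = parts[0][0].shape[1]
    w = np.zeros(d)
    H_loc = [X.T @ X / len(y) + lam * np.eye(d) for X, y in parts]
    c_loc = [X.T @ y / len(y) for X, y in parts]
    for _ in range(n_rounds):
        grad = np.mean([H @ w - c for H, c in zip(H_loc, c_loc)], axis=0)
        steps = [np.linalg.solve(H + mu * np.eye(d), grad) for H in H_loc]
        w = w - np.mean(steps, axis=0)
    return w

# Toy usage: 4 machines, shared linear model.
rng = np.random.default_rng(7)
w_true = rng.standard_normal(10)
parts = []
for _ in range(4):
    X = rng.standard_normal((200, 10))
    parts.append((X, X @ w_true + 0.05 * rng.standard_normal(200)))
w_hat = approx_newton_ridge(parts)
```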


Distributed Low-Rank Estimation Based on Joint Iterative Optimization in Wireless Sensor Networks

November 5, 2014

88% Match
S. Xu, R. C. de Lamare, H. V. Poor
Information Theory
Machine Learning

This paper proposes a novel distributed reduced-rank scheme and an adaptive algorithm for distributed estimation in wireless sensor networks. The proposed distributed scheme is based on a transformation that performs dimensionality reduction at each agent of the network, followed by the estimation of a reduced-dimension parameter vector. A distributed reduced-rank joint iterative estimation algorithm is developed, which has the ability to achieve significantly reduced communication overhead an...
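A hedged sketch of the generic reduced-rank pattern only (not the paper's joint iterative optimization): each agent projects its high-dimensional regressor through a dimensionality-reducing transformation, here a fixed random projection shared by all agents as a stand-in for a learned one, adapts a short parameter vector by LMS on its local measurement, and then combines with its neighbors. All names are hypothetical.

```python
import numpy as np

def reduced_rank_diffusion_lms(regressors, measurements, S, C, mu=0.05):
    """regressors[i][t]: agent i's length-M regressor at time t;
       measurements[i][t]: the scalar it observes; S: M x D reduction matrix
       shared by the agents; C: row-stochastic combination matrix."""
    n_agents, D = len(regressors), S.shape[1]
    w = np.zeros((n_agents, D))               # reduced-dimension parameters
    for t in range(len(regressors[0])):
        psi = np.empty_like(w)
        for i in range(n_agents):
            u_red = regressors[i][t] @ S      # dimensionality reduction
            err = measurements[i][t] - w[i] @ u_red
            psi[i] = w[i] + mu * err * u_red  # local (reduced-rank) LMS adapt
        w = C @ psi                           # combine with neighbors
    return w

# Toy usage: 3 agents, 40-dim regressors reduced to 5 dims.
rng = np.random.default_rng(8)
M, D, T = 40, 5, 500
S = rng.standard_normal((M, D)) / np.sqrt(M)
w_lowdim = rng.standard_normal(D)
regressors = [rng.standard_normal((T, M)) for _ in range(3)]
measurements = [(R @ S) @ w_lowdim + 0.05 * rng.standard_normal(T) for R in regressors]
C = np.full((3, 3), 1.0 / 3.0)
w_hat = reduced_rank_diffusion_lms(regressors, measurements, S, C)
```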
