ID: cs/0609045

Metric entropy in competitive on-line prediction

September 9, 2006


Similar papers (page 3)

Online Nonparametric Regression with General Loss Functions

January 26, 2015

84% Match
Alexander Rakhlin, Karthik Sridharan
Machine Learning
Information Theory

This paper establishes minimax rates for online regression with arbitrary classes of functions and general losses. We show that below a certain threshold for the complexity of the function class, the minimax rates depend on both the curvature of the loss function and the sequential complexities of the class. Above this threshold, the curvature of the loss does not affect the rates. Furthermore, for the case of square loss, our results point to an interesting phenomenon: when...
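The mechanism behind such rates can be illustrated with a toy sketch (my construction, not the paper's algorithm): cover a comparator class by a finite eps-net and run exponential weights over the net, so the metric entropy (the log-size of the cover) shows up directly in the regret. Here the class is just the constant predictors on [0, 1] and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
T, eps = 1000, 0.05
experts = np.arange(0.0, 1.0 + eps / 2, eps)  # eps-net of constant predictors on [0, 1]
eta = 0.5                  # square loss with values in [0, 1] is exp-concave for eta <= 1/2
log_w = np.zeros(len(experts))
alg_loss, expert_loss = 0.0, np.zeros(len(experts))

for t in range(T):
    y = float(np.clip(0.3 + 0.1 * rng.standard_normal(), 0.0, 1.0))  # toy outcomes
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                       # normalized weights over the net
    pred = float(w @ experts)          # weighted-mean prediction
    alg_loss += (pred - y) ** 2
    losses = (experts - y) ** 2
    expert_loss += losses
    log_w -= eta * losses              # exponential-weights update

# Regret vs. the best point of the net is O(log N / eta), and log N is the
# metric entropy of the comparator class at scale eps.
print(alg_loss - expert_loss.min(), np.log(len(experts)) / eta)
```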


Structured Prediction in Online Learning

June 18, 2024

84% Match
Pierre Boudart (DI-ENS, PSL), Alessandro Rudi (PSL, DI-ENS, Inria), Pierre Gaillard (UGA, LJK)
Machine Learning
Statistics Theory

We study a theoretical and algorithmic framework for structured prediction in the online learning setting. The problem of structured prediction, i.e., estimating a function whose output space lacks a vectorial structure, is well studied in the literature on supervised statistical learning. We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting, and achieves the same excess risk upper bound even when data are not i.i.d. Moreo...
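As a hedged illustration of the general template (estimate a conditional model online, then decode by minimizing expected structured loss over a finite output set), here is a minimal sketch; the output space, Hamming loss, and multinomial-logistic learner are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
outputs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a small structured output space
def ham(y, z):                               # Hamming loss between structures
    return sum(a != b for a, b in zip(y, z))

d, k, eta = 3, len(outputs), 0.1
W = np.zeros((k, d))                         # linear scores defining P(y | x)
total = 0

for t in range(200):
    x = rng.standard_normal(d)
    scores = W @ x
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                     # estimated conditional distribution
    # decode: pick the structure minimizing expected Hamming loss under probs
    y_hat = min(outputs, key=lambda z: sum(p * ham(y, z) for p, y in zip(probs, outputs)))
    y_true = outputs[rng.integers(k)]        # toy label stream
    total += ham(y_hat, y_true)
    grad = probs.copy()
    grad[outputs.index(y_true)] -= 1.0       # multinomial-logistic gradient
    W -= eta * np.outer(grad, x)             # online gradient step on the scores
print("cumulative structured loss:", total)
```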


Online Learning with Predictable Sequences

August 18, 2012

84% Match
Alexander Rakhlin, Karthik Sridharan
Machine Learning

We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known "predictable process", the algorithms presented enjoy tighter bounds than the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge ab...
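A minimal sketch of the optimistic idea, assuming the simplest predictable process ("the next gradient looks like the last one"); this follows the optimistic mirror descent template in the Euclidean case and is illustrative rather than the paper's exact method.

```python
import numpy as np

def project(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

rng = np.random.default_rng(2)
d, T, eta = 5, 500, 0.05
g, grad_prev = np.zeros(d), np.zeros(d)
loss = 0.0

for t in range(T):
    M = grad_prev                     # predictable guess: next gradient ~ last gradient
    w = project(g - eta * M)          # optimistic play using the guess
    grad = np.ones(d) + 0.1 * rng.standard_normal(d)  # slowly varying linear losses
    loss += grad @ w
    g = project(g - eta * grad)       # lazy update with the true gradient
    grad_prev = grad
print("cumulative loss:", loss)
```

When the guesses M are accurate, the regret scales with the prediction errors rather than with the raw gradient norms; with M = 0 the scheme reduces to ordinary projected gradient descent and its worst-case bound.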


A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret

August 15, 2015

84% Match
Lachlan L. H. Andrew, Siddharth Barman, Katrina Ligett, Minghong Lin, Adam Meyerson, ..., Adam Wierman
Data Structures and Algorithms

We consider algorithms for "smoothed online convex optimization" problems, a variant of the class of online convex optimization problems that is strongly related to metrical task systems. Prior literature on these problems has focused on two performance metrics: regret and the competitive ratio. There exist known algorithms with sublinear regret and known algorithms with constant competitive ratios; however, no known algorithm achieves both simultaneously. We show that this i...
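The two metrics can be made concrete on a toy one-dimensional instance (illustrative, not the paper's algorithm): each round pays a hitting cost plus a movement cost; regret compares against the best fixed point, while the competitive ratio compares against a moving comparator.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
mins = np.cumsum(0.1 * rng.standard_normal(T))   # drifting minimizers of f_t(x) = (x - m_t)^2

def total_cost(xs):
    hit = np.sum((xs - mins) ** 2)                             # hitting costs
    move = np.abs(np.diff(np.concatenate(([0.0], xs)))).sum()  # movement costs
    return hit + move

x, alg = 0.0, np.empty(T)
for t in range(T):
    x += 0.5 * (mins[t] - x)        # damped move toward the revealed minimizer
    alg[t] = x

static = min(total_cost(np.full(T, v)) for v in np.linspace(-3, 3, 121))
dynamic = total_cost(mins)          # a crude moving comparator: sit on each minimizer
print("regret vs best fixed point:", total_cost(alg) - static)
print("cost ratio vs moving comparator:", total_cost(alg) / dynamic)
```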


Online convex optimization and no-regret learning: Algorithms, guarantees and applications

April 12, 2018

84% Match
E. Veronica Belmega, Panayotis Mertikopoulos, ..., Luca Sanguinetti
Machine Learning
Information Theory
Optimization and Control

Spurred by the enthusiasm surrounding the "Big Data" paradigm, the mathematical and algorithmic tools of online optimization have found widespread use in problems where the trade-off between data exploration and exploitation plays a predominant role. This trade-off is of particular importance to several branches and applications of signal processing, such as data mining, statistical inference, multimedia indexing and wireless communications (to name but a few). With this in m...
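The basic object such surveys study is easy to sketch: projected online gradient descent on a stream of convex losses, with a decaying step size giving regret on the order of sqrt(T) against the best fixed action. The quadratic losses and ball constraint below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(4)
d, T = 4, 1000
targets = 1.0 + 0.2 * rng.standard_normal((T, d))   # losses f_t(x) = ||x - a_t||^2 / 2
x, alg_loss = np.zeros(d), 0.0

for t in range(T):
    a = targets[t]
    alg_loss += 0.5 * np.sum((x - a) ** 2)
    x -= (1.0 / np.sqrt(t + 1)) * (x - a)   # gradient step with decaying step size
    n = np.linalg.norm(x)
    if n > 2.0:
        x *= 2.0 / n                        # projection back onto a ball

best = targets.mean(axis=0)                 # best fixed action for quadratic losses
best_loss = 0.5 * np.sum((targets - best) ** 2)
print("regret:", alg_loss - best_loss, " sqrt(T):", np.sqrt(T))
```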


Metric Entropy and the Optimal Prediction of Chaotic Signals

February 15, 2011

84% Match
Divakar Viswanath, Xuan Liang, Kirill Serkh
Chaotic Dynamics

Suppose we are given a time series or a signal $x(t)$ for $0\leq t\leq T$. We consider the problem of predicting the signal in the interval $T<t\leq T+t_{f}$ from a knowledge of its history and nothing more. We ask the following question: what is the largest value of $t_{f}$ for which a prediction can be made? We show that the answer to this question is contained in a fundamental result of information theory due to Wyner, Ziv, Ornstein, and Weiss. In particular, for the class...
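To make the setup concrete, here is a standard nearest-neighbour "analog" forecaster on a chaotic signal (the logistic map): find the past window most similar to the recent one and replay what followed it. This is a common baseline for prediction from history alone, used here purely as an illustration and not as the paper's analysis.

```python
import numpy as np

r, m, h = 3.9, 8, 5                      # map parameter, match window, forecast horizon
x = np.empty(2000)
x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = r * x[i] * (1 - x[i])     # logistic map: a simple chaotic signal

T0 = 1900                                # history is x[0..T0); predict the next h values
train, recent = x[:T0], x[T0 - m:T0]
wins = np.lib.stride_tricks.sliding_window_view(train, m)[: T0 - m - h + 1]
j = int(np.argmin(np.linalg.norm(wins - recent, axis=1)))  # closest historical analog
forecast = train[j + m : j + m + h]      # replay what followed the analog
print(np.abs(forecast - x[T0:T0 + h]))   # error grows quickly with the horizon
```

The horizon at which this kind of forecast breaks down is exactly the quantity the abstract asks about, and it is governed by the signal's metric entropy.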


Mixing predictions for online metric algorithms

April 4, 2023

83% Match
Antonios Antoniadis, Christian Coester, Marek Eliáš, ..., Bertrand Simon
Machine Learning
Data Structures and Algorithms

A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical tas...
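A hedged sketch of the combining idea: follow one of k predictors at a time, reweight them by observed cost, and pay a switching cost whenever the followed predictor changes, so the natural benchmark is a dynamic combination. The weighting scheme and toy costs below are my assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
k, T, eta, switch_cost = 3, 400, 0.3, 1.0
log_w = np.zeros(k)        # multiplicative weights over the k predictors
current, total = 0, 0.0

for t in range(T):
    costs = rng.random(k)
    costs[(t // 100) % k] *= 0.1        # each predictor has its own "good phase"
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    leader = int(np.argmax(w))
    if leader != current:               # switch followed predictor, pay movement
        total += switch_cost
        current = leader
    total += costs[current]             # pay the followed predictor's service cost
    log_w -= eta * costs                # reweight by observed costs
print("combiner cost:", total)
```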


Transitions, Losses, and Re-parameterizations: Elements of Prediction Games

May 20, 2018

83% Match
Parameswaran Kamalaruban
Machine Learning

This thesis presents some geometric insights into three different types of two-player prediction games -- namely the general learning task, prediction with expert advice, and online convex optimization. These games differ in the nature of the opponent (stochastic, adversarial, or intermediate), the order of the players' moves, and the utility function. The insights shed light on the intrinsic barriers of these prediction problems and on the design of computati...


Universal Online Optimization in Dynamic Environments via Uniclass Prediction

February 13, 2023

83% Match
Arnold Salas
Machine Learning

Recently, several universal methods have been proposed for online convex optimization which can handle convex, strongly convex and exponentially concave cost functions simultaneously. However, most of these algorithms have been designed with static regret minimization in mind, but this notion of regret may not be suitable for changing environments. To address this shortcoming, we propose a novel and intuitive framework for universal online optimization in dynamic environments...
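Universal methods of this kind are typically built as a two-layer construction: several base learners with different tunings, combined by an expert-style meta-learner. The sketch below follows that generic template; the step-size grid, quadratic losses, and meta learning rate are illustrative assumptions, not this paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)
d, T, beta = 3, 500, 0.5
etas = [0.01, 0.05, 0.2, 1.0]            # grid of candidate step sizes
xs = [np.zeros(d) for _ in etas]         # one base OGD learner per step size
log_w = np.zeros(len(etas))              # meta-learner weights over base learners
meta_loss = 0.0

for t in range(T):
    a = np.sin(t / 50.0) + 0.1 * rng.standard_normal(d)  # drifting quadratic target
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    play = sum(wi * xi for wi, xi in zip(w, xs))          # meta-combined decision
    meta_loss += 0.5 * np.sum((play - a) ** 2)
    base = np.array([0.5 * np.sum((xi - a) ** 2) for xi in xs])
    log_w -= beta * base                                  # reweight the base learners
    for i, eta in enumerate(etas):
        xs[i] = xs[i] - eta * (xs[i] - a)                 # each base learner's OGD step
print("meta loss:", meta_loss)
```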


Parameter-free online learning via model selection

December 30, 2017

83% Match
Dylan J. Foster, Satyen Kale, ..., Karthik Sridharan
Machine Learning

We introduce an efficient algorithmic framework for model selection in online learning, also known as parameter-free online learning. Departing from previous work, which has focused on highly structured function classes such as nested balls in Hilbert space, we propose a generic meta-algorithm framework that achieves online model selection oracle inequalities under minimal structural assumptions. We give the first computationally efficient parameter-free algorithms that work ...
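One well-known route to parameter-freeness, sketched here for intuition, is coin betting with the Krichevsky-Trofimov bettor; the paper's framework is model selection over function classes, so this block is context for the term "parameter-free" rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 1000
wealth, coin_sum = 1.0, 0.0

for t in range(1, T + 1):
    beta = coin_sum / t                  # KT betting fraction, always in (-1, 1)
    bet = beta * wealth                  # signed bet; doubles as the prediction
    coin = 1.0 if rng.random() < 0.7 else -1.0   # a slightly biased coin
    wealth += coin * bet                 # wealth update; stays positive
    coin_sum += coin

# Exponential wealth growth on a biased coin translates, by duality, into
# regret bounds that require no step-size tuning at all.
print("final wealth:", wealth)
```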
