ID: 1504.06240

A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences

April 23, 2015

Similar papers 4

Algorithmic statistics: forty years later

July 27, 2016

84% Match
Nikolai Vereshchagin, Alexander Shen
Computational Complexity
Information Theory
Statistics Theory

Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. After this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If yes, how often do these bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes...

Algorithmic identification of probabilities is hard

May 20, 2014

84% Match
Laurent Bienvenu, Santiago Figueira, ... , Alexander Shen
Logic

Suppose that we are given an infinite binary sequence which is random for a Bernoulli measure of parameter $p$. By the law of large numbers, the frequency of zeros in the sequence tends to~$p$, and thus we can get better and better approximations of $p$ as we read the sequence. We study in this paper a similar question, but from the viewpoint of inductive inference. We suppose now that $p$ is a computable real, but one asks for more: as we are reading more and more bits of ou...
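The law-of-large-numbers estimation step described in this abstract can be sketched in a few lines; the parameter value and names below are illustrative only, and this shows just the frequency estimator, not the stronger inductive-inference question the paper actually studies:

```python
import random

def estimate_p(bits):
    # Frequency of ones in a finite prefix; tends to p by the law of large numbers.
    return sum(bits) / len(bits)

random.seed(0)
p_true = 0.3  # hypothetical Bernoulli parameter
sequence = [1 if random.random() < p_true else 0 for _ in range(100_000)]

# Reading more bits gives better and better approximations of p.
for n in (100, 10_000, 100_000):
    print(n, estimate_p(sequence[:n]))
```

The paper asks for more than this: whether a single algorithm can output the exact computable real $p$ in the limit, which convergence of frequencies alone does not provide.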

Prediction with Restricted Resources and Finite Automata

December 10, 2008

84% Match
Finn Macleod, James Gleeson
Machine Learning

We obtain an index of the complexity of a random sequence by allowing the role of the measure in classical probability theory to be played by a function we call the generating mechanism. Typically, this generating mechanism will be a finite automaton. We generate a set of biased sequences by applying a finite-state automaton with a specified number, $m$, of states to the set of all binary sequences. Thus we can index the complexity of our random sequence by the number of states...
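As a toy illustration of such a generating mechanism (the transition table below is my own two-state example, not taken from the paper), a finite-state transducer mapping binary sequences to biased binary sequences:

```python
# A tiny Mealy machine with m = 2 states: it copies its input bit except
# that after emitting a 1 it forces the next output to 0, biasing the
# output sequence toward zeros.
TRANSITIONS = {
    # (state, input) -> (next_state, output)
    (0, 0): (0, 0),
    (0, 1): (1, 1),
    (1, 0): (0, 0),
    (1, 1): (1, 0),
}

def apply_automaton(bits, start=0):
    state, out = start, []
    for b in bits:
        state, o = TRANSITIONS[(state, b)]
        out.append(o)
    return out

print(apply_automaton([1, 1, 1, 0, 1, 1]))  # → [1, 0, 0, 0, 1, 0]
```

Varying the number of states $m$ varies how much structure the mechanism can impose, which is the complexity index the abstract refers to.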

Theory and Applications of Probabilistic Kolmogorov Complexity

May 29, 2022

84% Match
Zhenjian Lu, Igor C. Oliveira
Computational Complexity
Information Theory

Diverse applications of Kolmogorov complexity to learning [CIKK16], circuit complexity [OPS19], cryptography [LP20], average-case complexity [Hir21], and proof search [Kra22] have been discovered in recent years. Since the running time of algorithms is a key resource in these fields, it is crucial in the corresponding arguments to consider time-bounded variants of Kolmogorov complexity. While fruitful interactions between time-bounded Kolmogorov complexity and different areas...

A Short Introduction to Kolmogorov Complexity

May 13, 2010

84% Match
Volker Nannen
Computational Complexity

This is a short introduction to Kolmogorov Complexity. The interested reader is referred to the textbooks by Cover & Thomas as well as Li & Vitányi, which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor.

Low complexity, low probability patterns and consequences for algorithmic probability applications

July 6, 2022

84% Match
Mohamed Alaskandarani, Kamaludin Dingle
Computational Complexity
Information Theory
Probability

Developing new ways to estimate probabilities can be valuable for science, statistics, and engineering. By considering the information content of different output patterns, recent work invoking algorithmic information theory has shown that a priori probability predictions based on pattern complexities can be made in a broad class of input-output maps. These algorithmic probability predictions do not depend on a detailed knowledge of how output patterns were produced, or histo...
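A common way to operationalize such complexity-based probability predictions (a loose sketch, not the authors' exact procedure) is to upper-bound a pattern's complexity with a general-purpose compressor and read off $\log_2 P(x) \approx -K(x)$:

```python
import random
import zlib

def complexity_bits(pattern: str) -> int:
    # Compressed length as a crude upper bound on algorithmic complexity.
    return 8 * len(zlib.compress(pattern.encode()))

def predicted_log2_prob(pattern: str) -> float:
    # Simplicity bias: log2 P(x) is roughly -K(x). Only relative
    # comparisons are meaningful, since zlib adds fixed overhead.
    return -float(complexity_bits(pattern))

random.seed(1)
simple = "01" * 100
noisy = "".join(random.choice("01") for _ in range(200))

# The low-complexity pattern is predicted to be far more probable a priori.
print(predicted_log2_prob(simple), predicted_log2_prob(noisy))
```

Note that such predictions use no knowledge of how the outputs were produced, which is exactly the point the abstract makes.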

Algorithmic Computability and Approximability of Capacity-Achieving Input Distributions

February 25, 2022

84% Match
Holger Boche, Rafael F. Schaefer, H. Vincent Poor
Information Theory

The capacity of a channel can usually be characterized as a maximization of certain entropic quantities. From a practical point of view it is of primary interest to not only compute the capacity value, but also to find the corresponding optimizer, i.e., the capacity-achieving input distribution. This paper addresses the general question of whether or not it is possible to find algorithms that can compute the optimal input distribution depending on the channel. For this purpos...
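The abstract does not name a specific algorithm, but the classical iterative method for computing a capacity-achieving input distribution of a discrete memoryless channel is Blahut-Arimoto; a minimal NumPy sketch:

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Blahut-Arimoto iteration for a discrete memoryless channel.

    W[x, y] = P(y | x). Returns (capacity in bits, input distribution)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)  # start from the uniform input
    for _ in range(iters):
        q = p @ W  # induced output distribution
        # D(W(.|x) || q) for each input symbol x, in bits
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
        p = p * np.exp2(d)
        p /= p.sum()
    q = p @ W
    d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
    return float(p @ d), p

# Binary symmetric channel with crossover 0.1: capacity is 1 - H(0.1),
# about 0.531 bits, achieved by the uniform input distribution.
W = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p_star = blahut_arimoto(W)
print(C, p_star)
```

The paper's question is precisely whether such computation is possible in general, on a Turing machine, as a function of the channel; the sketch above only covers the finite, well-behaved case.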

Algorithmic statistics revisited

April 20, 2015

84% Match
Nikolay Vereshchagin, Alexander Shen
Information Theory

The mission of statistics is to provide adequate statistical hypotheses (models) for observed data. But what is an "adequate" model? To answer this question, one needs to use the notions of algorithmic information theory. It turns out that for every data string $x$ one can naturally define "stochasticity profile", a curve that represents a trade-off between complexity of a model and its adequacy. This curve has four different equivalent definitions in terms of (1)~randomness ...

Algorithmic Data Analytics, Small Data Matters and Correlation versus Causation

September 5, 2013

84% Match
Hector Zenil
Computational Engineering, F...
Computational Complexity
Information Theory

This is a review of aspects of the theory of algorithmic information that may contribute to a framework for formulating questions related to complex, highly unpredictable systems. We start by contrasting Shannon Entropy and Kolmogorov-Chaitin complexity, epitomizing the difference between correlation and causation, and then move on to surveying classical results from algorithmic complexity and algorithmic probability, highlighting their deep connection to the study of automata fre...

Algorithmic information theory: Some recollections

January 5, 2007

84% Match
G. J. Chaitin (IBM Research)
History and Overview

Presents a history of the evolution of the author's ideas on program-size complexity and its applications to metamathematics over the course of more than four decades. Includes suggestions for further work.
