ID: 1504.06240

A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences

April 23, 2015

Similar papers 2

On the Distribution Function of the Complexity of Finite Sequences

September 8, 2000

86% Match
Janusz Szczepanski
Probability
Combinatorics

Investigations of complexity of sequences lead to important applications such as effective data compression, testing of randomness, discriminating between information sources and many others. In this paper we establish formulas describing the distribution functions of random variables representing the complexity of finite sequences introduced by Lempel and Ziv in 1976. We show that the distribution functions depend in an affine way on the probabilities of the so called "exact...
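The Lempel-Ziv 1976 measure referenced in this abstract counts the phrases in a sequence's exhaustive history: each new phrase is extended as long as it still occurs as a substring of the text already seen. A minimal sketch of that phrase count (the name `lz76` is illustrative, not from the paper):

```python
def lz76(s: str) -> int:
    """Count phrases in the Lempel-Ziv (1976) exhaustive history of s.

    Each phrase s[i:j] is grown while it still appears as a substring of
    the prefix s[:j-1]; the first extension that is genuinely new closes
    the phrase. The final phrase is allowed to be a repeat, per the
    standard convention.
    """
    n = len(s)
    phrases = 0
    i = 0
    while i < n:
        j = i + 1
        # extend the candidate phrase while it has been seen before
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases

# The classic example from Lempel and Ziv's 1976 paper parses
# 0001101001000101 into 0.001.10.100.1000.101, i.e. 6 phrases:
print(lz76("0001101001000101"))  # → 6
```

A constant sequence such as `"0" * 16` yields only 2 phrases, illustrating how the count separates regular from complex strings.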

Sequential Predictions based on Algorithmic Complexity

August 5, 2005

86% Match
Marcus Hutter
Information Theory
Machine Learning

This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is kno...

Computable randomness is about more than probabilities

May 1, 2020

86% Match
Floris Persiau, Jasper De Bock, Gert de Cooman
Probability
Logic

We introduce a notion of computable randomness for infinite sequences that generalises the classical version in two important ways. First, our definition of computable randomness is associated with imprecise probability models, in the sense that we consider lower expectations (or sets of probabilities) instead of classical 'precise' probabilities. Secondly, instead of binary sequences, we consider sequences whose elements take values in some finite sample space. Interestingly...

Aspects of Chaitin's Omega

July 25, 2017

85% Match
George Barmpalias
Logic
Computational Complexity

The halting probability of a Turing machine, also known as Chaitin's Omega, is an algorithmically random number with many interesting properties. Since Chaitin's seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of Omega (or debating against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been brewing in various technical papers, which quietly reveals the...

Kolmogorov Complexity and Information Content

September 17, 2017

85% Match
Fouad B. Chedid
Information Theory
Computational Complexity

In this paper, we revisit a central concept in Kolmogorov complexity in which one would equate program-size complexity with information content. Despite the fact that Kolmogorov complexity has been widely accepted as an objective measure of the information content of a string, it has been the subject of many criticisms including the fundamental one directed by logicians and philosophers towards the statistical and semantical theories of information, which is about confusing a...

Identification of Probabilities of Languages

August 24, 2012

85% Match
Paul M. B. Vitanyi (CWI and University of Amsterdam), Nick Chater (Behavioural Science Group, Warwick Business School, University of Warwick)
Machine Learning
Probability

We consider the problem of inferring the probability distribution associated with a language, given data consisting of an infinite sequence of elements of the language. We do this under two assumptions on the algorithms concerned: (i) like a real-life algorithm it has round-off errors, and (ii) it has no round-off errors. Assuming (i) we (a) consider a probability mass function of the elements of the language if the data are drawn independent identically distributed (i.i.d.), ...

Algorithmic Complexity for Short Binary Strings Applied to Psychology: A Primer

June 15, 2011

85% Match
Nicolas Gauvrit, Hector Zenil, ..., Fernando Soler-Toscano
Computational Complexity

Since human randomness production has been studied and widely used to assess executive functions (especially inhibition), many measures have been suggested to assess the degree to which a sequence is random-like. However, each of them focuses on one feature of randomness, leading authors to have to use multiple measures. Here we describe and advocate for the use of the accepted universal measure for randomness based on algorithmic complexity, by means of a novel previously pr...

On the Algorithmic Nature of the World

June 19, 2009

85% Match
Hector Zenil, Jean-Paul Delahaye
Computational Complexity
Information Theory

We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin's universal distribution to identify evidence in support of or in contravention of the claim that the world is algorithmic in nature. To this end we have undertaken a statistical comparison of the frequency distributions of data from physical sources on the one hand--repositories of information such as images, data stored in a hard drive, computer programs and DNA sequences...

A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions

March 24, 2020

85% Match
Hector Zenil
Information Theory

Some established and also novel techniques in the field of applications of algorithmic (Kolmogorov) complexity currently co-exist for the first time and are here reviewed, ranging from dominant ones such as statistical lossless compression to newer approaches that advance, complement and also pose new challenges and may exhibit their own limitations. Evidence suggesting that these different methods complement each other for different regimes is presented and despite their man...
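The dominant technique this review names, statistical lossless compression, rests on the fact that the length of a compressed encoding upper-bounds a string's algorithmic (Kolmogorov) complexity up to a constant. A hedged sketch of that proxy using Python's standard `zlib` (the function name `compress_len` is illustrative):

```python
import random
import zlib

def compress_len(data: bytes) -> int:
    # Length of the zlib-compressed data at maximum compression level (9):
    # a coarse upper-bound proxy for algorithmic complexity.
    return len(zlib.compress(data, 9))

regular = b"0101" * 250                                    # highly structured, 1000 bytes
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # pseudo-random, 1000 bytes

# A structured string compresses far below its length; a noisy one barely at all.
assert compress_len(regular) < compress_len(noisy)
```

As the review notes, such compression-based estimates break down for short strings, where the compressor's header and dictionary overhead dominate; this is one motivation for the complementary methods the paper surveys.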

Stationary Algorithmic Probability

August 25, 2006

85% Match
Markus Mueller
Information Theory
Computational Complexity
Probability

Kolmogorov complexity and algorithmic probability are defined only up to an additive resp. multiplicative constant, since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different computers themselves, based on the idea that "unnatural" computers should be hard to emulate. Therefore, we study the Markov pro...
