In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady-state distribution. More precisely, a fundamental result about Markov chains is that a finite-state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-t distribution of the chain converges to π as t tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must t be until the time-t distribution is approximately π? One variant, variation distance mixing time, is defined as the smallest t such that the total variation distance of probability measures is small: |Pr(X_t ∈ A ∣ X_0 = x) − π(A)| ≤ ε for all subsets A of states and all initial states x.
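As a concrete illustration of the variation distance mixing time defined above, the following sketch computes it for a small finite chain by powering the transition matrix and tracking the worst-case total variation distance to the stationary distribution. The chain, the threshold ε = 0.25, and all function names are illustrative assumptions, not part of the source.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1, normalized."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def tv_distance(mu, nu):
    """Total variation distance = max over subsets A of |mu(A) - nu(A)| = 0.5 * L1 norm."""
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, eps=0.25, t_max=10_000):
    """Smallest t such that every row of P^t is within eps of pi in total variation."""
    pi = stationary_distribution(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P                     # Pt[x] is the time-t distribution started from x
        if max(tv_distance(Pt[x], pi) for x in range(P.shape[0])) <= eps:
            return t
    raise RuntimeError("chain did not mix within t_max steps")

# Toy example: a lazy walk on three states (hypothetical chain for illustration).
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(mixing_time(P))                   # prints 1 for this fast-mixing toy chain
```

For this toy chain the worst-case total variation distance already falls below 0.25 after a single step, so the printed mixing time is 1; slower chains would need correspondingly more matrix powers.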
Attributes | Values
---|---
rdfs:label | Markov chain mixing time (en)
has abstract | In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady-state distribution. More precisely, a fundamental result about Markov chains is that a finite-state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-t distribution of the chain converges to π as t tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must t be until the time-t distribution is approximately π? One variant, variation distance mixing time, is defined as the smallest t such that the total variation distance of probability measures is small: |Pr(X_t ∈ A ∣ X_0 = x) − π(A)| ≤ ε for all subsets A of states and all initial states x. This is the sense in which Dave Bayer and Persi Diaconis proved that the number of riffle shuffles needed to mix an ordinary 52-card deck is 7. Mathematical theory focuses on how mixing times change as a function of the size of the structure underlying the chain. For an n-card deck, the number of riffle shuffles needed grows as (3/2) log₂ n. The most developed theory concerns randomized algorithms for #P-complete algorithmic counting problems such as the number of graph colorings of a given n-vertex graph. Such problems can, for a sufficiently large number of colors, be answered using the Markov chain Monte Carlo method and showing that the mixing time grows only as n log n. This example and the shuffling example possess the rapid mixing property, that the mixing time grows at most polynomially fast in log(number of states of the chain). Tools for proving rapid mixing include arguments based on conductance and the method of coupling. In broader uses of the Markov chain Monte Carlo method, rigorous justification of simulation results would require a theoretical bound on mixing time, and many interesting practical cases have resisted such theoretical analysis. (en)
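The abstract above cites the Bayer-Diaconis result that seven riffle shuffles mix a 52-card deck and that the number needed grows as (3/2) log₂ n. The sketch below is an illustrative assumption rather than anything from the source: it simulates Gilbert-Shannon-Reeds (GSR) riffle shuffles of a small deck and produces a Monte Carlo estimate of the total variation distance to the uniform distribution on permutations, the quantity whose decay defines the mixing time. The deck size, sample count, and helper names are hypothetical choices.

```python
import itertools
import math
import random
from collections import Counter

def gsr_riffle(deck, rng):
    """One Gilbert-Shannon-Reeds riffle shuffle of a tuple of cards."""
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))        # Binomial(n, 1/2) cut point
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        # Drop the next card from a packet with probability proportional to its size.
        if rng.random() * (len(left) + len(right)) < len(left):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return tuple(out)

def estimated_tv_to_uniform(n_cards, n_shuffles, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the total variation distance between the deck's
    distribution after n_shuffles GSR riffles and the uniform distribution."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        deck = tuple(range(n_cards))
        for _ in range(n_shuffles):
            deck = gsr_riffle(deck, rng)
        counts[deck] += 1
    u = 1.0 / math.factorial(n_cards)
    return 0.5 * sum(abs(counts[p] / n_samples - u)
                     for p in itertools.permutations(range(n_cards)))

# A 5-card toy deck: watch the estimated distance shrink as the shuffle count grows.
for t in range(1, 7):
    print(t, round(estimated_tv_to_uniform(5, t), 3))
```

Because the distance is estimated from samples rather than computed exactly, the printed values carry some Monte Carlo noise, but they still show the distance falling toward zero as the number of shuffles grows, which is the behaviour the mixing-time analysis quantifies.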