Monte Carlo methods have found widespread use across many disciplines as a way to simulate random processes in order to obtain numerical results. Analytically, it can often be difficult to compute the expected value of an outcome due to the complexity of the distribution. Instead, Monte Carlo methods simulate a process to determine the expected value of an outcome empirically. In particular, it is often useful to sample from a probability distribution to determine the expectation after a long period of time. Given a very large set and a probability distribution over it, the distribution and expected values can be approximated by drawing samples from the distribution. Often, though, obtaining samples directly from the probability distribution is difficult.

Markov chain Monte Carlo methods attempt to solve this problem by using local state transitions to “walk around” the state space. This random walk produces samples distributed according to the stationary distribution of the Markov chain. A Markov chain is a system that transitions between discrete states with the special property that the transition probability from one state to another depends only on the state the chain is in at the current time, and not on any previous steps. The stationary distribution of an irreducible Markov chain is its unique time-independent distribution. The chain can be initialized at any state, and its distribution will converge to the stationary distribution after many iterations of stochastic transitions between states. Once the distribution of the chain is close to the stationary distribution, the states visited by the Markov chain will give a good approximation to samples from the stationary distribution.
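As a minimal sketch of this convergence behavior, the following simulates a small, hypothetical three-state Markov chain (the transition matrix is an assumption chosen for illustration, not from the text): the chain is started at an arbitrary state, early "burn-in" steps are discarded, and the empirical frequencies of the visited states are tallied. After many transitions these frequencies approximate the chain's stationary distribution.

```python
import random

# Hypothetical 3-state Markov chain; transition_matrix[i][j] is the
# probability of moving from state i to state j.
transition_matrix = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def step(state, rng):
    """Take one stochastic transition out of the current state."""
    return rng.choices(range(3), weights=transition_matrix[state])[0]

def empirical_distribution(n_steps, burn_in=1000, seed=0):
    """Walk the chain, discard burn-in steps, and count state visits."""
    rng = random.Random(seed)
    state = 0  # the chain may be initialized at any state
    counts = [0, 0, 0]
    for t in range(n_steps):
        state = step(state, rng)
        if t >= burn_in:
            counts[state] += 1
    total = sum(counts)
    return [c / total for c in counts]

print(empirical_distribution(200_000))
```

Solving pi = pi * P for this particular matrix gives a stationary distribution of roughly (0.321, 0.429, 0.250), and the printed empirical frequencies land close to those values, illustrating that the visited states behave like samples from the stationary distribution.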