Detailed Notes on Simulation Techniques
1     Generating Random Samples using Probability Integral Transformation
    • Concept: If X has a continuous distribution with cumulative distribution function (CDF) F(x), then U = F(X) ∼ Uniform(0, 1).
    • Steps:
         1. Generate U ∼ Uniform(0, 1).
         2. Obtain X by solving F(X) = U (inverse transform method): X = F^{-1}(U).
    • Applications:
         – Simulation of random variables from non-standard distributions.
         – Example: Generate an exponential random variable X with rate λ: X = −(1/λ) ln(U), where U ∼ Uniform(0, 1).
Theory: A random variable X with CDF F(x) can be generated by transforming a uniform random variable U ∼ Uniform(0, 1) using the inverse of its CDF:
                                             X = F^{-1}(U).
    Example: Generate samples from the exponential distribution with rate λ = 2.
    1. Write the CDF of the exponential distribution:
                                             F(x) = 1 − e^{-λx},    x ≥ 0.
    2. Solve F(X) = U for X:
                     U = 1 − e^{-2X}  =⇒  e^{-2X} = 1 − U  =⇒  X = −(1/2) ln(1 − U).
    3. Generate U ∼ Uniform(0, 1) and compute X = −(1/2) ln(1 − U).
    4. Problem: Generate 5 random samples when U = {0.1, 0.25, 0.5, 0.75, 0.9}.
    5. Solution:
                                     X = −(1/2) ln(1 − U),    for each U :
         • For U = 0.1, X = −(1/2) ln(0.9) = 0.05268.
         • For U = 0.25, X = −(1/2) ln(0.75) = 0.14384.
         • Similarly, U = 0.5, 0.75, 0.9 give X = 0.34657, 0.69315, 1.15129 (computed in the sketch below).
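A minimal Python sketch of this inverse-transform recipe, using the rate λ = 2 and the five U values from the problem statement:

    import math

    # Inverse-transform sampling for Exponential(rate = 2):
    # solve F(X) = U  =>  X = -(1/2) * ln(1 - U).
    rate = 2.0
    u_values = [0.1, 0.25, 0.5, 0.75, 0.9]

    for u in u_values:
        x = -math.log(1.0 - u) / rate
        print(f"U = {u}: X = {x:.5f}")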
2     Accept-Reject Algorithm
    • Concept: Generate samples from a complex target distribution f(x) using a simpler proposal distribution g(x).
    • Steps:
         1. Choose a proposal distribution g(x) and a constant M such that f(x) ≤ M g(x) for all x.
         2. Generate X ∼ g(x) and U ∼ Uniform(0, 1).
         3. Accept X if U ≤ f(X) / (M g(X)); otherwise, reject X and repeat.
    • Applications:
         – Sampling from distributions without closed-form inverses.
         – Example: Generating standard normal variates using exponential proposals.
Theory: Used when direct sampling is difficult. A simpler distribution g(x) serves as a proposal distribution.
  Example: Sample from f(x) = 2x on [0, 1] using g(x) = 1 (the Uniform distribution on [0, 1]).
    1. Find M such that f(x) ≤ M g(x). Here M = 2.
    2. Generate X ∼ g, i.e., X ∼ Uniform(0, 1).
    3. Generate U ∼ Uniform(0, 1) and accept X if U ≤ f(X) / (M g(X)).
    4. Problem: Generate one sample using X = 0.6 and U = 0.5.
    5. Solution:
                             f(0.6) = 2 × 0.6 = 1.2,    M g(0.6) = 2 × 1 = 2.
       Compute:
                                  f(0.6) / (M g(0.6)) = 1.2 / 2 = 0.6.
       - Since U = 0.5 ≤ 0.6, accept X = 0.6. (The sketch below wraps this test in a sampling loop.)
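A minimal Python sketch of the accept-reject loop for f(x) = 2x with g = Uniform(0, 1) and M = 2 (the function name and sample count are illustrative choices):

    import random

    def sample_accept_reject(n):
        """Draw n samples from f(x) = 2x on [0, 1] via accept-reject
        with proposal g(x) = 1 (Uniform) and envelope constant M = 2."""
        samples = []
        while len(samples) < n:
            x = random.random()          # X ~ g = Uniform(0, 1)
            u = random.random()          # U ~ Uniform(0, 1)
            if u <= (2.0 * x) / 2.0:     # accept if U <= f(X) / (M g(X)) = X
                samples.append(x)
        return samples

    print(sample_accept_reject(5))

On average each proposal is accepted with probability 1/M = 1/2, so roughly 2n proposals are needed to collect n samples.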
3     Metropolis Algorithm
    • Concept: A Markov Chain Monte Carlo (MCMC) method to generate samples from a target distribution π(x).
    • Steps:
         1. Start with an initial value X_0.
         2. Propose a new state X^* using a symmetric proposal distribution q(X^* | X_t) (e.g., Gaussian).
         3. Compute the acceptance probability:
                                       α = min{1, π(X^*) / π(X_t)}.
         4. Accept X^* with probability α; otherwise, set X_{t+1} = X_t.
         5. Repeat for a large number of iterations.
    • Applications:
         – Sampling from high-dimensional distributions.
         – Bayesian inference.
Theory: Samples from a target distribution π(x) using a symmetric proposal distribution.
  Example: Sample from π(x) ∝ e^{-x^2} (an unnormalized Gaussian density with mean 0 and variance 1/2).
    1. Start with X_0 = 0.
    2. Propose X^* ∼ Uniform(X_t − 0.5, X_t + 0.5).
    3. Compute acceptance probability:
                                      α = min{1, π(X^*) / π(X_t)}.
    4. Generate U ∼ Uniform(0, 1). Accept X^* if U ≤ α; otherwise, set X_{t+1} = X_t.
    5. Problem: Perform one iteration starting with X_0 = 0, propose X^* = 0.3, and use U = 0.4.
    6. Solution:
                            π(0.3) = e^{-0.3^2} = 0.91393,    π(0) = e^{-0^2} = 1.
       Compute:
                                 α = min{1, 0.91393 / 1} = 0.91393.
       - Since U = 0.4 ≤ 0.91393, accept X^* = 0.3. (The sketch below iterates this update.)
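A minimal Python sketch of this Metropolis iteration for π(x) ∝ e^{-x^2} with the Uniform(X_t − 0.5, X_t + 0.5) proposal from the example (the chain length and helper names are illustrative):

    import math
    import random

    def metropolis(n_iter, x0=0.0, half_width=0.5):
        """Random-walk Metropolis for the unnormalized target
        pi(x) = exp(-x**2), using the symmetric Uniform proposal
        X* ~ Uniform(X_t - 0.5, X_t + 0.5) from the example."""
        def pi(x):
            return math.exp(-x * x)

        x = x0
        chain = [x]
        for _ in range(n_iter):
            x_star = x + random.uniform(-half_width, half_width)
            alpha = min(1.0, pi(x_star) / pi(x))   # acceptance probability
            if random.random() <= alpha:
                x = x_star                          # accept the proposal
            chain.append(x)                         # otherwise keep X_t
        return chain

    chain = metropolis(10_000)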
4     Gibbs Sampling
    • Concept: MCMC method where sampling is performed conditionally for each variable in a multivariate distribution.
    • Steps:
         1. Start with an initial vector X = (X_1, X_2, . . . , X_d).
         2. Sequentially sample X_i from its conditional distribution:
                                            X_i ∼ P(X_i | X_{-i}),
            where X_{-i} represents all variables except X_i.
         3. Repeat the above step for several iterations (see the sketch below).
    • Applications:
         – Bayesian hierarchical models.
         – Spatial statistics and image processing.
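The notes give no worked example here, so the following Python sketch is a hypothetical illustration: a two-variable Gibbs sampler for a standard bivariate normal with correlation ρ, whose full conditionals are the familiar univariate normals X_1 | X_2 = x_2 ∼ N(ρ x_2, 1 − ρ^2), and symmetrically for X_2:

    import math
    import random

    def gibbs_bivariate_normal(n_iter, rho=0.8):
        """Hypothetical illustration: Gibbs sampling from a standard
        bivariate normal with correlation rho, alternating draws from
        the two full conditionals
            X1 | X2 = x2  ~  N(rho * x2, 1 - rho**2)   (and symmetrically)."""
        sd = math.sqrt(1.0 - rho * rho)
        x1, x2 = 0.0, 0.0                      # initial vector
        samples = []
        for _ in range(n_iter):
            x1 = random.gauss(rho * x2, sd)    # sample X1 | X2
            x2 = random.gauss(rho * x1, sd)    # sample X2 | X1
            samples.append((x1, x2))
        return samples

    draws = gibbs_bivariate_normal(5_000)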
5     Monte Carlo Integration
    • Concept: Approximation of integrals using random samples.
    • Steps:
         1. To compute I = ∫_a^b h(x) f(x) dx, generate X_1, X_2, . . . , X_n ∼ f(x).
         2. Estimate the integral as:
                                       Î = (1/n) Σ_{i=1}^{n} h(X_i).
         3. For multidimensional integrals, generalize X ∼ f(x) and use the same averaging approach (see the sketch below).
    • Applications:
        – Numerical integration in high dimensions.
        – Bayesian computation.
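A minimal Python sketch of this estimator; the test integrand ∫_0^1 x^2 dx = 1/3, with f = Uniform(0, 1) and h(x) = x^2, is an illustrative choice rather than an example from the notes:

    import random

    def mc_integrate(h, sampler, n):
        """Monte Carlo estimate of I = E_f[h(X)]: draw X_i ~ f via
        sampler() and average h(X_i)."""
        return sum(h(sampler()) for _ in range(n)) / n

    # Illustrative target: I = integral of x^2 over [0, 1] = 1/3,
    # with f = Uniform(0, 1) and h(x) = x^2.
    estimate = mc_integrate(lambda x: x * x, random.random, 100_000)
    print(estimate)   # close to 0.3333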
6     MCMC Principle
    • Key Idea: Create a Markov chain that has the desired target distribution π(x) as its stationary distribution.
    • Steps:
         1. Define transition probabilities ensuring irreducibility, aperiodicity, and positive recurrence.
        2. Use methods like Metropolis or Gibbs sampling to generate a sequence of samples.
         3. After a burn-in period, use the samples to approximate expectations (see the sketch below):
                                      E[h(X)] ≈ (1/n) Σ_{i=1}^{n} h(X_i).
    • Applications:
        – Bayesian inference.
        – High-dimensional optimization.
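A short Python sketch of the burn-in idea, reusing the random-walk Metropolis chain from Section 3 to approximate E[X^2] ≈ 1/2 under π(x) ∝ e^{-x^2} (the chain length and burn-in size are illustrative):

    import math
    import random

    # Approximate E[X^2] under pi(x) ∝ exp(-x^2) (true value 1/2)
    # with a Metropolis chain, discarding a burn-in period first.
    def pi(x):
        return math.exp(-x * x)

    x, chain = 0.0, []
    for _ in range(20_000):
        x_star = x + random.uniform(-0.5, 0.5)
        if random.random() <= min(1.0, pi(x_star) / pi(x)):
            x = x_star
        chain.append(x)

    burn_in = 2_000                    # discard early, non-stationary draws
    kept = chain[burn_in:]
    print(sum(xi * xi for xi in kept) / len(kept))   # ≈ 0.5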
7     Metropolis-Hastings Algorithm
    • Concept: Generalization of the Metropolis algorithm for non-symmetric proposals.
    • Steps:
         1. Start with an initial value X_0.
         2. Propose X^* using a proposal distribution q(X^* | X_t).
         3. Compute acceptance probability:
                         α = min{1, [π(X^*) q(X_t | X^*)] / [π(X_t) q(X^* | X_t)]}.
         4. Accept X^* with probability α; otherwise, set X_{t+1} = X_t.
         5. Repeat for several iterations (see the sketch below).
    • Applications:
        – Complex Bayesian models.
        – Sampling from intractable distributions.
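A minimal Python sketch of Metropolis-Hastings with a genuinely asymmetric proposal; the unnormalized Gamma(2, 1) target π(x) = x e^{-x} and the log-normal random-walk proposal are illustrative choices, not from the notes. For this proposal the Hastings correction simplifies to q(X_t | X^*) / q(X^* | X_t) = X^*/X_t:

    import math
    import random

    def metropolis_hastings(n_iter, x0=1.0, sigma=0.5):
        """MH with the asymmetric log-normal random-walk proposal
        X* = X_t * exp(sigma * Z), Z ~ N(0, 1), targeting the
        illustrative unnormalized Gamma(2, 1) density pi(x) = x e^{-x}.
        For this proposal, q(X_t | X*) / q(X* | X_t) = X*/X_t."""
        def pi(x):
            return x * math.exp(-x) if x > 0 else 0.0

        x = x0
        chain = [x]
        for _ in range(n_iter):
            x_star = x * math.exp(sigma * random.gauss(0.0, 1.0))
            alpha = min(1.0, (pi(x_star) / pi(x)) * (x_star / x))
            if random.random() <= alpha:
                x = x_star
            chain.append(x)
        return chain

    chain = metropolis_hastings(10_000)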
8     Bootstrap Method
    • Concept: A resampling technique to estimate the distribution of a statistic.
    • Steps:
         1. Given a dataset X = {x_1, x_2, . . . , x_n}, generate B bootstrap samples by sampling with replacement.
         2. For each bootstrap sample, compute the statistic of interest T_b^*.
         3. Use the empirical distribution of {T_1^*, T_2^*, . . . , T_B^*} to estimate the standard error, confidence intervals, etc.
    • Applications:
         – Non-parametric confidence intervals.
         – Hypothesis testing.
Theory: Resamples from data to estimate sampling distributions.
  Example: Compute the mean and its standard error for X = {2, 3, 5} using 3 bootstrap samples.
    1. Resample X with replacement:
         • Bootstrap sample 1: {2, 3, 3}, mean = 2.67.
         • Bootstrap sample 2: {5, 2, 5}, mean = 4.00.
         • Bootstrap sample 3: {3, 2, 3}, mean = 2.67.
    2. Compute the bootstrap standard error:
                            SE = √[ (1/(B−1)) Σ_{b=1}^{B} (X̄_b − X̄_boot)^2 ],
       where X̄_boot is the mean of the bootstrap means. Here X̄_boot = (2.67 + 4.00 + 2.67)/3 = 3.11, so
                       SE = √[ ((−0.44)^2 + (0.89)^2 + (−0.44)^2) / 2 ] ≈ 0.77.
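A minimal Python sketch of the bootstrap standard error of the mean; the helper name and B = 1000 resamples are illustrative choices (the tiny B = 3 above is only for hand computation):

    import math
    import random

    def bootstrap_se_of_mean(data, n_boot):
        """Estimate the standard error of the sample mean by resampling
        the data with replacement n_boot times."""
        means = []
        for _ in range(n_boot):
            resample = [random.choice(data) for _ in data]   # with replacement
            means.append(sum(resample) / len(resample))
        grand = sum(means) / n_boot
        # sample standard deviation of the bootstrap means (divisor B - 1)
        return math.sqrt(sum((m - grand) ** 2 for m in means) / (n_boot - 1))

    print(bootstrap_se_of_mean([2, 3, 5], 1000))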