Stochastic Process I
(MAT 223)
By
ENGR AYODELE SAMUEL OLUSOLA
Department of Mathematical Sciences
College of Science and Information Technology
Tai Solarin University of Education
Ijagun, Ijebu Ode, Ogun State.
March 10, 2025
Contents
1 LECTURE ONE: Markov Chains
1.1 Introduction to Markov Chains
1.2 States and Transitions
1.3 Transition Probabilities
1.4 General Two-State Markov Chains
1.5 Powers of the Transition Matrix
1.6 Gambler's Ruin as a Markov Chain
1.7 Classification of States
1.8 Problems and Solutions on Markov Chains
2 LECTURE TWO: Poisson Processes
2.1 Introduction to Poisson Processes
2.2 Partition Theorem Approach
2.2.1 Derivation using Partitioning
2.3 Iterative Method for Poisson Processes
2.4 Generating Function Approach
2.5 Variance in Terms of the Probability Generating Function
2.6 Arrival Time Distribution
2.6.1 Derivation
2.6.2 Waiting Time for nth Arrival
2.7 Problems and Solutions on Poisson Processes
1 LECTURE ONE: Markov Chains
1.1 Introduction to Markov Chains
A Markov Chain is a stochastic process that undergoes transitions from
one state to another on a state space. The process satisfies the Markov
property, which states that the probability of transitioning to the next
state depends only on the current state and not on the sequence of states
that preceded it.
1.2 States and Transitions
A Markov chain consists of:
• A set of states S = {s1 , s2 , . . . , sn }.
• A set of transition probabilities Pij which determine the probability
of moving from state i to state j.
The transitions can be represented using a state transition diagram,
where states are nodes and directed edges indicate possible transitions.
1.3 Transition Probabilities
The transition probabilities are typically represented in a matrix known as
the transition matrix:
      | P11  P12  . . .  P1n |
P  =  | P21  P22  . . .  P2n |
      |  :    :   . . .   :  |
      | Pn1  Pn2  . . .  Pnn |

Each element Pij satisfies:

0 ≤ Pij ≤ 1,    Σj Pij = 1,   ∀i.
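These two conditions are easy to verify numerically. A minimal sketch in plain Python (the helper name is ours, not from the text):

```python
def is_stochastic(P, tol=1e-9):
    """Check that every entry lies in [0, 1] and every row sums to 1."""
    return all(
        all(0.0 <= p <= 1.0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# The 3-state matrix used later in Problem 1:
P = [[0.3, 0.5, 0.2],
     [0.4, 0.4, 0.2],
     [0.2, 0.6, 0.2]]
print(is_stochastic(P))  # True
```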
1.4 General Two-State Markov Chains
Consider a Markov chain with two states S = {0, 1}. The transition matrix
is given by:
P  =  | p    1 − p |
      | q    1 − q |
where:
• p is the probability of staying in state 0.
• 1 − p is the probability of transitioning from state 0 to state 1.
• q is the probability of transitioning from state 1 to state 0.
• 1 − q is the probability of staying in state 1.
1.5 Powers of the Transition Matrix
For long-term behavior, we analyze powers of the transition matrix:

P^n = P · P^(n−1).

For large n, under certain conditions, every row of P^n converges to a stationary distribution π satisfying:

πP = π.
Solving this equation gives the stationary distribution, which represents
the long-run proportion of time spent in each state.
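This convergence can be observed directly by repeated matrix multiplication. A small sketch in plain Python (the example matrix is our choice; its stationary distribution is (2/3, 1/3)):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.8, 0.2],
     [0.4, 0.6]]

Pn = P
for _ in range(50):      # compute P^51; convergence is geometric in n
    Pn = mat_mul(Pn, P)

# Every row of P^n approaches the stationary distribution pi.
print([round(x, 6) for x in Pn[0]])  # [0.666667, 0.333333]
```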
1.6 Gambler’s Ruin as a Markov Chain
The Gambler’s Ruin problem models a gambler with an initial fortune x
who bets a fixed amount on each game. The gambler:
• Wins with probability p.
• Loses with probability 1 − p.
The process can be represented as a Markov chain with states {0, 1, . . . , N },
where 0 represents ruin and N represents success. The transition probabili-
ties are:
Pk,k+1 = p,    Pk,k−1 = 1 − p,    for 0 < k < N,

with states 0 and N absorbing (P0,0 = PN,N = 1). Writing q = 1 − p, recursion methods give the probability of eventual ruin starting from fortune x:

Pruin(x) = ((q/p)^x − (q/p)^N) / (1 − (q/p)^N),   p ≠ q,
Pruin(x) = (N − x)/N,                             p = q = 0.5.
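The closed form for the ruin probability can be checked against a direct numerical solution of the recursion r(x) = p·r(x+1) + (1−p)·r(x−1) with boundary values r(0) = 1, r(N) = 0. A sketch in plain Python (function names are ours):

```python
def ruin_closed_form(p, x, N):
    """P(ruin | start at x) for the gambler's ruin chain on {0, ..., N}."""
    q = 1 - p
    if abs(p - q) < 1e-12:
        return (N - x) / N
    rho = q / p
    return (rho**x - rho**N) / (1 - rho**N)

def ruin_by_iteration(p, N, sweeps=5000):
    """Solve r(x) = p*r(x+1) + (1-p)*r(x-1) by fixed-point sweeps."""
    q = 1 - p
    r = [1.0] + [0.0] * N           # boundaries r[0]=1, r[N]=0 stay fixed
    for _ in range(sweeps):
        for x in range(1, N):
            r[x] = p * r[x + 1] + q * r[x - 1]
    return r

r = ruin_by_iteration(0.4, 5)
print(round(r[3], 4), round(ruin_closed_form(0.4, 3, 5), 4))  # 0.6398 0.6398
```

The agreement of the two methods is a useful sanity check on the algebra.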
1.7 Classification of States
A state i is classified as:
• Recurrent: The probability of returning to state i is 1.
• Transient: The probability of never returning to state i is greater than
0.
• Absorbing: If Pii = 1, the process remains in state i forever.
A Markov chain is irreducible if every state is reachable from every other
state. A state is periodic if returns to it can occur only at multiples of
some integer d > 1 (its period); otherwise it is aperiodic.
Conclusion
Markov Chains provide a powerful framework for modeling stochastic pro-
cesses with memoryless properties. Applications range from finance and ge-
netics to game theory and queueing systems.
1.8 Problems and Solutions on Markov Chains
Problem 1: Transition Matrix
Problem: Consider a Markov chain with three states S = {0, 1, 2} and the
following transition matrix:
      | 0.3  0.5  0.2 |
P  =  | 0.4  0.4  0.2 |
      | 0.2  0.6  0.2 |
(a) Verify that P is a valid transition matrix.
(b) Find the probability of moving from state 0 to state 2 in two steps.
Solution:
(a) To be a valid transition matrix, each row must sum to 1:
0.3 + 0.5 + 0.2 = 1, 0.4 + 0.4 + 0.2 = 1, 0.2 + 0.6 + 0.2 = 1.
Thus, P is a valid transition matrix.
(b) The two-step transition probability from state 0 to state 2 is the
(0, 2) entry of P^2. Only row 0 of the first factor and column 2 of the
second are needed:

P^2(0, 2) = P(0,0)P(0,2) + P(0,1)P(1,2) + P(0,2)P(2,2)
          = (0.3)(0.2) + (0.5)(0.2) + (0.2)(0.2)
          = 0.06 + 0.10 + 0.04 = 0.20.

Thus, the probability of moving from state 0 to state 2 in two steps is 0.20.
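The arithmetic can be confirmed by squaring the matrix directly (a plain-Python sketch):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.3, 0.5, 0.2],
     [0.4, 0.4, 0.2],
     [0.2, 0.6, 0.2]]

P2 = mat_mul(P, P)
print(round(P2[0][2], 2))  # 0.2
```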
Problem 2: Stationary Distribution
Problem: Given the transition matrix:
P  =  | 0.8  0.2 |
      | 0.4  0.6 |
Find the stationary distribution π = (π1 , π2 ) such that πP = π.
Solution:
The stationary distribution satisfies:
π1 (0.8) + π2 (0.4) = π1
π1 (0.2) + π2 (0.6) = π2
Rearranging the first equation:

π1 − 0.8π1 − 0.4π2 = 0  ⇒  0.2π1 = 0.4π2  ⇒  π1 = 2π2.

Since π1 + π2 = 1:

2π2 + π2 = 1  ⇒  3π2 = 1  ⇒  π2 = 1/3,  π1 = 2/3.

Thus, the stationary distribution is:

π = (2/3, 1/3).
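For any two-state chain the balance equations solve in closed form; a small sketch in plain Python (the helper name is ours):

```python
def stationary_two_state(P):
    """Stationary distribution of [[a, 1-a], [b, 1-b]].
    From pi0*a + pi1*b = pi0 and pi0 + pi1 = 1: pi0 = b / (1 - a + b)."""
    a, b = P[0][0], P[1][0]
    pi0 = b / (1 - a + b)
    return (pi0, 1 - pi0)

pi = stationary_two_state([[0.8, 0.2], [0.4, 0.6]])
print(round(pi[0], 4), round(pi[1], 4))  # 0.6667 0.3333
```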
Problem 3: Gambler’s Ruin
Problem: A gambler has ₦3 and bets ₦1 on each round. The probability
of winning is p = 0.4 and losing is q = 0.6. If the gambler continues playing
until reaching ₦0 (ruin) or ₦5 (goal), find the probability of eventual ruin.
Solution:
Writing ρ = q/p, the probability of ruin starting from fortune x is:

Pruin(x) = (ρ^x − ρ^N) / (1 − ρ^N).

Substituting q = 0.6, p = 0.4 (so ρ = 1.5), x = 3, N = 5:

Pruin(3) = (1.5^3 − 1.5^5) / (1 − 1.5^5)
         = (3.375 − 7.59375) / (1 − 7.59375)
         = (−4.21875) / (−6.59375) ≈ 0.64.

Thus, the probability of ruin is approximately 0.64. The complementary
quantity (ρ^x − 1)/(ρ^N − 1) ≈ 0.36 is the probability of reaching the
goal of ₦5 before ruin.
Problem 4: Classification of States
Problem: Consider the Markov chain with transition matrix:
      |  1    0    0    0  |
P  =  | 0.5   0   0.5   0  |
      |  0   0.5   0   0.5 |
      |  0    0    0    1  |
(a) Identify the absorbing states.
(b) Classify the transient states.
Solution:
(a) A state i is absorbing if Pii = 1. Here, state 0 and state 3 are absorbing
because:
P00 = 1, P33 = 1.
(b) A state is transient if there is a positive probability of never returning
to it. Here, states 1 and 2 are transient: from each there is a positive
probability of being absorbed into state 0 or state 3, after which the chain
never returns.
Conclusion
These problems cover fundamental aspects of Markov chains, including tran-
sition matrices, stationary distributions, gambler’s ruin, and state classifi-
cation. Mastering these concepts is essential for deeper study of stochastic
processes.
2 LECTURE TWO: Poisson Processes
2.1 Introduction to Poisson Processes
A Poisson process is a stochastic process that models the occurrence of ran-
dom events over time. It is widely used in fields such as queueing theory,
telecommunications, and reliability analysis.
The Poisson process has the following properties:
1. The process starts at zero: N (0) = 0.
2. The increments are independent: the numbers of arrivals in disjoint
time intervals are independent.
3. The number of arrivals in an interval of length t follows a Poisson
distribution:
P(N(t) = k) = (λt)^k e^(−λt) / k!,   k = 0, 1, 2, . . .

where λ is the rate (average number of arrivals per unit time).
4. The Poisson process has stationary increments: The probability of a
certain number of arrivals in an interval depends only on the length of
the interval, not on its position.
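The Poisson probabilities above are straightforward to compute. A minimal sketch in Python using only the standard library (the rate and window below are illustrative choices):

```python
from math import exp, factorial

def poisson_pmf(k, lam, t):
    """P(N(t) = k) for a Poisson process of rate lam."""
    mu = lam * t
    return mu**k * exp(-mu) / factorial(k)

# Example: rate 2 per unit time over a window of length 1.
probs = [poisson_pmf(k, 2.0, 1.0) for k in range(40)]
print(round(sum(probs), 6))  # 1.0 -- the pmf sums to one
```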
2.2 Partition Theorem Approach
A useful way to analyze the Poisson process is through partitioning time
intervals into smaller segments.
2.2.1 Derivation using Partitioning
Consider a small interval [t, t + ∆t]. For small ∆t, the arrival probabilities
are approximately:

P(exactly 1 arrival in [t, t + ∆t]) ≈ λ∆t,
P(no arrivals in [t, t + ∆t]) ≈ 1 − λ∆t,

and the probability of two or more arrivals is negligible (of order o(∆t)).
By considering multiple partitions and using the law of total probability,
we can derive results such as the Poisson distribution of arrivals over time.
2.3 Iterative Method for Poisson Processes
An alternative approach to deriving properties of the Poisson process is the
iterative method, which considers differential equations.
Define Pn(t) = P(N(t) = n). Then the Kolmogorov forward equations state:

dPn(t)/dt = λPn−1(t) − λPn(t),   n ≥ 1,

with dP0(t)/dt = −λP0(t) and initial condition P0(0) = 1. Solving these
equations iteratively, starting from P0(t) = e^(−λt), we can show:

Pn(t) = (λt)^n e^(−λt) / n!.
This provides an alternative derivation of the Poisson distribution.
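The iterative structure also lends itself to a direct numerical check: Euler-integrating the forward equations reproduces the Poisson probabilities. A sketch (step count, rate, and truncation level are our choices):

```python
from math import exp, factorial

def integrate_forward(lam, t, n_max=30, steps=20000):
    """Euler-integrate dP_n/dt = lam*(P_{n-1} - P_n) with P_0(0) = 1."""
    dt = t / steps
    P = [1.0] + [0.0] * n_max
    for _ in range(steps):
        new = [P[0] - lam * P[0] * dt]              # n = 0: no inflow term
        for n in range(1, n_max + 1):
            new.append(P[n] + lam * (P[n - 1] - P[n]) * dt)
        P = new
    return P

lam, t = 3.0, 1.0
numeric = integrate_forward(lam, t)
exact = [(lam * t)**n * exp(-lam * t) / factorial(n) for n in range(5)]
print(max(abs(numeric[n] - exact[n]) for n in range(5)) < 1e-3)  # True
```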
2.4 Generating Function Approach
The probability generating function (PGF) of N (t) is given by:
G(s, t) = E[s^N(t)] = Σn≥0 s^n Pn(t).

Using the Poisson distribution:

G(s, t) = Σn≥0 s^n (λt)^n e^(−λt) / n! = e^(−λt) Σn≥0 (sλt)^n / n!.

The remaining sum is the Taylor series of e^(sλt), so:

G(s, t) = e^(−λt) · e^(sλt) = e^(λt(s−1)).
The PGF is useful for finding moments such as mean and variance.
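As a quick numerical sanity check, the truncated series definition and the closed form agree to machine precision (λ, t, and s below are arbitrary choices):

```python
from math import exp, factorial

lam, t, s = 2.0, 1.5, 0.7
mu = lam * t

# Series definition of the PGF, truncated far into the vanishing tail.
series = sum(s**n * mu**n * exp(-mu) / factorial(n) for n in range(60))
closed = exp(lam * t * (s - 1))

print(abs(series - closed) < 1e-12)  # True
```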
2.5 Variance in Terms of the Probability Generating
Function
The mean of N(t) is given by:

E[N(t)] = d/ds G(s, t) |s=1 = λt.

The second factorial moment is:

E[N(t)(N(t) − 1)] = d²/ds² G(s, t) |s=1 = (λt)².

The variance then follows from Var(N(t)) = E[N(t)(N(t) − 1)] + E[N(t)] − (E[N(t)])²:

Var(N(t)) = (λt)² + λt − (λt)² = λt.
This result shows that the mean and variance of a Poisson process are
equal.
2.6 Arrival Time Distribution
The interarrival times of a Poisson process (time between successive arrivals)
are independent and exponentially distributed.
2.6.1 Derivation

Let T1 be the time of the first arrival. The probability that the first arrival
occurs after time t is:

P(T1 > t) = P(N(t) = 0) = e^(−λt).

Thus T1 is exponentially distributed, with density:

fT(t) = λe^(−λt),   t > 0.

By the independent, stationary increments of the process, every subsequent
interarrival time has this same distribution, independently of the others.
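This gives the standard way to simulate a Poisson process: generate exponential interarrival times and count how many arrivals fall in [0, t]. A seeded sketch (sample size, seed, and rate are our choices):

```python
import random
from math import exp, log

random.seed(42)
lam, t = 2.0, 1.0

def count_arrivals(lam, t):
    """Simulate N(t) by accumulating Exponential(lam) interarrival times."""
    total, n = 0.0, 0
    while True:
        total += -log(1.0 - random.random()) / lam   # inverse-CDF draw
        if total > t:
            return n
        n += 1

samples = [count_arrivals(lam, t) for _ in range(100_000)]
mean = sum(samples) / len(samples)
frac_zero = samples.count(0) / len(samples)
print(round(mean, 2))       # close to lam * t = 2.0
print(round(frac_zero, 3))  # close to exp(-2) = 0.135...
```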
2.6.2 Waiting Time for nth Arrival
The waiting time for the nth arrival, Sn = T1 + T2 + · · · + Tn, follows a
Gamma distribution:

Sn ∼ Gamma(n, λ).

This has the density function:

fSn(t) = λ^n t^(n−1) e^(−λt) / (n − 1)!,   t > 0.
This result is useful in queueing theory and reliability analysis.
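The Gamma law can be verified by simulation: sum n independent exponentials and compare the sample moments with the Gamma(n, λ) mean n/λ and variance n/λ². A seeded sketch (parameters are our choices):

```python
import random
from math import log

random.seed(0)
lam, n = 2.0, 5

# S_n = T_1 + ... + T_n with T_i ~ Exponential(lam), so S_n ~ Gamma(n, lam).
samples = [sum(-log(1.0 - random.random()) / lam for _ in range(n))
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # near n/lam = 2.5 and n/lam**2 = 1.25
```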
Conclusion
The Poisson process is a fundamental stochastic model describing random
arrivals over time. It is characterized by:
• The Poisson distribution for the number of arrivals.
• The exponential distribution for interarrival times.
• Useful properties derived via partition theorem, iterative methods, and
generating functions.
Understanding Poisson processes is crucial for applications in queueing
systems, network traffic, and biological processes.
2.7 Problems and Solutions on Poisson Processes
Problem 1: Basic Poisson Process
Problem: A Poisson process models calls arriving at a call center with a
rate of λ = 5 calls per hour.
(a) What is the probability that exactly 3 calls arrive in a 2-hour period?
(b) What is the probability that at least 2 calls arrive in a 30-minute period?
Solution:
The number of arrivals in time t follows a Poisson distribution:

P(N(t) = k) = (λt)^k e^(−λt) / k!.
For 2 hours, the expected number of arrivals is:
λt = 5 × 2 = 10.
(a) The probability of exactly 3 arrivals in 2 hours:

P(N(2) = 3) = 10^3 e^(−10) / 3! = 1000 e^(−10) / 6.

Approximating e^(−10) ≈ 4.54 × 10^(−5):

P(N(2) = 3) ≈ (1000 × 4.54 × 10^(−5)) / 6 ≈ 0.0076.
(b) For a 30-minute period, t = 0.5 hours, so:

λt = 5 × 0.5 = 2.5.

The probability of at least 2 arrivals is:

P(N(0.5) ≥ 2) = 1 − P(N(0.5) = 0) − P(N(0.5) = 1).

Computing:

P(N(0.5) = 0) = e^(−2.5) ≈ 0.0821,
P(N(0.5) = 1) = (2.5)^1 e^(−2.5) / 1! ≈ 2.5 × 0.0821 ≈ 0.2052,
P(N(0.5) ≥ 2) = 1 − (0.0821 + 0.2052) ≈ 0.7127.

Final Answers: (a) P(N(2) = 3) ≈ 0.0076. (b) P(N(0.5) ≥ 2) ≈ 0.7127.
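Both parts can be reproduced with a few lines of Python (standard library only):

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    """P(N = k) when the expected count is mu = lam * t."""
    return mu**k * exp(-mu) / factorial(k)

# (a) exactly 3 calls in 2 hours at rate 5 per hour: mu = 10
p_a = poisson_pmf(3, 10.0)

# (b) at least 2 calls in 30 minutes: mu = 2.5
p_b = 1.0 - poisson_pmf(0, 2.5) - poisson_pmf(1, 2.5)

print(round(p_a, 4), round(p_b, 4))  # 0.0076 0.7127
```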
Problem 2: Poisson Process Using the Partition Theorem
Problem: Consider a Poisson process with rate λ = 4 arrivals per hour.
Compute the probability that exactly one arrival occurs in the first 15 min-
utes, given that there is at least one arrival.
Solution:
The probability of exactly 1 arrival in 15 minutes (t = 0.25):

P(N(0.25) = 1) = (4 × 0.25)^1 e^(−4×0.25) / 1! = e^(−1) ≈ 0.3679.

The probability of at least 1 arrival:

P(N(0.25) ≥ 1) = 1 − P(N(0.25) = 0) = 1 − e^(−1) ≈ 1 − 0.3679 = 0.6321.

Using conditional probability:

P(N(0.25) = 1 | N(0.25) ≥ 1) = P(N(0.25) = 1) / P(N(0.25) ≥ 1)
                             = 0.3679 / 0.6321 ≈ 0.582.
Final Answer: 0.582.
Problem 3: Interarrival Time Distribution
Problem: Suppose customers arrive at a store according to a Poisson process
with λ = 3 customers per hour. What is the probability that the first
customer arrives after 30 minutes?
Solution:
The time until the first arrival follows an Exponential distribution:
P (T1 > t) = e−λt .
For t = 0.5 (30 minutes):
P (T1 > 0.5) = e−3(0.5) = e−1.5 .
Approximating e−1.5 ≈ 0.2231, we get:
P (T1 > 0.5) ≈ 0.2231.
Final Answer: 0.2231.
Problem 4: Generating Function Approach
Problem: For a Poisson process with rate λ, find E[N (t)] using the proba-
bility generating function.
Solution:
The probability generating function (PGF) is:
G(s, t) = eλt(s−1) .
The expected value is:

E[N(t)] = d/ds G(s, t) |s=1.

Differentiating:

d/ds G(s, t) = λt e^(λt(s−1)).

Setting s = 1:

E[N(t)] = λt e^(λt(1−1)) = λt.
Final Answer: E[N (t)] = λt.
Problem 5: Variance of a Poisson Process
Problem: Show that Var(N (t)) = λt using the PGF.
Solution:
The second factorial moment is found using:

E[N(t)(N(t) − 1)] = d²/ds² G(s, t) |s=1.

Differentiating:

d²/ds² G(s, t) = λ²t² e^(λt(s−1)),

so at s = 1:

E[N(t)(N(t) − 1)] = λ²t²,  and hence  E[N²(t)] = λ²t² + λt.

Using Var(N(t)) = E[N²(t)] − (E[N(t)])²:

Var(N(t)) = (λ²t² + λt) − (λt)² = λt.
Final Answer: Var(N (t)) = λt.