Uniform random number
• They can take any value in a continuous range
• One cannot predict in advance which value will be taken
• The distribution of such a variable may nevertheless be well known
• The distribution of a random variable gives the probability of obtaining a given value
• The probability density function is given by g(u) du = P(u < u′ < u + du)
• The integral/accumulated (cumulative) distribution function is G(u) = ∫_{−∞}^{u} g(u′) du′ ⇒ g(u) = dG(u)/du
• G(u) increases monotonically with u. The normalisation of g is fixed by ∫_{−∞}^{+∞} g(u) du = 1
• Truly random numbers : a sequence of truly random numbers is completely unpredictable and hence irreproducible
• They can be generated only through random physical processes, e.g., radioactive decay or the arrival time of a cosmic muon
• Pseudo-random numbers : mid-square method : a sequence generated according to a strict mathematical formula but indistinguishable from a truly random sequence
– Start with a random number of r digits
– Take the middle r/2 digits; this is the first random number
– Square these r/2 digits
– Take the middle r/2 digits of the square → second random number, and repeat
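A minimal C++ sketch of the mid-square idea, in a common 4-digit variant (square a 4-digit number, keep the middle four digits of the 8-digit square); the digit count and seed are illustrative choices, not taken from the notes:

#include <cstdio>

// Mid-square step: square a 4-digit number and keep the middle 4 digits.
// Note: mid-square sequences can collapse to short cycles (e.g., all zeros).
unsigned int midSquareNext(unsigned int x) {
    unsigned long long sq = (unsigned long long)x * x;  // up to 8 digits
    return (unsigned int)((sq / 100) % 10000);          // middle 4 digits
}

int main() {
    unsigned int x = 5731;                 // arbitrary 4-digit seed
    for (int i = 0; i < 5; ++i) {
        x = midSquareNext(x);
        std::printf("%04u\n", x);
    }
    return 0;
}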
• Multiplicative linear congruential generator (MLCG) : build up a sequence r_{i+1} = mod(a × r_i + c, m)
– Choose a, c, m carefully
– The lower-order bits are much less random than the higher-order bits, e.g., for random numbers in the range 1-10,
∗ J = 1 + int(10 × r_i) is better compared with
∗ J = 1 + mod(100000 × r_i, 10)
– Gradual improvement of a, c, m (Computer Phys. Comm. 60 (1990) 329)
∗ a = 23, m = 10^8 + 1
∗ a = 65539, m = 2^29
∗ a = 69069, m = 2^32
∗ a = 7^5 = 16807, m = 2^31 − 1
∗ a = 1664525, m = 2^32
∗ a = 742938285, m = 2^31 − 1
∗ a = 40014, m = 2147483563
∗ a = 40692, m = 2147483399
∗ a = 5^15, m = 2^47
– Extension : r_{i+1} = mod(a × r_i + b × r_{i−1} + c, m)
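A minimal C++ sketch of a multiplicative LCG using one of the parameter sets quoted above (a = 40014, m = 2147483563, c = 0); the seed value is an arbitrary choice:

#include <cstdint>

// Multiplicative LCG: r_{i+1} = (a * r_i) mod m, with a = 40014, m = 2147483563.
// 64-bit arithmetic avoids overflow in the product a * r_i.
static uint64_t lcg_state = 12345;           // arbitrary nonzero seed

double lcgUniform() {
    const uint64_t a = 40014, m = 2147483563ULL;
    lcg_state = (a * lcg_state) % m;
    return double(lcg_state) / double(m);    // uniform deviate in (0,1)
}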
• (Lagged) Fibonacci series : r_i = mod(r_{i−p} ⊙ r_{i−q}, m),
where ⊙ is some binary or logical operation (ordinary addition or subtraction, but not exclusive-or) and p and q are the lags, with p > q
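A minimal C++ sketch of an additive lagged-Fibonacci generator; the lag pair (p, q) = (55, 24), the implicit modulus 2^32 and the crude seeding are illustrative assumptions, not taken from the notes:

#include <cstdint>
#include <vector>

// Additive lagged-Fibonacci generator: r_i = (r_{i-55} + r_{i-24}) mod 2^32
// (the mod 2^32 comes for free from 32-bit unsigned overflow).
struct LaggedFib {
    std::vector<uint32_t> buf;   // circular buffer of the last 55 values
    int idx = 0;                 // position of r_{i-55}
    explicit LaggedFib(uint32_t seed) : buf(55) {
        // crude seeding with a small LCG; a real implementation seeds more carefully
        for (uint32_t& v : buf) { seed = 1664525u * seed + 1013904223u; v = seed; }
    }
    uint32_t next() {
        int j = idx - 24; if (j < 0) j += 55;   // index of r_{i-24}
        buf[idx] += buf[j];                     // r_{i-55} is overwritten by r_i
        uint32_t r = buf[idx];
        if (++idx == 55) idx = 0;
        return r;
    }
};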
Properties of some pseudorandom number generators :

Generator     Randomness   Portability   Approx. period   Initialise [words]   Restart [words]   Disjoint sequences
Traditional   unreliable   poor          10^9             1                    1                 sequential
RANECU        good         good          10^18            2                    2                 10^9 × 10^9
RANMAR        good         good          10^43            1                    100               10^9 × 10^34
ACARRY        good         good          10^170           1                    25                10^9 × 10^161
Even though the period is very large, the random numbers lie mainly on planes (a large sequential correlation) : George Marsaglia
• If k successive random numbers at a time are used to plot points in k-dimensional space,
• the points lie on (k − 1)-dimensional planes (the number of planes is < m^{1/k})
[Scatter plots of successive random numbers plotted against each other, illustrating how the points fall on a small number of planes]
Multiply with carry

unsigned int GetUint() {
    // Marsaglia's multiply-with-carry: two 16-bit generators combined.
    // The high 16 bits of each state word carry over into the next step.
    static unsigned int m_z = 1, m_w = 2;         // seeds
    m_z = 36969 * (m_z & 65535) + (m_z >> 16);
    m_w = 18000 * (m_w & 65535) + (m_w >> 16);
    return (m_z << 16) + m_w;                     // combine into one 32-bit value
}
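To use this as a uniform deviate one can divide the 32-bit output by 2^32; this conversion is a standard addition, not part of the notes:

double GetUniform() {
    // map the 32-bit integer to a uniform deviate in [0,1); 4294967296 = 2^32
    return GetUint() / 4294967296.0;
}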
Subtractive method
• Fill a table, in a slightly random order, with numbers which are themselves only slightly random
• Randomise the table by subtracting from each entry a number which is not especially random
• To generate, take the difference of two such numbers in the table and replace one of them by another number which is not especially random
Random number generation for any arbitrary function
Generation of a uniform random number : srand(time(NULL)); float x = rand()/float(RAND_MAX);
Transformation method to generate other distributions.
Suppose one has a random number generator producing x uniformly in the range 0 ≤ x ≤ 1, i.e., the probability of generating a number between x and x + dx is f(x)dx = dx for 0 ≤ x ≤ 1 and 0 for x < 0 or x > 1.
The probability distribution function is normalised: ∫_{−∞}^{∞} f(x)dx = 1.
Now generate a random number y with a given probability density g(y), with ∫_{−∞}^{∞} g(y)dy = 1.
Transformation law of probabilities: |f(x)dx| = |g(y)dy|, or g(y) = f(x)|dx/dy|.
So for a uniform random distribution on [0,1], f(x) = 1, which implies dx = g(y)dy ⇒ x = ∫ g(y)dy + c.
Inverting this gives y = y(x).
Put simply, suppose f(x), with x ranging from 0 to 1, is mapped onto g(y), with y ranging from 0 to 2; then the cumulative integrals match, ∫_0^1 f(x)dx = ∫_0^2 g(y)dy (another example: changing the unit of the x-coordinate from cm to m rescales x by one factor and the density f(x) by the inverse factor, keeping f(x)dx invariant).
Generation of some simple distributions
• Isotropic azimuthal distribution : f(φ) = 1/(2π), so x = ∫ (1/(2π)) dφ = (1/(2π)) φ + c
At φ = 0, x = 0 ⇒ c = 0 (equivalently, at φ = 2π, x = 1 ⇒ c = 0); thus φ = 2πx
• Isotropic distribution : the surface area element is sin θ dθ dφ, so f(θ) = (1/2) sin θ
∴ x = ∫ (1/2) sin θ dθ = −(1/2) cos θ + c; at x = 0, θ = 0 ⇒ c = 1/2 (equivalently x = 1, θ = π ⇒ c = 1/2)
∴ x = (1/2)(1 − cos θ) ⇒ θ = cos^{−1}(1 − 2x)
• Exponential decay or absorption : f(t) = (1/τ) exp(−t/τ), x = ∫ (1/τ) exp(−t/τ) dt + c = −exp(−t/τ) + c
At x = 0, t = 0 ⇒ c = 1
∴ x = 1 − exp(−t/τ) ⇒ t = −τ ln(1 − x), or equivalently t = −τ ln(x), since 1 − x is also uniform in [0,1]
• Cosmic muon flux, dN/dE ∝ E^{−α} within a range E_low to E_high, where α > 1
Normalisation : N = [ (E_low^{−(α−1)} − E_high^{−(α−1)}) / (α − 1) ]^{−1}
dx = N E^{−α} dE → x = −(N/(α−1)) E^{−(α−1)} + c
At x = 0, E = E_low ⇒ c = (N/(α−1)) E_low^{−(α−1)}
Thus, x = (N/(α−1)) [ E_low^{−(α−1)} − E^{−(α−1)} ]
E = [ E_low^{−(α−1)} − x (α−1)/N ]^{−1/(α−1)}
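A minimal C++ sketch of these inverse-transform recipes (azimuth, polar angle, decay time, power-law energy), using the formulas derived above; the uniform() helper based on rand() is an assumed convenience, not part of the notes:

#include <cmath>
#include <cstdlib>

// Uniform deviate in [0,1) from the C library generator
double uniform() { return std::rand() / (RAND_MAX + 1.0); }

// phi = 2*pi*x : isotropic azimuthal angle
double samplePhi() { return 2.0 * std::acos(-1.0) * uniform(); }

// theta = acos(1 - 2x) : isotropic polar angle
double sampleTheta() { return std::acos(1.0 - 2.0 * uniform()); }

// t = -tau * ln(1 - x) : exponential decay with mean lifetime tau
double sampleDecayTime(double tau) { return -tau * std::log(1.0 - uniform()); }

// E from dN/dE ~ E^(-alpha) between Elow and Ehigh (alpha > 1), via the inversion above
double samplePowerLaw(double Elow, double Ehigh, double alpha) {
    double a  = alpha - 1.0;
    double lo = std::pow(Elow, -a), hi = std::pow(Ehigh, -a);
    double x  = uniform();
    return std::pow(lo - x * (lo - hi), -1.0 / a);
}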
• Probability distribution is continuous but cannot be integrated analytically in closed form : set F(cos θ) = ∫_{−1}^{cos θ} f(cos θ′) d(cos θ′) = x
[Plot: f(cos θ) and the cumulative F(cos θ) versus cos θ]
Integrate numerically and normalise so that F(1) = 1.
Construct a table of cos θ_n with index n (n = 0, 1, 2, ..., N), where F(cos θ_n) = n/N. The table has N + 1 entries.
Generate x and multiply x by N:
α = integer part of N x
β = fractional part of N x
cos θ(x) = cos θ_α + β (cos θ_{α+1} − cos θ_α)
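A minimal C++ sketch of this tabulated inversion; the density f used here is an arbitrary example, the fine integration grid is a coarse numerical choice, and uniform() is the same assumed helper as before:

#include <cmath>
#include <cstdlib>
#include <vector>

double uniform() { return std::rand() / (RAND_MAX + 1.0); }

// Example (un-normalised) density in cos(theta); replace with the distribution of interest.
double f(double c) { return 1.0 + c * c; }

// Build a table cosTab[n], n = 0..N, such that F(cosTab[n]) = n/N,
// where F is the numerically integrated and normalised cumulative of f.
std::vector<double> buildTable(int N, int fineBins = 200000) {
    double dc = 2.0 / fineBins;
    std::vector<double> F(fineBins + 1, 0.0);
    for (int i = 1; i <= fineBins; ++i)
        F[i] = F[i - 1] + f(-1.0 + (i - 0.5) * dc) * dc;   // cumulative on a fine grid
    for (double& v : F) v /= F[fineBins];                   // normalise so that F(+1) = 1
    std::vector<double> cosTab(N + 1);
    int i = 0;
    for (int n = 0; n <= N; ++n) {                           // record where F crosses n/N
        while (i < fineBins && F[i] < double(n) / N) ++i;
        cosTab[n] = -1.0 + i * dc;
    }
    cosTab[N] = 1.0;
    return cosTab;
}

// Sample cos(theta): alpha = integer part, beta = fractional part of N*x,
// then interpolate linearly between table entries alpha and alpha+1.
double sampleCosTheta(const std::vector<double>& cosTab) {
    int N = int(cosTab.size()) - 1;
    double xN = uniform() * N;
    int a = int(xN);
    double b = xN - a;
    return cosTab[a] + b * (cosTab[a + 1] - cosTab[a]);
}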
Normal/Gaussian distribution : G.E.P. Box and M.E. Muller, 1958
First consider the probability distribution function f_R(r) = r exp(−r²/2) with 0 < r < ∞.
Its cumulative distribution function is F_R(r) = ∫_0^r u exp(−u²/2) du = 1 − exp(−r²/2) = ξ.
If a random variable is distributed according to f_R(r), then r = [−2 ln(1 − ξ)]^{1/2} = [−2 ln ξ]^{1/2}.
Now consider the Gaussian distribution with mean zero and σ = 1: φ(y; 0, 1) = (1/√(2π)) exp(−y²/2) in the range −∞ < y < ∞.
In fact it is easier to sample two independent Gaussian random variables together: f(x, y) = φ(x; 0, 1) φ(y; 0, 1) = (1/(2π)) exp(−(x² + y²)/2).
Transforming to the independent polar coordinates (r, φ): φ(x)φ(y) dx dy = (1/(2π)) exp(−r²/2) r dr dφ = [f_φ(φ) dφ] [f_r(r) dr].
Here φ is uniformly distributed between 0 and 2π and may be sampled by φ = 2πξ₂.
The pdf of r is r exp(−r²/2), hence r = [−2 ln ξ₁]^{1/2}.
Thus, x = [−2 ln ξ₁]^{1/2} cos(2πξ₂); y = [−2 ln ξ₁]^{1/2} sin(2πξ₂).
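A minimal C++ sketch of this Box-Muller transform; the use of rand() for the two uniform deviates is an assumption for illustration:

#include <cmath>
#include <cstdlib>

// Box-Muller: turn two uniform deviates xi1, xi2 into two independent standard
// normal deviates x = sqrt(-2 ln xi1) cos(2 pi xi2), y = sqrt(-2 ln xi1) sin(2 pi xi2).
void boxMuller(double& x, double& y) {
    const double pi = std::acos(-1.0);
    double xi1 = (std::rand() + 1.0) / (RAND_MAX + 2.0);  // avoid xi1 = 0 in the log
    double xi2 = std::rand() / (RAND_MAX + 1.0);
    double r = std::sqrt(-2.0 * std::log(xi1));
    x = r * std::cos(2.0 * pi * xi2);
    y = r * std::sin(2.0 * pi * xi2);
}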
Use of the central limit theorem
The sum of a large number of random variables will itself be (approximately) a normally distributed random variable.
The variance of the mean of N uniformly distributed random numbers is σ_m² = 1/(12N), i.e., σ_m = 1/√(12N).
Hence the variable
y = (1/σ_m) ( (1/N) Σ_{i=1}^{N} x_i − 1/2 ) = √(12N) ( (1/N) Σ_{i=1}^{N} x_i − 1/2 ) = √(12/N) ( Σ_{i=1}^{N} x_i − N/2 )
will have mean 0 and σ = 1 and will be approximately normally distributed.
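A minimal C++ sketch of this approximate Gaussian generator; the default N = 12 is a convenient illustrative choice (then √(12/N) = 1), not prescribed by the notes:

#include <cmath>
#include <cstdlib>

// Approximate standard normal deviate from the sum of N uniform numbers in [-0.5, 0.5],
// scaled by sqrt(12/N) so that the result has mean 0 and sigma = 1.
double cltGaussian(int N = 12) {
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
        sum += std::rand() / (RAND_MAX + 1.0) - 0.5;
    return std::sqrt(12.0 / N) * sum;
}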
[Figure: distributions of the sum of 1-25 uniform random numbers in [−0.5, 0.5], normalised by √(12/N), compared with a true Gaussian; panels U1 ... U23 and G each list the fitted χ², ndf, µ, σ and RMS. The agreement with a unit Gaussian improves rapidly as N increases.]
Rejection and acceptance
• It is a very general technique for sampling any probability distribution
• Neither the normalisation of the density nor any numerical/analytical integration needs to be known explicitly to carry out the sampling
• But it can sometimes be very time consuming
[Sketch: an arbitrary f(x) versus x, enclosed by a constant bound P_max]
– Take two uniform random numbers, x1 in [0,1] and x2 in [0,1]
– Calculate f(x1)
– If x2 × P_max < f(x1), accept x1; else reject it
– Go on to the next event ...
[Sketch: f(x) = (1/π)/(1 + x²) on 0 ≤ x ≤ 1, with P_max = 1/π at x = 0 and value 1/(2π) at x = 1]
• Generate the distribution
f(x) = (1/π) · 1/(1 + x²),
with range 0 ≤ x ≤ 1
• Transformation method :
y = ∫_0^x (1/π) · 1/(1 + x′²) dx′ = (1/π) tan^{−1}(x) ⇒ x = tan(π y)
(over 0 ≤ x ≤ 1 the integral only reaches 1/4, so y is taken uniform in [0, 1/4])
• Rejection and acceptance method : P_max = 1/π; verify that these two methods give the same distribution
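A minimal C++ sketch of both methods for this example; the uniform() helper based on rand() is an assumption, and the restriction of y to [0, 1/4] follows from the normalisation of f on [0,1]:

#include <cmath>
#include <cstdlib>

double uniform() { return std::rand() / (RAND_MAX + 1.0); }

// Transformation method: invert the cumulative of f(x) = (1/pi)/(1+x^2) on [0,1].
// The integral over [0,1] is 1/4, so the uniform deviate is scaled to [0, 1/4].
double sampleTransform() {
    const double pi = std::acos(-1.0);
    double y = 0.25 * uniform();
    return std::tan(pi * y);
}

// Rejection method: throw x1 uniform in [0,1], accept if x2 * Pmax < f(x1), Pmax = 1/pi.
double sampleReject() {
    const double pi = std::acos(-1.0);
    while (true) {
        double x1 = uniform();
        double f  = (1.0 / pi) / (1.0 + x1 * x1);
        if (uniform() * (1.0 / pi) < f) return x1;
    }
}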
Special functions : discrete random numbers
Binomial distribution : p(r; n, p) = [n! / ((n − r)! r!)] p^r (1 − p)^{n−r}
• k = 0 and generate u, uniform in [0,1]
• P_k = (1 − p)^n, B = P_k
• If u < B, accept r = k
• Else P_k = P_k · (p/(1 − p)) · (n − k)/(k + 1), B = B + P_k, k = k + 1,
and go to the previous step (see the sketch after this list)
• Or generate the probabilities for all r (0 to n), find their maximum and
then use the acceptance-rejection method to generate events.
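A minimal C++ sketch of the sequential-search algorithm above (valid for 0 ≤ p < 1); uniform() is the same assumed helper as before:

#include <cstdlib>

double uniform() { return std::rand() / (RAND_MAX + 1.0); }

// Binomial random number by sequential search of the cumulative distribution:
// start at r = 0 with P_0 = (1-p)^n and step up until the cumulative B exceeds u.
int sampleBinomial(int n, double p) {
    double u = uniform();
    double Pk = 1.0;
    for (int i = 0; i < n; ++i) Pk *= (1.0 - p);   // P_0 = (1-p)^n
    double B = Pk;
    int k = 0;
    while (u >= B && k < n) {
        Pk *= (p / (1.0 - p)) * double(n - k) / double(k + 1);  // P_{k+1} from P_k
        B += Pk;
        ++k;
    }
    return k;
}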
Poisson distribution : p(n; λ) = e^{−λ} λ^n / n!
• k = 0, and generate u, uniform in [0,1]
• A = exp(−λ)
• If u < A, accept n = k
• Else k = k + 1, A = A + exp(−λ) λ^k / k!, and go to the previous step
• Or generate the cumulative distribution function for all values of n and
choose n if the random value u lies between F(n − 1) and F(n)
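A minimal C++ sketch of the same sequential search for the Poisson distribution; again uniform() is an assumed helper:

#include <cmath>
#include <cstdlib>

double uniform() { return std::rand() / (RAND_MAX + 1.0); }

// Poisson random number by sequential search: start at n = 0 with P_0 = exp(-lambda)
// and add successive terms exp(-lambda) lambda^k / k! until the cumulative A exceeds u.
int samplePoisson(double lambda) {
    double u  = uniform();
    double Pk = std::exp(-lambda);   // P(n = 0)
    double A  = Pk;                  // cumulative probability
    int k = 0;
    while (u >= A) {
        ++k;
        Pk *= lambda / k;            // P(n = k) from P(n = k-1)
        A  += Pk;
    }
    return k;
}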