
UNIT 13 CREDIBILITY THEORY

Structure
13.0 Objectives
13.1 Introduction
13.2 Classical Credibility
13.3 Bayesian Credibility Theory
13.4 Buhlmann Credibility
13.5 Buhlmann-Straub Credibility Model
13.6 Estimation of Credibility Formula Parameters
13.7 Comparison of Classical and Buhlmann Credibility
13.8 Maximum Aggregate Loss and General Solution
13.9 Let Us Sum Up
13.10 Key Words
13.11 Some Useful Books
13.12 Answers or Hints to Check Your Progress
13.13 Exercises

13.0 OBJECTIVES
After going through this unit, you will be able to:

appreciate the role of credibility in insurance;
predict future events or costs;
limit the random fluctuations in the actuarial business;
estimate the cost of providing future insurance coverage; and
determine the criterion for credibility.

13.1 INTRODUCTION
Credibility theory provides tools to deal with the randomness of data that are used for predicting future events or costs. For this purpose we need other information together with the recent observations. For example, suppose that recent experience indicates that skilled workers should be charged a rate of Rs.5 (per Rs.100 of payroll) for workers compensation insurance. Assume that the current rate is Rs.10. What should the new rate be? Should it be Rs.5, Rs.10 or something in between? Credibility is used to weigh together these two estimates.
The basic formula for calculating a credibility weighted estimate is:
Estimate = Z x [observation] + (1 − Z) x [other information], 0 ≤ Z ≤ 1.
Z is called the credibility assigned to the observation, whereas (1 − Z) is called the complement of credibility. In the above example, Rs.10 (the current rate) is the other information. Thus, the skilled workers rate for workers compensation insurance is Z x Rs.5 + (1 − Z) x Rs.10. How Z is calculated is shown in the next sections.
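The weighting above is easy to sketch in code. In the following minimal sketch, the function name and the sample value Z = 0.25 are illustrative assumptions, not from the text:

```python
def credibility_estimate(z, observation, other_information):
    """Credibility-weighted estimate: Z x [observation] + (1 - Z) x [other information]."""
    if not 0.0 <= z <= 1.0:
        raise ValueError("credibility Z must lie in [0, 1]")
    return z * observation + (1.0 - z) * other_information

# With an assumed Z of 0.25, the skilled workers rate blends Rs.5 and Rs.10:
new_rate = credibility_estimate(0.25, 5.0, 10.0)   # 8.75
```

As Z moves from 0 to 1, the estimate moves from the current rate of Rs.10 toward the observed Rs.5.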
The credibility Z is a function of the expected variance of the observations versus the selected variance to be allowed in the first term of the credibility formula, Z x [observation]. Buhlmann credibility is referred to as least squares credibility.
Another approach that combines current observations with prior information to produce a better estimate is Bayesian analysis. Bayes' theorem is the basis of this analysis.

13.2 CLASSICAL CREDIBILITY

In classical credibility, one determines how much data is needed before assigning 100% credibility to it. This amount of data is referred to as the full credibility criterion or the standard for full credibility. If one has this much data or more, then Z = 1.00; if one has observed less than this amount of data, then 0 ≤ Z < 1 and we call this partial credibility.
There are four basic concepts from classical credibility that we will cover:
1) How to determine the criterion for full credibility when estimating frequencies.
2) How to determine the criterion for full credibility when estimating severities.
3) How to determine the criterion for full credibility when estimating pure premiums (loss costs).
4) How to determine the amount of partial credibility to be assigned when one has less data than is needed for full credibility.
Full Credibility for Frequency
Here we mainly use the normal approximation to the Poisson distribution. The probability P that an observation X is within ±kμ of the mean μ is

P = Prob[μ − kμ ≤ X ≤ μ + kμ]

If we assume that u = (X − μ)/σ is normally distributed, then for a Poisson distribution with expected number of claims n we have μ = n and σ = √n. The probability that the observed number of claims N is within ±kn of the expected number μ = n is:

P = Prob[−k√n ≤ u ≤ k√n] = 2Φ(k√n) − 1

Example 1: If the number of claims has a Poisson distribution, compute the probability of being within ±5% of a mean of 100 claims using the normal approximation to the Poisson distribution.
Solution: k√n = (0.05)(√100) = 0.5, so P = 2Φ(0.5) − 1 ≈ 2(0.6915) − 1 = 38.3%.

To compute the number of expected claims n0 such that the chance of being within ±k of the mean is P, let y be such that Φ(y) = (1 + P)/2. Then y = k√n0, where y is determined from the normal table, which yields

n0 = (y/k)²   (13.2.4)

Example 2: For P = 95% and k = 5%, what is the number of claims required for full credibility for estimating the frequency?

Solution: y = 1.960, since Φ(1.960) = (1 + P)/2 = 97.5%. Therefore n0 = (1.960/0.05)² ≈ 1537 claims.
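Equation (13.2.4) can be evaluated directly; the sketch below (the function name is our assumption) uses Python's standard normal distribution to reproduce Example 2:

```python
from statistics import NormalDist

def full_credibility_frequency(p, k):
    """Standard for full credibility of frequency: n0 = (y / k) ** 2,
    where y satisfies Phi(y) = (1 + P) / 2."""
    y = NormalDist().inv_cdf((1.0 + p) / 2.0)
    return (y / k) ** 2

# Example 2: P = 95%, k = 5% gives about 1537 expected claims.
n0 = round(full_credibility_frequency(0.95, 0.05))
```

The same function with P = 90%, k = 5% gives the 1,082-claim standard used later in this unit.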

Full Credibility for Severity

The classical credibility ideas can also be applied to estimating the claim severity, the average size of a claim.
Suppose a sample of N claims X1, X2, X3, ..., XN are each independently drawn from a loss distribution with mean μs and variance σs². The severity, i.e., the mean of the distribution, can be estimated by (X1 + X2 + ... + XN)/N. The variance of the observed severity is σs²/N; therefore, the standard deviation is σs/√N.
The probability that the observed severity S is within ±kμs of the mean μs is

P = Prob[μs − kμs ≤ S ≤ μs + kμs]

Subtracting μs throughout, dividing by σs/√N and substituting u = (S − μs)/(σs/√N), we get

P = Prob[−k√N(μs/σs) ≤ u ≤ k√N(μs/σs)]
According to the central limit theorem, the distribution of the observed severity can be approximated by a normal distribution for large N. Now define y such that Φ(y) = (1 + P)/2. We want y = k√N(μs/σs). Solving for N:

N = (y/k)²(σs/μs)²

But σs/μs = CVs, the coefficient of variation of the claim size distribution. Letting n0 be the full credibility standard for frequency for the given P and k produces

N = n0 CVs²

This is the standard for full credibility for severity.

Example 3: The coefficient of variation of the severity is 3. For P = 95% and k = 5%, what is the number of claims required for full credibility for estimating the severity?
Solution: From Example 2, n0 = 1537. So N = 1537 x 3² = 13,833 claims.
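Building on the previous sketch, the severity standard N = n0 CVs² can be computed as follows (rounding n0 first, as the text does; the function name is our assumption):

```python
from statistics import NormalDist

def full_credibility_severity(cv, p, k):
    """Standard for full credibility of severity: N = n0 * CVs ** 2."""
    y = NormalDist().inv_cdf((1.0 + p) / 2.0)
    n0 = round((y / k) ** 2)      # frequency standard, rounded as in Example 2
    return n0 * cv ** 2

# Example 3: CVs = 3, P = 95%, k = 5% gives 1537 * 9 = 13,833 claims.
n_severity = full_credibility_severity(3, 0.95, 0.05)
```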
Full Credibility for Pure Premiums
Suppose that N claims of sizes X1, X2, ..., XN occur during the observation period. The following quantities are useful in analysing the cost of insuring a risk or group of risks.

Aggregate Losses: L = X1 + X2 + ... + XN

Pure Premium: PP = (X1 + X2 + ... + XN) / Exposures

(Exposures can be obtained by dividing the number of claims by the claim frequency.)

Loss Ratio: LR = (X1 + X2 + ... + XN) / Earned Premium

Pure Premium = Losses / Exposures
= (Number of Claims / Exposures) x (Losses / Number of Claims)
= (Frequency) x (Severity)   (13.2.8)
When frequency and severity are independent,
process variance of the pure premium
= (Mean frequency)(Variance of severity) + (Mean severity)²(Variance of frequency),
i.e., σPP² = μf σs² + μs² σf²   (13.2.9)

When frequency and severity are not independent, the process variance can be obtained using the total variance formula, i.e.,

Var(X) = E[Var(X | N)] + Var(E[X | N]),

where X is the pure premium and N the number of claims.

In (13.2.9), if we use a Poisson frequency, then σf² = μf and

σPP² = μf(σs² + μs²) = μf x (2nd moment of severity)   (13.2.10)

The subscripts indicate the means and variances of the frequency (f) and severity (s). Assuming the normal approximation, full credibility standards can be calculated in the same manner as given earlier.
The probability that the observed pure premium PP is within ±k of the mean μPP is given by

P = Prob[μPP − kμPP ≤ PP ≤ μPP + kμPP] = Prob[−k(μPP/σPP) ≤ u ≤ k(μPP/σPP)]

where u = (PP − μPP)/σPP is a unit normal variable, assuming the normal approximation.

Define y such that Φ(y) = (1 + P)/2 and y = kμPP/σPP.

Assuming that the frequency is a Poisson process and that nF is the expected number of claims required for full credibility, we get
μf = σf² = nF.
Further, suppose that frequency and severity are independent. Then
μPP = μf μs = nF μs and σPP² = μf(σs² + μs²) = nF(σs² + μs²).
Substituting for μPP and σPP and solving for nF:

nF = (y/k)²[1 + (σs/μs)²] = n0(1 + CVs²)   (13.2.12)

This is the standard for full credibility of the pure premium, where CVs = σs/μs is the coefficient of variation of the severity.

Example 4: The number of claims has a Poisson distribution. The mean of the severity distribution is 2000 and the standard deviation is 4000. For P = 90% and k = 5%, what is the standard for full credibility of the pure premium?
Solution: n0 = 1082 claims and CVs = 4000/2000 = 2. So nF = 1082(1 + 2²) = 5410 claims.

It is interesting to note that
nF = n0 + n0 CVs²
= standard for full credibility of frequency + standard for full credibility of severity.
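The pure premium standard of (13.2.12) combines the two previous sketches; the following code (function name assumed) reproduces Example 4:

```python
from statistics import NormalDist

def full_credibility_pure_premium(cv, p, k):
    """Standard for full credibility of the pure premium:
    nF = n0 * (1 + CVs ** 2), for Poisson frequency and independent severity."""
    y = NormalDist().inv_cdf((1.0 + p) / 2.0)
    n0 = round((y / k) ** 2)      # frequency standard
    return n0 * (1 + cv ** 2)

# Example 4: CVs = 4000 / 2000 = 2, P = 90%, k = 5%.
n_pp = full_credibility_pure_premium(2, 0.90, 0.05)   # 1082 * 5 = 5410
```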
Partial Credibility
When one has at least the number of claims needed for full credibility, one assigns 100% credibility to the observations. However, when there is less data than is needed for full credibility, less than 100% credibility is assigned.
Let n be the (expected) number of claims for the volume of data and nF be the standard for full credibility. Then the partial credibility assigned is Z = √(n/nF) if n < nF, and Z = 1.00 if n ≥ nF. This square root rule for partial credibility is used for frequency, severity and pure premium alike.
Example 5: The standard for full credibility is 683 claims and one has observed 300 claims. How much credibility is assigned to this data?

Solution: Z = √(300/683) = 66.3%.
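The square root rule is a one-liner; a minimal sketch (function name assumed) reproducing Example 5:

```python
import math

def partial_credibility(n, n_full):
    """Square root rule: Z = sqrt(n / nF), capped at 1.00."""
    return min(1.0, math.sqrt(n / n_full))

# Example 5: 300 observed claims against a standard of 683 claims.
z = partial_credibility(300, 683)   # about 0.663
```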

13.3 BAYESIAN CREDIBILITY THEORY

Bayesian analysis is another technique to update a prior hypothesis based on observations. This theory is based on Bayes' theorem, whose statement is given below.
If one has a set of mutually disjoint events Ai, then one can write the marginal distribution function P[B] in terms of the conditional distributions P[B | Ai] and the probabilities P[Ai]:

P[B] = Σi P[B | Ai] P[Ai]  and

P[Ai | B] = P[B | Ai] P[Ai] / P[B]

P[Ai | B] is the conditional probability of Ai given that B has already occurred.

Conditional Expectation
In order to compute the conditional expectation E[X | B], we take the weighted average over all the possibilities x:

E[X | B] = Σx x P[X = x | B]

Example 13.3.1
Let G be the result of rolling a green 6-sided die and R the result of rolling a red 6-sided die. G and R are independent of each other. Let M be the maximum of G and R. What is the expectation of the conditional distribution of M if G = 3?
Solution: M = Max(G, R). The conditional distribution of M if G = 3 is
f(3) = 3/6 (for R = 1, 2 or 3), f(4) = f(5) = f(6) = 1/6.
Thus the conditional expectation of M if G = 3 is
E[M | G = 3] = (3)(3/6) + (4)(1/6) + (5)(1/6) + (6)(1/6) = 4.
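Example 13.3.1 can be checked with exact fractions (the function name is our assumption):

```python
from fractions import Fraction

def cond_exp_max(g, sides=6):
    """E[max(g, R)] where R is a fair die with the given number of sides."""
    p = Fraction(1, sides)                       # each face equally likely
    return sum(p * max(g, r) for r in range(1, sides + 1))

expected = cond_exp_max(3)   # (3 + 3 + 3 + 4 + 5 + 6) / 6 = 4
```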

Bayesian Analysis
Take the following simple example. Assume that there are two types of risks, each with Bernoulli claim frequencies. One type of risk has a 30% chance of a claim and a 70% chance of no claims. The second type has a 50% chance of having a claim. Of the universe of risks, 3/4 are of the first type with a 30% chance of a claim, while 1/4 are of the second type with a 50% chance of having a claim.

Type of Risk    A priori probability that a risk is of this type    Chance of a claim occurring for a risk of this type
1               75%                                                 30%
2               25%                                                 50%

If a risk is chosen at random, then the chance of having a claim is
(0.75)(0.30) + (0.25)(0.50) = 35%.
Thus the chance of no claims is 65%. Assume that we pick a risk at random and observe no claim. Then what is the chance that we have risk Type 1?

P(Type = 1 | n = 0) = P(Type = 1 and n = 0) / P(n = 0).
However, P(Type = 1 and n = 0) = P(n = 0 | Type = 1) P(Type = 1) = (0.7)(0.75) = 0.525.
Therefore, P(Type = 1 | n = 0) = P(n = 0 | Type = 1) P(Type = 1) / P(n = 0) = 0.525/0.65 = 80.77%.

This is a special case of Bayes' theorem:

P(Risk Type | Observ) = P(Observ | Risk Type) x P(Risk Type) / P(Observ)
Example 13.3.2
Assume we pick a risk at random and observe no claim. Then what is the chance that we have risk Type 2?
Solution: P(Type = 2 | n = 0) = P(n = 0 | Type = 2) P(Type = 2) / P(n = 0) = (0.5)(0.25)/0.65 = 19.23%.

Posterior Estimates
When we have probabilities posterior to an observation, these can be used to estimate a claim if the same risk is observed again. For example, if there is no claim, the estimated claim frequency for the same risk is:
(Posterior prob. Type 1)(Claim freq. Type 1) + (Posterior prob. Type 2)(Claim freq. Type 2)
= (0.8077)(0.30) + (0.1923)(0.50) = 33.85%.

Thus the posterior estimate is a weighted average of the hypothetical means for the different types of risks. The posterior estimate of 33.85% lies between 30% and 50%. The result of Bayesian analysis is always within the range of the hypotheses; this is not necessarily true when applying credibility.
Example 13.3.3
If a risk is chosen at random and one claim is observed, what is the posterior estimate of the chance of a claim from this same risk?
Solution: (0.6429)(0.3) + (0.3571)(0.5) = 37.14%.

A         B                C              D                E                F
Type of   A priori chance  Chance of the  Prob. Weight =   Posterior chance Mean annual
risk      of this type     observation    product of       of this type     freq.
          of risk                         columns B & C    of risk
1         0.750            0.300          0.225            64.29%           0.30
2         0.250            0.500          0.125            35.71%           0.50
Overall                                   0.350            1.000            37.14%
P(Type = 1 | n = 1) = P(Type = 1 and n = 1) / P(n = 1) = 0.225/0.35 = 0.643.

Similarly, P(Type = 2 | n = 1) = 0.357.


Here we note that the estimate posterior to the observation of one claim is 37.14%, which is greater than the a priori estimate of 35%. Thus we infer that the future chance of a claim from this risk is higher than it was prior to the observation.
We had a 65% chance of observing no claim and a 35% chance of one claim. Weighting together the two posterior estimates: (65%)(33.85%) + (35%)(37.14%) = 35%. The weighted average of the posterior estimates is equal to the overall a priori mean. This is referred to as "the estimates being in balance". If Di are the possible outcomes, then the Bayesian estimates are E[X | Di] and

Σi P(Di) E[X | Di] = E[X] = a priori mean

The sum of the products of the a priori chance of each outcome times its posterior Bayesian estimate is equal to the a priori mean.
Multi-Sided Dice Example
Assume that there are a total of 100 multi-sided dice of which 60 are 4-sided, 30 are 6-sided and 10 are 8-sided. For a given die, each side has an equal chance of being rolled, i.e., the die is fair.
One person has picked a multi-sided die at random (you do not know which die he has picked). He then rolled the die and told you the result. You are to estimate the result when he rolls that same die again.
If the result is a 3, then the estimate of the next roll of the same die is 2.853.

A         B                C              D                   E                F
Type of   A priori chance  Chance of the  Prob. Weights =     Posterior chance Mean die
die       of this type     observation    product of columns  of this type     roll
          of die                          B & C               of die
4-sided   0.600            0.250          0.1500              70.6%            2.5
6-sided   0.300            0.167          0.0500              23.5%            3.5
8-sided   0.100            0.125          0.0125              5.9%             4.5
Overall                                   0.2125              1.000            2.853
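The table above can be reproduced in a few lines (the function name is our assumption):

```python
def posterior_mean_roll(observed):
    """Posterior-weighted mean of the next roll, given one observed roll."""
    dice = {4: 0.60, 6: 0.30, 8: 0.10}   # sides -> a priori probability
    # column D: a priori chance times the chance of the observation (1/sides)
    weights = {s: p / s if observed <= s else 0.0 for s, p in dice.items()}
    total = sum(weights.values())
    # columns E x F: posterior chance of each die times its mean roll (s + 1) / 2
    return sum((w / total) * (s + 1) / 2 for s, w in weights.items())

# A roll of 3 reproduces the table's estimate of 2.853.
estimate = posterior_mean_roll(3)
```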


Example 13.3.4
If instead a 6 is rolled, what is the estimate of the next roll of the same die?
Solution:

Type of   A priori chance  Chance of the  Prob. Weights =     Posterior chance Mean die
die       of this type     observation    product of columns  of this type     roll
          of die                          B & C               of die
4-sided   0.600            0.000          0.0000              0.0%             2.5
6-sided   0.300            0.167          0.0500              80.0%            3.5
8-sided   0.100            0.125          0.0125              20.0%            4.5
Overall                                   0.0625              1.000            3.700

Thus the estimate of the next roll of the same die is 3.7.
For this example we get the following set of estimates corresponding to each possible observation:

Observation        1      2      3      4      5    6    7    8
Bayesian estimate  2.853  2.853  2.853  2.853  3.7  3.7  4.5  4.5

13.4 BUHLMANN CREDIBILITY

Buhlmann credibility is also known as least squares credibility or greatest accuracy credibility. Credibility is given by the formula Z = N/(N + K), where N is the number of observations and K is the Buhlmann credibility parameter, which can be determined from the expected value of the process variance and the variance of the hypothetical means using analysis of variance.
Analysis of Variance
Consider the example of the multi-sided dice: there are a total of 100 multi-sided dice of which 60 are 4-sided, 30 are 6-sided and 10 are 8-sided. For a given die, each side has an equal chance of being rolled, i.e., the die is fair. One person picked a multi-sided die at random. He then rolled the die and told you the result. You are to estimate the result when he rolls the same die again.
In order to apply Buhlmann credibility to this problem, one will first have to calculate the items that would be used in "analysis of variance". One needs to compute the expected value of the process variance (EPV) and the variance of the hypothetical means (VHM), which together sum to the total variance.
Expected Value of the Process Variance (EPV)
We can compute the mean and variance for each type of die. The mean and variance for a 6-sided die are given below.
Roll of die   A priori probability   Column A x Column B   Square of Column A x Column B
1             0.16667                0.16667               0.16667
2             0.16667                0.33333               0.66667
3             0.16667                0.50000               1.50000
4             0.16667                0.66667               2.66667
5             0.16667                0.83333               4.16667
6             0.16667                1.00000               6.00000
Sum           1                      3.5                   15.16667

Thus the mean is 3.5 and the variance is 15.16667 − 3.5² = 2.91667 = 35/12. Thus the conditional variance if a 6-sided die is picked is Var[X | 6-sided] = 35/12.
Similarly, the mean and variance of a 4-sided die are 2.5 and 15/12 respectively, and the mean and variance of an 8-sided die are 4.5 and 63/12 respectively.
The expected value of the process variance is computed by weighting together the process variance for each type of risk, using as weights the chance of having each type of risk:
EPV = (0.6)(15/12) + (0.3)(35/12) + (0.1)(63/12) = 25.8/12 = 2.15.
Variance of the Hypothetical Means (VHM)

Type of die   A priori chance of this type of die   Mean for this type of die   Square of the mean for this type of die
4-sided       0.6                                   2.5                         6.25
6-sided       0.3                                   3.5                         12.25
8-sided       0.1                                   4.5                         20.25
Average                                             3                           9.45

The variance of the hypothetical means is 9.45 − 3² = 0.45.
This is the variance for a single observation, i.e., one roll of a die.
Total Variance
One can compute the total variance of a single observation if one were to repeat this experiment. In this case, there is a 60% x (1/4) = 15% chance that a 4-sided die will be picked and then a 1 is rolled. Similarly, this chance for a 6-sided die is 30% x (1/6) = 5% and for an 8-sided die it is 10% x (1/8) = 1.25%. The total chance of a 1 is 15% + 5% + 1.25% = 21.25%.

Roll of   Probability due   Probability due   Probability due   A priori probability   Column A x   (Square of A)
a die     to 4-sided die    to 6-sided die    to 8-sided die    = B + C + D            Column E     x Column E
1         0.1500            0.0500            0.0125            0.2125                 0.2125       0.2125
2         0.1500            0.0500            0.0125            0.2125                 0.4250       0.8500
3         0.1500            0.0500            0.0125            0.2125                 0.6375       1.9125
4         0.1500            0.0500            0.0125            0.2125                 0.8500       3.4000
5                           0.0500            0.0125            0.0625                 0.3125       1.5625
6                           0.0500            0.0125            0.0625                 0.3750       2.2500
7                                             0.0125            0.0125                 0.0875       0.6125
8                                             0.0125            0.0125                 0.1000       0.8000
Sum       0.6               0.3               0.1               1                      3            11.6

Total variance = 11.6 − 3² = 2.6.

Note that EPV + VHM = 2.15 + 0.45 = 2.6.
So EPV + VHM = Total variance.
EPV = Eθ[Var[X | θ]] and VHM = Varθ[E[X | θ]].

For the EPV, we first separately compute the process variance for each type of risk and then take the expected value over all types of risks. For the VHM, we first compute the expected value for each type of risk and then take the variance over all types of risks.
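The dice decomposition above can be verified directly in code (function name assumed; a fair s-sided die has mean (s + 1)/2 and variance (s² − 1)/12):

```python
def dice_variance_components():
    """EPV, VHM and their sum for the 100 multi-sided dice example."""
    dice = {4: 0.60, 6: 0.30, 8: 0.10}            # sides -> a priori probability
    mean = lambda s: (s + 1) / 2                  # mean roll of a fair s-sided die
    var = lambda s: (s * s - 1) / 12              # its process variance
    epv = sum(p * var(s) for s, p in dice.items())
    grand_mean = sum(p * mean(s) for s, p in dice.items())
    vhm = sum(p * mean(s) ** 2 for s, p in dice.items()) - grand_mean ** 2
    return epv, vhm, epv + vhm

epv, vhm, total = dice_variance_components()      # 2.15, 0.45, 2.6
```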

The following information will be used in a series of examples involving frequency, severity and pure premium.

Type   Portion of risks in this type   Bernoulli (annual) frequency distribution   Gamma severity distribution
1      50%                             p = 0.4                                     α = 4, λ = 0.01
2      30%                             p = 0.7                                     α = 3, λ = 0.01
3      20%                             p = 0.8                                     α = 2, λ = 0.01

We assume that the types are homogeneous, i.e., every insured of a given type has the same frequency and severity process. Further, assume that for an individual insured, frequency and severity are independent. We will show how to compute the expected value of the process variance and the variance of the hypothetical means in each case, i.e., frequency, severity and pure premium.
Expected Value of the Process Variance, Frequency Example
For type 1, the process variance of the Bernoulli frequency is pq = (0.4)(1 − 0.4) = 0.24. For type 2, the process variance of the frequency is (0.7)(0.3) = 0.21 and for type 3 it is (0.8)(1 − 0.8) = 0.16.
EPV = (50%)(0.24) + (30%)(0.21) + (20%)(0.16) = 0.215.
Variance of the Hypothetical Mean Frequencies
For type 1 the mean is p = 0.4, for type 2 it is 0.7 and for type 3 it is 0.8.
First moment = (50%)(0.4) + (30%)(0.7) + (20%)(0.8) = 0.57
Second moment = (50%)(0.4)² + (30%)(0.7)² + (20%)(0.8)² = 0.355
VHM = 0.355 − (0.57)² = 0.0301.
Expected Value of the Process Variance, Severity Example
In this case one has to weight together the process variances of the severities for the individual types using the chance that a claim comes from each type. The chance that a claim comes from an individual of a given type is proportional to the product of the a priori chance of an insured being of that type and the mean frequency for that type.

For type 1, the process variance of the Gamma severity is α/λ² = 4/0.01² = 40,000; for type 2 it is 3/0.01² = 30,000 and for type 3 it is 2/0.01² = 20,000. The mean frequencies are 0.4, 0.7 and 0.8, and the a priori chances of each type are 50%, 30% and 20% respectively. Thus the weights for the EPV are (0.4)(50%) = 0.20, (0.7)(30%) = 0.21 and (0.8)(20%) = 0.16. The sum of these weights is 0.20 + 0.21 + 0.16 = 0.57, so the probabilities that a claim came from each class are 0.20/0.57, 0.21/0.57 and 0.16/0.57 respectively.

EPV of the severity = [(0.20)(40,000) + (0.21)(30,000) + (0.16)(20,000)] / (0.20 + 0.21 + 0.16) = 30,702.
Variance of the Hypothetical Mean Severities

A       B            C          D         E   F      G         H
Class   A priori     Mean       Weights   Gamma      Mean      Square of
        probability  frequency  = B x C   α   λ      severity  mean severity
1       50%          0.4        0.20      4   0.01   400       160,000
2       30%          0.7        0.21      3   0.01   300       90,000
3       20%          0.8        0.16      2   0.01   200       40,000
Average                         0.57                 307.02    100,526

The mean of the Gamma severity is α/λ; for class 1 this is 4/0.01 = 400.

First moment = [(0.20)(400) + (0.21)(300) + (0.16)(200)] / (0.20 + 0.21 + 0.16) = 307.02
Second moment = [(0.20)(160,000) + (0.21)(90,000) + (0.16)(40,000)] / 0.57 = 100,526

VHM of the severity = 100,526 − 307.02² = 6,265.


Expected Value of the Process Variance, Pure Premium Example

Class     A priori     Mean       Variance of   Mean       Variance of   Process
          probability  frequency  frequency     severity   severity      variance
1         50%          0.4        0.24          400        40,000        54,400
2         30%          0.7        0.21          300        30,000        39,900
3         20%          0.8        0.16          200        20,000        22,400
Average                                                                  43,650

Since frequency and severity are independent, the process variance of the pure premium = (Mean frequency)(Variance of severity) + (Mean severity)²(Variance of frequency); for class 1 this is (0.4)(40,000) + (400)²(0.24) = 54,400.
Variance of the Hypothetical Mean Pure Premiums

Class     A priori     Mean       Mean       Mean pure   Square of the mean
          probability  frequency  severity   premium     pure premium
1         50%          0.4        400        160         25,600
2         30%          0.7        300        210         44,100
3         20%          0.8        200        160         25,600
Average                                      175         31,150

Mean pure premium = (Mean frequency)(Mean severity).

Variance of the hypothetical mean pure premiums = 31,150 − 175² = 525.
Buhlmann Credibility

Using Buhlmann credibility, the new estimate = Z(observation) + (1 − Z)(prior mean).
In the 100 multi-sided dice example, the prior mean is 3: the a priori expected value of selecting a die at random and rolling it is 0.6(2.5) + 0.3(3.5) + 0.1(4.5) = 3.00.
The Buhlmann credibility parameter is calculated as K = EPV/VHM. For the example of the 100 multi-sided dice,
K = EPV/VHM = 2.15/0.45 = 43/9 = 4.778.
For N observations, the Buhlmann credibility is Z = N/(N + K).
In this case, for one observation, Z = 1/(1 + 4.778) = 0.1731. Thus, if we observe a roll of 5, the new estimate is (0.1731)(5) + (1 − 0.1731)(3) = 3.3462. The Buhlmann credibility estimate is a linear function of the observation:
new estimate = 0.1731 x observation + 2.4808.
Note that if N = 1, then
Z = 1/(1 + K) = VHM/(VHM + EPV) = VHM/Total variance.

Observation    1       2       3    4       5       6       7       8
New estimate   2.6538  2.8269  3    3.1731  3.3462  3.5193  3.6924  3.8655

When N = 3, Z = 3/(3 + 4.778) = 0.386. As N → ∞, Z → 1, but unlike classical credibility, Buhlmann credibility never reaches 100%.
In general, one computes the EPV and VHM for a single observation and then plugs them into the formula for Buhlmann credibility with the number of observations N. If one is estimating claim frequencies or pure premiums, then N is in exposures; if one is estimating claim severities, then N is in number of claims. (N is in the units of whatever is in the denominator of the quantity one is estimating.)
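The dice estimates above can be generated with a short sketch (the function name and the default arguments, which encode this dice example, are our assumptions):

```python
def buhlmann_dice_estimate(observation, n=1, epv=2.15, vhm=0.45, prior_mean=3.0):
    """Buhlmann estimate Z * observation + (1 - Z) * prior_mean,
    with Z = N / (N + K) and K = EPV / VHM (defaults: the dice example)."""
    k = epv / vhm
    z = n / (n + k)
    return z * observation + (1 - z) * prior_mean

# One observed roll of 5 gives 3.3462, matching the table above.
estimates = [round(buhlmann_dice_estimate(r), 4) for r in range(1, 9)]
```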
Example 13.4.2
In Example 13.4.1 we assumed that the types were homogeneous, i.e., every insured of a given type has the same frequency and severity process, and that for an individual insured, frequency and severity are independent. Let us compute the Buhlmann credibility parameter in each case.
Suppose that an insured is picked at random and we do not know what type he is. For this randomly selected insured one observes 3 claims for a total of $450 during 4 years. Then one can use Buhlmann credibility to predict the future frequency, severity or pure premium of this insured.
Frequency Example

K = EPV/VHM = 0.215/0.0301 = 7.14.
Thus 4 years of experience are given a credibility of 4/(4 + K) = 4/11.14 = 35.9%.
The observed frequency is 3/4 = 0.75 and the a priori mean frequency is 0.57. Thus the estimate of the future frequency for this insured is (0.359)(0.75) + (1 − 0.359)(0.57) = 0.635.
Severity Example

K = EPV/VHM = 30,702/6,265 = 4.9.
The 3 observed claims are given a credibility of 3/(3 + K) = 38.0%. The observed severity is $450/3 = $150 and the a priori mean severity is $307.02. Thus the estimate of the future severity for this insured is (0.380)(150) + (1 − 0.380)(307.02) = $247.
Pure Premium Example

K = EPV/VHM = 43,650/525 = 83.1.
4 years of experience are given a credibility of 4/(4 + K) = 4/87.1 = 4.6%. The observed pure premium is $450/4 = $112.5 and the a priori mean pure premium is $175. Thus the estimate of the future pure premium for this insured is (0.046)(112.5) + (1 − 0.046)(175) = $172.
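The three estimates of Example 13.4.2 follow the same pattern, which a small sketch (function name assumed) makes explicit:

```python
def buhlmann_estimate(n, epv, vhm, observed, prior):
    """Z = n / (n + EPV / VHM); estimate = Z * observed + (1 - Z) * prior."""
    z = n / (n + epv / vhm)
    return z * observed + (1 - z) * prior

freq = buhlmann_estimate(4, 0.215, 0.0301, 0.75, 0.57)    # about 0.635
sev = buhlmann_estimate(3, 30702, 6265, 150.0, 307.02)    # about 247
pp = buhlmann_estimate(4, 43650, 525, 112.5, 175.0)       # about 172
```

Note that n is 4 years of exposure for frequency and pure premium, but 3 claims for severity, matching the units rule stated earlier.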
Assumptions Underlying Z = N/(N + K)
1) The complement of credibility is given to the overall mean.
2) The credibility is determined as the slope of the weighted least squares line to the Bayesian estimates.
3) The risk parameters and risk process do not shift over time.
4) The expected value of the process variance of the sum of N observations increases as N. Therefore, the expected value of the process variance of the average of N observations decreases as 1/N.
5) The variance of the hypothetical means of the sum of N observations increases as N². Therefore, the variance of the hypothetical means of the average of N observations is independent of N.

13.5 BUHLMANN-STRAUB CREDIBILITY MODEL

The Buhlmann-Straub model assumes that the means of the random variables are equal for the selected risk, but that the process variances are inversely proportional to the size (i.e., exposure) of the risk during each observation period. For example, when the risk is twice as large, the process variance is halved. These assumptions are summarised in the following table.

Assumptions of Buhlmann-Straub Credibility

Exposure                                Period 1: m1    ...    Period N: mN
Hypothetical mean for risk θ
per unit of exposure                    E[X1 | θ] = ... = E[XN | θ] = μ(θ)
Process variance for risk θ             Var[X1 | θ] = σ²(θ)/m1    ...    Var[XN | θ] = σ²(θ)/mN

Estimated pure premium
= (0.3243)(3000) + (1 − 0.3243)(2,400)
= 2,594.58.

Check Your Progress 1

1) Discuss full credibility for frequency and severity.
...........................................................................................
2)
Type of Risk    A priori chance of type of risk    Chance of a claim

C               15%                                40%

A risk is selected at random from the above information. Calculate the expected annual claim frequency from the same risk if you observe no claim in a year, using Bayesian analysis.
...........................................................................................
3) Define and explain the Buhlmann credibility parameter.
13.6 ESTIMATION OF CREDIBILITY FORMULA PARAMETERS

The selection of credibility parameters requires a balancing of responsiveness versus stability. Larger credibility weights put more weight on the observations, which means that the current data have a larger impact on the estimate. The estimates are then more responsive to current data, but this comes at the expense of less stability. Credibility parameters often are selected to reflect the actuary's desired balance between responsiveness and stability.
In classical credibility, P and k values must be chosen, where P is the probability that observation X is within ±k percent of the mean μ. Usually P = 90% and k = 5%.
Example 13.6.1
A sample of 100 claims was distributed as follows:

Size of claim   Number of claims
1,000           85
5,000           10
10,000          3
25,000          2

Estimate the coefficient of variation of the claim severity based on this distribution.

Solution: The sample mean is
(0.85)(1,000) + (0.10)(5,000) + (0.03)(10,000) + (0.02)(25,000) = 2,150.

s = [(1/(100 − 1)){85(1,000 − 2,150)² + 10(5,000 − 2,150)² + 3(10,000 − 2,150)² + 2(25,000 − 2,150)²}]^(1/2) = 3,791. We are dividing by n − 1 to calculate an unbiased estimate. Thus
CV = 3,791/2,150 = 1.76.

From the table of standards for full credibility for frequency (claims), n0 = 1,082. So the full credibility standard for the pure premium is
nF = n0(1 + CVs²) = 1,082(1 + 1.76²) = 4,434.
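The CV calculation of Example 13.6.1 can be sketched as follows (function name assumed):

```python
import math

def sample_cv(claims):
    """Coefficient of variation from (claim size, count) pairs,
    using the unbiased (n - 1) variance estimate."""
    n = sum(count for _, count in claims)
    mean = sum(size * count for size, count in claims) / n
    var = sum(count * (size - mean) ** 2 for size, count in claims) / (n - 1)
    return math.sqrt(var) / mean

cv = sample_cv([(1000, 85), (5000, 10), (10000, 3), (25000, 2)])   # about 1.76
```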
Suppose that there are M risks in a population and they are similar in size. Assume that we have tracked the annual frequency year by year for Y years for each of the risks. The frequencies are

X11   X12   ...   X1Y
X21   X22   ...   X2Y
...
XM1   XM2   ...   XMY

Xij represents the frequency of the ith risk in the jth year. A list of estimators is given below.

Mean for risk i:  X̄i = (1/Y) Σj Xij;  overall mean:  X̄ = (1/M) Σi X̄i

Process variance for risk i:  si² = (1/(Y − 1)) Σj (Xij − X̄i)²

Expected value of the process variance:  EPV = (1/M) Σi si²

Variance of the hypothetical means:  VHM = (1/(M − 1)) Σi (X̄i − X̄)² − EPV/Y

The last term adjusts for the part of the variance between the risk averages that is attributable to the process variance rather than to real differences in the hypothetical means.

Example: There are two drivers in a particular rating class. The first driver had the following sequence of claims in years 1 through 5: ..., and the second had: 1, 1, 0, 2, ... Estimate each of the values in the table above for this data.
Solution: For each driver one computes the sample mean X̄i and the sample variance si², and from these the EPV, VHM and credibility as above. The estimated future claim frequency for the first driver works out to 0.85, and the second driver's is computed in the same way.

Buhlmann Credibility: Estimating K from Best Fit to Data

A credibility formula of the form Z = N/(N + K) is usually used to weight the policyholder's experience.
One can estimate K by observing which values of K would have worked well in the past. The goal is for each policyholder to have the same expected loss ratio after the application of experience rating. Let LRi be the loss ratio for policyholder i where, in the denominator, we use the premiums after the application of experience rating. Let LRAVG be the average loss ratio for all policyholders. Then define D(K) to be

D(K) = Σi (LRi − LRAVG)²   (13.6.1)

The sum of squares of the differences is a function of K, the credibility parameter that was used in the experience rating. The goal is to find a K that minimizes D(K). This requires recomputing the premium that each policyholder would have been charged under a different value K'. This generates new LRi's that are then put into the formula above, and D(K') is computed. Using techniques from numerical analysis, a K that minimizes D(K) can be found.
Another approach to calculating credibility parameters is linear regression analysis of a policyholder's current frequency, pure premium, etc. against past values. Using historical data for many policyholders we set up the regression equation:
Observation in year Y = m[observation in year (Y − 1)] + constant   (13.6.2)
The slope m from a least squares fit to the data turns out to be the Buhlmann credibility Z. The constant term is (1 − Z)(overall average). After we have calculated the parameters in our model using historical data, we can estimate future results using the model and recent data. Regression models can also be built using multiple years.
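Equation (13.6.2) amounts to a one-variable least squares fit. A minimal sketch, with made-up policyholder data and an assumed function name:

```python
def credibility_from_regression(prev_year, curr_year):
    """Least squares slope of current-year on prior-year observations across
    policyholders; the slope m estimates Z, and the intercept estimates
    (1 - Z) times the overall average."""
    n = len(prev_year)
    mx = sum(prev_year) / n
    my = sum(curr_year) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(prev_year, curr_year))
    sxx = sum((x - mx) ** 2 for x in prev_year)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical data: each policyholder's current value equals last year's plus 1,
# so the fitted slope (credibility) is 1 and the intercept is 1.
z, c = credibility_from_regression([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```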

13.7 COMPARISON OF CLASSICAL AND BUHLMANN CREDIBILITY

Although the formulae look very different, classical credibility and Buhlmann credibility can produce similar results. The most significant difference between the two models is that Buhlmann credibility never reaches Z = 1.00, which is an asymptote of the curve. Both models can be effective at improving the stability and accuracy of estimates.
Classical credibility and Buhlmann credibility formulae will produce approximately the same credibility weights if the full credibility standard for classical credibility, n0, is about 7 to 8 times larger than the Buhlmann credibility parameter K.
For a particular application, the actuary can choose the model out of the three models (classical, Buhlmann and Bayesian) appropriate to the goals and data. If the goal is to generate the most accurate insurance rates, with least squares as the measure of fit, then Buhlmann credibility may be the best choice. Buhlmann credibility forms the basis of most experience-rating plans. It is often used to calculate class rates in classification plans. The use of Buhlmann credibility requires an estimate of the EPV and VHM.
Classical credibility might be used if estimates of the EPV and VHM are unknown or difficult to calculate. Classical credibility is often used in the calculation of overall rate increases. Bayesian analysis may be an option if the actuary has a reasonable estimate of the prior distribution. However, Bayesian analysis may be complicated to apply and is the most difficult of the methods to explain to non-actuaries.

13.8 THE MAXIMUM AGGREGATE LOSS AND THE GENERAL SOLUTION

Aggregate losses, pure premiums and loss ratios depend on both the number of claims and the size of claims; they have more reasons to vary than either frequency or severity alone. Because they are more difficult to estimate than frequencies, all other things being equal, the standard for full credibility is larger than that for frequencies. Formulae for the variance of the pure premium, as seen in section 13.2, were calculated as follows:

General case: σPP² = μf σs² + μs² σf²   (13.8.1)

Poisson frequency: σPP² = μf(σs² + μs²) = μf (2nd moment of the severity)   (13.8.2)

The subscripts indicate the means and variances of the frequency (f) and severity (s). Assuming the normal approximation, full credibility standards can be calculated following the same steps as in the previous sections.
The probability that the observed pure premium PP is within ±k of the mean μPP is

P = Prob[μPP − kμPP ≤ PP ≤ μPP + kμPP]
  = Prob[−k(μPP/σPP) ≤ u ≤ k(μPP/σPP)]

where u = (PP − μPP)/σPP is a unit normal variable, assuming the normal approximation. Consequently, we have the following results:
1) Define y such that Φ(y) = (1 + P)/2. Then, in order to have probability P that the observed pure premium will differ from the true pure premium by less than ±kμPP:

y = kμPP/σPP