Performance Indicators
This article explains and illustrates key performance indicators and critical success
factors.
Introduction
Objectives
Critical success factors
Performance indicators and key performance indicators
Performance measures – a practical framework
Use of performance indicators in the SBL and APM syllabi
Introduction
Both Strategic Business Leader (SBL) and Advanced Performance Management (APM)
require candidates to be able to establish key performance indicators and critical
success factors.
A surprising number of candidates do not feel comfortable with these terms, and this
article is aimed at explaining and illustrating these concepts. In particular, it will
explain what is meant by:
Performance
Objectives
Critical success factors
Performance indicators
Key performance indicators.
Performance
This can be defined as how well an organisation carries out its primary required tasks.
Organisations differ greatly in which aspects of their behaviour and results constitute
good performance. For example, their aim could be to make profits, to increase the
share price, to cure patients in a hospital, or to clear household rubbish. The concept
of ‘performance’ is very relevant to both SBL and APM. SBL looks at how organisations
can make decisions that improve their strategic performance and APM is focused on
how organisations evaluate their performance.
The primary required tasks are often found in the organisation’s mission statement as
it is there that the organisation’s purpose should be defined. These are called ‘primary
required tasks’ because although the primary task of a profit-seeking business is to
make profits, this rests on other subsidiary tasks such as good design, low cost per
unit, quality, flexibility, successful marketing and so on. Many of these are non-
financial achievements.
Some aspects of performance are ‘nice to have’ but others will be critical success
factors. For example, the standard of an airline’s meals and entertainment systems
will rank after punctuality, reliability and safety, all of which are likely to be critical to
the airline’s success.
Objectives
Objectives are simply targets that an organisation sets out to achieve. They are
elements of the mission that have been quantified and are the basis for deciding
appropriate performance measures and indicators. There is little point measuring
something if you do not know whether the result is satisfactory and cannot decide if
performance needs to change. Organisations will create a hierarchy of objectives
which will include corporate objectives which affect the organisation as a whole and
unit objectives which will affect individual business units within the organisation. Even
here objectives will be categorised as primary and secondary, for example an
organisation might set itself a primary objective of growth in profits but will then need
to develop strategies to ensure this primary objective is achieved. This is where
secondary objectives are needed, for example to improve product quality or to make
more efficient use of resources.
Objectives are often described as needing to be SMART (specific, measurable, achievable, relevant and time-bound). For example:
Specific: there is little point in setting an objective for a company simply to ‘improve its inventory’. What does that mean? It could mean that stock-outs should be less frequent, or average stock holdings should be lower, or that inventory will be held in better conditions to reduce wastage.
Measurable: if you can’t measure something you will be at a loss as to how to control
it. Some aspects of performance might be difficult to measure, but efforts must be
made. Customer satisfaction is important to most businesses and indications could be
obtained from customer surveys, monitoring repeat business and so on.
Relevant: relevant to the organisation and the person to whom the objectives are
given. It is important that people understand how achieving an objective will help
organisational success. If this connection isn’t clear, employees will begin to feel that
the objective is simply a cynical exercise of management power. The person to whom
the objective is given must also feel that they can affect its achievement.
Critical success factors
One definition of critical success factors (CSFs) is:
'Those product features that are particularly valued by a group of customers, and,
therefore, where the organisation must excel to outperform the competition.’
This is a relatively complex definition, but it is useful because it makes the organisation look towards its customers (or users) and recognises that their opinion of excellence is more important and reliable than internally generated opinions. If an organisation doesn’t deliver what its customers, clients, patients, citizens or students value, it is failing.
Performance indicators and key performance indicators
Performance indicators (or performance measures) are methods used to assess
performance. For example:
In profit-seeking organisations:
Profit
Earnings per share
Return on capital employed
In not-for-profit organisations, indicators are similarly tied to the organisation’s purpose, for example exam pass rates in a school or waiting times in a hospital.
Similar effects are found in not-for-profit organisations: in a school, for example, a CSF might be that a pupil leaves with good standards of literacy, but that might depend on pupil-teacher ratios, pupils’ attendance and the experience of the teachers. If these factors contribute to good performance, they need to be measured and monitored.
Just as CSFs are more important than other aspects of performance, not all
performance indicators are created equal. The performance indicators that measure
the most important aspects of performance are the key performance indicators (KPIs).
To a large extent, KPIs measure how well CSFs are achieved; other performance indicators measure how well other aspects of performance are achieved.
There are a number of potential pitfalls in the design of performance indicators and
measurement systems:
Not enough performance measures are set
Often, directors and employees will be judged on the results of performance measures.
It has been said that ‘Whatever gets measured gets done’ and employees will tend to
concentrate on achieving the required performance where it is measured. The
corollary is that ‘Whatever doesn't get measured doesn't get done’ and the danger is
that employees will ignore areas of behaviour and performance which are not
assessed.
Safety – safety checks and speed limits will take priority over punctuality
Cleanliness – it might be necessary to occasionally reduce cleaning to keep to the
timetable
Energy consumption – running a train faster than normal (though within speed limits) will cause higher fuel consumption, but punctuality takes precedence
Operations director
The duty manager at each station is responsible for logging the arrival time of each train. A five-minute margin is allowed, ie a train is logged ‘on time’ if it is no later than five minutes after the advertised time. Beyond five minutes, the actual time by which the train is late is logged. Results will be calculated in percentage bands: on time, up to 15 minutes late, >15–30 minutes late, >30 minutes to one hour late, >one hour late, and so on (a simple sketch of this banding calculation is shown after this example).
While logging late arrivals, station duty managers should also note the cause where possible. The operations director must collate this information using statistical analysis which highlights persistent problems, such as particular times of the day, routes or days of the week.
The target is dictated by the railway timetable. The timetable should be reviewed twice a year to look for ways of reducing journey times to keep TTTE competitive with improvements in competing transport.
Update of target: weekly.
The operations director will report performance on a monthly basis to the board, together with plans for service improvement.
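To illustrate the banding described above, a minimal Python sketch might group the logged arrivals and report the percentage in each band; the logged delays used here are invented for illustration and are not taken from the article.

```python
# A minimal sketch (not from the article): group logged arrival delays, in
# minutes after the advertised time, into the punctuality bands described in
# the example above and report the percentage of trains in each band.

def band(delay_minutes: float) -> str:
    """Return the punctuality band for a single logged arrival."""
    if delay_minutes <= 5:            # the five-minute margin counts as on time
        return "on time"
    if delay_minutes <= 15:
        return "up to 15 minutes late"
    if delay_minutes <= 30:
        return ">15-30 minutes late"
    if delay_minutes <= 60:
        return ">30 minutes-one hour late"
    return ">one hour late"

logged_delays = [0, 3, 7, 12, 22, 48, 75, 2, 0, 16]   # hypothetical log for one day

counts = {}
for delay in logged_delays:
    b = band(delay)
    counts[b] = counts.get(b, 0) + 1

for b, n in counts.items():
    print(f"{b}: {100 * n / len(logged_delays):.0f}%")
```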
Use of performance indicators in the SBL and APM syllabi
Many of the models and topics in the SBL and APM syllabi point towards particular CSFs and performance indicators. For example:
Mission statements: these define the important aspects of performance that sum up
the purpose of the organisation.
Generic strategies: the main generic strategies to achieve competitive advantage are
cost leadership and differentiation. If a company’s success depends on being a cost
leader (a CSF) then it must carefully monitor all its costs to achieve the leadership
position. The company will therefore make use of performance indicators relating to
cost and efficiency. If a company has chosen differentiation as its path to success, then
it must ensure that it is offering enhanced products and services and must establish
measures of these.
Value chain: a value chain sets out an organisation’s activities and enquires as to how
the organisation can make profits: where is value added? For example, value might be
added by promising fantastic quality. If so, that would be a CSF, and a key performance indicator could be the rate of occurrence of defective units.
Boston Consulting Group grid: this model uses relative market share and market growth to suggest what should be done with products or subsidiaries. In SBL, if a company identifies a product as a ‘problem child’, BCG says that the appropriate action
for the company is either to divest itself of that product or to invest to grow the
product towards a ‘star’ position on the grid. This requires money to be spent on
promotion, product enhancement, especially attractive pricing and perhaps
investment in new, efficient equipment.
PESTEL and Porter’s five forces: both the macro-environment and competitive
environment change continuously. Organisations have to keep these under review and
react to the changes so that performance is sustained or improved. For example, if
laws were introduced which stated that suppliers should be paid within a maximum of
60 days, then a performance measure will be needed to encourage and monitor the
attainment of this target.
Product life cycle: different performance measures are required at different stages of
the life cycle. In the early days of a product’s life, it is important to reach a successful
growth trajectory and to stay ahead of would-be copycats. At the maturity stage,
where there is great competition and the market is no longer growing, performance
will depend on low costs per unit and maintaining market share to enjoy economies of
scale.
Company structure: different structures inevitably affect both performance and its
management. For example, as businesses become larger many choose a divisionalised
structure to allow specialisation in different parts of the business:
manufacturing/selling, European market/Asian market/North American market, product
type A/product type B. Divisional performance measures, such as return on investment
and residual income, then become relevant.
Information technology (IT): new technologies will influence performance and could
help to more effectively measure performance. However, remember that sophisticated
new technology does not guarantee better performance as costs can easily outweigh
benefits. If IT is vital to a business, then downtime and query response time become
relevant as might a measure of system usability.
Human resource management: what type of people should be recruited, and how are
they to be motivated, appraised and rewarded to maximise the chance of good
organisational performance? Performance measures are needed, for example, to
monitor the effectiveness of training, job performance, job satisfaction, recruitment
and retention. In addition, considerable effort has to be given to considering how
employees’ remuneration should be linked to performance.
Fitzgerald and Moon building blocks
The APM syllabus mentions three specific approaches or models:
Balanced scorecard
Performance pyramid
Fitzgerald and Moon’s building blocks
The balanced scorecard approach is probably the best known, but all seek to ensure
that the net is thrown wide when designing performance measures for organisations
so that factors such as quality, innovation, flexibility, stakeholder performance, and
delivery and cycle time are listed as being important aspects of performance.
Whenever an aspect of performance is important then a performance measure should
be designed and used.
The Fitzgerald and Moon model is worth a specific mention here as it is the only model
which explicitly links performance measures to the individuals responsible for the
performance.
The model first sets out the dimensions (split into results and determinants) where key
performance indicators should be established. You will see there is a mix of financial
and non-financial, and both quantitative and qualitative:
Results
Financial performance
Competitive performance
Determinants
Quality
Flexibility
Resource utilisation
Innovation
The model then suggests standards for KPIs:
Ownership: refers to the idea that KPIs will be taken more seriously if staff have a say
in setting targets. Staff will be more committed and will better understand why that
KPI is needed.
Achievability: if KPIs are frequently and obviously not achievable then motivation is harmed. Why would staff put in extra effort to try to achieve a target (and bonus) if they believe failure is inevitable?
Fairness: everyone should be set similarly challenging objectives, and it is essential that allowance is made for uncontrollable events. Managers should not be penalised for events that are completely outside everyone’s control (for example, a natural disaster) or which are someone else’s fault.
The model then suggests how employee rewards should be set up to encourage employees to achieve the KPI targets: the reward system should be clearly understood, should motivate employees to work towards the targets, and should relate only to factors that employees can control.
It is difficult and expensive to gather, store, validate and make available the various
types of management information required for decision making. As such, it is
important for managers and providers of information support systems to determine, in
advance, what is most relevant to them.
It is necessary to identify the ‘key indicators’ that will help a manager to plan,
manage, and control an area of responsibility. This method is based on the need for
managers to focus, at any point in time, on the most significant aspects of their
responsibilities. The development of an EIS, designed to support management control,
is based on two main concepts:
The selection of a set of key indicators of the health of the functional business
area. Information will then be collected for each of these indicators.
Exception reporting – the ability to make available to a manager, as required,
information on only those indicators where performance differs significantly
from expectations.
The underlying belief is that an effective control system must be tailored to the
specific industry in which the organisation operates, and to the specific strategies that
it has adopted. It must identify the CSFs that should receive careful and continuous
management attention if the organisation is to be successful, and it must highlight
performance with respect to these key variables in reports available to all levels of
management.
The first concept is frequently approached from the viewpoint of CSFs in that a limited
number of areas are identified in which results, if they are satisfactory, will ensure
successful performance. They are the few key areas, it is believed, where ‘things must
go right’ if the organisation is to flourish. In turn, each manager must identify the key
areas that apply to them, in which results are identified as being absolutely necessary
to achieve specific goals. The goals, in turn, support overall organisational goals. The
genesis of this approach goes back to the history of warfare, where writers on battles
have identified the successful leader as the one who concentrated his forces on the
most significant areas.
Daniel’s initial thinking had been that CSFs are those that are common to
organisations operating in a particular industry. However, MIT identified five prime
sources of CSFs:
the structure of the particular industry
competitive strategy, industry position, and geographic location
environmental factors
temporary factors
functional managerial position.
In this way a major competitor’s strategy can produce a CSF for a small company. For
example, Dell’s competitive approach to the marketing of small, inexpensive
computers informs the CSF identification for all computer manufacturers. The smaller
companies must identify what they will do in response, and how they measure the
effectiveness of their response. Just as differences in industry position can dictate
CSFs, differences in geographic location (eg distribution costs) and in strategic
positioning (differentiation or focus strategies for smaller companies) can lead to
different CSFs in companies within the same industry.
Environmental factors
As the Gross National Product of an economy can fluctuate with changes in political
and demographic factors, CSFs can also change for an organisation. In the early
1970s, virtually no chief executive in the US would have listed ‘energy supply
availability’ as a CSF. However, following the organisation of OPEC and its oil embargo,
this factor is now closely monitored by most executives, because adequate availability
of energy, and its price stability, is vital to organisational planning and bottom-line
performance in manufacturing and distribution.
Temporary factors
Internal organisational considerations often lead to the monitoring of temporary CSFs.
These are areas of activity that are deemed significant to the success of the
organisation for a particular period of time because they are considered below the
threshold of acceptability, even though they may generally appear to be in good
shape and not apparently in need of special attention. For instance, an insurance
company that had just been fined by the industry regulator for mis-selling would probably generate a short-term CSF of ensuring that such mis-selling, and consequent financial penalties, would not happen again in the near future.
Managers who are either in reasonable control of day-to-day operations, or who are
insulated from such concerns, spend more time in a building or adapting mode. These
people can be classified as future-oriented planners whose primary purpose is to
implement major change programmes aimed at adapting the organisation to the
perceived emerging environment. Typical CSFs in this area might include the
successful implementation of major recruitment and training efforts, or new product or
service development programmes.
It is at this point that we should discuss the concept that organisations are ‘human
activity systems’, and that individuals within these systems bring their own ‘world
view’ to their roles – encompassing their whole belief system – based on their training
and previous experience. This world view will influence their perception of what they
consider to be important in achieving their own organisational objectives. Thus a new
incumbent to a role may identify a number of new CSFs that may augment or replace
the CSFs identified by the previous incumbent.
As a starting point, four areas for measurement should be considered when managing
for improvement: customers, response, process, and system.
Customers
What matters to customers? Can these things be measured (simply and efficiently)?
Do we have any systematic methods for understanding what matters to customers?
Do we translate what matters into measures for managing and improving
performance?
Response
Can ‘what matters to customers’ be turned into response measures? Are there other
‘end to end’ measures that will help the organisation learn about, for example,
customer acquisition and the efficiency of services delivered? What processes must be
measured end to end? Consider risk management – what events in the outside
environment do we need to watch out for? What do we need to know about competitor
activity?
Process
What measures might be useful in the processes? Some measures should be
permanent and some should be temporary. For example, ‘throughput’ might be an
important permanent measure, and ‘waste’ a useful temporary measure.
System
How should the above measures fit together to tell managers how they are
performing, and how they will perform? Are other whole system measures needed?
How well is the organisation integrated into, and monitoring, its external environment?
These two articles each provide a brief overview of a model which can assist
accountants, not only in the determination of business strategy, but also in the
appraisal of business performance. As well as looking at the theory, the articles will
also provide advice to show how the models can be examined and how to tackle those
requirements.
Porter’s Five Forces Model
The use of Porter’s five forces model (see Figure 1) will help identify the sources of
competition in an industry or sector. It looks at the attractiveness of a market, focused
on the ability to make profits from it.
The model has similarities with other tools for environmental audit, such as political,
economic, social, and technological (PEST) analysis, but should be used at the level of
the strategic business unit, rather than the organisation as a whole. A strategic
business unit (SBU) is a part of an organisation for which there is a distinct external
market for goods or services. SBUs are diverse in their operations and markets so the
impact of competitive forces may be different for each one.
Five forces analysis focuses on five key areas: the threat of new entrants, the
bargaining power of buyers, the bargaining power of suppliers, the threat of
substitutes, and competitive rivalry.
The threat of new entrants
This depends on the extent to which there are barriers to entry. These barriers must be
overcome by new entrants if they are to compete successfully. Johnson et al (2005) suggest that the existence of such barriers should be viewed as delaying entry and not
permanently stopping potential entrants. Typical barriers are detailed below:
Economies of scale exist, for example, the benefits associated with volume
manufacturing by organisations operating in the automobile and chemical
industries where high fixed costs exist. Lower unit costs result from increased
output, thereby placing potential entrants at a considerable cost disadvantage
unless they can immediately establish operations on a scale which will enable
them to derive similar economies.
Certain industries, especially those which are capital intensive and/or require
very large amounts of research and development expenditure, will deter all but
the largest of new companies from entering the market.
Supplier and customer loyalty exists. A potential entrant will find it difficult to
gain entry to an industry where there are one or more established operators
with a comprehensive knowledge of the industry, and with close links with key
suppliers and customers.
Differentiated products and services have a higher perceived value than those
offered by competitors. Products may be differentiated in terms of price, quality,
brand image, functionality, exclusivity, and so on. However, differentiation may
be eroded if competitors can imitate the product or service being offered and/or
reduce customer loyalty.
The bargaining power of buyers
The power of the buyer will be high where:
There are a few, large players in a market. For example, large supermarket
chains can apply a great deal of pressure on their potential suppliers to attempt
to get them to lower their prices. This is especially the case where there are a
large number of undifferentiated, small suppliers, such as small farming
businesses supplying fresh produce to large supermarket chains who can then
‘pick and choose’.
The cost of switching between suppliers is low, for example from one haulage
contractor to another. The service offered will have the same outcome and
unless a long-term contract has been negotiated, deliveries can be arranged on
a parcel-by-parcel basis.
The buyer’s product is not significantly affected by the quality of the supplier’s
product. For example, a manufacturer of paper towels and toilet paper will not
be affected too greatly by the quality of the spiral-wound paper tubes on which
their products are wrapped.
Buyers earn low profits so will be very keen to negotiate lower prices from their
suppliers in order to increase margins.
Buyers have the potential for backward integration, for example where the
buyer might purchase the supplier and/or set up in business and compete with
the supplier. This is a strategic option which might be selected by a buyer in
circumstances where favourable prices and quality levels cannot be obtained by
bargaining with current suppliers alone.
Buyers are well informed, for example, having full information regarding
availability of supplies and can use that knowledge in the negotiation against
the supplier.
The bargaining power of suppliers
The power of the seller will be high where (and this tends to be the reverse of the
power of buyers):
There are a large number of customers, reducing the supplier’s reliance upon any single customer and suggesting that it may not be greatly concerned about losing one of them.
The switching costs are high. For example, switching from one software supplier
to another could prove extremely costly as all equipment and processes are
specific to the supplier and all will need to change. This is on top of any costs of
designing a new system itself.
The brand is powerful/well known (Apple, Mercedes, McDonalds, Microsoft).
Where the supplier’s brand is powerful then a retailer might not be able to
operate without a particular brand in its range of products.
Competitive rivalry
The intensity of rivalry between existing competitors will be high where:
The rate of market growth is slow. The concept of the life cycle suggests that in mature markets, market share has to be achieved at the expense of competitors as there are few new customers now entering the market.
There are high exit barriers. This can lead to excess capacity as players will not be willing to leave and, consequently, increased competition from those firms effectively ‘locked in’ to a particular marketplace.
In summary, the application of Porter’s five forces model will increase management’s understanding of an industry environment which they may want to enter, or assist them in assessing a market that they are currently in.
Now that the model has been explained you need to be able to apply it in the exam.
Often candidates struggle to perform this ‘application’ effectively, either because they do not follow the precise question requirement, or because they do not use the information in the scenario effectively or even at all. So, this next section will look at a few of the ways
that this may be examined in the APM exam and provide some advice on how to tackle
answering those questions.
When conducting a five forces assessment an organisation will need to consider:
how to measure the strength of the forces and how reliable those
measurements are
how to manage the forces identified to mitigate their influence on the
organisation’s future performance, and
what performance measures are required to monitor the forces.
These factors are often the basis for questions requiring the use of this model.
Illustration:
The examples below are based on a company making semi-conductors/micro-chips
and the SBU being addressed in the question makes them for the autonomous vehicle
industry (self-driving cars), a specialised use in an already specialist industry.
EXAMPLE 1 – Using the model to perform the analysis
Required:
Using Porter’s five forces model, assess the impact of the external business
environment on the performance management of Scarlette Plc.
This is the first part of the requirement (the second part follows in the next example).
This requirement does indeed require you to perform the analysis for the SBU. This
must be done in the precise context of the scenario in the question and does not need
to be preceded with explanations of the model or its parts.
An extract from a very good answer is reproduced below to show the approach that
will score the maximum marks available for one force, threat of new entrants, in this
scenario:
Answer – Extract showing threat of new entrants only
The threat of new entrants will be dictated by barriers to entry into the specialist semi-
conductor market. These appear to be high, given the high fixed costs and the high
levels of technical expertise required to develop a viable product. There is also the need to have cultivated strong relationships with the autonomous car producers and control systems manufacturers who will be the customers for the products.
Comments: The answer begins with a recognition of the issues affecting barriers,
then moves on to identify the specifics for the industry. It justifies the identification of the barriers as being high here, doing this both for the microchip industry in general and then focusing more closely on the specific use in this SBU.
EXAMPLE 2 – Providing performance measures for the forces
…and give a justified recommendation of one new performance measure for
each of the five force areas at Scarlette.
Answer – Extract showing threat of new entrants only
A suitable performance measure would be percentage growth in revenue because as
the industry grows Scarlette may expect their revenues to grow with it, as they gain
new contracts and even new customers. Scarlette will need to compare this measure
against the growth of the industry itself and competitors to ensure that they are at
least keeping up with them.
[Other measures could include ratio of fixed cost to total cost (measures capital
required) or customer loyalty (through long-term contracts to supply semi-conductors
to manufacturers).]
Comment: As the comment at the end of the answer shows, there are many
measures which could be applied here. The key to gaining pass marks is to identify a
measure which is going to be useful for the organisation in the scenario, given its
industry and situation. This answer also clearly justifies the recommendation in this
context.
In Performance Management models – part 2, the focus will be on the Boston Consulting Group (BCG) matrix.
This article provides a brief overview of the second of two models, which can assist
accountants, not only in the determination of business strategy, but also in the
appraisal of business performance. It also looks at how to approach a particular style
of question that may appear in the APM exam.
In this part the Boston Consulting Group matrix will be reviewed. You may also wish to read part 1, which covers Porter’s Five Forces.
The Boston Consulting Group Matrix
There is a fundamental need for management to evaluate existing products and
services in terms of their market development potential, and their potential to
generate profit. The Boston Consulting Group matrix, which incorporates the concept
of the product life cycle, is a useful tool which helps management teams to assess
existing and developing products and services in terms of their market potential. More
importantly, the model can also be used to assess the strategic position of strategic
business units (SBUs), and in this respect it is particularly useful to those organisations
which operate in a number of different markets and offer a number of different
products or services.
The matrix offers an approach to product portfolio planning. It has two axes, namely
relative market share (meaning relative to the competition) and market growth.
Management must consider each product or service marketed, and then position it on
the matrix. This is done by considering the relative market share, which for the
company with the largest share (market leader) means comparing to the next biggest
player and for smaller players (market followers) it means comparing their share to
the leader. The other axis on the matrix is the market growth rate – which is either
growing quickly or the market is mature where it will grow slowly or may even have
stopped growing altogether.
Problem children
Problem children have a relatively low market share in a market that is growing
quickly, often due to the fact that these are new products/services, or that they are yet
to receive recognition by prospective purchasers. In order to realise the full potential
of problem children, management needs to develop new business prudently, and
apply sound project management principles if it is to avoid costly disasters. Gross
profit margins are likely to be high, but overheads are also high, covering the costs of
research, development, advertising, market education, and low economies of scale. As
a result, the development of problem children can be loss-making until the product
moves into the rising star category, which is by no means assured. This is evidenced
by the fact that many problem children products remain as such, while others become
tomorrow’s dogs.
Note: Problem children are also known as question marks.
Stars
Stars are products which are in the high market share and growing market quadrant.
As a product moves into this category it is commonly known as a rising star. While a
market is strong and still growing, competition is not yet fully established. Since
demand is strong, and market saturation and over-supply is not an issue, the pricing of
such products is relatively unhindered, and therefore these products generate very
good margins. At the same time, costs per unit are minimised due to high volumes
and good economies of scale. These are great products, and worthy of continuing
investment for as long as they have the potential to achieve good rates of growth. In
circumstances where this potential no longer exists, these products are likely to fall
vertically in the matrix into the cash cow quadrant (fallen stars), and their cash
generating characteristics will change. It is therefore vital that a company has rising
stars developing from its problem children in order to fill the void left by the fallen
stars.
Cash cows
A cash cow has a relatively high market share in a mature/low growth market and
should generate significant cash flows. This somewhat crude metaphor is based on the
idea of ‘milking’ the returns from a previous investment that established good
distribution and market share for the product. Activities to support products in this
quadrant should be aimed at maintaining and protecting their existing position,
together with good cost management, rather than aimed at investment for growth.
This is because there is little likelihood of additional growth being achieved.
Dogs
A dog has a relatively low market share in a mature/low growth market, might well be
loss making, and therefore have negative cash flow. A common belief is that there is
no point in developing products or services in this quadrant. Many organisations
discontinue dogs, but businesses which have been denied adequate funding for
development may find themselves with a high proportion of their products or services
in this quadrant. A dog product that forms an integral part of a portfolio may also be
retained to ensure complete coverage – eg a furniture reseller may have some dog
products but does so in order to remain a ‘one-stop-shop’ for all customer furniture
needs and not lose customers.
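Pulling the four quadrants together, the following minimal Python sketch shows how relative market share and market growth combine to classify a portfolio; the product figures and the 10% growth cut-off are assumptions for illustration only, not figures from the article.

```python
# A minimal sketch, using invented products and an assumed growth cut-off,
# of how the two BCG axes combine. Relative market share is the product's
# share divided by the largest competitor's share.

products = {
    # name: (own market share %, largest competitor's share %, market growth % per year)
    "Product A": (30, 15, 12),
    "Product B": (5, 40, 15),
    "Product C": (25, 20, 2),
    "Product D": (4, 35, 1),
}

HIGH_GROWTH = 10  # assumed cut-off (% per year) for a 'growing' market

for name, (share, rival_share, growth) in products.items():
    relative_share = share / rival_share
    if growth >= HIGH_GROWTH:
        quadrant = "star" if relative_share >= 1 else "problem child"
    else:
        quadrant = "cash cow" if relative_share >= 1 else "dog"
    print(f"{name}: relative share {relative_share:.2f}, growth {growth}% -> {quadrant}")
```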
Limitations of the Boston Consulting Group matrix
The popularity of the matrix has diminished a little as the criteria it is based on – market share and market growth – are no longer reliable predictors of long-term
success. Other models have been developed from it – with further criteria added
(these are outside the scope of APM, however). It was also very useful when
conglomerates were much more common, and these companies needed to review
their portfolios of SBUs to ensure that effort/funds are focused on the correct markets.
Management should therefore exercise a degree of caution when using the matrix.
Some of its limitations are detailed below:
The rate of market growth is just one factor in an assessment of industry
attractiveness, and relative market share is just one factor in the assessment of
competitive advantage. The matrix ignores many other factors that contribute
towards these two important determinants of profitability.
There can be practical difficulties in determining what exactly ‘high’ and ‘low’
(growth and share) can mean in a particular situation.
The focus upon high market growth can lead to the profit potential of declining
markets being ignored.
The matrix assumes that each SBU or product/service is independent. This is
not always the case, as organisations often take advantage of potential
synergies.
The use of the matrix is best suited to SBUs as opposed to products, or to broad
markets (which might comprise many market segments).
The position of dogs is frequently misunderstood, as many dogs play a vital role
in helping SBUs achieve competitive advantage. For example, dogs may be
required to complete a product range (as referred to earlier in this article) and
provide a credible presence in the market. Dogs may also be retained in order
to reduce the threat from competitors via a broad portfolio.
Notwithstanding these limitations, the Boston Consulting Group matrix provides a
useful starting point in the assessment of the performance of products and services
and, more importantly, of SBUs. However, when conducting a BCG assessment an organisation will need to consider:
how to measure each of the categories in the matrix and how reliable those
measurements are
how to manage the different categories identified to mitigate their influence on
the organisation’s future performance
what performance indicators are required as a result of the BCG categorisation, and how those indicators link into both overall organisational performance and individual performance.
Now that the model has been explained and demonstrated we will move on to look at
how it can be examined in APM. An analysis using the model may be asked for; however, often this will be done for you in the question and the requirements will focus
on how these SBUs can be managed and what performance measures may be
required. You may also be expected to evaluate the use of the BCG matrix as a
performance management system. This section of the article will provide advice about
answering several types of requirements. In the examples, only extracts from the
requirements and answers are provided, to keep the article to a sensible length.
Illustration
EXAMPLE 1 – Using the model to perform the analysis
FNI is a large, diversified entertainment business based in Zeeland. It has a main
objective of maximising shareholder wealth and is made up of four divisions:
[The scenario then lists the four divisions and their BCG classifications, eg division 3, Restaurants, classified as a dog.]
Integrated reporting
The recognition that businesses depend on different forms of capital for their success
is also an important part of the rationale for integrated reporting (IR). However, IR
also encourages a focus on business sustainability and an organisation’s long-term
success. By encouraging businesses to focus on their ability to create and sustain
value over the longer term, IR should help them take decisions which are sustainable,
and which ensure a more effective allocation of scarce resources.
Integrated Reporting is discussed in more detail in a separate article.
Sustainability and performance information
The argument that it is insufficient for businesses to consider financial information alone is not new. There are echoes here of discussions around the need for
multi-dimensional performance measurement systems (such as the balanced
scorecard (Kaplan and Norton, 1996)) – which emphasise the need for
financial and non-financial measures to be part of a business’ information systems.
Equally, one of the criticisms sometimes made of the way businesses use balanced
scorecards is that they are linked to delivering traditional economic value (eg
shareholder wealth), rather than considering the importance of corporate social
responsibility (CSR) and sustainability. As such, some commentators have suggested
the need to add social and environmental perspectives to the balanced scorecard.
However, others have argued that sustainability could be incorporated into the
existing four perspectives. The logic of the scorecard is to link a business’ objectives
and strategy to its performance measures, and the argument here is that businesses
should include sustainability goals within their strategy.
As such, when selecting goals for the perspectives, a business should consider
requirements for sustainability. For example:
Customer perspective: Have the interests of sustainability stakeholders been
taken into account eg green consumers; local communities; government
regulators?
Internal process perspective:
- Have the environmental impacts of processes (eg resource usage, waste and recycling, impact on water and air) been considered?
- Do HR processes take into account labour best practices around health and safety, diversity, equal opportunity and so on?
Learning and growth perspective:
- How are training and development programmes helping to promote sustainability values and culture?
- How are innovations leading to more efficient use of resources and the reduction of waste, or leading to the introduction of more environmentally friendly products?
More generally, regardless of the performance measurement system it uses, in order
to improve sustainability performance, a business needs to translate its overall
objectives into specific practices, linked to sustainability, in each key area of
performance. It then needs to identify specific measurement indicators, so it can
assess how well it is achieving its objectives in each key area.
Key performance indicators (KPIs)
Monitoring key performance indicators (KPIs) is recognised as a crucial part of
performance management for any business. However, many businesses don’t
measure sustainability KPIs in the way that they would financial KPIs, for example. One
of the key challenges with introducing sustainability KPIs is that the list of potential
indicators is very large, so determining which are the most important to monitor (ie
the key indicators) can be a complex task.
However, the following are some potential indicators a business could track in relation
to sustainability:
Energy
- Energy consumption
- Energy saved due to implemented improvements

Materials
- Raw material usage
- % of non-renewable materials used
- % of recycled materials used
- Product recycling rate %

Waste
- Waste generated
- Waste by type and disposal method
- Waste production rate

Social
- Number of health and safety incidents (workplace safety)
- Number of sick days (employees’ health and well-being)

Emissions
- Toxic emissions
- CO2 emissions
- Greenhouse gas emissions
- Carbon footprint
Clearly, risk permeates most aspects of corporate decision making (and life in general), and few can predict with any precision what the future holds in store.
Risk can take myriad forms – ranging from the specific risks faced by individual
companies (such as financial risk, or the risk of a strike among the workforce), through
the current risks faced by particular industry sectors (such as banking, car
manufacturing, or construction), to more general economic risks resulting from
interest rate or currency fluctuations, and, ultimately, the looming risk of recession.
Risk often has negative connotations, in terms of potential loss, but the potential for
greater than expected returns also often exists.
Clearly, risk is almost always a major variable in real-world corporate decision-making,
and managers ignore its vagaries at their peril. Similarly, trainee accountants require
an ability to identify the presence of risk and incorporate appropriate adjustments into
the problem-solving and decision-making scenarios encountered in the exam hall.
While it is unlikely that the precise probabilities and perfect information which feature
in exam questions can be transferred to real-world scenarios, a knowledge of the
relevance and applicability of such concepts is necessary.
In this first article, the concepts of risk and uncertainty will be introduced together
with the use of probabilities in calculating both expected values and measures of
dispersion. In addition, the attitude to risk of the decision maker will be examined by
considering various decision-making criteria, and the usefulness of decision trees will
also be discussed. In the second article, more advanced aspects of risk assessment
will be addressed, namely the value of additional information when making decisions,
further probability concepts, the use of data tables, and the concept of value-at-risk.
The basic definition of risk is that the final outcome of a decision, such as an
investment, may differ from that which was expected when the decision was taken.
We tend to distinguish between risk and uncertainty in terms of the availability of
probabilities. Risk is when the probabilities of the possible outcomes are known (such
as when tossing a coin or throwing a dice); uncertainty is where the randomness of
outcomes cannot be expressed in terms of specific probabilities. However, it has been
suggested that in the real world, it is generally not possible to allocate probabilities to
potential outcomes, and therefore the concept of risk is largely redundant. In the
artificial scenarios of exam questions, potential outcomes and probabilities will
generally be provided, therefore a knowledge of the basic concepts of probability and
their use will be expected.
Probability
The term ‘probability’ refers to the likelihood or chance that a certain event will occur,
with potential values ranging from 0 (the event will not occur) to 1 (the event will
definitely occur). For example, the probability of a tail occurring when tossing a coin is
0.5, and the probability when rolling a dice that it will show a four is 1/6 (0.167). The
total of all the probabilities from all the possible outcomes must equal 1, ie some
outcome must occur.
A real world example could be that of a company forecasting potential future sales
from the introduction of a new product in year one (Table 1).
From Table 1, it is clear that the most likely outcome is that the new product generates sales of $1,000,000, as that value has the highest probability.
Independent and conditional events
An independent event occurs when the outcome does not depend on the outcome of a
previous event. For example, assuming that a dice is unbiased, then the probability of
throwing a five on the second throw does not depend on the outcome of the first
throw.
In contrast, with a conditional event, the outcomes of two or more events are related,
ie the outcome of the second event depends on the outcome of the first event. For
example, in Table 1, the company is forecasting sales for the first year of the new
product. If, subsequently, the company attempted to predict the sales revenue for the
second year, then it is likely that the predictions made will depend on the outcome for
year one. If the outcome for year one was sales of $1,500,000, then the predictions for
year two are likely to be more optimistic than if the sales in year one were $500,000.
The availability of information regarding the probabilities of potential outcomes allows
the calculation of both an expected value for the outcome, and a measure of the
variability (or dispersion) of the potential outcomes around the expected value (most
typically standard deviation). This provides us with a measure of risk which can be
used to assess the likely outcome.
Expected values and dispersion
Using the information regarding the potential outcomes and their associated
probabilities, the expected value of the outcome can be calculated simply by
multiplying the value associated with each potential outcome by its probability.
Referring back to Table 1, regarding the sales forecast, then the expected value of the
sales for year one is given by:
Expected value
= ($500,000)(0.1) + ($700,000)(0.2) + ($1,000,000)(0.4) + ($1,250,000)(0.2) +
($1,500,000)(0.1)
= $50,000 + $140,000 + $400,000 + $250,000 + $150,000
= $990,000
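For readers who prefer to see the arithmetic as code, here is a short Python sketch of the same calculation, with the Table 1 figures reconstructed from the workings above:

```python
# The same expected value calculation, reconstructing the Table 1 figures
# from the workings above.

sales_forecast = {  # potential year-one sales ($): probability
    500_000: 0.1,
    700_000: 0.2,
    1_000_000: 0.4,
    1_250_000: 0.2,
    1_500_000: 0.1,
}

assert abs(sum(sales_forecast.values()) - 1.0) < 1e-9  # probabilities must total 1

expected_value = sum(outcome * p for outcome, p in sales_forecast.items())
print(f"Expected value of sales = ${expected_value:,.0f}")  # $990,000
```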
In this example, the expected value is very close to the most likely outcome, but this is
not necessarily always the case. Moreover, it is likely that the expected value does not
correspond to any of the individual potential outcomes. For example, the average
score from throwing a dice is (1 + 2 + 3 + 4 + 5 + 6) / 6 or 3.5, and the average
family (in the UK) supposedly has 2.4 children. A further point regarding the use of
expected values is that the probabilities are based upon the event occurring
repeatedly, whereas, in reality, most events only occur once.
In addition to the expected value, it is also informative to have an idea of the risk or
dispersion of the potential actual outcomes around the expected value. The most
common measure of dispersion is standard deviation (the square root of the variance),
which can be illustrated by the example given in Table 2 above, concerning the
potential returns from two investments.
To estimate the standard deviation, we must first calculate the expected values of
each investment:
Investment A
Expected value = (8%)(0.25) + (10%)(0.5) + (12%)(0.25) = 10%
Investment B
Expected value = (5%)(0.25) + (10%)(0.5) + (15%)(0.25) = 10%
The calculation of standard deviation proceeds by subtracting the expected value from
each of the potential outcomes, then squaring the result and multiplying by the
probability. The results are then totalled to yield the variance and, finally, the square
root is taken to give the standard deviation, as shown in Table 3.
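The same steps can be sketched in Python using the returns and probabilities for investments A and B given above (the Table 3 layout itself is not reproduced here):

```python
# Standard deviation of returns for investments A and B: expected value,
# probability-weighted squared deviations (the variance), then the square
# root of the variance.

from math import sqrt

investments = {
    "A": [(8, 0.25), (10, 0.5), (12, 0.25)],   # (return %, probability)
    "B": [(5, 0.25), (10, 0.5), (15, 0.25)],
}

for name, outcomes in investments.items():
    ev = sum(r * p for r, p in outcomes)
    variance = sum(p * (r - ev) ** 2 for r, p in outcomes)
    print(f"Investment {name}: expected return {ev:.0f}%, "
          f"standard deviation {sqrt(variance):.2f}%")

# Investment A: expected return 10%, standard deviation 1.41%
# Investment B: expected return 10%, standard deviation 3.54%
```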
In Table 3, although investments A and B have the same expected return, investment
B is shown to be more risky by exhibiting a higher standard deviation. More commonly,
the expected returns and standard deviations from investments and projects are both
different, but they can still be compared by using the coefficient of variation, which
combines the expected return and standard deviation into a single figure.
Coefficient of variation and standard error
The coefficient of variation is calculated simply by dividing the standard deviation by
the expected return (or mean):
Coefficient of variation = standard deviation / expected return
For example, assume that investment X has an expected return of 20% and a standard
deviation of 15%, whereas investment Y has an expected return of 25% and a
standard deviation of 20%. The coefficients of variation for the two investments will
be:
Investment X = 15% / 20% = 0.75
Investment Y = 20% / 25% = 0.80
The interpretation of these results would be that investment X is less risky, on the
basis of its lower coefficient of variation. A final statistic relating to dispersion is the
standard error, which is a measure often confused with standard deviation. Standard
deviation is a measure of variability of a sample, used as an estimate of the variability
of the population from which the sample was drawn. When we calculate the sample
mean, we are usually interested not in the mean of this particular sample, but in the
mean of the population from which the sample comes. The sample mean will vary
from sample to sample and the way this variation occurs is described by the ‘sampling
distribution’ of the mean. We can estimate how much a sample mean will vary by using the standard deviation of this sampling distribution; this is called the standard error (SE) of the estimate of the mean.
The standard error of the sample mean depends on both the standard deviation and
the sample size:
SE = SD/√(sample size)
The standard error decreases as the sample size increases, because the extent of
chance variation is reduced. However, a fourfold increase in sample size is necessary
to reduce the standard error by 50%, due to the square root of the sample size being
used. By contrast, standard deviation tends not to change as the sample size
increases.
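A small numerical illustration of this relationship, assuming a standard deviation of 15 and the sample sizes purely for the sake of example:

```python
# Standard error = standard deviation / sqrt(sample size): a fourfold
# increase in sample size halves the standard error.

from math import sqrt

sd = 15  # assumed sample standard deviation
for n in (25, 100, 400):
    print(f"n = {n:>3}: SE = {sd / sqrt(n):.2f}")

# n =  25: SE = 3.00
# n = 100: SE = 1.50
# n = 400: SE = 0.75
```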
Decision-making criteria
The decision outcome resulting from the same information may vary from manager to
manager as a result of their individual attitude to risk. We generally distinguish
between individuals who are risk averse (dislike risk) and individuals who are risk
seeking (content with risk). Similarly, the appropriate decision-making criteria used to
make decisions are often determined by the individual’s attitude to risk.
To illustrate this, we shall discuss and illustrate the following criteria:
1. Maximin
2. Maximax
3. Minimax regret
An ice cream seller, when deciding how much ice cream to order (a small, medium, or
large order), takes into consideration the weather forecast (cold, warm, or hot). There
are nine possible combinations of order size and weather, and the payoffs for each are
shown in Table 4.
The highest payoffs for each order size occur when the order size is most appropriate
for the weather, ie small order/cold weather, medium order/warm weather, large
order/hot weather. Otherwise, profits are lost from either unsold ice cream or lost
potential sales. We shall consider the decisions the ice cream seller has to make using
each of the decision criteria previously noted (note the absence of probabilities
regarding the weather outcomes).
1. Maximin
This criterion is based upon a risk-averse (cautious) approach and bases the order decision upon maximising the minimum payoff. The ice cream seller will therefore decide upon a medium order, as its lowest payoff is $200, whereas the lowest payoffs for the small and large orders are $150 and $100 respectively.
2. Maximax
This criterion is based upon a risk-seeking (optimistic) approach and bases the order decision upon maximising the maximum payoff. The ice cream seller will therefore decide upon a large order, as its highest payoff is $750, whereas the highest payoffs for the small and medium orders are $250 and $500 respectively.
3. Minimax regret
This approach attempts to minimise the regret from making the wrong decision
and is based upon first identifying the optimal decision for each of the weather
outcomes. If the weather is cold, then the small order yields the highest payoff,
and the regret from the medium and large orders is $50 and $150 respectively.
The same calculations are then performed for warm and hot weather and a
table of regrets constructed (Table 5).
The decision is then made on the basis of the lowest maximum regret, which in this case is the large order with a maximum regret of $200, as opposed to $600 and $450 for the small and medium orders.
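The three criteria can be checked with a short Python sketch based on the payoffs quoted above (reconstructed from Table 4):

```python
# Maximin, maximax and minimax regret for the ice cream seller,
# using the payoffs ($) quoted in the discussion above.

payoffs = {  # order size -> payoff by weather
    "small":  {"cold": 250, "warm": 200, "hot": 150},
    "medium": {"cold": 200, "warm": 500, "hot": 300},
    "large":  {"cold": 100, "warm": 300, "hot": 750},
}
weathers = ("cold", "warm", "hot")

maximin = max(payoffs, key=lambda order: min(payoffs[order].values()))
maximax = max(payoffs, key=lambda order: max(payoffs[order].values()))

# Regret = best payoff achievable for that weather minus the payoff achieved.
best_for_weather = {w: max(payoffs[o][w] for o in payoffs) for w in weathers}
max_regret = {o: max(best_for_weather[w] - payoffs[o][w] for w in weathers)
              for o in payoffs}
minimax_regret = min(max_regret, key=max_regret.get)

print("Maximin choice:", maximin)                # medium
print("Maximax choice:", maximax)                # large
print("Maximum regrets:", max_regret)            # {'small': 600, 'medium': 450, 'large': 200}
print("Minimax regret choice:", minimax_regret)  # large
```

Running the sketch confirms the choices discussed above: medium under maximin, and large under both maximax and minimax regret.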
Decision trees
The final topic to be discussed in this first article is the use of decision trees to
represent a decision problem. Decision trees provide an effective method of decision-
making because they:
clearly lay out the problem so that all options can be challenged
allow us to fully analyse the possible consequences of a decision
provide a framework in which to quantify the values of outcomes and the
probabilities of achieving them
help us to make the best decisions on the basis of existing information and best
guesses.
A comprehensive example of a decision tree is shown in Figures 1 to 4, where a
company is trying to decide whether to introduce a new product or consolidate
existing products. If the company decides on a new product, then it can be developed
thoroughly or rapidly. Similarly, if the consolidation decision is made then the existing
products can be strengthened or reaped. In a decision tree, each decision (new
product or consolidate) is represented by a square box, and each outcome (good,
moderate, poor market response) by circular boxes.
The first step is to simply represent the decision to be made and the potential
outcomes, without any indication of probabilities or potential payoffs, as shown
in Figure 1 below.
The next stage is to estimate the payoffs associated with each market response and
then to allocate probabilities. The payoffs and probabilities can then be added to the
decision tree, as shown in Figure 2 below.
The expected values along each branch of the decision tree are calculated by starting
at the right hand side and working back towards the left recording the relevant value
at each node of the tree. These expected values are calculated using the probabilities
and payoffs. For example, at the first node, when a new product is thoroughly
developed, the expected payoff is:
Expected payoff = (0.4)($1,000,000) + (0.4)($50,000) + (0.2)($2,000) = $420,400
The calculations are then completed at the other nodes, as shown in Figure 3 below.
We have now completed the relevant calculations at the uncertain outcome nodes.
We now need to include the relevant costs at each of the decision nodes for the two
new product development decisions and the two consolidation decisions, as shown
in Figure 4 below.
The payoff we previously calculated for ‘new product, thorough development’ was
$420,400, and we have now estimated the future cost of this approach to be
$150,000. This gives a net payoff of $270,400.
The net benefit of ‘new product, rapid development’ is $31,400. On this branch, we
therefore choose the most valuable option, ‘new product, thorough development’, and
allocate this value to the decision node.
The outcomes from the consolidation decision are $99,800 from strengthening the
products, at a cost of $30,000, and $12,800 from reaping the products without any
additional expenditure.
By applying this technique, we can see that the best option is to develop a new
product. It is worth much more to us to take our time and get the product right, than
to rush the product to market. And it’s better just to improve our existing products
than to botch a new product, even though it costs us less.
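The roll-back calculation can also be expressed in a few lines of code. The sketch below is a simplified illustration using only the figures quoted above; the net payoff of the rapid development branch ($31,400) is taken as given rather than recalculated.

# Expected value at the 'thorough development' outcome node
probs   = [0.4, 0.4, 0.2]                    # good, moderate, poor market response
payoffs = [1_000_000, 50_000, 2_000]
ev_thorough = sum(p * v for p, v in zip(probs, payoffs))   # 420,400

# Net payoff = expected value less the cost of that course of action
net_thorough = ev_thorough - 150_000         # 270,400
net_rapid = 31_400                           # given in the example

# At the decision node we take the most valuable option
print(max(net_thorough, net_rapid))          # 270,400 -> thorough development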
In the next article, we will examine the value of information in making decisions, the
use of data tables, and the concept of value-at-risk.
Written by a member of the APM examining team
In this second article on the risks of uncertainty, we build upon the basics of
risk and uncertainty addressed in the first article published in April 2009 to
examine more advanced aspects of incorporating risk into decision making
In particular, we return to the use of expected values and examine the potential
impact of the availability of additional information regarding the decision under
consideration. Initially, we examine a somewhat artificial scenario, where it is possible
to obtain perfect information regarding the future outcome of an uncertain variable
(such as the state of the economy or the weather), and calculate the potential value of
such information. Subsequently, the analysis is revisited and the more realistic case of
imperfect information is assumed, and the initial probabilities are adjusted using
Bayesian analysis.
Some decision scenarios may involve two uncertain variables, each with their own
associated probabilities. In such cases, the use of data/decision tables may prove
helpful where joint probabilities are calculated involving possible combinations of the
two uncertain variables. These joint probabilities, along with the payoffs, can then be
used to answer pertinent questions such as what is the probability of a profit/(loss)
occurring?
The other main topic covered in the article is that of Value-at-Risk (VaR), which has
been referred to as 'the new science of risk management'. The principles underlying
VaR will be discussed along with an illustration of its potential uses.
Expected values and information
To illustrate the potential value of additional information regarding the likely outcomes
resulting from a decision, we return to the example given in the first article, of the ice
cream seller who is deciding how much ice cream to order but is unsure about the
weather. We now add probabilities to the original information regarding whether the
weather will be cold, warm or hot, as shown in Table 1.
Table 1: Assigning probabilities to weather
We are now in a position to be able to calculate the expected values associated with
the three sizes of order, as follows:
Expected value (small) = 0.2 ($250) + 0.5 ($200) + 0.3 ($150) = $195
Expected value (medium) = 0.2 ($200) + 0.5 ($500) + 0.3 ($300) = $380
Expected value (large) = 0.2 ($100) + 0.5 ($300) + 0.3 ($750) = $395
On the basis of these expected values, the optimal decision would be to order a large
amount of ice cream with an expected value of $395. However, it may be possible to
improve upon this value if better information regarding the weather could be obtained.
Exam questions often make the assumption that it is possible to obtain perfect
information, ie to predict exactly what the outcome of the uncertain variable will be.
The value of perfect information
In the case of the ice cream seller, perfect information would be certainty regarding
the outcome of the weather.
If this was the case, then the ice cream seller would purchase the size of order which
gave the highest payoff for each weather outcome - in other words, purchasing a small
order if the weather was forecast to be cold, a medium order if it was forecast to be
warm, and a large order if the forecast was for hot weather. The resulting expected
value would then be:
Expected value = 0.2 ($250) + 0.5 ($500) + 0.3 ($750) = $525
The value of the perfect information is the difference between the expected values
with and without the information, ie
Value of information = $525 - $395 = $130
Exam questions are often phrased in terms of the maximum amount that the decision
maker would be prepared to pay for the information, which again is the difference
between the expected values with and without the information.
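As a rough check on these figures, a short Python sketch of the expected value calculations with and without perfect information (using the probabilities and payoffs from Table 1) might look as follows:

probs = {'cold': 0.2, 'warm': 0.5, 'hot': 0.3}
payoffs = {
    'small':  {'cold': 250, 'warm': 200, 'hot': 150},
    'medium': {'cold': 200, 'warm': 500, 'hot': 300},
    'large':  {'cold': 100, 'warm': 300, 'hot': 750},
}

# Expected value of each order size without any further information
ev = {order: sum(probs[w] * pay[w] for w in probs) for order, pay in payoffs.items()}
ev_without = max(ev.values())                              # 395 (large order)

# With perfect information we always pick the best order for the actual weather
ev_with = sum(probs[w] * max(pay[w] for pay in payoffs.values()) for w in probs)   # 525

print({o: round(v) for o, v in ev.items()})   # {'small': 195, 'medium': 380, 'large': 395}
print(ev_with - ev_without)                   # 130 = value of perfect information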
However, the concept of perfect information is somewhat artificial since, in the real
world, such perfect certainty rarely, if ever, exists. Future outcomes, irrespective of the
variable in question, are not perfectly predictable. Weather forecasts or economic
predictions may exhibit varying degrees of accuracy, which leads us to the concept of
imperfect information.
The value of imperfect information
With imperfect information we do not enjoy the benefit of perfect foresight.
Nevertheless, such information can be used to enhance the accuracy of the
probabilities of the possible outcomes and therefore has value. The ice cream seller
may examine previous weather forecasts and, on that basis, estimate probabilities of
future forecasts being accurate. For example, it could be that when hot weather is
forecast past experience has suggested the following probabilities:
P (forecast hot but weather cold) = 0.3
P (forecast hot but weather warm) = 0.4
P (forecast hot and weather hot) = 0.7
The probabilities given do not add up to 1 and so, for example, P (forecast hot but
weather cold) cannot mean P (weather cold given that forecast was hot), but must
mean P (forecast was hot given that weather turned out to be cold).
We can use a table to determine the required probabilities. Suppose that the weather
was recorded on 100 days. Using our original probabilities, we would expect 20 days to
be cold, 50 days to be warm, and 30 days to be hot. The information from our forecast
is then used to estimate the number of days that each of the outcomes is likely to
occur given the forecast (see Table 2).
Table 2: Likely weather outcomes (days)

Forecast        Cold     Warm     Hot     Total
Hot              6**      20       21      47
Other            14       30        9      53
Total            20*      50       30     100
* From past data, cold weather occurs with probability of 0.2 ie on 0.2 of the 100 days
in the sample = 20 days. Other percentages are also derived from past data.
** If the actual weather is cold, there is a 0.3 probability that hot weather had been
forecast. This will occur on 0.3 of the 20 days on which the weather was cold = 6 days
(0.3 x 20). Similarly, 20 = 0.4 x 50 and 21 = 0.7 x 30.
The revised probabilities, if the forecast is hot, are therefore:
P (Cold)=6/47=0.128
P (Warm) = 20/47 = 0.425
P (Hot) = 21/47 = 0.447
The expected values can then be recalculated as:
Expected value (small) = 0.128 ($250) + 0.425 ($200) + 0.447 ($150) = $184
Expected value (medium) = 0.128 ($200) + 0.425 ($500) + 0.447 ($300) =
$372
Expected value (large) = 0.128 ($100) + 0.425 ($300) + 0.447 ($750) = $476
Value of imperfect information = $476 - $395 = $81
The estimated value for imperfect information appears reasonable, given that the
value we had previously calculated for perfect information was $130.
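The revision of the probabilities can also be reproduced directly from the forecast reliabilities, without drawing up the 100-day table. A minimal Python sketch, using the figures above, is:

prior = {'cold': 0.2, 'warm': 0.5, 'hot': 0.3}
# P(forecast hot | actual weather), from past forecasting performance
p_forecast_hot = {'cold': 0.3, 'warm': 0.4, 'hot': 0.7}

# Joint probabilities of (forecast hot AND each weather outcome)
joint = {w: prior[w] * p_forecast_hot[w] for w in prior}       # 0.06, 0.20, 0.21
p_hot_forecast = sum(joint.values())                           # 0.47

# Revised (posterior) probabilities given a hot forecast
posterior = {w: joint[w] / p_hot_forecast for w in joint}
print({w: round(p, 3) for w, p in posterior.items()})
# cold 0.128, warm 0.426, hot 0.447 (the article rounds warm down to 0.425)

payoffs = {'small': (250, 200, 150), 'medium': (200, 500, 300), 'large': (100, 300, 750)}
for order, (c, w, h) in payoffs.items():
    ev = posterior['cold'] * c + posterior['warm'] * w + posterior['hot'] * h
    print(order, round(ev))      # small 184, medium 372, large 476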
Bayes' rule
Bayes' rule is perhaps the preferred method for estimating revised (posterior)
probabilities when imperfect information is available. An intuitive introduction to
Bayes' rule was provided in The Economist, 30 September 2000:
'The essence of the Bayesian approach is to provide a mathematical rule explaining
how you should change your existing beliefs in the light of new evidence. In other
words, it allows scientists to combine new data with their existing knowledge or
expertise. The canonical example is to imagine that a precocious newborn observes
his first sunset, and wonders whether the sun will rise again or not. He assigns equal
prior probabilities to both possible outcomes, and represents this by placing one white
and one black marble into a bag. The following day, when the sun rises, the child
places another white marble in the bag. The probability that a marble plucked
randomly from the bag will be white (ie the child's degree of belief in future sunrises)
has thus gone from a half to two-thirds. After sunrise the next day, the child adds
another white marble, and the probability (and thus the degree of belief) goes from
two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just
as likely as not to rise each morning is modified to become a near-certainty that the
sun will always rise.'
In mathematical terms, Bayes' rule can be stated as:
Posterior probability = (likelihood x prior probability) ÷ marginal likelihood
For example, consider a medical test for a particular disease which is 90% accurate, ie
if you have the disease there is a 90% probability that the test will correctly detect it,
and if you do not have the disease there is a 10% probability of a false positive result.
If we further assume that 3% of the population actually have this disease, then the
probability of having the disease (given that you have tested positive) is shown by:
P(Disease | Test +) = [P(Test + | Disease) x P(Disease)] ÷ [P(Test + | Disease) x P(Disease) + P(Test + | No Disease) x P(No Disease)]
= (0.90 x 0.03) ÷ (0.90 x 0.03 + 0.10 x 0.97)
= 0.027 ÷ (0.027 + 0.097)
= 0.218
This result suggests that you have a 22% probability of having the disease, given that
you tested positive. This may seem a low probability but only 3% of the population
have the disease and we would expect them to test positive. However, 10% of tests
will prove positive for people who do not have the disease. Therefore, if 100 people
are tested, approximately three out of the 13 positive tests will actually have the
disease.
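The medical test example can be verified with a few lines of Python (a simple sketch using the figures above):

p_disease = 0.03                  # prior: 3% of the population have the disease
p_pos_given_disease = 0.90        # probability of a positive test if you have the disease
p_pos_given_no_disease = 0.10     # probability of a false positive

# Bayes' rule: posterior = likelihood x prior / marginal likelihood
marginal = (p_pos_given_disease * p_disease
            + p_pos_given_no_disease * (1 - p_disease))        # 0.027 + 0.097 = 0.124
posterior = p_pos_given_disease * p_disease / marginal
print(round(posterior, 3))        # 0.218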
Bayes' rule has been used in a practical context for classifying email as spam on the
basis of certain key words appearing in the text.
Data tables
Data tables show the expected values resulting from combinations of uncertain
variables, along with their associated joint probabilities. These expected values and
probabilities can then be used to estimate, for example, the probability of a profit or a
loss.
To illustrate, assume that a concert promoter is trying to predict the outcome of two
uncertain variables, namely:
1. The number of people attending the concert, which could be 300, 400, or 600
with estimated probabilities of 0.4, 0.4, and 0.2 respectively.
2. From each person attending, the profit on drinks and confectionary, which could
be $2, $4, or $6 with estimated probabilities of 0.3, 0.4 and 0.3 respectively.
As each of the two uncertain variables can take three values, a 3 x 3 data table can be
constructed. We shall assume that the expected values have already been calculated
as follows:
The two tables could then be used to answer questions such as:
1. The probability of making a loss? = 0.12 + 0.12 + 0.16 = 0.40
2. The probability of making a profit of more than $3,500? = 0.08 + 0.12 + 0.06 =
0.26
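Although the promoter's payoff table is not reproduced here, the joint probabilities underlying such a data table are straightforward to generate. The Python sketch below is illustrative only; classifying each combination as a profit or a loss would additionally require the promoter's cost figures, which are not given above.

attendance = {300: 0.4, 400: 0.4, 600: 0.2}        # number attending : probability
profit_per_head = {2: 0.3, 4: 0.4, 6: 0.3}         # profit per person ($) : probability

# Joint probability of each combination of the two uncertain variables
joint = {(a, p): pa * pp
         for a, pa in attendance.items()
         for p, pp in profit_per_head.items()}

for (a, p), prob in joint.items():
    print(f'{a} people at ${p} per head: probability {prob:.2f}')

print(round(sum(joint.values()), 2))   # 1.0 - the nine joint probabilities must sum to 1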
Value-at-Risk (VaR)
Although financial risk management has been a concern of regulators and financial
executives for a long time, Value-at-Risk (VaR) did not emerge as a distinct concept
until the late 1980s. The triggering event was the stock market crash of 1987 which
was so unlikely, given standard statistical models, that it called the entire basis of
quantitative finance into question.
VaR is a widely used measure of the risk of loss on a specific portfolio of financial
assets. For a given portfolio, probability, and time horizon, VaR is defined as a
threshold value such that the probability that the mark-to-market loss on the portfolio
over the given time horizon exceeds this value (assuming normal markets and no
trading) is the given probability level. Such information can be used to answer
questions such as 'What is the maximum amount that I can expect to lose over the
next month with 95%/99% probability?'
For example, large investors, interested in the risk associated with the FT100 index,
may have gathered information regarding actual returns for the past 100 trading days.
VaR can then be calculated in three different ways:
1. The historical method
This method simply ranks the actual historical returns in order from worst to best, and
relies on the assumption that history will repeat itself. With 100 observations, the fifth-largest (or single largest) loss can then be identified as the threshold value when estimating the maximum loss with 5% (1%) probability.
2. The variance-covariance method
This relies upon the assumption that the index returns are normally distributed, and
uses historical data to estimate an expected value and a standard deviation. It is then
a straightforward task to identify the worst 5 or 1% as required, using the standard
deviation and known confidence intervals of the normal distribution - ie -1.65 and -
2.33 standard deviations respectively.
3. Monte Carlo simulation
While the historical and variance-covariance methods rely primarily upon historical
data, the simulation method develops a model for future returns based on randomly
generated trials.
Admittedly, historical data is utilised in identifying possible returns but hypothetical,
rather than actual, returns provide the data for the confidence levels.
Of these three methods, the variance-covariance is probably the easiest as the
historical method involves crunching historical data and the Monte Carlo simulation is
more complex to use.
VaR can also be adjusted for different time periods, since some users may be
concerned about daily risk whereas others may be more interested in weekly, monthly,
or even annual risk. We can rely on the idea that the standard deviation of returns
tends to increase with the square root of time to convert from one time period to
another. For example, if we wished to convert a daily standard deviation to a monthly
equivalent then the adjustment would be:
σ monthly = σ daily x √T, where T = 20 trading days
For example, assume that after applying the variance-covariance method we estimate
that the daily standard deviation of the FT100 index is 2.5%, and we wish to estimate
the maximum loss for 95 and 99% confidence intervals for daily, weekly, and monthly
periods assuming five trading days each week and four trading weeks each month:
95% confidence
Daily = -1.65 x 2.5% = -4.125%
Weekly = -1.65 x 2.5% x √5 = -9.22%
Monthly = -1.65 x 2.5% x √20 = -18.45%
99% confidence
Daily = -2.33 x 2.5% = -5.825%
Weekly = -2.33 x 2.5% x √5 = -13.03%
Monthly = -2.33 x 2.5% x √20 = -26.05%
Therefore we could say with 95% confidence that we would not lose more than 9.22%
per week, or with 99% confidence that we would not lose more than 26.05% per
month.
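These scaling calculations can be sketched in Python as follows (using the 2.5% daily standard deviation assumed above):

from math import sqrt

daily_sigma = 0.025                    # 2.5% daily standard deviation of returns
z = {'95%': 1.65, '99%': 2.33}         # normal distribution cut-off points
horizons = {'daily': 1, 'weekly': 5, 'monthly': 20}   # in trading days

for level, z_value in z.items():
    for name, days in horizons.items():
        var = z_value * daily_sigma * sqrt(days)
        print(f'{level} {name}: maximum expected loss {var:.2%}')
# eg 95% weekly -> 9.22%, 95% monthly -> 18.45%, 99% monthly -> 26.05%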
On a cautionary note, New York Times reporter Joe Nocera published an extensive
piece entitled Risk Mismanagement on 4 January 2009, discussing the role VaR played
in the ongoing financial crisis. After interviewing risk managers, the author suggests
that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by
giving false security to bank executives and regulators. A powerful tool for professional
risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when
misunderstood.
Conclusion
These two articles have provided an introduction to the topic of risk present in decision
making, and the available techniques used to attempt to make appropriate
adjustments to the information provided. Adjustments and allowances for risk also
appear elsewhere in the ACCA syllabus, such as sensitivity analysis, and risk-adjusted
discount rates in investment appraisal decisions where risk is probably at its most
obvious. Moreover, in the current economic climate, discussion of risk management,
stress testing and so on is an everyday occurrence.
Written by a member of the APM examining team
Two related articles (Data analytics – parts 1 and 2 – see 'Related links' box)
have looked at the way organisations can use data analytics to help
understand and manage performance, including the use of predictive
analytics to help improve forecasting. This article looks at some important
techniques which could be used in forecasting. You should already be
familiar with these techniques from your Performance Management (PM)
studies, and you should be prepared to apply them to the scenarios in APM
questions as necessary.
Forecasting and uncertainty
The business landscape has become increasingly unpredictable and uncertain in
recent times due to rapid changes in technology and fierce competition, as well as
major global events such as COVID-19.
This uncertainty also makes it increasingly difficult for businesses to budget and
forecast accurately. For example, think about the range of factors which could impact
a sales forecast:
Economic conditions (eg economic growth rates, inflation)
Industry conditions (eg market growth rates, competitors entering/leaving
the market, competitors’ actions)
The organisation’s products or services (eg whether any new
products/services are being launched, or new product features; where
products/services are in their life cycle, and whether sales are growing or
declining)
Policy changes (eg changes in the prices of an organisation’s
products/services; changes in terms and conditions offered to customers)
Marketing and advertising (eg increasing/decreasing advertising activities;
launching new marketing campaigns; marketing on new channels)
Legislation and regulation (eg new legislation – either affecting the
organisation’s product, or competitors’/substitute products)
A comprehensive sales forecast needs to consider all of these factors.
In a previous technical article – Data analytics, part 1 – we highlighted the potential
value of predictive analytics in helping organisations understand future patterns
and trends, which in turn should help organisations improve the accuracy of their
forecasts. Continuing the illustration of sales forecasts, using machine learning-based
analytics software which incorporates as rich a data set as possible – including details
about external events and market conditions, product life cycles and product launches,
historical growth and sales figures, customer surveys and feedback – should help an
organisation to improve the accuracy of its sales forecast.
Although the focus of Advanced Performance Management (APM) exam questions will
not be on the detailed calculations which would take place in analytics software,
accountants need the business knowledge and commercial acumen to interpret the
results of data analytics, including having an understanding of the modelling
assumptions, and what decisions can justifiably be made based on the analysis.
As such, in the APM exam, you could be expected to draw on analysis techniques to
help you understand the assumptions being used in a given scenario, and to evaluate
how realistic or plausible they are. You should already be familiar with these
techniques from your studies of Performance Management (PM) and Management
Accounting (MA), but we are going to briefly recap four techniques which you might
need to use in the context of assessing forecasts, or helping to make decisions based
on them:
Regression and correlation
Time series
Expected values
Standard deviation
For the detailed articles relating to each of these techniques, see the following links:
Regression and correlation
Time series
The CFO has asked for a forecast for the sales figures for the first quarter of 20X6.
The assistant management accountant has begun this work. Using moving averages,
they calculated the underlying trend as +$500 per quarter, but the last moving
average the assistant calculated was for 20X5 Quarter 2: $261,500.
The assistant has also calculated that the seasonal adjustments for Q1 are either -
$79,000 (using an additive model) or 0.70 (using a multiplicative model).
Forecast for 20X6 Qtr 1
The last moving average calculated was 20X5 Qtr 2, which is three periods ago.
So the underlying trend value for 20X6 Qtr 1 will be: $261,500 + (500 × 3) =
$263,000.
We then have to adjust for the seasonal variation.
Additive: 263,000 – 79,000 = $184,000
Multiplicative: 263,000 x 0.70 = $184,100
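As a quick check, the same forecast can be computed in a few lines of Python (using the assistant's trend and seasonal adjustment figures):

last_ma = 261_500        # last moving average calculated, 20X5 Q2
trend = 500              # underlying trend per quarter
periods_ahead = 3        # 20X5 Q2 to 20X6 Q1

trend_value = last_ma + trend * periods_ahead      # 263,000
print(trend_value - 79_000)    # 184,000  (additive model)
print(trend_value * 0.70)      # 184,100  (multiplicative model)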
However, as with any forecasts, care needs to be taken when using time series
analysis, because it is based on the assumption that the past is a good indicator of
what will happen in the future. In our simple example, we have assumed that the
underlying sales revenue will continue to grow by $500 per quarter. However, changes
in the external and competitive environment can create uncertainty, making forecasts
based on past observations unrealistic.
Similarly, effective forecasting relies on the ability to identify genuine patterns and
trends in the data. Analysts therefore need to be able to distinguish random
fluctuations and outliers from underlying trends or seasonal variations.
Expected values
An important aspect of predictive analytics is that it doesn’t simply forecast possible
future outcomes; it also identifies the likelihood of those events happening.
The availability of information regarding the probabilities of potential outcomes allows
the calculation of an expected value for the outcome.
The expected value indicates the expected financial outcome of a decision. It is
calculated by multiplying the value associated with each potential outcome by its
probability, and then summing the answers.
Expected values can be useful to evaluate alternative courses of action. When making
a decision that could have multiple outcomes, a business should look at the value of
each alternative and choose the one which has the most beneficial expected value (ie
the highest expected value when looking at sales or income; or the lowest expected
value when looking at costs).
WORKED EXAMPLE
Mewbix is launching a new cereal product in Deeland, a country with 10 million
households.
Mewbix has already introduced the product in some test areas across the country, and
- in conjunction with a marketing consultancy business – has been monitoring sales
and market share. This data has been supplemented by survey-based tracking of
consumer awareness, repeat purchase patterns, and customer satisfaction ratings.
Key findings from the test market and the subsequent customer research have
indicated two feasible selling prices for Mewbix: $2.50 or $3.00 per packet. The market
research has suggested that, for the coming year:
If the selling price is $2.50 per packet, 2% of the households in Deeland will buy
Mewbix. Of these, 30% are expected to purchase 1 packet per week, 45% are
expected to purchase 1 packet every 2 weeks, and 25% are expected to purchase 1
packet every 4 weeks.
If the selling price is $3.00 per packet, 1.5% of the households will buy Mewbix. Of
these, 25% are expected to purchase 1 packet per week, 50% are expected to
purchase 1 packet every 2 weeks, and 25% are expected to purchase 1 packet every 4
weeks.
Based on the findings from the test market and the subsequent customer
research, Mewbix’s CEO has asked for your advice about what price to sell
the new cereal for, and how much revenue he should forecast for it in next
year’s budget.
In order to give your advice, you need to forecast the revenue expected at each price:
The forecast data suggests that demand for the new cereal is elastic, such that using
the higher price leads to a significantly lower annual revenue. As such, the cereal
should be sold for $2.50 per packet, and sales of $15,275,000 should be budgeted for
the coming year.
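The expected annual revenue at each price can be reproduced with the following Python sketch. Note that only the $2.50 figure is quoted above; the revenue of roughly $13.2m at $3.00 is derived here from the stated assumptions.

households = 10_000_000

# price : (proportion of households buying, {packets per year : proportion of buyers})
scenarios = {
    2.50: (0.020, {52: 0.30, 26: 0.45, 13: 0.25}),
    3.00: (0.015, {52: 0.25, 26: 0.50, 13: 0.25}),
}

for price, (penetration, frequency) in scenarios.items():
    buyers = households * penetration
    expected_packets = sum(per_year * share for per_year, share in frequency.items())
    revenue = buyers * expected_packets * price
    print(f'${price:.2f}: forecast revenue ${revenue:,.0f}')
# $2.50: forecast revenue $15,275,000
# $3.00: forecast revenue $13,162,500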
However, as with any predictive models, there is no guarantee that the actual sales
will mirror the expected values.
Standard deviation
When analysing data sets, it can often be useful to calculate the average (mean)
value, to help get a representative estimate for the values in the data set. However,
looking at an average value could be misleading when the distribution of values in the
dataset is skewed, or when the distribution contains outliers.
Therefore, when looking at average values, it is also important to consider the
standard deviation in the dataset.
Standard deviation measures how clustered or dispersed a data set is in relation to its
mean.
A low standard deviation tells us that data is clustered around the mean, and therefore
the data is accurately characterised by its mean. Conversely, a high standard
deviation indicates data is more spread out, such that the mean may not accurately
represent the data set. As such, the average is a less reliable indicator of the
individual values in a data set where the standard deviation is high, compared to a
situation where the standard deviation is low.
WORKED EXAMPLE
Customers who have stayed at Hotel Vaykance are encouraged to complete a survey,
rating how much they have enjoyed their stay on a scale from 1 – 5, with 1 being ‘Not
enjoyed at all’ and 5 being ‘Enjoyed greatly’. The surveys then ask further questions,
helping management understand why customers have awarded the score they have.
The ‘Average satisfaction score’ is a key performance indicator (KPI) for the business,
and is reported in the monthly management information. The KPI reported in last
month’s management information was 2.84.
The standard deviation was 1.9, but the standard deviation figure isn’t currently
included in the management information. However, the CFO has asked for standard
deviation to be included going forwards.
The results from the last month’s customer satisfaction surveys are summarised in the
graph below.
The CFO has asked you to explain the significance of standard deviation
when assessing the results.
The average satisfaction score (2.84) suggests that customers are reasonably well
satisfied with their stay. However, this does not accurately reflect the population,
which was polarised between guests who either enjoyed their stay very much (41% at
‘5’) or not at all (46% at ‘1’). The graph showing the results from the customer
satisfaction survey illustrates this polarisation very clearly.
The standard deviation also highlights this polarisation. A standard deviation of +/- 1.9
compared to an average of 2.84 is very high.
In a scenario like this, where scores were only given between 1 and 5, the highest
standard deviation possible would be 2 (ie if 50% of respondents had given a customer
satisfaction rating of 5, and 50% had given a rating of 1, the average would be 3, but
the standard deviation would be 2). The actual standard deviation of 1.9 is very close
to this theoretical maximum, meaning it is very high.
The high standard deviation implies that a large proportion of the dataset is far away
from the mean, and therefore it is risky to draw conclusions using the mean.
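The theoretical maximum mentioned above is easy to verify. A small Python sketch, using the extreme 50/50 split of ratings between 1 and 5, gives:

from statistics import mean, pstdev

# Extreme case: half the guests rate their stay 1, half rate it 5
ratings = [1] * 50 + [5] * 50
print(mean(ratings))     # 3
print(pstdev(ratings))   # 2.0 - the highest possible standard deviation on a 1-5 scale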
Written by a member of the APM examining team
Being able to understand the relationship between different factors is very important
for organisations. For example, it would be useful to understand the relationship
between advertising spend and sales generated from that advertising spend or
between the production level and the total production costs. Understanding these
relationships allows organisations to make better predictions of what sales or costs will
be in the future. This will be invaluable when budgeting or forecasting.
This article will look at how the relationships between variables can be analysed using
the ‘line of best fit’ method and regression analysis, and how the strength of these
relationships can be measured using correlation.
Relationship between variables
In any relationship between two variables there is an independent variable and a
dependent variable: the size of the movements in the dependent variable depends
on the size of the movements in the independent variable. For example, the total cost
of a production process would be dependent on the level of activity.
Consider the following data produced by a company over the last two years.
Quarter     Activity level – units (000s)     Total production cost ($000)
20X1 Q1     15                                 300
20X1 Q2     45                                 615
20X1 Q3     25                                 470
20X1 Q4     55                                 680
20X2 Q1     30                                 520
20X2 Q2     20                                 350
20X2 Q3     35                                 590
20X2 Q4     60                                 740
The company wants to understand the relationship between the activity level and total
production cost so that it can forecast total production costs going forward.
Line of best fit
One method of understanding the relationship between the variables is the line of best
fit method. All the data given is plotted on a chart. The activity level is the
independent variable (as described above) and it is shown on the x (horizontal) axis.
The total production cost is the dependent variable and it is shown on the y (vertical)
axis.
Once all the data is plotted on the graph, a line of best fit can be drawn:
In this case some of the points are on the line and some are above and below, but
most are close to the line which suggests that there is a relationship between activity
level and the total production cost.
This ‘line of best fit’ can be used to predict what will happen at other levels of
production. For levels of production which don’t fall within the range of the previous
levels, it is possible to extrapolate the ‘line of best fit’ to forecast other levels by
reading the value from the chart.
This is a straightforward technique, but it has some limitations. The main one is that
the ‘line of best fit’ is estimated from the data points plotted, and different lines
may be drawn from the same set of data points. A method which can overcome this
weakness is regression analysis.
Regression analysis
Regression analysis also uses the historic data and finds a line of best fit, but does so
statistically, making the resulting line more reliable.
We assume a linear (straight line) relationship between the variables and that the
equation of a straight line is:
y = a + bx
where:
a is the fixed element (where the line crosses the y axis)
b is the variable element (gradient of the line) and
x and y relate to the x and y variables.
a and b are calculated using the following formulae:

b = (n∑xy – ∑x∑y) ÷ (n∑x² – (∑x)²)

a = (∑y – b∑x) ÷ n
Quarter      Units (000s) x    Total cost ($000) y    xy         x²        y²
20X1 Q1      15                300                    4,500      225       90,000
20X1 Q2      45                615                    27,675     2,025     378,225
20X1 Q3      25                470                    11,750     625       220,900
20X1 Q4      55                680                    37,400     3,025     462,400
20X2 Q1      30                520                    15,600     900       270,400
20X2 Q2      20                350                    7,000      400       122,500
20X2 Q3      35                590                    20,650     1,225     348,100
20X2 Q4      60                740                    44,400     3,600     547,600
Totals (∑)   285               4,265                  168,975    12,025    2,440,125
The equation of the regression line (in the form y = a + bx) becomes:
y = 208.90 + 9.1x
Using this equation, it is easy to forecast total costs at different levels of production,
for example for a production level of 80,000 units, the estimate of total cost will be:
208.90 + (9.1 x 80) = 936.90, or $936,900.
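The same coefficients can be obtained with a short Python sketch, using the eight quarterly observations above:

x = [15, 45, 25, 55, 30, 20, 35, 60]          # units (000s)
y = [300, 615, 470, 680, 520, 350, 590, 740]  # total cost ($000)
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / n
print(round(a, 1), round(b, 1))     # 208.9 and 9.1
print(round(a + b * 80, 1))         # 936.9 - forecast total cost ($000) for 80,000 units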
How reliable this estimate is will depend on the strength of the relationship between
the two variables; how much of the change in y can be explained by the change in x?
The stronger the relationship between the variables, the more reliance can be placed
on the equation calculated and the better the forecasts will be.
A measure of the strength of the relationship between the variables is correlation.
Correlation
Two variables are said to be correlated if they are related to one another and if
changes in one tend to accompany changes in the other. Correlation can be positive
(where increases in one variable result in increases in the other) or negative (where
increases in one variable result in decreases in the other).
The chart shown in the ‘line of best fit’ section above shows a strong positive
correlation. Some other relationships are shown below:
It is possible that there is no correlation between the variables. A horizontal line would
suggest no correlation, as would the following:
Where a company wants to use past data to forecast the future, the stronger the
correlation, the better the estimates will be.
The strength of correlation between variables can be measured by the correlation
coefficient, r, which can be calculated using the following formula:

r = (n∑xy – ∑x∑y) ÷ √[(n∑x² – (∑x)²)(n∑y² – (∑y)²)]
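A minimal sketch of the calculation, continuing the same data set, is shown below. The resulting value of approximately 0.97 is calculated here rather than quoted in the original example, and confirms the strong positive correlation seen on the chart.

from math import sqrt

x = [15, 45, 25, 55, 30, 20, 35, 60]
y = [300, 615, 470, 680, 520, 350, 590, 740]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)
sum_y2 = sum(yi ** 2 for yi in y)

r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
print(round(r, 2))   # approximately 0.97 - a strong positive correlation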
Time series analysis
Time series analysis can be used to analyse historic data and establish any underlying
trend and seasonal variations within the data. The trend refers to the general direction
the data is heading in and can be upward or downward. The seasonal variation refers
to the regular variations which exist within the data. This could be a weekly variation
with certain days traditionally experiencing higher or lower sales than other days, or it
could be monthly or quarterly variations.
The trend and seasonal variations can be used to help make predictions about the
future – and as such can be very useful when budgeting and forecasting.
Calculating moving averages
One method of establishing the underlying trend (smoothing out peaks and troughs) in
a set of data is using the moving averages technique. Other methods, such as
regression analysis can also be used to estimate the trend. Regression analysis is dealt
with in a separate article.
A moving average is a series of averages, calculated from historic data. Moving
averages can be calculated for any number of time periods, for example a three-
month moving average, a seven-day moving average, or a four-quarter moving
average. The basic calculations are the same.
The following simplified example will take us through the calculation process.
Monthly sales revenue data were collected for a company for 20X2:
Month          Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
Sales ($000)   125   145   186   131   151   192   137   157   198   143   163   204
From this data, we will calculate a three-month moving average, as we can see a
basic cycle that follows a three-monthly pattern (increases January – March, drops for
April then increases April – June, drops for July and so on). In an exam, the question
will state what time period to use for this cycle/pattern in order to calculate the
averages required.
Step 1 – Create a table
Create a table with 5 columns, shown below, and list the data items given in columns
one and two. The first three rows from the data given above have been input in the
table:
Step 2 – Calculate the three-month moving averages
The average needs to be calculated for each three-month period. The first average
covers January, February and March: (125 + 145 + 186) ÷ 3 = 152. To calculate the
next average you move down one month, so the next calculation will involve
February, March and April. The total for these three months would be (145+186+131)
= 462 and the average would be (462 ÷ 3) = 154.
Continue working down the data until you no longer have three items to add together.
Note: you will have fewer averages than the original observations as you will lose the
beginning and end observations in the averaging process.
Step 3 – Calculate the trend
The three-month moving average represents the trend. From our example we can see
a clear trend in that each moving average is $2,000 higher than the preceding month's
moving average. This suggests that the sales revenue for the company is, on average,
growing at a rate of $2,000 per month.
This trend can now be used to predict future underlying sales values.
Step 4 – Calculate the seasonal variation
Once a trend has been established, any seasonal variation can be calculated. The
seasonal variation can be assumed to be the difference between the actual sales and
the trend (three-month moving average) value. Seasonal variations can be calculated
using the additive or multiplicative models.
Using the additive model:
To calculate the seasonal variation, go back to the table and for each average
calculated, compare the average to the actual sales figure for that period.
A negative variation means that the actual figure in that period is less than the trend
and a positive figure means that the actual is more than the trend.
From the data we can see a clear three-month cycle in the seasonal variation. Every
first month has a variation of -7, suggesting that this month is usually $7,000 below
the average. Every second month has a variation of 32 suggesting that this month is
usually $32,000 above the average. In month 3, the variation suggests that every third
month, the actual will be $25,000 below the average.
It is assumed that this pattern of seasonal adjustment will be repeated for each three-
month period going forward.
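The whole of steps 2 to 4 can be sketched in Python, using the monthly data above and the additive model:

sales = [125, 145, 186, 131, 151, 192, 137, 157, 198, 143, 163, 204]  # $000, Jan-Dec

# Step 2: three-month moving averages (the trend), shown against the middle month
trend = [sum(sales[i - 1:i + 2]) / 3 for i in range(1, len(sales) - 1)]
print(trend)          # 152, 154, 156, ... 170 - rising by 2 ($2,000) each month

# Step 4: additive seasonal variation = actual minus trend for each middle month
variation = [sales[i] - trend[i - 1] for i in range(1, len(sales) - 1)]
print(variation)      # repeating pattern of -7, +32, -25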
Using the multiplicative model:
If we had used the multiplicative model, the variations would have been expressed as
a percentage of the average figure, rather than an absolute. For example:
This suggests that month 1 is usually 95% of the trend, month 2 is 121% and month 3
is 84%. The multiplicative model is a better method to use when the trend is
increasing or decreasing over time, as the seasonal variation is also likely to be
increasing or decreasing.
Note that with the additive model the three seasonal variations must add up to zero
(32-25-7 = 0). Where this is not the case, an adjustment must be made. With the
multiplicative model the three seasonal variations add to three (0.95 + 1.21 + 0.84 =
3). (If it were a four-month average, the four seasonal variations would add to four, etc.)
Again, if this is not the case, an adjustment must be made.
In this simplified example the trend shows an increase of exactly $2,000 each month,
and the pattern of seasonal variations is exactly the same in each three-month period.
In reality a time series is unlikely to give such a perfect result.
Step 5 – Using time series to forecast the future
Now that the trend and the seasonal variations have been calculated, these can be
used to predict the likely level of sales revenue for the future.
Question:
Using the above example, what is the predicted level of sales revenue for
June 20X3 and July 20X3?
Solution:
Start with the trend then apply the seasonal variations. We calculated an increasing
trend of $2,000 per month. The last figure we calculated was for November 20X2
showing $170,000. If we assume the trend continues as it has done previously, then
by June 20X3, the sales revenue figure will have increased by $14,000 ($2,000 per
month for seven months). Adding this to the figure we have for November, we can
predict the underlying trend value for June 20X3 to be $184,000. ($14,000 +
$170,000).
We know that sales exhibit a seasonal variation. Taking account of the seasonal
variation will give us a better estimate for June 20X3. From the table in step 4, we can
see that June has a positive variation of $32,000.
Our estimate for the sales revenue for June 20X3 is therefore $184,000 + $32,000 =
$216,000.
For July, the underlying trend value will be $170,000 + $16,000 = $186,000. The
seasonal variation for July 20X3 is a negative variation of $25,000, therefore our
estimate for the sales revenue for July 20X3 is $186,000 - $25,000 = $161,000.
Calculating moving averages for an even number of periods
In the above example, we used a three-month moving average. Looking back at step
2, we can see that the average is shown against the mid-point of the three
observations. The mid-point of the period for January, February and March is shown
against the February observation.
When we are calculating a moving average with an even number of periods, for
example a four-quarter moving average, we do the same basic calculation, but the
mid-point will lie between observations. From step 4 above, we can see that we need
the moving average to be shown against an observation so that the seasonal variation
can be calculated. We therefore calculate the four-quarter moving average as before,
but we then calculate a second moving average.
In the example below, the four-quarter moving averages have been calculated in the
same way as before. The first four observations are added together and then divided
by four. The four-quarter moving average for the first four quarters is 322.50. Moving
to the next four observations, gives an average of 327.50. We can then work out the
mid-point of these two averages by adding them together and dividing by two. This
gives a mid-point of (322.50 + 327.50) ÷ 2 = 325. This mid-point is our trend and the
figure is shown against the quarter 3, 20X8 observation. All other calculations are
done in the same way as our original example.
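A short sketch of the centring calculation is given below, using hypothetical quarterly sales figures chosen so that the first two four-quarter averages come to 322.50 and 327.50 as in the example.

quarterly_sales = [300, 320, 340, 330, 320]   # hypothetical figures, $000

# Four-quarter moving averages
ma4 = [sum(quarterly_sales[i:i + 4]) / 4 for i in range(len(quarterly_sales) - 3)]
print(ma4)            # [322.5, 327.5]

# Centre each pair of adjacent averages so the trend lines up with an observation
centred = [(a + b) / 2 for a, b in zip(ma4, ma4[1:])]
print(centred)        # [325.0] - shown against the third quarter's observation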
Conclusion
Care must be taken however when using time series analysis. This forecasting method
is based on the assumption that what has happened in the past is a good indicator of
what is likely to happen in the future. In this example the suggestion is that sales
revenue will continue to grow by $2,000 per month indefinitely. If we consider the
concept of the product lifecycle, we can see that this is a rather simplistic and flawed
assumption.
In the real world, changes in the environment (technological, social, environmental,
political, economic etc) can all create uncertainty, making forecasts made from past
observations unrealistic.