
Performance indicators

This article explains and illustrates key performance indicators and critical success
factors.
Introduction
Objectives
Critical success factors
Performance indicators and key performance indicators
Performance measures – a practical framework
Use of performance indicators in the SBL and APM syllabi
Introduction
Both Strategic Business Leader (SBL) and Advanced Performance Management (APM)
require candidates to be able to establish key performance indicators and critical
success factors.

A surprising number of candidates do not feel comfortable with these terms, and this
article is aimed at explaining and illustrating these concepts. In particular, it will
explain what is meant by:

Performance
Objectives
Critical success factors
Performance indicators
Key performance indicators.

Performance
This can be defined as:

‘A task or operation seen in terms of how successfully it is performed’ (www.oxforddictionaries.com).

Organisations differ greatly in which aspects of their behaviour and results constitute
good performance. For example, their aim could be to make profits, to increase the
share price, to cure patients in a hospital, or to clear household rubbish. The concept
of ‘performance’ is very relevant to both SBL and APM. SBL looks at how organisations
can make decisions that improve their strategic performance and APM is focused on
how organisations evaluate their performance.

The primary required tasks are often found in the organisation’s mission statement as
it is there that the organisation’s purpose should be defined. These are called ‘primary
required tasks’ because although the primary task of a profit-seeking business is to
make profits, this rests on other subsidiary tasks such as good design, low cost per
unit, quality, flexibility, successful marketing and so on. Many of these are non-
financial achievements.

Some aspects of performance are ‘nice to have’ but others will be critical success
factors. For example, the standard of an airline’s meals and entertainment systems
will rank after punctuality, reliability and safety, all of which are likely to be critical to
the airline’s success.


Objectives
Objectives are simply targets that an organisation sets out to achieve. They are elements of the mission that have been quantified, and they are the basis for deciding appropriate performance measures and indicators: there is little point measuring something if you do not know whether the result is satisfactory and cannot decide whether performance needs to change. Organisations will create a hierarchy of objectives: corporate objectives, which affect the organisation as a whole, and unit objectives, which affect individual business units within the organisation. Objectives will also be categorised as primary and secondary. For example, an organisation might set itself a primary objective of growth in profits, but will then need to develop strategies to ensure this primary objective is achieved. This is where secondary objectives are needed, for example to improve product quality or to make more efficient use of resources.

Objectives often follow the SMART rule. They should be:

Specific: there is little point in setting an objective for a company to improve its
inventory. What does that mean? It could mean that stock-outs should be less
frequent, or average stock holdings should be lower, or the inventory will be held in
better conditions to reduce wastage.
Measurable: if you can’t measure something you will be at a loss as to how to control it. Some aspects of performance might be difficult to measure, but efforts must be made. Customer satisfaction is important to most businesses, and indications could be obtained from customer surveys, repeat business rates and so on.

Achievable/agreed/accepted: objectives are achieved by people, and those people must accept and agree that the objectives are achievable and important.

Relevant: relevant to the organisation and the person to whom the objectives are
given. It is important that people understand how achieving an objective will help
organisational success. If this connection isn’t clear, employees will begin to feel that
the objective is simply a cynical exercise of management power. The person to whom
the objective is given must also feel that they can affect its achievement.

Time-limited: all objectives have to be achieved within a specified time period, otherwise procrastination will rule.


Critical success factors


A critical success factor (CSF) can be defined as:

‘An area where an organisation must perform well if it is to succeed.’

Alternatively, Johnson, Scholes & Whittington defined CSFs as:

‘Those product features that are particularly valued by a group of customers, and, therefore, where the organisation must excel to outperform the competition.’

This definition is more complex than the first, but it is more useful because it makes
the organisation look towards its customers (or users) and recognises that their
opinion of excellence is more important and reliable than internally generated
opinions. If an organisation doesn’t deliver what its customers, clients, patients,
citizens or students value, it is failing.

Performance indicators and key performance indicators
Performance indicators (or performance measures) are methods used to assess
performance. For example:

In profit-seeking organisations:

Profit
Earnings per share
Return on capital employed

In not-for-profit organisations:

Exam grades (a school)
Waiting times for hospital admission (a health service)
Condition of roads (a local government highways department)

Particularly in profit-seeking organisations, the prime financial performance indicators
allow performance to be measured but they say little about how that performance has
been achieved. So, high profits will depend on a combination of good sales volumes,
adequate prices and sufficiently low costs. If high profits can only be achieved by a
satisfactory combination of volume, price and cost, then those factors should be
measured also and will need to be compared to standards and budgets.

Similar effects are found in not-for-profit organisations. For example, in a school, a CSF
might be that a pupil leaves with good standards of literacy. But that might depend on
pupil-teacher ratios, pupils’ attendance and the experience of the teachers. If these
factors contribute to good performance, they need to be measured and monitored.

Just as CSFs are more important than other aspects of performance, not all
performance indicators are created equal. The performance indicators that measure
the most important aspects of performance are the key performance indicators (KPIs).
To a large extent, KPIs measure how well CSFs are achieved; other performance indicators measure how well other aspects of performance are achieved.

There are a number of potential pitfalls in the design of performance indicators and
measurement systems:
Not enough performance measures are set
Often, directors and employees will be judged on the results of performance measures.
It has been said that ‘Whatever gets measured gets done’ and employees will tend to
concentrate on achieving the required performance where it is measured. The
corollary is that ‘Whatever doesn't get measured doesn't get done’ and the danger is
that employees will ignore areas of behaviour and performance which are not
assessed.

Too many performance indicators


This occurs especially where performance measures are not ranked by importance and
none have been identified as KPIs. Performance indicators have to be measured,
calculated and reported to management, and discrepancies must be explained or
excuses invented. Too many measures can divert time from more important tasks and
there is a danger that employees concentrate on the easier but more trivial measures
than on the more difficult but vital targets.

The wrong performance measures


An example of this would be applying strict cost measures in an organisation where
luxury products and services are sold (a differentiation strategy). This is likely to
detract from the organisation’s strategic success.

Too tight/too loose performance measures


Performance indicators that are too difficult to attain can lead to a loss of employee
motivation and promote dysfunctional behaviours such as gaming and the
misrepresentation of data. Performance measures that are too loose can pull down
performance. Benchmarking can help to avoid this. Internal benchmarking generally sets measures based on the previous period’s results, or on comparisons with other branches or divisions. However, internal benchmarks can lead to complacency: most organisations have to compete with others, so benchmarks should be aligned to competitors’ performance.

‘Hit and run’ performance indicators


This means that a performance indicator is set and then it is assumed that things will
look after themselves. Performance indicators need a management framework if they are to be effective.
Performance measures – a practical framework
Expanding on the last point, above, to establish a performance measurement system,
something like the following is needed for each measure:

A meaningful title for the measure
What is its purpose and how does that purpose relate to strategic success?
What other performance measures might be affected by this one, how are they
affected and how are conflicts to be resolved?
Who will be held responsible for it?
What is the source data, who is responsible for its supply, how is it measured and how
is the measure calculated?
What investigations and explanations are required and who is responsible?
What target is set and how has that target been determined?
How often should the target be updated?
How often is the measure reported on?
How will results be reported and acted on?
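One way to picture this checklist is as a single record kept for each measure. The sketch below, in Python, is purely illustrative: the field names paraphrase the questions above and are not any standard format.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceMeasure:
    """One record per measure, mirroring the checklist above.
    All field names are illustrative paraphrases, not a standard."""
    title: str                          # meaningful title for the measure
    purpose: str                        # link to strategic success
    affected_measures: list[str] = field(default_factory=list)  # related measures; conflicts to resolve
    responsible: str = ""               # who is held accountable
    source_data: str = ""               # source, supplier, measurement and calculation
    investigation: str = ""             # required investigations and explanations
    target: str = ""                    # the target and how it was determined
    target_update: str = ""             # how often the target is updated
    reporting_frequency: str = ""       # how often the measure is reported
    action: str = ""                    # how results are reported and acted on
```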
For example, consider a passenger train company called TTTE:

1. Title of performance measure

Punctuality (the percentage of trains arriving at their destination on time)

2. Purpose of performance measure

TTTE’s strategic objective is to provide comfortable, reliable and punctual services to passengers. TTTE competes with other train companies, cars, buses and airlines. Punctuality is seen as a key competitive lever and therefore must be measured.

3. Other performance measures affected

Safety – safety checks and speed limits will take priority over punctuality
Cleanliness – it might be necessary to occasionally reduce cleaning to keep to the
timetable

Energy consumption – running a train faster than normal (though within speed limits) will cause higher fuel consumption, but punctuality takes precedence

4. Who is held responsible?

Operations director

5. Source data, measurement and calculation of the measure.

The duty manager at each station is responsible for logging the arrival time of each train. A five-minute margin is allowed, ie a train is logged ‘on time’ if it is no later than five minutes after the advertised time. Beyond five minutes, the actual time by which the train is late is logged. Results will be calculated in percentage bands: on time, up to 15 minutes late, >15–30 minutes late, >30 minutes–one hour late, >one hour late, and so on.

6. Investigations and explanations

While logging late arrivals, station duty managers should also note the cause where
possible. The operations director must collate this information using statistical analysis which highlights persistent problems, such as particular times of the day, routes or days of the week.

7. Target and how it is determined

The target is dictated by the railway timetable. The timetable should be reviewed twice a year to look for ways of reducing journey times, to keep TTTE competitive with improvements in competing transport.

8. Update of target

The banding and any tolerances will be updated annually


9. How often should the measure be reported

Weekly

10. Reporting and action

The operations director will report performance on a monthly basis to the board, together with plans for service improvement.
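As an illustration of how the banding in step 5 could be computed, the minimal Python sketch below groups logged arrival delays into the percentage bands described above. The five-minute margin and band labels come from the example; the function names and data format are assumptions.

```python
from collections import Counter

# Band boundaries in minutes late; the five-minute 'on time' margin
# and the band labels follow the TTTE example above.
BANDS = [
    (5, "on time"),
    (15, "up to 15 minutes late"),
    (30, ">15-30 minutes late"),
    (60, ">30 minutes - one hour late"),
]

def band_for(minutes_late: float) -> str:
    """Classify one logged arrival into a punctuality band."""
    for limit, label in BANDS:
        if minutes_late <= limit:
            return label
    return ">one hour late"

def punctuality_report(minutes_late_log: list[float]) -> dict[str, float]:
    """Percentage of arrivals in each band, from duty managers' logs."""
    counts = Counter(band_for(m) for m in minutes_late_log)
    total = len(minutes_late_log)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Example week of arrivals (minutes late; 0 = exactly on time)
print(punctuality_report([0, 2, 4, 8, 12, 25, 70]))
```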


Use of performance indicators in the SBL and APM syllabi


Performance indicators are relevant to the following models and theories:

Mission statements: these define the important aspects of performance that sum up
the purpose of the organisation.

Stakeholder analysis: recognises that different stakeholders have different views on what constitutes good performance. Sometimes what stakeholders want is different to what the mission statement suggests as the purpose of the organisation. This can be a particular problem when the stakeholders are key players.

Generic strategies: the main generic strategies to achieve competitive advantage are
cost leadership and differentiation. If a company’s success depends on being a cost
leader (a CSF) then it must carefully monitor all its costs to achieve the leadership
position. The company will therefore make use of performance indicators relating to
cost and efficiency. If a company has chosen differentiation as its path to success, then
it must ensure that it is offering enhanced products and services and must establish
measures of these.

Value chain: a value chain sets out an organisation’s activities and enquires as to how
the organisation can make profits: where is value added? For example, value might be
added by promising fantastic quality. If so, that would be a CSF, and a key performance indicator would be the rate of occurrence of defective units.
Boston Consulting Group grid: this model uses relative market share and market
growth to suggest what should be done with products or subsidiaries. In SBL if a
company identifies a product as a ‘problem child’ BCG says that the appropriate action
for the company is either to divest itself of that product or to invest to grow the
product towards a ‘star’ position on the grid. This requires money to be spent on
promotion, product enhancement, especially attractive pricing and perhaps
investment in new, efficient equipment.

PESTEL and Porter’s five forces: both the macro-environment and competitive
environment change continuously. Organisations have to keep these under review and
react to the changes so that performance is sustained or improved. For example, if
laws were introduced which stated that suppliers should be paid within a maximum of
60 days, then a performance measure will be needed to encourage and monitor the
attainment of this target.

Product life cycle: different performance measures are required at different stages of
the life cycle. In the early days of a product’s life, it is important to reach a successful
growth trajectory and to stay ahead of would-be copycats. At the maturity stage,
where there is great competition and the market is no longer growing, performance
will depend on low costs per unit and maintaining market share to enjoy economies of
scale.

Company structure: different structures inevitably affect both performance and its
management. For example, as businesses become larger many choose a divisionalised
structure to allow specialisation in different parts of the business:
manufacturing/selling, European market/Asian market/North American market, product
type A/product type B. Divisional performance measures, such as return on investment
and residual income, then become relevant.

Information technology (IT): new technologies will influence performance and could
help to more effectively measure performance. However, remember that sophisticated
new technology does not guarantee better performance as costs can easily outweigh
benefits. If IT is vital to a business, then downtime and query response time become
relevant as might a measure of system usability.

Human resource management: what type of people should be recruited, and how are
they to be motivated, appraised and rewarded to maximise the chance of good
organisational performance? Performance measures are needed, for example, to
monitor the effectiveness of training, job performance, job satisfaction, recruitment
and retention. In addition, considerable effort has to be given to considering how
employees’ remuneration should be linked to performance.
Fitzgerald and Moon building blocks
The APM syllabus mentions three specific approaches or models:

Balanced scorecard
Performance pyramid
Fitzgerald and Moon’s building blocks
The balanced scorecard approach is probably the best known, but all seek to ensure
that the net is thrown wide when designing performance measures for organisations
so that factors such as quality, innovation, flexibility, stakeholder performance, and
delivery and cycle time are listed as being important aspects of performance.
Whenever an aspect of performance is important then a performance measure should
be designed and used.

The Fitzgerald and Moon model is worth a specific mention here as it is the only model
which explicitly links performance measures to the individuals responsible for the
performance.

The model first sets out the dimensions (split into results and determinants) where key
performance indicators should be established. You will see there is a mix of financial
and non-financial, and both quantitative and qualitative:

Results
Financial performance
Competitive performance
Determinants
Quality
Flexibility
Resource utilisation
Innovation
The model then suggests standards for KPIs:

Ownership: refers to the idea that KPIs will be taken more seriously if staff have a say
in setting targets. Staff will be more committed and will better understand why that
KPI is needed.
Achievability: if KPIs are frequently and obviously not achievable then motivation is harmed. Why would staff put in extra effort to try to achieve a target (and bonus) if they believe failure is inevitable?
Fairness: everyone should be set similarly challenging objectives, and it is essential that allowance is made for uncontrollable events. Managers should not be penalised for events that are completely outside everyone’s control (for example, a natural disaster) or which are someone else’s fault.
The model then suggests how employee rewards should be set up to encourage
employees to achieve the KPI targets:

Clarity: exactly how does performance translate into a reward?
Motivation: the reward must be both desirable and perceived as achievable if it is to be motivating.
Controllable: achievement of the KPI giving rise to the reward should be something the manager can influence and control.

Critical success factors


Critical success factors (CSFs) are often quoted in management literature as those areas in which an organisation needs to perform best if it is to achieve overall success. CSFs have frequently been used to help determine the requirements for executive information systems (EIS), supporting the ‘key indicator’ approach to management control. A number of methods have been developed to identify these key indicators; the CSF approach is one of the most widely used, with the resulting indicators measured and monitored using an EIS to help manage the strategic direction of an organisation.

It is difficult and expensive to gather, store, validate and make available the various
types of management information required for decision making. As such, it is
important for managers and providers of information support systems to determine, in
advance, what is most relevant to them.

It is necessary to identify the ‘key indicators’ that will help a manager to plan,
manage, and control an area of responsibility. This method is based on the need for
managers to focus, at any point in time, on the most significant aspects of their
responsibilities. The development of an EIS, designed to support management control,
is based on two main concepts:
- The selection of a set of key indicators of the health of the functional business area. Information will then be collected for each of these indicators.
- Exception reporting – the ability to make available to a manager, as required, information on only those indicators where performance differs significantly from expectations (a minimal sketch of this idea follows below).
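To make the exception reporting idea concrete, here is a minimal Python sketch. The indicator names, tolerance figures and percentage-deviation scheme are illustrative assumptions, not part of the CSF literature being summarised; the names echo the supermarket and insurance examples discussed later in this article.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One key indicator tracked by an EIS (illustrative fields only)."""
    name: str
    expected: float       # budget or standard
    actual: float         # latest result
    tolerance_pct: float  # deviation allowed before it is reported

def exceptions(indicators: list[Indicator]) -> list[Indicator]:
    """Exception reporting: return only the indicators whose actual
    performance differs significantly from expectations."""
    flagged = []
    for ind in indicators:
        deviation_pct = abs(ind.actual - ind.expected) / abs(ind.expected) * 100
        if deviation_pct > ind.tolerance_pct:
            flagged.append(ind)
    return flagged

kpis = [
    Indicator("On-shelf availability %", expected=98.0, actual=91.0, tolerance_pct=5.0),
    Indicator("Clerical cost per policy", expected=12.0, actual=12.4, tolerance_pct=10.0),
]
for ind in exceptions(kpis):
    print(f"EXCEPTION: {ind.name} - actual {ind.actual} vs expected {ind.expected}")
```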
The underlying belief is that an effective control system must be tailored to the
specific industry in which the organisation operates, and to the specific strategies that
it has adopted. It must identify the CSFs that should receive careful and continuous
management attention if the organisation is to be successful, and it must highlight
performance with respect to these key variables in reports available to all levels of
management.

The first concept is frequently approached from the viewpoint of CSFs in that a limited
number of areas are identified in which results, if they are satisfactory, will ensure
successful performance. They are the few key areas, it is believed, where ‘things must
go right’ if the organisation is to flourish. In turn, each manager must identify the key
areas that apply to them, in which results are identified as being absolutely necessary
to achieve specific goals. The goals, in turn, support overall organisational goals. The
genesis of this approach goes back to the history of warfare, where writers on battles
have identified the successful leader as the one who concentrated his forces on the
most significant areas.

The current state of performance in these areas should be continually measured. Because these areas are identified as being critical, each manager should have the
appropriate information that indicates whether events are proceeding sufficiently well
in each area. CSFs and associated performance indicators (PIs) can play a central role
in this.

BACKGROUND TO THE APPROACH


The concept of CSFs was first introduced in 1962 by D Ronald Daniel, later managing
director of the management consultancy McKinsey and Co. Introducing the concept,
Daniel cited examples where major corporations had introduced computerised
information systems, processed extensive amounts of data, and claimed to produce
meaningful information. However, this information, on closer examination, appeared to
be of little use in assisting managers to better perform their jobs, especially in terms
of direction, planning, management of operations, and control. To draw attention to
the type of information required, Daniel coined the phrase ‘critical success factors’.
Further, he provided examples of CSFs that he had identified for contemporary major
industries. These included:
- In the automobile industry – styling, an efficient dealer network organisation, tight control of management costs.
- In the food processing industry – new product development, good distribution channels, effective advertising.
- In the life insurance industry – the development of agency management personnel, effective control of clerical personnel, innovative new policies.
- In the supermarket industry – the right product mix available in each store, having it actually available on the shelves, advertising it effectively to pull shoppers in, pricing it correctly (since profit margins were low in this industry).
Daniel identified CSFs as being necessary to support the attainment of organisational
goals. Goals represent the end points that an organisation hopes to reach. CSFs,
however, are the areas in which good performance is necessary to ensure attainment
of these goals. Daniel focused on those CSFs that are relevant for any company in a
particular industry.

REFINING THE APPROACH


Early research into the uses and usefulness of CSFs took place at the Massachusetts Institute of Technology (MIT) in the early 1970s, which took Daniel’s work further (see Rockart, John F, ‘Chief executives define their own information needs’, Harvard Business Review, March–April 1979, Vol 57, pp 81–93; and Rockart, John F and Bullen, Christine, The Rise of Managerial Computing, Sloan School of Management, MIT, 1986).

Daniel’s initial thinking had been that CSFs are those that are common to
organisations operating in a particular industry. However, MIT identified five prime
sources of CSFs:
- the structure of the particular industry
- competitive strategy, industry position, and geographic location
- environmental factors
- temporary factors
- functional managerial position.

The structure of the particular industry


As first identified by Daniel, any industry has a set of CSFs that are determined by the
characteristics of the industry itself. Each company in the industry must pay attention
to these factors. For example, the manager of any supermarket would ignore at his
peril the CSFs listed above.

Competitive strategy, industry position, and geographic location


Every company in an industry is in a unique situation determined by its history and
current competitive strategy. For smaller organisations within an industry dominated
by one or two large companies, the actions of the major companies will often produce
new and significant problems for their smaller competitors. The competitive strategy
for the smaller companies may involve establishing a new market niche, getting out of
a product line completely, or redistributing resources among various product lines.
Their strategy is mainly a reaction to the larger companies’ strategies.

In this way a major competitor’s strategy can produce a CSF for a small company. For
example, Dell’s competitive approach to the marketing of small, inexpensive
computers informs the CSF identification for all computer manufacturers. The smaller
companies must identify what they will do in response, and how they measure the
effectiveness of their response. Just as differences in industry position can dictate
CSFs, differences in geographic location (eg distribution costs) and in strategic
positioning (differentiation or focus strategies for smaller companies) can lead to
different CSFs in companies within the same industry.
Environmental factors
Just as the gross national product of an economy can fluctuate with changes in political and demographic factors, so CSFs can change for an organisation. In the early
1970s, virtually no chief executive in the US would have listed ‘energy supply
availability’ as a CSF. However, following the organisation of OPEC and its oil embargo,
this factor is now closely monitored by most executives, because adequate availability
of energy, and its price stability, is vital to organisational planning and bottom-line
performance in manufacturing and distribution.

Temporary factors
Internal organisational considerations often lead to the monitoring of temporary CSFs.
These are areas of activity that are deemed significant to the success of the
organisation for a particular period of time because they are considered below the
threshold of acceptability, even though they may generally appear to be in good
shape and not apparently in need of special attention. For instance, an insurance
company that had just been fined by the industry regulator for mis-selling would probably generate a short-term CSF of ensuring that such mis-selling, and consequent financial penalties, would not happen again in the near future.

Functional managerial position


Each functional managerial position has a generic set of CSFs associated with it. For
example, almost all manufacturing managers are concerned with product quality,
inventory control, and cost control.

Two further dimensions


These five sources of CSFs are one form of classification. CSFs can also be classified as
follows:

Internal versus external sources of CSFs


Every manager will have internal CSFs relating to the department and the people they
manage. These CSFs can range across such diverse interests as human resource
development or inventory control. The primary characteristic of such internal CSFs is
that they deal with issues that are entirely within the manager’s sphere of influence
and control. External CSFs relate to issues that are generally less under the manager’s
direct control such as the availability or price of a particular critical raw material or
source of energy.

Monitoring versus building/adapting CSFs


Managers who are geared to producing short-term operating results invest
considerable effort in tracking and guiding their organisation’s performance, and
therefore employ monitoring CSFs to continuously scrutinise existing situations.
Almost all managers have some monitoring CSFs, which often include financially-
oriented CSFs such as actual performance versus budget or the current status of
product or service transaction cost. Another monitoring CSF might be personnel
turnover rates.

Managers who are either in reasonable control of day-to-day operations, or who are
insulated from such concerns, spend more time in a building or adapting mode. These
people can be classified as future-oriented planners whose primary purpose is to
implement major change programmes aimed at adapting the organisation to the
perceived emerging environment. Typical CSFs in this area might include the
successful implementation of major recruitment and training efforts, or new product or
service development programmes.

RESEARCH CONCLUSIONS – CSFs IN PRACTICE


Research has shown that, in general, individual managers focus on a mix of CSFs
drawn from the above sources. From an organisational perspective, however, CSFs
also have a number of hierarchical levels:
- industry CSFs
- corporate CSFs
- functional CSFs
- individual CSFs.

As mentioned at the beginning of this article, industry CSFs affect an organisation in the development of its strategy, objectives, and goals. No organisation can afford to
develop a strategy that does not pay adequate attention to the principal factors that
underlie success in its industry. In turn, the strategy, objectives, and goals developed
by an organisation lead to the development of a particular set of CSFs for the whole
organisation (corporate CSFs) unique to its own circumstances. In turn, corporate CSFs
become an input into a similar CSF determination process for each sub-organisation or
division in the corporation. Managers at each organisational level will have an
individual set of CSFs that will depend heavily on their perspective of their role and on
temporary factors.

It is at this point that we should discuss the concept that organisations are ‘human
activity systems’, and that individuals within these systems bring their own ‘world
view’ to their roles – encompassing their whole belief system – based on their training
and previous experience. This world view will influence their perception of what they
consider to be important in achieving their own organisational objectives. Thus a new
incumbent to a role may identify a number of new CSFs that may augment or replace
the CSFs identified by the previous incumbent.

STEPS TOWARDS IMPLEMENTATION – MEASUREMENT


The main use of the CSF concept is as a focus for implementing organisational
transformation by supporting beneficial change. This is achieved by:
- helping individual managers determine their priorities and their supporting information requirements
- aiding an organisation in its general planning processes, for strategic and annual planning, and for budgeting purposes
- aiding an organisation in its information systems planning processes.
A key driver for strategic and tactical information systems development is the
provision of better performance management information, in order to match
achievement against critical organisational goals. To achieve any benefit from using
the CSF concept it is also important to remember that choosing what to measure and
report on will markedly influence behaviour at every level. So care needs to be taken
in human activity systems to recognise that an unbalanced set of indicators, while
valid for the short-term needs of an individual in the hierarchy, may have unintended
consequences in influencing the behaviour of subordinates. Therefore there is a need
to produce a Balanced Scorecard of indicators and measures.

As a starting point in a typical command and control organisation, the following implementation tactics may help:
- Concentrate on measurement, not on counting. For example, focus on what the organisation is trying to achieve, set targets, and measure progress towards achieving those targets.
- Make it a priority to establish measures for the main core processes (core being defined as those that touch the customer or client).
- Ensure that the chosen measures reflect what matters to the customer or client.
- Use historic data to establish existing capability – identify targets and have a plan to close the gap.
- Continually review measures in use and their impact – look at ‘what’ is being measured and ‘why’, and publicly discard those measures no longer most relevant.

As a starting point, four areas for measurement should be considered when managing
for improvement: customers, response, process, and system.

Customers
What matters to customers? Can these things be measured (simply and efficiently)?
Do we have any systematic methods for understanding what matters to customers?
Do we translate what matters into measures for managing and improving
performance?

Response
Can ‘what matters to customers’ be turned into response measures? Are there other
‘end to end’ measures that will help the organisation learn about, for example,
customer acquisition and the efficiency of services delivered? What processes must be
measured end to end? Consider risk management – what events in the outside
environment do we need to watch out for? What do we need to know about competitor
activity?

Process
What measures might be useful in the processes? Some measures should be
permanent and some should be temporary. For example, ‘throughput’ might be an
important permanent measure, and ‘waste’ a useful temporary measure.
System
How should the above measures fit together to tell managers how they are
performing, and how they will perform? Are other whole system measures needed?
How well is the organisation integrated into, and monitoring, its external environment?

Finally, CSF measures chosen should be SMART, that is:


- specific – in the context of developing CSF objectives this means that the action, behaviour, or achievement described is always linked to a rate, number, percentage, or frequency
- measurable – a system, method, or procedure exists that allows the tracking and recording of the behaviour or action on which the CSF objective is focused
- agreed – there should be an agreement with those involved in achieving the objective that it is relevant and necessary
- realistic – that the objectives set are capable of being achieved
- time-based – the objective set should be linked to a date by which it is to be achieved.

These two articles each provide a brief overview of a model which can assist
accountants, not only in the determination of business strategy, but also in the
appraisal of business performance. As well as looking at the theory, the articles will
also provide advice to show how the models can be examined and how to tackle those
requirements.
Porter’s Five Forces Model
The use of Porter’s five forces model will help identify the sources of competition in an industry or sector. It looks at the attractiveness of a market, focused on the ability to make profits from it.
The model has similarities with other tools for environmental audit, such as political,
economic, social, and technological (PEST) analysis, but should be used at the level of
the strategic business unit, rather than the organisation as a whole. A strategic
business unit (SBU) is a part of an organisation for which there is a distinct external
market for goods or services. SBUs are diverse in their operations and markets so the
impact of competitive forces may be different for each one.
Five forces analysis focuses on five key areas: the threat of new entrants, the
bargaining power of buyers, the bargaining power of suppliers, the threat of
substitutes, and competitive rivalry.
The threat of new entrants
This depends on the extent to which there are barriers to entry. These barriers must be
overcome by new entrants if they are to compete successfully. Johnson et al (2005)
suggest that the existence of such barriers should be viewed as delaying entry and not
permanently stopping potential entrants. Typical barriers are detailed below:
- Economies of scale exist, for example, the benefits associated with volume manufacturing by organisations operating in the automobile and chemical industries where high fixed costs exist. Lower unit costs result from increased output, thereby placing potential entrants at a considerable cost disadvantage unless they can immediately establish operations on a scale which will enable them to derive similar economies.
- Certain industries, especially those which are capital intensive and/or require very large amounts of research and development expenditure, will deter all but the largest of new companies from entering the market.
- In many industries, manufacturers enjoy control over supply and/or distribution channels via direct ownership (vertical integration) or, quite simply, supplier or customer loyalty. Potential market entrants may be frustrated by not being able to get their products accepted by those individuals who decide which products gain shelf or floor space in retailing outlets. Retail space is always at a premium, and untried products from a new supplier constitute an additional risk for the retailer.
- Supplier and customer loyalty exists. A potential entrant will find it difficult to gain entry to an industry where there are one or more established operators with a comprehensive knowledge of the industry, and with close links with key suppliers and customers.
- Cost disadvantages independent of scale. Well-established companies may possess cost advantages which are not available to potential entrants, irrespective of their size and cost structure. Critical factors include proprietary product technology, personal contacts, favourable business locations, learning curve effects, favourable access to sources of raw materials, and government subsidies.
- In some circumstances, a potential entrant may expect a high level of retaliation from an existing firm, designed to prevent entry – or make the costs of entry prohibitive.
- Government regulation may prevent companies from entering into direct competition with nationalised industries, or may implement complex rules that non-nationals struggle to interpret and follow. In other scenarios, the existence of patents and copyrights affords some degree of protection against new entrants.
- Differentiated products and services have a higher perceived value than those offered by competitors. Products may be differentiated in terms of price, quality, brand image, functionality, exclusivity, and so on. However, differentiation may be eroded if competitors can imitate the product or service being offered and/or reduce customer loyalty.
The bargaining power of buyers
The power of the buyer will be high where:
- There are a few, large players in a market. For example, large supermarket chains can apply a great deal of pressure on their potential suppliers to attempt to get them to lower their prices. This is especially the case where there are a large number of undifferentiated, small suppliers, such as small farming businesses supplying fresh produce to large supermarket chains who can then ‘pick and choose’.
- The cost of switching between suppliers is low, for example from one haulage contractor to another. The service offered will have the same outcome and, unless a long-term contract has been negotiated, deliveries can be arranged on a parcel-by-parcel basis.
- The buyer’s product is not significantly affected by the quality of the supplier’s product. For example, a manufacturer of paper towels and toilet paper will not be affected too greatly by the quality of the spiral-wound paper tubes on which their products are wrapped.
- Buyers earn low profits so will be very keen to negotiate lower prices from their suppliers in order to increase margins.
- Buyers have the potential for backward integration, for example where the buyer might purchase the supplier and/or set up in business and compete with the supplier. This is a strategic option which might be selected by a buyer in circumstances where favourable prices and quality levels cannot be obtained by bargaining with current suppliers alone.
- Buyers are well informed, for example, having full information regarding availability of supplies, and can use that knowledge in the negotiation against the supplier.
The bargaining power of suppliers
The power of the seller will be high where (and this tends to be the reverse of the
power of buyers):
- There are a large number of customers, reducing the supplier’s reliance upon any single customer, so the supplier may not be too concerned about losing any one of them.
- The switching costs are high. For example, switching from one software supplier to another could prove extremely costly as all equipment and processes are specific to the supplier and all will need to change. This is on top of the costs of designing the new system itself.
- The brand is powerful/well known (Apple, Mercedes, McDonald’s, Microsoft). Where the supplier’s brand is powerful, a retailer might not be able to operate without a particular brand in its range of products.
- There is a possibility of the supplier integrating forward, such as a brewery buying restaurants to enable control of the customer.
- Customers are fragmented so that they have little bargaining power individually, such as the retail customers of a petrol station situated in a remote location.
The threat of substitute products
The threat of substitutes is higher where:
- There is direct product-for-product substitution – eg email for fax and postal services. The products are performing the same task/outcome, albeit in different ways.
- There is substitution of need. For example, better quality domestic appliances reduce the need for maintenance and repair services. The information technology revolution has made a significant impact in this particular area as it has greatly diminished the need for providers of printing and secretarial services.
- There is generic substitution competing for disposable income, such as the competition between carpet and flooring manufacturers: as with email and post, both are essentially doing the same thing (providing floor coverings) but perform the task in differing ways.
Competitive rivalry
Competitive rivalry is likely to be high where:
- There are a number of equally balanced competitors of a similar size. Competition is likely to intensify as one competitor strives to attain dominance over another.
- The rate of market growth is slow. The concept of the life cycle suggests that in mature markets, market share has to be achieved at the expense of competitors as there are few new customers entering the market.
- There is a lack of differentiation between competitor offerings. In such situations there is little disincentive to switch from one to another, since they are all the same.
- The industry has high fixed costs, perhaps as a result of capital intensity, which may precipitate price wars and hence low margins. Where capacity can only be increased in large increments, requiring substantial investment, the competitor who takes up this option is likely to create short-term excess capacity and increased competition in order to fill this extra capacity.
- There are high exit barriers. This can lead to excess capacity as players will not be willing to leave and, consequently, increased competition from those firms effectively ‘locked in’ to a particular marketplace.
In summary, the application of Porter’s five forces model will increase management
understanding of an industrial environment which they may want to enter, or assist
them to assess a market that they are currently in.
Now that the model has been explained you need to be able to apply it in the exam. Candidates often struggle to perform this ‘application’ effectively – either due to not following the precise requirement of the question, or due to not using the information in the scenario effectively, or even at all. So, this next section will look at a few of the ways that this may be examined in the APM exam and provide some advice on how to tackle answering those questions.
When conducting a five forces assessment an organisation will need to consider:
- how to measure the strength of the forces and how reliable those measurements are
- how to manage the forces identified to mitigate their influence on the organisation’s future performance, and
- what performance measures are required to monitor the forces.
These factors are often the basis for questions requiring the use of this model.
Illustration:
The examples below are based on a company making semi-conductors/micro-chips
and the SBU being addressed in the question makes them for the autonomous vehicle
industry (self-driving cars), a specialised use in an already specialist industry.
EXAMPLE 1 – Using the model to perform the analysis
Required:
Using Porter’s five forces model, assess the impact of the external business
environment on the performance management of Scarlette Plc.
This is the first part of the requirement (the second part follows in the next example).
This requirement does indeed require you to perform the analysis for the SBU. This
must be done in the precise context of the scenario in the question and does not need
to be preceded with explanations of the model or its parts.
An extract from a very good answer is reproduced below to show the approach that
will score the maximum marks available for one force, threat of new entrants, in this
scenario:
Answer – Extract showing threat of new entrants only
The threat of new entrants will be dictated by barriers to entry into the specialist semi-
conductor market. These appear to be high, given the high fixed costs and the high
levels of technical expertise required to develop a viable product. There is also the need to have cultivated strong relationships with the autonomous car producers and control systems manufacturers who will be the customers for the products.
Comments: The answer begins with a recognition of the issues affecting barriers, then moves on to identify the specifics for the industry. It justifies the identification of the barriers as being high here, both for the microchip industry in general and for the specific use in this SBU.
EXAMPLE 2 – Providing performance measures for the forces
…and give a justified recommendation of one new performance measure for
each of the five force areas at Scarlette.
Answer – Extract showing threat of new entrants only
A suitable performance measure would be percentage growth in revenue because as
the industry grows Scarlette may expect their revenues to grow with it, as they gain
new contracts and even new customers. Scarlette will need to compare this measure
against the growth of the industry itself and competitors to ensure that they are at
least keeping up with them.
[Other measures could include ratio of fixed cost to total cost (measures capital
required) or customer loyalty (through long-term contracts to supply semi-conductors
to manufacturers).]
Comment: As the comment at the end of the answer shows, there are many
measures which could be applied here. The key to gaining pass marks is to identify a
measure which is going to be useful for the organisation in the scenario, given its
industry and situation. This answer also clearly justifies the recommendation in this
context.
In Performance management models – part 2, the Boston Consulting Group (BCG) matrix will be the model focused on.
This article provides a brief overview of the second of two models, which can assist
accountants, not only in the determination of business strategy, but also in the
appraisal of business performance. It also looks at how to approach a particular style
of question that may appear in the APM exam.

In this part the Boston Consulting Group matrix will be reviewed; you may also wish to read part 1, which covers Porter’s five forces.
The Boston Consulting Group Matrix
There is a fundamental need for management to evaluate existing products and
services in terms of their market development potential, and their potential to
generate profit. The Boston Consulting Group matrix, which incorporates the concept
of the product life cycle, is a useful tool which helps management teams to assess
existing and developing products and services in terms of their market potential. More
importantly, the model can also be used to assess the strategic position of strategic
business units (SBUs), and in this respect it is particularly useful to those organisations
which operate in a number of different markets and offer a number of different
products or services.
The matrix offers an approach to product portfolio planning. It has two axes, namely
relative market share (meaning relative to the competition) and market growth.
Management must consider each product or service marketed, and then position it on the matrix. This is done by considering relative market share: the company with the largest share (the market leader) compares itself to the next biggest player, while smaller players (market followers) compare their share to the leader’s. The other axis on the matrix is the market growth rate – the market is either growing quickly, or it is mature and growing slowly, or it may even have stopped growing altogether.
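The positioning logic just described can be sketched in code. The Python below is purely illustrative: the 10% growth threshold and the function names are assumptions, since (as the limitations section later notes) ‘high’ and ‘low’ are hard to pin down in practice.

```python
def relative_market_share(own_share: float, rival_shares: list[float]) -> float:
    """Compare your market share to the largest rival's, as described above:
    the market leader ends up above 1.0; followers end up below 1.0."""
    return own_share / max(rival_shares)

def bcg_position(rel_share: float, market_growth_pct: float,
                 growth_threshold: float = 10.0) -> str:
    """Place a product/SBU in the matrix. The growth threshold is an
    illustrative assumption, not a figure from the model itself."""
    high_share = rel_share >= 1.0
    high_growth = market_growth_pct >= growth_threshold
    if high_share:
        return "star" if high_growth else "cash cow"
    return "problem child" if high_growth else "dog"

# A market leader (30% share vs a 20% largest rival) in a fast-growing market:
print(bcg_position(relative_market_share(30.0, [20.0, 15.0]), 12.0))  # star
```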
Problem children
Problem children have a relatively low market share in a market that is growing
quickly, often due to the fact that these are new products/services, or that they are yet
to receive recognition by prospective purchasers. In order to realise the full potential
of problem children, management needs to develop new business prudently, and
apply sound project management principles if it is to avoid costly disasters. Gross
profit margins are likely to be high, but overheads are also high, covering the costs of
research, development, advertising, market education, and low economies of scale. As
a result, the development of problem children can be loss-making until the product
moves into the rising star category, which is by no means assured. This is evidenced
by the fact that many problem children products remain as such, while others become
tomorrow’s dogs.
Note: Problem children are also known as question marks.
Stars
Stars are products which are in the high market share and growing market quadrant.
As a product moves into this category it is commonly known as a rising star. While a
market is strong and still growing, competition is not yet fully established. Since
demand is strong, and market saturation and over-supply is not an issue, the pricing of
such products is relatively unhindered, and therefore these products generate very
good margins. At the same time, costs per unit are minimised due to high volumes
and good economies of scale. These are great products, and worthy of continuing
investment for as long as they have the potential to achieve good rates of growth. In
circumstances where this potential no longer exists, these products are likely to fall
vertically in the matrix into the cash cow quadrant (fallen stars), and their cash
generating characteristics will change. It is therefore vital that a company has rising
stars developing from its problem children in order to fill the void left by the fallen
stars.
Cash cows
A cash cow has a relatively high market share in a mature/low growth market and
should generate significant cash flows. This somewhat crude metaphor is based on the
idea of ‘milking’ the returns from a previous investment that established good
distribution and market share for the product. Activities to support products in this
quadrant should be aimed at maintaining and protecting their existing position,
together with good cost management, rather than aimed at investment for growth.
This is because there is little likelihood of additional growth being achieved.
Dogs
A dog has a relatively low market share in a mature/low growth market, might well be
loss making, and therefore have negative cash flow. A common belief is that there is
no point in developing products or services in this quadrant. Many organisations
discontinue dogs, but businesses which have been denied adequate funding for
development may find themselves with a high proportion of their products or services
in this quadrant. A dog product that forms an integral part of a portfolio may also be
retained to ensure complete coverage – eg a furniture reseller may have some dog
products but does so in order to remain a ‘one-stop-shop’ for all customer furniture
needs and not lose customers.
Limitations of the Boston Consulting Group matrix
The popularity of the matrix has diminished a little, as the criteria it is based on – market share and market growth – are no longer reliable predictors of long-term success. Other models have been developed from it, with further criteria added (these are outside the scope of APM, however). It was also very useful when conglomerates were much more common, and these companies needed to review their portfolios of SBUs to ensure that effort and funds were focused on the correct markets. Management should therefore exercise a degree of caution when using the matrix. Some of its limitations are detailed below:
- The rate of market growth is just one factor in an assessment of industry attractiveness, and relative market share is just one factor in the assessment of competitive advantage. The matrix ignores many other factors that contribute towards these two important determinants of profitability.
- There can be practical difficulties in determining what exactly ‘high’ and ‘low’ (growth and share) mean in a particular situation.
- The focus upon high market growth can lead to the profit potential of declining markets being ignored.
- The matrix assumes that each SBU or product/service is independent. This is not always the case, as organisations often take advantage of potential synergies.
- The use of the matrix is best suited to SBUs, as opposed to products or to broad markets (which might comprise many market segments).
- The position of dogs is frequently misunderstood, as many dogs play a vital role in helping SBUs achieve competitive advantage. For example, dogs may be required to complete a product range (as referred to earlier in this article) and provide a credible presence in the market. Dogs may also be retained in order to reduce the threat from competitors via a broad portfolio.
Notwithstanding these limitations, the Boston Consulting Group matrix provides a
useful starting point in the assessment of the performance of products and services
and, more importantly, of SBUs. Although when conducting a BCG assessment an
organisation will need to consider:
 how to measure each of the categories in the matrix and how reliable those
measurements are
 how to manage the different categories identified to mitigate their influence on
the organisation’s future performance
 what performance indicators are required as a result of the BCG categorisation,
and how those indicators link into both overall organisational performance and
individual performance.
Now that the model has been explained and demonstrated, we will move on to look at
how it can be examined in APM. An analysis using the model may be asked for;
often, however, this will be done for you in the question and the requirements will focus
on how these SBUs can be managed and what performance measures may be
required. You may also be expected to evaluate the use of the BCG matrix as a
performance management system. This section of the article will provide advice about
answering several types of requirements. In the examples, only extracts from the
requirements and answers are provided, to keep the article to a sensible length.
Illustration
EXAMPLE 1 – Using the model to perform the analysis
FNI is a large, diversified entertainment business based in Zeeland. It has a main
objective of maximising shareholder wealth and is made up of four divisions:

Division – Position in the matrix
1. Bars – Cash cow
2. Dance clubs – Problem child
3. Restaurants – Dog
4. Online, live-streamed events – Problem child

A consultant has performed the BCG analysis of the four divisions and you are required
to evaluate their positions in the model and discuss the measures necessary to
monitor performance.
An extract from a very good answer is reproduced below to show the approach which
will score well, focused on the bar division which has been identified as a cash
cow:
Answer
The bar division is a cash cow as it has a very strong share of a low growth market.
The focus for this division will be on generating as much cash flow as possible in order
to continue to invest elsewhere in the business. It will also have a focus on cost control
to ensure that it continues to be as profitable as possible. As a result, measures which
would be suitable for the bar division would be profit margins and cash generation.
Comment: This answer begins by justifying the bar division’s placement in the matrix.
It then goes on to explain what this division’s focus will be and why. It then concludes
with measures that relate to its situation.
You could also be asked to evaluate the BCG analysis as a performance management
system at the company.
Answer
The BCG matrix can be beneficial as it allows the company to view the prospects of its
different divisions. A different style of management should be applied to each division
based on this analysis. Those businesses which are in faster growing sectors will
require more capital to be invested and may not generate cash as efficiently from
profits. However, those businesses in slower growing mature markets should have a
focus on cost control and cash generation. Business units identified as cash cows and,
particularly, dogs should not be dismissed since if they are properly managed, they
can provide a rich source of cash as they are run down.
The performance management systems and metrics used by the divisions should
therefore be adjusted to reflect this analysis. The metrics for the high-growth dance
clubs and live-streamed events divisions will be based on growth (for example, in
revenue and market share) rather than purely on profit and return on investment.
However, the BCG matrix is a very simple method of analysis. For example, using
relative market share measured against the largest competitor, where a value of 1·0 is
used as cut off between large and small, means there is only one star or cow per
market. It was designed as a tool for product portfolio analysis rather than
performance measurement. As a performance system, it seems to downgrade
traditional measures of performance such as profit and shareholder wealth and
therefore may not be well aligned with all of the key stakeholders’ objectives. It should
be seen as a starting point for considering the appropriate performance management
for a business unit but not the final result for the overall company.
Additionally, different products within each business unit may not fit the
unit’s classification. For example, a newly launched street food format would be under
the restaurant division but may be in a higher growth sub-sector and so applying the
performance systems and management style of a dog business would not be
appropriate. It may also be difficult to distinguish the sectors from each other as, for
example, it may be difficult to define the difference between a bar and a restaurant
where both sell much of the same services. The model also fails to consider the links
between the business units, for example, where the bars may serve the dance clubs.
Comment: An evaluation needs to look at the good and bad points of any model
being discussed in the context of the question. This answer builds up a picture via
looking at how the model works, then considering if it is appropriate when looking at
the objectives of the company. It finishes with some clear negatives and limitations of
the model for managing performance. This is a very good answer which focuses on the
exact requirement – it is not simply looking at limitations of the model in itself but as a
tool for performance management.
Conclusion
These two articles have covered two common models used in the APM exam. As well
as explaining the models they have given advice and examples of how to answer
questions set on them. It will be a good idea to now review questions in past APM
exams which have been set using them. This will help you to see more examples of
how they are examined and that it is not enough to simply quote theory to score well
in APM.
The importance of sustainability
Since the 1990s, there has been increasing recognition amongst governments,
businesses, consumers, investors and other stakeholders of the importance of
sustainability and the impacts of businesses on society and the environment. They have
recognised that focusing solely on profit maximisation, without considering the
interaction of a business with its operating environment, will not be a sustainable
approach.
At the same time, there has been increasing demand for environmentally friendly
products and processes, for example, hybrid – and more recently – electric vehicles in
place of conventional petrol or diesel ones. As such, adopting a sustainable business
model could be not only a challenge but also an opportunity for organisations.
There are many different definitions of sustainability, but a commonly used one is that
in the Brundtland Report (1987): meeting the needs of the present without
compromising the ability of future generations to meet their own needs.
Most discussions of sustainability also highlight the need for organisations to
contribute to economic prosperity, environmental quality and social justice. We will
focus on these three strands (economic, environmental, social) as the basis of our
discussion in this article.
Sustainability and performance
In addition to the overall importance of sustainability, there could also be a direct link
between environmental behaviour and performance.
There are a number of ways poor environmental behaviour could affect a business:
 fines (for pollution, or other breaches of regulations)
 increased liability to environmental taxes (for example, carbon taxes)
 damage to corporate reputation
 loss of sales or consumer boycotts
 inability to secure finance
 loss of insurance cover
Conversely, reducing material, energy and water usage should not only reduce
environmental impact, it could also reduce operating costs. Similarly, a focus on
reducing waste could, in turn, improve process efficiency, and reduce the amount (and
therefore the cost) of materials used.
Equally, although health and safety measures might not add value to a business by
themselves, they can help to protect a business from the cost of accidents which
might otherwise occur. If a business has poor health and safety controls, this might
result in – amongst other things – increased staff absence from injury or illness, and
possible compensation claims for any work-related injuries.
Triple bottom line
The increased focus on sustainability has important implications for performance
management, and for accountants producing and reviewing management information.
In addition to the financial information which they have traditionally measured,
businesses now also need to consider the environmental and social aspects of
performance, and they need the information on these areas to be relevant and
reliable, and to be provided in a meaningful and comparable way.
The triple bottom line approach (Elkington, 1998) has emerged as a potential way to
define a business’s sustainable performance: measuring performance not only in the
economic value businesses add, but also on the environmental and social value they
add – or destroy.
It is important to note that the economic element here does not simply mean
the financial profit a business makes. Economic impact is wider than just financial
impact. Financial profit focuses on the business itself, but the economic impact of a
business is on society as a whole, for example, through creating jobs and paying
taxes.
It is also important to recognise that social and environmental issues are not confined
within a business’ normal financial reporting boundaries, but businesses also needed
to consider sustainability issues across their supply chain, and the social and
environmental practices of their suppliers (for example, supermarkets requiring
suppliers to manufacture products from sustainable sources or eco-friendly materials,
or to supply ‘organic’ produce).
Triple bottom line and different types of capital
We are all familiar with the logic that companies’ underlying objective is to deliver
value for their shareholders. However, there is now an increasing recognition that the
long-term pursuit of shareholder value is linked to the preservation and enhancement
of different types of capital – natural, human, social, manufactured and financial –
which can be broadly related to the three aspects of the triple bottom line:

Aspect of triple bottom line – Type of capital affected

Environmental
- Natural capital: natural resources (eg air, water, land) and processes used by a business in delivering its products and services

Social
- Human capital: health, skills and motivation of employees
- Social capital: relationships, partnerships and co-operation (eg with suppliers)

Economic
- Manufactured capital: buildings, equipment and infrastructure used by the business
- Financial capital: funds available to enable the business to operate; reflects the value generated from the other types of capital

Integrated reporting
The recognition that businesses depend on different forms of capital for their success
is also an important part of the rationale for integrated reporting (IR). However, IR
also encourages a focus on business sustainability and an organisation’s long-term
success. By encouraging businesses to focus on their ability to create and sustain
value over the longer term, IR should help them take decisions which are sustainable,
and which ensure a more effective allocation of scarce resources.
Integrated Reporting is discussed in more detail in a separate article.
Sustainability and performance information
The argument that it is insufficient for businesses to consider financial
information alone is not new. There are echoes here of discussions around the need for
multi-dimensional performance measurement systems (such as the balanced
scorecard (Kaplan and Norton, 1996)) – which emphasise the need for
financial and non-financial measures to be part of a business’ information systems.
Equally, one of the criticisms sometimes made of the way businesses use balanced
scorecards is that they are linked to delivering traditional economic value (eg
shareholder wealth), rather than considering the importance of corporate social
responsibility (CSR) and sustainability. As such, some commentators have suggested
the need to add social and environmental perspectives to the balanced scorecard.
However, others have argued that sustainability could be incorporated into the
existing four perspectives. The logic of the scorecard is to link a business’ objectives
and strategy to its performance measures, and the argument here is that businesses
should include sustainability goals within their strategy.
As such, when selecting goals for the perspectives, a business should consider
requirements for sustainability. For example:
 Customer perspective: Have the interests of sustainability stakeholders been
taken into account eg green consumers; local communities; government
regulators?
 Internal process perspective:
o Have the environmental impacts of processes eg resource usage; waste
and recycling; impact on water and air been considered?
o Do HR processes take into account labour best practices around health
and safety, diversity, equal opportunity etc?
 Learning and growth:
o How are training and development programmes helping to promote
sustainability values and culture?
o How are innovations leading to more efficient use of resources and the
reduction of waste, or leading to the introduction of more environmentally
friendly products?
More generally, regardless of the performance measurement system it uses, in order
to improve sustainability performance, a business needs to translate its overall
objectives into specific practices, linked to sustainability, in each key area of
performance. It then needs to identify specific measurement indicators, so it can
assess how well it is achieving its objectives in each key area.
Key performance indicators (KPIs)
Monitoring key performance indicators (KPIs) is recognised as a crucial part of
performance management for any business. However, many businesses don’t
measure sustainability KPIs in the way that they would financial KPIs, for example. One
of the key challenges with introducing sustainability KPIs is that the list of potential
indicators is very large, so determining which are the most important to monitor (ie
the key indicators) can be a complex task.
However, the following are some potential indicators a business could track in relation
to sustainability:

Energy
- Energy consumption
- Energy saved due to implemented improvements

Materials
- Raw material usage
- % of non-renewable materials used
- % of recycled materials used
- Product recycling rate %

Water
- Water consumption/water footprint
- % of water reused or recycled

Supply chain
- % of suppliers that comply with established sustainability strategy
- Supply chain miles

Waste
- Waste generated
- Waste by type and disposal method
- Waste production rate

Social
- Number of health and safety incidents (workplace safety)
- Number of sick days (employees’ health and well-being)

Emissions
- Toxic emissions
- CO2 emissions
- Greenhouse gas emissions
- Carbon footprint

As we have mentioned before, in addition to encouraging sustainability, monitoring
these indicators could also help business performance more generally. For example,
monitoring and trying to reduce energy consumption could help to lower energy costs
as well as reducing environmental impact. Similarly, supply chain miles provide an
indicator of how far a product is travelling before reaching its destination. If products
are travelling large distances, this could mean they incur heavy costs along the way.
Therefore, looking to reduce supply chain miles could influence a business’s choice of
suppliers; not just to reduce carbon footprint, but potentially costs as well.
Evaluating measurement of sustainability performance
Having established the need to embed ‘sustainability’ into an organisation’s
performance measurement systems, a key question to ask will be: how well are
businesses actually doing this?
By definition, performance measurement is always selective. Businesses cannot
measure every aspect of performance, so they must decide the most important
metrics and indicators to focus on. When evaluating an organisation’s performance
measurement systems (in relation to sustainability) key questions to ask will be:
 What is being measured? What measures are chosen?
 To what extent are aspects of sustainability covered in an organisation’s
performance measurement system? Are measures of sustainability included?
 To what extent do the chosen performance indicators enable management to
measure performance from a sustainability perspective?
 Are the measures chosen the most appropriate ones for the organisation to be
using?
 Do the chosen measures provide a balanced picture of the organisation’s
performance (rather than, for example, just focusing on areas which the
organisation is doing well)?
One potential approach here when selecting areas to measure could be to analyse an
organisation’s value chain to identify the areas which have the greatest
potential impact on sustainability. These should then be the priority areas to measure,
so the business should select indicators which show how well it is performing in these
priority areas.
Reliability of measures
As well as the areas being measured, another important consideration is the extent to
which the data being gathered is reliable and meaningful.
 How is performance measured (eg inputs; activities; outputs), and can the data
be reliably captured?
 Are there benchmarks or comparators against which performance can be
assessed?
 Are performance measures clearly defined so they can be consistently and
reliably measured?
 Is information presented in a way which maximises its usefulness to its
audience?
As with other performance indicators, it is important to monitor trends in indicators of
sustainability performance to measure progress. However, for any trend to be
meaningful, the indicators need to be measured consistently – over time, and across
different parts of a business (and potentially between businesses).
One of the particular challenges in comparing sustainability performance between
businesses is that, whereas financial performance can be monitored using a number of
widely accepted indicators derived from the financial statements, the indicators of
social and environmental impacts are less clearly established, and the information
used to calculate them is often not part of mainstream information flows.
In addition, the perception of sustainability can vary across countries, communities
and individuals. Some initiatives promoted by an organisation as environmentally
friendly might not be perceived as relevant or beneficial by green consumers.
Reporting Sustainability: Global Reporting Initiative and the UN Sustainable
Development Goals
Given the increasing importance of sustainability as a major global issue, there has
been increasing recognition of the need for a globally accepted framework within which
organisations can frame their sustainability strategy.
The Global Reporting Initiative (GRI) Standards provide best practice for reporting on a
range of economic, environmental and social impacts, and give companies specific
guidance on what information they should report on. However, the GRI Standards are
not mandatory. And while there is an increasing recognition of the need to set a
sustainability equivalent of the International Financial Reporting Standards (IFRS), to
put financial and non-financial information on the same footing, this has not been
achieved yet.
The United Nations’ Sustainable Development Goals (SDGs) could be more relevant at
a strategic level, encouraging companies to embed sustainability measures into their
‘core’ performance reporting.
The SDGs are part of the United Nations (UN) 2030 Agenda for Sustainable
Development. The Agenda, formally adopted by the UN in 2015, is a 15-year plan with
the aim of ending poverty, combatting climate change, and fighting injustice and
inequality. The SDGs are 17 high level goals for sustainable development, with each
goal supported by a number of specific objectives. In turn, indicators are
recommended for each objective to enable performance against it to be measured.

1: No poverty
2: Zero hunger
3: Good health and well-being
4: Quality education
5: Gender equality
6: Clean water and sanitation
7: Affordable and clean energy
8: Decent work and economic growth
9: Industry, innovation and infrastructure
10: Reduced inequalities
11: Sustainable cities and communities
12: Responsible consumption and production
13: Climate action
14: Life below water
15: Life on land
16: Peace, justice and strong institutions
17: Partnerships for the goals

Sustainable Development Goals – United Nations General Assembly (2015)


The overall goals are broad and aspirational. However, they are supported by a range
of associated targets (169 in total) and indicators, which provide a quantifiable
framework for assessing whether the goals are being achieved.
For example, Goal 8 'Decent work and economic growth' aims to 'Promote sustained,
inclusive and sustainable economic growth, full and productive employment and
decent work for all.' One of the targets linked to this goal is to “Improve … global
resource efficiency in consumption and production and endeavour to decouple
economic growth from environmental degradation”. In turn, performance against this
target is measured using the indicators:
 Material footprint, material footprint per capita and material footprint per GDP
 Domestic material consumption, domestic material consumption per capita, and
domestic material consumption per GDP.
Implications of the SDGs on organisations and performance management
The principal responsibility for achieving the SDGs lies with national governments, but
governments cannot tackle the issues on their own. Success in achieving the SDGs
also depends on the active participation of businesses and non-governmental
organisations (NGOs) across the world.
In this respect, two key challenges are:
 encouraging senior managers to evaluate the extent to which their business
objectives create societal value and
 demonstrating the link between 'sustainability' and business.
One possible way to do this is to translate the language of sustainability into the
language of everyday business and operations. For example, instead of asking a
construction company 'How does climate change affect your business?', the issues
could be identified more pertinently by looking at the risks that flooding or changes in
water level might have on the company's projects and site operations.
More generally, the SDGs encourage businesses to adopt sustainable practices and
integrate sustainability information into their reporting. As we mentioned earlier, in
relation to the balanced scorecard, the challenge here is not so much adding
additional perspectives for measuring performance, but embedding ‘sustainability’
into the existing perspectives, as an integral factor to consider in business decisions
and business performance measurement.
Accordingly, in the APM exam, you should be prepared to evaluate how effectively a
business is measuring sustainability within its performance measurement system.
Written by a member of the Advanced Performance Management examining
team

Clearly, risk permeates most aspects of corporate decision making (and life
in general), and few can predict with any precision what the future holds in
store
Risk can take myriad forms – ranging from the specific risks faced by individual
companies (such as financial risk, or the risk of a strike among the workforce), through
the current risks faced by particular industry sectors (such as banking, car
manufacturing, or construction), to more general economic risks resulting from
interest rate or currency fluctuations, and, ultimately, the looming risk of recession.
Risk often has negative connotations, in terms of potential loss, but the potential for
greater than expected returns also often exists.
Clearly, risk is almost always a major variable in real-world corporate decision-making,
and managers ignore its vagaries at their peril. Similarly, trainee accountants require
an ability to identify the presence of risk and incorporate appropriate adjustments into
the problem-solving and decision-making scenarios encountered in the exam hall.
While it is unlikely that the precise probabilities and perfect information which feature
in exam questions can be transferred to real-world scenarios, a knowledge of the
relevance and applicability of such concepts is necessary.
In this first article, the concepts of risk and uncertainty will be introduced together
with the use of probabilities in calculating both expected values and measures of
dispersion. In addition, the attitude to risk of the decision maker will be examined by
considering various decision-making criteria, and the usefulness of decision trees will
also be discussed. In the second article, more advanced aspects of risk assessment
will be addressed, namely the value of additional information when making decisions,
further probability concepts, the use of data tables, and the concept of value-at-risk.
The basic definition of risk is that the final outcome of a decision, such as an
investment, may differ from that which was expected when the decision was taken.
We tend to distinguish between risk and uncertainty in terms of the availability of
probabilities. Risk is when the probabilities of the possible outcomes are known (such
as when tossing a coin or throwing a dice); uncertainty is where the randomness of
outcomes cannot be expressed in terms of specific probabilities. However, it has been
suggested that in the real world, it is generally not possible to allocate probabilities to
potential outcomes, and therefore the concept of risk is largely redundant. In the
artificial scenarios of exam questions, potential outcomes and probabilities will
generally be provided, therefore a knowledge of the basic concepts of probability and
their use will be expected.
Probability
The term ‘probability’ refers to the likelihood or chance that a certain event will occur,
with potential values ranging from 0 (the event will not occur) to 1 (the event will
definitely occur). For example, the probability of a tail occurring when tossing a coin is
0.5, and the probability when rolling a dice that it will show a four is 1/6 (0.166). The
total of all the probabilities from all the possible outcomes must equal 1, ie some
outcome must occur.
A real world example could be that of a company forecasting potential future sales
from the introduction of a new product in year one (Table 1).

Table 1: Forecast sales of the new product in year one

Sales revenue    Probability
$500,000         0.1
$700,000         0.2
$1,000,000       0.4
$1,250,000       0.2
$1,500,000       0.1
From Table 1, it is clear that the most likely outcome is that the new product
generates sales of $1,000,000, as that value has the highest probability.
Independent and conditional events
An independent event occurs when the outcome does not depend on the outcome of a
previous event. For example, assuming that a dice is unbiased, then the probability of
throwing a five on the second throw does not depend on the outcome of the first
throw.
In contrast, with a conditional event, the outcomes of two or more events are related,
ie the outcome of the second event depends on the outcome of the first event. For
example, in Table 1, the company is forecasting sales for the first year of the new
product. If, subsequently, the company attempted to predict the sales revenue for the
second year, then it is likely that the predictions made will depend on the outcome for
year one. If the outcome for year one was sales of $1,500,000, then the predictions for
year two are likely to be more optimistic than if the sales in year one were $500,000.
The availability of information regarding the probabilities of potential outcomes allows
the calculation of both an expected value for the outcome, and a measure of the
variability (or dispersion) of the potential outcomes around the expected value (most
typically standard deviation). This provides us with a measure of risk which can be
used to assess the likely outcome.
Expected values and dispersion
Using the information regarding the potential outcomes and their associated
probabilities, the expected value of the outcome can be calculated simply by
multiplying the value associated with each potential outcome by its probability.
Referring back to Table 1, regarding the sales forecast, then the expected value of the
sales for year one is given by:
Expected value
= ($500,000)(0.1) + ($700,000)(0.2) + ($1,000,000)(0.4) + ($1,250,000)(0.2) +
($1,500,000)(0.1)
= $50,000 + $140,000 + $400,000 + $250,000 + $150,000
= $990,000
In this example, the expected value is very close to the most likely outcome, but this is
not necessarily always the case. Moreover, it is likely that the expected value does not
correspond to any of the individual potential outcomes. For example, the average
score from throwing a dice is (1 + 2 + 3 + 4 + 5 + 6) / 6 or 3.5, and the average
family (in the UK) supposedly has 2.4 children. A further point regarding the use of
expected values is that the probabilities are based upon the event occurring
repeatedly, whereas, in reality, most events only occur once.

Table 2: Potential returns from two investments

Probability    Return on Investment A    Return on Investment B
0.25           8%                        5%
0.50           10%                       10%
0.25           12%                       15%
In addition to the expected value, it is also informative to have an idea of the risk or
dispersion of the potential actual outcomes around the expected value. The most
common measure of dispersion is standard deviation (the square root of the variance),
which can be illustrated by the example given in Table 2 above, concerning the
potential returns from two investments.
To estimate the standard deviation, we must first calculate the expected values of
each investment:
Investment A
Expected value = (8%)(0.25) + (10%)(0.5) + (12%)(0.25) = 10%
Investment B
Expected value = (5%)(0.25) + (10%)(0.5) + (15%)(0.25) = 10%
The calculation of standard deviation proceeds by subtracting the expected value from
each of the potential outcomes, then squaring the result and multiplying by the
probability. The results are then totalled to yield the variance and, finally, the square
root is taken to give the standard deviation, as shown in Table 3.
In Table 3, although investments A and B have the same expected return, investment
B is shown to be more risky by exhibiting a higher standard deviation. More commonly,
the expected returns and standard deviations from investments and projects are both
different, but they can still be compared by using the coefficient of variation, which
combines the expected return and standard deviation into a single figure.
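
The workings behind Table 3 can be replicated in a few lines of code. The following Python sketch (illustrative only, using the figures for investments A and B above) calculates the expected value, standard deviation and coefficient of variation for each investment:

def expected_value(outcomes):
    # outcomes is a list of (return %, probability) pairs
    return sum(r * p for r, p in outcomes)

def standard_deviation(outcomes):
    ev = expected_value(outcomes)
    variance = sum(p * (r - ev) ** 2 for r, p in outcomes)
    return variance ** 0.5

investment_a = [(8, 0.25), (10, 0.50), (12, 0.25)]
investment_b = [(5, 0.25), (10, 0.50), (15, 0.25)]

for name, inv in (("A", investment_a), ("B", investment_b)):
    ev, sd = expected_value(inv), standard_deviation(inv)
    print(f"Investment {name}: EV = {ev:.1f}%, SD = {sd:.2f}%, "
          f"coefficient of variation = {sd / ev:.2f}")

# Investment A: EV = 10.0%, SD = 1.41%, coefficient of variation = 0.14
# Investment B: EV = 10.0%, SD = 3.54%, coefficient of variation = 0.35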
Coefficient of variation and standard error
The coefficient of variation is calculated simply by dividing the standard deviation by
the expected return (or mean):
Coefficient of variation = standard deviation / expected return
For example, assume that investment X has an expected return of 20% and a standard
deviation of 15%, whereas investment Y has an expected return of 25% and a
standard deviation of 20%. The coefficients of variation for the two investments will
be:
Investment X = 15% / 20% = 0.75
Investment Y = 20% / 25% = 0.80
The interpretation of these results would be that investment X is less risky, on the
basis of its lower coefficient of variation. A final statistic relating to dispersion is the
standard error, which is a measure often confused with standard deviation. Standard
deviation is a measure of variability of a sample, used as an estimate of the variability
of the population from which the sample was drawn. When we calculate the sample
mean, we are usually interested not in the mean of this particular sample, but in the
mean of the population from which the sample comes. The sample mean will vary
from sample to sample and the way this variation occurs is described by the ‘sampling
distribution’ of the mean. We can estimate how much a sample mean will vary from
the standard deviation of the sampling distribution. This is called the standard error
(SE) of the estimate of the mean.
The standard error of the sample mean depends on both the standard deviation and
the sample size:
SE = SD/√(sample size)
The standard error decreases as the sample size increases, because the extent of
chance variation is reduced. However, a fourfold increase in sample size is necessary
to reduce the standard error by 50%, due to the square root of the sample size being
used. By contrast, standard deviation tends not to change as the sample size
increases.
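
As a quick illustration of this square root effect, the following short Python sketch (the standard deviation figure is hypothetical) shows that quadrupling the sample size halves the standard error:

from math import sqrt

sd = 10.0                # sample standard deviation (hypothetical figure)
for n in (25, 100):      # quadrupling the sample size...
    print(f"n = {n}: SE = {sd / sqrt(n):.1f}")
# n = 25: SE = 2.0
# n = 100: SE = 1.0     ...halves the standard error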
Decision-making criteria
The decision outcome resulting from the same information may vary from manager to
manager as a result of their individual attitude to risk. We generally distinguish
between individuals who are risk averse (dislike risk) and individuals who are risk
seeking (content with risk). Similarly, the appropriate decision-making criteria used to
make decisions are often determined by the individual’s attitude to risk.
To illustrate this, we shall discuss and illustrate the following criteria:
1. Maximin
2. Maximax
3. Minimax regret
An ice cream seller, when deciding how much ice cream to order (a small, medium, or
large order), takes into consideration the weather forecast (cold, warm, or hot). There
are nine possible combinations of order size and weather, and the payoffs for each are
shown in Table 4.

Table 4: Payoffs ($) for each order size and weather combination

Order/weather   Cold   Warm   Hot
Small           250    200    150
Medium          200    500    300
Large           100    300    750
The highest payoffs for each order size occur when the order size is most appropriate
for the weather, ie small order/cold weather, medium order/warm weather, large
order/hot weather. Otherwise, profits are lost from either unsold ice cream or lost
potential sales. We shall consider the decisions the ice cream seller has to make using
each of the decision criteria previously noted (note the absence of probabilities
regarding the weather outcomes).
1. Maximin
This criterion is based upon a risk-averse (cautious) approach and bases the
order decision upon maximising the minimum payoff. The ice cream seller will
therefore decide upon a medium order, as its lowest payoff is $200, whereas
the lowest payoffs for the small and large orders are $150 and $100
respectively.
2. Maximax
This criterion is based upon a risk-seeking (optimistic) approach and bases the
order decision upon maximising the maximum payoff. The ice cream seller will
therefore decide upon a large order, as the highest payoff is $750, whereas the
highest payoffs for the small and medium orders are $250 and $500
respectively.
3. Minimax regret
This approach attempts to minimise the regret from making the wrong decision
and is based upon first identifying the optimal decision for each of the weather
outcomes. If the weather is cold, then the small order yields the highest payoff,
and the regret from the medium and large orders is $50 and $150 respectively.
The same calculations are then performed for warm and hot weather and a
table of regrets constructed (Table 5).

Table 5: Regrets ($) for each order size and weather combination

Order/weather   Cold   Warm   Hot   Maximum regret
Small             0    300    600   600
Medium           50      0    450   450
Large           150    200      0   200
The decision is then made on the basis of the lowest regret, which in this case is the
large order with the maximum regret of $200, as opposed to $600 and $450 for the
small and medium orders.
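
These three criteria lend themselves to a simple implementation. The following Python sketch (illustrative only, using the payoffs from Table 4) applies maximin, maximax and minimax regret to the ice cream seller's problem:

payoffs = {  # order size -> payoffs ($) for (cold, warm, hot) weather
    "small":  [250, 200, 150],
    "medium": [200, 500, 300],
    "large":  [100, 300, 750],
}

# Maximin (risk averse): maximise the minimum payoff
maximin = max(payoffs, key=lambda order: min(payoffs[order]))

# Maximax (risk seeking): maximise the maximum payoff
maximax = max(payoffs, key=lambda order: max(payoffs[order]))

# Minimax regret: regret = best payoff for that weather less actual payoff;
# choose the order with the smallest maximum regret
best_per_weather = [max(row[i] for row in payoffs.values()) for i in range(3)]
regrets = {order: [best - actual for best, actual in zip(best_per_weather, row)]
           for order, row in payoffs.items()}
minimax_regret = min(regrets, key=lambda order: max(regrets[order]))

print(maximin, maximax, minimax_regret)  # medium large large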
Decision trees
The final topic to be discussed in this first article is the use of decision trees to
represent a decision problem. Decision trees provide an effective method of decision-
making because they:
 clearly lay out the problem so that all options can be challenged
 allow us to fully analyse the possible consequences of a decision
 provide a framework in which to quantify the values of outcomes and the
probabilities of achieving them
 help us to make the best decisions on the basis of existing information and best
guesses.
A comprehensive example of a decision tree is shown in Figures 1 to 4, where a
company is trying to decide whether to introduce a new product or consolidate
existing products. If the company decides on a new product, then it can be developed
thoroughly or rapidly. Similarly, if the consolidation decision is made then the existing
products can be strengthened or reaped. In a decision tree, each decision (new
product or consolidate) is represented by a square box, and each outcome (good,
moderate, poor market response) by circular boxes.
The first step is to simply represent the decision to be made and the potential
outcomes, without any indication of probabilities or potential payoffs, as shown
in Figure 1 below.
The next stage is to estimate the payoffs associated with each market response and
then to allocate probabilities. The payoffs and probabilities can then be added to the
decision tree, as shown in Figure 2 below.
The expected values along each branch of the decision tree are calculated by starting
at the right hand side and working back towards the left recording the relevant value
at each node of the tree. These expected values are calculated using the probabilities
and payoffs. For example, at the first node, when a new product is thoroughly
developed, the expected payoff is:
Expected payoff = (0.4)($1,000,000) + (0.4)($50,000) + (0.2)($2,000) = $420,400
The calculations are then completed at the other nodes, as shown in Figure 3 below.
We have now completed the relevant calculations at the uncertain outcome nodes.
We now need to include the relevant costs at each of the decision nodes for the two
new product development decisions and the two consolidation decisions, as shown
in Figure 4 below.
The payoff we previously calculated for ‘new product, thorough development’ was
$420,400, and we have now estimated the future cost of this approach to be
$150,000. This gives a net payoff of $270,400.
The net benefit of ‘new product, rapid development’ is $31,400. On this branch, we
therefore choose the most valuable option, ‘new product, thorough development’, and
allocate this value to the decision node.
The outcomes from the consolidation decision are $99,800 from strengthening the
products, at a cost of $30,000 (a net payoff of $69,800), and $12,800 from reaping the
products without any additional expenditure.
By applying this technique, we can see that the best option is to develop a new
product. It is worth much more to us to take our time and get the product right, than
to rush the product to market. And it’s better just to improve our existing products
than to botch a new product, even though it costs us less.
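
The rollback calculation itself can be sketched in code. In the following illustrative Python fragment, only the thorough-development branch is computed from its underlying probabilities and payoffs (the only chance node whose outcome data is given in full above); the other branches' net payoffs are taken directly from the figures quoted in the text:

def chance_node(outcomes):
    # expected value at a chance node: list of (probability, payoff) pairs
    return sum(p * v for p, v in outcomes)

# Thorough development: good/moderate/poor market response
thorough = chance_node([(0.4, 1_000_000), (0.4, 50_000), (0.2, 2_000)])

net_payoffs = {
    "new product, thorough development": thorough - 150_000,  # = 270,400
    "new product, rapid development": 31_400,       # net figure from above
    "consolidate, strengthen products": 99_800 - 30_000,      # = 69,800
    "consolidate, reap products": 12_800,           # no additional cost
}

# At the decision node, choose the branch with the highest net payoff
best = max(net_payoffs, key=net_payoffs.get)
print(best, net_payoffs[best])  # new product, thorough development 270400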
In the next article, we will examine the value of information in making decisions, the
use of data tables, and the concept of value-at-risk.
Written by a member of the APM examining team

In this second article on the risks of uncertainty, we build upon the basics of
risk and uncertainty addressed in the first article published in April 2009 to
examine more advanced aspects of incorporating risk into decision making
In particular, we return to the use of expected values and examine the potential
impact of the availability of additional information regarding the decision under
consideration. Initially, we examine a somewhat artificial scenario, where it is possible
to obtain perfect information regarding the future outcome of an uncertain variable
(such as the state of the economy or the weather), and calculate the potential value of
such information. Subsequently, the analysis is revisited and the more realistic case of
imperfect information is assumed, and the initial probabilities are adjusted using
Bayesian analysis.
Some decision scenarios may involve two uncertain variables, each with their own
associated probabilities. In such cases, the use of data/decision tables may prove
helpful where joint probabilities are calculated involving possible combinations of the
two uncertain variables. These joint probabilities, along with the payoffs, can then be
used to answer pertinent questions such as what is the probability of a profit/(loss)
occurring?
The other main topic covered in the article is that of Value-at-Risk (VaR), which has
been referred to as 'the new science of risk management'. The principles underlying
VaR will be discussed along with an illustration of its potential uses.
Expected values and information
To illustrate the potential value of additional information regarding the likely outcomes
resulting from a decision, we return to the example given in the first article, of the ice
cream seller who is deciding how much ice cream to order but is unsure about the
weather. We now add probabilities to the original information regarding whether the
weather will be cold, warm or hot, as shown in Table 1.
Table 1: Assigning probabilities to weather

Order/weather   Cold    Warm    Hot
Probability     0.2     0.5     0.3
Small           $250    $200    $150
Medium          $200    $500    $300
Large           $100    $300    $750
We are now in a position to be able to calculate the expected values associated with
the three sizes of order, as follows:
 Expected value (small) = 0.2 ($250) + 0.5 ($200) + 0.3 ($150) = $195
 Expected value (medium) = 0.2 ($200) + 0.5 ($500) + 0.3 ($300) = $380
 Expected value (large) = 0.2 ($100) + 0.5 ($300) + 0.3 ($750) = $395
On the basis of these expected values, the optimal decision would be to order a large
amount of ice cream with an expected value of $395. However, it may be possible to
improve upon this value if better information regarding the weather could be obtained.
Exam questions often make the assumption that it is possible to obtain perfect
information, ie to predict exactly what the outcome of the uncertain variable will be.
The value of perfect information
In the case of the ice cream seller, perfect information would be certainty regarding
the outcome of the weather.
If this was the case, then the ice cream seller would purchase the size of order which
gave the highest payoff for each weather outcome - in other words, purchasing a small
order if the weather was forecast to be cold, a medium order if it was forecast to be
warm, and a large order if the forecast was for hot weather. The resulting expected
value would then be:
Expected value = 0.2 ($250) + 0.5 ($500) + 0.3 ($750) = $525
The value of the perfect information is the difference between the expected values
with and without the information, ie
Value of information = $525 - $395 = $130
Exam questions are often phrased in terms of the maximum amount that the decision
maker would be prepared to pay for the information, which again is the difference
between the expected values with and without the information.
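
The calculation generalises easily. The following illustrative Python sketch reproduces the expected values and the value of perfect information for the ice cream example:

probabilities = [0.2, 0.5, 0.3]  # cold, warm, hot
payoffs = {
    "small":  [250, 200, 150],
    "medium": [200, 500, 300],
    "large":  [100, 300, 750],
}

# Expected value of each order size without further information
evs = {order: sum(p * v for p, v in zip(probabilities, row))
       for order, row in payoffs.items()}
best_without = max(evs.values())  # $395 (the large order)

# With perfect information, the best payoff is earned in each weather state
ev_with = sum(p * max(row[i] for row in payoffs.values())
              for i, p in enumerate(probabilities))  # $525

print(ev_with - best_without)  # value of perfect information = 130.0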
However, the concept of perfect information is somewhat artificial since, in the real
world, such perfect certainty rarely, if ever, exists. Future outcomes, irrespective of the
variable in question, are not perfectly predictable. Weather forecasts or economic
predictions may exhibit varying degrees of accuracy, which leads us to the concept of
imperfect information.
The value of imperfect information
With imperfect information we do not enjoy the benefit of perfect foresight.
Nevertheless, such information can be used to enhance the accuracy of the
probabilities of the possible outcomes and therefore has value. The ice cream seller
may examine previous weather forecasts and, on that basis, estimate probabilities of
future forecasts being accurate. For example, it could be that when hot weather is
forecast past experience has suggested the following probabilities:
 P (forecast hot but weather cold) = 0.3
 P (forecast hot but weather warm) = 0.4
 P (forecast hot and weather hot) = 0.7
The probabilities given do not add up to 1 and so, for example, P (forecast hot but
weather cold) cannot mean P (weather cold given that forecast was hot), but must
mean P (forecast was hot given that weather turned out to be cold).
We can use a table to determine the required probabilities. Suppose that the weather
was recorded on 100 days. Using our original probabilities, we would expect 20 days to
be cold, 50 days to be warm, and 30 days to be hot. The information from our forecast
is then used to estimate the number of days that each of the outcomes is likely to
occur given the forecast (see Table 2).
Table 2: Likely weather outcomes

Outcome/forecast    Cold    Warm    Hot    Total
Forecast hot        6**     20      21     47
Other forecast      14      30      9      53
Total               20*     50      30     100

* From past data, cold weather occurs with probability of 0.2 ie on 0.2 of the 100 days
in the sample = 20 days. Other percentages are also derived from past data.
** If the actual weather is cold, there is a 0.3 probability that hot weather had been
forecast. This will occur on 0.3 of the 20 days on which the weather was cold = 6 days
(0.3 x 20). Similarly, 20 = 0.4 x 50 and 21 = 0.7 x 30.
The revised probabilities, if the forecast is hot, are therefore:
 P (Cold)=6/47=0.128
 P (Warm) = 20/47 = 0.425
 P (Hot) = 21/47 = 0.447
The expected values can then be recalculated as:
 Expected value (small) = 0.128 ($250) + 0.425 ($200) + 0.447 ($150) = $184
 Expected value (medium) = 0.128 ($200) + 0.425 ($500) + 0.447 ($300) =
$372
 Expected value (large) = 0.128 ($100) + 0.425 ($300) + 0.447 ($750) = $476
 Value of imperfect information = $476 - $395 = $81
The estimated value for imperfect information appears reasonable, given that the
value we had previously calculated for perfect information was $130.
Bayes' rule
Bayes' rule is perhaps the preferred method for estimating revised (posterior)
probabilities when imperfect information is available. An intuitive introduction to
Bayes' rule was provided in The Economist, 30 September 2000:
'The essence of the Bayesian approach is to provide a mathematical rule explaining
how you should change your existing beliefs in the light of new evidence. In other
words, it allows scientists to combine new data with their existing knowledge or
expertise. The canonical example is to imagine that a precocious newborn observes
his first sunset, and wonders whether the sun will rise again or not. He assigns equal
prior probabilities to both possible outcomes, and represents this by placing one white
and one black marble into a bag. The following day, when the sun rises, the child
places another white marble in the bag. The probability that a marble plucked
randomly from the bag will be white (ie the child's degree of belief in future sunrises)
has thus gone from a half to two-thirds. After sunrise the next day, the child adds
another white marble, and the probability (and thus the degree of belief) goes from
two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just
as likely as not to rise each morning is modified to become a near-certainty that the
sun will always rise.'
In mathematical terms, Bayes' rule can be stated as:

Posterior probability = (likelihood × prior probability) / marginal likelihood
For example, consider a medical test for a particular disease which is 90% accurate, ie
a person who has the disease will test positive with a probability of 0.90, while a
person who does not have the disease will incorrectly test positive with a probability of
0.10. If we further assume that 3% of the population actually have this disease, then
the probability of having the disease (given that you have tested positive) is shown by:
P(Disease | Test +) = P(Test + | Disease) × P(Disease) /
[P(Test + | Disease) × P(Disease) + P(Test + | No Disease) × P(No Disease)]

= (0.90 × 0.03) / (0.90 × 0.03 + 0.10 × 0.97)
= 0.027 / (0.027 + 0.097)
= 0.218
This result suggests that you have a 22% probability of having the disease, given that
you tested positive. This may seem low, but only 3% of the population actually have
the disease (and we would expect them to test positive), while 10% of the tests on
people who do not have the disease will also prove positive. Therefore, if 100 people
are tested, approximately three out of the 13 positive tests will relate to people who
actually have the disease.
Bayes' rule has been used in a practical context for classifying email as spam on the
basis of certain key words appearing in the text.
Data tables
Data tables show the expected values resulting from combinations of uncertain
variables, along with their associated joint probabilities. These expected values and
probabilities can then be used to estimate, for example, the probability of a profit or a
loss.
To illustrate, assume that a concert promoter is trying to predict the outcome of two
uncertain variables, namely:
1. The number of people attending the concert, which could be 300, 400, or 600
with estimated probabilities of 0.4, 0.4, and 0.2 respectively.
2. From each person attending, the profit on drinks and confectionary, which could
be $2, $4, or $6 with estimated probabilities of 0.3, 0.4 and 0.3 respectively.
As each of the two uncertain variables can take three values, a 3 x 3 data table can be
constructed. We shall assume that the expected values have already been calculated
as follows:

Number/spend    300        400        600
$2              (2,000)    (1,000)    3,000
$4              (750)      3,000      4,000
$6              1,000      5,000      7,000

The probabilities can be used to calculate joint probabilities as follows:

Number/spend    300     400     600
$2              0.12    0.12    0.06
$4              0.16    0.16    0.08
$6              0.12    0.12    0.06

The two tables could then be used to answer questions such as:
1. What is the probability of making a loss? 0.12 + 0.12 + 0.16 = 0.40
2. What is the probability of making a profit of more than $3,500? 0.08 + 0.12 + 0.06 =
0.26
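
The joint probability workings can be automated. The following illustrative Python sketch rebuilds the joint probability table for the concert promoter and answers the two questions above:

attendance = {300: 0.4, 400: 0.4, 600: 0.2}   # number attending: probability
spend = {2: 0.3, 4: 0.4, 6: 0.3}              # profit per head: probability
profit = {  # (attendance, spend per head) -> expected profit, as given above
    (300, 2): -2_000, (400, 2): -1_000, (600, 2): 3_000,
    (300, 4):   -750, (400, 4):  3_000, (600, 4): 4_000,
    (300, 6):  1_000, (400, 6):  5_000, (600, 6): 7_000,
}

# Joint probability of each attendance/spend combination
joint = {(n, s): attendance[n] * spend[s] for n in attendance for s in spend}

p_loss = sum(p for cell, p in joint.items() if profit[cell] < 0)
p_over_3500 = sum(p for cell, p in joint.items() if profit[cell] > 3_500)
print(round(p_loss, 2), round(p_over_3500, 2))  # 0.4 0.26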

Value-at-Risk (VaR)
Although financial risk management has been a concern of regulators and financial
executives for a long time, Value-at-Risk (VaR) did not emerge as a distinct concept
until the late 1980s. The triggering event was the stock market crash of 1987, which
was so unlikely, given standard statistical models, that it called the entire basis of
quantitative finance into question.
VaR is a widely used measure of the risk of loss on a specific portfolio of financial
assets. For a given portfolio, probability, and time horizon, VaR is defined as a
threshold value such that the probability that the mark-to-market loss on the portfolio
over the given time horizon exceeds this value (assuming normal markets and no
trading) is the given probability level. Such information can be used to answer
questions such as 'What is the maximum amount that I can expect to lose over the
next month with 95%/99% probability?'
For example, large investors, interested in the risk associated with the FT100 index,
may have gathered information regarding actual returns for the past 100 trading days.
VaR can then be calculated in three different ways:
1. The historical method
This method simply ranks the actual historical returns in order from worst to best, and
relies on the assumption that history will repeat itself. The largest five (one) losses can
then be identified as the threshold values when identifying the maximum loss with 5%
(1%) probability.
2. The variance-covariance method
This relies upon the assumption that the index returns are normally distributed, and
uses historical data to estimate an expected value and a standard deviation. It is then
a straightforward task to identify the worst 5% or 1% as required, using the standard
deviation and known confidence intervals of the normal distribution - ie -1.65 and -
2.33 standard deviations respectively.
3. Monte Carlo simulation
While the historical and variance-covariance methods rely primarily upon historical
data, the simulation method develops a model for future returns based on randomly
generated trials.
Admittedly, historical data is utilised in identifying possible returns but hypothetical,
rather than actual, returns provide the data for the confidence levels.
Of these three methods, the variance-covariance is probably the easiest as the
historical method involves crunching historical data and the Monte Carlo simulation is
more complex to use.
VaR can also be adjusted for different time periods, since some users may be
concerned about daily risk whereas others may be more interested in weekly, monthly,
or even annual risk. We can rely on the idea that the standard deviation of returns
tends to increase with the square root of time to convert from one time period to
another. For example, if we wished to convert a daily standard deviation to a monthly
equivalent then the adjustment would be:
σ(monthly) = σ(daily) × √T, where T = 20 trading days
For example, assume that after applying the variance-covariance method we estimate
that the daily standard deviation of the FT100 index is 2.5%, and we wish to estimate
the maximum loss for 95 and 99% confidence intervals for daily, weekly, and monthly
periods assuming five trading days each week and four trading weeks each month:
95% confidence
Daily = -1.65 x 2.5% = -4.125%
Weekly = -1.65 x 2.5% x √5 = -9.22%
Monthly = -1.65 x 2.5% x √20 = -18.45%
99% confidence
Daily = -2.33 x 2.5% = -5.825%
Weekly = -2.33 x 2.5% x √5 = -13.03%
Monthly = -2.33 x 2.5% x √20 = -26.05%
Therefore we could say with 95% confidence that we would not lose more than 9.22%
per week, or with 99% confidence that we would not lose more than 26.05% per
month.
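
The square root of time adjustment is easy to verify in code. The following Python sketch (illustrative only, using the 2.5% daily standard deviation assumed above) reproduces the daily, weekly and monthly figures:

from math import sqrt

daily_sd = 2.5                                        # daily standard deviation, %
horizons = {"daily": 1, "weekly": 5, "monthly": 20}   # trading days per period
z_scores = {"95%": -1.65, "99%": -2.33}

for confidence, z in z_scores.items():
    for period, days in horizons.items():
        print(f"{confidence} {period}: {z * daily_sd * sqrt(days):.2f}%")

# eg 95% weekly: -9.22%; 99% monthly: -26.05%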
On a cautionary note, New York Times reporter Joe Nocera published an extensive
piece entitled Risk Mismanagement on 4 January 2009, discussing the role VaR played
in the ongoing financial crisis. After interviewing risk managers, the author suggests
that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by
giving false security to bank executives and regulators. A powerful tool for professional
risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when
misunderstood.

Conclusion
These two articles have provided an introduction to the topic of risk present in decision
making, and the available techniques used to attempt to make appropriate
adjustments to the information provided. Adjustments and allowances for risk also
appear elsewhere in the ACCA syllabus, such as sensitivity analysis, and risk-adjusted
discount rates in investment appraisal decisions where risk is probably at its most
obvious. Moreover, in the current economic climate, discussion of risk management,
stress testing and so on is an everyday occurrence.
Written by a member of the APM examining team

Two related articles (Data analytics – parts 1 and 2 – see 'Related links' box)
have looked at the way organisations can use data analytics to help
understand and manage performance, including the use of predictive
analytics to help improve forecasting. This article looks at some important
techniques which could be used in forecasting. You should already be
familiar with these techniques from your Performance Management (PM)
studies, and you should be prepared to apply them to the scenarios in APM
questions as necessary.
Forecasting and uncertainty
The business landscape has become increasingly unpredictable and uncertain in
recent times due to rapid changes in technology and fierce competition, as well as
major global events such as COVID-19.
This uncertainty also makes it increasingly difficult for businesses to budget and
forecast accurately. For example, think about the range of factors which could impact
a sales forecast:
 Economic conditions (eg economic growth rates, inflation)
 Industry conditions (eg market growth rates, competitors entering/ leaving
the market, competitors’ actions)
 The organisation’s products or services (eg whether any new
products/services are being launched, or new product features; where
products/services are in their life cycle, and whether sales are growing or
declining)
 Policy changes (eg changes in the prices of an organisation’s
products/services; changes in terms and conditions offered to customers)
 Marketing and advertising (eg increasing/decreasing advertising activities;
launching new marketing campaigns; marketing on new channels)
 Legislation and regulation (eg new legislation - either affecting the
organisation’s products, or competitors’/substitute products)
A comprehensive sales forecast needs to consider all of these factors.
In a previous technical article – Data analytics, part 1 – we highlighted the potential
value of predictive analytics in helping organisations understand future patterns
and trends, which in turn should help organisations improve the accuracy of their
forecasts. Continuing the illustration of sales forecasts, using machine learning-based
analytics software which incorporates as rich a data set as possible – including details
about external events and market conditions, product life cycles and product launches,
historical growth and sales figures, customer surveys and feedback – should help an
organisation to improve the accuracy of its sales forecast.
Although the focus of Advanced Performance Management (APM) exam questions will
not be on the detailed calculations which would take place in analytics software,
accountants need the business knowledge and commercial acumen to interpret the
results of data analytics; including having an understanding of the modelling
assumptions, and what decisions can justifiably be made based on the analysis.
As such, in the APM exam, you could be expected to draw on analysis techniques to
help you understand the assumptions being used in a given scenario, and to evaluate
how realistic or plausible they are. You should already be familiar with these
techniques from your studies of Performance Management (PM) and Management
Accounting (MA), but we are going to briefly recap four techniques which you might
need to use in the context of assessing forecasts, or helping to make decisions based
on them:
• Regression and correlation
• Time series
• Expected values
• Standard deviation
For the detailed articles relating to each of these techniques, see the following links:
• Regression and correlation
• Time series and moving averages
• Risks of uncertainty
Regression and correlation
Being able to understand the relationship between different factors is very important
in forecasting. Regression analysis is a common method used in predictive analytics,
where algorithms are trained to understand (or ‘learn’) the relationship between
independent variables and a dependent variable. The model can then forecast future
trends or predict outcomes from new data. For example, in a sales environment, an
organisation could use machine learning analytics to predict the next week’s sales,
taking account of a number of input factors.
Regression is a technique for investigating the relationship between a dependent
variable and an independent variable (or a series of independent variables). The most
common form is linear regression, which establishes a linear relationship between two
variables, based on a line of best fit.
Regression analysis uncovers the associations between variables. The degree of
association is measured by the correlation coefficient (denoted by ‘r’) and is
measured on a scale from +1 to -1. When one variable increases as the other
increases, the correlation is positive; conversely, if one variable decreases as the other
increases, the correlation is negative. The closer the value is to 1 or -1, the stronger
the correlation.
A related calculation is the coefficient of determination (calculated as r²). The
coefficient of determination identifies the proportion of changes in the dependent
variable which can be explained by changes in the independent variable.
However, it is important to remember that just because two events correlate, this
doesn’t necessarily mean that one causes the other.
WORKED EXAMPLE
Neen Co has identified there is a positive correlation between the amount of
advertising it does in a given period and its revenue in that period. Neen Co’s
management accountant has analysed the number of adverts per month in the last
year compared to monthly revenue and produced a line of best fit based on this.
Neen Co expects to place 120 adverts in the next month, and the CFO has
asked you, using the information prepared by the management accountant,
to forecast sales for the next month, but to note any concerns you have
about the forecast.
The regression analysis has identified that the line of best fit between the advertising
(the independent variable; ‘x’) and revenue (the dependent variable; ‘y’) is calculated
as:
y = 107.43x + 6,649.
Using this, we can calculate revenue for the next month as: (107.43 × 120) + 6,649 =
$19,541k.
However, in this scenario, r² = 0.80, meaning that 80% of the changes in revenue can
be explained by changes in advertising. This also means that 20% of the changes are
due to other factors.
There was a notable outlier in the last year: in one month the number of adverts was
around 115 – relatively close to the number of adverts forecast for the next month
(120) – yet revenue was slightly above $25 million. The linear model would have
predicted revenue of slightly over $19 million [(107.43 × 115) + 6,649 = $19,003k].
This reinforces that there are other factors which could influence revenue, not just
the number of adverts.
This links back to the opening illustration at the start of the article, about the range of
factors which could influence a sales forecast. Similarly, predictive analytics software
takes account of a range of factors when calculating forecasts, reinforcing the point
that it would be unrealistic to assume that sales can be accurately forecast on the
basis of one independent variable only.
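To make the arithmetic concrete, here is a minimal Python sketch of the Neen Co forecast. It uses only the fitted line and the r² value given in the scenario; the variable names are our own, not anything prescribed by the exam.

```python
# A minimal sketch of the Neen Co forecast, using the fitted line from the
# scenario: y = 107.43x + 6,649 (revenue in $000), with r-squared = 0.80.
a, b = 6_649, 107.43          # intercept and gradient of the line of best fit
r_squared = 0.80              # coefficient of determination from the analysis

adverts_next_month = 120
forecast_revenue = a + b * adverts_next_month     # in $000

print(f"Forecast revenue: ${forecast_revenue:,.1f}k")          # $19,540.6k (~$19,541k)
print(f"Changes explained by advertising: {r_squared:.0%}")    # 80%; 20% other factors
```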
Time series forecasting
Linear regression, which we have just discussed, analyses the relationship between
variables using a ‘line of best fit’; that is, a linear trend line.
However, using this type of simple linear relationship alone as the basis of forecasts
will not be realistic if there are seasonal variations within the data. In such
circumstances, time series analysis can be used to establish not only underlying
trends but also seasonal variations within the data. The trend and seasonal variations
can then be used together to help make predictions about the future.
ILLUSTRATIVE EXAMPLE
Jeps Co is an events and hospitality company, whose business is highly seasonal. The
quarterly sales figures for 20X3 – 20X5 are:
[Table: Jeps Co quarterly sales figures, 20X3 to 20X5]
The CFO has asked for a forecast for the sales figures for the first quarter of 20X6.
The assistant management accountant has begun this work. Using moving averages,
they calculated that the underlying trend is +$500 per quarter; the last moving
average the assistant calculated was for 20X5 Quarter 2: $261,500.
The assistant has also calculated that the seasonal adjustments for Q1 are either
-$79,000 (using an additive model) or 0.70 (using a multiplicative model).
Forecast for 20X6 Qtr 1
The last moving average calculated was 20X5 Qtr 2, which is three periods ago.
So the underlying trend value for 20X6 Qtr 1 will be: $261,500 + (500 × 3) =
$263,000.
We then have to adjust for the seasonal variation.
Additive: $263,000 − $79,000 = $184,000
Multiplicative: $263,000 × 0.70 = $184,100

However, as with any forecasts, care needs to be taken when using time series
analysis, because it is based on the assumption that the past is a good indicator of
what will happen in the future. In our simple example, we have assumed that the
underlying sales revenue will continue to grow by $500 per quarter. However, changes
in the external and competitive environment can create uncertainty, making forecasts
based on past observations unrealistic.
Similarly, effective forecasting relies on the ability to identify genuine patterns and
trends in the data. Therefore, analysts need to be able to distinguish random
fluctuations or outliers from underlying trends and seasonal variations.
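As a quick illustration of these workings, the following Python sketch reproduces the Jeps Co forecast from the figures given in the scenario (the names are illustrative only):

```python
# A minimal sketch of the Jeps Co 20X6 Q1 forecast. All inputs come from the
# scenario: last moving average $261,500 (at 20X5 Q2), trend of +$500 per
# quarter, and Q1 adjustments of -$79,000 (additive) or 0.70 (multiplicative).
last_moving_average = 261_500     # trend value at 20X5 Q2
trend_per_quarter = 500
periods_ahead = 3                 # 20X5 Q2 -> 20X6 Q1 is three quarters

trend_q1 = last_moving_average + trend_per_quarter * periods_ahead  # $263,000

additive_forecast = trend_q1 - 79_000          # $184,000
multiplicative_forecast = trend_q1 * 0.70      # $184,100

print(trend_q1, additive_forecast, multiplicative_forecast)
```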
Expected values
An important aspect of predictive analytics is that it doesn’t simply forecast possible
future outcomes; it also identifies the likelihood of those outcomes happening.
The availability of information regarding the probabilities of potential outcomes allows
the calculation of an expected value for the outcome.
The expected value indicates the expected financial outcome of a decision. It is
calculated by multiplying the value associated with each potential outcome by its
probability, and then summing the answers.
Expected values can be useful to evaluate alternative courses of action. When making
a decision that could have multiple outcomes, a business should look at the value of
each alternative and choose the one which has the most beneficial expected value (ie
the highest expected value when looking at sales or income; or the lowest expected
value when looking at costs).
WORKED EXAMPLE
Mewbix is launching a new cereal product in Deeland, a country with 10 million
households.
Mewbix has already introduced the product in some test areas across the country, and
- in conjunction with a marketing consultancy business – has been monitoring sales
and market share. This data has been supplemented by survey-based tracking of
consumer awareness, repeat purchase patterns, and customer satisfaction ratings.
Key findings from the test market and the subsequent customer research have
indicated two feasible selling prices for Mewbix: $2.50 or $3.00 per packet. The market
research has suggested that, for the coming year:
If the selling price is $2.50 per packet, 2% of the households in Deeland will buy
Mewbix. Of these, 30% are expected to purchase 1 packet per week, 45% are
expected to purchase 1 packet every 2 weeks, and 25% are expected to purchase 1
packet every 4 weeks.
If the selling price is $3.00 per packet, 1.5% of the households will buy Mewbix. Of
these, 25% are expected to purchase 1 packet per week, 50% are expected to
purchase 1 packet every 2 weeks, and 25% are expected to purchase 1 packet every 4
weeks.
Based on the findings from the test market and the subsequent customer
research, Mewbix’s CEO has asked for your advice about what price to sell
the new cereal for, and how much revenue he should forecast for it in next
year’s budget.
In order to give your advice, you need to forecast the revenue expected at each price:
At $2.50 per packet: 10m households × 2% = 200,000 households buying Mewbix.
Expected packets per week = (200,000 × 30% × 1) + (200,000 × 45% × 0.5) + (200,000 × 25% × 0.25) = 60,000 + 45,000 + 12,500 = 117,500
Expected annual revenue = 117,500 packets × 52 weeks × $2.50 = $15,275,000

At $3.00 per packet: 10m households × 1.5% = 150,000 households buying Mewbix.
Expected packets per week = (150,000 × 25% × 1) + (150,000 × 50% × 0.5) + (150,000 × 25% × 0.25) = 37,500 + 37,500 + 9,375 = 84,375
Expected annual revenue = 84,375 packets × 52 weeks × $3.00 = $13,162,500
The forecast data suggests that demand for the new cereal is elastic, such that using
the higher price leads to a significantly lower annual revenue. As such, the cereal
should be sold for $2.50 per packet, and sales of $15,275,000 should be budgeted for
the coming year.
However, as with any predictive models, there is no guarantee that the actual sales
will mirror the expected values.
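A short Python sketch of the expected-value workings is shown below. The structure of the calculation (households × purchase frequency × 52 weeks × price) follows the scenario; the variable names are illustrative only.

```python
# A minimal sketch of the Mewbix expected-revenue calculation.
HOUSEHOLDS = 10_000_000   # households in Deeland

# (price, proportion of households buying, [(share of buyers, packets/week)])
price_options = [
    (2.50, 0.02,  [(0.30, 1.0), (0.45, 0.5), (0.25, 0.25)]),
    (3.00, 0.015, [(0.25, 1.0), (0.50, 0.5), (0.25, 0.25)]),
]

for price, adoption, purchase_mix in price_options:
    buyers = HOUSEHOLDS * adoption
    packets_per_week = sum(buyers * share * rate for share, rate in purchase_mix)
    annual_revenue = packets_per_week * 52 * price
    print(f"${price:.2f} per packet: ${annual_revenue:,.0f}")
# $2.50 per packet: $15,275,000
# $3.00 per packet: $13,162,500
```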
Standard deviation
When analysing data sets, it can often be useful to calculate the average (mean)
value, to help get a representative estimate for the values in the data set. However,
looking at an average value could be misleading when the distribution of values in the
dataset is skewed, or when the distribution contains outliers.
Therefore, when looking at average values, it is also important to consider the
standard deviation in the dataset.
Standard deviation measures how clustered or dispersed a data set is in relation to its
mean.
A low standard deviation tells us that data is clustered around the mean, and therefore
the data is accurately characterised by its mean. Conversely, a high standard
deviation indicates data is more spread out, such that the mean may not accurately
represent the data set. As such, the average is a less reliable indicator of the
individual values in a data set where the standard deviation is high, compared to a
situation where the standard deviation is low.
WORKED EXAMPLE
Customers who have stayed at Hotel Vaykance are encouraged to complete a survey,
rating how much they enjoyed their stay on a scale from 1 to 5, with 1 being ‘Not
enjoyed at all’ and 5 being ‘Enjoyed greatly’. The survey then asks further questions,
helping management understand why customers have awarded the score they have.
The ‘Average satisfaction score’ is a key performance indicator (KPI) for the business,
and is reported in the monthly management information. The KPI reported in last
month’s management information was 2.84.
The standard deviation was 1.9, but the standard deviation figure isn’t currently
included in the management information. However, the CFO has asked for standard
deviation to be included going forwards.
The results from the last month’s customer satisfaction surveys are summarised in the
graph below.
[Graph: distribution of customer satisfaction scores from last month’s surveys]
The CFO has asked you to explain the significance of standard deviation
when assessing the results.

The average satisfaction score (2.84) suggests that customers are reasonably well
satisfied with their stay. However, this does not accurately reflect the population,
which was polarised between guests who either enjoyed their stay very much (41%
scoring ‘5’) or not at all (46% scoring ‘1’). The graph of the survey results illustrates
this polarisation very clearly.
The standard deviation also highlights this polarisation. A standard deviation of +/- 1.9
compared to an average of 2.84 is very high.
In a scenario like this, where scores were only given between 1 and 5, the highest
standard deviation possible would be 2 (ie if 50% of respondents had given a customer
satisfaction rating of 5, and 50% had given a rating of 1, the average would be 3, but
the standard deviation would be 2). The actual standard deviation of 1.9 is very close
to this theoretical maximum, meaning it is very high.
The high standard deviation implies that a large proportion of the dataset is far away
from the mean, and therefore it is risky to draw conclusions using the mean.
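The following Python sketch shows how the mean and standard deviation behave with a polarised distribution. The exact split of scores below is an assumption for illustration (46% at ‘1’, 41% at ‘5’, with the remaining 13% spread across 2 to 4), chosen to be consistent with the reported mean of 2.84 and standard deviation of about 1.9.

```python
# A minimal sketch: mean and standard deviation of a polarised score
# distribution. The proportions are assumed for illustration only.
import numpy as np

scores = np.array([1, 2, 3, 4, 5])
proportions = np.array([0.46, 0.08, 0.03, 0.02, 0.41])  # assumed response mix

mean = np.average(scores, weights=proportions)                    # 2.84
variance = np.average((scores - mean) ** 2, weights=proportions)
std_dev = np.sqrt(variance)                                       # ~1.89

print(f"mean = {mean:.2f}, standard deviation = {std_dev:.2f}")
```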
Written by a member of the APM examining team
Being able to understand the relationship between different factors is very important
for organisations. For example, it would be useful to understand the relationship
between advertising spend and sales generated from that advertising spend or
between the production level and the total production costs. Understanding these
relationships allows organisations to make better predictions of what sales or costs will
be in the future. This will be invaluable when budgeting or forecasting.
This article will look at how the relationships between variables can be analysed using
the ‘line of best fit’ method and regression analysis, and how the strength of these
relationships can be measured using correlation.
Relationship between variables
In any relationship between two variables there is an independent variable and a
dependent variable; the size of the movements in the dependent variable depends
on the size of the movements in the independent variable. For example, the total
cost of a production process depends on the level of activity.
Consider the following data produced by a company over the last two years.
Period     Activity level (000 units)     Total production cost ($000)
20X1 Q1              15                              300
20X1 Q2              45                              615
20X1 Q3              25                              470
20X1 Q4              55                              680
20X2 Q1              30                              520
20X2 Q2              20                              350
20X2 Q3              35                              590
20X2 Q4              60                              740
The company wants to understand the relationship between the activity level and total
production cost so that it can forecast total production costs going forward.
Line of best fit
One method of understanding the relationship between the variables is the line of best
fit method. All the data given is plotted on a chart. The activity level is the
independent variable (as described above) and it is shown on the x (horizontal) axis.
The total production cost is the dependent variable and it is shown on the y (vertical)
axis.
Once all the data is plotted on the graph, a line of best fit can be drawn:
[Chart: activity level (x-axis) plotted against total production cost (y-axis), with a line of best fit drawn through the data points]
In this case some of the points are on the line and some are above and below, but
most are close to the line which suggests that there is a relationship between activity
level and the total production cost.
This ‘line of best fit’ can be used to predict what will happen at other levels of
production. For levels of production which don’t fall within the range of the previous
levels, it is possible to extrapolate the ‘line of best fit’ to forecast other levels by
reading the value from the chart.
This is a straightforward technique, but it has some limitations. The main one is that
the ‘line of best fit’ is estimated by eye from the data points plotted, so different
lines may be drawn from the same set of data points. A method which can overcome
this weakness is regression analysis.
Regression analysis
Regression analysis also uses the historic data and finds a line of best fit, but does so
statistically, making the resulting line more reliable.
We assume a linear (straight line) relationship between the variables and that the
equation of a straight line is:
y = a + bx
where:
a is the fixed element (where the line crosses the y axis)
b is the variable element (gradient of the line) and
x and y relate to the x and y variables.
a and b are calculated using the following formulae:
b = (nΣxy − ΣxΣy) ÷ (nΣx² − (Σx)²)

a = (Σy ÷ n) − b(Σx ÷ n)

These formulae are given on the PM formulae sheet.
The easiest way to tackle these calculations is to first set up a table with columns for
x, y, xy and x².
(Note: the table below also contains a column for y², which will be required in a later
calculation.)
Period       x (units, 000s)   y (total cost, $000)       xy        x²          y²
20X1 Q1            15                  300              4,500       225       90,000
20X1 Q2            45                  615             27,675     2,025      378,225
20X1 Q3            25                  470             11,750       625      220,900
20X1 Q4            55                  680             37,400     3,025      462,400
20X2 Q1            30                  520             15,600       900      270,400
20X2 Q2            20                  350              7,000       400      122,500
20X2 Q3            35                  590             20,650     1,225      348,100
20X2 Q4            60                  740             44,400     3,600      547,600
Totals (Σ)        285                4,265            168,975    12,025    2,440,125
Substituting the totals into the formulae:

b = ((8 × 168,975) − (285 × 4,265)) ÷ ((8 × 12,025) − 285²) = 136,275 ÷ 14,975 = 9.1008, or 9.1 to one decimal place

a = (4,265 ÷ 8) − (9.1008 × 285 ÷ 8) = 533.125 − 324.22 = 208.90

The equation of the regression line (in the form y = a + bx) becomes:
y = 208.90 + 9.1x
Using this equation, it is easy to forecast total costs at different levels of production.
For example, for a production level of 80,000 units, the estimate of total cost will be:
208.90 + (9.1 × 80) = 936.90, or $936,900.
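As a check on the workings, the least-squares calculation can be reproduced in a few lines of Python using the totals from the table. This is an illustrative sketch only; the variable names are our own.

```python
# A minimal sketch of the least-squares workings, using the column totals:
# n = 8, sum(x) = 285, sum(y) = 4,265, sum(xy) = 168,975, sum(x^2) = 12,025.
n = 8
sum_x, sum_y = 285, 4_265
sum_xy, sum_x2 = 168_975, 12_025

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # gradient, ~9.10
a = sum_y / n - b * (sum_x / n)                               # intercept, ~208.90

cost_at_80k_units = a + b * 80    # forecast in $000 for 80,000 units
print(f"y = {a:.2f} + {b:.2f}x; cost = ${cost_at_80k_units:,.2f}k")
# y = 208.91 + 9.10x; cost ~ $937k (the article's $936,900 uses b rounded to 9.1)
```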
How reliable this estimate is will depend on the strength of the relationship between
the two variables; how much of the change in y can be explained by the change in x?
The stronger the relationship between the variables, the more reliance can be placed
on the equation calculated and the better the forecasts will be.
A measure of the strength of the relationship between the variables is correlation.
Correlation
Two variables are said to be correlated if they are related to one another and if
changes in one tend to accompany changes in the other. Correlation can be positive
(where increases in one variable result in increases in the other) or negative (where
increases in one variable result in decreases in the other).
The chart shown in the ‘line of best fit’ section above shows a strong positive
correlation. Some other relationships are shown below:

[Charts: examples of other correlation patterns, including negative correlation]

It is possible that there is no correlation between the variables. A horizontal line would
suggest no correlation, as would the following:

[Chart: randomly scattered data points, suggesting no correlation]
Where a company wants to use past data to forecast the future, the stronger the
correlation, the better the estimates will be.
The strength of correlation between variables can be measured by the correlation
coefficient which can be calculated using the following formula:
r = (nΣxy − ΣxΣy) ÷ √[(nΣx² − (Σx)²)(nΣy² − (Σy)²)]
r = 1 denotes perfect positive linear correlation


r = -1 denotes perfect negative linear correlation
r = 0 denotes no linear correlation
The value of the correlation coefficient must lie between -1 and 1. The closer the value
is to 1 or -1, the stronger the correlation.
Using the previous example to calculate r:

r = 136,275 ÷ √(14,975 × 1,330,775) = 136,275 ÷ 141,168 = 0.965

r = 0.965, which indicates a strong positive correlation.
A further calculation is the coefficient of determination, which is calculated as r².
The coefficient of determination gives the proportion of changes in y (the dependent
variable) that can be explained by changes in x (the independent variable). In this
example, r² = 0.931, so 93.1% of the changes in total production cost can be
explained by changes in activity levels. This means that 6.9% of the changes must be
due to other factors.
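The same totals (plus Σy²) let us reproduce r and r² programmatically. Again, this is just an illustrative sketch of the formula above:

```python
# A minimal sketch of the correlation workings, using the column totals:
# n = 8, sum(x) = 285, sum(y) = 4,265, sum(xy) = 168,975,
# sum(x^2) = 12,025, sum(y^2) = 2,440,125.
from math import sqrt

n = 8
sum_x, sum_y = 285, 4_265
sum_xy, sum_x2, sum_y2 = 168_975, 12_025, 2_440_125

numerator = n * sum_xy - sum_x * sum_y
denominator = sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
r = numerator / denominator

print(f"r = {r:.3f}, r squared = {r * r:.3f}")
# r = 0.965; r squared ~ 0.932 (0.931 in the article, where r is rounded first)
```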
Conclusion
Care must be taken, however, when using regression analysis and correlation to make
future forecasts. The calculations performed can only suggest that a relationship exists
between the factors; they cannot prove the relationship. It is possible that there are
other factors involved in the changes in the variables which may not have been
considered.
Also, like time series analysis, which is dealt with in a separate article, regression
analysis uses past observations to attempt to predict what will happen in the future.
The assumption that what has happened in the past is a good indicator of what will
happen in the future is a simplistic assumption. In the real world, changes in the
environment (technological, social, environmental, political, economic etc) can all
create uncertainty, making forecasts made from past observations unrealistic.
Written by a member of the Performance Management examining team

Time series analysis can be used to analyse historic data and establish any underlying
trend and seasonal variations within the data. The trend refers to the general direction
the data is heading in and can be upward or downward. The seasonal variation refers
to the regular variations which exist within the data. This could be a weekly variation
with certain days traditionally experiencing higher or lower sales than other days, or it
could be monthly or quarterly variations.
The trend and seasonal variations can be used to help make predictions about the
future – and as such can be very useful when budgeting and forecasting.
Calculating moving averages
One method of establishing the underlying trend (smoothing out peaks and troughs) in
a set of data is using the moving averages technique. Other methods, such as
regression analysis can also be used to estimate the trend. Regression analysis is dealt
with in a separate article.
A moving average is a series of averages, calculated from historic data. Moving
averages can be calculated for any number of time periods, for example a three-
month moving average, a seven-day moving average, or a four-quarter moving
average. The basic calculations are the same.
The following simplified example will take us through the calculation process.
Monthly sales revenue data were collected for a company for 20X2:

Month          Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
Sales ($000)   125   145   186   131   151   192   137   157   198   143   163   204

From this data, we will calculate a three-month moving average, as we can see a
basic cycle that follows a three-monthly pattern (increases January – March, drops for
April then increases April – June, drops for July and so on). In an exam, the question
will state what time period to use for this cycle/pattern in order to calculate the
averages required.
Step 1 – Create a table
Create a table with 5 columns, shown below, and list the data items given in columns
one and two. The first three rows from the data given above have been input in the
table:
[Table: month and sales columns completed for January to March]

Step 2 – Calculate the three-month moving average.
Add together the first three sets of data, for this example it would be January, February
and March. This gives a total of (125+145+186) = 456. Put this total in the middle of
the data you are adding, so in this case across from February. Then calculate the
average of this total, by dividing this figure by 3 (the figure you divide by will be the
same as the number of time periods you have added in your total column). Our three-
month moving average is therefore (456 ÷ 3) = 152.
The average needs to be calculated for each three-month period. To do this you move
your average calculation down one month, so the next calculation will involve
February, March and April. The total for these three months would be (145+186+131)
= 462 and the average would be (462 ÷ 3) = 154.
Continue working down the data until you no longer have three items to add together.
Note: you will have fewer averages than the original observations as you will lose the
beginning and end observations in the averaging process.
Step 3 – Calculate the trend
The three-month moving average represents the trend. From our example we can see
a clear trend in that each moving average is $2,000 higher than the preceding one.
This suggests that the sales revenue for the company is, on average, growing at a
rate of $2,000 per month.
This trend can now be used to predict future underlying sales values.
Step 4 – Calculate the seasonal variation
Once a trend has been established, any seasonal variation can be calculated. The
seasonal variation can be assumed to be the difference between the actual sales and
the trend (three-month moving average) value. Seasonal variations can be calculated
using the additive or multiplicative models.
Using the additive model:
To calculate the seasonal variation, go back to the table and for each average
calculated, compare the average to the actual sales figure for that period.
Month   Actual sales ($000)   Trend ($000)   Seasonal variation ($000)
Feb            145                152                 -7
Mar            186                154                +32
Apr            131                156                -25
May            151                158                 -7
Jun            192                160                +32
Jul            137                162                -25
Aug            157                164                 -7
Sep            198                166                +32
Oct            143                168                -25
Nov            163                170                 -7
A negative variation means that the actual figure in that period is less than the trend
and a positive figure means that the actual is more than the trend.
From the data we can see a clear three-month cycle in the seasonal variation. Every
first month has a variation of -7, suggesting that this month is usually $7,000 below
the average. Every second month has a variation of +32, suggesting that this month is
usually $32,000 above the average. Every third month has a variation of -25,
suggesting that the actual will be $25,000 below the average.
It is assumed that this pattern of seasonal adjustment will be repeated for each three-
month period going forward.
Using the multiplicative model:
If we had used the multiplicative model, the variations would have been expressed as
a percentage of the average figure, rather than an absolute. For example:
Month   Actual sales ($000)   Trend ($000)   Seasonal variation (actual ÷ trend)
Feb            145                152                 0.95
Mar            186                154                 1.21
Apr            131                156                 0.84
This suggests that month 1 is usually 95% of the trend, month 2 is 121% and month 3
is 84%. The multiplicative model is a better method to use when the trend is
increasing or decreasing over time, as the seasonal variation is also likely to be
increasing or decreasing.
Note that with the additive model the three seasonal variations must add up to zero
(32 − 25 − 7 = 0). Where this is not the case, an adjustment must be made. With the
multiplicative model the three seasonal variations must add up to three (0.95 + 1.21 +
0.84 = 3). (If it were a four-period average, the four seasonal variations would add up
to four, etc.) Again, if this is not the case, an adjustment must be made.
In this simplified example the trend shows an increase of exactly $2,000 each month,
and the pattern of seasonal variations is exactly the same in each three-month period.
In reality a time series is unlikely to give such a perfect result.
Step 5 – Using time series to forecast the future
Now that the trend and the seasonal variations have been calculated, these can be
used to predict the likely level of sales revenue for the future.
Question:
Using the above example, what is the predicted level of sales revenue for
June 20X3 and July 20X3?
Solution:
Start with the trend then apply the seasonal variations. We calculated an increasing
trend of $2,000 per month. The last figure we calculated was for November 20X2
showing $170,000. If we assume the trend continues as it has done previously, then
by June 20X3, the sales revenue figure will have increased by $14,000 ($2,000 per
month for seven months). Adding this to the figure we have for November, we can
predict the underlying trend value for June 20X3 to be $184,000 ($170,000 +
$14,000).
We know that sales exhibit a seasonal variation. Taking account of the seasonal
variation will give us a better estimate for June 20X3. From the table in step 4, we can
see that June has a positive variation of $32,000.
Our estimate for the sales revenue for June 20X3 is therefore $184,000 + $32,000 =
$216,000.
For July, the underlying trend value will be $170,000 + $16,000 = $186,000. The
seasonal variation for July 20X3 is a negative variation of $25,000, therefore our
estimate for the sales revenue for July 20X3 is $186,000 - $25,000 = $161,000.
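The whole worked example can be reproduced in a short Python sketch. The list of monthly sales is the 20X2 data above, and the final lines apply the trend and seasonal variations to forecast June and July 20X3; the variable names are illustrative only.

```python
# A minimal sketch of the three-month moving-average workings ($000).
sales_20x2 = [125, 145, 186, 131, 151, 192, 137, 157, 198, 143, 163, 204]

# Three-month moving averages (the trend), centred on the middle month
trend = [sum(sales_20x2[i:i + 3]) / 3 for i in range(len(sales_20x2) - 2)]
# -> [152, 154, 156, ..., 170]: rising by $2k per month

# Additive seasonal variations: actual minus the trend for the centred month
variations = [sales_20x2[i + 1] - trend[i] for i in range(len(trend))]
# -> repeating cycle of -7, +32, -25

november_trend = trend[-1]                  # 170 (November 20X2)
june_20x3 = november_trend + 7 * 2 + 32     # 216 -> $216,000 (7 months on, +32)
july_20x3 = november_trend + 8 * 2 - 25     # 161 -> $161,000 (8 months on, -25)
print(june_20x3, july_20x3)
```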
Calculating moving averages for an even number of periods
In the above example, we used a three-month moving average. Looking back at step
2, we can see that the average is shown against the mid-point of the three
observations. The mid-point of the period for January, February and March is shown
against the February observation.
When we are calculating a moving average with an even number of periods, for
example a four-quarter moving average, we do the same basic calculation, but the
mid-point will lie between observations. From step 4 above, we can see that we need
the moving average to be shown against an observation so that the seasonal variation
can be calculated. We therefore calculate the four-quarter moving average as before,
but we then calculate a second moving average.
In the example below, the four-quarter moving averages have been calculated in the
same way as before. The first four observations are added together and then divided
by four. The four-quarter moving average for the first four quarters is 322.50. Moving
to the next four observations, gives an average of 327.50. We can then work out the
mid-point of these two averages by adding them together and dividing by two. This
gives a mid-point of (322.50 + 327.50) ÷ 2 = 325. This mid-point is our trend and the
figure is shown against the quarter 3, 20X8 observation. All other calculations are
done in the same way as our original example.
[Table: four-quarter moving averages and centred averages, with the centred trend of 325 shown against quarter 3, 20X8]
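A brief sketch of the centring step is shown below. The quarterly figures are assumptions chosen only so that the first two four-quarter averages come to 322.50 and 327.50, as in the example above:

```python
# A minimal sketch of a centred four-quarter moving average.
quarterly_sales = [300, 320, 340, 330, 320]   # hypothetical quarterly figures

# Four-quarter moving averages sit between observations...
four_qtr_avgs = [sum(quarterly_sales[i:i + 4]) / 4
                 for i in range(len(quarterly_sales) - 3)]   # [322.5, 327.5]

# ...so average each adjacent pair to centre the trend on an observation
centred_trend = [(four_qtr_avgs[i] + four_qtr_avgs[i + 1]) / 2
                 for i in range(len(four_qtr_avgs) - 1)]     # [325.0]
print(four_qtr_avgs, centred_trend)
```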
Conclusion
Care must be taken, however, when using time series analysis. This forecasting method
is based on the assumption that what has happened in the past is a good indicator of
what is likely to happen in the future. In this example the suggestion is that sales
revenue will continue to grow by $2,000 per month indefinitely. If we consider the
concept of the product lifecycle, we can see that this is a rather simplistic and flawed
assumption.
In the real world, changes in the environment (technological, social, environmental,
political, economic etc) can all create uncertainty, making forecasts made from past
observations unrealistic.