Chapter-3 Final

[Chapter overview diagram: an Information System (IS) comprises People Resources, Hardware, Software, Data Resources, and Networking and Communication Systems. IS Controls are classified by Objectives of Controls, Nature of IS Resources (Environmental, Physical, Logical Access) and Audit Functions (Managerial, Application), together with Security Audit, Audit Trail Controls and IS Auditing.]
INTRODUCTION
We now have systems that constantly exchange information about various things, and even about
us. This inter-networking of physical devices, vehicles, smart devices, embedded electronics, software,
sensors and similar devices is often referred to as IoT (Internet of Things).
What is interesting about these emerging technologies is that at their core lie some key elements,
namely People, Computer Systems (Hardware, Operating System and other Software), Data
Resources, and Networking and Communication Systems. In this chapter, we are going to explore each of
those key elements.
INFORMATION SYSTEMS: An Information System (IS) is a combination of people, hardware, software,
communication devices, and network and data resources that processes (stores, retrieves,
transforms) data and information for a specific purpose. The system needs inputs from the user
(keying instructions and commands, typing, scanning), which are then processed (calculating,
reporting) using technology devices such as computers to produce output (printed reports,
displayed results) that is sent to another user or another system via a network, with a
feedback method that controls the operation.
The main aim and purpose of every IS is to convert data into information that is
useful and meaningful. An IS depends on the resources of people (end users and IS
specialists), hardware (machines and media), software (programs and
procedures), data (data and knowledge bases), and networks (communications
media and network support) to perform input, processing, output, storage, and
control activities that transform data resources into information products. This
information system model highlights the relationships among the components
and activities of information systems. It also provides a framework that
emphasizes four major concepts that can be applied to all types of information
systems.
An Information System model comprises the following steps:
Input: Data is collected from the organization or from its external environment and
converted into the format required for processing.
Processing: The input data is converted into a meaningful form, for example by
calculating, comparing or summarizing.
Output: The resulting information is stored for future use or communicated to the user
after the respective procedures have been applied to it.
Functions of ISs
[Figure: Functions of ISs — input from the user is processed to produce output, with feedback and control (decision makers, auto control) regulating the operation.]
People Resources
While thinking about IS, it is easy to get too focused on the technological components; we
must look beyond these tools at the whole picture and try to understand how technology
integrates into an organization. A focus on the people involved in IS is the next step. From the helpdesk to
the system programmers all the way up to the Chief Information Officer (CIO), all of them are essential
elements of information systems. People are the most important element in most Computer-Based
Information Systems (CBIS). The people involved include users of the system and IS personnel,
including all the people who manage, run, program, and maintain the system.
In an ever-changing world, innovation is the key to sustaining long-run growth. More and
more firms are realizing the importance of innovation for gaining competitive advantage, and are
accordingly engaging in various innovative activities. Understanding these layers of an
information system helps an enterprise grapple with the problems it faces and innovate,
perhaps to reduce the total cost of production, increase income avenues and increase the efficiency of its systems.
I. Hardware
Hardware is the tangible portion of our computer systems; something we can touch and see. It
basically consists of devices that perform the functions of input, processing, data storage and output
activities of the computer.
Typical hardware architecture consists of:
Input devices: Input devices are used for providing data and instructions to the computer.
These are the devices through which we interact with the system, and they include the
keyboard, mouse and other pointing devices, scanners and bar code readers, MICR readers,
webcams, microphones and stylus/touch screens.
• Keyboard helps to provide text-based input.
• Mouse helps to provide menu- or selection-based input.
• Scanners and webcams help in image-based input.
• Microphone helps to provide voice-based input.
Processing devices: These include the computer chips that contain the Central Processing Unit
and main memory. The Central Processing Unit (CPU or microprocessor) is the hardware that
interprets and executes the program (software) instructions and coordinates how all
the other hardware devices work together.
The CPU is built on a small flake of silicon and can contain the equivalent of several
million transistors.
We can think of transistors as switches which can be ‘ON’ or ‘OFF’, i.e., taking a
value of 1 or 0. The processor or CPU is like the brain of the computer.
Data storage devices: These refer to the memory where data and programs are stored. The
various types of memory are given below:
Internal memory: Includes processor registers and cache memory.
• Processor Registers: Registers are internal memory within the CPU; they are very fast
and very small.
• Cache memory: There is a huge speed difference between registers and primary
memory. This results in slow processing, because RAM supplies data to the CPU at a
comparatively low speed. To bridge this gap, cache memory is used. The cache
(pronounced ‘cash’) is a smaller, faster memory that stores copies of the most
frequently used main-memory locations so that the processor/registers can access
them faster than from main memory.
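The caching principle described above can be sketched in software. The sketch below is a toy illustration of the idea only (a hardware cache is not built this way); the capacity, addresses and the FIFO eviction rule are all illustrative assumptions.

```python
# Toy illustration of the cache principle: keep recently used values in a
# small fast store so repeated requests avoid the slower main-memory lookup.
SLOW_MEMORY = {addr: addr * 2 for addr in range(1000)}  # stand-in for main memory

cache = {}            # small, fast store (stand-in for CPU cache)
CACHE_CAPACITY = 4
hits = misses = 0

def read(addr):
    """Return the value at addr, using the cache when possible."""
    global hits, misses
    if addr in cache:
        hits += 1
        return cache[addr]
    misses += 1
    value = SLOW_MEMORY[addr]         # slow path: go to main memory
    if len(cache) >= CACHE_CAPACITY:
        cache.pop(next(iter(cache)))  # evict the oldest entry (FIFO)
    cache[addr] = value
    return value

for addr in [1, 2, 1, 1, 3, 2]:       # repeated addresses hit the cache
    read(addr)
print(hits, misses)   # 3 hits (the repeated 1s and the 2), 3 misses
```

Because the sequence repeats addresses 1 and 2, half the reads are served from the fast store without touching main memory, which is exactly the benefit a hardware cache provides.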
Primary memory/Main memory: These are devices in which any location can be accessed in
any order, i.e. randomly (in contrast with sequential order). The two popular primary
memories are (i) RAM and (ii) ROM.
RAM (Random Access Memory):
• Volatile in nature (information is lost as soon as power is turned off).
• Its purpose is to hold programs and data while they are in use.
• Information can be read as well as modified.
• Responsible for storing the instructions and data that the computer is using at
that present moment.
ROM (Read Only Memory):
• Non-volatile in nature (contents remain in the system even in the absence of power).
• Used to store small amounts of information for quick reference by the CPU.
• Information can be read but not modified.
• Generally used by manufacturers to store data and programs that are used
repeatedly, like translators.
Secondary memory: Main or primary memory is volatile in nature and is used to store the
data and instructions being executed; it cannot store data on a permanent basis and
provides only a small storage capacity. In addition to primary memory, a computer
therefore uses secondary memory, which provides permanent storage and is available in
large capacity, e.g. hard disks and CDs/DVDs.
These memories are known as secondary storage because they are not directly accessible
by the CPU; data in them is transferred through RAM or primary memory. Secondary
storage does not lose its data when the device is switched off or shut down, i.e. it is
non-volatile. The features of secondary memory devices are:
• Non-volatility (content can be stored permanently);
• Large capacity (available in large sizes, e.g. hard disk);
• Low cost (lower cost compared to registers and RAM);
• Slow speed (slower compared to registers or primary storage).
Secondary storage devices differ from each other in terms of speed and access time,
cost/portability, capacity and type of access. Based on these parameters, the most
common types of secondary storage are USB pen drives, memory cards, floppy disks,
hard disks, CDs, DVDs, Blu-ray discs, smart cards, etc.
Virtual Memory: Virtual memory is not an actual memory but an imaginary memory
area supported by operating systems such as Windows.
It is a memory-management technique that helps execute large programs with only a
small amount of RAM available.
If a computer lacks the RAM needed to run a task, the operating system uses virtual
memory to compensate.
Virtual memory combines the computer's RAM with temporary space on the hard disk:
when RAM runs low, data is moved from RAM to a space on the hard disk called a
paging file.
Moving data to and from the paging file frees up RAM to complete its work.
Thus, virtual memory is an allocation of hard disk space to help RAM.
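The swapping described above can be sketched as a toy simulation. This is an illustration of the paging idea only, not how a real operating system implements it; the two-page RAM capacity, the page names and the "evict the oldest page" rule are invented for the example.

```python
# Toy sketch of paging: when RAM is full, a page is moved out to a
# paging file on disk so that the needed page can be loaded.
RAM_CAPACITY = 2          # pretend RAM holds only two pages
ram = {}                  # page number -> page contents
paging_file = {}          # pages swapped out to the disk's paging file

def access(page, contents=None):
    """Bring `page` into RAM, swapping the oldest page to disk if needed."""
    if page in ram:
        return ram[page]
    if page in paging_file:               # page fault: bring it back from disk
        contents = paging_file.pop(page)
    if len(ram) >= RAM_CAPACITY:          # RAM full: evict the oldest page
        victim, data = next(iter(ram.items()))
        paging_file[victim] = data
        del ram[victim]
    ram[page] = contents
    return contents

access(1, "program A")
access(2, "program B")
access(3, "program C")    # RAM is full, so page 1 moves to the paging file
print(sorted(ram), sorted(paging_file))   # [2, 3] [1]
```

The program "sees" three pages even though the pretend RAM holds only two — exactly the effect virtual memory achieves by borrowing hard disk space.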
[Figure: Memory hierarchy — registers, cache, primary memory and secondary memory, with virtual memory supplementing RAM.]
II. Software
Software is defined as a set of instructions that tell the hardware what to do. Software is created
through the process of programming. Without software, the hardware would not be functional.
Software can be broadly divided into two categories: Operating Systems Software and Application
Software as shown in the Fig. 3.3.3. Operating systems manage the hardware and create the interface
between the hardware and the user. Application software is the category of programs that do some
processing/task for the user.
[Fig. 3.3.3: Software — Operating Systems Software and Application Software.]
Functions of OS
The key functions provided by OS are as follow:
Performing hardware functions: The OS helps in performing hardware tasks such as obtaining
input from the keyboard and mouse, accessing data on the hard disk and displaying output on
the monitor. The OS acts as an intermediary between the application program and the
hardware.
User Interfaces: The OS provides a user interface for working on a computer. Previously it
provided a Command-based User Interface (CUI), i.e. text commands were given to the
computer to execute an activity. Nowadays the OS provides a Graphical User Interface
(GUI), which provides icons and menus for executing activities on a computer in a
user-friendly manner, e.g. Windows.
Hardware Independence: The OS provides Application Program Interfaces (APIs) for
connecting to different types of hardware; that is, the OS provides internal code through
which programs can work with any hardware configured on the computer system.
Memory Management: The OS provides efficient memory management by allocating the
required memory (RAM) space for the files and programs being executed, and reclaiming
the space once they are closed. Operating systems also provide Virtual Memory by creating
an area of the hard disk to supplement the memory capacity of RAM. In this way the OS
augments RAM by creating a virtual RAM.
Task Management: The OS can execute many tasks simultaneously and keeps track of the
resources used by the multiple jobs/tasks being executed. In the case of multitasking, the
OS maintains a queue of tasks and schedules them for execution by the same CPU.
Networking Capability: The OS provides many features and capabilities that help configure
computers for network and Internet connections.
For example, the Network and Internet feature in the Control Panel of Windows 8 helps
configure network and Internet connectivity.
Logical access security: The operating system provides many security features.
For example, it provides user identification and user authentication through a User ID and
Password.
File management: The OS performs efficient file management by allowing users to give
appropriate names to files and by providing folders or directories for organizing them. It
allocates storage space for files efficiently and supports features like multi-user sharing of
the same file.
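The file-management services just described (naming files, creating folders, allocating and reporting storage space) are exposed to application programs through OS calls; Python's standard library wraps them. The folder and file names below are made up for illustration, and a temporary scratch directory is used so the example leaves no trace.

```python
# Asking the OS for file-management services through standard library wrappers.
import os
import tempfile

workdir = tempfile.mkdtemp()                 # OS allocates a scratch folder
folder = os.path.join(workdir, "reports")
os.mkdir(folder)                             # OS creates a directory entry
path = os.path.join(folder, "sales.txt")
with open(path, "w") as f:                   # OS allocates space for the file
    f.write("Q1 sales: 1200")

print(os.listdir(folder))                    # ['sales.txt']
print(os.path.getsize(path))                 # size in bytes, as tracked by the OS
```

Every call here (`mkdir`, `open`, `listdir`, `getsize`) ultimately becomes a request to the operating system's file-management function.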
III. Data Resources
You can think of data as a collection of facts.
For example, your street address, the city you live in, and your phone number are all pieces of data.
Like software, data is intangible. By themselves, pieces of data are not very useful; but aggregated
and organized together into a database, data can become a powerful tool for businesses.
For years, business houses have been gathering information about customers, suppliers,
business partners, markets, costs, price movements and so on. Having collected this information
for years, companies have now started analyzing it and creating important insights out of the
data.
Data is now helping companies to create strategy for the future, which is precisely why we have
started hearing a lot about data analytics in the past few years.
Data:
- Data are the raw pieces of information with no context.
- Data can be quantitative or qualitative.
- Quantitative data is numeric, the result of a measurement, count, or some calculation.
- Qualitative data is descriptive.
For example, if I tell you my favorite number is 5, that is qualitative data, because it is descriptive
rather than the result of a measurement or calculation; a number can, however, also be quantitative data.
Data is not useful by itself. To make it useful, it needs to be given in some context.
Returning to the example above, if I told you that ‘15, 23, 14, and 85’ are the numbers of students that
had registered for upcoming classes, that would be information. By adding the context – that the
numbers represent the count of students registering for specific classes – I have converted data into
information.
Once we have put our data into context, aggregated and analyzed it, we can use it to make decisions
for our organization.
We can say that this consumption of information produces knowledge.
This knowledge can be used to make decisions, set policies, and even spark innovation.
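The data-to-information step above can be made concrete in a few lines. The class names paired with the counts are invented for the example (the original only gives the numbers); the point is that the same numbers become useful once they carry context and are aggregated.

```python
# Data: raw numbers with no context.
raw_data = [15, 23, 14, 85]

# Information: the same numbers placed in context (class names are
# hypothetical, chosen only for illustration).
registrations = {
    "Accounting": 15,
    "Economics": 23,
    "Statistics": 14,
    "Information Systems": 85,
}

# Aggregation and analysis turn information into a basis for decisions.
total = sum(registrations.values())
largest = max(registrations, key=registrations.get)
print(total, largest)   # 137 Information Systems
```

A decision-maker can now act on this (say, allot the biggest classroom to the most-subscribed class) — which is the "knowledge" stage described above.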
Database:
The goal of many IS is to transform data into information to generate knowledge that can be used
for decision making.
To do this, the system must be able to take data, put the data into context, and provide tools for
aggregation and analysis.
A database is designed for just such a purpose.
- A database is an organized collection of related information.
- It is called an organized collection because in a database all data is described and associated
with other data.
- All information in a database should be related as well; separate databases should be created
to manage unrelated information.
For example, a database that contains information about students should not also hold information
about company stock prices.
A DBMS may be defined as software that aids in organizing, controlling and using the data
needed by application programs. It provides the facility to create and maintain a well-organized
database. Applications access the DBMS, which then accesses the data.
Commercially available Database Management Systems include Oracle, MySQL, SQL Server and DB2.
Microsoft Access and OpenOffice Base are examples of personal database management systems.
These systems are primarily used to develop and analyze single-user databases; such databases are
not meant to be shared across a network or the Internet, but are instead installed on a device and
work with a single user at a time.
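The pattern "applications access the DBMS, which then accesses the data" can be seen in a minimal sketch using SQLite, a small DBMS that ships with Python. The table and names are made up for illustration, and an in-memory database is used so nothing is written to disk.

```python
# A minimal application-to-DBMS interaction: create, insert, query.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute(
    "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, course TEXT)"
)
conn.executemany(
    "INSERT INTO students (name, course) VALUES (?, ?)",
    [("Asha", "Accounting"), ("Ravi", "Economics"), ("Meena", "Accounting")],
)
# The application never touches the stored data directly; it asks the DBMS.
rows = conn.execute(
    "SELECT course, COUNT(*) FROM students GROUP BY course ORDER BY course"
).fetchall()
print(rows)   # [('Accounting', 2), ('Economics', 1)]
conn.close()
```

Note that the application describes *what* it wants (an SQL query) and the DBMS decides *how* to organize, store and retrieve the data — the data independence listed later among the advantages of a DBMS.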
Database Models:
Databases can be organized in many ways, and thus take many forms. A database model is a type
of data model that determines the logical structure of a database and fundamentally determines in
which manner data can be stored, organized and manipulated. Let us now look at some common
database models.
A. Hierarchical Database Model: In this model, records are organized as a tree.
All records in the hierarchy are called Nodes. Each node is related to the others in a parent-child
relationship.
Each parent record may have one or more child records, but no child record may have more than
one parent record. Thus, the hierarchical data structure implements one-to-one and one-to-many
relationships.
The top parent record in the hierarchy is called the Root Record. In this example, building
records are the root to any sequence of room, equipment, and repair records. Entrance to this
hierarchy by the DBMS is made through the root record i.e., building.
Records that ‘own’ other records are called Parent Records. For example, room records are the
parents of equipment records. Room records are also children of the parent record, building.
There can be many levels of node records in a database.
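The building → room → equipment hierarchy described above can be sketched as nested records, where each child has exactly one parent and entry is always through the root. The record contents are invented for illustration.

```python
# A hierarchical structure as nested records: building (root) -> rooms -> equipment.
database = {
    "Building A": {                                   # root record
        "Room 101": {"equipment": ["Printer", "Scanner"]},
        "Room 102": {"equipment": ["Projector"]},
    }
}

def equipment_in(building, room):
    """Entry is through the root; we walk down the tree to reach a child."""
    return database[building][room]["equipment"]

print(equipment_in("Building A", "Room 102"))   # ['Projector']
```

Notice that each equipment list hangs under exactly one room, and each room under exactly one building — the one-to-many constraint that defines the hierarchical model.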
B. Network Database Model: The network model is a variation on the hierarchical model, in the sense
that branches can be connected to multiple nodes. The network model can represent redundancy
in data more efficiently than in the hierarchical model.
A network database structure views all records in sets.
Each set is composed of an owner record and one or more member records.
However, unlike the hierarchical model, the network model also permits a record to be a
member of more than one set at a time.
The network model would permit the equipment records to be the children of both the room
records and the vendor records.
This feature allows the network model to implement the many-to-one and the many-to-many
relationship types.
For example, suppose that in our database, it is decided to have the following records: repair
vendor records for the companies that repair the equipment, equipment records for the various
machines we have, and repair invoice records for the repair bills for the equipment. Suppose four
repair vendors have completed repairs on equipment items 1,2,3,4,5,6,7 and 8. These records
might be logically organized into the sets shown in Figure.
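The room/vendor example above can be sketched as owner–member sets, with equipment item numbers taken from the text; which items belong to which room or vendor is invented for the illustration. The key point is that one equipment record belongs to two sets at the same time.

```python
# Network model sketch: each set has an owner and member records, and a
# member (an equipment item) may belong to more than one set at a time.
room_sets = {                      # owner: room; members: equipment items
    "Room 101": [1, 2, 3],
    "Room 102": [4, 5, 6, 7, 8],
}
vendor_sets = {                    # owner: repair vendor; members: equipment items
    "Vendor X": [1, 4, 5],
    "Vendor Y": [2, 3, 6, 7, 8],
}

# Equipment item 4 is simultaneously a member of a room set and a vendor set:
owners_of_4 = [owner
               for sets in (room_sets, vendor_sets)
               for owner, members in sets.items()
               if 4 in members]
print(owners_of_4)   # ['Room 102', 'Vendor X']
```

A hierarchical structure could not express this directly, since item 4 would need two parents; the network model's multiple-set membership is what makes many-to-many relationships possible.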
[Figure: ‘Part of’ and ‘class of’ structures — an Engineer record (Engineer ID, Date of Birth, Address, Employment Date, Current Job, Experience) related to Civil Jobs.]
Advantages of DBMS
Major advantages of DBMS are given as follows:
1. Permitted Data Sharing: A DBMS provides features to share the data of the entire organization
among various users concurrently or simultaneously, e.g. railway reservation systems.
2. Reduced Redundancy: In non-database system (File System), each application or department has
its own private files resulting in considerable amount of redundancy of the stored data. Thus
storage space is also wasted. By having a centralized database or data in linked tables by DBMS,
the data redundancy can be avoided.
3. Data Integrity can be maintained: Data integrity is maintained by having accurate, consistent
and up-to-date data. In a DBMS, updates to data have to be made in only one place, which
ensures integrity.
4. Program and file consistency: By using a DBMS, file formats are standardized. This makes the
data files easier to maintain because the same rules are applied across all types of data, ensuring
program and file consistency.
5. Improved Security: DBMS provide various security features which can be used for providing a
secured database e.g. User authentication and Access Controls through password etc.
6. User friendly: DBMS makes the data access and manipulation easier for user in a user friendly
manner.
7. Data Independence: Data stored in DBMS provide data independence, in DBMS data does not
reside in applications but on database which are independent of each other.
Disadvantages of a DBMS:
There are basically two major downsides to using a DBMS: one is cost, the other is the threat to
data security. These are given as under:
1. Cost: Implementing a DBMS can be expensive and time-consuming, especially in large
entities; training requirements are also quite costly.
2. Security: Even with safeguards in place, it may be possible for unauthorized users to access
the database. Any unauthorized user who gains access to the database can make unauthorized
alterations or modifications to the data.
Data Warehouse:
As organizations have begun to utilize databases as the centerpiece of their operations, the
need to fully understand and leverage the collected data has become more and more apparent.
Organizations also want to analyze data in a historical sense: how does the data we have today
compare with the same set of data this time last month, or last year? From these needs arose the
concept of the data warehouse.
The concept of the data warehouse is simple:
- Extract data from one or more of the organization’s databases and load it into the data
warehouse (which is itself another database) for storage and analysis.
- However, the execution of this concept is not that simple.
A data warehouse should be designed so that it meets the following criteria:
It uses non-operational data. This means that the data warehouse uses a copy of the data from the
active databases that the company uses in its day-to-day operations, so the data warehouse must
pull data from the existing databases on a regular, scheduled basis.
The data is time-variant. This means that whenever data is loaded into the data warehouse, it
receives a time stamp, which allows for comparisons between different time periods.
The data is standardized. Because the data in a data warehouse usually comes from several
different sources, it is possible that the data does not use the same definitions or units. For
example, our Events table in our Student Clubs database lists the event dates using the mm/dd/
yyyy format (e.g., 01/10/2013). A table in another database might use the format yy/mm/dd
(e.g., 13/01/10) for dates. For the data warehouse to match up dates, a standard date format would
have to be agreed upon and all data loaded into the data warehouse would have to be converted
to use this standard format. This process is called Extraction-Transformation-Load (ETL).
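The transformation step in the date example above can be sketched directly: dates arriving in the mm/dd/yyyy and yy/mm/dd formats are converted to one agreed standard format (ISO yyyy-mm-dd is chosen here purely for illustration).

```python
# ETL "T" step sketch: standardize dates arriving in different formats.
from datetime import datetime

def standardize(date_string, source_format):
    """Parse a date in its source format and re-emit it as yyyy-mm-dd."""
    return datetime.strptime(date_string, source_format).strftime("%Y-%m-%d")

print(standardize("01/10/2013", "%m/%d/%Y"))   # 2013-01-10
print(standardize("13/01/10", "%y/%m/%d"))     # 2013-01-10 (the same date)
```

Once both source systems' dates are loaded through this kind of conversion, the warehouse can match and compare them even though the operational databases never agreed on a format.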
There are two primary schools of thought when designing a data warehouse:
Bottom-Up and Top-Down.
The Bottom-Up Approach starts by creating small data warehouses, called data marts, to solve
specific business problems. As these data marts are created, they can be combined into a
larger data warehouse.
The Top-Down Approach suggests that we should start by creating an enterprise-wide data
warehouse and then, as specific business needs are identified, create smaller data marts from
the data warehouse.
Data Mining:
Data Mining is the process of analyzing data to find previously unknown trends, patterns to make
decisions. Generally, data mining is accomplished through automated means against large data
sets, such as a data warehouse.
Some examples of data mining include:
An analysis of sales from a large grocery chain might determine that milk is purchased more
frequently the day after it rains in cities with a population of less than 50,000.
A bank may find that loan applicants whose bank accounts show particular deposit and
withdrawal patterns are not good credit risks.
A baseball team may find that collegiate baseball players with specific statistics in hitting,
pitching, and fielding make for more successful major league players.
In some cases, a data-mining project is begun with a hypothetical result in mind.
For example, a grocery chain may already have some idea that buying patterns change after it rains
and want to get a deeper understanding of exactly what is happening.
In other cases, there are no presuppositions and a data-mining program is run against large data sets
to find patterns and associations.
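A toy version of the grocery-chain example above can show what "finding a pattern" means in code: compare average milk sales on days after rain with other days. All the figures are invented for illustration; real data mining runs such comparisons automatically across many attributes and far larger data sets.

```python
# Toy data-mining sketch: does milk sell better the day after it rains?
sales = [  # (rained_yesterday, litres_of_milk_sold) -- invented figures
    (True, 130), (True, 142), (True, 125),
    (False, 90), (False, 104), (False, 98),
]

def average(values):
    return sum(values) / len(values)

after_rain = average([litres for rained, litres in sales if rained])
other_days = average([litres for rained, litres in sales if not rained])
print(after_rain > other_days)   # True: a pattern worth investigating further
```

Whether the program starts from a hypothesis (as here) or scans blindly for associations, the output is the same kind of previously unknown pattern that management can then act on.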
CLASSIFICATION OF IS CONTROLS
Internal controls can be classified into various categories, on different bases, to illustrate the
interaction of various groups in the enterprise and their effect on ISs. These categories have been
represented in the Fig.
Objective of Controls
• Preventive
• Detective
• Corrective
Nature of IS Resource
• Environmental
• Physical Access
• Logical Access
Audit Functions
• Managerial
• Application
Classification of IS Controls
ENVIRONMENTAL CONTROLS
These are the controls relating to It environment such as power, AC, UPS, smoke detectors, fire-
extinguishers etc. below enlists all the environmental exposures and their controls.
(i) Controls for Environmental Exposures
(c) Asynchronous attacks: These may occur when data moves across telecommunication lines.
Data leakage: Data leakage involves leaking information out of the computer, for instance by
copying it onto external media like CDs or USB pen drives, or by taking printouts.
Subversive Threats: Intruders attempt to violate the integrity of some component of the
subsystem. A subversive attack can provide intruders with important information about the
messages being transmitted, and the intruder can manipulate these messages in many ways.
Wire Tapping: Involves spying on (listening to) information transmitted over a
telecommunication network.
Piggybacking: Refers to the act of following an authorized user through a secured door, or
attaching to a telecommunication link to capture and alter transmissions. This involves
intercepting communication between the operating system and the user and modifying it or
substituting new messages.
[Figure: Piggybacking — a hacker intercepts a message from Mr. A travelling over the Internet/communication facilities, reads its content, and captures and modifies it or adds content.]
A. Managerial Controls: In this part, we shall examine the managerial controls that must be
performed to ensure the development, implementation, operation and maintenance of ISs in a
planned and controlled manner in an organization. The controls at this level provide a stable
infrastructure in which ISs can be built, operated, and maintained on a day-to-day basis.
1. Top management and IS management controls:
Top management is responsible for preparing a master plan for the IS function. The senior managers
who take responsibility for IS function in an organisation face many challenges. The major functions
that a senior manager must perform are as follows:
(a) Planning: This includes determining the goals of the IS function and the means of
achieving these goals.
The steering committee shall comprise representatives from all areas of the business, and
IT personnel. The committee is responsible for:
- The overall direction of IT.
- Overall responsibility for the activities of the IS function.
(b) Organizing: There should be a prescribed IT organizational structure with documented
roles and responsibilities and agreed job descriptions.
This includes gathering, allocating, and coordinating the resources needed to accomplish
the goals established during the Planning function.
(c) Leading: This includes motivating, guiding, and communicating with personnel. The
process of leading requires managers to motivate subordinates, direct them and
communicate with them.
(d) Controlling: This includes comparing actual performance with planned performance as a
basis for taking corrective actions, and involves determining when the actual activities of
the IS function deviate from the planned activities.
Quality Assurance (QA) personnel should work to improve the quality of information systems
produced, implemented, operated, and maintained in an organization. They perform a
monitoring role for management to ensure that:
Quality goals are established and understood clearly by all stakeholders; and
Compliance occurs with the standards that are in place to attain quality information systems.
Information Security Administrators are responsible for ensuring that IS assets categorized under
personnel, hardware, software, documentation, data, applications and facilities are secure.
Assets are secure when the expected losses that may occur over some period of time are at an
acceptable level.
The controls classified on the basis of the nature of IS resources – Environmental Controls,
Physical Controls and Logical Access Controls – are all security measures against possible
threats. However, despite the controls in place, there is always a possibility that a control
might fail. Disasters are events/incidents so critical that they have the capability to hit the
business continuity of an entity in an irreversible manner.
When disaster strikes, there is a need to recover critical assets, resume operations and
mitigate losses using the last-resort controls: a DRP (Disaster Recovery Plan) and insurance.
A comprehensive DRP comprises four parts: an emergency plan, a backup plan, a recovery plan
and a test plan. The plan lays down the policies, guidelines, and procedures for all IS personnel.
Adequate insurance must be able to replace IS assets and to cover the extra costs associated with
restoring normal operations.
BCP controls relate to having an operational and tested IT continuity plan that is in line with the
overall BCP, so that IT services remain available as required and the impact on the business in
the event of a major disruption is minimized.
INPUT CONTROLS: Input controls are divided into the following broad classes.
Q: Explain three levels of input validation controls in detail. [PM]
Q: Discuss major processing controls in brief. [PM]
[Figure: Input Controls — Source Document Controls; Data Coding Controls (transcription errors: addition, truncation, substitution; transposition errors: single, double); Batch Controls; Validation Controls (Field, Record and File Interrogation).]
1. Source Document Controls: Fraud can be perpetrated through source documents to manipulate
entries or to remove assets. To control against this type of exposure, the organization must
implement control procedures over source documents to account for each document.
2. Data Coding Controls: Two types of errors can corrupt a data code and cause processing
errors: transcription and transposition errors.
Transcription errors fall into three classes:
a) Addition: an extra digit is added, e.g. 3256 coded as 32569.
b) Truncation: a digit is removed from the end, e.g. 3256 coded as 325.
c) Substitution: one digit in a code is replaced with another, e.g. 3256 coded as 3258.
Transposition errors are of two types:
a) Single transposition: two adjacent digits are reversed, e.g. 32568 coded as 35268.
b) Multiple transposition: non-adjacent digits are transposed, e.g. 32568 coded as 52386.
Any of these errors can cause serious problems in data processing if they go undetected.
For example, a sales order for customer 12345 that is transposed into 12354 will be posted
to the wrong customer’s account.
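The coding-error types above can be demonstrated on the codes from the text, together with a naive digit-sum check. The digit-sum check is an illustrative device only; it shows why transposition errors are the hardest to catch and why real systems use stronger controls such as check digits.

```python
# The coding-error types applied to the codes from the text, with a naive
# digit-sum check to show which corruptions it can and cannot detect.
def digit_sum(code):
    return sum(int(d) for d in code)

original = "3256"
transcription_errors = {
    "addition":     "32569",   # extra digit appended
    "truncation":   "325",     # last digit dropped
    "substitution": "3258",    # 6 replaced by 8
}
for name, bad in transcription_errors.items():
    print(name, "detected:", digit_sum(bad) != digit_sum(original))  # all True

# A transposition rearranges the same digits, so a plain digit sum misses it:
print("transposition detected:", digit_sum("35268") != digit_sum("32568"))  # False
```

Because 35268 contains exactly the same digits as 32568, any check based only on their sum passes; detecting transpositions requires a position-sensitive scheme (a weighted check digit).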
3. Batch Controls: Batching is the process of grouping together transactions that bear the same
type of relationship to each other.
Three types of control totals can be calculated:
Financial Totals: Grand totals calculated for each field containing monetary amounts.
Hash Totals: Grand totals calculated for any code on a document in the batch; e.g. source
document serial numbers can be totaled.
Record Count: Grand totals for the number of documents in the batch.
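The three batch control totals above can be computed for a small batch of source documents; the serial numbers and amounts below are invented for illustration. In practice these totals are computed before and after processing and compared to detect lost or altered documents.

```python
# Computing the three batch control totals for a small (invented) batch.
batch = [
    {"serial_no": 1001, "amount": 250.00},
    {"serial_no": 1002, "amount": 120.50},
    {"serial_no": 1003, "amount": 75.25},
]

financial_total = sum(doc["amount"] for doc in batch)      # monetary field
hash_total      = sum(doc["serial_no"] for doc in batch)   # meaningless sum of codes
record_count    = len(batch)                               # number of documents

print(financial_total, hash_total, record_count)   # 445.75 3006 3
```

Note that the hash total (3006) has no business meaning by itself; its only purpose is that if a document is dropped or a serial number keyed wrongly, the recomputed total will not match.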
4. Validation Controls: Input validation controls are intended to detect errors in transaction
data before the data are processed. There are three levels of input validation controls:
Field Interrogation: This involves programmed procedures that examine the characters of the
data in a field. It includes checks like the Limit Check (against predefined limits), Picture
Check (against the entry of incorrect/invalid characters) and Valid Code Check (against
predetermined transaction codes or tables).
Record Interrogation: This includes the Reasonableness Check (is the value specified in a
field reasonable for that particular field?), Valid Sign (to determine which sign is valid for a
numeric field) and Sequence Check (to follow a required order matching with logical records).
File Interrogation: This includes version usage; internal and external labeling; data file
security; and file updating and maintenance authorization.
Communication Controls: These address exposures in the communication subsystem, including controls over physical components, communication line errors, flows and links, topological controls, channel access controls, controls over subversive attacks, internetworking controls, communication architecture controls, audit trail controls, and existence controls. Some communication controls are as follows:
(a) Physical Component Controls: These controls incorporate features that mitigate the possible
effects of exposures.
(b) Line Error Control: Whenever data is transmitted over a communication line, it can be received in error because of attenuation, distortion, or noise that occurs on the line. These errors must be detected and corrected.
(c) Flow Controls: Flow controls are needed because two nodes in a network can differ in the rate at which they can send, receive, and process data. For example, a mainframe can transmit data much faster than a microcomputer terminal can receive and process it.
(d) Link Controls: In a Wide Area Network (WAN), line error control and flow control are important functions of the component that manages the link between two nodes in a network.
(e) Channel Access Controls: Two different nodes in a network can compete to use a
communication channel. Whenever the possibility of contention for the channel exists, some
type of channel access control technique must be used.
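Line error control is typically implemented by appending a checksum such as a CRC to each frame; the receiver recomputes it and requests retransmission on a mismatch. A minimal sketch using Python's standard CRC-32:

```python
import binascii

frame = b"ORDER 12345 QTY 10"
crc = binascii.crc32(frame)                 # sender computes and appends the CRC

received = bytearray(frame)
ok_clean = binascii.crc32(bytes(received)) == crc   # clean line: CRC matches

received[3] ^= 0x01                          # simulate a single-bit noise error
ok_noisy = binascii.crc32(bytes(received)) == crc   # mismatch -> retransmit

print(ok_clean, ok_noisy)   # True False
```

CRC-32 is guaranteed to detect any single-bit error, so the corrupted frame is rejected rather than silently processed.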
PROCESSING CONTROLS:
The processing subsystem is responsible for computing, sorting, classifying, and summarizing data. Its
major components are the Central Processor in which programs are executed, the real or virtual
memory in which program instructions and data are stored, the operating system that manages
system resources, and the application programs that execute instructions to achieve specific user
requirements. Some of these controls are as follows:
(i) Processor Controls: Table 3.4.6 lists the controls to reduce expected losses from errors and irregularities associated with central processors:
Control Explanation
Error Detection and Correction: Occasionally, processors might malfunction. The causes could be design errors, manufacturing defects, damage, fatigue, electromagnetic interference, and ionizing radiation. The failure might be transient (disappears after a short period), intermittent (reoccurs periodically), or permanent (does not correct with time). For transient and intermittent errors, retries and re-execution might be successful, whereas for permanent errors, the processor must halt and report the error.
Multiple Execution States: It is important to determine the number and nature of the execution states enforced by the processor. This helps auditors to determine which user processes will be able to carry out unauthorized activities, such as gaining access to sensitive data maintained in memory regions assigned to the operating system or other user processes.
Timing Controls: An operating system might get stuck in an infinite loop. In the absence of any control, the program will retain use of the processor and prevent other programs from undertaking their work.
Component Replication: In some cases, processor failure can result in significant losses. Redundant processors allow errors to be detected and corrected. If processor failure is permanent in multicomputer or multiprocessor architectures, the system might reconfigure itself to isolate the failed processor.
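A timing control can be sketched as a watchdog that abandons a job once it exhausts its time budget; this is illustrative only, since real systems enforce timing with hardware timers and operating-system preemption:

```python
import time

def run_with_watchdog(step, budget_seconds):
    """Run `step` repeatedly until it reports completion, or abort when the
    time budget is exhausted - preventing a looping program from retaining
    the processor indefinitely."""
    deadline = time.monotonic() + budget_seconds
    while True:
        if step():
            return "completed"
        if time.monotonic() > deadline:
            return "aborted: time budget exceeded"

# A well-behaved job finishes; a job stuck in an infinite loop is aborted.
counter = iter(range(3))
print(run_with_watchdog(lambda: next(counter) == 2, 1.0))   # completed
print(run_with_watchdog(lambda: False, 0.05))               # aborted: time budget exceeded
```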
(ii) Real Memory Controls: This comprises the fixed amount of primary storage in which programs or
data must reside for them to be executed or referenced by the central processor. Real memory
controls seek to detect and correct errors that occur in memory cells and to protect areas of
memory assigned to a program from illegal access by another program.
(iii) Virtual Memory Controls: Virtual Memory exists when the addressable storage space is larger than
the available real memory space. To achieve this outcome, a control mechanism must be in place
that maps virtual memory addresses into real memory addresses.
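The mapping mechanism can be sketched as a page table translating virtual addresses into real ones; the page size and table contents below are assumptions:

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}      # virtual page number -> real memory frame

def translate(virtual_addr: int) -> int:
    """Map a virtual address to a real address; an unmapped page is a fault
    that the control mechanism must handle (e.g. by loading the page)."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault: virtual page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))   # virtual page 1, offset 4 -> 3*4096 + 4 = 12292
```

The control value of the mapping is that a program can only reach real memory through its own page table, which also protects memory assigned to other programs from illegal access.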
(iv) Data Processing Controls: These perform validation checks to identify errors during processing of
data. They are required to ensure both the completeness and the accuracy of data being
processed. Normally, the processing controls are enforced through the database management
system that stores the data. However, adequate controls should be enforced through the front-
end application system also to have consistency in the control process.
Database Controls
Protecting the integrity of a database when application software acts as an interface to interact
between the user and the database, are called Update Controls and Report Controls.
Major Update Controls are as follows:
Sequence Check between Transaction and Master Files: Synchronization and the correct sequence of processing between the master file and transaction file are critical to maintain the integrity of updating, insertion or deletion of records in the master file with respect to the transaction records. If errors at this stage are overlooked, they lead to corruption of the critical data.
Ensure All Records on Files are Processed: While processing, the transaction file records are mapped to the respective master file records, and it must be ensured that the end-of-file of the transaction file is reached together with the end-of-file of the master file.
Process multiple transactions for a single record in the correct order: Multiple transactions can occur based on a single master record (e.g. dispatch of a product to different distribution centers). Here, the order in which transactions are processed against the product master record must be based on sorted transaction codes.
Maintain a suspense account: When mapping between the master records and transaction records results in a mismatch due to failure in the corresponding record entry in the master file, these transactions are maintained in a suspense account.
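The update controls above can be sketched as a master-file posting routine in which unmatched transactions are parked in a suspense account; the account numbers and amounts are hypothetical:

```python
master = {101: "A Ltd", 103: "C Ltd"}              # master file keyed by account no.
transactions = [(101, 500), (102, 75), (103, 20)]  # transaction file, sorted by key

balances = {}
suspense = []                                  # unmatched transactions parked here
for account, amount in transactions:           # both files processed in key order
    if account in master:
        balances[account] = balances.get(account, 0) + amount
    else:
        suspense.append((account, amount))     # no master record: suspense account

print(balances)   # {101: 500, 103: 20}
print(suspense)   # [(102, 75)] - account 102 has no master record
```

Posting the mismatch to suspense, rather than dropping it or posting it to a wrong record, preserves both the transaction and the evidence that follow-up is needed.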
Output Controls
Output Controls ensure that the data delivered to users will be presented, formatted and delivered in
a consistent and secured manner. Output can be in any form, it can either be a printed data report or a
database file in a removable media. Various output Controls are as follows:
Storage and Logging of sensitive, critical forms: Pre-printed stationery should be stored securely to prevent unauthorized destruction, removal or usage.
Logging of output program executions: When programs used for output of data are executed, these should be logged and monitored; otherwise confidentiality/integrity of the data may be compromised.
Spooling/Queuing: "Spool" is an acronym for "Simultaneous Peripheral Operations Online". This is a process used to ensure that the user can continue working while the print operation is getting completed. A queue is the list of documents waiting to be printed on a particular printer; it should not be subject to unauthorized modifications.
Controls over printing: Outputs should be made on the correct printer, and it should be ensured that unauthorized disclosure of printed information does not take place.
Report Distribution and Collection Controls: Distribution of reports should be made in a secure way to prevent unauthorized disclosure of data. It should be made immediately after printing to ensure that the time gap between generation and distribution is reduced. A log should be maintained of the reports that were generated and to whom they were distributed.
Retention Controls: Retention controls consider the duration for which outputs should be retained before being destroyed. They require that a retention date be determined for each output item produced.
(i) Snapshots:
- Tracing a transaction in a computerized system can be performed with the help of snapshots or extended records.
- The snapshot software is built into the system at those points where material processing occurs; it takes images of the flow of a transaction as it moves through the application.
- These images can be utilized to assess the authenticity, accuracy, and completeness of the
processing carried out on the transaction.
- The main areas to focus on while employing such a system are to
o locate the snapshot points based on materiality of transactions,
o when the snapshot will be captured and
o the reporting system design and implementation to present data in a meaningful way.
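A snapshot point can be sketched as a routine that records an image of the transaction before and after a material processing step; the field names below are hypothetical:

```python
snapshots = []

def snapshot(point: str, txn: dict) -> None:
    """Record an image of the transaction at a material processing point."""
    snapshots.append({"point": point, "image": dict(txn)})  # copy, not a reference

txn = {"id": "T1", "qty": 5, "price": 10.0}
snapshot("after-input", txn)

txn["amount"] = txn["qty"] * txn["price"]   # the material processing step
snapshot("after-computation", txn)

# The auditor later compares the images to verify the processing performed:
print([s["point"] for s in snapshots])   # ['after-input', 'after-computation']
print(snapshots[1]["image"]["amount"])   # 50.0
```

Comparing the before and after images lets the auditor confirm that the computation applied to the transaction was authentic, accurate, and complete.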
(ii) Integrated Test Facility (ITF):
- The ITF technique involves the creation of a dummy entity in the application system files and
the processing of audit test data against the entity as a means of verifying processing
authenticity, accuracy, and completeness.
- This test data would be included with the normal production data used as input to the
application system.
- In such cases, the auditor must decide the method to be used to enter the test data and the methodology for removing the effects of the ITF transactions.
(iii) System Control Audit Review File (SCARF):
- The SCARF technique involves embedding audit software modules within a host application
system to provide continuous monitoring of the system’s transactions.
- The information collected is written onto a special audit file, the SCARF master file.
- Auditors then examine the information contained on this file to see if some aspect of the
application system needs follow-up.
- In many ways, the SCARF technique is like the snapshot technique along with other data
collection capabilities.
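A SCARF module can be sketched as an embedded routine that writes transactions meeting the auditor's criteria to the SCARF file; the threshold and fields below are assumptions:

```python
scarf_file = []                          # the special audit (SCARF master) file

def scarf_module(txn: dict) -> None:
    """Embedded audit module: continuously capture transactions of audit interest."""
    if txn["amount"] > 10000 or txn["override"]:
        scarf_file.append(txn)

for txn in [{"id": 1, "amount": 500,   "override": False},
            {"id": 2, "amount": 25000, "override": False},
            {"id": 3, "amount": 200,   "override": True}]:
    scarf_module(txn)    # called from inside the host application's processing

print([t["id"] for t in scarf_file])   # [2, 3] - captured for auditor follow-up
```

Because the module runs inside the host application, monitoring is continuous; the auditor periodically examines the SCARF file rather than the live system.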
(iv) Continuous and Intermittent Simulation (CIS):
- This is a variation of the SCARF continuous audit technique. This technique can be used to trap
exceptions whenever the application system uses a database management system. During
application system processing, CIS executes in the following way:
The database management system reads an application system transaction. It is passed to
CIS. CIS then determines whether it wants to examine the transaction further. If yes, the
next steps are performed or otherwise it waits to receive further data from the database
management system.
CIS replicates or simulates the application system processing.
Every update to the database that arises from processing the selected transaction will be
checked by CIS to determine whether discrepancies exist between the results it produces
and those the application system produces.
Exceptions identified by CIS are written to an exception log file.
The advantage of CIS is that it does not require modifications to the application system and
yet provides an online auditing capability.
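The CIS flow can be sketched by running an independent replica of the application's update rule and logging discrepancies; the defect in the application rule below is contrived solely to show an exception being trapped:

```python
def application_update(balance: int, txn: dict) -> int:
    """The production update rule - contrived defect: amounts above 500 are capped."""
    return balance + min(txn["amount"], 500)

def cis_replica(balance: int, txn: dict) -> int:
    """CIS's independent simulation of the intended update rule."""
    return balance + txn["amount"]

exception_log = []
for txn in [{"id": 1, "amount": 100}, {"id": 2, "amount": 800}]:
    if txn["amount"] > 0:                        # CIS elects to examine this transaction
        if application_update(1000, txn) != cis_replica(1000, txn):
            exception_log.append(txn["id"])      # discrepancy -> exception log file

print(exception_log)   # [2]
```

Transaction 2 is trapped because the application's result differs from the simulated result, without any modification to the application system itself.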
(v) Audit Hooks:
- These are audit routines that flag suspicious transactions.
For example, internal auditors at an insurance company determined that their policyholder system was vulnerable to fraud every time a policyholder changed his or her name or address and then subsequently withdrew funds from the policy. They devised a system of audit hooks to tag records with a name or address change. The internal audit department then investigates these tagged records to detect fraud. When audit hooks are employed, auditors can be informed of questionable transactions as soon as they occur. This approach of real-time notification displays a message on the auditor's terminal.
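The insurance example can be sketched as an audit hook that tags withdrawals occurring soon after a name or address change; the 30-day window and event layout are assumptions:

```python
from datetime import date, timedelta

def audit_hook(events, window_days=30):
    """Flag policies where a withdrawal follows a name/address change
    within the window - candidates for internal audit follow-up."""
    flagged, last_change = [], {}
    for e in sorted(events, key=lambda e: e["on"]):
        if e["type"] in ("name_change", "address_change"):
            last_change[e["policy"]] = e["on"]
        elif e["type"] == "withdrawal":
            changed = last_change.get(e["policy"])
            if changed and e["on"] - changed <= timedelta(days=window_days):
                flagged.append(e["policy"])
    return flagged

events = [
    {"policy": "P1", "type": "address_change", "on": date(2024, 1, 5)},
    {"policy": "P1", "type": "withdrawal",     "on": date(2024, 1, 9)},
    {"policy": "P2", "type": "withdrawal",     "on": date(2024, 1, 9)},
]
print(audit_hook(events))   # ['P1'] - withdrawal four days after an address change
```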
AUDIT TRAIL
Audit Trails are logs that can be designed to record activity at the system, application, and user level.
When properly implemented, audit trails provide an important detective control to help accomplish
security policy objectives. Many operating systems allow management to select the level of auditing to
be provided by the system. This determines which events will be recorded in the log. An effective
audit policy will capture all significant events without cluttering the log with trivial activity.
Audit trail controls attempt to ensure that a chronological record of all events that have occurred in a
system is maintained. This record is needed to answer queries, fulfill statutory requirements, detect
the consequences of error and allow system monitoring and tuning.
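A minimal audit trail can be sketched as an append-only chronological log; the field names are illustrative:

```python
import time

audit_trail = []                     # append-only, chronological by construction

def log_event(actor: str, action: str, obj: str) -> None:
    audit_trail.append({
        "ts": time.time(),           # when the event occurred
        "actor": actor,              # who performed it
        "action": action,            # what was done
        "object": obj,               # to what
    })

log_event("jsmith", "UPDATE", "customer/12345")
log_event("jsmith", "DELETE", "invoice/998")

# Answering a query: everything jsmith did, in order of occurrence
print([e["action"] for e in audit_trail if e["actor"] == "jsmith"])
```

Because records are only ever appended with a timestamp, the trail preserves the chronology needed to answer queries and trace the consequences of an error.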
The Accounting Audit Trail shows the source and nature of data and processes that update the
database.
The Operations Audit Trail maintains a record of attempted or actual resource consumption within
a system.
Applications System Controls involve ensuring that individual application systems safeguard assets (reducing expected losses), maintain data integrity (ensuring complete, accurate and authorized data) and achieve objectives effectively and efficiently from the perspective of users of the system, both within and outside the organization.
(i) Audit Trail Objectives: Audit trails can be used to support security objectives in three ways:
Detecting Unauthorized Access: Detecting unauthorized access can occur in real time or after the fact. The primary objective of real-time detection is to protect the system from outsiders who are attempting to breach system controls. A real-time audit trail can also be used to report on changes in system performance that may indicate infestation by a virus or worm. Depending upon how much activity is being logged and reviewed, real-time detection can impose a significant overhead on the operating system, which can degrade operational performance.
(b) Audit of Physical Access Controls: Auditing physical security controls requires knowledge of natural
and manmade hazards, physical security controls, and access control systems.
(i) Siting and Marking: Auditing building siting and marking requires attention to several key
factors and features, including:
Proximity to hazards: The IS auditor should estimate the building's distance to natural and manmade hazards, such as dams; rivers, lakes, and canals; natural gas and petroleum pipelines; water mains and pipelines; earthquake faults; areas prone to landslides; volcanoes; severe weather such as hurricanes, cyclones, and tornadoes; flood zones; military bases; airports; and railroads. The IS auditor should determine if any risk assessment regarding hazards has been performed and if any compensating controls that were recommended have been carried out.
Marking: The IS auditor should inspect the building and surrounding area to see if
building(s) containing information processing equipment identify the
organization. Marking may be visible on the building itself, but also on
signs or parking stickers on vehicles.
(ii) Physical barriers: This includes fencing, walls, and gates. The IS auditor needs to understand
how these are used to control access to the facility and determine their effectiveness.
(iii) Surveillance: The IS auditor needs to understand how video and human surveillance are used
to control and monitor access. He or she needs to understand how (and if) video is recorded and
reviewed, and if it is effective in preventing or detecting incidents.
(iv) Guards and dogs: The IS auditor needs to understand the use and effectiveness of security
guards and guard dogs. Processes, policies, procedures, and records should be examined to
understand required activities and how they are carried out.
(v) Key-Card systems: The IS auditor needs to understand how key-card systems are used to control access to the facility. Some points to consider include: work zones (whether the facility is divided into security zones and which persons are permitted to access which zones); whether key-card systems record personnel movement; and what processes and procedures are used to issue key-cards to employees.
(ii) Auditing Password Management: The IS auditor needs to examine password configuration settings
on IS to determine how passwords are controlled. Some of the areas requiring examination are-
how many characters must a password have and whether there is a maximum length; how
frequently must passwords be changed; whether former passwords may be used again; whether
the password is displayed when logging in or when creating a new password etc.
(iii) Auditing User Access Provisioning: Auditing the user access provisioning process requires
attention to several key activities, including:
Access request The IS auditor should identify
processes: - all user access request processes and
- determine if these processes are used consistently throughout the
organization.
Access approvals: The IS auditor needs to determine
- how requests are approved and
- by what authority they are approved.
- if system owners approve access requests, or if any accesses are ever
denied.
New employee The IS auditor should examine
provisioning: - the new employee provisioning process to see how a new employee’s
user accounts are initially set up.
- if new employees’ managers are aware of the access requests that their
employees are given and if they are excessive.
Segregation of The IS auditor should determine
Duties (SOD): - if the organization makes any effort to identify segregation of duties.
- This may include whether there are any SOD matrices in existence and if
they are actively used.
Access reviews: The IS auditor should determine
- if there are any periodic access reviews and
- what aspects of user accounts are reviewed;
- this may include termination reviews, internal transfer reviews, SOD
reviews, and dormant account reviews.
(iv) Auditing Employee Terminations: Auditing employee terminations requires attention to several
key factors, including:
Termination The IS auditor should examine
process: - the employee termination process and
- determine its effectiveness.
- This examination should include understanding on how terminations are
performed and
- how user account management personnel are notified of terminations.
Access reviews: The IS auditor should determine
- if any internal reviews of terminated accounts are performed, which
would indicate a pattern of concern for effectiveness in this important
activity.
- If such reviews are performed, if any missed terminations are identified
and if any process improvements are undertaken.
Contractor access The IS auditor needs to determine
and - how contractor access and termination is managed and if such
terminations: management is effective.
(II) User Access Logs: The IS auditor needs to determine what events are recorded in access logs. The
IS auditor needs to understand the capabilities of the system being audited and determine if the
right events are being logged, or if logging is suppressed on events that should be logged.
Centralized The IS auditor should determine
access logs: - if the organization’s access logs are aggregated or
- if they are stored on individual systems.
Access log The auditor needs to determine
protection: - if access logs can be attacked to cause the system to stop logging events.
- For especially high-sensitivity environments, determine if logs should be
written to permanent digital media that is unalterable.
Access log The IS auditor needs to determine
review: - if there are policies, processes, or procedures regarding access log
review.
- The auditor should determine if access log reviews take place, who
performs them, how issues requiring attention are identified, and what
actions are taken when necessary.
Access log The IS auditor should determine
retention: - how long access logs are retained by the organization and if they are
backed up.
(III) Investigative Procedures: Auditing investigative procedures requires attention to several key
activities, including:
Investigation The IS auditor should determine
policies and - if there are any policies or procedures regarding security investigations.
procedures:
- This would include who is responsible for performing investigations,
where information about investigations is stored, and to whom the
results of investigations are reported.
Computer crime The IS auditor should determine
investigations: - if policies and procedures are framed regarding computer crime
investigations.
- The auditor should understand how internal investigations are carried
out and when matters are referred to law enforcement.
Computer The IS auditor should determine
forensics: - if there are procedures for conducting computer forensics.
- Also identify tools and techniques available to the organization for the
acquisition and custody of forensic data.
- whether any employees have received computer forensics training and
are qualified to perform such investigations.
(IV) Internet Points of Presence: The IS auditor who is performing a comprehensive audit of an
organization's system and network system needs to perform a 'points of presence' audit to discover
what technical information is available about the organization's Internet presence. Some of the
aspects of this intelligence gathering include:
Search engines: - Google, Yahoo!, and other search engines should be consulted to see
what information about the organization is available.
- Searches should include the names of company officers and management,
key technologists, and any internal information such as the names of
projects.
Social networking - Social networking sites such as Facebook, LinkedIn, Myspace, and Twitter
sites: should be searched to see what employees, former employees, and
others are saying about the organization.
- Any authorized or unauthorized ‘fan pages’ should be searched as well.
Online sales sites: - Sites such as Craigslist and eBay should be searched to see if anything
related to the organization is sold online.
Domain names: - The IS auditor should verify contact information for known domain
names, as well as related domain names.
- For example, for the organization mycompany.com; organizations should
search for domain names such as mycompany.net, mycompany.info, and
mycompany.biz to see if they are registered and what contents are
available.
Justification of - The IS auditor should examine business records to determine on what basis
Online Presence: the organization established online capabilities such as e-mail, Internet-
facing web sites, Internet e-commerce, Internet access for employees,
and so on.
- These services add risk to the business and consume resources.
- The auditor should determine if a viable business case exists to support
these services or if they exist as a ‘benefit’ for employees.
An external auditor is more likely to undertake general audits rather than concurrent or post-implementation audits of the systems development process. For internal auditors, management might require that they participate in the development of material application systems or undertake post-implementation reviews of material application systems as a matter of course.
Input controls:
Input controls perform validation and error detection on data input into the system.
This maintains the chronology of events from the time data are captured and entered into an application system until the time they are deemed valid and passed on to other subsystems.
Accounting Audit Trail:
- Identity of the person (organization) who was the source of the data;
- Identity of the person (organization) who entered the data into the system;
- Time and date when the data was captured;
- Physical device used to enter the data into the system;
- Account or record to be updated by the transaction;
- Standing data to be updated by the transaction; and
- Details of the transaction.
Operations Audit Trail:
- Time to key in a source document at a terminal;
- Number of read errors made by an optical scanning device;
- Number of keying errors identified during verification;
- Frequency with which an instruction in a command language is used; and
- Time taken to invoke an instruction using a light pen versus a mouse.
Communication Controls:
They are responsible for controls over physical components, communication line errors, flows and links, topological controls, controls over subversive attacks, etc.
This maintains a chronology of the events from the time a sender dispatches a message to the time a receiver obtains the message.
Accounting Audit Trail:
- Unique identifier of the source/sink node;
- Unique identifier of each node that traverses the message;
- Person or process authorizing dispatch of the message;
- Time and date at which the message was dispatched;
- Time and date at which the message was received by the sink node;
- Time and date at which each node was traversed by the message; and
- Message sequence number and the image of the message received at each node traversed.
Operations Audit Trail:
- Number of messages that have traversed each link and each node;
- Queue lengths at each node;
- Number of errors occurring;
- Number of retransmissions;
- Log of errors to identify locations and patterns of errors;
- Log of system restarts; and
- Message transit times.
Processing Control:
They are responsible for computing, sorting, classifying, and summarizing data. The audit trail maintains the chronology of events from the time data is received from the input or communication subsystem to the time data is dispatched to the database, communication, or output subsystems.
Accounting Audit Trail:
- To trace and replicate the processing performed on a data item; and
- Triggered transactions to monitor input data entry, intermediate results and output data operations.
Operations Audit Trail:
- A comprehensive log on hardware consumption: CPU time used, secondary storage space used, and communication facilities used; and
- A comprehensive log on software consumption: compilers used, subroutine libraries used, file management facilities used, and communication software used.
Output Controls:
They provide functions that determine the data content, data format, timeliness of data and how data
is prepared and routed to users.
The audit trail maintains the chronology of events that occur from the time the content of the output
is determined until the time users complete their disposal of output.
Accounting Audit Trail:
- What output was presented to users;
- Who received the output;
- When the output was received; and
- What actions were taken with the output.
Operations Audit Trail:
- To maintain the record of resources consumed: graphs, images, report pages, printing time and display rate to produce the various outputs.
Database controls:
They provide functions to define, create, modify, delete and read data in an IS.
The audit trail maintains the chronology of events that occur either to the database definition or the
database itself.
Accounting Audit Trail:
- To attach a unique time stamp to all transactions;
- To attach before-images and after-images of the data item on which a transaction is applied to the audit trail; and
- Any modifications or corrections to audit trail transactions accommodating the changes that occur within an application system.
Operations Audit Trail:
- To maintain a chronology of resource consumption events that affect the database.
Short and long-term objectives: Organizations sometimes move departments from one executive to
another so that departments that were once far from each other (in terms of the org chart structure)
will be near each other. This provides new opportunities for developing synergies and partnerships that
did not exist before the reorganization (reorg). These organizational changes are usually performed to
help an organization meet new objectives that require new partnerships and teamwork that were less
important before.
Market conditions: Changes in market positions can cause an organization to realign its internal
structure in order to strengthen itself. For example, if a competitor lowers its
prices based on a new sourcing strategy, an organization may need to respond
by changing its organizational structure to put experienced executives in charge
of specific activities.
Regulation: New regulations may induce an organization to change its organizational
structure. For instance, an organization that becomes highly regulated may
elect to move its security and compliance group away from IT and place it under
the legal department, since compliance has much more to do with legal
compliance than industry standards.
Available talent: When someone leaves the organization (or moves to another position within
the organization), particularly in positions of leadership, a space opens in the
org chart that often cannot be filled right away. Instead, senior management
will temporarily change the structure of the organization by moving the
leaderless department under the control of someone else. Often, the decisions
of how to change the organization will depend upon the talent and experience
of existing leaders, in addition to each leader’s workload and other factors. For
example, if the director of IT program management leaves the organization, the
existing department could temporarily be placed under the IT operations
department, in this case because the director of IT operations used to run IT
program management. Senior management can see how that arrangement
works out and later decide whether to replace the director of IT program
management position or to do something else.
(a) Executive Management: Executive managers are the chief leaders and policymakers in an
organization. They set objectives and work directly with the organization’s most senior
management to help make decisions affecting the future strategy of the organization.
CIO (Chief Information This is the title of the topmost leader in a larger IT organization.
Officer)
CTO (Chief Technical This position is usually responsible for an organization’s overall technology
Officer) strategy. Depending upon the purpose of the organization, this position
may be separate from IT.
CSO (Chief Security This position is responsible for all aspects of security, including information
Officer) security, physical security, and possibly executive protection (protecting
the safety of senior executives).
CISO (Chief Information This position is responsible for all aspects of data-related security. This
Security Officer) usually includes incident management, disaster recovery, vulnerability
management, and compliance.
CPO (Chief Privacy This position is responsible for the protection and use of personal
Officer) information. This position is found in organizations that collect and store
sensitive information for large numbers of persons.
SEGREGATION OF DUTIES
ISs often process large volumes of information that is sometimes highly valuable or sensitive.
Measures need to be taken in IT organizations to ensure that individuals do not possess sufficient
privileges to carry out potentially harmful actions on their own. Checks and balances are needed, so
that high-value and high-sensitivity activities involve the coordination of two or more authorized
individuals. The concept of Segregation of Duties (SOD), also known as separation of duties, ensures
that single individuals do not possess excess privileges that could result in unauthorized activities such
as fraud or the manipulation or exposure of sensitive data.
When SOD issues are encountered during a segregation of duties review, management will need to
decide how to mitigate the matter. The choices for mitigating a SOD issue include
• Reduce access privileges: Management can reduce individual user privileges so that the conflict no
longer exists.
• Introduce a new mitigating control: If management has determined that the person(s) need to
retain privileges that are viewed as a conflict, then new preventive or detective controls need to be
introduced that will prevent or detect unwanted activities.
Examples of mitigating controls include increased logging to record the actions of personnel,
improved exception reporting to identify possible issues, reconciliations of data sets, and external
reviews of high-risk controls.
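An SOD review of this kind can be sketched by checking each user's privileges against a conflict matrix; the roles and conflict pairs below are assumptions for illustration:

```python
# Assumed SOD matrix: role pairs that must not be held by one individual
SOD_CONFLICTS = [
    {"create_vendor", "approve_payment"},
    {"develop_code", "deploy_to_production"},
]

user_roles = {
    "asha":  {"create_vendor", "approve_payment", "run_reports"},
    "bjorn": {"develop_code"},
}

def sod_violations(user_roles):
    """Return (user, conflicting-role-pair) tuples for management to mitigate."""
    hits = []
    for user, roles in user_roles.items():
        for pair in SOD_CONFLICTS:
            if pair <= roles:                     # user holds both conflicting roles
                hits.append((user, tuple(sorted(pair))))
    return hits

print(sod_violations(user_roles))   # [('asha', ('approve_payment', 'create_vendor'))]
```

Each hit is then resolved by one of the mitigation choices above: reduce the user's privileges so the pair no longer coexists, or retain them under a new preventive or detective control.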