SARADHA GANGADHARAN COLLEGE
MODULE – I
Introduction to Computers
A computer is an electronic device that works under the control of programs stored in its own memory unit. It is an electronic machine that processes raw data to give information as output.
In other words, a computer accepts data as input and transforms it, under the control of a set of special instructions called programs, to produce the desired output (referred to as information).
A computer system consists of two components: hardware and software.
Basic Computer Organization
The hardware component of the computer system consists of five parts: input devices, the central processing unit (CPU), primary storage, output devices, and auxiliary storage devices.
The input device is usually a keyboard, through which programs and data are entered into the computer. Examples of other input devices include a mouse, a pen, a touch screen, or an audio input unit.
The central processing unit (CPU) is responsible for executing instructions such as arithmetic calculations, comparisons among data, and movement of data inside the system.
The output device is usually a monitor or a printer. If the output is shown on the monitor, we say we have a soft copy; if it is printed on the printer, we say we have a hard copy.
Auxiliary storage, also known as secondary storage, is used for both input and output. It is the place where programs and data are stored permanently. When we turn off the computer, our programs and data remain in secondary storage, ready for the next time we need them.
Program: A computer program is a set of related instructions written in the language of the computer and is used to make the computer perform a specific task.
Data: Data may be in the form of numbers, letters or symbols, and can be processed to produce information. Data is a collection of raw facts, figures or instructions that do not have much meaning to the user.
Computer Software
Computer software is divided into two broad categories: system software and application software. System software manages the computer's resources and provides the interface between the hardware and the users. Application software, on the other hand, is directly responsible for helping users solve their problems.
Software is organised as follows:
• System software: operating systems, system support, system development
• Application software: general-purpose, application-specific
System Software:
System software consists of programs that manage the hardware resources of a
computer and perform required information processing tasks. These programs are divided
into three classes: the operating system, system support, and system development.
The operating system provides services such as a user interface, file and database
access, and interfaces to communication systems such as Internet protocols. The primary
purpose of this software is to keep the system operating in an efficient manner while allowing
the users access to the system.
Application software
Application software is broken into two classes: general-purpose software and application-specific software.
General-purpose software is purchased from a software developer and can be used for more than one application. Examples of general-purpose software include word processors, database management systems, and computer-aided design systems.
Application-specific software can be used only for its intended purpose.
Computer Languages:
To write a program for a computer, we must use a computer language. Over the
years computer languages have evolved from machine languages to natural
languages.
1940s – Machine-level languages
1950s – Symbolic or assembly languages (Assembler)
1960s – High-level languages (Compiler, Interpreter)
Machine Languages
In the earliest days of computers, the only programming languages available were machine languages. Each computer has its own machine language, which is made of streams of 0s and 1s.
Instructions in machine language must be in streams of 0s and 1s because the internal circuits of a computer are made of switches, transistors and other electronic devices that can be in one of two states: off or on. The off state is represented by 0; the on state is represented by 1.
Symbolic or Assembly languages
In the early 1950s Admiral Grace Hopper, a mathematician and naval officer, developed the concept of a special computer program that would convert programs into machine language.
A special program called an assembler translates symbolic code into machine language. Because symbolic languages had to be assembled into machine language, they soon became known as assembly languages.
High Level Languages:
Working with symbolic languages was still very tedious because each machine instruction had to be individually coded.
The first widely used high-level language, FORTRAN (FORmula TRANslation), was created by John Backus and an IBM team in 1957; it is still used today in scientific and engineering applications. After FORTRAN came COBOL (COmmon Business-Oriented Language). Admiral Hopper played a key role in the development of COBOL, a business language.
Characteristics of Computers
Speed
A computer works with much higher speed and accuracy compared to humans while performing mathematical calculations.
Accuracy
Computers perform calculations with 100% accuracy.
Diligence
A computer can perform millions of tasks or calculations with the same consistency and
accuracy.
Versatility
Versatility refers to the capability of a computer to perform different kinds of work with the same accuracy and efficiency.
Reliability
A computer is reliable as it gives consistent results for the same set of data, i.e., if we give the same set of inputs any number of times, we will get the same result.
Memory
A computer has built-in memory called primary memory where it stores data.
Secondary storage devices are removable devices such as CDs, pen drives, etc., which are also used to store data.
Units of Measurement: Storage
The basic unit used in computer data storage is called a bit (binary digit). Computers work with these bits, each of which holds a 1 or a 0; this two-digit number system is called the binary number system.
Computer storage units:
1 Bit = 0 or 1
1 Byte = 8 bits
1 Kilobyte = 1024 bytes
1 Megabyte = 1024 kilobytes
1 Gigabyte = 1024 megabytes
1 Terabyte = 1024 gigabytes
1 Petabyte = 1024 terabytes
Size examples:
• A typical line of text from a book – 4 KB
• About one page of text – 120 KB
• The text of a typical pocket book – 3 MB
• A three-minute song (128 kbps bitrate) – about 3 MB; a full CD holds roughly 650-900 MB
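As a small illustration of these units, the following C sketch (the sample value is purely illustrative) converts a size given in bytes into kilobytes, megabytes and gigabytes using the factor of 1024 described above.

#include <stdio.h>

int main()
{
    long long bytes = 3LL * 1024 * 1024;   /* illustrative value: 3 MB */

    double kb = bytes / 1024.0;            /* 1 KB = 1024 bytes */
    double mb = kb / 1024.0;               /* 1 MB = 1024 KB    */
    double gb = mb / 1024.0;               /* 1 GB = 1024 MB    */

    printf("%lld bytes = %.2f KB = %.2f MB = %.6f GB\n", bytes, kb, mb, gb);
    return 0;
}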
Generations of Computers
1) First Generation Computers: Vacuum Tubes (1940-1956)
The technology behind first-generation computers was a fragile glass device called the vacuum tube.
The main first-generation computers were:
ENIAC: Electronic Numerical Integrator and Computer
EDVAC: Electronic Discrete Variable Automatic Computer was designed by von Neumann
UNIVAC: Universal Automatic Computer was developed in 1952 by Eckert and Mauchly.
2) Second Generation Computers: Transistors (1956-1963)
Second-generation computers used the technology of transistors rather than bulky vacuum tubes. Another feature was core storage. A transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit.
Programming shifted from machine language to assembly and high-level languages, which made programming a comparatively simple task for programmers. Languages used for programming during this era were FORTRAN (1956), ALGOL (1958), and COBOL (1959).
3) Third Generation Computers Integrated Circuits. (1964-1971)
During the third generation, technology shifted from discrete transistors to integrated circuits, also referred to as ICs. Here a number of transistors were placed on silicon chips, called semiconductors. Programming was now done in higher-level languages like BASIC (Beginners All-purpose Symbolic Instruction Code).
4) Fourth Generation Computers : Micro-processors (1971-Present)
In 1971 the first microprocessors were used: large-scale integration (LSI) circuits built on one chip, called microprocessors. The main advantage of this technology is that one microprocessor can contain all the circuits required to perform arithmetic, logic, and control functions on a single chip. Very Large Scale Integration (VLSI) circuits later replaced LSI circuits. The Intel 4004 chip, developed in 1971, was the first commercial microprocessor.
Technologies like multiprocessing, multiprogramming, time-sharing, higher operating speeds, and virtual memory made the computer a more user-friendly and common device.
5) Fifth Generation Computers
The technology behind the fifth generation of computers is artificial intelligence (AI). It allows computers to behave like humans. It is often seen in programs for voice recognition, medicine, and entertainment. In the field of game playing, too, computers have shown remarkable performance and are capable of beating human competitors.
Examples of fifth-generation computers: desktops, laptops, tablets, smartphones, etc.
Types of Computers: Computers can be broadly classified by their speed and computing power.
PC (Personal Computer)
It is a single-user computer system with a moderately powerful microprocessor.
Workstation
It is also a single-user computer system, similar to a personal computer, but it has a more powerful microprocessor.
Mini Computer
It is a multi-user computer system, capable of supporting hundreds of users
simultaneously.
Main Frame
It is a multi-user computer system, capable of supporting hundreds of users simultaneously. Its software technology is different from that of a minicomputer.
Super computer
Supercomputers are among the fastest computers currently available. They are very expensive and are employed for specialized applications.
Modules of a computer
When referring to computer software, a module is a discrete piece of code which can be
independently created and maintained to be used in different systems. For example, a developer
may create a module containing the code required to use a sound card or perform I/O on a certain
type of file system. The module can then be distributed to and used by any system that needs that functionality, and development of the module can proceed independently. This approach is known as modular design.
With computer hardware, a module is an independent component that is used as part of a
more complex system. For example, a memory module can connect to a computer's motherboard to
operate as part of the complete system.
Planning the Computer Program
Commonly used tools for planning the logic of a computer program are:
• Algorithm
• Flowchart
• Pseudocode
ALGORITHM:
An algorithm is a step-by-step procedure to solve a given problem in a finite number of steps.
The characteristics of an algorithm are:
(i) An algorithm must have a finite number of steps.
(ii) No instructions should be repeated.
(iii) An algorithm should be simple.
(iv) An algorithm must take one or more input values.
(v) An algorithm must provide one or more output values.
Advantages
• Algorithms are very easy to understand.
• An algorithm is programming-language independent.
• An algorithm makes the problem simple, clear and correct.
Example 1
Problem definition: to find the simple interest
Problem analysis: inputs – p, r, t; output – simple interest (si)
Algorithm
Step 1: Start
Step 2: Input p, r, t
Step 3: Calculate si = p * r * t / 100
Step 4: Output si
Step 5: Stop
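The algorithm above can be translated almost line for line into C. The sketch below is one possible implementation, not the only one; the variable names p, r, t and si follow the analysis above.

#include <stdio.h>

int main()
{
    float p, r, t, si;

    printf("Enter principal, rate and time: ");
    scanf("%f %f %f", &p, &r, &t);            /* Step 2: input p, r, t   */

    si = (p * r * t) / 100;                   /* Step 3: si = p*r*t/100  */

    printf("Simple interest = %.2f\n", si);   /* Step 4: output si       */
    return 0;
}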
Example 2
Problem definition: to convert a temperature in Celsius to Fahrenheit
Problem analysis: input – c; output – f
Algorithm
Step 1: Start
Step 2: Input c
Step 3: Calculate f = (9/5) * c + 32
Step 4: Output f
Step 5: Stop
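A corresponding C sketch is shown below. Note that 9/5 is written as 9.0/5.0 so that the division is carried out in floating point rather than as integer division (which would give 1).

#include <stdio.h>

int main()
{
    float c, f;

    printf("Enter temperature in Celsius: ");
    scanf("%f", &c);                          /* Step 2: input c          */

    f = (9.0 / 5.0) * c + 32;                 /* Step 3: f = 9/5 * c + 32 */

    printf("Temperature in Fahrenheit = %.2f\n", f);   /* Step 4: output f */
    return 0;
}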
Pseudo code
Pseudo code is an informal, English-like description of the steps of an algorithm. It typically omits details that are essential for machine understanding of the algorithm, such as variable declarations, system-specific code and some subroutines. The programming-language-style statements are augmented with natural-language descriptions where convenient, or with compact mathematical notation.
Examples:
If student's grade is greater than or equal to 60
Print "passed"
else
Print "failed"
Endif
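Written as a C program, the same logic might look like the sketch below; the variable name grade and the cut-off of 60 are taken from the pseudo code above.

#include <stdio.h>

int main()
{
    int grade;

    printf("Enter the student's grade: ");
    scanf("%d", &grade);

    if (grade >= 60)              /* If grade is greater than or equal to 60 */
        printf("passed\n");
    else
        printf("failed\n");

    return 0;
}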
FLOW CHART:
A flow chart is a step-by-step diagrammatic representation of the logic paths used to solve a given problem. In other words, a flowchart is a graphical representation of an algorithm.
Advantages
• A flowchart shows the logic of a problem displayed in pictorial fashion.
• It is useful for debugging and testing of programs.
• Programs can be coded efficiently using flowcharts.
• A flowchart is a good means of communication with other users.
Rules for writing flowcharts
• It should be drawn from top to bottom.
• A flowchart always begins with the start symbol and ends with the stop symbol.
• Flow lines are used to join the symbols.
• Decision box should have one entry point and two exit points.
• For lengthy flowcharts, connectors are used to join them.
Symbols Used in Flowcharts
• Flow line (arrow): indicates the flow of logic by connecting symbols.
• Terminal (Start/Stop) (oval): represents the start and the end of a flowchart.
• Input/Output (parallelogram): used for input and output operations.
• Processing (rectangle): used for arithmetic operations and data manipulations.
• Decision (diamond): used for decision making between two or more alternatives.
• On-page connector (circle): used to join different flow lines.
• Off-page connector (pentagon): used to connect flowchart portions drawn on different pages.
• Predefined process/Function (rectangle with double side bars): represents a group of statements performing one processing task.
Example flowchart: add two numbers entered by the user.
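An equivalent C program for this flowchart is sketched below: it inputs two numbers, processes them by adding, and outputs the sum.

#include <stdio.h>

int main()
{
    int num1, num2, sum;

    printf("Enter two numbers: ");
    scanf("%d %d", &num1, &num2);   /* input the two numbers */

    sum = num1 + num2;              /* process: add them     */

    printf("Sum = %d\n", sum);      /* output the result     */
    return 0;
}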
Example flowchart: find the largest among three different numbers entered by the user.
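One possible C version of this flowchart logic is sketched below, using simple if/else comparisons.

#include <stdio.h>

int main()
{
    double n1, n2, n3;

    printf("Enter three different numbers: ");
    scanf("%lf %lf %lf", &n1, &n2, &n3);

    if (n1 >= n2 && n1 >= n3)            /* n1 is the largest        */
        printf("%.2lf is the largest number.\n", n1);
    else if (n2 >= n1 && n2 >= n3)       /* n2 is the largest        */
        printf("%.2lf is the largest number.\n", n2);
    else                                 /* otherwise n3 is largest  */
        printf("%.2lf is the largest number.\n", n3);

    return 0;
}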
Debugging
Debugging is a methodical process of finding and reducing the number of bugs (or defects)
in a computer program, thus making it behave as originally expected. Programming errors are
also known as bugs or faults, and the process of removing these bugs is known as debugging.
There are two main types of errors that need debugging:
1. Compile-time: These occur due to misuse of language constructs, such as syntax errors.
2. Run-time: These are much harder to figure out, as they cause the program to generate
incorrect output (or “crash”) during execution
Types of errors
Errors are detected either at compile time or at execution time, and they must be removed for the program to execute successfully.
1. Syntax error
2. Run-time error
3. Linker error
4. Logical error
5. Semantic error
Syntax error
Syntax errors are also known as compilation errors. These errors occur mainly because of typing mistakes or because the syntax of the specified programming language is not followed.
For example, to declare a variable of type integer:
int a; // this is the correct form
Int a; // this is an incorrect form (C is case-sensitive, so Int is not a keyword)
#include <stdio.h>
int main()
{
a = 10;
printf("The value of a is : %d", a);
return 0;
}
This code produces the compiler error that 'a' is undeclared; this is a syntax (compile-time) error.
Run-time error
Sometimes errors appear during execution even after successful compilation; these are known as run-time errors. Division by zero is a common example of a run-time error.
#include <stdio.h>
int main()
{
int a=2;
int b=2/0;   /* division by zero */
printf("The value of b is : %d", b);
return 0;
}
This code shows a run-time error, i.e., division by zero.
Linker error
Linker errors are mainly generated when the executable file of the program cannot be created. This can happen either because of wrong function prototyping or because a wrong header file is used.
#include <stdio.h>
int Main()
{
int a=78;
printf("The value of a is : %d", a);
return 0;
}
This produces an error while linking: the entry point must be main, but here it is declared as Main (with a capital letter), so the linker cannot find main.
Logical error
A logical error is an error that leads to undesired output. These errors produce incorrect output even though the program compiles and runs without complaint.
#include <stdio.h>
int main()
{
int sum=0; // variable initialization
int k=1;
for(int i=1;i<=10;i++); // logical error, as we put the semicolon after loop
{
sum=sum+k;
k++;
}
printf("The value of sum is %d", sum);
return 0;
}
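With the stray semicolon removed, the block below the for statement becomes the loop body, it executes ten times, and sum ends up as 55, which is the intended result:

#include <stdio.h>

int main()
{
    int sum=0; // variable initialization
    int k=1;
    for(int i=1;i<=10;i++)   // semicolon removed: the block below is now the loop body
    {
        sum=sum+k;
        k++;
    }
    printf("The value of sum is %d", sum);   // prints 55
    return 0;
}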
Semantic error
Semantic errors are errors that occur when statements are not meaningful to the compiler, even though they may be syntactically well formed.
#include <stdio.h>
int main()
{
int a,b,c;
a=2;
b=3;
c=1;
a+b=c; // semantic error
return 0;
}
Errors in expressions
int a, b, c;
a + b = c;   /* invalid: the left side of an assignment must be a variable; the correct form is c = a + b; */
Documentation
“Documentation is one of the most important parts of a software project. However, a lot of projects have little or no documentation to help their (potential) users use the software.”
At various stages of development multiple documents may be created for different users. In
fact, software documentation is a critical process in the overall software development process.
Techniques of Problem Solving
Problem solving is best treated as a process with several steps that help you reach the best outcome. Those steps are:
Steps in Problem Solving
1. Problem Definition
2. Problem Analysis
3. Design
4. Coding
5. Testing
6. Maintenance
Problem Definition
• To solve a problem, the first step is to identify and define the problem.
• The problem must be stated clearly, accurately and precisely.
E.g., find the largest of three numbers.
Problem Analysis
• Problem analysis helps in designing and coding for that particular problem.
1. Input specifications: the number of inputs and in what form the inputs are available.
2. Output specifications: the number of outputs and in what form the outputs should be displayed.
E.g., inputs – a, b, c; output – the largest of a, b and c.
Designing a program
• Algorithm – a step-by-step procedure for solving the problem.
• Flowchart – a graphical representation of the algorithm.
Coding
• Writing instructions in a particular programming language to solve the problem.
Testing a Program
• After writing a program, the programmer needs to test it for completeness, correctness, reliability and maintainability.
Maintaining the program
• It means periodic review of the program and modification based on user requirements.
Problem solving aspects
Step 1: Define the Problem
What is the problem?
Step 2: Clarify the Problem
What data is available or needed to help clarify, or fully understand the problem?
Step 3: Define the Goals
What is your end goal or desired future state?
Step 4: Identify Root Cause of the Problem
Identify possible causes of the problem.
Step 5: Develop Action Plan
Generate a list of actions required to address the root cause and prevent the problem from recurring.
Step 6: Execute Action Plan
Implement action plan to address the root cause.
Verify actions are completed.
Step 7: Evaluate the Results
Monitor and Collect Data.
Did you meet the goals defined in step 3? If not, repeat the 8-step process.
Step 8: Continuously Improve
Look for additional opportunities to implement the solution.
If needed, repeat the 8-step problem solving process to drive further improvements.
Program verification
Reviewing a program for the purpose of finding faults is known as program verification. Verification is the process of checking that the program achieves its goal without any bugs; it ensures that the product being developed is right. The reviewing of a document can be done from the first phase of software development.
Structured programming concepts
Structured programming is a programming paradigm aimed at improving the clarity,
quality, and development time of a computer program by making extensive use of the structured
control flow constructs of selection (if/then/else) and repetition (while and for), block
structures, and subroutines.
Following the structured program theorem, structured programs are built from three control structures, as illustrated in the sketch below:
• Sequence: ordered statements or subroutines executed one after another.
• Selection: one of a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as if..then..else..endif.
• Iteration: a statement or block is executed until the program reaches a certain state, or until operations have been applied to every element of a collection. This is usually expressed with keywords such as while, repeat, for or do..until.
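A short C sketch combining the three constructs is given below: the statements run in sequence, the if..else makes a selection, and the for loop provides iteration. The particular task (summing the even numbers up to a limit) is only an illustration.

#include <stdio.h>

int main()
{
    int total = 0;                      /* sequence: statements executed in order */
    int limit = 5;

    for (int i = 1; i <= limit; i++)    /* iteration: repeat for i = 1 .. limit   */
    {
        if (i % 2 == 0)                 /* selection: choose one of two branches  */
            total = total + i;          /*   even numbers are added               */
        else
            printf("%d is odd\n", i);   /*   odd numbers are only reported        */
    }

    printf("Sum of even numbers up to %d is %d\n", limit, total);
    return 0;
}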
Top-down Design
A top-down approach (also known as stepwise design) is essentially the breaking down of a
system to gain insight into the sub-systems that make it up.
In a top-down approach an overview of the system is formulated, specifying but not detailing
any first level subsystems.
Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem
levels, until the entire specification is reduced to base elements.
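As a small, hypothetical illustration of this stepwise refinement, the C sketch below breaks the "find the largest of three numbers" problem into sub-tasks (read the inputs, find the largest, print the result), each handled by its own function; the function names are chosen only for this example.

#include <stdio.h>

/* Sub-task 1: read three numbers from the user */
void read_numbers(int *a, int *b, int *c)
{
    printf("Enter three numbers: ");
    scanf("%d %d %d", a, b, c);
}

/* Sub-task 2: find the largest of three numbers */
int largest(int a, int b, int c)
{
    int max = a;
    if (b > max) max = b;
    if (c > max) max = c;
    return max;
}

/* Sub-task 3: print the result */
void print_result(int max)
{
    printf("Largest = %d\n", max);
}

/* Top level: the overall system expressed in terms of its sub-tasks */
int main()
{
    int a, b, c;
    read_numbers(&a, &b, &c);
    print_result(largest(a, b, c));
    return 0;
}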
Top-Down Approach versus Bottom-Up Approach
• The top-down approach is theory-driven; the bottom-up approach is data-driven.
• Top-down places the emphasis on doing things (algorithms); bottom-up places the emphasis on data rather than procedure.
• In the top-down approach, large programs are divided into smaller programs, which is known as decomposition; in the bottom-up approach, programs are built up from units known as objects, which is called composition.
• Communication among modules is less in the top-down approach; communication is key among modules in the bottom-up approach.
• The top-down approach is widely used in debugging, module documentation, etc.; the bottom-up approach is widely used in testing.
• The top-down approach is mainly used by structured programming languages like C, Fortran, etc.; the bottom-up approach is used by object-oriented programming languages like C++, C#, Java, etc.
• The top-down approach may contain redundancy, since we break the problem into smaller fragments and then build each section separately; the bottom-up approach contains less redundancy if data encapsulation and data hiding are used.