Define The Following Terms

1. Define the following terms


a) Data processing

 The process of collecting, converting, manipulating, and organizing raw facts (data)
into meaningful and useful information.

b) Data processing style

 The specific method or approach used to carry out data processing, e.g.:
1. Manual – done by hand without machines.
2. Mechanical – using simple mechanical devices like typewriters or calculators.
3. Electronic – using computers and related digital devices.

2. Using an illustration, describe the four primary stages of the data processing cycle

Stages:

1. Data Collection – Gathering raw facts from various sources (e.g., forms, surveys,
sensors).
2. Data Input – Converting the collected data into a machine-readable form (e.g., typing
into a computer, scanning).
3. Processing – Manipulating the data according to instructions (e.g., calculations,
sorting, classification).
4. Output – Presenting processed results in a usable format (e.g., printed reports, screen
displays).

Example:

 School exam marks collected on paper → entered into a computer → averaged and
ranked → report cards printed.
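The four stages above can be sketched in Python; the student names and marks are made up purely for illustration:

```python
# Sketch of the four-stage data processing cycle using exam marks.
# All names and figures here are illustrative, not from a real system.

# 1. Data collection: raw facts gathered on paper forms.
raw_forms = ["Amina 78", "Brian 65", "Carol 91"]

# 2. Data input: convert the collected data to a machine-readable structure.
marks = {}
for form in raw_forms:
    name, score = form.split()
    marks[name] = int(score)

# 3. Processing: compute the average and rank the students.
average = sum(marks.values()) / len(marks)
ranked = sorted(marks.items(), key=lambda item: item[1], reverse=True)

# 4. Output: present the processed results in a usable format.
print(f"Class average: {average:.1f}")
for position, (name, score) in enumerate(ranked, start=1):
    print(f"{position}. {name}: {score}")
```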

3. Outline the stages of data collection


1. Identification of data sources – Deciding where data will come from.
2. Preparation of source documents – Designing forms or formats for recording data.
3. Actual data recording – Writing or capturing the data on source documents.
4. Transmission of data – Sending the recorded data to the processing location.
5. Verification – Checking the data for accuracy before processing.
6. Storage – Keeping data safely until it’s ready for processing.

4. Garbage In Garbage Out (GIGO)

 Definition: A principle stating that the quality of output is determined by the quality of the
input; if incorrect or poor-quality data is entered, the output will also be incorrect.
 Relevance to errors: If wrong data is fed into a computer (garbage in), the computer will still
process it but give wrong results (garbage out), because computers do not think or correct
human errors.
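A tiny Python illustration of GIGO, using a made-up data-entry slip (90 mistyped as 9):

```python
# GIGO: the computer processes whatever it is given, right or wrong.
correct_marks = [70, 80, 90]
garbage_marks = [70, 80, 9]   # 90 mistyped as 9 during data entry

good_average = sum(correct_marks) / 3   # 80.0 - correct input, correct output
bad_average = sum(garbage_marks) / 3    # 53.0 - garbage in, garbage out
print(good_average, bad_average)
```

The computer carries out the averaging faithfully in both cases; it has no way of knowing the second input was wrong.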

5. Two types of transcription errors

1. Omission errors – Leaving out required characters or numbers during data entry.

2. Addition errors – Including extra unwanted characters or numbers during entry.

6. Three types of computational errors

1. Truncation errors – Shortening numbers and losing accuracy.

2. Rounding errors – Approximating numbers instead of using exact figures.

3. Overflow/underflow errors – When numbers are too large or too small for the computer’s
storage capacity.
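All three computational errors can be demonstrated in Python. The sketch below uses 64-bit floats to show overflow and underflow, since Python's integers themselves are unbounded:

```python
import math

# Truncation error: chopping digits instead of rounding.
x = 2.789
truncated = math.trunc(x * 100) / 100   # 2.78 - digits lost, always toward zero

# Rounding error: approximating instead of using the exact figure.
rounded = round(2.789, 2)               # 2.79 - closer, but still not exact

# Overflow: a result too large for the storage format.
# 64-bit floats top out near 1.8e308, so this becomes infinity.
overflow = 1e308 * 10

# Underflow: a result too small to represent, flushed to zero.
underflow = 1e-308 / 1e100

print(truncated, rounded, overflow, underflow)
```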

7. Integrity of data

 Definition: The correctness, accuracy, and reliability of data throughout its lifecycle, ensuring
it remains unaltered and trustworthy.

8. Factors determining data integrity

1. Accuracy of data entry.

2. Proper validation checks.

3. Secure storage methods.

4. Controlled access to data.

5. Proper backup procedures.

9. Ways of minimising threats to data integrity

1. Use validation and verification checks.

2. Restrict access with passwords.

3. Keep regular backups.

4. Train staff on proper data handling.

5. Use error-checking software.


10. Distinguish

a) Sequential vs Serial file organisation

 Sequential: Records stored in a specific order (usually sorted by key field).

 Serial: Records stored in the order they are entered, without sorting.

b) Random vs Indexed-Sequential file organisation

 Random: Records stored at locations computed from their key (hashing), so any record can be reached directly without searching.

 Indexed-Sequential: Records stored sequentially but accessed using an index for faster
search.
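Random (hashed) organisation can be sketched in Python; the bucket count and record keys below are made-up illustrative values:

```python
# Sketch of random (hashed) file organisation: the record's storage
# "address" is computed from its key, so no sequential search is needed.
NUM_BUCKETS = 7
buckets = [[] for _ in range(NUM_BUCKETS)]

def address(key):
    """Hashing: turn a record key into a storage location."""
    return key % NUM_BUCKETS

def store(key, record):
    buckets[address(key)].append((key, record))

def fetch(key):
    # Go straight to the computed bucket; scan only its few entries.
    for stored_key, record in buckets[address(key)]:
        if stored_key == key:
            return record
    return None

store(1503, "Amina")
store(2210, "Brian")
print(fetch(1503))  # Amina
```

An indexed-sequential file would instead keep the records sorted and consult a separate index table to jump near the right position before reading sequentially.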

11. Types of data processing methods

1. Manual – Done by hand.

2. Mechanical – Using devices like typewriters.

3. Electronic – Using computers.

12. Advantages of computer files over manual filing

1. Faster retrieval of information.

2. Require less physical space.

3. Easier to back up and duplicate.

4. Improved security through passwords.

13. Elements that make up a computer file

1. File name

2. File extension

3. Data records

4. File size

14. Logical vs Physical file

 Logical file: How data appears to the user (structure and organisation).

 Physical file: How data is actually stored on storage media.

15. Six types of computer files


1. Master files.

2. Transaction files.

3. Backup files.

4. Reference files.

5. Report files.

6. Temporary files.

16. Distinctions

 Already answered in Q10.

17. Describe at least five types of electronic data processing modes

1. Batch Processing

o Data is collected over a period of time, grouped into batches, and then processed
all at once.

o No immediate output is produced — results are available after the entire batch is
processed.

o Example: Processing payroll at the end of the month.

2. Online Processing

o Data is processed immediately after it is entered into the computer.

o The system is always connected to the central computer or database.

o Example: Airline seat booking systems.

3. Real-Time Processing

o Data is processed instantly as events occur, producing immediate results.

o Often used in situations where up-to-date information is critical.

o Example: Bank ATM withdrawals updating the account balance instantly.

4. Distributed Processing

o Data processing tasks are shared between two or more interconnected computers
located in different places.

o Reduces the workload on a single machine and can improve speed and reliability.

o Example: Branch offices of a company processing transactions locally but updating a central database.

5. Multiprocessing

o A single computer uses two or more processors (CPUs) working together to
execute different tasks simultaneously.

o Improves processing speed and efficiency for complex tasks.

o Example: Scientific simulations or large database management.
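The contrast between batch and online/real-time processing can be sketched in Python; the sales figures and opening balance are invented for illustration:

```python
# Batch: transactions accumulate, then are processed together at the end.
batch = []

def record_sale(amount):
    batch.append(amount)        # no output yet - data is only collected

def run_end_of_day_batch():
    total = sum(batch)
    batch.clear()
    return total                # results appear only after the whole batch runs

# Online/real-time: each transaction is processed the moment it occurs.
balance = 1000

def withdraw(amount):
    global balance
    balance -= amount           # the account is updated immediately
    return balance

record_sale(250)
record_sale(400)
print(run_end_of_day_batch())   # 650 - produced only at batch time
print(withdraw(200))            # 800 - produced instantly
```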

18. Computer program definition

 A set of instructions written in a programming language to direct a computer to perform specific tasks.

19. What is programming?

 The process of designing, writing, testing, and maintaining computer programs.

20. Advantages of high-level over low-level languages

1. Easier to learn and use.

2. Portable across different computers.

3. Require less programming time.

4. Easier to debug and maintain.

21. Four examples of languages & their application areas

1. C – System software development.

2. Java – Web and mobile applications.

3. Python – Data science and AI.

4. SQL – Database management.

22. Why an executable file is unique

 It contains machine code instructions directly understood by the computer, unlike text or
data files.

23. Compiler vs Interpreter

 Compiler: Translates the whole program into machine code before execution.

 Interpreter: Translates and runs code line-by-line.


 Early computers, with their limited memory, worked well with interpreters because an interpreter translates and executes one statement at a time, so the whole translated program never has to be held in memory at once.

24. Examples of programming languages per generation

1. 1st gen: Machine language.

2. 2nd gen: Assembly language.

3. 3rd gen: High-level languages (e.g., C, Pascal).

4. 4th gen: SQL, report generators.

5. 5th gen: Prolog, LISP (AI).

25. Advantage of machine language

 Executes faster because it requires no translation.

26. Write in full

 HTML: HyperText Markup Language.

 OOP: Object-Oriented Programming.

27. Source code vs Object code

 Source code: Human-readable program written in a programming language.

 Object code: Machine-readable translation of source code.

28. Encapsulation

 The concept of bundling data and methods that operate on that data within one unit (class)
in OOP.
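A minimal Python sketch of encapsulation; the bank-account class and its figures are a made-up example:

```python
# Encapsulation: the data (balance) and the methods that operate on it
# are bundled in one class, and the data is not manipulated directly.
class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance      # leading underscore: internal by convention

    def deposit(self, amount):
        if amount <= 0:              # the class guards its own data
            raise ValueError("Deposit must be positive")
        self._balance += amount

    def get_balance(self):
        return self._balance         # controlled read access

account = BankAccount("Amina", 100)
account.deposit(50)
print(account.get_balance())         # 150
```

Because all changes pass through `deposit`, invalid updates (such as a negative deposit) are rejected inside the class rather than corrupting the data.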

29. Compiling vs Interpreting

 Compiling creates an executable file; interpreting runs the program directly line-by-line.

30. Six stages of program development

1. Problem definition.

2. Feasibility study.

3. Program design.
4. Coding.

5. Testing/debugging.

6. Documentation & maintenance.

31. Two advantages of monolithic programs

1. Simple structure for small tasks.

2. Easier to test small simple programs.

32. Two advantages of modular programming

1. Easier maintenance and debugging.

2. Code reusability.

33. Stage where documentation falls

 Falls under program design and maintenance stages because documentation is created
during design and updated during maintenance.

34. Difference between flowchart & pseudocode

 Flowchart: Diagrammatic representation of program logic.

 Pseudocode: Written step-by-step instructions in plain language.

35. Program bug

 An error or defect in a program that causes it to produce incorrect results or behave unexpectedly.

36. Importance of testing before implementation

1. Detect and remove errors.

2. Ensure program meets requirements.

3. Prevent system failures after release.
