
UNIVERSITY OF JOS

FACULTY OF NATURAL SCIENCE

DEPARTMENT OF COMPUTER SCIENCE

SYSTEM HARDWARE AND MAINTENANCE

ASSIGNMENT
BY
ABUBAKAR KABIRU
What is a Power Supply Unit?
A power supply unit (PSU) is a hardware device that converts Alternating Current (AC) electricity into Direct Current (DC) electricity and then distributes it to the rest of the computer. On a standard desktop computer, the PSU is where the power cord plugs in, and it usually has an I/O power switch on it.

Connecting the Dots: PSU Cables and Power Distribution

If you open a standard computer case, you will see that the PSU is connected to the rest of the computer by various power cables. These cables supply the motherboard, hard drives, and case electronics with the electricity they need to function. Most PSUs also have extra cables meant for the installation of peripherals with large power demands, such as graphics cards. In recent years, modular PSUs have become more commonplace, allowing users to attach only as many power cables as necessary.

In addition to the power provided directly by the PSU, the motherboard assists in distributing
power to the CPU and RAM slots as well as the connectors for the CPU and case fan systems.
Since the motherboard can help distribute power, the PSU doesn’t need to be directly plugged
into every system component. Not only would direct connections create a clutter of wires to deal with; many system components, such as integrated graphics chips and CPUs, are too small or delicate for a direct PSU connection. By combining a solid PSU with a compatible motherboard, you can rest assured that
your computer will have all of the power it needs.

Power Ratings and Voltage Rails
One of the main features to pay attention to regarding PSUs is their power rating. The power rating describes the total system power that can be drawn from the unit before it overloads, usually expressed in watts (W). Modern PSUs commonly range from 300W to over 1000W. PSUs
with larger power ratings are commonly found in computers that have multiple graphics cards
installed such as those used for gaming or graphics processing. Laptops generally have power
supplies ranging from 50W to over 200W. These units usually have an associated power unit or
“brick” that converts AC to DC in the same way as a desktop PSU.

Another key feature of PSUs is their voltage, usually described in terms of voltage “rails”. A voltage rail is a supply line that delivers a particular voltage, and different system components draw from different rails depending on their voltage requirements. For example, a PCI network card will likely draw power
from the +5 V rail, whereas the motors for the CPU fans will draw power from the +12 V rail. Put
simply, the voltage rails are the levels of voltage available for use by any system component.
While power rating determines the total power capacity of a PSU, voltage rails determine how
that power is used.
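As a rough illustration of how power rating and voltage rails interact, the short Python sketch below adds up hypothetical per-rail component draws and compares the total with a PSU's power rating. All component names, wattages, and the 650 W rating are made-up example figures, not measurements or recommendations.

```python
# Hypothetical component power draws grouped by voltage rail (watts).
# These figures are illustrative assumptions, not measured values.
draw_by_rail = {
    "+3.3V": {"RAM": 10, "chipset": 15},
    "+5V": {"SSD": 5, "USB devices": 10},
    "+12V": {"CPU": 125, "GPU": 220, "fans": 10},
}

PSU_RATING_W = 650   # total power the unit can deliver before overloading (example)
HEADROOM = 0.80      # common rule of thumb: keep sustained load under ~80% of the rating

total_draw = sum(sum(parts.values()) for parts in draw_by_rail.values())

for rail, parts in draw_by_rail.items():
    print(f"{rail} rail draw: {sum(parts.values())} W")

print(f"Total estimated draw: {total_draw} W of a {PSU_RATING_W} W rating")
if total_draw > PSU_RATING_W * HEADROOM:
    print("Consider a PSU with a higher power rating for sustained headroom.")
```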

Different types of power supply units

If you are building or upgrading your own PC, you need to choose a compatible and reliable power
supply unit (PSU) that can deliver enough power to your components. But not all PSUs are the
same. There are different standards, sizes, and connectors that you need to consider. In this
article, we will explain the differences and compatibility issues between ATX, EPS, and SFX PSU
standards, and how to choose the right one for your PC.

ATX PSUs

ATX stands for Advanced Technology extended, and it is the most common PSU standard for
desktop PCs. ATX PSUs have a standard size of 150 x 86 x 140 mm, and they can fit in most ATX
or larger cases. ATX PSUs have a 24-pin main connector that plugs into the motherboard, and
various peripheral connectors that power the CPU, GPU, storage devices, fans, and other
components. ATX PSUs can provide up to 300 W of power, but some models can go higher
depending on their efficiency and quality.

ATX: Standard for most desktops, with various wattages.

EPS PSUs

EPS stands for Entry-level Power Supply, and it is a PSU standard designed for servers and
workstations that require more power and stability. EPS PSUs have a similar size and shape as ATX PSUs, but they differ in their CPU power connections. In addition to the 24-pin main connector for the motherboard, EPS PSUs provide an 8-pin EPS12V connector for the CPU, often along with additional 8-pin or 4-pin CPU connectors, and they have more peripheral connectors than ATX PSUs.
EPS PSUs can provide up to 400 W of power, but some models can go higher depending on their
efficiency and quality.

EPS: Designed for servers and high-performance systems.

ATX and EPS connectors differ in shape and pin configuration.

SFX PSUs

SFX stands for Small Form Factor, and it is a PSU standard designed for small and compact PCs
that have limited space and cooling. SFX PSUs have a smaller size than ATX or EPS PSUs, typically
125 x 63.5 x 100 mm, and they can fit in some mini-ITX or micro-ATX cases. SFX PSUs have a 24-
pin main connector that plugs into the motherboard, and fewer peripheral connectors than ATX
or EPS PSUs. SFX PSUs can provide up to 250 W of power, but some models can go higher
depending on their efficiency and quality.

ATX and EPS PSUs aren't always interchangeable.

SFX PSUs may need adapters for ATX/EPS connections.

SFX: Compact for small form factor (SFF) builds.

IMPORTANCE OF WATTAGE AND EFFICIENCY RATINGS IN A POWER SUPPLY UNIT.

When selecting a power supply for your electronic device, it's imperative to consider the
efficiency rating. Not all power supplies are the same - some may be more effective than others
in supplying voltage and current that keep your device running optimally.

A power supply's efficiency rating determines how much of the electricity drawn from the wall outlet (AC) or another source is actually converted into usable power for your device. This comprehensive guide will give you insight into efficiency ratings on power supplies. We'll explain their significance, review common power supply ratings, and teach you how to calculate their efficiency.
Furthermore, we have tips to enhance your own hardware's performance as well. Whether
you're someone who likes a DIY project or an experienced technician - this guide has something
for all levels of technical expertise when it comes to understanding power supply efficiency.

What is the Efficiency Rating of a Power Supply?

When you purchase a power supply, make sure to look at its efficiency rating. This ratio indicates
how much of the power coming from the AC or DC source is usable for your device - with 80%
being an excellent benchmark.

The other 20%, unfortunately, dissipates as heat and isn't available for powering up electronic
components. Keeping this in mind will ensure that you get the maximum performance out of
your hardware. When it comes to your device's performance and life span, power supply
efficiency matters. Sub-standard efficiency can increase energy use, elevate heat output, and
limit the longevity of your machine. However, if you opt for a higher level of efficiency with your power supply unit, electricity costs will be reduced and thermal levels minimized, leading to a longer potential lifespan. It's important to note that the efficiency rating of a power supply can vary depending on several factors, such as its load, power supply voltage, and temperature. The efficiency rating can also be affected by the type of power supply, such as an AC vs DC power supply or a modular power supply.
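To make the ratio concrete, here is a minimal worked sketch of the arithmetic, assuming a hypothetical 400 W DC load and an 80% efficient unit; the numbers are chosen only for illustration.

```python
def wall_draw(dc_load_w: float, efficiency: float) -> tuple[float, float]:
    """Return (AC power drawn from the wall, power dissipated as heat)."""
    ac_in = dc_load_w / efficiency
    return ac_in, ac_in - dc_load_w

# Example: a 400 W load on an 80% efficient supply.
ac_in, heat = wall_draw(400, 0.80)
print(f"AC input: {ac_in:.0f} W, dissipated as heat: {heat:.0f} W")
# AC input: 500 W, dissipated as heat: 100 W
```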

The impact of voltage fluctuations on a computer system and how a PSU mitigates this.

Voltage fluctuations can significantly impact high-voltage systems, leading to equipment damage,
loss of efficiency, and system failures. Mitigating these fluctuations through voltage regulation,
power conditioning, and proactive maintenance is crucial for ensuring the reliable operation of
critical processes.

By addressing these issues, organizations can reduce downtime, minimize costs, and ensure the
safety of their personnel and equipment.

Remember, investing in effective mitigation strategies not only protects your high-voltage
systems but also provides a competitive advantage by minimizing disruptions and enhancing
operational efficiency.

Understanding the Basics of Voltage Fluctuations

In this article, we will delve into the basics of voltage fluctuations, their causes, and how to
protect your devices from potential damage.

What are Voltage Fluctuations?


Voltage fluctuations, which include voltage spikes, surges, and sags, are rapid and temporary deviations from a designated voltage level. They can occur for various reasons, including lightning strikes, power grid issues, faulty wiring, or the operation of heavy machinery.

It is worth mentioning that voltage fluctuations can take two forms: surges and sags.

❖ Surges: These are sudden increases in voltage lasting for a short duration. Surges can be
caused by lightning, power system disturbances, or improper use of electrical equipment.
❖ Sags: These are voltage drops lasting for a short duration. Sags can happen due to heavy loads
being connected to the electrical system or faulty wiring.

Primary Memory (RAM)

a. Primary memory and its role in a computer system.

Primary Memory is a section of computer memory that the CPU can access directly. Primary Memory has a faster access time than secondary memory but is slower than cache memory in the memory hierarchy. Primary Memory, on average, has a storage capacity that is lower than secondary memory but higher than cache memory.

Why Do We Need Primary Memory?


Memory is structured so that the access time for processes that are ready to run is minimized, which improves system efficiency. To reduce this access time, the following strategy is used.

All applications, files, and data are kept on secondary storage, which is larger and has a longer access time. A CPU or processor cannot access secondary memory directly. To execute a process, the operating system loads it into Primary Memory, which is comparatively smaller and can be accessed directly by the CPU. Because only the programs that are ready to execute are loaded into Primary Memory, the CPU can access them quickly, which improves the system's speed.

Memory Hierarchy refers to the step-by-step organization of memory.

Classification of Primary Memory


We can broadly classify Primary Memory into two parts:

1. Read-Only Memory or ROM

2. Random Access Memory or RAM

ROM or Read-Only Memory
Any data that does not need to be changed is saved in ROM. The ROM contains both programs
that run when the system boots up (known as a bootstrap program that initializes the OS) and
data such as the algorithm that the OS requires. Nothing can be tampered with or modified in
ROM.

Types of ROM
We can classify ROM into four major types on the basis of their behaviors. They are:

1. MROM – Masked ROM is pre-programmed and hardwired ROM. Any text that has already
been written cannot be changed in any way.

2. PROM – The user can only change the programmable ROM once. The user purchases a blank
PROM and writes the required text on it; however, the content cannot be changed once it has
been written.

3. EPROM – Erasable Programmable ROM can be erased and reprogrammed: exposing the chip to ultraviolet light dissipates the stored charge, which removes the original content so that new content can be written.

4. EEPROM – The content of an Electrically Erasable and Programmable ROM can be modified by erasing it electrically. Instead of erasing everything at once, one byte is wiped at a time, which makes reprogramming an EEPROM a time-consuming procedure.

Random Access Memory


Any system process that has to be executed is loaded into RAM, where it is processed by the CPU according to the program's instructions. If we click on an application such as a browser, the operating system first loads the browser's code into RAM, after which the CPU executes it and the browser opens.

Types of RAM
We can broadly classify RAM into SRAM or Static RAM and DRAM or Dynamic RAM on the basis
of behaviors.

1. DRAM – To keep its data, dynamic RAM, or DRAM, must be refreshed every few milliseconds. DRAM is made up of capacitors and transistors, and because the capacitors leak electric charge, DRAM must be recharged on a regular basis. Because DRAM is less expensive than SRAM, it is commonly used in personal computers and servers.

2. SRAM – Static RAM, or SRAM, retains its data for as long as the system is powered on. SRAM stores each bit using a latching circuit similar to a flip-flop, so it does not need to be refreshed on a regular basis. Because SRAM is expensive, it is only used where speed is critical.

Why is Primary Memory volatile in nature?


Depending on whether Primary Memory is stored in RAM or ROM, its contents may or may not
vanish when power is lost.

The content in ROM is non-volatile, meaning it is saved even if power is lost.

RAM’s content is volatile, meaning it vanishes when power fails or is lost.

Why was cache memory introduced?

The data present in Primary Memory can be accessed faster than data in secondary memory. Primary Memory access times are typically in the microseconds, whereas the CPU can perform operations in nanoseconds. The system's performance suffers because of the time lag between reading data and acting on it, and since the CPU is underutilized, it may sit idle for some time. To reduce this time gap, a new memory segment known as cache memory was introduced.
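One way to see why the cache helps is the standard average memory access time (AMAT) calculation. The sketch below uses assumed, illustrative latencies and hit rate rather than figures from this document.

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed figures: 1 ns cache hit, 5% miss rate, 100 ns main-memory penalty.
print(f"With cache: {amat(1, 0.05, 100):.1f} ns per access")   # 6.0 ns
print("Without cache: every access pays the full 100 ns")
```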

Differences between volatile and non-volatile memory.


Volatile memory offers several advantages. First, it is fast, so data can be quickly accessed.

Second, it protects sensitive data because the data becomes unavailable once the system is
turned off.

Finally, because of its high speed, volatile memory makes data transfer much easier.

DISADVANTAGES
❖ However, volatile memory has a lower storage capacity, and
❖ it tends to be more expensive per unit of storage.
❖ A single RAM chip may be no more than a few GB in capacity, while high-capacity RAM is costly compared with secondary storage.

What is non-volatile memory?


Non-volatile memory, also known as static or permanent memory, is a type of memory hardware
that does not lose the data stored within it when the system shuts down. In contrast to volatile
memory, non-volatile memory takes longer to fetch and store data, but it is still fast, and it has a
higher memory capacity than volatile memory. As a result, users can store all the information
they want on their device for an extended period of time. Additionally, because of its higher
memory capacity, non-volatile memory is more cost-efficient than volatile memory.

Non-volatile memory is often used for secondary storage or long-term storage. However, it takes longer for the operating system to read from this memory, so it delivers slower performance and lower data transfer rates.

Some examples of non-volatile memory are read-only memory (ROM), whose contents cannot normally be modified electronically, flash drives, and hard drives, all of which permanently store data regardless of whether or not the system is on.

There are two types of non-volatile memory: mechanically addressed systems and electrically
addressed systems.

Mechanically addressed systems, like hard disk drives, use a read/write head that is physically positioned over a storage medium. Electrically addressed systems, like solid-state drives, use electrical mechanisms to read and write data.

Hierarchy of memory in a computer system, including RAM, cache memory, and registers.
Memory hierarchy refers to the arrangement of different types of computer memory in a system,
organized in ascending order based on their access speed, capacity, and cost. This concept is
integral to computer architecture and aims to optimize the trade-offs between speed, size, and
data storage and retrieval costs.

Memory hierarchy exploits the principle of locality, which states that programs tend to access a
small portion of data frequently (temporal locality) and access data that is located close to
recently accessed data (spatial locality). This principle guides data placement across different
memory levels to improve overall system performance.

Efficient utilization of memory hierarchy is crucial in computer systems to ensure faster access to
frequently used data while maintaining larger storage capacities for less frequently accessed
information

Types of Memory Hierarchy


The Memory Hierarchy Design encompasses two primary categories:

External Memory or Secondary Memory: This category includes peripheral storage devices such as Magnetic Disks, Optical Disks, and Magnetic Tapes. These devices are accessible to the processor through an I/O Module, serving as secondary storage.

Internal Memory or Primary Memory: This category encompasses Main Memory, Cache Memory,
and CPU registers. This memory directly interfaces with the processor and includes components
vital for immediate data access and processing.


Characteristics of Memory Hierarchy


At its core, memory hierarchy consists of multiple levels, each offering varying characteristics (a small lookup sketch follows the list below):

1. Registers: Located within the CPU, registers are the fastest but smallest form of memory
used to store data that the CPU needs to access immediately.
2. Cache: This level acts as a bridge between the CPU and main memory (RAM). It comprises
multiple levels (L1, L2, L3) of increasingly larger but slower caches that temporarily hold
frequently accessed data to expedite CPU operations.
3. Main Memory (RAM): This level stores data that the CPU is actively working on but at a
slower speed than caches. It provides a larger storage capacity than caches but is slower
to access.
4. Secondary Storage: Includes hard drives (HDDs), solid-state drives (SSDs), and other long-
term storage mediums. While offering massive storage capacity, they are slower than
RAM.
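The lookup sketch below is a toy illustration of this ordering, not a model of real hardware timings; the capacities and latencies are assumptions chosen only to show how a request falls through the levels.

```python
# Each level: (name, capacity in bytes, illustrative access latency in ns).
# All values are assumptions for illustration, not real hardware specifications.
HIERARCHY = [
    ("registers", 512, 0.3),
    ("L1 cache", 64 * 1024, 1),
    ("L2 cache", 1 * 1024**2, 4),
    ("L3 cache", 32 * 1024**2, 15),
    ("main memory", 16 * 1024**3, 100),
    ("secondary storage", 1 * 1024**4, 100_000),
]

def find_level(resident: dict, address: int) -> tuple:
    """Return the first (fastest) level that currently holds the address."""
    for name, _capacity, latency_ns in HIERARCHY:
        if address in resident.get(name, set()):
            return name, latency_ns
    # Everything ultimately lives on secondary storage.
    return HIERARCHY[-1][0], HIERARCHY[-1][2]

resident = {"L1 cache": {0x10}, "main memory": {0x10, 0x20}}
print(find_level(resident, 0x10))   # ('L1 cache', 1)
print(find_level(resident, 0x30))   # ('secondary storage', 100000)
```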

Importance of memory speed and capacity in computer performance.
Generally, the faster the RAM, the faster the processing speed. With faster RAM, you increase the speed at which memory transfers information to other components. This means your fast processor now has an equally fast way of talking to the other components, making your computer much more efficient.

Hard Drives (HDD/SSD):


• Solid state drives (SSD) and hard disk drives (HDD) are data storage devices.
• SSDs store data in flash memory, while HDDs store data in magnetic disks.
• SSDs are a newer technology that uses silicon's physical and chemical properties to offer
more storage volume, speed, and efficiency. However, HDDs are a cost-efficient option if
you require infrequent data access in blocks of 1 MB or more at a time.
• SSDs use memory chips, while HDDs use mechanical spinning disks and a moving read/write head to access data.

Factors affecting the performance of HDDs and SSDs
1. SSD utilization time

After a long period of use, the NAND flash inside the SSD wears out and the bit error rate increases. Inside the SSD there is a bit error recovery mechanism that includes both software and hardware. Once the bit error rate exceeds a certain level, the hardware mechanism starts to fail and flipped bits must be restored by the software. However, the software mechanism usually adds latency, which has an impact on SSD performance. In some situations, SSD performance may also degrade after the SSD has been powered off for some time, due to charge leakage in the NAND flash during that period.

Therefore, SSD performance is time-dependent and is primarily determined by the NAND flash
bit error rate.

2. Storage capacity

The resource mapping table within the SSD is kept in memory for improved performance and occupies roughly 0.1% of the SSD's capacity, so the larger the drive, the more memory its mapping table consumes.
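A quick back-of-the-envelope from the 0.1% figure above, using arbitrary example drive sizes:

```python
# Mapping-table memory needed at 0.1% of drive capacity (figure from the text above).
for capacity_gb in (256, 1024, 4096):            # example drive sizes
    table_mb = capacity_gb * 1024 * 0.001        # 0.1% of capacity, in MB
    print(f"{capacity_gb} GB SSD -> about {table_mb:.0f} MB of memory for the mapping table")
```

This is why capacity appears among the performance factors: larger drives need proportionally more controller memory for the table.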

3. Temperature

When the NAND flash is running at full speed, it generates a lot of heat. When the temperature reaches a certain threshold, the system can become unreliable. The SSD therefore has a temperature controller that addresses this issue: it monitors the temperature and automatically throttles SSD performance as needed to keep the temperature within a certain range. In other words, SSD performance is sacrificed to reduce temperature, and excessive heat causes the internal temperature control system to lower SSD performance.

4. Driver software

The driver software runs in either user mode or kernel mode on the host. In kernel mode, the driver consumes more CPU resources and suffers frequent interrupts and context switches, resulting in poorer performance. In user mode, the software typically employs I/O polling to avoid unnecessary context switching, which maximizes CPU efficiency and delivers better performance.

5. SSD controller’s processing power

The SSD controller uses FTL (flash translation layer) processing logic to convert logical block read and write requests into NAND flash read and write requests. When data is read or written in large blocks, the demands on the controller's processing capability are modest, but they are extremely high when data is read or written in small blocks.

Therefore, the SSD controller's processing capability can become a performance constraint for the SSD I/O system.

6 Important Factors Affecting Hard Drive Performance


1. Actual Capacity

Capacity is one of the most significant parameters of a hard drive since it’s widely used for data
storage. As we know, 1GB is 1024MB and 1TB is 1024GB. However, a lot of manufacturers usually
treat 1GB as 1000MB when developing their drives. Hence, you may see that the actual capacity
of a drive is not as large as what it boasts.
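A short worked example of the 1000-versus-1024 discrepancy, using an arbitrary 1 TB drive as the example:

```python
# A drive sold as "1 TB" uses decimal units (1 TB = 10**12 bytes), while the
# operating system usually reports binary units (1 GiB = 2**30 bytes, 1 TiB = 2**40 bytes).
bytes_total = 1 * 10**12
print(f"Reported size: {bytes_total / 2**30:.0f} GiB, or {bytes_total / 2**40:.2f} TiB")
# Reported size: 931 GiB, or 0.91 TiB
```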

2. Transfer Rate

Transfer rate refers to the data read/write speed of a hard drive. It can be divided into internal transfer rate and external transfer rate. The former indicates the drive's performance when the buffer is not used, while the latter reflects the data transfer speed between the drive buffer and the system bus.

3. Rotational Speed

Rotational speed usually refers to the maximum number of revolutions the hard drive platter can complete in one minute, measured in revolutions per minute (RPM). It is therefore an extremely important factor affecting the drive's internal transfer rate.

4. Cache

The cache is effectively a memory chip inside the hard drive. When data is accessed, it is exchanged between the drive and this memory. Therefore, if a hard drive has a large cache, system load can be reduced and data transmission speed greatly increased.

5. Average Access Time

Average access time is the time the drive heads need to move from their starting position to the target sector where the data is to be read or written. It therefore reflects the read and write speed of a hard drive, and you should definitely take it into consideration when selecting one.
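As a worked example of how rotational speed and seek time combine into average access time, the figures below are typical illustrative values rather than the specifications of any particular drive:

```python
def average_access_time_ms(rpm: int, avg_seek_ms: float) -> float:
    """Average access time ~ average seek time + average rotational latency.
    Rotational latency averages half a revolution."""
    rotational_latency_ms = 0.5 * (60_000 / rpm)   # 60,000 ms in one minute
    return avg_seek_ms + rotational_latency_ms

# Illustrative figures: a 7200 RPM drive with an assumed 9 ms average seek time.
print(f"{average_access_time_ms(7200, 9.0):.2f} ms")   # about 13.17 ms
```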

6. Interface Type

There are many types of hard drive interface, which define the connections between the drive's chips, controller, and cables. This factor also influences and reflects hard drive performance, and different interface types deliver different levels of performance.

CONCEPT OF DISK FRAGMENTATION AND ITS IMPACT ON STORAGE PERFORMANCE.
Disk fragmentation occurs when files or pieces of files get scattered throughout your disks. Not
only do hard disks get fragmented, but removable storage can also become fragmented. This can
cause poor disk performance and overall system degradation.

Fragmentation occurs in hard disk drives, but not in solid state drives. SSDs do not fragment files
in the same way that hard drives do. Fragmentation can also occur in primary system memory
(RAM), where application and system processes allocate and use memory in non-contiguous blocks as existing memory regions are used and re-used, but this article focuses on HDD fragmentation.

Disk fragmentation typically increases over time as read and write operations take place within an operating system. As applications and files are added and removed, free space ends up scattered across the disk in a non-contiguous manner. Over time, files become so fragmented that reading the data on the hard drive is a slow process. This is sometimes regarded as an inefficient use of file storage resources. On the other hand, the ability of an operating system to rapidly write files to any available storage block, without first re-allocating blocks so that they can be read sequentially, speeds up file writing operations.

Some degree of disk fragmentation is generally considered acceptable in exchange for increased
performance and simplicity. However, over time it can result in slower drives and wasted disk
space. To optimize storage space and speed, businesses must understand how disk
fragmentation works and how to mitigate it.

Types of Fragmentation
The two most common types of fragmentation are internal and external.

Internal fragmentation. Because of the way memory is divided into fixed partitions, sometimes a
file is a poor fit. For example, if a particular process requires 4 MB but the storage system has
allocated a 6 MB space, all six of those megabytes are used up even though the process only
consumes four. When allocated memory exceeds what is needed for a write, you have wasted
leftover space.
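The wasted space in the 4 MB / 6 MB example above can be computed directly; this small sketch generalizes it for fixed-size partitions and fixed-size blocks, with the sizes chosen purely as examples.

```python
import math

def internal_waste_mb(needed_mb: float, partition_mb: float) -> float:
    """Space allocated but never used when a request sits in a larger fixed partition."""
    return partition_mb - needed_mb

def last_block_waste_kb(file_kb: int, block_kb: int) -> int:
    """Leftover space in the final block when files are stored in fixed-size blocks."""
    blocks = math.ceil(file_kb / block_kb)
    return blocks * block_kb - file_kb

print(internal_waste_mb(4, 6))       # 2 MB wasted, as in the example above
print(last_block_waste_kb(10, 4))    # a 10 KB file in 4 KB blocks wastes 2 KB
```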

External fragmentation. When memory allocation processes are inefficient, they can leave
fragments of space on the drive that are too small to be useful. If a process tries to run using that
space, the process won’t be fulfilled because the empty space isn’t contiguous—it’s split up
amongst unusable fragments on the drive.

To mitigate internal fragmentation, storage admins should use dynamic allocation software—
often already existing in the operating system—to avoid allocating more memory than is needed
to run a process. To solve external fragmentation issues, use defragmentation tools, which
reposition fragments on hard drives so more contiguous space is available.

d. Describe the evolution of storage technologies from HDDs to SSDs and beyond.
In the digital age, the amount of data being generated, processed, and stored has increased
exponentially. As a result, storage devices have become a crucial component of our daily lives,
from the computers and smartphones we use to the cloud servers that power the internet. The
evolution of storage devices has been remarkable, with technological advancements enabling us
to store more data in smaller and more efficient formats. This article explores the history of
storage devices, their evolution, and the current state of the art.

Early Storage Devices

The first storage devices were mechanical and electro-mechanical in nature. These devices relied
on mechanical movement to read and write data. The earliest known storage device was the
punch card, which was invented by Joseph Marie Jacquard in 1801. The punch card was used to
control weaving machines and was capable of storing up to 80 characters of data. The punch card
was a significant development in computing as it allowed for the automation of textile production
and the standardization of weaving patterns.

Another early storage device was the magnetic drum, which was invented by Gustav Tauschek in
1932. The magnetic drum was a rotating cylinder coated with magnetic material, which allowed
data to be read and written to the drum's surface. The magnetic drum was a significant
improvement over the punch card, as it allowed for random access to data and increased storage
capacity.

Magnetic Tape and Disk Storage

In the 1950s and 1960s, magnetic tape and disk storage became popular. Magnetic tape was a
thin strip of plastic coated with magnetic material, and it was used to store data sequentially. The
first commercial magnetic tape storage device was the IBM 726, which was introduced in 1952.
The IBM 726 used a half-inch magnetic tape and had a storage capacity of 2.3 megabytes.

The first commercial disk storage device was the IBM 350, which was introduced in 1956. The
IBM 350 used a stack of 50 disks, each with a diameter of 24 inches, and had a storage capacity
of 5 megabytes. The IBM 350 was a significant advancement over magnetic tape as it allowed for
random access to data and faster read/write speeds.

Floppy Disks and Hard Drives

In the 1970s and 1980s, floppy disks and hard drives became popular storage devices. Floppy
disks were small, portable storage devices that could be inserted into a computer's disk drive.
The first floppy disk was introduced by IBM in 1971 and had a storage capacity of 80 kilobytes.
By the 1980s, floppy disks had become the primary storage device for personal computers. Hard
drives, on the other hand, were large, fixed storage devices that were typically installed inside a
computer. The first hard drive was introduced by IBM in 1956 and had a storage capacity of 5
megabytes. By the 1980s, hard drives had become an essential component of personal
computers, with storage capacities ranging from a few hundred megabytes to a few gigabytes.

Solid State Drives

In the 1990s and 2000s, solid-state drives (SSDs) became popular storage devices. Unlike traditional hard drives, SSDs do not have any moving parts and use flash memory to store data.

The first commercial SSD was introduced by SanDisk in 1991 and had a storage capacity of 20
megabytes. However, early SSDs were prohibitively expensive, and their storage capacities were
relatively low compared to traditional hard drives. Over time, SSD technology improved, and their
storage capacities increased, while their prices decreased. By the 2010s, SSDs had become a
viable alternative to traditional hard drives, with faster read/write speeds, lower power
consumption, and increased durability. Today, SSDs are the primary storage device for most
laptops and desktops, with storage capacities ranging from a few hundred gigabytes to several
terabytes.

Cloud Storage

Cloud storage has become increasingly popular in recent years, with the growth of the internet
and the increasing amount of data being generated. Cloud storage allows users to store their
data on remote servers, accessible through the internet, rather than on their local devices. Cloud
storage has several advantages, including remote access to data, automatic backups, and
scalability.

Cloud storage providers offer different types of storage, including object storage, file storage,
and block storage. Object storage is used for storing unstructured data, such as images, videos,
and audio files. File storage is used for storing structured data, such as documents, spreadsheets,
and presentations. Block storage is used for storing data in blocks, and it is typically used for
database storage and virtual machine storage.

4) Secondary Storage Devices:


Secondary storage devices provide a way for the computer to store information on a permanent
basis. The three primary categories for storage devices are magnetic storage, optical storage and
solid state storage. Examples of these are hard drives, CDs and DVDs, and flash drives.

Advantages and disadvantages of different secondary storage technologies (e.g., optical disks,
USB drives, cloud storage).

Advantages include easy portability, faster read speeds, and greater durability; however, storage capacity is considerably lower than magnetic disk storage. The capacity of optical discs is not as high as that of HDDs or the cloud, but costs have remained low, and they remain a viable option for secondary storage.

Disadvantages. Optical discs have a much lower capacity than hard drives or SSDs. They also have
a slow seek time, which means that the disc must spin to the right location before the data can
be accessed. Some CDs, DVDs, and Blu-ray discs are not rewritable, so data can only be written
to them once.

Explain the concept of RAID (Redundant Array of Independent Disks) and its various levels

RAID (redundant array of independent disks) is a way of storing the same data in different places
on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive failure.
There are different RAID levels, however, and not all have the goal of providing redundancy.

RAID allows information to be spread across several disks. RAID uses techniques such as disk
striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5)
to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover
from hard disk crashes.

RAID levels

RAID devices use different versions, called levels. The original paper that coined the term and developed the RAID concept defined six levels of RAID, numbered 0 through 5. This numbering system enabled those in IT to differentiate RAID versions. The number of levels has since expanded and has been broken into three categories: standard, nested, and nonstandard RAID levels.

Standard RAID levels

RAID 0. This configuration has striping but no redundancy of data. It offers the best performance,
but it does not provide fault tolerance.

Figure: A visualization of RAID 0.

RAID 1. Also known as disk mirroring, this configuration consists of at least two drives that
duplicate the storage of data. There is no striping. Read performance is improved, since either
disk can be read at the same time. Write performance is the same as for single disk storage.

Figure: A visualization of RAID 1.

RAID 2. This configuration uses striping across disks, with some disks storing error checking and
correcting (ECC) information. RAID 2 also uses a dedicated Hamming code parity, a linear form of
ECC. RAID 2 has no advantage over RAID 3 and is no longer used.

Figure: A visualization of RAID 2.

RAID 3. This technique uses striping and dedicates one drive to storing parity information. The embedded ECC information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Because an I/O operation addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this reason, RAID 3 is best for single-user systems with long record applications.

Figure: A visualization of RAID 3.
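A minimal sketch of the XOR-parity idea used by RAID 3 (and by RAID 5 below): parity is the XOR of the data blocks, and a lost block can be rebuilt by XOR-ing the parity with the surviving blocks. This is a toy byte-level illustration, not a real RAID implementation.

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d1, d2, d3 = b"\x0f\xaa", b"\xf0\x55", b"\x11\x22"
parity = xor_blocks(d1, d2, d3)

# Suppose the drive holding d2 fails: rebuild it from the survivors plus parity.
rebuilt = xor_blocks(d1, d3, parity)
print("recovered block matches the lost one:", rebuilt == d2)   # True
```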

RAID 4. This level uses large stripes, which means a user can read records from any single drive.
Overlapped I/O can then be used for read operations. Because all write operations are required
to update the parity drive, no I/O overlapping is possible.

Figure: A visualization of RAID 4.

RAID 5. This level is based on parity block-level striping. The parity information is striped across
each drive, enabling the array to function, even if one drive were to fail. The array's architecture
enables read and write operations to span multiple drives. This results in performance better
than that of a single drive, but not as high as a RAID 0 array. RAID 5 requires at least three disks,
but it is often recommended to use at least five disks for performance reasons.

RAID 5 arrays are generally considered to be a poor choice for use on write-intensive systems
because of the performance impact associated with writing parity data. When a disk fails, it can
take a long time to rebuild a RAID 5 array.

Figure: A visualization of RAID 5.

RAID 6. This technique is similar to RAID 5, but it includes a second parity scheme distributed
across the drives in the array. The use of additional parity enables the array to continue
functioning, even if two disks fail simultaneously. However, this extra protection comes at a cost.
RAID 6 arrays often have slower write performance than RAID 5 arrays.

Figure: RAID 6 diagram.
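To summarize how the standard levels trade capacity for redundancy, here is a hedged sketch of the usual usable-capacity arithmetic for an array of identical disks; the disk counts and sizes are arbitrary examples, and RAID 1 is treated as a simple two-way mirror.

```python
def usable_capacity_tb(level: int, disks: int, disk_tb: float) -> float:
    """Approximate usable capacity for common RAID levels built from identical disks."""
    if level == 0:                        # striping, no redundancy
        return disks * disk_tb
    if level == 1:                        # two-way mirroring
        return disk_tb
    if level == 5:                        # one disk's worth of parity
        return (disks - 1) * disk_tb
    if level == 6:                        # two disks' worth of parity
        return (disks - 2) * disk_tb
    raise ValueError("level not covered in this sketch")

for level in (0, 1, 5, 6):
    n = 2 if level == 1 else 4
    print(f"RAID {level} with {n} x 4 TB disks: {usable_capacity_tb(level, n, 4):.0f} TB usable")
```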

Evaluate the role of secondary storage in data backup and disaster recovery strategies

Secondary storage provides a permanent, cheaper, larger, and slower tier that is typically used for long-term storage of cold data. It readily supports lengthy retention requirements and
other data that may need to be recovered to make the environment whole and running after a
major outage. Secondary storage plays a crucial role in data backup and disaster recovery
strategies. It provides a reliable and secure way to store copies of important data outside of the
primary storage system. In the event of a data loss or disaster, having backups in secondary
storage ensures that critical information can be recovered and restored. It acts as a safeguard,
protecting against hardware failures, accidental deletions, and natural disasters. So, it's like a
safety net for your valuable data!

5) Processor (CPU):

a. Describe the function of the processor in a computer system.

The processor, also known as the CPU (Central Processing Unit), is like the brain of a computer
system. It carries out all the instructions and calculations that make the computer work. It
performs tasks such as executing program instructions, performing arithmetic and logical
operations, and managing data transfers between different parts of the computer. Basically, it's
responsible for all the "thinking" and "processing" that happens inside your computer!

The primary functions of a processor are –

I Fetch –

Every instruction has its own address and is stored in main memory. The CPU reads the address of the next instruction from the program counter, fetches that instruction from memory, and passes it on to be decoded and executed.

II Decode –

The fetched instruction is interpreted so that the CPU knows which operation to perform and which operands to use. This process of interpretation is known as decoding.

III Execute –

The process of performing the required task specified in the instruction is known as execution. The execution of the instruction takes place in the CPU.

IV Write back –

After performing the instruction, the CPU stores the result in memory or in a register; this process is known as the store or write-back.
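The four steps above can be illustrated with a toy instruction loop. This is purely a conceptual sketch: the instruction format, "memory", and registers here are invented for illustration and do not correspond to any real instruction set.

```python
# Toy fetch-decode-execute-write-back loop over a made-up instruction format.
memory = {0: ("ADD", 2, 3), 1: ("MUL", 4, 5), 2: ("HALT", 0, 0)}
registers = {"PC": 0, "ACC": 0}

while True:
    instruction = memory[registers["PC"]]            # fetch the instruction at the PC
    registers["PC"] += 1
    opcode, a, b = instruction                       # decode into opcode and operands
    if opcode == "HALT":
        break
    result = a + b if opcode == "ADD" else a * b     # execute the operation
    registers["ACC"] = result                        # write back the result
    print(opcode, a, b, "->", result)
```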

DIFFERENCE BETWEEN VARIOUS CPU ARCHITECTURES
x86: This architecture is commonly used in desktop and laptop computers. The name comes from Intel's 8086 processor and its successors, whose model numbers ended in "86". It is known for its complex instruction set, which means it can handle a wide range of instructions. It's widely compatible with different software and has been popular in the PC market for a long time.

ARM: ARM architecture is commonly found in mobile devices like smartphones and tablets. It
stands for "Advanced RISC Machine" and is based on a reduced instruction set. ARM processors
are known for their energy efficiency, which helps prolong battery life in portable devices.

RISC: RISC stands for "Reduced Instruction Set Computing." RISC architectures focus on simplicity
and efficiency by using a small set of instructions that can be executed quickly. This allows RISC
processors to perform tasks faster, making them suitable for applications that require high
performance, such as servers and supercomputers.

CISC: CISC stands for "Complex Instruction Set Computing." Unlike RISC, CISC architectures
support a larger and more complex set of instructions. This allows CISC processors to perform
more complex operations in a single instruction, which can be beneficial for certain tasks. CISC
architectures are commonly used in desktop and laptop computers.

Each architecture has its own strengths and weaknesses, and they are designed for specific
purposes. The choice of architecture depends on the intended use of the computer system and
the specific requirements of the applications it will run.

CONCEPT OF MULTI-CORE AND MULTI-THREADED PROCESSORS.
Multi-core processors have multiple independent processing units, called cores, on a single chip.
Each core can execute instructions independently, allowing for parallel processing. This means
that multiple tasks can be performed simultaneously, improving overall performance and
responsiveness.

Multi-threaded processors take advantage of a concept called multithreading. Threads are individual sequences of instructions that can be executed independently. With multi-threading, a processor can execute multiple threads concurrently. This allows for better utilization of the processor's resources and can significantly improve performance, especially in tasks that can be divided into smaller, independent parts.

The impact of multi-core and multi-threaded processors on performance is significant. They enable faster and more efficient execution of tasks by distributing the workload across multiple cores or threads. This can lead to improved multitasking capabilities, smoother performance in resource-intensive applications, and faster execution of parallelizable tasks such as multimedia processing, scientific simulations, and data analysis.
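As a hedged illustration of dividing work into independent parts, the sketch below uses Python's standard concurrent.futures module to spread a simple CPU-bound task across several worker processes; the task, chunk sizes, and worker count are arbitrary examples.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """A CPU-bound task that benefits from running on multiple cores."""
    return sum(
        n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(limit)
    )

if __name__ == "__main__":
    chunks = [200_000] * 4                              # independent pieces of work
    with ProcessPoolExecutor(max_workers=4) as pool:    # e.g. one worker per core
        results = list(pool.map(count_primes, chunks))
    print("primes found per chunk:", results)
```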

In summary, multi-core and multi-threaded processors enhance performance by enabling parallel processing and efficient utilization of resources, ultimately leading to faster and more responsive computing experiences.

Clock speed, cache size, and instruction set architecture are all important factors that
contribute to CPU performance. Let's take a closer look at each of them:

1. Clock Speed: Clock speed, measured in gigahertz (GHz), refers to the number of cycles a CPU
can execute per second. A higher clock speed generally means faster processing. However, it's
important to note that comparing clock speeds across different CPU architectures may not
provide an accurate measure of performance. Other factors like the efficiency of the architecture
and the number of instructions executed per clock cycle also impact overall performance.
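A small worked example of why clock speed alone is not a complete measure: instruction throughput also depends on the average number of instructions executed per cycle (IPC). The figures below are illustrative assumptions, not benchmarks of real CPUs.

```python
def instructions_per_second(clock_ghz: float, ipc: float) -> float:
    """Rough throughput = clock frequency x average instructions per cycle."""
    return clock_ghz * 1e9 * ipc

# Two hypothetical CPUs: the higher-clocked chip is outpaced by the higher-IPC one.
print(f"4.0 GHz, IPC 1.5: {instructions_per_second(4.0, 1.5):.2e} instructions/s")  # 6.00e+09
print(f"3.2 GHz, IPC 2.5: {instructions_per_second(3.2, 2.5):.2e} instructions/s")  # 8.00e+09
```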

2. Cache Size: CPU cache is a small, high-speed memory located on the CPU chip. It stores
frequently accessed data and instructions to reduce the time it takes for the CPU to retrieve
information from the main memory. A larger cache size generally leads to better performance
because it reduces the need to access slower main memory. It improves data access times and
helps keep the CPU busy with useful information.

3. Instruction Set Architecture (ISA): The ISA defines the set of instructions that a CPU can
understand and execute. Different ISAs have different instruction sets, which can impact
performance. Some architectures, like RISC (Reduced Instruction Set Computing), focus on
simplicity and efficiency by using a smaller set of instructions, while others, like CISC (Complex
Instruction Set Computing), support a larger and more complex set of instructions. The efficiency
of the instruction set and how well it matches the requirements of the software being run can
affect overall performance.

It's important to note that CPU performance is influenced by a combination of these factors, as
well as other architectural design choices and optimizations. The best CPU for a specific task
depends on the workload and the specific requirements of the applications being run.

MOTHERBOARD
a. The motherboard is like the central nervous system of a computer. It's a printed circuit board
that connects and holds together all the essential components of a computer system. It provides
a platform for these components to communicate and work together.

b. The components of a motherboard include:

- CPU Socket: This is where the processor (CPU) is installed. It provides the electrical connections
necessary for the CPU to communicate with other components.

- RAM Slots: These slots hold the memory modules (RAM) that provide temporary storage for
data and instructions.

- Expansion Slots: These slots allow you to add additional components, such as graphics cards,
sound cards, or network cards, to expand the capabilities of your computer.

- Power Connectors: These connectors supply power to the motherboard and its components.

- Storage Connectors: These connectors, such as SATA or M.2, allow you to connect storage
devices like hard drives or solid-state drives.

- I/O Ports: These ports, located on the rear panel of the motherboard, provide connections for
peripherals like USB devices, audio devices, and network cables.

c. Motherboard form factors, such as ATX, Micro ATX, and Mini-ITX, refer to the physical size and
layout of the motherboard. The form factor affects the size, number of expansion slots, and
overall design of the computer system. ATX is the most common and offers more expansion slots,
while Micro ATX is smaller and suitable for compact systems. Mini-ITX is even smaller, designed
for ultra-compact systems.

The choice of form factor affects the size of the computer case, the number of components that
can be installed, and the overall system design. Larger form factors may offer more expansion
options but require larger cases, while smaller form factors are more limited in terms of
expansion but allow for compact and portable systems.

d. The chipset is a crucial component on the motherboard that manages the communication
between the CPU, memory, storage, and other peripherals. It provides essential features and
functionality, such as data transfer protocols, USB support, and integrated graphics. The chipset
plays a significant role in system performance and compatibility.

BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface) is firmware
stored on the motherboard that initializes the hardware during startup and provides a basic set
of instructions for the operating system to communicate with the hardware. It controls the boot
process, hardware settings, and system configuration. UEFI is a more modern and advanced
replacement for traditional BIOS, offering additional features and improved user interfaces.
