Vir Sem

Virtualization is a technology that allows multiple virtual machines (VMs) to run on a single physical server, leading to cost savings, improved administration efficiency, and faster deployment. It utilizes a hypervisor to manage resources and can be categorized into various types, including full virtualization, para-virtualization, and OS-level virtualization. The benefits of virtualization include better resource utilization, increased flexibility, high availability, and reduced infrastructure costs.


NEED FOR VIRTUALIZATION

Virtualization is widely used in IT environments such as data centers, cloud computing, and enterprise networks because its benefits help organizations achieve cost savings, simplify administration, enable fast deployment, and reduce infrastructure costs.

Here are some key aspects of how virtualization addresses these needs:

● Cost savings: Virtualization can lead to significant cost savings by optimizing the utilization of
hardware resources. By creating multiple Virtual Machines (VMs) on a single physical server,
organizations can reduce the number of physical servers needed, resulting in lower hardware
acquisition costs, reduced power consumption and decreased data center space requirements.
Additionally, virtualization enables organizations to consolidate their workloads onto a smaller
number of servers, which can reduce operational costs, such as maintenance, licensing and support
for hardware and software.

● Administration efficiency: Virtualization makes it easier to manage IT systems. It gives tools to control all virtual machines (VMs) from one place. This means administrators don't have to manage each physical server one by one. They can quickly create, set up, monitor, and control VMs from a central system. This saves time and effort.

● Fast deployment: Virtualization helps set up new virtual machines (VMs) quickly. This means companies can easily add new resources whenever needed. Unlike physical servers, which take a long time to buy and set up, VMs can be created and ready in minutes. You can use saved templates or snapshots to create VMs fast and with the right settings. This saves time and helps launch new apps or services faster.
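The template-based provisioning described above can be sketched in a few lines of Python. This is only an illustration: the template fields and the `clone_vm` helper are hypothetical, not any real hypervisor's API.

```python
import copy

# A hypothetical VM template: a saved set of default settings
UBUNTU_TEMPLATE = {
    "os": "Ubuntu 22.04",
    "cpu_cores": 2,
    "ram_gb": 4,
    "disk_gb": 40,
}

def clone_vm(template, name, **overrides):
    """Create a new VM definition from a template in one step,
    instead of configuring a physical server from scratch."""
    vm = copy.deepcopy(template)   # independent copy of the template
    vm["name"] = name
    vm.update(overrides)           # adjust settings for this VM
    return vm

web_vm = clone_vm(UBUNTU_TEMPLATE, "web-01", ram_gb=8)
db_vm = clone_vm(UBUNTU_TEMPLATE, "db-01", disk_gb=200)
```

Because each clone is an independent copy, adjusting one VM's settings never changes the template or the other VMs, which is what makes template-based deployment both fast and safe.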

● Infrastructure cost reduction: Virtualization can help organizations reduce their infrastructure
costs by optimizing the utilization of existing hardware resources. By consolidating workloads onto
a smaller number of physical servers, organizations can reduce their hardware procurement costs
and lower ongoing operational costs, such as power consumption and data center space.
Virtualization also enables organizations to dynamically allocate and reallocate resources based on
demand, which can help optimize resource utilization and reduce waste, leading to further cost
savings.

● Increased flexibility: Virtualization provides flexibility in terms of resource allocation and usage. Organizations can allocate resources, such as CPU, memory, storage and networking, to VMs based on their requirements and easily adjust these allocations as needed. This enables organizations to dynamically scale resources up or down based on workload demands, providing flexibility and agility in adapting to changing business needs.

● Resource Utilization: Virtualization enables better utilization of physical hardware resources. By partitioning physical servers into multiple virtual machines (VMs), each VM can run its own operating system and applications, making more efficient use of available CPU, memory, and storage resources.

● High Availability: Virtualization platforms often include features for high availability, such as
live migration and automatic failover. These features ensure that VMs can be moved between
physical hosts with minimal downtime, maximizing uptime for critical applications.

HARDWARE VIRTUALIZATION
Hardware virtualization is the process of creating virtual versions of physical hardware like servers,
storage devices, or networks. Hardware virtualization (also called server virtualization) is a method
of running multiple virtual computers on a single physical machine at the same time.

Each virtual machine (VM) works like a real computer, with its own operating system and
applications. This makes it look like each VM is using its own hardware, even though they all share
the same physical machine.

To make this work, special software called a hypervisor (also known as a Virtual Machine Monitor
or VMM) is used. The hypervisor runs on the real computer and helps to create and manage VMs. It
handles the tasks of assigning CPU, memory, storage, and network to each VM while keeping them
separate from each other.

How hardware virtualization works

 Hardware virtualization installs a hypervisor or virtual machine manager (VMM), which creates an abstraction layer between the software and the underlying hardware. Once a hypervisor is in place, software relies on virtual representations of the computing components, such as virtual processors rather than physical processors. Popular hypervisors include VMware's vSphere, based on ESXi, and Microsoft's Hyper-V.

Benefits of Hardware Virtualization:

 Better use of hardware resources
 More flexible and scalable systems
 Lower costs
 Easier recovery from disasters
 Helpful for testing and developing software
 More secure and supports older software

Types of Hardware Virtualization

1. Full Virtualization

In full virtualization, the hypervisor (a special software) acts like real hardware. It allows many
different operating systems (called guest OS) to run on the same physical machine.

Each guest OS thinks it is running on its own hardware. The hypervisor manages and translates all
the hardware instructions for the guest OS. There is no need to change the guest OS or its
applications. This type of virtualization helps in running different operating systems on the same
server, like Windows and Linux side-by-side.
2. Partial Virtualization (OS-level or Containerization)

Partial virtualization, also called OS-level virtualization or containerization, uses containers to share one OS kernel. Each container acts like a separate system with its own files and processes. All containers run on the same physical host but share the same OS core. This makes it faster and lighter than full virtualization. It works best when all applications can run on the same operating system.

3. Paravirtualization

Paravirtualization is a virtualization method where the guest operating system is modified to know
it's running in a virtual environment. It communicates with the hypervisor using special APIs for
better speed and efficiency. This makes it faster than full virtualization. However, it requires changes to the guest OS, so not every operating system can be used. It is best suited for situations where performance matters more than flexibility.
TYPES OF HYPERVISOR

A hypervisor, also called a Virtual Machine Monitor (VMM), is a special type of software that
helps run virtual machines (VMs) on a physical computer (host system). It allows many virtual
computers to share the same physical resources like CPU, memory, storage, and network.

The hypervisor acts as a middle layer between the real hardware and the virtual machines. It hides
the physical details and gives each VM its own environment, as if it were running on its own
separate computer.

Main Functions of a Hypervisor:

1. Creates Virtual Machines (VMs):
It allows you to create many VMs on one physical machine.
2. Keeps VMs Separate:
VMs do not interfere with each other. They run independently even though they share the
same physical resources.
3. Shares Hardware Efficiently:
It divides and manages hardware like CPU, RAM, and storage among all the VMs fairly and
efficiently.
4. Handles Important Tasks:
It manages CPU time, memory allocation, disk usage, and input/output (I/O) operations like
reading or writing data.
5. Allows Different Operating Systems:
You can run different OS like Windows, Linux, etc., on the same physical machine at the
same time.
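The resource-sharing and isolation duties listed above can be sketched as a toy Python class. This is a minimal model for illustration only; real hypervisors such as VMware ESXi or Hyper-V do far more (scheduling, memory paging, device emulation), and the class and method names here are invented.

```python
class ToyHypervisor:
    """A toy model of a hypervisor's resource bookkeeping:
    it divides a host's CPU and RAM among VMs and keeps
    each VM's allocation separate from the others."""

    def __init__(self, total_cpu, total_ram_gb):
        self.free_cpu = total_cpu
        self.free_ram = total_ram_gb
        self.vms = {}   # each VM gets its own isolated record

    def create_vm(self, name, cpu, ram_gb):
        # refuse the request if the host lacks free resources
        if cpu > self.free_cpu or ram_gb > self.free_ram:
            raise RuntimeError("not enough free resources")
        self.free_cpu -= cpu
        self.free_ram -= ram_gb
        self.vms[name] = {"cpu": cpu, "ram_gb": ram_gb}

    def destroy_vm(self, name):
        vm = self.vms.pop(name)   # removing one VM never touches the others
        self.free_cpu += vm["cpu"]
        self.free_ram += vm["ram_gb"]

host = ToyHypervisor(total_cpu=8, total_ram_gb=32)
host.create_vm("linux-vm", cpu=4, ram_gb=16)    # e.g. a Linux guest
host.create_vm("windows-vm", cpu=2, ram_gb=8)   # and a Windows guest
```

Note how the two guests with different operating systems coexist on one host, each drawing from the same pool of hardware while remaining separate entries in the hypervisor's bookkeeping.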

Type 1 hypervisors
A Type 1 hypervisor runs directly on a computer’s physical hardware, so it is also called a bare-
metal hypervisor. It does not need to run on top of any operating system. Because it has direct
access to the hardware, it works faster and is more efficient than other types. That’s why it is
commonly used in big companies and data centers. Type 1 hypervisors are sometimes called virtual operating systems because they manage everything needed to run virtual machines. They
are also more secure, because each virtual machine runs its own separate operating system. So, if
one VM is attacked, the others are safe and not affected.
Type 2 hypervisors
A Type 2 hypervisor is installed on top of a regular operating system (OS) like Windows or
Linux. That’s why it is also called a hosted hypervisor. It depends on the host OS to manage
system resources like CPU, memory, storage, and network. Type 2 hypervisors were first used in
early computer systems when virtualization was added to existing operating systems. Even though
Type 1 and Type 2 hypervisors do the same job (running virtual machines), Type 2 hypervisors are
usually slower because everything must go through the host OS first. Also, if there is a security
problem in the host OS, it can affect all the virtual machines running on it.

Types of Virtual Machines


A virtual machine (VM) is a computer system emulation. VM software replaces physical computing infrastructure/hardware with software to provide an environment for deploying applications and performing other app-related tasks. A VM is created and managed by a hypervisor, which allows many VMs to run on a single physical machine. Each VM is isolated, so it works independently from others, even though they share the same hardware.
Example:
You can run Windows inside a virtual machine on a Mac, or Linux on a Windows computer,
without needing another physical device.

1) SYSTEM VIRTUAL MACHINE

A System Virtual Machine (System VM) is a software-based version of a real computer. It allows a complete operating system (OS)—such as Windows, Linux, or macOS—to run inside another computer system, using shared hardware.
In simple words, a system virtual machine lets you create a “computer within a computer.” It looks
and behaves just like a real machine, with its own CPU, memory, storage, and network access—but
all of this is virtual, managed by special software.

For example, on your Windows laptop, you can install a system virtual machine and run Ubuntu
Linux in a separate window as if you have two computers in one.

How Does a System Virtual Machine Work?

A system virtual machine works with the help of a hypervisor, which is the main software that
creates and manages virtual machines.

1. Hypervisor Installation:
A hypervisor is first installed either directly on the computer’s hardware (Type 1
hypervisor) or on top of an existing operating system (Type 2 hypervisor).
Examples: VMware ESXi (Type 1), VirtualBox and VMware Workstation (Type 2).
2. Creating the Virtual Machine:
Using the hypervisor, the user creates a new virtual machine. During this process, the user
decides how much CPU, RAM, storage, and network access the VM should get from the
real machine (host).
3. Installing the Guest Operating System:
After creating the VM, the user installs an operating system (called the guest OS) inside
the VM, just like installing it on a real computer.
4. Running the VM:
Once installed, the guest OS starts working in its own separate window. The VM can now
be used to install apps, browse the internet, and perform tasks just like a real computer.
5. Resource Management by the Hypervisor:
The hypervisor is constantly working in the background. It allocates hardware resources
to each VM as needed, prevents conflicts, and isolates each VM to ensure that they don’t
interfere with each other or with the host system.
6. Isolation and Security:
A key feature of system VMs is isolation. Even if something goes wrong in one virtual
machine (like a virus or crash), it won’t affect the host computer or other VMs.
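The lifecycle in steps 2 through 4 above can be sketched as a tiny state machine in Python. The states and method names are illustrative only and do not correspond to any real hypervisor's interface.

```python
class SystemVM:
    """A minimal sketch of a system VM's lifecycle:
    create it, install a guest OS, then power it on."""

    def __init__(self, name, cpu, ram_gb):       # step 2: create the VM
        self.name, self.cpu, self.ram_gb = name, cpu, ram_gb
        self.state = "created"
        self.guest_os = None

    def install_guest_os(self, os_name):         # step 3: install the guest OS
        self.guest_os = os_name
        self.state = "installed"

    def power_on(self):                          # step 4: run the VM
        if self.guest_os is None:
            raise RuntimeError("no guest OS installed")
        self.state = "running"

vm = SystemVM("ubuntu-vm", cpu=2, ram_gb=4)
vm.install_guest_os("Ubuntu Linux")
vm.power_on()
```

The guard in `power_on` mirrors reality: a VM with no guest OS has nothing to boot, so the install step must come before the run step.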

2) PROCESS VIRTUAL MACHINE

A Process Virtual Machine (Process VM) is a type of virtual machine that is designed to run a
single program or application, rather than a full operating system. It creates a temporary virtual
environment for that program to run in, and once the program stops running, the process VM is
destroyed.

In simple words, a process VM helps a program run smoothly on different systems by creating a
small virtual space just for that program. It does not include a full operating system, unlike a
system VM. It’s fast, lightweight, and mainly used for running specific software on multiple
platforms.

How Does a Process Virtual Machine Work?

1. Program Starts:
When a user or system starts a particular program (like Java or Python), the process VM is
created automatically.
2. Creates a Virtual Environment:
The process VM sets up a temporary virtual space where the program can run. It acts like
a mini-computer that supports only that one process or application.
3. Runs Independently:
Inside this environment, the program runs independently of the host system. This means
the program can run the same way on different operating systems (like Windows, macOS,
Linux), because the process VM hides the differences.
4. Managed by a Virtual Engine:
The process VM is usually managed by a software engine like the Java Virtual Machine
(JVM) or .NET CLR. These engines translate the program's instructions into something the
host machine can understand.
5. Ends When the Program Ends:
Once the program finishes running or is closed, the process virtual machine is also
destroyed. It does not stay active like a system VM.
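Python itself is a good concrete example: CPython is a process VM that executes portable bytecode rather than native machine code. The standard-library `dis` module lets you inspect that bytecode, which is the same regardless of whether the host OS is Windows, macOS, or Linux:

```python
import dis

def add(a, b):
    return a + b

# The source is compiled to bytecode for the Python process VM,
# not to native machine code, which is why the same .py file
# runs unchanged on different operating systems.
opnames = [ins.opname for ins in dis.get_instructions(add)]
```

Exact opcodes vary between Python versions, but the listing always ends with an instruction that returns the result to the caller, executed by the interpreter's virtual engine rather than by the CPU directly.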

Types of Server Virtualization

Server virtualization is a smart technology that allows one physical server (a real computer) to be
split into many virtual servers, also called Virtual Machines (VMs). Each of these VMs works
like its own separate computer with its own operating system, software, and resources like
CPU, memory, and storage.

Even though all these virtual machines are running on the same physical hardware, they work
independently from one another — like having multiple small computers inside one big computer.

This is made possible by a special software called a Hypervisor or Virtual Machine Monitor (VMM). The hypervisor acts like a manager — it hides the physical hardware and helps create and control these virtual machines.
1. Full Virtualization

Full virtualization uses a hypervisor to emulate the entire hardware system, including the CPU, memory, and storage. It allows multiple virtual machines (VMs) to run on one physical server.
Each VM acts like a separate computer with its own operating system. No need to change the guest
operating system. It gives strong isolation and is flexible but may use more resources.
Examples include VMware ESXi, Hyper-V, and KVM.

2. Para-Virtualization

Para-virtualization also uses a hypervisor, but the guest OS is slightly modified to work better with
it. This helps improve speed and performance compared to full virtualization. The guest OS knows
it is running in a virtual environment. It requires changes to the OS, so it is less flexible. Best used
when performance is more important than compatibility.
Xen is a popular hypervisor that uses this method.

3. OS-Level Virtualization (Operating System Virtualization)

This type of virtualization runs containers instead of full VMs. All containers share the same
operating system kernel. Each container works like an isolated app with its own file system and
settings. It uses less memory and starts up faster than VMs. Great for running many lightweight
apps on one system. Docker is a famous tool for this type.

4. Hardware-Assisted Virtualization

This type uses features built into modern CPUs to support virtualization. Examples are Intel VT-x
and AMD-V. It helps the hypervisor manage virtual machines more efficiently. Improves
performance and security by using direct hardware support. It reduces the workload of the software
layer (hypervisor). Mostly used in enterprise-level virtualization platforms.
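On Linux, the CPU flags exposed in `/proc/cpuinfo` reveal whether the processor advertises these extensions (`vmx` for Intel VT-x, `svm` for AMD-V). A hedged sketch, Linux-only, which fails softly on other systems:

```python
def hw_virt_support():
    """Return 'Intel VT-x', 'AMD-V', or None based on /proc/cpuinfo.
    On non-Linux systems, or inside some VMs/containers where the
    flags are hidden, the file may be missing, so we return None."""
    try:
        with open("/proc/cpuinfo") as f:
            flags = f.read()
    except OSError:
        return None
    if " vmx" in flags:     # Intel VT-x flag
        return "Intel VT-x"
    if " svm" in flags:     # AMD-V flag
        return "AMD-V"
    return None

support = hw_virt_support()
```

A result of `None` does not always mean the hardware lacks support; the feature may simply be disabled in firmware or masked by a hypervisor, so this check is a first diagnostic rather than a definitive answer.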

5. Network Virtualization

Network virtualization abstracts physical network devices, like switches and routers, into virtual ones. It allows admins to control the network more easily and flexibly. Virtual networks can be
split, grouped, or moved as needed. It improves security, performance, and efficiency of data flow.
Often used in data centers and cloud platforms.
Common in Software-Defined Networking (SDN) systems.

6. Storage Virtualization

Storage virtualization combines many physical storage devices into one virtual storage system. It
helps in managing data more easily and efficiently. Storage can be shared across different virtual
machines or systems. It improves storage performance, backup, and disaster recovery. Admins can
move or expand storage without stopping operations.
Used in big companies and data centers for better control of storage resources.

Understanding Server Virtualization


Architecture of Server Virtualization

1. Physical Host Server

This is the actual, real computer or machine that runs the virtual machines (VMs).
It has all the necessary hardware like the CPU, memory (RAM), hard disk (storage), and network
ports. The physical server acts as the base where all virtual servers work.
It gives the computing power and resources needed to run the virtual environments. Without this
physical server, virtual machines cannot function. It’s like one big computer that runs many small
computers inside it.

2. Hypervisor / Virtual Machine Monitor (VMM)

This is special software that runs on the physical server. Its job is to create, control, and manage the
virtual machines. The hypervisor makes a layer between the physical hardware and the virtual
machines. It allows each VM to use only the amount of CPU, memory, and storage it needs. It also
keeps the VMs separate, so they don’t interfere with each other.
Without a hypervisor, virtualization wouldn’t be possible.

3. Virtual Machines (VMs)

Virtual machines are like separate mini-computers created inside the physical server.
Each VM works like a real computer, with its own operating system, software, and settings.
Even though they share the same physical machine, they operate independently. One VM can run
Windows, another Linux, without affecting each other. You can add, remove, or change them based
on your needs. They are great for saving space, cost, and power compared to using many physical
servers.

4. Management Console

This is the user interface or control panel used to manage everything. It helps administrators to
create, monitor, and control virtual machines. You can start, stop, delete, or move VMs using this
tool. It also helps in handling system resources like CPU, RAM, storage, and networks. It makes it
easier to manage the virtual environment from one place.
It’s like a remote that helps you control and manage all your virtual computers.

Uses of Server Virtualization

1) Better Use of Hardware

Server virtualization allows one physical server to run multiple virtual machines (VMs), each acting
like a separate computer. This helps in using the server’s CPU, memory, and storage more
effectively. Instead of leaving parts of the hardware idle, all resources are shared and put to good
use. This leads to higher efficiency and better performance.

2) Cost Saving

With server virtualization, you don’t need to buy many physical servers because one server can do
the work of many. This means you spend less money on hardware. You also save on electricity and
cooling costs because fewer machines are running. It also reduces the space needed in data centers.

3) Faster Deployment

Virtualization allows you to create new virtual servers quickly using ready-made templates. You
don’t have to wait for new hardware or do long setups like with physical servers. This makes it easy
to launch new applications or test software faster. It helps save time and speeds up work.

4) Isolation and Security

Each virtual machine (VM) runs on its own, like a separate computer. If one VM gets a virus or
crashes, the problem stays inside that VM only. The other VMs on the same server are safe and
continue working normally. This keeps the system more secure and stable.

Desktop Virtualization

Desktop Virtualization is a technology that allows a user’s desktop environment (including operating system, files, and applications) to run on a remote server instead of a personal computer. The user can access this desktop from any device like a laptop, tablet, or smartphone using an internet connection.

This means the desktop is not tied to one specific computer – it can be used from anywhere,
anytime. It helps keep data safe, makes it easy to manage many desktops, and supports remote
working. The user sees and controls the desktop as if it were running on their own computer, even
though it's hosted somewhere else.
Architecture of Desktop Virtualization

1. Client Device (User’s Device):

This is the device the user uses to connect to their virtual desktop. It can be a laptop, desktop, tablet,
or even a mobile phone. The device does not do the actual processing – it just shows the desktop
screen and sends user inputs.

2. Connection Broker:

This is the middleman that connects the user to the right virtual desktop. It checks the user’s login
details and sends them to their personal desktop. It also manages sessions and tracks which user is
using which desktop.
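The broker's three duties (check credentials, route the user to their desktop, track the session) can be sketched as a toy lookup in Python. The user records, passwords, and desktop names are all made up for illustration.

```python
# A toy connection broker: authenticate a user, then hand back
# the address of that user's assigned virtual desktop.
USERS = {"alice": "secret123", "bob": "hunter2"}
ASSIGNED_DESKTOP = {"alice": "vdi-host-01/vm-17", "bob": "vdi-host-02/vm-04"}
ACTIVE_SESSIONS = {}   # tracks which user is on which desktop

def connect(user, password):
    if USERS.get(user) != password:      # check the login details
        raise PermissionError("bad credentials")
    desktop = ASSIGNED_DESKTOP[user]     # route to the personal desktop
    ACTIVE_SESSIONS[user] = desktop      # record the session
    return desktop

desktop = connect("alice", "secret123")
```

A real broker would of course use proper credential hashing and session tokens; the point here is only the mapping from authenticated user to assigned desktop.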

3. Virtual Desktop Infrastructure (VDI) Server / Virtual Machines:

These are servers that store and run all the virtual desktops. Each user gets their own virtual
machine (VM) that works like a personal computer. The desktop OS, apps, and user settings are all
stored here, not on the user's device.

4. Host Server:

It runs a special software called a hypervisor that helps create and manage many virtual machines
(VMs) on the same physical server. The host server provides CPU, memory, storage, and
network access to all the virtual desktops. Even if many users are connected, the host server
handles all of them at once using its strong hardware.

TYPES OF DESKTOP VIRTUALIZATION

1. Virtual Desktop Infrastructure (VDI)

VDI is a system where a company sets up virtual desktops on its own servers in a data center. Each
user gets a separate virtual machine that looks and works like a personal computer. These virtual
desktops are stored and managed centrally, and users can connect to them using the internet from
anywhere. VDI offers high customization and strong security since all data stays on the server,
not on the user’s device. However, it needs a good setup and maintenance team, as the company
owns and runs the infrastructure.

2. Remote Desktop Services (RDS)

RDS, also called session-based virtualization, allows many users to share the same server and
access a shared Windows environment. Instead of having separate virtual machines, all users work
on the same system, but with their own sessions. It’s a cheaper and simpler option than VDI, as it
uses fewer resources. However, users have limited control over settings, and performance may
drop if too many users are active at once. It’s ideal for training centers, schools, or businesses
with common software needs.

3. Desktop as a Service (DaaS)

DaaS is a cloud-based solution where a third-party provider (like Amazon, Microsoft, or Google)
hosts the virtual desktops. Users connect to their desktops using the internet, and the provider
handles security, updates, and server management. It’s very flexible — businesses can scale up
or down easily — and there's no need to buy or maintain servers. DaaS is great for small and
medium businesses or remote teams who want simple setup and access from anywhere without
worrying about backend hardware.

Storage Virtualization

Storage virtualization is a smart and helpful technology that takes many different physical storage
devices — such as hard drives, SSDs, or storage servers — and combines them into one big
virtual storage space. These devices might be different in size, speed, or even made by different
companies, but virtualization makes them work together as if they are one single system.

This means that users and applications don’t have to worry about which physical device stores their
files. Instead, they see one large storage space where they can save and access their data easily. A
special software program called a virtualization layer or storage manager takes care of all the
hard work.

The best part is that this system is flexible and easy to manage. If you need more space, you can
simply add more storage devices, and the virtualization software will include them in the system
automatically. Storage virtualization hides the complexity of multiple devices and turns them into
one easy, organized, and powerful storage solution. This helps companies save time, improve
performance, manage storage better, and protect their important data.

TYPES OF STORAGE VIRTUALIZATION:

1) Block Storage Virtualization

Block storage breaks data into small units called blocks. These blocks can be stored in different
physical storage devices. The virtualization software hides the actual location of these blocks and
presents them as one large storage. This allows fast and flexible data access. It is often used in
databases and high-performance applications. It works like taking pages from different bookshelves
but reading them together as one book.
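The hiding of block locations can be sketched as a simple mapping table in Python. The device names and offsets are invented; a real virtualization layer maintains far larger maps and issues actual I/O.

```python
# A toy block map: the virtualization layer presents one large
# logical disk while the blocks actually live on different
# physical devices.
BLOCK_MAP = {
    0: ("disk-A", 512),   # logical block 0 -> (device, physical block)
    1: ("disk-B", 7),
    2: ("disk-A", 513),
    3: ("disk-C", 42),
}

def read_block(logical_block):
    device, physical_block = BLOCK_MAP[logical_block]
    # a real system would now issue an I/O request to that device
    return f"data from {device}, block {physical_block}"

result = read_block(1)
```

Applications only ever see the logical block numbers on the left, which is exactly the "pages from different bookshelves read as one book" idea in the paragraph above.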

2) File Storage Virtualization


File storage virtualization manages data as files and folders, like how we save documents on a
computer. It collects files from different servers and shows them as a single, organized system. This
makes it easier to access, store, and manage files without knowing their actual location. It is very
useful in shared office networks. Users just see one virtual file system, even though the files are
stored in different places.

3) Object Storage Virtualization

Object storage handles data as individual objects, each with its own ID and details (metadata).
Object storage virtualization combines all of these objects into one big virtual pool, even if they are
stored in different locations. It is very good for storing photos, videos, and backups. It’s mainly
used in cloud storage systems. You just search for an object’s ID to find it instantly, like finding a
labeled box in a huge warehouse.
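The ID-plus-metadata model can be sketched with a Python dictionary. Deriving the ID from the content with a hash is one common design choice; the store layout here is illustrative, not any real cloud API.

```python
import hashlib

# A toy object store: each object is saved under a unique ID
# together with its metadata, and retrieved instantly by that ID.
STORE = {}

def put_object(data: bytes, **metadata):
    obj_id = hashlib.sha256(data).hexdigest()   # content-derived unique ID
    STORE[obj_id] = {"data": data, "metadata": metadata}
    return obj_id

def get_object(obj_id):
    return STORE[obj_id]   # direct lookup, wherever the object "lives"

photo_id = put_object(b"...jpeg bytes...", kind="photo", owner="alice")
obj = get_object(photo_id)
```

Because the caller only ever handles the ID, the store is free to place the underlying bytes on any device or site, which is what makes object storage a natural fit for distributed cloud systems.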

4) Unified Storage Virtualization

Unified storage combines different types of storage — block, file, and object — into one system.
This makes it easy to manage all kinds of data in one place. It helps reduce the cost of having
separate systems for different storage types. IT teams can manage everything from one interface. It
is ideal for businesses that need flexibility. Think of it like a smart cupboard that can hold books,
folders, and boxes all in one.

5) Host-Based Storage Virtualization

In this type, the computer itself handles virtualization through installed software. It turns one
physical drive into multiple virtual drives. This gives flexibility and is useful for small businesses or
personal computers. No external hardware is needed. However, it may slow down performance if
the computer is overloaded. It’s like your own laptop managing different virtual folders from the
same hard disk.

6) Network-Based Storage Virtualization

This type uses a central device or server to manage storage across the network. It brings together
storage from many physical devices and shows it as one big virtual system. This makes it easier for
companies to manage and share storage. It is secure and reduces downtime. It is commonly used in
large businesses and data centers. It’s like a shared company locker that’s managed by a smart
system.

STORAGE AREA NETWORK


A Storage Area Network (SAN) is a special, high-speed network that connects storage devices
like hard drives and SSDs to computers or servers. Instead of keeping storage inside the server
itself, SAN moves the storage outside and connects it using a network, so multiple servers can
access the same storage space.

It works like a private road system built only for storage traffic. SAN allows data to move quickly
and safely between servers and storage without using the normal company network, which keeps
performance fast and efficient.

SAN is especially useful for large companies, data centers, or cloud systems where a lot of data is
stored and needs to be accessed quickly. It helps in tasks like backups, large file transfers, database
access, and virtual machine storage.

SAN makes it easy to manage big storage systems, improves speed, and ensures that servers always have access to the data they need, all in a secure and organized way.

Main components of a Storage Area Network (SAN):

1. Servers (Hosts) - These are the computers that request and use data from the storage system.
They connect to the SAN to read and write data.

2. Storage Devices - These include hard drives, SSDs, or disk arrays that store all the data. They
are connected to the SAN and shared among servers.

3. SAN Switches - These devices connect servers to storage devices through fibre or high-speed
Ethernet. They manage traffic to make sure data moves quickly and securely.

4. Host Bus Adapters (HBAs) - HBAs are special cards inside servers that allow them to
connect to the SAN. They send and receive data between the server and storage.

5. Cables - High-speed fibre or Ethernet cables physically connect all parts of the SAN. They
carry the data from one component to another.
6. SAN Management Software - This software helps monitor, configure, and control the SAN.
It ensures everything runs smoothly and efficiently.

How SAN Works?

In a Storage Area Network (SAN), servers connect to shared storage devices through a high-speed
network. When a server wants to access or save data, it sends a request using a Host Bus Adapter
(HBA). This request is passed through SAN switches that direct it to the right storage device. The
data then travels back and forth over fast fiber or Ethernet cables. This setup keeps the main
network free and allows multiple servers to use the same storage safely and efficiently.

ADVANTAGES:

Improved Storage Utilization


Simplified Storage Management
Increased Scalability

Enhanced Data Availability

Reduced Downtime

CHALLENGES:

Cost: SAN setup is expensive due to high-end hardware, cables, and specialized equipment.

Security: SANs can be vulnerable to unauthorized access if proper security controls are not in
place.

Performance: If not properly designed, performance may drop due to high traffic or
misconfigurations.

Complexity: Managing and configuring a SAN requires skilled professionals and can be
complicated.

Network Attached Storage

Network Attached Storage (NAS) is a special storage device that connects to a network, so many
people and computers can use it at the same time. It works like a small file server, always
connected, that lets you save, share, or back up your files easily from any device on the same
network.

Unlike normal USB hard drives that only work with one computer, NAS connects to your Wi-Fi or
local network. This means laptops, desktops, or even smart TVs can all access the NAS together. It
has its own small software that helps manage files, users, and security.

People use NAS at home to store things like photos, videos, and movies. In offices, teams use it to
work on shared files or to back up important data. NAS is simple to set up, affordable, and easy to
use, even without much technical knowledge. If it's connected to the internet, you can even use it
from anywhere.

Components of NAS
 Storage Drives – Store all your files like photos, videos, and documents.

 NAS Operating System – Manages storage, users, and file sharing.

 Network Interface – Connects the NAS to your network via Ethernet or Wi-Fi.

 Processor (CPU) – Runs the NAS software and handles tasks.

 Memory (RAM) – Helps the NAS work faster and handle multiple tasks.

 RAID Controller – Protects data using multiple drives for safety and performance.

 Cooling Fan – Keeps the NAS device cool and prevents overheating.

Working of NAS

NAS works like a mini computer that stores files and shares them over a network. It is connected to
your home or office network through a cable (Ethernet) or Wi-Fi. When a user wants to save or
open a file, their device sends a request to the NAS. The NAS receives the request, finds the file on
its storage drives, and sends it back to the user’s device. Its small built-in operating system handles
file sharing, user access, and security. Multiple users can access the NAS at the same time without
needing a separate computer. It’s simple, fast, and always available on the network.

BENEFITS:

Easy Management

Cost Effective

High Availability

Scalability

File Sharing

Data Protection

CHALLENGES:

 Limited Performance – NAS can become slow if too many users access it at the same time.

 Network Dependency – It only works well if the network is stable and fast.

 Storage Limitations – Once storage is full, you need to add or upgrade drives, which can be
costly.

 Security Risks – If not set up properly, it can be exposed to hackers or data breaches.

 Scalability Issues – It may not be suitable for very large businesses with massive data needs.

 Single Point of Failure – If the NAS device fails, access to all stored data is lost unless backups
exist.
What is NAS used for?

 File Sharing
NAS lets multiple users access and share files over the same network. Everyone connected can
open, edit, or save files from the same storage.

 Data Backup
It automatically saves copies of important files from your computers or phones. This protects your
data in case of a system crash or accidental deletion.

 Media Streaming
You can store music, videos, and photos on NAS and play them on smart TVs, phones, or
computers without moving files around.

 Remote Access
NAS allows you to access your files from anywhere using the internet. It’s helpful for working from
home or while traveling.

 Centralized Storage
Instead of having files scattered on different devices, NAS keeps everything in one place. This
makes managing and organizing data much easier.

RAID

RAID stands for Redundant Array of Independent Disks. It is a technology used in computers
and servers to connect and manage multiple hard drives together as a single logical unit. Instead
of storing data on just one hard disk, RAID spreads the data across several disks in a way that
improves both performance and data safety.

The main purpose of RAID is to offer faster data access and greater reliability. This means it
helps in reading and writing data quickly, and also ensures that your data is not lost even if one of
the disks stops working. RAID achieves this by using different techniques like data mirroring
(copying data to more than one disk) and data striping (splitting data into chunks across disks).

RAID can be managed through hardware (using a RAID controller) or software (through the
operating system). Depending on how it is set up, RAID can provide different levels of
redundancy (backup copies) and fault tolerance (ability to handle disk failures).
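The two techniques just mentioned, striping and mirroring, can be sketched in a few lines of Python. This is an illustrative model only: plain byte arrays stand in for disks, and the helper names are made up, not a real RAID driver.

```python
def stripe(data: bytes, num_disks: int, chunk: int = 4):
    """Data striping (the RAID 0 idea): deal fixed-size chunks round-robin across disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % num_disks] += data[i:i + chunk]
    return disks

def mirror(data: bytes, num_disks: int):
    """Data mirroring (the RAID 1 idea): every disk holds a full copy of the data."""
    return [bytearray(data) for _ in range(num_disks)]

# 12 bytes striped over 3 "disks" -> 4 bytes land on each disk
disks = stripe(b"ABCDEFGHIJKL", 3)   # b"ABCD", b"EFGH", b"IJKL"
```

With striping, losing one disk loses part of the data; with mirroring, any single surviving copy is enough to recover everything. That is exactly the speed-versus-safety trade-off described above.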
RAID 0 (Striping)

RAID 0 uses data striping only: data is split into blocks and spread across all drives for maximum
speed and full use of capacity, but there is no redundancy, so a single drive failure loses data.

Advantages:

Faster Speed
Better Performance
Full Storage Use
Simple Setup
Low Cost

Limitations:

No Data Protection
No Backup
High Risk
One Drive Affects All
Not Good for Important Data

RAID 1 (Mirroring)

RAID 1 uses data mirroring: the same data is written to two (or more) drives, so the array survives
a drive failure at the cost of usable capacity.

Advantages:

Easy Recovery
Simple Setup
Good for Important Data
Improved Read Speed

Limitations:

Higher Cost
Half Storage Usage
Limited Scalability
Risk if Both Drives Fail

RAID 5

RAID 5 is a type of RAID that combines speed and data protection. It uses a method called
block-level striping with distributed parity, which means the data is divided into small parts
(blocks) and spread across multiple hard drives. Along with the actual data, extra information
called parity is also stored on the drives. This parity data is not kept on a single disk, but is evenly
distributed across all the drives in the array.

The main purpose of the parity is to help recover lost data if one of the drives fails. So, if any
single disk stops working, the system can use the remaining data and the parity information to
rebuild the missing files. This makes RAID 5 a reliable option that offers high performance for
reading data, while also giving fault tolerance, meaning your data is still safe even if one drive
crashes.
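The recovery trick is plain XOR arithmetic: the parity block is the XOR of the data blocks, so XOR-ing the surviving blocks with the parity reproduces whichever single block was lost. A minimal sketch (two-byte blocks for brevity; the helper name is illustrative):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized blocks byte by byte (the RAID 5 parity operation)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data_blocks = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data_blocks)          # stored spread across the drives

# Suppose the drive holding data_blocks[1] fails: rebuild it from the rest plus parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
```

This is also why RAID 5 writes are slower than reads: every write must update the parity as well as the data.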

Advantages:

Data Protection
Better Storage Efficiency
Good Read Performance
Fault Tolerance
Supports Multiple Drives

Limitations:

Slower Write Performance
Complex Setup
Long Rebuild Time
Risk if Two Drives Fail
Needs Minimum Three Drives
RAID 6

RAID 6 works like RAID 5 but stores two independent sets of parity, so the array can keep
running even if two drives fail at the same time.

Advantages:

High Data Protection
Can Survive Two Drive Failures
Good Read Performance
Reliable for Large Storage
Useful for Critical Data

Limitations:

Slower Write Speed
More Complex Setup
Requires More Storage for Parity
Slow Rebuild Process
Needs Minimum Four Drives
RAID 10 (RAID 1+0)

RAID 10 combines mirroring and striping: data is striped across mirrored pairs of drives, giving
both speed and strong protection at the cost of half the total capacity.

Advantages:

High Data Protection
Fast Read and Write Speed
Quick Recovery
Good Performance Under Load
Supports Critical Applications

Limitations:

Uses Half the Total Storage
Needs Minimum Four Drives
Expensive Due to More Drives
Not Space-Efficient
Limited Scalability

VMware

VMware is a software tool that allows you to create and use Virtual Machines (VMs) on a single
physical server. These virtual machines act like real computers, each with its own operating
system, CPU, memory, and storage. The key benefit is that VMware shares the physical
machine's resources between many virtual machines, which makes work faster, easier to manage,
and more cost-effective.

Instead of using one computer for each task, VMware lets you run multiple tasks on one machine
by creating separate virtual environments. This is helpful for testing, development, backup, or
running multiple systems without needing more hardware.

Important Tools and Features of VMware

VMware offers many tools to make virtualization easy and efficient. Here are the key ones in
simple terms:

Hypervisor: This is the software layer that sits between the physical server and the virtual
machines. It allows the hardware to run multiple virtual machines at the same time.

Virtual Machines: These are software-based computers created inside the physical server.
Each virtual machine runs independently with its own OS and apps.

Virtual Machine Management: VMware lets admins easily start, stop, or move virtual
machines between servers using tools like vCenter.

Networking: VMware includes virtual switches, VLANs, and load balancing so that the
virtual machines can communicate efficiently.

Resource Allocation: It allows admins to divide CPU, memory, and storage among virtual
machines based on their needs.
Live Migration: You can move a running virtual machine from one physical server to
another without shutting it down.

Security: VMware provides features like encryption, firewalls, and access control to
protect virtual environments.

Backup and Disaster Recovery: It helps to create and manage backup plans so data is safe
even if something goes wrong.

Working of VMware Virtualization Tool

1. Hypervisor Installation
The first step is to install a hypervisor on the physical server. This software layer acts as a bridge
between the server’s hardware and the virtual machines. It allows multiple virtual machines to run
on the same server at the same time.

2. Creating Virtual Machines


After installing the hypervisor, virtual machines can be created. Each virtual machine works like a
separate computer with its own operating system, memory, storage, and CPU. These VMs run
independently on the same hardware.

3. Resource Allocation
VMware allows administrators to share and assign resources like CPU, RAM, and disk space to
each virtual machine based on its needs. This makes sure resources are used efficiently. It also helps
improve performance and reduce hardware waste.

4. Managing Virtual Machines


Virtual machines can be easily managed through tools like VMware vCenter. Administrators can
start, stop, or even move virtual machines as needed. These actions can be done without affecting
the other virtual machines.

5. Networking Setup
VMware provides virtual networking tools such as virtual switches and VLANs. These tools allow
virtual machines to communicate with each other just like physical machines on a network. This
makes virtual networking fast, flexible, and secure.

6. Live Migration
VMware allows virtual machines to be moved from one physical server to another without shutting
them down. This means the machine keeps running during the move. It is useful for maintenance,
upgrades, and balancing workloads.

7. Security and Backup


VMware includes built-in features for security like encryption, firewalls, and user access control.
These features protect virtual machines from attacks and unauthorized access. It also supports
backup and disaster recovery to keep data safe during system failures.

Amazon AWS Virtualization

Amazon AWS Virtualization is a technology that lets you create virtual computers (called EC2
instances) in the cloud using Amazon Web Services. Instead of buying real physical machines, you
can use virtual machines to run apps, store data, and manage networks. These virtual machines run
on shared physical servers managed by AWS.
This helps save money, space, and time, because everything is handled over the internet. You can
easily start, stop, or change your virtual machines anytime. It is widely used for websites, software
development, testing, and big data processing.

Tools used:

Amazon EC2 (Elastic Compute Cloud)


EC2 lets you create virtual computers in the cloud that you can start, stop, and manage easily. It
helps you run apps and websites without using real machines.

Amazon VPC (Virtual Private Cloud)


VPC gives you a private network inside AWS where you control how your resources connect and
stay secure. You can set IP ranges, firewalls, and internet access.

Amazon EBS (Elastic Block Store)


EBS provides extra storage for EC2 virtual machines, like a hard drive. It keeps your data safe with
backup and quick recovery options.

Amazon S3 (Simple Storage Service)


S3 stores files like images, documents, or backups that you can access from anywhere. It keeps your
data secure and organized.

AWS Lambda
Lambda runs your code automatically when something happens, like a file upload or a database
change. You don’t need to manage any servers.

Amazon RDS (Relational Database Service)


RDS is a cloud database service that supports MySQL, PostgreSQL, and more. It handles backups
and updates so you can focus on using the data.

Amazon ECS (Elastic Container Service)


ECS helps you run apps packed in containers, which are small, portable software units. It makes
managing and scaling these apps fast and easy.

Working Process:

Provisioning resources
This is the first step where AWS tools help create the needed computing, storage, and network
resources. Tools like EC2, S3, and VPC are used to set up virtual environments.

Configuring resources
After provisioning, the resources are customized to match specific needs. This includes setting up
security rules, network settings, and storage options.

Managing resources
AWS tools help manage and automate virtual resources easily. Services like AWS CloudFormation
and AWS Lambda are used for smooth resource handling.

Scaling resources
AWS can increase or decrease the number of resources based on traffic or usage. Auto Scaling
handles this automatically without manual work.
Monitoring and optimization
AWS offers services to check how resources are performing and how to improve them. Tools like
AWS Trusted Advisor give tips to reduce cost and boost performance.

Backup and recovery


AWS helps save copies of your data in case something goes wrong. Services like EBS and RDS
handle storage and database backups safely.

Google Virtualization

Google Virtualization is a technology used by Google Cloud to create virtual versions of physical
computers and systems, like servers, storage, and networks. Instead of using real machines for
every task, Google uses powerful servers that run multiple virtual machines (VMs) or containers
at the same time.

This allows people to run apps, store data, and build websites without needing their own
hardware. Everything runs on Google’s secure and fast global infrastructure. It saves cost, improves
speed, and gives flexibility to scale up or down based on usage.

Through services like Compute Engine (VMs), Google Kubernetes Engine (containers), and
Cloud Functions (serverless), Google makes it easy for businesses and developers to use powerful
computing tools over the internet.

Tool Features

1. Virtual Machine

Google provides virtual machines (VMs) through Compute Engine.


A VM is like a computer inside a server. You can choose the OS (Windows, Linux), amount of
CPU, RAM, and storage. It works just like your personal computer but runs on Google Cloud.

2. Container-Based Virtualization

Google uses containers through Google Kubernetes Engine (GKE) and Cloud Run.
Containers are lightweight, fast, and include everything needed to run an app. They are perfect for
developers who want to build and deploy applications quickly across different systems.

3. Serverless Computing

With tools like Cloud Functions and Cloud Run, Google offers serverless computing.
This means you don’t need to manage servers. Just write your code, and Google will run it when
needed. It automatically handles scaling and resources.

4. Automatic Scaling

Google Cloud automatically adds or removes resources (like more VMs or storage) based on how
much traffic or load your application has.
This helps apps run smoothly during busy times and saves money during low usage.

5. Security
Google Virtualization includes strong security tools like encryption, firewalls, IAM (Identity
Access Management), and secure VM isolation.
Only authorized users can access the systems, and data is protected from attacks.

6. Cost Effectiveness

You only pay for what you use. Google offers per-second billing, discounts for long-term use, and
cheaper options like preemptible VMs.
This makes it affordable for students, startups, and big companies alike.

7. Management and Monitoring Tools

Google Cloud provides tools like Cloud Console, Cloud Monitoring, and Cloud Logging.
These help users track performance, fix issues, and manage virtual machines, containers, and apps
from one dashboard.

Working Process

1. Choosing a Virtualization Technology

Google offers different virtualization options like Virtual Machines (VMs), Containers, and
Serverless.
You choose the best one based on your need — for example, VMs for full control, containers for
lightweight apps, and serverless for automatic execution.

2. Creating a Virtual Instance

Once you choose the type, you can create a virtual instance using Google Cloud tools like
Compute Engine or GKE.
You select the OS, CPU, RAM, disk size, and other settings. Google then sets up the virtual
environment for you in minutes.

3. Deploying and Managing Workloads

You can now run your apps or services on the virtual instance.
Developers upload their code or applications, and Google helps manage the storage, networking,
and software updates. Tools like Cloud Console or command-line tools (gcloud) help you control
everything easily.

4. Scaling and Optimizing

Google automatically scales your resources (adds or removes VMs/containers) depending on the
load. It also suggests better VM types, cost-saving plans, and load balancing to keep apps fast and
efficient.

5. Security and Compliance

Google follows strict security policies to protect your data. It uses encryption, firewalls, secure
identity management, and regularly checks for compliance with laws like GDPR and ISO
standards to keep everything safe and legal.
Network Virtualization
Network virtualization is a technology that creates a virtual version of a physical network. It
combines hardware (like switches, routers, and cables) and software into one system, so everything
can be managed using software instead of manually connecting physical devices.

It allows you to create many virtual networks on a single physical network. Each virtual network
can have its own settings, rules, and security, just like a real one. These virtual networks can be
easily created, changed, or deleted without touching the physical wires or devices.

For example, if you have one big server, network virtualization lets you divide it into smaller,
separate virtual networks. These can be used by different apps, teams, or customers — all safely
and independently — even though they’re using the same physical hardware.
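The idea of independent tenants sharing one physical network can be shown with a toy model. All device and network names here are made up purely for illustration:

```python
# Toy model of isolation: one physical network, several virtual networks on top.
# Traffic is only allowed between devices that belong to the same virtual network.
membership = {
    "app-server": "vnet-a",
    "db-server": "vnet-a",
    "guest-laptop": "vnet-b",
}

def can_communicate(dev1: str, dev2: str) -> bool:
    """Devices share hardware, but traffic stays inside each virtual network."""
    return membership[dev1] == membership[dev2]

assert can_communicate("app-server", "db-server")        # same virtual network
assert not can_communicate("app-server", "guest-laptop") # isolated tenants
```

In a real deployment the hypervisor or switch enforces this boundary in software, which is what lets virtual networks be created or changed without rewiring anything.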

Working Process

1. Start with Physical Hardware


Network virtualization begins with physical devices like switches, routers, and servers. These
devices form the base infrastructure needed for the virtual network. All virtual networks are created
on top of this physical layer.

2. Install Virtualization Software


Special software is installed to create and control virtual networks. It allows the system to divide
one physical network into many virtual ones. This software is the brain behind all network
virtualization tasks.

3. Create Virtual Networks


Using the software, many virtual networks are created on one physical setup. Each virtual network
acts like a separate real network with its own settings. They can work independently without
interfering with each other.

4. Connect Devices Virtually


Virtual machines, apps, and services are linked to these virtual networks. They communicate
through software, not direct physical cables. This helps organize and separate traffic between
different systems.

5. Manage with Software Tools


Admins use tools and dashboards to control the virtual networks. They can monitor traffic, fix
problems, or make changes easily. Everything is done through software, without touching the
hardware.
6. Secure and Optimize the Network
Each virtual network has its own firewall and security rules. The system checks for attacks and
keeps data safe. It also improves speed by managing traffic smartly.

Advantages of Network Virtualization

1. Resource Optimization
Network virtualization allows many virtual networks to run on one physical system. This reduces
the need for buying separate hardware for each network. It makes better use of existing network
resources. This helps save space, power, and money.

2. Isolation and Segmentation


Each virtual network is kept separate from the others. Different departments or users can use their
own secure network. This prevents data from mixing or being accessed by the wrong people. It
increases security and control across the system.

3. Flexibility and Scalability


Virtual networks can be created, changed, or removed easily with software. You don’t need to touch
or move physical wires. This makes it simple to adjust the network as business needs grow. New
services can be added quickly without delay.

4. Simplified Network Management


All virtual networks can be managed from one central software dashboard. Admins can easily make
changes or check network status. They don’t have to work on each physical device. This saves time
and reduces errors in network setup.

5. Improved Performance and Availability


Virtual networks help improve traffic flow using smart software. They can choose the best and
fastest route for data. If one path fails, another one takes over automatically. This keeps services
running smoothly with fewer delays.

6. Cost Reduction
Less hardware is needed because many networks share the same system. This reduces spending on
devices, power, and maintenance. Fewer physical changes also mean lower labor costs. Virtual
networks are a smart way to save money over time.

7. Enhanced Application Delivery


Virtual networks can be adjusted to suit the needs of each application. Admins can give more
bandwidth or speed to important apps. This helps apps run better and respond faster. Users get a
smoother and more reliable experience.

8. Disaster Recovery and Business Continuity


If there is a failure, virtual networks can be moved to another location quickly. This helps keep
systems running with little or no downtime. Backups and copies of the network are easy to create. It
supports business continuity during emergencies or disasters.

Functions of Network Virtualization

1. Abstraction of Physical Resources


Network virtualization hides the details of real hardware like switches and routers. It shows them as
virtual parts that are easier to manage. This means network admins can focus more on setup and
design instead of hardware. It also makes the system more flexible and easier to change.
2. Virtual Network Creation and Isolation
With virtualization, many separate virtual networks can run on one physical system. Each one
works like its own private network. These networks stay separate from each other for better
security. Different teams or services can use their own networks safely.

3. Network Overlays and Tunneling


Virtualization uses techniques like overlays and tunneling to send data across networks. It wraps
data in extra layers so it can travel between different places. This helps connect locations that are far
apart. It makes building large virtual networks easier, no matter where users are.

4. Software-Defined Networking (SDN) Control


SDN gives one central place to control the entire virtual network. It uses software to manage traffic,
routes, and rules. This makes the network easy to update and watch over. It also helps the network
quickly adjust when needs change.

5. Dynamic Resource Allocation


Virtual networks can grow or shrink depending on how much they are used. This is done
automatically through software. It means no one has to manually change the setup. It improves
performance and makes resource use more efficient.

6. Network Segmentation and Quality of Service (QoS)


Admins can divide virtual networks into smaller parts and give special settings to each. Important
apps can get more speed or better performance. They can also control who gets access to what. This
helps the network work better and more securely.

7. Simplified Network Management and Automation


Managing virtual networks is easier because the complex hardware is hidden. Admins use one
central platform to check, change, and fix things. Automation tools help speed up work and reduce
mistakes. This makes the network safer and quicker to set up.

8. Enhanced Scalability and Fault Tolerance


Virtual networks can grow without breaking current services. They can also handle problems like
hardware failures by finding new paths for data. This means services keep running even during
issues. It’s great for businesses that need strong and flexible networks.

Tools Used in Network Virtualization

1. Hypervisors
Hypervisors are special software that help create and manage virtual machines (VMs) on a single
computer. They include virtual switches, which let VMs talk to each other and connect to the
internet. This helps set up virtual networks inside a computer or server. Hypervisors also keep each
VM separated so they don’t interfere with each other’s network.

2. Software-Defined Networking (SDN) Controllers


SDN controllers are like smart managers for the network. They help control the entire virtual
network from one place using software. They can set up connections, manage traffic, and make
changes automatically. SDN controllers also use a system called OpenFlow to talk to other network
devices and give instructions.

3. Network Overlay Technologies


Network overlays help build virtual networks on top of real physical ones. VXLAN and NVGRE
are examples that wrap network data so it can travel across long distances or different locations.
This lets companies create big virtual networks that are not limited by geography. These overlays
make it easier to support many users or departments on one network.
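As a concrete example of this wrapping, a VXLAN tunnel prefixes each frame with an 8-byte header whose 24-bit VXLAN Network Identifier (VNI) says which virtual network the frame belongs to. A minimal Python sketch of that framing (field layout follows the VXLAN specification; the helper names are illustrative):

```python
def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte with the VNI-valid bit set,
    three reserved bytes, the 24-bit VNI, one reserved byte."""
    assert 0 <= vni < 2 ** 24, "VNI must fit in 24 bits"
    return bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + b"\x00"

def read_vni(header: bytes) -> int:
    """Extract the virtual network ID from a VXLAN header."""
    assert header[0] & 0x08, "VNI-valid flag not set"
    return int.from_bytes(header[4:7], "big")

hdr = vxlan_header(5001)
assert len(hdr) == 8 and read_vni(hdr) == 5001
```

The 24-bit VNI is why overlays scale so well: it allows about 16 million separate virtual networks, far more than the 4,096 IDs a VLAN can offer.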

4. Virtual Routing and Forwarding (VRF)


VRF lets a single physical router act like many separate routers. Each virtual router has its own
routing table and rules, so different networks stay separate. It helps in dividing a network for
different customers or teams. VRF is often used in VPNs to keep data private and secure.

5. Network Function Virtualization (NFV)


NFV turns things like firewalls, load balancers, and other hardware into virtual tools. These tools
can be set up and changed quickly using software. You can also link them together to build custom
services. NFV makes it easy to scale up or down based on how much traffic there is.

6. Network Monitoring and Management Tools


These tools watch over virtual networks to make sure everything runs well. They check for
problems like slow speeds or lost data. They also help find and fix issues quickly. This ensures the
network is always working properly and securely.

7. Cloud Management Platforms (CMPs)


CMPs help manage virtual networks in cloud systems like AWS or Google Cloud. They can
automatically set up, grow, or remove networks when needed. CMPs also apply safety rules and
company policies across all cloud networks. This makes cloud network management easier and
more organized.

VLAN - Virtual Local Area Network

A VLAN (Virtual Local Area Network) is a way to split one physical network into smaller,
separate virtual networks.
Even if all computers are connected to the same switch, VLAN keeps their communication separate.
Each group or department can have its own VLAN to avoid sharing traffic with others.
This helps protect important data by keeping it away from unauthorized users.
VLAN also improves speed by reducing unnecessary data sharing across the whole network.
It makes it easier for network admins to organize and manage the network.
You can move devices between VLANs through software, without changing any cables.
For example, HR, IT, and guest users in a company can be kept apart using different VLANs.

Types of VLAN

Default VLAN
Every switch has a default VLAN, usually VLAN 1. All ports are part of this VLAN at the
beginning. It allows basic communication between all devices. Network admins usually
change it to improve security.

Data VLAN
A Data VLAN is used to carry normal user data like files, emails, and browsing. It does not
carry voice or system data. This separation helps keep the network neat and organized. It
also makes the network more secure.

Voice VLAN
A Voice VLAN is used to carry voice traffic like phone calls over the internet (VoIP). It
gives higher priority to voice to make calls clear and smooth. This helps avoid delays or
poor call quality. Voice and data are kept separate for better performance.
Management VLAN
The Management VLAN is used by network administrators to manage switches and other
devices. Normal users do not have access to this VLAN. It keeps control traffic away from
user traffic. This makes managing the network safer and more organized.

Native VLAN
The Native VLAN carries untagged traffic, which means data without a VLAN label. It
helps older or simple devices that cannot use VLAN tags. Each trunk port has one native
VLAN. It must be configured carefully to avoid security issues.

Guest VLAN
The Guest VLAN is made for visitors or temporary users. It gives them internet access
without giving access to the main network. This keeps company data safe from outsiders.
Guests stay in their own separate virtual network.

VLAN Architecture
VLAN architecture works by dividing one physical network into smaller virtual networks. Each
VLAN is like a separate group where only selected devices can communicate with each other, even
if they are connected to the same switch. Switches use VLAN IDs (numbers) to identify which data
belongs to which VLAN. Trunk ports are used to carry traffic for multiple VLANs between
switches, while access ports connect end devices to a specific VLAN. VLAN tagging helps
switches know where the data should go. This setup improves security, reduces unnecessary traffic,
and makes network management easier.
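The VLAN tag the switches use is a 4-byte IEEE 802.1Q field: a fixed marker value (0x8100) followed by 16 bits whose low 12 bits carry the VLAN ID. A small Python sketch of building and reading such a tag (the helper names are illustrative):

```python
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def make_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID, then priority (3 bits) + VLAN ID (12 bits)."""
    assert 0 <= vlan_id < 4096, "VLAN IDs are 12 bits"
    tci = (priority << 13) | vlan_id      # DEI bit left at 0
    return struct.pack("!HH", TPID, tci)

def read_vlan_id(tag: bytes) -> int:
    """Return the VLAN ID carried in an 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci & 0x0FFF                   # low 12 bits are the VLAN ID

tag = make_tag(vlan_id=20, priority=5)
assert read_vlan_id(tag) == 20
```

Because the ID field is 12 bits, a network can carry at most 4,096 VLANs, which is one reason large clouds move to overlay technologies such as VXLAN.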

How to Manage and Configure VLANs

1. Plan and design your VLANs

Decide how many VLANs you need based on your network's structure or departments.
Choose VLAN IDs or names and plan which devices go into each VLAN.
Make sure to include rules for VLAN size and how they will talk to each other if needed.

2. Configure VLANs on networking devices

Set up VLANs on switches or routers using the correct software or commands. Create VLANs by
assigning them names and numbers (VLAN IDs). Also set up things like VLAN ports, VLAN
trunks, and access rules.

3. Configure VLAN interfaces

Set up virtual interfaces (SVIs) for each VLAN on Layer 3 devices. These interfaces act like
gateways for devices inside each VLAN. They help in sending traffic between VLANs if needed.
4. Assign devices to VLANs

Put computers, printers, or other devices into the right VLANs. You can do this using software
tools or manually by port configuration. This helps keep devices grouped based on use or department.

5. Test and verify VLAN configurations

Check if VLANs are working correctly by testing device communication. Make sure devices in the
same VLAN can talk to each other. Fix any errors if devices can't connect or if traffic isn't going
through.

6. Monitor and manage VLANs

Keep an eye on your VLANs to make sure they work well. You might track VLAN usage,
membership, and traffic. Also check for security risks using tools like ACLs.

7. Document VLAN configurations

Write down the VLAN setup for future reference. Include VLAN IDs, names, members, and
interface settings. This helps when solving issues or expanding the network later.
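On a typical managed switch, steps 2 and 4 above might look like the following Cisco-IOS-style sketch. The VLAN ID, the name "Sales", and the interface names are hypothetical, chosen only for illustration:

```
! Step 2: create the VLAN and give it a name
vlan 10
 name Sales
! Step 4: put an access port (an end device's port) into VLAN 10
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
! Trunk port: carries traffic for multiple VLANs between switches
interface GigabitEthernet0/24
 switchport mode trunk
```

Access ports belong to exactly one VLAN, while the trunk port tags frames with their VLAN IDs so a second switch can keep the groups separate.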

ADVANTAGES

· Improved Security – VLANs isolate sensitive data by separating users into different virtual
networks.

· Better Network Performance – Reduces unnecessary traffic by limiting broadcast domains.

· Simplified Management – Devices can be grouped logically, even if they are in different
physical locations.

· Scalability – New devices and departments can be added without major physical changes.

· Cost-Effective – Saves money by reducing the need for extra hardware like switches and routers.

WAN ARCHITECTURE

A WAN (Wide Area Network) connects computers and networks over long distances, such as
between cities or countries. It uses communication links like leased lines, satellites, or the internet
to transfer data. WANs help organizations share resources and information across multiple branch
offices. Devices like routers and switches are used to control the flow of data in the network. WANs
can use both wired and wireless connections and support many users and services like email, file
sharing, and video calls. Since data travels through many networks, security tools like encryption
are used to protect it. WAN architecture can be designed in different ways, such as point-to-point,
hub-and-spoke, or MPLS. These designs help ensure efficient, secure, and reliable communication
across wide areas.
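To illustrate the hub-and-spoke design mentioned above, this short Python sketch builds the links for one hub and several branch sites (the site names are made up):

```python
def hub_and_spoke_links(hub, branches):
    """In a hub-and-spoke WAN, each branch has one link to the hub and no
    direct links to other branches; return those links as (hub, branch) pairs."""
    return [(hub, branch) for branch in branches]

links = hub_and_spoke_links("HQ", ["Branch-A", "Branch-B", "Branch-C"])
# Three links in total: one per branch, all terminating at the hub.
```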

End Devices

End devices are the computers and tools people use, like mobile phones, PCs, workstations, servers,
data centers, or mainframe computers. These are the devices that connect to the network and allow
users to send and receive information.

Customer Premises Equipment (CPE)

CPE means the equipment installed at the customer’s location, like routers or modems, that help
connect to the WAN. Different types of CPE are used depending on business needs to improve
network performance. The WAN service provider may also help manage and maintain this
equipment.

Access Points and Routers

Modern routers often include built-in wireless features, which let devices connect using Wi-Fi.
Access points help spread wireless signals in big offices, so many devices can connect over a larger
area. These tools are important in WAN systems to link different locations or floors into one big
network.

Network Switches

Switches help manage how data moves across a network. They work at different levels to make sure
devices get the right data quickly and efficiently. Switches are important for smooth and fast
communication within a network.

Local Area Network (LAN)

A LAN is a small network that connects a few devices like a laptop and a mobile phone. It can also
include routers and modems to form a working network at home or in small offices.

Connecting Media

Connecting media are the communication links that carry WAN traffic between sites, such as leased
lines, satellite links, or the public internet.

Centralized Management

Today, many WANs use centralized management tools. These tools help companies easily set up
and control their WAN using online dashboards. This makes it easier and faster to manage large
networks from one place.

Managing and Configuring WAN

Network Planning

Network planning means understanding what the WAN should do, like how many sites it connects,
how secure it should be, and the type of technology used. It includes deciding which locations need
to be linked and what type of connections (like leased lines or SD-WAN) will be used. The goal is
to match the design with the business needs and goals.

Network Design

After planning, the network design step begins. It considers things like routing, hardware,
performance levels, and service quality. The design should fit the organization's needs while
ensuring the network works smoothly and securely.

Network Implementation

In this step, the actual WAN is set up using hardware and software tools. Devices are installed, and
network connections are made between different offices or locations. This step may include
advanced tools like SDN (Software Defined Networking) or VPNs (Virtual Private Networks).

Network Monitoring and Management

Once the WAN is running, it needs to be watched and managed to keep it working properly. This
means fixing problems, checking for security risks, and tracking how data moves through the
network. Tools are used to find issues, block threats, and make sure everything runs efficiently.
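A toy version of such tracking, as a Python sketch: flag any site whose measured latency crosses a limit (the sites and numbers are made up):

```python
# Hypothetical round-trip latency samples per site, in milliseconds
latency_ms = {"Branch-A": 38.0, "Branch-B": 210.0, "Branch-C": 55.0}

def sites_over_threshold(samples, limit_ms=100.0):
    """Return, sorted by name, the sites whose latency exceeds the limit."""
    return sorted(site for site, ms in samples.items() if ms > limit_ms)

alerts = sites_over_threshold(latency_ms)
# Only Branch-B (210 ms) exceeds the 100 ms limit in this sample.
```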

Network Optimization

This step is about improving network performance. Techniques like reducing extra data
(compression), balancing traffic, or using smart tools to improve speed are used. These changes
help make the network faster and more efficient for users.
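The compression idea can be shown with Python's standard zlib module; this is only a toy stand-in for what WAN optimization appliances do to traffic streams:

```python
import zlib

# A repetitive payload, standing in for redundant traffic on a WAN link
payload = b"status=ok;" * 1000

compressed = zlib.compress(payload)      # fewer bytes cross the slow link
restored = zlib.decompress(compressed)   # the far end rebuilds the original

# The compressed copy is far smaller than the 10,000-byte original.
bytes_saved = len(payload) - len(compressed)
```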

Network Scalability and Upgrades

As a business grows, the WAN should be able to grow too. This step involves adding more users,
devices, or locations as needed. The WAN may also need software or hardware changes to stay up
to date and support business changes.

Advantages

· Covers Large Areas – WANs connect computers over long distances, even across countries or
continents.

· Centralized Data Access – Employees in different locations can access shared data and systems
from a central server.

· Improves Communication – It enables quick communication through emails, video calls, and
messaging across branches.

· Supports Remote Work – People can work from anywhere and still access the company’s
network securely.

· Scalable for Growth – WANs can grow easily by adding new branches, users, or services
without much rework.

TYPES OF WAN

· Leased Line WAN

· Circuit-Switched WAN

· Packet-Switched WAN

· Public WAN

· Wireless WAN (WWAN)

· Software-Defined WAN (SD-WAN)
