Vir Sem
Virtualization is widely used in various IT environments, including data centers, cloud computing
and enterprise environments, due to its numerous benefits that can help organizations achieve cost
savings, simplify administration, enable fast deployment and reduce infrastructure costs.
Here are some key aspects of how virtualization addresses these needs:
● Cost savings: Virtualization can lead to significant cost savings by optimizing the utilization of
hardware resources. By creating multiple Virtual Machines (VMs) on a single physical server,
organizations can reduce the number of physical servers needed, resulting in lower hardware
acquisition costs, reduced power consumption and decreased data center space requirements.
Additionally, virtualization enables organizations to consolidate their workloads onto a smaller
number of servers, which can reduce operational costs, such as maintenance, licensing and support
for hardware and software.
● Fast deployment: Virtualization helps set up new virtual machines (VMs) quickly, so companies
can easily add new resources whenever needed. Unlike physical servers, which take a long time to
buy and set up, VMs can be created and ready in minutes. Saved templates or snapshots can be used
to create VMs quickly and with the right settings. This saves time and helps launch new apps or
services faster.
● Infrastructure cost reduction: Virtualization can help organizations reduce their infrastructure
costs by optimizing the utilization of existing hardware resources. By consolidating workloads onto
a smaller number of physical servers, organizations can reduce their hardware procurement costs
and lower ongoing operational costs, such as power consumption and data center space.
Virtualization also enables organizations to dynamically allocate and reallocate resources based on
demand, which can help optimize resource utilization and reduce waste, leading to further cost
savings.
● High Availability: Virtualization platforms often include features for high availability, such as
live migration and automatic failover. These features ensure that VMs can be moved between
physical hosts with minimal downtime, maximizing uptime for critical applications.
HARDWARE VIRTUALIZATION
Hardware virtualization is the process of creating virtual versions of physical hardware like servers,
storage devices, or networks. Hardware virtualization (also called server virtualization) is a method
of running multiple virtual computers on a single physical machine at the same time.
Each virtual machine (VM) works like a real computer, with its own operating system and
applications. This makes it look like each VM is using its own hardware, even though they all share
the same physical machine.
To make this work, special software called a hypervisor (also known as a Virtual Machine Monitor
or VMM) is used. The hypervisor runs on the real computer and helps to create and manage VMs. It
handles the tasks of assigning CPU, memory, storage, and network to each VM while keeping them
separate from each other.
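The resource-assignment role described above can be sketched in a few lines of Python. This is a toy model, not a real hypervisor API; the class and method names are invented for illustration.

```python
# Toy model of a hypervisor partitioning one host's resources among VMs.
# All names here are illustrative, not a real hypervisor interface.

class Hypervisor:
    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse the request if the host cannot back it with real resources.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient host resources")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        # Each VM gets its own isolated slice of the host.
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}
        return self.vms[name]

host = Hypervisor(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=8)
host.create_vm("db", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram_gb)   # resources left for more VMs
```

The point of the sketch is the bookkeeping: the hypervisor tracks what each VM may use and rejects requests the physical hardware cannot satisfy.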
1. Full Virtualization
In full virtualization, the hypervisor (a special software) acts like real hardware. It allows many
different operating systems (called guest OS) to run on the same physical machine.
Each guest OS thinks it is running on its own hardware. The hypervisor manages and translates all
the hardware instructions for the guest OS. There is no need to change the guest OS or its
applications. This type of virtualization helps in running different operating systems on the same
server, like Windows and Linux side-by-side.
2. Partial Virtualization (OS-level or Containerization)
Partial virtualization, also called OS-level virtualization or containerization, uses containers that
share one OS kernel. Each container acts like a separate system with its own files and processes. All
containers run on the same physical host but share the same OS core. This makes it faster and lighter
than full virtualization. It works best when all applications can run on the same operating system.
3. Paravirtualization
Paravirtualization is a virtualization method where the guest operating system is modified to know
it's running in a virtual environment. It communicates with the hypervisor using special APIs for
better speed and efficiency. This makes it faster than full virtualization. However, it requires
changes to the OS, so not all systems can be used. It is best suited for situations where performance
is more important than flexibility.
TYPES OF HYPERVISOR
A hypervisor, also called a Virtual Machine Monitor (VMM), is a special type of software that
helps run virtual machines (VMs) on a physical computer (host system). It allows many virtual
computers to share the same physical resources like CPU, memory, storage, and network.
The hypervisor acts as a middle layer between the real hardware and the virtual machines. It hides
the physical details and gives each VM its own environment, as if it were running on its own
separate computer.
Type 1 hypervisors
A Type 1 hypervisor runs directly on a computer’s physical hardware, so it is also called a bare-
metal hypervisor. It does not need to run on top of any operating system. Because it has direct
access to the hardware, it works faster and is more efficient than other types. That’s why it is
commonly used in big companies and data centers. Type 1 hypervisors are sometimes called
virtual operating systems because they manage everything needed to run virtual machines. They
are also more secure, because each virtual machine runs its own separate operating system. So, if
one VM is attacked, the others are safe and not affected.
Type 2 hypervisors
A Type 2 hypervisor is installed on top of a regular operating system (OS) like Windows or
Linux. That’s why it is also called a hosted hypervisor. It depends on the host OS to manage
system resources like CPU, memory, storage, and network. Type 2 hypervisors were first used in
early computer systems when virtualization was added to existing operating systems. Even though
Type 1 and Type 2 hypervisors do the same job (running virtual machines), Type 2 hypervisors are
usually slower because everything must go through the host OS first. Also, if there is a security
problem in the host OS, it can affect all the virtual machines running on it.
For example, on your Windows laptop, you can install a system virtual machine and run Ubuntu
Linux in a separate window as if you have two computers in one.
A system virtual machine works with the help of a hypervisor, which is the main software that
creates and manages virtual machines.
1. Hypervisor Installation:
A hypervisor is first installed either directly on the computer’s hardware (Type 1
hypervisor) or on top of an existing operating system (Type 2 hypervisor).
Examples: VMware ESXi (Type 1), VirtualBox and VMware Workstation (Type 2).
2. Creating the Virtual Machine:
Using the hypervisor, the user creates a new virtual machine. During this process, the user
decides how much CPU, RAM, storage, and network access the VM should get from the
real machine (host).
3. Installing the Guest Operating System:
After creating the VM, the user installs an operating system (called the guest OS) inside
the VM, just like installing it on a real computer.
4. Running the VM:
Once installed, the guest OS starts working in its own separate window. The VM can now
be used to install apps, browse the internet, and perform tasks just like a real computer.
5. Resource Management by the Hypervisor:
The hypervisor is constantly working in the background. It allocates hardware resources
to each VM as needed, prevents conflicts, and isolates each VM to ensure that they don’t
interfere with each other or with the host system.
6. Isolation and Security:
A key feature of system VMs is isolation. Even if something goes wrong in one virtual
machine (like a virus or crash), it won’t affect the host computer or other VMs.
Process Virtual Machine
A Process Virtual Machine (Process VM) is a type of virtual machine that is designed to run a
single program or application, rather than a full operating system. It creates a temporary virtual
environment for that program to run in, and once the program stops running, the process VM is
destroyed.
In simple words, a process VM helps a program run smoothly on different systems by creating a
small virtual space just for that program. It does not include a full operating system, unlike a
system VM. It’s fast, lightweight, and mainly used for running specific software on multiple
platforms.
1. Program Starts:
When a user or system starts a particular program (like Java or Python), the process VM is
created automatically.
2. Creates a Virtual Environment:
The process VM sets up a temporary virtual space where the program can run. It acts like
a mini-computer that supports only that one process or application.
3. Runs Independently:
Inside this environment, the program runs independently of the host system. This means
the program can run the same way on different operating systems (like Windows, macOS,
Linux), because the process VM hides the differences.
4. Managed by a Virtual Engine:
The process VM is usually managed by a software engine like the Java Virtual Machine
(JVM) or .NET CLR. These engines translate the program's instructions into something the
host machine can understand.
5. Ends When the Program Ends:
Once the program finishes running or is closed, the process virtual machine is also
destroyed. It does not stay active like a system VM.
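The five steps above can be sketched as a toy Python context manager. The names are invented and the "translation" step is reduced to a lookup, but the lifecycle (created when the program starts, destroyed when it ends) mirrors how a process VM behaves.

```python
# Sketch of a process VM's lifecycle: created when the program starts,
# destroyed when it ends. Names are illustrative, not the real JVM API.

class ProcessVM:
    def __init__(self, program):
        self.program = program
        self.alive = False

    def __enter__(self):
        self.alive = True          # steps 1-2: VM and its environment created
        return self

    def run(self, bytecode_op):
        # step 4: the engine translates portable instructions for the host
        return {"ADD": 2 + 3, "MUL": 2 * 3}[bytecode_op]

    def __exit__(self, *exc):
        self.alive = False         # step 5: VM destroyed when the program ends

with ProcessVM("demo") as vm:
    result = vm.run("ADD")
print(result, vm.alive)   # the VM is no longer alive after the program finishes
```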
Server Virtualization
Server virtualization is a smart technology that allows one physical server (a real computer) to be
split into many virtual servers, also called Virtual Machines (VMs). Each of these VMs works
like its own separate computer with its own operating system, software, and resources like
CPU, memory, and storage.
Even though all these virtual machines are running on the same physical hardware, they work
independently from one another — like having multiple small computers inside one big computer.
This is made possible by a special software called a Hypervisor or Virtual Machine Monitor
(VMM). The hypervisor acts like a manager — it hides the physical hardware and helps create and
control these virtual machines.
Types of Server Virtualization
1. Full Virtualization
Full virtualization uses a hypervisor to copy the entire hardware system, like CPU, memory, and
storage. It allows multiple virtual machines (VMs) to run on one physical server.
Each VM acts like a separate computer with its own operating system. No need to change the guest
operating system. It gives strong isolation and is flexible but may use more resources.
Examples include VMware ESXi, Hyper-V, and KVM.
2. Para-Virtualization
Para-virtualization also uses a hypervisor, but the guest OS is slightly modified to work better with
it. This helps improve speed and performance compared to full virtualization. The guest OS knows
it is running in a virtual environment. It requires changes to the OS, so it is less flexible. Best used
when performance is more important than compatibility.
Xen is a popular hypervisor that uses this method.
3. OS-Level Virtualization (Containerization)
This type of virtualization runs containers instead of full VMs. All containers share the same
operating system kernel. Each container works like an isolated app with its own file system and
settings. It uses less memory and starts up faster than VMs. It is great for running many lightweight
apps on one system. Docker is a famous tool for this type.
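The kernel-sharing difference between containers and full VMs can be illustrated with a toy Python sketch; the classes below are invented for the comparison and are not Docker's API.

```python
# Toy comparison: each full VM carries its own kernel object, while
# containers all reference the single host kernel. Illustrative only.

class Kernel:
    pass

host_kernel = Kernel()

class FullVM:
    def __init__(self):
        self.kernel = Kernel()        # every VM boots its own kernel

class Container:
    def __init__(self):
        self.kernel = host_kernel     # all containers share the host kernel
        self.filesystem = {}          # but keep isolated files and settings

vm_a, vm_b = FullVM(), FullVM()
c_a, c_b = Container(), Container()
print(vm_a.kernel is vm_b.kernel)     # False: separate kernels
print(c_a.kernel is c_b.kernel)       # True: one shared kernel
```

Because containers skip booting a kernel of their own, they start faster and use less memory, which is exactly the trade-off described above.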
4. Hardware-Assisted Virtualization
This type uses features built into modern CPUs to support virtualization. Examples are Intel VT-x
and AMD-V. It helps the hypervisor manage virtual machines more efficiently. Improves
performance and security by using direct hardware support. It reduces the workload of the software
layer (hypervisor). Mostly used in enterprise-level virtualization platforms.
5. Network Virtualization
Network virtualization combines physical network devices like switches and routers into virtual
ones. It allows admins to control the network more easily and flexibly. Virtual networks can be
split, grouped, or moved as needed. It improves security, performance, and efficiency of data flow.
Often used in data centers and cloud platforms.
Common in Software-Defined Networking (SDN) systems.
6. Storage Virtualization
Storage virtualization combines many physical storage devices into one virtual storage system. It
helps in managing data more easily and efficiently. Storage can be shared across different virtual
machines or systems. It improves storage performance, backup, and disaster recovery. Admins can
move or expand storage without stopping operations.
Used in big companies and data centers for better control of storage resources.
Architecture of Server Virtualization
1. Physical Server
This is the actual, real computer or machine that runs the virtual machines (VMs).
It has all the necessary hardware like the CPU, memory (RAM), hard disk (storage), and network
ports. The physical server acts as the base where all virtual servers work.
It gives the computing power and resources needed to run the virtual environments. Without this
physical server, virtual machines cannot function. It’s like one big computer that runs many small
computers inside it.
2. Hypervisor
This is special software that runs on the physical server. Its job is to create, control, and manage the
virtual machines. The hypervisor creates a layer between the physical hardware and the virtual
machines. It allows each VM to use only the amount of CPU, memory, and storage it needs. It also
keeps the VMs separate, so they don’t interfere with each other.
Without a hypervisor, virtualization wouldn’t be possible.
3. Virtual Machines (VMs)
Virtual machines are like separate mini-computers created inside the physical server.
Each VM works like a real computer, with its own operating system, software, and settings.
Even though they share the same physical machine, they operate independently. One VM can run
Windows, another Linux, without affecting each other. You can add, remove, or change them based
on your needs. They are great for saving space, cost, and power compared to using many physical
servers.
4. Management Console
This is the user interface or control panel used to manage everything. It helps administrators to
create, monitor, and control virtual machines. You can start, stop, delete, or move VMs using this
tool. It also helps in handling system resources like CPU, RAM, storage, and networks. It makes it
easier to manage the virtual environment from one place.
It’s like a remote that helps you control and manage all your virtual computers.
Advantages of Server Virtualization
1) Better Resource Utilization
Server virtualization allows one physical server to run multiple virtual machines (VMs), each acting
like a separate computer. This helps in using the server’s CPU, memory, and storage more
effectively. Instead of leaving parts of the hardware idle, all resources are shared and put to good
use. This leads to higher efficiency and better performance.
2) Cost Saving
With server virtualization, you don’t need to buy many physical servers because one server can do
the work of many. This means you spend less money on hardware. You also save on electricity and
cooling costs because fewer machines are running. It also reduces the space needed in data centers.
3) Faster Deployment
Virtualization allows you to create new virtual servers quickly using ready-made templates. You
don’t have to wait for new hardware or do long setups like with physical servers. This makes it easy
to launch new applications or test software faster. It helps save time and speeds up work.
4) Isolation
Each virtual machine (VM) runs on its own, like a separate computer. If one VM gets a virus or
crashes, the problem stays inside that VM only. The other VMs on the same server are safe and
continue working normally. This keeps the system more secure and stable.
Desktop Virtualization
Desktop virtualization is a technology that runs a user's desktop environment on a central server
rather than on the local device. This means the desktop is not tied to one specific computer – it can
be used from anywhere, anytime. It helps keep data safe, makes it easy to manage many desktops,
and supports remote working. The user sees and controls the desktop as if it were running on their
own computer, even though it is hosted somewhere else.
Architecture of Desktop Virtualization
1. Client Device (Endpoint):
This is the device the user uses to connect to their virtual desktop. It can be a laptop, desktop, tablet,
or even a mobile phone. The device does not do the actual processing – it just shows the desktop
screen and sends user inputs.
2. Connection Broker:
This is the middleman that connects the user to the right virtual desktop. It checks the user’s login
details and sends them to their personal desktop. It also manages sessions and tracks which user is
using which desktop.
3. Virtual Desktop Pool:
These are servers that store and run all the virtual desktops. Each user gets their own virtual
machine (VM) that works like a personal computer. The desktop OS, apps, and user settings are all
stored here, not on the user's device.
4. Host Server:
It runs a special software called a hypervisor that helps create and manage many virtual machines
(VMs) on the same physical server. The host server provides CPU, memory, storage, and
network access to all the virtual desktops. Even if many users are connected, the host server
handles all of them at once using its strong hardware.
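The connection broker's role in the architecture above can be sketched as a small Python class. The names and the credential check are hypothetical simplifications of what a real broker does.

```python
# Minimal sketch of a connection broker: it authenticates a user and maps
# the session to that user's virtual desktop. All names are invented.

class ConnectionBroker:
    def __init__(self, credentials, desktops):
        self.credentials = credentials   # user -> password
        self.desktops = desktops         # user -> assigned desktop VM id
        self.sessions = {}               # active user -> VM id

    def connect(self, user, password):
        if self.credentials.get(user) != password:
            raise PermissionError("login failed")
        vm = self.desktops[user]
        self.sessions[user] = vm         # track who is using which desktop
        return vm

broker = ConnectionBroker({"asha": "pw1"}, {"asha": "vm-17"})
print(broker.connect("asha", "pw1"))   # the user is routed to their own desktop
```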
Types of Desktop Virtualization
1. Virtual Desktop Infrastructure (VDI)
VDI is a system where a company sets up virtual desktops on its own servers in a data center. Each
user gets a separate virtual machine that looks and works like a personal computer. These virtual
desktops are stored and managed centrally, and users can connect to them using the internet from
anywhere. VDI offers high customization and strong security since all data stays on the server,
not on the user’s device. However, it needs a good setup and maintenance team, as the company
owns and runs the infrastructure.
2. Remote Desktop Services (RDS)
RDS, also called session-based virtualization, allows many users to share the same server and
access a shared Windows environment. Instead of having separate virtual machines, all users work
on the same system, but with their own sessions. It’s a cheaper and simpler option than VDI, as it
uses fewer resources. However, users have limited control over settings, and performance may
drop if too many users are active at once. It’s ideal for training centers, schools, or businesses
with common software needs.
3. Desktop as a Service (DaaS)
DaaS is a cloud-based solution where a third-party provider (like Amazon, Microsoft, or Google)
hosts the virtual desktops. Users connect to their desktops using the internet, and the provider
handles security, updates, and server management. It’s very flexible — businesses can scale up
or down easily — and there's no need to buy or maintain servers. DaaS is great for small and
medium businesses or remote teams who want simple setup and access from anywhere without
worrying about backend hardware.
Storage Virtualization
Storage virtualization is a smart and helpful technology that takes many different physical storage
devices — such as hard drives, SSDs, or storage servers — and combines them into one big
virtual storage space. These devices might be different in size, speed, or even made by different
companies, but virtualization makes them work together as if they are one single system.
This means that users and applications don’t have to worry about which physical device stores their
files. Instead, they see one large storage space where they can save and access their data easily. A
special software program called a virtualization layer or storage manager takes care of all the
hard work.
The best part is that this system is flexible and easy to manage. If you need more space, you can
simply add more storage devices, and the virtualization software will include them in the system
automatically. Storage virtualization hides the complexity of multiple devices and turns them into
one easy, organized, and powerful storage solution. This helps companies save time, improve
performance, manage storage better, and protect their important data.
Types of Storage Virtualization
1. Block Storage Virtualization
Block storage breaks data into small units called blocks. These blocks can be stored in different
physical storage devices. The virtualization software hides the actual location of these blocks and
presents them as one large storage. This allows fast and flexible data access. It is often used in
databases and high-performance applications. It works like taking pages from different bookshelves
but reading them together as one book.
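A minimal Python sketch of this idea, with invented names: a virtual volume hides which physical disk actually holds each block.

```python
# Sketch of block-level virtualization: one virtual volume whose blocks
# actually live on different physical devices. Illustrative names only.

class VirtualVolume:
    def __init__(self, devices):
        self.devices = devices          # physical disks, modeled as dicts
        self.block_map = {}             # virtual block -> (device, slot)
        self.next_slot = [0] * len(devices)

    def write(self, vblock, data):
        dev = vblock % len(self.devices)       # spread blocks across disks
        slot = self.next_slot[dev]
        self.next_slot[dev] += 1
        self.devices[dev][slot] = data
        self.block_map[vblock] = (dev, slot)   # hide the real location

    def read(self, vblock):
        dev, slot = self.block_map[vblock]
        return self.devices[dev][slot]

vol = VirtualVolume([{}, {}])          # two physical disks appear as one volume
vol.write(0, b"page-A")
vol.write(1, b"page-B")
print(vol.read(0), vol.read(1))        # the caller never sees which disk held what
```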
2. Object Storage Virtualization
Object storage handles data as individual objects, each with its own ID and details (metadata).
Object storage virtualization combines all of these objects into one big virtual pool, even if they are
stored in different locations. It is very good for storing photos, videos, and backups. It’s mainly
used in cloud storage systems. You just search for an object’s ID to find it instantly, like finding a
labeled box in a huge warehouse.
3. Unified Storage Virtualization
Unified storage combines different types of storage — block, file, and object — into one system.
This makes it easy to manage all kinds of data in one place. It helps reduce the cost of having
separate systems for different storage types. IT teams can manage everything from one interface. It
is ideal for businesses that need flexibility. Think of it like a smart cupboard that can hold books,
folders, and boxes all in one.
Host-Based Storage Virtualization
In this type, the computer itself handles virtualization through installed software. It turns one
physical drive into multiple virtual drives. This gives flexibility and is useful for small businesses or
personal computers. No external hardware is needed. However, it may slow down performance if
the computer is overloaded. It’s like your own laptop managing different virtual folders from the
same hard disk.
Network-Based Storage Virtualization
This type uses a central device or server to manage storage across the network. It brings together
storage from many physical devices and shows it as one big virtual system. This makes it easier for
companies to manage and share storage. It is secure and reduces downtime. It is commonly used in
large businesses and data centers. It’s like a shared company locker that’s managed by a smart
system.
Storage Area Network (SAN)
A Storage Area Network (SAN) is a dedicated, high-speed network that connects servers to shared
storage devices. It works like a private road system built only for storage traffic. SAN allows data to
move quickly and safely between servers and storage without using the normal company network,
which keeps performance fast and efficient.
SAN is especially useful for large companies, data centers, or cloud systems where a lot of data is
stored and needs to be accessed quickly. It helps in tasks like backups, large file transfers, database
access, and virtual machine storage.
SAN makes it easy to manage big storage systems, improves speed, and ensures that servers always
have access to the data they need all in a secure and organized way.
Components of SAN
1. Servers (Hosts) - These are the computers that request and use data from the storage system.
They connect to the SAN to read and write data.
2. Storage Devices - These include hard drives, SSDs, or disk arrays that store all the data. They
are connected to the SAN and shared among servers.
3. SAN Switches - These devices connect servers to storage devices through fibre or high-speed
Ethernet. They manage traffic to make sure data moves quickly and securely.
4. Host Bus Adapters (HBAs) - HBAs are special cards inside servers that allow them to
connect to the SAN. They send and receive data between the server and storage.
5. Cables - High-speed fibre or Ethernet cables physically connect all parts of the SAN. They
carry the data from one component to another.
6. SAN Management Software - This software helps monitor, configure, and control the SAN.
It ensures everything runs smoothly and efficiently.
Working of SAN
In a Storage Area Network (SAN), servers connect to shared storage devices through a high-speed
network. When a server wants to access or save data, it sends a request using a Host Bus Adapter
(HBA). This request is passed through SAN switches that direct it to the right storage device. The
data then travels back and forth over fast fiber or Ethernet cables. This setup keeps the main
network free and allows multiple servers to use the same storage safely and efficiently.
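The data path just described can be modeled as a toy Python sketch. Real SANs speak protocols like Fibre Channel or iSCSI, so the classes below are only stand-ins for the components, with invented names.

```python
# Toy walk-through of the SAN data path: server -> HBA -> switch -> storage.
# Purely illustrative; not a real storage protocol.

class StorageDevice:
    def __init__(self):
        self.blocks = {}
    def handle(self, op, lba, data=None):
        if op == "write":
            self.blocks[lba] = data
            return "ok"
        return self.blocks[lba]            # op == "read"

class SanSwitch:
    def __init__(self, targets):
        self.targets = targets             # target id -> storage device
    def route(self, target, op, lba, data=None):
        # The switch directs traffic to the right storage device.
        return self.targets[target].handle(op, lba, data)

class HostBusAdapter:
    def __init__(self, switch):
        self.switch = switch
    def io(self, target, op, lba, data=None):
        # The HBA hands the request to the SAN fabric, not the normal LAN.
        return self.switch.route(target, op, lba, data)

disk = StorageDevice()
hba = HostBusAdapter(SanSwitch({"array-1": disk}))
hba.io("array-1", "write", 42, b"payload")
print(hba.io("array-1", "read", 42))
```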
ADVANTAGES:
Reduced Downtime
CHALLENGES:
Cost: SAN setup is expensive due to high-end hardware, cables, and specialized equipment.
Security: SANs can be vulnerable to unauthorized access if proper security controls are not in
place.
Performance: If not properly designed, performance may drop due to high traffic or
misconfigurations.
Complexity: Managing and configuring a SAN requires skilled professionals and can be
complicated.
Network Attached Storage (NAS) is a special storage device that connects to a network, so many
people and computers can use it at the same time. It works like a small file server, always
connected, that lets you save, share, or back up your files easily from any device on the same
network.
Unlike normal USB hard drives that only work with one computer, NAS connects to your Wi-Fi or
local network. This means laptops, desktops, or even smart TVs can all access the NAS together. It
has its own small software that helps manage files, users, and security.
People use NAS at home to store things like photos, videos, and movies. In offices, teams use it to
work on shared files or to back up important data. NAS is simple to set up, affordable, and easy to
use, even without much technical knowledge. If it's connected to the internet, you can even use it
from anywhere.
Components of NAS
Storage Drives – Store all your files like photos, videos, and documents.
Network Interface – Connects the NAS to your network via Ethernet or Wi-Fi.
Memory (RAM) – Helps the NAS work faster and handle multiple tasks.
RAID Controller – Protects data using multiple drives for safety and performance.
Cooling Fan – Keeps the NAS device cool and prevents overheating.
Working of NAS
NAS works like a mini computer that stores files and shares them over a network. It is connected to
your home or office network through a cable (Ethernet) or Wi-Fi. When a user wants to save or
open a file, their device sends a request to the NAS. The NAS receives the request, finds the file on
its storage drives, and sends it back to the user’s device. Its small built-in operating system handles
file sharing, user access, and security. Multiple users can access the NAS at the same time without
needing a separate computer. It’s simple, fast, and always available on the network.
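A toy Python sketch of that request flow, with invented method names rather than a real file-sharing protocol such as SMB or NFS:

```python
# Sketch of a NAS serving file requests from several clients on the
# network. Method names are illustrative, not a real NAS interface.

class NasServer:
    def __init__(self):
        self.files = {}                 # shared storage, visible to all clients
        self.acl = set()                # users allowed to access the share

    def allow(self, user):
        self.acl.add(user)

    def save(self, user, path, data):
        if user not in self.acl:
            raise PermissionError(user)
        self.files[path] = data         # the same file is visible to everyone

    def open(self, user, path):
        if user not in self.acl:
            raise PermissionError(user)
        return self.files[path]

nas = NasServer()
nas.allow("laptop")
nas.allow("phone")
nas.save("laptop", "/photos/1.jpg", b"...")
print(nas.open("phone", "/photos/1.jpg"))   # another device reads the same file
```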
BENEFITS:
Easy Management
Cost Effective
High Availability
Scalability
File Sharing
Data Protection
CHALLENGES:
Limited Performance – NAS can become slow if too many users access it at the same time.
Network Dependency – It only works well if the network is stable and fast.
Storage Limitations – Once storage is full, you need to add or upgrade drives, which can be
costly.
Security Risks – If not set up properly, it can be exposed to hackers or data breaches.
Scalability Issues – It may not be suitable for very large businesses with massive data needs.
Single Point of Failure – If the NAS device fails, access to all stored data is lost unless backups
exist.
What is NAS used for ?
File Sharing
NAS lets multiple users access and share files over the same network. Everyone connected can
open, edit, or save files from the same storage.
Data Backup
It automatically saves copies of important files from your computers or phones. This protects your
data in case of a system crash or accidental deletion.
Media Streaming
You can store music, videos, and photos on NAS and play them on smart TVs, phones, or
computers without moving files around.
Remote Access
NAS allows you to access your files from anywhere using the internet. It’s helpful for working from
home or while traveling.
Centralized Storage
Instead of having files scattered on different devices, NAS keeps everything in one place. This
makes managing and organizing data much easier.
RAID
RAID stands for Redundant Array of Independent Disks. It is a technology used in computers
and servers to connect and manage multiple hard drives together as a single logical unit. Instead
of storing data on just one hard disk, RAID spreads the data across several disks in a way that
improves both performance and data safety.
The main purpose of RAID is to offer faster data access and greater reliability. This means it
helps in reading and writing data quickly, and also ensures that your data is not lost even if one of
the disks stops working. RAID achieves this by using different techniques like data mirroring
(copying data to more than one disk) and data striping (splitting data into chunks across disks).
RAID can be managed through hardware (using a RAID controller) or software (through the
operating system). Depending on how it is set up, RAID can provide different levels of
redundancy (backup copies) and fault tolerance (ability to handle disk failures).
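The two techniques mentioned above, striping and mirroring, can be shown byte by byte in a short Python sketch. Real RAID works on larger blocks, and these helper functions are invented for the demo.

```python
# Sketch of RAID's two basic techniques: striping (split data across
# disks for speed) and mirroring (copy data to every disk for safety).

def stripe(data, n_disks):
    # Round-robin the bytes across disks, as RAID 0 does with blocks.
    disks = [bytearray() for _ in range(n_disks)]
    for i, byte in enumerate(data):
        disks[i % n_disks].append(byte)
    return disks

def mirror(data, n_disks):
    # Every disk holds a full copy, as RAID 1 does.
    return [bytearray(data) for _ in range(n_disks)]

striped = stripe(b"ABCDEF", 2)
print(striped)                        # disk 0 gets ACE, disk 1 gets BDF
mirrored = mirror(b"ABCDEF", 2)
print(mirrored[0] == mirrored[1])     # True: identical copies on both disks
```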
Advantages: Limitation:
Advantages: Limitations :
RAID 5
RAID 5 is a type of RAID that combines speed and data protection. It uses a method called
block-level striping with distributed parity, which means the data is divided into small parts
(blocks) and spread across multiple hard drives. Along with the actual data, extra information
called parity is also stored on the drives. This parity data is not kept on a single disk, but is evenly
distributed across all the drives in the array.
The main purpose of the parity is to help recover lost data if one of the drives fails. So, if any
single disk stops working, the system can use the remaining data and the parity information to
rebuild the missing files. This makes RAID 5 a reliable option that offers high performance for
reading data, while also giving fault tolerance, meaning your data is still safe even if one drive
crashes.
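The parity trick is plain XOR, which is easy to verify in Python; the data blocks below are made-up sample values.

```python
# Sketch of RAID 5's recovery idea: parity is the XOR of the data blocks,
# so any single lost block can be rebuilt from the survivors.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"\x0f\x0f", b"\xf0\x0f", b"\xff\x00"
parity = xor_blocks([d1, d2, d3])       # stored spread across the drives

# Simulate losing d2: XOR the surviving blocks with the parity to rebuild it.
rebuilt = xor_blocks([d1, d3, parity])
print(rebuilt == d2)                    # True
```

Because XOR is its own inverse, the same function both computes the parity and reconstructs whichever single block went missing.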
Advantages: Limitations:
VMware
VMware is a software tool that allows you to create and use Virtual Machines (VMs) on a single
physical server. These virtual machines act like real computers, each with its own operating
system, CPU, memory, and storage. The key benefit is that VMware shares the physical
machine's resources between many virtual machines, which makes work faster, easier to manage,
and more cost-effective.
Instead of using one computer for each task, VMware lets you run multiple tasks on one machine
by creating separate virtual environments. This is helpful for testing, development, backup, or
running multiple systems without needing more hardware.
VMware offers many tools to make virtualization easy and efficient. Here are the key ones in
simple terms:
Hypervisor: This is the software layer that sits between the physical server and the virtual
machines. It allows the hardware to run multiple virtual machines at the same time.
Virtual Machines: These are software-based computers created inside the physical server.
Each virtual machine runs independently with its own OS and apps.
Virtual Machine Management: VMware lets admins easily start, stop, or move virtual
machines between servers using tools like vCenter.
Networking: VMware includes virtual switches, VLANs, and load balancing so that the
virtual machines can communicate efficiently.
Resource Allocation: It allows admins to divide CPU, memory, and storage among virtual
machines based on their needs.
Live Migration: You can move a running virtual machine from one physical server to
another without shutting it down.
Security: VMware provides features like encryption, firewalls, and access control to
protect virtual environments.
Backup and Disaster Recovery: It helps to create and manage backup plans so data is safe
even if something goes wrong.
Working Process
1. Hypervisor Installation
The first step is to install a hypervisor on the physical server. This software layer acts as a bridge
between the server’s hardware and the virtual machines. It allows multiple virtual machines to run
on the same server at the same time.
2. Resource Allocation
VMware allows administrators to share and assign resources like CPU, RAM, and disk space to
each virtual machine based on its needs. This makes sure resources are used efficiently. It also helps
improve performance and reduce hardware waste.
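The idea of dividing a fixed pool of host resources among VMs can be sketched as a small proportional allocator in Python. This is an illustration of share-based allocation only, not VMware's actual scheduling algorithm; the VM names and numbers are made up:

```python
def allocate(host_capacity, requests):
    """Split one host resource (e.g. CPU cores) among VMs in proportion
    to their requests, capping the total at the host's capacity."""
    total = sum(requests.values())
    if total <= host_capacity:
        return dict(requests)  # everyone gets exactly what they asked for
    scale = host_capacity / total  # oversubscribed: scale all shares down
    return {vm: req * scale for vm, req in requests.items()}

# Host with 16 CPU cores, VMs requesting 24 in total -> scaled down to fit
cpu = allocate(16, {"web": 8, "db": 12, "cache": 4})
assert abs(sum(cpu.values()) - 16) < 1e-9
```

When the host is not oversubscribed every VM gets its full request; once demand exceeds capacity, each VM's share shrinks in proportion, which is the basic intuition behind share-based resource controls.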
3. Networking Setup
VMware provides virtual networking tools such as virtual switches and VLANs. These tools allow
virtual machines to communicate with each other just like physical machines on a network. This
makes virtual networking fast, flexible, and secure.
4. Live Migration
VMware allows virtual machines to be moved from one physical server to another without shutting
them down. This means the machine keeps running during the move. It is useful for maintenance,
upgrades, and balancing workloads.
Amazon AWS Virtualization
AWS virtualization is a technology that lets you create virtual computers (called EC2
instances) in the cloud using Amazon Web Services. Instead of buying real physical machines, you
can use virtual machines to run apps, store data, and manage networks. These virtual machines run
on shared physical servers managed by AWS.
This helps save money, space, and time, because everything is handled over the internet. You can
easily start, stop, or change your virtual machines anytime. It is widely used for websites, software
development, testing, and big data processing.
Tools used:
AWS Lambda
Lambda runs your code automatically when something happens, like a file upload or a database
change. You don’t need to manage any servers.
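A minimal Lambda handler in Python looks like the sketch below. The event shape follows the standard S3 upload notification that Lambda delivers; the bucket and file names are hypothetical:

```python
# A minimal AWS Lambda handler. Lambda calls this function once per event;
# this example reads the bucket and object key out of an S3 upload event.
def lambda_handler(event, context):
    summaries = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        summaries.append(f"received {key} in {bucket}")
    return {"statusCode": 200, "body": summaries}

# Local simulation of the event Lambda would deliver on a file upload
sample_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "report.csv"}}}]}
print(lambda_handler(sample_event, None))
```

In a real deployment you never call this function yourself: AWS invokes it whenever the configured trigger (an S3 upload, an API call, a schedule) fires, and tears the environment down afterwards.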
Working Process:
Provisioning resources
This is the first step where AWS tools help create the needed computing, storage, and network
resources. Tools like EC2, S3, and VPC are used to set up virtual environments.
Configuring resources
After provisioning, the resources are customized to match specific needs. This includes setting up
security rules, network settings, and storage options.
Managing resources
AWS tools help manage and automate virtual resources easily. Services like AWS CloudFormation
and AWS Lambda are used for smooth resource handling.
Scaling resources
AWS can increase or decrease the number of resources based on traffic or usage. Auto Scaling
handles this automatically without manual work.
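The scaling step can be sketched as a small target-tracking function in Python. This is a simplified model of the idea behind Auto Scaling (keep the load per instance near a target), not the real AWS API; the numbers are illustrative:

```python
import math

def desired_instances(total_load, target_per_instance=70, minimum=1, maximum=10):
    """Return how many instances are needed so that each carries at most
    `target_per_instance` units of load, clamped to a min/max group size."""
    needed = math.ceil(total_load / target_per_instance)
    return max(minimum, min(maximum, needed))

assert desired_instances(0) == 1        # never scale below the minimum
assert desired_instances(210) == 3      # 210 units of load / 70 -> 3 instances
assert desired_instances(10_000) == 10  # capped at the maximum group size
```

Auto Scaling evaluates a rule like this continuously against live metrics (CPU, request count) and launches or terminates instances to close the gap.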
Monitoring and optimization
AWS offers services to check how resources are performing and how to improve them. Tools like
AWS Trusted Advisor give tips to reduce cost and boost performance.
Google Virtualization
Google Virtualization is a technology used by Google Cloud to create virtual versions of physical
computers and systems, like servers, storage, and networks. Instead of using real machines for
every task, Google uses powerful servers that run multiple virtual machines (VMs) or containers
at the same time.
This allows people to run apps, store data, and build websites without needing their own
hardware. Everything runs on Google’s secure and fast global infrastructure. It saves cost, improves
speed, and gives flexibility to scale up or down based on usage.
Through services like Compute Engine (VMs), Google Kubernetes Engine (containers), and
Cloud Functions (serverless), Google makes it easy for businesses and developers to use powerful
computing tools over the internet.
Tool Features
1. Virtual Machine
Google provides virtual machines through Compute Engine. Each VM acts like a full computer
with its own OS, CPU, RAM, and disk, giving users full control over the environment.
2. Container-Based Virtualization
Google uses containers through Google Kubernetes Engine (GKE) and Cloud Run.
Containers are lightweight, fast, and include everything needed to run an app. They are perfect for
developers who want to build and deploy applications quickly across different systems.
3. Serverless Computing
With tools like Cloud Functions and Cloud Run, Google offers serverless computing.
This means you don’t need to manage servers. Just write your code, and Google will run it when
needed. It automatically handles scaling and resources.
4. Automatic Scaling
Google Cloud automatically adds or removes resources (like more VMs or storage) based on how
much traffic or load your application has.
This helps apps run smoothly during busy times and saves money during low usage.
5. Security
Google Virtualization includes strong security tools like encryption, firewalls, IAM (Identity
and Access Management), and secure VM isolation.
Only authorized users can access the systems, and data is protected from attacks.
6. Cost Effectiveness
You only pay for what you use. Google offers per-second billing, discounts for long-term use, and
cheaper options like preemptible VMs.
This makes it affordable for students, startups, and big companies alike.
7. Monitoring and Management
Google Cloud provides tools like Cloud Console, Cloud Monitoring, and Cloud Logging.
These help users track performance, fix issues, and manage virtual machines, containers, and apps
from one dashboard.
Working Process
Google offers different virtualization options like Virtual Machines (VMs), Containers, and
Serverless.
You choose the best one based on your need — for example, VMs for full control, containers for
lightweight apps, and serverless for automatic execution.
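The choice described above can be written down as a toy decision helper in Python. It encodes only the rule of thumb from the text (VMs for full control, serverless for event-driven work, containers otherwise); real decisions involve many more factors such as cost, latency, and team skills:

```python
def pick_compute_option(need_full_control, runs_constantly, event_driven):
    """Toy helper mirroring the guidance above: map workload traits to a
    Google Cloud compute option. Illustrative only, not official advice."""
    if need_full_control:
        return "Compute Engine (VM)"
    if event_driven and not runs_constantly:
        return "Cloud Functions (serverless)"
    return "GKE / Cloud Run (containers)"

assert pick_compute_option(True, True, False) == "Compute Engine (VM)"
assert pick_compute_option(False, False, True) == "Cloud Functions (serverless)"
assert pick_compute_option(False, True, False) == "GKE / Cloud Run (containers)"
```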
Once you choose the type, you can create a virtual instance using Google Cloud tools like
Compute Engine or GKE.
You select the OS, CPU, RAM, disk size, and other settings. Google then sets up the virtual
environment for you in minutes.
You can now run your apps or services on the virtual instance.
Developers upload their code or applications, and Google helps manage the storage, networking,
and software updates. Tools like Cloud Console or command-line tools (gcloud) help you control
everything easily.
Google automatically scales your resources (adds or removes VMs/containers) depending on the
load. It also suggests better VM types, cost-saving plans, and load balancing to keep apps fast and
efficient.
Google follows strict security policies to protect your data. It uses encryption, firewalls, secure
identity management, and regularly checks for compliance with laws like GDPR and ISO
standards to keep everything safe and legal.
Network Virtualization
Network virtualization is a technology that creates a virtual version of a physical network. It
combines hardware (like switches, routers, and cables) and software into one system, so everything
can be managed using software instead of manually connecting physical devices.
It allows you to create many virtual networks on a single physical network. Each virtual network
can have its own settings, rules, and security, just like a real one. These virtual networks can be
easily created, changed, or deleted without touching the physical wires or devices.
For example, if you have one big server, network virtualization lets you divide it into smaller,
separate virtual networks. These can be used by different apps, teams, or customers — all safely
and independently — even though they’re using the same physical hardware.
Benefits
1. Resource Optimization
Network virtualization allows many virtual networks to run on one physical system. This reduces
the need for buying separate hardware for each network. It makes better use of existing network
resources. This helps save space, power, and money.
2. Cost Reduction
Less hardware is needed because many networks share the same system. This reduces spending on
devices, power, and maintenance. Fewer physical changes also mean lower labor costs. Virtual
networks are a smart way to save money over time.
Key Components
1. Hypervisors
Hypervisors are special software that help create and manage virtual machines (VMs) on a single
computer. They include virtual switches, which let VMs talk to each other and connect to the
internet. This helps set up virtual networks inside a computer or server. Hypervisors also keep each
VM separated so they don’t interfere with each other’s network.
2. VLANs
A VLAN (Virtual Local Area Network) is a way to split one physical network into smaller,
separate virtual networks.
Even if all computers are connected to the same switch, VLAN keeps their communication separate.
Each group or department can have its own VLAN to avoid sharing traffic with others.
This helps protect important data by keeping it away from unauthorized users.
VLAN also improves speed by reducing unnecessary data sharing across the whole network.
It makes it easier for network admins to organize and manage the network.
You can move devices between VLANs through software, without changing any cables.
For example, HR, IT, and guest users in a company can be kept apart using different VLANs.
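The isolation in that example can be modeled in a few lines of Python. This is a toy model of VLAN membership only (a real switch enforces it per frame in hardware); the host names and VLAN numbers are made up:

```python
# Map each switch port (here named by the attached host) to a VLAN ID.
port_vlan = {
    "pc-hr-1": 10,        # HR VLAN
    "pc-hr-2": 10,
    "pc-it-1": 20,        # IT VLAN
    "guest-laptop": 30,   # Guest VLAN
}

def can_communicate(host_a, host_b):
    """Hosts on the same VLAN can talk directly at Layer 2; hosts on
    different VLANs cannot, and would need a router in between."""
    return port_vlan[host_a] == port_vlan[host_b]

assert can_communicate("pc-hr-1", "pc-hr-2")           # same VLAN 10
assert not can_communicate("pc-hr-1", "guest-laptop")  # 10 vs 30: isolated
```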
Types of VLAN
Default VLAN
Every switch has a default VLAN, usually VLAN 1. All ports are part of this VLAN at the
beginning. It allows basic communication between all devices. Network admins usually
change it to improve security.
Data VLAN
A Data VLAN is used to carry normal user data like files, emails, and browsing. It does not
carry voice or system data. This separation helps keep the network neat and organized. It
also makes the network more secure.
Voice VLAN
A Voice VLAN is used to carry voice traffic like phone calls over the internet (VoIP). It
gives higher priority to voice to make calls clear and smooth. This helps avoid delays or
poor call quality. Voice and data are kept separate for better performance.
Management VLAN
The Management VLAN is used by network administrators to manage switches and other
devices. Normal users do not have access to this VLAN. It keeps control traffic away from
user traffic. This makes managing the network safer and more organized.
Native VLAN
The Native VLAN carries untagged traffic, which means data without a VLAN label. It
helps older or simple devices that cannot use VLAN tags. Each trunk port has one native
VLAN. It must be configured carefully to avoid security issues.
Guest VLAN
The Guest VLAN is made for visitors or temporary users. It gives them internet access
without giving access to the main network. This keeps company data safe from outsiders.
Guests stay in their own separate virtual network.
VLAN Architecture
VLAN architecture works by dividing one physical network into smaller virtual networks. Each
VLAN is like a separate group where only selected devices can communicate with each other, even
if they are connected to the same switch. Switches use VLAN IDs (numbers) to identify which data
belongs to which VLAN. Trunk ports are used to carry traffic for multiple VLANs between
switches, while access ports connect end devices to a specific VLAN. VLAN tagging helps
switches know where the data should go. This setup improves security, reduces unnecessary traffic,
and makes network management easier.
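The tag a switch inserts is a small, well-defined structure. The sketch below builds the 4-byte IEEE 802.1Q tag in Python: the 0x8100 TPID followed by a 3-bit priority, a 1-bit DEI flag, and the 12-bit VLAN ID:

```python
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte IEEE 802.1Q tag: TPID 0x8100, then the TCI field
    packing priority (3 bits), DEI (1 bit), and the VLAN ID (12 bits)."""
    if not 0 < vlan_id < 4095:
        raise ValueError("usable VLAN IDs are 1-4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)  # network byte order

tag = dot1q_tag(10)  # tag a frame for VLAN 10
assert tag == bytes([0x81, 0x00, 0x00, 0x0A])
```

The priority bits are how a Voice VLAN gets its traffic treated ahead of ordinary data: for example, `dot1q_tag(40, priority=5)` tags VLAN 40 with a voice-class priority.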
1. Plan your VLANs
Decide how many VLANs you need based on your network's structure or departments.
Choose VLAN IDs or names and plan which devices go into each VLAN.
Make sure to include rules for VLAN size and how they will talk to each other if needed.
2. Configure VLANs
Set up VLANs on switches or routers using the correct software or commands. Create VLANs by
assigning them names and numbers (VLAN IDs). Also set up things like VLAN ports, VLAN
trunks, and access rules.
3. Set up VLAN interfaces
Set up virtual interfaces (SVIs) for each VLAN on Layer 3 devices. These interfaces act like
gateways for devices inside each VLAN. They help in sending traffic between VLANs if needed.
4. Assign devices to VLANs
Put computers, printers, or other devices into the right VLANs. You can do this using software tools
or manually by port configuration. This helps keep devices grouped based on use or department.
5. Verify and test
Check if VLANs are working correctly by testing device communication. Make sure devices in the
same VLAN can talk to each other. Fix any errors if devices can't connect or if traffic isn't going
through.
6. Monitor and maintain
Keep an eye on your VLANs to make sure they work well. You might track VLAN usage,
membership, and traffic. Also check for security risks using tools like ACLs.
7. Document the setup
Write down the VLAN setup for future reference. Include VLAN IDs, names, members, and
interface settings. This helps when solving issues or expanding the network later.
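As a concrete illustration of the configuration steps above, creating a VLAN and assigning an access port and a trunk port on a Cisco IOS switch might look like the following (the VLAN number, name, and interface names are illustrative, and some switch models need extra commands such as setting the trunk encapsulation):

```
Switch(config)# vlan 10
Switch(config-vlan)# name HR
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# exit
Switch(config)# interface FastEthernet0/24
Switch(config-if)# switchport mode trunk
```

Here FastEthernet0/1 is an access port that places one end device into VLAN 10, while FastEthernet0/24 is a trunk that carries tagged traffic for multiple VLANs to another switch.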
ADVANTAGES
· Improved Security – VLANs isolate sensitive data by separating users into different virtual
networks.
· Simplified Management – Devices can be grouped logically, even if they are in different
physical locations.
· Scalability – New devices and departments can be added without major physical changes.
· Cost-Effective – Saves money by reducing the need for extra hardware like switches and routers.
WAN ARCHITECTURE
A WAN (Wide Area Network) connects computers and networks over long distances, such as
between cities or countries. It uses communication links like leased lines, satellites, or the internet
to transfer data. WANs help organizations share resources and information across multiple branch
offices. Devices like routers and switches are used to control the flow of data in the network. WANs
can use both wired and wireless connections and support many users and services like email, file
sharing, and video calls. Since data travels through many networks, security tools like encryption
are used to protect it. WAN architecture can be designed in different ways, such as point-to-point,
hub-and-spoke, or MPLS. These designs help ensure efficient, secure, and reliable communication
across wide areas.
End Devices
End devices are the computers and tools people use, like mobile phones, PCs, workstations, servers,
data centers, or mainframe computers. These are the devices that connect to the network and allow
users to send and receive information.
Customer Premises Equipment (CPE)
CPE means the equipment installed at the customer’s location, like routers or modems, that help
connect to the WAN. Different types of CPE are used depending on business needs to improve
network performance. The WAN service provider may also help manage and maintain this
equipment.
Wireless Routers and Access Points
Modern routers often include built-in wireless features, which let devices connect using Wi-Fi.
Access points help spread wireless signals in big offices, so many devices can connect over a larger
area. These tools are important in WAN systems to link different locations or floors into one big
network.
Network Switches
Switches help manage how data moves across a network. They work at different levels to make sure
devices get the right data quickly and efficiently. Switches are important for smooth and fast
communication within a network.
A LAN is a small network that connects a few devices like a laptop and a mobile phone. It can also
include routers and modems to form a working network at home or in small offices.
Connecting Media
Today, many WANs use centralized management tools. These tools help companies easily set up
and control their WAN using online dashboards. This makes it easier and faster to manage large
networks from one place.
Network Planning
Network planning means understanding what the WAN should do, like how many sites it connects,
how secure it should be, and the type of technology used. It includes deciding which locations need
to be linked and what type of connections (like leased lines or SD-WAN) will be used. The goal is
to match the design with the business needs and goals.
Network Design
After planning, the network design step begins. It considers things like routing, hardware,
performance levels, and service quality. The design should fit the organization's needs while
ensuring the network works smoothly and securely.
Network Implementation
In this step, the actual WAN is set up using hardware and software tools. Devices are installed, and
network connections are made between different offices or locations. This step may include
advanced tools like SDN (Software Defined Networking) or VPNs (Virtual Private Networks).
Network Monitoring and Management
Once the WAN is running, it needs to be watched and managed to keep it working properly. This
means fixing problems, checking for security risks, and tracking how data moves through the
network. Tools are used to find issues, block threats, and make sure everything runs efficiently.
Network Optimization
This step is about improving network performance. Techniques like reducing extra data
(compression), balancing traffic, or using smart tools to improve speed are used. These changes
help make the network faster and more efficient for users.
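The compression technique mentioned above is easy to demonstrate with Python's standard zlib module. Repetitive traffic, which is common on WAN links, compresses very well (the payload below is made up for illustration):

```python
import zlib

# WAN optimizers often compress data before sending it over slow links.
# Repeated request headers are a typical example of redundant traffic.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100
compressed = zlib.compress(payload)

assert zlib.decompress(compressed) == payload  # lossless round trip
assert len(compressed) < len(payload) // 10    # big saving on repetitive data
```

Because the compression is lossless, the receiving side recovers the original bytes exactly; the link simply carries far fewer of them.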
Network Scalability
As a business grows, the WAN should be able to grow too. This step involves adding more users,
devices, or locations as needed. The WAN may also need software or hardware changes to stay up
to date and support business changes.
Advantages
· Covers Large Areas – WANs connect computers over long distances, even across countries or
continents.
· Centralized Data Access – Employees in different locations can access shared data and systems
from a central server.
· Improves Communication – It enables quick communication through emails, video calls, and
messaging across branches.
· Supports Remote Work – People can work from anywhere and still access the company’s
network securely.
· Scalable for Growth – WANs can grow easily by adding new branches, users, or services
without much rework.
TYPES OF WAN
· Circuit-Switched WAN
· Packet-Switched WAN
· Public WAN