
Write short notes on global exchange of cloud resources

The global exchange of cloud resources refers to the ability of cloud computing platforms
to share and deliver computing resources, such as storage, processing power, and
software, across the world via the Internet.

Key Points:

1. Worldwide Access

• Cloud services can be accessed from any location using the Internet.

• This enables global collaboration and remote computing without needing physical infrastructure at every site.

2. Interconnected Data Centers

• Cloud providers such as Amazon, Google, and Microsoft have data centers in multiple countries.

• These centers are interconnected over the Internet to provide scalable and
efficient services.

3. Legal & Security Concerns

• Users in different countries may hesitate to use foreign clouds due to data privacy
laws, trust, or regulatory issues.

• For example, European users may not feel comfortable storing sensitive data on
servers located in the U.S.

4. SLAs (Service Level Agreements)

• To support global exchange, clear agreements between users and providers are
necessary.

• SLAs define service quality, availability, security, and privacy standards.

5. Hybrid Cloud Use

• Many enterprises use hybrid clouds, combining local (private) infrastructure with
global (public) cloud services.

• This allows them to meet international demand while keeping sensitive data
secure.
Benefits of Global Exchange:

• High availability of resources

• Load balancing across time zones

• Disaster recovery through geographic distribution

• Cost savings by using cheaper global resources

Discuss a set of cloud services provided by Microsoft Azure.

1. Compute Services

• Virtual Machines (VMs): Run Windows or Linux machines in the cloud.

• Azure Functions: Serverless compute for running code without managing servers.

• Kubernetes Service (AKS): Manage containerized apps easily.

2. Storage Services

• Blob Storage: Store large unstructured data like images and videos.

• Disk & File Storage: Persistent and shared storage for VMs and apps.

• Archive Storage: Low-cost storage for rarely used data.

3. Networking Services

• Virtual Network (VNet): Secure cloud networking.

• Load Balancer & VPN Gateway: Distribute traffic and connect securely to on-premises systems.

• CDN: Delivers content faster to users worldwide.

4. Database & Analytics

• Azure SQL Database: Fully managed relational database.


• Cosmos DB: Global, scalable NoSQL database.

• Data Factory: For data integration and movement.

5. Security & Identity

• Azure Active Directory: Manage user identity and access.

• Security Center & Key Vault: Monitor security and store sensitive data securely.

6. DevOps & Developer Tools

• Azure DevOps Services: Tools for Continuous Integration / Continuous Deployment (CI/CD).

• DevTest Labs: Create and manage test environments for development.

• Azure Container Instances (ACI): Run containers without needing to manage servers.

7. AI & Machine Learning Services

• Azure Machine Learning: Build, train, and deploy machine learning models.

• Cognitive Services: Pre-built AI APIs for vision, speech, and language tasks.

• Azure Bot Services: Build and deploy intelligent chatbots.

8. Hybrid Cloud Solutions

• Azure Arc: Manage on-prem, multi-cloud, and edge resources with Azure tools.

• Azure Stack: Run Azure services in on-premises environments.

• Site Recovery: Disaster recovery to keep applications running during failures.

9. IoT (Internet of Things) Services

• Azure IoT Hub: Connect and manage IoT devices securely.

• IoT Central: Fully managed IoT app platform with simplified setup.
• Digital Twins: Create digital replicas of real-world systems for simulation and
monitoring.

10. Migration & Modernization Services

• Azure Migrate: Assess and move workloads to Azure easily.

• Database Migration Service: Move databases with minimal downtime.

• App Service Migration Assistant: Shift web and .NET apps to Azure smoothly.

Explain in detail about Implementation Levels of virtualization.

Implementation Levels of Virtualization

Virtualization can be implemented at different levels in a computing system, each with its own role, advantages, and challenges. As per the textbook, there are five main levels of virtualization:

1. Instruction Set Architecture (ISA) Level

• This level allows virtualization of CPU instruction sets.

• It enables applications compiled for one hardware architecture to run on different hardware.

• Achieved using binary translation.

• For example, Intel binaries can run on PowerPC using binary translation techniques.

Advantage: Increases application portability.


Limitation: Can be slow due to translation overhead.

2. Hardware Level

• Virtualization is done using a Virtual Machine Monitor (VMM) or Hypervisor.

• The hypervisor directly manages the hardware like CPU, memory, and I/O devices.

• This is also called bare-metal virtualization.


Example: VMware ESX Server, Xen.
Advantage: Best performance and resource isolation.
Limitation: Requires complex hardware support.

3. Operating System (OS) Level

• The OS kernel is modified to support multiple user spaces.

• Each user space behaves like a separate virtual machine.

• No need for a separate guest OS—this reduces overhead.

Example: Containers like Docker, LXC.


Advantage: Fast and lightweight.
Limitation: All containers must use the same OS kernel.

4. Library Support Level

• Uses API translation to make an application believe it’s running in a different


environment.

• For example, Windows applications can run on Linux using a library that mimics
Windows APIs.

Example: WINE (for running Windows apps on Linux).


Advantage: No need to install full OS.
Limitation: Limited compatibility and performance.

5. Application Level Virtualization

• This level allows individual applications to run in isolated environments, regardless of the underlying system.

• The application is encapsulated along with its dependencies in a virtual environment.

• It avoids software conflicts and simplifies deployment.


Example: Java Virtual Machine (JVM), .NET CLR.
Advantage: Platform independence and ease of deployment.
Limitation: Only the specific application is virtualized—not the whole system.

Explain how Migration of Memory, Files, and Network Resources happen in cloud
computing.

Migration of Memory, Files, and Network Resources in Cloud Computing

In cloud computing, migration means moving a running virtual machine (VM) or its
components like memory, files, or network state from one physical machine to another.
This is essential for load balancing, fault tolerance, and maintenance.

There are three major types of resource migration:

1. Migration of Memory

• Memory migration involves transferring the active memory pages of a VM.

• A live migration technique is used, where memory is copied while the VM is still
running.

• Dirty pages (pages that change during copying) are tracked and re-copied.

• Eventually, the VM is paused briefly to transfer the remaining pages.

Goal: Ensure minimal downtime and service disruption.


Use Case: Moving VMs during hardware maintenance.
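The pre-copy loop described above can be illustrated with a toy simulation. This is a sketch only — the page count, dirty rate, and thresholds are invented parameters, not values from a real hypervisor:

```python
import random

def precopy_migrate(total_pages=1000, dirty_rate=0.05, max_rounds=10, stop_threshold=20):
    """Simulate iterative pre-copy live migration of VM memory pages."""
    remaining = set(range(total_pages))          # pages still to transfer
    for round_no in range(1, max_rounds + 1):
        transferred = len(remaining)
        # While copying, the running VM dirties a fraction of its pages,
        # which must be re-copied in the next round.
        remaining = {p for p in range(total_pages) if random.random() < dirty_rate}
        print(f"Round {round_no}: copied {transferred} pages, "
              f"{len(remaining)} dirtied during copy")
        if len(remaining) <= stop_threshold:
            break
    # Stop-and-copy phase: pause the VM briefly and send the last dirty pages.
    print(f"VM paused; final {len(remaining)} pages sent (downtime phase)")

precopy_migrate()
```

If the dirty rate stays high, the loop never converges and the hypervisor must force the brief stop-and-copy pause — which is exactly why downtime is small but never zero.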

2. Migration of Files

• VM files like disk images, system libraries, and configurations are stored in shared
storage systems.

• Network File System (NFS) or distributed file systems (like Google File System)
are used.

• Instead of physically copying files, the new machine accesses the same shared
storage.
Advantage: Fast and efficient file access.
Challenge: Requires file consistency and synchronization.

3. Migration of Network Resources

• Migrating VMs also requires preserving their network identity (like IP addresses).

• Virtual networking technologies allow the VM to retain its address even when
moved.

• Dynamic routing and network reconfiguration help maintain ongoing sessions.

Goal: Maintain ongoing network sessions without interruption.


Challenge: Handling IP bindings and latency during rerouting.

Explain VM-based intrusion detection systems.

What is Intrusion Detection?

Intrusion Detection Systems (IDS) are used to detect unauthorized access, misuse, or
attacks on computing systems.

Why Use VM-Based IDS in Cloud Computing?

In a traditional physical system, it’s hard to monitor all activities without interfering with
the system itself. But with virtual machines (VMs), it's possible to detect attacks from
outside the VM, without affecting its internal processes.

Definition from the Textbook

"With VM-based intrusion detection, one can build a secure VM to run the intrusion
detection facility (IDS) outside all guest VMs."

How It Works

1. A secure VM (called IDS VM) is created on the same host system.

2. This VM runs an IDS engine (e.g., Snort, Suricata).


3. It monitors:

o Guest OS activities

o Network traffic

o System calls

o Disk operations

4. All other VMs (guest VMs) continue their operations unaware of the monitoring.
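A toy Python sketch of the idea: a monitor outside the guest inspects a stream of guest events against simple signatures. The event format and signature lists here are invented; a real IDS VM would obtain such events through hypervisor introspection:

```python
# Out-of-guest monitoring sketch: the IDS inspects guest events
# (system calls, network flows) without running inside the guest.
SUSPICIOUS_SYSCALLS = {"ptrace", "mprotect_exec", "raw_socket"}

def inspect(event):
    """Flag events that match simple misuse signatures."""
    if event["type"] == "syscall" and event["name"] in SUSPICIOUS_SYSCALLS:
        return f"ALERT: guest {event['vm']} issued {event['name']}"
    if event["type"] == "net" and event["port"] == 4444:
        return f"ALERT: guest {event['vm']} contacted a known backdoor port"
    return None

guest_events = [
    {"type": "syscall", "vm": "guest1", "name": "open"},
    {"type": "syscall", "vm": "guest1", "name": "ptrace"},
    {"type": "net", "vm": "guest2", "port": 4444},
]
for ev in guest_events:
    alert = inspect(ev)
    if alert:
        print(alert)
```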

Key Advantages

• No modification required in guest VMs.

• Can detect:

o Virus activities

o Worm propagation

o Security policy violations

o Suspicious system calls

• High-level isolation enhances security and stealth.

• IDS VM can monitor multiple guest VMs simultaneously.

Illustration (from textbook)

• The IDS engine resides in a separate monitoring VM.

• It has visibility into:

o VM memory states

o I/O operations

o Application behavior
Explain reputation system design options.

Reputation System Design Options

In cloud computing, trust is very important since users do not control the infrastructure.
One way to manage trust is by using a reputation system.

A reputation system helps in measuring and managing the trustworthiness of users, services, and providers based on their past behavior and interactions.

1. What is Reputation?

Reputation is the quality assigned to an entity (user/service/provider) based on:

• Past interactions.

• Feedback from other users.

• Observations over time.

2. Ways to Determine Trust

There are three key approaches:

a. Policies

• A set of rules or conditions to define trust.

• Based on credentials, e.g., certificates or digital signatures.

• Example: A CSP is trusted only if it has ISO certification.

b. Reputation

• Based on history of behavior.

• More trust is given to entities with positive feedback over time.

• Example: A cloud provider with good uptime and secure service earns a high
reputation score.

c. Recommendations

• Trust is based on opinions of others.

• Can be direct (personal experience) or indirect (third-party feedback).


• Example: One CSP may trust another if a mutual user has had a good experience.

3. How Reputation is Used

• In service selection: Choose the provider with the highest score.

• In access control: Only allow users with good reputation to access critical data.

• In trust-based transactions: Reputation helps avoid malicious or unreliable entities.

4. Technical Definition (from notes)

“Trust of a party A to a party B for a service X is the measurable belief of A that B behaves
dependably for a specified period within a specified context.”

This means trust is not random; it is measured, specific, and time-bound.
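One common way to make such a measurable belief concrete is a time-decayed average of past feedback, so recent behavior counts more. A minimal sketch — the decay factor 0.9 is an assumed parameter, not a standard value:

```python
def reputation(feedback, decay=0.9):
    """Time-decayed average of ratings in [0, 1]; newest feedback weighs most.

    feedback: list of ratings ordered oldest -> newest.
    """
    score, weight = 0.0, 0.0
    for age, rating in enumerate(reversed(feedback)):  # age 0 = newest
        w = decay ** age
        score += w * rating
        weight += w
    return score / weight if weight else 0.0

# A provider with a poor past but good recent behavior recovers its score.
print(round(reputation([0.2, 0.3, 0.9, 1.0, 0.95]), 3))
```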

5. Design Goals of a Good Reputation System

• Should be accurate and tamper-proof.

• Should adapt to changing behavior over time.

• Should support decentralized evaluation, especially in distributed cloud systems.

Conclusion

Reputation systems are essential tools in cloud environments to build trust and ensure
secure interactions between unknown entities. They combine past performance, policies,
and recommendations to help in decision making.
What are the various system issues for running a typical parallel program in either
parallel or distributed manner?

To run a parallel program effectively on a parallel or distributed system, several important system issues must be addressed. These are necessary to ensure smooth execution, coordination, and proper utilization of computing resources.

The key issues are:

1. Partitioning

a) Computation Partitioning

• The given program is split into smaller tasks.

• These tasks are distributed to run simultaneously on different workers.

• It requires identifying parallel parts of the program that can run independently.

b) Data Partitioning

• The input or intermediate data is divided into smaller parts.

• Each part is processed by a different worker.

• This allows parallel data processing and improves speed.

2. Mapping

• Assigns tasks or data to specific computing resources.

• The aim is to distribute load evenly and make efficient use of resources.

• Usually handled by resource allocators in the system.

3. Synchronization

• Ensures that workers coordinate properly.

• Prevents race conditions (when two workers access the same data simultaneously).

• Maintains data dependency so that a worker waits for data from another if needed.
4. Communication

• Workers often need to exchange data during execution.

• Communication is mainly required when there is data dependency.

• Efficient communication methods are essential for better performance.

5. Scheduling

• Decides which task runs when and on which worker.

• If there are more tasks than available resources, the scheduler prioritizes them.

• Follows specific scheduling policies to improve system performance.
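Several of these issues — data partitioning, mapping, and synchronization — can be seen in a minimal Python sketch using a process pool; the chunking scheme and worker function are illustrative:

```python
from multiprocessing import Pool

def work(chunk):
    """Each worker processes its data partition independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Data partitioning: split the input into one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        # Mapping + scheduling: the pool assigns chunks to worker processes.
        partials = pool.map(work, chunks)   # blocking call = synchronization point
    print(sum(partials))
```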

Conclusion

In summary, the main system issues for running a parallel program are:

• Partitioning

• Mapping

• Synchronization

• Communication

• Scheduling

All these ensure the program runs efficiently across multiple computing resources, either
in a parallel or distributed environment.
With a neat diagram, explain the OpenStack Nova system architecture.

OpenStack is a cloud operating system that helps you build and manage private or public clouds. It controls compute, storage, and networking resources using a dashboard or API. Nova is the compute service of OpenStack: it creates, runs, and manages virtual machines (VMs) in the cloud.

1. Cloud Controller

• It is the central brain of Nova.

• Manages the overall cloud operation like scheduling, resource allocation, and VM
creation.

2. API Server

• Users interact with Nova using API requests (e.g., to launch or delete a VM).

• Clients like Boto (a Python library) send API calls.

• API server passes these calls to the Cloud Controller.

3. User Manager

• Handles authentication and user management.

• Can connect to systems like LDAP to verify user credentials.

4. S3 Service (Tornado)

• Handles object storage (like images) through HTTP protocol.

• Used to store VM images or snapshots in S3-compatible format.

5. Message Queue (AMQP)

• Acts as a communication channel between services (like the controller, nodes, and storage).

• Ensures asynchronous messaging for task coordination (see the messaging sketch after this list).

6. Storage

• Manages persistent storage for VM disks.

• Uses methods like ATA over Ethernet for backend storage.

7. Nodes

• Actual physical or virtual machines where VMs run.

• Controlled by the cloud controller.

• Uses libvirt/KVM as a hypervisor to manage virtual machines.
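The message queue's role (item 5 above) can be sketched with the pika AMQP client, assuming a RabbitMQ broker on localhost; the queue name and message are illustrative, not Nova's actual topics:

```python
import pika

# Connect to an AMQP broker (assumes RabbitMQ running on localhost).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="compute_tasks")  # illustrative queue name

# A controller-like producer publishes a task asynchronously...
channel.basic_publish(exchange="", routing_key="compute_tasks",
                      body="launch_vm:instance-42")

# ...and a node-like consumer picks it up whenever it is ready.
method, properties, body = channel.basic_get(queue="compute_tasks",
                                             auto_ack=True)
print("node received:", body)
connection.close()
```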


Discuss Programming the Google App Engine.

Google App Engine (GAE) is a Platform as a Service (PaaS) that allows developers to build
and deploy web applications on Google’s cloud infrastructure.

Supported Languages

• Java: Comes with Eclipse plug-in and Google Web Toolkit (GWT).

• Python: Supports frameworks like Django, CherryPy, and Google’s webapp environment.

• Other Languages: JVM-based interpreters support languages like Ruby and JavaScript.

Data Management

• Datastore: A NoSQL schema-less storage system (max 1MB per entity).

o Java: Uses JDO.

o Python: Uses GQL (Google Query Language), similar to SQL.

• Memcache: Caches frequently accessed data for faster performance.

• Blobstore: Used to store large files (up to 2GB).
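For flavor, a minimal sketch of a classic (Python 2-era) GAE handler querying the Datastore with GQL; the Greeting model and its fields are invented for illustration:

```python
# Classic App Engine handler sketch: webapp2 plus a GQL query.
import webapp2
from google.appengine.ext import db

class Greeting(db.Model):
    content = db.StringProperty()
    date = db.DateTimeProperty(auto_now_add=True)

class MainPage(webapp2.RequestHandler):
    def get(self):
        # GQL looks like SQL but queries the schema-less Datastore.
        greetings = db.GqlQuery(
            "SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")
        for g in greetings:
            self.response.write("%s<br>" % g.content)

app = webapp2.WSGIApplication([("/", MainPage)])
```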

Google File System (GFS)

• Used internally for massive data storage.

• Designed for large file sizes (100MB to GBs).

• Uses custom APIs.

• Key features:

o 64MB block size (vs 4KB in traditional systems)

o Optimized for sequential reads/writes

o Chunk replication (stored on 3 servers for reliability)

o Master node manages metadata

o No caching; supports snapshots and record appends

External Connectivity & Communication

• Secure Data Connection (SDC): Allows tunneling from a secure intranet through
firewalls using tunnel servers.

• URL Fetch: Makes external web requests (HTTP/HTTPS).

• Google Data APIs: Lets applications access services like Docs, Maps, YouTube,
etc.

• User Authentication: Integrated Google Accounts login system.

• Email Support: Built-in mail API to send emails.

Task and Resource Management

• Cron Jobs: Automate scheduled tasks (e.g., daily backups).

• Task Queues: Supports background task execution.

• Resource Quotas: Controls resource usage to avoid overuse.

• Free Tier: Limited usage is free, making it ideal for small apps or student projects.
What is Virtualization?

Virtualization is the creation of a virtual (instead of physical) version of something — like hardware, storage, network, or an operating system.

• It allows multiple operating systems and applications to share a single hardware system.

• Virtualization is achieved through software called a Virtual Machine Monitor (VMM) or hypervisor.

• It decouples the hardware from the software.

What is Full Virtualization?

Full virtualization provides a complete simulation of the underlying hardware, so the guest OS does not need to be modified. The guest OS runs as if it were on real physical hardware.

How Full Virtualization Works:

1. Hypervisor/VMM sits between hardware and guest OS.

2. It captures and translates all privileged instructions.

3. Guest OS runs unmodified, thinking it controls the hardware.

4. Can support multiple different OSs simultaneously.

Example Technologies: VMware, Microsoft Virtual PC, VirtualBox.

Advantages of Full Virtualization:

• No need to modify the guest OS.

• Supports different OSs (Windows, Linux, etc.).

• Better isolation between VMs.

• Easier to deploy and migrate.

Challenges:

• Higher performance overhead due to instruction translation.

• Needs hardware support for efficient execution.


How does OS-level virtualization enhance hardware-level virtualization, and what are its benefits
for cloud computing?

1. What is Hardware-Level Virtualization?

• Virtualization happens at the hardware level using a hypervisor (e.g., VMware, Xen).

• It creates separate virtual machines (VMs) with their own OS.

• Each VM behaves like an independent physical machine.

2. What is OS-Level Virtualization?

• Also called container-based virtualization.

• Instead of creating full VMs, it allows multiple isolated user spaces under the same OS
kernel.

• Each user space is called a container (e.g., Docker, LXC).

3. How OS-Level Enhances Hardware-Level:

Feature              | Hardware-Level           | OS-Level (Enhancement)
---------------------|--------------------------|----------------------------------
Resource Efficiency  | Moderate (full VMs)      | More efficient (lightweight)
Boot Time            | Slower (OS boots in VM)  | Instant (no OS boot needed)
Overhead             | Higher                   | Minimal overhead
Number of Instances  | Limited by memory/CPU    | More containers on same hardware

• OS-level virtualization runs on top of hardware-level virtualization.

• Multiple containers can be deployed inside a VM, offering scalability and isolation.

• This layered approach boosts flexibility in cloud platforms.

4. Benefits for Cloud Computing

High Density Deployment

• Run many more containers per server than full VMs.


Faster Scaling and Start-up

• Containers start almost instantly, ideal for auto-scaling.

Lower Resource Usage

• Containers share the host OS, reducing memory and CPU usage.

Better Isolation than Native Apps

• Still maintains process and resource isolation within containers.

Simplified Management

• Containers can be easily deployed, stopped, or migrated across cloud infrastructure.

Conclusion

OS-level virtualization adds a lightweight, fast, and resource-efficient layer over hardware
virtualization. For cloud computing, this means better scalability, lower costs, and faster service
delivery, making it a key enabler of modern cloud-native applications.
What are the various design objectives of Cloud Computing?

Cloud computing was designed with several clear goals in mind. These design objectives ensure
that cloud systems are efficient, scalable, secure, and cost-effective for users and providers.

1. Shifting Computing from Desktops to Data Centers

• Move software, storage, and computing power from personal devices to central cloud data
centers.

• Users access everything via the Internet.

2. Service Provisioning and Cloud Economics

• Provide services on demand under pay-as-you-go models.

• Cloud providers sign SLAs to ensure resource efficiency and cost control.

3. Scalability in Performance

• Cloud systems should easily scale up or down based on workload and user demand.

• More users or data shouldn’t reduce performance.

4. Data Privacy Protection

• Ensure secure storage and transmission of personal and business data.

• Builds trust in cloud adoption.

5. High Quality of Cloud Services (QoS)

• Maintain reliable, fast, and consistent service across users and locations.

• Standardize QoS for interoperability between providers.

6. New Standards and Interfaces

• Avoid vendor lock-in by using open APIs and universal protocols.

• Improve portability and flexibility of applications.


Describe the components of a cloud ecosystem. How do these components support scalability and efficiency?

A cloud ecosystem is a framework of interconnected components that work together to deliver cloud services. These components include users, cloud managers, infrastructure managers, and virtual machines, all supported by tools and interfaces.

Key Components of a Cloud Ecosystem:

1. Users (Cloud Consumers)

• Individuals or organizations who demand computing services like storage, VMs, and
applications.

• They access services via web interfaces or APIs.

2. Cloud Manager

• Manages the provisioning of resources over an IaaS platform.

• Ensures service delivery based on demand and SLAs.

3. Virtual Infrastructure (VI) Manager

• Allocates virtual machines (VMs) across server clusters.

• Handles resource placement, load balancing, and scaling.

4. VM Managers (Hypervisors)

• Manages VMs installed on physical host machines.

• Examples: KVM, Xen, VMware.

• Ensures isolation, performance, and resource control.

5. Cloud Toolkits and Interfaces

• Tools like OpenNebula, Eucalyptus, and vSphere manage cloud infrastructure.

• APIs (e.g., Amazon EC2WS, ElasticHosts REST) allow automation and integration.
How These Components Support Scalability and Efficiency:

Scalability

• VI Managers can quickly provision more VMs as workload increases.

• Cloud Managers ensure seamless distribution across multiple clusters.

• Users can scale resources up/down without manual setup.

Efficiency

• VM Managers optimize resource usage by isolating and managing workloads.

• Dynamic resource allocation reduces waste.

• Automation tools handle deployment, saving time and effort.

Conclusion:

Together, the components of a cloud ecosystem coordinate, manage, and optimize resources,
ensuring that services are scalable, efficient, and reliable for both providers and users.
Discuss the importance of data centre interconnection networks in cloud computing

In cloud computing, data centers are made up of thousands of servers working together. These servers
must communicate efficiently. That’s where interconnection networks play a vital role.

Why Data Center Interconnection Networks Are Important:

1. Enable Fast Communication Between Servers

• Supports point-to-point and group (MPI) communications.

• Essential for file sharing, job scheduling, and data transfers in distributed systems.

2. Support High Performance and Low Latency

• Interconnection networks must offer low delay and high throughput.

• This improves application speed and responsiveness.

3. Ensure Scalability

• As cloud demands grow, the network must handle thousands of servers.

• Scalable designs like fat-tree, BCube, and MDCube support easy expansion.

4. Handle Massive Data Movement

• Cloud apps (like MapReduce) need to move huge volumes of data between servers.

• A strong network backbone ensures this without bottlenecks.

5. Support Load Balancing and Redundancy

• Efficient routing and multiple paths help distribute workload evenly.

• In case of failures, alternate paths keep the system running.

6. Enable Modular Data Center Designs

• Interconnect server containers (modules) efficiently.

• Examples: BCube inside containers, and MDCube across containers.


7. Support Fault Tolerance

• Network must tolerate link or switch failures.

• With hot-swappable components and smart routing, services continue without downtime.

Explain the interconnection of modular data centers with an example.

What is a Modular Data Center?

A modular data center is built using pre-fabricated container units, each containing hundreds or
thousands of servers. These containers are easy to deploy, scale, and relocate, making them
ideal for modern cloud computing.

Why Interconnection is Important:

• Modular containers don’t work in isolation.

• They must be interconnected to form a larger, scalable cloud system.

• Interconnection ensures data flow, load sharing, and communication between containers.

How Modular Data Centers Are Interconnected:

1. Use of Server-Centric Network Inside Container – BCube

• BCube is used to interconnect servers within one container.

• It provides multiple paths between nodes for fault tolerance and high bandwidth.

• Uses kernel module in each server to forward packets.

2. Interconnecting Multiple Containers – MDCube

• MDCube (Modular Data Center Cube) is built by connecting multiple BCube containers.

• It uses the existing BCube switches to link containers in a virtual hypercube structure.
• Supports scalable and fault-tolerant communication between containers.

Example: 2D MDCube

• Imagine 9 BCube containers connected in a 3x3 grid.

• Each container is a BCube, and they are connected using high-speed links.

• Together, they act as one large-scale data center.

• Supports cloud applications that need high-speed data transfer across modules.

Advantages:

• High scalability: Easily add more containers.

• Fault tolerance: Multiple paths prevent failure.

• Cost-effective: Uses existing hardware and layout.

Conclusion:

Interconnecting modular data centers using BCube and MDCube creates a flexible, powerful,
and scalable cloud infrastructure. This design supports modern cloud applications with
efficiency and reliability.

What is an Inter-Module Connection Network? Explain with an example.


What is an Inter-Module Connection Network?

An inter-module connection network connects multiple modular data center units (containers) together
to form a larger cloud infrastructure.

Each container (module) contains hundreds or thousands of servers, usually connected internally using
BCube.

To scale up and build massive data centers, these containers must be interconnected using special high-speed networks.

Purpose of Inter-Module Networking:


• Ensure communication between containers

• Maintain high performance, scalability, and fault tolerance

• Support large cloud applications across multiple data center modules

Example: MDCube (Modular Data Center Cube)

What is MDCube?

• A network topology used to connect multiple BCube-based containers.

• Treats each BCube container as a building block.

• Forms a virtual hypercube across containers.

How It Works:

• Uses high-speed links from existing switches inside BCube containers.

• Interconnects multiple containers (e.g., 9 containers in a 3x3 layout).

• Supports efficient data routing, redundancy, and modular growth.

Visual Example: 2D MDCube

• 9 containers arranged like a grid:

[C00] [C01] [C02]

[C10] [C11] [C12]

[C20] [C21] [C22]

• Each container is internally connected via BCube.

• MDCube links all containers for seamless operation.
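A toy Python sketch of the 2D layout: each container links to every other container in its row and column. This illustrates only the grid structure, not MDCube's actual wiring and routing rules:

```python
# Toy model of a 3x3 container grid in the 2D MDCube style.
containers = [f"C{r}{c}" for r in range(3) for c in range(3)]

def neighbors(name):
    """Containers reachable in one hop: same row or same column."""
    r, c = int(name[1]), int(name[2])
    row = [f"C{r}{j}" for j in range(3) if j != c]
    col = [f"C{i}{c}" for i in range(3) if i != r]
    return row + col

for box in containers:
    print(box, "->", ", ".join(neighbors(box)))
```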

Benefits of Inter-Module Connection Network:

• Scalability: Easy to add more containers

• Fault Tolerance: Alternate paths between modules

• Performance: Supports high-speed communication for cloud apps


Discuss various encryption techniques used for securing data in the cloud.

Encryption Techniques for Securing Cloud Data

In cloud computing, encryption is the main method used to protect sensitive data, both at rest
(stored) and in transit (being transferred). It ensures that unauthorized users cannot access the
data, even if they manage to intercept or steal it.

1. Symmetric and Asymmetric Encryption

• Symmetric Encryption: Same key is used for encryption and decryption.

o Fast and suitable for encrypting large amounts of data.

• Asymmetric Encryption: Uses a pair of keys – public and private.

o Slower but more secure for exchanging keys or small data.
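As a concrete illustration, here is a minimal Python sketch (using the third-party cryptography package) of the usual hybrid pattern: asymmetric RSA protects the small symmetric key, and the symmetric key protects the bulk data. Key sizes and the sample message are arbitrary choices for the example:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric: one shared key, fast for large data.
sym_key = Fernet.generate_key()
token = Fernet(sym_key).encrypt(b"large cloud object ..." * 1000)

# Asymmetric: public key encrypts, private key decrypts; used here only
# to exchange the small symmetric key securely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = private_key.public_key().encrypt(sym_key, oaep)

# The receiver unwraps the key with the private half, then decrypts the data.
data = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(token)
```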

2. Fully Homomorphic Encryption (FHE)

• Allows processing on encrypted data without decrypting it.

• Ideal for cloud because data never needs to be exposed during computation.

• Problem: Very slow and inefficient in real-time usage.

o For example, early FHE systems took minutes for simple operations.

• Practical use is still limited due to high overhead.

3. Order-Preserving Encryption (OPE)

• Used when data needs to be searched or sorted without decrypting.

• Maintains the order of values even after encryption.

o Example: If A < B before encryption, then Enc(A) < Enc(B) too.

• Helps in range queries on encrypted databases.

• Downside: Less secure than traditional encryption.
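A toy sketch of the order-preserving property: each plaintext maps to a strictly increasing random value, so comparisons still work on ciphertexts. Real OPE schemes are far more sophisticated; the seed here merely plays the role of the secret key:

```python
import random

def make_ope(domain_size, seed=42):
    """Toy order-preserving map: plaintext i -> i-th strictly increasing value."""
    rng = random.Random(seed)                 # the seed acts as the secret key
    table, total = [], 0
    for _ in range(domain_size):
        total += rng.randint(1, 100)          # random positive gap keeps order
        table.append(total)
    return lambda x: table[x]

enc = make_ope(256)
a, b = 17, 130
assert a < b and enc(a) < enc(b)              # order survives encryption
print(enc(a), enc(b))
```

The same property that makes range queries possible is also why OPE leaks ordering information, hence the weaker security noted above.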

4. Searchable Symmetric Encryption (SSE)

• Allows searching encrypted databases without revealing the data.


• The client keeps the encryption key and sends encrypted queries to the cloud.

• The server returns encrypted results, which are then decrypted by the client.

• SSE supports:

o Single or multi-keyword search

o Fuzzy search

o Ranked and authorized search
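A toy sketch of the single-keyword case: the client builds an index from keyword HMACs to encrypted document ids, so the server can match queries without seeing keywords or contents. Real SSE schemes also address access-pattern leakage, which this sketch does not:

```python
import hmac, hashlib
from cryptography.fernet import Fernet

key_search = b"client-secret-search-key"      # illustrative client-held keys
f = Fernet(Fernet.generate_key())

def trapdoor(word):
    """Deterministic keyword tag the server can match but not invert."""
    return hmac.new(key_search, word.encode(), hashlib.sha256).hexdigest()

docs = {"doc1": "cloud security basics", "doc2": "vm migration in cloud"}
index = {}                       # built by the client, uploaded to the server
for doc_id, text in docs.items():
    for word in set(text.split()):
        index.setdefault(trapdoor(word), []).append(f.encrypt(doc_id.encode()))

# Query: the client sends only the trapdoor; the server returns encrypted matches.
matches = index.get(trapdoor("cloud"), [])
print([f.decrypt(m).decode() for m in matches])   # client decrypts locally
```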

5. AWS Key Management Service (KMS)

• Example of a real-world encryption solution.

• Helps users to:

o Create and manage encryption keys

o Encrypt data stored in AWS services like S3, RDS, EBS, etc.

6. Encryption of Data in Transit

• Data sent over public networks must be encrypted using protocols like TLS/SSL.

• Prevents attacks such as man-in-the-middle and sniffing.

7. Protection from Insider Threats

• Data stored in private clouds can still be accessed by insiders.

• Encryption with strict key access control helps limit such risks.

Conclusion

Encryption is essential for cloud security. Techniques like FHE, OPE, SSE, and real-world tools like
AWS KMS ensure that data remains protected — even while being stored, processed, or
transmitted in the cloud.
Explain the key features of cloud and grid computing platforms

Introduction:

Cloud and Grid computing platforms are used to perform large-scale computing tasks by
connecting multiple resources.
They have different goals but share some key features like resource sharing, scalability, and
distributed computing.

Key Features of Cloud and Grid Platforms:

1. Computing Platforms (Physical/Virtual)

• Cloud uses virtual machines (VMs) to provide isolated environments.

• Grid uses physical resources distributed across various organizations.

Example: Cloud VMs in AWS or Azure vs Grid clusters in scientific labs.

2. Massive Data Storage

• Both support large-scale distributed storage systems.

• Cloud uses Blob storage, S3, HDFS, etc.

• Grids use shared file systems for storing scientific data.

Used for: Big data analytics, backups, multimedia storage.

3. Massive Data Processing Models

• Cloud offers models like MapReduce, Dryad, Twister for parallel data processing.

• Grid relies on batch processing systems.

MapReduce is widely used in cloud platforms like Hadoop.
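The model itself fits in a few lines; here is a single-process word-count sketch (in Hadoop, the same two phases run on many TaskTrackers in parallel):

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk of input.
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

chunks = ["cloud grid cloud", "grid cloud platform"]   # one chunk per worker
print(reduce_phase(chain.from_iterable(map(map_phase, chunks))))
# {'cloud': 3, 'grid': 2, 'platform': 1}
```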

4. Programming Support

• Cloud provides APIs for Java, Python, PHP, Ruby, .NET etc.

• Grids support MPI, OpenMP, and scripting for job submission.


Cloud APIs are easier to use and more flexible.

5. Workflow Management

• Both use workflow tools to manage complex tasks.

• Cloud supports cron jobs, task queues.

• Grid uses tools like Pegasus, Kepler, Taverna.

Used in scientific experiments, automation, simulations.

6. Security & User Management

• Both offer authentication, authorization, encryption.

• Cloud uses SSL, HTTPS, IAM (Identity and Access Management).

• Grid supports certificates and role-based access.

7. Fault Tolerance

• Cloud has automatic recovery (like VM migration).

• Grid usually requires manual reconfiguration.

Clouds are more fault-resilient and self-healing.

8. Elasticity & Scalability

• Cloud: Can scale up/down automatically based on demand.

• Grid: Scaling is limited to available resources in the grid.

Elastic scaling is a key feature of cloud computing.

Conclusion:

Cloud and Grid platforms both aim to provide high-performance computing, but:

• Cloud is more user-friendly, scalable, and automated.

• Grid is best for collaborative research and resource sharing.


Architecture of MapReduce in Hadoop

The topmost layer of Hadoop is the MapReduce engine that manages the data flow and control flow of MapReduce jobs over distributed computing systems. Figure 6.11 shows the MapReduce engine architecture cooperating with HDFS.

➢ Similar to HDFS, the MapReduce engine also has a master/slave architecture consisting of a single JobTracker as the master and a number of TaskTrackers as the slaves (workers).
➢ The JobTracker manages the MapReduce job over a cluster and is responsible for monitoring jobs and assigning tasks to TaskTrackers.
➢ The TaskTracker manages the execution of the map and/or reduce tasks on a single computation node in the cluster.
➢ Each TaskTracker manages multiple execution slots based on CPU threads (M * N slots).
➢ Each data block is processed by one map task, ensuring a direct one-to-one mapping between map tasks and data blocks.

Running a Job in Hadoop

➢ Job Execution Components: A user node, a JobTracker, and multiple TaskTrackers coordinate a MapReduce job.
➢ Job Submission: The user node requests a job ID, prepares input file splits, and submits the job to the JobTracker.
➢ Task Assignment: The JobTracker assigns map tasks based on data locality and reduce tasks without locality constraints.
➢ Task Execution: The TaskTracker runs tasks by copying the job's JAR file and executing it in a Java Virtual Machine (JVM).
➢ Task Monitoring: Heartbeat messages from TaskTrackers inform the JobTracker about their status and readiness for new tasks.

Dryad and DryadLINQ from Microsoft

Two runtime software environments are reviewed in this section for parallel and distributed computing, namely Dryad and DryadLINQ, both developed by Microsoft.

Dryad

➢ Flexibility Over MapReduce: Dryad allows users to define custom data flows using directed acyclic graphs (DAGs), unlike the fixed structure of MapReduce.
➢ DAG-Based Execution: Vertices represent computation engines, while edges are communication channels. The job manager assigns tasks and monitors execution.
➢ Job Manager & Name Server: The job manager builds, deploys, and schedules jobs, while the name server provides information about available computing resources.
➢ 2D Pipe System: Unlike traditional UNIX pipes (1D), Dryad's 2D distributed pipes enable large-scale parallel processing across multiple nodes.
➢ Fault Tolerance: Handles vertex failures by reassigning jobs and channel failures by recreating communication links.
➢ Broad Applicability: Supports scripting languages, MapReduce programming, and SQL integration, making it a versatile framework.
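The DAG idea can be sketched in a few lines of Python: vertices are small functions, edges are the data handed between them, and a toy "job manager" runs vertices in dependency order. Vertex names and data are invented; real Dryad distributes vertices across cluster nodes and streams data through its 2D pipes:

```python
from graphlib import TopologicalSorter

# Toy Dryad-style job: vertices are computations, edges are data channels.
dag = {"grep": {"read"}, "count": {"grep"}}   # vertex -> set of input vertices

def run_vertex(name, inputs):
    if name == "read":
        return ["cloud a", "grid b", "cloud c"]          # source vertex
    if name == "grep":
        return [l for l in inputs["read"] if "cloud" in l]
    if name == "count":
        return len(inputs["grep"])

results = {}
for v in TopologicalSorter(dag).static_order():          # read, grep, count
    preds = dag.get(v, set())
    results[v] = run_vertex(v, {p: results[p] for p in preds})
print(results["count"])   # -> 2
```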
4. Searchable Symmetric Encryption (SSE) – Protects database queries from explicit data leakage while enabling single-keyword, multi-keyword, ranked, and Boolean searches.

5. Private Cloud Risks – While firewalls protect against outsiders, insider threats remain a concern. Access restrictions and monitoring help mitigate risks.

By utilizing OPE and SSE, encrypted databases can support efficient searches while enhancing data security. However, insider threats and query pattern exposure require additional safeguards.

SECURITY OF DATABASE SERVICES

DBaaS allows cloud users to store and manage their data, but security risks include data integrity, confidentiality, and availability concerns.

Major Security Threats:

1. Authorization & Authentication Issues – Weak access controls can lead to data leaks or unauthorized modifications.

2. Encryption & Key Management – Poor encryption handling exposes data to external attacks.

3. Insider Threats – Superusers with excessive privileges may misuse confidential data.

4. External Attacks – Methods like spoofing, sniffing, man-in-the-middle, and DoS attacks can compromise cloud databases.

5. Multi-Tenancy Risks – Shared environments increase data recovery vulnerabilities if proper sanitation isn't enforced.

6. Data Transit Risks – Without encryption, data transfer over public networks is vulnerable.

7. Data Provenance Challenges – Tracking data origin and movement requires complex metadata analysis.

8. Lack of Transparency – Users may not know where their data is stored, complicating security assessments.

9. Replication & Consistency Issues – Synchronizing data across multiple cloud locations is difficult.

10. Auditing & Compliance Risks – Third-party audits can violate privacy laws if data is stored in restricted locations.

Mitigation Strategies:

• Implement strong authentication and authorization protocols.

• Use robust encryption for stored and transmitted data.

• Restrict superuser access and enforce logging and monitoring.

• Conduct regular audits while ensuring legal compliance.

• Optimize data replication and consistency mechanisms for reliability.

Cloud databases enhance efficiency, but proper security measures are essential to prevent unauthorized access, data breaches, and operational failures.

OPERATING SYSTEM SECURITY

An OS manages hardware resources while protecting applications from malicious attacks like unauthorized access, code tampering, and spoofing. Security policies include access control, authentication, and cryptographic protection.

Key Security Concerns:

1. Mandatory vs. Discretionary Security – Mandatory policies enforce strict security, while discretionary policies leave security decisions to users, increasing risks.

2. Trusted Paths & Applications – Trusted software needs secure communication mechanisms to prevent impersonation.

3. OS Vulnerabilities – Commodity OSs often lack multi-layered security, making them susceptible to privilege escalation.

4. Malicious Software Threats – Java Security Manager uses sandboxing but cannot prevent all security bypasses.

5. Closed vs. Open Systems – ATMs, smartphones, and game consoles have embedded cryptographic keys for stronger authentication.

6. Weak Isolation Between Applications – A compromised app can expose the entire system.

7. Application-Specific Security – Certain applications, like e-commerce, require extra protection like digital signatures.

8. Challenges in Distributed Computing – OS security gaps affect application authentication and secure user interactions.

A secure OS is crucial, but additional security measures like encryption, auditing, and authentication are necessary for comprehensive protection.

VIRTUAL MACHINE SECURITY

Virtual Machine (VM) security primarily relies on hypervisors for isolation and access control, reducing risks compared to traditional OS security.

Key Aspects of VM Security:

1. Hypervisor-Based Security – Ensures memory, disk, and network isolation for VMs.

2. Trusted Computing Base (TCB) – A compromised TCB affects entire system security.

3. VM State Management – Hypervisors can save, restore, clone, and encrypt VM states.

4. Attack Prevention – Dedicated security VMs and intrusion detection systems enhance protection.

5. Inter-VM Communication – Faster than physical machines, enabling secure file migration.

Security Threats:

Hypervisor-Based Threats:

• Resource starvation & DoS due to misconfigured limits or rogue VMs.

• VM side-channel attacks exploiting weak inter-VM isolation.

• Buffer overflow vulnerabilities in hypervisor-managed processes.

VM-Based Threats:

• Deployment of rogue or insecure VMs due to weak administrative controls.

• Tampered VM images from insecure repositories lacking integrity checks.

Mitigation Strategies:

• Enforce strong access controls and isolate inter-VM traffic.

• Use digitally signed VM images to ensure integrity.

• Implement intrusion detection & prevention systems for proactive security.

Virtualization enhances security but requires proper configurations, access control, and monitoring to prevent exploits.


 Instead of moving data around, the cloud sends programs to the data.
 This saves time and improves internet speed.
 Virtualization helps use resources better and cuts costs.
 Companies don’t need to set up or manage servers themselves.
 Cloud provides hardware, software, and data only when needed.
 The goal is to replace desktop computing with online services.
 Cloud can run many different apps at the same time easily.

4.1.1.1 Centralized versus Distributed Computing

 Cloud computing is distributed using virtual machines in big data centers.
 Public and private clouds work over the Internet.
 Big companies like Amazon, Google, and Microsoft build distributed cloud systems for speed, reliability, and legal reasons.
 Private clouds (within companies) can connect to public clouds to get more resources.
 People may worry about using clouds in other countries unless strong agreements (SLAs) are made.

4.1.1.2 Public Clouds

 A public cloud is available to anyone who pays for it.
 It is run by cloud providers (like Google, Amazon, Microsoft, IBM, Salesforce).
 Users subscribe to use services like storage or computing power.
 Public clouds let users create and manage virtual machines online.
 Services are charged based on usage (pay-as-you-go).

4.1.1.3 Private Clouds

 A private cloud is built and used within one organization (not public).
 It is owned and managed by the company itself.
 Only the organization and its partners can access it — not the general public.
 It does not sell services over the Internet like public clouds do.
 Private clouds give flexible, secure, and customized services to internal users.
 They allow the company to keep more control over data and systems.
 Private clouds may affect cloud standard rules, but offer better customization for the company.

4.1.1.4 Hybrid Clouds

 A hybrid cloud combines both public and private clouds.
 It allows a company to use its private cloud but also get extra power from a public cloud when needed.
 Example: IBM’s RC2 connects private cloud systems across different countries.
 Hybrid clouds give access to the company, partners, and some third parties.
 Public clouds offer flexibility, low cost, and standard services.
 Private clouds give more security, control, and customization.
 Hybrid clouds balance the two, making compromises between sharing and privacy.

4.1.1.5 Data-Center Networking Structure

 The core of a cloud is a server cluster made of many virtual machines (VMs).
 Compute nodes do the work; control nodes manage and monitor cloud tasks.
 Gateway nodes connect users to the cloud and handle security.
 Clouds create virtual clusters for users and assign jobs to them.
 Unlike old systems, clouds handle changing workloads by adding or removing resources as needed.
 Private clouds can support this flexibility if well designed.
 Private clouds balance workloads within the company’s network for better efficiency.
 Private clouds offer better security, privacy, and testing environments.
 Public clouds help avoid big upfront costs in hardware, software, and staff.
 Companies often start by virtualizing their systems to reduce operating costs.
 Big companies (like Microsoft, Oracle, SAP) use policy-based IT management to improve services.
 IT as a service boosts flexibility and avoids replacing servers often.
 This leads to better IT efficiency and agility for companies.

4.1.3 Infrastructure-as-a-Service (IaaS)

 IaaS means renting IT infrastructure like servers, storage, and networks over the Internet.
 It provides virtual machines, storage, networks, and firewalls.
 Users can choose their own operating system and software.
 Users don’t manage physical hardware, only virtual resources.
 It's a pay-as-you-go model – no need to buy expensive equipment.
 IaaS is scalable – add or remove resources anytime.
 Great for startups, developers, and large businesses.
 Helps with testing, hosting apps, data backup, and disaster recovery.
 Examples of IaaS providers:

o Amazon EC2, S3
o Microsoft Azure VMs
o Google Compute Engine
o IBM Cloud
o Oracle Cloud Infrastructure (OCI)

 Saves money and time by avoiding hardware setup.
 Ideal for companies needing flexible and powerful IT resources.

4.1.4 Platform-as-a-Service (PaaS)

 PaaS provides a platform to build, test, and deploy applications.
 It includes tools, libraries, databases, and runtime environments.
 Developers don’t manage servers, storage, or infrastructure.
 Focus is only on writing and running code.
 It handles app hosting, scaling, updates, and security automatically.
 Faster development because everything is ready to use.
 Great for developers and software teams.
 Useful for web apps, mobile apps, and APIs.
 Examples of PaaS providers:

 Google App Engine
 Microsoft Azure App Service
 Heroku
 IBM Cloud Foundry
 Red Hat OpenShift

 Pay only for what you use – no upfront setup or hardware costs.
 Helps teams collaborate easily and launch apps faster.

4.1.5 Software-as-a-Service (SaaS)

 SaaS means using software over the internet.
 No need to install or update anything.
 Accessible from any device with internet.
 Software is managed by the provider, not the user.
 Users pay monthly or yearly (subscription model).
 No hardware or server needed by the user.
 Used for email, file sharing, CRM, video calls, etc.
 Data is stored in the cloud by the provider.
 Saves time, money, and effort.
 Great for businesses and individuals.
 Examples of SaaS:

 Gmail
 Google Docs
 Microsoft 365
 Salesforce
 Zoom
BCS601

Model Question Paper-1 with effect from 2022 (CBCS Scheme)



Sixth Semester B.E. Degree Examination


Cloud Computing

TIME: 03 Hours Max. Marks: 100

Note: 01. Answer any FIVE full questions, choosing at least ONE question from each MODULE.

Module-1

Q.01 a Discuss in detail about distributed system models. [L2, 10 Marks]

Distributed System Models


Distributed system models help in designing, analyzing, and understanding the
behavior of distributed systems. They are categorized into Physical,
Architectural, and Fundamental models.

1. Physical Model
Represents the hardware layout of the system.
• Nodes: Devices (servers, PCs) that process and communicate.
• Links: Communication channels (wired/wireless) like point-to-point or
broadcast.
• Middleware: Software that enables communication, fault tolerance, synchronization.
• Network Topology: Structure of node connections (bus, star, ring, mesh).
• Protocols: TCP, UDP, MQTT used for secure and efficient data exchange.

2. Architectural Model
Defines the system's organization and interaction patterns.
• Client-Server Model: Centralized server responds to client requests (e.g.,
web services).
• Peer-to-Peer (P2P): All nodes are equal and share services (e.g.,
BitTorrent).
• Layered Model: Organized into layers for modular design and abstraction.
• Microservices Model: Small, independent services performing specific
functions, enhancing scalability.
3. Fundamental Model
Covers key concepts and formal behaviors.
• Interaction Model:
o Message Passing: Synchronous/asynchronous communication.
o Publish/Subscribe: Topics-based messaging.
• Failure Model:
o Types: Crash, omission, timing, Byzantine failures.
o Handling: Replication, fault detection, recovery methods.
• Security Model:
o Authentication: Passwords, keys, multi-factor verification.
o Encryption: Protects data confidentiality.
o Data Integrity: Hashing and digital signatures to prevent tampering.

b Explain the basic Cluster Architecture with a neat diagram. [L2, 10 Marks]
Cluster computing is a technique where multiple interconnected computers
(nodes) work together as a single system to execute tasks, process data, or run
applications. It provides users with a transparent system that appears as one
virtual machine.

Features of Cluster Computing


1. Transparency: Users see a single virtual system instead of multiple nodes.
2. Reliability: Failure of one node doesn’t affect the entire system.
3. Scalability: Nodes can be added or removed easily.
4. Performance: Parallel task execution improves overall speed.
5. Load Balancing: Tasks are distributed across nodes to prevent overload.

Cluster Computing Architecture


1. Node (Computer)
o Each node has its own processor, memory, and OS.
o Nodes are connected via a high-speed network.
2. Head Node (Master Node)
o Manages the cluster operations.
o Distributes tasks to other nodes (slave nodes).
o Collects results and monitors performance.
3. Slave Nodes (Worker Nodes)
o Execute the assigned tasks.
o Report back to the head node.
o Can perform computations in parallel.
4. Cluster Middleware
o Software that handles job scheduling, communication, and
resource management.
o Examples: MPI (Message Passing Interface), OpenMPI, etc.
5. Interconnect Network
o Ensures high-speed data transfer between nodes.
o Uses Ethernet, Infiniband, or other low-latency networks.
6. Storage System
o Shared or distributed storage system (like NFS or parallel file
systems).
o All nodes may access the same data.
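As an illustration of how cluster middleware like MPI coordinates nodes, here is a minimal mpi4py sketch (assuming mpi4py is installed and the script is launched with mpiexec, e.g. mpiexec -n 4 python demo.py):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this node's id within the cluster job
size = comm.Get_size()        # total number of nodes

# Every node computes a partial result in parallel...
partial = sum(range(rank, 1000, size))

# ...and the head node (rank 0) gathers and combines them.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("combined result:", total)
```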

OR
Q.02 a Write short notes on Peer-to-Peer network families. [L2, 10 Marks]
Definition
• P2P architecture is a distributed model where each node (peer) acts as
both client and server, sharing resources without a central authority.

2. Characteristics
• Decentralization: No central server; peers communicate directly.
• Scalability: Easily grows to support more users.
• Fault Tolerance: Network survives even if some nodes fail.
• Resource Sharing: Peers contribute bandwidth, storage, and data.
• Autonomy: Each peer manages its own data and functions.

3. Types of P2P Networks


• Pure P2P: Fully decentralized (e.g., BitTorrent).
• Hybrid P2P: Uses central servers or super peers (e.g., Skype).
• Overlay P2P: Virtual network over physical internet (e.g., Chord).
• Structured P2P: Organized topology with routing rules (e.g., Pastry).
• Unstructured P2P: Random topology, no fixed structure (e.g., Gnutella).
4. Components
• Peer Nodes: Active devices in the network.
• Overlay Network: Virtual layer connecting peers.
• Indexing Mechanisms: Help locate shared resources.
• Bootstrapping Mechanisms: Enable new peers to join the network.

5. Bootstrapping in P2P
• Helps new peers discover others and connect.
• Can use centralized servers, peer exchange, or DHTs.

6. Data Management
• Storage: Distributed across peers.
• Retrieval: Uses search algorithms.
• Replication: Increases availability.
• Consistency: Ensures all replicas are up to date.

7. Routing Algorithms
• Flooding: Sends to all neighbors — high traffic.
• Random Walk: Selects random paths — less overhead.
• DHTs: Efficient lookups via hash tables (e.g., Kademlia).
• Small-World Routing: Uses short paths and local/global links.
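A toy sketch of the DHT idea: peers and keys are hashed onto one ring, and a key is stored on the first peer clockwise from its hash — the core of Chord-style lookup. Peer and file names here are invented:

```python
import hashlib
from bisect import bisect

def h(name):
    """Hash a peer or key onto a small identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % 2**16

peers = sorted(["peerA", "peerB", "peerC", "peerD"], key=h)
ring = [h(p) for p in peers]

def lookup(key):
    idx = bisect(ring, h(key)) % len(peers)   # first peer clockwise; wraps around
    return peers[idx]

for k in ["song.mp3", "notes.pdf", "movie.avi"]:
    print(k, "->", lookup(k))
```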

8. Advantages
• No central point of failure
• Efficient resource utilization
• Cost-effective
• High availability due to replication

9. Challenges
• Difficult to scale with efficiency
• Security risks from malicious nodes
• Inconsistent content quality
• Complex consistency and data management

b Discuss system attacks and threats to cyberspace resulting in 4 types of losses. [L2, 10 Marks]
1. Common System Attacks:
1. Malware Attacks:
o Includes viruses, worms, ransomware, spyware, and Trojans.
o Designed to steal, encrypt, or delete data or disrupt operations.
2. Phishing:
o Deceptive messages to trick users into giving up sensitive info like
passwords or credit card numbers.
3. Denial of Service (DoS/DDoS):
o Overloads networks or servers, making them unavailable to users.
4. Man-in-the-Middle (MitM):
o Attackers intercept communication between two parties to steal or
alter data.
5. SQL Injection:
o Injects malicious SQL queries into input fields to access or
manipulate databases.

Four Types of Losses Due to Cyber Attacks


1. Financial Loss
• (i) Cyber attacks like ransomware or online fraud can lead to direct theft of
money or demand for large payments.
• (ii) Organizations incur heavy costs for legal penalties, data recovery, and
strengthening future security.

2. Data Loss
• (i) Attacks such as malware, hacking, or unauthorized access can result in
loss or theft of sensitive data.
• (ii) Loss of intellectual property, customer information, or confidential
business records affects compliance and trust.

3. Reputational Loss
• (i) A successful cyber attack damages an organization’s public image and
brand value.
• (ii) Customers may lose confidence, leading to a decline in user base and
revenue.

4. Operational Loss
• (i) Cyber threats like Denial of Service (DoS) can bring down servers,
disrupting business operations.
• (ii) Delays in service delivery and system downtime reduce productivity
and efficiency.

Module-2
Q.03 a Explain in detail about Implementation Levels of virtualization.
1. Instruction Set Architecture (ISA) Level Virtualization
1. Emulates a guest ISA on a host with a different ISA.
2. Allows execution of legacy or cross-platform binary code.
3. Achieved through code interpretation or dynamic binary translation.
4. Very flexible but has low performance due to instruction overhead.
5. Adds a software translation layer between compiler and processor.

2. Hardware Abstraction Level Virtualization


1. Virtualizes hardware directly using a hypervisor (e.g., Xen, VMware).
2. Provides virtual CPUs, memory, and I/O to guest OSs.
3. High performance due to close interaction with physical hardware.
4. Complex to implement and manage.
5. Enables running multiple OSs on the same physical machine.

3. Operating System Level Virtualization


1. Provides isolated user-space instances (containers).
2. Shares a single OS kernel across all containers.
3. Efficient resource use and fast startup.
4. Limited flexibility – all containers must use the same OS.
5. Suitable for lightweight server consolidation.

4. Library Support Level Virtualization
1. Virtualizes the API layer between apps and OS.
2. Allows apps to run in different environments (e.g., WINE for Windows
apps on UNIX).
3. Less overhead than full system virtualization.
4. Not all applications may work correctly.
5. Useful for GPU virtualization (e.g., vCUDA).

5. User/Application-Level Virtualization
1. Virtualizes individual applications as isolated units.
2. Examples include the JVM (for Java applications) and .NET CLR (for .NET applications).
3. Easy to deploy and portable across platforms.
4. Limited isolation compared to lower-level virtualization.
5. Used in sandboxing, application streaming, and secure app deployment.
b Explain how Migration of Memory, Files, and Network Resources happens in
cloud computing.

1. Memory Migration
• Moves the VM’s memory state from source to destination host.
• Internet Suspend-Resume (ISR) technique uses temporal locality to avoid
redundant transfers.
• Tree-based file structures allow only changed files to be sent.
• ISR results in high downtime, suitable for non-live migrations.
• Efficient memory handling is essential due to large size (MBs to GBs) and
need for speed.

2. File System Migration


• VMs need consistent, location-independent file systems on all hosts.
• Using a virtual disk per VM is simple but not scalable.
• Global/distributed file systems remove need for full file copying.
• ISR copies only the required VM files into the local file system.
• Smart copying and proactive transfer reduce data by using spatial
locality and prediction.

3. Network Migration
• Migrated VMs must retain all open network connections.
• VMs use virtual IP/MAC addresses, independent of host hardware.
• ARP replies notify the network of new locations (on LAN).
• Live migration enables near-zero downtime, using iterative precopy or postcopy techniques.
• Precopy keeps the VM running during transfer but may load the network; postcopy transfers each page only once but slows the VM until its memory arrives.
4. Live Migration Using Xen
• Xen supports live VM migration with minimal service interruption.
• Dom0 manages migration, using send/receive and shadow page tables.
• RDMA enables fast transfer by bypassing TCP/IP stack and CPU.
• Memory compression is used to reduce data size and overhead.
• Migration daemons track and send modified pages based on dirty
bitmaps.
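
To make the precopy idea concrete, here is a minimal simulation of its control loop (illustrative Python only, not Xen's actual migration code): memory is shipped in rounds while the VM keeps running, and only the final small dirty set is sent during the brief stop-and-copy pause.

import random

def live_migrate_precopy(total_pages=1024, max_rounds=5, threshold=16):
    dirty = set(range(total_pages))  # round 0: every page must be sent
    for rnd in range(max_rounds):
        print("round %d: sending %d pages while the VM runs" % (rnd, len(dirty)))
        # simulate the guest re-dirtying a shrinking fraction of pages
        dirty = set(random.sample(range(total_pages), max(len(dirty) // 8, 1)))
        if len(dirty) <= threshold:
            break  # writable working set is now small enough
    print("stop-and-copy: pause VM, send last %d pages + CPU state" % len(dirty))

live_migrate_precopy()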

OR
Q.04 a Explain VM-based intrusion detection system.

🔐 Importance of Intrusion Detection (ID) in Cloud


• Detects and responds to attacks on systems and data.
• Required by many security standards and regulations.
• Must be integrated into any cloud deployment strategy.

☁️ Intrusion Detection by Cloud Service Model


1. Software as a Service (SaaS)
• IDS responsibility: Provider
• Customer role: Limited, may access logs for monitoring.
2. Platform as a Service (PaaS)
• IDS responsibility: Provider
• Customer role: Can configure app logs for external monitoring.
3. Infrastructure as a Service (IaaS)
• IDS responsibility: Shared
• Customer has flexibility to deploy IDS within VMs, networks, etc.

🧭 Where to Perform Intrusion Detection in IaaS


1. Within Virtual Machines (VMs)
o Customer-managed HIDS
o Detects activity inside VM.
2. At Hypervisor or Host Level
o Provider-managed HIDS
o Monitors VM-to-VM traffic and host behavior.
3. In Virtual Network
o IDS monitors intra-VM and VM-host traffic (stays within
hypervisor).
4. In Traditional Network
o Provider-managed NIDS
o Detects traffic entering or leaving the host system.

👥 Responsibility Clarification
• Providers:
o Deploy and manage IDS (host, hypervisor, virtual network).
o Must notify customers (via SLA) of any relevant attacks.
• Customers:
o Deploy HIDS inside VMs.
o Integrate IDS into their monitoring systems.
o Must negotiate visibility/alerts via contracts.

🛡️ Types of Intrusion Detection Systems


1. Host-Based IDS (HIDS)
• Runs on individual VMs (by customer) or host (by provider).
• Monitors system activities and logs.
• Challenge: Limited provider transparency for hypervisor HIDS.
2. Network-Based IDS (NIDS)
• Monitors traditional network traffic.
• Limitations:
o Cannot inspect virtual network traffic.
o Ineffective against encrypted traffic.
3. Hypervisor-Based IDS (via VM Introspection)
• Monitors all inter-VM and VM-hypervisor communications.
• Advantage: Full visibility.
• Limitation: Complex, emerging technology; provider-managed.

b Write steps for Creating a Virtual Machine: Configure and deploy a virtual
machine with specific CPU and memory requirements in Google Cloud.

[or]

Write 5 commands and explain Exploring AWS Cloud Shell.


Step 1: Sign in to Google Cloud Console
1. Go to Google Cloud Console: https://console.cloud.google.com/

2. Log in with your Google Account.


3. Select or create a new project from the top navigation bar.

Step 2: Open Compute Engine


1. In the left sidebar, navigate to "Compute Engine" → Click "VM
instances".

2. Click "Create Instance".

Step 3: Configure the Virtual Machine


1. Name the VM
• Enter a name for your VM instance.

2. Select the Region and Zone


• Choose a region close to your target audience or users.

• Choose an availability zone (e.g., us-central1-a).

3. Choose the Machine Configuration


• Under "Machine Configuration", select:

o Series (E2, N1, N2, etc.)

o Machine type (Select based on your CPU & RAM needs)

▪ Example: e2-medium (2 vCPU, 4 GB RAM) or n1-standard-4 (4 vCPU, 16 GB RAM).

▪ Click "Customize" if you want a specific CPU and RAM combination.

4. Boot Disk (Operating System)


• Click "Change" under Boot Disk.

• Choose an Operating System (e.g., Ubuntu, Windows, Debian).

• Select disk size (e.g., 20GB or more).

5. Networking and Firewall


• Enable "Allow HTTP Traffic" or "Allow HTTPS Traffic" if needed.

• Click "Advanced options" for networking configurations.


Step 4: Create and Deploy the VM
1. Review all the configurations.

2. Click "Create" to deploy the VM.

3. Wait for the instance to be provisioned.

Step 5: Connect to the VM


1. Using SSH (Web)
• Go to Compute Engine → VM Instances.

• Click "SSH" next to your VM instance.

2. Using SSH (Terminal)


• Open Google Cloud SDK (Cloud Shell) or your local terminal.

• Run:

gcloud compute ssh your-instance-name --zone=us-central1-a

Step 6: Verify and Use the VM


• Check CPU and Memory:

lscpu # CPU details


free -h # Memory details
• Install required software (example: Apache web server)

sudo apt update && sudo apt install apache2 -y

Step 7: Stop or Delete the VM (Optional)


• Stop the VM:

gcloud compute instances stop your-instance-name --zone=us-central1-a


• Delete the VM:

gcloud compute instances delete your-instance-name --zone=us-central1-a


Module-3
Q.05 a Discuss IaaS, PaaS and SaaS cloud service models at different service levels.

✅ 1. Definition
• IaaS (Infrastructure as a Service): Provides virtualized computing
resources like servers, storage, and networking.
• PaaS (Platform as a Service): Offers a development environment with
tools to build, test, and deploy applications.
• SaaS (Software as a Service): Delivers fully functional software
applications over the internet.

✅ 2. Users
• IaaS: Network architects, IT administrators, skilled developers.
• PaaS: Software developers and programmers.
• SaaS: End-users, business teams, consumers.
✅ 3. Technical Knowledge Required
• IaaS: High technical knowledge.
• PaaS: Moderate coding knowledge.
• SaaS: No technical knowledge needed.

✅ 4. User Controls
• IaaS: Full control (OS, runtime, middleware, applications).
• PaaS: Control over app and data only.
• SaaS: No control (everything managed by provider).

✅ 5. Examples
• IaaS: AWS EC2, Microsoft Azure, Google Compute Engine.
• PaaS: Google App Engine, AWS Elastic Beanstalk, IBM Cloud.
• SaaS: Google Workspace, Salesforce, Zoom, Microsoft 365.

✅ 6. Use Cases
• IaaS: Hosting websites, big data analytics, backup and recovery.
• PaaS: Developing web/mobile apps, APIs, microservices.
• SaaS: Email, CRM, video conferencing, document collaboration.

✅ 7. Cost and Scalability


• IaaS: Pay-as-you-go model, highly scalable.
• PaaS: Cost-effective development platform, scalable.
• SaaS: Subscription-based, scalable for all business sizes.

✅ 8. Analogy (Food Example)


• SaaS: You order and eat food (ready-to-use).
• PaaS: You bake a cake in a provided kitchen (need skills, but setup is
done).
• IaaS: You rent a bare kitchen and cook from scratch (do everything
yourself).

✅ 9. Cloud & Enterprise Services


• IaaS: AWS VPC, vCloud Express.
• PaaS: Microsoft Azure, Google App Engine.
• SaaS: Google Apps, Facebook, MS Office Web, Salesforce.
✅ 10. Market Trend
• IaaS: ~12% growth.
• PaaS: ~32% growth.
• SaaS: ~27% growth.

b Explain Private, Public and Hybrid cloud deployment models.

• Determines where infrastructure is located and who owns/controls it.


• Defines the nature, purpose, and access of the cloud environment.
• Helps organizations choose the best approach based on governance, cost,
flexibility, security, scalability, and management.

✅ Types of Cloud Deployment Models


1. Public Cloud
• Open to all; services are available to the general public over the internet.
• Owned and managed by third-party providers (e.g., Google Cloud, AWS).
• Example: Google App Engine
Advantages:
• Minimal investment (pay-as-you-go).
• No setup or infrastructure management.
• Maintenance handled by provider.
• Highly scalable on demand.
Disadvantages:
• Less secure (shared resources).
• Limited customization.

2. Private Cloud
• Used by a single organization; exclusive access.
• Hosted on-premises or by a third party.
• Offers greater control and security.
Advantages:
• Full control over resources and policies.
• High data security and privacy.
• Supports legacy systems.
• Customizable for specific needs.
Disadvantages:
• Expensive to implement and maintain.
• Limited scalability compared to public cloud.

3. Hybrid Cloud
• Combines public and private clouds using proprietary software.
• Allows data and apps to move between environments.
Advantages:
• Flexible and customizable.
• Cost-effective (uses public cloud scalability).
• Better security with data segmentation.
Disadvantages:
• Complex to manage.
• Slower data transmission due to integration.
4. Community Cloud
• Shared by multiple organizations with similar interests or concerns.
• Managed internally or by a third-party.
Advantages:
• Cost-effective due to shared resources.
• Good security and collaboration.
• Enables efficient data and infrastructure sharing.
Disadvantages:
• Limited scalability.
• Customization is difficult due to shared setup.

5. Multi-Cloud
• Uses multiple public cloud providers simultaneously.
• Not limited to a single vendor or architecture.
Advantages:
• Mix and match best features of different providers.
• Low latency (choose nearest regions).
• High availability and fault tolerance.
Disadvantages:
• Complex architecture.
• Potential security risks due to integration gaps.
✅ Choosing the Right Cloud Deployment Model
Factors to Consider:
• Cost – Budget for infrastructure and service.
• Scalability – Ability to scale with growing demand.
• Ease of Use – Skill level required to manage the cloud.
• Compliance – Adherence to legal and regulatory standards.
• Privacy – Type and sensitivity of data being stored/processed.
➡️ No one-size-fits-all – the best deployment model depends on current business
requirements. You can switch models as your needs evolve.

OR
Q.06 a Write short notes on global exchange of cloud resources.
❖ Global Exchange of Cloud Resources is the process of provisioning and
consuming cloud services across different regions and countries.
❖ It allows businesses and organizations to deploy, manage, and grow their
infrastructure all over the world.
❖ This process is made possible by cloud providers such as Amazon Web
Services (AWS), Microsoft Azure, and Google Cloud, which provide data
centers in different regions of the world.
❖ Such services enable organizations to provide resources cost-effectively,
with little delay, and achieve high availability as well as regional
compliance.

1. Geographical Distribution
• Cloud resources are hosted across a network of global data centers
spread across various regions.
• This allows organizations to serve users from different locations with
minimal delay, improving the overall user experience.
2. Load Balancing
• Cloud service providers offer load balancing across regions.
• This ensures that computing power and resources are efficiently distributed
to meet fluctuating demands across different regions.
3. Redundancy and Availability
• The global exchange enables redundancy by hosting data in multiple
locations.
• In the event of a system failure in one region, data and applications can still
be accessed from other regions, ensuring high availability.

4. Latency Reduction
• By locating resources closer to the end-users, latency is reduced
significantly.
• This enhances the performance of cloud-hosted applications, providing
users with faster access to services regardless of their physical location.
5. Cost Efficiency
• Pay-as-you-go models and cost-effective regional pricing allow
businesses to optimize their cloud expenditures.
• Companies only pay for the resources they use in specific regions, enabling
better cost management.
6. Disaster Recovery
• The global nature of cloud resources ensures that businesses can
implement effective disaster recovery strategies.
• By storing data across different regions, organizations can recover from
outages in one region by switching to another region with no significant
data loss or downtime.
7. Regulatory Compliance
• Many countries have strict data residency and privacy laws.
• The global distribution of cloud resources allows companies to adhere to
local regulations by keeping data within the country or region where
required.

Benefits of Global Exchange of Cloud Resources


1. Scalability:
o Global cloud resources can be scaled dynamically to handle varying
demand.
o Companies can deploy additional resources across different regions
based on performance needs.
2. High Availability:
o The global architecture of cloud resources ensures that services are
available even during regional outages.
o Cloud providers offer multiple availability zones to support
business continuity.
3. Optimized Performance:
o With resources located close to end-users, the speed of access is
significantly improved.
o This is particularly important for applications requiring real-time
data processing.
4. Cost Management:
o Regional pricing allows businesses to select the most cost-
effective locations to host resources.
o This flexibility helps businesses minimize expenses and optimize
their IT budgets.

Challenges of Global Exchange of Cloud Resources


1. Data Privacy and Sovereignty:
o Data sovereignty laws may limit the movement of data across
borders.
o Compliance with local laws regarding data storage and privacy
becomes complex when using a global cloud infrastructure.
2. Network Latency:
o Despite efforts to reduce latency, network performance between
regions can still cause delays.
o This becomes a challenge for real-time applications that require
minimal lag.
3. Complexity in Management:
o Managing distributed cloud resources across multiple regions can
be complex.
o Businesses need advanced orchestration tools and skilled IT
personnel to maintain performance and uptime.
4. Security:
o Security risks can increase with the complexity of managing
resources across various regions.
o Data breach risks may be heightened due to multiple entry points
and cross-border regulations.
Examples of Global Cloud Providers
1. Amazon Web Services (AWS):
o AWS has a vast network of data centers across the globe,
supporting a variety of services such as EC2, S3, and RDS.
o AWS ensures global scalability, high availability, and flexibility for
businesses.
2. Microsoft Azure:
o Azure operates data centers in over 60 regions, offering tools for
deploying applications, managing data, and ensuring security.
o Its global architecture supports businesses with complex regulatory
and performance needs.
3. Google Cloud:
o Google Cloud provides cloud services from numerous regions,
allowing customers to deploy workloads worldwide.
o Its global infrastructure offers low-latency access and high
availability.

b Discuss a set of cloud services provided by Microsoft Azure.

Cloud Services Provided by Microsoft Azure


1. Azure Compute Services
o Azure Virtual Machines (VMs): Provides scalable computing
resources on-demand for running Windows and Linux VMs.
o Azure App Services: Managed platform for building and deploying
web apps with support for various programming languages.
o Azure Kubernetes Service (AKS): Simplifies containerized app
management with Kubernetes orchestration.
o Azure Functions: Serverless compute service for running event-
driven functions without infrastructure management.
o Azure Virtual Desktop: Desktop virtualization service for securely
delivering remote desktop experiences.
2. Azure Storage Services
o Azure Blob Storage: Scalable storage for unstructured data like
text, images, and videos.
o Azure Disk Storage: Persistent block-level storage for Azure VMs,
offering different performance tiers.
o Azure File Storage: Managed file shares accessible via the SMB
protocol for shared storage.
o Azure Data Lake Storage: Big data storage solution optimized for
analytics with high scalability.
o Azure Archive Storage: Low-cost, long-term storage for
infrequently accessed data.
3. Azure Networking Services
o Azure Virtual Network (VNet): Isolated cloud network for
securely connecting Azure resources.
o Azure Load Balancer: Distributes incoming traffic across multiple
servers to ensure high availability.
o Azure VPN Gateway: Securely connects on-premises networks to
Azure using VPN.
o Azure Application Gateway: Layer 7 load balancer with web
application firewall and URL-based routing.
o Azure Content Delivery Network (CDN): Delivers content faster
to global users by caching data at edge locations.
4. Azure Databases and Analytics Services
o Azure SQL Database: Fully managed relational database service
built on Microsoft SQL Server.
o Azure Cosmos DB: Globally distributed, multi-model database for
high-performance applications.
o Azure Synapse Analytics: Analytics service combining big data
and data warehousing capabilities.
o Azure HDInsight: Managed service for processing big data using
open-source frameworks like Hadoop and Spark.
o Azure Data Factory: Cloud-based data integration service for
moving and transforming data.
5. Azure Security and Identity Services
o Azure Active Directory (Azure AD): Identity and access
management service for users and apps.
o Azure Security Center: Unified security management service for
monitoring and protecting Azure resources.
o Azure Key Vault: Securely stores and manages keys, secrets, and
certificates for apps.
o Azure DDoS Protection: Protection against distributed denial of
service attacks, ensuring application availability.
o Azure Information Protection: Classification and encryption of
data to prevent unauthorized access.
6. Azure DevOps and Developer Tools
o Azure DevOps Services: Cloud-based tools for managing the
software development lifecycle, including CI/CD.
o Azure DevTest Labs: Helps create and manage test environments
for development and testing.
o Azure Container Instances (ACI): Run containers without
managing infrastructure.
o Azure App Configuration: Centralized management of
configuration data for applications.
o Azure Monitor: Comprehensive monitoring solution for tracking
performance, logs, and alerts.
7. Azure AI and Machine Learning Services
o Azure Machine Learning: Cloud service for building, training,
and deploying machine learning models.
o Azure Cognitive Services: Pre-built APIs for vision, speech,
language, and decision-making AI capabilities.
o Azure Bot Services: Platform for building intelligent bots and
conversational interfaces.
o Azure AI Gallery: Repository for machine learning models,
scripts, and solutions.
o Azure Databricks: Apache Spark-based analytics platform for data
engineering and machine learning.
8. Azure Hybrid Cloud Solutions
o Azure Arc: Extends Azure management and services to on-
premises, multi-cloud, and edge environments.
o Azure Stack: A set of hybrid cloud solutions that enable running
Azure services on-premises.
o Azure Site Recovery: Disaster recovery service to ensure business
continuity.
o Azure ExpressRoute: Private connection between on-premises
data centers and Azure.
o Azure Migrate: Service to assess, migrate, and optimize workloads
in the cloud.
9. Azure IoT Services
o Azure IoT Hub: Centralized platform for managing and
connecting Internet of Things (IoT) devices.
o Azure IoT Central: Managed IoT app platform for simplifying IoT
device management and analytics.
o Azure Digital Twins: A service for creating digital replicas of
physical environments.
o Azure Sphere: Securely connects microcontroller-powered devices
to the cloud.
o Azure Time Series Insights: Analytics platform for analyzing
time-series data from IoT devices.
10. Azure Migration and Modernization Services
o Azure Migrate: Tools and services for migrating on-premises
workloads to Azure.
o Azure Database Migration Service: Simplifies the migration of
databases to Azure with minimal downtime.
o Azure Web Apps Migration: Tool to migrate web apps from on-
premises or other clouds to Azure.
o Azure App Service Migration Assistant: Helps move .NET
applications to Azure App Services.
o Azure VMware Solution: Migrate VMware workloads to Azure
without re-architecting applications.

Module-4
Q.07 a Discuss security of database services.
Cloud Database Security refers to the strategies, technologies, and tools employed
to protect cloud-hosted databases from unauthorized access, cyberattacks, data
breaches, and other malicious threats. It ensures the integrity, confidentiality, and
availability of data stored in cloud environments, and is essential for preventing
data loss, exposure, and misuse.

Importance of Cloud Database Security
1. Protection Against Cyber Threats: As more enterprises migrate to the
cloud, protecting sensitive data from hackers, malware, and unauthorized
access becomes a significant concern.
2. Governance and Compliance: Maintaining regulatory compliance and
meeting industry standards is crucial for avoiding legal repercussions and
fines.
3. Maintaining Customer Trust: Proactive security measures ensure that
customers’ data is protected, helping businesses retain trust.
4. Data Availability: Cloud database security ensures that critical data
remains accessible while preventing unauthorized disruptions.
5. Business Continuity: Effective security protocols are vital for ensuring the
continuous operation of cloud services without unexpected downtime.

Features of Cloud Database Security


1. Customer-Managed Keys: Using customer-managed keys instead of
relying on cloud providers for critical resource management, minimizing
third-party access risks.
2. Manage Passwords: Automating access control through password
management systems to provide temporary passwords and permissions to
authorized users.
3. Logging Capabilities: Enabling comprehensive logging to track
unauthorized access attempts and storing logs for centralized security event
management.
4. Encrypted Database Access: Enabling encryption for cloud databases to
protect sensitive data and prevent unauthorized access.
5. Access Control: Defining strict access policies to limit who can access the
database, ensuring that only authorized users have the appropriate
permissions (see the sketch after this list).
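
A minimal, illustrative Python sketch of the access-control idea (all role names are hypothetical): every operation is checked against a per-role allow-list before it reaches the database.

ROLES = {
    "analyst": {"SELECT"},
    "app": {"SELECT", "INSERT", "UPDATE"},
    "dba": {"SELECT", "INSERT", "UPDATE", "DELETE", "ALTER"},
}

def authorize(role, operation):
    # Deny by default: unknown roles get an empty permission set
    if operation not in ROLES.get(role, set()):
        raise PermissionError("%s may not %s" % (role, operation))

authorize("app", "SELECT")      # allowed, returns silently
authorize("analyst", "DELETE")  # raises PermissionError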

Advantages of Cloud Database Security


1. Reduced Costs: Cloud providers offer advanced security features that
reduce administrative overhead, minimizing the total cost of ownership
(TCO).
2. Increased Visibility: Enhanced security protocols allow businesses to
monitor their data assets and user activity within the cloud environment.
3. Native Applications Support: Cloud databases provide native app
integration without the need for additional installation, allowing developers
to build seamless applications.
4. Data Encryption: Cloud services use sophisticated encryption to secure
data both in transit and at rest, protecting sensitive information from
unauthorized access.
5. Automated Security Updates: Cloud providers manage regular security
updates and patches, ensuring the database is protected against emerging
threats.

Disadvantages of Cloud Database Security


1. Account Hijacking: Attackers may use phishing or exploit vulnerabilities
in third-party services to gain access to user accounts and expose sensitive
data.
2. Misconfiguration: Cloud systems can become misconfigured over time as
services expand, leaving gaps in security that attackers may exploit.
3. Data Loss: Unauthorized users may delete valuable data, causing
irreparable loss, especially if backups are not managed securely.
4. Data Breaches: Inadequate security measures may lead to breaches,
risking not only data but also the company’s reputation and financial
stability due to noncompliance fines.
5. Shared Responsibility Model: Cloud database security relies on both the
provider and the user, with users needing to ensure proper configuration,
monitoring, and access control.

b Explain the security risks posed by shared images and management OS.
Security Risks Posed by Shared Images:
1. Malicious Code Injection:
o Shared images can be pre-configured with malicious software that
might go undetected during the creation or deployment of the
image. When other users deploy the image, they might
unknowingly execute this malicious code.

2. Unpatched Vulnerabilities:
o If the shared image is not updated regularly, it may contain outdated
software with known vulnerabilities. This exposes the system to
exploits and attacks.
3. Data Leakage:
o Sensitive data stored in a shared image may be accessible to other
users or systems using the image. Improper data handling within
shared images can lead to unauthorized data access.
4. Privilege Escalation:
o Shared images might contain embedded administrator or root
privileges. If the image is not securely configured, it can allow
unauthorized users to escalate their privileges and gain control of
the system.
5. Lack of Isolation:
o In some cases, shared images may not have proper isolation
between different users or virtual machines. This can lead to
unintentional access to data or resources belonging to other users.
6. Compliance and Legal Risks:
o Shared images may not meet the required security and privacy
standards for regulated industries. This poses a risk of non-
compliance with laws such as GDPR, HIPAA, or PCI-DSS.
7. Insecure Configuration:
o Misconfigured settings in a shared image could lead to weak
security controls, allowing attackers to exploit weaknesses in the
system.
8. Inadequate Monitoring:
o Without adequate monitoring, it becomes difficult to detect
suspicious activities related to shared images, such as unauthorized
access or malicious activity.

Security Risks Posed by Management Operating Systems (OS):


1. Privilege Escalation and Unauthorized Access:
o If an attacker gains control of the management OS, they can
escalate privileges and gain access to all virtual machines and
systems managed by the OS. This can result in total control over
the infrastructure.
2. Weak Authentication and Access Control:
o A poorly implemented authentication mechanism or lack of proper
access control allows unauthorized users to access the management
OS, putting all virtual environments at risk.

3. Denial of Service (DoS) Attacks:


o A compromised management OS can be used to perform DoS
attacks on the virtual machines or containers, causing outages or
performance degradation across all hosted services.
4. Insecure Communication:
o Communication between the management OS and other systems,
such as virtual machines, could be intercepted if unencrypted
protocols are used. This could expose sensitive data or allow
attackers to tamper with communication.
5. Inadequate Resource Management:
o Poor resource allocation and management in the management OS
can allow malicious users or processes to consume excessive
system resources, leading to degraded performance or system
crashes.
6. Exposure of Management Interfaces:
o The management OS often exposes interfaces for managing virtual
machines or containers. If these interfaces are not secured, attackers
may exploit them to compromise the system.

7. Unpatched Vulnerabilities:
o The management OS may contain vulnerabilities that can be
exploited by attackers if not properly patched. This makes the OS a
prime target for security breaches.
8. Insider Threats:
o Employees or individuals with access to the management OS may
intentionally or unintentionally cause damage, leak data, or
compromise system security.
9. Misconfigurations:
o Misconfigurations in the management OS can lead to
vulnerabilities, including incorrect user permissions, weak
passwords, or incorrect networking settings, all of which increase
the risk of exploitation.
10. Lack of Auditing and Monitoring:
• Without proper logging and monitoring, it becomes difficult to detect
unusual activities or potential security breaches in the management OS,
leaving the system vulnerable to attacks.

OR
Q.08 a Discuss how virtual machines are secured.
1. Hypervisor Security
• Ensure the integrity of the hypervisor through write protection and
restricted access to prevent unauthorized modifications.
• Implement isolation between VMs to prevent cross-VM attacks and
intrusion detection to monitor hypervisor activity.
2. Virtual Machine Isolation
• Enforce memory, network, and resource isolation to prevent unauthorized
access between VMs.
• Use strict access controls to limit communication and interactions between
VMs.
3. Access Control and Authentication
• Implement multi-factor authentication (MFA) and role-based access
control (RBAC) to restrict access to VMs.
• Maintain audit logs and enforce strong password policies to ensure only
authorized access.
4. VM Monitoring and Logging
• Continuously monitor VM behavior and maintain centralized logs for
tracking potential security threats.
• Set up real-time alerting to notify administrators of suspicious activities.
5. Guest Operating System and Application Security
• Regularly update the guest OS and use security software like antivirus to
protect against vulnerabilities.
• Configure firewalls, IDS, and whitelisting to limit unauthorized access and
application execution.
6. VM Image Security
• Harden VM images before deployment and restrict image creation to
trusted sources.
• Perform virus scanning on VM images to ensure they are free from
malware or malicious content.
7. Data Encryption
• Encrypt data at rest and in transit to protect sensitive information on VMs
(see the sketch after this list of measures).
• Use secure key management to ensure that encryption keys are properly
managed and rotated.
8. VM Backup and Recovery
• Perform regular backups and store them offsite to ensure data recovery in
case of a breach.
• Test disaster recovery plans to ensure VMs can be restored quickly after an
incident.
9. Virtual Machine Patching and Updates
• Apply automated patch management to ensure VMs are updated with the
latest security patches.
• Test patches in non-production environments before deployment to avoid
disruptions.
10. VM Resource Management
• Monitor VM resource usage to detect abnormal consumption patterns that
could signal security threats.
• Set resource allocation limits to prevent overuse by any single VM,
maintaining performance and security.
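
As a concrete illustration of point 7, the sketch below encrypts data at rest using the third-party Python cryptography package (pip install cryptography). In practice the key would come from a key-management service rather than sit next to the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: fetched from a vault/KMS
cipher = Fernet(key)

token = cipher.encrypt(b"VM disk snapshot bytes")  # ciphertext stored at rest
print(cipher.decrypt(token))  # only holders of the key recover the data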

b Explain reputation system design options.
1. Centralized Reputation System
• A centralized system relies on a single authority or server to collect, store,
and process reputation data for all users or services.
• Advantages:
o Simplified management with a single point of control.
o Easier to monitor and track user or service performance.
• Disadvantages:
o A single point of failure can disrupt the entire system.
o Potentially vulnerable to manipulation or attack if the central server
is compromised.
2. Decentralized Reputation System
• In this design, reputation data is stored and processed across multiple
nodes, with no central authority. Each participant or service maintains their
own reputation scores, and data is distributed among peers.
• Advantages:
o Increased robustness since there’s no single point of failure.
o Better suited for distributed or peer-to-peer cloud environments.
• Disadvantages:
o More complex to manage and ensure consistency across the
system.
o Higher computational and storage overhead as data needs to be
distributed and verified across multiple nodes.
3. Hybrid Reputation System
• A hybrid system combines elements of both centralized and decentralized
models. Typically, reputation data is stored centrally, but peer-to-peer
evaluations or ratings are used to influence the final score.
• Advantages:
o Flexibility in adapting to different cloud environments.
o Provides a balance of reliability and robustness.
• Disadvantages:
o May suffer from the complexity of managing multiple systems.
o Still subject to the risks of centralization (e.g., targeted attacks).
4. Reputation Based on Feedback Mechanisms
• This system relies on user feedback or ratings after interacting with a
service or user. Ratings from multiple users are aggregated to generate a
reputation score for the service or user.
• Advantages:
o Provides direct, real-time feedback from users, improving service
accountability.
o Scalable and adaptable to a wide range of cloud services.
• Disadvantages:
o Susceptible to fake or biased feedback if not properly monitored or
verified.
o May require additional mechanisms (e.g., reputation decay) to keep
scores relevant over time (see the sketch after this list of options).
5. Reputation Based on Historical Behavior
• This system tracks the past behavior of users or services (e.g., uptime,
reliability, or security events) and uses this data to predict future behavior.
The reputation score is dynamically updated based on ongoing
performance metrics.
• Advantages:
o Provides a continuous, data-driven evaluation of trustworthiness.
o Reduces the impact of individual malicious actions since it focuses
on long-term patterns.
• Disadvantages:
o Requires large volumes of data and historical tracking, leading to
increased storage and processing overhead.
o May not quickly adapt to sudden, drastic changes in behavior.
6. Trust Models in Reputation Systems
• Trust models use algorithms or mathematical models to assign
trustworthiness scores. These models often factor in various metrics,
including past interactions, feedback, and service performance.
• Advantages:
o Can be customized based on the needs of the specific cloud
environment (e.g., service reliability, data integrity).
o Provides a formal, quantifiable approach to reputation
management.
• Disadvantages:
o Complex to design and implement.
o May need continuous refinement and updates to remain effective as
the cloud environment evolves.
7. Reputation Based on Third-party Evaluation
• In this approach, a trusted third-party organization (e.g., an auditor or
certification body) evaluates the reputation of services or users in the
cloud.
• Advantages:
o Enhances credibility as the third-party evaluation is independent.
o Useful for situations requiring external verification, such as
compliance with industry standards.
• Disadvantages:
o Potentially slow and expensive due to the need for external
evaluation.
o May introduce a bottleneck if the third-party organization becomes
overwhelmed with requests.
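
To make the feedback-plus-decay idea from design 4 concrete, here is a minimal Python sketch (function and parameter names are illustrative): ratings are aggregated with weights that halve after every half-life, so stale feedback fades out.

import time

def reputation(feedback, half_life=30 * 86400, now=None):
    # feedback: list of (rating in [0, 1], unix_timestamp) pairs
    now = now or time.time()
    num = den = 0.0
    for rating, ts in feedback:
        w = 0.5 ** ((now - ts) / half_life)  # weight halves every half_life
        num += w * rating
        den += w
    return num / den if den else 0.0

t = time.time()
# A recent good rating outweighs an old bad one:
print(reputation([(1.0, t - 86400), (0.2, t - 90 * 86400)]))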
Module-5
Q.09 a What are the various system issues for running a typical parallel program in
either parallel or distributed manner?

1. Communication Overhead
• Parallel systems (e.g., using threads or processes) may have lower
communication latency due to shared memory.
• Distributed systems must send data over a network, leading to higher
latency and bandwidth constraints.

2. Synchronization and Coordination


• Ensuring that multiple processes or threads coordinate properly is critical.
• Problems like race conditions, deadlocks, and livelocks can occur (see the sketch below).
• Locks, barriers, semaphores, or message-passing mechanisms are needed.
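
A minimal Python sketch of the race-condition problem just mentioned: two threads increment a shared counter, and the read-modify-write can lose updates unless a lock serializes it.

import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # remove this lock and updates can be lost
            counter += 1    # read-modify-write is not atomic

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; often less without it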

3. Data Distribution and Locality


• How data is divided among processes affects performance.
• In distributed systems, poor data locality can result in excessive remote
data access, hurting efficiency.

4. Load Balancing
• Uneven workload distribution causes some nodes/threads to be idle while
others are overloaded.
• Requires dynamic or static load balancing strategies.

5. Fault Tolerance and Reliability


• In distributed systems, nodes or network links can fail.
• Systems must handle failures gracefully (e.g., checkpointing, replication).

6. Scalability
• The ability of the system to maintain performance as more resources are
added.
• Communication, synchronization, and data contention may limit
scalability.

7. Resource Management
• Effective use of CPU, memory, network, and storage.
• In distributed systems, resource heterogeneity (e.g., different hardware
capabilities) complicates management.
8. Programming Model Complexity
• Writing efficient parallel/distributed programs is harder.
• APIs like MPI, OpenMP, CUDA, or MapReduce help but require
expertise.

9. Security Issues (Distributed Systems)


• Data transmission over networks introduces concerns about data integrity,
confidentiality, and authentication.

10. Debugging and Profiling


• Much harder than in sequential systems.
• Tools are needed for monitoring, profiling, and debugging parallel and
distributed executions.

b With a neat diagram, explain the data flow in running a MapReduce job
at various task trackers using the Hadoop library.

❖ Data locality is a principle in Hadoop that promotes moving computation
(algorithms/code) close to where the data resides, instead of moving large
data to the computation.
❖ It is designed to reduce network congestion and enhance the performance
of big data processing.

Step by step MapReduce Job Flow


The data to be processed by MapReduce should be stored in HDFS, which divides
the data into blocks and stores them in a distributed manner.
Below are the steps for MapReduce data flow:
• Step 1: One block is processed by one mapper at a time. In the mapper, a
developer can specify his own business logic as per the requirements. In
this manner, Map runs on all the nodes of the cluster and processes the data
blocks in parallel.
• Step 2: Output of the mapper, also known as intermediate output, is written
to the local disk. Mapper output is not stored on HDFS because it is
temporary data, and writing it to HDFS would create many unnecessary
copies.
• Step 3: Output of the mapper is shuffled to the reducer node (a normal
slave node, but since the reduce phase runs here it is called the reducer
node). The shuffling/copying is a physical movement of data over the
network.
• Step 4: Once all the mappers have finished and their output has been
shuffled to the reducer nodes, this intermediate output is merged and
sorted, and then provided as input to the reduce phase.
• Step 5: Reduce is the second phase of processing, where the user can
specify custom business logic as per the requirements. Input to a reducer is
provided from all the mappers. The output of the reducer is the final
output, which is written to HDFS.
Hence, in this manner, a MapReduce job is executed over the cluster. All the
complexities of distributed processing, such as data/code distribution, high
availability, fault tolerance, and data locality, are handled by the framework. The
user just needs to concentrate on the business requirements and write custom code
for the specified phases (map and reduce), as in the sketch below.
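
As a concrete, deliberately minimal illustration of the two custom phases, here is a word-count mapper and reducer written as Python scripts in the style used with Hadoop Streaming (file names are illustrative). The framework performs the shuffle, merge, and sort between them exactly as in Steps 3 and 4 above.

# mapper.py: each input line -> tab-separated (word, 1) pairs on stdout
import sys
for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)

# reducer.py: receives pairs sorted by word; sums the counts per word
import sys
current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = word, 0
    total += int(count)
if current is not None:
    print("%s\t%d" % (current, total))

Hadoop Streaming would launch these via the hadoop-streaming JAR, pointing its -mapper and -reducer options at the two scripts and -input/-output at HDFS paths.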

OR
Q.10 a Discuss Programming the Google App Engine.
• Google App Engine (GAE) is a fully managed Platform as a Service
(PaaS) used for building and hosting scalable web applications on
Google’s infrastructure.
• It dynamically scales web applications as traffic demand changes,
ensuring efficient resource usage and high availability.
• GAE supports multiple programming languages like Python, Java, Go,
and PHP, each with its own runtime and SDK for local development and
testing.
• The App Engine SDK allows developers to emulate the production
environment on local machines and later deploy their apps easily with cost-
control quotas.
• GAE provides numerous in-built services including cron jobs, queues,
scalable datastores (Cloud SQL, Datastore, Memcached),
communication tools, and in-memory caching.
• It offers a secure and high-performance execution environment with
general features (e.g., datastore, logs, blobstore, search) covered by
service-level agreements (SLA).
• GAE has preview and experimental features (e.g., Sockets, MapReduce,
Prospective Search, OpenID) that may change and are accessible to
selected users.
• Third-party services and helper libraries are integrated via partnerships,
enabling apps to perform extended tasks beyond core functionalities.
• Key advantages include fast deployment, ease of use, rich APIs, built-in
security, automatic scaling, high reliability, platform independence,
and reduced infrastructure cost.
• Overall, Google App Engine simplifies the development of robust,
scalable, and secure applications without managing server infrastructure,
making it ideal for rapid development and enterprise-scale solutions.
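
As a minimal illustration, here is the canonical "hello world" for the classic App Engine Python 2.7 runtime using the bundled webapp2 framework (route and class names are just examples); app.yaml maps incoming URLs to this app object, and the SDK deploys it.

import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # GAE routes HTTP GET requests here; no server setup is needed
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from Google App Engine!')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)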
b With a neat diagram, explain the OpenStack Nova system architecture.

1. Nova is an OpenStack component responsible for managing and


provisioning virtual machine (VM) instances, similar to AWS EC2, but
for private clouds.
2. It supports multiple hypervisors and virtualization technologies such as
KVM, Hyper-V, VMware, and Xen.
3. Nova interacts with other OpenStack services like:
o Keystone for authentication,
o Glance for image services,
o Neutron for network provisioning,
o Cinder for providing volumes to VM instances.
4. Nova is developed using Python, and uses libraries like:
o Eventlet for networking,
o SQLAlchemy for database interactions.
5. It follows a horizontally scalable architecture, meaning the workload is
distributed across multiple servers instead of relying on a single machine.
6. Nova uses SQL databases to store information, which are shared logically
by its components.
7. It operates using multiple daemons running on top of Linux servers, each
performing specific tasks.
8. Use Cases of Nova include:
o Creating and managing VMs,
o Supporting bare-metal servers,
o Offering limited support for system containers (e.g., Docker).
9. Core components in Nova architecture include:
o DB: Central SQL database,
o API: Handles HTTP requests and interacts with other components,
o Scheduler: Allocates instances to hosts,
o Compute: Manages VMs and hypervisors,
o Conductor: Coordinates complex tasks and acts as a DB proxy.
10. Users can access Nova services via:
o Horizon (OpenStack dashboard UI),
o CLI (Command Line Interface),
o Novaclient (Python-based API and CLI tool for Nova operations).
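
A minimal sketch of driving Nova programmatically through its Python client. Treat it as illustrative only: novaclient's constructor arguments changed across releases, and image lookups moved to Glance in newer versions, so the exact calls depend on your OpenStack release.

from novaclient import client

# Legacy-style authentication against Keystone (arguments vary by version)
nova = client.Client("2", "admin", "password", "demo-project",
                     "http://controller:5000/v2.0")

flavor = nova.flavors.find(name="m1.small")   # CPU/RAM template
server = nova.servers.create(name="demo-vm",
                             image="cirros-image-id",  # hypothetical Glance image ID
                             flavor=flavor)
print(server.id, server.status)  # Scheduler picks a host; Compute boots the VM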
1. Define Cloud Computing? Explain its characteristics and benefits.
Cloud computing refers to the delivery of various computing services such as storage, processing power,
networking, and software applications over the internet.
 Remote Access: Services are accessed through the internet rather than being stored locally on devices or
in data centers.
 On-Demand Services: Resources such as computing power, storage, and applications can be scaled up
or down as needed.
 Pay-as-You-Go Model: Users only pay for the resources they use, which helps reduce costs.
 Elasticity and Scalability: The ability to automatically scale resources up or down depending on demand.
Benefits:

1. Cost Savings – Eliminates the need for large capital expenditures on hardware and software.
2. Scalability and Flexibility – Easily scales resources up or down based on workload.
3. Improved Performance – Cloud providers optimize infrastructure for better efficiency and
performance.
4. Security and Compliance – Advanced security measures such as encryption, firewalls, and
compliance with industry standards ensure data protection.
5. Disaster Recovery and Backup – Cloud services provide automated backup and disaster recovery
solutions.
6. Accessibility and Collaboration – Cloud applications can be accessed from anywhere, allowing for
better collaboration among teams.

------------------------------------------------------------------------------------------------------------------
2. What are the types of VM architectures, and how do they help in making computing
easier?
Types:
(a) Physical Machine:
The traditional setup where a single OS directly manages the hardware.
No virtualization; applications and the OS depend entirely on the physical machine.
(b) Native VM Architecture:
A virtual machine monitor (VMM or hypervisor) operates directly on the hardware.
Efficient and high-performing, often used for managing resources in large-scale systems.
(c) Hosted VM Architecture:
A VMM runs on top of a host operating system, treating VMs as applications.
Simpler to set up but less efficient due to the extra layer introduced by the host OS.
(d) Dual-Mode VM Architecture:
Combines features of both native and hosted architectures.
Some tasks are handled directly by the VMM on hardware, while others pass through a host OS.
VM Architectures:
(b) Native (Bare-Metal) VM: Hypervisor (e.g., XEN) operates directly on hardware in privileged mode.
Guest OS could differ, like Linux on Windows hardware.
(c) Hosted VM: VMM runs in nonprivileged mode on a host OS. No need to modify the host.
(d) Dual-Mode VM: Splits VMM functions between user level and supervisor level. May require minor
modifications to the host OS.
Key Benefits:
Supports multiple VMs on the same hardware.
Facilitates portability and flexibility with virtual appliances bundled with their dedicated OS and
applications.

------------------------------------------------------------------------------------------------------------------
3) Explain Xen Architecture in detail.
The Xen Architecture
Overview of Xen:
• Open-source hypervisor developed by Cambridge University.
• A microkernel hypervisor separating mechanism (handled by Xen) from policy (handled by Domain 0).
• Does not include native device drivers; provides mechanisms for guest OSes to access physical
devices.
Key Features:
• Small hypervisor size due to minimal functionality.
• Acts as a virtual environment between hardware and OS.
• Commercial versions include Citrix XenServer and Oracle VM.
Core Components:
1. Hypervisor.
2. Kernel.
3. Applications.
Domains in Xen:
Domain 0 (privileged guest OS): Manages hardware access and devices.
Allocates and maps resources for other domains (Domain U).
Boots first, without file system drivers.
Security risks exist if Domain 0 is compromised.
Domain U (unprivileged guest OS): Runs on resources allocated by Domain 0.
Security:
• Xen is Linux-based with a C2 security level.
• Strong security policies are needed to protect Domain 0.
VM Capabilities:
• Domain 0 acts as a VMM, enabling users to create, save, modify, share, migrate, and roll back VMs
like files.
• Rolling back or rerunning VMs allows fixing errors or redistributing content.

VM Execution Model:
Traditional machine states are linear, while VM states form a tree structure: Multiple instances of a VM can
exist simultaneously.
VMs can roll back to previous states or rerun from saved points.

------------------------------------------------------------------------------------------------------------------
4) What is the difference between LAN, SAN, and NAS, and how has Ethernet speed
improved networking for distributed computing?
LAN vs. SAN vs. NAS:
• LAN (Local Area Network): Connects computers, clients, and servers within a limited area such as an office or campus.
• SAN (Storage Area Network): A dedicated network that gives servers block-level access to shared storage devices.
• NAS (Network-Attached Storage): A storage appliance that serves files over the LAN using protocols such as NFS or SMB.
How Faster Ethernet Improves Distributed Computing

1. Higher Data Speeds → 10GbE, 25GbE, and 100GbE reduce delays in data exchange between nodes.
2. Lower Latency → Ensures real-time processing for AI, big data, and financial trading.
3. Better Cloud & Edge Computing → Faster communication between cloud servers and IoT devices.
4. Efficient Storage Access → Improves NAS/SAN performance for distributed databases.
5. Scalability → Supports large-scale applications like HPC, AI training, and scientific research.

------------------------------------------------------------------------------------------------------------------
5. Explain GPU Programming Model.
GPU programming models are designed to leverage the parallel processing capabilities of GPUs for high-
performance computing tasks. These models provide frameworks and languages that enable developers to write
programs that run efficiently on GPUs.
Division of Labor: CPUs handle complex logic and management tasks, while GPUs excel at parallel
processing.
Offloading Tasks: CPUs offload parallelizable tasks to GPUs using programming models like CUDA,
OpenCL, and DirectCompute.
Parallel Execution: GPUs have thousands of smaller cores that process multiple data streams simultaneously.
Data Transfer: Efficient data transfer between CPU and GPU memory is crucial, facilitated by high-speed
interconnects like PCIe.
Coordination: CPUs and GPUs coordinate tasks using appropriate APIs and frameworks to manage
workflows and data dependencies.
Applications: Used in graphics rendering, scientific research, machine learning, financial modeling, and
more.
Hybrid Architectures: Some processors integrate CPU and GPU cores on the same chip, enhancing
performance for mixed workloads.
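
To show the offloading pattern in code, here is a minimal vector-add kernel using the third-party Numba package (this assumes an NVIDIA GPU and CUDA toolkit are available): the CPU launches thousands of GPU threads, each handling one array element.

import numpy as np
from numba import cuda

@cuda.jit
def vec_add(a, b, out):
    i = cuda.grid(1)        # this thread's global index
    if i < out.size:        # guard against out-of-range threads
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
vec_add[blocks, threads](a, b, out)  # CPU offloads; GPU runs in parallel
print(out[:3])  # [2. 2. 2.]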

------------------------------------------------------------------------------------------------------------------
6. What is scalable computing over the Internet, and how does it use technologies like
IoT and cloud computing?

Scalable Computing over the Internet refers to the ability to dynamically allocate and manage computing
resources over the internet in a way that can handle growing demands. This involves distributing
computational tasks across multiple systems (often using cloud computing or distributed computing
platforms) to accommodate varying workloads.
Evolution of Computing Technology
Over the last 60 years, computing has evolved through multiple platforms and environments.

Shift from centralized computing to parallel and distributed systems.

Modern computing relies on data-intensive and network-centric architectures.


The Age of Internet Computing
High-performance computing (HPC) systems cater to large-scale computational needs.

High-throughput computing (HTC) focuses on handling a high number of simultaneous tasks.

The shift from Linpack Benchmark to HTC systems for measuring performance.

Internet of Things (IoT)

 Extends the Internet to everyday objects via sensors, RFID, GPS.
 Introduced in 1999 at MIT, enabling communication between objects and people.
 IPv6 allows 2¹²⁸ unique addresses, enough to track 100 trillion objects.

Communication Models:

1. H2H (Human-to-Human)
2. H2T (Human-to-Thing)
3. T2T (Thing-to-Thing)

Development & Challenges:

 IoT is growing rapidly in Asia & Europe.


 Cloud computing improves efficiency, intelligence, and scalability.

Cloud Computing & IoT

 Cloud can be centralized or distributed.
 Uses parallel & distributed computing.
 Built with physical or virtual resources in large data centers.
 Considered a form of utility or service computing.

------------------------------------------------------------------------------------------------------------------
7. Explain Cluster Architecture in detail.

A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single
integrated computing resource.
Figure 1.15 shows the architecture of a typical server cluster built around a low-latency, high-bandwidth
interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet).
To build a larger cluster with more nodes, the interconnection network can be built with multiple levels of
Gigabit Ethernet, Myrinet, or InfiniBand switches.
The gateway IP address locates the cluster. The system image of a computer is decided by the way the OS
manages the shared cluster resources.
All resources of a server node are managed by their own OS. Thus, most clusters have multiple system images
as a result of having many autonomous nodes under different OS control.
------------------------------------------------------------------------------------------------------------------
8) What is Virtualization? Explain full virtualization
Full Virtualization:
• Noncritical Instructions: Run directly on the hardware.
• Critical Instructions: Trapped and replaced with software-based emulation by the Virtual Machine
Monitor (VMM).

Why Only Critical Instructions Are Trapped:
• Performance: Binary translation of all instructions can lead to high performance overhead.
• Security: Critical instructions control hardware and can pose security risks; trapping them ensures system security.
• Efficiency: Running noncritical instructions directly on hardware improves overall efficiency.

Host-Based Virtualization
Host-Based VM Architecture:
• A virtualization layer is installed on top of the host OS.
• The host OS remains responsible for managing hardware.
• Guest OSes run on top of the virtualization layer.
• Dedicated applications can run in VMs, while other applications may run directly on the host OS.
------------------------------------------------------------------------------------------------------------------
9) List and explain the implementation levels of Virtualization.
 Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are
multiplexed in the same hardware machine.

• The idea of VMs dates back to the 1960s.
• The purpose of a VM is to enhance resource sharing by many users and improve computer performance in terms of resource utilization and application flexibility.
• According to a 2009 Gartner Report, virtualization was the top strategic technology poised to change the
computer industry.
Virtualization can be implemented across five abstraction layers:
1. Instruction Set Architecture (ISA) Level: Tools like Bochs and QEMU.
2. Hardware Level: Solutions like VMware and Xen.
3. Operating System Level: Examples include Ensim's VPS and FVM.
4. Library Support Level: Tools such as WINE or vCUDA.
5. Application Level: Examples like JVM or .NET CLR.
These layers provide flexibility in resource management and application independence.
 Instruction Set Architecture (ISA) Level: Virtualizes one processor architecture to run on another by
emulating its instructions.

 Example: Using QEMU to run ARM-based programs on an x86 processor.

 Hardware Abstraction Level: Creates virtual hardware environments directly on physical hardware.

 Example: VMware allows multiple operating systems (Windows/Linux) to run simultaneously on the same server.

 Operating System Level: Partitions a single operating system to create multiple isolated containers or
environments.

 Example: Docker containers isolate applications on a single Linux-based server.

 Library Support Level: Virtualizes APIs to run applications designed for one platform on another.

 Example: WINE allows Windows applications to run on Linux systems.

 User-Application Level: Virtualizes at the application layer to allow specific programs to run on any
system.
Example: Java Virtual Machine (JVM) runs Java applications on different operating systems.
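To illustrate the ISA level concretely, below is a toy sketch of how an emulator like Bochs or QEMU
interprets a guest instruction set on a different host CPU; the "guest" opcodes are made up for illustration:

```python
# Toy ISA-level emulation: interpret a made-up guest instruction set on the
# host CPU, the way Bochs/QEMU interpret guest binaries (vastly simplified).

def emulate(program):
    regs = {"r0": 0, "r1": 0}            # guest register file
    for op, *args in program:
        if op == "LOAD":                 # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":                # ADD dst, src  ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "PRINT":              # PRINT reg
            print(regs[args[0]])
        else:
            raise ValueError(f"unknown guest opcode: {op}")
    return regs

# Guest "binary": r0 = 2; r1 = 3; r0 += r1; print r0  -> prints 5
emulate([("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1"), ("PRINT", "r0")])
```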
------------------------------------------------------------------------------------------------------------------
10) What are various primitive VM operations in a distributed computing environment?
The Virtual Machine Monitor (VMM) provides the abstraction of a virtual machine (VM) to the guest
operating systems (OS). With full virtualization, the VMM creates a virtual machine environment that is
identical to a physical machine, allowing standard operating systems (like Windows or Linux) to run just as
they would on real hardware.
Several key VMM operations enable flexible management of virtual machines in a distributed environment:
• Multiplexing (Figure 1.13a): The VMM can multiplex multiple virtual machines (VMs) across different
hardware systems, allowing several VMs to run on various physical machines while sharing resources
efficiently.
• Suspension (Figure 1.13b): A VM can be suspended and stored in stable storage (like disk storage).
This means the VM's state is saved, and it can be resumed later without losing any data or progress.
• Provisioning/Resume (Figure 1.13c): After being suspended, a VM can be resumed or provisioned
on a new hardware platform. This allows the VM to be moved and started on a different machine
without loss of state.
• Migration (Figure 1.13d): A VM can be migrated from one physical machine to another. This
migration can happen with minimal downtime and is beneficial for load balancing, fault tolerance, or
system maintenance.
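A minimal sketch of these four primitives as a VM lifecycle, with purely illustrative class and method
names (not any real hypervisor API):

```python
# Minimal sketch of the primitive VM operations as a lifecycle state machine.
# Names are illustrative only.

class VM:
    def __init__(self, name, host):
        self.name, self.host, self.state = name, host, "running"

    def suspend(self):
        # Save the VM's state to stable storage so it can resume later.
        assert self.state == "running"
        self.state = "suspended"
        print(f"{self.name}: state saved to disk on {self.host}")

    def resume(self, host=None):
        # Provision/resume: restart the saved VM, possibly on a new host.
        assert self.state == "suspended"
        if host:
            self.host = host
        self.state = "running"
        print(f"{self.name}: resumed on {self.host}")

    def migrate(self, new_host):
        # Live migration: move a running VM to another physical machine.
        assert self.state == "running"
        print(f"{self.name}: migrating {self.host} -> {new_host}")
        self.host = new_host

vm = VM("vm1", "hostA")
vm.suspend()
vm.resume(host="hostB")   # provision/resume on a different machine
vm.migrate("hostC")       # migration with minimal downtime
```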
------------------------------------------------------------------------------------------------------------------
11) Illustrate the differences between Full Virtualization and host-based virtualization.
• Full Virtualization: The virtualization layer (VMM/hypervisor) sits directly on the bare hardware, with
no host OS underneath. Noncritical instructions run directly on the hardware, while critical instructions
are trapped and emulated in software by the VMM, giving performance close to native.
• Host-Based Virtualization: The virtualization layer is installed on top of an existing host OS, which
remains responsible for managing the hardware. Deployment is simpler, since the host setup is unchanged,
but requests from guest OSes must pass through the host OS, adding overhead and lowering performance.
------------------------------------------------------------------------------------------------------------------
13) How does OS-level virtualization enhance hardware-level virtualization, and what
are its benefits for cloud computing?
Challenges of Hardware-Level Virtualization:
1. Slow initialization due to each VM creating its image from scratch.
2. Storage issues from considerable repeated content among VM images.
Other disadvantages include:
• Slow performance and low density.
• The need for para-virtualization to modify the guest OS.
• Possible hardware modifications to reduce performance overhead.
OS-Level Virtualization as a Solution:
• Inserts a virtualization layer inside the operating system to partition physical resources.
• Creates multiple isolated virtual machines (VMs) within a single OS kernel.
Characteristics of OS-Level VMs:
• Referred to as Virtual Execution Environments (VEs), Virtual Private Systems (VPSes), or containers.
• VEs behave like real servers, with their own:
o Processes
o File systems
o User accounts
o Network interfaces (IP addresses, routing tables, firewall rules, etc.)
• VEs share the same operating system kernel but can be customized for different users.
Alternate Name:
• Known as single-OS image virtualization because all VEs use a single shared OS kernel.
Advantages of OS Extensions
Benefits of OS-Level Virtualization Compared to Hardware-Level Virtualization:
1. Minimal startup/shutdown costs, low resource requirements, and high scalability.
2. Ability to synchronize state changes between a VM and its host environment when needed.
Mechanisms to Achieve These Benefits:
1. All OS-level VMs share a single operating system kernel.
2. The virtualization layer allows VM processes to access host machine resources without modifying them.
**
Benefits for Cloud Computing:
• High Efficiency: Containers share the host OS, reducing resource usage.
• Rapid Scaling: Ideal for microservices and serverless computing.
• Improved Security: Isolated environments prevent application conflicts.
• Portability: Containers run consistently across different cloud platforms.
• Cost Reduction: Less resource consumption leads to lower cloud costs.
• Faster Deployment: Ideal for CI/CD pipelines in DevOps.
Example: Cloud Implementation
• AWS, Azure, and Google Cloud use OS-level virtualization for Kubernetes-based container
orchestration on top of VMs.
• Hybrid cloud deployments use hardware virtualization (VMs) for stability and containers for rapid
application scaling.**
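As an illustrative aside (not from the source material), the core mechanism behind OS-level
virtualization can be sketched with standard Unix primitives; real container runtimes such as Docker or
LXC add namespaces and cgroups on top of this:

```python
# Minimal sketch of the core idea behind OS-level virtualization (containers):
# give a process its own root file system while it shares the host kernel.
# Unix-only and requires root; real runtimes also isolate network, users,
# and resources via namespaces and cgroups.
import os

def run_in_container(new_root, command):
    pid = os.fork()
    if pid == 0:                        # child: becomes the "container"
        os.chroot(new_root)             # isolate the file-system view
        os.chdir("/")                   # start at the container's root
        os.execvp(command[0], command)  # replace child with the workload
    os.waitpid(pid, 0)                  # parent waits for the container to exit

# Hypothetical usage: /srv/ve1 must contain a minimal root fs with /bin/sh.
# run_in_container("/srv/ve1", ["/bin/sh", "-c", "hostname"])
```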
------------------------------------------------------------------------------------------------------------------
1) Define Cloud Computing? Explain its characteristics and benefits.
2) What are the types of VM architectures, and how do they help in making computing easier?
3) Explain Xen Architecture in detail.
4) What is the difference between LAN, SAN, and NAS, and how has Ethernet speed improved
networking for distributed computing?
5) Explain GPU Programming Model.
6) What is scalable computing over the Internet, and how does it use technologies like IoT and cloud
computing?
7) Explain Cluster Architecture in detail.
8) What is Virtualization? Explain full virtualization
9) List and explain the implementation levels of Virtualization.
10) What are various primitive VM operations in distributed computing environment.
11) Illustrate the differences between Full Virtualization and host-based virtualization.
12) Consider a program P where 25% of the work executes sequentially and the remaining 75% executes in
parallel. Calculate the speedup and efficiency, assuming a fixed workload.
Ans:
Steps:
1. Sequential Fraction (S): The fraction of the program that must be executed sequentially.
o Here, S = 0.25 (25%).
2. Parallelizable Fraction (P): The fraction that can be executed in parallel.
o P = 1 - S = 0.75 (75%).
3. Speedup (Amdahl's Law): With a fixed workload, speedup is calculated as:
Speedup = 1 / (S + P/N)
Where:
• S = sequential fraction (25%, or 0.25),
• P = parallel fraction (75%, or 0.75),
• N = number of processors (the parallel workload is divided evenly across N).
4. Efficiency: Efficiency is calculated as:
Efficiency = Speedup / N

Example Calculation:
With N = 4 processors (other values of N can be substituted):
1. Speedup:
Speedup = 1 / (0.25 + 0.75/4) = 1 / (0.25 + 0.1875) = 1 / 0.4375 ≈ 2.286
2. Efficiency:
Efficiency = 2.286 / 4 ≈ 0.571, i.e., about 57.1%.
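The same calculation can be generalized to any processor count with a small helper that applies the
fixed-workload Amdahl formula above:

```python
# Amdahl's law for a fixed workload: speedup and efficiency
# for a program that is 25% sequential and 75% parallel.

def amdahl(seq_fraction, n_processors):
    par_fraction = 1.0 - seq_fraction
    speedup = 1.0 / (seq_fraction + par_fraction / n_processors)
    efficiency = speedup / n_processors
    return speedup, efficiency

for n in (2, 4, 8, 16):
    s, e = amdahl(0.25, n)
    print(f"N={n:2d}: speedup={s:.3f}, efficiency={e:.1%}")
# N= 4: speedup=2.286, efficiency=57.1%  (matches the hand calculation)
```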
13) How does OS-level virtualization enhance hardware-level virtualization, and what are its benefits for
cloud computing?
Cloud Computing Question Bank - IA2
Module-3
1. Differentiate between public cloud, private cloud, and hybrid cloud models. Discuss their
advantages and disadvantages.
2. Explain the three main cloud service models (IaaS, PaaS and SaaS). Provide examples for
each
3. Explain the typical data centre networking structure. How does it support scalability, high
availability and performance in cloud environments?
4. What are the various design objectives of Cloud Computing?
5. Describe the components of a cloud ecosystem. How do these components support scalability
and efficiency?
6. What are the challenges in designing warehouse-scale cloud data centres?
7. Discuss the importance of data centre interconnection networks in cloud computing
8. Explain the interconnection of modular data centres with an example
9. What is an Inter-Module Connection network? Explain with an example
Module-4
10. Explain the major security concerns and risks faced by cloud users.
11. Discuss Various encryption techniques used for securing data in the cloud
12. Discuss security of database services.
13. Explain Security Risks Posed by Shared Images & Management OS.
14. Discuss how virtual machines are secured.
15. Explain reputation system design options.
Module-5
16. Explain the key features of cloud and grid computing platforms
17. Discuss the challenges and System issues in running a typical parallel program in distributed
systems
18. With a neat diagram, explain the data flow during a MapReduce job using Hadoop.
19. Describe the programming model and environment provided by Google App Engine
20. Discuss the architecture and components of OpenStack Nova with a diagram
21. Explain the programming environments and tools provided by Amazon AWS and Microsoft
Azure
22. Describe emerging cloud software environments and their significance in real-world
applications