Answers 1
The global exchange of cloud resources refers to the ability of cloud computing platforms
to share and deliver computing resources, such as storage, processing power, and
software, across the world via the Internet.
Key Points:
1. Worldwide Access
• Cloud services can be accessed from any location using the Internet.
• Cloud providers like Amazon, Google, and Microsoft have data centers in multiple
countries.
• These centers are interconnected over the Internet to provide scalable and
efficient services.
2. Data Privacy and Trust
• Users in different countries may hesitate to use foreign clouds due to data privacy
laws, trust, or regulatory issues.
• For example, European users may not feel comfortable storing sensitive data on
servers located in the U.S.
3. Clear Agreements
• To support global exchange, clear agreements between users and providers are
necessary.
4. Hybrid Cloud Adoption
• Many enterprises use hybrid clouds, combining local (private) infrastructure with
global (public) cloud services.
• This allows them to meet international demand while keeping sensitive data
secure.
Examples of Globally Available Cloud Services (Microsoft Azure):
1. Compute Services
• Azure Functions: Serverless compute for running code without managing servers.
2. Storage Services
• Blob Storage: Store large unstructured data like images and videos.
• Disk & File Storage: Persistent and shared storage for VMs and apps.
3. Networking Services
• Load Balancer & VPN Gateway: Distribute traffic and connect securely to on-
premises systems.
4. Security Services
• Security Center & Key Vault: Monitor security and store sensitive data securely.
5. AI and Machine Learning Services
• Azure Machine Learning: Build, train, and deploy machine learning models.
• Cognitive Services: Pre-built AI APIs for vision, speech, and language tasks.
6. Hybrid and IoT Services
• Azure Arc: Manage on-prem, multi-cloud, and edge resources with Azure tools.
• IoT Central: Fully managed IoT app platform with simplified setup.
• Digital Twins: Create digital replicas of real-world systems for simulation and
monitoring.
7. Migration Services
• App Service Migration Assistant: Shift web and .NET apps to Azure smoothly.
Levels of Virtualization (examples):
1. Instruction Set Architecture (ISA) Level
• For example, Intel binaries can run on PowerPC using binary translation techniques.
2. Hardware Level
• The hypervisor directly manages the hardware, such as the CPU, memory, and I/O devices.
3. Library Support Level
• For example, Windows applications can run on Linux using a library that mimics
Windows APIs.
Explain how Migration of Memory, Files, and Network Resources happen in cloud
computing.
In cloud computing, migration means moving a running virtual machine (VM) or its
components like memory, files, or network state from one physical machine to another.
This is essential for load balancing, fault tolerance, and maintenance.
1. Migration of Memory
• A live migration technique is used, where memory is copied while the VM is still
running.
• Dirty pages (pages that change during copying) are tracked and re-copied.
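The pre-copy idea can be seen in a short sketch; the following Python is an illustrative simulation (not a real migration API), with the page counts, rounds, and dirty-rate invented for the example.

```python
# Illustrative simulation of iterative pre-copy live migration:
# copy all pages, then keep re-copying pages dirtied in the meantime
# until the remaining dirty set is small enough to stop-and-copy.
import random

pages = set(range(1000))          # all memory pages of the VM
dirty = set(pages)                # round 1 copies everything

round_no = 0
while len(dirty) > 20:            # threshold for the final stop-and-copy
    round_no += 1
    copied = len(dirty)
    # while copying, the still-running VM dirties a shrinking set of pages
    dirty = set(random.sample(sorted(pages), copied // 4))
    print(f"round {round_no}: copied {copied} pages")

print(f"stop-and-copy: {len(dirty)} pages left, so downtime stays brief")
```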
2. Migration of Files
• VM files like disk images, system libraries, and configurations are stored in shared
storage systems.
• Network File System (NFS) or distributed file systems (like Google File System)
are used.
• Instead of physically copying files, the new machine accesses the same shared
storage.
Advantage: Fast and efficient file access.
Challenge: Requires file consistency and synchronization.
3. Migration of Network Resources
• Migrating VMs also requires preserving their network identity (like IP addresses).
• Virtual networking technologies allow the VM to retain its address even when
moved.
Intrusion Detection Systems (IDS) are used to detect unauthorized access, misuse, or
attacks on computing systems.
In a traditional physical system, it’s hard to monitor all activities without interfering with
the system itself. But with virtual machines (VMs), it's possible to detect attacks from
outside the VM, without affecting its internal processes.
"With VM-based intrusion detection, one can build a secure VM to run the intrusion
detection facility (IDS) outside all guest VMs."
How It Works
• A secure, privileged VM runs the IDS outside all guest VMs and, through the
hypervisor, monitors:
o Guest OS activities
o Network traffic
o System calls
o Disk operations
• All other VMs (guest VMs) continue their operations unaware of the monitoring.
Key Advantages
• Can detect:
o Virus activities
o Worm propagation
• Can inspect:
o VM memory states
o I/O operations
o Application behavior
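As a toy illustration of the idea (not a real introspection API), the sketch below assumes the hypervisor hands the secure VM a stream of guest events; the event tuples and signatures are hypothetical.

```python
# Toy signature-based check over guest events captured outside the VM.
SIGNATURES = {
    ("syscall", "open", "/etc/shadow"),      # suspicious file access
    ("network", "connect", "198.51.100.7"),  # known-bad address (example)
}

def inspect(events):
    """Yield an alert for every event that matches a known signature."""
    for event in events:
        if event in SIGNATURES:
            yield f"ALERT: {event}"

guest_events = [
    ("syscall", "open", "/home/user/notes.txt"),
    ("syscall", "open", "/etc/shadow"),
]
for alert in inspect(guest_events):
    print(alert)
```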
Explain reputation system design options.
In cloud computing, trust is very important since users do not control the infrastructure.
One way to manage trust is by using a reputation system.
1. What is Reputation?
• Reputation is a measure of trustworthiness built from past interactions.
• Example: A cloud provider with good uptime and secure service earns a high
reputation score.
2. Sources of Trust
a. Policies
b. Reputation
c. Recommendations
3. Uses of Reputation
• In access control: Only allow users with good reputation to access critical data.
“Trust of a party A to a party B for a service X is the measurable belief of A that B behaves
dependably for a specified period within a specified context.”
Conclusion
Reputation systems are essential tools in cloud environments to build trust and ensure
secure interactions between unknown entities. They combine past performance, policies,
and recommendations to help in decision making.
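A minimal sketch of one design option, feedback aggregation with time decay, is shown below; the decay factor and ratings are invented for illustration.

```python
# Minimal reputation score: exponentially decayed average of past ratings,
# so recent interactions count more than old ones.
def reputation(ratings, decay=0.9):
    """ratings: oldest-to-newest list of scores in [0, 1]."""
    score, weight = 0.0, 0.0
    for age, r in enumerate(reversed(ratings)):  # age 0 = newest rating
        w = decay ** age
        score += w * r
        weight += w
    return score / weight if weight else 0.0

print(round(reputation([0.2, 0.8, 0.9, 1.0]), 3))  # recent good service dominates
```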
What are the various system issues for running a typical parallel program in either
parallel or distributed manner?
1. Partitioning
a) Computation Partitioning
• It requires identifying parallel parts of the program that can run independently.
b) Data Partitioning
• The input data is divided into pieces that different workers can process independently.
2. Mapping
• Tasks and data pieces are assigned to workers (processors or machines).
• The aim is to distribute load evenly and make efficient use of resources.
3. Synchronization
• Prevents race conditions (when two workers access the same data simultaneously).
• Maintains data dependency so that a worker waits for data from another if needed.
4. Communication
• Workers exchange intermediate data and results, through shared memory or messages over the network.
5. Scheduling
• If there are more tasks than available resources, the scheduler prioritizes them.
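These issues can be seen together in a small sketch: the example below partitions the data, maps chunks onto worker processes, and lets the pool handle synchronization and scheduling (a simplified illustration, not a full framework).

```python
# Partitioning, mapping, and scheduling in miniature with multiprocessing.
from multiprocessing import Pool

def work(chunk):                      # computation partition: per-chunk sum
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = 4                             # data partitioning into 4 chunks
    chunks = [data[i::n] for i in range(n)]
    with Pool(processes=n) as pool:   # mapping chunks onto workers
        partials = pool.map(work, chunks)  # pool schedules and synchronizes
    print(sum(partials))              # combining results (communication)
```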
Conclusion
In summary, the main system issues for running a parallel program are:
• Partitioning
• Mapping
• Synchronization
• Communication
• Scheduling
All these ensure the program runs efficiently across multiple computing resources, either
in a parallel or distributed environment.
With a neat diagram, explain the OpenStack Nova system architecture
OpenStack is a cloud operating system that helps you build and manage private or public
clouds. It controls compute, storage, and networking resources using a dashboard or API.
Nova is the compute service of OpenStack: it is responsible for creating, running, and
managing virtual machines (VMs) in the cloud.
1. Cloud Controller
• Manages the overall cloud operation like scheduling, resource allocation, and VM
creation.
2. API Server
• Users interact with Nova using API requests (e.g., to launch or delete a VM).
3. User Manager
• Manages users, projects, and their access credentials.
4. S3 Service (Tornado)
• A Tornado-based, S3-compatible object store used to hold VM images.
6. Storage
• Volume services provide persistent block storage for VMs.
7. Nodes
• Compute nodes run the hypervisor and host the virtual machines.
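Interaction with Nova's API server is typically scripted; the sketch below uses the openstacksdk Python library, assuming a configured cloud profile named mycloud and that the referenced image, flavor, and network names already exist.

```python
# Minimal sketch: asking Nova (via openstacksdk) to boot a VM.
import openstack

conn = openstack.connect(cloud="mycloud")          # assumed clouds.yaml entry
image = conn.compute.find_image("ubuntu-22.04")    # assumed image name
flavor = conn.compute.find_flavor("m1.small")      # assumed flavor name
net = conn.network.find_network("private")         # assumed network name

server = conn.compute.create_server(
    name="demo-vm", image_id=image.id, flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)      # scheduler picks a node
print(server.status)                               # ACTIVE once booted
```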
Google App Engine (GAE) is a Platform as a Service (PaaS) that allows developers to build
and deploy web applications on Google’s cloud infrastructure.
Supported Languages
• Java: Comes with an Eclipse plug-in and the Google Web Toolkit (GWT).
• Python: Supported with its own runtime and SDK for local development and testing.
Data Management
• Applications store data in Google's scalable services such as the Datastore.
Key Features:
• Secure Data Connection (SDC): Allows tunneling from a secure intranet through
firewalls using tunnel servers.
• Google Data APIs: Lets applications access services like Docs, Maps, YouTube,
etc.
• Free Tier: Limited usage is free, making it ideal for small apps or student projects.
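A minimal GAE app sketch follows (Python standard environment); it assumes the Flask package and an app.yaml declaring the runtime, as noted in the comments.

```python
# main.py – smallest useful App Engine (standard environment) app.
# Assumes an app.yaml beside it containing:  runtime: python39
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Google App Engine!"

# Deploy with `gcloud app deploy`; App Engine scales instances automatically.
```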
What is Virtualization?
Virtualization is a technique that creates virtual versions of computing resources (hardware, OS, or applications), letting multiple isolated workloads share one physical machine.
• Hardware-level virtualization happens at the hardware level using a hypervisor (e.g., VMware, Xen).
OS-Level Virtualization:
• Instead of creating full VMs, it allows multiple isolated user spaces (containers) under the same OS
kernel.
Comparison (Boot Time): a VM is slower because a guest OS must boot inside it; a container starts near-instantly because no OS boot is needed.
Benefits for Cloud Computing:
• Multiple containers can be deployed inside a VM, offering scalability and isolation.
• Containers share the host OS, reducing memory and CPU usage.
• Simplified management.
Conclusion
OS-level virtualization adds a lightweight, fast, and resource-efficient layer over hardware
virtualization. For cloud computing, this means better scalability, lower costs, and faster service
delivery, making it a key enabler of modern cloud-native applications.
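The boot-time difference is easy to demonstrate with the Docker SDK for Python; this sketch assumes the docker package, a running Docker daemon, and uses the public alpine image.

```python
# Containers start in milliseconds because no guest OS has to boot.
import docker

client = docker.from_env()                       # talk to the local daemon
output = client.containers.run(
    "alpine",                                    # tiny public base image
    ["echo", "hello from a container"],
    remove=True,                                 # clean up after exit
)
print(output.decode())
```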
What are the various design objectives of Cloud Computing?
Cloud computing was designed with several clear goals in mind. These design objectives ensure
that cloud systems are efficient, scalable, secure, and cost-effective for users and providers.
1. Shifting Computing from Desktops to Data Centers
• Move software, storage, and computing power from personal devices to central cloud data
centers.
2. Service Provisioning and Cloud Economics
• Cloud providers sign SLAs with users to ensure resource efficiency and cost control.
3. Scalability in Performance
• Cloud systems should easily scale up or down based on workload and user demand.
4. High Quality of Service
• Maintain reliable, fast, and consistent service across users and locations.
Components of a Cloud Ecosystem:
1. Cloud Users/Consumers
• Individuals or organizations who demand computing services like storage, VMs, and
applications.
2. Cloud Manager
• Provides virtualized resources over an IaaS platform.
3. Virtual Infrastructure (VI) Managers
• Allocate VMs to the available server clusters.
4. VM Managers (Hypervisors)
• Handle the VMs installed on individual host machines.
• APIs (e.g., Amazon EC2 WS, ElasticHosts REST) allow automation and integration.
How These Components Support Scalability and Efficiency:
Scalability
• VI managers and hypervisors can provision additional VMs on demand as workload grows.
Efficiency
• Virtualization multiplexes many users onto shared hardware, raising utilization and lowering cost.
Conclusion:
Together, the components of a cloud ecosystem coordinate, manage, and optimize resources,
ensuring that services are scalable, efficient, and reliable for both providers and users.
Discuss the importance of data centre interconnection networks in cloud computing
In cloud computing, data centers are made up of thousands of servers working together. These servers
must communicate efficiently. That’s where interconnection networks play a vital role.
2. Support Distributed Operations
• Essential for file sharing, job scheduling, and data transfers in distributed systems.
3. Ensure Scalability
• Scalable designs like fat-tree, BCube, and MDCube support easy expansion.
4. Handle Large Data Movements
• Cloud apps (like MapReduce) need to move huge volumes of data between servers.
5. Provide Fault Tolerance
• With hot-swappable components and smart routing, services continue without downtime.
A modular data center is built using pre-fabricated container units, each containing hundreds or
thousands of servers. These containers are easy to deploy, scale, and relocate, making them
ideal for modern cloud computing.
Inter-Module Connection Networks:
An inter-module connection network connects multiple modular data center units (containers) together
to form a larger cloud infrastructure. Each container (module) contains hundreds or thousands of
servers, usually connected internally using BCube. To scale up and build massive data centers, these
containers must be interconnected using special high-speed networks.
BCube:
• It provides multiple paths between nodes for fault tolerance and high bandwidth.
What is MDCube?
• MDCube (Modular Data Center Cube) is built by connecting multiple BCube containers.
• It uses the existing BCube switches to link containers in a virtual hypercube structure.
• Supports scalable and fault-tolerant communication between containers.
How It Works (Example: 2D MDCube):
• Each container is a BCube, and they are connected using high-speed links.
• Supports cloud applications that need high-speed data transfer across modules.
Advantages:
• Scalable, fault-tolerant, high-bandwidth interconnection of containers.
Conclusion:
Interconnecting modular data centers using BCube and MDCube creates a flexible, powerful,
and scalable cloud infrastructure. This design supports modern cloud applications with
efficiency and reliability.
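The scaling claims can be made concrete with the standard size formulas from the data-center networking literature: a BCube of level k built from n-port switches hosts n^(k+1) servers, and a k-ary fat-tree hosts k³/4. A small calculator:

```python
# Server counts for common data-center topologies (standard formulas).
def bcube_servers(n: int, k: int) -> int:
    """A BCube of level k built from n-port switches holds n**(k+1) servers."""
    return n ** (k + 1)

def fat_tree_hosts(k: int) -> int:
    """A k-ary fat-tree (k even) supports k**3 / 4 hosts."""
    return k ** 3 // 4

print(bcube_servers(n=8, k=3))   # 4096 servers in one BCube_3 container
print(fat_tree_hosts(k=48))      # 27648 hosts with 48-port switches
```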
In cloud computing, encryption is the main method used to protect sensitive data, both at rest
(stored) and in transit (being transferred). It ensures that unauthorized users cannot access the
data, even if they manage to intercept or steal it.
1. Fully Homomorphic Encryption (FHE)
• Allows computation directly on encrypted data.
• Ideal for cloud because data never needs to be exposed during computation.
• Challenge: still slow in practice; for example, early FHE systems took minutes for simple operations.
2. Order-Preserving Encryption (OPE)
• Preserves the order of plaintexts in the ciphertexts, so range queries can run over encrypted data.
3. Searchable Symmetric Encryption (SSE)
• The client searches over encrypted data; the server returns encrypted results, which are then decrypted by the client.
• SSE supports:
o Fuzzy search
4. AWS Key Management Service (KMS)
• Encrypts data stored in AWS services like S3, RDS, EBS, etc.
5. Encryption in Transit
• Data sent over public networks must be encrypted using protocols like TLS/SSL.
• Encryption with strict key access control helps limit such risks.
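For the AWS case, a hedged boto3 sketch is shown below; the bucket name and key alias are placeholders, and AWS credentials are assumed to be configured.

```python
# Encrypting an object at rest with SSE-KMS via boto3 (names are placeholders).
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-secure-bucket",       # hypothetical bucket
    Key="reports/q1.csv",
    Body=b"sensitive,data\n",
    ServerSideEncryption="aws:kms",       # S3 asks KMS to encrypt at rest
    SSEKMSKeyId="alias/example-key",      # hypothetical KMS key alias
)
# In transit the call is already protected: boto3 uses HTTPS (TLS) endpoints.
```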
Conclusion
Encryption is essential for cloud security. Techniques like FHE, OPE, SSE, and real-world tools like
AWS KMS ensure that data remains protected — even while being stored, processed, or
transmitted in the cloud.
Explain the key features of cloud and grid computing platforms
Introduction:
Cloud and Grid computing platforms are used to perform large-scale computing tasks by
connecting multiple resources.
They have different goals but share some key features like resource sharing, scalability, and
distributed computing.
3. Data Processing Models
• Cloud offers models like MapReduce, Dryad, and Twister for parallel data processing.
4. Programming Support
• Cloud provides APIs for Java, Python, PHP, Ruby, .NET, etc.
5. Workflow Management
• Both platforms provide tools to compose and manage multi-step job workflows.
7. Fault Tolerance
• Failed tasks are detected and re-executed so that long-running jobs can complete.
Conclusion:
Cloud and Grid platforms both aim to provide high-performance computing, but:
➢ Grid computing focuses on sharing resources across organizations, mainly for scientific
workloads, while cloud computing delivers on-demand, pay-as-you-go services with elastic scaling.
Hadoop MapReduce Engine
➢ Similar to HDFS, the MapReduce engine also has a master/slave architecture,
consisting of a single JobTracker as the master and a number of TaskTrackers as the
slaves (workers).
➢ The JobTracker manages the MapReduce job over a cluster and is responsible for
monitoring jobs and assigning tasks to TaskTrackers.
➢ The TaskTracker manages the execution of the map and/or reduce tasks on a single
computation node in the cluster.
➢ Each TaskTracker manages multiple execution slots based on CPU threads (M * N
slots).
➢ Each data block is processed by one map task, ensuring a direct one-to-one mapping
between map tasks and data blocks.
Running a Job in Hadoop
➢ Job Execution Components: A user node, a JobTracker, and multiple TaskTrackers
coordinate a MapReduce job.
➢ Job Submission: The user node requests a job ID, prepares input file splits, and
submits the job to the JobTracker.
Dryad
➢ Flexibility Over MapReduce: Dryad allows users to define custom data flows using
directed acyclic graphs (DAGs), unlike the fixed structure of MapReduce.
➢ DAG-Based Execution: Vertices represent computation engines, while edges are
communication channels. The job manager assigns tasks and monitors execution.
➢ Job Manager & Name Server: The job manager builds, deploys, and schedules jobs,
while the name server provides information about available computing resources.
➢ 2D Pipe System: Unlike traditional UNIX pipes (1D), Dryad's 2D distributed pipes
enable large-scale parallel processing across multiple nodes.
➢ Fault Tolerance: Handles vertex failures by reassigning jobs and channel failures by
recreating communication links.
➢ Broad Applicability: Supports scripting languages, MapReduce programming, and
SQL integration, making it a versatile framework.
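The map/shuffle/reduce flow that the JobTracker and TaskTrackers orchestrate can be mimicked in plain Python; this is a toy word count for illustration, not Hadoop code.

```python
# Toy MapReduce word count: map tasks emit (word, 1) pairs per data block;
# the shuffle groups pairs by key; reduce sums each group.
from collections import defaultdict

def map_task(block):
    for word in block.split():
        yield word.lower(), 1

def reduce_task(pairs):
    counts = defaultdict(int)
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

blocks = ["the quick brown fox", "the lazy dog and the fox"]
pairs = [kv for block in blocks for kv in map_task(block)]  # one map per block
print(reduce_task(pairs))   # {'the': 3, 'fox': 2, ...}
```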
4. Searchable Symmetric Encryption (SSE) – Protects database queries from explicit data
leakage while enabling single-keyword, multi-keyword, ranked, and Boolean searches.
5. Private Cloud Risks – While firewalls protect against outsiders, insider threats remain a
concern. Access restrictions and monitoring help mitigate risks.
By utilizing OPE and SSE, encrypted databases can support efficient searches while enhancing data
security. However, insider threats and query pattern exposure require additional safeguards.

SECURITY OF DATABASE SERVICES
DBaaS allows cloud users to store and manage their data, but security risks include data integrity,
confidentiality, and availability concerns.
Major Security Threats:
1. Authorization & Authentication Issues – Weak access controls can lead to data leaks or
unauthorized modifications.
2. Encryption & Key Management – Poor encryption handling exposes data to external attacks.
3. Insider Threats – Superusers with excessive privileges may misuse confidential data.
4. External Attacks – Methods like spoofing, sniffing, man-in-the-middle, and DoS attacks can
compromise cloud databases.
5. Multi-Tenancy Risks – Shared environments increase data recovery vulnerabilities if proper
sanitation isn't enforced.
6. Data Transit Risks – Without encryption, data transfer over public networks is vulnerable.
7. Data Provenance Challenges – Tracking data origin and movement requires complex metadata
analysis.
8. Lack of Transparency – Users may not know where their data is stored, complicating security
assessments.
9. Replication & Consistency Issues – Synchronizing data across multiple cloud locations is
difficult.
10. Auditing & Compliance Risks – Third-party audits can violate privacy laws if data is stored in
restricted locations.
Mitigation Strategies:
• Implement strong authentication and authorization protocols.
• Optimize data replication and consistency mechanisms for reliability.
Cloud databases enhance efficiency, but proper security measures are essential to prevent
unauthorized access, data breaches, and operational failures.

OPERATING SYSTEM SECURITY
An OS manages hardware resources while protecting applications from malicious attacks like
unauthorized access, code tampering, and spoofing. Security policies include access control,
authentication, and cryptographic protection.
Key Security Concerns:
1. Mandatory vs. Discretionary Security – Mandatory policies enforce strict security, while
discretionary policies leave security decisions to users, increasing risks.
2. Trusted Paths & Applications – Trusted software needs secure communication mechanisms
to prevent impersonation.
3. OS Vulnerabilities – Commodity OSs often lack multi-layered security, making them
susceptible to privilege escalation.
4. Malicious Software Threats – The Java Security Manager uses sandboxing but cannot prevent all
security bypasses.
5. Closed vs. Open Systems – ATMs, smartphones, and game consoles have embedded
cryptographic keys for stronger authentication.
6. Weak Isolation Between Applications – A compromised app can expose the entire system.
7. Application-Specific Security – Certain applications, like e-commerce, require extra
protection like digital signatures.
8. Challenges in Distributed Computing – OS security gaps affect application authentication and
secure user interactions.
A secure OS is crucial, but additional security measures like encryption, auditing, and
authentication are necessary for comprehensive protection.

VIRTUAL MACHINE SECURITY
Virtual Machine (VM) security primarily relies on hypervisors for isolation and access control,
reducing risks compared to traditional OS security.
Key Aspects of VM Security:
1. Hypervisor-Based Security – Ensures memory, disk, and network isolation for VMs.
2. Trusted Computing Base (TCB) – A compromised TCB affects entire system security.
3. VM State Management – Hypervisors can save, restore, clone, and encrypt VM states.
4. Attack Prevention – Dedicated security VMs and intrusion detection systems enhance
protection.
5. Inter-VM Communication – Faster than between physical machines, enabling secure file migration.
Security Threats:
Hypervisor-Based Threats:
• Resource starvation & DoS due to misconfigured limits or rogue VMs.
• VM side-channel attacks exploiting weak inter-VM isolation.
• Buffer overflow vulnerabilities in hypervisor-managed processes.
VM-Based Threats:
Virtualization enhances security but requires proper configurations, access control, and
monitoring to prevent exploits.
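Several of the mitigations above (encryption at rest, key management) can be illustrated with the cryptography library; a minimal sketch, assuming the package is installed and noting that real deployments fetch keys from a KMS/HSM rather than generating them in memory.

```python
# Symmetric encryption of a database field with Fernet (AES-based).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in production: fetch from a KMS/HSM
f = Fernet(key)

token = f.encrypt(b"patient-record-42")   # ciphertext stored in the DB
print(f.decrypt(token))                   # only key holders can read it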
4.1.1.5 Data-Center Networking Structure
The core of a cloud is a server cluster made of many virtual machines (VMs).
Compute nodes do the work; control nodes manage and monitor cloud tasks.
Gateway nodes connect users to the cloud and handle security.
Clouds create virtual clusters for users and assign jobs to them.
Unlike old systems, clouds handle changing workloads by adding or removing resources as needed.
Private clouds can support this flexibility if well designed.

A private cloud is built and used within one organization (not public).
It is owned and managed by the company itself.
Only the organization and its partners can access it, not the general public.
It does not sell services over the Internet like public clouds do.
Private clouds give flexible, secure, and customized services to internal users.
They allow the company to keep more control over data and systems.
Private clouds may affect cloud standardization, but offer better customization for the company.

Saves money and time by avoiding hardware setup.
Ideal for companies needing flexible and powerful IT resources.

4.1.3 Platform-as-a-Service (PaaS)
PaaS provides a platform to build, test, and deploy applications.
It includes tools, libraries, databases, and runtime environments.

Software-as-a-Service (SaaS)
Data is stored in the cloud by the provider.
Saves time, money, and effort.
Great for businesses and individuals.
Examples of SaaS:
Gmail
Google Docs
Microsoft 365
Salesforce
Zoom
BCS601
Note: 01. Answer any FIVE full questions, choosing at least ONE question from each MODULE.
Module-1
1. Physical Model
Represents the hardware layout of the system.
• Nodes: Devices (servers, PCs) that process and communicate.
• Links: Communication channels (wired/wireless) like point-to-point or
broadcast.
• Middleware: Software that enables communication, fault tolerance, and
synchronization.
• Network Topology: Structure of node connections (bus, star, ring, mesh).
• Protocols: TCP, UDP, MQTT used for secure and efficient data exchange.
2. Architectural Model
Defines the system's organization and interaction patterns.
• Client-Server Model: Centralized server responds to client requests (e.g.,
web services).
• Peer-to-Peer (P2P): All nodes are equal and share services (e.g.,
BitTorrent).
• Layered Model: Organized into layers for modular design and abstraction.
• Microservices Model: Small, independent services performing specific
functions, enhancing scalability.
3. Fundamental Model
Covers key concepts and formal behaviors.
• Interaction Model:
o Message Passing: Synchronous/asynchronous communication.
o Publish/Subscribe: Topics-based messaging.
• Failure Model:
o Types: Crash, omission, timing, Byzantine failures.
o Handling: Replication, fault detection, recovery methods.
• Security Model:
o Authentication: Passwords, keys, multi-factor verification.
o Encryption: Protects data confidentiality.
o Data Integrity: Hashing and digital signatures to prevent tampering.
OR
Q.02 a Write short notes on Peer-to-Peer network families. L2 10
1. Definition
• P2P architecture is a distributed model where each node (peer) acts as
both client and server, sharing resources without a central authority.
2. Characteristics
• Decentralization: No central server; peers communicate directly.
• Scalability: Easily grows to support more users.
• Fault Tolerance: Network survives even if some nodes fail.
• Resource Sharing: Peers contribute bandwidth, storage, and data.
• Autonomy: Each peer manages its own data and functions.
5. Bootstrapping in P2P
• Helps new peers discover others and connect.
• Can use centralized servers, peer exchange, or DHTs.
6. Data Management
• Storage: Distributed across peers.
• Retrieval: Uses search algorithms.
• Replication: Increases availability.
• Consistency: Ensures all replicas are up to date.
7. Routing Algorithms
• Flooding: Sends to all neighbors — high traffic.
• Random Walk: Selects random paths — less overhead.
• DHTs: Efficient lookups via hash tables (e.g., Kademlia).
• Small-World Routing: Uses short paths and local/global links.
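The DHT idea from the routing list can be sketched with consistent hashing; this toy version (peer names and the looked-up key are invented) shows how a key finds its owner with no central server.

```python
# Toy DHT-style lookup: hash peers and keys onto one ring; a key belongs
# to the first peer clockwise from its hash (the idea behind Chord/Kademlia).
import hashlib
from bisect import bisect_right

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

ring = sorted((h(f"peer-{i}"), f"peer-{i}") for i in range(8))

def lookup(key: str) -> str:
    idx = bisect_right([p[0] for p in ring], h(key)) % len(ring)
    return ring[idx][1]

print(lookup("movie.mp4"))   # deterministic owner, no central directory
```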
8. Advantages
• No central point of failure
• Efficient resource utilization
• Cost-effective
• High availability due to replication
9. Challenges
• Difficult to scale with efficiency
• Security risks from malicious nodes
• Inconsistent content quality
• Complex consistency and data management
2. Data Loss
• (i) Attacks such as malware, hacking, or unauthorized access can result in
loss or theft of sensitive data.
• (ii) Loss of intellectual property, customer information, or confidential
business records affects compliance and trust.
3. Reputational Loss
• (i) A successful cyber attack damages an organization’s public image and
brand value.
• (ii) Customers may lose confidence, leading to a decline in user base and
revenue.
4. Operational Loss
• (i) Cyber threats like Denial of Service (DoS) can bring down servers,
disrupting business operations.
• (ii) Delays in service delivery and system downtime reduce productivity
and efficiency.
Module-2
Q.03 a Explain in detail about Implementation Levels of virtualization. L2 10
1. Instruction Set Architecture (ISA) Level Virtualization
1. Emulates a guest ISA on a host with a different ISA.
2. Allows execution of legacy or cross-platform binary code.
3. Achieved through code interpretation or dynamic binary translation.
4. Very flexible but has low performance due to instruction overhead.
5. Adds a software translation layer between compiler and processor.
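The interpretation overhead is easy to see in a toy sketch: a host-language loop decodes each guest instruction one at a time (the three-instruction ISA here is invented for illustration).

```python
# Toy ISA-level emulation by interpretation: every guest instruction costs
# several host operations, which is why pure interpretation is slow.
def interpret(program, regs):
    for op, dst, src in program:
        if op == "LOAD":   regs[dst] = src          # load immediate
        elif op == "ADD":  regs[dst] += regs[src]   # register-register add
        elif op == "HALT": break
    return regs

guest = [("LOAD", "r0", 2), ("LOAD", "r1", 40),
         ("ADD", "r0", "r1"), ("HALT", None, None)]
print(interpret(guest, {"r0": 0, "r1": 0}))  # {'r0': 42, 'r1': 40}
```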
4. Library Support Level Virtualization
1. Virtualizes the API layer between apps and OS.
2. Allows apps to run in different environments (e.g., WINE for Windows
apps on UNIX).
3. Less overhead than full system virtualization.
4. Not all applications may work correctly.
5. Useful for GPU virtualization (e.g., vCUDA).
5. User/Application-Level Virtualization
1. Virtualizes individual applications as isolated units.
2. Examples include JVM (.java) and .NET CLR (.NET apps).
3. Easy to deploy and portable across platforms.
4. Limited isolation compared to lower-level virtualization.
5. Used in sandboxing, application streaming, and secure app deployment.
b Explain how Migration of Memory, Files, and Network Resources happen in cloud computing. 2, 3 7
1. Memory Migration
• Moves the VM’s memory state from source to destination host.
• Internet Suspend-Resume (ISR) technique uses temporal locality to avoid
redundant transfers.
• Tree-based file structures allow only changed files to be sent.
• ISR results in high downtime, suitable for non-live migrations.
• Efficient memory handling is essential due to large size (MBs to GBs) and
need for speed.
3. Network Migration
• Migrated VMs must retain all open network connections.
• VMs use virtual IP/MAC addresses, independent of host hardware.
• ARP replies notify the network of new locations (on LAN).
• Live migration enables no downtime, with iterative precopy or postcopy
techniques.
• Precopy allows continuous execution but may suffer network load;
postcopy reduces data size but increases downtime.
4. Live Migration Using Xen
• Xen supports live VM migration with minimal service interruption.
• Dom0 manages migration, using send/receive and shadow page tables.
• RDMA enables fast transfer by bypassing TCP/IP stack and CPU.
• Memory compression is used to reduce data size and overhead.
• Migration daemons track and send modified pages based on dirty
bitmaps.
OR
Q.04 a Explain VM based intrusion detection system. L2 10
👥 Responsibility Clarification
• Providers:
o Deploy and manage IDS (host, hypervisor, virtual network).
o Must notify customers (via SLA) of any relevant attacks.
• Customers:
o Deploy HIDS inside VMs.
o Integrate IDS into their monitoring systems.
o Must negotiate visibility/alerts via contracts.
b Write steps for Creating a Virtual Machine: Configure and deploy a virtual machine with specific CPU and memory requirements in Google Cloud. L2 7
[or]
▪ Example:
• Run: gcloud compute instances create demo-vm --zone=us-central1-a --custom-cpu=2 --custom-memory=8GB (an illustrative command; the instance name, zone, and sizes are placeholders).
✅ 1. Definition
• IaaS (Infrastructure as a Service): Provides virtualized computing
resources like servers, storage, and networking.
• PaaS (Platform as a Service): Offers a development environment with
tools to build, test, and deploy applications.
• SaaS (Software as a Service): Delivers fully functional software
applications over the internet.
✅ 2. Users
• IaaS: Network architects, IT administrators, skilled developers.
• PaaS: Software developers and programmers.
• SaaS: End-users, business teams, consumers.
✅ 3. Technical Knowledge Required
• IaaS: High technical knowledge.
• PaaS: Moderate coding knowledge.
• SaaS: No technical knowledge needed.
✅ 4. User Controls
• IaaS: Full control (OS, runtime, middleware, applications).
• PaaS: Control over app and data only.
• SaaS: No control (everything managed by provider).
✅ 5. Examples
• IaaS: AWS EC2, Microsoft Azure, Google Compute Engine.
• PaaS: Google App Engine, AWS Elastic Beanstalk, IBM Cloud.
• SaaS: Google Workspace, Salesforce, Zoom, Microsoft 365.
✅ 6. Use Cases
• IaaS: Hosting websites, big data analytics, backup and recovery.
• PaaS: Developing web/mobile apps, APIs, microservices.
• SaaS: Email, CRM, video conferencing, document collaboration.
b Explain the different cloud deployment models. L2 7
2. Private Cloud
• Used by a single organization; exclusive access.
• Hosted on-premises or by a third party.
• Offers greater control and security.
Advantages:
• Full control over resources and policies.
• High data security and privacy.
• Supports legacy systems.
• Customizable for specific needs.
Disadvantages:
• Expensive to implement and maintain.
• Limited scalability compared to public cloud.
3. Hybrid Cloud
• Combines public and private clouds using proprietary software.
• Allows data and apps to move between environments.
Advantages:
• Flexible and customizable.
• Cost-effective (uses public cloud scalability).
• Better security with data segmentation.
Disadvantages:
• Complex to manage.
• Slower data transmission due to integration.
4. Community Cloud
• Shared by multiple organizations with similar interests or concerns.
• Managed internally or by a third-party.
Advantages:
• Cost-effective due to shared resources.
• Good security and collaboration.
• Enables efficient data and infrastructure sharing.
Disadvantages:
• Limited scalability.
• Customization is difficult due to shared setup.
5. Multi-Cloud
• Uses multiple public cloud providers simultaneously.
• Not limited to a single vendor or architecture.
Advantages:
• Mix and match best features of different providers.
• Low latency (choose nearest regions).
• High availability and fault tolerance.
Disadvantages:
• Complex architecture.
• Potential security risks due to integration gaps.
✅ Choosing the Right Cloud Deployment Model
Factors to Consider:
• Cost – Budget for infrastructure and service.
• Scalability – Ability to scale with growing demand.
• Ease of Use – Skill level required to manage the cloud.
• Compliance – Adherence to legal and regulatory standards.
• Privacy – Type and sensitivity of data being stored/processed.
➡️ No one-size-fits-all – the best deployment model depends on current business
requirements. You can switch models as your needs evolve.
OR
Q.06 a Write short notes on global exchange of cloud resources. L2 10
❖ Global Exchange of Cloud Resources is the process of using cloud
services across different regions and countries of the world.
❖ It allows businesses and organizations to deploy, manage, and grow their
infrastructure all over the world.
❖ This process is made possible by cloud providers such as Amazon Web
Services (AWS), Microsoft Azure, and Google Cloud, which provide data
centers in different regions of the world.
❖ Such services enable organizations to provide resources cost-effectively,
with little delay, and achieve high availability as well as regional
compliance.
1. Geographical Distribution
• Cloud resources are hosted across a network of global data centers
spread across various regions.
• This allows organizations to serve users from different locations with
minimal delay, improving the overall user experience.
2. Load Balancing
• Cloud service providers offer load balancing across regions.
• This ensures that computing power and resources are efficiently distributed
to meet fluctuating demands across different regions.
3. Redundancy and Availability
• The global exchange enables redundancy by hosting data in multiple
locations.
• In the event of a system failure in one region, data and applications can still
be accessed from other regions, ensuring high availability.
4. Latency Reduction
• By locating resources closer to the end-users, latency is reduced
significantly.
• This enhances the performance of cloud-hosted applications, providing
users with faster access to services regardless of their physical location.
5. Cost Efficiency
• Pay-as-you-go models and cost-effective regional pricing allow
businesses to optimize their cloud expenditures.
• Companies only pay for the resources they use in specific regions, enabling
better cost management.
6. Disaster Recovery
• The global nature of cloud resources ensures that businesses can
implement effective disaster recovery strategies.
• By storing data across different regions, organizations can recover from
outages in one region by switching to another region with no significant
data loss or downtime.
7. Regulatory Compliance
• Many countries have strict data residency and privacy laws.
• The global distribution of cloud resources allows companies to adhere to
local regulations by keeping data within the country or region where
required.
Module-4
Q.07 a Discuss security of database services. L2 10
Cloud Database Security refers to the strategies, technologies, and tools employed
to protect cloud-hosted databases from unauthorized access, cyberattacks, data
breaches, and other malicious threats. It ensures the integrity, confidentiality, and
availability of data stored in cloud environments, and is essential for preventing
data loss, exposure, and misuse.
Importance of Cloud Database Security
1. Protection Against Cyber Threats: As more enterprises migrate to the
cloud, protecting sensitive data from hackers, malware, and unauthorized
access becomes a significant concern.
2. Governance and Compliance: Maintaining regulatory compliance and
meeting industry standards is crucial for avoiding legal repercussions and
fines.
3. Maintaining Customer Trust: Proactive security measures ensure that
customers’ data is protected, helping businesses retain trust.
4. Data Availability: Cloud database security ensures that critical data
remains accessible while preventing unauthorized disruptions.
5. Business Continuity: Effective security protocols are vital for ensuring the
continuous operation of cloud services without unexpected downtime.
b Explain the security risks posed by shared images and the management OS. L2 10
Security Risks Posed by Shared Images:
1. Malicious Code Injection:
o Shared images can be pre-configured with malicious software that
might go undetected during the creation or deployment of the
image. When other users deploy the image, they might
unknowingly execute this malicious code.
2. Unpatched Vulnerabilities:
o If the shared image is not updated regularly, it may contain outdated
software with known vulnerabilities. This exposes the system to
exploits and attacks.
3. Data Leakage:
o Sensitive data stored in a shared image may be accessible to other
users or systems using the image. Improper data handling within
shared images can lead to unauthorized data access.
4. Privilege Escalation:
o Shared images might contain embedded administrator or root
privileges. If the image is not securely configured, it can allow
unauthorized users to escalate their privileges and gain control of
the system.
5. Lack of Isolation:
o In some cases, shared images may not have proper isolation
between different users or virtual machines. This can lead to
unintentional access to data or resources belonging to other users.
6. Compliance and Legal Risks:
o Shared images may not meet the required security and privacy
standards for regulated industries. This poses a risk of non-
compliance with laws such as GDPR, HIPAA, or PCI-DSS.
7. Insecure Configuration:
o Misconfigured settings in a shared image could lead to weak
security controls, allowing attackers to exploit weaknesses in the
system.
8. Inadequate Monitoring:
o Without adequate monitoring, it becomes difficult to detect
suspicious activities related to shared images, such as unauthorized
access or malicious activity.
Security Risks Posed by the Management OS:
7. Unpatched Vulnerabilities:
o The management OS may contain vulnerabilities that can be
exploited by attackers if not properly patched. This makes the OS a
prime target for security breaches.
8. Insider Threats:
o Employees or individuals with access to the management OS may
intentionally or unintentionally cause damage, leak data, or
compromise system security.
9. Misconfigurations:
o Misconfigurations in the management OS can lead to
vulnerabilities, including incorrect user permissions, weak
passwords, or incorrect networking settings, all of which increase
the risk of exploitation.
10. Lack of Auditing and Monitoring:
• Without proper logging and monitoring, it becomes difficult to detect
unusual activities or potential security breaches in the management OS,
leaving the system vulnerable to attacks.
OR
Q.08 a Discuss how virtual machines are secured. 3, 4 10
1. Hypervisor Security
• Ensure the integrity of the hypervisor through write protection and
restricted access to prevent unauthorized modifications.
• Implement isolation between VMs to prevent cross-VM attacks and
intrusion detection to monitor hypervisor activity.
2. Virtual Machine Isolation
• Enforce memory, network, and resource isolation to prevent unauthorized
access between VMs.
• Use strict access controls to limit communication and interactions between
VMs.
3. Access Control and Authentication
• Implement multi-factor authentication (MFA) and role-based access
control (RBAC) to restrict access to VMs.
• Maintain audit logs and enforce strong password policies to ensure only
authorized access.
4. VM Monitoring and Logging
• Continuously monitor VM behavior and maintain centralized logs for
tracking potential security threats.
• Set up real-time alerting to notify administrators of suspicious activities.
5. Guest Operating System and Application Security
• Regularly update the guest OS and use security software like antivirus to
protect against vulnerabilities.
• Configure firewalls, IDS, and whitelisting to limit unauthorized access and
application execution.
6. VM Image Security
• Harden VM images before deployment and restrict image creation to
trusted sources.
• Perform virus scanning on VM images to ensure they are free from
malware or malicious content.
7. Data Encryption
• Encrypt data at rest and in transit to protect sensitive information on VMs.
• Use secure key management to ensure that encryption keys are properly
managed and rotated.
8. VM Backup and Recovery
• Perform regular backups and store them offsite to ensure data recovery in
case of a breach.
• Test disaster recovery plans to ensure VMs can be restored quickly after an
incident.
9. Virtual Machine Patching and Updates
• Apply automated patch management to ensure VMs are updated with the
latest security patches.
• Test patches in non-production environments before deployment to avoid
disruptions.
10. VM Resource Management
• Monitor VM resource usage to detect abnormal consumption patterns that
could signal security threats.
• Set resource allocation limits to prevent overuse by any single VM,
maintaining performance and security.
b Explain reputation system design options. L2 10
1. Centralized Reputation System
• A centralized system relies on a single authority or server to collect, store,
and process reputation data for all users or services.
• Advantages:
o Simplified management with a single point of control.
o Easier to monitor and track user or service performance.
• Disadvantages:
o A single point of failure can disrupt the entire system.
o Potentially vulnerable to manipulation or attack if the central server
is compromised.
2. Decentralized Reputation System
• In this design, reputation data is stored and processed across multiple
nodes, with no central authority. Each participant or service maintains their
own reputation scores, and data is distributed among peers.
• Advantages:
o Increased robustness since there’s no single point of failure.
o Better suited for distributed or peer-to-peer cloud environments.
• Disadvantages:
o More complex to manage and ensure consistency across the
system.
o Higher computational and storage overhead as data needs to be
distributed and verified across multiple nodes.
3. Hybrid Reputation System
• A hybrid system combines elements of both centralized and decentralized
models. Typically, reputation data is stored centrally, but peer-to-peer
evaluations or ratings are used to influence the final score.
• Advantages:
o Flexibility in adapting to different cloud environments.
o Provides a balance of reliability and robustness.
• Disadvantages:
o May suffer from the complexity of managing multiple systems.
o Still subject to the risks of centralization (e.g., targeted attacks).
4. Reputation Based on Feedback Mechanisms
• This system relies on user feedback or ratings after interacting with a
service or user. Ratings from multiple users are aggregated to generate a
reputation score for the service or user.
• Advantages:
o Provides direct, real-time feedback from users, improving service
accountability.
o Scalable and adaptable to a wide range of cloud services.
• Disadvantages:
o Susceptible to fake or biased feedback if not properly monitored or
verified.
o May require additional mechanisms (e.g., reputation decay) to
ensure that scores remain relevant over time.
5. Reputation Based on Historical Behavior
• This system tracks the past behavior of users or services (e.g., uptime,
reliability, or security events) and uses this data to predict future behavior.
The reputation score is dynamically updated based on ongoing
performance metrics.
• Advantages:
o Provides a continuous, data-driven evaluation of trustworthiness.
o Reduces the impact of individual malicious actions since it focuses
on long-term patterns.
• Disadvantages:
o Requires large volumes of data and historical tracking, leading to
increased storage and processing overhead.
o May not quickly adapt to sudden, drastic changes in behavior.
6. Trust Models in Reputation Systems
• Trust models use algorithms or mathematical models to assign
trustworthiness scores. These models often factor in various metrics,
including past interactions, feedback, and service performance.
• Advantages:
o Can be customized based on the needs of the specific cloud
environment (e.g., service reliability, data integrity).
o Provides a formal, quantifiable approach to reputation
management.
• Disadvantages:
o Complex to design and implement.
o May need continuous refinement and updates to remain effective as
the cloud environment evolves.
7. Reputation Based on Third-party Evaluation
• In this approach, a trusted third-party organization (e.g., an auditor or
certification body) evaluates the reputation of services or users in the
cloud.
• Advantages:
o Enhances credibility as the third-party evaluation is independent.
o Useful for situations requiring external verification, such as
compliance with industry standards.
• Disadvantages:
o Potentially slow and expensive due to the need for external
evaluation.
o May introduce a bottleneck if the third-party organization becomes
overwhelmed with requests.
Module-5
Q.09 a What are the various system issues for running a typical parallel program in
either parallel or distributed manner? L2 10
1. Communication Overhead
• Parallel systems (e.g., using threads or processes) may have lower
communication latency due to shared memory.
• Distributed systems must send data over a network, leading to higher
latency and bandwidth constraints.
4. Load Balancing
• Uneven workload distribution causes some nodes/threads to be idle while
others are overloaded.
• Requires dynamic or static load balancing strategies.
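A common remedy is a shared work queue, as in the sketch below: idle workers pull the next task, so unevenly sized tasks still spread out across the pool (task sizes are invented for the example).

```python
# Dynamic load balancing: a thread pool hands tasks to whichever worker
# becomes free, instead of pre-assigning fixed shares to each worker.
from concurrent.futures import ThreadPoolExecutor

def task(n):                       # tasks of deliberately uneven cost
    return sum(range(n))

work = [5_000_000, 10, 300_000, 42, 1_000_000, 7]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, work))   # finished workers grab more work
print(results)
```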
6. Scalability
• The ability of the system to maintain performance as more resources are
added.
• Communication, synchronization, and data contention may limit
scalability.
7. Resource Management
• Effective use of CPU, memory, network, and storage.
• In distributed systems, resource heterogeneity (e.g., different hardware
capabilities) complicates management.
8. Programming Model Complexity
• Writing efficient parallel/distributed programs is harder.
• APIs like MPI, OpenMP, CUDA, or MapReduce help but require
expertise.
b With a neat diagram, explain the data flow in running a MapReduce job at various TaskTrackers using the Hadoop library. L2 10
OR
Q. 10 a Discuss Programming the Google App Engine. 3, 4 10
• Google App Engine (GAE) is a fully managed Platform as a Service
(PaaS) used for building and hosting scalable web applications on
Google’s infrastructure.
• It dynamically scales web applications as traffic demand changes,
ensuring efficient resource usage and high availability.
• GAE supports multiple programming languages like Python, Java, Go,
and PHP, each with its own runtime and SDK for local development and
testing.
• The App Engine SDK allows developers to emulate the production
environment on local machines and later deploy their apps easily with cost-
control quotas.
• GAE provides numerous in-built services including cron jobs, queues,
scalable datastores (Cloud SQL, Datastore, Memcached),
communication tools, and in-memory caching.
• It offers a secure and high-performance execution environment with
general features (e.g., datastore, logs, blobstore, search) covered by
service-level agreements (SLA).
• GAE has preview and experimental features (e.g., Sockets, MapReduce,
Prospective Search, OpenID) that may change and are accessible to
selected users.
• Third-party services and helper libraries are integrated via partnerships,
enabling apps to perform extended tasks beyond core functionalities.
• Key advantages include fast deployment, ease of use, rich APIs, built-in
security, automatic scaling, high reliability, platform independence,
and reduced infrastructure cost.
• Overall, Google App Engine simplifies the development of robust,
scalable, and secure applications without managing server infrastructure,
making it ideal for rapid development and enterprise-scale solutions.
b With neat diagram explain OpenStack Nova system architecture. 3, 4 10
1. Cost Savings – Eliminates the need for large capital expenditures on hardware and software.
2. Scalability and Flexibility – Easily scales resources up or down based on workload.
3. Improved Performance – Cloud providers optimize infrastructure for better efficiency and
performance.
4. Security and Compliance – Advanced security measures such as encryption, firewalls, and
compliance with industry standards ensure data protection.
5. Disaster Recovery and Backup – Cloud services provide automated backup and disaster recovery
solutions.
6. Accessibility and Collaboration – Cloud applications can be accessed from anywhere, allowing for
better collaboration among teams.
------------------------------------------------------------------------------------------------------------------
2. What are the types of VM architectures, and how do they help in making computing
easier?
Types:
(a) Physical Machine:
The traditional setup where a single OS directly manages the hardware.
No virtualization; applications and the OS depend entirely on the physical machine.
(b) Native VM Architecture:
A virtual machine monitor (VMM or hypervisor) operates directly on the hardware.
Efficient and high-performing, often used for managing resources in large-scale systems.
(c) Hosted VM Architecture:
A VMM runs on top of a host operating system, treating VMs as applications.
Simpler to set up but less efficient due to the extra layer introduced by the host OS.
(d) Dual-Mode VM Architecture:
Combines features of both native and hosted architectures.
Some tasks are handled directly by the VMM on hardware, while others pass through a host OS.
VM Architectures:
(b) Native (Bare-Metal) VM: Hypervisor (e.g., XEN) operates directly on hardware in privileged mode.
The guest OS can differ from the host environment; for example, a Linux guest can run via Xen on an x86 machine.
(c) Hosted VM: VMM runs in nonprivileged mode on a host OS. No need to modify the host.
(d) Dual-Mode VM: Splits VMM functions between user level and supervisor level. May require minor
modifications to the host OS.
Key Benefits:
Supports multiple VMs on the same hardware.
Facilitates portability and flexibility with virtual appliances bundled with their dedicated OS and
applications.
------------------------------------------------------------------------------------------------------------------
3) Explain Xen Architecture in detail.
The Xen Architecture
Overview of Xen:
• Open-source hypervisor developed by Cambridge University.
• A microkernel hypervisor separating mechanism (handled by Xen) from policy (handled by Domain 0).
• Does not include native device drivers; provides mechanisms for guest OSes to access physical
devices.
Key Features:
• Small hypervisor size due to minimal functionality.
• Acts as a virtual environment between hardware and OS.
• Commercial versions include Citrix XenServer and Oracle VM.
Core Components:
1. Hypervisor.
2. Kernel.
3. Applications.
Domains in Xen:
Domain 0 (privileged guest OS): Manages hardware access and devices.
Allocates and maps resources for other domains (Domain U).
Boots first, without file system drivers.
Security risks exist if Domain 0 is compromised.
Domain U (unprivileged guest OS): Runs on resources allocated by Domain 0.
Security:
• Xen is Linux-based with a C2 security level.
• Strong security policies are needed to protect Domain 0.
VM Capabilities:
• Domain 0 acts as a VMM, enabling users to create, save, modify, share, migrate, and roll back VMs
like files.
• Rolling back or rerunning VMs allows fixing errors or redistributing content.
VM Execution Model:
Traditional machine states are linear, while VM states form a tree structure: Multiple instances of a VM can
exist simultaneously.
VMs can roll back to previous states or rerun from saved points.
------------------------------------------------------------------------------------------------------------------
4) What is the difference between LAN, SAN, and NAS, and how has Ethernet speed
improved networking for distributed computing?
A LAN (local area network) connects client hosts and servers within a site. A SAN (storage area
network) connects servers to network storage such as disk arrays at the block level. NAS (network
attached storage) connects client hosts directly to shared storage at the file level.
How Faster Ethernet Improves Distributed Computing
1. Higher Data Speeds → 10GbE, 25GbE, and 100GbE reduce delays in data exchange between nodes.
2. Lower Latency → Ensures real-time processing for AI, big data, and financial trading.
3. Better Cloud & Edge Computing → Faster communication between cloud servers and IoT devices.
4. Efficient Storage Access → Improves NAS/SAN performance for distributed databases.
5. Scalability → Supports large-scale applications like HPC, AI training, and scientific research.
------------------------------------------------------------------------------------------------------------------
5. Explain GPU Programming Model.
GPU programming models are designed to leverage the parallel processing capabilities of GPUs for high-
performance computing tasks. These models provide frameworks and languages that enable developers to write
programs that run efficiently on GPUs.
Division of Labor: CPUs handle complex logic and management tasks, while GPUs excel at parallel
processing.
Offloading Tasks: CPUs offload parallelizable tasks to GPUs using programming models like CUDA,
OpenCL, and DirectCompute.
Parallel Execution: GPUs have thousands of smaller cores that process multiple data streams simultaneously.
Data Transfer: Efficient data transfer between CPU and GPU memory is crucial, facilitated by high-speed
interconnects like PCIe.
Coordination: CPUs and GPUs coordinate tasks using appropriate APIs and frameworks to manage
workflows and data dependencies.
Applications: Used in graphics rendering, scientific research, machine learning, financial modeling, and
more.
Hybrid Architectures: Some processors integrate CPU and GPU cores on the same chip, enhancing
performance for mixed workloads.
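The offload/transfer/compute pattern looks like this with CuPy; the sketch assumes an NVIDIA GPU with CUDA and the cupy package installed.

```python
# CPU prepares data; GPU runs the parallel math; results come back over PCIe.
import numpy as np
import cupy as cp

host = np.random.rand(1_000_000).astype(np.float32)  # in CPU memory
dev = cp.asarray(host)        # transfer CPU -> GPU
out = cp.sqrt(dev) * 2.0      # executed in parallel on thousands of GPU cores
back = cp.asnumpy(out)        # transfer GPU -> CPU
print(back[:3])
```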
------------------------------------------------------------------------------------------------------------------
6. What is scalable computing over the Internet, and how does it use technologies like
IoT and cloud computing?
Scalable Computing over the Internet refers to the ability to dynamically allocate and manage computing
resources over the internet in a way that can handle growing demands. This involves distributing
computational tasks across multiple systems (often using cloud computing or distributed computing
platforms) to accommodate varying workloads.
Evolution of Computing Technology
Over the last 60 years, computing has evolved through multiple platforms and environments.
The shift from HPC systems, measured by the Linpack Benchmark, to HTC systems measured by task throughput.
Communication Models:
1. H2H (Human-to-Human)
2. H2T (Human-to-Thing)
3. T2T (Thing-to-Thing)
------------------------------------------------------------------------------------------------------------------
7. Explain Cluster Architecture in detail.
A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single
integrated computing resource
Figure 1.15 shows the architecture of a typical server cluster built around a low-latency, high bandwidth
interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet).
To build a larger cluster with more nodes, the interconnection network can be built with multiple levels of
Gigabit Ethernet, Myrinet, or InfiniBand switches.
The gateway IP address locates the cluster. The system image of a computer is decided by the way the OS
manages the shared cluster resources.
All resources of a server node are managed by their own OS. Thus, most clusters have multiple system images
as a result of having many autonomous nodes under different OS control.
------------------------------------------------------------------------------------------------------------------
8) What is Virtualization? Explain full virtualization
Full Virtualization:
• Noncritical Instructions: Run directly on the hardware.
• Critical Instructions: Trapped and replaced with software-based emulation by the Virtual Machine
Monitor (VMM).
• Security: Critical instructions control hardware and can pose security risks; trapping them ensures system security.
• Efficiency: Running noncritical instructions directly on hardware improves overall efficiency.
**refer if want!
Host-Based Virtualization
Host-Based VM Architecture:
• A virtualization layer is installed on top of the host OS.
• The host OS remains responsible for managing hardware.
• Guest OSes run on top of the virtualization layer.
• Dedicated applications can run in VMs, while other applications may run
directly on the host OS. **
------------------------------------------------------------------------------------------------------------------
9) List and explain the implementation levels of Virtualization.
Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are
multiplexed in the same hardware machine.
• According to a 2009 Gartner Report, virtualization was the top strategic technology poised to change the
computer industry.
Virtualization can be implemented across five abstraction layers:
1. Instruction Set Architecture (ISA) Level: Virtualizes one processor architecture to run on another by
emulating its instructions. Tools: Bochs, QEMU.
2. Hardware Abstraction Level: Creates virtual hardware environments directly on physical hardware.
Examples: VMware, Xen.
3. Operating System Level: Partitions a single operating system to create multiple isolated containers or
environments. Examples: Ensim's VPS, FVM.
4. Library Support Level: Virtualizes APIs to run applications designed for one platform on another.
Examples: WINE, vCUDA.
5. User-Application Level: Virtualizes at the application layer to allow specific programs to run on any
system. Examples: JVM, .NET CLR.
These layers provide flexibility in resource management and application independence.
Example: The Java Virtual Machine (JVM) runs the same Java application on different operating systems.
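The Library Support Level's core idea, intercepting an API call and redirecting it to another implementation, can be illustrated with a toy Python shim (purely illustrative; this is not how WINE or vCUDA are actually implemented):

```python
# Toy API-interception shim (illustrative only; not how WINE works).
import time

real_sleep = time.sleep

def virtual_sleep(seconds):
    # A substitute implementation standing in for the original API call.
    print(f"[shim] intercepted sleep({seconds}); skipping the real wait")

time.sleep = virtual_sleep  # install the shim between app and library
time.sleep(5)               # application code calls the usual API name
time.sleep = real_sleep     # restore the real implementation
```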
------------------------------------------------------------------------------------------------------------------
10) What are various primitive VM operations in a distributed computing environment?
The Virtual Machine Monitor (VMM) provides the abstraction of a virtual machine (VM) to the guest
operating systems (OS). With full virtualization, the VMM creates a virtual machine environment that is
identical to a physical machine, allowing standard operating systems (like Windows or Linux) to run just as
they would on real hardware.
Several key VMM operations enable flexible management of virtual machines in a distributed environment:
Multiplexing (Figure 1.13a): The VMM can multiplex multiple virtual machines (VMs) across different
hardware systems, allowing several VMs to run on various physical machines while sharing resources
efficiently.
Suspension (Figure 1.13b): A VM can be suspended and stored in stable storage (like disk storage).
This means the VM's state is saved, and it can be resumed later without losing any data or progress.
Provisioning/Resume (Figure 1.13c): After being suspended, a VM can be resumed or provisioned
to a new hardware platform. This allows for the VM to be moved and started on a different machine
without loss of state.
Migration (Figure 1.13d): A VM can be migrated from one physical machine to another. This
migration can happen with minimal downtime and is beneficial for load balancing, fault tolerance, or
system maintenance.
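As a hedged sketch of how these primitives map onto a real toolkit, the following uses the libvirt Python bindings (assuming a QEMU/KVM host, a running libvirt daemon, and a VM named "vm1"; the host URIs and file paths are illustrative assumptions, not from the source):

```python
# Sketch using the libvirt Python bindings on an assumed QEMU/KVM host.
# Hosts, paths, and the VM name "vm1" are illustrative assumptions.
import libvirt

src = libvirt.open("qemu:///system")            # connect to local hypervisor
dom = src.lookupByName("vm1")                   # an existing running VM

# Multiplexing (Figure 1.13a) is implicit: many such domains share one host.

# Suspension (Figure 1.13b): save the full VM state to stable storage.
dom.save("/var/lib/libvirt/save/vm1.sav")

# Provisioning/Resume (Figure 1.13c): restore the saved state, possibly
# on another host that can reach the same file.
src.restore("/var/lib/libvirt/save/vm1.sav")

# Migration (Figure 1.13d): move the running VM to another machine with
# minimal downtime; VIR_MIGRATE_LIVE keeps it running during the copy.
dst = libvirt.open("qemu+ssh://other-host/system")
dom = src.lookupByName("vm1")
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```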
------------------------------------------------------------------------------------------------------------------
11) Illustrate the differences between Full Virtualization and host-based virtualization.
In full virtualization, the VMM runs directly on the bare hardware: noncritical instructions execute
natively while critical instructions are trapped and emulated in software, giving near-native performance.
In host-based virtualization, the virtualization layer is installed on top of a host OS, which remains
responsible for managing the hardware; this is easier to deploy on an existing machine, but the extra OS
layer adds overhead. (See the Full Virtualization and Host-Based Virtualization sections under Question 8.)
------------------------------------------------------------------------------------------------------------------
13) How does OS-level virtualization enhance hardware-level virtualization, and what
are its benefits for cloud computing?
Challenges of Hardware-Level Virtualization:
1. Slow initialization, because each VM creates its image from scratch.
2. Storage overhead from considerable repeated content among VM images.
3. Slow performance and low density.
4. The need for para-virtualization to modify the guest OS.
OS-level virtualization addresses these problems by inserting a virtualization layer inside the operating
system to create multiple isolated virtual execution environments (VEs) on a single physical server.
VEs share the same operating system kernel but can be customized for different users.
Alternate Name:
• Known as single-OS image virtualization because all VEs use a single shared OS kernel.
Advantages of OS Extensions (Benefits of OS-Level Virtualization Compared to Hardware-Level
Virtualization):
1. Minimal startup/shutdown costs, low resource requirements, and high scalability.
2. Ability to synchronize state changes between a VM and its host environment when needed.
AWS, Azure, and Google Cloud use OS-level virtualization for Kubernetes-based container
orchestration on top of VMs.
Hybrid Cloud Deployments use hardware virtualization (VMs) for stability and containers for rapid
application scaling.
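A minimal sketch of OS-level virtualization in practice, using the Docker SDK for Python (assuming a running Docker daemon; the image and command are illustrative choices, not from the source):

```python
# Sketch using the Docker SDK for Python; needs a running Docker daemon.
# The image and command are illustrative.
import docker

client = docker.from_env()

# Containers (VEs) share the host kernel, so startup cost is minimal
# compared to booting a full VM image.
output = client.containers.run("alpine", ["echo", "hello from a VE"],
                               remove=True)
print(output.decode())
```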
------------------------------------------------------------------------------------------------------------------
1) Define Cloud Computing? Explain its characteristics and benefits.
2) What are the types of VM architectures, and how do they help in making computing easier?
3) Explain Xen Architecture in detail.
4) What is the difference between LAN, SAN, and NAS, and how has Ethernet speed improved
networking for distributed computing?
5) Explain GPU Programming Model.
6) What is scalable computing over the Internet, and how does it use technologies like IoT and cloud
computing?
7) Explain Cluster Architecture in detail.
8) What is Virtualization? Explain full virtualization
9) List and explain the implementation levels of Virtualization.
10) What are various primitive VM operations in a distributed computing environment?
11) Illustrate the differences between Full Virtualization and host-based virtualization.
12) Consider a program P where 25% will be executed sequentially and the remaining 75% in parallel. Calculate
the speedup and efficiency, considering a fixed workload.
Ans:
Steps:
1. Sequential Fraction (S): The fraction of the program that must be executed sequentially.
o Here, S = 0.25 (25%).
2. Parallelizable Fraction (P): The fraction that can be executed in parallel.
o P = 1 - S = 0.75 (75%).
3. Speedup (Amdahl's Law): Speedup is calculated as:
Speedup = 1 / (S + P/N)
Where:
• S = Sequential fraction (25% or 0.25),
• P = Parallel fraction (75% or 0.75),
• N = Number of processors (assuming the parallel workload is evenly divided across N).
4. Efficiency: Efficiency is calculated as:
Efficiency = Speedup / N
Example Calculation:
Let's calculate with N = 4 processors (you can substitute other values for N):
1. Speedup:
Speedup = 1 / (0.25 + 0.75/4) = 1 / (0.25 + 0.1875) = 1 / 0.4375 ≈ 2.286
2. Efficiency:
Efficiency = Speedup / N = 2.286 / 4 ≈ 0.571, i.e., about 57.1%.
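A minimal Python sketch of the calculation above, useful for checking other processor counts N (the helper name amdahl is ours, not from the source):

```python
# Helper (ours, not from the source) for Amdahl's Law with fixed workload.
def amdahl(seq_fraction, n):
    """Return (speedup, efficiency) for N processors."""
    par_fraction = 1.0 - seq_fraction
    speedup = 1.0 / (seq_fraction + par_fraction / n)
    return speedup, speedup / n

for n in (2, 4, 8):
    s, e = amdahl(0.25, n)
    print(f"N={n}: speedup={s:.3f}, efficiency={e:.1%}")
# N=4 prints speedup=2.286, efficiency=57.1%, matching the worked example.
```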
13) How does OS-level virtualization enhance hardware-level virtualization, and what are its benefits for
cloud computing?
Cloud Computing Question Bank - IA2
Module-3
1. Differentiate between public cloud, private cloud, Hybrid cloud models. Discuss their
advantages and disadvantages.
2. Explain the three main cloud service models (IaaS, PaaS and SaaS). Provide examples for
each
3. Explain the typical data centre networking structure. How does it support scalability, high
availability and performance in cloud environments?
4. What are the various design objectives of Cloud Computing?
5. Describe the components of a cloud ecosystem. How do these components support scalability
and efficiency?
6. What are the challenges in designing warehouse-scale cloud data centres?
7. Discuss the importance of data centre interconnection networks in cloud computing
8. Explain the interconnection of modular data centres with an example
9. What is an Inter-Module Connection network? Explain with an example
Module-4
10. Explain the major security concerns and risks faced by cloud users.
11. Discuss Various encryption techniques used for securing data in the cloud
12. Discuss security of database services.
13. Explain Security Risks Posed by Shared Images & Management OS.
14. Discuss how virtual machines are secured.
15. Explain reputation system design options.
Module-5
16. Explain the key features of cloud and grid computing platforms
17. Discuss the challenges and system issues in running a typical parallel program in distributed
systems
18. With a neat diagram, explain the data flow during a MapReduce job using Hadoop.
19. Describe the programming model and environment provided by Google App Engine
20. Discuss the architecture and components of OpenStack Nova with a diagram
21. Explain the programming environments and tools provided by Amazon AWS and Microsoft
Azure
22. Describe emerging cloud software environments and their significance in real-world
applications