Essential Cloud Computing Features

cloud computing mcq

Uploaded by

PRINCE RAJ
Copyright: © All Rights Reserved

Week 1

Which of the following fall(s) under the “essential characteristics” of cloud computing?

A. Resource Pooling

B. Measured Service

C. Rapid Elasticity

D. Latency

Correct Answer: A,B,C

Explanation of each point:

1. Resource Pooling:

o In cloud computing, resources such as storage, processing power, and memory are pooled to serve multiple users through a multi-tenant model.

o Resource pooling enables efficient use of hardware by dynamically allocating resources based on user demand, allowing cloud providers to serve multiple customers with shared infrastructure.

2. Measured Service:

o Cloud computing operates on a "pay-as-you-go" model, where resource usage is tracked and billed accordingly.

o Measured service ensures that customers are charged only for the resources they use, such as compute hours, storage space, or network bandwidth. This enables cost transparency and efficient billing.

3. Rapid Elasticity:

o Rapid elasticity allows cloud resources to be quickly scaled up or down based on demand, making it easy to handle workload changes.

o This characteristic is essential in cloud computing, as it allows organizations to meet varying demands without long provisioning times, ensuring optimal performance.

4. Latency:

o Latency is the time delay in processing data requests and delivering responses, usually referring to network delays.

o While latency affects performance, it is not considered an essential characteristic of cloud computing itself, as it relates more to network performance and data transfer than to core cloud functionality.
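The "measured service" characteristic can be illustrated with a small usage-metering sketch. The resource names and per-unit rates below are invented for illustration and do not reflect any provider's actual pricing:

```python
# Minimal sketch of "measured service": track per-resource usage and bill for
# exactly what was consumed. Rates and resource names are hypothetical.
RATES = {"compute_hours": 0.05, "storage_gb": 0.02, "bandwidth_gb": 0.01}

def bill(usage: dict) -> float:
    """Return the total charge for the metered usage."""
    return round(sum(RATES[r] * amount for r, amount in usage.items()), 2)

# A customer is charged only for what was actually used this period.
print(bill({"compute_hours": 100, "storage_gb": 50, "bandwidth_gb": 10}))  # 6.1
```

Real providers meter far more dimensions (requests, IOPS, egress tiers), but the principle is the same: usage is measured, then billed.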
Q. 2 “Google Docs” is an example of

A. PaaS

B. IaaS

C. SaaS

D. FaaS

Correct Answer: C

SaaS (Software as a Service):

 SaaS provides ready-to-use software applications over the internet, allowing users to access them via a web browser without worrying about underlying infrastructure or software installation.

 Google Docs is an example of SaaS, as it is a cloud-based document processing application that users can access and use directly via a web browser. Google manages all underlying resources, updates, and maintenance.

PaaS (Platform as a Service):

 PaaS provides a platform for developers to build, test, and deploy applications. It includes an environment with operating systems, development tools, and database management without needing to manage hardware.

 Examples of PaaS include Google App Engine and Microsoft Azure App Services. Google Docs, however, is a complete application for end-users, not a development platform, so it is not PaaS.

IaaS (Infrastructure as a Service):

 IaaS provides virtualized computing resources over the internet, such as virtual machines, storage, and networking, for users to run their own software and applications.

 Examples of IaaS include Amazon EC2 and Google Compute Engine. Google Docs is a software application, not infrastructure, so it does not fit under IaaS.

FaaS (Function as a Service):

 FaaS is a serverless computing model that allows developers to run code in response to events without managing servers, typically used for microservices or event-driven applications.

 Examples of FaaS include AWS Lambda and Google Cloud Functions. Google Docs is an end-user application, not a function-based service, so it is not FaaS.
Q. 3 Business-Process-as-a-Service is not a part of XaaS.

A) True

B) False

Correct Answer: B

XaaS (Anything as a Service):

 XaaS refers to the collective delivery of all services through the cloud, including infrastructure (IaaS), platform (PaaS), software (SaaS), and other specialized services such as BPaaS.

 It provides flexible, scalable solutions where organizations can access various services based on their needs, without investing in physical resources or maintenance.

BPaaS (Business-Process-as-a-Service):

 BPaaS provides specific business functions as cloud services, often combining software applications, processes, and infrastructure as a unified service.

 Examples of BPaaS include cloud-based payroll processing, customer service management, and other standardized business operations.

 BPaaS is therefore a type of XaaS because it delivers these business process services over the cloud, fitting within the broader concept of Anything as a Service.
Q. 4 Network Function Virtualization involves the implementation of _______ function in software that can run on a range of industry-standard servers ______________.

A. network, software

B. hardware, software

C. hardware, network

D. network, hardware

Correct Answer: D

Network:

 In the context of Network Function Virtualization (NFV), the term "network" refers to the various network functions that are traditionally implemented in dedicated hardware appliances (such as routers, firewalls, and load balancers).

 NFV aims to virtualize these network functions, allowing them to run as software on standard servers instead of relying solely on proprietary hardware solutions. This approach enhances flexibility, scalability, and efficiency in managing network resources.

Hardware:

 The term "hardware" refers to the physical servers and infrastructure on which the virtualized network functions run. NFV allows these network functions to be deployed on industry-standard hardware rather than specialized networking devices.

 This flexibility enables service providers to utilize existing infrastructure more effectively and to scale resources up or down based on demand without the constraints of dedicated hardware.
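As a toy illustration of the NFV idea, here is a network function (a firewall rule evaluator) written as ordinary software that could run on any standard server rather than a dedicated appliance. The packet fields and rule structure are hypothetical, chosen only to make the point:

```python
# Toy "virtualized network function": a firewall implemented as plain software.
# Packet fields and rule format are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int

def make_firewall(blocked_ips: set, allowed_ports: set):
    """Return a function that decides whether to forward a packet."""
    def forward(pkt: Packet) -> bool:
        return pkt.src_ip not in blocked_ips and pkt.dst_port in allowed_ports
    return forward

fw = make_firewall(blocked_ips={"10.0.0.9"}, allowed_ports={80, 443})
print(fw(Packet("192.168.1.5", 443)))  # True
print(fw(Packet("10.0.0.9", 80)))      # False
```

Because the function is just software, a provider could instantiate, reconfigure, or scale out many copies of it on commodity servers on demand.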
Q.5 Which of the following are applications of SaaS (Software as a Service) architecture?

A) Billing software

B) CRM

C) App engines

D) None of the above

Correct Answer: A,B

Billing Software:

 Billing software is used to manage invoicing, payments, and financial transactions for businesses. As a SaaS application, it can be accessed over the internet, allowing users to handle billing processes without installing software on local machines.

 SaaS billing solutions offer features such as automated invoicing, payment processing, and reporting, making it easy for companies to manage their financial operations efficiently from anywhere.

CRM (Customer Relationship Management):

 CRM systems are designed to help businesses manage their relationships and interactions with customers and potential clients. These applications track customer data, sales interactions, and marketing campaigns.

 As SaaS solutions, CRMs like Salesforce, HubSpot, and Zoho allow users to access customer information, manage leads, and analyze sales data through web-based interfaces. This flexibility facilitates collaboration and information sharing among teams.

App Engines:

 App engines (such as Google App Engine or Microsoft Azure App Service) provide a platform for developers to build, deploy, and manage applications but do not fall under the SaaS category. Instead, they are classified as PaaS (Platform as a Service) since they provide the underlying infrastructure and tools for application development rather than end-user applications.

 PaaS solutions are focused on development and deployment, whereas SaaS applications are fully functional software solutions ready for end-users.
Q. 6 Web access to commercial software is one of the SaaS characteristics in the cloud computing paradigm.

A. True

B. False

Correct Answer: A

Web-Based Access:

 SaaS applications are delivered over the internet, allowing users to access them through a web browser. This eliminates the need for local installation, making it convenient for users to utilize software from any device with internet connectivity.

 Users can access SaaS applications from various devices, including laptops, tablets, and smartphones, enhancing mobility and flexibility.

Centralized Management:

 Since SaaS applications are hosted in the cloud, software updates, maintenance, and security are managed by the service provider. This centralized approach reduces the burden on users and IT departments, allowing them to focus on other tasks.

 Users benefit from the latest features and security patches without needing to manage updates themselves.

Subscription-Based Model:

 SaaS applications often operate on a subscription basis, where users pay for access based on their usage, such as monthly or annual fees. This model allows businesses to manage costs more effectively compared to traditional licensing models.

 It also provides the flexibility to scale services up or down based on changing business needs.

Multi-Tenancy Architecture:

 SaaS solutions typically use a multi-tenant architecture, where a single instance of the software serves multiple customers. This design maximizes resource utilization and reduces costs for both providers and users.

 Users share the underlying infrastructure while maintaining their data privacy and security.
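The multi-tenancy point can be sketched in a few lines: one shared application instance serves several tenants while keeping each tenant's data isolated. This is a deliberately simplified model, not how any particular SaaS product is implemented:

```python
# Simplified multi-tenant store: one shared instance serves many tenants,
# but each tenant's records are isolated under its own partition.
class MultiTenantStore:
    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id: str, key: str):
        # A tenant can only ever see its own partition.
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()
store.put("acme", "plan", "pro")
store.put("globex", "plan", "basic")
print(store.get("acme", "plan"))    # pro
print(store.get("globex", "plan"))  # basic
```

Production systems enforce the same isolation with per-tenant schemas, row-level security, or encryption, but the shared-instance/partitioned-data shape is the same.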
Q. 7 In the case of the client-server model: Statement (i) Virtualization is a core concept; Statement (ii) The system can scale infinitely

A) Only Statement (i) is correct

B) Only Statement (ii) is correct

C) Both Statements (i) and (ii) are correct

D) None of the statements is correct

Correct Answer: D

Statement (i): Virtualization is a core concept.

 Explanation: Virtualization is indeed an important technology in cloud computing, allowing for the abstraction of physical resources and enabling multiple virtual machines to run on a single physical server. However, in the context of the client-server model, virtualization is not a core concept. The client-server model primarily focuses on the relationship between clients and servers in a networked environment, where clients request resources or services from servers. While virtualization can be used in some client-server setups, it is not fundamental to the client-server architecture itself. Thus, this statement is not correct.

Statement (ii): The system can scale infinitely.

 Explanation: While client-server architectures can be designed to scale to meet increased demand, they do not inherently allow for infinite scalability. Scaling in a client-server model typically involves adding more servers or upgrading existing ones to handle increased loads, which can have practical limitations based on hardware, software, and network infrastructure. Infinite scalability is more characteristic of cloud-native architectures and distributed systems, which can leverage techniques like horizontal scaling and microservices. Therefore, this statement is also not correct.

Q. 8 Client-server model is always load-balanced

A) True

B) False

Correct Answer: B

Definition of Client-Server Model:

 The client-server model is a network architecture where clients (users or devices) request services or resources from a centralized server. The server processes these requests and provides the necessary responses or data.

 This model can function effectively without load balancing, depending on the application and the server's capacity.

Load Balancing:

 Load balancing is a technique used to distribute workloads across multiple servers to ensure no single server becomes overwhelmed with requests. This helps improve performance, reliability, and availability by optimizing resource use and minimizing response time.

 While load balancing can be implemented in a client-server architecture, it is not an inherent feature of the model itself. The basic client-server model can operate with a single server, which may become a bottleneck if it receives too many requests.

Scalability Considerations:

 In some scenarios, a client-server setup may not be load-balanced, particularly in smaller applications or systems where a single server suffices to handle the expected load. In such cases, adding load balancing would be unnecessary and could complicate the architecture.

 Load balancing is more commonly associated with large-scale applications, distributed systems, or cloud environments where multiple servers are required to handle high traffic efficiently.
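When load balancing *is* added to a client-server deployment, one of the simplest strategies is round-robin: rotate incoming requests across a server pool. A minimal sketch (the server names are invented):

```python
from itertools import cycle

# Minimal round-robin load balancer: requests are distributed evenly across a
# pool of servers so no single server becomes a bottleneck.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)  # endless rotation over the pool

    def route(self, request: str) -> str:
        server = next(self._pool)
        return f"{request} -> {server}"

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
for i in range(4):
    print(lb.route(f"req{i}"))
# req0 -> srv-a, req1 -> srv-b, req2 -> srv-c, req3 -> srv-a
```

Real balancers add health checks, weighting, and session affinity, but the basic client-server model works with or without any of this, which is why the statement in the question is false.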
Q. 9 PaaS (Platform as a Service) brings the benefits: (i) Creation of software (ii) Integration of web services and databases
A. Only (i)

B. Only (ii)

C. Both (i) and (ii)

D. Neither (i) nor (ii)

Correct Answer: C

(i) Creation of Software:

 Explanation: PaaS provides a platform that includes the necessary tools, infrastructure, and services for developers to create, test, and deploy software applications. This includes development frameworks, programming languages, and middleware that streamline the software development process.

 PaaS allows developers to focus on writing code and building applications without worrying about managing the underlying hardware or software infrastructure, thus facilitating the rapid creation of software.

(ii) Integration of Web Services and Databases:

 Explanation: PaaS environments typically offer features that support the integration of various web services and databases. This allows developers to connect their applications to third-party services and databases seamlessly.

 PaaS platforms often include APIs, data management tools, and connectors that enable easy integration of web services, such as authentication services, payment gateways, and data storage solutions. This capability enhances the functionality and scalability of applications being developed.
Q.10 Which of the following is false?

a) Private cloud is dedicated solely to an organization.

b) Community cloud is a composition of public and private cloud.

c) Public cloud is available to the general public.

d) None of these
Correct Answer: b

a) Private cloud is dedicated solely to an organization.

 Explanation: This statement is true. A private cloud is a cloud computing environment that is exclusively used by a single organization. It can be managed internally or by a third-party provider and is hosted on-premises or in a dedicated data center. The resources in a private cloud are not shared with other organizations, providing greater control, security, and customization for the organization that owns it.

b) Community cloud is a composition of public and private cloud.

 Explanation: This statement is false. A community cloud is a cloud infrastructure that is shared among several organizations that have similar requirements and concerns, such as compliance or security needs. It is not simply a combination of public and private clouds; rather, it is a distinct model that allows multiple organizations to collaborate and share resources in a way that is tailored to their common interests. The community cloud can be managed by the organizations or by a third party and can be hosted on-premises or externally.

c) Public cloud is available to the general public.

 Explanation: This statement is true. A public cloud is a cloud computing model that provides services and resources to anyone who wants to use them. These services are typically offered by third-party providers and are accessible over the internet. Public clouds are multi-tenant environments where multiple customers share the same infrastructure, leading to cost efficiency and scalability.

d) None of these

 Explanation: This option suggests that all the previous statements are true, which is not correct because statement (b) is false.

Week 2
Service-Oriented Architecture (SOA) possesses:

a) A service provider, service requestor and service broker

b) A service provider and service requestor

c) A service requestor and service broker


d) Only a service broker

Correct Option: A

Service Provider:

 The service provider is the entity that creates, hosts, and manages a service in SOA. It defines the service, including its operations and how to access it, and makes it available to service requestors.

 The service provider registers the service in a service broker or registry so that potential users (requestors) can discover it. This component is responsible for fulfilling the requests sent by service requestors.

Service Requestor:

 The service requestor is the entity that consumes or uses the service offered by the service provider. The requestor locates the service in the service broker, then binds to and interacts with the service provider to execute the desired operations.

 In SOA, service requestors are typically applications, systems, or users that need to perform a specific function provided by the service, such as data processing or resource access.

Service Broker (or Service Registry):

 The service broker acts as an intermediary that connects service requestors with service providers. It maintains a registry of available services and their descriptions, allowing service requestors to discover the appropriate services to meet their needs.

 The service broker does not directly process requests but provides a directory that enables the discovery of services based on specified criteria. Once a service requestor finds a suitable service through the broker, it directly communicates with the service provider.
Q. 2 XML is designed to describe _________.

a) pricing

b) SLA

c) data

d) service

Correct Option: C

1. a) Pricing:

o XML (eXtensible Markup Language) is not specifically designed to describe pricing. While pricing information can be structured and stored within an XML document, XML itself is a general-purpose language for defining data, not limited to any specific domain like pricing.

2. b) SLA (Service Level Agreement):

o XML is not specifically designed to describe SLAs, though SLAs and other contracts can be formatted in XML for structuring and sharing information. XML can be used to represent any structured information, but its primary purpose is to describe and store data in a standardized way.

3. c) Data:

o Explanation: XML is designed to describe and structure data. It allows developers to define custom tags that represent different types of information, making it ideal for encoding data in a way that is both human-readable and machine-readable.

o XML is commonly used to store and transfer structured data across different systems and platforms. It separates the data from its presentation, allowing it to be used in a wide variety of applications and shared between different software systems.

4. d) Service:

o XML is not specifically intended to describe services. However, it is often used in service-oriented architectures (SOA) and web services to format data exchanged between services, like SOAP messages in web service communication.
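A short example makes XML's data-description role concrete: a made-up XML fragment (the tag names are invented) parsed with Python's standard `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

# XML describes data with custom tags; the tag names below are made up.
doc = """
<catalog>
  <book id="1"><title>Cloud Basics</title><price>29.99</price></book>
  <book id="2"><title>SOA in Practice</title><price>39.50</price></book>
</catalog>
"""

root = ET.fromstring(doc)
for book in root.findall("book"):
    print(book.get("id"), book.find("title").text, book.find("price").text)
# 1 Cloud Basics 29.99
# 2 SOA in Practice 39.50
```

Note how the same document is readable by both a person and a program, and carries no presentation information, only data.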
QUESTION 3:

SOAP (Simple Object Access Protocol) does not restrict the endpoint
implementation technology

choices. SOAP is a platform-neutral choice.

a) True

b) False
Correct Option: A

Platform-Neutral Choice:

 SOAP (Simple Object Access Protocol) is designed to be platform-neutral, which means it can be used across various operating systems, programming languages, and hardware. This neutrality is possible because SOAP messages are formatted in XML, a widely supported data format that is independent of platform-specific dependencies.

Endpoint Implementation Flexibility:

 SOAP does not restrict the implementation technology of the endpoints. This means that the service provider and the service consumer can be implemented in different programming languages or hosted on different platforms. For example, a client application written in Python can interact with a SOAP service hosted on a Java-based server because both systems communicate via XML-formatted SOAP messages.

 SOAP relies on standard protocols like HTTP, SMTP, or others for message transmission, making it compatible with various technologies and network protocols.

Interoperability:

 SOAP’s platform-neutral and language-independent characteristics make it ideal for applications requiring high interoperability, where different systems need to communicate in a standardized way without being tied to a specific technology stack.

QUESTION 4:

A Cyber‐Physical Cloud Computing (CPCC) architectural framework is a ________ environment that can rapidly build, modify and provision cyber‐physical systems composed of a set of __________ based sensor, processing, control, and data services.

a) system, cloud computing

b) cloud computing, system

c) system, edge computing

d) edge, system computing

Correct Answer: A

1. Cyber-Physical Cloud Computing (CPCC):

o CPCC combines cyber-physical systems (CPS) with cloud computing to create an architectural framework for managing and provisioning services. A cyber-physical system refers to the integration of computational processes with physical processes, often involving sensors, controllers, and actuators to interact with the physical environment.

o The CPCC framework allows for the rapid building, modification, and provisioning of these systems by using cloud computing resources to provide sensor, processing, control, and data services.

2. "System":

o In this context, "system" refers to the cyber-physical system itself, which is the main subject of CPCC. CPCC is used to manage the lifecycle and components of these systems, allowing them to be more flexible, scalable, and capable of real-time interaction with the physical environment.

3. "Cloud Computing":

o Cloud computing provides the backbone of CPCC by offering on-demand access to resources such as data storage, processing power, and software services. By utilizing cloud computing, CPCC can dynamically allocate resources and rapidly modify or update services as needed.

o Through cloud computing, CPCC can manage diverse components like sensors and control systems in a virtualized environment, promoting scalability and resource efficiency.

Explanation of Other Options:

 b) cloud computing, system: This option reverses the order of terms, which doesn’t fit the description, since CPCC is an architectural system that leverages cloud computing.

 c) system, edge computing: While edge computing is related, CPCC primarily relies on cloud computing rather than edge computing for centralized processing and service provisioning.

 d) edge, system computing: This option does not accurately describe CPCC and the relationship between cyber-physical systems and cloud computing.
QUESTION 5:

Network Virtualization is a _________ environment that allows _______ service providers to dynamically compose ____________ virtual networks.

a) networking, single, single

b) physical, single, multiple

c) networking, multiple, single

d) networking, multiple, multiple

Correct Option: D

Network Virtualization as a "Networking Environment":

 Network virtualization is a networking environment that abstracts physical network resources to create virtual networks. It enables the separation of network resources from the physical hardware, allowing them to be used flexibly and managed independently of the physical network infrastructure.

"Multiple Service Providers":

 Network virtualization supports the dynamic creation and management of virtual networks by multiple service providers. This capability is essential in multi-tenant environments, where different providers or customers share the same physical infrastructure. Each provider can independently configure and manage their own virtual network without affecting others.

"Multiple Virtual Networks":

 The primary goal of network virtualization is to allow the creation of multiple virtual networks on top of a single physical network. This means several isolated virtual networks can coexist and operate simultaneously, each tailored to the needs of individual service providers or applications.
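The idea of several isolated virtual networks sharing one physical substrate can be sketched with a simple slicing model. The provider and node names are invented, and real network virtualization involves far more (tunneling, address isolation, QoS), so treat this purely as an illustration of the "multiple providers, multiple virtual networks" relationship:

```python
# Toy network-virtualization model: one physical network hosts several
# isolated virtual networks ("slices"), one per service provider.
class PhysicalNetwork:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.slices = {}  # provider -> set of virtual links

    def create_virtual_network(self, provider: str, links):
        for a, b in links:
            if a not in self.nodes or b not in self.nodes:
                raise ValueError("link endpoint not in physical network")
        # Each provider gets its own isolated slice over the shared hardware.
        self.slices[provider] = set(links)

phys = PhysicalNetwork(["n1", "n2", "n3"])
phys.create_virtual_network("provider-A", [("n1", "n2")])
phys.create_virtual_network("provider-B", [("n2", "n3"), ("n1", "n3")])
print(len(phys.slices))  # 2 virtual networks over one physical network
```

Both providers' topologies coexist over the same three physical nodes, yet neither can see or modify the other's slice.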

QUESTION 6:

Customized wearable devices for collecting health parameters are the best examples of

a) IoHT

b) Fog device

c) Fog-Cloud interfaced.

d) Cloud-Fog-Edge-IoHT

Correct Answer: d

a) IoHT (Internet of Health Things):

 Explanation: IoHT refers to a network of connected devices and sensors specifically designed to monitor and collect health-related data. While wearable health devices are a part of IoHT, this option alone doesn’t cover the full architecture needed to manage and process data efficiently in real time, especially with the involvement of cloud and edge computing.

b) Fog Device:

 Explanation: A fog device is a computing device located closer to the "edge" of the network, acting as an intermediary between edge devices and cloud data centers. While fog devices can assist in processing data from wearables, this option does not fully capture the broader architecture that also includes cloud and edge processing along with IoHT devices.

c) Fog-Cloud Interfaced:

 Explanation: The Fog-Cloud interface involves processing and data management across both fog and cloud layers. This enables data collected at the edge to be pre-processed by fog devices before being sent to the cloud for deeper analysis and storage. However, this option alone does not encompass the edge layer or IoHT aspects involved in wearable health devices.

d) Cloud-Fog-Edge-IoHT:

 Explanation: This option best describes the full architecture for wearable health devices:

o IoHT: Wearable devices collect health-related data and are part of the IoHT ecosystem.

o Edge: Wearable devices themselves are edge devices that collect and sometimes preprocess data before sending it to fog or cloud systems.

o Fog: Fog computing provides an intermediate layer that can process, filter, and temporarily store data closer to the wearable devices, allowing for faster response times and efficient data handling.

o Cloud: The cloud stores large amounts of health data, provides advanced analytics, and enables access to health data from anywhere.

 This complete architecture supports real-time data collection, preprocessing, storage, and analysis, making it ideal for health applications involving wearable devices.
QUESTION 7:

Dew Computing is a paradigm where on-premises computers provide functionality that is _________ of cloud services and is also collaborative with cloud services

a) dependent

b) independent

c) partially dependent

d) none of these

Correct Option: B

a) Dependent:

 Explanation: If on-premises computers were dependent on cloud services, they would require a constant connection to the cloud to function. However, Dew Computing is designed to allow these on-premises computers to operate even without cloud access, thus making them independent of cloud services.

b) Independent:

 Explanation: Dew Computing is a computing paradigm where on-premises computers can operate independently of cloud services, meaning they can function on their own without relying on a continuous cloud connection. At the same time, Dew Computing enables these computers to synchronize and collaborate with cloud services when the connection is available, allowing data to be shared and updated across both local and cloud systems. This independence provides flexibility and resilience, especially in situations where internet connectivity may be intermittent.

c) Partially Dependent:

 Explanation: Partial dependency would mean the on-premises systems require cloud services for some operations but not all. However, Dew Computing specifically emphasizes full independence from cloud services, with optional collaboration rather than partial dependency.
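The "independent yet collaborative" behaviour can be sketched as a local-first store: it keeps working with no cloud connection at all, and pushes pending changes whenever a connection appears. This is a simplified illustration of the pattern, not a real Dew Computing framework:

```python
# Local-first sketch: the on-premises store works with no cloud connection
# (independence) and pushes pending changes when one appears (collaboration).
class LocalStore:
    def __init__(self):
        self.data = {}
        self.pending = []  # changes made while offline

    def write(self, key, value):
        self.data[key] = value           # always succeeds, even offline
        self.pending.append((key, value))

    def sync(self, cloud: dict) -> int:
        """Push pending changes to the cloud; return how many were synced."""
        n = len(self.pending)
        for key, value in self.pending:
            cloud[key] = value
        self.pending.clear()
        return n

local = LocalStore()
local.write("note", "draft v1")   # offline write: still works
cloud = {}
print(local.sync(cloud))          # 1
print(cloud["note"])              # draft v1
```

The local copy never blocks on the cloud; the cloud simply catches up when reachable, which is exactly the independence-plus-collaboration combination the answer describes.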
QUESTION 8:

SOAP uses ______ as transport protocol

a) UDDI

b) SLA

c) HTTP

d) XML
Correct Answer: c

a) UDDI (Universal Description, Discovery, and Integration):

 Explanation: UDDI is a standard for service discovery, not a transport protocol. It is used to publish and discover information about web services, enabling applications to find and connect to each other. While UDDI can be used in conjunction with SOAP for service discovery, it does not transport SOAP messages.

b) SLA (Service Level Agreement):

 Explanation: An SLA is a formal document that defines the expected service quality, availability, and responsibilities between a service provider and a customer. It is not a transport protocol and does not play a role in sending or receiving SOAP messages.

c) HTTP (Hypertext Transfer Protocol):

 Explanation: HTTP is the correct answer. SOAP commonly uses HTTP as its transport protocol to send and receive messages over the web. HTTP provides a standardized method for transmitting SOAP messages between a client and server, making it ideal for web-based communication. SOAP can also use other protocols, like SMTP, but HTTP is the most widely used transport protocol.

d) XML (eXtensible Markup Language):

 Explanation: XML is a markup language used to structure and format data within a SOAP message. While XML forms the basis of the SOAP message format, it is not a transport protocol. Instead, it defines the data format and structure that is transported by protocols like HTTP.
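To make the layering concrete: XML defines the message format, while HTTP carries it. The sketch below builds a minimal SOAP 1.1-style envelope; the operation name and body content are invented, and the HTTP request shown in the comment is what such a payload would typically travel in:

```python
# XML is the message format; HTTP is the transport. This builds a minimal
# SOAP-style envelope; the operation name and body are hypothetical.
def soap_envelope(operation: str, body_xml: str) -> str:
    return (
        '<?xml version="1.0"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><{operation}>{body_xml}</{operation}></soap:Body>"
        "</soap:Envelope>"
    )

envelope = soap_envelope("GetPrice", "<item>apples</item>")

# In practice this payload is POSTed over HTTP, e.g. (not executed here):
#   POST /service HTTP/1.1
#   Content-Type: text/xml; charset=utf-8
print(envelope)
```

Because the envelope is plain XML, the same bytes could just as well be carried over SMTP or another protocol; nothing in the message ties it to HTTP.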

QUESTION 9:

Virtual Machine Monitor is also known as


a) Cluster Manager

b) Virtual Machine Handler

c) Virtual Machine Manager

d) Hypervisor

Correct Option: D

a) Cluster Manager:

 Explanation: A Cluster Manager is responsible for managing a group of interconnected computers (or nodes) as a single resource pool in a cluster computing environment. It handles tasks like load balancing, job scheduling, and resource allocation across multiple machines but is not specifically related to managing virtual machines.

b) Virtual Machine Handler:

 Explanation: While this term might seem related, it is not a standard term for a Virtual Machine Monitor. It doesn’t specifically refer to the software layer that enables the creation, management, and isolation of virtual machines.

c) Virtual Machine Manager:

 Explanation: Though sometimes used interchangeably with "Hypervisor," "Virtual Machine Manager" isn’t the most accurate term for the software responsible for directly managing virtual machines. The term "Hypervisor" more accurately describes the software layer that provides a virtualization environment.

d) Hypervisor:

 Explanation: The correct answer is Hypervisor. A Hypervisor, or Virtual Machine Monitor (VMM), is the software layer that allows multiple virtual machines (VMs) to run on a single physical host by allocating resources and ensuring isolation between them. The Hypervisor manages the virtual hardware for each VM and facilitates communication with the host hardware. It is a core component in virtualization technology, enabling efficient resource utilization and system independence.
QUESTION 10:
Which of the following is/are XML parser API(s)?

a) XaaS (Anything as a Service)

b) SAX (Simple API for XML)

c) CLI (Command Line Interface)

d) DOM (Document Object Model)

Correct Option: B, D

a) XaaS (Anything as a Service):

 Explanation: XaaS stands for "Anything as a Service," a model in cloud


computing that delivers a wide range of IT functions as services over the
internet, such as Software as a Service (SaaS), Platform as a Service (PaaS),
etc. XaaS is not related to XML parsing and is not an XML parser API.

b) SAX (Simple API for XML):

 Explanation: SAX is an event-driven, sequential API for parsing XML. Unlike


DOM, SAX does not load the entire XML document into memory. Instead, it
reads the XML document element by element and triggers events as it
processes each part of the document. This makes SAX more memory-efficient
than DOM, especially for large XML files, as it doesn’t require the whole
document to be loaded at once.

c) CLI (Command Line Interface):


 Explanation: CLI stands for Command Line Interface, which is a text-based
interface that allows users to interact with a program by typing commands.
CLI is not related to XML parsing and does not function as an XML parser
API.

d) DOM (Document Object Model):

 Explanation: DOM is an API that represents an XML document as a tree
structure in memory, where each node corresponds to an element in the XML
document. This allows programs to access, modify, or delete elements in the
XML document easily. Since the entire document is loaded into memory, DOM
is ideal for applications that need to repeatedly access or manipulate different
parts of the XML file.
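The contrast between the two correct options can be sketched with Python's standard-library parsers (the tiny XML document here is illustrative):

```python
# SAX fires callbacks while streaming the document; DOM loads the whole
# document into an in-memory tree that can be walked and modified.
import xml.sax
from xml.dom.minidom import parseString

XML = "<books><book>Cloud</book><book>Edge</book></books>"

# --- SAX: event-driven, element by element ---
class BookCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        if name == "book":
            self.count += 1

handler = BookCounter()
xml.sax.parseString(XML.encode(), handler)
print(handler.count)  # 2

# --- DOM: the full tree is available for repeated access ---
dom = parseString(XML)
titles = [n.firstChild.data for n in dom.getElementsByTagName("book")]
print(titles)  # ['Cloud', 'Edge']
```

Note that the SAX handler never holds the document; it only accumulates state as events arrive, which is why SAX stays memory-efficient on large files.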
Week 3
QUESTION 1:

Which of the following statement(s) regarding OpenStack storage is/are right?

A. Object storage is managed by Cinder

B. Both ephemeral storage and block storage are accessible from within VM

C. Block storage persists until VM is terminated

D. Ephemeral storage is used to run operating system and/or scratch space

Correct Option: B,D

A. Object storage is managed by Cinder:

 Explanation: This statement is incorrect. In OpenStack, object storage is
managed by Swift, not Cinder. Cinder is responsible for block storage, which
is different from object storage. Object storage is designed to handle
unstructured data, while block storage is used for persistent data that requires
a file system.

B. Both ephemeral storage and block storage are accessible from within VM:

 Explanation: This statement is correct. In OpenStack, both ephemeral storage
(temporary storage that exists only during the lifecycle of the VM) and block
storage (persistent storage that can be attached and detached from VMs) can
be accessed from within a virtual machine. Ephemeral storage is typically
used for the root filesystem and temporary data, while block storage provides
additional space for applications and databases.

C. Block storage persists until VM is terminated:

 Explanation: This statement is incorrect in its generalization. While block
storage (managed by Cinder) is designed to persist independently of the VM's
lifecycle, it is not automatically deleted when the VM is terminated unless
explicitly configured to do so. Block storage volumes can be detached and
reattached to different VMs, allowing data to persist beyond the lifecycle of
any single VM.

D. Ephemeral storage is used to run operating system and/or scratch space:

 Explanation: This statement is correct. Ephemeral storage is typically used to
host the operating system and provides scratch space for temporary files and
data. This storage is transient and is destroyed when the VM is terminated,
making it suitable for temporary data that does not need to persist after the
VM's lifecycle.
QUESTION 2:
A task takes time T in a uniprocessor system. In a parallel implementation, the task runs on P processors in parallel. The parallel efficiency is Eff, where 0 < Eff < 1. What is the time taken by each processor (M) in this implementation?


A. M = T

B. M = T/(Eff × P)

C. M = T/P

D. M = (T × Eff)/P

Correct Option: B
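A short derivation behind option B, checked with illustrative numbers: speedup = T/M and Eff = speedup/P = T/(M × P), so solving for the per-processor time gives M = T/(Eff × P).

```python
# Why option B: Eff = (T / M) / P, so M = T / (Eff * P).
T = 100.0   # uniprocessor time (illustrative value)
P = 4       # number of processors
Eff = 0.8   # parallel efficiency, 0 < Eff < 1

M = T / (Eff * P)
print(M)  # 31.25 -- larger than the ideal T/P = 25, since Eff < 1
```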

QUESTION 3:

What does the term "biasness towards vendors" imply in the context of SLA
monitoring?

A. Vendor-driven selection of monitoring parameters

B. Customer-driven selection of monitoring parameters


C. Balanced approach in monitoring parameters

D. Lack of active monitoring on the customer's side

Correct Answer: A

A. Vendor-driven selection of monitoring parameters:

 Explanation: This statement accurately reflects the implication of "biasness
towards vendors" in the context of SLA (Service Level Agreement) monitoring.
It suggests that the monitoring parameters and metrics may be selected
based on the vendor's interests or capabilities, rather than being aligned with
the actual needs and expectations of the customer. This can lead to a
situation where the SLA monitoring favors the vendor's perspective, potentially
overlooking critical aspects that matter to the customer.

B. Customer-driven selection of monitoring parameters:

 Explanation: This option contradicts the concept of vendor bias. If the
selection of monitoring parameters is customer-driven, it implies that the
parameters are chosen based on the customer's requirements and priorities,
which would not reflect a bias towards the vendor.

C. Balanced approach in monitoring parameters:


 Explanation: A balanced approach would mean that both the vendor and the
customer have input into the selection of monitoring parameters. This
scenario would not indicate bias towards the vendor but rather a collaborative
effort, which is contrary to the term "biasness towards vendors."

D. Lack of active monitoring on the customer's side:

 Explanation: This option suggests that the customer is not actively involved in
monitoring, which could lead to a lack of oversight. However, it does not
directly relate to the idea of vendor bias in the selection of monitoring
parameters; it implies a different issue regarding the customer's engagement.
QUESTION 4:
How does the master node in the Google File System maintain communication
with chunk servers?

A. Command messages

B. Update messages

C. Query messages

D. Heartbeat messages

Correct Answer: D

A. Command messages:
 Explanation: While command messages can be used in various distributed
systems to send instructions from one node to another, this term is not
specifically used to describe the communication method between the master
node and chunk servers in the Google File System (GFS). The master node
primarily maintains control and coordination through other means.

B. Update messages:

 Explanation: Update messages suggest the transmission of changes or
modifications to data or configurations. While the master node may send
updates to chunk servers about file metadata or chunk locations, this is not
the primary method of maintaining ongoing communication. The focus in GFS
is on the periodic checking of the health of chunk servers.
C. Query messages:
 Explanation: Query messages typically refer to requests for information or
data. Although the master node may issue queries to the chunk servers for
status or data retrieval, this terminology does not capture the ongoing, health-
monitoring aspect of the communication between the master node and the
chunk servers.

D. Heartbeat messages:

 Explanation: This is the correct answer. In the Google File System, the master
node maintains communication with chunk servers primarily through heartbeat
messages. These are periodic signals sent from chunk servers to the master
node to indicate that they are alive and functioning properly. Heartbeat
messages are crucial for monitoring the health and availability of chunk
servers, allowing the master node to take appropriate actions if a server fails
or becomes unresponsive.
QUESTION 5:

In a cloud, the total service uptime is 175 minutes and the availability of the service is 0.85. What is the service downtime?

A. 55 minutes

B. 148.75 minutes

C. 26.25 minutes

D. 45 minutes

Correct Option: C

Detailed Answer: Availability = 1 – (downtime/uptime).

Downtime = Uptime × (1 − Availability) = 175 × (1 − 0.85) = 26.25 minutes.
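The same arithmetic, as a quick check (values from the question):

```python
# Downtime from the document's formula: Availability = 1 - downtime/uptime.
uptime = 175          # total service uptime, minutes
availability = 0.85

downtime = uptime * (1 - availability)
print(round(downtime, 2))  # 26.25
```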

QUESTION 6:

Statement 1: In ephemeral storage, the stored objects persist until the VM is terminated.

Statement 2: The ephemeral storage is managed by Cinder in OpenStack.

A. Statement 1 is TRUE, Statement 2 is FALSE

B. Statement 2 is TRUE, Statement 1 is FALSE


C. Both statements are TRUE
D. Both statements are FALSE

Correct Answer: A

Detailed Solution: Ephemeral storage is managed by NOVA in OpenStack.

Statement 1: In ephemeral storage, the stored objects persist until the VM is terminated.

 Explanation: This statement is TRUE. In OpenStack, ephemeral storage is
temporary storage that is allocated for the duration of a virtual machine (VM)
instance. The data stored in ephemeral storage exists only while the VM is
running; once the VM is terminated, the ephemeral storage and its contents
are deleted. Thus, the statement accurately reflects the nature of ephemeral
storage.

Statement 2: The ephemeral storage is managed by Cinder in OpenStack.

 Explanation: This statement is FALSE. In OpenStack, ephemeral storage is
managed by Nova, which is the compute service responsible for handling
instances and their associated resources. Cinder, on the other hand, is the
block storage service in OpenStack that manages persistent storage volumes.
Therefore, ephemeral storage does not fall under the management of Cinder
but is managed by Nova.
QUESTION 7:

“Midsize providers can achieve similar statistical economies to an infinitely large provider.” What does this fall under?

A. Correlated demand
B. Dependent demand

C. Independent demand

D. Mixed demand

Correct Option: C

Detailed Answer: Midsize providers can achieve similar statistical economies to an infinitely large provider when demands are independent.

A. Correlated demand:
 Explanation: Correlated demand refers to situations where the demand for
one product or service is related to the demand for another. In other words, if
the demand for one item increases, the demand for another item may also
increase. This does not apply to the context of midsize providers achieving
statistical economies, as the statement focuses on the independence of
demand patterns rather than their correlation.

B. Dependent demand:

 Explanation: Dependent demand is a concept where the demand for one
product or service is directly influenced by the demand for another product or
service. This typically applies to components or materials that are required to
produce a final product (e.g., the demand for wheels is dependent on the
demand for cars). This concept is not relevant to the statement about midsize
providers achieving economies of scale, as it emphasizes the independence
of demand rather than dependency.

C. Independent demand:

 Explanation: This is the correct answer. Independent demand refers to
demand that is not influenced by the demand for other products or services. In
the context of the statement, it suggests that midsize providers can
experience similar statistical economies (cost advantages or efficiencies) as
large providers because their demand patterns do not depend on other
factors. This independence allows them to leverage statistical economies of
scale similar to those of infinitely large providers, even with a smaller scale of
operations.

D. Mixed demand:

 Explanation: Mixed demand implies a combination of both independent and
dependent demand elements. However, the statement specifically
emphasizes the capability of midsize providers to achieve statistical
economies independent of the scale of larger providers. Therefore, mixed
demand does not accurately describe the situation presented in the
statement.
QUESTION 8:

Let D(t) and R(t) be the instantaneous demand and resources at time t respectively. If demand is exponential (D(t) = e^t), any fixed provisioning interval (tp) according to the current demands will fall linearly behind.
A. TRUE

B. FALSE

Correct Option: B

Detailed Answer: If demand is exponential (D(t) = e^t), any fixed provisioning interval (tp) according to the current demands will fall exponentially behind.

QUESTION 9:

Which of the following is/are expected common SLA parameter(s) for both Software-as-a-Service and Storage-as-a-Service models?

A. usability

B. scalability

C. recovery

D. None of these

Correct Option: B

Detailed Answer: Scalability is common among the options.

A. Usability:

 Explanation: Usability refers to how easy and user-friendly a service or
software is for its intended users. While usability is important for Software-as-
a-Service (SaaS) applications, it is not as directly applicable to Storage-as-a-
Service (StaaS) models, where the focus is more on storage capabilities and
performance. Therefore, usability is not a common SLA parameter for both
models.

B. Scalability:
 Explanation: This is the correct answer. Scalability is a critical parameter for
both SaaS and StaaS. It refers to the ability of a service to handle increased
loads or to grow with the needs of the user without a significant drop in
performance. For SaaS, scalability ensures that the application can
accommodate more users or increased transactions as demand grows. For
StaaS, it means that storage resources can be easily expanded to meet
increasing data storage needs. Both service models must demonstrate
scalability to meet customer requirements effectively.

C. Recovery:

 Explanation: Recovery generally refers to the ability to restore services and
data after a failure or disaster. While recovery is a significant SLA parameter
for both SaaS and StaaS, its implementation can vary widely between the two
models. In SaaS, recovery might pertain to application-level data, while in
StaaS, it focuses on the recovery of stored data. Although important, recovery
may not be considered a common SLA parameter in the same way that
scalability is.

D. None of these:

 Explanation: This option implies that none of the given parameters are
common SLA parameters, which is incorrect since scalability is indeed a
common SLA parameter for both SaaS and StaaS.
QUESTION 10:

Data retention and deletion by cloud providers do not fall under one of the SLA
requirements.

A. True

B. False

Correct Option: A

Detailed Answer: Some cloud providers have legal requirements to retain data even if it has been deleted by the consumer. Hence, they must be able to prove their compliance with these policies.

Week 4
QUESTION 1:

SQL Azure is a cloud-based relational database service that is based on:

(a) Oracle

(b) SQL Server

(c) MySQL

(d) None

Correct Answer: b

(a) Oracle:
 Explanation: Oracle is a separate relational database management system
(RDBMS) developed by Oracle Corporation. SQL Azure is not based on
Oracle; instead, it is specifically built on Microsoft’s SQL Server technology.
Therefore, this option is incorrect.

(b) SQL Server:

 Explanation: This is the correct answer. SQL Azure, now known as Azure SQL
Database, is a cloud-based relational database service provided by Microsoft
Azure that is based on Microsoft SQL Server. It offers many features and
capabilities found in SQL Server, allowing users to leverage familiar SQL
Server tools and functionalities in a cloud environment. This service supports
automatic scaling, high availability, and various deployment options.

(c) MySQL:

 Explanation: MySQL is an open-source relational database management


system owned by Oracle Corporation. While Azure does offer MySQL as part
of its services (Azure Database for MySQL), SQL Azure itself is specifically
based on SQL Server technology. Thus, this option is also incorrect.

(d) None:

 Explanation: This option implies that SQL Azure is not based on any of the
mentioned database systems, which is incorrect. Since SQL Azure is built on
SQL Server, this option does not apply.
QUESTION 2:

Microsoft Azure provides

(a) SaaS

(b) PaaS

(c) IaaS
(d) None
Correct Answer: a, b, c

(a) SaaS (Software as a Service):

 Explanation: Microsoft Azure offers Software as a Service (SaaS) solutions,
which provide fully managed applications that users can access over the
internet without managing the underlying infrastructure or application logic.
Examples include Microsoft 365 and Dynamics 365, which are hosted on
Azure. SaaS allows users to use software applications without the need for
installation, maintenance, or infrastructure management.

(b) PaaS (Platform as a Service):

 Explanation: Azure also provides Platform as a Service (PaaS), enabling
developers to build, deploy, and manage applications without dealing with the
complexities of underlying infrastructure. Azure PaaS offers services like
Azure App Services, Azure SQL Database, and Azure Functions, which help
developers focus on writing code and deploying applications, while Azure
handles server management, scalability, and security.

(c) IaaS (Infrastructure as a Service):

 Explanation: Infrastructure as a Service (IaaS) in Azure provides virtualized
computing resources over the internet, such as virtual machines, storage, and
networking. This allows organizations to manage their own operating systems,
applications, and configurations while offloading physical hardware and
networking maintenance to Azure. Services like Azure Virtual Machines and
Azure Storage are examples of IaaS offerings in Azure.

QUESTION 3:

Azure App Service plan defines

(a) Region

(b) Instance size

(c) Scale count

(d) None

Correct Answer: a,b,c

(a) Region:

 Explanation: The Region setting in an Azure App Service plan specifies the
geographical location where the app’s resources (such as compute power and
storage) are deployed. Choosing the correct region helps reduce latency for
users in a specific area and ensures compliance with data residency
requirements. It is a critical part of the configuration in an App Service plan.

(b) Instance size:


 Explanation: Instance size refers to the virtual machine specifications (such as
CPU, memory, and storage) for the app’s hosting environment. Azure App
Service plans offer different instance sizes (e.g., small, medium, large) to
allow scaling the app's performance according to workload demands. Setting
the instance size helps determine the power and cost of running the
application.

(c) Scale count:

 Explanation: Scale count defines the number of instances running for an app
within the App Service plan. By adjusting the scale count, the app can handle
more users or requests. It allows automatic or manual scaling, ensuring that
the app performs optimally under different loads.
QUESTION 4:

The OpenStack component Glance monitors and meters the OpenStack cloud for billing and benchmarking. State True or False.

a) True

b) False

Correct Answer: b

Detailed Answer: Glance is OpenStack's image service. Monitoring and metering the cloud for billing and benchmarking is done by Ceilometer, the telemetry service.

QUESTION 5:

GCP: Choose the correct option(s)

a) To run your web-application, you need to configure only the Google Storage
bucket

b) “gcloud app deploy app.yaml” the command can be used to deploy your
app to app-engine

c) After launching your application to app-engine anyone can view the app at

http://[YOUR_PROJECT_ID].appspot.com

d) “gcloud app browse” – can be used to start the local development server for
the application
Correct Answer: b, c
(a) To run your web application, you need to configure only the Google Storage
bucket:

 Explanation: This statement is incorrect. While Google Cloud Storage buckets
are used to store and serve static assets (e.g., images, videos), a web
application in GCP typically requires more than just a storage bucket
configuration. Running a web app would involve configuring a service like
Google App Engine or Compute Engine to host the application logic, rather
than relying solely on a storage bucket.
(b) “gcloud app deploy app.yaml” the command can be used to deploy your app to
app-engine:

 Explanation: This statement is correct. The command gcloud app deploy
app.yaml is used to deploy an application to Google App Engine by specifying
the configuration file (app.yaml). This command enables developers to push
their app to App Engine, making it accessible on the web.

(c) After launching your application to app-engine anyone can view the app at
http://[YOUR_PROJECT_ID].appspot.com:

 Explanation: This statement is correct. After deployment, Google App Engine
provides a default URL in the format
http://[YOUR_PROJECT_ID].appspot.com, where [YOUR_PROJECT_ID] is
replaced by the unique project ID. This URL allows public access to the
deployed application.

(d) “gcloud app browse” – can be used to start the local development server for
the application:

 Explanation: This statement is incorrect. The command gcloud app browse
opens the deployed application in a web browser but does not start a local
development server. To run a development server locally, other commands or
tools such as the App Engine SDK would be used, depending on the
development environment.
QUESTION 6:

In OpenStack, the different components of Nova (e.g., scheduler, compute, API, etc.) communicate via:

(a) Message Queues

(b) Neutron

(c) Conductor
(d) Swift
Correct Answer: a

(a) Message Queues:

 Explanation: This is the correct answer. In OpenStack, the components of
Nova (such as the scheduler, compute, and API services) communicate with
each other primarily via message queues. Message queues allow
asynchronous communication and task distribution across different services,
making it a reliable choice for managing inter-component communication in
distributed systems like OpenStack. RabbitMQ is commonly used as the
message queue service in OpenStack.

(b) Neutron:

 Explanation: Neutron is the networking component of OpenStack, responsible
for managing networks, subnets, and IP addresses. It provides network
connectivity for VMs but is not used for communication between Nova
components.

(c) Conductor:

 Explanation: The Conductor service in Nova is used to handle complex
database interactions and to mediate certain types of requests to ensure the
security and efficiency of database operations. However, it is not responsible
for the main communication between Nova components.

(d) Swift:

 Explanation: Swift is OpenStack's object storage service, which is used to
store and retrieve large amounts of unstructured data. Swift is unrelated to
Nova's internal communication and is not used for managing interactions
between Nova components.
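The asynchronous pattern itself can be sketched with Python's standard-library queue (an illustrative stand-in for the idea, not the RabbitMQ/oslo.messaging API that Nova actually uses):

```python
# One component publishes a task on a queue and carries on; another
# consumes it independently -- the essence of queue-based decoupling.
import queue
import threading

tasks = queue.Queue()
handled = []

def compute_worker():
    # e.g. a compute service: blocks until a message arrives, then acts on it
    msg = tasks.get()
    handled.append("booted " + msg["flavor"])
    tasks.task_done()

worker = threading.Thread(target=compute_worker)
worker.start()

# e.g. a scheduler: publishes a request without waiting for the worker
tasks.put({"action": "boot_vm", "flavor": "m1.small"})

worker.join()
print(handled)  # ['booted m1.small']
```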
QUESTION 7:

In OpenStack, __________ is a system for managing networks and IP addresses.

(a) Nova

(b) Keystone

(c) Neutron

(d) None of these


Correct Answer: c

(a) Nova:
 Explanation: Nova is the compute component in OpenStack, responsible for
provisioning and managing virtual machines. It handles compute resources
but is not responsible for managing networks or IP addresses.

(b) Keystone:

 Explanation: Keystone is the identity service in OpenStack, which manages


authentication and authorization. It provides user and service authentication
and assigns roles to control access to resources. However, it does not handle
network management or IP addressing.

(c) Neutron:

 Explanation: Neutron is the correct answer. Neutron is the networking
component of OpenStack, designed to provide networking-as-a-service. It
manages networks, subnets, and IP addresses, enabling connectivity for
instances. Neutron allows users to create and manage their own networks,
set up routers, and allocate IP addresses.
QUESTION 8:

Cloud Datastore in GCP is a NoSQL document database built for automatic scaling, high performance, and ease of application development.


a) True

b) False

Correct Answer: a

NoSQL Document Database:

 Explanation: Cloud Datastore is a NoSQL database, meaning it stores data in
a flexible schema-less format, which is useful for applications that require
dynamic and hierarchical data structures. It supports storing documents in
JSON-like structures.

Automatic Scaling:

 Explanation: Cloud Datastore automatically scales with workload demands.
This means as the number of requests or data volume grows, the database
can handle the increase in load without requiring manual intervention, making
it ideal for applications with variable traffic.
High Performance:
 Explanation: Built on Google’s infrastructure, Cloud Datastore is optimized for
high performance, supporting rapid read and write operations. This makes it
suitable for applications that require quick response times and high
throughput.

Ease of Application Development:


 Explanation: Cloud Datastore is designed with developer ease in mind. It
integrates seamlessly with other GCP services, provides a straightforward API
for interaction, and supports ACID transactions at the entity group level,
making it simpler to build and manage scalable applications.
QUESTION 9:

GCP: Which one is/are correct statement(s)?

a) You can reuse the project ID only after you delete the previous project in
GCP

b) A CNAME alias is a DNS record that lets you use a URL from your own
domain to access

resources, such as a bucket and objects, in Cloud Storage using your custom
domain URL

c) “Multi-Regional” Storage class is used for the bucket to stream videos and
host hot web

content accessed frequently around the world

d) “Nearline” Storage class is used for the bucket to store data accessed
frequently in one part

of the world

Correct Answer: b, c

(a) You can reuse the project ID only after you delete the previous project in GCP:
 Explanation: This statement is incorrect. In Google Cloud Platform, project IDs
are globally unique and cannot be reused, even after the associated project
has been deleted. Once a project ID is created and used, it cannot be
reclaimed by any other project.

(b) A CNAME alias is a DNS record that lets you use a URL from your own
domain to access resources, such as a bucket and objects, in Cloud Storage using
your custom domain URL:

 Explanation: This statement is correct. A CNAME (Canonical Name) alias can
be used in Google Cloud Storage to map a custom domain URL to a storage
bucket. This allows users to access storage resources using a more familiar
domain name, improving accessibility and branding.
(c) “Multi-Regional” Storage class is used for the bucket to stream videos and
host hot web content accessed frequently around the world:

 Explanation: This statement is correct. The Multi-Regional Storage class in
GCP is designed for data that requires high availability and low-latency
access across multiple geographic locations. It is ideal for hosting frequently
accessed (hot) content, such as video streaming or web resources accessed
globally.

(d) “Nearline” Storage class is used for the bucket to store data accessed
frequently in one part of the world:

 Explanation: This statement is incorrect. Nearline Storage is intended for data
that is accessed less frequently, typically once a month or less. It is suitable
for backup, archival, or infrequently accessed data, rather than for frequently
accessed data. For frequently accessed data in a specific region, a Regional
or Multi-Regional Storage class would be more appropriate.
QUESTION 10:

OpenStack: Which IP use is preferred for transferring data to a VM from

(i) Another VM in the same cloud

(ii) One organization's network


(a) i. Floating IP, ii. Private IP

(b) i. Private IP, ii. Floating IP

(c) Floating IP in both cases

(d) Private IP in both cases

Correct Answer: b

1. (i) Transferring data to a VM from Another VM in the Same Cloud:

o Preferred IP Type: Private IP. When two VMs are within the same cloud
(especially within the same tenant network in OpenStack), they can
communicate using private IP addresses. Private IPs are local to the
cloud’s internal network and provide faster, more secure
communication without exposing data to the public internet.

2. (ii) Transferring data to a VM from One Organization’s Network:

o Preferred IP Type: Floating IP. For external access, such as from an
organization’s network or the public internet, a floating IP is needed.
Floating IPs are public IP addresses that can be associated with VMs
in OpenStack to allow external access. This makes it possible for
systems outside the cloud to communicate with the VM securely.

Explanation of Other Options:


 (a) i. Floating IP, ii. Private IP:

o This option is incorrect because a floating IP is unnecessary for
communication between VMs within the same cloud, and a private IP
would not provide access from an external network.

 (c) Floating IP in both cases:


o This option is incorrect because using floating IPs for internal VM-to-
VM communication within the same cloud is inefficient and adds
unnecessary complexity. Private IPs are preferred for internal
communication.

 (d) Private IP in both cases:

o This option is incorrect because a private IP alone would not allow
access from an organization’s external network to a VM in the cloud.
Floating IPs are required for such external communication.

Week 5

QUESTION 1:

In an SLA negotiation, the provider agreed to a service availability of 98%. The consumer runs the application for X hours/day. At the end of one month [31 days], the total service outage was 12 hrs. However, the SLA negotiation (in terms of service availability) is honored.

a. X is at least 19.74

b. X is at most 19.74

c. X is exactly 19.74

d. Insufficient information

Correct Answer: a
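One reading consistent with option (a), using the Availability = 1 − downtime/uptime formula applied elsewhere in this document (an assumption, since this question does not restate it): 12 hrs of outage at 98% availability requires at least 12/0.02 = 600 hrs of uptime, i.e. 612 running hours over 31 days.

```python
# Minimum X such that the 98% availability SLA is still honored,
# assuming Availability = 1 - downtime/uptime.
downtime = 12          # hours of outage over the month
availability = 0.98
days = 31

min_uptime = downtime / (1 - availability)   # 600 hours of uptime required
total_hours = min_uptime + downtime          # 612 running hours in the month
X = total_hours / days
print(round(X, 2))  # 19.74
```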
QUESTION 2:
Average resource demand is 45 units, baseline (owned) unit cost is 200 units, time is 10 hours, and peak resource demand is 100 units. If the cloud is cheaper than owning the computing infrastructure, the utility premium is

a. Greater than 2.22

b. Less than 2.22

c. At least 4.45

d. At most 4.45

Correct Answer: b
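A sketch of the usual comparison behind this answer: owning must provision for peak demand, while the cloud pays a utility premium U on average demand, so the cloud is cheaper only when U < Peak/Average (this cost model is assumed, not stated in the question):

```python
# Cloud cheaper  <=>  avg_demand * U * cost * time < peak_demand * cost * time
#                <=>  U < peak_demand / avg_demand
avg_demand = 45
peak_demand = 100

threshold = peak_demand / avg_demand
print(round(threshold, 2))  # 2.22 -- so the utility premium must be below 2.22
```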

QUESTION 3:

In computing, there is a linear relationship between the number of processing cores used and power consumption.

a. TRUE

b. FALSE

Correct Answer: a

QUESTION 4:

The ________ takes a series of key/value pairs, processes each, and generates zero or more output pairs.

a. map function

b. partition function

c. reduce function

d. None of these

Correct Answer: a

(a) Map Function:


 Explanation: The map function is part of the MapReduce programming model.
It processes a set of input key-value pairs and applies a specific operation to
each pair. This function generates zero or more intermediate key-value pairs
based on the logic provided, and it is the first stage in the MapReduce model.
The map function is responsible for breaking down the data and preparing it
for the reduce function.

(b) Partition Function:

 Explanation: The partition function is not responsible for generating output
from the key-value pairs. Instead, it organizes intermediate data after the map
phase, distributing it across different reducers for further processing. This
ensures that each reducer receives a specific subset of the data for
processing.

(c) Reduce Function:

 Explanation: The reduce function comes after the map function in the
MapReduce model. Its job is to aggregate, summarize, or combine the
intermediate results produced by the map function. The reduce function
processes grouped key-value pairs and generates the final output but does
not take the original input directly.
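The map stage described above can be sketched as follows (a word-count style example with illustrative input):

```python
# A minimal map function: each input key/value pair may emit
# zero or more intermediate key/value pairs.
def map_function(key, value):
    # key: document name, value: document text (illustrative choice)
    for word in value.split():
        yield (word, 1)

pairs = list(map_function("doc1", "the cat sat on the mat"))
print(pairs[:2])   # [('the', 1), ('cat', 1)]
print(len(pairs))  # 6
```

A subsequent reduce function would then aggregate these intermediate pairs, e.g. summing the counts grouped by word.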
QUESTION 5:

In a MapReduce framework the HDFS block size is 64 MB. We have 6 files of size 64KB, 65MB, X MB, Y KB, 67KB and 127MB. 24 blocks are created by the Hadoop framework. The sizes of X and Y are respectively [one or more than one option may be correct; select all correct options]:

a. 66 and 64

b. 64 and 64

c. 64 and 66
d. 128 and 64

Correct Answer: b, c
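A sketch of the count behind options (b) and (c), assuming the default HDFS replication factor of 3 (not stated in the question). With X = 64 MB and Y = 64 KB, the files occupy 1 + 2 + 1 + 1 + 1 + 2 = 8 blocks, and 8 blocks × 3 replicas = 24; Y = 66 KB gives the same count:

```python
# Block count for option (b): X = 64 MB, Y = 64 KB; sizes in MB.
import math

BLOCK_MB = 64
REPLICATION = 3  # assumed default HDFS replication factor

files_mb = [64 / 1024, 65, 64, 64 / 1024, 67 / 1024, 127]

blocks = sum(math.ceil(size / BLOCK_MB) for size in files_mb)
print(blocks)                # 8
print(blocks * REPLICATION)  # 24
```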

QUESTION 6:

Which among the following is/are logical resource(s)?


a. Network
b. Computer

c. Database

d. Execution

Correct Answer: d

(a) Network:

 Explanation: A network is considered a physical resource rather than a
logical resource. It comprises hardware components (such as routers,
switches, and cables) that enable communication between devices.
While the network can have logical aspects (like network protocols), it
fundamentally consists of physical infrastructure.

(b) Computer:

 Explanation: A computer is also classified as a physical resource. It
includes hardware components like CPUs, memory, storage devices,
and other peripherals. The term "computer" generally refers to the
physical device itself, though virtual machines can represent logical
resources.

(c) Database:

 Explanation: A database is typically considered a logical resource in
terms of how it organizes and stores data. However, since it often
requires a physical storage medium (like a disk or server), it may not be
purely logical. Therefore, while databases have logical aspects, they
depend on physical resources for implementation.

(d) Execution:
 Explanation: Execution is indeed a logical resource. It refers to the
process of running applications or programs and the associated
execution environment (such as runtime, threads, and processes).
Execution does not have a physical manifestation but rather represents
the logical activity that occurs within computing environments.

QUESTION 7:

When load decreases, VM management can be done by

a. Live migrate VMs to more utilized nodes

b. Shutdown unused nodes

c. Migrate VMs to less utilized nodes

d. None of these

Correct Answer: a,b

(a) Live migrate VMs to more utilized nodes:

 Explanation: Live migration refers to the process of moving a running
virtual machine (VM) from one physical host to another without
disconnecting the client or shutting down the VM. When the load on a system
decreases, migrating VMs to more utilized nodes consolidates the workload
onto fewer active nodes. This practice ensures that the resources are used
efficiently and can lead to better performance and resource allocation.

(b) Shutdown unused nodes:

 Explanation: When the load decreases, shutting down unused nodes is an
effective way to optimize resource usage and reduce operational costs. By
powering down physical servers that are not needed, organizations can save
energy and maintenance costs while also simplifying the management of their
infrastructure. This approach is beneficial in a cloud environment where
resources can be dynamically allocated based on demand.

(c) Migrate VMs to less utilized nodes:

 Explanation: This option is generally incorrect in the context of managing
VM load. When load decreases, migrating VMs to less utilized nodes may lead
to inefficient resource utilization and potential performance issues.
Instead, the focus should be on consolidating workloads onto more utilized
nodes to optimize resource distribution.
QUESTION 8:

Correspondence between resources required by the users and resources
available with the provider is known as

a. Resource provisioning

b. Resource adaptation

c. Resource mapping

d. None of these

Correct Answer: c

(a) Resource provisioning:

 Explanation: Resource provisioning refers to the process of allocating
resources to users or applications as needed. This involves setting up and
configuring resources (such as servers, storage, and networking) to ensure
that users have access to the necessary capabilities. While related to the
management of resources, provisioning does not specifically address the
correspondence between user requirements and available resources.

(b) Resource adaptation:

 Explanation: Resource adaptation involves modifying or adjusting resources
to better fit the needs of users or applications. This can include scaling
resources up or down based on demand or changing configurations to improve
performance. However, it does not primarily focus on the direct
correspondence between user requirements and available provider resources.

(c) Resource mapping:

 Explanation: Resource mapping is the correct term that describes the
correspondence between the resources required by users and the resources
available from the provider. This concept involves identifying and matching
user demands (such as computational power, storage capacity, or network
bandwidth) with the resources that the provider has available. Resource
mapping ensures that users are assigned the appropriate resources based on
their specific requirements.
QUESTION 9:

Ability or capacity of a system to adjust its resources dynamically to
fulfill the requirements of the user is known as

a. Resource provisioning

b. Resource adaptation

c. Resource mapping

d. None of these

Correct Answer: b

(a) Resource provisioning:

 Explanation: Resource provisioning refers to the process of allocating and
configuring resources to users or applications as needed. While it is an
essential part of resource management, provisioning does not specifically
encompass the ability to dynamically adjust resources in response to
changing user requirements.
(b) Resource adaptation:
 Explanation: Resource adaptation is the correct term that describes the ability
or capacity of a system to dynamically adjust resources to fulfill the
requirements of users. This involves scaling resources up or down, modifying
configurations, or reallocating resources based on real-time demands.
Adaptation ensures that the system can efficiently meet varying workloads
and user needs, enhancing performance and resource utilization.

(c) Resource mapping:

 Explanation: Resource mapping involves identifying and matching user
resource requirements with the available resources of a provider. While it
is an important concept in resource management, it does not directly refer
to the dynamic adjustment of resources; rather, it focuses on the
correspondence between what users need and what is available.
QUESTION 10:

Statement 1: Map operation consists of transforming one set of key-value
pairs to another.

Statement 2: Each reducer groups the results of the map step using the
same key.

a. Both statements are true

b. Both statements are false

c. Statement 1 is true and Statement 2 is false

d. Statement 1 is false and Statement 2 is true

Correct Answer: a

Statement 1: Map operation consists of transforming one set of key-value
pairs to another.

 Explanation: This statement is true. In the context of the MapReduce
programming model, the map operation takes an input set of key-value pairs
and processes them to produce a new set of key-value pairs. The
transformation can involve filtering, sorting, or performing calculations
on the input data. The output of the map function is a set of intermediate
key-value pairs that are passed to the next phase of processing.

Statement 2: Each reducer groups the results of the map step using the
same key.

 Explanation: This statement is also true. In the MapReduce framework,
after the map phase, the output key-value pairs are shuffled and sorted by
key. Each reducer then processes the grouped results by key, which means
that all values associated with the same key from the map step are
collected and passed to the reducer. This allows the reducer to aggregate,
summarize, or perform further processing on the grouped values.
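Both statements can be seen in a minimal word-count sketch of the model, written in plain Python standing in for Hadoop (function names are illustrative):

```python
from itertools import groupby
from operator import itemgetter

# map: transform input (doc_id, text) pairs into intermediate (word, 1) pairs
def map_fn(doc_id, text):
    for word in text.split():
        yield (word, 1)

# shuffle/sort: group intermediate pairs by key, as each reducer sees them
def shuffle(pairs):
    pairs = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(pairs, key=itemgetter(0)):
        yield key, [v for _, v in group]

# reduce: aggregate all values that share the same key
def reduce_fn(key, values):
    return key, sum(values)

docs = [(1, "map reduce map"), (2, "reduce map")]
intermediate = [kv for doc in docs for kv in map_fn(*doc)]
result = dict(reduce_fn(k, vs) for k, vs in shuffle(intermediate))
print(result)   # {'map': 3, 'reduce': 2}
```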

Week 6

QUESTION 1:

Interception is considered as an attack on

a) Confidentiality

b) Availability
c) Integrity

d) Authenticity
Correct Answer: a

(a) Confidentiality:

 Explanation: Confidentiality refers to the protection of information from
unauthorized access and disclosure. An interception attack involves an
unauthorized party capturing or accessing data that is being transmitted
over a network. This type of attack compromises confidentiality because the
intercepted information can be viewed or extracted by the attacker,
potentially leading to sensitive data being exposed.

(b) Availability:

 Explanation: Availability ensures that authorized users have access to
information and resources when needed. Attacks that affect availability
typically include denial-of-service attacks, which prevent users from
accessing services or data. Interception does not directly affect
availability, as it involves unauthorized access to information rather than
preventing access.

(c) Integrity:

 Explanation: Integrity refers to the assurance that information is
accurate, reliable, and has not been altered or tampered with. While
interception could potentially lead to integrity violations if an attacker
alters the intercepted data, the primary concern of interception is
unauthorized access to information, making this option less relevant in the
context of interception specifically.

(d) Authenticity:

 Explanation: Authenticity ensures that information is genuine and from a
verified source. An attack that compromises authenticity might involve
impersonating a trusted source or tampering with messages to make them
appear legitimate. Interception alone does not necessarily compromise
authenticity unless the attacker also manipulates the data.
QUESTION 2:

Find the correct statement(s):

a) Different types of cloud computing service models provide different
levels of security services

b) Adapting your on-premises systems to a cloud model requires that you
determine what security mechanisms are required and map those to controls
that exist in your chosen cloud service provider

c) Data should be transferred and stored in an encrypted format for
security purposes

d) All are incorrect statements

Correct Answer: a, b, c

QUESTION 3:

Which of the following is/are example(s) of passive attack?

a) Replay

b) Denial of service

c) Traffic analysis

d) Masquerade
Correct Answer: c

(a) Replay:

 Explanation: A replay attack is considered an active attack, not a passive
one. In this type of attack, an attacker captures data packets transmitted
over a network and then retransmits them to trick the system into believing
they are legitimate. This manipulation of previously transmitted data is
characteristic of active attacks.

(b) Denial of service:

 Explanation: Denial of Service (DoS) attacks are also active attacks. They
aim to make a network service unavailable to its intended users by
overwhelming it with traffic or exploiting vulnerabilities. This disrupts
the availability of services, which is a hallmark of active attack
strategies.

(c) Traffic analysis:

 Explanation: Traffic analysis is indeed a passive attack. It involves
monitoring and analyzing network traffic to glean information about
communication patterns, without altering the data or interfering with the
transmission. Attackers use traffic analysis to gather intelligence about a
system, such as determining who is communicating with whom and the volume
of data being exchanged, without directly affecting the communication
itself.

(d) Masquerade:

 Explanation: A masquerade attack is another form of active attack, where
an attacker impersonates a legitimate user or system to gain unauthorized
access to information or systems. This involves actively deceiving the
system and users, which distinguishes it from passive attack strategies.

QUESTION 4:

Modification is considered as an attack on

(a) Confidentiality

(b) Availability
(c) Integrity

(d) Authenticity

Correct Answer: c

(a) Confidentiality:

 Explanation: Confidentiality refers to the protection of information from
unauthorized access and disclosure. An attack that compromises
confidentiality involves unauthorized parties accessing sensitive data.
While modification could potentially lead to unauthorized access, it is not
primarily an attack on confidentiality.
(b) Availability:
 Explanation: Availability ensures that information and resources are
accessible to authorized users when needed. Attacks on availability, such as
denial-of-service attacks, prevent legitimate users from accessing services or
data. Modification does not directly affect availability but rather alters data.

(c) Integrity:
 Explanation: Integrity refers to the assurance that information is accurate and
has not been tampered with or altered in an unauthorized manner. A
modification attack specifically involves unauthorized changes to data, which
compromises its integrity. This type of attack could involve altering files,
messages, or transactions, resulting in incorrect or misleading information.

(d) Authenticity:

 Explanation: Authenticity ensures that information is genuine and from a
verified source. While modification can impact authenticity (if the
modified data is presented as legitimate), the primary concern of a
modification attack is the alteration of the data itself, making integrity
the more direct target.

QUESTION 5:

Spoofing is not an example of

(a) Deception

(b) Disclosure

(c) Usurpation

(d) Disruption

Correct Answer: b, d

(a) Deception:

 Explanation: Spoofing involves impersonating or masquerading as another
entity, typically to deceive systems or users into believing that the
attacker is a legitimate user or service. This action directly relates to
deception, as the attacker is manipulating the identity presented to the
target. Thus, spoofing is indeed an example of deception.

(b) Disclosure:

 Explanation: Disclosure refers to the unauthorized exposure of sensitive
information. While spoofing can lead to scenarios where information is
eventually disclosed (for example, if an attacker gains access to
confidential data), the act of spoofing itself does not directly cause the
disclosure of information. Instead, it is more about misleading
identification rather than revealing sensitive data. Therefore, spoofing is
not primarily an example of disclosure.

(c) Usurpation:

 Explanation: Usurpation involves taking control of a system or service,
often by exploiting trust. Spoofing can lead to usurpation if the attacker
successfully gains access or control over resources by impersonating a
legitimate user. Hence, spoofing is an example of usurpation as it can
facilitate unauthorized access or control.

(d) Disruption:

 Explanation: Disruption involves preventing legitimate access to services
or resources, often through means like denial-of-service attacks. While
spoofing can be part of a larger attack that causes disruption, the act of
spoofing itself is not inherently disruptive. It primarily involves
misrepresentation rather than directly affecting availability or service
continuity. Therefore, spoofing is not primarily an example of disruption.
QUESTION 6:

Consider the following statements:

Statement I: Authorization is the identification of legitimate users.

Statement II: Integrity is the protection against data alteration/corruption.

Identify the correct options:

a) Statement I is TRUE and statement II is FALSE.

b) Statement I is FALSE and statement II is TRUE.

c) Both statements are TRUE.

d) Both statements are FALSE.

Correct Option: b

Statement I: Authorization is the identification of legitimate users.


 Explanation: This statement is FALSE. Authorization refers to the process of
granting or denying specific access rights or permissions to users after they
have been authenticated. It determines what an authenticated user is allowed
to do within a system. On the other hand, authentication is the process of
identifying and verifying legitimate users. Therefore, the statement incorrectly
describes authorization as identification.

Statement II: Integrity is the protection against data alteration/corruption.

 Explanation: This statement is TRUE. Integrity refers to the accuracy and
consistency of data. Protecting data integrity involves ensuring that
information remains unaltered and that unauthorized modifications or
corruption do not occur. Mechanisms like checksums, hashing, and access
controls are often employed to maintain data integrity.
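The hashing mechanism mentioned above can be sketched with Python's standard hashlib (a minimal illustration of tamper detection, not a full integrity protocol):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest used as an integrity check value."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
stored_digest = digest(original)

# Unaltered data reproduces the same digest, so integrity holds
assert digest(b"transfer $100 to account 42") == stored_digest

# Any tampering changes the digest, so the alteration is detected
tampered = b"transfer $900 to account 42"
print(digest(tampered) == stored_digest)   # False
```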
QUESTION 7:
Access policy control refers to

a) Cyclic Inheritance Control

b) Virus Attack

c) Violation of SoD (separation of duties) Constraint

d) Man in the middle attack

Correct Answer: a, c

(a) Cyclic Inheritance Control:

 Explanation: Cyclic Inheritance Control is a mechanism used in access
control systems to manage and prevent issues that arise from circular
relationships in access permissions. This type of control is relevant in
scenarios where roles or permissions are assigned in a hierarchy that could
create loops, potentially leading to security vulnerabilities or ambiguous
permissions. Thus, it is indeed related to access policy control.

(b) Virus Attack:

 Explanation: A virus attack involves malicious software that can infect and
spread across systems, causing damage or unauthorized data access. While
it poses a security threat, it does not pertain directly to the concept of access
policy control. Access policies focus on defining who can access what
resources under what conditions, rather than the mechanics of malware or
virus behavior.

(c) Violation of SoD (Separation of Duties) Constraint:

 Explanation: The Separation of Duties (SoD) principle is a critical aspect
of access control that aims to reduce the risk of fraud and error by
ensuring that no single individual has control over all aspects of a
critical process. A violation of this principle can create security
vulnerabilities, as it may allow an individual to perform actions without
sufficient checks and balances. Therefore, violations of SoD are relevant
to access policy control.

(d) Man in the Middle Attack:


 Explanation: A Man in the Middle (MitM) attack involves an attacker
intercepting communication between two parties, allowing the attacker to
eavesdrop or manipulate the communication. While this is a significant
security concern, it is more about communication security and data integrity
rather than directly related to access policy control.
QUESTION 8:
Which of the options is/are considered as the basic components of security?

a) Confidentiality

b) Integrity

c) Reliability

d) Efficiency

Correct Answer: a, b

(a) Confidentiality:

 Explanation: Confidentiality refers to the protection of information from
unauthorized access and disclosure. It ensures that sensitive information
is kept secret and only accessible to individuals or systems that have the
appropriate permissions. Mechanisms such as encryption, access controls,
and authentication are commonly used to maintain confidentiality.

(b) Integrity:

 Explanation: Integrity is the assurance that information is accurate,
consistent, and unaltered during storage, transmission, and processing. It
prevents unauthorized modifications to data and ensures that data remains
trustworthy. Techniques like checksums, hashes, and audit trails help
ensure data integrity by detecting unauthorized changes.

(c) Reliability:
 Explanation: Reliability refers to the ability of a system to consistently perform
its intended function over time without failure. While reliability is important for
overall system performance and availability, it is not typically classified as a
basic component of security.

(d) Efficiency:

 Explanation: Efficiency relates to the performance and resource
utilization of a system. While it is important for system design and
operation, it does not directly relate to security components.
QUESTION 9:

Which of the following is/are not a type of passive attack?


a) Traffic Analysis
b) Release of message contents

c) Denial of service

d) Replay

Correct Answer: c, d

(a) Traffic Analysis:

 Explanation: Traffic Analysis is a type of passive attack where an
attacker monitors and analyzes patterns in network traffic to gather
information without altering the data. This can include details about the
source and destination of messages, timing, and frequency of
communications. It is considered a passive attack because it does not
involve direct interaction with the communication itself.

(b) Release of message contents:


 Explanation: The release of message contents is also a passive attack, as it
involves intercepting and reading the actual content of messages being
transmitted over a network. This type of attack does not modify the data or
disrupt the communication but instead focuses on unauthorized access to
information.

(c) Denial of Service (DoS):

 Explanation: A Denial of Service (DoS) attack is not a passive attack; it
is an active attack aimed at disrupting the normal functioning of a service
by overwhelming it with excessive requests, causing it to become
unavailable to legitimate users. This attack involves direct interaction
with and manipulation of the targeted service, making it an active form of
attack.

(d) Replay:

 Explanation: A Replay attack is also considered an active attack. In this
type of attack, a malicious actor captures valid data transmissions (such
as login credentials) and then retransmits (or "replays") them to
impersonate a legitimate user. This manipulation of data is a direct attack
on the integrity of the communication, thus classifying it as active rather
than passive.
QUESTION 10:

Side channel exploitation has the potential to extract RSA & AES secret keys

a) True

b) False
Correct Answer: a

Solution: Cross-VM information leakage due to sharing of physical resources
(CPU’s data caches).

Week 7

QUESTION 1:

The key features of mobile cloud computing (MCC) are

a) Facilitates the quick development, delivery and management of mobile apps

b) Uses more device resources because applications are cloud-supported

c) Improves reliability with information backed up and stored in the cloud

d) None of these

Correct Answer: a, c

(a) Facilitates the quick development, delivery, and management of mobile
apps:

 Explanation: Mobile Cloud Computing (MCC) allows developers to create
mobile applications that can quickly access cloud services and resources.
This reduces the complexity involved in app development, as developers can
leverage cloud infrastructure to enhance app functionalities, manage data,
and improve deployment efficiency. By using cloud services, developers can
also focus on core functionalities rather than infrastructure concerns.

(b) Uses more device resources because applications are cloud-supported:


 Explanation: This statement is generally not a key feature of MCC. While
cloud-supported applications may rely on cloud resources, they are designed
to offload heavy processing and storage to the cloud, which can actually
reduce the burden on mobile devices. Consequently, MCC aims to optimize
resource usage on mobile devices rather than increase it.

(c) Improves reliability with information backed up and stored in the cloud:

 Explanation: One of the significant benefits of mobile cloud computing is
enhanced reliability. By storing data and applications in the cloud, users
can ensure that their information is backed up and easily recoverable in
case of device failure or loss. This provides users with a sense of
security, knowing that their critical data is safely stored and accessible
from any device connected to the internet.
QUESTION 2:

Dynamic runtime offloading involves the issues of

a) Runtime application partitioning

b) Migration of intensive components

c) Continuous synchronization for the entire duration of runtime execution
platform

d) None of these

Correct Answer: a, b, c

(a) Runtime application partitioning:

 Explanation: Dynamic runtime offloading involves partitioning an
application at runtime to determine which components can be executed on
the mobile device and which can be offloaded to the cloud or another
resource. This partitioning process is crucial for optimizing performance
and resource utilization, allowing applications to balance their workload
effectively between local and remote resources.

(b) Migration of intensive components:


 Explanation: During dynamic runtime offloading, computationally intensive
components of an application can be migrated from the mobile device to a
cloud service to enhance performance and reduce latency. This migration
helps ensure that resource-intensive tasks do not overwhelm the mobile
device, allowing for smoother user experiences and efficient resource
management.

(c) Continuous synchronization for the entire duration of runtime execution
platform:

 Explanation: Continuous synchronization is essential when using dynamic
runtime offloading. As components are migrated between the mobile device
and the cloud, maintaining data consistency and synchronization is critical
for the application's functionality. This ensures that any changes made on
either side are reflected in real-time, preventing data loss and
inconsistencies during execution.
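A common way to frame such offloading decisions is a simple cost model: offload a component only when transfer time plus remote execution time beats local execution time. The function below is a hypothetical sketch of that idea; all names and numbers are illustrative assumptions, not from the course material:

```python
# Hypothetical runtime-offloading decision (illustrative cost model):
# offload when shipping the state and running remotely is faster than
# running the component on the device itself.
def should_offload(cycles, data_mb, local_mips, cloud_mips, bandwidth_mbps):
    local_time = cycles / local_mips                 # execute on the device
    transfer_time = data_mb * 8 / bandwidth_mbps     # ship state to the cloud
    cloud_time = cycles / cloud_mips                 # execute remotely
    return transfer_time + cloud_time < local_time

# Heavy computation over little data: offloading wins
print(should_offload(cycles=5000, data_mb=1, local_mips=10,
                     cloud_mips=1000, bandwidth_mbps=50))   # True
# Light computation over lots of data: keep it on the device
print(should_offload(cycles=50, data_mb=100, local_mips=10,
                     cloud_mips=1000, bandwidth_mbps=50))   # False
```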
QUESTION 3:

What is/are true about cloudlet?


a) Increases the latency in reaching the cloud servers
b) Reduces the latency in reaching the cloud servers

c) Resides far from the mobile devices

d) Resides near to the mobile devices

Correct Answer: b, d

(a) Increases the latency in reaching the cloud servers:

 Explanation: This statement is false. Cloudlets are designed to decrease
latency, not increase it. By providing resources and processing power
closer to the user, cloudlets help ensure faster access to data and
applications.

(b) Reduces the latency in reaching the cloud servers:

 Explanation: Cloudlets significantly reduce latency by acting as
intermediary computing resources that are geographically closer to mobile
devices. By processing data and applications at the cloudlet level, the
time taken to communicate with distant cloud servers is minimized, leading
to faster response times for users.

(c) Resides far from the mobile devices:

 Explanation: This statement is also false. Cloudlets are intended to be
located close to mobile devices, such as within local networks or urban
areas, to provide quicker access to computational resources and improve
user experience.

(d) Resides near to the mobile devices:

 Explanation: This statement is true. Cloudlets are typically deployed in
proximity to mobile devices to enhance performance by providing nearby
processing power, storage, and connectivity, thereby reducing the time it
takes for data to travel between the devices and the resources they need.
QUESTION 4:

What is/are true about mobile cloud computing (MCC)?

a) MCC increases the running cost for computation intensive applications

b) MCC reduces the running cost for computation intensive applications

c) MCC decreases battery lifetime

d) None of these

Correct Answer: b

(a) MCC increases the running cost for computation-intensive applications:


 Explanation: This statement is false. Mobile cloud computing (MCC) is
designed to leverage cloud resources, which can often be more cost-effective
than running computation-intensive applications solely on mobile devices. By
offloading processing tasks to the cloud, users can reduce costs associated
with local resource consumption.

(b) MCC reduces the running cost for computation-intensive applications:

 Explanation: This statement is true. MCC allows mobile applications to
utilize powerful cloud resources for computation-intensive tasks instead of
relying on the limited processing power of mobile devices. This can lead to
lower operational costs since cloud providers often have economies of scale
that can offer cheaper processing and storage solutions compared to local
resources.

(c) MCC decreases battery lifetime:

 Explanation: This statement is generally false. While using cloud services may
involve some network communication that can drain battery life, offloading
computation to the cloud can potentially preserve battery life on the mobile
device by reducing the need for heavy local processing. Thus, the overall
effect can be beneficial for battery longevity, especially for resource-intensive
applications.
QUESTION 5:

What is/are true about the execution of services in mobile cloud computing
(MCC)?

a) All services are executed in cloud


b) Some services are executed in mobile devices and some services are
executed in cloud

c) All computation intensive services are executed in mobile devices

d) None of these

Correct Answer: b

(a) All services are executed in cloud:

 Explanation: This statement is generally false. While many services in
mobile cloud computing (MCC) leverage cloud resources, not all services are
executed in the cloud. Some tasks may be better suited for execution on
mobile devices, particularly those that require immediate user interaction
or have low computational demands.

(b) Some services are executed in mobile devices and some services are
executed in cloud:
 Explanation: This statement is true. MCC typically involves a hybrid approach
where certain services and processes are executed on the mobile device
itself, while others, particularly those that are computationally intensive or
require extensive resources, are offloaded to the cloud. This approach helps
to balance the load and optimize performance.

(c) All computation-intensive services are executed in mobile devices:

 Explanation: This statement is false. Computation-intensive services are
often executed in the cloud to leverage its powerful computing
capabilities. Mobile devices typically have limited processing power and
resources, making them less suited for handling high-demand computation
tasks.
QUESTION 6:

What of the following is/are fog device(s)?

a) Cellular base stations


b) Network routers

c) WiFi Gateways

d) None of these

Correct Answer: a, b, c

(a) Cellular base stations:


 Explanation: Cellular base stations are indeed considered fog devices. They
facilitate the connection between mobile devices and the broader internet,
enabling local processing and low-latency communication. By handling data
closer to the edge of the network, they reduce the amount of data that needs
to be sent to the cloud, which is a key characteristic of fog computing.

(b) Network routers:

 Explanation: Network routers can also function as fog devices. They can
process data locally and make decisions about data routing based on the
traffic conditions, providing quick responses and reducing latency. They play a
critical role in managing data flow between local devices and cloud services.

(c) WiFi Gateways:

 Explanation: WiFi gateways are another example of fog devices. They
connect various devices to the internet and can perform local data
processing, making it possible to execute certain applications closer to
the end user. This enhances performance and can provide more efficient use
of bandwidth by filtering and managing data traffic.
QUESTION 7:

What is/are the advantage(s) of fog computing?

a) Reduction in data movement across the network resulting in reduced
congestion

b) Increase in data movement across the network resulting in increased
congestion

c) Serving the real-time applications

d) None of these

Correct Answer: a, c

(a) Reduction in data movement across the network resulting in reduced
congestion:

 Explanation: Fog computing minimizes the amount of data that needs to be
sent over the network by processing data closer to the source (i.e., at
the edge). This local processing reduces the volume of data transmitted to
the central cloud, alleviating network congestion and improving overall
network efficiency. By handling data locally, it decreases bandwidth
requirements and reduces latency.

(b) Increase in data movement across the network resulting in increased
congestion:

 Explanation: This statement is incorrect. Fog computing is designed
specifically to reduce data movement by processing data locally, thus
decreasing the amount of data sent to and from the cloud. Therefore, it
does not lead to increased congestion; rather, it mitigates it.

(c) Serving the real-time applications:


 Explanation: Fog computing is particularly advantageous for real-time
applications that require low latency and fast response times, such as IoT
devices, smart cities, and autonomous vehicles. By processing data at the
edge, fog computing enables immediate decision-making and actions, which
is crucial for applications that cannot tolerate delays.
QUESTION 8:

Consider the following statements:

Statement 1: In Geospatial Cloud, it is needed to integrate data from
heterogeneous back-end data services.

Statement 2: Data services can be inside and/or outside of the cloud
environment in Geospatial Cloud.

a) Statement 1 is Correct, but Statement 2 is Incorrect.

b) Statement 2 is Correct, but Statement 1 is Incorrect.

c) Both statements are Correct.

d) Both statements are Incorrect

Correct Answer: c

Statement 1: In Geospatial Cloud, it is needed to integrate data from
heterogeneous back-end data services.

 Explanation: This statement is correct. Geospatial Cloud platforms often
involve integrating data from various sources, which may include different
types of geographic information systems (GIS), databases, and other data
services. These sources can have different formats and structures,
requiring robust integration mechanisms to ensure that the data can be
utilized effectively for geospatial analysis and applications.

Statement 2: Data services can be inside and/or outside of the cloud environment
in Geospatial Cloud.

 Explanation: This statement is also correct. In a Geospatial Cloud setup, data services can reside both within the cloud infrastructure and in external
environments. For example, while some geospatial data may be stored and
processed within the cloud, other data might be accessed from on-premises
systems or external databases. This flexibility is crucial for organizations that
utilize both cloud-based and local data sources for their geospatial
applications.
QUESTION 9:

Which of the following statement(s) is/are FALSE about Fog Computing?

a) Fog nodes present near to the end-user

b) Fog computing enables real-time applications

c) Fog nodes’ response time is much higher than Cloud’s

d) Network routers, WiFi Gateways will not be capable of running applications

Correct Answer: c, d

Statement a: Fog nodes present near to the end-user.


 Explanation: This statement is TRUE. Fog computing architecture is designed
to bring computation, storage, and networking services closer to the end-
users or devices. This proximity reduces latency and improves the overall
responsiveness of applications.

Statement b: Fog computing enables real-time applications.


 Explanation: This statement is TRUE. One of the main advantages of fog
computing is its ability to support real-time applications. By processing data
closer to where it is generated (at the edge of the network), fog computing can
facilitate quicker decision-making and response times, making it ideal for
applications such as IoT, smart cities, and autonomous vehicles.

Statement c: Fog nodes’ response time is much higher than Cloud’s.

 Explanation: This statement is FALSE. Fog nodes typically have a lower response time compared to cloud computing because they are located closer
to the data source and the end-users. This reduced distance allows for
quicker data processing and response, which is a primary benefit of fog
computing.

Statement d: Network routers, WiFi Gateways will not be capable of running applications.

 Explanation: This statement is FALSE. Many network routers and WiFi gateways are now designed with the capability to run applications and
perform computations. This is a fundamental aspect of fog computing, where
these devices serve as fog nodes, enabling localized processing and
application execution.
QUESTION 10:

Which of the following is/are true about Geospatial Cloud Model?


a) It integrates data from homogeneous back-end data services

b) Data services can be inside and/or outside the cloud environment

c) Data services inside cloud can be run through SaaS service model

d) None of the above


Correct Answer: b

Statement a: It integrates data from homogeneous back-end data services.

 Explanation: This statement is FALSE. The Geospatial Cloud Model typically integrates data from heterogeneous back-end data services, which means it
can handle various types of data from different sources and formats, rather
than just similar (homogeneous) data services.

Statement b: Data services can be inside and/or outside the cloud environment.
 Explanation: This statement is TRUE. In the Geospatial Cloud Model, data
services can exist both within the cloud and outside of it. This flexibility allows
for integration of external data sources with internal cloud services, enhancing
the model's versatility in data management and processing.

Statement c: Data services inside the cloud can be run through SaaS service
model.

 Explanation: This statement is generally TRUE but is not the primary focus of
the Geospatial Cloud Model itself. While SaaS (Software as a Service) can be
utilized to provide data services, the key aspect of the Geospatial Cloud
Model lies more in its ability to integrate various data services, regardless of
the specific service model used.

Week 8

QUESTION 1:

An IoT platform’s basic building block(s) is/are (choose the correct option(s)).

a. Gateway

b. Images

c. Network and Cloud

d. Containers

Correct Answer: a, c

Option a: Gateway

 Explanation: Gateways are critical components of an IoT platform. They serve as the intermediary between IoT devices and the cloud or network, facilitating
communication, data processing, and protocol translation. They help in
managing the data traffic and ensure that data from various IoT devices can
be aggregated and sent to the cloud for processing.

Option b: Images

 Explanation: While images can be part of the data that IoT devices generate
(especially in applications like surveillance or image recognition), they are not
considered a fundamental building block of an IoT platform. Instead, they are
just a type of data that might be transmitted and processed.

Option c: Network and Cloud


 Explanation: The network and cloud infrastructure are essential components
of an IoT platform. The network provides connectivity for IoT devices to
communicate with each other and with cloud services, while the cloud serves
as a platform for data storage, processing, and analysis. This combination
enables the scalability and functionality required for IoT applications.

Option d: Containers

 Explanation: Containers can be used in IoT applications for deploying software and services but are not considered a basic building block of an IoT
platform. They are a technology used for application virtualization and can aid
in development and deployment but are not foundational elements like
gateways or network/cloud infrastructure.

QUESTION 2:
__________ is used to delete a local image.

a. Docker rm

b. Docker rmi

c. Docker rvi

d. Docker push

Correct Answer: b

Option a: Docker rm

 Explanation: This command is used to remove one or more stopped containers, not images. It does not affect images directly.

Option b: Docker rmi


 Explanation: The docker rmi command is specifically used to remove one or
more images from the local Docker image repository. This command deletes
the specified images, freeing up space.

Option c: Docker rvi


 Explanation: This is not a valid Docker command. There is no docker rvi
command in Docker's command set.

Option d: Docker push

 Explanation: The docker push command is used to upload images to a Docker registry (such as Docker Hub). It does not delete images.
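The difference between these commands can be seen in a short shell session (the image and container names below are placeholders for illustration; these commands require a running Docker daemon):

```shell
docker images                 # list local images
docker rmi myapp:1.0          # delete the local image "myapp:1.0"

docker ps -a                  # list containers, including stopped ones
docker rm my_container        # remove the stopped container (not its image)

docker push myuser/myapp:1.0  # upload an image to a registry such as Docker Hub
```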
QUESTION 3:
Docker Hub is a registry used to host various docker images.

a) True

b) False

Correct Answer: (a)

Docker Hub is indeed a registry used to host a variety of Docker images. It serves as
a central repository where users can share, store, and manage their Docker
container images. Users can pull images from Docker Hub to use in their local
environments or push their own images to share with others.
QUESTION 4:

__________ enables different networks, spread over a huge geographical area, to connect together and be employed simultaneously by multiple users on demand.


a) Serverless

b) IoT Cloud

c) Sensor Cloud

d) Green Cloud

Correct Answer: c

A Sensor-Cloud enables sensor networks that are spread across a wide geographical area to be connected together and used simultaneously by multiple users on demand. In contrast:
 Serverless computing focuses on running code in response to events without
managing servers.
 IoT Cloud generally refers to cloud platforms specifically designed for Internet
of Things (IoT) devices and services.

 Green Cloud emphasizes energy-efficient cloud computing practices.

QUESTION 5:

Virtual machines get virtual access to host resources through a ________

a) Containers

b) Hypervisor

c) Both a and b

d) Images
Correct Answer: b
 Hypervisor: A hypervisor is software that creates and runs virtual
machines (VMs). It allows multiple VMs to share the same physical
resources of a host machine while providing each VM with its own
isolated environment. The hypervisor manages the allocation of the
host’s CPU, memory, and storage resources to the VMs, enabling them
to operate as if they were independent physical machines.

Key Points:

 Types of Hypervisors:

o Type 1 (Bare-metal): Runs directly on the hardware of the host machine (e.g., VMware ESXi, Microsoft Hyper-V).

o Type 2 (Hosted): Runs on top of a host operating system (e.g., VMware Workstation, Oracle VirtualBox).

 Containers: While containers also provide virtualization, they do so at the OS level, sharing the host OS kernel instead of virtualizing the hardware. Therefore, they do not get virtual access to host resources in the same way VMs do through a hypervisor.

 Images: Images are templates used to create VMs or containers, but they do not manage access to host resources.

QUESTION 6:

Vehicles providing their networking and data processing capabilities to other vehicles through the cloud comes under which service of IoT-based Vehicular Data Clouds?

a) SaaS
b) PaaS

c) IaaS

d) None of these

Correct Answer: c

 IaaS (Infrastructure as a Service): IaaS provides virtualized computing resources over the internet. In the context of IoT-based vehicular data clouds,
vehicles that provide networking and data processing capabilities can be seen
as offering infrastructure resources to other vehicles. This allows other
vehicles to utilize those capabilities without needing to have their own
hardware or processing resources.

Key Points:
 SaaS (Software as a Service): This model delivers software applications over
the internet, usually on a subscription basis. It does not specifically relate to
networking or data processing capabilities provided by vehicles.

 PaaS (Platform as a Service): PaaS provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure. It is more about software development and application hosting rather than directly providing networking and data processing capabilities.

 IaaS: Vehicles offering their resources for networking and data processing
essentially provide an infrastructure layer that can be accessed and utilized by
other vehicles, fitting the IaaS model.
QUESTION 7:

Sensor data can be easily shared by different groups of users without any
extra effort/ measure.
a. True

b. False

Correct Answer: b

Detailed Solution: One of the limitations of Sensor Networks is “Sensor data can not be easily shared by different groups of users.” Hence, the correct option is (b). Lecture 38, 9:32 min.
QUESTION 8:

Container is a compile time instance of an image.

a) True

b) False

Correct Answer: (b)

Detailed Solution: A container is a run-time instance of an image.

QUESTION 9:

In the context of Green Cloud Computing, the Power Usage Effectiveness is defined as

a. Power Delivered / Overall Power

b. Overall Power / Power Delivered


c. Overall Power * Power Delivered
d. None of these

Correct Answer: b

Detailed Solution: In the context of Green Cloud Computing, the Power Usage Effectiveness is defined as Overall Power / Power Delivered. So, the correct option is (b). Lecture 37, 28:45 min.
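Since PUE is just this ratio, it is straightforward to compute; the figures below are made up purely for illustration:

```python
def pue(overall_power_kw: float, it_power_kw: float) -> float:
    """Power Usage Effectiveness = overall facility power / power delivered
    to IT equipment. A value close to 1.0 means nearly all power reaches
    the IT equipment; larger values mean more overhead (cooling, lighting)."""
    return overall_power_kw / it_power_kw

# Example: a facility drawing 1800 kW overall while delivering 1200 kW to IT:
print(pue(1800, 1200))  # 1.5
```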

QUESTION 10:

Statement 1: Sensor-Cloud proxy exposes sensor resources as cloud services.

Statement 2: Sensor network is still managed from the Sensor-Cloud Interface via Sensor Network Proxy.

a. Statement 1 is True and Statement 2 is False

b. Statement 2 is True and Statement 1 is False

c. Both statements are True

d. Both statements are False

Correct Answer: c
Detailed Solution: Sensor cloud proxy exposes sensor resources as cloud
services.
Sensor network is still managed from the Sensor-Cloud Interface via Sensor
Network Proxy.

Lecture 38, 21:43 min.- 22:09 min.

Week 9

QUESTION 1:

Which of the following statements best describes fog computing?

a) Fog computing refers to a model where data, processing, and applications are concentrated in the cloud rather than at the network edge.

b) Fog computing is a term introduced by Cisco Systems to describe a model that centralizes data processing in the cloud to manage wireless data transfer to distributed IoT devices.

c) Fog computing is a model where data, processing, and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud.

d) The vision of fog computing is to enable applications on a few connected devices to run directly in the cloud without interaction at the network edge.


Correct Answer: (c)

 Fog Computing Definition: Fog computing is a decentralized computing model that extends cloud computing capabilities to the edge of the network. In this model, data processing and storage occur closer to where data is generated (i.e., at the edge) rather than relying solely on centralized cloud data centers. This reduces latency, improves response times, and enhances the efficiency of data handling for IoT devices.

Key Points of Other Options:

 (a): This statement incorrectly suggests that fog computing concentrates resources in the cloud, which is the opposite of what fog computing aims to achieve.

 (b): While Cisco did introduce the term, this option also misrepresents fog
computing as being centralized in the cloud, which contradicts its fundamental
principle of edge computing.
 (d): This statement is incorrect because fog computing specifically focuses on
utilizing the network edge for processing rather than relying entirely on the
cloud, emphasizing the importance of interaction at the edge for applications.
QUESTION 2:
Which of the following challenges is most effectively addressed by using fog and edge computing instead of a "cloud-only" approach for IoT applications?

a) Resource management issues related to workload balance and task scheduling in cloud-based environments.

b) The inefficiency of processing time-sensitive applications directly in the cloud due to high latency and large data bandwidth requirements.

c) The need for improved security and privacy features in cloud-based systems, which are not addressed by fog and edge computing.

d) The difficulty in integrating multiple cloud services and platforms for comprehensive IoT data management.

Correct Answer: (b)

Explanation:

 Fog and Edge Computing Benefits: Fog and edge computing bring data
processing closer to the source (i.e., the IoT devices). This proximity
significantly reduces latency, which is crucial for time-sensitive applications,
such as real-time analytics, autonomous vehicles, and smart manufacturing.
By processing data locally or at the edge of the network, these models
minimize the delays associated with transmitting large volumes of data to
centralized cloud servers.

Key Points of Other Options:

 (a): While resource management and workload balance are important challenges, they can also be addressed within a cloud-only model through advanced scheduling and resource allocation strategies. Fog and edge computing primarily aim to enhance real-time data processing rather than solely managing resources.

 (c): Security and privacy are important considerations in both cloud and edge
computing environments. While fog and edge computing can provide some
additional security benefits (e.g., local data processing), they are not
inherently designed to address these concerns better than cloud systems.

 (d): Integrating multiple cloud services and platforms is a challenge that can
exist regardless of the architecture. Fog and edge computing can facilitate
better data management, but they do not inherently solve the complexity of
integrating different cloud services.

QUESTION 3:

Which of the following correctly describes a classification of resource management architectures in fog/edge computing?
a) Data Flow

b) Control

c) Tenancy

d) Infrastructure

Correct Answer: (c)

Detailed Solution: Tenancy is correctly described as the support for hosting multiple
applications
or a single application on an edge node.

QUESTION 4:

Which of the following characteristics is NOT typically associated with fog computing infrastructure?
a) Location awareness and low latency

b) Better bandwidth utilization

c) High computational power concentrated solely in the Cloud

d) Support for mobility

Correct Answer: (c)

Here’s a breakdown of each option and why (c) is the correct choice:

 (a) Location awareness and low latency: This characteristic is typically associated with fog computing. Fog computing is designed to process data closer to where it is generated (at the edge), which enhances location awareness and reduces latency compared to processing everything in the cloud.

 (b) Better bandwidth utilization: This is another characteristic of fog computing. By processing data closer to the source, fog computing reduces the amount of data that needs to be sent to the cloud, thus optimizing bandwidth utilization.

 (c) High computational power concentrated solely in the Cloud: This statement is NOT characteristic of fog computing. Fog computing aims to distribute computational power across edge devices and local nodes rather than concentrating it solely in the cloud. This distribution helps in reducing latency and improving response times for time-sensitive applications.

 (d) Support for mobility: Fog computing is well-suited for supporting mobility,
as it allows devices to process data locally and interact with other devices in
real-time, which is essential for mobile applications and services.
QUESTION 5:

In the fog computing paradigm, which of the following accurately describes the relationship between local and global analyses?

a) Local analyses are performed exclusively in the Cloud, while global analyses are done at the edge devices.

b) Local and global analyses are performed only in the Cloud data centers.

c) Local analyses are performed at the edge devices, and global analyses can be either performed at the edge or forwarded to the Cloud.

d) Local analyses are conducted by IoT devices, and global analyses are not necessary in fog computing.

Correct Answer: (c)

Detailed Solution: Local analyses in fog computing are performed at the edge devices to ensure low latency and quick processing. Global analyses can be either performed at the edge or forwarded to the Cloud for further processing, depending on the system's requirements and resource availability. Local and global analyses are not solely performed in the Cloud; they are distributed based on the needs of the application and infrastructure.

QUESTION 6:

What is the primary goal of the application placement problem in the Cloud-Fog-Edge framework?
a) To map all applications onto the Cloud servers to maximize computational
power.

b) To find available resources in the network that satisfy application requirements, respect constraints, and optimize the objective, such as minimizing energy consumption.

c) To place all application components on edge devices to ensure low latency.

d) To disregard resource capacities and focus solely on network constraints.

Correct Answer: (b)

Detailed Solution: In the Cloud-Fog-Edge framework, application placement involves mapping components onto infrastructure while considering resource (CPU, RAM), network (latency, bandwidth), and application constraints (locality, delay sensitivity). The goal is to meet these constraints and optimize objectives like energy consumption. Application constraints, such as locality requirements, ensure specific services run in designated locations, making them key factors in the placement process.
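The mapping described above can be sketched as a toy first-fit placement. All names, node tiers, and capacity numbers below are hypothetical, and a real placement algorithm would also model latency, bandwidth, and an optimization objective:

```python
def place(components, nodes):
    """Greedy first-fit: return a {component: node} mapping that respects
    CPU/RAM capacities and locality constraints, or None if infeasible."""
    free = {n["name"]: {"cpu": n["cpu"], "ram": n["ram"]} for n in nodes}
    tier = {n["name"]: n["tier"] for n in nodes}
    mapping = {}
    for c in components:
        for n in nodes:
            name = n["name"]
            fits = free[name]["cpu"] >= c["cpu"] and free[name]["ram"] >= c["ram"]
            allowed = c.get("tier") in (None, tier[name])  # locality constraint
            if fits and allowed:
                free[name]["cpu"] -= c["cpu"]
                free[name]["ram"] -= c["ram"]
                mapping[c["name"]] = name
                break
        else:
            return None  # some component cannot satisfy its constraints
    return mapping

nodes = [{"name": "edge1", "tier": "edge", "cpu": 2, "ram": 2},
         {"name": "cloud1", "tier": "cloud", "cpu": 16, "ram": 64}]
components = [{"name": "sensor-filter", "cpu": 1, "ram": 1, "tier": "edge"},
              {"name": "analytics", "cpu": 8, "ram": 32}]
print(place(components, nodes))  # {'sensor-filter': 'edge1', 'analytics': 'cloud1'}
```

Here the locality constraint pins "sensor-filter" to an edge node, while the unconstrained "analytics" component lands wherever capacity allows.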

QUESTION 7:

Which of the following is an example of an application constraint in the application placement problem on the Cloud-Fog-Edge framework?

a) Finite capabilities of CPU and RAM on infrastructure nodes.

b) Network latency and bandwidth limitations.

c) Locality requirements restricting certain services’ executions to specific locations.

d) Availability of storage resources in the Fog nodes.


Correct Answer: (c)
Detailed Solution: Locality requirements are application constraints that restrict services to specific locations, making them key in application placement. In contrast, Option A deals with resource constraints, Option B with network constraints, and Option D with resource availability, none of which are application-specific constraints.

QUESTION 8:

What is the primary purpose of offloading in the context of edge computing?

a) To move all data processing from edge nodes to the cloud.

b) To augment computing requirements by moving servers, applications, and associated data closer to the network edge.
c) To reduce the number of user devices connected to the network.

d) To centralize all computational resources in the cloud for better performance.

Correct Answer: (b)

Detailed Solution: This question highlights the key purpose of offloading, which involves moving servers, applications, and data closer to the network edge to enhance computing capabilities and bring services closer to the data source, improving efficiency and reducing latency.

QUESTION 9:

What is the primary goal of a cloud federation?

a) To centralize all cloud services under a single provider.

b) To deploy and manage multiple cloud services to meet business needs by collaborating among different Cloud Service Providers (CSPs).

c) To limit the geographical reach of cloud services.


d) To reduce the number of cloud service providers globally.
Correct Answer: (b)

Detailed Solution: Cloud federation's goal is to efficiently manage and deploy cloud services by collaborating among multiple CSPs. This enhances capacity utilization, interoperability, and service offerings, unlike centralizing services under one provider.

QUESTION 10:

Which of the following is a key benefit of forming a cloud federation?

a) Centralized control of global cloud services.

b) Increased resource utilization and load balancing across multiple Cloud Service Providers (CSPs).

c) Reduced collaboration among Cloud Service Providers.

d) Limiting the geographical footprint of Cloud Service Providers.

Correct Answer: (b)

Detailed Solution: A key benefit of cloud federation is maximizing resource utilization and achieving effective load balancing across multiple CSPs, improving efficiency and reliability through shared resources.

Week 10

QUESTION 1:

Why is VM migration important in cloud computing environments?

a) To centralize all virtual machines on a single server.


b) To efficiently distribute VM load across servers, allowing for system maintenance and operational efficiency.
c) To permanently shut down under-utilized servers.
d) To increase the number of servers in a data center.

Correct Answer: (b)

Detailed Solution: VM migration is crucial in cloud computing for balancing the workload across servers, enabling maintenance without downtime, and managing operational parameters like power consumption. It allows for dynamic allocation of resources to ensure efficient operation and maintain service quality.

QUESTION 2:

What is the difference between cold (non-live) and hot (live) VM migration?

a) Cold migration turns off the VM during migration, while hot migration keeps the VM running.

b) Cold migration keeps the VM running during migration, while hot migration turns off the VM.

c) Both cold and hot migration suspend the VM during the process.

d) Cold migration requires more resources than hot migration.


Correct Answer: (a)

Detailed Solution: Cold (non-live) migration involves turning off or suspending the VM during the migration process, whereas hot (live) migration allows the VM to continue running and providing services while being migrated.

QUESTION 3:

Which of the following approaches are commonly used in live VM migration?

a) Cold-copy and Hot-copy.


b) Pre-copy and Post-copy.
c) Suspend-copy and Resume-copy.

d) Start-copy and End-copy.

Correct Answer: (b)

Detailed Solution: In live VM migration, the two main approaches are pre-copy, where the VM's memory pages are copied to the destination before the VM is transferred, and post-copy, where the VM is first transferred to the destination, and then its memory pages are copied over as needed. These methods help minimize downtime during the migration process.
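The pre-copy approach can be illustrated with a toy simulation (all numbers here are invented, not taken from any real hypervisor): each round re-sends the pages dirtied during the previous round, and once the dirty set is small enough the VM is stopped and the remainder is copied in the stop-and-copy phase.

```python
def pre_copy_rounds(total_pages, dirty_rate, stop_threshold, max_rounds=30):
    """Simulate pre-copy migration.

    Returns (rounds, pages_sent_while_running, pages_left_for_stop_and_copy).
    dirty_rate is the fraction of just-copied pages dirtied again per round.
    """
    to_send = total_pages  # round 1 copies every page
    sent = 0
    rounds = 0
    while to_send > stop_threshold and rounds < max_rounds:
        sent += to_send
        rounds += 1
        to_send = int(to_send * dirty_rate)  # pages dirtied during this round
    return rounds, sent, to_send

# 10,000 pages, 20% re-dirtied per round, stop once <= 50 pages remain:
print(pre_copy_rounds(10000, 0.2, 50))  # (4, 12480, 16)
```

Note the trade-off the simulation makes visible: pre-copy sends more total data (12,480 pages for a 10,000-page VM) in exchange for a very short stop-and-copy pause (16 pages).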

QUESTION 4:

Which of the following is a primary concern during VM migration to ensure service continuity?

a) Maximizing downtime and total migration time

b) Minimizing both downtime and total migration time, and avoiding unnecessary disruption of active services

c) Allowing resource contention with the migrating OS to speed up the process


d) Ensuring that the migration process takes as long as possible to ensure
stability

Correct Answer: (b)

Detailed Solution: During VM migration, it's crucial to minimize both the downtime (time services are unavailable) and the total migration time (time to complete the migration). Additionally, the process should avoid disrupting active services by managing resource contention effectively.
QUESTION 5:

Which phase of live VM migration involves suspending the execution of the VM at the source and copying the remaining dirty pages and CPU state to the destination?

a) Pre-Copy Phase

b) Post-Copy Phase

c) Stop-and-Copy Phase

d) On-Demand Copy Phase

Correct Answer: (c)

Detailed Solution: In the Stop-and-Copy Phase of live VM migration, the VM's execution is suspended at the source, and the remaining dirty pages along with the CPU state are copied to the destination before resuming the VM.

QUESTION 6:

What is the primary advantage of the post-copy live memory migration strategy?

a) It avoids copying any memory pages from the source to the destination.

b) It ensures that memory pages are only copied on demand, potentially reducing unnecessary data transfer.

c) It copies all memory pages before stopping the VM at the source.

d) It immediately restarts the VM at the source after copying the CPU state.

Correct Answer: (b)

Detailed Solution: Post-copy live memory migration copies memory pages only when they are needed by the VM at the destination, reducing the amount of unnecessary data transfer compared to other strategies.
QUESTION 7:

Which of the following is NOT a requirement for live VM migration?

a) Load balancing

b) Fault tolerance

c) Power management

d) Data replication

Correct Answer: (d)

Detailed Solution: Live VM migration involves requirements such as load balancing, fault tolerance, power management, and resource sharing to ensure seamless operation and system maintenance. Data replication is not a specific requirement for live VM migration.

QUESTION 8:

In serial VM migration, what happens to the remaining VMs when the first VM enters the stop-and-copy phase?

a) They continue to provide services

b) They are suspended to prevent memory dirtying

c) They start their pre-copy cycle

d) They are migrated simultaneously


Correct Answer: (b)

Detailed Solution: In serial VM migration, when the first VM enters the stop-and-copy phase, the remaining VMs are suspended to prevent them from dirtying memory, ensuring a smooth migration process.
QUESTION 9:

What is a key advantage of using containers in cloud computing?


a) Containers virtualize the hardware to run multiple operating systems
b) Containers are heavyweight virtual machines with extensive resource
requirements

c) Containers package code and dependencies, allowing applications to run consistently across different environments

d) Containers require specific hardware configurations to function properly

Correct Answer: (c)

Detailed Solution: Containers are lightweight virtualization techniques that package application code along with all its dependencies, enabling consistent performance across various computing environments.

QUESTION 10:

What is the main function of a Docker container image?

a) To create a virtual machine with its own operating system

b) To package an application along with its code, runtime, system tools, libraries, and settings

c) To manage physical hardware resources for applications

d) To execute applications directly on the host operating system without isolation

Correct Answer: (b)

Detailed Solution: A Docker container image is a lightweight, standalone package that includes everything needed to run an application, such as code, runtime, system tools, libraries, and settings, ensuring consistent operation across different environments.
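As an illustration, a minimal (hypothetical) Dockerfile shows how an image bundles the runtime, libraries, and code into one package, so the resulting container behaves identically wherever it runs:

```dockerfile
# Hypothetical image definition for a small Python service
FROM python:3.12-slim            # base image provides the runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # system tools/libraries
COPY . .                          # application code
CMD ["python", "app.py"]          # default command when the container starts
```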

Week 11

QUESTION 1:
Which of the following best describes the key features of dew computing?
a. Independence and collaboration

b. Independence and centralization

c. Collaboration and decentralization

d. Connectivity and scalability

Correct Answer: a

Detailed Solution: The correct answer is a) Independence and collaboration because these are the core principles of dew computing, allowing local devices to operate autonomously while still connecting to the cloud for data synchronization when needed.


QUESTION 2:
Which of the following best describes serverless computing?

a. Developers manage scalability and orchestration of containers.


b. Developers run their logic as functions, and the cloud provider manages
scalability

c. Developers handle all containerization and runtime environments.

d. Developers run their applications directly on dedicated servers.

Correct Answer: b

Detailed Solution: The correct answer is b) Developers run their logic as functions, and the cloud provider manages scalability, because serverless computing allows developers to submit their code as functions without worrying about infrastructure. The cloud provider automatically handles the scaling and orchestration, enabling efficient parallel execution of tasks without the need for manual container management.

QUESTION 3:

Which of the following best describes Function-as-a-Service (FaaS)?


a) Functions run continuously and scale vertically.
b) Functions are triggered by events and executed in isolated environments.

c) Functions are always active and manage their own scaling.

d) Functions are large, continuously running parts of an application.

Correct Answer: (b)

Detailed Solution: The correct answer is b) Functions are triggered by events and executed in isolated environments because Function-as-a-Service (FaaS) is an event-driven model where functions are only activated in response to specific triggers, such as client requests or external events. These functions run in isolated environments provided by the FaaS platform, which also handles the horizontal scaling based on the volume of incoming events. Unlike traditional applications, FaaS functions are not constantly active, making them efficient for handling specific tasks within a broader application.

QUESTION 4:

How does Serverless Computing differ from traditional Cloud Computing?

a) It focuses on system administrators and exposes server management.

b) It targets programmers by abstracting server management and simplifying development.

c) It requires developers to handle all operational responsibilities.

d) It makes cloud software development more complicated.

Correct Answer: b

Detailed Solution: The correct answer is b) It targets programmers by abstracting server management and simplifying development because serverless computing removes the need for developers to manage servers, allowing them to focus on writing code. This shift makes cloud development easier and more accessible for programmers, while the cloud provider handles the operational responsibilities.

QUESTION 5:

What is a key benefit of using AWS Lambda for running code?

a) You need to manage AWS resources and scaling.

b) You have to focus on operating system management and provisioning.

c) You upload code and AWS Lambda handles execution and scaling based on
events.

d) You must manually handle event sources and log streams.


Correct Answer: C

Detailed Solution: AWS Lambda allows you to focus on writing code while it manages execution, scaling, and resource provisioning based on event triggers, simplifying cloud computing tasks.
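In this model, the developer supplies only a handler function; Lambda invokes it once per event and scales instances automatically. A minimal sketch in Python (the event shape and return value here are illustrative, not a fixed API):

```python
# Minimal AWS Lambda-style handler: Lambda calls this function with the
# triggering event and a context object; everything else (provisioning,
# scaling, teardown) is handled by the platform.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the same function can be exercised by calling it directly:
print(lambda_handler({"name": "cloud"}, None))  # {'statusCode': 200, 'body': 'Hello, cloud!'}
```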

QUESTION 6:
What does Google Cloud Functions primarily handle in terms of execution
environment?
a) Server-based environments with manual provisioning

b) Fully managed environments with automatic scaling

c) Local environments requiring extensive server management

d) Dedicated virtual machines for each function

Correct Answer: b

Detailed Solution: Google Cloud Functions operates in a fully managed environment, meaning developers do not need to provision or manage servers, and the platform automatically handles scaling.
QUESTION 7:

What is the primary focus of Azure Functions for developers?

a. Managing and maintaining servers

b. Writing code and configuring functions

c. Handling infrastructure scaling manually

d. Deploying compiled languages only

Correct Answer: b

Detailed Solution: Azure Functions allows developers to focus on writing code and configuring functions while it manages server maintenance and scaling.

QUESTION 8:

What is one major challenge of using renewable energy sources in cloud datacenters?

a) High capital costs and unpredictability

b) Increased server maintenance requirements

c) Higher energy consumption from non-renewable sources

d) Decreased system reliability


Correct Answer: (a)

Detailed Solution: Renewable energy sources face challenges such as high initial costs and unpredictability in supply, which can impact their implementation in cloud datacenters.
QUESTION 9:

What is the primary focus of the power manager component in a sustainable cloud computing datacenter?

a. Controlling the temperature of the datacenter

b. Managing the power supply from renewable and grid sources


c. Handling virtual machine migrations
d. Scheduling workloads to balance energy use

Correct Answer: b

Detailed Solution: The power manager in a sustainable cloud computing datacenter is primarily responsible for managing the power supply, including balancing energy sources from renewables and grid electricity.
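The power manager's core decision can be sketched as a simple allocation rule: satisfy the datacenter load from renewable supply first, and draw from the grid only for the shortfall. The function and field names below are hypothetical, and the kW figures are illustrative only.

```python
# Illustrative sketch of a power manager's balancing step: prefer
# renewable supply, top up from the grid. Names and numbers are
# hypothetical, not from any real datacenter controller.

def plan_power(load_kw: float, renewable_kw: float) -> dict:
    from_renewable = min(load_kw, renewable_kw)   # use renewables first
    from_grid = load_kw - from_renewable          # grid covers the gap
    surplus = max(0.0, renewable_kw - load_kw)    # excess could be stored
    return {"renewable": from_renewable, "grid": from_grid,
            "surplus": surplus}

plan = plan_power(load_kw=120.0, renewable_kw=80.0)
# 80 kW comes from renewables, the remaining 40 kW from the grid.
```

A real power manager would also account for battery storage, renewable forecasts, and grid pricing, which is where the "unpredictability" challenge from Question 8 enters.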

QUESTION 10:

Which component of sustainable cloud computing aims to balance the temperature in cloud datacenters to enhance energy efficiency?

a. Application Design

b. Capacity Planning

c. Cooling Management

d. Renewable Energy

Correct Answer: c

Detailed Solution: Cooling Management focuses on maintaining the temperature within cloud datacenters to ensure energy efficiency, as excessive heat can increase energy consumption and affect performance.

Week 12

QUESTION 1:

According to the given definition, which of the following statement(s) is (are) true about dew computing?

a. Dew computing is a cloud computing paradigm where all computing is done on the cloud without any reliance on on-premises computers.

b. Dew computing is a paradigm where on-premises computers provide functionality that is dependent on cloud services.

c. Dew computing is a paradigm where on-premises computers and cloud services are completely isolated from each other and do not collaborate in any way.

d. Dew computing is a paradigm where on-premises computers provide functionality that is independent of cloud services and is also collaborative with cloud services.

Answer: d

Detailed Solution: According to the definition given, dew computing is a paradigm where on-premises computers provide functionality that is independent of cloud services and is also collaborative with cloud services.

QUESTION 2:

What are the different aspects of CPS?

a. Cyber, physical, and communication only


b. Cyber, dynamics, and safety only

c. Cyber, physical, computation, dynamics, communication, security, and safety
d. Cyber, physical, and computation only

Answer: c

QUESTION 3:

What is the benefit of 5G's ability to scale down in data rates, power, and mobility for IoT devices?

a. It allows for faster data rates and lower latency


b. It provides extremely lean and low-cost connectivity solutions
c. It enables immersive experiences like VR and AR

d. It provides ultra-reliable, low-latency links for mission-critical communications.
Answer: b

Detailed Solution: 5G is meant to seamlessly connect a massive number of embedded sensors in virtually everything through the ability to scale down in data rates, power, and mobility, providing extremely lean and low-cost connectivity solutions.

QUESTION 4:

Fog-Edge computing leads to increased network congestion

a. True

b. False

Correct Answer: b

Detailed Solution: Fog-Edge computing leads to less network congestion, since data is processed closer to where it is generated instead of all traffic traversing the network to centralized cloud datacenters.

QUESTION 5:
What is(are) the key feature(s) of Mobile Cloud computing for 5G networks?

a. Sharing resources for mobile applications


b. Improved reliability due to data storage in the cloud

c. Increased resource consumption by mobile applications

d. None of these

Correct Answer: a and b

Detailed Solution: Key features of MCC for 5G networks include sharing resources for mobile applications and improved reliability, as data is backed up and stored in the cloud.

QUESTION 6:

Mobility Analytics utilizes the cloud platform for computation and storage.
A) True

B) False

Correct Answer: A

Detailed Solution: Mobility Analytics utilizes a Cloud platform for computation and storage.

QUESTION 7:

In which computing environment is latency fixed due to the location of application modules at the Area Gateway?

a. Fog computing

b. Cloud computing
c. Serverless Computing

d. None of the above

Correct Answer: a

(a) Fog computing: In fog computing, data processing occurs closer to the edge of
the network, often at the Area Gateway, which is located near the end-users or
devices. This proximity reduces the distance data must travel, resulting in lower
latency for applications. The latency in this environment can be considered relatively
fixed due to the consistent location of application modules at the gateway.
(b) Cloud computing: Cloud computing typically involves centralized data centers
that can be far from the end-users, leading to higher latency due to the increased
distance that data must travel. Latency in cloud environments can vary based on
many factors, including network conditions and geographical distance from the data
center.

(c) Serverless Computing: In serverless computing, the execution environment is managed by cloud providers, and functions are deployed in a way that is abstracted from the underlying infrastructure. While serverless computing can optimize for reduced latency, it does not inherently fix latency due to the potential for functions to run in various locations, including centralized cloud environments.
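The fixed-versus-variable latency contrast above can be illustrated with a toy model: a fog deployment has a short, constant path to the nearby Area Gateway, while a distant cloud datacenter has a longer path whose latency varies with network conditions. All numbers below are illustrative assumptions, not measurements.

```python
# Toy latency model contrasting fog and cloud deployments.
# The millisecond values are illustrative assumptions only.

import random

def fog_latency_ms() -> float:
    # Application modules sit at the Area Gateway, so the path length
    # (and hence the latency) stays essentially fixed.
    return 5.0

def cloud_latency_ms() -> float:
    # A distant datacenter: a larger base latency plus variation from
    # network conditions and routing.
    return 60.0 + random.uniform(0.0, 40.0)

samples = [cloud_latency_ms() for _ in range(5)]
```

Running the cloud model several times yields different values in each call, while the fog model always returns the same figure, mirroring the reasoning in options (a) and (b).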
QUESTION 8:
Resource-constrained low-latency devices drive the need for

a. Heterogeneous and distributed computing architectures

b. Homogeneous and distributed computing architectures

c. Heterogeneous and parallel computing architectures

d. Homogeneous and parallel computing architectures

Correct Option: a

Detailed Solution: On-premises and edge data centers will continue to close the gap between resource-constrained low-latency devices and distant cloud data centers, driving the need for heterogeneous and distributed computing architectures.

(a) Heterogeneous and distributed computing architectures: This option addresses the need for diverse types of computing resources (heterogeneous) that can work together across various locations (distributed). Resource-constrained devices often have limitations in processing power and energy, so leveraging a mix of different types of resources (like CPUs, GPUs, and specialized processors) that are located closer to these devices (in edge data centers) is essential. This architecture helps minimize latency by processing data nearer to where it is generated while efficiently utilizing various resources.

(b) Homogeneous and distributed computing architectures: While distributed architectures are beneficial for addressing latency issues, using homogeneous resources (all the same type) may not be efficient for resource-constrained devices, which benefit from a variety of computing capabilities. This option lacks the flexibility needed to manage diverse workloads effectively.
(c) Heterogeneous and parallel computing architectures: Although parallel
computing can enhance performance, it does not directly address the need for
distribution across various geographical locations. Heterogeneous computing can
indeed operate in parallel, but the emphasis in the context of resource-constrained
devices is more on distribution than just parallel execution.

(d) Homogeneous and parallel computing architectures: Similar to option (b), this
choice restricts the architecture to using only one type of resource, which may not
effectively meet the demands of various resource-constrained devices that require
adaptability and efficiency.
QUESTION 9:
Customized wearable devices for collecting health parameters are the best examples of

a. IoHT
b. Fog device

c. Fog-Cloud interfaced.

d. Cloud-Fog-Edge-IoHT

Correct Answer: d

(a) IoHT (Internet of Health Things):

 IoHT refers to the network of connected devices specifically designed to monitor and manage health-related parameters. Customized wearable devices, such as fitness trackers and health monitors, fall under this category as they collect vital health data (like heart rate, steps taken, sleep patterns, etc.) and transmit this information for analysis. They represent a significant trend in health technology, integrating IoT concepts into healthcare for better patient monitoring and management.

(b) Fog device:

 Fog devices refer to computing resources that are positioned between cloud
data centers and edge devices. While wearables can be part of a fog
architecture for processing data locally, the term "fog device" is too generic
and does not specifically capture the health aspect of wearables.

(c) Fog-Cloud interfaced:

 This option refers to a setup where fog computing and cloud computing
interact. While wearable devices may utilize such an architecture for data
processing and storage, it does not specifically emphasize their role in health
monitoring.

(d) Cloud-Fog-Edge-IoHT:

 This option suggests a combination of cloud, fog, edge computing, and IoHT.
While it reflects a comprehensive infrastructure for health monitoring, it does
not directly focus on the customized wearable devices themselves.

QUESTION 10:

The cyber-physical system involves transdisciplinary approaches, merging the theory of cybernetics, mechatronics, design, and process science.
a. True

b. False

Correct Answer: a

Detailed Solution: The cyber-physical system involves transdisciplinary approaches, merging the theory of cybernetics, mechatronics, design, and process science.
