CLOUD COMPUTING
Unit – 3
1. Explain the high-level system architecture framework for energy-efficient green cloud
computing.
OR
Explain the green cloud computing architecture. 10 Marks
A high-level architecture for supporting energy-efficient resource allocation in a green cloud
computing infrastructure is shown in the figure. It consists of four main components:
1. Consumers/brokers: Cloud consumers or their brokers submit service requests from
anywhere in the world to the cloud. It is important to note that there can be a difference between
cloud consumers and users of deployed services. For instance, a consumer can be a company
deploying a Web application, which presents varying workloads according to the number of
“users” accessing it.
2. Green Resource Allocator: Acts as the interface between the cloud infrastructure and
consumers. It requires the interaction of the following components to support energy-efficient
resource management (a placement sketch follows this answer):
Service Analyzer. Interprets and analyzes the service requirements of a submitted
request before deciding whether to accept or reject it. Hence, it needs the latest load and
energy information from VM Manager and Energy Monitor, respectively.
Consumer Profiler. Gathers specific characteristics of consumers so that important
consumers can be granted special privileges and prioritized over other consumers.
Pricing. Decides how service requests are charged to manage the supply and demand of
computing resources and facilitate prioritizing service allocations effectively.
Energy Monitor. Observes and determines which physical machines to power on or off.
Service Scheduler. Assigns requests to VMs and determines resource entitlements for
allocated VMs. It also decides when VMs are to be added or removed to meet demand.
VM Manager. Keeps track of the availability of VMs and their resource entitlements. It
is also in charge of migrating VMs across physical machines.
Accounting. Maintains the actual usage of resources by requests to compute usage costs.
Historical usage information can also be used to improve service allocation decisions.
Green Negotiator. Negotiates with the consumers/brokers to finalize the SLAs with
specified prices and penalties (for violations of SLAs) between the cloud provider and the
consumer, depending on the consumer’s QoS requirements and energy-saving schemes.
3. VMs: Multiple VMs can be dynamically started and stopped on a single physical machine to
meet accepted requests, hence providing maximum flexibility to configure various partitions of
resources on the same physical machine to different specific requirements of service requests.
4. Physical machines: The underlying physical computing servers provide the hardware
infrastructure for creating virtualized resources to meet service demands.
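The scheduler and VM manager roles above can be illustrated with a small placement heuristic. In the sketch below, a new VM goes to the host whose power draw would grow the least, which naturally consolidates load and lets idle machines stay powered off; the Host class, the linear power model, and all wattage and CPU figures are invented assumptions, not part of the architecture described here:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_capacity: int           # total CPU units on the machine (assumed)
    cpu_used: int = 0           # CPU units already allocated to VMs
    powered_on: bool = True
    idle_power: float = 100.0   # watts drawn when on but idle (assumed)
    peak_power: float = 250.0   # watts drawn at full load (assumed)

    def power_at(self, cpu_used: int) -> float:
        """Linear power model: zero when off, else idle power plus a
        load-proportional share of the idle-to-peak range."""
        if cpu_used == 0 and not self.powered_on:
            return 0.0
        utilization = cpu_used / self.cpu_capacity
        return self.idle_power + (self.peak_power - self.idle_power) * utilization

def place_vm(hosts: list[Host], vm_cpu: int) -> Host | None:
    """Pick the host whose total power draw increases least when the VM
    is added (power-aware best fit); return None if no host fits."""
    feasible = [h for h in hosts if h.cpu_used + vm_cpu <= h.cpu_capacity]
    if not feasible:
        return None
    best = min(feasible,
               key=lambda h: h.power_at(h.cpu_used + vm_cpu) - h.power_at(h.cpu_used))
    best.cpu_used += vm_cpu
    best.powered_on = True
    return best

hosts = [Host("busy", cpu_capacity=16, cpu_used=8),
         Host("asleep", cpu_capacity=16, powered_on=False)]
print(place_vm(hosts, vm_cpu=4).name)  # "busy": consolidation keeps "asleep" off
```

Real energy-aware schedulers use far richer models (SLA penalties, migration costs, thermal data); the point here is only the decision structure: monitor energy, compare allocation choices, place accordingly.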
2. Explain market-oriented cloud computing (MOCC). Draw a reference model for MOCC.
OR
Give a global view of MOCC. 10 Marks
Market-oriented cloud computing originated from the coordination of several components:
service consumers, service providers, and other entities that make trading possible. Market
orientation not only influences the organization of the cloud computing market on a global
scale; it also shapes the internal architecture of cloud computing providers, which need to
support a more flexible allocation of their resources, driven by additional parameters such as
those defining the quality of service.
The fundamental component to define a global market-oriented architecture is the virtual
marketplace—represented by the Cloud Exchange (CEx)—which acts as a market maker,
bringing service producers and consumers together. The principal players in the virtual
marketplace are the cloud coordinators and the cloud brokers. The cloud coordinators represent
the cloud vendors and publish the services that vendors offer. The cloud brokers operate on
behalf of the consumers and identify the subset of services that match customers’ requirements in
terms of service profiles and quality of service. Brokers perform the same function as they would
in the real world: they mediate between coordinators and consumers by acquiring services from
the former and subleasing them to the latter. Brokers can accept requests from many users and,
at the same time, users can leverage different brokers. A similar relationship can be considered
between coordinators and cloud computing services vendors.
Several components contribute to the realization of the Cloud Exchange and implement its
features. In the reference model, it is possible to identify three major components:
1. Directory: The market directory contains a listing of all the published services that are
available in the cloud marketplace. The directory not only contains a simple mapping between
service names and the corresponding vendors offering them; it also provides additional metadata
that can help brokers or end users filter, from among the services of interest, those that can
really meet the expected quality of service. Moreover, several indexing methods can be
provided to optimize the discovery of services according to various criteria. This component is
modified in its content by service providers and queried by service consumers (a directory
sketch follows this answer).
2. Auctioneer: The auctioneer is in charge of keeping track of the running auctions in the
marketplace and of verifying that the auctions for services are properly conducted and that
malicious market players are prevented from performing illegal activities.
3. Bank: The bank is the component that takes care of the financial aspect of all the operations
happening in the virtual marketplace. It also ensures that all the financial transactions are carried
out in a secure and dependable environment. Consumers and providers may register with the
bank and have one or multiple accounts that can be used to perform the transactions in the virtual
marketplace.
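As a rough illustration of the Directory component, the sketch below shows providers publishing offers annotated with QoS metadata and a broker querying with filters. The class names, fields, and figures are hypothetical, not an actual Cloud Exchange API:

```python
# Illustrative market directory sketch; ServiceOffer, MarketDirectory,
# and all fields and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceOffer:
    name: str
    vendor: str
    price_per_hour: float
    max_response_ms: int    # QoS metadata published alongside the offer

class MarketDirectory:
    def __init__(self) -> None:
        self._offers: list[ServiceOffer] = []

    def publish(self, offer: ServiceOffer) -> None:
        """Called by cloud coordinators on behalf of vendors."""
        self._offers.append(offer)

    def query(self, max_price: float, required_response_ms: int) -> list[ServiceOffer]:
        """Called by brokers to keep only offers meeting the consumer's QoS."""
        return [o for o in self._offers
                if o.price_per_hour <= max_price
                and o.max_response_ms <= required_response_ms]

directory = MarketDirectory()
directory.publish(ServiceOffer("vm.small", "VendorA", 0.05, max_response_ms=200))
directory.publish(ServiceOffer("vm.small", "VendorB", 0.04, max_response_ms=500))
print(directory.query(max_price=0.06, required_response_ms=250))
# only VendorA qualifies: VendorB is cheaper but misses the response-time bound
```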
3. Explain the MOCC architecture for a data center.
OR
Explain the reference architecture for a cloud data center. 10 Marks
Datacenters are the building blocks of the computing infrastructure that backs the services
offered by a cloud computing vendor, no matter its specific category (IaaS, PaaS, or SaaS). The
diagram provides an overall view of the components that can support a cloud computing provider
in making available its services on a market-oriented basis. There are four major components of
the architecture:
1. Consumers/brokers: Same answer as question 1.
2. SLA Resource Allocator: The allocator represents the interface between the data center/cloud
service provider and the external world. Its main responsibility is ensuring that service
requests are satisfied according to the SLA agreed upon with the user. Several components
coordinate allocator activities in order to realize this goal (a sketch of the admission-control
and pricing duties follows this answer):
Service Request Examiner and Admission Control Module. This module operates at
the front end and filters user and broker requests in order to accept those that are feasible
given the current status of the system and the workload that is already being processed.
Accepted requests are allocated and scheduled for execution.
Pricing Module. This module is responsible for charging users according to the SLA
they signed. Different parameters can be considered in charging users; for instance, the
most common case for IaaS providers is to charge according to the characteristics of the
virtual machines requested in terms of memory, disk size, computing capacity, and the
time they are used.
Accounting Module. This module maintains the actual information on resource usage
and stores the billing information for each user. These data are made available
to the Service Request Examiner and Admission Control Module when assessing users'
requests.
Dispatcher. This component is responsible for the low-level operations that are required
to realize admitted service requests. In an IaaS scenario, this module instructs the
infrastructure to deploy as many virtual machines as are needed to satisfy a user’s
request.
Resource Monitor. This component monitors the status of the computing resources,
either physical or virtual. IaaS providers mostly focus on keeping track of the availability
of VMs and their resource entitlements.
Service Request Monitor. This component keeps track of the execution progress of
service requests. The information collected through the Service Request Monitor is
helpful for analyzing system performance and for providing quality feedback about the
provider's capability to satisfy requests.
3. VMs: Same answer as question 1.
4. Physical machines: Same answer as question 1.
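Two of the allocator's duties, admission control and pricing, can be sketched minimally as follows; the class name, rates, and capacity figures are invented for illustration and are not part of the reference architecture:

```python
# Hypothetical sketch of two SLA Resource Allocator duties: admission
# control against current capacity (Service Request Examiner role) and
# IaaS-style pricing by VM size and usage time (Pricing Module role).

class SLAResourceAllocator:
    def __init__(self, total_cpus: int):
        self.free_cpus = total_cpus   # capacity still available

    def admit(self, request_cpus: int) -> bool:
        """Accept a request only if it is feasible right now; rejecting
        infeasible requests up front avoids SLA violations later."""
        if request_cpus > self.free_cpus:
            return False
        self.free_cpus -= request_cpus
        return True

    @staticmethod
    def charge(cpus: int, memory_gb: int, disk_gb: int, hours: float) -> float:
        """Charge according to the requested VM's characteristics and the
        time it is used (illustrative hourly rates)."""
        hourly = cpus * 0.02 + memory_gb * 0.005 + disk_gb * 0.0001
        return round(hourly * hours, 4)

allocator = SLAResourceAllocator(total_cpus=32)
print(allocator.admit(8))                                           # True
print(allocator.charge(8, memory_gb=16, disk_gb=100, hours=24.0))   # 6.0
```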
4. Explain an industrial implementation of MOCC. 5 Marks
Ex: Spot Cloud
Spot Cloud is an online portal that implements a virtual marketplace where sellers and buyers
can register and trade cloud computing services. The platform is a marketplace operating in the
IaaS sector. Buyers look for compute capacity that can meet the requirements of their
applications, while sellers can make their infrastructure available to serve buyers' needs and earn
revenue. Spot Cloud is not only an enabler for IaaS providers and resellers; its intermediary
role also includes complete bookkeeping of the transactions associated with the use of
resources. Users deposit credit in their Spot Cloud account, and capacity sellers are paid
following the usual pay-per-use model. Spot Cloud provides a comprehensive set of features that
are expected of a virtual marketplace. Some of them include:
Detailed logging of all the buyers’ transactions
Full metering, billing for any capacity
Full control over pricing and availability of capacity in the market
Federation management (many providers, many customers, but one platform)
Hybrid cloud support (internal and external resource management)
Full market administration and reporting
Directories of applications and pre-built appliances
5. Explain third-party cloud services: Spot Cloud or MetaCDN. 5 Marks
Ex: MetaCDN
MetaCDN exposes its services to users and applications through the Web: users interact with a
portal, while applications take advantage of the programmatic access provided by means of Web
services. The main operations of MetaCDN are the creation of deployments over storage clouds
and their management.
In particular, four different deployment options can be selected (a selection sketch follows the list):
Coverage- and performance-optimized deployment: In this case MetaCDN deploys
as many replicas as possible to all available locations.
Direct deployment: In this case MetaCDN allows the selection of the deployment
regions for the content and will match the selected regions with the supported providers
serving those areas.
Cost-optimized deployment: In this case MetaCDN deploys as many replicas as possible
in the locations identified by the deployment request. The available storage transfer
allowance and budget will be used to deploy the replicas and keep them active for as
long as possible.
QoS-optimized deployment: In this case MetaCDN selects the providers that can best
match the QoS requirements attached to the deployment, such as average response time
and throughput from a particular location.
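A rough sketch of how the four options might translate into provider selection, under invented provider data and deliberately simplified rules (this does not reflect MetaCDN's actual implementation):

```python
# Illustrative MetaCDN-style deployment-option selection; provider data
# and selection rules are invented simplifications.

PROVIDERS = [
    {"name": "storageA", "region": "eu", "cost": 0.03, "avg_response_ms": 120},
    {"name": "storageB", "region": "us", "cost": 0.02, "avg_response_ms": 300},
    {"name": "storageC", "region": "ap", "cost": 0.05, "avg_response_ms": 90},
]

def select_replicas(option, regions=None, budget=None, max_response_ms=None):
    if option == "coverage":   # replicate to every available location
        return list(PROVIDERS)
    if option == "direct":     # only the regions the user selected
        return [p for p in PROVIDERS if p["region"] in regions]
    if option == "cost":       # cheapest first, until the budget runs out
        chosen, spent = [], 0.0
        for p in sorted(PROVIDERS, key=lambda p: p["cost"]):
            if spent + p["cost"] <= budget:
                chosen.append(p)
                spent += p["cost"]
        return chosen
    if option == "qos":        # only providers meeting the QoS target
        return [p for p in PROVIDERS if p["avg_response_ms"] <= max_response_ms]
    raise ValueError(f"unknown option: {option}")

print([p["name"] for p in select_replicas("qos", max_response_ms=150)])
# ['storageA', 'storageC']
```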
6. With a neat diagram explain Federated clouds/Inter Cloud. 5 Marks
Inter Cloud is a service-oriented architectural framework for cloud federation that supports
utility-driven interconnection of clouds. It is composed of a set of elements that interact via a
market-oriented system to enable trading of cloud assets such as computing power, storage, and
execution of applications. The Inter Cloud model comprises two main elements: the Cloud
Exchange and the Cloud Coordinator:
Cloud Exchange: It offers services that allow providers to find each other in order to directly
trade cloud assets, as well as allowing parties to register and run auctions. In the former case,
Cloud Exchange acts as a directory service for the federation; in the latter case, it runs the
auction. To offer such services to the federation, Cloud Exchange implements a Web service-
based interface that allows datacenters to join and leave the federation; to publish resources they
want to sell; to register their resource requirements so that parties interested in selling are able
to locate potential buyers for their resources; to query resource offers that match specific
requirements; to query requirements that match available resources from a party; to withdraw
offers and requests from the coordinator; to offer resources in auctions; to register bids; and to
consult the status of a running auction (a toy auction sketch follows this answer).
Cloud Coordinator: This component manages domain-specific issues related to the federation
and is present in each party that wants to join the federation. The Cloud Coordinator has
front-end components as well as back-end components: front-end components interact with the
Cloud Exchange and with other coordinators, allowing the datacenter to announce its offers and
requirements, whereas back-end components interact with the local datacenter, allowing the
coordinator to learn about its current state and decide whether actions from the federation are
required or not.
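To make the auction role concrete, here is a toy sealed-bid auction of the kind the Cloud Exchange could run; the offer string, bid format, and broker names are assumptions for illustration only:

```python
# Toy sealed-bid auction sketch: the highest registered bid wins the offer.

def run_auction(offer: str, bids: dict[str, float]) -> tuple[str, float]:
    """Return the winning bidder and price for one sealed-bid auction."""
    if not bids:
        raise ValueError(f"no bids registered for {offer}")
    winner = max(bids, key=bids.get)   # highest bidder takes the assets
    return winner, bids[winner]

print(run_auction("100 VM-hours", {"brokerA": 4.20, "brokerB": 3.90}))
# ('brokerA', 4.2)
```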
7. Software Defined Networks 10 Marks
Software-defined networking (SDN) is an agile networking architecture designed to help
organizations keep pace with the dynamic nature of today’s applications. It separates network
management from the underlying network infrastructure, allowing administrators to dynamically
adjust network-wide traffic flow to meet changing needs.
“Software defined” does not mean that SDN only uses virtual switches instead of dedicated
hardware (even if it mostly does): it refers to the fact that switches can be programmed; their
behavior is defined by a software configuration. Indeed, the main feature of SDN is that the
switch control plane is decoupled from the data plane (sketched in code below).
The control plane is where the administration of the network takes place: it corresponds to
the setting up of the packet-processing rules, and from there to the establishment of the
whole network switching policy.
The data plane encompasses the application of the rules defined on the control plane: this is
the actual packet processing. When some packets require particular, more complex
processing, they can be handed off to the control plane, where the decision regarding the
packet is made.
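The decoupling can be caricatured in a few lines of Python (a toy model, not a real OpenFlow or controller API): the controller program decides policy and installs match-action rules, while the switch's data plane applies them and punts unmatched packets back up:

```python
# Toy model of the SDN control/data plane split; all names are invented.

class Controller:
    """Control plane: decides the network switching policy."""
    def decide(self, packet: dict) -> tuple[dict, str]:
        # Example policy: forward web traffic out of port 2, drop the rest.
        match = {"dst_port": packet["dst_port"]}
        action = "out:2" if packet["dst_port"] == 80 else "drop"
        return match, action

class Switch:
    """Data plane: applies installed rules to every packet."""
    def __init__(self, controller: Controller):
        self.controller = controller
        self.flow_table: list[tuple[dict, str]] = []

    def process(self, packet: dict) -> str:
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action                           # fast path: rule hit
        match, action = self.controller.decide(packet)  # slow path: punt up
        self.flow_table.append((match, action))         # rule gets installed
        return action

switch = Switch(Controller())
print(switch.process({"dst_port": 80}))  # out:2, decided by the controller
print(switch.process({"dst_port": 80}))  # out:2, served from the flow table
```

The first packet takes the slow path through the controller; every later packet matching the installed rule is handled entirely in the data plane, which is exactly the split described above.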
Characteristic Features of SDN
Directly programmable: Network control is directly programmable because it is
decoupled from forwarding functions.
Agile: Abstracting control from forwarding lets administrators dynamically adjust
network-wide traffic flow to meet changing needs.
Centrally managed: Network intelligence is (logically) centralized in software-based
SDN controllers that maintain a global view of the network, which appears to
applications and policy engines as a single, logical switch.
Programmatically configured: SDN lets network managers configure, manage, secure,
and optimize network resources very quickly via dynamic, automated SDN programs,
which they can write themselves because the programs do not depend on proprietary
software.
Open standards-based and vendor-neutral: When implemented through open
standards, SDN simplifies network design and operation because instructions are
provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
8. Network Functions Virtualization (NFV) 5 Marks
NFV incorporates cloud and virtualization technologies to drive rapid development of
new network services with elastic scale and automation.
NFV brings agility in delivering network services with capital efficiency by removing
bottlenecks imposed by manual processes, and allowing new services to be deployed on
demand. NFV allows service providers to deliver services faster and cost-effectively, and
to leverage automation so that they can adapt to customers’ needs for scale and agility.
The modular architecture of NFV is what allows service providers to automate at every
level. Major components of the architecture include (a relational sketch follows the list):
NFV infrastructure (NFVI) building block—Provides the virtualization layer (hypervisors
or container management systems such as Docker) and the physical compute, storage, and
networking components that host the VNFs.
VNFs—Software-based applications that provide one or more network services. VNFs
use the virtualized infrastructure provided by the NFVI to connect into the network and
provide programmable, scalable network services.
Management and orchestration (MANO)—Provides the overarching management and
orchestration of the VNFs in the NFV architecture.
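How the three blocks relate can be sketched very roughly as follows; the class names and the vCPU-only resource model are invented for illustration:

```python
# Rough sketch of the NFV split: the NFVI offers virtualized resources,
# a VNF declares what it needs, and MANO places the VNF onto the NFVI.
from dataclasses import dataclass

@dataclass
class NFVI:
    free_vcpus: int   # virtualized compute offered by the infrastructure

@dataclass
class VNF:
    name: str         # e.g., a virtual firewall or load balancer
    vcpus: int        # resources the network function requires

def mano_instantiate(nfvi: NFVI, vnf: VNF) -> bool:
    """MANO role: orchestrate a VNF onto the infrastructure if it fits."""
    if vnf.vcpus > nfvi.free_vcpus:
        return False                  # no capacity: scale out or reject
    nfvi.free_vcpus -= vnf.vcpus
    return True

infra = NFVI(free_vcpus=8)
print(mano_instantiate(infra, VNF("virtual-firewall", vcpus=2)))  # True
```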
9. Fog Computing 10 Marks
The fog extends the cloud to be closer to the things that produce and act on IoT data. These
devices, called fog nodes, can be deployed anywhere with a network connection: on a factory
floor, on top of a power pole, alongside a railway track, in a vehicle, or on an oil rig. Any device
with computing, storage, and network connectivity can be a fog node. Examples include industrial
controllers, switches, routers, embedded servers, and video surveillance cameras.
How Does Fog Work?
Developers either port or write IoT applications for fog nodes at the network edge. The
fog nodes closest to the network edge ingest the data from IoT devices. Then, crucially,
the fog IoT application directs different types of data to the optimal place for analysis
(a routing sketch follows these examples).
The most time-sensitive data is analyzed on the fog node closest to the things generating
the data. In a Cisco Smart Grid distribution network, for example, the most time-sensitive
requirement is to verify that protection and control loops are operating properly.
Therefore, the fog nodes closest to the grid sensors can look for signs of problems and
then prevent them by sending control commands to actuators.
Data that can wait seconds or minutes for action is passed along to an aggregation node
for analysis and action. In the Smart Grid example, each substation might have its own
aggregation node that reports the operational status of each downstream feeder and
lateral.
Data that is less time sensitive is sent to the cloud for historical analysis, big data
analytics, and long-term storage. For example, each of thousands or
hundreds of thousands of fog nodes might send periodic summaries of grid data to the
cloud for historical analysis and storage.
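The three tiers amount to routing each reading by how soon action on it is needed. A minimal sketch, with invented thresholds, tier names, and reading fields:

```python
# Illustrative routing of IoT readings by latency tolerance.

def route(reading: dict) -> str:
    deadline_s = reading["deadline_s"]    # how soon action is required
    if deadline_s < 1:                    # sub-second deadlines:
        return "fog-node"                 # analyze at the nearest fog node
    if deadline_s < 60:                   # seconds to minutes:
        return "aggregation-node"         # e.g., a substation-level node
    return "cloud"                        # historical and big-data analysis

print(route({"sensor": "grid-protection", "deadline_s": 0.05}))  # fog-node
print(route({"sensor": "feeder-status", "deadline_s": 30}))      # aggregation-node
print(route({"sensor": "daily-summary", "deadline_s": 86400}))   # cloud
```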
Benefits of Fog Computing
Greater business agility: With the right tools, developers can quickly develop fog
applications and deploy them where needed. Machine manufacturers can offer MaaS
(machines as a service) to their customers. Fog applications program the machine to
operate in the way each customer needs.
Better security: Protect your fog nodes using the same policy, controls, and procedures
you use in other parts of your IT environment. Use the same physical security and
cybersecurity solutions.
Deeper insights, with privacy control: Analyze sensitive data locally instead of sending it
to the cloud for analysis. Your IT team can monitor and control the devices that collect,
analyze, and store data.
Lower operating expense: Conserve network bandwidth by processing selected data
locally instead of sending it to the cloud for analysis.
10. Microservice Architecture 10 Marks
Microservices (Microservice Architecture) is an architectural style that structures an application as
a collection of small autonomous services, modeled around a business domain. In Microservice
Architecture, each service is self-contained and implements a single business capability.
Different clients from different devices use different services such as search, build, configure,
and other management capabilities. All the services are separated based on their domains and
functionalities and are further allotted to individual microservices. These microservices have
their own load balancer and execution environment to execute their functionalities, and at the
same time capture data in their own databases. All the microservices communicate with each
other through stateless protocols, either REST or a message bus. Microservices know their path
of communication with the help of Service Discovery and perform operational capabilities such
as automation and monitoring. All the functionalities performed by the microservices are then
exposed to clients via an API Gateway: all the internal endpoints are connected to the API
Gateway, so anybody who connects to the API Gateway is automatically connected to the
complete system (see the sketch below).
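The flow just described can be sketched minimally as follows; the registry contents, URLs, and function names are hypothetical:

```python
# Hypothetical API gateway + service discovery sketch.

SERVICE_REGISTRY = {             # service discovery: name -> instance URLs
    "search":    ["http://search-1:8080", "http://search-2:8080"],
    "configure": ["http://configure-1:8080"],
}

def api_gateway(service: str, path: str) -> str:
    """Single entry point: resolve the owning service, pick an instance,
    and forward the request (naive hash-based load balancing)."""
    instances = SERVICE_REGISTRY.get(service)
    if not instances:
        return "404: unknown service"
    target = instances[hash(path) % len(instances)]
    return f"forwarding {path} to {target}"

print(api_gateway("search", "/search?q=cloud"))   # routed to a search instance
print(api_gateway("billing", "/invoices"))        # 404: unknown service
```

In practice the registry is a dedicated service and the gateway a product such as a reverse proxy, but the lookup-then-forward structure is the same.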
Features of Microservices
Decoupling – Services within a system are largely decoupled. So the application as a
whole can be easily built, altered, and scaled
Componentization – Microservices are treated as independent components that can be
easily replaced and upgraded
Business Capabilities – Microservices are very simple and focus on a single capability.
Autonomy – Developers and teams can work independently of each other, thus increasing
speed
Continuous Delivery – Allows frequent releases of software, through systematic
automation of software creation, testing, and approval.
Responsibility – Microservices do not focus on applications as projects. Instead, they
treat applications as products for which they are responsible.
Decentralized Governance – The focus is on using the right tool for the right job. That
means there is no standardized pattern or any technology pattern. Developers have the
freedom to choose the best useful tools to solve their problems.
Agility – Microservices support agile development. Any new feature can be quickly
developed and discarded again
Advantages of Microservices
Independent Development – All microservices can be easily developed based on their
individual functionality
Independent Deployment – Based on their services, they can be individually deployed in
any application.
Fault Isolation – Even if one service of the application does not work, the system still
continues to function
Mixed Technology Stack – Different languages and technologies can be used to build
different services of the same application
Granular Scaling – Individual components can scale as needed; there is no need to scale
all components together.