Week 1
Day 1 : Morning Session
Building blocks of cloud computing
1.1 Architecture of Computer System
A computer is an electronic machine that makes performing any task very easy. In a computer, the CPU executes each instruction provided to it in a series of steps. This series of steps is called the machine cycle, and it is repeated for each instruction. One machine cycle involves fetching the instruction, decoding the instruction, transferring the data, and executing the instruction.
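To make the machine cycle concrete, here is a minimal, hypothetical Python sketch that simulates the fetch-decode-execute loop for a toy instruction set; the three-field instruction format and register names are illustrative assumptions, not a real CPU design.

# A toy simulation of the machine cycle: fetch, decode, execute, repeat.
# The instruction format and registers are invented for illustration only.
memory = [
    ("LOAD", "A", 5),    # put the value 5 into register A
    ("LOAD", "B", 7),    # put the value 7 into register B
    ("ADD",  "A", "B"),  # A = A + B (data transfer + execution)
    ("HALT", None, None),
]
registers = {"A": 0, "B": 0}
pc = 0  # program counter

while True:
    opcode, dst, src = memory[pc]  # fetch the instruction and decode its fields
    pc += 1
    if opcode == "LOAD":           # execute: transfer data into a register
        registers[dst] = src
    elif opcode == "ADD":
        registers[dst] += registers[src]
    elif opcode == "HALT":
        break

print(registers)  # {'A': 12, 'B': 7}

Each pass through the loop is one machine cycle, repeated for every instruction, mirroring the description above.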
A computer system has five basic units that help it perform operations, which are given below:
1. Input Unit
2. Output Unit
3. Storage Unit
4. Arithmetic Logic Unit
5. Control Unit
Input Unit
The input unit connects the external environment with the internal computer system. It provides data and instructions to the computer system. Commonly used input devices are the keyboard, mouse, magnetic tape, etc.
Input unit performs following tasks:
Accept the data and instructions from the outside environment.
Convert it into machine language.
Supply the converted data to computer system.
Output Unit
It connects the internal system of a computer to the external environment. It provides the results of any computation, or instructions, to the outside world. Some output devices are printers, monitors, etc.
Storage Unit
This unit holds the data and instructions. It also stores the intermediate results before these are sent to the
output devices. It also stores the data for later use.
The storage unit of a computer system can be divided into two categories:
Primary Storage: This memory is used to store the data that is currently being processed. It is used for temporary storage of data. The data is lost when the computer is switched off. RAM is used as primary storage memory.
Secondary Storage: The secondary memory is slower and cheaper than primary memory. It is used
for permanent storage of data. Commonly used secondary memory devices are hard disk, CD etc.
Arithmetic Logical Unit
All the calculations are performed in ALU of the computer system. The ALU can perform basic operations
such as addition, subtraction, division, multiplication etc. Whenever calculations are required, the control
unit transfers the data from storage unit to ALU. When the operations are done, the result is transferred back
to the storage unit.
Control Unit
It controls all other units of the computer. It controls the flow of data and instructions between the storage unit and the ALU. Thus it is also known as the central nervous system of the computer.
CPU
CPU stands for Central Processing Unit. The control unit and ALU are together known as the CPU. The CPU is the brain of the computer system. It performs the following tasks:
It performs all operations.
It takes all decisions.
It controls all the units of computer.
Above figure shows the block diagram of a computer.
1.2 Servers vs Desktops and Laptops
Server
A server is a software service running on a dedicated computer, and the service it provides can be used by other computers in the network. Sometimes the physical computer that runs this service is also referred to as the server. Servers mainly provide a dedicated functionality, such as web servers serving web pages, print servers providing printing facilities, and database servers providing database functionality including storage and management of data. Dedicated servers normally include faster CPUs, large high-performance RAM (Random Access Memory), and large storage devices such as multiple hard drives. Furthermore, servers use operating systems (OS) that are server oriented, providing special features suitable for server environments. In these OSs the GUI is an optional feature, and they provide advanced backup facilities and tight system security features.
Desktop
A desktop is a computer intended for personal use and it is typically kept in a single place. Furthermore,
desktop refers to a computer that is laid horizontally on the desk, unlike towers. Early desktop computers were very large and took up an entire room. It was only in the 1970s that the first computers that could be kept on a desk arrived. Widely used desktop OSs today are Windows, Mac OS X, and Linux.
While Windows and Linux could be used with any desktop, Mac OS X has some restrictions. Desktops are
powered from a wall socket and therefore power consumption is not a critical issue. Furthermore, desktop
computers provide more space for heat dissipation. Initially, desktop computers were not integrated with
wireless technologies such as WiFi, Bluetooth and 3G, but currently they are integrated with wireless
technologies.
Laptop
A laptop is a personal computer that can be easily moved and used in a variety of locations. Most laptops
are designed to have all of the functionality of a desktop computer, which means they can generally run the
same software and open the same types of files. However, laptops also tend to be more expensive than
comparable desktop computers. Because laptops are designed for portability, there are some important
differences between them and desktop computers. A laptop has an all-in-one design, with a built-
in monitor, keyboard, touchpad (which replaces the mouse), and speakers. This means it is fully
functional, even when no peripherals are connected. A laptop is also quicker to set up, and there are fewer
cables to get in the way.
1.3 Client Server Computing
In client server computing, the client requests a resource and the server provides that resource. A server
may serve multiple clients at the same time while a client is in contact with only one server. Both the client
and server usually communicate via a computer network but sometimes they may reside in the same system.
An illustration of the client server system is given as follows −
Characteristics of Client Server Computing
The salient points for client server computing are as follows:
Client server computing works with a system of request and response: the client sends a request to the server and the server responds with the desired information (a minimal socket sketch follows this list).
The client and server should follow a common communication protocol so they can easily interact
with each other. All the communication protocols are available at the application layer.
A server can only accommodate a limited number of client requests at a time, so it uses a system based on priority to respond to the requests.
Denial of Service attacks hamper a server's ability to respond to authentic client requests by overwhelming it with false requests.
An example of a client server computing system is a web server. It returns the web pages to the
clients that requested them.
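As a minimal sketch of the request/response pattern described above, the following Python code uses the standard socket module. The port number and message format are arbitrary choices for illustration; a real server would loop to serve many clients concurrently.

import socket
import threading
import time

def serve_once(port=9090):
    # Server: accept one client connection, read its request, send a response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("localhost", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to: " + request)

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.5)  # crude wait so the server is listening before we connect

# Client: send a request and wait for the server's response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("localhost", 9090))
    cli.sendall(b"GET resource")
    print(cli.recv(1024).decode())  # response to: GET resource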
Advantages of Client Server Computing
The different advantages of client server computing are −
All the required data is concentrated in a single place i.e. the server. So it is easy to protect the data
and provide authorisation and authentication.
The server need not be located physically close to the clients. Yet the data can be accessed
efficiently.
It is easy to replace, upgrade or relocate the nodes in the client server model because all the nodes are
independent and request data only from the server.
All the nodes, i.e., clients and server, may not be built on similar platforms, yet they can easily facilitate the transfer of data.
Disadvantages of Client Server Computing
The different disadvantages of client server computing are −
If all the clients simultaneously request data from the server, it may get overloaded. This may lead to
congestion in the network.
If the server fails for any reason, then none of the requests of the clients can be fulfilled. This leads to failure of the client server network.
The cost of setting up and maintaining a client server model is quite high.
1.4 Hard Drives - HDDs and SSDs
HDD (Hard Disk Drive)
An HDD consists of a spinning disk (platter) coated with a magnetic material and a read/write head that
reads and writes data on the disk’s surface. The read/write head moves back and forth across the spinning
disk to access different parts of the data stored on the disk. HDDs have been around for decades and are
the more traditional type of storage device.
Features of HDD:
High storage capacity: HDDs offer a high storage capacity, with some models capable of storing up to
16TB of data.
Lower cost: HDDs are generally less expensive than SSDs, making them a more cost-effective option
for storing large amounts of data.
Larger size: HDDs are physically larger and heavier than SSDs, making them less suitable for use in
portable devices.
Slower performance: HDDs are slower than SSDs when it comes to data access and transfer speeds.
Mechanical parts: HDDs contain mechanical parts that can wear out over time, making them less
durable than SSDs.
SSD (Solid State Drive)
SSDs, on the other hand, use flash memory to store data instead of a spinning disk. SSDs have no
moving parts, making them much faster, more durable, and less susceptible to mechanical failure than
HDDs.
Features of SSD:
Fast performance: SSDs offer much faster data access and transfer speeds than HDDs.
Compact size: SSDs are smaller and lighter than HDDs, making them an ideal option for use in
portable devices such as laptops and tablets.
Lower power consumption: SSDs consume less power than HDDs, making them more energy-efficient.
Higher cost: SSDs are generally more expensive than HDDs, making them a less cost-effective option
for storing large amounts of data.
No mechanical parts: SSDs have no moving parts, making them more durable and less susceptible to
mechanical failure than HDDs.
1.5 Storage - block vs file vs object
Object, block, and cloud file storage work differently. They each use distinct structures, systems, and
storage solutions.
Object storage
Object storage stores and manages data as discrete units called objects. An object typically consists of the actual data (such as documents, images, or data values) and its associated metadata. Metadata is additional information about the object that you can use to retrieve it. The metadata can include attributes like the unique identifier, the object's name, size, creation date, and custom-defined tags.
Object storage systems use a flat namespace, so objects are stored without the need for a hierarchical
structure. Instead, the object’s unique identifier provides the address for the object within the storage system.
A hashing algorithm generates the ID from the object's content, which ensures that objects with the same
content have the same identifier.
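The following is a minimal sketch of that content-addressing idea: the object ID is a hash of the object's content, so identical content always yields the identical identifier. The flat dictionary standing in for the storage backend, and the choice of SHA-256, are illustrative assumptions.

import hashlib

store = {}  # flat namespace: object ID -> (data, metadata); no directory tree

def put_object(data: bytes, metadata: dict) -> str:
    object_id = hashlib.sha256(data).hexdigest()  # ID derived from the content
    store[object_id] = (data, metadata)
    return object_id

oid1 = put_object(b"hello", {"name": "greeting.txt", "size": 5})
oid2 = put_object(b"hello", {"name": "copy.txt", "size": 5})
assert oid1 == oid2  # same content, same identifier
print(store[oid1][1])  # metadata travels with the object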
Block storage
Block storage works by dividing data into fixed-size blocks and storing them as individual units. Blocks range from a few kilobytes to several megabytes in size; the block size can be predetermined during the configuration process.
The operating system gives each block a unique address or block number, logged inside a data lookup
table. The addressing uses a logical block addressing (LBA) scheme that assigns a sequential number to
each block.
Block storage allows direct access to individual data blocks. You can read or write data to specific blocks
without having to retrieve or modify the entire dataset the block belongs to.
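The sketch below illustrates logical block addressing with ordinary Python file I/O: a file stands in for the block device, and a block is located by seeking to block_number * BLOCK_SIZE. The 4 KB block size and file name are assumed configuration values.

BLOCK_SIZE = 4096  # assumed block size, fixed at configuration time

# Create a small file that stands in for a 25-block device.
with open("device.img", "wb") as f:
    f.write(b"\x00" * BLOCK_SIZE * 25)

def write_block(block_number: int, data: bytes) -> None:
    # LBA: the block's address is simply its sequential number times the size.
    with open("device.img", "r+b") as device:
        device.seek(block_number * BLOCK_SIZE)
        device.write(data.ljust(BLOCK_SIZE, b"\x00"))

def read_block(block_number: int) -> bytes:
    # Direct access: read one block without touching the rest of the data.
    with open("device.img", "rb") as device:
        device.seek(block_number * BLOCK_SIZE)
        return device.read(BLOCK_SIZE)

write_block(7, b"hello block seven")
print(read_block(7).rstrip(b"\x00"))  # b'hello block seven'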
Cloud file storage
Cloud file storage is a hierarchical storage system that provides shared access to file data. It uses a remote
infrastructure of servers to store data. The cloud provider maintains the servers and manages data on them.
Files contain metadata like the file name, size, timestamps, and permissions.
You can create, modify, delete, and read files. You can also organize them logically in directory trees for
intuitive access. Multiple users can simultaneously access the same files. Security for online file storage is
managed with user and group permissions, so that administrators can control access to the shared file data.
Summary of differences: object vs. block vs. file storage
File management –
Object storage: Stores files as objects. Accessing files in object storage with existing applications requires new code and the use of APIs.
Block storage: Can store files but requires additional budget and management resources to support files on block storage.
Cloud file storage: Supports common file-level protocols and permissions models. Usable by applications configured to work with shared file storage.
Metadata management –
Object storage: Can store unlimited metadata for any object, with custom-defined metadata fields.
Block storage: Uses very little associated metadata.
Cloud file storage: Stores limited metadata relevant to files only.
Performance –
Object storage: Stores unlimited data with minimal latency.
Block storage: High-performance, low-latency, rapid data transfer.
Cloud file storage: Offers high performance for shared file access.
Physical storage –
Object storage: Distributed across multiple storage nodes.
Block storage: Distributed across SSDs and HDDs.
Cloud file storage: On-premises NAS servers or underlying physical block storage.
Scalability –
Object storage: Unlimited scale.
Block storage: Somewhat limited.
Cloud file storage: Somewhat limited.
1.6 Distributed Networking
Distributed networking, used in distributed computing, is the network system over which computer programs, software, and data are spread out across more than one computer; the nodes (computers) communicate complex messages and depend on each other. The goal of a distributed network is to share resources, typically to accomplish a single or similar goal. Usually this takes place over a computer network; however, internet-based computing is rising in popularity. Typically, a distributed networking system is composed of processes, threads, agents, and distributed objects. Merely distributing physical components is not enough to constitute a distributed network; typically, distributed networking also involves concurrent program execution.
Day 1 : Afternoon Session
1.7 IP addressing
An IP address is an address having information about how to reach a specific host, especially outside
the LAN. An IP address is a 32-bit unique address having an address space of 2^32.
Generally, there are two notations in which the IP address is written, dotted decimal notation and
hexadecimal notation.
Dotted Decimal Notation
Hexadecimal Notation
Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are:
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and
experimental purposes respectively. The order of bits in the first octet determines the classes of the IP
address.
The IPv4 address is divided into two parts:
Network ID
Host ID
The class of IP address is used to determine the bits used for network ID and host ID and the number of
total networks and hosts possible in that particular class. Each ISP or network administrator assigns an IP
address to each device that is connected to its network.
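To illustrate classful addressing, here is a small Python sketch that classifies an IPv4 address in dotted decimal notation by the leading bits of its first octet; the ranges follow the standard class boundaries described above.

def ip_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet < 128:    # leading bit 0     -> Class A
        return "A"
    elif first_octet < 192:  # leading bits 10   -> Class B
        return "B"
    elif first_octet < 224:  # leading bits 110  -> Class C
        return "C"
    elif first_octet < 240:  # leading bits 1110 -> Class D (multicast)
        return "D"
    else:                    # leading bits 1111 -> Class E (experimental)
        return "E"

print(ip_class("10.0.0.1"))     # A
print(ip_class("172.16.0.1"))   # B
print(ip_class("192.168.1.1"))  # C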
1.8 Networking - Routers and Switches
What is a Switch?
o A switch is a networking device, which provides the facility to share the information & resources by
connecting different network devices, such as computers, printers, and servers, within a small
business network.
o With the help of a switch, the connected devices can share the data & information and communicate
with each other.
o Without a switch, we cannot build a small business network and cannot connect devices within a
building or campus.
What is a Router?
o A router is a networking device used to connect multiple switches and their corresponding networks
to build a large network. These switches and their corresponding networks may be in a single
location or different locations.
o A router is an intelligent device responsible for routing data packets from source to destination over a network. It also distributes or routes the internet connection from the modem to all networking devices, either wired or wireless, such as PCs, laptops, mobile phones, tablets, etc.
o The router connects multiple networks and allows the networked devices & users to access the
internet.
o It works on the network layer & routes the data packets through the shortest path across the network.
Types of Router
There are mainly two types of routers, which are given below:
1. Wireless Router
o Wireless routers are the most commonly used routers in offices and homes as they don't need any
wire or cable to connect with networking devices.
o It provides a secure connection, and only authenticated users can access the network using the id &
password.
o Using a wireless router, the internet can be accessed by any number of users within the specified range.
2. Wired Router/Broadband Router
o As its name suggests, it requires a wire or cable to connect to the network devices.
o Such routers are mostly used in schools or small offices to connect the PCs with the Ethernet cable.
Key Differences between the Switch & Router
o The main function of a switch is to connect the end devices such as computers, printers, etc., whereas
the main function of a router is to connect two different networks.
o A switch works on the data link layer of the OSI model; on the other hand, a router works on
the network layer of the OSI model.
o The switch aims to determine the destination MAC address of the received frame and forward it to that destination. On the other hand, the router's main purpose is to find the shortest and best routes for the packets to reach the destination, determined using the routing table.
o There are various switching techniques such as circuit switching, packet switching, and message
switching, which are used by a switch. In comparison, a router uses two routing techniques, which
are adaptive routing and non-adaptive routing techniques.
o A switch stores MAC addresses in the lookup table or CAM table to get the source and destination addresses. In contrast, routers store IP addresses in the routing table.
1.9 Networking - Firewalls
A firewall is a network security device, either hardware- or software-based, which monitors all incoming and outgoing traffic and, based on a defined set of security rules, accepts, rejects, or drops that specific traffic.
Accept: allow the traffic.
Reject: block the traffic but reply with an "unreachable" error.
Drop: block the traffic with no reply.
A firewall establishes a barrier between secured internal networks and outside untrusted networks, such as the Internet.
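As a minimal sketch of how such rule evaluation might look, the Python code below checks each packet against an ordered rule list and returns accept, reject, or drop; the rule format and example ports are illustrative simplifications, not a real firewall's configuration language.

# Each rule is (predicate, action); the first matching rule wins.
rules = [
    (lambda pkt: pkt["port"] == 22 and pkt["src"].startswith("10."), "accept"),
    (lambda pkt: pkt["port"] == 23, "reject"),  # blocked, with an error reply
]

def evaluate(packet: dict) -> str:
    for predicate, action in rules:
        if predicate(packet):
            return action
    return "drop"  # default policy: blocked silently, no reply

print(evaluate({"src": "10.1.2.3", "port": 22}))  # accept
print(evaluate({"src": "8.8.8.8", "port": 23}))   # reject
print(evaluate({"src": "8.8.8.8", "port": 80}))   # drop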
1.10 Databases
A database is an organized collection of data, so that it can be easily accessed and managed. You can organize data into tables, rows, and columns, and index it to make it easier to find relevant information. Database handlers create a database in such a way that a single set of software programs provides access to the data for all users. The main purpose of a database is to operate on a large amount of information by storing, retrieving, and managing data.
There are many dynamic websites on the World Wide Web nowadays that are handled through databases. For example, a system that checks the availability of rooms in a hotel is a dynamic website that uses a database.
There are many databases available like MySQL, Sybase, Oracle, MongoDB, Informix, PostgreSQL,
SQL Server, etc.
Modern databases are managed by the database management system (DBMS).
SQL (Structured Query Language) is used to perform operations on the records stored in the database, such as updating records, inserting records, deleting records, and creating and modifying database tables, views, etc. SQL depends on relational algebra and tuple relational calculus.
In diagrams, a database is conventionally depicted as a cylinder.
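To make the SQL operations above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table name and columns (a toy hotel-room table, echoing the example above) are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE rooms (id INTEGER PRIMARY KEY, available INTEGER)")
cur.execute("INSERT INTO rooms (id, available) VALUES (101, 1), (102, 0)")
cur.execute("UPDATE rooms SET available = 0 WHERE id = 101")
cur.execute("DELETE FROM rooms WHERE id = 102")

print(cur.execute("SELECT * FROM rooms").fetchall())  # [(101, 0)]
conn.close()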
1.11 Server virtualization
Server Virtualization Definition
Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.
Key Benefits of Server Virtualization:
Higher server availability
Cheaper operating costs
Eliminate server complexity
Increased application performance
Deploy workload quicker
Three Kinds of Server Virtualization:
Full Virtualization: Full virtualization uses a hypervisor, a type of software that directly
communicates with a physical server's disk space and CPU. The hypervisor monitors the
physical server's resources and keeps each virtual server independent and unaware of the other
virtual servers. It also relays resources from the physical server to the correct virtual server as
it runs applications. The biggest limitation of using full virtualization is that a hypervisor has
its own processing needs. This can slow down applications and impact server performance.
Para-Virtualization: Unlike full virtualization, para-virtualization involves the entire network
working together as a cohesive unit. Since each operating system on the virtual servers is
aware of one another in para-virtualization, the hypervisor does not need to use as much
processing power to manage the operating systems.
OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not use a hypervisor. Instead, the virtualization capability, which is part of the physical server's operating system, performs all the tasks of a hypervisor. However, all the virtual servers must run the same operating system in this server virtualization method.
1.12 Docker Container
A container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. A Docker container
image is a lightweight, standalone, executable package of software that includes everything needed to run
an application: code, runtime, system tools, system libraries and settings. Container images become
containers at runtime and in the case of Docker containers – images become containers when they run
on Docker Engine.
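As a hedged sketch of that image-to-container step, the snippet below uses the Docker SDK for Python (the docker package); it assumes Docker Engine is running locally and that the alpine image can be pulled.

import docker  # pip install docker; assumes a local Docker Engine is running

client = docker.from_env()

# An image becomes a container at runtime: run a command inside alpine.
output = client.containers.run(
    "alpine", ["echo", "hello from a container"],
    remove=True,  # clean up the container after it exits
)
print(output.decode())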
Docker containers that run on Docker Engine:
Standard: Docker created the industry standard for containers, so they could be portable anywhere
Lightweight: Containers share the machine’s OS system kernel and therefore do not require an OS per
application, driving higher server efficiencies and reducing server and licensing costs
Secure: Applications are safer in containers and Docker provides the strongest default isolation
capabilities in the industry
Comparing Containers and Virtual Machines
Containers and virtual machines have similar resource isolation and allocation benefits, but function
differently because containers virtualize the operating system instead of hardware. Containers are more
portable and efficient.
CONTAINERS
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple
containers can run on the same machine and share the OS kernel with other containers, each running as
isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.
VIRTUAL MACHINES
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers.
The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an
operating system, the application, necessary binaries and libraries – taking up tens of GBs. VMs can also
be slow to boot.
1.13 Application Programming Interfaces (API)
Overview:
An Application Programming Interface is a software interface that enables communication between computers or between computer programs. It is an interface that provides access to information, such as weather forecasts. In simple words, you can say it is a software interface that offers a service to other pieces of software.
Example –
The best examples of web service APIs are SOAP (Simple Object Access Protocol) and REST (REpresentational State Transfer).
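As an illustration of consuming a REST API, the sketch below uses the widely used requests library; the URL, parameters, and response fields describe a hypothetical weather service, not a real endpoint.

import requests  # pip install requests

# Hypothetical REST endpoint and parameters, for illustration only.
response = requests.get(
    "https://api.example.com/v1/forecast",
    params={"city": "Pune", "units": "metric"},
    timeout=10,
)
response.raise_for_status()         # fail loudly on an HTTP error status
forecast = response.json()          # REST APIs commonly exchange JSON
print(forecast.get("temperature"))  # field name assumed for this example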
Features :
An application programming interface is software that allows two applications to talk to each other.
Application programming interface helps in enabling applications to exchange data and functionality
easily.
The application programming interface is also called a middleman between two systems.
Application programming interface helps in data monetization.
Application programming interface helps in improving collaboration.
Different types of APIs :
These are common types of APIs as follows.
1. Open APIs –
Also called public APIs, these are available to any external user. Open APIs help external users to access data and services. They are open application programming interfaces that are accessed using HTTP protocols.
2. Internal APIs –
Also known as private APIs, only internal systems expose this type of API. These are designed for the internal use of the company rather than for external users.
3. Composite APIs –
This type of API combines different data and services. The main reasons to use composite APIs are to improve performance, speed up the execution process, and improve the responsiveness of listeners in web interfaces.
4. Partner APIs –
This is a type of API for which a developer needs specific rights or licenses in order to gain access. Partner APIs are not available to the public.
Applications of APIs in the real world :
Here, some real-world applications are as follows.
Weather snippets –
In weather snippets, APIs are generally used to access large datasets of weather-forecast information, which is very helpful in day-to-day life.
Login –
For this functionality, APIs are widely used to log in via Google, LinkedIn, GitHub, or Twitter, allowing users to access the login portal through the API interface.
Entertainment –
In this field, APIs are used to access and provide huge databases of movies, web series, comedy, etc.
E-commerce website –
Here, APIs provide payment functionality: if you have purchased something and want to pay, the API provides an interface through which you can pay using different bank debit cards, UPI (Unified Payments Interface), credit cards, wallets, etc.
Gaming –
In gaming, APIs provide an interface through which you can access information about the game, connect to other users, and play with many different users at the same time.
Day 2 Morning Session
Introduction to cloud computing
2.1 Introduction
Cloud Computing provides us means of accessing the applications as utilities over the Internet. It allows
us to create, configure, and customize the applications online.
What is Cloud?
The term Cloud refers to a network or the Internet. In other words, we can say that the Cloud is something that is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, customer relationship management (CRM) execute on
cloud.
What is Cloud Computing?
Cloud Computing refers to manipulating, configuring, and accessing the hardware and software
resources remotely. It offers online data storage, infrastructure, and application.
Cloud computing offers platform independence, as the software is not required to be installed locally on the PC. Hence, Cloud Computing is making our business applications mobile and collaborative.
Basic Concepts
There are certain services and models working behind the scenes making cloud computing feasible and accessible to end users. The following are the working models for cloud computing:
Deployment Models
Service Models
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is hosted and who can access it. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.
1. Public Cloud
The public cloud allows systems and services to be easily accessible to the general public. Public cloud
may be less secure because of its openness.
2. Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is more secure because of its private nature.
3. Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations.
4. Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are performed
using private cloud while the non-critical activities are performed using public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models, which are:
Infrastructure-as–a-Service (IaaS)
Platform-as-a-Service (PaaS)
Software-as-a-Service (SaaS)
Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service models inherits the security and management mechanisms of the underlying model, as shown in the following diagram:
Infrastructure-as-a-Service (IaaS)
IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual storage,
etc.
It is the most flexible type of cloud service which lets you rent the hardware and contains the basic
building blocks for cloud and IT.
It gives complete control over the hardware that runs your application (servers, VMs, storage,
networks & operating systems).
It’s an instant computing infrastructure, provisioned and managed over the internet.
IaaS gives you the highest level of flexibility and management control over your IT resources.
It most closely resembles the existing IT resources with which many IT departments and developers are familiar.
Examples of IaaS: virtual machines such as AWS EC2, storage, and networking services.
Platform-as-a-Service (PaaS)
PaaS provides the runtime environment for applications, development and deployment tools, etc.
PaaS is a cloud service model that gives a ready-to-use development environment where developers
can write and execute high-quality code to make customized applications.
It helps to create an application quickly without managing the underlying infrastructure. For example,
when deploying a web application using PaaS, you don’t have to install an operating system, web
server, or even system updates. However, you can scale and add new features to your services.
This cloud service model makes the process of developing and deploying applications simpler; it is typically more expensive than IaaS but less expensive than SaaS.
Examples of PaaS: Elastic Beanstalk or Lambda from AWS, WebApps, Functions or Azure SQL DB
from Azure, Cloud SQL DB from Google Cloud, or Oracle Database Cloud Service from Oracle
Cloud.
Software-as-a-Service (SaaS)
The SaaS model delivers software applications as a service to end users.
SaaS provides you with a complete product that is run and managed by the service provider.
The software is hosted online and made available to customers on a subscription basis or for purchase
in this cloud service model.
With SaaS, you don't need to worry about how the service is maintained or how the underlying infrastructure is managed.
Examples of SaaS: Microsoft Office 365, Oracle ERP/HCM Cloud, SalesForce, Gmail, or Dropbox.
History of Cloud Computing
The concept of Cloud Computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services. The following diagram explains the evolution of cloud computing:
Benefits :
Cloud Computing has numerous advantages. Some of them are listed below -
One can access applications as utilities, over the Internet.
One can manipulate and configure the applications online at any time.
It does not require installing software to access or manipulate cloud applications.
Cloud Computing offers online development and deployment tools, programming runtime
environment through PaaS model.
Cloud resources are available over the network in a manner that provides platform-independent access to any type of client.
Cloud Computing offers on-demand self-service. The resources can be used without interaction
with cloud service provider.
Cloud Computing is highly cost effective because it operates at high efficiency with optimum utilization. It just requires an Internet connection.
Cloud Computing offers load balancing that makes it more reliable.
Risks related to Cloud Computing
Although Cloud Computing is a promising innovation with various benefits in the world of computing, it comes with risks. Some of them are discussed below:
1. Security and Privacy
It is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
2. Lock In
It is very difficult for the customers to switch from one Cloud Service Provider (CSP) to another. It
results in dependency on a particular CSP for service.
3. Isolation Failure
This risk involves the failure of isolation mechanism that separates storage, memory, and routing between
the different tenants.
4. Management Interface Compromise
In case of public cloud provider, the customer management interfaces are accessible through the Internet.
5. Insecure or Incomplete Data Deletion
It is possible that data requested for deletion may not get deleted. This happens for either of the following reasons:
Extra copies of data are stored but are not available at the time of deletion.
The disk that stores data of multiple tenants is destroyed.
Characteristics of Cloud Computing
1. On-Demand Self-Service: Cloud Computing allows users to use web services and resources on demand. One can log on to a website at any time and use them.
2. Broad Network Access: Since cloud computing is completely web based, it can be accessed from anywhere and at any time.
3. Resource Pooling: Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, database, and basic infrastructure.
4. Rapid Elasticity: It is very easy to scale resources vertically or horizontally at any time. Scaling of resources means the ability of resources to deal with increasing or decreasing demand. The resources being used by customers at any given point in time are automatically monitored.
5. Measured Service: In this service, the cloud provider controls and monitors all aspects of the cloud service. Resource optimization, billing, capacity planning, etc. depend on it.
Day 3 Morning Session
Cloud Architecture
3.1 Introduction:
The cloud architecture is divided into two parts, i.e.,
1. Frontend
2. Backend
The below figure represents an internal architectural view of cloud computing.
Fig. Architecture of Cloud Computing
Architecture of cloud computing is the combination of both SOA (Service Oriented Architecture) and
EDA (Event Driven Architecture). Client infrastructure, application, service, runtime cloud, storage,
infrastructure, management and security all these are the components of cloud computing architecture.
1. Frontend :
The frontend of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications that the client uses to access the cloud computing services/resources. For example, using a web browser to access the cloud platform.
Client Infrastructure – Client Infrastructure is a part of the frontend component. It contains the applications and user interfaces which are required to access the cloud platform.
In other words, it provides a GUI (Graphical User Interface) to interact with the cloud.
2. Backend :
Backend refers to the cloud itself which is used by the service provider. It contains the resources as well as
manages the resources and provides security mechanisms. Along with this, it includes huge storage, virtual
applications, virtual machines, traffic control mechanisms, deployment models, etc.
1. Application –
Application in the backend refers to the software or platform that the client accesses. That is, it provides the service in the backend as per the client's requirements.
2. Service –
Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS, and IaaS. It also manages which type of service the user accesses.
3. Runtime Cloud –
Runtime cloud in the backend provides the execution and runtime platform/environment to the virtual machines.
4. Storage –
Storage in backend provides flexible and scalable storage service and management of stored data.
5. Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components of the cloud, including servers, storage, network devices, virtualization software, etc.
6. Management –
Management in backend refers to management of backend components like application, service, runtime
cloud, storage, infrastructure, and other security mechanisms etc.
7. Security –
Security in the backend refers to the implementation of different security mechanisms to secure cloud resources, systems, files, and infrastructure for end users.
8. Internet –
Internet connection acts as the medium or a bridge between frontend and backend and establishes the
interaction and communication between frontend and backend.
9. Database – Database in the backend refers to providing databases for storing structured data, such as SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database, and Google Cloud SQL.
10. Networking – Networking in the backend refers to services that provide networking infrastructure for applications in the cloud, such as load balancing, DNS, and virtual private networks.
11. Analytics – Analytics in the backend refers to services that provide analytics capabilities for data in the cloud, such as warehousing, business intelligence, and machine learning.
Benefits of Cloud Computing Architecture :
Makes overall cloud computing system simpler.
Improves data processing requirements.
Helps in providing high security.
Makes it more modularized.
Results in better disaster recovery.
Gives good user accessibility.
Reduces IT operating costs.
Provides high level reliability.
Scalability.
3.2 Stateful vs Stateless Service
What is a Stateful Application?
Stateful applications save info about the user's "state" by keeping track of what they do. This information, like login details and user choices, shapes how the user experiences the site. Here are some popular examples of stateful applications:
Online shopping carts that keep track of what you put in them
Banking systems that keep track of account information
Social media sites that display information based on user’s preferences
Now that you know what stateful apps are, let’s talk about their pros and cons.
Pros:
They use the information they’ve saved to give customers what they want, which increases customer
engagement.
Remember what the user was doing before and pick up where they left off, retaining useful
information and improving the customer journey.
Secure apps save user login and session data, which protects important information.
Cons:
Developers have to work harder to keep state info, which makes it challenging to develop and
maintain stateful apps.
Creating a stateful app is not an easy task. It’s hard to keep track of state info across instances.
Stateful apps use more resources, especially memory and storage, which slows them down.
In the event of a loss, it is hard to get the application back to the way it was because you also have to
recover the lost data.
What is a Stateless Application?
Stateless software applications are those applications that do not save information about previous
interactions, user sessions, or events. These applications do not preserve context or state between requests in
a stateless design.
Some examples of stateless applications are:
HTTP: HTTP is a stateless internet data transfer protocol. Each client-server request and response is
treated independently. Cookies and session management are used to keep user data across requests.
RESTful APIs: Networked applications often use Representational State Transfer (REST)
architecture. RESTful APIs are stateless; thus, each request has all the information the server needs
to process it without relying on prior requests.
Stateless microservices: Stateless microservices perform specified activities without storing state
information. Since each instance processes requests independently, adding instances can scale these
services horizontally.
Let’s discuss the pros and cons of using stateless applications.
Pros:
Stateless apps scale better because each request is processed separately. Adding more application
instances without state consistency concerns improves load balancing and horizontal scaling.
Stateless applications require less state management logic, making them easier to design, create, and
maintain.
Stateless applications don’t store state across requests, thus one failure doesn’t affect the others.
System fault tolerance improves.
Cons:
Stateless apps must send all data with each request and response, which may increase overhead.
Stateless applications may have lower performance and higher latency because they have to send all the data with each request.
Chat, gaming, and real-time collaboration apps require state management. These cases may not suit
stateless architecture.
Stateful activities can complicate stateless apps. Developers must implement mechanisms such as cookies, tokens, or databases to manage state across multiple requests.
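To contrast the two styles above in code, here is a minimal Python sketch: the stateful handler remembers the cart between calls, while the stateless handler requires each request to carry everything it needs. Both handlers are hypothetical.

# Stateful: the server-side object remembers the cart between requests.
class StatefulCart:
    def __init__(self):
        self.items = []  # state lives on the server
    def add(self, item):
        self.items.append(item)
        return self.items

# Stateless: every request carries the full cart; nothing is kept server-side.
def stateless_add(request: dict) -> dict:
    return {"cart": request["cart"] + [request["item"]]}

cart = StatefulCart()
cart.add("book")
print(cart.add("pen"))                                   # ['book', 'pen']
print(stateless_add({"cart": ["book"], "item": "pen"}))  # {'cart': ['book', 'pen']}

Because stateless_add depends only on its input, any server instance can handle any request, which is what makes horizontal scaling straightforward.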
3.3 Scaling up vs Scaling out
Scaling Up (Vertical Scaling)
Scaling up (or vertical scaling) is adding more resources, like CPU, memory, and disk, to increase compute power and storage capacity. This term applies to traditional applications deployed on physical servers or virtual machines as well.
The diagram above shows an application pod that begins with a small configuration with 1 CPU, 2 GB of
memory, and 100 GB disk space and scales vertically to large configurations with 4 CPU, 8 GB of memory,
and 500 GB disk space. Now with more compute resources and storage space, this application can process
and serve more requests from clients.
Scaling up seems to be a good choice if your application only needs to scale to a reasonable size. There are
some advantages and disadvantages of scaling up:
Advantages
It is simple and straightforward. For the applications with more traditional and monolithic
architecture, it is much simpler to just add more compute resources to scale.
You can take advantage of powerful server hardware. Today’s servers are more powerful than
ever, with more efficient CPUs, larger DIMM capacities, faster disks, and high-speed networking.
By taking advantage of these compute resources, you can scale up to very large application pods.
Disadvantages
Scaling up has limits. Even with today’s powerful servers, as you continue to add compute
resources to your application pod, you will still hit the physical hardware limitations sooner or later.
Bottlenecks develop in compute resources. As you add compute resources to a physical server, it is
difficult to increase and balance the performance linearly for all the components, and you will most
likely hit a bottleneck somewhere. For example, initially your server has a memory bottleneck with
100% usage of memory and 70% usage of CPU. After doubling the number of DIMMs, now you
have 100% of CPU usage vs 80% of memory usage.
It may cost more to host applications. Usually the larger servers with high compute power cost
more. If your application requires high compute resources, using these high-cost larger servers may
be the only choice.
With physical hardware limitations, scaling up vertically is a rather short-term solution if your application needs to continue growing.
Scaling Out (Horizontal Scaling)
Scaling out (or horizontal scaling) addresses some of the limitations of the scale up method. With
horizontal scaling, the compute resource limitations from physical hardware are no longer the issue. In fact,
you can use any reasonable size of server as long as the server has enough resources to run the pods. The
diagram below shows an example of an application pod with three replicas scaling out to five replicas, and
this is how Kubernetes normally manages application workloads.
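As a hedged sketch of that replica-based scale-out, the snippet below uses the official Kubernetes Python client to raise a Deployment's replica count from three to five; the deployment name and namespace are assumptions, and it presumes a working kubeconfig.

from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # assumes a configured kubeconfig (e.g., ~/.kube/config)
apps = client.AppsV1Api()

# Scale out: ask Kubernetes to run 5 replicas of a hypothetical deployment.
apps.patch_namespaced_deployment_scale(
    name="myapp",        # assumed Deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)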
Scaling out also has its advantages and disadvantages:
Advantages
It delivers long-term scalability. The incremental nature of scaling out allows you to scale your
application for expected and long-term growth.
Scaling back is easy. Your application can easily scale back by reducing the number of pods when
the load is low. This frees up compute resources for other applications.
You can utilize commodity servers. Normally, you don’t need large servers to run containerized
applications. Since application pods scale horizontally, servers can be added as needed.
Disadvantages
It may require re-architecting. You will need to re-architect your application if it uses a monolithic architecture.
3.4 Load balancing
Load balancing is the process of distributing incoming requests or workloads across a group of servers,
also known as a server farm or server pool. The goal of load balancing is to optimize resource utilization,
minimize response time, avoid overload, and increase fault tolerance. Load balancing can be implemented at
different levels of the system architecture, such as the network layer, the application layer, or the database
layer.
Cloud-based load balancing services are a type of load balancing solution that are offered by cloud
providers, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure.
Cloud-based load balancing services are managed and operated by the cloud provider, and they provide
various features and functionalities, such as scalability, security, health checks, monitoring, and integration
with other cloud services. Cloud-based load balancing services can be either global or regional, depending
on the scope and location of the server pool.
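A minimal sketch of the distribution idea at the heart of load balancing: round-robin rotation over a server pool, using only the Python standard library. The server names are placeholders.

import itertools

server_pool = ["server-a", "server-b", "server-c"]  # placeholder backends
round_robin = itertools.cycle(server_pool)

def route(request_id: int) -> str:
    backend = next(round_robin)  # each request goes to the next server in turn
    return f"request {request_id} -> {backend}"

for i in range(5):
    print(route(i))
# request 0 -> server-a, request 1 -> server-b, request 2 -> server-c,
# request 3 -> server-a, request 4 -> server-b

Real load balancers add health checks and smarter policies (least connections, weighted routing), but the rotation above is the core of the round-robin policy.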
What are the benefits of using cloud-based load balancing services?
1. One of the main benefits of using cloud-based load balancing services is that they are easy to set up and
use. You do not need to install, configure, or maintain any hardware or software for load balancing, as the
cloud service provider takes care of all the details.
2. You can also leverage the cloud provider's expertise and experience in load balancing, and benefit from
their continuous updates and improvements.
3. Another benefit of using cloud-based load balancing services is that they are scalable and flexible. You
can easily add or remove servers from the server pool, and adjust the load balancing parameters and rules
according to your needs and preferences.
4. You can also take advantage of the cloud provider's global network and infrastructure, and distribute your
traffic across multiple regions and zones.
What are the drawbacks of using cloud-based load balancing services?
1. One of the potential drawbacks of using cloud-based load balancing services is that they are costly. You
have to pay for the load balancing service itself.
2. Another drawback of using cloud-based load balancing services is that they are less customizable and
controllable.
3. You also have to deal with the cloud provider's limitations and constraints, such as latency, availability,
and compatibility.
3.5 Explicating Fault Tolerance in Cloud Computing
Fault tolerance in cloud computing is about designing a blueprint for continuing the ongoing work
whenever a few parts are down or unavailable. This helps the enterprises to evaluate their infrastructure
needs and requirements, and provide services when the associated devices are unavailable due to some
cause. It doesn’t mean that the alternate arrangement can provide 100% of the full service, but this concept
keeps the system in running mode at a useable, and most importantly, at a reasonable level. This is important
if the enterprises are to keep growing in a continuous mode and increase their productivity levels.
Main Concepts behind Fault Tolerance in Cloud Computing System
Replication: The fault-tolerant system works on the concept of running several replicas of each and every service. Thus, if one part of the system goes wrong, it has other instances that can be placed instead of it to keep it running. Take, for example, a database cluster that has 3 servers with the same information on each of them. All the actions like data insertion, updates, and deletion get written on each of them. The redundant servers remain in inactive mode until the fault tolerance system demands their availability.
Redundancy: When any system part fails or moves towards a down state, it is important to have backup-type systems. For example, a website program that has MS SQL as its database may fail midway due to some hardware fault. Under the redundancy concept, a new database then has to be made available while the original is offline. The server operates with the emergency database, which comprises several redundant services within.
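A minimal Python sketch of the replication-with-failover idea described above: every write is applied to all replicas, and reads fall back to the next replica when one is down. The in-memory Replica class is a stand-in for real database servers.

class Replica:
    # Stand-in for one database server holding a full copy of the data.
    def __init__(self, name):
        self.name, self.data, self.up = name, {}, True
    def write(self, key, value):
        self.data[key] = value

replicas = [Replica("db1"), Replica("db2"), Replica("db3")]

def write(key, value):
    for replica in replicas:  # every insert/update/delete goes to every copy
        if replica.up:
            replica.write(key, value)

def read(key):
    for replica in replicas:  # fail over to the next copy if one is down
        if replica.up:
            return replica.data.get(key)
    raise RuntimeError("no replica available")

write("order:1", "paid")
replicas[0].up = False  # simulate a hardware fault on the first server
print(read("order:1"))  # still served from a surviving replica: paid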
3.6 Loose coupling
Loosely coupled architecture is an architectural style where the individual components of an application
are built independently from one another (the opposite paradigm of tightly coupled architectures). Each
component, sometimes referred to as a microservice, is built to perform a specific function in a way that can
be used by any number of other services. This pattern is generally slower to implement than tightly coupled
architecture but has a number of benefits, particularly as applications scale.
Loosely coupled applications allow teams to develop features, deploy, and scale independently, which
allows organizations to iterate quickly on individual components. Application development is faster and
teams can be structured around their competency, focusing on their specific application.
3.7 Monolithic and Microservices Architectures
What is a monolithic architecture?
A monolithic architecture is a traditional model of a software program, which is built as a unified unit that
is self-contained and independent from other applications. The word “monolith” is often attributed to
something large and glacial, which isn’t far from the truth of a monolith architecture for software design. A
monolithic architecture is a singular, large computing network with one code base that couples all of the
business concerns together. To make a change to this sort of application requires updating the entire stack
by accessing the code base and building and deploying an updated version of the service-side interface. This
makes updates restrictive and time-consuming.
Monoliths can be convenient early on in a project's life for ease of code management, cognitive overhead,
and deployment. This allows everything in the monolith to be released at once.
Advantages of a monolithic architecture
The advantages of a monolithic architecture include:
1. Easy deployment – One executable file or directory makes deployment easier.
2. Development – When an application is built with one code base, it is easier to develop.
3. Performance – In a centralized code base and repository, one API can often perform the same
function that numerous APIs perform with microservices.
4. Simplified testing – Since a monolithic application is a single, centralized unit, end-to-end
testing can be performed faster than with a distributed application.
5. Easy debugging – With all code located in one place, it’s easier to follow a request and find
an issue.
Disadvantages of a monolithic architecture
The disadvantages of a monolith include:
1. Slower development speed – A large, monolithic application makes development more complex and slower.
2. Scalability – You can't scale individual components.
3. Reliability – If there's an error in any module, it could affect the entire application's availability.
4. Barrier to technology adoption – Any change in the framework or language affects the entire application, making changes often expensive and time-consuming.
5. Lack of flexibility – A monolith is constrained by the technologies already used in the monolith.
6. Deployment – A small change to a monolithic application requires the redeployment of the entire monolith.
What are microservices?
A microservices architecture, also simply known as microservices, is an architectural method that relies on
a series of independently deployable services. These services have their own business logic and database
with a specific goal. Updating, testing, deployment, and scaling occur within each service. Microservices
decouple major business, domain-specific concerns into separate, independent code bases. Microservices
don’t reduce complexity, but they make any complexity visible and more manageable by separating tasks
into smaller processes that function independently of each other and contribute to the overall whole.
Adopting microservices often goes hand in hand with DevOps, since they are the basis for continuous
delivery practices that allow teams to adapt quickly to user requirements.
Advantages of microservices
1. Agility – Promote agile ways of working with small teams that deploy frequently.
2. Flexible scaling – If a microservice reaches its load capacity, new instances of that service can rapidly be deployed to the accompanying cluster to help relieve pressure. We are now multi-tenant and stateless, with customers spread across multiple instances, and we can now support much larger instance sizes.
3. Continuous deployment – We now have frequent and faster release cycles. Before, we would push out updates once a week; now we can do so about two to three times a day.
4. Highly maintainable and testable – Teams can experiment with new features and roll back if
something doesn’t work. This makes it easier to update code and accelerates time-to-market for new
features. Plus, it is easy to isolate and fix faults and bugs in individual services.
5. Independently deployable – Since microservices are individual units they allow for fast and easy
independent deployment of individual features.
6. Technology flexibility – Microservice architectures allow teams the freedom to select the tools they
desire.
7. High reliability – You can deploy changes for a specific service, without the threat of bringing down the
entire application.
8. Happier teams – The Atlassian teams who work with microservices are a lot happier, since they are
more autonomous and can build and deploy themselves without waiting weeks for a pull request to be
approved.
Disadvantages of microservices
1. Development sprawl – Microservices add more complexity compared to a monolith architecture, since
there are more services in more places created by multiple teams. If development sprawl isn’t properly
managed, it results in slower development speed and poor operational performance.
2. Exponential infrastructure costs – Each new microservice can have its own cost for test suite,
deployment playbooks, hosting infrastructure, monitoring tools, and more.
3. Added organizational overhead – Teams need to add another level of communication and
collaboration to coordinate updates and interfaces.
4. Debugging challenges – Each microservice has its own set of logs, which makes debugging more
complicated. Plus, a single business process can run across multiple machines, further complicating
debugging.
5. Lack of standardization – Without a common platform, there can be a proliferation of languages,
logging standards, and monitoring.
6. Lack of clear ownership – As more services are introduced, so are the number of teams running those
services. Over time it becomes difficult to know the available services a team can leverage and who to
contact for support.
3.8 Event-driven Architecture
Event-driven architecture (EDA) is a software architectural pattern that promotes the production,
detection, consumption, and reaction to events. Event-driven systems typically process and respond to
events in near-real-time. A significant benefit of this approach is that it enables applications to be more
loosely coupled and thus easier to develop, modify, replace, and scale. In an event-driven architecture,
services communicate with each other by producing and responding to events. This can be contrasted with a
request/response model in which services communicate with each other by invoking requests and waiting for
responses. Event-driven architectures are often used in cloud computing, making it possible to build
microservices that are loosely coupled and can be deployed and scaled independently.
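A minimal sketch of the produce/consume pattern: services register handlers for event types and react when events are published, without the producer ever waiting on them. The event bus and event names are illustrative.

from collections import defaultdict

handlers = defaultdict(list)  # event type -> list of subscriber callbacks

def subscribe(event_type, handler):
    handlers[event_type].append(handler)

def publish(event_type, payload):
    for handler in handlers[event_type]:  # consumers react independently
        handler(payload)

# Two loosely coupled services react to the same event, unaware of each other.
subscribe("order_placed", lambda e: print("billing charges order", e["order_id"]))
subscribe("order_placed", lambda e: print("shipping schedules order", e["order_id"]))

publish("order_placed", {"order_id": 42})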
Cloud Computing providers use event-driven architecture, which relies on event-driven software to
respond to events within their own cloud. This architecture is often used in applications that require real-
time processing, such as stock trading or event notifications. In these applications, event-driven software is
essential for providing timely responses to events. Cloud Computing uses event-driven architecture because
it allows the cloud provider to respond quickly to changes in demand and allows them to integrate cloud
services to work together efficiently. This flexibility is one of the advantages of Cloud Computing.
3.9 Popular Cloud Service Providers and Their Features (AWS, Azure, GCP)
Cloud Service Provider Companies
Cloud service providers (CSPs) offer various services such as Software as a Service, Platform as a
Service, Infrastructure as a Service, network services, business applications, mobile applications,
and infrastructure in the cloud. The cloud service providers host these services in data centers, and users
can access these services over an Internet connection.
The leading cloud service provider companies include the following:
1. Amazon Web Services (AWS)
AWS (Amazon Web Services) is a secure cloud service platform provided by Amazon. It offers services
such as computing power, database storage, content delivery, the Relational Database Service (RDS),
Simple Email Service (SES), Simple Queue Service (SQS), and other functionality that helps organizations grow.
Features of AWS
AWS provides various powerful features for building scalable, cost-effective enterprise applications.
Some important features of AWS are given below:
o AWS is scalable because it has an ability to scale the computing resources up or down according to
the organization's demand.
o AWS is cost-effective as it works on a pay-as-you-go pricing model.
o It provides various flexible storage options.
o It offers various security services such as infrastructure security, data encryption, monitoring &
logging, identity & access control, penetration testing support, and DDoS protection.
o It can efficiently manage and secure Windows workloads.
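As a concrete illustration of working with AWS programmatically, here is a minimal sketch using the boto3 Python SDK to list EC2 instances. It assumes boto3 is installed and AWS credentials are already configured; the region name shown is an arbitrary choice.

    import boto3

    # Region name is an illustrative assumption; choose one near your users.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List EC2 instances with their IDs, types, and current states.
    response = ec2.describe_instances()
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"],
                  instance["InstanceType"],
                  instance["State"]["Name"])

Because AWS bills on a pay-as-you-go basis, scripts like this are commonly used to audit which resources are actually running.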
2. Microsoft Azure
Microsoft Azure was formerly known as Windows Azure. It supports various operating systems, databases,
programming languages, and frameworks that allow IT professionals to easily build, deploy, and manage
applications through a worldwide network. It also allows users to create different groups for related utilities.
Features of Microsoft Azure
o Microsoft Azure provides scalable, flexible, and cost-effective services.
o It allows developers to quickly manage applications and websites.
o It manages each resource individually.
o Its IaaS infrastructure allows us to launch general-purpose virtual machines on different platforms
such as Windows and Linux.
o It offers a Content Delivery Network (CDN) for delivering images, videos, audio, and
applications.
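For flavor, the sketch below uses the Azure SDK for Python to list the virtual machines in a subscription. The subscription ID is a placeholder, and it assumes the azure-identity and azure-mgmt-compute packages are installed and that you are signed in (for example via the Azure CLI) so DefaultAzureCredential can locate credentials.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Credential is resolved from the environment (CLI login, env vars, etc.).
    credential = DefaultAzureCredential()

    # "<subscription-id>" is a placeholder; supply your own subscription ID.
    compute = ComputeManagementClient(credential, "<subscription-id>")

    # List every VM in the subscription along with its region.
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.location)
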
3. Google Cloud Platform
Google Cloud Platform (GCP) is a product of Google. It consists of a set of physical assets, such as computers
and hard disk drives, and virtual resources, such as virtual machines. It also helps organizations to simplify the migration process.
Features of Google Cloud
o Google Cloud includes various big data services such as Google BigQuery, Google Cloud Dataproc,
Google Cloud Datalab, and Google Cloud Pub/Sub.
o It provides various services related to networking, including Google Virtual Private Cloud (VPC),
Content Delivery Network, Google Cloud Load Balancing, Google Cloud Interconnect, and Google
Cloud DNS.
o It offers various scalable and high-performance compute and storage services.
o GCP provides various serverless services such as Messaging, Data Warehouse, Database, Compute,
Storage, Data Processing, and Machine Learning (ML).
o It provides a free cloud shell environment with Boost Mode.
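To give a sense of GCP's big data services, here is a minimal sketch that runs a query against a public BigQuery dataset using the google-cloud-bigquery Python library. It assumes the library is installed and Application Default Credentials are configured; the dataset shown is one of Google's public sample datasets.

    from google.cloud import bigquery

    # Uses Application Default Credentials configured in the environment.
    client = bigquery.Client()

    # Query a public sample dataset: the five most common US baby names.
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(query).result():
        print(row.name, row.total)
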
3.10 Open-Source cloud computing platforms
An open-source cloud platform is any cloud-based solution, service, or model developed using
open-source technologies and licensing agreements. Open-source cloud providers can cover
any private, public, or hybrid cloud model, as well as the full array of SaaS, IaaS, PaaS, and other
service solutions.
The following are some of the key open source cloud platforms.
1. OpenStack: This is open source software for creating private and public clouds, built and
disseminated by a large and democratic community of developers, in collaboration with users.
OpenStack is mostly deployed as Infrastructure-as-a-Service (IaaS), whereby virtual servers and
other resources are made available to customers. The software platform consists of interrelated
components that control diverse, multi-vendor hardware pools of processing, storage and
networking resources throughout a data center.
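As an illustration of driving OpenStack programmatically, the sketch below lists compute instances through the openstacksdk Python library. The cloud name "mycloud" is an assumed entry in a local clouds.yaml file, and the sketch presumes an OpenStack deployment you can authenticate against.

    import openstack

    # "mycloud" is a placeholder clouds.yaml entry holding auth details.
    conn = openstack.connect(cloud="mycloud")

    # List virtual servers managed by the OpenStack compute service (Nova).
    for server in conn.compute.servers():
        print(server.name, server.status)
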
2. Cloud Foundry: This is an open Platform-as-a-Service (PaaS), which provides a choice of clouds,
developer frameworks and application services. Cloud Foundry makes it faster and easier to build,
test, deploy and scale applications. It has different distributions; a popular one is Pivotal Cloud
Foundry, from Pivotal Software (now part of VMware).
3. OpenShift: This is Red Hat’s cloud computing PaaS offering. It is an application platform in the
cloud, where app developers and teams can build, test, deploy and run their applications.
4. Cloudify: Cloudify was developed and designed on the principles of openness to power the IT
transformation revolution. It enables organizations to design, build and deliver various business
applications and network services. At the time of writing, the latest version of Cloudify was 4.3,
which incorporates enhanced features like advanced security, control and true self-service.
Cloudify 4.3 introduced a new approach to container orchestration with Kubernetes. (Container
orchestration automates the deployment, management, scaling, and networking of containers.)
5. WSO2 (Apache Stratos): This is a Platform-as-a-Service (PaaS) framework originally developed by
WSO2 and later donated to the Apache community as Apache Stratos. It provides elastic scalability
for any type of service, using the underlying infrastructure cloud. It has a microservices-based
architecture that supports various services, and it fosters agility and flexibility with open source.
6. CloudStack: This is an open source software platform designed to manage cloud computing
environments. It is an Infrastructure-as-a-Service (IaaS) cloud computing platform. Various service
providers use CloudStack to offer public, private and hybrid cloud services.
Day 4 : Morning Session
AWS Cloud Overview
4.1 Amazon Web Service (AWS)
AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by
Amazon that includes a mixture of infrastructure-as-a-service (IaaS ), platform-as-a-service (PaaS) and
packaged-software-as-a-service (SaaS) offerings. AWS services can offer an organization tools such
as compute power, database storage and content delivery services.
Amazon launched its first web services in 2002, built from the internal infrastructure that
Amazon.com had developed to handle its online retail operations. In 2006, it began offering its defining
IaaS services. AWS was one of the first companies to introduce a pay-as-you-go cloud computing
model that scales to provide users with compute, storage or throughput as needed.
AWS offers many different tools and solutions for enterprises and software developers that can be
used in data centers in up to 190 countries. Groups such as government agencies, educational
institutions, non-profits and private organizations can use AWS services.
How AWS works
AWS is separated into different services; each can be configured in different ways based on the
user's needs. Users can see configuration options and individual server maps for an AWS service.
More than 200 services comprise the AWS portfolio, including those for compute, databases,
infrastructure management, application development and security. These services, by category,
include the following:
compute
storage
databases
data management
migration
hybrid cloud
networking
development tools
management
monitoring
security
governance
big data management
analytics
artificial intelligence (AI)
mobile development
messaging and notifications
4.2 Regions and AZ
What are AWS Regions?
AWS Regions are separate geographic areas that AWS uses to house its infrastructure. These are
distributed around the world so that customers can choose a region closest to them in order to host
their cloud infrastructure there. The closer a region is to your end users, the lower the network
latency they experience, so you generally want your workloads near the data centers that serve
them.
What are AWS Availability Zones?
An AWS Availability Zone (AZ) is the logical building block that makes up an AWS Region. AZs are
isolated locations (one or more data centers) within a region; at the time of writing there were 69 of
them. Each region has multiple AZs, and when you design your infrastructure to keep backups of data
in other AZs, you are building a resilient model, which is a core concept of cloud computing.
See the below image from AWS documentation for a visual representation of Availability Zones within
Regions.
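To see the AZs available to an account, you can ask the EC2 API directly. Here is a minimal sketch using boto3, assuming credentials are configured; the region name is an arbitrary example, and AZs are always listed per region.

    import boto3

    # Region name is an illustrative choice; each region has its own AZs.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Each AZ is an isolated location within the chosen region.
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], az["State"])
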
4.3 Shared Responsibility Model and AWS Acceptable Use Policy
The AWS shared responsibility model defines what you (as an AWS account holder/user) and AWS
are responsible for when it comes to security and compliance.
Security and compliance is a shared responsibility between AWS and the customer. This shared
model can help relieve the customer's operational burden, as AWS operates, manages, and controls the
components from the host operating system and virtualization layer down to the physical security of
the facilities in which the service operates.
The customer assumes responsibility for and management of the guest operating system (including
updates and security patches) and other associated application software, as well as the configuration
of the AWS-provided security group firewall.
AWS is responsible for “Security of the Cloud”.
AWS is responsible for protecting the infrastructure that runs all the services offered in the
AWS Cloud.
This infrastructure is composed of the hardware, software, networking, and facilities that run
AWS Cloud services.
Customers are responsible for “Security in the Cloud”.
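As an example of a task that falls on the customer's side of the line, the sketch below opens inbound HTTPS on a security group using boto3. The security group ID is a placeholder, and allowing 0.0.0.0/0 is shown only for illustration; configuring rules like this (and patching guest operating systems) is "security in the cloud" and is up to you, not AWS.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder security group ID; the customer owns this configuration.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0",
                          "Description": "HTTPS from anywhere (demo only)"}],
        }],
    )
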
AWS Acceptable Use Policy
This Acceptable Use Policy (“Policy”) governs your use of the services offered by Amazon Web
Services, Inc. and its affiliates (“Services”) and our website(s)
including http://aws.amazon.com (“AWS Site”). We may modify this Policy by posting a revised
version on the AWS Site. By using the Services or accessing the AWS Site, you agree to the latest
version of this Policy.
You may not use, or facilitate or allow others to use, the Services or the AWS Site:
for any illegal or fraudulent activity;
to violate the rights of others;
to threaten, incite, promote, or actively encourage violence, terrorism, or other serious harm;
for any content or activity that promotes child sexual exploitation or abuse;
to violate the security, integrity, or availability of any user, network, computer or communications
system, software application, or network or computing device;
to distribute, publish, send, or facilitate the sending of unsolicited mass email or other messages,
promotions, advertising, or solicitations (or “spam”).