AOS Unit-3
Coordinating dispatch
Dispatch is the term used for actually starting the execution of a job that has been
scheduled. There are two ways to achieve distributed job scheduling: centralized and
decentralized.
There are pros and cons to centralization. Centralization makes consistency easy to
determine, but it creates bottlenecks in processing and allows one machine to see
all information. Decentralization provides an automatic and natural load-balancing
of job dispatch, and it allows machines to reveal information on a `need to know'
basis.
With CFEngine, you promise to execute tasks, or keep promises, at distributed places and times:
You tell CFEngine what to do and how, with the details of a promise.
You tell CFEngine where and when promises should be kept, using classes.
Standard jobs run sporadically on demand.
Standard jobs run on a regular schedule.
This list carries over to workflow processes too. If one job needs to follow after
another (because it depends on it for something), we can ask whether this workflow is a
standard and regular occurrence, or a one-off phenomenon.
One-off workflows
Regular workflows
One-off workflows
# bundle header missing in the source
methods:
  Host2.Day24.January.Year2012.Hr16.Min50_55::
    # follow-up promise on Host2 (usebundle body elided in the source)
commands:
  Host1.Day24.January.Year2012.Hr16.Min45_50::
    "/usr/local/bin/my_job";
}
Host1 runs its task at 16:45, and Host2 executes its part in the workflow five
minutes later. The advantage of this approach is that no direct communication is
required between Host1 and Host2. The disadvantage is that you, as the
orchestrator, have to guess how long the jobs will take. Moreover, Host2 doesn't
know for certain whether Host1 succeeded in carrying out its job, so its own part
might be a fruitless act.
We can change this by signalling between the processes. Whether you consider
this an improvement depends on what you value most: avoidance of
communication or certainty of outcome. In this version, we increase the certainty
of control by asking the predecessor (upstream) host for confirmation that the job
was carried out successfully.
classes:
  Host2::
    # truncated in the source (promiser name assumed): Host2 imports Host1's
    # signal class with a prefix of its own choosing (see the fuller example below)
    "remote_signal" expression => remoteclassesmatching("did.*", "Host1", "no",
                                                        "hostX" # prefix
                                                       );
methods:
  Host2.hostX_did_my_job::
    # follow-up promise on Host2 (body elided in the source)
commands:
  Host1.Day24.January.Year2012.Hr16.Min45_50::
    "/usr/local/bin/my_job",
      # attributes truncated in the source; on success Host1 should define
      # the signal class "did_my_job", e.g. classes => if_ok("did_my_job");
In this example, the methods promise runs on Host2 and the commands promise
runs on Host1 as before. Now, Host1 sets a signal class ‘did_my_job’ when it
carries out the job, and Host2 collects it by contacting the cf-serverd on Host1.
Assuming that Host1 has agreed to let Host2 know this information, by granting
access to it, Host2 can inherit this class with a prefix of its own choosing. Thus it
transforms the class ‘did_my_job’ on Host1 into ‘hostX_did_my_job’ on Host2.
The advantage of this method is that the second job will only be started if the first
completed, and we don't have to know how long the job took. The disadvantage is
that we have to exchange some information over the network, which has a small
cost and requires some extra configuration on the server side to grant access to
this context information:
access:
  "did_my_job"
    resource_type => "context",
    admit => { "Host2" };   # assumed: the admit list is missing in the source
Regular workflows
To make a job happen at a specific time, we used a very specific time classifier,
‘Day24.January.Year2012.Hr16.Min45_50’. If we now want to make this
workflow into a regular occurrence, repeating at some interval, we have two
options:
We repeat this at the same time each week, day, hour, etc.
We don't care about the precise time, we only care about the interval
between executions.
So, to make a promise repeat, we simply have to be less specific about the time.
Let us make the promise on Host1 apply every day between 16:00:00 (4 pm) and
16:59:59, and add an ifelapsed lock saying that we do not want to consider
rechecking more often than once every 100 minutes (more than 1 hour). Now we
have a workflow process that starts at 16:00 hours each day and runs only once
each day.
classes:
  Host2::
    # function name truncated in the source (promiser name assumed):
    # import Host1's signal classes with the prefix "hostX"
    "remote_signal" expression => remoteclassesmatching(
                                      "did.*",   # classes to match on Host1
                                      "Host1",   # upstream server
                                      "no",      # unencrypted
                                      "hostX"    # prefix
                                  );
methods:
  Host2.hostX_did_my_job::
    # follow-up promise on Host2 (body elided in the source)
commands:
  Host1.Hr16::
    "/usr/local/bin/my_job",
      # attributes truncated in the source, e.g. action => if_elapsed("100")
      # for the once-per-100-minutes lock described above
body common control
# (body truncated in the source; it would normally declare bundlesequence and inputs)

########################################################

bundle common g
# (braces missing in the source)
vars:
  # shared definitions truncated in the source; a "signal" variable is
  # referenced below as $(g.signal)

########################################################

# client-side agent bundle (header missing in the source)
vars:
  "client" string => "downstream.example.org";

classes:
  client_primed::
    # function call truncated in the source (promiser name assumed): the
    # client imports the server's signal class, encrypted, with a prefix
    "remote_signal" expression => remoteclassesmatching(
                                      "$(g.signal)",   # class to match
                                      "$(server)",     # upstream server
                                      "yes",           # encrypted
                                      "hostX"          # prefix
                                  );

methods:
  client_primed::
    # promiser and usebundle attribute truncated in the source
    action => if_elapsed("5"),

  server_primed::
    # the predecessor's promise (elided in the source)

reports:
  !succeeded::
    # failure report (body elided in the source)

#######################################################

commands:
  # do whatever...
  "/bin/echo $(job)";

#########################################################
# Server config
#########################################################

access:
  "$(g.signal)"
    # attributes truncated in the source, e.g. resource_type => "context"
    # and an admit list granting access to the client
In the examples above, we only had two hosts cooperating on jobs. In general, it
is not a good idea to link together many different hosts unless there is a good
reason for doing so. In HPC or Grid environments, where distributed jobs are more
common and results are combined from many sub-tasks, one typically uses some
more specialized middleware to accomplish this kind of cooperation. Such
software makes compromises of its own, but is generally better suited to the
specialized task for which it was written than a tool like CFEngine (whose main
design criteria are to be secure and generic).
Nevertheless, there are some tricks left in CFEngine for distributed scheduling if
we want to trigger a number of follow-ups from a single job, or aggregate a
number of jobs to drive a single follow-up (see figure).
When aggregating jobs, we must combine their exit status using AND or OR. The
most common case is that we require all the prerequisites to be in place in order to
generate the final result, i.e. trigger the follow-up only if all of the prerequisites
succeeded.
vars:
  "n" slist => { "2", "3", "4" };

classes:
  # function call truncated in the source (promiser name assumed): import
  # the signal class from each of Host2, Host3 and Host4
  "aggregated_signal" expression => remoteclassesmatching(
                                        "did.*",     # classes to match
                                        "Host$(n)",  # upstream servers, iterated over $(n)
                                        "no",        # unencrypted
                                        "hostX"      # prefix
                                    ),
    # further attributes truncated in the source

methods:
  Host2.Host3.Host4.hostX_did_my_job::
    # follow-up promise (body elided in the source)

commands:
  Host1.Hr16::
    "/usr/local/bin/my_job",
      # attributes truncated in the source
This example shows an all-or-nothing result. The follow-up job will only be
executed if all three jobs finish within the same 5 minute time-frame. There is no
error handling or recovery except to schedule the whole thing again.
Triggering from one or more predecessors, i.e. combining with OR, looks similar;
we just have to change the class expression:
...
methods:
  (Host2|Host3|Host4).hostX_did_my_job::
...
classes:
  Host2::
    # function name truncated in the source (promiser name assumed)
    "remote_signal" expression => remoteclassesmatching(
                                      "did.*",   # classes to match on Host1
                                      "Host1",   # upstream server
                                      "no",      # unencrypted
                                      "hostX"    # prefix
                                  );
methods:
  Host2.hostX_did_my_job::
    # follow-up promise on Host2 (body elided in the source)
commands:
  Host1.Hr16::
    "/usr/local/bin/my_job",
      # attributes truncated in the source
Self-healing workflows
Beware, however: one-off jobs cannot be made convergent, because they only have
a single chance to succeed. It is a question of business process design whether you
design workflows to be sustainable and repeatable, or whether you trust the
outcome of a single-shot process. Using the persistent classes in CFEngine, together
with the if-elapsed locks, to send signals between hosts, it is simple to make
convergent, self-healing workflows automatically.
Long workflow chains
Long workflow chains are those which involve more than one trigger. These can
be created by repeating the pattern above several times. Note, however, that each
link in the chain introduces a new level of uncertainty and potential failure. In
general, we would not recommend creating workflows with long chains.
Distributed Shared Memory (DSM)
Distributed shared memory (DSM) is a mechanism that manages memory across
multiple nodes and makes inter-process communication transparent to end users:
applications behave as if they were running on physically shared memory. DSM
allows user processes to access shared data without explicit inter-process
communication. Every node has its own memory, provides memory read and write
services, and participates in consistency protocols. DSM implements the
shared-memory model in a distributed system even though there is no physical
shared memory: all the nodes share the virtual address space provided by the
shared-memory model, and data moves between the main memories of the
different nodes.
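As a rough illustration of how a DSM consistency protocol can work, the following toy Python sketch (all class and method names are invented for illustration; a real DSM operates at the page/virtual-memory level) simulates write-invalidate: each node keeps a local copy of a shared address, reads are served locally, and a write invalidates the other nodes' copies so that their next read fetches the fresh value.

    # Toy, single-process simulation of a DSM write-invalidate protocol.
    class Directory:
        """Central directory: tracks which nodes hold a copy of each address."""
        def __init__(self):
            self.memory = {}          # authoritative value per shared address
            self.copies = {}          # address -> set of nodes holding a copy

        def fetch(self, node, addr):
            self.copies.setdefault(addr, set()).add(node)
            return self.memory.get(addr)

        def write(self, node, addr, value):
            for other in self.copies.get(addr, set()) - {node}:
                other.invalidate(addr)          # write-invalidate the other copies
            self.copies[addr] = {node}
            self.memory[addr] = value

    class Node:
        def __init__(self, name, directory):
            self.name = name
            self.directory = directory
            self.cache = {}           # locally cached values

        def read(self, addr):
            if addr not in self.cache:                        # local miss
                self.cache[addr] = self.directory.fetch(self, addr)
            return self.cache[addr]                           # served locally

        def write(self, addr, value):
            self.directory.write(self, addr, value)           # invalidates others
            self.cache[addr] = value

        def invalidate(self, addr):
            self.cache.pop(addr, None)

    if __name__ == "__main__":
        directory = Directory()
        a, b = Node("A", directory), Node("B", directory)
        a.write("x", 1)
        print(b.read("x"))   # 1, fetched from the directory
        a.write("x", 2)      # B's copy is invalidated
        print(b.read("x"))   # 2, B re-fetches the fresh value

The point of the sketch is only the protocol shape: reads are cheap and local as long as a copy is valid, and consistency is maintained by invalidating stale copies on writes.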
On-Chip Memory DSM:
The CPU and memory are integrated on a single chip; this approach is expensive and complex.
Bus-Based Multiprocessors:
A set of parallel wires called a bus acts as the connection between the CPUs and memory.
Simultaneous access to the same memory by multiple CPUs is prevented using arbitration algorithms.
Cache memory is used to reduce bus/network traffic.
Ring-Based Multiprocessors:
There is no global centralized memory present in ring-based DSM.
All nodes are connected via a token-passing ring.
In ring-based DSM, a single address space is divided, and part of it forms the shared area.
Advantages of Distributed Shared Memory:
Simpler abstraction: The programmer need not be concerned with data movement;
since the address space is the same, it is easier to implement than RPC.
Easier portability: The access protocols used in DSM allow for a natural
transition from sequential to distributed systems. DSM programs are
portable as they use a common programming interface.
Locality of data: Data is moved in large blocks, so data near the memory
location currently being fetched, which may be needed in the future, is
fetched along with it.
On-demand data movement: The on-demand data movement provided by DSM
eliminates a separate data-exchange phase.
Larger memory space: DSM provides a large virtual memory space; the total
memory size is the sum of the memory sizes of all the nodes, so paging
activity is reduced.
Better performance: DSM improves performance and efficiency by
speeding up access to data.
Flexible communication environment: Nodes can join and leave the DSM
system without affecting the others, as the sender and receiver do not
need to coexist.
Simplified process migration: Because all nodes share the same address space,
a process can easily be moved to a different machine.
Apart from the above-mentioned advantages, DSM has further advantages such as:
Less expensive when compared to using a multiprocessor system.
No bottlenecks in data access.
Good scalability, i.e. it scales well with a large number of nodes.
Distributed File System (DFS)
A Distributed File System (DFS), as the name suggests, is a file system that is
distributed across multiple file servers or multiple locations. It allows programs to
access or store remote files just as they do local ones, allowing programmers
to access files from any network or computer.
The main purpose of the Distributed File System (DFS) is to allow users of
physically distributed systems to share their data and resources by using a
common file system. A collection of workstations and mainframes connected by
a Local Area Network (LAN) is a typical configuration for a Distributed File System. A
DFS is implemented as a part of the operating system. In DFS, a namespace is created,
and this process is transparent for the clients.
DFS has two components:
Location Transparency –
Location transparency is achieved through the namespace component.
Redundancy –
Redundancy is achieved through a file replication component.
In the case of failure and heavy load, these components together improve data
availability by allowing data stored in different locations to be logically
grouped under one folder, which is known as the “DFS root”.
It is not necessary to use both components of DFS together; it is possible to
use the namespace component without the file replication component, and it
is perfectly possible to use the file replication component between servers
without the namespace component.
File system replication:
Early iterations of DFS made use of Microsoft’s File Replication Service (FRS),
which allowed for straightforward file replication between servers. FRS recognises
new or updated files and distributes the most recent versions of the whole file to
all servers.
Windows Server 2003 R2 introduced “DFS Replication” (DFSR). It improves on
FRS by copying only the portions of files that have changed and by minimising
network traffic with data compression. Additionally, it provides users
with flexible configuration options to manage network traffic on a configurable
schedule.
Features of DFS:
Transparency:
Structure transparency –
The client does not need to know the number or locations of file
servers and storage devices. Multiple file servers should be
provided for performance, adaptability, and dependability.
Access transparency –
Both local and remote files should be accessible in the same
manner. The file system should automatically locate the
accessed file and deliver it to the client’s side.
Naming transparency –
The name of a file should give no hint of its location. Once a
name is given to a file, it should not change when the file is
transferred from one node to another.
Replication transparency –
If a file is replicated on multiple nodes, the copies of the file
and their locations should be hidden from the other nodes.
User mobility :
It will automatically bring the user’s home directory to the node where
the user logs in.
Performance :
Performance is based on the average amount of time needed to satisfy
client requests. This time covers the CPU time + time taken to access
secondary storage + network access time. It is advisable that the
performance of the Distributed File System be similar to that of a
centralized file system.
Simplicity and ease of use :
The user interface of the file system should be simple and the number of
commands should be small.
High availability :
A Distributed File System should be able to continue functioning in the face of
partial failures such as a link failure, a node failure, or a storage drive crash.
A highly available and adaptable distributed file system should have
multiple independent file servers controlling multiple independent
storage devices.
Scalability :
Since growing the network by adding new machines or joining two
networks together is routine, the distributed system will inevitably grow
over time. As a result, a good distributed file system should be built to
scale quickly as the number of nodes and users in the system grows.
Service should not be substantially disrupted as the number of nodes and
users grows.
High reliability :
The likelihood of data loss should be minimized as much as feasible in a
suitable distributed file system. That is, because of the system’s
unreliability, users should not feel forced to make backup copies of their
files. Rather, a file system should create backup copies of key files that
can be used if the originals are lost. Many file systems employ stable
storage as a high-reliability strategy.
Data integrity :
Multiple users frequently share a file system. The integrity of data saved
in a shared file must be guaranteed by the file system. That is, concurrent
access requests from many users who are competing for access to the
same file must be correctly synchronized using a concurrency control
method. Atomic transactions are a high-level concurrency management
mechanism for data integrity that is frequently offered to users by a file
system.
Security :
A distributed file system should be secure so that its users may trust that
their data will be kept private. To safeguard the information contained in
the file system from unwanted & unauthorized access, security
mechanisms must be implemented.
Heterogeneity :
Heterogeneity in distributed systems is unavoidable as a result of huge
scale. Users of heterogeneous distributed systems have the option of
using multiple computer platforms for different purposes.
History :
The server component of the Distributed File System was initially introduced as an
add-on feature. It was added to Windows NT 4.0 Server and was known as “DFS
4.1”. Then later on it was included as a standard component for all editions of
Windows 2000 Server. Client-side support has been included in Windows NT 4.0
and in later versions of Windows.
Linux kernels 2.6.14 and later come with an SMB client VFS known as
“cifs” which supports DFS. Mac OS X 10.7 (Lion) and onwards also
supports DFS.
Properties:
File transparency: users can access files without knowing where they are
physically stored on the network.
Load balancing: the file system can distribute file access requests across
multiple computers to improve performance and reliability.
Data replication: the file system can store copies of files on multiple
computers to ensure that the files are available even if one of the
computers fails.
Security: the file system can enforce access control policies to ensure that
only authorized users can access files.
Scalability: the file system can support a large number of users and a
large number of files.
Concurrent access: multiple users can access and modify the same file at
the same time.
Fault tolerance: the file system can continue to operate even if one or
more of its components fail.
Data integrity: the file system can ensure that the data stored in the files
is accurate and has not been corrupted.
File migration: the file system can move files from one location to
another without interrupting access to the files.
Data consistency: changes made to a file by one user are immediately
visible to all other users.
Support for different file types: the file system can support a wide range
of file types, including text files, image files, and video files.
Applications :
NFS –
NFS stands for Network File System. It is a client-server architecture that
allows a computer user to view, store, and update files remotely. The
protocol of NFS is one of the several distributed file system standards for
Network-Attached Storage (NAS).
CIFS –
CIFS stands for Common Internet File System. CIFS is a dialect of
SMB; that is, CIFS is an implementation of the SMB protocol, designed by
Microsoft.
SMB –
SMB stands for Server Message Block. It is a protocol for sharing files
and was invented by IBM. The SMB protocol was created to allow
computers to perform read and write operations on files on a remote host
over a Local Area Network (LAN). The directories present on the remote
host can be accessed via SMB and are called “shares”.
Hadoop –
Hadoop is a group of open-source software services. It provides a software
framework for distributed storage and processing of big data using the
MapReduce programming model. The core of Hadoop consists of a storage
part, known as the Hadoop Distributed File System (HDFS), and a
processing part, which is the MapReduce programming model.
NetWare –
NetWare is a discontinued computer network operating system developed
by Novell, Inc. It primarily used cooperative multitasking to run different
services on a personal computer, using the IPX network protocol.
Working of DFS :
There are two ways in which DFS can be implemented:
Standalone DFS namespace –
It allows only for DFS roots that exist on the local computer and
do not use Active Directory. A standalone DFS can only be accessed
on the computer on which it is created. It does not provide any fault
tolerance and cannot be linked to any other DFS. Standalone DFS roots
are rarely encountered because of their limited usefulness.
Domain-based DFS namespace –
It stores the configuration of DFS in Active Directory, making the DFS
namespace root accessible
at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>.
Advantages :
DFS allows multiple users to access or store data.
It allows data to be shared remotely.
It improves file availability, access time, and network efficiency.
It improves the ability to change the size of the data and the ability to
exchange data.
A Distributed File System provides transparency of data even if a server or
disk fails.
Disadvantages :
In a Distributed File System, nodes and connections need to be secured,
so security is a concern.
Messages and data may be lost in the network while moving from one
node to another.
Database connectivity is complicated in a Distributed File System, and
handling the database is not as easy as in a single-user system.
Overloading may occur if all nodes try to send data at once.
What is Multimedia?
The words “multi” and “media” are combined to form the word multimedia. The word
“multi” signifies “many.” Multimedia is a type of medium that allows information
to be easily transferred from one location to another.
Multimedia is the presentation of text, pictures, audio, and video with links and tools
that allow the user to navigate, engage, create, and communicate using a computer.
Multimedia refers to the computer-assisted integration of text, drawings, still and
moving images (videos), graphics, audio, animation, and any other media in which
any type of information can be expressed, stored, communicated, and processed
digitally.
To begin, a computer must be present to coordinate what you see and hear, as well
as to interact with. Second, there must be interconnections between the various
pieces of information. Third, you’ll need navigational tools to get around the web of
interconnected data.
Multimedia is being employed in a variety of disciplines, including education,
training, and business.
Categories of Multimedia
Linear Multimedia:
It is also called non-interactive multimedia. In the case of linear multimedia, the
end user cannot control the content of the application. It has literally no interactivity
of any kind. Some multimedia projects, such as movies, present material in a
linear fashion from beginning to end. A linear multimedia application lacks all the
features with the help of which a user can interact with the application, such as the
ability to choose different options, click on icons, control the flow of the media, or
change the pace at which the media is displayed. Linear multimedia works very well
for providing information to a large group of people, such as at training sessions,
seminars, workplace meetings, etc.
Non-Linear Multimedia:
In non-linear multimedia, the end user is given navigational control to move
through the multimedia content at will. The user can control access to the
application. Non-linear multimedia offers user interactivity to control the movement
of data. Examples include computer games, websites, self-paced computer-based
training packages, etc.
Applications of Multimedia
Multimedia indicates that, in addition to text, graphics/drawings, and photographs,
computer information can be represented using audio, video, and animation.
Multimedia is used in:
Education
It is beneficial to surgeons because they can rehearse intricate procedures such as
brain surgery and reconstructive surgery using images made from imaging scans of
the human body. Plans can be produced more efficiently to cut expenses and
complications.
Fine Arts
Multimedia artists work in the fine arts, combining approaches employing many
media and incorporating viewer involvement in some form. For example, a variety
of digital mediums can be used to combine movies and operas.
Digital artist is a new word for these types of artists. Digital painters make digital
paintings, matte paintings, and vector graphics of many varieties using computer
applications.
Engineering
Multimedia is frequently used by software engineers in computer simulations for
military or industrial training. It’s also used for software interfaces created by
creative experts and software engineers in partnership. Only multimedia is used to
perform all the minute calculations.
Components of Multimedia
Text
Characters are used to form words, phrases, and paragraphs. Text of some kind appears
in all multimedia creations. The text can be in a variety of fonts and
sizes to match the multimedia software’s professional presentation. Text in
multimedia systems can communicate specific information or serve as a supplement
to the information provided by the other media.
Graphics
Non-text information, such as a sketch, chart, or photograph, is represented digitally.
Graphics add to the appeal of the multimedia application. In many circumstances,
people dislike reading big amounts of material on computers. As a result, pictures
are more frequently used than words to clarify concepts, offer background
information, and so on. Graphics are at the heart of any multimedia presentation.
The use of visuals in multimedia enhances the effectiveness and presentation of the
concept. Windows Picture, Internet Explorer, and other similar programs are often
used to see visuals. Adobe Photoshop is a popular graphics editing program that
allows you to effortlessly change graphics and make them more effective and
appealing.
Animations
An animation is a sequence of still images flipped through in rapid succession, a set
of visuals that gives the impression of movement. Animation is the process of making
a still image appear to move. A presentation can also be made lighter and more
appealing by using animation. In multimedia applications, animation is quite popular.
The following are some of the most regularly used animation viewing programs: Fax
Viewer, Internet Explorer, etc.
Video
Photographic images that appear to be in full motion and are played back at speeds
of 15 to 30 frames per second. The term video refers to a moving image that is
accompanied by sound, such as a television picture. Of course, text can be included
in videos, either as captioning for spoken words or as text embedded in an image, as
in a slide presentation. The following programs are widely used to view videos:
RealPlayer, Windows Media Player, etc.
Audio
Any sound, whether it’s music, conversation, or something else. Sound is the most
serious aspect of multimedia, delivering the joy of music, special effects, and other
forms of entertainment. Decibels are a unit of measurement for volume and sound
pressure level. Audio files are used as part of the application context as well as to
enhance interaction. Audio files must occasionally be distributed using plug-in
media players when they appear within online applications and webpages. MP3,
WMA, Wave, MIDI, and RealAudio are examples of audio formats. The following
programs are widely used to play audio: RealPlayer, Windows Media Player, etc.
Advantages of multimedia
(i) It is interactive and integrated: The digitization process integrates all of the
numerous mediums. The ability to receive immediate input enhances interactivity.
(ii) It’s quite user-friendly: The user does not use much energy because they can
sit and watch the presentation, read the text, and listen to the audio.
(iii) It is flexible: Because it is digital, this media can be easily shared and adapted
to suit various settings and audiences.
(iv) It appeals to a variety of senses: It makes extensive use of the user’s senses,
for example hearing, seeing, and speaking.
(v) Available for all types of audiences: It can be utilized for a wide range of
audiences, from a single individual to a group of people.
Disadvantages of multimedia
(i) Expensive: It makes use of a wide range of resources, some of which can be
rather costly.
(ii) Overabundance of information: Because it is so simple to use, it can store an
excessive amount of data at once.
(iii) Large files: The time it takes for your presentation to load is affected by large
files such as video and music. If you add too much, you may need to utilize a larger
computer to store the information.
(iv) Compilation time: It takes time to put together the original draft, despite its
flexibility.
List the characteristics of Multimedia
(i) Multimedia systems must be controlled by a computer – storing, transmitting
and presenting the information to the end users
(ii) Multimedia systems are linked to one another, i.e., integrated: The system’s
multimedia components such as video, music, text, and graphics must all be
integrated in some way.
(iii) The data they work with must be represented digitally: analog signals must be
converted into digital signals.
(iv) Usually, the interface to the final media presentation is interactive.
File placement
A cache, in simple terms, is a data storage layer which stores frequently
accessed data and helps to serve future requests for the same data quickly,
rather than accessing the data from its primary storage location.
Caching allows us to efficiently reuse previously retrieved or computed data
rather than spending time accessing or computing the same data multiple
times.
Caching typically helps with:
1. Application performance
2. Backend load
3. Throughput
4. Predictable performance
5. Database cost
Predictable performance: Sometimes we need to deal with spikes in application
usage; special events or festive offers on e-commerce sites can increase database
load and result in higher latencies. Caching comes to the rescue by mitigating
such cases.
Caching Patterns/Policies:
1. Cache-Aside
2. Read-Through
3. Write-Through
4. Write-Back/Behind
5. Write-Around
6. Refresh-Ahead
Cache-Aside: the cache works alongside the database, and data is lazily loaded
into the cache. It is best suited for read-heavy data (data which is not updated
frequently).
In Fig. 1: when we request specific data, the application first looks for it in the
cache (operation 1). If the application cannot find matching data in the cache, it
falls back (operation 2) and retrieves the data from the database (operations 3 & 4);
the result is then written into the cache for future retrievals and returned to
the user.
In Fig. 2: when we request specific data, the application first looks for it in the
cache (operation 1) and returns it if it finds matching data in the cache
(operation 2).
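As a minimal sketch of the cache-aside read path, the following Python snippet (the cache, database, and key names are placeholders for illustration) shows the operations described above:

    # Cache-aside: the application manages the cache explicitly.
    cache = {}                                   # stand-in for Redis/Memcached
    database = {"user:42": {"name": "Alice"}}    # stand-in for the primary datastore

    def get(key):
        value = cache.get(key)       # operation 1: look in the cache first
        if value is not None:        # cache hit: return immediately (Fig. 2)
            return value
        value = database.get(key)    # operations 2-4: fall back to the database
        if value is not None:
            cache[key] = value       # populate the cache for future reads
        return value

    print(get("user:42"))   # miss: loaded from the database, then cached
    print(get("user:42"))   # hit: served from the cache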
Read-Through: as the name signifies, the application tries to read the data from
the cache, and the cache communicates with the database on a lazy-load basis.
In Fig. 1: when the cache is asked for the data associated with a specific key
(operation 1) and it doesn’t exist (operation 2), the cache retrieves the data
from the datastore and places it in the cache for future retrievals (operation 3),
and finally it returns the value to the caller.
In Fig. 2: when the cache is asked for the data associated with a specific key
(operation 1) and it exists (operation 2), it is returned directly to the
caller.
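A minimal read-through sketch in Python (class, loader, and key names are illustrative): here the cache object owns the loader, so the application only ever talks to the cache:

    # Read-through: the cache, not the application, loads missing data lazily.
    class ReadThroughCache:
        def __init__(self, loader):
            self.loader = loader    # function that fetches from the datastore
            self.store = {}

        def get(self, key):
            if key not in self.store:                  # operations 1-2: miss
                self.store[key] = self.loader(key)     # operation 3: cache loads it
            return self.store[key]                     # hit or freshly loaded value

    database = {"user:42": {"name": "Alice"}}
    users = ReadThroughCache(loader=lambda key: database.get(key))
    print(users.get("user:42"))   # first call loads from the database
    print(users.get("user:42"))   # second call is served from the cache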
Write-Through: in this technique, we write the data into the datastore through the
cache, meaning the data is inserted/updated in the cache first and then in the
datastore (operations 1 & 2). This helps to keep the data consistent across the
cache and the datastore and is suited to write-heavy
requirements.
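A minimal write-through sketch in Python (names are illustrative): every write updates the cache and is immediately propagated to the datastore:

    # Write-through: update the cache first, then the datastore, on every write.
    class WriteThroughCache:
        def __init__(self, datastore):
            self.datastore = datastore
            self.store = {}

        def put(self, key, value):
            self.store[key] = value          # operation 1: write to the cache
            self.datastore[key] = value      # operation 2: write to the datastore

        def get(self, key):
            return self.store.get(key, self.datastore.get(key))

    db = {}
    cache = WriteThroughCache(db)
    cache.put("user:42", {"name": "Alice"})
    print(db["user:42"])     # the datastore is already consistent with the cache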
Write-Back (Write-Behind): in this technique, we make data entries directly into
the cache (operations 1 & 2) but not into the datastore at the same time
(operation 3). Instead, we queue the data to be inserted/updated and replicate the
queued data to the datastore at a later stage.
Since there is a delay in updating the database compared to the cache, there is a
possibility of data loss if the cache fails for some reason (this should be
resolved in combination with other patterns).
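A minimal write-back sketch in Python (names are illustrative): writes land in the cache and are queued, and a separate flush step replicates them to the datastore later; anything still queued when the cache fails would be lost:

    from collections import deque

    # Write-back: writes hit the cache immediately and reach the datastore later.
    class WriteBackCache:
        def __init__(self, datastore):
            self.datastore = datastore
            self.store = {}
            self.pending = deque()           # queued writes not yet in the datastore

        def put(self, key, value):
            self.store[key] = value          # operations 1-2: write to the cache only
            self.pending.append((key, value))

        def flush(self):
            while self.pending:              # operation 3: replicate queued writes
                key, value = self.pending.popleft()
                self.datastore[key] = value

    db = {}
    cache = WriteBackCache(db)
    cache.put("user:42", {"name": "Alice"})
    print("user:42" in db)    # False: the datastore is not updated yet
    cache.flush()             # e.g. run periodically by a background worker
    print("user:42" in db)    # True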
Write-Around: in this pattern, the data is written directly into the datastore
without writing it to the cache (operation 1). On a subsequent read from the
datastore, the data is placed into the cache (operations 2 & 3).
Best suited for applications that won’t frequently re-read recently written data.
Here are some of the commonly used cache eviction (clean-up) techniques used
when the cache reaches its maximum size.
Least Frequently Used (LFU): we increment a counter every time an item is
accessed in the cache; the item with the lowest count is evicted (removed)
first (see the sketch after this list).
First In First Out (FIFO): as the name signifies, we evict the item that entered
the cache first, without considering how often or how many times it was
accessed in the past.
Last In First Out (LIFO): as it signifies, we evict the item that was added to the
cache most recently, irrespective of how many times or how often it was
accessed in the past.
Most Recently Used (MRU): this helps when older items are more likely to be
used again; we remove the most recently accessed
items first.
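A minimal LFU eviction sketch in Python (capacity and names are illustrative): each access bumps a counter, and when the cache is full the key with the lowest count is evicted:

    # Least Frequently Used (LFU) eviction: evict the item with the lowest count.
    class LFUCache:
        def __init__(self, capacity=2):
            self.capacity = capacity
            self.store = {}
            self.counts = {}

        def get(self, key):
            if key in self.store:
                self.counts[key] += 1       # increment on every access
                return self.store[key]
            return None

        def put(self, key, value):
            if key not in self.store and len(self.store) >= self.capacity:
                victim = min(self.counts, key=self.counts.get)   # lowest count
                del self.store[victim]
                del self.counts[victim]
            self.store[key] = value
            self.counts[key] = self.counts.get(key, 0) + 1

    cache = LFUCache(capacity=2)
    cache.put("a", 1); cache.put("b", 2)
    cache.get("a")                 # "a" now has a higher count than "b"
    cache.put("c", 3)              # evicts "b", the least frequently used key
    print(cache.get("b"))          # None

The other policies differ only in what they track: FIFO and LIFO track insertion order, while MRU tracks the most recent access.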
Common places where caching is applied:
1. Database caching
2. Web caching
3. Cloud
4. DNS (Domain Name System)
5. CDN (Content Delivery Network)
6. Session management
7. API (Application Programming Interfaces)