1) Difference between process and thread
Process vs Thread:
- A process is a program in execution; a thread is a segment of a process.
- A process takes more time to terminate; a thread takes less time to terminate.
- Process creation takes more time; thread creation takes less time.
- A process takes more time for context switching; a thread takes less time for context switching.
- A process is less efficient in terms of communication; a thread is more efficient in terms of communication.
- Every process runs in its own memory; threads share the memory of their process.
- A process is heavyweight compared to a thread; a thread is lightweight, as each thread in a process shares code, data, and resources.
- If one process is blocked, the execution of other processes is not affected; if a user-level thread is blocked, all other user-level threads of that process are blocked.
- A process consumes more resources; a thread consumes fewer resources.
- Changes to the parent process do not affect child processes; since all threads of the same process share the address space and other resources, changes to the main thread may affect the behaviour of the other threads of the process.
- Process creation involves a system call; thread creation involves no system call and is done using APIs.
- Processes do not share data with each other; threads of the same process share data with each other.
- Process switching uses an interface to the operating system; thread switching may not require involvement of the operating system.
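To make the memory-sharing difference concrete, here is a minimal Python sketch (the counter variable and increment function are purely illustrative): a thread updates the parent's global variable, while a separate process only changes its own copy of the address space.

```python
# Minimal sketch: a thread shares the parent's memory, a process does not.
import threading
import multiprocessing

counter = 0

def increment():
    global counter
    counter += 1

if __name__ == "__main__":
    t = threading.Thread(target=increment)
    t.start(); t.join()
    print("after thread:", counter)    # 1 - the thread changed our memory

    p = multiprocessing.Process(target=increment)
    p.start(); p.join()
    print("after process:", counter)   # still 1 - the child had its own copy
```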
2) Election algorithm
Election algorithms are designed to choose a coordinator.
Election Algorithms: Election algorithms choose a process from a group of processes to act as a coordinator. If the coordinator process crashes for some reason, a new coordinator is elected on another processor. The election algorithm basically determines where a new copy of the coordinator should be restarted. The algorithm assumes that every active process in the system has a unique priority number, and the process with the highest priority is chosen as the new coordinator. Hence, when a coordinator fails, this algorithm elects the active process with the highest priority number; this number is then sent to every active process in the distributed system. We have two election algorithms for two different configurations of a distributed system.
1. The Bully Algorithm – This algorithm applies to systems where every process can send a message to every other process in the system.
Algorithm – Suppose process P sends a message to the coordinator.
1. If the coordinator does not respond within a time interval T, it is assumed that the coordinator has failed.
2. Process P then sends an ELECTION message to every process with a higher priority number.
3. It waits for responses; if no one responds within time interval T, process P elects itself as the coordinator.
4. It then sends a message to all processes with lower priority numbers announcing that it has been elected as their new coordinator.
5. However, if an answer is received within time T from some other process Q:
(I) Process P waits for a further time interval T’ to receive a message from Q announcing that Q has been elected as coordinator.
(II) If Q does not respond within time interval T’, it is assumed to have failed and the algorithm is restarted.
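The steps above can be condensed into a small sketch. The following Python function is illustrative only (its name and the sample data are assumptions): membership in an alive set stands in for "responds within time T", and process ids double as priority numbers.

```python
# Toy sketch of the bully algorithm described above.

def bully_election(initiator, processes, alive):
    """Return the id of the new coordinator."""
    higher = [p for p in processes if p > initiator and p in alive]
    if not higher:
        # Steps 3-4: nobody with a higher priority answered, so the
        # initiator elects itself and would announce to lower ids.
        return initiator
    # Step 5: higher-priority processes answer and take over; the
    # take-overs repeat until the highest live id elects itself.
    return max(higher)

processes = [0, 1, 3, 4, 5, 6]
alive = {0, 1, 3, 4, 5}                     # old coordinator 6 has crashed
print(bully_election(3, processes, alive))  # -> 5
```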
2. The Ring Algorithm – This algorithm applies to systems organized as a ring (logically or physically). In this algorithm we assume that the links between processes are unidirectional and that every process can send messages only to the process on its right. The data structure this algorithm uses is the active list, a list holding the priority numbers of all active processes in the system.
Algorithm –
1. If process P1 detects a coordinator failure, it creates a new active list which is initially empty. It sends an election message to its neighbour on the right and adds the number 1 to its active list.
2. If process P2 receives the election message from the process on its left, it responds in one of three ways:
(I) If the message received does not contain 1 in the active list, then P2 adds 2 to its active list and forwards the message.
(II) If this is the first election message it has received or sent, P2 creates a new active list with the numbers 1 and 2. It then sends election message 1 followed by 2.
(III) If process P1 receives its own election message 1, then the active list for P1 now contains the numbers of all the active processes in the system. Process P1 then picks the highest priority number from the list and elects it as the new coordinator.
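A minimal Python sketch of this ring election (the ring order and the set of live processes are assumptions for the demo): the election message makes one circuit collecting the ids of live processes, and when the initiator sees its own message return, the highest collected id becomes coordinator, matching the worked example later in this section.

```python
# Toy sketch of the ring election described above.

def ring_election(ring, alive, initiator):
    active_list = [initiator]
    i = (ring.index(initiator) + 1) % len(ring)
    while ring[i] != initiator:
        if ring[i] in alive:               # failed processes are skipped
            active_list.append(ring[i])
        i = (i + 1) % len(ring)
    # The initiator sees its own message return: elect the highest id.
    return max(active_list)

ring = [3, 5, 0, 1, 4, 6]                  # logical ring order
alive = {0, 1, 3, 4, 5}                    # old coordinator 6 has failed
print(ring_election(ring, alive, 3))       # -> 5
```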
Election Algorithm
Many distributed algorithms require one process to act as coordinator, initiator, or otherwise perform some special role.
The goal of an election algorithm is to ensure that when an election starts, it concludes with all processes agreeing on who the new coordinator is to be.
Types of Election Algorithm
1. Token ring algorithm
2. Bully Algorithm
1. Token ring algorithm
When any process notices that the coordinator is not functioning, it builds an ELECTION message containing its own process number and sends the message to its successor.
Example:
We start with 6 processes, connected in a logical ring. Process 6 is the
leader, as it has the highest number.
Process 6 fails.
Process 3 notices that Process 6 does not respond. So it starts an election,
sending a message containing its id to the next node in the ring.
Process 5 passes the message on, adding its own id to the message.
Process 0 passes the message on, adding its own id to the message.
Process 1 passes the message on, adding its own id to the message.
Process 4 passes the message on, adding its own id to the message.
When Process 3 receives the message back, it knows the message has
gone around the ring, as its own id is in the list. Picking the highest id in
the list, it starts the coordinator message “5 is the leader” around the
ring.
Process 5 passes on the coordinator message.
Process 0 passes on the coordinator message.
Process 1 passes on the coordinator message.
Process 4 passes on the coordinator message.
Process 3 receives the coordinator message, and stops it.
2. Bully Algorithm
When any process notices that the coordinator is no longer responding to requests, it initiates an ELECTION.
The process holds an election as follows:
Example:
We start with 6 processes, all directly connected to each other. Process 6
is the leader,
as it has the highest number.
Process 6 fails.
Process 3 notices that Process 6 does not respond. So it starts an election,
notifying those processes with ids greater than 3.
Both Process 4 and Process 5 respond, telling Process 3 that they’ll take
over from here.
Process 4 sends election messages to both Process 5 and Process 6.
Only Process 5 answers and takes over the election.
Process 5 sends out only one election message to Process 6.
When Process 6 does not respond, Process 5 declares itself the winner.
3) Lamport's algorithm for mutual exclusion
Lamport’s Distributed Mutual Exclusion Algorithm is a permission-based algorithm proposed by Lamport as an illustration of his synchronization scheme for distributed systems. In permission-based algorithms, a timestamp is used to order critical section requests and to resolve any conflict between requests. In Lamport’s algorithm, critical section requests are executed in increasing order of timestamps, i.e., a request with a smaller timestamp is given permission to execute the critical section before a request with a larger timestamp. In this algorithm:
- Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication channels are assumed to follow FIFO order.
- A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
- A site sends a REPLY message to the requesting site to give its permission to enter the critical section.
- A site sends a RELEASE message to all other sites upon exiting the critical section.
- Every site Si keeps a queue to store critical section requests ordered by their timestamps; request_queuei denotes the queue of site Si.
- A timestamp is given to each critical section request using Lamport’s logical clock.
- Timestamps are used to determine the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, and critical section requests are always executed in the order of their timestamps.
Algorithm:
To enter the critical section:
o When a site Si wants to enter the critical section, it sends a request message REQUEST(tsi, i) to all other sites and places the request on request_queuei. Here, tsi denotes the timestamp of site Si.
o When a site Sj receives the request message REQUEST(tsi, i) from site Si, it returns a timestamped REPLY message to site Si and places the request of site Si on request_queuej.
To execute the critical section:
o A site Si can enter the critical section if it has received a message with timestamp larger than (tsi, i) from all other sites and its own request is at the top of request_queuei.
To release the critical section:
o When a site Si exits the critical section, it removes its own request from the top of its request queue and sends a timestamped RELEASE message to all other sites.
o When a site Sj receives the timestamped RELEASE message from site Si, it removes the request of Si from its request queue.
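The rules above can be condensed into a toy simulation. The following Python sketch is illustrative rather than a real implementation: message delivery, FIFO channels, and failures are abstracted into direct method calls, and receiving a REPLY from every other site stands in for having received a message with a larger timestamp from all other sites.

```python
# Toy single-machine sketch of Lamport's mutual exclusion algorithm.
# Each site keeps a Lamport clock and a request queue of (timestamp,
# site id) pairs, compared lexicographically to break ties.
import heapq

class Site:
    def __init__(self, sid):
        self.sid, self.clock = sid, 0
        self.queue, self.replies = [], set()

    def tick(self, seen=0):                       # Lamport clock update
        self.clock = max(self.clock, seen) + 1

    def request(self, sites):                     # send REQUEST to all others
        self.tick()
        self.ts = (self.clock, self.sid)
        heapq.heappush(self.queue, self.ts)
        for s in sites:
            if s is not self:
                s.on_request(self, self.ts)

    def on_request(self, sender, ts):             # queue request, send REPLY
        self.tick(ts[0])
        heapq.heappush(self.queue, ts)
        sender.on_reply(self)

    def on_reply(self, sender):
        self.tick(sender.clock)
        self.replies.add(sender.sid)

    def can_enter(self, n):                       # CS entry condition
        return (len(self.replies) == n - 1
                and self.queue and self.queue[0] == self.ts)

    def release(self, sites):                     # send RELEASE to all others
        heapq.heappop(self.queue)                 # own request is at the top
        for s in sites:
            if s is not self:
                s.on_release(self.ts)

    def on_release(self, ts):                     # drop the releaser's request
        self.queue.remove(ts)
        heapq.heapify(self.queue)

sites = [Site(i) for i in range(3)]
sites[0].request(sites)                           # S0 asks first
sites[1].request(sites)                           # then S1
print(sites[0].can_enter(3))                      # True: smaller timestamp
print(sites[1].can_enter(3))                      # False: S0 is at the head
sites[0].release(sites)
print(sites[1].can_enter(3))                      # True after the RELEASE
```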
Message Complexity: Lamport’s algorithm requires 3(N – 1) messages per critical section execution (for example, with N = 5 sites a single critical section execution costs 3 × 4 = 12 messages). These 3(N – 1) messages involve:
(N – 1) REQUEST messages
(N – 1) REPLY messages
(N – 1) RELEASE messages
Drawbacks of Lamport’s Algorithm:
Unreliable approach: failure of any one of the processes will halt the progress of the entire system.
High message complexity: the algorithm requires 3(N – 1) messages per critical section invocation.
Performance:
Synchronization delay is equal to the maximum message transmission time.
It requires 3(N – 1) messages per CS execution.
The algorithm can be optimized to 2(N – 1) messages by omitting the REPLY message in some situations.
Advantages of Lamport’s Algorithm for Mutual Exclusion in Distributed Systems:
1. Simplicity: Lamport’s algorithm is relatively easy to understand and implement compared to other algorithms for mutual exclusion in distributed systems.
2. Fairness: The algorithm guarantees fairness by providing a total order of events that is used to determine the next process that can enter the critical section.
3. Scalability: The algorithm works with any number of processes, although each critical section execution requires messages to all other sites, so the message cost grows with N.
4. Compatibility: The algorithm is compatible with a wide range of distributed systems and can be adapted to different network topologies and communication protocols.
Disadvantages of Lamport’s Algorithm for Mutual Exclusion in Distributed Systems:
1. Message Overhead: The algorithm requires a lot of message
passing between processes to maintain the total order of events,
which can lead to increased network traffic and latency.
2. Delayed Execution: The algorithm can lead to delays in the
execution of critical sections because a process may have to wait for
messages to arrive from other processes before entering the critical
section.
3. Limited Performance: The algorithm may not perform well in
systems with a high number of processes because of the increased
message overhead and delayed execution.
4. No Fault Tolerance: The algorithm does not provide any fault
tolerance mechanism, which means that if a process fails or
crashes, it may cause the entire system to fail or become unstable.
4) Explain file accessing models
In Distributed File Systems (DFS), multiple machines are used to provide the file system’s facility. Different file systems use different conceptual models of a file. The two most commonly used criteria for file modeling are structure and modifiability. File models based on these criteria are described below.
File Accessing Models:
The file accessing model basically depends on
the unit of data access/transfer
the method used for accessing remote files
Based on the unit of data access, the following file access models may be used to access a particular file.
1. File-level transfer model: In the file-level transfer model, the whole file is moved whenever an operation requires the file data to be transferred across the distributed computing network between client and server. This model has better scalability and is efficient.
2. Block-level transfer model: In the block-level transfer model, file data is transferred between client and server in units of file blocks; thus, the unit of data transfer in the block-level transfer model is the file block. The block-level transfer model may be used in distributed computing environments containing several diskless workstations.
3. Byte-level transfer model: In the byte-level transfer model, file data is transferred between client and server in units of bytes; thus, the unit of data transfer in the byte-level transfer model is the byte. The byte-level transfer model offers greater flexibility than the other file transfer models, since it permits storage and retrieval of an arbitrary sequential sub-range of a file. The major drawback of the byte-level transfer model is the difficulty of cache management, due to the variable-length data of different access requests.
4. Record-level transfer model: The record-level transfer model may be used with file models in which the file contents are structured as records. In the record-level transfer model, file data is transferred between client and server in units of records; the unit of data transfer in the record-level transfer model is the record.
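To make the units of transfer concrete, here is a rough client-side sketch in Python. Plain local file I/O stands in for the client/server exchange, BLOCK_SIZE is an assumed value, and the record-level case is omitted since it depends on the record structure.

```python
# Rough illustration of the unit of data transfer in each model.

BLOCK_SIZE = 4096                        # assumed block size

def read_whole_file(path):               # file-level transfer model
    with open(path, "rb") as f:
        return f.read()

def read_block(path, block_no):          # block-level transfer model
    with open(path, "rb") as f:
        f.seek(block_no * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)

def read_byte_range(path, offset, length):   # byte-level transfer model
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```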
The Method Used for Accessing Remote Files:
A distributed file system may use one of the following models to service a client’s file access request when the accessed file is remote:
1. Remote service model: Processing of the client’s request is performed at the server’s node. The client’s request for file access is sent across the network as a message to the server, the server machine performs the access request, and the result is sent back to the client. The number of messages sent and the overhead per message should be kept low.
Remote access is handled across the network, so it is slower.
It increases server load and network traffic, so performance is degraded.
Transmission of a series of responses to specific requests leads to higher network overhead.
To maintain consistency, communication between client and server is needed to keep the master copy consistent with the client’s stored data.
Remote service is better when main memory is small.
It is simply an extension of the local file system interface across the network.
2. Data-caching model: This model attempts to reduce the network traffic of the previous model by caching the data obtained from the server node. It exploits the locality of reference found in file accesses. A replacement policy, for instance LRU, is used to keep the cache size bounded (a small sketch follows at the end of this section).
Remote accesses can be served locally, so access can be quicker.
Network traffic and server load are reduced, which improves scalability.
Network overhead is lower when large amounts of data are transmitted, in comparison to the remote service model.
For maintaining consistency: with few writes, performance is good; with more frequent writes, performance is poor.
Caching is better for machines with disks or large main memories.
The lower-level machine interface is different from the upper-level user interface (UI).
Benefit of the Data-caching model over the Remote service model:
The data-caching model offers the possibility of increased performance and greater system scalability, since it reduces network traffic, contention for the network, and contention for the file servers. Hence almost all distributed file systems implement some form of caching.
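As a closing illustration, here is a minimal Python sketch of the LRU replacement just mentioned, applied to a client-side block cache. The fetch_block callback, standing in for a request to the remote file server, is hypothetical; only the caching logic is the point.

```python
# Minimal sketch of LRU replacement in the data-caching model.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity, fetch_block):
        self.capacity = capacity
        self.fetch_block = fetch_block     # called on a cache miss
        self.cache = OrderedDict()         # (path, block_no) -> data

    def read(self, path, block_no):
        key = (path, block_no)
        if key in self.cache:
            self.cache.move_to_end(key)    # hit: now most recently used
            return self.cache[key]
        data = self.fetch_block(path, block_no)   # miss: ask the "server"
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least recently used
        return data
```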