Computer Networks
Course Code: CS – 3001
Fall 2024 Semester
Offered to BCS22 – 5K, BDS21 – 7C
Course Instructor: Aftab Alam
Lecture 7
September 10, 2024 (Tuesday)
Application layer: overview
Principles of network applications
Web and HTTP
E-mail, SMTP, IMAP
The Domain Name System (DNS)
P2P applications
Video streaming and content distribution networks
Socket programming with UDP and TCP
Web and HTTP
First, a quick review…
a web page consists of objects, each of which can be stored on different Web servers
an object can be an HTML file, a JPEG image, a Java applet, an audio file, …
a web page consists of a base HTML file which includes several referenced objects, each addressable by a URL, e.g.,
www.someschool.edu/someDept/pic.gif
(host name: www.someschool.edu, path name: /someDept/pic.gif)
HTTP overview
HTTP: hypertext transfer protocol
the Web's application-layer protocol
client/server model:
• client: browser that requests, receives (using HTTP protocol), and "displays" Web objects
• server: Web server that sends objects (using HTTP protocol) in response to requests
(figure: clients such as a PC running a Firefox browser or an iPhone running a Safari browser exchange HTTP messages with a server running the Apache Web server)
HTTP overview (continued)
HTTP uses TCP:
client initiates TCP connection (creates socket) to server, port 80
server accepts TCP connection from client
HTTP messages (application-layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server)
TCP connection closed
HTTP is "stateless":
server maintains no information about past client requests
aside: protocols that maintain "state" are complex!
• past history (state) must be maintained
• if server/client crashes, their views of "state" may be inconsistent and must be reconciled
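Aside: the client behavior just described (open a TCP connection to port 80, send an HTTP request message into the socket, read the response, connection closed) can be sketched in a few lines of Python. The host and path below are illustrative assumptions; gaia.cs.umass.edu is simply the server used later in this lecture.

import socket

# minimal non-persistent HTTP client over raw TCP (sketch; host/path are illustrative)
HOST, PATH = "gaia.cs.umass.edu", "/"

sock = socket.create_connection((HOST, 80))      # client initiates TCP connection, port 80
request = (
    f"GET {PATH} HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"                      # ask server to close after one response
    "\r\n"
)
sock.sendall(request.encode("ascii"))            # HTTP request message into the socket

response = b""
while True:                                      # read until the server closes the connection
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

print(response.split(b"\r\n", 1)[0].decode())    # status line, e.g. HTTP/1.1 200 OK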
HTTP connections: two types
Non-persistent HTTP:
1. TCP connection opened
2. at most one object sent over TCP connection
3. TCP connection closed
downloading multiple objects required multiple TCP connections
Persistent HTTP:
TCP connection opened to a server
multiple objects can be sent over a single TCP connection between client and that server
TCP connection closed
Non-persistent HTTP: example
User enters URL: www.someSchool.edu/someDepartment/home.index
(containing text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for TCP connection at port 80, "accepts" connection, notifying client
2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket
Non-persistent HTTP: example (cont.)
User enters URL: www.someSchool.edu/someDepartment/home.index
(containing text, references to 10 jpeg images)
4. HTTP server closes TCP connection.
5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects
6. Steps 1-5 repeated for each of 10 jpeg objects
Non-persistent HTTP: response time
RTT (definition): time for a small packet to travel from client to server and back
HTTP response time (per object):
• one RTT to initiate TCP connection
• one RTT for HTTP request and first few bytes of HTTP response to return
• object/file transmission time
(figure: timing diagram of initiating the TCP connection, requesting the file, and receiving it, spanning two RTTs plus the file transmission time)
Non-persistent HTTP response time = 2 RTT + file transmission time
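A quick numeric check of this formula. The RTT and link rate below are assumed values, not from the slide; the 100K-bit object size matches the caching example later in this lecture.

# worked example of: response time = 2 RTT + file transmission time
RTT = 0.100                                  # seconds (assumed)
R = 1.54e6                                   # access link rate in bits/sec (assumed)
L = 100e3                                    # object size in bits

per_object = 2 * RTT + L / R                 # 0.200 + 0.065 = 0.265 s per object
print(f"one object: {per_object:.3f} s")

# base HTML file + 10 referenced images, each over its own (serial) connection
print(f"full page, serial non-persistent: {11 * per_object:.3f} s")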
Persistent HTTP (HTTP 1.1)
Non-persistent HTTP issues:
requires 2 RTTs per object
OS overhead for each TCP connection
browsers often open multiple parallel TCP connections to fetch referenced objects in parallel
Persistent HTTP (HTTP 1.1):
server leaves connection open after sending response
subsequent HTTP messages between same client/server sent over open connection
client sends requests as soon as it encounters a referenced object
as little as one RTT for all the referenced objects (cutting response time in half)
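A minimal sketch of persistent HTTP using Python's standard http.client module: one TCP connection is opened and, if the server honors keep-alive, reused for several requests. The host and paths are illustrative assumptions.

import http.client

# persistent HTTP sketch: several requests reuse a single TCP connection
conn = http.client.HTTPConnection("gaia.cs.umass.edu", 80, timeout=10)

for path in ["/", "/kurose_ross/interactive/"]:
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    resp = conn.getresponse()
    body = resp.read()                       # drain the body before reusing the connection
    print(path, resp.status, len(body), "bytes")

conn.close()                                 # the single TCP connection is closed at the end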
HTTP request message
two types of HTTP messages: request, response
HTTP request message:
• ASCII (human-readable format)
• each line ends with a carriage return character and a line-feed character (\r\n)
request line (GET, POST, HEAD commands):
GET /index.html HTTP/1.1\r\n
header lines:
Host: www-net.cs.umass.edu\r\n
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:80.0) Gecko/20100101 Firefox/80.0\r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Connection: keep-alive\r\n
\r\n
carriage return, line feed at start of a line (by itself) indicates the end of the header lines
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
HTTP request message: general format
request line:  method [sp] URL [sp] version [cr][lf]
header lines:  header field name: value [cr][lf]   (repeated, one per header)
blank line:    [cr][lf]   (marks the end of the header lines)
body:          entity body
Other HTTP request messages
POST method:
web page often includes form input
user input sent from client to server in entity body of HTTP POST request message
GET method (for sending data to server):
include user data in URL field of HTTP GET request message (following a '?'):
www.somesite.com/animalsearch?monkeys&banana
HEAD method:
requests headers (only) that would be returned if the specified URL were requested with an HTTP GET method
PUT method:
uploads new file (object) to server
completely replaces file that exists at specified URL with content in entity body of PUT HTTP request message
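A sketch of the two ways of sending user data listed above, using Python's http.client; www.somesite.com is the slide's illustrative host and need not actually exist.

import http.client
from urllib.parse import urlencode

conn = http.client.HTTPConnection("www.somesite.com", 80, timeout=10)

# 1) GET: user data rides in the URL, after the '?'
conn.request("GET", "/animalsearch?monkeys&banana")
resp = conn.getresponse()
resp.read()                                  # drain before reusing the connection
print("GET status:", resp.status)

# 2) POST: user data rides in the entity body of the request message
body = urlencode({"animal1": "monkeys", "animal2": "banana"})
conn.request("POST", "/animalsearch", body=body,
             headers={"Content-Type": "application/x-www-form-urlencoded"})
print("POST status:", conn.getresponse().status)

conn.close()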
HTTP response message
status line (protocol, status code, status phrase):
HTTP/1.1 200 OK
header lines:
Date: Tue, 08 Sep 2020 00:53:20 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/7.4.9 mod_perl/2.0.11 Perl/v5.16.3
Last-Modified: Tue, 01 Mar 2016 18:57:50 GMT
ETag: "a5b-52d015789ee9e"
Accept-Ranges: bytes
Content-Length: 2651
Content-Type: text/html; charset=UTF-8
\r\n
data (e.g., the requested HTML file):
data data data data data ...
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
HTTP response status codes
the status code appears in the first line of the server-to-client response message.
some sample codes:
200 OK
• request succeeded, requested object later in this message
301 Moved Permanently
• requested object moved, new location specified later in this message (in
Location: field)
400 Bad Request
• request msg not understood by server
404 Not Found
• requested document not found on this server
505 HTTP Version Not Supported
Trying out HTTP (client side) for yourself
1. netcat to your favorite Web server:
% nc -c -v gaia.cs.umass.edu 80
• opens TCP connection to port 80 (default HTTP server port) at gaia.cs.umass.edu
• anything typed in will be sent to port 80 at gaia.cs.umass.edu
2. type in a GET HTTP request:
GET /kurose_ross/interactive/index.php HTTP/1.1
Host: gaia.cs.umass.edu
• by typing this in (hit carriage return twice), you send this minimal (but complete) GET request to the HTTP server
3. look at response message sent by HTTP server!
(or use Wireshark to look at captured HTTP request/response)
Maintaining user/server state: cookies
Recall: HTTP GET/response interaction is stateless
no notion of multi-step exchanges of HTTP messages to complete a Web "transaction"
• no need for client/server to track "state" of a multi-step exchange
• all HTTP requests are independent of each other
• no need for client/server to "recover" from a partially-completed-but-never-completely-completed transaction
(figure, aside: a stateful protocol in which the client makes two changes to X, or none at all; the stored value evolves from X to X' to X'' over time)
Q: what happens if the network connection or client crashes at t'?
Maintaining user/server state: cookies
Web sites and client browsers use cookies to maintain some state between transactions
four components:
1) cookie header line of HTTP response message
2) cookie header line in next HTTP request message
3) cookie file kept on user's host, managed by user's browser
4) back-end database at Web site
Example:
Susan uses browser on laptop, visits a specific e-commerce site for the first time
when the initial HTTP request arrives at the site, the site creates:
• unique ID (aka "cookie")
• entry in backend database for ID
subsequent HTTP requests from Susan to this site will contain the cookie ID value, allowing the site to "identify" Susan
Maintaining user/server state: cookies
(figure: cookie exchange between Susan's browser and the Amazon server)
client's cookie file initially contains: ebay 8734
client sends usual HTTP request msg; Amazon server creates ID 1678 for the user and creates an entry in its backend database
usual HTTP response includes header line: set-cookie: 1678
cookie file now contains: ebay 8734, amazon 1678
subsequent usual HTTP request msgs include header line: cookie: 1678, triggering a cookie-specific action; server accesses the backend database
one week later: the request again carries cookie: 1678, and the server again takes the cookie-specific action
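A sketch of this exchange in Python's http.client: the first request carries no cookie, the Set-Cookie header (if the server sends one) is remembered, and later requests echo it back in a Cookie header. The host is illustrative; whether and what cookie is set depends entirely on the server.

import http.client

conn = http.client.HTTPConnection("www.amazon.com", 80, timeout=10)

# first visit: usual HTTP request, no cookie header
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
cookie = resp.getheader("Set-Cookie")        # e.g. "session-id=1678; Path=/" (server-dependent)
print("server set:", cookie)

# later visits: send the cookie back so the site can "identify" this client
if cookie:
    conn.request("GET", "/", headers={"Cookie": cookie.split(";")[0]})
    resp = conn.getresponse()
    resp.read()
    print("status with cookie:", resp.status)

conn.close()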
HTTP cookies: comments
What cookies can be used for:
authorization
shopping carts
recommendations
user session state (Web e-mail)
Challenge: how to keep state?
at protocol endpoints: maintain state at sender/receiver over multiple transactions
in messages: cookies in HTTP messages carry state
aside: cookies and privacy
cookies permit sites to learn a lot about you on their site
third-party persistent cookies (tracking cookies) allow a common identity (cookie value) to be tracked across multiple web sites
Web caches
Goal: satisfy client requests without involving origin server
user configures browser to point to a (local) Web cache
browser sends all HTTP requests to cache
• if object in cache: cache returns object to client
• else: cache requests object from origin server, caches the received object, then returns object to client
(figure: clients send HTTP requests to the Web cache, which contacts the origin server only when needed)
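The cache decision above, as a minimal in-memory sketch keyed by URL (a real proxy would also honor expiry headers and validation, which conditional GET, later in this lecture, addresses):

import urllib.request

cache = {}                                   # url -> cached object bytes

def get_object(url):
    if url in cache:                         # hit: serve from cache, origin not involved
        return cache[url]
    with urllib.request.urlopen(url) as resp:   # miss: request object from origin server
        body = resp.read()
    cache[url] = body                        # cache the received object
    return body                              # then return it to the client

page = get_object("http://gaia.cs.umass.edu/")   # fetched from the origin server
again = get_object("http://gaia.cs.umass.edu/")  # served from the local cache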
Web caches (aka proxy servers)
Web cache acts as both client and server:
• server for original requesting client
• client to origin server
server tells cache about object's allowable caching in response header (e.g., Cache-Control: max-age=<seconds>, Cache-Control: no-cache)
Why Web caching?
reduce response time for client request
• cache is closer to client
reduce traffic on an institution's access link
the Internet is dense with caches
• enables "poor" content providers to more effectively deliver content
Caching example
Scenario:
access link rate: 1.54 Mbps
RTT from institutional router to origin server: 2 sec
web object size: 100K bits
average request rate from browsers to origin servers: 15/sec
avg data rate to browsers: 1.50 Mbps
(figure: institutional network with a 1 Gbps LAN connected to the public Internet over the 1.54 Mbps access link)
Performance:
access link utilization = .97
LAN utilization = .0015
end-end delay = Internet delay + access link delay + LAN delay
              = 2 sec + minutes + usecs
problem: large queueing delays at high utilization!
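The utilization numbers above follow directly from the scenario figures; a quick arithmetic check:

request_rate = 15                            # requests/sec from browsers
object_size = 100e3                          # bits per web object
access_rate = 1.54e6                         # access link, bits/sec
lan_rate = 1e9                               # institutional LAN, bits/sec

data_rate = request_rate * object_size       # 1.50 Mbps demanded by browsers
print(data_rate / access_rate)               # access link utilization ~ 0.97
print(data_rate / lan_rate)                  # LAN utilization ~ 0.0015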
Option 1: buy a faster access link
Scenario (access link upgraded from 1.54 Mbps to 154 Mbps):
RTT from institutional router to origin server: 2 sec
web object size: 100K bits
average request rate from browsers to origin servers: 15/sec
avg data rate to browsers: 1.50 Mbps
Performance:
access link utilization drops from .97 to .0097
LAN utilization = .0015
end-end delay = Internet delay + access link delay + LAN delay
              = 2 sec + msecs + usecs (access link delay drops from minutes to msecs)
Cost: faster access link (expensive!)
Option 2: install a web cache
Scenario (unchanged):
access link rate: 1.54 Mbps
RTT from institutional router to origin server: 2 sec
web object size: 100K bits
average request rate from browsers to origin servers: 15/sec
avg data rate to browsers: 1.50 Mbps
Cost: web cache (cheap!)
Performance:
LAN utilization = ?
access link utilization = ?
average end-end delay = ?
How to compute link utilization and delay, now with the local web cache in place? (next slide)
Calculating access link utilization, end-end delay
with cache:
suppose the cache hit rate is 0.4:
40% of requests served by the cache, with low (msec) delay
60% of requests satisfied at the origin
• rate to browsers over access link = 0.6 * 1.50 Mbps = .9 Mbps
• access link utilization = 0.9/1.54 = .58, which means low (msec) queueing delay at the access link
average end-end delay:
= 0.6 * (delay from origin servers) + 0.4 * (delay when satisfied at cache)
= 0.6 (2.01) + 0.4 (~msecs) = ~1.2 secs
lower average end-end delay than with the 154 Mbps link (and cheaper too!)
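The same arithmetic with the cache in place (the ~msec cache-hit delay is assumed to be about 1 ms):

hit_rate = 0.4
miss_rate = 1 - hit_rate
data_rate = 1.50e6                           # bits/sec demanded by browsers
access_rate = 1.54e6                         # bits/sec access link

print(miss_rate * data_rate / access_rate)   # access link utilization ~ 0.58

origin_delay = 2.01                          # sec, when the origin server must be contacted
cache_delay = 0.001                          # sec, assumed ~1 ms for a cache hit
print(miss_rate * origin_delay + hit_rate * cache_delay)   # ~1.2 sec average end-end delay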
Conditional GET
Goal: don't send object if cache has up-to-date cached version
• no object transmission delay (or use of network resources)
client: specify date of cached copy in HTTP request:
If-modified-since: <date>
server: response contains no object if cached copy is up-to-date:
HTTP/1.0 304 Not Modified
(figure: if the object was not modified before <date>, the server answers HTTP/1.0 304 Not Modified with no object; if it was modified after <date>, the server answers HTTP/1.0 200 OK and re-sends the <data>)
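A conditional GET sketch with Python's http.client: the If-modified-since date reuses the Last-Modified value from the earlier response example and is otherwise an illustrative assumption, as are the host and path.

import http.client

conn = http.client.HTTPConnection("gaia.cs.umass.edu", 80, timeout=10)
conn.request("GET", "/", headers={
    "If-Modified-Since": "Tue, 01 Mar 2016 18:57:50 GMT",
})
resp = conn.getresponse()
body = resp.read()

if resp.status == 304:                       # object not modified: no object in the response
    print("304 Not Modified: keep using the cached copy")
else:                                        # object modified: server re-sends it
    print(resp.status, "object sent,", len(body), "bytes")
conn.close()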
HTTP/2
Key goal: decreased delay in multi-object HTTP requests
HTTP/1.1: introduced multiple, pipelined GETs over a single TCP connection
server responds in-order (FCFS: first-come-first-served scheduling) to GET requests
with FCFS, a small object may have to wait for transmission behind large object(s): head-of-line (HOL) blocking
loss recovery (retransmitting lost TCP segments) stalls object transmission
HTTP/2
Key goal: decreased delay in multi-object HTTP requests
HTTP/2 [RFC 7540, 2015]: increased flexibility at server in sending objects to client:
methods, status codes, most header fields unchanged from HTTP 1.1
transmission order of requested objects based on client-specified object priority (not necessarily FCFS)
push unrequested objects to client
divide objects into frames, schedule frames to mitigate HOL blocking
HTTP/2: mitigating HOL blocking
HTTP 1.1: client requests 1 large object (e.g., video file) and 3 smaller objects
(figure: client sends GET O1, GET O2, GET O3, GET O4; server transmits the requested object data in request order, so all of O1 is sent before O2, O3, O4)
objects delivered in the order requested: O2, O3, O4 wait behind O1
HTTP/2: mitigating HOL blocking
HTTP/2: objects divided into frames, frame transmission interleaved
(figure: client sends GET O1, GET O2, GET O3, GET O4; server interleaves frames of O1 with the frames of the smaller objects)
O2, O3, O4 delivered quickly, O1 slightly delayed
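A toy comparison of the two schedules above (object sizes, measured in frames, and the round-robin interleaving are illustrative assumptions, not the HTTP/2 specification):

# one large object and three small ones; one frame is transmitted per time unit
sizes = {"O1": 12, "O2": 1, "O3": 1, "O4": 1}

def finish_times(schedule):
    """Time step at which each object's last frame has been sent."""
    remaining, done, t = dict(sizes), {}, 0
    for obj in schedule:
        if remaining[obj] == 0:
            continue
        t += 1
        remaining[obj] -= 1
        if remaining[obj] == 0:
            done[obj] = t
    return done

# FCFS (HTTP 1.1): all of O1 before O2, O3, O4
fcfs = [o for o in ["O1", "O2", "O3", "O4"] for _ in range(sizes[o])]

# frame interleaving (HTTP/2-like): round-robin, one frame per object per turn
total = sum(sizes.values())
interleaved = [o for _ in range(total) for o in ["O1", "O2", "O3", "O4"]]

print("FCFS:       ", finish_times(fcfs))         # small objects wait behind O1
print("interleaved:", finish_times(interleaved))  # O2-O4 finish early, O1 barely later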
HTTP/2 to HTTP/3
HTTP/2 over a single TCP connection means:
recovery from packet loss still stalls all object transmissions
• as in HTTP 1.1, browsers have an incentive to open multiple parallel TCP connections to reduce stalling and increase overall throughput
no security over a vanilla TCP connection
HTTP/3: adds security, per-object error- and congestion control (more pipelining) over UDP
• more on HTTP/3 in the transport layer