CNS Unit5 Material

UNIT–V

Objective:
The objective of this unit is to provide detailed information about how protocols are
managed within a network and across multiple networks. To handle such protocols and
manage the network, it describes a protocol called SNMP.

1. Basic Concepts of SNMP

• An integrated collection of tools for network monitoring and control.

– Single operator interface

– Minimal amount of separate equipment; software and network communications capability are built into the existing equipment

• SNMP key elements:

o Management station

o Management agent

o Management information base

o Network Management protocol

• Protocol context of SNMP

Proxy Configuration

1. SNMP v1 and v2

• Trap – an unsolicited message (reporting an alarm condition)

• SNMPv1 is "connectionless" since it utilizes UDP (rather than TCP) as the transport layer protocol.

• SNMPv2 allows the use of TCP for "reliable, connection-oriented" service.
2. Comparison of SNMPv1 and SNMPv2

3. SNMPv1 Community Facility

4. SNMPv1 Administrative Concepts

1. SNMPv3

SNMPv3 defines a security capability to be used in conjunction with SNMPv1 or v2

2. SNMPv3 Flow

3. Traditional SNMP Manager

4. Traditional SNMP Agent.

5. SNMPv3 Message Format with USM

6. User Security Model (USM)

Key Localization Process
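The key localization process named above is specified in RFC 3414 (the USM): a user's password is expanded and hashed into a master key, which is then bound to one agent's snmpEngineID so that a key stolen from one device is useless on another. A minimal sketch using MD5, one of the two hash choices in RFC 3414 (function names here are my own):

```python
import hashlib

def password_to_key(password: bytes) -> bytes:
    """RFC 3414 A.2.1: hash 1 MiB of the password repeated over and over."""
    md5 = hashlib.md5()
    count = 0
    index = 0
    while count < 1048576:
        chunk = bytearray()
        for _ in range(64):
            chunk.append(password[index % len(password)])
            index += 1
        md5.update(bytes(chunk))
        count += 64
    return md5.digest()

def localize_key(ku: bytes, engine_id: bytes) -> bytes:
    """Bind the user's master key to a particular agent's snmpEngineID."""
    return hashlib.md5(ku + engine_id + ku).digest()
```

RFC 3414 appendix A supplies official test vectors (e.g. for the password "maplesyrup") against which an implementation like this can be checked.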

SNMP was made with one design in mind… to be simple. SNMP is a simple protocol that can
be used on just about any networking device in use today. In some environments it's used
heavily; in others it's scarce. Some view it as a security threat; others see it as a way to
efficiently manage some of their key systems. However you decide to see it, SNMP is easy to
use, easy to set up, and not very difficult to understand.

The SNMP protocol was designed to provide a "simple" method of centralizing the
management of TCP/IP-based networks – plain and simple. If you want to manage devices
from a central location, the SNMP protocol is what facilitates the transfer of data from the
client portion of the equation (the device you are monitoring) to the server portion, where the
data is centralized in logs for viewing and analysis. Many application vendors supply network
management software: IBM's Tivoli, Microsoft's MOM, and HP OpenView are three of the
more than 100 applications available today to manage just about anything imaginable. The
protocol is what makes this happen. The goals of the original SNMP protocols revolved around
one main factor that is still in use today: remote management of devices.
SNMP uses UDP

UDP stands for User Datagram Protocol. It contrasts with TCP, the Transmission Control
Protocol, which is a very reliable but higher-overhead protocol.

User Datagram Protocol is very low overhead, fast, and unreliable. It is defined by RFC 768.
UDP is easier to implement and use than a more complex protocol such as TCP. It does,
however, provide plenty of functionality to allow a central manager station to communicate
with a remote agent that resides on any managed device it can reach. The unreliability shows
in the lack of checks and balances: where TCP waits for an acknowledgment after sending
and resends if it doesn't hear back, UDP simply sends and moves on. Since polling of devices
usually happens on a cyclic schedule, common sense says that if you miss an event you'll
catch it next time… the tradeoff being that the low-overhead protocol is simple to use and
doesn't eat up your bandwidth like TCP-based applications going across your WAN.
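The fire-and-forget behavior described above can be seen with Python's standard socket module. This is an illustrative sketch, not real SNMP: the payload is a made-up string, and an ephemeral loopback port stands in for SNMP's well-known ports 161/162.

```python
import socket

# "Agent" side: a UDP socket bound to an ephemeral loopback port
# (real SNMP agents listen on UDP 161, which needs privileges).
agent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
agent.bind(("127.0.0.1", 0))
agent.settimeout(2.0)
agent_addr = agent.getsockname()

# "Manager" side: sendto() returns as soon as the datagram is handed to
# the OS -- no connection setup, no acknowledgment, no retransmission.
manager = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
manager.sendto(b"get sysUpTime", agent_addr)

data, sender = agent.recvfrom(1024)  # the agent simply reads the datagram
```

If the datagram were lost, the manager would never know; it is the cyclic polling described above that papers over the gap.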

SNMP Operation

SNMP's design is pretty simple. There are two main players in SNMP: the manager and the
agent. The manager is generally the 'main' station, such as HP OpenView. The agent is the
SNMP software running on a client system you are trying to monitor.

The manager is usually a software program running on a workstation or larger computer that
communicates with agent processes running on each device being monitored. Agents can be
found on switches, firewalls, servers, wireless access points, routers, hubs, and even users'
workstations – the list goes on and on. As seen in the illustration, the manager polls the agents
with requests for information, and the agents respond with the information requested.

Network Management Station (NMS)

The manager is also called a Network Management Station or NMS for short. The software
used to create the NMS varies in functionality as well as expense. You can get cheaper
applications with lesser functionality or pay through the nose and get the Lamborghini of NMS
systems. Other functionalities of the NMS include reporting features, network topology
mapping and documenting, tools to allow you to monitor the traffic on your network, and so
on. Some management consoles can also produce trend analysis reports. These types of reports
can help you do capacity planning and set long-range goals.

SNMP Primitives

SNMP has three control primitives that initiate data flow from the requester, which is usually
the manager: get, get-next, and set. The manager uses the get primitive to retrieve a single
piece of information from an agent. You would use get-next if you had more than one item:
when the data the manager needs from the agent consists of more than one item, this primitive
retrieves the data sequentially – for example, a table of values. You can use set when you want
to set a particular value: the manager can use this primitive to request that the agent running on
the remote device set a particular variable to a certain value. There are two control primitives
the responder (usually the agent) uses to reply: get-response and trap. One is used in response
to the requester's direct query (get-response) and the other is an asynchronous message to get
the requester's attention (trap). The manager doesn't always initiate – sometimes the agent can
as well. Although SNMP exchanges are usually initiated by the manager software, the trap
primitive is used when the agent needs to inform the manager of some important event. This is
commonly known as a 'trap' sent by the agent to the NMS.
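A toy dispatcher can make the primitives concrete. The variable names and values below are invented for illustration, and a real agent answers with BER-encoded PDUs rather than Python tuples; this is only a sketch of the request/response logic.

```python
# Toy agent state: variables kept in sorted order so "get-next" can walk
# them one after another.  Names and values are illustrative, not a real MIB.
variables = {"ifNumber": 2, "sysName": "router1", "sysUpTime": 42}

def agent_handle(primitive, name=None, value=None):
    if primitive == "get":            # fetch a single item
        return ("get-response", variables.get(name))
    if primitive == "get-next":       # fetch the item that follows `name`
        later = [k for k in sorted(variables) if name is None or k > name]
        if not later:
            return ("get-response", None)       # walked past the last item
        return ("get-response", (later[0], variables[later[0]]))
    if primitive == "set":            # ask the agent to write a value
        variables[name] = value
        return ("get-response", value)
    raise ValueError("unknown primitive")

def agent_trap(event):
    """The agent-initiated, unsolicited notification described above."""
    return ("trap", event)
```

Repeatedly feeding each returned name back into `get-next` is exactly how a manager walks a table of values.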

The Management Information Base (MIB)

We just learned what the primitives are: the agent and the manager exchanging data. The data
they exchange also has a name. The types of data the agent and manager exchange are defined
by a database called the management information base (MIB). The MIB is a virtual information
store. Remember, it is a small database of information that resides on the agent; information
collected by the agent is stored in the MIB. The MIB is precisely defined; the current Internet
standard MIB contains more than a thousand objects. Each object in the MIB represents some
specific entity on the managed device.
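Each MIB object is named by an object identifier (OID) and holds one value. The OIDs below are real ones from the standard mib-2 "system" group; the stored values are made up, and the flat dictionary is a simplification of the MIB's actual tree structure.

```python
# Sketch of a tiny MIB view on an agent: dotted OIDs mapped to the values
# the agent currently holds.  Values here are invented for illustration.
mib = {
    "1.3.6.1.2.1.1.1.0": "Example router, version 1.0",  # sysDescr
    "1.3.6.1.2.1.1.3.0": 123456,                         # sysUpTime
    "1.3.6.1.2.1.1.5.0": "router1",                      # sysName
}

def mib_lookup(oid):
    """Each object in the MIB represents one specific entity on the device."""
    return mib.get(oid)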

SNMPv2 and SNMPv3

As with other TCP/IP-related protocols, anything designed before the push for IPv6 (or IPng)
tends to have security weaknesses such as passwords sent in cleartext. SNMP in its original
form is very susceptible to attack if not secured properly: messages sent in cleartext expose
community string passwords, and the default community strings of "public" and "private"
could be guessed by anyone who knew how to exploit SNMP. Despite these inherent
weaknesses, SNMP in its original implementation is still very simple to use and has been
widely adopted throughout the industry. SNMP in its first version lacked encryption and
authentication mechanisms. So, while SNMPv1 was good enough for many purposes, work
began to make it better with SNMPv2 in 1994. Aside from some minor enhancements, the
main updates in this version are two new pieces of functionality: traps can be sent from one
NMS to another NMS, and a 'get-bulk' operation allows larger amounts of information to be
retrieved in one request. SNMPv3, still being worked on at the time of writing, incorporates
the best of both earlier versions and adds enhanced security. SNMPv3 provides secure access
to devices through a combination of authenticating and encrypting packets over the network.
The security features provided in SNMPv3 are message integrity, which ensures that a packet
has not been tampered with in transit; authentication, which determines that the message is
from a valid source; and encryption, which secures the packet by scrambling its contents.
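The message integrity and authentication features can be sketched with an HMAC, which is what SNMPv3's User Security Model builds on (HMAC-MD5-96 or HMAC-SHA-96 in RFC 3414). This sketch uses a placeholder key and full-length tags rather than the RFC's truncated 96-bit tags.

```python
import hashlib
import hmac

KEY = b"localized-user-key"   # placeholder; USM derives a real localized key

def protect(message: bytes) -> bytes:
    """Tag a message so a receiver sharing KEY can detect tampering."""
    return hmac.new(KEY, message, hashlib.sha1).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Integrity + authentication: only a holder of KEY makes a valid tag."""
    return hmac.compare_digest(protect(message), tag)
```

A single flipped byte in transit makes verification fail, which is the "message integrity" property the text describes; encryption of the payload would be layered on separately.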

Intruders and Viruses

• Intruders

o Intrusion Techniques

o Password Protection

o Password Selection Strategies

o Intrusion Detection

• Viruses and Related Threats

– Malicious Programs

– The Nature of Viruses

– Antivirus Approaches

The problem of intruders in computer networks is rather old. In fact, it has been persistent since
the beginning of the computer age. One of the first official documents concerning computer
security and intruders is from 1980: the so-called Anderson report [Ande1980]. Its contents
point out how current the threat of intruders was even back then. The Anderson report
[Ande1980] defines a lot of intrusion scenarios that are still up-to-date and applicable, which
is one of the reasons it is still referred to today. On this account, section 2 of this article
explains the different types of intruders and their characteristics.

The following section presents several intrusion detection techniques and how intrusions can
be prevented. A promising approach for intrusion detection is introduced and its mode of
operation is briefly depicted. Considering an example of the effectiveness of this approach,
we will show how the intrusion detection of this tool works in practice.

Whereas section 3 deals with closing security gaps by means of intrusion detection, section 4
brings out security issues regarding password management on UNIX, and it describes general
problems of password selection. Good passwords need to be distinguished from bad passwords
in order to make it more difficult for attackers to guess passwords. We will present some of
the techniques that claim to be solutions to these problems and discuss their effectiveness.

TYPES OF INTRUDERS

The term "intruders" comprises more than just human attackers who manage to gain access
to computer resources that were not meant to be used by them in the first place. Apart from
these human attackers, who are popularly called "hackers", intruders can be computer
programs that seem to be useful but contain secret functionality to invade a system or a
resource. These programs are also known as Trojan horses. Programs containing viruses can
act as intruders too. Computer systems can be any kind of internal network, e.g. within a
company. Computer resources can be workstations, mobile computers, as well as computer
programs. Although we don't need to distinguish between human attackers and computer
programs that perform illicit actions, we need to know some characteristics that define
intruders. One has to keep in mind that the following definitions apply not only to human
beings but to illicit computer programs too, although below we will talk about "individuals"
acting in different types of threat scenarios. This is done in accordance with most of the
literature on this subject.

In general, three types of intruders can be distinguished: the misfeasor, the masquerader, and
the clandestine user. The definitions of these terms can be traced back to [Ande1980], which
establishes them in detail. To refrain from repeating an exhaustive list of definitions, only the
important differences in the characteristics of misfeasor, clandestine user, and masquerader
will be addressed.

Misfeasor
Imagine someone who emails blueprints and schematics, on which the company he works for
holds a patent, to his home email account in order to sell them to a competitor. Another
example of such a misuse of one's privileges is printing offensive material at work. Nowadays
we can take for granted that someone has access to an email account or a printer at work. It is
obvious that no data was accessed without authorization in either of these examples. However,
the user misused some of his privileges.

On this account we define a misfeasor as an individual who works within the scope of his
privileges but misuses them.

Clandestine user
Another user might take advantage of a security hole in the operating system in order to gain
administrative privileges on a computer resource. How this can be achieved on a recent
operating system will be shown in section 3.3. We define a clandestine user as an individual
who seizes supervisory control to disengage or avoid security mechanisms of the system, such
as audit and access controls.

Masquerader
A third individual could steal another user's login id and the associated password. If this data
is at the disposal of an attacker, he can use the system incognito for his illicit intentions. Yet
sometimes stealing ids and passwords is not even necessary, because some users choose very
simple passwords: a mere repetition of the login id, some easily accessible information related
to their personal life such as their spouse's name, or a password that is very short – for example
only 4 characters or even shorter.

We define a masquerader as an individual who overcomes a system's access control to exploit
a legitimate user's account.
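The weak-password patterns just listed can be checked mechanically. A rough heuristic sketch, with invented function and parameter names (a real checker would also consult dictionaries of common passwords):

```python
def is_weak(password, login_id, personal_words=()):
    """Flags the weak patterns described above: very short passwords,
    mere repetitions of the login id, and personal information."""
    p = password.lower()
    lid = login_id.lower()
    if len(password) <= 4:                        # very short, e.g. 4 chars
        return True
    if lid and p == lid * (len(p) // len(lid)):   # repetition of login id
        return True
    if any(w.lower() in p for w in personal_words):  # spouse's name, etc.
        return True
    return False
```

A masquerader's guessing attack succeeds exactly against passwords a check like this would have rejected.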

Common to misfeasor, clandestine user, and masquerader is that they either aim to increase
the amount of their privileges or use the system in an unforeseen way. If a system is tricked
by an attacker into providing users with privileges they did not hold before, the system is in a
compromised state.

It has to be noted that misfeasors and clandestine users are internal attackers. That means they
are initially legitimate users holding some privileges in the internal network, whereas the
masquerader can be an attacker from outside the network if he happens to correctly guess a
password.

IDENTIFYING INTRUDERS
Typically, everyone stores plenty of sensitive data in one's user account: personal data, address
books, data one is required by law to carefully protect, and data that grants access to other
systems or that is supposed to prove one's identity, for example. It is fairly easy to find
examples for each of these types of data: personal data could be emails from your spouse;
address books might contain phone numbers and addresses of the suppliers the company does
business with; time tracking of engineers has to be handled with great care. Furthermore, if
one has stored passwords or private and public keys in one's account, the security systems that
try to grant secure access to other systems or prove one's identity by these means will be
useless. Moreover, if such sensitive data can be accessed by others, the owner runs a high risk
of financial losses and personal harm.

INTRUSION DETECTION
The threats posed by attackers have to be addressed. To this end, intrusion detection techniques
have been developed to close security gaps in operating systems and network access controls.
Below, different types of intrusion detection techniques will be introduced briefly and an
overview of their weaknesses and strengths will be given as they appear in [Stal2003] and
[Ilgu1995].

• Threshold Detection

Threshold Detection is one of the most rudimentary intrusion detection techniques compared
to the others. The idea of this approach is to record each occurrence of a suspicious event and
to compare the count to a threshold number. However, it turns out that establishing threshold
numbers, as well as rating the security relevance of events, is a rather difficult task that is often
based on experience and intuition. An implementation of this approach, called NADIR, was
developed at Los Alamos National Laboratory.
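A minimal sketch of the idea, with an arbitrary threshold and an invented event type; as the text notes, picking the threshold itself is the hard part:

```python
from collections import Counter

THRESHOLD = 3              # chosen by experience and intuition, per the text
failed_logins = Counter()  # occurrences of one suspicious event, per user

def record_event(user):
    """Record one suspicious event; return True when an alarm should fire."""
    failed_logins[user] += 1
    return failed_logins[user] > THRESHOLD
```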

• Anomaly Detection

Anomaly Detection is one of the earliest approaches that try to meet the requirements described
in [Ande1980] to distinguish masquerader, misfeasor, and clandestine user. Implementations
of this approach are realized in statistical or rule-based forms. Typically, anomaly detection
requires little knowledge of the actual system beforehand; in fact, usage patterns are established
automatically, for example by means of neural networks. Intrusion detection systems that have
implemented this approach are IDES, Wisdom & Sense, and TIM.
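The statistical form can be sketched in a few lines: flag an observation that falls far outside a user's established usage pattern. The metric (say, logins per hour) and the cutoff below are invented for illustration.

```python
import statistics

def is_anomalous(history, observation, k=3.0):
    """Flag observations more than k standard deviations from the
    user's historical mean (e.g. logins per hour)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
    return abs(observation - mean) > k * stdev
```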

• Rule-based Penetration Identification

Rule-based Penetration Identification systems are expert systems that recognize single events
as well as sequences of events. The foundation pillar of this approach is a suspicion score kept
for each user. Initially this score is zero, and the more suspicious a user becomes, the higher
his score. Examples that implement this technique are IDES, NADIR, and Wisdom & Sense.
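The per-user score can be sketched as follows; the rules and their weights are invented for illustration, whereas a real expert system would also match sequences of events:

```python
# Each rule assigns a weight to one kind of suspicious event.
RULES = {
    "failed_login": 1,
    "access_outside_hours": 2,
    "read_password_file": 5,
}
scores = {}  # per-user suspicion score, starting at zero

def apply_rule(user, event):
    """Raise the user's score by the matched rule's weight."""
    scores[user] = scores.get(user, 0) + RULES.get(event, 0)
    return scores[user]
```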

• Model-based Intrusion Detection

A higher level of abstraction than in the approaches above is characteristic of this intrusion
detection technique. The objective of Model-based Intrusion Detection is to build penetration
scenarios of the network rather than to characterize the behavior of a specific user. For
identifying penetrations, the pieces of evidence are evaluated against a hypothesis.

VIRUS AND RELATED THREATS

VIRUS
A computer virus is a kind of malicious software written intentionally to enter a computer
without the user's permission or knowledge, with an ability to replicate itself, thus continuing
to spread. Some viruses do little but replicate; others can cause severe harm or adversely affect
programs and the performance of the system.

TYPES OF VIRUS

Resident Viruses
This type of virus is permanent: it dwells in RAM. From there it can overcome and interrupt
all of the operations executed by the system, corrupting files and programs that are opened,
closed, copied, renamed, etc.

Examples include: Randex, CMJ, Meve, and MrKlunky.

Boot Virus
This type of virus affects the boot sector of a floppy or hard disk. This is a crucial part of a
disk, in which information on the disk itself is stored together with a program that makes it
possible to boot (start) the computer from the disk.
The best way of avoiding boot viruses is to ensure that floppy disks are write-protected and
to never start your computer with an unknown floppy disk in the disk drive.

Examples of boot viruses include: Polyboot.B, AntiEXE.

Macro Virus
Macro viruses infect files that are created using certain applications or programs that contain
macros. These mini-programs make it possible to automate a series of operations so that they
are performed as a single action, thereby saving the user from having to carry them out one by
one.

Examples of macro viruses: Relax, Melissa.A, Bablas, O97M/Y2K.

Polymorphic Virus
Polymorphic viruses encrypt or encode themselves in a different way (using different
algorithms and encryption keys) every time they infect a system. This makes it impossible for
antivirus programs to find them using string or signature searches (because they are different
in each encryption) and also enables them to create a large number of copies of themselves.

Examples include: Elkern, Marburg, Satan Bug, and Tuareg.

Parasitic Viruses
Parasitic viruses modify the code of the infected file. The infected file remains partially or
fully functional. Parasitic viruses are grouped according to the section of the file they write
their code to:

 Prepending: the malicious code is written to the beginning of the file
 Appending: the malicious code is written to the end of the file
 Inserting: the malicious code is inserted in the middle of the file

Inserting file viruses use a variety of methods to write code to the middle of a file: they either
move parts of the original file to the end or copy their own code to empty sections of the target
file.

WORMS
A computer worm is a self-replicating computer program. It uses a network to send copies of
itself to other nodes (computers on the network) and it may do so without any user
intervention. Unlike a virus, it does not need to attach itself to an existing program.

LOGIC BOMB
A logic bomb is a piece of code intentionally inserted into a software system that sets off a
malicious function when specified conditions are met. For example, a programmer may hide a
piece of code that starts deleting files (such as a trigger watching the salary database), should
they ever leave the company.

TROJAN HORSES
The Trojan horse, also known as a trojan, in the context of computing and software describes
a class of computer threats (malware) that appears to perform a desirable function but in fact
performs undisclosed malicious functions that allow unauthorized access to the host machine,
giving attackers the ability to save their files on the user's computer or even watch the user's
screen and control the computer.

MALWARE
Malware is software designed to infiltrate or damage a computer system without the owner's
informed consent. The expression is a general term used by computer professionals to mean a
variety of forms of hostile, intrusive, or annoying software or program code.

SPYWARE
Spyware is computer software that is installed surreptitiously on a personal computer to
collect information about a user, their computer or browsing habits without the user's
informed consent.

Firewalls

• Firewall Design Principles

• Firewall Characteristics

• Types of Firewalls

 Packet-filtering routers

 Application-level gateways

 Circuit-level gateways

Trusted Systems:

 Data Access Control.

 The Concept of Trusted Systems.

 Trojan Horse Defense

Network firewalls operate at different layers of the OSI and TCP/IP network models. The
lowest layer at which a firewall can operate is the third level, which is the network layer in
the OSI model and the Internet Protocol layer in TCP/IP. At this layer a firewall can determine
whether a packet is from a trusted source but cannot grant or deny access based on what it
contains. Firewalls that operate at the highest layer, the application layer, know a large amount
of information, including the source and the packet contents; therefore, they can be much more
selective in granting access. This may give the impression that firewalls functioning at a
higher layer must be better, which is not necessarily the case. The lower the layer at which
the packet is intercepted, the more secure the system: if the intruder cannot get past the third
layer, it is impossible to gain control of the operating system.

Firewalls fall into four broad categories: packet filters, circuit level gateways, application
level gateways, and stateful multilayer inspection firewalls. Packet filtering firewalls operate
at the network level of the OSI model or the IP layer of TCP/IP. In a packet filtering firewall,
each packet is compared to a set of rules before it is forwarded. The firewall can drop the
packet, forward it, or send a message to the source. Circuit level gateways operate at the
session layer of the OSI model, or the TCP layer of TCP/IP. Circuit level gateways examine
each connection setup to ensure that it follows legitimate TCP handshaking. Application level
gateways, or proxies, operate at the application layer. Packets received or leaving cannot
access services for which there is no proxy. Stateful multilayer inspection firewalls combine
aspects of the other three types of firewalls: they filter packets at the network layer, determine
whether packets are valid at the session layer, and assess the contents of packets at the
application layer.

Firewall Architectures
After deciding the security requirements for the network, the first step in designing a firewall
is deciding on a basic architecture. There are two classes of firewall architectures: single layer
and multiple layer. In a single layer architecture, one host is allocated all firewall functions.
This method is usually chosen when either cost is a key factor or there are only two networks
to connect. The advantage of this architecture is that any changes to the firewall need only be
made at a single host. The biggest disadvantage of the single layer approach is that it provides
a single entry point: if this entry point is breached, the entire network becomes vulnerable to
an intruder.

In a multiple layer architecture the firewall functions are distributed among two or more hosts,
normally connected in series. This method is more difficult to design and manage, and it is
also more costly, but it can provide significantly greater security by diversifying the firewall
defense. A common design for this type of architecture uses two firewall hosts with a
demilitarized zone (DMZ) between them separating the Internet and the internal network.
With this setup, traffic between the internal network and the Internet must pass through two
firewalls and the DMZ.
Firewall Types

After the security requirements are established and a basic architecture is selected, firewall
functions can be chosen to meet these needs. The following is a detailed discussion of the four
firewall categories:

Packet Filtering Firewalls:

The first generation of firewall architectures appeared around 1985 and came out of Cisco's
IOS software division. These are called packet filter firewalls.[4] Packet filtering is usually
performed by a router as part of a firewall. A normal router decides where to direct the data;
a packet filtering router decides whether it should forward the data at all. Packet filtering rules
can be set on the following: the physical network interface the packet arrives on; the source
or destination IP address; the type of transport layer (TCP, UDP, ICMP); or the transport layer
source or destination ports. Packet filtering firewalls are low cost, have only a small effect on
network performance, and do not require client computers to be configured in any particular
way. However, packet filtering firewalls are not considered to be very secure on their own
because they do not understand application layer protocols; therefore, they cannot make
content-based decisions on the packets, which makes them less secure than application layer
and circuit level firewalls. Another disadvantage of packet filtering firewalls is that they are
stateless and do not retain the state of a connection. They also have very little or no logging
capability, which makes it hard to detect whether the network is under attack. Testing the grant
and deny rules is also difficult, which may leave the network vulnerable or incorrectly
configured.
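A first-match rule table of the kind described can be sketched as follows. The addresses, ports, and rules themselves are invented for illustration, and the prefix matching is a simplification of real CIDR matching:

```python
# Each rule matches on the header fields named above; first match wins.
RULES = [
    # (src prefix, dst prefix, proto, dst port, action)
    ("10.0.0.", "any", "tcp", 22,  "deny"),   # block SSH from this prefix
    ("any",     "any", "tcp", 80,  "allow"),  # web traffic
    ("any",     "any", "udp", 161, "allow"),  # SNMP
]
DEFAULT = "deny"  # stateless: anything unmatched is simply dropped

def filter_packet(src, dst, proto, port):
    for rsrc, rdst, rproto, rport, action in RULES:
        if ((rsrc == "any" or src.startswith(rsrc)) and
                (rdst == "any" or dst.startswith(rdst)) and
                proto == rproto and port == rport):
            return action
    return DEFAULT
```

Note what the sketch cannot do, matching the text's criticism: there is no connection state and no view of the payload, only header fields.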

Circuit Level Gateways:

Around 1989-1990, Dave Presotto and Howard Trickey of AT&T Bell Labs pioneered the
second generation of firewall architectures with research in circuit relays, which were called
circuit level gateways.[4] Circuit level gateways are used for TCP connections to observe
handshaking between packets to ensure a requested session is legitimate. Normally, the
gateway stores the following information: a unique session identifier, the state of the
connection (i.e., handshake, established, or closing), sequencing information, the source or
destination IP address, and the physical network interface through which the packet arrives or
departs. The firewall then checks to see whether the sending host has permission to send to
the destination, and whether the receiving host has permission to receive from the sender. If
the connection is acceptable, all packets are routed through the firewall with no more security
tests. The advantages of circuit level gateways are that they are usually faster than application
layer firewalls, because they perform fewer evaluations, and that they can protect a network
by blocking connections between specific Internet sources and internal hosts. The main
disadvantages are that they cannot restrict access to protocol subsets other than TCP and that,
similarly to packet filtering, testing the grant and deny rules can be difficult, which may leave
the network vulnerable or incorrectly configured.

Application Level Gateways:

The third generation of firewall architectures, called application level gateways, was
independently researched and developed during the late 1980s and early 1990s, mainly by
Gene Spafford of Purdue University, Marcus Ranum, and Bill Cheswick of AT&T Bell
Laboratories.[4] Application level gateways, or proxy firewalls, are software applications with
two primary modes (proxy server or proxy client). When a user on a trusted network wants to
connect to a service on an untrusted network such as the Internet, the request is directed to the
proxy server on the firewall. The proxy server pretends to be the real server on the Internet. It
checks the request and decides whether to permit or deny it based on a set of rules. If the
request is approved, the server passes the request to the proxy client, which contacts the real
server on the Internet. Connections from the Internet are made to the proxy client, which then
passes them on to the proxy server for delivery to the real client. This method ensures that all
incoming connections are always made with the proxy client, while outgoing connections are
always made with the proxy server. Therefore, there is no direct connection between the
trusted and untrusted networks. The main advantages are that application level gateways can
set rules based on high-level protocols, maintain state information about the communications
passing through the firewall server, and keep detailed activity records. The main disadvantages
are that their complex filtering and access control decisions can require significant computing
resources, which can cause performance delays, and that they are vulnerable to operating
system and application level bugs.

Stateful Multilayer Inspection Firewalls:

Check Point Software released the first commercial product based on this fourth generation
architecture, called stateful multilayer inspection firewalls, in 1994.[4] Stateful multilayer
inspection firewalls provide the best security of the four firewall types by monitoring the data
being communicated at the application socket or port layer, as well as at the protocol and
address level, to verify that the request is functioning as expected. For example, if during an
FTP session the port numbers being used or an IP address were to change, the firewall would
not permit the connection to continue. Another advantage is that when a specific session is
complete, any ports that were being used are closed. Stateful inspection systems can
dynamically open and close ports for each session, which differs from basic packet filtering,
which leaves ports in a constant opened or closed state. The main disadvantage of stateful
multilayer inspection firewalls is that they can be costly, because they require the purchase of
additional hardware and/or software that is not normally packaged with a network device.
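The dynamic opening and closing of ports described above can be sketched as a session table keyed by the connection endpoints. The flag handling below is a deliberate simplification of real TCP state tracking, and the addresses in the usage are invented:

```python
sessions = set()  # live sessions, keyed by (src, sport, dst, dport)

def packet(src, sport, dst, dport, flags):
    """Accept a packet only if it legitimately opens a session or
    belongs to one already in the table."""
    key = (src, sport, dst, dport)
    if flags == "SYN":           # opening handshake: admit and track it
        sessions.add(key)
        return True
    if key not in sessions:      # e.g. a port changed mid-session: refuse
        return False
    if flags == "FIN":           # session complete: close its entry
        sessions.discard(key)
    return True
```

After the FIN, the same endpoints are refused until a fresh handshake, which is the "ports closed when the session is complete" behavior the text describes.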
