
Project Report

(6 Weeks Industrial Internship)

“Information and Cyber Security”

Submitted By
Anshul Tayal

101247

X-6

CSE

Under the Guidance of

Mr Sahil Baghla

Co-ordinator, Internship Program
(EH1-Infotech)

JUNE – JULY, 2013

DECLARATION

Date: ________________

I, the undersigned, Mr Anshul Tayal, a student of Jaypee University of
Information Technology, hereby declare that the work presented in this
project report is my own and has been carried out under the supervision of
Mr. Sahil Baghla of EH1-Infotech.

This report has not been submitted previously to any other university for
any examination.

______________________

Anshul Tayal

Acknowledgement

I have put great effort into this project. However, it would not have been
possible without the kind support and help of many individuals and
organizations. I would like to extend my sincere thanks to all of them.

I am highly indebted to Mr Sahil Baghla for his guidance and constant
supervision, for providing the necessary information regarding the project,
and for his support in completing it.

I would like to express my gratitude towards my parents and the members of
EH1-Infotech for their kind co-operation and encouragement, which helped me
in the completion of this project.

I would like to express my special gratitude and thanks to the industry
persons for giving me their attention and time.

My thanks and appreciation also go to my colleagues who worked with me on
the project and to the people who have willingly helped me out with their
abilities.

INDEX

Declaration

Acknowledgement

Chapter 1 Introduction

Chapter 2 Windows Security Implementation


 Introduction to Malware on Windows

 Password Restoration

 Firewall Implementation

 Antivirus

 Key Logger and Employee Monitoring System

Chapter 3 LAN Scanning


 Introduction to IP

 IP Address Scanning

 Port Scanning

 DNS Scanning

 GFI LAN Guard

Chapter 4 Wi-Fi Security


 Introduction to WEP Security

 Introduction to WPA/WPA2 Security

 Errors in these Security Schemes

Chapter 5 Website Scanning and Security
 Introduction to Website Security

 Live WHOIS

 Way Back Machine

 SQL Injection and Google Dorks

 Phishing Page

 Acunetix

Bibliography

About The Project

I worked on this project, named “Information and Cyber Security”. In this
project, I performed all operations in labs and identified the
vulnerabilities that have to be removed from a system. This covers Windows
security, LAN security, Wi-Fi security, and website vulnerabilities and
their mitigation.

Beyond securing systems, I also found some vulnerable websites with the
help of the art of Google surfing. I performed penetration testing of these
websites, found their weak points, and describe how to protect yourself
from theft.

Chapter 1
Introduction
Computer security (also known as IT security) is information
security as applied to computers and networks. The field covers all
the processes and mechanisms by which computer-based equipment,
information and services are protected from unintended or
unauthorized access, change or destruction. Computer security also
includes protection from unplanned events and natural disasters.

Consumers, businesses and government rely on the internet and web


services for information and communications. Managing the transfer
of data and access to the internet requires reliability, privacy and
security in cyberspace. In our digital world, an attack on one
computer may affect multiple systems. Unauthorized access may
result in financial loss, release of confidential information, damaged
computer facilities, costly staff time to restore operations,
embarrassment by cyber vandals and diminished reputation.

Organizations use systems security professionals to design,


configure, implement, manage, support and secure reliable computer
systems that provide security and privacy to businesses and
consumers. The responsibilities of professionals in this field have
increased in recent years as cyber-attacks have become more
sophisticated. Employees with skills and knowledge in systems
security have become part of many information technology
infrastructure teams.

Information theft has led to the compromise of intellectual property,


credit card information, electronic funds, identity theft, and a host of
other negative consequences. Electronic theft or cyber-crime affects
individuals, corporations and government entities. Breaches are
routinely perpetrated by ill-intentioned employees, ex-employees,
organized crime groups, and foreign government-sponsored
espionage groups.

While government mandates are driving organizations to address
compliance initiatives, the security of many data assets has seen
limited improvement. Many organizations are struggling quietly, having
been victimized by information theft, and are seeking to understand the
potential consequences and the methods of recovery.
Information Defence helps organizations to identify threats to
intellectual property and sensitive data assets along with the
necessary measures to prepare for, prevent, and respond to cyber-
crime and data theft.

1.1 Cyberspace in Our Daily Lives

Our daily life, economic vitality, and national security depend on a


stable, safe, and resilient cyberspace. We rely on this vast array of
networks to communicate and travel, power our homes, run our
economy, and provide government services. Yet cyber intrusions and
attacks have increased dramatically over the last decade, exposing
sensitive personal and business information, disrupting critical
operations, and imposing high costs on the economy.

1.2 Securing the Cyber Ecosystem

DHS plays a key role in securing the federal government's civilian


cyber networks and helping to secure the broader cyber ecosystem
through:

 Partnerships with owners and operators of critical


infrastructure such as financial systems, chemical plants, and
water and electric utilities
 The release of actionable cyber alerts
 Investigations and arrests of cyber criminals, and
 Education about how the public can stay safe online.

Combating cyber threats is a shared responsibility. The public,


private, and non-profit sectors and every level of government – all
have an important role to play.

1.3 Responding Quickly to Cyber Vulnerabilities

By maintaining a team of skilled cyber security professionals and


partnering with the private sector, organizations have been able to
effectively respond to cyber incidents, provide technical assistance to
owners and operators of critical infrastructure, and disseminate
timely and actionable notifications regarding current and potential
security threats and vulnerabilities. By leveraging the resources of
the ICE Cyber Crimes Centre, Organizations have been integrally
involved in Internet investigations concerning identity and document
fraud, financial fraud, and smuggling.

Chapter 2
Windows Security Implementation

2.1 Introduction to Malware


_____________________________________

Malware, short for malicious software, is software used or


programmed by attackers to disrupt computer operation, gather
sensitive information, or gain access to private computer systems. It
can appear in the form of code, scripts, active content, and other
software. 'Malware' is a general term used to refer to a variety of
forms of hostile or intrusive software.

Malware includes computer viruses, ransomware, worms, Trojan


horses, rootkits, key loggers, dialers, spyware, adware,
malicious BHOs, rogue security software and other malicious
programs; the majority of active malware threats are usually worms
or Trojans rather than viruses. In law, malware is sometimes known
as a computer contaminant, as in the legal codes of several U.S.
states. Malware is different from defective software, which is
legitimate software but contains harmful bugs that were not
corrected before release. However, some malware is disguised as
genuine software, and may come from an official company website in
the form of a useful or attractive program which has the harmful
malware embedded in it along with additional tracking software
that gathers marketing statistics.

Software such as anti-virus tools, anti-malware tools, and firewalls is
relied upon by home users and by small and large organisations around the
globe to safeguard against malware attacks; such software helps to
identify and prevent the further spread of malware in the network.

2.1.1 Vulnerability to malware

2.1.1.1 Security defects in software


Malware exploits security defects in the design of the operating system, in
applications (such as browsers—avoid using Internet Explorer 8 or earlier),
or in (old versions of) browser plugins such as Adobe Flash Player, Adobe
Acrobat / Reader, or Java (see Java SE critical security issues). Sometimes
even installing new versions of such plugins does not automatically uninstall
old versions. Security advisories from such companies announce security-
related updates. Common vulnerabilities are assigned CVE IDs and listed in
the US National Vulnerability Database. Secunia PSI is an example of
software, free for personal use, that will check a PC for vulnerable
out-of-date software and attempt to update it.
Most systems contain bugs, or loopholes, which may be exploited by malware.
A typical example is a buffer-overrun vulnerability, in which an interface
designed to store data in a small area of memory allows the caller to
supply more data than will fit. This extra data then overwrites memory
beyond the end of the buffer, including adjacent data and executable
structures. In this manner, malware can force the system to execute
malicious code, by replacing legitimate code with its own payload of
instructions (or data values) copied into live memory, outside the buffer
area.
2.1.1.2 Insecure design or user error
Originally, PCs had to be booted from floppy disks. Until recently, it was
common for a computer to boot from an external boot device by default. This
meant that the computer would, by default, boot from a floppy disk, USB flash
drive, or CD—and malicious boot code could be used to install malware or
boot into a modified operating system. AutoRun or AutoPlay features may
allow code to be automatically executed from a floppy disk, CD-ROM or USB
device with or without the user’s permission. Older email
software would automatically open HTML email containing malicious
JavaScript code; users may also unwarily open (execute) malicious email
attachments.
2.1.1.3 Over-privileged users and over-privileged code

 Over-privileged users: some systems allow all users to modify their


internal structures. This was the standard operating procedure for early
microcomputer and home computer systems, where there was no
distinction between an Administrator or root, and a regular user of the
system. In some systems, non-administrator users are over-privileged by
design, in the sense that they are allowed to modify internal structures of

the system. In some environments, users are over-privileged because they
have been inappropriately granted administrator or equivalent status.

 Over-privileged code: some systems allow code executed by a user to
access all rights of that user. This was also standard operating procedure
for early microcomputer and home computer systems. Malware, running as
over-privileged code, can use this privilege to subvert the system. Almost
all currently popular operating systems, and also many scripting
applications, grant code too many privileges, usually in the sense that
when a user executes code, the system allows that code all rights of that
user. This makes users vulnerable to malware in the form of e-mail
attachments, which may or may not be disguised.
2.1.1.4 Use of the same operating system

 Weight of numbers: simply because the vast majority of existing malware
is written to attack Windows systems, Windows systems are more vulnerable
to succumbing to malware attacks (regardless of the security strengths or
weaknesses of Windows itself).
 Homogeneity: e.g. when all computers in a network run the same
operating system, then upon exploiting one, a worm can exploit them all.
For example, Microsoft Windows and Mac OS X have such a large share of the
market that concentrating on either could enable an exploited
vulnerability to subvert a large number of systems. Introducing diversity,
purely for the sake of robustness, could increase short-term costs for
training and maintenance; however, having a few diverse nodes would deter
total shutdown of the network, and allow those nodes to help with recovery
of the infected nodes. Such separate, functional redundancy could avoid
the cost of a total shutdown.

2.2 Password Restoration


____________________________

Trinity Rescue Kit, or TRK, is a free live Linux distribution that aims
specifically at recovery and repair operations on Windows machines, but is
equally usable for Linux recovery issues.

Since version 3.4 it has an easy-to-use scrollable text menu that allows
anyone who masters a keyboard and some English to perform maintenance and
repair on a computer, ranging from password resetting over disk clean-up to
virus scanning.
It is possible to boot TRK in three different ways:
-as a bootable CD, which you can burn yourself from a downloadable ISO file
or a self-burning Windows executable
-from a USB stick/disk, installable from Windows or from the bootable TRK
CD
-from the network over PXE: you start one TRK from CD or USB and run all
other computers from that one over the network, without modifying anything
on your local network.
TRK has received an easy-to-use text menu but has equally kept the command
line.
Here’s a summary of some of the most important features, new and old:
-easily reset Windows passwords with the improved winpass tool
-simple and easy menu interface
-5 different virus scan products integrated in a single uniform command
line with online update capability
-winclean, a utility that cleans up all sorts of unnecessary temporary
files on your computer.

2.2.1 The idea behind Trinity Rescue Kit

The developers conceived the idea of creating a free bootable Linux CD
containing all available free tools that can help you in any way with
rescuing your Windows installation, and this is how far it has gotten now,
with thousands of hours of work gone into it. All this is for you, for
free.
Trinity Rescue Kit is based on binaries and scripts from several other
distributions, like Timo’s Rescue CD, Mandriva 2005 and Fedora Core 3 and
4, as well as many original source packages and tools from other distros.
The start-up procedure and methods, several scripts and the overall
concept are completely self-made or at least heavily adapted.

2.3 Firewall Implementation


_______________________________

Firewall protection is a set of security measures for your PC, designed to
keep malware, viruses, and hackers at bay.

Comodo's firewall protects you from viruses, malware, and hackers.

World's #1 free firewall that finds threats and protects your PC

 Fast and hassle-free online experience


 Blocks all Internet attacks
 Monitors in/out connections
 Manages traffic on your PC
 Secures all connections when you are online

Comodo Firewall Pro introduces the next evolution in computer security:


Default Deny Protection (DDP). Most security programs maintain a list of
known malware, and use that list to decide which applications and files

shouldn't access a PC. The problem here is obvious. What if the list of
malware is missing some entries, or isn't up to date?

DDP fixes this problem to ensure complete security. The firewall references a
list of over two million known PC-friendly applications. If a file that is not on
this safe-list knocks on your PC's door, the Firewall immediately alerts you to
the possibility of attacking malware. All this occurs before the malware
infects your computer. It's prevention-based security, the only way to keep
PCs totally safe.
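The default-deny logic described above can be sketched in a few lines. This is an illustrative sketch only, not Comodo's actual implementation; the safe-list contents and the check_application function are hypothetical names introduced here for clarity.

```python
# Illustrative sketch of default-deny (whitelist) checking.
# The safe-list below is hypothetical; a real firewall consults a
# vendor-maintained database of millions of known-good applications.

SAFE_LIST = {"explorer.exe", "notepad.exe", "winword.exe"}

def check_application(name: str) -> str:
    """Default deny: anything not on the safe-list triggers an alert."""
    if name in SAFE_LIST:
        return "allow"
    # Unknown file: alert the user *before* it runs, rather than
    # allowing it and hoping a blacklist catches it later.
    return "alert"

print(check_application("notepad.exe"))  # allow
print(check_application("dropper.exe"))  # alert
```

Contrast this with blacklist-based tools, which default to "allow" and block only files that match known-malware signatures.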

5 top secrets why Comodo Firewall is different

 No complex configuration issues—perfect for amateur users


 Quickly learns user behaviour to deliver personalized protection
 User-friendly, attractive graphical interface
 Lots of configuration options let techies configure things just as they
like
 DDP-based security keeps you informed and PCs safe
One of the first steps in securing a computer is downloading and activating a
quality firewall to repel intruders. Only this free firewall software has access
to Comodo's extensive safe-list of PC-friendly applications, a key component of
Default Deny Protection.

2.3.1 Important Firewall Features

 Default Deny Protection


 Makes sure that only known PC-safe applications execute
 Prevention-based security
 Stops viruses and malware before they access your computer…so it isn't
too late to stop them.

2.3.2 Auto Sandbox Technology

The sandbox is a virtual operating environment for untrusted programs –


ensuring viruses and other malicious software are completely isolated from
the rest of your computer

2.3.3 Personalized alerts

 Firewall remembers which software is allowed to operate and changes


its alerts accordingly.
 Cloud based behaviour analysis system detects zero-day malware
INSTANTLY.
 Cloud based whitelisting of trusted publishers easily identifies safe
files and vendors
 Suppresses operations that could interfere with a user’s gaming
experience such as alerts, virus database updates or scheduled scans.
 Provides users with the ability to lockdown their PC so only known
good applications can run.

2.3.4 Automatic updates

 Stay current with the latest protection.


 Precise and specific alert system
 Warnings specify the level and type of possible threat from each source.
 Easy to configure
 Tell the free firewall to scan upon installation and add all current
applications to the safe-list.

2.4 Antivirus
_____________________

With 125,000 new malicious programs appearing every single day – antivirus
protection is a necessity… not a luxury. Your PC needs effective antivirus
defences… and you deserve an antivirus solution that’s easy to manage.

2.4.1 Protects against all viruses

The cloud-based Kaspersky Security Network gathers data from millions of


participating users’ systems around the world… to help defend you from the
very latest viruses and malware attacks. Potential threats are monitored and
analysed – in real-time – and dangerous actions are completely blocked
before they can cause any harm.

2.4.2 Identifies suspicious websites and phishing websites

Advanced anti-phishing technologies proactively detect fraudulent URLs and


use real-time information from the cloud, to help ensure you’re not tricked
into disclosing your valuable data to phishing websites. Our URL Advisor also
adds colour-coded tags to all web links – to advise you of the danger level of
the link and subsequent pages.

2.4.3 Prevents malware from exploiting vulnerabilities in your PC

If your PC has application or system vulnerabilities that haven’t been updated


with the latest fixes, cyber criminals and malware could gain entry. In
addition to scanning for vulnerabilities, Kaspersky Anti-Virus analyses and
controls the actions of programs that have vulnerabilities – so they can’t
cause any harm.

2.4.4 Compatible with Windows 8

Kaspersky Internet Security is fully compatible with Microsoft’s latest


operating system – Windows 8 – and is integrated with Microsoft’s latest IT
security innovations. In addition, Kaspersky Now – a new application that has
been developed to support Microsoft’s new user interface – lets you monitor
your PC’s security status and launch vital security features.

2.4.5 System Watcher

Even if an unknown piece of malware manages to get onto your PC,


Kaspersky’s unique System Watcher will detect dangerous behaviour and
allow you to undo or rollback most malicious actions.

2.4.6 Better anti-phishing protection

A new anti-phishing engine improves your defences against Internet


fraudsters’ attempts to gain access to your personal information.

2.4.7 Automatic Exploit Prevention

Even if your PC and the applications running on it haven’t been updated
with the latest fixes, Kaspersky Anti-Virus can prevent the exploitation of
vulnerabilities by:

 controlling the launch of executable files by applications with
vulnerabilities
 analysing the behaviour of executable files for any similarities with
malicious programs
 restricting the actions allowed to applications with vulnerabilities
2.4.8 Optimised antivirus databases

With antivirus information provided from the cloud, we’ve significantly


reduced the size of the antivirus databases stored on your PC – which helps to
improve performance and reduce the time taken for installation and updates.

2.4.9 Reduced Battery Drain

When it’s installed on a laptop that is running on battery power,


Kaspersky Anti-Virus automatically reduces its usage of resources – to help
increase the time the laptop can run before needing to be recharged.

2.4.10 Easy-to-use interface

The main interface window is optimised to help boost performance and ease
of use for many popular user scenarios – including launching scans and fixing
problems.

2.4.11 Virtual Keyboard

Virtual Keyboard allows you to use mouse-clicks to enter your banking


information online – so your personal information can’t be tracked or stolen
by key loggers, hackers or identity thieves.

For other applications that are compatible with Microsoft’s new user
interface, Kaspersky technologies scan the applications for viruses – and
infected applications are removed and then replaced with clean applications.

2.5 Key Logger and Employee Monitoring System


________________________________________________________

Employee monitoring, due to the increase in cyber loafing and lawsuits, has
become more widespread and much easier with the use of new and cheaper
technologies. Both employers and employees are concerned with the ethical
implications of constant monitoring. While employers use monitoring devices

to keep track of their employees' actions and productivity, their employees
feel that too much monitoring is an invasion of their privacy. Thus, the ethics
of monitoring employees is explored and current practices are discussed. This
document further provides suggestions for reducing cyber loafing and
encourages institutions to create and effectively communicate ethical
standards for employee monitoring in their firms. The author has included
actual samples of employees' perceptions and feelings from the surveys and
discussions on being monitored.

Some of the features of employee tracking application are:

 Reduced errors and duplication


 Better information management
 Enhanced customer relationships
 Live monitoring
 Current Geo-location of employee
 Capture photos, videos and audios
 Incredibly easy to use
 Boost employee productivity
 Assign jobs to closest employee
 Increase employee accountability
 Management dashboard
 Comprehensive reporting
 Text message notification

Email log delivery - key logger can send you recorded logs through e-mail
delivery at set times - perfect for remote monitoring!

FTP delivery - Key logger can upload recorded logs through FTP delivery.

Network delivery - sends recorded logs through via LAN.

Clipboard logging - capture all text copied to the Windows Clipboard.

Invisible mode - makes it absolutely invisible to anyone. The key logger is
not visible in the task bar, system tray, Windows Task Manager, process
viewers, Start Menu or the Windows Startup list.

Visual surveillance - periodically makes screenshots and stores the


compressed images to log.

Chat monitoring – Key logger is designed to record and monitor both sides
of a conversation in following chats:

 AIM
 Windows Live Messenger 2011
 ICQ 7
 Skype 4
 Yahoo Messenger 10
 Google Talk
 Miranda
 QiP 2010

Security - allows you to protect program settings, Hidden Mode and Log file.

Application monitoring – key logger will record the application that was in
use that received the keystroke!

Time/Date tracking - it allows you to pinpoint the exact time a window


received a keystroke!

Powerful Log Viewer - you can view and save the log as a HTML page or
plain text with key logger Log Viewer.

Small size – Key logger is several times smaller than other programs with
the same features. It has no additional modules and libraries, so its size is
smaller and the performance is higher.

The key logger fully supports Unicode characters, which makes it possible
to record keystrokes that include characters from many other character
sets. It records every keystroke, capturing passwords and all other
invisible text.

Chapter 3
LAN Scanning

3.1 Introduction to IP
____________________________

An Internet Protocol address (IP address) is a numerical label assigned to


each device (e.g., computer, printer) participating in a computer network
that uses the Internet Protocol for communication. An IP address serves two
principal functions: host or network interface identification and location
addressing. Its role has been characterized as follows: "A name indicates
what we seek. An address indicates where it is. A route indicates how to get
there."

The designers of the Internet Protocol defined an IP address as a 32-bit


number and this system, known as Internet Protocol Version 4 (IPv4), is still
in use today. However, due to the enormous growth of the Internet and the
predicted depletion of available addresses, a new version of IP (IPv6), using
128 bits for the address, was developed in 1995. IPv6 was standardized as
RFC 2460 in 1998, and its deployment has been ongoing since the mid-2000s.

IP addresses are binary numbers, but they are usually stored in text files and
displayed in human-readable notations, such as 172.16.254.1 (for IPv4), and
2001:db8:0:1234:0:567:8:1 (for IPv6).

The Internet Assigned Numbers Authority (IANA) manages the IP address


space allocations globally and delegates five regional Internet registries
(RIRs) to allocate IP address blocks to local Internet registries (Internet
service providers) and other entities. The gap in version sequence between
IPv4 and IPv6 resulted from the assignment of number 5 to the experimental
Internet Stream Protocol in 1979, which however was never referred to as
IPv5.

3.1.1 IPv4 addresses

In IPv4 an address consists of 32 bits, which limits the address space to
4,294,967,296 (2^32) possible unique addresses. IPv4 reserves some
addresses for special purposes such as private networks (~18 million
addresses) or multicast addresses (~270 million addresses).

IPv4 addresses are canonically represented in dot-decimal notation, which


consists of four decimal numbers, each ranging from 0 to 255, separated by
dots, e.g., 172.16.254.1. Each part represents a group of 8 bits (octet) of the
address. In some cases of technical writing, IPv4 addresses may be presented
in various hexadecimal, octal, or binary representations.
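The dot-decimal notation and its binary and hexadecimal equivalents can be verified with Python's standard-library ipaddress module. A short illustrative snippet:

```python
# Convert the dotted-decimal IPv4 address 172.16.254.1 to its
# 32-bit integer, binary, and hexadecimal forms.
import ipaddress

addr = ipaddress.IPv4Address("172.16.254.1")
as_int = int(addr)  # the underlying 32-bit number

print(as_int)                   # 2886794753
print(format(as_int, "032b"))   # 10101100000100001111111000000001
print(format(as_int, "08x"))    # ac10fe01
```

Each dotted-decimal part maps to one octet: 172 = 0xAC, 16 = 0x10, 254 = 0xFE, 1 = 0x01.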

3.1.1.1 IPv4 subnetting

In the early stages of development of the Internet Protocol, network


administrators interpreted an IP address in two parts: network number
portion and host number portion. The highest order octet (most significant
eight bits) in an address was designated as the network number and the
remaining bits were called the rest field or host identifier and were used for
host numbering within a network.

Classful network design allowed for a larger number of individual network


assignments and fine-grained subnetwork design. The first three bits of the
most significant octet of an IP address were defined as the class of the
address. Three classes (A, B, and C) were defined for universal unicast
addressing. Depending on the class derived, the network identification was
based on octet boundary segments of the entire address. Each class used
successively additional octets in the network identifier, thus reducing the
possible number of hosts in the higher order classes (B and C). The following
table gives an overview of this now obsolete system.

Historical classful network architecture

Class  Leading  Network   Rest      Number of         Addresses per      Start      End
       bits     bit field bit field networks          network            address    address
A      0        8         24        128 (2^7)         16,777,216 (2^24)  0.0.0.0    127.255.255.255
B      10       16        16        16,384 (2^14)     65,536 (2^16)      128.0.0.0  191.255.255.255
C      110      24        8         2,097,152 (2^21)  256 (2^8)          192.0.0.0  223.255.255.255

Today, remnants of classful network concepts function only in a limited scope


as the default configuration parameters of some network software and
hardware components (e.g. netmask), and in the technical jargon used in
network administrators' discussions.
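Because each class is determined by the leading bits of the most significant octet, the classful scheme can be sketched programmatically. The classful_class function below is our own illustrative helper, not part of any standard library:

```python
def classful_class(address: str) -> str:
    """Return the historical class (A/B/C) from the leading bits of the
    most significant octet, per the now-obsolete classful scheme."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:       # leading bit 0
        return "A"
    elif first_octet < 192:     # leading bits 10
        return "B"
    elif first_octet < 224:     # leading bits 110
        return "C"
    return "D/E (multicast or reserved)"

print(classful_class("10.0.0.1"))      # A
print(classful_class("172.16.254.1"))  # B
print(classful_class("192.168.0.1"))   # C
```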

3.1.1.2 IPv4 private addresses

Early network design, when global end-to-end connectivity was envisioned


for communications with all Internet hosts, intended that IP addresses be
uniquely assigned to a particular computer or device. Computers not
connected to the Internet, such as factory machines that communicate only
with each other via TCP/IP, need not have globally unique IP addresses.
Three ranges of IPv4 addresses for private networks were reserved in RFC
1918. These addresses are not routed on the Internet and thus their use need
not be coordinated with an IP address registry.

Today, when needed, such private networks typically connect to the Internet
through network address translation (NAT).

IANA-reserved private IPv4 network ranges

Block                               Start        End              No. of addresses
24-bit block (/8 prefix, 1 × A)     10.0.0.0     10.255.255.255   16,777,216
20-bit block (/12 prefix, 16 × B)   172.16.0.0   172.31.255.255   1,048,576
16-bit block (/16 prefix, 256 × C)  192.168.0.0  192.168.255.255  65,536
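Whether an address falls inside one of these RFC 1918 blocks can be checked with Python's standard-library ipaddress module; its is_private property covers the ranges above (among other reserved ranges):

```python
# Check addresses against the reserved private ranges.
import ipaddress

for a in ("10.1.2.3", "172.31.0.9", "192.168.1.1", "8.8.8.8"):
    print(a, ipaddress.ip_address(a).is_private)
# 10.1.2.3 True
# 172.31.0.9 True
# 192.168.1.1 True
# 8.8.8.8 False
```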

3.1.2 IPv6 addresses

The rapid exhaustion of IPv4 address space, despite conservation techniques,


prompted the Internet Engineering Task Force (IETF) to explore new
technologies to expand the addressing capability in the Internet. The
permanent solution was deemed to be a redesign of the Internet Protocol
itself. This next generation of the Internet Protocol, intended to replace IPv4
on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in
1995. The address size was increased from 32 to 128 bits, or 16 octets.
This, even with a generous assignment of network blocks, is deemed
sufficient for the foreseeable future. Mathematically, the new address
space provides the potential for a maximum of 2^128, or about 3.403×10^38,
addresses.

The large number of IPv6 addresses allows large blocks to be assigned for
specific purposes and, where appropriate, to be aggregated for efficient
routing. With a large address space, there is not the need to have complex
address conservation methods as used in CIDR.

3.1.2.1 IPv6 private addresses

Just as IPv4 reserves addresses for private or internal networks, blocks of


addresses are set aside in IPv6 for private addresses. In IPv6, these are
referred to as unique local addresses (ULA). RFC 4193 sets aside the
routing prefix fc00::/7 for this block, which is divided into two /8 blocks
with different implied policies. The addresses include a 40-bit
pseudorandom number that minimizes the risk of address collisions if sites
merge or packets are misrouted.

Early designs used a different block for this purpose (fec0::), dubbed
site-local addresses. However, the definition of what constituted sites
remained unclear and the poorly defined addressing policy created
ambiguities for routing. This address range specification was abandoned and
must not be used in new systems.

Addresses starting with fe80::, called link-local addresses, are assigned to


interfaces for communication on the link only. The addresses are
automatically generated by the operating system for each network interface.
None of the private address prefixes may be routed on the public Internet.
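These properties can be verified with Python's standard ipaddress module (the specific addresses below are made up for illustration):

```python
import ipaddress

ula  = ipaddress.ip_address("fd12:3456:789a::1")  # inside the fc00::/7 ULA block
link = ipaddress.ip_address("fe80::1")            # link-local

print(ula.is_private, ula.is_global)  # True False: not routable on the public Internet
print(link.is_link_local)             # True
```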

3.1.3 IP subnetworks

IP networks may be divided into subnetworks in both IPv4 and IPv6. For this
purpose, an IP address is logically recognized as consisting of two parts: the
network prefix and the host identifier, or interface identifier (IPv6). The
subnet mask or the CIDR prefix determines how the IP address is divided into
network and host parts.

The term subnet mask is only used within IPv4. Both IP versions however use
the CIDR concept and notation. In this, the IP address is followed by a slash
and the number (in decimal) of bits used for the network part, also called the
routing prefix. For example, an IPv4 address and its subnet mask may be
192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP
address and subnet is 192.0.2.1/24, because the first 24 bits of the IP address
indicate the network and subnet.
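The split of 192.0.2.1/24 into network and host parts can be reproduced with the standard ipaddress module; a brief sketch:

```python
import ipaddress

# 192.0.2.1 with mask 255.255.255.0 is the same as 192.0.2.1/24.
iface = ipaddress.ip_interface("192.0.2.1/24")
print(iface.network)   # 192.0.2.0/24  (the network part)
print(iface.netmask)   # 255.255.255.0

# Host part = the address bits not covered by the 24-bit routing prefix.
host_bits = int(iface.ip) & int(iface.hostmask)
print(host_bits)       # 1
```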

3.1.4 IP address assignment

Internet Protocol addresses are assigned to a host either anew at the time of
booting, or permanently by fixed configuration of its hardware or software.

Persistent configuration is also known as using a static IP address. In
contrast, in situations where the computer's IP address is assigned anew each
time, this is known as using a dynamic IP address.

3.1.4.1 Methods

Static IP addresses are manually assigned to a computer by an administrator.


The exact procedure varies according to platform. This contrasts with
dynamic IP addresses, which are assigned either by the computer interface or
host software itself, as in Zeroconf, or assigned by a server using Dynamic
Host Configuration Protocol (DHCP). Even though IP addresses assigned
using DHCP may stay the same for long periods of time, they can generally
change. In some cases, a network administrator may implement dynamically
assigned static IP addresses. In this case, a DHCP server is used, but it is
specifically configured to always assign the same IP address to a particular
computer. This allows static IP addresses to be configured centrally, without
having to specifically configure each computer on the network in a manual
procedure.

In the absence or failure of static or stateful (DHCP) address configurations,


an operating system may assign an IP address to a network interface using
stateless autoconfiguration methods, such as Zeroconf.

3.1.5 Uses of dynamic address assignment

IP addresses are most frequently assigned dynamically on LANs and


broadband networks by the Dynamic Host Configuration Protocol (DHCP).
Dynamic assignment is used because it avoids the administrative burden of assigning
specific static addresses to each device on a network. It also allows many
devices to share limited address space on a network if only some of them will
be online at a particular time. In most current desktop operating systems,
dynamic IP configuration is enabled by default so that a user does not need to
manually enter any settings to connect to a network with a DHCP server.
DHCP is not the only technology used to assign IP addresses dynamically.
Dialup and some broadband networks use dynamic address features of the
Point-to-Point Protocol.

3.1.6 IP address translation

Multiple client devices can appear to share IP addresses: either because they
are part of a shared hosting web server environment or because an IPv4
network address translator (NAT) or proxy server acts as an intermediary
agent on behalf of its customers, in which case the real originating IP
addresses might be hidden from the server receiving a request. A common
practice is to have a NAT hide a large number of IP addresses in a private
network. Only the "outside" interface(s) of the NAT need to have Internet-
routable addresses.

Most commonly, the NAT device maps TCP or UDP port numbers on the side
of the larger, public network to individual private addresses on the
masqueraded network. In small home networks, NAT functions are usually
implemented in a residential gateway device, typically one marketed as a
"router". In this scenario, the computers connected to the router would have
private IP addresses and the router would have a public address to
communicate on the Internet. This type of router allows several computers to
share one public IP address.
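As a rough sketch of the port-mapping idea (all addresses and port numbers here are invented), a NAT table can be modelled as a dictionary from private (address, port) pairs to public ports:

```python
# Minimal model of NAPT: many private hosts share one public address.
PUBLIC_IP = "203.0.113.5"

class Nat:
    def __init__(self):
        self.table = {}          # (private_ip, private_port) -> public_port
        self.next_port = 40000   # next free public-side port

    def outbound(self, src_ip, src_port):
        """Map an outgoing private (ip, port) to the shared public address."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, public_port):
        """Translate a reply arriving on a public port back to the private host."""
        for (ip, port), mapped in self.table.items():
            if mapped == public_port:
                return ip, port
        return None

nat = Nat()
print(nat.outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(nat.inbound(40001))                   # ('192.168.1.11', 51000)
```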

3.2 IP Address Scanning


_______________________________

Scanning of computer networks (searching for addresses with known


properties) is a practice that is often used by both network administrators
and crackers. Although it is widely accepted that the activity of the latter is
often illegal, both groups depend on exactly the same tools: software suited
to perfectly legitimate network administration can equally be used
maliciously.

Angry IP Scanner is a widely used, open-source, multi-platform network


scanner. As a rule, almost all such programs are open-source, because they
are developed through the collaboration of many people without any
commercial goals. Open-source systems and tools, reviewed by thousands of
independent experts and hackers alike, are a cornerstone of building secure
networks.

Certainly, there are other network scanners in existence (especially single-


host port scanners), however, most of them are not cross-platform, are too
simple, and do not offer the same level of extensibility and user-friendliness as
Angry IP Scanner. The program's target audience is network
administrators, consultants, and developers, who use the tool every day and
therefore have advanced requirements for usability, configurability, and
extensibility. However, Angry IP Scanner aims to be very friendly to novice
users as well.

3.2.1 Scanning

The word scan is derived from the Latin word scandere, which means to climb
and later came to mean "to scan a verse of poetry," because one could beat
the rhythm by lifting and putting down one's foot. As a rule, the user provides a
list of IP addresses to the scanner with the goal of sequentially probing all of
them and gathering interesting information about each address as well as
overall statistics. The gathered information may include the following:

• whether the host is up (alive) or down (dead)


• average round trip time (of IP packets to the destination
address and back) – the same value as shown by the ping
program
• TTL (time to live) field value from the IP packet header, which
can be used to estimate the rough distance to the destination
address (in the number of routers the packet has travelled through)
• host and domain name (by using a DNS reverse lookup)
• versions of particular services running on the host (e.g.,
“Apache 2.0.32 (Linux 2.6.9)” in case of a web server)
• open (responding) and filtered TCP and UDP port numbers

The list of addresses for scanning is most often provided as a range, with
specified starting and ending addresses, or as a network, with specified
network address and corresponding net mask. Other options are also possible,
e.g. loading from a file or generation of random addresses according to some
particular rules. Angry IP Scanner has several different modules for
generation of IP addresses called feeders. Additional feeders can be added
with the help of plugins.
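A minimal feeder of this kind, covering the range and network cases, might look as follows in Python (a sketch, not Angry IP Scanner's actual plugin API):

```python
import ipaddress

def range_feeder(start: str, end: str):
    """Yield every IPv4 address from start to end, inclusive."""
    lo = int(ipaddress.IPv4Address(start))
    hi = int(ipaddress.IPv4Address(end))
    for n in range(lo, hi + 1):
        yield str(ipaddress.IPv4Address(n))

def network_feeder(net: str):
    """Yield the usable host addresses of a network given in CIDR form."""
    for host in ipaddress.ip_network(net).hosts():
        yield str(host)

print(list(range_feeder("192.168.0.254", "192.168.1.1")))
# ['192.168.0.254', '192.168.0.255', '192.168.1.0', '192.168.1.1']
print(len(list(network_feeder("192.0.2.0/28"))))  # 14 usable hosts
```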

There are usually two types of network scanners: port scanners and IP
scanners.

3.2.1.1 Port scanners

A port scanner scans the TCP, and sometimes UDP, ports of a single host by sequentially
probing each of them. This is similar to walking around a shopping mall and
writing down the list of all the shops you see there along with their status
(open or closed).
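A toy TCP connect() port scanner along these lines can be written with Python's standard socket module; a completed connection marks the port open (the host and port numbers below are placeholders):

```python
import socket

def scan_ports(host: str, ports) -> dict:
    """TCP connect() scan: a successful handshake means the port is open."""
    results = {}
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)                 # don't hang on filtered ports
        try:
            # connect_ex returns 0 on success instead of raising.
            results[port] = (sock.connect_ex((host, port)) == 0)
        finally:
            sock.close()
    return results

# Probe a few well-known ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```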

3.2.1.2 IP scanners

An IP scanner scans many hosts and then gathers additional information about those of
them that are available (alive). According to the shopping mall analogy, that
would be walking around the city looking for all shopping malls and then
discovering all kinds of shops that exist in each of the malls. As Angry IP
Scanner is an IP scanner, designed for scanning multiple hosts, this is the
type of network scanner reviewed in the following text.

3.2.2 Safety and Security

Fortunately, the short answer is that scanning is both legal and safe, with
some exceptions. Even though the law does not always keep pace with the
fast development of the IT world, network scanning has existed for almost as
long as the networks themselves, meaning that there has probably been enough
time to update the laws. Nevertheless, scanning itself remains perfectly legal,
because in most cases it neither harms the scanned systems in any way nor
provides any direct possibilities of breaking into them. Network scanning is
even used by some popular network applications for automatic discovery of
peers and similar functionality.

As a rule, the scanning results just provide the publicly available and freely
obtainable information, collected and grouped together. However, this
legality may not apply in case some more advanced stealth scanning
techniques are used against a network you do not have any affiliation with.

Now that the question of the user's personal safety is covered (scanning is
legal in most cases), what about safety in a more general sense, the safety of
all people? As was already mentioned, nothing can be one hundred percent
safe. On the other hand, the best tools for maintaining security are the same
ones used by those one needs to be defended from. Only that way is it
possible to understand how crackers think and how they work. By using the
same tools they do, it is possible to check the network before it is too late,
that is, before they have already managed to break in themselves. Every

serious network administrator knows that regularly probing one's own
networks is a very good way of keeping them secure.

3.2.3 Angry IP Scanner

 Log on as the administrator and launch Angry IP. The program will
scan your system and display all server ports used on the server.
 Create a server port range that you want to scan. This can be done by
selecting two ports, one at the bottom and one at the top of the ports
used list.
 Click "Start." Angry IP will scan every address between the two selected
endpoints. It will take a few moments for the program to determine whether
the IP addresses are active. Once finished, entries will be displayed in blue
or red: blue means the IP is currently in use, while red means the IP is
idle.
 Select an active IP address (blue) and you will be shown the computer
using the port as well as the current user.

3.3 Port Scanning


___________________________

Nmap (Network Mapper) is a security scanner originally written by Gordon


Lyon, used to discover hosts and services on a computer network, thus creating
a "map" of the network. To accomplish its goal, Nmap sends specially crafted
packets to the target host and then analyzes the responses.

The software provides a variety of features for probing computer networks


such as host discovery, service and operating system detection, and other
more in depth system information. These features are further extended by
scripts that can perform more advanced service detection, vulnerability
detection, and other information. Besides providing a variety of information
about what it is scanning, Nmap is also capable of adapting to network
conditions such as latency and congestion during a scan. These features,
and new ones, are under continuous development and refinement by its active
user community.

Originally Nmap was a Linux only utility, but it has since been ported to
Microsoft Windows, Solaris, HP-UX, BSD variants (including Mac OS X),

Amiga OS, and SGI IRIX. Linux is the most popular platform with Windows
following it closely.

Nmap Security Scanner: results of an Nmap scan

3.3.1 Nmap features

 Host discovery - Identifying hosts on a network. For example, listing the


hosts that respond to pings or have a particular port open.
 Port scanning - Enumerating the open ports on one or more target
hosts.
 Version detection - Interrogating listening network services listening
on remote devices to determine the application name and version
number.[6]
 OS detection - Remotely determining the operating system and
hardware characteristics of network devices.
 Scriptable interaction with the target - using Nmap Scripting Engine
(NSE) and Lua programming language, customized queries can be
made.
 In addition to these, Nmap can provide further information on targets,
including reverse DNS names, device types, and MAC addresses.[7]

3.3.2 Uses of Nmap

 Auditing the security of a device by identifying the network connections


which can be made to it.
 Identifying open ports on a target host in preparation for auditing.[8]
 Network inventory, network mapping, maintenance, and asset
management.
 Auditing the security of a network by identifying unexpected new
servers.[9]

3.3.3 Basic commands working in Nmap

nmap <target URLs or IPs, separated by spaces>

e.g. scanme.nmap.org, gnu.org/24, 192.168.0.1, 10.0.0-255.1-254 (the
command is "nmap scanme.nmap.org", and similar)

For OS detection:

nmap -O <target host's URL or IP>

For version detection:

nmap -sV <target host's URL or IP>

For configuring response timings (-T0 to -T5, increasing in aggressiveness):

nmap -T0 -sV -O <target host's URL or IP>

3.3.4 Graphical interfaces

NmapFE was Nmap's original official GUI. It was replaced by
Zenmap, a new official graphical user interface based on UMIT.

Microsoft Windows specific GUIs exist, including NMapWin, which has not
been updated since v1.4.0 was released in June 2003, and NMapW by Syhunt.

Zenmap, showing results for a port scan against Wikipedia

NmapFE, showing results for a port scan against Wikipedia

XNmap, a Mac OS X GUI

3.3.5 Reporting results

Nmap provides four possible output formats for the scan results. All but the
interactive output is saved to a file. All of the output formats in Nmap can be
easily manipulated by text processing software, enabling the user to create
customized reports.

Interactive

Presented and updated in real time when a user runs Nmap from the
command line. Various options can be entered during the scan to facilitate
monitoring.

XML

A format that can be further processed by XML-capable tools. It can be


converted into an HTML report using XSLT.
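As a sketch of working with this format, the XML output can be parsed with Python's standard xml.etree module. The sample below is a trimmed, hand-written fragment in the shape of nmap's -oX output; the addresses and services are invented, not real scan data:

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the shape of nmap's XML output (values invented).
sample = """<nmaprun scanner="nmap" version="5.35DC1">
  <host>
    <address addr="192.0.2.10" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="80"><state state="open"/>
        <service name="http" product="Apache"/></port>
      <port protocol="tcp" portid="113"><state state="closed"/></port>
    </ports>
  </host>
</nmaprun>"""

root = ET.fromstring(sample)
for host in root.iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        state = port.find("state").get("state")
        print(addr, port.get("portid"), state)
# 192.0.2.10 80 open
# 192.0.2.10 113 closed
```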

Grepable

Output that is tailored to line-oriented processing tools such as grep, sed or


awk.

Normal

The output as seen while running Nmap from the command line, but saved to
a file.

Script kiddie

Meant to be a humorous way of formatting the interactive output, replacing
letters with visually similar number representations. For example, Interesting
ports becomes Int3rest|ng p0rtz.

3.3.6 Purpose

Nmap is used to discover computers and services on a computer network,


thus creating a "map" of the network. Just like many simple port scanners,

Nmap is capable of discovering passive services on a network, despite the fact
that such services are not advertising themselves with a service discovery
protocol. In addition, Nmap may be able to determine various details about
the remote computers.

3.3.7 Ethical issues and legality

Like most tools used in computer security, Nmap can be used for black hat
hacking, or attempting to gain unauthorized access to computer systems. It
would typically be used to discover open ports which are likely to be running
vulnerable services, in preparation for attacking those services with another
program.

System administrators often use Nmap to search for unauthorized servers on


their network, or for computers which don't meet the organization's
minimum level of security.

Nmap is often confused with host vulnerability assessment tools such as


Nessus, which go further in their exploration of a target by testing for
common vulnerabilities in the open ports found.

3.3.8 Output from Nmap

 Command: nmap -sV -T4 -O -A -v <target_host>


 Starting Nmap 5.35DC1 <http://nmap.org> at 2010-10-21 01:57 IST
NSE: Loaded 6 scripts for scanning.
Nmap scan report for <target_host> (<target_IP>)
Host is up (0.10s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE VERSION
80/tcp open http Apache Tomcat/Coyote JSP engine 1.1
113/tcp closed auth
 Running: Linux 2.6.X (96%), Cisco Linux 2.6.X (90%), HP embedded
(89%), Riverbed embedded (87%) Aggressive OS guesses: Linux 2.6.9
(96%), Linux 2.6.9 - 2.6.27 (96%), Linux 2.6.9 (CentOS 4.4) (95%),
Linux 2.6.15 - 2.6.26 (92%), Blue Coat Director (Linux 2.6.10) (92%),
Linux 2.6.26 (PCLinuxOS) (91%), Linux 2.6.11 (90%), HP Brocade 4Gb
SAN switch (89%), Linux 2.6.22.1-32.fc6 (x86, SMP) (89%), Linux 2.6.28

(88%) No exact OS matches for host (test conditions non-ideal). Uptime
guess: 35.708 days (since Wed Sep 15 08:58:56 2010)
 TRACEROUTE (using port 113/tcp)
HOP RTT ADDRESS
1 2.27 ms 192.168.254.4
Nmap done: 1 IP address (1 host up) scanned in 19.94 seconds
 Raw packets sent: 2080 (95.732KB)| Rcvd: 24 (1.476KB)

3.4 DNS Scanning


___________________________

The Domain Name System support in Microsoft Windows NT, and thus its
derivatives Windows 2000, Windows XP, and Windows Server 2003,
comprises two clients and a server. Every Microsoft Windows machine has a
DNS lookup client, to perform ordinary DNS lookups. Some machines have a
Dynamic DNS client, to perform Dynamic DNS Update transactions,
registering the machines' names and IP addresses. Some machines run a DNS
server, to publish DNS data, to service DNS lookup requests from DNS lookup
clients, and to service DNS update requests from DNS update clients.

3.4.1 DNS lookup client

Applications perform DNS lookups with the aid of a DLL. They call library
functions in the DLL, which in turn handle all communications with DNS
servers (over UDP or TCP) and return the final results of the lookup back to
the applications.

Microsoft's DNS client also has optional support for local caching, in the form
of a DNS Client service. If there is one, and if such a connection can be made,
they hand the actual work of dealing with the lookup over to the DNS Client
service. The DNS Client service itself communicates with DNS servers, and
caches the results that it receives.

Microsoft's DNS client is capable of talking to multiple DNS servers. The exact
algorithm varies according to the version, and service pack level, of the
operating system; but in general all communication is with a preferred DNS

server until it fails to answer, whereupon communication switches to one of
several alternative DNS servers.

3.4.2 Effects of DNS Client service

There are several minor differences in system behavior depending on whether


the DNS Client service is started:

Parsing of the "hosts" file: The lookup functions read the "hosts" file only if
they cannot off-load their task onto the DNS Client service and have to fall back to
communicating with DNS servers themselves. In turn, the DNS Client service
reads the "hosts" file once, at startup, and only re-reads it if it notices that the
last modification timestamp of the file has changed since it last read it. Thus:

With the DNS Client service running: The "hosts" file is read and parsed only a
few times, once at service startup, and thereafter whenever the DNS Client
service notices that it has been modified.

Without the DNS Client service running: The "hosts" file is read and parsed
repeatedly, by each individual application program as it makes a DNS lookup.

The effect of multiple answers in the "hosts" file: The DNS Client service does
not use the "hosts" file directly when performing lookups. Instead, it (initially)
populates its cache from it, and then performs lookups using the data in its
cache. When the lookup functions fall back to doing the work themselves,
however, they scan the "hosts" file directly and sequentially, stopping when
the first answer is found. Thus:

With the DNS Client service running: If the "hosts" file contains multiple lines
denoting multiple answers for a given lookup, all of the answers in the cache
will be returned.

Without the DNS Client service running: If the "hosts" file contains multiple
lines denoting multiple answers for a given lookup, only the first answer
found will be returned.

Fallback from preferred to alternative DNS servers: The fallback from the
preferred DNS server to the alternative DNS servers is done by whatever
entity, the DNS Client service or the library functions themselves, is actually
performing the communication with them. Thus:

With the DNS Client service running: Fallback to the alternative DNS servers
happens globally. If the preferred DNS server fails to answer, all subsequent
communication is with the alternative DNS servers.

Without the DNS Client service running: Any fallback to the alternative DNS
servers happens locally, within each individual process that is making DNS
queries. Different processes may be in different states, some talking to the
preferred DNS server and some talking to alternative DNS servers.

Linux distributions and various versions of Unix have a generalized name


resolver layer. The resolver can be controlled to use a hosts file or Network
Information Service (NIS), by configuring the Name Service Switch.

3.4.3 Dynamic DNS Update client

Whilst DNS lookups read DNS data, DNS updates write them. Both
workstations and servers running Windows attempt to send Dynamic DNS
update requests to DNS servers.

It is thus necessary to run the DHCP Client service on pre-Vista machines,


even if DHCP isn't being used to configure the machine, in order to
dynamically register a machine's name and address for DNS lookup. The
DHCP Client service registers name and address data whenever they are
changed (either manually by an administrator or automatically by the
granting or revocation of a DHCP lease). In Windows Vista (and Windows
Server 2008) Microsoft moved the registration functionality from the DHCP
Client service to the DNS Client service.

Servers running Microsoft Windows also attempt to register other


information, in addition to their names and IP addresses, such as the
locations of the LDAP and Kerberos services that they provide.

3.4.4 DNS server

Microsoft Windows server operating systems can run the DNS Server service.
This is a monolithic DNS server that provides many types of DNS service,
including caching, Dynamic DNS update, zone transfer, and DNS notification.
DNS notification implements a push mechanism for notifying a select set of
secondary servers for a zone when it is updated.

Microsoft's "DNS Server" service was first introduced in Windows NT 3.51 as
an add-on with Microsoft's collection of BackOffice services (at the time it
was marked for testing purposes only). Some sources claim that the
DNS server implementation in Windows NT 3.51 was a fork of ISC's BIND
version 4.3, but this is not true. The DNS server implementation in Windows
NT 3.51 was written by Microsoft. The DNS server components in all
subsequent releases of Windows Server have built upon that initial
implementation and do not use BIND source code. However, Microsoft has
taken care to ensure good interoperability with BIND and other
implementations in terms of zone file format, zone transfer, and other DNS
protocol details.

Microsoft's DNS server can be administered using either a graphical user


interface, the "DNS Management Console", or a command line interface, the
dnscmd utility. New to Windows Server 2012 is a fully featured PowerShell
provider for DNS server management.

3.5 GFI LANguard


___________________________

GFI LANguard Network Security Scanner is a tool that allows network


administrators to quickly and easily perform a network security audit. GFI
LANguard N.S.S. combines the functions of a port scanner and a security
scanner. It also creates reports that can be used to fix security issues on a
network.

Unlike other security scanners, GFI LANguard N.S.S. will not create a
'barrage' of information, which is virtually impossible to follow up on. Rather,
it will help highlight the most important information. It also provides
hyperlinks to security sites to find out more about these vulnerabilities.

GFI LANguard N.S.S. is a complete patch management solution. After it has


scanned your network and determined missing patches and service packs -
both in the operating system (OS) and in the applications - you can use GFI
LANguard N.S.S. to deploy those service packs and patches network-wide.

3.5.1 Performing a Scan

The first step in beginning an audit of a network is to perform a scan of
current network machines and devices.

To begin a new network scan:

 Click on File > New.


 Select scan a range of computers.
 Input the starting and ending range of the network to be scanned.
 Select Finish.
 Select the Play button [Start Scanning] from the main GFI LANguard
N.S.S. window.

LANguard Network Security Scanner will now do a scan of the entire range
entered. It will first detect which hosts/computers are on, and only scan
those. This is done using NETBIOS probes, ICMP ping and SNMP queries.

If a device does not answer to any of these, GFI LANguard N.S.S. will assume,
for now, that the device either does not exist at that IP address or that it
is currently turned off. If you would like GFI LANguard N.S.S. to scan all
devices, even those that don't respond to these queries, look under the scan
options section of the manual at "Configuring Scan options, Scanning, Adding
non-responsive computers". But make sure you take notice of the warning, in
that section, about time issues before doing this.

Scans can also be done in the following manner:

 Scan one Computer - This will scan only one computer.
 Scan List of Computers - Computers can be added to the list either one
at a time, or you can import them from a text file. To add them right
click in the window and use the menu that pops up.
 Scan Computers that are part of a Network Domain - If you click on the
`Pick Computers' option you will be presented with a list of all of the
Workgroups and Domains that GFI LANguard N.S.S. found on the
network. Check the box next to the Workgroup or Domain that you
want to scan and GFI LANguard N.S.S. will scan all computers found in
that Workgroup/Domain. You can also select individual computers
within that Workgroup/Domain.

3.5.1.1 Automatically detect security vulnerabilities on your network

GFI LANguard Network Security Scanner (N.S.S.) is a tool that checks your
network for potential methods that a hacker might use to attack it. By
analysing the operating system and the applications running on your
your network, GFI LANguard N.S.S. identifies possible security holes in your
network. In other words, it plays the devil's advocate and alerts you to
weaknesses before a hacker can find them, enabling you to deal with these
issues before a hacker can exploit them.

3.5.1.2 Provides in-depth information about all machines/devices

GFI LANguard Network Security Scanner scans your entire network, IP by IP,
and provides information such as service pack level of the machine, missing
security patches, open shares, open ports, services/applications active on the
computer, key registry entries, weak passwords, users and groups, and more.
Scan results are output to an HTML report, which can be
customized/queried, enabling you to proactively secure your network - for
example by shutting down unnecessary ports, closing shares, installing
service packs and hotfixes, etc.

3.5.1.3 Use GFI LANguard N.S.S. to:

 Check service pack levels of target machines


 Check for missing security patches
 Check for security alerts/vulnerabilities
 Detect unnecessary shares

 Detect unnecessary open ports
 Remotely install security patches for all network machines
 Detect new security holes using scheduled scan comparison
 Check for unused user accounts on workstations
 Check password policy and strength
 Check if auditing is turned on
 Make an inventory of your network
 Detect potential Trojans installed on users’ workstations
 Find out if the OS is advertising too much information
 Scans large networks by sending UDP query status to every IP
 Lists NETBIOS name table for each responding computer
 Provides NETBIOS hostname, currently logged username & MAC
address
 Provides a list of shares, users (detailed info), services, sessions, remote
TOD (time of day) & registry information from remote computer
(NT/2000)
 Tests password strength on Windows 9x/NT/2000 systems using a
dictionary of commonly used passwords
 SNMP device detection, SNMP Walk for inspecting network devices like
routers, network printers...
 Support for sending spoofed messages (social engineering)
 DNS lookup (www.somehost.com - > xxx.xxx.xxx.xxx); resolve hostnames
(reverse DNS)
 Trace route support for network mapping
 Configuration manager so you can easily save particular scans

Chapter 4
Wi-Fi Security

4.1 Introduction to WEP Security


______________________________________

Wired Equivalent Privacy (WEP) is a security algorithm for IEEE 802.11


wireless networks. Introduced as part of the original 802.11 standard ratified
in September 1999, its intention was to provide data confidentiality
comparable to that of a traditional wired network. WEP, recognizable by the
key of 10 or 26 hexadecimal digits, is widely in use and is often the first
security choice presented to users by router configuration tools.

Although its name implies that it is as secure as a wired connection, WEP has
been demonstrated to have numerous flaws and has been deprecated in
favour of newer standards such as WPA2. In 2003 the Wi-Fi Alliance
announced that WEP had been superseded by Wi-Fi Protected Access (WPA).
In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the
IEEE declared that both WEP-40 and WEP-104 "have been deprecated as
they fail to meet their security goals".

4.1.1 Encryption details

WEP was included as the privacy component of the original IEEE 802.11
standard ratified in September 1999. WEP uses the stream cipher RC4 for
confidentiality, and the CRC-32 checksum for integrity. It was deprecated in
2004 and is documented in the current standard.

Basic WEP encryption: RC4 key stream XORed with plaintext
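As a toy illustration of this scheme (textbook RC4, with a made-up IV and key, not a real capture), the per-packet RC4 key is the 24-bit IV concatenated with the secret key, and encryption is a plain XOR with the resulting keystream:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (textbook KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = bytes([0x01, 0x02, 0x03])         # 24-bit IV, sent in the clear
wep_key = bytes.fromhex("1a2b3c4d5e")  # 40-bit secret key (WEP-40), invented
plaintext = b"hello"

ks = rc4_keystream(iv + wep_key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
# XORing again with the same keystream recovers the plaintext.
assert bytes(c ^ k for c, k in zip(ciphertext, ks)) == plaintext
print(ciphertext.hex())
```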

Standard 64-bit WEP uses a 40 bit key (also known as WEP-40), which is
concatenated with a 24-bit initialization vector (IV) to form the RC4 key. At
the time that the original WEP standard was drafted, the U.S. Government's
export restrictions on cryptographic technology limited the key size. Once the
restrictions were lifted, manufacturers of access points implemented an
extended 128-bit WEP protocol using a 104-bit key size (WEP-104).

A 64-bit WEP key is usually entered as a string of 10 hexadecimal (base 16)


characters (0-9 and A-F). Each character represents four bits; 10 digits of
four bits each give 40 bits, and adding the 24-bit IV produces the complete 64-bit
WEP key. Most devices also allow the user to enter the key as five ASCII
characters, each of which is turned into eight bits using the character's byte
value in ASCII; however, this restricts each byte to be a printable ASCII
character, which is only a small fraction of possible byte values, greatly
reducing the space of possible keys.

A 128-bit WEP key is usually entered as a string of 26 hexadecimal
characters. Twenty-six digits of four bits each give 104 bits; adding the
24-bit IV produces the complete 128-bit WEP key. Most devices also allow the
user to enter it as 13 ASCII characters.

A 256-bit WEP system is available from some vendors. As with the other WEP
variants, 24 bits of that is the IV, leaving 232 bits for actual protection.
These 232 bits are typically entered as 58 hexadecimal characters (58 × 4 =
232 bits; 232 + 24 IV bits = 256 bits).

4.1.2 Authentication

Two methods of authentication can be used with WEP: Open System
authentication and Shared Key authentication.

For the sake of clarity, we discuss WEP authentication in Infrastructure
mode (that is, between a WLAN client and an Access Point). The discussion
applies to ad hoc mode as well.

In Open System authentication, the WLAN client need not provide its
credentials to the Access Point during authentication. Any client can
authenticate with the Access Point and then attempt to associate. In effect, no
authentication occurs. Subsequently WEP keys can be used for encrypting
data frames. At this point, the client must have the correct keys.

In Shared Key authentication, the WEP key is used for authentication in a
four step challenge-response handshake:

 The client sends an authentication request to the Access Point.
 The Access Point replies with a clear-text challenge.
 The client encrypts the challenge text using the configured WEP key,
and sends it back in another authentication request.
 The Access Point decrypts the response. If this matches the challenge
text, the Access Point sends back a positive reply.
 After authentication and association, the pre-shared WEP key is also
used for encrypting the data frames using RC4.
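The handshake above, and the keystream leak discussed next, can be sketched as follows. This is a simplified model (real exchanges use 802.11 management frames); `rc4` here is a plain-Python RC4 implementation used for illustration.

```python
import os

def rc4(key: bytes, data: bytes) -> bytes:
    # Minimal RC4 for illustration: key scheduling, then keystream XOR
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

wep_key = b"\x01\x02\x03\x04\x05"        # shared 40-bit key (example value)
challenge = os.urandom(128)               # step 2: AP sends a clear-text challenge
iv = os.urandom(3)                        # client picks a 24-bit IV
response = rc4(iv + wep_key, challenge)   # step 3: client encrypts the challenge

# Step 4: the AP decrypts with the same key and compares
assert rc4(iv + wep_key, response) == challenge

# The weakness: an eavesdropper who captures both frames can XOR them
# together and recover the RC4 keystream for this IV without the key
keystream = bytes(c ^ p for c, p in zip(response, challenge))
assert rc4(iv + wep_key, bytes(len(keystream))) == keystream
```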

At first glance, it might seem as though Shared Key authentication is more
secure than Open System authentication, since the latter offers no real
authentication. However, it is quite the reverse: it is possible to derive
the keystream used for the handshake by capturing the challenge frames in
Shared Key authentication.[8] Hence, it is advisable to use Open System
authentication for WEP, rather than Shared Key authentication. (Note that
both authentication mechanisms are weak.)

4.2 Introduction to WPA/WPA2 Security
_____________________________________________

Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two
security protocols and security certification programs developed by the Wi-Fi
Alliance to secure wireless computer networks. The Alliance defined these in
response to serious weaknesses researchers had found in the previous system,
WEP (Wired Equivalent Privacy).

WPA (sometimes referred to as the draft IEEE 802.11i standard) became
available in 2003. The Wi-Fi Alliance intended it as an intermediate measure
in anticipation of the availability of the more secure and complex WPA2.
WPA2 became available in 2004 and is common shorthand for the full IEEE
802.11i (or IEEE 802.11i-2004) standard.

A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows
WPA and WPA2 security to be bypassed and effectively broken in many
situations.[2] WPA and WPA2 security implemented without using the Wi-Fi
Protected Setup feature are unaffected by the vulnerability.

4.2.1 WPA

The Wi-Fi Alliance intended WPA as an intermediate measure to take the
place of WEP pending the availability of the full IEEE 802.11i standard. WPA
could be implemented through firmware upgrades on wireless network
interface cards designed for WEP that began shipping as far back as 1999.
However, since the changes required in the wireless access points (APs) were
more extensive than those needed on the network cards, most pre-2003 APs
could not be upgraded to support WPA.

The WPA protocol implements much of the IEEE 802.11i standard.
Specifically, the Temporal Key Integrity Protocol (TKIP) was adopted for
WPA. WEP used a 40-bit or 104-bit encryption key that must be manually
entered on wireless access points and devices and does not change. TKIP
employs a per-packet key, meaning that it dynamically generates a new
128-bit key for each packet and thus prevents the types of attacks that
compromised WEP.
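TKIP's actual two-phase key-mixing function is more involved, but the per-packet idea can be illustrated with a toy derivation (this is not the real TKIP algorithm):

```python
import hashlib

def per_packet_key(temporal_key: bytes, packet_number: int) -> bytes:
    # Toy mixing: hash the temporal key with the packet sequence counter.
    # Real TKIP mixes the key with the transmitter address and a 48-bit
    # sequence counter in two phases; this sketch only shows the principle
    # that every packet is encrypted under a fresh 128-bit key.
    material = temporal_key + packet_number.to_bytes(6, "big")
    return hashlib.sha256(material).digest()[:16]

tk = bytes(16)  # placeholder temporal key
assert per_packet_key(tk, 1) != per_packet_key(tk, 2)  # fresh key per packet
```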

WPA also includes a message integrity check, designed to prevent an
attacker from capturing, altering, or resending data packets. This replaces
the cyclic redundancy check (CRC) used by the WEP standard. CRC's main flaw
was that it did not provide a sufficiently strong data integrity guarantee
for the packets it handled. Well-tested message authentication codes existed
to solve these problems, but they required too much computation to be used
on old network cards. WPA uses a message integrity check algorithm called
Michael to verify the integrity of the packets. Michael is much stronger
than a CRC, but not as strong as the algorithm used in WPA2. Researchers
have since discovered a flaw in WPA that relies on older weaknesses in WEP
and the limitations of Michael to retrieve the keystream from short packets
for use in re-injection and spoofing.

4.2.2 WPA2

WPA2 has replaced WPA. WPA2, which requires testing and certification by
the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In
particular, it introduces CCMP, a new AES-based encryption mode with strong
security. Certification began in September 2004; since March 13, 2006, WPA2
certification has been mandatory for all new devices bearing the Wi-Fi
trademark.

4.2.3 Hardware support

WPA was specifically designed to work with wireless hardware produced
prior to the introduction of the WPA protocol, which had supported only
inadequate security through WEP. Some of these devices support the security
protocol only after a firmware upgrade. Firmware upgrades are not available
for some legacy devices.

Wi-Fi devices certified since 2006 support both the WPA and WPA2 security
protocols. WPA2 may not work with some older network cards.

4.2.4 Security

Pre-shared key mode (PSK, also known as Personal mode) is designed for
home and small office networks that don't require the complexity of an
802.1X authentication server. Each wireless network device encrypts the
network traffic using a 256-bit key. This key may be entered either as a
string of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable
ASCII characters. If ASCII characters are used, the 256-bit key is
calculated by applying the PBKDF2 key derivation function to the
passphrase, using the SSID as the salt and 4096 iterations of HMAC-SHA1.
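The derivation described above maps directly onto Python's standard library; the passphrase and SSID below are arbitrary examples:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # PSK = PBKDF2(HMAC-SHA1, passphrase, SSID as salt, 4096 iterations, 256 bits)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

psk = wpa_psk("correct horse battery staple", "ExampleNet")
print(len(psk.hex()))  # 64 hexadecimal digits
```

Note that because the SSID is the salt, the same passphrase yields a different 256-bit key on every differently named network.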

4.2.4.1 Weak password

Shared-key WPA remains vulnerable to password cracking attacks if users
rely on a weak password or passphrase. To protect against a brute-force
attack, a truly random passphrase of 13 characters (selected from the set
of 95 permitted characters) is probably sufficient. To further protect
against intrusion, the network's SSID should not match any entry in the top
1,000 SSIDs, as downloadable rainbow tables have been pre-generated for
them and a multitude of common passwords.

4.2.4.2 WPA short packet spoofing

In November 2008, researchers at two German technical universities (TU
Dresden and TU Darmstadt) uncovered a WPA weakness which relies on a
previously known flaw in WEP that can be exploited only for the TKIP
algorithm in WPA. The flaw can only decrypt short packets with mostly known
contents, such as ARP messages. The attack requires Quality of Service (as
defined in 802.11e) to be enabled, which allows packet prioritization. The
flaw does not lead to recovery of a key, but only to recovery of a
keystream that was used to encrypt a particular packet, and which can be
reused as many as seven times to inject arbitrary data of the same packet
length to a wireless client. For example, this allows someone to inject
faked ARP packets, making the victim send packets to the open Internet. Two
Japanese computer scientists, Toshihiro Ohigashi and Masakatu Morii,
further optimized the Tews/Beck attack; their attack doesn't require
Quality of Service to be enabled. In October 2009, Halvorsen and others
made further progress, enabling attackers to inject larger malicious
packets (596 bytes in size) within approximately 18 minutes and 25 seconds.
In February 2010, Martin Beck found a new vulnerability which allows an
attacker to decrypt all traffic towards the client. The authors say that
the attack can be defeated by deactivating QoS, or by switching from TKIP
to AES-based CCMP.

The vulnerabilities of TKIP are significant in that WPA-TKIP had been held
to be an extremely safe combination; indeed, WPA-TKIP is still a
configuration option on a wide variety of wireless routing devices provided
by many hardware vendors.

4.2.4.3 WPS PIN recovery

A more serious security flaw was revealed in December 2011 by Stefan
Viehböck that affects wireless routers with the Wi-Fi Protected Setup (WPS)
feature, regardless of which encryption method they use. Most recent models
have this feature and enable it by default. Many consumer Wi-Fi device
manufacturers had taken steps to eliminate the potential of weak passphrase
choices by promoting alternative methods of automatically generating and
distributing strong keys when users add a new wireless adapter or appliance
to a network. These methods include pushing buttons on the devices or
entering an 8-digit PIN. The Wi-Fi Alliance standardized these methods as
Wi-Fi Protected Setup; however, the PIN feature as widely implemented
introduced a major new security flaw. The flaw allows a remote attacker to
recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few
hours. Users have been urged to turn off the WPS feature,[19] although this
may not be possible on some router models. Also note that the PIN is
written on a label on most Wi-Fi routers with WPS, and cannot be changed if
compromised.
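The reason recovery takes hours rather than years is that, in Viehböck's attack, the protocol confirms each half of the 8-digit PIN separately and the final digit is a checksum, collapsing the search space. A rough calculation (actual attempt rates vary by router):

```python
naive = 10 ** 8          # guessing all eight digits at once
first_half = 10 ** 4     # the AP confirms the first four digits on their own
second_half = 10 ** 3    # three free digits remain; the eighth is a checksum
worst_case = first_half + second_half

print(naive, worst_case)  # 100000000 11000
```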

4.2.4.4 MS-CHAPv2

Several weaknesses have been found in MS-CHAPv2, some of which severely
reduce the complexity of brute-force attacks, making them feasible with
modern hardware. In 2012, work by Moxie Marlinspike and Marsh Ray reduced
the complexity of breaking MS-CHAPv2 to that of breaking a single DES key.
Moxie advised: "Enterprises who are depending on the mutual authentication
properties of MS-CHAPv2 for connection to their WPA2 Radius servers should
immediately start migrating to something else."

4.2.4.5 Hole196

Hole196 is a vulnerability in the WPA2 protocol that abuses the shared
Group Temporal Key (GTK). It can be used to conduct man-in-the-middle and
denial-of-service attacks.

4.2.5 WPA terminology

Different WPA versions and protection mechanisms can be distinguished
based on the (chronological) version of WPA, the target end-user (according
to the method of authentication key distribution), and the encryption
protocol used.

4.2.5.1 WPA

The initial WPA version, supplying enhanced security over the older WEP
protocol. It typically uses the TKIP encryption protocol.

4.2.5.2 WPA2

Also known as IEEE 802.11i-2004, WPA2 is the successor to WPA and adds
support for CCMP, which is intended to replace the TKIP encryption
protocol. It has been mandatory for Wi-Fi–certified devices since 2006.

4.2.5.3 WPA-Personal

Also referred to as WPA-PSK (Pre-shared key) mode, it is designed for home
and small office networks and doesn't require an authentication server.
Each wireless network device authenticates with the access point using the
same 256-bit key generated from a password or passphrase.

4.2.5.4 WPA-Enterprise

Also referred to as WPA-802.1X mode, and sometimes just WPA (as opposed
to WPA-PSK). It is designed for enterprise networks and requires a RADIUS
authentication server. This requires a more complicated setup, but provides
additional security (e.g. protection against dictionary attacks on short
passwords). An Extensible Authentication Protocol (EAP) is used for
authentication, which comes in different flavours.

4.2.6 Wi-Fi Protected Setup

An alternative authentication key distribution method intended to simplify
and strengthen the process, but which, as widely implemented, creates a
major security hole.

4.3 Flaws in These Security Protocols
_____________________________________________

Because RC4 is a stream cipher, the same traffic key must never be used twice.
The purpose of an IV, which is transmitted as plain text, is to prevent any
repetition, but a 24-bit IV is not long enough to ensure this on a busy network.
The way the IV was used also opened WEP to a related key attack. For a 24-
bit IV, there is a 50% probability the same IV will repeat after 5000 packets.
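The 50% figure follows from the birthday bound over the 2^24 possible IV values, which this sketch computes:

```python
import math

def iv_collision_probability(packets: int, iv_bits: int = 24) -> float:
    # Birthday-bound approximation: P = 1 - exp(-n(n-1) / (2 * 2^bits))
    space = 2 ** iv_bits
    return 1.0 - math.exp(-packets * (packets - 1) / (2.0 * space))

print(round(iv_collision_probability(5000), 2))  # about 0.53
```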

In August 2001, Fluhrer, Mantin, and Shamir published a cryptanalysis of
WEP that exploits the way the RC4 cipher and IV are used in WEP, resulting
in a passive attack that can recover the RC4 key after eavesdropping on the
network. Depending on the amount of network traffic, and thus the number of
packets available for inspection, a successful key recovery could take as
little as one minute. If an insufficient number of packets are being sent,
there are ways for an attacker to send packets on the network and thereby
stimulate reply packets which can then be inspected to find the key. The
attack was soon implemented, and automated tools have since been released.
It is possible to perform the attack with a personal computer,
off-the-shelf hardware, and freely available software such as aircrack-ng
to crack any WEP key in minutes.

Cam-Winget et al. surveyed a variety of shortcomings in WEP. They write
"Experiments in the field show that, with proper equipment, it is practical
to eavesdrop on WEP-protected networks from distances of a mile or more
from the target." They also reported two generic weaknesses: the use of WEP
was optional, resulting in many installations never even activating it;
and, by default, WEP relies on a single shared key among users, which leads
to practical problems in handling compromises, which in turn often leads to
ignoring compromises altogether.

In 2005, a group from the U.S. Federal Bureau of Investigation gave a
demonstration where they cracked a WEP-protected network in three minutes
using publicly available tools.[10] Andreas Klein presented another
analysis of the RC4 stream cipher, showing that there are more correlations
between the RC4 keystream and the key than those found by Fluhrer, Mantin,
and Shamir, which can additionally be used to break WEP in WEP-like usage
modes.

In 2006, Bittau, Handley, and Lackey showed that the 802.11 protocol
itself can be used against WEP to enable earlier attacks that were
previously thought impractical. After eavesdropping a single packet, an
attacker can rapidly bootstrap to be able to transmit arbitrary data. The
eavesdropped packet can then be decrypted one byte at a time (by
transmitting about 128 packets per byte to decrypt) to discover the local
network IP addresses. Finally, if the 802.11 network is connected to the
Internet, the attacker can use 802.11 fragmentation to replay eavesdropped
packets while crafting a new IP header onto them. The access point can then
be used to decrypt these packets and relay them to a cooperating host on
the Internet, allowing real-time decryption of WEP traffic within a minute
of eavesdropping the first packet.

In 2007, Tews, Weinmann, and Pyshkin extended Klein's attack and optimized
it for use against WEP. With the new attack it is possible to recover a
104-bit WEP key with probability 50% using only 40,000 captured packets.
For 60,000 available data packets, the success probability is about 80%,
and for 85,000 data packets about 95%. Using active techniques like
deauthentication and ARP re-injection, 40,000 packets can be captured in
less than one minute under good conditions. The actual computation takes
about 3 seconds and 3 MB of main memory on a Pentium-M 1.7 GHz and can
additionally be optimized for devices with slower CPUs. The same attack can
be used against 40-bit keys with an even higher success probability.

In 2008, the Payment Card Industry (PCI) Security Standards Council's
update of the Data Security Standard (DSS) prohibited use of WEP as part of
any credit-card processing after 30 June 2010, and prohibited any new
system using WEP from being installed after 31 March 2009. The use of WEP
contributed to the T.J. Maxx parent company network invasion.

4.3.1 Remedies

Use of encrypted tunneling protocols (e.g. IPSec, Secure Shell) can provide
secure data transmission over an insecure network. However, replacements
for WEP have been developed with the goal of restoring security to the
wireless network itself.

4.3.1.1 802.11i (WPA and WPA2)

The recommended solution to WEP security problems is to switch to WPA2.
WPA was an intermediate solution for hardware that could not support WPA2.
Both WPA and WPA2 are much more secure than WEP.[12] To add support for WPA
or WPA2, some old Wi-Fi access points might need to be replaced or have
their firmware upgraded. WPA was designed as an interim
software-implementable solution for WEP that could forestall immediate
deployment of new hardware.[13] However, TKIP (the basis of WPA) has
reached the end of its designed lifetime, has been partially broken, and
has been deprecated in the full release of the 802.11 standard.[14]

4.3.1.2 WEP2

This stopgap enhancement to WEP was present in some of the early 802.11i
drafts. It was implementable on some (not all) hardware unable to handle
WPA or WPA2, and extended both the IV and the key values to 128 bits.[15]
It was hoped that this would eliminate the duplicate-IV deficiency as well
as stop brute-force key attacks.

After it became clear that the overall WEP algorithm was deficient (and not
just the IV and key sizes) and would require even more fixes, both the WEP2
name and original algorithm were dropped. The two extended key lengths
remained in what eventually became WPA's TKIP.

4.3.1.3 WEPplus

WEPplus, also known as WEP+, is a proprietary enhancement to WEP by Agere
Systems (formerly a subsidiary of Lucent Technologies) that enhances WEP
security by avoiding "weak IVs".[16] It is only completely effective when
WEPplus is used at both ends of the wireless connection. As this cannot
easily be enforced, it remains a serious limitation. It also does not
necessarily prevent replay attacks, and is ineffective against later
statistical attacks that do not rely on weak IVs.[17]

4.3.1.4 Dynamic WEP

Dynamic WEP refers to the combination of 802.1x technology and the
Extensible Authentication Protocol. Dynamic WEP changes WEP keys
dynamically. It is a vendor-specific feature provided by several vendors
such as 3Com.

The dynamic change idea made it into 802.11i as part of TKIP, but not for
the actual WEP algorithm.

Chapter 5
Website Scanning and Security

5.1 Introduction to Website Security
_________________________________________

Web sites are unfortunately prone to security risks. And so are any networks
to which web servers are connected. Setting aside risks created by employee
use or misuse of network resources, your web server and the site it hosts
present your most serious sources of security risk.

Web servers by design open a window between your network and the world.
The care taken with server maintenance, web application updates and your
web site coding will define the size of that window, limit the kind of
information that can pass through it and thus establish the degree of web
security you will have.

"Web security" is relative and has two components, one internal and one
public. Your relative security is high if you have few network resources of
financial value, your company and site aren't controversial in any way, your
network is set up with tight permissions, your web server is patched up to
date with all settings done correctly, your applications on the web server are
all patched and updated, and your web site code is done to high standards.

Your web security is relatively lower if your company has financial assets like
credit card or identity information, if your web site content is controversial,
your servers, applications and site code are complex or old and are
maintained by an underfunded or outsourced IT department. All IT
departments are budget challenged and tight staffing often creates deferred
maintenance issues that play into the hands of any who want to challenge
your web security.

If you have assets of importance or if anything about your site puts you in the
public spotlight then your web security will be tested. We hope that the
information provided here will prevent you and your company from being
embarrassed - or worse.

It's well known that poorly written software creates security issues. The
number of bugs that could create web security issues is directly
proportional to the size and complexity of your web applications and web
server. Basically, all complex programs either have bugs or, at the very
least, weaknesses. On top of that, web servers are inherently complex
programs. Web sites are themselves complex and intentionally invite ever
greater interaction with the public. And so the opportunities for security
holes are many and growing.

Technically, the very same programming that increases the value of a web
site, namely interaction with visitors, also allows scripts or SQL commands to
be executed on your web and database servers in response to visitor requests.
Any web-based form or script installed at your site may have weaknesses or
outright bugs and every such issue presents a web security risk.

Contrary to common belief, the balance between allowing web site visitors
some access to your corporate resources through a web site and keeping
unwanted visitors out of your network is a delicate one. There is no one
setting, no single switch to throw that sets the security hurdle at the
proper level. There are dozens of settings if not hundreds in a web server
alone, and then each service, application and open port on the server adds
another layer of settings. And then the web site code... you get the
picture.

A web security issue is faced by site visitors as well. A common web site
attack involves the silent and concealed installation of code that will
exploit the browsers of visitors. Your site is not the end target at all in
these attacks. There are, at this time, many thousands of web sites out
there that have been compromised. The owners have no idea that anything has
been added to their sites and that their visitors are at risk. In the
meantime visitors are being subjected to attack, and successful attacks are
installing nasty code onto the visitors' computers.

5.1.1 Web Server Security

The world's most secure web server is the one that is turned off. Simple, bare-
bones web servers that have few open ports and few services on those ports
are the next best thing. This just isn't an option for most companies. Powerful
and flexible applications are required to run complex sites and these are
naturally more subject to web security issues.

Any system with multiple open ports, multiple services and multiple scripting
languages is vulnerable simply because it has so many points of entry to
watch.

5.1.2 Web Site Code and Web Security

Your site undoubtedly provides some means of communication with its
visitors. In every place that interaction is possible you have a potential
web security vulnerability. Web sites often invite visitors to:

 Load a new page containing dynamic content
 Search for a product or location
 Fill out a contact form
 Search the site content
 Use a shopping cart
 Create an account
 Logon to an account

In each case noted above your web site visitor is effectively sending a
command to or through your web server - very likely to a database. In each
opportunity to communicate, such as a form field, search field or blog,
correctly written code will allow only a very narrow range of commands or
information types to pass - in or out. This is ideal for web security. However,
these limits are not automatic. It takes well trained programmers a good deal
of time to write code that allows all expected data to pass and disallows all
unexpected or potentially harmful data.
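For instance, a whitelist check on a form field passes only the narrow range of expected input and rejects everything else; the username policy below is an assumed example:

```python
import re

# Assumed policy for illustration: 3-20 letters, digits, or underscores
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    # fullmatch anchors the pattern to the entire string, so nothing sneaks past
    return USERNAME_RE.fullmatch(value) is not None

print(is_valid_username("alice_01"))                     # True
print(is_valid_username("alice'; DROP TABLE users;--"))  # False
```

The key design choice is to describe what is allowed rather than trying to enumerate everything that is dangerous.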

And there lies the problem. Code on your site has come from a variety of
programmers, some of whom work for third-party vendors. Some of that code
is old, perhaps very old. Your site may be running software from half a
dozen sources, and then your own site designer and your webmaster have each
produced more code of their own, or made revisions to another's code that
may have altered or eliminated previously established web security
limitations.

Add to that the software that may have been purchased years ago and which
is not in current use. Many servers have accumulated applications that are no
longer in use and with which nobody on your current staff is familiar. This
code is often not easy to find, is about as valuable as an appendix, and
has not been used, patched or updated for years - but it may be exactly
what a hacker is looking for!
is looking for!

5.1.3 Web Security Vulnerabilities

As you know there are a lot of people out there who call themselves hackers.
You can also easily guess that they are not all equally skilled. As a matter of
fact, the vast majority of them are simply copycats. They read about a
KNOWN technique that was devised by someone else and they use it to break
into a site that is interesting to them, often just to see if they can do it.
Naturally once they have done that they will take advantage of the site
weakness to do malicious harm, plant something or steal something.

A very small number of hackers are actually capable of discovering a new
way to overcome web security obstacles. Given the work being done by tens
of thousands of programmers worldwide to improve security, it is not easy
to discover a brand new method of attack. Hundreds, sometimes thousands of
man-hours might be put into developing a new exploit. This is sometimes
done by individuals, but just as often is done by teams supported by
organized crime. In either case they want to maximize their return on this
investment in time and energy and so they will very quietly focus on
relatively few, very valuable corporate or governmental assets. Until their
new technique is actually discovered, it is considered UNKNOWN.

Countering and attempting to eliminate any return on this hacking
investment are hundreds if not thousands of web security entities. These
public and private groups watch for and share information about newly
discovered exploits so that an alarm can be raised and defenses against new
exploits can be put in place quickly. The broad announcement of a new
exploit makes it a KNOWN exploit.

The outcome of this contest of wills, so to speak, is that exploits become
known and widely documented very soon after they are first used and
discovered. So at any one time there are thousands (perhaps tens of
thousands) of known vulnerabilities and only a very, very few unknown. And
those few unknown exploits are very tightly focused onto just a very few
highly valuable targets so as to reap the greatest return before discovery,
because once an exploit is known, the best-defended sites immediately take
action to correct their flaws and erect better defenses.

Your site is 1,000 times more likely to be attacked with a known exploit than
an unknown one. And the reason behind this is simple: There are so many
known exploits and the complexity of web servers and web sites is so great
that the chances are good that one of the known vulnerabilities will be
present and allow an attacker access to your site.

The number of sites worldwide is so great and the number of new, as of yet
undocumented and thus unknown exploits so small that your chances of
being attacked with one is nearly zero - unless you have network assets of
truly great value.

If you don't attract the attention of a very dedicated, well financed attack,
then your primary concern should be to eliminate your known vulnerabilities
so that a quick look would reveal no easy entry using known vulnerabilities.

5.1.4 Web Security Defense Strategy

There are two roads to accomplish excellent security. On one you would
assign all of the resources needed to maintain constant alert to new security
issues. You would ensure that all patches and updates are done at once, have
all of your existing applications reviewed for correct security, ensure that
only security knowledgeable programmers do work on your site and have
their work checked carefully by security professionals. You would also
maintain a tight firewall, antivirus protection and run IPS/IDS.

Your other option: use a web scanning solution to test your existing
equipment, applications and web site code to see if a KNOWN vulnerability
actually exists. While firewalls, antivirus and IPS/IDS are all worthwhile, it is
simple logic to also lock the front door. It is far more effective to repair a half
dozen actual risks than it is to leave them in place and try to build higher and
higher walls around them. Network and web site vulnerability scanning is the
most efficient security investment of all.

If one had to walk just one of these roads, diligent wall building or
vulnerability testing, it has been seen that web scanning will actually produce
a higher level of web security on a dollar for dollar basis. This is proven by the
number of well-defended web sites which get hacked every month, and the
much lower number of properly scanned web sites which have been
compromised.

5.2 Live WHOIS
_________________________

It provides you with high quality domain data. Get accurate, actionable
insights on domains and the people behind them from a single search.

5.2.1 Powerful Data Organization

No one likes searching through pages of text. We've gone the extra mile to
build a back-end that allows us to organize and present key pieces of
information like whois, DNS, and historical records to you with as few clicks
as possible.

5.2.2 Unbiased Domain Intelligence

Want to see if a domain is available without big brother watching? No
problem. We don't watch your every search and try to up-sell you. Our goal
is simple: to provide the most accurate and up-to-date domain information
in one place so you don't waste time looking at dozens of different sources
to get the answers you need.

5.2.3 Track Domains Across Different Registrars

It doesn't matter where domains are registered, we provide you with the tools
to save and organize as many domains as you want to your dashboard. This
way, you can keep domains you own, are interested in buying, or just think
are plain cool in one single, easy to manage location.

5.2.4 See Website Information

Search the whois database, look up domain and IP owner information, and
check out dozens of other statistics.

5.2.5 Save and Follow Domains

Organizing domains across multiple registrars for quick reference has never
been so easy.

5.2.6 On Demand Domain Data

Get all the data you need about a domain and everything associated with that
domain anytime with a single search.

5.3 Way Back Machine
________________________________

The Wayback Machine is a digital time capsule created by the Internet
Archive, a non-profit organization based in San Francisco, California. It
was created by Brewster Kahle and Bruce Gilliat, and is maintained with
content from Alexa Internet. The service enables users to see archived
versions of web pages across time, which the Archive calls a "three
dimensional index".

The name Wayback Machine was chosen as a droll reference to a plot device
in an animated cartoon series, The Rocky and Bullwinkle Show. In it, Mr.
Peabody and Sherman routinely used a time machine called the "WABAC
machine" (pronounced "Wayback") to witness, participate in, and, more often
than not, alter famous events in history.

This page gives information about using the Wayback Machine to cite
archived copies of web pages used by articles. This is useful if a webpage has
changed, moved, or disappeared; links to the original content can be retained.

Any link to the Wayback Machine starts with http://web.archive.org/web/ .
This is followed by a date reference, and then the URL of the site. The
following example links to all versions of the main index page of
Wikipedia. The asterisk is a wild card for all dates:

http://web.archive.org/web/*/http://www.wikipedia.org

The next example links to the main index of Wikipedia as it appeared on September 30, 2002 at 12:35:25. The date format is YYYYMMDDhhmmss:

http://web.archive.org/web/20020930123525/http://www.wikipedia.org

The next example links to the most current version of Wikipedia. While this is possible, it is discouraged; the most recent version is subject to change, defeating the purpose of using the archive.

http://web.archive.org/web/2/http://www.wikipedia.org
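Given the URL scheme above, archive links are easy to construct programmatically. A small sketch (the function name is invented for illustration):

```python
from datetime import datetime

ARCHIVE_PREFIX = "http://web.archive.org/web/"

def wayback_url(target, when=None):
    """Build a Wayback Machine URL; the timestamp format is YYYYMMDDhhmmss."""
    stamp = when.strftime("%Y%m%d%H%M%S") if when else "*"  # "*" = all dates
    return ARCHIVE_PREFIX + stamp + "/" + target

# The example from the text: Wikipedia as of 2002-09-30 12:35:25.
print(wayback_url("http://www.wikipedia.org", datetime(2002, 9, 30, 12, 35, 25)))
```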

5.4 SQL Injection and Google Dorks
__________________________________________
5.4.1 SQL Injection

SQL injection is a technique often used to attack data-driven applications. This is done by including portions of SQL statements in an entry field in an attempt to get the website to pass a newly formed rogue SQL command to the database (e.g., dump the database contents to the attacker). SQL injection is a code injection technique that exploits a security vulnerability in an application's software. The vulnerability is present when user input is incorrectly filtered for string literal escape characters embedded in SQL statements, or when user input is not strongly typed and is unexpectedly executed. SQL injection is mostly known as an attack vector for websites but can be used to attack any type of SQL database.

In operational environments, it has been noted that applications experience an average of 71 attempts an hour.

SQL injection attack (SQLIA) is considered one of the top 10 web application vulnerabilities of 2007 and 2010 by the Open Web Application Security Project. The attack vector comprises five main sub-classes, depending on the technical aspects of the attack's deployment:

 Classic SQLIA
 Inference SQL injection
 Interacting with SQL injection
 Database management system-specific SQLIA
 Compounded SQLIA, for example:
    SQL injection + insufficient authentication
    SQL injection + DDoS attacks
    SQL injection + DNS hijacking
    SQL injection + XSS

A complete overview of the SQL injection classification is presented in the next figure. The Storm Worm is one representation of Compounded SQLIA.

[Figure: A classification of the SQL injection attack vector up to 2010.]

This classification represents the state of SQLIA with respect to its evolution up to 2010; further refinement is underway.

5.4.1.1 Incorrectly filtered escape characters

This form of SQL injection occurs when user input is not filtered for escape characters and is then passed into a SQL statement. This results in the potential manipulation of the statements performed on the database by the end-user of the application.

The following line of code illustrates this vulnerability:

statement = "SELECT * FROM users WHERE name = '" + username + "';"

This SQL code is designed to pull up the records of the specified username
from its table of users. However, if the "username" variable is crafted in a
specific way by a malicious user, the SQL statement may do more than the
code author intended. For example, setting the "username" variable as:

' or '1'='1

or using comments to block the rest of the query (there are three types of SQL comments):

' or '1'='1' -- '

' or '1'='1' # '

' or '1'='1' /* '

renders one of the following SQL statements by the parent language:

SELECT * FROM users WHERE name = '' OR '1'='1';

SELECT * FROM users WHERE name = '' OR '1'='1' -- ';

If this code were to be used in an authentication procedure, then this example could be used to force the selection of a valid username, because the evaluation of '1'='1' is always true.
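The effect can be reproduced locally. The sketch below uses SQLite purely for illustration: it builds the query by string concatenation, exactly as in the vulnerable line above, and shows that the payload matches every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_vulnerable(username):
    # Vulnerable: user input is concatenated directly into the statement.
    statement = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(statement).fetchall()

print(find_user_vulnerable("alice"))        # one row, as intended
print(find_user_vulnerable("' OR '1'='1"))  # every row: the WHERE clause is always true
```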

The following value of "username" in the statement below would cause the deletion of the "users" table as well as the selection of all data from the "userinfo" table (in essence revealing the information of every user), using an API that allows multiple statements:

a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't

This input renders the final SQL statement as follows:

SELECT * FROM users WHERE name = 'a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't';

While most SQL server implementations allow multiple statements to be executed with one call in this way, some SQL APIs such as PHP's mysql_query() function do not allow this for security reasons. This prevents attackers from injecting entirely separate queries, but doesn't stop them from modifying queries.

5.4.1.2 Incorrect type handling

This form of SQL injection occurs when a user-supplied field is not strongly
typed or is not checked for type constraints. This could take place when a
numeric field is to be used in a SQL statement, but the programmer makes no
checks to validate that the user supplied input is numeric. For example:

statement := "SELECT * FROM userinfo WHERE id = " + a_variable + ";"

It is clear from this statement that the author intended a_variable to be a number correlating to the "id" field. However, if it is in fact a string, then the end-user may manipulate the statement as they choose, thereby bypassing the need for escape characters. For example, setting a_variable to 1;DROP TABLE users will drop (delete) the "users" table from the database, since the SQL becomes:

SELECT * FROM userinfo WHERE id=1;DROP TABLE users;
5.4.1.3 Blind SQL injection

Blind SQL injection is used when a web application is vulnerable to an SQL injection but the results of the injection are not visible to the attacker. The page with the vulnerability may not be one that displays data but will display differently depending on the results of a logical statement injected into the legitimate SQL statement called for that page. This type of attack can become time-intensive because a new statement must be crafted for each bit recovered. There are several tools that can automate these attacks once the location of the vulnerability and the target information have been established.

5.4.1.4 Conditional responses

One type of blind SQL injection forces the database to evaluate a logical
statement on an ordinary application screen. As an example, a book review
website uses a query string to determine which book review to display. So
the URL http://books.example.com/showReview.php?ID=5 would cause the
server to run the query

SELECT * FROM bookreviews WHERE ID = '5';

from which it would populate the review page with data from the review
with ID 5, stored in the table bookreviews. The query happens completely on
the server; the user does not know the names of the database, table, or fields,
nor does the user know the query string. The user only sees that the above
URL returns a book review. A hacker can load the URLs http://books.example.com/showReview.php?ID=5 AND 1=1 and http://books.example.com/showReview.php?ID=5 AND 1=2, which may result in the queries

 SELECT * FROM bookreviews WHERE ID = '5' AND '1'='1';
 SELECT * FROM bookreviews WHERE ID = '5' AND '1'='2';
respectively. If the original review loads with the "1=1" URL and a blank or
error page is returned from the "1=2" URL, the site is likely vulnerable to a
SQL injection attack. The hacker may proceed with this query string designed
to reveal the version number of MySQL running on the server: http://books.example.com/showReview.php?ID=5 AND substring(@@version,1,1)=4, which would show the book review on a server running MySQL 4, and a blank or error page otherwise. The hacker can continue to use code within query strings to glean more information from the server until another avenue of attack is discovered or his or her goals are achieved.
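The bit-by-bit recovery described above can be simulated locally. In this sketch (SQLite stands in for the remote database, and the table and function names are invented), the "application" reveals only whether a query returned a row, yet the "attacker" still recovers a hidden value one character at a time with substr() comparisons:

```python
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (value TEXT)")
conn.execute("INSERT INTO secrets VALUES ('s3cret')")

def page_loads(injected_condition):
    # Simulates the only feedback a blind-injection attacker gets:
    # does the page render normally (a row was found) or not?
    query = "SELECT value FROM secrets WHERE " + injected_condition
    return conn.execute(query).fetchone() is not None

def recover_secret(max_len=20):
    recovered = ""
    for pos in range(1, max_len + 1):
        for ch in string.printable:
            if "'" in ch or "\\" in ch:
                continue  # skip characters that would break the quoting
            if page_loads("substr(value, %d, 1) = '%s'" % (pos, ch)):
                recovered += ch
                break
        else:
            break  # no character matched: end of the hidden value reached
    return recovered

print(recover_secret())  # recovers the hidden value one character at a time
```

Each recovered character costs up to one request per candidate character, which is why such attacks are slow and are usually automated.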

5.4.1.5 Parameterized statements

With most development platforms, parameterized statements (sometimes called prepared statements, using placeholders or bind variables) can be used instead of embedding user input directly in the statement. A placeholder can only store a value of the given type and not an arbitrary SQL fragment. Hence the SQL injection would simply be treated as a strange (and probably invalid) parameter value.

In many cases, the SQL statement is fixed, and each parameter is a scalar, not
a table. The user input is then assigned (bound) to a parameter.
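A minimal sketch of binding, again using SQLite for illustration (placeholder syntax varies by driver: ? here, %s or :name elsewhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_safe(username):
    # The ? placeholder binds username as a single scalar value;
    # any SQL fragment inside it is treated as literal text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

print(find_user_safe("alice"))        # the intended row
print(find_user_safe("' OR '1'='1"))  # empty: the payload is just a strange username
```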

5.4.1.6 Enforcement at the coding level

Using object-relational mapping libraries avoids the need to write SQL code.
The ORM library in effect will generate parameterized SQL statements from
object-oriented code.

5.4.1.7 Escaping

A straightforward, though error-prone, way to prevent injections is to escape characters that have a special meaning in SQL. The manual for an SQL DBMS explains which characters have a special meaning, which allows creating a comprehensive blacklist of characters that need translation. For instance, every occurrence of a single quote (') in a parameter must be replaced by two single quotes ('') to form a valid SQL string literal. For example, in PHP it is usual to escape parameters using the function mysql_real_escape_string() before sending the SQL query:

$query = sprintf("SELECT * FROM `Users` WHERE UserName='%s' AND Password='%s'",
    mysql_real_escape_string($Username),
    mysql_real_escape_string($Password));
mysql_query($query);
This function calls MySQL's library function mysql_real_escape_string, which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a. It must always (with few exceptions) be used to make data safe before sending a query to MySQL. PHP provides similar functions for other database types, such as pg_escape_string() for PostgreSQL. For databases that do not have a dedicated escaping function in PHP, addslashes(string $str) can be used; it returns a string with backslashes before characters that need to be quoted in database queries: single quote ('), double quote ("), backslash (\) and NUL (the NULL byte).

Routinely passing escaped strings to SQL is error-prone because it is easy to forget to escape a given string. Creating a transparent layer to secure the input can reduce this error-proneness, if not entirely eliminate it.
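The quote-doubling rule can be shown concretely. The helper below is a deliberately naive illustration that handles only single quotes, which is exactly why production code should use the driver's own escaping function or, better, parameterized statements:

```python
def escape_single_quotes(value):
    # SQL string literals escape ' by doubling it: d'Artagnan -> d''Artagnan
    return value.replace("'", "''")

def build_query(username):
    return "SELECT * FROM users WHERE name = '" + escape_single_quotes(username) + "'"

print(build_query("' OR '1'='1"))
# The payload is now embedded as a harmless string literal rather than
# terminating the quoted value.
```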

5.4.1.8 Pattern check

Integer, float or boolean parameters can be checked to determine whether their value is a valid representation for the given type. Strings that must follow some strict pattern (date, UUID, alphanumeric only, etc.) can be checked against this pattern.
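Such checks are straightforward to express in code; an illustrative sketch using the standard library:

```python
import re
import uuid

def check_int(value):
    # Reject anything that is not a valid integer representation.
    try:
        int(value)
        return True
    except ValueError:
        return False

def check_uuid(value):
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # e.g. 2013-06-30

def check_date(value):
    return bool(DATE_RE.match(value))

print(check_int("42"), check_int("1;DROP TABLE users"))
print(check_date("2013-06-30"), check_date("' OR 1=1"))
```

Input that fails validation is rejected before it ever reaches the SQL layer, so an injection payload never becomes part of a statement.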

5.4.1.9 Database permissions

Limiting the permissions on the database logon used by the web application
to only what is needed may help reduce the effectiveness of any SQL injection
attacks that exploit any bugs in the web application.

For example, on SQL Server, a database logon could be restricted from selecting on some of the system tables, which would limit exploits that try to insert JavaScript into all the text columns in the database.

 deny SELECT ON sys.sysobjects TO webdatabaselogon;
 deny SELECT ON sys.objects TO webdatabaselogon;
 deny SELECT ON sys.TABLES TO webdatabaselogon;
 deny SELECT ON sys.views TO webdatabaselogon;

5.4.2 Google Dorks

 intitle:admin inurl:admin.php site:co.in
 inurl:admin.php site:co.in
 inurl:admin.php site:com

 intitle:login inurl:login.php site:co.in
 intitle:login inurl:login.php site:.in
 inurl:userlogin.php site:.in
 inurl:loginpanel.php site:.in

5.4.3 Vulnerable Sites

 http://www.exposer.co.in/admin.php
 http://lionsclubofwashim.co.in/admin.php
 http://www.goldentimes.co.in/admin.php
 www.shubhmilan.in/userlogin.php
 suresolutions.co.in/userlogin.php?
 www.induscapital.in/userlogin.php
 www.smptngo.in/userLogin.php?
 http://www.shelterguide.co.in/admin/login.php
 http://www.hdsoftware.co.in/admin/login.php
 http://ncm.co.in/admin/login.php
 http://www.indavara.in/admin/login.php
 http://www.jaipurhome.in/admin/login.php
 http://smpsbhilwara.in/admin/login.php
 http://nppbisauli.in/admin/login.php
 http://nppkayamganj.in/admin/login.php
 http://www.npgauratha.in/admin/login.php
 http://www.mcnagrotabagwan.in/admin/login.php

5.5 Phishing Page
_______________________

Phishing is the act of attempting to acquire information such as usernames, passwords, and credit card details (and sometimes, indirectly, money) by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social web sites, auction sites, online payment processors or IT administrators are commonly used to lure the unsuspecting public. Phishing emails may contain links to websites that are infected with malware. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the
legitimate one. Phishing is an example of social engineering techniques used
to deceive users, and exploits the poor usability of current web security
technologies. Attempts to deal with the growing number of reported phishing
incidents include legislation, user training, public awareness, and technical
security measures.

A phishing technique was described in detail in 1987, and (according to its creator) the first recorded use of the term "phishing" was made in 1995 by
Jason Shannon of AST Computers. The term is a variant of fishing, probably
influenced by phreaking, and alludes to "baits" used in hopes that the
potential victim will "bite" by clicking a malicious link or opening a malicious
attachment, in which case their financial information and passwords may
then be stolen.

5.5.1 List of phishing techniques

5.5.1.1 Phishing

Phishing is a way of attempting to acquire information such as usernames, passwords, and credit card details by masquerading as a trustworthy entity in an electronic communication.

5.5.1.2 Spear phishing

Phishing attempts directed at specific individuals or companies have been termed spear phishing. Attackers may gather personal information about their target to increase their probability of success.

5.5.1.3 Clone phishing

A type of phishing attack whereby a legitimate, and previously delivered, email containing an attachment or link has had its content and recipient address(es) taken and used to create an almost identical or cloned email. The attachment or link within the email is replaced with a malicious version and then sent from an email address spoofed to appear to come from the original sender. It may claim to be a resend of the original or an updated version of the original.

This technique could be used to pivot (indirectly) from a previously infected machine and gain a foothold on another machine, by exploiting the social trust associated with the inferred connection due to both parties receiving the original email.

5.5.1.4 Whaling

Several recent phishing attacks have been directed specifically at senior executives and other high-profile targets within businesses, and the term whaling has been coined for these kinds of attacks.

5.5.2 Link manipulation

Most methods of phishing use some form of technical deception designed to make a link in an email (and the spoofed website it leads to) appear to belong to the spoofed organization. Misspelled URLs or the use of subdomains are common tricks used by phishers. In the following example URL, http://www.yourbank.example.com/, it appears as though the URL will take you to the example section of the yourbank website; actually this URL points to the "yourbank" (i.e. phishing) section of the example website. Another common trick is to make the displayed text for a link (the text between the <A> tags) suggest a reliable destination, when the link actually goes to the phishers' site. The following example link, //en.wikipedia.org/wiki/Genuine, appears to direct the user to an article entitled "Genuine"; clicking on it will in fact take the user to the article entitled "Deception". In the lower left-hand corner of most browsers, users can preview and verify where the link is going to take them. Hovering your cursor over the link for a couple of seconds may do a similar thing, but even this displayed destination can be manipulated by the phisher.
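The mismatch between a link's visible text and its real destination can be detected mechanically: parse each anchor and compare the displayed text with the href attribute. A small sketch using Python's standard-library HTML parser (the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs for every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    # Flag links whose visible text looks like a URL but differs from the href.
    auditor = LinkAuditor()
    auditor.feed(html)
    return [(href, text) for href, text in auditor.links
            if text.startswith("http") and text not in href]

sample = '<a href="http://phisher.example/login">http://www.yourbank.example.com/</a>'
print(suspicious_links(sample))  # the displayed URL does not match the real target
```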

A further problem with URLs has been found in the handling of internationalized domain names (IDN) in web browsers, which might allow visually identical web addresses to lead to different, possibly malicious, websites. Despite the publicity surrounding the flaw, known as IDN spoofing or a homograph attack, phishers have taken advantage of a similar risk, using open URL redirectors on the websites of trusted organizations to disguise malicious URLs with a trusted domain. Even digital certificates do not solve this problem, because it is quite possible for a phisher to purchase a valid certificate and subsequently change content to spoof a genuine website.

5.5.3 Filter evasion

Phishers have used images instead of text to make it harder for anti-phishing
filters to detect text commonly used in phishing emails.

5.5.4 Website forgery

Once a victim visits the phishing website, the deception is not over. Some
phishing scams use JavaScript commands in order to alter the address
bar. This is done either by placing a picture of a legitimate URL over the
address bar, or by closing the original address bar and opening a new one
with the legitimate URL.

An attacker can even use flaws in a trusted website's own scripts against the
victim. These types of attacks (known as cross-site scripting) are particularly
problematic, because they direct the user to sign in at their bank or service's
own web page, where everything from the web address to the security
certificates appears correct. In reality, the link to the website is crafted to
carry out the attack, making it very difficult to spot without specialist
knowledge. Just such a flaw was used in 2006 against PayPal.

A Universal Man-in-the-middle (MITM) Phishing Kit, discovered in 2007, provides a simple-to-use interface that allows a phisher to convincingly reproduce websites and capture log-in details entered at the fake site.

To avoid anti-phishing techniques that scan websites for phishing-related text, phishers have begun to use Flash-based websites. These look much like the real website, but hide the text in a multimedia object.

5.5.5 Phone phishing

Not all phishing attacks require a fake website. Messages that claimed to be
from a bank told users to dial a phone number regarding problems with their
bank accounts. Once the phone number (owned by the phisher, and provided
by a Voice over IP service) was dialled, prompts told users to enter their
account numbers and PIN. Vishing (voice phishing) sometimes uses fake
caller-ID data to give the appearance that calls come from a trusted
organization.

5.5.6 Other techniques

Another attack used successfully is to forward the client to a bank's legitimate website, then to place a popup window requesting credentials on top of the website, in a way that makes it appear the bank is requesting this sensitive information.

One of the latest phishing techniques is tabnabbing. It takes advantage of the multiple tabs that users use and silently redirects a user to the affected site.

Evil twins is a phishing technique that is hard to detect. A phisher creates a fake wireless network that looks similar to a legitimate public network that may be found in public places such as airports, hotels or coffee shops. Whenever someone logs on to the bogus network, fraudsters try to capture their passwords and/or credit card information.

5.5.7 Damage caused by phishing

The damage caused by phishing ranges from denial of access to email to substantial financial loss. It is estimated that between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses caused by phishing, totaling approximately US$929 million. United States businesses lose an estimated US$2 billion per year as their clients become victims. In 2007, phishing attacks escalated: 3.6 million adults lost US$3.2 billion in the 12 months ending in August 2007. Microsoft claims these estimates are grossly exaggerated and puts the annual phishing loss in the US at US$60 million. In the United Kingdom, losses from web banking fraud, mostly from phishing, almost doubled to GB£23.2m in 2005, from GB£12.2m in 2004, while 1 in 20 computer users claimed to have lost out to phishing in 2005.

The stance adopted by the UK banking body APACS is that "customers must
also take sensible precautions ... so that they are not vulnerable to the
criminal." Similarly, when the first spate of phishing attacks hit the Irish
Republic's banking sector in September 2006, the Bank of Ireland initially
refused to cover losses suffered by its customers (and it still insists that its
policy is not to do so), although losses to the tune of €11,300 were made good.

5.5.8 Anti-phishing

As recently as 2007, the adoption of anti-phishing strategies by businesses needing to protect personal and financial information was low. Now there are several different techniques to combat phishing, including legislation and technology created specifically to protect against phishing. These techniques include steps that can be taken by individuals, as well as by organizations. Phone, web site, and email phishing can now be reported to authorities.

5.5.9 Social responses

One strategy for combating phishing is to train people to recognize phishing attempts, and to deal with them. Education can be effective, especially where training provides direct feedback. One newer phishing tactic, which uses phishing emails targeted at a specific company, known as spear phishing, has been harnessed to train individuals at various locations, including the United States Military Academy at West Point, NY. In a June 2004 experiment with spear phishing, 80% of 500 West Point cadets who were sent a fake email from a non-existent Col. Robert Melville at West Point were tricked into clicking on a link that would supposedly take them to a page where they would enter personal information. (The page informed them that they had been lured.)

People can take steps to avoid phishing attempts by slightly modifying their
browsing habits. When contacted about an account needing to be "verified"
(or any other topic used by phishers), it is a sensible precaution to contact the
company from which the email apparently originates to check that the email
is legitimate. Alternatively, the address that the individual knows is the
company's genuine website can be typed into the address bar of the browser,
rather than trusting any hyperlinks in the suspected phishing message.

Nearly all legitimate e-mail messages from companies to their customers contain an item of information that is not readily available to phishers. Some companies, for example PayPal, always address their customers by their username in emails, so if an email addresses the recipient in a generic fashion ("Dear PayPal customer") it is likely to be an attempt at phishing. Emails from banks and credit card companies often include partial account numbers. However, recent research has shown that the public do not typically distinguish between the first few digits and the last few digits of an account number, a significant problem since the first few digits are often the same for all clients of a financial institution. People can be trained to have their suspicion aroused if the message does not contain any specific personal information. Phishing attempts in early 2006, however, used personalized information, which makes it unsafe to assume that the presence of personal information alone guarantees that a message is legitimate. Furthermore, another recent study concluded in part that the presence of personal information does not significantly affect the success rate of phishing attacks, which suggests that most people do not pay attention to such details.

The Anti-Phishing Working Group, an industry and law enforcement association, has suggested that conventional phishing techniques could become obsolete in the future as people become increasingly aware of the social engineering techniques used by phishers. They predict that pharming and other uses of malware will become more common tools for stealing information.

Everyone can help educate the public by encouraging safe practices, and by
avoiding dangerous ones. Unfortunately, even well-known players are known
to incite users to hazardous behaviour, e.g. by requesting their users to reveal
their passwords for third party services, such as email.

5.6 Acunetix
____________________

Acunetix Web Vulnerability Scanner ensures website security by automatically checking for SQL injection, Cross-Site Scripting and other
vulnerabilities. The scanner checks password strength on authentication
pages and automatically audits shopping carts, forms, dynamic content and
other web applications. Detailed reports resulting from the scan identify
where vulnerabilities exist. The Acunetix WVS Reporting Application allows
security alerts to be presented in a document which abides by the PCI
Compliance specification.

Acunetix is a market leader in web application security technology, founded to combat the alarming rise in web attacks. Its flagship product, Acunetix Web Vulnerability Scanner, is the result of several years of work by a team of highly experienced security developers. Acunetix customers include the US Army, US Air Force, AT&T, KPMG, Telstra, Fujitsu, and Adidas.

5.6.1 A complete guide to securing a website

To secure a website or a web application, one has to first understand the target application, how it works and the scope behind it. Ideally, the penetration tester should have some basic knowledge of programming and scripting languages, as well as web security.

A website security audit usually consists of two steps. Most of the time, the first step is to launch an automated scan. Afterwards, depending on the results and the website's complexity, a manual penetration test follows. To properly complete both the automated and manual audits, a number of tools are available to simplify the process and make it efficient from the business point of view. Automated tools help the user make sure the whole website is properly crawled, and that no input or parameter is left unchecked. Automated web vulnerability scanners also help in finding a high percentage of the technical vulnerabilities, and give you a very good overview of the website's structure and security status. Thanks to automated scanners, you can have a better overview and understanding of the target website, which eases the manual penetration testing process.

For the manual security audit, one should also have a number of tools to ease the process, such as tools to launch fuzzing tests, tools to edit HTTP requests and review HTTP responses, and a proxy to analyse the traffic.

In this white paper we explain in detail how to do a complete website security audit, focusing on using the right approach and tools. We describe the whole process of securing a website in an easy-to-read, step-by-step format: what needs to be done prior to launching an automated website vulnerability scan, up until the manual penetration testing phase.

5.6.2 Manual Assessment of target website or web application

Securing a website or a web application with an automated web vulnerability scanner can be a straightforward and productive process, if all the necessary pre-scan tasks and procedures are taken care of. Depending on the size and complexity of the web application structure, launching an automated web security scan with typical 'out of the box' settings may lead to a number of false positives, wasted time and frustration.

Even though web vulnerability scanning technology has improved in recent years, a good web vulnerability scanner sometimes needs to be pre-configured. Web vulnerability scanners are designed to scan a wide variety of complex custom-made web applications. Therefore, most of the time one needs to fine-tune the scanner to his or her needs to achieve the desired correct scan results.

Before launching any kind of automated security scanning process, a manual assessment of the target website needs to be performed. It is a well-known fact that an automated scanner will scan every entry point in your website, including those you are likely to forget, and test each of them for a wide variety of vulnerabilities.

During the manual assessment, familiarize yourself with the website topology and architecture. Keep a record of the number of pages and files present in the website, and take note of the directory and file structure. If you have access to the website's root directory and source code, take your time to get to know it. If not, you can manually follow the links throughout the website. This process will help you understand the structure of the URLs. Also, take note of all the submission and other types of online forms available on the website.

During the pre-automated-scan manual assessment, apart from getting used to directory structures and the number of files, get to know what web technology is used to develop the target website, e.g. .NET or PHP. There are a number of vulnerabilities which are specific to different types of technologies.

Once the manual assessment is complete, you should know enough about the target website to determine whether it was properly crawled by the automated black box scanner before a scan is launched. If the website is not crawled properly, i.e. the scanner is unable to crawl some parts or parameters of the website, the whole point of "securing the website" is invalidated. The manual assessment will go a long way towards heading off invalid scans and false positives. It will also help you get more familiar with the website itself, and that's the best way to help you configure the automated scanner to cover and check the entire website.

5.6.3 Get familiar with the software

Although many automated web vulnerability scanners have a comfortable GUI, if you are new to web security you might get confused by the number of options and technical terms you'll encounter when trying to use a black box scanner. Do not give up; it is not rocket science. Commercial black box scanners are backed by professional support departments, so make sure you use them. You can also find a good amount of information and 'how-to's' about the product you are using online. There are a good number of open source solutions as well, but most of the time you have to dig deep and paddle on your own in rough waters to find support for such solutions. Many commercial software companies are also using social networks to make it easier for you to get to know more about their product, how it works and best practices for using it.

5.6.4 Configuring the automated black box scanner

Once you’re familiar with the automated black box scanner you will be using,
and the target website or web application you will be scanning, it is time to
get down to business and get your hands dirty. To start off with, one must
first configure the scanner. The most crucial things you should configure in
the scanner before launching any automated process are;

Custom 404 Pages – If the server returns HTTP status code 200 when a non-existing URL is requested, configure custom 404 page detection so the scanner can recognize error pages.

URL Rewrite rules – If the website is using search-engine-friendly URLs, configure these rules to help the scanner understand the website structure so it can crawl it properly.

Login Sequences – If parts of the website are password protected and you
would like the scanner to scan them, record a login sequence to train the
scanner to automatically login to the password protected section, crawl it
and scan it.

Mark pages which need manual intervention – If the website contains pages which require the user to enter a one-time value when accessed, such as a CAPTCHA, mark these pages as pages which need manual intervention, so during the crawling process the scanner will automatically prompt you to enter such values.

Submission Forms – If you would like specific details to be used each time a particular form is crawled by the scanner, configure the scanner with those details. Nowadays scanners make this easy by populating the fields automatically (as in Acunetix WVS).

Scanner Filters – Use the scanner filters to specify a file, file type or directory you would like excluded from the scan. You can also exclude specific parameters.
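A simple way to picture such filters is glob-style exclusion patterns applied to each crawled path. The patterns below are examples (backup files, the logout link so the crawler does not end its own session, and static assets); real scanners have their own filter syntax.

```python
from fnmatch import fnmatch

# Hypothetical exclusions for an automated scan.
EXCLUDE_PATTERNS = ["*.bak", "*/logout*", "/static/*"]

def should_scan(path, patterns=EXCLUDE_PATTERNS):
    """Return True if the path is not matched by any exclusion."""
    return not any(fnmatch(path, p) for p in patterns)
```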

[Figure: Acunetix WVS Settings node]

5.6.5 Protect your data

From time to time I have noticed people complaining that web vulnerability scanners are too invasive, and therefore opting not to run them against their website. This is a bad assumption and the wrong solution, because if an automated web vulnerability scanner can break down your website, imagine what a malicious user can do. The real solution is to start securing your website and make sure it can properly handle an automated scan.
To start off, automated web vulnerability scanners tend to perform invasive scans against the target website, since they try to input data the website has not been designed to handle. If the automated vulnerability scanner is not that invasive against a target website, then it is not really checking for all vulnerabilities and is not doing an in-depth security check. Such security checks can and will lead to a number of unwanted results: deleted database records, a changed blog theme, garbage posts on your forum, a huge number of emails in your mailbox and, even worse, a non-functional website. This is expected, because just like a malicious user, the automated black box scanner will try its best to find security holes in your website and to gain unauthorized access.

Therefore it is imperative that such scans are not launched against live servers. Ideally a replica of the live environment should be created in a test lab, so that if something goes wrong, only the replica is affected. If a test lab is not available, make sure you have recent backups; then if something does go wrong, the live website can be restored and functional again in the shortest time possible.
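The pre-scan backup step can be sketched as below. This is a minimal illustration, assuming the web root is a plain directory; in a real setup you would also dump the database (for example with mysqldump), which is outside this sketch.

```python
import tarfile
import time
from pathlib import Path

def backup_webroot(webroot, dest_dir):
    """Archive the web root before an invasive scan, so the site can
    be restored quickly if the scanner breaks something."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"site-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(webroot, arcname=Path(webroot).name)
    return archive
```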

5.6.6 Launching the scan

Once the manual website analysis is ready and the black box scanner is configured, we are ready to launch the automated scan. If time permits, first run a crawl of the website, so that once the crawl is ready you can confirm that all the files and input parameters in the website were crawled by the scanner. Once you confirm that all the files were crawled, you can safely proceed with the automated scan.
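Checking crawl coverage means comparing what the crawler found against your manual site analysis: every page and input parameter you identified by hand should appear in the crawl results. A toy version of the extraction side, using only the standard library, might look like this (real crawlers do far more, e.g. JavaScript execution):

```python
from html.parser import HTMLParser

class CrawlAudit(HTMLParser):
    """Collect links and form input names from one page of HTML."""
    def __init__(self):
        super().__init__()
        self.links, self.inputs = set(), set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.add(attrs["href"])
        elif tag in ("input", "select", "textarea") and attrs.get("name"):
            self.inputs.add(attrs["name"])

def audit_page(html):
    parser = CrawlAudit()
    parser.feed(html)
    return parser.links, parser.inputs
```

Running this over each crawled page and diffing the result against your manual inventory quickly shows whether any file or parameter was missed.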

5.6.7 After the scan – Analysing the results

Once the automated security scan is finished, you already have a good overview of your website's security level. Look into the details of every reported vulnerability and make sure you have all the information required to fix it. A typical black box scanner such as Acunetix Web Vulnerability Scanner reports a good amount of detail about each discovered vulnerability, such as the HTTP request and response headers, the HTML response, a description of the vulnerability and a number of web links where you can learn more about the reported vulnerability and how to fix it.

[Figure: Acunetix WVS website security scan results]

If AcuSensor Technology (Acunetix WVS) is enabled, much more debug information is reported, such as the line of code which leads to the reported vulnerability and the SQL stack trace in the case of SQL injection.

Analysing the automated scan results in detail will also help you better understand the way the web application works and how the input parameters are used, giving you an idea of what type of tests to launch in the manual penetration test and which parameters to target.

5.6.8 Manual penetration test

There are a number of advantages to using a commercial black box security scanner such as Acunetix Web Vulnerability Scanner. Apart from the benefit of professional support and official documentation, it also includes a number of advanced manual penetration testing tools. Having all the web penetration testing tools in a centralized web security solution has the advantage that they all support importing and exporting data from one to the other, which you will definitely need. It also helps with manually analysing the scan results: you can export the automated scan results to the manual tools and look further into the issues.

[Figure: Acunetix WVS HTTP Editor]

Like the automated scan, the manual penetration test is a very important step in securing a website. If the advanced manual penetration testing tools are used properly, they can ease the manual penetration testing process and help you be more efficient. Manual penetration testing helps you audit your website and check for logical vulnerabilities; even though automated scans can hint at such vulnerabilities and help you pinpoint them, most of them can only be discovered and verified manually.

5.6.8.1 Two examples of logical vulnerabilities

While auditing a shopping cart, you notice that if you manually set the price parameter to 0 in the checkout request, the customer can get the product for free without being asked for payment details.
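This kind of parameter tampering is exactly the edit you would make in an HTTP editor tool: capture the checkout request, change one field, and resend it. A minimal sketch of the rewriting step, with made-up field names, might be:

```python
from urllib.parse import parse_qs, urlencode

def tamper(body, param, value):
    """Rebuild a captured form body with one parameter changed."""
    fields = {k: v[0] for k, v in parse_qs(body).items()}
    fields[param] = value
    return urlencode(fields)

# Captured checkout request body (hypothetical fields):
original = "item=1234&qty=1&price=49.99"
tampered = tamper(original, "price", "0")
# If the server accepts an order built from `tampered`, the price
# check exists only client-side -- a logical vulnerability.
```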

Or else imagine an online ads company promotes a new campaign: create an online account, buy $100 worth of ads, and they will give you an extra $100 worth of ads for free. During the development stage, the developers should write a check statement like the following:

IF new account AND deposits $100 THEN give $100

If the developers forgot the AND condition, then upon opening an account, and without needing to purchase $100 worth of adverts, you would still get your $100 worth of free ads.
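The pseudocode above translates directly into code; the buggy and intended versions differ by a single dropped condition. This is an illustrative Python rendering, not code from any real ads platform:

```python
def free_ads_buggy(new_account, deposit):
    # The AND condition was forgotten: every new account gets $100.
    if new_account:
        return 100
    return 0

def free_ads_correct(new_account, deposit):
    # Intended rule: a new account AND a $100 deposit earn $100 of ads.
    if new_account and deposit >= 100:
        return 100
    return 0
```

Note that no scanner payload triggers this bug; only someone reasoning about the business rule would think to open an account and skip the deposit.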

One might think that such logical vulnerabilities are very remote, or even a joke, but we do encounter them when analysing production web applications. Such vulnerabilities are typically discovered by using several manual penetration testing tools together, such as the HTTP Sniffer to analyse the application logic, and then the HTTP Editor to build HTTP requests, send them and analyse the server response.
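What an HTTP editor does under the hood is assemble a raw request line by line, so every header and parameter can be hand-tuned before sending. A bare-bones sketch of that assembly step (the host and body are examples; sending over a socket is omitted):

```python
def build_request(method, path, host, headers=None, body=""):
    """Assemble a raw HTTP/1.1 request as text, ready to be sent
    over a plain socket or pasted into an HTTP editor tool."""
    headers = {"Host": host, "Connection": "close", **(headers or {})}
    if body:
        headers["Content-Length"] = str(len(body))
    lines = [f"{method} {path} HTTP/1.1"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n" + body
```

Building requests this way makes it easy to replay a sniffed request with one field altered, which is the core workflow behind both examples above.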

[Figure: Acunetix WVS HTTP Sniffer]

5.6.9 Conclusion

As we can see from the above, web security is very different from network security. As a concept, network security can be simplified to "allow the good guys in and block the bad guys." Web security is much more than that. But never give up.

Bibliography
