Internship Project Report
Submitted By
Anshul Tayal
101247
X-6
CSE
(EH1-Infotech)
Mr Sahil Baghla
DECLARATION
Date: ________________
This report has not been submitted previously to any other university for any
examination.
______________________
Anshul Tayal
Acknowledgement
I have put considerable effort into this project. However, it would not have been possible without the kind support and help of many individuals and organizations. I would like to extend my sincere thanks to all of them.
I would like to express my special gratitude and thanks to the industry professionals who gave me their attention and time.
INDEX
Declaration
Acknowledgement
Chapter 1 Introduction
Chapter 2 Windows Security Implementation
Password Restoration
Firewall Implementation
Antivirus
Chapter 3 LAN Scanning
IP Address Scanning
Port Scanning
DNS Scanning
Chapter 4 Wi-Fi Security
Chapter 5 Website Scanning and Security
Introduction to Website Security
Live WHOIS
Phishing Page
Acunetix
Bibliography
About The Project
While working on securing systems, I found some vulnerable websites with the help of the art of Google surfing. I performed penetration testing of these websites, found their weak points, and explained how to secure them against theft and compromise.
Chapter 1
Introduction
Computer security (also known as IT security) is information
security as applied to computers and networks. The field covers all
the processes and mechanisms by which computer-based equipment,
information and services are protected from unintended or
unauthorized access, change or destruction. Computer security also
includes protection from unplanned events and natural disasters.
While government mandates are driving organizations to address compliance initiatives, the security of many data assets has seen limited improvement. Many organizations are quietly struggling after being victimized by information theft, and are seeking to understand the potential consequences and the methods of recovery.
Information Defence helps organizations to identify threats to
intellectual property and sensitive data assets along with the
necessary measures to prepare for, prevent, and respond to cyber-
crime and data theft.
effectively respond to cyber incidents; provide technical assistance to
owners and operators of critical infrastructure and disseminate
timely and actionable notifications regarding current and potential
security threats and vulnerabilities. By leveraging the resources of
the ICE Cyber Crimes Centre, organizations have been integrally
involved in Internet investigations concerning identity and document
fraud, financial fraud, and smuggling.
Chapter 2
Windows Security Implementation
2.1.1 Vulnerability to malware
the system. In some environments, users are over-privileged because they
have been inappropriately granted administrator or equivalent status.
Since version 3.4, Trinity Rescue Kit (TRK) has had an easy-to-use scrollable text menu that allows anyone who can handle a keyboard and some English to perform maintenance and repair on a computer, ranging from password resetting through disk clean-up to virus scanning.
It is possible to boot TRK in three different ways:
-as a bootable CD, which you can burn yourself from a downloadable ISO file or a self-burning Windows executable
-from a USB stick/disk, installable from Windows or from the bootable TRK CD
-from the network over PXE: you start one TRK from CD or USB and run all other computers from that one over the network, without modifying anything on your local network
TRK has gained an easy-to-use text menu but has equally kept the command line.
Here is a summary of some of the most important features, new and old:
-easily reset Windows passwords with the improved winpass tool
-simple and easy menu interface
-5 different virus-scan products integrated in a single uniform command line, with online update capability
-winclean, a utility that cleans up all sorts of unnecessary temporary files on your computer.
2.2.1 The idea behind Trinity Rescue Kit
They brewed the idea of creating a free bootable Linux CD containing all available free tools that can help you in any way with rescuing your Windows installation; eventually, this is how far it has gotten, with thousands of hours of work gone into it. All this is for you, for free.
Trinity Rescue Kit is based on binaries and scripts from several other distributions, like Timo's Rescue CD, Mandriva 2005 and Fedora Core 3 and 4, as well as many original source packages and tools from other distros. The start-up procedure and methods, several scripts, and the overall concept are completely self-made, or at least heavily adapted.
2.3 Firewall Implementation
Comodo's firewall protects you from viruses, malware, and hackers.
shouldn't access a PC. The problem here is obvious. What if the list of
malware is missing some entries, or isn't up to date?
Default Deny Protection (DDP) fixes this problem to ensure complete security. The firewall references a list of over two million known PC-friendly applications. If a file that is not on this safe-list knocks on your PC's door, the firewall immediately alerts you to the possibility of attacking malware. All this occurs before the malware infects your computer. It's prevention-based security, the only way to keep PCs totally safe.
2.3.3 Personalized alerts
2.4 Antivirus
_____________________
With 125,000 new malicious programs appearing every single day – antivirus
protection is a necessity… not a luxury. Your PC needs effective antivirus
defences… and you deserve an antivirus solution that’s easy to manage.
2.4.2 Identifies suspicious websites and phishing websites
controlling the launch of executable files from applications with vulnerabilities
analysing the behaviour of executable files for any similarities with malicious programs
restricting the actions allowed by applications with vulnerabilities
2.4.8 Optimised antivirus databases
The main interface window is optimised to help boost performance and ease
of use for many popular user scenarios – including launching scans and fixing
problems.
For other applications that are compatible with Microsoft’s new user
interface, Kaspersky technologies scan the applications for viruses – and
infected applications are removed and then replaced with clean applications.
Employee monitoring, due to the increase in cyber loafing and lawsuits, has
become more widespread and much easier with the use of new and cheaper
technologies. Both employers and employees are concerned with the ethical
implications of constant monitoring. While employers use monitoring devices
to keep track of their employees' actions and productivity, their employees
feel that too much monitoring is an invasion of their privacy. Thus, the ethics
of monitoring employees is explored and current practices are discussed. This
document further provides suggestions for reducing cyber loafing and
encourages institutions to create and effectively communicate ethical
standards for employee monitoring in their firms. The author has included
actual samples of employees' perceptions and feelings from the surveys and
discussions on being monitored.
Email log delivery - the keylogger can send recorded logs through e-mail at set times - perfect for remote monitoring!
FTP delivery - the keylogger can upload recorded logs through FTP.
Chat monitoring - the keylogger is designed to record and monitor both sides of a conversation in the following chat clients:
AIM
Windows Live Messenger 2011
ICQ 7
Skype 4
Yahoo Messenger 10
Google Talk
Miranda
QiP 2010
Security - allows you to protect program settings, Hidden Mode and the log file.
Application monitoring - the keylogger records which application was in use when it received each keystroke.
Powerful Log Viewer - you can view and save the log as an HTML page or plain text with the keylogger's Log Viewer.
Small size - the keylogger is several times smaller than other programs with the same features. It has no additional modules and libraries, so its size is smaller and its performance higher.
The keylogger fully supports Unicode characters, which makes it possible to record keystrokes that include characters from many other character sets. It records every keystroke and captures passwords and all other invisible text.
Chapter 3
LAN Scanning
3.1 Introduction to IP
____________________________
IP addresses are binary numbers, but they are usually stored in text files and
displayed in human-readable notations, such as 172.16.254.1 (for IPv4), and
2001:db8:0:1234:0:567:8:1 (for IPv6).
3.1.1 IPv4 addresses
Historical classful network architecture

Class  Leading bits  Network number bit field  Rest bit field  Number of networks  Addresses per network  Start address  End address
A      0             8                         24              128 (2^7)           16,777,216 (2^24)      0.0.0.0        127.255.255.255
Today, when needed, such private networks typically connect to the Internet
through network address translation (NAT).
Reserved private IPv4 network ranges:

Start        End              No. of addresses
10.0.0.0     10.255.255.255   16,777,216
172.16.0.0   172.31.255.255   1,048,576
192.168.0.0  192.168.255.255  65,536
The large number of IPv6 addresses allows large blocks to be assigned for
specific purposes and, where appropriate, to be aggregated for efficient
routing. With a large address space, there is not the need to have complex
address conservation methods as used in CIDR.
Early designs used a different block for this purpose (fec0::), dubbed site-
local addresses. However, the definition of what constituted sites remained
unclear and the poorly defined addressing policy created ambiguities for
routing. This address range specification was abandoned and must not be
used in new systems.
IP networks may be divided into subnetworks in both IPv4 and IPv6. For this
purpose, an IP address is logically recognized as consisting of two parts: the
network prefix and the host identifier, or interface identifier (IPv6). The
subnet mask or the CIDR prefix determines how the IP address is divided into
network and host parts.
The term subnet mask is only used within IPv4. Both IP versions however use
the CIDR concept and notation. In this, the IP address is followed by a slash
and the number (in decimal) of bits used for the network part, also called the
routing prefix. For example, an IPv4 address and its subnet mask may be
192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP
address and subnet is 192.0.2.1/24, because the first 24 bits of the IP address
indicate the network and subnet.
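For illustration, the same split can be checked with Python's standard ipaddress module, using the example address above:

    import ipaddress

    # 192.0.2.1/24 in CIDR notation: 24 bits of network prefix.
    iface = ipaddress.ip_interface("192.0.2.1/24")
    print(iface.network)   # 192.0.2.0/24   - the network part
    print(iface.netmask)   # 255.255.255.0  - the equivalent subnet mask
    print(iface.ip)        # 192.0.2.1      - the host address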
Internet Protocol addresses are assigned to a host either anew at the time of
booting, or permanently by fixed configuration of its hardware or software.
Persistent configuration is also known as using a static IP address. In
contrast, in situations when the computer's IP address is assigned newly each
time, this is known as using a dynamic IP address.
3.1.4.1 Methods
Multiple client devices can appear to share IP addresses: either because they
are part of a shared hosting web server environment or because an IPv4
network address translator (NAT) or proxy server acts as an intermediary
agent on behalf of its customers, in which case the real originating IP
addresses might be hidden from the server receiving a request. A common
practice is to have a NAT hide a large number of IP addresses in a private
network. Only the "outside" interface(s) of the NAT need to have Internet-
routable addresses.
Most commonly, the NAT device maps TCP or UDP port numbers on the side
of the larger, public network to individual private addresses on the
masqueraded network. In small home networks, NAT functions are usually
implemented in a residential gateway device, typically one marketed as a
"router". In this scenario, the computers connected to the router would have
private IP addresses and the router would have a public address to
communicate on the Internet. This type of router allows several computers to
share one public IP address.
3.2 Angry IP Scanner
____________________________
The program's target audience is network administrators, consultants, and developers, who all use the tool every day and therefore have advanced requirements for usability, configurability, and extensibility. However, Angry IP Scanner aims to be very friendly to novice users as well.
3.2.1 Scanning
The word scan is derived from the Latin word scandere, which means to climb
and later came to mean "to scan a verse of poetry," because one could beat
the rhythm by lifting and putting down one's foot. As a rule, the user provides a list of IP addresses to the scanner, with the goal of sequentially probing all of them and gathering interesting information about each address, as well as overall statistics.
The list of addresses for scanning is most often provided as a range, with
specified starting and ending addresses, or as a network, with specified
network address and corresponding net mask. Other options are also possible,
e.g. loading from a file or generation of random addresses according to some
particular rules. Angry IP Scanner has several different modules for
generation of IP addresses called feeders. Additional feeders can be added
with the help of plugins.
There are usually two types of network scanners: port scanners and IP
scanners.
3.2.1.1 Port scanners
A port scanner usually scans TCP and sometimes UDP ports of a single host by sequentially probing each of them. This is similar to walking around a shopping mall and writing down the list of all the shops you see there along with their status (open or closed).
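To make the idea concrete, here is a minimal TCP connect port-scan sketch in Python; the target address and port range are only illustrative, and such probing should be run only against hosts you are authorized to scan:

    import socket

    host = "192.168.1.10"            # illustrative target address
    for port in range(20, 1025):     # the well-known port range
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            # connect_ex() returns 0 when the TCP handshake succeeds,
            # i.e. when the port is open.
            if s.connect_ex((host, port)) == 0:
                print(f"port {port} is open")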
3.2.1.2 IP scanners
An IP scanner scans many hosts and then gathers additional information about those of them that are available (alive). In the shopping-mall analogy, that would be walking around the city looking for all shopping malls and then discovering all kinds of shops that exist in each of the malls. As Angry IP Scanner is an IP scanner, designed for scanning of multiple hosts, this is the type of network scanner reviewed in the following text.
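A minimal sketch of the same idea in Python follows; it probes a small illustrative range on a common port, since a true ICMP ping needs raw-socket privileges:

    import ipaddress
    import socket

    # Probe every host address in an illustrative /28 network.
    for addr in ipaddress.ip_network("192.168.1.0/28").hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # A successful connection to port 80 proves the host is alive.
            if s.connect_ex((str(addr), 80)) == 0:
                print(addr, "is alive (port 80 open)")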
Fortunately, the short answer is that scanning is both legal and safe, with some exceptions. Even though the law does not always keep up with the fast development of the IT world, network scanning has existed for almost as long as networks themselves, meaning that there has been enough time to update the laws. Nevertheless, scanning itself remains perfectly legal, because in most cases it neither harms the scanned systems in any way nor provides any direct possibilities of breaking into them. Network scanning is even used by some popular network applications for automatic discovery of peers and similar functionality.
As a rule, the scanning results just provide the publicly available and freely
obtainable information, collected and grouped together. However, this
legality may not apply in case some more advanced stealth scanning
techniques are used against a network you do not have any affiliation with.
Any serious network administrator knows that regular probing of his own networks is a very good way of keeping them secure.
Log on as the administrator and launch Angry IP. The program will scan your system and display all server ports used on the server.
Create the server port range that you want to scan. This can be done by selecting two ports, one at the bottom and one at the top of the ports-used list.
Click "Start." Angry IP will scan every port in between the two selected ports. It will take a few moments for the program to determine whether the IP addresses are active or not. Once finished, ports will be displayed in blue or red. Blue means the IP is currently in use, while red means the IP is idle.
Select an active IP address (blue) and you will be shown the computer using the port as well as the current user.
3.3 Nmap
____________________________
Originally Nmap was a Linux-only utility, but it has since been ported to Microsoft Windows, Solaris, HP-UX, BSD variants (including Mac OS X), Amiga OS, and SGI IRIX. Linux is the most popular platform, with Windows following it closely.
3.3.3 Basic commands working in Nmap
For OS detection: nmap -O <target>
NmapFE was Nmap's official GUI; it was replaced by Zenmap, a new official graphical user interface based on UMIT.
Microsoft Windows-specific GUIs also exist, including NMapWin, which has not been updated since v1.4.0 was released in June 2003, and NMapW by Syhunt.
XNmap, a Mac OS X GUI
Nmap provides four possible output formats for the scan results. All but the
interactive output is saved to a file. All of the output formats in Nmap can be
easily manipulated by text processing software, enabling the user to create
customized reports.
Interactive
Presented and updated in real time when a user runs Nmap from the command line. Various options can be entered during the scan to facilitate
monitoring.
XML
Output in XML format, suitable for processing by XML-aware tools or conversion to HTML reports.
Grepable
Output tailored to line-oriented processing tools such as grep, sed and awk, with the results for each host on a single line.
Normal
The output as seen while running Nmap from the command line, but saved to
a file.
Script kiddie
Meant to be the funny way to format the interactive output replacing letters
with their visually alike number representations. For example, Interesting
ports becomes Int3rest|ng p0rtz.
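As a sketch, several output files can be requested in one run using Nmap's standard output-format flags; the Python wrapper below assumes the nmap binary is installed, and the target address is illustrative:

    import subprocess

    target = "192.168.1.10"  # illustrative target
    subprocess.run(
        ["nmap",
         "-oN", "scan.txt",    # normal output
         "-oX", "scan.xml",    # XML output
         "-oG", "scan.gnmap",  # grepable output
         target],
        check=True,
    )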
3.3.6 Purpose
Nmap is capable of discovering passive services on a network, despite the fact
that such services are not advertising themselves with a service discovery
protocol. In addition, Nmap may be able to determine various details about
the remote computers.
Like most tools used in computer security, Nmap can be used for black hat
hacking, or attempting to gain unauthorized access to computer systems. It
would typically be used to discover open ports which are likely to be running
vulnerable services, in preparation for attacking those services with another
program.
(88%) No exact OS matches for host (test conditions non-ideal).
Uptime guess: 35.708 days (since Wed Sep 15 08:58:56 2010)
TRACEROUTE (using port 113/tcp)
HOP RTT ADDRESS
1 2.27 ms 192.168.254.4
Nmap done: 1 IP address (1 host up) scanned in 19.94 seconds
Raw packets sent: 2080 (95.732KB) | Rcvd: 24 (1.476KB)
3.4 DNS Scanning
____________________________
The Domain Name System support in Microsoft Windows NT, and thus its
derivatives Windows 2000, Windows XP, and Windows Server 2003,
comprises two clients and a server. Every Microsoft Windows machine has a
DNS lookup client, to perform ordinary DNS lookups. Some machines have a
Dynamic DNS client, to perform Dynamic DNS Update transactions,
registering the machines' names and IP addresses. Some machines run a DNS
server, to publish DNS data, to service DNS lookup requests from DNS lookup
clients, and to service DNS update requests from DNS update clients.
Applications perform DNS lookups with the aid of a DLL. They call library
functions in the DLL, which in turn handle all communications with DNS
servers (over UDP or TCP) and return the final results of the lookup back to
the applications.
Microsoft's DNS client also has optional support for local caching, in the form of a DNS Client service. The lookup functions first attempt to contact this service; if it is running and a connection can be made, they hand the actual work of dealing with the lookup over to the DNS Client service. The DNS Client service itself communicates with DNS servers, and caches the results that it receives.
Microsoft's DNS client is capable of talking to multiple DNS servers. The exact
algorithm varies according to the version, and service pack level, of the
operating system; but in general all communication is with a preferred DNS
server until it fails to answer, whereupon communication switches to one of
several alternative DNS servers.
Parsing of the "hosts" file: The lookup functions read only the hosts file if they
cannot off-load their task onto the DNS Client service and have to fall back to
communicating with DNS servers themselves. In turn, the DNS Client service
reads the "hosts" file once, at startup, and only re-reads it if it notices that the
last modification timestamp of the file has changed since it last read it. Thus:
With the DNS Client service running: The "hosts" file is read and parsed only a
few times, once at service startup, and thereafter whenever the DNS Client
service notices that it has been modified.
Without the DNS Client service running: The "hosts" file is read and parsed
repeatedly, by each individual application program as it makes a DNS lookup.
The effect of multiple answers in the "hosts" file: The DNS Client service does
not use the "hosts" file directly when performing lookups. Instead, it (initially)
populates its cache from it, and then performs lookups using the data in its
cache. When the lookup functions fall back to doing the work themselves,
however, they scan the "hosts" file directly and sequentially, stopping when
the first answer is found. Thus:
With the DNS Client service running: If the "hosts" file contains multiple lines
denoting multiple answers for a given lookup, all of the answers in the cache
will be returned.
Without the DNS Client service running: If the "hosts" file contains multiple
lines denoting multiple answers for a given lookup, only the first answer
found will be returned.
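For illustration, this behaviour can be observed with Python's standard resolver, which goes through the same Windows lookup path (hosts file, DNS Client service, DNS servers); the hostname here is only an example:

    import socket

    # Forward lookup: name -> addresses (may be served from the hosts
    # file or the DNS Client service cache).
    name, aliases, addresses = socket.gethostbyname_ex("www.example.com")
    print(name, addresses)

    # Reverse lookup: address -> canonical name (may fail if no PTR
    # record exists for the address).
    try:
        host, _, _ = socket.gethostbyaddr(addresses[0])
        print(host)
    except socket.herror:
        print("no reverse entry")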
Fallback from preferred to alternative DNS servers: The fallback from the
preferred DNS server to the alternative DNS servers is done by whatever
entity, the DNS Client service or the library functions themselves, is actually
performing the communication with them. Thus:
With the DNS Client service running: Fallback to the alternative DNS servers
happens globally. If the preferred DNS server fails to answer, all subsequent
communication is with the alternative DNS servers.
Without the DNS Client service running: Any fallback to the alternative DNS
servers happen locally, within each individual process that is making DNS
queries. Different processes may be in different states, some talking to the
preferred DNS server and some talking to alternative DNS servers.
Whilst DNS lookups read DNS data, DNS updates write them. Both
workstations and servers running Windows attempt to send Dynamic DNS
update requests to DNS servers.
Microsoft Windows server operating systems can run the DNS Server service.
This is a monolithic DNS server that provides many types of DNS service,
including caching, Dynamic DNS update, zone transfer, and DNS notification.
DNS notification implements a push mechanism for notifying a select set of
secondary servers for a zone when it is updated.
Microsoft's "DNS Server" service was first introduced in Windows NT 3.51 as
an add-on with Microsoft's collection of BackOffice services (at the time was
marked to be used for testing purposes only). Some sources claim that the
DNS server implementation in Windows NT 3.51 was a fork of ISC's BIND
version 4.3, but this is not true. The DNS server implementation in Windows
NT 3.51 was written by Microsoft. The DNS server components in all
subsequent releases of Windows Server have built upon that initial
implementation and do not use BIND source code. However, Microsoft has
taken care to ensure good interoperability with BIND and other
implementations in terms of zone file format, zone transfer, and other DNS
protocol details.
Unlike other security scanners, GFI LANguard N.S.S. will not create a
'barrage' of information, which is virtually impossible to follow up on. Rather,
it will help highlight the most important information. It also provides
hyperlinks to security sites to find out more about these vulnerabilities.
The first step in beginning an audit of a network is to perform a scan of
current network machines and devices.
LANguard Network Security Scanner will now do a scan of the entire range
entered. It will first detect which hosts/computers are on, and only scan
those. This is done using NETBIOS probes, ICMP ping and SNMP queries.
If a device does not answer to one of these GFI LANguard N.S.S. will assume,
for now, that the device either does not exist at a specific IP address or that it
is currently turned off. If you would like GFI LANguard N.S.S. to scan all
devices, even those that don't respond to these queries, look under the scan
options section of the manual at "Configuring Scan options, Scanning, Adding
non-responsive computers". But make sure you take notice of the warning, in
that section, about time issues before doing this.
Scan one computer - this will scan only one computer.
Scan a list of computers - computers can be added to the list either one at a time, or imported from a text file. To add them, right-click in the window and use the menu that pops up.
Scan computers that are part of a network domain - if you click on the 'Pick Computers' option you will be presented with a list of all of the workgroups and domains that GFI LANguard N.S.S. found on the network. Check the box next to the workgroup or domain that you want to scan and GFI LANguard N.S.S. will scan all computers found in that workgroup/domain. You can also select individual computers within that workgroup/domain.
GFI LANguard Network Security Scanner (N.S.S.) is a tool that checks your
network for all potential methods that a hacker might use to attack your
network. By analysing the operating system and the applications running on
your network, GFI LANguard N.S.S. identifies possible security holes in your
network. In other words, it plays the devil's advocate and alerts you to
weaknesses before a hacker can find them, enabling you to deal with these
issues before a hacker can exploit them.
GFI LANguard Network Security Scanner scans your entire network, IP by IP,
and provides information such as service pack level of the machine, missing
security patches, open shares, open ports, services/applications active on the
computer, key registry entries, weak passwords, users and groups, and more.
Scan results are output to an HTML report, which can be customized/queried, enabling you to proactively secure your network - for example by shutting down unnecessary ports, closing shares, and installing service packs and hotfixes.
Detect unnecessary open ports
Remotely install security patches for all network machines
Detect new security holes using scheduled scan comparison
Check for unused user accounts on workstations
Check password policy and strength
Check if auditing is turned on
Make an inventory of your network
Detect potential Trojans installed on users’ workstations
Find out if the OS is advertising too much information
Scans large networks by sending UDP query status to every IP
Lists NETBIOS name table for each responding computer
Provides NETBIOS hostname, currently logged username & MAC
address
Provides a list of shares, users (detailed info), services, sessions, remote
TOD (time of day) & registry information from remote computer
(NT/2000)
Tests password strength on Windows 9x/NT/2000 systems using a
dictionary of commonly used passwords
SNMP device detection, SNMP Walk for inspecting network devices like
routers, network printers...
Support for sending spoofed messages (social engineering)
DNS lookup (www.somehost.com - > xxx.xxx.xxx.xxx); resolve hostnames
(reverse DNS)
Trace route support for network mapping
Configuration manager so you can easily save particular scans
Chapter 4
Wi-Fi Security
4.1 Wired Equivalent Privacy (WEP)
Although its name implies that it is as secure as a wired connection, WEP has
been demonstrated to have numerous flaws and has been deprecated in
favour of newer standards such as WPA2. In 2003 the Wi-Fi Alliance
announced that WEP had been superseded by Wi-Fi Protected Access (WPA).
In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the
IEEE declared that both WEP-40 and WEP-104 "have been deprecated as
they fail to meet their security goals".
WEP was included as the privacy component of the original IEEE 802.11
standard ratified in September 1999. WEP uses the stream cipher RC4 for
confidentiality, and the CRC-32 checksum for integrity. It was deprecated in
2004 and is documented in the current standard.
Standard 64-bit WEP uses a 40 bit key (also known as WEP-40), which is
concatenated with a 24-bit initialization vector (IV) to form the RC4 key. At
the time that the original WEP standard was drafted, the U.S. Government's
export restrictions on cryptographic technology limited the key size. Once the
restrictions were lifted, manufacturers of access points implemented an
extended 128-bit WEP protocol using a 104-bit key size (WEP-104).
A 256-bit WEP system is available from some vendors. As with the other WEP variants, 24 bits of that is for the IV, leaving 232 bits for actual protection. These 232 bits are typically entered as 58 hexadecimal characters (58 × 4 bits = 232 bits; 232 bits + 24 IV bits = 256-bit WEP key).
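A quick sketch confirming the arithmetic for the three WEP variants:

    # Each hexadecimal character encodes 4 bits; 24 bits of every key
    # are consumed by the initialization vector (IV).
    for total_bits in (64, 128, 256):
        secret_bits = total_bits - 24
        hex_chars = secret_bits // 4
        print(f"WEP-{total_bits}: {secret_bits}-bit secret key, "
              f"{hex_chars} hex characters")
    # WEP-64:  40-bit secret key, 10 hex characters
    # WEP-128: 104-bit secret key, 26 hex characters
    # WEP-256: 232-bit secret key, 58 hex characters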
4.1.2 Authentication
In Open System authentication, the WLAN client need not provide its
credentials to the Access Point during authentication. Any client can
authenticate with the Access Point and then attempt to associate. In effect, no
authentication occurs. Subsequently WEP keys can be used for encrypting
data frames. At this point, the client must have the correct keys.
In Shared Key authentication, the WEP key is used for authentication in a four-step challenge-response handshake: the client sends an authentication request to the access point; the access point replies with a clear-text challenge; the client encrypts the challenge using the configured WEP key and sends it back; and the access point decrypts the response, compares it with the challenge, and replies with a positive or negative acknowledgement.
4.2 WPA and WPA2
Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two
security protocols and security certification programs developed by the Wi-Fi
Alliance to secure wireless computer networks. The Alliance defined these in
response to serious weaknesses researchers had found in the previous system,
WEP (Wired Equivalent Privacy).
A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA
and WPA2 security to be bypassed and effectively broken in many situations.
[2] WPA and WPA2 security implemented without using the Wi-Fi Protected
Setup feature are unaffected by the security vulnerability.
4.2.1 WPA
4.2.2 WPA2
WPA2 has replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it introduces CCMP, a new AES-based encryption mode with strong security. Certification began in September 2004; since March 13, 2006, WPA2 certification has been mandatory for all new devices bearing the Wi-Fi trademark.
WPA was specifically designed to work with wireless hardware that was
produced prior to the introduction of the WPA protocol which had only
supported inadequate security through WEP. Some of these devices support
the security protocol only after a firmware upgrade. Firmware upgrades are
not available for some legacy devices.
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security
protocol. WPA2 may not work with some older network cards.
4.2.4 Security
Pre-shared key mode (PSK, also known as Personal mode) is designed for
home and small office networks that don't require the complexity of an
802.1X authentication server. Each wireless network device encrypts the
network traffic using a 256 bit key. This key may be entered either as a string
of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable ASCII
characters. If ASCII characters are used, the 256 bit key is calculated by
applying the PBKDF2 key derivation function to the passphrase, using the
SSID as the salt and 4096 iterations of HMAC-SHA1.
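This derivation is easy to reproduce as a sketch with Python's standard library; the SSID and passphrase below are made-up examples:

    import hashlib

    passphrase = b"correct horse battery staple"  # 8-63 ASCII characters
    ssid = b"HomeNetwork"                         # used as the salt

    # PBKDF2 with HMAC-SHA1, 4096 iterations, 256-bit (32-byte) output.
    psk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
    print(psk.hex())  # the 64-hex-digit pre-shared key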
known contents, such as ARP messages. The attack requires Quality of Service (as defined in 802.11e), which allows packet prioritization, to be enabled. The flaw does not lead to recovery of a key, but only to recovery of a
key stream that was used to encrypt a particular packet, and which can be
reused as many as seven times to inject arbitrary data of the same packet
length to a wireless client. For example, this allows someone to inject faked
ARP packets, making the victim send packets to the open Internet. Two
Japanese computer scientists, Toshihiro Ohigashi and Masakatu Morii,
further optimized the Tews/Beck attack; their attack doesn't require Quality
of Service to be enabled. In October 2009, Halvorsen with others made further
progress, enabling attackers to inject larger malicious packets (596 bytes in
size) within approximately 18 minutes and 25 seconds. In February 2010
Martin Beck found a new vulnerability which allows an attacker to decrypt
all traffic towards the client. The authors say that the attack can be defeated
by deactivating QoS, or by switching from TKIP to AES-based CCMP.
The vulnerabilities of TKIP are significant in that WPA-TKIP had been held to
be an extremely safe combination; indeed, WPA-TKIP is still a configuration
option upon a wide variety of wireless routing devices provided by many
hardware vendors.
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with the Wi-Fi Protected Setup (WPS) feature, regardless of which encryption method they use. Most recent models
have this feature and enable it by default. Many consumer Wi-Fi device
manufacturers had taken steps to eliminate the potential of weak passphrase
choices by promoting alternative methods of automatically generating and
distributing strong keys when users add a new wireless adapter or appliance
to a network. These methods include pushing buttons on the devices or
entering an 8-digit PIN. The Wi-Fi Alliance standardized these methods as
Wi-Fi Protected Setup; however the PIN feature as widely implemented
introduced a major new security flaw. The flaw allows a remote attacker to
recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few
hours. Users have been urged to turn off the WPS feature,[19] although this
may not be possible on some router models. Also note that the PIN is written
on a label on most Wi-Fi routers with WPS, and cannot be changed if
compromised.
4.2.4.4 MS-CHAPv2
4.2.4.5 Hole196
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared GTK.
It can be used to conduct man-in-the-middle and denial-of-service attacks.
4.2.5.1 WPA
Initial WPA version, to supply enhanced security over the older WEP protocol.
Typically uses the TKIP encryption protocol.
4.2.5.2 WPA2
Also known as IEEE 802.11i-2004, WPA2 is the successor of WPA and adds support for CCMP, which is intended to replace the TKIP encryption protocol. It has been mandatory for Wi-Fi-certified devices since 2006.
4.2.5.3 WPA-Personal
4.2.5.4 WPA-Enterprise
Also referred to as WPA-802.1X mode, and sometimes just WPA (as opposed
to WPA-PSK). It is designed for enterprise networks and requires a RADIUS
authentication server. This requires a more complicated setup, but provides
additional security (e.g. protection against dictionary attacks on short
passwords). An Extensible Authentication Protocol (EAP) is used for
authentication, which comes in different flavours.
Because RC4 is a stream cipher, the same traffic key must never be used twice.
The purpose of an IV, which is transmitted as plain text, is to prevent any
repetition, but a 24-bit IV is not long enough to ensure this on a busy network.
The way the IV was used also opened WEP to a related key attack. For a 24-
bit IV, there is a 50% probability the same IV will repeat after 5000 packets.
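The 5000-packet figure is essentially the birthday bound, as this small sketch shows:

    import math

    # With 2**24 possible IVs, a repeat becomes 50% likely after roughly
    # sqrt(2 * N * ln 2) packets.
    n_ivs = 2 ** 24
    print(round(math.sqrt(2 * n_ivs * math.log(2))))  # ~4823 packets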
In August 2001, Scott Fluhrer, Itsik Mantin, and Adi Shamir published a cryptanalysis of WEP that exploits the way the RC4 cipher and IV are used in WEP, resulting in a passive attack that can recover the RC4 key after eavesdropping on the network. Depending on the
amount of network traffic, and thus the number of packets available for
inspection, a successful key recovery could take as little as one minute. If an
insufficient number of packets are being sent, there are ways for an attacker
to send packets on the network and thereby stimulate reply packets which
can then be inspected to find the key. The attack was soon implemented, and
automated tools have since been released. It is possible to perform the attack
with a personal computer, off-the-shelf hardware and freely available
software such as aircrack-ng to crack any WEP key in minutes.
WEP-protected networks from distances of a mile or more from the target." They also reported two generic weaknesses: the use of WEP was optional, resulting in many installations never even activating it; and, by default, WEP relies on a single shared key among users, which leads to practical problems in handling compromises, which often leads to ignoring compromises.
In 2006, Bittau, Handley, and Lackey showed that the 802.11 protocol itself can be used against WEP to enable earlier attacks that were previously thought impractical. After eavesdropping a single packet, an attacker can rapidly bootstrap to be able to transmit arbitrary data. The eavesdropped packet can then be decrypted one byte at a time (by transmitting about 128 packets per byte to decrypt) to discover the local network IP addresses. Finally, if the 802.11 network is connected to the Internet, the attacker can use 802.11 fragmentation to replay eavesdropped packets while crafting a new IP header onto them. The access point can then be used to decrypt these packets and relay them on to a buddy on the Internet, allowing real-time decryption of WEP traffic within a minute of eavesdropping the first packet.
In 2007, Erik Tews, Ralf-Philipp Weinmann, and Andrei Pyshkin were able to extend Klein's 2005 attack and optimize it for use against WEP. With the new attack it is possible to recover a 104-bit WEP key with probability 50% using only 40,000 captured packets. For 60,000 available data packets, the success probability is about 80%, and for 85,000 data packets about 95%. Using active techniques like deauthentication and ARP re-injection, 40,000 packets can be captured in less than one minute under good conditions. The actual computation takes about 3 seconds and 3 MB of main memory on a Pentium-M 1.7 GHz, and can additionally be optimized for devices with slower CPUs. The same attack can be used for 40-bit keys with an even higher success probability.
In 2008, the Payment Card Industry (PCI) Security Standards Council's update of the Data Security Standard (DSS) prohibited the use of WEP as part of any credit-card processing after 30 June 2010, and prohibited any new system using WEP from being installed after 31 March 2009. The use of WEP contributed to the network invasion of T.J. Maxx's parent company.
4.3.1 Remedies
Use of encrypted tunneling protocols (e.g. IPSec, Secure Shell) can provide
secure data transmission over an insecure network. However, replacements
for WEP have been developed with the goal of restoring security to the
wireless network itself.
4.3.1.2 WEP2
This stopgap enhancement to WEP was present in some of the early 802.11i
drafts. It was implementable on some (not all) hardware not able to handle
WPA or WPA2, and extended both the IV and the key values to 128 bits.[15] It
was hoped to eliminate the duplicate IV deficiency as well as stop brute force
key attacks.
After it became clear that the overall WEP algorithm was deficient (and not
just the IV and key sizes) and would require even more fixes, both the WEP2
name and original algorithm were dropped. The two extended key lengths
remained in what eventually became WPA's TKIP.
4.3.1.3 WEPplus
The dynamic change idea made it into 802.11i as part of TKIP, but not for the
actual WEP algorithm.
Chapter 5
Website Scanning and Security
Web sites are unfortunately prone to security risks. And so are any networks
to which web servers are connected. Setting aside risks created by employee
use or misuse of network resources, your web server and the site it hosts
present your most serious sources of security risk.
Web servers by design open a window between your network and the world.
The care taken with server maintenance, web application updates and your
web site coding will define the size of that window, limit the kind of
information that can pass through it and thus establish the degree of web
security you will have.
"Web security" is relative and has two components, one internal and one
public. Your relative security is high if you have few network resources of
financial value, your company and site aren't controversial in any way, your
network is set up with tight permissions, your web server is patched up to
date with all settings done correctly, your applications on the web server are
all patched and updated, and your web site code is done to high standards.
Your web security is relatively lower if your company has financial assets like
credit card or identity information, if your web site content is controversial,
your servers, applications and site code are complex or old and are
maintained by an underfunded or outsourced IT department. All IT
departments are budget challenged and tight staffing often creates deferred
maintenance issues that play into the hands of any who want to challenge
your web security.
If you have assets of importance or if anything about your site puts you in the
public spotlight then your web security will be tested. We hope that the
information provided here will prevent you and your company from being
embarrassed - or worse.
It's well known that poorly written software creates security issues. The
number of bugs that could create web security issues is directly proportional
to the size and complexity of your web applications and web server. Basically, all complex programs either have bugs or, at the very least, weaknesses. On
top of that, web servers are inherently complex programs. Web sites are
themselves complex and intentionally invite ever greater interaction with the
public. And so the opportunities for security holes are many and growing.
Technically, the very same programming that increases the value of a web
site, namely interaction with visitors, also allows scripts or SQL commands to
be executed on your web and database servers in response to visitor requests.
Any web-based form or script installed at your site may have weaknesses or
outright bugs and every such issue presents a web security risk.
Web security issues also face site visitors. A common web-site attack involves the silent and concealed installation of code that will exploit the browsers of visitors. Your site is not the end target at all in these attacks. There are, at this time, many thousands of web sites out there that have been compromised. The owners have no idea that anything has been added to their sites and that their visitors are at risk. In the meantime visitors are being subjected to attack, and successful attacks are installing nasty code onto visitors' computers.
The world's most secure web server is the one that is turned off. Simple, bare-
bones web servers that have few open ports and few services on those ports
are the next best thing. This just isn't an option for most companies. Powerful
and flexible applications are required to run complex sites and these are
naturally more subject to web security issues.
Any system with multiple open ports, multiple services and multiple scripting languages is vulnerable simply because it has so many points of entry to watch.
Your site undoubtedly provides some means of communication with its visitors - form fields, search fields, blog comments and the like. In every place that interaction is possible you have a potential web security vulnerability.
In each case noted above your web site visitor is effectively sending a
command to or through your web server - very likely to a database. In each
opportunity to communicate, such as a form field, search field or blog,
correctly written code will allow only a very narrow range of commands or
information types to pass - in or out. This is ideal for web security. However,
these limits are not automatic. It takes well trained programmers a good deal
of time to write code that allows all expected data to pass and disallows all
unexpected or potentially harmful data.
And there lies the problem. Code on your site has come from a variety of
programmers, some of whom work for third party vendors. Some of that code
is old, perhaps very old. Your site may be running software from half a dozen sources, and then your own site designer and your webmaster have each produced more code of their own, or made revisions to another's code, which may have altered or eliminated previously established web security limitations.
Add to that the software that may have been purchased years ago and which
is not in current use. Many servers have accumulated applications that are no
longer in use and with which nobody on your current staff is familiar. This
code is often not easy to find, is about as valuable as an appendix and has not
been used, patched or updated for years - but it may be exactly what a hacker
is looking for!
As you know there are a lot of people out there who call themselves hackers.
You can also easily guess that they are not all equally skilled. As a matter of
fact, the vast majority of them are simply copycats. They read about a
KNOWN technique that was devised by someone else and they use it to break
into a site that is interesting to them, often just to see if they can do it.
Naturally once they have done that they will take advantage of the site
weakness to do malicious harm, plant something or steal something.
Your site is 1,000 times more likely to be attacked with a known exploit than
an unknown one. And the reason behind this is simple: There are so many
known exploits and the complexity of web servers and web sites is so great
that the chances are good that one of the known vulnerabilities will be
present and allow an attacker access to your site.
The number of sites worldwide is so great, and the number of new, as yet undocumented and thus unknown exploits so small, that your chances of being attacked with one are nearly zero - unless you have network assets of truly great value.
If you don't attract the attention of a very dedicated, well financed attack,
then your primary concern should be to eliminate your known vulnerabilities
so that a quick look would reveal no easy entry using known vulnerabilities.
There are two roads to accomplish excellent security. On one you would
assign all of the resources needed to maintain constant alert to new security
issues. You would ensure that all patches and updates are done at once, have
all of your existing applications reviewed for correct security, ensure that
only security knowledgeable programmers do work on your site and have
their work checked carefully by security professionals. You would also
maintain a tight firewall, antivirus protection and run IPS/IDS.
Your other option: use a web scanning solution to test your existing
equipment, applications and web site code to see if a KNOWN vulnerability
actually exists. While firewalls, antivirus and IPS/IDS are all worthwhile, it is
simple logic to also lock the front door. It is far more effective to repair a half
dozen actual risks than it is to leave them in place and try to build higher and
higher walls around them. Network and web site vulnerability scanning is the
most efficient security investment of all.
If one had to walk just one of these roads - diligent wall building or vulnerability testing - experience shows that web scanning will actually produce a higher level of web security on a dollar-for-dollar basis. This is borne out by the number of well-defended web sites which get hacked every month, and the much lower number of properly scanned web sites which have been compromised.
5.2 Live WHOIS
_________________________
It provides you with high quality domain data. Get accurate, actionable
insights on domains and the people behind them from a single search.
No one likes searching through pages of text. We've gone the extra mile to
build a back-end that allows us to organize and present key pieces of
information like whois, DNS, and historical records to you with as few clicks
as possible.
5.2.3 Track Domains Across Different Registrars
It doesn't matter where domains are registered, we provide you with the tools
to save and organize as many domains as you want to your dashboard. This
way, you can keep domains you own, are interested in buying, or just think
are plain cool in one single, easy to manage location.
Search the whois database, look up domain and IP owner information, and
check out dozens of other statistics.
Organizing domains across multiple registrars for quick reference has never
been so easy.
Get all the data you need about a domain and everything associated with that
domain anytime with a single search.
5.3 Way Back Machine
________________________________
The name Wayback Machine was chosen as a droll reference to a plot device
in an animated cartoon series, The Rocky and Bullwinkle Show. In it, Mr.
Peabody and Sherman routinely used a time machine called the "WABAC
machine" (pronounced "Wayback") to witness, participate in, and, more often
than not, alter famous events in history.
This page gives information about using the Wayback Machine to cite
archived copies of web pages used by articles. This is useful if a webpage has
changed, moved, or disappeared; links to the original content can be retained.
http://web.archive.org/web/*/http://www.wikipedia.org
The next example links to the main index of Wikipedia as of September 30, 2002 at 12:35:25. The date format is YYYYMMDDhhmmss:
http://web.archive.org/web/20020930123525/http://www.wikipedia.org
The next example links to the most current version of the Wikipedia. While
this is possible, it is discouraged; the most recent version is subject to change,
defeating the purpose of using the archive.
http://web.archive.org/web/2/http://www.wikipedia.org
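For illustration, an archive URL in this format can be assembled programmatically; the target and timestamp are the example values used above:

    from datetime import datetime

    target = "http://www.wikipedia.org"
    stamp = datetime(2002, 9, 30, 12, 35, 25).strftime("%Y%m%d%H%M%S")
    print(f"http://web.archive.org/web/{stamp}/{target}")
    # -> http://web.archive.org/web/20020930123525/http://www.wikipedia.org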
5.4 SQL Injection
____________________
SQL injection is a technique often used to attack data-driven applications. This is done by including portions of SQL statements in an entry field in an attempt to get the website to pass a newly formed rogue SQL command to the database (e.g., dump the database contents to the attacker). SQL injection is a code injection technique that exploits a security vulnerability in an application's software. The vulnerability is present when user input is either incorrectly filtered for string-literal escape characters embedded in SQL statements or not strongly typed and unexpectedly executed. SQL injection is mostly known as an attack vector for websites but can be used to attack any type of SQL database.
SQL injection attack (SQLIA) is considered one of the top 10 web application
vulnerabilities of 2007 and 2010 by the Open Web Application Security
Project. The attacking vector contains five main sub-classes depending on the
technical aspects of the attack's deployment:
Classic SQLIA
Inference SQL injection
Interacting with SQL injection
Database management system-specific SQLIA
Compounded SQLIA
SQL injection + insufficient authentication
SQL injection + DDoS attacks
SQL injection + DNS hijacking
SQL injection + XSS
A classification of the SQL injection attacking vector as of 2010. This classification represents the state of SQLIA, respecting its evolution until 2010; further refinement is underway.
5.4.1.1 Incorrectly filtered escape characters
This form of SQL injection occurs when user input is not filtered for escape characters and is then passed into a SQL statement. This results in the potential manipulation of the statements performed on the database by the end-user of the application.
This SQL code is designed to pull up the records of the specified username from its table of users:
SELECT * FROM users WHERE name = 'userName';
However, if the "userName" variable is crafted in a specific way by a malicious user, the SQL statement may do more than the code author intended. For example, setting the "userName" variable as:
' or '1'='1
renders the statement always true:
SELECT * FROM users WHERE name = '' OR '1'='1';
or using comments to block the rest of the query (there are three types of SQL comments):
' or '1'='1' --
The following value of "userName" in the statement below would cause the deletion of the "users" table as well as the selection of all data from the "userinfo" table (in essence revealing the information of every user), using an API that allows multiple statements:
a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't
This input renders the final SQL statement as follows:
SELECT * FROM users WHERE name = 'a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't';
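A minimal, self-contained sketch of this classic injection, using an in-memory SQLite database (the table and data are made up for the demonstration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"  # the attacker-controlled value
    # Vulnerable pattern: the input is concatenated into the SQL text.
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(query)                           # WHERE clause is always true
    print(conn.execute(query).fetchall())  # returns every row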
5.4.1.2 Incorrect type handling
This form of SQL injection occurs when a user-supplied field is not strongly typed or is not checked for type constraints. This could take place when a numeric field is to be used in a SQL statement, but the programmer makes no checks to validate that the user-supplied input is numeric. For example, setting the id field in the statement
SELECT * FROM userinfo WHERE id = 1;
to 1;DROP TABLE users will drop (delete) the "users" table from the database, since the SQL becomes:
SELECT * FROM userinfo WHERE id=1;DROP TABLE users;
5.4.1.3 Blind SQL injection
One type of blind SQL injection forces the database to evaluate a logical
statement on an ordinary application screen. As an example, a book review
website uses a query string to determine which book review to display. So
the URL http://books.example.com/showReview.php?ID=5 would cause the
server to run the query SELECT * FROM bookreviews WHERE ID = '5';
from which it would populate the review page with data from the review
with ID 5, stored in the table bookreviews. The query happens completely on
the server; the user does not know the names of the database, table, or fields,
nor does the user know the query string. The user only sees that the above
URL returns a book review. A hacker can load URLs such as http://books.example.com/showReview.php?ID=5 AND 1=1 and http://books.example.com/showReview.php?ID=5 AND 1=2: a vulnerable site returns the review with ID 5 for the first, and a blank or error page otherwise. The hacker can continue to use code
within query strings to glean more information from the server until another
avenue of attack is discovered or his or her goals are achieved.
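A hedged sketch of such a boolean probe follows (the URL is the hypothetical example used above, and real tests must only be run with permission):

    import urllib.request

    base = "http://books.example.com/showReview.php?ID=5"
    true_page = urllib.request.urlopen(base + "%20AND%201=1").read()
    false_page = urllib.request.urlopen(base + "%20AND%201=2").read()
    # If the two responses differ, the ID parameter is likely evaluated
    # as part of the SQL statement, i.e. the page is injectable.
    print(true_page != false_page)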
In many cases, the SQL statement is fixed, and each parameter is a scalar, not
a table. The user input is then assigned (bound) to a parameter.
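Continuing the earlier SQLite sketch, a parameterized query binds the input as data rather than SQL text, which defeats the injection:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "' OR '1'='1"  # the same attack string as before
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?",  # placeholder, no splicing
        (user_input,),
    ).fetchall()
    print(rows)  # [] - the malicious string matches no user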
Using object-relational mapping libraries avoids the need to write SQL code.
The ORM library in effect will generate parameterized SQL statements from
object-oriented code.
5.4.1.7 Escaping
One straightforward way of escaping is to use a database vendor's escaping function, such as PHP's mysql_real_escape_string(), which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a. This function must always (with few exceptions) be used to make data safe before sending a query to MySQL.
There are other functions for many database types in PHP such as
pg_escape_string() for PostgreSQL. There is, however, one function that works
for escaping characters, and is used especially for querying on databases that
do not have escaping functions in PHP. This function is: addslashes(string $str
). It returns a string with backslashes before characters that need to be
quoted in database queries, etc. These characters are single quote ('), double
quote ("), backslash (\) and NULL (the NULL byte).
Routinely passing escaped strings to SQL is error prone because it is easy to
forget to escape a given string. Creating a transparent layer to secure the
input can reduce this error-proneness, if not entirely eliminate it.
Limiting the permissions on the database logon used by the web application
to only what is needed may help reduce the effectiveness of any SQL injection
attacks that exploit any bugs in the web application.
intitle:login inurl:login.php site:co.in
intitle:login inurl:login.php site:.in
inurl:userlogin.php site:.in
inurl:loginpanel.php site:.in
http://www.exposer.co.in/admin.php
http://lionsclubofwashim.co.in/admin.php
http://www.goldentimes.co.in/admin.php
www.shubhmilan.in/userlogin.php
suresolutions.co.in/userlogin.php?
www.induscapital.in/userlogin.php
www.smptngo.in/userLogin.php?
http://www.shelterguide.co.in/admin/login.php
http://www.hdsoftware.co.in/admin/login.php
http://ncm.co.in/admin/login.php
http://www.indavara.in/admin/login.php
http://www.jaipurhome.in/admin/login.php
http://smpsbhilwara.in/admin/login.php
http://nppbisauli.in/admin/login.php
http://nppkayamganj.in/admin/login.php
http://www.npgauratha.in/admin/login.php
http://www.mcnagrotabagwan.in/admin/login.php
5.5 Phishing Page
____________________
5.5.1.1 Phishing
trust associated with the inferred connection due to both parties receiving the
original email.
5.5.1.4 Whaling
5.5.3 Filter evasion
Phishers have used images instead of text to make it harder for anti-phishing
filters to detect text commonly used in phishing emails.
Once a victim visits the phishing website, the deception is not over. Some
phishing scams use JavaScript commands in order to alter the address
bar. This is done either by placing a picture of a legitimate URL over the
address bar, or by closing the original address bar and opening a new one
with the legitimate URL.
An attacker can even use flaws in a trusted website's own scripts against the
victim. These types of attacks (known as cross-site scripting) are particularly
problematic, because they direct the user to sign in at their bank or service's
own web page, where everything from the web address to the security
certificates appears correct. In reality, the link to the website is crafted to
carry out the attack, making it very difficult to spot without specialist
knowledge. Just such a flaw was used in 2006 against PayPal.
Not all phishing attacks require a fake website. Messages that claimed to be
from a bank told users to dial a phone number regarding problems with their
bank accounts. Once the phone number (owned by the phisher, and provided
by a Voice over IP service) was dialled, prompts told users to enter their
account numbers and PIN. Vishing (voice phishing) sometimes uses fake
caller-ID data to give the appearance that calls come from a trusted
organization.
5.5.6 Other techniques
The stance adopted by the UK banking body APACS is that "customers must
also take sensible precautions ... so that they are not vulnerable to the
criminal." Similarly, when the first spate of phishing attacks hit the Irish
Republic's banking sector in September 2006, the Bank of Ireland initially
refused to cover losses suffered by its customers (and it still insists that
its policy is not to do so), although losses to the tune of €113,000 were
made good.
5.5.8 Anti-phishing
People can take steps to avoid phishing attempts by slightly modifying their
browsing habits. When contacted about an account needing to be "verified"
(or any other topic used by phishers), it is a sensible precaution to contact the
company from which the email apparently originates to check that the email
is legitimate. Alternatively, the address that the individual knows is the
company's genuine website can be typed into the address bar of the browser,
rather than trusting any hyperlinks in the suspected phishing message.
Users often cannot distinguish between the first few digits and the last few
digits of an account number, a significant problem since the first few digits
are often the same for all clients of a financial institution. People can be
trained to have their
suspicion aroused if the message does not contain any specific personal
information. Phishing attempts in early 2006, however, used personalized
information, which makes it unsafe to assume that the presence of personal
information alone guarantees that a message is legitimate. Furthermore,
another recent study concluded in part that the presence of personal
information does not significantly affect the success rate of phishing attacks,
which suggests that most people do not pay attention to such details.
Everyone can help educate the public by encouraging safe practices and by
avoiding dangerous ones. Unfortunately, even well-known players are known to
incite users to hazardous behaviour, e.g. by requesting that they reveal
their passwords for third-party services such as email.
5.6 Acunetix
Acunetix Web Vulnerability Scanner (WVS) is an automated web application
security scanner built by a team of highly experienced security developers.
Acunetix customers include the US Army, US Air Force, AT&T, KPMG, Telstra,
Fujitsu, and Adidas.
A website security audit usually consists of two steps: an automated scan,
followed, depending on the results and the website's complexity, by a manual
penetration test.
To complete both the automated and the manual audit properly, a number of
tools are available to simplify the process and make it efficient from a
business point of view. Automated tools help the user make sure the whole
website is properly crawled and that no input or parameter is left
unchecked. Automated web vulnerability scanners also find a high percentage
of the technical vulnerabilities and give you a very good overview of the
website's structure and security status. Thanks to automated scanners, you
gain a better overview and understanding of the target website, which eases
the manual penetration test.
For the manual security audit, one should also have a number of tools to
ease the process, such as tools to launch fuzzing tests, tools to edit HTTP
requests and review HTTP responses, a proxy to analyse traffic, and so on.
5.6.2 Manual Assessment of target website or web application
During the manual assessment, familiarize yourself with the website's
topology and architecture. Record the number of pages and files present in
the website, and note the directory and file structure. If you have access
to the website's root directory and source code, take your time to get to
know it. If not, manually follow the links throughout the website. This
process will help you understand the structure of the URLs. Also, take note
of all the submission forms and other types of online forms available on the
website.
During the pre-scan manual assessment, apart from getting used to the
directory structure and number of files, find out which web technology was
used to develop the target website, e.g. .NET or PHP. A number of
vulnerabilities are specific to particular technologies.
Once the manual assessment is complete, you should know enough about the
target website to determine whether it was properly crawled by the automated
black box scanner before a scan is launched. If the website is not crawled
properly, i.e. the scanner is unable to reach some parts or parameters of
the website, the whole point of securing the website is defeated. The manual
assessment goes a long way towards heading off invalid scans and false
positives. It also makes you more familiar with the website itself, which is
the best way to configure the automated scanner to cover and check the
entire website.
Once you are familiar with the automated black box scanner you will be
using, and with the target website or web application you will be scanning,
it is time to get down to business and get your hands dirty. To start off,
one must first configure the scanner. The most crucial things to configure
before launching any automated process are:
Custom 404 Pages – If the server returns HTTP status code 200 when a
non-existent URL is requested, configure the scanner to recognise the custom
error page (see the sketch after this list).
URL Rewrite rules – If the website uses search engine friendly URLs,
configure these rules to help the scanner understand the website structure
so it can crawl it properly.
Login Sequences – If parts of the website are password protected and you
would like the scanner to scan them, record a login sequence to train the
scanner to automatically log in to the password-protected section, crawl it
and scan it.
Mark pages which need manual intervention – If the website contains pages
which require the user to enter a one-time value when accessed, such as a
CAPTCHA, mark them as pages needing manual intervention, so that during the
crawling process the scanner automatically prompts you to enter such values.
Submission Forms – If you would like specific details to be used each time a
particular form is crawled by the scanner, configure the scanner with those
details. Nowadays scanners make this easy by populating the fields
automatically (as in Acunetix WVS).
Scanner Filters – Use the scanner filters to specify a file, file type, or
directory to exclude from the scan. You can also exclude specific
parameters.
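The custom-404 check mentioned in the first list item can be approximated
with a few lines of PHP: request a URL that almost certainly does not exist
and inspect the status code. A 200 answer means the site serves a custom
error page which the scanner must be configured to recognise. The target
host below is hypothetical:

  <?php
  $probe = 'http://www.example.com/' . bin2hex(random_bytes(8)) . '.html';
  $ch = curl_init($probe);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_NOBODY, true);        // a HEAD request is enough
  curl_exec($ch);
  $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
  curl_close($ch);
  echo $status === 200 ? "Custom 404 page in use\n" : "Standard 404 behaviour\n";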
Skipping automated scans altogether is not a solution, because if an
automated web vulnerability scanner can break down your website, imagine
what a malicious user can do. The real solution is to start securing your
website and to make sure it can properly handle an automated scan.
To start off, automated web vulnerability scanners tend to perform invasive
scans against the target website, since they try to input data which the
website has not been designed to handle. If the automated vulnerability
scanner is not that invasive against a target website, then it is not really
checking for all vulnerabilities and is not doing an in-depth security
check. Such security checks can lead to a number of unwanted results:
deleted database records, a changed blog theme, garbage posts on your forum,
a flood of emails in your mailbox and, even worse, a non-functional website.
This is expected because, like a malicious user, the automated black box
scanner tries its best to find security holes in your website and looks for
ways and means of gaining unauthorized access.
Therefore it is imperative that such scans are not launched against live
servers. Ideally a replica of the live environment should be created in a
test lab, so that if something goes wrong, only the replica is affected. If
a test lab is not available, make sure you have recent backups; if something
goes wrong, the live website can then be restored and made functional again
in the shortest time possible.
Once the manual website analysis is complete and the black box scanner is
configured, you are ready to launch the automated scan. If time permits,
first run a crawl of the website, so that once the crawl is finished you can
confirm that all the files and input parameters in the website were crawled
by the scanner. Once you confirm this, you can safely proceed with the
automated scan.
Once the automated security scan is finished, you already have a good
overview of your website's security level. Look into the details of every
reported vulnerability and make sure you have all the information required
to fix it. A typical black box scanner such as Acunetix Web Vulnerability
Scanner will report a good amount of detail about each discovered
vulnerability, such as the HTTP request and response headers, the HTML
response, a description of the vulnerability, and a number of web links
where you can learn more about the vulnerability and how to fix it.
Analysing the automated scan results in detail will also help you better
understand the way the web application works and how the input parameters
are used, giving you an idea of what type of tests to launch in the manual
penetration test and which parameters to target.
[Screenshot: Acunetix WVS HTTP Editor]
Like the automated scan, the manual penetration test is a very important
step in securing a website. Used properly, advanced manual penetration
testing tools ease the process and help you be more efficient. Manual
penetration testing lets you audit the website for logical vulnerabilities:
even though automated scans can hint at such vulnerabilities and help you
pinpoint them, most can only be discovered and verified manually.
For example, while auditing a shopping cart you might notice that if you
manually set the price parameter to 0 in the checkout request, the customer
gets the product for free without being asked for payment details.
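A hedged illustration of this price-tampering test: replay the checkout POST
with the price parameter forced to 0 and inspect the response. The URL and
field names are hypothetical; a well-written application recomputes the
price on the server and rejects the request:

  <?php
  $ch = curl_init('http://www.example.com/checkout.php');
  curl_setopt_array($ch, [
      CURLOPT_RETURNTRANSFER => true,
      CURLOPT_POST           => true,
      CURLOPT_POSTFIELDS     => http_build_query([
          'item_id' => 42,
          'price'   => 0,   // tampered client-side value
      ]),
  ]);
  $response = curl_exec($ch);
  curl_close($ch);
  echo $response;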
Or consider a site that awards $100 worth of free adverts to customers who
open an account and purchase $50 worth of adverts: if the developers forgot
the AND statement in that check, then upon opening an account, and without
purchasing $50 worth of adverts, you would still get your $100 worth of free
ads.
One might think that such logical vulnerabilities are far-fetched, or even a
joke, but we do encounter them when analysing production web applications.
Such vulnerabilities are typically discovered by using several manual
penetration testing tools together, like the HTTP Sniffer to analyse the
application logic, and then the HTTP Editor to build HTTP requests, send
them and analyse the server response.
5.6.9 Conclusion
As we can see from the above, web security is very different from network
security. As a concept, network security can be simplified to "allow the
good guys in and block the bad guys." Web security is much more than that,
and securing a website demands continuous effort.
Bibliography