
1. Security Incidents

Ans: A security incident, or security event, is any digital or physical breach that threatens the confidentiality, integrity, or availability of an organization's information systems or sensitive data. Security incidents can range from intentional cyberattacks by hackers or unauthorized users to unintentional violations of security policy by legitimate, authorized users.

- Some of the most common security incidents include

• Ransomware: Ransomware is a type of malicious software, or malware, that locks up a victim's data or
computing device and threatens to keep it locked—or worse—unless the victim pays the attacker a
ransom

- Phishing and social engineering: Phishing attacks are digital or voice messages that try to manipulate recipients into sharing sensitive information, downloading malicious software, transferring money or assets to the wrong people, or taking some other damaging action.

- DDoS attacks: In a distributed denial-of-service (DDoS) attack, hackers gain remote control of large
numbers of computers and use them to overwhelm a target organization’s network or servers with
traffic, making those resources unavailable to legitimate users.

- Supply chain attacks: Supply chain attacks are cyberattacks that infiltrate a target organization by attacking its vendors, e.g., by stealing sensitive data from a supplier's systems or by using a vendor's services to distribute malware.

2. Incident Response
• Incident response (sometimes called cybersecurity incident response) refers to an organization’s
processes and technologies for detecting and responding to cyberthreats, security breaches or
cyberattacks.

• The goal of incident response is to prevent cyberattacks before they happen, and to minimize the cost
and business disruption resulting from any cyberattacks that occur.
-- Purpose of an Incident Response Plan

Instructions for collecting information and documenting incidents for post-mortem review and (if necessary) legal proceedings. An incident response capability also helps the organization deal properly with legal issues that may arise during incidents.

3. Incident Response Frameworks

Ans: 1. Preparation: This first phase of incident response is also a continuous one, to make sure that the CSIRT always has the best possible procedures and tools in place to identify, contain, and recover from an incident as quickly as possible and with minimal business disruption.

Ensure that all aspects of your incident response plan (training, execution, hardware and software
resources, etc.) are approved and funded in advance

2. Detection/Identification: During this phase, security team members monitor the network for suspicious activity and potential threats. They analyze data, notifications, and alerts gathered from device logs and from various security tools (antivirus software, firewalls) installed on the network, filtering out the false positives and triaging the actual alerts in order of severity.
3. Containment: The incident response team takes steps to stop the breach from doing further damage to the network. Containment activities can be split into two categories. Short-term containment measures focus on preventing the current threat from spreading by isolating the affected systems, such as by taking infected devices offline. Long-term containment measures focus on protecting unaffected systems, for example by placing stronger security controls around them while clean systems are rebuilt.

4. Eradication
• Eradication is the phase of incident response that entails removing the threat and restoring affected systems to their previous state, ideally while minimizing data loss. The main actions associated with eradication are ensuring that the proper steps have been taken to this point, including measures that not only remove the malicious content but also verify that the affected systems are completely clean.

5. Recovery
• Testing, monitoring, and validating systems while putting them back into production in order to verify
that they are not re-infected or compromised are the main tasks associated with this step of incident
response. This phase also includes decision making in terms of the time and date to restore operations,
testing and verifying the compromised systems, monitoring for abnormal behaviors, and using tools for
testing, monitoring, and validating system behavior.

6. Lessons Learned
• Lessons learned is a critical phase of incident response because it helps to educate and improve future
incident response efforts. This is the step that gives organizations the opportunity to update their
incident response plans with information that may have been missed during the incident, plus complete
documentation to provide information for future incidents.

4. Incident Analysis
• The incident response team should work quickly to analyze and validate each Incident, following a
predefined process and documenting each step taken.

• Profile Networks and Systems: Profiling is measuring the characteristics of expected activity so that changes to it can be more easily identified.

Understand Normal Behaviors: Incident response team members should study networks, systems, and
applications to understand what their normal behavior is so that abnormal behavior can be recognized
more easily

• Create a Log Retention Policy: Information regarding an incident may be recorded in several places, such as firewall, IDPS, and application logs. Creating and implementing a log retention policy that specifies how long log data should be maintained can be extremely helpful, because older entries may show earlier reconnaissance activity or previous instances of a similar attack.

• Keep All Host Clocks Synchronized: Protocols such as the Network Time Protocol (NTP) synchronize
clocks among hosts. Event correlation will be more complicated if the devices reporting events have
inconsistent clock settings

• Use Internet Search Engines for Research: Internet search engines can help analysts find information
on unusual activity. For example, an analyst may see some unusual connection attempts targeting TCP
port 22912

• Perform Event Correlation: Evidence of an incident may be captured in several logs that each contain different types of data; a firewall log may have the source IP address that was used, whereas an application log may contain a username. (A small correlation sketch is shown after this list.)

• Maintain and Use a Knowledge Base of Information: The knowledge base should include information that handlers need to reference quickly during incident analysis. Although it is possible to build a knowledge base with a complex structure, a simple approach can be effective.

• Run Packet Sniffers to Collect Additional Data: Sometimes the indicators do not record enough detail to permit the handler to understand what is occurring. If an incident is occurring over a network, the fastest way to collect the necessary data may be to have a packet sniffer capture network traffic.

• Filter the Data: There is simply not enough time to review and analyze all the indicators; at minimum,
the most suspicious activity should be investigated. One effective strategy is to filter out categories of
indicators that tend to be insignificant.

• Seek Assistance from Others: Occasionally, the team will be unable to determine the full cause and
nature of an incident. If the team lacks sufficient information to contain and eradicate the incident, then
it should consult with internal resources
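
To make event correlation concrete, here is a minimal Python sketch that joins hypothetical firewall and application log records on a shared source IP address; the field names and sample values are illustrative assumptions, not the schema of any particular product.

```python
from collections import defaultdict

# Hypothetical, already-parsed log records (field names are assumptions for illustration).
firewall_events = [
    {"timestamp": "2024-05-01T10:02:11", "src_ip": "203.0.113.7", "action": "allow", "dst_port": 22},
    {"timestamp": "2024-05-01T10:02:15", "src_ip": "198.51.100.9", "action": "deny", "dst_port": 3389},
]
app_events = [
    {"timestamp": "2024-05-01T10:02:13", "src_ip": "203.0.113.7", "username": "svc_backup", "result": "login_failure"},
]

def correlate_by_source_ip(fw_events, app_logs):
    """Group firewall and application events that share a source IP address."""
    merged = defaultdict(lambda: {"firewall": [], "application": []})
    for event in fw_events:
        merged[event["src_ip"]]["firewall"].append(event)
    for event in app_logs:
        merged[event["src_ip"]]["application"].append(event)
    # Keep only IPs that appear in both sources, i.e., a basic correlation hit.
    return {ip: data for ip, data in merged.items() if data["firewall"] and data["application"]}

for ip, related in correlate_by_source_ip(firewall_events, app_events).items():
    print(f"Correlated activity for {ip}: {related}")
```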

5. Incident Damage Assessment


• Review the accumulated data to understand whether the incident was driven by a successful external attacker or a malicious insider. The data will reveal how severe the incident was and how your team responded to the threat. If you are not able to piece together the entire story from the available data, you can launch a full investigation to ensure that no stone is left unturned.

• Depending on how comprehensive the investigation is, you may be able to determine whether the attacker performed a web application layer intrusion or a SQL injection attack, or even hijacked a web server to take control of critical backend systems.

-- Cost Assessment

• The purpose of the Incident Cost Framework is to provide an assessment framework to help your
organization understand and quantify the magnitude of loss from a cyber security incident. Managing
the risks, liabilities and costs associated with a cyber incident is a challenge faced by many organizations.
• Utilizing this assessment framework will allow your organization to: understand and measure the severity of a cyber incident on staff, departments, data, software, and hardware; enhance your existing processes and operations; and understand the tools and resources required to support an effective response.

6. What is Incident Reporting/Recording?


• Incident reporting is the process of capturing, recording, and managing an incident occurrence, such as an injury, property damage, or security incident. • It typically involves completing an incident report form when an incident has occurred and following it up with additional follow-on forms such as an investigation, corrective action, and sign-off.

-- Steps to Review an Incident Response Policy


• Evaluating the Existing Situation: This evaluation should consider the vulnerability level of each asset
and the appropriate mitigation and remediation methods. It involves taking inventory of all company
assets and prioritizing them based on value, privacy requirements, and risk level.
• Establishing the Incident Response Team: The incident response team is responsible for determining
the procedures for responding to an incident, implementing these procedures, and evaluating their
effectiveness in the aftermath of an incident

• Creating the Incident Response Plan (IRP): An IRP should contain these elements: Prevention: establishing preventative security tools and policies. Detection: determining how systems and tools can detect incidents and notify the relevant team members. Analysis: locating data logs and de

SIEM TOOL
• A SIEM tool is software that acts as an analytics-driven security command center. All event data is collected in a centralized location. • The SIEM tool does the parsing and categorizing for you, but more importantly, it provides context that gives security analysts deeper insight into security events across their infrastructure.

1. Datadog Security Monitoring 2. Logpoint SIEM 3. SolarWinds Security Event Manager 4. Graylog 5. ManageEngine EventLog Analyzer 6. ManageEngine Log360 7. Heimdal Threat Hunting and Action Center 8. Exabeam Fusion 9. Elastic Security 10. Fortinet FortiSIEM 11. Splunk Enterprise Security 12. Rapid7 InsightIDR 13. LogRhythm NextGen SIEM Platform 14. Trellix Helix 15. AT&T Cybersecurity AlienVault Unified Security Management
2. What is Splunk?
• Splunk is a well-developed and advanced software tool designed for organizations to perform indexing
and searching log files stored in a system. It analyzes machine-generated data in real-time. It also
searches, monitors, and examines machine-generated data via a web-style interface.

SPLUNK ARCHITECTURE

• Universal Forwarder (UF): The Splunk Universal Forwarder is a lightweight component that pushes data to the heavy Splunk forwarder. Its task is to forward the log data from the server. The universal forwarder can also be installed on the client side or application side.
• Load Balancer (LB): The main task of the load balancer is to distribute the workload over the network, or the application traffic over a cluster of servers.
• Heavy Forwarder (HF): The Splunk Heavy Forwarder is a heavier component. It mainly filters the data, for example collecting only the error logs.
• Indexer: The indexer stores and indexes the filtered data. It also improves Splunk's performance and automatically implements indexing.
• Search Head (SH): It helps in distributing the searches to the other indexers, and is also used to gain intelligence and perform reporting.
• Deployment Server (DS): In the deployment server, sharing of data (such as configurations) is performed between the components.
3 . Threat Intelligence
• Threat intelligence is data that is collected, processed, and analyzed to understand a threat actor’s
motives, targets, and attack behaviors.
• In the world of cybersecurity, advanced persistent threats (APTs) and defenders are constantly trying
to outmaneuver each other.
• Data on a threat actor's next move is crucial to proactively tailoring your defenses and preempting future attacks. • Threat intelligence sheds light on the unknown, enabling security teams to make better decisions.

4. VERSIONS OF SPLUNK
Splunk is available in three versions:
• Splunk Enterprise: Splunk Enterprise is the paid version, offering unlimited access for IT businesses. Its architecture supports single-site and multi-site clustering for disaster recovery. Splunk Enterprise also gathers and analyzes the data from websites, applications, etc.
• Splunk Cloud: Splunk Cloud is the hosted platform provided as a service with subscription pricing. The features included in this package are similar to the Splunk Enterprise version. In this architecture, clustering is managed by Splunk.
• Splunk Light: Splunk Light is the free version with up to 500 MB of indexing per day. In this version, the features and functionalities are limited compared to the other versions. The architecture supports only a single instance.

5. Analyzing Live Malware Traffic Samples


Steps to perform: • Identify all of the files that contribute to a malware system. • Perform static analysis, examining identifiers such as metadata and possible traces of how this software appeared on your system. Carry out research on the data you record. • Perform advanced static analysis, reading through the code and mapping how the different modules of the suite work together and what system resources or resident software it exploits.

6. Analyzing Live Malware Traffic Samples in a SIEM


1. Data Collection and Integration: • Ensure that your SIEM is properly configured to collect network
traffic data from relevant sources, such as network appliances, firewalls, intrusion detection systems
(IDS), and network taps.
2. Data Normalization: • Normalize and enrich the network traffic data. This involves converting raw data into a standardized format that your SIEM can understand. Normalize fields like IP addresses, ports, and protocols (a minimal sketch follows this list).
3. Traffic Aggregation:• Aggregate network traffic data to create meaningful data sets. You can group
data by time intervals, source IP addresses, destination IP addresses, or specific network segments.
4. Packet Capture and Storage:• Integrate your SIEM with packet capture solutions to capture and store
packet-level data for suspicious traffic flows. This is invaluable for in-depth analysis.
5. Deep Packet Inspection (DPI): • If your SIEM supports DPI, use it to perform deeper analysis of
packet contents, including payload inspection. Look for indicators like malicious URLs, filenames, or
payload encryption.
6. Traffic Visualization:• Utilize traffic visualization tools and dashboards within your SIEM to gain a
visual representation of network traffic patterns. This can help in identifying anomalies quickly.
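
As an illustration of the normalization step above, the following minimal Python sketch converts one hypothetical raw traffic record into a typed, consistent schema; the raw field names and values are assumptions for the example, not a real sensor format.

```python
import ipaddress
from datetime import datetime, timezone

# A raw record as it might arrive from a network sensor; the field names and
# layout are illustrative assumptions, not the format of any specific SIEM.
raw_event = {"ts": "1714557731", "SRC": " 203.0.113.7 ", "DST": "10.0.0.5", "dpt": "443", "proto": "tcp"}

def normalize(event):
    """Convert a raw traffic record into a consistent, typed schema."""
    return {
        "timestamp": datetime.fromtimestamp(int(event["ts"]), tz=timezone.utc).isoformat(),
        "src_ip": str(ipaddress.ip_address(event["SRC"].strip())),   # validates and canonicalizes
        "dst_ip": str(ipaddress.ip_address(event["DST"].strip())),
        "dst_port": int(event["dpt"]),
        "protocol": event["proto"].upper(),
    }

print(normalize(raw_event))
```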
7. UEBA

• User and entity behavior analytics (UEBA) is a cybersecurity solution that uses algorithms and machine
learning to detect anomalies in the behavior of not only the users in a corporate network but also the
routers, servers, and endpoints in that network.
• UEBA seeks to recognize any peculiar or suspicious behavior—instances where there are irregularities
from normal everyday patterns or usage.

- Components of UEBA - There are three main components of a UEBA solution

Analytics : collects and organizes data on what it determines to be normal behavior of users and entities.
The system builds profiles of how each normally acts regarding application usage, communication and
download activity, and network connectivity. Statistical models are then formulated and applied to
detect unusual behavior.
Integration: with other security products and systems already in place is a must as organizations grow
and evolve. They most likely have a security stack in place, which may include legacy systems that may
not keep up with today's ever-increasing threat landscape. The beauty of UEBA is that it is not meant to
obviate existing security products in use across the enterprise. With proper integration, UEBA systems
are able to compare data collected from various sources, including logs, packet capture data, and other
datasets, and integrate these to make the system more robust.

Presentation: the process of communicating the findings of the UEBA system and devising an appropriate response. This can vary between organizations.

6
8. AI in SIEM Detection
• An AI-driven SIEM utilizes machine learning, cybersecurity threat feeds, and user behavior analytics to detect risky and abnormal activities, automating many of the difficult and time-consuming manual tasks of threat hunting.

1. Detecting users' deviation from themselves:
• Abnormal increase in user activity level (over time).
• Deviation in a specific type of user activity, such as an increase in authentication requests.
• Deviation of the user's risk posture.
• Abnormal rate of increase in the user's risky activity.
• Deviation or increase in a user's local-to-remote activity (helps detect exfiltration activities).
• Changes made to a user's systems, software installation, etc.

2. Detecting changes in a user's activity vs. frequency (abnormal increase in user activity level over time):
• SIEM products can create an activity and frequency distribution model over time for each user, i.e., a user's activity with a frequency distribution over one week. Any time the user's activity or the frequency of that activity changes (the actuals deviate from the predicted baseline), the deviation can be flagged for analyst review.
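
A minimal sketch of this kind of deviation detection, assuming a simple per-user baseline of hourly event counts and a z-score threshold (real UEBA/SIEM models are far more sophisticated):

```python
import statistics

# Hypothetical hourly event counts for one user over the previous week (baseline)
# and for the current hour; the numbers are illustrative assumptions.
baseline_counts = [12, 9, 15, 11, 14, 10, 13, 12, 11, 16, 9, 14]
current_count = 58

def is_deviation(history, observed, threshold=3.0):
    """Flag the observation if it is more than `threshold` standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero for flat baselines
    z_score = (observed - mean) / stdev
    return z_score > threshold, z_score

flagged, score = is_deviation(baseline_counts, current_count)
print(f"z-score={score:.1f}, flagged={flagged}")
```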

1. Introduction to SIEM

Ans : ▪ Security information and event management (SIEM) software is a relatively new type of
centralized logging software compared to syslog.
▪ SIEM products have one or more log servers that perform log analysis, and one or more database
servers that store the logs. ▪ Most SIEM products support two ways of collecting logs from log
generators:
1. Agentless 2. Agent-based
1. Agentless: ▪ The SIEM server receives data from the individual log generating hosts without needing
to have any special software installed on those hosts. ▪ Some servers pull logs from the hosts, which is
usually done by having the server authenticate to each host and retrieve its logs regularly.

2. Agent-Based: • An agent program is installed on the log-generating host to perform event filtering, aggregation, and log normalization for a particular type of log, and then transmit the normalized log data to a SIEM server, usually on a real-time or near-real-time basis for analysis and storage. • If a host has multiple types of logs of interest, then it might be necessary to install multiple agents.
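
For illustration only, the sketch below shows the general idea of an agent: it filters a raw log line, normalizes it to JSON, and forwards it to an assumed collector address over UDP syslog. It is a toy example, not the Splunk Universal Forwarder or any vendor's agent.

```python
import json
import logging
import logging.handlers
import socket

# Assumed log server address; UDP sends succeed even if nothing is listening,
# so the sketch runs standalone.
COLLECTOR = ("127.0.0.1", 514)

logger = logging.getLogger("forwarder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=COLLECTOR,
                                                 socktype=socket.SOCK_DGRAM))

def forward(raw_line: str) -> None:
    """Filter, normalize, and transmit a single log line."""
    if "DEBUG" in raw_line:          # simple event filtering on the agent
        return
    normalized = {"host": socket.gethostname(), "message": raw_line.strip()}
    logger.info(json.dumps(normalized))

forward("2024-05-01 10:02:13 ERROR database connection refused")
```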

SIEM PRODUCTS FEATURES


▪ Graphical user interfaces (GUIs) that are specifically designed to assist analysts in identifying potential problems and reviewing all available data related to each problem.
▪ A security knowledge base, with information on known vulnerabilities, the likely meaning of certain log messages, and other technical data; log analysts can often customize the knowledge base as needed.
▪ Incident tracking and reporting capabilities, sometimes with robust workflow features.
▪ Asset information storage and correlation, e.g., giving higher priority to an attack that targets a vulnerable OS or a more important host.

2. 12 COMPONENTS AND CAPABILITIES IN A SIEM ARCHITECTURE

1. Data aggregation: collects and aggregates data from security systems and network devices.
2. Threat intelligence feeds: combines internal data with third-party data on threats and vulnerabilities.
3. Correlation and security monitoring: links events and related data into security incidents, threats, or forensic findings.
4. Analytics: uses statistical models and machine learning to identify deeper relationships between data elements.
5. Alerting: analyzes events and sends alerts to notify security staff of immediate issues.
6. Dashboards: creates visualizations to let staff review event data and identify patterns and anomalies.
7. Compliance: gathers log data for standards like HIPAA, PCI DSS, HITECH, SOX, and GDPR, and generates reports.
8. Retention: stores long-term historical data, useful for compliance and forensic investigations.
9. Forensic analysis: enables exploration of log and event data to discover details of a security incident.
10. Threat hunting: enables security staff to run queries on log and event data to proactively uncover threats.
11. Incident response: helps security teams identify and respond to security incidents, bringing in all relevant data rapidly.
12. SOC automation: advanced SIEMs can automatically respond to incidents by orchestrating security systems, an approach known as security orchestration, automation, and response (SOAR).

8
3. INDEXER CLUSTERS AND INDEX REPLICATION

Ans: ▪ Indexer clusters are groups of Splunk Enterprise indexers configured to replicate each other's data, so that the system keeps multiple copies of all data. This process is known as index replication. By maintaining multiple, identical copies of Splunk Enterprise data, clusters prevent data loss while promoting data availability for searching.

The key benefits of index replication are:

▪ Data availability: An indexer is always available to handle incoming data, and the indexed data is available for searching.
▪ Data fidelity: You never lose any data. You have assurance that the data sent to the cluster is exactly the same data that gets stored in the cluster and that a search can later access.
▪ Data recovery: Your system can tolerate downed indexers without losing data or losing access to data.
▪ Disaster recovery: With multisite clustering, your system can tolerate the failure of an entire data center.
▪ Search affinity: With multisite clustering, search heads can access the entire set of data through their local sites, greatly reducing long-distance network traffic.

4. PARTS OF AN INDEXER CLUSTER

▪ An indexer cluster is a group of Splunk Enterprise instances, or nodes, that, working in concert, provide a redundant indexing and searching capability. Each cluster has three types of nodes:
▪ A single manager node to manage the cluster.
▪ Several to many peer nodes to index and maintain multiple copies of the data and to search the data.
▪ One or more search heads to coordinate searches across the set of peer nodes.
The manager node manages the cluster. It coordinates the replicating activities of the peer nodes and tells the search head where to find data. It also helps manage the configuration of peer nodes and orchestrates remedial activities if a peer goes down. The search head runs searches across the set of peer nodes. You must use a search head to manage searches across indexer clusters.

5. EVENT LOGS
▪ Events are classified into System, Security, Application, Directory Service, DNS Server, and DFS Replication categories.
▪ Directory Service, DNS Server, and DFS Replication logs are applicable only for Active Directory.
▪ Events that are related to system or data security are called security events, and the corresponding log file is called the Security log.
6. WHAT IS A LOG FORMAT?
▪ A log format refers to the structure and organization of data recorded in log files. ▪ It defines how the information is presented, what fields are included, and the format in which they are logged. ▪ Log formats are essential for standardized and consistent logging across systems and applications, enabling easy parsing, analysis, and interpretation of log data. These fields may include:

▪ Timestamp: The date and time when the event occurred, providing a chronological order of log entries.
▪ Source IP/Hostname: The IP address or hostname of the device or system that generated the log entry.
▪ Event Severity/Level: Indicates the level of importance or severity of the logged event, such as "INFO," "WARNING," or "ERROR."
▪ Event Category: Categorizes the event or activity, providing a high-level classification of the log entry (e.g., authentication, network, application).
▪ Description/Message: Provides a detailed description or message explaining the event, including relevant contextual information.
▪ User/Actor: Specifies the user or entity involved in the event, helping to identify the source or cause of the logged activity.
▪ Source/Destination IP/Port: For network-related events, the source and destination IP addresses or port numbers may be included to identify network connections.
▪ Protocol/Method: Specifies the protocol or method used in the event (e.g., HTTP, FTP, TCP), providing additional information about the logged activity.
▪ Result/Status: Indicates the outcome or result of the event (e.g., success, failure, denied), aiding in the analysis and identification of potential issues.
▪ Additional Custom Fields: Depending on the specific needs of the system or application, additional custom fields may be included to capture specific information relevant to the environment.
▪ Text-based formats (JSON, XML, YAML) tend to be larger due to their readable nature; they include property names, delimiters, and tags repeated many times in the same message. ▪ CSV is compact for simple, tabular data but lacks self-descriptiveness and is not suitable for hierarchical data structures.
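
To make the field list concrete, here is one illustrative structured log entry built in Python; every value is invented purely as an example.

```python
import json

# One illustrative structured log entry; every value below is an invented example.
entry = {
    "timestamp": "2024-05-01T10:02:13Z",
    "source_host": "web01.example.com",
    "severity": "WARNING",
    "category": "authentication",
    "message": "Failed SSH login",
    "user": "alice",
    "src_ip": "203.0.113.7",
    "dst_port": 22,
    "protocol": "SSH",
    "result": "failure",
}
print(json.dumps(entry, indent=2))   # text-based formats are readable but larger than CSV
```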
6. LOG BASELINE.

▪A log baseline refers to the established and documented set of normal or expected log events and
patterns within a system or network. ▪It serves as a reference point or benchmark for comparison
when analysing logs to identify anomalies, security incidents, or deviations from the expected behaviour.
KEY ELEMENTS OF A LOG BASELINE
▪ Log Frequency: Determine the expected frequency or rate of log events within the baseline. This can
help establish a baseline for log volume and identify any significant increases or decreases in log activity.
▪ Log Anomalies: Document known anomalies or exceptions that may occur within the log data but are
considered normal or expected. This can include certain error messages, maintenance activities, or
authorized system changes.
▪ Log Types: Identify the types of logs that are relevant to the system or network being monitored. This may include logs from operating systems, databases, applications, firewalls, intrusion detection systems, and other devices or services.
▪ Log Sources: Determine the specific sources of log data within the environment. This can include
individual devices, servers, network devices, and other components that generate log entries.
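
A minimal sketch of how a documented baseline might be checked against observed log volume, assuming made-up per-source expected ranges:

```python
# Compare the current hour's event count per log source against the documented
# expected range. The sources and numbers are illustrative assumptions.
baseline = {
    "firewall":   {"min_per_hour": 500, "max_per_hour": 5000},
    "web_server": {"min_per_hour": 100, "max_per_hour": 2000},
}
observed = {"firewall": 12000, "web_server": 850}

for source, count in observed.items():
    expected = baseline.get(source)
    if expected is None:
        print(f"{source}: no baseline documented, review manually")
    elif not expected["min_per_hour"] <= count <= expected["max_per_hour"]:
        print(f"{source}: {count} events/hour is outside the expected range {expected}")
    else:
        print(f"{source}: within baseline")
```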

7. CORRELATION RULES

Ans 1. Build Relationships Across Machine Data. ▪ Event correlation enables you to find relationships
between seemingly unrelated events in data from multiple sources and to understand which events are
most relevant. ▪ Splunk correlations can provide functionality similar to sophisticated event
management or correlation systems. ▪ You can also automate the results of correlations to generate
alerts or support business metrics, leading to better business decisions and Operational Intelligence.

2. Time & Geolocation-Based Correlations: • Identify relationships based on time and proximity or geographic location. • See all, or any subset of, events that take place over a given time period and pinpoint their location. • View events that take place across time periods to support security or operations investigations.

3. Transaction-Based Correlations. • Track a series of events as a single transaction and produce a
“duration” or “event count”. • Correlate events from any number of separate IT systems and data
sources. • Summarize the resulting grouping of events in reports

4. Subsearches: A subsearch is a search within a primary, or outer, search. When a search contains a subsearch, the subsearch typically runs first. Subsearches must be enclosed in square brackets in the primary search. • Take the results of one search and use them in another to create "if/then" conditions. • Only see the results of a search if certain conditions are met. • Correlate data and evaluate events in the context of the whole event set, including data across different indexes or Splunk servers in a distributed environment.

5. Correlate data with external data sources. • Enhance, enrich, validate or add context with
structured data sources. • Lookup tables can be a static CSV file, a KV store collection, or the output of
a Python script. You can also use the results of a search to populate the CSV file or KV store collection
and then set that up as a lookup table
6. Joins. • Support for “SQL-like” inner and outer joins. • Link one data set to another based on
common fields. • See the results for completely different datasets in a single view

1.What are Logs?

A log is a record of the events occurring within an organization’s systems and networks. Logs are
composed of log entries; each entry contains information related to a specific event that has occurred
within a system or network. Many logs within an
organization contain records related to computer security. These computer security logs are
generated by many sources, including security software, such as antivirus software, firewalls, and
intrusion detection and prevention systems; operating systems on servers, workstations, and
networking equipment; and applications.

2. Computer Security Log Management?


Organizations should establish standard log management operational processes. The major
log management operational processes typically include configuring log sources, performing log analysis,
initiating responses to identified events, and managing long-term storage. Administrators have other
responsibilities as well, such as the following:

Monitoring the logging status of all log sources; monitoring log rotation and archival processes; and checking for upgrades and patches to logging software, and acquiring, testing, and deploying them.

3.Log management in organization ?


Organizations should prioritize log management appropriately throughout the organization.
After an organization defines its requirements and goals for the log management process, it should then
prioritize the requirements and goals based on the organization’s perceived reduction of risk and the
expected time and resources needed to perform log management functions.
An organization should also define roles and responsibilities for log management for key personnel
throughout the organization, including establishing log management duties at both the individual
system level and the log management infrastructure level.

Organizations should create and maintain a log management infrastructure.


Organizations should provide proper support for all staff with log management responsibilities.
Organizations should establish standard log management operational processes.
Organizations should establish policies and procedures for log management.
Organizations should prioritize log management appropriately throughout the
organization.

4.Log Management Infrastructure ?


#A log management infrastructure consists of the hardware, software, networks, and media used to
generate, transmit, store, analyze, and dispose of log data. #Log
management infrastructures typically perform several functions that support the analysis and security of
log data.

After establishing an initial log management policy and identifying roles and responsibilities, an
organization should next develop one or more log management infrastructures that effectively support
the policy and roles

@Architecture :

A log management infrastructure typically comprises the following three tiers:


1. Log Generation: The first tier contains the hosts that generate the log data. Some hosts run logging client applications or services that make their log data available through networks to log servers in the second tier. Other hosts make their logs available through other means, such as allowing the servers to authenticate to them and retrieve copies of the log files.

2. Log Analysis and Storage: The second tier is composed of one or more log servers that receive log data or copies of log data from the hosts in the first tier. The data is transferred to the servers either in a real-time or near-real-time manner, or in occasional batches based on a schedule or the amount of log data waiting to be transferred. Servers that receive log data from multiple log generators are sometimes called collectors or aggregators. Log data may be stored on the log servers themselves or on separate database servers.

3. Log Monitoring: The third tier contains consoles that may be used to monitor and review log data and the results of automated analysis. Log monitoring consoles can also be used to generate reports. In some log management infrastructures, consoles can also be used to provide management for the log servers and clients. Also, console user privileges sometimes can be limited to only the necessary functions and data sources for each user.

Functions of Log Management Architecture


Log management infrastructures typically perform several functions that assist in the storage, analysis,
and disposal of log data. These functions are normally performed in such a way that they do not alter
the original logs. The following items describe common log management infrastructure functions:
1. Generation:
• Log parsing is extracting data from a log so that the parsed values can be used as input for another logging process. A simple example of parsing is reading a text-based log file that contains 10 comma-separated values per line and extracting the 10 values from each line. Parsing is performed as part of many other logging functions, such as log conversion and log viewing. (A combined parsing, filtering, and aggregation sketch follows the aggregation item below.)

• Event filtering is the suppression of log entries from analysis, reporting, or long-term storage because
their characteristics indicate that they are unlikely to contain information of interest. For example,
duplicate entries and standard informational entries might be filtered because they do not provide
useful information to log analysts.
Typically, filtering does not affect the generation or short-term storage of events because it does not
alter the original log files.

• Event aggregation, similar entries are consolidated into a single entry containing a count of the
number of occurrences of the event. For example, a thousand entries that each record part of a scan
could be aggregated into a single entry that indicates how many hosts were scanned. Aggregation is
often performed as logs are originally generated (the generator counts similar related events and
periodically writes a log entry containing the count), and it can also be performed as part of log
reduction or event correlation processes, which are described below.
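
The short Python sketch below ties these three generation-time functions together on an invented comma-separated log format: it parses the fields, filters routine entries, and aggregates similar entries into counts. The log format and event types are assumptions for illustration.

```python
import csv
import io
from collections import Counter

# Illustrative comma-separated log lines (timestamp, source IP, event type);
# the format and values are assumptions for the sketch, not a real product's log.
RAW_LOG = """\
2024-05-01T10:00:01,203.0.113.7,port_scan
2024-05-01T10:00:02,203.0.113.7,port_scan
2024-05-01T10:00:02,10.0.0.5,heartbeat
2024-05-01T10:00:03,203.0.113.7,port_scan
"""

# Parsing: extract the fields from each comma-separated line.
parsed = [
    {"timestamp": row[0], "src_ip": row[1], "event_type": row[2]}
    for row in csv.reader(io.StringIO(RAW_LOG))
]

# Event filtering: suppress routine informational entries that analysts do not need.
interesting = [e for e in parsed if e["event_type"] != "heartbeat"]

# Event aggregation: consolidate similar entries into a single entry with a count.
counts = Counter((e["src_ip"], e["event_type"]) for e in interesting)
for (src_ip, event_type), count in counts.items():
    print(f"{count} x {event_type} from {src_ip}")
```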

2. Storage:
• Log rotation is closing a log file and opening a new log file when the first file is considered to be
complete. Log rotation is typically performed according to a schedule (e.g., hourly, daily, weekly) or
when a log file reaches a certain size. # The primary benefits of log rotation are preserving log entries
and keeping the size of log files manageable. When a log file is rotated, the preserved log file can be
compressed to save space.

• Log archival is retaining logs for an extended period of time, typically on removable media, a storage
area network (SAN), or a specialized log archival appliance or server. Logs often need to be preserved to
meet legal or regulatory requirements.

• Log compression is storing a log file in a way that reduces the amount of storage space needed for the file without altering the meaning of its contents. Log compression is often performed when logs are rotated or archived.

•Log reduction is removing unneeded entries from a log to create a new log that is smaller. A similar
process is event reduction, which removes unneeded data fields from all log entries. Log and event
reduction are often performed in conjunction with log archival so that only the log entries and data
fields of interest are placed into long-term storage.
•Log conversion is parsing a log in one format and storing its entries in a second format. For example,
conversion could take data from a log stored in a database and save it in an XML format in a text file.

• Log normalization, each log data field is converted to a particular data representation and categorized
consistently.

• Log file integrity checking involves calculating a message digest for each file and storing the message digest securely to ensure that changes to archived logs are detected. A message digest is a cryptographic hash that uniquely identifies data and has the property that changing a single bit in the data causes a completely different message digest to be generated.
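
A minimal sketch of log file integrity checking using a SHA-256 message digest from the Python standard library; the file name is an assumption, and the file is created only so the example runs on its own.

```python
import hashlib
from pathlib import Path

def message_digest(path: str) -> str:
    """Compute a SHA-256 digest of a log file; changing a single bit changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative usage: record the digest (ideally on separate, protected storage)
# when the log is archived, then recompute it later to detect tampering.
log_file = Path("archived.log")             # assumed archived log name
log_file.write_text("example log entry\n")  # only so the sketch runs standalone
stored = message_digest(str(log_file))
assert message_digest(str(log_file)) == stored, "archived log was modified"
print("digest:", stored)
```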

3. Analysis:
• Event correlation is finding relationships between two or more log entries. The most common form of event correlation is rule-based correlation, which matches multiple log entries from a single source or multiple sources based on logged values, such as timestamps, IP addresses, and event types. (A small rule-based example follows this list.)

•Log viewing is displaying log entries in a human-readable format.


•Log reporting is displaying the results of log analysis. Log reporting is often performed to summarize
significant activity over a particular period of time or to record detailed information related to a
particular event or series of events.
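
As a toy example of rule-based correlation, the sketch below flags a source IP that produces several authentication failures across sources followed by a success; the field names, events, and threshold are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative, already-normalized log entries from two sources; field names
# and the 3-failures-then-success rule threshold are assumptions for the sketch.
events = [
    {"source": "vpn", "type": "auth_failure", "ip": "203.0.113.7", "ts": 1},
    {"source": "vpn", "type": "auth_failure", "ip": "203.0.113.7", "ts": 2},
    {"source": "app", "type": "auth_failure", "ip": "203.0.113.7", "ts": 3},
    {"source": "app", "type": "auth_success", "ip": "203.0.113.7", "ts": 4},
]

def brute_force_then_success(entries, failure_threshold=3):
    """Rule: N or more failures from one IP followed by a success from the same IP."""
    failures = defaultdict(int)
    alerts = []
    for e in sorted(entries, key=lambda x: x["ts"]):
        if e["type"] == "auth_failure":
            failures[e["ip"]] += 1
        elif e["type"] == "auth_success" and failures[e["ip"]] >= failure_threshold:
            alerts.append(f"possible brute force: {failures[e['ip']]} failures then success from {e['ip']}")
    return alerts

print(brute_force_then_success(events))
```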

4. Disposal
•Log clearing is removing all entries from a log that precede a certain date and time. Log clearing is
often performed to remove old log data that is no longer needed on a system because it is not of
importance or it has been archived.

5.Log Management Planning


• To establish and maintain successful log management infrastructures, an organization should perform significant planning and other preparatory actions for log management.
• This planning covers the definition of log management roles and responsibilities, the creation of feasible logging policies, and the design of log management infrastructures.

Log Management Planning Process
1. Define Roles and Responsibilities: As part of the log management planning process, an organization should define the roles and responsibilities of individuals and teams who are expected to be involved in log management.
Security administrators, who are usually responsible for managing and monitoring the log management infrastructures, configuring logging on security devices (e.g., firewalls, network-based intrusion detection systems, antivirus servers), reporting on the results of log management activities, and assisting others with configuring logging and performing log analysis.

Computer security incident response teams, who use log data when handling some incidents.

Application developers, who may need to design or customize applications so that they perform logging in accordance with the logging requirements and recommendations.

Information security officers, who may oversee the log management infrastructures.
Chief information officers (CIO), who oversee the IT resources that generate, transmit, and store the
logs.
Auditors, who may use log data when performing audits.
Individuals involved in the procurement of software that should or can generate computer security log
data.

2. Establish Logging Policies. Organizations should develop policies that clearly define mandatory
requirements and suggested recommendations for several aspects of log management, including the
following
Log generation. • Which types of hosts must or should perform logging. • Which
host components must or should perform logging (e.g., OS, service, application). • Which types
of events each component must or should log (e.g., security events, network connections,
authentication attempts)
• Which data characteristics must or should be logged for each type of event (e.g., username and source
IP address for authentication attempts)

Log transmission
• Which types of hosts must or should transfer logs to a log management infrastructure. •Which types
of entries and data characteristics must or should be transferred from individual hosts to a log
management infrastructure
• How log data must or should be transferred (e.g., which protocols are permissible), including out-of-band methods where appropriate (e.g., for standalone systems).

Log Storage and Disposal


•How often logs should be rotated •How the confidentiality, integrity, and availability of each type of
log data must or should be protected while in storage. • How long each type of log data must or
should be preserved • How unneeded log data must or should be disposed of

Log Analysis
• How often each type of log data must or should be analyzed. • Who must or should be able to access
the log and how such accesses should be logged
• What must or should be done when suspicious activity or an anomaly is identified

Log Management Operational Processes


System-level and infrastructure administrators should follow standard processes for managing the logs
for which they are responsible. This section describes the major operational processes for log
management, which are as follows:

• Configure the log sources, including log generation, storage, and security. • Perform analysis of log
data. • Initiate appropriate responses to identified events. • Manage the long-term storage of log
data.

1.What are IDS and IPS?

Ans IDS (Intrusion Detection System) and IPS (Intrusion Prevention System) are cybersecurity tools
designed to detect and respond to malicious activity or policy violations in a network or system. They
work differently but often complement each other.

Intrusion Detection System (IDS). Purpose: Monitors network traffic or system activities for suspicious behavior. Function: Alerts administrators when it detects potential threats or unusual patterns.

Types: 1. Network-based IDS (NIDS): Monitors network traffic. 2. Host-based IDS (HIDS): Monitors
activities on individual devices. Action: Passive; it only detects and alerts but does not block or stop
attacks. Example Use: Identifying unauthorized access attempts or malware traffic.

Intrusion Prevention System (IPS). Purpose: Not only detects but also actively prevents or blocks detected threats. Function: Inspects network traffic, identifies threats, and takes immediate action (e.g., dropping malicious packets or blocking access). Placement: Often deployed inline between a network and its destination to intercept and process data in real time. Action: Active; it prevents threats.

2. How can we find the "sweet spot" in cybersecurity?

Ans : Finding the "sweet spot" in cybersecurity involves achieving an optimal balance between security,
usability, cost, and organizational needs.

1. Assess the Risk Landscape. Identify Assets: Determine what needs protection (e.g., sensitive data,
systems, intellectual property).

2 Align Cybersecurity with Business Goals # Understand Business Objectives: Ensure that security
measures align with the organization's goals (e.g., scalability, customer satisfaction, compliance).

3. Implement Layered Security. Apply a defense-in-depth strategy that uses multiple layers of security
controls:

4 Balance Cost vs. Benefit : Cost-Effectiveness: Use tools and solutions that provide the best protection
relative to their cost.

5. Focus on Usability User Training: Security awareness programs to reduce human error.

6 Measure and Monitor Continuously. Metrics and KPIs: Track indicators such as incident response time,
vulnerabilities patched, or phishing simulation results.

7. Leverage Frameworks and Standards : Use established frameworks to guide your strategy: NIST
Cybersecurity Framework (CSF)

3. Explain Security and its controls.

Ans: Security refers to measures taken to protect systems, networks, data, and resources from unauthorized access, misuse, threats, or damage. It ensures the confidentiality, integrity, and availability (CIA) of information and systems.
Security Controls: Security controls are safeguards or countermeasures to mitigate risks and protect assets. They are typically categorized into three main types:

1. Administrative Controls. Definition: Policies, procedures, and guidelines that define how security is managed. Examples: security policies (e.g., password policy), risk assessments, employee training and awareness programs, and incident response plans.

2. Technical Controls. Definition: Technology-based measures to protect systems and data. Examples: firewalls and intrusion detection/prevention systems (IDS/IPS), encryption, antivirus software, access controls like multi-factor authentication (MFA), and Security Information and Event Management (SIEM) tools.

3. Physical Controls. Definition: Measures to protect physical assets and infrastructure. Examples: locks, fences, and security cameras; access badges or biometric authentication for entry; guard personnel; and environmental controls like fire suppression systems.

4. Differentiate between reliability and security.

Ans: Reliability: Focuses on ensuring consistent performance and availability of the monitoring system without downtime or failures. Deals with issues like hardware/software failures, system crashes, and disruptions. Ensures monitoring tools operate continuously and provide accurate data over time. Involves fault tolerance, backup systems, and high-availability mechanisms. A reliable monitoring system can still be vulnerable to attacks without security.

Security: Focuses on protecting the monitoring system and its data from unauthorized access or attacks. Addresses risks like cyberattacks, data breaches, and insider threats. Ensures the confidentiality, integrity, and authenticity of monitoring data. Involves encryption, access controls, and defense against malicious activities. A secure monitoring system may not function properly if it is unreliable.

5. Why is Two-Factor Authentication (2FA) required?

Ans Two-Factor Authentication (2FA) is required because it significantly enhances the security of user
accounts and systems by adding an extra layer of protection beyond a simple password. Here's why 2FA
is necessary:

• Protects against password-based attacks: 2FA defends against common attacks such as phishing (even if a password is stolen through phishing, the second factor prevents unauthorized access) and brute force (randomly guessing passwords is ineffective if a second factor is required).
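
To show how a common second factor works under the hood, here is a minimal sketch of a time-based one-time password in the style of RFC 6238/RFC 4226, using only the Python standard library and an example Base32 secret; it is for illustration, not a production authenticator.

```python
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238-style time-based one-time password (the 'second factor')."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval                      # time step shared by client and server
    msg = struct.pack(">Q", counter)                            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Illustrative usage: both the authenticator app and the server derive the same
# short-lived code from the shared secret, so a stolen password alone is not enough.
SECRET = "JBSWY3DPEHPK3PXP"   # example Base32 secret, not a real credential
print("one-time code:", totp(SECRET))
```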
