Disclaimer: This is a Bachelor's degree project tested in a lab environment. If you deploy it in a real enterprise network, you take full responsibility for the outcome.
TrunkPod deploys honeypots across multiple VLANs to lure and expose attackers on compromised machines, giving you visibility into anomalies across all network segments.
Clone the repository:
```bash
git clone https://github.com/kuzlik340/TrunkPod
```

Dependencies are installed automatically on first run via `install_requirements.sh`.
All configuration lives in the configs/ directory:
```bash
cd configs/
```

You need to create two files. Example templates are provided to get you started:
| File | Template |
|---|---|
| `honeypots.yaml` | `honeypots.example.yaml` |
| `network.yaml` | `network.example.yaml` |
### configs/honeypots.yaml
```yaml
honeypots:
  - name: honeypot1
    ip: 192.168.20.60
    vlan: 20
    services:
      - name: ldap
        port: 389
      - name: ssh
        port: 22
    mac: B8:2A:72:EF:00:01
  - name: honeypot2
    ip: 192.168.30.70
    vlan: 30
    services:
      - name: ldap
        port: 389
      - name: ssh
        port: 22
      - name: http_server
        port: 80
      - name: https_server
        port: 443
      - name: telnet
        port: 23
    mac: B8:2A:72:EF:00:02
```

Field reference:
- `name`: used as the Podman container name; must be unique across all honeypots.
- `ip`: must be unique. The script checks the target VLAN for conflicts before deploying; if the IP is taken, it will stop and ask you to change it.
- `vlan`: the VLAN ID where this honeypot will be deployed.
- `services`: list of services to run on this honeypot. Available options: `ssh`, `ldap`, `http_server`, `https_server`, `telnet`. The same service can run on multiple ports on the same honeypot, as long as ports don't conflict.
- `mac`: must be unique across all honeypots.
A configuration validator runs before each deployment. If anything is misconfigured, it will tell you exactly what to fix.
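The uniqueness and port-conflict rules above can be sketched in a few lines of Python. This is an illustrative snippet, not the project's actual validator; it assumes a parsed `honeypots.yaml` represented as a list of dicts:

```python
# Illustrative sketch of the kinds of checks the validator performs.
# Not the project's actual code; field names follow honeypots.yaml.
def validate_honeypots(honeypots, known_vlans):
    errors = []
    # name, ip, and mac must each be unique across all honeypots
    for field in ("name", "ip", "mac"):
        values = [h[field] for h in honeypots]
        for v in sorted(set(values)):
            if values.count(v) > 1:
                errors.append(f"duplicate {field}: {v}")
    for h in honeypots:
        # every referenced VLAN must exist in network.yaml
        if h["vlan"] not in known_vlans:
            errors.append(f"{h['name']}: VLAN {h['vlan']} not defined in network.yaml")
        # two services on one honeypot must not share a port
        ports = [s["port"] for s in h["services"]]
        if len(ports) != len(set(ports)):
            errors.append(f"{h['name']}: port conflict in services")
    return errors
```

An empty returned list means the configuration passes these particular checks; the bundled validator may enforce additional constraints.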
### configs/network.yaml
```yaml
vlans:
  - id: 20
    name: network1
    range: 192.168.20.0/24
    gateway: 192.168.20.1
  - id: 30
    name: network2
    range: 192.168.30.0/24
    gateway: 192.168.30.1
```

Field reference:
- `id`: VLAN ID (must match the IDs used in `honeypots.yaml`).
- `name`: human-readable label, not used by scripts.
- `range`: IP range of the VLAN segment.
- `gateway`: gateway address for honeypots in this VLAN.
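One constraint the two configs imply together: each honeypot's `ip` must fall inside the `range` of its `vlan`. A quick cross-check using Python's standard `ipaddress` module (a convenience sketch, not part of the toolkit):

```python
import ipaddress

# VLAN ranges as defined in network.yaml
vlans = {
    20: ipaddress.ip_network("192.168.20.0/24"),
    30: ipaddress.ip_network("192.168.30.0/24"),
}

# (name, ip, vlan) triples as defined in honeypots.yaml
honeypots = [
    ("honeypot1", "192.168.20.60", 20),
    ("honeypot2", "192.168.30.70", 30),
]

for name, ip, vlan in honeypots:
    net = vlans[vlan]
    # raises AssertionError if a honeypot IP is outside its VLAN's range
    assert ipaddress.ip_address(ip) in net, f"{name}: {ip} is outside {net}"
print("all honeypot IPs fall inside their VLAN ranges")
```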
After creating both config files, generate the BPF filter for the traffic parser:
```bash
python configs/create_filter_bpf.py configs/honeypots.yaml
```

Example output:
```c
#define BPF_FILTER \
    "(ether dst B8:2A:72:EF:00:01) or " \
    "(ether dst B8:2A:72:EF:00:02) or " \
    "(ip and (dst host 192.168.20.60 or dst host 192.168.30.70)) or " \
    "(arp and (host 192.168.20.60 or host 192.168.30.70))"
```
Paste the output into `logging/low_level_logging/traffic_parser.c`.
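The generated macro is just an OR of per-honeypot MAC matches plus IP and ARP clauses over the honeypot addresses. A minimal sketch of that assembly logic (illustrative only; the real `configs/create_filter_bpf.py` may format its output differently, e.g. with line continuations):

```python
# Illustrative sketch of how the BPF_FILTER macro could be assembled from
# parsed honeypots.yaml entries. Not the actual create_filter_bpf.py source.
def build_bpf_filter(honeypots):
    # one MAC clause per honeypot
    mac_clauses = [f"(ether dst {h['mac']})" for h in honeypots]
    ips = [h["ip"] for h in honeypots]
    # one combined clause each for IP traffic and for ARP
    ip_clause = "(ip and (" + " or ".join(f"dst host {ip}" for ip in ips) + "))"
    arp_clause = "(arp and (" + " or ".join(f"host {ip}" for ip in ips) + "))"
    expr = " or ".join(mac_clauses + [ip_clause, arp_clause])
    return f'#define BPF_FILTER "{expr}"'
```

Because the expression grows with every honeypot, regenerate the filter (and rebuild the parser) whenever `honeypots.yaml` changes.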
Install tools for the logger:
```bash
sudo apt install build-essential libpcap-dev make
```

then build:

```bash
make
```

Edit `logging/logging_pipeline/.env` with your Elasticsearch connection details:
```
STACK_VERSION=8.15.2
TRUNKPOD_LOG_PATH=/var/log/trunkpod
TRAFFIC_LOG_PATH=/var/log/traffic_parser
ELASTICHOST=https://<ip_address>
ELASTICUSER=elastic_user
ELASTICPASS=elastic_password
```

Run the deployment script, specifying the trunk interface:
```bash
sudo ./TrunkPod.sh --interface <your_interface>
```

This will:
- Install any missing requirements
- Validate your configuration
- Prompt you to generate honeytokens (you choose how many)
- Create all corresponding interfaces
- Check that the IP addresses in the configuration are available
- Set up all networking for the honeypots
- Deploy all honeypots defined in `honeypots.yaml`
For a full list of available flags, run:

```bash
sudo ./TrunkPod.sh --help
```
Note: Clean flags (`--clean`, `--clean-build-logs`, `--clean-honeypot-logs`) cannot be combined with other flags; the script will stop after cleaning. Simply rerun with your intended flags afterwards.
The `traffic_parser` binary captures packets destined for your honeypots. It's best run as a systemd service:
```bash
cd logging/low_level_logging
sudo cp traffic_parser /usr/local/bin/
sudo cp traffic-parser.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable traffic-parser.service
sudo systemctl start traffic-parser.service
```

Check its status:
```bash
sudo systemctl status traffic-parser.service
```

Start the logging pipeline (Filebeat + forwarding to Elasticsearch):
```bash
cd logging/logging_pipeline
docker compose up
```