Security Onion 2.4 E-Book
Release 2.4
1 About
    1.1 Security Onion
    1.2 Security Onion Solutions, LLC
    1.3 Documentation
2 Introduction
    2.1 Network Visibility
    2.2 Host Visibility
    2.3 Analysis Tools
    2.4 Workflow
    2.5 Deployment Scenarios
    2.6 Conclusion
3 License
4 First Time Users
5 Getting Started
    5.1 Best Practices
    5.2 Architecture
    5.3 Hardware Requirements
    5.4 Operating System
    5.5 Partitioning
    5.6 Download
    5.7 VMware
    5.8 VirtualBox
    5.9 Proxmox
    5.10 Booting Issues
    5.11 Airgap
    5.12 Installation
    5.13 Amazon Cloud Image
    5.14 Azure Cloud Image
    5.15 Google Cloud Image
    5.16 Configuration
    5.17 After Installation
6 Security Onion Console (SOC)
    6.1 Alerts
    6.2 Dashboards
    6.3 Hunt
    6.4 Cases
    6.5 PCAP
    6.6 Grid
    6.7 Downloads
    6.8 Administration
    6.9 Kibana
    6.10 Elastic Fleet
    6.11 Osquery Manager
    6.12 InfluxDB
    6.13 CyberChef
    6.14 Playbook
    6.15 ATT&CK Navigator
10 Logs
    10.1 Ingest
    10.2 Logstash
    10.3 Redis
    10.4 Elasticsearch
    10.5 ElastAlert
    10.6 Curator
    10.7 Data Fields
    10.8 Alert Data Fields
    10.9 Elastalert Fields
    10.10 Zeek Fields
    10.11 Community ID
    10.12 SOC Logs
    10.13 Other Supported Logs
11 Updating
    11.1 soup
    11.2 End Of Life
12 Accounts
    12.1 Passwords
    12.2 MFA
    12.3 Adding Accounts
    12.4 Listing Accounts
    12.5 Disabling Accounts
    12.6 Role-Based Access Control (RBAC)
    12.7 Kratos
13 Services
15 Tuning
    15.1 BPF
    15.2 Managing Rules
    15.3 Adding Local Rules
    15.4 Managing Alerts
    15.5 High Performance Tuning
    15.6 Salt
17 Utilities
    17.1 jq
    17.2 so-allow
    17.3 so-elastic-auth-password-reset
    17.4 so-elasticsearch-query
    17.5 so-import-pcap
    17.6 so-import-evtx
    17.7 so-monitor-add
    17.8 so-status
    17.9 so-test
18 Help
    18.1 FAQ
    18.2 Directory Structure
    18.3 Tools
    18.4 Support
    18.5 Community Support
    18.6 Help Wanted
19 Security
    19.1 Vulnerability Disclosure
    19.2 Product and Supply Chain Integrity
21 Appendix
CHAPTER 1: About
Security Onion is a free and open platform built by defenders for defenders. It includes network visibility, host visi-
bility, intrusion detection honeypots, log management, and case management. Security Onion has been downloaded
over 2 million times and is being used by security teams around the world to monitor and defend their enterprises. Our
easy-to-use Setup wizard allows you to build a distributed grid for your enterprise in minutes!
Doug Burks started Security Onion as a free and open project in 2008 and then founded Security Onion Solutions,
LLC in 2014.
Important: Security Onion Solutions, LLC is the only official provider of hardware appliances, training, and profes-
sional services for Security Onion.
For more information about these products and services, please see our company site at https://securityonionsolutions.
com.
1.3 Documentation
 Warning: Documentation is always a work in progress and some documentation may be missing or incorrect.
 Please let us know if you notice any issues.
1.3.1 License
This documentation is licensed under CC BY 4.0. You can read more about this license at https://creativecommons.
org/licenses/by/4.0/.
1.3.2 Formats
This documentation is published online at https://securityonion.net/docs. If you are viewing an offline version of this
documentation but have Internet access, you might want to switch to the online version at https://securityonion.net/docs
to see the latest version.
This documentation is also available in PDF format at https://readthedocs.org/projects/securityonion/downloads/pdf/
2.4/.
Many folks have asked for a printed version of our documentation. Whether you work on airgapped networks or
simply want a portable reference that doesn’t require an Internet connection or batteries, this is what you’ve been
asking for. Thanks to Richard Bejtlich for writing the inspiring foreword! Proceeds go to the Rural Technology Fund!
You can purchase your copy at https://securityonion.net/book.
1.3.3 Authors
Security Onion Solutions is the primary author and maintainer of this documentation. Some content has been con-
tributed by members of our community. Thanks to all the folks who have contributed to this documentation over the
years!
1.3.4 Contributing
We welcome your contributions to our documentation! We will review any suggestions and apply them if appropriate.
If you are accessing the online version of the documentation and notice that a particular page has incorrect information,
you can submit corrections by clicking the Edit on GitHub button in the upper right corner of each page.
To submit a new page, you can submit a pull request (PR) to the 2.4 branch of the securityonion-docs repo at
https://github.com/Security-Onion-Solutions/securityonion-docs.
Pages are written in RST format and you can find several RST guides on the Internet including https://thomas-cokelaer.
info/tutorials/sphinx/rest_syntax.html.
CHAPTER 2: Introduction
Security Onion is a free and open platform built by defenders for defenders. It includes network visibility, host
visibility, intrusion detection honeypots, log management, and case management.
For network visibility, we offer signature based detection via Suricata, rich protocol metadata and file extraction using
your choice of either Zeek or Suricata, full packet capture via Stenographer, and file analysis via Strelka. For host
visibility, we offer the Elastic Agent which provides data collection, live queries via osquery, and centralized manage-
ment using Elastic Fleet. Intrusion detection honeypots based on OpenCanary can be added to your deployment for
even more enterprise visibility. All of these logs flow into Elasticsearch and we’ve built our own user interfaces for
alerts, dashboards, threat hunting, case management, and grid management.
In the diagram below, we see Security Onion in a traditional enterprise network with a firewall, workstations, and
servers. You can use Security Onion to monitor north/south traffic to detect an adversary entering an environment,
establishing command-and-control (C2), or perhaps data exfiltration. You’ll probably also want to monitor east/west
traffic to detect lateral movement. As more and more of our network traffic becomes encrypted, it’s important to fill in
those blind spots with additional visibility in the form of endpoint telemetry. Security Onion can consume logs from
your servers and workstations so that you can then hunt across all of your network and host logs at the same time.
From a network visibility standpoint, Security Onion seamlessly weaves together intrusion detection, network meta-
data, full packet capture, file analysis, and intrusion detection honeypots.
Security Onion generates NIDS (Network Intrusion Detection System) alerts by monitoring your network traffic and
looking for specific fingerprints and identifiers that match known malicious, anomalous, or otherwise suspicious traffic.
This is signature-based detection so you might say that it’s similar to antivirus signatures for the network, but it’s a bit
deeper and more flexible than that. NIDS alerts are generated by Suricata.
Unlike signature-based intrusion detection that looks for specific needles in the haystack of data, network metadata
provides you with logs of connections and standard protocols like DNS, HTTP, FTP, SMTP, SSH, and SSL. This
provides a real depth and visibility into the context of data and events on your network. Security Onion provides
network metadata using your choice of either Zeek or Suricata.
Full packet capture is like a video camera for your network, but better because not only can it tell us who came and
went, but also exactly where they went and what they brought or took with them (exploit payloads, phishing emails,
file exfiltration). It’s a crime scene recorder that can tell us a lot about the victim and the white chalk outline of a
compromised host on the ground. There is certainly valuable evidence to be found on the victim’s body, but evidence
at the host can be destroyed or manipulated; the camera doesn’t lie, is hard to deceive, and can capture a bullet in
transit. Full packet capture is recorded by Stenographer.
As Zeek and Suricata are monitoring your network traffic, they can extract files transferred across the network. Strelka
can then analyze those files and provide additional metadata.
We also have an Intrusion Detection Honeypot node that allows you to build a node that mimics services. Connections
to these services automatically generate alerts.
In addition to network visibility, Security Onion provides endpoint visibility via the Elastic Agent which provides data
collection, live queries via osquery, and centralized management using Elastic Fleet.
For devices like firewalls and routers that don’t support the installation of agents, Security Onion can consume standard
Syslog.
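For example, here is a minimal sketch (in Python) of what sending a test syslog message to the grid might look like. The manager hostname is a placeholder, and it assumes you have already allowed syslog from your device (see the Firewall and Configuration sections); this is not enabled by default.

    # Hypothetical connectivity test: send one RFC 3164-style syslog
    # message to a Security Onion manager over UDP/514. The hostname
    # and the assumption that syslog collection is allowed are placeholders.
    import socket

    MANAGER = "so-manager.example.com"  # placeholder hostname
    PRI = 14  # facility user (1) * 8 + severity info (6)

    msg = f"<{PRI}>myfirewall test: syslog connectivity check"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode("utf-8"), (MANAGER, 514))

If the message does not show up in Dashboards or Hunt shortly afterwards, firewall rules are the first thing to check.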
2.3 Analysis Tools
With all of the data sources mentioned above, there is an incredible amount of data available at your fingertips.
Fortunately, Security Onion tightly integrates the following tools to help make sense of this data.
2.3.1 Security Onion Console (SOC)
Security Onion Console (SOC) is the first thing you see when you log into Security Onion. It includes our Alerts
interface which allows you to see all of your NIDS alerts from Suricata.
Security Onion Console (SOC) also includes our Dashboards interface which gives you a nice overview of not only
your NIDS/HIDS alerts but also network metadata logs from Zeek or Suricata and any other logs that you may be
collecting.
Hunt is similar to Dashboards but its default queries are more focused on threat hunting.
Cases is the case management interface. As you are working in Alerts, Dashboards, or Hunt, you may find alerts or
logs that are interesting enough to send to Cases and create a case. Other analysts can collaborate with you as you
work the case.
Security Onion Console (SOC) also includes an interface for full packet capture (PCAP) retrieval.
2.3.2 CyberChef
CyberChef allows you to decode, decompress, and analyze artifacts. Alerts, Dashboards, Hunt, and PCAP all allow
you to quickly and easily send data to CyberChef for further analysis.
2.3.3 Playbook
Playbook allows you to create a Detection Playbook, which itself consists of individual plays. These plays are fully
self-contained and describe the different aspects around the particular detection strategy.
2.4 Workflow
All of these analysis tools work together to provide efficient and comprehensive analysis capabilities. For example,
here’s one potential workflow:
     • Go to the Alerts page and review any unacknowledged alerts.
     • Review Dashboards for anything that looks suspicious.
     • Once you’ve found something that you want to investigate, you might want to pivot to Hunt to expand your
       search and look for additional logs relating to the source and destination IP addresses.
     • If any of those alerts or logs look interesting, you might want to pivot to PCAP to review the full packet capture
       for the entire stream.
     • Depending on what you see in the stream, you might want to send it to CyberChef for further analysis and
       decoding.
     • Escalate alerts and logs to Cases and document any observables. Pivot to Hunt to cast a wider net for those
       observables.
     • Develop a play in Playbook that will automatically alert on observables moving forward and update your cover-
       age in ATT&CK Navigator.
     • If you have the Elastic Agent deployed, then you might want to search for additional host logs or run live queries
       against your endpoints using osquery.
     • Finally, return to Cases and document the entire investigation and close the case.
2.5 Deployment Scenarios
Analysts around the world are using Security Onion today for many different architectures. The Security Onion Setup
wizard allows you to easily configure the best deployment scenario to suit your needs.
2.6 Conclusion
After you install Security Onion, you will have comprehensive network and host visibility for your enterprise. Our
analyst tools will enable you to use all of that data to detect intruders more quickly and paint a more complete picture
of what they’re doing in your environment. Get ready to peel back the layers of your enterprise and make your
adversaries cry!
CHAPTER 3: License
Security Onion is a free and open platform. Most software included in Security Onion is licensed under open source
licenses.
Elastic components and Security Onion components are licensed under the Elastic License 2.0 (ELv2). During
installation, you will be prompted to accept the Elastic License.
Note: You can find the full text of the Elastic License 2.0 (ELv2) at https://securityonion.net/license.
For more information about enterprise features and Elastic licensing, please see https://blog.securityonion.net/2022/08/security-onion-enterprise-features-and.html.
CHAPTER 4: First Time Users
If this is your first time using Security Onion 2, then we highly recommend that you start with our Security Onion
ISO image as shown in the Download section. Then install the ISO image as shown in the Installation section and
configure for IMPORT as shown in the Configuration section. This can be done in a minimal virtual machine with as
little as 4GB RAM, 2 CPU cores, and 200GB of storage. For more information about virtualization, please see the
VMware, VirtualBox, and Proxmox sections.
The following screenshots will walk you through:
    • installing our Security Onion ISO image
    • configuring for IMPORT
    • logging into Security Onion Console (SOC)
    • navigating to Grid and importing a pcap or evtx file
    • reviewing data via Alerts, Dashboards, Hunt, and PCAP
Once you’re comfortable with your IMPORT installation, then you can move on to more advanced installations as
shown in the Architecture section.
CHAPTER 5: Getting Started
If you’re ready to get started with Security Onion, you may have questions like:
What are the recommended best practices?
See the Best Practices section.
How many machines do I need?
Depending on what you’re trying to do, you may need anywhere from one machine to thousands of machines. The
Architecture section will help you decide.
What kind of hardware does each of those machines need?
This could be anything from a small virtual machine to a large rack mount server with lots of CPU cores, lots of RAM,
and lots of storage. The Hardware Requirements section provides further details.
Which ISO image should I download?
We recommend our Security Onion ISO image for most use cases, but you should review the Operating System,
Partitioning, Release Notes, and Download sections for more information.
If I just want to try Security Onion in a virtual machine, how do I create a virtual machine?
See the VMware, VirtualBox, and Proxmox sections.
How do I deploy Security Onion in the cloud?
See the Amazon Cloud Image, Azure Cloud Image, and Google Cloud Image sections.
What if I have trouble booting the ISO image?
Check out the Booting Issues section.
What if I’m on an airgap network?
Review the Airgap section.
Once I’ve booted the ISO image, how do I install it?
The Installation section has steps for our Security Onion ISO image and for other ISO images.
After installation, how do I configure Security Onion?
See the Configuration section.
5.1 Best Practices
Security Onion provides lots of options and flexibility, but for best results we recommend the following best practices.
5.1.1 Installation
     • Download our Security Onion ISO image for the quickest and easiest installation experience (see the Download
       section).
     • For production deployments, prefer dedicated hardware to VMs when possible (see the Hardware Requirements
       section).
     • If VMs must be used, ensure that resources are properly dedicated to VMs to avoid resource contention.
     • Use local storage and avoid NFS, NAS, iSCSI, etc.
     • Adequately spec your hardware to meet your current usage and allow for growth over time.
     • Prefer taps to span ports when possible.
     • Make sure that any network firewalls have the proper firewall rules in place to allow ongoing operation and
       updates (see the Firewall section).
5.1.2 Configuration
     • Make sure that both hostname and IP address are correct during installation.
     • Avoid changing hostname and IP address after installation.
     • Linux is case sensitive where other operating systems might not be, so we recommend using lowercase for things
       like hostnames, usernames, etc.
     • Security Onion is a free and open platform based on standard Linux distros, but we recommend treating it as an
       appliance and avoid installing third party software as this may conflict with our components and cause issues
       when updating.
     • Avoid installing automation tools such as Puppet and Chef as these may conflict with our existing Salt automa-
       tion.
     • Avoid installing monitoring tools such as Zabbix as this may conflict with our existing InfluxDB monitoring.
     • Avoid installing third-party endpoint security agents as they may break functionality or introduce unacceptable
       performance overhead.
     • Avoid changing file permissions or umask settings.
     • Hardening guidelines may break functionality, so if you must apply them, we recommend testing thoroughly
       before deploying to production.
    • Join our discussion forum at https://securityonion.net/discuss or subscribe to one of our social media channels
      to be notified of Security Onion updates.
    • Keep your deployment updated as we frequently fix bugs and add new features.
    • If possible, test updates on a test deployment before deploying to production.
5.2 Architecture
If you’re going to deploy Security Onion, you should first decide on what type of deployment you want. This could
be anything from a temporary Import installation in a small virtual machine on your personal laptop all the way to a
large scalable enterprise deployment consisting of a manager node, multiple search nodes, and lots of forward nodes.
This section will discuss what those different deployment types look like from an architecture perspective.
5.2.1 Import
The simplest architecture is an Import node. An import node is a single standalone box that runs just enough
components to be able to import pcap or evtx files using the Grid page. It does not support adding Elastic agents or
additional Security Onion nodes.
5.2.2 Evaluation
The next architecture is Evaluation. It’s a little more complicated than Import because it has a network interface
dedicated to sniffing live traffic from a TAP or span port. Processes monitor the traffic on that sniffing interface and
generate logs. Elastic Agent collects those logs and sends them directly to Elasticsearch where they are parsed and
indexed. Evaluation mode is designed for a quick installation to temporarily test out Security Onion. It is not designed
for production usage at all and it does not support adding Elastic agents or additional Security Onion nodes.
5.2.3 Standalone
Standalone is similar to Evaluation in that all components run on one box. However, instead of Elastic Agent
sending logs directly to Elasticsearch, it sends them to Logstash, which sends them to Redis for queuing. A second
Logstash pipeline pulls the logs out of Redis and sends them to Elasticsearch, where they are parsed and indexed.
This type of deployment is typically used for testing, labs, POCs, or very low-throughput environments. It’s not as
scalable as a distributed deployment.
5.2.4 Distributed
A standard distributed deployment includes a manager node, one or more forward nodes running network sensor
components, and one or more search nodes running Elastic search components. This architecture may cost more
upfront, but it provides for greater scalability and performance, as you can simply add more nodes to handle more
traffic or log sources.
     • Recommended deployment type
     • Consists of a manager node, one or more forward nodes, and one or more search nodes
Note: If you install a dedicated manager node, you must also deploy one or more search nodes. Otherwise, all
logs will queue on the manager and have no place to be stored. If you are limited on the number of nodes you can
deploy, you can install a manager search node so that your manager node can act as a search node and store those
logs. However, please keep in mind that overall performance and scalability of a manager search node will be lower
compared to our recommended architecture of dedicated manager node and separate search nodes.
Management
The manager node runs Security Onion Console (SOC) and Kibana. It has its own local instance of Elasticsearch,
but that’s mainly used for storing Cases data and central configuration. An analyst connects to the manager node from
a client workstation (perhaps Security Onion Desktop) to execute queries and retrieve data. Please keep in mind that a
dedicated manager node requires separate search nodes.
The manager node runs the following components:
     • Security Onion Console (SOC)
     • Elasticsearch
     • Logstash
     • Kibana
     • Curator
     • ElastAlert
     • Redis
Search Node
Search nodes pull logs from the Redis queue on the manager node and then parse and index those logs. When a user
queries the manager node, the manager node then queries the search nodes, and they return search results.
Search Nodes run the following components:
     • Elasticsearch
     • Logstash
     • Curator
Manager Search
A manager search node is both a manager node and a search node at the same time. Since it is parsing, indexing,
and searching data, it has higher hardware requirements than a normal manager node.
A manager search node runs the following components:
     • Security Onion Console (SOC)
     • Elasticsearch
     • Logstash
     • Kibana
     • Curator
     • ElastAlert
     • Redis
Forward Node
A forward node forwards alerts and logs from Suricata and Zeek via Elastic Agent to Logstash on the manager
node, where they are stored in Elasticsearch on the manager node or a search node (if the manager node has been
configured to use a search node). Full packet capture recorded by Stenographer remains on the forward node itself.
Forward nodes run the following components:
    • Zeek
    • Suricata
    • Stenographer
Elastic Fleet Standalone Node
An Elastic Fleet Standalone Node is ideal when there are a large number of Elastic endpoints deployed. It reduces
the amount of overhead on the Manager node by transferring the workload associated with managing endpoints to
a dedicated system. It is also useful for off-network Elastic Agent endpoints that do not have remote access to the
Manager node, as it can be deployed to the DMZ with TCP/8220 (Elastic Agent management network traffic) and
TCP/5055 (Elastic Agent log shipping) made accessible to your off-network endpoints.
Receiver Node
The Receiver Node runs Logstash and Redis and allows for events to continue to be processed by search nodes in
the event the manager node is offline. When a receiver node joins the grid, Elastic Agent on all nodes adds this new
address as a load balanced Logstash output. The search nodes add this new node as another Logstash input. Receiver
nodes are “active-active” and you can add as many as you want (within reason) and events will be balanced among
them.
Intrusion Detection Honeypot (IDH) Node
The Intrusion Detection Honeypot node mimics common services such as HTTP, FTP, and SSH. Any interaction with
these fake services will automatically result in an alert.
Heavy Node
There is also an option to have a manager node and one or more heavy nodes.
 Warning: Heavy nodes are NOT recommended for most users due to performance reasons, and should only be
 used for testing purposes or in low-throughput environments.
Note: Heavy nodes do not consume from the Redis queue on the manager. This means that if you just have a
manager and heavy nodes, then the Redis queue on the manager will grow and never be drained. To avoid this, you
have two options. If you are starting a new deployment, you can make your manager a manager search so that
it will drain its own Redis queue. Alternatively, if you have an existing deployment with a manager and want to
avoid rebuilding, then you can add a separate search node (NOT heavy node) to consume from the Redis queue on the
manager.
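If you want to spot-check that queue yourself, a sketch like the following may help. It assumes Redis is reachable from where you run it and that the pipeline uses the common logstash list key; verify both against your deployment, and note that Security Onion's own InfluxDB monitoring already tracks queue metrics.

    # Hypothetical spot check of the Logstash queue depth in Redis.
    # Host, port, and the "logstash" key name are assumptions that
    # depend on your deployment and pipeline configuration.
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    depth = r.llen("logstash")  # events waiting to be consumed
    print(f"unconsumed events in Redis queue: {depth}")

A queue that only ever grows is the symptom described above: events are arriving faster than anything is consuming them.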
Heavy nodes perform sensor duties and store their own logs in their own local Elasticsearch instance. This results in
higher hardware requirements and lower performance. Heavy nodes do NOT pull logs from the Redis queue on the
manager like search nodes do.
Heavy Nodes run the following components:
    • Elasticsearch
    • Logstash
    • Curator
    • Zeek
    • Suricata
    • Stenographer
5.3 Hardware Requirements
The Architecture section should have helped you determine how many machines you will need for your deployment.
This section will help you determine what kind of hardware specs each of those machines will need.
Security Onion only supports x86-64 architecture (standard Intel or AMD 64-bit processors).
If you just want to import a pcap or evtx file using the Grid page, then you can configure Security Onion as an Import
Node with the following minimum specs:
    • 4GB RAM
    • 2 CPU cores
    • 200GB storage
For all other configurations, the minimum specs for running Security Onion are:
    • 12GB RAM
    • 4 CPU cores
    • 200GB storage
Note: These minimum specs are for EVAL mode with minimal services running. These requirements may increase
drastically as you enable more services, monitor more traffic, and consume more logs. For more information, please
see the detailed sections below.
For best results, we recommend purchasing new hardware that meets the hardware requirements detailed below.
Tip: If you’re planning to purchase new hardware, please consider official Security Onion appliances from Security
Onion Solutions (https://securityonionsolutions.com). Our custom appliances have already been designed for certain
roles and traffic levels and have Security Onion pre-installed. Purchasing from Security Onion Solutions will save you
time and effort and help to support development of Security Onion as a free and open platform!
5.3.4 Storage
We only support local storage. Remote storage like SAN/iSCSI/FibreChannel/NFS increases complexity and points
of failure, and has serious performance implications. You may be able to make remote storage work, but we do not
provide any support for it. By using local storage, you keep everything self-contained and you don’t have to worry
about competing for resources. Local storage is usually the most cost efficient solution as well.
5.3.5 NIC
You’ll need at least one wired network interface dedicated to management (preferably connected to a dedicated man-
agement network). We recommend using static IP addresses where possible.
If you plan to sniff network traffic from a tap or span port, then you will need one or more interfaces dedicated to
sniffing (no IP address). The installer will automatically disable NIC offloading functions such as tso, gso, and gro
on sniffing interfaces to ensure that Suricata and Zeek get an accurate view of the traffic.
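If you want to verify those offload settings on a sniffing interface yourself, something like this sketch may help; it assumes ethtool is installed and that eth1 is your sniffing interface, so adjust both for your system.

    # Hypothetical verification that offloads are disabled on a
    # sniffing NIC. The installer normally handles this automatically;
    # "eth1" is an assumed interface name.
    import subprocess

    out = subprocess.run(
        ["ethtool", "-k", "eth1"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if any(f in line for f in ("tcp-segmentation-offload",
                                   "generic-segmentation-offload",
                                   "generic-receive-offload")):
            print(line.strip())  # each should report "off"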
Make sure you get good quality network cards, especially for sniffing. Most users report good experiences with Intel
cards.
Security Onion is designed to use wired interfaces. You may be able to make wireless interfaces work, but we don’t
recommend or support it.
5.3.6 UPS
Like most IT systems, Security Onion has databases and those databases don’t like power outages or other ungraceful
shutdowns. To avoid power outages and having to manually repair databases, please consider a UPS.
In a standalone deployment, the manager components and the sensor components all run on a single box, therefore,
your hardware requirements will reflect that. You’ll need at minimum 24GB RAM, 4 CPU cores, and 200GB storage.
At the bare minimum of 24GB RAM, you would most likely need swap space to avoid issues.
This deployment type is recommended for evaluation purposes, POCs (proof-of-concept) and small to medium size
single sensor deployments. Although you can deploy Security Onion in this manner, it is recommended that you
separate the backend components and sensor components.
    • CPU: Used to parse incoming events, index incoming events, search metadata, capture PCAP, analyze packets,
      and run the frontend components. As data and event consumption increases, a greater amount of CPU will be
      required.
    • RAM: Used for Logstash, Elasticsearch, disk cache for Lucene, Suricata, Zeek, etc. The amount of available
      RAM will directly impact search speeds and reliability, as well as ability to process and capture traffic.
    • Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It
      is typically recommended to retain no more than 30 days of hot Elasticsearch indices (an illustrative sketch
      follows this list).
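For illustration only, here is a sketch of what a 30-day retention policy looks like in raw Elasticsearch ILM terms. In Security Onion you should manage retention through the product's own configuration (see the Administration section) rather than calling Elasticsearch directly; the URL and credentials below are placeholders.

    # Illustration of a 30-day ILM delete policy; this is NOT how
    # Security Onion itself manages retention. URL and credentials
    # are placeholders, and verify=False is for lab use only.
    import requests

    policy = {
        "policy": {
            "phases": {
                "hot": {"actions": {"rollover": {"max_age": "1d"}}},
                "delete": {"min_age": "30d", "actions": {"delete": {}}},
            }
        }
    }
    resp = requests.put(
        "https://localhost:9200/_ilm/policy/example-30d",
        json=policy,
        auth=("example-user", "example-pass"),
        verify=False,
    )
    resp.raise_for_status()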
Please refer to the Architecture section for detailed deployment scenarios.
In an enterprise distributed deployment, a manager node will store logs from itself and forward nodes. It can also act
as a syslog destination for other log sources to be indexed into Elasticsearch. An enterprise manager node should have
8 CPU cores at a minimum, 16-128GB RAM, and enough disk space (multiple terabytes recommended) to meet your
retention requirements.
    • CPU: Used to parse incoming events, index incoming events, and search metadata. As consumption of data and
      events increases, more CPU will be required.
    • RAM: Used for Logstash, Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly
      impact search speeds and reliability.
    • Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It
      is typically recommended to retain no more than 30 days of hot Elasticsearch indices.
Please refer to the Architecture section for detailed deployment scenarios.
This deployment type utilizes search nodes to parse and index events. As a result, the hardware requirements of the
manager node are reduced. An enterprise manager node should have at least 4-8 CPU cores, 16GB RAM, and 200GB
to 1TB of disk space. Many folks choose to host their manager node in their VM farm since it has lower hardware
requirements than sensors but needs higher reliability and availability.
    • CPU: Used to receive incoming events and place them into Redis. Used to run all the front end web components
      and aggregate search results from the search nodes.
    • RAM: Used for Logstash and Redis. The amount of available RAM directly impacts the size of the Redis queue.
    • Disk: Used for general OS purposes and storing Kibana dashboards.
Please refer to the Architecture section for detailed deployment scenarios.
Search nodes increase search and retention capacity with regard to Elasticsearch. These nodes parse and index events,
and provide the ability to scale horizontally as overall data intake increases. Search nodes should have at least 4-8
CPU cores, 16-64GB RAM, and 200GB of disk space or more depending on your logging requirements.
    • CPU: Used to parse incoming events and index incoming events. As consumption of data and events increases,
      more CPU will be required.
     • RAM: Used for Logstash, Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly
       impact search speeds and reliability.
     • Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It
       is typically recommended to retain no more than 30 days of hot Elasticsearch indices.
Please refer to the Architecture section for detailed deployment scenarios.
A forward node runs sensor components only, and forwards metadata to the manager node. All PCAP stays local to
the sensor, and is accessed through use of an agent.
     • CPU: Used for analyzing and storing network traffic. As monitored bandwidth increases, a greater amount of
       CPU will be required. See below.
     • RAM: Used for write cache and processing traffic.
     • Disk: Used for storage of PCAP and metadata. A larger amount of storage allows for a longer retention period.
Please refer to the Architecture section for detailed deployment scenarios.
A heavy node runs all the sensor components AND Elastic components locally. This dramatically increases the
hardware requirements. In this case, all indexed metadata and PCAP are retained locally. When a search is performed
through Kibana, the manager node queries this node’s Elasticsearch instance.
     • CPU: Used to parse incoming events, index incoming events, and search metadata. As monitored bandwidth
       (and the amount of overall data/events) increases, a greater amount of CPU will be required.
     • RAM: Used for Logstash, Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly
       impact search speeds and reliability.
     • Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It
       is typically recommended to retain no more than 30 days of hot Elasticsearch indices.
Please refer to the Architecture section for detailed deployment scenarios.
Since receiver nodes only run Logstash and Redis, they don’t require much CPU or disk space. However, more RAM
means you can set a larger queue size for Redis.
For an Intrusion Detection Honeypot node, the overall system requirements are low: 1GB RAM, 2 CPU cores, 1 NIC,
and 100GB disk space.
The following hardware considerations apply to sensors. If you are using a heavy node or standalone deployment type,
please note that it will dramatically increase CPU/RAM/Storage requirements.
Virtualization
We recommend dedicated physical hardware (especially if you’re monitoring lots of traffic) to avoid competing for
resources. Sensors can be virtualized, but you’ll have to ensure that they are allocated sufficient resources.
CPU
Suricata and Zeek are very CPU intensive. The more traffic you are monitoring, the more CPU cores you’ll need. A
very rough ballpark estimate would be 200Mbps per Suricata worker or Zeek worker. So if you have a fully saturated
1Gbps link and are running Suricata for NIDS alerts and Zeek for metadata, then you’ll want at least 5 Suricata
workers and 5 Zeek workers. This means you’ll need at least 10 CPU cores for Suricata and Zeek with additional
CPU cores for Stenographer and/or other services. If you are monitoring a high amount of traffic and/or have a small
number of CPU cores, you might consider using Suricata for both alerts and metadata. This eliminates the need for
Zeek and allows for more efficient CPU usage.
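The sketch below simply restates that rule of thumb in code so you can plug in your own link speed; it is a rough estimate, not a sizing guarantee.

    # Back-of-the-envelope worker count from the ~200 Mbps per worker
    # rule of thumb above. Real needs vary with traffic mix and rulesets.
    import math

    def workers_needed(link_mbps: float, mbps_per_worker: float = 200.0) -> int:
        return math.ceil(link_mbps / mbps_per_worker)

    link = 1000  # fully saturated 1 Gbps link
    suricata = workers_needed(link)
    zeek = workers_needed(link)
    print(f"~{suricata} Suricata + {zeek} Zeek workers = "
          f"{suricata + zeek} cores before Stenographer and other services")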
RAM
Storage
Sensors that have full packet capture enabled need LOTS of storage. For example, suppose you are monitoring a link
that averages 50Mbps, here are some quick calculations: 50Mb/s = 6.25 MB/s = 375 MB/minute = 22,500 MB/hour
= 540,000 MB/day. So you’re going to need about 540GB for one day’s worth of pcaps (multiply this by the number
of days of pcap you want to keep). The more disk space you have, the more PCAP retention you’ll have for doing
investigations after the fact. Disk is cheap, get all you can!
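The same arithmetic generalizes to any link rate and retention period; this sketch reproduces the example above.

    # Estimate full packet capture storage from average link rate.
    # Mirrors the 50 Mbps example in the text (about 540 GB/day).
    def pcap_gb_per_day(avg_mbps: float) -> float:
        megabytes_per_sec = avg_mbps / 8           # Mb/s -> MB/s
        return megabytes_per_sec * 86_400 / 1_000  # MB/day -> GB/day

    rate = 50  # Mbps average
    days = 7   # desired pcap retention
    print(f"{pcap_gb_per_day(rate):.0f} GB/day, "
          f"{pcap_gb_per_day(rate) * days:.0f} GB for {days} days")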
Packets
You’ll need some way of getting packets into your sensor interface(s). If you’re just evaluating Security Onion, you
can replay PCAPs for Testing. For a production deployment, you’ll need a SPAN/monitor port on an existing switch
or a dedicated TAP. We recommend dedicated TAPs where possible. If collecting traffic near a NAT boundary, make
sure you collect from inside the NAT boundary so that you see the true internal IP addresses.
Inexpensive tap/span options (listed alphabetically):
     • Dualcomm
     • Midbit SharkTap
     • Mikrotik
     • Netgear GS105Ev2
Enterprise Tap options (listed alphabetically):
     • APCON
     • Arista
     • cPacket
     • Garland
     • Gigamon
     • KeySight / Ixia / Net Optics
     • Profitap
Further Reading
Note: For large networks and/or deployments, please also see https://github.com/pevma/SEPTun.
For most use cases, we recommend using our Security Onion ISO image which is based on Oracle Linux 9. For more
information, please see https://blog.securityonion.net/2023/07/security-onion-24-base-os.html.
If you don’t want to use our Security Onion 2.4 ISO image, you can still perform a network installation of our Security
Onion components after manually installing one of the following:
     • Oracle Linux 9
     • Rocky Linux 9
     • Alma Linux 9
     • CentOS Stream 9
     • RHEL 9
     • Ubuntu 22.04
     • Debian 12
5.4.1 Support
Supported
Our Security Onion 2.4 ISO image (based on Oracle Linux 9) is the only fully supported installation method. Choose
this option if any of the following apply to you:
    • You are deploying in an enterprise environment.
    • You are deploying in an airgap environment.
    • You are performing a distributed deployment.
    • You want the quickest and easiest installation with the fewest issues.
    • You need full support.
Unsupported
If you don’t want to use our Security Onion 2.4 ISO image and choose to perform a manual OS installation followed
by a network installation of our Security Onion components, then we recommend using Oracle Linux 9 or Rocky
Linux 9. CentOS Stream 9 or Alma Linux 9 should also work. Another option might be RHEL 9 itself although that
is a paid option.
If you really want to run Ubuntu 22.04 or Debian 12, then please note that these distros may work but they get less
testing and therefore you will be more likely to run into issues.
If you choose Ubuntu 22.04, we recommend the Ubuntu 22.04 Server ISO image and selecting the Ubuntu Server
installation option as there are known issues when choosing the Ubuntu Server (minimized) option.
5.5 Partitioning
Now that you understand Hardware Requirements and Operating System options, we should next discuss disk partitioning.
If you're installing Security Onion for a production deployment, you'll want to pay close attention to
partitioning to make sure you don't fill up a partition at some point.
As the Hardware Requirements section mentions, the MINIMUM requirement is 200GB storage. This is to allow
100GB for /nsm and 100GB for the rest of /.
5.5.2 ISO
If you use our Security Onion ISO image, it will automatically partition your disk for you. If you instead use another
ISO image, you will most likely need to manually modify its default partition layout.
5.5.3 LVM
You may want to consider Logical Volume Management (LVM) as it will allow you to more easily change your
partitioning in the future if you need to. Our Security Onion ISO image uses LVM by default.
5.5.4 /boot
You probably want a dedicated /boot partition of at least 512MB at the beginning of the drive.
5.5.5 /nsm
The vast majority of data will be written to /nsm, so you’ll want to dedicate the vast majority of your disk space to
that partition. You’ll want at least 100GB.
5.5.6 /
/ (the root partition) currently contains /var/lib/docker/ (more on that below) and thus you’ll want at least
100GB.
5.5.7 Docker
Docker images are currently written to /var/lib/docker/. The current set of Docker images uses 30GB on disk.
If you’re planning a production deployment, you should plan on having enough space for another set of those Docker
images for in-place updates.
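If you want to check how much of that space is currently in use, Docker itself can report it; for example, using the standard Docker CLI (run with root privileges as on a typical Security Onion node):
sudo docker system df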
5.5.8 Other
If you install an ISO image other than our Security Onion ISO, then the installer may try to dedicate a large amount of
space to /home. You may need to shrink this partition to avoid wasting valuable disk space.
5.5.9 Example
Here’s an example of how our current Security Onion ISO image partitions a 1TB disk:
     • 512MB /boot partition at the beginning of the drive
     • the remainder of the drive is an LVM volume that is then partitioned as follows:
          – 630GB /nsm
          – 300GB /
          – 2GB /tmp
          – 8GB swap
5.6 Download
Before downloading, we highly recommend that you review the Release Notes section so that you are aware of all
recent changes!
We recommend that you download our Security Onion ISO image but see the Operating System page for other options.
Tip: For most use cases, we recommend using our Security Onion ISO image as it’s the quickest and easiest method.
 Warning: ALWAYS verify the checksum of ANY downloaded ISO image! Regardless of whether you’re
 downloading our Security Onion ISO image or any other ISO image, you should ALWAYS verify the downloaded
 ISO image to ensure it hasn’t been tampered with or corrupted during download. If it fails to verify, try download-
 ing again. If it still fails to verify, try downloading from another computer or another network.
    • If downloading our Security Onion ISO image, you can find the download link and verification instructions at
      https://github.com/Security-Onion-Solutions/securityonion/blob/2.4/main/DOWNLOAD_AND_VERIFY_ISO.md.
    • If downloading any other ISO image, please verify that ISO image using whatever instructions they provide.
 Warning: If you download our ISO image and then scan it with antivirus software, it is possible that one or more
 of the files included in the ISO image may generate false positives. If you look at the antivirus scan details, it
 will most likely tell you that it alerted on a file in SecurityOnion\agrules\. This is part of Strelka and it
 is being incorrectly flagged as a backdoor when it is really just a Yara ruleset that looks for backdoors. In some
 cases, the alert may be for a sample EXE that is included in Strelka, but again, this is a false positive.
Note: If you’re going to create a bootable USB from one of the ISO images above, there are many ways to do
that. One popular choice that seems to work well for many folks is Balena Etcher which can be downloaded at
https://www.balena.io/etcher/.
5.7 VMware
5.7.1 Overview
In this section, we’ll cover creating a virtual machine (VM) for our Security Onion ISO image in VMware Workstation
Pro and VMware Fusion. These steps should be fairly similar for most VMware installations. If you don’t already
have VMware, you can download VMware Workstation Player from https://www.vmware.com/products/player/playerpro-evaluation.html.
Note: With the sniffing interface in bridged mode, you will be able to see all traffic to and from the host machine’s
physical NIC. If you would like to see ALL the traffic on your network, you will need a method of forwarding that
traffic to the interface to which the virtual adapter is bridged. This can be achieved with a tap or SPAN port.
5.7.2 Workstation
VMware Workstation is available for many different host operating systems, including Windows and several popular
Linux distros. Follow the steps below to create a VM in VMware Workstation Pro for our Security Onion ISO image:
   1. From the VMware main window, select File >> New Virtual Machine.
   2. Select Typical installation >> Click Next.
   3. Installer disc image file >> SO ISO file path >> Click Next.
   4. Choose Linux, CentOS 64-Bit and click Next.
   5. Specify virtual machine name and click Next.
     6. Specify disk size (minimum 200GB), store as single file, click Next.
     7. Customize hardware and increase Memory and Processors based on the Hardware Requirements section.
     8. Network Adapter (NAT or Bridged – if you want to be able to access your Security Onion machine from other
        devices in the network, then choose Bridged, otherwise choose NAT to leave it behind the host) – in this tutorial,
        this will be the management interface.
     9. Add >> Network Adapter (Bridged) - this will be the sniffing (monitor) interface.
 10. Click Close.
 11. Click Finish.
 12. Power on the virtual machine and then follow the installation steps for your desired installation type in the
     Installation section.
5.7.3 Fusion
VMware Fusion is available for Mac OS. For more information about VMware Fusion, please see https://www.vmware.com/products/fusion.html.
Follow the steps below to create a VM in VMware Fusion for our Security Onion ISO image:
     1. From the VMware Fusion main window, click File and then click New.
     2. Select the Installation Method appears. Click Install from disc or image and click
        Continue.
     3. Create a New Virtual Machine appears. Click Use another disc or disc image..., se-
        lect our ISO image, click Open, then click Continue.
     4. Choose Operating System appears. Click Linux, click CentOS 64-bit, then click Continue.
     5. Choose Firmware Type appears. Click Legacy BIOS and then click Continue.
     6. Finish screen appears. Click the Customize Settings button.
     7. Save As screen appears. Give the VM a name and click the Save button.
     8. Settings window appears. Click Processors & Memory.
     9. Processors & Memory screen appears. Increase processors and memory based on the Hardware Require-
        ments section. Click the Add Device... button.
 10. Add Device screen appears. Click Network Adapter and click the Add... button.
 11. Network Adapter 2 screen appears. This will be the sniffing (monitor) interface. Select your desired
     network adapter configuration. Click the Show All button.
 12. Settings screen appears. Click Hard Disk (SCSI).
 13. Hard Disk (SCSI) screen appears. Increase the disk size to at least 200GB depending on your use case.
     Click the Apply button.
 14. Close the Settings window.
 15. At the window for your new VM, click the Play button to power on the virtual machine.
 16. Follow the installation steps for your desired installation type in the Installation section.
5.7.4 ESXi
If you’re using VMware ESXi, then you’re likely familiar with VM creation and installation and so we won’t detail
that here. There are a few things specific to ESXi that you might want to be aware of:
    • You may need to set your monitoring interface in the vSwitch to VLAN ID 4095 to allow all traffic through.
      You can read more about this at https://github.com/Security-Onion-Solutions/securityonion/discussions/7185.
    • If you’re trying to monitor multiple network interfaces, then you may need to enable the Allow MAC
      Changes option at both the vSwitch and Port Group levels. You can read more about this at
      https://github.com/Security-Onion-Solutions/securityonion/discussions/2676.
If using a graphical desktop, you may want to install open-vm-tools-desktop to enable more screen resolution
options and other features. For example, using our ISO image or standard Oracle Linux 9:
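# Assumes the stock dnf package manager and the open-vm-tools-desktop package name:
sudo dnf -y install open-vm-tools-desktop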
5.8 VirtualBox
In this section, we’ll cover installing Security Onion on VirtualBox. You can download a copy of VirtualBox for
Windows, Mac OS X, or Linux at https://www.virtualbox.org.
5.8.1 Creating VM
First, launch VirtualBox and click the “New” button. Provide a name for the virtual machine (“Security Onion” for
example) and specify the type (“Linux”) and version (Oracle Linux 9.x), then click “Continue.” We’ll next define how
much memory we want to make available to our virtual machine based on the Hardware Requirements section.
Next, we’ll create a virtual hard drive. Specify “Create a virtual hard drive now” then click “Create” to choose the
hard drive file type “VDI (VirtualBox Disk Image)” and “Continue.” For storage, we have the options of “Dynamically
allocated” or “Fixed size.” For a client virtual machine, “Dynamically allocated” is the best choice, as it will grow the
hard disk on an as-needed basis up to whatever maximum size we define; once full, Security Onion’s disk cleanup
routines will work to keep disk space available. If you happen to be running a dedicated sensor in a virtual machine,
we suggest using “Fixed size,” which allocates all of the disk space you define up front and gains you some disk
performance early on. Once you’ve settled on the storage allocation, click “Continue,” provide a name for your hard
disk image file, and specify the location where you want the disk file to be created if other than the default location.
For disk size, you’ll want at least 200GB so you have enough capacity for retrieving/testing packet captures and
downloading system updates. Click “Create” and your Security Onion VM will be created.
At this point, you can click “Settings” for your new virtual machine so we can get it configured. Mount the Security
Onion ISO file so our VM can boot from it to install Linux. Click the “Storage” icon, then under “Controller: IDE”
select the “Empty” CD icon. To the right, you’ll see “CD/DVD Drive” with “IDE Secondary” specified with another
CD icon. Click the icon, then select “Choose a virtual CD/DVD disk file” and browse to where you downloaded the
Security Onion ISO file, select it then choose “Open.” Next click “Network” then “Adapter 2.” You’ll need to click the
checkbox to enable it then attach it to “Internal Network.” Under the “Advanced” options, set “Promiscuous Mode” to
“Allow All.” Click “Ok” and we are ready to install the operating system.
Hit the “Start” button with your new virtual machine selected and after a few seconds the boot menu will load.
Follow the installation steps for your desired installation type in the Installation section.
Tip: You’ll notice two icons on the top right in VirtualBox Manager when you select your virtual machine: Details
and Snapshots. Click “Snapshots” then click the camera icon and give your snapshot a name and description. Once
we have a snapshot, we’ll be able to make changes to the system and revert those changes back to the state we are
preserving.
5.9 Proxmox
Proxmox Virtual Environment is a virtualization platform similar to VMware or VirtualBox. You can read more about
Proxmox VE at https://www.proxmox.com/en/proxmox-ve.
5.9.1 CPU
Proxmox defaults to a VM CPU which may not include all of the features of your host CPU. You may need to change
this to host to pass through the host CPU type.
5.9.2 Display
If you plan to use NetworkMiner or other Mono-based applications in a Proxmox VM, then you may need to set the
VM Display to VMware compatible (vmware).
5.9.3 NIC
If you’re going to install Security Onion in Proxmox and sniff live network traffic, you may need to do some additional
configuration in Proxmox itself.
The first option is to sniff traffic from a physical NIC that has been passed through to the VM. For more information
about Proxmox passthrough, please see:
https://www.servethehome.com/how-to-pass-through-pcie-nics-with-proxmox-ve-on-intel-and-amd/
https://pve.proxmox.com/wiki/PCI_Passthrough
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
Once the physical NIC is passed through to the Security Onion VM, then Security Onion should be able to correctly
configure the NIC for sniffing.
Virtual NIC
The second option is to sniff traffic from a Proxmox virtual NIC. For more details, please see the discussion at
https://github.com/Security-Onion-Solutions/securityonion/discussions/8245.
Keep in mind you may need to manually disable NIC offloading features on any Proxmox NIC used for sniffing (the
physical interface and any related bridge interface). One way to do this is to add a post-up command to each sniffing
interface in /etc/network/interfaces on the Proxmox host.
For example, if you have a Proxmox physical interface called enp2s0 with a bridge interface called vmbr1, then
you might log into Proxmox and edit /etc/network/interfaces by adding the following to the enp2s0 section:
post-up for i in rx tx sg tso ufo gso gro lro; do ethtool -K enp2s0 $i off; done
post-up for i in rx tx sg tso ufo gso gro lro; do ethtool -K vmbr1 $i off; done
5.10 Booting Issues
If you have trouble booting an ISO image, here are some troubleshooting steps:
    • Verify the downloaded ISO image using hashes or GPG key.
    • Verify that your machine is x86-64 architecture (standard Intel or AMD 64-bit).
    • If you’re trying to run a 64-bit virtual machine, verify that your 64-bit processor supports virtualization and that
      virtualization is enabled in the BIOS.
    • If you’re trying to create a bootable USB from an ISO image, try using Balena Etcher which can be downloaded
      at https://www.balena.io/etcher/.
    • Certain display adapters may require the nomodeset option passed to the kernel (see
      https://unix.stackexchange.com/questions/353896/linux-install-goes-to-blank-screen).
    • If you’re still having problems with our 64-bit ISO image, try downloading the standard x86-64 ISO image for
      Oracle Linux 9. If it doesn’t run, then you should double-check your 64-bit compatibility.
Tip: If all else fails but standard x86-64 Oracle Linux 9 installs normally, then you can always install our components
on top of it as described on the Installation page.
5.11 Airgap
Security Onion is committed to allowing users to run a full install on networks that do not have Internet access. You
will need to use our Security Onion ISO image as it includes everything you need to run without Internet access and
then you will need to choose the airgap option during Setup.
The Security Onion ISO image includes the Emerging Threats (ET) ruleset. When soup updates an airgap system via
ISO, it automatically installs the latest ET rules as well. If you would like to switch to a different ruleset like Emerging
Threats Pro (ETPRO), then you can manually copy the ETPRO rules to /nsm/repo/rules/emerging-all.rules
using a command like:
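# Hypothetical example; the source path of the downloaded ETPRO rules file is a placeholder:
sudo cp /path/to/etpro.rules /nsm/repo/rules/emerging-all.rules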
5.12 Installation
 Warning: Please make sure that your hostname is correct during installation. Setup generates certificates based
 on the hostname and we do not support changing the hostname after Setup.
Note: If you want to deploy in the cloud using one of our official cloud images, you can skip to the Amazon Cloud
Image, Azure Cloud Image, or Google Cloud Image sections.
Having downloaded your desired ISO according to the Download section, it’s now time to install! There are separate
sections below to walk you through installing using our Security Onion ISO image (based on Oracle Linux 9) or
manually installing from another ISO and then installing our components on top.
If you want to install Security Onion via another ISO image (not using our Security Onion ISO image), follow the
steps below.
     1. Review the Hardware Requirements and Release Notes sections.
     2. Download the ISO image for your desired x86-64 Operating System. Verify the ISO image and then boot from
        it.
     3. Follow the prompts in the installer. If you’re building a production deployment, you’ll probably want to use
        LVM and dedicate most of your disk space to /nsm as discussed in the Partitioning section.
     4. Reboot into your new installation.
     5. Login using the username and password you specified during installation.
     6. Install prerequisites. If you’re using a RHEL flavor like Oracle Linux 9:
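# A minimal sketch, assuming git is the only missing prerequisite and that
# the 2.4/main branch is the desired release branch:
sudo dnf -y install git
git clone -b 2.4/main https://github.com/Security-Onion-Solutions/securityonion
cd securityonion
sudo bash so-setup-network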
If you would like to deploy Security Onion in Amazon Web Services (AWS), we have an Amazon Machine Image
(AMI) that is already built for you: https://securityonion.net/aws/?ref=_ptnr_soc_docs_230525
 Warning: Existing 2.4 RC1 or newer Security Onion AMI installations should use the soup command to upgrade
 to newer versions of Security Onion. Attempting to switch to a newer AMI from the AWS Marketplace could
 cause loss of data and require full grid re-installation. Upgrading from Security Onion 2.3 or beta versions of 2.4
 is unsupported.
Note: This section does not cover network connectivity to the Security Onion node. This can be achieved
through configuring an external IP for the node’s management interface, or through the use of a VPN connection
via OpenVPN. For more details about VPN connections, please see
https://medium.com/@svfusion/setup-site-to-site-vpn-to-aws-with-pfsense-1cac16623bd6.
Note: This section does not cover how to set up a VPC in AWS. For more details about setting up a VPC, please
see https://docs.aws.amazon.com/directoryservice/latest/admin-guide/gsg_create_vpc.html. Ensure that all Security
Onion nodes can access the manager node over the necessary ports. This could require adding rules to your AWS
security groups in order to satisfy the Security Onion Firewall Node Communication requirements.
5.13.1 Requirements
Before proceeding, determine the desired grid architecture: a single-node grid or a distributed, multi-node grid.
Additionally, determine whether the lower latency of ephemeral instance storage is needed (typically when a high
volume of traffic is being monitored, which is most production scenarios), or whether network-based storage (EBS)
can be used for increased redundancy.
For simple, low-volume production monitoring, a single-node grid can be used. EBS must be used for Elasticsearch
data storage if the grid is used for production purposes, since single-node grids cannot use ephemeral instance storage
without risking data loss. However, for temporary evaluation installations, where there is little concern for data loss,
ephemeral instance storage can be used.
Listed below are the minimum suggested single-node instance quantities, sizes, and storage requirements for either
standalone or evaluation installations (choose one, not both). Note that when using virtual machines with the minimum
RAM requirements you may need to enable memory swapping.
Standalone:
    • Quantity: 1
    • Type: t3a.xlarge
    • Storage: 256GB EBS (Optimized) gp3
Evaluation:
    • Quantity: 1
    • Type: t3a.2xlarge
    • Storage: 256GB EBS (Optimized) gp3
    • Storage: 100GB Instance Storage (SSD/NVMe)
Distributed Grid
For high volume production monitoring, choose a multi-node grid architecture. At least two search nodes must be
used in this architecture. This is required due to the use of ephemeral instance storage for Elasticsearch data storage,
where each of the search nodes retains a replica of another search node, for disaster recovery.
Listed below are the minimum suggested distributed grid instance quantities, sizes, and storage requirements. Prefer
increasing VM memory over enabling swap memory, for best performance. High volume networks will need more
powerful VM types with more storage than those listed below.
VPN Node
    • Quantity: 1
    • Type: t3a.micro (Nitro eligible)
To set up the Security Onion AMI and VPC mirror configuration, use the steps below.
Security Groups act like a firewall for your Amazon EC2 instances, controlling both inbound and outbound traffic. You
will need to create a security group specifically for the interface that you will be using to sniff the traffic. This security
group will need to be as open as possible to ensure all traffic destined to the sniffing interface will be allowed through.
To create a security group, follow these steps:
     • From the EC2 Dashboard Select: Security Groups under the Network & Security sections in the left
       window pane.
     • Select: Create Security Group
     • Provide a Security Group Name and Description.
     • Select the appropriate VPC for the security group.
     • With the inbound tab selected, select: Add Rule
     • Add the appropriate inbound rules to ensure all desired traffic destined for the sniffing interface is allowed.
     • Press the Create security group button.
Prior to launching the Security Onion AMI you will need to create the interface that will be used to monitor your VPC.
This interface will be attached to the Security Onion AMI as a secondary interface. To create a sniffing interface,
follow these steps:
     • From the EC2 Dashboard Select: Network Interfaces under the Network & Security section in the left
       window pane.
Instance Creation
To configure a Security Onion instance (repeat for each node in a distributed grid), follow these steps:
    • From the EC2 dashboard select: Launch Instance
    • Search the AWS Marketplace for Security Onion and make sure you get the latest version of the Security
      Onion official AMI.
    • Choose the appropriate instance type based on the desired hardware requirements and select Next:
      Configure Instance Details. For assistance on determining resource requirements please review the
      AWS Requirements section above.
    • From the subnet menu select the same subnet as the sniffing interface.
    • Under the Network interfaces section configure the eth0 (management) interface.
    • (Distributed “Sensor” node or Single-Node grid only) Under the Network interfaces section select: Add
      Device to attach the previously created sniffing interface to the instance.
    • (Distributed “Sensor” node or Single-Node grid only) From the Network Interface menu for eth1 choose the
      sniffing interface you created for this instance. Please note if you have multiple interfaces listed you can verify
      the correct interface by navigating to the Network Interfaces section in the EC2 Dashboard.
    • Select: Next:     Add Storage and configure the volume settings.
    • Select: Next:     Add Tags and add any additional tags for the instance.
    • Select: Next:     Configure Security Group and add the appropriate inbound rules.
    • Select: Review and Launch
    • If prompted, select the appropriate SSH keypair that will be used to SSH into the Security Onion instance for
      administration.
    • The default username for the Security Onion AMI is: onion
For distributed search nodes, or an evaluation node if using ephemeral storage, SSH into the node and cancel out of
the setup. Prepare the ephemeral partition by executing the following command:
sudo so-prepare-fs
By default, this command expects the ephemeral device to be located at /dev/nvme1n1 and will mount that device
at /nsm/elasticsearch. If this fails, run lsblk to determine which disk to use. To override either of those two
defaults, specify them as arguments. For example:
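# Hypothetical invocation; the device path and mount point shown here are
# placeholders, not authoritative defaults:
sudo so-prepare-fs /dev/nvme0n1 /nsm/elasticsearch
Once the filesystem has been prepared, change into the Security Onion directory and start Setup: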
cd /securityonion
sudo ./so-setup-network
If this is an ephemeral evaluation node, ensure the node has been prepared as described in the preceding section.
After SSH’ing into the node, setup will begin automatically. Follow the prompts, selecting the appropriate install
options. Most distributed installations will use the hostname or other web access method, due to the need for
both cluster nodes inside the private network, and analyst users across the public Internet to reach the manager. This
allows for custom DNS entries to define the correct IP (private vs public) depending on whether it’s a cluster node or
an analyst user. Users evaluating Security Onion for the first time should consider choosing the other option and
specifying the node’s public cloud IP.
AWS provides a built-in NTP server at IP 169.254.169.123. This can be specified in the SOC Configuration
screen after setup completes. By default the server will use the time servers at ntp.org.
For distributed manager nodes using ephemeral storage, go to SOC Configuration. Search for
number_of_replicas and change it to 1. This will double the storage cost but will ensure at least two
VMs have the data, in case of an ephemeral disk loss.
Optionally, adjust ElastAlert indices so that they have a replica. This will cause them to turn yellow but that will be
fixed when search nodes come online:
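# A sketch assuming the so-elasticsearch-query helper passes additional
# arguments through to curl; the index pattern and flags are illustrative only:
sudo so-elasticsearch-query '*elastalert*/_settings' -XPUT -d '{"index":{"number_of_replicas":1}}'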
This is an optional step, since the ElastAlert indices are used primarily for short-term/recent alert history. In the
event of data loss, the indices will be regenerated when ElastAlert 2 restarts.
Follow standard Security Onion search node installation, answering the setup prompts as applicable. If you are using
ephemeral storage be sure to first prepare the instance as directed earlier in this section.
SSH into the sensor node and run through setup to set this node up as a sensor. Choose eth0 as the main interface
and eth1 as the monitoring interface.
Setup the VPN (out of scope for this guide) and connect the sensor node to the VPN. During the Security Onion setup
of the Sensor, when prompted to choose the management interface, select the VPN tunnel interface, typically tun0.
If connecting sensors through the VPN instance you will need to add the inside interface of your VPN concentrator to
the sensor firewall hostgroup. For instance, assuming the following architecture:
SO Sensor        -> VPN Endpoint     -> Internet -> VPN Endpoint -> SO Manager
Location: Remote    Location: Remote                Location: AWS   Location: AWS
192.168.33.13       192.168.33.10                   10.55.1.10      10.55.1.20
In order to add the Remote Network Forward Node to the Grid, you would have to add 10.55.1.10 to the sensor
firewall hostgroup.
This change can be done in the SOC Configuration screen. Then, either wait up to 15 minutes for the scheduled
configuration sync to run, or force a synchronization immediately via the SOC Configuration Options. Once the firewall
hostgroup configuration has been synchronized your Manager will be ready for remote minions to start connecting.
Traffic mirroring allows you to copy the traffic to/from an instance and send it to the sniffing interface of a network
security monitoring sensor or a group of interfaces using a network load balancer. For more details about AWS Traffic
Mirroring please see: https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html
Tip: You can only mirror traffic from an EC2 instance that is powered by the AWS Nitro system. For a list
of supported Nitro systems, please see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances.
A mirror target in AWS refers to the destination for the mirrored traffic. This can be a single interface or a group of
interfaces using a network load balancer. To configure a mirror target, follow these steps:
    • From the VPC dashboard select: Mirror Targets under the Traffic Mirroring section in the left window
      pane.
    • Select: Create traffic mirror target
    • Under the Choose target section select the appropriate target type and choose the sniffing interface connected to
      the Security Onion instance. For more details about traffic mirror targets please see:
      https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-targets.html
    • Select: Create
A mirror filter allows you to define the traffic that is copied in the mirrored session and is useful for tuning out noisy
or unwanted traffic. To configure a mirror filter, follow these steps:
    • From the VPC dashboard select: Mirror Filters under the Traffic Mirroring section in the left window
      pane.
    • Select: Create traffic mirror filter
    • Add the appropriate inbound and outbound rules. For more details about traffic mirror filters please see:
      https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-filters.html
    • Select: Create
A traffic mirror session defines the source of the traffic to be mirrored based on the selected traffic mirror filters
and sends that traffic to the desired traffic mirror target. For more details about traffic mirror sessions please see:
https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-session.html
     • From the VPC dashboard select: Mirror Sessions under the Traffic Mirroring section in the left window
       pane.
     • Select: Create traffic mirror session
     • Under the Mirror source section, choose the interface that you want to be mirrored.
     • Under the Mirror target section, choose the interface or load balancer you want to send the mirrored traffic to.
     • Assign a session number under the Additional settings section for the mirror session.
     • In the filters section under Additional settings choose the mirror filter you want to apply to the mirrored traffic.
     • Select: Create
To verify the mirror session is sending the correct data to the sniffing interface run the following command on the
Security Onion AWS Sensor instance:
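# Assumes eth1 is the sniffing interface attached earlier:
sudo tcpdump -nni eth1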
You should see VXLAN tagged traffic being mirrored from the interface you selected as the Mirror Source.
To verify Zeek is properly decapsulating and parsing the VXLAN traffic you can verify logs are being generated in the
/nsm/zeek/logs/current directory:
ls -la /nsm/zeek/logs/current/
Azure users can deploy an official Security Onion virtual machine image found on the Azure Marketplace:
https://securityonion.net/azure
 Warning: Existing 2.4 RC1 or newer Security Onion Azure Image installations should use the soup command to
 upgrade to newer versions of Security Onion. Attempting to switch to a newer image from the Azure Marketplace
 could cause loss of data and require full grid re-installation. Upgrading from Security Onion 2.3 or beta versions
 of 2.4 is unsupported.
Note: Azure has put on hold their Virtual TAP preview feature, which means in order to install a Security Onion
sensor in the Azure cloud you will need to use a packet broker offering from the Azure Marketplace. See more
information here: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview
Note: This section does not cover network connectivity to the Security Onion node. This can be achieved through
configuring an external IP for the node’s management interface, or through the use of a VPN connection via OpenVPN.
Note: This section does not cover how to set up a virtual network in Azure. For more details about setting up a
virtual network, please see https://docs.microsoft.com/en-us/azure/virtual-network/. Ensure that all Security Onion
nodes can access the manager node over the necessary ports. This could require adding rules to your Azure Virtual
Network and/or VMs in order to satisfy the Security Onion Firewall Node Communication requirements.
5.14.1 Requirements
Before proceeding, determine the grid architecture desired. Choose from a single-node grid versus a distributed,
multi-node grid.
Security Onion recommends using either Premium SSD disks, or the more expensive Ultra SSD disks, with suitable
IOPS and throughput matched to your expected network monitoring requirements.
For simple, low-volume production monitoring, a single node grid can be used.
Listed below are the minimum suggested single-node instance quantities, sizes, and storage requirements for either
standalone or evaluation installations (choose one, not both). Note that when using virtual machines with the minimum
RAM requirements you may need to enable memory swapping.
Standalone:
    • Quantity: 1
    • Type: Standard_D4as_v4
    • Storage: 256GB Premium SSD
Evaluation:
    • Quantity: 1
    • Type: Standard_D8as_v4
    • Storage: 256GB Premium SSD
Distributed Grid
For high volume production monitoring, choose a multi-node grid architecture. At least two search nodes are
recommended for redundancy purposes.
Listed below are the minimum suggested distributed grid instance quantities, sizes, and storage requirements. Prefer
increasing VM memory over enabling swap memory, for best performance. High volume networks will need more
powerful VM types with more storage than those listed below.
VPN Node
    • Quantity: 1
    • Type: Option 1: Standard_B1s - Lower cost for use with low VPN traffic volume
    • Type: Option 2: Standard_D4as_v4 w/ accelerated networking - Higher cost for high VPN traffic volume
    • Storage: 64GB Premium SSD
Manager
    • Quantity: 1
    • Type: Standard_D4as_v4
    • Storage: 256GB Premium SSD
Search Nodes
     • Quantity: 2 or more
     • Type: Standard_D4as_v4
     • Storage: 256GB Premium SSD
Sensor monitoring the VPN ingress
     • Quantity: 1
     • Type: Standard_D4as_v4
     • Storage: 512GB Premium SSD
To set up a Security Onion sensor node in Azure, follow the prerequisite steps below prior to creating the sensor VM.
Security Groups act like a firewall for your Azure virtual machines, controlling both inbound and outbound traffic.
You should consider whether a security group is needed for your virtual network, and specifically for the interface
that you will be using to sniff the traffic. This security group will need to be as open as possible to ensure all traffic
destined to the sniffing interface will be allowed through. To create a security group, follow these steps:
     • In the Azure Dashboard search for: Network security groups.
     • Select: Create
     • Provide a name, such as so-monitoring-security-group.
     • Select the appropriate resource group and region.
     • Select Review + Create
     • Review the summary
     • Select: Create
     • Select: Go to resource
     • Adjust the Inbound security rules to ensure that all incoming monitoring traffic is allowed.
Prior to launching the Security Onion sensor virtual machine you will need to create the interface that will be used
to monitor your virtual network. This interface will be attached to the Security Onion sensor virtual machine as a
secondary interface. To create a sniffing interface, follow these steps:
     • In the Azure Dashboard search for: Network interfaces.
     • Select: Create
     • Provide a name, such as so-monitoring-interface.
     • Choose the resource group, region, virtual network, subnet, security group from the steps above, and IP settings.
     • Select: Review + Create
     • Review the summary
• Select: Create
Instance Creation
To configure a Security Onion instance (repeat for each node in a distributed grid), follow these steps:
    • In the Azure Dashboard search for: Virtual machines
    • Select: Create and then Virtual machine
    • Choose or create a new Resource group.
    • Enter a suitable name for this virtual machine, such as so-vm-manager.
    • Choose the desired Region and Availability options. (Use East US 2 for Ultra SSD support, if needed.)
    • Choose the Security Onion 2 VM Image. If this option is not listed on the Image dropdown, select See
      all images and search for onion.
    • Choose the appropriate Size based on the desired hardware requirements. For assistance on determining resource
      requirements please review the Requirements section above.
    • Change the Username to onion. Note that this is not mandatory – if you accidentally leave it as the default
      azureuser, that’s OK; you’ll simply use the azureuser username any place where the documentation states
      to use the onion username.
    • Select an existing SSH public key if one already exists, otherwise select the option to Generate new key
      pair.
    • Choose Other for Licensing type.
    • Select Next:      Disks
    • Ensure Premium SSD is selected.
    • For single-node grids, distributed sensor nodes, or distributed search nodes: If you would like to separate the
      /nsm partition into its own disk, create and attach a data disk for this purpose, with a minimum size of 100GB,
      or more depending on predicted storage needs. Note that the size of the /nsm partition determines the rate that
      old packet and event data is pruned. Separating the /nsm partition can provide more flexibility with scaling up
      the grid node sizes, but requires a little more setup, which is described later.
    • Select Next:      Networking
    • Choose the virtual network for this virtual machine.
    • Choose a public IP if you intend to access this virtual machine directly (not recommended for production grids).
    • Choose appropriate security group settings. Note that this is typically not the same security group used for the
      sensor monitoring interface.
    • Accelerated networking will be automatically enabled if the virtual machine size supports it.
    • Select: Review + create
    • Review the summary. If a Validation failed message appears, correct the missing inputs under each tab
      section containing a red dot to the right of the tab name.
    • Select: Create and download the new public key, if you chose to generate a new key.
    • If this VM is a single-node grid, or is a sensor node:
         – Stop the new VM after deployment completes.
         – Edit the VM and attach the monitoring network interface created earlier.
         – Start the VM.
Note that you’ll need to reference the downloaded SSH key when using SSH to access the new VMs. For example:
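# Hypothetical example; the key filename and address are placeholders:
ssh -i so-vm-manager_key.pem onion@<management-ip>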
After SSH’ing into the node, setup will begin automatically. Follow the prompts, selecting the appropriate install
options. Most distributed installations will use the hostname or other web access method, due to the need for
both cluster nodes inside the private network, and analyst users across the public Internet to reach the manager. This
allows for custom DNS entries to define the correct IP (private vs public) depending on whether it’s a cluster node or
an analyst user. Users evaluating Security Onion for the first time should consider choosing the other option and
specifying the node’s public cloud IP.
Follow standard Security Onion search node installation, answering the setup prompts as applicable.
Setup the VPN (out of scope for this guide) and connect the sensor node to the VPN. When prompted to choose the
management interface, select the VPN tunnel interface, such as tun0. Use the internal IP address of the manager
inside Azure when prompted for the manager IP.
SSH into the sensor node and run through setup to set this node up as a sensor. Choose eth0 as the main interface
and eth1 as the monitoring interface.
Note: Azure has put on hold their Virtual TAP preview feature, which means in order to install a Security Onion
sensor in the Azure cloud you will need to use a packet broker offering from the Azure Marketplace. See more
information here: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview
To verify the Azure sensor is receiving the correct data on the sniffing interface run the following command on the
Security Onion Azure sensor instance:
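# Assumes eth1 is the sniffing interface attached earlier:
sudo tcpdump -nni eth1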
To verify Zeek is properly decapsulating and parsing the traffic you can verify logs are being generated in the
/nsm/zeek/logs/current directory:
ls -la /nsm/zeek/logs/current/
If you would like to deploy Security Onion in Google Cloud Platform (GCP), choose the Security Onion 2 image listed
on the Google Marketplace: https://securityonion.net/google/?ref=_ptnr_soc_docs_230824
 Warning: Existing 2.4 RC1 or newer Security Onion Google Image installations should use the soup command to
 upgrade to newer versions of Security Onion. Attempting to switch to a newer image from the Google Marketplace
 could cause loss of data and require full grid re-installation. Upgrading from Security Onion 2.3 or beta versions
 of 2.4 is unsupported.
Note: This section does not cover network connectivity to the Security Onion node. This can be achieved through
configuring an external IP for the node’s management interface, or through the use of a VPN connection via OpenVPN.
Note: This section does not cover all aspects of how to set up a VPC in GCP, as each deployment is typically unique
for the user. For more details about setting up a VPC, please see https://cloud.google.com/vpc/docs/vpc. Ensure that
all Security Onion nodes can access the manager node over the necessary ports. This could require adding rules to
your GCP Virtual Private Cloud and/or VMs in order to satisfy the Security Onion Firewall Node Communication
requirements.
5.15.1 Requirements
Before proceeding, determine the desired grid architecture: a single-node grid or a distributed, multi-node grid.
Additionally, determine whether the lower latency of local instance storage is needed (typically when a high volume
of traffic is being monitored, which is most production scenarios), or whether persistent disks can be used for
increased redundancy.
For simple, low-volume production monitoring, a single node grid can be used. Persistent disks must be used for
Elasticsearch data storage if used for production purposes. Single node grids cannot use local disks without being at
risk of losing Elasticsearch data. However, for temporary evaluation installations, where there is little concern for data
loss, local disks can be used.
Listed below are the minimum suggested single-node instance quantities, sizes, and storage requirements for either
standalone or evaluation installations (choose one, not both). Note that when using virtual machines with the minimum
RAM requirements you may need to enable memory swapping.
Standalone:
    • Quantity: 1
    • Type: n2-standard-4
    • Storage: 256GB Balanced Persistent Disk
Evaluation:
    • Quantity: 1
    • Type: n2-standard-8
Distributed Grid
For high volume production monitoring, choose a multi-node grid architecture. At least two search nodes must be
used in this architecture. This is required due to the use of local disks for Elasticsearch data storage, where each of
the search nodes retains a replica of another search node, for disaster recovery.
Listed below are the minimum suggested distributed grid instance quantities, sizes, and storage requirements. Prefer
increasing VM memory over enabling swap memory, for best performance. High volume networks will need more
powerful VM types with more storage than those listed below.
VPN Node
     • Quantity: 1
     • Type: e2-micro
     • Storage: 50GB Balanced Persistent Disk
Manager
     • Quantity: 1
     • Type: n2-standard-4
     • Storage: 300GB Balanced Persistent Disk
Search Nodes
     • Quantity: 2 or more
     • Type: n2-standard-4
     • Storage: 256GB Balanced Persistent Disk
     • Storage: 375GB Local Disk (NVMe)
Sensor monitoring the VPN ingress
     • Quantity: 1
     • Type: n2-standard-4
     • Storage: 500GB Balanced Persistent Disk
To accomplish traffic mirroring in GCP, a packet mirroring policy must be created and assigned to an internal load
balancer. Google supports multiple methods for selecting what traffic to mirror. For example, a special tag keyword
can be configured on the mirror policy, such as “so-mirror”, and any VM that should have its traffic monitored can be
given that special tag. The mirrored traffic will be forwarded to the internal load balancer, and a Security Onion sensor
VM will be a member of that load balancer’s instance group.
Follow the steps below to setup a traffic mirroring configuration. You will need to be logged into the Google Cloud
Console, and somewhat familiar with GCP and how zones and regions are used. Note that these steps are only one of
many ways to do this. For example, your scenario may require more advanced configuration, such as packet filtering,
or additional VPCs.
Create a new Virtual Private Cloud (VPC) network for collection of monitored network traffic. This will be referred to
below as the Monitored VPC network. Define one subnet within this VPC that will be dedicated to receiving monitored
traffic.
Add a new firewall rule to this VPC network to allow all incoming mirrored traffic. Specify a target tag of
so-collector and a source tag of so-mirror. This will allow all mirrored traffic originating from a VM NIC
tagged with so-mirror, and residing in this same VPC network, to be delivered to the sensor VM’s monitoring NIC
tagged with so-collector.
Create a new Virtual Private Cloud (VPC) network where the Security Onion grid will communicate. Configure the
subnets as desired, however, at least one subnet is required, and this VPC cannot overlap IP space with the above
Monitored VPC network. Ensure that SSH access (port TCP/22) and HTTPS (port TCP/443) are enabled so that you
have the ability to connect to VMs from your external network. For security purposes it’s recommended to limit
inbound access from trusted IPs.
Add a new firewall rule to allow all traffic originating from any VM instance within the Security Onion VPC network.
Choose a source IP range that encompasses the IP ranges of the subnet(s) created above. This is necessary for
connectivity between the manager and minion nodes. You can also choose to be more specific about traffic within the
VPC, however the rules must satisfy the Security Onion Firewall Node Communication requirements.
Create an unmanaged Instance Group. This is found under the Compute Engine section of the Google Cloud Console.
Use the Security Onion VPC as the selected network. Leave the VM instances blank; later in this document the
Security Onion sensor node will be added to this group. Port mapping is not required for this group.
Under Network services, within the Google Cloud Console, create a Load Balancer. Choose TCP Load Balancer and
select the Only between my VMs option. Click Continue and then select the Monitored VPC network.
For the Backend configuration, choose the Instance Group created above. Ignore the informative box that explains the
need to use additional NICs in the group instances. Specify that the backend is a failover group for backup. Create a
new Health check that uses port TCP/22 (SSH) as the health test, with the following timing settings:
    • Check Interval: 300
    • Timeout: 1
    • Healthy Threshold: 1
    • Unhealthy Threshold: 1
Note that this health check is put in place only to satisfy the GCP requirement that all backends have a health check
assigned. Since the backend group is marked as a failover, it will always forward traffic, regardless of the health check
result.
For the Frontend configuration, select the subnet in the Monitored VPC network that you created specifically for
receiving monitored traffic. Choose non-shared IP. If you would like to forward all traffic, choose All ports and
enable global access. Under Advanced Configurations, enable the Load Balancer for Packet mirroring
checkbox.
Traffic mirroring allows you to copy the traffic to/from an instance (or multiple instances) and send it to the sniffing
interface of a network security monitoring sensor or a group of interfaces using a network load balancer. For more
details about GCP Traffic Mirroring please see: https://cloud.google.com/vpc/docs/packet-mirroring
Create a Packet Mirroring policy. This can be found in the Google Cloud Console under the VPC network section.
When selecting the VPC network, choose the option that denotes the mirrored source and collector destination are in
the same VPC network and select the Mirrored VPC network created earlier.
Under Select mirrored source, check the box next to the “Select with network tag” label. Then enter a tag named
so-mirror. Once completed with the grid setup, you can later tag all your VMs, whose traffic you want monitored,
with the same so-mirror tag.
Under Select collector destination, choose the front end forwarding rule that was created during the Load Balancer
setup earlier.
Finally, choose to mirror all traffic, unless you prefer to filter specific traffic for mirroring.
Instance Creation
To configure a Security Onion instance (repeat for each node in a distributed grid), follow these steps:
     • Access the Google Cloud Marketplace at https://console.cloud.google.com/marketplace.
     • Ensure you have a means of authenticating to VM instances over SSH. One method to authenticate is via a
       project-wide SSH key, which can be defined in Compute Engine -> Metadata -> SSH Keys.
     • Search the Marketplace for Security Onion and Launch the latest version of the Security Onion 2 official
       VM image.
     • Choose the appropriate machine type based on the desired hardware requirements. For assistance on determining
       resource requirements please review the Requirements section above.
     • Under the Networking interfaces section, expand the pre-added Network interface and select the Security Onion
       VPC network and desired subnet. External ephemeral IP is sufficient, unless you are planning to use a VPN
       to access the Security Onion Console, in which case no external ephemeral IP is necessary. Using a VPN is
       recommended, but setup of a VPN in GCP is out of scope of this guide.
     • (Distributed “Sensor” node or Single-Node grid only) Add a second Network interface and select the monitoring
       VPC network, and the appropriate subnet. No external ephemeral IP is necessary for this interface. Specify the
       network tag so-collector for this VM.
     • (Distributed “Manager” node or Single-Node grid only) If not using a VPN, enable the Allow HTTPS traffic
       from the Internet checkbox, and specify allowed source IP ranges. Under network tags, type https-server
       and press <ENTER>.
     • Adjust the boot disk size and type as necessary, using the guidance in the above Requirements section and
       elsewhere in the Security Onion documentation.
     • (Distributed “Search” node or Evaluation grid only) Under Disks, click Add Local SSD. Choose NVMe and
       select the desired disk capacity based on anticipated log/event retention.
     • If requested, review GCP Marketplace Terms, and if acceptable click the corresponding checkbox.
     • Select: Create
For distributed search nodes, or an evaluation node if using local disk storage, SSH into the node and cancel out of the
setup. Prepare the local disk partition by executing the following command:
sudo so-prepare-fs
By default, this command expects the local disk device to be located at /dev/nvme1n1 and will mount that device
at /nsm/elasticsearch. If this fails, run lsblk to determine which disk to use. To override either of those two
defaults, specify them as arguments. For example:
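# Hypothetical invocation; the device path and mount point shown here are
# placeholders, not authoritative defaults:
sudo so-prepare-fs /dev/nvme0n1 /nsm/elasticsearch
Once the local disk has been prepared, change into the Security Onion directory and start Setup: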
cd /securityonion
sudo ./so-setup-network
If this is an evaluation node with a local disk, ensure the node has been prepared as described in the preceding section.
After SSH’ing into the node, setup will begin automatically. Follow the prompts, selecting the appropriate install
options. Most distributed installations will use the hostname or other web access method, due to the need for
both cluster nodes inside the private network, and analyst users across the public Internet to reach the manager. This
allows for custom DNS entries to define the correct IP (private vs public) depending on whether it’s a cluster node or
an analyst user. Users evaluating Security Onion for the first time should consider choosing the other option and
specifying the node’s public cloud IP.
GCP provides a built-in NTP server at hostname metadata.google.internal. This can be specified in the
SOC Configuration screen after setup completes. By default the server will use the time servers at ntp.org.
For distributed manager nodes using ephemeral storage, go to SOC Configuration. Search for
number_of_replicas and change it to 1. This will double the storage cost but will ensure at least two
VMs have the data, in case of an ephemeral disk loss.
Optionally, adjust ElastAlert indices so that they have a replica. This will cause them to turn yellow but that will be
fixed when search nodes come online:
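# A sketch assuming the so-elasticsearch-query helper passes additional
# arguments through to curl; the index pattern and flags are illustrative only:
sudo so-elasticsearch-query '*elastalert*/_settings' -XPUT -d '{"index":{"number_of_replicas":1}}'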
This is an optional step, since the ElastAlert indices are used primarily for short-term/recent alert history. In the
event of data loss, the indices will be regenerated when ElastAlert 2 restarts.
Follow standard Security Onion search node installation, answering the setup prompts as applicable. If you are using
local disk storage be sure to first prepare the instance as directed earlier in this section.
In the GCP console, under Compute Engine go to the Instance Group page and edit the instance group that was created
earlier. Use the dropdown list to add the new sensor VM instance to this group.
SSH into the sensor node and run through setup to set this node up as a sensor. Choose ens4 as the main interface
and ens5 as the monitoring interface.
Setup the VPN (out of scope for this guide) and connect the sensor node to the VPN. When prompted to choose the
management interface, select the VPN tunnel interface, such as tun0. Use the internal IP (not the ephemeral IP)
address of the manager inside GCP when prompted for the manager IP.
If connecting sensors through the VPN instance, you will need to add the inside interface of your VPN concentrator to
the sensor firewall hostgroup. For instance, assuming the following architecture:
SO Sensor           -> VPN Endpoint     -> Internet -> VPN Endpoint     -> SO Manager
Location: Remote       Location: Remote               Location: Google     Location: Google
192.168.33.13          192.168.33.10                  10.55.1.10           10.55.1.20
In order to add the Remote Network Forward Node to the Grid, you would have to add 10.55.1.10 to the sensor
firewall hostgroup.
This change can be done in the SOC Configuration screen. Then, either wait up to 15 minutes for the scheduled con-
figuration sync to run, or force a synchronization immediately via the SOC Configuration Options. Once the firewall
hostgroup configuration has been synchronized your Manager will be ready for remote minions to start connecting.
Deploy a temporary test VM instance, using an e2-micro, Debian-based instance in the Monitored VPC network, in
the same region used in the rest of this guide. Add the so-mirror network tag to the VM.
SSH into the sensor node created earlier in this guide, and run the following command to watch mirrored traffic:
tcpdump -nni ens5
While that is running, in another terminal, SSH into this new test VM and run a curl command to a popular website.
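For example (the destination site is arbitrary):
curl -s https://example.com > /dev/null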
You should see the HTTP/HTTPS traffic appear in the tcpdump output.
Log into Security Onion and verify that the traffic also appears in the Hunt user interface.
Delete the temporary test VM instance when verification is complete.
5.16 Configuration
Now that you’ve installed Security Onion, it’s time to configure it!
Security Onion is designed for many different use cases. Here are just a few examples!
Tip: If this is your first time using Security Onion and you just want to try it out, we recommend the Import option
as it’s the quickest and easiest way to get started.
5.16.1 Import
One of the easiest ways to get started with Security Onion is using it to forensically analyze pcap and log files. Just
install Security Onion in Import mode and then import pcap files or Windows event logs in EVTX format using the
Grid page.
5.16.2 Evaluation
Evaluation Mode is ideal for classroom or small lab environments. Evaluation is not designed for production
usage. Choose EVAL, follow the prompts (see screenshots below), and then proceed to the After Installation section.
5.16.3 Standalone
Standalone is similar to Evaluation in that it only requires a single box, but Standalone is more ready for production
usage. Choose STANDALONE, follow the prompts, and then proceed to the After Installation section.
5.16.4 Distributed
If deploying a distributed environment, install and configure the manager node first and then join the other nodes to it.
For best performance, the manager node should be dedicated to just being a manager for the other nodes (the manager
node should not do any network sniffing, that should be handled by dedicated forward nodes).
Build the manager by running Setup, selecting the DISTRIBUTED install submenu, and choosing the New
Deployment option. You can choose either MANAGER or MANAGERSEARCH. If you choose MANAGER, then you
must join one or more search nodes (this is optional if you choose MANAGERSEARCH) and you will want to do this
before you start joining other node types.
Build nodes by running Setup, selecting the DISTRIBUTED install submenu, choosing Existing Deployment,
and selecting the appropriate option. Please note that all nodes will need to be able to connect to the manager node on
several ports and the manager will need to connect to search nodes and heavy nodes. You’ll need to make sure that
any network firewalls have firewall rules to allow this traffic as defined in the Firewall section. In addition to network
firewalls, you’ll need to make sure the manager’s host-based firewall allows the connections. You can do this in two
ways. The first option is going to Administration –> Configuration –> firewall –> hostgroups, selecting the appropriate
node type, and adding the IP address. The second option is to wait until the node tries to join and it will prompt you to
run a specific command on the manager. Regardless of which of the two options you choose, it will eventually prompt
you to go to Administration –> Grid Members, find the node in the Pending Members list, click the Review button,
and then click the Accept button.
Proceed to the After Installation section.
5.17 After Installation
5.17.1 Services
You can check the Grid page to see if all services are running correctly.
Note: Please note that new nodes start off showing a red Fault and may take a few minutes to fully initialize before
they show a green OK.
You can also verify services are running from the command line with the so-status command:
sudo so-status
5.17.2 Firewall
Depending on what kind of installation you did, the Setup wizard may have already walked you through adding firewall
rules to allow your analyst IP address(es). If you need to make other adjustments to firewall rules, you can do so by
going to Administration –> Configuration –> firewall –> hostgroups.
5.17.3 SSH
You should be able to do most administration from Security Onion Console (SOC) but if you need access to the
command line then we recommend using SSH rather than the Console.
5.17.4 Data Retention
    • Review the Curator and Elasticsearch sections to see if you need to change any of the default index retention
      settings.
5.17.5 Other
    • Full-time analysts may want to connect using a dedicated Security Onion Desktop.
    • Any IDS/NSM system needs to be tuned for the network it’s monitoring. Please see the Tuning section.
    • Configure the OS to use your preferred NTP server.
Once all configuration is complete, you can then connect to Security Onion Console (SOC) with your web browser.
We recommend chromium-based browsers such as Google Chrome. Other browsers may work, but fully updated
chromium-based browsers provide the best compatibility.
Depending on the options you chose in the installer, connect to the IP address or hostname of your Security Onion
installation. Then log in using the email address and password that you specified in the installer.
Once logged in, you’ll notice the user menu in the upper right corner. This allows you to manage your user settings
and access documentation and other resources.
On the left side of the page, you’ll see links for analyst tools like Alerts, Dashboards, Hunt, Cases, PCAP, Kibana,
CyberChef, Playbook, and ATT&CK Navigator. While Alerts, Dashboards, Hunt, Cases, and PCAP are built into
SOC itself, the remaining tools are external and will spawn separate browser tabs.
If you’d like to customize SOC, please see the SOC Customization section. If you’d like to learn more about SOC
logs, please see the SOC Logs section.
6.1 Alerts
Security Onion Console (SOC) includes an Alerts interface which gives you an overview of the alerts that Security
Onion is generating. You can then quickly drill down into details, pivot to Hunt or the PCAP interface, and escalate
alerts to Cases.
6.1.1 Options
At the top of the page, there is an Options menu that allows you to set options such as Acknowledged/Escalated,
Automatic Refresh Interval, and Time Zone.
Toggles
The first toggle is labeled Temporarily enable advanced interface features. If you enable this
option, then the interface will show more advanced features similar to Dashboards and Hunt. These advanced features
are only enabled temporarily so if you navigate away from the page and then return to the page, it will default back to
its simplified view.
The Acknowledged and Escalated toggles control what alerts are displayed:
     • Enabling the Acknowledged toggle will only show alerts that have previously been acknowledged by an
       analyst.
     • Enabling the Escalated toggle will only show alerts that have previously been escalated by an analyst to
       Cases.
Another option is the Automatic Refresh Interval setting. When enabled, the Alerts page will automatically refresh at
the time interval you select.
Time Zone
Alerts will try to detect your local time zone via your browser. You can manually specify your time zone if necessary.
The query bar defaults to Group By Name, Module, which groups the alerts by rule.name and event.module.
If you want to send your current Alerts query to Hunt, you can click the crosshair icon to the right of the query bar.
You can click the dropdown box to select other queries which will group by other fields.
By default, Alerts searches the last 24 hours. If you want to search a different time frame, you can change it in the
upper right corner of the screen.
The remainder of the page is a data table that starts in the grouped view and can be switched to the detailed view. Both
views have some functionality in common:
    • Clicking the table headers allows you to sort ascending or descending.
    • Clicking the bell icon acknowledges an alert. That alert can then be seen by selecting the Acknowledged
      toggle at the top of the page. In the Acknowledged view, clicking the bell icon removes the acknowledgement.
    • Clicking the blue exclamation icon escalates the alert to Cases and allows you to create a new case or add to an
      existing case. If you need to find that original escalated alert in the Alerts page, you can enable the Escalated
      toggle (which will automatically enable the Acknowledged toggle as well).
    • Clicking a value in the table brings up a context menu of actions for that value. This allows you to refine your
      existing search, start a new search, or even pivot to external sites like Google and VirusTotal.
    • You can adjust the Rows per page setting in the bottom right and use the left and right arrow icons to page
      through the table.
Grouped View
By default, alerts are grouped by whatever criteria are selected in the query bar. Clicking a field value and then selecting
the Drilldown option allows you to drill down into that value which switches to the detailed view. You can also click
the value in the Count column to perform a quick drilldown. Note that this quick drilldown feature is only enabled for
certain queries.
If you’d like to remove a particular field from the grouped view, you can click the trash icon at the top of the table to
the right of the field name.
Detailed View
If you click a value in the grouped view and then select the Drilldown option, the display will switch to the detailed
view. This shows all search results and allows you to then drill into individual search results as necessary. Clicking
the table headers allows you to sort ascending or descending. Starting from the left side of each row, there is an
arrow which will expand the result to show all of its fields. To the right of that arrow is the Timestamp field.
Next, a few standard fields are shown: rule.name, event.severity_label, source.ip, source.port,
destination.ip, and destination.port. Depending on what kind of data you’re looking at, there may be
some additional data-specific fields as well.
When you click the arrow to expand a row in the Events table, it will show all of the individual fields from that event.
Field names are shown on the left and field values on the right. When looking at the field names, there is an icon to
the left that will add that field to the groupby section of your query. You can click on values on the right to bring up
the context menu to refine your search or pivot to other pages.
Clicking a value in the page brings up a context menu that allows you to refine your existing search, start a new search,
or even pivot to external sites like Google and VirusTotal.
Include
Clicking the Include option will add the selected value to your existing search to only show search results that
include that value.
Exclude
Clicking the Exclude option will exclude the selected value from your existing search results.
Only
Clicking the Only option will start a new search for the selected value and retain any existing groupby terms.
Group By
Clicking the Group By option will update the existing query and aggregate the results based on the selected field.
New Group By
Clicking the New Group By option will create a new data table for the selected field.
Numeric Ops
If the value you clicked is numeric, then the Numeric Ops sub-menu allows you to choose operations like less than,
less than or equal, greater than, greater than or equal, or Between. Choosing the Between option displays a window so
that you can specify a range of values.
Clipboard
The Clipboard sub-menu has several options that allow you to copy selected data to your clipboard in different
ways.
Actions
    • Clicking the PCAP option will pivot to the PCAP interface to retrieve full packet capture for the selected stream.
    • Clicking the Google option will search Google for the selected value.
    • Clicking the VirusTotal option will search VirusTotal for the selected value.
6.2 Dashboards
Security Onion Console (SOC) includes a Dashboards interface which includes an entire set of pre-built dashboards
for our standard data types.
6.2.1 Options
At the top of the page, there is an Options menu that allows you to set options such as Auto Apply, Exclude case data,
Exclude SOC Logs, Automatic Refresh Interval, and Time Zone.
Auto Apply
The Auto Apply option defaults to enabled and will automatically submit your query any time you change filters,
groupings, or date ranges.
Exclude case data
Dashboards excludes Cases data by default. If you disable this option, then you can use Dashboards to query your
Cases data.
Exclude SOC Logs
Dashboards also excludes SOC diagnostic logs by default. If you disable this option, then you can use Dashboards to
query your SOC diagnostic logs.
Automatic Refresh Interval
The Automatic Refresh Interval setting will automatically refresh your query at the time interval you select.
Time Zone
Dashboards will try to detect your local time zone via your browser. You can manually specify your time zone if
necessary.
The easiest way to get started is to click the query drop down box and select one of the pre-defined dashboards.
These pre-defined dashboards cover most of the major data types that you would expect to see in a Security Onion
deployment: NIDS alerts from Suricata, protocol metadata logs from Zeek or Suricata, endpoint logs, and firewall
logs.
If you would like to save your own personal queries, you can bookmark them in your browser. If you would like to
customize the default queries for all users, please see the SOC Customization section.
By default, Dashboards searches the last 24 hours. If you want to search a different time frame, you can change it in
the upper right corner of the screen. You can use the default relative time or click the clock icon to change to absolute
time.
The first section of output contains a Most Occurrences visualization, a timeline visualization, and a Fewest Occur-
rences visualization. Bar charts are clickable, so you can click a value to update your search criteria. Aggregation
defaults to 10 values, so Most Occurrences is the Top 10 and Fewest Occurrences is the Bottom 10 (long tail). The
number of aggregation values is controlled by the Fetch Limit setting in the Group Metrics section.
The middle section of output is the Group Metrics section. It consists of one or more data tables or visualizations that
allow you to stack (aggregate) arbitrary fields.
Group metrics are controlled by the groupby parameter in the search bar. You can read more about the groupby
parameter in the OQL section below.
Clicking the table headers allows you to sort ascending or descending. Refreshing the page will retain the sort, but
only for the first table.
Clicking a value in the Group Metrics table brings up a context menu of actions for that value. This allows you to
refine your existing search, start a new search, or even pivot to external sites like Google and VirusTotal. The default
Fetch Limit for the Group Metrics table is 10. If you need to see more than the top 10, you can increase the Fetch
Limit and then page through the output using the left and right arrow icons or increase the Rows per page setting.
You can use the buttons in the Count column header to convert the data table to a pie chart or bar chart. If the data table
is grouped by more than one field, then you will see an additional button that will convert the data table to a sankey
diagram. There is a Maximize View button that will maximize the table to fill the pane (you can press the Esc key to
return to normal view). Each of the groupby field headers has a trash button that will remove the field from the table.
Once you have switched to a chart, you will see different buttons at the top of the chart. You can use the Show Table
button to return to the data table, the Toggle Legend button to toggle the legend, and the Remove button to remove the
chart altogether. There is a Maximize View button that will maximize the chart to fill the pane (you can press the Esc
key to return to normal view).
6.2.6 Events
The third and final section of the page is a data table that contains all search results and allows you to drill into individ-
ual search results as necessary. Clicking the table headers allows you to sort ascending or descending. Starting from the
left side of each row, there is an arrow which will expand the result to show all of its fields. To the right of that arrow is
the Timestamp field. Next, a few standard fields are shown: source.ip, source.port, destination.ip,
destination.port, log.id.uid (Zeek unique identifier), network.community_id (Community ID), and
event.dataset. Depending on what kind of data you’re looking at, there may be some additional data-specific
fields as well.
Clicking a value in the Events table brings up a context menu of actions for that value. This allows you to refine your
existing search, start a new search, or even pivot to external sites like Google and VirusTotal.
The default Fetch Limit for the Events table is 100. If you need to see more than 100 events, you can increase the
Fetch Limit and then page through the output using the left and right arrow icons or increase the Rows per page
setting.
When you click the arrow to expand a row in the Events table, it will show all of the individual fields from that event.
Field names are shown on the left and field values on the right. When looking at the field names, there is an icon to
the left that will add that field to the groupby section of your query. You can click on values on the right to bring up
the context menu to refine your search or pivot to other pages.
6.2.7 Statistics
The bottom left corner of the page shows statistics about the current query including the speed of the backend data
fetch and the total round trip time.
Clicking a value in the page brings up a context menu that allows you to refine your existing search, start a new search,
or even pivot to external sites like Google and VirusTotal.
Include
Clicking the Include option will add the selected value to your existing search to only show search results that
include that value.
Exclude
Clicking the Exclude option will exclude the selected value from your existing search results.
Only
Clicking the Only option will start a new search for the selected value and retain any existing groupby terms.
Group By
If one or more Group By data tables already exists, clicking the Group By option will add the field to the most
recent data table. If there are no existing Group By data tables, clicking the Group By option will create a new
data table for the selected field.
New Group By
Clicking the New Group By option will create a new data table for the selected field.
Numeric Ops
If the value you clicked is numeric, then the Numeric Ops sub-menu allows you to choose operations like less than,
less than or equal, greater than, greater than or equal, or Between. Choosing the Between option displays a window so
that you can specify a range of values.
Clipboard
The Clipboard sub-menu has several options that allow you to copy selected data to your clipboard in different
ways.
Actions
    • Clicking the PCAP option will pivot to the PCAP interface to retrieve full packet capture for the selected stream.
    • Clicking the Google option will search Google for the selected value.
    • Clicking the VirusTotal option will search VirusTotal for the selected value.
If you’d like to add your own custom actions, see the SOC Customization section.
6.2.9 OQL
Onion Query Language (OQL) starts with standard Lucene query syntax and then allows you to add optional segments
that control what Dashboards does with the results from the query.
sortby
The sortby segment can be added to the end of a hunt query. This can help ensure that you see the most recent data,
for example, when sorting by descending timestamp. Otherwise, if the search yields a dataset larger than the X Limit
size selected in the UI, you will only get the first X records, which are then sorted in the web browser.
You can specify one field to sort by or multiple fields separated by spaces. The default order is descending, but if you
want to force the sort order to be ascending, you can add the optional caret (^) symbol to the end of the field name.
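For example, to search connection records and sort by destination port in ascending order (field names are
illustrative):
event.dataset:conn | sortby destination.port^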
groupby
The groupby segment tells Dashboards to group by (aggregate) a particular field. So, for example, if you want to
group by destination IP address, you can add the following to your search:
| groupby destination.ip
The groupby segment supports multiple aggregations so you can add more fields that you want to group by, sepa-
rating those fields with spaces. For example, to group by destination IP address and then destination port in the same
data table, you could use:
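| groupby destination.ip destination.port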
OQL supports multiple groupby segments so if you wanted each of those fields to have their own independent data
tables, you could do:
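| groupby destination.ip | groupby destination.port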
In addition to rendering standard data tables, you can optionally render the data as a pie chart, bar chart, or sankey
diagram.
    • The pie chart is specified using the -pie option:
    • The sankey diagram is specified using the -sankey option, but keep in mind that this requires at least two
      fields:
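For example (field choices are illustrative):
| groupby -pie destination.port
| groupby -sankey source.ip destination.ip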
The -maximize option will maximize the table or chart to fill the pane. After viewing the maximized result, you can
press the Esc key to return to normal view.
By default, grouping by a particular field won’t show any values if that field is missing. If you would like to
include missing values, you can add an asterisk after the field name. For example, suppose you want to look
for non-HTTP traffic on port 80 using a query like event.dataset:conn AND destination.port:80
| groupby network.protocol destination.port. If there was non-HTTP traffic on port 80, the
network.protocol field may be null and so this query would only return port 80 traffic identified as HTTP.
To fix this, add the asterisk after the network.protocol:
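event.dataset:conn AND destination.port:80 | groupby network.protocol* destination.port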
Please note that adding the asterisk to a non-string field may not work as expected. As an alternative, you may be able
to use the asterisk with the equivalent keyword field if it is available. For example, source.geo.ip* may return
0 results, or a query failure error, but source.geo.ip.keyword* may work as expected.
There’s a known limitation with Sankey diagrams where the diagram is unable to render all data when multiple fields
of the diagram contain the same value. This causes a recursion issue. For example, this can occur if using an OQL
query of * | groupby -sankey source.ip destination.ip and the included events have a specific IP
appearing in both the source.ip and destination.ip fields. SOC will attempt to prevent the recursion issue
by omitting any data that introduces recursion. This can result in some diagrams showing partial data on the diagram,
and when this occurs the Sankey diagram will have the phrase (partial) appended to the title. In rare scenarios, it’s
possible for the diagram to be completely blank, such as if all data results have the same value in each field. Following
the example mentioned above, this could happen if the source.ip and destination.ip were always equal.
6.3 Hunt
Security Onion Console (SOC) includes a Hunt interface which is similar to our Dashboards interface but is tuned
more for threat hunting.
The main difference between Hunt and Dashboards is that Hunt’s default queries are more focused than the overview
queries in Dashboards. A second difference is that most of the default Dashboards queries display a separate table for
each aggregated field, whereas many of the default queries in Hunt aggregate multiple fields in a single table which
can be beneficial when hunting for more obscure activity.
6.4 Cases
Security Onion Console (SOC) includes our Cases interface for case management. It allows you to escalate logs from
Alerts, Dashboards, and Hunt, and then assign analysts, add comments and attachments, and track observables.
6.4.1 Installation
Cases is a part of Security Onion Console (SOC). It’s automatically enabled when doing an Import, Eval, Standalone,
Manager, or ManagerSearch installation. If you want the quickest and easiest way to try out Cases, you can follow our
First Time Users guide to install a minimal Import installation.
On a new deployment, Cases will be empty until you create a new case.
To create a new case, click the + icon and then fill out the Title and Description and optionally the fields on the right
side including Assignee, Status, Severity, Priority, TLP, PAP, Category, and Tags. Clicking the fields on the right
side reveals drop-down boxes with standard options. The Assignee field will only list user accounts that are currently
enabled.
Alternatively, if you find events of interest in Alerts, Dashboards, or Hunt, you can escalate directly to Cases using the
escalate button (blue triangle with exclamation point). Clicking the escalate button will escalate the data from the row
as it is displayed. This means that if you’re looking at an aggregated view, you will get limited details in the resulting
escalated case. If you want more details to be included in the case, then first drill into the aggregation and escalate one
of the individual items in that aggregation.
Once you click the escalate button, you can choose to escalate to a new case or an existing case.
6.4.3 Comments
On the Comments tab, you can add comments about the case. The Comments field uses markdown syntax and you
can read more about that at https://www.markdownguide.org/cheat-sheet/.
6.4.4 Attachments
On the Attachments tab, you can upload attachments. For each attachment, you can optionally define TLP and add
tags. Cases will automatically generate SHA256, SHA1, and MD5 hash values for each attachment. Buttons next to
the hash values allow you to copy the value or add it as an observable.
6.4.5 Observables
On the Observables tab, you can track observables like IP addresses, domain names, hashes, etc. You can add observ-
ables directly on this tab or you can add them from the Events tab as well.
You can add multiple observables of the same type by selecting the option labeled Enable this checkbox to
have a separate observable added for each line of the provided value above.
For each observable, you can click the icon on the far left of the row to drill into the observable and see more
information about it. To the right of that is the hunt icon, which will start a new hunt for the observable. Clicking the
lightning bolt icon will analyze the observable (see the Analyzers section later).
You can also add observables directly from Alerts, Dashboards, or Hunt. Click the observable and select the Add to
Case option. You’ll then have the option of adding the observable to a new case or an existing case.
6.4.6 Events
On the Events tab, you can see any events that have been escalated to the case. This could be Suricata alerts, network
metadata from Suricata or Zeek, or endpoint logs.
For each event, you can click the icon on the far left of the row to drill in and see all the fields included in that event.
If you find something that you would like to track as an Observable, you can click the eye icon on the far left of the row
to add it to the Observables tab. It will attempt to automatically identify well known data types such as IP addresses.
To the right of the eye icon is a Hunt icon that can be used to start a new hunt for that particular value.
6.4.7 History
On the History tab, you can see the history of the case itself, including any changes made by each user. For each row
of history, you can click the icon on the far left of the row to drill in and see more information.
Once you have one or more cases, you can use the main Cases page to get an overview of all cases.
6.4.9 Options
Starting at the top of the main Cases page, the Options menu allows you to set options such as Automatic Refresh
Interval and Time Zone.
There is also a toggle labeled Temporarily enable advanced interface features. If you enable this
option, then the interface will show more advanced features similar to Dashboards and Hunt. These advanced features
are only enabled temporarily so if you navigate away from the page and then return to the page, it will default back to
its simplified view.
The query bar defaults to Open Cases. Clicking the drop-down box reveals other options such as Closed Cases, My
Open Cases, My Closed Cases, and Templates. If you want to send your current query to Hunt, you can click the
crosshair icon to the right of the query bar.
Under the query bar, you’ll notice colored bubbles that represent the individual components of the query and the fields
to group by. If you want to remove part of the query, you can click the X in the corresponding bubble to remove it and
run a new search.
The time picker is to the right of the query bar. By default, Cases searches the last 12 months. If you want to search a
different time frame, you can change it here.
The remainder of the main Cases page is a data table that shows a high level overview of the cases matching the current
search criteria.
    • Clicking the table headers allows you to sort ascending or descending.
    • Clicking a value in the table brings up a context menu of actions for that value. This allows you to refine your
      existing search, start a new search, or even pivot to external sites like Google and VirusTotal.
    • You can adjust the Rows per page setting in the bottom right and use the left and right arrow icons to page
      through the table.
    • When you click the arrow to expand a row in the data table, it will show the high level fields from that case.
      Field names are shown on the left and field values on the right. When looking at the field names, there is an icon
      to the left that will add that field to the groupby section of your query. You can click on values on the right to
      bring up the context menu to refine your search.
    • To the right of the arrow is a binoculars icon. Clicking this will display the full case including the Comments,
      Attachments, Observables, Events, and History tabs.
6.4.13 Data
Cases data is stored in Elasticsearch. You can view it in Dashboards or Hunt by clicking the Options menu and
disabling the Exclude case data option. You can then search the so-case index with the following query:
_index:"*:so-case"
6.4.14 Analyzers
We have included analyzers which allow you to quickly gather context around an observable.
The following is a summary of the built-in analyzers and their supported data types:
Running Analyzers
To enqueue an analyzer job, click the lightning bolt icon on the left side of the observable menu:
All configured analyzers supporting the observable’s data type will then run and return their analysis:
Note: Observable values must be formatted to correctly match the observable type in order for analyzers to properly
execute against them. For example, an IP observable type should not contain more than one IP address.
Analyzer Output
The collapsed job view for an analyzer will return a summary view of the analysis:
Expanding the collapsed row will reveal a more detailed view of the analysis:
Configuring Analyzers
Some analyzers require authentication or other details to be configured before use. If analysis is requested for an
observable and an analyzer supports that observable type but the analyzer is left unconfigured, then it will not run.
The following analyzers require users to configure authentication or other parameters in order for the analyzer to work
correctly:
    • AlienVault OTX
    • EmailRep
    • GreyNoise
    • LocalFile
    • Pulsedive
    • Urlscan
    • VirusTotal
To configure an analyzer, navigate to Administration -> Configuration -> sensoroni.
At the top of the page, click the Options menu and then enable the Show all configurable settings,
including advanced settings. option. Then navigate to sensoroni -> analyzers.
Developing Analyzers
If you’d like to develop a custom analyzer, take a look at the developer’s guide at
https://github.com/Security-Onion-Solutions/securityonion/tree/dev/salt/sensoroni/files/analyzers.
6.5 PCAP
Security Onion Console (SOC) includes a PCAP interface which allows you to access your full packet capture that
was written to disk by Stenographer.
In most cases, you’ll pivot to PCAP from a particular event in Alerts, Dashboards, or Hunt by choosing the PCAP
action on the action menu.
Alternatively, you can go directly to the PCAP interface, click the blue + button, and then put in your search criteria
to search for a particular stream.
Security Onion will then locate the stream and render a high level overview of the packets.
If there are many packets in the stream, you can use the LOAD MORE button, Rows per page setting, and arrows
to navigate through the list of packets.
You can drill into individual rows to see the actual payload data. There are buttons at the top of the table that control
what data is displayed in the individual rows. By disabling Show all packet data and HEX, you can get an
ASCII transcript.
You can select text with your mouse and then use the context menu to send that selected text to CyberChef, Google,
or other destinations defined in the actions list.
You can send all of the visible packet data to CyberChef by clicking the CyberChef icon on the right side of the table
header. Please note that this only sends packet data that is currently being displayed, so if you are looking at a large
stream you may need to use the LOAD MORE button to display all packets in the stream.
Finally, you can download the full pcap file by clicking the download button on the far right side of the table header.
If you are using Security Onion Desktop, then the pcap will automatically open in NetworkMiner. Alternatively, you
could open the pcap in Wireshark.
Once you’ve viewed one or more PCAPs, you will see them listed on the main PCAP page.
When you are done with a PCAP, you may want to delete it using the X button on the far right. This deletes the cached
PCAP file saved at /nsm/soc/jobs/.
6.5.1 Troubleshooting
If you have trouble retrieving PCAP, here are some things to check:
    • Verify that Stenographer is enabled.
    • Check Grid and verify that all services are running properly.
    • Check InfluxDB and verify that PCAP Retention is long enough to include the stream you’re looking for.
    • Check to see if you have any BPF configuration that may cause Stenographer to ignore the traffic.
    • Make sure that there is plenty of free space on /nsm so that Stenographer can carve the stream and write the
      output to disk.
6.6 Grid
Security Onion Console (SOC) includes a Grid interface which allows you to quickly check the status of all nodes in
your grid.
Starting at the top of the page, there is a Grid EPS value in the upper right corner that shows the sum of all Con-
sumption EPS measurements in the entire grid. Below that you will find a list of all nodes in your grid.
Note: Please note that new nodes start off showing a red Fault and may take a few minutes to fully initialize before
they show a green OK.
You can drill into individual nodes to see detailed information including Node Status, Container Status, and Appliance
Images.
Online Since
The Online Since field shows how long the node has been online.
Production EPS
The Production EPS field shows how many events per second the node is producing. This is taken from the number
of events sent out by Elastic Agent.
Consumption EPS
Process Status
If the Process Status field shows Fault, you can check the Container Status section to determine which
process has failed.
Connection Status
The Connection Status field shows whether or not the node is currently connected to the grid.
RAID Status
If you are using an official Security Onion Solutions appliance with RAID support, then you will see the corresponding
status appear in this field.
Description
The Description field shows the optional Description you may have entered during Setup.
There are a few icons in the lower left of the Node Status section depending on what kind of node you are looking
at:
    • Clicking the first icon takes you to the InfluxDB dashboard for that particular node to view health metrics.
    • If the node is a network sensor, then there will be an additional icon for sending test traffic to the sensor.
    • Depending on the node type, there may be an additional icon for uploading your own PCAP or EVTX file.
      Clicking this icon results in an upload form. Once you’ve selected a file and initiated the upload, a status
      message appears. Uploaded PCAP files are automatically imported via so-import-pcap and EVTX files are
      automatically imported via so-import-evtx. Once the import is complete, a message will appear containing a
      hyperlink to view the logs from the import. Please note that this is designed for smaller files. If you need to
      import files larger than 25MB, then you will need to manually import via so-import-pcap or so-import-evtx.
If any containers show anything other than running, then you might want to double-check the configuration for that
container and check the corresponding logs in /opt/so/log/.
If you have purchased our official Security Onion Solutions appliances, then the grid page will show pictures of the
front and rear of the appliances, useful for walking through connectivity discussions with personnel in the data center.
If you are not using official Security Onion Solutions appliances, then it will simply display a message to that effect.
Note: You can manage Grid members and Grid configuration in the Administration section.
6.7 Downloads
Security Onion Console (SOC) includes a Downloads interface that allows you to download the Elastic Agent for
various operating systems.
6.8 Administration
Security Onion Console (SOC) includes an Administration section which allows you to administer Users, Grid Mem-
bers, and Configuration.
6.8.1 Users
The Users page shows all user accounts that have been created for the grid.
The Role(s) column lists roles assigned to the user as defined in the Role-Based Access Control (RBAC) section.
The Status column will show different icons depending on the status of the account. In the screenshot above:
    • the first account is disabled
    • the second account is enabled and has TOTP MFA enabled
    • the third account is enabled but does not have TOTP MFA enabled and has not yet changed their password
Hovering over the icon in the Status column will show you these details as well.
6.8.2 Grid Members
The Grid Members page shows nodes that have attempted to join the grid and whether or not they have been accepted
into the grid by an administrator.
Unaccepted members are displayed on the left side and broken into three sections: Pending Members, Denied Mem-
bers, and Rejected Members. When you accept a member, it will then move to the right side under Accepted Members.
For accepted members, you can click the REVIEW button to show additional information about the grid member. If
you want to remove the member, you can then click the DELETE button and review the confirmation.
6.8.3 Configuration
The Configuration page allows you to configure various components of your grid.
The most common configuration options are shown in the quick links on the right side. On the left side, you can click
on a component in the tree view to drill into it and show all available settings for that component. You can then click
on a setting to show the current setting or modify it if necessary. If you make a mistake, you can easily revert back to
the default value. If a blue question mark appears on the setting page, you can click it to go to the documentation for
that component.
If you’re not sure of which component a particular setting may belong to, you can use the Filter at the top of the list to
look for a particular setting. To the right of the Filter field are buttons that do the following:
    • expand all settings
    • collapse all settings
    • show settings that have been modified from the default value
    • show settings that have a unique value specified for one or more nodes in the grid
Note: If you see a key that includes _x_, it is a placeholder value used to represent a period (.).
By default, the Configuration page only shows the most widely used settings. If you want to see all settings, you
can go to the Options bar at the top of the page and then click the toggle labeled Show all configurable
settings, including advanced settings.
Warning: Changing advanced settings is unsupported and could result in requiring a full cluster re-installation.
6.9 Kibana
Security Onion Console (SOC) includes a link on the sidebar that takes you to Kibana.
From https://www.elastic.co/kibana:
      Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the
      Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your
      apps.
6.9.1 Authentication
Log into Kibana using the same username and password that you use for Security Onion Console (SOC).
You can add new user accounts to both Kibana and Security Onion Console (SOC) at the same time as shown in the
Adding Accounts section. Please note that if you instead create accounts directly in Kibana, then those accounts will
only have access to Kibana and not Security Onion Console (SOC).
6.9.2 Dashboards
Once you log into Kibana, you should start on the Security Onion - Home dashboard.
Notice the visualization in the upper left is labeled Security Onion - Navigation. This navigation panel
contains links to other dashboards and will change depending on what dashboard you’re currently looking at. For
example, when you’re on the Security Onion - Home dashboard and click the Alert link, you will go to
the Security Onion - Alerts dashboard and the Navigation panel will then contain links to more specific
alert dashboards for Playbook and Suricata. When you’re done looking at alerts, you can click the Home link in the
navigation panel to go back to the main Security Onion - Home dashboard.
If you ever need to reload dashboards, you can run the following command on your manager:
sudo so-kibana-config-load
If that doesn’t resolve the issue, then you may need to run the following:
If you try to modify a default dashboard, your change will get overwritten. Instead of modifying, copy the desired
dashboard and edit the copy. You may also want to consider setting up Kibana Spaces as this will allow you to
make whatever changes you want without them being overwritten. This includes not only dashboards but certain
Kibana settings as well. You can read more about Kibana Spaces at
https://www.elastic.co/guide/en/kibana/current/xpack-spaces.html.
6.9.3 Pivoting
PCAP/Cases
The _id field has a hyperlink which is labeled as Hunt and optionally pivot to PCAP/Cases. Clicking
this hyperlink takes you to Hunt and searches for that particular record. From Hunt, you can then escalate the event to
Cases or pivot to full packet capture via our PCAP interface (assuming it’s a network event). You can usually find the
_id field as the rightmost column in the log panels at the bottom of the dashboards.
You can also find the hyperlinked _id field by drilling into a row in the log panel.
Indicator Dashboard
Several fields are hyperlinked to the Indicator dashboard to allow you to get all the information you can about a
particular indicator. Here are just a few:
uid
source.ip
source.port
destination.ip
destination.port
6.9.4 Search Results
Search results in the dashboards and through Discover are limited to the first 100 results for a particular query. If you
don’t feel like this is adequate after narrowing your search, you can adjust the value for discover:sampleSize
in Kibana by navigating to Stack Management -> Advanced Settings and changing the value. It may be
best to change this value incrementally to see how it affects performance for your deployment.
6.9.5 Timestamps
By default, Kibana will display timestamps in the timezone of your local browser. If you would prefer timestamps in
UTC, you can go to Management –> Advanced Settings and set dateFormat:tz to UTC.
6.9.6 Configuration
You can configure Kibana by going to Administration –> Configuration –> kibana.
6.9.7 Diagnostic Logging
Kibana logs to /opt/so/log/kibana/kibana.log. Depending on what you’re looking for, you may also need
to look at the Docker logs for the container:
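For example, assuming the Kibana container is named so-kibana:
sudo docker logs so-kibana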
If you try to access Kibana and it says Kibana server is not ready yet even after waiting a few minutes
for it to fully initialize, then check /opt/so/log/kibana/kibana.log. You may see something like:
Another Kibana instance appears to be migrating the index. Waiting for that migration
to complete. If no other Kibana instance is attempting migrations, you can get past
this message by deleting index .kibana_6 and restarting Kibana.
If that’s the case, then you can do the following (replacing .kibana_6 with the actual index name that was mentioned
in the log):
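A sketch of the delete step, assuming so-elasticsearch-query passes its arguments through to curl:
sudo so-elasticsearch-query .kibana_6 -XDELETE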
sudo so-kibana-restart
If you then are able to login to Kibana but your dashboards don’t look right, you can reload them as follows:
so-kibana-config-load
6.9.8 Features
You can enable or disable specific features by clicking the main menu in the upper left corner, then click Stack
Management, then click Spaces, then click Default. For more information, please see
https://www.elastic.co/guide/en/kibana/master/xpack-spaces.html#spaces-control-feature-visibility.
6.10 Elastic Fleet
Security Onion Console (SOC) includes a link on the sidebar that takes you to the Fleet page inside Kibana.
6.10.1 Configuration
Elastic Fleet is pre-configured during Security Onion setup, however, centralized management of configuration is
provided within its user interface inside of Kibana.
Configuration options for various components are detailed below.
Agents
This section displays registered Elastic agents (https://docs.securityonion.net/en/2.4/elastic-agent.html) and allows the
user to add additional agents.
To view agent details, click the Host name.
To assign the agent to a new policy, unenroll, upgrade the agent, or perform other actions, click the Actions menu
on the right side of the agent listing and select the appropriate option.
By default, Elastic Agent is installed on every Security Onion grid node. As a result, all grid node agents will be
enrolled in the SO-Grid-Nodes agent policy. We do not recommend removing policy settings for Security Onion
grid node agents.
Adding Agents
Agent Policies
Agent policies dictate what data each agent will ingest and forward to Elasticsearch. This could be through the use of
an HTTP, log file, or TCP-based input.
The individual components within each agent policy are called integrations (referred to as package policies at
the API level), and refer to a specific input and settings pertinent to a data source.
For example, the SO-Grid-Nodes agent policy is comprised of the following integrations:
Integrations
    • auditd
    • barracuda
    • cisco_asa
    • crowdstrike
    • darktrace
    • f5_bigip
    • fortinet
    • fortinet_fortigate
    • gcp
    • http_endpoint
    • httpjson
    • juniper
    • juniper_srx
    • kafka_log
    • lastpass
    • m365_defender
    • microsoft_defender_endpoint
    • microsoft_dhcp
    • netflow
    • o365
    • okta
    • panw
    • pfsense
    • sentinel_one
    • sonicwall_firewall
    • symantec_endpoint
    • ti_abusech
    • ti_misp
    • ti_otx
    • ti_recordedfuture
    • zscaler_zia
    • zscaler_zpa
Adding an Integration
New integrations can be added to existing policies to provide increased visibility and more comprehensive monitoring.
To add an integration to an existing policy:
From Fleet -> Agent policies -> $Policy name, click Add Integration and follow the steps for
adding the integration.
A custom integration can be added by adding an integration such as the Custom Logs integration, for which you can
specify various settings relative to the data source and define additional actions to be performed.
Enrollment Tokens
An enrollment token allows an agent to enroll in Fleet, subscribe to a particular agent policy, and send data.
Each agent policy typically uses its own enrollment token. It is recommended that these tokens NOT be changed,
especially those generated by default Security Onion agent policies.
Data Streams
Settings
Note: We do NOT recommend changing these settings, as they are managed by Security Onion.
6.11 Osquery Manager
Security Onion Console (SOC) includes a link on the sidebar which takes you to the Osquery Manager page inside
Kibana.
Note: For more information about Osquery Manager, please see
https://docs.elastic.co/en/integrations/osquery_manager.
6.12 InfluxDB
Security Onion Console (SOC) includes a link on the sidebar that takes you to InfluxDB.
From https://github.com/influxdata/influxdb:
      InfluxDB is an open source time series platform. This includes APIs for storing and querying data, pro-
      cessing it in the background for ETL or monitoring and alerting purposes, user dashboards, and visualizing
      and exploring the data and more.
6.12.1 Authentication
Log into InfluxDB using the same username and password that you use for Security Onion Console (SOC).
6.12.2 Configuration
You can configure InfluxDB by going to Administration –> Configuration –> influxdb.
6.13 CyberChef
Security Onion Console (SOC) includes a link on the sidebar that takes you to CyberChef.
From https://github.com/gchq/CyberChef:
      The Cyber Swiss Army Knife
      CyberChef is a simple, intuitive web app for carrying out all manner of “cyber” operations within a
      web browser. These operations include simple encoding like XOR or Base64, more complex encryption
      like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data,
      calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much
      more.
      The tool is designed to enable both technical and non-technical analysts to manipulate data in complex
      ways without having to deal with complex tools or algorithms.
      There are four main areas in CyberChef:
        1. The input box in the top right, where you can paste, type or drag the text or file you want to operate
           on.
        2. The output box in the bottom right, where the outcome of your processing will be displayed.
        3. The operations list on the far left, where you can find all the operations that CyberChef is capable of
           in categorised lists, or by searching.
        4. The recipe area in the middle, where you can drag the operations that you want to use and specify
           arguments and options.
6.13.1 Screenshot
6.13.2 Accessing
To access CyberChef, log into Security Onion Console (SOC) and click the CyberChef hyperlink.
You can send highlighted text from PCAP to CyberChef. When the CyberChef tab opens, you will see your highlighted
text in both the Input box and the Output box.
You can send all visible packet data from PCAP to CyberChef. When the CyberChef tab opens, it will automatically
apply the From Hexdump recipe to render the hexdump that was sent.
Suppose you are looking at an interesting HTTP file download in PCAP and want to extract the file using CyberChef:
    • Click the PCAP CyberChef button and CyberChef will launch in a new tab. It will then show the hexdump in
      the Input box, automatically apply the From Hexdump recipe, and show the HTTP transcript in the Output
      box.
    • You may want to apply an operation from the left column. One option is to use the Extract Files operation
      and optionally specify certain file types for extraction. Another option is to instead remove the HTTP headers
      using the Strip HTTP headers operation.
    • If a magic wand appears in the Output box, then CyberChef has detected some applicable operations and you
      can click the magic wand to automatically apply those operations. For example, CyberChef might automatically
      apply Strip HTTP headers and then render the file.
6.14 Playbook
Security Onion Console (SOC) includes a link on the sidebar that takes you to Playbook which allows you to create a
Detection Playbook, which itself consists of individual Plays. These Plays are fully self-contained and describe the
different aspects around a particular detection strategy.
   3. The actual query needed to implement the Play’s objective. In our case, the ElastAlert / Elasticsearch configu-
      ration.
Any results from a Play (low, medium, high, critical severity) are available to view within Dashboards, Hunt, or
Kibana. High or critical severity results from a Play will generate an Alert within the Security Onion Console Alerts
interface.
The final piece to Playbook is automation. Once a Play is made active, the following happens:
    • The required ElastAlert config is put into production
    • ATT&CK Navigator layer is updated to reflect current coverage
You can access Playbook by logging into Security Onion Console (SOC) and clicking the Playbook link. You will
see over 500 plays already created that have been imported from the Sigma Community repository of rules at
https://github.com/Neo23x0/sigma/tree/master/rules.
Click on Edit to edit a Play. There will only be a few fields that you can modify - to make edits to the others (Title,
Description, etc), you will need to edit the Sigma inside the Sigma field. Keep in mind that the Sigma is YAML
formatted, so if you have major edits to make it is recommended to lint it and/or Convert it through the Sigma
Editor to confirm that it is formatted correctly. Be sure to remove the prepended and appended Playbook-specific
syntax highlighting before linting/converting - {{collapse(View Sigma) <pre><code class="yaml">
and </code></pre>}}.
Once you save your changes, Playbook will update the rest of the fields to match your edits, including regenerating
the Elastalert rule if needed.
When you are ready to start alerting on your Play, change the Status of the play to Active. This will create the
ElastAlert config. Any edits made to the Play in Playbook will automatically update the ElastAlert configuration.
The Elastalert rules are located under /opt/so/rules/elastalert/playbook/<PlayID>.yaml.
Elastalert rules created by Playbook will run every 3 minutes, with a buffer_time of 15 minutes.
Performance testing is still ongoing. We recommend avoiding the Malicious Nishang PowerShell
Commandlets play as it can cause serious performance problems. You may also want to avoid others with a status
of experimental.
When results from your Plays are found (ie alerts), they are available to view within Alerts.
If you have a Play that is generating false positives, you can tune it by adding a Custom Filter to the Play.
For example, suppose you are seeing a large amount of Suspicious Service Path Modification alerts.
Drilling down into the alerts, it appears to be a legitimate configuration change by one of the IT Ops Service Accounts.
This can be tuned out by doing the following:
    • Open the Play and click Edit
    • Add the following filter in the Custom Filter field (YAML Formatting!):
sofilter:
  User: SA_ITOPS
The sofilter syntax is important: add as many top-level filter clauses as you need, but they should all start with
sofilter, for example sofilter1, sofilter2 (see the sketch after this list).
    • Click Submit and Playbook will take care of the rest, which includes automatically adding the custom filter to
      the rule when it is converted.
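For illustration, here is a hedged sketch of a Custom Filter with multiple clauses. The second service account and the
hostname field shown here are hypothetical; replace them with fields that actually appear in your alerts:

sofilter:
  User: SA_ITOPS
sofilter1:
  User: SA_BACKUP
sofilter2:
  winlog.computer_name: PATCH-MGMT-01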
Custom filters are applied right away (written out to the backend ElastAlert rule file), but ElastAlert could take a couple
of minutes to pick up on the change, since it runs rules every 3 minutes.
It is not recommended to edit the Sigma directly for Community rules, because if there is ever an update for that Sigma
rule from the Sigma rules repo, your changes will be overwritten.
Finally, if you are seeing legitimate executions that are not unique to your environment, you might consider submitting
a PR to the rule in the Sigma repo (https://github.com/SigmaHQ/sigma/tree/master/rules).
By default, once a user has authenticated through SOC they can access Playbook without having to log in again to the
app itself. This anonymous access has the permissions of the analyst role.
If you need administrator access to Playbook, you can log in as admin with the randomized password found via sudo
salt-call pillar.get secrets. However, the Playbook UI is designed to be used with a user that has an
analyst role. Using an admin account will be very confusing to newcomers to Playbook, since many of the fields will
now be shown/editable and it will look much more cluttered.
If you need your team to log in with individual user accounts, you can disable anonymous access, create new user
accounts, and add them to the analyst group, which will give them all the relevant permissions.
To do this, log in with a user that has administrative access, and navigate to Administration –> Users –> New User.
Fill out the relevant fields. By default, Playbook is not connected to an email server so password resets via email will
not work. Once the new user has been created, go back to Administration –> Users and select the newly created user.
There will be a Groups tab, from which you can add the user to the Analyst group. This will give the user all the
needed permissions.
To disable anonymous access, log in with a user that has administrative access and navigate to Administration –>
Projects –> Detection Playbooks. Unselect the Public checkbox.
so-playbook-sync runs every 5 minutes. This script queries Playbook for all active plays and then checks to
make sure that there is an ElastAlert config for each play. It also runs through the same process for inactive plays.
Sigma support currently extends to the following log sources in Security Onion:
         • Windows Eventlogs and Sysmon (via Elastic Agent)
         • osquery (via Elastic Agent)
         • network (via Zeek logs)
The pre-loaded Plays depend on Sysmon and Windows Eventlogs shipped with Elastic Agent.
For best compatibility, use the following Sigma Taxonomy:
         • Process Creation: https://github.com/Neo23x0/sigma/wiki/Taxonomy#process-creation-events
         • Network: https://github.com/Neo23x0/sigma/wiki/Taxonomy#specific
The current Security Onion Sigmac field mappings can be found here:
https://github.com/Security-Onion-Solutions/securityonion-image/blob/master/so-soctopus/so-soctopus/playbook/securityonion-baseline.yml
The pre-loaded Plays come from the community Sigma repository at https://github.com/Neo23x0/sigma/tree/master/rules.
The default config is to only pull in the Windows rules. The rest of the rules from the community repository can
be added via Administration –> Configuration –> soctopus.
Playbook logs can be found in /opt/so/log/playbook/. Depending on what you’re looking for, you may also
need to look at the Docker logs for the container:
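A minimal sketch, assuming the container follows Security Onion's so-<name> naming convention (confirm with sudo docker ps):

sudo docker logs so-playbook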
6.15 ATT&CK Navigator
Security Onion Console (SOC) includes a link on the sidebar that takes you to ATT&CK Navigator.
From https://github.com/mitre-attack/attack-navigator:
      The ATT&CK Navigator is designed to provide basic navigation and annotation of ATT&CK matrices,
      something that people are already doing today in tools like Excel. We’ve designed it to be simple and
      generic - you can use the Navigator to visualize your defensive coverage, your red/blue team planning,
      the frequency of detected techniques or anything else you want to do. The Navigator doesn’t care - it just
      allows you to manipulate the cells in the matrix (color coding, adding a comment, assigning a numerical
      value, etc.). We thought having a simple tool that everyone could use to visualize the matrix would help
      make it easy to use ATT&CK.
      The principal feature of the Navigator is the ability for users to define layers - custom views of the
      ATT&CK knowledge base - e.g. showing just those techniques for a particular platform or highlighting
      techniques a specific adversary has been known to use. Layers can be created interactively within the
      Navigator or generated programmatically and then visualized via the Navigator.
6.15.1 Accessing
To access Navigator, log into Security Onion Console (SOC) and then click the Navigator hyperlink on the left side.
6.15.2 Layers
The default layer is titled Playbook and is automatically updated when a Play from Playbook is made active/inactive.
This allows you to see your Detection Playbook coverage across the ATT&CK framework.
Right-clicking any Technique and selecting View Related Plays will open Playbook with a pre-filtered view of
any plays that are tagged with the selected Technique.
6.15.3 Configuration
Navigator reads its configuration from /opt/so/conf/navigator/. However, please keep in mind that if you
make any changes here they may be overwritten since the config is managed with Salt.
Note:
For more information about ATT&CK Navigator, please see:
https://github.com/mitre-attack/attack-navigator
Security Onion Desktop
Full-time analysts may want to use a dedicated Security Onion desktop. This allows you to investigate pcaps, malware,
and other potentially malicious artifacts without impacting your Security Onion deployment or your usual desktop
environment.
Note: Security Onion Desktop currently only supports Oracle Linux 9, so you’ll either need to use our Security Onion
ISO image (recommended) or a manual installation of Oracle Linux 9.
Security Onion Desktop consists of a full desktop environment including Chromium, NetworkMiner, Wireshark, and
other analyst tools.
Installation
There are a few different ways to install Security Onion Desktop:
    • Our Security Onion ISO image includes a boot menu option for Desktop installs that will partition your disk
      appropriately and immediately perform a Desktop installation. The minimum disk size is 50GB.
    • If you’re doing a network installation on Oracle Linux 9 (NOT using our Security Onion ISO image), then in
      our normal Setup wizard, you can choose OTHER and then choose ANALYST.
    • The so-desktop-install command is totally independent of the standard setup process, so you can run it
      before or after setup, or skip setup entirely if all you really want is the Analyst desktop itself.
Joining to Grid
You can optionally join your Desktop installation to your grid. This allows it to pull updates from the grid and
automatically trust the grid’s HTTPS certificate. It also updates the manager’s firewall to allow the Desktop installation
to connect. Starting with Security Onion 2.4.20, Desktop nodes will now display on the Grid page along with the other
grid nodes.
If you choose not to join your Desktop installation to your grid, then you may need to allow the traffic through the
host-based Firewall by going to Administration –> Configuration –> firewall –> hostgroups –> analyst.
Disabling
The analyst desktop is controlled via Salt pillar. If you need to disable the Desktop environment, find the
workstation setting in your Salt pillar and change enabled: true to enabled: false:
workstation:
  gui:
    enabled: false
7.1 Chromium
Chromium is the web browser included in our Security Onion Desktop installation.
Note:
For more information about Chromium, please see:
https://www.chromium.org/chromium-projects/
7.2 NetworkMiner
From https://www.netresec.com/?page=networkminer:
        NetworkMiner is an open source Network Forensic Analysis Tool (NFAT) for Windows (but also works
        in Linux / Mac OS X / FreeBSD). NetworkMiner can be used as a passive network sniffer/packet cap-
        turing tool in order to detect operating systems, sessions, hostnames, open ports etc. without putting any
        traffic on the network. NetworkMiner can also parse PCAP files for off-line analysis and to regener-
        ate/reassemble transmitted files and certificates from PCAP files.
        NetworkMiner makes it easy to perform advanced Network Traffic Analysis (NTA) by providing extracted
        artifacts in an intuitive user interface. The way data is presented not only makes the analysis simpler, it
        also saves valuable time for the analyst or forensic investigator.
7.2.1 Usage
NetworkMiner is a part of our Security Onion Desktop installation. Our desktop automatically registers NetworkMiner
as a pcap handler, so if you download a pcap file from the PCAP interface, you can simply click on it to open in
NetworkMiner.
7.2.2 Screenshot
Suppose you are looking at an interesting HTTP file download in PCAP and want to extract the file. Click the PCAP
download button and then open the pcap file with NetworkMiner. NetworkMiner will automatically attempt to detect
and extract any files transferred. You can access these extracted files on the Files tab. If any files are images, they can
be viewed on the Images tab.
Note:
For more information about NetworkMiner, please see:
https://www.netresec.com/?page=networkminer
7.3 Wireshark
From https://www.wireshark.org/:
      Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s hap-
      pening on your network at a microscopic level and is the de facto (and often de jure) standard across many
      commercial and non-profit enterprises, government agencies, and educational institutions. Wireshark de-
      velopment thrives thanks to the volunteer contributions of networking experts around the globe and is the
      continuation of a project started by Gerald Combs in 1998.
7.3.1 Usage
7.3.2 Screenshot
Suppose you are looking at an interesting HTTP file download in PCAP and want to extract the file. Click the PCAP
download button and then open the pcap file with Wireshark. To extract files from HTTP traffic, click File - Export
Objects - HTTP. Then select the file(s) to save and specify where to save them.
Network Visibility
When you log into Security Onion Console (SOC), you may see alerts from Suricata or Intrusion Detection Honeypot,
protocol metadata logs from Zeek or Suricata, file analysis logs from Strelka, or full packet capture from Stenographer.
How is that data generated and stored? This section covers the various processes that Security Onion uses to analyze
and log network traffic.
8.1 AF-PACKET
Security Onion uses AF-PACKET to collect traffic from network interfaces. AF-PACKET is built into the Linux
kernel and includes fanout capabilities enabling it to act as a flow-based load balancer. This means, for example, if
you configure Suricata for 4 AF-PACKET threads then each thread would receive about 25% of the total traffic that
AF-PACKET is seeing.
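For illustration, here is a minimal sketch of what flow-based fanout looks like in Suricata's af-packet configuration
syntax. The interface name is hypothetical, and on Security Onion you would adjust the worker count via Administration
–> Configuration rather than editing this by hand:

af-packet:
  - interface: enp0s8            # hypothetical sniffing interface
    threads: 4                   # each worker receives roughly 25% of the traffic
    cluster-id: 99               # workers sharing a cluster-id form one fanout group
    cluster-type: cluster_flow   # balance by flow so each flow stays on one worker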
 Warning: If you try to test AF-PACKET fanout using tcpreplay locally, please note that load balancing will
 not work properly and all (or most) traffic will be handled by the first worker in the AF-PACKET cluster. If you
 need to test AF-PACKET load balancing properly, you can run tcpreplay on another machine connected to your
 AF-PACKET machine.
 Warning:
 Please note that Stenographer should correctly log traffic on a VLAN but won’t log the actual VLAN tags due to
 the way that AF-PACKET works:
 https://github.com/google/stenographer/issues/211
Note:
For more information about AF-PACKET, please see:
https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt
8.2 Stenographer
Security Onion uses Stenographer to write network traffic to disk. From https://github.com/google/stenographer:
        Stenographer is a full-packet-capture utility for buffering packets to disk for intrusion detection and in-
        cident response purposes. It provides a high-performance implementation of NIC-to-disk packet writing,
        handles deleting those files as disk fills up, and provides methods for reading back specific sets of packets
        quickly and easily.
Stenographer uses AF-PACKET for packet acquisition. It’s important to note that Stenographer is totally independent
from Suricata and Zeek. This means that Stenographer has no impact on your NIDS alerts and protocol metadata.
8.2.1 Output
Stenographer writes full packet capture to /nsm/pcap/. It will automatically start purging old data once the partition
reaches 90%. This value is configurable as shown in the Configuration section below.
8.2.2 Analysis
You can access full packet capture via the PCAP interface. Alerts, Dashboards, Hunt, and Kibana allow you to easily
pivot to the PCAP interface.
8.2.3 Command Line
You can also access packet capture from the command line of the box where the pcap is stored using a steno query
as defined at https://github.com/google/stenographer#querying. In the following examples, replace
“YourStenoQueryHere” with your actual steno query.
The first option is using docker to run stenoread. If the query succeeds, you can then find the resulting pcap file in
/nsm/pcaptmp/ in the host filesystem:
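A hedged sketch of that docker invocation follows; the image name and certificate mount are assumptions that vary by
Security Onion version, so confirm them with sudo docker ps and adjust accordingly:

sudo docker run --rm --net=host \
  -v /opt/so/conf/steno/certs:/etc/stenographer/certs \
  -v /nsm/pcaptmp/:/tmp/ \
  --entrypoint stenoread \
  ghcr.io/security-onion-solutions/so-steno \
  "YourStenoQueryHere" -w /tmp/extracted.pcap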
We’ve included a wrapper script called so-pcap-export to make this a little easier. For example:
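A minimal usage sketch; the argument order (query, then output filename without extension) is an assumption, so check
the script's usage output to confirm:

sudo so-pcap-export "YourStenoQueryHere" output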
If the query succeeds, you can then find the resulting output.pcap file in /nsm/pcapout/ in the host filesystem.
8.2.4 Configuration
You can configure Stenographer by going to Administration –> Configuration –> pcap.
For example, suppose you want to change the default value for purging old pcap. You could go to Administration –>
Configuration –> pcap –> config –> diskfreepercentage and set the value to something appropriate for your system.
By default, Stenographer limits the number of files in the pcap directory to 30000 to avoid limitations with the ext3
filesystem. However, if you’re using the ext4 or xfs filesystems, then it is safe to increase this value. So if you have a
large amount of storage and find that you only have 3 weeks worth of PCAP on disk while still having plenty of free
space, then you may want to increase this default setting. To do so, you can go to Administration –> Configuration –>
pcap –> config –> maxdirectoryfiles and set the value to something appropriate for your system.
Diagnostic logging for Stenographer can be found at /opt/so/log/stenographer/. Depending on what you’re
looking for, you may also need to look at the Docker logs for the container:
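For example, assuming the Stenographer container is named so-steno:

sudo docker logs so-steno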
8.2.7 Disabling
Since Stenographer is totally independent from Suricata and Zeek, you can disable it without impacting your NIDS
alerts or protocol metadata. If you decide to disable Stenographer, you can do so by going to Administration –>
Configuration –> pcap –> enabled.
 Warning:
 Please note that Stenographer should correctly record traffic on a VLAN but won’t log the actual VLAN tags due
 to the way that AF-PACKET works:
 https://github.com/google/stenographer/issues/211
8.3 Suricata
From https://suricata.io:
      Suricata is a free and open source, mature, fast and robust network threat detection engine. Suricata
      inspects the network traffic using a powerful and extensive rules and signature language, and has powerful
      Lua scripting support for detection of complex threats.
Suricata NIDS alerts can be found in Alerts, Dashboards, Hunt, and Kibana. Here’s an example of Suricata NIDS
alerts in Alerts:
If enabled, Suricata metadata (protocol logs) can be found in Dashboards, Hunt, and Kibana.
8.3.1 Community ID
Please see the Community ID section for more information.
8.3.2 Configuration
You can configure Suricata by going to Administration –> Configuration –> suricata.
If you would like to configure/manage IDS rules, please see the Managing Rules and Managing Alerts sections.
8.3.3 HOME_NET
The HOME_NET variable defines the networks that are considered home networks (those networks that you are
monitoring and defending). The default value is RFC1918 private address space (10.0.0.0/8, 192.168.0.0/16, and
172.16.0.0/12). You can modify this default value by going to Administration –> Configuration –> suricata –> config
–> vars –> address-groups –> HOME_NET.
8.3.4 EXTERNAL_NET
By default, EXTERNAL_NET is set to any (which includes HOME_NET) to detect lateral movement inside your
environment. You can modify this default value by going to Administration –> Configuration –> suricata –> config –>
vars –> address-groups –> EXTERNAL_NET.
8.3.5 Performance
If Suricata is experiencing packet loss, then you may need to do one or more of the following: tune the ruleset (see
the Managing Alerts section), apply a BPF, adjust max-pending-packets in the Suricata configuration, or adjust
AF-PACKET workers in Administration –> Configuration –> suricata –> config –> af-packet –> threads.
Note:
For other tuning considerations, please see:
https://suricata.readthedocs.io/en/latest/performance/tuning-considerations.html
If you have multiple physical CPUs, you’ll most likely want to pin sniffing processes to a CPU in the same Non-
Uniform Memory Access (NUMA) domain that your sniffing NIC is bound to. Accessing a CPU in the same NUMA
domain is faster than across a NUMA domain.
Note:
For more information about determining NUMA domains using lscpu and lstopo, please see:
https://github.com/brokenscripts/cpu_pinning
8.3.6 Thresholding
To edit the thresholding configuration, go to Administration –> Configuration –> suricata –> thresholding –> SIDS.
Reference the example files at https://github.com/Security-Onion-Solutions/securityonion/blob/master/pillar/thresholding/pillar.usage
and https://github.com/Security-Onion-Solutions/securityonion/blob/master/pillar/thresholding/pillar.example.
8.3.7 Metadata
Depending on what options you choose in Setup, it may ask if you want to use Zeek or Suricata for metadata. If you
choose Suricata, then here are some of the kinds of metadata you can expect to see in Dashboards or Hunt:
    • Connections
    • DHCP
    • DNS
    • Files
    • FTP
    • HTTP
    • SSL
If you later find that some of that metadata is unnecessary, you can filter out the unnecessary metadata by writing
rules. We have included some examples at https://github.com/Security-Onion-Solutions/securityonion/blob/dev/salt/idstools/sorules/filters.rules.
The global pillar on your manager node controls the metadata engine on each sensor. Only one metadata engine at a
time is supported.
To change your grid’s metadata engine from Zeek to Suricata:
    • On the manager, go to Administration –> Configuration –> global –> mdengine and change the value from
      ZEEK to SURICATA.
    • Stop Zeek on all nodes:
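One hedged way to do this from the manager, assuming the standard so-zeek-stop control script is present on each node:

sudo salt '*' cmd.run 'so-zeek-stop'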
8.3.8 File Extraction
If you choose Suricata for metadata, it will extract files from network traffic and Strelka will then analyze those
extracted files. If you would like to extract additional file types, then you can add file types as shown at
https://github.com/Security-Onion-Solutions/securityonion/blob/dev/salt/idstools/sorules/extraction.rules.
8.3.9 Disabling
Suricata can be disabled by going to Administration –> Configuration –> suricata –> enabled.
If you’re not seeing the Suricata alerts that you expect to see, here are some things that you can check:
    • If you have metadata enabled, check to see if you have metadata for the connections. Depending on your
      configuration, this could be Suricata metadata or Zeek metadata.
    • If you have metadata enabled but aren’t seeing any metadata, then something may be preventing the process
      from seeing the traffic. Check to see if you have any BPF configuration that may cause the process to ignore
      the traffic. If you’re sniffing traffic from the network, verify that the traffic is reaching the NIC using tcpdump.
      If importing a pcap file, verify that file contains the traffic you expect and that the Suricata process can read the
      file and any parent directories.
    • Check your HOME_NET configuration to make sure it includes the networks that you’re watching traffic for.
    • Check to see if you have a full NIDS ruleset with rules that should specifically alert on the traffic and that those
      rules are enabled.
    • Check to see if you have any threshold or suppression configuration that might be preventing alerts.
    • Check the Suricata log for additional clues.
    • Check the Elastic Agent, Logstash, and Elasticsearch logs for any pipeline issues that may be preventing the
      alerts from being written to Elasticsearch.
    • Try installing a simple import node (perhaps in a VM) following the steps in the First Time Users section and
      see if you get alerts there. If so, compare the working system to the non-working system and determine where
      the differences are.
8.3.12 Stats
8.3.13 Testing
To test a new rule, use the following utility on a node that runs Suricata (i.e. Forward or Import).
The file should contain the new rule that you would like to test. The pcap should contain network data that will trigger
the rule.
8.3.14 VLAN
If your network traffic has VLAN tags, then Suricata will log them. Dashboards has a VLAN dashboard which will
show this data.
8.4 Zeek
Security Onion includes Zeek for network protocol analysis and metadata. From https://www.zeek.org/:
      Zeek is a powerful network analysis framework that is much different from the typical IDS you may know.
      (Zeek is the new name for the long-established Bro system. Note that parts of the system retain the “Bro”
      name, and it also often appears in the documentation and distributions.)
Zeek logs are sent to Elasticsearch for parsing and storage and can then be found in Dashboards, Hunt, and Kibana.
Here’s an example of Zeek logs in Hunt:
8.4.1 Community ID
Please see the Community ID section for more information.
8.4.2 Packet Loss
Zeek reports both packet loss and capture loss and you can find graphs of these in InfluxDB. If Zeek reports packet
loss, then you most likely need to adjust the number of Zeek workers as shown below or filter out traffic using BPF. If
Zeek is reporting capture loss but no packet loss, this usually means that the capture loss is happening upstream in the
tap or span port itself.
8.4.3 Configuration
You can configure Zeek by going to Administration –> Configuration –> zeek.
8.4.4 HOME_NET
The HOME_NET variable defines the networks that are considered home networks (those networks that you are
monitoring and defending). The default value is RFC1918 private address space (10.0.0.0/8, 192.168.0.0/16, and
172.16.0.0/12). You can modify this default value by going to Administration –> Configuration –> zeek –> config –>
networks –> HOME_NET.
8.4.5 Performance
Zeek uses AF-PACKET so that you can spin up multiple Zeek workers to handle more traffic.
If you have multiple physical CPUs, you’ll most likely want to pin sniffing processes to a CPU in the same Non-
Uniform Memory Access (NUMA) domain that your sniffing NIC is bound to. Accessing a CPU in the same NUMA
domain is faster than across a NUMA domain.
Note: For more information about determining NUMA domains using lscpu and lstopo, please see
https://github.com/brokenscripts/cpu_pinning.
You can modify Zeek worker count by going to Administration –> Configuration –> zeek –> config –> node –>
workers.
8.4.6 Disabling
Zeek can be disabled by going to Administration –> Configuration –> zeek –> enabled.
8.4.7 Syslog
To forward Zeek logs to an external syslog collector, please see the Syslog Output section.
8.4.8 Logs
Zeek logs are stored in /nsm/zeek/logs. They are collected by Elastic Agent, parsed by and stored in Elastic-
search, and viewable in Dashboards, Hunt, and Kibana.
We configure Zeek to output logs in JSON format. If you need to parse those JSON logs from the command line, you
can use jq.
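For example, a rough sketch of summarizing the most frequent queries in Zeek's JSON dns.log with jq; the path and the
query field name are illustrative and depend on your log rotation and schema:

jq -r '.query // empty' /nsm/zeek/logs/current/dns.log | sort | uniq -c | sort -rn | head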
Zeek monitors your network traffic and creates logs, such as:
conn.log
    • TCP/UDP/ICMP connections
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/protocols/conn/main.zeek.html#type-Conn::Info
dns.log
    • DNS activity
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/protocols/dns/main.zeek.html#type-DNS::Info
ftp.log
    • FTP activity
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/protocols/ftp/info.zeek.html#type-FTP::Info
http.log
    • HTTP requests and replies
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/protocols/http/main.zeek.html#type-HTTP::Info
ssl.log
    • SSL/TLS handshakes
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/protocols/ssl/main.zeek.html#type-SSL::Info
notice.log
    • Zeek notices
    • For more information, see:
https://docs.zeek.org/en/latest/scripts/base/frameworks/notice/main.zeek.html#type-Notice::Info
Zeek also provides other logs by default and you can read more about them at
https://docs.zeek.org/en/latest/script-reference/log-files.html.
In addition to Zeek’s default logs, we also include protocol analyzers for STUN, TDS, and Wireguard traffic and
several different ICS/SCADA protocols. These analyzers are enabled by default.
We also include MITRE BZAR scripts and you can read more about them at https://github.com/mitre-attack/bzar.
Please note that the MITRE BZAR scripts are disabled by default. If you would like to enable them, you can do so
via Administration –> Configuration –> zeek. Once enabled, you can then check for BZAR detections by going to
Dashboards and selecting the Zeek Notice dashboard.
As you can see, Zeek log data can provide a wealth of information to the analyst, all easily accessible through Dash-
boards, Hunt, or Kibana.
8.4.9 VLAN
If your network traffic has VLAN tags, then Zeek will log them in conn.log. Dashboards includes a VLAN dashboard
which shows this data.
8.4.10 Intel
Please note that Zeek is very strict about the format of intel.dat. When editing this file, please follow these
guidelines:
    • no leading spaces or lines
    • separate fields with a single literal tab character
    • no trailing spaces or lines
The default intel.dat file follows these guidelines so you can reference it as an example of the proper format.
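For example, a hypothetical entry for a watched domain might look like the following (fields separated by single tab
characters, shown here as whitespace):

#fields	indicator	indicator_type	meta.source
evil.example.com	Intel::DOMAIN	my-threat-feed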
When finished editing intel.dat, run sudo salt $SENSORNAME_$ROLE state.highstate to
sync /opt/so/saltstack/local/salt/zeek/policy/intel/ to /opt/so/conf/zeek/policy/
intel/. If you have a distributed deployment with separate forward nodes, it may take up to 15 minutes for intel to
sync to the forward nodes.
If you experience an error, or do not notice /nsm/zeek/logs/current/intel.log being generated, try hav-
ing a look in /nsm/zeek/logs/current/reporter.log for clues. You may also want to restart Zeek after
making changes by running sudo so-zeek-restart.
8.4.11 Diagnostic Logging
Zeek diagnostic logs can be found in /nsm/zeek/logs/. Look for files like reporter.log, stats.log,
stderr.log, and stdout.log. Depending on what you're looking for, you may also need to look at the Docker
logs for the container:
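For example, assuming the Zeek container is named so-zeek:

sudo docker logs so-zeek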
8.5 Strelka
From https://github.com/target/strelka:
      Strelka is a real-time file scanning system used for threat hunting, threat detection, and incident response.
      Based on the design established by Lockheed Martin’s Laika BOSS and similar projects (see: related
      projects), Strelka’s purpose is to perform file extraction and metadata collection at huge scale.
Depending on what options you choose in Setup, it may ask if you want to use Zeek or Suricata for metadata.
Whichever engine you choose for metadata will then extract files from network traffic. Strelka then analyzes those
files and they end up in /nsm/strelka/processed/.
Security Onion checks file hashes before sending to Strelka to avoid analyzing the same file multiple times in a 48
hour period.
8.5.1 Alerts
Strelka scans files using YARA rules. If it detects a match, then it will generate an alert that can be found in Alerts,
Dashboards, Hunt, or Kibana. Here is an example of Strelka detecting Poison Ivy RAT:
Drilling into that alert, we find more information about the file and the YARA rule:
You can read more about YARA rules in the Adding Local Rules section.
8.5.2 Logs
Even if Strelka doesn’t detect a YARA match, it will still log metadata about the file. You can find Strelka logs in
Dashboards, Hunt, and Kibana. Here’s an example of the default Strelka dashboard in Dashboards:
8.5.3 Configuration
You can configure Strelka by going to Administration –> Configuration –> strelka.
8.5.4 Diagnostic Logging
Strelka diagnostic logs are in /nsm/strelka/log/. Depending on what you're looking for, you may also need to
look at the Docker logs for the containers:
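Strelka runs as several containers; the names below are assumptions based on Security Onion's so-<name> convention,
so list them first:

sudo docker ps | grep strelka
sudo docker logs so-strelka-backend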
8.6 Intrusion Detection Honeypot
Security Onion includes an Intrusion Detection Honeypot (IDH) node option. This allows you to build a node that
mimics common services such as HTTP, FTP, and SSH. Any interaction with these fake services will automatically
result in an alert.
From the book, Intrusion Detection Honeypots (Sanders, C):
      An Intrusion Detection Honeypot (IDH) is a security resource placed inside your network perimeter that
      generates alerts when probed or attacked. These systems, services, and tokens rely on deception to lure at-
      tackers in and convince them to interact. Unbeknownst to the attacker, you’re alerted when that interaction
      occurs and can begin investigating the compromise.
Chris Sanders and Josh Brower presented the IDH concept at Security Onion Conference 2021 and you can view the
recording at https://www.youtube.com/watch?v=NzUhfARVfJk&list=PLljFlTO9rB17mESq7Z9OeFKvVh39vJW34&index=5.
8.6.1 Installation
IDH nodes are dedicated to just being IDH nodes and cannot run any other services. Therefore, you must have a
separate manager to connect to. You can join a new IDH node to an existing Standalone deployment or full distributed
deployment. Our ISO image includes a boot menu option for IDH installs that will partition your disk appropriately
with lower requirements than a full installation.
 Warning: The IDH node is designed to be placed inside your network perimeter! It should not be accessible from
 the Internet!
8.6.2 Configuration
The IDH node utilizes OpenCanary, which is a modular open-source honeypot by Thinkst. You can read more about it
at https://github.com/thinkst/opencanary.
OpenCanary logs can be found through Dashboards, Hunt, or Kibana using the following queries:
event.module: opencanary
event.dataset: idh
Sigma Plays within Playbook look for certain logs emitted by OpenCanary to generate alerts, which can be viewed in
the Alerts interface.
The following services are available for use with the IDH node. Pay special attention to how an alert is triggered for a
service as some of them require more than a simple connection request to trigger.
8.6.5 sshd
For IDH nodes, the local sshd is configured to listen on TCP/2222 and connections are only accepted from the Manager
node. This allows TCP/22 to be used for honeypot services.
You can configure IDH by going to Administration –> Configuration –> idh.
For example, suppose that we already have the HTTP service running but we want to change the default port from 80
to 8080.
Host Visibility
Security Onion can consume many kinds of host logs. You can send logs to Security Onion via your choice of either
Elastic Agent or Syslog:
    • Choose Elastic Agent for comprehensive telemetry if you can install an agent on the host.
    • Choose Syslog if you can’t install an agent but the device supports sending standard syslog. Examples include
      firewalls, switches, routers, and other network devices.
For Windows endpoints, you can optionally augment the standard Windows logging with Sysmon and/or Autoruns.
9.1 Elastic Agent
From https://www.elastic.co/elastic-agent:
      With Elastic Agent you can collect all forms of data from anywhere with a single unified agent per host.
      One thing to install, configure, and scale.
Each Security Onion node uses the Elastic Agent to transport logs to Elasticsearch. You can also deploy the Elastic
Agent to your endpoints to transport logs to your Security Onion deployment.
9.1.1 Deployment
Note: In order to receive logs from the Elastic Agent, Security Onion must be running Logstash. Evaluation Mode
and Import Mode do not run Logstash, so you’ll need Standalone or a full Distributed Deployment. In a Distributed
Deployment, forward nodes do not run Logstash, so you’ll need to configure agents to send to your manager or receiver
nodes. For more information, please see the Architecture section.
To deploy an Elastic Agent to an endpoint, go to the Security Onion Console (SOC) Downloads page and download
the proper Elastic Agent for the operating system of that endpoint. Don't forget to allow the agent to connect through
the firewall by going to Administration –> Configuration –> firewall –> hostgroups.
9.1.2 Logs
Once the agent starts sending logs, you should be able to find them in Dashboards, Hunt, or Kibana.
9.1.3 Management
9.1.5 Integrations
You can read more about integrations in the Elastic Fleet section and at https://docs.elastic.co/integrations.
Note: For more information about the Elastic Agent, please see
https://www.elastic.co/guide/en/fleet/current/fleet-overview.html.
9.2 Syslog
If you want to send syslog from other devices, you should check to see if the device has an existing Elastic Agent
integration. If so, using the Elastic Agent integration should provide some parsing by default.
If your device does not have an existing Elastic Agent integration, you can still collect standard syslog. Start by going
to Administration –> Configuration –> firewall –> hostgroups.
Then choose the syslog option to allow the port through the firewall. If sending syslog to a sensor, please see the
Examples in the Firewall section. If you need to add custom parsing for those syslog logs, we recommend using
Elasticsearch ingest parsing.
Also note that if you’re monitoring network traffic with Zeek, then by default it will detect any syslog in that network
traffic and log it even if that syslog was not destined for that particular Security Onion node.
9.3 Sysmon
From https://learn.microsoft.com/en-us/sysinternals/downloads/sysmon:
      System Monitor (Sysmon) is a Windows system service and device driver that, once installed on a system,
      remains resident across system reboots to monitor and log system activity to the Windows event log. It
      provides detailed information about process creations, network connections, and changes to file creation
      time. By collecting the events it generates using Windows Event Collection or SIEM agents and subse-
      quently analyzing them, you can identify malicious or anomalous activity and understand how intruders
      and malware operate on your network.
9.3.1 Integration
Josh Brower wrote a great paper on integrating Sysmon into Security Onion:
https://www.sans.org/reading-room/whitepapers/forensics/sysmon-enrich-security-onion-039-s-host-level-capabilities-35837
Please note that the paper is a few years old and was therefore written for an older version of Security Onion.
9.3.2 Downloads
9.3.3 Transport
Sysmon logs can be collected and transported using Elastic Agent. Confirm that your configuration does NOT use the
Elastic Sysmon module. Security Onion will do all the necessary parsing.
9.3.4 Visualizations
Once Security Onion is receiving and parsing Sysmon data, you can search for that data and visualize it via Dash-
boards, Hunt, or Kibana. Each of these interfaces have at least one dashboard or query specifically designed for
Sysmon data.
Note:
For more information about sysmon, please see:
https://learn.microsoft.com/en-us/sysinternals/downloads/sysmon
9.4 Autoruns
From https://docs.microsoft.com/en-us/sysinternals/downloads/autoruns:
        This utility, which has the most comprehensive knowledge of auto-starting locations of any startup mon-
        itor, shows you what programs are configured to run during system bootup or login, and when you start
        various built-in Windows applications like Internet Explorer, Explorer and media players. These programs
        and drivers include ones in your startup folder, Run, RunOnce, and other Registry keys. Autoruns reports
        Explorer shell extensions, toolbars, browser helper objects, Winlogon notifications, auto-start services,
        and much more. Autoruns goes way beyond other autostart utilities.
9.4.1 Integration
Pertinax
Josh Brower developed a great project called Pertinax to normalize autoruns data and integrate it into Security Onion:
https://github.com/defensivedepth/Pertinax/wiki/Introduction
AutorunsToWinEventLog
Another method for integrating Autoruns into your logging infrastructure is AutorunsToWinEventLog:
https://github.com/palantir/windows-event-forwarding/tree/master/AutorunsToWinEventLog
9.4.2 Downloads
Note:
For more information about Autoruns, please see:
https://docs.microsoft.com/en-us/sysinternals/downloads/autoruns
Logs
Once logs are generated by network sniffing processes or endpoints, where do they go? How are they parsed? How
are they stored? That’s what we’ll discuss in this section.
10.1 Ingest
10.1.1 Import
Core Pipeline: Elastic Agent [IMPORT Node] –> Elasticsearch Ingest [IMPORT Node]
Logs: Zeek, Suricata
10.1.2 Eval
Core Pipeline: Elastic Agent [EVAL Node] –> Elasticsearch Ingest [EVAL Node]
Logs: Zeek, Suricata, Osquery/Fleet
Osquery Shipper Pipeline: Osquery [Endpoint] –> Fleet [EVAL Node] –> Elasticsearch Ingest via Core Pipeline
Logs: WEL, Osquery, syslog
10.1.3 Standalone
Core Pipeline: Elastic Agent [SA Node] –> Logstash [SA Node] –> Redis [SA Node] <–> Logstash [SA Node] –>
Elasticsearch Ingest [SA Node]
Logs: Zeek, Suricata, Osquery/Fleet, syslog
WinLogbeat: Winlogbeat [Windows Endpoint] –> Logstash [SA Node] –> Redis [SA Node] <–> Logstash [SA Node]
–> Elasticsearch Ingest [SA Node]
Logs: WEL, Sysmon
10.1.4 Fleet
Pipeline: Elastic Agent [Fleet Node] –> Logstash [M | MS] –> Elasticsearch Ingest [S | MS]
Logs: Osquery
10.1.5 Manager
Core Pipeline: Elastic Agent [Fleet | Forward] –> Logstash [Manager] –> Redis [Manager]
Logs: Zeek, Suricata, Osquery/Fleet, syslog
10.1.6 Manager Search
Core Pipeline: Elastic Agent [Fleet | Forward] –> Logstash [MS] –> Redis [MS] <–> Logstash [MS] –> Elasticsearch
Ingest [MS]
Logs: Zeek, Suricata, Osquery/Fleet, syslog
Pipeline: Elastic Agent [MS] –> Logstash [MS] –> Elasticsearch Ingest [MS]
Logs: Local Osquery/Fleet
WinLogbeat: Winlogbeat [Windows Endpoint] –> Logstash [MS] –> Elasticsearch Ingest [MS]
Logs: WEL
10.1.7 Heavy
Pipeline: Elastic Agent [Heavy Node] –> Logstash [Heavy] –> Redis [Heavy] <–> Logstash [Heavy] –>
Elasticsearch Ingest [Heavy]
Logs: Zeek, Suricata, Osquery/Fleet, syslog
10.1.8 Search
Pipeline: Redis [Manager] –> Logstash [Search] –> Elasticsearch Ingest [Search]
Logs: Zeek, Suricata, Osquery/Fleet, syslog
10.1.9 Forward
Pipeline: Elastic Agent [Forward] –> Logstash [M | MS] –> Elasticsearch Ingest [S | MS]
Logs: Zeek, Suricata, syslog
10.2 Logstash
From https://www.elastic.co/products/logstash:
      Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of
      sources, transforms it, and then sends it to your favorite “stash.”
When Security Onion is running in Standalone mode or in a full distributed deployment, Logstash transports unparsed
logs to Elasticsearch which then parses and stores those logs. It’s important to note that Logstash does NOT run when
Security Onion is configured for Import or Eval mode. You can read more about that in the Architecture section.
10.2.1 Configuration
You can configure Logstash by going to Administration –> Configuration –> logstash.
ls_pipeline_batch_size
      The maximum number of events an individual worker thread will collect from inputs before attempting
      to execute its filters and outputs. Larger batch sizes are generally more efficient, but come at the cost of
      increased memory overhead. This is set to 125 by default.
ls_pipeline_workers
      The number of workers that will, in parallel, execute the filter and output stages of the pipeline. If you
      find that events are backing up, or that the CPU is not saturated, consider increasing this number to better
      utilize machine processing power. By default this value is set to the number of cores in the system.
lsheap
If total available memory is 8GB or greater, Setup sets the Logstash heap size to 25% of available memory, but no
greater than 4GB.
For more information, please see
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops.
You may need to adjust the value depending on your system’s performance. The changes will be applied the next
time the minion checks in. You can force it to happen immediately by running sudo salt-call state.apply
logstash on the actual node or by running sudo salt $SENSORNAME_$ROLE state.apply logstash
on the manager node.
10.2.2 Parsing
Logstash does not parse logs in Security Onion, so modifying existing parsers or adding new parsers should be done
via Elasticsearch.
10.2.3 Forwarding Events to an External Destination
Please keep in mind that we don't provide free support for third party systems, so this section will be just a brief
introduction to how you would send syslog to external syslog collectors. If you need commercial support, please see
https://www.securityonionsolutions.com.
To forward events to an external destination with minimal modifications to the original event, create a new custom
configuration file on the manager in /opt/so/saltstack/local/salt/logstash/pipelines/config/custom/
for the applicable output. We recommend using either the http, tcp, udp, or syslog output plugin. At
this time we only support the default bundled Logstash output plugins.
For example, to forward all Zeek events from the dns dataset, we could use a configuration like the following:
output {
  if [module] =~ "zeek" and [dataset] =~ "dns" {
    udp {
      id => "cloned_events_out"
      host => "192.168.x.x"
      port => 1001
      codec => "json_lines"
    }
  }
}
 Warning: When using the tcp output plugin, if the destination host or port is down, it will cause the Logstash
 pipeline to be blocked. To avoid this, try using the other output options or consider having forwarded logs use a
 separate Logstash pipeline.
 Also keep in mind that when forwarding logs from the manager, some fields may not be set as expected since the
 events have not yet been processed by the Ingest Node configuration.
In Security Onion Console (SOC), navigate to Administration –> Configuration. At the top of the page, click
the Options menu and then enable the Show all configurable settings, including advanced
settings option. Then navigate to logstash –> defined_pipelines –> manager and append the name of your newly
created file to the list of config files used for the manager pipeline:
custom/myfile.conf
The configuration will be applied at the next 15-minute interval or you can apply it immediately by clicking the
SYNCHRONIZE GRID button under the Options menu.
You can monitor events flowing through the output by running the following command on the manager:
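One hedged way to verify events are leaving the manager is to watch for traffic to the destination defined in the
example output above (UDP to 192.168.x.x:1001); substitute your actual host and port:

sudo tcpdump -nn -i any host 192.168.x.x and udp port 1001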
To forward events to an external destination AFTER they have traversed the Logstash pipelines (NOT ingest node
pipelines), perform the same steps as above, but instead of adding the reference for your Logstash output to the
manager pipeline, add it to the search pipeline instead. The configuration will be applied at the next 15-minute interval
or you can apply it immediately by clicking the SYNCHRONIZE GRID button under the Options menu.
You can monitor events flowing through the output by running the following command on the search nodes:
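The same hedged tcpdump sketch shown earlier applies here, run on each search node instead of the manager:

sudo tcpdump -nn -i any host 192.168.x.x and udp port 1001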
Please keep in mind that events will be forwarded from all applicable search nodes, as opposed to just the manager.
10.2.6 Queue
Memory-backed
From https://www.elastic.co/guide/en/logstash/current/persistent-queues.html:
      By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline work-
      ers) to buffer events. The size of these in-memory queues is fixed and not configurable.
Persistent
If you experience adverse effects using the default memory-backed queue, you might consider a disk-based persistent
queue. From https://www.elastic.co/guide/en/logstash/current/persistent-queues.html:
      In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature
      which will store the message queue on disk. Persistent queues provide durability of data within Logstash.
      The total capacity of the queue in number of bytes. Make sure the capacity of your disk drive is greater
      than the value you specify here. If both queue.max_events and queue.max_bytes are specified, Logstash
      uses whichever criteria is reached first.
If you want to check for dropped events, you can enable the dead letter queue. This will write all records that are not
able to make it into Elasticsearch into a sequentially-numbered file (for each start/restart of Logstash).
This can be achieved by adding the following to the Logstash configuration:
dead_letter_queue.enable: true
More information:
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html
Redis
When using search nodes, Logstash on the manager node outputs to Redis (which also runs on the manager node).
Redis queues events from the Logstash output (on the manager node) and the Logstash input on the search node(s)
pull(s) from Redis. If you notice new events aren’t making it into Elasticsearch, you may want to first check Logstash
on the manager node and then the Redis queue.
The Logstash log file is located at /opt/so/log/logstash/logstash.log. Log file settings can be ad-
justed in /opt/so/conf/logstash/etc/log4j2.properties. By default, logs are set to rollover daily
and purged after 7 days. Depending on what you’re looking for, you may also need to look at the Docker logs for the
container:
sudo docker logs so-logstash
10.2.8 Errors
Read-Only
10.3 Redis
From https://redis.io/:
      Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and
      message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range
      queries, bitmaps, hyperloglogs and geospatial indexes with radius queries.
On Standalone (non-Eval) installations and distributed deployments, Logstash on the manager node outputs to Redis.
Search nodes can then consume from Redis.
10.3.1 Queue
To see how many events are currently queued in Redis, run the following on the manager node:
sudo so-redis-count
If the queue is backed up and doesn’t seem to be draining, try stopping Logstash on the manager node:
sudo so-logstash-stop
If the Redis queue looks okay, but you are still having issues with logs getting indexed into Elasticsearch, you will
want to check the Logstash statistics on the search node(s).
10.3.2 Tuning
Security Onion configures Redis to use 812MB of your total system memory. If you have sufficient RAM available,
you may want to increase the redis_maxmemory setting by going to Administration –> Configuration –> redis.
This value is in megabytes, so to allow Redis to use 8GB of RAM you would set the value to 8192.
Logstash on the manager node is configured to send to Redis. For best performance, you may want to tune the
ls_pipeline_batch_size value at Administration –> Configuration –> logstash_settings to find the sweet spot
for your deployment.
Note:
For more information about the Logstash output plugin for Redis, please see:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html
Logstash on search nodes pulls from Redis. For best performance, you may want to tune
ls_pipeline_batch_size and ls_input_threads at Administration –> Configuration –>
logstash_settings to find the sweet spot for your deployment.
Note:
For more information about the Logstash input plugin for Redis, please see:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html
Redis logs can be found at /opt/so/log/redis/. Depending on what you’re looking for, you may also need to
look at the Docker logs for the container:
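For example, assuming the Redis container is named so-redis:

sudo docker logs so-redis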
10.4 Elasticsearch
From https://www.elastic.co/products/elasticsearch:
      Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing num-
      ber of use cases. As the heart of the Elastic Stack, it centrally stores your data for lightning fast search,
      fine-tuned relevancy, and powerful analytics that scale with ease.
10.4.1 Data
Indexing
Starting in Security Onion 2.4, most data is associated with a data stream, which is an abstraction from traditional
indices that leverages one or more backing indices to manage and represent the data within the data stream. The usage
of data streams allows for greater flexibility in data management.
Data streams can be targeted during search or other operations directly, similar to how indices are targeted.
For example, a CLI-based query against Zeek connection records would look like the following:
so-elasticsearch-query logs-zeek-so/_search?q=event.dataset:conn
When this query is run against the backend data, it is actually targeting one or more backing indices, such as:
.ds-logs-zeek-so-2022-03-07.0001
.ds-logs-zeek-so-2022-03-08.0001
.ds-logs-zeek-so-2022-03-08.0002
Similarly, you can target a single backing index with the following query:
so-elasticsearch-query .ds-logs-zeek-so-2022-03-08.0001/_search?q=event.dataset:conn
Schema
Security Onion tries to adhere to the Elastic Common Schema wherever possible. Otherwise, additional fields or slight
modifications to native Elastic field mappings may be found within the data.
Management
In Security Onion 2.4, Elasticsearch data is handled partially by both Curator and ILM
(https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html).
Only Curator performs the following actions:
    • Closing indices that are older than the configured close value
    • Deleting indices when disk usage exceeds the configured retention percentage
10.4.2 Querying
You can query Elasticsearch using web interfaces like Alerts, Dashboards, Hunt, and Kibana. You can also query
Elasticsearch from the command line using a tool like curl. You can also use so-elasticsearch-query.
10.4.3 Authentication
You can authenticate to Elasticsearch using the same username and password that you use for Security Onion Console
(SOC).
You can add new user accounts to both Elasticsearch and Security Onion Console (SOC) at the same time as shown
in the Adding Accounts section. Please note that if you instead create accounts directly in Elastic, then those accounts
will only have access to Elastic and not Security Onion Console (SOC).
10.4.5 Storage
10.4.6 Parsing
Elasticsearch receives unparsed logs from Logstash or Elastic Agent. Elasticsearch then parses and stores those logs.
Parsers are stored in /opt/so/conf/elasticsearch/ingest/. Custom ingest parsers can be placed in
/opt/so/saltstack/local/salt/elasticsearch/files/ingest/. To make these changes take effect,
restart Elasticsearch using so-elasticsearch-restart.
Elastic Agent may pre-parse or act on data before the data reaches Elasticsearch, altering the data stream or index to
which it is written, or other characteristics such as the event dataset or other pertinent information. This configuration
is maintained in the agent policy or integration configuration in Elastic Fleet.
Note:
For more about Elasticsearch ingest parsing, please see:
https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html
10.4.7 Templates
Fields are mapped to their appropriate data type using templates. When making changes for parsing, it is necessary
to ensure fields are mapped to a data type to allow for indexing, which in turn allows for effective aggregation and
searching in Dashboards, Hunt, and Kibana. Elasticsearch leverages both component and index templates.
Component Templates
From https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html:
        Component templates are reusable building blocks that configure mappings, settings, and aliases. While
        you can use component templates to construct index templates, they aren’t directly applied to a set of
        indices.
Also see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html.
Index Templates
From https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html:
        An index template is a way to tell Elasticsearch how to configure an index when it is created. Templates
        are configured prior to index creation. When an index is created - either manually or through indexing a
        document - the template settings are used as a basis for creating the index. Index templates can contain a
        collection of component templates, as well as directly specify settings, mappings, and aliases.
In Security Onion, component templates are stored in
/opt/so/saltstack/default/salt/elasticsearch/templates/component/.
These templates are specified to be used in the index template definitions in
/opt/so/saltstack/default/salt/elasticsearch/defaults.yml.
10.4.8 Community ID
For logs that don’t naturally include Community ID, we use the Elasticsearch Community ID processor:
https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html
10.4.9 Configuration
You can configure Elasticsearch by going to Administration –> Configuration –> elasticsearch.
If you get errors like failed to create query: field expansion for [*] matches too many fields,
limit: 3500, got: XXXX, then this usually means that you're sending in additional logs
and so you have more fields than our default max_clause_count value. To resolve this, you can go to Adminis-
tration –> Configuration –> elasticsearch –> config –> indices –> query –> bool –> max_clause_count and adjust the
value for any boxes running Elasticsearch in your deployment.
Shards
To see your existing shards, run the following command and the number of shards will be shown in the fifth column:
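A hedged sketch using the bundled query wrapper; _cat/indices prints the primary shard count in the pri column
(the fifth column):

so-elasticsearch-query _cat/indices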
Given the sizing tips above, if any of your indices are averaging more than 50GB per shard, then you should probably
increase the shard count until you get below that recommended maximum of 50GB per shard.
The number of shards for an index can be adjusted by going to Administration –> Configuration –> elasticsearch –>
index_settings –> so-INDEX-NAME –> index_template –> template –> settings –> index –> number_of_shards.
Please keep in mind that old indices will retain previous shard settings and the above settings will only be applied to
newly created indices.
Heap Size
If total available memory is 8GB or greater, Setup configures the heap size to be 33% of available memory, but no
greater than 25GB. You may need to adjust the value for heap size depending on your system’s performance. You can
modify this by going to Administration –> Configuration –> elasticsearch –> esheap.
Field limit
Security Onion currently defaults to a field limit of 5000. If you receive error messages from Logstash, or you
would simply like to increase this, you can do so by going to Administration –> Configuration –> elasticsearch –>
index_settings –> so-INDEX-NAME –> index_template –> template –> settings –> index –> mapping –> total_fields
–> limit.
Please note that the change to the field limit will not occur immediately, only on index creation.
Elasticsearch indices are closed based on the close setting shown at Administration –> Configuration –> elastic-
search –> index_settings –> so-INDEX-NAME –> close. This setting configures Curator to close any index older
than the value given. The more indices are open, the more heap is required. Having too many open indices can lead to
performance issues. There are many factors that determine the number of days you can have in an open state, so this
is a good setting to adjust specific to your environment.
Size-based deletion of Elasticsearch indices occurs based on the value of the cluster-wide
elasticsearch.retention.retention_pct setting, which is derived from the total disk space available for
/nsm/elasticsearch across all nodes in the Elasticsearch cluster. The default value for this setting is 50 percent.
To modify this value, first navigate to Administration –> Configuration. At the top of the page, click the Options
menu and then enable the Show all configurable settings, including advanced settings
option. Then navigate to elasticsearch –> retention –> retention_pct. The change will take effect at the next 15-minute
interval. If you would like to make the change immediately, you can click the SYNCHRONIZE GRID button under
the Options menu at the top of the page.
If your open indices are using more than retention_pct, then Curator will delete old open indices until disk
space is back under retention_pct. If your total Elastic disk usage (both open and closed indices) is above
retention_pct, then so-curator-closed-delete will delete old closed indices until disk space is back
under retention_pct. so-curator-closed-delete does not use Curator because Curator cannot calcu-
late disk space used by closed indices. For more information, see https://www.elastic.co/guide/en/elasticsearch/client/
curator/current/filtertype_space.html.
Curator and so-curator-closed-delete run on the same schedule. This might seem like there is a potential
to delete open indices before deleting closed indices. However, keep in mind that Curator’s delete.yml is only going
to see disk space used by open indices and not closed indices. So if we have both open and closed indices, we may
be at retention_pct but Curator’s delete.yml is going to see disk space at a value lower than retention_pct
and so it shouldn’t delete any open indices.
For example, suppose our retention_pct is 50%, total disk space is 1TB, and we have 30 days of open indices and 300 days of closed indices. We reach retention_pct and both Curator and so-curator-closed-delete execute at the same time. Curator's delete.yml will check disk space used but it will see that disk space is at maybe 500GB, so it thinks we haven't reached retention_pct and does not delete anything. so-curator-closed-delete gets a more accurate view of disk space used, sees that we have indeed reached retention_pct, and so it deletes closed indices until we get lower than retention_pct. In most cases, Curator deletion should really only happen if we have open indices without any closed indices.
Time-based deletion occurs through the use of the $data_stream.policy.phases.delete.min_age setting within the life-
cycle policy tied to each index and is controlled by ILM. It is important to note that size-based deletion takes priority
over time-based deletion, as disk may reach retention_pct and indices will be deleted before the min_age value
is reached.
Policies can be edited within the SOC administration interface by navigating to Administration -> Configuration ->
elasticsearch -> $index -> policy -> phases -> delete -> min_age. Changes will take effect when a new index is created.
Security Onion supports Elastic clustering. In this configuration, Elasticsearch instances join together to create a single cluster. When using Elastic clustering, index deletion is based on the delete settings described above, which configure Curator to delete indices older than the value given. For each index, please ensure that the close setting is set to a smaller value than the delete setting.
Let's discuss the process for determining appropriate delete settings. First, check your indices using so-elasticsearch-query to query _cat/indices. For example:
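sudo so-elasticsearch-query _cat/indices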
Adding all the index sizes together plus a little padding results in 3.5GB per day. We will use this as our baseline.
If we look at our total /nsm size for our search nodes (data nodes in Elastic nomenclature), we can calculate how
many days open or closed that we can store. The equation shown below determines the proper delete timeframe. Note
that total usable space depends on replica counts. In the example below we have 2 search nodes with 140GB each, for 280GB total of /nsm storage. Since we have a single replica we need to take that into account. The formula for that is:
1 replica = 2 x Daily Index Size
2 replicas = 3 x Daily Index Size
3 replicas = 4 x Daily Index Size
Let’s use 1 replica:
Total Space / copies of data = Usable Space
280 / 2 = 140
Suppose we want a little cushion so let’s make Usable Space = 130
Usable NSM space / Daily Index Size = Days
For our example above, let's fill in the proper values:
130GB / 3.5GB = 37.1428571 days rounded down to 37 days
Therefore, we can set all of our delete values to 37.
10.4.13 Re-indexing
Re-indexing may need to occur if field data types have changed and conflicts arise. This process can be VERY time-
consuming, and we only recommend this if keeping data is absolutely critical.
10.4.14 Clearing
If you want to clear all Elasticsearch data including documents and indices, you can run the so-elastic-clear
command.
10.4.15 GeoIP
Elasticsearch 8 no longer includes GeoIP databases by default. We include GeoIP databases for Elasticsearch so that
all users will have GeoIP functionality. If your search nodes have Internet access and can reach geoip.elastic.co and
storage.googleapis.com, then you can opt-in to database updates if you want more recent information. To do this, add
the following to your Elasticsearch Salt config:
config:
  ingest:
    geoip:
      downloader:
        enabled: true
Note:
For more information about Elasticsearch, please see:
https://www.elastic.co/products/elasticsearch
10.5 ElastAlert
From https://elastalert2.readthedocs.io/en/latest/elastalert.html#overview:
        ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data
        in Elasticsearch.
ElastAlert queries Elasticsearch and provides an alerting mechanism with multiple output types, such as Slack, Email,
JIRA, OpsGenie, and many more.
10.5.1 Configuration
You can modify ElastAlert configuration by going to Administration –> Configuration –> elastalert.
For example, the es_index_patterns setting controls which indices ElastAlert queries:
es_index_patterns: '*:so-*,*:endgame-*,*:elastalert*'
Slack
To have ElastAlert send alerts to something like Slack, we can simply change the alert type and details for a rule like
so:
alert:
- "slack":
    slack_webhook_url: "https://hooks.slack.com/services/YOUR_WEBHOOK_URI"
Email - Internal
alert:
- "email"
email:
- "youremail@yourcompany.com"
smtp_host: "your_company_smtp_server"
smtp_port: 25
from_addr: "elastalert@yourcompany.com"
Email - External
If we need to use an external email provider like Gmail, we can add something like the following:
alert:
- "email"
email:
- "youremail@gmail.com"
smtp_host: "smtp.gmail.com"
smtp_port: 465
smtp_ssl: true
from_addr: "youremail@gmail.com"
smtp_auth_file: '/opt/elastalert/rules/smtp_auth_file.txt'
Then create a new file called /opt/so/rules/elastalert/smtp_auth_file.txt (this host path appears inside the ElastAlert container as /opt/elastalert/rules/smtp_auth_file.txt) and add the following:
user: youremail@gmail.com
password: yourpassword
so-elastalert-create
so-elastalert-create is a tool created by Bryant Treacle that helps ensure correct syntax when creating ElastAlert rules from scratch. It will walk you through various questions and eventually output an ElastAlert rule file that you can deploy in your environment to start alerting quickly and easily.
so-elastalert-test
so-elastalert-test can be used to test an ElastAlert rule file.
Note: so-elastalert-test does not yet include all options available to elastalert-test-rule.
Defaults
With Security Onion’s example rules, Elastalert is configured by default to only count the number of hits for a particular
match, and will not return the actual log entry for which an alert was generated.
This is governed by the use of use_count_query: true in each rule file.
If you would like to view the data for the match, you can simply comment out this line in the rule file(s). Keep in mind that this may impact performance negatively, so testing the change in a single file at a time may be the best approach.
Timeframe
Keep in mind, for queries that span greater than a minute back in time, you may want to add the following fields to
your rule to ensure searching occurs as planned (for example, for 10 minutes):
buffer_time:
    minutes: 10
allow_buffer_time_overlap: true
https://elastalert2.readthedocs.io/en/latest/ruletypes.html#buffer-time
https://github.com/Yelp/elastalert/issues/805
Elastalert diagnostic logs are in /opt/so/log/elastalert/. Depending on what you’re looking for, you may
also need to look at the Docker logs for the container:
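sudo docker logs so-elastalert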
10.6 Curator
From https://www.elastic.co/guide/en/elasticsearch/client/curator/current/about.html#about:
      Elasticsearch Curator helps you curate, or manage, your Elasticsearch indices and snapshots by:
        1. Obtaining the full list of indices (or snapshots) from the cluster, as the actionable list
        2. Iterate through a list of user-defined filters to progressively remove indices (or snapshots) from this
           actionable list as needed.
        3. Perform various actions on the items which remain in the actionable list.
10.6.1 Configuration
Curator defaults to closing indices older than 30 days. Curator also deletes old indices to prevent your disk from filling
up.
Curator configuration can be found by going to Administration –> Configuration –> curator.
For more information about the Curator close and delete settings, please see the Elasticsearch section.
When Curator completes an action, it logs its activity in a log file found in /opt/so/log/curator/. Depending
on what you’re looking for, you may also need to look at the Docker logs for the container:
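sudo docker logs so-curator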
Note:
For more information about Curator, please see:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/about.html#about
10.7 Data Fields
This page references the various types of data fields utilized by the Elastic Stack in Security Onion.
10.7.1 ECS
10.7.2 Fields
Fields are mapped to their proper type using template files found in /opt/so/conf/elasticsearch/
templates/.
Elasticsearch receives NIDS alerts from Suricata via Elastic Agent or Logstash and parses them using:
/opt/so/conf/elasticsearch/ingest/suricata.alert
/opt/so/conf/elasticsearch/ingest/common.nids
/opt/so/conf/elasticsearch/ingest/common
event.module:"suricata"
event.dataset:"alert"
source.ip
source.port
destination.ip
destination.port
network.transport
rule.gid
rule.name
rule.rule
rule.rev
rule.severity
rule.uuid
rule.version
The following lists field names as they are formatted in Elasticsearch. ElastAlert provides its own template to use for mapping into Elasticsearch, so we do not currently utilize a config file to parse data from ElastAlert.
index:*:elastalert_status
alert_info.type
alert_sent
alert_time
endtime
hist
matches
match_body.@timestamp
match_body.num_hits
match_body.num_matches
rule_name
starttime
time_taken
Zeek logs are sent to Elasticsearch where they are parsed using ingest parsing. Most Zeek logs have a few standard
fields and they are parsed as follows:
ts => @timestamp
uid => log.id.uid
id.orig_h => source.ip
id.orig_p => source.port
id.resp_h => destination.ip
id.resp_p => destination.port
The remaining fields in each log are specific to the log type. To see how the fields are mapped for a specific Zeek log,
take a look at its ingest parser.
You can find ingest parsers in your local filesystem at /opt/so/conf/elasticsearch/ingest/ or you can
find them online at:
https://github.com/Security-Onion-Solutions/securityonion/tree/2.4/main/salt/elasticsearch/files/ingest
For example, suppose you want to know how the Zeek conn.log is parsed. You could take a look at /opt/so/conf/
elasticsearch/ingest/zeek.conn or view it online at:
https://github.com/Security-Onion-Solutions/securityonion/blob/2.4/main/salt/elasticsearch/files/ingest/zeek.conn
You’ll see that zeek.conn then calls the zeek.common pipeline (/opt/so/conf/elasticsearch/
ingest/zeek.common):
https://github.com/Security-Onion-Solutions/securityonion/blob/2.4/main/salt/elasticsearch/files/ingest/zeek.
common
which in turn calls the common pipeline (/opt/so/conf/elasticsearch/ingest/common):
https://github.com/Security-Onion-Solutions/securityonion/blob/2.4/main/salt/elasticsearch/files/ingest-dynamic/
common
10.11 Community ID
From https://github.com/corelight/community-id-spec:
        When processing flow data from a variety of monitoring applications (such as Zeek and Suricata), it’s
        often desirable to pivot quickly from one dataset to another. While the required flow tuple information
        is usually present in the datasets, the details of such “joins” can be tedious, particular in corner cases.
        This spec describes “Community ID” flow hashing, standardizing the production of a string identifier
        representing a given network flow, to reduce the pivot to a simple string comparison.
Security Onion enables the built-in Community ID support in both Zeek and Suricata.
For logs that don’t naturally include Community ID, we use the Elasticsearch Community ID processor:
https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html
Note:
For more information about Community ID, please see:
https://github.com/corelight/community-id-spec
SOC auth is handled by Kratos and you can read more about that at https://github.com/ory/kratos. SOC auth logs can
be found at /opt/so/log/kratos/. To look for successful SOC logins, you can run the following:
sudo zgrep "Identity authenticated successfully and was issued an Ory Kratos Session Cookie" /opt/so/log/kratos/*
Those logs should be ingested into Elasticsearch and available for searching in Dashboards, Hunt, and Kibana. Both
Dashboards and Hunt have pre-defined queries for SOC auth logs.
identity_id
Once you see the auth logs, you will notice that the login is logged as identity_id. You can find your desired
identity_id as follows, replacing USERNAME@DOMAIN.COM with your desired SOC username:
We include Elasticsearch ingest parsers for several log types that don’t have Elastic Agent integrations.
Security Onion includes Elasticsearch ingest parsers for pfSense firewall logs. To enable this, first go to Administration
–> Configuration –> firewall –> hostgroups.
Once there, then select the syslog option and allow the traffic through the firewall.
Next, configure your pfSense firewall to send syslog to the IP address of your Security Onion box. If you are using
pfSense 2.6.0 or higher, make sure that Log Message Format is set to BSD (RFC 3164, default). You
should then be able to see your firewall logs using the Firewall query in Dashboards or Hunt.
We include Elasticsearch ingest parsers for rita logs. To enable this support, add the following in the relevant Salt
minion pillar and then restart Elastic Agent on the minion(s):
rita:
  enabled: True
If the value for beacon.score in a beacon record equals 1, an alert will be generated and viewable in Alerts.
Updating
11.1 soup
soup stands for Security Onion UPdater. To install updates, run the soup command:
sudo soup
If necessary, soup will update itself and then ask you to run soup again. Once soup is fully updated, it will then
check for other updates. This includes Security Onion version updates, Security Onion hotfixes, and operating system
(OS) updates.
After running soup or rebooting a Security Onion node, it may take a few minutes for services to display an OK status when running so-status. This may be due to the initial on-boot Salt highstate running. If services do not appear to be fully up and running within 15 minutes, try running the following command:
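sudo salt-call state.highstate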
 Warning: If you have a production deployment, we recommend that you test the upgrade process on a test
 deployment if possible before deploying to production.
When we release a new version of Security Onion, we update the Release Notes section and publish a blog post to
https://blog.securityonion.net. You’ll want to review these for any relevant information about the individual updates.
11.1.1 Version Updates
If soup finds a full version update, then it will update the Security Onion version in /etc/soversion, all Salt code, and all Docker images.
soup automatically keeps the previous version of Docker images. These older unused Docker images will be auto-
matically removed at the next version update. If you need to remove these older Docker images immediately, first
verify that the upgrade completed successfully and that everything is working properly. You could then remove the
older images individually or all at once using a command like:
sudo docker system prune -a
However, please note that this an aggressive option and you should exercise caution if you have any non-standard
Docker images or configuration. You may want to test it on a test system first.
11.1.2 Hotfixes
soup checks for Security Onion hotfixes. Hotfixes typically include updates to the Salt code and small configuration changes that do not warrant a full version update. This does not include Docker images since that would require a full version update.
After applying a hotfix, you may notice that the Security Onion version in /etc/soversion stays the same. The
application of the hotfix is tracked on the manager in the /etc/sohotfix file.
11.1.3 OS Updates
soup will check for local configurations in /opt/so/saltstack/local/ that may cause issues and flag them with the message Potentially breaking changes found in the following files. Please examine the output of soup and review any local configurations for possible issues.
11.1.5 Log
If soup displays any errors, you can check /root/soup.log for additional clues.
11.1.6 ssh
If you run soup via ssh and the ssh session terminates, then any processes running in that session would terminate.
You should avoid leaving soup unattended especially if the machine you are ssh’ing from is configured to sleep after
a period of time. You might also consider using something like screen or tmux so that if your ssh session terminates,
the processes will continue running on the server.
11.1.7 Airgap
When you run soup on an Airgap install, it will ask for the location of the upgrade media. You can do one of the
following:
    • burn the latest ISO image to a DVD and insert it in the DVD drive
    • flash the ISO image to a USB drive and insert that USB drive
    • simply copy the ISO file itself to the airgapped manager
You can also specify the path on the command line using the -f option. For example (change this to reflect the actual
path to the ISO image):
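sudo soup -f /path/to/securityonion.iso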
11.1.8 Agents
If soup updated to a new version of the Elastic stack, then you might need to update your Elastic Agents via Elastic
Fleet.
11.1.9 log_size_limit
soup will check your Elasticsearch log_size_limit values to see if they are over the recommended values. If so, it will ask you to update the values in /opt/so/saltstack/local/pillar/minions/. When updating these files, please update any and all instances of log_size_limit as it may exist as elasticsearch:log_size_limit or manager:log_size_limit.
11.1.10 Kibana
After soup completes, if Kibana says Kibana server is not ready yet even after waiting a few minutes
for it to fully initialize, then take a look at the Diagnostic Logging section of the Kibana page.
If Kibana loads but the dashboards display errors that they didn’t before the upgrade, first shift-reload your browser to
make sure there are no cache issues. If that doesn’t resolve the issue, then you may need to reload the dashboards on
your manager:
sudo rm /opt/so/state/kibana_*.txt
sudo salt-call state.apply kibana.so_savedobjects_defaults -l info queue=True
11.1.11 Automation
soup can be automated as follows (assuming you’ve previously accepted the Elastic license):
sudo soup -y
This will make soup proceed unattended, automatically answering yes to any prompt. If you have an airgap instal-
lation, you can specify the path to the ISO image using the -f option as follows:
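sudo soup -y -f /path/to/securityonion.iso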
11.1.12 Errors
soup will check Salt pillars to make sure they can be rendered. If not, it will output a message like this:
There is an issue rendering the manager's pillars. Please correct the issues in the sls files mentioned below before running SOUP again.
This usually means that somebody has modified the Salt sls files and introduced a typo.
Downloading images
As soup is downloading container images, it may encounter errors if there are Internet connection issues or if the
disk runs out of free space. Once you’ve resolved the underlying condition, you can manually refresh your container
images using so-docker-refresh.
Here are some other errors that you may see when running soup:
local:
     Data failed to compile:
----------
     Rendering SLS 'base:common' failed: Jinja variable 'list object' has no attribute 'values'
and/or
There is a problem downloading the so-xyz:2.4.0 image. Details:
gpg: Signature made Thu 18 Feb 2021 02:26:10 PM UTC using RSA key ID FE507013
gpg: BAD signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
If you see these errors, it most likely means that a Salt highstate process was already running when soup began. You can wait a few minutes and then try soup again. Alternatively, you can run sudo salt-call state.highstate and wait for it to complete before running soup again.
If you have a distributed deployment with a manager node and separate sensor nodes and/or search nodes, you only
need to run soup on the manager. Once soup has completed, other nodes should update themselves at the next Salt
highstate (typically within 15 minutes).
 Warning: Just because the update completed on the manager does NOT mean the upgrade is complete on
 other nodes in the grid. Do not manually restart anything until you know that all the search/heavy nodes in your
 deployment are updated. This is especially important if you are using true clustering for Elasticsearch.
 Each minion is on a random 15 minute check-in period and things like network bandwidth can be a factor in how
 long the actual upgrade takes. If you have a heavy node on a slow link, it is going to take a while to get the
 containers to it. Depending on what changes happened between the versions, Elasticsearch might not be able to
 talk to said heavy node until the update is complete.
 If it looks like you’re missing data after the upgrade, please avoid restarting services and instead make sure at
 least one search node has completed its upgrade. The best way to do this is to run sudo salt-call state.highstate from a search node and make sure there are no errors. Typically if it works on one node it will work
 on the rest. Forward nodes are less complex and will update as they check in so you can monitor those from the
 Grid section of Security Onion Console (SOC).
Here is an overview of what soup actually does during an update:
    • Compares the installed version with what is available on github or the ISO image.
    • Checks to see if Salt needs to be updated (more on this later).
    • Downloads the new Docker images or, if airgap, copies them from the ISO image.
    • Stops the Salt master and minion and restarts it in a restricted mode. This mode only allows the manager to
      connect to it so that we make sure the manager is done before any of the minions are updated.
    • Updates Salt if necessary. This will cause the master and minion services to restart but still in restricted mode.
    • Makes any changes to pillars that are needed such as adding new settings or renaming values. This varies from
      release to release.
    • If the grid is in Airgap mode, then it copies the latest ET Open rules and yara rules to the manager.
    • The new Salt code is put into place on the manager.
    • Runs a highstate on the manager which is the actual upgrade where it will use the new Salt code and Docker
      containers.
    • Unlocks the Salt master service and allows minions to connect again.
    • Issues a command to all minions to update Salt if necessary. This is important to note as it takes time to update
      the Salt minion on all minions. If the minion doesn’t respond for whatever reason, it will not be upgraded at this
      time. This is not an issue because the first thing that gets checked when a minion talks to the master is whether
      Salt needs to be updated, and the update will be applied if it does.
    • Nodes connect back to the manager and actually perform the upgrade to the new version.
11.2 End Of Life
This page lists End Of Life (EOL) dates for older versions of Security Onion and older components.
TheHive 3 reached EOL on December 31, 2021. TheHive and Cortex were fully removed from Security Onion in
Security Onion 2.3.120:
https://blog.securityonion.net/2022/04/security-onion-23120-now-available.html
Accounts
12.1 Passwords
When you first install Security Onion, you create a standard OS user account for yourself. If you need to change your
OS user password, you can use the passwd command:
passwd
Your default user account should have sudo permissions. Command-line utilities that require administrative access
can be prefixed with sudo. For example, the so-status command requires administrative access so you can run it with
sudo as follows:
sudo so-status
Log into Security Onion Console (SOC) using the username and password you created in the Setup wizard.
You can change your password in Security Onion Console (SOC) by clicking the user icon in the upper right corner,
clicking Settings, and then going to the Security tab:
If you’ve forgotten your SOC password, an administrator can change it using the Administration interface.
Once logged in to SOC using the username and password method, users can optionally enable passwordless logins, provided the setting is enabled (it is enabled by default on new installations). When the setting is enabled, the login screen will show a separate section for passwordless logins.
Activate passwordless login for your Security Onion Console (SOC) user by clicking the user icon in the upper right
corner, clicking Settings, and then going to the Security tab. Scroll down to the Security Keys section
and follow the provided instructions.
Similarly, disable passwordless logins by returning to the Security tab and clicking the delete icon next to any
previously-created Security Key.
Note: While it is possible to use TOTP MFA as a second authentication factor in combination with passwordless
logins, it is not possible to use a second security key as a second authentication factor with passwordless logins.
Important: The webauthn specification requires that the web server be accessed via a hostname. Therefore, IP
addresses cannot be used to access SOC when utilizing webauthn. Also, the server’s TLS certificate must not have any
errors. Consequently, self-signed certificates will only be permitted provided the certificate authority (CA) has also
been imported into analyst’s browsers and/or operating systems, and marked as trusted.
12.2 MFA
You can enable Multi-Factor Authentication (MFA) to further protect your account. This can be enabled in Security
Onion Console (SOC) by clicking the user icon in the upper right corner, clicking Settings, and then going to the
Security tab.
12.2.1 TOTP
Time-based One-Time Passwords (TOTP) can be activated on a user account. TOTP requires the use of an authenticator
app. Currently only Google Authenticator has been tested, however other authenticator apps that implement the time-
based one-time password (TOTP) specification could also work.
If you have a user account on multiple Security Onion deployments with TOTP activated, they may be listed identically
in your authenticator app. If so, you should be able to edit the listing in your authenticator app so that you can
distinguish between them.
 Warning: Please note that TOTP requires that both the Security Onion manager and the device supplying the
 TOTP code have their system time set correctly. Otherwise, the TOTP code may be seen as invalid and rejected.
Note: If you lose access to your authenticator app, an administrator can reset your password using the Administration
interface which will also remove the TOTP from your account.
12.2.2 Security Keys
WebAuthn allows the use of built-in mobile device biometric sensors, USB security devices, and other PKI-based security devices to authenticate users during the login process.
If the Security Onion installation has been configured to use security keys for MFA instead of passwordless logins
then you can add one or more security keys to your account as a second authentication factor.
Note: If you lose access to your security key device, an administrator can reset your password using the Administra-
tion interface which will also remove the security keys from your account.
Important: The webauthn specification requires that the web server be accessed via a hostname. Therefore, IP
addresses cannot be used to access SOC when utilizing webauthn. Also, the server’s TLS certificate must not have any
errors. Consequently, self-signed certificates will only be permitted provided the certificate authority (CA) has also
been imported into analyst’s browsers and/or operating systems, and marked as trusted.
12.3 Adding Accounts
12.3.1 OS
If you need to add a new OS user account, you can use the adduser command. For example, to add a new account
called tom:
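sudo adduser tom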
12.3.2 SOC
If you need to add a new account to Security Onion Console (SOC), navigate to the Administration interface, click
Users, and then click the + icon. Fill out the necessary information and then click the ADD button.
12.4 Listing Accounts
12.4.1 OS
Operating System (OS) user accounts are stored in /etc/passwd. You can get a list of all OS accounts using the
following command:
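cat /etc/passwd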
If you want a list of user accounts (not service accounts), then you can filter /etc/passwd for accounts with a UID
greater than 999 like this:
awk -F: '$3 > 999 {print $1}' /etc/passwd
12.4.2 SOC
You can get a list of users in Security Onion Console (SOC) by navigating to the Administration interface and then
clicking Users:
For more information about the Users page, please see the Administration section.
12.5 Disabling Accounts
12.5.1 OS
If you need to disable an OS user account, you can expire the account using usermod --expiredate 1. For
example, to disable the account for user tom:
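sudo usermod --expiredate 1 tom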
For more information, please see the passwd manual by typing man passwd and the usermod manual by typing man usermod.
12.5.2 SOC
If you need to disable an account in Security Onion Console (SOC), you can go to the Administration interface, expand
the user account, and click the LOCK USER button.
After disabling a user account, the Administration page will show the disabled user account with a disabled icon in the
Status column:
For more information about the Users page, please see the Administration section.
12.6 Role-Based Access Control (RBAC)
The ability to restrict or grant specific privileges to a subset of users is covered by role-based access control, or
“RBAC” for short. RBAC is an authorization technique in which users are assigned one of a small set of roles, and
then the roles are associated to many low-level privileges. This provides the ability to build software with fine-grained
access control, but without the need to maintain complex associations of users to large numbers of privileges. Users
are traditionally assigned a single role, one which correlates closely with their role in the organization. However, it’s
possible to assign a user to multiple roles, if necessary.
RBAC in Security Onion covers both Security Onion privileges and Elastic stack privileges. Security Onion privileges
are only involved with functionality specifically provided by the components developed by Security Onion, while
Elastic stack privileges are only involved with the Elasticsearch, Kibana, and related Elastic stack. For example,
Security Onion will check if a user has permission to create a PCAP request, while Elastic will check if the same user
has permission to view a particular index or document stored in Elasticsearch.
12.6.1 Roles
Security Onion ships with the following user roles: superuser, analyst, limited-analyst, auditor, and limited-auditor.
See the table below which explains the specific Security Onion privileges granted to each role.
Note: Both auditor and limited-auditor roles can interact with previously created PCAPs if they were
created before a user was converted to that role (e.g. user was downgraded from analyst to auditor). This is
denoted by O in the above table.
Note: A system role called agent is used by the Security Onion agent that runs on each node of the Security Onion
grid. This special role is given the jobs/process, nodes/read, and nodes/write permissions (defined at the bottom of
this page). Avoid creating custom roles that share the same name as Security Onion-provided roles.
12.6.2 Superusers
After a new installation of Security Onion completes, a single administrator user will be created and assigned the
superuser role. Additional users can also be assigned to the superuser role, if desired.
In the Administration interface, navigate to the Users screen and click the + icon to add a new user. In the popup dialog
you can check the roles you would like to assign to the new user.
In the Administration interface, navigate to the Users screen and click the > icon to the left of the email address needing
adjusting. Check or uncheck the desired roles.
 Warning: The creation of custom RBAC roles is an advanced feature that is recommended only for experienced
 administrators.
These steps will guide you through an example where we wish to introduce a new role called eastcoast-analyst,
which will inherit the same Security Onion permissions as a limited-analyst, but will be restricted to only view a subset
of documents in the Elastic stack. We base this role on the limited-analyst instead of the analyst role so that
the user does not have the ability to create arbitrary PCAPs on any sensor.
   1. For the Security Onion role: Follow the instructions in the next section entitled “Defining Security Onion Roles”
      to create a new role named eastcoast-analyst.
   2. For the Elastic stack role: Create a new json role file named eastcoast-analyst.json under /opt/
      so/saltstack/local/salt/elasticsearch/roles. In this example we will define the new role
      that only allows access to documents from sensors on the east coast of the United States. Specifically, the role
      will include a query filter that limits search results to only include documents originating from sensors having a
      name prefixed with nyc (New York City) or atl (Atlanta).
            eastcoast-analyst.json :
            {
              "cluster": [
                 "cancel_task",
                 "create_snapshot",
                 "monitor",
                 "monitor_data_frame_transforms",
                 "monitor_ml",
                 "monitor_rollup",
                 "monitor_snapshot",
                 "monitor_text_structure",
                 "monitor_transform",
                 "monitor_watcher",
                 "read_ccr",
                 "read_ilm",
                 "read_pipeline",
                 "read_slm"
              ],
              "indices": [
                 {
                   "names": [
                      "so-*"
                   ],
                   "privileges": [
                      "index",
                      "maintenance",
                      "monitor",
                      "read",
                      "read_cross_cluster",
                      "view_index_metadata"
                   ],
                   "query": "{ \"bool\": { \"should\": [ { \"prefix\": { \"observer.
            ˓→name\": \"nyc\" }}, { \"prefix\": { \"observer.name\": \"atl\" }} ]}}"
                 }
              ],
              "applications": [
               ...
           Note: The format of the json in this file must match the request body outlined in the Elas-
           tic docs here: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-put-role.
           html#security-api-put-role-request-body.
           The available cluster and indices permissions are explained in the Elastic docs here: https://www.
           elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html.
           The available kibana permissions can be obtained by running the following command on the manager
           node:
Custom roles can be defined in one of two ways:
1) Building the custom role from scratch, specifying each privilege group directly.
2) Inheriting the permissions of another role, and optionally adding more permissions to the new custom role.
Note: The custom_roles file contains further instructions on modifying roles that are not within the scope of this
documentation.
   1. Creating the role for the above east coast analyst using the first method, building the custom role from scratch,
      would be written like so:
           case-admin:eastcoast-analyst
           event-admin:eastcoast-analyst
           node-monitor:eastcoast-analyst
           user-monitor:eastcoast-analyst
           job-user:eastcoast-analyst
   2. Alternatively, the eastcoast-analyst role could be created by inheriting the permissions of the analyst
      role:
limited-analyst:eastcoast-analyst
The available low-level Security Onion privileges are listed in the table below:
These discrete privileges are then collected into privilege groups as defined below:
 case-admin                            cases/write
 case-monitor                          cases/read
 event-admin                           events/read, events/write, events/ack
 event-monitor                         events/read
 job-admin                             jobs/read, jobs/pivot, jobs/write, jobs/delete
 job-monitor                           jobs/read
 job-user                              jobs/pivot
 job-processor                         jobs/process †
 node-admin                            nodes/read, nodes/write
 node-monitor                          nodes/read
 user-admin                            roles/read, roles/write, users/read, users/write, users/delete
 user-monitor                          roles/read, users/read
12.7 Kratos
Security Onion Console (SOC) authentication is handled by Kratos. You can read more about Kratos at https://github.
com/ory/kratos.
12.7.1 Configuration
You can configure Kratos by going to Administration –> Configuration –> kratos.
Services
You can control individual services with the so-<component>-<verb> scripts. You can see a list of all of these
scripts with the following command:
ls /usr/sbin/so-*
The following examples are for Zeek, but you could substitute whatever service you’re trying to control (Logstash,
Elasticsearch, etc.).
Start Zeek:
sudo so-zeek-start
Stop Zeek:
sudo so-zeek-stop
Restart Zeek:
sudo so-zeek-restart
Customization
This section covers how to customize Security Onion for your environment.
14.1 SOC Customization
You can customize Security Onion Console (SOC) by going to Administration –> Configuration –> soc.
Below are some ways in which you can customize SOC. Once all customizations are complete, you can make the
changes take effect by clicking the Options bar at the top and then clicking the SYNCHRONIZE GRID button.
You can customize the SOC login page with a login banner by going to Administration –> Configuration –> soc –>
files –> soc –> Login Banner. The login banner uses the common Markdown (.md) format and you can learn more
about that at https://markdownguide.org.
After logging into SOC, you’ll start on the main SOC Overview page which can be customized as well. You can
customize this by going to Administration –> Configuration –> soc –> files –> soc –> Overview Page. This uses
Markdown format as mentioned above.
14.1.3 Links
You can also customize the links on the left side. To do so, go to Administration –> Configuration –> soc –> server –>
client –> tools.
The default timeout for user login sessions is 24 hours. This is a fixed timespan and will expire regardless of whether
the user is active or idle in SOC. You can configure this by going to Administration –> Configuration –> kratos –>
sessiontimeout.
You can enable reverse DNS lookups by going to Administration –> Configuration –> soc –> server –> client –>
enableReverseLookup.
If you’d like to add your own custom queries to Alerts, Dashboards, or Hunt, you can go to Administration –>
Configuration –> soc –> server –> client and then select the specific app you’d like to modify.
To see all available fields for your queries, go down to the Events table and then click the arrow to expand a row. It
will show all of the individual fields from that particular event.
For example, suppose you want to add GeoIP information like source.geo.region_iso_code or
destination.geo.region_iso_code to Alerts. You would go to Administration –> Configuration –> soc
–> server –> client –> alerts –> queries and insert the following line wherever you want it to show up in the query list:
Please note that some events may not have GeoIP information and this query would only show those alerts that do
have GeoIP information.
Alerts, Dashboards, and Hunt have an action menu with several default actions. If you’d like to add your own custom
HTTP GET or POST actions, you can go to Administration –> Configuration –> soc –> actions. For example, sup-
pose you want to add AbuseIPDB with URL https://www.abuseipdb.com/check/{value}. Insert the
following as the next to last line:
{ "name": "AbuseIPDB", "description": "Search for this value at AbuseIPDB", "icon": "fa-external-link-alt", "target": "_blank", "links": [ "https://www.abuseipdb.com/check/{value}" ]},
You can also create background actions that don’t necessarily result in the user being taken to a new page or tab. For
example, if you want to have a new action submit a case to JIRA, you would define it as a background POST action.
When it completes the POST, it will show an auto-fading message in SOC telling you that the action completed.
Alternatively, instead of the auto-fading message you can have it pop a new tab (or redirect SOC tab) to JIRA. Because
of CORS restrictions, SOC can’t expect to have visibility into the result of the background POST so there is no attempt
to parse the response of any background action, other than the status code/text from the request’s response.
Here is an example of a background action that submits a javascript fetch to a remote resource and then optionally
shows the user a second URL:
{
   "name": "My Background Action",
   "description": "Something wonderful!",
   "icon": "fa-star",
   "target": "_blank",
   "links": [
     "http://somewhere.invalid/?somefield={:client.ip|base64}"
   ],
   "background": true,
   "method": "POST",
   "options": {
     "mode": "no-cors",
     "headers": {
       "header1": "header1value",
       "header2:" "header2value"
     }
   },
   "body": "something={value|base64}",
   "backgroundSuccessLink": "https://securityonion.net?code={responseCode}&text=
 ˓→{responseStatus}",
   "backgroundFailureLink": "https://google.com?q={error}"
},
The options object is the same options object that will be passed into the Javascript fetch() method. You can
read more about that at https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch.
There may come a time where you are not sure what fields to target for the request body, or you may want to forward
events of different types that contain different field names. This is ideal if you would like to send the event to a case
management system, a SOAR platform, or something similar. In this case, the eventJson variable can be used to
pass the entire event as a JSON string.
To use this variable, construct the body of the request within the action configuration, like so:
"body":       "{eventJson}"
NOTE: You may run into issues using the eventJson variable, depending on the size of the event and the amount of
data being passed in the request.
14.1.8 Cases
Cases comes with presets for things like category, severity, TLP, PAP, tags, and status. You can modify these presets
by going to Administration –> Configuration –> soc –> server –> client –> case –> presets.
14.1.9 Escalation
Alerts, Dashboards, and Hunt display logs with a blue triangle that allows you to escalate the event. This defaults to
our Cases interface. If for some reason you want to escalate to a different case management system, you can change
this setting. You can go to Administration –> Configuration –> soc –> server –> modules –> cases and specify one of
the following values:
    • soc - Enables the built-in Case Management, with our Escalation menu (default).
    • elasticcases - Enables escalation to the Elastic Cases tool. Escalations will always open a new case; there
      will not be an advanced escalation menu popup. This module will use the same user/pass that SOC uses to talk
      to Elastic. Note, however, that Elastic cases is actually a Kibana feature, therefore, when this setting is used,
      SOC will be communicating with the local Kibana service (via its API) for case escalations.
14.2 nginx
14.2.1 Configuration
You can modify nginx configuration by going to Administration –> Configuration –> nginx.
If you’d like to replace the default cert with your own cert, then you can do so as shown below.
Warning: Please be very careful when modifying advanced settings like this!
   1. At the top of the page, click the Options menu and then enable the Show all configurable
      settings, including advanced settings. option.
   2. On the left side, go to nginx, expand ssl, and then select the Replace Default Cert setting.
   3. On the right side, change the setting to true and then click the checkmark to save the value.
   4. On the left side, select the SSL/TLS Cert File setting.
   5. On the right side, paste your new cert file and then click the checkmark to save it.
   6. On the left side, select the SSL/TLS Key File setting.
   7. On the right side, paste your new key file and then click the checkmark to save it.
14.3 Proxy
Setup will ask if you want to connect through a proxy server and, if so, it will automatically configure the system for
you. If you need to make changes after Setup, please see the proxy settings in Administration –> Configuration –>
manager.
There is no way to set a global proxy on Linux, but several tools will route their traffic through a proxy if the following
lines are added to /etc/environment:
http_proxy=<proxy_url>
https_proxy=<proxy_url>
ftp_proxy=<proxy_url>
no_proxy="localhost, 127.0.0.1, <management_ip>, <hostname>"
Where:
      <proxy_url> is the url of the proxy server. (For example, http://10.0.0.2:3128 or https://user:password@your.proxy.url)
      <management_ip> is the IP address of the Security Onion box.
      <hostname> is the hostname of the Security Onion box.
Note: You may also need to include the IP address and hostname of the manager in the no_proxy variable above if
configuring the proxy on a forward node.
Some tools, such as git, do not read /etc/environment. For git, you can set the proxy in its configuration (for example, in /etc/gitconfig or ~/.gitconfig):
[http]
  proxy = <proxy_url>
14.3.2 sudo
If you’re going to run something using sudo, remember to use the -i option to force it to process the environment
variables. For example:
sudo -i so-rule-update
 Warning: Using sudo su - will ignore /etc/environment, instead use sudo su if you need to operate
 as root.
14.4 Firewall
This section will cover both network firewalls outside of Security Onion and the host-based firewall built into Security
Onion.
This first sub-section will discuss network firewalls outside of Security Onion.
Internet Communication
When configuring network firewalls for Internet-connected deployments (non-Airgap), you’ll want to ensure that the
deployment can connect outbound to the following:
    • raw.githubusercontent.com (Security Onion public key)
    • pkg-containers.githubusercontent.com
    • sigs.securityonion.net (Signature files for Security Onion containers)
    • ghcr.io (Container downloads)
    • rules.emergingthreatspro.com (Emerging Threats IDS rules)
    • rules.emergingthreats.net (Emerging Threats IDS open rules)
    • github.com (Strelka and Sigma rules updates)
If you are using our Security Onion ISO image, you will also need access to the following:
    • repo.securityonion.net (primary repo for Oracle Linux package updates)
    • so-repo-east.s3.us-east-005.backblazeb2.com (secondary repo for Oracle Linux package updates)
If you are not using our Security Onion ISO image and are instead performing a network installation, you will also
need access to the following:
    • update repo for whatever base OS you’re installing on (Operating System packages)
    • download.docker.com (Docker packages)
Node Communication
When configuring network firewalls for distributed deployments, you’ll want to ensure that nodes can connect as
shown below.
All nodes to manager:
    • TCP/443 - Sensoroni
    • TCP/5000 - Docker registry
    • TCP/8086 - influxdb
    • TCP/4505 - Salt
    • TCP/4506 - Salt
Elastic Agent:
    • TCP/8220 (All nodes to Manager, Fleet nodes) - Elastic Agent management
    • TCP/8443 (All nodes to Manager) - Elastic Agent binary updates
    • TCP/5055 (All nodes to Manager, Fleet nodes, Receiver nodes) - Elastic Agent data
Search nodes from/to manager:
    • TCP/9300 - Node-to-node for Elasticsearch
    • TCP/9696 - Redis
The remainder of this section will cover the host firewall built into Security Onion.
14.4.3 Configuration
You can configure the firewall by going to Administration –> Configuration –> firewall –> hostgroups.
If for some reason you can’t access Security Onion Console (SOC), you can use the so-firewall command to allow
your IP address to connect (replacing <IP ADDRESS> with your actual IP address):
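For example, to add your IP address to the analyst host group:
sudo so-firewall includehost analyst <IP ADDRESS>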
14.4.4 Port Groups
Port groups are a way of grouping together ports similar to a firewall port/service alias. For example, if you have a web server you might add ports 80 and 443 into a port group.
14.4.5 Host Groups
Host groups are similar to port groups but for storing lists of hosts that will be allowed to connect to the associated port groups.
14.4.6 Function
The firewall state is designed with the idea of creating port groups and host groups, each with their own alias or name,
and associating the two in order to create an allow rule. A node that has a port group and host group association
assigned to it will allow those hosts to connect to those ports on that node.
The default allow rules for each node are defined by its role (manager, searchnode, sensor, heavynode, etc) in the
grid. Host groups and port groups can be created or modified from the manager node by going to Administration –>
Configuration –> firewall –> hostgroups. When setup is run on a new node, it will ask the manager to add itself to the
appropriate host groups. All node types are added to the minion host group to allow Salt communication. If you were
to add a search node, you would see its IP appear in both the minion and the search_node host groups.
When you go to Administration –> Configuration –> firewall, you will only see hostgroups by default. If you
need to modify port groups, then you will need to click the Options menu and then enable the Show all
configurable settings, including advanced settings. option.
The analyst hostgroup is allowed access to the nginx ports which are 80 and 443 by default. In this example, we will
extend the default nginx port group to include a custom port.
   1. At the top of the page, click the Options menu and then enable the Show all configurable
      settings, including advanced settings. option.
   2. On the left side, go to firewall, select portgroups, locate the nginx portgroup, and then select tcp.
   3. On the right side, select the manager node, specify your custom port to be added, and then click the checkmark
      to save the value.
   4. If you would like to apply the rules immediately, click the SYNCHRONIZE GRID button under the Options
      at the top of the page.
In this example, we will add a new custom hostgroup to allow a custom set of hosts to connect to a custom port on an
IDH node.
   1. At the top of the page, click the Options menu and then enable the Show all configurable
      settings, including advanced settings. option.
   2. On the left side, go to firewall, select hostgroups, and then select customhostgroup0.
   3. On the right side, select the IDH node that you want to allow access to, add the list of hosts that require access,
      and then click the checkmark to save the value.
   4. On the left side, go to firewall, select portgroups, select customportgroup0, and then select the
      appropriate protocol.
   5. On the right side, select the IDH node that you want to allow access to, add your custom port, and then click the
      checkmark to save the value.
   6. On the left side, go to firewall, role, and then select idh, chain, DOCKER-USER, hostgroups,
      customhostgroup0, portgroups.
   7. On the right side, select the IDH node that you want to allow access to, add the portgroup
      customportgroup0, and then click the checkmark to save the value.
   8. The next time the IDH node checks in, it should get the appropriate firewall rules.
14.5 Email
Some applications rely on having a mail server in the OS itself, while other applications have their own mail configuration and therefore do not rely on a mail server in the OS.
14.5.1 OS
You can install and configure your favorite mail server. Depending on your needs, this could be something simple like nullmailer or something more complex like exim4.
14.5.2 Elastalert
For information about configuring ElastAlert email alerts, please see the ElastAlert section.
14.6 NTP
Depending on how you installed, the underlying operating system may be configured to pull time updates from the
NTP Pool Project and perhaps others as a fallback. You may want to change this default NTP config to your preferred
NTP provider by going to Administration –> Configuration –> ntp.
Anybody can join the NTP Pool Project and provide NTP service. Occasionally, somebody provides NTP service
from a residential DHCP address that at some point in time may have also been used for Tor. This results in IDS alerts
for Tor nodes where the port is 123 (NTP). This is another good reason to modify the NTP configuration to pull time
updates from your preferred NTP provider.
14.7 Console
When you log into the local bash console (tty1), you may see lots of messages from the Linux kernel. To avoid these
kernel messages, you have a few options:
    • You can use SSH instead of the local bash console.
    • If you really need to use the local console, you can temporarily disable console messages with sudo dmesg
      -D. For more information about dmesg, please see https://man7.org/linux/man-pages/man1/dmesg.1.html. Also
      see https://man7.org/linux/man-pages/man8/sysctl.8.html and https://www.kernel.org/doc/html/next/core-api/
      printk-basics.html.
14.8 SSH
Security Onion uses the latest SSH packages. It does not manage the SSH configuration in /etc/ssh/sshd_config with Salt. This allows you to add any PAM modules or enable two-factor authentication (2FA) of your choosing.
14.9 Hostname
Setup generates certificates based on the hostname and we do not support changing the hostname after Setup. Please
make sure that your hostname is correct during installation.
14.10 IP Address
The Best Practices section recommends that you avoid changing IP addresses after installation. If for some reason you
must do so, you can try the experimental utility so-ip-update.
 Warning: so-ip-update is an experimental utility and only supports standalone machines, not distributed
 deployments.
If you need to change the URL for web access to Security Onion (for example, from IP to FQDN), go to Administration
–> Configuration –> global.
Tuning
To get the best performance out of Security Onion, you’ll want to tune it for your environment. Start by creating
Berkeley Packet Filters (BPFs) to ignore any traffic that you don’t want your network sensors to process. Then tune
your IDS rulesets. There may be entire categories of rules that you want to disable first and then look at the remaining
enabled rules to see if there are individual rules that can be disabled. Once your rules and alerts are under control, then
check to see if you have packet loss. If so, then tune the number of AF-PACKET workers for sniffing processes. If you
are on a large network, you may need to do additional tuning like pinning processes to CPU cores. More information
on each of these topics can be found in this section.
15.1 BPF
15.1.1 Configuration
You can modify your BPF configuration by going to Administration –> Configuration –> bpf. You can apply BPF
configuration to Stenographer, Suricata, or Zeek.
Multiple Conditions
If your BPF contains multiple conditions you can join them with a logical and or logical or.
Here’s an example of joining conditions with a logical and:
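not host 192.168.1.2 and not host 192.168.1.3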
VLAN
If you have traffic that has VLAN tags, you can craft a BPF as follows:
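The general pattern looks like this (a sketch; substitute your own filter expression):
<filter> or (vlan and <filter>)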
Notice that you must include your filter on both sides of the vlan tag.
For example:
(not (host 192.168.1.2 or host 192.168.1.3 or host 192.168.1.4)) or (vlan and (not (host 192.168.1.2 or host 192.168.1.3 or host 192.168.1.4)))
Warning:
 Please note that Stenographer should correctly record traffic on a VLAN but won’t log the actual VLAN tags due
 to the way that AF-PACKET works:
 https://github.com/google/stenographer/issues/211
If you need to troubleshoot BPF, you can use tcpdump as shown in the following articles:
https://taosecurity.blogspot.com/2004/09/understanding-tcpdumps-d-option-have.html
https://taosecurity.blogspot.com/2004/12/understanding-tcpdumps-d-option-part-2.html
https://taosecurity.blogspot.com/2008/12/bpf-for-ip-or-vlan-traffic.html
Note:
For more information about BPF, please see:
https://en.wikipedia.org/wiki/Berkeley_Packet_Filter
https://biot.com/capstats/bpf.html
Assuming you have Internet access, Security Onion will automatically update your NIDS rules on a daily basis. If you
need to manually update your rules, you can run the following on your manager node:
sudo so-rule-update
If you have a distributed deployment and you update the rules on your manager node, then those rules will automatically replicate from the manager node to your sensors within 15 minutes. If you don't want to wait 15 minutes, you can force the sensors to update immediately by running the following command on your manager node:
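One way to do this (a sketch; it assumes rule distribution is handled by the idstools Salt state, which may differ in your version):
sudo salt '*' state.apply idstools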
15.2.2 Configuration
You can modify your rule configuration by going to Administration –> Configuration –> idstools.
15.2.3 Rulesets
Security Onion offers the following choices for rulesets to be used by Suricata.
15.2.4 ET Open
15.2.9 Other
15.3.1 NIDS
You can add local NIDS rules by going to Administration –> Configuration –> idstools.
At the top of the page, click the Options menu and then enable the Show all configurable settings,
including advanced settings. option. Then navigate to idstools –> rules –> Local Rules. Add your new
rule(s) and click the checkmark to save them. The configuration will be applied at the next 15-minute interval or you
can apply it immediately by clicking the SYNCHRONIZE GRID button under the Options menu.
15.3.2 YARA
Default YARA rules are provided from Florian Roth's signature-base GitHub repo at https://github.com/Neo23x0/signature-base.
If you have Internet access and want to have so-yara-update pull YARA rules from a remote Github repo, copy
/opt/so/saltstack/local/salt/strelka/rules/, and modify repos.txt to include the repo URL
(one per line).
Next, run so-yara-update to pull down the rules. Finally, run so-strelka-restart to allow Strelka to pull
in the new rules.
Network Security Monitoring, as a practice, is not a solution you can simply plug into your network, watch for blinking lights, and then tell people you are “secure.” It requires active intervention from an analyst to qualify the quantity of information presented. One of those regular interventions is to ensure that you are tuning properly and proactively working toward an acceptable signal-to-noise ratio.
There are two alerting engines within Security Onion: Suricata and Playbook (Sigma). Though each engine uses its
own severity level system, Security Onion converts that to a standardized alert severity:
event.severity: 4 ==> event.severity_label: critical
event.severity: 3 ==> event.severity_label: high
event.severity: 2 ==> event.severity_label: medium
event.severity: 1 ==> event.severity_label: low
All alerts are viewable in Alerts, Dashboards, Hunt, and Kibana.
The easiest way to test that our NIDS is working as expected might be to simply access http://testmynids.org/uid/
index.html from a machine that is being monitored by Security Onion. You can do so via the command line using
curl:
curl testmynids.org/uid/index.html
Alternatively, you could also test for additional hits with a utility called tmNIDS, running the tool in interactive mode:
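For example (a sketch; the download URL is the tmNIDS project's GitHub location and may change):
curl -sSL https://raw.githubusercontent.com/0xtf/testmynids.org/master/tmNIDS -o /tmp/tmNIDS && chmod +x /tmp/tmNIDS && /tmp/tmNIDS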
If everything is working correctly, you should see a corresponding alert (GPL ATTACK_RESPONSE id check
returned root) in Alerts, Dashboards, Hunt, or Kibana. If you do not see this alert, try checking to see if the rule
is enabled in /opt/so/rules/nids/all.rules:
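grep 2100498 /opt/so/rules/nids/all.rules   # assuming the rule's usual SID of 2100498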
Rulesets come with a large number of rules enabled (over 20,000 by default). You should only run the rules necessary
for your environment, so you may want to disable entire categories of rules that don’t apply to you. Run the following
command to get a listing of categories and the number of rules in each:
cut -d\" -f2 /opt/so/rules/nids/all.rules | grep -v "^$" | grep -v "^#" | awk '{print
 ˓→$1, $2}'|sort |uniq -c |sort -nr
In tuning your sensor, you must first understand whether or not taking corrective action on a given signature will lower your overall security stance. For some alerts, your understanding of your own network and the business being transacted across it will be the deciding factor. For example, if you don't care that users are accessing Facebook, then you can silence the policy-based signatures for Facebook access.
Another consideration is whether or not the traffic is being generated by a misconfigured piece of equipment. If it is,
then the most expedient measure may be to resolve the misconfiguration and then reinvestigate tuning.
There are multiple ways to handle overly productive signatures and we’ll try to cover as many as we can without
producing a full novel on the subject. After making one of the changes described below, your ruleset will need to be
updated as shown in the Managing Rules section.
You can disable, modify, or threshold alerts by going to Administration –> Configuration –> idstools.
You can disable an alert by going to Administration –> Configuration –> idstools –> sids –> disabled.
If you want to disable multiple alerts at one time, you can use regular expressions. For example, to disable all alerts
that contain heartbleed:
re:heartbleed
You can modify an alert by going to Administration –> Configuration –> idstools –> sids –> modify.
To include a $ character in the regex pattern, you'll need to make sure it's properly escaped. For example, if you want to modify SID 2009582 and change $EXTERNAL_NET to $HOME_NET:
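A sketch of such a modify entry (the exact format is the SID, then the regex pattern, then the replacement value, per the description below):
2009582 "\$EXTERNAL_NET" "\$HOME_NET"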
The first string is a regex pattern, while the second is just a raw value. You’ll need to ensure the first of the two properly
escapes any characters that would be interpreted by regex. The second only needs the $ character escaped to prevent
bash from treating that as a variable.
In some cases, you may not want to use the modify option above, but instead create a copy of the rule and disable the
original. You can add local rules by going to Administration –> Configuration –> idstools –> rules –> Local Rules.
After pasting the rule, you may want to bump the SID into the 90,000,000 range and set the revision to 1. Then make
any other changes to the rule. Now that we have a signature that will generate alerts a little more selectively, we need
to disable the original SID as shown above.
15.4.8 Threshold
Thresholds, rate filters, and suppressions allow you to make finer grained decisions about certain alerts without having
to rewrite them. The most common is a suppression which allows you to suppress alerts by specifying the SID, whether
you want to track by source/destination/either, and the IP address or subnet. This way, you can still have certain alerts
enabled, but the situations in which they alert are limited. Care should be taken when writing thresholds to make sure they do not suppress legitimate alerts. You can learn more about Suricata thresholds at https://docs.suricata.io/en/suricata-6.0.0/configuration/global-thresholds.html.
You can manage threshold entries for Suricata by going to Administration –> Configuration –> suricata –> threshold-
ing –> SIDS.
Usage:
<signature id>:
  - threshold:
      gen_id: <generator id>
      type: <threshold | limit | both>
      track: <by_src | by_dst>
      count: <count>
      seconds: <seconds>
  - rate_filter:
      gen_id: <generator id>
      track: <by_src | by_dst | by_rule | by_both>
      count: <count>
      seconds: <seconds>
      new_action: <alert | pass>
Please note that Suricata 6 has a 64-character limitation on the IP field in a threshold. You can read more about this at
https://redmine.openinfosecfoundation.org/issues/4377.
Suppress
For example, suppose you want to suppress SID 2013030 where the source IP address is in the 10.10.3.0/24 subnet:
2013030:
  - suppress:
      gen_id: 1
      track: by_src
      ip: 10.10.3.0/24
15.4.9 Flowbits
idstools may seem like it is ignoring your disabled rules request if you try to disable a rule that has flowbits set.
For example, consider the following rules that reference the ET.MSSQL flowbit.
First rule:
alert tcp $HOME_NET any -> $EXTERNAL_NET !1433 (msg:"ET POLICY Outbound MSSQL Connection to Non-Standard Port - Likely Malware"; flow:to_server,established;
Second rule:
alert tcp $HOME_NET any -> $EXTERNAL_NET 1433 (msg:"ET POLICY Outbound MSSQL Connection to Standard port (1433)"; flow:to_server,established; content:"|12 01 00|
Third rule:
alert tcp $HOME_NET any -> $EXTERNAL_NET !1433 (msg:"ET TROJAN Bancos.DV MSSQL CnC Connection Outbound"; flow:to_server,established; flowbits:isset,ET.MSSQL; content:"|49 00 B4 00 4D 00 20 00 54 00 48 00 45 00 20 00 4D 00 41 00 53 00 54 00 45 00 52
If you try to disable the first two rules without also disabling the third rule (which has flowbits:isset,ET.MSSQL), the third rule could never fire, since it depends on the flowbit set by one of the first two rules. idstools helpfully resolves all of your flowbit dependencies and, in this case, "re-enables" the disabled rules for you on the fly. To disable all three rules, add the following to disablesid.conf:
1:2013409
1:2013410
1:2013411
When you run sudo so-rule-update, watch the "Setting Flowbit State..." section and you can see that if you disable all three (or however many rules share that flowbit), the "Enabled XX flowbits" line is decremented and all three rules should then be disabled in your all.rules.
For best performance, CPU intensive processes like Zeek and Suricata should be pinned to specific CPUs. In most
cases, you’ll want to pin sniffing processes to the same CPU that your sniffing NIC is bound to. For more information,
please see the Performance subsection in the appropriate Suricata and Zeek sections.
15.5.2 Misc
15.5.3 RSS
Check your sniffing interfaces to see if they have Receive Side Scaling (RSS) queues. If so, you may need to reduce them to 1:
https://suricata.readthedocs.io/en/latest/performance/packet-capture.html#rss
15.5.4 Disk/Memory
Use hdparm to gather drive statistics and alter settings, as described here:
https://www.linux-magazine.com/Online/Features/Tune-Your-Hard-Disk-with-hdparm
vm.dirty_ratio is the maximum percentage of system memory that can be filled with dirty pages before everything must get committed to disk.
vm.dirty_background_ratio is the percentage of system memory that can be filled with "dirty" pages (memory pages that still need to be written to disk) before the pdflush/flush/kdmflush background processes kick in to write them to disk.
More information:
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
15.5.5 Elastic
You will want to make sure that each part of the pipeline is operating at maximum efficiency. Depending on your
configuration, this may include Elastic Agent, Logstash, Redis, and Elasticsearch.
15.6 Salt
From https://docs.saltstack.com/en/latest/:
      Salt is a new approach to infrastructure management built on a dynamic communication bus. Salt can be
      used for data-driven orchestration, remote execution for any infrastructure, configuration management for
      any app stack, and much more.
Note: Salt is a core component of Security Onion as it manages all processes on all nodes. In a distributed deployment,
the manager node controls all other nodes via salt. These non-manager nodes are referred to as salt minions.
Salt minions must be able to connect to the manager node on ports 4505/tcp and 4506/tcp:
https://docs.saltproject.io/en/getstarted/system/communication.html
You can use salt’s test.ping to verify that all your nodes are up:
Similarly, you can use salt’s cmd.run to execute a command on all your nodes at once. For example, to check disk
space on all nodes:
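sudo salt '*' cmd.run 'df -h'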
If you want to force a node to do a full update of all salt states, you can run so-checkin. This will execute
salt-call state.highstate -l info which outputs to the terminal with the log level set to info so that
you can see exactly what’s happening:
sudo so-checkin
15.6.5 Configuration
Many of the options that are configurable in Security Onion are done by going to Administration and then Configura-
tion.
Currently, the salt-minion service startup is delayed by 30 seconds. This was implemented to avoid some issues that
we have seen regarding Salt states that used the ip_interfaces grain to grab the management interface IP.
You may see the following error in the salt-master log located at /opt/so/log/salt/master:
[ERROR    ][24983] Event iteration failed with exception: 'list' object has no attribute 'items'
The root cause of this error is a state trying to run on a minion when another state is already running. This error now
occurs in the log due to a change in the exception handling within Salt’s event module. Previously, in the case of an
exception, the code would just pass. However, the exception is now logged. The error can be ignored as it is not an
indication of any issue with the minions.
This section is a collection of miscellaneous tricks and tips for Security Onion.
16.1 Backup
Security Onion performs a daily backup of some critical files so that you can recover your grid from a catastrophic failure of the manager. Daily backups create a tar file in the /nsm/backup/ directory on the manager. You may want to replicate this backup directory to a location outside of your manager in case the manager ever needs to be rebuilt.
Here is what gets backed up automatically:
    • /etc/pki/ - All of the certs including the CA are backed up. Restoring this would allow you to communicate
      with your salt minions again.
    • /opt/so/saltstack/local/ - This includes all customizations done via Administration –> Configura-
      tion.
You can configure backups by going to Administration –> Configuration –> backup.
16.1.1 Elasticsearch
Elasticsearch data is not automatically backed up. This includes things that may be important to you like Kibana
customizations and Cases data. Kibana customizations are located in the .kibana indices and Cases data is stored
in the so-case and so-casehistory indices. To backup this data, there are a few options.
The first option is to enable snapshots with Curator to snapshot data to an external storage device such as a NAS.
The second option is to use Elasticsearch’s built-in support for snapshots:          https://www.elastic.co/guide/en/
elasticsearch/reference/current/snapshot-restore.html
This option requires that you configure Elasticsearch with a path.repo setting where it can store the snapshots.
Once Elasticsearch has the path.repo setting, you should be able to log into Kibana and configure snapshots as
shown in the link above. Those snapshots will then be accessible in /nsm/elasticsearch/repo/.
A third option, if you have a distributed deployment with Elasticsearch clustering, is to enable replicas so that you have redundancy in case of a single node failure. Of course, please keep in mind that enabling replicas doubles your storage needs.
16.2 Docker
From https://www.docker.com/what-docker:
      Docker is the world’s leading software container platform. Developers use Docker to eliminate “works
      on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and
      manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to
      build agile software delivery pipelines to ship new features faster, more securely and with confidence for
      both Linux, Windows Server, and Linux-on-mainframe apps.
16.2.1 Download
If you download our Security Onion ISO image, the Docker engine and these Docker images are baked right into the
ISO image.
If you instead use another ISO image, our installer will download Docker images from ghcr.io as necessary.
16.2.2 Security
To prevent tampering, our Docker images are signed using GPG keys. soup verifies GPG signatures any time Docker
images are updated.
16.2.3 Elastic
To maintain a high level of stability, reliability, and support, our Elastic Docker images are based on the Docker images
provided by Elastic.co.
16.2.4 Images
After installation, you can see all Docker images with the following command:
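sudo docker images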
16.2.5 Logs
If a service is not writing its logs to /opt/so/log, then you may need to check the Docker logs for more detail. For
example, to check the Docker logs for Kibana:
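sudo docker logs so-kibana   # assumes the Kibana container follows the so- naming convention used elsewhere in this section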
16.2.6 Registry
By default, Docker configures its network bridge with an IP address of 172.17.0.1. This works fine for networks
that aren’t already using the 172.17.0.0/16 range. If you are using this range in your network, then you can
change the Docker range during installation.
16.2.8 Containers
Our Docker containers all belong to a common Docker bridge network, called so-elastic-net. Each container is also aliased, so that communication can occur between the different Docker containers using said alias. For example, communication to the so-elasticsearch container would occur through an alias of elasticsearch.
You may come across interfaces in ifconfig with the format veth*. These are the external interfaces for each
of the Docker containers. These interfaces correspond to internal Docker container interfaces (within the Docker
container itself).
To identify which external interface belongs to which container, we can do something like the following:
From the host, type:
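sudo docker exec so-elasticsearch cat /sys/class/net/eth0/iflink   # using so-elasticsearch as an example container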
This should provide you with a value with which you can grep the host net class ifindex(es):
Example:
grep 25 /sys/class/net/veth*/ifindex | cut -d'/' -f5
If you have VMware Tools installed and you suspend and then resume, the Docker interfaces will no longer have
IP addresses and the Elastic stack will no longer be able to communicate. One workaround is to remove /etc/
16.2.10 Dependencies
Playbook
SOCtopus
Suricata
Kibana
Zeek
Dr. Johannes Ullrich of the SANS Internet Storm Center posted a great DNS Anomaly Detection script based on the
query logs coming from his DNS server. We can do the same thing with Zeek’s dns.log (where Zeek captures all the
DNS queries it sees on the network).
Note: Please note that the following script is only intended for standalone machines and will not work properly on
distributed deployments. Another option which might work better is ElastAlert and its new_term rule.
Thanks to senatorhotchkiss on our mailing list for updating the original script to replace bro-cut with jq:
#!/bin/bash
# Compare yesterday's DNS queries against a baseline built from older days
# and report queries seen for the first time.
ZEEK_LOGS="/nsm/zeek/logs"
TODAY=`date +%Y-%m-%d`
YESTERDAY=`date -d yesterday +%Y-%m-%d`
OLD_DIRS=`ls $ZEEK_LOGS | grep "20*-*" | egrep -v "current|stats|$TODAY|$YESTERDAY"`
TMPDIR=/tmp
OLDLOG=$TMPDIR/oldlog
NEWLOG=$TMPDIR/newlog
SUSPECTS=$TMPDIR/suspects
# Build the baseline and yesterday's counts (reconstructed step, assuming
# JSON-format dns.log files): extract the query field with jq, count unique
# queries, then sort on the query field so join can match on it.
for DIR in $OLD_DIRS; do zcat $ZEEK_LOGS/$DIR/dns.*; done | jq -r '.query' | sort | uniq -c | sort -k2 > $OLDLOG
zcat $ZEEK_LOGS/$YESTERDAY/dns.* | jq -r '.query' | sort | uniq -c | sort -k2 > $NEWLOG
# Join on the query field; queries with counts in both files have been seen
# before, so keep only queries that appear solely in yesterday's log.
join -1 2 -2 2 -a 2 $OLDLOG $NEWLOG | egrep -v '.* [0-9]+ [0-9]+$' | sort -nr -k2 | head -50 > $SUSPECTS
echo
echo "===================================="
echo "Top 50 First Time Seen DNS queries:"
echo "===================================="
cat $SUSPECTS
At Security Onion Conference 2016, Eric Conrad shared some IDS rules for detecting unusual ICMP echo requests/replies and identifying C2 channels that may utilize ICMP tunneling for covert communication.
16.4.1 Usage
We can add the rules to /opt/so/rules/nids/local.rules and the variables to suricata.yaml so that
we can gain better insight into ICMP echoes or replies over a certain size, containing particularly suspicious content,
etc.
16.4.2 Presentation
16.4.3 Download
16.5.1 Overview
This section is a brief overview of connecting a Jupyter notebook/server instance to Elasticsearch to slice and dice
the data as you wish. It will not cover the setup of a Jupyter instance, which has been thoroughly documented (using
Docker) at https://jupyter-docker-stacks.readthedocs.io/en/latest/index.html.
At the top of the page, click the Options menu and enable the Show all configurable settings,
including advanced settings. option. On the left side, select the elasticsearch_rest option. On
the right side, add your IP address or CIDR blocks and click the checkmark to save.
Once complete, you should be able to connect to the Elasticsearch instance. You can confirm connectivity using tools like curl or PowerShell's Test-NetConnection.
Note: The following steps are heavily inspired by Roberto Rodriguez's Medium post:
https://medium.com/threat-hunters-forge/jupyter-notebooks-from-sigma-rules-%EF%B8%8F-to-query-elasticsearch-31a74cc59b99
The Jupyter environment will need to have at least the following Python libraries installed:
    • elasticsearch
    • elasticsearch_dsl
    • pandas
You can install these using the following commands on the Jupyter host, or within the Jupyter Docker container:
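For example, using pip:
pip install elasticsearch elasticsearch_dsl pandas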
Once the Python prerequisites are installed, we can start executing commands within our notebook.
We’ll start with importing the libraries we just mentioned. In the first cell, we’ll paste the following:
Then, we’ll press Shift+ENTER to execute the command(s) within the cell (can also click to run the cell from the
Run menu).
In the next cell, we’ll specify the Elasticsearch instance address and port (192.168.6.100:9200) and the user-
name (jupyter) and password (password) we created within Security Onion, as well as the index filter we would
like to use for searching (*:so-*):
es = Elasticsearch(['https://192.168.6.100:9200'],
ca_certs=False,verify_certs=False, http_auth=('jupyter','password'))
searchContext = Search(using=es, index='*:so-*', doc_type='doc')
Note: We are choosing to use verify_certs=False here to avoid complications with self-signed certificates
during testing. Ideally, we would want to make sure we are performing verification wherever possible.
Again, we’ll execute the code within the cell, by pressing Shift+ENTER.
We may see a warning like the following due to the fact that we are not performing verification for certificates:
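It will look something like this (paraphrased; the exact text depends on your urllib3 version):
InsecureRequestWarning: Unverified HTTPS request is being made to host '192.168.6.100'. Adding certificate verification is strongly advised.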
For convenience during our testing, we can disable the warning in future runs by pasting the following in the next cell and executing it with Shift+ENTER:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
s = searchContext.query('query_string', query='event.module:sysmon')
In this example, we are looking for logs that contain a field called event.module and a value of sysmon (Sysmon
logs). Once more, we’ll press Shift+ENTER and continue on.
Finally, we’ll submit our query in the next cell using the following:
response = s.execute()
if response.success():
   df = pd.DataFrame((d.to_dict() for d in s.scan()))
df
The above code converts each result to a Python dict and loads the results into a pandas DataFrame:
We can select a few fields, and modify the column values if we like:
response = s.execute()
if response.success():
    df = pd.DataFrame(([d['event']['dataset'], d['process']['executable'], d['file']['target']] for d in s))
df.columns=['Dataset','Executable', 'Target']
df
Then we end up with something a little bit more targeted (you may need to adjust pd.options.display.max_colwidth for it to display appropriately):
Obviously, there is much more we can do with this data other than just running the above example code. Happy
hunting!
If you ever need to add a new disk to expand your /nsm partition, there are at least 3 different ways to do this.
Warning: Before doing this in production, make sure you practice this on a non-production system!
If you installed using LVM, then you should be able to use LVM to add new disk space to your LVM partitions.
If you aren’t using LVM, you can mount a drive directly to /nsm. If doing this after installation, you will need to stop
services, move the data, and then restart services as shown below.
Stop services:
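One possible approach (a sketch; your environment may differ) is to keep Salt from restarting everything by disabling the salt-minion service and rebooting:
sudo systemctl disable salt-minion
sudo reboot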
That should prevent most things from starting. If performing this on a manager you will need to do sudo service
docker stop after the reboot.
Move the data:
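A sketch, assuming the new disk is /dev/sdb1 and is temporarily mounted at /mnt/nsm (adjust device names and paths for your system, and update /etc/fstab so the new mount persists across reboots):
sudo mkdir -p /mnt/nsm
sudo mount /dev/sdb1 /mnt/nsm
sudo rsync -av /nsm/ /mnt/nsm/
sudo umount /mnt/nsm
sudo mount /dev/sdb1 /nsm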
Restart services:
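Continuing the sketch above, re-enable the salt-minion service and reboot so that services start back up against the new /nsm:
sudo systemctl enable salt-minion
sudo reboot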
A variation on Method 2 is to make /nsm a symbolic link to the new logging location. Certain services like AppArmor
may need special configuration to handle the symlink.
The easiest way to download pcaps for testing is our so-test tool. Alternatively, you could manually download pcaps
from one or more of the following locations:
    • https://www.malware-traffic-analysis.net/
    • https://digitalcorpora.org/corpora/network-packet-dumps
    • https://www.netresec.com/?page=PcapFiles
    • https://www.netresec.com/?page=MACCDC
    • https://github.com/zeek/zeek/tree/master/testing/btest/Traces
    • https://www.ll.mit.edu/r-d/datasets/2000-darpa-intrusion-detection-scenario-specific-datasets
    • https://wiki.wireshark.org/SampleCaptures
    • https://www.stratosphereips.org/datasets-overview
    • https://ee.lbl.gov/anonymized-traces.html
    • https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Public_Data_Sets
    • https://forensicscontest.com/puzzles
    • https://github.com/markofu/hackeire/tree/master/2011/pcap
    • https://www.defcon.org/html/links/dc-ctf.html
    • https://github.com/chrissanders/packets
You can download pcaps from the links above using a standard web browser or from the command line using a tool like wget or curl. Here are some examples.
To download the pcap from https://www.malware-traffic-analysis.net/2020/09/16/index.html using wget:
wget https://www.malware-traffic-analysis.net/2020/09/16/2020-09-16-Qakbot-infection-traffic.pcap.zip
To download the first pcap in the MACCDC 2012 collection using wget:
wget https://download.netresec.com/pcap/maccdc-2012/maccdc2012_00000.pcap.gz
16.7.1 Replay
You can use tcpreplay to replay any standard pcap to the sniffing interface of your Security Onion sensor.
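For example (an illustrative invocation; adjust the interface name and the -M replay rate in Mbps for your sensor):
sudo tcpreplay -i bond0 -M 10 /path/to/sample.pcap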
16.7.2 Import
A drawback to using tcpreplay is that it’s replaying the pcap as new traffic and thus the timestamps that you see in
Kibana and other interfaces do not reflect the original timestamps from the pcap. To avoid this, you can import the
pcap using the Grid page.
There may come a time when you need to remove a node from your distributed deployment. To do this, you’ll need to
remove the node’s configuration from a few different components.
16.8.1 Salt
You can remove a node from salt by going to Administration –> Grid Members.
Find the Grid Member you would like to remove, click the REVIEW button, and then click the DELETE button.
16.8.2 SOC
To remove the node from the SOC Grid page, make sure the node is powered off and then restart SOC:
sudo so-soc-restart
If you want to send logs to an external system, you can configure Logstash to output to syslog.
Note:
For more information about Logstash’s syslog output plugin, please see:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-syslog.html
Please keep in mind that we don’t provide free support for third party systems.
When you run Security Onion Setup, it sets the operating system timezone to UTC/GMT. Logging in UTC is considered a best practice across the cybersecurity industry because it makes it that much easier to correlate events across different systems, organizations, or time zones. Additionally, it avoids issues with time zones that observe daylight saving time, which would otherwise result in a one-hour time warp twice a year.
Web interfaces like Alerts, Dashboards, Hunt, and Kibana should try to detect the timezone of your web browser
and then render those UTC timestamps in local time. Alerts, Dashboards, and Hunt allow you to manually set your
timezone under Options.
Utilities
17.1 jq
From https://stedolan.github.io/jq/:
      jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with
      the same ease that sed, awk, grep and friends let you play with text.
17.1.1 Usage
We configure Zeek and Suricata to write logs to /nsm/ in JSON format. If you want to parse those logs from the
command line, then you can use jq. Here’s a basic example:
jq '.' /nsm/zeek/logs/current/conn.log
This command will parse all of the records in /nsm/zeek/logs/current/conn.log. For each of the records,
it will then output every field and its value.
17.2 so-allow
In previous versions of Security Onion, so-allow was used to allow traffic through the host-based Firewall. This is
now done by going to Administration –> Configuration –> firewall –> hostgroups.
17.3 so-elastic-auth-password-reset
Elastic service accounts use randomly generated passwords that are 72 characters in length. If you need to reset these
passwords, you can use the so-elastic-auth-password-reset utility.
17.4 so-elasticsearch-query
You can use so-elasticsearch-query to submit a cURL request to the local Security Onion Elasticsearch host
from the command line.
17.4.1 Usage
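The general form is as follows (reconstructed from the parameter descriptions below):
sudo so-elasticsearch-query PATH [ARGS]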
Where:
    • PATH represents the elastic function being requested.
    • ARGS is used to specify additional, optional curl parameters.
17.4.2 Examples
sudo so-elasticsearch-query /
Here’s a more complicated example that includes piping the output to jq:
If you want to delete an old index, you can do that using the -XDELETE option. For example, to delete the Zeek index
for 2022/05/07:
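A sketch, assuming the index is named so-zeek-2022.05.07:
sudo so-elasticsearch-query so-zeek-2022.05.07 -XDELETE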
17.5 so-import-pcap
so-import-pcap will import one or more pcaps into Security Onion and preserve original timestamps. It will do
the following:
    • generate IDS alerts using Suricata
    • generate network metadata using Zeek
    • store IDS alerts and network metadata in Elasticsearch with original timestamps
    • store pcaps where Security Onion Console (SOC) can find them
    • provide a hyperlink for you to view all alerts and logs in Security Onion Console (SOC)
In addition to viewing alerts and logs in Security Onion Console (SOC), you can also find logs in Kibana.
Tip: You can run this command manually, but for most use cases it’s easier to upload a pcap via Grid and it will
automatically run so-import-pcap for you.
17.5.1 Screenshot
17.5.2 Configuration
so-import-pcap requires you to run through Setup and choose a configuration that supports so-import-pcap. This
includes Import Node and other nodes that include sensor services like Eval and Standalone. The quickest and easiest
option is to choose Import Node which gives you the minimal services necessary to import a pcap.
17.5.3 Usage
Once Setup completes, you can then run sudo so-import-pcap and supply the full path to at least one pcap file.
For example, to import a single pcap named import.pcap:
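sudo so-import-pcap /full/path/to/import.pcap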
Please note that if you import multiple pcaps at one time, so-import-pcap currently only provides a hyperlink for the
last pcap in the list. If you need a hyperlink for each pcap, then you can run one pcap file per so-import-pcap and use
a for-loop to iterate over your collection of pcap files.
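A sketch of such a loop:
for p in /full/path/to/*.pcap; do sudo so-import-pcap "$p"; done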
so-import-pcap calculates the MD5 hash of the imported pcap and creates a directory in /nsm/import/ for that
hash. This is where so-import-pcap stores the alerts and logs generated by the traffic in the pcap. If you try to import
that same pcap again, it will tell you that it has already imported that pcap. If for some reason you really do need to
import that pcap again, you can remove that pcap’s directory in /nsm/import/ and then try again.
17.5.4 Examples
If you don’t already have some pcap files to import, see PCAPs for Testing for a list of sites where you can download
sample pcaps.
Our Quick Malware Analysis series at https://blog.securityonion.net/search/label/quick%20malware%20analysis uses
so-import-pcap to import pcaps from https://www.malware-traffic-analysis.net/ and other sites. Following along with
these blog posts in your own so-import-pcap VM is a great way to practice your skills!
17.6 so-import-evtx
so-import-evtx will import one or more evtx files into Security Onion.
17.6.1 Usage
Run sudo so-import-evtx and supply the full path to at least one evtx file. For example, to import a single evtx
file named import.evtx:
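sudo so-import-evtx /full/path/to/import.evtx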
so-import-evtx then provides a hyperlink for you to view all logs in Security Onion Console (SOC). You can also find
logs in Kibana.
17.7 so-monitor-add
If you’ve already run through Setup but later find that you need to add a new monitor (sniffing) interface, you can run
so-monitor-add. This will allow you to add network interfaces to bond0 so that their traffic is monitored.
 Warning: Cloud images sniff directly from network interfaces rather than using bond0 so this utility won’t work
 in those environments.
17.8 so-status
To check the status of Security Onion services, you can either run sudo so-status or simply view the Status
panel on the Grid page.
so-status reads the list of enabled services from /opt/so/conf/so-status/so-status.conf and checks
the status of each. If you ever disable a service, you may need to remove it from that file.
so-status -h
Usage: /usr/sbin/so-status [OPTIONS]
   Options:
    -h                  - Prints this usage information
    -q                  - Suppress output; useful for automation of exit code value
    -j                  - Output in JSON format
    -i                  - Consider the installation outcome regardless of whether the system appears healthy
  Exit codes:
    0                       -   Success, system appears to be running correctly
    1                       -   Error, one or more subsystems are not running
    2                       -   System is starting
    99                      -   Installation in progress
    100                     -   System installation encountered errors
sudo so-status -q
echo $?
0
17.9 so-test
so-test will run so-tcpreplay to replay some pcap samples to your sniffing interface.
 Warning: You will need to have Internet access in order to download the pcap samples. Also, if you have
 a distributed deployment, make sure you run so-tcpreplay on the manager first to download the necessary
 Docker image.
so-test
Replay functionality not enabled; attempting to enable now (may require Internet access)...
Once this completes, you can then go to Alerts, Dashboards, and Hunt to review data.
Help
18.1 FAQ
Support / Help
IDS engines
Security Onion internals
Tuning
Miscellaneous
No, we only support x86-64 (standard Intel/AMD 64-bit architectures). Please see the Hardware Requirements section.
back to top
No, Security Onion does not support blocking traffic. Most organizations have some sort of Next Generation Firewall
(NGFW) with IPS features and that is the proper place for blocking to occur. Security Onion is designed to monitor
the traffic that makes it through your firewall.
back to top
Where can I read more about the tools contained within Security Onion?
Standard network connections to or from Security Onion are encrypted. This includes SSH, HTTPS, Elasticsearch
network queries, and Salt minion traffic. Endpoint agent traffic is encrypted where supported. This includes the Elastic
Agent which supports encryption with additional configuration. SOC user account passwords are hashed via bcrypt in
Kratos and you can read more about that at https://github.com/ory/kratos.
back to top
18.1.6 Tuning
What are the default firewall settings and how do I change them?
What do I need to modify in order to have the log files stored on a different mount point?
18.1.7 Miscellaneous
Why is Security Onion connecting to an IP address on the Internet over port 123?
Security Onion automatically backs up some important configuration as described in the Backup section. However,
there is no automated data backup. Network Security Monitoring as a whole is considered “best effort”. It is not a
“mission critical” resource like a file server or web server. Since we’re dealing with “big data” (potentially terabytes
of full packet capture) of a transient nature, backing up the data would be prohibitively expensive. Most organizations
don’t do any data backups and instead just rebuild boxes when necessary.
We understand the appeal of integrating with directory services like Active Directory and LDAP, but we typically
recommend against joining any security infrastructure (including Security Onion) to directory services. The reason
is that when you get an adversary inside your network, one of their first goals is going to be gaining access to that
directory. If they get access to the directory, then they get access to everything connected to the directory. For that
reason, we recommend that all security infrastructure (including Security Onion) be totally separate from directory
services.
back to top
18.2.1 /opt/so/conf
Applications read their configuration from /opt/so/conf/. However, please keep in mind that most config files
are managed with Salt, so if you manually modify those config files, your changes may be overwritten at the next Salt
update.
18.2.2 /opt/so/log
18.2.3 /opt/so/rules
18.2.4 /opt/so/saltstack/local
18.2.5 /nsm
18.2.6 /nsm/zeek
18.2.7 /nsm/elasticsearch
18.2.8 /nsm/pcap
18.3 Tools
Security Onion would like to thank the following projects for their contribution to our community!
(listed alphabetically)
    • ATT&CK Navigator
    • Curator
    • CyberChef
    • Docker
    • ElastAlert
    • Elasticsearch
    • Elastic Agent
    • InfluxDB
    • Kibana
    • Logstash
    • Redis
    • Salt
    • Stenographer
    • Strelka
    • Suricata
    • Zeek
18.4 Support
If you need private or priority support, please consider purchasing hardware appliances or support from Security Onion
Solutions:
https://securityonionsolutions.com/support
Tip: Purchasing from Security Onion Solutions helps to support development of Security Onion as a free and open
platform!
If you need free support, you can reach out to our Community Support.
First, check to see if your question has already been answered in the Help or FAQ sections.
18.5.3 Forum
Once you’ve read and understand all of the above, you can post your question to the community support forum at
https://securityonion.net/discuss.
Folks frequently ask how they can give back to the Security Onion community. Here are a few of our community
teams that you can help with.
We need more folks to help spread the word about Security Onion by blogging, tweeting, and other social media.
If you’d like help out other Security Onion users, please join the forum and start answering questions!
https://securityonion.net/discuss
If you find that some information in our Documentation is incorrect or lacking, please feel free to submit Pull Requests
via GitHub!
https://github.com/Security-Onion-Solutions/securityonion-docs
Most of our code is on GitHub. Please feel free to submit pull requests!
https://github.com/Security-Onion-Solutions
18.6.5 Thanks
The following folks have made significant contributions to Security Onion over the years. Thanks!
    • Lawrence Abrams
    • Jack Blanchard
    • Kevin Branch
    • Josh Brower
    • Pete Di Giorgio
    • Dennis Distler
    • Jason Ertel
    • Seth Hall
    • Paul Halliday
    • Joe Hargis
    • Mark Hillick
    • Wes Lambert
    • Dustin Lee
    • Josh More
    • Corey Ogburn
    • Eric Ooi
    • Josh Patterson
    • Phil Plantamura
    • Liam Randall
    • Mike Reeves
    • Scott Runnels
    • Jon Schipp
    • Brad Shoop
    • Bryant Treacle
    • William Wernert
Security
If you have any security concerns regarding Security Onion or believe you have uncovered a vulnerability, please send
an email to security@securityonion.net per the following guidelines:
    • Include a description of the issue and steps to reproduce
    • Use plain text format in the email (no Word documents or PDF files)
Please do NOT disclose publicly until we have had sufficient time to resolve the issue.
Note: This security address should be used only for undisclosed vulnerabilities. Dealing with fixed issues or general
questions on how to use Security Onion should be handled via the normal Support channels.
Security Onion is based on free and open software. Third-party components, as well as the software that the Security Onion team develops, are built from source code that is readily available for the public to review. Community contributors, or anyone for that matter, can request to have notifications pushed to them when a change is accepted into the
public repositories. This is very different from closed source software since those closed source code bases are only
visible to a very small group of developers. Further, if a closed source code base does not have formal code review
procedures in place, or lacks infrastructure around the code base to make others aware of new changes, this further
restricts visibility and review of changes. These deficiencies allow attackers that gain unauthorized access to a closed
source code base to make changes without others detecting it.
When upstream, third-party components are updated in Security Onion, the change requires multiple checks before it
can be merged into the master (released) branch. First, all commits must be signed using cryptography before being
allowed into the master branch. Second, code reviews and approvals from multiple team members are required before
the pull requests can be merged. Both of these restrictions are enforced by the source code repository itself, which
eliminates the risk of a human mistake allowing the process to be bypassed. Further, changes to the Security Onion source code repositories cause notifications to be delivered to the Security Onion development team, as well as anyone in the public who chooses to be notified of such changes. On top of this, Security Onion developers are required (enforced by the repository itself) to use multi-factor authentication in order to approve changes.
Additionally, Security Onion’s build infrastructure runs both unit level tests and fully automated end-to-end tests on
every release, to ensure the Security Onion platform, and its components, continue to operate as expected. When a change that breaks the automated tests is merged into Security Onion, whether it's an upgrade of an upstream component or a modification to the source code maintained by the Security Onion developers, we are notified and take action to review the failure and find the root cause. Often this results in our developers chasing down upstream code commits to find out why something changed, and whether it was intended or not. Fortunately, these investigations typically turn out to be bug related, rather than malicious, and our team will either contribute a pull request to fix the upstream project, or file an issue to raise awareness with the project maintainers.
There is no guarantee that any software, open or closed source, will always be free from attacks. However, our commitment to open software, and our investments in repeatable processes and software automation and testing technologies, improve Security Onion's posture when it comes to safeguarding the product and its user base.
Release Notes
   • FIX: Component templates not updated when packages are updated #11065
   • FIX: Importing both PCAP and EVTX files fails #11030
   • FIX: Logstash container missing on distributed receiver #11099
   • FIX: pipeline with id logs-system.syslog-1.6.4 does not exist #11038
   • FIX: Suricata permissions on Heavy Nodes are incorrect #11031
   • FIX: Firewall state custom host group assignments for single portgroup entry #10917
   • FIX: IDH node #10882
   • FIX: IPTables Persistence #10884
   • FIX: Install Error: so-yara-download failed #10880
   • FIX: Install screen - Firewall #10945
   • FIX: List settings updated with blank values should be stored as empty lists #10936
   • FIX: Login page shows error banner briefly on initial page load #10911
   • FIX: RAID status on Grid page #10935
   • FIX: SOC Auth dashboard #10878
   • FIX: Security Onion Desktop state should default to Gnome Classic #10958
   • FIX: sensor MTU setting in SOC Config should be read only #10883
   • FIX: so-status taking several seconds to complete #10909
   • FIX: soup #10902
   • FIX: syslog not working #10896
   • FIX: verbiage and links in soc_sensor.yaml #10906
   • UPGRADE: Elastic 8.8.2 #10864
   • FEATURE: Add link to Downloads page for convenient access to firewall settings #10702
   • FEATURE: Add more SOC Config quick links #10563
   • FEATURE: Add time zone selection to Grid page #8629
   • FEATURE: Add webauthn support to SOC #10608
   • FEATURE: Allow import of PCAP and EVTX via SOC UI #10413
   • FEATURE: Elastic Fleet - Automatically Update Logstash Outputs #10746
   • FEATURE: Elastic Fleet Server URL - Custom Domain #10744
   • FEATURE: Supported Integrations #10590
   • FEATURE: so-import-evtx #10673
   • FIX: Strelka rule path #10715
   • FIX: 2.4 ISO image won’t install on Virtualbox #10534
   • FIX: Account for Suricata XFF function in parsing and ingestion #8643
   • FIX: Add more Zeek logs to excluded list #10569
   • FIX: Analyzer requests and whoisit updates #10524
   • FIX: Change Playbook index to data stream and update event.severity_label #10523
   • FIX: Cleanup log-rotate.conf #10545
   • FIX: Curator should ignore empty list #10512
   • FIX: Don’t override default integration ingest node pipelines #10542
   • FIX: Ensure operations on records with “Missing” fields use correct search #8025
   • FIX: Ensure packages aren’t installed from default Rocky repos #10630
   • FIX: Exclude System logs from Hunt/Dashboard Queries. #10122
   • FIX: Finish SSL cert integration into SOC config UI #10533
   • FIX: Improve SOC login error message for disabled users #8908
   • FIX: Increase net.core.wmem_default value #10602
   • FIX: InfluxDB NSM Disk Usage visualization #10520
   • FIX: Integration logs not parsed correctly #10672
   • FIX: Logstash soc.fields.query warning #10528
   • FIX: Node description config setting should only apply at the node level #10562
   • FIX: Remove default excluded rules from YARA repo #10718
   • FIX: Review Kibana Dashboards #10664
   • FIX: Rework dataset name and add tags based on suffix #10526
   • FIX: Rework field to account for missing classifiers #10420
   • FIX: SOC Config NTP quick link #10519
   • FIX: Scheduled jobs trying to run during setup #10468
    • FIX: SOC only displaying data for users assigned the superuser role #10068
    • FIX: Sort grid members lists #10185
    • FIX: Suricata DNS A and CNAME parsing #10117
    • FIX: Using SOC Configuration to change mdengine from ZEEK to SURICATA fails #10189
    • FIX: Zeek @local and @local-sigs need to strip the @ for config but replace in local.zeek #10050
    • FIX: Zeek is not honoring lbprocs #10062
    • UPGRADE: Elastic 8.7.0 #10059
    • UPGRADE: Suricata 6.0.11 #10067
    • UPGRADE: Zeek 5.0.8 #10107
https://blog.securityonion.net/2023/03/security-onion-24-beta-release-now.html
Appendix
This appendix provides an overview of the process of migrating from the old Security Onion 2.3 to the new Security
Onion 2.4.
Tip: If you are a current Security Onion Solutions customer with Professional Services or Appliance coverage,
contact SOS support and we can help you through this process.
 Warning: Security Onion 2.4 is a MAJOR change, so please note the following:
     • Security Onion 2.4 has higher hardware requirements, so you should check that your hardware meets those
       requirements.
     • The /nsm partition must be on a separate disk.
     • InfluxDB data is not migrated.
     • If you have a distributed deployment, please note that 2.3 search nodes defaulted to cross cluster search
       whereas 2.4 defaults to full Elastic clustering. This means that you may need to rename or delete some
       Elasticsearch indices.
     • We do not provide any guarantees that the upgrade process will work! If the upgrade fails, be prepared to
       perform a fresh installation of Security Onion 2.4.
For the reasons listed above, we recommend that most users procure new hardware and perform a fresh installation of
Security Onion 2.4.
Tip: If you’re planning to purchase new hardware, please consider official Security Onion appliances from Security
Onion Solutions (https://securityonionsolutions.com). Our custom appliances have already been designed for certain
roles and traffic levels and have Security Onion 2 pre-installed. Purchasing from Security Onion Solutions will save
you time and effort and help to support development of Security Onion as a free and open platform!
If you have reviewed all of the warnings above and still want to attempt migration, you should be able to do the
following.
 Warning: We recommend trying this process in a test environment before attempting in your production environ-
 ment.
 Warning: Please ensure that you have local access to the machine being upgraded via console, DRAC, IPMI, etc.
 Failure to do so could result in an unsuccessful upgrade, requiring a clean installation of Security Onion 2.4.
First, make sure that your 2.3 installation is fully updated via soup:
sudo soup
If there are any remaining docker processes, stop them (replacing $CONT_ID with the actual ID):
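sudo docker stop $CONT_ID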
Unmount /nsm:
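sudo umount /nsm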
Boot the Security Onion 2.4 ISO image, go through the initial OS installation as shown in the Installation section, and
reboot.
After reboot, cancel setup and change partitioning (replacing /home/user/ with your desired temporary location):
sudo mount -a
sudo systemctl daemon-reload
If you get the error mysql error 1130: '172.17.1.1' is not allowed to connect to this mysql server, then run the following:
UPDATE mysql.user SET Host = '172.17.1.1' WHERE User = 'root' AND Host = 'localhost';
sudo so-checkin
Cheat Sheet
If you are viewing the online version of this documentation, you can click here for our Security Onion Cheat Sheet.
This was based on a cheat sheet originally created by Chris Sanders which can be found here:
https://chrissanders.org/2017/06/security-onion-cheat-sheet/