DaemonEye Documentation
Welcome to the DaemonEye documentation! This comprehensive guide covers everything you need to know about DaemonEye, a high-performance, security-focused process monitoring system built in Rust.
What is DaemonEye?
DaemonEye is a complete Rust rewrite of a proven Python prototype, designed for cybersecurity professionals, threat hunters, and security operations centers. It provides real-time process monitoring, threat detection, and alerting capabilities across multiple platforms.
Key Features
- Real-time Process Monitoring: Continuous monitoring of system processes with minimal performance impact
- Threat Detection: SQL-based detection rules with hot-reloading capabilities
- Multi-tier Architecture: Core, Business, and Enterprise tiers with different feature sets
- Cross-platform Support: Linux, macOS, and Windows support
- Container Ready: Docker and Kubernetes deployment options
- Security Focused: Built with security best practices and minimal attack surface
Three-Component Security Architecture
DaemonEye follows a robust three-component security architecture:
- ProcMonD (Collector): Privileged process monitoring daemon with minimal attack surface
- daemoneye-agent (Orchestrator): User-space process for alerting and network operations
- daemoneye-cli: Command-line interface for queries and configuration
This separation ensures robust security by isolating privileged operations from network functionality.
Documentation Structure
This documentation is organized into several sections:
- Getting Started: Quick start guide for new users
- Project Overview: Detailed project information and features
- Architecture: System architecture and design principles
- Technical Documentation: Technical specifications and implementation details
- User Guides: Comprehensive user and operator guides
- API Reference: Complete API documentation
- Deployment: Installation and deployment guides
- Security: Security considerations and best practices
- Testing: Testing strategies and guidelines
- Contributing: Contribution guidelines and development setup
Quick Links
- Installation Guide
- Configuration Guide
- Operator Guide
- API Reference
- Docker Deployment
- Kubernetes Deployment
Getting Help
If you need help with DaemonEye:
- Check the Getting Started guide
- Review the Troubleshooting section
- Consult the API Reference for technical details
- Join our community discussions on GitHub
- Contact support for commercial assistance
License
DaemonEye follows a dual-license strategy:
- Core Components: Apache 2.0 licensed (procmond, daemoneye-agent, daemoneye-cli, daemoneye-lib)
- Business Tier Features: $199/site one-time license (Security Center, GUI, enhanced connectors, curated rules)
- Enterprise Tier Features: Custom pricing (kernel monitoring, federation, STIX/TAXII integration)
This documentation is continuously updated. For the latest information, always refer to the most recent version.
Getting Started with DaemonEye
This guide will help you get DaemonEye up and running quickly on your system. DaemonEye is designed to be simple to deploy while providing powerful security monitoring capabilities.
Table of Contents
- Prerequisites
- Installation
- Quick Start
- Basic Configuration
- Creating Your First Detection Rule
- Common Operations
- Troubleshooting
- Next Steps
- Support
Prerequisites
System Requirements
Minimum Requirements:
- OS: Linux (kernel 3.10+), macOS (10.14+), or Windows (10+)
- RAM: 512MB available memory
- Disk: 1GB free space
- CPU: Any x86_64 or ARM64 processor
Recommended Requirements:
- OS: Linux (kernel 4.15+), macOS (11+), or Windows (11+)
- RAM: 2GB+ available memory
- Disk: 10GB+ free space
- CPU: 2+ cores
Privilege Requirements
DaemonEye requires elevated privileges for process monitoring. The system is designed to:
- Request minimal privileges during startup
- Drop privileges immediately after initialization
- Continue operating with standard user privileges
- Linux: Requires the CAP_SYS_PTRACE capability (or root)
- Windows: Requires SeDebugPrivilege (or Administrator)
- macOS: Requires appropriate entitlements (or root)
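On Linux, for example, the capability can be granted to the collector binary directly rather than running it as root. A minimal sketch, assuming the default install path used later in this guide:
# Grant only the ptrace capability to procmond instead of running it as root
sudo setcap cap_sys_ptrace+ep /usr/local/bin/procmond
# Confirm the capability is set
getcap /usr/local/bin/procmond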
Installation
Option 1: Pre-built Binaries (Recommended)
1. Download the latest release:
# Linux
wget https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-linux-x86_64.tar.gz
tar -xzf daemoneye-linux-x86_64.tar.gz
# macOS
curl -L https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-macos-x86_64.tar.gz | tar -xz
# Windows
# Download and extract from GitHub releases
2. Install to system directories:
# Linux/macOS
sudo cp procmond daemoneye-agent daemoneye-cli /usr/local/bin/
sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
# Windows
# Copy to C:\Program Files\DaemonEye\
Option 2: From Source
1. Install Rust (1.85+):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
2. Clone and build:
git clone https://github.com/daemoneye/daemoneye.git
cd daemoneye
cargo build --release
3. Install the built binaries:
sudo cp target/release/procmond target/release/daemoneye-agent target/release/daemoneye-cli /usr/local/bin/
Option 3: Package Managers
Homebrew (macOS):
brew install daemoneye/daemoneye/daemoneye
APT (Ubuntu/Debian):
# Add repository (when available)
sudo apt update
sudo apt install daemoneye
YUM/DNF (RHEL/CentOS):
# Add repository (when available)
sudo yum install daemoneye
Quick Start
1. Create Configuration Directory
# Linux/macOS
sudo mkdir -p /etc/daemoneye
sudo chown $USER:$USER /etc/daemoneye
# Windows
mkdir C:\ProgramData\DaemonEye
2. Generate Initial Configuration
# Generate default configuration
daemoneye-cli config init --output /etc/daemoneye/config.yaml
This creates a basic configuration file:
# DaemonEye Configuration
app:
  scan_interval_ms: 30000
  batch_size: 1000
  log_level: info
database:
  event_store_path: /var/lib/daemoneye/events.redb
  audit_ledger_path: /var/lib/daemoneye/audit.sqlite
  retention_days: 30
detection:
  rules_path: /etc/daemoneye/rules
  enabled_rules: ['*']
alerting:
  sinks:
    - type: stdout
      enabled: true
    - type: syslog
      enabled: true
      facility: daemon
# Platform-specific settings
platform:
  linux:
    enable_ebpf: false # Requires kernel 4.15+
  windows:
    enable_etw: false # Requires Windows 10+
  macos:
    enable_endpoint_security: false # Requires macOS 10.15+
3. Create Data Directory
# Linux/macOS
sudo mkdir -p /var/lib/daemoneye
sudo chown $USER:$USER /var/lib/daemoneye
# Windows
mkdir C:\ProgramData\DaemonEye\data
4. Start the Services
Option A: Manual Start (Testing)
# Terminal 1: Start daemoneye-agent (manages procmond)
daemoneye-agent --config /etc/daemoneye/config.yaml
# Terminal 2: Use CLI for queries
daemoneye-cli --config /etc/daemoneye/config.yaml query "SELECT * FROM processes LIMIT 10"
Option B: System Service (Production)
# Linux (systemd)
sudo cp scripts/systemd/daemoneye.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable daemoneye
sudo systemctl start daemoneye
# macOS (launchd)
sudo cp scripts/launchd/com.daemoneye.agent.plist /Library/LaunchDaemons/
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
# Windows (Service)
# Run as Administrator
sc create "DaemonEye Agent" binPath= "C:\Program Files\DaemonEye\daemoneye-agent.exe --config C:\ProgramData\DaemonEye\config.yaml"
sc start "DaemonEye Agent"
5. Verify Installation
# Check service status
daemoneye-cli health
# View recent processes
daemoneye-cli query "SELECT pid, name, executable_path FROM processes ORDER BY collection_time DESC LIMIT 10"
# Check alerts
daemoneye-cli alerts list
# View system metrics
daemoneye-cli metrics
Basic Configuration
Essential Settings
Scan Interval: How often to collect process data
app:
  scan_interval_ms: 30000 # 30 seconds
Database Retention: How long to keep data
database:
  retention_days: 30 # Keep data for 30 days
Log Level: Verbosity of logging
app:
  log_level: info # debug, info, warn, error
Alert Configuration
Enable Syslog Alerts:
alerting:
  sinks:
    - type: syslog
      enabled: true
      facility: daemon
      tag: daemoneye
Enable Webhook Alerts:
alerting:
  sinks:
    - type: webhook
      enabled: true
      url: https://your-siem.com/webhook
      headers:
        Authorization: Bearer your-token
Enable File Output:
alerting:
  sinks:
    - type: file
      enabled: true
      path: /var/log/daemoneye/alerts.json
      format: json
Creating Your First Detection Rule
1. Create Rules Directory
mkdir -p /etc/daemoneye/rules
2. Create a Simple Rule
Create /etc/daemoneye/rules/suspicious-processes.sql:
-- Detect processes with suspicious names
SELECT
pid,
name,
executable_path,
command_line,
collection_time
FROM processes
WHERE
name IN ('malware.exe', 'backdoor.exe', 'trojan.exe')
OR name LIKE '%suspicious%'
OR executable_path LIKE '%temp%'
ORDER BY collection_time DESC;
3. Test the Rule
# Validate the rule
daemoneye-cli rules validate /etc/daemoneye/rules/suspicious-processes.sql
# Test the rule
daemoneye-cli rules test /etc/daemoneye/rules/suspicious-processes.sql
# Enable the rule
daemoneye-cli rules enable suspicious-processes
4. Monitor for Alerts
# Watch for new alerts
daemoneye-cli alerts watch
# List recent alerts
daemoneye-cli alerts list --limit 10
# Export alerts
daemoneye-cli alerts export --format json --output alerts.json
Common Operations
Querying Process Data
Basic Queries:
# List all processes
daemoneye-cli query "SELECT * FROM processes LIMIT 10"
# Find processes by name
daemoneye-cli query "SELECT * FROM processes WHERE name = 'chrome'"
# Find high CPU processes
daemoneye-cli query "SELECT * FROM processes WHERE cpu_usage > 50.0"
# Find processes by user
daemoneye-cli query "SELECT * FROM processes WHERE user_id = '1000'"
Advanced Queries:
# Process tree analysis
daemoneye-cli query "
SELECT
p1.pid as parent_pid,
p1.name as parent_name,
p2.pid as child_pid,
p2.name as child_name
FROM processes p1
JOIN processes p2 ON p1.pid = p2.ppid
WHERE p1.name = 'systemd'
"
# Suspicious process patterns
daemoneye-cli query "
SELECT
pid,
name,
executable_path,
COUNT(*) as occurrence_count
FROM processes
WHERE executable_path LIKE '%temp%'
GROUP BY pid, name, executable_path
HAVING occurrence_count > 5
"
Managing Rules
# List all rules
daemoneye-cli rules list
# Enable/disable rules
daemoneye-cli rules enable rule-name
daemoneye-cli rules disable rule-name
# Validate rule syntax
daemoneye-cli rules validate rule-file.sql
# Test rule execution
daemoneye-cli rules test rule-file.sql
# Import/export rules
daemoneye-cli rules import rules-bundle.tar.gz
daemoneye-cli rules export --output rules-backup.tar.gz
System Health Monitoring
# Check overall health
daemoneye-cli health
# Check component status
daemoneye-cli health --component procmond
daemoneye-cli health --component daemoneye-agent
# View performance metrics
daemoneye-cli metrics
# Check database status
daemoneye-cli database status
# View recent logs
daemoneye-cli logs --tail 50
Troubleshooting
Common Issues
Permission Denied:
# Check if running with sufficient privileges
sudo daemoneye-cli health
# Verify capability requirements
getcap /usr/local/bin/procmond
Database Locked:
# Check for running processes
ps aux | grep daemoneye
# Stop services and restart
sudo systemctl stop daemoneye
sudo systemctl start daemoneye
No Processes Detected:
# Check scan interval
daemoneye-cli config get app.scan_interval_ms
# Verify database path
daemoneye-cli config get database.event_store_path
# Check logs for errors
daemoneye-cli logs --level error
Debug Mode
Enable debug logging for troubleshooting:
app:
  log_level: debug
Or use command-line flag:
daemoneye-agent --config /etc/daemoneye/config.yaml --log-level debug
Getting Help
- Documentation: Check the full documentation in docs/
- Logs: Review logs with daemoneye-cli logs
- Health Checks: Use daemoneye-cli health for system status
- Community: Join discussions on GitHub or community forums
Next Steps
Now that you have DaemonEye running:
- Read the Operator Guide for detailed usage instructions
- Explore Configuration Guide for advanced configuration
- Learn Rule Development for creating custom detection rules
- Review Security Architecture for understanding the security model
- Check Deployment Guide for production deployment
Support
- Documentation: Comprehensive guides in the docs/ directory
- Issues: Report bugs and request features on GitHub
- Community: Join discussions and get help from the community
- Security: Follow responsible disclosure for security issues
Congratulations! You now have DaemonEye running and monitoring your system. The system will continue to collect process data and execute detection rules according to your configuration.
DaemonEye Project Overview
Mission Statement
DaemonEye is a security-focused, high-performance process monitoring system whose primary purpose is to detect suspicious activity on systems through continuous process monitoring, behavioral analysis, and pattern recognition of abnormal process behavior.
This is a complete Rust 2024 rewrite of a proven Python prototype, delivering enterprise-grade performance with audit-grade integrity while maintaining the security-first, offline-capable philosophy.
Core Mission
Detect and alert on suspicious system activity through continuous process monitoring, behavioral analysis, and pattern recognition. Provide security operations teams with a reliable, high-performance threat detection solution that operates independently of external dependencies while maintaining audit-grade integrity and operator-centric workflows.
Key Value Propositions
Audit-Grade Integrity
- Certificate Transparency-style Merkle tree with inclusion proofs suitable for compliance and forensics
- BLAKE3 hashing for fast, cryptographically secure hash computation
- Optional Ed25519 signatures for enhanced integrity verification
- Append-only audit ledger with monotonic sequence numbers
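To illustrate the chaining idea behind the append-only ledger (the types and field layout here are illustrative, not DaemonEye's actual implementation), each entry can bind itself to its predecessor by hashing the previous entry's digest together with its own payload:
// Illustrative hash chaining with the blake3 crate
fn chain_hash(previous: Option<blake3::Hash>, payload: &[u8]) -> blake3::Hash {
    let mut hasher = blake3::Hasher::new();
    if let Some(prev) = previous {
        // Bind this entry to its predecessor so any rewrite breaks the chain
        hasher.update(prev.as_bytes());
    }
    hasher.update(payload);
    hasher.finalize()
}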
Offline-First Operation
- Full functionality without internet access, perfect for airgapped environments
- Local rule caching ensures detection continues during network outages
- Buffered alert delivery with persistent queue for reliability
- Bundle-based configuration and rule distribution system
Security-First Architecture
- Privilege separation with minimal attack surface
- Sandboxed execution and minimal privileges
- Zero unsafe code goal with comprehensive safety verification
- SQL injection prevention with AST validation
High Performance
- <5% CPU overhead while monitoring 10,000+ processes
- Sub-second process enumeration for large systems
- >1,000 records/second database write rate
- <100ms alert latency per detection rule
Operator-Centric Design
- Built for operators, by operators
- Workflows optimized for contested environments
- Comprehensive CLI with multiple output formats
- Color support with NO_COLOR and TERM=dumb handling
Three-Component Architecture
DaemonEye implements a three-component security architecture with strict privilege separation:
1. procmond (Privileged Process Collector)
- Purpose: Minimal privileged component for secure process data collection
- Security: Runs with elevated privileges, drops them immediately after initialization
- Network: No network access whatsoever
- Database: Write-only access to audit ledger
- Features: Process enumeration, executable hashing, Certificate Transparency-style audit ledger
2. daemoneye-agent (Detection Orchestrator)
- Purpose: User-space detection rule execution and alert management
- Security: Minimal privileges, outbound-only network connections
- Database: Read/write access to event store
- Features: SQL-based detection engine, multi-channel alerting, procmond lifecycle management
3. daemoneye-cli (Operator Interface)
- Purpose: Command-line interface for queries, management, and diagnostics
- Security: No network access, read-only database operations
- Features: JSON/table output, color handling, shell completions, system health monitoring
4. daemoneye-lib (Shared Core)
- Purpose: Common functionality shared across all components
- Modules: config, models, storage, detection, alerting, crypto, telemetry
- Security: Trait-based abstractions with security boundaries
Target Users
Primary Users
- SOC Analysts: Monitoring fleet infrastructure for process anomalies
- Security Operations & Incident Response Teams: Investigating compromised systems
- System Reliability Engineers: Requiring low-overhead monitoring
- Blue Team Security Engineers: Integrating with existing security infrastructure
- DevSecOps Teams: Embedding security monitoring in deployments
Organizational Context
- Small Teams: Core tier for basic process monitoring
- Consultancies: Business tier for client management and reporting
- Enterprises: Enterprise tier for large-scale, federated deployments
- Government/Military: Airgapped environments with strict compliance requirements
Key Features
Threat Detection Capabilities
Process Behavior Analysis
- Detect process hollowing and executable integrity violations
- Identify suspicious parent-child relationships
- Monitor process lifecycle events in real-time
- Track process memory usage and CPU consumption patterns
Anomaly Detection
- Identify unusual resource consumption patterns
- Detect suspicious process name duplications
- Monitor for process injection techniques
- Track unusual network activity patterns
SQL-Based Detection Engine
- Flexible rule creation using standard SQL queries
- Sandboxed execution with resource limits
- AST validation to prevent injection attacks
- Comprehensive library of built-in detection rules
Built-in Detection Rules
- Common malware tactics, techniques, and procedures (TTPs)
- MITRE ATT&CK framework coverage
- Process hollowing and injection detection
- Suspicious network activity patterns
System Integration
Cross-Platform Support
- Linux: Native process enumeration with eBPF integration (Enterprise tier)
- macOS: EndpointSecurity framework integration (Enterprise tier)
- Windows: ETW integration and Windows API access (Enterprise tier)
- Container Support: Kubernetes DaemonSet deployment
Multi-Channel Alerting
- Local Outputs: stdout, syslog, file output
- Network Outputs: webhooks, email, Kafka
- SIEM Integration: Splunk HEC, Elasticsearch, CEF format
- Enterprise Integration: STIX/TAXII feeds, federated Security Centers
Certificate Transparency Audit Logging
- Cryptographic integrity for forensic analysis
- Certificate Transparency-style Merkle tree with rs-merkle
- Ed25519 digital signatures and inclusion proofs
- Millisecond-precision timestamps
Resource-Bounded Operation
- Graceful degradation under load
- Memory pressure detection and response
- CPU throttling under high load conditions
- Circuit breaker patterns for external dependencies
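The circuit-breaker pattern listed above can be sketched as a small state machine. This is an illustration of the concept, not DaemonEye's implementation:
use std::time::{Duration, Instant};

enum BreakerState {
    Closed,
    Open { until: Instant },
    HalfOpen,
}

struct CircuitBreaker {
    state: BreakerState,
    failures: u32,
    threshold: u32,     // consecutive failures before the breaker opens
    cooldown: Duration, // how long to stay open before probing again
}

impl CircuitBreaker {
    fn allow(&mut self) -> bool {
        match self.state {
            BreakerState::Closed | BreakerState::HalfOpen => true,
            BreakerState::Open { until } => {
                if Instant::now() >= until {
                    self.state = BreakerState::HalfOpen; // let one probe through
                    true
                } else {
                    false
                }
            }
        }
    }

    fn record(&mut self, success: bool) {
        if success {
            self.failures = 0;
            self.state = BreakerState::Closed;
        } else {
            self.failures += 1;
            if self.failures >= self.threshold {
                self.state = BreakerState::Open { until: Instant::now() + self.cooldown };
            }
        }
    }
}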
Technology Stack
Core Technologies
- Language: Rust 2024 Edition (MSRV: 1.85+)
- Safety: unsafe_code = "forbid" at workspace level
- Quality: warnings = "deny" with zero-warnings policy
- Async Runtime: Tokio with full feature set for I/O and task management
Database Layer
- Core: redb (pure Rust embedded database) for optimal performance
- Business/Enterprise: PostgreSQL for centralized data aggregation
- Features: WAL mode, connection pooling, ACID compliance
CLI Framework
- clap v4: Derive macros with shell completions
- Terminal: Automatic color detection, NO_COLOR and TERM=dumb support
- Output: JSON, human-readable table, CSV formats
Configuration Management
- Hierarchical loading: Embedded defaults → System files → User files → Environment → CLI flags
- Formats: YAML, JSON, TOML support via figment and config crates
- Validation: Comprehensive validation with detailed error messages
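With figment, the hierarchical loading described above looks roughly like the following sketch; the file paths and environment-variable prefix are assumptions, and embedded defaults and CLI flags would sit below and above these layers respectively:
use figment::providers::{Env, Format, Yaml};
use figment::Figment;
use serde::Deserialize;

#[derive(Deserialize)]
struct Config {
    // fields mirroring the YAML shown in Getting Started
}

fn load_config() -> Result<Config, figment::Error> {
    Figment::new()
        .merge(Yaml::file("/etc/daemoneye/config.yaml"))      // system file
        .merge(Yaml::file("~/.config/daemoneye/config.yaml")) // user file
        .merge(Env::prefixed("DAEMONEYE_"))                   // environment overrides
        .extract()
}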
Error Handling
- Libraries: thiserror for structured error types
- Applications: anyhow for error context and chains
- Pattern: Graceful degradation with detailed error context
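A brief sketch of the thiserror/anyhow split; the error variants are illustrative:
use thiserror::Error;

// Library-side: structured, matchable error types
#[derive(Debug, Error)]
pub enum StorageError {
    #[error("event store unavailable at {path}")]
    Unavailable { path: String },
    #[error(transparent)]
    Io(#[from] std::io::Error),
}

// Application-side: anyhow attaches context as errors cross component boundaries
fn open_store(path: &str) -> anyhow::Result<()> {
    use anyhow::Context;
    std::fs::metadata(path).with_context(|| format!("opening event store at {path}"))?;
    Ok(())
}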
Logging & Observability
- Structured Logging: tracing ecosystem with JSON output
- Metrics: Optional Prometheus integration
- Performance: Built-in performance monitoring and resource tracking
Performance Requirements
System Performance
- CPU Usage: <5% sustained during continuous monitoring
- Memory Usage: <100MB resident under normal operation
- Process Enumeration: <5 seconds for 10,000+ processes
- Database Operations: >1,000 records/second write rate
- Alert Latency: <100ms per detection rule execution
Scalability
- Single Agent: Monitor 10,000+ processes with minimal overhead
- Fleet Management: Support for 1,000+ agents per Security Center
- Regional Centers: Aggregate data from multiple regional deployments
- Enterprise Federation: Hierarchical data aggregation and query distribution
Security Principles
Principle of Least Privilege
- Only procmond runs with elevated privileges when necessary
- Immediate privilege drop after initialization
- Detection and alerting run in user space
- Component-specific database access patterns
Defense in Depth
- Multiple layers of security controls
- Input validation at all boundaries
- Sandboxed rule execution
- Cryptographic integrity verification
Zero Trust Architecture
- Mutual TLS authentication between components
- Certificate-based agent registration
- No implicit trust relationships
- Continuous verification and validation
License Model
Dual-License Strategy
- Core Components: Apache 2.0 licensed (procmond, daemoneye-agent, daemoneye-cli, daemoneye-lib)
- Business Tier Features: $199/site one-time license (Security Center, GUI, enhanced connectors, curated rules)
- Enterprise Tier Features: Custom pricing (kernel monitoring, federation, STIX/TAXII integration)
Feature Gating
- Compile-time feature gates for tier-specific functionality
- Runtime license validation with cryptographic signatures
- Graceful degradation when license is invalid or expired
- Site restriction validation for license compliance
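A condensed sketch of how such gating could look; the feature name and license format are illustrative, not the shipped mechanism:
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

// Compile-time gate: this module only exists in Business-tier builds
#[cfg(feature = "business")]
pub mod security_center {}

// Runtime gate: a paid feature activates only if the signed license blob verifies
pub fn license_valid(license: &[u8], sig: &Signature, key: &VerifyingKey) -> bool {
    key.verify(license, sig).is_ok()
}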
Getting Started
Quick Start
- Install: Download and install DaemonEye for your platform
- Configure: Set up basic configuration and detection rules
- Deploy: Start the monitoring services
- Monitor: Use the CLI to query data and manage alerts
Next Steps
- Read the Architecture Overview to understand the system design
- Follow the Getting Started Guide for detailed setup instructions
- Review the Operator Guide for day-to-day usage
- Explore the Configuration Guide for advanced configuration
DaemonEye represents the next generation of process monitoring, combining the security and performance benefits of Rust with proven threat detection techniques to provide a comprehensive solution for modern security operations.
DaemonEye Pricing
Part of the DaemonEye suite of tools: Continuous monitoring. Immediate alerts.
"Auditd without the noise. Osquery without the bloat."
DaemonEye Architecture Note
ProcMonD is the privileged process monitoring component of the DaemonEye package. DaemonEye consists of three components:
- ProcMonD (Collector): Runs with high privileges, focused solely on process monitoring, with a minimal attack surface and no direct network functionality.
- Orchestrator: Operates in user space with very few privileges, receives events from ProcMonD, handles all network communication and alert delivery to log sinks, and communicates with ProcMonD only via secure, memory-only IPC (e.g., Unix sockets).
- CLI: Local command-line interface that interacts with the orchestrator for querying data, exporting results, and tuning service configuration.
This separation ensures robust security: ProcMonD remains isolated and hardened, while orchestration/network tasks are delegated to a low-privilege process, and all user interaction is handled via the CLI through the orchestrator.
Free / Homelab
$0 — Always Free
For hackers, homelabbers, and operators who want clean visibility without SaaS strings.
- Full daemon (Rust core)
- SQL rule engine (DIY + community rules)
- Syslog, email, webhook alerts
- Tamper-evident logging
- Cross-platform (Linux, macOS, Windows)
- GitHub Sponsors tip jar if you dig it
For the lab. For your side projects. For free, forever.
Business
Flat License — $199/site
For small teams and consultancies who need more polish and integrations. One-time fee, no subscription.
- Everything in Free
- Curated rule packs (malware TTPs, suspicious parent/child, process hollowing)
- Output connectors: Splunk HEC, Elastic, Kafka
- Container / K8s DaemonSet deployment
- Export to CEF, JSON, or STIX-lite
- Optional GUI frontend ($19 per seat)
- Signed installers (MSI/DMG, ready to deploy)
Professional-grade monitoring you can actually run offline.
Enterprise
Org License — Let's Talk
For SOCs, IR teams, and industrial/government environments where process visibility is non-negotiable. (Pricing starts in the low four figures, one-time license. Optional paid update packs.)
- Everything in Business
- eBPF integration for kernel-level visibility
- Central collector for fleet monitoring
- Advanced SIEM integration (full STIX/TAXII, compliance mappings)
- Hardened builds with SLSA provenance & Cosign signatures
- Optional commercial license for enterprises who can't ship Apache 2.0
- Quarterly Enterprise Rule Packs with threat intel updates
When compliance meets detection. Built for enclaves, critical infrastructure, and SOCs that need serious visibility.
Notes
- No subscriptions. No license servers. No hidden telemetry.
- Free tier is fully functional — paid tiers add polish and scale.
- Pricing is a starting point — EvilBit Labs is not a sales shop; we keep it simple.
Feature Comparison
| Feature | Free/Homelab | Business | Enterprise |
| --- | --- | --- | --- |
| Core Monitoring | ✅ | ✅ | ✅ |
| SQL Rule Engine | ✅ | ✅ | ✅ |
| Basic Alerts | ✅ | ✅ | ✅ |
| Cross-Platform | ✅ | ✅ | ✅ |
| Curated Rule Packs | ❌ | ✅ | ✅ |
| SIEM Connectors | ❌ | ✅ | ✅ |
| Container Support | ❌ | ✅ | ✅ |
| Export Formats | Basic | CEF/STIX | Full STIX/TAXII |
| GUI Frontend | ❌ | Optional | ✅ |
| Kernel Monitoring | ❌ | ❌ | ✅ |
| Fleet Management | ❌ | ❌ | ✅ |
| Compliance Mappings | ❌ | ❌ | ✅ |
| Enterprise Support | ❌ | ❌ | ✅ |
Getting Started
Free Tier
- Download from GitHub releases
- Follow the Installation Guide
- Start monitoring immediately
- No registration required
Business Tier
- Contact EvilBit Labs for license key
- Download Business tier build
- Apply license key during installation
- Access to curated rule packs and connectors
Enterprise Tier
- Contact EvilBit Labs for custom pricing
- Discuss requirements and deployment scale
- Receive tailored solution and support
- Full enterprise features and support
Support
- Free Tier: Community support via GitHub Issues
- Business Tier: Email support with 48-hour response
- Enterprise Tier: Dedicated support with SLA
Contact
For Business and Enterprise licensing:
- Email: support@evilbitlabs.com
- GitHub: EvilBit-Labs/daemoneye
- Website: evilbitlabs.io/daemoneye
Pricing is subject to change. Contact EvilBit Labs for the most current pricing information.
DaemonEye Architecture Overview
Three-Component Security Architecture
DaemonEye implements a single crate with multiple binaries, using feature flags for precise dependency control. The system follows a three-component security design with strict privilege separation to provide continuous process monitoring and threat detection, and is built around the principle of minimal attack surface while maintaining high performance and audit-grade integrity.
graph TB
    subgraph "DaemonEye Three-Component Architecture"
        subgraph "procmond (Privileged Collector)"
            PM[Process Monitor]
            HC[Hash Computer]
            AL[Audit Logger]
            IPC1[IPC Server]
        end
        subgraph "daemoneye-agent (Detection Orchestrator)"
            DE[Detection Engine]
            AM[Alert Manager]
            RM[Rule Manager]
            IPC2[IPC Client]
            NS[Network Sinks]
        end
        subgraph "daemoneye-cli (Operator Interface)"
            QE[Query Executor]
            HC2[Health Checker]
            DM[Data Manager]
        end
        subgraph "daemoneye-lib (Shared Core)"
            CFG[Configuration]
            MOD[Models]
            STO[Storage]
            DET[Detection]
            ALT[Alerting]
            CRY[Crypto]
        end
    end
    subgraph "Data Stores"
        ES[Event Store<br/>redb]
        AL2[Audit Ledger<br/>Certificate Transparency]
    end
    subgraph "External Systems"
        SIEM[SIEM Systems]
        WEBHOOK[Webhooks]
        SYSLOG[Syslog]
    end
    PM --> DE
    HC --> DE
    AL --> AL2
    IPC1 <--> IPC2
    DE --> AM
    AM --> NS
    NS --> SIEM
    NS --> WEBHOOK
    NS --> SYSLOG
    QE --> DE
    HC2 --> DE
    DM --> DE
    DE --> ES
Component Responsibilities
procmond (Privileged Process Collector)
Primary Purpose: Minimal privileged component for secure process data collection with purpose-built simplicity.
Key Responsibilities:
- Process Enumeration: Cross-platform process data collection using sysinfo crate
- Executable Hashing: SHA-256 hash computation for integrity verification
- Audit Logging: Certificate Transparency-style Merkle tree with inclusion proofs
- IPC Communication: Simple protobuf-based communication with daemoneye-agent
Security Boundaries:
- Starts with minimal privileges, optionally requests enhanced access
- Drops all elevated privileges immediately after initialization
- No network access whatsoever
- No SQL parsing or complex query logic
- Write-only access to audit ledger
- Simple protobuf IPC only (Unix sockets/named pipes)
Key Interfaces:
#[async_trait]
pub trait ProcessCollector {
async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>>;
async fn handle_detection_task(&self, task: DetectionTask) -> Result<DetectionResult>;
async fn serve_ipc(&self) -> Result<()>;
}
daemoneye-agent (Detection Orchestrator)
Primary Purpose: User-space detection rule execution, alert management, and procmond lifecycle management.
Key Responsibilities:
- Detection Engine: SQL-based rule execution with security validation
- Alert Management: Alert generation, deduplication, and delivery
- Rule Management: Rule loading, validation, and hot-reloading
- Process Management: procmond lifecycle management (start, stop, restart, health monitoring)
- Network Communication: Outbound-only connections for alert delivery
Security Boundaries:
- Operates in user space with minimal privileges
- Manages redb event store (read/write access)
- Translates complex SQL rules into simple protobuf tasks for procmond
- Outbound-only network connections for alert delivery
- Sandboxed rule execution with resource limits
- IPC client for communication with procmond
Key Interfaces:
#[async_trait]
pub trait DetectionEngine {
async fn execute_rules(&self, scan_id: i64) -> Result<Vec<Alert>>;
async fn validate_sql(&self, query: &str) -> Result<ValidationResult>;
}
#[async_trait]
pub trait AlertManager {
async fn generate_alert(&self, detection: DetectionResult) -> Result<Alert>;
async fn deliver_alert(&self, alert: &Alert) -> Result<DeliveryResult>;
}
daemoneye-cli (Operator Interface)
Primary Purpose: Command-line interface for queries, management, and diagnostics.
Key Responsibilities:
- Data Queries: Safe SQL query execution with parameterization
- System Management: Configuration, rule management, health monitoring
- Data Export: Multiple output formats (JSON, table, CSV)
- Diagnostics: System health checks and troubleshooting
Security Boundaries:
- No network access
- No direct database access (communicates through daemoneye-agent)
- Input validation for all user-provided data
- Safe SQL execution via daemoneye-agent with prepared statements
- Communicates only with daemoneye-agent for all operations
Key Interfaces:
#[async_trait]
pub trait QueryExecutor {
async fn execute_query(&self, query: &str, params: &[Value]) -> Result<QueryResult>;
async fn export_data(&self, format: ExportFormat, filter: &Filter) -> Result<ExportResult>;
}
daemoneye-lib (Shared Core)
Primary Purpose: Common functionality shared across all components.
Key Modules:
- config: Hierarchical configuration management
- models: Core data structures and serialization
- storage: Database abstractions and connection management
- detection: SQL validation and rule execution framework
- alerting: Multi-channel alert delivery system
- crypto: Cryptographic functions for audit chains
- telemetry: Observability and metrics collection
Data Flow Architecture
The system implements a pipeline processing model with clear phases and strict component separation:
1. Collection Phase
sequenceDiagram
    participant SA as daemoneye-agent
    participant PM as procmond
    participant SYS as System
    SA->>PM: DetectionTask(ENUMERATE_PROCESSES)
    PM->>SYS: Enumerate processes
    SYS-->>PM: Process data
    PM->>PM: Compute hashes
    PM->>PM: Write to audit ledger
    PM-->>SA: DetectionResult(processes)
2. Detection Phase
sequenceDiagram
    participant DE as Detection Engine
    participant DB as Event Store
    participant RM as Rule Manager
    DE->>RM: Load detection rules
    RM-->>DE: SQL rules
    DE->>DB: Execute SQL queries
    DB-->>DE: Query results
    DE->>DE: Generate alerts
    DE->>DB: Store alerts
3. Alert Delivery Phase
sequenceDiagram
    participant AM as Alert Manager
    participant S1 as Sink 1
    participant S2 as Sink 2
    participant S3 as Sink 3
    AM->>S1: Deliver alert (parallel)
    AM->>S2: Deliver alert (parallel)
    AM->>S3: Deliver alert (parallel)
    S1-->>AM: Delivery result
    S2-->>AM: Delivery result
    S3-->>AM: Delivery result
IPC Protocol Design
Purpose: Secure, efficient communication between procmond and daemoneye-agent.
Protocol Specification:
syntax = "proto3";
// Simple detection tasks sent from daemoneye-agent to procmond
message DetectionTask {
string task_id = 1;
TaskType task_type = 2;
optional ProcessFilter process_filter = 3;
optional HashCheck hash_check = 4;
optional string metadata = 5;
}
enum TaskType {
ENUMERATE_PROCESSES = 0;
CHECK_PROCESS_HASH = 1;
MONITOR_PROCESS_TREE = 2;
VERIFY_EXECUTABLE = 3;
}
// Results sent back from procmond to daemoneye-agent
message DetectionResult {
string task_id = 1;
bool success = 2;
optional string error_message = 3;
repeated ProcessRecord processes = 4;
optional HashResult hash_result = 5;
}
Transport Layer:
- Unix domain sockets on Linux/macOS
- Named pipes on Windows
- Async message handling with tokio
- Connection authentication and encryption (optional)
- Automatic reconnection with exponential backoff
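A sketch of the reconnection behavior; the socket path is an assumption, and the real client would add logging and jitter:
use std::time::Duration;

// Hypothetical client-side reconnect loop with capped exponential backoff
async fn connect_with_backoff() -> std::io::Result<tokio::net::UnixStream> {
    let mut delay = Duration::from_millis(100);
    loop {
        match tokio::net::UnixStream::connect("/run/daemoneye/procmond.sock").await {
            Ok(stream) => return Ok(stream),
            Err(_) => {
                tokio::time::sleep(delay).await;
                delay = (delay * 2).min(Duration::from_secs(30)); // cap the backoff
            }
        }
    }
}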
Database Architecture
Event Store (redb)
- Purpose: High-performance process data storage
- Access: daemoneye-agent read/write, daemoneye-cli read-only
- Features: WAL mode, concurrent access, embedded database
- Schema: process_snapshots, scans, detection_rules, alerts
Audit Ledger (Certificate Transparency)
- Purpose: Tamper-evident audit trail using Certificate Transparency-style Merkle tree
- Access: procmond write-only
- Features: Merkle tree with BLAKE3 hashing and Ed25519 signatures
- Implementation: rs-merkle for inclusion proofs and periodic checkpoints
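The rs-merkle usage can be sketched as below. rs-merkle ships a SHA-256 hasher; a BLAKE3-backed tree as described above would supply a custom Hasher implementation, omitted here:
use rs_merkle::{algorithms::Sha256, Hasher, MerkleTree};

// Build a tree over serialized audit entries and prove one entry's inclusion
fn inclusion_proof(entries: &[Vec<u8>], index: usize) -> bool {
    let leaves: Vec<[u8; 32]> = entries.iter().map(|e| Sha256::hash(e)).collect();
    let tree = MerkleTree::<Sha256>::from_leaves(&leaves);
    let root = match tree.root() {
        Some(r) => r,
        None => return false,
    };
    let proof = tree.proof(&[index]);
    // A verifier holding only the root can check the entry without the full ledger
    proof.verify(root, &[index], &[leaves[index]], leaves.len())
}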
Security Architecture
Privilege Separation
- Only procmond runs with elevated privileges when necessary
- Immediate privilege drop after initialization
- Detection and alerting run in user space
- Component-specific database access patterns
SQL Injection Prevention
- AST validation using sqlparser crate
- Prepared statements and parameterized queries only
- Sandboxed detection rule execution with resource limits
- Query whitelist preventing data modification operations
Resource Management
- Bounded channels with configurable backpressure policies
- Memory budgets with cooperative yielding
- Timeout enforcement and cancellation support
- Circuit breakers for external dependencies
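Timeout enforcement, for instance, can lean on tokio's timeout combinator; a minimal sketch:
use std::time::Duration;
use tokio::time::timeout;

// Bound any rule-execution future by the configured deadline; the future is
// cancelled (dropped) if the deadline elapses first.
async fn with_deadline<F, T>(deadline: Duration, fut: F) -> Option<T>
where
    F: std::future::Future<Output = T>,
{
    timeout(deadline, fut).await.ok()
}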
Performance Characteristics
Process Collection
- Baseline: <5 seconds for 10,000+ processes
- CPU Usage: <5% sustained during continuous monitoring
- Memory Usage: <100MB resident under normal operation
- Hash Computation: SHA-256 for all accessible executables
Detection Engine
- Rule Execution: <100ms per detection rule
- SQL Validation: AST parsing and validation
- Resource Limits: 30-second timeout, memory limits
- Concurrent Execution: Parallel rule processing
Alert Delivery
- Multi-Channel: Parallel delivery to multiple sinks
- Reliability: Circuit breakers and retry logic
- Performance: Non-blocking delivery with backpressure
- Monitoring: Delivery success rates and latency metrics
Cross-Platform Strategy
Process Enumeration
- Phase 1: sysinfo crate for unified cross-platform baseline
- Phase 2: Platform-specific enhancements (eBPF, ETW, EndpointSecurity)
- Fallback: Graceful degradation when enhanced features unavailable
Privilege Management
- Linux: CAP_SYS_PTRACE, immediate capability dropping
- Windows: SeDebugPrivilege, token restriction after init
- macOS: Minimal entitlements, sandbox compatibility
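On Linux, the capability drop could use the caps crate; a sketch under that assumption:
// Linux-only sketch: shed CAP_SYS_PTRACE from the current thread after startup
#[cfg(target_os = "linux")]
fn drop_ptrace_capability() -> Result<(), caps::errors::CapsError> {
    use caps::{CapSet, Capability};
    caps::drop(None, CapSet::Effective, Capability::CAP_SYS_PTRACE)?;
    caps::drop(None, CapSet::Permitted, Capability::CAP_SYS_PTRACE)?;
    Ok(())
}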
Component Communication
procmond ↔ daemoneye-agent
- Protocol: Custom Protobuf over Unix Sockets/Named Pipes
- Direction: Bidirectional with simple task/result pattern
- Security: Process isolation, no network access
daemoneye-agent ↔ daemoneye-cli
- Protocol: Local IPC or direct database access
- Direction: daemoneye-cli queries daemoneye-agent
- Security: Local communication only, input validation
External Communication
- Alert Delivery: Outbound-only network connections
- SIEM Integration: HTTPS, mTLS, webhook protocols
- Security Center: mTLS with certificate authentication
Error Handling Strategy
Graceful Degradation
- Continue operation when individual components fail
- Fallback mechanisms for enhanced features
- Circuit breakers for external dependencies
- Comprehensive error logging and monitoring
Recovery Patterns
- Automatic retry with exponential backoff
- Health checks and component restart
- Data consistency verification
- Audit trail integrity validation
This architecture provides a robust foundation for implementing DaemonEye's core monitoring functionality while maintaining security, performance, and reliability requirements across all supported platforms.
DaemonEye System Architecture
Overview
DaemonEye implements a single crate with multiple binaries, using feature flags for precise dependency control. The system follows a three-component security design built around the principle of minimal attack surface while maintaining high performance and audit-grade integrity. Process data flows through a pipeline from collection through detection to alerting, with each component having clearly defined responsibilities and security boundaries.
High-Level Architecture
graph TB
    subgraph "DaemonEye Three-Component Architecture"
        subgraph "procmond (Privileged Collector)"
            PM[Process Monitor]
            HC[Hash Computer]
            AL[Audit Logger]
            IPC1[IPC Server]
        end
        subgraph "daemoneye-agent (Detection Orchestrator)"
            DE[Detection Engine]
            AM[Alert Manager]
            RM[Rule Manager]
            IPC2[IPC Client]
            NS[Network Sinks]
        end
        subgraph "daemoneye-cli (Operator Interface)"
            QE[Query Executor]
            HC2[Health Checker]
            DM[Data Manager]
        end
        subgraph "daemoneye-lib (Shared Core)"
            CFG[Configuration]
            MOD[Models]
            STO[Storage]
            DET[Detection]
            ALT[Alerting]
            CRY[Crypto]
        end
    end
    subgraph "Data Stores"
        ES[Event Store<br/>redb]
        AL2[Audit Ledger<br/>Hash-chained]
    end
    subgraph "External Systems"
        SIEM[SIEM Systems]
        WEBHOOK[Webhooks]
        SYSLOG[Syslog]
    end
    PM --> DE
    HC --> DE
    AL --> AL2
    IPC1 <--> IPC2
    DE --> AM
    AM --> NS
    NS --> SIEM
    NS --> WEBHOOK
    NS --> SYSLOG
    QE --> DE
    HC2 --> DE
    DM --> DE
    DE --> ES
Component Design
procmond (Privileged Process Collector)
Architectural Role: Minimal privileged component for secure process data collection with purpose-built simplicity.
Core Responsibilities
- Process Enumeration: Cross-platform process data collection using sysinfo crate
- Executable Hashing: SHA-256 hash computation for integrity verification
- Audit Logging: Tamper-evident logging with cryptographic chains
- IPC Communication: Simple protobuf-based communication with daemoneye-agent
Security Boundaries
- Privilege Management: Starts with minimal privileges, optionally requests enhanced access
- Privilege Dropping: Drops all elevated privileges immediately after initialization
- Network Isolation: No network access whatsoever
- Code Simplicity: No SQL parsing or complex query logic
- Audit Logging: Write-only access to hash-chained audit ledger
- Communication: Simple protobuf IPC only (Unix sockets/named pipes)
Key Interfaces
#[async_trait]
pub trait ProcessCollector: Send + Sync {
async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>>;
async fn handle_detection_task(&self, task: DetectionTask) -> Result<DetectionResult>;
async fn serve_ipc(&self) -> Result<()>;
}
#[async_trait]
pub trait HashComputer: Send + Sync {
async fn compute_hash(&self, path: &Path) -> Result<Option<String>>;
fn get_algorithm(&self) -> &'static str;
}
#[async_trait]
pub trait AuditLogger: Send + Sync {
async fn log_event(&self, event: &AuditEvent) -> Result<()>;
async fn verify_chain(&self) -> Result<ChainVerificationResult>;
}
Implementation Structure
pub struct ProcessCollector {
config: CollectorConfig,
hash_computer: Box<dyn HashComputer>,
audit_logger: Box<dyn AuditLogger>,
ipc_server: Box<dyn IpcServer>,
privilege_manager: PrivilegeManager,
}
impl ProcessCollector {
    pub async fn new(config: CollectorConfig) -> Result<Self> {
        let mut privilege_manager = PrivilegeManager::new();
        // Request minimal required privileges
        privilege_manager.request_enhanced_privileges().await?;
        // Build dependencies before `config` is moved into the struct
        let audit_logger = Box::new(SqliteAuditLogger::new(&config.audit_path)?);
        let ipc_server = Box::new(UnixSocketServer::new(&config.ipc_path)?);
        let mut collector = Self {
            config,
            hash_computer: Box::new(Sha256HashComputer::new()),
            audit_logger,
            ipc_server,
            privilege_manager,
        };
        // Drop privileges immediately after initialization
        collector.privilege_manager.drop_privileges().await?;
        Ok(collector)
    }
}
daemoneye-agent (Detection Orchestrator)
Architectural Role: User-space detection rule execution, alert management, and procmond lifecycle management.
Core Responsibilities
- Detection Engine: SQL-based rule execution with security validation
- Alert Management: Alert generation, deduplication, and delivery
- Rule Management: Rule loading, validation, and hot-reloading
- Process Management: procmond lifecycle management (start, stop, restart, health monitoring)
- Network Communication: Outbound-only connections for alert delivery
Security Boundaries
- User Space Operation: Operates in user space with minimal privileges
- Event Store Management: Manages redb event store (read/write access)
- Rule Translation: Translates complex SQL rules into simple protobuf tasks for procmond
- Network Access: Outbound-only network connections for alert delivery
- Sandboxed Execution: Sandboxed rule execution with resource limits
- IPC Communication: IPC client for communication with procmond
Key Interfaces
#[async_trait]
pub trait DetectionEngine: Send + Sync {
async fn execute_rules(&self, scan_id: i64) -> Result<Vec<Alert>>;
async fn validate_sql(&self, query: &str) -> Result<ValidationResult>;
async fn load_rules(&self) -> Result<Vec<DetectionRule>>;
}
#[async_trait]
pub trait AlertManager: Send + Sync {
async fn generate_alert(&self, detection: DetectionResult) -> Result<Alert>;
async fn deliver_alert(&self, alert: &Alert) -> Result<DeliveryResult>;
async fn deduplicate_alert(&self, alert: &Alert) -> Result<Option<Alert>>;
}
#[async_trait]
pub trait ProcessManager: Send + Sync {
async fn start_procmond(&self) -> Result<()>;
async fn stop_procmond(&self) -> Result<()>;
async fn restart_procmond(&self) -> Result<()>;
async fn health_check(&self) -> Result<HealthStatus>;
}
Implementation Structure
pub struct DetectionEngine {
db: redb::Database,
rule_manager: RuleManager,
alert_manager: AlertManager,
sql_validator: SqlValidator,
ipc_client: IpcClient,
process_manager: ProcessManager,
}
impl DetectionEngine {
pub async fn new(config: AgentConfig) -> Result<Self> {
let db = redb::Database::create(&config.event_store_path)?;
// Initialize SQL validator with security constraints
let sql_validator = SqlValidator::new()
.with_allowed_functions(ALLOWED_SQL_FUNCTIONS)
.with_read_only_mode(true)
.with_timeout(Duration::from_secs(30));
// Initialize IPC client for procmond communication
let ipc_client = IpcClient::new(&config.procmond_socket_path)?;
// Start procmond process
let process_manager = ProcessManager::new(config.procmond_config);
process_manager.start_procmond().await?;
Ok(Self {
db,
rule_manager: RuleManager::new(&config.rules_path)?,
alert_manager: AlertManager::new(&config.alerting_config)?,
sql_validator,
ipc_client,
process_manager,
})
}
}
daemoneye-cli (Operator Interface)
Architectural Role: Command-line interface for queries, management, and diagnostics.
Core Responsibilities
- Data Queries: Safe SQL query execution with parameterization
- System Management: Configuration, rule management, health monitoring
- Data Export: Multiple output formats (JSON, table, CSV)
- Diagnostics: System health checks and troubleshooting
Security Boundaries
- No Network Access: No network access whatsoever
- No Direct Event Store Access: Communicates through daemoneye-agent
- Input Validation: Comprehensive validation for all user-provided data
- Safe SQL Execution: SQL execution via daemoneye-agent with prepared statements
- Local Communication Only: Communicates only with daemoneye-agent
Key Interfaces
#[async_trait]
pub trait QueryExecutor: Send + Sync {
async fn execute_query(&self, query: &str, params: &[Value]) -> Result<QueryResult>;
async fn export_data(&self, format: ExportFormat, filter: &Filter) -> Result<ExportResult>;
}
#[async_trait]
pub trait HealthChecker: Send + Sync {
async fn check_system_health(&self) -> Result<HealthStatus>;
async fn check_component_health(&self, component: Component) -> Result<ComponentHealth>;
}
#[async_trait]
pub trait DataManager: Send + Sync {
async fn export_alerts(&self, format: ExportFormat, filter: &AlertFilter) -> Result<ExportResult>;
async fn export_processes(&self, format: ExportFormat, filter: &ProcessFilter) -> Result<ExportResult>;
}
daemoneye-lib (Shared Core)
Architectural Role: Common functionality shared across all components.
Core Modules
- config: Hierarchical configuration management with validation
- models: Core data structures and serialization/deserialization
- storage: Event store abstractions and connection management
- detection: SQL validation and rule execution framework
- alerting: Multi-channel alert delivery system
- crypto: Cryptographic functions for audit chains and integrity
- telemetry: Observability, metrics collection, and health monitoring
Module Structure
pub mod config {
pub mod environment;
pub mod hierarchical;
pub mod validation;
}
pub mod models {
pub mod alert;
pub mod audit;
pub mod detection_rule;
pub mod process;
}
pub mod storage {
pub mod audit_ledger;
pub mod connection_pool;
pub mod redb;
}
pub mod detection {
pub mod rule_engine;
pub mod sandbox;
pub mod sql_validator;
}
pub mod alerting {
pub mod deduplication;
pub mod delivery;
pub mod sinks;
}
pub mod crypto {
pub mod hash_chain;
pub mod integrity;
pub mod signatures;
}
pub mod telemetry {
pub mod health;
pub mod metrics;
pub mod tracing;
}
Data Flow Architecture
Process Collection Pipeline
sequenceDiagram
    participant SA as daemoneye-agent
    participant PM as procmond
    participant SYS as System
    participant AL as Audit Ledger
    SA->>PM: DetectionTask(ENUMERATE_PROCESSES)
    PM->>SYS: Enumerate processes
    SYS-->>PM: Process data
    PM->>PM: Compute SHA-256 hashes
    PM->>AL: Write to audit ledger
    PM-->>SA: DetectionResult(processes)
    SA->>SA: Store in event store
Detection and Alerting Pipeline
sequenceDiagram
    participant DE as Detection Engine
    participant DB as Event Store
    participant RM as Rule Manager
    participant AM as Alert Manager
    participant SINK as Alert Sinks
    DE->>RM: Load detection rules
    RM-->>DE: SQL rules
    DE->>DB: Execute SQL queries
    DB-->>DE: Query results
    DE->>AM: Generate alerts
    AM->>AM: Deduplicate alerts
    AM->>SINK: Deliver alerts (parallel)
    SINK-->>AM: Delivery results
Query and Management Pipeline
sequenceDiagram
    participant CLI as daemoneye-cli
    participant SA as daemoneye-agent
    participant DB as Event Store
    CLI->>SA: Execute query request
    SA->>SA: Validate SQL query
    SA->>DB: Execute prepared statement
    DB-->>SA: Query results
    SA->>SA: Format results
    SA-->>CLI: Formatted results
IPC Protocol Design
Protocol Specification
The IPC protocol uses Protocol Buffers for efficient, type-safe communication between procmond and daemoneye-agent.
syntax = "proto3";
// Simple detection tasks sent from daemoneye-agent to procmond
message DetectionTask {
string task_id = 1;
TaskType task_type = 2;
optional ProcessFilter process_filter = 3;
optional HashCheck hash_check = 4;
optional string metadata = 5;
}
enum TaskType {
ENUMERATE_PROCESSES = 0;
CHECK_PROCESS_HASH = 1;
MONITOR_PROCESS_TREE = 2;
VERIFY_EXECUTABLE = 3;
}
message ProcessFilter {
repeated string process_names = 1;
repeated uint32 pids = 2;
optional string executable_pattern = 3;
}
message HashCheck {
string expected_hash = 1;
string hash_algorithm = 2;
string executable_path = 3;
}
// Results sent back from procmond to daemoneye-agent
message DetectionResult {
string task_id = 1;
bool success = 2;
optional string error_message = 3;
repeated ProcessRecord processes = 4;
optional HashResult hash_result = 5;
}
message ProcessRecord {
uint32 pid = 1;
optional uint32 ppid = 2;
string name = 3;
optional string executable_path = 4;
repeated string command_line = 5;
optional int64 start_time = 6;
optional double cpu_usage = 7;
optional uint64 memory_usage = 8;
optional string executable_hash = 9;
optional string hash_algorithm = 10;
optional string user_id = 11;
bool accessible = 12;
bool file_exists = 13;
int64 collection_time = 14;
}
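Assuming the messages above are compiled to Rust types with prost-build, framing a task for the wire is a one-liner on each side; a sketch:
use prost::Message;

// Serialize a DetectionTask into bytes for the Unix socket / named pipe
fn encode_task(task: &DetectionTask) -> Vec<u8> {
    let mut buf = Vec::with_capacity(task.encoded_len());
    task.encode(&mut buf).expect("encoding into a Vec cannot fail");
    buf
}

// Decode the peer's response back into a typed DetectionResult
fn decode_result(bytes: &[u8]) -> Result<DetectionResult, prost::DecodeError> {
    DetectionResult::decode(bytes)
}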
Transport Layer
Unix Domain Sockets (Linux/macOS):
pub struct UnixSocketServer {
path: PathBuf,
listener: UnixListener,
}
impl IpcServer for UnixSocketServer {
    async fn serve<F>(&self, handler: F) -> Result<()>
    where
        F: Fn(DetectionTask) -> Result<DetectionResult> + Send + Sync + 'static,
    {
        // Wrap the handler in an Arc so each connection task can share it
        let handler = std::sync::Arc::new(handler);
        loop {
            let (stream, _addr) = self.listener.accept().await?;
            let handler = handler.clone(); // clones the Arc, not the closure
            tokio::spawn(async move {
                Self::handle_connection(stream, handler).await;
            });
        }
    }
}
Named Pipes (Windows):
pub struct NamedPipeServer {
pipe_name: String,
server: NamedPipeServerStream,
}
impl IpcServer for NamedPipeServer {
async fn serve<F>(&self, handler: F) -> Result<()>
where
F: Fn(DetectionTask) -> Result<DetectionResult> + Send + Sync + 'static,
{
        // Windows named pipe implementation mirrors the Unix socket server,
        // accepting clients over a named pipe instead of a Unix socket
        unimplemented!()
}
}
Data Storage Architecture
Event Store (redb)
Purpose: High-performance process data storage with concurrent access.
Schema Design:
// Process snapshots table
pub struct ProcessSnapshot {
pub id: Uuid,
pub scan_id: i64,
pub collection_time: i64,
pub pid: u32,
pub ppid: Option<u32>,
pub name: String,
pub executable_path: Option<PathBuf>,
pub command_line: Vec<String>,
pub start_time: Option<i64>,
pub cpu_usage: Option<f64>,
pub memory_usage: Option<u64>,
pub executable_hash: Option<String>,
pub hash_algorithm: Option<String>,
pub user_id: Option<String>,
pub accessible: bool,
pub file_exists: bool,
pub platform_data: Option<serde_json::Value>,
}
// Scan metadata table
pub struct ScanMetadata {
pub id: i64,
pub start_time: i64,
pub end_time: i64,
pub process_count: i32,
pub collection_duration_ms: i64,
pub system_info: SystemInfo,
}
// Detection rules table
pub struct DetectionRule {
pub id: String,
pub name: String,
pub description: Option<String>,
pub version: i32,
pub sql_query: String,
pub enabled: bool,
pub severity: AlertSeverity,
pub category: Option<String>,
pub tags: Vec<String>,
pub author: Option<String>,
pub created_at: i64,
pub updated_at: i64,
pub source_type: RuleSourceType,
pub source_path: Option<PathBuf>,
}
// Alerts table
pub struct Alert {
pub id: Uuid,
pub alert_time: i64,
pub rule_id: String,
pub title: String,
pub description: String,
pub severity: AlertSeverity,
pub scan_id: Option<i64>,
pub affected_processes: Vec<u32>,
pub process_count: i32,
pub alert_data: serde_json::Value,
pub rule_execution_time_ms: Option<i64>,
pub dedupe_key: String,
}
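One way to derive the dedupe_key field above is a stable digest of the rule and the affected PID set, so repeated hits of the same rule on the same processes collapse into a single alert. This is an illustrative scheme, not the shipped one:
// Illustrative dedupe key derivation using blake3
fn dedupe_key(rule_id: &str, mut pids: Vec<u32>) -> String {
    pids.sort_unstable(); // order-independent key
    let mut hasher = blake3::Hasher::new();
    hasher.update(rule_id.as_bytes());
    for pid in pids {
        hasher.update(&pid.to_le_bytes());
    }
    hasher.finalize().to_hex().to_string()
}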
Audit Ledger (Hash-chained)
Purpose: Tamper-evident audit trail with cryptographic integrity using hash-chained log file.
Implementation:
The audit ledger is implemented as a hash-chained log file, not a database table. Each entry contains a cryptographic hash of the previous entry, creating an immutable chain.
Hash Chain Implementation:
pub struct AuditChain {
hasher: blake3::Hasher,
signer: Option<ed25519_dalek::Keypair>,
previous_hash: Option<blake3::Hash>,
}
impl AuditChain {
pub fn append_entry(&mut self, entry: &AuditEntry) -> Result<AuditRecord> {
let entry_data = serde_json::to_vec(entry)?;
let entry_hash = blake3::hash(&entry_data);
let record = AuditRecord {
sequence: self.next_sequence(),
timestamp: SystemTime::now().duration_since(UNIX_EPOCH)?.as_millis() as i64,
actor: entry.actor.clone(),
action: entry.action.clone(),
payload_hash: entry_hash,
previous_hash: self.previous_hash,
entry_hash: self.compute_entry_hash(&entry_hash)?,
signature: self.sign_entry(&entry_hash)?,
};
self.previous_hash = Some(record.entry_hash);
Ok(record)
}
pub fn verify_chain(&self, records: &[AuditRecord]) -> Result<VerificationResult> {
// Verify hash chain integrity and signatures
for (i, record) in records.iter().enumerate() {
if i > 0 {
let prev_record = &records[i - 1];
if record.previous_hash != Some(prev_record.entry_hash) {
return Err(VerificationError::ChainBroken(i));
}
}
// Verify entry hash
let computed_hash = self.compute_entry_hash(&record.payload_hash)?;
if record.entry_hash != computed_hash {
return Err(VerificationError::HashMismatch(i));
}
// Verify signature if present
if let Some(signature) = &record.signature {
self.verify_signature(&record.payload_hash, signature)?;
}
}
Ok(VerificationResult::Valid)
}
}
Security Architecture
Privilege Separation Model
Principle: Each component operates with the minimum privileges required for its function.
pub struct PrivilegeManager {
initial_privileges: Privileges,
current_privileges: Privileges,
drop_completed: bool,
}
impl PrivilegeManager {
pub async fn request_enhanced_privileges(&mut self) -> Result<()> {
// Platform-specific privilege escalation
#[cfg(target_os = "linux")]
self.request_linux_capabilities()?;
#[cfg(target_os = "windows")]
self.request_windows_privileges()?;
#[cfg(target_os = "macos")]
self.request_macos_entitlements()?;
Ok(())
}
pub async fn drop_privileges(&mut self) -> Result<()> {
// Immediate privilege drop after initialization
self.drop_all_elevated_privileges()?;
self.drop_completed = true;
self.audit_privilege_drop().await?;
Ok(())
}
}
SQL Injection Prevention
AST Validation: All user-provided SQL undergoes comprehensive validation.
pub struct SqlValidator {
    dialect: sqlparser::dialect::SQLiteDialect,
    allowed_functions: HashSet<String>,
}
impl SqlValidator {
    pub fn validate_query(&self, sql: &str) -> Result<ValidationResult> {
        let ast = sqlparser::parser::Parser::parse_sql(&self.dialect, sql)?;
for statement in &ast {
match statement {
Statement::Query(query) => self.validate_select_query(query)?,
_ => return Err(ValidationError::ForbiddenStatement),
}
}
Ok(ValidationResult::Valid)
}
fn validate_select_query(&self, query: &Query) -> Result<()> {
// Validate SELECT body, WHERE clauses, functions, etc.
// Reject any non-whitelisted constructs
self.validate_select_body(&query.body)?;
self.validate_where_clause(&query.selection)?;
Ok(())
}
}
Resource Management
Bounded Channels: Configurable capacity with backpressure policies.
pub struct BoundedChannel<T> {
sender: mpsc::Sender<T>,
receiver: mpsc::Receiver<T>,
capacity: usize,
backpressure_policy: BackpressurePolicy,
}
impl<T> BoundedChannel<T> {
pub async fn send(&self, item: T) -> Result<(), ChannelError> {
match self.backpressure_policy {
BackpressurePolicy::Block => {
self.sender.send(item).await?;
}
BackpressurePolicy::Drop => {
if self.sender.try_send(item).is_err() {
return Err(ChannelError::ChannelFull);
}
}
BackpressurePolicy::Error => {
self.sender.try_send(item)?;
}
}
Ok(())
}
}
Performance Characteristics
Process Collection Performance
- Baseline: <5 seconds for 10,000+ processes
- CPU Usage: <5% sustained during continuous monitoring
- Memory Usage: <100MB resident under normal operation
- Hash Computation: SHA-256 for all accessible executables
Detection Engine Performance
- Rule Execution: <100ms per detection rule
- SQL Validation: AST parsing and validation
- Resource Limits: 30-second timeout, memory limits
- Concurrent Execution: Parallel rule processing
Alert Delivery Performance
- Multi-Channel: Parallel delivery to multiple sinks
- Reliability: Circuit breakers and retry logic
- Performance: Non-blocking delivery with backpressure
- Monitoring: Delivery success rates and latency metrics
Cross-Platform Strategy
Process Enumeration
- Phase 1: sysinfo crate for unified cross-platform baseline
- Phase 2: Platform-specific enhancements (eBPF, ETW, EndpointSecurity)
- Fallback: Graceful degradation when enhanced features unavailable
Privilege Management
- Linux: CAP_SYS_PTRACE, immediate capability dropping
- Windows: SeDebugPrivilege, token restriction after init
- macOS: Minimal entitlements, sandbox compatibility
This architecture provides a robust foundation for implementing DaemonEye's core monitoring functionality while maintaining security, performance, and reliability requirements across all supported platforms.
DaemonEye Feature Tiers
DaemonEye is offered in three distinct tiers, each designed to meet different organizational needs and deployment scales. All tiers maintain the core security-first architecture while adding progressively more advanced capabilities.
Core Tier (Open Source)
License: Apache 2.0 Target: Individual users, small teams, proof-of-concept deployments
Core Components
- procmond: Privileged process collector with minimal attack surface
- daemoneye-agent: User-space detection orchestrator with SQL-based rules
- daemoneye-cli: Command-line interface for queries and management
- daemoneye-lib: Shared library with common functionality
Key Features
- ✅ Process Monitoring: Cross-platform process enumeration and monitoring
- ✅ SQL Detection Engine: Flexible rule creation using standard SQL queries
- ✅ Multi-Channel Alerting: stdout, syslog, webhook, email, file output
- ✅ Audit Logging: Certificate Transparency-style Merkle tree with inclusion proofs
- ✅ Offline Operation: Full functionality without internet access
- ✅ CLI Interface: Comprehensive command-line management tools
- ✅ Configuration Management: Hierarchical configuration system
- ✅ Cross-Platform Support: Linux, macOS, Windows
Performance Characteristics
- CPU Usage: <5% sustained during continuous monitoring
- Memory Usage: <100MB resident under normal operation
- Process Enumeration: <5 seconds for 10,000+ processes
- Database Operations: >1,000 records/second write rate
- Alert Latency: <100ms per detection rule execution
Use Cases
- Individual security researchers and analysts
- Small development teams requiring process monitoring
- Proof-of-concept security deployments
- Educational and training environments
- Airgapped or offline environments
Business Tier (Commercial)
License: $199/site (one-time) Target: Small to medium teams, consultancies, managed security services
All Core Tier Features Plus
Security Center Server
- Centralized Management: Single point of control for multiple agents
- Agent Registration: Secure mTLS-based agent authentication
- Data Aggregation: Centralized collection of alerts and process data
- Configuration Distribution: Centralized rule management and deployment
- Integration Hub: Single point for external SIEM integrations
Web GUI Frontend
- Fleet Dashboard: Real-time view of all connected agents
- Alert Management: Filtering, sorting, and export of alerts
- Rule Management: Visual rule editor and deployment interface
- System Health: Agent connectivity and performance metrics
- Data Visualization: Charts and graphs for security analytics
Enhanced Output Connectors
- Splunk HEC: Native Splunk HTTP Event Collector integration
- Elasticsearch: Bulk indexing with index pattern management
- Kafka: High-throughput message streaming
- CEF Format: Common Event Format for SIEM compatibility
- STIX 2.1: Structured Threat Information eXpression export
Curated Rule Packs
- Malware TTPs: Common malware tactics, techniques, and procedures
- MITRE ATT&CK: Framework-based detection rules
- Industry Standards: CIS, NIST, and other compliance frameworks
- Cryptographic Signatures: Ed25519-signed rule packs for integrity
- Auto-Update: Automatic rule pack distribution and updates
Container & Kubernetes Support
- Docker Images: Pre-built container images for all components
- Kubernetes Manifests: DaemonSet and deployment configurations
- Helm Charts: Package management for Kubernetes deployments
- Service Mesh: Istio and Linkerd integration support
Deployment Patterns
- Direct Agent-to-SIEM: Agents send directly to configured SIEM systems
- Centralized Proxy: All agents route through Security Center
- Hybrid Mode: Agents send to both Security Center and direct SIEM (recommended)
Performance Characteristics
- Agents per Security Center: 1,000+ agents
- Alert Throughput: 10,000+ alerts per minute
- Data Retention: Configurable retention policies
- Query Performance: Sub-second queries across agent fleet
Use Cases
- Security consultancies managing multiple clients
- Managed Security Service Providers (MSSPs)
- Small to medium enterprises with distributed infrastructure
- Organizations requiring centralized security management
- Teams needing enhanced SIEM integration
Enterprise Tier (Commercial)
License: Custom pricing Target: Large enterprises, government agencies, critical infrastructure
All Business Tier Features Plus
Kernel-Level Monitoring
- Linux eBPF: Real-time syscall monitoring and process tracking
- Windows ETW: Event Tracing for Windows integration
- macOS EndpointSecurity: Native security framework integration
- Container Awareness: Kubernetes and Docker container monitoring
- Network Correlation: Process-to-network activity correlation
Federated Security Centers
- Hierarchical Architecture: Regional and Primary Security Centers
- Distributed Queries: Cross-center query execution and aggregation
- Data Replication: Automatic data synchronization between centers
- Failover Support: Automatic failover and load balancing
- Geographic Distribution: Multi-region deployment support
Advanced Threat Intelligence
- STIX/TAXII Integration: Automated threat intelligence ingestion
- Indicator Conversion: STIX indicators to detection rules
- Threat Feed Management: Multiple threat intelligence sources
- IOC Matching: Indicator of Compromise correlation
- Threat Hunting: Advanced query capabilities for threat hunting
Enterprise Analytics
- Distributed Analytics: Cross-fleet security analytics
- Machine Learning: Anomaly detection and behavioral analysis
- Risk Scoring: Dynamic risk assessment and prioritization
- Compliance Reporting: Automated compliance and audit reporting
- Custom Dashboards: Configurable security dashboards
Advanced Security Features
- Zero Trust Architecture: Comprehensive zero trust implementation
- Identity Integration: Active Directory and LDAP integration
- Role-Based Access Control: Granular permission management
- Audit Trail: Comprehensive audit logging and compliance
- Data Encryption: End-to-end encryption for all data flows
High Availability & Scalability
- Clustering: Multi-node Security Center clusters
- Load Balancing: Automatic load distribution
- Disaster Recovery: Backup and recovery procedures
- Horizontal Scaling: Scale-out architecture support
- Performance Optimization: Advanced caching and optimization
Performance Characteristics
- Agents per Federation: 10,000+ agents
- Regional Centers: 100+ regional centers per federation
- Query Latency: <100ms for distributed queries
- Data Volume: Petabyte-scale data processing
- Uptime: 99.99% availability SLA
Use Cases
- Large enterprises with global infrastructure
- Government agencies and critical infrastructure
- Financial services and healthcare organizations
- Organizations requiring compliance (SOX, HIPAA, PCI-DSS)
- Multi-tenant service providers
Feature Comparison Matrix
Feature | Core | Business | Enterprise |
---|---|---|---|
Process Monitoring | ✅ | ✅ | ✅ |
SQL Detection Engine | ✅ | ✅ | ✅ |
Multi-Channel Alerting | ✅ | ✅ | ✅ |
Audit Logging | ✅ | ✅ | ✅ |
Offline Operation | ✅ | ✅ | ✅ |
CLI Interface | ✅ | ✅ | ✅ |
Security Center | ❌ | ✅ | ✅ |
Web GUI | ❌ | ✅ | ✅ |
Enhanced Connectors | ❌ | ✅ | ✅ |
Curated Rule Packs | ❌ | ✅ | ✅ |
Container Support | ❌ | ✅ | ✅ |
Kernel Monitoring | ❌ | ❌ | ✅ |
Federation | ❌ | ❌ | ✅ |
STIX/TAXII | ❌ | ❌ | ✅ |
Advanced Analytics | ❌ | ❌ | ✅ |
Zero Trust | ❌ | ❌ | ✅ |
High Availability | ❌ | ❌ | ✅ |
Licensing Architecture
Dual-License Strategy
The DaemonEye project maintains a dual-license approach to balance open source accessibility with commercial sustainability:
- Core Components: Apache 2.0 licensed (procmond, daemoneye-agent, daemoneye-cli, daemoneye-lib)
- Business Tier Features: $199/site one-time license (Security Center, GUI, enhanced connectors, curated rules)
- Enterprise Tier Features: Custom pricing (kernel monitoring, federation, STIX/TAXII integration)
Feature Gating Implementation
// Compile-time feature gates
#[cfg(feature = "business-tier")]
pub mod security_center;
#[cfg(feature = "business-tier")]
pub mod enhanced_connectors;
#[cfg(feature = "enterprise-tier")]
pub mod kernel_monitoring;
#[cfg(feature = "enterprise-tier")]
pub mod federation;
Runtime License Validation
- Cryptographic Signatures: Ed25519 signatures for license validation (sketched after this list)
- Site Restrictions: Hostname/domain matching for license compliance
- Feature Activation: Runtime feature activation based on license
- Graceful Degradation: Fallback to lower tier when license is invalid
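A minimal sketch of what runtime validation could look like, assuming a hypothetical License struct carrying the signed payload and an embedded vendor verification key (all names here are illustrative, using the same ed25519-dalek API as the rule-pack validator later in this document):
use ed25519_dalek::{PublicKey, Signature};

pub struct LicenseValidator {
    vendor_key: PublicKey, // embedded at build time
}

pub struct License {
    pub site: String,        // hostname/domain the license is bound to
    pub tier: String,        // "business" or "enterprise"
    pub payload: Vec<u8>,    // canonical bytes covered by the signature
    pub signature: Vec<u8>,  // Ed25519 signature over `payload`
}

impl LicenseValidator {
    /// Returns the activated tier name, degrading gracefully to "core" on any failure.
    pub fn activate(&self, license: &License, local_host: &str) -> String {
        let Ok(sig) = Signature::from_bytes(&license.signature) else {
            return "core".to_string();
        };
        let signature_ok = self.vendor_key.verify_strict(&license.payload, &sig).is_ok();
        let site_ok = local_host.ends_with(license.site.as_str());
        if signature_ok && site_ok {
            license.tier.clone()
        } else {
            "core".to_string()
        }
    }
}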
License Distribution
- Core Tier: GitHub releases with Apache 2.0 license
- Business Tier: Separate distribution channel with license keys
- Enterprise Tier: Enterprise distribution with support and SLA
- Hybrid Builds: Single binary with runtime feature activation
Migration Path
Core → Business
- Install Security Center server
- Configure agent uplink connections
- Deploy curated rule packs
- Set up enhanced connectors
Business → Enterprise
- Enable kernel-level monitoring
- Deploy federated Security Centers
- Integrate STIX/TAXII feeds
- Configure advanced analytics
Backward Compatibility
- All tiers maintain API compatibility
- Configuration migration tools provided
- Data export/import capabilities
- Gradual feature activation
Choose the tier that best fits your organization's needs, with the flexibility to upgrade as requirements grow and evolve.
Technical Documentation
This section contains comprehensive technical documentation for DaemonEye, covering implementation details, architecture specifications, and technical guides.
Table of Contents
- Core Monitoring
- Business Tier Features
- Enterprise Tier Features
- Technical Architecture
- Implementation Details
- Performance Considerations
- Security Implementation
- Testing Strategy
- Deployment Considerations
Core Monitoring
The core monitoring system provides real-time process monitoring and threat detection capabilities. This is the foundation of DaemonEye and is available in all tiers.
Read Core Monitoring Documentation →
Business Tier Features
Business tier features extend the core monitoring with additional capabilities including the Security Center, enhanced integrations, and curated rule packs.
Read Business Tier Documentation →
Enterprise Tier Features
Enterprise tier features provide advanced monitoring capabilities including kernel monitoring, network event monitoring, and federated security center architecture.
Read Enterprise Tier Documentation →
Technical Architecture
Component Overview
DaemonEye follows a three-component security architecture:
- ProcMonD: Privileged process monitoring daemon
- daemoneye-agent: User-space orchestrator and alerting
- daemoneye-cli: Command-line interface and management
Data Flow
graph LR
    A[<b>ProcMonD</b><br/>• Process Enum<br/>• Hashing<br/>• Audit] -->|Process Data<br/>Alerts| B[<b>daemoneye-agent</b><br/>• Alerting<br/>• Rules<br/>• Network<br/>• Storage]
    B -->|Lifecycle Mgmt<br/>Tasking| A
    B -->|Query Results<br/>Status| C[<b>daemoneye-cli</b><br/>• Queries<br/>• Config<br/>• Monitor<br/>• Manage]
    C -->|Commands<br/>Config| B
Technology Stack
- Language: Rust 2024 Edition
- Runtime: Tokio async runtime
- Event Store: redb (pure Rust embedded database)
- Audit Ledger: Certificate Transparency-style Merkle tree
- IPC: Custom Protobuf over Unix Sockets
- CLI: clap v4 with derive macros
- Configuration: figment with hierarchical loading
- Logging: tracing ecosystem with structured JSON
- Testing: cargo-nextest with comprehensive test suite
Implementation Details
Process Collection
The process collection system uses the sysinfo crate for cross-platform process enumeration:
use sysinfo::{System, SystemExt, ProcessExt};
pub struct ProcessCollector {
system: System,
}
impl ProcessCollector {
pub fn new() -> Self {
Self {
system: System::new_all(),
}
}
pub async fn collect_processes(&mut self) -> Result<Vec<ProcessInfo>, CollectionError> {
self.system.refresh_all();
let processes = self.system
.processes()
.values()
.map(|p| ProcessInfo::from(p))
.collect();
Ok(processes)
}
}
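Typical usage from the agent's collection loop might look like the following (the surrounding loop and error handling are illustrative):
let mut collector = ProcessCollector::new();
let processes = collector.collect_processes().await?;
tracing::info!(count = processes.len(), "process snapshot collected");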
Event Store Operations
DaemonEye uses redb for high-performance event storage:
use redb::{Database, ReadableTable, TableDefinition};
// Table definition: PID key, serialized ProcessInfo value
const PROCESS_TABLE: TableDefinition<u32, &[u8]> = TableDefinition::new("processes");
pub struct EventStore {
    db: Database,
}
impl EventStore {
    pub fn new(path: &str) -> Result<Self> {
        let db = Database::create(path)?;
        Ok(Self { db })
    }
    pub async fn insert_process(&self, process: &ProcessInfo) -> Result<()> {
        let write_txn = self.db.begin_write()?;
        {
            let mut table = write_txn.open_table(PROCESS_TABLE)?;
            let value = serde_json::to_vec(process)?;
            table.insert(process.pid, value.as_slice())?;
        }
        write_txn.commit()?;
        Ok(())
    }
}
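Reading a record back is symmetric; a minimal sketch of a lookup by PID (the ReadableTable trait imported above provides get, and the access guard borrows from the transaction, so the value is deserialized before it drops):
impl EventStore {
    pub fn get_process(&self, pid: u32) -> Result<Option<ProcessInfo>> {
        let read_txn = self.db.begin_read()?;
        let table = read_txn.open_table(PROCESS_TABLE)?;
        match table.get(pid)? {
            Some(value) => Ok(Some(serde_json::from_slice(value.value())?)),
            None => Ok(None),
        }
    }
}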
Alert System
The alert system provides multi-channel alert delivery:
use async_trait::async_trait;
#[async_trait]
pub trait AlertSink: Send + Sync {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult, DeliveryError>;
async fn health_check(&self) -> HealthStatus;
fn name(&self) -> &str;
}
pub struct AlertManager {
sinks: Vec<Box<dyn AlertSink>>,
}
impl AlertManager {
pub async fn send_alert(&self, alert: Alert) -> Result<(), AlertError> {
let mut results = Vec::new();
for sink in &self.sinks {
let result = sink.send(&alert).await;
results.push(result);
}
// Check if any sink succeeded
if results.iter().any(|r| r.is_ok()) {
Ok(())
} else {
Err(AlertError::DeliveryFailed)
}
}
}
Performance Considerations
Memory Management
DaemonEye is designed for minimal memory usage:
- Process Collection: ~1MB per 1000 processes
- Database Operations: ~10MB for 100,000 records
- Alert Processing: ~1MB for 10,000 alerts
- Total Memory: <100MB under normal operation
CPU Usage
CPU usage is optimized for minimal impact:
- Process Collection: <5% CPU sustained
- Database Operations: <2% CPU for queries
- Alert Processing: <1% CPU for delivery
- Total CPU: <5% sustained during monitoring
Storage Requirements
Storage requirements scale with data retention:
- Process Records: ~1KB per process per collection
- Alert Records: ~500B per alert
- Audit Logs: ~100B per operation
- Total Storage: ~1GB per month for 1000 processes
Security Implementation
Input Validation
All inputs are validated using the validator crate:
use validator::{Validate, ValidationError};
#[derive(Validate)]
pub struct DetectionRule {
#[validate(length(min = 1, max = 1000))]
pub name: String,
#[validate(custom = "validate_sql")]
pub sql_query: String,
#[validate(range(min = 1, max = 1000))]
pub priority: u32,
}
fn validate_sql(sql: &str) -> Result<(), ValidationError> {
    // Parse with the same SQLite dialect used by the detection engine,
    // then walk the AST to reject non-whitelisted constructs
    let dialect = sqlparser::dialect::SQLiteDialect {};
    let ast = sqlparser::parser::Parser::parse_sql(&dialect, sql)
        .map_err(|_| ValidationError::new("invalid_sql"))?;
    validate_ast(&ast)?;
    Ok(())
}
SQL Injection Prevention
Multiple layers of SQL injection prevention:
- AST Validation: Parse and validate SQL queries
- Prepared Statements: Use parameterized queries (see the sketch after this list)
- Sandboxed Execution: Isolated query execution
- Input Sanitization: Clean and validate all inputs
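For the prepared-statement layer, the essential property is that parameters are bound rather than spliced into the SQL text. A sketch against a SQLite-style API (using rusqlite purely as an illustration; table and column names are hypothetical):
fn query_processes_by_name(conn: &rusqlite::Connection, name: &str) -> rusqlite::Result<Vec<u32>> {
    // `name` is bound as a parameter, so a value like "x' OR '1'='1" cannot
    // change the shape of the query
    let mut stmt = conn.prepare("SELECT pid FROM processes WHERE name = ?1")?;
    let pids = stmt
        .query_map([name], |row| row.get::<_, u32>(0))?
        .collect::<rusqlite::Result<Vec<u32>>>()?;
    Ok(pids)
}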
Cryptographic Integrity
BLAKE3 hashing and Ed25519 signatures ensure data integrity:
use ed25519_dalek::{Keypair, Signature, Signer};
pub struct IntegrityChecker {
    keypair: Keypair,
}
impl IntegrityChecker {
    pub fn hash_data(&self, data: &[u8]) -> [u8; 32] {
        // blake3::hash is a one-shot convenience; a long-lived stateful
        // Hasher would accumulate input across calls and is unnecessary here
        *blake3::hash(data).as_bytes()
    }
    pub fn sign_data(&self, data: &[u8]) -> Signature {
        self.keypair.sign(data)
    }
}
Testing Strategy
Unit Testing
Comprehensive unit testing with mocks and test doubles:
#[cfg(test)]
mod tests {
use super::*;
use mockall::mock;
mock! {
pub ProcessCollector {}
#[async_trait]
impl ProcessCollectionService for ProcessCollector {
async fn collect_processes(&self) -> Result<CollectionResult, CollectionError>;
}
}
#[tokio::test]
async fn test_agent_with_mock_collector() {
let mut mock_collector = MockProcessCollector::new();
mock_collector
.expect_collect_processes()
.times(1)
.returning(|| Ok(CollectionResult::default()));
let agent = DaemoneyeAgent::new(Box::new(mock_collector)); // `daemoneye-agent` is not a valid Rust identifier
let result = agent.run_collection_cycle().await;
assert!(result.is_ok());
}
}
Integration Testing
Test component interactions and data flow:
#[tokio::test]
async fn test_database_integration() {
let temp_dir = TempDir::new().unwrap();
let db_path = temp_dir.path().join("test.db");
let db = Database::new(&db_path).await.unwrap();
db.create_schema().await.unwrap();
let process = ProcessInfo::default();
db.insert_process(&process).await.unwrap();
let retrieved = db.get_process(process.pid).await.unwrap();
assert_eq!(process.pid, retrieved.pid);
}
Performance Testing
Benchmark critical operations:
use criterion::{black_box, criterion_group, criterion_main, Criterion};
fn benchmark_process_collection(c: &mut Criterion) {
    // Collection is async, so drive it with a runtime inside the benchmark loop
    let rt = tokio::runtime::Runtime::new().unwrap();
    let mut group = c.benchmark_group("process_collection");
    group.bench_function("collect_processes", |b| {
        b.iter(|| {
            let mut collector = ProcessCollector::new();
            black_box(rt.block_on(collector.collect_processes()))
        })
    });
    group.finish();
}
criterion_group!(benches, benchmark_process_collection);
criterion_main!(benches);
Deployment Considerations
Container Deployment
DaemonEye is designed for containerized deployment:
- Docker: Multi-stage builds for minimal images
- Kubernetes: DaemonSet for process monitoring
- Security: Non-root containers with minimal privileges
- Resource Limits: Configurable CPU and memory limits
Platform Support
Cross-platform support with platform-specific optimizations:
- Linux: Primary target with /proc filesystem access
- macOS: Native process enumeration with security framework
- Windows: Windows API process access with service deployment
Configuration Management
Hierarchical configuration with multiple sources, assembled in the sketch after this list:
- Command-line flags (highest precedence)
- Environment variables (DaemonEye_*)
- User configuration file (~/.config/daemoneye/config.yaml)
- System configuration file (/etc/daemoneye/config.yaml)
- Embedded defaults (lowest precedence)
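Since the stack uses figment for hierarchical loading, the precedence chain can be expressed by merging providers from lowest to highest precedence (a minimal sketch; the Config fields are illustrative and tilde expansion is omitted for brevity):
use figment::{
    providers::{Env, Format, Serialized, Yaml},
    Figment,
};

#[derive(Debug, Default, serde::Deserialize, serde::Serialize)]
struct Config {
    scan_interval_secs: u64,
    database_path: String,
}

fn load_config() -> Result<Config, figment::Error> {
    // Later merges win, so providers are stacked from lowest to highest precedence
    Figment::from(Serialized::defaults(Config::default()))    // embedded defaults
        .merge(Yaml::file("/etc/daemoneye/config.yaml"))      // system configuration
        .merge(Yaml::file("~/.config/daemoneye/config.yaml")) // user configuration
        .merge(Env::prefixed("DaemonEye_"))                   // environment variables
        // command-line flags would be merged last, e.g. as a Serialized provider built from clap
        .extract()
}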
This technical documentation provides comprehensive information about DaemonEye's implementation. For specific implementation details, consult the individual technical guides.
Core Monitoring Technical Specification
Overview
The Core Monitoring specification defines the fundamental process monitoring capabilities that form the foundation of DaemonEye. This includes process enumeration, executable integrity verification, SQL-based detection engine, and multi-channel alerting across the three-component architecture.
Process Collection Architecture
Cross-Platform Process Enumeration
DaemonEye uses a layered approach to process enumeration, providing a unified interface across different operating systems while allowing platform-specific optimizations.
Base Implementation (sysinfo crate)
Primary Interface: The sysinfo crate provides cross-platform process enumeration with consistent data structures.
use sysinfo::{System, SystemExt, ProcessExt, Pid};
pub struct ProcessCollector {
system: System,
config: CollectorConfig,
hash_computer: Box<dyn HashComputer>,
}
impl ProcessCollector {
pub async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>> {
self.system.refresh_processes();
let mut processes = Vec::new();
let collection_time = SystemTime::now()
.duration_since(UNIX_EPOCH)?
.as_millis() as i64;
for (pid, process) in self.system.processes() {
let process_record = ProcessRecord {
id: Uuid::new_v4(),
scan_id: self.get_current_scan_id(),
collection_time,
pid: pid.as_u32(),
ppid: process.parent().map(|p| p.as_u32()),
name: process.name().to_string(),
executable_path: process.exe().map(|p| p.to_path_buf()),
command_line: process.cmd().to_vec(),
start_time: process.start_time(),
cpu_usage: process.cpu_usage(),
memory_usage: Some(process.memory()),
executable_hash: self.compute_executable_hash(process.exe()).await?,
hash_algorithm: Some("sha256".to_string()),
user_id: self.get_process_user(pid).await?,
accessible: true,
file_exists: process.exe().map(|p| p.exists()).unwrap_or(false),
platform_data: self.get_platform_specific_data(pid).await?,
};
processes.push(process_record);
}
Ok(processes)
}
}
Platform-Specific Enhancements
Linux eBPF Integration (Enterprise Tier):
#[cfg(target_os = "linux")]
pub struct EbpfProcessCollector {
base_collector: ProcessCollector,
ebpf_monitor: Option<EbpfMonitor>,
}
impl EbpfProcessCollector {
pub async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>> {
// Use eBPF for real-time process events if available
if let Some(ebpf) = &self.ebpf_monitor {
return self.enumerate_with_ebpf(ebpf).await;
}
// Fallback to sysinfo
self.base_collector.enumerate_processes().await
}
}
Windows ETW Integration (Enterprise Tier):
#[cfg(target_os = "windows")]
pub struct EtwProcessCollector {
base_collector: ProcessCollector,
etw_monitor: Option<EtwMonitor>,
}
impl EtwProcessCollector {
pub async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>> {
// Use ETW for enhanced process monitoring if available
if let Some(etw) = &self.etw_monitor {
return self.enumerate_with_etw(etw).await;
}
// Fallback to sysinfo
self.base_collector.enumerate_processes().await
}
}
macOS EndpointSecurity Integration (Enterprise Tier):
#[cfg(target_os = "macos")]
pub struct EndpointSecurityProcessCollector {
base_collector: ProcessCollector,
es_monitor: Option<EndpointSecurityMonitor>,
}
impl EndpointSecurityProcessCollector {
pub async fn enumerate_processes(&self) -> Result<Vec<ProcessRecord>> {
// Use EndpointSecurity for real-time monitoring if available
if let Some(es) = &self.es_monitor {
return self.enumerate_with_endpoint_security(es).await;
}
// Fallback to sysinfo
self.base_collector.enumerate_processes().await
}
}
Executable Integrity Verification
Hash Computation: SHA-256 hashing of executable files for integrity verification.
use sha2::{Digest, Sha256};
use tokio::{fs::File, io::AsyncReadExt};
pub struct Sha256HashComputer {
    buffer_size: usize,
}
impl HashComputer for Sha256HashComputer {
async fn compute_hash(&self, path: &Path) -> Result<Option<String>> {
if !path.exists() {
return Ok(None);
}
let mut file = File::open(path).await?;
let mut hasher = Sha256::new();
let mut buffer = vec![0u8; self.buffer_size];
loop {
let bytes_read = file.read(&mut buffer).await?;
if bytes_read == 0 {
break;
}
hasher.update(&buffer[..bytes_read]);
}
let hash = hasher.finalize();
Ok(Some(format!("{:x}", hash)))
}
fn get_algorithm(&self) -> &'static str {
"sha256"
}
}
Performance Optimization: Asynchronous hash computation with configurable buffer sizes.
impl ProcessCollector {
async fn compute_executable_hash(&self, path: Option<&Path>) -> Result<Option<String>> {
let path = match path {
Some(p) => p,
None => return Ok(None),
};
// Skip hashing for system processes or inaccessible files
if self.should_skip_hashing(path) {
return Ok(None);
}
// Compute the hash on a spawned task so enumeration is not blocked;
// the computer itself is async, so spawn (not spawn_blocking) is used
let hash_computer = self.hash_computer.clone();
let path = path.to_path_buf();
tokio::task::spawn(async move {
    hash_computer.compute_hash(&path).await
}).await?
}
fn should_skip_hashing(&self, path: &Path) -> bool {
// Skip hashing for system processes or temporary files
let path_str = path.to_string_lossy();
path_str.contains("/proc/") ||
path_str.contains("/sys/") ||
path_str.contains("/tmp/") ||
path_str.contains("\\System32\\")
}
}
SQL-Based Detection Engine
SQL Validation and Security
AST Validation: Comprehensive SQL parsing and validation to prevent injection attacks.
use sqlparser::{ast::*, dialect::SQLiteDialect, parser::Parser};
pub struct SqlValidator {
    dialect: SQLiteDialect,
    allowed_functions: HashSet<String>,
    allowed_operators: HashSet<String>,
}
impl SqlValidator {
    pub fn new() -> Self {
        Self {
            dialect: SQLiteDialect {},
            allowed_functions: Self::get_allowed_functions(),
            allowed_operators: Self::get_allowed_operators(),
        }
    }
    pub fn validate_query(&self, sql: &str) -> Result<ValidationResult> {
        // Parser::parse_sql borrows the dialect per call, so the validator
        // stores the dialect rather than a borrowing Parser instance
        let ast = Parser::parse_sql(&self.dialect, sql)?;
for statement in &ast {
match statement {
Statement::Query(query) => self.validate_select_query(query)?,
_ => return Err(ValidationError::ForbiddenStatement),
}
}
Ok(ValidationResult::Valid)
}
fn validate_select_query(&self, query: &Query) -> Result<()> {
self.validate_select_body(&query.body)?;
if let Some(selection) = &query.selection {
self.validate_where_clause(selection)?;
}
if let Some(group_by) = &query.group_by {
self.validate_group_by(group_by)?;
}
if let Some(having) = &query.having {
self.validate_having(having)?;
}
Ok(())
}
fn validate_select_body(&self, body: &SetExpr) -> Result<()> {
match body {
SetExpr::Select(select) => {
for item in &select.projection {
self.validate_projection_item(item)?;
}
if let Some(from) = &select.from {
self.validate_from_clause(from)?;
}
}
_ => return Err(ValidationError::UnsupportedSetExpr),
}
Ok(())
}
fn validate_projection_item(&self, item: &SelectItem) -> Result<()> {
match item {
SelectItem::UnnamedExpr(expr) => self.validate_expression(expr)?,
SelectItem::ExprWithAlias { expr, .. } => self.validate_expression(expr)?,
SelectItem::Wildcard => Ok(()), // Allow SELECT *
_ => Err(ValidationError::UnsupportedSelectItem),
}
}
fn validate_expression(&self, expr: &Expr) -> Result<()> {
match expr {
Expr::Identifier(_) => Ok(()),
Expr::Literal(_) => Ok(()),
Expr::BinaryOp { left, op, right } => {
self.validate_operator(op)?;
self.validate_expression(left)?;
self.validate_expression(right)?;
Ok(())
}
Expr::Function { name, args, .. } => {
self.validate_function(name, args)?;
Ok(())
}
Expr::Cast { expr, .. } => self.validate_expression(expr),
Expr::Case { .. } => Ok(()), // Allow CASE expressions
_ => Err(ValidationError::UnsupportedExpression),
}
}
fn validate_function(&self, name: &ObjectName, args: &[FunctionArg]) -> Result<()> {
let func_name = name.to_string().to_lowercase();
if !self.allowed_functions.contains(&func_name) {
return Err(ValidationError::ForbiddenFunction(func_name));
}
// Validate function arguments
for arg in args {
match arg {
FunctionArg::Unnamed(FunctionArgExpr::Expr(expr)) => {
self.validate_expression(expr)?;
}
_ => return Err(ValidationError::UnsupportedFunctionArg),
}
}
Ok(())
}
fn get_allowed_functions() -> HashSet<String> {
[
"count",
"sum",
"avg",
"min",
"max",
"length",
"substr",
"upper",
"lower",
"datetime",
"strftime",
"unixepoch",
"coalesce",
"nullif",
"ifnull",
]
.iter()
.map(|s| s.to_string())
.collect()
}
}
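As a concrete illustration of the whitelist in action (table and rule text are hypothetical):
let validator = SqlValidator::new();
// Read-only SELECT over whitelisted constructs passes
assert!(validator.validate_query("SELECT pid, name FROM processes WHERE name = 'nc'").is_ok());
// Any non-SELECT statement is rejected (ValidationError::ForbiddenStatement)
assert!(validator.validate_query("DELETE FROM processes").is_err());
// Functions outside the whitelist are rejected (ValidationError::ForbiddenFunction)
assert!(validator.validate_query("SELECT load_extension('x') FROM processes").is_err());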
Detection Rule Execution
Sandboxed Execution: Safe execution of detection rules with resource limits.
pub struct DetectionEngine {
db: redb::Database,
sql_validator: SqlValidator,
rule_manager: RuleManager,
alert_manager: AlertManager,
}
impl DetectionEngine {
pub async fn execute_rules(&self, scan_id: i64) -> Result<Vec<Alert>> {
let rules = self.rule_manager.load_enabled_rules().await?;
let mut alerts = Vec::new();
for rule in rules {
match self.execute_rule(&rule, scan_id).await {
Ok(rule_alerts) => alerts.extend(rule_alerts),
Err(e) => {
tracing::error!(
rule_id = %rule.id,
error = %e,
"Failed to execute detection rule"
);
// Continue with other rules
}
}
}
Ok(alerts)
}
async fn execute_rule(&self, rule: &DetectionRule, scan_id: i64) -> Result<Vec<Alert>> {
// Validate SQL before execution
self.sql_validator.validate_query(&rule.sql_query)?;
// Execute with timeout and resource limits
let execution_result = tokio::time::timeout(
Duration::from_secs(30),
self.execute_sql_query(&rule.sql_query, scan_id)
).await??;
// Generate alerts from query results
let mut alerts = Vec::new();
for row in execution_result.rows {
let alert = self.alert_manager.generate_alert(
&rule,
&row,
scan_id
).await?;
if let Some(alert) = alert {
alerts.push(alert);
}
}
Ok(alerts)
}
async fn execute_sql_query(&self, sql: &str, scan_id: i64) -> Result<QueryResult> {
    // Sketch: detection rules run against a read-only, SQLite-style query
    // layer exposed over the event store; redb itself is a key-value store,
    // so the prepared-statement API shown here belongs to that layer
    let read_txn = self.db.begin_read()?;
    let table = read_txn.open_table(PROCESSES_TABLE)?;
    // Execute a prepared statement with bound parameters, never string interpolation
    let mut stmt = self.db.prepare(sql)?;
    stmt.bind((":scan_id", scan_id))?;
let mut rows = Vec::new();
while let Some(row) = stmt.next()? {
rows.push(ProcessRow::from_sqlite_row(row)?);
}
Ok(QueryResult { rows })
}
}
Alert Generation and Management
Alert Data Model
Structured Alerts: Comprehensive alert structure with full context.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Alert {
pub id: Uuid,
pub alert_time: i64,
pub rule_id: String,
pub title: String,
pub description: String,
pub severity: AlertSeverity,
pub scan_id: Option<i64>,
pub affected_processes: Vec<u32>,
pub process_count: i32,
pub alert_data: serde_json::Value,
pub rule_execution_time_ms: Option<i64>,
pub dedupe_key: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum AlertSeverity {
Low,
Medium,
High,
Critical,
}
impl Alert {
pub fn new(rule: &DetectionRule, process_data: &ProcessRow, scan_id: Option<i64>) -> Self {
let alert_time = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis() as i64;
// Associated function: there is no `self` yet, so the helper is called statically
let dedupe_key = Self::generate_dedupe_key(rule, process_data);
Self {
id: Uuid::new_v4(),
alert_time,
rule_id: rule.id.clone(),
title: rule.name.clone(),
description: rule.description.clone().unwrap_or_default(),
severity: rule.severity.clone(),
scan_id,
affected_processes: vec![process_data.pid],
process_count: 1,
alert_data: serde_json::to_value(process_data).unwrap(),
rule_execution_time_ms: None,
dedupe_key,
}
}
fn generate_dedupe_key(rule: &DetectionRule, process_data: &ProcessRow) -> String {
// Generate deduplication key based on rule and process characteristics
format!(
"{}:{}:{}:{}",
rule.id,
process_data.pid,
process_data.name,
process_data.executable_path.as_deref().unwrap_or("")
)
}
}
Alert Deduplication
Intelligent Deduplication: Prevent alert spam while maintaining security visibility.
pub struct AlertManager {
db: redb::Database,
dedupe_cache: Arc<Mutex<HashMap<String, Instant>>>,
dedupe_window: Duration,
}
impl AlertManager {
pub async fn generate_alert(
&self,
rule: &DetectionRule,
process_data: &ProcessRow,
scan_id: Option<i64>,
) -> Result<Option<Alert>> {
let alert = Alert::new(rule, process_data, scan_id);
// Check for deduplication
if self.is_duplicate(&alert).await? {
return Ok(None);
}
// Store alert in database
self.store_alert(&alert).await?;
// Update deduplication cache
self.update_dedupe_cache(&alert).await?;
Ok(Some(alert))
}
async fn is_duplicate(&self, alert: &Alert) -> Result<bool> {
let mut cache = self.dedupe_cache.lock().await;
if let Some(last_seen) = cache.get(&alert.dedupe_key) {
if last_seen.elapsed() < self.dedupe_window {
return Ok(true);
}
}
Ok(false)
}
async fn update_dedupe_cache(&self, alert: &Alert) -> Result<()> {
let mut cache = self.dedupe_cache.lock().await;
cache.insert(alert.dedupe_key.clone(), Instant::now());
Ok(())
}
}
Multi-Channel Alert Delivery
Alert Sink Architecture
Pluggable Sinks: Flexible alert delivery through multiple channels.
#[async_trait]
pub trait AlertSink: Send + Sync {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult>;
async fn health_check(&self) -> HealthStatus;
fn name(&self) -> &str;
}
pub struct AlertDeliveryManager {
sinks: Vec<Box<dyn AlertSink>>,
retry_policy: RetryPolicy,
circuit_breaker: CircuitBreaker,
}
impl AlertDeliveryManager {
pub async fn deliver_alert(&self, alert: &Alert) -> Result<Vec<DeliveryResult>> {
    // Deliver to all sinks concurrently; join_all keeps the borrows of
    // `self` and the sinks valid, unlike detached tokio::spawn tasks
    let deliveries = self.sinks.iter()
        .map(|sink| self.deliver_to_sink(sink.as_ref(), alert));
    let results = futures::future::join_all(deliveries).await;
    Ok(results)
}
async fn deliver_to_sink(&self, sink: &dyn AlertSink, alert: &Alert) -> DeliveryResult {
// Apply circuit breaker
if self.circuit_breaker.is_open(sink.name()) {
return DeliveryResult::CircuitBreakerOpen;
}
// Retry with exponential backoff
let mut attempt = 0;
let mut delay = Duration::from_millis(100);
loop {
match sink.send(alert).await {
Ok(result) => {
self.circuit_breaker.record_success(sink.name());
return result;
}
Err(e) => {
attempt += 1;
if attempt >= self.retry_policy.max_attempts {
self.circuit_breaker.record_failure(sink.name());
return DeliveryResult::Failed(e.to_string());
}
tokio::time::sleep(delay).await;
delay = std::cmp::min(delay * 2, Duration::from_secs(60));
}
}
}
}
}
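The CircuitBreaker used above is left undefined by this spec; a minimal per-sink sketch (the threshold and cooldown values are illustrative) could be:
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

#[derive(Default)]
struct BreakerState {
    consecutive_failures: u32,
    opened_at: Option<Instant>,
}

pub struct CircuitBreaker {
    states: Mutex<HashMap<String, BreakerState>>,
    failure_threshold: u32, // e.g. 5 consecutive failures
    cooldown: Duration,     // e.g. 60 seconds before retrying
}

impl CircuitBreaker {
    pub fn is_open(&self, sink: &str) -> bool {
        let states = self.states.lock().unwrap();
        match states.get(sink).and_then(|s| s.opened_at) {
            // Effectively half-open again once the cooldown has elapsed
            Some(opened) => opened.elapsed() < self.cooldown,
            None => false,
        }
    }

    pub fn record_success(&self, sink: &str) {
        let mut states = self.states.lock().unwrap();
        states.insert(sink.to_string(), BreakerState::default());
    }

    pub fn record_failure(&self, sink: &str) {
        let mut states = self.states.lock().unwrap();
        let entry = states.entry(sink.to_string()).or_default();
        entry.consecutive_failures += 1;
        if entry.consecutive_failures >= self.failure_threshold {
            entry.opened_at = Some(Instant::now());
        }
    }
}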
Specific Sink Implementations
Stdout Sink:
pub struct StdoutSink {
format: OutputFormat,
}
#[async_trait]
impl AlertSink for StdoutSink {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult> {
let output = match self.format {
OutputFormat::Json => serde_json::to_string_pretty(alert)?,
OutputFormat::Text => self.format_text(alert),
OutputFormat::Csv => self.format_csv(alert),
};
println!("{}", output);
Ok(DeliveryResult::Success)
}
async fn health_check(&self) -> HealthStatus {
HealthStatus::Healthy
}
fn name(&self) -> &str {
"stdout"
}
}
Syslog Sink:
pub struct SyslogSink {
facility: SyslogFacility,
tag: String,
socket: UnixDatagram,
}
#[async_trait]
impl AlertSink for SyslogSink {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult> {
let priority = self.map_severity_to_priority(&alert.severity);
let timestamp = self.format_timestamp(alert.alert_time);
let message = format!(
"<{}>{} {} {}: {}",
priority,
timestamp,
self.tag,
alert.title,
alert.description
);
self.socket.send(message.as_bytes()).await?;
Ok(DeliveryResult::Success)
}
async fn health_check(&self) -> HealthStatus {
// Check if syslog socket is accessible
HealthStatus::Healthy
}
fn name(&self) -> &str {
"syslog"
}
}
Webhook Sink:
pub struct WebhookSink {
url: Url,
client: reqwest::Client,
headers: HeaderMap,
timeout: Duration,
}
#[async_trait]
impl AlertSink for WebhookSink {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult> {
let payload = serde_json::to_value(alert)?;
let response = self.client
.post(self.url.clone())
.headers(self.headers.clone())
.json(&payload)
.timeout(self.timeout)
.send()
.await?;
if response.status().is_success() {
Ok(DeliveryResult::Success)
} else {
Err(AlertDeliveryError::HttpError(response.status()))
}
}
async fn health_check(&self) -> HealthStatus {
// Perform health check by sending a test request
match self.client
.get(self.url.clone())
.timeout(Duration::from_secs(5))
.send()
.await
{
Ok(response) if response.status().is_success() => HealthStatus::Healthy,
_ => HealthStatus::Unhealthy,
}
}
fn name(&self) -> &str {
"webhook"
}
}
Performance Requirements and Optimization
Process Collection Performance
Target Metrics:
- Process Enumeration: <5 seconds for 10,000+ processes
- CPU Usage: <5% sustained during continuous monitoring
- Memory Usage: <100MB resident under normal operation
- Hash Computation: Complete within enumeration time
Optimization Strategies:
impl ProcessCollector {
async fn enumerate_processes_optimized(&self) -> Result<Vec<ProcessRecord>> {
let start_time = Instant::now();
// Use parallel processing for hash computation
let (processes, hash_tasks): (Vec<_>, Vec<_>) = self.collect_basic_process_data()
.into_iter()
.partition(|p| p.executable_path.is_none());
// Compute hashes in parallel
let hash_results = futures::future::join_all(
hash_tasks.into_iter().map(|p| self.compute_hash_async(p))
).await;
let mut all_processes = processes;
all_processes.extend(hash_results.into_iter().flatten());
let duration = start_time.elapsed();
tracing::info!(
process_count = all_processes.len(),
duration_ms = duration.as_millis(),
"Process enumeration completed"
);
Ok(all_processes)
}
}
Detection Engine Performance
Target Metrics:
- Rule Execution: <100ms per detection rule
- SQL Validation: <10ms per query
- Resource Limits: 30-second timeout, memory limits
- Concurrent Execution: Parallel rule processing
Optimization Strategies:
impl DetectionEngine {
async fn execute_rules_optimized(&self, scan_id: i64) -> Result<Vec<Alert>> {
let rules = self.rule_manager.load_enabled_rules().await?;
// Group rules by complexity for optimal scheduling
let (simple_rules, complex_rules) = self.categorize_rules(rules);
// Execute simple rules in parallel
let simple_alerts = futures::future::join_all(
simple_rules.into_iter().map(|rule| self.execute_rule(rule, scan_id))
).await;
// Execute complex rules sequentially to avoid resource contention
let mut complex_alerts = Vec::new();
for rule in complex_rules {
let alerts = self.execute_rule(rule, scan_id).await?;
complex_alerts.extend(alerts);
}
// Combine results
let mut all_alerts = Vec::new();
for result in simple_alerts {
all_alerts.extend(result?);
}
all_alerts.extend(complex_alerts);
Ok(all_alerts)
}
}
Error Handling and Recovery
Graceful Degradation
Process Collection Failures:
impl ProcessCollector {
async fn enumerate_processes_with_fallback(&self) -> Result<Vec<ProcessRecord>> {
match self.enumerate_processes_enhanced().await {
Ok(processes) => Ok(processes),
Err(e) => {
tracing::warn!("Enhanced enumeration failed, falling back to basic: {}", e);
self.enumerate_processes_basic().await
}
}
}
}
Detection Engine Failures:
impl DetectionEngine {
async fn execute_rule_with_recovery(&self, rule: &DetectionRule, scan_id: i64) -> Result<Vec<Alert>> {
match self.execute_rule(rule, scan_id).await {
Ok(alerts) => Ok(alerts),
Err(e) => {
tracing::error!(
rule_id = %rule.id,
error = %e,
"Rule execution failed, marking as disabled"
);
// Disable problematic rules to prevent repeated failures
self.rule_manager.disable_rule(&rule.id).await?;
Ok(Vec::new())
}
}
}
}
Resource Management
Memory Pressure Handling:
impl ProcessCollector {
async fn handle_memory_pressure(&self) -> Result<()> {
let memory_usage = self.get_memory_usage()?;
if memory_usage > self.config.memory_threshold {
tracing::warn!("Memory pressure detected, reducing batch size");
// Reduce batch size for hash computation
self.hash_computer.set_buffer_size(
self.hash_computer.buffer_size() / 2
);
// Trigger garbage collection
tokio::task::yield_now().await;
}
Ok(())
}
}
This core monitoring specification provides the foundation for DaemonEye's process monitoring capabilities, ensuring high performance, security, and reliability across all supported platforms.
Business Tier Features Technical Specification
Overview
The Business Tier Features extend the core DaemonEye architecture with professional-grade capabilities targeting small teams and consultancies. The design maintains the security-first, offline-capable philosophy while adding enterprise integrations, curated content, and centralized management capabilities.
The key architectural addition is the DaemonEye Security Center, a new component that provides centralized aggregation, management, and visualization capabilities while preserving the autonomous operation of individual agents.
Table of Contents
- Security Center Architecture
- Agent Registration and Authentication
- Curated Rule Packs
- Enhanced Output Connectors
- Export Format Implementations
- Web GUI Frontend
- Deployment Patterns
- Container and Kubernetes Support
- Performance and Scalability
Security Center Architecture
Component Overview
graph TB
    subgraph "Business Tier Architecture"
        SC[Security Center Server]
        GUI[Web GUI Frontend]
        subgraph "Agent Node 1"
            PM1[procmond]
            SA1[daemoneye-agent]
            CLI1[daemoneye-cli]
        end
        subgraph "Agent Node 2"
            PM2[procmond]
            SA2[daemoneye-agent]
            CLI2[daemoneye-cli]
        end
        subgraph "External Integrations"
            SPLUNK[Splunk HEC]
            ELASTIC[Elasticsearch]
            KAFKA[Kafka]
        end
    end
    SA1 -.->|mTLS| SC
    SA2 -.->|mTLS| SC
    GUI -->|HTTPS| SC
    SC -->|HTTP/HTTPS| SPLUNK
    SC -->|HTTP/HTTPS| ELASTIC
    SC -->|TCP| KAFKA
    PM1 --> SA1
    PM2 --> SA2
Security Center Server
Technology Stack:
- Framework: Axum web framework with tokio async runtime
- Database: PostgreSQL with connection pooling for scalable data storage
- Authentication: Mutual TLS (mTLS) for agent connections, JWT for web GUI
- Configuration: Same hierarchical config system as core DaemonEye
- Observability: OpenTelemetry tracing with Prometheus metrics export
Core Modules:
pub mod security_center {
    pub mod agent_registry;   // Agent authentication and management
    pub mod data_aggregator;  // Central data collection and storage
    pub mod database;         // PostgreSQL connection pool and migrations
    pub mod health;           // Health checks and system monitoring
    pub mod integration_hub;  // External system connectors
    pub mod observability;    // OpenTelemetry tracing and Prometheus metrics
    pub mod rule_distributor; // Rule pack management and distribution
    pub mod web_api;          // REST API for GUI frontend
}
Database Layer
Connection Pool: sqlx::PgPool with configurable min/max connections
pub struct SecurityCenterDatabase {
pool: sqlx::PgPool,
metrics: DatabaseMetrics,
}
impl SecurityCenterDatabase {
pub async fn new(config: DatabaseConfig) -> Result<Self> {
let pool = sqlx::postgres::PgPoolOptions::new()
    .max_connections(config.max_connections)
    .min_connections(config.min_connections)
    .acquire_timeout(Duration::from_secs(config.connection_timeout))
    .idle_timeout(Duration::from_secs(config.idle_timeout))
    .max_lifetime(Duration::from_secs(config.max_lifetime))
    .connect(&config.url)
    .await?;
// Run migrations
sqlx::migrate!("./migrations").run(&pool).await?;
Ok(Self {
pool,
metrics: DatabaseMetrics::new(),
})
}
}
Database Schema:
-- Agent registration and management
CREATE TABLE agents (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
hostname VARCHAR(255) NOT NULL,
ip_address INET,
certificate_fingerprint VARCHAR(128) NOT NULL UNIQUE,
first_seen TIMESTAMPTZ NOT NULL DEFAULT NOW(),
last_seen TIMESTAMPTZ NOT NULL DEFAULT NOW(),
version VARCHAR(50) NOT NULL,
status VARCHAR(20) NOT NULL CHECK (status IN ('active', 'inactive', 'error')),
metadata JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Aggregated alerts from all agents
CREATE TABLE aggregated_alerts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
agent_id UUID NOT NULL REFERENCES agents(id) ON DELETE CASCADE,
rule_id VARCHAR(100) NOT NULL,
rule_name VARCHAR(255) NOT NULL,
severity VARCHAR(20) NOT NULL CHECK (severity IN ('low', 'medium', 'high', 'critical')),
timestamp TIMESTAMPTZ NOT NULL,
hostname VARCHAR(255) NOT NULL,
process_data JSONB NOT NULL,
metadata JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Rule pack management
CREATE TABLE rule_packs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
version VARCHAR(50) NOT NULL,
description TEXT,
author VARCHAR(255),
signature TEXT NOT NULL,
content BYTEA NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
deployed_at TIMESTAMPTZ,
UNIQUE(name, version)
);
Agent Registration and Authentication
Certificate-Based Authentication
Agent Registration Flow:
- Agent generates client certificate during first startup
- Agent connects to Security Center with certificate
- Security Center validates certificate and registers agent
- Ongoing communication uses established mTLS session
pub struct AgentRegistry {
db: SecurityCenterDatabase,
ca_cert: X509Certificate,
agent_certs: Arc<Mutex<HashMap<String, X509Certificate>>>,
}
impl AgentRegistry {
pub async fn register_agent(&self, cert: &X509Certificate) -> Result<AgentInfo> {
let fingerprint = cert.fingerprint()?;
// Validate certificate against CA
self.validate_certificate(cert)?;
// Extract agent information from certificate
let hostname = self.extract_hostname(cert)?;
let version = self.extract_version(cert)?;
// Register agent in database
let agent = AgentInfo {
id: Uuid::new_v4(),
hostname,
certificate_fingerprint: fingerprint,
version,
status: AgentStatus::Active,
first_seen: Utc::now(),
last_seen: Utc::now(),
};
self.db.insert_agent(&agent).await?;
self.agent_certs.lock().await.insert(fingerprint, cert.clone());
Ok(agent)
}
pub async fn authenticate_agent(&self, fingerprint: &str) -> Result<AgentInfo> {
// Check if agent is registered and active
let agent = self.db.get_agent_by_fingerprint(fingerprint).await?;
if agent.status != AgentStatus::Active {
return Err(AuthenticationError::AgentInactive);
}
// Update last seen timestamp
self.db.update_agent_last_seen(agent.id, Utc::now()).await?;
Ok(agent)
}
}
Enhanced Agent Capabilities
Uplink Communication: Secure connection to Security Center with fallback to standalone operation.
pub struct EnhancedDaemoneyeAgent {
    base_agent: DaemoneyeAgent,
    security_center_client: Option<SecurityCenterClient>,
    uplink_config: UplinkConfig,
}
impl EnhancedDaemoneyeAgent {
    pub async fn new(config: AgentConfig) -> Result<Self> {
        // Type names adjusted: `daemoneye-agent` is not a valid Rust identifier
        let base_agent = DaemoneyeAgent::new(config.clone()).await?;
let security_center_client = if config.uplink.enabled {
Some(SecurityCenterClient::new(&config.uplink).await?)
} else {
None
};
Ok(Self {
base_agent,
security_center_client,
uplink_config: config.uplink,
})
}
pub async fn start(&mut self) -> Result<()> {
// Start base agent
self.base_agent.start().await?;
// Connect to Security Center if configured
if let Some(client) = &mut self.security_center_client {
client.connect().await?;
self.start_uplink_communication().await?;
}
Ok(())
}
async fn start_uplink_communication(&self) -> Result<()> {
    // Clone a handle for the background task; a borrowed client cannot be
    // moved into a detached tokio task (sketch assumes the client is cheaply cloneable)
    let client = self.security_center_client.as_ref().unwrap().clone();
    // Start periodic heartbeat
    let heartbeat_interval = Duration::from_secs(30);
    let mut interval = tokio::time::interval(heartbeat_interval);
    tokio::spawn(async move {
loop {
interval.tick().await;
if let Err(e) = client.send_heartbeat().await {
tracing::warn!("Heartbeat failed: {}", e);
}
}
});
// Start alert forwarding
self.start_alert_forwarding().await?;
Ok(())
}
}
Curated Rule Packs
Rule Pack Structure
YAML-based Rule Packs with cryptographic signatures:
# rule-pack-malware-ttps.yaml
metadata:
name: Malware TTPs
version: 1.2.0
description: Common malware tactics, techniques, and procedures
author: DaemonEye Security Team
signature: ed25519:base64-signature
rules:
- id: process-hollowing-detection
name: Process Hollowing Detection
description: Detects potential process hollowing attacks
sql: |
SELECT * FROM process_snapshots
WHERE executable_path != mapped_image_path
AND parent_pid IN (SELECT pid FROM process_snapshots WHERE name = 'explorer.exe')
severity: high
tags: [process-hollowing, malware, defense-evasion]
Rule Pack Validation
Cryptographic Signatures: Ed25519 signatures for rule pack integrity.
pub struct RulePackValidator {
public_key: ed25519_dalek::PublicKey,
}
impl RulePackValidator {
pub fn validate_rule_pack(&self, pack: &RulePack) -> Result<ValidationResult> {
// Verify cryptographic signature
let signature = ed25519_dalek::Signature::from_bytes(&pack.signature)?;
let message = serde_json::to_vec(&pack.metadata)?;
self.public_key.verify_strict(&message, &signature)?;
// Validate SQL syntax for all rules
for rule in &pack.rules {
self.validate_rule_sql(&rule.sql)?;
}
// Check for rule ID conflicts
self.check_rule_conflicts(&pack.rules)?;
Ok(ValidationResult::Valid)
}
fn validate_rule_sql(&self, sql: &str) -> Result<()> {
let validator = SqlValidator::new();
validator.validate_query(sql)?;
Ok(())
}
}
Rule Distribution
Automatic Distribution: Agents automatically download and apply rule packs.
pub struct RuleDistributor {
db: SecurityCenterDatabase,
rule_pack_storage: RulePackStorage,
distribution_scheduler: DistributionScheduler,
}
impl RuleDistributor {
pub async fn deploy_rule_pack(&self, pack: RulePack) -> Result<DeploymentResult> {
// Validate rule pack
let validator = RulePackValidator::new();
validator.validate_rule_pack(&pack)?;
// Store rule pack
let pack_id = self.rule_pack_storage.store(&pack).await?;
// Schedule distribution to agents
self.distribution_scheduler.schedule_distribution(pack_id).await?;
Ok(DeploymentResult::Success)
}
pub async fn distribute_to_agent(&self, agent_id: Uuid, pack_id: Uuid) -> Result<()> {
let pack = self.rule_pack_storage.get(pack_id).await?;
let agent = self.db.get_agent(agent_id).await?;
// Send rule pack to agent
let client = SecurityCenterClient::for_agent(&agent)?;
client.send_rule_pack(&pack).await?;
// Update agent rule assignments
self.db.assign_rule_pack(agent_id, pack_id).await?;
Ok(())
}
}
Enhanced Output Connectors
Splunk HEC Integration
Splunk HTTP Event Collector integration with authentication and batching.
pub struct SplunkHecConnector {
endpoint: Url,
token: SecretString,
index: Option<String>,
source_type: String,
client: reqwest::Client,
batch_size: usize,
batch_timeout: Duration,
}
impl SplunkHecConnector {
pub async fn send_event(&self, event: &ProcessAlert) -> Result<(), ConnectorError> {
let hec_event = HecEvent {
time: event.timestamp.timestamp(),
host: event.hostname.clone(),
source: "daemoneye",
sourcetype: &self.source_type,
index: self.index.as_deref(),
event: serde_json::to_value(event)?,
};
let response = self.client
.post(&self.endpoint)
.header("Authorization", format!("Splunk {}", self.token.expose_secret()))
.json(&hec_event)
.send()
.await?;
response.error_for_status()?;
Ok(())
}
pub async fn send_batch(&self, events: &[ProcessAlert]) -> Result<(), ConnectorError> {
let hec_events: Vec<HecEvent> = events.iter()
.map(|event| self.convert_to_hec_event(event))
.collect::<Result<Vec<_>, _>>()?;
let response = self.client
.post(&self.endpoint)
.header("Authorization", format!("Splunk {}", self.token.expose_secret()))
.json(&hec_events)
.send()
.await?;
response.error_for_status()?;
Ok(())
}
}
Elasticsearch Integration
Elasticsearch bulk indexing with index pattern management.
pub struct ElasticsearchConnector {
client: elasticsearch::Elasticsearch,
index_pattern: String,
pipeline: Option<String>,
batch_size: usize,
}
impl ElasticsearchConnector {
pub async fn bulk_index(&self, events: &[ProcessAlert]) -> Result<(), ConnectorError> {
let mut body = Vec::new();
for event in events {
let index_name = self.resolve_index_name(&event.timestamp);
let action = json!({
"index": {
"_index": index_name,
"_type": "_doc"
}
});
body.push(action);
body.push(serde_json::to_value(event)?);
}
let response = self.client
.bulk(BulkParts::None)
.body(body)
.send()
.await?;
self.handle_bulk_response(response).await
}
fn resolve_index_name(&self, timestamp: &DateTime<Utc>) -> String {
self.index_pattern
.replace("{YYYY}", ×tamp.format("%Y").to_string())
.replace("{MM}", ×tamp.format("%m").to_string())
.replace("{DD}", ×tamp.format("%d").to_string())
}
}
Kafka Integration
Kafka high-throughput message streaming with partitioning.
pub struct KafkaConnector {
producer: FutureProducer,
topic: String,
partition_strategy: PartitionStrategy,
}
impl KafkaConnector {
pub async fn send_event(&self, event: &ProcessAlert) -> Result<(), ConnectorError> {
let key = self.generate_partition_key(event);
let payload = serde_json::to_vec(event)?;
let record = FutureRecord::to(&self.topic)
.key(&key)
.payload(&payload)
.partition(self.calculate_partition(&key));
self.producer.send(record, Duration::from_secs(5)).await?;
Ok(())
}
fn generate_partition_key(&self, event: &ProcessAlert) -> String {
match self.partition_strategy {
PartitionStrategy::ByHostname => event.hostname.clone(),
PartitionStrategy::ByRuleId => event.rule_id.clone(),
PartitionStrategy::BySeverity => event.severity.to_string(),
PartitionStrategy::RoundRobin => Uuid::new_v4().to_string(),
}
}
}
Export Format Implementations
CEF (Common Event Format)
CEF Format for SIEM compatibility.
pub struct CefFormatter;
impl CefFormatter {
pub fn format_process_alert(alert: &ProcessAlert) -> String {
format!(
"CEF:0|DaemonEye|DaemonEye|1.0|{}|{}|{}|{}",
alert.rule_id,
alert.rule_name,
Self::map_severity(&alert.severity),
Self::build_extensions(alert)
)
}
fn build_extensions(alert: &ProcessAlert) -> String {
format!(
"rt={} src={} suser={} sproc={} cs1Label=Command Line cs1={} cs2Label=Parent Process cs2={}",
alert.timestamp.timestamp_millis(),
alert.hostname,
alert.process.user.unwrap_or_default(),
alert.process.name,
alert.process.command_line.unwrap_or_default(),
alert.process.parent_name.unwrap_or_default()
)
}
fn map_severity(severity: &AlertSeverity) -> u8 {
match severity {
AlertSeverity::Low => 3,
AlertSeverity::Medium => 5,
AlertSeverity::High => 7,
AlertSeverity::Critical => 10,
}
}
}
STIX 2.1 Objects
STIX 2.1 structured threat information export.
pub struct StixExporter;
impl StixExporter {
pub fn create_process_object(process: &ProcessSnapshot) -> StixProcess {
StixProcess {
type_: "process".to_string(),
spec_version: "2.1".to_string(),
id: format!("process--{}", Uuid::new_v4()),
created: process.timestamp,
modified: process.timestamp,
pid: process.pid,
name: process.name.clone(),
command_line: process.command_line.clone(),
parent_ref: process
.parent_pid
.map(|ppid| format!("process--{}", Self::get_parent_uuid(ppid))),
binary_ref: Some(format!(
"file--{}",
Self::create_file_object(&process.executable_path).id
)),
}
}
pub fn create_indicator_object(alert: &ProcessAlert) -> StixIndicator {
StixIndicator {
type_: "indicator".to_string(),
spec_version: "2.1".to_string(),
id: format!("indicator--{}", Uuid::new_v4()),
created: alert.timestamp,
modified: alert.timestamp,
pattern: Self::build_stix_pattern(alert),
pattern_type: "stix".to_string(),
pattern_version: "2.1".to_string(),
valid_from: alert.timestamp,
labels: vec![alert.severity.to_string()],
confidence: Self::map_confidence(alert),
}
}
}
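The build_stix_pattern helper is not shown above; a sketch that renders a STIX 2.1 comparison expression from the alert's process fields (quoting/escaping of values is omitted for brevity):
fn build_stix_pattern(alert: &ProcessAlert) -> String {
    // A STIX pattern is a bracketed boolean expression over object paths
    format!(
        "[process:name = '{}' AND process:command_line = '{}']",
        alert.process.name,
        alert.process.command_line.clone().unwrap_or_default()
    )
}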
Web GUI Frontend
Technology Stack
- Frontend: React with TypeScript for type safety
- State Management: React Query for server state management
- UI Framework: Tailwind CSS with shadcn/ui components
- Charts: Recharts for data visualization
- Authentication: JWT tokens with automatic refresh
Core Features
Fleet Dashboard: Real-time view of all connected agents
interface FleetDashboard {
agents: AgentStatus[];
totalAlerts: number;
activeRules: number;
systemHealth: HealthStatus;
}
interface AgentStatus {
id: string;
hostname: string;
status: 'active' | 'inactive' | 'error';
lastSeen: Date;
alertCount: number;
version: string;
}
Alert Management: Filtering, sorting, and export of alerts
interface AlertManagement {
alerts: Alert[];
filters: AlertFilters;
sortBy: SortOption;
exportFormat: ExportFormat;
}
interface AlertFilters {
severity: AlertSeverity[];
ruleId: string[];
hostname: string[];
dateRange: DateRange;
}
Rule Management: Visual rule editor and deployment interface
interface RuleManagement {
rules: DetectionRule[];
rulePacks: RulePack[];
editor: RuleEditor;
deployment: DeploymentStatus;
}
interface RuleEditor {
sql: string;
validation: ValidationResult;
testResults: TestResult[];
}
Deployment Patterns
Pattern 1: Direct Agent-to-SIEM
Agents send alerts directly to configured SIEM systems without Security Center.
# Agent configuration for direct SIEM integration
alerting:
routing_strategy: direct
sinks:
- type: splunk_hec
enabled: true
endpoint: https://splunk.example.com:8088/services/collector
token: ${SPLUNK_HEC_TOKEN}
Pattern 2: Centralized Proxy
All agents route through Security Center for centralized management.
# Agent configuration for centralized proxy
alerting:
routing_strategy: proxy
security_center:
enabled: true
endpoint: https://security-center.example.com:8443
certificate_path: /etc/daemoneye/agent.crt
key_path: /etc/daemoneye/agent.key
Pattern 3: Hybrid (Recommended)
Agents send to both Security Center and direct SIEM systems.
# Agent configuration for hybrid routing
alerting:
routing_strategy: hybrid
security_center:
enabled: true
endpoint: https://security-center.example.com:8443
sinks:
- type: splunk_hec
enabled: true
endpoint: https://splunk.example.com:8088/services/collector
Container and Kubernetes Support
Docker Images
Multi-stage builds for optimized container images.
# Build stage
FROM rust:1.85 as builder
WORKDIR /app
COPY . .
RUN cargo build --release
# Runtime stage
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/daemoneye-agent /usr/local/bin/
COPY --from=builder /app/target/release/procmond /usr/local/bin/
COPY --from=builder /app/target/release/daemoneye-cli /usr/local/bin/
# Create non-root user
RUN useradd -r -s /bin/false daemoneye
USER daemoneye
ENTRYPOINT ["daemoneye-agent"]
Kubernetes Manifests
DaemonSet for agent deployment across all nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemoneye-agent
namespace: security
spec:
selector:
matchLabels:
app: daemoneye-agent
template:
metadata:
labels:
app: daemoneye-agent
spec:
serviceAccountName: daemoneye-agent
hostPID: true
hostNetwork: true
containers:
- name: procmond
image: daemoneye/procmond:latest
securityContext:
privileged: true
capabilities:
add: [SYS_PTRACE]
volumeMounts:
- name: proc
mountPath: /host/proc
readOnly: true
- name: data
mountPath: /var/lib/daemoneye
- name: daemoneye-agent
image: daemoneye/daemoneye-agent:latest
securityContext:
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- name: data
mountPath: /var/lib/daemoneye
- name: config
mountPath: /etc/daemoneye
volumes:
- name: proc
hostPath:
path: /proc
- name: data
hostPath:
path: /var/lib/daemoneye
- name: config
configMap:
name: daemoneye-config
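Assuming the manifest above is saved as daemoneye-daemonset.yaml (the filename is illustrative), deploy it and verify the rollout with:
kubectl apply -f daemoneye-daemonset.yaml
kubectl -n security get pods -l app=daemoneye-agent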
Performance and Scalability
Security Center Performance
Target Metrics:
- Agents per Security Center: 1,000+ agents
- Alert Throughput: 10,000+ alerts per minute
- Query Latency: <100ms for dashboard queries
- Data Retention: Configurable retention policies
Optimization Strategies
Connection Pooling: Efficient database connection management
pub struct ConnectionPoolManager {
pool: sqlx::PgPool,
metrics: PoolMetrics,
}
impl ConnectionPoolManager {
pub async fn get_connection(&self) -> Result<PooledConnection> {
let start = Instant::now();
let conn = self.pool.acquire().await?;
self.metrics.connection_acquired.record(start.elapsed());
Ok(PooledConnection::new(conn))
}
}
Batch Processing: Efficient alert processing and delivery
pub struct BatchProcessor {
batch_size: usize,
batch_timeout: Duration,
processor: Arc<dyn AlertProcessor>,
}
impl BatchProcessor {
pub async fn process_alerts(&self, alerts: Vec<Alert>) -> Result<()> {
let batches = alerts.chunks(self.batch_size);
for batch in batches {
self.processor.process_batch(batch).await?;
}
Ok(())
}
}
The Business Tier Features provide professional-grade capabilities for small to medium teams while maintaining DaemonEye's core security principles and performance characteristics.
Enterprise Tier Features
This document describes the Enterprise tier features of DaemonEye, including kernel monitoring, network event monitoring, and federated security center architecture.
Overview
The Enterprise tier extends DaemonEye with advanced monitoring capabilities and enterprise-grade features:
- Kernel Monitoring Layer: eBPF, ETW, and EndpointSecurity integration
- Network Event Monitor: Real-time network traffic analysis
- Federated Security Center: Multi-site security center architecture
- STIX/TAXII Integration: Threat intelligence sharing
- Advanced Analytics: Machine learning and behavioral analysis
Kernel Monitoring Layer
Linux eBPF Integration
DaemonEye uses eBPF (Extended Berkeley Packet Filter) for low-level system monitoring:
use aya::{
    programs::{Xdp, XdpFlags},
    Bpf,
};
pub struct EBPFMonitor {
    // The loaded BPF object owns all programs and maps; the XDP program
    // is looked up by name when needed rather than stored separately
    // (storing the &mut Xdp borrow alongside `bpf` would not compile).
    bpf: Bpf,
}
impl EBPFMonitor {
    pub async fn new() -> Result<Self, MonitorError> {
        let mut bpf = Bpf::load_file("monitor.o")?;
        // Look up the XDP program by name, load it into the kernel,
        // and attach it to the network interface.
        let program: &mut Xdp = bpf.program_mut("monitor").unwrap().try_into()?;
        program.load()?;
        program.attach("eth0", XdpFlags::default())?;
        Ok(Self { bpf })
    }
}
Windows ETW Integration
Event Tracing for Windows (ETW) provides comprehensive system monitoring:
use windows::{
    core::{GUID, PCWSTR},
    Win32::System::Diagnostics::Etw::*,
};
pub struct ETWMonitor {
    session_handle: TRACEHANDLE,
    trace_properties: EVENT_TRACE_PROPERTIES,
}
impl ETWMonitor {
    pub fn new() -> Result<Self, MonitorError> {
        let mut trace_properties = EVENT_TRACE_PROPERTIES::default();
        trace_properties.Wnode.BufferSize = std::mem::size_of::<EVENT_TRACE_PROPERTIES>() as u32;
        trace_properties.Wnode.Guid = GUID::from("12345678-1234-1234-1234-123456789012");
        trace_properties.Wnode.ClientContext = 1;
        trace_properties.Wnode.Flags = WNODE_FLAG_TRACED_GUID;
        trace_properties.LogFileMode = EVENT_TRACE_REAL_TIME_MODE;
        trace_properties.LoggerNameOffset = std::mem::size_of::<EVENT_TRACE_PROPERTIES>() as u32;
        trace_properties.LogFileNameOffset = 0;
        trace_properties.BufferSize = 64;
        trace_properties.MinimumBuffers = 2;
        trace_properties.MaximumBuffers = 2;
        trace_properties.FlushTimer = 1;
        trace_properties.EnableFlags = EVENT_TRACE_FLAG_PROCESS;
        // The session name must be NUL-terminated UTF-16; casting a Rust
        // &str pointer to *const u16 would pass malformed data to Win32.
        let session_name: Vec<u16> = "DaemonEye\0".encode_utf16().collect();
        let mut session_handle = TRACEHANDLE::default();
        unsafe {
            StartTraceW(
                &mut session_handle,
                PCWSTR::from_raw(session_name.as_ptr()),
                &mut trace_properties,
            )
        }?;
        Ok(Self {
            session_handle,
            trace_properties,
        })
    }
}
macOS EndpointSecurity Integration
macOS EndpointSecurity framework provides real-time security event monitoring:
use endpoint_sec::{
    Client, ClientBuilder, Event, EventType, Process,
};
use futures::StreamExt; // needed for `stream.next().await` below
pub struct EndpointSecurityMonitor {
client: Client,
}
impl EndpointSecurityMonitor {
pub async fn new() -> Result<Self, MonitorError> {
let client = ClientBuilder::new()
.name("com.daemoneye.monitor")
.build()
.await?;
Ok(Self { client })
}
pub async fn start_monitoring(&self) -> Result<(), MonitorError> {
let mut stream = self.client.subscribe(&[
EventType::NotifyExec,
EventType::NotifyFork,
EventType::NotifyExit,
EventType::NotifySignal,
]).await?;
while let Some(event) = stream.next().await {
self.handle_event(event).await?;
}
Ok(())
}
}
Network Event Monitor
The Network Event Monitor provides real-time network traffic analysis:
use pcap::{Active, Capture, Device};
pub struct NetworkMonitor {
    // An activated capture handle; packets are read from it in a loop.
    capture: Capture<Active>,
}
impl NetworkMonitor {
    pub fn new(interface: &str) -> Result<Self, MonitorError> {
        // Device::list() enumerates all capture devices; find the one
        // matching the requested interface name.
        let device = Device::list()?
            .into_iter()
            .find(|d| d.name == interface)
            .ok_or(MonitorError::DeviceNotFound)?;
        let capture = Capture::from_device(device)?
            .promisc(true)
            .buffer_size(65536)
            .open()?;
        Ok(Self { capture })
    }
    pub async fn start_capture(&mut self) -> Result<(), MonitorError> {
        // next_packet() blocks until a packet arrives (pcap >= 0.10).
        while let Ok(packet) = self.capture.next_packet() {
            self.process_packet(packet).await?;
        }
        Ok(())
    }
}
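A BPF capture filter can be applied after the device is opened, which keeps packet volume manageable. Here is a minimal sketch using the pcap crate's filter API; the set_filter method name and the error mapping are illustrative, not part of the shipped code:
impl NetworkMonitor {
    /// Apply a BPF filter expression, e.g. the `capture_filter` value
    /// from the enterprise configuration ("tcp port 80 or tcp port 443").
    pub fn set_filter(&mut self, expr: &str) -> Result<(), MonitorError> {
        // `true` asks libpcap to optimize the compiled filter program.
        self.capture.filter(expr, true)?;
        Ok(())
    }
}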
Federated Security Center Architecture
The Federated Security Center enables multi-site security center deployment:
pub struct FederatedSecurityCenter {
primary_center: SecurityCenter,
regional_centers: Vec<RegionalSecurityCenter>,
federation_config: FederationConfig,
}
pub struct FederationConfig {
pub primary_endpoint: String,
pub regional_endpoints: Vec<String>,
pub sync_interval: Duration,
pub conflict_resolution: ConflictResolution,
}
pub enum ConflictResolution {
PrimaryWins,
TimestampWins,
ManualReview,
}
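How these variants behave during a sync can be sketched as follows; the SyncRecord type and resolve helper are illustrative, not the shipped API:
use std::time::SystemTime;
/// Illustrative record shape for federated sync.
pub struct SyncRecord {
    pub updated_at: SystemTime,
    pub from_primary: bool,
}
impl ConflictResolution {
    /// Decide whether `incoming` should replace `existing`; `None`
    /// means the conflict is queued for manual review.
    pub fn resolve(&self, existing: &SyncRecord, incoming: &SyncRecord) -> Option<bool> {
        match self {
            // The primary center's copy always wins.
            ConflictResolution::PrimaryWins => Some(incoming.from_primary),
            // The most recently updated copy wins.
            ConflictResolution::TimestampWins => Some(incoming.updated_at > existing.updated_at),
            // Never resolve automatically.
            ConflictResolution::ManualReview => None,
        }
    }
}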
STIX/TAXII Integration
DaemonEye integrates with STIX/TAXII for threat intelligence sharing:
use stix::{
objects::{Indicator, Malware, ThreatActor},
taxii::client::TaxiiClient,
};
pub struct STIXTAXIIIntegration {
client: TaxiiClient,
collection_id: String,
}
impl STIXTAXIIIntegration {
pub async fn new(endpoint: &str, collection_id: &str) -> Result<Self, IntegrationError> {
let client = TaxiiClient::new(endpoint)?;
Ok(Self {
client,
collection_id: collection_id.to_string(),
})
}
pub async fn fetch_indicators(&self) -> Result<Vec<Indicator>, IntegrationError> {
let objects = self.client
.get_objects(&self.collection_id, "indicator")
.await?;
let indicators: Vec<Indicator> = objects
.into_iter()
.filter_map(|obj| obj.try_into().ok())
.collect();
Ok(indicators)
}
}
Advanced Analytics
Enterprise tier includes machine learning and behavioral analysis:
pub struct BehavioralAnalyzer {
models: Vec<BehavioralModel>,
anomaly_threshold: f64,
}
pub struct BehavioralModel {
name: String,
features: Vec<String>,
model: Box<dyn Model>,
}
impl BehavioralAnalyzer {
pub fn analyze_process(&self, process: &ProcessInfo) -> Result<AnomalyScore, AnalysisError> {
let features = self.extract_features(process);
let mut scores = Vec::new();
for model in &self.models {
let score = model.model.predict(&features)?;
scores.push(score);
}
let anomaly_score = self.aggregate_scores(scores);
Ok(anomaly_score)
}
}
Deployment Considerations
Resource Requirements
Enterprise tier features require additional resources:
- CPU: 2+ cores for kernel monitoring
- Memory: 4+ GB for network monitoring and analytics
- Storage: 100+ GB for event storage and analytics data
- Network: High-bandwidth for network monitoring
Security Considerations
- Kernel monitoring requires elevated privileges
- Network monitoring may capture sensitive data
- Federated architecture requires secure communication
- STIX/TAXII integration requires secure authentication
Performance Impact
- Kernel monitoring: 2-5% CPU overhead
- Network monitoring: 5-10% CPU overhead
- Analytics processing: 10-20% CPU overhead
- Storage requirements: 10x increase for event data
Configuration
Enterprise tier configuration extends the base configuration:
enterprise:
kernel_monitoring:
enable_ebpf: true
ebpf_program_path: /etc/daemoneye/ebpf/monitor.o
enable_etw: true
etw_session_name: DaemonEye
enable_endpoint_security: true
es_client_name: com.daemoneye.monitor
network_monitoring:
enable_packet_capture: true
capture_interface: eth0
capture_filter: tcp port 80 or tcp port 443
max_packet_size: 1500
buffer_size_mb: 100
federation:
enable_federation: true
primary_endpoint: https://primary.daemoneye.com
regional_endpoints:
- https://region1.daemoneye.com
- https://region2.daemoneye.com
sync_interval: 300
conflict_resolution: primary_wins
stix_taxii:
enable_integration: true
taxii_endpoint: https://taxii.example.com
collection_id: daemoneye-indicators
sync_interval: 3600
analytics:
enable_behavioral_analysis: true
anomaly_threshold: 0.8
model_update_interval: 86400
enable_machine_learning: true
Troubleshooting
Common Issues
Kernel Monitoring Failures:
- Check kernel version compatibility
- Verify eBPF/ETW/EndpointSecurity support
- Check privilege requirements
- Review kernel logs for errors
Network Monitoring Issues:
- Verify network interface permissions
- Check packet capture filters
- Monitor buffer usage
- Review network performance impact
Federation Sync Issues:
- Check network connectivity
- Verify authentication credentials
- Review sync logs
- Check conflict resolution settings
Analytics Performance:
- Monitor CPU and memory usage
- Check model update frequency
- Review feature extraction performance
- Optimize anomaly detection thresholds
This document provides comprehensive information about Enterprise tier features. For additional help, consult the troubleshooting section or contact support.
Query Pipeline and SQL Dialect
Overview
DaemonEye implements a sophisticated SQL-to-IPC Translation pipeline that allows operators to write complex SQL detection rules while maintaining strict security boundaries and optimal performance. This document explains how the query pipeline works and the limitations of the supported SQL dialect.
Query Pipeline Architecture
DaemonEye's query processing follows a two-phase approach:
flowchart LR
    subgraph "Phase 1: SQL-to-IPC Translation"
        SQL[SQL Detection Rule] --> Parser[sqlparser AST]
        Parser --> Extractor[Collection Requirements Extractor]
        Extractor --> IPC[Protobuf IPC Tasks]
    end
    subgraph "Phase 2: Data Collection & Analysis"
        IPC --> Procmond[procmond Collection]
        Procmond --> DB[(redb Event Store)]
        DB --> SQLExec[SQL Rule Execution]
        SQLExec --> Alerts[Alert Generation]
    end
    SQL -.->|Original Rule| SQLExec
Phase 1: SQL-to-IPC Translation
- SQL Parsing: User-written SQL detection rules are parsed using the sqlparser crate
- AST Analysis: The Abstract Syntax Tree is analyzed to extract collection requirements
- Task Generation: Simple protobuf collection tasks are generated for procmond
- Overcollection Strategy: procmond may collect more data than strictly needed to ensure comprehensive detection
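To make Phase 1 concrete, the following is a minimal sketch of requirement extraction built on the sqlparser crate (assuming a recent release where Query::body is boxed). It only pulls table names out of the FROM clause; the extract_tables helper is illustrative, and the real extractor also derives filters and projections:
use sqlparser::ast::{SetExpr, Statement, TableFactor};
use sqlparser::dialect::SQLiteDialect;
use sqlparser::parser::{Parser, ParserError};
/// Parse a detection rule and collect the tables it reads from.
fn extract_tables(sql: &str) -> Result<Vec<String>, ParserError> {
    let statements = Parser::parse_sql(&SQLiteDialect {}, sql)?;
    let mut tables = Vec::new();
    for statement in &statements {
        if let Statement::Query(query) = statement {
            if let SetExpr::Select(select) = query.body.as_ref() {
                for table_with_joins in &select.from {
                    if let TableFactor::Table { name, .. } = &table_with_joins.relation {
                        tables.push(name.to_string());
                    }
                }
            }
        }
    }
    Ok(tables)
}
A rule such as SELECT * FROM processes WHERE name = 'x' yields ["processes"], from which a process-collection task would be generated for procmond.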
Phase 2: Data Collection & Analysis
- Process Collection: procmond executes the protobuf tasks to collect process data
- Data Storage: Collected data is stored in the redb event store
- SQL Execution: The original SQL rule is executed against the collected data
- Alert Generation: Detection results trigger alert generation and delivery
Supported SQL Dialect
DaemonEye supports a restricted SQL dialect optimized for process monitoring and security. The dialect is based on SQLite syntax with specific limitations and extensions.
Allowed SQL Constructs
Basic Queries
-- Simple SELECT queries
SELECT * FROM processes WHERE name = 'suspicious-process';
-- Aggregations
SELECT COUNT(*) as process_count, name
FROM processes
GROUP BY name
HAVING COUNT(*) > 10;
-- Joins (when applicable)
SELECT p.name, p.pid, s.start_time
FROM processes p
JOIN scans s ON p.scan_id = s.id;
Supported Functions
String Functions (useful for process data analysis):
-- String length analysis
SELECT name, LENGTH(command_line) as cmd_length
FROM processes
WHERE LENGTH(command_line) > 100;
-- Substring extraction
SELECT name, SUBSTR(executable_path, 1, 10) as path_prefix
FROM processes
WHERE executable_path IS NOT NULL;
-- Pattern matching
SELECT * FROM processes
WHERE name LIKE '%suspicious%'
OR executable_path LIKE '/tmp/%';
-- String search
SELECT * FROM processes
WHERE INSTR(command_line, 'malicious') > 0;
Encoding Functions (useful for hash analysis):
-- Hexadecimal encoding/decoding
SELECT name, HEX(executable_hash) as hash_hex
FROM processes
WHERE executable_hash IS NOT NULL;
-- Binary data analysis
SELECT name, UNHEX(executable_hash) as hash_binary
FROM processes
WHERE LENGTH(executable_hash) = 64; -- SHA-256 length
Mathematical Functions:
-- Numeric analysis
SELECT name, cpu_usage, memory_usage
FROM processes
WHERE cpu_usage > 50.0
OR memory_usage > 1073741824; -- 1GB
Banned SQL Constructs
Security-Critical Functions
-- These functions are banned for security reasons:
-- load_extension() - SQLite extension loading
-- eval() - Code evaluation
-- exec() - Command execution
-- system() - System calls
-- shell() - Shell execution
File System Operations
-- These functions are not applicable to process monitoring:
-- readfile() - File reading
-- writefile() - File writing
-- edit() - File editing
Complex Pattern Matching
-- These functions are complex to translate to IPC tasks:
-- glob() - Glob patterns
-- regexp() - Regular expressions (performance concerns)
-- match() - Pattern matching
Mathematical Functions (Not Applicable)
-- These functions are not useful for process monitoring:
-- abs() - Absolute value
-- random() - Random numbers
-- randomblob() - Random binary data
Formatting Functions (Not Applicable)
-- These functions are not useful for process monitoring:
-- quote() - SQL quoting
-- printf() - String formatting
-- format() - String formatting
-- char() - Character conversion
-- unicode() - Unicode functions
-- soundex() - Soundex algorithm
-- difference() - String difference
Process Data Schema
The `processes` table contains comprehensive process information:
-- Core process information
CREATE TABLE processes (
id INTEGER PRIMARY KEY,
scan_id INTEGER NOT NULL,
collection_time INTEGER NOT NULL,
pid INTEGER NOT NULL,
ppid INTEGER,
name TEXT NOT NULL,
executable_path TEXT,
command_line TEXT,
start_time INTEGER,
cpu_usage REAL,
memory_usage INTEGER,
status TEXT,
executable_hash TEXT, -- SHA-256 hash in hex format
hash_algorithm TEXT, -- Usually 'sha256'
user_id INTEGER,
group_id INTEGER,
accessible BOOLEAN,
file_exists BOOLEAN,
environment_vars TEXT, -- JSON string of environment variables
metadata TEXT, -- JSON string of additional metadata
platform_data TEXT -- JSON string of platform-specific data
);
Example Detection Rules
Basic Process Monitoring
-- Detect processes with suspicious names
SELECT pid, name, executable_path, command_line
FROM processes
WHERE name LIKE '%suspicious%'
OR name LIKE '%malware%'
OR name LIKE '%backdoor%';
Resource Usage Analysis
-- Detect high resource usage processes
SELECT pid, name, cpu_usage, memory_usage, command_line
FROM processes
WHERE cpu_usage > 80.0
OR memory_usage > 2147483648 -- 2GB
ORDER BY memory_usage DESC;
Hash-Based Detection
-- Detect processes with known malicious hashes
SELECT pid, name, executable_path, executable_hash
FROM processes
WHERE executable_hash IN (
'a1b2c3d4e5f6789012345678901234567890abcdef1234567890abcdef1234',
'f1e2d3c4b5a6978012345678901234567890abcdef1234567890abcdef5678'
);
Command Line Analysis
-- Detect suspicious command line patterns
SELECT pid, name, command_line
FROM processes
WHERE command_line LIKE '%nc -l%' -- Netcat listener
OR command_line LIKE '%wget%' -- Download tools
OR command_line LIKE '%curl%' -- Download tools
OR command_line LIKE '%base64%' -- Encoding tools
OR LENGTH(command_line) > 1000; -- Unusually long commands
Environment Variable Analysis
-- Detect processes with suspicious environment variables
SELECT pid, name, environment_vars
FROM processes
WHERE environment_vars LIKE '%SUSPICIOUS_VAR%'
OR environment_vars LIKE '%MALWARE_CONFIG%';
Path-Based Detection
-- Detect processes running from suspicious locations
SELECT pid, name, executable_path
FROM processes
WHERE executable_path LIKE '/tmp/%'
OR executable_path LIKE '/var/tmp/%'
OR executable_path LIKE '/dev/shm/%'
OR executable_path LIKE '%.exe' -- Windows executables on Unix
OR executable_path IS NULL; -- No executable path
Performance Considerations
Query Optimization
- Indexing: Time-based indexes are automatically created for efficient querying (see the illustrative DDL after this list)
- Batch Processing: Large result sets are processed in batches to prevent memory issues
- Query Timeouts: All queries have configurable timeouts to prevent system hangs
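The exact indexes are created automatically, but their shape is roughly the following (illustrative DDL, not the literal migration):
-- Supports the time- and name-based filters used throughout this guide
CREATE INDEX IF NOT EXISTS idx_processes_collection_time ON processes (collection_time);
CREATE INDEX IF NOT EXISTS idx_processes_name ON processes (name);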
Resource Limits
- Memory Usage: Queries are limited to prevent excessive memory consumption
- CPU Usage: Complex queries are throttled to maintain system performance
- Result Size: Large result sets are paginated to prevent memory exhaustion
Security Considerations
SQL Injection Prevention
- AST Validation: All SQL is parsed and validated before execution
- Prepared Statements: All queries use parameterized statements
- Function Whitelist: Only approved functions are allowed
- Sandboxed Execution: Queries run in read-only database connections
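A minimal sketch of the function-whitelist check (the validate_functions helper and its whitelist contents are illustrative; the real validator walks the full AST):
use std::collections::HashSet;
/// Reject a rule if it calls any function outside the approved list.
fn validate_functions(called: &[String]) -> Result<(), String> {
    let allowed: HashSet<&str> = [
        "LENGTH", "SUBSTR", "INSTR", "HEX", "UNHEX",
        "COUNT", "SUM", "AVG", "MAX", "MIN",
    ]
    .into_iter()
    .collect();
    for function in called {
        if !allowed.contains(function.to_uppercase().as_str()) {
            return Err(format!("banned or unknown function: {function}"));
        }
    }
    Ok(())
}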
Data Privacy
- Field Masking: Sensitive fields can be masked in logs and exports
- Command Line Redaction: Command lines can be redacted for privacy
- Access Control: Database access is restricted by component
Best Practices
Writing Effective Detection Rules
- Use Specific Patterns: Avoid overly broad patterns that generate false positives
- Leverage Hash Detection: Use executable hashes for precise malware detection
- Combine Multiple Criteria: Use multiple conditions to reduce false positives (see the example after this list)
- Test Thoroughly: Validate rules against known good and bad processes
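For example, combining several weak signals into one rule produces far fewer false positives than any single condition alone:
-- Multiple weak signals combined into a higher-confidence rule
SELECT pid, name, executable_path, command_line
FROM processes
WHERE name LIKE '%update%'           -- masquerading name
  AND executable_path LIKE '/tmp/%'  -- suspicious location
  AND LENGTH(command_line) > 200;    -- unusually long invocation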
Performance Optimization
- Use Indexes: Leverage time-based and field-based indexes
- Limit Result Sets: Use LIMIT clauses for large queries
- Avoid Complex Joins: Keep queries simple and focused
- Monitor Resource Usage: Watch for queries that consume excessive resources
Security Guidelines
- Validate Input: Always validate user-provided SQL fragments
- Use Parameterized Queries: Never concatenate user input into SQL
- Review Function Usage: Ensure only approved functions are used
- Monitor Query Performance: Watch for queries that might indicate attacks
Troubleshooting
Common Issues
Query Syntax Errors:
- Check SQL syntax against supported dialect
- Ensure all functions are in the allowed list
- Verify table and column names
Performance Issues:
- Add appropriate indexes
- Simplify complex queries
- Use LIMIT clauses for large result sets
Security Violations:
- Review banned function usage
- Check for SQL injection attempts
- Validate input parameters
Debugging Queries
-- Use EXPLAIN to understand query execution
EXPLAIN SELECT * FROM processes WHERE name LIKE '%test%';
-- Check query performance
SELECT COUNT(*) as total_processes FROM processes;
SELECT COUNT(*) as recent_processes FROM processes
WHERE collection_time > (strftime('%s', 'now') - 3600) * 1000;
Future Enhancements
Planned Features
- Advanced Pattern Matching: Support for more complex regex patterns
- Machine Learning Integration: ML-based anomaly detection
- Real-time Streaming: Support for real-time query execution
- Query Optimization: Automatic query optimization and indexing
Extension Points
- Custom Functions: Support for user-defined functions
- External Data Sources: Integration with external threat intelligence
- Advanced Analytics: Statistical analysis and correlation
- Visualization: Query result visualization and dashboards
SQL Dialect Quick Reference
Allowed Functions
String Functions
| Function | Description | Example |
| --- | --- | --- |
| LENGTH(str) | String length | LENGTH(command_line) |
| SUBSTR(str, start, length) | Substring extraction | SUBSTR(executable_path, 1, 10) |
| INSTR(str, substr) | Find substring position | INSTR(command_line, 'malicious') |
| LIKE pattern | Pattern matching | name LIKE '%suspicious%' |
Encoding Functions
| Function | Description | Example |
| --- | --- | --- |
| HEX(data) | Convert to hexadecimal | HEX(executable_hash) |
| UNHEX(hex) | Convert from hexadecimal | UNHEX('deadbeef') |
Mathematical Functions
| Function | Description | Example |
| --- | --- | --- |
| COUNT(*) | Count rows | COUNT(*) as process_count |
| SUM(expr) | Sum values | SUM(memory_usage) |
| AVG(expr) | Average values | AVG(cpu_usage) |
| MAX(expr) | Maximum value | MAX(memory_usage) |
| MIN(expr) | Minimum value | MIN(start_time) |
Banned Functions
Security-Critical (Always Banned)
- load_extension() - SQLite extension loading
- eval() - Code evaluation
- exec() - Command execution
- system() - System calls
- shell() - Shell execution
File System Operations (Not Applicable)
- readfile() - File reading
- writefile() - File writing
- edit() - File editing
Complex Pattern Matching (Performance Concerns)
- glob() - Glob patterns
- regexp() - Regular expressions
- match() - Pattern matching
Mathematical Functions (Not Applicable)
- abs() - Absolute value
- random() - Random numbers
- randomblob() - Random binary data
Formatting Functions (Not Applicable)
- quote() - SQL quoting
- printf() - String formatting
- format() - String formatting
- char() - Character conversion
- unicode() - Unicode functions
- soundex() - Soundex algorithm
- difference() - String difference
Process Data Schema
-- Core process information
CREATE TABLE processes (
id INTEGER PRIMARY KEY,
scan_id INTEGER NOT NULL,
collection_time INTEGER NOT NULL,
pid INTEGER NOT NULL,
ppid INTEGER,
name TEXT NOT NULL,
executable_path TEXT,
command_line TEXT,
start_time INTEGER,
cpu_usage REAL,
memory_usage INTEGER,
status TEXT,
executable_hash TEXT, -- SHA-256 hash in hex format
hash_algorithm TEXT, -- Usually 'sha256'
user_id INTEGER,
group_id INTEGER,
accessible BOOLEAN,
file_exists BOOLEAN,
environment_vars TEXT, -- JSON string of environment variables
metadata TEXT, -- JSON string of additional metadata
platform_data TEXT -- JSON string of platform-specific data
);
Common Query Patterns
Basic Detection
-- Find processes by name
SELECT * FROM processes WHERE name = 'suspicious-process';
-- Find processes with pattern matching
SELECT * FROM processes WHERE name LIKE '%malware%';
Resource Analysis
-- High CPU usage
SELECT * FROM processes WHERE cpu_usage > 80.0;
-- High memory usage
SELECT * FROM processes WHERE memory_usage > 2147483648; -- 2GB
Hash-Based Detection
-- Known malicious hashes
SELECT * FROM processes
WHERE executable_hash = 'a1b2c3d4e5f6789012345678901234567890abcdef1234567890abcdef1234';
Command Line Analysis
-- Suspicious command patterns
SELECT * FROM processes
WHERE command_line LIKE '%nc -l%' -- Netcat listener
OR command_line LIKE '%wget%' -- Download tools
OR LENGTH(command_line) > 1000; -- Unusually long commands
Path-Based Detection
-- Suspicious executable locations
SELECT * FROM processes
WHERE executable_path LIKE '/tmp/%'
OR executable_path LIKE '/var/tmp/%'
OR executable_path IS NULL;
Performance Tips
Use Indexes
- Time-based queries: WHERE collection_time > ?
- Process ID queries: WHERE pid = ?
- Name queries: WHERE name = ?
Limit Result Sets
-- Use LIMIT for large queries
SELECT * FROM processes WHERE name LIKE '%test%' LIMIT 100;
Avoid Complex Operations
-- Good: Simple conditions
WHERE name = 'process' AND pid > 1000;
-- Avoid: Complex nested operations
WHERE LENGTH(SUBSTR(command_line, 1, 100)) > 50;
Security Best Practices
Use Parameterized Queries
-- Good: Parameterized
SELECT * FROM processes WHERE name = ?;
-- Bad: String concatenation
SELECT * FROM processes WHERE name = '" + user_input + "';
Validate Input
- Always validate user-provided SQL fragments
- Use only approved functions
- Check for banned function usage
Monitor Performance
- Watch for queries that consume excessive resources
- Use query timeouts
- Monitor memory usage
User Guides
This section contains comprehensive user guides for DaemonEye, covering everything from basic usage to advanced configuration and troubleshooting.
Table of Contents
- Operator Guide
- Configuration Guide
- Quick Start
- Common Tasks
- Troubleshooting
- Advanced Usage
- Best Practices
Operator Guide
The operator guide provides comprehensive information for system administrators and security operators who need to deploy, configure, and maintain DaemonEye in production environments.
Configuration Guide
The configuration guide covers all aspects of DaemonEye configuration, from basic settings to advanced tuning and security hardening.
Quick Start
Installation
Install DaemonEye using your preferred method:
Using Package Managers:
# Ubuntu/Debian
sudo apt install daemoneye
# RHEL/CentOS
sudo yum install daemoneye
# macOS
brew install daemoneye
# Windows
choco install daemoneye
Using Docker:
docker run -d --privileged \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/daemoneye:latest
Using Kubernetes:
kubectl apply -f https://raw.githubusercontent.com/EvilBit-Labs/daemoneye/main/deploy/kubernetes/daemoneye.yaml
Basic Configuration
Create a basic configuration file:
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /var/lib/daemoneye
log_dir: /var/log/daemoneye
database:
path: /var/lib/daemoneye/processes.db
retention_days: 30
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
Starting DaemonEye
Linux (systemd):
sudo systemctl start daemoneye
sudo systemctl enable daemoneye
macOS (launchd):
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
Windows (Service):
Start-Service DaemonEye
Docker:
docker run -d --name daemoneye \
--privileged \
-v /etc/daemoneye:/config:ro \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/daemoneye:latest
Basic Usage
Check Status:
daemoneye-cli health
Query Processes:
daemoneye-cli query "SELECT pid, name, executable_path FROM processes LIMIT 10"
List Alerts:
daemoneye-cli alerts list
View Logs:
daemoneye-cli logs --tail 100
Common Tasks
Process Monitoring
Monitor Specific Processes:
# Monitor processes by name
daemoneye-cli watch processes --filter "name LIKE '%apache%'"
# Monitor processes by CPU usage
daemoneye-cli watch processes --filter "cpu_usage > 10.0"
# Monitor processes by memory usage
daemoneye-cli watch processes --filter "memory_usage > 1000000"
Query Process Information:
# Get all processes
daemoneye-cli query "SELECT * FROM processes"
# Get processes by PID
daemoneye-cli query "SELECT * FROM processes WHERE pid = 1234"
# Get processes by name pattern
daemoneye-cli query "SELECT * FROM processes WHERE name LIKE '%nginx%'"
# Get processes by executable path
daemoneye-cli query "SELECT * FROM processes WHERE executable_path LIKE '%/usr/bin/%'"
Alert Management
Configure Alerting:
# Enable syslog alerts
daemoneye-cli config set alerting.sinks[0].enabled true
daemoneye-cli config set alerting.sinks[0].type syslog
daemoneye-cli config set alerting.sinks[0].facility daemon
# Enable webhook alerts
daemoneye-cli config set alerting.sinks[1].enabled true
daemoneye-cli config set alerting.sinks[1].type webhook
daemoneye-cli config set alerting.sinks[1].url "https://alerts.example.com/webhook"
View Alerts:
# List recent alerts
daemoneye-cli alerts list
# List alerts by severity
daemoneye-cli alerts list --severity high
# List alerts by rule
daemoneye-cli alerts list --rule "suspicious_processes"
# Get alert details
daemoneye-cli alerts show <alert-id>
Rule Management
Create Detection Rules:
# Create a rule file
cat > /etc/daemoneye/rules/suspicious-processes.sql << 'EOF'
-- Detect processes with suspicious names
SELECT
pid,
name,
executable_path,
command_line,
collection_time
FROM processes
WHERE
name IN ('malware.exe', 'backdoor.exe', 'trojan.exe')
OR name LIKE '%suspicious%'
OR executable_path LIKE '%temp%'
ORDER BY collection_time DESC;
EOF
# Validate the rule
daemoneye-cli rules validate /etc/daemoneye/rules/suspicious-processes.sql
# Test the rule
daemoneye-cli rules test suspicious-processes
Manage Rules:
# List all rules
daemoneye-cli rules list
# Enable/disable rules
daemoneye-cli rules enable suspicious-processes
daemoneye-cli rules disable suspicious-processes
# Reload rules
daemoneye-cli rules reload
Configuration Management
View Configuration:
# Show current configuration
daemoneye-cli config show
# Show specific setting
daemoneye-cli config get app.scan_interval_ms
# Show all settings with defaults
daemoneye-cli config show --include-defaults
Update Configuration:
# Set a single value
daemoneye-cli config set app.scan_interval_ms 60000
# Set multiple values
daemoneye-cli config set app.scan_interval_ms 60000 app.batch_size 500
# Update from file
daemoneye-cli config load /path/to/config.yaml
Validate Configuration:
# Validate configuration
daemoneye-cli config validate
# Check configuration syntax
daemoneye-cli config check
Troubleshooting
Common Issues
Service Won't Start:
# Check service status
sudo systemctl status daemoneye
# Check logs
sudo journalctl -u daemoneye -f
# Check configuration
daemoneye-cli config validate
Permission Denied:
# Check file permissions
ls -la /var/lib/daemoneye/
ls -la /var/log/daemoneye/
# Fix permissions
sudo chown -R daemoneye:daemoneye /var/lib/daemoneye
sudo chown -R daemoneye:daemoneye /var/log/daemoneye
Database Issues:
# Check database status
daemoneye-cli database status
# Check database integrity
daemoneye-cli database integrity-check
# Repair database
daemoneye-cli database repair
Performance Issues:
# Check system metrics
daemoneye-cli metrics
# Check resource usage
daemoneye-cli system status
# Optimize configuration
daemoneye-cli config optimize
Debug Mode
Enable Debug Logging:
# Set debug level
daemoneye-cli config set app.log_level debug
# Restart service
sudo systemctl restart daemoneye
# Monitor debug logs
daemoneye-cli logs --level debug --tail 100
Debug Specific Components:
# Debug process collection
daemoneye-cli debug collector
# Debug alert delivery
daemoneye-cli debug alerts
# Debug database operations
daemoneye-cli debug database
Health Checks
System Health:
# Overall health
daemoneye-cli health
# Component health
daemoneye-cli health --component procmond
daemoneye-cli health --component daemoneye-agent
daemoneye-cli health --component database
# Detailed health report
daemoneye-cli health --verbose
Performance Health:
# Performance metrics
daemoneye-cli metrics
# Resource usage
daemoneye-cli system resources
# Performance analysis
daemoneye-cli system analyze
Advanced Usage
Custom Integrations
SIEM Integration:
# Splunk HEC
integrations:
siem:
splunk:
enabled: true
hec_url: https://splunk.example.com:8088/services/collector
hec_token: ${SPLUNK_HEC_TOKEN}
index: daemoneye
# Elasticsearch
integrations:
siem:
elasticsearch:
enabled: true
url: https://elasticsearch.example.com:9200
username: ${ELASTIC_USERNAME}
password: ${ELASTIC_PASSWORD}
index: daemoneye-processes
Export Formats:
# CEF Export
integrations:
export:
cef:
enabled: true
output_file: /var/log/daemoneye/cef.log
cef_version: "1.0"
device_vendor: "DaemonEye"
device_product: "Process Monitor"
# STIX Export
integrations:
export:
stix:
enabled: true
output_file: /var/log/daemoneye/stix.json
stix_version: "2.1"
Performance Tuning
Optimize for High Load:
app:
scan_interval_ms: 60000 # Reduce scan frequency
batch_size: 500 # Smaller batches
max_memory_mb: 256 # Limit memory usage
max_cpu_percent: 3.0 # Limit CPU usage
database:
cache_size: -128000 # 128MB cache
temp_store: MEMORY # Use memory for temp tables
synchronous: NORMAL # Balance safety and performance
Optimize for Low Latency:
app:
scan_interval_ms: 10000 # Increase scan frequency
batch_size: 100 # Smaller batches
max_memory_mb: 512 # More memory for caching
detection:
enable_rule_caching: true
cache_ttl_seconds: 300
max_concurrent_rules: 5
Security Hardening
Enable Security Features:
security:
enable_privilege_dropping: true
drop_to_user: daemoneye
drop_to_group: daemoneye
enable_audit_logging: true
enable_integrity_checking: true
hash_algorithm: blake3
enable_signature_verification: true
Network Security:
security:
network:
enable_tls: true
cert_file: /etc/daemoneye/cert.pem
key_file: /etc/daemoneye/key.pem
ca_file: /etc/daemoneye/ca.pem
verify_peer: true
Best Practices
Deployment
- Start Small: Begin with basic monitoring and gradually add features
- Test Configuration: Always validate configuration before deployment
- Monitor Resources: Keep an eye on CPU and memory usage
- Regular Updates: Keep DaemonEye updated with latest releases
- Backup Data: Regularly backup configuration and data
Configuration
- Use Hierarchical Config: Leverage multiple configuration sources
- Environment Variables: Use environment variables for secrets
- Validation: Always validate configuration changes
- Documentation: Document custom configurations
- Version Control: Keep configuration files in version control
Monitoring
- Set Up Alerting: Configure appropriate alert thresholds
- Monitor Performance: Track system performance metrics
- Log Analysis: Regularly review logs for issues
- Health Checks: Implement automated health monitoring
- Incident Response: Have a plan for handling alerts
Security
- Principle of Least Privilege: Run with minimal required privileges
- Network Security: Use TLS for all network communications
- Access Control: Implement proper authentication and authorization
- Audit Logging: Enable comprehensive audit logging
- Regular Updates: Keep security patches current
This user guide provides comprehensive information for using DaemonEye. For additional help, consult the specific user guides or contact support.
DaemonEye Operator Guide
This guide provides comprehensive instructions for operators managing DaemonEye in production environments. It covers day-to-day operations, troubleshooting, and advanced configuration.
Table of Contents
- System Overview
- Basic Operations
- Process Monitoring
- Alert Management
- Rule Management
- System Health Monitoring
- Configuration Management
- Troubleshooting
- Best Practices
System Overview
Component Status
Check the overall health of your DaemonEye installation:
# Overall system health
daemoneye-cli health
# Component-specific health
daemoneye-cli health --component procmond
daemoneye-cli health --component daemoneye-agent
daemoneye-cli health --component database
Expected Output:
System Health: Healthy
├── procmond: Running (PID: 1234)
├── daemoneye-agent: Running (PID: 1235)
├── database: Connected
└── alerting: All sinks operational
Service Management
Start Services:
# Linux (systemd)
sudo systemctl start daemoneye
# macOS (launchd)
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
# Windows (Service)
sc start "DaemonEye Agent"
Stop Services:
# Linux (systemd)
sudo systemctl stop daemoneye
# macOS (launchd)
sudo launchctl unload /Library/LaunchDaemons/com.daemoneye.agent.plist
# Windows (Service)
sc stop "DaemonEye Agent"
Restart Services:
# Linux (systemd)
sudo systemctl restart daemoneye
# macOS (launchd)
sudo launchctl unload /Library/LaunchDaemons/com.daemoneye.agent.plist
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
# Windows (Service)
sc stop "DaemonEye Agent"
sc start "DaemonEye Agent"
Basic Operations
Querying Process Data
List Recent Processes:
# Last 10 processes
daemoneye-cli query "SELECT pid, name, executable_path, collection_time FROM processes ORDER BY collection_time DESC LIMIT 10"
# Processes by name
daemoneye-cli query "SELECT * FROM processes WHERE name = 'chrome'"
# High CPU processes
daemoneye-cli query "SELECT pid, name, cpu_usage FROM processes WHERE cpu_usage > 50.0 ORDER BY cpu_usage DESC"
Process Tree Analysis:
# Find child processes of a specific parent
daemoneye-cli query "
SELECT
p1.pid as parent_pid,
p1.name as parent_name,
p2.pid as child_pid,
p2.name as child_name
FROM processes p1
JOIN processes p2 ON p1.pid = p2.ppid
WHERE p1.name = 'systemd'
"
# Process hierarchy depth
daemoneye-cli query "
WITH RECURSIVE process_tree AS (
SELECT pid, ppid, name, 0 as depth
FROM processes
WHERE ppid IS NULL
UNION ALL
SELECT p.pid, p.ppid, p.name, pt.depth + 1
FROM processes p
JOIN process_tree pt ON p.ppid = pt.pid
)
SELECT pid, name, depth FROM process_tree ORDER BY depth, pid
"
Suspicious Process Patterns:
# Processes with suspicious names
daemoneye-cli query "
SELECT pid, name, executable_path, command_line
FROM processes
WHERE name IN ('malware.exe', 'backdoor.exe', 'trojan.exe')
OR name LIKE '%suspicious%'
OR executable_path LIKE '%temp%'
"
# Processes with unusual parent-child relationships
daemoneye-cli query "
SELECT
p1.pid as parent_pid,
p1.name as parent_name,
p2.pid as child_pid,
p2.name as child_name
FROM processes p1
JOIN processes p2 ON p1.pid = p2.ppid
WHERE p1.name = 'explorer.exe'
AND p2.name NOT IN ('chrome.exe', 'firefox.exe', 'notepad.exe')
"
Data Export
Export to Different Formats:
# JSON export
daemoneye-cli query "SELECT * FROM processes WHERE cpu_usage > 10.0" --format json > high_cpu_processes.json
# CSV export
daemoneye-cli query "SELECT pid, name, cpu_usage, memory_usage FROM processes" --format csv > process_metrics.csv
# Table format (default)
daemoneye-cli query "SELECT * FROM processes LIMIT 5" --format table
Export with Filters:
# Export processes from last hour
daemoneye-cli query "
SELECT * FROM processes
WHERE collection_time > (strftime('%s', 'now') - 3600) * 1000
" --format json > recent_processes.json
# Export by user
daemoneye-cli query "
SELECT * FROM processes
WHERE user_id = '1000'
" --format csv > user_processes.csv
Process Monitoring
Real-time Monitoring
Watch Process Activity:
# Monitor new processes in real-time
daemoneye-cli watch processes --filter "name LIKE '%chrome%'"
# Monitor high CPU processes
daemoneye-cli watch processes --filter "cpu_usage > 50.0"
# Monitor specific user processes
daemoneye-cli watch processes --filter "user_id = '1000'"
Process Statistics:
# Process count by name
daemoneye-cli query "
SELECT name, COUNT(*) as count
FROM processes
GROUP BY name
ORDER BY count DESC
LIMIT 10
"
# CPU usage distribution
daemoneye-cli query "
SELECT
CASE
WHEN cpu_usage IS NULL THEN 'Unknown'
WHEN cpu_usage = 0 THEN '0%'
WHEN cpu_usage < 10 THEN '1-9%'
WHEN cpu_usage < 50 THEN '10-49%'
WHEN cpu_usage < 100 THEN '50-99%'
ELSE '100%+'
END as cpu_range,
COUNT(*) as process_count
FROM processes
GROUP BY cpu_range
ORDER BY process_count DESC
"
# Memory usage statistics
daemoneye-cli query "
SELECT
AVG(memory_usage) as avg_memory,
MIN(memory_usage) as min_memory,
MAX(memory_usage) as max_memory,
COUNT(*) as process_count
FROM processes
WHERE memory_usage IS NOT NULL
"
Process Investigation
Deep Process Analysis:
# Get detailed information about a specific process
daemoneye-cli query "
SELECT
pid,
name,
executable_path,
command_line,
start_time,
cpu_usage,
memory_usage,
executable_hash,
user_id,
collection_time
FROM processes
WHERE pid = 1234
"
# Find processes with the same executable
daemoneye-cli query "
SELECT
executable_path,
COUNT(*) as instance_count,
GROUP_CONCAT(pid) as pids
FROM processes
WHERE executable_path IS NOT NULL
GROUP BY executable_path
HAVING COUNT(*) > 1
ORDER BY instance_count DESC
"
# Process execution timeline
daemoneye-cli query "
SELECT
pid,
name,
collection_time,
cpu_usage,
memory_usage
FROM processes
WHERE name = 'chrome'
ORDER BY collection_time DESC
LIMIT 20
"
Alert Management
Viewing Alerts
List Recent Alerts:
# Last 10 alerts
daemoneye-cli alerts list --limit 10
# Alerts by severity
daemoneye-cli alerts list --severity high,critical
# Alerts by rule
daemoneye-cli alerts list --rule "suspicious-processes"
# Alerts from specific time range
daemoneye-cli alerts list --since "2024-01-15 10:00:00" --until "2024-01-15 18:00:00"
Alert Details:
# Get detailed information about a specific alert
daemoneye-cli alerts show <alert-id>
# Export alerts to file
daemoneye-cli alerts export --format json --output alerts.json
# Export alerts with filters
daemoneye-cli alerts export --severity high,critical --format csv --output critical_alerts.csv
Alert Filtering and Search
Advanced Alert Queries:
# Alerts affecting specific processes
daemoneye-cli query "
SELECT
a.id,
a.title,
a.severity,
a.alert_time,
a.affected_processes
FROM alerts a
WHERE JSON_EXTRACT(a.alert_data, '$.pid') = 1234
ORDER BY a.alert_time DESC
"
# Alerts by hostname
daemoneye-cli query "
SELECT
a.id,
a.title,
a.severity,
a.alert_time,
JSON_EXTRACT(a.alert_data, '$.hostname') as hostname
FROM alerts a
WHERE JSON_EXTRACT(a.alert_data, '$.hostname') = 'server-01'
ORDER BY a.alert_time DESC
"
# Alert frequency by rule
daemoneye-cli query "
SELECT
rule_id,
COUNT(*) as alert_count,
MAX(alert_time) as last_alert
FROM alerts
GROUP BY rule_id
ORDER BY alert_count DESC
"
Alert Response
Acknowledge Alerts:
# Acknowledge a specific alert
daemoneye-cli alerts acknowledge <alert-id> --comment "Investigating"
# Acknowledge multiple alerts
daemoneye-cli alerts acknowledge --rule "suspicious-processes" --comment "False positive"
# List acknowledged alerts
daemoneye-cli alerts list --status acknowledged
Alert Suppression:
# Suppress alerts for a specific rule
daemoneye-cli alerts suppress --rule "suspicious-processes" --duration "1h" --reason "Maintenance"
# Suppress alerts for specific processes
daemoneye-cli alerts suppress --process 1234 --duration "30m" --reason "Known good process"
# List active suppressions
daemoneye-cli alerts suppressions list
Rule Management
Rule Operations
List Rules:
# List all rules
daemoneye-cli rules list
# List enabled rules only
daemoneye-cli rules list --enabled
# List rules by category
daemoneye-cli rules list --category "malware"
# List rules by severity
daemoneye-cli rules list --severity high,critical
Rule Validation:
# Validate a rule file
daemoneye-cli rules validate /path/to/rule.sql
# Validate all rules
daemoneye-cli rules validate --all
# Test a rule with sample data
daemoneye-cli rules test /path/to/rule.sql --sample-data
Rule Management:
# Enable a rule
daemoneye-cli rules enable suspicious-processes
# Disable a rule
daemoneye-cli rules disable suspicious-processes
# Update a rule
daemoneye-cli rules update suspicious-processes --file /path/to/new-rule.sql
# Delete a rule
daemoneye-cli rules delete suspicious-processes
Rule Development
Create a New Rule:
# Create a new rule file
cat > /etc/daemoneye/rules/custom-rule.sql << 'EOF'
-- Detect processes with suspicious names
SELECT
pid,
name,
executable_path,
command_line,
collection_time
FROM processes
WHERE
name IN ('malware.exe', 'backdoor.exe', 'trojan.exe')
OR name LIKE '%suspicious%'
OR executable_path LIKE '%temp%'
ORDER BY collection_time DESC;
EOF
# Validate the rule
daemoneye-cli rules validate /etc/daemoneye/rules/custom-rule.sql
# Enable the rule
daemoneye-cli rules enable custom-rule
Rule Testing:
# Test rule against current data
daemoneye-cli rules test custom-rule --live
# Test rule with specific time range
daemoneye-cli rules test custom-rule --since "2024-01-15 00:00:00" --until "2024-01-15 23:59:59"
# Test rule performance
daemoneye-cli rules test custom-rule --benchmark
Rule Import/Export
Export Rules:
# Export all rules
daemoneye-cli rules export --output rules-backup.tar.gz
# Export specific rules
daemoneye-cli rules export --rules "suspicious-processes,high-cpu" --output selected-rules.tar.gz
# Export rules by category
daemoneye-cli rules export --category "malware" --output malware-rules.tar.gz
Import Rules:
# Import rules from file
daemoneye-cli rules import rules-backup.tar.gz
# Import rules with validation
daemoneye-cli rules import rules-backup.tar.gz --validate
# Import rules with conflict resolution
daemoneye-cli rules import rules-backup.tar.gz --resolve-conflicts
System Health Monitoring
Performance Metrics
System Performance:
# View system metrics
daemoneye-cli metrics
# CPU usage over time
daemoneye-cli metrics --metric cpu_usage --duration 1h
# Memory usage over time
daemoneye-cli metrics --metric memory_usage --duration 1h
# Process collection rate
daemoneye-cli metrics --metric collection_rate --duration 1h
Database Performance:
# Database status
daemoneye-cli database status
# Database size
daemoneye-cli database size
# Database performance metrics
daemoneye-cli database metrics
# Database maintenance
daemoneye-cli database maintenance --vacuum
Log Analysis
View Logs:
# Recent logs
daemoneye-cli logs --tail 50
# Logs by level
daemoneye-cli logs --level error
# Logs by component
daemoneye-cli logs --component procmond
# Logs with filters
daemoneye-cli logs --filter "error" --tail 100
Log Analysis:
# Error frequency
daemoneye-cli logs --analyze --level error
# Performance issues
daemoneye-cli logs --analyze --filter "slow"
# Security events
daemoneye-cli logs --analyze --filter "security"
Configuration Management
Configuration Files
View Configuration:
# Show current configuration
daemoneye-cli config show
# Show specific configuration section
daemoneye-cli config show alerting
# Show configuration with defaults
daemoneye-cli config show --include-defaults
Update Configuration:
# Update configuration value
daemoneye-cli config set app.scan_interval_ms 60000
# Update multiple values
daemoneye-cli config set alerting.sinks[0].enabled true
# Reload configuration
daemoneye-cli config reload
Configuration Validation:
# Validate configuration file
daemoneye-cli config validate /etc/daemoneye/config.yaml
# Validate current configuration
daemoneye-cli config validate
# Check configuration for issues
daemoneye-cli config check
Environment Management
Environment Variables:
# Set environment variables
export DAEMONEYE_LOG_LEVEL=debug
export DAEMONEYE_DATABASE_PATH=/var/lib/daemoneye/events.redb
# View environment configuration
daemoneye-cli config show --environment
Service Configuration:
# Update service configuration
sudo systemctl edit daemoneye
# Reload service configuration
sudo systemctl daemon-reload
sudo systemctl restart daemoneye
Troubleshooting
Common Issues
Service Won't Start:
# Check service status
sudo systemctl status daemoneye
# Check logs for errors
sudo journalctl -u daemoneye -f
# Check configuration
daemoneye-cli config validate
# Check permissions
ls -la /var/lib/daemoneye/
Database Issues:
# Check database status
daemoneye-cli database status
# Check database integrity
daemoneye-cli database integrity-check
# Repair database
daemoneye-cli database repair
# Rebuild database
daemoneye-cli database rebuild
Alert Delivery Issues:
# Check alert sink status
daemoneye-cli alerts sinks status
# Test alert delivery
daemoneye-cli alerts test-delivery
# Check network connectivity
daemoneye-cli network test
# View delivery logs
daemoneye-cli logs --filter "delivery"
Debug Mode
Enable Debug Logging:
# Set debug log level
daemoneye-cli config set app.log_level debug
# Restart service
sudo systemctl restart daemoneye
# Monitor debug logs
daemoneye-cli logs --level debug --tail 100
Component Debugging:
# Debug procmond
sudo daemoneye-cli debug procmond --verbose
# Debug daemoneye-agent
daemoneye-cli debug daemoneye-agent --verbose
# Debug database
daemoneye-cli debug database --verbose
Performance Issues
High CPU Usage:
# Check process collection rate
daemoneye-cli metrics --metric collection_rate
# Reduce scan interval
daemoneye-cli config set app.scan_interval_ms 60000
# Check for problematic rules
daemoneye-cli rules list --performance
High Memory Usage:
# Check memory usage
daemoneye-cli metrics --metric memory_usage
# Reduce batch size
daemoneye-cli config set app.batch_size 500
# Check database size
daemoneye-cli database size
Slow Queries:
# Check query performance
daemoneye-cli database query-stats
# Optimize database
daemoneye-cli database optimize
# Check for slow rules
daemoneye-cli rules list --slow
Best Practices
Security
- Regular Updates: Keep DaemonEye updated to the latest version
- Access Control: Limit access to DaemonEye configuration and data
- Audit Logging: Enable comprehensive audit logging
- Network Security: Use secure connections for remote management
- Backup: Regularly backup configuration and database
Performance
- Resource Monitoring: Monitor CPU, memory, and disk usage
- Rule Optimization: Optimize detection rules for performance
- Database Maintenance: Regular database maintenance and cleanup
- Alert Tuning: Tune alert thresholds to reduce noise
- Capacity Planning: Plan for growth in process count and data volume
Operations
- Documentation: Document custom rules and configurations
- Testing: Test rules and configurations in non-production environments
- Monitoring: Set up comprehensive monitoring and alerting
- Incident Response: Develop procedures for security incidents
- Training: Train operators on DaemonEye features and best practices
Maintenance
- Regular Backups: Backup configuration and database regularly
- Log Rotation: Implement log rotation to prevent disk space issues
- Database Cleanup: Regular cleanup of old data
- Rule Review: Regular review and update of detection rules
- Performance Tuning: Regular performance analysis and tuning
This operator guide provides comprehensive instructions for managing DaemonEye in production environments. For additional help, consult the troubleshooting section or contact support.
DaemonEye Configuration Guide
This guide provides comprehensive information about configuring DaemonEye for different deployment scenarios and requirements.
Table of Contents
- Configuration Overview
- Configuration Hierarchy
- Core Configuration
- Alerting Configuration
- Database Configuration
- Platform-Specific Configuration
- Business Tier Configuration
- Enterprise Tier Configuration
- Environment Variables
- Configuration Examples
- Troubleshooting
Configuration Overview
DaemonEye uses a hierarchical configuration system that allows you to override settings at different levels. The configuration is loaded in the following order (later sources override earlier ones):
- Embedded defaults (lowest precedence)
- System configuration files (/etc/daemoneye/config.yaml)
- User configuration files (~/.config/daemoneye/config.yaml)
- Environment variables (DAEMONEYE_*)
- Command-line flags (highest precedence)
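For example, an environment variable overrides the same setting from a configuration file (the override shown is illustrative; command names follow the CLI used throughout this guide):
# /etc/daemoneye/config.yaml sets app.log_level: info
export DAEMONEYE_LOG_LEVEL=debug       # environment overrides the file
daemoneye-cli config get app.log_level # now reports debug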
Configuration Hierarchy
File Locations
System Configuration:
- Linux: /etc/daemoneye/config.yaml
- macOS: /Library/Application Support/DaemonEye/config.yaml
- Windows: C:\ProgramData\DaemonEye\config.yaml
User Configuration:
- Linux/macOS: ~/.config/daemoneye/config.yaml
- Windows: %APPDATA%\DaemonEye\config.yaml
Service-Specific Configuration:
- Linux: /etc/daemoneye/procmond.yaml, /etc/daemoneye/daemoneye-agent.yaml
- macOS: /Library/Application Support/DaemonEye/procmond.yaml
- Windows: C:\ProgramData\DaemonEye\procmond.yaml
Configuration Formats
DaemonEye supports multiple configuration formats:
- YAML (recommended): Human-readable, supports comments
- JSON: Machine-readable, no comments
- TOML: Alternative human-readable format
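For instance, the application block shown in the next section could equivalently be written in TOML (abbreviated, illustrative):
[app]
scan_interval_ms = 30000
batch_size = 1000
log_level = "info"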
Core Configuration
Application Settings
app:
# Scan interval in milliseconds
scan_interval_ms: 30000
# Batch size for process collection
batch_size: 1000
# Log level: debug, info, warn, error
log_level: info
# Data retention period in days
retention_days: 30
# Maximum memory usage in MB
max_memory_mb: 512
# Enable performance monitoring
enable_metrics: true
# Metrics collection interval in seconds
metrics_interval_secs: 60
Process Collection Settings
collection:
# Enable process enumeration
enable_process_collection: true
# Enable executable hashing
enable_hash_computation: true
# Hash algorithm (sha256, sha1, md5)
hash_algorithm: sha256
# Skip hashing for system processes
skip_system_processes: true
# Skip hashing for temporary files
skip_temp_files: true
# Maximum hash computation time per process (ms)
max_hash_time_ms: 5000
# Enable enhanced process metadata collection
enable_enhanced_metadata: false
Detection Engine Settings
detection:
# Path to detection rules directory
rules_path: /etc/daemoneye/rules
# Enable rule hot-reloading
enable_hot_reload: true
# Rule execution timeout in seconds
rule_timeout_secs: 30
# Maximum memory per rule execution (MB)
max_rule_memory_mb: 128
# Enable rule performance monitoring
enable_rule_metrics: true
# Rule execution concurrency
max_concurrent_rules: 10
# Enable rule validation
enable_rule_validation: true
Alerting Configuration
Alert Sinks
alerting:
# Enable alerting
enabled: true
# Alert deduplication window in minutes
dedupe_window_minutes: 60
# Maximum alert queue size
max_queue_size: 10000
# Alert processing concurrency
max_concurrent_deliveries: 5
# Sink configurations
sinks:
# Standard output sink
- type: stdout
enabled: true
format: json # json, text, csv
# File output sink
- type: file
enabled: false
path: /var/log/daemoneye/alerts.json
format: json
rotation:
max_size_mb: 100
max_files: 10
# Syslog sink
- type: syslog
enabled: true
facility: daemon
tag: daemoneye
host: localhost
port: 514
protocol: udp # udp, tcp
# Webhook sink
- type: webhook
enabled: false
url: https://your-siem.com/webhook
method: POST
headers:
Authorization: Bearer ${WEBHOOK_TOKEN}
Content-Type: application/json
timeout_secs: 30
retry_attempts: 3
retry_delay_ms: 1000
# Email sink
- type: email
enabled: false
smtp_host: smtp.example.com
smtp_port: 587
smtp_username: ${SMTP_USERNAME}
smtp_password: ${SMTP_PASSWORD}
smtp_tls: true
from: daemoneye@example.com
to: [security@example.com]
subject: 'DaemonEye Alert: {severity} - {title}'
# Splunk HEC sink (Business Tier)
- type: splunk_hec
enabled: false
endpoint: https://splunk.example.com:8088/services/collector
token: ${SPLUNK_HEC_TOKEN}
index: daemoneye
source_type: daemoneye:alert
sourcetype: daemoneye:alert
# Elasticsearch sink (Business Tier)
- type: elasticsearch
enabled: false
hosts: [https://elastic.example.com:9200]
username: ${ELASTIC_USERNAME}
password: ${ELASTIC_PASSWORD}
index_pattern: daemoneye-{YYYY.MM.DD}
pipeline: daemoneye-alerts
# Kafka sink (Business Tier)
- type: kafka
enabled: false
brokers: [kafka.example.com:9092]
topic: daemoneye.alerts
security_protocol: SASL_SSL
sasl_mechanism: PLAIN
sasl_username: ${KAFKA_USERNAME}
sasl_password: ${KAFKA_PASSWORD}
Alert Filtering
alerting:
# Global alert filters
filters:
# Minimum severity level
min_severity: low # low, medium, high, critical
# Exclude specific rules
exclude_rules: [test-rule, debug-rule]
# Include only specific rules
include_rules: [] # Empty means all rules
# Exclude specific hosts
exclude_hosts: [test-server, dev-workstation]
# Include only specific hosts
include_hosts: [] # Empty means all hosts
# Time-based filtering
time_filters:
# Exclude alerts during maintenance windows
maintenance_windows:
- start: 02:00
end: 04:00
days: [sunday]
- start: 12:00
end: 13:00
days: [monday, tuesday, wednesday, thursday, friday]
Database Configuration
Event Store (redb)
database:
# Event store configuration
event_store:
# Database file path
path: /var/lib/daemoneye/events.redb
# Maximum database size in MB
max_size_mb: 10240
# Enable WAL mode for better performance
wal_mode: true
# WAL checkpoint interval in seconds
wal_checkpoint_interval_secs: 300
# Connection pool size
max_connections: 10
# Connection timeout in seconds
connection_timeout_secs: 30
# Idle connection timeout in seconds
idle_timeout_secs: 600
Audit Ledger (SQLite)
database:
# Audit ledger configuration
audit_ledger:
# Database file path
path: /var/lib/daemoneye/audit.sqlite
# Enable WAL mode for durability
wal_mode: true
# WAL checkpoint mode (NORMAL, FULL, RESTART, TRUNCATE)
wal_checkpoint_mode: FULL
# Synchronous mode (OFF, NORMAL, FULL)
synchronous: FULL
# Journal mode (DELETE, TRUNCATE, PERSIST, MEMORY, WAL)
journal_mode: WAL
# Cache size in KB
cache_size_kb: 2000
# Page size in bytes
page_size_bytes: 4096
Data Retention
database:
# Data retention policies
retention:
# Process data retention in days
process_data_days: 30
# Alert data retention in days
alert_data_days: 90
# Audit log retention in days
audit_log_days: 365
# Enable automatic cleanup
enable_cleanup: true
# Cleanup interval in hours
cleanup_interval_hours: 24
# Cleanup batch size
cleanup_batch_size: 1000
Platform-Specific Configuration
Linux Configuration
platform:
linux:
# Enable eBPF monitoring (Enterprise Tier)
enable_ebpf: false
# eBPF program path
ebpf_program_path: /usr/lib/daemoneye/daemoneye_monitor.o
# eBPF ring buffer size
ebpf_ring_buffer_size: 1048576 # 1MB
# Enable process namespace monitoring
enable_namespace_monitoring: true
# Enable cgroup monitoring
enable_cgroup_monitoring: true
# Process collection method
collection_method: sysinfo # sysinfo, ebpf, hybrid
# Privilege requirements
privileges:
# Required capabilities
capabilities: [SYS_PTRACE]
# Drop privileges after initialization
drop_privileges: true
# Privilege drop timeout in seconds
privilege_drop_timeout_secs: 30
Windows Configuration
platform:
windows:
# Enable ETW monitoring (Enterprise Tier)
enable_etw: false
# ETW session name
etw_session_name: DaemonEye
# ETW buffer size in KB
etw_buffer_size_kb: 64
# ETW maximum buffers
etw_max_buffers: 100
# Enable registry monitoring
enable_registry_monitoring: false
# Enable file system monitoring
enable_filesystem_monitoring: false
# Process collection method
collection_method: sysinfo # sysinfo, etw, hybrid
# Privilege requirements
privileges:
# Required privileges
privileges: [SeDebugPrivilege]
# Drop privileges after initialization
drop_privileges: true
macOS Configuration
platform:
macos:
# Enable EndpointSecurity monitoring (Enterprise Tier)
enable_endpoint_security: false
# EndpointSecurity event types
es_event_types:
- ES_EVENT_TYPE_NOTIFY_EXEC
- ES_EVENT_TYPE_NOTIFY_FORK
- ES_EVENT_TYPE_NOTIFY_EXIT
# Enable file system monitoring
enable_filesystem_monitoring: false
# Enable network monitoring
enable_network_monitoring: false
# Process collection method
collection_method: sysinfo # sysinfo, endpoint_security, hybrid
# Privilege requirements
privileges:
# Required entitlements
entitlements: [com.apple.security.cs.allow-jit]
# Drop privileges after initialization
drop_privileges: true
Business Tier Configuration
Security Center
business_tier:
# License configuration
license:
# License key
key: ${DAEMONEYE_LICENSE_KEY}
# License validation endpoint (optional)
validation_endpoint:
# Offline validation only
offline_only: true
# Security Center configuration
security_center:
# Enable Security Center
enabled: false
# Security Center endpoint
endpoint: https://security-center.example.com:8443
# Client certificate path
client_cert_path: /etc/daemoneye/agent.crt
# Client key path
client_key_path: /etc/daemoneye/agent.key
# CA certificate path
ca_cert_path: /etc/daemoneye/ca.crt
# Connection timeout in seconds
connection_timeout_secs: 30
# Heartbeat interval in seconds
heartbeat_interval_secs: 30
# Retry configuration
retry:
max_attempts: 3
base_delay_ms: 1000
max_delay_ms: 30000
backoff_multiplier: 2.0
Rule Packs
business_tier:
# Rule pack configuration
rule_packs:
# Enable automatic updates
auto_update: true
# Update interval in hours
update_interval_hours: 24
# Rule pack sources
sources:
- name: official
url: https://rules.daemoneye.com/packs/
signature_key: ed25519:public-key
enabled: true
- name: custom
url: https://internal-rules.company.com/
signature_key: ed25519:custom-key
enabled: true
# Local rule pack directory
local_directory: /etc/daemoneye/rule-packs
# Signature validation
signature_validation:
enabled: true
strict_mode: true
allowed_keys: [ed25519:official-key, ed25519:custom-key]
Enhanced Connectors
business_tier:
# Enhanced output connectors
enhanced_connectors:
# Splunk HEC connector
splunk_hec:
enabled: false
endpoint: https://splunk.example.com:8088/services/collector
token: ${SPLUNK_HEC_TOKEN}
index: daemoneye
sourcetype: daemoneye:alert
batch_size: 100
batch_timeout_ms: 5000
# Elasticsearch connector
elasticsearch:
enabled: false
hosts: [https://elastic.example.com:9200]
username: ${ELASTIC_USERNAME}
password: ${ELASTIC_PASSWORD}
index_pattern: daemoneye-{YYYY.MM.DD}
pipeline: daemoneye-alerts
batch_size: 1000
batch_timeout_ms: 10000
# Kafka connector
kafka:
enabled: false
brokers: [kafka.example.com:9092]
topic: daemoneye.alerts
security_protocol: SASL_SSL
sasl_mechanism: PLAIN
sasl_username: ${KAFKA_USERNAME}
sasl_password: ${KAFKA_PASSWORD}
batch_size: 100
batch_timeout_ms: 5000
Enterprise Tier Configuration
Kernel Monitoring
enterprise_tier:
# Kernel monitoring configuration
kernel_monitoring:
# Enable kernel monitoring
enabled: false
# Monitoring method
method: auto # auto, ebpf, etw, endpoint_security, disabled
# eBPF configuration (Linux)
ebpf:
enabled: false
program_path: /usr/lib/daemoneye/daemoneye_monitor.o
ring_buffer_size: 2097152 # 2MB
max_events_per_second: 10000
# ETW configuration (Windows)
etw:
enabled: false
session_name: DaemonEye
buffer_size_kb: 128
max_buffers: 200
providers:
- name: Microsoft-Windows-Kernel-Process
guid: 22FB2CD6-0E7B-422B-A0C7-2FAD1FD0E716
level: 5
keywords: 0xFFFFFFFFFFFFFFFF
# EndpointSecurity configuration (macOS)
endpoint_security:
enabled: false
event_types:
- ES_EVENT_TYPE_NOTIFY_EXEC
- ES_EVENT_TYPE_NOTIFY_FORK
- ES_EVENT_TYPE_NOTIFY_EXIT
- ES_EVENT_TYPE_NOTIFY_OPEN
- ES_EVENT_TYPE_NOTIFY_CLOSE
Federation
enterprise_tier:
# Federation configuration
federation:
# Enable federation
enabled: false
# Federation tier
tier: agent # agent, regional, primary
# Regional Security Center
regional_center:
endpoint: https://regional-center.example.com:8443
certificate_path: /etc/daemoneye/regional.crt
key_path: /etc/daemoneye/regional.key
# Primary Security Center
primary_center:
endpoint: https://primary-center.example.com:8443
certificate_path: /etc/daemoneye/primary.crt
key_path: /etc/daemoneye/primary.key
# Data synchronization
sync:
# Sync interval in minutes
interval_minutes: 5
# Sync batch size
batch_size: 1000
# Enable compression
compression: true
# Enable encryption
encryption: true
STIX/TAXII Integration
enterprise_tier:
# STIX/TAXII configuration
stix_taxii:
# Enable STIX/TAXII integration
enabled: false
# TAXII servers
servers:
- name: threat-intel-server
url: https://threat-intel.example.com/taxii2/
username: ${TAXII_USERNAME}
password: ${TAXII_PASSWORD}
collections: [malware-indicators, attack-patterns]
# Polling configuration
polling:
# Poll interval in minutes
interval_minutes: 60
# Maximum indicators per poll
max_indicators: 10000
# Indicator confidence threshold
min_confidence: 50
# Indicator conversion
conversion:
# Convert STIX indicators to detection rules
auto_convert: true
# Rule template for converted indicators
rule_template: stix-indicator-{id}
# Rule severity mapping
severity_mapping:
low: low
medium: medium
high: high
critical: critical
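As a rough illustration of the auto-conversion step, the sketch below filters indicators against min_confidence and emits (rule id, SQL) pairs following the stix-indicator-{id} template. The StixIndicator shape and the single-file-name pattern are simplifying assumptions; real STIX 2.1 indicators carry full pattern expressions.
/// Illustrative STIX indicator subset; real STIX 2.1 objects are richer
/// and carry a pattern expression rather than a bare file name.
struct StixIndicator {
    id: String,
    confidence: u8,
    file_name: String, // assumed pre-extracted from the STIX pattern
}

/// Sketch of the auto-conversion step: indicators below the configured
/// confidence threshold are skipped; the rest become (rule id, SQL) pairs.
fn convert_indicator(ind: &StixIndicator, min_confidence: u8) -> Option<(String, String)> {
    if ind.confidence < min_confidence {
        return None;
    }
    let rule_id = format!("stix-indicator-{}", ind.id);
    let sql = format!(
        "SELECT * FROM processes WHERE name = '{}'",
        ind.file_name.replace('\'', "''") // naive escaping, sketch only
    );
    Some((rule_id, sql))
}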
Environment Variables
Core Variables
# Application settings
export DAEMONEYE_LOG_LEVEL=info
export DAEMONEYE_SCAN_INTERVAL_MS=30000
export DAEMONEYE_BATCH_SIZE=1000
export DAEMONEYE_RETENTION_DAYS=30
# Database settings
export DAEMONEYE_DATABASE_PATH=/var/lib/daemoneye/events.redb
export DAEMONEYE_AUDIT_LEDGER_PATH=/var/lib/daemoneye/audit.sqlite
# Alerting settings
export DAEMONEYE_ALERTING_ENABLED=true
export DAEMONEYE_WEBHOOK_URL=https://your-siem.com/webhook
export DAEMONEYE_WEBHOOK_TOKEN=your-webhook-token
# Platform settings
export DAEMONEYE_ENABLE_EBPF=false
export DAEMONEYE_ENABLE_ETW=false
export DAEMONEYE_ENABLE_ENDPOINT_SECURITY=false
Business Tier Variables
# Security Center
export DAEMONEYE_SECURITY_CENTER_ENABLED=false
export DAEMONEYE_SECURITY_CENTER_ENDPOINT=https://security-center.example.com:8443
export DAEMONEYE_CLIENT_CERT_PATH=/etc/daemoneye/agent.crt
export DAEMONEYE_CLIENT_KEY_PATH=/etc/daemoneye/agent.key
# Enhanced connectors
export SPLUNK_HEC_TOKEN=your-splunk-token
export ELASTIC_USERNAME=your-elastic-username
export ELASTIC_PASSWORD=your-elastic-password
export KAFKA_USERNAME=your-kafka-username
export KAFKA_PASSWORD=your-kafka-password
Enterprise Tier Variables
# Kernel monitoring
export DAEMONEYE_KERNEL_MONITORING_ENABLED=false
export DAEMONEYE_EBPF_ENABLED=false
export DAEMONEYE_ETW_ENABLED=false
export DAEMONEYE_ENDPOINT_SECURITY_ENABLED=false
# Federation
export DAEMONEYE_FEDERATION_ENABLED=false
export DAEMONEYE_REGIONAL_CENTER_ENDPOINT=https://regional.example.com:8443
# STIX/TAXII
export TAXII_USERNAME=your-taxii-username
export TAXII_PASSWORD=your-taxii-password
Configuration Examples
Basic Production Configuration
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
retention_days: 30
enable_metrics: true
collection:
enable_process_collection: true
enable_hash_computation: true
hash_algorithm: sha256
skip_system_processes: true
detection:
rules_path: /etc/daemoneye/rules
enable_hot_reload: true
rule_timeout_secs: 30
max_concurrent_rules: 10
alerting:
enabled: true
dedupe_window_minutes: 60
sinks:
- type: syslog
enabled: true
facility: daemon
tag: daemoneye
- type: webhook
enabled: true
url: https://your-siem.com/webhook
headers:
Authorization: Bearer ${WEBHOOK_TOKEN}
database:
event_store:
path: /var/lib/daemoneye/events.redb
max_size_mb: 10240
wal_mode: true
audit_ledger:
path: /var/lib/daemoneye/audit.sqlite
wal_mode: true
synchronous: FULL
High-Performance Configuration
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 15000 # More frequent scanning
batch_size: 2000 # Larger batches
log_level: warn # Less verbose logging
retention_days: 7 # Shorter retention
max_memory_mb: 1024 # More memory
enable_metrics: true
collection:
enable_process_collection: true
enable_hash_computation: true
hash_algorithm: sha256
skip_system_processes: true
max_hash_time_ms: 2000 # Tighter per-executable hash time budget
detection:
rules_path: /etc/daemoneye/rules
enable_hot_reload: true
rule_timeout_secs: 15 # Shorter rule timeout
max_concurrent_rules: 20 # More concurrent rules
max_rule_memory_mb: 64 # Less memory per rule
alerting:
enabled: true
dedupe_window_minutes: 30
max_concurrent_deliveries: 10
sinks:
- type: syslog
enabled: true
facility: daemon
tag: daemoneye
- type: kafka
enabled: true
brokers: [kafka.example.com:9092]
topic: daemoneye.alerts
batch_size: 100
batch_timeout_ms: 1000
database:
event_store:
path: /var/lib/daemoneye/events.redb
max_size_mb: 20480
wal_mode: true
wal_checkpoint_interval_secs: 60
max_connections: 20
retention:
process_data_days: 7
alert_data_days: 30
enable_cleanup: true
cleanup_interval_hours: 6
Airgapped Environment Configuration
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 60000 # Less frequent scanning
batch_size: 500 # Smaller batches
log_level: info
retention_days: 90 # Longer retention
enable_metrics: true
collection:
enable_process_collection: true
enable_hash_computation: true
hash_algorithm: sha256
skip_system_processes: true
detection:
rules_path: /etc/daemoneye/rules
enable_hot_reload: false # Disable hot reload
rule_timeout_secs: 60
max_concurrent_rules: 5
alerting:
enabled: true
dedupe_window_minutes: 120
sinks:
- type: file
enabled: true
path: /var/log/daemoneye/alerts.json
format: json
rotation:
max_size_mb: 50
max_files: 20
- type: syslog
enabled: true
facility: daemon
tag: daemoneye
database:
event_store:
path: /var/lib/daemoneye/events.redb
max_size_mb: 5120
wal_mode: true
audit_ledger:
path: /var/lib/daemoneye/audit.sqlite
wal_mode: true
synchronous: FULL
journal_mode: WAL
Troubleshooting
Configuration Validation
# Validate configuration file
daemoneye-cli config validate /etc/daemoneye/config.yaml
# Validate current configuration
daemoneye-cli config validate
# Check for configuration issues
daemoneye-cli config check
# Show effective configuration
daemoneye-cli config show --include-defaults
Common Configuration Issues
Invalid YAML Syntax:
# Check YAML syntax
python -c "import yaml; yaml.safe_load(open('/etc/daemoneye/config.yaml'))"
# Use online YAML validator
# https://www.yamllint.com/
Missing Required Fields:
# Check for missing required fields
daemoneye-cli config check --strict
# Show configuration with defaults
daemoneye-cli config show --include-defaults
Permission Issues:
# Check file permissions
ls -la /etc/daemoneye/config.yaml
ls -la /var/lib/daemoneye/
# Fix permissions
sudo chown daemoneye:daemoneye /var/lib/daemoneye/
sudo chmod 755 /var/lib/daemoneye/
Environment Variable Issues:
# Check environment variables
env | grep DAEMONEYE
# Test environment variable substitution
daemoneye-cli config show --environment
Configuration Debugging
Enable Debug Logging:
app:
log_level: debug
Configuration Loading Debug:
# Show configuration loading process
daemoneye-cli config debug
# Show configuration sources
daemoneye-cli config sources
Test Configuration Changes:
# Test configuration without applying
daemoneye-cli config test /path/to/new-config.yaml
# Apply configuration with validation
daemoneye-cli config apply /path/to/new-config.yaml --validate
This configuration guide provides comprehensive information about configuring DaemonEye for different deployment scenarios. For additional help, consult the troubleshooting section or contact support.
API Reference
This section contains comprehensive API documentation for DaemonEye, covering all public interfaces, data structures, and usage examples.
Core API
The core API provides the fundamental interfaces for process monitoring, alerting, and data management.
API Overview
Component APIs
DaemonEye provides APIs for each component:
- ProcMonD API: Process collection and system monitoring
- daemoneye-agent API: Alerting and orchestration
- daemoneye-cli API: Command-line interface and management
- daemoneye-lib API: Shared library interfaces
API Design Principles
- Async-First: All APIs use async/await patterns
- Error Handling: Structured error types with thiserror
- Type Safety: Strong typing with Rust's type system
- Documentation: Comprehensive rustdoc comments
- Examples: Code examples for all public APIs
Quick Reference
Process Collection
use daemoneye_lib::collector::ProcessCollector;
let collector = ProcessCollector::new();
let processes = collector.collect_processes().await?;
Database Operations
use daemoneye_lib::storage::Database;
let db = Database::new("processes.db").await?;
let processes = db.query_processes("SELECT * FROM processes WHERE pid = ?", &[&1234]).await?;
Alert Management
use daemoneye_lib::alerting::AlertManager;
let mut alert_manager = AlertManager::new();
alert_manager.add_sink(Box::new(SyslogSink::new("daemon")?));
alert_manager.send_alert(alert).await?;
Configuration Management
use daemoneye_lib::config::Config;
let config = Config::load_from_file("config.yaml").await?;
let scan_interval = config.get::<u64>("app.scan_interval_ms")?;
Data Structures
ProcessInfo
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ProcessInfo {
pub pid: u32,
pub name: String,
pub executable_path: Option<String>,
pub command_line: Option<String>,
pub start_time: Option<DateTime<Utc>>,
pub cpu_usage: Option<f64>,
pub memory_usage: Option<u64>,
pub status: ProcessStatus,
pub executable_hash: Option<String>,
pub collection_time: DateTime<Utc>,
}
Alert
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct Alert {
pub id: Uuid,
pub rule_name: String,
pub severity: AlertSeverity,
pub message: String,
pub process: ProcessInfo,
pub timestamp: DateTime<Utc>,
pub metadata: HashMap<String, String>,
}
DetectionRule
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct DetectionRule {
pub name: String,
pub description: String,
pub sql_query: String,
pub priority: u32,
pub enabled: bool,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}
Error Types
CollectionError
#[derive(Debug, Error)]
pub enum CollectionError {
#[error("Permission denied accessing process {pid}")]
PermissionDenied { pid: u32 },
#[error("Process {pid} no longer exists")]
ProcessNotFound { pid: u32 },
#[error("I/O error: {0}")]
IoError(#[from] std::io::Error),
#[error("System error: {0}")]
SystemError(String),
}
DatabaseError
#[derive(Debug, Error)]
pub enum DatabaseError {
#[error("Database connection failed: {0}")]
ConnectionFailed(String),
#[error("Query execution failed: {0}")]
QueryFailed(String),
#[error("Transaction failed: {0}")]
TransactionFailed(String),
#[error("SQLite error: {0}")]
SqliteError(#[from] rusqlite::Error),
}
AlertError
#[derive(Debug, Error)]
pub enum AlertError {
#[error("Alert delivery failed: {0}")]
DeliveryFailed(String),
#[error("Invalid alert format: {0}")]
InvalidFormat(String),
#[error("Alert sink error: {0}")]
SinkError(String),
#[error("Alert queue full")]
QueueFull,
}
Service Traits
ProcessCollectionService
#[async_trait]
pub trait ProcessCollectionService: Send + Sync {
async fn collect_processes(&self) -> Result<CollectionResult, CollectionError>;
async fn get_system_info(&self) -> Result<SystemInfo, CollectionError>;
async fn get_process_by_pid(&self, pid: u32) -> Result<Option<ProcessInfo>, CollectionError>;
}
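A minimal sketch of implementing this trait over the sysinfo crate; the SysinfoCollector type and the From conversions it leans on are assumptions for illustration:
use async_trait::async_trait;

pub struct SysinfoCollector;

#[async_trait]
impl ProcessCollectionService for SysinfoCollector {
    async fn collect_processes(&self) -> Result<CollectionResult, CollectionError> {
        // Enumerate all processes via the cross-platform sysinfo crate.
        let system = sysinfo::System::new_all();
        let processes: Vec<ProcessInfo> = system
            .processes()
            .values()
            .map(ProcessInfo::from) // assumes a From<&sysinfo::Process> impl
            .collect();
        Ok(CollectionResult::from(processes)) // assumes a From<Vec<ProcessInfo>> impl
    }

    async fn get_system_info(&self) -> Result<SystemInfo, CollectionError> {
        todo!("platform metadata collection elided in this sketch")
    }

    async fn get_process_by_pid(&self, pid: u32) -> Result<Option<ProcessInfo>, CollectionError> {
        let system = sysinfo::System::new_all();
        Ok(system
            .process(sysinfo::Pid::from_u32(pid))
            .map(ProcessInfo::from))
    }
}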
DetectionService
#[async_trait]
pub trait DetectionService: Send + Sync {
async fn execute_rules(&self, scan_context: &ScanContext) -> Result<Vec<Alert>, DetectionError>;
async fn load_rules(&self) -> Result<Vec<DetectionRule>, DetectionError>;
async fn add_rule(&self, rule: DetectionRule) -> Result<(), DetectionError>;
async fn remove_rule(&self, rule_name: &str) -> Result<(), DetectionError>;
}
AlertSink
#[async_trait]
pub trait AlertSink: Send + Sync {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult, DeliveryError>;
async fn health_check(&self) -> HealthStatus;
fn name(&self) -> &str;
}
Configuration API
Config
pub struct Config {
app: AppConfig,
database: DatabaseConfig,
alerting: AlertingConfig,
security: SecurityConfig,
}
impl Config {
pub async fn load_from_file(path: &str) -> Result<Self, ConfigError>;
pub async fn load_from_env() -> Result<Self, ConfigError>;
pub fn get<T>(&self, key: &str) -> Result<T, ConfigError> where T: DeserializeOwned;
pub fn set<T>(&mut self, key: &str, value: T) -> Result<(), ConfigError> where T: Serialize;
pub fn validate(&self) -> Result<(), ConfigError>;
}
AppConfig
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct AppConfig {
pub scan_interval_ms: u64,
pub batch_size: usize,
pub log_level: String,
pub data_dir: PathBuf,
pub log_dir: PathBuf,
pub pid_file: Option<PathBuf>,
pub user: Option<String>,
pub group: Option<String>,
pub max_memory_mb: Option<u64>,
pub max_cpu_percent: Option<f64>,
}
Database API
Database
pub struct Database {
conn: Connection,
}
impl Database {
pub async fn new(path: &str) -> Result<Self, DatabaseError>;
pub async fn create_schema(&self) -> Result<(), DatabaseError>;
pub async fn insert_process(&self, process: &ProcessInfo) -> Result<(), DatabaseError>;
pub async fn get_process(&self, pid: u32) -> Result<Option<ProcessInfo>, DatabaseError>;
pub async fn query_processes(&self, query: &str, params: &[&dyn ToSql]) -> Result<Vec<ProcessInfo>, DatabaseError>;
pub async fn insert_alert(&self, alert: &Alert) -> Result<(), DatabaseError>;
pub async fn get_alerts(&self, limit: Option<usize>) -> Result<Vec<Alert>, DatabaseError>;
pub async fn cleanup_old_data(&self, retention_days: u32) -> Result<(), DatabaseError>;
}
Alerting API
AlertManager
pub struct AlertManager {
sinks: Vec<Box<dyn AlertSink>>,
queue: Arc<Mutex<VecDeque<Alert>>>,
max_queue_size: usize,
}
impl AlertManager {
pub fn new() -> Self;
pub fn add_sink(&mut self, sink: Box<dyn AlertSink>);
pub async fn send_alert(&self, alert: Alert) -> Result<(), AlertError>;
pub async fn health_check(&self) -> HealthStatus;
pub fn queue_size(&self) -> usize;
pub fn queue_capacity(&self) -> usize;
}
Alert Sinks
SyslogSink
pub struct SyslogSink {
facility: String,
priority: String,
tag: String,
}
impl SyslogSink {
pub fn new(facility: &str) -> Result<Self, SinkError>;
pub fn with_priority(self, priority: &str) -> Self;
pub fn with_tag(self, tag: &str) -> Self;
}
WebhookSink
pub struct WebhookSink {
url: String,
method: String,
timeout: Duration,
retry_attempts: u32,
headers: HashMap<String, String>,
}
impl WebhookSink {
pub fn new(url: &str) -> Self;
pub fn with_method(self, method: &str) -> Self;
pub fn with_timeout(self, timeout: Duration) -> Self;
pub fn with_retry_attempts(self, attempts: u32) -> Self;
pub fn with_header(self, key: &str, value: &str) -> Self;
}
FileSink
pub struct FileSink {
path: PathBuf,
format: OutputFormat,
rotation: RotationPolicy,
max_files: usize,
}
impl FileSink {
pub fn new(path: &str) -> Self;
pub fn with_format(self, format: OutputFormat) -> Self;
pub fn with_rotation(self, rotation: RotationPolicy) -> Self;
pub fn with_max_files(self, max_files: usize) -> Self;
}
CLI API
Cli
pub struct Cli {
pub command: Commands,
pub config: Option<PathBuf>,
pub log_level: String,
}
#[derive(Subcommand)]
pub enum Commands {
Run(RunCommand),
Config(ConfigCommand),
Rules(RulesCommand),
Alerts(AlertsCommand),
Health(HealthCommand),
Query(QueryCommand),
Logs(LogsCommand),
}
impl Cli {
pub async fn execute(self) -> Result<(), CliError>;
}
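In a binary crate, the entry point parses arguments and dispatches to execute(); this sketch assumes Cli derives clap's Parser:
use clap::Parser;

#[tokio::main]
async fn main() -> Result<(), CliError> {
    // Parse argv into the Cli struct, then dispatch to the subcommand.
    let cli = Cli::parse();
    cli.execute().await
}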
Commands
RunCommand
#[derive(Args)]
pub struct RunCommand {
#[arg(short, long)]
pub daemon: bool,
#[arg(short, long)]
pub foreground: bool,
}
ConfigCommand
#[derive(Subcommand)]
pub enum ConfigCommand {
Show(ConfigShowCommand),
Set(ConfigSetCommand),
Get(ConfigGetCommand),
Validate(ConfigValidateCommand),
Load(ConfigLoadCommand),
}
#[derive(Args)]
pub struct ConfigShowCommand {
#[arg(long)]
pub include_defaults: bool,
#[arg(long)]
pub format: Option<String>,
}
RulesCommand
#[derive(Subcommand)]
pub enum RulesCommand {
List(RulesListCommand),
Add(RulesAddCommand),
Remove(RulesRemoveCommand),
Enable(RulesEnableCommand),
Disable(RulesDisableCommand),
Validate(RulesValidateCommand),
Test(RulesTestCommand),
Reload(RulesReloadCommand),
}
IPC API
IpcServer
pub struct IpcServer {
socket_path: PathBuf,
handlers: HashMap<String, Box<dyn IpcHandler>>,
}
impl IpcServer {
pub fn new(socket_path: &str) -> Self;
pub fn add_handler(&mut self, name: &str, handler: Box<dyn IpcHandler>);
pub async fn run(self) -> Result<(), IpcError>;
}
IpcClient
pub struct IpcClient {
socket_path: PathBuf,
connection: Option<Connection>,
}
impl IpcClient {
pub async fn new(socket_path: &str) -> Result<Self, IpcError>;
pub async fn send_request(&self, request: IpcRequest) -> Result<IpcResponse, IpcError>;
pub async fn close(self) -> Result<(), IpcError>;
}
IPC Messages
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum IpcRequest {
CollectProcesses,
GetProcess { pid: u32 },
QueryProcesses { query: String, params: Vec<Value> },
GetAlerts { limit: Option<usize> },
SendAlert { alert: Alert },
HealthCheck,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum IpcResponse {
Processes(Vec<ProcessInfo>),
Process(Option<ProcessInfo>),
QueryResult(Vec<ProcessInfo>),
Alerts(Vec<Alert>),
AlertSent,
Health(HealthStatus),
Error(String),
}
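A hedged usage sketch of the request/response flow; the socket path is a placeholder:
// Connect to the daemon's Unix socket and issue a health check.
let client = IpcClient::new("/var/run/daemoneye/agent.sock").await?;

match client.send_request(IpcRequest::HealthCheck).await? {
    IpcResponse::Health(status) => println!("daemon health: {:?}", status),
    IpcResponse::Error(msg) => eprintln!("request failed: {}", msg),
    other => eprintln!("unexpected response: {:?}", other),
}

// Fetch a single process record by PID over the same connection.
if let IpcResponse::Process(Some(p)) =
    client.send_request(IpcRequest::GetProcess { pid: 1234 }).await?
{
    println!("pid 1234 is {}", p.name);
}

client.close().await?;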
Utility APIs
Logger
pub struct Logger {
level: Level,
format: LogFormat,
output: LogOutput,
}
impl Logger {
pub fn new() -> Self;
pub fn with_level(self, level: Level) -> Self;
pub fn with_format(self, format: LogFormat) -> Self;
pub fn with_output(self, output: LogOutput) -> Self;
pub fn init(self) -> Result<(), LogError>;
}
Metrics
pub struct Metrics {
registry: Registry,
}
impl Metrics {
pub fn new() -> Self;
pub fn counter(&self, name: &str) -> Counter;
pub fn gauge(&self, name: &str) -> Gauge;
pub fn histogram(&self, name: &str) -> Histogram;
pub fn register(&self, metric: Box<dyn Metric>) -> Result<(), MetricsError>;
}
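Typical wiring of both utilities might look as follows; the Level and LogFormat variants, and the inc/set/observe methods on the returned metric handles, are assumptions based on common tracing and Prometheus conventions:
// Initialize structured logging once at startup.
Logger::new()
    .with_level(Level::INFO)
    .with_format(LogFormat::Json)
    .init()?;

// Record a few metrics during a collection cycle.
let metrics = Metrics::new();
metrics.counter("processes_collected_total").inc();
metrics.gauge("active_processes").set(45.0);
metrics.histogram("collection_duration_seconds").observe(2.5);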
Error Handling
Error Types
All APIs use structured error types:
#[derive(Debug, Error)]
pub enum DaemonEyeError {
#[error("Collection error: {0}")]
Collection(#[from] CollectionError),
#[error("Database error: {0}")]
Database(#[from] DatabaseError),
#[error("Alert error: {0}")]
Alert(#[from] AlertError),
#[error("Configuration error: {0}")]
Config(#[from] ConfigError),
#[error("IPC error: {0}")]
Ipc(#[from] IpcError),
#[error("CLI error: {0}")]
Cli(#[from] CliError),
}
Error Context
Use anyhow for error context:
use anyhow::{Context, Result};
pub async fn collect_processes() -> Result<Vec<ProcessInfo>> {
    // Bind the System value first; `processes()` borrows from it.
    let system = sysinfo::System::new_all();
    let processes = system
        .processes()
        .values()
        .map(ProcessInfo::from)
        .collect::<Vec<_>>();
    // `.context()` attaches a readable message to errors from any fallible
    // step; `persist_snapshot` is a hypothetical helper used for illustration.
    persist_snapshot(&processes)
        .await
        .context("Failed to persist collected process information")?;
    Ok(processes)
}
Examples
Basic Process Monitoring
use daemoneye_lib::collector::ProcessCollector;
use daemoneye_lib::storage::Database;
use daemoneye_lib::alerting::AlertManager;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize components
let collector = ProcessCollector::new();
let db = Database::new("processes.db").await?;
let mut alert_manager = AlertManager::new();
// Add alert sink
alert_manager.add_sink(Box::new(SyslogSink::new("daemon")?));
// Collect processes
let processes = collector.collect_processes().await?;
// Store in database
for process in &processes {
db.insert_process(process).await?;
}
// Check for suspicious processes
let suspicious = db.query_processes(
"SELECT * FROM processes WHERE name LIKE '%suspicious%'",
&[]
).await?;
// Send alerts
for process in suspicious {
let alert = Alert::new("suspicious_process", process);
alert_manager.send_alert(alert).await?;
}
Ok(())
}
Custom Alert Sink
use daemoneye_lib::alerting::{AlertSink, Alert, DeliveryResult, DeliveryError};
use async_trait::async_trait;
pub struct CustomSink {
endpoint: String,
client: reqwest::Client,
}
impl CustomSink {
pub fn new(endpoint: &str) -> Self {
Self {
endpoint: endpoint.to_string(),
client: reqwest::Client::new(),
}
}
}
#[async_trait]
impl AlertSink for CustomSink {
async fn send(&self, alert: &Alert) -> Result<DeliveryResult, DeliveryError> {
let response = self.client
.post(&self.endpoint)
.json(alert)
.send()
.await
.map_err(|e| DeliveryError::Network(e.to_string()))?;
if response.status().is_success() {
Ok(DeliveryResult::Success)
} else {
Err(DeliveryError::Http(response.status().as_u16()))
}
}
async fn health_check(&self) -> HealthStatus {
match self.client.get(&self.endpoint).send().await {
Ok(response) if response.status().is_success() => HealthStatus::Healthy,
_ => HealthStatus::Unhealthy,
}
}
fn name(&self) -> &str {
"custom_sink"
}
}
This API reference provides comprehensive documentation for all DaemonEye APIs. For additional examples and usage patterns, consult the specific API documentation.
DaemonEye Core API Reference
This document provides a comprehensive API reference for the DaemonEye core library (daemoneye-lib) and its public interfaces.
Table of Contents
- Core Data Models
- Configuration API
- Storage API
- Detection API
- Alerting API
- Crypto API
- Telemetry API
- Error Types
Core Data Models
ProcessRecord
Represents a single process snapshot with comprehensive metadata.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ProcessRecord {
/// Unique identifier for this process record
pub id: Uuid,
/// Scan identifier this record belongs to
pub scan_id: i64,
/// Collection timestamp in milliseconds since Unix epoch
pub collection_time: i64,
/// Process ID
pub pid: u32,
/// Parent process ID
pub ppid: Option<u32>,
/// Process name
pub name: String,
/// Path to executable file
pub executable_path: Option<PathBuf>,
/// Command line arguments
pub command_line: Vec<String>,
/// Process start time in milliseconds since Unix epoch
pub start_time: Option<i64>,
/// CPU usage percentage
pub cpu_usage: Option<f64>,
/// Memory usage in bytes
pub memory_usage: Option<u64>,
/// SHA-256 hash of executable file
pub executable_hash: Option<String>,
/// Hash algorithm used (always "sha256")
pub hash_algorithm: Option<String>,
/// User ID running the process
pub user_id: Option<String>,
/// Whether process data was accessible
pub accessible: bool,
/// Whether executable file exists
pub file_exists: bool,
/// Platform-specific data
pub platform_data: Option<serde_json::Value>,
}
Example Usage:
use daemoneye_lib::models::ProcessRecord;
use uuid::Uuid;
let process = ProcessRecord {
id: Uuid::new_v4(),
scan_id: 12345,
collection_time: 1640995200000, // 2022-01-01 00:00:00 UTC
pid: 1234,
ppid: Some(1),
name: "chrome".to_string(),
executable_path: Some("/usr/bin/chrome".into()),
command_line: vec!["chrome".to_string(), "--no-sandbox".to_string()],
start_time: Some(1640995100000),
cpu_usage: Some(15.5),
memory_usage: Some(1073741824), // 1GB
executable_hash: Some("a1b2c3d4e5f6...".to_string()),
hash_algorithm: Some("sha256".to_string()),
user_id: Some("1000".to_string()),
accessible: true,
file_exists: true,
platform_data: Some(serde_json::json!({
"thread_count": 25,
"priority": "normal"
})),
};
Alert
Represents a detection result with full context and metadata.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct Alert {
/// Unique alert identifier
pub id: Uuid,
/// Alert timestamp in milliseconds since Unix epoch
pub alert_time: i64,
/// Rule identifier that generated this alert
pub rule_id: String,
/// Alert title
pub title: String,
/// Alert description
pub description: String,
/// Alert severity level
pub severity: AlertSeverity,
/// Scan identifier (if applicable)
pub scan_id: Option<i64>,
/// Affected process IDs
pub affected_processes: Vec<u32>,
/// Number of affected processes
pub process_count: i32,
/// Additional alert data as JSON
pub alert_data: serde_json::Value,
/// Rule execution time in milliseconds
pub rule_execution_time_ms: Option<i64>,
/// Deduplication key
pub dedupe_key: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum AlertSeverity {
Low,
Medium,
High,
Critical,
}
Example Usage:
use daemoneye_lib::models::{Alert, AlertSeverity};
use uuid::Uuid;
let alert = Alert {
id: Uuid::new_v4(),
alert_time: 1640995200000,
rule_id: "suspicious-processes".to_string(),
title: "Suspicious Process Detected".to_string(),
description: "Process with suspicious name detected".to_string(),
severity: AlertSeverity::High,
scan_id: Some(12345),
affected_processes: vec![1234, 5678],
process_count: 2,
alert_data: serde_json::json!({
"processes": [
{"pid": 1234, "name": "malware.exe"},
{"pid": 5678, "name": "backdoor.exe"}
]
}),
rule_execution_time_ms: Some(15),
dedupe_key: "suspicious-processes:malware.exe".to_string(),
};
DetectionRule
Represents a SQL-based detection rule with metadata and versioning.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct DetectionRule {
/// Unique rule identifier
pub id: String,
/// Rule name
pub name: String,
/// Rule description
pub description: Option<String>,
/// Rule version
pub version: i32,
/// SQL query for detection
pub sql_query: String,
/// Whether rule is enabled
pub enabled: bool,
/// Alert severity for this rule
pub severity: AlertSeverity,
/// Rule category
pub category: Option<String>,
/// Rule tags
pub tags: Vec<String>,
/// Rule author
pub author: Option<String>,
/// Creation timestamp
pub created_at: i64,
/// Last update timestamp
pub updated_at: i64,
/// Rule source type
pub source_type: RuleSourceType,
/// Source file path (if applicable)
pub source_path: Option<PathBuf>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum RuleSourceType {
Builtin,
File,
User,
}
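Example Usage (field values are illustrative):
use daemoneye_lib::models::{AlertSeverity, DetectionRule, RuleSourceType};

let rule = DetectionRule {
    id: "suspicious-processes".to_string(),
    name: "Suspicious Process Names".to_string(),
    description: Some("Flags processes with known-bad names".to_string()),
    version: 1,
    sql_query: "SELECT * FROM processes WHERE name LIKE '%malware%'".to_string(),
    enabled: true,
    severity: AlertSeverity::High,
    category: Some("process".to_string()),
    tags: vec!["malware".to_string(), "builtin".to_string()],
    author: Some("DaemonEye".to_string()),
    created_at: 1640995200000,
    updated_at: 1640995200000,
    source_type: RuleSourceType::Builtin,
    source_path: None,
};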
Configuration API
Hierarchical Configuration
The configuration system supports hierarchical loading with environment variable substitution.
use daemoneye_lib::config::{Config, ConfigBuilder, ConfigError};
// Create configuration builder
let mut builder = ConfigBuilder::new();
// Load configuration from multiple sources
builder
.add_embedded_defaults()?
.add_file("/etc/daemoneye/config.yaml")?
.add_file("~/.config/daemoneye/config.yaml")?
.add_environment("DAEMONEYE_")?
.add_cli_args(args)?;
// Build final configuration
let config: Config = builder.build()?;
// Access configuration values
let scan_interval = config.get::<u64>("app.scan_interval_ms")?;
let log_level = config.get::<String>("app.log_level")?;
Configuration Validation
use daemoneye_lib::config::{ConfigValidator, ValidationResult};
let validator = ConfigValidator::new();
let result: ValidationResult = validator.validate(&config)?;
if !result.is_valid() {
for error in result.errors() {
eprintln!("Configuration error: {}", error);
}
}
Environment Variable Substitution
use daemoneye_lib::config::EnvironmentSubstitutor;
let substitutor = EnvironmentSubstitutor::new();
let config_with_env = substitutor.substitute(config)?;
Storage API
Event Store (redb)
High-performance embedded database for process data storage.
use daemoneye_lib::storage::{EventStore, EventStoreConfig, ExportConfig, ExportFormat, ProcessQuery};
// Create event store
let config = EventStoreConfig {
path: "/var/lib/daemoneye/events.redb".into(),
max_size_mb: 10240,
wal_mode: true,
max_connections: 10,
};
let event_store = EventStore::new(config)?;
// Store process records
let processes = vec![process_record1, process_record2];
event_store.store_processes(&processes).await?;
// Query processes
let query = ProcessQuery::new()
.with_pid(1234)
.with_name("chrome")
.with_time_range(start_time, end_time);
let results = event_store.query_processes(&query).await?;
// Export data
let export_config = ExportConfig {
format: ExportFormat::Json,
output_path: "/tmp/export.json".into(),
time_range: Some((start_time, end_time)),
};
event_store.export_data(&export_config).await?;
Audit Ledger (SQLite)
Tamper-evident audit trail with cryptographic integrity.
use daemoneye_lib::storage::{AuditLedger, AuditEntry, AuditRecord};
// Create audit ledger
let audit_ledger = AuditLedger::new("/var/lib/daemoneye/audit.sqlite")?;
// Log audit entry
let entry = AuditEntry {
actor: "procmond".to_string(),
action: "process_collection".to_string(),
payload: serde_json::json!({
"process_count": 150,
"scan_id": 12345
}),
};
let record = audit_ledger.append_entry(&entry).await?;
// Verify audit chain
let verification_result = audit_ledger.verify_chain().await?;
if !verification_result.is_valid() {
eprintln!("Audit chain verification failed: {:?}", verification_result.errors());
}
Detection API
SQL Validator
Comprehensive SQL validation to prevent injection attacks.
use daemoneye_lib::detection::{SqlValidator, ValidationResult, ValidationError};
// Create SQL validator
let validator = SqlValidator::new()
.with_allowed_functions(vec![
"count", "sum", "avg", "min", "max",
"length", "substr", "upper", "lower",
"datetime", "strftime"
])
.with_read_only_mode(true)
.with_timeout(Duration::from_secs(30));
// Validate SQL query
let sql = "SELECT pid, name, cpu_usage FROM processes WHERE cpu_usage > 50.0";
let result: ValidationResult = validator.validate_query(sql)?;
match result {
ValidationResult::Valid => println!("Query is valid"),
ValidationResult::Invalid(errors) => {
for error in errors {
eprintln!("Validation error: {}", error);
}
}
}
Detection Engine
SQL-based detection rule execution with security validation.
use daemoneye_lib::detection::{DetectionEngine, DetectionEngineConfig, DetectionResult, RuleExecutionConfig};
// Create detection engine
let config = DetectionEngineConfig {
event_store: event_store.clone(),
rule_timeout: Duration::from_secs(30),
max_concurrent_rules: 10,
enable_metrics: true,
};
let detection_engine = DetectionEngine::new(config)?;
// Execute detection rules
let execution_config = RuleExecutionConfig {
scan_id: Some(12345),
rule_ids: None, // Execute all enabled rules
timeout: Duration::from_secs(60),
};
let results: Vec<DetectionResult> = detection_engine
.execute_rules(&execution_config)
.await?;
// Process detection results
for result in results {
if !result.alerts.is_empty() {
println!("Rule {} generated {} alerts", result.rule_id, result.alerts.len());
}
}
Rule Manager
Detection rule management with hot-reloading support.
use daemoneye_lib::detection::{RuleManager, RuleManagerConfig};
// Create rule manager
let config = RuleManagerConfig {
rules_path: "/etc/daemoneye/rules".into(),
enable_hot_reload: true,
validation_enabled: true,
};
let rule_manager = RuleManager::new(config)?;
// Load rules
let rules = rule_manager.load_rules().await?;
// Enable/disable rules
rule_manager.enable_rule("suspicious-processes").await?;
rule_manager.disable_rule("test-rule").await?;
// Validate rule
let validation_result = rule_manager.validate_rule_file("/path/to/rule.sql").await?;
// Test rule
let test_result = rule_manager.test_rule("suspicious-processes", test_data).await?;
Alerting API
Alert Manager
Alert generation, deduplication, and delivery management.
use daemoneye_lib::alerting::{AlertManager, AlertManagerConfig, DeduplicationConfig};
// Create alert manager
let config = AlertManagerConfig {
deduplication: DeduplicationConfig {
enabled: true,
window_minutes: 60,
key_fields: vec!["rule_id".to_string(), "process_name".to_string()],
},
max_queue_size: 10000,
delivery_timeout: Duration::from_secs(30),
};
let alert_manager = AlertManager::new(config)?;
// Generate alert
let alert = alert_manager.generate_alert(
&detection_result,
&process_data
).await?;
// Deliver alert
if let Some(alert) = alert {
let delivery_result = alert_manager.deliver_alert(&alert).await?;
println!("Alert delivered: {:?}", delivery_result);
}
Alert Sinks
Pluggable alert delivery channels.
use daemoneye_lib::alerting::sinks::{AlertSink, OutputFormat, StdoutSink, SyslogConfig, SyslogFacility, SyslogSink, WebhookConfig, WebhookSink};
// Create alert sinks
let stdout_sink = StdoutSink::new(OutputFormat::Json);
let syslog_sink = SyslogSink::new(SyslogConfig {
facility: SyslogFacility::Daemon,
tag: "daemoneye".to_string(),
host: "localhost".to_string(),
port: 514,
});
let webhook_sink = WebhookSink::new(WebhookConfig {
url: "https://your-siem.com/webhook".parse()?,
headers: vec![
("Authorization".to_string(), "Bearer token".to_string()),
],
timeout: Duration::from_secs(30),
retry_attempts: 3,
});
// Send alert to all sinks
let sinks: Vec<Box<dyn AlertSink>> = vec![
Box::new(stdout_sink),
Box::new(syslog_sink),
Box::new(webhook_sink),
];
for sink in sinks {
let result = sink.send(&alert).await?;
println!("Sink {} result: {:?}", sink.name(), result);
}
Crypto API
Hash Chain
Cryptographic hash chain for audit trail integrity.
use daemoneye_lib::crypto::{HashChain, HashChainConfig, ChainVerificationResult};
// Create hash chain
let config = HashChainConfig {
algorithm: HashAlgorithm::Blake3,
enable_signatures: true,
private_key_path: Some("/etc/daemoneye/private.key".into()),
};
let mut hash_chain = HashChain::new(config)?;
// Append entry to chain
let entry = AuditEntry {
actor: "procmond".to_string(),
action: "process_collection".to_string(),
payload: serde_json::json!({"process_count": 150}),
};
let record = hash_chain.append_entry(&entry).await?;
println!("Chain entry: {}", record.entry_hash);
// Verify chain integrity
let verification_result: ChainVerificationResult = hash_chain.verify_chain().await?;
if verification_result.is_valid() {
println!("Chain verification successful");
} else {
eprintln!("Chain verification failed: {:?}", verification_result.errors());
}
Digital Signatures
Ed25519 digital signatures for enhanced integrity.
use daemoneye_lib::crypto::{SignatureManager, SignatureConfig};
// Create signature manager
let config = SignatureConfig {
private_key_path: "/etc/daemoneye/private.key".into(),
public_key_path: "/etc/daemoneye/public.key".into(),
};
let signature_manager = SignatureManager::new(config)?;
// Sign data
let data = b"important audit data";
let signature = signature_manager.sign(data)?;
// Verify signature
let is_valid = signature_manager.verify(data, &signature)?;
println!("Signature valid: {}", is_valid);
Telemetry API
Metrics Collection
Prometheus-compatible metrics collection.
use daemoneye_lib::telemetry::{MetricsCollector, MetricType, MetricValue};
// Create metrics collector
let metrics_collector = MetricsCollector::new()?;
// Record metrics
metrics_collector.record_counter("processes_collected_total", 150)?;
metrics_collector.record_gauge("active_processes", 45.0)?;
metrics_collector.record_histogram("collection_duration_seconds", 2.5)?;
// Export metrics
let metrics_data = metrics_collector.export_metrics()?;
println!("Metrics: {}", metrics_data);
Health Monitoring
System health monitoring and status reporting.
use daemoneye_lib::telemetry::{HealthMonitor, HealthStatus, ComponentHealth};
// Create health monitor
let health_monitor = HealthMonitor::new()?;
// Check system health
let health_status: HealthStatus = health_monitor.check_system_health().await?;
println!("System Health: {:?}", health_status.status);
for (component, health) in health_status.components {
println!("{}: {:?}", component, health);
}
// Check specific component
let db_health: ComponentHealth = health_monitor.check_component("database").await?;
println!("Database Health: {:?}", db_health);
Error Types
Core Error Types
use daemoneye_lib::errors::{DaemonEyeError, DaemonEyeErrorKind};
// Error handling example
match some_operation().await {
Ok(result) => println!("Success: {:?}", result),
Err(DaemonEyeError::Configuration(e)) => {
eprintln!("Configuration error: {}", e);
}
Err(DaemonEyeError::Database(e)) => {
eprintln!("Database error: {}", e);
}
Err(DaemonEyeError::Detection(e)) => {
eprintln!("Detection error: {}", e);
}
Err(e) => {
eprintln!("Unexpected error: {}", e);
}
}
Error Categories
- ConfigurationError: Configuration loading and validation errors
- DatabaseError: Database operation errors
- DetectionError: Detection rule execution errors
- AlertingError: Alert generation and delivery errors
- CryptoError: Cryptographic operation errors
- ValidationError: Data validation errors
- IoError: I/O operation errors
This API reference provides comprehensive documentation for the DaemonEye core library. For additional examples and advanced usage, consult the source code and integration tests.
Deployment Documentation
This section contains comprehensive deployment guides for DaemonEye, covering installation, configuration, and deployment strategies across different platforms and environments.
Table of Contents
- Installation Guide
- Configuration Guide
- Docker Deployment
- Deployment Overview
- Quick Start
- Configuration Management
- Production Deployment
- Container Deployment
- Cloud Deployment
- Troubleshooting
- Best Practices
Installation Guide
Complete installation instructions for all supported platforms including Linux, macOS, and Windows.
Configuration Guide
Comprehensive configuration management covering all aspects of DaemonEye setup, tuning, and customization.
Docker Deployment
Complete guide for containerizing and deploying DaemonEye using Docker and Docker Compose.
Deployment Overview
Supported Platforms
DaemonEye supports deployment on:
- Linux: Ubuntu, RHEL, CentOS, Debian, Arch Linux
- macOS: 10.14+ (Mojave or later)
- Windows: Windows 10+ or Windows Server 2016+
- Containers: Docker, Podman, containerd
note
Container Runtime Notes:
- Docker: Most common, requires privileged containers for host process monitoring
- Podman: Rootless containers supported, better security isolation
- ⚠️ eBPF Limitation: Rootless containers may have limited eBPF functionality due to kernel capabilities restrictions
- containerd: Lower-level runtime, requires additional configuration for privileged access
Deployment Methods
- Package Managers: APT, YUM, Homebrew, Chocolatey
- Pre-built Binaries: Direct download and installation
- Source Build: Compile from source code
- Release Tooling: cargo-dist for automated cross-platform builds
- Containers: Docker images and container deployment
- Cloud: AWS, Azure, GCP marketplace deployments
caution
Orchestration platforms (Kubernetes, Docker Swarm, Nomad) are not officially supported. While technically possible to deploy DaemonEye on these platforms, they are not recommended for production use due to:
- Lack of native DaemonSet support (except Kubernetes)
- Complex privileged container requirements
- Node-specific monitoring constraints
- Limited testing and validation
Architecture Considerations
Single Node Deployment
For small to medium environments:
graph TB subgraph "Single Node" A[<b>ProcMonD</b>] B[<b>daemoneye-agent</b>] C[<b>CLI</b>] D[<b>Database</b>] A <--> B B <--> C B <--> D end
Multi-Node Deployment
For large environments with multiple monitoring targets:
graph TB subgraph "Node 1" A1[<b>ProcMonD</b>] B1[<b>daemoneye-agent</b>] A1 <--> B1 end subgraph "Node 2" A2[<b>ProcMonD</b>] B2[<b>daemoneye-agent</b>] A2 <--> B2 end subgraph "Node 3" A3[<b>ProcMonD</b>] B3[<b>daemoneye-agent</b>] A3 <--> B3 end subgraph "Central Management" C[<b>Security Center</b>] D[<b>Database</b>] C <--> D end B1 --> C B2 --> C B3 --> C
Container Deployment
For containerized environments, ProcMonD can be deployed in two ways:
warning
Containerized ProcMonD only works on Linux hosts. macOS and Windows must use host process deployment.
graph TB subgraph "Container Host" A2[<b>ProcMonD</b><br/>Host Process] subgraph "Containers" A1[<strong>ProcMonD</strong><br/>Container<br/>privileged] B[<b>daemoneye-agent</b><br/>Container] C[<b>CLI</b>] end A1 <-->|Option 1| B A2 <-->|Option 2| B B <--> C end
Deployment Recommendations:
- Option 1 (Containerized ProcMonD): Recommended for containerized environments where you want full containerization. Requires privileged container access to monitor host processes.
- Option 2 (Host Process ProcMonD): Recommended for hybrid deployments where you want to minimize container privileges while maintaining containerized management components.
warning
Option 1 requires running a privileged container, which grants the container access to the host system. This increases the attack surface and should only be used in trusted environments with proper security controls in place.
Quick Start
Docker Quick Start
# Pull the latest image
docker pull daemoneye/daemoneye:latest
# Run with basic configuration
docker run -d --name daemoneye \
--privileged \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/daemoneye:latest
# Check status
docker logs daemoneye
Package Manager Quick Start
Ubuntu/Debian:
# Add repository
wget -qO - https://packages.daemoneye.com/apt/key.gpg | sudo apt-key add -
echo "deb https://packages.daemoneye.com/apt stable main" | sudo tee /etc/apt/sources.list.d/daemoneye.list
# Install
sudo apt update
sudo apt install daemoneye
# Start service
sudo systemctl start daemoneye
sudo systemctl enable daemoneye
RHEL/CentOS:
# Add repository
sudo tee /etc/yum.repos.d/daemoneye.repo << 'EOF'
[daemoneye]
name=DaemonEye
baseurl=https://packages.daemoneye.com/yum/stable/
enabled=1
gpgcheck=1
gpgkey=https://packages.daemoneye.com/apt/key.gpg
EOF
# Install
sudo yum install daemoneye
# Start service
sudo systemctl start daemoneye
sudo systemctl enable daemoneye
macOS:
# Install with Homebrew
brew tap daemoneye/daemoneye
brew install daemoneye
# Start service
brew services start daemoneye
Windows:
# Install with Chocolatey
choco install daemoneye
# Start service
Start-Service DaemonEye
Configuration Management
Environment Variables
DaemonEye supports configuration through environment variables:
# Basic configuration
export DAEMONEYE_LOG_LEVEL=info
export DAEMONEYE_SCAN_INTERVAL_MS=30000
export DAEMONEYE_BATCH_SIZE=1000
# Database configuration
export DAEMONEYE_DATABASE_PATH=/var/lib/daemoneye/processes.db
export DAEMONEYE_DATABASE_RETENTION_DAYS=30
# Alerting configuration
export DAEMONEYE_ALERTING_ENABLED=true
export DAEMONEYE_ALERTING_SINKS_0_TYPE=syslog
export DAEMONEYE_ALERTING_SINKS_0_FACILITY=daemon
Configuration Files
Hierarchical configuration with multiple sources:
- Command-line flags (highest precedence)
- Environment variables (DAEMONEYE_*)
- User configuration file (~/.config/daemoneye/config.yaml)
- System configuration file (/etc/daemoneye/config.yaml)
- Embedded defaults (lowest precedence)
Basic Configuration
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /var/lib/daemoneye
log_dir: /var/log/daemoneye
database:
path: /var/lib/daemoneye/processes.db
retention_days: 30
max_connections: 10
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
priority: info
- type: webhook
enabled: false
url: https://alerts.example.com/webhook
timeout_ms: 5000
retry_attempts: 3
security:
enable_privilege_dropping: true
drop_to_user: daemoneye
drop_to_group: daemoneye
enable_audit_logging: true
Production Deployment
Resource Requirements
Minimum Requirements:
- CPU: 1 core
- RAM: 512MB
- Storage: 1GB
- Network: 100Mbps
Recommended Requirements:
- CPU: 2+ cores
- RAM: 2GB+
- Storage: 10GB+
- Network: 1Gbps
High-Performance Requirements:
- CPU: 4+ cores
- RAM: 8GB+
- Storage: 100GB+
- Network: 10Gbps
Security Considerations
- Principle of Least Privilege: Run with minimal required privileges
- Network Security: Use TLS for all network communications
- Data Protection: Encrypt sensitive data at rest and in transit
- Access Control: Implement proper authentication and authorization
- Audit Logging: Enable comprehensive audit logging
Performance Tuning
CPU Optimization:
app:
scan_interval_ms: 60000 # Reduce scan frequency
batch_size: 500 # Smaller batches
max_cpu_percent: 5.0 # Limit CPU usage
Memory Optimization:
app:
max_memory_mb: 512 # Limit memory usage
batch_size: 250 # Smaller batches
gc_interval_ms: 300000 # Garbage collection interval
Database Optimization:
database:
cache_size: -128000 # 128MB cache
temp_store: MEMORY # Use memory for temp tables
synchronous: NORMAL # Balance safety and performance
wal_mode: true # Enable WAL mode
Monitoring and Observability
Metrics Collection:
observability:
enable_metrics: true
metrics_port: 9090
metrics_path: /metrics
Health Checks:
observability:
enable_health_checks: true
health_check_port: 8080
health_check_path: /health
Logging:
observability:
logging:
enable_structured_logging: true
log_format: json
enable_log_rotation: true
max_log_file_size_mb: 100
max_log_files: 10
Container Deployment
Docker Compose
version: '3.8'
services:
procmond:
image: daemoneye/procmond:latest
container_name: daemoneye-procmond
privileged: true
volumes:
- /var/lib/daemoneye:/data
- /var/log/daemoneye:/logs
- ./config:/config:ro
environment:
- DAEMONEYE_LOG_LEVEL=info
- DAEMONEYE_DATA_DIR=/data
- DAEMONEYE_LOG_DIR=/logs
command: [--config, /config/config.yaml]
restart: unless-stopped
daemoneye-agent:
image: daemoneye/daemoneye-agent:latest
container_name: daemoneye-agent
depends_on:
- procmond
volumes:
- /var/lib/daemoneye:/data
- /var/log/daemoneye:/logs
- ./config:/config:ro
environment:
- DAEMONEYE_LOG_LEVEL=info
- DAEMONEYE_DATA_DIR=/data
- DAEMONEYE_LOG_DIR=/logs
command: [--config, /config/config.yaml]
restart: unless-stopped
daemoneye-cli:
image: daemoneye/daemoneye-cli:latest
container_name: daemoneye-cli
depends_on:
- daemoneye-agent
volumes:
- /var/lib/daemoneye:/data
- ./config:/config:ro
environment:
- DAEMONEYE_DATA_DIR=/data
command: [--help]
restart: "no"
Cloud Deployment
AWS Deployment
EC2 Instance:
# Launch EC2 instance
aws ec2 run-instances \
--image-id ami-0c02fb55956c7d316 \
--instance-type t3.medium \
--key-name your-key \
--security-group-ids sg-12345678 \
--subnet-id subnet-12345678 \
--user-data file://user-data.sh
Azure Deployment
Azure Container Instances:
# Deploy container
az container create \
--resource-group myResourceGroup \
--name daemoneye \
--image daemoneye/daemoneye:latest \
--cpu 1 \
--memory 2 \
--ports 8080 9090 \
--environment-variables DAEMONEYE_LOG_LEVEL=info
Troubleshooting
Common Issues
Service Won't Start:
# Check service status
sudo systemctl status daemoneye
# Check logs
sudo journalctl -u daemoneye -f
# Check configuration
daemoneye-cli config validate
Permission Denied:
# Check file permissions
ls -la /var/lib/daemoneye/
ls -la /var/log/daemoneye/
# Fix permissions
sudo chown -R daemoneye:daemoneye /var/lib/daemoneye
sudo chown -R daemoneye:daemoneye /var/log/daemoneye
Database Issues:
# Check database status
daemoneye-cli database status
# Check database integrity
daemoneye-cli database integrity-check
# Repair database
daemoneye-cli database repair
Performance Issues:
# Check system metrics
daemoneye-cli metrics
# Check resource usage
daemoneye-cli system resources
# Optimize configuration
daemoneye-cli config optimize
Debug Mode
Enable Debug Logging:
# Set debug level
daemoneye-cli config set app.log_level debug
# Restart service
sudo systemctl restart daemoneye
# Monitor debug logs
daemoneye-cli logs --level debug --tail 100
Debug Specific Components:
# Debug process collection
daemoneye-cli debug collector
# Debug alert delivery
daemoneye-cli debug alerts
# Debug database operations
daemoneye-cli debug database
Health Checks
System Health:
# Overall health
daemoneye-cli health
# Component health
daemoneye-cli health --component procmond
daemoneye-cli health --component daemoneye-agent
daemoneye-cli health --component database
# Detailed health report
daemoneye-cli health --verbose
Best Practices
Deployment
- Start Small: Begin with basic monitoring and gradually add features
- Test Configuration: Always validate configuration before deployment
- Monitor Resources: Keep an eye on CPU and memory usage
- Regular Updates: Keep DaemonEye updated with latest releases
- Backup Data: Regularly backup configuration and data
Security
- Principle of Least Privilege: Run with minimal required privileges
- Network Security: Use TLS for all network communications
- Access Control: Implement proper authentication and authorization
- Audit Logging: Enable comprehensive audit logging
- Regular Updates: Keep security patches current
Performance
- Resource Monitoring: Monitor CPU, memory, and storage usage
- Configuration Tuning: Optimize configuration for your environment
- Load Testing: Test performance under expected load
- Capacity Planning: Plan for growth and scaling
- Regular Maintenance: Perform regular maintenance and optimization
This deployment documentation provides comprehensive guidance for deploying DaemonEye. For additional help, consult the specific deployment guides or contact support.
DaemonEye Installation Guide
This guide provides comprehensive installation instructions for DaemonEye across different platforms and deployment scenarios.
Table of Contents
- System Requirements
- Installation Methods
- Platform-Specific Installation
- Service Configuration
- Post-Installation Setup
- Verification
- Troubleshooting
System Requirements
Minimum Requirements
Operating System:
- Linux: Kernel 3.10+ (Ubuntu 16.04+, RHEL 7.6+, Debian 9+)
- macOS: 10.14+ (Mojave or later)
- Windows: Windows 10+ or Windows Server 2016+
Hardware:
- CPU: x86_64 or ARM64 processor
- RAM: 512MB available memory
- Disk: 1GB free space
- Network: Internet access for initial setup (optional)
Privileges:
- Linux: CAP_SYS_PTRACE capability or root access
- Windows: SeDebugPrivilege or Administrator access
- macOS: Appropriate entitlements or root access
Recommended Requirements
Operating System:
- Linux: Kernel 4.15+ (Ubuntu 18.04+, RHEL 8+, Debian 10+)
- macOS: 11+ (Big Sur or later)
- Windows: Windows 11+ or Windows Server 2019+
Hardware:
- CPU: 2+ cores
- RAM: 2GB+ available memory
- Disk: 10GB+ free space
- Network: Stable internet connection
Enhanced Features (Enterprise Tier):
- Linux: Kernel 4.7+ for eBPF support
- Windows: Windows 7+ for ETW support
- macOS: 10.15+ for EndpointSecurity support
Installation Methods
Method 1: Pre-built Binaries (Recommended)
Download Latest Release:
# Linux x86_64
wget https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-linux-x86_64.tar.gz
tar -xzf daemoneye-linux-x86_64.tar.gz
# Linux ARM64
wget https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-linux-aarch64.tar.gz
tar -xzf daemoneye-linux-aarch64.tar.gz
# macOS x86_64
curl -L https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-macos-x86_64.tar.gz | tar -xz
# macOS ARM64 (Apple Silicon)
curl -L https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-macos-aarch64.tar.gz | tar -xz
# Windows x86_64
# Download from GitHub releases and extract
Install to System Directories:
# Linux/macOS
sudo cp procmond daemoneye-agent daemoneye-cli /usr/local/bin/
sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
# Create system directories
sudo mkdir -p /etc/daemoneye
sudo mkdir -p /var/lib/daemoneye
sudo mkdir -p /var/log/daemoneye
# Set ownership
sudo chown -R $USER:$USER /etc/daemoneye
sudo chown -R $USER:$USER /var/lib/daemoneye
sudo chown -R $USER:$USER /var/log/daemoneye
# Windows
# Copy to C:\Program Files\DaemonEye\
# Add to PATH environment variable
Method 2: Package Managers
Homebrew (macOS):
# Add DaemonEye tap
brew tap daemoneye/daemoneye
# Install DaemonEye
brew install daemoneye
# Start service
brew services start daemoneye
APT (Ubuntu/Debian):
# Add repository key
wget -qO - https://packages.daemoneye.com/apt/key.gpg | sudo apt-key add -
# Add repository
echo "deb https://packages.daemoneye.com/apt stable main" | sudo tee /etc/apt/sources.list.d/daemoneye.list
# Update package list
sudo apt update
# Install DaemonEye
sudo apt install daemoneye
# Start service
sudo systemctl start daemoneye
sudo systemctl enable daemoneye
YUM/DNF (RHEL/CentOS/Fedora):
# Add repository
sudo tee /etc/yum.repos.d/daemoneye.repo << 'EOF'
[daemoneye]
name=DaemonEye
baseurl=https://packages.daemoneye.com/yum/stable/
enabled=1
gpgcheck=1
gpgkey=https://packages.daemoneye.com/apt/key.gpg
EOF
# Install DaemonEye
sudo yum install daemoneye # RHEL/CentOS
# or
sudo dnf install daemoneye # Fedora
# Start service
sudo systemctl start daemoneye
sudo systemctl enable daemoneye
Chocolatey (Windows):
# Install DaemonEye
choco install daemoneye
# Start service
Start-Service DaemonEye
Method 3: From Source
Install Rust (1.85+):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
rustup update
Clone and Build:
# Clone repository
git clone https://github.com/daemoneye/daemoneye.git
cd daemoneye
# Build in release mode
cargo build --release
# Install built binaries
sudo cp target/release/procmond target/release/daemoneye-agent target/release/daemoneye-cli /usr/local/bin/
sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
Cross-Platform Building:
# Install cross-compilation toolchain
rustup target add x86_64-unknown-linux-gnu
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
# Build for different targets
cargo build --release --target x86_64-unknown-linux-gnu
cargo build --release --target aarch64-unknown-linux-gnu
cargo build --release --target x86_64-apple-darwin
cargo build --release --target aarch64-apple-darwin
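# Note: the *-apple-darwin builds above assume a macOS host. On Linux,
# rustup target add only installs the target's standard library; linking
# for Apple targets additionally needs an Apple SDK/cross toolchain
# (e.g. osxcross).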
Method 4: Using cargo-dist (Release Tooling)
DaemonEye uses cargo-dist and cargo-release for automated building, packaging, and releasing. This is the recommended method for developers and contributors who want to build release-quality binaries.
Install cargo-dist and cargo-release:
# Install cargo-dist for cross-platform binary distribution
cargo install cargo-dist
# Install cargo-release for automated versioning and releasing
cargo install cargo-release
Build with cargo-dist:
# Build and package for all supported platforms
cargo dist build
# Build for specific platforms only
cargo dist build --targets x86_64-unknown-linux-gnu,aarch64-apple-darwin
# Build and create installers
cargo dist build --artifacts=all
Release with cargo-release:
# Prepare a new release (updates version, changelog, etc.)
cargo release --execute
# Dry run to see what would be changed
cargo release --dry-run
# Release with specific version
cargo release 1.0.0 --execute
cargo-dist Configuration:
The project includes a Cargo.toml configuration for cargo-dist that defines:
- Supported platforms: Linux (x86_64, aarch64), macOS (x86_64, aarch64), Windows (x86_64)
- Package formats: Tarballs, ZIP files, and platform-specific installers
- Asset inclusion: Binaries, documentation, and configuration templates
- Signing: GPG signing for release artifacts
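If you are wiring this up in a fork, cargo dist init generates the table interactively and writes it under [workspace.metadata.dist] in the workspace Cargo.toml. The sketch below shows the rough shape; the version, targets, and installers are illustrative assumptions, not DaemonEye's actual settings:
# Generate or update the dist configuration interactively
cargo dist init
# Resulting Cargo.toml table (illustrative values):
# [workspace.metadata.dist]
# cargo-dist-version = "0.14.0"
# targets = ["x86_64-unknown-linux-gnu", "aarch64-apple-darwin", "x86_64-pc-windows-msvc"]
# installers = ["shell", "powershell"]
# ci = ["github"]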
Benefits of cargo-dist:
- Cross-platform builds: Single command builds for all supported platforms
- Consistent packaging: Standardized package formats across platforms
- Automated signing: GPG signing of release artifacts
- Installation scripts: Platform-specific installation helpers
- Checksum generation: Automatic generation of SHA-256 checksums
- CI/CD integration: Designed for automated release pipelines
Release Workflow:
# 1. Update version and changelog
cargo release --execute
# 2. Build and package all platforms
cargo dist build --artifacts=all
# 3. Test packages locally
cargo dist test
# 4. Publish to GitHub releases
cargo dist publish
Note for contributors: If you're contributing to DaemonEye and need to test your changes, use cargo dist build to create release-quality binaries that match the official distribution format.
Platform-Specific Installation
Linux Installation
Ubuntu/Debian:
# Update system
sudo apt update && sudo apt upgrade -y
# Install dependencies
sudo apt install -y ca-certificates curl wget gnupg lsb-release
# Add DaemonEye repository
wget -qO - https://packages.daemoneye.com/apt/key.gpg | sudo apt-key add -
echo "deb https://packages.daemoneye.com/apt stable main" | sudo tee /etc/apt/sources.list.d/daemoneye.list
# Install DaemonEye
sudo apt update
sudo apt install daemoneye
# Configure service
sudo systemctl enable daemoneye
sudo systemctl start daemoneye
RHEL/CentOS:
# Update system
sudo yum update -y
# Install dependencies
sudo yum install -y ca-certificates curl wget
# Add DaemonEye repository
sudo tee /etc/yum.repos.d/daemoneye.repo << 'EOF'
[daemoneye]
name=DaemonEye
baseurl=https://packages.daemoneye.com/yum/stable/
enabled=1
gpgcheck=1
gpgkey=https://packages.daemoneye.com/apt/key.gpg
EOF
# Install DaemonEye
sudo yum install daemoneye
# Configure service
sudo systemctl enable daemoneye
sudo systemctl start daemoneye
Arch Linux:
# Install from AUR
yay -S daemoneye
# Or build from source
git clone https://aur.archlinux.org/daemoneye.git
cd daemoneye
makepkg -si
macOS Installation
Using Homebrew:
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Add DaemonEye tap
brew tap daemoneye/daemoneye
# Install DaemonEye
brew install daemoneye
# Start service
brew services start daemoneye
Manual Installation:
# Download and extract
curl -L https://github.com/daemoneye/daemoneye/releases/latest/download/daemoneye-macos-x86_64.tar.gz | tar -xz
# Install to system directories
sudo cp procmond daemoneye-agent daemoneye-cli /usr/local/bin/
sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
# Create directories
sudo mkdir -p /Library/Application\ Support/DaemonEye
sudo mkdir -p /var/lib/daemoneye
sudo mkdir -p /var/log/daemoneye
# Set ownership
sudo chown -R $(whoami):staff /Library/Application\ Support/DaemonEye
sudo chown -R $(whoami):staff /var/lib/daemoneye
sudo chown -R $(whoami):staff /var/log/daemoneye
Windows Installation
Using Chocolatey:
# Install Chocolatey if not already installed
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
# Install DaemonEye
choco install daemoneye
# Start service
Start-Service DaemonEye
Manual Installation:
# Download from GitHub releases
# Extract to C:\Program Files\DaemonEye\
# Add to PATH
$env:PATH += ";C:\Program Files\DaemonEye"
# Create data directories
New-Item -ItemType Directory -Path "C:\ProgramData\DaemonEye" -Force
New-Item -ItemType Directory -Path "C:\ProgramData\DaemonEye\data" -Force
New-Item -ItemType Directory -Path "C:\ProgramData\DaemonEye\logs" -Force
Service Configuration
Linux (systemd)
Create Service File:
sudo tee /etc/systemd/system/daemoneye.service << 'EOF'
[Unit]
Description=DaemonEye Security Monitoring Agent
Documentation=https://docs.daemoneye.com
After=network.target
Wants=network.target
[Service]
Type=notify
User=daemoneye
Group=daemoneye
ExecStart=/usr/local/bin/daemoneye-agent --config /etc/daemoneye/config.yaml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=30
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=daemoneye
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/daemoneye /var/log/daemoneye
CapabilityBoundingSet=CAP_SYS_PTRACE
AmbientCapabilities=CAP_SYS_PTRACE
[Install]
WantedBy=multi-user.target
EOF
Create User and Directories:
# Create daemoneye user
sudo useradd -r -s /bin/false -d /var/lib/daemoneye daemoneye
# Set ownership
sudo chown -R daemoneye:daemoneye /var/lib/daemoneye
sudo chown -R daemoneye:daemoneye /var/log/daemoneye
sudo chown -R daemoneye:daemoneye /etc/daemoneye
# Reload systemd and start service
sudo systemctl daemon-reload
sudo systemctl enable daemoneye
sudo systemctl start daemoneye
macOS (launchd)
Create LaunchDaemon:
sudo tee /Library/LaunchDaemons/com.daemoneye.agent.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.daemoneye.agent</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/daemoneye-agent</string>
<string>--config</string>
<string>/Library/Application Support/DaemonEye/config.yaml</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/var/log/daemoneye/agent.log</string>
<key>StandardErrorPath</key>
<string>/var/log/daemoneye/agent.error.log</string>
<key>UserName</key>
<string>daemoneye</string>
<key>GroupName</key>
<string>staff</string>
</dict>
</plist>
EOF
Load and Start Service:
# Load service
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
# Check status
sudo launchctl list | grep daemoneye
Windows (Service)
Create Service:
# Create service
New-Service -Name "DaemonEye Agent" -BinaryPathName "C:\Program Files\DaemonEye\daemoneye-agent.exe --config C:\ProgramData\DaemonEye\config.yaml" -DisplayName "DaemonEye Security Monitoring Agent" -StartupType Automatic
# Start service
Start-Service "DaemonEye Agent"
# Check status
Get-Service "DaemonEye Agent"
Post-Installation Setup
Generate Initial Configuration
# Generate default configuration
daemoneye-cli config init --output /etc/daemoneye/config.yaml
# Or for user-specific configuration
daemoneye-cli config init --output ~/.config/daemoneye/config.yaml
Create Data Directories
# Linux/macOS
sudo mkdir -p /var/lib/daemoneye
sudo mkdir -p /var/log/daemoneye
sudo chown -R $USER:$USER /var/lib/daemoneye
sudo chown -R $USER:$USER /var/log/daemoneye
# Windows
mkdir "C:\ProgramData\DaemonEye\data"
mkdir "C:\ProgramData\DaemonEye\logs"
Set Up Basic Rules
# Create rules directory
mkdir -p /etc/daemoneye/rules
# Create a basic rule
cat > /etc/daemoneye/rules/suspicious-processes.sql << 'EOF'
-- Detect processes with suspicious names
SELECT
pid,
name,
executable_path,
command_line,
collection_time
FROM processes
WHERE
name IN ('malware.exe', 'backdoor.exe', 'trojan.exe')
OR name LIKE '%suspicious%'
OR executable_path LIKE '%temp%'
ORDER BY collection_time DESC;
EOF
# Validate the rule
daemoneye-cli rules validate /etc/daemoneye/rules/suspicious-processes.sql
Configure Alerting
# Enable syslog alerts
daemoneye-cli config set alerting.sinks[0].enabled true
daemoneye-cli config set alerting.sinks[0].type syslog
daemoneye-cli config set alerting.sinks[0].facility daemon
# Enable webhook alerts (if SIEM is available)
daemoneye-cli config set alerting.sinks[1].enabled true
daemoneye-cli config set alerting.sinks[1].type webhook
daemoneye-cli config set alerting.sinks[1].url "https://your-siem.com/webhook"
daemoneye-cli config set alerting.sinks[1].headers.Authorization "Bearer ${WEBHOOK_TOKEN}"
Verification
Check Installation
# Check binary versions
procmond --version
daemoneye-agent --version
daemoneye-cli --version
# Check service status
# Linux
sudo systemctl status daemoneye
# macOS
sudo launchctl list | grep daemoneye
# Windows
Get-Service "DaemonEye Agent"
Test Basic Functionality
# Check system health
daemoneye-cli health
# List recent processes
daemoneye-cli query "SELECT pid, name, executable_path FROM processes LIMIT 10"
# Check alerts
daemoneye-cli alerts list
# Test rule execution
daemoneye-cli rules test suspicious-processes
Performance Verification
# Check system metrics
daemoneye-cli metrics
# Monitor process collection
daemoneye-cli watch processes --filter "cpu_usage > 10.0"
# Check database status
daemoneye-cli database status
Troubleshooting
Common Installation Issues
Permission Denied:
# Check file permissions
ls -la /usr/local/bin/procmond
ls -la /usr/local/bin/daemoneye-agent
ls -la /usr/local/bin/daemoneye-cli
# Fix permissions
sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
Service Won't Start:
# Check service logs
# Linux
sudo journalctl -u daemoneye -f
# macOS
sudo log show --predicate 'process == "daemoneye-agent"' --last 1h
# Windows
Get-EventLog -LogName Application -Source "DaemonEye" -Newest 10
Configuration Errors:
# Validate configuration
daemoneye-cli config validate
# Check configuration syntax
daemoneye-cli config check
# Show effective configuration
daemoneye-cli config show --include-defaults
Database Issues:
# Check database status
daemoneye-cli database status
# Check database integrity
daemoneye-cli database integrity-check
# Repair database
daemoneye-cli database repair
Debug Mode
# Enable debug logging
daemoneye-cli config set app.log_level debug
# Restart service
# Linux
sudo systemctl restart daemoneye
# macOS
sudo launchctl unload /Library/LaunchDaemons/com.daemoneye.agent.plist
sudo launchctl load /Library/LaunchDaemons/com.daemoneye.agent.plist
# Windows
Restart-Service "DaemonEye Agent"
# Monitor debug logs
daemoneye-cli logs --level debug --tail 100
Performance Issues
High CPU Usage:
# Check process collection rate
daemoneye-cli metrics --metric collection_rate
# Reduce scan interval
daemoneye-cli config set app.scan_interval_ms 60000
# Check for problematic rules
daemoneye-cli rules list --performance
High Memory Usage:
# Check memory usage
daemoneye-cli metrics --metric memory_usage
# Reduce batch size
daemoneye-cli config set app.batch_size 500
# Check database size
daemoneye-cli database size
Slow Queries:
# Check query performance
daemoneye-cli database query-stats
# Optimize database
daemoneye-cli database optimize
# Check for slow rules
daemoneye-cli rules list --slow
Getting Help
- Documentation: Check the full documentation in docs/
- Logs: Review logs with daemoneye-cli logs
- Health Checks: Use daemoneye-cli health for system status
- Community: Join discussions on GitHub or community forums
- Support: Contact support for commercial assistance
This installation guide provides comprehensive instructions for installing DaemonEye across different platforms. For additional help, consult the troubleshooting section or contact support.
DaemonEye Configuration Guide
This guide provides comprehensive configuration instructions for DaemonEye, covering all aspects of system setup, tuning, and customization.
Table of Contents
- Configuration Overview
- Configuration Sources
- Configuration Structure
- Core Settings
- Database Configuration
- Alerting Configuration
- Security Configuration
- Performance Tuning
- Platform-Specific Settings
- Advanced Configuration
- Configuration Management
- Troubleshooting
Configuration Overview
DaemonEye uses a hierarchical configuration system that allows for flexible and maintainable settings across different environments and deployment scenarios.
Configuration Philosophy
- Hierarchical: Multiple sources with clear precedence
- Environment-Aware: Different settings for dev/staging/prod
- Secure: Sensitive settings protected and encrypted
- Validated: All configuration validated at startup
- Hot-Reloadable: Most settings can be updated without restart
Configuration Precedence
- Command-line flags (highest precedence)
- Environment variables (DaemonEye_*)
- User configuration file (~/.config/daemoneye/config.yaml)
- System configuration file (/etc/daemoneye/config.yaml)
- Embedded defaults (lowest precedence)
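For example, if the system file sets scan_interval_ms to 30000, an environment variable overrides the file, and a command-line flag overrides both:
# The environment variable (10000) wins over the file value (30000)
DaemonEye_SCAN_INTERVAL_MS=10000 daemoneye-agent --config /etc/daemoneye/config.yaml
# The flag (5000) wins over both the environment variable and the file
DaemonEye_SCAN_INTERVAL_MS=10000 daemoneye-agent --config /etc/daemoneye/config.yaml --scan-interval 5000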
Configuration Sources
Command-Line Flags
# Basic configuration
daemoneye-agent --config /path/to/config.yaml --log-level debug
# Override specific settings
daemoneye-agent --scan-interval 30000 --batch-size 1000
# Show effective configuration
daemoneye-cli config show --include-defaults
Environment Variables
# Set environment variables
export DaemonEye_LOG_LEVEL=debug
export DaemonEye_SCAN_INTERVAL_MS=30000
export DaemonEye_DATABASE_PATH=/var/lib/daemoneye/processes.db
export DaemonEye_ALERTING_SINKS_0_TYPE=syslog
export DaemonEye_ALERTING_SINKS_0_FACILITY=daemon
# Run with environment configuration
daemoneye-agent
Configuration Files
YAML Format (recommended):
# /etc/daemoneye/config.yaml
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /var/lib/daemoneye
log_dir: /var/log/daemoneye
database:
path: /var/lib/daemoneye/processes.db
max_connections: 10
retention_days: 30
alerting:
sinks:
- type: syslog
enabled: true
facility: daemon
- type: webhook
enabled: false
url: https://alerts.example.com/webhook
headers:
Authorization: Bearer ${WEBHOOK_TOKEN}
JSON Format:
{
"app": {
"scan_interval_ms": 30000,
"batch_size": 1000,
"log_level": "info",
"data_dir": "/var/lib/daemoneye",
"log_dir": "/var/log/daemoneye"
},
"database": {
"path": "/var/lib/daemoneye/processes.db",
"max_connections": 10,
"retention_days": 30
},
"alerting": {
"sinks": [
{
"type": "syslog",
"enabled": true,
"facility": "daemon"
}
]
}
}
TOML Format:
[app]
scan_interval_ms = 30000
batch_size = 1000
log_level = "info"
data_dir = "/var/lib/daemoneye"
log_dir = "/var/log/daemoneye"
[database]
path = "/var/lib/daemoneye/processes.db"
max_connections = 10
retention_days = 30
[[alerting.sinks]]
type = "syslog"
enabled = true
facility = "daemon"
Configuration Structure
Complete Configuration Schema
# Application settings
app:
scan_interval_ms: 30000 # Process scan interval in milliseconds
batch_size: 1000 # Batch size for database operations
log_level: info # Logging level (trace, debug, info, warn, error)
data_dir: /var/lib/daemoneye # Data directory
log_dir: /var/log/daemoneye # Log directory
pid_file: /var/run/daemoneye.pid # PID file location
user: daemoneye # User to run as
group: daemoneye # Group to run as
max_memory_mb: 512 # Maximum memory usage in MB
max_cpu_percent: 5.0 # Maximum CPU usage percentage
# Database configuration
database:
path: /var/lib/daemoneye/processes.db # Database file path
max_connections: 10 # Maximum database connections
retention_days: 30 # Data retention period
vacuum_interval_hours: 24 # Database vacuum interval
wal_mode: true # Enable WAL mode
synchronous: NORMAL # Synchronous mode
cache_size: -64000 # Cache size in KB (negative = KB)
temp_store: MEMORY # Temporary storage location
journal_mode: WAL # Journal mode
# Alerting configuration
alerting:
enabled: true # Enable alerting
max_queue_size: 10000 # Maximum alert queue size
delivery_timeout_ms: 5000 # Alert delivery timeout
retry_attempts: 3 # Number of retry attempts
retry_delay_ms: 1000 # Delay between retries
circuit_breaker_threshold: 5 # Circuit breaker failure threshold
circuit_breaker_timeout_ms: 60000 # Circuit breaker timeout
# Alert sinks
sinks:
- type: syslog # Sink type
enabled: true # Enable this sink
facility: daemon # Syslog facility
priority: info # Syslog priority
tag: daemoneye # Syslog tag
- type: webhook # Webhook sink
enabled: false # Disabled by default
url: https://alerts.example.com/webhook
method: POST # HTTP method
timeout_ms: 5000 # Request timeout
retry_attempts: 3 # Retry attempts
headers: # Custom headers
Authorization: Bearer ${WEBHOOK_TOKEN}
Content-Type: application/json
template: default # Alert template
- type: file # File sink
enabled: false # Disabled by default
path: /var/log/daemoneye/alerts.log
format: json # Output format (json, text)
rotation: daily # Log rotation (daily, weekly, monthly)
max_files: 30 # Maximum log files to keep
- type: stdout # Standard output sink
enabled: false # Disabled by default
format: json # Output format (json, text)
# Security configuration
security:
enable_privilege_dropping: true # Enable privilege dropping
drop_to_user: daemoneye # User to drop privileges to
drop_to_group: daemoneye # Group to drop privileges to
enable_audit_logging: true # Enable audit logging
audit_log_path: /var/log/daemoneye/audit.log
enable_integrity_checking: true # Enable integrity checking
hash_algorithm: blake3 # Hash algorithm (blake3, sha256)
enable_signature_verification: true # Enable signature verification
public_key_path: /etc/daemoneye/public.key
private_key_path: /etc/daemoneye/private.key
# Access control
access_control:
allowed_users: [] # Allowed users (empty = all)
allowed_groups: [] # Allowed groups (empty = all)
denied_users: [] # Denied users
denied_groups: [] # Denied groups
# Network security
network:
enable_tls: false # Enable TLS for network connections
cert_file: /etc/daemoneye/cert.pem
key_file: /etc/daemoneye/key.pem
ca_file: /etc/daemoneye/ca.pem
verify_peer: true # Verify peer certificates
# Process collection configuration
collection:
enable_process_collection: true # Enable process collection
enable_file_monitoring: false # Enable file monitoring
enable_network_monitoring: false # Enable network monitoring
enable_kernel_monitoring: false # Enable kernel monitoring (Enterprise)
# Process collection settings
process_collection:
include_children: true # Include child processes
include_threads: false # Include thread information
include_memory_maps: false # Include memory map information
include_file_descriptors: false # Include file descriptor information
max_processes: 10000 # Maximum processes to collect
exclude_patterns: # Process exclusion patterns
- systemd*
- kthreadd*
- ksoftirqd*
# File monitoring settings
file_monitoring:
watch_directories: [] # Directories to watch
exclude_patterns: # File exclusion patterns
- '*.tmp'
- '*.log'
- '*.cache'
max_file_size_mb: 100 # Maximum file size to monitor
# Network monitoring settings
network_monitoring:
enable_packet_capture: false # Enable packet capture
capture_interface: any # Network interface to capture
capture_filter: '' # BPF filter expression
max_packet_size: 1500 # Maximum packet size
buffer_size_mb: 100 # Capture buffer size
# Detection engine configuration
detection:
enable_detection: true # Enable detection engine
rule_directory: /etc/daemoneye/rules # Rules directory
rule_file_pattern: '*.sql' # Rule file pattern
enable_hot_reload: true # Enable hot reloading
reload_interval_ms: 5000 # Reload check interval
max_concurrent_rules: 10 # Maximum concurrent rule executions
rule_timeout_ms: 30000 # Rule execution timeout
enable_rule_caching: true # Enable rule result caching
cache_ttl_seconds: 300 # Cache TTL in seconds
# Rule execution settings
execution:
enable_parallel_execution: true # Enable parallel rule execution
max_parallel_rules: 5 # Maximum parallel rules
enable_rule_optimization: true # Enable rule optimization
enable_query_planning: true # Enable query planning
# Alert generation
alert_generation:
enable_alert_deduplication: true # Enable alert deduplication
deduplication_window_ms: 60000 # Deduplication window
enable_alert_aggregation: true # Enable alert aggregation
aggregation_window_ms: 300000 # Aggregation window
max_alerts_per_rule: 1000 # Maximum alerts per rule
# Observability configuration
observability:
enable_metrics: true # Enable metrics collection
metrics_port: 9090 # Metrics server port
metrics_path: /metrics # Metrics endpoint path
enable_health_checks: true # Enable health checks
health_check_port: 8080 # Health check port
health_check_path: /health # Health check endpoint
# Tracing configuration
tracing:
enable_tracing: false # Enable distributed tracing
trace_endpoint: http://jaeger:14268/api/traces
trace_sampling_rate: 0.1 # Trace sampling rate
trace_service_name: daemoneye # Service name for traces
# Logging configuration
logging:
enable_structured_logging: true # Enable structured logging
log_format: json # Log format (json, text)
log_timestamp_format: rfc3339 # Timestamp format
enable_log_rotation: true # Enable log rotation
max_log_file_size_mb: 100 # Maximum log file size
max_log_files: 10 # Maximum log files to keep
# Performance monitoring
performance:
enable_profiling: false # Enable performance profiling
profile_output_dir: /tmp/daemoneye/profiles
enable_memory_profiling: false # Enable memory profiling
enable_cpu_profiling: false # Enable CPU profiling
# Platform-specific configuration
platform:
linux:
enable_ebpf: false # Enable eBPF monitoring (Enterprise)
ebpf_program_path: /etc/daemoneye/ebpf/monitor.o
enable_audit: false # Enable Linux audit integration
audit_rules_path: /etc/daemoneye/audit.rules
windows:
enable_etw: false # Enable ETW monitoring (Enterprise)
etw_session_name: DaemonEye
enable_wmi: false # Enable WMI monitoring
wmi_namespace: root\cimv2
macos:
enable_endpoint_security: false # Enable EndpointSecurity (Enterprise)
es_client_name: com.daemoneye.monitor
enable_system_events: false # Enable system event monitoring
# Integration configuration
integrations:
# SIEM integrations
siem:
splunk:
enabled: false
hec_url: https://splunk.example.com:8088/services/collector
hec_token: ${SPLUNK_HEC_TOKEN}
index: daemoneye
source: daemoneye
sourcetype: daemoneye:processes
elasticsearch:
enabled: false
url: https://elasticsearch.example.com:9200
username: ${ELASTIC_USERNAME}
password: ${ELASTIC_PASSWORD}
index: daemoneye-processes
kafka:
enabled: false
brokers: [kafka1.example.com:9092, kafka2.example.com:9092]
topic: daemoneye.processes
security_protocol: PLAINTEXT
sasl_mechanism: PLAIN
username: ${KAFKA_USERNAME}
password: ${KAFKA_PASSWORD}
# Export formats
export:
cef:
enabled: false
output_file: /var/log/daemoneye/cef.log
cef_version: '1.0'
device_vendor: DaemonEye
device_product: Process Monitor
device_version: 1.0.0
stix:
enabled: false
output_file: /var/log/daemoneye/stix.json
stix_version: '2.1'
stix_id: daemoneye-process-monitor
json:
enabled: false
output_file: /var/log/daemoneye/events.json
pretty_print: true
include_metadata: true
Core Settings
Application Settings
Basic Configuration:
app:
scan_interval_ms: 30000 # How often to scan processes (30 seconds)
batch_size: 1000 # Number of processes to process in each batch
log_level: info # Logging verbosity
data_dir: /var/lib/daemoneye # Where to store data files
log_dir: /var/log/daemoneye # Where to store log files
Performance Tuning:
app:
max_memory_mb: 512 # Limit memory usage to 512MB
max_cpu_percent: 5.0 # Limit CPU usage to 5%
scan_interval_ms: 60000 # Reduce scan frequency for lower CPU
batch_size: 500 # Smaller batches for lower memory
Security Settings:
app:
user: daemoneye # Run as non-root user
group: daemoneye # Run as non-root group
pid_file: /var/run/daemoneye.pid # PID file location
Logging Configuration
Structured Logging:
observability:
logging:
enable_structured_logging: true
log_format: json
log_timestamp_format: rfc3339
enable_log_rotation: true
max_log_file_size_mb: 100
max_log_files: 10
Log Levels:
app:
log_level: debug # trace, debug, info, warn, error
Log Rotation:
observability:
logging:
enable_log_rotation: true
max_log_file_size_mb: 100 # Rotate when file reaches 100MB
max_log_files: 10 # Keep 10 rotated files
Database Configuration
SQLite Settings
Basic Database Configuration:
database:
path: /var/lib/daemoneye/processes.db
max_connections: 10
retention_days: 30
Performance Optimization:
database:
wal_mode: true # Enable Write-Ahead Logging
synchronous: NORMAL # Balance safety and performance
cache_size: -64000 # 64MB cache (negative = KB)
temp_store: MEMORY # Store temp tables in memory
journal_mode: WAL # Use WAL journal mode
Maintenance Settings:
database:
vacuum_interval_hours: 24 # Vacuum database every 24 hours
retention_days: 30 # Keep data for 30 days
enable_auto_vacuum: true # Enable automatic vacuuming
Database Security
Access Control:
database:
enable_encryption: false # Enable database encryption
encryption_key: ${DB_ENCRYPTION_KEY}
enable_access_control: true # Enable access control
allowed_users: [daemoneye] # Allowed database users
Alerting Configuration
Alert Sinks
Syslog Sink:
alerting:
sinks:
- type: syslog
enabled: true
facility: daemon
priority: info
tag: daemoneye
Webhook Sink:
alerting:
sinks:
- type: webhook
enabled: true
url: https://alerts.example.com/webhook
method: POST
timeout_ms: 5000
retry_attempts: 3
headers:
Authorization: Bearer ${WEBHOOK_TOKEN}
Content-Type: application/json
File Sink:
alerting:
sinks:
- type: file
enabled: true
path: /var/log/daemoneye/alerts.log
format: json
rotation: daily
max_files: 30
Alert Processing
Deduplication and Aggregation:
detection:
alert_generation:
enable_alert_deduplication: true
deduplication_window_ms: 60000
enable_alert_aggregation: true
aggregation_window_ms: 300000
max_alerts_per_rule: 1000
Delivery Settings:
alerting:
max_queue_size: 10000
delivery_timeout_ms: 5000
retry_attempts: 3
retry_delay_ms: 1000
circuit_breaker_threshold: 5
circuit_breaker_timeout_ms: 60000
Security Configuration
Privilege Management
Privilege Dropping:
security:
enable_privilege_dropping: true
drop_to_user: daemoneye
drop_to_group: daemoneye
Access Control:
security:
access_control:
allowed_users: [] # Empty = all users
allowed_groups: [] # Empty = all groups
denied_users: [root] # Deny root user
denied_groups: [wheel] # Deny wheel group
Audit and Integrity
Audit Logging:
security:
enable_audit_logging: true
audit_log_path: /var/log/daemoneye/audit.log
Integrity Checking:
security:
enable_integrity_checking: true
hash_algorithm: blake3
enable_signature_verification: true
public_key_path: /etc/daemoneye/public.key
private_key_path: /etc/daemoneye/private.key
Network Security
TLS Configuration:
security:
network:
enable_tls: true
cert_file: /etc/daemoneye/cert.pem
key_file: /etc/daemoneye/key.pem
ca_file: /etc/daemoneye/ca.pem
verify_peer: true
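For a lab or staging setup, the paths above can point at a self-signed certificate; production deployments should use CA-issued certificates. A minimal sketch with openssl:
# Generate a self-signed certificate and key (testing only)
sudo openssl req -x509 -newkey rsa:4096 -nodes \
-keyout /etc/daemoneye/key.pem \
-out /etc/daemoneye/cert.pem \
-days 365 -subj "/CN=daemoneye"
# Restrict access to the private key
sudo chmod 600 /etc/daemoneye/key.pem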
Performance Tuning
Process Collection
Collection Settings:
collection:
process_collection:
include_children: true
include_threads: false
include_memory_maps: false
include_file_descriptors: false
max_processes: 10000
exclude_patterns:
- systemd*
- kthreadd*
- ksoftirqd*
Performance Optimization:
app:
scan_interval_ms: 60000 # Reduce scan frequency
batch_size: 500 # Smaller batches
max_memory_mb: 256 # Limit memory usage
max_cpu_percent: 3.0 # Limit CPU usage
Database Performance
Connection Pooling:
database:
max_connections: 20 # Increase connection pool
cache_size: -128000 # 128MB cache
temp_store: MEMORY # Use memory for temp tables
Query Optimization:
detection:
execution:
enable_rule_optimization: true
enable_query_planning: true
enable_parallel_execution: true
max_parallel_rules: 5
Memory Management
Memory Limits:
app:
max_memory_mb: 512 # Hard memory limit
max_cpu_percent: 5.0 # CPU usage limit
Garbage Collection:
app:
gc_interval_ms: 300000 # Garbage collection interval
gc_threshold_mb: 100 # GC threshold
Platform-Specific Settings
Linux Configuration
eBPF Monitoring (Enterprise):
platform:
linux:
enable_ebpf: true
ebpf_program_path: /etc/daemoneye/ebpf/monitor.o
enable_audit: true
audit_rules_path: /etc/daemoneye/audit.rules
System Integration:
platform:
linux:
enable_systemd_integration: true
systemd_unit: daemoneye.service
enable_logrotate: true
logrotate_config: /etc/logrotate.d/daemoneye
Windows Configuration
ETW Monitoring (Enterprise):
platform:
windows:
enable_etw: true
etw_session_name: DaemonEye
enable_wmi: true
wmi_namespace: root\cimv2
Service Integration:
platform:
windows:
service_name: DaemonEye Agent
service_display_name: DaemonEye Security Monitoring Agent
service_description: Monitors system processes for security threats
macOS Configuration
EndpointSecurity (Enterprise):
platform:
macos:
enable_endpoint_security: true
es_client_name: com.daemoneye.monitor
enable_system_events: true
LaunchDaemon Integration:
platform:
macos:
launchdaemon_plist: /Library/LaunchDaemons/com.daemoneye.agent.plist
enable_console_logging: true
Advanced Configuration
Custom Rules
Rule Directory:
detection:
rule_directory: /etc/daemoneye/rules
rule_file_pattern: '*.sql'
enable_hot_reload: true
reload_interval_ms: 5000
Rule Execution:
detection:
max_concurrent_rules: 10
rule_timeout_ms: 30000
enable_rule_caching: true
cache_ttl_seconds: 300
Custom Integrations
SIEM Integration:
integrations:
siem:
splunk:
enabled: true
hec_url: https://splunk.example.com:8088/services/collector
hec_token: ${SPLUNK_HEC_TOKEN}
index: daemoneye
source: daemoneye
sourcetype: daemoneye:processes
Export Formats:
integrations:
export:
cef:
enabled: true
output_file: /var/log/daemoneye/cef.log
cef_version: '1.0'
device_vendor: DaemonEye
device_product: Process Monitor
device_version: 1.0.0
Custom Templates
Alert Templates:
alerting:
templates:
default: |
{
"timestamp": "{{.Timestamp}}",
"rule": "{{.RuleName}}",
"severity": "{{.Severity}}",
"process": {
"pid": {{.Process.PID}},
"name": "{{.Process.Name}}",
"path": "{{.Process.ExecutablePath}}"
}
}
syslog: |
{{.Timestamp}} {{.Severity}} {{.RuleName}}: Process {{.Process.Name}} (PID {{.Process.PID}}) triggered alert
Configuration Management
Configuration Validation
Validate Configuration:
# Validate configuration file
daemoneye-cli config validate /path/to/config.yaml
# Check configuration syntax
daemoneye-cli config check
# Show effective configuration
daemoneye-cli config show --include-defaults
Configuration Testing:
# Test configuration without starting service
daemoneye-agent --config /path/to/config.yaml --dry-run
# Test specific settings
daemoneye-cli config test --setting app.scan_interval_ms
Configuration Updates
Hot Reload:
# Reload configuration without restart
daemoneye-cli config reload
# Update specific setting
daemoneye-cli config set app.scan_interval_ms 60000
# Update multiple settings
daemoneye-cli config set app.scan_interval_ms 60000 app.batch_size 500
Configuration Backup:
# Backup current configuration
daemoneye-cli config backup --output /backup/daemoneye-config-$(date +%Y%m%d).yaml
# Restore configuration
daemoneye-cli config restore --input /backup/daemoneye-config-20240101.yaml
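To take backups on a schedule rather than by hand, the backup command can be driven from cron; the schedule below is illustrative (note that % must be escaped in cron entries):
# Nightly configuration backup at 02:00
echo '0 2 * * * root daemoneye-cli config backup --output /backup/daemoneye-config-$(date +\%Y\%m\%d).yaml' | sudo tee /etc/cron.d/daemoneye-config-backup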
Environment Management
Development Environment:
# config-dev.yaml
app:
log_level: debug
scan_interval_ms: 10000
batch_size: 100
database:
path: /tmp/daemoneye-dev.db
retention_days: 1
Production Environment:
# config-prod.yaml
app:
log_level: info
scan_interval_ms: 60000
batch_size: 1000
database:
path: /var/lib/daemoneye/processes.db
retention_days: 30
Staging Environment:
# config-staging.yaml
app:
log_level: info
scan_interval_ms: 30000
batch_size: 500
database:
path: /var/lib/daemoneye/processes-staging.db
retention_days: 7
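A simple way to switch between these files is to parameterize the config path at launch. The DAEMONEYE_ENV variable below is a local shell convention for illustration, not something the agent itself reads:
# Select the per-environment file by convention
DAEMONEYE_ENV="${DAEMONEYE_ENV:-dev}"
daemoneye-agent --config "/etc/daemoneye/config-${DAEMONEYE_ENV}.yaml"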
Troubleshooting
Configuration Issues
Invalid Configuration:
# Check configuration syntax
daemoneye-cli config check
# Validate configuration
daemoneye-cli config validate
# Show configuration errors
daemoneye-cli config show --errors
Missing Settings:
# Show all settings with defaults
daemoneye-cli config show --include-defaults
# Show specific setting
daemoneye-cli config get app.scan_interval_ms
# Set missing setting
daemoneye-cli config set app.scan_interval_ms 30000
Permission Issues:
# Check file permissions
ls -la /etc/daemoneye/config.yaml
# Fix permissions
sudo chown daemoneye:daemoneye /etc/daemoneye/config.yaml
sudo chmod 644 /etc/daemoneye/config.yaml
Performance Issues
High CPU Usage:
# Reduce scan frequency
app:
scan_interval_ms: 120000 # 2 minutes
# Reduce batch size
app:
batch_size: 250
# Exclude more processes
collection:
process_collection:
exclude_patterns:
- "systemd*"
- "kthreadd*"
- "ksoftirqd*"
- "migration*"
- "rcu_*"
High Memory Usage:
# Limit memory usage
app:
max_memory_mb: 256
# Reduce batch size
app:
batch_size: 250
# Enable garbage collection
app:
gc_interval_ms: 300000
gc_threshold_mb: 100
Slow Database Operations:
# Optimize database settings
database:
cache_size: -128000 # 128MB cache
temp_store: MEMORY
synchronous: NORMAL
wal_mode: true
# Enable query optimization
detection:
execution:
enable_rule_optimization: true
enable_query_planning: true
Debugging Configuration
Enable Debug Logging:
app:
log_level: debug
observability:
logging:
enable_structured_logging: true
log_format: json
Configuration Debugging:
# Show effective configuration
daemoneye-cli config show --include-defaults --format json
# Test configuration
daemoneye-agent --config /path/to/config.yaml --dry-run
# Check configuration sources
daemoneye-cli config sources
Performance Debugging:
observability:
performance:
enable_profiling: true
profile_output_dir: /tmp/daemoneye/profiles
enable_memory_profiling: true
enable_cpu_profiling: true
This configuration guide provides comprehensive instructions for configuring DaemonEye. For additional help, consult the troubleshooting section or contact support.
DaemonEye Docker Deployment Guide
This guide provides comprehensive instructions for deploying DaemonEye using Docker and Docker Compose, including containerization, orchestration, and production deployment strategies.
Table of Contents
- Docker Overview
- Container Images
- Basic Docker Deployment
- Docker Compose Deployment
- Production Deployment
- Kubernetes Deployment
- Security Considerations
- Monitoring and Logging
- Troubleshooting
Docker Overview
DaemonEye is designed to run efficiently in containerized environments, providing:
- Isolation: Process monitoring within container boundaries
- Scalability: Easy horizontal scaling and load balancing
- Portability: Consistent deployment across different environments
- Security: Container-based privilege isolation
- Orchestration: Integration with Kubernetes and other orchestration platforms
Container Architecture
DaemonEye uses a multi-container architecture:
- procmond: Privileged process monitoring daemon
- daemoneye-agent: User-space orchestrator and alerting
- daemoneye-cli: Command-line interface and management
- Security Center: Web-based management interface (Business/Enterprise tiers)
Container Images
Official Images
Core Images:
# Process monitoring daemon
docker pull daemoneye/procmond:latest
docker pull daemoneye/procmond:1.0.0
# Agent orchestrator
docker pull daemoneye/daemoneye-agent:latest
docker pull daemoneye/daemoneye-agent:1.0.0
# CLI interface
docker pull daemoneye/daemoneye-cli:latest
docker pull daemoneye/daemoneye-cli:1.0.0
# Security Center (Business/Enterprise)
docker pull daemoneye/security-center:latest
docker pull daemoneye/security-center:1.0.0
Image Tags:
- latest: Latest stable release
- 1.0.0: Specific version
- 1.0.0-alpine: Alpine Linux variant (smaller size)
- 1.0.0-debian: Debian-based variant
- dev: Development builds
- rc-1.0.0: Release candidate
Image Variants
Alpine Linux (Recommended):
# Smaller size, security-focused
FROM alpine:3.18
RUN apk add --no-cache ca-certificates
COPY procmond /usr/local/bin/
ENTRYPOINT ["procmond"]
Debian:
# Full-featured, larger size
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY procmond /usr/local/bin/
ENTRYPOINT ["procmond"]
Distroless:
# Minimal attack surface
FROM gcr.io/distroless/cc-debian12
COPY procmond /usr/local/bin/
ENTRYPOINT ["procmond"]
Basic Docker Deployment
Single Container Deployment
Run Process Monitor:
# Basic run
docker run -d \
--name daemoneye-procmond \
--privileged \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/procmond:latest
# With custom configuration
docker run -d \
--name daemoneye-procmond \
--privileged \
-v /etc/daemoneye:/config \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
-e DaemonEye_LOG_LEVEL=info \
daemoneye/procmond:latest --config /config/config.yaml
Run Agent:
# Basic run
docker run -d \
--name daemoneye-agent \
--link daemoneye-procmond:procmond \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/daemoneye-agent:latest
# With custom configuration
docker run -d \
--name daemoneye-agent \
--link daemoneye-procmond:procmond \
-v /etc/daemoneye:/config \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
-e DaemonEye_LOG_LEVEL=info \
daemoneye/daemoneye-agent:latest --config /config/config.yaml
Run CLI:
# Interactive CLI
docker run -it \
--rm \
--link daemoneye-agent:agent \
-v /var/lib/daemoneye:/data \
daemoneye/daemoneye-cli:latest
# Execute specific command
docker run --rm \
--link daemoneye-agent:agent \
-v /var/lib/daemoneye:/data \
daemoneye/daemoneye-cli:latest query "SELECT * FROM processes LIMIT 10"
Multi-Container Deployment
Create Network (user-defined networks are the modern replacement for the legacy --link flag used above):
# Create custom network
docker network create daemoneye-network
# Run with custom network
docker run -d \
--name daemoneye-procmond \
--network daemoneye-network \
--privileged \
-v /var/lib/daemoneye:/data \
daemoneye/procmond:latest
docker run -d \
--name daemoneye-agent \
--network daemoneye-network \
-v /var/lib/daemoneye:/data \
daemoneye/daemoneye-agent:latest
Docker Compose Deployment
Basic Docker Compose
docker-compose.yml:
version: '3.8'
services:
procmond:
image: daemoneye/procmond:latest
container_name: daemoneye-procmond
privileged: true
volumes:
- /var/lib/daemoneye:/data
- /var/log/daemoneye:/logs
- ./config:/config:ro
environment:
- DaemonEye_LOG_LEVEL=info
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
daemoneye-agent:
image: daemoneye/daemoneye-agent:latest
container_name: daemoneye-agent
depends_on:
- procmond
volumes:
- /var/lib/daemoneye:/data
- /var/log/daemoneye:/logs
- ./config:/config:ro
environment:
- DaemonEye_LOG_LEVEL=info
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
daemoneye-cli:
image: daemoneye/daemoneye-cli:latest
container_name: daemoneye-cli
depends_on:
- daemoneye-agent
volumes:
- /var/lib/daemoneye:/data
- ./config:/config:ro
environment:
- DaemonEye_DATA_DIR=/data
command: [--help]
restart: "no"
networks:
- daemoneye-network
networks:
daemoneye-network:
driver: bridge
volumes:
daemoneye-data:
driver: local
daemoneye-logs:
driver: local
Production Docker Compose
docker-compose.prod.yml:
version: '3.8'
services:
procmond:
image: daemoneye/procmond:1.0.0
container_name: daemoneye-procmond
privileged: true
user: 1000:1000
volumes:
- daemoneye-data:/data
- daemoneye-logs:/logs
- ./config/procmond.yaml:/config/config.yaml:ro
- ./rules:/rules:ro
environment:
- DaemonEye_LOG_LEVEL=info
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
- DaemonEye_RULE_DIR=/rules
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
healthcheck:
test: [CMD, procmond, health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
daemoneye-agent:
image: daemoneye/daemoneye-agent:1.0.0
container_name: daemoneye-agent
depends_on:
procmond:
condition: service_healthy
user: 1000:1000
volumes:
- daemoneye-data:/data
- daemoneye-logs:/logs
- ./config/daemoneye-agent.yaml:/config/config.yaml:ro
environment:
- DaemonEye_LOG_LEVEL=info
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
- DaemonEye_PROCMOND_ENDPOINT=tcp://procmond:8080
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
healthcheck:
test: [CMD, daemoneye-agent, health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
security-center:
image: daemoneye/security-center:1.0.0
container_name: daemoneye-security-center
depends_on:
daemoneye-agent:
condition: service_healthy
user: 1000:1000
volumes:
- daemoneye-data:/data
- ./config/security-center.yaml:/config/config.yaml:ro
environment:
- DaemonEye_LOG_LEVEL=info
- DaemonEye_DATA_DIR=/data
- DaemonEye_AGENT_ENDPOINT=tcp://daemoneye-agent:8080
- DaemonEye_WEB_PORT=8080
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
ports:
- 8080:8080
healthcheck:
test: [CMD, curl, -f, http://localhost:8080/health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
nginx:
image: nginx:alpine
container_name: daemoneye-nginx
depends_on:
security-center:
condition: service_healthy
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
ports:
- 80:80
- 443:443
restart: unless-stopped
networks:
- daemoneye-network
networks:
daemoneye-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
volumes:
daemoneye-data:
driver: local
daemoneye-logs:
driver: local
Development Docker Compose
docker-compose.dev.yml:
version: '3.8'
services:
procmond:
build:
context: .
dockerfile: procmond/Dockerfile
container_name: daemoneye-procmond-dev
privileged: true
volumes:
- ./data:/data
- ./logs:/logs
- ./config:/config:ro
- ./rules:/rules:ro
environment:
- DaemonEye_LOG_LEVEL=debug
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
- DaemonEye_RULE_DIR=/rules
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
daemoneye-agent:
build:
context: .
dockerfile: daemoneye-agent/Dockerfile
container_name: daemoneye-agent-dev
depends_on:
- procmond
volumes:
- ./data:/data
- ./logs:/logs
- ./config:/config:ro
environment:
- DaemonEye_LOG_LEVEL=debug
- DaemonEye_DATA_DIR=/data
- DaemonEye_LOG_DIR=/logs
- DaemonEye_PROCMOND_ENDPOINT=tcp://procmond:8080
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
daemoneye-cli:
build:
context: .
dockerfile: daemoneye-cli/Dockerfile
container_name: daemoneye-cli-dev
depends_on:
- daemoneye-agent
volumes:
- ./data:/data
- ./config:/config:ro
environment:
- DaemonEye_DATA_DIR=/data
command: [--help]
restart: "no"
networks:
- daemoneye-network
networks:
daemoneye-network:
driver: bridge
Production Deployment
Production Configuration
Environment Variables:
# .env file
DaemonEye_VERSION=1.0.0
DaemonEye_LOG_LEVEL=info
DaemonEye_DATA_DIR=/var/lib/daemoneye
DaemonEye_LOG_DIR=/var/log/daemoneye
DaemonEye_CONFIG_DIR=/etc/daemoneye
# Database settings
DATABASE_PATH=/var/lib/daemoneye/processes.db
DATABASE_RETENTION_DAYS=30
# Alerting settings
ALERTING_SYSLOG_ENABLED=true
ALERTING_SYSLOG_FACILITY=daemon
ALERTING_WEBHOOK_ENABLED=false
ALERTING_WEBHOOK_URL=https://alerts.example.com/webhook
ALERTING_WEBHOOK_TOKEN=${WEBHOOK_TOKEN}
# Security settings
SECURITY_ENABLE_PRIVILEGE_DROPPING=true
SECURITY_DROP_TO_USER=1000
SECURITY_DROP_TO_GROUP=1000
SECURITY_ENABLE_AUDIT_LOGGING=true
# Performance settings
APP_SCAN_INTERVAL_MS=60000
APP_BATCH_SIZE=1000
APP_MAX_MEMORY_MB=512
APP_MAX_CPU_PERCENT=5.0
Production Docker Compose:
version: '3.8'
services:
procmond:
image: daemoneye/procmond:${DaemonEye_VERSION}
container_name: daemoneye-procmond
privileged: true
user: ${SECURITY_DROP_TO_USER}:${SECURITY_DROP_TO_GROUP}
volumes:
- daemoneye-data:/data
- daemoneye-logs:/logs
- ./config/procmond.yaml:/config/config.yaml:ro
- ./rules:/rules:ro
environment:
- DaemonEye_LOG_LEVEL=${DaemonEye_LOG_LEVEL}
- DaemonEye_DATA_DIR=${DaemonEye_DATA_DIR}
- DaemonEye_LOG_DIR=${DaemonEye_LOG_DIR}
- DaemonEye_RULE_DIR=/rules
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
healthcheck:
test: [CMD, procmond, health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
logging:
driver: json-file
options:
max-size: 100m
max-file: '5'
daemoneye-agent:
image: daemoneye/daemoneye-agent:${DaemonEye_VERSION}
container_name: daemoneye-agent
depends_on:
procmond:
condition: service_healthy
user: ${SECURITY_DROP_TO_USER}:${SECURITY_DROP_TO_GROUP}
volumes:
- daemoneye-data:/data
- daemoneye-logs:/logs
- ./config/daemoneye-agent.yaml:/config/config.yaml:ro
environment:
- DaemonEye_LOG_LEVEL=${DaemonEye_LOG_LEVEL}
- DaemonEye_DATA_DIR=${DaemonEye_DATA_DIR}
- DaemonEye_LOG_DIR=${DaemonEye_LOG_DIR}
- DaemonEye_PROCMOND_ENDPOINT=tcp://procmond:8080
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
healthcheck:
test: [CMD, daemoneye-agent, health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
logging:
driver: json-file
options:
max-size: 100m
max-file: '5'
security-center:
image: daemoneye/security-center:${DaemonEye_VERSION}
container_name: daemoneye-security-center
depends_on:
daemoneye-agent:
condition: service_healthy
user: ${SECURITY_DROP_TO_USER}:${SECURITY_DROP_TO_GROUP}
volumes:
- daemoneye-data:/data
- ./config/security-center.yaml:/config/config.yaml:ro
environment:
- DaemonEye_LOG_LEVEL=${DaemonEye_LOG_LEVEL}
- DaemonEye_DATA_DIR=${DaemonEye_DATA_DIR}
- DaemonEye_AGENT_ENDPOINT=tcp://daemoneye-agent:8080
- DaemonEye_WEB_PORT=8080
command: [--config, /config/config.yaml]
restart: unless-stopped
networks:
- daemoneye-network
ports:
- 8080:8080
healthcheck:
test: [CMD, curl, -f, http://localhost:8080/health]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
logging:
driver: json-file
options:
max-size: 100m
max-file: '5'
networks:
daemoneye-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
volumes:
daemoneye-data:
driver: local
daemoneye-logs:
driver: local
Deployment Scripts
deploy.sh:
#!/bin/bash
set -e
# Configuration
COMPOSE_FILE="docker-compose.prod.yml"
ENV_FILE=".env"
BACKUP_DIR="/backup/daemoneye"
LOG_DIR="/var/log/daemoneye"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check prerequisites
check_prerequisites() {
log_info "Checking prerequisites..."
if ! command -v docker &> /dev/null; then
log_error "Docker is not installed"
exit 1
fi
if ! command -v docker-compose &> /dev/null; then
log_error "Docker Compose is not installed"
exit 1
fi
if [ ! -f "$ENV_FILE" ]; then
log_error "Environment file $ENV_FILE not found"
exit 1
fi
if [ ! -f "$COMPOSE_FILE" ]; then
log_error "Compose file $COMPOSE_FILE not found"
exit 1
fi
log_info "Prerequisites check passed"
}
# Backup existing deployment
backup_deployment() {
log_info "Backing up existing deployment..."
if [ -d "$BACKUP_DIR" ]; then
rm -rf "$BACKUP_DIR"
fi
mkdir -p "$BACKUP_DIR"
# Backup data volumes
if docker volume ls | grep -q daemoneye-data; then
docker run --rm -v daemoneye-data:/data -v "$BACKUP_DIR":/backup alpine tar czf /backup/data.tar.gz -C /data .
fi
# Backup logs
if [ -d "$LOG_DIR" ]; then
cp -r "$LOG_DIR" "$BACKUP_DIR/logs"
fi
log_info "Backup completed"
}
# Deploy DaemonEye
deploy_daemoneye() {
log_info "Deploying DaemonEye..."
# Pull latest images
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" pull
# Stop existing services
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" down
# Start services
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d
# Wait for services to be healthy
log_info "Waiting for services to be healthy..."
sleep 30
# Check service health
if docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" ps | grep -q "unhealthy"; then
log_error "Some services are unhealthy"
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" logs
exit 1
fi
log_info "Deployment completed successfully"
}
# Main execution
main() {
log_info "Starting DaemonEye deployment..."
check_prerequisites
backup_deployment
deploy_daemoneye
log_info "DaemonEye deployment completed successfully"
}
# Run main function
main "$@"
rollback.sh:
#!/bin/bash
set -e
# Configuration
COMPOSE_FILE="docker-compose.prod.yml"
ENV_FILE=".env"
BACKUP_DIR="/backup/daemoneye"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Rollback deployment
rollback_deployment() {
log_info "Rolling back DaemonEye deployment..."
# Stop current services
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" down
# Restore data volumes
if [ -f "$BACKUP_DIR/data.tar.gz" ]; then
docker run --rm -v daemoneye-data:/data -v "$BACKUP_DIR":/backup alpine tar xzf /backup/data.tar.gz -C /data
fi
# Restore logs
if [ -d "$BACKUP_DIR/logs" ]; then
cp -r "$BACKUP_DIR/logs"/* /var/log/daemoneye/
fi
# Start services
docker-compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d
log_info "Rollback completed"
}
# Main execution
main() {
log_info "Starting DaemonEye rollback..."
rollback_deployment
log_info "DaemonEye rollback completed successfully"
}
# Run main function
main "$@"
Kubernetes Deployment
Kubernetes Manifests
Namespace:
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: daemoneye
labels:
name: daemoneye
ConfigMap:
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: daemoneye-config
namespace: daemoneye
data:
procmond.yaml: |
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
database:
path: /data/processes.db
retention_days: 30
security:
enable_privilege_dropping: true
drop_to_user: 1000
drop_to_group: 1000
daemoneye-agent.yaml: |
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
database:
path: /data/processes.db
retention_days: 30
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
- type: webhook
enabled: true
url: http://daemoneye-webhook:8080/webhook
Secret:
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: daemoneye-secrets
namespace: daemoneye
type: Opaque
data:
webhook-token: <base64-encoded-token>
database-encryption-key: <base64-encoded-key>
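The data values must be base64-encoded. You can encode them by hand, or let kubectl build the Secret from literals, which avoids committing encoded secrets to YAML files:
# Encode a value by hand
echo -n 'my-webhook-token' | base64
# Or create the Secret directly from literals
kubectl create secret generic daemoneye-secrets \
--namespace daemoneye \
--from-literal=webhook-token='my-webhook-token' \
--from-literal=database-encryption-key="$(openssl rand -base64 32)"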
PersistentVolumeClaim:
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: daemoneye-data
namespace: daemoneye
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: fast-ssd
DaemonSet for procmond:
# procmond-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemoneye-procmond
namespace: daemoneye
spec:
selector:
matchLabels:
app: daemoneye-procmond
template:
metadata:
labels:
app: daemoneye-procmond
spec:
serviceAccountName: daemoneye-procmond
containers:
- name: procmond
image: daemoneye/procmond:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
- name: rules
mountPath: /rules
readOnly: true
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
command: [procmond]
args: [--config, /config/procmond.yaml]
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
memory: 512Mi
cpu: 500m
livenessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
- name: rules
configMap:
name: daemoneye-rules
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
Deployment for daemoneye-agent:
# daemoneye-agent-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: daemoneye-agent
namespace: daemoneye
spec:
replicas: 1
selector:
matchLabels:
app: daemoneye-agent
template:
metadata:
labels:
app: daemoneye-agent
spec:
serviceAccountName: daemoneye-agent
containers:
- name: daemoneye-agent
image: daemoneye/daemoneye-agent:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
- name: DaemonEye_PROCMOND_ENDPOINT
value: tcp://daemoneye-procmond:8080
command: [daemoneye-agent]
args: [--config, /config/daemoneye-agent.yaml]
resources:
requests:
memory: 512Mi
cpu: 200m
limits:
memory: 1Gi
cpu: 1000m
livenessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
Service:
# service.yaml
apiVersion: v1
kind: Service
metadata:
name: daemoneye-agent
namespace: daemoneye
spec:
selector:
app: daemoneye-agent
ports:
- name: http
port: 8080
targetPort: 8080
protocol: TCP
type: ClusterIP
ServiceAccount and RBAC:
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: daemoneye-procmond
namespace: daemoneye
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: daemoneye-agent
namespace: daemoneye
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: daemoneye-agent
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: daemoneye-agent
subjects:
- kind: ServiceAccount
name: daemoneye-agent
namespace: daemoneye
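With the manifests in place, apply them in dependency order (file names match the snippets above):
kubectl apply -f namespace.yaml
kubectl apply -f configmap.yaml -f secret.yaml -f pvc.yaml -f rbac.yaml
kubectl apply -f procmond-daemonset.yaml -f daemoneye-agent-deployment.yaml -f service.yaml
# Verify everything came up
kubectl -n daemoneye get pods,svc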
Helm Chart
Chart.yaml:
# Chart.yaml
apiVersion: v2
name: daemoneye
description: DaemonEye Security Monitoring Agent
type: application
version: 1.0.0
appVersion: 1.0.0
keywords:
- security
- monitoring
- processes
- threat-detection
home: https://daemoneye.com
sources:
- https://github.com/daemoneye/daemoneye
maintainers:
- name: DaemonEye Team
email: team@daemoneye.com
values.yaml:
# values.yaml
image:
repository: daemoneye
tag: 1.0.0
pullPolicy: IfNotPresent
replicaCount: 1
serviceAccount:
create: true
annotations: {}
name: ''
podSecurityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
service:
type: ClusterIP
port: 8080
ingress:
enabled: false
className: ''
annotations: {}
hosts:
- host: daemoneye.example.com
paths:
- path: /
pathType: Prefix
tls: []
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 200m
memory: 512Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
persistence:
enabled: true
storageClass: ''
accessMode: ReadWriteOnce
size: 10Gi
config:
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
database:
retention_days: 30
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
secrets: {}
monitoring:
enabled: false
serviceMonitor:
enabled: false
namespace: ''
interval: 30s
scrapeTimeout: 10s
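Assuming the chart source lives in a local daemoneye/ directory (the chart may not be published to a public Helm repository), installation follows the usual Helm flow:
# Install or upgrade the release with custom values
helm upgrade --install daemoneye ./daemoneye \
--namespace daemoneye --create-namespace \
-f values.yaml
# Render the manifests locally without installing
helm template daemoneye ./daemoneye -f values.yaml | less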
Security Considerations
Container Security
Security Context:
securityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
  drop:
    - ALL
  add:
    - SYS_PTRACE # Only for procmond; Kubernetes capability names omit the CAP_ prefix
Privileged Containers:
# Only procmond needs privileged access. Note that privileged: true
# already grants all capabilities, so the explicit adds below are
# documentation only; Kubernetes capability names omit the CAP_ prefix.
securityContext:
  privileged: true
  capabilities:
    add:
      - SYS_PTRACE
      - SYS_ADMIN
Network Security:
# Network policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: daemoneye-network-policy
namespace: daemoneye
spec:
podSelector:
matchLabels:
app: daemoneye
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: daemoneye
ports:
- protocol: TCP
port: 8080
egress:
- to:
- namespaceSelector:
matchLabels:
name: daemoneye
ports:
- protocol: TCP
port: 8080
Image Security
Image Scanning:
# Scan images for vulnerabilities
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy image daemoneye/procmond:1.0.0
# Scan with specific severity levels
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy image --severity HIGH,CRITICAL daemoneye/procmond:1.0.0
Image Signing:
# Sign images with Docker Content Trust
export DOCKER_CONTENT_TRUST=1
docker push daemoneye/procmond:1.0.0
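Image signatures can also be produced with Sigstore's cosign, which the Enterprise tier uses for release signing. A minimal sketch, assuming a key pair was created beforehand with cosign generate-key-pair and that you have push access to the repository:
# Sign the image with a local cosign key
cosign sign --key cosign.key daemoneye/procmond:1.0.0
# Verify the signature before deploying
cosign verify --key cosign.pub daemoneye/procmond:1.0.0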
Multi-stage Builds:
# Multi-stage build for smaller attack surface
FROM rust:1.85-alpine AS builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM alpine:3.18
RUN apk add --no-cache ca-certificates
COPY --from=builder /app/target/release/procmond /usr/local/bin/
USER 1000:1000
ENTRYPOINT ["procmond"]
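To build and inspect the resulting image (the tag is illustrative):
# Build the multi-stage image from the Dockerfile above
docker build -t daemoneye/procmond:1.0.0 .
# Confirm only the small runtime layer ships
docker images daemoneye/procmond:1.0.0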
Monitoring and Logging
Container Monitoring
Prometheus Metrics:
# Enable metrics collection
observability:
enable_metrics: true
metrics_port: 9090
metrics_path: /metrics
Grafana Dashboard:
{
"dashboard": {
"title": "DaemonEye Monitoring",
"panels": [
{
"title": "Process Collection Rate",
"type": "graph",
"targets": [
{
"expr": "rate(daemoneye_processes_collected_total[5m])",
"legendFormat": "Processes/sec"
}
]
},
{
"title": "Memory Usage",
"type": "graph",
"targets": [
{
"expr": "daemoneye_memory_usage_bytes",
"legendFormat": "Memory Usage"
}
]
}
]
}
}
Log Aggregation
Fluentd Configuration:
# fluentd-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
namespace: daemoneye
data:
fluent.conf: |
<source>
@type tail
path /var/log/daemoneye/*.log
pos_file /var/log/fluentd/daemoneye.log.pos
tag daemoneye.*
format json
</source>
<match daemoneye.**>
@type elasticsearch
host elasticsearch.logging.svc.cluster.local
port 9200
index_name daemoneye
suppress_type_name true
</match>
ELK Stack Integration:
# elasticsearch-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: logging
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
env:
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: 'false'
ports:
- containerPort: 9200
resources:
requests:
memory: 1Gi
cpu: 500m
limits:
memory: 2Gi
cpu: 1000m
Troubleshooting
Common Issues
Container Won't Start:
# Check container logs
docker logs daemoneye-procmond
# Check container status
docker ps -a
# Check resource usage
docker stats daemoneye-procmond
Permission Denied:
# Check file permissions
docker exec daemoneye-procmond ls -la /data
# Fix permissions
docker exec daemoneye-procmond chown -R 1000:1000 /data
Network Issues:
# Check network connectivity
docker exec daemoneye-agent ping daemoneye-procmond
# Check DNS resolution
docker exec daemoneye-agent nslookup daemoneye-procmond
Database Issues:
# Check database status
docker exec daemoneye-agent daemoneye-cli database status
# Check database integrity
docker exec daemoneye-agent daemoneye-cli database integrity-check
# Repair database
docker exec daemoneye-agent daemoneye-cli database repair
Debug Mode
Enable Debug Logging:
# docker-compose.yml
services:
procmond:
environment:
- DaemonEye_LOG_LEVEL=debug
command: [procmond, --config, /config/config.yaml, --log-level, debug]
Debug Container:
# Run debug container
docker run -it --rm --privileged \
-v /var/lib/daemoneye:/data \
-v /var/log/daemoneye:/logs \
daemoneye/procmond:latest /bin/sh
# Check system capabilities
docker run --rm --privileged daemoneye/procmond:latest capsh --print
Performance Issues
High CPU Usage:
# Check CPU usage
docker stats daemoneye-procmond
# Reduce scan frequency
docker exec daemoneye-procmond daemoneye-cli config set app.scan_interval_ms 120000
High Memory Usage:
# Check memory usage
docker stats daemoneye-procmond
# Limit memory usage
docker run -d --memory=512m daemoneye/procmond:latest
Slow Database Operations:
# Check database performance
docker exec daemoneye-agent daemoneye-cli database query-stats
# Optimize database
docker exec daemoneye-agent daemoneye-cli database optimize
Health Checks
Container Health:
# Check container health
docker inspect daemoneye-procmond | jq '.[0].State.Health'
# Run health check manually
docker exec daemoneye-procmond procmond health
Service Health:
# Check service health
curl http://localhost:8080/health
# Check metrics
curl http://localhost:9090/metrics
This Docker deployment guide provides comprehensive instructions for containerizing and deploying DaemonEye. For additional help, consult the troubleshooting section or contact support.
DaemonEye Kubernetes Deployment Guide
This guide provides comprehensive instructions for deploying DaemonEye on Kubernetes, including manifests, Helm charts, and production deployment strategies.
Table of Contents
- Kubernetes Overview
- Prerequisites
- Basic Deployment
- Production Deployment
- Helm Chart Deployment
- Security Configuration
- Monitoring and Observability
- Troubleshooting
Kubernetes Overview
DaemonEye is designed to run efficiently on Kubernetes, providing:
- Scalability: Horizontal pod autoscaling and cluster-wide deployment
- High Availability: Multi-replica deployments with health checks
- Security: RBAC, network policies, and pod security standards
- Observability: Prometheus metrics, structured logging, and distributed tracing
- Management: Helm charts and GitOps integration
Architecture Components
- procmond: DaemonSet for process monitoring on each node
- daemoneye-agent: Deployment for alerting and orchestration
- daemoneye-cli: Job/CronJob for management tasks
- Security Center: Deployment for web-based management (Business/Enterprise)
Prerequisites
Cluster Requirements
Minimum Requirements:
- Kubernetes 1.20+
- 2+ worker nodes
- 4+ CPU cores total
- 8+ GB RAM total
- 50+ GB storage
Recommended Requirements:
- Kubernetes 1.24+
- 3+ worker nodes
- 8+ CPU cores total
- 16+ GB RAM total
- 100+ GB storage
Required Tools
# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Install kustomize
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
Basic Deployment
Namespace and RBAC
namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: daemoneye
labels:
name: daemoneye
app.kubernetes.io/name: daemoneye
app.kubernetes.io/version: 1.0.0
rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: daemoneye-procmond
  namespace: daemoneye
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: daemoneye-agent
  namespace: daemoneye
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: daemoneye-procmond
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: daemoneye-procmond
subjects:
- kind: ServiceAccount
  name: daemoneye-procmond
  namespace: daemoneye
The daemoneye-agent ServiceAccount is required by the agent Deployment below; the referenced ClusterRole definitions appear in the RBAC Configuration section under Security Configuration.
ConfigMap and Secrets
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: daemoneye-config
namespace: daemoneye
data:
procmond.yaml: |
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
database:
path: /data/processes.db
retention_days: 30
security:
enable_privilege_dropping: true
drop_to_user: 1000
drop_to_group: 1000
daemoneye-agent.yaml: |
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
database:
path: /data/processes.db
retention_days: 30
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
- type: webhook
enabled: true
url: http://daemoneye-webhook:8080/webhook
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: daemoneye-secrets
namespace: daemoneye
type: Opaque
data:
webhook-token: <base64-encoded-token>
database-encryption-key: <base64-encoded-key>
Persistent Storage
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: daemoneye-data
namespace: daemoneye
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: fast-ssd
DaemonSet for procmond
procmond-daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemoneye-procmond
namespace: daemoneye
spec:
selector:
matchLabels:
app: daemoneye-procmond
template:
metadata:
labels:
app: daemoneye-procmond
spec:
serviceAccountName: daemoneye-procmond
containers:
- name: procmond
image: daemoneye/procmond:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
command: [procmond]
args: [--config, /config/procmond.yaml]
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
memory: 512Mi
cpu: 500m
livenessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
Deployment for daemoneye-agent
daemoneye-agent-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: daemoneye-agent
namespace: daemoneye
spec:
replicas: 1
selector:
matchLabels:
app: daemoneye-agent
template:
metadata:
labels:
app: daemoneye-agent
spec:
serviceAccountName: daemoneye-agent
containers:
- name: daemoneye-agent
image: daemoneye/daemoneye-agent:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
- name: DaemonEye_PROCMOND_ENDPOINT
value: tcp://daemoneye-procmond:8080
command: [daemoneye-agent]
args: [--config, /config/daemoneye-agent.yaml]
resources:
requests:
memory: 512Mi
cpu: 200m
limits:
memory: 1Gi
cpu: 1000m
livenessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
Service
service.yaml:
apiVersion: v1
kind: Service
metadata:
name: daemoneye-agent
namespace: daemoneye
spec:
selector:
app: daemoneye-agent
ports:
- name: http
port: 8080
targetPort: 8080
protocol: TCP
type: ClusterIP
Deploy Basic Setup
# Create namespace
kubectl apply -f namespace.yaml
# Apply RBAC
kubectl apply -f rbac.yaml
# Apply configuration
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
# Apply storage
kubectl apply -f pvc.yaml
# Deploy components
kubectl apply -f procmond-daemonset.yaml
kubectl apply -f daemoneye-agent-deployment.yaml
kubectl apply -f service.yaml
# Check deployment status
kubectl get pods -n daemoneye
kubectl get services -n daemoneye
Production Deployment
Production Configuration
production-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: daemoneye-config
namespace: daemoneye
data:
procmond.yaml: |
app:
scan_interval_ms: 60000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
max_memory_mb: 512
max_cpu_percent: 5.0
database:
path: /data/processes.db
retention_days: 30
max_connections: 20
cache_size: -128000
wal_mode: true
security:
enable_privilege_dropping: true
drop_to_user: 1000
drop_to_group: 1000
enable_audit_logging: true
audit_log_path: /logs/audit.log
daemoneye-agent.yaml: |
app:
scan_interval_ms: 60000
batch_size: 1000
log_level: info
data_dir: /data
log_dir: /logs
max_memory_mb: 1024
max_cpu_percent: 10.0
database:
path: /data/processes.db
retention_days: 30
max_connections: 20
cache_size: -128000
wal_mode: true
alerting:
enabled: true
max_queue_size: 10000
delivery_timeout_ms: 5000
retry_attempts: 3
sinks:
- type: syslog
enabled: true
facility: daemon
priority: info
- type: webhook
enabled: true
url: http://daemoneye-webhook:8080/webhook
timeout_ms: 5000
retry_attempts: 3
- type: file
enabled: true
path: /logs/alerts.log
format: json
rotation: daily
max_files: 30
detection:
enable_detection: true
rule_directory: /rules
enable_hot_reload: true
max_concurrent_rules: 10
rule_timeout_ms: 30000
enable_rule_caching: true
cache_ttl_seconds: 300
observability:
enable_metrics: true
metrics_port: 9090
metrics_path: /metrics
enable_health_checks: true
health_check_port: 8080
health_check_path: /health
logging:
enable_structured_logging: true
log_format: json
enable_log_rotation: true
max_log_file_size_mb: 100
max_log_files: 10
Production DaemonSet
production-procmond-daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemoneye-procmond
namespace: daemoneye
spec:
selector:
matchLabels:
app: daemoneye-procmond
template:
metadata:
labels:
app: daemoneye-procmond
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9090'
prometheus.io/path: /metrics
spec:
serviceAccountName: daemoneye-procmond
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
containers:
- name: procmond
image: daemoneye/procmond:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
  # privileged: true cannot be combined with allowPrivilegeEscalation: false;
  # grant only the specific capabilities procmond needs instead
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1000
  runAsGroup: 1000
  capabilities:
    add:
      - SYS_PTRACE
      - SYS_ADMIN
    drop:
      - ALL
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
- name: rules
mountPath: /rules
readOnly: true
- name: tmp
mountPath: /tmp
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
- name: DaemonEye_RULE_DIR
value: /rules
command: [procmond]
args: [--config, /config/procmond.yaml]
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
memory: 512Mi
cpu: 500m
livenessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- procmond
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
ports:
- name: metrics
containerPort: 9090
protocol: TCP
- name: health
containerPort: 8080
protocol: TCP
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
- name: rules
configMap:
name: daemoneye-rules
- name: tmp
emptyDir: {}
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
nodeSelector:
kubernetes.io/os: linux
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
Production Deployment
production-daemoneye-agent-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: daemoneye-agent
namespace: daemoneye
spec:
replicas: 2
selector:
matchLabels:
app: daemoneye-agent
template:
metadata:
labels:
app: daemoneye-agent
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9090'
prometheus.io/path: /metrics
spec:
serviceAccountName: daemoneye-agent
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
containers:
- name: daemoneye-agent
image: daemoneye/daemoneye-agent:1.0.0
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1000
runAsGroup: 1000
capabilities:
drop:
- ALL
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: data
mountPath: /data
- name: logs
mountPath: /logs
- name: tmp
mountPath: /tmp
env:
- name: DaemonEye_LOG_LEVEL
value: info
- name: DaemonEye_DATA_DIR
value: /data
- name: DaemonEye_LOG_DIR
value: /logs
- name: DaemonEye_PROCMOND_ENDPOINT
value: tcp://daemoneye-procmond:8080
command: [daemoneye-agent]
args: [--config, /config/daemoneye-agent.yaml]
resources:
requests:
memory: 512Mi
cpu: 200m
limits:
memory: 1Gi
cpu: 1000m
livenessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
exec:
command:
- daemoneye-agent
- health
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
ports:
- name: metrics
containerPort: 9090
protocol: TCP
- name: health
containerPort: 8080
protocol: TCP
volumes:
- name: config
configMap:
name: daemoneye-config
- name: data
persistentVolumeClaim:
claimName: daemoneye-data
- name: logs
emptyDir: {}
- name: tmp
emptyDir: {}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- daemoneye-agent
topologyKey: kubernetes.io/hostname
Horizontal Pod Autoscaler
hpa.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: daemoneye-agent-hpa
namespace: daemoneye
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: daemoneye-agent
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 60
policies:
- type: Percent
value: 50
periodSeconds: 60
Helm Chart Deployment
Helm Chart Structure
daemoneye/
├── Chart.yaml
├── values.yaml
├── values-production.yaml
├── values-development.yaml
├── templates/
│ ├── namespace.yaml
│ ├── rbac.yaml
│ ├── configmap.yaml
│ ├── secret.yaml
│ ├── pvc.yaml
│ ├── procmond-daemonset.yaml
│ ├── daemoneye-agent-deployment.yaml
│ ├── service.yaml
│ ├── hpa.yaml
│ ├── networkpolicy.yaml
│ └── servicemonitor.yaml
└── charts/
Chart.yaml
apiVersion: v2
name: daemoneye
description: DaemonEye Security Monitoring Agent
type: application
version: 1.0.0
appVersion: 1.0.0
keywords:
- security
- monitoring
- processes
- threat-detection
home: https://daemoneye.com
sources:
- https://github.com/daemoneye/daemoneye
maintainers:
- name: DaemonEye Team
email: team@daemoneye.com
dependencies:
- name: prometheus
version: 15.0.0
repository: https://prometheus-community.github.io/helm-charts
condition: monitoring.prometheus.enabled
values.yaml
# Default values for daemoneye
image:
repository: daemoneye
tag: 1.0.0
pullPolicy: IfNotPresent
replicaCount: 1
serviceAccount:
create: true
annotations: {}
name: ''
podSecurityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
service:
type: ClusterIP
port: 8080
ingress:
enabled: false
className: ''
annotations: {}
hosts:
- host: daemoneye.example.com
paths:
- path: /
pathType: Prefix
tls: []
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 200m
memory: 512Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
persistence:
enabled: true
storageClass: ''
accessMode: ReadWriteOnce
size: 10Gi
config:
app:
scan_interval_ms: 30000
batch_size: 1000
log_level: info
database:
retention_days: 30
alerting:
enabled: true
sinks:
- type: syslog
enabled: true
facility: daemon
secrets: {}
monitoring:
enabled: false
serviceMonitor:
enabled: false
namespace: ''
interval: 30s
scrapeTimeout: 10s
prometheus:
enabled: false
server:
enabled: true
persistentVolume:
enabled: true
size: 8Gi
alertmanager:
enabled: true
persistentVolume:
enabled: true
size: 2Gi
grafana:
enabled: false
adminPassword: admin
persistentVolume:
enabled: true
size: 1Gi
networkPolicy:
enabled: false
ingress:
enabled: true
rules: []
egress:
enabled: true
rules: []
Deploy with Helm
# Add DaemonEye Helm repository
helm repo add daemoneye https://charts.daemoneye.com
helm repo update
# Install DaemonEye
helm install daemoneye daemoneye/daemoneye \
--namespace daemoneye \
--create-namespace \
--values values.yaml
# Install with production values
helm install daemoneye daemoneye/daemoneye \
--namespace daemoneye \
--create-namespace \
--values values-production.yaml
# Upgrade deployment
helm upgrade daemoneye daemoneye/daemoneye \
--namespace daemoneye \
--values values.yaml
# Uninstall
helm uninstall daemoneye --namespace daemoneye
Security Configuration
Network Policies
networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: daemoneye-network-policy
namespace: daemoneye
spec:
podSelector:
matchLabels:
app: daemoneye
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: daemoneye
- podSelector:
matchLabels:
app: daemoneye
ports:
- protocol: TCP
port: 8080
- protocol: TCP
port: 9090
egress:
- to:
- namespaceSelector:
matchLabels:
name: daemoneye
- podSelector:
matchLabels:
app: daemoneye
ports:
- protocol: TCP
port: 8080
- protocol: TCP
port: 9090
- to: []
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
Pod Security Standards
pod-security-policy.yaml (note: PodSecurityPolicy was removed in Kubernetes 1.25, so this manifest applies only to older clusters; the Pod Security Standards labels shown after it are the current replacement):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: daemoneye-psp
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
runAsUser:
rule: MustRunAsNonRoot
seLinux:
rule: RunAsAny
fsGroup:
rule: RunAsAny
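On clusters where PSP is unavailable, the equivalent policy is expressed as Pod Security Standards labels on the namespace (the PSS admission controller is enabled by default since Kubernetes 1.23). A sketch; note that the procmond DaemonSet may need a baseline or privileged profile in a separate namespace because of its added capabilities:
# Enforce, warn on, and audit the restricted profile
kubectl label namespace daemoneye \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/audit=restricted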
RBAC Configuration
rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: daemoneye-procmond
namespace: daemoneye
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: daemoneye-agent
namespace: daemoneye
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: daemoneye-procmond
rules:
- apiGroups: [""]
resources: ["nodes", "pods"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: daemoneye-procmond
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: daemoneye-procmond
subjects:
- kind: ServiceAccount
name: daemoneye-procmond
namespace: daemoneye
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: daemoneye-agent
rules:
- apiGroups: [""]
resources: ["pods", "services", "endpoints"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: daemoneye-agent
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: daemoneye-agent
subjects:
- kind: ServiceAccount
name: daemoneye-agent
namespace: daemoneye
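Once applied, the effective permissions can be verified by impersonating each service account with kubectl auth can-i:
# Expect "yes": procmond may list pods
kubectl auth can-i list pods --as=system:serviceaccount:daemoneye:daemoneye-procmond
# Expect "no": the agent was not granted pods/exec
kubectl auth can-i create pods/exec --as=system:serviceaccount:daemoneye:daemoneye-agent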
Monitoring and Observability
Prometheus ServiceMonitor
servicemonitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: daemoneye
namespace: daemoneye
labels:
app: daemoneye
spec:
selector:
matchLabels:
app: daemoneye
endpoints:
- port: metrics
path: /metrics
interval: 30s
scrapeTimeout: 10s
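To confirm the endpoint the ServiceMonitor scrapes is actually serving data, port-forward to a pod and query it directly (the pod name is a placeholder, following the -xxx convention used in the troubleshooting section):
# Forward the metrics port from an agent pod
kubectl port-forward -n daemoneye daemoneye-agent-xxx 9090:9090 &
# Fetch the Prometheus exposition output
curl -s http://localhost:9090/metrics | head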
Grafana Dashboard
grafana-dashboard.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: daemoneye-grafana-dashboard
namespace: daemoneye
labels:
grafana_dashboard: '1'
data:
daemoneye-dashboard.json: |
{
"dashboard": {
"title": "DaemonEye Monitoring",
"panels": [
{
"title": "Process Collection Rate",
"type": "graph",
"targets": [
{
"expr": "rate(daemoneye_processes_collected_total[5m])",
"legendFormat": "Processes/sec"
}
]
},
{
"title": "Memory Usage",
"type": "graph",
"targets": [
{
"expr": "daemoneye_memory_usage_bytes",
"legendFormat": "Memory Usage"
}
]
}
]
}
}
Troubleshooting
Common Issues
Pod Won't Start:
# Check pod status
kubectl get pods -n daemoneye
# Check pod logs
kubectl logs -n daemoneye daemoneye-procmond-xxx
# Check pod events
kubectl describe pod -n daemoneye daemoneye-procmond-xxx
Permission Denied:
# Check security context
kubectl get pod -n daemoneye daemoneye-procmond-xxx -o yaml | grep securityContext
# Check file permissions
kubectl exec -n daemoneye daemoneye-procmond-xxx -- ls -la /data
Network Issues:
# Check service endpoints
kubectl get endpoints -n daemoneye
# Check network connectivity
kubectl exec -n daemoneye daemoneye-agent-xxx -- ping daemoneye-procmond
Database Issues:
# Check database status
kubectl exec -n daemoneye daemoneye-agent-xxx -- daemoneye-cli database status
# Check database integrity
kubectl exec -n daemoneye daemoneye-agent-xxx -- daemoneye-cli database integrity-check
Debug Mode
Enable Debug Logging:
# Update ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: daemoneye-config
namespace: daemoneye
data:
procmond.yaml: |
app:
log_level: debug
# ... rest of config
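Re-apply the ConfigMap and restart the workloads so the new log level takes effect; running pods do not reload configuration automatically:
# Apply the updated configuration
kubectl apply -f configmap.yaml
# Restart the collectors and the agent to pick it up
kubectl rollout restart daemonset/daemoneye-procmond -n daemoneye
kubectl rollout restart deployment/daemoneye-agent -n daemoneye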
Debug Pod:
# Run debug pod
kubectl run debug --image=daemoneye/daemoneye-cli:1.0.0 -it --rm -- /bin/sh
# Check system capabilities
kubectl run debug --image=daemoneye/daemoneye-cli:1.0.0 -it --rm -- capsh --print
Performance Issues
High CPU Usage:
# Check resource usage
kubectl top pods -n daemoneye
# Check HPA status
kubectl get hpa -n daemoneye
# Scale up manually
kubectl scale deployment daemoneye-agent --replicas=3 -n daemoneye
High Memory Usage:
# Check memory usage
kubectl top pods -n daemoneye
# Check memory limits
kubectl describe pod -n daemoneye daemoneye-agent-xxx | grep Limits
Slow Database Operations:
# Check database performance
kubectl exec -n daemoneye daemoneye-agent-xxx -- daemoneye-cli database query-stats
# Optimize database
kubectl exec -n daemoneye daemoneye-agent-xxx -- daemoneye-cli database optimize
This Kubernetes deployment guide provides comprehensive instructions for deploying DaemonEye on Kubernetes. For additional help, consult the troubleshooting section or contact support.
Security Documentation
This document provides comprehensive security information for DaemonEye, including threat model, security considerations, and best practices.
Table of Contents
- Threat Model
- Security Architecture
- Security Features
- Security Configuration
- Security Best Practices
- Security Considerations
- Compliance
- Security Testing
- Security Updates
Threat Model
Attack Vectors
DaemonEye is designed to protect against various attack vectors:
- Process Injection: Monitoring for code injection techniques
- Privilege Escalation: Detecting unauthorized privilege changes
- Persistence Mechanisms: Identifying malicious persistence techniques
- Lateral Movement: Monitoring for lateral movement indicators
- Data Exfiltration: Detecting suspicious data access patterns
Security Boundaries
DaemonEye implements strict security boundaries:
- Process Isolation: Components run in separate processes
- Privilege Separation: Minimal required privileges per component
- Network Isolation: No listening ports by default
- Data Encryption: Sensitive data encrypted at rest and in transit
- Audit Logging: Comprehensive audit trail for all operations
Security Architecture
Three-Component Security Model
The three-component architecture provides defense in depth:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ ProcMonD │ │ daemoneye-agent │ │ daemoneye-cli │
│ (Privileged) │◄──►│ (User Space) │◄──►│ (Management) │
│ │ │ │ │ │
│ • Process Enum │ │ • Alerting │ │ • Queries │
│ • File Hashing │ │ • Network Ops │ │ • Configuration │
│ • Audit Logging │ │ • Rule Engine │ │ • Monitoring │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Privilege Management
ProcMonD (Privileged):
- Requires CAP_SYS_PTRACE capability
- Runs with minimal required privileges
- Drops privileges after initialization
- Isolated from network operations
daemoneye-agent (User Space):
- Runs as non-privileged user
- Handles network operations
- No direct system access
- Communicates via IPC only
daemoneye-cli (Management):
- Runs as regular user
- Read-only access to data
- No system modification capabilities
- Audits all operations
Security Features
Memory Safety
DaemonEye is built in Rust with memory safety guarantees:
// No unsafe code allowed
#![forbid(unsafe_code)]
// Safe memory management
let process_info = ProcessInfo {
pid: process.pid(),
name: process.name().to_string(),
executable_path: process.exe().map(|p| p.to_string_lossy().to_string()),
// ... other fields
};
Input Validation
Comprehensive input validation prevents injection attacks:
use validator::{Validate, ValidationError};
#[derive(Validate)]
pub struct DetectionRule {
#[validate(length(min = 1, max = 1000))]
pub name: String,
#[validate(custom = "validate_sql")]
pub sql_query: String,
#[validate(range(min = 1, max = 1000))]
pub priority: u32,
}
fn validate_sql(sql: &str) -> Result<(), ValidationError> {
    // Parse the SQL up front; reject anything that does not parse cleanly
    let dialect = sqlparser::dialect::GenericDialect {};
    let ast = sqlparser::parser::Parser::parse_sql(&dialect, sql)
        .map_err(|_| ValidationError::new("invalid_sql"))?;
    // Walk the AST and reject forbidden constructs
    validate_ast(&ast).map_err(|_| ValidationError::new("forbidden_sql"))?;
    Ok(())
}
Cryptographic Integrity
BLAKE3 hashing and Ed25519 signatures ensure data integrity:
use blake3;
use ed25519_dalek::{Keypair, Signature, Signer};

pub struct IntegrityChecker {
    keypair: Keypair,
}

impl IntegrityChecker {
    pub fn hash_data(&self, data: &[u8]) -> [u8; 32] {
        // BLAKE3 is used statelessly: hash each payload independently
        *blake3::hash(data).as_bytes()
    }

    pub fn sign_data(&self, data: &[u8]) -> Signature {
        self.keypair.sign(data)
    }

    pub fn verify_signature(&self, data: &[u8], signature: &Signature) -> bool {
        self.keypair.verify(data, signature).is_ok()
    }
}
SQL Injection Prevention
Multiple layers of SQL injection prevention:
- AST Validation: Parse and validate SQL queries
- Prepared Statements: Use parameterized queries
- Sandboxed Execution: Isolated query execution
- Input Sanitization: Clean and validate all inputs
use std::collections::HashSet;
use rusqlite::Connection;
use sqlparser::dialect::SQLiteDialect;
use sqlparser::parser::Parser;

pub struct SafeQueryExecutor {
    conn: Connection,
    allowed_tables: HashSet<String>,
}

impl SafeQueryExecutor {
    pub fn execute_query(&self, query: &str) -> Result<QueryResult, QueryError> {
        // Parse and validate the SQL before it reaches the database
        let ast = Parser::parse_sql(&SQLiteDialect {}, query)?;
        self.validate_ast(&ast)?;
        // Check table access against the whitelist
        self.check_table_access(&ast)?;
        // Execute with a prepared statement
        let mut stmt = self.conn.prepare(query)?;
        let rows = stmt.query_map([], |row| {
            // Map each row into a typed record
            ProcessRecord::from_row(row)
        })?;
        Ok(QueryResult::from_rows(rows))
    }
}
Security Configuration
Authentication and Authorization
security:
authentication:
enable_auth: true
auth_method: jwt
jwt_secret: ${JWT_SECRET}
token_expiry: 3600
authorization:
enable_rbac: true
roles:
- name: admin
permissions: [read, write, delete, configure]
- name: operator
permissions: [read, write]
- name: viewer
permissions: [read]
access_control:
allowed_users: []
allowed_groups: []
denied_users: [root]
denied_groups: [wheel]
Network Security
security:
network:
enable_tls: true
cert_file: /etc/daemoneye/cert.pem
key_file: /etc/daemoneye/key.pem
ca_file: /etc/daemoneye/ca.pem
verify_peer: true
cipher_suites: [TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256]
firewall:
enable_firewall: true
allowed_ports: [8080, 9090]
allowed_ips: [10.0.0.0/8, 192.168.0.0/16]
block_unknown: true
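For lab setups without an internal PKI, a self-signed certificate matching the paths above can be generated with openssl; production deployments should use CA-issued certificates:
# Generate a self-signed certificate and private key for testing
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
-keyout /etc/daemoneye/key.pem -out /etc/daemoneye/cert.pem \
-subj "/CN=daemoneye"
# Restrict access to the private key
chmod 600 /etc/daemoneye/key.pem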
Data Protection
security:
encryption:
enable_encryption: true
algorithm: AES-256-GCM
key_rotation_days: 30
data_protection:
enable_field_masking: true
masked_fields: [command_line, environment_variables]
enable_data_retention: true
retention_days: 30
audit:
enable_audit_logging: true
audit_log_path: /var/log/daemoneye/audit.log
log_level: info
include_sensitive_data: false
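A 256-bit key suitable for the AES-256-GCM setting above can be generated with openssl and injected via the secret store rather than written into the config file:
# Generate a random 256-bit key, base64-encoded
openssl rand -base64 32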
Security Best Practices
Deployment Security
- Principle of Least Privilege:
  - Run components with minimal required privileges
  - Use dedicated users and groups
  - Drop privileges after initialization
- Network Security:
  - Use TLS for all network communications
  - Implement firewall rules
  - Monitor network traffic
- Data Protection:
  - Encrypt sensitive data at rest
  - Use secure key management
  - Implement data retention policies
- Access Control:
  - Implement role-based access control
  - Use strong authentication
  - Monitor access patterns
Configuration Security
- Secure Defaults:
  - Disable unnecessary features
  - Use secure default settings
  - Require explicit configuration for sensitive features
- Secret Management:
  - Use environment variables for secrets
  - Implement secret rotation
  - Never hardcode credentials
- Input Validation:
  - Validate all inputs
  - Sanitize user data
  - Use parameterized queries
Operational Security
- Monitoring:
  - Monitor system health
  - Track security events
  - Implement alerting
- Logging:
  - Enable comprehensive logging
  - Use structured logging
  - Implement log rotation
- Updates:
  - Keep software updated
  - Monitor security advisories
  - Test updates in staging
Security Considerations
Threat Detection
DaemonEye can detect various security threats:
- Malware Execution:
  - Suspicious process names
  - Unusual execution patterns
  - Code injection attempts
- Privilege Escalation:
  - Unauthorized privilege changes
  - Setuid/setgid abuse
  - Capability escalation
- Persistence Mechanisms:
  - Startup modifications
  - Service installations
  - Scheduled task creation
- Lateral Movement:
  - Network scanning
  - Credential theft
  - Remote execution
Incident Response
- Detection:
  - Real-time monitoring
  - Automated alerting
  - Threat intelligence integration
- Analysis:
  - Forensic data collection
  - Timeline reconstruction
  - Root cause analysis
- Containment:
  - Process isolation
  - Network segmentation
  - Access restrictions
- Recovery:
  - System restoration
  - Security hardening
  - Monitoring enhancement
Compliance
Security Standards
DaemonEye helps meet various security standards:
- NIST Cybersecurity Framework:
  - Identify: Asset discovery and classification
  - Protect: Access control and data protection
  - Detect: Continuous monitoring and threat detection
  - Respond: Incident response and automation
  - Recover: Business continuity and restoration
- ISO 27001:
  - Information security management
  - Risk assessment and treatment
  - Security monitoring and incident management
  - Continuous improvement
- SOC 2:
  - Security controls
  - Availability monitoring
  - Processing integrity
  - Confidentiality protection
Audit Requirements
- Audit Logging:
  - Comprehensive event logging
  - Certificate Transparency-style audit ledger
  - Long-term retention
- Access Controls:
  - User authentication
  - Role-based authorization
  - Access monitoring
- Data Protection:
  - Encryption at rest and in transit
  - Data classification
  - Retention policies
Security Testing
Vulnerability Assessment
- Static Analysis:
  - Code review
  - Dependency scanning
  - Configuration validation
- Dynamic Analysis:
  - Penetration testing
  - Fuzzing
  - Runtime monitoring
- Security Scanning:
  - Container image scanning
  - Network vulnerability scanning
  - Application security testing
Security Validation
- Unit Testing:
  - Security function testing
  - Input validation testing
  - Error handling testing
- Integration Testing:
  - Component interaction testing
  - Security boundary testing
  - End-to-end security testing
- Performance Testing:
  - Security overhead measurement
  - Load testing with security features
  - Stress testing under attack
Security Updates
Update Process
- Security Advisories:
  - Monitor security mailing lists
  - Track CVE databases
  - Subscribe to vendor notifications
- Patch Management:
  - Test patches in staging
  - Deploy during maintenance windows
  - Verify patch effectiveness
- Vulnerability Response:
  - Assess vulnerability impact
  - Implement temporary mitigations
  - Deploy permanent fixes
Security Monitoring
- Threat Intelligence:
  - Subscribe to threat feeds
  - Monitor security blogs
  - Participate in security communities
- Continuous Monitoring:
  - Real-time security monitoring
  - Automated threat detection
  - Incident response automation
- Security Metrics:
  - Track security KPIs
  - Monitor compliance metrics
  - Report security status
This security documentation provides comprehensive guidance for securing DaemonEye deployments. For additional security information, consult the specific security guides or contact the security team.
DaemonEye Security Design Overview
Executive Summary
DaemonEye is a security-focused, high-performance process monitoring system designed to provide continuous threat detection and behavioral analysis while maintaining strict security boundaries and audit-grade integrity. The system implements a three-component architecture with privilege separation, cryptographic integrity verification, and comprehensive audit logging to meet enterprise security requirements.
This document provides a comprehensive technical overview of DaemonEye's security design, architecture, and implementation details for security professionals, compliance officers, and system architects responsible for threat detection and incident response capabilities.
Table of Contents
- Security Architecture Overview
- Threat Model and Security Boundaries
- Component Security Design
- Cryptographic Security Framework
- Performance and Scalability
- Data Protection and Privacy
- Audit and Compliance Features
- Network Security and Communication
- Operational Security Controls
- Security Testing and Validation
- US Government ISSO Considerations
- Additional NIST SP 800-53 Compliance Requirements
- Configuration Management (CM) Family
- Contingency Planning (CP) Family
- Identification and Authentication (IA) Family
- Incident Response (IR) Family
- Maintenance (MA) Family
- Risk Assessment (RA) Family
- System and Services Acquisition (SA) Family
- Enhanced System and Communications Protection (SC) Family
- Implementation Priority
- Deployment Security Considerations
- Footnotes
- Conclusion
Security Architecture Overview
Core Security Principles
DaemonEye is built on fundamental security principles that guide every aspect of its design and implementation:
Principle of Least Privilege: Each component operates with the minimum privileges required for its specific function, with immediate privilege dropping after initialization.
Defense in Depth: Multiple layers of security controls protect against various attack vectors, from memory safety to network isolation.
Fail-Safe Design: The system fails securely, maintaining monitoring capabilities even when individual components experience issues.
Audit-First Approach: All security-relevant events are cryptographically logged with tamper-evident integrity guarantees.
Zero Trust Architecture: No component trusts another without explicit verification, and all communications are authenticated and encrypted.
Three-Component Security Architecture
graph TB
  subgraph "Security Boundaries"
    subgraph "Privileged Domain"
      PM["procmond<br/>(Process Collector)<br/>• Minimal privileges<br/>• Process enumeration<br/>• Hash computation<br/>• Audit logging<br/>• No network access"]
    end
    subgraph "User Space Domain"
      SA["daemoneye-agent<br/>(Detection Orchestrator)<br/>• SQL detection engine<br/>• Alert management<br/>• Multi-channel delivery<br/>• Database management<br/>• Outbound-only network"]
      CLI["daemoneye-cli<br/>(Operator Interface)<br/>• Query interface<br/>• System management<br/>• No direct DB access<br/>• No network access"]
    end
    subgraph "Data Storage"
      AUDIT["Audit Ledger<br/>(Merkle Tree)<br/>• Tamper-evident<br/>• Append-only<br/>• Cryptographic integrity"]
      STORE["Event Store<br/>(redb Database)<br/>• Process data<br/>• Detection results<br/>• Alert history"]
    end
  end
  PM -->|Write-only| AUDIT
  SA -->|Read/Write| STORE
  SA -->|Read-only| AUDIT
  CLI -->|IPC Query| SA
  SA -->|IPC Task| PM
  style PM fill:#ffebee
  style SA fill:#e8f5e8
  style CLI fill:#e3f2fd
  style AUDIT fill:#fff3e0
  style STORE fill:#f3e5f5
Security Control Matrix
Security Control | procmond | daemoneye-agent | daemoneye-cli | Implementation |
---|---|---|---|---|
Privilege Separation | ✅ Elevated (temporary) | ✅ User space | ✅ User space | Platform-specific capabilities |
Network Isolation | ✅ No network | ✅ Outbound only | ✅ No network | Firewall rules + code restrictions |
Memory Safety | ✅ Rust + zero unsafe | ✅ Rust + zero unsafe | ✅ Rust + zero unsafe | Compiler-enforced |
Input Validation | ✅ Protobuf schema | ✅ SQL AST validation | ✅ CLI validation | Type-safe parsing |
Audit Logging | ✅ All operations | ✅ All operations | ✅ All operations | Structured JSON + Merkle tree |
Cryptographic Integrity | ✅ BLAKE3 hashing | ✅ BLAKE3 hashing | ✅ BLAKE3 hashing | Hardware-accelerated |
Error Handling | ✅ Graceful degradation | ✅ Circuit breakers | ✅ Safe defaults | Comprehensive error types |
Threat Model and Security Boundaries
Attack Surface Analysis
Primary Attack Vectors:
- Process Enumeration Attacks: Attempts to exploit privileged process access
- SQL Injection: Malicious detection rules targeting the SQL engine
- IPC Communication Attacks: Exploitation of inter-process communication
- Database Tampering: Attempts to modify stored process data or audit logs
- Privilege Escalation: Exploitation of temporary elevated privileges
- Network-based Attacks: Exploitation of alert delivery channels
Security Boundary Enforcement
Process Isolation:
- procmond runs in isolated process space with minimal privileges
- daemoneye-agent operates in user space with restricted database access
- daemoneye-cli has no direct system access, only IPC communication
Network Isolation:
- procmond: Zero network access (air-gapped)
- daemoneye-agent: Outbound-only connections for alert delivery
- daemoneye-cli: No network access, local IPC only
Data Access Controls:
- Audit ledger: Write-only for procmond, read-only for others
- Event store: Read/write for daemoneye-agent, query-only for daemoneye-cli
- Configuration: Hierarchical access with validation
Threat Mitigation Strategies
Memory Safety:
- Rust's ownership system prevents buffer overflows and use-after-free
- Zero unsafe code policy with isolated exceptions
- Comprehensive testing including fuzzing and static analysis
Input Validation:
- Protobuf schema validation for all IPC messages
- SQL AST parsing with whitelist-based function approval
- CLI argument validation with type safety
Cryptographic Protection:
- BLAKE3 hashing for all audit entries
- Merkle tree integrity verification
- Optional Ed25519 signatures for external verification
Component Security Design
procmond (Process Collection Component)
Security Role: Privileged data collector with minimal attack surface
Security Features:
- Temporary Privilege Escalation: Requests only required capabilities (CAP_SYS_PTRACE on Linux, SeDebugPrivilege on Windows)1
- Immediate Privilege Drop: Drops all elevated privileges after initialization2
- No Network Access: Completely air-gapped from network interfaces
- Minimal Codebase: Simple, auditable process enumeration logic
- Cryptographic Hashing: SHA-256 computation for executable integrity verification3
- Audit Logging: All operations logged to tamper-evident audit ledger4
Security Boundaries:
// Example security boundary enforcement
pub struct ProcessCollector {
privilege_manager: PrivilegeManager,
hash_computer: SecureHashComputer,
audit_logger: AuditLogger,
}
impl ProcessCollector {
pub async fn initialize(&mut self) -> Result<()> {
// Request minimal required privileges
self.privilege_manager.request_enhanced_privileges().await?;
// Perform initialization tasks
self.setup_process_enumeration().await?;
// Immediately drop privileges
self.privilege_manager.drop_privileges().await?;
// Log privilege drop for audit
self.audit_logger.log_privilege_drop().await?;
Ok(())
}
}
daemoneye-agent (Detection Orchestrator)
Security Role: User-space detection engine with network alerting capabilities
Security Features:
- SQL Injection Prevention: AST-based query validation with whitelist functions5
- Sandboxed Execution: Read-only database connections for rule execution6
- Resource Limits: Timeout and memory constraints on detection rules7
- Multi-Channel Alerting: Circuit breaker pattern for reliable delivery8
- Audit Trail: Comprehensive logging of all detection activities9
SQL Security Implementation:
- AST Validation: Parse SQL queries using AST validation to prevent injection attacks5
- Function Whitelist: Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions)10
- Prepared Statements: Use prepared statements with read-only database connections6
- Timeout Protection: Complete within 30 seconds or timeout with appropriate logging7
- Audit Logging: Reject forbidden constructs and log attempts for audit purposes9
pub struct SqlValidator {
    dialect: sqlparser::dialect::SQLiteDialect,
    allowed_functions: HashSet<String>,
    forbidden_statements: HashSet<String>,
}

impl SqlValidator {
    pub fn validate_query(&self, sql: &str) -> Result<ValidationResult> {
        let ast = sqlparser::parser::Parser::parse_sql(&self.dialect, sql)?;
        for statement in &ast {
            match statement {
                Statement::Query(query) => {
                    // Only allow SELECT statements
                    self.validate_select_query(query)?;
                }
                _ => return Err(ValidationError::ForbiddenStatement),
            }
        }
        // Validate function usage against the whitelist
        self.validate_function_usage(&ast)?;
        Ok(ValidationResult::Valid)
    }
}
daemoneye-cli (Operator Interface)
Security Role: Secure query interface with no direct system access
Security Features:
- No Direct Database Access: All queries routed through daemoneye-agent
- Input Sanitization: Comprehensive validation of all user inputs
- Safe SQL Execution: Prepared statements with parameter binding11
- Output Formats: Support JSON, human-readable table, and CSV output12
- Rule Management: List, validate, test, and import/export detection rules13
- Health Monitoring: Display component status with color-coded indicators14
- Large Dataset Support: Streaming and pagination for result sets15
- Audit Logging: All queries and operations logged
Cryptographic Security Framework
Hash Function Selection
BLAKE3 for Audit Integrity:
- Security: 256-bit security level with resistance to length extension attacks
- Hardware Acceleration: Optimized implementations available
- Deterministic: Consistent output across platforms and implementations
- Requirements: Specified for audit ledger hash computation16
SHA-256 for Executable Hashing:
- Industry Standard: Widely recognized and trusted
- Compatibility: Integrates with existing security tools and databases
- Verification: Easy integration with external verification systems
- Requirements: Specified for executable file integrity verification3
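Because executable hashes use SHA-256, they can be cross-checked with standard tooling. A sketch, assuming procmond is installed at /usr/local/bin/procmond and the b3sum utility is available for BLAKE3:
# SHA-256 digest, comparable against the stored executable hash
sha256sum /usr/local/bin/procmond
# BLAKE3 digest, the algorithm used by the audit ledger
b3sum /usr/local/bin/procmond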
FIPS 140-2 Compliance:
- FIPS 140-2 Level 1: Software-based cryptographic module
- Approved Algorithms: SHA-256, BLAKE3 (when approved), Ed25519
- Key Management: FIPS-approved key generation and storage
- Cryptographic Module Validation: CMVP validation support
- Algorithm Implementation: FIPS-approved cryptographic implementations
Common Criteria Evaluation:
- Target of Evaluation (TOE): DaemonEye security monitoring system
- Security Target (ST): Comprehensive security requirements documentation
- Evaluation Assurance Level (EAL): EAL4+ evaluation support
- Protection Profile: Custom protection profile development
- Security Functional Requirements: SFR implementation and testing
Merkle Tree Audit Ledger
Cryptographic Properties:
- Tamper Evidence: Any modification to historical entries invalidates the entire chain17
- Inclusion Proofs: Cryptographic proof that specific entries exist in the ledger17
- Checkpoint Signatures: Optional Ed25519 signatures for external verification
- Forward Security: New entries don't compromise historical integrity
- Append-Only: Monotonic sequence numbers for all entries18
- BLAKE3 Hashing: Fast, cryptographically secure hash computation16
- Millisecond Precision: Proper ordering and millisecond-precision timestamps19
Implementation Details:
pub struct AuditLedger {
merkle_tree: MerkleTree<Blake3Hasher>,
checkpoints: Vec<Checkpoint>,
signature_key: Option<Ed25519KeyPair>,
}
impl AuditLedger {
pub fn append_entry(&mut self, entry: AuditEntry) -> Result<InclusionProof> {
// Canonicalize entry for consistent hashing
let canonical = self.canonicalize_entry(&entry)?;
let leaf_hash = Blake3Hasher::hash(&canonical);
// Insert into Merkle tree
self.merkle_tree.insert(leaf_hash).commit();
// Generate inclusion proof
let proof = self
.merkle_tree
.proof(&[self.merkle_tree.leaves().len() - 1]);
// Optional: Sign checkpoint if threshold reached
if self.should_create_checkpoint() {
self.create_signed_checkpoint()?;
}
Ok(proof)
}
}
Key Management
Key Generation:
- Ed25519 key pairs generated using secure random number generation
- Keys stored in OS keychain or hardware security modules when available
- Key rotation policies for long-term deployments
Signature Verification:
- Public key distribution through secure channels
- Certificate chain validation for enterprise deployments
- Air-gap compatible verification procedures
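An Ed25519 checkpoint-signing key pair can be generated with openssl; the file paths are illustrative, and in production the private key belongs in the OS keychain or an HSM as noted above:
# Generate an Ed25519 private key
openssl genpkey -algorithm ed25519 -out daemoneye-signing.key
# Export the public key for distribution to verifiers
openssl pkey -in daemoneye-signing.key -pubout -out daemoneye-signing.pub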
Performance and Scalability
Verified Performance Requirements
Core Performance Targets:
- CPU Usage: < 5% sustained during continuous monitoring20
- Process Enumeration: < 5 seconds for systems with up to 10,000 processes21
- Detection Rule Timeout: 30 seconds maximum execution time22
- Alert Delivery Retry: Up to 3 attempts with maximum 60-second delay23
- Audit Timestamps: Millisecond-precision timestamps19
Enterprise Performance Targets:
- Kernel Event Processing: Sub-millisecond latency from event occurrence to detection24
- Fleet Query Response: < 60 seconds for queries across up to 10,000 endpoints25
- CPU Overhead: < 2% per monitored endpoint for 10,000+ processes26
- Event Processing: 100,000+ events per minute with sub-second query response27
Resource Management
Memory Safety:
- Rust's ownership system prevents buffer overflows and use-after-free
- Zero unsafe code policy with isolated exceptions
- Comprehensive testing including fuzzing and static analysis
Concurrency Control:
- Bounded channels with configurable capacity
- Circuit breaker patterns for external dependencies
- Graceful degradation under resource constraints
Data Protection and Privacy
Data Classification
Core Data Types (All Tiers):
- Public Data: Process names, basic system information
- Internal Data: Process metadata, detection rules, configuration
- Confidential Data: Command line arguments, file paths, user information
- Restricted Data: Cryptographic hashes, audit logs, alert details
Government Data Classifications (All Tiers):
DaemonEye supports all standard government data classification levels with appropriate handling controls and access restrictions.
Privacy Controls
Data Masking (All Tiers):
- Configurable redaction of sensitive command line arguments
- Optional anonymization of user identifiers
- Field-level privacy controls for different deployment scenarios
Retention Policies (All Tiers):
- Configurable data retention periods for different data types
- Automatic purging of expired data
- Secure deletion procedures for sensitive information
Access Controls (All Tiers):
- Role-based access to different data classifications
- Audit logging of all data access
- Principle of least privilege for data access
Business Tier Data Protection Features
Centralized Data Management:
- Security Center: Centralized aggregation and management of data from multiple agents
- mTLS Authentication: Mutual TLS with certificate chain validation for secure agent connections
- Certificate Management: Automated certificate provisioning and rotation
- Role-Based Access Control: Granular permissions for different user roles
Enhanced Data Export:
- Standard Format Support: CEF (Common Event Format), structured JSON, and STIX-lite exports
- SIEM Integration: Native connectors for Splunk, Elasticsearch, and Kafka
- Data Portability: Comprehensive export capabilities for data migration and analysis
Code Signing and Integrity:
- Signed Installers: MSI installers for Windows and DMG packages for macOS with valid code signing certificates
- Enterprise Deployment: Proper metadata for enterprise deployment tools
- Security Validation: Operating system security validation without warnings
Enterprise Tier Data Protection Features
Advanced Cryptographic Security:
- SLSA Level 3 Provenance: Complete software supply chain attestation
- Cosign Signatures: Hardware security module-backed code signing
- Software Bill of Materials (SBOM): Complete dependency and component inventory
- Signature Verification: Mandatory signature verification before execution
Federated Data Architecture:
- Multi-Tier Security Centers: Hierarchical data aggregation across geographic regions
- Federated Storage: Distributed data storage with local and global aggregation
- Data Sovereignty: Regional data residency compliance
- Cross-Region Replication: Secure data replication with integrity verification
Advanced Compliance and Threat Intelligence:
- STIX/TAXII Integration: Automated threat intelligence feed consumption and processing
- Compliance Framework Mappings: NIST, ISO 27001, CIS framework mappings
- Quarterly Rule Packs: Curated threat intelligence updates with automated rule deployment
- Compliance Reporting: Automated compliance reporting and evidence collection
Kernel-Level Data Protection:
- Real-Time Event Processing: Sub-millisecond processing of kernel-level events
- Network Correlation: Process-to-network event correlation for lateral movement detection
- Memory Analysis: Advanced memory analysis capabilities for sophisticated attack detection
- Platform-Specific Monitoring: eBPF (Linux), ETW (Windows), EndpointSecurity (macOS) integration
Compliance Features
Core Compliance (All Tiers):
GDPR Compliance:
- Data minimization principles
- Right to erasure implementation
- Data portability features
- Privacy by design architecture
SOC 2 Type II:
- Comprehensive audit logging
- Access control documentation
- Incident response procedures
- Regular security assessments
NIST Cybersecurity Framework:
- Identify: Asset inventory and risk assessment
- Protect: Access controls and data encryption
- Detect: Continuous monitoring and alerting
- Respond: Incident response and forensics
- Recover: Business continuity and restoration
Business Tier Compliance Features:
Enhanced Audit and Reporting:
- Centralized Audit Logs: Aggregated audit logs from multiple agents
- Automated Compliance Reporting: Scheduled compliance reports and dashboards
- Data Retention Management: Centralized data retention policy enforcement
- Audit Trail Integrity: Cryptographic verification of audit log integrity across the fleet
Enterprise Integration Compliance:
- SIEM Integration: Native compliance with major SIEM platforms (Splunk, Elasticsearch, QRadar)
- Standard Format Support: CEF, STIX-lite, and other compliance-standard formats
- Data Lineage Tracking: Complete data lineage from collection to reporting
Enterprise Tier Compliance Features:
Advanced Compliance Frameworks:
- NIST SP 800-53: Complete security controls mapping and implementation
- ISO 27001: Information security management system compliance
- CIS Controls: Center for Internet Security controls implementation
- FedRAMP: Federal Risk and Authorization Management Program compliance
Threat Intelligence and Advanced Monitoring:
- STIX/TAXII Integration: Automated threat intelligence feed consumption
- Compliance Mappings: Real-time mapping of detection events to compliance controls
- Advanced SIEM Integration: Full STIX/TAXII support with compliance mappings
- Quarterly Threat Updates: Automated deployment of curated threat intelligence rule packs
Hardened Security and Supply Chain:
- SLSA Level 3 Provenance: Complete software supply chain attestation
- Cosign Signatures: Hardware security module-backed code signing
- Software Bill of Materials (SBOM): Complete dependency and component inventory
- Supply Chain Security: End-to-end supply chain security verification
FISMA Compliance:
- NIST SP 800-53 security controls implementation
- Risk assessment and authorization processes
- Continuous monitoring and assessment
- Incident response and reporting procedures
FedRAMP Authorization:
- Cloud security requirements compliance
- Third-party assessment organization (3PAO) validation
- Agency authorization and continuous monitoring
- Security control inheritance and shared responsibility
FISMA High/Moderate/Low Impact Systems:
- Tailored security control baselines
- Risk-based security control selection
- Control enhancement implementation
- Assessment and authorization documentation
Audit and Compliance Features
Comprehensive Audit Logging
Structured Logging:
- JSON format with consistent field naming (see the sketch below)
- Correlation IDs for tracking related events
- Millisecond-precision timestamps
- Configurable log levels and filtering
- Prometheus-compatible metrics for collection rate, detection latency, and alert delivery
- HTTP health endpoints with component-level status checks
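A minimal sketch of what a structured audit event of this shape might look like, assuming `serde_json`; the field names here are illustrative, not DaemonEye's actual schema:

```rust
use serde_json::json;

fn main() {
    // Illustrative structured audit event; field names are assumptions.
    let event = json!({
        "timestamp_ms": 1_700_000_000_123u64,   // millisecond precision
        "level": "INFO",
        "correlation_id": "3f9c1a2e",           // ties related events together
        "event_type": "process_enumeration",
        "component": "procmond",
        "details": { "processes_scanned": 412 }
    });
    println!("{event}");
}
```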
Audit Event Types:
- Process enumeration events
- Detection rule executions
- Alert generation and delivery
- System configuration changes
- Security events and violations
- Administrative actions
Compliance Reporting
Automated Reports:
- Daily security summaries
- Weekly compliance dashboards
- Monthly audit reports
- Quarterly risk assessments
Export Capabilities:
- SIEM integration (Splunk, Elasticsearch, QRadar)
- Compliance tool integration (GRC platforms)
- Custom report generation
- Air-gap compatible exports
Forensic Capabilities
Incident Response:
- Timeline reconstruction from audit logs
- Process tree analysis
- Hash verification for evidence integrity
- Chain of custody documentation
Evidence Preservation:
- Immutable audit log storage
- Cryptographic integrity verification
- Secure backup and archival
- Legal hold procedures
Network Security and Communication
IPC Security Model
Transport Security:
- Unix domain sockets with restricted permissions (0700 directory, 0600 socket; see the sketch after this list)
- Windows named pipes with appropriate security descriptors
- No network exposure of IPC endpoints
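As a minimal sketch of how those permissions can be applied when creating the socket, using only the Rust standard library (the directory path is an assumption for illustration):

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::net::UnixListener;

fn bind_ipc_socket() -> std::io::Result<UnixListener> {
    let dir = "/run/daemoneye"; // hypothetical runtime directory
    fs::create_dir_all(dir)?;
    fs::set_permissions(dir, fs::Permissions::from_mode(0o700))?; // owner-only directory
    let path = format!("{dir}/agent.sock");
    let _ = fs::remove_file(&path); // clear a stale socket from a previous run
    let listener = UnixListener::bind(&path)?;
    fs::set_permissions(&path, fs::Permissions::from_mode(0o600))?; // owner-only socket
    Ok(listener)
}
```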
Message Security:
- Protobuf schema validation
- CRC32 integrity verification (sketched after this list)
- Length-delimited framing
- Timeout and rate limiting
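A simplified sketch of length-delimited framing with a CRC32 trailer, assuming the `crc32fast` crate; the exact wire layout of DaemonEye's frames may differ:

```rust
// Frame layout assumed here: [u32 length LE][payload][u32 crc32 LE].
fn frame_message(payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(payload.len() + 8);
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes()); // length prefix
    frame.extend_from_slice(payload);                               // protobuf bytes
    frame.extend_from_slice(&crc32fast::hash(payload).to_le_bytes()); // integrity check
    frame
}

fn unframe_message(frame: &[u8]) -> Option<&[u8]> {
    let len = u32::from_le_bytes(frame.get(..4)?.try_into().ok()?) as usize;
    let payload = frame.get(4..4 + len)?;
    let crc = u32::from_le_bytes(frame.get(4 + len..8 + len)?.try_into().ok()?);
    // Reject the frame unless the checksum matches.
    (crc32fast::hash(payload) == crc).then_some(payload)
}
```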
Authentication:
- Process-based authentication
- Capability negotiation
- Connection limits and monitoring
Alert Generation Security
Alert Structure:
- Required Fields: Timestamp, severity, rule_id, title, and description
- Process Details: Include affected process details (PID, name, executable path)
- Severity Levels: Support four severity levels (low, medium, high, critical)
- Deduplication: Implement deduplication using configurable keys (see the sketch below)
- Database Storage: Store alerts with delivery tracking information
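The alert shape described above might be modeled roughly as follows; the field names and deduplication key are illustrative assumptions, not the actual schema:

```rust
#[derive(Debug, Clone)]
enum Severity { Low, Medium, High, Critical } // the four supported levels

// Illustrative alert record; field names are assumptions.
struct Alert {
    timestamp_ms: u64, // millisecond-precision timestamp
    severity: Severity,
    rule_id: String,
    title: String,
    description: String,
    pid: u32, // affected process details
    process_name: String,
    executable_path: String,
}

impl Alert {
    // Deduplication key built from configurable fields; the choice of
    // fields here is one plausible default.
    fn dedup_key(&self) -> String {
        format!("{}:{}:{}", self.rule_id, self.pid, self.executable_path)
    }
}
```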
Security Controls:
- Input validation for all alert fields
- Sanitization of user-provided content
- Rate limiting to prevent alert flooding
- Audit logging of all alert generation
Alert Delivery Security
Multi-Channel Delivery:
- stdout: Local logging and monitoring
- syslog: Centralized logging infrastructure
- webhook: HTTPS with certificate validation
- email: SMTP with TLS encryption
- file: Secure file system storage
Delivery Guarantees:
- Circuit breaker pattern for failing channels
- Exponential backoff with jitter (see the sketch below)
- Dead letter queue for failed deliveries
- Delivery audit trail
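A minimal sketch of exponential backoff with full jitter for retrying a failing delivery channel, assuming the `rand` crate; the base and cap values are illustrative defaults, not DaemonEye's actual settings:

```rust
use std::time::Duration;

fn backoff_delay(attempt: u32) -> Duration {
    let base_ms = 500u64;
    let cap_ms = 60_000u64;
    // Exponential growth, clamped to avoid overflow and capped at one minute.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    let capped = exp.min(cap_ms);
    // Full jitter: pick a uniform delay in [0, capped) so retries spread out.
    let jitter = (rand::random::<f64>() * capped as f64) as u64;
    Duration::from_millis(jitter)
}
```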
Network Isolation:
- Outbound-only connections
- No listening ports
- Firewall-friendly design
- Air-gap compatibility
Offline Operation Security
Offline Capabilities:
- Core Functionality: All core functionality continues operating normally when network connectivity is unavailable
- Process Monitoring: Process enumeration, detection rules, and database operations function without degradation
- Alert Delivery: Alert delivery degrades gracefully with local sinks continuing to work
- Bundle Distribution: Support bundle-based configuration and rule distribution for airgapped systems
- Bundle Import: Validate and apply bundles atomically with conflict resolution (see the sketch after this list)
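One common way to make bundle application atomic is to verify integrity first and then swap the file into place with a rename, so a partial write is never visible. A minimal sketch under that assumption, using the `blake3` crate; paths and the expected-hash source are hypothetical:

```rust
use std::fs;
use std::path::Path;

fn apply_bundle(bundle: &Path, expected_hash: &str, dest: &Path) -> std::io::Result<()> {
    let bytes = fs::read(bundle)?;
    // Verify the bundle's integrity before anything touches the destination.
    let actual = blake3::hash(&bytes).to_hex().to_string();
    if actual != expected_hash {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "bundle hash mismatch; refusing to apply",
        ));
    }
    let staging = dest.with_extension("staging");
    fs::write(&staging, &bytes)?; // stage next to the destination
    fs::rename(&staging, dest)?;  // atomic on the same filesystem
    Ok(())
}
```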
Security Benefits:
- No external attack surface
- Reduced dependency on network security
- Enhanced isolation and containment
- Compliance with air-gap requirements
Operational Security Controls
Configuration Security
Secure Defaults:
- Minimal privilege requirements
- Disabled network features by default
- Strict input validation
- Comprehensive logging enabled
Configuration Validation:
- Schema-based validation
- Environment-specific checks
- Security policy enforcement
- Change audit logging
Secrets Management:
- Environment variable support (see the sketch after this list)
- OS keychain integration
- No hardcoded credentials
- Secure credential rotation
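A minimal sketch of that resolution order, assuming the third-party `keyring` crate for OS keychain access; the environment variable and service names are hypothetical:

```rust
// Resolve a credential: environment variable first, then the OS keychain,
// never a hardcoded value. Names here are illustrative assumptions.
fn resolve_smtp_password() -> Option<String> {
    if let Ok(value) = std::env::var("DAEMONEYE_SMTP_PASSWORD") {
        return Some(value);
    }
    keyring::Entry::new("daemoneye", "smtp")
        .ok()?
        .get_password()
        .ok()
}
```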
Monitoring and Alerting
Security Metrics:
- Failed authentication attempts
- Privilege escalation events
- SQL injection attempts
- Network connection failures
- Database integrity violations
Health Monitoring:
- Component status tracking
- Performance metrics collection
- Resource utilization monitoring
- Error rate tracking
Incident Detection:
- Anomaly detection algorithms
- Threshold-based alerting
- Correlation with external threat intelligence
- Automated response capabilities
Backup and Recovery
Data Backup:
- Regular database snapshots
- Audit log archival
- Configuration backup
- Cryptographic verification
Disaster Recovery:
- Point-in-time recovery
- Cross-platform restoration
- Integrity verification
- Testing procedures
Business Continuity:
- Graceful degradation
- Failover capabilities
- Service restoration procedures
- Communication protocols
Security Testing and Validation
Static Analysis
Code Quality:
- Zero warnings policy enforced with `cargo clippy -- -D warnings`
- Memory safety verification
- Type safety enforcement
- Security linting rules
Dependency Scanning:
- `cargo audit` for vulnerability detection
- `cargo deny` for license compliance
- Supply chain security verification
- Regular dependency updates
Dynamic Testing
Fuzzing:
- SQL injection test vectors
- Protobuf message fuzzing
- Configuration file fuzzing
- Network protocol fuzzing
Penetration Testing:
- Privilege escalation testing
- IPC communication testing
- Database security testing
- Network isolation verification
Performance Testing:
- Load testing with high process counts
- Memory usage profiling
- CPU utilization monitoring
- Database performance testing
Security Validation
Cryptographic Verification:
- Hash function correctness
- Merkle tree integrity
- Signature verification
- Random number generation
Compliance Testing:
- SOC 2 Type II requirements
- GDPR compliance verification
- NIST framework alignment
- Industry standard validation
US Government ISSO Considerations
This section explains how DaemonEye addresses specific concerns and requirements that US Government Information System Security Officers (ISSOs) must consider when evaluating security monitoring solutions for federal systems.
What DaemonEye Provides for ISSOs
Audit-Grade Evidence Collection:
DaemonEye's Merkle tree audit ledger provides cryptographically verifiable evidence that ISSOs can use for compliance reporting and forensic investigations. The system generates inclusion proofs for every audit event, enabling ISSOs to demonstrate data integrity and non-repudiation to auditors and investigators.
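As a simplified illustration of the underlying idea (the production design is a full Merkle tree with inclusion proofs, not a plain chain), each ledger entry can be cryptographically bound to its predecessor with BLAKE3, assuming the `blake3` crate:

```rust
// Append-only chaining sketch: tampering with any earlier entry changes
// every subsequent hash, making modification detectable.
fn chain_entry(prev_hash: &[u8; 32], event_json: &str) -> [u8; 32] {
    let mut hasher = blake3::Hasher::new();
    hasher.update(prev_hash);             // bind to the previous entry
    hasher.update(event_json.as_bytes()); // bind to this event's content
    *hasher.finalize().as_bytes()
}
```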
Minimal Attack Surface for High-Risk Environments:
The three-component architecture with privilege separation means that even if one component is compromised, the system maintains security boundaries. This is particularly important for ISSOs managing systems with sensitive data, as it limits the blast radius of potential security incidents.
Airgapped Operation Capability:
DaemonEye operates entirely offline, making it suitable for classified environments where network connectivity is restricted. ISSOs can deploy the system in airgapped networks without compromising security or functionality.
FISMA Compliance Support
NIST SP 800-53 Control Implementation:
DaemonEye directly implements several NIST SP 800-53 controls that ISSOs must verify:
- AU-2 (Audit Events): Comprehensive logging of all security-relevant events with structured JSON format
- AU-9 (Protection of Audit Information): Cryptographic integrity protection using BLAKE3 hashing
- AU-10 (Non-Repudiation): Merkle tree audit ledger provides cryptographic proof of data integrity
- SI-4 (Information System Monitoring): Continuous process monitoring with real-time threat detection
- SC-7 (Boundary Protection): Outbound-only network connections with no listening ports
- AC-6 (Least Privilege): Minimal privilege implementation with immediate privilege dropping
Evidence for ATO Packages:
The system generates the audit evidence and documentation that ISSOs need for Authorization to Operate (ATO) packages, including:
- Cryptographic integrity verification reports
- Privilege separation documentation
- Input validation test results
- Performance benchmarks under load
Risk Management Framework (RMF) Support
Continuous Monitoring Capabilities:
DaemonEye provides the continuous monitoring capabilities that ISSOs need for ongoing authorization, including:
- Real-time security control effectiveness measurement
- Automated compliance reporting
- Performance metrics collection
- Incident detection and alerting
Documentation and Evidence:
The system generates the technical documentation and evidence that ISSOs require for RMF steps:
- Security control implementation details
- Configuration management procedures
- Test results and validation reports
- Risk assessment supporting data
FedRAMP Authorization Support
Cloud-Ready Security Architecture:
DaemonEye's design supports FedRAMP authorization requirements:
- No inbound network connections (meets cloud security requirements)
- Cryptographic data protection at rest and in transit
- Comprehensive audit logging for compliance reporting
- Minimal privilege implementation
Third-Party Assessment Preparation:
The system provides the technical controls and documentation that 3PAOs need to validate:
- Security control implementation
- Vulnerability assessment results
- Penetration testing support
- Performance under load
DoD and Intelligence Community Support
STIG Compliance:
DaemonEye's architecture aligns with DoD Security Technical Implementation Guides:
- Process isolation and privilege separation
- Cryptographic data protection
- Comprehensive audit logging
- Input validation and error handling
CMMC Level Support:
The system supports Controlled Unclassified Information (CUI) protection requirements:
- Data classification handling
- Access control implementation
- Audit trail maintenance
- Incident response capabilities
Intelligence Community Requirements:
For IC environments, DaemonEye provides:
- Airgapped operation capability
- Multi-level security support through data classification
- Compartmented information handling
- Cross-domain solution compatibility
Operational Benefits for ISSOs
Reduced Compliance Burden:
- Automated audit log generation with cryptographic integrity
- Built-in compliance reporting capabilities
- Standard format exports (CEF, STIX-lite) for SIEM integration
- Comprehensive documentation for ATO packages
Enhanced Security Posture:
- Real-time threat detection and alerting
- Minimal attack surface reduces security risks
- Privilege separation limits impact of compromises
- Cryptographic integrity verification
Operational Efficiency:
- Low resource overhead (<5% CPU, <100MB memory)
- Offline operation reduces network security concerns
- Automated dependency scanning and vulnerability management
- Performance monitoring and health checks
Additional NIST SP 800-53 Compliance Requirements
Based on analysis of DaemonEye's current design against NIST SP 800-53 requirements, the following additional controls should be addressed to strengthen compliance for US Government customers. Each control includes vendor-specific implementation notes for DaemonEye product development:
Configuration Management (CM) Family
CM-2 (Baseline Configurations):
- Vendor Implementation: Implement configuration baselines for all DaemonEye components with version control
- Product Requirements: Provide secure default configurations, configuration templates, and version-controlled configuration schemas
- Implementation Notes: Include configuration validation, rollback capabilities, and configuration drift detection. Already planned: hierarchical configuration management with embedded defaults, system files, user files, environment variables, and CLI flags. Additional work needed: formal configuration baselines and version control for configuration schemas.
CM-3 (Configuration Change Control):
- Vendor Implementation: Automated change approval workflows with security impact analysis
- Product Requirements: Implement configuration change tracking, approval workflows, and security impact assessment
- Implementation Notes: Include change audit logging, rollback procedures, and configuration change notifications. Already planned: comprehensive configuration validation with detailed error messages and hierarchical configuration loading. Additional work needed: automated change approval workflows and security impact analysis.
CM-4 (Security Impact Analysis):
- Vendor Implementation: Mandatory SIA for all configuration changes
- Product Requirements: Automated security impact analysis for configuration changes
- Implementation Notes: Include risk assessment, security control validation, and impact documentation. Already planned: comprehensive configuration validation with detailed error messages. Additional work needed: automated security impact analysis and formal SIA procedures.
CM-5 (Access Restrictions for Change):
- Vendor Implementation: Role-based access controls for configuration management
- Product Requirements: Implement granular permissions for configuration changes
- Implementation Notes: Include privilege escalation controls, change authorization, and access audit logging. Already planned: privilege separation with minimal privilege requirements per component and immediate privilege dropping. Additional work needed: role-based access controls for configuration management and granular permissions.
CM-6 (Configuration Settings):
- Vendor Implementation: Automated enforcement of security configuration settings
- Product Requirements: Implement configuration hardening, security policy enforcement, and compliance checking
- Implementation Notes: Include automated remediation, configuration validation, and policy violation alerts. Already planned: comprehensive configuration validation with detailed error messages and hierarchical configuration loading. Additional work needed: automated enforcement of security configuration settings and policy violation alerts.
CM-7 (Least Functionality):
- Vendor Implementation: Disable unnecessary features and services by default
- Product Requirements: Implement minimal installation footprint with optional feature enablement
- Implementation Notes: Include feature flags, service disablement, and functionality auditing. Already planned: minimal attack surface design with three-component architecture and privilege separation. Additional work needed: feature flags and service disablement controls.
CM-8 (System Component Inventory):
- Vendor Implementation: Real-time inventory of all system components and dependencies
- Product Requirements: Implement component discovery, dependency tracking, and inventory reporting
- Implementation Notes: Include SBOM generation, vulnerability scanning, and component lifecycle management. Already planned: three-component architecture with clear component separation and dependency management. Additional work needed: real-time inventory tracking and SBOM generation.
Contingency Planning (CP) Family
CP-2 (Contingency Plan):
- Vendor Implementation: Documented contingency plans for DaemonEye operations
- Product Requirements: Provide disaster recovery procedures, failover documentation, and recovery time objectives
- Implementation Notes: Include automated failover, data replication, and recovery procedures. Already planned: graceful degradation under resource constraints and fail-safe design. Additional work needed: formal contingency plans and disaster recovery procedures.
CP-6 (Alternate Storage Sites):
- Vendor Implementation: Backup data storage capabilities
- Product Requirements: Implement data backup, replication, and archival capabilities
- Implementation Notes: Include automated backups, data integrity verification, and recovery testing. Already planned: redb database with ACID transactions and data integrity verification. Additional work needed: automated backup scheduling and alternate storage site capabilities.
CP-7 (Alternate Processing Sites):
- Vendor Implementation: Failover capabilities for critical operations
- Product Requirements: Implement high availability, load balancing, and failover mechanisms
- Implementation Notes: Include health monitoring, automatic failover, and service restoration. Already planned: resource management with graceful degradation and circuit breaker patterns. Additional work needed: high availability and load balancing mechanisms.
CP-9 (System Backup):
- Vendor Implementation: Automated backup and recovery procedures
- Product Requirements: Implement automated backup scheduling, data verification, and recovery procedures
- Implementation Notes: Include incremental backups, compression, encryption, and recovery testing. Already planned: redb database with data integrity verification and ACID transactions. Additional work needed: automated backup scheduling and recovery procedures.
CP-10 (System Recovery and Reconstitution):
- Vendor Implementation: Documented recovery procedures
- Product Requirements: Provide recovery documentation, testing procedures, and validation methods
- Implementation Notes: Include recovery automation, validation scripts, and testing frameworks. Already planned: graceful degradation and fail-safe design with resource management. Additional work needed: formal recovery procedures and validation methods.
Identification and Authentication (IA) Family
IA-3 (Device Identification and Authentication):
- Vendor Implementation: Device authentication for agent connections
- Product Requirements: Implement device certificates, mutual TLS, and device identity verification
- Implementation Notes: Include certificate management, device enrollment, and authentication protocols. Already planned: mTLS authentication and certificate management for Business/Enterprise tiers. Additional work needed: device authentication for agent connections and device identity verification.
IA-4 (Identifier Management):
- Vendor Implementation: Unique identifier management for all system components
- Product Requirements: Implement unique component identification, identifier generation, and management
- Implementation Notes: Include UUID generation, identifier persistence, and identifier validation. Already planned: three-component architecture with clear component identification and separation. Additional work needed: unique identifier management and identifier generation.
IA-5 (Authenticator Management):
- Vendor Implementation: Secure management of authentication credentials
- Product Requirements: Implement credential storage, rotation, and management
- Implementation Notes: Include secure storage, credential rotation, and access controls. Already planned: secure credential management through environment variables and OS keychain integration. Additional work needed: credential rotation and formal authenticator management.
IA-7 (Cryptographic Module Authentication):
- Vendor Implementation: Authentication for cryptographic modules
- Product Requirements: Implement cryptographic module authentication and validation
- Implementation Notes: Include module verification, authentication protocols, and security validation. Already planned: BLAKE3 cryptographic hashing and Ed25519 signatures for audit integrity. Additional work needed: cryptographic module authentication and validation.
Incident Response (IR) Family
IR-4 (Incident Handling):
- Vendor Implementation: Automated incident detection and response capabilities
- Product Requirements: Implement incident detection, automated response, and escalation procedures
- Implementation Notes: Include threat detection, automated containment, and response workflows. Already planned: SQL-based detection engine with automated alert generation and multi-channel alert delivery. Additional work needed: automated incident response and escalation procedures.
IR-5 (Incident Monitoring):
- Vendor Implementation: Continuous monitoring for security incidents
- Product Requirements: Implement real-time monitoring, threat detection, and incident tracking
- Implementation Notes: Include monitoring dashboards, alert correlation, and incident tracking. Already planned: continuous process monitoring with real-time threat detection and structured alert generation. Additional work needed: monitoring dashboards and incident tracking.
IR-6 (Incident Reporting):
- Vendor Implementation: Automated reporting to US-CERT and other authorities
- Product Requirements: Implement incident reporting, notification systems, and compliance reporting
- Implementation Notes: Include automated reporting, notification templates, and compliance integration. Already planned: structured alert generation with multiple delivery channels and compliance reporting capabilities. Additional work needed: automated reporting to authorities and notification templates.
Maintenance (MA) Family
MA-3 (Maintenance Tools):
- Vendor Implementation: Secure maintenance tools and procedures
- Product Requirements: Implement secure maintenance interfaces, tools, and procedures
- Implementation Notes: Include maintenance authentication, tool validation, and procedure documentation. Already planned: comprehensive development tools and testing frameworks with security validation. Additional work needed: secure maintenance interfaces and formal maintenance procedures.
MA-4 (Nonlocal Maintenance):
- Vendor Implementation: Secure remote maintenance capabilities
- Product Requirements: Implement secure remote access, maintenance protocols, and access controls
- Implementation Notes: Include encrypted channels, authentication, and access logging. Already planned: authenticated IPC channels and secure communication protocols. Additional work needed: secure remote maintenance capabilities and access controls.
Risk Assessment (RA) Family
RA-5 (Vulnerability Scanning):
- Vendor Implementation: Automated vulnerability scanning
- Product Requirements: Implement vulnerability scanning, assessment, and reporting
- Implementation Notes: Include automated scanning, vulnerability databases, and remediation guidance. Already planned: automated dependency scanning and vulnerability management with security tools integration. Additional work needed: vulnerability scanning capabilities and remediation guidance.
System and Services Acquisition (SA) Family
SA-3 (System Development Life Cycle):
- Vendor Implementation: Security integration in SDLC
- Product Requirements: Implement secure development practices, security testing, and validation
- Implementation Notes: Include security requirements, testing frameworks, and validation procedures. Already planned: comprehensive testing strategy with unit, integration, and fuzz testing, plus security-focused development practices. Additional work needed: formal SDLC security integration and validation procedures.
SA-4 (Acquisition Process):
- Vendor Implementation: Secure acquisition process
- Product Requirements: Implement secure distribution, verification, and installation procedures
- Implementation Notes: Include code signing, package verification, and secure distribution. Already planned: code signing and integrity verification for Business/Enterprise tiers. Additional work needed: secure acquisition process and package verification.
SA-5 (Information System Documentation):
- Vendor Implementation: Comprehensive system documentation
- Product Requirements: Provide complete documentation, security guides, and operational procedures
- Implementation Notes: Include technical documentation, security guides, and compliance documentation. Already planned: comprehensive documentation including AGENTS.md, .kiro/steering/development.md, and technical specifications. Additional work needed: formal system documentation and compliance documentation.
SA-8 (Security Engineering Principles):
- Vendor Implementation: Security engineering principles
- Product Requirements: Implement security-by-design, secure coding practices, and security validation
- Implementation Notes: Include secure architecture, coding standards, and security validation. Already planned: security-first architecture with privilege separation, minimal attack surface, and defense-in-depth design. Additional work needed: formal security engineering principles and validation procedures.
SA-11 (Developer Security Testing and Evaluation):
- Vendor Implementation: Security testing by developers
- Product Requirements: Implement security testing, vulnerability assessment, and validation
- Implementation Notes: Include automated testing, security scanning, and validation frameworks. Already planned: comprehensive testing strategy with unit, integration, fuzz testing, and security-focused development practices. Additional work needed: formal security testing and evaluation procedures.
SA-12 (Supply Chain Protection):
- Vendor Implementation: Supply chain security controls
- Product Requirements: Implement supply chain security, dependency management, and verification
- Implementation Notes: Include SBOM generation, dependency scanning, and supply chain validation. Already planned: automated dependency scanning and vulnerability management with security tools integration. Additional work needed: SBOM generation and supply chain validation.
SA-15 (Development Process, Standards, and Tools):
- Vendor Implementation: Secure development processes
- Product Requirements: Implement secure development practices, standards, and tools
- Implementation Notes: Include development standards, security tools, and process validation. Already planned: comprehensive development workflow with security-focused practices and quality gates. Additional work needed: formal development process standards and tool validation.
SA-16 (Developer-Provided Training):
- Vendor Implementation: Security training for developers
- Product Requirements: Provide security training, documentation, and best practices
- Implementation Notes: Include training materials, security guidelines, and best practices. Already planned: comprehensive documentation including AGENTS.md, .kiro/steering/development.md, and technical specifications. Additional work needed: formal security training materials and developer guidelines.
SA-17 (Developer Security Architecture and Design):
- Vendor Implementation: Security architecture and design
- Product Requirements: Implement secure architecture, design patterns, and security controls
- Implementation Notes: Include security architecture, design patterns, and control implementation. Already planned: three-component security architecture with privilege separation and minimal attack surface design. Additional work needed: formal security architecture documentation and design patterns.
SA-18 (Tamper Resistance and Detection):
- Vendor Implementation: Tamper resistance and detection
- Product Requirements: Implement tamper detection, integrity verification, and protection mechanisms
- Implementation Notes: Include integrity checking, tamper detection, and protection mechanisms. Already planned: code signing verification and integrity checking for Business/Enterprise tiers. Additional work needed: enhanced tamper detection and protection mechanisms.
SA-19 (Component Authenticity):
- Vendor Implementation: Component authenticity verification
- Product Requirements: Implement component verification, authenticity checking, and validation
- Implementation Notes: Include signature verification, authenticity validation, and integrity checking. Already planned: code signing and integrity verification for Business/Enterprise tiers. Additional work needed: component authenticity verification and validation.
SA-20 (Customized Development of Critical Components):
- Vendor Implementation: Custom development of critical components
- Product Requirements: Implement custom security components, specialized functionality, and enhanced security
- Implementation Notes: Include custom development, specialized components, and enhanced security. Already planned: three-component architecture with specialized security functions and custom cryptographic implementations. Additional work needed: formal custom development procedures and specialized component documentation.
SA-21 (Developer Screening):
- Vendor Implementation: Background screening for developers
- Product Requirements: Implement developer vetting, access controls, and security clearance
- Implementation Notes: Include background checks, access controls, and security clearance. Already planned: comprehensive development workflow with security-focused practices and quality gates. Additional work needed: formal developer screening procedures and access controls.
SA-22 (Unsupported System Components):
- Vendor Implementation: Management of unsupported components
- Product Requirements: Implement component lifecycle management, deprecation handling, and migration support
- Implementation Notes: Include lifecycle management, deprecation procedures, and migration support. Already planned: three-component architecture with clear component separation and dependency management. Additional work needed: formal component lifecycle management and deprecation procedures.
Enhanced System and Communications Protection (SC) Family
SC-1 (System and Communications Protection Policy and Procedures):
- Vendor Implementation: Document comprehensive SC policies and procedures for DaemonEye deployment
- Product Requirements: Provide security configuration guides, network isolation procedures, and communication protocols
- Implementation Notes: Include deployment hardening guides, security configuration templates, and operational procedures for airgapped environments. Already planned: comprehensive security configuration guides, operational procedures, and development workflow documentation. Additional work needed: formal SC policy documentation and deployment hardening guides.
SC-2 (Application Partitioning):
- Vendor Implementation: Implement strict application partitioning in DaemonEye's three-component architecture
- Product Requirements: Ensure procmond, daemoneye-agent, and daemoneye-cli operate in isolated process spaces with defined interfaces
- Implementation Notes: Use process isolation, separate memory spaces, and controlled IPC communication channels between components. Already planned: three-component architecture with procmond (privileged collector), daemoneye-agent (user-space orchestrator), and daemoneye-cli (command interface) operating in isolated process spaces. Additional work needed: enhanced partitioning controls and formal partitioning documentation.
SC-3 (Security Function Isolation):
- Vendor Implementation: Isolate security functions within dedicated components (procmond for collection, daemoneye-agent for detection)
- Product Requirements: Implement privilege separation with minimal privilege requirements per component
- Implementation Notes: Use platform-specific capabilities (Linux capabilities, Windows privileges) with immediate privilege dropping after initialization. Already planned: privilege separation with immediate privilege dropping after initialization, platform-specific capability management for Linux, Windows, and macOS. Additional work needed: enhanced isolation controls and formal isolation documentation.
SC-4 (Information in Shared Resources):
- Vendor Implementation: Protect sensitive information in shared IPC channels and database storage
- Product Requirements: Implement data protection in shared resources, with access controls for shared resources
- Implementation Notes: Use CRC32 integrity verification for IPC messages (local memory only), database encryption for stored data, and access control lists for shared files. Already planned: IPC security with protobuf message serialization and CRC32 integrity verification for local inter-process communication. Additional work needed: database encryption at rest and enhanced access controls for shared resources.
SC-5 (Denial of Service Protection):
- Vendor Implementation: Implement DoS protection mechanisms for DaemonEye components
- Product Requirements: Provide resource quotas, memory limits, and protection against resource exhaustion attacks
- Implementation Notes: Include memory limits, CPU usage bounds, database connection limits, and graceful degradation under resource constraints. Note: DaemonEye has no inbound network communications, so network-based DoS attacks are not applicable. Already planned: resource management with bounded channels, memory limits, timeout support, and circuit breaker patterns for external dependencies. Additional work needed: enhanced resource protection and formal DoS protection documentation.
SC-6 (Resource Availability):
- Vendor Implementation: Ensure resource availability controls for DaemonEye operations
- Product Requirements: Implement resource monitoring, automatic recovery, and failover mechanisms
- Implementation Notes: Include health monitoring, automatic restart capabilities, and resource usage tracking with alerts. Already planned: resource management with graceful degradation, bounded channels, and cooperative yielding under resource constraints. Additional work needed: enhanced failover mechanisms and formal resource availability documentation.
SC-9 (Transmission Confidentiality):
- Vendor Implementation: Ensure transmission confidentiality for all DaemonEye communications
- Product Requirements: Implement encryption for IPC channels, alert delivery, and data transmission
- Implementation Notes: Use TLS for webhook alerts, encrypted IPC channels, and secure data export formats. Already planned: TLS for webhook alert delivery and secure data export formats for SIEM integration. Additional work needed: enhanced encryption for IPC channels and formal transmission confidentiality documentation.
SC-10 (Network Disconnect):
- Vendor Implementation: Support network disconnect capabilities for airgapped deployments
- Product Requirements: Ensure full functionality without network connectivity, with graceful degradation of network-dependent features
- Implementation Notes: Implement offline operation modes, local alert storage, and bundle-based configuration distribution. Already planned: offline-first operation with full functionality without internet access, bundle-based configuration distribution for airgapped environments, and graceful degradation of network-dependent features. Additional work needed: enhanced airgapped deployment documentation and formal network disconnect procedures.
SC-11 (Trusted Path):
- Vendor Implementation: Provide trusted path for critical DaemonEye operations
- Product Requirements: Implement secure communication channels for administrative operations and configuration changes
- Implementation Notes: Use authenticated IPC channels, secure configuration interfaces, and protected administrative access. Already planned: authenticated IPC channels with connection authentication and optional encryption for secure inter-process communication. Additional work needed: enhanced trusted path mechanisms and formal trusted path documentation.
SC-16 (Transmission of Security Attributes):
- Vendor Implementation: Transmit security attributes with all DaemonEye data and communications. Already Planned: Data classification support is specified in product.md. Additional Required: Enhanced security attribute transmission and formal security attribute documentation.
- Product Requirements: Include data classification, sensitivity labels, and security markings in all transmissions
- Implementation Notes: Embed security attributes in protobuf messages, database records, and alert payloads
SC-17 (Public Key Infrastructure Certificates):
- Vendor Implementation: Implement PKI certificate management for DaemonEye components. Already Planned: mTLS authentication and certificate management are specified in product.md for Business/Enterprise tiers. Additional Required: Enhanced PKI certificate management and formal PKI documentation.
- Product Requirements: Support certificate-based authentication, mutual TLS, and certificate validation
- Implementation Notes: Include certificate generation, validation, rotation, and revocation for agent authentication and alert delivery
SC-18 (Mobile Code):
- Vendor Implementation: Control mobile code execution in DaemonEye environment. Already Planned: Code signing verification is specified in product.md for Business/Enterprise tiers. Additional Required: Enhanced mobile code controls and formal mobile code documentation.
- Product Requirements: Implement controls for mobile code execution and validation
- Implementation Notes: Include code signing verification, execution sandboxing, and mobile code validation
SC-19 (Voice Over Internet Protocol):
- Vendor Implementation: Secure VoIP communications for DaemonEye (if applicable). Not Currently Planned: VoIP communications are not part of the current DaemonEye design. Additional Required: VoIP security controls if voice communications are added to the product.
- Product Requirements: Implement encryption and security controls for VoIP communications
- Implementation Notes: Include VoIP encryption, authentication, and security protocols for voice communications
SC-20 (Secure Name/Address Resolution Service):
- Vendor Implementation: Implement secure DNS resolution for DaemonEye. Not Currently Planned: DNS resolution is not part of the current DaemonEye design. Additional Required: Secure DNS resolution capabilities if DNS functionality is added to the product.
- Product Requirements: Provide secure name resolution and address resolution services
- Implementation Notes: Include DNS over TLS, DNS over HTTPS, and secure DNS configuration
SC-21 (Secure Name/Address Resolution Service (Recursive or Caching Resolver)):
- Vendor Implementation: Implement secure recursive DNS resolution for DaemonEye. Not Currently Planned: DNS resolution is not part of the current DaemonEye design. Additional Required: Secure recursive DNS resolution capabilities if DNS functionality is added to the product.
- Product Requirements: Provide secure recursive DNS resolution with caching capabilities
- Implementation Notes: Include secure recursive DNS, DNS caching security, and resolution validation
SC-22 (Architecture and Provisioning for Name/Address Resolution Service):
- Vendor Implementation: Design secure DNS architecture for DaemonEye. Not Currently Planned: DNS architecture is not part of the current DaemonEye design. Additional Required: Secure DNS architecture if DNS functionality is added to the product.
- Product Requirements: Implement secure DNS architecture and provisioning
- Implementation Notes: Include secure DNS infrastructure, provisioning automation, and architecture security
SC-23 (Session Authenticity):
- Vendor Implementation: Ensure session authenticity for DaemonEye administrative sessions. Already Planned: Authentication and integrity verification are specified in tech.md. Additional Required: Enhanced session management and formal session authenticity documentation.
- Product Requirements: Implement session management, authentication, and integrity verification
- Implementation Notes: Use secure session tokens, session timeout controls, and cryptographic session validation
SC-24 (Fail in Known State):
- Vendor Implementation: Ensure DaemonEye fails in a known, secure state. Already Planned: Graceful degradation and fail-safe design are specified in product.md. Additional Required: Enhanced fail-safe mechanisms and formal fail-safe documentation.
- Product Requirements: Implement fail-safe mechanisms that maintain security boundaries during failures
- Implementation Notes: Include graceful shutdown procedures, secure state preservation, and recovery from known states
SC-25 (Thin Nodes):
- Vendor Implementation: Support thin client security for DaemonEye. Not Currently Planned: Thin client deployments are not part of the current DaemonEye design. Additional Required: Thin client security controls if thin client functionality is added to the product.
- Product Requirements: Implement security controls for thin client deployments
- Implementation Notes: Include thin client authentication, secure communication, and minimal client footprint
SC-26 (Honeypots):
- Vendor Implementation: Implement honeypot capabilities for DaemonEye. Not Currently Planned: Honeypot capabilities are not part of the current DaemonEye design. Additional Required: Honeypot capabilities if threat intelligence collection is expanded.
- Product Requirements: Provide decoy systems and monitoring capabilities for threat detection
- Implementation Notes: Include honeypot deployment, monitoring capabilities, and threat intelligence collection
SC-27 (Platform-Independent Applications):
- Vendor Implementation: Ensure DaemonEye operates consistently across different platforms. Already Planned: Cross-platform support and OS support matrix are specified in tech.md. Additional Required: Enhanced platform independence testing and formal platform independence documentation.
- Product Requirements: Provide platform-independent security controls and consistent behavior
- Implementation Notes: Use cross-platform libraries, consistent security policies, and platform-specific optimizations where needed
SC-29 (Heterogeneity):
- Vendor Implementation: Support heterogeneous system environments for DaemonEye deployment. Already Planned: Multi-platform support and OS support matrix are specified in tech.md. Additional Required: Enhanced heterogeneity testing and formal heterogeneity documentation.
- Product Requirements: Ensure compatibility across different operating systems, architectures, and configurations
- Implementation Notes: Include multi-platform support, architecture-specific optimizations, and configuration flexibility
SC-30 (Concealment and Misdirection):
- Vendor Implementation: Implement concealment and misdirection capabilities for DaemonEye. Not Currently Planned: Concealment and misdirection capabilities are not part of the current DaemonEye design. Additional Required: Concealment and misdirection capabilities if advanced threat detection is expanded.
- Product Requirements: Provide decoy systems and misdirection techniques for threat detection
- Implementation Notes: Include honeypot deployment, decoy data generation, and misdirection techniques
SC-31 (Covert Channel Analysis):
- Vendor Implementation: Analyze and mitigate covert channels in DaemonEye design. Already Planned: Resource usage monitoring and channel capacity limitations are specified in tech.md. Additional Required: Enhanced covert channel analysis and formal covert channel documentation.
- Product Requirements: Implement controls to prevent information leakage through covert channels
- Implementation Notes: Include timing analysis, resource usage monitoring, and channel capacity limitations
SC-32 (Information System Partitioning):
- Vendor Implementation: Implement system partitioning in DaemonEye architecture. Already Planned: Process isolation and separate databases are specified in product.md and tech.md. Additional Required: Enhanced system partitioning and formal partitioning documentation.
- Product Requirements: Ensure logical and physical separation of different security domains
- Implementation Notes: Use process isolation, separate databases, and controlled data flow between partitions
SC-33 (Transmission Preparation Integrity):
- Vendor Implementation: Ensure integrity of data preparation for transmission. Already Planned: CRC32 integrity verification and data validation are specified in tech.md. Additional Required: Enhanced transmission integrity and formal transmission integrity documentation.
- Product Requirements: Implement data validation, checksums, and integrity verification before transmission
- Implementation Notes: Use CRC32 checksums, data validation, and integrity verification for all IPC and network communications
SC-34 (Modifiable Components):
- Vendor Implementation: Control modification of DaemonEye components. Already Planned: Code signing verification and integrity checking are specified in product.md for Business/Enterprise tiers. Additional Required: Enhanced tamper detection and formal modification control documentation.
- Product Requirements: Implement tamper detection, code signing verification, and modification controls
- Implementation Notes: Include integrity checking, signature verification, and protection against unauthorized modifications
SC-35 (Honeytokens):
- Vendor Implementation: Implement honeytoken capabilities for DaemonEye. Not Currently Planned: Honeytoken capabilities are not part of the current DaemonEye design. Additional Required: Honeytoken capabilities if advanced threat detection is expanded.
- Product Requirements: Provide decoy data and monitoring capabilities for threat detection
- Implementation Notes: Include fake process data, decoy alerts, and monitoring of access to honeytoken data
SC-36 (Distributed Processing and Storage):
- Vendor Implementation: Support distributed processing and storage for DaemonEye. Already Planned: Federated security centers and distributed data storage are specified in product.md for Enterprise tier. Additional Required: Enhanced distributed processing and formal distributed architecture documentation.
- Product Requirements: Implement distributed architecture with secure communication and data consistency
- Implementation Notes: Include federated security centers, distributed data storage, and secure inter-node communication
SC-37 (Out-of-Band Channels):
- Vendor Implementation: Support out-of-band communication channels for DaemonEye. Already Planned: Bundle-based configuration distribution for airgapped systems is specified in product.md. Additional Required: Enhanced out-of-band channels and formal out-of-band documentation.
- Product Requirements: Implement alternative communication methods for critical operations
- Implementation Notes: Include secure out-of-band alert delivery, administrative access, and emergency communication channels
SC-38 (Operations Security):
- Vendor Implementation: Implement operations security controls for DaemonEye. Already Planned: Operational security procedures are documented in AGENTS.md and .kiro/steering/development.md. Additional Required: Enhanced operations security and formal operations security documentation.
- Product Requirements: Protect operational information and prevent information leakage
- Implementation Notes: Include operational security procedures, information protection, and security awareness training
SC-40 (Wireless Link Protection):
- Vendor Implementation: Protect wireless communications for DaemonEye (if applicable). Not Currently Planned: Wireless communications are not part of the current DaemonEye design. Additional Required: Wireless security controls if wireless functionality is added to the product.
- Product Requirements: Implement encryption and security controls for wireless communications
- Implementation Notes: Include wireless security protocols, encryption, and access controls for wireless deployments
SC-41 (Port and I/O Device Access):
- Vendor Implementation: Control access to ports and I/O devices for DaemonEye. Not Currently Planned: Port and I/O device access controls are not part of the current DaemonEye design. Additional Required: Port and I/O device access controls if device access functionality is added to the product.
- Product Requirements: Implement access controls and monitoring for system ports and devices
- Implementation Notes: Include device access controls, port monitoring, and I/O device security policies
SC-42 (Sensor Capability and Data):
- Vendor Implementation: Implement sensor capabilities and data protection for DaemonEye. Already Planned: Process monitoring sensors and data collection security are specified in product.md and tech.md. Additional Required: Enhanced sensor capabilities and formal sensor documentation.
- Product Requirements: Provide sensor data collection, processing, and protection capabilities
- Implementation Notes: Include process monitoring sensors, data collection security, and sensor data integrity verification
SC-43 (Usage Restrictions):
- Vendor Implementation: Implement usage restrictions for DaemonEye components. Already Planned: Resource usage limits and capability restrictions are specified in tech.md. Additional Required: Enhanced usage restrictions and formal usage restriction documentation.
- Product Requirements: Control and monitor usage of system resources and capabilities
- Implementation Notes: Include resource usage limits, capability restrictions, and usage monitoring and reporting
SC-44 (Detachable Media):
- Vendor Implementation: Control access to detachable media for DaemonEye. Not Currently Planned: Detachable media controls are not part of the current DaemonEye design. Additional Required: Detachable media controls if media access functionality is added to the product.
- Product Requirements: Implement controls for removable media access and data protection
- Implementation Notes: Include media access controls, data encryption for removable media, and media sanitization procedures
SC-45 (System Time Synchronization):
- Vendor Implementation: Ensure time synchronization for DaemonEye components. Already Planned: Millisecond-precision timestamps are specified in product.md. Additional Required: Enhanced time synchronization and formal time synchronization documentation.
- Product Requirements: Implement accurate time synchronization and time-based security controls
- Implementation Notes: Include NTP synchronization, time validation, and timestamp integrity verification
SC-46 (Cross-Service Attack Prevention):
- Vendor Implementation: Prevent cross-service attacks in DaemonEye. Already Planned: Service isolation and access controls are specified in product.md and tech.md. Additional Required: Enhanced cross-service attack prevention and formal attack prevention documentation.
- Product Requirements: Implement isolation and protection between different services and components
- Implementation Notes: Include service isolation, access controls, and monitoring for cross-service attack attempts
SC-47 (Alternate Communications Paths):
- Vendor Implementation: Provide alternate communication paths for DaemonEye. Already Planned: Multiple alert delivery channels are specified in product.md. Additional Required: Enhanced alternate communication paths and formal communication path documentation.
- Product Requirements: Implement redundant communication channels and failover mechanisms
- Implementation Notes: Include multiple alert delivery channels, backup communication methods, and automatic failover
SC-48 (Application Partitioning):
- Vendor Implementation: Implement application partitioning for DaemonEye security. Already Planned: Component isolation and data separation are specified in product.md and tech.md. Additional Required: Enhanced application partitioning and formal partitioning documentation.
- Product Requirements: Ensure logical separation of application components and data
- Implementation Notes: Include component isolation, data separation, and controlled interfaces between partitions
SC-49 (Replay-Resistant Authentication):
- Vendor Implementation: Implement replay-resistant authentication for DaemonEye. Already Planned: Authentication mechanisms are specified in tech.md. Additional Required: Enhanced replay-resistant authentication and formal authentication documentation.
- Product Requirements: Provide authentication mechanisms that resist replay attacks
- Implementation Notes: Include nonce-based authentication, timestamp validation, and replay attack prevention
SC-50 (Software-Enforced Separation):
- Vendor Implementation: Implement software-enforced separation in DaemonEye. Already Planned: Process isolation and access control enforcement are specified in product.md and tech.md. Additional Required: Enhanced software-enforced separation and formal separation documentation.
- Product Requirements: Use software controls to enforce security boundaries and separation
- Implementation Notes: Include process isolation, memory protection, and access control enforcement
SC-51 (Hardware-Based Security):
- Vendor Implementation: Leverage hardware-based security features for DaemonEye. Already Planned: HSM integration and TPM support are specified in product.md for Enterprise tier. Additional Required: Enhanced hardware-based security and formal hardware security documentation.
- Product Requirements: Utilize hardware security modules and trusted platform modules where available
- Implementation Notes: Include HSM integration, TPM support, and hardware-based key storage
SC-52 (Portable Storage Devices):
- Vendor Implementation: Control portable storage device access for DaemonEye. Not Currently Planned: Portable storage device controls are not part of the current DaemonEye design. Additional Required: Portable storage device controls if device access functionality is added to the product.
- Product Requirements: Implement controls for portable storage devices and data protection
- Implementation Notes: Include device access controls, data encryption, and portable storage security policies
SC-53 (Enforceable Flow Control):
- Vendor Implementation: Implement enforceable flow control for DaemonEye. Already Planned: Data flow monitoring and access controls are specified in tech.md. Additional Required: Enhanced flow control and formal flow control documentation.
- Product Requirements: Control and monitor data flow between system components
- Implementation Notes: Include data flow monitoring, access controls, and flow restriction mechanisms
SC-54 (Shared Memory):
- Vendor Implementation: Protect shared memory in DaemonEye. Already Planned: Memory access controls are specified in tech.md. Additional Required: Enhanced shared memory protection and formal shared memory documentation.
- Product Requirements: Implement secure shared memory access and protection
- Implementation Notes: Include memory access controls, shared memory encryption, and access monitoring
SC-55 (Enforceable Access Control):
- Vendor Implementation: Implement enforceable access control for DaemonEye. Already Planned: Access control enforcement is specified in product.md and tech.md. Additional Required: Enhanced access control and formal access control documentation.
- Product Requirements: Provide mandatory access controls and enforcement mechanisms
- Implementation Notes: Include role-based access control, mandatory access controls, and access enforcement
SC-56 (Enforceable Execution Domains):
- Vendor Implementation: Implement enforceable execution domains for DaemonEye. Already Planned: Execution isolation and domain separation are specified in product.md and tech.md. Additional Required: Enhanced execution domains and formal execution domain documentation.
- Product Requirements: Control execution environments and domain boundaries
- Implementation Notes: Include execution isolation, domain separation, and execution environment controls
SC-57 (Data Location):
- Vendor Implementation: Control data location for DaemonEye. Already Planned: Data residency controls are specified in product.md for Enterprise tier. Additional Required: Enhanced data location controls and formal data location documentation.
- Product Requirements: Implement data residency controls and location restrictions
- Implementation Notes: Include data location tracking, residency controls, and geographic restrictions
SC-58 (Secure Operations):
- Vendor Implementation: Ensure secure operations for DaemonEye. Already Planned: Secure configuration and operational security procedures are documented in AGENTS.md and .kiro/steering/development.md. Additional Required: Enhanced secure operations and formal secure operations documentation.
- Product Requirements: Implement secure operational procedures and controls
- Implementation Notes: Include secure configuration, operational security procedures, and security monitoring
SC-59 (Information Flow Enforcement):
- Vendor Implementation: Implement information flow enforcement for DaemonEye. Already Planned: Information flow monitoring is specified in tech.md. Additional Required: Enhanced information flow enforcement and formal information flow documentation.
- Product Requirements: Control and monitor information flow between system components
- Implementation Notes: Include data flow controls, information flow monitoring, and flow restriction enforcement
Implementation Priority
High Priority (Core Security Controls):
- Configuration Management (CM-2, CM-3, CM-4, CM-5, CM-6, CM-7, CM-8)
- Incident Response (IR-4, IR-5, IR-6)
- Risk Assessment (RA-5)
- System and Services Acquisition (SA-3, SA-4, SA-5, SA-8, SA-11, SA-12, SA-15, SA-16, SA-17, SA-18, SA-19, SA-20, SA-21, SA-22)
Medium Priority (Operational Controls):
- Contingency Planning (CP-2, CP-6, CP-7, CP-9, CP-10)
- Identification and Authentication (IA-3, IA-4, IA-5, IA-7)
- Maintenance (MA-3, MA-4)
- Enhanced System and Communications Protection (SC-2, SC-3, SC-4, SC-5, SC-6, SC-9, SC-10, SC-11, SC-16, SC-17, SC-23, SC-24, SC-27, SC-29, SC-31, SC-32, SC-33, SC-34, SC-35, SC-36, SC-37, SC-38, SC-40, SC-41, SC-42, SC-43, SC-44, SC-45, SC-46, SC-47, SC-48, SC-49, SC-50, SC-51, SC-52, SC-53, SC-54, SC-55, SC-56, SC-57, SC-58, SC-59)
Excluded Controls (End-Consumer Responsibilities):
- Personnel Security (PS-1 through PS-6): Personnel management is the responsibility of the deploying organization
- Physical and Environmental Protection (PE-1 through PE-6): Physical security is the responsibility of the deploying organization
- Contingency Training (CP-3, CP-4): Training programs are the responsibility of the deploying organization
- Incident Response Training (IR-2, IR-3): Training programs are the responsibility of the deploying organization
- Maintenance Personnel (MA-5): Personnel management is the responsibility of the deploying organization
- Media Protection (MP-1 through MP-6): Media handling is the responsibility of the deploying organization
- Risk Assessment (RA-1, RA-2, RA-3): Risk assessment is the responsibility of the deploying organization
- System and Services Acquisition (SA-1, SA-2): Acquisition processes are the responsibility of the deploying organization
Deployment Security Considerations
Installation Security
Package Integrity:
- Code signing for all distributions
- Cryptographic verification of packages (see the sketch after this list)
- Secure distribution channels
- Installation audit logging
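The cryptographic-verification item above can be automated at install time. The following is a minimal Rust sketch, assuming the sha2 and hex crates and a hypothetical verify_package_digest helper, that compares a downloaded artifact against its published SHA-256 digest:
use sha2::{Digest, Sha256};
use std::{fs, io, path::Path};

// Hypothetical helper: returns true when the package matches the published digest.
fn verify_package_digest(package: &Path, expected_hex: &str) -> io::Result<bool> {
    let bytes = fs::read(package)?;
    let digest = Sha256::digest(&bytes);
    Ok(hex::encode(digest) == expected_hex.to_lowercase())
}
Digest comparison catches corruption and tampering in transit; it complements, rather than replaces, code signing, which authenticates the publisher.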
Privilege Requirements:
- Minimal installation privileges
- Service account creation
- File system permissions
- Network configuration
Configuration Hardening:
- Secure default configurations
- Security policy templates
- Environment-specific hardening
- Compliance checklists
Runtime Security
Process Isolation:
- Container security (if applicable)
- Systemd security features
- SELinux/AppArmor integration
- Resource limits and quotas (see the sketch after this list)
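The resource-limit item above can be enforced declaratively (for example, systemd's LimitNOFILE= and MemoryMax=) or in code. A minimal sketch, assuming the rlimit crate:
use rlimit::{setrlimit, Resource};

// Sketch: cap the process's open file descriptors at startup.
fn apply_resource_limits() -> std::io::Result<()> {
    setrlimit(Resource::NOFILE, 1024, 1024)?;
    Ok(())
}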
Network Security:
- Firewall configuration
- Network segmentation
- Traffic monitoring
- Intrusion detection
Monitoring Integration:
- SIEM integration (see the logging sketch after this list)
- Log aggregation
- Metrics collection
- Alert correlation
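Structured JSON logs are the simplest hook into SIEM and log-aggregation pipelines. A minimal sketch, assuming the tracing and tracing-subscriber crates (the latter built with its json feature):
use tracing::info;

fn main() {
    // Emit JSON-formatted events that log-aggregation pipelines can ingest directly.
    tracing_subscriber::fmt().json().init();
    info!(component = "daemoneye-agent", event = "startup", "agent initialized");
}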
Maintenance Security
Update Procedures:
- Secure update channels
- Rollback capabilities
- Testing procedures
- Change management
Access Management:
- Principle of least privilege
- Regular access reviews
- Multi-factor authentication
- Audit logging
Incident Response:
- Response procedures
- Evidence collection
- Communication protocols
- Recovery procedures
Conclusion
DaemonEye's security design provides a comprehensive framework for secure process monitoring and threat detection. The three-component architecture with strict privilege separation, cryptographic integrity verification, and comprehensive audit logging ensures that the system meets enterprise security requirements while maintaining high performance and operational efficiency.
The system's defense-in-depth approach, combined with its air-gap compatibility and compliance features, makes it suitable for deployment in high-security environments where traditional monitoring solutions may not meet security requirements.
For additional technical details, refer to the API Reference and Deployment Guide documentation.
Footnotes
- Requirement 6.2: Request platform-specific privileges (CAP_SYS_PTRACE on Linux, SeDebugPrivilege on Windows)
- Requirement 6.3: Immediately drop all elevated privileges after initialization
- Requirement 2.1: SHA-256 computation for executable integrity verification
- Requirement 6.5: Log all privilege changes for audit trail
- Requirement 3.1: Parse SQL queries using AST validation to prevent injection attacks
- Requirement 3.3: Use prepared statements with read-only database connections
- Requirement 3.4: Complete within 30 seconds or timeout with appropriate logging
- Requirement 5.2: Implement circuit breaker pattern for failing channels
- Requirement 3.5: Reject forbidden constructs and log attempts for audit purposes
- Requirement 3.2: Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions)
- Requirement 8.1: Execute user-provided SQL queries with parameterization and prepared statements
- Requirement 8.2: Support JSON, human-readable table, and CSV output formats
- Requirement 8.3: Provide capabilities to list, validate, test, and import/export detection rules
- Requirement 8.4: Display component status with color-coded indicators
- Requirement 8.5: Support streaming and pagination for result sets
- Requirement 7.3: Use BLAKE3 for fast, cryptographically secure hash computation
- Requirement 7.4: Provide chain verification function to detect tampering
- Requirement 7.1: Record security events in append-only audit ledger with monotonic sequence numbers
- Requirement 7.5: Millisecond-precision timestamps for audit events
- Requirement 1.2: CPU usage < 5% sustained during continuous monitoring
- Requirement 1.1: Process enumeration < 5 seconds for systems with up to 10,000 processes
- Requirement 3.4: Detection rule timeout 30 seconds maximum execution time
- Requirement 5.3: Alert delivery retry up to 3 attempts with maximum 60-second delay
- Enterprise Requirement 1.4: Sub-millisecond latency from event occurrence to detection
- Enterprise Requirement 2.6: Fleet query response < 60 seconds for up to 10,000 endpoints
- Enterprise Requirement 7.1: CPU overhead < 2% per monitored endpoint for 10,000+ processes
- Enterprise Requirement 7.2: 100,000+ events per minute with sub-second query response
- Requirement 10.1: Use structured JSON format with consistent field naming and configurable log levels
- Requirement 10.4: Embed performance metrics in log entries with correlation IDs
- Requirement 10.2: Provide Prometheus-compatible metrics for collection rate, detection latency, and alert delivery
- Requirement 10.3: Expose HTTP health endpoints with component-level status checks
- Requirement 4.1: Generate alert with timestamp, severity, rule_id, title, and description
- Requirement 4.2: Include affected process details (PID, name, executable path)
- Requirement 4.3: Support four severity levels (low, medium, high, critical)
- Requirement 4.4: Implement deduplication using configurable keys
- Requirement 4.5: Store alerts in database with delivery tracking information
- Requirement 5.1: Support multiple sinks including stdout, syslog, webhook, email, and file output
- Requirement 5.2: Implement circuit breaker pattern with configurable failure thresholds
- Requirement 5.4: Store failed alerts in dead letter queue for later processing
- Requirement 5.5: Process alerts to multiple sinks in parallel without blocking other deliveries
- Requirement 9.1: All core functionality continues operating normally when network connectivity is unavailable
- Requirement 9.2: Process enumeration, detection rules, and database operations function without degradation
- Requirement 9.3: Alert delivery degrades gracefully with local sinks continuing to work
- Requirement 9.4: Support bundle-based configuration and rule distribution for airgapped systems
- Requirement 9.5: Validate and apply bundles atomically with conflict resolution
Testing Documentation
This document provides comprehensive testing strategies and guidelines for DaemonEye, covering unit testing, integration testing, performance testing, and security testing.
Table of Contents
- Testing Philosophy
- Testing Strategy
- Unit Testing
- Integration Testing
- End-to-End Testing
- Performance Testing
- Security Testing
- Test Configuration
- Continuous Integration
- Test Maintenance
Testing Philosophy
DaemonEye follows a comprehensive testing strategy that ensures:
- Reliability: Robust error handling and edge case coverage
- Performance: Meets performance requirements under load
- Security: Validates security controls and prevents vulnerabilities
- Maintainability: Easy to understand and modify tests
- Coverage: High test coverage across all components
Testing Strategy
Three-Tier Testing Architecture
- Unit Tests: Test individual components in isolation
- Integration Tests: Test component interactions and data flow
- End-to-End Tests: Test complete workflows and user scenarios
Testing Pyramid
┌─────────────────┐
│ E2E Tests │ ← Few, slow, expensive
│ (Manual) │
├─────────────────┤
│ Integration │ ← Some, medium speed
│ Tests │
├─────────────────┤
│ Unit Tests │ ← Many, fast, cheap
│ (Automated) │
└─────────────────┘
Unit Testing
Core Testing Framework
DaemonEye's unit tests run on the tokio async test harness with tempfile-backed fixtures:
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[tokio::test]
async fn test_process_collection() {
let collector = ProcessCollector::new();
let processes = collector.collect_processes().await.unwrap();
assert!(!processes.is_empty());
assert!(processes.iter().any(|p| p.pid > 0));
}
#[tokio::test]
async fn test_database_operations() {
let temp_dir = TempDir::new().unwrap();
let db_path = temp_dir.path().join("test.db");
let db = Database::new(&db_path).await.unwrap();
let process = ProcessInfo {
pid: 1234,
name: "test_process".to_string(),
..ProcessInfo::default() // remaining fields use defaults
};
db.insert_process(&process).await.unwrap();
let retrieved = db.get_process(1234).await.unwrap();
assert_eq!(process.pid, retrieved.pid);
assert_eq!(process.name, retrieved.name);
}
}
Mocking and Test Doubles
Use mocks for external dependencies:
use async_trait::async_trait;
use mockall::mock;
mock! {
pub ProcessCollector {}
#[async_trait]
impl ProcessCollectionService for ProcessCollector {
async fn collect_processes(&self) -> Result<CollectionResult, CollectionError>;
async fn get_system_info(&self) -> Result<SystemInfo, CollectionError>;
}
}
#[tokio::test]
async fn test_agent_with_mock_collector() {
let mut mock_collector = MockProcessCollector::new();
mock_collector
.expect_collect_processes()
.times(1)
.returning(|| Ok(CollectionResult::default()));
// The Agent type name is illustrative for the daemoneye-agent orchestrator
let agent = daemoneye_agent::Agent::new(Box::new(mock_collector));
let result = agent.run_collection_cycle().await;
assert!(result.is_ok());
}
Property-Based Testing
Use property-based testing for complex logic:
use proptest::prelude::*;
proptest! {
#[test]
fn test_process_info_serialization(process in any::<ProcessInfo>()) {
let serialized = serde_json::to_string(&process).unwrap();
let deserialized: ProcessInfo = serde_json::from_str(&serialized).unwrap();
assert_eq!(process, deserialized);
}
#[test]
fn test_sql_query_validation(query in "[a-zA-Z0-9_\\s]+") {
let result = validate_sql_query(&query);
// Property: validation should not panic
let _ = result;
}
}
Integration Testing
Database Integration Tests
Test database operations with real SQLite:
#[tokio::test]
async fn test_database_integration() {
let temp_dir = TempDir::new().unwrap();
let db_path = temp_dir.path().join("integration_test.db");
let db = Database::new(&db_path).await.unwrap();
// Test schema creation
db.create_schema().await.unwrap();
// Test data insertion
let process = ProcessInfo {
pid: 1234,
name: "test_process".to_string(),
executable_path: Some("/usr/bin/test".to_string()),
command_line: Some("test --arg value".to_string()),
start_time: Some(Utc::now()),
cpu_usage: Some(0.5),
memory_usage: Some(1024),
status: ProcessStatus::Running,
executable_hash: Some("abc123".to_string()),
collection_time: Utc::now(),
};
db.insert_process(&process).await.unwrap();
// Test data retrieval
let retrieved = db.get_process(1234).await.unwrap();
assert_eq!(process.pid, retrieved.pid);
// Test query execution
let results = db.query_processes("SELECT * FROM processes WHERE pid = ?", &[1234]).await.unwrap();
assert_eq!(results.len(), 1);
}
IPC Integration Tests
Test inter-process communication:
#[tokio::test]
async fn test_ipc_communication() {
let temp_dir = TempDir::new().unwrap();
let socket_path = temp_dir.path().join("test.sock");
// Start server
let server = IpcServer::new(&socket_path).await.unwrap();
let server_handle = tokio::spawn(async move {
server.run().await
});
// Wait for server to start
tokio::time::sleep(Duration::from_millis(100)).await;
// Connect client
let client = IpcClient::new(&socket_path).await.unwrap();
// Test request/response
let request = IpcRequest::CollectProcesses;
let response = client.send_request(request).await.unwrap();
assert!(matches!(response, IpcResponse::Processes(_)));
// Cleanup
server_handle.abort();
}
Alert Delivery Integration Tests
Test alert delivery mechanisms:
#[tokio::test]
async fn test_alert_delivery() {
let mut alert_manager = AlertManager::new();
// Add test sinks
let syslog_sink = SyslogSink::new("daemon").unwrap();
let webhook_sink = WebhookSink::new("http://localhost:8080/webhook").unwrap(); // assumes a local test receiver on port 8080
alert_manager.add_sink(Box::new(syslog_sink));
alert_manager.add_sink(Box::new(webhook_sink));
// Create test alert
let alert = Alert {
id: Uuid::new_v4(),
rule_name: "test_rule".to_string(),
severity: AlertSeverity::High,
message: "Test alert".to_string(),
process: ProcessInfo::default(),
timestamp: Utc::now(),
metadata: HashMap::new(),
};
// Send alert
let result = alert_manager.send_alert(alert).await;
assert!(result.is_ok());
}
End-to-End Testing
CLI Testing
Test command-line interface:
use assert_cmd::Command;
use predicates::prelude::*;
#[test]
fn test_cli_help() {
let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
cmd.arg("--help")
    .assert()
.success()
.stdout(predicate::str::contains("DaemonEye CLI"));
}
#[test]
fn test_cli_query() {
let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
cmd.args(&["query", "SELECT * FROM processes LIMIT 1"])
.assert()
.success();
}
#[test]
fn test_cli_config() {
let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
cmd.args(&["config", "show"])
.assert()
.success()
.stdout(predicate::str::contains("app:"));
}
Full System Testing
Test complete system workflows:
#[tokio::test]
async fn test_full_system_workflow() {
let temp_dir = TempDir::new().unwrap();
let config_path = temp_dir.path().join("config.yaml");
// Create test configuration
let config = Config::default();
config.save_to_file(&config_path).unwrap();
// Start procmond
let procmond_config = config_path.clone();
let procmond_handle = tokio::spawn(async move {
    let procmond = ProcMonD::new(&procmond_config).await.unwrap();
    procmond.run().await
});
// Start daemoneye-agent (the Agent type name is illustrative)
let agent_config = config_path.clone();
let agent_handle = tokio::spawn(async move {
    let agent = daemoneye_agent::Agent::new(&agent_config).await.unwrap();
    agent.run().await
});
// Wait for services to start
tokio::time::sleep(Duration::from_secs(2)).await;
// Test CLI operations
let mut cmd = Command::cargo_bin("daemoneye-cli").unwrap();
cmd.args(&["--config", config_path.to_str().unwrap(), "query", "SELECT COUNT(*) FROM processes"])
.assert()
.success();
// Cleanup
procmond_handle.abort();
agent_handle.abort();
}
Performance Testing
Load Testing
Test system performance under load:
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;
fn benchmark_process_collection(c: &mut Criterion) {
    // The collector API is async, so drive it to completion with a dedicated runtime
    let rt = Runtime::new().unwrap();
    let mut group = c.benchmark_group("process_collection");
    group.bench_function("collect_processes", |b| {
        b.iter(|| {
            let collector = ProcessCollector::new();
            rt.block_on(async { black_box(collector.collect_processes().await) })
        })
    });
    group.bench_function("collect_processes_parallel", |b| {
        b.iter(|| {
            let collector = ProcessCollector::new();
            rt.block_on(async { black_box(collector.collect_processes_parallel().await) })
        })
    });
    group.finish();
}
fn benchmark_database_operations(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    let mut group = c.benchmark_group("database_operations");
    group.bench_function("insert_process", |b| {
        let db = rt.block_on(Database::new(":memory:")).unwrap();
        let process = ProcessInfo::default();
        b.iter(|| rt.block_on(async { black_box(db.insert_process(&process).await) }))
    });
    group.bench_function("query_processes", |b| {
        let db = rt.block_on(Database::new(":memory:")).unwrap();
        // Insert test data before timing queries
        rt.block_on(async {
            for i in 0..1000 {
                let process = ProcessInfo { pid: i, ..Default::default() };
                db.insert_process(&process).await.unwrap();
            }
        });
        b.iter(|| {
            rt.block_on(async {
                black_box(db.query_processes("SELECT * FROM processes WHERE pid > ?", &[500]).await)
            })
        })
    });
    group.finish();
}
criterion_group!(
benches,
benchmark_process_collection,
benchmark_database_operations
);
criterion_main!(benches);
Memory Testing
Test memory usage and leaks:
#[tokio::test]
async fn test_memory_usage() {
let initial_memory = get_memory_usage();
// Run operations that should not leak memory
for _ in 0..1000 {
let collector = ProcessCollector::new();
let _processes = collector.collect_processes().await.unwrap();
drop(collector);
}
// Yield so any deferred async cleanup can run (Rust has no garbage collector)
tokio::task::yield_now().await;
let final_memory = get_memory_usage();
let memory_increase = final_memory - initial_memory;
// Memory increase should be minimal
assert!(memory_increase < 10 * 1024 * 1024); // 10MB
}
fn get_memory_usage() -> usize {
// Platform-specific memory usage detection
#[cfg(target_os = "linux")]
{
let status = std::fs::read_to_string("/proc/self/status").unwrap();
for line in status.lines() {
if line.starts_with("VmRSS:") {
let parts: Vec<&str> = line.split_whitespace().collect();
return parts[1].parse::<usize>().unwrap() * 1024; // Convert to bytes
}
}
0
}
#[cfg(not(target_os = "linux"))]
{
// Fallback for other platforms
0
}
}
Stress Testing
Test system behavior under stress:
#[tokio::test]
async fn test_stress_collection() {
let collector = ProcessCollector::new();
// Run collection continuously for 60 seconds
let start = Instant::now();
let mut count = 0;
while start.elapsed() < Duration::from_secs(60) {
let processes = collector.collect_processes().await.unwrap();
count += processes.len();
// Small delay to prevent overwhelming the system
tokio::time::sleep(Duration::from_millis(100)).await;
}
// Should have collected a reasonable number of processes
assert!(count > 0);
println!("Collected {} processes in 60 seconds", count);
}
Security Testing
Fuzz Testing
Fuzz testing exercises parsers with random and malformed inputs. DaemonEye uses cargo-fuzz, with each target in its own file under fuzz/fuzz_targets/:
// fuzz/fuzz_targets/process_info.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    if let Ok(process_info) = ProcessInfo::from_bytes(data) {
        // Deserializing arbitrary bytes must never panic
        let _ = process_info.pid;
        let _ = process_info.name;
    }
});

// fuzz/fuzz_targets/sql_query.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    if let Ok(query) = std::str::from_utf8(data) {
        // SQL validation must reject malformed input without panicking
        let _ = validate_sql_query(query);
    }
});
Security Boundary Testing
Test security boundaries:
#[tokio::test]
async fn test_privilege_dropping() {
let collector = ProcessCollector::new();
// Should start with elevated privileges (run this test as root/Administrator)
assert!(collector.has_privileges());
// Drop privileges
collector.drop_privileges().await.unwrap();
// Should no longer have privileges
assert!(!collector.has_privileges());
// Should still be able to collect processes (with reduced capabilities)
let processes = collector.collect_processes().await.unwrap();
assert!(!processes.is_empty());
}
#[tokio::test]
async fn test_sql_injection_prevention() {
let db = Database::new(":memory:").await.unwrap();
// Test various SQL injection attempts
let malicious_queries = vec![
"'; DROP TABLE processes; --",
"1' OR '1'='1",
"'; INSERT INTO processes VALUES (9999, 'hacker', '/bin/evil'); --",
];
for query in malicious_queries {
let result = db.execute_query(query).await;
// Should either reject the query or sanitize it safely
match result {
Ok(_) => {
// If query succeeds, verify no damage was done
let count = db.count_processes().await.unwrap();
assert_eq!(count, 0); // No processes should exist
}
Err(_) => {
// Query was rejected, which is also acceptable
}
}
}
}
Input Validation Testing
Test input validation:
#[test]
fn test_input_validation() {
// Test valid inputs
let valid_process = ProcessInfo {
pid: 1234,
name: "valid_process".to_string(),
executable_path: Some("/usr/bin/valid".to_string()),
command_line: Some("valid --arg value".to_string()),
start_time: Some(Utc::now()),
cpu_usage: Some(0.5),
memory_usage: Some(1024),
status: ProcessStatus::Running,
executable_hash: Some("abc123".to_string()),
collection_time: Utc::now(),
};
assert!(valid_process.validate().is_ok());
// Test invalid inputs
let invalid_process = ProcessInfo {
pid: 0, // Invalid PID
name: "".to_string(), // Empty name
executable_path: Some("".to_string()), // Empty path
command_line: Some("a".repeat(10000)), // Too long
start_time: Some(Utc::now()),
cpu_usage: Some(-1.0), // Negative CPU usage
memory_usage: Some(0),
status: ProcessStatus::Running,
executable_hash: Some("invalid_hash".to_string()),
collection_time: Utc::now(),
};
assert!(invalid_process.validate().is_err());
}
Test Configuration
Test Environment Setup
# test-config.yaml
app:
log_level: debug
scan_interval_ms: 1000
batch_size: 10
database:
path: ':memory:'
max_connections: 5
retention_days: 1
alerting:
enabled: false
testing:
enable_mocks: true
mock_external_services: true
test_data_dir: /tmp/daemoneye-test
cleanup_after_tests: true
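To show how a harness might consume this file, here is a minimal sketch, assuming serde and serde_yaml; the TestingConfig and Root types are hypothetical mirrors of the YAML above:
use serde::Deserialize;
use std::path::PathBuf;

#[derive(Debug, Deserialize)]
struct TestingConfig {
    enable_mocks: bool,
    mock_external_services: bool,
    test_data_dir: PathBuf,
    cleanup_after_tests: bool,
}

#[derive(Debug, Deserialize)]
struct Root {
    testing: TestingConfig,
}

// Hypothetical loader: parse test-config.yaml and return its testing section.
fn load_testing_config(raw: &str) -> Result<TestingConfig, serde_yaml::Error> {
    Ok(serde_yaml::from_str::<Root>(raw)?.testing)
}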
Test Data Management
pub struct TestDataManager {
temp_dir: TempDir,
test_data: HashMap<String, Vec<u8>>,
}
impl TestDataManager {
pub fn new() -> Self {
Self {
temp_dir: TempDir::new().unwrap(),
test_data: HashMap::new(),
}
}
pub fn add_test_data(&mut self, name: &str, data: &[u8]) {
self.test_data.insert(name.to_string(), data.to_vec());
}
pub fn get_test_data(&self, name: &str) -> Option<&[u8]> {
self.test_data.get(name).map(|v| v.as_slice())
}
pub fn create_test_database(&self) -> PathBuf {
let db_path = self.temp_dir.path().join("test.db");
let db = Database::new(&db_path).unwrap();
db.create_schema().unwrap();
db_path
}
}
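A brief usage sketch, written as a hypothetical test, exercising the manager defined above:
#[test]
fn stages_fixture_data_and_database() {
    let mut manager = TestDataManager::new();
    manager.add_test_data("sample_process", br#"{"pid":1234}"#);
    assert!(manager.get_test_data("sample_process").is_some());
    // create_test_database returns a path to a schema-initialized SQLite file
    let db_path = manager.create_test_database();
    assert!(db_path.exists());
}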
Continuous Integration
GitHub Actions Workflow
name: Tests
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        rust: ['1.85', stable, beta]
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v3
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ matrix.rust }}
          override: true
      - name: Install dependencies
        if: runner.os == 'Linux'
        run: |
          sudo apt-get update
          sudo apt-get install -y libsqlite3-dev
      - name: Run tests
        run: |
          cargo test --verbose
          cargo test --verbose --features integration-tests
      - name: Run benchmarks
        run: cargo bench --verbose
      - name: Run fuzz tests
        run: |
          cargo install cargo-fuzz
          cargo fuzz build
          # Bound fuzz time so CI stays deterministic
          cargo fuzz run process_info -- -max_total_time=60
          cargo fuzz run sql_query -- -max_total_time=60
      - name: Generate coverage report
        run: |
          cargo install cargo-tarpaulin
          cargo tarpaulin --out Html --output-dir coverage
Test Reporting
use insta::assert_snapshot;
#[test]
fn test_config_serialization() {
let config = Config::default();
let serialized = serde_yaml::to_string(&config).unwrap();
// Snapshot testing for configuration
assert_snapshot!(serialized);
}
#[test]
fn test_alert_format() {
let alert = Alert {
id: Uuid::parse_str("123e4567-e89b-12d3-a456-426614174000").unwrap(),
rule_name: "test_rule".to_string(),
severity: AlertSeverity::High,
message: "Test alert message".to_string(),
process: ProcessInfo::default(),
timestamp: Utc::now(),
metadata: HashMap::new(),
};
let formatted = alert.format_json().unwrap();
assert_snapshot!(formatted);
}
Test Maintenance
Test Organization
// tests/
// ├── unit/
// │ ├── collector_tests.rs
// │ ├── database_tests.rs
// │ └── alert_tests.rs
// ├── integration/
// │ ├── ipc_tests.rs
// │ ├── database_tests.rs
// │ └── alert_delivery_tests.rs
// ├── e2e/
// │ ├── cli_tests.rs
// │ └── system_tests.rs
// └── common/
// ├── test_helpers.rs
// └── test_data.rs
Test Utilities
// tests/common/test_helpers.rs
pub struct TestHelper {
temp_dir: TempDir,
config: Config,
}
impl TestHelper {
pub fn new() -> Self {
let temp_dir = TempDir::new().unwrap();
let config = Config::default();
Self { temp_dir, config }
}
pub fn create_test_database(&self) -> Database {
let db_path = self.temp_dir.path().join("test.db");
Database::new(&db_path).unwrap()
}
pub fn create_test_config(&self) -> PathBuf {
let config_path = self.temp_dir.path().join("config.yaml");
self.config.save_to_file(&config_path).unwrap();
config_path
}
pub fn cleanup(&self) {
// Cleanup test resources
}
}
impl Drop for TestHelper {
fn drop(&mut self) {
self.cleanup();
}
}
This testing documentation provides comprehensive guidance for testing DaemonEye. For additional testing information, consult the specific test files or contact the development team.
Contributing to DaemonEye
Thank you for your interest in contributing to DaemonEye! This guide will help you get started with contributing to the project.
Table of Contents
- Getting Started
- Development Environment
- Code Standards
- Testing Requirements
- Pull Request Process
- Issue Reporting
- Documentation
- Community Guidelines
- Development Workflow
- Getting Help
- License
Getting Started
Prerequisites
Before contributing to DaemonEye, ensure you have:
- Rust 1.85+: Latest stable Rust toolchain
- Git: Version control system
- Docker: For containerized testing (optional)
- Just: Task runner (install with cargo install just)
- Editor: VS Code with Rust extension (recommended)
Fork and Clone
- Fork the repository on GitHub
- Clone your fork locally:
git clone https://github.com/your-username/daemoneye.git
cd daemoneye
- Add the upstream repository:
git remote add upstream https://github.com/EvilBit-Labs/daemoneye.git
Development Setup
- Install dependencies:
just setup
- Run tests to ensure everything works:
just test
- Build the project:
just build
Development Environment
Project Structure
DaemonEye/
├── procmond/ # Process monitoring daemon
├── daemoneye-agent/ # Agent orchestrator
├── daemoneye-cli/ # CLI interface
├── daemoneye-lib/ # Shared library
├── docs/ # Documentation
├── tests/ # Integration tests
├── examples/ # Example configurations
├── justfile # Task runner
├── Cargo.toml # Workspace configuration
└── README.md # Project README
Workspace Configuration
DaemonEye uses a Cargo workspace with the following structure:
[workspace]
resolver = "2"
members = [
"procmond",
"daemoneye-agent",
"daemoneye-cli",
"daemoneye-lib",
]
[workspace.dependencies]
tokio = { version = "1.0", features = ["full"] }
clap = { version = "4.0", features = ["derive", "completion"] }
serde = { version = "1.0", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "sqlite"] }
sysinfo = "0.30"
tracing = "0.1"
thiserror = "1.0"
anyhow = "1.0"
Development Commands
Use the just task runner for common development tasks:
# Setup development environment
just setup
# Build the project
just build
# Run tests
just test
# Run linting
just lint
# Format code
just fmt
# Run benchmarks
just bench
# Generate documentation
just docs
# Clean build artifacts
just clean
Code Standards
Rust Standards
DaemonEye follows strict Rust coding standards:
- Edition: Always use Rust 2024 Edition
- Linting: Zero warnings policy with cargo clippy -- -D warnings
- Safety: unsafe_code = "forbid" enforced at workspace level
- Formatting: Standard rustfmt with 119 character line length
- Error Handling: Use thiserror for structured errors, anyhow for error context
Code Style
// Use thiserror for library errors
use thiserror::Error;
#[derive(Debug, Error)]
pub enum CollectionError {
#[error("Permission denied accessing process {pid}")]
PermissionDenied { pid: u32 },
#[error("Process {pid} no longer exists")]
ProcessNotFound { pid: u32 },
#[error("I/O error: {0}")]
IoError(#[from] std::io::Error),
}
// Use anyhow for application error context
use anyhow::{Context, Result};
pub async fn collect_processes() -> Result<Vec<ProcessInfo>> {
    let system = sysinfo::System::new_all();
    let processes: Vec<ProcessInfo> = system
        .processes()
        .values()
        .map(|p| ProcessInfo::from(p))
        .collect();
    // Attach `.context()` to operations that can actually fail, for example:
    // db.insert_batch(&processes).await.context("Failed to store processes")?;
    Ok(processes)
}
// Document all public APIs
/// Collects information about all running processes.
///
/// # Returns
///
/// A vector of `ProcessInfo` structs containing details about each process.
///
/// # Errors
///
/// This function will return an error if:
/// - System information cannot be accessed
/// - Process enumeration fails
/// - Memory allocation fails
///
/// # Examples
///
/// ```rust
/// use daemoneye_lib::collector::ProcessCollector;
///
/// let collector = ProcessCollector::new();
/// let processes = collector.collect_processes().await?;
/// println!("Found {} processes", processes.len());
/// ```
pub async fn collect_processes() -> Result<Vec<ProcessInfo>, CollectionError> {
// Implementation
}
Naming Conventions
- Functions: snake_case
- Variables: snake_case
- Types: PascalCase
- Constants: SCREAMING_SNAKE_CASE
- Modules: snake_case
- Files: snake_case.rs
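A small illustrative snippet, with all names hypothetical, applying these conventions together:
// Constants use SCREAMING_SNAKE_CASE
const MAX_BATCH_SIZE: usize = 256;

// Types use PascalCase
pub struct ProcessSnapshot {
    pub pid: u32,
}

// Functions and variables use snake_case
pub fn collect_snapshot(batch_size: usize) -> Vec<ProcessSnapshot> {
    let effective_size = batch_size.min(MAX_BATCH_SIZE);
    Vec::with_capacity(effective_size)
}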
Documentation Standards
All public APIs must be documented with rustdoc comments:
/// A process information structure containing details about a running process.
///
/// This structure provides comprehensive information about a process including
/// its PID, name, executable path, command line arguments, and resource usage.
///
/// # Examples
///
/// ```rust
/// use daemoneye_lib::ProcessInfo;
///
/// let process = ProcessInfo {
/// pid: 1234,
/// name: "example".to_string(),
/// executable_path: Some("/usr/bin/example".to_string()),
/// command_line: Some("example --arg value".to_string()),
/// start_time: Some(Utc::now()),
/// cpu_usage: Some(0.5),
/// memory_usage: Some(1024),
/// status: ProcessStatus::Running,
/// executable_hash: Some("abc123".to_string()),
/// collection_time: Utc::now(),
/// };
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ProcessInfo {
/// The process ID (PID) of the process
pub pid: u32,
/// The name of the process
pub name: String,
/// The full path to the process executable
pub executable_path: Option<String>,
/// The command line arguments used to start the process
pub command_line: Option<String>,
/// The time when the process was started
pub start_time: Option<DateTime<Utc>>,
/// The current CPU usage percentage
pub cpu_usage: Option<f64>,
/// The current memory usage in bytes
pub memory_usage: Option<u64>,
/// The current status of the process
pub status: ProcessStatus,
/// The SHA-256 hash of the process executable
pub executable_hash: Option<String>,
/// The time when this information was collected
pub collection_time: DateTime<Utc>,
}
Testing Requirements
Test Coverage
All code must have comprehensive test coverage:
- Unit Tests: Test individual functions and methods
- Integration Tests: Test component interactions
- End-to-End Tests: Test complete workflows
- Property Tests: Test with random inputs
- Fuzz Tests: Test with malformed inputs
Test Structure
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_process_collection() {
let collector = ProcessCollector::new();
let processes = collector.collect_processes().await.unwrap();
assert!(!processes.is_empty());
assert!(processes.iter().any(|p| p.pid > 0));
}
#[test]
fn test_process_info_serialization() {
let process = ProcessInfo::default();
let serialized = serde_json::to_string(&process).unwrap();
let deserialized: ProcessInfo = serde_json::from_str(&serialized).unwrap();
assert_eq!(process, deserialized);
}
}
Running Tests
# Run all tests
cargo test
# Run specific test
cargo test test_process_collection
# Run tests with output
cargo test -- --nocapture
# Run integration tests
cargo test --test integration
# Run benchmarks
cargo bench
# Run fuzz tests
cargo fuzz run process_info
Pull Request Process
Before Submitting
- Sync with upstream:
git fetch upstream
git checkout main
git merge upstream/main
- Create a feature branch:
git checkout -b feature/your-feature-name
- Make your changes:
  - Write code following the coding standards
  - Add comprehensive tests
  - Update documentation
  - Run all tests and linting
- Commit your changes:
git add .
git commit -m "feat: add new feature description"
Commit Message Format
Use conventional commits format:
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
Types:
feat
: New featurefix
: Bug fixdocs
: Documentation changesstyle
: Code style changesrefactor
: Code refactoringtest
: Test changeschore
: Build process or auxiliary tool changes
Examples:
feat(collector): add process filtering by name
Add ability to filter processes by name pattern using regex.
This improves performance when monitoring specific processes.
Closes #123
fix(database): resolve memory leak in query execution
Fix memory leak that occurred when executing large queries.
The issue was caused by not properly cleaning up prepared statements.
Fixes #456
Pull Request Guidelines
- Title: Clear, descriptive title
- Description: Detailed description of changes
- Tests: Ensure all tests pass
- Documentation: Update relevant documentation
- Breaking Changes: Clearly mark any breaking changes
- Related Issues: Link to related issues
Pull Request Template
## Description
Brief description of the changes made.
## Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Performance improvement
- [ ] Code refactoring
## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] End-to-end tests pass
- [ ] Manual testing completed
- [ ] Performance tests pass
## Checklist
- [ ] Code follows the project's style guidelines
- [ ] Self-review of code completed
- [ ] Code is properly commented
- [ ] Documentation updated
- [ ] No new warnings introduced
- [ ] Breaking changes documented
## Related Issues
Closes #123
Fixes #456
Issue Reporting
Bug Reports
When reporting bugs, please include:
- Environment: OS, Rust version, DaemonEye version
- Steps to Reproduce: Clear, numbered steps
- Expected Behavior: What should happen
- Actual Behavior: What actually happens
- Logs: Relevant log output
- Screenshots: If applicable
Feature Requests
When requesting features, please include:
- Use Case: Why is this feature needed?
- Proposed Solution: How should it work?
- Alternatives: Other solutions considered
- Additional Context: Any other relevant information
Issue Templates
Use the provided issue templates:
- Bug Report: .github/ISSUE_TEMPLATE/bug_report.md
- Feature Request: .github/ISSUE_TEMPLATE/feature_request.md
- Question: .github/ISSUE_TEMPLATE/question.md
Documentation
Documentation Standards
- Keep documentation up to date
- Use clear, concise language
- Include code examples
- Follow markdown best practices
- Use consistent formatting
Documentation Structure
docs/
├── src/
│ ├── introduction.md
│ ├── getting-started.md
│ ├── architecture/
│ ├── technical/
│ ├── user-guides/
│ ├── api-reference/
│ ├── deployment/
│ ├── security.md
│ ├── testing.md
│ └── contributing.md
├── book.toml
└── README.md
Building Documentation
# Install mdbook
cargo install mdbook
# Build documentation
mdbook build
# Serve documentation locally
mdbook serve
Community Guidelines
Code of Conduct
We are committed to providing a welcoming and inclusive environment for all contributors. Please:
- Be respectful and constructive
- Focus on what is best for the community
- Show empathy towards other community members
- Accept constructive criticism gracefully
- Help others learn and grow
Communication
- GitHub Issues: For bug reports and feature requests
- GitHub Discussions: For questions and general discussion
- Pull Requests: For code contributions
- Discord: For real-time chat (invite link in README)
Recognition
Contributors are recognized in:
- CONTRIBUTORS.md file
- Release notes
- Project documentation
- Community highlights
Development Workflow
Branch Strategy
- main: Production-ready code
- develop: Integration branch for features
- feature/*: Feature development branches
- bugfix/*: Bug fix branches
- hotfix/*: Critical bug fixes
Release Process
- Version Bumping: Update version numbers
- Changelog: Update CHANGELOG.md
- Documentation: Update documentation
- Testing: Run full test suite
- Release: Create GitHub release
- Distribution: Publish to package managers
Continuous Integration
All pull requests must pass:
- Unit tests
- Integration tests
- Linting checks
- Security scans
- Performance benchmarks
- Documentation builds
Getting Help
If you need help contributing:
- Check Documentation: Review this guide and other docs
- Search Issues: Look for similar issues or discussions
- Ask Questions: Use GitHub Discussions or Discord
- Contact Maintainers: Reach out to project maintainers
License
By contributing to DaemonEye, you agree that your contributions will be licensed under the same license as the project (Apache 2.0 for core features, commercial license for business/enterprise features).
Thank you for contributing to DaemonEye! Your contributions help make the project better for everyone.