Gold Digger Documentation
Welcome to the Gold Digger documentation! Gold Digger is a fast, secure MySQL/MariaDB query tool written in Rust that exports structured data in multiple formats.
What is Gold Digger?
Gold Digger is a command-line tool designed for extracting data from MySQL and MariaDB databases with structured output support. It provides:
- Multiple Output Formats: Export data as CSV, JSON, or TSV
- Security First: Built-in TLS (rustls) support and credential protection
- Type Safety: Rust-powered reliability with proper NULL handling
- CLI-First Design: Environment variable support with CLI override capability
Quick Navigation
- New to Gold Digger? Start with our Quick Start Guide
- Need to install? Check our Installation Guide
- Looking for examples? Browse our Usage Examples
- Developer? Visit the API Reference
- Having issues? See our Troubleshooting Guide
Key Features
- CLI-First Design: Command-line flags with environment variable fallbacks
- Safe Type Handling: Automatic NULL and type conversion without panics
- Multiple Output Formats: CSV (RFC 4180), JSON with type inference, TSV
- Secure by Default: Automatic credential redaction with TLS support
- Structured Exit Codes: Proper error codes for automation and scripting
- Shell Integration: Completion support for Bash, Zsh, Fish, PowerShell
- Configuration Debugging: JSON config dump with credential protection
- Cross-Platform: Works on Windows, macOS, and Linux
Getting Help
If you encounter issues or have questions:
- Check the Troubleshooting Guide
- Review the Configuration Documentation
- Visit the GitHub Repository
Let's get started with Gold Digger!
Installation
This section covers installation instructions for Gold Digger on different platforms.
Choose your platform:
System Requirements
- Operating System: Windows 10+, macOS 10.15+, or Linux (any modern distribution)
- Architecture: x86_64 (64-bit)
- Network: Internet connection for downloading dependencies (if building from source)
Installation Methods
Gold Digger can be installed through several methods:
- Pre-built Binaries (Recommended): Download from GitHub releases
- Cargo Install: Install from crates.io using Rust's package manager
- Build from Source: Clone and compile the repository
Each platform-specific guide covers these methods in detail.
Windows Installation
Install Gold Digger on Windows systems.
Pre-built Binaries (Recommended)
- Visit the GitHub Releases page
- Download the latest
gold_digger-windows.exe
file - Move the executable to a directory in your PATH
- Open Command Prompt or PowerShell and verify:
gold_digger --version
Using Cargo (Rust Package Manager)
Prerequisites
Install Rust from rustup.rs:
# Download and run rustup-init.exe
# Follow the installation prompts
Install Gold Digger
cargo install gold_digger
Build from Source
Prerequisites
- Git for Windows
- Rust toolchain (via rustup)
- Visual Studio Build Tools or Visual Studio Community
Build Steps
# Clone the repository
git clone https://github.com/EvilBit-Labs/gold_digger.git
cd gold_digger
# Build release version
cargo build --release
# The executable will be in target/release/gold_digger.exe
TLS Support
Windows builds use the native SChannel TLS implementation by default. For pure Rust TLS:
# Standard installation with TLS support
cargo install gold_digger
# Or minimal installation without TLS
cargo install gold_digger --no-default-features --features "json,csv,additional_mysql_types,verbose"
Verification
Test your installation:
gold_digger --help
Troubleshooting
Common Issues
- Missing Visual C++ Redistributable: Install from Microsoft
- PATH not updated: Restart your terminal after installation
- Antivirus blocking: Add exception for the executable
Getting Help
If you encounter issues:
- Check the Troubleshooting Guide
- Visit the GitHub Issues page
macOS Installation
Install Gold Digger on macOS systems.
Pre-built Binaries (Recommended)
- Visit the GitHub Releases page
- Download the latest
gold_digger-macos
file - Make it executable and move to PATH:
chmod +x gold_digger-macos
sudo mv gold_digger-macos /usr/local/bin/gold_digger
- Verify installation:
gold_digger --version
Using Homebrew (Coming Soon)
# Future release
brew install gold_digger
Using Cargo (Rust Package Manager)
Prerequisites
Install Rust using rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
Install Gold Digger
cargo install gold_digger
Build from Source
Prerequisites
- Xcode Command Line Tools:
xcode-select --install
- Rust toolchain (via rustup)
Build Steps
# Clone the repository
git clone https://github.com/EvilBit-Labs/gold_digger.git
cd gold_digger
# Build release version
cargo build --release
# The executable will be in target/release/gold_digger
TLS Support
Pure Rust TLS (rustls) by default; no native SecureTransport path.
# Standard installation (TLS via rustls)
cargo install gold_digger
# Or minimal installation without extra features (TLS still available)
cargo install gold_digger --no-default-features --features "json,csv,additional_mysql_types,verbose"
Verification
Test your installation:
gold_digger --help
Troubleshooting
Common Issues
- Gatekeeper blocking execution: Right-click โ Open, or use
sudo spctl --master-disable
- Command not found: Ensure
/usr/local/bin
is in your PATH - Permission denied: Check file permissions with
ls -la
Getting Help
If you encounter issues:
- Check the Troubleshooting Guide
- Visit the GitHub Issues page
Linux Installation
Install Gold Digger on Linux distributions.
Pre-built Binaries (Recommended)
- Visit the GitHub Releases page
- Download the latest
gold_digger-linux
file - Make it executable and install:
chmod +x gold_digger-linux
sudo mv gold_digger-linux /usr/local/bin/gold_digger
- Verify installation:
gold_digger --version
Using Cargo (Rust Package Manager)
Prerequisites
Install Rust using rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
Install Gold Digger
cargo install gold_digger
Build from Source
Prerequisites
Ubuntu/Debian:
sudo apt update
sudo apt install build-essential pkg-config libssl-dev git
RHEL/CentOS/Fedora:
sudo dnf install gcc pkg-config git
# or for older versions:
# sudo yum install gcc pkg-config git
Arch Linux:
sudo pacman -S base-devel pkg-config git
Build Steps
# Clone the repository
git clone https://github.com/EvilBit-Labs/gold_digger.git
cd gold_digger
# Build release version
cargo build --release
# The executable will be in target/release/gold_digger
TLS Support
Linux builds use rustls exclusively for TLS connections with platform certificate store integration:
# Standard installation (TLS always available)
cargo install gold_digger
# Minimal installation (TLS still available)
cargo install gold_digger --no-default-features --features "json,csv,additional_mysql_types,verbose"
Note: TLS support is built into all Gold Digger binaries. No OpenSSL dependencies are required.
Distribution Packages (Coming Soon)
Future releases will include:
- Debian/Ubuntu
.deb
packages - RHEL/CentOS/Fedora
.rpm
packages - Arch Linux AUR package
Verification
Test your installation:
gold_digger --help
Troubleshooting
Common Issues
- Linker errors: Install
build-essential
or equivalent - Permission denied: Check executable permissions and PATH
- Missing pkg-config: Install
pkg-config
package
Note: OpenSSL development headers are no longer required - Gold Digger uses rustls exclusively.
Getting Help
If you encounter issues:
- Check the Troubleshooting Guide
- Visit the GitHub Issues page
Quick Start
Get up and running with Gold Digger in minutes.
Basic Usage
Gold Digger requires three pieces of information:
- Database connection URL
- SQL query to execute
- Output file path
Your First Query
Using Environment Variables
# Set environment variables
export DATABASE_URL="mysql://user:password@localhost:3306/database"
export DATABASE_QUERY="SELECT id, name FROM users LIMIT 10"
export OUTPUT_FILE="users.json"
# Run Gold Digger
gold_digger
Using CLI Flags (Recommended)
gold_digger \
--db-url "mysql://user:password@localhost:3306/database" \
--query "SELECT id, name FROM users LIMIT 10" \
--output users.csv
Common Usage Patterns
Pretty JSON Output
gold_digger \
--db-url "mysql://user:pass@localhost:3306/db" \
--query "SELECT id, name, email FROM users LIMIT 5" \
--output users.json \
--pretty
Query from File
# Create a query file
echo "SELECT COUNT(*) as total_users FROM users" > user_count.sql
# Use the query file
gold_digger \
--db-url "mysql://user:pass@localhost:3306/db" \
--query-file user_count.sql \
--output stats.json
Force Output Format
# Force CSV format regardless of file extension
gold_digger \
--db-url "mysql://user:pass@localhost:3306/db" \
--query "SELECT * FROM products" \
--output data.txt \
--format csv
Handle Empty Results
# Exit with code 0 even if no results (default exits with code 1)
gold_digger \
--allow-empty \
--db-url "mysql://user:pass@localhost:3306/db" \
--query "SELECT * FROM users WHERE id = 999999" \
--output empty.json
Verbose Logging
# Enable verbose output for debugging
gold_digger -v \
--db-url "mysql://user:pass@localhost:3306/db" \
--query "SELECT COUNT(*) FROM large_table" \
--output count.json
Shell Completion
Generate shell completion scripts for better CLI experience:
# Bash
gold_digger completion bash > ~/.bash_completion.d/gold_digger
source ~/.bash_completion.d/gold_digger
# Zsh
gold_digger completion zsh > ~/.zsh/completions/_gold_digger
# Fish
gold_digger completion fish > ~/.config/fish/completions/gold_digger.fish
# PowerShell
gold_digger completion powershell > gold_digger.ps1
Configuration Debugging
Check your current configuration with credential redaction:
gold_digger \
--db-url "mysql://user:pass@localhost:3306/db" \
--query "SELECT 1" \
--output test.json \
--dump-config
Example output:
{
"database_url": "***REDACTED***",
"query": "SELECT 1",
"output": "test.json",
"format": "json",
"verbose": 0,
"quiet": false,
"pretty": false,
"allow_empty": false,
"features": {
"ssl": true,
"json": true,
"csv": true,
"verbose": true
}
}
Exit Codes
Gold Digger uses structured exit codes for automation:
- 0: Success with results (or empty with
--allow-empty
) - 1: Success but no rows returned
- 2: Configuration error
- 3: Database connection/authentication failure
- 4: Query execution failure
- 5: File I/O operation failure
Next Steps
- Learn about Configuration Options
- Explore Output Formats
- See more Examples
Configuration
Complete configuration guide for Gold Digger CLI options and environment variables.
Configuration Precedence
Gold Digger follows this configuration precedence order:
- CLI flags (highest priority)
- Environment variables (fallback)
- Error if neither provided
CLI Flags
Required Parameters
You must provide either CLI flags or corresponding environment variables:
gold_digger \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT * FROM table" \
--output results.json
All Available Flags
Flag | Short | Environment Variable | Description |
---|---|---|---|
--db-url <URL> | - | DATABASE_URL | Database connection string |
--query <SQL> | -q | DATABASE_QUERY | SQL query to execute |
--query-file <FILE> | - | - | Read SQL from file (mutually exclusive with --query ) |
--output <FILE> | -o | OUTPUT_FILE | Output file path |
--format <FORMAT> | - | - | Force output format: csv , json , or tsv |
--pretty | - | - | Pretty-print JSON output |
--verbose | -v | - | Enable verbose logging (repeatable: -v , -vv ) |
--quiet | - | - | Suppress non-error output |
--allow-empty | - | - | Exit with code 0 even if no results |
--dump-config | - | - | Print current configuration as JSON |
--help | -h | - | Print help information |
--version | -V | - | Print version information |
Subcommands
Subcommand | Description |
---|---|
completion <shell> | Generate shell completion scripts |
Supported shells: bash
, zsh
, fish
, powershell
Mutually Exclusive Options
--query
and--query-file
cannot be used together--verbose
and--quiet
cannot be used together
Environment Variables
Core Variables
# Database connection (required)
export DATABASE_URL="mysql://user:password@localhost:3306/database"
# SQL query (required, unless using --query-file)
export DATABASE_QUERY="SELECT id, name FROM users LIMIT 10"
# Output file (required)
export OUTPUT_FILE="results.json"
Connection String Format
mysql://username:password@hostname:port/database?ssl-mode=required
Components:
username
: Database userpassword
: User passwordhostname
: Database server hostname or IPport
: Database port (default: 3306)database
: Database namessl-mode
: TLS/SSL configuration (optional)
SSL/TLS Parameters
Parameter | Values | Description |
---|---|---|
ssl-mode | disabled , preferred , required , verify-ca , verify-identity | SSL connection mode |
Example with TLS:
export DATABASE_URL="mysql://user:pass@host:3306/db?ssl-mode=required"
Output Format Configuration
Format Detection
Format is automatically detected by file extension:
# CSV output
export OUTPUT_FILE="data.csv"
# JSON output
export OUTPUT_FILE="data.json"
# TSV output (default for unknown extensions)
export OUTPUT_FILE="data.tsv"
export OUTPUT_FILE="data.txt" # Also becomes TSV
Format Override
Force a specific format regardless of file extension:
gold_digger \
--output data.txt \
--format json # Forces JSON despite .txt extension
TLS/SSL Configuration
TLS Security Modes
Gold Digger provides four mutually exclusive TLS security modes:
Flag | Description | Use Case |
---|---|---|
(none) | Platform certificate store validation (default) | Production environments |
--tls-ca-file <FILE> | Custom CA certificate file for trust anchor pinning | Internal infrastructure |
--insecure-skip-hostname-verify | Skip hostname verification (keeps chain and time validation) | Development environments |
--allow-invalid-certificate | Disable certificate validation entirely (DANGEROUS) | Testing only (never prod) |
TLS Examples
Production (default):
gold_digger \
--db-url "mysql://user:pass@prod.db.example.com:3306/mydb" \
--query "SELECT * FROM users" \
--output users.json
Internal infrastructure:
gold_digger \
--db-url "mysql://user:pass@internal.db:3306/mydb" \
--tls-ca-file /etc/ssl/certs/internal-ca.pem \
--query "SELECT * FROM data" \
--output data.csv
Development:
gold_digger \
--db-url "mysql://dev:devpass@192.168.1.100:3306/dev" \
--insecure-skip-hostname-verify \
--query "SELECT * FROM test_data" \
--output dev_data.json
Testing only (DANGEROUS):
gold_digger \
--db-url "mysql://test:test@test.db:3306/test" \
--allow-invalid-certificate \
--query "SELECT COUNT(*) FROM test_table" \
--output count.json
TLS Error Handling
Gold Digger provides intelligent error messages with specific CLI flag suggestions:
Error: Certificate validation failed: certificate has expired
Suggestion: Use --allow-invalid-certificate for testing environments
Error: Hostname verification failed for 192.168.1.100: certificate is for db.company.com
Suggestion: Use --insecure-skip-hostname-verify to bypass hostname checks
Security Configuration
Credential Protection
Important: Gold Digger automatically redacts credentials from logs and error output.
Safe logging example:
Connecting to database... โ
Query executed successfully
Wrote 150 rows to output.json
Credentials are never logged:
- Database passwords
- Connection strings
- Environment variable values
Secure Connection Examples
Require TLS:
export DATABASE_URL="mysql://user:pass@host:3306/db?ssl-mode=required"
Verify certificate:
export DATABASE_URL="mysql://user:pass@host:3306/db?ssl-mode=verify-ca"
Advanced Configuration
Configuration Debugging
Use the --dump-config
flag to see the resolved configuration:
# Show current configuration (credentials redacted)
gold_digger --db-url "mysql://user:pass@host:3306/db" \
--query "SELECT 1" --output test.json --dump-config
# Example output:
{
"database_url": "***REDACTED***",
"query": "SELECT 1",
"query_file": null,
"output": "test.json",
"format": "json",
"verbose": 0,
"quiet": false,
"pretty": false,
"allow_empty": false,
"features": {
"ssl": true,
"json": true,
"csv": true,
"verbose": true,
"additional_mysql_types": true
}
}
Shell Completion Setup
Generate and install shell completions for improved CLI experience:
# Bash completion
gold_digger completion bash > ~/.bash_completion.d/gold_digger
source ~/.bash_completion.d/gold_digger
# Zsh completion
gold_digger completion zsh > ~/.zsh/completions/_gold_digger
# Add to ~/.zshrc: fpath=(~/.zsh/completions $fpath)
# Fish completion
gold_digger completion fish > ~/.config/fish/completions/gold_digger.fish
# PowerShell completion
gold_digger completion powershell >> $PROFILE
Pretty JSON Output
Enable pretty-printed JSON for better readability:
# Compact JSON (default)
gold_digger --query "SELECT id, name FROM users LIMIT 3" --output compact.json
# Pretty-printed JSON
gold_digger --query "SELECT id, name FROM users LIMIT 3" --output pretty.json --pretty
Example:
{
"data": [
{
"id": 1,
"name": "Alice"
},
{
"id": 2,
"name": "Bob"
}
]
}
Query from File
Store complex queries in files:
# Create query file
echo "SELECT u.name, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.id, u.name
ORDER BY post_count DESC" > complex_query.sql
# Use query file
gold_digger \
--db-url "mysql://user:pass@host:3306/db" \
--query-file complex_query.sql \
--output user_stats.json
Handling Empty Results
By default, Gold Digger exits with code 1 when no results are returned:
# Default behavior - exit code 1 if no results
gold_digger --query "SELECT * FROM users WHERE id = 999999" --output empty.json
# Allow empty results - exit code 0
gold_digger --allow-empty --query "SELECT * FROM users WHERE id = 999999" --output empty.json
Troubleshooting Configuration
Common Configuration Errors
Missing required parameters:
Error: Missing required configuration: DATABASE_URL
Solution: Provide either --db-url
flag or DATABASE_URL
environment variable.
Mutually exclusive flags:
Error: Cannot use both --query and --query-file
Solution: Choose either inline query or query file, not both.
Invalid connection string:
Error: Invalid database URL format
Solution: Ensure URL follows mysql://user:pass@host:port/db
format.
Output Formats
Gold Digger supports three structured output formats: CSV, JSON, and TSV.
Format Selection
Format is determined by this priority order:
--format
flag (explicit override)- File extension in output path
- TSV as fallback for unknown extensions
Examples
# Format determined by extension
gold_digger --output data.csv # CSV format
gold_digger --output data.json # JSON format
gold_digger --output data.tsv # TSV format
# Explicit format override
gold_digger --output data.txt --format json # JSON despite .txt extension
CSV Format
Comma-Separated Values - Industry standard tabular format.
Specifications
- Standard: RFC4180-compliant
- Quoting:
QuoteStyle::Necessary
(only when required) - Line Endings: CRLF (
\r\n
) - NULL Handling: Empty strings
- Encoding: UTF-8
Example Output
id,name,email,created_at
1,John Doe,john@example.com,2024-01-15 10:30:00
2,"Smith, Jane",jane@example.com,2024-01-16 14:22:33
3,Bob Johnson,,2024-01-17 09:15:45
When to Use CSV
- Excel compatibility required
- Data analysis in spreadsheet applications
- Legacy systems expecting CSV input
- Minimal file size for large datasets
CSV Quoting Rules
Fields are quoted only when they contain:
- Commas (
,
) - Double quotes (
"
) - Newlines (
\n
or\r\n
)
JSON Format
JavaScript Object Notation - Structured data format with rich type support.
Specifications
- Structure:
{"data": [...]}
- Key Ordering: Deterministic (BTreeMap, not HashMap)
- NULL Handling: JSON
null
values - Encoding: UTF-8
- Pretty Printing: Optional with
--pretty
flag
Example Output
Compact (default):
{
"data": [
{
"id": "1",
"name": "John Doe",
"email": "john@example.com",
"created_at": "2024-01-15 10:30:00"
},
{
"id": "2",
"name": "Jane Smith",
"email": "jane@example.com",
"created_at": "2024-01-16 14:22:33"
}
]
}
Pretty-printed (--pretty
):
{
"data": [
{
"created_at": "2024-01-15 10:30:00",
"email": "john@example.com",
"id": "1",
"name": "John Doe"
},
{
"created_at": "2024-01-16 14:22:33",
"email": "jane@example.com",
"id": "2",
"name": "Jane Smith"
}
]
}
When to Use JSON
- API integration and web services
- Complex data structures with nested objects
- Type preservation (though Gold Digger converts all to strings)
- Modern applications expecting JSON input
JSON Features
- Deterministic ordering: Keys are always in the same order
- NULL safety: Database NULL values become JSON
null
- Unicode support: Full UTF-8 character support
TSV Format
Tab-Separated Values - Simple, reliable format for data exchange.
Specifications
- Delimiter: Tab character (
\t
) - Quoting:
QuoteStyle::Necessary
- Line Endings: Unix (
\n
) - NULL Handling: Empty strings
- Encoding: UTF-8
Example Output
id name email created_at
1 John Doe john@example.com 2024-01-15 10:30:00
2 Jane Smith jane@example.com 2024-01-16 14:22:33
3 Bob Johnson 2024-01-17 09:15:45
When to Use TSV
- Unix/Linux tools (awk, cut, sort)
- Data processing pipelines
- Avoiding comma conflicts in data
- Simple parsing requirements
TSV Advantages
- No comma conflicts: Data can contain commas without quoting
- Simple parsing: Easy to split on tab characters
- Unix-friendly: Works well with command-line tools
NULL Value Handling
Different formats handle database NULL values differently:
Format | NULL Representation | Example |
---|---|---|
CSV | Empty string | 1,John,,2024-01-15 |
JSON | JSON null | {"id":"1","name":"John","email":null} |
TSV | Empty string | 1 John 2024-01-15 |
Type Safety and Data Conversion
Gold Digger automatically handles all MySQL data types safely without requiring explicit casting.
Automatic Type Conversion
All MySQL data types are converted safely:
-- โ
Safe - Gold Digger handles all types automatically
SELECT id, name, price, created_at, is_active, description
FROM products;
Type Conversion Rules
MySQL Type | CSV/TSV Output | JSON Output | NULL Handling |
---|---|---|---|
INT , BIGINT | String representation | Number (if valid) | Empty string / null |
DECIMAL , FLOAT | String representation | Number (if valid) | Empty string / null |
VARCHAR , TEXT | Direct string | String | Empty string / null |
DATE , DATETIME | ISO format string | String | Empty string / null |
BOOLEAN | "0" or "1" | true /false (if "true"/"false") | Empty string / null |
NULL | Empty string | null | Always handled safely |
JSON Type Inference
When outputting to JSON, Gold Digger attempts to preserve appropriate data types:
{
"data": [
{
"id": 123, // Integer preserved
"price": 19.99, // Float preserved
"name": "Product", // String preserved
"active": true, // Boolean inferred
"description": null // NULL preserved
}
]
}
Performance Considerations
File Size Comparison
For the same dataset:
- TSV: Smallest (no quotes, simple delimiters)
- CSV: Medium (quotes when necessary)
- JSON: Largest (structure overhead, key names repeated)
Processing Speed
- TSV: Fastest to generate and parse
- CSV: Fast, with quoting overhead
- JSON: Slower due to structure and key ordering
Format-Specific Options
CSV Options
# Standard CSV
gold_digger --output data.csv
# CSV is always RFC4180-compliant with necessary quoting
JSON Options
# Compact JSON (default)
gold_digger --output data.json
# Pretty-printed JSON
gold_digger --output data.json --pretty
TSV Options
# Standard TSV
gold_digger --output data.tsv
# TSV with explicit format
gold_digger --output data.txt --format tsv
Integration Examples
Excel Integration
# Generate Excel-compatible CSV
gold_digger \
--query "SELECT CAST(id AS CHAR) as ID, name as Name FROM users" \
--output users.csv
API Integration
# Generate JSON for API consumption
gold_digger \
--query "SELECT CAST(id AS CHAR) as id, name, email FROM users" \
--output users.json \
--pretty
Unix Pipeline Integration
# Generate TSV for command-line processing
gold_digger \
--query "SELECT CAST(id AS CHAR) as id, name FROM users" \
--output users.tsv
# Process with standard Unix tools
cut -f2 users.tsv | sort | uniq -c
Troubleshooting Output Formats
Common Issues
Malformed CSV:
- Check for unescaped quotes in data
- Verify line ending compatibility
Invalid JSON:
- Ensure all columns are properly cast
- Check for NULL handling issues
TSV parsing errors:
- Look for tab characters in data
- Verify delimiter expectations
Validation
Test output format validity:
# Validate CSV
csvlint data.csv
# Validate JSON
jq . data.json
# Check TSV structure
column -t -s $'\t' data.tsv | head
Examples
Practical examples for common Gold Digger use cases.
Basic Data Export
Simple User Export
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name, email FROM users LIMIT 100" \
--output users.csv
Pretty JSON Output
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name, email FROM users LIMIT 10" \
--output users.json \
--pretty
Complex Queries
Joins and Aggregations
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT u.name, COUNT(p.id) as post_count
FROM users u LEFT JOIN posts p ON u.id = p.user_id
WHERE u.active = 1
GROUP BY u.id, u.name
ORDER BY post_count DESC" \
--output user_stats.json
Date Range Queries
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT DATE(created_at) as date, COUNT(*) as orders
FROM orders
WHERE created_at >= '2023-01-01'
GROUP BY DATE(created_at)
ORDER BY date" \
--output daily_orders.csv
Using Query Files
Complex Query from File
Create a query file:
-- analytics_query.sql
SELECT
p.category,
COUNT(*) as product_count,
AVG(p.price) as avg_price,
SUM(oi.quantity) as total_sold
FROM products p
LEFT JOIN order_items oi ON p.id = oi.product_id
LEFT JOIN orders o ON oi.order_id = o.id
WHERE o.created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY)
GROUP BY p.category
ORDER BY total_sold DESC;
Use the query file:
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query-file analytics_query.sql \
--output monthly_analytics.json \
--pretty
Environment Variables
Basic Environment Setup
export DATABASE_URL="mysql://user:pass@localhost:3306/mydb"
export DATABASE_QUERY="SELECT * FROM products WHERE price > 100"
export OUTPUT_FILE="expensive_products.json"
gold_digger
Windows PowerShell
$env:DATABASE_URL="mysql://user:pass@localhost:3306/mydb"
$env:DATABASE_QUERY="SELECT id, name, price FROM products WHERE active = 1"
$env:OUTPUT_FILE="C:\data\active_products.csv"
gold_digger
Output Format Control
Force Specific Format
# Force CSV format regardless of file extension
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT * FROM users" \
--output data.txt \
--format csv
Format Comparison
# CSV output (RFC 4180 compliant)
gold_digger --query "SELECT id, name FROM users LIMIT 5" --output users.csv
# JSON output with type inference
gold_digger --query "SELECT id, name FROM users LIMIT 5" --output users.json
# TSV output (tab-separated)
gold_digger --query "SELECT id, name FROM users LIMIT 5" --output users.tsv
Error Handling and Debugging
Handle Empty Results
# Exit with code 0 even if no results (default exits with code 1)
# The --allow-empty flag changes the command's behavior by permitting empty result sets
# and creating an empty output file instead of exiting with error code 1
gold_digger \
--allow-empty \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT * FROM users WHERE id = 999999" \
--output empty_result.json
Verbose Logging
# Enable verbose output for debugging
gold_digger -v \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT COUNT(*) as total FROM large_table" \
--output count.json
Configuration Debugging
# Check resolved configuration (credentials redacted)
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT 1 as test" \
--output test.json \
--dump-config
Data Type Handling
Automatic Type Conversion
Gold Digger safely handles all MySQL data types without requiring explicit casting:
# All data types handled automatically
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name, price, created_at, is_active, description
FROM products" \
--output products.json
NULL Value Handling
# NULL values are handled safely
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name, COALESCE(description, 'No description') as description
FROM products" \
--output products_with_defaults.csv
Special Values
# Handles NaN, Infinity, and other special values
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name,
CASE WHEN price = 0 THEN 'NaN' ELSE price END as price
FROM products" \
--output products_special.json
Automation and Scripting
Bash Script Example
#!/bin/bash
set -e
DB_URL="mysql://user:pass@localhost:3306/mydb"
OUTPUT_DIR="/data/exports"
DATE=$(date +%Y%m%d)
# Export users
gold_digger \
--db-url "$DB_URL" \
--query "SELECT * FROM users WHERE active = 1" \
--output "$OUTPUT_DIR/users_$DATE.csv"
# Export orders
gold_digger \
--db-url "$DB_URL" \
--query "SELECT * FROM orders WHERE DATE(created_at) = CURDATE()" \
--output "$OUTPUT_DIR/daily_orders_$DATE.json" \
--pretty
echo "Export completed successfully"
Error Handling in Scripts
#!/bin/bash
DB_URL="mysql://user:pass@localhost:3306/mydb"
QUERY="SELECT COUNT(*) as count FROM users"
OUTPUT="user_count.json"
if gold_digger --db-url "$DB_URL" --query "$QUERY" --output "$OUTPUT"; then
echo "Export successful"
cat "$OUTPUT"
else
case $? in
1) echo "No results found" ;;
2) echo "Configuration error" ;;
3) echo "Database connection failed" ;;
4) echo "Query execution failed" ;;
5) echo "File I/O error" ;;
*) echo "Unknown error" ;;
esac
exit 1
fi
Performance Optimization
Large Dataset Export
# For large datasets, use LIMIT and OFFSET for pagination
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 0" \
--output batch_1.csv
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 10000" \
--output batch_2.csv
Optimized Queries
# Use indexes and specific columns for better performance
gold_digger \
--db-url "mysql://user:pass@localhost:3306/mydb" \
--query "SELECT id, name, email FROM users
WHERE created_at >= '2023-01-01'
AND status = 'active'
ORDER BY id" \
--output recent_active_users.json
TLS/SSL Connections
Secure Connection (Default)
# Uses platform certificate store for validation
gold_digger \
--db-url "mysql://user:pass@secure-db.example.com:3306/mydb" \
--query "SELECT id, name FROM users LIMIT 10" \
--output secure_users.json
Custom CA Certificate
# Use custom CA certificate for internal infrastructure
gold_digger \
--db-url "mysql://user:pass@internal-db.company.com:3306/mydb" \
--tls-ca-file /etc/ssl/certs/company-ca.pem \
--query "SELECT * FROM sensitive_data" \
--output internal_data.csv
Development Environment
# Skip hostname verification for development servers
gold_digger \
--db-url "mysql://dev:devpass@192.168.1.100:3306/dev_db" \
--insecure-skip-hostname-verify \
--query "SELECT * FROM test_data" \
--output dev_data.json
Testing Environment (DANGEROUS)
# Accept invalid certificates for testing only
gold_digger \
--db-url "mysql://test:test@test-db:3306/test" \
--allow-invalid-certificate \
--query "SELECT COUNT(*) as total FROM test_table" \
--output test_count.json
โ ๏ธ Security Warning: Never use --allow-invalid-certificate
in production environments.
Shell Completion
Setup Completion
# Bash
gold_digger completion bash > ~/.bash_completion.d/gold_digger
source ~/.bash_completion.d/gold_digger
# Zsh
gold_digger completion zsh > ~/.zsh/completions/_gold_digger
# Fish
gold_digger completion fish > ~/.config/fish/completions/gold_digger.fish
# PowerShell
gold_digger completion powershell > gold_digger.ps1
Using Completion
After setup, you can use tab completion:
gold_digger --<TAB> # Shows available flags
gold_digger --format <TAB> # Shows format options (csv, json, tsv)
gold_digger completion <TAB> # Shows shell options
Database Security
Security considerations for database connections and credential handling.
Credential Protection
warning
Never log or expose database credentials in output or error messages.
Gold Digger automatically redacts sensitive information from logs and error output.
Connection Security
Use Strong Authentication
- Create dedicated database users with minimal required permissions
- Use strong, unique passwords
- Consider certificate-based authentication where supported
Network Security
- Always use TLS/SSL for remote connections
- Restrict database access by IP address when possible
- Use VPN or private networks for sensitive data
Best Practices
- Principle of Least Privilege: Grant only necessary permissions
- Regular Credential Rotation: Update passwords regularly
- Monitor Access: Log and review database access patterns
- Secure Storage: Never store credentials in plain text files
TLS/SSL Configuration
Configure secure connections to your MySQL/MariaDB database with Gold Digger's flexible TLS security controls.
TLS Implementation
Gold Digger uses rustls for secure database connections with platform certificate store integration:
- Pure Rust TLS: Uses rustls for consistent cross-platform behavior
- Platform Integration: Automatically uses system certificate stores (Windows, macOS, Linux)
- Enhanced Security Controls: Four distinct TLS security modes via CLI flags
- Smart Error Messages: Provides specific guidance when certificate validation fails
TLS Security Modes
Gold Digger provides four TLS security modes via mutually exclusive CLI flags:
1. Platform Trust (Default)
Uses your system's certificate store with full validation:
# Default behavior - uses platform certificate store
gold_digger \
--db-url "mysql://user:pass@prod.db:3306/mydb" \
--query "SELECT * FROM users" \
--output users.json
2. Custom CA Trust
Use a custom CA certificate file for trust anchor pinning:
# Use custom CA certificate
gold_digger \
--db-url "mysql://user:pass@internal.db:3306/mydb" \
--tls-ca-file /etc/ssl/certs/internal-ca.pem \
--query "SELECT * FROM sensitive_data" \
--output data.json
3. Skip Hostname Verification
Skip hostname verification while keeping other security checks:
# Skip hostname verification for development
gold_digger \
--db-url "mysql://user:pass@192.168.1.100:3306/mydb" \
--insecure-skip-hostname-verify \
--query "SELECT * FROM test_data" \
--output test.json
โ ๏ธ Security Warning: Displays warning about man-in-the-middle attack vulnerability.
4. Accept Invalid Certificates
Disable all certificate validation (DANGEROUS - testing only):
# Accept any certificate for testing (DANGEROUS)
gold_digger \
--db-url "mysql://user:pass@test.db:3306/mydb" \
--allow-invalid-certificate \
--query "SELECT * FROM test_table" \
--output test.csv
๐จ Security Warning: Displays prominent warning about insecure connection.
TLS Error Handling
Gold Digger provides intelligent error messages with specific CLI flag suggestions:
Certificate Validation Failures
Error: Certificate validation failed: certificate has expired
Suggestion: Use --allow-invalid-certificate for testing environments
Solutions by error type:
- Expired certificates: Use
--allow-invalid-certificate
(testing only) - Self-signed certificates: Use
--allow-invalid-certificate
or--tls-ca-file
with custom CA - Internal CA certificates: Use
--tls-ca-file /path/to/internal-ca.pem
Hostname Verification Failures
Error: Hostname verification failed for 192.168.1.100: certificate is for db.company.com
Suggestion: Use --insecure-skip-hostname-verify to bypass hostname checks
Common causes:
- Connecting to servers by IP address
- Certificates with mismatched hostnames
- Development environments with generic certificates
Custom CA File Issues
Error: CA certificate file not found: /path/to/ca.pem
Solution: Ensure the file exists and is readable
Error: Invalid CA certificate format in /path/to/ca.pem: not valid PEM
Solution: Ensure the file contains valid PEM-encoded certificates
Mutually Exclusive Flag Errors
Error: Mutually exclusive TLS flags provided: --tls-ca-file, --insecure-skip-hostname-verify
Solution: Use only one TLS security option at a time
Security Features
Credential Protection
Gold Digger automatically protects sensitive information:
# Credentials are automatically redacted in error messages
gold_digger \
--db-url "mysql://user:secret@host:3306/db" \
--query "SELECT 1" \
--output test.json \
--dump-config
# Output shows:
# "database_url": "***REDACTED***"
URL Redaction
All database URLs are sanitized in logs and error output:
# Before redaction:
mysql://admin:supersecret@db.example.com:3306/production
# After redaction:
mysql://***REDACTED***:***REDACTED***@db.example.com:3306/production
Error Message Sanitization
TLS error messages are scrubbed of sensitive information:
- Passwords and tokens are replaced with
***REDACTED***
- Connection strings are sanitized
- Query content with credentials is masked
Build Configuration
# Standard build
cargo build --release
# Minimal build
cargo build --release --no-default-features --features "json csv"
Production Recommendations
Security Best Practices
- Always use TLS for production databases
- Verify certificates - don't skip validation
- Use strong passwords and rotate regularly
- Monitor TLS versions - ensure TLS 1.2+ only
- Keep certificates updated - monitor expiration
Connection Security
# โ
Secure connection example
gold_digger \
--db-url "mysql://app_user:strong_password@secure-db.example.com:3306/production" \
--query "SELECT COUNT(*) FROM users" \
--output user_count.json
# โ Avoid insecure connections in production
# mysql://user:pass@insecure-db.example.com:3306/db (no TLS)
Certificate Management
- Use certificates from trusted CAs
- Implement certificate rotation procedures
- Monitor certificate expiration dates
- Test certificate changes in staging first
Troubleshooting Guide
Common TLS Issues
Issue: TLS connection failed
TLS connection failed: connection timed out
Solutions:
- Check network connectivity
- Verify server is running
- Confirm firewall allows TLS traffic
- Verify TLS configuration:
- Check server certificate validity and chain integrity
- Confirm server is listening on the TLS port (typically 3306)
- Validate TLS cipher/protocol compatibility between client and server
- Test using
openssl s_client -connect host:3306
orcurl --tlsv1.2
to capture handshake details - Review server logs for TLS-related errors or connection rejections
- Enable verbose TLS client debug output to diagnose timeouts and handshake failures
Issue: Certificate validation failed
Certificate validation failed: self signed certificate in certificate chain
Solutions:
- Install proper CA certificates
- Use certificates from trusted CA
- For testing only: use
--allow-invalid-certificate
flag - For custom CAs: use
--tls-ca-file /path/to/ca.pem
Debugging TLS Issues
Enable verbose logging for TLS troubleshooting:
gold_digger -v \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT 1" \
--output debug.json
# Example verbose output
Using platform certificate store for TLS validation
TLS connection established: TLS 1.3, cipher: TLS_AES_256_GCM_SHA384
Security Warnings
Gold Digger displays security warnings for insecure modes:
# Hostname verification disabled
WARNING: Hostname verification disabled. Connection is vulnerable to man-in-the-middle attacks.
# Certificate validation disabled
WARNING: Certificate validation disabled. Connection is NOT secure.
This should ONLY be used for testing. Never use in production.
Exit Codes for TLS Errors
TLS-related errors map to specific exit codes:
- Exit 2: TLS configuration errors (mutually exclusive flags, invalid CA files)
- Exit 3: TLS connection failures (handshake, validation, network issues)
Deployment Recommendations
Production Environments
Use default platform trust mode:
gold_digger --db-url "mysql://app:secure@prod.db:3306/app" \
--query "SELECT * FROM orders" --output orders.json
Internal Infrastructure
Use custom CA trust mode:
gold_digger --db-url "mysql://service:token@internal.db:3306/data" \
--tls-ca-file /etc/pki/internal-ca.pem \
--query "SELECT * FROM metrics" --output metrics.csv
Development Environments
Use hostname skip for development servers:
gold_digger --db-url "mysql://dev:devpass@192.168.1.100:3306/dev" \
--insecure-skip-hostname-verify \
--query "SELECT * FROM test_data" --output dev_data.json
Testing Environments Only
Accept invalid certificates for testing:
gold_digger --db-url "mysql://test:test@localhost:3306/test" \
--allow-invalid-certificate \
--query "SELECT COUNT(*) FROM test_table" --output count.json
โ ๏ธ Security Warning: Never use --allow-invalid-certificate
in production.
TLS Migration Guide
Release Artifact Verification
This document provides instructions for verifying the integrity and authenticity of Gold Digger release artifacts.
Overview
Each Gold Digger release includes the following security artifacts:
- Binaries: Cross-platform executables for Linux, macOS, and Windows
- Checksums: SHA256 checksums for all binaries (
SHA256SUMS
and individual.sha256
files) - SBOMs: Software Bill of Materials in CycloneDX format (
.sbom.cdx.json
files) - Signatures: Cosign keyless signatures (
.sig
and.crt
files)
Checksum Verification
Using the Consolidated Checksums File
- Download the
SHA256SUMS
file from the release - Download the binary you want to verify
- Verify the checksum:
# Linux/macOS
sha256sum -c SHA256SUMS
# Or verify a specific file
sha256sum gold_digger-x86_64-unknown-linux-gnu.tar.gz
# Compare with the value in SHA256SUMS
Using Individual Checksum Files
Each binary has a corresponding .sha256
file:
# Download both the binary and its checksum file
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz.sha256
# Verify the checksum
sha256sum -c gold_digger-x86_64-unknown-linux-gnu.tar.gz.sha256
Signature Verification
Gold Digger releases are signed using Cosign with keyless OIDC authentication.
Install Cosign
# Linux/macOS
curl -O -L "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"
sudo mv cosign-linux-amd64 /usr/local/bin/cosign
sudo chmod +x /usr/local/bin/cosign
# Or use package managers
# Homebrew (macOS/Linux)
brew install cosign
# APT (Ubuntu/Debian)
sudo apt-get update
sudo apt install cosign
Verify Signatures
Each binary has corresponding .sig
(signature) and .crt
(certificate) files:
# Download the binary and its signature files
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz.sig
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt
# Verify the signature
cosign verify-blob \
--certificate gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt \
--signature gold_digger-x86_64-unknown-linux-gnu.tar.gz.sig \
--certificate-identity-regexp "^https://github\.com/EvilBit-Labs/gold_digger/\.github/workflows/release\.yml@refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$" \
--certificate-oidc-issuer-regexp "^https://token\.actions\.githubusercontent\.com$" \
gold_digger-x86_64-unknown-linux-gnu.tar.gz
Understanding the Certificate
The certificate contains information about the signing identity:
# Examine the certificate
openssl x509 -in gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt -text -noout
Look for:
- Subject: Should contain GitHub Actions workflow information
- Issuer: Should be from Sigstore/Fulcio
- SAN (Subject Alternative Name): Should contain the GitHub repository URL
Extracting Certificate Identity and Issuer
To extract the exact certificate identity and issuer values for verification:
# Extract the certificate identity (SAN URI)
openssl x509 -in gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt -text -noout | grep -A1 "X509v3 Subject Alternative Name" | grep URI
# Extract the OIDC issuer
openssl x509 -in gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt -text -noout | grep -A10 "X509v3 extensions" | grep -A5 "1.3.6.1.4.1.57264.1.1" | grep "https://token.actions.githubusercontent.com"
The certificate identity should match the pattern:
https://github.com/EvilBit-Labs/gold_digger/.github/workflows/release.yml@refs/tags/v1.0.0
The OIDC issuer should be: https://token.actions.githubusercontent.com
Security Note: The verification commands in this documentation use exact regex patterns anchored
to these specific values to prevent signature forgery attacks. Never use wildcard patterns like .*
in production verification.
SBOM Inspection
Software Bill of Materials (SBOM) files provide detailed information about dependencies and components.
Install SBOM Tools
# Install syft for SBOM generation and inspection
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
# Install grype for vulnerability scanning
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
Inspect SBOM Contents
# Download the SBOM file
wget https://github.com/EvilBit-Labs/gold_digger/releases/download/v1.0.0/gold_digger-x86_64-unknown-linux-gnu.tar.gz.sbom.cdx.json
# View SBOM in human-readable format
syft packages file:gold_digger-x86_64-unknown-linux-gnu.tar.gz.sbom.cdx.json -o table
# View detailed JSON structure
jq . gold_digger-x86_64-unknown-linux-gnu.tar.gz.sbom.cdx.json | less
Vulnerability Assessment
Use the SBOM to check for known vulnerabilities:
# Scan the SBOM for vulnerabilities
grype sbom:gold_digger-x86_64-unknown-linux-gnu.tar.gz.sbom.cdx.json
# Generate a vulnerability report
grype sbom:gold_digger-x86_64-unknown-linux-gnu.tar.gz.sbom.cdx.json -o json > vulnerability-report.json
Complete Verification Script
Here's a complete script that verifies all aspects of a release artifact:
#!/bin/bash
set -euo pipefail
RELEASE_TAG="v1.0.0"
ARTIFACT_NAME="gold_digger-x86_64-unknown-linux-gnu.tar.gz"
BASE_URL="https://github.com/EvilBit-Labs/gold_digger/releases/download/${RELEASE_TAG}"
echo "๐ Verifying Gold Digger release artifact: ${ARTIFACT_NAME}"
# Download all required files
echo "๐ฅ Downloading files..."
wget -q "${BASE_URL}/${ARTIFACT_NAME}"
wget -q "${BASE_URL}/${ARTIFACT_NAME}.sha256"
wget -q "${BASE_URL}/${ARTIFACT_NAME}.sig"
wget -q "${BASE_URL}/${ARTIFACT_NAME}.crt"
wget -q "${BASE_URL}/${ARTIFACT_NAME}.sbom.cdx.json"
# Verify checksum
echo "๐ Verifying checksum..."
if sha256sum -c "${ARTIFACT_NAME}.sha256"; then
echo "โ
Checksum verification passed"
else
echo "โ Checksum verification failed"
exit 1
fi
# Verify signature
echo "๐ Verifying signature..."
if cosign verify-blob \
--certificate "${ARTIFACT_NAME}.crt" \
--signature "${ARTIFACT_NAME}.sig" \
--certificate-identity "https://github.com/EvilBit-Labs/gold_digger/.github/workflows/release.yml@refs/tags/${RELEASE_TAG}" \
--certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
"${ARTIFACT_NAME}"; then
echo "โ
Signature verification passed"
else
echo "โ Signature verification failed"
exit 1
fi
# Validate SBOM
echo "๐ Validating SBOM..."
if jq empty "${ARTIFACT_NAME}.sbom.cdx.json" 2>/dev/null; then
echo "โ
SBOM is valid JSON"
# Show SBOM summary
echo "๐ SBOM Summary:"
syft packages "sbom:${ARTIFACT_NAME}.sbom.cdx.json" -o table | head -20
else
echo "โ SBOM validation failed"
exit 1
fi
echo "๐ All verifications passed! The artifact is authentic and secure."
Airgap Installation Guide
For environments without internet access:
1. Download Required Files
On a connected machine, download:
- The binary archive
- The
.sha256
checksum file - The
.sig
and.crt
signature files - The
.sbom.cdx.json
SBOM file
2. Transfer to Airgap Environment
Transfer all files to the airgap environment using approved methods (USB, secure file transfer, etc.).
3. Verify in Airgap Environment
# Verify checksum (no network required)
# For Linux (GNU coreutils):
sha256sum -c gold_digger-x86_64-unknown-linux-gnu.tar.gz.sha256
# For macOS (native):
shasum -a 256 -c gold_digger-x86_64-unknown-linux-gnu.tar.gz.sha256
# Note: If sha256sum is not available on macOS, install GNU coreutils:
# brew install coreutils
# Extract and install
tar -xzf gold_digger-x86_64-unknown-linux-gnu.tar.gz
sudo mv gold_digger /usr/local/bin/
sudo chmod +x /usr/local/bin/gold_digger
# Verify installation
gold_digger --version
4. Optional: Offline Signature Verification
If Cosign is available in the airgap environment:
# Verify signature (requires Cosign but no network)
cosign verify-blob \
--certificate gold_digger-x86_64-unknown-linux-gnu.tar.gz.crt \
--signature gold_digger-x86_64-unknown-linux-gnu.tar.gz.sig \
--certificate-identity-regexp "^https://github\.com/EvilBit-Labs/gold_digger/\.github/workflows/release\.yml@refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$" \
--certificate-oidc-issuer-regexp "^https://token\.actions\.githubusercontent\.com$" \
gold_digger-x86_64-unknown-linux-gnu.tar.gz
Security Considerations
Trust Model
- Signatures: Trust is rooted in GitHub's OIDC identity and Sigstore's transparency log
- Checksums: Protect against corruption and tampering
- SBOMs: Enable vulnerability assessment and supply chain analysis
Verification Best Practices
- Always verify checksums before using any binary
- Verify signatures when possible to ensure authenticity
- Review SBOMs for security-sensitive deployments
- Use the latest release unless you have specific version requirements
- Report security issues through GitHub's security advisory process
Automated Verification
For CI/CD pipelines, consider automating verification:
- name: Verify Gold Digger Release
run: |
# Download and verify as shown above
# Fail the pipeline if verification fails
Troubleshooting
Common Issues
Checksum mismatch: Re-download the file, check for network issues Signature verification fails: Ensure you have the correct certificate and signature files SBOM parsing errors: Verify the SBOM file wasn't corrupted during download
Getting Help
- Security issues: Use GitHub's security advisory process
- General questions: Open an issue on the GitHub repository
- Documentation: Check the main documentation at
/docs/
Security Best Practices
Comprehensive security guidelines for using Gold Digger in production.
Production Deployment
File Permissions
- Set restrictive permissions on output files containing sensitive data
- Use umask to control default file permissions
- Consider encrypting output files for highly sensitive data
Credential Management
- Use environment variables instead of CLI flags for credentials
- Implement credential rotation procedures
- Use secrets management systems in containerized environments
Network Security
- Always use TLS/SSL for database connections
- Restrict database access by IP address
- Use private networks or VPNs for sensitive operations
Data Handling
Output Security
- Review output files for sensitive information before sharing
- Implement data retention policies
- Use secure file transfer methods
Query Safety
- Validate SQL queries before execution
- Use parameterized queries when possible
- Avoid exposing sensitive data in query logs
Monitoring and Auditing
Access Logging
- Log all database access attempts
- Monitor for unusual query patterns
- Implement alerting for security events
Regular Security Reviews
- Audit database permissions regularly
- Review and update security configurations
- Test backup and recovery procedures
Development Security
Security Scanning
Gold Digger includes comprehensive security scanning tools for development and CI/CD:
# Run security audit for known vulnerabilities
just audit
# Check licenses and security policies
just deny
# Comprehensive security scan (audit + deny + grype)
just security
# Generate Software Bill of Materials (SBOM)
just sbom
Dependency Management
- Regularly update dependencies to patch security vulnerabilities
- Use
cargo audit
to check for known security issues - Review dependency licenses with
cargo deny
- Generate and review SBOMs for supply chain security
Vulnerability Scanning
The just security
command performs comprehensive vulnerability scanning:
- Security Audit: Uses
cargo audit
to check for known vulnerabilities in dependencies - License Compliance: Uses
cargo deny
to enforce license and security policies - Container Scanning: Uses
grype
to scan for vulnerabilities in the final binary
Supply Chain Security
- All release artifacts are signed with Cosign using keyless OIDC
- SBOMs are generated for all releases in CycloneDX format
- Dependencies are tracked and audited automatically
- Use
just sbom
to inspect the software bill of materials locally
Troubleshooting
Common issues and solutions for Gold Digger.
Quick Diagnostics
Before diving into specific issues, try these basic checks:
- Verify Installation:
gold_digger --version
- Check Configuration: Ensure all required parameters are set
- Test Database Connection: Use a simple query first
- Review Error Messages: Look for specific error codes and messages
Common Issue Categories
- Connection Problems - Database connectivity and TLS issues (rustls-only implementation)
- Type Safety - Safe data type handling and conversion
- Type Errors - Data type conversion problems
- Performance Issues - Slow queries and memory usage
Getting Help
If you can't find a solution here:
- Check the GitHub Issues
- Review the Configuration Guide
- Create a new issue with detailed error information
Error Codes
Gold Digger uses standard exit codes:
0
: Success1
: No results returned2
: Configuration error3
: Database connection failure4
: Query execution failure5
: File I/O error
Connection Problems
Troubleshooting database connection issues with Gold Digger's enhanced error handling.
Gold Digger Connection Error Codes
Gold Digger provides structured exit codes for different connection failures:
- Exit Code 2: Configuration errors (invalid URL format, missing parameters)
- Exit Code 3: Connection/authentication failures (network, credentials, TLS)
Common Connection Errors
Connection Refused (Exit Code 3)
Error Message:
Database connection failed: Connection refused. Check server availability and network connectivity
Causes & Solutions:
- Database server not running: Start MySQL/MariaDB service
- Wrong port: Verify port number (default: 3306)
- Firewall blocking: Check firewall rules on both client and server
- Network issues: Test basic network connectivity
Diagnostic Steps:
# Test network connectivity
telnet hostname 3306
# Check if service is running
systemctl status mysql # Linux
brew services list | grep mysql # macOS
Access Denied (Exit Code 3)
Error Message:
Database authentication failed: Access denied for user 'username'@'host'. Check username and password
Causes & Solutions:
- Invalid credentials: Verify username and password
- Insufficient permissions: Grant appropriate database permissions
- Host restrictions: Check MySQL user host permissions
- Account locked/expired: Verify account status
Diagnostic Steps:
# Test credentials manually
mysql -h hostname -P port -u username -p database
# Check user permissions
SHOW GRANTS FOR 'username'@'host';
Unknown Database (Exit Code 3)
Error Message:
Database connection failed: Unknown database 'dbname'
Causes & Solutions:
- Database doesn't exist: Create database or verify name
- Typo in database name: Check spelling and case sensitivity
- No access to database: Grant permissions to the database
Diagnostic Steps:
# List available databases
SHOW DATABASES;
# Create database if needed
CREATE DATABASE dbname;
Invalid URL Format (Exit Code 2)
Error Message:
Invalid database URL format: Invalid URL scheme. URL: ***REDACTED***
Causes & Solutions:
- Wrong URL scheme: Use
mysql://
nothttp://
or others - Missing components: Ensure format is
mysql://user:pass@host:port/db
- Special characters: URL-encode special characters in passwords
Correct Format:
mysql://username:password@hostname:port/database
TLS Connection Issues (rustls-only implementation)
Gold Digger uses rustls for all TLS connections with enhanced security controls and better error messages.
TLS Handshake and Connection Issues (Exit Code 3)
Common Error Messages:
TLS handshake failed: protocol version mismatch
Certificate validation failed: unable to get local issuer certificate
Certificate validation failed: self signed certificate in certificate chain
Hostname verification failed for 192.168.1.100: certificate is for db.company.com
Causes & Solutions:
- Certificate validation failures:
- Self-signed certificates: Use
--allow-invalid-certificate
(testing only) or--tls-ca-file /path/to/ca.pem
- Expired certificates: Use
--allow-invalid-certificate
(testing only) - Internal CA certificates: Use
--tls-ca-file /path/to/internal-ca.pem
- Missing CA certificates: Install system CA certificates or use custom CA file
- Self-signed certificates: Use
- Hostname verification failures:
- IP address connections: Use
--insecure-skip-hostname-verify
for development - Certificate hostname mismatch: Use
--insecure-skip-hostname-verify
or fix certificate
- IP address connections: Use
- TLS version/cipher issues: Ensure server supports TLS 1.2+ with compatible cipher suites
Gold Digger TLS CLI Flags:
# Use custom CA certificate (recommended for internal infrastructure)
gold_digger --tls-ca-file /etc/ssl/certs/internal-ca.pem --db-url "mysql://..." --query "..." --output results.json
# Skip hostname verification (development environments)
gold_digger --insecure-skip-hostname-verify --db-url "mysql://user:pass@192.168.1.100:3306/db" --query "..." --output results.json
# Accept invalid certificates (testing only - DANGEROUS)
gold_digger --allow-invalid-certificate --db-url "mysql://..." --query "..." --output results.json
Diagnostic Steps:
# Check server TLS configuration
SHOW VARIABLES LIKE 'tls_version';
SHOW VARIABLES LIKE 'ssl_cipher';
# Verify certificate chain
openssl s_client -connect hostname:3306 -servername hostname
# Test with Gold Digger verbose mode
gold_digger -v --db-url "mysql://..." --query "SELECT 1" --output test.json
Network Troubleshooting
Firewall Issues
Symptoms:
- Connection timeouts
- "Connection refused" errors
- Intermittent connectivity
Solutions:
# Check if port is open (Linux/macOS)
nmap -p 3306 hostname
# Test connectivity
telnet hostname 3306
# Check local firewall (Linux)
sudo ufw status
sudo iptables -L
# Check local firewall (macOS)
sudo pfctl -sr
DNS Resolution Issues
Symptoms:
- "Host not found" errors
- Connection works with IP but not hostname
Solutions:
# Test DNS resolution
nslookup hostname
dig hostname
# Try IP address directly
gold_digger --db-url "mysql://user:pass@192.168.1.100:3306/db" --query "SELECT 1" --output test.json
Debugging Connection Issues
Enable Verbose Logging
gold_digger -v \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT 1" \
--output debug.json
Verbose Output Example:
Connecting to database...
Database connection failed: Connection refused
Configuration Debugging
# Check resolved configuration (credentials redacted)
gold_digger \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT 1" \
--output test.json \
--dump-config
Test with Minimal Query
# Use simple test query
gold_digger \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT 1 as test" \
--output connection_test.json
Advanced Diagnostics
MySQL Error Code Mapping
Gold Digger maps specific MySQL error codes to contextual messages:
MySQL Error | Code | Gold Digger Message |
---|---|---|
ER_ACCESS_DENIED_ERROR | 1045 | Access denied - invalid credentials |
ER_DBACCESS_DENIED_ERROR | 1044 | Access denied to database |
ER_BAD_DB_ERROR | 1049 | Unknown database |
CR_CONNECTION_ERROR | 2002 | Connection failed - server not reachable |
CR_CONN_HOST_ERROR | 2003 | Connection failed - server not responding |
CR_SERVER_GONE_ERROR | 2006 | Connection lost - server has gone away |
Connection Pool Issues
Symptoms:
- Intermittent connection failures
- "Too many connections" errors
Solutions:
- Check MySQL
max_connections
setting - Monitor active connection count
- Verify connection pool configuration
Performance-Related Connection Issues
Symptoms:
- Slow connection establishment
- Timeouts on large queries
Solutions:
# Test with smaller query first
gold_digger \
--db-url "mysql://user:pass@host:3306/db" \
--query "SELECT COUNT(*) FROM table LIMIT 1" \
--output count_test.json
# Check server performance
SHOW PROCESSLIST;
SHOW STATUS LIKE 'Threads_connected';
Getting Help
When reporting connection issues, include:
- Complete error message (credentials will be automatically redacted)
- Gold Digger version:
gold_digger --version
- Database server version:
SELECT VERSION();
- Connection string format (without credentials)
- Network environment (local, remote, cloud, etc.)
- TLS requirements and certificate setup
Example Issue Report:
Gold Digger Version: 0.2.6
Database: MySQL 8.0.35
Error: Certificate validation failed: self signed certificate
Connection: mysql://user:***@remote-db.example.com:3306/production
Environment: Connecting from local machine to cloud database
TLS: Required, using self-signed certificates
Integration Testing Issues
Common issues and solutions for Gold Digger's integration testing framework.
Current Implementation Status
The integration testing framework is actively under development. Some tests may be incomplete or marked as TODO. This is expected during the development phase.
Docker-Related Issues
Docker Not Available
Symptom: Integration tests fail with "Docker not available" or similar errors.
Solution:
# Check Docker status
docker info
# Start Docker service (Linux)
sudo systemctl start docker
# Start Docker Desktop (macOS/Windows)
# Use Docker Desktop application
# Verify Docker is working
docker run hello-world
Container Startup Timeouts
Symptom: Tests fail with container startup timeouts in CI environments.
Solution:
# Increase timeout for CI environments
export TESTCONTAINERS_WAIT_TIMEOUT=300
# Run tests with extended timeout
cargo test --features integration_tests -- --ignored
Port Conflicts
Symptom: Tests fail with "port already in use" errors.
Solution:
# Check for conflicting processes
lsof -i :3306
# Kill conflicting MySQL/MariaDB processes
sudo pkill -f mysql
sudo pkill -f mariadb
# Use random ports (testcontainers default)
# No manual port configuration needed
Test Execution Issues
Integration Tests Not Running
Symptom: Integration tests are skipped or not executed.
Solution:
# Ensure integration_tests feature is enabled
cargo test --features integration_tests -- --ignored
# Check test discovery
cargo test --features integration_tests --list | grep -E "(integration|tls_variants|database_seeding)"
# Run specific integration tests
cargo test --features integration_tests --test tls_variants_test -- --ignored
cargo test --features integration_tests --test tls_integration -- --ignored
cargo test --features integration_tests --test database_seeding_test -- --ignored
# Check if Docker is available
docker info
TLS Variant Tests Not Working
Symptom: TLS variant tests fail with configuration or connection errors.
Solution:
# Check TLS variant test specifically
cargo test test_mysql_tls_variant --test tls_variants_test -- --ignored --nocapture
# Verify TLS configuration validation
cargo test test_tls_container_config_methods --test tls_variants_test
# Check container creation
cargo test test_connection_url_generation --test tls_variants_test -- --ignored
# Enable verbose logging
RUST_LOG=debug cargo test --test tls_variants_test -- --ignored --nocapture
Test Data Issues
Symptom: Tests fail due to missing or incorrect test data.
Solution:
# Verify test fixtures exist
ls tests/fixtures/
# Check schema and seed data
cat tests/fixtures/schema.sql
cat tests/fixtures/seed_data.sql
# Regenerate test data if needed
just test-integration --verbose
Memory Issues with Large Datasets
Symptom: Tests fail with out-of-memory errors during large dataset testing.
Solution:
# Increase available memory for tests
export RUST_MIN_STACK=8388608
# Run tests with memory profiling
cargo test --features integration_tests --release -- --ignored
# Skip performance tests if memory-constrained
cargo test --features integration_tests -- --ignored --skip performance
TLS Certificate Issues
Certificate Validation Failures
Symptom: TLS tests fail with certificate validation errors.
Solution:
# Check TLS certificate fixtures
ls tests/fixtures/tls/
# Verify certificate format
openssl x509 -in tests/fixtures/tls/server.pem -text -noout
# Regenerate certificates if expired
# (Certificate generation scripts in tests/fixtures/tls/)
TLS Connection Failures
Symptom: TLS integration tests fail to establish secure connections.
Solution:
# Test TLS configuration separately
cargo test --test tls_config_unit_tests
# Check rustls dependencies
cargo tree | grep rustls
# Verify no native-tls conflicts
! cargo tree | grep native-tls || echo "native-tls conflict detected"
CI Environment Issues
GitHub Actions Failures
Symptom: Integration tests pass locally but fail in GitHub Actions.
Solution:
# Test with act (local GitHub Actions simulation)
act -j test-integration
# Check CI-specific environment variables
env | grep CI
env | grep GITHUB
# Use CI-compatible timeouts
export CI=true
export GITHUB_ACTIONS=true
Resource Limits in CI
Symptom: Tests fail due to CI resource constraints.
Solution:
# Use smaller test datasets in CI
if [ "$CI" = "true" ]; then
export TEST_DATASET_SIZE=100
else
export TEST_DATASET_SIZE=1000
fi
# Skip resource-intensive tests in CI
cargo test -- --ignored --skip large_dataset
Database-Specific Issues
MySQL Version Compatibility
Symptom: Tests fail with MySQL version-specific errors.
Solution:
# Check MySQL version in container
docker exec -it <container_id> mysql --version
# Use specific MySQL version
# (Configured in testcontainers setup)
# Test with multiple MySQL versions
cargo test --features integration_tests mysql_8_0 -- --ignored
cargo test --features integration_tests mysql_8_1 -- --ignored
MariaDB SSL Configuration
Symptom: MariaDB TLS tests fail with SSL configuration errors.
Solution:
# Check MariaDB SSL status
docker exec -it <container_id> mysql -e "SHOW VARIABLES LIKE 'have_ssl';"
# Verify SSL certificate mounting
docker exec -it <container_id> ls -la /etc/mysql/ssl/
# Check MariaDB error log
docker logs <container_id>
Performance Issues
Slow Test Execution
Symptom: Integration tests take too long to complete.
Solution:
# Use nextest for parallel execution
cargo nextest run -- --ignored
# Run specific test categories
cargo test --test tls_variants_test -- --ignored
cargo test --test database_seeding_test -- --ignored
# Skip performance tests for faster feedback
cargo test -- --ignored --skip performance
Container Cleanup Issues
Symptom: Containers not cleaned up after tests, consuming resources.
Solution:
# Enable Ryuk for automatic cleanup
export TESTCONTAINERS_RYUK_DISABLED=false
# Manual container cleanup
docker ps -a | grep testcontainers | awk '{print $1}' | xargs docker rm -f
# Clean up test networks
docker network prune -f
Debugging Integration Tests
Verbose Test Output
# Run with verbose output
RUST_LOG=debug cargo test --features integration_tests -- --ignored --nocapture
# Keep containers running for inspection
TESTCONTAINERS_RYUK_DISABLED=true cargo test --features integration_tests -- --ignored
# Run specific test files with verbose output
RUST_LOG=debug cargo test --features integration_tests --test tls_variants_test -- --ignored --nocapture
RUST_LOG=debug cargo test --features integration_tests --test database_seeding_test -- --ignored --nocapture
# Generate detailed test reports
cargo nextest run --features integration_tests --profile ci -- --ignored
Common TLS Issues
Symptom: TLS tests fail with certificate or connection errors.
Solution:
# Check TLS certificate generation
cargo test test_tls_config_validation --test tls_variants_test
# Verify TLS container configuration
cargo test test_tls_container_config_methods --test tls_variants_test
# Test TLS connection validation
cargo test test_complete_tls_vs_plain_workflow --test tls_variants_test -- --ignored
# Check rustls dependencies
cargo tree | grep rustls
Container Inspection
# List running containers
docker ps
# Inspect container configuration
docker inspect <container_id>
# Access container shell
docker exec -it <container_id> /bin/bash
# Check container logs
docker logs <container_id>
Test Coverage Analysis
# Generate coverage for integration tests
cargo llvm-cov --features integration_tests --html -- --ignored
# Generate coverage for all tests
cargo llvm-cov --features integration_tests --html -- --include-ignored
# View coverage report
open target/llvm-cov/html/index.html
Getting Help
If you encounter issues not covered here:
- Check the logs: Enable verbose logging with
RUST_LOG=debug
- Verify Docker setup: Ensure Docker is running and accessible
- Test isolation: Run individual tests to isolate issues
- CI reproduction: Use
act
to reproduce CI failures locally - Create an issue: Report bugs with detailed error messages and environment information
For more information, see the Integration Testing Framework documentation.
Type Safety and Data Conversion
Gold Digger handles MySQL data types safely without panicking on NULL values or type mismatches.
Safe Type Handling
Automatic Type Conversion
Gold Digger automatically converts all MySQL data types to string representations:
- NULL values โ Empty strings (
""
) - Integers โ String representation (
42
โ"42"
) - Floats/Doubles โ String representation (
3.14
โ"3.14"
) - Dates/Times โ ISO format strings (
2023-12-25 14:30:45.123456
) - Binary data โ UTF-8 conversion (with lossy conversion for invalid UTF-8)
Special Value Handling
Gold Digger handles special floating-point values:
- NaN โ
"NaN"
- Positive Infinity โ
"Infinity"
- Negative Infinity โ
"-Infinity"
JSON Output Type Inference
When outputting to JSON format, Gold Digger attempts to preserve data types:
{
"data": [
{
"id": 123, // Integer preserved
"price": 19.99, // Float preserved
"name": "Product", // String preserved
"active": true, // Boolean preserved
"description": null // NULL preserved as JSON null
}
]
}
Common Type Issues
NULL Value Handling
Problem: Database contains NULL values
Solution: Gold Digger handles NULLs automatically:
- CSV/TSV: NULL becomes empty string
- JSON: NULL becomes
null
value
-- This query works safely with NULLs
SELECT id, name, description FROM products WHERE id <= 10;
Mixed Data Types
Problem: Column contains mixed data types
Solution: All values are converted to strings safely:
-- This works even if 'value' column has mixed types
SELECT id, value FROM mixed_data_table;
Binary Data
Problem: Column contains binary data (BLOB, BINARY)
Solution: Binary data is converted to UTF-8 with lossy conversion:
-- Binary columns are handled safely
SELECT id, binary_data FROM files;
Date and Time Formats
Problem: Need consistent date formatting
Solution: Gold Digger uses ISO format for all date/time values:
-- Date/time columns are formatted consistently
SELECT created_at, updated_at FROM events;
Output format:
- Date only:
2023-12-25
- DateTime:
2023-12-25 14:30:45.123456
- Time only:
14:30:45.123456
Best Practices
Query Writing
- No casting required: Unlike previous versions, you don't need to cast columns to CHAR
- Use appropriate data types: Let MySQL handle the data types naturally
- Handle NULLs in SQL if needed: Use
COALESCE()
orIFNULL()
for custom NULL handling
-- Good: Let Gold Digger handle type conversion
SELECT id, name, price, created_at FROM products;
-- Also good: Custom NULL handling in SQL
SELECT id, COALESCE(name, 'Unknown') as name FROM products;
Output Format Selection
Choose the appropriate output format based on your needs:
- CSV: Best for spreadsheet import, preserves all data as strings
- JSON: Best for APIs, preserves data types where possible
- TSV: Best for tab-delimited processing, similar to CSV
Error Prevention
Gold Digger's safe type handling prevents common errors:
- No panics on NULL values
- No crashes on type mismatches
- Graceful handling of special values (NaN, Infinity)
- Safe binary data conversion
Migration from Previous Versions
Removing CAST Statements
If you have queries with explicit casting from previous versions:
-- Old approach (still works but unnecessary)
SELECT CAST(id AS CHAR) as id, CAST(name AS CHAR) as name FROM users;
-- New approach (recommended)
SELECT id, name FROM users;
Handling Type-Specific Requirements
If you need specific type handling, use SQL functions:
-- Format numbers with specific precision
SELECT id, ROUND(price, 2) as price FROM products;
-- Format dates in specific format
SELECT id, DATE_FORMAT(created_at, '%Y-%m-%d') as created_date FROM events;
-- Handle NULLs with custom values
SELECT id, COALESCE(description, 'No description') as description FROM items;
Troubleshooting Type Issues
Unexpected Output Format
Issue: Numbers appearing as strings in JSON
Cause: Value contains non-numeric characters or formatting
Solution: Clean the data in SQL:
SELECT id, CAST(TRIM(price_string) AS DECIMAL(10,2)) as price FROM products;
Binary Data Display Issues
Issue: Binary data showing as garbled text
Cause: Binary column being converted to string
Solution: Use SQL functions to handle binary data:
-- Convert binary to hex representation
SELECT id, HEX(binary_data) as binary_hex FROM files;
-- Or encode as base64 (MySQL 5.6+)
SELECT id, TO_BASE64(binary_data) as binary_b64 FROM files;
Date Format Consistency
Issue: Need different date format
Solution: Format dates in SQL:
-- US format
SELECT id, DATE_FORMAT(created_at, '%m/%d/%Y') as created_date FROM events;
-- European format
SELECT id, DATE_FORMAT(created_at, '%d.%m.%Y') as created_date FROM events;
Performance Considerations
Gold Digger's type conversion is optimized for safety and performance:
- Zero-copy string conversion where possible
- Efficient NULL handling without allocations
- Streaming-friendly design for large result sets
- Memory-efficient binary data handling
The safe type handling adds minimal overhead while preventing crashes and data corruption.
Type Errors
Solutions for data type conversion and NULL handling issues.
Common Type Conversion Errors
NULL Value Panics
Problem: Gold Digger crashes with a panic when encountering NULL values.
Error Message:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value'
Solution: Always cast columns to CHAR in your SQL queries:
-- โ Dangerous - can panic on NULL
SELECT id, name, created_at FROM users;
-- โ
Safe - handles NULL values properly
SELECT
CAST(id AS CHAR) as id,
CAST(name AS CHAR) as name,
CAST(created_at AS CHAR) as created_at
FROM users;
Non-String Type Errors
Problem: Numeric, date, or binary columns cause conversion errors.
Solution: Cast all non-string columns:
SELECT
CAST(user_id AS CHAR) as user_id,
username, -- Already string, no cast needed
CAST(balance AS CHAR) as balance,
CAST(last_login AS CHAR) as last_login,
CAST(is_active AS CHAR) as is_active
FROM accounts;
Best Practices for Type Safety
1. Always Use CAST
-- For all numeric types
CAST(price AS CHAR) as price,
CAST(quantity AS CHAR) as quantity,
-- For dates and timestamps
CAST(created_at AS CHAR) as created_at,
CAST(updated_at AS CHAR) as updated_at,
-- For boolean values
CAST(is_enabled AS CHAR) as is_enabled
2. Handle NULL Values Explicitly
-- Use COALESCE for default values
SELECT
CAST(COALESCE(phone, '') AS CHAR) as phone,
CAST(COALESCE(address, 'No address') AS CHAR) as address
FROM contacts;
3. Test Queries First
Before running large exports, test with a small subset:
-- Test with LIMIT first
SELECT CAST(id AS CHAR) as id, name
FROM large_table
LIMIT 5;
Output Format Considerations
JSON Output
NULL values in JSON output appear as null
:
{
"data": [
{
"id": "1",
"name": "John",
"phone": null
}
]
}
CSV/TSV Output
NULL values in CSV/TSV appear as empty strings:
id,name,phone
1,John,
2,Jane,555-1234
Debugging Type Issues
Enable Verbose Output
gold_digger --verbose \
--db-url "mysql://..." \
--query "SELECT ..." \
--output debug.json
Check Column Types
Query your database schema first:
DESCRIBE your_table;
-- or
SHOW COLUMNS FROM your_table;
Use Information Schema
SELECT
COLUMN_NAME,
DATA_TYPE,
IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'your_table';
Advanced Type Handling
Custom Formatting
-- Format numbers with specific precision
SELECT
CAST(FORMAT(price, 2) AS CHAR) as price,
CAST(FORMAT(tax_rate, 4) AS CHAR) as tax_rate
FROM products;
Date Formatting
-- Custom date formats
SELECT
CAST(DATE_FORMAT(created_at, '%Y-%m-%d %H:%i:%s') AS CHAR) as created_at,
CAST(DATE_FORMAT(updated_at, '%Y-%m-%d') AS CHAR) as updated_date
FROM records;
When to Contact Support
If you continue experiencing type errors after following these guidelines:
- Provide the exact SQL query causing issues
- Include the table schema (
DESCRIBE table_name
) - Share the complete error message
- Specify which output format you're using
note
The type conversion system is being improved in future versions to handle these cases more gracefully.
Performance Issues
Optimizing Gold Digger performance and memory usage.
Memory Usage
Large Result Sets
Gold Digger loads all results into memory. For large datasets:
- Limit Rows: Use
LIMIT
clauses to reduce result size - Paginate: Process data in smaller chunks
- Filter Early: Use
WHERE
clauses to reduce data volume
Memory Monitoring
Monitor memory usage during execution:
# Linux/macOS
top -p $(pgrep gold_digger)
# Windows
tasklist /fi "imagename eq gold_digger.exe"
Query Optimization
Efficient Queries
- Use indexes on filtered columns
- Avoid
SELECT *
- specify needed columns only - Use appropriate
WHERE
clauses
Example Optimizations
-- Instead of:
SELECT * FROM large_table
-- Use:
SELECT id, name, email FROM large_table WHERE active = 1 LIMIT 1000
Connection Performance
Connection Pooling
Gold Digger uses connection pooling internally, but:
- Minimize connection overhead with efficient queries
- Consider database server connection limits
Network Optimization
- Use local databases when possible
- Optimize network latency for remote connections
- Consider compression for large data transfers
Output Performance
Format Selection
- CSV: Fastest for large datasets
- JSON: More overhead but structured
- TSV: Good balance of speed and readability
File I/O
- Use fast storage (SSD) for output files
- Consider output file location (local vs network)
Troubleshooting Slow Performance
- Profile Queries: Use
EXPLAIN
to analyze query execution - Monitor Resources: Check CPU, memory, and I/O usage
- Database Tuning: Optimize database configuration
- Network Analysis: Check for network bottlenecks
Development Setup
Complete guide for setting up your development environment to contribute to Gold Digger.
Prerequisites
Required Software
- Rust (latest stable via rustup)
- Git for version control
- MySQL or MariaDB for integration testing
- just task runner (recommended)
Platform-Specific Requirements
macOS:
# Install Xcode Command Line Tools
xcode-select --install
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Linux (Ubuntu/Debian):
# Install build dependencies
sudo apt update
sudo apt install build-essential pkg-config libssl-dev git
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Windows:
# Install Visual Studio Build Tools
# Download from: https://visualstudio.microsoft.com/downloads/
# Install Rust
# Download from: https://rustup.rs/
Initial Setup
1. Clone Repository
git clone https://github.com/EvilBit-Labs/gold_digger.git
cd gold_digger
2. Install Development Tools
# Use the justfile for automated setup
just setup
# Or install manually:
rustup component add rustfmt clippy
cargo install cargo-nextest --locked
cargo install cargo-llvm-cov --locked
cargo install cargo-audit --locked
cargo install cargo-deny --locked
3. Set Up Pre-commit Hooks (Recommended)
Gold Digger uses comprehensive pre-commit hooks for code quality:
# Install pre-commit
pip install pre-commit
# Install hooks for this repository
pre-commit install
# Test hooks on all files (optional)
pre-commit run --all-files
Pre-commit Hook Coverage:
- Rust: Formatting (
cargo fmt
), linting (cargo clippy
), security audit (cargo audit
) - YAML/JSON: Formatting with Prettier
- Markdown: Formatting (
mdformat
) with GitHub Flavored Markdown support - Shell Scripts: Validation with ShellCheck
- GitHub Actions: Workflow validation with actionlint
- Commit Messages: Conventional commit format validation
- Documentation: Link checking and build validation
3. Install Documentation Tools
# Install mdBook and plugins for documentation
just docs-install
# Or install manually:
cargo install mdbook mdbook-admonish mdbook-mermaid mdbook-linkcheck mdbook-toc mdbook-open-on-gh mdbook-tabs mdbook-i18n-helpers
4. Verify Installation
# Build the project
cargo build
# Run tests
cargo test
# Check code quality
just ci-check
# Test pre-commit hooks (if installed)
pre-commit run --all-files
Development Tools
Essential Tools
Tool | Purpose | Installation |
---|---|---|
cargo-nextest | Fast parallel test runner | cargo install cargo-nextest |
cargo-llvm-cov | Code coverage analysis (cross-platform) | cargo install cargo-llvm-cov |
cargo-audit | Security vulnerability scanning | cargo install cargo-audit |
cargo-deny | License and security policy enforcement | cargo install cargo-deny |
just | Task runner (like make) | cargo install just |
Optional Tools
Tool | Purpose | Installation |
---|---|---|
cargo-watch | Auto-rebuild on file changes | cargo install cargo-watch |
cargo-outdated | Check for outdated dependencies | cargo install cargo-outdated |
act | Run GitHub Actions locally | Installation guide |
Project Structure
Source Code Organization
src/
โโโ main.rs # CLI entry point, env handling, format dispatch
โโโ lib.rs # Public API, shared utilities (rows_to_strings)
โโโ cli.rs # Clap CLI definitions and configuration
โโโ csv.rs # CSV output format (RFC4180, QuoteStyle::Necessary)
โโโ json.rs # JSON output format ({"data": [...]} with BTreeMap)
โโโ tab.rs # TSV output format (QuoteStyle::Necessary)
โโโ tls.rs # TLS/SSL configuration utilities
โโโ exit.rs # Exit code definitions and utilities
Configuration Files
โโโ Cargo.toml # Package configuration and dependencies
โโโ Cargo.lock # Dependency lock file
โโโ justfile # Task runner recipes
โโโ rustfmt.toml # Code formatting configuration
โโโ deny.toml # Security and license policy
โโโ rust-toolchain.toml # Rust version specification
โโโ .pre-commit-config.yaml # Pre-commit hooks
โโโ .editorconfig # Editor configuration
Documentation Structure
docs/
โโโ book.toml # mdBook configuration
โโโ src/ # Documentation source
โ โโโ SUMMARY.md # Table of contents
โ โโโ introduction.md # Landing page
โ โโโ installation/ # Installation guides
โ โโโ usage/ # Usage documentation
โ โโโ security/ # Security considerations
โ โโโ development/ # Developer guides
โ โโโ troubleshooting/ # Common issues
โโโ book/ # Generated output (gitignored)
Development Workflow
1. Code Quality Checks
# Format code (includes pre-commit hooks)
just format
# Check formatting
just fmt-check
# Run linter
just lint
# Run all quality checks
just ci-check
# Run pre-commit hooks manually
pre-commit run --all-files
2. Security Scanning
# Run security audit
just audit
# Check licenses and security policies
just deny
# Comprehensive security scanning (audit + deny + grype)
just security
# Generate Software Bill of Materials (SBOM)
just sbom
# Cargo-deny configuration validation
just deny
3. Cargo-deny Configuration
Gold Digger uses a dual cargo-deny configuration to balance local development flexibility with CI security enforcement:
Local Development (Tolerant)
- File:
deny.toml
(default) - Yanked crates:
warn
(shows warnings but doesn't fail) - Purpose: Allows local development to continue even with yanked dependencies
CI Environment (Strict)
- File:
deny.ci.toml
- Yanked crates:
error
(fails pipeline on yanked crates) - Purpose: Enforces strict security policies in CI
Usage
# Local development (tolerant)
just deny
# CI enforcement (strict)
just deny-ci
# CI workflow automatically uses strict configuration
# See .github/workflows/security.yml
Configuration Files
deny.toml
- Local development configuration with tolerant settingsdeny.ci.toml
- CI-specific configuration with strict enforcement- Both files maintain the same license and security policies, differing only in yanked crate handling
3. Testing
# Run tests (standard)
just test
# Run tests (fast parallel)
just test-nextest
# Run with coverage
just coverage
# Run specific test
cargo test test_name
# Integration testing (requires Docker)
just test-integration
# All tests including integration tests
just test-all
4. Building
# Debug build
just build
# Release build
just build-release
# Build all variants
just build-all
4. Documentation
# Build documentation
just docs-build
# Serve documentation locally
just docs-serve
# Check documentation links
just docs-check
# Generate rustdoc only
just docs
Feature Development
Feature Flag System
Gold Digger uses Cargo features for conditional compilation:
# Default features
default = ["json", "csv", "additional_mysql_types", "verbose"]
# Individual features
json = [] # Enable JSON output format
csv = [] # Enable CSV output format
additional_mysql_types = ["mysql_common?/bigdecimal"]
verbose = [] # Enable verbose logging
# TLS is always enabled with rustls - no feature flags needed
Testing Feature Combinations
# Test default features
cargo test
# Test minimal features
cargo test --no-default-features --features "csv json"
# Test with all features
cargo test --release
# Test without optional features
cargo test --no-default-features
Database Setup for Testing
Integration Test Framework
Gold Digger uses testcontainers for automated database testing, eliminating the need for manual database setup.
Prerequisites:
- Docker daemon must be installed and running on the host system
- Minimum Docker version 20.10+ (testcontainers requirement)
- Docker Compose 2.0+ recommended for enhanced container orchestration
- See Docker installation documentation for setup instructions
Note: No local database installation is required - testcontainers automatically spins up isolated database containers for testing.
# Integration tests automatically manage database containers
just test-integration
# No manual database setup required - testcontainers handles:
# - MySQL 8.0/8.1 container startup and configuration
# - MariaDB 10.11+ container with TLS certificates
# - Test schema creation and data seeding
# - Container cleanup after tests complete
Manual Database Setup (Optional)
For local development and debugging, you can set up databases manually:
Local MySQL/MariaDB
# Install MySQL (macOS)
brew install mysql
brew services start mysql
# Install MariaDB (Ubuntu)
sudo apt install mariadb-server
sudo systemctl start mariadb
# Create test database
mysql -u root -p
CREATE DATABASE gold_digger_test;
CREATE USER 'test_user'@'localhost' IDENTIFIED BY 'test_password';
GRANT ALL PRIVILEGES ON gold_digger_test.* TO 'test_user'@'localhost';
Docker Setup
# Start MySQL container for manual testing
docker run --name gold-digger-mysql \
-e MYSQL_ROOT_PASSWORD=rootpass \
-e MYSQL_DATABASE=gold_digger_test \
-e MYSQL_USER=test_user \
-e MYSQL_PASSWORD=test_password \
-p 3306:3306 \
-d mysql:8.0
# Test connection
export DATABASE_URL="mysql://test_user:test_password@localhost:3306/gold_digger_test"
export DATABASE_QUERY="SELECT 1 as test"
export OUTPUT_FILE="test.json"
cargo run
Integration Test Database Features
The comprehensive integration testing framework includes:
- Multi-Database Support: Both MySQL and MariaDB containers
- TLS Configuration: Both secure and standard connection testing
- Comprehensive Schema: All MySQL/MariaDB data types with edge cases
- Test Data Seeding: Unicode content, NULL values, large datasets
- Health Checks: Container readiness validation with CI-compatible timeouts
- Automatic Cleanup: No manual container management required
Code Style Guidelines
Rust Style
- Formatting: Use
rustfmt
with 100-character line limit - Linting: Zero tolerance for clippy warnings
- Error Handling: Use
anyhow::Result<T>
for fallible functions - Documentation: Document all public APIs with
///
comments
Module Organization
// Standard library imports
use std::{env, fs::File};
// External crate imports
use anyhow::Result;
use mysql::Pool;
// Local module imports
use gold_digger::rows_to_strings;
Safe Patterns
use anyhow::Result;
use mysql::{from_value_opt, Value};
fn example_function() -> Result<()> {
let database_value = Value::NULL; // example value
// โ
Safe database value conversion
let converted_value = match database_value {
Value::NULL => "".to_string(),
val => from_value_opt::<String>(val.clone()).unwrap_or_else(|_| format!("{:?}", val)),
};
// โ
Feature-gated compilation
#[cfg(feature = "verbose")]
eprintln!("Debug information: {}", converted_value);
Ok(())
}
// โ
Error propagation
fn process_data() -> anyhow::Result<()> {
let data = fetch_data()?;
transform_data(data)?;
Ok(())
}
fn fetch_data() -> anyhow::Result<String> {
Ok("data".to_string())
}
fn transform_data(_data: String) -> anyhow::Result<()> {
Ok(())
}
Debugging
Environment Variables
# Enable Rust backtrace
export RUST_BACKTRACE=1
# Enable verbose logging
export RUST_LOG=debug
# Test with safe example
just run-safe
Common Issues
Build failures:
- Check Rust version:
rustc --version
- Update toolchain:
rustup update
- Clean build:
cargo clean && cargo build
Test failures:
- Check database connection
- Verify environment variables
- Run single-threaded:
cargo test -- --test-threads=1
Clippy warnings:
- Fix automatically:
just fix
- Check specific lint:
cargo clippy -- -W clippy::lint_name
Contributing Guidelines
Before Submitting
- Run quality checks:
just ci-check
- Add tests for new functionality
- Update documentation if needed
- Follow commit conventions (Conventional Commits)
- Test feature combinations if adding features
- Ensure pre-commit hooks pass:
pre-commit run --all-files
Pull Request Process
- Fork the repository
- Create feature branch:
git checkout -b feature/description
- Make changes with tests
- Run
just ci-check
- Commit with conventional format
- Push and create pull request
Commit Message Format
type(scope): description
feat(csv): add support for custom delimiters
fix(json): handle null values in nested objects
docs(api): update configuration examples
test(integration): add TLS connection tests
Local GitHub Actions Testing
Setup act
# Install act
just act-setup
# Run CI workflow locally (dry-run)
just act-ci-dry
# Run full CI workflow
just act-ci
Environment Configuration
Create a .env.local
file in the project root to store tokens and secrets for local testing:
# .env.local
GITHUB_TOKEN=ghp_your_github_token_here
CODECOV_TOKEN=your_codecov_token_here
DATABASE_URL=mysql://user:pass@localhost:3306/testdb
Important: Never commit .env.local
to version control. It's already added to .gitignore
.
Running Workflows with Environment Files
# Test CI workflow with environment file
act --env-file .env.local
# Test specific job
act -j validate --env-file .env.local
# Test with secrets
act --env-file .env.local -s GITHUB_TOKEN=ghp_xxx -s CODECOV_TOKEN=xxx
# Dry run (simulation only)
act --dryrun --env-file .env.local
# Test release workflow
act workflow_dispatch --env-file .env.local -s GITHUB_TOKEN=ghp_xxx --input tag=v0.test.1
Workflow Testing
# Test specific job
just act-job validate
# Test release workflow
just act-release-dry v1.0.0
# Test cargo-dist workflow
just dist-plan
# Build cargo-dist artifacts locally
just dist-build
# Clean up act containers
just act-clean
Performance Profiling
Benchmarking
# Run benchmarks (when available)
just bench
# Profile release build
just profile
Memory Analysis
# Build with debug info
cargo build --release --profile release-with-debug
# Use valgrind (Linux)
valgrind --tool=massif target/release/gold_digger
# Use Instruments (macOS)
instruments -t "Time Profiler" target/release/gold_digger
Getting Help
Resources
- Documentation: This guide and API docs
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Common Commands Reference
# Quick development check
just check
# Full CI reproduction
just ci-check
# Security scanning
just security
# Generate SBOM
just sbom
# Coverage analysis
just cover
# Release preparation
just release-check
# Show all available commands
just help
Contributing
Guidelines for contributing to Gold Digger.
Getting Started
-
Fork the repository
-
Create a feature branch
-
Set up development environment:
just setup pre-commit install # Install pre-commit hooks
-
Make your changes
-
Add tests for new functionality
-
Ensure all quality checks pass:
just ci-check pre-commit run --all-files
-
Submit a pull request
Code Standards
Formatting
- Use
cargo fmt
for consistent formatting - 100-character line limit
- Follow Rust naming conventions
Quality Gates
All code must pass:
cargo fmt --check
cargo clippy -- -D warnings
cargo test
Pre-commit Hooks
Gold Digger uses comprehensive pre-commit hooks that automatically run on each commit:
- Rust: Code formatting, linting, and security auditing
- YAML/JSON: Formatting with Prettier
- Markdown: Formatting with mdformat (GitHub Flavored Markdown)
- Shell Scripts: Validation with ShellCheck
- GitHub Actions: Workflow validation with actionlint
- Commit Messages: Conventional commit format validation
- Documentation: Link checking and build validation
Install hooks: pre-commit install
Run manually: pre-commit run --all-files
Commit Messages
Use Conventional Commits:
feat: add new output format
fix: handle NULL values correctly
docs: update installation guide
Development Guidelines
Error Handling
- Use
anyhow::Result<T>
for fallible functions - Provide meaningful error messages
- Never panic in production code paths
Security
- Never log credentials or sensitive data
- Use secure defaults for TLS/SSL
- Validate all external input
Testing
- Write unit tests for new functions
- Add integration tests for CLI features using the comprehensive testing framework
- Test against both MySQL and MariaDB databases when applicable
- Validate output format compliance (CSV, JSON, TSV)
- Include error scenario testing with proper exit codes
- Maintain test coverage above 80%
Pull Request Process
- Description: Clearly describe changes and motivation
- Quality Checks: Ensure all pre-commit hooks and CI checks pass
- Testing: Include test results and coverage information
- Documentation: Update docs for user-facing changes
- Review: Address feedback promptly and professionally
Before Submitting
Run the complete quality check suite:
# Run all CI-equivalent checks
just ci-check
# Verify pre-commit hooks pass
pre-commit run --all-files
# Test multiple feature combinations
just build-all
# Run integration tests (requires Docker)
just test-integration
# Test release workflow (optional)
just release-dry
Code Review
Reviews focus on:
- Correctness and safety
- Performance implications
- Security considerations
- Code clarity and maintainability
Integration Testing Framework
Gold Digger features a comprehensive integration testing framework that validates functionality against real MySQL and MariaDB databases using testcontainers for automated container management.
Overview
The integration testing enhancement provides:
- Multi-Database Support: Both MySQL (8.0+) and MariaDB (10.11+) testing
- TLS/Non-TLS Testing: Secure and standard connection validation
- Comprehensive Data Type Coverage: All MySQL/MariaDB data types including edge cases
- Output Format Validation: CSV (RFC4180), JSON, and TSV format compliance testing
- Error Scenario Testing: Connection failures, SQL errors, and file I/O validation
- Performance Testing: Large dataset handling and memory usage validation
- Security Testing: Credential protection and TLS certificate validation
Test Architecture
Current Implementation Status
The integration testing framework is actively under development with the following components implemented:
โ Completed Components:
- Core integration test infrastructure with MySQL/MariaDB support
- TLS and non-TLS database variants (
TestDatabaseTls
,TestDatabasePlain
) - Container management with health checks and CI compatibility
- TLS certificate management with ephemeral certificate generation
- Comprehensive test database schema and seeding functionality
- Cross-platform support (Linux and macOS)
๐ง In Progress:
- Data type validation tests
- Output format validation framework
- Error scenario testing
- CLI integration tests
- Performance testing
- Security validation tests
Test Module Structure
tests/
โโโ integration/
โ โโโ mod.rs # Common test utilities and setup functions (โ
Implemented)
โ โโโ common.rs # Shared CLI execution and output parsing utilities (โ
Implemented)
โ โโโ containers.rs # MySQL/MariaDB container management with health checks (โ
Implemented)
โ โโโ data_types.rs # Comprehensive data type validation tests (๐ง Planned)
โ โโโ output_formats.rs # Format-specific validators (CSV, JSON, TSV) (๐ง Planned)
โ โโโ error_scenarios.rs # Error handling and exit code validation (๐ง Planned)
โ โโโ cli_integration.rs # CLI flag precedence and configuration tests (๐ง Planned)
โ โโโ performance.rs # Large dataset and memory usage tests (๐ง Planned)
โ โโโ security.rs # Credential protection and TLS security tests (๐ง Planned)
โโโ fixtures/
โ โโโ schema.sql # Comprehensive test database schema (โ
Implemented)
โ โโโ seed_data.sql # Test data covering all data types and edge cases (โ
Implemented)
โ โโโ tls/ # TLS certificates for secure connection testing (โ
Implemented)
โโโ test_support/ # Shared testing utilities (โ
Implemented)
โ โโโ cli.rs # CLI execution helpers
โ โโโ containers.rs # Container management utilities
โ โโโ fixtures.rs # Test data and schema utilities
โ โโโ parsing.rs # Output parsing and validation
โโโ tls_variants_test.rs # TLS and non-TLS database variant testing (โ
Implemented)
โโโ tls_integration.rs # TLS connection and certificate validation (โ
Implemented)
โโโ integration_tests.rs # Main integration test entry point (โ
Implemented)
โโโ database_seeding_test.rs # Database schema and data seeding tests (โ
Implemented)
โโโ container_setup_test.rs # Container lifecycle and health check tests (โ
Implemented)
Container Management
The framework uses testcontainers-modules for automated database lifecycle management with comprehensive TLS support:
use testcontainers_modules::{mariadb::Mariadb, mysql::Mysql};
// Base database types
pub enum TestDatabase {
MySQL { tls_enabled: bool },
MariaDB { tls_enabled: bool },
}
// TLS-enabled database variants
pub enum TestDatabaseTls {
MySQL { tls_config: TlsContainerConfig },
MariaDB { tls_config: TlsContainerConfig },
}
// Plain (non-TLS) database variants
pub enum TestDatabasePlain {
MySQL,
MariaDB,
}
// Container management with TLS support
pub struct DatabaseContainer {
db_type: TestDatabase,
container: Box<dyn ContainerInstance>,
connection_url: String,
temp_dir: TempDir,
tls_config: ContainerTlsConfig,
}
impl DatabaseContainer {
// Create TLS-enabled container
pub fn new_tls(db_type: TestDatabaseTls) -> anyhow::Result<Self> {
// Automatic container startup with health checks
// TLS configuration with ephemeral certificates
// Connection URL generation with SSL parameters
}
// Create plain (non-TLS) container
pub fn new_plain(db_type: TestDatabasePlain) -> anyhow::Result<Self> {
// Standard container setup without TLS
// Connection URL generation for unencrypted connections
}
// Validate TLS connections
pub fn validate_tls_connection(&self) -> anyhow::Result<TlsValidationResult> {
// TLS handshake validation
// Certificate verification testing
}
// Validate plain connections
pub fn validate_plain_connection(&self) -> anyhow::Result<PlainValidationResult> {
// Standard connection testing
// Non-TLS connection validation
}
}
TLS Configuration
The framework provides comprehensive TLS configuration options:
pub struct TlsContainerConfig {
pub require_secure_transport: bool,
pub min_tls_version: String, // "TLSv1.2" or "TLSv1.3"
pub cipher_suites: Vec<String>,
pub use_ephemeral_certs: bool, // Generate certificates per test run
pub ca_cert_path: Option<PathBuf>,
pub server_cert_path: Option<PathBuf>,
pub server_key_path: Option<PathBuf>,
}
impl TlsContainerConfig {
// Secure defaults with TLS 1.2+
pub fn new_secure() -> Self;
// Strict security with TLS 1.3 only
pub fn with_strict_security(self) -> anyhow::Result<Self>;
// Custom certificate paths
pub fn with_custom_certs(ca_cert: P, server_cert: P, server_key: P) -> Self;
}
Test Categories
1. Data Type Validation Tests
Comprehensive testing of MySQL/MariaDB data type handling:
- String Types: VARCHAR, TEXT, CHAR with Unicode and special characters
- Numeric Types: INTEGER, BIGINT, DECIMAL, FLOAT, DOUBLE with precision testing
- Temporal Types: DATE, DATETIME, TIMESTAMP, TIME, YEAR with timezone handling
- Binary Types: BINARY, VARBINARY, BLOB with large content testing
- Special Types: JSON, ENUM, SET, BOOLEAN with edge case validation
- NULL Handling: NULL value processing across all data types
2. Output Format Validation Tests
Format-specific compliance and consistency testing:
- CSV Validation: RFC4180 compliance, quoting behavior, header validation
- JSON Validation: Structure validation, deterministic ordering, NULL handling
- TSV Validation: Tab-delimited format, special character handling
- Cross-Format Consistency: Identical data representation across formats
3. Database Integration Tests
Real database connection and query execution validation:
- Container Management: MySQL/MariaDB container lifecycle with health checks
- Connection Testing: Both TLS and non-TLS connection establishment
- Query Execution: SQL query processing with various data scenarios
- Transaction Handling: Database transaction support and rollback testing
4. Error Scenario Tests
Comprehensive error handling and exit code validation:
- Connection Errors: Authentication failures, network timeouts, unreachable hosts
- SQL Errors: Syntax errors, permission denied, non-existent tables/columns
- File I/O Errors: Permission denied, disk space exhaustion, invalid paths
- Exit Code Validation: Proper exit code mapping for all error scenarios
5. CLI Integration Tests
Command-line interface and configuration validation:
- Flag Precedence: CLI flags override environment variables
- Mutual Exclusion: Conflicting option detection and error handling
- Configuration Resolution: Format detection and parameter validation
- Help and Completion: CLI help text and shell completion generation
6. Performance Tests
Large dataset handling and resource usage validation:
- Large Result Sets: Processing 1000+ row datasets without memory issues
- Wide Tables: Handling 20+ column tables with various data types
- Large Content: Processing 1MB+ text fields and binary data
- Memory Usage: Validation of reasonable memory bounds for result set size
7. Security Tests
Credential protection and TLS validation:
- Credential Redaction: DATABASE_URL masking in logs and error messages
- TLS Connection Testing: Certificate validation and secure connection establishment
- Connection String Security: Special character handling in passwords
- Security Warning Display: Appropriate warnings for insecure TLS modes
Running Integration Tests
Prerequisites
- Docker: Required for container-based testing
- Disk Space: ~500MB for Docker images and test artifacts
- Network Access: For pulling Docker images (first run only)
- Platform Support: Linux and macOS (Windows support planned)
Test Execution
# Run comprehensive integration tests
cargo test --features integration_tests -- --ignored
# Run all tests including integration tests
cargo test --features integration_tests -- --include-ignored
# Run specific integration test categories
cargo test --features integration_tests --test tls_variants_test -- --ignored
cargo test --features integration_tests --test tls_integration -- --ignored
cargo test --features integration_tests --test database_seeding_test -- --ignored
cargo test --features integration_tests --test container_setup_test -- --ignored
# Run TLS variant tests specifically
cargo test --features integration_tests test_mysql_tls_variant --test tls_variants_test -- --ignored
cargo test --features integration_tests test_mariadb_plain_variant --test tls_variants_test -- --ignored
# Using justfile commands
just test-integration # Run only integration tests
just test-all # Run all tests including integration tests
Current Test Categories
โ Implemented Tests:
-
TLS Variant Tests (
tests/tls_variants_test.rs
):- MySQL and MariaDB TLS-enabled containers
- Plain (non-TLS) container configurations
- Connection URL generation with SSL parameters
- TLS configuration validation
- Database variant conversions
-
TLS Integration Tests (
tests/tls_integration.rs
):- TLS connection establishment and validation
- Certificate verification testing
- SSL handshake validation
- TLS error handling
-
Database Seeding Tests (
tests/database_seeding_test.rs
):- Schema creation and validation
- Test data population with comprehensive data types
- NULL value handling across all data types
- Unicode and special character testing
-
Container Setup Tests (
tests/container_setup_test.rs
):- Container lifecycle management
- Health check validation
- Resource cleanup testing
- CI environment compatibility
๐ง Planned Tests:
- Data Type Validation Tests: Comprehensive MySQL/MariaDB data type handling
- Output Format Validation Tests: CSV, JSON, and TSV format compliance
- Error Scenario Tests: Connection failures, SQL errors, and file I/O issues
- CLI Integration Tests: Command-line flag precedence and configuration
- Performance Tests: Large dataset handling and memory usage validation
- Security Tests: Credential protection and TLS certificate validation
CI Integration
The integration testing framework is designed for CI environments:
- GitHub Actions: Docker service enabled with appropriate timeouts
- Resource Limits: Tests designed for shared CI resources
- Container Cleanup: Automatic cleanup prevents resource leaks
- Retry Logic: Configurable timeouts for container startup in CI
Test Matrix
GitHub Actions runs integration tests across multiple configurations:
strategy:
matrix:
database: [mysql-8.0, mysql-8.1, mariadb-10.11]
connection: [tls, non-tls]
features: [default, minimal]
Test Data and Fixtures
Database Schema
The test schema (tests/fixtures/schema.sql
) includes comprehensive data type coverage:
-- String types with various lengths and encodings
CREATE TABLE test_strings (
id INT PRIMARY KEY AUTO_INCREMENT,
varchar_col VARCHAR(255),
text_col TEXT,
char_col CHAR(10),
unicode_col VARCHAR(255) CHARACTER SET utf8mb4
);
-- Numeric types with precision and scale variations
CREATE TABLE test_numbers (
id INT PRIMARY KEY AUTO_INCREMENT,
int_col INT,
bigint_col BIGINT,
decimal_col DECIMAL(10,2),
float_col FLOAT,
double_col DOUBLE
);
-- Temporal types with timezone considerations
CREATE TABLE test_temporal (
id INT PRIMARY KEY AUTO_INCREMENT,
date_col DATE,
datetime_col DATETIME,
timestamp_col TIMESTAMP,
time_col TIME,
year_col YEAR
);
Test Data Seeding
The seed data (tests/fixtures/seed_data.sql
) includes:
- Normal Values: Standard data for each type
- Edge Cases: Boundary values, empty strings, zero values
- NULL Values: NULL handling across all columns
- Unicode Content: International characters and emojis
- Large Content: 1MB+ text fields for performance testing
TLS Certificates
Test TLS certificates (tests/fixtures/tls/
) provide:
- Self-Signed CA: Root certificate authority for testing
- Server Certificates: Valid certificates for container hostnames
- Invalid Certificates: Malformed certificates for error testing
- Expired Certificates: Time-based validation testing
Development Workflow
Adding New Integration Tests
- Identify Test Category: Determine which test module the new test belongs to
- Create Test Function: Add test function with appropriate attributes
- Use Test Utilities: Leverage common utilities for container management
- Validate Output: Use format-specific validators for output verification
- Handle Cleanup: Ensure proper resource cleanup in test teardown
Test Utilities
Common utilities provide consistent testing patterns:
use anyhow::Result;
fn test_example() -> Result<()> {
// Container management
let db = TestDatabase::new(DatabaseType::MySQL, true)?;
let connection_url = db.connection_url();
// CLI execution
let result = GoldDiggerCli::new()
.db_url(&connection_url)
.query("SELECT * FROM test_table")
.output_file(&temp_file)
.execute()?;
// Output validation
let validator = CsvValidator::new();
validator.validate_file(&temp_file)?;
validator.validate_row_count(expected_rows)?;
Ok(())
}
Debugging Integration Tests
# Run single integration test with verbose output
RUST_LOG=debug cargo test --features integration_tests \
test_mysql_data_type_conversion -- --ignored --nocapture
# Keep containers running for manual inspection
TESTCONTAINERS_RYUK_DISABLED=true cargo test --features integration_tests \
test_container_setup -- --ignored
# Generate test coverage for integration tests
cargo llvm-cov --features integration_tests --html -- --ignored
Best Practices
Test Design
- Isolation: Each test should be independent and not affect others
- Deterministic: Tests should produce consistent results across runs
- Fast Feedback: Unit tests for quick feedback, integration tests for comprehensive validation
- Resource Cleanup: Always clean up containers and temporary files
Container Management
- Health Checks: Wait for container readiness before running tests
- Timeout Handling: Configure appropriate timeouts for CI environments
- Resource Limits: Set memory and CPU limits for containers
- Network Isolation: Use container networks to avoid port conflicts
Data Validation
- Comprehensive Coverage: Test all data types and edge cases
- Format Compliance: Validate output format standards (RFC4180 for CSV)
- Cross-Format Consistency: Ensure identical data across output formats
- Error Scenarios: Test both success and failure paths
This integration testing framework ensures Gold Digger's reliability and correctness across diverse database environments and usage scenarios.
TLS and Non-TLS Database Variants
Gold Digger's integration testing framework provides comprehensive support for testing both TLS-enabled and plain (non-TLS) database connections through specialized database variants.
Overview
The TLS variants system provides:
- TestDatabaseTls: TLS-enabled database containers with SSL certificate mounting and MySQL TLS settings
- TestDatabasePlain: Standard unencrypted database containers for basic connection testing
- TlsContainerConfig: Configuration for TLS settings including certificate management and security policies
- Connection URL generation: Helper methods to generate appropriate connection URLs for each configuration type
- Connection validation: Test utilities to validate TLS vs non-TLS connection establishment
Database Variants
TLS-Enabled Variants
use integration::{TestDatabaseTls, TlsContainerConfig, containers::DatabaseContainer};
// MySQL with secure TLS defaults
let mysql_tls = TestDatabaseTls::mysql();
let mysql_container = DatabaseContainer::new_tls(mysql_tls)?;
// MariaDB with custom TLS configuration
let tls_config = TlsContainerConfig::new_secure().with_strict_security()?;
let mariadb_tls = TestDatabaseTls::mariadb_with_config(tls_config);
let mariadb_container = DatabaseContainer::new_tls(mariadb_tls)?;
Plain (Non-TLS) Variants
use integration::{TestDatabasePlain, containers::DatabaseContainer};
// MySQL without TLS
let mysql_plain = TestDatabasePlain::mysql();
let mysql_container = DatabaseContainer::new_plain(mysql_plain)?;
// MariaDB without TLS
let mariadb_plain = TestDatabasePlain::mariadb();
let mariadb_container = DatabaseContainer::new_plain(mariadb_plain)?;
TLS Configuration Options
Secure Defaults
The framework provides secure TLS defaults suitable for most testing scenarios:
let tls_config = TlsContainerConfig::new_secure();
// Configuration includes:
// - require_secure_transport: true
// - min_tls_version: "TLSv1.2"
// - use_ephemeral_certs: true
// - Strong cipher suites (ECDHE-RSA-AES256-GCM-SHA384, etc.)
Strict Security
For testing with enhanced security requirements:
let tls_config = TlsContainerConfig::new_secure().with_strict_security()?;
// Enhanced configuration includes:
// - min_tls_version: "TLSv1.3"
// - TLS 1.3 cipher suites only (TLS_AES_256_GCM_SHA384, etc.)
// - Stricter security policies
Custom Certificates
For testing with specific certificate requirements:
let tls_config = TlsContainerConfig::with_custom_certs(
"/path/to/ca.pem",
"/path/to/cert.pem",
"/path/to/key.pem"
);
Connection URL Generation
The container system provides flexible connection URL generation for different SSL modes:
// Basic connection URL (uses container defaults)
let url = container.connection_url();
// TLS-enabled URL (for TLS containers)
let tls_url = container.tls_connection_url()?;
// Explicitly disabled SSL URL
let plain_url = container.plain_connection_url()?;
// Custom SSL mode
let verify_ca_url = container.connection_url_with_ssl_mode("VERIFY_CA")?;
let required_url = container.connection_url_with_ssl_mode("REQUIRED")?;
let disabled_url = container.connection_url_with_ssl_mode("DISABLED")?;
Supported SSL Modes
- DISABLED: No SSL encryption
- PREFERRED: Use SSL if available, fallback to plain
- REQUIRED: Require SSL connection
- VERIFY_CA: Require SSL and verify CA certificate
- VERIFY_IDENTITY: Require SSL and verify server identity
Connection Validation
The framework provides utilities to validate connection behavior:
// For TLS containers
let tls_validation = container.validate_tls_connection()?;
assert!(tls_validation.tls_connection_success);
// For plain containers
let plain_validation = container.validate_plain_connection()?;
assert!(plain_validation.plain_connection_success);
Validation Results
pub struct TlsValidationResult {
pub tls_connection_success: bool,
pub tls_error: Option<String>,
}
pub struct PlainValidationResult {
pub plain_connection_success: bool,
pub plain_error: Option<String>,
}
Database Type Conversions
The variants can be converted to the base TestDatabase
enum for compatibility with existing code:
// TLS variant conversion
let tls_db = TestDatabaseTls::mysql();
let base_db = tls_db.to_test_database(); // TestDatabase::MySQL { tls_enabled: true }
// Plain variant conversion
let plain_db = TestDatabasePlain::mysql();
let base_db = plain_db.to_test_database(); // TestDatabase::MySQL { tls_enabled: false }
Running TLS Variant Tests
The TLS variants are tested in tests/tls_variants_test.rs
:
# Run all TLS variant tests (requires Docker and integration_tests feature)
cargo test --test tls_variants_test --features integration_tests
# Run specific variant tests
cargo test test_mysql_tls_variant --test tls_variants_test --features integration_tests
cargo test test_mariadb_plain_variant --test tls_variants_test --features integration_tests
cargo test test_connection_url_generation --test tls_variants_test --features integration_tests
# Run tests that don't require Docker (no feature flag needed)
cargo test test_database_variant_conversions --test tls_variants_test
cargo test test_tls_container_config_methods --test tls_variants_test
# Using justfile commands
just test-integration # Run integration tests with feature flag
just test-all # Run all tests including integration tests
Implementation Details
Current Capabilities
- TLS Container Creation: Basic TLS container setup with secure defaults
- Certificate Management: Placeholder certificate system (ephemeral certificates planned)
- Connection URL Generation: SSL parameter injection for different connection modes
- Connection Validation: Basic TLS and plain connection testing
- Configuration Validation: TLS configuration parameter validation
Current Limitations
-
Certificate Generation: Currently uses placeholder certificates. Full integration with ephemeral certificate generation is planned.
-
Container TLS Configuration: Basic TLS container setup is implemented. Full SSL certificate mounting and MySQL TLS settings configuration will be enhanced in subsequent development phases.
-
Connection Validation: Basic connection testing is implemented. More comprehensive TLS handshake validation will be added as the TLS infrastructure matures.
Future Enhancements
- Dynamic Certificate Generation: Integration with
rcgen
crate for ephemeral certificate generation - Full SSL Certificate Mounting: Complete SSL certificate mounting into containers
- MySQL/MariaDB TLS Configuration: Comprehensive TLS configuration (require_secure_transport, cipher suites, etc.)
- Advanced TLS Validation: Comprehensive TLS handshake and certificate validation testing
- Performance Comparison: Performance testing with TLS vs non-TLS connections
Best Practices
Test Design
- Use Appropriate Variants: Choose TLS variants for security testing, plain variants for basic functionality
- Validate Connections: Always validate connection establishment before running tests
- Clean Resource Management: Ensure proper container cleanup after tests
- Environment Detection: Use CI-aware timeouts and resource limits
TLS Testing
- Certificate Validation: Test both valid and invalid certificate scenarios
- SSL Mode Testing: Validate different SSL modes and their behavior
- Error Handling: Test TLS connection failures and error messages
- Security Policies: Validate TLS version and cipher suite enforcement
Performance Considerations
- Container Startup: TLS containers may take longer to start due to certificate setup
- Connection Overhead: TLS connections have additional handshake overhead
- Resource Usage: TLS containers may use more memory and CPU resources
This TLS variants system provides a solid foundation for comprehensive database connection testing that can be enhanced as the integration test infrastructure continues to mature.
Architecture
Gold Digger's architecture and design decisions.
High-Level Architecture
graph TD A[CLI Input] --> B[Configuration Resolution] B --> C[Database Connection] C --> D[Query Execution] D --> E[Result Processing] E --> F[Format Selection] F --> G[Output Generation]
Core Components
CLI Layer (main.rs
, cli.rs
)
- Argument parsing with clap
- Environment variable fallback
- Configuration validation
Database Layer (lib.rs
)
- MySQL connection management
- Query execution
- Result set processing
Output Layer (csv.rs
, json.rs
, tab.rs
)
- Format-specific serialization
- Consistent interface design
- Type-safe conversions
Design Principles
Security First
- Automatic credential redaction
- TLS/SSL by default
- Input validation and sanitization
Type Safety
- Rust's ownership system prevents memory errors
- Explicit NULL handling
- Safe type conversions
Performance
- Connection pooling
- Efficient serialization
- Minimal memory allocations
Key Design Decisions
Memory Model
- Current: Fully materialized results
- Rationale: Simplicity and reliability
- Future: Streaming support for large datasets
Error Handling
- Pattern:
anyhow::Result<T>
throughout - Benefits: Rich error context and propagation
- Trade-offs: Slightly larger binary size
Configuration Precedence
- Order: CLI flags > Environment variables
- Rationale: Explicit overrides implicit
- Benefits: Predictable behavior in automation
Module Dependencies
graph TD main --> cli main --> lib lib --> csv lib --> json lib --> tab cli --> clap lib --> mysql csv --> serde json --> serde_json
Future Architecture
Planned Improvements
- Streaming result processing
- Plugin system for custom formats
- Configuration file support
- Async/await for better concurrency
API Reference
Links to detailed API documentation and developer resources.
Rustdoc Documentation
The complete API documentation is available in the rustdoc section of this site.
Public API Overview
Core Functions
rows_to_strings()
- Convert database rows to string vectorsget_extension_from_filename()
- Extract file extensions for format detection
Output Modules
csv::write()
- CSV output generationjson::write()
- JSON output generationtab::write()
- TSV output generation
CLI Interface
cli::Cli
- Command-line argument structurecli::Commands
- Available subcommands
Usage Examples
Basic Library Usage
use gold_digger::{csv, rows_to_strings};
use mysql::{Pool, Row};
use std::fs::File;
fn example() -> anyhow::Result<()> {
// Convert database rows and write CSV
let rows: Vec<Row> = vec![]; // query results would go here
let string_rows = rows_to_strings(rows)?;
let output = File::create("output.csv")?;
csv::write(string_rows, output)?;
Ok(())
}
Custom Format Implementation
use anyhow::Result;
use std::io::Write;
pub fn write<W: Write>(rows: Vec<Vec<String>>, mut output: W) -> Result<()> {
for row in rows {
writeln!(output, "{}", row.join("|"))?;
}
Ok(())
}
Type Definitions
Key types used throughout the codebase:
Vec<Vec<String>>
- Standard row format for output modulesanyhow::Result<T>
- Error handling patternmysql::Row
- Database result row type
Error Handling
All public functions return anyhow::Result<T>
for consistent error handling:
use anyhow::Result;
fn example_function() -> Result<()> {
// Function implementation
Ok(())
}
Feature Flags
Conditional compilation based on Cargo features:
#[cfg(feature = "csv")]
pub mod csv;
#[cfg(feature = "json")]
pub mod json;
#[cfg(feature = "verbose")]
println!("Debug information");
Release Runbook
This document provides a step-by-step guide for creating releases using the cargo-dist automated release workflow.
Overview
Gold Digger uses cargo-dist for automated cross-platform releases. The release process is triggered by pushing a Git tag and automatically handles:
- Cross-platform builds (6 target platforms)
- Multiple installer generation
- Security signing and SBOM generation
- GitHub release creation
- Package manager updates
Pre-Release Checklist
Before creating a release, ensure the following:
Code Quality
-
All tests pass:
just test
-
Code formatting is correct:
just fmt-check
-
Linting passes:
just lint
-
Security audit passes:
just security
- No critical vulnerabilities in dependencies
Documentation
- README.md is up to date
- CHANGELOG.md has been generated using git-cliff
- API documentation is current
- Installation instructions are accurate
Configuration
-
Version number is updated in
Cargo.toml
-
dist-workspace.toml
configuration is correct - All target platforms are properly configured
- Installer configurations are valid
Release Process
1. Prepare Release Branch
# Ensure you're on main branch
git checkout main
git pull origin main
# Create release branch
git checkout -b release/v1.0.0
2. Update Version and Changelog
# Update version in Cargo.toml
# Edit Cargo.toml and update version = "1.0.0"
# Generate changelog using git-cliff
git-cliff --tag v1.0.0 --output CHANGELOG.md
# Commit changes
git add Cargo.toml CHANGELOG.md
git commit -m "chore: prepare v1.0.0 release"
3. Test Release Workflow
# Test cargo-dist configuration
cargo dist plan
# Test release workflow locally (requires act)
just act-release-dry v1.0.0-test
# Verify all artifacts would be generated correctly
4. Create Release Tag
# Push release branch
git push origin release/v1.0.0
# Create pull request for review (if applicable)
# After review and approval, merge to main
# Ensure you're on main branch
git checkout main
git pull origin main
# Create and push version tag
git tag v1.0.0
git push origin v1.0.0
5. Monitor Release Process
The release workflow will automatically:
- Plan Phase: Determine what artifacts to build
- Build Phase: Create binaries for all 6 target platforms
- Global Phase: Generate installers and SBOMs
- Host Phase: Upload artifacts and create GitHub release
- Publish Phase: Update Homebrew tap and other package managers
6. Verify Release
After the workflow completes, verify:
- GitHub release is created with correct version
- All 6 platform binaries are present
- Installers are generated (shell, PowerShell, MSI, Homebrew)
- SBOM files are included
- Checksums are provided
- Homebrew tap is updated (if applicable)
Release Artifacts
Each release includes the following artifacts:
Binaries
gold_digger-aarch64-apple-darwin.tar.gz
(macOS ARM64)gold_digger-x86_64-apple-darwin.tar.gz
(macOS Intel)gold_digger-aarch64-unknown-linux-gnu.tar.gz
(Linux ARM64)gold_digger-x86_64-unknown-linux-gnu.tar.gz
(Linux x86_64)gold_digger-aarch64-pc-windows-msvc.zip
(Windows ARM64)gold_digger-x86_64-pc-windows-msvc.zip
(Windows x86_64)
Installers
gold_digger-installer.sh
(Shell installer for Linux/macOS)gold_digger-installer.ps1
(PowerShell installer for Windows)gold_digger-x86_64-pc-windows-msvc.msi
(MSI installer for Windows)- Homebrew formula (automatically published to tap)
Security Artifacts
- SBOM files in CycloneDX format (
.cdx.json
) - GitHub attestation signatures
- SHA256 checksums
Troubleshooting
Common Issues
Workflow Fails During Build
- Check that all dependencies are available
- Verify target platform configurations
- Review build logs for specific error messages
Missing Artifacts
- Ensure all target platforms are configured in
dist-workspace.toml
- Check that build matrix includes all required platforms
- Verify artifact upload permissions
Homebrew Tap Update Fails
- Check
HOMEBREW_TAP_TOKEN
secret is configured - Verify tap repository permissions
- Review Homebrew formula generation
SBOM Generation Issues
- Ensure
cargo-cyclonedx
is properly configured - Check that all dependencies are available for SBOM generation
- Verify CycloneDX format compliance
Recovery Procedures
Failed Release
If a release fails partway through:
- Delete the failed release from GitHub
- Delete the tag locally and remotely
- Fix the issue that caused the failure
- Re-run the release process with the same version
Partial Artifacts
If some artifacts are missing:
- Check the workflow logs for specific failure reasons
- Re-run the specific failed job if possible
- Create a patch release if necessary
Post-Release Tasks
Documentation Updates
- Update any version-specific documentation
- Verify installation instructions work with new release
- Update any example configurations
Monitoring
- Monitor for any issues reported by users
- Check that package manager installations work correctly
- Verify security scanning results
Communication
- Announce the release on appropriate channels
- Update any external documentation or references
- Notify stakeholders of the release
Configuration Reference
dist-workspace.toml
Key configuration options:
[dist]
# Target platforms
targets = [
"aarch64-apple-darwin",
"x86_64-apple-darwin",
"aarch64-unknown-linux-gnu",
"x86_64-unknown-linux-gnu",
"aarch64-pc-windows-msvc",
"x86_64-pc-windows-msvc",
]
# Installers to generate
installers = ["shell", "powershell", "npm", "homebrew", "msi"]
# Security features
github-attestation = true
cargo-auditable = true
cargo-cyclonedx = true
# Homebrew tap
tap = "EvilBit-Labs/homebrew-tap"
git-cliff Configuration
The project uses git-cliff for automated changelog generation from conventional commits. The
configuration is typically in cliff.toml
(if present) or uses git-cliff defaults.
Key git-cliff commands:
# Generate changelog for a specific version
git-cliff --tag v1.0.0 --output CHANGELOG.md
# Generate changelog without header (for release notes)
git-cliff --tag v1.0.0 --strip header --output release-notes.md
# Preview changelog without writing to file
git-cliff --tag v1.0.0 --prepend CHANGELOG.md
Conventional Commit Format:
feat:
- New featuresfix:
- Bug fixesdocs:
- Documentation changesstyle:
- Code style changesrefactor:
- Code refactoringtest:
- Test additions or changeschore:
- Maintenance tasks
Environment Variables
Required secrets for the release workflow:
GITHUB_TOKEN
: Standard GitHub token for repository accessHOMEBREW_TAP_TOKEN
: Token for updating Homebrew tap repository
Support
For issues with the release process:
- Check the cargo-dist documentation
- Review the DISTRIBUTION.md guide
- Create an issue in the repository
- Check the troubleshooting guide
Release Notes Template
This template provides a structure for creating release notes for Gold Digger releases.
Template Structure
# Gold Digger v1.0.0
## ๐ Release Highlights
Brief overview of the most important changes in this release.
## โจ New Features
- **Feature Name**: Description of the new feature
- **Another Feature**: Description of another new feature
## ๐ง Improvements
- **Improved Area**: Description of the improvement
- **Another Improvement**: Description of another improvement
## ๐ Bug Fixes
- **Fixed Issue**: Description of the bug fix
- **Another Fix**: Description of another bug fix
## ๐ Security Updates
- **Security Enhancement**: Description of security improvements
- **Vulnerability Fix**: Description of security fixes
## ๐ฆ Installation
### Quick Install
```bash
# Linux/macOS
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/EvilBit-Labs/gold_digger/releases/download/{{VERSION}}/gold_digger-installer.sh | sh
# Windows
powershell -c "irm https://github.com/EvilBit-Labs/gold_digger/releases/download/{{VERSION}}/gold_digger-installer.ps1 | iex"
Package Managers
# Homebrew
brew install EvilBit-Labs/tap/gold-digger
# Manual download
# Visit: https://github.com/EvilBit-Labs/gold_digger/releases/tag/v1.0.0
๐ What's Changed
Breaking Changes
- Breaking Change: Description of breaking changes and migration steps
Deprecations
- Deprecated Feature: Description of deprecated features and alternatives
Configuration Changes
- Config Change: Description of configuration changes
๐งช Testing
This release has been tested on:
- Linux: Ubuntu 22.04 (x86_64, ARM64)
- macOS: 13+ (Intel, Apple Silicon)
- Windows: 10/11 (x86_64, ARM64)
๐ Security
- All binaries are signed with GitHub attestation
- SBOM files included for security auditing
- SHA256 checksums provided for integrity verification
Verification
gh attestation verify gold_digger-v1.0.0-x86_64-unknown-linux-gnu.tar.gz --attestation gold_digger-v1.0.0-x86_64-unknown-linux-gnu.tar.gz.intoto.jsonl
After verification, check the included SBOM and SHA256 files for complete integrity validation.
๐ Changelog
For a complete list of changes, see the CHANGELOG.md.
๐ Known Issues
- Issue Description: Workaround or status
๐ Upgrade Guide
From v0.2.x
- Step 1: Description of upgrade step
- Step 2: Description of upgrade step
From v0.1.x
- Breaking Change: Description of breaking changes
- Migration: Steps to migrate
๐ Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Security: SECURITY.md
๐ Contributors
Thanks to all contributors who made this release possible:
- @contributor1 - Description of contribution
- @contributor2 - Description of contribution
๐ License
Gold Digger is released under the MIT License. See LICENSE for details.
## Usage Instructions
### For Major Releases (v1.0.0, v2.0.0, etc.)
1. **Include breaking changes section** with detailed migration steps
2. **Highlight major new features** prominently
3. **Provide comprehensive upgrade guide**
4. **Include security section** with detailed information
### For Minor Releases (v1.1.0, v1.2.0, etc.)
1. **Focus on new features and improvements**
2. **Include any configuration changes**
3. **Note any deprecations**
4. **Provide brief upgrade notes if needed**
### For Patch Releases (v1.0.1, v1.0.2, etc.)
1. **Focus on bug fixes and security updates**
2. **Keep it concise**
3. **Highlight any critical fixes**
4. **Minimal upgrade guidance**
## Customization Tips
### Version-Specific Content
- **Update version numbers** throughout the template
- **Adjust installation URLs** to match the specific release
- **Update testing matrix** if platforms have changed
- **Modify upgrade guides** based on previous versions
### Content Guidelines
- **Use clear, concise language**
- **Include code examples** where helpful
- **Link to relevant documentation**
- **Highlight security improvements**
- **Provide migration steps** for breaking changes
### Automation
The release notes can be partially automated using:
```bash
# Generate changelog from git commits using git-cliff
git-cliff --tag v1.0.0 --output CHANGELOG.md
# Extract commit messages for specific version
git log v0.2.0..v1.0.0 --oneline --grep="feat\|fix\|docs\|style\|refactor\|test\|chore"
Integration with cargo-dist
When using cargo-dist, the release notes can be:
- Included in the GitHub release created by cargo-dist
- Generated automatically from conventional commits using git-cliff
- Customized for specific release highlights
- Linked from the main documentation
Example Completed Release Notes
See the GitHub Releases page for examples of completed release notes for previous versions.
Cargo-Dist Release Workflow Completion Summary
This document summarizes the completion of GitHub issue #58: "CI/CD: Document and prepare cargo-dist release workflow for production".
Issue Status: โ COMPLETED
The cargo-dist release workflow has been fully implemented, tested, and documented. The project is ready for its first production release using cargo-dist.
What Was Accomplished
1. Documentation Updates
README.md Enhancements
- Added Release Process section with comprehensive cargo-dist information
- Updated Security section to reflect GitHub attestation (not Cosign)
- Added development testing commands for local release validation
- Included cross-platform build information (6 target platforms)
- Documented git-cliff integration for automated changelog generation
DISTRIBUTION.md Corrections
- Updated artifact signing from Cosign to GitHub attestation
- Corrected verification commands to use GitHub CLI
- Maintained comprehensive platform and installer documentation
CONTRIBUTING.md Additions
- Added Release Process section with detailed workflow steps
- Included cargo-dist testing commands for contributors
- Documented automated release features and key capabilities
2. New Documentation Created
Release Runbook (docs/src/development/release-runbook.md
)
- Complete step-by-step guide for creating releases
- Pre-release checklist with quality gates
- Troubleshooting section for common issues
- Recovery procedures for failed releases
- Configuration reference for dist-workspace.toml
Release Notes Template (docs/src/development/release-notes-template.md
)
- Comprehensive template for all release types (major, minor, patch)
- Version-specific customization guidelines
- Integration instructions for cargo-dist
- Automation tips using git-cliff and conventional commits
Documentation Integration
- Updated SUMMARY.md to include new documentation
- Cross-referenced between all release-related documents
- Maintained consistent documentation structure
3. Technical Validation
Cargo-Dist Configuration Verification
- Confirmed 6 target platforms are properly configured:
aarch64-apple-darwin
,x86_64-apple-darwin
(macOS)aarch64-unknown-linux-gnu
,x86_64-unknown-linux-gnu
(Linux)aarch64-pc-windows-msvc
,x86_64-pc-windows-msvc
(Windows)
- Verified multiple installers: shell, PowerShell, MSI, Homebrew, npm
- Confirmed security features: GitHub attestation, cargo-cyclonedx SBOM
- Tested cargo-dist plan - all artifacts properly configured
Quality Assurance
- All tests passing (89 tests across 7 binaries)
- Full quality checks completed successfully
- Code formatting and linting standards maintained
- Security audit passed with no critical vulnerabilities
- Documentation builds successfully
4. Release Workflow Features
Automated Release Process
- Git tag triggered releases (e.g.,
v1.0.0
) - Cross-platform native builds on respective runners
- Multiple installer generation for different platforms
- GitHub attestation signing for all artifacts
- CycloneDX SBOM generation for security auditing
- Automatic Homebrew tap updates
Security Integration
- GitHub attestation for artifact signing
- cargo-cyclonedx for SBOM generation
- cargo-auditable for dependency tracking
- SHA256 checksums for integrity verification
- No personal access tokens - uses GitHub OIDC
Package Manager Support
- Homebrew formula automatically published to
EvilBit-Labs/homebrew-tap
- Shell installer for Linux/macOS
- PowerShell installer for Windows
- MSI installer for Windows
- npm package for Node.js environments
Current Implementation Status
โ Fully Implemented and Tested
- Release workflow (
.github/workflows/release.yml
) - Cross-platform builds (6 target platforms)
- Security signing (GitHub attestation)
- SBOM generation (CycloneDX format)
- Multiple installers (5 different formats)
- Package manager integration (Homebrew tap)
โ Documentation Complete
- Release runbook with step-by-step instructions
- Release notes template for consistent communication
- Updated README.md with cargo-dist information
- Corrected DISTRIBUTION.md with accurate signing details
- Enhanced CONTRIBUTING.md with release process
โ Quality Validated
- All tests passing (89/89)
- Code quality standards maintained
- Security audit passed
- Documentation builds successfully
- Cargo-dist configuration verified
Next Steps for Production Release
Immediate Actions
- Review documentation - ensure all team members understand the process
- Test release workflow - use
just act-release-dry v1.0.0-test
locally - Prepare release notes - use the provided template
- Create first cargo-dist release - push a version tag
Production Release Checklist
-
Update version in
Cargo.toml
- Generate changelog using git-cliff
-
Create and push version tag (e.g.,
v1.0.0
) - Monitor release workflow execution
- Verify all 6 platform artifacts are created
- Confirm installers are generated correctly
- Validate SBOM files are included
- Check Homebrew tap is updated
- Publish release notes using template
Benefits Achieved
Developer Experience
- Automated releases - no manual intervention required
- Consistent artifacts - same process for every release
- Cross-platform support - single workflow for all platforms
- Security integration - built-in signing and SBOM generation
User Experience
- Multiple installation methods - shell, PowerShell, MSI, Homebrew
- Signed artifacts - verified integrity and provenance
- Complete documentation - clear installation and usage instructions
- Security transparency - SBOM files for dependency auditing
Maintenance
- Reduced manual work - automated release process
- Consistent quality - standardized release artifacts
- Security compliance - built-in security features
- Easy troubleshooting - comprehensive documentation
Conclusion
Issue #58 has been successfully completed. The cargo-dist release workflow is:
- โ Fully implemented with all 6 target platforms
- โ Comprehensively tested and validated
- โ Thoroughly documented with runbooks and templates
- โ Ready for production deployment
The project is now prepared for its first cargo-dist production release, with all necessary documentation, tools, and processes in place.