# Power-Aware AI Inference Comparison Platform

Compare power consumption between local and cloud AI inference with real-time monitoring and detailed analysis.
## Features

- **Real-Time Power Monitoring**: Hardware-level power consumption analysis with actual wattage estimation (a sketch of the estimation approach follows this list)
- **Local vs Cloud Comparison**: Data-driven power efficiency analysis
- **Comprehensive Logging**: Organized experiment tracking and results
- **Ready to Use**: GPT-2 (local) + Mistral-7B (cloud) integration
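The reported wattage figures are estimates derived from system metrics rather than direct sensor readings, which most consumer machines do not expose. Below is a minimal sketch of one such approach, assuming a simple linear model between idle and full-load draw; `IDLE_WATTS`, `MAX_WATTS`, and `estimate_power_draw` are illustrative names, not the project's actual API:

```python
import psutil

# Illustrative calibration constants (assumed, not EdgeOptimizer's real values):
IDLE_WATTS = 8.0   # approximate package draw at idle
MAX_WATTS = 65.0   # approximate package draw at 100% CPU, near the chip's TDP

def estimate_power_draw(sample_seconds: float = 1.0) -> float:
    """Estimate current power draw by linear interpolation on CPU utilization."""
    cpu_percent = psutil.cpu_percent(interval=sample_seconds)
    return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * cpu_percent / 100.0

print(f"Estimated draw: {estimate_power_draw():.1f} W")
```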
## Quick Start

```bash
# Clone the repository
git clone https://github.com/fantops/EdgeOptimizer.git
cd EdgeOptimizer

# Install dependencies
pip install -r requirements.txt

# Set up configuration
cp configs/experiment_config.json.example configs/experiment_config.json
# Edit configs/experiment_config.json with your HuggingFace API key
```

## Usage

```bash
# Quick 30-second local inference test
cd experiments
python3 power_comparison.py local 30
# 10-minute comprehensive comparison
python3 power_comparison.py both 600
# Cloud-only testing
python3 power_comparison.py cloud 120
```

Example output:

```json
{
  "power_breakdown": {
    "estimated_watts": 57.9,
    "processing_overhead_watts": 49.9,
    "cpu_contribution_percent": 75.2,
    "efficiency_score": 3.0
  },
  "recommendation": "LOCAL inference is more power efficient"
}
```
## Sample Results

| Metric | Local Inference | Cloud Inference | Winner |
|---|---|---|---|
| Average Power | 57.9W | 58.4W | Local |
| Response Time | 11.3s | 17.7s | Local |
| Processing Overhead | 49.9W | 50.4W | Local |
| Battery Impact | -24.7%/hour | -12.8%/hour | Local |
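Per-run results like the JSON above land under `logs/power_analysis/` (see the project structure below). Here is a short sketch for pulling the headline numbers out of the newest result file; the `*.json` filename pattern is an assumption for illustration:

```python
import json
from pathlib import Path

# Grab the newest power-analysis result (filename pattern assumed, not guaranteed).
results = sorted(Path("logs/power_analysis").glob("*.json"))
if results:
    data = json.loads(results[-1].read_text())
    breakdown = data["power_breakdown"]
    print(f"Estimated draw:      {breakdown['estimated_watts']:.1f} W")
    print(f"Processing overhead: {breakdown['processing_overhead_watts']:.1f} W")
    print(f"Recommendation:      {data['recommendation']}")
```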
## Configuration

Edit `configs/experiment_config.json`:

```json
{
  "cloud_api_key": "hf_your_token_here",
  "cloud_model": "mistralai/Mistral-7B-Instruct-v0.2:featherless-ai",
  "experiment_duration": 300,
  "test_prompts": [
    "What is machine learning?",
    "How does a neural network work?",
    "Explain quantum computing"
  ]
}
```

## Project Structure

```
EdgeOptimizer/
├── app/
│   └── chatbot.py                   # SimpleChatbot using ModelManager
├── optimizer/                       # Core modular components
│   ├── __init__.py                  # Exports all components
│   ├── agent.py                     # EdgeOptimizerAgent (main orchestrator)
│   ├── cloud_inference.py           # Multi-provider cloud API manager
│   ├── config.py                    # ConfigManager with validation
│   ├── experiment_runner.py         # Reusable experiment framework
│   ├── logging_config.py            # Centralized logging system
│   ├── model_manager.py             # Local model management with caching
│   └── monitor.py                   # SystemMonitor and PowerTracker
├── experiments/
│   ├── power_comparison.py          # Main power comparison script
│   └── enhanced_power_monitor.py    # Advanced power tracking
├── configs/
│   ├── experiment_config.json       # Experiment and API settings
│   └── optimizer_config.json        # Power management configuration
├── logs/                            # Organized logging structure
│   ├── power_analysis/              # Power consumption logs
│   ├── experiments/                 # Experiment result logs
│   ├── system/                      # System operation logs
│   └── archive/                     # Archived old logs
├── main.py                          # Configuration-driven main application
└── requirements.txt                 # Python dependencies
```
## Core Components

- `SystemMonitor`: Battery, CPU, memory, and temperature monitoring
- `ModelManager`: Local model loading, caching, and inference with a singleton pattern
- `ConfigManager`: JSON configuration loading with path resolution and validation
- `CloudInferenceManager`: Multi-provider API support (HuggingFace, OpenAI)
- `ExperimentRunner`: Reusable experiment orchestration with power integration
- `EdgeOptimizerAgent`: Intelligent power-aware inference routing (see the sketch below)
- `PowerTracker`: Specialized power monitoring for experiments
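At its core, `EdgeOptimizerAgent`'s power-aware routing is a decision between backends based on power state. The following is a conceptual sketch of that decision, not the agent's actual interface; the threshold constant and function name are assumptions:

```python
import psutil

BATTERY_FLOOR_PERCENT = 25  # assumed threshold, purely illustrative

def choose_backend() -> str:
    """Route to the cloud when running low on battery; otherwise stay local."""
    battery = psutil.sensors_battery()  # None on machines without a battery
    if battery and not battery.power_plugged and battery.percent < BATTERY_FLOOR_PERCENT:
        return "cloud"  # offload to spare the remaining charge
    return "local"      # the comparison above favors local for power and latency

print(f"Routing inference to: {choose_backend()}")
```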
## Log Management

```python
# Import log utilities
from optimizer.logging_config import EdgeOptimizerLogger, cleanup_logs
# View log summary
logger = EdgeOptimizerLogger()
summary = logger.get_log_summary()
print(summary)
# Clean up old logs (keep 10 most recent)
archived = cleanup_logs(keep_recent=10)
print(f"Archived {archived} old log files")| Test Duration | Local Power | Cloud Power | Local Speed | Cloud Speed | Winner |
|---|---|---|---|---|---|
| 1 minute | 29.5W | N/A | 9.8s avg | N/A | Local |
| 10 minutes | 57.9W | 58.4W | 11.3s avg | 17.7s avg | Local |
| 30 minutes | 56.2W | 59.1W | 10.9s avg | 18.2s avg | Local |
Key Insights:
- Local inference consistently uses 0.5-3W less power
- Local responses are 36-67% faster than cloud
- Processing overhead stable at ~50W during AI inference
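The battery impact figures in the tables above are %/hour rates, presumably extrapolated from the change observed over the test window. A minimal sketch of that kind of measurement, assuming `psutil` battery readings are available; the sampling window is arbitrary:

```python
import time
import psutil

def battery_impact_percent_per_hour(window_s: int = 60) -> float | None:
    """Extrapolate battery drain (%/hour) from the change over a short window."""
    battery = psutil.sensors_battery()
    if battery is None:
        return None  # desktop or VM: nothing to measure
    start = battery.percent
    time.sleep(window_s)
    end = psutil.sensors_battery().percent
    return (end - start) * 3600 / window_s  # negative means drain

impact = battery_impact_percent_per_hour()
print(f"Battery impact: {impact:+.1f}%/hour" if impact is not None else "No battery found")
```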
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Contributing

Interested in contributing? See our Contributing Guide for development guidelines and power comparison best practices.