LazyShell is a command-line interface that helps you quickly generate and execute shell commands using AI. It supports multiple AI providers and provides an interactive configuration system for easy setup.
- Generates shell commands from natural language descriptions
- Supports multiple AI providers (Groq, Google Gemini, OpenRouter, Anthropic, OpenAI, Ollama, Mistral)
- Interactive configuration system - no manual environment setup needed
- Safe execution with confirmation prompt
- Fast and lightweight
- Automatic fallback to environment variables
- Persistent configuration storage
- Automatic clipboard integration - generated commands are copied to clipboard
- Built-in evaluation system for testing AI performance
- Model benchmarking capabilities
- LLM Judge evaluation system
- CI/CD integration with automated quality checks
- System-aware command generation - detects OS, distro, and package manager
- Command refinement - iteratively improve commands with AI feedback
```bash
# npm
npm install -g lazyshell

# yarn
yarn global add lazyshell

# pnpm
pnpm add -g lazyshell

# bun
bun add -g lazyshell

# install script
curl -fsSL https://raw.githubusercontent.com/bernoussama/lazyshell/main/install | bash
```

1. First Run: LazyShell will automatically prompt you to select an AI provider and enter your API key:

   ```bash
   lazyshell "find all files larger than 100MB"
   # or use the short alias
   lsh "find all files larger than 100MB"
   ```
2. Interactive Setup: Choose from supported providers:
   - Groq - Fast LLaMA models with great performance
   - Google Gemini - Google's latest AI models
   - OpenRouter - Access to multiple models including free options
   - Anthropic Claude - Powerful reasoning capabilities
   - OpenAI - GPT models including GPT-4
   - Ollama - Local models (no API key required)
   - Mistral - Mistral AI models for code generation
   - LMStudio - Local models via LMStudio (experimental, no API key required)
3. Automatic Configuration: Your preferences are saved to `~/.lazyshell/config.json` and used for future runs.

4. Clipboard Integration: Generated commands are automatically copied to your clipboard for easy pasting.
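For reference, the saved file is plain JSON. The field names below are illustrative guesses, not the documented schema:

```json
{
  "provider": "groq",
  "model": "llama-3.3-70b-versatile",
  "apiKey": "gsk_your-api-key-here"
}
```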
On first run, LazyShell will guide you through:
- Selecting your preferred AI provider
- Entering your API key (if required)
- Automatically saving the configuration
```bash
# Open configuration UI
lazyshell config
```

You can still use environment variables as before:

```bash
export GROQ_API_KEY='your-api-key-here'
# OR
export GOOGLE_GENERATIVE_AI_API_KEY='your-api-key-here'
# OR
export OPENROUTER_API_KEY='your-api-key-here'
# OR
export ANTHROPIC_API_KEY='your-api-key-here'
# OR
export OPENAI_API_KEY='your-api-key-here'
```

Note: Ollama and LMStudio don't require API keys as they run models locally.
The configuration file is stored at:

- Linux/macOS: `~/.lazyshell/config.json`
- Windows: `%USERPROFILE%\.lazyshell\config.json`
| Provider | Models | API Key Required | Notes |
|---|---|---|---|
| Groq | LLaMA 3.3 70B | Yes | Fast inference, excellent performance |
| Google Gemini | Gemini 2.0 Flash Lite | Yes | Latest Google AI models |
| OpenRouter | Multiple models | Yes | Includes free tier options |
| Anthropic | Claude 3.5 Haiku | Yes | Advanced reasoning capabilities |
| OpenAI | GPT-4o Mini | Yes | Industry standard models |
| Ollama | Local models | No | Run models locally |
| Mistral | Devstral Small | Yes | Code-optimized models |
| LMStudio | Local models | No | Experimental - Local models via LMStudio |
lazyshell "your natural language command description"
# or use the short alias
lsh "your natural language command description"lazyshell -s "find all JavaScript files" # No explanation, just the command
lsh --silent "show disk usage" # Same with long flag# Find files
lazyshell "find all JavaScript files modified in the last 7 days"
# System monitoring
lazyshell "show disk usage sorted by size"
# Process management
lazyshell "find all running node processes"
# Docker operations
lazyshell "list all docker containers with their memory usage"
# File operations
lazyshell "compress all .log files in this directory"
# Package management (system-aware)
lazyshell "install docker" # Uses apt/yum/pacman/etc based on your distro- Execute: Run the generated command immediately
- Refine: Modify your prompt to get a better command
- Cancel: Exit without running anything
- Clipboard: Commands are automatically copied for manual execution
LazyShell automatically detects your system environment:
- Operating System: Linux, macOS, Windows
- Linux Distribution: Ubuntu, Fedora, Arch, etc.
- Package Manager: apt, yum, dnf, pacman, zypper, etc.
- Shell: bash, zsh, fish, etc.
- Current Directory: Provides context for relative paths
This enables LazyShell to generate system-appropriate commands and suggest the right package manager for installations.
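As a rough illustration, the gathered context can be pictured as a small object assembled before prompting the model. The helper names and types below are assumptions for the sketch; the real detection logic lives in `src/helpers/package-manager.ts`:

```typescript
import os from 'node:os';
import { existsSync } from 'node:fs';

// Illustrative sketch only: LazyShell's actual detection lives in
// src/helpers/ and may differ in names and behavior.
interface SystemContext {
  platform: NodeJS.Platform; // 'linux', 'darwin', 'win32', ...
  shell: string;             // e.g. 'bash', 'zsh', 'fish'
  packageManager?: string;   // e.g. 'apt', 'dnf', 'pacman'
  cwd: string;               // context for relative paths
}

function detectSystemContext(): SystemContext {
  // Probe for common package manager binaries (Linux).
  const managers = ['apt', 'dnf', 'yum', 'pacman', 'zypper'];
  const packageManager = managers.find(
    (pm) => existsSync(`/usr/bin/${pm}`) || existsSync(`/bin/${pm}`)
  );

  return {
    platform: os.platform(),
    shell: process.env.SHELL?.split('/').pop() ?? 'unknown',
    packageManager,
    cwd: process.cwd(),
  };
}
```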
LazyShell includes a flexible evaluation system for testing and benchmarking AI performance:
```typescript
import { runEval, Levenshtein, LLMJudge, createLLMJudge } from './lib/eval';

await runEval("My Eval", {
// Test data function
data: async () => {
return [{ input: "Hello", expected: "Hello World!" }];
},
// Task to perform
task: async (input) => {
return input + " World!";
},
// Scoring methods
scorers: [Levenshtein, LLMJudge],
});
```

Available scorers:

- ExactMatch: Perfect string matching
- Levenshtein: Edit distance similarity
- Contains: Substring matching
- LLMJudge: AI-powered quality evaluation
- createLLMJudge: Custom AI judges with specific criteria
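Custom judges let you score against specific criteria. The option shape below is an assumption for illustration; check `src/lib/eval.ts` for `createLLMJudge`'s actual signature:

```typescript
import { runEval, createLLMJudge } from './lib/eval';

// Assumed option shape, for illustration only; the real createLLMJudge
// signature may differ.
const securityJudge = createLLMJudge({
  criteria: 'Is the generated shell command safe to run (no destructive side effects)?',
});

await runEval('Security eval', {
  data: async () => [{ input: 'delete temp files', expected: '' }],
  task: async () => "find /tmp -name '*.tmp' -delete",
  scorers: [securityJudge],
});
```

The LLM judge system provides: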
- AI-Powered Evaluation: Uses LLMs to evaluate command quality without expected outputs
- Multiple Criteria: Quality, correctness, security, efficiency assessments
- Rate Limiting: Built-in retry logic and exponential backoff
- Configurable Models: Use different AI models for judging
The evaluation framework also provides:

- Generic TypeScript interfaces for any evaluation task
- Multiple scoring methods per evaluation
- Async support for LLM-based tasks
- Detailed scoring reports with averages
- Error handling for failed test cases
See docs/EVALUATION.md for complete documentation.
LazyShell includes comprehensive benchmarking capabilities to compare AI model performance:
```bash
# Build and run benchmarks
bun run build
bun dist/bench_models.mjs
```

- Multi-Model Testing: Compare Groq, Gemini, Ollama, Mistral, and OpenRouter models
- Performance Metrics: Response time, success rate, and output quality
- Standardized Prompts: Consistent test cases across all models
- JSON Reports: Detailed results saved to the `benchmark-results/` directory

Benchmarked models:

- `llama-3.3-70b-versatile` (Groq)
- `gemini-2.0-flash-lite` (Google)
- `devstral-small-2505` (Mistral)
- `ollama3.2` (Ollama)
- `or-devstral` (OpenRouter)
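Conceptually, each benchmark times the same standardized prompts against every model, along the lines of this simplified sketch (`generateCommand` is a hypothetical stand-in, not the real API in `bench_models.ts`):

```typescript
// Simplified sketch of a benchmark loop; the real implementation is
// src/bench_models.ts. generateCommand is a hypothetical stand-in.
async function benchmarkModel(
  model: string,
  prompts: string[],
  generateCommand: (model: string, prompt: string) => Promise<string>
) {
  let successes = 0;
  const start = Date.now();
  for (const prompt of prompts) {
    try {
      await generateCommand(model, prompt);
      successes++;
    } catch {
      // Failed generations count against the success rate.
    }
  }
  return {
    model,
    avgResponseMs: (Date.now() - start) / prompts.length,
    successRate: successes / prompts.length,
  };
}
```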
LazyShell includes automated quality assessments that run in CI to ensure consistent performance:
- Automated Testing: Runs on every PR and push to main/develop
- Threshold-Based: Configurable quality thresholds that must be met
- LLM Judges: Uses AI to evaluate command quality, correctness, security, and efficiency
- GitHub Actions: Integrated with CI/CD pipeline
- Add `GROQ_API_KEY` to your GitHub repository secrets
- Evaluations run automatically with a 70% threshold by default
- CI fails if quality scores drop below the threshold
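A minimal workflow sketch along these lines (the repository's actual workflow may differ in details):

```yaml
# Hypothetical sketch of the CI evaluation workflow.
name: Evaluations
on:
  pull_request:
  push:
    branches: [main, develop]
jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
      - run: bun install
      - run: bun run eval:ci
        env:
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
```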
```bash
# Run CI evaluations locally
bun run eval:ci

# Run basic evaluations
bun run build && bun dist/lib/basic.eval.mjs

# Run LLM judge evaluation
bun run build && bun dist/lib/llm-judge.eval.mjs

# Test AI library
bun run build && bun dist/test-ai-lib.mjs

# Run example evaluations
bun run build && bun dist/lib/example.eval.mjs
```

See docs/CI_EVALUATIONS.md for the complete setup and configuration guide.
Prerequisites:

- Bun (recommended)
1. Clone the repository:

   ```bash
   git clone https://github.com/bernoussama/lazyshell.git
   cd lazyshell
   ```

2. Install dependencies:

   ```bash
   bun install
   ```

3. Build the project:

   ```bash
   bun run build
   ```

4. Link the package for local development:

   ```bash
   bun link --global
   ```
```bash
bun x                  # Quick run with jiti (development)
bun run build          # Compile TypeScript with pkgroll
bun run typecheck      # Type checking only
bun run lint           # Check code formatting and linting
bun run lint:fix       # Fix formatting and linting issues
bun run eval:ci        # Run CI evaluations locally
bun run release:patch  # Build, version bump, publish, and push
bun run prerelease     # Build, prerelease version, publish, and push
```

```
src/
├── index.ts             # Main CLI entry point
├── utils.ts             # Utility functions (command execution, history)
├── bench_models.ts      # Model benchmarking script
├── test-ai-lib.ts       # AI library testing script
├── commands/
│   └── config.ts        # Configuration UI command
├── helpers/
│   ├── index.ts         # Helper exports
│   └── package-manager.ts  # System package manager detection
└── lib/
    ├── ai.ts            # AI provider integrations and command generation
    ├── config.ts        # Configuration management
    ├── eval.ts          # Evaluation framework
    ├── basic.eval.ts    # Basic evaluation examples
    ├── ci-eval.ts       # CI evaluation script
    ├── example.eval.ts  # Example evaluation scenarios
    └── llm-judge.eval.ts  # LLM judge evaluation examples
```
- TypeScript: Full type safety and modern JavaScript features
- pkgroll: Modern bundling with tree-shaking
- jiti: Fast development with TypeScript execution
- Watch Mode: Auto-compilation during development
- Modular Architecture: Clean separation of concerns
- ESM: Modern ES modules throughout
- Invalid configuration: Delete `~/.lazyshell/config.json` to reset, or use `lazyshell config`
- API key errors: Run `lazyshell config` to re-enter your API key
- Provider not working: Try switching to a different provider in the configuration
LazyShell will automatically fall back to environment variables if the config file is invalid or incomplete.
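For example, to reset and reconfigure from scratch:

```bash
# Remove the saved configuration, then set up again interactively
rm ~/.lazyshell/config.json
lazyshell config
```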
- Clipboard not working: Ensure your system supports clipboard operations
- Model timeout: Some models (especially Ollama) may take longer to respond
- Rate limiting: Built-in retry logic handles temporary rate limits
- Command not found: Make sure the package is properly installed globally
For troubleshooting, you can check:
- Configuration file: `~/.lazyshell/config.json`
- System detection: The AI considers your OS, distro, and package manager
- Command history: Generated commands are added to your shell history
Contributions are welcome! Please feel free to submit a Pull Request.
- Follow TypeScript best practices
- Add tests for new features
- Update documentation as needed
- Run evaluations before submitting PRs
- Use the KISS principle (Keep It Simple Stupid)
- Follow GitHub flow (create feature branches)
This project is licensed under the GPL-3.0 License - see the LICENSE file for details.
- Built with Commander.js
- Interactive prompts powered by @clack/prompts
- Clipboard integration via @napi-rs/clipboard
- AI SDK integration with Vercel AI SDK
- Bundled with pkgroll
- Powered by AI models from multiple providers
- Inspired by the need to be lazy (in a good way!)