Queue tasks during the day, wake up to completed work.
A sleek, dark-themed overnight AI assistant that processes ANY task using your local LLM while you sleep. Now featuring a professional midnight interface designed for night-time productivity.
OSS at Night solves a simple but frustrating problem: local AI models are powerful but slow. Instead of waiting for each task to complete, this app lets you queue up multiple tasks throughout your day and then process them all at once while you sleep.
Think of it like a print queue, but for AI tasks. With its new dark theme optimized for nighttime use, it's designed to never time out, making it perfect for complex jobs that might take hours. You can add tasks from any device on your network (phone, tablet, or laptop) and wake up to the completed work.
- Dark Theme Interface: Professional midnight-inspired design that's easy on the eyes during late-night sessions.
- Never Times Out: Built from the ground up for slow, local models that need hours to run.
- Dual Interfaces: A stunning dark-themed web GUI for easy access and a powerful CLI for automation and power users.
- Phone & Tablet Access: Queue tasks from anywhere on your network with a mobile-responsive interface.
- Universal Task Engine: Handles a wide variety of tasks:
  - `search`: In-depth research and report generation.
  - `create`: Generate new content like articles, documentation, or creative writing.
  - `process`: Transform existing content by summarizing, rephrasing, or changing its tone.
  - `code`: Generate, debug, or document software code.
  - `chain`: Execute multi-step workflows where the output of one step feeds into the next.
- Web Search Integration: Optionally uses Serper or Tavily APIs to incorporate real-time information into any task.
- Full File Management: A complete file CRUD (Create, Read, Update, Delete) plugin with over 15 operations for managing your workspace.
- Visual Gallery: A stunning dark-themed, filterable gallery view at `/gallery` to browse, inspect, and export completed tasks from both GUI and CLI databases.
- 100% Local & Private: Your data never leaves your machine (unless you enable the optional web search).
```bash
# Clone the repository
git clone https://github.com/lalomorales22/oss-at-night
cd oss-at-night

# We recommend 'uv' for fast package management
# Install uv if you don't have it:
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create a virtual environment and activate it
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Run the interactive installer (Recommended)
python install.py
```

The installer will check your dependencies, verify your Ollama connection, and help you get set up.
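For reference, the kind of Ollama connectivity check the installer performs can be reproduced in a few lines. This is a hypothetical sketch against Ollama's standard REST API, not the actual `install.py` logic:

```python
# Hypothetical sketch: confirm a local Ollama server is reachable and list its models.
# Not the actual install.py check; it only relies on Ollama's standard /api/tags endpoint.
import requests

OLLAMA_URL = "http://localhost:11434"

try:
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print(f"Ollama is running with {len(models)} model(s): {', '.join(models) or 'none'}")
except requests.RequestException as exc:
    print(f"Could not reach Ollama at {OLLAMA_URL}: {exc}")
```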
Open a new terminal and ensure your Ollama instance is running.
```bash
# Terminal 1: Start the Ollama server
ollama serve

# Terminal 2: Pull the recommended model if you don't have it
ollama pull gpt-oss:20b

# Start the web application
python obp-GUI.py
```

Access the GUI on your computer at http://localhost:5001.
To access from your phone or another device, use the "Network Access" URL shown in the startup message (e.g., http://192.168.0.64:5001).
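The "Network Access" URL works because the server listens on all network interfaces rather than only localhost. Here is a minimal sketch of that idea, assuming a Flask-style app; the actual `obp-GUI.py` may be structured differently:

```python
# Minimal sketch: bind a web app to every interface so LAN devices (phones, tablets) can reach it.
# Assumes Flask purely for illustration; the real obp-GUI.py may use another framework.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "OSS at Night task queue"

if __name__ == "__main__":
    # 0.0.0.0 listens on all interfaces, so http://<your-LAN-IP>:5001 works from other devices.
    app.run(host="0.0.0.0", port=5001)
```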
The system operates on a simple principle: queue now, process later.
- Add Tasks: Throughout the day, you add tasks using either the web GUI or the CLI. You can define the task type and add specific instructions using metadata.
- Process Queue: When you're ready, you start the processor. It works through the queue one task at a time, sending carefully crafted prompts to your local Ollama model.
- Get Results: The system saves the results of each task to an SQLite database and as individual files in the `results/` directory. The process is fault-tolerant; if it crashes, it remembers where it left off. (A minimal sketch of this loop follows below.)
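To make the queue-then-process flow concrete, here is a minimal sketch of such a loop. The table schema, database path, and model name are assumptions for illustration; the real processor uses its own schema and prompt construction.

```python
# Minimal sketch of a fault-tolerant queue processor (assumed schema and paths, not the real code).
# Tasks already marked 'done' are skipped on restart, which is what makes a crash recoverable.
import pathlib
import sqlite3

import requests

DB_PATH = "data/tasks.db"          # assumed database location
RESULTS = pathlib.Path("results")
RESULTS.mkdir(parents=True, exist_ok=True)

conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS tasks "
    "(id INTEGER PRIMARY KEY, prompt TEXT, status TEXT DEFAULT 'pending', result TEXT)"
)

for task_id, prompt in conn.execute("SELECT id, prompt FROM tasks WHERE status = 'pending'").fetchall():
    # Generous timeout: local models may take a long time per task.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gpt-oss:20b", "prompt": prompt, "stream": False},
        timeout=3600,
    )
    output = resp.json()["response"]
    (RESULTS / f"task_{task_id}.md").write_text(output)
    conn.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?", (output, task_id))
    conn.commit()  # commit per task so a crash loses at most the task in flight
```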
You can provide tasks in a simple text format or through the GUI's metadata builder. Metadata allows you to customize how each task is handled.
CLI Task Format (`tasks.txt`):

```
{search}
search_query=AI safety 2025::Research the latest developments in AI safety.

{create}
tone=professional,filename=blog_post.md::Write a blog post about quantum computing.

{code}
language=python,filename=analyzer.py::Create a data visualization script using pandas.
```
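As a rough illustration of how this format breaks down, a `{type}` header applies to the task lines that follow it, and each task line is `key=value` metadata, then `::`, then the task description. The parser below is illustrative only, not the project's actual code:

```python
# Illustrative parser for the tasks.txt format shown above (not the project's actual parser).
def parse_tasks(text: str) -> list[dict]:
    tasks, task_type = [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("{") and line.endswith("}"):
            task_type = line[1:-1]  # e.g. "search", "create", "code"
        elif "::" in line and task_type:
            meta_part, description = line.split("::", 1)
            metadata = dict(pair.split("=", 1) for pair in meta_part.split(",") if "=" in pair)
            tasks.append({"type": task_type, "metadata": metadata, "description": description})
    return tasks

if __name__ == "__main__":
    with open("tasks.txt") as f:
        for task in parse_tasks(f.read()):
            print(task)
```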
Metadata Guide:
Metadata is a set of key-value pairs that give the AI specific instructions.
- For Search: `search_query`, `comparison`, `filename`
- For Create: `tone`, `audience`, `format`, `filename`
- For Code: `language`, `filename`, `include_docs`, `include_tests`
- For File Ops: `operation` (e.g., create, search, list), `pattern`, `search_text`
Example (Code Task Metadata):
```json
{
  "language": "python",
  "filename": "data_analyzer.py",
  "include_docs": true,
  "libraries": "pandas,matplotlib"
}
```

To allow tasks to access real-time information, you can connect a web search provider.
Get a Free API Key:
- Serper: serper.dev (2,500 free searches/month)
- Tavily: tavily.com (1,000 free searches/month)
Configure:
```bash
# Copy the example environment file
cp .env.example .env

# Edit the .env file and add your key
SERPER_API_KEY=your_serper_key_here
# or TAVILY_API_KEY=your_tavily_key_here
```

The application will automatically use the key if it's present.
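Picking up the key is a plain environment lookup. The sketch below shows the general pattern against Serper's and Tavily's public search endpoints; it is not the app's actual plugin, so confirm request details against each provider's documentation:

```python
# Hedged sketch: use whichever search provider has an API key configured (not the app's plugin code).
import os

import requests

def web_search(query: str) -> dict:
    if os.getenv("SERPER_API_KEY"):
        resp = requests.post(
            "https://google.serper.dev/search",      # Serper search endpoint
            headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},
            json={"q": query},
            timeout=30,
        )
    elif os.getenv("TAVILY_API_KEY"):
        resp = requests.post(
            "https://api.tavily.com/search",          # Tavily search endpoint
            json={"api_key": os.environ["TAVILY_API_KEY"], "query": query},
            timeout=30,
        )
    else:
        raise RuntimeError("Set SERPER_API_KEY or TAVILY_API_KEY to enable web search.")
    resp.raise_for_status()
    return resp.json()
```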
You can define or modify task processing steps by editing the YAML files in the task_configs/ directory. This allows you to create highly customized, multi-step workflows.
Example (`search_tasks.yaml`):

```yaml
type: search
steps:
  - name: web_search      # First, run a web search
    plugin: web_search
    optional: true
  - name: summarize       # Then, summarize the results
    prompt: "Summarize these search results in a clear, organized way: {web_search_result}"
  - name: create_report   # Finally, create a detailed report
    prompt: "Create a detailed report based on this summary: {summarize_result}"
  - name: save_report     # And save the file
    plugin: file_operations
    operation: create
    filename_template: "search_{task_id}.md"
```
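The `{web_search_result}` and `{summarize_result}` placeholders suggest that each step's output is stored under its name and substituted into later prompts. Here is a minimal sketch of that chaining idea; the behavior is inferred from the YAML above, not taken from the actual engine:

```python
# Minimal sketch of chaining workflow steps by substituting earlier results into later prompts.
# The "<step name>_result" convention is inferred from the YAML placeholders above.
def run_workflow(steps: list[dict], ask_model) -> dict:
    results: dict[str, str] = {}
    for step in steps:
        prompt = step["prompt"].format(**{f"{name}_result": output for name, output in results.items()})
        results[step["name"]] = ask_model(prompt)
    return results

# Usage with a stand-in model function:
steps = [
    {"name": "summarize", "prompt": "Summarize: local models are slow but private."},
    {"name": "create_report", "prompt": "Create a detailed report based on this summary: {summarize_result}"},
]
print(run_workflow(steps, ask_model=lambda p: f"[model output for: {p[:40]}...]"))
```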
Docker provides an isolated and consistent environment.

Run Setup Script: This script creates necessary directories and sets permissions.

```bash
./docker-setup.sh
```

Start with Docker Compose:

```bash
# Build and start the containers in the background
docker-compose up --build -d
```

Manage:

- View logs: `docker-compose logs -f`
- Stop: `docker-compose down`
First, run the connection test script to diagnose common issues with your Ollama setup.
```bash
python test_connection.py
```

If you can't access the GUI from your phone or another device:
- Same Network: Ensure your phone and computer are on the same Wi-Fi network.
- Correct IP Address: Use the exact "Network Access" IP address shown in the terminal when you start the app (e.g., `http://192.168.0.64:5001`), not `localhost`.
- Firewall: Your OS firewall might be blocking the connection.
  - macOS: Go to System Preferences > Security & Privacy > Firewall and allow incoming connections for "Python".
  - Linux: `sudo ufw allow 5001`
- Run Network Test: `python network_test.py` provides detailed diagnostics (a simpler manual check is sketched below).
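If you want a quick manual check beyond those steps, a plain TCP connection test from the other device tells you whether the port is reachable at all. This is a generic sketch, not the bundled `network_test.py`:

```python
# Generic reachability check for the GUI port (not the bundled network_test.py).
import socket

HOST, PORT = "192.168.0.64", 5001   # replace with the "Network Access" IP your app printed

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    if sock.connect_ex((HOST, PORT)) == 0:
        print(f"{HOST}:{PORT} is reachable - the app and firewall look fine.")
    else:
        print(f"Cannot reach {HOST}:{PORT} - check the firewall and that the app is running.")
```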
If the Docker container can't connect to Ollama:
- Ensure Ollama is running on your host machine (`ollama serve`).
- Your `docker-compose.yml` should set `OLLAMA_HOST=http://host.docker.internal:11434`. This is the standard for Mac and Windows. For Linux, you may need to use `http://172.17.0.1:11434`.
- If you see database errors in Docker, run `./docker-setup.sh` again and rebuild with `docker-compose up --build`.

If web search isn't working:

- Make sure your `.env` file exists and contains your actual API key, not the placeholder text.
- Restart the application (or Docker container) after editing the `.env` file.
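Both of these troubleshooting paths come down to environment variables read at startup. A hedged sketch of that pattern (the variable names come from this README; the resolution logic is assumed):

```python
# Hedged sketch of startup configuration checks (variable names from this README; logic assumed).
import os

ollama_host = os.getenv("OLLAMA_HOST", "http://localhost:11434")  # Docker Compose overrides this
serper_key = os.getenv("SERPER_API_KEY", "")
tavily_key = os.getenv("TAVILY_API_KEY", "")

print(f"Using Ollama at {ollama_host}")
if not serper_key and not tavily_key:
    print("No search API key found - web search steps will be skipped.")
elif "your_serper_key_here" in serper_key or "your_tavily_key_here" in tavily_key:
    print("The .env file still contains placeholder text - replace it with your real key.")
```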
OSS at Night features a carefully crafted dark theme with:
- Deep black backgrounds with subtle blue gradient overlays
- Vibrant accent colors for improved visibility
- Glassmorphic effects with backdrop blur for a modern feel
- Smooth animations and transitions
- Professional color-coded task status indicators
- Eye-friendly contrast ratios for extended nighttime use
```
oss-at-night/
├── obp-GUI.py              # Main web interface with dark theme
├── obp-CLI.py              # Main command-line interface
├── processor_config.yaml   # Global configuration for the processor
├── task_configs/           # Directory for custom task workflow definitions
├── gallery_template.html   # Template for the /gallery view
├── install.py              # Interactive installer script
├── Dockerfile              # Docker container definition
├── docker-compose.yml      # Docker orchestration file
├── requirements.txt        # Python dependencies
├── .env.example            # Template for API keys
├── workspace/              # Default directory for file operations
├── results/                # Output directory for completed task files
└── data/                   # Contains the SQLite databases
```
Pull requests are welcome! Please see CONTRIBUTING.md for guidelines. To set up a development environment:
```bash
# Install development and testing tools
pip install -r requirements-dev.txt

# Run tests
make test

# Check code formatting
make lint
```

This project is licensed under the MIT License. See the LICENSE file for details.