Releases: lfnovo/open-notebook
v1.2.0 - Command Palette & Azure OpenAI Support
Release v1.2.0
Release Date: December 2025
This release brings significant UI/UX improvements, better Azure OpenAI support, and various bug fixes and refinements since v1.1.0 (October 24, 2025).
Highlights
- Command Palette: New keyboard-driven navigation for quick access to notebooks, sources, and search
- Azure OpenAI Support: Modality-specific configuration for Azure deployments
- Improved Chat Experience: Compact references with numbered citations
Features
- Command Palette for Quick Navigation - Press `Cmd/Ctrl+K` to quickly navigate and search across the application (#288) - @lfnovo
- Azure OpenAI Modality-Specific Configuration - Configure different Azure OpenAI deployments for LLM, embeddings, and speech (#243) - @lfnovo
- Compact Chat References - Chat now displays numbered citations for cleaner reference handling (#220) - @lfnovo
Bug Fixes
- URL Params in Search Page - Fixed command palette navigation to properly handle URL parameters (#289) - @lfnovo
- UTF-8 Encoding for Migrations - Fixed async migrations file reading to use UTF-8 encoding (#279) - @kuroshiro1902
- UI Scrolling and API Routes - Fixed UI scrolling behavior and API route ordering issues (#253) - @lfnovo
- Hide Sources Notes - Fixed display of source notes (#273) - @lfnovo
Refactoring
- Environment Variables Loading - Moved environment variables loading to application entry point for better initialization (#283) - @kuroshiro1902
- Duplicate Model Validation - Optimized duplicate model validation and improved error handling (#219) - @lfnovo
Documentation
- SSL Verification Configuration - Added documentation for SSL verification configuration for local providers (#281) - @lfnovo
- Contribution Workflow - Improved contribution workflow and project governance documentation (#246) - @lfnovo
Maintenance
- Podcast Transcripts - Improved podcast transcript generation (#221) - @lfnovo
- Esperanto Update - Bumped Esperanto for Anthropic support on LangChain (#244) - @lfnovo
- Security: js-yaml - Updated js-yaml from 4.1.0 to 4.1.1 in frontend (#263) - @dependabot
Full Changelog
v1.1.0...v1.2.0
v1.1.0 - Simplified Reverse Proxy + Critical Bug Fixes
🚀 v1.1.0 - Simplified Reverse Proxy Configuration
This release dramatically simplifies reverse proxy setup and fixes a critical bug that prevented runtime configuration from working in production builds.
🎯 Major Features
Next.js API Rewrites - Single-Port Deployment
Open Notebook now uses Next.js rewrites to internally proxy /api/* requests from port 8502 to the FastAPI backend on port 5055. This eliminates the need for complex reverse proxy configurations!
Before (v1.0.x) - Complex nginx config:
```nginx
upstream frontend { server app:8502; }
upstream api { server app:5055; }
location /api/ { proxy_pass http://api/api/; }
location /_config { proxy_pass http://frontend; }
location / { proxy_pass http://frontend; }
```
After (v1.1.0) - Simple nginx config:
```nginx
location / {
    proxy_pass http://app:8502;
    # That's it! Next.js handles API routing internally
}
```
Benefits:
- ✅ 75% reduction in configuration complexity (12 lines → 3 lines)
- ✅ Single-port deployment for 95% of use cases
- ✅ Zero breaking changes - existing deployments continue working
- ✅ Zero performance overhead (< 1ms latency added)
- ✅ Works with nginx, Traefik, Caddy, Coolify, and more
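For readers curious what this looks like on the Next.js side, here is a minimal sketch of such a rewrite rule, assuming a `next.config.ts` and the optional `INTERNAL_API_URL` variable documented below; the project's actual configuration may differ:
```typescript
// next.config.ts -- illustrative sketch, not the repository's exact config.
import type { NextConfig } from "next";

// Where the FastAPI backend listens inside the container or network.
// INTERNAL_API_URL is the optional override introduced in this release.
const internalApiUrl = process.env.INTERNAL_API_URL ?? "http://localhost:5055";

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      {
        // Any /api/* request hitting the frontend (port 8502) is forwarded
        // server-side to the FastAPI backend, so only one port is exposed.
        source: "/api/:path*",
        destination: `${internalApiUrl}/api/:path*`,
      },
    ];
  },
};

export default nextConfig;
```
Because the rewrite happens inside the Next.js server, the browser only ever talks to port 8502, which is why the single-location reverse proxy config above is sufficient.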
🐛 Critical Bug Fixes
Fixed: Runtime Configuration Never Worked in Production
Issue: The /_config endpoint that provides the API_URL to the browser was completely broken in production builds.
Root Cause: Next.js treats folders starting with _ as "private folders" and excludes them from routing entirely. The /_config route was never built into production!
Impact:
- Runtime configuration was non-functional
- Auto-detection for reverse proxy scenarios didn't work
- Remote deployments required manual URL configuration
Fix: Renamed /_config → /config and updated all references
Why this matters: This endpoint is critical for:
- Telling the browser where to make API requests
- Enabling zero-config deployments with auto-detection
- Supporting reverse proxy scenarios where API URL differs from frontend URL
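To make the failure concrete, here is a minimal sketch of how a frontend might consume a runtime-config endpoint like `/config`; the helper name and response shape below are illustrative assumptions, not the project's actual code:
```typescript
// Hypothetical client-side helper: read the runtime config served at /config.
export async function resolveApiUrl(): Promise<string> {
  try {
    const res = await fetch("/config");
    if (!res.ok) throw new Error(`config endpoint returned ${res.status}`);
    const config: { API_URL?: string } = await res.json();
    // If API_URL is unset, assume the API is proxied on the same origin.
    return config.API_URL ?? "";
  } catch {
    // With the old /_config path this request 404'd in production builds,
    // so the browser silently fell back without a usable API base URL.
    return "";
  }
}
```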
📝 Full Changelog
Features
- Next.js API Rewrites: Added automatic proxying of `/api/*` requests from frontend to backend
- INTERNAL_API_URL environment variable: New optional variable for multi-container deployments (defaults to `http://localhost:5055`)
- Updated documentation: Comprehensive reverse proxy examples for nginx, Traefik, Caddy, and Coolify
Bug Fixes
- Fixed `/config` endpoint: Renamed from `/_config` to fix Next.js private folder exclusion (CRITICAL)
- Fixed React hook warnings: Resolved exhaustive-deps warnings in AddExistingSourceDialog
Configuration
- Updated `supervisord.conf` and `supervisord.single.conf` to pass `INTERNAL_API_URL`
- Added detailed documentation for `INTERNAL_API_URL` in `.env.example`
- Updated README architecture diagram to show internal proxying
Documentation
- Comprehensive reverse proxy guide with simplified examples
- Clear explanation of `API_URL` vs `INTERNAL_API_URL`
- Migration guide for existing deployments
🔄 Migration Guide
For New Deployments
No special configuration needed! The simplified proxy setup works out of the box:
```bash
# Just proxy to port 8502
docker run -e API_URL=https://your-domain.com lfnovo/open_notebook:v1-latest
```
For Existing Deployments
No action required! This release is 100% backward compatible:
- ✅ Existing two-port configurations continue working
- ✅ Direct API access on port 5055 still functional
- ✅ External API integrations unaffected
- ✅ All environment variables work as before
Optional: Simplify your reverse proxy config to single-port (see examples above)
For Multi-Container Deployments
If you're using separate containers for frontend and API, set the new environment variable:
```yaml
services:
  frontend:
    environment:
      - INTERNAL_API_URL=http://api:5055  # Use your API service name
```
📊 Testing
All changes have been comprehensively tested:
- ✅ Direct API access (backward compatibility)
- ✅ Proxied API access through rewrites
- ✅ Runtime config endpoint (`/config`)
- ✅ Header forwarding (X-Forwarded-*)
- ✅ File uploads via proxy
- ✅ Performance (< 1ms overhead)
- ✅ Multiple reverse proxy types
📚 Documentation
- Reverse Proxy Guide - Updated with simplified examples
- Environment Variables - Detailed INTERNAL_API_URL documentation
🙏 Credits
Special thanks to the community for reporting reverse proxy configuration issues. Your feedback drives improvements!
Full Diff: v1.0.11...v1.1.0
Related Issues: Resolves #179, Related to OSS-321
Open Notebook v1 is out of beta
There are a whole lot of features in this release:
- The layout is much cleaner, faster, and snappier. All the expensive file/model processing operations now run in the background, making the app much more pleasant to use.
- You can now chat with your sources the same way you could chat with your notebooks. You can watch YouTube videos directly from the app and re-download files that you uploaded before.
- A brand new Sources feature to manage your sources outside of notebooks. Sources can be created without being assigned to a notebook, and they can now also be assigned to multiple notebooks.
- You can pick the model to use for each chat and change it anytime
- Improved all the pages and UIs, especially podcasts, models, transformations, and settings
- Created a new Advanced menu for re-embedding content (after changing the embedding model)
- Fixed API connectivity issues, including for people using reverse proxies
- Enabled OpenAI-compatible embedding and TTS endpoints. You can now have fully offline podcast creation. I provided a detailed guide on how to do it with Peaches (https://github.com/lfnovo/open-notebook/blob/main/docs/features/local_tts.md).
- There were tons of bug fixes since we started the beta.
PLEASE NOTE (MIGRATION)
If you are not on v1 yet, you need to change the docker tag and expose port 5055. See the MIGRATION.md file for further instructions.
0.3.2 - Support for OpenAI-compatible endpoints (i.e. LM Studio)
Small database init bug fix
v0.3.1 chore: bump
v0.3.0 - API, Podcasts and more
This is a big release in preparation for building the new front-end and several features coming up soon.
- Creates the API layer for Open Notebook (you can use the APIs to create your automations now)
- Migrates the SurrealDB SDK to the official one, solves a lot of encoding issues
- Changes all database calls to async -- faster and a more responsive UI
- New podcast framework supporting 1-4 speakers plus several new providers and models
- Better podcast transcript generation
- Implement the surreal-commands library for async processing and more responsiveness
- Improve docker image and docker-compose configurations -- faster builds, smaller images, better runs
- Enables the option to protect your installation with a password so that it can be deployed openly
Stay tuned, as this opens the path for very cool new features coming soon
Implement Esperanto and launch several providers
Version 0.2.2 is out 🎉
This is a big release in preparation for the launch of v1 (APIs and the new front-end).
In this release, we are replacing the in-house model management capabilities with the Esperanto library, which I also maintain and which has been battle-tested in production across a lot of projects.
**What changed?**
- New providers added: Mistral (text and embedding), Gemini (text, embedding, text-to-speech), DeepSeek (text), Voyage (embeddings), and Azure OpenAI
- Improved the performance of all model usage with a lightweight HTTP-based framework
- Fixes some issues reported by users with Ollama
- Improved the model selection page
- Launched a model management guide to help you get started
- Fixes an issue where the app repeatedly asked to run migrations
- Fixes the YouTube transcript download issue that affected many apps
- Fixes an issue where the source page would break after the user deleted all transformations
Release 0.2.0 - content-core, docling, firecrawl and jina
- Document processing is now handled by Docling (when possible) and falls back to our old text-based mechanism. This will greatly improve processing of complex PDFs and tables, as well as add support for CSV, among other formats.
- URL processing has been extended and can now be handled by either Firecrawl or Jina, besides the original HTTP-based approach, which fails quite often on JavaScript-heavy websites. Both tools have a generous free tier, so it should be a no-brainer to use them.
- New Settings page: we moved some content-processing settings to a new Settings page to unclutter the "add source" UI. You can use the new page to change the default engines for documents and URLs (although I suggest keeping it on auto). You can also decide when to embed content and whether to keep a local copy of files you upload (I suggest you don't).
Transformations UI and Long Form Podcasts
Please take note of the warning below if you are migrating from 0.1.0.
Changes in this release
- Transformations are no longer managed in a YAML file. They've moved to the UI under the Transformations menu item, making it simpler and easier to customize your prompts.
- Long-form podcasts (15+ minutes) are now available, with an improved Gemini API
- Many bug fixes
Error Migrating from v0.1.0
If you are migrating from version v0.1.0, there was a problem in the Surreal SDK that made the migrations count go crazy. So upgrading to 0.1.1 might not trigger the new database migration that is required for this to run. I fixed the issue on my fork of the SDK for now. But if you already ran 0.1.0, you need to do a fix.
If that is your case, just follow this:
Start the app:
```bash
docker compose up
```
Enter the docker compose exec mode on the DB machine:
```bash
docker compose exec surrealdb /surreal sql --endpoint http://localhost:8000 --ns open_notebook --user root --pass root --db open_notebook
```
Run this command to reset the migration count to the right number:
```sql
delete from _sbl_migrations where version > 4;
```
Then, just run the app in the browser and let it migrate correctly.
If you don't want to do all that, you can also just delete the surreal_data folder and start from scratch.
v0.1 - Release Candidate
- Better citations and improved search capabilities
- The "Ask" feature is much smarter now and let's you check its thinking
- Enabled support for X.AI and Groq models
- Select default transformations to apply to all content
- Save insights as custom notes
- Items are added to context by default