Flm Compagnon is a modern GUI designed to accompany and manage the FastFlowLM (FLM) project. It offers a smooth user experience for interacting with your local AI models, monitoring the server, and managing your configurations.
Important
This application requires FastFlowLM (FLM) to be installed on your system. Without it, Flm Compagnon will not function.
- Models: Model manager (download, delete, inspect details).
- Server: Configuration and management of the FLM server instance (see the monitoring sketch after this list).
- System Tray: Quick access to server controls, model selection, and status directly from the notification area.
- Auto-start: Option to launch the application automatically at Windows startup.
- Start Minimized: Option to launch the application minimized to the system tray (configurable in settings).
- Settings: Application customization.
- About: View application version, hardware information, and check for updates.
- Multilingual: Interface available in English, French, and Japanese.
- Theme: Light and dark theme support.
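
To illustrate how the companion's server monitoring could be wired up, here is a minimal TypeScript sketch that polls a local HTTP endpoint and reports whether the server is reachable and which models it lists. The base URL, port, and `/api/tags` path are illustrative assumptions, not the documented FLM API; adapt them to the port configured in the Server tab.

```ts
// Minimal status poller for a locally running FLM server.
// NOTE: the base URL and the /api/tags path are assumptions for illustration;
// point them at the port configured in the Server tab.
const FLM_BASE_URL = "http://localhost:11434";

interface ServerStatus {
  reachable: boolean;
  models?: string[];
}

async function checkServer(): Promise<ServerStatus> {
  try {
    // Hypothetical endpoint returning the list of available models.
    const res = await fetch(`${FLM_BASE_URL}/api/tags`, {
      signal: AbortSignal.timeout(2000),
    });
    if (!res.ok) return { reachable: false };
    const data = await res.json();
    const models = (data.models ?? []).map((m: { name: string }) => m.name);
    return { reachable: true, models };
  } catch {
    // Network error or timeout: the server is stopped or unreachable.
    return { reachable: false };
  }
}

// Poll every 5 seconds; the real UI would update the tray icon and status
// panel instead of logging to the console.
setInterval(async () => {
  const status = await checkServer();
  console.log(
    status.reachable
      ? `FLM server up, models: ${status.models?.join(", ")}`
      : "FLM server down"
  );
}, 5000);
```

In the application itself, the result of such a check would drive the system tray menu and the real-time status view rather than console output.
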
| Models List | Model Information |
|---|---|
| Browse and manage your local AI models | View detailed model information (family, size, quantization) |
| Server Configuration | Custom Parameters | Server Running |
|---|---|---|
| Select model and configure server options | Advanced settings (port, context, CORS...) | Real-time server logs and status |
| Settings | About (Light) | About (Dark) |
|---|---|---|
| Language, theme and application preferences | Hardware info and update checker | Dark theme support |
| Server Management | Models Management |
|---|---|
| Quick access to server controls from the system tray | Quick access to model controls from the system tray |
- Add a startup check when FLM is launched (verify model availability and server prerequisites)
- Add an automatic update check at application startup
- Finalize saving and loading of custom usage configuration (persist user presets)
- Add an in-app memory / resource calculator for the chosen model + server configuration (see the sketch after this list)
- Add NPU and RAM usage monitoring and display (real-time stats)
- Ensure "Run at startup" setting is preserved across updates and installer actions
Completed
- Code cleanup and optimization
- Fix the server management design for consistency
- Add a version check for the companion application (+ changelog)
- Add caching for the list of models, CPU version, and RAM
- Force a refresh of the model list on the server configuration side when models are modified (download, delete)
- Add menus to the notification area icon (server management, models)
- Complete the translation of all texts for multilingual support
- FLM update 0.9.21: add the option to launch the server without a model, using ASR for Whisper
- FLM update 0.9.22: add the option to launch the server with host parameters
This project is open-source and open to contributions! Feel free to propose improvements via Pull Requests or report issues.
To run the project in development mode, you will need Rust and Node.js.
- Install JavaScript dependencies:

  ```bash
  npm install
  ```

- Run the application in development mode:

  ```bash
  npm run tauri dev
  ```
To build the project in release mode, you will need Rust and Node.js.
- Install JavaScript dependencies:

  ```bash
  npm install
  ```

- Build the application in release mode:

  ```bash
  npm run tauri build
  ```