"Oh l'ami" - French for "Hi friend!"
Ollami is a frontend application designed to interact with local Ollama models for text generation, reasoning, chat and more.
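Under the hood, Ollami is a client of Ollama's local REST API. As a rough sketch of the kind of request such a frontend issues (the `llama3` model name here is only an example and must already be pulled):

```bash
# Ask the local Ollama server for a one-shot completion (non-streaming)
curl http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```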
## Why Use Ollami? 💡
- Save time and resources by running your favorite models directly on your machine.
- Quickly access and interact with a wide range of models, available directly in the interface.
- Seamlessly test and evaluate local model performance in a real-world application context.
Make sure you have the latest versions of Node.js and npm installed. If not, download them from nodejs.org.
Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It allows you to run JavaScript applications on your local machine, and much more.
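To confirm both are available on your PATH, check the installed versions:

```bash
node --version   # any recent LTS release should work
npm --version
```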
Ollama lets you get up and running with large language models locally. Find detailed installation instructions at [ollama.com](https://ollama.com).
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
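Once the install script finishes, a quick way to confirm the CLI is on your PATH:

```bash
ollama --version
```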
Interact with models such as Llama 3 effortlessly:

```bash
ollama run llama3:latest
```

Clone the repository into your local development folder:
```bash
git clone https://github.com/aetaix/ollami.git ollami
cd ollami
```

Install the dependencies:
```bash
npm install
```

Launch the app:
```bash
npm run dev
```

Note: no `.env` variables are needed; the app uses the default Ollama server started locally by the `ollama run` command. By default the server runs at `http://127.0.0.1:11434`.
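If the app cannot reach the server, a quick sanity check is to query the local API directly. Assuming Ollama is running on the default port, this returns the models you have pulled as JSON:

```bash
curl http://127.0.0.1:11434/api/tags
```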
Discover an extensive array of models at ollama.com/library
Here are some example models that can be downloaded:
| Model | Parameters | Size | Download |
|---|---|---|---|
| Llama 2 | 7B | 3.8GB | `ollama run llama2` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Dolphin Phi | 2.7B | 1.6GB | `ollama run dolphin-phi` |
| Phi-2 | 2.7B | 1.7GB | `ollama run phi` |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B | 13B | 7.3GB | `ollama run llama2:13b` |
| Llama 2 70B | 70B | 39GB | `ollama run llama2:70b` |
| Orca Mini | 3B | 1.9GB | `ollama run orca-mini` |
| Vicuna | 7B | 3.8GB | `ollama run vicuna` |
| LLaVA | 7B | 4.5GB | `ollama run llava` |
| Gemma | 2B | 1.4GB | `ollama run gemma:2b` |
| Gemma | 7B | 4.8GB | `ollama run gemma:7b` |
Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
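If you would rather download a model's weights ahead of time without opening an interactive session, `ollama pull` fetches a model without running it, for example:

```bash
ollama pull llama2
```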