Ollama is an AI model management tool that allows users to install and use custom large language models locally.
The project aims to:
- Create a Discord bot that utilizes Ollama to chat with users!
- User Preferences on Chat
- Message Persistence on Channels and Threads
  - Threads
  - Channels
- Containerization with Docker
- Slash Commands Compatible
- Generated Token Length Handling for responses over 2000 characters
- Token Length Handling for messages of any size
- User vs. Server Preferences
- Redis Caching
- Administrator Role Compatible
- Multi-User Chat Generation (multiple users chatting at the same time) - built in since Ollama v0.2.1+
- Automatic and Manual model pulling through the Discord client
- Allow others to create their own models personalized for their own servers!
- Documentation on creating your own LLM
- Documentation on web scraping and cleaning
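The ">2000" handling above exists because Discord rejects messages longer than 2000 characters, while Ollama can generate far more. A minimal sketch of how a long response could be chunked before sending (the `splitMessage` helper below is illustrative, not the project's actual implementation):

```typescript
// Illustrative sketch: Discord caps messages at 2000 characters, so a long
// generated response must be split into chunks and sent in order.
// This is NOT the project's actual implementation.
export function splitMessage(text: string, limit: number = 2000): string[] {
    const chunks: string[] = []
    for (let i = 0; i < text.length; i += limit) {
        chunks.push(text.slice(i, i + limit))
    }
    return chunks
}
```

Each chunk can then be sent sequentially with `channel.send(...)`, keeping every message under the limit.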
These are guides to the feature set included and the events triggered in this app.
- Clone this repo using `git clone https://github.com/kevinthedang/discord-ollama.git`, or just use GitHub Desktop to clone the repo.
- You will need a `.env` file in the root of the project directory containing the bot's token. A `.env.sample` is provided as a reference for the required environment variables. For example, `CLIENT_TOKEN = [Bot Token]`.
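As a sketch, the `.env` file can also be created directly from the shell (the token value below is a placeholder; copying `.env.sample` and editing it is the usual route):

```shell
# Placeholder token value - replace with your actual bot token.
printf 'CLIENT_TOKEN = %s\n' 'your-bot-token-here' > .env
```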
- Please refer to the docs for bot setup.
- Creating a Discord App
- Local Machine Setup
- Docker Setup for Servers and Local Machines
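For the Docker route, a minimal Compose sketch might look like the following. The service names and wiring here are assumptions for illustration, not the project's actual compose file; `ollama/ollama` is the official Ollama image, and 11434 is Ollama's default API port:

```yaml
services:
  ollama:
    image: ollama/ollama        # official Ollama image
    ports:
      - "11434:11434"           # Ollama's default API port
  bot:
    build: .                    # build the bot from this repo
    env_file: .env              # supplies CLIENT_TOKEN
    depends_on:
      - ollama
```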
- Nvidia GPUs are recommended for now; support for other GPUs is still in development.
- Local use is not recommended.
- NodeJS
  - This project runs on `lts/hydrogen`.
  - This project supports any NodeJS version above `16.x.x`, to allow only ESModules.
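One way to express that Node requirement is in `package.json` (a sketch; the field values here are illustrative, not the project's actual manifest):

```json
{
  "type": "module",
  "engines": {
    "node": ">=16"
  }
}
```

Setting `"type": "module"` enforces ESModule resolution, and `engines` lets npm warn when an unsupported Node version is used.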
- Ollama
- Discord.js Docs
- Setting up Docker (Ubuntu 20.04)
discord-ollama © 2023 by Kevin Dang is licensed under CC BY 4.0