Join the Discord to get installation help, report bugs, or show off the crazy conversations your pawns have!
Requires .NET 9:
- The latest release should contain all the .NET files you need.
RimDialogue Local can be run with:
- A local LLM using Ollama, or
- A cloud LLM using an API key.
To run RimDialogue with a local LLM:
- Download and install Ollama: Ollama Download Page
- Installation and configuration instructions: Ollama GitHub Repository
System Requirements:
Depending on the model you choose, Ollama may require a powerful machine.
Llama 3.2 3B and Llama 3.2 1B are small enough to run on most machines.
Find more models here: Ollama Model Library
If you have an AMD GPU, you may want to use a build such as ollama-for-amd.
- Go to the model page on the Ollama website and copy the model name.
- Open a command prompt by typing `cmd` in the Windows search bar.
- In the command window, type `ollama pull <paste model name>` and press Enter. The model should now start downloading.
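For example, if you chose the Llama 3.2 3B model mentioned above (listed as `llama3.2` in the Ollama Model Library), the command would be:

```
ollama pull llama3.2
```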
If you prefer uncensored models, here are some options:
- Llama 2 Uncensored
- Mistral Uncensored
- Orca2 Uncensored
- Mixtral Uncensored
- Zephyr Uncensored
- Wizard Vicuna Uncensored
- WizardLM Uncensored
If running a local LLM isn’t an option, you can use an API key for a cloud-hosted LLM.
Supported Providers:
- Groq
  Get API keys: Groq API Keys. Groq offers a free tier, but it's heavily throttled.
- AWS & OpenAI
  Warning: These options are complex to configure and may incur high costs if set up improperly.
- Go to the RimDialogue Server Releases.
- Download the latest `RimDialogueLocalServer_<version>.zip`.
- Unzip it to a directory of your choice.
- Open the `appsettings.json` file in a text editor.
- In the `MODELS` area, configure your provider (see the sketch below):
  - For Ollama:
    - Ensure `OllamaUrl` points to the correct port (default: `11434`).
    - Set `OllamaModelId` to your chosen model (e.g., `"llama3.2"`).
  - For Cloud Providers:
    - Fill in your API credentials in the `MODELS` section for your provider (e.g., `GroqApiKey` and `GroqModelId` for Groq).
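  As a rough sketch only (the exact layout of `appsettings.json` may differ between releases; the field names are the ones referenced above, and the Ollama URL assumes its default local port), the `MODELS` area might look like this:

  ```json
  {
    // Ollama (local LLM), assuming the default local port
    "OllamaUrl": "http://localhost:11434",
    "OllamaModelId": "llama3.2",

    // Groq (cloud LLM): only needed if you use Groq instead of Ollama
    "GroqApiKey": "<your Groq API key>",
    "GroqModelId": "<your chosen Groq model id>"
  }
  ```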
- Adjust optional settings:
  - RateLimit: Sets the number of requests per second allowed.
    - For Ollama you can set this higher (0.5–1.0) depending on your machine.
    - For cloud providers this will depend on your token budget. If your provider limits requests per minute, turn this down (divide the allowed requests per minute by 60 to get the value):
      - 0.016667 is 1 request per minute.
      - 0.166667 is 10 requests per minute.
      - 0.416667 is 25 requests per minute.
  - MaxPromptLength: Limits prompt size before truncation.
    - Set it lower for tight input token budgets or higher (40,000–50,000) for local setups.
  - Options Settings:
    - Enable / disable additional prompt data with the boolean options under `//OPTIONS SETTINGS`.
    - If your cloud provider limits input tokens, turn some of these settings off to reduce the prompt size and the number of input tokens.
  - Server Port:
    - By default, the server runs on port `7293`. Change this in the `Urls` field if needed.
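As a rough illustration (again, only the setting names come from the list above; their exact placement in `appsettings.json` may differ), these optional settings might look like:

```json
{
  "Urls": "http://localhost:7293",  // change the port here if needed
  "RateLimit": 0.5,                 // 0.5 requests/second suits a local Ollama setup; use e.g. 0.166667 for 10 requests per minute on a throttled cloud tier
  "MaxPromptLength": 40000          // prompts longer than this are truncated
}
```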
- Subscribe to the mod on Steam: RimDialogue Mod Page
- Alternatively, launch RimWorld, press Mods, and use the Steam Workshop browser to subscribe.
- From RimWorld’s main menu, press Mods.
- Find RimDialogue in the left column.
- Click Enable in the mod description window.
- Press Save and apply changes.
- From the main menu, press Options.
- Select Mod Options.
- Choose RimDialogue.
- Scroll to the Server URL setting and set it to `http://localhost:7293/`. Adjust the port if you changed it during server configuration.
- Restart RimWorld.
- Go back into the RimDialogue mod options and select the model you configured with the MODEL button.
- Run `RimDialogueLocal.exe` from the installation folder.
- Start your RimWorld game normally.
Contributions are welcome! Feel free to open issues or pull requests.
This project is licensed under the CC BY-NC-SA 4.0 International License.