SomeOddCodeGuy

I'm a developer who's been exploring the world of LLMs as a hobby since 2023. My main focus is on locally run, offline LLMs, which I mostly use for even more hobby tinkering.

I do most of my development locally, and most of my work happens on weekends; usually I'm too tired after work to do much on weekday evenings.

I'm quite passionate about the power of workflows with LLMs, and as a developer I generally prefer manual, chat-style interfacing with LLMs powered by workflows over leaving a task to an automated agent. There are some exceptions, however; web searching is a great use of agents, IMO.

But as a developer, with the current tech (as of 2025-08), I feel that I can iterate faster and cleaner sitting in between the AI and my code.

Update: 2025-12-07

I haven't vanished or quit working on Wilmer; I'm currently in the middle of a job transition, and it's taking a fair bit of my time. I'm still doing Wilmer work in the background, but I don't have a lot of time to do clean testing and ensure a push is good, so I'm holding off pushing anything until I hit the point that I can. I'm using Wilmer daily, so don't take my absence as meaning I gave up on it. Far from it. I've just been spending more time making workflows than updating the code itself.


I started Wilmer during the Llama 2 era based on the idea that open-weight models at the time were weak as generalists compared to the big proprietary models like ChatGPT; however, individual fine-tunes within scoped domains (like coding or medical) could often compete with those big models. My goal has always been to try to find a way, either through routing or workflows, to help my local models keep pace with the big APIs.

Obviously, modern open-weight models are strong enough to not need that help nearly as much, but that just means the same methods can push those models even farther.

I'm not a Python developer by trade; I picked it up to work on Wilmer, and I've been learning it ever since. Some of the mess in the codebases here is tech debt from my fumbling along early on, before I understood the language better. In my day job, I'm a dev manager who mostly works with C# and web tech.

Posts


  • This started as a Reddit post that I decided to turn into an article, since I think it was something a lot of folks were interested in reading.
  • This guide is a bit older now, but it still applies. I've automated a lot of it in workflows, but when I'm somewhere like work, I still make use of these techniques.
  • Many of my Wilmer workflows are in some part inspired by the general workflows I describe here.

Mac Studio Speed Tests

Comparison Speed Tests

MMLU-Pro Home Runs To Compare Effects Of Quantization



Pinned

  1. WilmerAI

    WilmerAI is one of the oldest LLM semantic routers. It uses multi-layer prompt routing and complex workflows to allow you to not only create practical chatbots, but to extend any kind of applicatio…

    Python

  2. OfflineWikipediaTextApi

    This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full article text (see the rough usage sketch below).

    Python
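
Roughly, calling it from a client looks like the sketch below. The port, route, and parameter name here are placeholder assumptions for illustration only; the project's own README documents the actual interface.

```python
import requests

# Placeholder base URL -- the real host/port depend on how the API is configured.
BASE_URL = "http://localhost:5728"

def fetch_article_text(query: str) -> str:
    """Send a search query and return the full text of the best-matching article."""
    # The "/top_article" route and "query" parameter are assumptions for illustration.
    response = requests.get(f"{BASE_URL}/top_article", params={"query": query}, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Example: pull the article text to feed into a local LLM workflow as context.
    print(fetch_article_text("Large language model")[:500])
```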