- llm-tokenizer: LLM tokenizer library with caching and chat template support
- tiktoken: A high-performance pure-Rust implementation of OpenAI's tiktoken BPE tokenizer
- miktik: A unified, multi-backend tokenizer library for LLMs
- bamboo-compression: Compression utilities for Bamboo sessions and memory workflows
- riptoken: Fast BPE tokenizer for LLMs; a faster, drop-in-compatible reimplementation of tiktoken
- mkcontext: Provides functionality for creating context
- skimtoken: Fast token count estimation library
- blazen-llm: LLM provider abstraction layer for the Blazen workflow engine
- tiktoken-wasm: WASM bindings for the tiktoken BPE tokenizer
- toktrie_tiktoken: tiktoken (OpenAI BPE) library support for toktrie and llguidance
- toklab-core: Pure-Rust core for toklab; bulk tokenizer and counter for OpenAI BPE encodings
- loctok: Count LOC (lines of code) and TOK (LLM tokens), fast
- wordchipper-cli-util: Wordchipper CLI
- rustbpe: A BPE (Byte Pair Encoding) tokenizer written in Rust with Python bindings
- tiktokenx: A high-performance Rust implementation of OpenAI's tiktoken library
- tokenmonster: Greedy tiktoken-like tokenizer with embedded vocabulary (cl100k-base approximator)
- chat-splitter: Never exceed OpenAI's chat models' maximum number of tokens when using the async_openai Rust crate
- tokin: Experimental fast tokenizer
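
Most of the crates above share the same core workflow: load a BPE encoding, encode text, count tokens, and keep a prompt under a model's context limit. The sketch below illustrates that workflow. It is written against the tiktoken-rs crate's API (`cl100k_base`, `encode_with_special_tokens`) as a representative stand-in; the crates listed here may use different names, and the budget-trimming loop is illustrative rather than chat-splitter's actual interface.

```rust
// A minimal sketch of the encode/count/trim workflow, written against the
// tiktoken-rs crate (Cargo.toml: tiktoken-rs = "0.5") as a stand-in; the
// crates listed above may expose different APIs.
use tiktoken_rs::cl100k_base;

fn main() {
    // Load the embedded cl100k_base encoding (GPT-3.5/GPT-4-era vocabulary).
    let bpe = cl100k_base().expect("failed to load cl100k_base");

    // Count tokens in a prompt before sending it to a model.
    let prompt = "How many tokens will this prompt cost?";
    let n = bpe.encode_with_special_tokens(prompt).len();
    println!("{prompt:?} -> {n} tokens");

    // Budget trimming, the problem chat-splitter addresses: evict the
    // oldest messages until the history fits the model's token limit.
    // (Illustrative loop, not chat-splitter's actual API.)
    let budget = 8; // deliberately tiny so the example actually trims
    let mut history = vec![
        "system: be concise".to_string(),
        "user: hello there".to_string(),
        "assistant: hi!".to_string(),
    ];
    while history
        .iter()
        .map(|m| bpe.encode_with_special_tokens(m).len())
        .sum::<usize>()
        > budget
    {
        history.remove(0); // drop the oldest message first
    }
    println!("{} message(s) fit the budget", history.len());
}
```

Re-encoding the whole history on every loop iteration is quadratic in the worst case; caching per-message counts (the kind of caching llm-tokenizer advertises) or using a fast estimator such as skimtoken are the usual ways to make this cheap.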