🎯 Focusing
Optimizing on-device LLM training and deployment at https://kolosal.ai
- Kolosal AI
- Jakarta, Indonesia (UTC+07:00)
- https://www.linkedin.com/in/rifkybujana/
- @BisriRifky
Highlights
- Pro
Pinned
- KolosalAI/Kolosal (Public): Kolosal AI is an open-source, lightweight alternative to LM Studio for running LLMs 100% offline on your device.
- KolosalAI/kolosal-cli (Public): A super-lightweight Ollama + Qwen Code alternative for running Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.
- KolosalAI/model-memory-calculator (Public): A simple model memory requirements calculator for GGUF.
- KolosalAI/kolosal-server (Public): Kolosal AI is an open-source, lightweight alternative to Ollama for running LLMs 100% offline on your device.
- IndoBERT-QA (Public): IndoBERT Base-Uncased fine-tuned on translated SQuAD v2.0.
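The model-memory-calculator pinned above estimates memory requirements for GGUF models. A minimal sketch of the usual rule of thumb is below; this is not the repository's actual code, and the `overhead` multiplier and function name are assumptions for illustration:

```python
# Hypothetical sketch of a GGUF-style weights-memory estimate.
# NOT taken from KolosalAI/model-memory-calculator; just the common
# rule of thumb: memory ≈ parameter count × bits per weight / 8.

def estimate_model_memory_gb(n_params_billion: float,
                             bits_per_weight: float,
                             overhead: float = 1.2) -> float:
    """Rough weights memory in GiB for a quantized model.

    overhead is an assumed multiplier for runtime buffers and
    metadata, not a measured value.
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# Example: a 7B model at roughly 4.5 effective bits per weight
# (in the ballpark of GGUF Q4_K-style quantization).
print(f"{estimate_model_memory_gb(7, 4.5):.1f} GiB")
```

Such an estimate covers weights only; a real calculator would also need to account for the KV cache, which grows with context length.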