- djinni.ai
- Warsaw, Poland
- https://djinni.ai/
- https://medium.com/@maciejchalapuk
- http://softwarephilosophy.ninja/
Stars
- The open source control plane for platform engineering.
- High-performance open-source in-memory graph database for GraphRAG, AI memory, agentic AI, and real-time graph analytics. Cypher-compatible, built in C++.
- [Alpha] Build performant PixiJS apps in Svelte without a struggle ✨
- The all-in-one AI productivity accelerator. On device and privacy first with no annoying setup or configuration.
- Given some data, jsesc returns the shortest possible stringified & ASCII-safe representation of that data.
- AI Agent Engineering Platform built on an Open Source TypeScript AI Agent Framework
- The fastest JavaScript BPE Tokenizer Encoder Decoder for OpenAI's GPT models (gpt-5, gpt-o*, gpt-4o, etc.). Port of OpenAI's tiktoken with additional features.
- Rapidly develop, evaluate, and iterate on prompts & models with strongly-typed structured outputs
- 🤖 GPU accelerated Neural networks in JavaScript for Browsers and Node.js
- Provides unlimited AI answers for Node.js.
- From the team behind Gatsby, Mastra is a framework for building AI-powered applications and agents with a modern TypeScript stack.
- The batteries-included full-stack framework for the AI era. Develop JS/TS web apps (React, Node.js, and Prisma) using declarative code that abstracts away complex full-stack features like auth, bac…
- Sweep: AI coding assistant for JetBrains
- The agent engineering platform
- The AI Toolkit for TypeScript. From the creators of Next.js, the AI SDK is a free open-source library for building AI-powered applications and agents
- Easily migrate your codebase from one framework or language to another.
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- CLI platform to experiment with codegen. Precursor to: https://lovable.dev
- Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
- A curated list of practical guides and resources for LLMs (LLMs Tree, Examples, Papers)
- GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
- Accessible large language models via k-bit quantization for PyTorch.
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
- High-speed download of LLaMA, Facebook's 65B parameter GPT model