2 days ago
Menu bar app that tells you when your Mac is thermal throttling - angristan/MacThrottle
Production-ready platform for agentic workflow development. - langgenius/dify
Easy data preparation with the latest LLM-based operators and pipelines. - OpenDCAI/DataFlow
A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require effective context management. - muratcankoylan/Agent-Skills-for-Context-Engineering
This skill covers the methodology for identifying tasks suited to LLM processing, designing effective project architectures, and iterating rapidly using agent-assisted development. The principles apply whether building batch processing pipelines, multi-agent research systems, or interactive applications.
karpathy/hn-time-capsule: Analyzing Hacker News discussions from a decade ago in hindsight with LLMs
A Hacker News time capsule project that pulls the HN frontpage from exactly 10 years ago, analyzes articles and discussions using an LLM to evaluate prescience with the benefit of hindsight, and generates an HTML report.
A Model Context Protocol (MCP) server that facilitates structured, progressive thinking through defined stages. This tool helps break down complex problems into sequential thoughts, track the progression of your thinking process, and generate summaries.
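The idea behind such a server can be sketched in a few lines (all names here are hypothetical illustrations, not the server's actual MCP API): thoughts accumulate sequentially, each tagged with a stage, and a summary groups them by stage.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    stage: str    # e.g. "Problem Definition", "Analysis", "Conclusion"
    content: str
    number: int   # position in the sequential chain of thoughts

@dataclass
class ThinkingSession:
    thoughts: list = field(default_factory=list)

    def add(self, stage, content):
        # Each new thought is numbered so progression can be tracked.
        self.thoughts.append(Thought(stage, content, len(self.thoughts) + 1))

    def summary(self):
        # Group the accumulated thoughts by stage for a final summary.
        by_stage = {}
        for t in self.thoughts:
            by_stage.setdefault(t.stage, []).append(t.content)
        return by_stage
```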
Earnings Call Civilizational Score Prompt (GitHub Gist).
You are a financial historian and industry expert conducting a review of past earnings calls with the benefit of hindsight and wisdom.
In-depth tutorials on LLMs, RAGs and real-world AI agent applications. - ai-engineering-hub/fastest-rag-milvus-groq at main · patchy631/ai-engineering-hub
This project builds the fastest stack for a RAG application, with retrieval latency under 15 ms.
It pairs binary quantization for efficient retrieval with Groq's blazing-fast inference speeds.
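The binary-quantization trick can be sketched with plain NumPy (an illustration of the technique only, not the project's actual Milvus/Groq code): keep just the sign bit of each embedding dimension, then search by Hamming distance over the packed bits.

```python
import numpy as np

def binarize(vectors):
    # Binary quantization: keep only the sign of each dimension,
    # packing 8 float dims into 1 byte (a 32x memory reduction vs float32).
    return np.packbits(vectors > 0, axis=-1)

def hamming_search(query_bits, db_bits, k=3):
    # Hamming distance via XOR + bit count -- cheap on packed uint8 arrays.
    dists = np.unpackbits(query_bits ^ db_bits, axis=-1).sum(axis=-1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))            # toy "embedding" database
q = db[42] + rng.normal(scale=0.1, size=128) # noisy copy of entry 42
db_bits = binarize(db)
q_bits = binarize(q[None, :])
print(hamming_search(q_bits, db_bits))       # entry 42 should rank first
```

Small sign flips from the noise barely move the Hamming distance, which is why the quantized search still recovers the true neighbour.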
5 days ago
Features
Unified toolkit: Consolidates the features of CleanMyMac, AppCleaner, DaisyDisk, and iStat into a single binary
Deep cleaning: Scans and removes caches, logs, and browser leftovers to reclaim gigabytes of space
Smart uninstaller: Thoroughly removes apps along with launch agents, preferences, and hidden remnants
Disk insights: Visualizes usage, manages large files, rebuilds caches, and refreshes system services
Live monitoring: Real-time stats for CPU, GPU, memory, disk, and network to diagnose performance issues
08 Dec 25
Fast, filterable PR review, entirely client-side via GitHub's CORS-enabled API.
01 Dec 25
In which I talk about the process involved in switching forges, and how well that went.
29 Nov 25
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
mlabonne.github.io/blog/
Molly is a hardened version of Signal for Android, the fast, simple, yet secure messaging app by the Signal Foundation.
28 Nov 25
The “you only glance once” object detection model. Contribute to czbiohub-sf/yogo development by creating an account on GitHub.
A variant of the YOLO architecture (drawing on versions 1 through 3), optimized for inference speed on simple object detection problems. Designed for the Remoscope project by the bioengineering team at the Chan Zuckerberg Biohub SF.
Use mkslides to easily turn markdown files into beautiful slides using the power of Reveal.js!
MkSlides is a static site generator that’s geared towards building slideshows. Slideshow source files are written in Markdown, and configured with a single YAML configuration file. The workflow and commands are heavily inspired by MkDocs and reveal-md.
Lightning-fast, on-device TTS — running natively via ONNX. - supertone-inc/supertonic
Supertonic is a lightning-fast, on-device text-to-speech system designed for extreme performance with minimal computational overhead. Powered by ONNX Runtime, it runs entirely on your device—no cloud, no API calls, no privacy concerns.
27 Nov 25
Verbalized Sampling is a training-free prompting strategy that mitigates mode collapse in LLMs by requesting responses together with their probabilities. It achieves 2-3x diversity improvement while maintaining quality, packaged as a model-agnostic framework with CLI/API for creative writing, synthetic data generation, and dialogue simulation. - CHATS-lab/verbalized-sampling
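The core prompting trick can be sketched as follows (the prompt wording and the `prob :: response` output format are illustrative assumptions, not the library's actual CLI/API): ask the model to verbalize a small distribution of candidates with probabilities, then sample from it.

```python
import random
import re

def vs_prompt(task, k=5):
    # Verbalized Sampling: instead of one answer, ask the model to
    # verbalize k candidates with probabilities, countering the mode
    # collapse typical of RLHF-tuned models.
    return (
        f"{task}\n"
        f"Generate {k} responses with their corresponding probabilities, "
        f"sampled from the full distribution. Format each line as "
        f"'<probability> :: <response>'."
    )

def sample_response(model_output, seed=0):
    # Parse '<prob> :: <text>' lines and draw one response in
    # proportion to its verbalized probability.
    rng = random.Random(seed)
    pairs = [(float(p), t.strip())
             for p, t in re.findall(r"([\d.]+)\s*::\s*(.+)", model_output)]
    probs, texts = zip(*pairs)
    return rng.choices(texts, weights=probs, k=1)[0]

# Example model output (illustrative text, not a real completion):
output = "0.6 :: The quick fox\n0.3 :: A silent owl\n0.1 :: Neon rivers"
print(sample_response(output))
```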
Open-source memory engine for LLMs and AI agents
What is Memori?
Memori enables any LLM to remember conversations, learn from interactions, and maintain context across sessions with a single line: memori.enable(). Memory is stored in standard SQL databases (SQLite, PostgreSQL, MySQL) that you fully own and control.
Why Memori?
One-line integration - Works with OpenAI, Anthropic, LiteLLM, LangChain, and any LLM framework
SQL-native storage - Portable, queryable, and auditable memory in databases you control
80-90% cost savings - No expensive vector databases required
Zero vendor lock-in - Export your memory as SQLite and move anywhere
Intelligent memory - Automatic entity extraction, relationship mapping, and context prioritization
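The SQL-native storage idea can be illustrated with the stdlib sqlite3 module (a generic sketch; this table schema is invented and is not Memori's actual schema): memory lives in an ordinary table you can query, audit, and carry to PostgreSQL or MySQL by swapping the connection.

```python
import sqlite3

# A toy "memory" table: one row per remembered message, keyed by session.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        session_id TEXT,
        role       TEXT,
        content    TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def remember(session_id, role, content):
    conn.execute(
        "INSERT INTO memory (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content))

def recall(session_id, limit=10):
    # Plain SQL makes the memory queryable and auditable with any client.
    rows = conn.execute(
        "SELECT role, content FROM memory WHERE session_id = ? "
        "ORDER BY rowid DESC LIMIT ?", (session_id, limit))
    return list(rows)[::-1]  # oldest first

remember("s1", "user", "My name is Ada.")
remember("s1", "assistant", "Nice to meet you, Ada.")
print(recall("s1"))
```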