Stars
An agentic skills framework & software development methodology that works.
The data layer for smarter educational AI. Integrate trusted instructional content and research directly into your AI-powered tools, improving precision, relevance, and instructional alignment.
A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES.
"DeepTutor: AI-Powered Personalized Learning Assistant"
ryoppippi / sitemcp
Forked from egoist/sitefetch. Fetch an entire site and use it as an MCP Server
Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
Use LLMs to stream diagrams, instead of tokens, in real time! (UIST 2023 Paper)
A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.
You were probably looking for our website... this is it. We moved our website here, so you can see the insides of how we work.
Refine high-quality datasets and visual AI models
4DHumans: Reconstructing and Tracking Humans with Transformers
[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Extract data from all Google Scholar pages from a single Python module.
Guides, papers, lessons, notebooks and resources for prompt engineering, context engineering, RAG, and AI Agents.
Aligning pretrained language models with instruction data generated by themselves.
Running large language models on a single GPU for throughput-oriented scenarios.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and video, up to 5x faster than OpenAI CLIP and LLaVA
MERLOT: Multimodal Neural Script Knowledge Models
f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source; self-host for your organization with complete privacy.
Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!)
Code for the paper titled "CiT: Curation in Training for Effective Vision-Language Data".