Stars
An app that brings language models directly to your phone.
🚀 Simple share intent in an Expo Native Module
A TTS model capable of generating ultra-realistic dialogue in one pass.
Mixins for Tailwind CSS provides a declarative API for creating reusable groups of utilities, reducing code duplication and improving maintainability while emphasizing a utility-first approach.
XState helper for using asynchronous guards.
Our current standard for podcast RSS feeds.
React Native Expo Share Intent Demonstration
A multilingual text-to-speech synthesis system for ten lower-resourced Turkic languages: Azerbaijani, Bashkir, Kazakh, Kyrgyz, Sakha, Tatar, Turkish, Turkmen, Uyghur, and Uzbek.
🦹‍♂️ Twin blends the magic of Tailwind with the flexibility of css-in-js (emotion, styled-components, solid-styled-components, stitches and goober) at build time.
Browser extension template built with esbuild, with support for React, Preact, TypeScript, Tailwind, Manifest V2/V3, and multi-browser builds for Chrome, Firefox, Safari, Edge, and Brave.
WavJourney: Compositional Audio Creation with LLMs
A local-first and universal knowledge graph, personal search engine, and workspace for your life.
Export Hugging Face models to Core ML and TensorFlow Lite
Swift Core ML 3 implementations of GPT-2, DistilGPT-2, BERT, and DistilBERT for question answering. Other Transformers coming soon!
Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access and manage all AI compute (Kubernetes, Slurm, 20+ clouds, on-prem).
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
[Unmaintained, see README] An ecosystem of Rust libraries for working with large language models
Fast inference engine for Transformer models
🔊 Text-Prompted Generative Audio Model
Source code for the X Recommendation Algorithm
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
Robust Speech Recognition via Large-Scale Weak Supervision
Code and documentation to train Stanford's Alpaca models, and generate the data.