BGE-Large v1.5: High-accuracy English embedding model for retrieval
Compact English sentence embedding model for semantic search tasks
Facebook AI Research Sequence-to-Sequence Toolkit
CLIP model fine-tuned for zero-shot fashion product classification
OpenAI’s open-weight 120B model optimized for reasoning and tool use
OpenAI’s compact 20B open model for fast, agentic, and local use
Tiny pre-trained IBM model for multivariate time series forecasting
Multimodal Transformer for document image understanding and layout
A minimal PyTorch re-implementation of the OpenAI GPT
CTC-based forced aligner for audio-text in 158 languages
LLaMA: Open and Efficient Foundation Language Models
Robust BERT-based model for English with improved MLM training
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Flexible text-to-text transformer model for multilingual NLP tasks
T5-Small: Lightweight text-to-text transformer for NLP tasks
Metric monocular depth estimation (vision model)
Portuguese ASR model fine-tuned on XLSR-53 for 16kHz audio input
Russian ASR model fine-tuned on Common Voice and CSS10 datasets
Hackable and optimized Transformers building blocks