Millennium Science School
Beijing, China (UTC +08:00)
@llamafactory_ai
https://huggingface.co/hiyouga
Starred repositories
21 Lessons, Get Started Building with Generative AI
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable…
A High-Quality Real Time Upscaler for Anime Video
Instruct-tune LLaMA on consumer hardware
Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
📡 Simple and ready-to-use tutorials for TensorFlow
Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.
Sample code and notebooks for Generative AI on Google Cloud, with Gemini on Vertex AI
LangGPT: Empowering everyone to become a prompt expert! 🚀 📌 Originator of the Structured Prompt 📌 Initiator of the Meta-Prompt 📌 The most popular paradigm for putting prompts into practice | Language of GPT. The pioneering framework for structured & meta-prompt…
QLoRA: Efficient Finetuning of Quantized LLMs
Public facing notes page
My continuously updated Machine Learning, Probabilistic Models and Deep Learning notes and demos (2000+ slides), with links to videos
A series of large language models trained from scratch by developers @01-ai
Flax is a neural network library for JAX that is designed for flexibility.
A course on aligning smol models.
"Probabilistic Machine Learning" - a book series by Kevin Murphy
A Bulletproof Way to Generate Structured JSON from Language Models
Democratizing Reinforcement Learning for LLMs
Fault-tolerant, highly scalable GPU orchestration and a machine learning framework designed for training models with billions to trillions of parameters
PyTorch implementation of DQN / DDQN / Prioritized Replay / Noisy Networks / Distributional Values / Rainbow / Hierarchical RL
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts…
Coding the Machine Learning Tutorial for Learning to Learn
Doing simple retrieval from LLM models at various context lengths to measure accuracy