Zhipu AI
Tsinghua University, Beijing, China
Starred repositories
Python — 100 Days from Novice to Master
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 or handson-mlp instead.
《开源大模型食用指南》 (Open-Source LLM User Guide): a tutorial, tailored for Chinese beginners, on rapidly fine-tuning (full-parameter/LoRA) and deploying domestic and international open-source large language models (LLMs) and multimodal large models (MLLMs) in a Linux environment
《动手学大模型 Dive into LLMs》: a series of hands-on programming tutorials
An introductory LLM tutorial for developers; the Chinese edition of Andrew Ng's large language model course series
PyTorch Handbook is an open-source book that aims to help readers who want to use PyTorch for deep learning development and research get started quickly; all included PyTorch tutorials are tested and guaranteed to run
Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
llama3 implementation one matrix multiplication at a time
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
🤖 💬 Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
This repo is meant to serve as a guide for Machine Learning/AI technical interviews.
PyTorch🍊🍉 is delicious, just eat it! 😋😋
Code for the book Deep Learning with PyTorch by Eli Stevens, Luca Antiga, and Thomas Viehmann.
LLM Zoomcamp - a free online course about real-life applications of LLMs. In 10 weeks you will learn how to build an AI system that answers questions about your knowledge base.
Acceptance rates for the major AI conferences
《大模型白盒子构建指南》 (White-Box Guide to Building LLMs): a completely hand-built Tiny-Universe
Book_7_《机器学习》 (Machine Learning) | The Iris Book series: from basic arithmetic to machine learning; feedback and corrections welcome
Datasets, tools, and benchmarks for representation learning of code.
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
Chinese translation of Approaching (Almost) Any Machine Learning Problem; online edition: https://ytzfhqs.github.io/AAAMLP-CN/
MAI-UI: Real-World Centric Foundation GUI Agents ranging from 2B to 235B
Representation Engineering: A Top-Down Approach to AI Transparency
Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025]
High-speed simulator of convolutional spiking neural networks with at most one spike per neuron.
Penguin-VL: Exploring the Efficiency Limits of VLM with LLM-based Vision Encoders [Technical Report]