National Taiwan University, Taiwan
Stars
A complete computer science study plan to become a software engineer.
Master the command line, in one page
Empowering everyone to build reliable and efficient software.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
FastAPI framework, high performance, easy to learn, fast to code, ready for production (see the minimal app sketch after this list)
Robust Speech Recognition via Large-Scale Weak Supervision (see the transcription sketch after this list)
Models and examples built with TensorFlow
The fastest path to AI-powered full stack observability, even for lean teams.
The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
🧑‍🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans, and more
Iconic font aggregator, collection, & patcher. 3,600+ icons, 50+ patched fonts: Hack, Source Code Pro, more. Glyph collections: Font Awesome, Material Design Icons, Octicons, & more
《代码随想录》 LeetCode problem-solving guide: a recommended order for 200 classic problems, 600,000+ words of detailed illustrated explanations, video breakdowns of the hard points, 50+ mind maps, and solutions in C++, Java, Python, Go, JavaScript, and more, so learning algorithms is no longer confusing! 🔥🔥 Take a look and you'll wish you had found it sooner! 🚀
🌐 Jekyll is a blog-aware static site generator in Ruby
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
An attempt to answer the age-old interview question "What happens when you type google.com into your browser and press enter?"
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more (see the composition sketch after this list)
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
A collection of modern/faster/saner alternatives to common unix commands.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
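For the FastAPI entry above, a minimal sketch of a single-route app, assuming the usual fastapi and uvicorn install; the module name and route path are illustrative, not from the original description.

```python
# minimal_app.py - a minimal FastAPI app (illustrative module name and route)
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health_check():
    # FastAPI serializes the returned dict to JSON automatically.
    return {"status": "ok"}

# Run with: uvicorn minimal_app:app --reload
```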
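For the Whisper entry, a hedged sketch of loading a pretrained model and transcribing a local recording with the openai-whisper package; the model size ("base") and the audio path are placeholders.

```python
# Transcribe an audio file with OpenAI Whisper
# (requires: pip install openai-whisper, plus ffmpeg on the system path).
import whisper

# "base" is one of the published model sizes; larger models trade speed for accuracy.
model = whisper.load_model("base")

# "audio.mp3" is a placeholder path to a local recording.
result = model.transcribe("audio.mp3")
print(result["text"])
```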
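For the JAX entry, a small sketch of the composable transformations it names: grad to differentiate, vmap to vectorize over a batch, and jit to compile; the toy function and shapes are illustrative assumptions.

```python
# Composing JAX transformations: differentiate, vectorize, and JIT-compile one function.
import jax
import jax.numpy as jnp

def loss(w, x):
    # A toy scalar function of parameters w and a single input x.
    return jnp.sum((x @ w) ** 2)

# grad differentiates with respect to the first argument,
# vmap maps over a batch of x values, and jit compiles the result with XLA.
batched_grad = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0)))

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)   # a batch of 4 inputs
print(batched_grad(w, xs).shape)      # (4, 3): one gradient per batch element
```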