- Viettel AI
- Ha Noi, Viet Nam
- https://www.linkedin.com/in/dangnh0611/
- https://www.kaggle.com/dangnh0611
Stars
- In Pursuit of Pixel Supervision for Visual Pre-training
- Deepnote is a drop-in replacement for Jupyter with an AI-first design, sleek UI, new blocks, and native data integrations. Use Python, R, and SQL locally in your favorite IDE, then scale to Deepnote…
- DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space
- Fast and Accurate ML in 3 Lines of Code (a sketch of this three-line flow follows the list)
- An extremely fast Python package and project manager, written in Rust.
- 🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT, and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc. 🎉
- Free-to-use online tool for labelling photos. https://makesense.ai
- Supercharge Your LLM with the Fastest KV Cache Layer
- Python Fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object (see the minimal example after this list).
- Completion After Prompt Probability: make your LLM make a choice (the idea is sketched after this list).
- A project to improve the skills of large language models
- Scalable toolkit for efficient model reinforcement
- GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation
- A powerful tool for creating fine-tuning datasets for LLMs
- GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning
- Muon is an optimizer for hidden layers in neural networks (its core Newton-Schulz step is sketched after this list).
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
- Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous …
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada and Blackwell GPUs, to provide better performance…
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- PyTorch-native quantization and sparsity for training and inference
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
- Ongoing research training transformer models at scale
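
The "Fast and Accurate ML in 3 Lines of Code" tagline matches AutoGluon's; assuming that is the starred repo, the three-line flow looks roughly like this (the label column and file paths are placeholders):

```python
from autogluon.tabular import TabularPredictor

# Fit on a labeled CSV, letting AutoGluon choose models and hyperparameters,
# then predict on held-out data. "class", train.csv and test.csv are placeholders.
predictor = TabularPredictor(label="class").fit("train.csv")
predictions = predictor.predict("test.csv")
```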
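Python Fire's entry point wraps an arbitrary object and exposes its arguments as CLI flags; a minimal sketch (the function and flag names here are illustrative, not from the repo):

```python
import fire

def greet(name: str, excited: bool = False) -> str:
    """Fire maps function parameters to CLI flags automatically."""
    return f"Hello, {name}{'!' if excited else '.'}"

if __name__ == "__main__":
    # Invoked as: python greet.py --name=World --excited
    fire.Fire(greet)
```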
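"Completion After Prompt Probability" describes scoring each candidate completion by the probability the model assigns it after the prompt and picking the best one. A from-scratch sketch of that idea with Hugging Face transformers (this is not the starred library's own API, and gpt2 is just a small model for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_completion_logprob(prompt: str, completion: str) -> float:
    """Mean log P(completion tokens | prompt) under the model."""
    n_prompt = tok(prompt, return_tensors="pt").input_ids.size(1)
    ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position t predicts token t+1, so shift logits/targets by one.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, ids[:, 1:, None]).squeeze(-1)
    # Keep only the positions that predict completion tokens.
    return token_lp[0, n_prompt - 1:].mean().item()

# Leading spaces keep BPE tokenization clean at the prompt/completion boundary.
choices = [" positive", " negative"]
prompt = "Review: great movie. Sentiment:"
best = max(choices, key=lambda c: avg_completion_logprob(prompt, c))
```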
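Muon's core step, as I understand its public reference implementation, is approximately orthogonalizing the momentum buffer of each 2-D weight matrix with a quintic Newton-Schulz iteration before applying the update; a sketch of that step under that assumption:

```python
import torch

def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2-D update matrix, the step Muon
    applies to the momentum buffer before scaling the weight update."""
    assert G.ndim == 2
    # Quintic coefficients as given in the public Muon reference code.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)  # normalize so the iteration converges
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X
```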