- NYU
- New York
- https://jason-cs18.github.io/
- https://yanlu.substack.com/
Starred repositories
A library for efficient similarity search and clustering of dense vectors (see the usage sketch after this list).
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
A brief computer graphics / rendering course
ncnn is a high-performance neural network inference framework optimized for the mobile platform
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM Android App: [MNN-LLM-Android](./apps/Android/MnnLlmChat/READ…
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
High-speed Large Language Model Serving for Local Deployment
Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.
Implementation of popular deep learning networks with TensorRT network definition API
PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge
Transformer-related optimization, including BERT and GPT
A Python-embedded modeling language for convex optimization problems (see the sketch after this list).
A standalone C++ library for machine learning
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is distinguished by several outstanding features, including its…
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
An optimization-based multi-sensor state estimator
🛠A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc.🎉
HIP: C++ Heterogeneous-Compute Interface for Portability
fastllm is a high-performance large language model inference library with no backend dependencies. It supports both tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with more than 10 GB of memory can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps with a single concurrent request; the INT4-quantized model reaches 30 tps single-stream and 60+ tps under concurrency.
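For the similarity-search library starred above (faiss), a minimal sketch of exact nearest-neighbor search, assuming faiss and numpy are installed; the random vectors are purely illustrative:

```python
# Minimal exact L2 nearest-neighbor search with faiss (random data for illustration only).
import numpy as np
import faiss

d = 64                                               # vector dimensionality
xb = np.random.random((10000, d)).astype("float32")  # database vectors
xq = np.random.random((5, d)).astype("float32")      # query vectors

index = faiss.IndexFlatL2(d)                 # exact (brute-force) L2 index
index.add(xb)                                # index the database vectors
distances, neighbors = index.search(xq, 4)   # 4 nearest neighbors per query
print(neighbors)                             # row i holds the ids of xq[i]'s neighbors
```

Approximate indexes (IVF, HNSW) follow the same add/search pattern once trained, which is why the flat index is a convenient starting point.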
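For the convex-optimization modeling language starred above (CVXPY), a minimal sketch assuming cvxpy and numpy are installed; nonnegative least squares is just an illustrative problem choice:

```python
# Nonnegative least squares written in CVXPY's modeling syntax (illustrative data).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = cp.Variable(5)                                  # decision variable
objective = cp.Minimize(cp.sum_squares(A @ x - b))  # least-squares objective
constraints = [x >= 0]                              # elementwise nonnegativity
problem = cp.Problem(objective, constraints)
problem.solve()                                     # uses an installed solver

print("optimal value:", problem.value)
print("x:", x.value)
```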