Work at Tencent-WXG. Focus on model inference optimization, such as inference engines and model compression.
- Shanghai
Stars: 1 starred source repository written in C++
PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎)