@vllm-project maintainer | vLLM performance @RedHatOfficial | MIT B.S. 2022, M. Eng. 2023 | HPC, C++ & CUDA
Red Hat Cambridge, MA
proexpertprog.github.io
youkaichao
youkaichao
Ph.D. from Tsinghua University. Core maintainer of @vllm-project.
Co-Founder & Chief Scientist @Inferact.
@vllm-project Beijing, China
Jiangyun Zhu
ZJY0516
[ML]SYS. Currently interning at @Inferact, building vLLM and vLLM-Omni. Also at OSLab@ISCAS. Previously at NJU.
@Inferact Beijing, China
Taneem Ibrahim
taneem-ibrahim
Engineering Director and Principal Software Engineer
AI Inference @ Red Hat USA
Greg Pereira
Gregory-Pereira
Sr. Machine Learning Engineer @ Red Hat | Inference Engineering | Building llm-d: distributed inference for LLMs on Kubernetes
@RedHatOfficial @llm-d San Francisco
Alessandro Sangiorgi
fulvius31
Senior SWE, Emerging Tech (OCTO) @ Red Hat; developer of the popular Android networking tool WIFI WPS WPA TESTER; M.S. in Computer Engineering (Italy) and M.S. in Computer Science (USA)
@RedHatOfficial @redhat-et
Neural Magic
neuralmagic
Neural Magic (acquired by Red Hat) empowers developers to optimize & deploy LLMs at scale. Our model compression & acceleration techniques enable top performance with vLLM.
Boston