Organizations

@cryptovoting

@youkaichao
youkaichao youkaichao
Ph.D. from Tsinghua University. Core maintainer of @vllm-project. Co-Founder & Chief Scientist @Inferact.

@vllm-project Beijing, China

@baonudesifeizhai
李火旺 baonudesifeizhai

Stevens Institute of Technology

@ZJY0516
Jiangyun Zhu ZJY0516
[ML]SYS. Currently interning at @Inferact, building vLLM and vLLM-Omni. Also at OSLab@ISCAS. Previously at NJU.

@Inferact Beijing, China

@taneem-ibrahim
Taneem Ibrahim taneem-ibrahim
Engineering Director and Principal Software Engineer

AI Inference @ Red Hat USA

@Gregory-Pereira
Greg Pereira Gregory-Pereira
Sr. Machine Learning Engineer @ Red Hat | Inference Engineering | Building llm-d: distributed inference for LLMs on Kubernetes

@RedHatOfficial @llm-d San Francisco

@fulvius31
Alessandro Sangiorgi fulvius31
Senior SWE, Emerging Tech (OCTO) @ Red Hat; developer of the popular Android networking tool WIFI WPS WPA TESTER; MS in Computer Engineering (Italy) and MS in Computer Science (USA)

@RedHatOfficial @redhat-et

@bohnstingl
Thomas Ortner bohnstingl

IBM Research Europe Säumerstrasse 4, 8803 Rüschlikon

@rukalucas
Lucas Shoji rukalucas
Brain and cognitive sciences, AI, and physics.

MIT Cambridge, MA

@neuralmagic
Neural Magic neuralmagic
Neural Magic (acquired by Red Hat) empowers developers to optimize & deploy LLMs at scale. Our model compression & acceleration techniques enable top performance with vLLM.

Boston