FlashInfer: Kernel Library for LLM Serving
This is Meta's fork of the CPython runtime. The name "cinder" here is historical, see https://github.com/facebookincubator/cinderx for the Python extension / JIT compiler.
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and NCNN.
DSL and compiler framework for automated finite-difference and stencil computation
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
A basic x86-64 JIT compiler written from scratch in stock Python
A JIT compiler for hybrid quantum programs in PennyLane
A differentiable physics engine and multibody dynamics library for control and robot learning.
A Squeak/Smalltalk VM written in RPython.
WebAssembly interpreter in RPython
MagnetiCalc calculates the magnetic field of arbitrary coils.
GPU-accelerated Stain Normalization and Augmentation in PyTorch
A JIT compiled chess engine which traverses the search tree in batches in a best-first manner, allowing for neural network batching, asynchronous GPU use, and vectorized CPU computations.
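The common thread in the repositories above is just-in-time compilation: emitting machine code at runtime and jumping into it. The "basic x86-64 JIT compiler written from scratch in stock Python" idea can be sketched in a few lines using only the standard library. This is a minimal illustration, not code from any of the listed projects: it maps an executable page, writes hand-assembled x86-64 bytes for an integer-add function, and calls it through ctypes. It assumes a SysV x86-64 platform (e.g. Linux) that permits writable+executable anonymous mappings; hardened or non-x86 systems will reject or crash on it.

```python
import ctypes
import mmap

# Hand-assembled x86-64 machine code (SysV calling convention:
# first two int args arrive in edi and esi, result returns in eax):
#   89 F8    mov eax, edi
#   01 F0    add eax, esi
#   C3       ret
code = bytes([0x89, 0xF8, 0x01, 0xF0, 0xC3])

# Map one anonymous page that is readable, writable, and executable.
# NOTE: many hardened kernels enforce W^X and will refuse this.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Treat the page's address as a C function pointer: int f(int, int).
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
add = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)(addr)

print(add(2, 3))  # 5
```

A real JIT layers a code generator, register allocation, and guards on top of this same primitive: get executable memory, fill it with instructions, call it.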