Starred repositories
AI coding assistant skill (Claude Code, Codex, OpenCode, Cursor, Gemini CLI, and more). Turn any folder of code, SQL schemas, R scripts, shell scripts, docs, papers, images, or videos into a querya…
Contains company-wise interview questions, sorted by frequency and by all-time occurrence
Composable HLS library for rapid development of LLM accelerators. FlexLLM enables spatial-temporal hybrid architectures, with parameterized module templates customized for the prefill and decode s…
TeLLMe: An Efficient End-to-End Ternary LLM Prefill and Decode Accelerator with Table-Lookup Matmul on Edge FPGAs [FPGA2026]
Run a 1-billion parameter LLM on a $10 board with 256MB RAM
Official inference framework for 1-bit LLMs
Access the system clipboard from anywhere using the ANSI OSC52 sequence
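The OSC 52 mechanism mentioned above works by writing a terminal escape sequence containing base64-encoded text; a supporting terminal emulator places that text on the system clipboard. A minimal sketch (not this repo's actual code) looks like:

```shell
# Copy a string to the system clipboard via the ANSI OSC 52 sequence.
# Works over SSH too, since the terminal emulator on the local machine
# interprets the sequence. Requires a terminal with OSC 52 support.
text="hello"
b64=$(printf '%s' "$text" | base64)   # payload must be base64-encoded
# ESC ] 52 ; c ; <base64> BEL  — "c" selects the clipboard target
printf '\033]52;c;%s\a' "$b64"
```

Note the terminal, not the shell, performs the actual clipboard write, which is why this works from inside remote sessions where no clipboard utility is installed.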
Terminal weapon to search, watch, and keep track of anime.
Allo Accelerator Design and Programming Framework (PLDI'24)
Code accompanying the FlexLLM paper presented at NSDI '26; the artifact was evaluated by the artifact evaluation committee.
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Alveo Collective Communication Library: MPI-like communication operations for Xilinx Alveo accelerators
Ready-to-link, packaged Aurora IP on four QSFP28 lanes, providing 100Gb/s throughput, flow control and status monitoring
A list of awesome compiler projects and papers for tensor computation and deep learning.
TAPA compiles task-parallel HLS programs into high-performance FPGA accelerators. UCLA-maintained. A community-maintained version with binary releases is at https://github.com/tuna/tapa.
A script allowing you to download images and videos from Telegram web even if the group restricts downloading.
Efficiently computes derivatives of NumPy code.
This repository contains the CHERI extension specification, which adds hardware capabilities to the RISC-V ISA to enable fine-grained memory protection and scalable compartmentalization.