Showing results

OmX - Oh My codeX: Your codex is not alone. Add hooks, agent teams, HUDs, and so much more.

TypeScript 5,887 471 Updated Apr 1, 2026

Teams-first multi-agent orchestration for Claude Code

TypeScript 19,459 1,486 Updated Apr 1, 2026

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

TypeScript 343,457 67,979 Updated Apr 1, 2026

The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.

JavaScript 127,902 17,777 Updated Apr 1, 2026

Use Garry Tan's exact Claude Code setup: 23 opinionated tools that serve as CEO, Designer, Eng Manager, Release Manager, Doc Engineer, and QA

TypeScript 59,808 7,859 Updated Apr 1, 2026

AI agents that automatically run research on single-GPU nanochat training

Python 62,853 8,801 Updated Mar 26, 2026

A collection of AI Agents papers (Updated biweekly)

1,240 93 Updated Mar 12, 2026

The open source coding agent.

TypeScript 133,949 14,429 Updated Apr 1, 2026

Stay in flow while building with AI

TypeScript 60 6 Updated Feb 15, 2026

Ralph is an autonomous AI agent loop that runs repeatedly until all PRD items are complete.

TypeScript 14,159 1,443 Updated Feb 2, 2026

A set of ready-to-use Agent Skills for research, science, engineering, analysis, finance, and writing.

Python 16,948 1,847 Updated Mar 30, 2026

Official Notion Skills for Claude - step-by-step guides for Notion workflows

17 5 Updated Dec 16, 2025

Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length"

Python 1,958 222 Updated Mar 30, 2026

Light Image Video Generation Inference Framework

Python 2,127 175 Updated Apr 1, 2026

Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, as well as generating speech in real time.

Jupyter Notebook 3,608 244 Updated Jan 8, 2026

State is a machine learning model that predicts cellular perturbation response across diverse contexts

Python 558 148 Updated Feb 24, 2026

Jupyter Notebook 27 5 Updated Aug 29, 2025

We present StableAvatar, the first end-to-end video diffusion transformer, which synthesizes infinite-length high-quality audio-driven avatar videos without any post-processing, conditioned on a re…

Python 1,217 107 Updated Jan 20, 2026

Unlimited-length talking video generation that supports image-to-video and video-to-video generation

Python 134 18 Updated Aug 17, 2025

Unlimited-length talking video generation that supports image-to-video and video-to-video generation

Python 5,220 865 Updated Dec 18, 2025

[AAAI 2026] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation

66 3 Updated Aug 20, 2025

Real-time Claude Code usage monitor with predictions and warnings

Python 7,247 353 Updated Sep 14, 2025

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada and Blackwell GPUs, to provide better performance…

Python 3,251 681 Updated Mar 31, 2026

Accessible large language models via k-bit quantization for PyTorch.

Python 8,090 842 Updated Mar 31, 2026

A pytorch quantization backend for optimum

Python 1,035 86 Updated Nov 21, 2025

[NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation

Python 588 34 Updated Nov 11, 2025

[ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference.

Cuda 970 90 Updated Feb 25, 2026

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-to-end metrics across language, image, and video models.

Cuda 3,265 385 Updated Jan 17, 2026

A unified inference and post-training framework for accelerated video generation.

Python 3,335 307 Updated Apr 1, 2026

[CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers

Python 76 5 Updated Sep 3, 2024