Stars
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Bridge local AI coding agents (Claude Code, Cursor, Gemini CLI, Codex) to messaging platforms (Feishu/Lark, DingTalk, Slack, Telegram, Discord, LINE, WeChat Work). Chat with your AI dev assistant f…
Battle-tested Claude Code workflow template — memory management, context engineering, and task routing from 3 months of daily usage
This is the official GitHub repository of the paper "Dia-LLaMA: Towards Large Language Model-driven CT Report Generation"
A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.
Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework
A Serverless Text Annotation Tool for Corpus Development
Minimal reproduction of DeepSeek R1-Zero
The official code for "AutoRG-Brain: Grounded Report Generation for Brain MRI".
Cardiac MR image processing.
A robust self-supervised deep learning framework for quantifying MR myocardial perfusion
🔥 Medical Image Analysis 2025: Towards Cardiac MRI Foundation Models: Comprehensive Visual-Tabular Representations for Whole-Heart Assessment and Beyond
🚀🚀 [LLM] Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏
A Next.js web application that integrates AI capabilities with draw.io diagrams. This app allows you to create, modify, and enhance diagrams through natural language commands and AI-assisted visual…
TorchCFM: a Conditional Flow Matching library
This is the code repository of MFD-V2V: Unsupervised Cardiac Video Translation Via Motion Feature Guided Diffusion Model
Official PyTorch implementation for the paper LaMoD: Latent Motion Diffusion Model For Myocardial Strain Generation
[BIBM 2024] Enhancing Cerebral Microbleed Segmentation with Pretrained UNETR++
[MICCAI 2025 Oral] Blood Pressure Assisted Cerebral Microbleed Segmentation via Meta-matching
Repository for the paper: Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models (https://arxiv.org/abs/2303.05977)