Beijing University of Posts and Telecommunications, Beijing

Stars
Transfer your AI chat conversations between Cursor IDE workspaces and devices with an intuitive UI.
Obsidian-based atomic knowledge base for OpenClaw deep research (282 notes, 2397+ wikilinks)
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
An Obsidian plugin that embeds Claude Code as an AI collaborator in your vault
My personal Obsidian vault template. A bottom-up approach to note-taking and organizing things I am interested in.
An End-to-End Infrastructure for Training and Evaluating Various LLM Agents
Kaguya-19 / AgentCPM
Forked from OpenBMB/AgentCPM
An End-to-End Infrastructure for Training and Evaluating Various LLM Agents
tcy6 / OpenMMReasoner
Forked from EvolvingLMMs-Lab/OpenMMReasoner
OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
tcy6 / verl
Forked from verl-project/verl
verl: Volcano Engine Reinforcement Learning for LLMs
[CVPR 2026] OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
Tools for merging pretrained large language models.
Codebase for Merging Language Models (ICML 2024)
One-click renewal for Cursor Pro, keeping the 14-day trial from expiring. [Supports Claude 4]
Official data repository for the Open Reaction Database
Kaguya-19 / UltraRAG
Forked from OpenBMB/UltraRAG
UltraRAG 2: Less Code, Lower Barrier, Faster Deployment! An MCP-based low-code RAG framework that enables researchers to build complex pipelines, from prototype to creative innovation.
🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN
Fully Open Framework for Democratized Multimodal Training
🐉 Loong: Synthesize Long CoTs at Scale through Verifiers.
Everything about the SmolLM and SmolVLM family of models
A Next-Generation Training Engine Built for Ultra-Large MoE Models
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Llava, Phi4, ...)…
Get your documents ready for gen AI
Writing AI Conference Papers: A Handbook for Beginners
Renderer for the harmony response format to be used with gpt-oss
tcy6 / gpt-oss
Forked from openai/gpt-oss
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI