Fudan University - Shanghai, China
https://chrisding.me
Lists (20)
A-APR
B-Backend
B-Benchmark
C-SourceCode
F-FrameWork
F-Frontend
F-Fun
I-Interesting
L-LLM
O-Other
O-OtherReasearch
P-Paper-List
P-Paper-website
R-Read
R-RewardModel
S-School(fdu)
S-Sii-Lecs
S-Survey
T-Tools
W-World-Model
Stars
AI agent skill that researches any topic across Reddit, X, YouTube, HN, Polymarket, and the web - then synthesizes a grounded summary
An in-the-wild benchmark for AI agents in the OpenClaw Environment.
Train the smallest LM you can that fits in 16MB. Best model wins!
ARIS ⚔️ (Auto-Research-In-Sleep) — Lightweight Markdown-only skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in — works…
Official repository of "Reliable Reasoning in SVG-LLMs via Multi-Task Multi-Reward Reinforcement Learning".
Official Implementation of "Visual-ERM: Reward Modeling for Visual Equivalence"
Use Claude Code as the foundation for coding infrastructure, allowing you to decide how to interact with the model while enjoying updates from Anthropic.
📚 Real-world OpenClaw automation examples from Moltbook
Evaluating Large Language Models with Grid-Based Game Competitions: An Extensible LLM Benchmark and Leaderboard
Official implementation of "EndoCoT". Scaling endogenous Chain-of-Thought (CoT) reasoning in diffusion models for complex structured generation.
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, Weibo posts and comments, Baidu Tieba posts, Baidu Tieba comment replies, and Zhihu Q&A articles and comments
Turning Every Citation into Explainable Impact
CLI-Anything: Making ALL Software Agent-Native
Claw-Eval is a harness for evaluating LLMs as agents. All tasks are verified by humans.
[ICLR 2026] The Tool Decathlon: Benchmarking Language Agents for Diverse, Realistic, and Long-Horizon Task Execution
InternVL-U is a 4B-parameter unified multimodal model (UMM) that brings multimodal understanding, reasoning, image generation, and image editing into a single framework.
The first unified, efficient, and extensible evaluation toolkit for evaluating image generation and editing models across multiple benchmarks.
Open Source version of Claude Cowork with 500+ SaaS app integrations
OS-ATLAS: A Foundation Action Model For Generalist GUI Agents
OpenClaw-RL: Train any agent simply by talking
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
AI agents running research on single-GPU nanochat training automatically
A cross-platform desktop All-in-One assistant tool for Claude Code, Codex, OpenCode, openclaw & Gemini CLI.