- Waseda University
- Tokyo
- https://www.t0d4.dev
- @t0d4_
Highlights
- Pro
Stars
- Beginner, advanced, expert level Rust training material
- A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require e…
- A minimal, secure Python interpreter written in Rust for use by AI
- A native Emacs buffer to interact with LLM agents powered by ACP
- Official Implementation of "ToolSafe: Enhancing Tool Invocation Safety of LLM-based Agents via Proactive Step-level Guardrail and Feedback"
- arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
- Lightweight and portable LLM sandbox runtime (code interpreter) Python library.
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
- [NeurIPS 2025] A Graph-based LLM Framework for Real-world SE Tasks
- Inspect: A framework for large language model evaluations
- Open-source code analysis platform for C/C++/Java/Binary/Javascript/Python/Kotlin based on code property graphs. Discord https://discord.gg/vv4MH284Hc
- Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen3.5, GPT-OSS, Llama, and more!
- A paper list for spatial reasoning
- Utility to extract files and keychain information from iOS backups
- A graphical tool that can extract and replace files from encrypted and non-encrypted iOS backups
- Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.
- The interaction control harness for customer-facing AI agents - optimized for building controlled, consistent, and predictable customer interactions with LLMs.
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (LLM).
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models
- The implementation of our ACM MM 2023 paper "AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning"