- Austin, TX
- https://honglizhan.github.io/
- @HongliZhan
Stars
Create beautiful slides on the web using Claude's frontend skills
Show usage stats for OpenAI Codex and Claude Code, without having to log in.
Manage multiple AI terminal agents like Claude Code, Codex, OpenCode, and Amp.
Replication data and code for the paper: When LLMs are Reliable for Judging Empathic Communication
This is the official repository for HypoGeniC (Hypothesis Generation in Context) and HypoRefine, which are automated, data-driven tools that leverage large language models to generate hypotheses fo…
SkyRL: A Modular Full-stack RL Library for LLMs
Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime.
Measuring how well CLI agents like Claude Code or Codex CLI can post-train base LLMs on a single H100 GPU in 10 hours
Code and data for our ICML 2025 paper "SPRI: Aligning Large Language Models with Context-Situated Principles"
🟣 Linear Algebra interview questions and answers to help you prepare for your next machine learning and data science interview in 2026.
Subjective Empathy in Natural Sustained Exchanges - 7 dimensions dataset
This repo is meant to serve as a guide for Machine Learning/AI technical interviews.
Primarily documents knowledge and interview questions relevant to large language model (LLM) algorithm (application) engineers.
A list of review notes on ML topics.
Machine Learning algorithm implementations from scratch.
Lists of company-wise questions available on LeetCode Premium. Every CSV file in the companies directory corresponds to a list of questions on LeetCode for a specific company based on the leetcode …
A holistic benchmark for LLM abstention
Open source annotation tool for machine learning practitioners.
A curated list of resources for using LLMs to develop more competitive grant applications.
This is a repository for sharing papers in the field of empathetic conversational AI. The related source code for each paper is linked if available.
🐙 OctoPack: Instruction Tuning Code Large Language Models
Evaluate your LLM's responses with Prometheus and GPT-4 💯
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)