- 👋 Hi, I’m Mingyu Jin, a student at Rutgers University.
- 👀 I’m interested in Trustworthy Large Language Models, Explainability, and Data Mining.
- 🌱 I’m currently learning about interpretability in transformers.
- 💞️ I’m looking to collaborate with students who are also interested in these areas.
- 📫 How to reach me: mingyu.jin404@gmail.com
Pinned repositories
- **Rope_with_LLM**: [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe that massive values are concentrated in low-frequency dimensions across different attention…
- **The-Impact-of-Reasoning-Step-Length-on-Large-Language-Models**: [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correlation between the effectiveness of CoT and the length of reasoning steps…
- **Luckfort/CD**: [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
- **Stockagent**: [Preprint] Large Language Model-based Stock Trading in Simulated Real-world Environments
- **Disentangling-Memory-and-Reasoning**: [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.