🎯 Focusing
  • Fudan Univ | BIT
  • Shanghai, China



About me 👋

  • 🌱 I’m currently learning and working on LLM safety
  • 🔭 I’m currently a research intern at Shanghai AI Lab
  • 👯 I’m looking to collaborate on research topics related to LLMs and agents
  • 💬 Ask me anything here
  • 📫 Reach me by email: hxh_create@outlook.com

📺 Social

🎯 Sign Up Every Day

Pinned

  1. OpenSafetyLab/SALAD-BENCH

    [ACL 2024] SALAD benchmark & MD-Judge

    Python · 163 stars · 15 forks

  2. AI45Lab/VLSBench

    [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety"

    Python · 51 stars · 1 fork

  3. AI45Lab/IS-Bench

    Data and code for the paper "IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks"

    Python · 30 stars · 2 forks

  4. LLM_Deceive_Unintentionally

    Experimental resources for the paper "LLMs Learn to Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions"

    Python · 5 stars

  5. wonderNefelibata/Awesome-LRM-Safety

    Awesome Large Reasoning Model (LRM) Safety: a curated collection of safety research on large reasoning models such as DeepSeek-R1 and OpenAI o1.

    Python · 76 stars · 6 forks