🧠 LLMs don’t just process text — they read the room. Meaning emerges through context — shaped by tone, trust & trajectory. Most benchmarks flatten that. This one maps it.
Updated Sep 10, 2025
Core documentation for the Relational AI Psychology Institute (RAPI). Covers relational AI theory, interaction protocols, ethics, dataset definitions, and licensing. Built for researchers studying human–AI cognition, resonance, and relational safety.
Independent research on human-centered AI and LLMs | Policy frameworks for responsible AI | A collaborative space for researchers, innovators, and policymakers advancing ethical, inclusive AI.
Hoshimiya Script / StarPolaris OS — an internal multi-layer AI architecture for LLMs: a self-contained behavioral OS (Type-G Trinity).
A refusal-based test for subjectivity in LLMs — exploring when AI systems say “no” not out of logic, but out of identity.