William Stetar
Building tools and frameworks to understand how language models fail—and why it matters for alignment.
AI Safety Researcher | Computational Linguist | Systems Developer
I analyze language model behavior through computational linguistics, focusing on how discourse patterns shape alignment failures. I also build tools for LLM integration, git automation, and symbolic reasoning systems.

Current Research: Investigating epistemic failure modes in LLMs using Systemic Functional Linguistics, specifically how models prioritize interpersonal coherence over factual accuracy under pressure.

Technical Skills: Go, Python, Bash, C#, SQL | LLM integration | CLI tooling | Git automation
Projects
- PRbuddy - Auto-generates pull request drafts using Git hooks and LLM infrastructure (minimal hook sketch after this list)
- Symbolic Grammar Interpreter - Recursive system for analyzing contradiction and structural drift in text
- Holoplan CLI - Transforms user stories into Draw.io wireframes via multi-agent LLM reasoning
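
As a rough illustration of the PRbuddy idea (not its actual implementation), here is a minimal Go sketch of the hook-to-LLM flow: a git hook collects the branch diff and asks an OpenAI-compatible chat endpoint for a pull request draft. The endpoint, model name, prompt, and `OPENAI_API_KEY` environment variable are assumptions for the example.

```go
// Sketch of a post-commit hook flow: collect the branch diff with git,
// then ask an OpenAI-compatible chat endpoint for a PR draft.
// Endpoint, model, and prompt are illustrative, not PRbuddy's real code.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"os/exec"
)

func main() {
	// Diff the current branch against main; a real hook would detect the base branch.
	diff, err := exec.Command("git", "diff", "main...HEAD").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "git diff failed:", err)
		os.Exit(1)
	}

	// Build a chat-completions request (assumed model name and prompt).
	reqBody, _ := json.Marshal(map[string]any{
		"model": "gpt-4o-mini",
		"messages": []map[string]string{
			{"role": "system", "content": "Draft a pull request title and description for this diff."},
			{"role": "user", "content": string(diff)},
		},
	})

	req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(reqBody))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Pull the draft text out of the first choice.
	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil || len(out.Choices) == 0 {
		fmt.Fprintln(os.Stderr, "no draft returned")
		os.Exit(1)
	}
	fmt.Println(out.Choices[0].Message.Content) // the PR draft, ready to paste or pipe to gh
}
```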
Writing & Research
📝 Hashnode Blog - Essays on AI safety, linguistic analysis, and ethical implications of LLMs
Notable pieces:
- "The Nuremberg Defense of AI" - On accountability in ML development
- "Epistemic Autoimmunity" - Framework for analyzing alignment failures through discourse patterns
Currently
- Conducting a comparative analysis of failure modes in Western vs. Eastern LLMs
- Developing an empirical framework for measuring epistemic autoimmunity in model outputs
- Open to collaboration and full-time research positions in AI safety