A semantic standard library for C — explicit, composable modules that add meaning without hiding behavior.
A high-fidelity formal modeling and verification framework for complex system architectures, powered by Lean 4.
Mirror repository for an open-source OPC-UA Toolkit designed with security and embedded devices in mind. The main repository is on GitLab:
📉 Demonstrate the computational gap in safety-critical control systems by comparing traditional and meta approaches to system identification on low-power hardware.
🔍 Implement safe and reliable binary search functions for safety-critical systems, ensuring defined behavior and strict compliance with C11 standards.
AI-native micro issue manager for multi-agent LLM development. Hierarchical task decomposition with context chains, safety-critical standards, and zero-infrastructure deployment.
Binary emergency communication protocol for .NET — 8-byte tokens, signed envelopes, zero dependencies.
Formal verification of the Kevros AI Governance Enforcement Kernel. 1.94B states exhaustively checked (TLC), 20 machine-checkable theorems (Lean 4, 0 sorry), 71 proofs across 6 layers, zero violations. Reproducible under $4 in compute.
A disciplined C implementation of a fixed-size ring buffer inspired by the NASA/JPL Power of Ten rules for safety-critical software.
A certification-aware avionics integration and verification workbench for flight systems simulation, fault injection, traceability, and requirements-based testing.
Real-time Tunnel Atmospheric Hazard Detection and Alert Framework (Non-functioning Repository)
A Plane Client in Rust and ATC Server in C for a course project
V-Model Extension Pack for Spec Kit — enforces paired generation of development specs and test specs with regulatory-grade traceability
Safety-critical controllers for single- and multi-robot navigation: CBF-QP, MPC-CBF, etc.
R&D · Legally accountable AI systems for autonomous vehicle operation targeting 99.9999% certified safety. Convergence Human & Technology.
AILEE Trust Layer is a deterministic trust and safety middleware for AI systems. It mediates uncertainty using confidence thresholds, contextual grace, peer consensus, and fail-safe fallback—transforming raw model outputs into auditable, reliable decisions.
notes about many, many topics ...
C.H.I.L.D. - Comprehensive Harm Interdiction & Lifelong Defense: Biomonitoring for Innocence Protection