🤖 Build models that understand human fallibility, bridging the gap between machine precision and human emotion for better AI decision-making.
🧠 Generate executable programs from natural language questions using a neuroscience-inspired framework for advanced visual reasoning.
Your gateway to the latest in NeuroSymbolic AI research, industry, and community.
Extensible Cognitive Hybrid Intelligence for Deductive Neural Assistance. A neurosymbolic theorem-proving platform that transforms Quill (an Agda-only neural solver) into a universal multi-prover system with aspect tagging, OpenCyc integration, and DeepProbLog probabilistic logic.
Claude skills for Synalinks
Integrating Symbolic Programming and Neuromorphic Modeling for Edge Labs with NVIDIA Jetson, DGX Spark, and GPU-based DNN/ML Systems
Aevov's NeuroSymbolic architecture for Web3
Web3 Aevov ML system in Go, an extension of https://github.com/aevov/Aevov-Web3/tree/main
Minimal Tensor Logic AI engine in PyTorch – Datalog reasoning as tensor operations (einsum) and neural networks in one framework.
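The entry above describes running Datalog rules as tensor operations via `einsum`. As a minimal sketch of the idea (not the repository's actual API): encode each relation as a boolean matrix over a finite domain, and a rule with a shared variable becomes a contraction over that index.

```python
import numpy as np

# Relation edge(X, Y) as a boolean adjacency matrix over a 4-element domain.
E = np.zeros((4, 4), dtype=bool)
E[0, 1] = E[1, 2] = E[2, 3] = True

# Datalog rule  path(X, Z) :- edge(X, Y), edge(Y, Z)
# is a contraction over the shared variable Y; thresholding the
# integer sum-of-products recovers boolean "exists some Y" semantics.
P = np.einsum('xy,yz->xz', E.astype(int), E.astype(int)) > 0

print(P[0, 2], P[0, 3])  # True False: 0->1->2 exists, no 2-step path 0->3
```

Iterating this contraction to a fixpoint yields the transitive closure, which is how recursive Datalog predicates map onto repeated tensor products.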
PyC (Pytorch Concepts) is a PyTorch-based library for training concept-based interpretable deep learning models.
Domain-Aware Neurosymbolic Agent (Dana), an agent-native programming language
Experimental Python implementation of the Clarion cognitive architecture
Sparse Circuits on the GPU (ICLR2025)
Machine Learning with Symbolic Tensors
For the now-deprecated e75 and e150 cards
Embeddings as Probabilistic Equivalence in Logic Programs (NeurIPS2025)
AMS Network on Neurosymbolic AI for Medicine
Explainable inference software supporting annotated, real-valued, graph-based, and temporal logic
Holographic vectors you can compute with. Bind structure, bundle sets, and unbind components across NumPy, PyTorch, and JAX.
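The bind/bundle/unbind operations above are the core of hyperdimensional (vector-symbolic) computing. A minimal NumPy sketch, assuming bipolar hypervectors with elementwise-multiply binding (one common scheme; the library itself may use others):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimension makes independent random vectors near-orthogonal

def hv():            # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):      # elementwise multiply: self-inverse, dissimilar to inputs
    return a * b

def bundle(*vs):     # signed sum: result stays similar to each input
    return np.sign(np.sum(vs, axis=0))  # ties become 0, tolerable noise

def sim(a, b):       # normalized dot product (cosine-like similarity)
    return a @ b / D

# Encode the record {color: red, shape: square} as a single vector.
color, shape, red, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with the role vector recovers a noisy copy of the filler.
noisy = bind(record, color)
print(sim(noisy, red) > 0.3, sim(noisy, square) < 0.3)  # similar to red only
```

In practice the noisy result is cleaned up by nearest-neighbor lookup against a codebook of known hypervectors.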
A deep exploration of Algorithmic Empathy, the next frontier in AI understanding. This project examines how machines can learn from human fallibility, model disagreement, and align with moral reasoning. It blends psychology, fairness metrics, interpretability, and co-learning design into one framework for humane intelligence.