Introduction to NeuroSymbolic AI
Artificial Intelligence
From MIT 6.S191 Introduction to Deep Learning
Narrow AI
Neural Network
• Good:
• Accuracy
• Adaptability and Data Processing
• Learn Complex Non-Linear Relationships
• Bad:
• Instability & Computational Limitations
• Lack of Explainability
• Adversarial Vulnerability
Pineapple
Antenna or Gun?
Symbolic AI
• Good:
• Understand and Relate Concepts
• Efficiency in specific tasks
• Interpretability
• Bad:
• Struggle with Unstructured Data
• Inflexibility in Learning
What is NeuroSymbolic (NeSy) AI then?
Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic
AI architectures to address the weaknesses of each, providing a robust AI capable of
reasoning, learning, and cognitive modelling. @Wikipedia
NeuroSymbolic AI Categories
• Multiple ways to categorize NeSy systems
• The easiest we found:
• According to how Neural and Symbolic parts are connected.
• Learning for Reasoning
• Reasoning for Learning
• Learning-Reasoning
Learning for Reasoning
• Aims to improve reasoning ability.
• Configured in a serial connection.
• The neural part works as an encoder, and the Symbolic part works as a decision-maker.
• Good for:
• Handling unstructured data
• Fast Convergence
• Verifiability
• Most of the current models are from this category.
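A minimal sketch of this serial wiring, assuming a toy setup: a hypothetical convolutional encoder grounds raw pixels into symbols, and a hand-written rule set acts as the decision-maker. The symbols, rules, and network below are invented for illustration, not taken from any specific system.

import torch
import torch.nn as nn

class NeuralEncoder(nn.Module):
    # Neural part: maps raw pixels to a vector of symbol logits.
    def __init__(self, num_symbols: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_symbols),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

SYMBOLS = ["agent", "goal", "obstacle", "empty"]  # hypothetical symbol vocabulary

def symbolic_decision(detected: set) -> str:
    # Symbolic part: hand-written rules make the final decision.
    if "obstacle" in detected:
        return "avoid"
    if "goal" in detected:
        return "approach"
    return "explore"

# Serial connection: perception -> symbols -> rule-based decision.
encoder = NeuralEncoder()
image = torch.randn(1, 3, 64, 64)                 # dummy observation
probs = torch.sigmoid(encoder(image))[0]
detected = {SYMBOLS[i] for i, p in enumerate(probs) if p > 0.5}
print(symbolic_decision(detected))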
Reasoning for Learning
• It uses a symbolic model to guide the output of the neural model.
• Configured in a parallel connection.
• The symbolic model can help with reward shaping.
• It can also steer actions and constrain the neural model.
• Benefits:
• Performance and Interpretability
• Efficient Reward Shaping
• Explainable Policy
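A minimal sketch of the parallel wiring, assuming a toy RL setup: symbolic rules run next to the (here, randomly acting) neural policy, shaping its reward and masking its actions. All predicates, rules, and bonus values are invented for illustration.

import random

# Hypothetical symbolic prior knowledge about the task.
RULES = {"holding_key_and_at_door": "open_door"}

def symbolic_reward_shaping(state: dict, action: str, env_reward: float) -> float:
    # Add a bonus when the chosen action matches what the rules recommend.
    if state.get("holding_key") and state.get("at_door"):
        if action == RULES["holding_key_and_at_door"]:
            return env_reward + 0.5          # shaped bonus
    return env_reward

def symbolic_action_filter(state: dict, proposed: list) -> list:
    # Steer the neural policy by masking actions the rules forbid.
    if state.get("at_cliff"):
        return [a for a in proposed if a != "step_forward"]
    return proposed

# Usage inside a (hypothetical) training loop:
state = {"holding_key": True, "at_door": True, "at_cliff": False}
actions = symbolic_action_filter(state, ["open_door", "step_forward", "wait"])
action = random.choice(actions)              # stand-in for the neural policy's choice
print(action, symbolic_reward_shaping(state, action, env_reward=0.0))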
Learning-Reasoning
• In a sense, the best of both worlds.
• Configured in a bi-directional connection.
• Benefits:
• Improved Reasoning
• Enhanced Interpretability
• Faster Convergence
• Handling both Structured and Unstructured data.
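A minimal sketch of the bi-directional idea, with everything invented for illustration: the neural side proposes symbolic facts, the symbolic side checks them against background knowledge, and its corrections flow back as extra supervision for the neural side.

def neural_predict(observation: str) -> dict:
    # Stand-in for a neural model: guesses symbolic facts from raw input.
    return {"object": "key", "reachable": True}

def symbolic_check(facts: dict, knowledge: dict) -> dict:
    # Symbolic side: flag predictions that contradict background knowledge.
    corrections = {}
    if facts["object"] in knowledge["locked_behind_door"] and facts["reachable"]:
        corrections["reachable"] = False     # reasoning overrides perception
    return corrections

def feedback_to_neural(facts: dict, corrections: dict) -> dict:
    # Feedback direction: corrections become extra training targets.
    facts.update(corrections)
    return facts

knowledge = {"locked_behind_door": {"key"}}
facts = neural_predict("frame_0")
facts = feedback_to_neural(facts, symbolic_check(facts, knowledge))
print(facts)   # reasoning-corrected symbols, usable as labels for retraining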
Let’s look at some Examples!
Towards Deep Symbolic Reinforcement Learning
• Link
• Learning for Reasoning.
• Focus:
• Data Requirement
• Learning Speed
• Abstract Reasoning (Transfer Learning)
• Transparency
• NN helps to solve the Symbol Grounding Problem.
Towards Deep Symbolic Reinforcement Learning
• Input and Object Detection:
Towards Deep Symbolic Reinforcement Learning
• Symbolic Representation Building
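A loose sketch of the pipeline just described, not the paper's actual code: a stand-in detector grounds the frame into typed objects, those objects become a relational symbolic state, and a simple tabular Q-learner acts on that state.

import random
from collections import defaultdict

def detect_objects(frame) -> list:
    # Stand-in for the neural detector: returns (type, x, y) triples.
    return [("agent", 2, 3), ("plus", 5, 3), ("minus", 7, 1)]

def symbolic_state(objects) -> tuple:
    # Symbolic representation: object types and positions relative to the agent.
    _, ax, ay = next(o for o in objects if o[0] == "agent")
    return tuple(sorted((t, x - ax, y - ay) for (t, x, y) in objects if t != "agent"))

Q = defaultdict(float)                       # Q-values over symbolic states
ACTIONS = ["up", "down", "left", "right"]

def choose_action(state, eps: float = 0.1) -> str:
    # Epsilon-greedy policy defined directly on the symbolic state.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = symbolic_state(detect_objects(frame=None))
print(state, choose_action(state))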
Towards Deep Symbolic Reinforcement Learning
• Results:
• Percentage of collected items:
• Left: Only positive items
• Right: Both positive and negative items
Towards Deep Symbolic Reinforcement Learning
• Results:
• Performance of Transfer Learning
Deep Compositional Q&A with Neural Module Networks
• Link
• Learning for Reasoning, or maybe Learning-Reasoning.
• Focus:
• Accuracy
• Uses a NeSy architecture to extend state-of-the-art vision models.
• Main modules: Attention, Re-Attention, Combination, Classification, Measurement
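A rough sketch of how such modules could be composed for a single question ("what color is the dog?"): the layout, module bodies, and dimensions below are toy stand-ins, not the paper's implementation.

import torch
import torch.nn as nn

class Attend(nn.Module):
    # attend[dog]: image features -> attention map over regions.
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(dim, 1)

    def forward(self, feats):
        return torch.softmax(self.w(feats).squeeze(-1), dim=-1)

class Classify(nn.Module):
    # classify[color]: attention-weighted features -> answer logits.
    def __init__(self, dim: int, n_answers: int):
        super().__init__()
        self.w = nn.Linear(dim, n_answers)

    def forward(self, feats, attn):
        return self.w((attn.unsqueeze(-1) * feats).sum(dim=1))

# A question parser would produce a layout such as classify[color](attend[dog]).
dim, regions, n_answers = 32, 14 * 14, 10
feats = torch.randn(1, regions, dim)         # dummy image features
attend_dog, classify_color = Attend(dim), Classify(dim, n_answers)
logits = classify_color(feats, attend_dog(feats))
print(logits.shape)                          # torch.Size([1, 10])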
Learning Symbolic Rules for Interpretable DRL
• Environments:
Learning Symbolic Rules for Interpretable DRL
• Example Results:
Learning Symbolic Rules for Interpretable DRL
• Some Other Results!!!
Deep Compositional Q&A with Neural Module Networks
• Results:
• Performance on Shapes Dataset
Deep Compositional Q&A with Neural Module Networks
• Results:
• On VQA test server
SDRL
• Link -> SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning
• Probably one of the few Learning-Reasoning architectures.
• Focus:
• Learn a sequence of subtasks and their policy.
• Performance
• Generalization
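A heavily simplified sketch of that idea, with invented subtasks and a placeholder update rule: a symbolic planner proposes an ordered sequence of subtasks, and a learner is trained per subtask on its own intrinsic reward (the real system's planner and low-level learners are considerably more involved).

from collections import defaultdict

def symbolic_plan(goal: str) -> list:
    # Stand-in for the symbolic planner: returns an ordered list of subtasks.
    return ["get_key", "open_door", "reach_" + goal]

class SubtaskLearner:
    # One low-level policy per symbolic subtask (placeholder value update).
    def __init__(self):
        self.value = defaultdict(float)

    def train_on(self, subtask: str, intrinsic_reward: float):
        self.value[subtask] += 0.1 * (intrinsic_reward - self.value[subtask])

learner = SubtaskLearner()
for subtask in symbolic_plan("exit"):
    # Each planned subtask is trained with its own intrinsic reward signal.
    learner.train_on(subtask, intrinsic_reward=1.0)
print(dict(learner.value))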
SDRL
• Environments:
• Taxi
• The reward for picking up the passenger decreases every 2K episodes.
SDRL
• Environments:
• Montezuma
SDRL
• Results:
• Performance on Taxi problem
SDRL
• Results:
• Performance on Montezuma problem
Learning Symbolic Rules for Interpretable DRL
• Link
• Learning for Reasoning.
• Focus:
• No need for expert knowledge.
• Prioritizes interpretability over raw NeSy performance
• Scalability
Learning Symbolic Rules for Interpretable DRL
• Environments:
Learning Symbolic Rules for Interpretable DRL
• Results: