Website | LinkedIn | Email
"The limits of my language mean the limits of my world." – Wittgenstein
I am an AI researcher passionate about neuro-inspired deep learning models and the mathematical foundations of intelligence. I grew up in Italy and am currently a student at Minerva University, participating in an international program that allows me to study while traveling the world.
I am an incoming Member of Technical Staff (MTS) Intern at Sakana AI and a Research Assistant at the University of Toronto / Stanford. My goal is to explore the convergence of biological intelligence and artificial systems, striving to understand how high-level reasoning emerges from low-level circuits. I am deeply grateful to my research mentors for their continued guidance and support.
A PyTorch-native framework for aligning biological brain activity with Large Language Models. It provides standardized neuro-components, leakage-safe data flows, and end-to-end pipelines to replicate state-of-the-art brain-to-text decoding research.
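To illustrate what "leakage-safe data flows" means for neural recordings, here is a minimal, dependency-free sketch (the `block_split` helper is hypothetical, not the framework's actual API): because temporally adjacent samples in brain activity are strongly correlated, a random shuffle would leak test information into training, so contiguous recording blocks are held out as a unit instead.

```python
# Hypothetical sketch of a leakage-safe split for neural time-series data:
# contiguous blocks are kept together so that temporally adjacent (and thus
# correlated) samples never straddle the train/test boundary.

def block_split(n_samples, block_size, test_fraction=0.2):
    """Split sample indices into train/test sets by contiguous blocks."""
    blocks = [list(range(i, min(i + block_size, n_samples)))
              for i in range(0, n_samples, block_size)]
    n_test_blocks = max(1, int(len(blocks) * test_fraction))
    # Hold out the tail of the recording as the test set.
    test = [i for b in blocks[-n_test_blocks:] for i in b]
    train = [i for b in blocks[:-n_test_blocks] for i in b]
    return train, test

train_idx, test_idx = block_split(n_samples=100, block_size=10)
```

All test indices come strictly after the training indices, which mirrors how decoding models must generalize forward in time at inference.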
The official codebase for our ICLR 2026 paper. This repository investigates how process uncertainty models (PUMs) interact with process reward models (PRMs) during inference-time search, introducing Uncertainty-Aware Tree Search (UATS) to dynamically allocate expansion budgets.
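As a toy illustration of the core idea (not the paper's actual algorithm), the sketch below allocates an integer expansion budget across frontier nodes in proportion to the uncertainty of their reward estimates, so the search spends compute where the reward model is least reliable. The `allocate_budget` function and its inputs are invented for this example.

```python
# Toy uncertainty-aware budget allocation: nodes whose reward estimates are
# more uncertain receive a larger share of the expansion budget.

def allocate_budget(uncertainties, total_budget):
    """Distribute an integer expansion budget proportionally to uncertainty."""
    total_u = sum(uncertainties)
    if total_u == 0:
        # No uncertainty signal: fall back to a uniform split.
        return [total_budget // len(uncertainties)] * len(uncertainties)
    shares = [u / total_u * total_budget for u in uncertainties]
    budget = [int(s) for s in shares]
    # Hand out any rounding remainder to the most uncertain nodes first.
    remainder = total_budget - sum(budget)
    order = sorted(range(len(uncertainties)), key=lambda i: -uncertainties[i])
    for i in order[:remainder]:
        budget[i] += 1
    return budget

print(allocate_budget([0.1, 0.6, 0.3], 10))  # → [1, 6, 3]
```

In a real tree search this allocation would be recomputed at every expansion round as uncertainty estimates are refined.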
noctua
An AI-native CLI for Obsidian.md. It bridges your terminal and your local Vault using a modern, lightweight AI stack (Pydantic-AI, txtai), blending direct file manipulation with semantic intelligence and RAG capabilities.
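The retrieval step behind the RAG capability can be sketched conceptually as follows. This stand-in replaces the real embedding search (txtai) with plain token overlap so the example stays dependency-free; the note names and contents are invented for illustration.

```python
# Conceptual stand-in for semantic retrieval over a Vault: score each note
# against a query and return the top matches. A real implementation would
# use embedding similarity instead of token overlap.

def score(query, text):
    """Fraction of query tokens that appear in the note text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def search(notes, query, k=2):
    """Rank notes (name -> content) by overlap with the query."""
    ranked = sorted(notes, key=lambda name: score(query, notes[name]),
                    reverse=True)
    return ranked[:k]

notes = {
    "sparse-coding.md": "notes on sparsity and efficient neural coding",
    "tree-search.md": "expansion budgets for inference-time tree search",
    "reading-list.md": "papers to read next month",
}
print(search(notes, "inference-time tree search"))
```

The retrieved notes would then be passed as context to the language model, which is the "augmented generation" half of RAG.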
- Challenges in Inference-Time Scaling with Uncertainty-Aware Tree Search. Minniti, J., Band, N., Rudner, T. G. J. | ICLR 2026 Workshops (SPOT & Agentic AI in the Wild)
- Effects of Age, Semantic Relatedness, and Vocabulary Knowledge on Learning New Word Meanings. Chen, P., Hulme, R. C., Minniti, J., Lee, C. L., Rodd, J. M. | Journal of Memory and Language & CLDC 12 [Under Review]
- NeuroAI: Implementing biological constraints (plasticity, sparsity) to create robust, efficient learning systems.
- Mathematical Foundations: Developing rigorous theories of intelligence to understand generalization and stability.
- Mechanistic Interpretability: Reverse-engineering model weights to decode emergent reasoning.
- Agentic Reasoning: Training models to think, plan, and verify their own chain of thought over long time horizons.
Technologies: PyTorch, JAX, vLLM, transformer-lens, gymnasium