From-scratch implementation of optimization algorithms for training extreme learning neural networks, including momentum (heavy-ball) descent and smoothed gradient methods.
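Heavy-ball momentum adds a fraction of the previous step to the current gradient step. A minimal sketch of that update in NumPy (this is a generic illustration, not this repository's actual API; the function name and parameters are illustrative):

```python
import numpy as np

def heavy_ball(grad, w0, lr=0.01, beta=0.9, steps=100):
    """Heavy-ball descent: w_{t+1} = w_t - lr * grad(w_t) + beta * (w_t - w_{t-1})."""
    w = w0.astype(float)
    w_prev = w.copy()
    for _ in range(steps):
        w_next = w - lr * grad(w) + beta * (w - w_prev)  # gradient step + momentum term
        w, w_prev = w_next, w
    return w

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = heavy_ball(lambda w: 2 * w, np.array([5.0, -3.0]), lr=0.05, beta=0.5, steps=200)
```

With `beta=0` this reduces to plain gradient descent; the momentum term damps oscillation along steep directions and accelerates progress along shallow ones.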
A Python project implementing a neural network framework from scratch using NumPy. Includes fully connected (Dense) layers, ReLU and Sigmoid activations, a simple SGD optimizer, and a minimal training loop. Designed for hands-on learning of neural network fundamentals without relying on any deep learning frameworks.
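The core of such a framework is a layer object that caches its input on the forward pass and applies the SGD update in the backward pass. A minimal sketch under those assumptions (class and method names are illustrative, not this project's actual interface):

```python
import numpy as np

class Dense:
    """Fully connected layer with a manual backward pass and in-place SGD update."""
    def __init__(self, n_in, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)  # He init
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out, lr=0.01):
        grad_in = grad_out @ self.W.T   # gradient w.r.t. the layer input
        self.W -= lr * self.x.T @ grad_out   # plain SGD update on the weights
        self.b -= lr * grad_out.sum(axis=0)
        return grad_in

def relu(x):
    return np.maximum(0.0, x)

# Usage: forward pass on a toy batch of 4 examples with 3 features
layer = Dense(3, 2)
h = relu(layer.forward(np.ones((4, 3))))
```

Stacking such layers and calling `backward` in reverse order gives the manual backpropagation loop these from-scratch projects implement.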
JAX compilation of RDDL description files, and a differentiable planner in JAX.
Research project analyzing stability and robustness of deep learning optimizers (SGD, Adam, SAM) under label noise and precision constraints.
A visual and interactive guide to optimization algorithms, from gradient descent to Adam, with Python notebooks and animations.
Empirical comparison of SGD, Adam, RMSprop, Adagrad, and L-BFGS on CIFAR-10: convergence analysis, gradient norm tracking, and loss landscape visualisation.
A from-scratch implementation of feedforward neural networks using NumPy. Developed for the Artificial Intelligence Fundamentals course at the University of Parma, featuring manual backpropagation, mini-batch SGD, and inverted dropout on the MNIST dataset.
Optimizer comparison study: empirical analysis of SGD vs. Adam performance on MNIST with various initialization and scheduler configurations.
A pure NumPy implementation of Ridge Regression (L2 Regularization) from scratch. Features vectorized Minibatch Stochastic Gradient Descent (SGD) and manual hyperparameter grid search without using scikit-learn.
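The ridge objective (1/n)·||Xw − y||² + λ·||w||² has gradient (2/n)·Xᵀ(Xw − y) + 2λw, which minibatch SGD evaluates on random batches. A sketch of the vectorized loop, assuming standard normal features (function name and defaults are illustrative, not taken from the repository):

```python
import numpy as np

def ridge_sgd(X, y, lam=0.1, lr=0.01, epochs=50, batch=32, seed=0):
    """Vectorized minibatch SGD for ridge regression:
    minimize (1/n) * ||Xw - y||^2 + lam * ||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                     # reshuffle each epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            Xb, yb = X[b], y[b]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(b) + 2 * lam * w
            w -= lr * grad
    return w

# Usage: compare against the closed-form ridge solution on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w_hat = ridge_sgd(X, y, lam=0.01, lr=0.05, epochs=300)
w_closed = np.linalg.solve(X.T @ X / 200 + 0.01 * np.eye(3), X.T @ y / 200)
```

The closed-form comparison is the usual sanity check before grid-searching `lam` and `lr` by hand.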
Implementing PyTorch Optimizers from Scratch
EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification
Saccharomyces Genome Database (SGD) REST API wrapper
Exercises of the Laboratory of Computational Physics Mod. B
minimum bipartite matching via Riemann optimization
Published at IEEE ISIT 2022: we proposed G-CADA, a novel algorithm that improves the time and communication efficiency of distributed learning systems through grouping and adaptive selection methods. 😄
My beautiful neural network, made from scratch and love. It plays Flappy Bird flawlessly within 3 to 9 generations!
A repo that contains source code for my blog "Deep Learning Optimizers: A Comprehensive Guide for Beginners (2024)"
Parametric estimation of multivariate Hawkes processes with general kernels.
Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enforces privacy by clipping and sanitising the gradients with Gaussian noise during training.
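The Abadi et al. recipe has two ingredients per step: clip each per-example gradient to a fixed L2 norm, then add Gaussian noise calibrated to that clip norm before applying the update. A minimal sketch of one such step (a generic illustration of the technique, not this repository's amortized variant; names and signatures are assumptions):

```python
import numpy as np

def dp_sgd_step(per_example_grads, w, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step: clip each per-example gradient to L2 norm <= clip,
    average, add Gaussian noise scaled to the clip norm, and update w."""
    rng = rng or np.random.default_rng()
    g = np.asarray(per_example_grads, dtype=float)    # shape (batch, dim)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / clip)     # per-example clipping
    g_mean = g_clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(g), size=w.shape)
    return w - lr * (g_mean + noise)

# Usage: deterministic check with noise disabled (noise_mult=0)
w0 = np.zeros(2)
grads = np.array([[3.0, 4.0], [0.3, 0.4]])   # norms 5.0 and 0.5; only the first is clipped
w1 = dp_sgd_step(grads, w0, lr=1.0, clip=1.0, noise_mult=0.0)
```

Clipping bounds each example's influence on the update, which is what makes the Gaussian noise scale yield a formal privacy guarantee via the moments accountant.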