Indian sign language alphabet recognition system using CNN with diffGrad optimizer and stochastic pooling

…, A Ghorai, MM Singh, C Changdar, S Bhakta… - Multimedia Tools and …, 2023 - Springer
India has the largest deaf population in the world, and sign language is the principal medium for such persons to share information with hearing people and among themselves. Yet, …

DiffMoment: an adaptive optimization technique for convolutional neural network

S Bhakta, U Nandi, T Si, SK Ghosal, C Changdar… - Applied …, 2023 - Springer
Stochastic Gradient Descent (SGD) is a very popular basic optimizer applied in the learning algorithms of deep neural networks. However, it uses a fixed step size for every epoch without …
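The fixed step size the snippet mentions is visible in the plain SGD update rule. A minimal sketch of generic SGD on a toy quadratic (this illustrates the baseline being criticized, not the paper's DiffMoment method):

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    # Plain SGD: every parameter moves by the same fixed learning
    # rate, with no adaptation to epoch or gradient history.
    return params - lr * grads

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w = np.array([0.0])
for _ in range(200):
    w = sgd_step(w, 2 * (w - 3.0), lr=0.05)
# w has converged toward the minimizer 3.0
```

Adaptive schemes such as DiffMoment replace the constant `lr` with a per-parameter, history-dependent step.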

emapDiffP: A novel learning algorithm for convolutional neural network optimization

S Bhakta, U Nandi, C Changdar, SK Ghosal… - Neural Computing and …, 2024 - Springer
Deep neural networks (DNNs) with multiple hidden layers learn large-volume datasets efficiently and are applied in a wide range of applications. The DNNs are trained on …

SWOSBC: A novel optimizer for learning Convolutional Neural Network

S Bhakta, U Nandi, KR Mahapatra, MM Singh… - IEEE …, 2024 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) are trained with optimization algorithms that aim to maximize accuracy and minimize loss. One of the most significant fields of research is the …

sqFm: a novel adaptive optimization scheme for deep learning model

S Bhakta, U Nandi, M Mondal, KR Mahapatra… - Evolutionary …, 2024 - Springer
For deep model training, an optimization technique is required that minimizes loss and
maximizes accuracy. The development of an effective optimization method is one of the most …

Angularparameter: a novel optimization technique for deep learning models

S Bhakta, U Nandi, C Changdar… - … Techniques for Data …, 2023 - Springer
Training of deep learning models requires an optimization algorithm that minimizes error and
maximizes accuracy. The development of an efficient optimization algorithm is one of the …

aMacP: An adaptive optimization algorithm for Deep Neural Network

S Bhakta, U Nandi, C Changdar, B Paul, T Si, RK Pal - Neurocomputing, 2024 - Elsevier
Stochastic gradient-based optimizers are used to train convolutional neural networks (CNNs). Due to its adaptive momentum, the Adam optimizer has recently gained a lot of attention …
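The adaptive momentum the snippet refers to can be sketched with the standard Adam update, which maintains moving averages of the gradient and its square (a generic textbook sketch, not the aMacP algorithm itself):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps exponential moving averages of the gradient (m)
    # and its square (v), giving each parameter an adaptive step.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Same toy quadratic f(w) = (w - 3)^2 as a sanity check.
w, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * (w - 3.0), m, v, t)
```

Variants such as aMacP modify how these moment estimates shape the per-parameter step.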

[PDF][PDF] Numerical study of cavity-based flame-holder with slot injection for supersonic combustion

S Bhakta, B Singh - … and Development (IJMPERD) 8. 2, Apr …, 2005 - researchgate.net
This paper focuses on finding the optimum cavity properties to enhance the mixing phenomenon and combustion processes in supersonic flow using Computational Fluid …

ATCBBC: A Novel Optimizer for Neural Network Architectures

S Bhakta, U Nandi, KR Mahapatra, MM Singh… - … Conference on Machine …, 2023 - Springer
Gradient descent (GD) is the backbone of deep neural network training, but slow convergence is an issue with GD. Momentum is the well-known method of overcoming delayed …
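The momentum remedy mentioned in the snippet can be sketched with the classical heavy-ball update, which accumulates past gradients in a velocity term (a generic illustration, not the ATCBBC optimizer):

```python
import numpy as np

def momentum_step(w, g, velocity, lr=0.02, beta=0.9):
    # Heavy-ball momentum: accumulate past gradients so updates
    # accelerate along directions that persist across steps.
    velocity = beta * velocity + g
    w = w - lr * velocity
    return w, velocity

# Toy quadratic f(w) = (w - 3)^2; gradient is 2*(w - 3).
w, vel = np.array([0.0]), np.zeros(1)
for _ in range(300):
    w, vel = momentum_step(w, 2 * (w - 3.0), vel)
```

Because the velocity carries information across iterations, convergence on ill-conditioned problems is typically faster than plain GD with the same learning rate.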

ATCBBC: A Novel Optimizer for Neural Network Architectures

S Bhakta, U Nandi, KR Mahapatra… - Proceedings of the …, 2024 - books.google.com
Deep learning [1, 2] simulates the way the human brain makes decisions, creates patterns, and processes data, using artificial neural networks (ANNs) to analyze the data. To accomplish it…