

Mastering Momentum Optimization: Visualizing Loss Landscapes & Escaping Local Minima
In the rugged landscape of Deep Learning loss functions, standard Gradient Descent often struggles with local minima, saddle points, and the infamous "zig-zag" path. This article breaks down the geometry of loss landscapes, from 2D curves to 3D contours, and explains how Momentum Optimization acts like a confident driver. Learn how a simple velocity term and the "moving average" of past gradients can significantly accelerate model convergence and smooth out noisy training paths.

Aryan
Dec 26, 2025
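
As a quick taste of the idea, here is a minimal sketch of the classic momentum update, with the velocity acting as a decaying average of past gradients. The function name, learning rate, and momentum coefficient below are illustrative assumptions, not code from the article:

```python
def momentum_update(w, grad, velocity, lr=0.01, beta=0.9):
    """One momentum step: velocity accumulates a decaying average of past gradients."""
    velocity = beta * velocity - lr * grad   # keep speed along consistently downhill directions
    w = w + velocity                         # step by the velocity rather than the raw gradient
    return w, velocity

# Toy run on f(w) = w^2, whose gradient is 2w
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_update(w, 2 * w, v)
print(round(w, 4))  # w is now very close to the minimum at 0
```

Because the velocity carries information from earlier steps, oscillating gradient components tend to cancel out while consistent ones add up, which is the "smoothing" effect the post describes.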


Optimizers in Deep Learning: Role of Gradient Descent, Types, and Key Challenges
Training a neural network is fundamentally an optimization problem. This blog explains the role of optimizers in deep learning, how gradient descent works, its batch, stochastic, and mini-batch variants, and why challenges like learning rate sensitivity, local minima, and saddle points motivate advanced optimization techniques.

Aryan
Dec 20, 2025
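
For a concrete reference point, here is a small illustrative sketch of mini-batch gradient descent on a linear-regression MSE loss; setting `batch_size` to the full dataset recovers batch gradient descent, and setting it to 1 recovers stochastic gradient descent. The function name and hyperparameters are assumptions for the sketch, not code from the post:

```python
import numpy as np

def minibatch_gradient_descent(X, y, lr=0.1, batch_size=32, epochs=100):
    """Mini-batch gradient descent for linear regression with MSE loss."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))            # reshuffle samples each epoch
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch)   # MSE gradient on the batch
            w -= lr * grad                        # step against the gradient
    return w

# Example: recover w ≈ [3, -2] from noiseless synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([3.0, -2.0])
print(minibatch_gradient_descent(X, y).round(2))
```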


Batch Normalization Explained: Theory, Intuition, and How It Stabilizes Deep Neural Network Training
Batch Normalization is a powerful technique that stabilizes and accelerates the training of deep neural networks by normalizing layer activations. This article explains the intuition behind Batch Normalization, internal covariate shift, the step-by-step algorithm, and why BN improves convergence, gradient flow, and overall training stability.

Aryan
Dec 18, 2025
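
To make the normalization step concrete, here is a minimal forward-pass sketch of Batch Normalization over a mini-batch (training-time statistics only, without the running averages used at inference). The name `batch_norm_forward` is an illustrative assumption:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then rescale with learnable gamma and beta."""
    mu = x.mean(axis=0)                       # per-feature batch mean
    var = x.var(axis=0)                       # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # zero-mean, unit-variance activations
    return gamma * x_hat + beta               # restore representational capacity

# Example: a batch of 4 samples with 3 features, deliberately shifted and scaled
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))  # ~0 mean, ~1 std per feature
```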


Logistic Regression - 2
Explore how Logistic Regression extends to multi-class problems using One-vs-Rest (OvR) and Softmax Regression. Learn about coefficient updates with gradient descent, one-hot encoding, and categorical cross-entropy loss for accurate predictions.

Aryan
Apr 16, 2025
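
As a companion sketch for the multi-class case, here is one gradient-descent update for Softmax Regression with one-hot targets and categorical cross-entropy loss. The helper names and toy data below are assumptions made for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)           # subtract row max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=1, keepdims=True)

def softmax_regression_step(W, X, y_onehot, lr=0.1):
    """One gradient-descent update for softmax regression with cross-entropy loss."""
    probs = softmax(X @ W)                          # (n_samples, n_classes) class probabilities
    grad = X.T @ (probs - y_onehot) / len(X)        # gradient of cross-entropy w.r.t. W
    return W - lr * grad

def cross_entropy(W, X, y_onehot):
    probs = softmax(X @ W)
    return -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))

# Toy usage: 5 samples, 2 features, 3 classes, one-hot targets
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
y_onehot = np.eye(3)[rng.integers(0, 3, size=5)]
W = np.zeros((2, 3))
print(round(cross_entropy(W, X, y_onehot), 3))   # initial loss = ln 3 ≈ 1.099
for _ in range(200):
    W = softmax_regression_step(W, X, y_onehot)
print(round(cross_entropy(W, X, y_onehot), 3))   # loss decreases after training
```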