All Posts


Singular Value Decomposition (SVD)
Singular Value Decomposition (SVD) is a powerful matrix factorization technique used across machine learning, computer vision, and data science. From factorizing non-square matrices to enabling PCA without explicitly computing the covariance matrix, SVD breaks complex transformations into elegant geometric steps. This post unpacks its meaning, mechanics, and visual intuition, with real-world applications.

Aryan
Apr 21
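To make the factorization concrete before you read the post, here is a minimal NumPy sketch (my own illustration, not code from the post): it verifies the decomposition on a made-up non-square matrix and shows the covariance-free route to PCA that the summary mentions.

```python
import numpy as np

# A made-up non-square (4 x 3) matrix
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# Thin SVD: A = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(S) @ Vt))  # True: the factors reproduce A

# PCA without the covariance matrix: center the data; the right-singular
# vectors of the centered matrix are the principal axes
A_centered = A - A.mean(axis=0)
_, _, Vt_c = np.linalg.svd(A_centered, full_matrices=False)
scores = A_centered @ Vt_c[:2].T  # top-2 principal components
print(scores)
```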


LOGISTIC REGRESSION - 4
Logistic Regression isn’t just about predicting 0s and 1s—it’s a beautiful dance of probability and optimization. At its core lies Maximum Likelihood Estimation (MLE), the technique we use to tune our model’s parameters for the best fit. From Bernoulli assumptions to log-likelihood derivation, from sigmoid curves to regularization—this post walks you through how MLE powers logistic regression, one math step at a time.

Aryan
Apr 20
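As a rough preview of the math (a sketch of mine, not the post's derivation), the snippet below minimizes the Bernoulli negative log-likelihood by gradient descent, with a small L2 penalty standing in for regularization; the data and hyperparameters are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, X, y, lam):
    """Bernoulli negative log-likelihood plus an L2 penalty."""
    p = sigmoid(X @ w)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) + lam * w @ w

# Made-up data: a bias column plus one feature, with binary labels
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

# Maximizing the likelihood is minimizing the NLL; plain gradient descent
lam, w = 0.01, np.zeros(2)
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ w) - y) + 2 * lam * w  # gradient of the NLL
    w -= 0.1 * grad

print(w, neg_log_likelihood(w, X, y, lam))
```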


LOGISTIC REGRESSION - 3
Logistic regression isn’t just about fitting curves—it’s about understanding how data speaks. This blog breaks down the difference between probability and likelihood with relatable examples and shows how these ideas power logistic regression and its MLE-based training.

Aryan
Apr 19
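The distinction fits in a few lines of SciPy (a hypothetical coin-flip illustration of mine, not an example from the post): probability fixes the parameter and asks about data, while likelihood fixes the observed data and varies the parameter.

```python
import numpy as np
from scipy.stats import binom

# Probability: parameter fixed (fair coin, p = 0.5), data varies.
# "What is the chance of seeing 7 heads in 10 flips?"
print(binom.pmf(7, n=10, p=0.5))

# Likelihood: data fixed (we saw 7 heads in 10 flips), parameter varies.
# "Which p makes this observation most plausible?"
ps = np.linspace(0.01, 0.99, 99)
likelihoods = binom.pmf(7, n=10, p=ps)
print(ps[np.argmax(likelihoods)])  # 0.7, the observed frequency (the MLE)
```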


LOGISTIC REGRESSION - 2
Explore how Logistic Regression extends to multi-class problems using One-vs-Rest (OvR) and Softmax Regression. Learn about coefficient updates with gradient descent, one-hot encoding, and categorical cross-entropy loss for accurate predictions.

Aryan
Apr 16
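For a flavor of the pieces involved, here is a small NumPy sketch of mine (not the post's code) combining one-hot encoding, softmax, and categorical cross-entropy on made-up logits.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Made-up logits for 3 samples over 3 classes, with integer labels
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.1, 0.4, 2.2]])
labels = np.array([0, 1, 2])

Y = np.eye(3)[labels]  # one-hot encode the labels

# Categorical cross-entropy: average of -sum(y * log(p)) over samples
P = softmax(logits)
loss = -np.mean(np.sum(Y * np.log(P), axis=1))
print(loss)
```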


LOGISTIC REGRESSION - 1
Explore logistic regression, a powerful classification algorithm, from its basic geometric principles like decision boundaries and half-planes, to its use of the sigmoid function for probabilistic predictions. Understand why maximum likelihood estimation and binary cross-entropy loss are crucial for finding the optimal model in classification tasks. Learn how distance from the decision boundary translates to prediction confidence.

Aryan
Apr 14
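The distance-to-confidence idea can be sketched in a few lines (an illustration under assumed, untrained weights, not the post's example): points farther from the hyperplane w·x + b = 0 get probabilities closer to 0 or 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed (not trained) weights; the decision boundary is w @ x + b = 0
w = np.array([2.0, -1.0])
b = 0.5

points = np.array([[0.0, 0.5],    # exactly on the boundary
                   [1.0, 0.0],    # on the positive side
                   [-2.0, 1.0]])  # far on the negative side

# Signed distance to the boundary, and the sigmoid of the raw score
dist = (points @ w + b) / np.linalg.norm(w)
prob = sigmoid(points @ w + b)
for d, p in zip(dist, prob):
    print(f"distance {d:+.2f}  ->  P(y=1) = {p:.3f}")
# Farther from the boundary means probabilities closer to 0 or 1
```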


Hyperparameter Tuning
Tuning machine learning models for peak performance requires more than just good data — it demands smart hyperparameter selection. This post dives into the difference between parameters and hyperparameters, and compares two powerful tuning methods: GridSearchCV and RandomizedSearchCV. Learn how they work, when to use each, and how they can improve your model’s accuracy efficiently.

Aryan
Apr 11
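Both searches come straight from scikit-learn; here is a minimal sketch contrasting them (my toy setup, not the post's, with an arbitrary model and search space).

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=300, random_state=0)

# GridSearchCV tries every combination in the grid exhaustively
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# RandomizedSearchCV samples n_iter settings from a distribution,
# which is often far cheaper on large search spaces
rand = RandomizedSearchCV(LogisticRegression(max_iter=1000),
                          param_distributions={"C": loguniform(1e-3, 1e2)},
                          n_iter=10, cv=5, random_state=0)
rand.fit(X, y)
print(rand.best_params_, rand.best_score_)
```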


Data Leakage in Machine Learning
Data leakage is a hidden threat in machine learning that can cause your model to perform well during training but fail in real-world scenarios. This post explains what data leakage is, how it happens—through target leakage, preprocessing errors, and more—and how to detect and prevent it. Learn key techniques to build reliable ML models and avoid common pitfalls in your data pipeline.

Aryan
Apr 8
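One classic preprocessing leak is easy to reproduce (a toy demonstration of mine, not taken from the post): scaling before splitting lets validation statistics contaminate training, while a scikit-learn Pipeline keeps each fold clean.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, random_state=0)

# LEAKY: the scaler is fit on the full dataset, so each validation
# fold's statistics (mean, variance) leak into every training fold
X_scaled = StandardScaler().fit_transform(X)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_scaled, y, cv=5)

# SAFE: inside a Pipeline, the scaler is re-fit on each training
# fold only, so the validation folds stay truly unseen
safe_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
safe = cross_val_score(safe_model, X, y, cv=5)

print(leaky.mean(), safe.mean())
```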


CROSS VALIDATION
Cross-validation is a powerful technique to evaluate machine learning models before deployment. This post explains why hold-out validation may fail, introduces k-fold and leave-one-out cross-validation, and explores how stratified cross-validation handles imbalanced datasets—ensuring your models generalize well to unseen data.

Aryan
Apr 6
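A quick scikit-learn sketch (mine, not the post's) comparing the three schemes on a deliberately imbalanced toy dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, LeaveOneOut,
                                     StratifiedKFold, cross_val_score)

# Deliberately imbalanced toy data (roughly 90% / 10% classes)
X, y = make_classification(n_samples=200, weights=[0.9, 0.1],
                           random_state=0)
model = LogisticRegression(max_iter=1000)

# Plain k-fold: fold class ratios can drift from the dataset's
print(cross_val_score(model, X, y, cv=KFold(n_splits=5)).mean())

# Stratified k-fold: every fold keeps the original class ratio
print(cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=5)).mean())

# Leave-one-out: n folds of size 1, thorough but expensive
print(cross_val_score(model, X, y, cv=LeaveOneOut()).mean())
```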


ROC CURVE IN MACHINE LEARNING
Understanding how classification models convert probabilities into decisions is critical in machine learning. This post breaks down the ROC Curve, confusion matrix, and the art of threshold selection. With intuitive examples like spam detection and student placement, you’ll learn how to evaluate classifiers, minimize errors, and choose the best threshold using ROC and AUC-ROC.

Aryan
Apr 5
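A minimal scikit-learn sketch (my illustration, not the post's): sweep thresholds with roc_curve, score with AUC, and pick a threshold via Youden's J statistic, one common heuristic (which may or may not be the rule the post recommends).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Score the test set with class-1 probabilities, then sweep thresholds
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, probs)
print("AUC-ROC:", roc_auc_score(y_te, probs))

# Youden's J: pick the threshold that maximizes TPR - FPR
best = thresholds[np.argmax(tpr - fpr)]
print("chosen threshold:", best)
```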


Kernel PCA
Kernel PCA extends traditional PCA by enabling nonlinear dimensionality reduction using the kernel trick. It projects data into a higher-dimensional space, making complex patterns more separable and preserving structure during reduction.

Aryan
Mar 27
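The classic concentric-circles demo shows the idea in a few lines (a standard scikit-learn illustration, not code from the post; the RBF kernel and gamma value are my choices).

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Concentric circles: a pattern no linear projection can separate
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# Linear PCA just rotates the rings; they stay entangled
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel works in an implicit high-dimensional
# feature space where the two rings become separable
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)
print(X_pca[:2], X_kpca[:2])
```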