Exploring Opportunities in AI & Machine Learning


Support Vector Machine (SVM) – Part 4
From toy circles to cutting-edge classifiers, this post shows how Support Vector Machines harness constrained optimization: we chart contours, trace gradient vectors, and align them with Lagrange multipliers to see exactly how SVMs carve out the widest possible margin. Ready to bridge raw calculus and real-world margin magic?

Aryan
May 2, 2025
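
A minimal sketch of where the post's math lands, assuming scikit-learn's SVC on illustrative blob data (not the post's own code): the fitted weights w give the margin width 2/||w||, and dual_coef_ exposes the nonzero Lagrange multipliers.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    # Two well-separated clusters stand in for the post's toy examples
    X, y = make_blobs(n_samples=100, centers=2, random_state=6)

    # A large C approximates the hard-margin problem the Lagrangian solves
    clf = SVC(kernel='linear', C=1000).fit(X, y)

    w = clf.coef_[0]  # normal vector of the separating hyperplane
    print('margin width:', 2 / np.linalg.norm(w))
    print('y_i * alpha_i for the support vectors:', clf.dual_coef_)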


Support Vector Machine (SVM) – Part 3
Support Vector Classifiers (SVCs) struggle when data isn’t linearly separable. The real world isn’t clean, and straight-line boundaries fail. That’s where constrained optimization and the kernel trick step in, transforming SVCs into full-blown SVMs capable of tackling nonlinear patterns with elegance and efficiency.

Aryan
Apr 30, 2025
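
A hedged sketch of the kernel trick in action, assuming scikit-learn and synthetic concentric circles (illustrative, not the post's data): a linear boundary fails, while an RBF kernel separates the rings without ever computing the high-dimensional features explicitly.

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Concentric circles: no straight line can separate the two classes
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    linear = SVC(kernel='linear').fit(X, y)     # straight-line boundary fails
    rbf = SVC(kernel='rbf', gamma=2).fit(X, y)  # kernel trick: implicit feature map

    print('linear accuracy:', linear.score(X, y))
    print('RBF accuracy:', rbf.score(X, y))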


Support Vector Machine (SVM) – Part 2
Hard-margin SVMs look clean on the whiteboard: huge margin, zero errors. But real-world data laughs at that rigidity. Noise, overlap, and outliers wreck the ‘perfectly separable’ dream, leaving the optimization problem infeasible. Cue slack variables: a pragmatic detour that births the soft-margin SVM and keeps classification sane.

Aryan
Apr 28, 2025
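
A small illustration of the slack idea, assuming scikit-learn's SVC (the data here is synthetic, not from the post): the C hyperparameter prices margin violations, so a small C buys more slack while a large C pushes back toward the brittle hard margin.

    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    # Overlapping clusters: a zero-error hard margin is infeasible here
    X, y = make_blobs(n_samples=200, centers=2, cluster_std=3.0, random_state=1)

    for C in (0.01, 1, 100):
        clf = SVC(kernel='linear', C=C).fit(X, y)
        # Smaller C tolerates more violations: more support vectors, wider margin
        print(f'C={C}: support vectors={len(clf.support_)}, '
              f'train accuracy={clf.score(X, y):.2f}')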


Support Vector Machine (SVM) – Part 1
Discover the core idea of Hard Margin SVM: finding the hyperplane that perfectly separates two classes with the widest margin. With student placement data as an example, this blog explains support vectors, margin equations, and the math behind maximal margin classification. Learn how SVM makes decisions and why a hard margin isn't always practical on real-world data.

Aryan
Apr 26, 2025
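
To make the margin equations concrete, here is a minimal sketch (assuming scikit-learn; the placement data is mocked with separable blobs) that checks the hard-margin constraint y_i(w·x_i + b) >= 1 on a fitted model:

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    # Linearly separable toy data standing in for the placement example
    X, y = make_blobs(n_samples=60, centers=2, cluster_std=0.8, random_state=4)
    y_signed = np.where(y == 1, 1, -1)

    clf = SVC(kernel='linear', C=1e6).fit(X, y_signed)  # near hard-margin fit
    w, b = clf.coef_[0], clf.intercept_[0]

    # Every point should satisfy y_i * (w.x_i + b) >= 1; support vectors
    # sit exactly on the margin, where the product equals 1
    margins = y_signed * (X @ w + b)
    print('min functional margin:', margins.min())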


Singular Value Decomposition (SVD)
Singular Value Decomposition (SVD) is a powerful matrix factorization technique used across machine learning, computer vision, and data science. From transforming non-square matrices to enabling PCA without explicitly computing the covariance matrix, SVD simplifies complex transformations into elegant geometric steps. This blog unpacks its meaning, mechanics, and visual intuition with real-world applications.

Aryan
Apr 21, 2025
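
A compact sketch of both claims, in plain NumPy (the matrices are illustrative, not the post's examples): factor a non-square matrix, verify the reconstruction, and recover PCA's principal directions from the SVD of centered data.

    import numpy as np

    # A non-square matrix: SVD factors it as U @ diag(s) @ Vt
    A = np.array([[3.0, 1.0, 1.0],
                  [-1.0, 3.0, 1.0]])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print('reconstruction ok:', np.allclose(A, U @ np.diag(s) @ Vt))

    # PCA without the covariance matrix: the right singular vectors of
    # centered data are the principal directions
    X = np.random.default_rng(0).normal(size=(100, 3))
    Xc = X - X.mean(axis=0)
    _, _, Vt_pca = np.linalg.svd(Xc, full_matrices=False)
    print('principal directions:\n', Vt_pca)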


LOGISTIC REGRESSION - 4
Logistic Regression isn’t just about predicting 0s and 1s—it’s a beautiful dance of probability and optimization. At its core lies Maximum Likelihood Estimation (MLE), the technique we use to tune our model’s parameters for the best fit. From Bernoulli assumptions to log-likelihood derivation, from sigmoid curves to regularization—this post walks you through how MLE powers logistic regression, one math step at a time.

Aryan
Apr 20, 2025
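
A bare-bones sketch of MLE at work, in plain NumPy on made-up 1-D data (nothing here is the post's code): gradient ascent on the Bernoulli log-likelihood, which is the same as gradient descent on binary cross-entropy.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Toy 1-D data with a noisy threshold at zero
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = (x + 0.3 * rng.normal(size=200) > 0).astype(float)

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        p = sigmoid(w * x + b)
        # Gradient of the negative log-likelihood (binary cross-entropy)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)

    p = sigmoid(w * x + b)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    print('fitted w, b:', w, b, ' log-likelihood:', ll)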


LOGISTIC REGRESSION - 3
Logistic regression isn’t just about fitting curves—it’s about understanding how data speaks. This blog breaks down the difference between probability and likelihood with relatable examples and shows how these ideas power logistic regression and its MLE-based training.

Aryan
Apr 19, 2025
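
A worked toy example of the distinction, using a made-up coin-flip record (purely illustrative): probability fixes the parameter and asks about data, while likelihood fixes the observed data and scores candidate parameters.

    import numpy as np

    # Observed flips (1 = heads): the data are now fixed
    flips = np.array([1, 1, 0, 1, 0, 1, 1, 1])

    # Bernoulli likelihood L(p) = product of p^x * (1-p)^(1-x), scanned over p
    for p in (0.3, 0.5, 0.75, 0.9):
        likelihood = np.prod(p ** flips * (1 - p) ** (1 - flips))
        print(f'p={p}: likelihood={likelihood:.5f}')

    # The MLE is the sample mean: the p that makes these flips most likely
    print('MLE of p:', flips.mean())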


LOGISTIC REGRESSION - 2
Explore how Logistic Regression extends to multi-class problems using One-vs-Rest (OvR) and Softmax Regression. Learn about coefficient updates with gradient descent, one-hot encoding, and categorical cross-entropy loss for accurate predictions.

Aryan
Apr 16, 2025
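
A minimal NumPy sketch of the softmax-plus-cross-entropy pairing described above (the scores and labels are invented for illustration):

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Raw scores for 2 samples over 3 classes, with one-hot encoded labels
    scores = np.array([[2.0, 1.0, 0.1],
                       [0.5, 2.5, 0.3]])
    y_onehot = np.array([[1, 0, 0],
                         [0, 1, 0]])

    probs = softmax(scores)
    # Categorical cross-entropy: mean negative log-probability of the true class
    loss = -np.mean(np.sum(y_onehot * np.log(probs), axis=1))
    print('probabilities:\n', probs)
    print('cross-entropy loss:', loss)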


LOGISTIC REGRESSION - 1
Explore logistic regression, a powerful classification algorithm, from its basic geometric principles like decision boundaries and half-planes, to its use of the sigmoid function for probabilistic predictions. Understand why maximum likelihood estimation and binary cross-entropy loss are crucial for finding the optimal model in classification tasks. Learn how distance from the decision boundary translates to prediction confidence.

Aryan
Apr 14, 2025
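
A tiny sketch of that last idea, with a hypothetical boundary w·x + b = 0 (the weights and points are invented): the farther a point sits from the boundary, the closer its sigmoid output gets to 0 or 1.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Hypothetical decision boundary w.x + b = 0 in the plane
    w = np.array([1.0, -1.0])
    b = 0.0

    points = np.array([[0.1, 0.0],    # just inside the positive half-plane
                       [3.0, -2.0],   # far from the boundary
                       [0.0, 0.0]])   # exactly on the boundary -> 0.5

    z = points @ w + b  # signed score grows with distance from the boundary
    print('scores:', z)
    print('P(class 1):', sigmoid(z))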


Hyperparameter Tuning
Tuning machine learning models for peak performance requires more than just good data — it demands smart hyperparameter selection. This post dives into the difference between parameters and hyperparameters, and compares two powerful tuning methods: GridSearchCV and RandomizedSearchCV. Learn how they work, when to use each, and how they can improve your model’s accuracy efficiently.

Aryan
Apr 11, 2025
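
A side-by-side sketch of the two searches, assuming scikit-learn and a synthetic dataset (the grids and distributions are illustrative, not the post's settings):

    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)

    # Grid search: exhaustively tries every combination in the grid
    grid = GridSearchCV(SVC(), {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}, cv=5)
    grid.fit(X, y)
    print('grid best:', grid.best_params_, round(grid.best_score_, 3))

    # Randomized search: samples a fixed budget from continuous distributions
    rand = RandomizedSearchCV(SVC(),
                              {'C': loguniform(1e-2, 1e2),
                               'gamma': loguniform(1e-3, 1e1)},
                              n_iter=10, cv=5, random_state=0)
    rand.fit(X, y)
    print('random best:', rand.best_params_, round(rand.best_score_, 3))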