All Posts


DECISION TREES - 3
Decision trees measure feature importance via impurity reduction (e.g., Gini impurity). Overfitting occurs when a tree fits noise rather than patterns. Pruning reduces complexity: pre-pruning constrains growth with limits such as maximum depth or minimum samples per split, while post-pruning, such as cost complexity pruning, trims nodes after the tree is fully grown. Both methods improve generalization to new data, making them vital for effective machine learning models.
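For a feel of both pruning styles, here is a minimal scikit-learn sketch; the dataset and hyperparameter values are illustrative placeholders, not taken from the post:

```python
# A sketch of pre- and post-pruning with scikit-learn's DecisionTreeClassifier.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Pre-pruning: stop growth early with a depth cap and a minimum split size.
pre = DecisionTreeClassifier(max_depth=4, min_samples_split=20, random_state=42)
pre.fit(X_train, y_train)

# Post-pruning: compute the cost complexity pruning path, then refit with a
# chosen alpha (in practice alpha would be selected by cross-validation).
path = DecisionTreeClassifier(random_state=42).cost_complexity_pruning_path(X_train, y_train)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]  # illustrative mid-path choice
post = DecisionTreeClassifier(ccp_alpha=alpha, random_state=42).fit(X_train, y_train)

print("pre-pruned accuracy :", pre.score(X_test, y_test))
print("post-pruned accuracy:", post.score(X_test, y_test))
print("impurity-based importances (first 5):", pre.feature_importances_[:5])
```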

Aryan
May 17, 2025


Ensemble Learning
Ensemble Learning combines multiple machine learning models to improve accuracy, stability, and generalization. Inspired by the “Wisdom of the Crowd,” it relies on the idea that diverse models can correct each other’s errors. Popular methods include Voting, Bagging, Boosting, and Stacking. These approaches reduce overfitting, tame variance (Bagging) or bias (Boosting), and lift overall performance, making ensemble learning a key technique in modern machine learning.
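A quick sketch of two of those methods in scikit-learn; the base models and synthetic data below are placeholders chosen for illustration:

```python
# A sketch of Voting and Bagging ensembles, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Voting: three diverse models whose mistakes are unlikely to coincide.
voter = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
])

# Bagging: many trees on bootstrap samples, averaging away variance.
bagger = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

for name, model in [("voting", voter), ("bagging", bagger)]:
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```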

Aryan
May 17, 2025


DECISION TREES - 2
Dive into Decision Trees for Regression (CART) and understand its core mechanics for continuous target variables. This post covers how CART evaluates splits using Mean Squared Error (MSE), its geometric interpretation as axis-aligned regions of the feature space, and the step-by-step process of making predictions for both regression and classification tasks. Discover its strengths on non-linear data and key disadvantages like overfitting, which underline the need for regularization.
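A small illustrative sketch, assuming scikit-learn's DecisionTreeRegressor as a stand-in for CART; the sine data is synthetic:

```python
# A sketch of CART regression: MSE-scored splits, piecewise-constant output.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)  # non-linear target

# criterion="squared_error" scores each candidate split by the MSE of the
# two child regions; the tree greedily keeps the best split at every node.
tree = DecisionTreeRegressor(criterion="squared_error", max_depth=3)
tree.fit(X, y)

# Predictions are piecewise constant: the mean of y in each axis-aligned region.
print(tree.predict([[1.0], [2.5], [4.0]]))
```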

Aryan
May 17, 2025


DECISION TREES - 1
Discover the power of decision trees in machine learning. This post dives into their intuitive approach, their versatility for classification and regression, and the CART algorithm. Learn how Gini impurity and splitting criteria partition data for accurate predictions. Perfect for data science enthusiasts!
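A tiny self-contained sketch of the Gini computation the post describes, using nothing beyond the standard formula:

```python
# Hand-rolled Gini impurity: 1 - sum(p_k^2) over the class proportions p_k.
from collections import Counter

def gini(labels):
    """Gini impurity of a node containing the given class labels."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini(["a", "a", "a", "a"]))  # 0.0 -> pure node
print(gini(["a", "a", "b", "b"]))  # 0.5 -> maximally mixed (two classes)
# CART picks the split that minimises the weighted Gini of the child nodes.
```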

Aryan
May 16, 2025


Support Vector Machine (SVM) – Part 7
This blog demystifies how the RBF kernel in SVM creates highly adaptive, local decision boundaries by emphasizing similarity between nearby points. You'll understand the role of gamma, how the kernel's geometry defines regions of influence, and why RBF enables powerful non-linear classification by implicitly mapping data into infinite-dimensional space.
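A short sketch of gamma's effect, assuming scikit-learn's SVC and a synthetic two-moons dataset; the values are illustrative:

```python
# How gamma reshapes the RBF kernel's regions of influence.
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# Small gamma -> wide influence per point (smoother boundary);
# large gamma -> influence shrinks to close neighbours (wigglier boundary).
for gamma in [0.1, 1.0, 100.0]:
    score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    print(f"gamma={gamma:>6}: cv accuracy = {score:.3f}")
```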

Aryan
May 8, 2025


Support Vector Machine (SVM) – Part 6
Dual Formulation in SVMs isn’t just a rewrite—it’s a revolution. It shifts the optimization game from weight space to α-space, focusing only on support vectors. We unpack how dual SVMs compute smarter, enable kernel tricks, and efficiently solve non-linear problems. This post breaks it all down—from dot products to RBFs—with clarity, code, and geometric insight.
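One way to see the α-space view in practice, assuming scikit-learn's SVC; the blob dataset is a placeholder:

```python
# After fitting, the dual multipliers live only on the support vectors.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
svc = SVC(kernel="rbf", C=1.0).fit(X, y)

print("training points :", len(X))
print("support vectors :", len(svc.support_))     # typically far fewer
print("dual_coef_ shape:", svc.dual_coef_.shape)  # alpha_i * y_i, one per SV
# Prediction only needs kernel evaluations against these support vectors.
```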

Aryan
May 5, 2025


Support Vector Machine (SVM) – Part 5
Step beyond 2‑D lines into n‑D hyperplanes. This post walks you through soft‑margin SVMs, inequality‑constrained optimisation, KKT conditions, primal‑to‑dual conversion, and why only a handful of support vectors end up steering the whole classifier—your cheat‑sheet to scaling SVMs without losing your cool.

Aryan
May 4, 2025


Support Vector Machine (SVM) – Part 4
From toy circles to cutting-edge classifiers, this post shows how Support Vector Machines harness constrained optimization: we chart contours, trace gradient vectors, and align them with Lagrange multipliers to see exactly how SVMs carve out the widest possible margin. Ready to bridge raw calculus and real-world margin magic?

Aryan
May 2, 2025


Support Vector Machine (SVM) – Part 3
Support Vector Classifiers (SVCs) struggle when data isn’t linearly separable. The real world isn’t clean, and straight-line boundaries fail. That’s where constrained optimization and the kernel trick step in, transforming SVC into full-blown SVMs capable of tackling nonlinear patterns with elegance and efficiency.
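A minimal sketch of that transformation, assuming scikit-learn and a synthetic concentric-circles dataset:

```python
# The kernel trick's payoff on data no straight line can separate.
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.5, noise=0.08, random_state=0)

# A linear boundary cannot separate concentric circles...
linear = SVC(kernel="linear")
# ...but an RBF kernel implicitly maps the data where a hyperplane can.
rbf = SVC(kernel="rbf")

for name, model in [("linear", linear), ("rbf", rbf)]:
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```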

Aryan
Apr 30, 2025


Support Vector Machine (SVM) – Part 2
Hard‑margin SVMs look clean on the whiteboard—huge margin, zero errors—but real‑world data laughs at that rigidity. Noise, overlap, and outliers wreck the ‘perfectly separable’ dream, leaving the model unsolvable. Cue slack variables: a pragmatic detour that births the soft‑margin SVM and keeps classification sane.
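A small sketch of the slack trade-off, assuming scikit-learn's SVC, where the C parameter prices margin violations; data and values are illustrative:

```python
# C prices slack: it trades margin width against misclassified points.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Overlapping classes, so a hard margin (C -> infinity) cannot fit cleanly.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=0)

# Small C tolerates slack (wide margin, more violations inside it);
# large C punishes slack, approaching hard-margin behaviour.
for C in [0.01, 1.0, 1000.0]:
    svc = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C:>7}: support vectors = {len(svc.support_)}")
```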

Aryan
Apr 28, 2025