Support Vector Machine (SVM)


Support Vector Machine (SVM) – Part 7
This blog demystifies how the RBF kernel in SVMs creates highly adaptive, local decision boundaries by emphasizing similarity between nearby points. You'll understand the role of gamma, how the kernel's geometry defines regions of influence, and why the RBF kernel enables powerful non-linear classification by implicitly mapping data into infinite-dimensional space.

Aryan
May 8
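A taste of the idea ahead of the full post: a minimal sketch (assuming scikit-learn's SVC, not code from the post itself) of how gamma sets the reach of each point's influence under the RBF kernel k(x, z) = exp(−γ‖x − z‖²).

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons: a classic non-linear benchmark.
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

# k(x, z) = exp(-gamma * ||x - z||^2) decays with distance, so each
# training point only influences its own neighbourhood.
for gamma in (0.1, 1.0, 100.0):
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
    # Larger gamma -> tighter regions of influence -> wigglier boundary
    # and more support vectors (a classic overfitting symptom).
    print(f"gamma={gamma:>6}: {len(clf.support_)} support vectors, "
          f"train accuracy={clf.score(X, y):.3f}")
```

The support-vector count is the tell: as gamma grows, the boundary bends around individual points and more of the training set ends up on or inside the margin.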


Support Vector Machine (SVM) – Part 6
The dual formulation of SVMs isn't just a rewrite; it's a revolution. It shifts the optimization game from weight space to α-space, where the solution depends only on the support vectors. We unpack how the dual problem lets SVMs compute with dot products alone, enabling the kernel trick and efficient solutions to non-linear problems. This post breaks it all down, from dot products to RBFs, with clarity, code, and geometric insight.

Aryan
May 5
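As a companion to the dual view, here's a minimal sketch (scikit-learn assumed; dual_coef_, support_vectors_, and intercept_ are its API, not the post's) verifying that the decision function really is a sum over support vectors, f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, n_features=4, random_state=0)
clf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)

# In the dual view the decision function touches only the support
# vectors: f(x) = sum_i alpha_i * y_i * K(x_i, x) + b.
# scikit-learn stores the products alpha_i * y_i in dual_coef_.
K = rbf_kernel(X[:5], clf.support_vectors_, gamma=0.5)
f_manual = K @ clf.dual_coef_.ravel() + clf.intercept_

print(np.allclose(f_manual, clf.decision_function(X[:5])))  # True
```

Nothing in that reconstruction needed the weight vector w; the kernel matrix against the support vectors carried the whole computation, which is exactly what makes the trick work in infinite-dimensional feature spaces.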


Support Vector Machine (SVM) – Part 5
Step beyond 2-D lines into n-D hyperplanes. This post walks you through soft-margin SVMs, inequality-constrained optimization, KKT conditions, primal-to-dual conversion, and why only a handful of support vectors end up steering the whole classifier: your cheat-sheet to scaling SVMs without losing your cool.

Aryan
May 4
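A quick illustration of that last point (a sketch assuming scikit-learn, not the post's own code): drop every non-support vector and the fitted hyperplane barely moves, because only points with non-zero multipliers are active in the KKT conditions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.2, random_state=7)

# Fit a soft-margin linear SVM, then refit on its support vectors only.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf.support_  # indices of the points with non-zero alpha
clf_sv = SVC(kernel="linear", C=1.0).fit(X[sv], y[sv])

# Non-support vectors are inactive constraints, so the hyperplane
# (w, b) should come out essentially unchanged.
print(f"{len(sv)} of {len(X)} points are support vectors")
print("change in w:", np.linalg.norm(clf.coef_ - clf_sv.coef_))       # ~0
print("change in b:", abs(clf.intercept_[0] - clf_sv.intercept_[0]))  # ~0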


Support Vector Machine (SVM) – Part 4
From toy circles to cutting-edge classifiers, this post shows how Support Vector Machines harness constrained optimization: we chart contours, trace gradient vectors, and align them with Lagrange multipliers to see exactly how SVMs carve out the widest possible margin. Ready to bridge raw calculus and real-world margin magic?

Aryan
May 2
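If you want to poke at the result numerically, here's a small sketch (scikit-learn assumed) checking the stationarity condition w = Σᵢ αᵢ yᵢ xᵢ that the Lagrange multipliers deliver, along with the margin width 2/‖w‖ the optimizer maximized.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.8, random_state=3)
clf = SVC(kernel="linear", C=10.0).fit(X, y)
w = clf.coef_.ravel()

# Stationarity from the Lagrangian: w = sum_i alpha_i * y_i * x_i.
# dual_coef_ already holds the products alpha_i * y_i.
w_from_duals = clf.dual_coef_.ravel() @ clf.support_vectors_
print("w matches its dual expansion:", np.allclose(w, w_from_duals))

# The geometric quantity the optimizer maximized is 2 / ||w||.
print("margin width:", 2 / np.linalg.norm(w))
```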


Support Vector Machine (SVM) – Part 3
Support Vector Classifiers (SVCs) struggle when data isn't linearly separable. The real world isn't clean, and straight-line boundaries fail. That's where constrained optimization and the kernel trick step in, transforming SVCs into full-blown SVMs capable of tackling nonlinear patterns with elegance and efficiency.

Aryan
Apr 30
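For a taste of that payoff, a minimal sketch (assuming scikit-learn; the dataset is a stand-in, not from the post): concentric circles defeat a linear SVC but fall to an RBF-kernel SVM.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: no straight line can separate the two classes.
X, y = make_circles(n_samples=300, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear").fit(X_tr, y_tr)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X_tr, y_tr)

# The linear SVC is stuck near chance; the kernelized SVM separates
# the rings without ever computing the feature map explicitly.
print("linear SVC accuracy:", linear.score(X_te, y_te))
print("RBF-kernel SVM accuracy:", rbf.score(X_te, y_te))
```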


Support Vector Machine (SVM) – Part 2
Hard-margin SVMs look clean on the whiteboard (huge margin, zero errors), but real-world data laughs at that rigidity. Noise, overlap, and outliers wreck the 'perfectly separable' dream, leaving the optimization problem infeasible. Cue slack variables: a pragmatic detour that births the soft-margin SVM and keeps classification sane.

Aryan
Apr 28
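To see the trade-off in numbers, here's a minimal sketch (scikit-learn assumed; the synthetic data is illustrative only) of how the penalty C prices the slack variables ξᵢ against margin width.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Overlapping classes: a hard margin (zero violations) is infeasible here.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=1)
y_signed = np.where(y == 1, 1, -1)

# C is the price of slack: small C tolerates violations for a wide
# margin; large C approaches the brittle hard-margin behaviour.
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y_signed)
    margins = y_signed * clf.decision_function(X)
    slack = np.maximum(0, 1 - margins)  # xi_i for each training point
    print(f"C={C:>6}: total slack={slack.sum():7.2f}, "
          f"margin width={2 / np.linalg.norm(clf.coef_):.3f}")
```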


Support Vector Machine (SVM) – Part 1
Discover the core idea of the hard-margin SVM: finding the hyperplane that perfectly separates two classes with the widest margin. Using student placement data as an example, this blog explains support vectors, margin equations, and the math behind maximal-margin classification. Learn how an SVM makes decisions and why a hard margin isn't always practical on real-world data.

Aryan
Apr 26
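A minimal sketch of the geometry (assuming scikit-learn; a huge C stands in for a true hard margin, and synthetic blobs stand in for the placement data): on cleanly separable data the support vectors land exactly on the margin boundaries y(w·x + b) = 1.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Well-separated clusters, playing the role of the placement dataset.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=4)

# A very large C leaves almost no room for slack, so on separable data
# the fit behaves like a hard-margin SVM.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Hard-margin geometry: support vectors satisfy y_i (w.x_i + b) = 1,
# i.e. they sit exactly on the margin boundaries.
f_sv = clf.decision_function(clf.support_vectors_)
print("support-vector margins:", np.round(np.abs(f_sv), 4))  # all ~1
print("margin width 2/||w|| =", 2 / np.linalg.norm(clf.coef_))
```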