BLOGS


LOGISTIC REGRESSION - 4
Logistic Regression isn’t just about predicting 0s and 1s—it’s a beautiful dance of probability and optimization. At its core lies Maximum Likelihood Estimation (MLE), the technique we use to tune our model’s parameters for the best fit. From Bernoulli assumptions to log-likelihood derivation, from sigmoid curves to regularization—this post walks you through how MLE powers logistic regression, one math step at a time.
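As a quick taste of what the post covers, here is a minimal sketch of MLE for logistic regression: the sigmoid turns a linear score into a Bernoulli probability, and maximizing the likelihood is the same as minimizing the negative log-likelihood by gradient descent. The tiny dataset and learning rate below are illustrative choices, not taken from the post.

```python
import numpy as np

def sigmoid(z):
    # Maps any real-valued score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, X, y):
    # Bernoulli negative log-likelihood (binary cross-entropy)
    p = sigmoid(X @ w)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Illustrative data: a bias column plus one feature
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 1])

# Maximizing the likelihood = descending the NLL's gradient
w = np.zeros(2)
for _ in range(1000):
    grad = X.T @ (sigmoid(X @ w) - y)  # gradient of the NLL w.r.t. w
    w -= 0.1 * grad
```

After training, the fitted weights push the predicted probabilities toward the observed labels, which is exactly the "best fit" MLE promises.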

Aryan
Apr 20 · 14 min read


LOGISTIC REGRESSION - 3
Logistic regression isn’t just about fitting curves—it’s about understanding how data speaks. This blog breaks down the difference between probability and likelihood with relatable examples and shows how these ideas power logistic regression and its MLE-based training.
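A coin-flip example (an illustration of the probability-vs-likelihood distinction, not an excerpt from the post): probability fixes the parameter and asks about data, while likelihood fixes the observed data and asks which parameter explains it best.

```python
import numpy as np
from math import comb

# Probability: fix p = 0.5, ask how likely the data is.
# P(7 heads in 10 flips | p = 0.5)
p_data = comb(10, 7) * 0.5**7 * 0.5**3

# Likelihood: fix the data (7 heads in 10 flips), vary the parameter p.
def likelihood(p, heads=7, flips=10):
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

# The maximum-likelihood estimate is the p that makes the data most
# probable; for coin flips that is simply heads/flips = 0.7.
ps = np.linspace(0.01, 0.99, 99)
best_p = ps[np.argmax([likelihood(p) for p in ps])]
```

The same grid-search-over-parameters intuition is what MLE-based training of logistic regression does, just over weight vectors instead of a single coin bias.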

Aryan
Apr 19 · 14 min read


LOGISTIC REGRESSION - 2
Explore how Logistic Regression extends to multi-class problems using One-vs-Rest (OvR) and Softmax Regression. Learn about coefficient updates with gradient descent, one-hot encoding, and categorical cross-entropy loss for accurate predictions.
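The multi-class pieces named above fit together in a few lines; here is an illustrative sketch (scores and labels are made up, not from the post) of softmax turning class scores into probabilities and categorical cross-entropy scoring them against a one-hot label.

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability; each row sums to 1
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_onehot, probs):
    # Mean negative log-probability assigned to the true class
    return -np.mean(np.sum(y_onehot * np.log(probs), axis=1))

scores = np.array([[2.0, 1.0, 0.1]])  # one sample, three class scores
probs = softmax(scores)               # probabilities over the 3 classes
y = np.array([[1, 0, 0]])             # one-hot: true class is class 0
loss = categorical_cross_entropy(y, probs)
```

Gradient descent on this loss updates one coefficient vector per class, which is the Softmax Regression side of the story; One-vs-Rest instead trains a separate binary classifier per class.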

Aryan
Apr 16 · 9 min read


LOGISTIC REGRESSION - 1
Explore logistic regression, a powerful classification algorithm, from its basic geometric principles like decision boundaries and half-planes, to its use of the sigmoid function for probabilistic predictions. Understand why maximum likelihood estimation and binary cross-entropy loss are crucial for finding the optimal model in classification tasks. Learn how distance from the decision boundary translates to prediction confidence.
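The distance-to-confidence idea can be sketched numerically (the weights and points below are hypothetical, chosen only to illustrate): the linear score w·x + b is a signed distance-like quantity from the decision boundary, and the sigmoid maps points near the boundary to probabilities near 0.5 and points far from it to probabilities near 0 or 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted parameters; the boundary is the set w·x + b = 0
w, b = np.array([1.5, -0.5]), 0.2

p_near = sigmoid(w @ np.array([0.1, 0.3]) + b)   # point close to the boundary
p_far  = sigmoid(w @ np.array([3.0, -2.0]) + b)  # point far into the positive half-plane
```

The nearby point lands just above 0.5 (an uncertain call), while the distant point's probability saturates toward 1, matching the intuition that distance from the boundary measures confidence.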

Aryan
Apr 14 · 11 min read