

Loss Functions in Deep Learning: A Complete Guide to MSE, MAE, Cross-Entropy & More
Loss functions are the backbone of every neural network: they quantify how wrong the model's predictions are, and their gradients tell it how to improve.
This guide breaks down key loss functions, including MSE, MAE, Huber, Binary Cross-Entropy, and Categorical Cross-Entropy, with formulas, intuition, and use cases.
Learn how loss drives learning through forward and backward propagation, and why choosing the right one is crucial for model performance.
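
For quick reference, these are the standard textbook forms of the losses covered, for n samples with targets y_i and predictions ŷ_i:

\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\qquad
\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert

\text{Huber}_{\delta}(e_i) =
\begin{cases}
\tfrac{1}{2} e_i^2 & \lvert e_i \rvert \le \delta \\
\delta \lvert e_i \rvert - \tfrac{1}{2}\delta^2 & \text{otherwise}
\end{cases}
\qquad e_i = y_i - \hat{y}_i

\text{BCE} = -\frac{1}{n}\sum_{i=1}^{n} \bigl[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \bigr]
\qquad
\text{CCE} = -\frac{1}{n}\sum_{i=1}^{n} \sum_{k=1}^{K} y_{ik} \log \hat{y}_{ik}

Huber behaves like MSE for small errors and like MAE for large ones, which is where its robustness to outliers comes from.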

Aryan
4 days ago


Exclusive Feature Bundling (EFB) in LightGBM: Boost Speed & Reduce Memory Usage
Exclusive Feature Bundling (EFB) is a key LightGBM optimization that reduces the number of features by merging sparse, mutually exclusive columns—cutting memory usage and training time without sacrificing accuracy.
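
As a toy sketch of the bundling idea (made-up data; LightGBM applies this to histogram bins internally), two features that are never nonzero on the same row can share one column, with the second feature's values shifted by an offset so a split threshold can still separate them:

import numpy as np

# Two sparse features that are never nonzero on the same row.
f1 = np.array([0, 3, 0, 0, 2, 0], dtype=float)
f2 = np.array([5, 0, 0, 4, 0, 1], dtype=float)
assert not np.any((f1 != 0) & (f2 != 0))  # mutually exclusive -> bundleable

# Shift f2 past f1's value range so both features stay separable in one column.
offset = f1.max()
bundled = np.where(f2 != 0, f2 + offset, f1)
print(bundled)  # [8. 3. 0. 7. 2. 4.] -- values <= 3 came from f1, > 3 from f2

One column now carries the information of two, so histogram-building cost shrinks roughly in proportion to the number of features bundled away.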

Aryan
Sep 21


XGBoost Optimizations
XGBoost is one of the fastest gradient boosting algorithms, designed for high-dimensional, large-scale datasets. This guide explains its core optimizations, including approximate split finding with quantile sketches and the weighted quantile sketch, which cut computation time while maintaining high accuracy.
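
A rough sketch of the approximate split-finding idea (toy data, not XGBoost's actual sketch data structure): rather than testing every unique feature value as a threshold, propose a small set of quantile candidates; the weighted variant ranks rows by their Hessian h_i instead of by count:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)               # one feature column
quantiles = np.linspace(0, 1, 33)[1:-1]   # ~32 buckets of candidates

# Quantile sketch: candidate thresholds at equal-count quantiles.
candidates = np.quantile(x, quantiles)

# Weighted quantile sketch: rank rows by Hessian weight instead of count.
h = rng.uniform(0.1, 1.0, size=x.size)    # toy per-row Hessians
order = np.argsort(x)
cdf = np.cumsum(h[order]) / h.sum()       # Hessian-weighted CDF over sorted x
weighted_candidates = x[order][np.searchsorted(cdf, quantiles)]

print(f"{np.unique(x).size} exact thresholds -> {candidates.size} approximate")

Evaluating a few dozen candidates instead of every unique value is where the speedup comes from, at a negligible cost in split quality.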

Aryan
Sep 12


The Core Math Behind XGBoost
XGBoost isn’t just another boosting algorithm: its strength lies in the mathematics that power its objective function, optimization, and tree-building strategy. This post breaks down that core math, from gradients and Hessians to the Taylor series approximation, leaf weight derivation, and similarity scores. By the end, you’ll understand how XGBoost balances accuracy with regularization to build powerful predictive models.
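
For orientation, the headline results in the notation of the XGBoost paper, where g_i and h_i are the first and second derivatives (gradient and Hessian) of the loss with respect to the previous round's prediction:

\mathcal{L}^{(t)} \approx \sum_{i=1}^{n} \Bigl[ g_i f_t(\mathbf{x}_i) + \tfrac{1}{2} h_i f_t(\mathbf{x}_i)^2 \Bigr] + \gamma T + \tfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^2

With G_j = \sum_{i \in I_j} g_i and H_j = \sum_{i \in I_j} h_i summed over the instances in leaf j, the optimal leaf weight and similarity score follow directly:

w_j^* = -\frac{G_j}{H_j + \lambda}
\qquad
\text{Sim}_j = \frac{G_j^2}{H_j + \lambda}

\text{Gain} = \frac{1}{2} \left[ \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda} \right] - \gamma

A split is kept only when its Gain is positive, which is how the γ term acts as built-in pruning.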

Aryan
Aug 26