Gradient Boosting
An in-depth guide to Gradient Boosting and XGBoost, plus a quick look at other popular algorithms.
Understanding Gradient Boosting
Boosting is the branch of ensemble learning that builds its models sequentially: each new model tries to correct the mistakes left by the ones before it.
Gradient Boosting is the most general and powerful version of boosting because it treats error-correction as a gradient-descent problem: each new model is fit to the negative gradient of the loss left by the ensemble so far (for squared error, that is simply the residuals).
Gradient descent is a step-by-step procedure for tuning a model’s parameters so that a chosen measure of error (the loss function) is as small as possible.
Start with an initial guess for the parameters
Measure how a small change in each parameter affects the loss (the gradient)
Move each parameter slightly in the direction that lowers the loss
Repeat this process until the loss stops improving
In practice, this means the algorithm keeps adjusting the model’s weights, one small correction at a time, until the model’s predictions fit the training data as closely as the chosen loss function allows.
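To make that loop concrete, here is a minimal sketch of gradient descent fitting a straight line y ≈ w·x + b by squared error. The data, step size, and iteration count are made up for illustration.

import numpy as np

# Made-up data: a true line (w = 3, b = 5) plus some noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 5.0 + rng.normal(0, 1, size=100)

w, b = 0.0, 0.0        # start with an initial guess for the parameters
learning_rate = 0.02

for step in range(2000):
    error = (w * x + b) - y
    # Gradient of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Move each parameter slightly in the direction that lowers the loss
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")   # should land near 3 and 5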


XGBoost
Think of XGBoost as a team of mini decision trees that take turns fixing one another’s mistakes.
The first tree makes a rough prediction
The second tree studies where the first one was wrong and learns tiny rules to nudge those predictions closer to the truth
The third tree corrects the leftover errors, and so on
After hundreds of these quick, shallow trees, each contributing just a small “correction,” their combined output becomes a highly accurate model.
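The same turn-taking loop can be written out by hand in a few lines. The sketch below uses scikit-learn’s DecisionTreeRegressor on made-up data; it leaves out XGBoost’s regularization and sampling, but it shows each new shallow tree being fit to the errors the previous trees left behind.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Made-up data: a wavy curve plus noise
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

prediction = np.zeros_like(y)    # start from a prediction of zero for every row
learning_rate = 0.5
trees = []

for round_number in range(3):
    residuals = y - prediction                     # where are we still wrong?
    tree = DecisionTreeRegressor(max_depth=2)      # a quick, shallow tree
    tree.fit(X, residuals)                         # learn tiny rules that fix those errors
    prediction += learning_rate * tree.predict(X)  # add a small correction
    trees.append(tree)
    mse = np.mean((y - prediction) ** 2)
    print(f"after tree {round_number + 1}: mean squared error = {mse:.4f}")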
XGBoost still adds its trees one after another, but it parallelizes the split search inside each tree, and it penalizes overly complex trees with built-in regularization terms. This keeps the trees simple, curbs over-fitting, and helps the model generalize to new data.
At the same time, it trains each tree on a random slice of the rows and a random subset of the features. Giving every tree a slightly different view of the data means their mistakes are less likely to line up, so those errors cancel out when the trees are combined, boosting accuracy while lowering variance.
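In the xgboost Python package, those ideas map directly onto estimator parameters. A minimal sketch follows; the values are illustrative rather than tuned, and X_train / y_train stand in for whatever training data you have.

from xgboost import XGBRegressor

model = XGBRegressor(
    n_estimators=300,       # hundreds of quick, shallow trees
    max_depth=4,            # keep each tree simple
    learning_rate=0.05,     # each tree contributes only a small correction
    subsample=0.8,          # each tree trains on a random 80% of the rows
    colsample_bytree=0.8,   # ... and sees a random 80% of the features
    reg_lambda=1.0,         # built-in L2 penalty on leaf weights (regularization)
    tree_method="hist",     # fast histogram-based split search
)
# model.fit(X_train, y_train)   # X_train, y_train are placeholders for your data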
Example: XGBoost iteratively refining used-car price predictions
Using a random 80% of the listings and just the “mileage” feature, the first tiny tree says:
If mileage ≤ 60,000, add $4,000; otherwise subtract $2,000.
That gives a rough price for every car but ignores age, make, condition, etc.
The second tree, trained on a different sample of the data and looking only at “age” and “make”, says:
Newer Toyotas are still under-priced. It nudges those cars up by about $1,200 and pulls 15-year-old sedans down by $800.
The third tree, seeing yet another random subset of rows and features (“condition score,” “number of owners”), adds:
A modest bump for cars graded “excellent” and a slight drop for those rated “fair.”
This repeats for hundreds more trees. Each new tree:
finds its splits quickly, in parallel across features, thanks to XGBoost’s fast histogram search
pays a penalty if it tries to grow too deep or make huge leaf adjustments (regularization)
views only a random slice of rows and columns (sampling), so its mistakes differ from the others
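Building on the estimator sketch above, the snippet below imitates this used-car example end to end and then predicts one car’s price using only its first 1, 10, 100, and 500 trees, so you can watch the estimate sharpen as corrections accumulate. The listings here are randomly generated stand-ins, and iteration_range assumes a reasonably recent xgboost version.

import numpy as np
import pandas as pd
from xgboost import XGBRegressor

# Hypothetical listings: columns mirror the example, values are random stand-ins
rng = np.random.default_rng(7)
n = 2000
listings = pd.DataFrame({
    "mileage": rng.uniform(5_000, 150_000, n),
    "age": rng.integers(0, 20, n),
    "condition_score": rng.integers(1, 6, n),
    "num_owners": rng.integers(1, 5, n),
})
price = (
    40_000
    - 0.1 * listings["mileage"]
    - 800 * listings["age"]
    + 1_000 * listings["condition_score"]
    + rng.normal(0, 1_500, n)
)

model = XGBRegressor(
    n_estimators=500,
    max_depth=3,
    learning_rate=0.05,
    subsample=0.8,          # each tree trains on a random slice of the listings
    colsample_bytree=0.5,   # ... and sees only some of the features
    tree_method="hist",     # fast histogram-based split search
)
model.fit(listings, price)

one_car = listings.iloc[[0]]
for n_trees in (1, 10, 100, 500):
    estimate = model.predict(one_car, iteration_range=(0, n_trees))
    print(f"after {n_trees:>3} trees: predicted price is about ${float(estimate[0]):,.0f}")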
Other Popular Gradient Boosting Algorithms
LightGBM – a very fast Microsoft tool that builds lots of small trees quickly, making it great for huge, mostly-numeric datasets.
CatBoost – a Yandex tool that automatically handles text-style or category columns (like brand names) without extra coding or data-leak problems.
H2O GBM – a version that can run across many machines at once and includes easy options to keep the model simple and reliable for real-world use.
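For comparison, here is how the same kind of regressor looks in the first two libraries, assuming the lightgbm and catboost packages are installed. The parameter values are illustrative, X_train / y_train are placeholders, and “make” is a hypothetical category column.

from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor

# LightGBM: histogram-based, leaf-wise trees make it very fast on large numeric data
lgbm = LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=31)
# lgbm.fit(X_train, y_train)

# CatBoost: list the category columns and it encodes them internally,
# avoiding manual one-hot encoding and target-leakage pitfalls
cat = CatBoostRegressor(iterations=300, learning_rate=0.05, verbose=False)
# cat.fit(X_train, y_train, cat_features=["make"])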
Connect with Me!
LinkedIn: caroline-rennier
Email: caroline@rennier.com