Methodology
The 2026 F1 Power Unit caps MGU-K deployment at 350 kW and tapers default deployment from 290 to 345 km/h. Manual Override Mode (the "boost button") extends 350 kW deployment past that taper. We can't see the button being pressed in public telemetry, but we can detect its physical effect: continued acceleration where the car would otherwise flatten out. This page documents how the detection works and how driver scores are derived.
1 · Data Source
Telemetry comes from FastF1 v3.8.2, which mirrors livetiming.formula1.com with a fallback to the Jolpica Ergast API. For each race we pull race-session laps, take each driver's top-5 fastest clean laps (excluding pit-in/pit-out laps and laps slower than 1.08 × the fastest), and resample telemetry onto a common track-distance grid at 5 m resolution.
Telemetry channels used: Speed, Throttle, Distance, X, Y. Sampling rate is 4 Hz for car data, which gives roughly 17 m of localisation uncertainty at 250 km/h (≈ 69 m/s ÷ 4 Hz) — fine for zone-level analysis, too coarse for sub-zone timing.
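The FastF1 loading itself is a library call chain (see the FastF1 docs); the resampling step can be sketched with NumPy alone. `resample_lap` is a hypothetical helper name, not the project's actual function:

```python
import numpy as np

def resample_lap(dist_m, speed_kph, grid_step=5.0):
    """Interpolate one telemetry channel onto a uniform 5 m
    track-distance grid (hypothetical helper, channel-agnostic)."""
    grid = np.arange(0.0, dist_m[-1], grid_step)
    return grid, np.interp(grid, dist_m, speed_kph)

# ~4 Hz samples at ~250 km/h arrive every ~17 m of track distance.
dist = np.array([0.0, 17.0, 35.0, 52.0, 70.0])
speed = np.array([250.0, 251.0, 252.0, 252.5, 253.0])
grid, v = resample_lap(dist, speed)
```

The same interpolation is applied per channel; putting every lap on one grid is what makes the per-bin masks in Section 3 stackable across drivers.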
2 · 2026 Power Unit Basis
The detection is anchored in the 2026 PU regulations. With the MGU-K capped at 350 kW and default deployment tapering off above 290 km/h, a car at full throttle would normally stop accelerating shortly after 290 km/h and coast through to its drag-limited terminal speed. With Manual Override engaged, that 350 kW deployment is extended to higher speeds — so the car keeps pulling.
The red shaded region is the detection band: in this speed range the field's "default" cars are coasting while Override-engaged cars are still pulling. We don't see the button press itself; we infer it from the derivative of the speed trace.
3 · Boost Detection
For each lap, on each 5 m bin i, we flag the bin as boost when:

    boost_i = (T_i ≥ 95 %) AND [ (v_i ≥ 245 km/h AND dv/dx|_i ≥ 0.020 (m/s)/m) OR (v_i ≥ 295 km/h) ]

where T_i is throttle percentage, v_i is speed in km/h, and dv/dx is the spatial acceleration of the smoothed speed trace. Smoothing uses a 40 m uniform filter to suppress sample noise. Detected bins are then run through a minimum-length filter — runs shorter than 100 m are discarded as sensor jitter.
Why spatial acceleration (dv/dx) instead of time acceleration (dv/dt)?
Sampling rate varies across telemetry; dv/dt is noisier than taking the spatial derivative against a uniform 5 m grid. Spatial dv/dx is also intuitively the right thing — we're asking "is this car still gaining speed per metre travelled" — which is what determines whether it has more deployment headroom than the regulation default.
Thresholds in tabular form
| Parameter | Value | Why |
|---|---|---|
| FULL_THROTTLE_MIN | 95 % | Below this we can't claim the driver is committed to the deployment. |
| BOOST_SPEED_FLOOR_KPH | 245 km/h | Below this, default deployment is identical for everyone. |
| ACCEL_DV_DX | 0.020 (m/s)/m | Above the noise floor, below the default-deployment slope. |
| HIGH_SPEED_BOOST_KPH | 295 km/h | Strict speed-only rule for end-of-straight cases where dv/dx flattens. |
| MIN_BOOST_LEN_M | 100 m | Reject single-bin spikes. |
| SMOOTH_WINDOW_M | 40 m | Filter length for speed/throttle smoothing. |
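The per-bin rule plus the minimum-length filter can be sketched as follows. `boost_mask` is a hypothetical name, and the real script's smoothing edge handling may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

STEP_M = 5.0
SMOOTH_BINS = int(40 / STEP_M)    # SMOOTH_WINDOW_M = 40 m
MIN_RUN_BINS = int(100 / STEP_M)  # MIN_BOOST_LEN_M = 100 m

def boost_mask(speed_kph, throttle_pct):
    """Flag boost bins on the 5 m grid per the thresholds table."""
    v = uniform_filter1d(np.asarray(speed_kph, float), SMOOTH_BINS)
    t = uniform_filter1d(np.asarray(throttle_pct, float), SMOOTH_BINS)
    dv_dx = np.gradient(v / 3.6, STEP_M)          # (m/s) per metre
    accel_rule = (v >= 245.0) & (dv_dx >= 0.020)  # floor + dv/dx rule
    speed_rule = v >= 295.0                       # strict high-speed rule
    mask = (t >= 95.0) & (accel_rule | speed_rule)
    # Minimum-length filter: drop runs shorter than 100 m (20 bins).
    out = mask.copy()
    i, n = 0, len(mask)
    while i < n:
        if not mask[i]:
            i += 1
            continue
        j = i
        while j < n and mask[j]:
            j += 1
        if j - i < MIN_RUN_BINS:
            out[i:j] = False
        i = j
    return out

# Toy straight: 600 m full-throttle ramp from 240 to 310 km/h.
speed = np.linspace(240.0, 310.0, 120)
thr = np.full(120, 100.0)
out = boost_mask(speed, thr)
```

Note that the strict ≥ 295 km/h branch is what keeps end-of-straight bins flagged even as dv/dx flattens toward terminal speed.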
4 · Aggregating to Optimal Zones
For a given race, we stack every driver's boost mask on the common distance grid and combine them into a single field-wide boost-probability curve. Rather than averaging every driver equally, we weight each driver by their championship rank — the theory being that the points leaders are the most reliable signal of where boost is genuinely advantageous. A backmarker who never deploys through a key zone tells us little about whether the zone is correct.
Weights: F1 points scale by championship rank
Each driver's weight wd is determined by their championship rank entering the race, mapped through the standard F1 race-points scale (25, 18, 15, 12, 10, 8, 6, 4, 2, 1 for ranks 1–10; zero beyond) and normalised so the weights sum to 1.
The top driver in the standings carries roughly 25 % of the total weight on their own; the top three together account for about 57 %; everyone outside the top ten contributes nothing to the optimal-zone definition.
Causal rule. The weights for race N are computed from cumulative championship points through race N − 1 only — no look-ahead. For race 1 of the season there are no prior results, so the weighting falls back to equal weight across every driver with usable telemetry.
Scope. Weighting affects only the optimal-zone definition. Each driver's own zone scores (Section 5) are still computed against those zones with no per-driver re-weighting, so the season leaderboard remains a straightforward measure of how each driver lines up with the (weighted) optimal zones.
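The rank-to-weight mapping can be sketched directly from the points scale; `driver_weights` is a hypothetical helper name:

```python
# Standard F1 race-points scale (in use since 2010): ranks 1-10
# score points, everyone else scores zero.
POINTS_SCALE = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]

def driver_weights(ranks):
    """Championship ranks entering the race -> normalised weights.
    (Hypothetical helper; race 1 of the season uses equal weights.)"""
    raw = [POINTS_SCALE[r - 1] if r <= 10 else 0 for r in ranks]
    total = sum(raw)
    if total == 0:  # guard: no ranked drivers with usable telemetry
        return [1.0 / len(ranks)] * len(ranks)
    return [p / total for p in raw]

w = driver_weights(list(range(1, 21)))  # a full 20-car field
```

With a full field the leader carries 25/101 ≈ 24.8 % of the weight and the top three 58/101 ≈ 57.4 %, matching the percentages quoted above.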
The weighted aggregate

    p_i = Σ_d w_d · b_{d,i},   with Σ_d w_d = 1,

where b_{d,i} is the fraction of driver d's clean fast laps flagged as boost at bin i, still lies in [0, 1] and represents the championship-weighted fraction of the field deploying boost at that track distance on their fast laps. We then smooth with a short uniform filter and find peaks using SciPy's find_peaks. Each peak region is expanded outward while p > 0.35, yielding a contiguous optimal boost zone.
A final lap-wrap merge step combines zones that span the start/finish line (e.g. a zone ending at 5 685 m and a zone starting at 295 m on a 5 807 m circuit are merged into one zone that wraps the line).
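The peak finding, outward expansion, and lap-wrap merge can be sketched as follows (`optimal_zones` is a hypothetical name; the real script's tie-breaking and smoothing may differ):

```python
import numpy as np
from scipy.signal import find_peaks

ZONE_EDGE_P = 0.35

def optimal_zones(p, step_m=5.0):
    """Peaks in the field boost-probability curve, expanded outward
    while p > 0.35, then merged across the start/finish line."""
    peaks, _ = find_peaks(p, height=ZONE_EDGE_P)
    zones = []
    for pk in peaks:
        lo, hi = pk, pk
        while lo > 0 and p[lo - 1] > ZONE_EDGE_P:
            lo -= 1
        while hi < len(p) - 1 and p[hi + 1] > ZONE_EDGE_P:
            hi += 1
        zones.append((lo * step_m, (hi + 1) * step_m))
    # Merge distinct peaks whose expansions overlap.
    merged = []
    for z in sorted(zones):
        if merged and z[0] <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], z[1]))
        else:
            merged.append(z)
    # Lap-wrap: a zone ending at the line joins one starting at 0.
    lap_len = len(p) * step_m
    if len(merged) > 1 and merged[0][0] == 0.0 and merged[-1][1] == lap_len:
        first, last = merged[0], merged.pop()
        merged[0] = (last[0], first[1])  # circular zone spanning the line
    return merged

# Toy 500 m lap (100 x 5 m bins) with one zone spanning the line.
p = np.zeros(100)
p[0:6] = [0.40, 0.50, 0.70, 0.50, 0.40, 0.38]
p[40:47] = [0.40, 0.55, 0.80, 0.90, 0.80, 0.55, 0.40]
p[95:100] = [0.50, 0.60, 0.90, 0.60, 0.50]
zones = optimal_zones(p)
```

The toy curve yields two zones: a mid-lap zone and one circular zone whose start distance is larger than its end distance, exactly the shape the circular metadata below is designed to store.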
Zone metadata stored
| Field | Description |
|---|---|
| start_dist_m / end_dist_m | Lap distance bounds (circular) |
| peak_dist_m | Distance of the local probability maximum |
| strength | Mean p across the zone (field-agreement score) |
| name | Human-readable label (e.g. "Dunlop / Degner Straight") |
5 · Driver Scoring
Each driver's per-zone score combines two pieces — how much of the zone they cover, and how well-centred their deployment is on the zone's peak.
A. Coverage
Fraction of the zone where the driver's per-bin boost probability exceeds the coverage threshold (0.40):

    coverage_{d,z} = |{ i ∈ z : b_{d,i} > 0.40 }| / |z|
B. Centroid alignment
How close the driver's boost-weighted centroid sits to the zone's peak distance, measured circularly so wrapping zones still work:

    Δ_{d,z} = min( |c_{d,z} − peak_z|,  L − |c_{d,z} − peak_z| )

where c_{d,z} is the driver's boost-weighted centroid within the zone and L is the lap length. The offset is converted to a 0–1 alignment score with a Gaussian decay around a "good radius" of R = 200 m:

    align_{d,z} = exp( −(Δ_{d,z} / R)² )
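The circular offset and the decay can be sketched in a few lines; the exact decay shape here is an assumption consistent with the stated Gaussian form and 200 m radius, not lifted from the scoring code:

```python
import math

def circular_offset(a, b, lap_len):
    """Shortest distance between two lap positions on a circular track."""
    d = abs(a - b) % lap_len
    return min(d, lap_len - d)

def alignment(centroid_m, peak_m, lap_len, good_radius=200.0):
    """0-1 alignment score: 1 when centred on the peak, Gaussian
    decay with scale good_radius as the centroid drifts away."""
    off = circular_offset(centroid_m, peak_m, lap_len)
    return math.exp(-((off / good_radius) ** 2))
```

For example, positions 5 685 m and 295 m on a 5 807 m lap are only 417 m apart measured through the start/finish line, not 5 390 m.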
C. Combined zone score
    score_{d,z} = w_cov · coverage_{d,z} + w_align · align_{d,z},   with w_cov > w_align

Coverage dominates because, in the Cadillac problem we keep finding, missing a zone matters more than being slightly off-centre within one. A driver who deploys at the right place but only occasionally is more correctable than a driver who never deploys there at all.
D. Overall score
The season leaderboard score is a simple unweighted mean across the races a driver has run.
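Putting the pieces together, a sketch of the scoring (the 0.7/0.3 split is an assumed coverage-dominant weighting, not the project's actual constants):

```python
import numpy as np

W_COVERAGE, W_ALIGN = 0.7, 0.3  # assumption: coverage-dominant split
COVERAGE_THRESHOLD = 0.40

def zone_score(driver_p_in_zone, align):
    """Combine coverage (fraction of zone bins where the driver's
    boost probability exceeds 0.40) with the alignment score."""
    bins = np.asarray(driver_p_in_zone, float)
    coverage = float(np.mean(bins > COVERAGE_THRESHOLD))
    return W_COVERAGE * coverage + W_ALIGN * align

def season_score(race_scores):
    """Season leaderboard entry: unweighted mean over races run."""
    return float(np.mean(race_scores))
```

With this split, a driver covering three of four zone bins while perfectly centred scores 0.7 × 0.75 + 0.3 × 1.0 = 0.825 for that zone.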
6 · What the Score Does Not Capture
- Strategic saving — drivers with limited battery state or protecting tyres will deliberately under-deploy, losing points even though the call was rational.
- Traffic and DRS trains — close-following cars sometimes can't afford to spike speed further; we mitigate by using fastest clean laps only, but not perfectly.
- Button-timing resolution — at 4 Hz we can only localise a boost press to ~17 m at 250 km/h. Small sub-zone differences between drivers are below the noise floor.
- Team strategy calls — a driver may be told to use boost defensively rather than offensively; the score rewards only field-optimum deployment.
- Self-reinforcing optimum — because the optimal-zone definition is championship-weighted, the standard a driver is measured against is largely set by the leaders of the championship. A backmarker who finds a clever, atypical place to deploy boost won't be credited with redefining the optimum unless their own championship rise eventually shifts their weight upward.
In short: a high score means consistent, location-correct Manual Override usage on qualifying-pace laps. It is not a full proxy for race-craft quality.
7 · Reproducibility
All of the above is implemented in two scripts that live alongside the viewer in this folder:
| File | Role |
|---|---|
| build_boost_analysis.py | FastF1 ingestion, detection, championship-weighted aggregation, scoring — produces boost_analysis_2026.json. |
| inject_data.py | Inlines the JSON payload into the viewer template. |
| boost_analysis_2026.json | Bundled data for all completed 2026 races. Each race entry now carries a weighting block with the standings used. |
The scripts run idempotently — re-running them after a new race regenerates the JSON, the viewer, and the inline snapshots used by this page's charts. Source code is in the Overtakers Cowork project.