Fusaka boosted Ethereum’s blob capacity — but nobody RSVP’d
Quick recap (the elevator pitch)
In December 2025 and January 2026, Ethereum quietly dialed up how much blob data it can accept per block. The network’s blobs — compressed bundles of Layer‑2 data that rollups post to Ethereum — had their target and maximum counts increased in two steps: from the pre‑upgrade baseline (6 target / 9 max) to an intermediate setting (10 target / 15 max), and finally to a larger setting (14 target / 21 max). The clever bit: these changes shipped as Blob Parameter Only (BPO) forks (EIP‑7892), lightweight, configuration‑driven parameter updates coordinated across clients rather than full‑scale hard forks.
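Mechanically, the schedule is just a lookup: given a block's timestamp, which (target, max) pair is in force? Here's a minimal Python sketch of that idea; the timestamps and data layout are invented for illustration and don't mirror any client's actual fork configuration.

```python
from bisect import bisect_right

# Illustrative sketch only: the timestamps and layout below are made up for
# clarity, not copied from any client's real BPO fork configuration.
BLOB_SCHEDULE = [
    # (activation_timestamp, target_blobs, max_blobs)
    (0,             6,  9),   # pre-upgrade baseline
    (1_765_000_000, 10, 15),  # first BPO step (hypothetical December 2025 timestamp)
    (1_767_800_000, 14, 21),  # second BPO step (hypothetical January 2026 timestamp)
]

def blob_limits_at(timestamp: int) -> tuple[int, int]:
    """Return the (target, max) blob counts in force at a given block timestamp."""
    activations = [t for t, _, _ in BLOB_SCHEDULE]
    idx = max(bisect_right(activations, timestamp) - 1, 0)
    _, target, maximum = BLOB_SCHEDULE[idx]
    return target, maximum

print(blob_limits_at(1_766_000_000))  # -> (10, 15) under the assumed timestamps
```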
Data drama: more capacity, less usage, and a wobble at the edges
Three months into monitoring, the picture is weirdly underwhelming. An analysis covering over 750,000 slots shows the new, roomier limits aren’t being filled. Median blobs per block actually dropped (roughly from 6 down to about 4 after the first tweak), and blocks carrying 16 or more blobs are extremely rare — only on the order of a few hundred occurrences in the whole sample.
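For context, headline numbers like these boil down to a simple aggregation over per-slot records. A rough sketch, assuming a hypothetical dataset where each slot carries a blob_count and a missed flag (the analysis's actual schema isn't shown here):

```python
import statistics

# Minimal sketch of the aggregation behind numbers like these. It assumes
# per-slot records shaped like {"blob_count": int, "missed": bool}; the field
# names are invented for illustration.
def summarize(slots: list[dict]) -> dict:
    blob_counts = [s["blob_count"] for s in slots if not s["missed"]]
    return {
        "median_blobs_per_block": statistics.median(blob_counts),
        "blocks_with_16_plus_blobs": sum(1 for c in blob_counts if c >= 16),
        "total_blocks": len(blob_counts),
    }
```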
Worse, those rare big blocks look flaky. Miss rates — the share of blocks that fail to propagate and get attested in time — sit near 0.5% at low blob counts but climb once you hit the high end. At 16+ blobs the miss rate rises into the 0.77%–1.79% range, and at the new 21‑blob ceiling it peaked around 1.79% (more than three times the baseline). That suggests validator bandwidth and block‑propagation timing start to strain as blocks carry more blobs.
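The miss-rate comparison is the same dataset bucketed by blob count. A sketch of that grouping, reusing the hypothetical per-slot records from above and assuming a blob count is known even for missed slots (something a real pipeline would have to recover from blocks seen late on gossip):

```python
from collections import defaultdict

# Hypothetical per-slot records, grouped by blob count. Assumes a blob count
# exists even for missed slots, which is an assumption about the dataset, not
# something stated in the post.
def miss_rate_by_blob_bucket(slots: list[dict]) -> dict[str, float]:
    tallies = defaultdict(lambda: [0, 0])  # bucket -> [missed, total]
    for s in slots:
        bucket = "16+" if s["blob_count"] >= 16 else "0-15"
        tallies[bucket][0] += int(s["missed"])
        tallies[bucket][1] += 1
    return {bucket: missed / total for bucket, (missed, total) in tallies.items()}
```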
On the economics side, a pricing tweak (EIP‑7918) added a reserve floor so blob fees don’t collapse to essentially zero when demand is soft. Early dashboards show blob fees stabilized after the upgrade, so the floor seems to be doing its job — fees remain a meaningful price signal rather than evaporating whenever there’s spare blob space.
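Conceptually, a reserve floor just means the effective price of a blob can't fall below some multiple of the execution base fee. The sketch below shows that idea only: the blob gas size is the EIP‑4844 constant, the anchor cost is illustrative, and the real EIP‑7918 mechanism applies the bound inside the blob fee update rather than as a literal max().

```python
GAS_PER_BLOB = 2**17     # blob size in gas-equivalent units (EIP-4844)
BLOB_BASE_COST = 2**13   # illustrative execution-gas anchor per blob

def effective_blob_price(blob_base_fee: int, exec_base_fee: int) -> int:
    """Price of one blob with a reserve floor tied to the execution base fee.

    Simplified illustration only: EIP-7918 enforces the bound through the
    blob fee update mechanism, not a literal max() at pricing time.
    """
    market_price = GAS_PER_BLOB * blob_base_fee
    floor_price = BLOB_BASE_COST * exec_base_fee
    return max(market_price, floor_price)
```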
Why this matters and what to watch
Bottom line: Fusaka succeeded in giving Ethereum a bigger sandbox and a working knob for capacity, but rollups haven’t rushed to fill the extra space and parts of the infra aren’t totally comfortable at the high end. That creates two practical takeaways:
1) Don’t crank capacity higher yet. Until miss rates for 16+ blob blocks normalize, pushing the target even further risks more missed blocks, delayed finality, or annoying reorgs during traffic spikes.
2) Let usage grow and let clients catch up. If rollups start legitimately needing the extra headroom, or if client and validator implementations optimize for higher blob loads, the network can safely consider future increases. For now, the safer play is to give the ecosystem time to adapt rather than piling on more capacity.
Also worth noting: the problem today looks less like a hard cap on data availability and more like demand, sequencing economics, and implementation limitations. Rollups may be batching better, demand might still be lagging, or sequencer decisions could be keeping usage low — so the “bottleneck” has moved, not disappeared.
There are bigger upgrades on the roadmap (think deeper changes to data availability sampling) that could expand capacity more robustly in the future. But for now, Fusaka’s legacy is clear: more room exists, prices are behaving, and reliability at the margin needs some love before we throw more people into the pool.
