Ethereum Foundation: Security First — the 128‑bit Rule and the New Race
The sprint happened — now slow down and check the math
For the past year, the zkEVM world has been on a caffeine-fueled sprint: proving times that used to take minutes have been chopped to seconds, costs have plunged, and optimized rigs now churn out proofs for almost every mainnet block in under 10 seconds on target hardware. In short: the performance problem that kept people up at night is mostly solved.
But speed is only half the story. The foundation that holds all this performance together is cryptography, and several key assumptions that teams were quietly leaning on have been shown to be shakier than advertised. The takeaway from the core developers is blunt: fast proofs that can be forged are worse than slow honest ones. So the focus is shifting away from pure throughput toward provable soundness.
That means a clear bar has been set: long-lived, L1-grade systems need provable 128-bit security — not “probably safe if conjecture X stays true.” That target lines up with what academics and standards folks expect for systems that might carry massive real-world value, and it’s meant to be a margin you can actually sleep on.
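For a sense of scale, here's a minimal back-of-the-envelope sketch in Python (the function name and the attacker budgets are mine, purely for illustration): by a simple union bound, a system with soundness error 2^-b facing 2^t forgery attempts can be broken with probability at most 2^(t-b), so the jump from 100 to 128 bits buys a much thicker cushion against even absurdly well-funded adversaries.

```python
# Back-of-the-envelope only: by a union bound, an attacker making N independent
# forgery attempts against a system with soundness error 2^-b succeeds with
# probability at most N * 2^-b. The attempt budgets below are illustrative.
def forgery_probability_exponent(security_bits: int, log2_attempts: int) -> int:
    """Return e such that the attacker's success probability is at most 2^e."""
    return log2_attempts - security_bits

for bits in (100, 128):
    for log2_attempts in (64, 80):
        e = forgery_probability_exponent(bits, log2_attempts)
        print(f"{bits}-bit soundness, 2^{log2_attempts} attempts: success probability <= 2^{e}")
```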
The plan, the tools, and why this is harder than it sounds
The foundation laid out a crisp roadmap with three checkpoints. First: integrate each project’s proof system into a common calculator tool by early next year so everyone reports security using the same yardstick. The point is simple: no more bespoke bit-security claims based on optimistic assumptions — use the common tool and update it if new attacks show up.
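To give a flavor of what such a calculator does, here's a deliberately toy sketch in Python. It models only the query phase of a FRI-style, hash-based proof, using the rough rule of thumb that each query is worth about log2(blowup) bits under common conjectures but only about half that under the provable, Johnson-bound-style analysis. The function name and the decision to ignore commit-phase and field-size error terms are simplifications of mine, not features of the actual tool.

```python
def fri_query_bits(num_queries: int, log2_blowup: int,
                   grinding_bits: int, provable: bool = True) -> float:
    """Very rough query-phase security estimate for a FRI-style proof (toy model).

    Rule of thumb: under widely used conjectures each query adds ~log2(blowup)
    bits; the provable analysis credits only about half as much per query.
    Grinding bits are added on top. Commit-phase and field-size error terms
    are ignored, so a real calculator will report lower numbers.
    """
    bits_per_query = log2_blowup / 2 if provable else log2_blowup
    return num_queries * bits_per_query + grinding_bits

# Same parameters, two analyses: the gap is why "provable 128 bits" is a
# stricter ask than "conjectured 128 bits".
print(fri_query_bits(num_queries=80, log2_blowup=2, grinding_bits=16, provable=False))  # 176.0
print(fri_query_bits(num_queries=80, log2_blowup=2, grinding_bits=16, provable=True))   # 96.0
```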
Next up is a mid-year milestone that effectively says, “get to at least 100 bits by May, and keep proofs reasonably small.” That’s an intermediate safety net so teams can keep shipping without pretending they’ve finished the whole job. The end-of-year mark is the real finish line: provable 128-bit security, tight proof-size limits, and a formal argument about how all the little proof pieces are glued together.
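Plugging hypothetical numbers into the toy estimator above shows what that interim bar costs in practice: with a blowup of 4, the provable rule of thumb credits roughly one bit per query, so about 80 queries plus 20 grinding bits lands near 100 bits, and every extra query you add to go higher also makes the proof bigger. That tension is exactly why the milestones pair a security floor with a proof-size ceiling.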
How do teams plan to raise security without ballooning proof sizes and verification delays? A bunch of clever math and engineering tricks. New proximity tests and polynomial-commitment ideas promise smaller, faster proofs at higher security levels. Other techniques reduce wasted padding when encoding execution traces. There's also grinding: a deliberate, proof-of-work-style nonce search that buys extra security bits cheaply, letting provers stay inside the soundness envelope with fewer queries and cheaper parameters. And there are careful recursion designs that stitch many mini-proofs into one tidy final object.
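To illustrate the grinding idea (a generic sketch, not any particular project's transcript format), the prover burns a little compute finding a nonce that makes a hash of the proof transcript clear a difficulty target. An honest prover pays that cost once; a forger has to pay it on every attempt, which adds roughly that many bits of security without adding a single extra query.

```python
import hashlib

def grind(transcript_digest: bytes, grinding_bits: int) -> int:
    """Search for a nonce such that sha256(transcript || nonce) has at least
    `grinding_bits` leading zero bits. Generic sketch: real systems fix their
    own hash function and transcript encoding."""
    target = 1 << (256 - grinding_bits)  # hashes below this clear the difficulty target
    nonce = 0
    while True:
        digest = hashlib.sha256(transcript_digest + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Roughly 2^16 hash evaluations on average for 16 grinding bits.
nonce = grind(hashlib.sha256(b"example transcript").digest(), 16)
```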
Some independent projects are already exploring these tools in different combos, and benchmarks at the targeted security levels look promising: smaller proofs and quicker verification versus older constructions. Still, the math landscape keeps moving: assumptions that looked safe months ago have been downgraded after researchers found new algorithms or attacks, so today's parameters may need further revision tomorrow.
Finally, a reality check: the “real-time proving” numbers come from curated hardware and lab-style runs. Getting thousands of independent validators to run provers at home, on their own power budgets and networks, is a different beast. And even if the latency and proof-size targets are met in the lab, documenting and formally verifying the full recursion architecture — the messy glue between many circuits — is painstaking and likely the longest pole in the tent.
Bottom line: the engineering sprint to make zk proofs fast is basically done. The next race is about making those proofs provably sound, small enough to propagate across the peer-to-peer network, and formally argued so the whole stack can serve as a serious, L1-level settlement layer. That’s where most of the hard, slow, nerdy work lives — and where the future security of billions will be decided. Buckle up: the security era just began, and it’s going to be meticulous, quirky, and absolutely necessary.
