Introduction: Why EIP-4844 Matters
EIP-4844 (also known as proto-danksharding), activated in the Dencun upgrade on March 13, 2024, represents the most significant change to Ethereum's data availability layer since the Merge. It introduces a new transaction type, blob-carrying transactions, that provides a dedicated, cheaper data channel primarily for Layer 2 rollups. Rather than implementing full danksharding in one leap, EIP-4844 scaffolds the cryptographic and protocol primitives needed for the eventual full implementation.
If you've been building on or analyzing L2 economics, understanding the mechanics of blob transactions is no longer optional; it's essential infrastructure knowledge.
The Core Problem: Calldata Costs for Rollups
Before EIP-4844, rollups (both optimistic and ZK) posted their transaction batches as calldata in regular Ethereum transactions. This data competes with all other EVM execution for block gas, meaning:
- Rollup data costs scaled with L1 gas demand
- Calldata persists forever in Ethereum's history, even though rollups only need temporary data availability
- At peak congestion, L2 fees spiked proportionally to L1 fees
Calldata costs approximately 16 gas per non-zero byte (4 gas per zero byte). For rollups posting megabytes of compressed batch data, this was the dominant cost component โ often 80-90% of total L2 transaction fees.
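To make that cost concrete, the pre-4844 data bill can be estimated directly from the byte schedule. A minimal sketch (hypothetical helper, assuming the standard 16 gas per non-zero byte and 4 gas per zero byte):

```python
# Sketch: pre-4844 calldata gas cost for a rollup batch, using the
# 16 gas / non-zero byte and 4 gas / zero byte schedule described above.
def calldata_gas(data: bytes) -> int:
    return sum(16 if b != 0 else 4 for b in data)

# A hypothetical 100 KB compressed batch, ~90% non-zero bytes:
batch = bytes([0xFF] * 92_160 + [0x00] * 10_240)
print(calldata_gas(batch))  # 1,515,520 gas for the data alone
```

At 30M gas per block, a handful of such batches already consumes a large fraction of total block capacity, which is exactly the contention EIP-4844 removes.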
Blob Transactions: Type-3 Transaction Architecture
EIP-4844 introduces Type-3 transactions (transaction type 0x03) that carry blobs: large binary data objects of exactly 128 KB (131,072 bytes), composed of 4096 field elements over the BLS12-381 scalar field.
Key architectural properties:
- Each blob is 128 KB, and each transaction can carry up to 6 blobs
- Target: 3 blobs per block; maximum: 6 blobs per block (post-Dencun; later raised to target 6 / max 9 after the Pectra upgrade in 2025)
- Blob data is NOT accessible to the EVM; smart contracts cannot read blob contents
- Blob data is pruned from consensus nodes after ~18 days (4096 epochs)
- Only the KZG commitment and versioned hash of each blob are visible on the execution layer
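The sizing in the list above can be checked arithmetically. A small sketch of the constants involved:

```python
# Sketch: blob sizing constants from EIP-4844 (Dencun parameters).
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32   # BLS12-381 scalar field elements are serialized as 32 bytes
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT

MAX_BLOBS_PER_BLOCK_DENCUN = 6
MAX_BLOB_DATA_PER_BLOCK = MAX_BLOBS_PER_BLOCK_DENCUN * BLOB_SIZE

assert BLOB_SIZE == 131_072               # 128 KB = 2**17 bytes
assert MAX_BLOB_DATA_PER_BLOCK == 786_432  # 768 KB per block at the Dencun maximum
```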
Transaction Fields
Type-3 transactions extend the EIP-1559 format with:
```
max_fee_per_blob_gas   // max willingness to pay per blob gas unit
blob_versioned_hashes  // list of versioned hashes (one per blob)
```
The sidecar (transmitted on the p2p layer but not in the execution payload) contains:
```
blobs        // the actual blob data
commitments  // KZG commitments for each blob
proofs       // KZG proofs for each blob
```
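Putting the two field lists together, a Type-3 transaction and its sidecar might be modeled as follows (illustrative shape only, not a full EIP-4844 implementation):

```python
from dataclasses import dataclass

# Sketch: the shape of a Type-3 transaction and its sidecar.
@dataclass
class BlobTxSidecar:
    blobs: list[bytes]        # the actual blob data, 131,072 bytes each
    commitments: list[bytes]  # 48-byte KZG commitments, one per blob
    proofs: list[bytes]       # 48-byte KZG proofs, one per blob

@dataclass
class BlobTransaction:
    # ...plus the usual EIP-1559 fields (nonce, fee caps, gas limit, to, value, data)
    max_fee_per_blob_gas: int           # wei per unit of blob gas
    blob_versioned_hashes: list[bytes]  # 32 bytes each; binds the tx to its sidecar
```

The key design point: the execution payload commits to the sidecar only through blob_versioned_hashes, which is what lets consensus nodes prune the heavy data later.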
KZG Commitments: The Cryptographic Foundation
Each blob is committed using KZG (Kate-Zaverucha-Goldberg) polynomial commitments over the BLS12-381 elliptic curve. Here's the pipeline:
1. Blob → Polynomial: The 4096 field elements are interpreted as evaluations of a degree-4095 polynomial at the roots of unity
2. Commitment: Using a trusted setup (the KZG ceremony with 141,416+ contributors), compute C = commit(p(x)), a single 48-byte G1 point
3. Versioned Hash: `version_byte || SHA256(commitment)[1:]`, where the version_byte is 0x01
4. Verification: Any party can verify point evaluations against the commitment without seeing the full data
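Step 3 is simple enough to show concretely. A sketch of the versioned-hash computation (the all-zero commitment bytes are a placeholder; a real 48-byte commitment comes from a KZG library):

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"  # version byte per EIP-4844

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """versioned_hash = version_byte || SHA256(commitment)[1:]"""
    assert len(commitment) == 48  # a KZG commitment is a compressed G1 point
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

h = kzg_to_versioned_hash(bytes(48))  # placeholder commitment
assert len(h) == 32 and h[0] == 0x01
```

The version byte is the point of the construction: if a future upgrade replaces KZG with another commitment scheme, contracts keyed on versioned hashes keep working with a new version byte.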
This is foundational because full danksharding will use Data Availability Sampling (DAS), where validators verify data availability by sampling random chunks and verifying KZG proofs, without downloading all blobs.
Edge Case: The Trusted Setup
The KZG scheme requires a structured reference string (SRS) from a trusted setup ceremony. The security assumption is 1-of-N honest: if even one of the 141,000+ ceremony participants discarded their toxic waste, the scheme is secure. This is a pragmatic trade-off. Future work may explore transparent alternatives (e.g., using IPA/Verkle-compatible commitments), but KZG was chosen for efficiency.
The Blob Fee Market: EIP-1559 for Blobs
Blob gas operates on a separate, independent EIP-1559-style fee market:
- Blob base fee adjusts per block using an exponential mechanism, approximately: `new_base_fee ≈ old_base_fee * e^((blob_gas_used - target_blob_gas) / BLOB_BASE_FEE_UPDATE_FRACTION)`
- Each blob consumes 131,072 blob gas (2^17), so the Dencun target of 3 blobs corresponds to 393,216 blob gas per block
- The blob base fee started at 1 wei and adjusts independently of regular gas prices
- Users specify `max_fee_per_blob_gas` analogously to `max_fee_per_gas`
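The exponential adjustment is computed on-chain with integer arithmetic via the spec's `fake_exponential` helper. A sketch, using the Dencun value of the update fraction (later upgrades may tune this parameter):

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e**(numerator / denominator),
    as defined in the EIP-4844 specification."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

MIN_BLOB_BASE_FEE = 1                      # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # Dencun parameter (~12.5% max move per block)

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

assert blob_base_fee(0) == 1  # at or below target, the fee sits at the 1 wei floor
```

Note that the protocol actually tracks an accumulator, `excess_blob_gas` (total blob gas used above target), and derives the base fee from it each block; the "new = old * e^(...)" form is the per-block approximation of the same mechanism.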
Practical Implications
- When blob demand is below target, the blob base fee trends toward minimum (often near 1 wei), making L2 data posting nearly free
- When blob demand exceeds target consistently, fees can spike exponentially
- Blob fee volatility is a real concern: post-Pectra, the increased target helps, but any sudden surge in L2 demand can cause rapid fee escalation
Impact on L2 Architecture
Data Availability Workflow
1. L2 sequencer batches transactions and compresses them
2. Batch data is split into ≤ 128 KB blobs
3. Sequencer submits Type-3 transaction with blob sidecar to L1
4. L1 smart contracts can read each blob's versioned_hash via the BLOBHASH opcode (the hash, not the blob data, is accessible in the EVM)
5. For optimistic rollups: challengers download blob data (within 18-day window) to reconstruct state and submit fraud proofs
6. For ZK rollups: the versioned hash anchors the data commitment that the validity proof references
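Step 2 of the workflow can be sketched as follows (hypothetical helper; real rollups additionally encode bytes into valid BLS12-381 field elements, typically using 31 usable bytes per 32-byte element):

```python
import math

BLOB_SIZE = 131_072   # bytes per blob
MAX_BLOBS_PER_TX = 6  # Dencun maximum

def split_into_blobs(batch: bytes) -> list[bytes]:
    """Split a compressed batch into zero-padded, blob-sized chunks."""
    n = max(1, math.ceil(len(batch) / BLOB_SIZE))
    assert n <= MAX_BLOBS_PER_TX, "batch too large for one Type-3 transaction"
    padded = batch.ljust(n * BLOB_SIZE, b"\x00")
    return [padded[i * BLOB_SIZE:(i + 1) * BLOB_SIZE] for i in range(n)]

blobs = split_into_blobs(b"\x01" * 200_000)  # a ~195 KB batch needs 2 blobs
assert len(blobs) == 2 and all(len(b) == BLOB_SIZE for b in blobs)
```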
Critical Edge Cases
- Data pruning window: After ~18 days, blob data is no longer guaranteed to be available from consensus nodes. Rollups requiring longer fraud proof windows must ensure alternative DA guarantees (e.g., archival services, L2 nodes storing their own data, or services like EthStorage/Portal Network)
- Blob size limitations: If a rollup's batch exceeds 6 × 128 KB = 768 KB, it must split across multiple transactions or multiple blocks
- Reorgs and blob availability: During chain reorganizations, blobs from orphaned blocks may not be re-propagated. Rollup sequencers must handle resubmission logic
- EVM opacity: Since blob data isn't EVM-accessible, on-chain verification of blob contents requires the BLOBHASH opcode combined with the point evaluation precompile (0x0a), which verifies KZG proof-of-evaluation claims
The Point Evaluation Precompile (0x0a)
This is an often-overlooked but critical component. The precompile at address 0x000000000000000000000000000000000000000A accepts:
- versioned_hash: the blob commitment hash
- z: an evaluation point
- y: the claimed evaluation result
- commitment: the KZG commitment
- proof: the KZG proof
It verifies that p(z) = y given the commitment. This enables rollup contracts to verify specific properties of blob data without accessing the data directly โ essential for ZK rollup proof verification.
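The precompile takes these five fields as a fixed 192-byte concatenation. A sketch of the input layout (the zero bytes are placeholders; a real call supplies an actual commitment and proof):

```python
# Sketch: the 192-byte input to the point evaluation precompile (0x0a), per
# EIP-4844: versioned_hash (32) || z (32) || y (32) || commitment (48) || proof (48).
def encode_point_eval_input(versioned_hash: bytes, z: int, y: int,
                            commitment: bytes, proof: bytes) -> bytes:
    assert len(versioned_hash) == 32 and len(commitment) == 48 and len(proof) == 48
    return (versioned_hash
            + z.to_bytes(32, "big")
            + y.to_bytes(32, "big")
            + commitment
            + proof)

call_data = encode_point_eval_input(bytes(32), 7, 11, bytes(48), bytes(48))
assert len(call_data) == 192
```

On success the precompile returns 64 bytes encoding FIELD_ELEMENTS_PER_BLOB (4096) and the BLS scalar field modulus; a failed verification reverts the call.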
From Proto-Danksharding to Full Danksharding
EIP-4844 deliberately implements the format without the scaling. Full danksharding aims for:
- Target ~16 MB of blob data per block (vs. roughly 0.4-1.1 MB at the Dencun and Pectra blob limits)
- Data Availability Sampling (DAS): validators sample random blob chunks and verify KZG proofs instead of downloading everything
- Proposer-Builder Separation (PBS): necessary because constructing blocks with many blobs becomes resource-intensive
- 2D KZG scheme: extending commitments across rows and columns for erasure-coded data
Since EIP-4844 already establishes the KZG commitment format, versioned hashes, and the blob transaction type, the upgrade path to full danksharding requires no changes to L2 contracts or infrastructure โ only consensus-layer enhancements.
Conclusion
EIP-4844 is not merely a cost optimization; it is a fundamental re-architecture of Ethereum's data layer. By separating blob data from execution data, introducing a dedicated fee market, and establishing the KZG cryptographic primitives, it lays the groundwork for Ethereum to scale to tens of megabytes of data throughput per block. For L2 developers, understanding blob lifecycle management, fee market dynamics, and the point evaluation precompile is now critical for optimizing rollup economics and security.