Data Availability
Last updated
Data availability is a fundamental problem in blockchain scaling. For a rollup to operate trust-minimally, all transaction data must be published somewhere verifiable so that anyone can reconstruct the state independently. If transactions are not fully accessible, validators cannot verify state transitions, leading to potential fraud or censorship.
Historically, rollups have relied on posting calldata directly to Ethereum L1, which is expensive and throughput-limited but offers strong trust guarantees. Alternative DA solutions are now beginning to come to market.
We have chosen Solana as our DA layer because we believe it will become the liquidity and asset hub for DeFi. Settling on Solana allows for trust-minimized and near-instant deposits between our rollup and the L1. Using Solana as our DA layer simplifies the architecture by reducing dependencies on external networks while maintaining high throughput. We also benefit from Solana’s ongoing bandwidth improvements (e.g., Firedancer) which will only further improve DA capacity.
Our DA solution consists of several programs and accounts deployed natively on Solana:

- The chunker splits Bullet transaction batches (blobs) into smaller chunks, since Solana transactions have a maximum size of 1,232 bytes.
- Chunks are stored by writing them into the transaction logs, using Solana's ledger storage instead of costly account storage.
- Chunks are incrementally hashed into a Merkle tree, and the resulting blob_digest is stored in the chunker account.
- Once all chunks are submitted, the hasher program finalizes the data by hashing the blob_digest itself.
- The final hash is stored in the hasher account, which provides a well-known address whose inclusion in Solana's updated-accounts list can be checked.
- This ensures that data is provably available in Solana's block history, even though it is never stored in a regular account.
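The chunking and hashing pipeline above can be sketched as follows. This is a minimal illustration, not our on-chain implementation: the 900-byte payload size, the pairwise SHA-256 tree, and the duplicate-last-node padding rule are all assumptions made for the sketch.

```python
import hashlib

# Solana transactions are capped at 1,232 bytes; after signatures and
# message headers the usable payload per chunk is smaller. 900 bytes is
# an illustrative figure, not the exact on-chain limit.
CHUNK_PAYLOAD = 900

def chunk_blob(blob: bytes, size: int = CHUNK_PAYLOAD) -> list[bytes]:
    """Split a rollup batch (blob) into transaction-sized chunks."""
    return [blob[i:i + size] for i in range(0, len(blob), size)]

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash chunks pairwise into a Merkle tree; the root plays the role
    of the blob_digest stored in the chunker account."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blob = bytes(range(256)) * 20            # a 5,120-byte example batch
chunks = chunk_blob(blob)                # submitted one per Solana transaction
blob_digest = merkle_root(chunks)        # held in the chunker account
final_hash = hashlib.sha256(blob_digest).digest()  # held in the hasher account
```

Because the chunks live in transaction logs rather than account data, anyone replaying the ledger can recompute blob_digest and check it against the hasher account's final hash.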
To verify data availability, our proof system ensures that any blob stored via the chunker and hasher accounts can be reconstructed:

- Inclusion proofs allow users to verify that specific chunks were submitted.
- Completeness proofs ensure that a full blob is available and correctly stored.
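A standard Merkle inclusion proof illustrates the first of these checks: a verifier holding only the root (the blob_digest) can confirm a single chunk belongs to the blob. The proof format below (sibling hash plus left/right flag, with duplicate-last-node padding) is a conventional sketch and an assumption, not our exact proof encoding.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect the sibling hashes proving leaves[index] is in the tree.
    Each entry is (sibling_hash, sibling_is_on_left)."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:          # same padding rule used to build the root
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the path from leaf to root using the proof's siblings."""
    node = _h(leaf)
    for sibling, on_left in proof:
        node = _h(sibling + node) if on_left else _h(node + sibling)
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(5)]
# Build the root the same way the chunker builds blob_digest.
level = [_h(c) for c in chunks]
while len(level) > 1:
    if len(level) % 2 == 1:
        level.append(level[-1])
    level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
root = level[0]
```

A completeness check additionally requires knowing the chunk count, so the verifier can demand a valid inclusion proof for every index from 0 to n-1.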
An off-chain indexer complements the on-chain programs:

- It monitors the chunker and hasher accounts for updates.
- It uses Solana's Geyser plugin interface (Yellowstone) to stream data-availability events in real time.
- It stores indexed data in an embedded Redb key-value database, enabling efficient querying of stored rollup data.
Currently, our primary DA limitation is the asynchronous posting rate of rollup batches to Solana. Based on current Solana mainnet-beta benchmarks:

- We already achieve 100 kB/s of sustained DA throughput today.
- Each rollup transaction is approximately 200 bytes, giving a throughput of at least 500 transactions per second (TPS).
- By batching transactions, we can amortize per-transaction overhead, significantly increasing TPS.
- In the optimal case, a market maker could pack approximately 96 orders into one Solana transaction blob, raising throughput to 7,840 exchange orders per second.
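The arithmetic behind these figures is straightforward. Note the naive estimate below lands near, but not exactly at, the 7,840 figure quoted above, since it does not model the precise per-transaction payload overhead.

```python
# Back-of-envelope DA throughput from the figures in the text.
DA_BANDWIDTH = 100_000        # bytes/s of sustained DA throughput
TX_SIZE = 200                 # bytes per individual rollup transaction
SOLANA_TX_MAX = 1_232         # max bytes per Solana transaction
ORDERS_PER_BLOB = 96          # densely packed exchange orders per blob

naive_tps = DA_BANDWIDTH // TX_SIZE                   # 500 tx/s unbatched
solana_tx_per_sec = DA_BANDWIDTH / SOLANA_TX_MAX      # ~81 Solana txs/s
batched_orders_per_sec = solana_tx_per_sec * ORDERS_PER_BLOB  # ~7,800 orders/s
```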
While Solana provides a high-performance DA layer, we recognize that DA throughput could become a long-term constraint, especially when compared to the improvements we've made in the sequencer and state database. If bandwidth limitations persist, we may explore:
- Celestia: a specialized DA network that has demonstrated 27 MB/s on its Mammoth Mini testnet.
- Hybrid approach: posting batch commitments to Celestia while keeping execution tightly coupled to Solana.
For now, our focus remains on maximizing Solana’s DA throughput, leveraging its low-cost ledger storage, and integrating deeply with Solana’s DeFi ecosystem.