Flashblocks support
New support for "Flashblocks" is now available for alpha testing on Base Mainnet. For more details about Base Flashblocks, see the Base documentation.
Disclaimers
The only endpoint currently supporting Flashblocks is https://base-mainnet-flash.streamingfast.io:443. That endpoint is not guaranteed to be stable or available at all times (alpha testing).
The protocol might still change in the next few weeks, as we gather feedback on usage.
It is normal to receive only some of the partial block indexes. In Substreams, the data from any missing partials is always bundled into the next partial BlockScopedData message.
Substreams only sends an "undo signal" on a reorg, or when previously sent partial blocks are discarded because the new block does not correspond to them. It does not send "undo signals" between successive partial blocks that share the same block height but have different block hashes.
Description
Flashblocks are partial blocks that are not fully confirmed yet. They are emitted every 200ms and contain a fraction of the transactions that will be in the final block. Consuming them allows you to get access to transaction data as soon as it's sequenced, rather than waiting for full block confirmation. Transactions can be processed incrementally, making your applications more responsive or predictions more accurate.
Flashblocks in Substreams
Partial Blocks
In Substreams, Flashblocks are called partial blocks, as a generalization of the concept, even though Flashblocks are so far the only supported implementation.
To benefit from partial blocks:
You need the latest version of the Substreams CLI or library (v1.17.9).
Your Substreams modules should avoid doing "block-level aggregations" and should only work on what is inside the "transactionTraces".
Your Substreams sink implementation should only take decisions based on the block hash if it receives a full block or the last partial block.
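The last rule can be sketched as a small predicate. This is an illustrative model, not SDK code: the BlockMeta struct is hypothetical and only mirrors the is_partial / is_last_partial flags described in this document.

```go
package main

import "fmt"

// BlockMeta models the fields a sink sees on each BlockScopedData message
// (field names follow this document; the struct itself is illustrative).
type BlockMeta struct {
	Num           uint64
	ID            string
	IsPartial     bool
	IsLastPartial bool
}

// hashIsFinal reports whether the block hash can safely be used for
// decisions: only a full block, or the last partial of a height, carries
// the final hash. Intermediate partials may carry a different hash each time.
func hashIsFinal(m BlockMeta) bool {
	return !m.IsPartial || m.IsLastPartial
}

func main() {
	fmt.Println(hashIsFinal(BlockMeta{Num: 123, IsPartial: true}))                      // intermediate partial: false
	fmt.Println(hashIsFinal(BlockMeta{Num: 123, IsPartial: true, IsLastPartial: true})) // last partial: true
	fmt.Println(hashIsFinal(BlockMeta{Num: 122}))                                       // full block: true
}
```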
Here's how it works:
The "sequencer" emits a Flashblock every 200ms (so a maximum of 10 per block height).
The instrumented Base node reader sends the increasing versions of the same block to the Substreams engine and, eventually, the full block.
To keep up with the chain, it may skip a few emissions of partial blocks, but it will never send transactions out of order.
The Substreams engine remembers what was processed for each active Substreams and only processes the new transactions since the last execution.
It sends the data inside BlockScopedData for each part of the full block as it gets it from the partial blocks, with is_partial=true and with partial_index and is_last_partial populated.
If there is a reorg, an UNDO signal is sent, followed by the correct full blocks for the new chain segment, until we are back at HEAD and start receiving partial blocks again.
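The reorg behavior can be modeled on the consumer side as a buffer of not-yet-final rows that the single UNDO signal invalidates. This is a hypothetical sketch; the buffer type and string payloads are illustrative and not part of any Substreams API.

```go
package main

import "fmt"

// buffer holds rows produced from partial blocks, per block height,
// until the height is final; a reorg drops everything above the undo point.
type buffer struct {
	pending map[uint64][]string // height -> rows derived from partials
}

func newBuffer() *buffer { return &buffer{pending: map[uint64][]string{}} }

func (b *buffer) addPartial(height uint64, rows ...string) {
	b.pending[height] = append(b.pending[height], rows...)
}

// undo handles the single UNDO signal sent on a reorg: everything above
// lastValidHeight came from a discarded chain segment and must go.
func (b *buffer) undo(lastValidHeight uint64) {
	for h := range b.pending {
		if h > lastValidHeight {
			delete(b.pending, h)
		}
	}
}

func main() {
	b := newBuffer()
	b.addPartial(123, "rows from trx 0-20")
	b.addPartial(123, "rows from trx 20-40")
	b.undo(122) // reorg: the partials for #123 are discarded in one step
	fmt.Println(len(b.pending[123])) // 0
}
```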
Changes to Protobuf models
Partial blocks are sent as regular
[BlockScopedData](https://buf.build/streamingfast/substreams/docs/main:sf.substreams.rpc.v2#sf.substreams.rpc.v2.BlockScopedData), with is_partial=true. The ordinal of that partial is set in partial_index, and the last partial will always have is_last_partial=true.
The sf.substreams.rpc.v2.Request and sf.substreams.rpc.v3.Request messages now contain a parameter to enable partial blocks.
Developing for partial blocks
When writing a Substreams module that will run on partial blocks, remember that your modules will run multiple times on small increments of the same block. This means that any kind of aggregation in a mapper will be incorrect. Only process data inside the block as if it were a stream of transactions. Also, never use the block hash in your modules, as it changes between versions of a partial block.
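Substreams mappers are usually written in Rust; the sketch below uses Go purely to contrast per-transaction processing (safe under partial blocks) with block-level aggregation (incorrect under partial blocks). The Transfer type and both functions are hypothetical.

```go
package main

import "fmt"

// Transfer is a hypothetical event extracted from a transaction trace.
type Transfer struct {
	TrxIndex uint32
	Amount   uint64
}

// mapTransfers only looks at the transaction traces present in this
// (possibly partial) payload. It is safe under partial blocks because
// each output row depends on a single transaction.
func mapTransfers(traces []Transfer) []Transfer {
	out := make([]Transfer, 0, len(traces))
	for _, t := range traces {
		if t.Amount > 0 { // per-transaction filter: no cross-transaction state
			out = append(out, t)
		}
	}
	return out
}

// sumTransfers is the kind of block-level aggregation to AVOID: run on
// partial blocks, it returns a partial sum for each increment, never the
// block total.
func sumTransfers(traces []Transfer) (total uint64) {
	for _, t := range traces {
		total += t.Amount
	}
	return
}

func main() {
	increment1 := []Transfer{{TrxIndex: 0, Amount: 10}, {TrxIndex: 1, Amount: 0}}
	increment2 := []Transfer{{TrxIndex: 2, Amount: 5}}
	fmt.Println(len(mapTransfers(increment1)), len(mapTransfers(increment2))) // 1 1
	fmt.Println(sumTransfers(increment1), sumTransfers(increment2))           // 10 5: partial sums, not a block total
}
```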
Example of workflow
For the hypothetical scenario where:
block #122 already exists at the time of the substreams connection
a block #123 is being emitted as partial blocks
each partial block contains exactly 10 new transactions (to simplify the example)
the Substreams engine receives only the partial blocks with index 2, 4 and 7 (some may be skipped to keep up with the chain HEAD)
finally, it receives the full block #123
The module will be executed on full block #122 (with transactions 0-100).
Then, the module will be executed 4 times with partial data:
with transactions 0-20
with transactions 20-40
with transactions 40-70
with transactions 70-100 (when it gets the full block)
The user will receive 5 BlockScopedData messages:
The full block #122, with Clock(num=122, ID=...) and isPartial=false
The result of executing trx 0-20, with Clock(num=123, ID=0x123aaaaaa), isPartial=true, partialIndex=2, isLastPartial=false
The result of executing trx 20-40, with Clock(num=123, ID=0x123bbbbbbb), isPartial=true, partialIndex=4, isLastPartial=false
The result of executing trx 40-70, with Clock(num=123, ID=0x123ccccccc), isPartial=true, partialIndex=7, isLastPartial=false
The result of executing trx 70-100, with Clock(num=123, ID=0x123ddddddd), isPartial=true, partialIndex=10, isLastPartial=true
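The walkthrough can be checked mechanically: the increments for one height are contiguous and in order, so concatenating them covers the full block's transactions exactly once. A small sketch, where the msg struct and its transaction-range fields are simplified stand-ins for the real messages:

```go
package main

import "fmt"

// msg mirrors the five BlockScopedData messages from the walkthrough
// (hypothetical shape: each carries the transaction range it covers).
type msg struct {
	num            uint64
	partialIndex   uint32 // unset on a full block
	isPartial      bool
	isLastPartial  bool
	trxFrom, trxTo int
}

// walkthrough is the message sequence from the example above.
var walkthrough = []msg{
	{num: 122, trxFrom: 0, trxTo: 100}, // full block #122
	{num: 123, isPartial: true, partialIndex: 2, trxFrom: 0, trxTo: 20},
	{num: 123, isPartial: true, partialIndex: 4, trxFrom: 20, trxTo: 40},
	{num: 123, isPartial: true, partialIndex: 7, trxFrom: 40, trxTo: 70},
	{num: 123, isPartial: true, partialIndex: 10, isLastPartial: true, trxFrom: 70, trxTo: 100},
}

// coveredRange walks the messages for one height and returns how many
// transactions were covered, panicking if an increment leaves a gap.
func coveredRange(stream []msg, height uint64) int {
	covered := 0
	for _, m := range stream {
		if m.num != height {
			continue
		}
		if m.trxFrom != covered {
			panic("gap in transaction stream")
		}
		covered = m.trxTo
	}
	return covered
}

func main() {
	fmt.Println(coveredRange(walkthrough, 123)) // 100: every transaction of #123 seen exactly once
}
```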
Consuming partial blocks
A simple test, from the terminal, with substreams run
Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.9
To test with a common module, using jq to quickly see what is going on (you need jq installed):
substreams run -e https://base-mainnet-flash.streamingfast.io ethereum_common all_events -s -5 --partial-blocks -o jsonl | jq -r '"Block: #\(.["@block"]) Partial: \(.["@partial_index"]) (last:\(.["@is_last_partial"])) Event count:\(.["@data"].events|length)"'
This will print one line per message. When you see "Partial: null" and "last:null", it means that it is the actual full block.
To see how it performs with a clock, you can, as always, use -o clock with something like this:
substreams run -e https://base-mainnet-flash.streamingfast.io https://github.com/graphprotocol/graph-node/raw/refs/heads/master/substreams/substreams-head-tracker/substreams-head-tracker-v1.0.0.spkg -s -1 -o clock --partial-blocks
This will print the clock for each partial and full block.
A more useful example, with the Substreams Webhook Sink:
Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.9
Run this command:
substreams sink webhook --partial-blocks -e https://base-mainnet-flash.streamingfast.io http://webhook.example.com path-to-your.spkg -s -1
Enjoy!
More sinks
Flashblock support is not implemented in other sinks. For example, we believe it would be a bad idea to implement it in the SQL sink, because it would cause too many "undo" operations. If you are using our Golang Substreams Sink SDK, you can simply:
Bump to the latest version of substreams in your go.mod (v1.17.9 or above).
Define your sink flags with sink.FlagPartialBlocks under FlagIncludeOptional() when creating the sinker.
Optionally, add some logic to handle the "IsPartial", "PartialIndex" and "IsLastPartial" attributes in your HandleBlockScopedData(...) function.
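As a sketch of what that optional logic might look like: local types stand in for the SDK's BlockScopedData here, and the handle function is illustrative, not the SDK's actual callback signature. The policy shown (stream intermediate partials, commit only on the last partial or a full block) follows the rules described earlier in this document.

```go
package main

import "fmt"

// data models the subset of BlockScopedData fields this document
// describes; in a real sink these would come from the Substreams Go
// Sink SDK's HandleBlockScopedData callback.
type data struct {
	BlockNum      uint64
	IsPartial     bool
	PartialIndex  uint32
	IsLastPartial bool
}

// handle sketches partial-aware sink logic: push intermediate partials
// to fast consumers, but only commit durable state on the last partial
// or on a full block, when the hash and content are final.
func handle(d data) string {
	switch {
	case !d.IsPartial:
		return "commit full block"
	case d.IsLastPartial:
		return "commit last partial"
	default:
		return fmt.Sprintf("stream partial #%d", d.PartialIndex)
	}
}

func main() {
	fmt.Println(handle(data{BlockNum: 123, IsPartial: true, PartialIndex: 2}))                      // stream partial #2
	fmt.Println(handle(data{BlockNum: 123, IsPartial: true, PartialIndex: 10, IsLastPartial: true})) // commit last partial
	fmt.Println(handle(data{BlockNum: 124}))                                                         // commit full block
}
```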