Flashblocks support
New support for "Flashblocks" is now available for alpha testing on Base Mainnet. For more details about Base Flashblocks, see the Base documentation.
Disclaimers
The only endpoint supporting Flashblocks is https://base-mainnet-flash.streamingfast.io:443. That endpoint is not guaranteed to be stable or available at all times.
The protocol might still change in the next few weeks, as we gather feedback on usage.
Flashblocks coming from a Firehose or Substreams endpoint should not be considered final data. Only write data coming from full blocks to your database: there is no "undo" mechanism for Flashblocks.
Description
Flashblocks are partial blocks that are not fully confirmed yet. They are emitted every 200ms and contain a fraction of the transactions that will be in the final block. Consuming them allows you to get access to transaction data as soon as it's sequenced, rather than waiting for full block confirmation. Transactions can be processed incrementally, making your applications more responsive or predictions more accurate.
Flashblocks in Substreams
Partial Blocks
In Substreams, Flashblocks are called partial blocks, as a generalization of the concept, even though Flashblocks are the only supported implementation so far.
To benefit from partial blocks:
You need the latest version of the Substreams CLI or library.
Your Substreams modules should avoid doing "block-level aggregations" and should only work on what is inside the "transactionTraces".
Here's how it works:
The "sequencer" emits a flashblock every 200ms (so a maximum of 10 per block height)
The instrumented Base node reader sends the increasing versions of the same block to the Substreams engine and eventually, the full block.
To keep up with the chain, it may skip a few emissions of partial blocks, but will never send the transactions out-of-order.
The Substreams engine will remember what was processed for each active Substreams and only process the new transactions since the last execution.
It sends a PartialBlockData message for each new part of the full block, as it receives the partial blocks.
If there is a reorg, new and undo signals are sent for the full blocks, but not for the partial blocks that were already sent: the user must always consider that partial block data may become invalid.
For this reason, there is no "cursor" sent with the partial blocks data.
Changes to Protobuf models
Partial blocks are not sent as BlockScopedData, but as PartialBlockData, which is a new possible type for Response.message. The sf.substreams.rpc.v2.Request and sf.substreams.rpc.v3.Request messages now contain these parameters: include_partial_blocks (stream partial blocks in addition to the full blocks) and partial_blocks_only (stream only the partial blocks, never the final full blocks).
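For Go consumers building the gRPC request directly, setting these parameters could look like the following sketch. The Go field names assume standard protoc-generated bindings for include_partial_blocks and partial_blocks_only, the output module name is hypothetical, and the other required Request fields (modules, etc.) are omitted for brevity.

package example

import (
	pbsubstreamsrpc "github.com/streamingfast/substreams/pb/sf/substreams/rpc/v2"
)

// Sketch only: the field names below assume standard protoc naming for the
// new proto fields include_partial_blocks and partial_blocks_only.
func newRequest() *pbsubstreamsrpc.Request {
	return &pbsubstreamsrpc.Request{
		StartBlockNum: -1,           // start at the chain HEAD, as in the CLI examples below
		OutputModule:  "all_events", // hypothetical output module name
		// Stream PartialBlockData in addition to the final full blocks:
		IncludePartialBlocks: true,
		// Or stream only the partial blocks, never the final full blocks:
		// PartialBlocksOnly: true,
	}
}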
Developing for partial blocks
When writing a Substreams module that will run on partial blocks, remember that your modules will run multiple times on small increments of the same block. This means that any kind of block-level aggregation in a mapper will be incorrect: a "total events in this block" counter, for example, would only see the transactions of the current increment. Only process the data inside the block as if it were a stream of transactions.
Store modules will provide incremental data as the partial blocks get processed, but they may not represent exactly the same data as the full block would. When a block gets completed, stores are recomputed from the final data so that inconsistencies don't add up.
Example of workflow
For the hypothetical scenario where:
a block #123 is being emitted as partial blocks
each partial block contains exactly 10 new transactions (to simplify the example)
the Substreams engine receives only the partial blocks with indexes 2, 4 and 7 (some may be skipped to keep up with the chain HEAD)
finally, it receives the full block #123
The module will be executed 4 times with partial data:
with transactions 0-20
with transactions 20-40
with transactions 40-70
with transactions 70-100 (when it gets the full block)
Then, the module will be executed again with the full block data. This is the data that will be used to apply changes to the stores, to ensure consistency before we execute the next blocks.
The user will receive:
The result of the execution of trx 0-20, within PartialBlockData with Clock (num=123, ID=0xaaaaaaaaa) and PartialIndex=2
The result of the execution of trx 20-40, within PartialBlockData with Clock (num=123, ID=0xbbbbbbbbbb) and PartialIndex=4
The result of the execution of trx 40-70, within PartialBlockData with Clock (num=123, ID=0xcccccccccc) and PartialIndex=7
The result of the execution of trx 70-100, within PartialBlockData with Clock (num=123, ID=0xdddddddddd) and PartialIndex=10
The result of the execution of trx 0-100, within BlockData with Clock (num=123, ID=0xdddddddddd) <- note that the ID here is the same as that of the last partial block received.
Note that the last message above (the full block) will only be received if the user requested include_partial_blocks (and NOT partial_blocks_only).
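To make the engine's bookkeeping concrete, here is a small illustrative Go simulation of the scenario above. The numbers mirror the example; none of the names below are part of the Substreams API.

package main

import "fmt"

func main() {
	// Versions of block #123 actually delivered to the engine: the partial
	// blocks with indexes 2, 4 and 7, then index 10 along with the full block.
	received := []struct {
		partialIndex int
		totalTrxs    int // cumulative transaction count at this version
	}{
		{2, 20}, {4, 40}, {7, 70}, {10, 100},
	}

	processed := 0
	for _, r := range received {
		// The engine executes the module only on the transactions added
		// since the previous execution, then emits a PartialBlockData.
		fmt.Printf("PartialBlockData: trx %d-%d (PartialIndex=%d)\n",
			processed, r.totalTrxs, r.partialIndex)
		processed = r.totalTrxs
	}

	// Once the block is final, the module runs once more on the complete
	// data so the stores are consistent before the next blocks execute.
	fmt.Println("full block: trx 0-100")
}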
Consuming partial blocks
A simple test, from the terminal, with substreams run
Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.8
To test with a common module, using jq to quickly see what is going on (you need jq):
substreams run -e https://base-mainnet-flash.streamingfast.io ethereum_common all_events -s -1 --include-partial-blocks -o jsonl | jq -r '"Block: #\(.["@block"]) Partial: \(.["@partial_index"]) Event count:\(.["@data"].events|length)"'
This will print lines like the following (values are illustrative):
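Block: #35018312 Partial: 4 Event count:118
Block: #35018312 Partial: 7 Event count:96
Block: #35018312 Partial: 10 Event count:84
Block: #35018312 Partial: null Event count:742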
When you see "partial: null" it means that it is the actual full block.
To see how it performs, you can use, as always, the -o clock output with something like this:
substreams run -e https://base-mainnet-flash.streamingfast.io https://github.com/graphprotocol/graph-node/raw/refs/heads/master/substreams/substreams-head-tracker/substreams-head-tracker-v1.0.0.spkg -s -1 -o clock --include-partial-blocks
This will print the clock of each partial and full block as it is received.
A more useful example, with the Substreams Webhook Sink:
Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.8
Run this command:
substreams sink webhook --partial-blocks-only -e https://base-mainnet-flash.streamingfast.io http://webhook.example.com path-to-your.spkg -s -1
Enjoy!
More sinks
Flashblocks support is not implemented in other sinks. For example, we believe that it would be a bad idea to implement it in the SQL sink, because it would cause too many "undo" operations. If you are using our Golang Substreams Sink SDK, you can simply:
Bump to the latest version of substreams in your go.mod (1.17.8 and above)
Define your sink flags with sink.FlagIncludePartialBlocks and/or sink.FlagPartialBlocksOnly under FlagIncludeOptional().
Implement the function HandlePartialBlockData(...) and pass it to NewSinkerFullHandlersWithPartial(...) when creating the sinker, as in the sketch below.
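A minimal sketch of that wiring follows. The full-block handler signatures match the usual SDK handlers, but the exact signature of HandlePartialBlockData and the argument order of NewSinkerFullHandlersWithPartial are assumptions here; consult the substreams-sink godoc for your version.

package main

import (
	"context"
	"log"

	sink "github.com/streamingfast/substreams-sink"
	pbsubstreamsrpc "github.com/streamingfast/substreams/pb/sf/substreams/rpc/v2"
)

// Full, final blocks: safe to persist, and the cursor can be committed.
func handleBlockScopedData(ctx context.Context, data *pbsubstreamsrpc.BlockScopedData, isLive *bool, cursor *sink.Cursor) error {
	log.Printf("full block #%d", data.Clock.Number)
	return nil
}

// Reorg of full blocks: revert anything written above the last valid block.
func handleBlockUndoSignal(ctx context.Context, undo *pbsubstreamsrpc.BlockUndoSignal, cursor *sink.Cursor) error {
	log.Printf("undo down to block #%d", undo.LastValidBlock.Number)
	return nil
}

// Partial blocks: speculative data, delivered without a cursor and without
// undo signals, so never treat it as final. (Assumed signature.)
func handlePartialBlockData(ctx context.Context, data *pbsubstreamsrpc.PartialBlockData) error {
	log.Printf("partial block #%d, partial index %d", data.Clock.Number, data.PartialIndex)
	return nil
}

func main() {
	// Assumed wiring: pass the partial-block handler alongside the usual
	// full-block handlers, then hand the result to your sinker as usual.
	handlers := sink.NewSinkerFullHandlersWithPartial(
		handleBlockScopedData,
		handleBlockUndoSignal,
		handlePartialBlockData,
	)
	_ = handlers
}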