Flashblocks support

New support for "Flashblocks" is now available for alpha testing on Base Mainnet. For more details about Base Flashblocks, see the Base documentation.

Description

Flashblocks are partial blocks that are not fully confirmed yet. They are emitted every 200ms and contain a fraction of the transactions that will be in the final block. Consuming them gives you access to transaction data as soon as it is sequenced, rather than waiting for full block confirmation. Transactions can be processed incrementally, making your applications more responsive or your predictions more accurate.

Flashblocks in Substreams

Partial Blocks

  • In Substreams, Flashblocks are called partial blocks, as a generalization of the concept, even though Flashblocks are the only supported implementation so far.

  • To benefit from partial blocks:

    • You need the latest version of the Substreams CLI or library.

    • Your Substreams modules should avoid "block-level aggregations" and should only work on what is inside the "transactionTraces" (see the sketch right after this list).

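Here is a minimal sketch of that second point: a mapper that treats the block as a stream of transactions and never aggregates across the whole block. It assumes the substreams and substreams-ethereum crates; the Transactions and Transaction output types are hypothetical stand-ins for whatever your own .proto generates.

use substreams::errors::Error;
use substreams::Hex;
use substreams_ethereum::pb::eth::v2 as eth;

// Hypothetical output types; in a real module these come from your .proto via prost codegen.
#[derive(Clone, PartialEq, prost::Message)]
pub struct Transactions {
    #[prost(message, repeated, tag = "1")]
    pub transactions: Vec<Transaction>,
}

#[derive(Clone, PartialEq, prost::Message)]
pub struct Transaction {
    #[prost(string, tag = "1")]
    pub hash: String,
    #[prost(string, tag = "2")]
    pub from: String,
    #[prost(string, tag = "3")]
    pub to: String,
}

#[substreams::handlers::map]
fn map_transactions(blk: eth::Block) -> Result<Transactions, Error> {
    let mut out = Transactions::default();
    // Work only on what is inside transaction_traces: each transaction is handled
    // independently, so running again later on a more complete version of the same
    // block never invalidates what was already emitted.
    for trx in blk.transaction_traces {
        out.transactions.push(Transaction {
            hash: Hex(&trx.hash).to_string(),
            from: Hex(&trx.from).to_string(),
            to: Hex(&trx.to).to_string(),
        });
    }
    Ok(out)
}
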
Here's how it works:

  1. The "sequencer" emits a flashblock every 200ms (so a maximum of 10 per block height)

  2. The instrumented Base node reader sends increasingly complete versions of the same block to the Substreams engine and, eventually, the full block.

  3. To keep up with the chain, it may skip a few emissions of partial blocks, but will never send the transactions out-of-order.

  4. The Substreams engine will remember what was processed for each active Substreams and only process the new transactions since the last execution.

  5. It sends a PartialBlockData message for each new part of the full block, as it receives the partial blocks.

  6. If there is a reorg, new and undo signals are sent for the full blocks, but no undo signals are sent for the partial blocks that were already delivered. The user must always assume that partial block data may become invalid.

  7. For this reason, there is no "cursor" sent with the partial blocks data.

Changes to Protobuf models

The main additions are a PartialBlockData message in the response stream (carrying the block's Clock and its partial index) and the include_partial_blocks and partial_blocks_only request options used in the examples below.

Developing for partial blocks

When writing a Substreams that will run on partial blocks, remember that your modules will run multiple times on small increments of the same block. This means that any block-level aggregation in a mapper will produce incorrect results: only process the data inside the block as if it were a stream of transactions. The sketch below shows the kind of aggregation to avoid.
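
The following is an illustrative anti-pattern only, under the same assumptions as the earlier sketch (BlockStats is a hypothetical prost-generated output type). The counters cover the whole block, so every execution on a partial block emits wrong totals.

use substreams::errors::Error;
use substreams_ethereum::pb::eth::v2 as eth;

// Hypothetical output type; in a real module this comes from your .proto via prost codegen.
#[derive(Clone, PartialEq, prost::Message)]
pub struct BlockStats {
    #[prost(uint64, tag = "1")]
    pub transaction_count: u64,
    #[prost(uint64, tag = "2")]
    pub gas_used: u64,
}

// ANTI-PATTERN: aggregates over the whole block. A partial block only contains a
// fraction of the transactions, so these totals are only correct for the execution
// on the final full block.
#[substreams::handlers::map]
fn map_block_stats(blk: eth::Block) -> Result<BlockStats, Error> {
    let mut stats = BlockStats::default();
    for trx in blk.transaction_traces {
        stats.transaction_count += 1;
        stats.gas_used += trx.gas_used;
    }
    Ok(stats)
}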

Store modules will receive incremental data as the partial blocks get processed, but their content may not represent exactly the same data as the full block would. When a block gets completed, the stores are recomputed from the final data so that inconsistencies don't add up.
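
As a sketch, an additive store fed by the per-transaction mapper shown earlier could look like this (Transactions is the same hypothetical output type; the key naming and ordinal handling are simplified):

use substreams::prelude::*;

#[substreams::handlers::store]
fn store_sender_counts(transactions: Transactions, store: StoreAddInt64) {
    // Additive keys tolerate incremental input: each partial-block execution only
    // adds the newly seen transactions, and the store is recomputed from the final
    // data once the block is completed.
    for trx in transactions.transactions {
        store.add(0, format!("sender:{}", trx.from), 1);
    }
}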

Example of workflow

For the hypothetical scenario where:

  • a block #123 is being emitted as partial blocks

  • each partial block contains exactly 10 new transactions (to simplify the example)

  • the Substreams engine receives only the partial blocks with index 2, 4 and 7 (some may be skipped to keep up with the chain HEAD)

  • finally, it receives the full block #123

The module will be executed 4 times with partial data:

  1. with transactions 0-20

  2. with transactions 20-40

  3. with transactions 40-70

  4. with transactions 70-100 (when it gets the full block)

Then, the module will be executed again with the full block data. This is the data that will be used to apply changes to the stores, to ensure consistency before we execute the next blocks.

The user will receive:

  1. The result of execution of trx 0-20, within PartialBlockData with Clock(num=123, ID=0xaaaaaaaaa) and PartialIndex=2

  2. The result of execution of trx 20-40, within PartialBlockData with Clock(num=123, ID=0xbbbbbbbbbb) and PartialIndex=4

  3. The result of execution of trx 40-70, within PartialBlockData with Clock(num=123, ID=0xcccccccccc) and PartialIndex=7

  4. The result of execution of trx 70-100, within PartialBlockData with Clock(num=123, ID=0xdddddddddd) and PartialIndex=10

  5. The result of execution of trx 0-100, within BlockData with Clock(num=123, ID=0xdddddddddd). Note that the ID here is the same as the last partial block received.

Note that the last item above (the full block) will only be received if the user requested include_partial_blocks (and NOT partial_blocks_only).

Consuming partial blocks

A simple test, from the terminal, with substreams run

  1. Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.8

  2. To test with a common module, use jq to quickly see what is going on (you need jq installed):

substreams run -e https://base-mainnet-flash.streamingfast.io ethereum_common all_events -s -1 --include-partial-blocks -o jsonl | jq -r '"Block: #\(.["@block"]) Partial: \(.["@partial_index"]) Event count:\(.["@data"].events|length)"'

The jq part is optional, only used here to show a quick summary of the content. Without it, you would receive the full JSON objects with the Ethereum events.

This will print lines like the following (the block numbers and event counts shown here are illustrative):
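
Block: #33017812 Partial: 3 Event count:154
Block: #33017812 Partial: 6 Event count:237
Block: #33017812 Partial: 9 Event count:192
Block: #33017812 Partial: null Event count:655
Block: #33017813 Partial: 2 Event count:148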

When you see "Partial: null", it means that it is the actual full block.

  3. To see how it performs with a clock, you can use, as always, the -o clock output mode with something like this:

substreams run -e https://base-mainnet-flash.streamingfast.io https://github.com/graphprotocol/graph-node/raw/refs/heads/master/substreams/substreams-head-tracker/substreams-head-tracker-v1.0.0.spkg -s -1 -o clock --include-partial-blocks

This will print the clock of each partial block and each full block as it is received.

If you see a "negative age", that's because at a partial block with idx=5, the proposed block timestamp is still 2 seconds in the future.

A more useful example, with the Substreams Webhook Sink:

  1. Get the latest release of Substreams: https://github.com/streamingfast/substreams/releases/tag/v1.17.8

  2. Run this command:

substreams sink webhook --partial-blocks-only -e https://base-mainnet-flash.streamingfast.io http://webhook.example.com path-to-your.spkg -s -1

Enjoy!

More sinks

Flashblock support is not implemented in other sinks. For example, we believe it would be a bad idea to implement it in the SQL sink, because it would cause too many "undo" operations. If you are using our Golang Substreams Sink SDK, you can simply:

  • Bump to the latest version of substreams in your go.mod (1.17.8 and above)

  • Define your sink flags with sink.FlagIncludePartialBlocks and/or sink.FlagPartialBlocksOnly under FlagIncludeOptional()

  • Implement the function HandlePartialBlockData(...) and pass it to NewSinkerFullHandlersWithPartial(...) when creating the sinker.
