Node
Node model overview
The Node class models a generic blockchain node in SBS. It encapsulates the local chain state, the active consensus protocol, peer connectivity, behaviour flags, reconfiguration state, and interactions with the simulation engine (events, scheduler, queue). Nodes are designed to be protocol-agnostic at the core, while delegating consensus-specific logic to a pluggable consensus protocol (CP) instance.
Key responsibilities
- Maintain the local blockchain, memory pool, and runtime state (alive/synced)
- Host the active consensus protocol instance and expose safe hooks for joining/updating it
- Validate incoming blocks against generic rules and configuration depth
- Detect desynchronisation and initiate high-level sync
- Apply configuration updates via the configuration chain
- Interact with the simulation queue to send/receive events
Core attributes
- id: Unique node identifier
- blockchain: Local list of blocks (genesis at index 0)
- pool: Deque holding pending transactions
- neighbours: Peers for gossip/sync checks
- location, bandwidth: Network placement and link capacity (used by Network)
- state: alive and synced flags
- cp: Active consensus protocol instance (ConsensusProtocol)
- behaviour: Fault/Byzantine behaviour flags and timings
- backlog: Future events to be replayed (local to node)
- queue: Reference to the simulation event queue
- reconfiguration_state: Configuration-chain logic and local config blocks
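The attribute layout above can be sketched as a minimal Python class. This is an illustrative skeleton, not the real constructor: the signature, default values, and the plain-dict `state` are assumptions for readability.

```python
from collections import deque

class Node:
    """Minimal sketch of the Node attribute layout (illustrative only)."""

    def __init__(self, node_id, queue=None):
        self.id = node_id                    # unique node identifier
        self.blockchain = ["genesis"]        # genesis block at index 0
        self.pool = deque()                  # pending transactions
        self.neighbours = []                 # peers for gossip/sync checks
        self.location = None                 # network placement (used by Network)
        self.bandwidth = None                # link capacity (used by Network)
        self.state = {"alive": True, "synced": True}
        self.cp = None                       # active consensus protocol instance
        self.behaviour = {}                  # fault/Byzantine flags and timings
        self.backlog = []                    # future events replayed later
        self.queue = queue                   # simulation event queue reference
        self.reconfiguration_state = None    # configuration-chain logic

node = Node(0)
```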
Lifecycle and consensus integration
- join_latest_conf(time): Creates the CP specified by the latest configuration block and initialises it with time = time and starting_round = last_block_round + 1
- update(time): Attempts to adopt a newly synced configuration by delegating to ReconfigurationState.try_apply_configuration(...). Should be called at protocol-defined safe points to avoid mid-round interference.
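The round handoff in join_latest_conf can be illustrated with a toy sketch. DummyCP, the Block dataclass, and the two-block chain are stand-ins; the real method looks up the CP class named by the latest configuration block.

```python
from dataclasses import dataclass

@dataclass
class Block:
    round: int

class DummyCP:
    """Stand-in for a real consensus protocol instance."""
    def __init__(self, time, starting_round):
        self.time = time
        self.starting_round = starting_round

class Node:
    def __init__(self):
        self.blockchain = [Block(round=0), Block(round=7)]
        self.cp = None

    @property
    def last_block(self):
        return self.blockchain[-1]

    def join_latest_conf(self, time):
        # The real method instantiates the CP specified by the latest
        # configuration block; DummyCP stands in here. The new CP starts
        # one round past the latest data block.
        self.cp = DummyCP(time=time, starting_round=self.last_block.round + 1)

n = Node()
n.join_latest_conf(time=42.0)
```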
The Backlog
Nodes maintain a backlog of messages that arrived too early to be processed (e.g., messages from future rounds, or commit messages received while the node is still in a 'round_start' state). Instead of discarding these outright, nodes store them in an ordered backlog.
When a node transitions into a new state—such as syncing, adopting a new configuration, or advancing consensus rounds—the handler replays the backlog via handle_backlog(node, time). This updates each backlog event’s timestamp to the current call time and attempts re-processing. Events that now pass validation are consumed and removed, while still-premature ones remain queued.
This mechanism is crucial because blockchains are inherently asynchronous: messages may arrive out of order, ahead of local progress, or under an old consensus view. By retaining and retrying “future” messages at safe points, nodes reduce wasted communication and smooth consensus progress.
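The replay loop can be sketched as follows. The Event shape, the `ready_at` field, and the `process(node, event)` validity check are hypothetical; what matters is the pattern: re-stamp each event to the current call time, attempt re-processing, keep only the still-premature ones.

```python
class Event:
    """Toy backlog event: ready_at marks when it becomes processable."""
    def __init__(self, ready_at):
        self.ready_at = ready_at
        self.time = None

def process(node, event):
    # Hypothetical validity check: the event can be consumed once the
    # node's current time has reached the event's readiness point.
    return event.time >= event.ready_at

def handle_backlog(node, time):
    """Replay the backlog at a safe point, as described above."""
    remaining = []
    for event in node.backlog:
        event.time = time              # re-stamp to the current call time
        if not process(node, event):   # still premature: keep it queued
            remaining.append(event)
    node.backlog = remaining

class _N:
    """Minimal node stand-in with just a backlog."""
    def __init__(self, backlog):
        self.backlog = backlog

node = _N([Event(ready_at=5), Event(ready_at=20)])
handle_backlog(node, time=10)
```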
Block intake and validation
validate_block(block) performs CP-agnostic checks before protocol-specific handling:
- Round check: block round must match cp.rounds.round
- Height/depth checks: reject non-advancing or malformed depths; mark gaps as future_block
- Configuration checks using block.extra_data["configuration_depth"]:
- < local conf depth → invalid
- > local conf depth → future_conf (triggers config-chain sync elsewhere)
- == → proceed with decision (valid or future_block)
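The decision table above can be written out as a small function. The dict-based node and block shapes are assumptions for illustration; the return labels ('invalid', 'future_block', 'future_conf', 'valid') mirror the outcomes listed above.

```python
def validate_block(node, block):
    """Sketch of the CP-agnostic checks; node/block are plain dicts here."""
    # 1) Round check: block round must match the node's current round
    if block["round"] != node["round"]:
        return "invalid"
    # 2) Height/depth checks
    if block["depth"] <= node["last_depth"]:
        return "invalid"              # non-advancing depth
    if block["depth"] > node["last_depth"] + 1:
        return "future_block"         # gap ahead of the local chain
    # 3) Configuration checks via extra_data["configuration_depth"]
    conf = block["extra_data"]["configuration_depth"]
    if conf < node["conf_depth"]:
        return "invalid"              # built under a stale configuration
    if conf > node["conf_depth"]:
        return "future_conf"          # triggers config-chain sync elsewhere
    return "valid"

node = {"round": 5, "last_depth": 3, "conf_depth": 1}
ok = {"round": 5, "depth": 4, "extra_data": {"configuration_depth": 1}}
```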
Sync detection and high-level sync
- is_synced_with_neighbours(): Compares the local latest block depth with neighbours; if any neighbour is ahead, returns (False, node_furthest_ahead)
- attempt_sync(time, sync_node=None): If desynced, flips state.synced = False and schedules a high-level local sync event via HighLevelSync.create_local_sync_event(...). When a node is resurrected, it first tries to fast-sync before rejoining the latest configuration.
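The neighbour comparison can be sketched in a few lines. Nodes are represented here as plain dicts with a `depth` field, an assumption made for illustration.

```python
def is_synced_with_neighbours(node, neighbours):
    """Return (True, None) if no neighbour is ahead of us, otherwise
    (False, furthest_ahead_neighbour), as described above."""
    ahead = [n for n in neighbours if n["depth"] > node["depth"]]
    if not ahead:
        return (True, None)
    # Pick the neighbour with the greatest chain depth as the sync target
    return (False, max(ahead, key=lambda n: n["depth"]))

node = {"depth": 5}
peers = [{"id": 1, "depth": 5}, {"id": 2, "depth": 9}, {"id": 3, "depth": 7}]
synced, furthest = is_synced_with_neighbours(node, peers)
```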
Liveness controls
- kill(): Marks the node offline (alive=False); message/local events are ignored while offline
- resurrect(time): Marks the node online and attempts data-chain fast sync; if already in sync, rejoins the latest configuration immediately
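The kill/resurrect branching can be sketched as follows. The dict-based node and the recorded `last_action` are illustrative stand-ins for the real sync and rejoin calls, whose internals live elsewhere.

```python
def kill(node):
    """Take the node offline; events are ignored while alive is False."""
    node["alive"] = False

def resurrect(node, time):
    """Bring the node back online and either rejoin or fast-sync."""
    node["alive"] = True
    if node["synced"]:
        # Already in sync: rejoin the latest configuration immediately
        node["last_action"] = ("join_latest_conf", time)
    else:
        # Otherwise attempt a data-chain fast sync first
        node["last_action"] = ("attempt_sync", time)

node = {"alive": True, "synced": True}
kill(node)
resurrect(node, time=3.0)
```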
Chain mutation and event handling
- add_block(block, time, update_time_added=True): Appends a verified block, updates block.time_added (optional), and removes included transactions from the local pool via TransactionFactory
- add_event(event): Enqueues an event if the node is online
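Block application can be sketched as below. The dict shapes are assumptions, and the pool filtering stands in for the TransactionFactory call in the real code.

```python
from collections import deque

def add_block(node, block, time, update_time_added=True):
    """Append a verified block and drop its transactions from the pool."""
    if update_time_added:
        block["time_added"] = time       # stamp arrival on the local chain
    node["blockchain"].append(block)
    included = set(block["transactions"])
    # Remove transactions now covered by the block (TransactionFactory
    # performs this step in the real implementation)
    node["pool"] = deque(tx for tx in node["pool"] if tx not in included)

node = {"blockchain": [], "pool": deque(["a", "b", "c"])}
block = {"transactions": ["b"]}
add_block(node, block, time=1.5)
```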
Helpful utilities
- blockchain_length(): Number of data blocks, excluding genesis
- ids, trunc_ids: Quick views of the local chain
- last_block: Convenience accessor for the latest block
- Readable __repr__/__str__ forms for debugging and UI
In summary
- Consensus protocols (PBFT, Tendermint, BigFoot, etc.) run inside node.cp and call node.update(time) at safe boundaries to allow configuration adoption.
- High-level sync uses propagation estimates from the Network model (transmission, latency, queueing, processing) to schedule a single local event that applies missing blocks, repeating as needed until up-to-date.
- The configuration chain links data blocks to the configuration in force at creation time via configuration_depth, ensuring nodes accept blocks under the correct configuration and kick off configuration sync when needed.
This design keeps the Node generic by placing protocol-specific mechanics in CPs and configuration-chain logic in ReconfigurationState, while the Network and Manager components provide propagation timing and system events driving dynamic scenarios.