Run your own Ethereum Validator (Part 1): PBFT, Casper FFG, Casper CBC and LMD GHOST

BrianKim
Coinmonks

Running a node allows you to directly, trustlessly and privately use Ethereum while supporting the network by keeping it more robust and decentralized.

This article is a bit dry and heavily academic, but I think that building in-depth core knowledge is the best way to take the first step into something. Try not to skip this story; spending some time reading it is definitely worth it.

It is readily apparent that Ethereum is based on a Proof of Stake consensus algorithm, but what it is and how it works are questions commonly seen on the internet. Despite the wealth of information out there, there’s no easy jumping-off point for a beginner to ramp up and start engaging with the project. Let’s dissect it in every detail.

PBFT is a Byzantine fault tolerance protocol for state machine replication. State machine replication is a method for avoiding the unavailability of online services due to failures, by duplicating the system state over multiple machines to gain some degree of redundancy.

How do all replicated state machines reach the consensus between each other?

In a situation where consensus must be achieved exclusively by relying on communication, we face the well-known consensus problem of decentralized systems, commonly known as the Byzantine Generals Problem.

An intuitive idea for solving the Byzantine Generals Problem is to use one or more rounds of voting to obtain a majority consensus. PBFT’s greatest innovation lies in its design with multiple voting rounds, composed of three stages: Pre-prepare, Prepare, and Commit.

For the sake of simplicity, let me make the following assumptions:

  • We have in total 4 replicated state machines (nodes); one of them could be a traitor which can behave arbitrarily (a Byzantine fault).
  • Each node is assigned a number (0~3) and can recognize the validity of each other’s signature.
  • Nodes follow a round-robin rotation to determine who will be the Primary during the next epoch. As an example, node number 0 will be the Primary during the first view, node number 1 during the second one, and so on. When all nodes have been called upon to be Primaries, the rotation starts again.

Step 1: Pre-prepare

  • The Primary is responsible for receiving the client’s request.
  • The Primary is responsible for initiating the proposal, including the message content, the current view number, and a sequence number (the ordinal number of the action being undertaken).
  • The Primary sends that proposal as a “pre-prepare” message with its signature to the other validators.

Step 2: Prepare

  • After each replica receives the “pre-prepare” message, it can either accept or reject the Primary’s proposal. If the replica accepts the Primary’s proposal, it will send a “prepare” message with its own signature to all the other replicas (including the Primary). If it rejects, it will not send any message.
  • If a replica receives 2f+1 “prepare” messages, it enters the “prepared” state. The collection of these “prepare” messages is collectively referred to as the “prepared certificate.”

Step 3: Commit

  • If the “prepared” node decides to commit, it will send a “commit” message with the signature to all the nodes (replicas). If it decides not to execute, no message is sent.
  • If the nodes receive 2f+1 “commit” messages, the message object is executed, which also means that the proposal has reached a consensus.
  • After the execution of the message, the node enters the “committed” state and returns the execution result (i.e. the reply) to the client. After the reply has been sent, the replicas wait for the next request.
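To make the three phases above concrete, here is a minimal sketch in Python of the quorum bookkeeping a single replica might run. The class, method and callback names are assumptions for illustration; a real PBFT implementation also verifies signatures, digests, view numbers and sequence numbers, and counts its own vote.

```python
# Minimal sketch of one replica's PBFT phase logic (illustrative only).
N = 4               # total replicas
F = (N - 1) // 3    # maximum number of Byzantine replicas tolerated
QUORUM = 2 * F + 1  # votes needed to advance to the next phase

class Replica:
    def __init__(self, node_id):
        self.node_id = node_id
        self.prepares = {}  # (view, seq) -> set of sender ids
        self.commits = {}   # (view, seq) -> set of sender ids

    def on_pre_prepare(self, view, seq, request, broadcast):
        # Accept the Primary's proposal and announce our own "prepare" vote.
        broadcast(("prepare", view, seq, request, self.node_id))

    def on_prepare(self, view, seq, request, sender, broadcast):
        voters = self.prepares.setdefault((view, seq), set())
        voters.add(sender)
        if len(voters) == QUORUM:
            # "prepared" state: the collected votes form the prepared certificate.
            broadcast(("commit", view, seq, request, self.node_id))

    def on_commit(self, view, seq, request, sender, reply):
        voters = self.commits.setdefault((view, seq), set())
        voters.add(sender)
        if len(voters) == QUORUM:
            # "committed" state: execute the request and reply to the client.
            reply(f"executed {request} at (view={view}, seq={seq})")
```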

The view change

The core reason for triggering the view-change protocol is that a Replica node confirms that, within a limited time, the current Primary node cannot reach a consensus on the request. This may be because the Primary node is temporarily unavailable, because it is a faulty node, because the network of the distributed system is currently unstable, etc.

  • If the client does not hear a reply back soon enough, it broadcasts its request directly to the replica nodes and keeps sending it until the request is executed.
  • Each node starts a timer when it receives a request and stops it after the request has been executed.
  • If the Primary fails to broadcast the client’s request and start the “prepare” phase, or if the message is not executed within a given maximum amount of time T after the timer was started, the Replica node broadcasts a “view change” message. This message includes the new view number (the current view number + 1) and a number of other details.
  • Upon receiving 2f+1 “view change” messages, the new Primary can broadcast a “new view” message to all the replicas. This message will include the new view number, a proof that the previous request has not been executed, a “prepare” message, and a number of other components.
  • After receiving the “new view” message, each replica will follow the three steps of request execution described above, once the new Primary broadcasts the client’s request.
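Below is a hedged sketch of the view-change trigger described above; the class, the timeout value and the message tuples are assumptions for illustration.

```python
# Illustrative sketch of the view-change trigger (names and timings are assumptions).
import time

N = 4                 # total replicas
F = (N - 1) // 3      # maximum number of faulty replicas
QUORUM = 2 * F + 1
T = 5.0               # max seconds a replica waits for a request to be executed

def primary_of(view):
    # Round-robin primary rotation across the N replicas.
    return view % N

class ViewChange:
    def __init__(self, node_id, view, broadcast):
        self.node_id = node_id
        self.view = view
        self.broadcast = broadcast
        self.request_started_at = None
        self.votes = {}  # proposed view -> set of voter ids

    def on_request(self):
        self.request_started_at = time.monotonic()  # start the timer

    def on_executed(self):
        self.request_started_at = None              # stop the timer

    def tick(self):
        # The Primary failed to drive the request to execution in time:
        # vote to move to view + 1.
        if self.request_started_at and time.monotonic() - self.request_started_at > T:
            self.broadcast(("view-change", self.view + 1, self.node_id))

    def on_view_change(self, new_view, sender):
        voters = self.votes.setdefault(new_view, set())
        voters.add(sender)
        # The designated Primary of the new view announces it after 2f+1 votes.
        if len(voters) == QUORUM and primary_of(new_view) == self.node_id:
            self.broadcast(("new-view", new_view, self.node_id))
```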

Casper FFG is a PBFT-inspired and improved consensus protocol. Although it is designed to be simple, its proof of security is not easy.

Minimal Slashing Condition

The four rules, called PBFT’s “Minimal Slashing Conditions”, any violation of which will cause the deposit to be “slashed”, are the following:

1. Sending a commit requires seeing 2/3 prepares.

2. If you make a prepare in some epoch pointing to some particular previous epoch, then you need to have seen 2/3 prepares in that epoch, and those prepares must point to the same previous epoch.

3. If you make a commit during some epoch, then you clearly saw 2/3 prepares during that epoch, and so any future prepares that you do should better be referencing that epoch or something newer.

4. You can't prepare twice in a single epoch.

These 4 rules can be further reduced to 2 commandments, which are the minimal slashing conditions for Casper FFG:

1. A validator must not publish two distinct votes for the same target height.
2. A validator must not vote within the span of its other votes.
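Given a vote format that carries a source and a target checkpoint, the two commandments can be checked mechanically from a pair of votes. A minimal sketch, assuming hypothetical field names (a real implementation compares the actual checkpoint hashes and verifies signatures):

```python
# Sketch of the two Casper FFG slashing checks (field names are assumptions).
from collections import namedtuple

Vote = namedtuple("Vote", ["validator", "source_height", "target_hash", "target_height"])

def is_double_vote(v1, v2):
    # Commandment 1: two distinct votes for the same target height.
    return (v1.validator == v2.validator
            and v1.target_height == v2.target_height
            and v1.target_hash != v2.target_hash)

def is_surround_vote(v1, v2):
    # Commandment 2: one vote lies strictly within the span of the other.
    def surrounds(outer, inner):
        return (outer.source_height < inner.source_height
                and inner.target_height < outer.target_height)
    return v1.validator == v2.validator and (surrounds(v1, v2) or surrounds(v2, v1))

# Example: a vote spanning heights 3 -> 9 surrounds a vote spanning 4 -> 8,
# so publishing both makes the validator slashable.
a = Vote("v0", 3, "0xaa", 9)
b = Vote("v0", 4, "0xbb", 8)
assert is_surround_vote(a, b)
```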

How Does Casper FFG Work?

Casper FFG is an overlay that abstracts away the block proposing mechanism and is only responsible for consensus; the block proposing mechanism is implemented by the underlying chain, and the block proposals coming from the underlying chain are called checkpoints.

Each node must vote on the checkpoints. The content of the vote is a link consisting of two checkpoints from different heights. The lower checkpoint of the link is called the source and the higher checkpoint of the link is called the target.

The node broadcasts its vote to the network and collects votes from other nodes at the same time. If the sum of the deposits of the nodes voting for a link L exceeds 2/3 of the total deposit, then L is called a supermajority link, denoted s → t. For example, in the figure below, supermajority links are formed between r → b1, b1 → b2 and b2 → b3.

Starting from the root, if a supermajority link is established between the two checkpoints, the target of the link becomes “justified” and the source of the link, which is currently “justified,” becomes “finalized”. The root is “justified” and “finalized” by default.

It can be noticed that after every two rounds of voting, each checkpoint becomes “justified” and then “finalized,” which is almost the equivalent of “prepared” and “committed” in PBFT. For instance, once b2 → b3 has become a supermajority link, checkpoint b3 becomes justified, and the previously justified checkpoint b2 becomes finalized.
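The bookkeeping that follows from this rule is small enough to write down. A minimal sketch, assuming the supermajority links have already been tallied and following the simplified description above:

```python
# Sketch of checkpoint justification/finalization (illustrative).

def process_supermajority_link(source, target, justified, finalized):
    """Apply the rule for one supermajority link source -> target.
    `justified` and `finalized` are sets of checkpoint ids; the root is in both."""
    if source in justified:
        justified.add(target)   # the target of the link becomes justified
        finalized.add(source)   # the already-justified source becomes finalized

# Example matching the figure: r -> b1 -> b2 -> b3 are supermajority links.
justified, finalized = {"r"}, {"r"}
for s, t in [("r", "b1"), ("b1", "b2"), ("b2", "b3")]:
    process_supermajority_link(s, t, justified, finalized)

print(justified)  # {'r', 'b1', 'b2', 'b3'}
print(finalized)  # {'r', 'b1', 'b2'}
```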

Fork Choice Rule & Enabling Dynamic Validator Sets

Each node must follow the rule of FOLLOWING THE CHAIN CONTAINING THE JUSTIFIED CHECKPOINT OF THE GREATEST HEIGHT.
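A minimal sketch of this fork choice, assuming each candidate chain is available as a list of checkpoints annotated with whether they are justified (the representation is an assumption):

```python
# Sketch of the Casper FFG fork choice rule (illustrative representation).

def choose_chain(chains):
    """Pick the chain containing the justified checkpoint of greatest height.
    Each chain is a list of (height, is_justified) checkpoints."""
    def highest_justified(chain):
        return max((h for h, justified in chain if justified), default=-1)
    return max(chains, key=highest_justified)

# Example: the second chain holds a justified checkpoint at height 3, so it wins.
chain_a = [(0, True), (1, True), (2, False)]
chain_b = [(0, True), (1, True), (2, True), (3, True)]
print(choose_chain([chain_a, chain_b]) is chain_b)  # True
```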

According to the Casper FFG paper, the set of validators needs to be able to change: new validators must be able to join, and existing validators must be able to leave. This leads to the new notion of the dynasty of a block. The dynasty of block b is the number of finalized checkpoints in the chain from the root to the parent of block b.

When a validator’s deposit message is included in a block with dynasty d, that validator will join the validator set at the first block with dynasty d + 2, which is the validator’s start dynasty, DS(ν). The same holds for withdrawal: if the validator’s withdraw message is included in a block with dynasty d, it similarly leaves the validator set at the first block with dynasty d + 2, which is the validator’s end dynasty, DE(ν).

At the start of the end dynasty, the validator’s deposit is locked for a long period of time, called the withdrawal delay (approximately “four months worth of blocks”), before the deposit is withdrawn. If during the withdrawal delay, the validator violates any commandment, the deposit is slashed.

There are two subsets of validators for any given dynasty d: the forward validator set, with DS(ν) ≤ d < DE(ν), and the rear validator set, with DS(ν) < d ≤ DE(ν).

To support these dynamic validator sets, we redefine a supermajority link and a finalized checkpoint as follows:

  • An ordered pair of checkpoints (s, t), where t is in dynasty d, has a supermajority link if both at least 2/3 of the forward validator set of dynasty d have published votes s → t and at least 2/3 of the rear validator set of dynasty d have published votes s → t.
  • Previously, a checkpoint c was called finalized if c is justified and there is a supermajority link c → c′ where c′ is a child of c. We now add the condition that c is finalized only if the votes for the supermajority link c → c′, as well as the supermajority link justifying c, are included in c′’s block chain before the child of c′, i.e., before block number h(c′) ∗ 100 + 1.
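A hedged sketch of this stitched supermajority check, assuming each validator record carries its deposit and its start/end dynasties (the data layout and names are assumptions):

```python
# Sketch of the supermajority-link check with dynamic validator sets (illustrative).
from collections import namedtuple

Validator = namedtuple("Validator", ["name", "deposit", "start_dynasty", "end_dynasty"])

def in_forward_set(v, d):
    return v.start_dynasty <= d < v.end_dynasty   # DS(v) <= d < DE(v)

def in_rear_set(v, d):
    return v.start_dynasty < d <= v.end_dynasty   # DS(v) < d <= DE(v)

def is_supermajority_link(voters, validators, d):
    """voters: the validators that published a vote s -> t, where t is in dynasty d."""
    def voted_fraction(member_of):
        group = [v for v in validators if member_of(v, d)]
        total = sum(v.deposit for v in group)
        voted = sum(v.deposit for v in group if v in voters)
        return voted / total if total else 0.0
    # Both the forward AND the rear validator set must reach 2/3 by deposit.
    return voted_fraction(in_forward_set) >= 2 / 3 and voted_fraction(in_rear_set) >= 2 / 3

# Example: three long-lived validators vote; a fourth only joins at dynasty 5.
vs = [Validator("a", 100, 0, 10), Validator("b", 100, 0, 10),
      Validator("c", 100, 0, 10), Validator("d", 100, 5, 10)]
print(is_supermajority_link({vs[0], vs[1], vs[2]}, vs, d=5))  # True: 3/4 forward, 3/3 rear
```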

The forward and rear validator sets will usually greatly overlap, but if the two validator sets substantially differ, this “stitching” mechanism prevents safety failure in the case when two grandchildren of a finalized checkpoint have different dynasties because the evidence was included in one chain but not the other.

Attack from dynamic validator sets

In the above figure, without the validator set stitching mechanism, it is possible for two conflicting checkpoints c and c′ to both be finalized without any validator getting slashed. In this case c and c′ are at the same height, thus violating commandment 1, but because the validator sets finalizing c and c′ are disjoint, no one gets slashed.

Summary

Casper FFG makes improvements in a number of PBFT’s characteristics:

  • Economic constraints: PBFT is a permissioned protocol, as it relies on trusted nodes to run. Casper FFG is permissionless; it introduces the minimal slashing conditions and uses the risk of economic loss to restrict the behavior of nodes.
  • Abstract block proposing mechanism: PBFT relies on an honest primary to generate blocks and requires a view-change mechanism to restrict Byzantine nodes. Casper FFG is not concerned with the underlying block proposing mechanism; it is only responsible for reaching consensus.
  • Pipelined voting: PBFT has several messages such as <Prepare>, <Commit>, <View-change>. Casper FFG has only one, <Vote>, and the content of the vote is not a single block/request but two linked checkpoints. Checkpoints forming a chain go through two rounds of voting at two different heights: each round of voting justifies the target and finalizes the source, so the consensus can continue as a pipeline.
  • Robust to attacks: PBFT is not resistant to long-range attacks and catastrophic crashes. Against long-range attacks, nodes must regularly update to the latest blocks and must never revert blocks that have been finalized. For catastrophic crashes, Casper FFG introduces the “Inactivity Leak.”

How it works

For simplicity, we will suppose that there is a fixed validator set consisting of N validators (a fancy word for “staking nodes”; we also assume that each node stakes the same amount of coins). Time is broken up into ten-second slots, and validator k can create a block in slot k, N + k, 2N + k, etc. Each block points to one specific parent block, and the whole chain follows the longest-chain rule.

The green chain is the longest chain (length 6) so it is considered to be the “canonical chain”.
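A minimal sketch of this toy slot-to-proposer assignment, assuming ten-second slots counted from some genesis time (the function names are illustrative):

```python
# Sketch of the toy slot -> proposer assignment described above (illustrative).
SLOT_SECONDS = 10
N = 5  # number of validators, labeled 0..N-1 (A..E in the example below)

def slot_at(now, genesis_time):
    # Which ten-second slot a given timestamp falls into.
    return (now - genesis_time) // SLOT_SECONDS

def proposer_for(slot):
    # Validator k proposes at slots k, N + k, 2N + k, ...
    return slot % N

# Validator A (index 0) proposes at slots 0 and 5, B (index 1) at slots 1 and 6, etc.
assert proposer_for(0) == proposer_for(5) == 0
assert proposer_for(1) == proposer_for(6) == 1
```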

To begin, we replace the fork choice rule (the rule that chooses which chain among many possible choices is “the canonical chain”), moving away from the simple longest-chain rule and instead using “latest message driven GHOST” (LMD GHOST). To show how LMD GHOST works, we will modify the above example. Suppose the validator set has size 5, labeled A, B, C, D and E, so validator A makes the blocks at slots 0 and 5, validator B at slots 1 and 6, etc.

Please do not mistake SLOT for BLOCK HEIGHT. (In this example, remember we assume that time is broken up into ten-second SLOTS, which means that every 10 s a block might be proposed; if it passes the validity check and reaches consensus, it is added to the blockchain and increases the BLOCK HEIGHT.)

A client evaluating the LMD GHOST fork choice rule cares only about the most recent (ie. highest-slot) message (ie. block) signed by each validator:

Latest messages in blue, slots from left to right (eg. A’s block on the left is at slot 0, etc.)

Now, we will use only these messages as source data for the “greedy heaviest observed subtree” (GHOST) fork choice rule.

Start at the genesis block; each time there is a fork, choose the side where more of the latest messages support that block’s subtree, and keep doing this until you reach a block with no children. We can compute for each block the subset of latest messages that support either the block or one of its descendants.
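A minimal sketch of this head computation, assuming the block tree and the latest messages are given as plain dictionaries (the data layout is an assumption, and ties are broken arbitrarily):

```python
# Sketch of LMD GHOST head selection (illustrative data structures).
from collections import defaultdict

def lmd_ghost_head(children, latest_blocks, genesis):
    """children: block -> list of child blocks.
    latest_blocks: validator -> block of that validator's latest message."""
    parents = {c: p for p, cs in children.items() for c in cs}
    # Count, for every block, how many latest messages support it or a descendant.
    support = defaultdict(int)
    for block in latest_blocks.values():
        while block is not None:
            support[block] += 1
            block = parents.get(block)
    # Walk down from genesis, at each fork following the best-supported subtree.
    head = genesis
    while children.get(head):
        head = max(children[head], key=lambda b: support[b])
    return head

# Tiny example: two competing children of the genesis block.
children = {"genesis": ["a", "b"], "a": ["a1"], "b": []}
latest = {"A": "a1", "B": "a", "C": "b"}
print(lmd_ghost_head(children, latest, "genesis"))  # "a1"
```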

To compute the head, we start at the beginning, and then at each fork pick the higher number.

  • First, pick the bottom chain, as it has 4 latest messages supporting it versus 1 for the single-block top chain at the first fork.
  • At the next fork, support the middle chain because it has the most support, precisely 4.

The result is 5–5–4–4–2–1, the same longest chain as before. Indeed, in a well-running network (ie. the orphan rate is low), LMD GHOST and the longest chain rule will give the exact same answer almost all of the time.

But in more extreme circumstances, this is not always true. For example, consider the following chain, with a more substantial three-block fork.

Scoring blocks by chain length. If we follow the longest chain rule, the top chain is longer, so the top chain wins.
Scoring blocks by number of supporting latest messages and using the GHOST rule (latest message from each validator shown in blue). The bottom chain has more recent support, so if we follow the LMD GHOST rule the bottom chain wins, though it’s not yet clear which of the three blocks takes precedence.

The LMD GHOST approach is advantageous in part because it is better at extracting information in conditions of high latency. If two validators create two blocks with the same parent, they should really be both counted as cooperating votes for the parent block, even though they are at the same time competing votes for themselves. The longest chain rule fails to capture this nuance.

Detecting finality

The LMD GHOST approach has another nice property: it’s sticky. For example, suppose that for two rounds, 4/5 of validators voted for the same chain (we’ll assume that B, the one validator of the five that did not, is attacking).

Four of the five validators built on top of E’s first block, and all four recognized that it had a high score in the LMD fork choice. By looking at the structure of the chain, we can know for a fact at least some of the messages that the validators must have seen at different times. Here is what we know about the four validators’ views:

Blocks produced by each validator in green, the latest messages we know that they saw from each of the other validators in blue.

Note that all four of the validators could have seen one or both of B’s blocks, and D and E could have seen C’s second block, making that the latest message in their views instead of C’s first block. However, the structure of the chain itself gives us no evidence that they actually did. Fortunately, as we will see below, this ambiguity does not matter for us.

Looking in more detail, A’s view contains four latest-messages supporting the bottom chain, and none supporting B’s block. Hence, in (our simulation of) A’s eyes the score in favor of the bottom chain is at least 4–1.

The views of C, D and E paint a similar picture, with four latest-messages supporting the bottom chain. Hence, all four of the validators are in a position where they cannot change their minds unless two other validators change their minds first to bring the score to 2–3 in favor of B’s block.

A minimal viable attack. A and C illegally switch over to support B’s block (and can get penalized for this), giving it a 3–2 advantage, and at this point it becomes legal for D and E to also switch over.

Since fork choice rules such as LMD GHOST are sticky in this way, and clients can detect when the fork choice rule is “stuck on” a particular block, we can use this as a way of achieving asynchronously safe consensus.

CBC Casper is designed to be fundamentally very versatile and abstract, and to come to consensus on pretty much any data structure.

Safety Oracles

To detect all possible situations where the chain becomes stuck on some block, a set of heuristics (“safety oracles”) comes into play; the simplest one is the clique oracle.

If there exists some subset V of the validators, making up portion p of the total validator set (with p > 1/2), all of whom make blocks supporting some block B and then make another round of blocks, still supporting B, that reference their first round of blocks, then because of the two rounds of messaging we know that every validator in this subset V:

  • (i) supports B
  • (ii) knows that B is well-supported, and so none of them can legally switch over unless enough others switch over first.

For some competing B′ to beat out B, the support such a B′ can legally have is initially at most 1 − p (everyone not part of the clique, clearly less than 1/2), and to win the LMD GHOST fork choice its support needs to reach 50%, so at least p − 1/2 of the validators need to illegally switch over to get it to the point where the LMD GHOST rule supports B′.

Note that the p = 3/4 clique oracle offers a 1/4 level of safety, and a set of blocks satisfying the clique can (in normal operation) be generated as long as 3/4 of nodes are online. Hence, in a BFT sense, 1/4 is the level of fault tolerance that can be reached using two-round clique oracles, in terms of both liveness and safety.
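The arithmetic above is simple enough to write down directly. A small sketch, assuming equal-weight validators:

```python
# Sketch of the clique-oracle safety margin p - 1/2 (illustrative, equal weights).

def clique_safety_margin(p):
    """Fraction of validators that must violate the slashing rules before a
    competing block can win the fork choice, given a clique of weight p > 1/2."""
    # A competitor starts with at most 1 - p legal support and needs 1/2,
    # so at least p - 1/2 of the validators must illegally switch over.
    return max(p - 0.5, 0.0)

print(clique_safety_margin(3 / 4))  # 0.25: the p = 3/4 oracle gives 1/4 safety
```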

Going Further

CBC can be extended further in many ways.

  • First, one can come up with other safety oracles; higher-round clique oracles can reach 1/3 fault tolerance.
  • Second, we can add validator rotation mechanisms. The simplest is to allow the validator set to change by a small percentage every time the p = 3/4 clique oracle is satisfied, but there are other things that we can do as well.
  • Third, we can go beyond chain-like structures, and instead look at structures that increase the density of messages per unit time, like the Serenity beacon chain’s attestation structure:

In this case, it becomes worthwhile to separate attestations from blocks. A block is an object that actually grows the underlying DAG, whereas an attestation contributes to the fork choice rule. However, regardless of which way you do it, the core logic of CBC Casper remains the same.

Validity Rule & Slashing Conditions

Casper CBC’s validity rule states that a block contains both a parent block and a set of attestations that it knows about that are not yet part of the chain (similar to “uncles” in the current Ethereum PoW chain).

For the block to be valid, the block’s parent must be the result of executing the LMD GHOST fork choice rule given the information included in the chain, including the block itself.

Dotted lines are uncle links, eg. when E creates a block, E notices that C is not yet part of the chain, and so includes a reference to C

Casper CBC is safe with only one slashing condition: a validator cannot make two attestations M1 and M2 unless either M1 is in the chain that M2 is attesting to, or M2 is in the chain that M1 is attesting to.

Actually implementing validity and slashing conditions requires checking hash chains and executing fork choice rules in-consensus, so it is not nearly as simple as taking two messages and checking a couple of inequalities between the numbers that these messages commit to (as you can do in Casper FFG).
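Still, the core of the check can be sketched if we simplify by identifying each attestation with the block it attests to and by assuming the parent links are available locally (both simplifications and all names below are assumptions):

```python
# Simplified sketch of the single CBC Casper slashing check (illustrative).

def chain_of(block, parent_of):
    """Yield a block and all of its ancestors, following parent links."""
    while block is not None:
        yield block
        block = parent_of.get(block)

def is_slashable(m1, m2, parent_of):
    """m1, m2: (validator, attested_block) pairs. Slashable if the same validator
    made two attestations and neither lies in the chain the other attests to."""
    v1, b1 = m1
    v2, b2 = m2
    if v1 != v2 or m1 == m2:
        return False
    return b1 not in chain_of(b2, parent_of) and b2 not in chain_of(b1, parent_of)

# Example: two attestations by the same validator on conflicting forks.
parent_of = {"a1": "genesis", "b1": "genesis"}
print(is_slashable(("v0", "a1"), ("v0", "b1"), parent_of))  # True
```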

Liveness

Liveness in CBC Casper piggybacks off of the liveness of whatever the underlying chain algorithm is (eg. if it’s one-block-per-slot, then it depends on a synchrony assumption that all nodes will see everything produced in slot N before the start of slot N + 1).

It is not possible to get “stuck” in such a way that one cannot make progress; it is possible to get to the point of finalizing new blocks from any situation, even one where there are attackers and/or network latency is higher than that required by the underlying chain algorithm.

Suppose that at some time T, the network “calms down” and synchrony assumptions are once again satisfied. Then, everyone will converge on the same view of the chain, with the same head H. From there, validators will begin to sign messages supporting H or descendants of it. From there, the chain can proceed smoothly, and will eventually satisfy a clique oracle, at which point H becomes finalized.

Chaotic network due to high latency
Network latency subsides, a majority of validators see all of the same blocks or at least enough of them to get to the same head when executing the fork choice, and start building on the head, further reinforcing its advantage in the fork choice rule
Chain proceeds “peacefully” at low latency. Soon, a clique oracle will be satisfied

Ethereum 2.0 is a distributed ledger based on the EVM that integrates Casper FFG along with many optimization proposals. Knowing the fundamentals of all the algorithms and mechanisms behind the scenes is a great way to get started implementing your own node with confidence.

Besides, it can be even more interesting to gather some insight into the original motivation of Casper FFG (PBFT) and the other Casper (Vlad Zamfir’s CBC Casper), which may arguably be considerably more complex than FFG, but in terms of the ability to reason about the protocol and the properties it provides, is surprisingly simple.

https://www.linkedin.com/in/ninh-kim-927571149/
