Recursive proofs on Mir

Brendan Farmer, Daniel Lubarov & William Borgeaud · 2021-01-25

TLDR: Every transaction on Mir is verified with a zero-knowledge proof. These proofs can be recursively composed, so a validator can verify a set of transactions with a single proof. We describe how this works and how it enables a more scalable sharded blockchain.

Transaction validation

An application on Mir is represented as a record[1]: a commitment to the application's state and logic. Records can be created, mutated, consumed, and invoked as functions within a transaction. Transactions encode state transitions applied to Mir's Commitment Log and Liveness Mask[2] (together referred to as the Commitment Set), and consist of a list of operations performed on records and a SNARK proving that these operations are valid.

This proof takes as public inputs: the Commitment Log indices (or addresses) for existing records involved in the transaction, commitments to newly created records, block roots for existing records,[3] and optional auxiliary data.
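To make the shape of these public inputs concrete, here is a minimal sketch; the names and types are illustrative, not Mir's actual interface:

```python
from dataclasses import dataclass

@dataclass
class TransactionPublicInputs:
    record_addresses: list[int]   # Commitment Log addresses of existing records
    new_commitments: list[bytes]  # commitments to newly created records
    block_roots: list[bytes]      # roots of the Commitment Log subtrees holding
                                  # the existing records (see note 3)
    aux_data: bytes = b""         # optional auxiliary data
```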

[Figure: Transaction Diagram]

Validating a transaction on Mir requires three checks (sketched in the code below):

  • Verifying that accessed record addresses are set to active in the Liveness Mask
  • Checking that prior block roots are contained in the Commitment Log
  • Verifying the transaction proof
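A minimal sketch of these checks, reusing the structure above; `liveness_mask` is modeled as a flat array of bits and `verify_snark` is a placeholder for the actual SNARK verifier:

```python
def validate_transaction(tx, liveness_mask, commitment_log_roots, verify_snark) -> bool:
    # 1. Every accessed record address must be active in the Liveness Mask.
    if any(liveness_mask[addr] != 1 for addr in tx.record_addresses):
        return False
    # 2. Every claimed block root must appear in the Commitment Log
    #    (validators store its upper layers, so this is a cheap lookup).
    if any(root not in commitment_log_roots for root in tx.block_roots):
        return False
    # 3. The transaction proof must verify against these public inputs.
    return verify_snark(tx)
```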

This presents a problem, because verifying a zero-knowledge proof is significantly more expensive than verifying an ECDSA signature. If validators needed to verify a proof for every transaction, Mir's throughput would be significantly lower than that of existing chains.

Fortunately, Mir uses a type of zero-knowledge proof that can be recursively composed, so a single proof can provide a cryptographic guarantee that a set of transaction proofs are valid. A subset of validators can verify a set of transactions and then prove to the rest of the network that those transactions are valid, allowing Mir to scale with the compute available to generate recursive proofs.

Recursive validation on Mir

We want to verify that an updated state of our Commitment Log and Liveness Mask is the result of applying a set of valid transactions. We need a proof that:

  • Verifies that each transaction in the set is valid
  • Ensures that no records are double-spent
  • Applies the state transition encoded in the set of transactions

On Mir, validators divide the work of generating this recursive proof. A recursive proof verifying all transaction proofs can be generated in parallel with a recursive proof verifying Commitment Set deletions and insertions.[4]

Verifying transaction proofs

For each transaction, we have a proof $\pi_T$, a set of consumed record addresses $\{d_i\}$, a set of block roots $\{r_i\}$, a set of created record commitments $\{c_i\}$,[5] and a set of accessed record addresses $\{a_i\}$. To verify all the proofs in a block, we aggregate proofs in a binary tree structure, with the leaves being transaction proofs. The total wall-clock time required to generate the root recursive proof is sublinear in the number of transactions.[6]
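For intuition on the wall-clock claim: with enough provers, each layer of the tree is generated in parallel, so the number of sequential proving rounds grows with the tree's depth rather than its size (see note 6). A toy calculation:

```python
import math

def sequential_rounds(num_proofs: int) -> int:
    # One round of recursive proving per tree layer: ceil(log2(n)) rounds.
    return math.ceil(math.log2(num_proofs))

# 1024 transaction proofs collapse to a single root proof in 10 rounds,
# so total wall-clock time is roughly k * 10 for per-proof time k.
assert sequential_rounds(1024) == 10
```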

[Figure: Recursive Proof Tree]

In the next layer of the tree, each proof $\pi_{R1}$ recursively verifies two transaction proofs by computing the public input for each proof and extracting the sets of block roots, the consumed and accessed record addresses, and commitments to created records within each transaction. This proof merges the sets $\{c_i\}$, $\{d_i\}$, $\{r_i\}$, and $\{a_i\}$ for both transaction proofs and passes the resulting sets to the parent proof.[7]
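As a toy model of this merge step (modeling each child's extracted sets as lists in a dict; in the real system these would be exposed as public inputs):

```python
def merge_child_sets(left: dict, right: dict) -> dict:
    # A pi_R1 proof concatenates the sets {c_i}, {d_i}, {r_i}, {a_i}
    # extracted from its two child transaction proofs and passes the
    # result up to its parent.
    return {key: left[key] + right[key]
            for key in ("created", "consumed", "roots", "accessed")}
```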

In the next layer, each proof $\pi_{R2}$ recursively verifies two $\pi_{R1}$ proofs. We aggregate the newly created commitments $\{c_i\}$ from each child proof into a fixed-size binary Merkle tree with root $R_c$. We do the same for $\{d_i\}$, generating root $R_d$ and counting the number of consumed addresses. Given the previous root of the Commitment Set, $\pi_{R2}$ also verifies that the block roots $\{r_i\}$ exist and that the accessed addresses $\{a_i\}$ are set to active.
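Here is a sketch of the fixed-size tree aggregation, with SHA-256 standing in for the in-circuit hash (the real system uses Rescue; see note 8) and zero bytes as padding leaves (the actual padding scheme is an assumption):

```python
import hashlib

PAD = b"\x00" * 32  # padding leaf; placeholder for the real padding scheme

def hash_pair(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def fixed_size_root(leaves: list[bytes], depth: int) -> bytes:
    # Pad to exactly 2**depth leaves, then fold pairwise up to the root.
    layer = leaves + [PAD] * (2**depth - len(leaves))
    for _ in range(depth):
        layer = [hash_pair(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# R_c over created commitments, R_d over consumed addresses, plus the count:
created = [bytes([1]) * 32, bytes([2]) * 32]
consumed = [(5).to_bytes(32, "big"), (9).to_bytes(32, "big")]
R_c = fixed_size_root(created, depth=4)
R_d = fixed_size_root(consumed, depth=4)
num_consumed = len(consumed)
```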

We proceed similarly, recursively verifying $\pi_{R2}$ proofs and combining the binary Merkle trees of created and consumed records at every other layer, until we're left with a single proof that verifies all transaction proofs in the set and exposes the root $R_c$ of the tree of created commitments, the root $R_d$ of the tree of consumed addresses, and the total number of consumed records.

Verifying Commitment Set updates

Since every commitment created in a block defaults to active, we can precompute the Merkle root of the balanced binary tree with $n$ leaves all set to $1$, where $n$ is the block size, and append it to the Merkleized representation of the Liveness Mask. It's easy to verify that this precomputed subtree root is included in the tree at the specified block height, but we also need to verify that consumed records are set to zero in the Liveness Mask.
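The precomputation is cheap because every node at a given height of an all-ones tree is identical, so a single hash per level suffices. A sketch (the leaf encoding and hash function are placeholders):

```python
import hashlib

def hash_pair(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def all_ones_root(depth: int) -> bytes:
    # All 2**depth leaves are 1, so each level has one distinct node.
    node = (1).to_bytes(32, "big")
    for _ in range(depth):
        node = hash_pair(node, node)
    return node

# For blocks of n = 2**16 records, this root is computed once with
# 16 hashes and appended to the Liveness Mask for every new block.
default_block_root = all_ones_root(16)
```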

[Figure: Liveness Mask Updates]

Operating outside the circuit, we can sort the $\{d_i\}$ and group them into buckets based on the first few bits of each address (the prefix). For each bucket, we strip this prefix from each address. Then in a proof $\pi_{S1}$, we check an inclusion proof for each $d_i$ against the root $R_d$, perform the updates to the Liveness Mask (changing a 1 to a 0 for each consumed address), and output the new bucket root. Note that these Merkle proofs are fairly inexpensive in our circuit model.[8]
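A sketch of the bucketing step, using the 32-bit addresses and 8-bit bucket prefixes from note 8 (the parameter names are illustrative):

```python
def bucket_by_prefix(consumed: list[int], addr_bits: int = 32,
                     prefix_bits: int = 8) -> dict[int, list[int]]:
    # Sort, then group addresses by their top prefix_bits bits and strip
    # the shared prefix, so each pi_S1 proof handles short in-bucket paths.
    shift = addr_bits - prefix_bits
    buckets: dict[int, list[int]] = {}
    for addr in sorted(consumed):
        buckets.setdefault(addr >> shift, []).append(addr & ((1 << shift) - 1))
    return buckets

print(bucket_by_prefix([0x01000005, 0x01000009, 0xFF000001]))
# {1: [5, 9], 255: [1]}
```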

In the next layer of $\pi_{S2}$ proofs, we recursively verify two $\pi_{S1}$ proofs. We can't check inclusion proofs at this layer, since the circuit field differs between alternating layers of the recursion, so we instead prepend the prefix corresponding to each bucket back onto its addresses and pass the resulting sets of addresses to the next proof.

[Figure: Commitment Log Updates]

We continue similarly until we generate a single proof $\pi_S$ showing that there are no duplicates in the set of consumed records, that no consumed record was previously consumed, and that the new root of the Merkleized Liveness Mask is the result of setting newly created records to active and clearing every consumed record. In this proof, we also verify that the final $R_c$ value is inserted at the correct block height in the Commitment Log.[9]
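For intuition, outside the circuit the state transition that $\pi_S$ certifies is simple; a toy model with the Liveness Mask as a flat bit list:

```python
def apply_block(mask: list[int], consumed: list[int], block_size: int) -> list[int]:
    # No duplicates among consumed records (no double-spends in the block)...
    assert len(set(consumed)) == len(consumed), "duplicate consumption"
    # ...append one active bit per newly created record...
    new_mask = mask + [1] * block_size
    # ...and clear the bit of every consumed record, checking it was active.
    for addr in consumed:
        assert new_mask[addr] == 1, "record was already consumed"
        new_mask[addr] = 0
    return new_mask
```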

ZK-Sharding

Using recursive proofs to verify transactions has a few benefits. First, it eliminates the need for validators to verify historical transactions when joining the network: they can simply download the Liveness Mask and the nodes in the top layers of the Commitment Log, and verify a proof. This is important because existing chains limit throughput so that new validators can catch up to the latest state of the chain.[10]

More importantly, recursive proofs allow us to increase throughput. Blockchains scale through parallelization: removing the requirement that every node verify every transaction, and instead allowing each node to verify a subset of transactions. The problem with this approach is that nodes now need to prove to the network that their set of validated transactions is actually valid. Current sharding designs provide a game-theoretic guarantee that a set of transactions is valid, but this has several downsides: a stronger set of cryptoeconomic assumptions, mechanism complexity, fraud proofs, and challenge periods.

[Figure: Optimistic Sharding]

Mir provides a stronger cryptographic guarantee of validity: validators verify a subset of transactions and generate a recursive proof showing that every transaction in the set is valid. Validators still pay the compute cost of generating proofs, but this is negligible relative to the transaction fees and block rewards the network already pays. In return, Mir can increase throughput by simply adding more nodes, while providing the same cryptographic guarantee of security.

[Figure: ZK Sharding]

In other scalable blockchain designs, such as Ethereum 2.0, the sharding model leaks into the developer experience via mechanisms like contract yanking. Mir avoids this complexity by fully replicating record state (in a highly compact form). Additionally, developers on Mir never need to consider gas costs, traditionally one of the most time-consuming aspects of building a decentralized app.

Notes

1

Depending on context, "record" can refer to an application's state and logic (from the user or transaction creator's perspective) or to a commitment to its state and logic (from the validator's perspective).

2

The Commitment Log is an append-only log that stores every record ever created in a Merkle Mountain Range structure (but validators only need to store upper layers). The Liveness Mask represents whether records are active or inactive by storing a 1 or 0 in the position of the record's address in the Commitment Log. It can be thought of as a bitmask that's Merkleized in a binary tree form, where the $n$-th leaf is the $n$-th bit in the mask.

3

By block root we mean the root of the Commitment Log subtree that contains the record commitment involved in the transaction. We need these block roots to check that the correct record commitment is used in a transaction proof: within the transaction, we verify a proof of inclusion against the block root and record address.

4

We also verify the consensus rules in a proof which is verified recursively, but we'll leave that for a future post.

5

We treat mutating records as consuming and then creating a new commitment, with record address consistency enforced by the kernel.

6

$k\log n$, where $k$ is the proving time for a single recursive proof and $n$ is the number of proofs that are recursively composed.

7

This is to ensure that we're consistently hashing over the same field, specifically the scalar field of the curve that the transaction proof uses.

8

We use a custom gate to evaluate a full round of a width-4 Rescue permutation, and with our parameters we have 16 rounds per hash. So if our tree depth is 32 and our bucket depth is 8, each Merkle proof takes $16 \cdot (32 - 8) = 384$ gates. We can reduce this cost further by precomputing Merkle states for small subtrees.

9

Note that $\pi_S$ is not required for increasing throughput: we could simply generate the recursive transaction proofs, removing the requirement that every validator verify every transaction proof, and let validators update the Liveness Mask outside the circuit. $\pi_S$ provides an additional security guarantee, allowing validators to fast sync (to download the latest state without verifying historical transactions) without a loss of security.

10

For instance, on Ethereum, geth's benchmarked throughput is ~500 tps, but the actual throughput of the chain is ~30x lower, to allow validators to catch up.