Expanding Beyond Mainnet

I played around with this idea and created MerkleResolver.sol → Goerli Contract

The idea is that you create a shape using createShape(bytes32[]) -> bytes32, which expects an array of cells: hashes of record fields, like contenthash, addr(60), and text(avatar). You then assign a shape to a node using setShape(node, shape).

After that, everything works normally. Editing a record clears the merkle root. When you’re done editing records, you can call commit(node), which computes and stores the merkle root. You can only set records that exist on your shape.

MerkleResolver r = new MerkleResolver();

// build a 3-cell shape: contenthash, addr(60), text(avatar)
bytes32[] memory cells = new bytes32[](3);
cells[0] = r.CELL_CONTENTHASH();
cells[1] = r.cellForAddr(60);
cells[2] = r.cellForText("avatar");
bytes32 shape = r.createShape(cells);

bytes32 node = namehash("merkle.eth"); // pseudocode: namehash computed off-chain
r.setShape(node, shape);

// set one record per cell, then commit to produce the root
r.setText(node, "avatar", "https://...");
r.setAddr(node, 0x51050ec063d393217B436747617aD1C2285Aeeee);
r.setContenthash(node, hex"1234");
bytes32 root = r.commit(node);

If you have a shape for cells [A,B,C], you can upgrade your shape to [A,B,C,D] without changing the storage layout. Shapes can be shared between nodes. A “Standard” shape could contain the records typically shown.

The root is computed as a right-padded tree with nulls. If you set a shape with 6 cells, it will delete the values for cells [7, 8] and then only allow storing cells 1–6, by mapping from setText/setAddr/setContenthash → cell → index of cell:

          root
        /      \
       .        .
      / \      / \
     .   .    .   .
    / \ / \  / \ / \
    1 2 3 4  5 6 X X
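
For illustration, commit() could fold that padded leaf level bottom-up like this (a sketch of the idea, not the deployed code; computeRoot and its assumptions are mine):

pragma solidity ^0.8.0;

// Sketch: fold a right-padded leaf level into a merkle root.
// `leaves` holds keccak256(value) per cell, padded with bytes32(0) ("X")
// up to the next power of two.
function computeRoot(bytes32[] memory leaves) pure returns (bytes32) {
    uint256 n = leaves.length; // assumed to already be a power of two
    while (n > 1) {
        for (uint256 i = 0; i < n; i += 2) {
            leaves[i / 2] = keccak256(abi.encodePacked(leaves[i], leaves[i + 1]));
        }
        n /= 2;
    }
    return leaves[0];
}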

Instead of a manual commit(), there could be a boolean which, when true, auto-commits after a change is made. It still needs multicall(), a few other features like expandShape(), and some gas analysis.

This is cool, but I’m not sure I really understand the purpose. Wouldn’t proving the merkle root plus proving the branches in that subtree be at least as much overhead as just proving the storage slots directly?

I think you only need to prove the merkle root in L2Resolver (1 slot). The CCIP server can then provide the L2 record data unverified (as full data or branches, depending on the request), and the receiving contract can verify the root is valid and use that to verify the supplied data is valid.
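
Concretely, the receiving contract’s check could be an ordinary merkle branch verification against the proven root (a sketch; verifyRecord and its parameters are illustrative):

pragma solidity ^0.8.0;

// Sketch: check CCIP-supplied record data against a root already proven
// from the single L2Resolver slot. `proof` is the sibling path for the
// leaf and `index` its position in the padded tree.
function verifyRecord(
    bytes32 provenRoot,
    bytes memory value,
    uint256 index,
    bytes32[] memory proof
) pure returns (bool) {
    bytes32 node = keccak256(value); // leaf = hash of the record value
    for (uint256 i = 0; i < proof.length; i++) {
        node = index & 1 == 0
            ? keccak256(abi.encodePacked(node, proof[i]))  // we are the left child
            : keccak256(abi.encodePacked(proof[i], node)); // we are the right child
        index >>= 1;
    }
    return node == provenRoot;
}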

Right, but wouldn’t:

  • Merkle proof of the data you care about to the merkle root in the L2 slot
  • Merkle proof of the L2 slot to the contract’s storage root
  • Merkle proof of the contract’s storage root to the chain’s state root

All add up to more proof data than just doing the last two, even if that meant more values in the L2 contract’s storage?

I think it depends on the size of the data and the type of tree (radix).
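
To put rough, hypothetical numbers on it: a sibling in a binary keccak tree is 32 bytes, so proving one leaf in an 8-record tree adds 3 × 32 = 96 bytes beyond the root, whereas each additional Merkle-Patricia storage proof is several trie nodes of up to ~500 bytes each, typically a kilobyte or more. Under those assumptions the single-root design loses only in the one-record case, where a direct slot proof avoids the branch entirely.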


Ignore the merkle example and just assume a storage contract that writes hash-prefixed byte arrays:

set(bytes32 key, bytes value) stores keccak256(value) + value

1 storage proof + N slots of data + hash check is more efficient than N storage proofs.
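
For illustration, such a store could be as small as this (a sketch, not an existing deployment):

pragma solidity ^0.8.0;

// Sketch of a hash-prefixed store. Because the hash is the first 32 bytes
// of the stored array, it occupies the first data slot, which is the only
// slot a verifier must prove; the rest of the value can be supplied as-is.
contract HashPrefixedStore {
    mapping(bytes32 => bytes) private _data;

    function set(bytes32 key, bytes calldata value) external {
        _data[key] = abi.encodePacked(keccak256(value), value);
    }

    function get(bytes32 key) external view returns (bytes memory) {
        return _data[key];
    }
}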


And this brings me to my question:

I would classify the ENS EVMGateway as open (because the gateway itself can be substituted with another) and general (because it plucks slots from L2 without any additional assumptions).

There also exist open gateways that only operate with specific L2 contracts. Gateways of this type rely on L2 contract functionality to extend their chain of trust. In my example above, each set()-stored value would only need its hash proven, and the actual data could be supplied unmodified. The L1 verifier would verify the proofs of the hashes, and then verify the hash of the data. Since this requires that the hash written to L2 correspond to the hash of the data, only contracts that obey this mechanic would be supported.
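
The per-value check on L1 then reduces to a single hash comparison (again just a sketch):

pragma solidity ^0.8.0;

// Sketch: once `provenHash` has been verified against the L2 store's
// storage root, accept the unproven value only if it matches that hash.
function checkSuppliedValue(bytes32 provenHash, bytes memory value) pure returns (bool) {
    return keccak256(value) == provenHash;
}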

Instead of a set of trusted signers, they’d have a set of trusted storage contracts.

Would gateways of this type be acceptable? I think in essence, it’s the difference between:

  • general — (L1 verifier) + any L2 storage
  • specific — (L1 verifier + L2 storage)

From the POV of a user, I feel like these are actually equivalent, since you need to trust that both the L1 and L2 contracts are functioning correctly. For example, an L2 contract that allows anyone to write anywhere can be read using an open gateway, but the chain of trust is broken simply because the L2 implementation is flawed.

The advantage of a general gateway is that you can apply it to existing L2 contracts.

The disadvantage of a specific gateway is that it only operates on a trusted set of contracts, but it enables a huge number of optimizations, because you can use the EVM on both sides to enforce logic.

That’s true, but given we’re talking O(1) overhead vs O(log n) overhead, and verifications are done in a read context, is it worth the extra complexity?