Expanding Beyond Mainnet

Right, but you get exactly the same level of security from a centralized database storing those signatures. If you’re not going to verify the blockchain’s state proof, you gain no benefit from using a blockchain, and it isn’t trustless.

No, they can’t - because they can only produce valid proofs of L2 state; any invalid proof will be rejected by the contract that verifies the response.

No, it’s strictly more secure, because unlike an RPC endpoint, a gateway cannot lie.

My claim is that you must own a .eth or a web2 domain name to have a 2LD on an L2. 3LDs and deeper are different, but ENS has no control over subdomains.

We use the EVM-gateway to trustlessly forward any namespace from L1 to an L2 where everything can be proved (it must be on-chain but is content-agnostic), and then make that work with the IMulticallable interface.

ENS can create the L2 registry at the standard address, set up a reverse registrar, public resolver, and associated nodes. Account holders can claim their reverse record and then build out any hierarchy they want, reusing ENS infrastructure with L2 gas fees.

The L1 just needs to know the rewriter, which transforms ___raffy.eth into __51050ec063d393217b436747617ad1c2285aeeee.addr.reverse; for example, the OwnedRewriter contract (which inserts the owner's hex address). The gateway resolver contract verifies a storage proof for a Multicallable bundle.
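A minimal sketch of what such a rewriter interface might look like (IRewriter and rewrite() are illustrative names, not existing contracts):

interface IRewriter {
    // transform a dns-encoded source name into its dns-encoded target,
    // e.g. ___raffy.eth → __<owner-hex>.addr.reverse for an OwnedRewriter
    function rewrite(bytes calldata name) external view returns (bytes memory);
}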

The name 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse can be claimed on an L2 w/o additional permissions or changes to ENS. From there, an entire L2 ecosystem can be built when combined with the PublicResolver. And the whole thing is immutable and trustless.

An underscored UUID registrar could mint base32-encoded names (z34ad._id) for the same purpose, except these basenames could be transferred, whereas the OwnedRewriter inherits trust from the L1 (since the owners match).

I made a helper contract offering the IExtendedResolver.resolve(name, request) interface, which does the following (sketched below):

  • If the resolver is a wildcard, do a wrapped resolve().
  • Otherwise it’s on-chain:
    • If request is an IMulticallable.multicall() wrapper, decode it and call those functions directly on the resolver, then return the encoded results.
    • Otherwise, call that request directly on the resolver.
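A rough Solidity sketch of that dispatch (illustrative only: findResolver() is an assumed registry walk, and OffchainLookup revert handling for the wildcard path is omitted):

function resolve(bytes calldata name, bytes calldata request) external view returns (bytes memory) {
    // find the nearest resolver for the dns-encoded name
    (address resolver, bool wildcard) = findResolver(name);
    if (wildcard) {
        // one CCIP-Read round trip; the server may answer a multicall() bundle
        return IExtendedResolver(resolver).resolve(name, request);
    }
    if (bytes4(request[:4]) == IMulticallable.multicall.selector) {
        // unwrap the bundle and call each record function on-chain
        bytes[] memory calls = abi.decode(request[4:], (bytes[]));
        bytes[] memory answers = new bytes[](calls.length);
        for (uint256 i = 0; i < calls.length; i++) {
            (, answers[i]) = resolver.staticcall(calls[i]);
        }
        return abi.encode(answers);
    }
    // single record: forward the request as-is
    (, bytes memory answer) = resolver.staticcall(request);
    return answer;
}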

This makes it so that, for any name that's dns-encodable (under 256 bytes per label; 99.98% of known names), you can call it like an IExtendedResolver and give it any request, including multicall(), and it only makes 1 CCIP request.

I made a simple demo that resolves both "adraffy.eth" (onchain, PublicResolver) and "raffy.gmcafe.eth" (wildcard, that supports multicall()) via the same call to the helper contract.

For this to work, servers must implement the multicall() interface (which is really simple, since you just call your own resolver recursively).

I think if the recursive CCIP limit were higher (ethers uses MAX_CCIP_REDIR = 10, but that code could be unrolled), then in the case where the server doesn't implement multicall(), you could encode an iterator into extraData, but I haven't tested this yet.

While implementing this, I noticed that my own CCIP implementation was too strict, as I was only allowing requests originating from my own resolver contract. To support recursive CCIP, you need to accept requests from any contract.

Nick, can you clarify what was meant by the EIP-3668 text: "Unsuccessful requests MUST return the appropriate HTTP status code - for example, 404 if the sender address is not supported by this gateway"? I originally interpreted this to mean sender = my contract, but I feel like the sender should only be used to block malicious senders (otherwise recursive CCIP can't work).

Fortunately, even if recursive calls aren’t supported due to restricted sender logic, the client can always call the wildcard resolver directly. The helper contract just makes this more universal.

Creating an EVM gateway contract that can do similar decoding and prove the entire request, including a bounce through an L2 registry, is the next step.


I think we’re mixing up multiple types:

a) web2 gateway+db with gateway signer
b) web2 gateway+db or L2 with manager’s signature
c) web2 gateway with L2 state/storage proof
d) L2 RPC with manager’s signature

(a) & (b) are similar, but (a) depends on the gateway signer not lying and signing/returning bad data.
(d) depends on a CAIP-22-like format, and currently there’s no way to access L2 RPC endpoints directly.
(c) can’t use CAIP-22-like formats, since it’s a web2 gateway reading L2 state/storage proofs from L2 RPC endpoints on the backend. Maybe some extra work on the client side could unlock that too?

To extend (b), we’re requesting that CCIP gateway URLs support ipfs://, ipns://, and bzz:// protocols, so CCIP can access web3 storage providers from the client side without any web2/CCIP gateway in the middle.


A state/storage proof is the best option, and I guess ENS will run official gateways to multiple L2s. Still, there are 2 minor benefits to supporting web3 data/CAIP-22-like formats: we don’t have to run a web2 gateway in the middle, and we get extra privacy by not requesting data from a random web2 gateway used by different domains.


As it’ll be backwards compatible, both can still be used as fallback mechanisms.

@raffy I think you changed addr() on the normalization/resolver page recently? And I see addr(60) on your example page too.

There are two: addr(node) returns (address) and addr(node, 60) returns (bytes) for the ETH address.
As addr() is from the original resolver, our wildcard/offchain data is using the addr() format, but our data is stored in the addr(60) space as an address instead of bytes(address).


So I was wondering: should we skip using the addr()/address form and always use the addr(60)/bytes format?
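For reference, the two profiles as defined in the ENS resolver interfaces (the second is the ENSIP-9 multi-coin form, where coinType 60 is ETH):

interface IAddrResolver {
    // legacy profile: returns the ETH address directly
    function addr(bytes32 node) external view returns (address payable);
}

interface IAddressResolver {
    // multi-coin profile: returns the address as bytes
    // (for coinType 60, the 20-byte ETH address)
    function addr(bytes32 node, uint256 coinType) external view returns (bytes memory);
}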

What name is this? Ah freetib.eth, I’ll look into this.

Your addr(60) isn’t encoded correctly, and I wasn’t doing normal addr() during wildcard. Fixed.

Over the last week I’ve had a play with this and published a pull request (in the context of L1 multi-target resolution for now) here: Initial implementation of multi-target CCIP read by clowestab · Pull Request #28 · ensdomains/evmgateway · GitHub

I’d be appreciative of feedback.



I’m rereading everything trying to get context/understanding of the ideas being put forward here.

This reads as being unnecessarily complex. The evmgateway code as-is caters for this need anyway, in that you can query multiple pieces of data from a resolver; you just need to know the base storage slot of each value you want. Once you’ve discerned the resolver address and chain, this can be done on any EVM-compatible chain.


What do you mean? 0x123 reverse-resolves on an L2 to sub.raffy.eth. If the user has the private key for 0x123 (so as to have set the reverse record on that L2), do we care about L1 in this case?

Yup, I see this.

What do you mean? You do discern (and can keep track of) the node that is resolving a subname.


I’ve re-read this multiple times and I’m getting lost in its complexity, when in reality I don’t think it is actually that complex.

It really depends what is going on behind the scenes - you can have 1 CCIP fetch request but the gateway could be sending off multiple RPC calls behind the scenes. Or the gateway could use multicall behind the scenes, or the gateway provider could implement their own middleware contract that batches these data requests and returns them as packed data (I’ve done this before with some of my tooling).

Oh I see, you’re basically implying it should be provided by ENS natively? I’m not convinced - there are a lot of pros to keeping the core contracts as simple as possible, and middleware providers can simply build on top of ENS. If you really wanted to, you could do this directly in a hacky manner through the resolve method of ENSIP-10.

I’m in this camp. A well reputed service provider storing signed data in a centralised database offers the same security guarantees. The core concern would be that provider not responding (or disappearing completely) so the key is the well reputed bit. An L2 could also disappear.

Again, I’m confused. This doesn’t seem to follow the ENSIP-10 spec?

Clients are meant to recursively iterate through the name tree until they find a resolver that implements resolve(), at which point they call it. This seems to be a resolver that finds a resolver in its resolve()?

This might be more performant than a client (JS, for example) discerning the resolver, but I’d imagine in reality, unless we are talking 10 levels of nested subdomains, the performance difference would be negligible. But yeah, crucially, this isn’t an implementation of the specification?


More generally, whilst this thread is super informative (and definitely needed), there seem to be a lot of different tangents being discussed and it is not particularly easy to follow (IMO). I’m also not sure that a level of agreement has been reached whereby a summary of current thoughts could easily be collated :grimacing:


Sorry this thread is all over the place. You can ignore the original idea of switching chains.

This looks good to me. For the use case above, I want to read the resolver address from L2 registry, switch to that target, and read multiple records.

The existing IMulticallable interface is sufficient—I didn’t know it existed. I think wildcard servers should be encouraged to handle this request.

My thinking is that this resolve() function should also process IMulticallable.multicall.selector.

To benefit from multicall() (a single fetch), ENS clients need to be issuing requests in this style.
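For example, such a bundled request might be built like this (a sketch; node, resolver, and dnsEncodedName are assumed to be in scope, and the signatures follow the standard resolver profiles):

bytes[] memory calls = new bytes[](2);
calls[0] = abi.encodeWithSignature("addr(bytes32,uint256)", node, uint256(60));
calls[1] = abi.encodeWithSignature("text(bytes32,string)", node, "url");
// one ENSIP-10 resolve() carrying a multicall() bundle = one CCIP request
bytes memory answer = IExtendedResolver(resolver).resolve(
    dnsEncodedName,
    abi.encodeWithSelector(IMulticallable.multicall.selector, calls)
);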

There are (5?) possible schemes for reading records of a name:
(m = registry reads, n = resolver reads)

  1. O(2m+2n) — ENSIP-10: resolve() per record
    [resolver() + supports(ensip10)] x m + [req + reply] x n
  2. O(2m+2) — ENSIP-10: resolve(multicall) all records
  3. O(2+n) — on-chain: eth_call per record
  4. O(4) — on-chain: supports(multicall) → call multicall() (internal multicall)
    resolver() + supports(ensip10) + supports(multicall) + multicall()
  5. O(3) — on-chain: use helper Multicall contract (external multicall)
    resolver() + supports(ensip10) + multicall()

My example above was just an experiment to provide a common interface that is O(1-2) for all types. It’s not a resolver itself per se. It does implement ENSIP-10. It lets you multicall an on-chain resolver from 2017 using the IExtendedResolver interface.


I think CCIP fetches need to be bundled for complex dapps. For example, a social site will want to query multiple records per user per page load.

For querying “all your records”, like viewing a user profile, I suggested doing a single multicall() with common records plus a record that indicates “here are my rare records”, which collapses many requests to 1-2.

This is cool! Would you consider contributing it to the universal resolver?

This is a really obvious thing to do, and I don’t know why I didn’t think of it.

Would you be happy to open a PR against GitHub - smartcontractkit/ccip-read to add this to the base gateway implementation?

That’s correct. Though I wonder if the correct solution here would be to implement multicall on the resolver too, having it just call multicall on the server?

Yes, I understand what you’re asking for. What you don’t seem to understand is that if you just call a public RPC provider, and don’t verify its responses in any way, it’s exactly equivalent to just storing those responses in a centralized database, from a security POV. The only thing you gain is the convenience of not having to host your own server, and instead relying on using an L2 as your database.

Gateway authors and contract authors can choose their security model for CCIP read; they can do anything from just trusting the gateway response at one extreme, to verifying everything against the L2’s state root onchain at the other extreme. With the scheme you are proposing, however, the latter becomes effectively impossible; the upper limit of security that can be achieved with it is simply trusting the public RPC node the client is using.

I honestly don’t understand why you think this is a beneficial addition to the protocol.

Someone has to run a public endpoint in either case; whether it’s a JSON-RPC server for the L2, or a CCIP-Read gateway is immaterial. It’s certainly not worth throwing away all security guarantees for the convenience of not hosting a server somewhere.

There are no privacy benefits here either, because the JSON-RPC server the client uses can also log their requests.

I’m talking here about the case where a user wants to start on L2, and do forward resolution of a name. The L2 would provide a wildcard resolver that fetches state proofs from L1 - but what if the user is using a custom resolver implementation on L1?

2 Likes

Quick input:

I think there is a clear difference in the trust assumptions of Nick and C0de. C0de trusts public gateways more than the ability of end users to verify CCIP gateways of their service providers; Nick is diagonally opposite on the trust spectrum. Both have valid points.

if you just call a public RPC provider, and don’t verify its responses in any way

@nick.eth Verification can be built cheaply for some storage types and their public endpoints. For example, if I query ipfs/Qmbla from any public IPFS gateway, I can verify the content’s legitimacy by calculating CID(content) = Qmbla. In that sense, CCIP-Read providers can blindly query any public IPFS gateway and relatively easily verify the content’s validity before resolving the result of a direct RPC call. This can be done for any immutable storage (= unique CID) with relatively cheap verification method(s). This came to mind while reading this conversation, but I haven’t thought about it too deeply.
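A toy Solidity sketch of that check, assuming an unchunked blob whose CIDv0 multihash embeds a plain sha2-256 digest (the chunking caveat comes up just below):

// digest = the 32-byte sha2-256 digest extracted from the CIDv0
// multihash (0x12 0x20 || digest); content = bytes fetched from any gateway
function verifyCid(bytes32 digest, bytes memory content) internal pure returns (bool) {
    return sha256(content) == digest;
}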


PR. Although, this is multicall() not resolve(name,multicall()).

Sure.

I like that, except it only works where you don’t need the preimage, unless you multicall a bunch of resolve() calls, which I guess also works.

  • resolve(name, multicall([addr(60), text(url)]))
    Advantage: only sends name once; ENSIP-10 compatible
    vs
  • multicall([resolve(name, addr(60)), resolve(name, text(url))])
    Advantage: if you bucket per basename, you could multicall with multiple names, like a.[cb.id] and b.[cb.id].

I was under the impression that content hashes aren’t necessarily integrity hashes due to chunking. Although, maybe that can be ignored for data below the minimum chunk size?

The other issue is that integrity != latest, so like signatures, you need an additional channel to indicate you’re seeing the newest version.

An improvement might be: after you’ve made updates to an L2 resolver, you commit a merkle root of all your name’s records. Then the gateway just needs to provide 1 storage proof for the root (1 slot), the raw record data, and the links in the tree to locally prove it.

Correct, we are looking at tiny data due to limitations of CCIP-Read itself. Pretty sure it is impossible to pass more than a few KB of payload between resolve() and callback(). All chunking issues can be ignored; it is essentially plain hashing at that amount of data.

That’s correct. IPFS CIDs need to be wrapped in a namespace (say IPNS), which then requires some indexer to break degeneracy among versions. This is however a very simple task for an L2 EVM-Gateway Indexer, which stores nothing more than the latest version for an IPNS key in an extremely rudimentary contract. This L2 EVM-Gateway Indexer + a public RPC is a tightly secure CCIP-Read infrastructure without a dedicated CCIP gateway. We think it is worth considering; in fact, an ENSIP standard for “No-Gateway CCIP-Read for Immutable & Calculable NameSpaced Verifiable Storages” would be nice. Arweave + ArNS falls neatly into the same category, and maybe Swarm as well, bar some caveats. We can derive more value from the implicit trust in the public gateways, and it may even push them to be more security-oriented; that’s not our concern but a good side-effect.
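A minimal sketch of how rudimentary that indexer contract could be (IpnsIndexer and publish() are illustrative names; publisher authentication is omitted):

contract IpnsIndexer {
    // hash of the IPNS key → latest published CID
    mapping(bytes32 => bytes) public latest;

    function publish(bytes32 key, bytes calldata cid) external {
        // a real version would verify the caller controls this IPNS key
        latest[key] = cid;
    }
}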


I wrote my comment before reading this part, and we have ended up saying the same thing. We briefly entertained precisely this (= index on L2 + store off-chain) for NameSys v1 but gave up in the interest of simplicity at that point; it is still on our to-do list for NameSys v2, a little further down the line.


Is there anything wrong with this diagram?

Users do not have to verify CCIP gateways - the gateways are designed such that they cannot return inaccurate results. Any inaccurate results will be detected and rejected by the client.

I played around with this idea and created MerkleResolver.sol → Goerli Contract

The idea is that you create a shape using createShape(bytes32[]) -> bytes32, which expects an array of cells, which are hashes of record fields like contenthash, addr(60), and text(avatar). You then assign a shape to a node using setShape(node, shape).

After that, everything works normally. When you edit a record, it clears the merkle root. When you’re done editing records, you can commit(node) and it will compute and store the merkle root. You can only set records that exist on your shape.

MerkleResolver r = new MerkleResolver();

// define a shape: the ordered set of record fields (cells) for this node
bytes32[] memory cells = new bytes32[](3);
cells[0] = r.CELL_CONTENTHASH();
cells[1] = r.cellForAddr(60);
cells[2] = r.cellForText("avatar");
bytes32 shape = r.createShape(cells);

// assign the shape, set records, then commit the merkle root
bytes32 node = namehash("merkle.eth"); // namehash() computed off-chain
r.setShape(node, shape);
r.setText(node, "avatar", "https://...");
r.setAddr(node, 0x51050ec063d393217B436747617aD1C2285Aeeee);
r.setContenthash(node, hex"1234");
bytes32 root = r.commit(node);

If you have a shape for cells [A,B,C], you can upgrade your shape to [A,B,C,D,…] without changing the storage layout. Shapes can be shared between nodes. A “Standard” shape could contain the records typically shown.

The root is computed as a right-padded tree with nulls. If you set a shape with 6 cells, it will delete the values for [7, 8] and then only allow storing 1-6, by mapping setText/setAddr/setContenthash → cell → index of cell:

 / \    / \    / \    / \
1   2  3   4  5   6  X   X
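A sketch of how that right-padded root could be computed (illustrative, not the deployed MerkleResolver code):

function computeRoot(bytes32[] memory cells) internal pure returns (bytes32) {
    uint256 n = 1;
    while (n < cells.length) n <<= 1; // round up to a power of two
    bytes32[] memory layer = new bytes32[](n); // trailing leaves stay null (0)
    for (uint256 i = 0; i < cells.length; i++) layer[i] = cells[i];
    while (n > 1) {
        n >>= 1;
        for (uint256 i = 0; i < n; i++) {
            layer[i] = keccak256(abi.encodePacked(layer[2 * i], layer[2 * i + 1]));
        }
    }
    return layer[0];
}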

Instead of manual commit(), there could be a boolean, which when true, auto-commits after a change is made. It needs multicall(), a few other features like expandShape(), and some gas analysis.

This is cool, but I’m not sure I really understand the purpose. Wouldn’t proving the merkle root plus proving the branches in that subtree be at least as much overhead as just proving the storage slots directly?

I think you only need to prove the merkle root in L2Resolver (1 slot) and then the CCIP server can provide the L2 record data unverified (as full data or branches depending on the request) and the receiving contract can verify the root is valid and use that to verify the supplied data is valid.

Right, but wouldn’t:

  • Merkle proof of the data you care about to the merkle root in the L2 slot
  • Merkle proof of the L2 slot to the contract’s storage root
  • Merkle proof of the contract’s storage root to the chain’s state root

All add up to more proof data than just doing the last two, even if that meant more values in the L2 contract’s storage?

I think it depends on the size of the data and the type of tree (radix).


Ignore the merkle example and just assume a storage contract that writes hash-prefixed byte arrays:

set(bytes32 key, bytes value) stores keccak256(value) + value

1 storage proof + N slots of data + hash check is more efficient than N storage proofs.
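A minimal sketch of such a contract (HashPrefixedStore is an illustrative name):

contract HashPrefixedStore {
    // each value is stored as keccak256(value) || value, so a verifier only
    // needs a storage proof covering the hash slot, then checks the raw
    // data supplied by the gateway against that hash
    mapping(bytes32 => bytes) public values;

    function set(bytes32 key, bytes calldata value) external {
        values[key] = abi.encodePacked(keccak256(value), value);
    }
}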


And this brings me to my question:

I would classify the ENS EVMGateway as open (because the gateway itself can be substituted with another) and general (because it plucks slots from L2 without any additional assumptions).

There also exist open gateways that only operate with specific L2 contracts. Gateways of this type rely on L2 contract functionality to extend their chain of trust. In my example above, each set()-stored value would only need its hash proven, and the actual data could be supplied unmodified. The L1 verifier would verify the proofs of the hashes, and then verify the hash of the data. Since this requires that the hash written to L2 corresponds to the hash of the data, only contracts that obey this mechanic would be supported.

Instead of a set of trusted signers, they’d have a set of trusted storage contracts.

Would gateways of this type be acceptable? I think in essence, it’s the difference between:

  • general — (L1 verifier) + any L2 storage
  • specific — (L1 verifier + L2 storage)

From the POV of a user, I feel like these are actually equivalent, since you need to trust both the L1 and L2 contracts are functioning correctly. For example, an L2 contract that allows anyone to write anywhere can be read using an open gateway but the chain of trust is broken simply because the L2 implementation is flawed.

The advantage of a general gateway is that you can apply it to existing L2 contracts.

The disadvantage of a specific gateway is that it only operates on a trusted set of contracts, but it enables a huge amount of optimizations because you can use the EVM on both sides to enforce logic.

That’s true, but given we’re talking O(1) overhead vs O(log n) overhead, and verifications are done in a read context, is it worth the extra complexity?