Expanding Beyond Mainnet

Okay, I think I get it now. Here is my new take using the EVM gateway instead of L2 RPCs. This is a more advanced version of L1Resolver (from EVM Gateway) in which I make 2 requests (to support Registry + Resolver); however, the gateway code could be modified to support multiple targets in one request.

My original motivation: I wanted an official registry on L2, not just metadata. I also wanted support for complex resolver contracts (like Wildcard or CCIP).


ENS resolution always starts from L1.

L2 writes would still involve switching to the corresponding chain.

Wildcard or OffchainLookup resolvers can only be deployed on L1.

Only one contract to deploy, on L1.
L2OnChainResolver enables wildcards through rewriting plus an L2 registry, which handles most cases.

Deploy a Registry on each supported L2.

Deploy a ReverseRegistrar.
(optional) Deploy a {uuid}._id-style UUIDRegistrar.

Deploy an L2OnChainResolver on L1.

If there’s a node in the L2 registry and the metadata is in an L2 resolver, you can set the L1 resolver to the singleton L2OnChainResolver and call setTarget(node, chain, rewriter).

Since ENSIP-10 doesn’t supply the node that owns the wildcard contract, the contract also needs to find the base node by popping labels from the name until the registry gives a result.

// similar to UniversalResolver.findResolver()
// but loop until resolver == address(this) and return the offset instead
uint256 pos = findBaseOffset(name);
bytes memory basename = name.slice(pos, name.length);
bytes memory prefix = name.slice(0, pos);
bytes32 basenode = namehashFromDNS(basename);
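The label-popping loop is easy to check off-chain. Here is a minimal Python sketch of it; `has_entry` is a hypothetical stand-in for the registry lookup (a real implementation would compare resolver addresses, as the comment above notes):

```python
def find_base_offset(name: bytes, has_entry) -> int:
    """Walk a DNS-encoded name label by label until the registry
    recognizes the remaining suffix; return its byte offset."""
    pos = 0
    while name[pos] != 0:                  # 0x00 terminates the name
        if has_entry(name[pos:]):
            return pos
        pos += 1 + name[pos]               # skip length byte + label
    raise LookupError("no base node found")

# "sub.raffy.eth" in DNS encoding; the registry only knows "raffy.eth"
name = b"\x03sub\x05raffy\x03eth\x00"
has_entry = lambda suffix: suffix == b"\x05raffy\x03eth\x00"
print(find_base_offset(name, has_entry))   # 4 (skipped "\x03sub")
```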

Use the basenode to query the target:

uint64 chain = ENS.ttl(basenode); // cheap hack
// or, from the resolver's own storage:
(uint64 chain, address rewriter) = L2OnChainResolver.getTarget(basenode);

rewriter is a contract address. If it isn’t address(0) then rewrite the name:

name = INameRewriter(rewriter).rewrite(name, pos);

interface INameRewriter { 
    // both names are dns-encoded
    // pos is byte-offset of basename
    function rewrite(bytes memory oldName, uint256 pos) external view returns (bytes memory newName);
}

This allows *[raffy.eth] to become *[51050ec063d393217b436747617ad1c2285aeeee.addr.reverse] before being resolved.

setTarget(namehash("raffy.eth"), 10, OwnedRewriter)

The following could be a helper contract to rewrite L1 names to owned L2 names:

contract OwnedRewriter is INameRewriter {
    function rewrite(bytes memory oldName, uint256 pos) external view returns (bytes memory) {
        // compute the base node
        bytes32 basenode = namehashFromDNS(oldName.slice(pos, oldName.length));
        // find the owner
        address owner = ENS.owner(basenode);
        // replace *[base] with *[addr].addr.reverse (with trailing terminator)
        return bytes.concat(oldName.slice(0, pos), dnsEncodedAddr(owner), "\x04addr\x07reverse\x00");
    }
}
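The byte manipulation in rewrite() can be sanity-checked off-chain. This Python sketch mirrors the concat above (`rewrite_owned` is a hypothetical name; the hex label length and the trailing terminator are the parts worth getting right):

```python
def rewrite_owned(old_name: bytes, pos: int, owner: bytes) -> bytes:
    """Replace the basename starting at byte `pos` of a DNS-encoded name
    with <hex(owner)>.addr.reverse."""
    hex_label = owner.hex().encode()       # 40 ascii chars for a 20-byte address
    return (old_name[:pos]
            + bytes([len(hex_label)]) + hex_label
            + b"\x04addr\x07reverse\x00")

# "sub.raffy.eth" -> "sub.51050ec0....addr.reverse"
name = b"\x03sub\x05raffy\x03eth\x00"
owner = bytes.fromhex("51050ec063d393217b436747617ad1c2285aeeee")
print(rewrite_owned(name, 4, owner))
```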

Example: addr("sub.raffy.eth", 60)

  1. ENS.resolver(namehash("sub.raffy.eth")) → null
  2. ENS.resolver(namehash("raffy.eth")) → L2OnChainResolver
  3. L2OnChainResolver.resolve(<sub.raffy.eth>, addr(..., 60))
    1. pos = findBaseOffset(name);
      • pos = 4; // only removed "\x03sub"
    2. basenode = namehashFromDNS(name.slice(pos, name.length)); // "raffy.eth"
    3. (chain, rewriter) = getTarget(basenode);
      • chain = 10; // optimism
      • rewriter = ...; // eg. OwnedRewriter
    4. name = INameRewriter(rewriter).rewrite(name, pos);
      • name = "sub.51050ec063d393217b436747617ad1c2285aeeee.addr.reverse";
    5. node = namehashFromDNS(name);
    6. verifier = getVerifier(chain); // optimism verifier contract
    7. EVMFetcher(verifier, registry)...fetch(<regCallback>, encode(calldata0, verifier, node))
    8. regCallback(values[], calldata)
    9. Decode: calldata → (calldata0, verifier, node)
    10. Decode: values[] → resolver = address(values[0])
    11. EVMFetcher(verifier, resolver)...fetch(<resCallback>, calldata)
    12. resCallback(values[], calldata)
    13. Decode: calldata → (calldata0)
    14. Decode original calldata: calldata0 → addr(60)
    15. Do any transforms necessary.
    16. return values[1];

  • This lets an L1 name use L2 for metadata.
  • This lets you build out an L2 registry subtree and use L2 for metadata.
  • Can reuse existing L1 tooling (L2 Registry and L2 PublicResolver would work as-is)
  • Get native L2 resolution for free (when ENSIP-10 isn’t supported) — only for names without rewrites

I think that’s definitely something we should do. It will likely need a new API, though it may also be possible to add an instruction for “replace the contract address with this one”.

This seems reasonable.

Using clones-with-immutable-args might be more gas-efficient than using a singleton here.

Another related topic is discovery of unique text/addr records, both on-chain and over CCIP. And related to this is general discovery for CCIP or “remote ERC-165”.

I think wide adoption of the IMulticallable.sol interface for everything off-L1 is one improvement, but another is lists of my keys/coins/chains/etc.

IMulticallable lets you collapse a huge blast of common record-reads into a single request. This can already be done with Multicall contracts, but this interface is ENS-specific and a direct interaction. The default personal profile (name, avatar, banner, url, contenthash, major coins, etc.) can become one CCIP fetch. The request can also be decoded by a smart CCIP bridge contract that proves and returns the data from a CCIP L2 EVM gateway in one proof.

I implemented the IMulticallable tunnel in my resolver: when you query a supporting wildcard resolver, like my NFT project’s per-token names (moo331.gmcafe.eth), it does 22 CCIP calls in 1.

But while this technique lets you query many common things, it doesn’t let you discover/promote uncommon/new records.

I suggested "public-texts" as a new record which would be a comma-separated list of “rare” keys, but the idea could be extended to other field types. And the storage could be better/smarter/etc.

First, you multicall the commons plus the lists (keys, coins, chains, friends) in one request, and then follow up with a multicall for the (filterable) rare records if needed.

This isn’t automatically true - to fetch multiple records over CCIP-read using multicall, either the resolver needs to support this directly (eg, understand that a multicall should roll multiple requests into one fetch) or the multicall implementation needs to be using a gateway that batches underlying CCIP-read requests. Both require manual work to implement over and above standard multicall.

Correct, not automatically true, but ENS should encourage this style of access—always use resolve(multicall*) if records > 1—so middleware like trusted CCIP or EVM-gateway can be efficient.

This way, complex applications which query lots of data are feasible, as they aren’t making hundreds of requests per page load to show basic identity information

If the resolver is a wildcard and resolve(multicall*) fails, you can just query it normally. Or, if it’s not a wildcard, we could have an immutable helper contract that does an external multicall when the resolver itself doesn’t support IMulticallable. All of this code would live in ethers/viem/ensjs with an API like:

resolver = await getResolver("sub.raffy.eth");
resolver.node // namehash
resolver.dnsEncodedName // null if too long (can't wrap, can't ENSIP-10)
resolver.supportsWildcard // true if it has the ExtendedResolver interface
resolver.baseName // name of the wildcard resolver (or byte/label-offset)
resolver.supportsMulticall // [0 = untested, 1 = yes, 2 = no]

await resolver.fetch([["text", "name"], ["text", "description"], ["addr", 60], ["addr", 0], ["contenthash"]]);

This is essentially the same API I use in my resolver and it works great.

  • fetch_records() / get_resolver()
  • I think this is the same as findResolver() from UniversalResolver.sol. However, the multicall should work the opposite way: always use the resolve() interface, and have the helper contract decode, multicall, and repackage internally when there’s no support.

Registering *.eth from L2 is a hard problem to solve… it’s easy for sub.domain.eth with a CCIP resolver on L1 and domain.eth pointing to an L2 data resolver contract; people are already using that… I’d recommend not introducing any breaking changes to stuff that’s already working &, if possible, not adding extra complexity. ENS is already 3 levels deep with the name registry, erc721 & erc1155 wrappers.

As we’re verifying a manager/approved signature stored on L2/offchain, in worst-case scenarios BAD L2 RPCs/gateways can only send stale records… we could narrow that down with an extra ttl/validity timestamp, but that’ll require the manager/signer to update their records more frequently on L2/offchain.

In bad dapp/client scenarios they can resolve to anything, even for non-CCIP lookups… For the current web2 gateway with signers, any BAD/compromised gateway signer can easily sign and send anything… A domain manager’s approval sig is more trustless than a web2 gateway signer.

There’s the same old privacy risk with web2 CCIP gateways. L2 RPC endpoints are still web2, but they’re less risky than the random web2 gateways used by every domain.eth. Using a CAIP-22-like format could leverage L2/light clients if underlying dapps/wallet clients gain light-client support in the future.

“if clients are not following ENS specs they’re not ENS compliant…”, Paraphrasing @nick.eth’s reply from 2020 :smiley: back when I asked "what if bad scam clients skip normalization, or resolve their addr when I send $eth to “domain.eth” from their dapps/wallet.

For batch resolving I kinda preferred @serenae’s draft specs ([Draft] ENSIP-##: Off-chain Name Meta-Resolution) compared to the current ENSIP-16: Offchain metadata with the extra “graphqlUrl”…

sorry I’m bit laggin… back to work! :vulcan_salute:


Right, but you get exactly the same level of security from a centralized database storing those signatures. If you’re not going to verify the blockchain’s state proof, you gain no benefit from using a blockchain, and it isn’t trustless.

No, they can’t - because they can only produce valid proofs of L2 state; any invalid proof will be rejected by the contract that verifies the response.

No, it’s strictly more secure, because unlike an RPC endpoint, a gateway cannot lie.

My claim is that you must own a .eth or a web2 domain name to have a 2LD on an L2. 3+ is different but ENS has no control over subdomains.

We use the EVM-gateway to trustlessly forward any namespace from L1 to an L2, where everything can be proved (must be on-chain but content-agnostic), and then make that work with the IMulticallable interface.

ENS can create the L2 registry at the standard address, set up a reverse registrar, public resolver, and associated nodes. Account holders can claim their reverse record and then build out any hierarchy they want, reusing ENS infrastructure with L2 gas fees.

The L1 just needs to know the rewriter, which transforms *.raffy.eth into *.51050ec063d393217b436747617ad1c2285aeeee.addr.reverse (for example, the OwnedRewriter contract, which appends the owner’s hex address). The gateway resolver contract verifies a storage proof for a Multicallable bundle.

The name 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse can be claimed on an L2 w/o additional permissions or changes to ENS. From there, an entire L2 ecosystem can be built when combined with the PublicResolver. And the whole thing is immutable and trustless.

An underscored UUID registrar could mint base32-encoded names (z34ad._id) for the same purpose, except these basenames could be transferred, whereas the OwnedRewriter derives its trust from the L1 (since the owners match).

I made a helper contract that offers the IExtendedResolver.resolve(name, request) interface that does the following:

  • If the resolver is a wildcard, do a wrapped resolve().
  • Otherwise it’s on-chain:
    • If request is a IMulticallable.multicall() wrapper, decode it and call those functions directly on the resolver, then return the results encoded.
    • Otherwise, call that request directly on the resolver.

This makes it so that, for any name that’s dns-encodable (under 256 bytes per label, 99.98% of known names), you can call it like an IExtendedResolver and give it any request, including multicall(), and it only does 1 CCIP request.
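The helper's dispatch described above can be sketched in a few lines of Python. This is a toy model, not the actual contract; `ToyResolver` and `resolve_via_helper` are hypothetical names, and requests are modeled as tuples instead of ABI-encoded calldata:

```python
class ToyResolver:
    """Stand-in for an on-chain resolver that answers record reads directly."""
    supports_wildcard = False

    def __init__(self, records):
        self.records = records

    def call(self, request):
        # e.g. request = ("addr", 60) or ("text", "url")
        return self.records.get(request)

def resolve_via_helper(resolver, name: bytes, request):
    """Dispatch logic of the helper contract (sketch)."""
    if resolver.supports_wildcard:
        # wildcard resolvers get one wrapped resolve(); the server handles multicall
        return resolver.resolve(name, request)
    if request[0] == "multicall":
        # on-chain resolver: unbundle and call each sub-request directly
        return [resolver.call(sub) for sub in request[1]]
    return resolver.call(request)

r = ToyResolver({("addr", 60): "0x51050e...", ("text", "url"): "https://example"})
print(resolve_via_helper(r, b"\x05raffy\x03eth\x00",
                         ("multicall", [("addr", 60), ("text", "url")])))
```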

I made a simple demo that resolves both "adraffy.eth" (onchain, PublicResolver) and "raffy.gmcafe.eth" (wildcard, that supports multicall()) via the same call to the helper contract.

For this to work, servers must implement the multicall() interface (which is really simple since you just call your own resolver recursively.)

I think that if the recursive CCIP limit were higher (ethers uses MAX_CCIP_REDIR = 10, but that code could be unrolled), then in the case where the server doesn’t implement multicall(), you could encode an iterator into extraData; I haven’t tested this yet.

While implementing this, I noticed that my own CCIP implementation was too strict, as I was only allowing requests originating from my own resolver contract. To support recursive CCIP, you need to accept requests from any contract.

Nick, can you clarify what was meant by the EIP-3668 line “Unsuccessful requests MUST return the appropriate HTTP status code - for example, 404 if the sender address is not supported by this gateway”. I originally interpreted this to mean sender = my contract, but I feel like the sender should only be used to block malicious senders (otherwise recursion can’t work).

Fortunately, even if recursive calls aren’t supported due to restricted sender logic, the client can always call the wildcard resolver directly. The helper contract just makes this more universal.

Creating an EVM gateway contract that can do similar decoding and prove the entire request, including a bounce through a L2 registry, is the next step.


I think we’re mixing up multiple types…
a) web2 gateway+db with gateway signer
b) web2 gateway+db or L2 with manager’s signature
c) web2 gateway with L2 state/storage proof
d) L2 RPC with manager’s signature

(a) & (b) are similar, but (a) depends on the gateway signer not to lie/sign & return bad data.
(d) depends on a CAIP-22-like format, & currently there’s no way for clients to access L2 RPC endpoints directly.
(c) can’t use CAIP-22-like formats, as it’s a web2 gateway reading L2 state/storage proofs from L2 RPC endpoints on the backend. Maybe some extra work on the client side could unlock that too?

To extend (b), we’re requesting that CCIP gateway urls support ipfs://, ipns://, bzz://… protocols so CCIP can access web3 storage providers from the client side without any web2/ccip gateway in the middle…

State/storage proof is the best option, and I guess ENS will run official gateways to multiple L2s… still, there are 2 minor benefits to supporting web3 data/CAIP-22-like formats: we don’t have to run a web2 gateway in the middle, && we get extra privacy by not requesting data from the random web2 gateways used by different domains…

As it’ll be backwards compatible, both can still be used as fallback mechanisms.

@raffy I think you changed addr() in normalization/resolver page recently? & I see addr(60) in your example page too.

There are two: addr(node) returns (address), & addr(node, 60) returns (bytes) for the ETH address… As addr() is from the original resolver, our wildcard/offchain data uses the addr() format, but our data is stored in the addr(60) space as address instead of bytes(address)…

So I was wondering if we should skip using addr()/address and always use addr(60)/bytes format?

What name is this? Ah freetib.eth, I’ll look into this.

Your addr(60) isn’t encoded correctly, and I wasn’t doing normal addr() during wildcard. Fixed.

Over the last week I’ve had a play with this and published a pull request for this (in the context of L1 multi target resolution for now) here: Initial implementation of multi-target CCIP read by clowestab · Pull Request #28 · ensdomains/evmgateway · GitHub

I’d be appreciative of feedback…


I’m rereading everything trying to get context/understanding of the ideas being put forward here.

This reads as being unnecessarily complex. The evmgateway code as-is caters for this need anyway, in that you can query multiple pieces of data from a resolver; you just need to know the base storage slot of each value you want. Once you’ve discerned the resolver address and chain, this can be done on any EVM-compatible chain…

What do you mean? 0x123 reverse-resolves on an L2 to sub.raffy.eth. If the user has the private key for 0x123 (so as to have set the reverse resolver on that L2), do we care about L1 in this case?

Yup, I see this.

?? What do you mean. You do discern (and can keep track of) the node that is resolving a subname…

I’ve re-read this multiple times and I’m getting lost in its complexity when in reality I don’t think it is actually that complex

It really depends what is going on behind the scenes - you can have 1 CCIP fetch request but the gateway could be sending off multiple RPC calls behind the scenes. OR the gateway could use multicall behind the scenes, OR the gateway provider could implement their own middleware contract that batches these data requests and returns them as packed data (I’ve done this before with some of my tooling).

Oh I see, you’re basically implying it should be provided by ENS natively? I’m not convinced - there’s a lot of pros to keeping the core contracts as simple as possible, and middleware providers can simply build on top of ENS. If you really wanted to, you could do this directly in a hacky manner through the resolve method of ENSIP-10.

I’m in this camp. A well reputed service provider storing signed data in a centralised database offers the same security guarantees. The core concern would be that provider not responding (or disappearing completely) so the key is the well reputed bit. An L2 could also disappear.

Again, I’m confused. This doesn’t seem to follow the ENSIP-10 spec…?

Clients are meant to recursively iterate through the name tree until they find a resolver that implements ENSIP-10, at which point they call resolve. This seems to be a resolver that finds a resolver in its resolve…?

This might be more performant than a client (JS for example) discerning the resolver, but I’d imagine in reality, unless we are talking 10 levels of nested subdomains, the performance difference would be negligible. But yeh… crucially, this isn’t an implementation of the specification?


More generally whilst this thread is super informative (and definitely needed) there seems to be a lot of different tangents being discussed and it is not particularly easy to follow (IMO). I’m also not sure that a level of agreement has been reached whereby a summary of current thoughts could easily be collated :grimacing:


Sorry this thread is all over the place. You can ignore the original idea of switching chains.

This looks good to me. For the use case above, I want to read the resolver address from L2 registry, switch to that target, and read multiple records.

The existing IMulticallable interface is sufficient—I didn’t know it existed. I think wildcard servers should be encouraged to handle this request.

My thinking is this resolve() function should also process IMulticallable.multicall.selector.

To benefit from multicall() (single fetch) ENS clients need to be issuing requests in this style.

There are (5?) possible schemes for reading records of a name:
(m = registry reads, n = resolver reads)

  1. O(2m+2n) — ENSIP-10: resolve() per record
    [resolver() + supports(ensip10)] x m + [req + reply] x n
  2. O(2m+2) — ENSIP-10: resolve(multicall) all records
  3. O(2+n) — on-chain: eth_call per record
  4. O(4) — on-chain: supports(multicall) → call multicall() (internal multicall)
    resolver() + supports(ensip10) + supports(multicall) + multicall()
  5. O(3) — on-chain: use helper Multicall contract (external multicall)
    resolver() + supports(ensip10) + multicall()

My example above was just an experiment to provide a common interface that is O(1-2) for all types. It’s not a resolver itself per se. It does implement ENSIP-10. It lets you multicall an on-chain resolver from 2017 using IExtendedResolver interface.


I think CCIP fetches need to be bundled for complex dapps. For example, a social site will want to query multiple records per user per page load.

For querying “all your records”, like viewing a user profile, I suggested doing a single multicall() with common records + a record that indicates “here are my rare records”, which collapses many requests into 1-2.

This is cool! Would you consider contributing it to the universal resolver?

This is a really obvious thing to do, and I don’t know why I didn’t think of it.

Would you be happy to open a PR against GitHub - smartcontractkit/ccip-read to add this to the base gateway implementation?

That’s correct. Though I wonder if the correct solution here would be to implement multicall on the resolver too, having it just call multicall on the server?

Yes, I understand what you’re asking for. What you don’t seem to understand is that if you just call a public RPC provider, and don’t verify its responses in any way, it’s exactly equivalent to just storing those responses in a centralized database, from a security POV. The only thing you gain is the convenience of not having to host your own server, and instead relying on using an L2 as your database.

Gateway authors and contract authors can choose their security model for CCIP read; they can do anything from just trusting the gateway response at one extreme, to verifying everything against the L2’s state root onchain at the other extreme. With the scheme you are proposing, however, the latter becomes effectively impossible; the upper limit of security that can be achieved with it is simply trusting the public RPC node the client is using.

I honestly don’t understand why you think this is a beneficial addition to the protocol.

Someone has to run a public endpoint in either case; whether it’s a JSON-RPC server for the L2, or a CCIP-Read gateway is immaterial. It’s certainly not worth throwing away all security guarantees for the convenience of not hosting a server somewhere.

There are no privacy benefits here either, because the JSON-RPC server the client uses can also log their requests.

I’m talking here about the case where a user wants to start on L2, and do forward resolution of a name. The L2 would provide a wildcard resolver that fetches state proofs from L1 - but what if the user is using a custom resolver implementation on L1?


Quick input:

I think there is a clear difference in the trust assumptions of Nick and C0de. C0de trusts public gateways more than the ability of end users to verify CCIP gateways of their service providers; Nick is diagonally opposite on the trust spectrum. Both have valid points.

if you just call a public RPC provider, and don’t verify its responses in any way

@nick.eth Verification can be built cheaply for some storage types and their public endpoints. For example, if I query ipfs/Qmbla from any public IPFS gateway, I can verify its content’s legitimacy by calculating CID(content) = Qmbla. In that sense, CCIP-Read providers can blindly query any IPFS public gateway and relatively easily verify the content’s validity before resolving the result of a direct RPC call. This can be done for any immutable storage (= unique CID) with relatively cheap verification method(s). This came to my mind while reading this conversation but I haven’t thought too deeply…

PR. Although, this is multicall() not resolve(name,multicall()).

Sure.

I like that, except it only works where you don’t need the preimage, unless you multicall a bunch of resolve()s, which I guess also works.

  • resolve(name, multicall[addr(60), text(url)])
    Advantage: only send name once, ENSIP-10 compat
    vs
  • multicall[resolve(name,addr(60)), resolve(name, text(url))]
    Advantage: if you bucket per basename, you could multicall with multiple names, like a.[cb.id] and b.[cb.id].

I was under the impression that content hashes aren’t necessarily integrity hashes due to chunking. Although maybe that can be ignored for data below the minimum chunk size?

The other issue is that integrity != latest, so like signatures, you need an additional channel to indicate you’re seeing the newest version.

An improvement might be: after you’ve made updates to an L2 resolver, you commit a merkle root of all your name’s records. Then the gateway just needs to provide 1 storage proof for the root (1 slot), the raw record data, and the links in the tree to prove it locally.
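The commit-then-prove idea can be sketched as follows. This is a toy: it hashes records with sha256 (an on-chain verifier would use keccak256), and `merkle_root_and_proof`/`verify` are hypothetical names, not an existing API:

```python
import hashlib

def _h(x: bytes) -> bytes:
    # sketch hash; an on-chain version would use keccak256
    return hashlib.sha256(x).digest()

def merkle_root_and_proof(leaves, index):
    """Return (root, proof): the committed root over all records and the
    sibling path proving leaves[index] is included."""
    layer = [_h(l) for l in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])        # duplicate last node on odd layers
        proof.append(layer[index ^ 1])     # sibling of the tracked node
        layer = [_h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return layer[0], proof

def verify(root, leaf, index, proof):
    """Recompute the path from one raw record up to the committed root."""
    node = _h(leaf)
    for sib in proof:
        node = _h(node + sib) if index % 2 == 0 else _h(sib + node)
        index //= 2
    return node == root

records = [b"text:name=raffy", b"text:avatar=...", b"addr:60=0x51050e", b"contenthash=..."]
root, proof = merkle_root_and_proof(records, 2)
print(verify(root, records[2], 2, proof))  # True
```

The gateway then proves one storage slot (the root) and ships the raw record plus the sibling path, which the client checks locally.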

Correct, we are looking at tiny data due to limitations of CCIP-Read itself. Pretty sure it is impossible to pass more than a few KB of payload between resolve() and callback(). All chunking issues can be ignored; it is essentially flat hashing at that amount of data.

That’s correct. IPFS CIDs need to be wrapped in a namespace (say IPNS), which then requires some indexer to break degeneracy among versions. This is however a very simple task for an L2 EVM-Gateway Indexer which stores nothing more than the latest version for an IPNS key in an extremely rudimentary contract. This L2 EVM-Gateway Indexer + public RPC is a tightly secure CCIP-Read infra without a dedicated CCIP gateway. We think it is worth considering; in fact an ENSIP standard for “No-Gateway CCIP-Read for Immutable & Calculable NameSpaced Verifiable Storages” would be nice. Arweave + ArNS falls neatly into the same category, and maybe Swarm as well, bar some caveats. We can derive more value from the implicit trust in the public gateways, and it may even push them to be more security-oriented; that’s not our concern but a good side-effect.


I wrote my comment before reading this part, and we have ended up saying the same thing. We briefly entertained precisely this (= Index-on-L2 + Store Off-Chain) for NameSys v1 but gave up in the interest of simplicity at that point; it is still on our To-Do list for NameSys v2, a little further down the line.


is there anything wrong with this diagram?

Users do not have to verify CCIP gateways - the gateways are designed such that they cannot return inaccurate results. Any inaccurate results will be detected and rejected by the client.