Expanding Beyond Mainnet

Okay, I think I get it now. Here is my new take using the EVM gateway instead of L2 RPCs. This is a more advanced version of L1Resolver (from EVM Gateway) where I make 2 requests (to support Registry + Resolver), though the gateway code could be modified to support multiple targets in one request.

My original motivation: I wanted an official registry on L2, not just metadata. I also wanted support for complex resolver contracts (like Wildcard or CCIP).


  • ENS resolution always starts from L1.
  • L2 writes would still involve switching to the corresponding chain.
  • Wildcard or OffchainLookup resolvers can only be deployed on L1.
  • Only one contract, just deploy on L1.
  • L2OnChainResolver enables wildcard through rewriting + L2 registry which handles most cases.

  • Deploy a Registry on each supported L2.
  • Deploy a ReverseRegistrar.
  • (optional) Deploy a {uuid}._id-style UUIDRegistrar.
  • Deploy an L2OnChainResolver to L1.

If there's a node in the L2 registry and the metadata is in an L2 resolver, then you can set the L1 resolver to the singleton L2OnChainResolver and then call setTarget(node, chain, rewriter).
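
Loosely, the singleton's target bookkeeping could look like this (a sketch, not an actual contract; the owner check against the L1 registry is an assumption):

interface ENS {
    function owner(bytes32 node) external view returns (address);
}

// Hypothetical sketch of the singleton's per-node target storage.
contract L2TargetStore {
    struct Target { uint64 chain; address rewriter; }

    ENS public immutable ens;
    mapping(bytes32 => Target) private targets;

    constructor(ENS _ens) { ens = _ens; }

    // Only the L1 owner of the node may point it at an L2 chain + rewriter.
    function setTarget(bytes32 node, uint64 chain, address rewriter) external {
        require(ens.owner(node) == msg.sender, "not owner");
        targets[node] = Target(chain, rewriter);
    }

    function getTarget(bytes32 node) external view returns (uint64 chain, address rewriter) {
        Target memory t = targets[node];
        return (t.chain, t.rewriter);
    }
}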

Since ENSIP-10 doesn't supply the node that has the wildcard contract, the contract needs to also find the base node by popping labels from the path until the registry gives a result.

// similar to UniversalResolver.findResolver()
// but loop until resolver == address(this) and return the offset instead
uint256 pos = findBaseOffset(name);
bytes memory basename = name.slice(pos, name.length); // slice() from a BytesUtils-style helper
bytes memory prefix = name.slice(0, pos);
bytes32 basenode = namehashFromDNS(basename);
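
A sketch of what findBaseOffset could look like under that description (it reuses the slice()/namehashFromDNS() helpers from the snippet above; this is not the UniversalResolver code):

// Pop one label at a time until the registry says this contract is the resolver.
function findBaseOffset(bytes memory name) internal view returns (uint256 pos) {
    while (true) {
        bytes32 node = namehashFromDNS(name.slice(pos, name.length));
        if (ENS.resolver(node) == address(this)) return pos;
        uint256 len = uint8(name[pos]);
        require(len > 0, "no base found"); // reached the root label without a match
        pos += 1 + len; // skip the length byte plus the label itself
    }
}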

Use the basenode to query the target:

// option A: reuse the registry ttl field to store the chain (cheap hack)
uint64 chain = ENS.ttl(basenode);
// option B: read a dedicated target mapping from the singleton resolver
(uint64 chain, address rewriter) = L2OnChainResolver.getTarget(basenode);

rewriter is a contract address. If it isn't address(0), then rewrite the name:

name = INameRewriter(rewriter).rewrite(name, pos);

interface INameRewriter { 
    // both names are dns-encoded
    // pos is byte-offset of basename
    function rewrite(bytes memory oldName, uint256 pos) external view returns (bytes memory newName);
}

This allows *[raffy.eth] to become *[51050ec063d393217b436747617ad1c2285aeeee.addr.reverse] before being resolved.

setTarget(namehash("raffy.eth"), 10, OwnedRewriter)

The following could be a helper contract to rewrite L1 names to owned L2 names:

contract OwnedRewriter is INameRewriter {
    function rewrite(bytes memory oldName, uint256 pos) external view returns (bytes memory) {
        // compute the base node
        bytes32 basenode = namehashFromDNS(oldName.slice(pos, oldName.length));
        // find the owner on L1
        address owner = ENS.owner(basenode);
        // replace *[base] with *[addr].addr.reverse (dns-encoded, null-terminated)
        return bytes.concat(oldName.slice(0, pos), dnsEncodedAddr(owner), "\x04addr\x07reverse\x00");
    }
}

Example: addr("sub.raffy.eth", 60)

  1. ENS.resolver(namehash("sub.raffy.eth")) → null
  2. ENS.resolver(namehash("raffy.eth")) → L2OnChainResolver
  3. L2OnChainResolver.resolve(<sub.raffy.eth>, addr(..., 60))
    1. pos = findBaseOffset(name);
      • pos = 4; // only removed "\3sub"
    2. basenode = namehashFromDNS(name.slice(pos, name.length)); // "raffy.eth"
    3. (chain, rewriter) = getTarget(basenode);
      • chain = 10; // optimism
      • rewriter = ...; // e.g. OwnedRewriter
    4. name = INameRewriter(rewriter).rewrite(name, pos);
      • name = "sub.51050ec063d393217b436747617ad1c2285aeeee.addr.reverse";
    5. node = namehashFromDNS(name);
    6. verifier = getVerifier(chain); // optimism verifier contract
    7. ← EVMFetcher(verifier, registry)...fetch(<regCallback>, encode(calldata0, verifier, node))
    8. → regCallback(values[], calldata)
    9. Decode: calldata → (calldata0, verifier, node)
    10. Decode: values[] → resolver = address(values[0])
    11. ← EVMFetcher(verifier, resolver)...fetch(<resCallback>, calldata)
    12. → resCallback(values[], calldata)
    13. Decode: calldata → (calldata0)
    14. Decode original calldata: calldata0 → addr(60)
    15. Do any transforms necessary.
    16. return values[1];

  • This lets an L1 name use L2 for metadata.
  • This lets you build out an L2 registry subtree and use L2 for metadata.
  • Can reuse existing L1 tooling (L2 Registry and L2 PublicResolver would work as-is).
  • Get native L2 resolution for free (when ENSIP-10 isn't supported), but only for names without rewrites.

I think that's definitely something we should do. It will likely need a new API, though it may also be possible to add an instruction for "replace the contract address with this one".

This seems reasonable.

Using clones-with-immutable-args might be more gas-efficient than using a singleton here.

Another related topic is discovery of unique text/addr records, both on-chain and over CCIP. And related to this is general discovery for CCIP or "remote ERC-165".

I think wide adoption of this interface (IMulticallable.sol) for everything off-L1 is one improvement, but another is lists of my keys/coins/chains/etc.

IMulticallable lets you collapse a huge blast of common record-reads into a single request. This can already be done with Multicall contracts, but this interface is ENS-specific and is a direct interaction. The default personal profile (name, avatar, banner, url, contenthash, major coins, etc.) can become one CCIP fetch. The request can also be decoded by a smart CCIP bridge contract that decodes, proves, and returns the data from a CCIP L2 EVM gateway in one proof.
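
For illustration, the bundle for a default profile is just an array of encoded record reads handed to multicall(); the interfaces below are trimmed and the chosen records are arbitrary:

interface IMulticallable {
    function multicall(bytes[] calldata data) external returns (bytes[] memory results);
}

interface ITextResolver {
    function text(bytes32 node, string calldata key) external view returns (string memory);
}

interface IAddressResolver {
    function addr(bytes32 node, uint256 coinType) external view returns (bytes memory);
}

// Build the calldata array for one multicall() covering a basic profile.
function defaultProfileCalls(bytes32 node) pure returns (bytes[] memory data) {
    string memory kName = "name";
    string memory kAvatar = "avatar";
    string memory kUrl = "url";
    data = new bytes[](4);
    data[0] = abi.encodeWithSelector(ITextResolver.text.selector, node, kName);
    data[1] = abi.encodeWithSelector(ITextResolver.text.selector, node, kAvatar);
    data[2] = abi.encodeWithSelector(ITextResolver.text.selector, node, kUrl);
    data[3] = abi.encodeWithSelector(IAddressResolver.addr.selector, node, uint256(60)); // ETH (ENSIP-9)
}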

I implemented the IMulticallable tunnel in my resolver: when you query a supporting wildcard resolver, like my NFT project's per-token names (e.g. moo331.gmcafe.eth), it does 22 CCIP calls in 1.

But while this technique lets you query many common things, it doesn't let you discover/promote uncommon/new records.

I suggested "public-texts" as a new record which would be a comma-separated list of "rare" keys, but the idea could be extended to other field types. And the storage could be better/smarter/etc.

First, you multicall [the commons + the lists (keys, coins, chains, friends)] in one request, and then follow up with a multicall for the (filterable) rare records if needed.

This isn't automatically true - to fetch multiple records over CCIP-read using multicall, either the resolver needs to support this directly (e.g., understand that a multicall should roll multiple requests into one fetch) or the multicall implementation needs to be using a gateway that batches underlying CCIP-read requests. Both require manual work to implement over and above standard multicall.

Correct, not automatically true, but ENS should encourage this style of access (always use resolve(multicall*) if records > 1) so middleware like trusted CCIP or EVM-gateway can be efficient.

This way, complex applications which query lots of data are feasible, as they aren't making hundreds of requests per page load to show basic identity information.

If the resolver is wildcard and the resolve(multicall*) fails, you can just query it normally. Or if itā€™s not wildcard, we could just have an immutable helper contract that does an external multicall if the resolver itself doesnā€™t support IMulticallable. All of this code would be in ethers/veim/ensjs with an API like:

resolver = await getResolver("sub.raffy.eth");
resolver.node // namehash
resolver.dnsEncodedName // null if too long (can't wrap, can't ENSIP-10)
resolver.supportsWildcard // true if has ExtendedResolver interface
resolver.baseName // name of the wildcard resolver (or byte/label-offset)
resolver.supportsMulticall // [0 = untested, 1 = yes, 2 = no]

await resolver.fetch([["text", "name"], ["text", "description"], ["addr", 60], ["addr", 0], ["contenthash"]]);

This is essentially the same API I use in my resolver and it works great.

  • fetch_records() / get_resolver()
  • I think this is the same as findResolver() from UniversalResolver.sol. However, the multicall should work the opposite way: always use the resolve() interface, and have the helper contract decode, multicall, and repackage internally when there's no support.

Registering *.eth from L2 is a hard problem to solve… it's easy for sub.domain.eth with a CCIP resolver on L1, where domain.eth points to an L2 data resolver contract; people are already using that… I'd recommend not introducing any breaking changes against stuff that's already working, and if possible not adding extra complexity. ENS is already 3 levels deep with the name registry plus ERC-721 & ERC-1155 wrappers.

As we're verifying the manager's/approved signature stored on L2/offchain, in worst-case scenarios BAD L2 RPCs/gateways can only send stale records… we could narrow that down with an extra ttl/validity timestamp, but that'll require the manager/signer to update their records more frequently on L2/offchain.

In bad dapp/client scenarios they can resolve to anything, even for non-CCIP lookups… For the current web2 gateway with signers, any BAD/compromised gateway signer can easily sign and send anything… A domain's manager/approval sig is more trustless than a web2 gateway signer.

There's the same old privacy risk with web2 CCIP gateways. L2 RPC endpoints are still web2, but they're less risky than random web2 gateways used by every domain.eth. Using a CAIP-22-like format can leverage L2/light clients if the underlying dapps/wallet clients gain light client support in future.

"If clients are not following ENS specs they're not ENS compliant…", paraphrasing @nick.eth's reply from 2020 :smiley: back when I asked "what if bad scam clients skip normalization, or resolve their own addr when I send $eth to domain.eth from their dapps/wallet?"

For batch resolving I kinda preferred using @serenae's draft spec ([Draft] ENSIP-##: Off-chain Name Meta-Resolution) compared to the current ENSIP-16: Offchain metadata with the extra "graphqlUrl"…

sorry I'm a bit laggin'… back to work! :vulcan_salute:


Right, but you get exactly the same level of security from a centralized database storing those signatures. If you're not going to verify the blockchain's state proof, you gain no benefit from using a blockchain, and it isn't trustless.

No, they can't - because they can only produce valid proofs of L2 state; any invalid proof will be rejected by the contract that verifies the response.

No, it's strictly more secure, because unlike an RPC endpoint, a gateway cannot lie.

My claim is that you must own a .eth or a web2 domain name to have a 2LD on an L2. 3+ is different but ENS has no control over subdomains.

We use the EVM-gateway to trustlessly forward any namespace from L1 to an L2 where everything can be proved (must be on-chain but content agnostic), and then make that work with the IMulticallable interface.

ENS can create the L2 registry at the standard address, set up a reverse registrar, public resolver, and associated nodes. Account holders can claim their reverse record and then build out any hierarchy they want, reusing ENS infrastructure with L2 gas fees.

The L1 just needs to know the rewriter, which transforms *[raffy.eth] to *[51050ec063d393217b436747617ad1c2285aeeee.addr.reverse], for example the OwnedRewriter contract (adds the owner's hex address). The gateway resolver contract verifies a storage proof for a Multicallable bundle.

The name 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse can be claimed on an L2 w/o additional permissions or changes to ENS. From there, an entire L2 ecosystem can be built when combined with the PublicResolver. And the whole thing is immutable and trustless.

An underscored UUID registrar could mint base32-encoded names (z34ad._id) for the same purpose, except these basenames could be transferred, whereas the OwnedRewriter transfers trust from the L1 (since the owners match).

I made a helper contract that offers the IExtendedResolver.resolve(name, request) interface that does the following:

  • If the resolver is a wildcard, do a wrapped resolve().
  • Otherwise itā€™s on-chain:
    • If request is a IMulticallable.multicall() wrapper, decode it and call those functions directly on the resolver, then return the results encoded.
    • Otherwise, call that request directly on the resolver.

This makes it so that, for any name that's dns-encodable (under 256 bytes per label, 99.98% of known names), you can call it like an IExtendedResolver and give it any request, including multicall(), and it only does 1 CCIP request.
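
In heavily simplified form, the helper has roughly this shape (a sketch only: the resolver address is passed in explicitly rather than looked up from the registry, interface checks are trimmed, and the OffchainLookup revert from a wildcard resolver is simply allowed to bubble up instead of being re-wrapped the way the real helper does):

interface IERC165 {
    function supportsInterface(bytes4 interfaceID) external view returns (bool);
}

interface IExtendedResolver {
    function resolve(bytes calldata name, bytes calldata data) external view returns (bytes memory);
}

interface IMulticallable {
    function multicall(bytes[] calldata data) external returns (bytes[] memory results);
}

contract ResolveHelper {
    // One entry point for old and new resolvers: wildcard requests are forwarded,
    // multicalls against plain on-chain resolvers are unpacked and fanned out here.
    function resolve(address resolver, bytes calldata name, bytes calldata data)
        external view returns (bytes memory)
    {
        if (isWildcard(resolver)) {
            // wildcard: forward as-is; any OffchainLookup revert bubbles up to the client
            return IExtendedResolver(resolver).resolve(name, data);
        }
        if (bytes4(data[:4]) == IMulticallable.multicall.selector) {
            // on-chain resolver without multicall support: issue the sub-calls directly
            bytes[] memory calls = abi.decode(data[4:], (bytes[]));
            bytes[] memory results = new bytes[](calls.length);
            for (uint256 i = 0; i < calls.length; i++) {
                (, results[i]) = resolver.staticcall(calls[i]); // failed calls yield empty bytes
            }
            return abi.encode(results);
        }
        (, bytes memory result) = resolver.staticcall(data);
        return result;
    }

    function isWildcard(address resolver) internal view returns (bool) {
        (bool ok, bytes memory ret) = resolver.staticcall(
            abi.encodeWithSelector(IERC165.supportsInterface.selector, type(IExtendedResolver).interfaceId)
        );
        return ok && ret.length == 32 && abi.decode(ret, (bool));
    }
}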

I made a simple demo that resolves both "adraffy.eth" (onchain, PublicResolver) and "raffy.gmcafe.eth" (wildcard, that supports multicall()) via the same call to the helper contract.

For this to work, servers must implement the multicall() interface (which is really simple since you just call your own resolver recursively.)

I think if the recursive CCIP limit was higher (ethers uses MAX_CCIP_REDIR = 10, but that code could be unrolled), in the case where the server doesn't implement multicall(), you can encode an iterator into extraData, but I haven't tested this yet.

While implementing this, I noticed that my own CCIP implementation was too strict, as I was only allowing requests originating from my own resolver contract. To support recursive CCIP, you need to accept requests from any contract.

Nick, can you clarify what was meant by EIP-3668 "Unsuccessful requests MUST return the appropriate HTTP status code - for example, 404 if the sender address is not supported by this gateway". I originally interpreted this to mean sender = my contract, but I feel like the sender should only be used to block malicious senders (otherwise recursive can't work.)

Fortunately, even if recursive calls aren't supported due to restricted sender logic, the client can always call the wildcard resolver directly. The helper contract just makes this more universal.

Creating an EVM gateway contract that can do similar decoding and prove the entire request, including a bounce through a L2 registry, is the next step.


I think we're mixing up multiple types…
a) web2 gateway+db with gateway signer
b) web2 gateway+db or L2 with managerā€™s signature
c) web2 gateway with L2 state/storage proof
d) L2 RPC with managerā€™s signature

(a) & (b) are similar, but (a) depends on the gateway signer not to lie/sign & return bad data.
(d) is dependent on a CAIP-22-like format & currently there's no way to access L2 RPC endpoints directly.
(c) can't use CAIP-22-"like" formats? As it's a web2 gateway reading L2 state/storage proofs from L2 RPC endpoints on the backend. Perhaps some extra work on the client side could unlock that too?

To extend (b), we're requesting that CCIP gateway URLs support ipfs://, ipns:// and bzz:// protocols so CCIP can access web3 storage providers from the client side without any web2/CCIP gateway in the middle…

A state/storage proof is the best option, and I guess ENS will run official gateways to multiple L2s… still, there are 2 minor benefits to supporting web3 data/CAIP-22-like formats: we don't have to run a web2 gateway in the middle… and we get extra privacy by not requesting data from random web2 gateways used by different domains…

As it'll be backwards compatible, both can still be used as a fallback mechanism.

@raffy I think you changed addr() in the normalization/resolver page recently? And I see addr(60) in your example page too.

There are two: addr() returns (address) & addr(node, 60) returns (bytes) for the ETH address… As addr() is from the original resolver, our wildcard/offchain data is using the addr() format, but our data is stored in the addr(60) space as an address instead of bytes(address)…

So I was wondering if we should skip using addr()/address and always use addr(60)/bytes format?

What name is this? Ah, freetib.eth, I'll look into this.

Your addr(60) isn't encoded correctly, and I wasn't doing normal addr() during wildcard. Fixed.

Over the last week I've had a play with this and published a pull request (in the context of L1 multi-target resolution for now) here: Initial implementation of multi-target CCIP read by clowestab · Pull Request #28 · ensdomains/evmgateway · GitHub

I'd be appreciative of feedback…


I'm rereading everything trying to get context/understanding of the ideas being put forward here.

This reads as being unnecessarily complex. The evmgateway code as-is caters for this need anyway, in that you can query multiple pieces of data from a resolver; you just need to know the base storage slot of each value you want. Once you've discerned the resolver address and chain, this can be done on any EVM-compatible chain…

What do you mean? 0x123 reverse-resolves on an L2 to sub.raffy.eth. If the user has the private key for 0x123 (so as to have set the reverse resolver on that L2), in this case do we care about L1?

Yup, I see this.

?? What do you mean? You do discern (and can keep track of) the node that is resolving a subname…

I've re-read this multiple times and I'm getting lost in its complexity, when in reality I don't think it is actually that complex.

It really depends what is going on behind the scenes - you can have 1 CCIP fetch request but the gateway could be sending off multiple RPC calls behind the scenes. OR the gateway could use multicall behind the scenes, OR the gateway provider could implement their own middleware contract that batches these data requests and returns them as packed data (I've done this before with some of my tooling).

Oh I see, you're basically implying it should be provided by ENS natively? I'm not convinced - there are a lot of pros to keeping the core contracts as simple as possible, and middleware providers can simply build on top of ENS. If you really wanted to, you could do this in a hacky manner through the resolve method of ENSIP-10.

I'm in this camp. A well-reputed service provider storing signed data in a centralised database offers the same security guarantees. The core concern would be that provider not responding (or disappearing completely), so the key is the well-reputed bit. An L2 could also disappear.

Again, I'm confused. This doesn't seem to follow the ENSIP-10 spec…?

Clients are meant to recursively iterate through the name tree until they find a resolver that implements it, at which point they call resolve. This seems to be a resolver that finds a resolver in its resolve…?

This might be more performant than a client (JS for example) discerning the resolver, but I'd imagine in reality, unless we are talking 10 levels of nested subdomains, the performance difference would be negligible. But yeh… crucially, this isn't an implementation of the specification?


More generally, whilst this thread is super informative (and definitely needed), there seem to be a lot of different tangents being discussed and it is not particularly easy to follow (IMO). I'm also not sure that a level of agreement has been reached whereby a summary of current thoughts could easily be collated :grimacing:


Sorry this thread is all over the place. You can ignore the original idea of switching chains.

This looks good to me. For the use case above, I want to read the resolver address from L2 registry, switch to that target, and read multiple records.

The existing IMulticallable interface is sufficient; I didn't know it existed. I think wildcard servers should be encouraged to handle this request.

My thinking is this resolve() function should also process IMulticallable.multicall.selector.

To benefit from multicall() (single fetch) ENS clients need to be issuing requests in this style.

There are (5?) possible schemes for reading records of a name:
(m = registry reads, n = resolver reads)

  1. O(2m+2n) - ENSIP-10: resolve() per record
    [resolver() + supports(ensip10)] x m + [req + reply] x n
  2. O(2m+2) - ENSIP-10: resolve(multicall) all records (sketched after this list)
  3. O(2+n) - on-chain: eth_call per record
  4. O(4) - on-chain: supports(multicall) → call multicall() (internal multicall)
    resolver() + supports(ensip10) + supports(multicall) + multicall()
  5. O(3) - on-chain: use helper Multicall contract (external multicall)
    resolver() + supports(ensip10) + multicall()
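
For concreteness, scheme 2 boils down to calldata nested like this (illustrative only; the record choices and the free-function wrapper are assumptions):

// One outer resolve() carrying one inner multicall() of record reads (scheme 2).
function scheme2Calldata(bytes memory dnsEncodedName, bytes32 node) pure returns (bytes memory) {
    string memory key = "avatar";
    bytes[] memory inner = new bytes[](2);
    inner[0] = abi.encodeWithSignature("text(bytes32,string)", node, key);
    inner[1] = abi.encodeWithSignature("addr(bytes32,uint256)", node, uint256(60));
    return abi.encodeWithSignature(
        "resolve(bytes,bytes)",
        dnsEncodedName,
        abi.encodeWithSignature("multicall(bytes[])", inner)
    );
}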

My helper contract example above was just an experiment to provide a common interface that is O(1-2) for all types. It's not a resolver itself per se. It does implement ENSIP-10. It lets you multicall an on-chain resolver from 2017 using the IExtendedResolver interface.


I think CCIP fetches need to be bundled for complex dapps. For example, a social site will want to query multiple records per user per page load.

For querying "all your records", like viewing a user profile, I suggested doing a single multicall() with common records + a record that indicates (here are my rare records), which collapses many requests to 1-2.

This is cool! Would you consider contributing it to the universal resolver?

This is a really obvious thing to do, and I don't know why I didn't think of it.

Would you be happy to open a PR against GitHub - smartcontractkit/ccip-read to add this to the base gateway implementation?

That's correct. Though I wonder if the correct solution here would be to implement multicall on the resolver too, having it just call multicall on the server?
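
A rough sketch of that suggestion, assuming a standard EIP-3668 flow (gateway URL handling and the gateway-response signature check are elided):

error OffchainLookup(address sender, string[] urls, bytes callData, bytes4 callbackFunction, bytes extraData);

// Sketch: the resolver exposes multicall() itself and hands the whole batch to its
// gateway as a single CCIP-read round trip.
contract OffchainMulticallResolver {
    string[] gatewayUrls;

    constructor(string[] memory urls) { gatewayUrls = urls; }

    function multicall(bytes[] calldata data) external view returns (bytes[] memory) {
        revert OffchainLookup(
            address(this),
            gatewayUrls,
            abi.encodeWithSelector(this.multicall.selector, data), // the gateway sees one batched request
            this.multicallCallback.selector,
            ""
        );
    }

    // Called back by the client with the gateway's response (verification elided in this sketch).
    function multicallCallback(bytes calldata response, bytes calldata) external pure returns (bytes[] memory) {
        return abi.decode(response, (bytes[]));
    }
}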

Yes, I understand what you're asking for. What you don't seem to understand is that if you just call a public RPC provider, and don't verify its responses in any way, it's exactly equivalent to just storing those responses in a centralized database, from a security POV. The only thing you gain is the convenience of not having to host your own server, and instead relying on using an L2 as your database.

Gateway authors and contract authors can choose their security model for CCIP read; they can do anything from just trusting the gateway response at one extreme, to verifying everything against the L2's state root onchain at the other extreme. With the scheme you are proposing, however, the latter becomes effectively impossible; the upper limit of security that can be achieved with it is simply trusting the public RPC node the client is using.

I honestly don't understand why you think this is a beneficial addition to the protocol.

Someone has to run a public endpoint in either case; whether it's a JSON-RPC server for the L2, or a CCIP-Read gateway, is immaterial. It's certainly not worth throwing away all security guarantees for the convenience of not hosting a server somewhere.

There are no privacy benefits here either, because the JSON-RPC server the client uses can also log their requests.

I'm talking here about the case where a user wants to start on L2, and do forward resolution of a name. The L2 would provide a wildcard resolver that fetches state proofs from L1 - but what if the user is using a custom resolver implementation on L1?


Quick input:

I think there is a clear difference in the trust assumptions of Nick and C0de. C0de trusts public gateways more than the ability of end users to verify CCIP gateways of their service providers; Nick is diagonally opposite on the trust spectrum. Both have valid points.

if you just call a public RPC provider, and don't verify its responses in any way

@nick.eth Verification can be built cheaply for some storage types and their public endpoints. For example, if I query ipfs/Qmbla from any public IPFS gateway, I can verify its content's legitimacy by calculating CID(content) = Qmbla. In that sense, CCIP-Read providers can blindly query any IPFS public gateway and relatively easily verify the content's validity before resolving the result of a direct RPC call. This can be done for any immutable storage (= unique CID) with relatively cheap verification method(s). This came to my mind while reading this conversation but I haven't thought too deeply…

PR. Although, this is multicall() not resolve(name,multicall()).

Sure.

I like that, except it only works where you don't need the preimage, unless you multicall a bunch of resolve()s, which I guess also works.

  • resolve(name, multicall[addr(60), text(url)])
    Advantage: only send name once, ENSIP-10 compat
    vs
  • multicall[resolve(name,addr(60)), resolve(name, text(url))]
    Advantage: if you bucket per basename, you could multicall with multiple names, like a.[cb.id] and b.[cb.id].

I was under the impression that content hashes aren't necessarily integrity hashes due to chunking. Although, maybe that can be ignored for data below the minimum chunk?

The other issue is that integrity != latest, so like signatures, you need an additional channel to indicate you're seeing the newest version.

An improvement might be: after you've made updates to an L2 resolver, you commit a merkle root of all your name's records. Then the gateway just needs to provide 1 storage proof for the root (1 slot), the raw record data, and the links in the tree to locally prove it.
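
A toy version of that commitment, just to make the shape concrete (the contract and the sorted-pair leaf hashing are assumptions, not an existing ENS contract):

contract RecordRootCommitment {
    // node => merkle root over all of that name's records
    mapping(bytes32 => bytes32) public recordRoot;

    // Access control elided: in practice only the name's manager could commit.
    function commit(bytes32 node, bytes32 root) external {
        recordRoot[node] = root;
    }

    // Verify one (key, value) record against the committed root with a sorted-pair merkle proof.
    function verifyRecord(bytes32 node, bytes memory key, bytes memory value, bytes32[] memory proof)
        external view returns (bool)
    {
        bytes32 computed = keccak256(abi.encode(key, value));
        for (uint256 i = 0; i < proof.length; i++) {
            computed = computed <= proof[i]
                ? keccak256(abi.encodePacked(computed, proof[i]))
                : keccak256(abi.encodePacked(proof[i], computed));
        }
        return computed == recordRoot[node];
    }
}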

Correct, we are looking at tiny data due to limitations of CCIP-Read itself. Pretty sure it is impossible to pass more than a few KB of payload between resolve() and callback(). All chunking issues can be ignored; it is essentially hashing at that amount of data.

That's correct. IPFS CIDs need to be wrapped in a namespace (say IPNS), which then requires some indexer to break degeneracy among versions. This is however a very simple task for an L2 EVM-Gateway Indexer which stores nothing more than the latest version for an IPNS key in an extremely rudimentary contract. This L2 EVM-Gateway Indexer + Public RPC is a tightly secure CCIP-Read infra without a dedicated CCIP Gateway. We think it is worth considering; in fact an ENSIP standard for "No-Gateway CCIP-Read for Immutable & Calculable NameSpaced Verifiable Storages" would be nice. Arweave + ArNS falls neatly into the same category, and maybe Swarm as well, bar some caveats. We can derive more value from the implicit trust in the public gateways and it may even push them to be more security oriented; that's not our concern but a good side-effect.


I wrote my comment before reading this part and we have ended up saying the same thing. We briefly entertained precisely this (= Index-on-L2 + Store Off-Chain) for NameSys v1 but gave up in the interest of simplicity at that point; it is still on our To-Do list for NameSys v2, a little further down the line.


Is there anything wrong with this diagram?

Users do not have to verify CCIP gateways - the gateways are designed such that they cannot return inaccurate results. Any inaccurate results will be detected and rejected by the client.