Expanding Beyond Mainnet

I’ve tried a few times to write this post but I keep getting lost trying to account for various possibilities and consequences. Instead, I think it’s better to just describe how I think this should work and see if anyone agrees.


My current thoughts for making ENS work across the EVM ecosystem

tl;dr

  1. always start resolution from L1 (independent of dapp/wallet/etc) (free)
  2. on each L2, create Registry at 0x00000000000C2E074eC69A0dFb2997BA6C7d2e1e
  3. on each L2, create Registrar for {chainName}.eth (eg. *.op.eth)
  4. store reverse names on L2 exactly like L1
  5. deploy BridgedResolver on all chains (calls registry on another chain via ENSIP-10++)
  6. set BridgedResolver on L2 [root]
  7. set BridgedResolver on L1 for each {chainName}.eth subdomain
  8. use BridgedResolver to connect reverse records to corresponding chains (free)
  9. use claimable {owner}._owned to enable L2 storage for L1 names

Forward resolution should always start from L1 registry. For developers, the ENS client provider should be decoupled from the dapp provider. This will extend the ENS we use today to every EVM chain without any additional changes.
Edit: This is no longer necessary with a wildcard on the L2 [root] (see below).

Forward address resolution should be chain-aware. This has many input representations (eg. op:raffy.eth) but effectively specifies the coinType (and fallback behavior) based on chain.

  • L1: (raffy.eth, 1) → addr(60)
    L2/op: (raffy.eth, 10) → addr(614)
    L2/arb: (raffy.eth, 42161) → addr(9001)

  • If we use fallback, nearly all EOAs would likely use addr(60):
    L2: (raffy.eth, 10) → addr(614) || addr(60)

  • Or, we could define a universal coinType for “any evm chain”:
    L1: (raffy.eth, 1) → addr(60) || addr(any)
    L2: (raffy.eth, 10) → addr(614) || addr(any)

  • Or, constantly remind the user on L2 to update their addr(60) when they set their chain-specific coinType.
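The fallback options above can be sketched as a small client-side helper. This is a synchronous toy, not a spec: `COIN_TYPES` mirrors the values used in this thread, and `resolver.addr(coinType)` is a stand-in for the real (async, on-chain) resolver profile.

```javascript
// Sketch of chain-aware address lookup with the addr(60) fallback
// discussed above. COIN_TYPES mirrors the values used in this thread;
// resolver.addr(coinType) is a synchronous stand-in for the resolver profile.
const COIN_TYPES = { 1: 60, 10: 614, 42161: 9001 }; // chainId -> coinType

function chainAwareAddr(resolver, chainId) {
  const coinType = COIN_TYPES[chainId];
  if (coinType === undefined) throw new Error(`unknown chain: ${chainId}`);
  const specific = resolver.addr(coinType); // chain-specific record first
  if (specific) return specific;
  return coinType === 60 ? null : resolver.addr(60); // fall back to addr(60)
}
```

With this shape, an EOA that only ever set addr(60) still resolves on every chain, which is the "nearly all EOAs" case above.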

Reverse resolution should be native. This requires a reverse registry contract on each L2. This can use the existing ReverseRegistrar and NameResolver tech.

Reverse address resolution should be chain-aware.

  • L1 or L2: resolve("51050ec063d393217b436747617ad1c2285aeeee.addr.reverse") resolves normally against the local reverse registry

  • L1 or L2: resolve("51050ec063d393217b436747617ad1c2285aeeee.{chain}.reverse") is a {chain}.reverse Wildcard that uses BridgedResolver to access another chain's reverse registry.

  • Fallback could be implemented in the ENS client or helper contract (eg. *.any.reverse)

  • L2 could directly query the L2 registry for reverse records if no fallback is required; we get this for free.
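The two reverse-name shapes above can be sketched as one helper. Treating a null chainId as "the chain you're already on" is an illustrative choice here, not part of any spec:

```javascript
// Sketch: build the reverse-resolution name for an address.
// chainId === null means "the chain you're already on" (addr.reverse);
// otherwise use the {chainId}.reverse wildcard backed by BridgedResolver.
function reverseName(address, chainId = null) {
  const label = address.toLowerCase().replace(/^0x/, '');
  const suffix = chainId === null ? 'addr.reverse' : `${chainId}.reverse`;
  return `${label}.${suffix}`;
}
```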

(Optional) Acquire 1-4 character names for each L2’s blessed registrar.

  • Example: {op/o, arb/a, base/b, poly/p, avax/c, ...}.eth

Cross-chain names could be bridged using an extension of ENSIP-10 where a new optional target() function returns the chain where resolution can continue natively, along with a basename replacement.

Extended ENSIP-10
// change: ens (registry) is provided
// change: check [root] resolver
function getResolver(ens, name) {
    let currentname = name;
    while (true) {
        const node = namehash(currentname);
        const resolver = ens.resolver(node);
        if(resolver) return [resolver, currentname];
        if (!currentname) return [];
        currentname = parent(currentname);
    }
}

// change: ens (registry) is provided
// new: if the resolver is ENSIP-10 and supports BridgedResolver extension, do a native call
function resolve(ens, name, func, ...args) {
  const [resolver, resolverName] = getResolver(ens, name);
  if(!resolver) return null;
  const supportsENSIP10 = resolver.supportsInterface('0x9061b923');
  if (supportsENSIP10) {
    const node = namehash(name);
    try {
      let [chain, basename] = resolver.target(namehash(resolverName));
      const alt_ens = ensForChain(chain);
      let alt_name = name.slice(0, -resolverName.length);
      if (basename == '_owned') basename = join(ens.owner(node).slice(2).toLowerCase(), basename);
      alt_name = join(alt_name, basename);
      return resolve(alt_ens, alt_name, func, ...args);
    } catch (ignored) {
      // resolver has no target(): fall through to a normal ENSIP-10 resolve
    }
    const calldata = resolver[func].encodeFunctionCall(node, ...args);
    const result = resolver.resolve(dnsencode(name), calldata);
    return resolver[func].decodeReturnData(result);
  } else if (name == resolverName) {
    return resolver[func](namehash(name), ...args);
  } else {
    return null;
  }
}

function ensForChain(chain) {
  // return registry contract for each known chain
}
interface IBridgedResolver {
    function target(bytes32 node) external view returns (uint256 chain, string memory basename);
}
contract BridgedResolver {
    function setTarget(bytes32 node, uint256 chain, string calldata basename) external authorized(node);
}

There only needs to be one deployment of the BridgedResolver per chain, since the target can be parameterized by the basename of the wildcard. If the ENS client is unaware of the chain, the resolver would handle the request using CCIP-read via an EVM gateway.

I included the ability to modify the basename, so the op.eth registry tree could be used before short .eth names are enabled on mainnet. This would allow raffy.[op-bridge.eth] to wildcard-bridge to raffy.[op.eth] and would allow raffy.[op] or any other TLD to also bridge into the same raffy.[op.eth] node.

Special case: if the basename is _owned, the basename becomes {owner}._owned (see below).
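The basename rewrite can be sketched in isolation. `rewriteName` is a hypothetical helper following the pseudocode above, including the _owned special case, where `owner` stands in for ens.owner(node):

```javascript
// Sketch of the BridgedResolver basename rewrite: the suffix matched
// by the wildcard (resolverName) is replaced by the target basename.
// `owner` stands in for ens.owner(node) in the "_owned" special case.
function rewriteName(name, resolverName, basename, owner = null) {
  const prefix = name.slice(0, name.length - resolverName.length); // keeps trailing "."
  if (basename === '_owned' && owner) {
    basename = `${owner.toLowerCase().replace(/^0x/, '')}._owned`;
  }
  return prefix + basename;
}
```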

  1. Register op.eth
  2. ENSRegistry.setResolver(namehash("op.eth"), BridgedResolver.address)
  3. BridgedResolver.setTarget(namehash("op.eth"), 10, "op.eth")
    eg. L1: *.op.eth → L2[10]: *.op.eth

The BridgedResolver could also be used for the reverse record (10.reverse) example above:

  1. L1: BridgedResolver.setTarget(namehash("10.reverse"), 10, "addr.reverse")
    L2: BridgedResolver.setTarget(namehash("1.reverse"), 1, "addr.reverse")
    eg. L1: *.10.reverse → L2[10]: *.addr.reverse
    eg. L2: *.1.reverse → L1: *.addr.reverse

Each L2 should have a Registry contract. This contract is almost never invoked directly since resolution starts from L1. Ideally this should be a fixed address and shouldn’t be the same as the L1 registry.

  • [root] ← resolver = BridgedResolver(1, "")
    • reverse
      • addr
        • 51050ec063d393217b436747617ad1c2285aeeee
    • eth ← resolver = null
      • op ← NFT registrar
        • raffy ← Tokenized
    • _owned
      • 51050ec063d393217b436747617ad1c2285aeeee

Deploy an L2 registrar for claiming {label}.(op|arb|...).eth. It should probably use something like NameWrapper. This would work like .eth registrations on L1 and could have a registration fee and expiration.

Deploy an L2 registrar which lets anyone claim {owner}._owned. This would not be tokenized. You could build anything off this node, but it isn't reachable directly since resolution starts from L1. Use addr.reverse instead.


Name → Resolver Examples

  1. L1 name: raffy.eth → PublicResolver

  2. L2 name: raffy.op.eth → op.eth BridgedResolver(10, op.eth)
    → L2[10]: raffy.op.eth → PublicResolver

  3. L1 name w/L2 resolver: raffy.eth → raffy.eth BridgedResolver(10, 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse)
    → L2[10]: raffy.51050ec063d393217b436747617ad1c2285aeeee.addr.reverse → PublicResolver

  4. L1 name w/L2 resolver: sub.raffy.eth → raffy.eth BridgedResolver(10, 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse)
    → L2[10]: sub.raffy.51050ec063d393217b436747617ad1c2285aeeee.addr.reverse → PublicResolver

Address Examples

  1. L1 name: (raffy.eth, 1) → PublicResolver → addr(60)
  2. L1 name: (raffy.eth, 10) → PublicResolver → addr(614)
  3. L2 name: (raffy.op.eth, 1) → BridgedResolver → PublicResolver → addr(60)
  4. L2 name: (raffy.op.eth, 10) → BridgedResolver → PublicResolver → addr(614)

This setup would allow stuff like:

  • L2 wildcard:
    • Register test.op.eth on L2
    • Set it as a custom wildcard on L2
    • From anywhere, resolve sub.test.op.eth
  • L2 wildcard mirror of .eth on L1 (same tech used on [root] of L2)
    • Register eth.op.eth on L2
    • Set it as BridgedResolver(1, eth)
    • From anywhere, resolve raffy.eth.op.eth
      • L1: raffy.eth.[op.eth] → BridgedResolver(10, op.eth) → (10, raffy.eth.op.eth)
      • L2[10]: raffy.[eth.op.eth] → BridgedResolver(1, eth) → (1, raffy.eth)
      • L1: raffy.eth → PublicResolver

:pray: @raffy we’re exploring similar L2/L1 ideas for possible new ENSIP/ERC without breaking/changing anything big


Extending ENSIP10/ERC3668 (CCIP-read) gateways with new URI format
 eg. eip155:<chain-id>:ensip10:<address>:<0xcalldata>. Currently CCIP-read only supports http/s, so extending it to support ipfs://..., ipns://..., bzz:// and other web3/storage types seems like a good idea.

a) L1 name w/L2 data resolver:
raffy.eth → L1 CCIP with gateway URI set as eip155:<10>:ensip10:<L2 resolver contract>:<msg.data from L1> → format and call the RPC API at the L2[10] contract with that calldata → use the returned data from L2 in the L1 resolver's callback function. *Resolvers can verify state or signature data, OR accept returned data without extended verification, since it's all done client-side without an extra web2 CCIP gateway in the middle.

b) it’s possible to do multichain recursive ccip-read but it’s better to use L1 RPC as primary source.

L2[42161] → L1 → L2[10] << callback follows backwards

c) L1 resolver with offchain ENS records/data storage
@NameSys is already working on multiple web2/web3 offchain based ENS records storage.

Downside: compatible clients/dapps must have multiple RPC endpoints for L1 & all supported L2s for CCIP lookup.

Upside: easy multichain CCIP-read without any CCIP gateways in the middle. It's only extending CCIP gateways with new URI formats, so we don't have to wait a few more years for ERC-5559 compatibility.


Hey @0xc0de4c0ffee thanks for the input. I’ll jump right in:

I considered this approach but there’s an important difference. With my extension, you do the full ENS client resolve() pipeline, which might involve multiple calls, like a wildcard resolution, a CCIP call, or another recursive bridge action.

The URI format above would only be able to execute one on-chain call. Since the first complex resolver action would be the chain switch (op.eth → L2), this would limit L2 names to on-chain only.

However, I do agree this would be very useful for many applications and should be implemented for the general case of simple cross-chain reads. I can think of many ideas off the top of my head that would use this tech.

I’m not sure how to solve the chain RPC discovery issue though. Public RPCs for each chain could just be stored in the L1 registry.


This is also why I suggest doing this inside the ENSIP-10 pipeline. With my extension, if the chain isn’t known to the ENS client (eg. can’t be determined locally) then the normal path is taken. This means BridgedResolver is feasible today, it just requires an EVM-gateway or trusted CCIP-read setup.

A complete trustless bridge that supports arbitrary resolver implementations is probably difficult and likely requires additional ENSIP-10 functionality.

A local chain switch is the easiest solution to this problem and being ENSIP-10-specific makes any resolver contract work.


I didn’t include this case since once you’re doing off-chain stuff, you’re outside the official ENS envelope, but I agree this is an important option for users.

My thinking was any reachable name from the L1 (ie. must exist in the L1 registry or L2 registry, eg. ".eth", ".reverse", or DNS import) can bridge to an unreachable node on an L2 ({owner}._owned) which enables the existing infrastructure to manage the ENS records.

This would simplify a static table-based solution (like names backed by a csv or json file) to a BridgedResolver with a few subdomains on an L2, which almost anyone could set up.

In addition to {owner}._owned, I considered claimable {base36(++id)}._id for this same purpose. An advantage of incremented domains is size, since they need to be stored on-chain to indicate where the bridged resolution continues, eg. (6) 3z._id vs (47) 51050ec063d393217b436747617ad1c2285aeeee._owned. An advantage of the _owned approach is that everything automatically resets if the domain ownership changes (similar to versioning in the PublicResolver).
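The two label sizes quoted above check out; for example, id 143 encodes to "3z" in base36. These helpers are hypothetical names used only to compare the schemes:

```javascript
// Sketch: label sizes for the two claimable-node schemes above.
// A sequential id in base36 stays tiny versus a 40-hex-char owner label.
function idLabel(id)      { return `${id.toString(36)}._id`; }
function ownedLabel(addr) { return `${addr.toLowerCase().replace(/^0x/, '')}._owned`; }
```

e.g. idLabel(143) is "3z._id" (6 characters), while any ownedLabel is always 47 characters.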


Do you agree with the idea that resolution should start from L1?
I like this approach because it prevents needing to deploy copypasta Wildcard shims to every chain.

If we modify ENSIP-10 to check the resolver on the [root], then we get this behavior for free, since any name not in the L2 registry would match.

  • L1 [root] resolver would stay null
  • L2 [root] resolver would be BridgedResolver(1, "")
  • L2 reverse resolutions would work natively by default
  • L1 reverse from L2 would use 1.reverse BridgedResolver(1, addr.reverse)
  • This would also make it okay to deploy L2 registries at the same address as the L1
  • Side effect: _owned would become locally accessible on each L2

Interesting consequence: it would be trivial to BridgedResolver() to a testnet or vice versa — inb4 goerli registrations
 raffy.go.eth

Interesting consequence: BridgedResolver() to the same chain is a CNAME
eg. L1: raffy.[antistupid.com] → BridgedResolver(1, eth) → raffy.eth
eg. L1: [raffy.antistupid.com] → BridgedResolver(1, raffy.eth) → raffy.eth


It would be trivial to expose existing L2 name services across the entire ENS ecosystem. They just need to claim an L1 or L2 name and deploy an L2 Wildcard which uses their name-resolution tech.


We entertained a Goerli Bridge (including contenthash mapping to eth.limo) for a hackathon 6 months ago but decided against it. It's a risky move with unintended fallout and a potential bad rap among the community for destroying a testnet :sweat_smile: Even more interesting is that ENS is one of the few protocols that is truly chain-agnostic even in terms of implementation because, at the end of the day, it is just a database, a sophisticated database but still a database :innocent:


Thanks for posting this! A lot of it lines up very closely with our existing intentions and plans at Labs. Replying below only on points where we might differ.

What’s the purpose of this? Not everything has to be a .eth!

We’ve got various proposals for metadata APIs for offchain names. @matoken.eth has the lowdown on our current ideas here.

Couldn’t you just use your reverse record for this?

A wildcard resolver on the L2 registry root is an excellent idea.

I’m not a fan of this. It would complicate client implementations a lot to support these additional protocols, and 99% of them will end up supporting it simply by using a hardcoded gateway URL, so there’s no practical difference to the status quo.

I don’t think we want to make ourselves responsible for maintaining a massive database of public RPC endpoints for every EVM chain in existence. Much less onchain!

Likewise, this adds a burden to the client to know how to connect to arbitrary chains. The current system relies on a gateway to do that, which adds downtime risk, but removes a great deal of complexity from the client. Because responses are verified, the gateway can’t return inaccurate results.


I agree. It would be easier to just have a comprehensive resolver library.


Chain-specific subdomains would only be needed if ENS wanted official L2 registrations. Since anyone can offer L2 registrations off an owned L1 node today, the unique advantage of *.op.eth would be the length. The only other way for an L2 to get short names would be for someone to buy and import a TLD ($$$$) or a very short 2LD.

Additionally, names that are in the registry can reuse existing ENS tooling, whereas everything else requires custom resolver and storage contracts (and permission systems.)

No, since the user doesn’t own the reverse node.

Setup: you have raffy.eth on L1 and you want the records (and subs) on L2. With the BridgedResolver, you can use the existing ENS infrastructure for managing records, you just need another chain with a node that you control.

However, to own an L2 node, you need an L2 registrar. The _owned idea allows free registration of private nodes (in the sense that they only resolve on their own chain, don't exist on L1, and don't conflict with DNS). This feature isn't necessary if there is a free L2 registrar.

Initially, I thought the same thing; however, you only need RPCs to chains that allow public registration. By public, I mean that there is a registrar (that isn't the reverse registrar).

  • Most chains only need an empty registry tree with a BridgedResolver on the root back to L1. Or, the ENSIP-10 default could be, if no registry exists, just use L1.

  • The RPCs could be greatly restricted too: read-only, top-level calls to Registry or Resolver contracts only. Customizing the URL of the RPC could be a client feature (eg. if you run your own node(s), don't want to be doxxed, etc.)

  • The only data stored on an L2 w/o registrations are the native reverse records, which could be linked from the L1 using EVM gateway. No RPC required.

Global Registry Tree Example:

  • L1/Mainnet: root (resolver = null)
    • reverse
      • addr → (can claim)
        • 51050ec063d393217b436747617ad1c2285aeeee
      • 1 → BridgedResolver(1, addr.reverse)
      • 10 → BridgedResolver(10, addr.reverse)
      • 123 → ReverseResolverEVMGateway
    • eth (can register)
      • raffy
      • op → BridgedResolver(10, op.eth)
    • com
      • antistupid
        • raffy
  • L2/Optimism: root → BridgedResolver(1, "")
    • reverse
      • addr → (can claim)
        • 51050ec063d393217b436747617ad1c2285aeeee
    • eth
      • op → (can register)
        • raffy
    • _owned → (can register)
  • L2/Chain123: root → BridgedResolver(1, "")
    • reverse
      • addr → (can claim)
        • 51050ec063d393217b436747617ad1c2285aeeee
  • (unsupported) L2/Chain456
    • Registry doesn’t exist

More Resolution Examples:

  • addr("raffy.op.eth") on Chain123

    1. Registry exists
    2. Wildcard match for root → BridgedResolver(1, "")
    3. Get RPC for Chain 1
    4. Continue resolution of raffy.op.eth
    5. Registry exists
    6. Wildcard match for op.eth → BridgedResolver(10, op.eth)
    7. Get RPC for Chain 10
    8. Continue resolution of raffy.op.eth
    9. Registry exists
    10. Resolver for raffy.op.eth exists
    11. getAddr(coinType of Chain123) || getAddr(coinType "any")
  • addr("raffy.op.eth") on Optimism (note: exactly like mainnet)

    1. Registry exists
    2. Resolver for raffy.op.eth exists
    3. getAddr(614) || getAddr(coinType "any")
  • addr("raffy.eth") on Optimism

    1. Registry exists
    2. Wildcard match for root → BridgedResolver(1, "")
    3. Get RPC for Chain 1
    4. Continue resolution of raffy.eth
    5. Registry exists
    6. Resolver for raffy.eth exists
    7. getAddr(60) || getAddr(coinType "any")
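The three walks above can be simulated with a toy registry map. Everything here (the `registries` shape, `lookupHops`, longest-suffix matching) is illustrative, not a spec:

```javascript
// Toy simulation of the chain-hopping walks above: each chain's registry
// maps a name suffix to either a bridge target or a terminal resolver.
// Returns the list of chains visited (no cycle detection in this toy).
function lookupHops(registries, startChain, name) {
  const hops = [startChain];
  let chain = startChain;
  for (;;) {
    const reg = registries[chain] ?? registries[1]; // no registry -> fall back to L1's
    // longest-suffix match, falling back to the root wildcard ""
    const suffix = Object.keys(reg)
      .filter((s) => s === '' || name === s || name.endsWith('.' + s))
      .sort((a, b) => b.length - a.length)[0];
    const entry = reg[suffix];
    if (!entry.bridge) return hops; // terminal resolver found
    chain = entry.bridge;           // continue natively on the target chain
    hops.push(chain);
  }
}
```

For the tree above, resolving raffy.op.eth from Chain123 visits [123, 1, 10], while resolving it on Optimism stays native at [10].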

Example: Native Permissionless TLD on L2:

  1. obtain “op” DNS TLD
  2. DNS import into L1 → BridgedResolver(10, “op”)
  3. DNS import into L2 → create registrar
  4. sell *.op

My concern about the CCIP approach is that all of the complexity is hidden inside the resolver contracts. To resolve a few hundred names requires thousands of HTTP requests. With EVM gateway, you also need storage proofs.

With BridgedResolver, the origin of the resolution is explicit:

To resolve X.[A], continue resolving X.[B] on chain C

The basename rewrite could probably even be simplified:

To resolve A, continue resolving B on chain C

And provides the following benefits:

  • Batch calls can be optimized when they hit a bridged node.
  • Any L2 dapp or game using L2 names would be just as native as L1 is today.
  • L2’s can use any ENS resolver tech (wildcard, ccip, etc.)
  • Can determine if a name is resolvable “on-some-chain” (did not use CCIP).

The main differences seem to be:

  • Using L2 for metadata storage vs extending all ENS infrastructure
  • Security/privacy/trust differences between storage proofs vs local RPC change
  • Modifications required: BridgedResolver needs ENSIP-10 extension.

Ah, I see. Anyone can implement this, though, using custom resolvers; is there a strong benefit to an ‘official’ subdomain registrar for each chain?

They absolutely can - you can call claim on the reverse registrar to own the node.

I see what you mean here. One problem with this is that it restricts you to single ownership of a name; if you want to transfer ownership of the name, you have to also transfer all the records to a new node. It seems to me it’d be better to be able to just create UUID or sequentially numbered names and use one of those instead.

Even then, I don’t think there are advantages to this method that outweigh the cost of maintaining such a list. Gateways neatly sidestep the need for this.

Having to maintain our own RPC endpoints would be even worse! What would be the advantage over just hosting gateways?

Those proofs are a crucial part of the security guarantee. If we simply tell people to “resolve this on chain x”, all those guarantees go away unless the user happens to have a local node for that chain. Public RPC endpoints for those chains could simply return false results with no consequence.

Thanks for the feedback. I will retire the local chain switch idea. I wanted to explore an alternative setup.


I would like more details on how the L2 stuff will work from your POV, where the preferred bridging mechanism is the EVM gateway.

Is there only one real registry? eg. L1 nodes or wildcard

How does resolution work from L2? Do you always resolve using mainnet RPC? How do L2 reverse names work? How do complex resolvers (like wildcards) work from L2?


I think there exists a lightweight EVM gateway setup that simply proves the hash of every record requested and then CCIP-reads the actual data from anywhere. This scales linearly in the number of records accessed rather than the total size of the records accessed.

  • Create an L2 storage contract where, anytime you update a record, you record the hash of the record data under the key of the hash of the encoded function data that would query it: hash(<addr(node, 60)>) = hash("0x...")

  • L1 resolver contract gets storage proof of an array of hashes, CCIP-reads the corresponding hash values, and validates the record integrity.

Then, introduce a new Resolver profile that supports record-getter multicall, which both normal and wildcard resolvers can implement: instead of calling resolver.getText(node, "a") then resolver.getText(node, "b"), you encode a vector of those calls as one call. For the example above, you gateway a storage proof for the hash of each element in the vector (so only 1 gateway storage-proof request is needed) and then CCIP-read those records (so only 1 fetch is needed).

Example: ENSIP-20:

// very similar to ENSIP-10 resolve()
function multiResolve(name, getters: [string, ...any][]) {
    const [resolver, resolverName] = getResolver(name);
    if (!resolver) {
        return null;
    }
    const supportsENSIP10 = resolver.supportsInterface('0x9061b923');
    if (!supportsENSIP10 && name !== resolverName) {
        return null;
    }
    const supportsENSIP20 = getters.length > 1 && resolver.supportsInterface('...');
    const original = getters; // keep the un-bundled getters for decoding later
    if (supportsENSIP20) {
        // bundle all the getters into a single multicall getter
        getters = [['multicall', original.map(([func, ...args]) => {
            return resolver[func].encodeFunctionCall(namehash(name), ...args);
        })]];
    }
    let results = getters.map(([func, ...args]) => {
        if (supportsENSIP10) {
            const calldata = resolver[func].encodeFunctionCall(namehash(name), ...args);
            const result = resolver.resolve(dnsencode(name), calldata);
            return resolver[func].decodeReturnData(result);
        } else {
            return resolver[func](namehash(name), ...args);
        }
    });
    if (supportsENSIP20) {
        // unbundle the multicall
        const bundled = results[0];
        results = original.map(([func], i) => resolver[func].decodeReturnData(bundled[i]));
    }
    return results;
}

// before
await Promise.all([
    resolve("raffy.eth", "getText", "avatar"),
    resolve("raffy.eth", "getAddr", 60)
]);

// after
await multiResolve("raffy.eth", [["getText", "avatar"], ["getAddr", 60]]);

What do you mean by “real registry”?

There’ll always be the canonical registry on L1. In addition, there’ll be one ENS-deployed registry on each supported chain. Any integrator can deploy their own registry and reference it from their CCIP-Read enabled resolver on L1, however - or skip the registry entirely and just look stuff up on the L2 contract of their choice.

I think the simplest option is for all resolutions to start at L1. Unfortunately your idea of a wildcard on the root for resolution starting at an L2 won’t work - see below.

As you outlined, I think it makes sense for addr.reverse on each chain to resolve to the current chain’s reverse registry, while <chainid>.reverse on L1 has a wildcard resolver that delegates to that chain’s registry.

This is a good point. We can’t really start resolution on an L2, since L1 resolvers may have arbitrary logic, and thus we can’t fetch and verify results from L1 resolvers on L2.

Isn’t this pretty much what an L2 already does for us? It serializes the transaction payloads to L1, and applies them all against a state that it maintains for us to easily fetch with merkle hash proofs. Why replicate this?

We already have the universal resolver for batching requests together like this, and via a batch gateway it can support making only a single HTTP request from the client. It may be worth combining things like this, but I’m reluctant to add another resolution interface to resolvers.


This has the same effect as our suggested formats in the CCIP gateway, and it requires extra calls to set/get targets before resolve() calls during the resolving process.

A similar CAIP-22 format is already used in text/avatar records, so all the required parts are already there. The main point is to skip those web2 CCIP gateways that are reading from L2s in the background.

We can do verification with L2 storage OR with special L2 resolver contracts storing owner/manager’s signatures with records on cheap L2.

Without a CAIP-22-like format in the CCIP gateway, it's not really "Cross Chain Interoperable" & we'll be locked into this web2-gateway-in-the-middle solution. I think clients implementing "how to connect arbitrary chains" with their L2/RPC providers is better in the long run than running web2/CCIP gateways forever.

ETH RPC format vs. CCIP GET/POST

CCIP-read isn’t matching RPC format so that’s only reason we need those web2/CCIP gateways. We can work on new CAIP for such formats/L2 calls if there isn’t one already
 CAIP22 is marked for only assets, might also need new ERC to extend ERC3668 with that new CAIP, formatted as eip155:<chain-id>/erc3668:<address>:<0xcalldata>.


Parsing the format of the URIs is not the issue here. Being able to contact any of these chains is the issue.

The “web2 ccip gateways” provide complete proofs of their responses, meaning potential misbehaviour is limited to failure to satisfy a request; they cannot give incorrect results. In practice, the alternative involves querying trusted JSON-RPC endpoints for any chain the user doesn’t have a node for (which is ~all of them), and having no way to verify the returned result at all.


Why is this crossed out? I’m inclined to believe that with this being the Ethereum Name Service it is reasonable to begin resolution from Ethereum L1.

My concern here is that there is a lot of overhead to deploying contracts on multiple (possibly infinite) numbers of chains. We can have presigned deployment transactions etc. such that anyone can deploy on new chains, but if there is ever a need for a fundamental and significant change, having to do it in one place is obviously easier than doing it in hundreds. Especially if there are requirements of the ENS DAO.

This BridgedResolver approach seems interesting. It seems to maintain a lot of resolution crypto-nativeness, but it reads as quite complex. Getting clients to implement standards is hard enough as is, especially if they are complex and multi-stage.

Premm and I have discussed similar ideas with @nick.eth and others in the past. The problem with this is that it gives wallet providers significant extra work to do as regards resolution, and requires access to RPC nodes. To return proof data, if I'm understanding correctly, basically means that the wallet provider would essentially be building a 'gateway', which is probably not part of their core product and/or not something that they want to do.


I agree with this.

If I'm reading this correctly, we implemented something somewhat similar in one of our OP demos, whereby we proxied 2LD .eth names on L1 to 2LD .unruggable domains on L2 in our CCIP-compatible resolver.

I'm fence-sitting on this. I like the idea of connecting directly to chains and feel that, in practice, if you need resolution from a chain you probably have access to it (be it directly or through a wallet provider), but I also see the value of being able to prove the return data against L1, which gateways provide.


That said, we trust our connection to L1 for resolution and proof verification; a malicious client could return any address in response to a 3668 proof callback and most laymen wouldn't notice. TBF, most people full stop wouldn't notice. There's always trust somewhere.


What I like about 3668 is that it is somewhat established at this point and at least it is consistent/relatively simple/extensible.

I understand that clients/wallets can't have all L2 RPC APIs in their implementations to resolve all chain ids, but we can start with a few supported L2 chains. Moreover, any clients supporting text/avatar records in CAIP-22 format already have those L2 RPC endpoints; we have to reuse that. As ERC-3668 supports multiple gateway URLs, we can use web3/web2 gateways as primary/secondary fallbacks just in case.

eg.

urls[0] = “eip155:{chain-id}/erc3668:{address}:{0xcalldata}”
urls[1] = “https://ccip.gateway.tld/eip155:{chain-id}/erc3668:{address}:{0xcalldata}”

similarly we can also use

urls[0] = “https://{IPNS CID}.ipfs2.eth.limo”
urls[1] = “ipns://{CID}”
urls[2] = “https://{IPNS CID}.ipns.dweb.link”

If a client doesn't support the urls[0] format (or that gateway is down), it'll auto-fallback to urls[1]. We're basically trying to extend ERC-3668 CCIP-read for ERC-5559 "CCIP-write" using a CAIP-22-like format already used in ENSIP-12 (text/avatar) records.
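The fallback order described above can be sketched as a client loop. The `handlers` map (URI scheme to fetch-like function) is an assumption used for illustration:

```javascript
// Sketch: try each gateway URL in order, falling back on failure,
// as ERC-3668 clients already do for multiple HTTPS gateways.
// `handlers` maps a URI scheme prefix to a fetch-like function.
function ccipFetch(urls, handlers) {
  for (const url of urls) {
    const scheme = url.split(':')[0];
    const handler = handlers[scheme];
    if (!handler) continue; // client doesn't support this scheme
    try {
      return handler(url);
    } catch (e) {
      // gateway failed: fall through to the next URL
    }
  }
  throw new Error('all gateways failed');
}
```

A client that only speaks https skips the eip155: entry and lands on the web2 gateway; a richer client uses the native entry first.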

Verification storage is an easy problem to solve with L2 storage. We can extend the L1 resolver's CCIP lookup data to request an L2 data resolver function with an extra signature from the manager, and use that to verify on L1 during the callback.

eg.
L1 resolve() for addr() reverts with an offchain lookup where urls[0] = eip155:{L2-chain-id}/erc3668:0x{L2 resolver contract}:0x{calldata getAddr(node, manager)}

L2 data resolver:

function setAddr(bytes32 node, address _addr, bytes _signature) public;

msg.sender is the manager of the node.
The signature is signed by the manager over the node & address; it's verified during the CCIP callback.
L2 data is stored as keccak(node, manager) = (_addr, _signature).

function getAddr(bytes32 node, address manager) public returns (address _addr, bytes signature);

It certainly is an interesting trade-off. I can see how an RPC endpoint (with the exception of running your own node) is the weak point (where using a public L1 RPC is the least-worst option since it's widely available), especially when everything after the eth_call can be verified, but I also agree that being able to reliably continue an eth_call across chains is an incredibly powerful gadget.


If you follow my "eth_call on chain X" tech, then any chain with an ENS registry would have a BridgedResolver on the root node, so any resolution not native to that chain automatically goes to L1.

The reason for not always starting at L1 is that you might be connected to L2 and want to resolve an L2 name (which could be native; examples: a block explorer or a game). Or, a node might correspond to a BridgedResolver that goes to another chain, and you skip L1 entirely (eg. resolving the reverse address of chain 3 from chain 2).

If there's no L2 deployment, you just use L1.

The “Global Registry Tree Example” above is a good outline of this.

All of this logic would be part of the ENSIP-10 changes I mentioned:

  • Check resolver on root
  • Revert handler for eth_call on different chain (like OffchainLookup)
  • If no registry, use L1

With these changes, you can deploy any ENS resolver tech to any ENS approved chain and reuse all existing tooling. You get native resolution (L2 names on L2 chain are as fast as L1) and batch support (you can batch resolve N names, and then batch all resolutions that continue on the same chain, until everything resolves.)

The overhead is low IMO. The base deployment needs a Registry, a BridgedResolver, and an (optional) Reverse registrar (none of which need any modification). The Registry would need the BridgedResolver to L1 on the root, and maybe some text records for where to find some trustworthy RPCs.

Once that’s deployed, you could do something crazy like deploy a BridgedResolver that jumps to a 2nd L2 that hits a traditional L2 CCIP resolver and then—w/o any additional code—resolve that name from L1 or a 3rd L2 and it just works.

Since the Registry is owned by ENS and you need a node to receive an incoming resolution continuation, every registry stays compatible with every other, provided each registrar is deployed at a location that doesn't conflict with any other registry.


I don't own 51050ec063d393217b436747617ad1c2285aeeee.addr.reverse and can't subnode it; I can only change the resolver. Edit: I can with the new contract.

That is because you set your Primary Name using setName on the old Reverse Registrar, which did not give you the reverse node by default. You could have called claim at any time to claim the reverse node for yourself.

The current Reverse Registrar contract gives you the reverse node by default when using setName. All you have to do is re-set your primary name, and then you will own the reverse node. That will also update your reverse node to use the latest Public Resolver.

You can also call claim on the current Reverse Registrar instead, but that would only give you the reverse node and update the resolver on it. You would then need to perform a separate transaction to actually set the name on that new resolver.


Oh wow, I had no idea. I just checked a wallet that used the new contract and it does own it! This replaces the need for the _owned construction above, although the {uuid}._id setup is probably better if you want something that survives an ownership change.

So has anyone created a subnode off their reverse record yet? or am I going to be the first once gas cools down?


If you do this, why bother using an L2 at all? Just use an HTTP API to a database somewhere; you’re not gaining anything by storing it on a chain.

L2 is a cheaper & more trustless db/storage than making users depend on our web2 db/servers forever. Without a CAIP-22-like format in CCIP, we have to run web2 gateways reading from L2, and we don't want to sign users' records with our keys.

For now we're only using an offchain setup with IPNS/IPFS for NameSys; we're still trying to solve IPNS republishing & scaling issues. Our ENS+IPNS.ETH service will be out soon; after that we'll look into L2-based solutions.


But it’s not trustless in this scenario, because you are relying on a remote JSON-RPC server, with no way to verify its responses. It’s equivalent from a security POV to a database of signed messages.