The goal of this post is to discuss how to enable trust-minimized names via an EVM gateway while meeting customer expectations[1] around name resolution.
Gateways
EVM gateways work by verifying information posted to L1 by the L2.
As an illustrative example, we updated teamnick.xyz to use an EVM Gateway[2] to deliver trust-minimized names on Base.
Base posts data every hour to L1 via this contract. Therefore, a name can typically resolve in a trust-minimized way one hour[3] after creation or update.
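To make the timing concrete, here is a minimal sketch of the provability check a gateway performs. All names and fields (`L2Record`, `isProvable`, `lastPostedL2Block`) are illustrative assumptions, not part of any real gateway API:

```typescript
// Hypothetical record shape: when was this name last written on the L2?
interface L2Record {
  updatedAtBlock: number; // L2 block where the record was last created/updated
}

// A record can be proven on L1 only once its write is covered by a state
// root the rollup has already posted to L1. Until then, any answer for
// this name is necessarily trusted rather than trust-minimized.
function isProvable(record: L2Record, lastPostedL2Block: number): boolean {
  return record.updatedAtBlock <= lastPostedL2Block;
}
```

Since Base posts roughly hourly, a name written just after a posting typically fails this check for close to an hour.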
UX Solutions Considered
We considered using a non-EVM gateway to resolve names immediately upon name creation. Users would mint a subname, which would instantly resolve in a trusted fashion via a standard gateway, and then switch over to an EVM gateway once the data was posted to L1 and verified.
Through conversations with Raffy, it became clear that implementing this mechanism would result in a non-trust-minimized solution. This is especially evident when editing resolved addresses or records.
"Using offchain in that case is equivalent to using offchain in all cases: either it's proved on L1 with proofs from L2, or it's just a trusted solution." --Raffy
Therefore, it appears the choice is between:
1. A trusted gateway that works instantly
2. A trust-minimized EVM gateway that works with a 1+ hour delay
Is there a way to blend #1 & #2 for a great user experience?
An acceptable solution would be a name that resolves in a trusted fashion for X hours while the data is being posted to L1, then switches to a trust-minimized name. That solution appears not to be technically possible. The ideal, of course, would be to resolve instantly[4] and be trust-minimized.
[1] Get a name and it resolves nearly instantly.
[2] Developed by @raffy.
[3] The EVM Gateway created by ENS Labs uses a 5-hour delay in their implementation.
[4] "All rollups will be ZK and will commit blocks with finalized state roots to L1 every slot." - Vitalik
I agree it's not great UX, although not that different from regular DNS domains, which can take an hour before they start resolving due to DNS propagation.
I don't think that's quite true? For brand new names that don't yet exist in L2 state data, you could fall back to a trusted CCIP gateway. Once the name exists in the L2 proofs, it should only ever use that data. Subsequent resolver updates would still be delayed, but this solves the biggest UX issue you are describing: that a new name doesn't resolve at all for hours. Let's call it a "hybrid" L1 resolver that first tries to resolve the name in L2 proofs, and if the name doesn't exist at all, falls back to a CCIP gateway. I don't think it breaks the trustless nature completely:
you can communicate to the domain owner that names only resolve trustlessly once the L2 state is posted plus some reorg buffer, which typically means X hours. This behaviour can be verified in the L1 resolver contract.
you could build a function on the hybrid L1 resolver to check whether a domain exists in L2 state, which the owner can then use to tell if their domain is "trustless" yet
an edge case: names that expire and are re-registered might never use CCIP after the first registration
in the worst-case scenario, say the CCIP gateway is compromised. Domains might resolve to poisoned addresses if they are A. not registered at all, or B. registered but the L2 data hasn't been posted yet. So a registered domain that is over X hours old is always safe. Or am I missing something @raffy?
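The decision rule in the points above can be sketched as follows. This is a minimal illustration, assuming a `ProofResult` shape and a `ccipLookup` callback that are hypothetical, not any real resolver interface:

```typescript
type Source = "l2-proof" | "ccip-fallback";

// Hypothetical result of verifying the name against proven L2 state on L1.
interface ProofResult {
  exists: boolean;  // is the name present in proven L2 state at all?
  value?: string;   // resolved value, when present
}

// "Hybrid" resolution: once a name exists in proven L2 state, only that
// data is ever used; only brand-new names fall back to the trusted gateway.
function resolveHybrid(
  proof: ProofResult,
  ccipLookup: () => string,
): { value: string; source: Source } {
  if (proof.exists) {
    return { value: proof.value!, source: "l2-proof" };
  }
  // Name absent from proven state: trusted fallback, bounded to the
  // first X hours of the name's life by construction.
  return { value: ccipLookup(), source: "ccip-fallback" };
}
```

This also makes the worst case above visible: a compromised fallback gateway can only poison names for which `proof.exists` is still false.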
Offer the option for the user to pay for a small TX on the L1 resolver, to basically push "cached" resolver responses there faster, e.g. for a mission-critical migration. The L2 subname provider could create a proof that the user owns the domain + expiration date + a specific resolver response, and the user then commits this to the L1 resolver. The L1 resolver could use this "cached" response for X amount of time (aka TTL, let's say 6 hours by default), after which it reverts to L2 data. Additionally, this cached data could be invalidated whenever changes arrive in the L2 state proofs. The invalidation might take 1-5 hours to take effect, but that's usually less of an issue on the tail end. This could be used for new domains, but also any time someone needs a resolver update ASAP. It does require an offchain signature from the L2 provider/witness, and is probably also difficult to implement for such a small use case.
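The cache-selection logic described above might look like this. Everything here is a hypothetical sketch (the `CachedResponse` shape, field names, and the 6-hour TTL are assumptions from the post, with signature verification elided):

```typescript
// Hypothetical user-committed cache entry on the L1 resolver.
interface CachedResponse {
  value: string;
  committedAt: number; // timestamp when the user committed the cache on L1
  ttl: number;         // seconds; the post suggests 6 hours by default
}

// Prefer the cached response until it expires, or until newer proven L2
// data arrives (which invalidates it); otherwise serve proven L2 data.
function selectResponse(
  cache: CachedResponse | null,
  provenValue: string | null,
  provenUpdatedAt: number | null, // timestamp of the latest proven L2 update
  now: number,
): string | null {
  const cacheLive =
    cache !== null &&
    now < cache.committedAt + cache.ttl &&
    // Invalidation: any L2 update proven after the commit wins.
    (provenUpdatedAt === null || provenUpdatedAt < cache.committedAt);
  return cacheLive ? cache!.value : provenValue;
}
```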
"EVM Gateway" is interesting terminology. The implication is that it is a gateway for resolving data from other EVM-compatible chains only. We probably shouldn't constrain ourselves in this regard: proving non-EVM-compatible chains against L1 will likely happen down the line.
When you mention using a "non-EVM gateway" and a "standard gateway", I assume you are in fact referring to a gateway that doesn't use another chain as its data source, instead using a database or static JSON file (for example)? In that case, as @raffy suggests, that is simply a trusted solution.
This is possible, but there are considerations around when you switch between the two, and things like whether it is a "first time" resolution or an update (as mentioned by @aox.eth).
In practice, if I were setting up resolution on a name for the first time, I simply wouldn't share the name or attempt to use it until appropriate proofs were on L1.
This is the really interesting scenario. Generally my position is that the data source from which a gateway resolves is always the L2, but the data would be returned with or without a proof depending on the timings. If returned without a proof, it could instead be signed by a known, trusted private key controlled by the gateway provider and verified by the verifier (on L1). There are trust trade-offs as always…
That said, there are so many trust points in crypto UX at the moment. MetaMask resolution of ENS names, for example: in practice it's not transparent to the user where or how names are being resolved; the wallet could just be returning random values. Similarly, a number of products offer APIs for resolving names, with little clarity on where or how that data is sourced and how stale it potentially is. Even a resolution specified directly in a resolver on L1 is irrelevant if clients are not actually resolving from the head of the chain in real time.
Answering this question directly, I don't see there being a UX issue here. If a user chooses to trust a gateway, and trusts the clients, apps and tools they interface with to implement protocol specifications correctly, then their names will resolve as expected.
Specification updates could also allow for a defined response when a proof cannot be verified on L1, such that at the very worst your name wouldn't resolve for an hour while it is "updating".
Cross-chain name resolution is cool. Is it necessary? Depends on who you ask. I'm all for the ability to resolve cross-chain, but I feel that forcing resolution across networks encroaches on a user's right to a degree of privacy. When registering names, users should have the understanding that their name is to be resolved on mainnet. Name resolution across networks should be an optional feature, per user and per name. I do not think it is in ENS's interest to resolve addresses across chains without explicit permission granted by the user. Enforcing resolution means ENS takes away the user's choice in how their name is used on networks outside of mainnet.
ENS should not implement cross-network resolution without the option to disable it.
To start, names shouldn't really be resolved from OP Stack chains as soon as data is posted to L1, and the same goes for pretty much all other optimistic chains. OP chains with fault proofs enabled have a permissionless proposal system, and you need to wait long enough to ensure that if a dispute were going to start, it would already have started. OP chains that are yet to be upgraded to support fault proofs (i.e. Base) might let you get away with trusting that the proposer (a single, permissioned role) doesn't propose an invalid root. However, that places a significant amount of trust in centralised systems, where a technical issue could still arise.
Essentially, safely getting data means waiting multiple hours (plus any potential dispute time) for each update.
You can implement something where only data that is proven to be null can fall back to something else (assuming the contract verifies that the output index of the proof result isn't stale), but this doesn't really solve the issue by itself. Note that it also adds a lot of latency for requests in that trusted window, though that's not necessarily too bad if it's only for a few hours.
Once you have ownership information trustlessly available on L1, though, if the account is an EOA you could use a signed message containing record data and a nonce to create an update system that, while centralised, is still trustless. A gateway could return the latest signed message, which is compared with onchain data (onchain records would also need nonces), and the data with the latest nonce is returned.
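The nonce comparison described above reduces to a small selection rule. A minimal sketch, assuming a hypothetical `SignedRecord` shape and with signature and ownership verification elided (both would be mandatory in practice):

```typescript
// Hypothetical record carried either onchain or in an offchain signed message.
interface SignedRecord {
  value: string;
  nonce: number; // monotonically increasing per record; higher = newer
}

// Compare the proven onchain record with the gateway's latest signed
// message and return whichever carries the higher nonce. The signature
// over the offchain record must be verified before calling this.
function latestRecord(
  onchain: SignedRecord,
  offchain: SignedRecord | null,
): SignedRecord {
  if (offchain !== null && offchain.nonce > onchain.nonce) {
    return offchain;
  }
  return onchain;
}
```

Because the offchain message only wins on a strictly higher nonce, a gateway that withholds or replays old messages can at worst serve the (still valid) onchain state, not an arbitrary value.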
For an optimal user experience that allows record setting without having to sign a transaction and then a message, you could have the user sign a transaction authorising a browser-generated key to set records on their behalf. The browser key could then generate signed messages for each record, only after the transaction was sent.
The issues then become:
a. per-request latency: doing two CCIP-Read requests for all data
b. centralisation: CCIP-Read doesn't have a built-in way to handle HTTP errors, so even if the EVM Gateway server is still working, all resolution breaks if your optimistic verification server is down
As for those issues, to tackle them you pretty much just have to start using a ZK chain with fast finality. Not much further you can go, afaik.