Scam Name Solution Proposal (Verification Checks)

ZWJ and scammy ENS names are an issue. I’m proposing a solution similar to Twitter’s verified checkmarks. In this case, being verified means that a verification team, working from a proper rubric, confirms that the name is original and that any ZWJ characters it contains are used legitimately, not to create a lookalike of another domain.

There would be a “request verification” button on the registration page.

There would be a cost to this request, based on a user’s need and urgency to be verified, but verification would be permanent.

Imagine 100 users request verification. Each puts in a bid, the bids are dynamically ranked from highest to lowest, and the highest bidders go through the verification process first.
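The bid-ranked queue described above could be sketched like this (a hypothetical illustration; the request fields and function name are mine, not an existing ENS interface):

```python
# Hypothetical sketch of the proposed bid-ranked verification queue.
# Field names and bid amounts are illustrative only.

def rank_verification_requests(bids):
    """Order pending verification requests by bid amount, highest first."""
    return sorted(bids, key=lambda req: req["bid"], reverse=True)

pending = [
    {"name": "alice.eth", "bid": 5},
    {"name": "bob.eth",   "bid": 20},
    {"name": "carol.eth", "bid": 12},
]

queue = rank_verification_requests(pending)
print([req["name"] for req in queue])
# bob.eth is verified first, then carol.eth, then alice.eth
```

Whether bids are denominated in ETH or in the ENS token, the ranking logic itself stays the same.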

This system would provide extra income to ENS. The ENS token may even be used to bid for verification and that ENS gets returned to the treasury.

There could be a failsafe in place in case someone tries to game the verification system, and there could also be an appeal process.

I’d imagine it would mostly be secondary-market sellers who go for this, along with more than a few major brands and companies. However, registering a name and then requesting verification may well become the norm.

OpenSea and other secondary marketplaces could display the verified badge, and so could Etherscan, and maybe one day, Twitter.

This would phase out real scammers and eliminate a lot of worry for everyone. Imagine Coinbase displaying that a certain ENS address is verified before it is transacted with. That would add a real sense of security and peace of mind around the potential for homograph attacks.

Is this a workable idea? Do you think it’s worthwhile?

It’s actually a really complicated problem. It seems simple, but there are many gray-area cases. For instance, the domain lndia.eth looks like india.eth in some fonts, but it’s actually LNDIA.ETH. Should this be allowed or not? I suggest reading through the ENS Name Normalization post as well; it gets really complicated.

An interface that makes it possible to easily analyze each character is definitely necessary. Take a look here.

I think a better system would be for a third-party marketplace to have a “premium” names filter, wherein the third party can use whatever rules they like to decide which names are “premium”.


That’s a good point, but that only accounts for words that start with “i”. Maybe “0” versus words that start with “o” would be another one. I’m thinking the verification process would only be for names with ZWJ, since I’d guess 99% of homograph attacks would involve it. But now that I think of it, Greek and Cyrillic letters pose a lot of problems too. I still think a ZWJ check and verification service would be a great help. Thank you for sharing; I’m still thinking this over, given your point that homograph attacks involve more than just ZWJ.
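The ZWJ check itself is the easy part. A minimal sketch (note that a real rubric would have to permit ZWJ inside valid emoji sequences, which this naive scan does not):

```python
# Naive scan for zero-width joiners (U+200D) and zero-width
# non-joiners (U+200C) in a candidate name. A real check would need
# to allow ZWJ inside legitimate emoji ZWJ sequences.

ZERO_WIDTH = {"\u200d", "\u200c"}

def contains_zero_width(name: str) -> bool:
    return any(ch in ZERO_WIDTH for ch in name)

print(contains_zero_width("abc.eth"))        # False
print(contains_zero_width("ab\u200dc.eth"))  # True
```

Names that trip this check would be the ones routed into the manual verification queue.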

I’ve released a version of ens-normalize.js that correctly handles all emoji (and ZWJ), but this is only the first step towards the correct solution. One look at the confusables should make this obvious.

I’ve also explored some input UX ideas, created a low-level API ens_tokenize, and have a basic HTML formatting library lib-parts.js which makes “exploding” a name (w/r/t normalization) relatively simple. The Resolver Demo or Emoji Report are good examples.

I’ve made a few attempts at individually addressing homograph attacks but found the problem intractable for one person. The best solution at the moment appears to be script-based restrictions. The only reason I haven’t released a version using this technique is that it doesn’t solve the problem for the primary case: Common/Greek/Latin/Cyrillic scripts are just too similar – it doesn’t matter if you make Common unmixable with Cyrillic as you can trivially spoof between them. Prior art from DNS isn’t very helpful because it’s either too restrictive or simply converts all exotic names to punycode.
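To make the script-restriction idea concrete, here is a crude, stdlib-only sketch of detecting mixed scripts (this is not how ens-normalize.js works; it derives a pseudo-script from each character’s Unicode name, whereas a real implementation should use the Script property from Scripts.txt):

```python
import unicodedata

# Crude mixed-script detector: takes the first word of each character's
# Unicode name as its "script". Illustrative only; real code should use
# the Unicode Script property.

KNOWN_SCRIPTS = {"LATIN", "CYRILLIC", "GREEK"}

def scripts_used(name):
    scripts = set()
    for ch in name:
        word = unicodedata.name(ch, "UNKNOWN").split()[0]
        if word in KNOWN_SCRIPTS:
            scripts.add(word)
    return scripts

print(sorted(scripts_used("paypal")))        # ['LATIN']
print(sorted(scripts_used("p\u0430ypal")))   # ['CYRILLIC', 'LATIN']
```

As noted above, though, flagging the mix is not enough: an all-Cyrillic spoof of an all-Latin name passes a single-script rule, which is exactly why these scripts are so hard to handle.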

Currently, I am attempting to break these scripts into more manageable chunks, both to represent the data visually (so it’s easier to see what’s going on) and in the hope that I can define restrictive recipes using these new pseudo-scripts that still match most registered names but avoid having to solve the confusable issue across all scripts.

I am 100% open to ideas and suggestions.

If possible, keep the discussion in the original thread, ENS Name Normalization, so it’s easier to track.