[Draft-5] EIP-5559: Off-Chain Data Write Protocol

Hello ENS & ETH Developers :wave:

I attach the fifth draft incorporating comments from Nick and others so far up to Draft-4.

GitHub: EIP-5559 | ENSIP-19


EIP-5559: Off-Chain Data Write Protocol

Cross-chain write deferral protocol incorporating secure write deferrals to generic L1s, Ethereum L2s, centralised databases and decentralised & mutable storages

Abstract

The following proposal is a superseding version of EIP-5559: Off-Chain Write Deferral Protocol, targeting a wider set of storage types and introducing security measures to consider for secure off-chain write deferrals and subsequent retrievals. EIP-5559 in its present form is limited to deferring write operations to L2 EVM chains and centralised databases. Methods in this updated version enable secure write deferrals to generic L1s, Ethereum L2s, centralised databases and decentralised storages - mutable or immutable - such as IPFS, Arweave, Swarm etc. This draft alongside EIP-3668 is a significant step toward a complete and secure infrastructure for off-chain data retrieval and write deferral.

Motivation

EIP-3668, or ‘CCIP-Read’ in short, has been key to retrieving off-chain data for a variety of contracts on the Ethereum blockchain, ranging from price feeds for DeFi contracts to, more recently, records for ENS users. The latter case is more interesting since it uses off-chain storage specifically to bypass the usually high gas fees associated with on-chain storage; this aspect has a plethora of use cases well beyond ENS records and a potential for significant impact on the universal affordability and accessibility of Ethereum.

Off-chain data retrieval through EIP-3668 is a relatively simple task since it assumes that all relevant data originating from off-chain storages is translated by CCIP-Read-compliant HTTP gateways; this includes L2 chains, centralised databases and decentralised storages. On the flip side, however, each service leveraging CCIP-Read has so far had to handle two main tasks externally:

  • writing this data securely to these storage types on their own, and

  • incorporating reasonable security measures in their CCIP-Read compatible contracts for verifying this data before performing on-chain read or write operations.

Writing to a variety of centralised and decentralised storages is a broader objective compared to CCIP-Read largely due to two reasons:

  1. Each storage provider typically has its own architecture that the write operation must comply with, e.g. they may require additional credentials and configurations to be able to write data to them, and

  2. Each storage must incorporate some form of security measures during write operations so that the off-chain data’s integrity can be verified by CCIP-Read contracts during the data retrieval stage.

EIP-5559 was the first step toward such a tolerant ‘CCIP-Write’ protocol which outlined how write deferrals could be made to L2 and centralised databases. The cases of L2 and database are similar; deferral to an L2 involves routing the eth_call to L2, while deferral to a database can be made by extracting eth_sign from eth_call and posting the resulting signature along with the data for later verification. In both cases, no pre-flight information needs to be processed by the client and arguments of eth_call and eth_sign as specified in the current EIP-5559 are sufficient. This proposal supersedes the previous EIP-5559 by re-introducing secure write deferrals to generic L1s, EVM L2s, databases and decentralised storages, especially those which - beyond the arguments of eth_call and eth_sign - require additional pre-flight metadata from clients to successfully host users’ data on their favourite storage. This document also enables more complex and generic use-cases of storages such as those which do not store the signers’ addresses on chain as presumed in the current EIP-5559.

Curious Case of Decentralised Storages

Decentralised storages powered by cryptographic protocols are unique in their diversity of architectures compared to centralised databases or L2 chains, both of which have canonical architectures in place. For instance, write calls to L2 chains can be generalised through the use of chainId for any given callData; write deferral in this case is as simple as routing the eth_call to another contract on an L2 chain. There is no need to incorporate any additional security requirement(s) since the L2 chain ensures data integrity locally, while the global integrity can be proven by employing a state verifier scheme (e.g. EVM-Gateway) during CCIP-Read calls. Same argument applies to generic L1 blockchains as well. Centralised databases have a very similar architecture where instead of invoking eth_call, the result of eth_sign needs to be posted on the database along with the callData for integrity verification by CCIP-Read.

Decentralised storages on the other hand, do not typically have EVM- or database-like environments and may have their own unique content addressing requirements. For example, IPFS, Arweave, Swarm etc all have unique content identification schemes as well as their own specific fine-tunings and/or choices of cryptographic primitives, besides supporting their own cryptographically secured namespaces. This significant and diverse deviation from EVM-like architecture results in an equally diverse set of requirements during both the write deferral operation as well as the subsequent state verifying stage.

For example, consider a scenario where the choice of storage is IPNS or ArNS. In precise terms, IPNS storage refers to immutable IPFS content wrapped in a mutable IPNS namespace, which eventually serves as the reference coordinate for the off-chain data. The case of ArNS is similar; ArNS is immutable Arweave content wrapped in a mutable ArNS namespace. To write to IPNS or ArNS storage, the client requires more information than only the gateway URL responsible for write operations and the arguments of eth_sign. More precisely, the client must at least prompt the user for their IPNS or ArNS signature, which is necessary for updating the namespaced storage. The client may also need additional information from the user, such as specific arguments required by the IPNS or ArNS signature. One such example is the encoded version of the IPNS update, which goes into the construction of the IPNS record payload. These additional user-centric requirements are not accommodated by EIP-5559 in its present form, and the resolution of these issues is detailed in the following attempt towards a suitable CCIP-Write specification.

Specification

Overview

The following specification revolves around the structure and description of an arbitrary off-chain storage handler tasked with the responsibility of writing to an arbitrary storage. First introduced in EIP-5559, the protocol outlined herein re-defines the construction of the StorageHandledBy__() revert to accept generic L1 blockchains, EVM L2s, databases and decentralised & namespaced storages. In particular, this draft proposes that StorageHandledByL2() and StorageHandledByOffChainDatabase() introduced in EIP-5559 be replaced with re-defined StorageHandledByL2() and StorageHandledByDatabase() respectively, and new StorageHandledBy__() reverts be allowed through new EIPs that sufficiently detail their interfaces and designs. Some foreseen examples of new storage handlers include StorageHandledBySolana() for Solana, StorageHandledByFilecoin() for Filecoin, StorageHandledByIPFS() for IPFS, StorageHandledByIPNS() for IPNS, StorageHandledByArweave() for Arweave, StorageHandledByArNS() for ArNS, StorageHandledBySwarm() for Swarm etc.

Similar to EIP-5559, a CCIP-Write deferral call to an arbitrary function setValue(bytes32 key, bytes32 value) can be described in pseudo-code as follows:

// Define revert event
error StorageHandledBy__(address sender, bytes callData, bytes metadata);

// Generic function in a contract
function setValue(
    bytes32 key,
    bytes32 value
) external {
    // Get metadata from on-chain sources
    bytes metadata = getMetadata(key);  
    // Defer write call to off-chain handler
    revert StorageHandledBy__(
        msg.sender, 
        abi.encode(key, value), 
        metadata
    );
};

where the following structure for StorageHandledBy__() has been followed:

// Details of revert event
error StorageHandledBy__(
    address sender, // Sender of the call (msg.sender)
    bytes callData, // Payload to store
    bytes metadata // Metadata required by off-chain clients
);

Metadata

The arbitrary metadata field captures all the relevant information that the client may require to update a user’s data on their favourite storage. For instance, metadata must contain a pointer to a user’s data on their desired storage. In the case of StorageHandledByL2() for example, metadata must contain a chain identifier such as chainId and additionally the contract address. In case of StorageHandledByDatabase(), metadata must contain the custom gateway URL serving a user’s data. In case of StorageHandledByIPNS(), metadata may contain the public key of a user’s IPNS container; the case of ArNS is similar. In addition, metadata may further contain security-driven information such as a delegated signer’s address who is tasked with signing the off-chain data; such signers and their approvals must also be contained for verification tasks to be performed by the client. It is left up to each storage handler StorageHandledBy__() to precisely define the structure of metadata in their documentation for the clients to refer to. This proposal introduces the structure of metadata for four storage handlers: Solana L1, EVM L2s, databases and IPNS as follows.

Solana Handler: StorageHandledBySolana()

A Solana storage handler simply requires the hex-encoded programId and the manager account on the Solana blockchain; programId is equivalent to a contract address on Solana. Since Solana natively uses base58 encoding in its virtual machine setup, programId values must be hex-encoded according to EIP-2308 for storage on Ethereum. These hex-encoded values in the metadata must eventually be decoded back to base58 for usage on Solana.
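For illustration only, the base58 ↔ hex round-trip could be handled client-side with a generic base58 library; the sketch below assumes the bs58 npm package and ethers v6, neither of which is mandated by this draft.

/* Non-normative sketch: converting Solana base58 identifiers to hex and back (assumes 'bs58' and ethers v6) */
import bs58 from "bs58";
import { hexlify, getBytes } from "ethers";

// Encode a base58 Solana programId or account as 0x-prefixed hex for on-chain storage
function solanaToHex(base58Id: string): string {
  return hexlify(bs58.decode(base58Id)); // 32-byte public key -> hex string
}

// Decode the stored hex metadata back to base58 for use against Solana RPC
function hexToSolana(hexId: string): string {
  return bs58.encode(getBytes(hexId));
}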

// Revert handling Solana storage handler
error StorageHandledBySolana(address sender, bytes callData, bytes metadata);

(
    bytes32 programId, // Program (= contract) address on Solana; hex-encoded
    bytes32 account // Manager account on Solana; hex-encoded
) = getMetadata(node); // Arbitrary code
// programId = 0x37868885bbaf236c5d2e7a38952f709e796a1c99d6c9d142a1a41755d7660de3
// account = 0xe853e0dcc1e57656bd760325679ea960d958a0a704274a5a12330208ba0f428f
bytes metadata = abi.encode(programId, account);
bytes callData = abi.encode(node, key, value);
address sender = msg.sender;

Clients implementing the Solana handler must call the Solana programId using a Solana wallet that is connected to account as follows.

/* Pseudo-code to write to Solana program (= contract) */
// Instantiate program interface on Solana
const program = new program(programId, rpcProvider);
// Connect to Solana wallet
const wallet = useWallet();
// Decode off-chain data from encoded calldata in revert
let [node, key, value] = abi.decode(callData);
// Call the Solana program using connected wallet with off-chain data
// [!] Only approved manager in the Solana program should call
if (wallet.publicKey === account && program.isManagerFor(account, msg.sender)) {
    await program(wallet).setValue(node, key, value);
}

In the above example, programId, account and msg.sender must be decoded from hex to base58. The Solana handler requires a one-time transaction on Solana during initial setup for each user to set the local manager. In pseudo-code, this call is simply

await program(wallet).setManagerFor(account, msg.sender)

L2 Handler: StorageHandledByL2()

A minimal L2 handler only requires the list of chainId values and the corresponding contract addresses. This proposal formalises that the chainId and the contract address must be contained in the metadata. The deferral in this case will prompt the client to submit the transaction to the relevant L2 as prescribed in the metadata. One example construction of an L2 handler’s metadata is given below.

error StorageHandledByL2(address sender, bytes callData, bytes metadata);

(
    address contractAddr, // Contract address on L2
    uint256 chainId // L2 ChainID
) = getMetadata(node); // Arbitrary code
// contractAddr = 0x32f94e75cde5fa48b6469323742e6004d701409b
// chainId = 21
bytes metadata = abi.encode(contractAddr, chainId);
bytes callData = abi.encode(node, key, value);
address sender = msg.sender;
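For illustration, a client acting on this revert could switch the wallet to the prescribed chain and re-submit the deferred payload there. The sketch below assumes ethers v6, an injected EIP-1193 wallet, and an L2 contract exposing setValue(bytes32,bytes32,bytes32); none of these are mandated by this draft.

/* Non-normative sketch: client handling of StorageHandledByL2() */
import { AbiCoder, BrowserProvider, Contract, toBeHex } from "ethers";

const coder = AbiCoder.defaultAbiCoder();
const ethereum = (window as any).ethereum; // injected EIP-1193 provider (assumption)

async function deferToL2(callData: string, metadata: string) {
  // Decode the revert metadata: target contract and chain
  const [contractAddr, chainId] = coder.decode(["address", "uint256"], metadata);
  // Ask the wallet to switch to the prescribed L2 (EIP-3326)
  await ethereum.request({
    method: "wallet_switchEthereumChain",
    params: [{ chainId: toBeHex(chainId) }],
  });
  // Decode the deferred payload and re-submit it on the L2
  const [node, key, value] = coder.decode(["bytes32", "bytes32", "bytes32"], callData);
  const signer = await new BrowserProvider(ethereum).getSigner();
  const l2 = new Contract(
    contractAddr,
    ["function setValue(bytes32 node, bytes32 key, bytes32 value) external"], // assumed L2 interface
    signer
  );
  return l2.setValue(node, key, value);
}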

Database Handler: StorageHandledByDatabase()

A minimal database handler is similar to an L2 in the sense that:

a) similar to chainId, it requires the gatewayUrl that is tasked with handling off-chain write operations, and

b) similar to eth_call, it must require eth_sign output to secure the data, and the client must prompt the users for these signatures.

In this case, the metadata must contain the bespoke gatewayUrl and may additionally contain the address of the dataSigner for eth_sign. If a dataSigner is included in the metadata, then the client must make sure that the signature forwarded to the gateway is signed by that dataSigner. It is possible for the dataSigner to instead exist off-chain and not be returned in the metadata; for this scenario, refer to additional details in the ‘Off-Chain Signers’ section. One example construction of a database handler’s metadata is given below.

error StorageHandledByDatabase(address sender, bytes callData, bytes metadata);

(
    string gatewayUrl, // Gateway URL
    address dataSigner // Ethereum signer's address; must be address(0) for off-chain signer
) = getMetadata(node);
// gatewayUrl = "https://api.namesys.xyz"
// dataSigner = 0xc0ffee254729296a45a3885639AC7E10F9d54979
bytes metadata = abi.encode(gatewayUrl, dataSigner);
bytes callData = abi.encode(node, key, value);
address sender = msg.sender;

In the above example, the client must prompt the user for an eth_sign signature, verify that the resulting signature recovers to the dataSigner returned in the metadata, and finally pass the signature to the gatewayUrl along with the off-chain data. The message payload for this signature must be formatted according to the directions in the ‘Data Signatures’ section further down this document. The off-chain data and the signatures must be encoded according to the directions in the ‘CCIP-Read Compatible Payload’ section. Further directions for precise handling of the message payloads and metadata for databases are provided in the ‘Interpreting Metadata’ section.
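For illustration, this flow could look roughly as follows on the client side; the sketch assumes ethers v6 and signs with the connected wallet for simplicity (the draft equally allows a derived dataSigner key, see ‘Interpreting Metadata’), and the POST body shape is purely illustrative.

/* Non-normative sketch: client handling of StorageHandledByDatabase() */
import { AbiCoder, BrowserProvider, verifyMessage } from "ethers";

const coder = AbiCoder.defaultAbiCoder();

async function deferToDatabase(callData: string, metadata: string, message: string) {
  // message must follow the 'Data Signatures' format defined below
  const [gatewayUrl, dataSigner] = coder.decode(["string", "address"], metadata);
  const signer = await new BrowserProvider((window as any).ethereum).getSigner();
  // Prompt the user for the signature (personal_sign under the hood)
  const sigData = await signer.signMessage(message);
  // If an on-chain dataSigner was returned, the signature must recover to it
  if (dataSigner !== "0x0000000000000000000000000000000000000000" &&
      verifyMessage(message, sigData) !== dataSigner) {
    throw new Error("Signature does not match on-chain dataSigner");
  }
  // Forward the off-chain data and signature to the gateway; body shape is protocol-specific
  await fetch(gatewayUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ callData, sigData, signer: await signer.getAddress() }),
  });
}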

Decentralised Storage Handler: StorageHandledByIPNS()

Decentralised storages are the most extreme case in the sense that they come in both immutable and mutable forms; the immutable forms locate the data through immutable content identifiers (CIDs) while mutable forms utilise some sort of namespace which can statically reference any dynamic content. Examples of the former include raw content hosted on IPFS and Arweave while the latter forms use IPNS and ArNS namespaces respectively to reference the raw and dynamic content.

The case of immutable forms is similar to a database although these forms are not as useful in practice so far. This is due to the difficulty associated with posting the unique CID on chain each time a storage update is made. One way to bypass this difficulty is by storing the CID cheaply in an L2 contract; this method requires the client to update the data on both the decentralised storage as well as the L2 contract through two chained deferrals. CCIP-Read in this case is also expected to read from two storages to be able to fully handle a read call. Contrary to this tedious flow, namespaces can instead be used to statically fetch immutable CIDs. For example, instead of a direct reference to immutable CIDs, IPNS and ArNS public keys can instead be used to refer to IPFS and Arweave content respectively; this method doesn’t require dual deferrals by CCIP-Write (or CCIP-Read), and the IPNS or ArNS public key needs to be stored on chain only once. However, accessing the IPNS and ArNS content now requires that the client must prompt the user for additional information, e.g. IPNS and ArNS signatures in order to update the data.

Decentralised storage handlers’ metadata structure is therefore expected to contain additional context which the clients must interpret and evaluate before calling the gateway with the results. This feature is not supported by EIP-5559 and services using EIP-5559 are thus incapable of storing data on decentralised namespaced & mutable storages. One example construction of a decentralised storage handler’s metadata for IPNS is given below.

error StorageHandledByIPNS(address sender, bytes callData, bytes metadata);

(
    string gatewayUrl, // Gateway URL for POST-ing
    address dataSigner, // Ethereum signer's address; must be address(0) for off-chain signer
    bytes ipnsSigner // Context for namespace (IPNS signer's hex-encoded CID)
) = getMetadata(node);
// gatewayUrl = "https://ipns.namesys.xyz"
// dataSigner = 0xc0ffee254729296a45a3885639AC7E10F9d54979
// ipnsSigner = 0xe50101720024080112203fd7e338b2de90159832ffcc434927da8bbfc3a000fa58ea0548aa8e08f7e10a
bytes metadata = abi.encode(gatewayUrl, dataSigner, ipnsSigner);
bytes callData = abi.encode(node, key, value);
address sender = msg.sender;

In the example above, a client must evaluate the metadata according to the following outline. The client must request an IPNS signature from the user, verifiable against the IPNS CID returned in the ipnsSigner metadata. If verified, the client will then additionally require the historical context encoding the previous IPNS record’s version data (e.g. sequence number, validity etc) to make the IPNS update. There are several ways of providing this version data to the clients, e.g. a dedicated API, IPFS Pub/Sub, L2 indexer etc. It is therefore left up to individual implementations and/or protocols to choose their own desired method for version indexing, e.g. ENSIP-16/-19 for ENS. With the version data in hand, the client can move on to signing the off-chain data using the dataSigner key. Once signed, the client must encode the off-chain data, the data signature and the approval signature in a CCIP-Read-compatible payload. Lastly, the client must:

  • calculate the IPFS hash corresponding to the new off-chain data payload,
  • increment the IPNS record by encoding the version with new IPFS hash, and
  • broadcast the IPNS update by signing and publishing the incremented version to the gatewayUrl.

The strictly typed formatting for this IPNS signature payload is internally handled by the IPNS protocol and IPNS service providers via standard libraries.
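For illustration, the three steps above could be sketched with an IPNS client library; the example below assumes the w3name library (which publishes to its own service rather than a bespoke gatewayUrl) and re-uses the hypothetical makeIpfs() helper from the client-side pseudo-code further below. None of these choices are mandated by this draft.

/* Non-normative sketch: computing the new IPFS hash, incrementing the IPNS record and publishing it */
import * as Name from "w3name";

declare function makeIpfs(data: Uint8Array): Promise<string>; // hypothetical pinning helper returning a CID

async function publishToIpns(signedPayload: Uint8Array, ipnsKeyBytes: Uint8Array, previous?: Name.Revision) {
  // 1. Pin the CCIP-Read-compatible payload and obtain its IPFS hash
  const ipfsCid = await makeIpfs(signedPayload);
  // 2. Create or increment the IPNS record so that it points to the new CID
  const name = await Name.from(ipnsKeyBytes); // exported IPNS key bytes accepted by the library
  const revision = previous
    ? await Name.increment(previous, `/ipfs/${ipfsCid}`)
    : await Name.v0(name, `/ipfs/${ipfsCid}`);
  // 3. Sign and broadcast the incremented record
  await Name.publish(revision, name.key);
  return revision;
}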

Interpreting Metadata

The following section describes the precise interpretation of the metadata common to both IPNS and database storage handlers. The methods described in this section have been designed with autonomy, privacy, UI/UX and accessibility for Ethereum users in mind. The plethora of off-chain storages have their own diverse ecosystems such that it is not uncommon for each storage to have its own set of UI/UX requirements, such as wallets, signer extensions etc. If Ethereum users were to utilise such storage providers, they would inevitably be subjected to additional wallet extensions in their browsers. This is not ideal and the methods in this section have been crafted such that users do not need to install any additional UI/UX components or extensions other than their favourite Ethereum wallet.

StorageHandledByIPNS() is more complex in construction than StorageHandledByDatabase(), which is a reduced version of the former. For this reason, we will start by describing how clients must implement StorageHandledByIPNS() first. Later on, we will reduce the requirements to the simpler case of StorageHandledByDatabase().

Key Generation

This draft proposes that both the dataSigner and ipnsSigner keypairs be generated deterministically from ethereum wallet signatures; see figure below.

This process involving deterministic key generation can be implemented concisely in a single unified keygen() function as follows.

/* Pseudo-code for key generation */
function keygen(
  username, // Key identifier
  caip10, // CAIP identifier for the blockchain account
  signature, // Deterministic signature from wallet
  password // Optional password
) {
  // Calculate input key by hashing signature bytes using SHA256 algorithm
  let inputKey = sha256(signature);
  // Calculate info from CAIP-10 identifier and username
  let info = `${caip10}:${username}`;
  // Calculate salt for keygen by hashing concatenated info, hashed password and hex-encoded signature using SHA256 algorithm
  let salt = sha256(`${info}:${sha256(password || "")}:${signature}`);
  // Calculate hash key output by feeding input key, salt & info to the HMAC-based key derivation function (HKDF) with dLen = 42
  let hashKey = hkdf(sha256, inputKey, salt, info, 42);
  // Calculate and return both ed25519 and secp256k1 keypairs
  return [
    ed25519(hashKey), // Calculate ed25519 keypair from hash key
    secp256k1(hashKey) // Calculate secp256k1 keypair from hash key
  ]
}

This keygen() function requires four variables: caip10, username, password and signature. Their descriptions are given below.

1. caip10

CAIP-10 identifier caip10 is auto-derived from the connected wallet’s checksummed address wallet and chainId.

/* CAIP-10 identifier */
const caip10 = `eip155:${chainId}:${wallet}`

2. username

username may be prompted from the user by the client or determined by the protocol. This public field allows users to switch their protocol-specific IPNS namespace in the future. For instance, protocols may set username deterministically as equal to caip10 or some protocol-specific function of node; see example below.

/* Username is dependent on the storage type which can be 'walletType' or 'nodeType'. See definitions at the end of this section */
// Example: node = namehash(normalise(ens)) for ENS, aka preimage(node) = ens
let username;
if (storage === 'walletType') username = caip10;
if (storage === 'nodeType') username = preimage(node);

3. password

password is an optional private field and it must be prompted from the user by the client; this field allows users to secure their IPNS namespace for a given username.

/* IPNS secret key identifier */ 
// Clients must prompt the user for this
const password = 'key1'

4. signature

Deterministic signatures form the backbone of a secure, keyless, autonomous and smooth UI when off-chain storages are in the mix. In the simplest implementation, one such signature must be prompted from the user by the client. sigKeygen is the deterministic Ethereum signature responsible for

  • the IPNS key generation and for interpreting ipnsSigner metadata, and
  • the delegated signer key generation and for interpreting dataSigner metadata.

In order to enable batch data writing for multiple nodes, a delegated signer must be derived from the owner or manager keys of a node. Message payload for sigKeygen must be formatted as:

Requesting Signature To Generate Keypair(s)\n\nOrigin: ${username}\nProtocol: ${protocol}\nExtradata: ${extradata}\nSigned By: ${caip10}

where the extradata is calculated as follows,

// Calculating extradata in keygen signatures
bytes32 extradata = keccak256(
    abi.encodePacked(
        pbkdf2(
            password, 
            salt, 
            iterations
        ), // Stretch password with PBKDF2
        wallet
    )
)

where PBKDF2 - with keccak256(abi.encodePacked(username)) as the salt and the salt’s last 5 hex-nibbles converted to uint as the iteration count - is used for protection against brute-force attacks.

/* Definitions of salt and iterations in PBKDF2 */
let salt = keccak256(abi.encodePacked(username));
let iterations = uint(salt.slice(-5)); // max(iterations) = uint(0xFFFFF) = 1048575
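For illustration, the extradata computation and the resulting sigKeygen message payload could be assembled as follows; the sketch assumes ethers v6 and uses SHA-256 inside PBKDF2 with a 32-byte output, both of which are assumptions since the draft leaves the PBKDF2 internals to implementations.

/* Non-normative sketch: computing extradata and building the sigKeygen message payload */
import { keccak256, pbkdf2, solidityPacked, toUtf8Bytes } from "ethers";

function buildKeygenMessage(username: string, protocol: string, password: string, wallet: string, caip10: string): string {
  // Salt and iteration count per the definitions above
  const salt = keccak256(toUtf8Bytes(username));
  const iterations = parseInt(salt.slice(-5), 16); // last 5 hex-nibbles
  // Stretch the (possibly empty) password and bind it to the wallet address
  const stretched = pbkdf2(toUtf8Bytes(password || ""), salt, iterations, 32, "sha256");
  const extradata = keccak256(solidityPacked(["bytes", "address"], [stretched, wallet]));
  // Message payload for the deterministic sigKeygen signature
  return `Requesting Signature To Generate Keypair(s)\n\nOrigin: ${username}\nProtocol: ${protocol}\nExtradata: ${extradata}\nSigned By: ${caip10}`;
}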

The remaining protocol field is a protocol-specific identifier limiting the scope to a specific protocol. This identifier cannot be global and must be uniquely defined by each implementation or protocol. With this deterministic format for the signature message payload, the client must prompt the user for the eth_sign signature. Once the user signs the message, the keygen() function can derive the IPNS keypair and the signer keypair. The client must additionally derive the IPNS CID and the Ethereum address corresponding to the IPNS and signer public keys. The metadata interpretation concludes with the client ensuring that

  • the derived IPNS CID must match the ipnsSigner metadata, and
  • the derived signer’s address must match the dataSigner metadata.

If these conditions are not met, clients must throw an error and inform the user of failure in interpretation of the metadata. If these conditions are met, then the client has the correct private keys to update a user’s IPNS record as well as sign a user’s data for later verification by CCIP-Read. Since the derived signer can sign multiple instances of off-chain data in the background without prompting the user, it is possible to update data for multiple nodes simultaneously with this method.

Storage Types

Storage types refer to two types of IPNS namespaces that can host a user’s data. In the first case of nodeType, each node has a unique IPNS container whose CID is stored in ipnsSigner metadata. In the second case of walletType, a user can store the data for all nodes owned or managed by a given wallet. Naturally, the second method is highly cost effective although it compromises on security to some extent; this is due to a single IPNS signer manifesting as a single point of compromise for all off-chain data for a wallet. This feature is achieved by choosing an appropriate username in the signature message payload of sigKeygen depending on the desired storage type.

During the initialisation step when the user sets on-chain ipnsSigner for the first time, the clients must prompt the user for their choice of storage type. Depending on the user’s choice, IPNS CID can be posted on chain with an appropriate index.

/* Setting IPNS signer on-chain during initialisation setup */
// IPNS signer derived from keygen() in CIDv1 format
let cid = 'bafyreibcli3vlmr4et6oekv3xdjx2sm6k4tioynbavmwgrsevklujpzywu';
// IPNS signer is function of node for 'nodeType' storage; remove constant 'e5010172002408011220' prefix from hex-encoded payload to save gas
if (storage === 'nodeType') setIpnsSigner(node, cid.encode('hex').replace('e5010172002408011220', ''));
// IPNS signer is function of wallet for 'walletType' storage; remove constant 'e5010172002408011220' prefix from hex-encoded payload to save gas
if (storage === 'walletType') setIpnsSigner(bytes32(uint256(uint160(wallet))), cid.encode('hex').replace('e5010172002408011220', ''));

CCIP-Write-enabled contracts should implement an appropriate internal mechanism for fetching the IPNS signer as a function of node or wallet. This mechanism must follow a fallback strategy: the contract must first check if nodeType storage exists for a given node, and if no ipnsSigner exists for that node, then the contract should check for the fallback walletType storage for the wallet and return the result in the revert.

Revert StorageHandledByDatabase()

The case of StorageHandledByDatabase() handler is a subset of the decentralised storage handler, in the sense that the clients should simply skip interpreting IPNS related metadata. There is additionally no concept of storage types for off-chain database handlers. Other than that, the entire process is the same as StorageHandledByIPNS().

Off-Chain Signers

It is possible to further save on gas costs by not storing the dataSigner metadata on chain. In detail, instead of storing the dataSigner on chain for verification, clients can provide the user with the option to,

  • request an approval for an off-chain dataSigner signed by the owner or manager of a node, and
  • post this approval and the off-chain dataSigner along with the off-chain data in encoded form.

CCIP-Read-enabled contracts can then verify during resolution time that the approval attached with the data comes from the node’s manager or owner and that it approves the expected dataSigner. Using this mechanism of delegating signatures to an off-chain signer, no on-chain dataSigner needs to be posted. This additional saving comes at the cost of one additional approval signature that the client must prompt from the user. This signature must have the following message payload format:

Requesting Signature To Approve Data Signer\n\nOrigin: ${username}\nApproved Signer: ${dataSigner}\nApproved By: ${caip10}

where dataSigner must be checksummed.
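For illustration, the approval could be requested from the connected owner or manager wallet as follows; the sketch assumes ethers v6 and is not normative.

/* Non-normative sketch: requesting the off-chain dataSigner approval from the node's owner or manager */
import { BrowserProvider, getAddress } from "ethers";

async function approveDataSigner(dataSigner: string, username: string, caip10: string): Promise<string> {
  const message =
    `Requesting Signature To Approve Data Signer\n\n` +
    `Origin: ${username}\n` +
    `Approved Signer: ${getAddress(dataSigner)}\n` + // checksummed, per the format above
    `Approved By: ${caip10}`;
  const owner = await new BrowserProvider((window as any).ethereum).getSigner();
  return owner.signMessage(message); // approval to be posted alongside the off-chain data
}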

Data Signatures

Signature(s) sigData accompanying the off-chain data must implement the following format in their message payloads:

Requesting Signature To Update Off-Chain Data\n\nOrigin: ${username}\nData Type: ${dataType}\nData Value: ${dataValue}

where dataType parameters are protocol-specific; they are defined in ENSIP-5, ENSIP-7 and ENSIP-9 for ENS (formerly EIP-634, EIP-1577 and EIP-2308 respectively), e.g. text/avatar, address/60 etc.
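For illustration, sigData could be produced silently with the derived dataSigner key as follows; the sketch assumes ethers v6, and dataType/dataValue follow the protocol-specific definitions referenced above.

/* Non-normative sketch: producing sigData with the derived dataSigner key */
import { Wallet } from "ethers";

async function signRecord(signerPrivKey: string, username: string, dataType: string, dataValue: string): Promise<string> {
  const message =
    `Requesting Signature To Update Off-Chain Data\n\n` +
    `Origin: ${username}\n` +
    `Data Type: ${dataType}\n` +
    `Data Value: ${dataValue}`;
  // The derived secp256k1 signer can sign in the background without prompting the user
  return new Wallet(signerPrivKey).signMessage(message);
}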

CCIP-Read Compatible Payload

The final EIP-3668-compatible data payload in the off-chain data file must then follow this format,

bytes encodedData = abi.encode(['bytes'], [dataValue])
bytes dataPayload = abi.encode(
    ['address', 'bytes', 'bytes', 'bytes'],
    [dataSigner, sigData, approval, encodedData]
)

which the CCIP-Read-enabled contracts must first correctly decode, and then verify signer approval and data signatures, before resolving the data value. The client must construct this data and pass it to the gateway in the POST request along with the raw values for indexing.
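For illustration, the same payload could be assembled client-side as follows; the sketch assumes ethers v6 and mirrors the ['address', 'bytes', 'bytes', 'bytes'] layout shown above.

/* Non-normative sketch: assembling the CCIP-Read compatible payload */
import { AbiCoder } from "ethers";

const coder = AbiCoder.defaultAbiCoder();

function encodePayload(dataSigner: string, sigData: string, approval: string, dataValue: string): string {
  // dataValue is expected as 0x-prefixed hex bytes
  const encodedData = coder.encode(["bytes"], [dataValue]);
  return coder.encode(
    ["address", "bytes", "bytes", "bytes"],
    [dataSigner, sigData, approval, encodedData]
  );
}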

POST & Protocol-specific Parameters

For any storage other than a blockchain with a wallet extension, the client must call the gatewayUrl via a POST request. The structure of the POST is protocol-specific and left up to individual protocols to handle internally. Besides the POST request, username, protocol and dataType are the other protocol-specific parameters that we have encountered in the text before. Note that we have not yet defined the paths for the off-chain data files either, i.e. where the file containing the off-chain data should be stored and later referred to in CCIP-Read-compatible contracts. These path schemes are also native to each implementation and are therefore left up to each protocol to define along with the previously mentioned parameters. The combined total of five parameters should be defined by the protocols through a native improvement proposal. For example, the POST format, username, protocol, dataType and path for ENS are described in ENSIP-19.

New Revert Events

  1. Each new storage handler must submit their StorageHandledBy__() identifier through an ERC track proposal referencing the current draft and EIP-5559.

  2. Each StorageHandledBy__() provider must be supported with detailed documentation of its structure and the necessary metadata that its implementers must return.

  3. Each StorageHandledBy__() proposal must define the precise formatting of any message payloads that require signatures and complete descriptions of custom cryptographic techniques implemented for additional security, accessibility or privacy.

Implementation featuring ENS

ENS off-chain resolvers capable of reading from and writing to decentralised storages are perhaps the most complex use-case for CCIP-Read and CCIP-Write. One example of such a (minimal) resolver is given below along with the client-side code for handling the revert.

Contract

/* ENS resolver implementing StorageHandledByIPNS() */
interface iResolver {
    // Defined in EIP-5559
    error StorageHandledByIPNS(
        address sender,
        bytes callData,
        bytes metadata
    );
    // Defined in EIP-137
    function setAddr(bytes32 node, address addr) external;
}

// Defined in EIP-5559
string public gatewayUrl = "https://post.namesys.xyz"; // RESTful API endpoint
string public metadataUrl = "https://gql.namesys.xyz"; // GQL API endpoint

/**
* Sets the ethereum address associated with an ENS node
* [!] May only be called by the owner or manager of that node in ENS registry
* @param node Namehash of ENS domain to update
* @param addr Ethereum address to set
*/
function setAddr(
    bytes32 node,
    address addr
) external authorised(node) {
    // Get ethereum signer & IPNS CID stored on-chain with arbitrary logic/code
    // Both may be unique to each name, or each owner or manager address
    (address dataSigner, bytes ipnsSigner) = getMetadata(node); 
    // Construct metadata required by off-chain clients. Clients must refer to ENSIP-19 for directions to interpret this metadata
    bytes memory metadata = abi.encode(
        gatewayUrl, // Gateway URL tasked with writing to IPNS
        dataSigner, // Ethereum signer's address
        ipnsSigner, // IPNS signer's hex-encoded CID as context for namespace
        metadataUrl // GraphQL endpoint for encoded version (per ENSIP-16)
    );
    // Defer to IPNS storage
    revert StorageHandledByIPNS(
        msg.sender,
        abi.encode(node, addr),
        metadata
    );
}

Client-side

/* Client-side pseudo-code in ENS App */
// IPNS publishing provider
import IPNS from provider;
// Decode calldata from revert
const [node, addr] = abi.decode(callData);
// Decode metadata from revert
const [gatewayUrl, dataSigner, ipnsSigner, metadataUrl] = abi.decode(metadata);
// Fetch last IPNS version data from metadata API endpoint
let version = await fetch(metadataUrl, node);
// Deterministically generate IPNS and signer keypairs
let [ipnsKey, signerKey] = keygen(username, caip10, sigKeygen, password);
// Check if generated IPNS and signer public keys match the metadata
if (ipnsKey.pub === ipnsSigner && signerKey.pub === dataSigner) {
    // Sign the data with signer private key
    let signedData = await signData(node, addr, signerKey.priv);
    // Make IPFS content from signed data
    let ipfsCid = makeIpfs(signedData);
    // Create IPNS revision to publish from version data
    let revision = IPNS.v0(ipfsCid) || IPNS.increment(version, ipfsCid);
    // Publish revision to IPFS network
    await IPNS.publish(gatewayUrl, revision, signedData, ipnsKey.priv);
} else {
    // Tell user that derived keypairs did not match metadata
    throw Error('Bad Credentials');
}

Backwards Compatibility

Methods in this document are not compatible with previous EIP-5559 specifications.

Security Considerations

  1. Since both the ed25519 and secp256k1 private keys for IPNS and delegated signer respectively are derived from the same signature and hashKey, leaking one key is equivalent to leaking the other.

  2. Clients must purge the derived IPNS and signer private keys from local storage immediately after signing the IPNS update and off-chain data respectively.

  3. Signature message payload and the resulting deterministic signature sigKeygen must be treated as a secret by the clients and immediately purged from local storage after usage in the keygen() function.

  4. Clients must immediately purge the password from local storage after usage in the keygen() function.

Copyright

Copyright and related rights waived via CC0.


ENSIP-19: Off-Chain Data Write Handlers for ENS

Author @sshmatrix, @0xc0de4c0ffee
Status Draft

Abstract

This proposal outlines the methods that clients should implement in order to handle StorageHandledByDatabase() and StorageHandledByIPNS() reverts made by off-chain resolvers. Methods in this document are designed to securely and autonomously incorporate off-chain storages - in particular databases and decentralised storages - in ENS resolvers. By implementing these methods in their resolvers to store records, ENS service providers can take advantage of the cheap and often free nature of most off-chain storages.

Motivation

Gas fees are a burden on ENS users and off-chain resolvers such as CB.ID and NameSys have recently attempted to tackle this problem. CB.ID for instance offers off-chain records for their users on L2 and their centralised databases following the legacy EIP-5559. However, legacy EIP-5559 is incapable of deferring storage to decentralised storages such as those used by NameSys, e.g. IPFS, Arweave or Swarm. In its support for decentralised storages in particular, EIP-5559 falls short in terms of both UX and protocol interface. The new EIP-5559 introduces storage handlers for L1s, L2s, databases and decentralised storages through which L1 contracts can store data off chain at significantly lower cost. This approach, when combined with EIP-3668 (CCIP-Read), has the potential to nullify the gas costs associated with storing ENS records without compromising security and autonomy. The following text describes how ENS off-chain resolvers should implement StorageHandledByDatabase() and StorageHandledByIPNS() handlers as defined in the updated EIP-5559 to store users’ records. The architecture described in this proposal forms the backbone of the NameSys v2 Resolver.

Specification

Overview

EIP-5559 describes five parameters that should be defined by specific protocols: username, protocol, dataType, POST request formatting and paths. The following text defines these five parameters for ENS, along with a sixth ENS-specific metadata API.

1. username

username must be auto-filled by the client for ENS implementations of EIP-5559. This public field sets the IPNS namespace for a specific implementation.

/* Username is dependent on the storage type, which can be 'walletType' or 'nodeType'. See definitions in EIP-5559 */
// For ENS: node = namehash(normalise("domain.eth")), aka preimage(node) = "domain.eth"
let username;
if (storage === 'walletType') username = caip10;
if (storage === 'nodeType') username = "domain.eth";

where CAIP-10 identifier caip10 should be derived from the connected wallet’s checksummed address wallet and string-formatted chainId according to:

/* CAIP-10 identifier */
const caip10 = `eip155:${chainId}:${wallet}`;

2. protocol

protocol is specific to each ENS Resolver’s address (resolver) and must be formatted as:

/* Protocol identifier */
const protocol = `ens:${chainId}:${resolver}`;

3. dataType

Data types for ENS are defined by ENSIP-5, ENSIP-7 and ENSIP-9. These are the usual ENS records.

4. POST REQUEST

A. POST to IPNS

POST request for IPNS storage needs to be handled in a custom manner through the namesys-client (or w3name-client) client-side libraries. This is due to the secret nature of the IPNS private key, which limits all IPNS-related processing to the client side in order to protect user autonomy. The pseudo-code for autonomous IPNS storage handling is as follows:

/* POST-ing to IPNS */
import IPNS from provider;
let raw: Raw = rawSample; // See example below in text
let version = "0xa4646e616d65783e6b3531717a693575717535646738396831337930373738746e7064696e72617076366b6979756a3461696676766f6b79753962326c6c6375377a636a73716576616c756578412f697066732f62616679626569623234616272726c7572786d67656461656b667a327632656174707a6f326c35636276646f617934686e70656e757a6f6a7436626873657175656e6365016876616c69646974797818323032352d30312d33305432303a31303a30382e3239315a";
let ipfsCid = makeIpfs(raw);
let revision = IPNS.v0(ipfsCid) || IPNS.increment(version, ipfsCid);
await IPNS.publish(gatewayUrl, revision, raw, IPNS_PRIVATE_KEY);

where IPNS.publish() takes care of the IPNS signatures and publishing to IPFS network internally. The raw data object for indexing purposes is formatted as:

/* Type of raw data */
type Raw = {
  ens: string
  chainId: number
  approval: string
  records: {
    contenthash: {
      value: string
      signature: string
      timestamp: number
      data: string
    }
    address: [
      {
        coinType: number
        value: string
        signature: string
        timestamp: number
        data: string
      }
    ]
    text: [
      {
        key: string
        value: string
        signature: string
        timestamp: number
        data: string
      }
    ]
  }
}

Example of a complete raw object is shown below.

/* Example of a raw data object */
let rawSample: Raw = {
  "ens": "sub.domain.eth",
  "chainId": 1,
  "approval" : "0x1cc5e5efa312dc292560a26e3dba2584070b02ec203c51440a3e23d49ba56b342a4404d8b0d9dc26a94190691e47652343183bf1c64bf9c5081a2f1d887937f11b",
  "records" : {
    "contenthash": {
      "value" : "ipfs://QmRAQB6YaCyidP37UdDnjFY5vQuiBrcqdyoW1CuDgwxkD4",
      "signature": "0x0679eaedb300308680a0e8c11725e891d1500fb98b65d6d09d538e2655567fdf06b989689a01db312ad6df0752cbcb1756b3405a7163f8b4b7c01e70b1a9c5c31c",
      "timestamp": 1708322868,
      "data": "0x2b45eb2b0000000000000000000000005ee86839080d2593b30604e3eeb78271fdc29ec800000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000018000000000000000000000000000000000000000000000000000000000000000414abb7b2b9fc395910b4387ff69897ee639fe1cf9b79c31bf2d3743134e77a9b222ec175e563d13d60bc722c8829ce91d9af51bcd949816f95979abef4378d84e1c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041cbfb2300d60b8602db32ad4ac57279e7a3632e35bb5966eb686e0ac8ec8e7b4a6e306a13a0adee15fce5a9e2bbf3a016db023b0ab66f04bde62a13343287e3851b00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000026e3010170122022380b0884d9e85ef3ff5f71ea7a25874738da71f38b999dc8ffec2f6389a3670000000000000000000000000000000000000000000000000000"
    },
    "address": [
      {
        "coinType": 0,
        "value": "1FfmbHfnpaZjKFvyi1okTjJJusN455paPH",
        "signature": "0x60ecd4979ae2c39399ffc7ad361066d46fc3d20f2b2902c52e01549a1f6912643c21d23d1ad817507413dc8b73b59548840cada57481eb55332c4327a5086a501b",
        "timestamp": 1708322877,
        "data": "0x2b45eb2b0000000000000000000000005ee86839080d2593b30604e3eeb78271fdc29ec800000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000018000000000000000000000000000000000000000000000000000000000000000419c7c185335898d7ec57cffb842e88116a82f367237815f35e16d5f8b28dc3e7b0f0b40edd9f9fc48f771f921986c45973f4c2a82e8c2ebe1732a9f552f8b033a1c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041cbfb2300d60b8602db32ad4ac57279e7a3632e35bb5966eb686e0ac8ec8e7b4a6e306a13a0adee15fce5a9e2bbf3a016db023b0ab66f04bde62a13343287e3851b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001111000000000000000000000000000000000001"
      },
      {
        "coinType": 60,
        "value": "0x839B3B540A9572448FD1B2335e0EB09Ac1A02885",
        "signature": "0xaad74ddef8c031131b6b83b3bf46749701ed11aeb585b63b72246c8dab4fff4f79ef23aea5f62b227092719f72f7cfe04f3c97bfad0229c19413f5cb491e966c1b",
        "timestamp": 1708322917,
        "data": "0x2b45eb2b0000000000000000000000005ee86839080d2593b30604e3eeb78271fdc29ec800000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000018000000000000000000000000000000000000000000000000000000000000000419bb4494a9ac6b37d5d979cbb6c43cccbbd8790ebbd8f898d8427e1ebfd8bb8bd29a2fbc2b20b0a53c3fdde9dd8ce3df648112754742156d3a5ac6fd1b80d8bd01b000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041cbfb2300d60b8602db32ad4ac57279e7a3632e35bb5966eb686e0ac8ec8e7b4a6e306a13a0adee15fce5a9e2bbf3a016db023b0ab66f04bde62a13343287e3851b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001c68747470733a2f2f6e616d657379732e78797a2f6c6f676f2e706e6700000000"
      }
    ],
    "text": [
      {
        "key": "avatar",
        "value": "https://domain.com/avatar",
        "signature": "0xbc3c7f1b511de151bffe8df033859295d83d400413996789e706e222055a2353404ce17027760c927af99e0bf621bfb24d3bfc52abb36bcfbe6e20cf43db7c561b",
        "timestamp": 1708329377,
        "data": "0x2b45eb2b0000000000000000000000005ee86839080d2593b30604e3eeb78271fdc29ec80000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001800000000000000000000000000000000000000000000000000000000000000041dc6ca55c1d1c75eec223a7eb01eb5942a2bdb79708c25ff2827cfc0343f97fb76faefd9fbc40de5103956bbdc841f2cc2d53630cd2836a6b76d8d2c107ccadd21b000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041cbfb2300d60b8602db32ad4ac57279e7a3632e35bb5966eb686e0ac8ec8e7b4a6e306a13a0adee15fce5a9e2bbf3a016db023b0ab66f04bde62a13343287e3851b000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000046e6e6e6e00000000000000000000000000000000000000000000000000000000"
      },
      {
        "key": "com.github",
        "value": "namesys-eth",
        "signature": "0xc9c33ff219e90510f79b6c9bb489917ee6e00ab123c55abe1117e71ea0d171356cf316420c71cfcf4bd63a791aaf37388ef1832e582f54a8c2df173917240fff1b",
        "timestamp": 1708322898,
        "data": "0x2b45eb2b0000000000000000000000005ee86839080d2593b30604e3eeb78271fdc29ec80000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001800000000000000000000000000000000000000000000000000000000000000041bfd0ab74712b98bc472ef0e5bbb031acba077fc98a54cdfcb3f11e64b02d7fe21477ba5ea9d508a0265616d74a8df99b9c8f3c04e6bfd41f2df554fe11e1fe141c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041cbfb2300d60b8602db32ad4ac57279e7a3632e35bb5966eb686e0ac8ec8e7b4a6e306a13a0adee15fce5a9e2bbf3a016db023b0ab66f04bde62a13343287e3851b000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000046b6b6b6b00000000000000000000000000000000000000000000000000000000"
      }
    ]
  }
}

B. POST to DATABASE

POST request to a RESTful gateway handling database storage is simply the raw data object.
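For illustration, the POST could be as simple as the following; the sketch assumes the standard fetch API and the Raw type defined above.

/* Non-normative sketch: POST-ing the raw data object to a database gateway */
async function postToDatabase(gatewayUrl: string, raw: Raw): Promise<void> {
  const res = await fetch(gatewayUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(raw),
  });
  if (!res.ok) throw new Error(`Gateway rejected write: ${res.status}`);
}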

5. PATHS

EIP-5559 delegates the task of defining the paths for off-chain record files to individual protocols. The path scheme for ENS records is based on the RFC-8615 .well-known standard. The records for each ENS sub.domain.eth must then be stored in JSON format under a reverse-DNS type directory path using / instead of . as separator. For example, the paths for some example records are formatted as

  • text/avatar: .well-known/eth/domain/sub/text/avatar.json,
  • contenthash: .well-known/eth/domain/sub/contenthash.json, and
  • address/112: .well-known/eth/domain/sub/address/112.json etc.
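For illustration, the example paths above can be produced by a small non-normative helper such as:

/* Non-normative sketch: deriving RFC-8615 record paths for an ENS name */
function recordPath(ens: string, record: string): string {
  // "sub.domain.eth" -> "eth/domain/sub"
  const reversed = ens.split(".").reverse().join("/");
  // e.g. recordPath("sub.domain.eth", "text/avatar") === ".well-known/eth/domain/sub/text/avatar.json"
  return `.well-known/${reversed}/${record}.json`;
}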

5+. metadataUrl INTERFACE

metadataUrl for ENS must point to a GraphQL endpoint and must be formatted as described in ENSIP-16. This metadataUrl must additionally return the version value for each applicable ENS domain (or node) whose records are hosted on IPNS. This version value is incremented and then used by the gateway to publish new IPNS updates.
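For illustration, a client could fetch the version data roughly as follows; the GraphQL query shape shown here is purely hypothetical and the normative schema is defined by ENSIP-16.

/* Non-normative sketch: fetching IPNS version data from metadataUrl (hypothetical query shape) */
async function fetchVersion(metadataUrl: string, node: string): Promise<unknown> {
  const res = await fetch(metadataUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Hypothetical query; implementations must follow the ENSIP-16 schema
    body: JSON.stringify({ query: `{ domain(node: "${node}") { version } }` }),
  });
  const { data } = await res.json();
  return data?.domain?.version;
}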

Backwards Compatibility

None.

Security Considerations

Same as EIP-5559.

Copyright

Copyright and related rights waived via CC0.

The protocol you describe above can be implemented today using CCIP-read + the client logic above, as you’re using a synthetic key which doesn’t even require client interaction.

To me, you’re describing the format of the data that is passed to a specific gateway, not the protocol of how you get signed data sent to a gateway.

I thought the goal of CCIP-Write is to have a protocol that bakes the signing part into middleware (eg. ethers)—like how CCIP-Read is currently implemented—so any developer can use this mechanism by hinting at the callsite: contract.func(..., {enableCcipRead: true})


I think there are at least (3) signing flows:

  1. sign EIP-712 structured data → eth_signTypedData(signdata)
    • this probably should have to/from specified in the protocol and not by the contract
  2. sign a hash? → personal_sign(keccak(salt + signdata))
    • where salt is some CCIP-Write constant to avoid people using this as a “1-liner” Seaport/Blur drainer.
  3. send on chain X → sendTransaction()
    • call the same function wrapped: offchainWrite(setAddr(..., 60, 0x1234))
    • not sure how the target is set, maybe: (address, chain) = abi.decode(signdata)
    • as discussed before, should probably be a separate thing, ChainWrite(chain, callData, extraData)

Signing arbitrary data is way too dangerous. Signing a hash is dangerous, but useful for situations where you want a small proof.


Extending my comment from the prior thread about special signaling with OffchainLookup() using a null callback, consider the following:

  • OffchainLookup() with callback 0x0 is terminal (no reply required)
  • OffchainLookup() with callback 0x1 is sign method (1)
  • OffchainLookup() with callback 0x2 is sign method (2)
  • OffchainLookup() with callback 0x3 is sign method (3)

For 0x0, this is just an improvement to CCIP-Read to elide a final RPC void() call.

For 0x1 and 0x2, the normal calldata would actually be abi.encode(calldata, signdata) where the CCIP JSON payload changes from {sender, data} → {sender, data: calldata, signdata, signature}.

Obviously, OffchainSign(sender, urls, calldata, signdata, extraData) would be cleaner, but I’m just trying to illustrate how similar this is to OffchainLookup().


I think a question would be: how does the developer specify the signer? If the signer was a callback, your “Client-side” example above would basically be the callback. Whereas the default signer would use the current signer to issue the corresponding signing request, eg. eth_signTypedData.

Example using default signer:

let provider = new ethers.BrowserProvider(window.ethereum);
let signer = await provider.getSigner();
let contract = new ethers.Contract("ccip.write.eth", 
   ["function setAddr(bytes32 node, address addr) view external"],
signer);
await contract.setAddr(namehash("raffy.eth"), 0x1234, {enableCcipWrite: true})

  1. enableCcipWrite should imply eth_call
  2. contract reverts with OffchainSign()
  3. custom signer wasn’t specified, using current signer
  4. signature = eth_signTypedData w/ signdata
    • dialog would pop-up
      • reject sign → throws
  5. send {sender, data, signdata, signature} to urls[]
  6. throw unless response.status = 200 (or something)

Or, your example:

// I don't know where (username, password) come from
let node = namehash("raffy.eth");
await contract.setAddr(node, 0x1234, {
    enableCcipWrite: true,
    async sign(tx, calldata, signdata) {
        // tx = abi.encodeCall(setAddr, (node, 0x1234)
        // calldata/signdata from OffchainSign()
        let [metadataUrl, dataSigner, ipnsSigner] = abi.decode(signdata, ["string", "address", "address"]);
        let version = await fetch(metadataUrl, node);
        let [ipnsKey, signerKey] = keygen(username, caip10, sigKeygen, password);
        if (ipnsKey.pub !== ipnsSigner) throw new Error("ipns signer");
        if (signerKey.pub !== dataSigner) throw new Error("data signer");
        return signData(node, addr, signerKey.priv);
    }
})

  1. (same as above) contract reverts with OffchainSign()
  2. custom signer is specified
  3. sign() produces signature
    • show custom dialog and ask for login/password
  4. (same as above) send and throw unless response is OK

I don’t think anything prevents that logic from being inside of a standard helper contract. You’d just need to OffchainLookup() first from the write(), which fetches the metadata, and then feed that back into the contract, which calls the helper contract, which also reverts OffchainLookup(), but with the synthetic signed request, which is set to a trustless offchain gateway, which sends that request to IPNS publish().

You are right, it can all be done with CCIP-Read plus a write-inducing callback, as Draft-1 was doing at the beginning. Conversations with the ENS Labs team, however, suggest that

a) it would amount to overloading the CCIP-Read with helper contracts whereas similar functionality can be achieved with a much simpler standalone protocol,

b) you probably don’t want any signature baking in the middleware since each implementing protocol would have its own requirements, and

c) the current draft bears some resemblance to the previous EIP-5559, which was based on a similar architecture and didn’t have a callback.

I generally agree with the sentiment - despite initially drafting an idea very similar to yours - that having CCIP-Read do CCIP-Write by hijacking the callback leads to an overly tedious overall protocol. There is no need to confuse the broader Ethereum community by complicating an existing protocol. We as serious users of CCIP-Read are accustomed to it but perhaps not everyone else wants that.

I’m just using the overload as an example.

To me, the CCIP-Write protocol is something like:

  • the structured revert:
    OffchainSign(sender, urls, calldata, signdata, signStyle, callback, extraData)

  • how signdata and signStyle are used:

    1. signature = eth_signTypedData(signdata)
      • this requires enforced to/from
    2. signature = personal_sign(keccak256(signdata) w/salt)
      • this requires salt
    3. expect client-provided custom handler that you pass:
      • async function(tx, calldata, signdata) → (signdata, signature)
  • what is sent to the CCIP server: {sender, calldata, signdata, signature}

  • what you do on success:

    • callback = null → no reply (optimization)
    • otherwise → contract.callback(response, extradata)
  • what happens on failure:

    • signing errors (client side: rejected, invalid, etc.)
    • server errors (http status code + message)

And I claim on top of this, you can use signStyle = 3 to do your IPNS client-side derived-key sign using (username, password) and also deploy a trustless IPNS CCIP-Write gateway that accepts signed data using that technique, and relays them to an IPNS gateway.

However, signStyle = 3 is really just 2 x OffchainLookup() + extra client code for a non-traditional signature

3668 was actually written for ENS, so this isn’t really accurate.

No need to throw shade on DeFi :laughing:

I think this first paragraph can probably be omitted entirely, though.

If this is going to be a PR to 5559, it shouldn’t refer to 5559 in the third person. This will replace that spec. In general the spec still needs rewriting to be a comprehensive replacement that doesn’t reference the earlier version.

I mentioned this on the previous draft: I don’t see any reason to require all reverts to have the same signature and pack their domain-specific data into a metadata field. It seems clearer and cleaner to just encode that data as arguments of the revert directly.

Is Solana support something we really need to encode into 5559? I don’t see anyone demanding it, and each additional storage handler adds complexity for implementers.

Relatedly, this spec should specify whether clients are required to implement all the handlers or only some - and if only some, we need to consider how they can tell an unknown 5559-based revert apart from any other revert reason.

I think you mean to say that they should be base58 decoded and stored natively; hex is simply a representation commonly used by Ethereum libraries; data in Ethereum is natively stored as bytes.

It seems a little odd to show first unpacking these values, then packing them back up again. I think it would be clearer to leave out the first part and note that the derivation of those two values is up to the implementer.

Isn’t callData supposed to be opaque to the implementing client?

Where does account come from? Isn’t access control also something that should be opaque to the implementer?

I mentioned this on the previous draft - a contract being able to prompt a user to make an arbitrary transaction to any chain is an enormous security vulnerability. Absent some other mitigation, we need to put restrictions on the payload.

What is the mechanism by which it would not be included?

Why does being offchain preclude us from deriving an address for the signer?

I would say instead that “the signature will be produced by the account with address dataSigner”; eth_sign is an implementation detail. It’s also worth noting that this could facilitate apps prompting the user to change wallets where necessary.

It would help to state explicitly that this standard doesn’t describe a mechanism for this.

Since both keys are derived from a single signature, can’t we instead encode the address of the signer from which the keys are derived?

This is a basic requirement in order to implement this standard; I think we need to tell users how they can retrieve the version number.

Can you clarify? How would these standards help with this?

If you’re not going to specify key derivation inline here, you at least need to mention that the key is derived, and where to read that section of the spec.

Can you be more specific here?

Using what API?

I don’t think there’s any reason to include the signature in the salt, since it’s already included in the kdf.

This still needs to be more prescriptive. Say I’m implementing this in the ENS Manager app; how do I know how to treat the username?

This seems to add a lot of complexity to the protocol. Could we just specify one or the other? I’d be in favor of a single signer per wallet for simplicity; if users want to separate their names they can use multiple accounts, and this then mirrors the security profile of the names themselves.

I don’t think there’s any practical way to do this in a CCIP-Read callback that doesn’t just work out to relying on the gateway to be honest.

Based on reading this, I don’t understand how I would implement the signing process for StorageHandledByDatabase.

I think we should specify one mechanism or the other, not both.

It’s not clear to me why an additional signature is required here; can you elaborate?

Is this intended to be part of the “Revert StorageHandledByDatabase()” section? I think it would be clearer if most of these sections were moved up the spec to the main section on each storage type, with only the bits that are actually shared kept in common down below.

I don’t think there’s anywhere we specify a format like text/avatar or address/60. This needs more detail and some examples.

Why is this nested? Is it intended to represent arbitrary data the app supplies?

And what “off-chain data file”? Is this specifying the encoding of IPFS files? Wouldn’t that be up to the CCIP-Read handler to define and interpret? Why prescribe it here?

How can anyone implement this spec if it doesn’t specify an API for the POST request?

What do you mean by “protocol” in this context? Are you referring to new extensions of 5559 for new storage types? Most of this seems like something the implementer can specify, and needn’t standardize unless they want others to interoperate with it; in most cases the API that a CCIP-Read enabled contract uses to talk to its gateway is an internal implementation decision.

I think that if 5559 is written correctly, there should be no need for a separate ENS-specific spec; a client should be able to simply implement 5559 and leave the ENS-specific details to the resolver contract and its gateway.

A spec can’t and shouldn’t require a specific client library. It needs to specify things at a sufficient level of granularity that you could write a client library based on it.

I don’t think there’s any reason to specify internal data formats here, though - those are an implementation detail for specific resolvers to implement.

Can I suggest that going forward, GitHub would be a more productive way to collaborate on this? A PR has all the tools built in for commenting on specific lines, tracking revisions etc; doing this all via forum threads makes it difficult to keep track of the spec as it evolves.


:white_check_mark:


:white_check_mark: That’s bad wording on my part. Will fix this; metadata is only meant to be representative of some arbitrary data in whatever format. It doesn’t have to fit inside metadata; reverts are therefore not expected to have the same signature.


:raised_hand: We have immediate plans for offering ENS on Solana. It’s the second best blockchain in its class and ENS is multi-chain.

Yes, that is correct. Handlers are independent of each other, and a client may implement some or all of them based on the services that it intends to offer. If the ENS App wants to support Solana-based resolvers, it should implement the Solana handler.

Similar to how clients handle a list of standard reverts by comparing their bytes4 selectors against some EIP, clients in this case must refer to this EIP for the standardised off-chain handler reverts. Once they receive a revert, they can determine whether it is an off-chain handler (by comparing its bytes4 selector), check whether they support it, and then inform the user or take appropriate action accordingly. A contract implementing off-chain handlers will have these bytes4 values in its interface; you are right though that these values would need to be distinguishable from other reverts (likely through a standard view interface[?]). I will add this to the draft; it is indeed missing.
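
For illustration, a client could gate its handling on the revert selector along these lines; the parameter lists below are placeholders of mine, since the exact error signatures are whatever the draft ends up defining:

```typescript
import { ethers } from "ethers";

// Placeholder handler signatures; the authoritative error definitions live in the EIP.
const handlerAbi = new ethers.Interface([
  "error StorageHandledByL2(uint256 chainId, address contractAddress)",
  "error StorageHandledByDatabase(string gatewayUrl, bytes metadata)",
  "error StorageHandledByIPNS(address ipnsSigner, string gatewayUrl, bytes metadata)",
]);

// Returns the parsed handler revert if its bytes4 selector is recognised, else null.
function parseWriteDeferral(revertData: string): ethers.ErrorDescription | null {
  try {
    return handlerAbi.parseError(revertData); // null (or throw) for unknown selectors
  } catch {
    return null;
  }
}

function routeRevert(revertData: string): void {
  const deferral = parseWriteDeferral(revertData);
  if (deferral === null) {
    // Not an off-chain write handler: surface as an ordinary revert
  } else if (deferral.name === "StorageHandledByDatabase") {
    // Supported: defer the write to the database gateway named in the revert
  } else {
    // Recognised but unsupported handler: tell the user which storage type this is
  }
}
```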


:white_check_mark:


:white_check_mark: You are right. The encodings should be part of the revert or a standard interface. We’ll think about this a bit and come up with a concise method.


If the dataSigner is not stored on-chain, as per the Off-Chain Data Signers section.


That’s a typo.

Yes, dataValue is arbitrary data.


:white_check_mark: It doesn’t. This is a mistake in the text; it will be part of the purge.


:white_check_mark:


True, this is a pain point of IPNS. In the case of the ENS App, version data is expected to be part of the GQL metadata, but for other clients we may need to put back the metadataUrl, which will then require additionally specifying an API for that URL endpoint.


:white_check_mark: This needs more elaboration.


:white_check_mark:


Each gatewayUrl is a service here, and each service defines its own API in practice. It is up to the client and the service provider to agree on an API that works for them internally.

Consider this for comparison: we can only define how to set addr60 for an ENS domain via its namehash, but it is up to ethers and viem as service providers to structure their own APIs. ethers may do ethers.setAddr60("domain.eth", "0xaddr") while viem may do viem.setValue("addr60", "0xaddr", viem.namehash("domain.eth")).

In the case of IPNS, a similar situation arises. We can only prescribe the contents of the arguments, but their structure depends on each individual service. For example, the ENS App would be using the w3name API for IPNS, but it is in no way a standard, in the same way that ethers is not a standard in the Ethereum ecosystem. To put this in your own words:

Like you pointed out for CCIP-Read, in most cases the API that a CCIP-ReadWrite enabled contract uses to talk to its gateway is an internal implementation decision.


Let’s simplify this even further then. We’ll limit username to walletType storage and remove nodeType storage. This will kill the entire concept of storage types and simplify this part of the implementation quite a bit.


Yes, but that would require the EIP itself to become increasingly complex. For example, consider the case where the off-chain data must be arranged according to the following structure for ENS: eth/domain/sub/text/avatar.json. Defining this in an abstract manner puts undue pressure on this EIP and would lead to a bunch of random sub-sub-sections. I don’t see how either the contract or the gateway can reasonably handle this, since writing the data in this protocol-specific reverse-DNS format is the client’s job (see the sketch below).

Unlike the web2 world, where clients are merely middlemen between users and servers, crypto-native clients are expected to do significantly more work to maintain users’ autonomy, privacy and security. There are many examples of implementations that cannot offload specific work (or storage) to either the contract or the gateway; the nodeType feature that we are about to purge in the interest of simplicity is one such example.
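
As a minimal sketch of that client-side job for the ENS example above (the .json suffix and the exact layout are protocol-specific choices, not something this EIP prescribes):

```typescript
// "sub.domain.eth" + record key "text/avatar" → "eth/domain/sub/text/avatar.json"
function recordPath(ensName: string, recordKey: string): string {
  const labels = ensName.split(".").reverse(); // ["eth", "domain", "sub"]
  return `${labels.join("/")}/${recordKey}.json`;
}

// recordPath("sub.domain.eth", "text/avatar") === "eth/domain/sub/text/avatar.json"
```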


No, this is simply to restrict the scope of the derived IPNS CID to a specific contract or protocol by setting protocol equal to ens:chainId:0xresolverAddress. For example, if multiple ENS resolvers implement this EIP without such scoping, all their derived IPNS CIDs will be the same and they’ll end up overwriting each other. We never want users to reuse the same IPNS CID across resolvers.
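
For illustration, the protocol string would enter the key derivation as a scoping input, so that the same wallet signature yields different IPNS keys for different resolvers or chains. The sketch below assumes HKDF-SHA256 with protocol and username as salt/info - those inputs and the KDF itself are my assumptions here; the real derivation is whatever the spec’s key derivation section prescribes:

```typescript
import { hkdfSync } from "node:crypto";
import { ethers } from "ethers";

// Sketch only: derive a 32-byte IPNS key seed from a wallet signature,
// scoped by a protocol string such as "ens:1:0xResolverAddress".
// The actual KDF, salt and info layout (e.g. whether a password also feeds in)
// are defined by the spec, not here.
function deriveIpnsSeed(signature: string, protocol: string, username: string): Uint8Array {
  const ikm = ethers.getBytes(signature);     // the signature is the keying material
  const salt = ethers.toUtf8Bytes(protocol);  // scope: per chain and per resolver
  const info = ethers.toUtf8Bytes(username);  // scope: per user/name
  return new Uint8Array(hkdfSync("sha256", ikm, salt, info, 32));
}
```

With different resolvers in the protocol string, the derived seeds - and hence the IPNS CIDs - no longer collide.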


:white_check_mark: Correct, this will be purged.


No, this section is common to both; the same record signature format applies to both the IPNS and database handlers.


Yes, it is better to go with the off-chain signer, since we are doing all this to save on gas and reduce on-chain references.


This is the signature which approves the delegated off-chain dataSigner that is signing all the ENS records.


:white_check_mark: This needs better rephrasing to point to the common section further down the document.


This is referring to the CCIP-Read Compatible Payload section, which will be removed. The approval signature here is the additional signature you asked about above. It approves the off-chain dataSigner, which then does not need to be included in on-chain metadata; this also partly answers your question below:


We need the IPNS CID explicitly to refer to the storage container during CCIP-Read, e.g. bafySomeIpnsCid.ipfs2.eth.limo/eth/domain/sub/text/avatar.json.


What sort of restrictions would you suggest? Shall we ask the L2 teams to chime in? Require a signature before routing eth_call?


Absolutely! Answering this was rather tedious :sweat_smile: Shall I open a PR into ENSIP repo first to work on this draft?

Right, but what I mean is that in the cases in this doc, there’s no need to have metadata at all; instead the fields that are currently packed in it can be standardized as part of the revert directly.

I’m glad to hear you want to work on it, but supporting an entirely new chain architecture here is a big ask. I’d personally rather keep this spec focused on EVM-equivalent chains and offchain storage, and move any other chain support to another spec. I know I’m not qualified to comment on the technical soundness of Solana proposals.

The larger issue here is that if an app encounters a new revert type that it hasn’t seen before, it won’t even be able to offer a “we don’t support this storage type” error - it will look like any other non-storage-related revert.

How would version get into the GQL data? I don’t think we have any way to track that.

Surely there’s some standardized API implemented by IPFS gateways to fetch this information?

For this to be at all useful, there must be a standardized API used by gateways here. Otherwise, there’s no point in encoding this as a single revert type at all, since each client would need specific support for each service. It’s also not clear to me how a client would even know which service it’s talking to here.

I don’t understand; why can’t the contract simply specify the path to write to as part of the revert data? There’s no need for the client to understand path schemes itself.

Wouldn’t the resolvers just each return their own CID, as set by the user?

How is the record signature used by the IPNS handler?

Can you elaborate on how this is intended to work?

Can’t the contract derive this itself?

The simplest but least flexible restriction is to require that the calldata on the L2 match the original calldata. Anything that allows varying it is going to require really careful consideration. For example, we could insist that only a specific method is allowed to be called on the L2.
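
As a hedged sketch of the simpler of those two restrictions, expressed as a client-side guard (the selector allow-list is my extrapolation of the “only a specific method” idea):

```typescript
// Refuse to prompt the user unless the deferred L2 calldata matches the original
// calldata, or at least targets an explicitly allow-listed function selector.
function isDeferralAllowed(
  originalCalldata: string,
  l2Calldata: string,
  allowedSelectors: string[] = []
): boolean {
  if (l2Calldata.toLowerCase() === originalCalldata.toLowerCase()) return true;
  return allowedSelectors.includes(l2Calldata.slice(0, 10).toLowerCase()); // "0x" + 4 bytes
}
```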

Yes please. I can’t be sure I responded to all of your points sensibly above, because your replies lack enough context without me painstakingly cross-referencing each reply against my original quotes.

:white_check_mark:

:white_check_mark: That’s fair. We’ll move it to a separate proposal for later.

I meant to say that we’ll provide the version data to the ENS App through our GQL endpoint as per ENSIP-16. Our plan is to implement ENSIP-16, which will provide the ENS App with a graphqlUrl endpoint where it can read off-chain metadata such as the version. You are right that the ENSIP-16 draft currently does not have version specified in the GQL schema, but we talked briefly about this during the meeting and we intend to request a field contextVersion: in type Domain via a PR.
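
Assuming that contextVersion field does get added, a client fetch against the ENSIP-16 graphqlUrl might look roughly like this; the query shape is a guess of mine, since the field is only being proposed and is not in the current ENSIP-16 schema:

```typescript
// Hypothetical: contextVersion is a proposed field, not part of ENSIP-16 today,
// and the exact query shape depends on the final schema.
async function fetchContextVersion(graphqlUrl: string, namehash: string): Promise<number | null> {
  const query = `query ($id: String!) { domain(id: $id) { contextVersion } }`;
  const res = await fetch(graphqlUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id: namehash } }),
  });
  const { data } = await res.json();
  return data?.domain?.contextVersion ?? null;
}
```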

Sadly, no. Autonomous IPNS pinning is a relatively new feature with no standard interface, and it is left up to IPNS service providers to handle the version info for their users. In practice, one can find custom ways for specific implementations (like I mentioned using ENSIP-16), but no standard exists so far that can accommodate this part of the spec.

Same situation as above; no standard exists so far that can make this part of the spec service-agnostic. This is unfortunate for writing a standard based on IPNS, but it is the reality at this moment. Even at its best, the IPNS stack gives service providers complete freedom to determine how to store & supply version data for their users and how to interface with IPFS nodes internally. Until w3name came along, service providers were going as far as keeping the users’ private keys! :sweat_smile:

:white_check_mark:

Yes, but each CID is generated by the user using keygen() during the initial one-time setup. If there is no protocol: specifier, they will end up generating the same key each time and setting that same value on-chain for each resolver.

The only difference between the IPNS and the database handler is the additional ipnsSigner, and the rest is exactly the same. In both cases,

  • multiple records are signed by a derived off-chain dataSigner,
  • the dataSigner is approved by the approval signature, and
  • the format for the CCIP-Read data: payload is simply:
    dataValue + dataSignature + dataSigner + approval

In both cases, the CCIP-Read resolver verifies that

  • the dataSigner in data: is approved by the owner or manager of a node through approval, and
  • each record is signed by dataSigner via dataSignature.

In the case of the IPNS handler, an additional IPNS key is derived; everything else is the same as described above. This makes sense, since the only difference between the two handlers is the storage container; the records, their signers and their signatures are the same irrespective of where you store them. In both cases, two signatures are required during a (batch) record update: one for keygen() and one for approval, irrespective of how many records are in the batch.
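
To spell out the shared format, here is one plausible way a client could pack the data: payload before handing it to CCIP-Read; the concrete ABI layout (and whether it is ABI-encoded at all) is up to the spec and the resolver, so treat this as a sketch only:

```typescript
import { ethers } from "ethers";

const coder = ethers.AbiCoder.defaultAbiCoder();

// dataValue + dataSignature + dataSigner + approval, as described above.
// The concrete encoding is defined by the spec/resolver; this is one plausible layout.
function packCcipReadData(
  dataValue: string,      // the record value (bytes)
  dataSignature: string,  // record signed by the derived off-chain dataSigner
  dataSigner: string,     // address of the derived off-chain signer
  approval: string        // owner/manager signature approving dataSigner
): string {
  return coder.encode(
    ["bytes", "bytes", "address", "bytes"],
    [dataValue, dataSignature, dataSigner, approval]
  );
}
```

On the read path, the resolver then performs exactly the two checks listed above: recover the approval against the node’s owner or manager, and recover dataSignature against dataSigner.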

It would have been nice for L2 service providers to chime in at this point. We’ll discuss this internally as well and see what can be done.

No, that’s fundamentally impossible (without revealing the private key).

I’ll wrap up these final few Q&As here (if any remain) and then post the next draft as a PR into the ENSIP repo.

That’s not at all how ENSIP-16 is intended to be used; it’s supposed to be for metadata relating to the name itself - subdomains, records, etc.

If this spec is supposed to specify how to update records via IPFS, it needs to have a standardized, in-protocol way to fetch the version. Leaving it open to the implementer means that no two implementations will be compatible.

But you can write a standard. You’re doing so right now! This standard should specify how the HTTP gateway for offchain databases should work. Again, if you don’t, no two implementations will be interoperable and we may as well not have a standard at all.

Can’t we include the address of the resolver in the signed data, so each one gets its own keypair?

Surely this shouldn’t be necessary for the IPNS-based integration? If the content is found at the CID specified, that’s verification enough that it was written by an authorized user.

That’s exactly what the protocol: field does, e.g. ens:${chainId}:${resolver} = ens:1:0x23E...0aC. Probably better to use eth instead of ens though, e.g. eth:1:0x23E...0aC, for CAIP-10 compatibility.

:white_check_mark: That is correct, this can now be removed! Context:

In our initial, hyper-secure implementation, we demanded explicit verification of both for extra security; we imposed this because we offer an IPNS key export feature (so users can export their IPNS key for additional pinning with their favourite service). We thus required that both keys be derived from independent signatures (since leaking one exported key would mean leaking the other if they were derived from the same signature) and that the signer be independently verifiable from the IPNS container. But now that we have decided to derive both keys from the same signature, the independent verification no longer provides extra security. So yes, there is no point in having it anymore and it will be removed.

:white_check_mark: Based on this feedback, I’ll include the following in Draft-6:

  • simple API for endpoint where clients can fetch the version data,
  • the current implementation’s POST API as the standard for the IPNS handler - it’s the only one that exists so far and is used by the ecosystem, so it makes sense to state it explicitly in the EIP, and
  • fixes addressing all of the other comments above.

I’ll post the next draft as a PR on GitHub :pray:


Conversation has moved to GitHub, starting with Draft-6: ERC-5559 by sshmatrix · Pull Request #224 · ensdomains/docs

This work has been merged into Ethereum ERCs as ERC-7700
