Exploring Agent Representation on ENS

Three proposals have emerged for how ENS should handle AI agent identity. This note argues that the Node Metadata Standard may be broad enough to encompass the key functionality described in the other two approaches. Framed this way, the question is less about which proposal should win, and more about whether ENS needs additional global agent-specific keys when those patterns may already be expressible through schemas attached to nodes.

Following last week’s AIxENS call, I was asked to do a writeup on the three mechanisms currently in discussion:

  • ENSIP-26 — a single agent-context text record key pointing to a free-form bootstrapping document
  • Agent Identity Profile — three required text record keys plus a cryptographically signed off-chain manifest
  • Node Metadata Standard — a general-purpose node classification and metadata framework for all ENS names, using class and schema text records to attach typed, validated, self-describing metadata to any node

What Each Proposal Is Actually Doing

ENSIP-26 is a minimal discovery spec. It standardizes the key name agent-context and leaves the value format entirely open — plain text, Markdown, YAML, JSON, anything goes. The intent is to give agentic systems a single well-known place to find bootstrapping information for an ENS name. The spec explicitly analogizes this to index.html: a stable entrypoint, not a schema.

The Agent Identity Profile is a security-oriented agent identity spec. It introduces three required text record keys (agent-version, agent-controller, agent-manifest) and mandates a signed off-chain JSON/CBOR document verified against the controller via EIP-712.

The Node Metadata Standard is the most general of the three. It introduces two global text record keys — class (a controlled vocabulary labeling the role of a node: Agent, Delegate, Treasury, Workgroup, etc.) and schema (a pointer to a JSON Schema defining the typed metadata attributes for that node). It includes inheritance semantics, client validation rules, and parameterized key names.
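As a rough illustration of what a client reading those two records might do (record values here are hypothetical, and a plain dict stands in for a resolver):

```python
# Illustrative only: a plain dict stands in for an ENS resolver, and the
# record values below are hypothetical. A real client would query the
# resolver's text record for each key.

def read_node_metadata(text_records: dict) -> dict:
    """Return the node's classification and schema pointer, if present."""
    return {
        "class": text_records.get("class"),    # e.g. "Agent", "Treasury"
        "schema": text_records.get("schema"),  # pointer to a JSON Schema
    }

# A node classified as an Agent, with a schema attached:
node = {
    "class": "Agent",
    "schema": "https://ens.domains/schemas/agent/v1.0",
}
print(read_node_metadata(node)["class"])  # -> Agent
```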

All three are motivated by the same underlying problem: as agents and on-chain organizations proliferate, clients need a deterministic way to discover who controls an entity and what it can do. The disagreement is less about the problem itself than about where to draw the line between what ENS standardizes directly and what it leaves to convention or schema.


How the Node Metadata Standard Can Encompass Both Approaches

When we designed the Node Metadata Standard, we assumed requirements would continue to evolve. New use cases will need new fields, and the intended meaning of those fields is often scattered across separate proposal documents rather than traveling with the data itself. Part of the appeal of a schema-based approach is that many such patterns can be represented within one general framework, without necessarily requiring a new ENSIP each time a domain-specific metadata pattern emerges.

From that perspective, both ENSIP-26 and the Agent Identity Profile can be understood as agent-specific metadata profiles that may be expressible within the Node Metadata Standard. The key point is not that these proposals are invalid, but that their core semantics may already fit inside a more general structural layer.

ENSIP-26 as a schema:

{
  "$id": "https://ens.domains/schemas/agent-context/v1.0",
  "title": "Agent",
  "description": "An autonomous or operator-assisted agent acting on users' behalf.",
  "type": "object",
  "properties": {
    "agent-context": {
      "type": "string",
      "description": "A bootstrapping document describing the agent's capabilities and interaction patterns. May be plain text, Markdown, YAML, or JSON."
    }
  }
}

Agent Identity Profile as a schema:

{
  "$id": "https://ens.domains/schemas/agent/v1.0",
  "title": "Agent",
  "description": "A verifiable agent identity anchored in ENS, with a cryptographically signed manifest.",
  "type": "object",
  "properties": {
    "agent-version": {
      "type": "string",
      "format": "semver",
      "description": "Profile spec version for compatibility gating."
    },
    "agent-controller": {
      "type": "string",
      "description": "CAIP-10 address, raw address, or ENS name of the authority that signs and rotates the agent manifest."
    },
    "agent-manifest": {
      "type": "string",
      "description": "CID or URL pointing to the signed agent manifest. Clients SHOULD prefer the resolver's contenthash (EIP-1577). Signature MUST be verifiable against agent-controller via EIP-712."
    },
    "agent-endpoint": {
      "type": "string",
      "description": "Optional base endpoint for agent calls (A2A, REST, etc.)."
    },
    "agent-did": {
      "type": "string",
      "description": "Optional DID URI for DIDComm or Verifiable Credential linkage."
    }
  },
  "required": ["agent-version", "agent-controller", "agent-manifest"]
}

On this reading, both proposals fit cleanly. Every key they introduce becomes a typed property with a human-readable description, format hints, and required constraints — all of which are difficult to express in a flat text-record-only approach.

This does not mean the other proposals have no role. It means they may be better understood as profiles, conventions, or application-layer patterns that can sit within a broader metadata framework, rather than as entirely separate namespace primitives.

The Self-Describing Advantage

This is where the structural difference between the approaches becomes most visible.

ENSIP-26 and the Agent Identity Profile define their keywords in prose, inside their respective specification documents. That means the semantics of those keys exist primarily in human-readable documentation. Every client that wants to understand what agent-controller means has to have that knowledge hardcoded at compile time, derived from reading the ENSIP or profile specification. There is no runtime path from the data to its own meaning.

The Node Metadata Standard is different. Because the schema travels with the node — referenced via the schema text record — a client can fetch it at runtime and get:

  • Human-readable descriptions of every field, surfaceable directly in UIs
  • Type and format information for input validation
  • required constraints to identify non-conformant records without prior ENSIP knowledge
  • Inheritance semantics so that child nodes do not redundantly repeat parent-level metadata
  • $id and title for schema versioning and programmatic identification

A client implementing the Node Metadata Standard can encounter a schema it has never seen before, render a meaningful description of what the node represents, validate user inputs against it, and flag missing required fields — all without hardcoded knowledge of that specific schema. In that sense, the documentation travels with the data. That is a different level of interoperability from a model where meaning lives primarily in an external specification document.
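As a sketch of that flow, here is a deliberately minimal validator that reads only the `required` and `type` keywords from a fetched schema; a real client would delegate to a full JSON Schema implementation:

```python
import json

def validate_against_schema(records: dict, schema: dict) -> list:
    """Deliberately minimal: checks only 'required' and string typing.
    A real client would use a full JSON Schema validator."""
    problems = []
    for key in schema.get("required", []):
        if key not in records:
            problems.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in records and spec.get("type") == "string" \
                and not isinstance(records[key], str):
            problems.append(f"{key} should be a string")
    return problems

# A fetched schema the client has never seen before:
schema = json.loads("""{
  "type": "object",
  "properties": {"agent-controller": {"type": "string"}},
  "required": ["agent-controller"]
}""")
print(validate_against_schema({}, schema))
# -> ['missing required field: agent-controller']
```

The point of the sketch is that nothing here is specific to agents: the same loop handles any schema the client encounters at runtime.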

Semantic Ambiguities Around ENSIP-26

The agent-context record is trying to function as a bootstrapping document for agents — something like a SKILL.md file, which tells a client what an agent can do and how to invoke it. But a real SKILL.md is not just a description: it can include executable scripts, invocation patterns, and structured instructions that make the skill operational for a client. ENSIP-26 does not attempt to capture those now-familiar properties.

Additionally, agent-context sits in roughly the same conceptual space as SKILL.md (Anthropic/MCP), AgentCard (A2A), and MCP capability descriptors — all of which are already trying to answer the question, “how does a client learn what this agent does?” In that sense, introducing a new term here may increase the amount of mapping implementers have to do, unless it also introduces distinctly new semantics.

There is also a naming question. The word context already carries a heavily loaded meaning across the AI stack. Context windows, context injection, and agent memory are all established technical primitives referring to the state an agent carries between turns or the material supplied to a model at inference time.

When an agent framework encounters a text record named agent-context, a natural reading is that it contains memory state or injected model context, not necessarily a capability description or bootstrapping document.

For that reason, if ENSIP-26 continues in its current general direction, names such as agent-bootstrap, agent-index, or agent-install may be semantically clearer. More broadly, this is an example of the kind of naming ambiguity that a schema-based approach can reduce: a schema can make the intended meaning explicit at runtime rather than relying entirely on the record name and external prose.

Possible Futures

The ENS namespace is not infinitely large. Every new global key we standardize creates a coordination cost for future implementers. That cost is worth paying when a key introduces genuinely new semantics. It is less obviously worth paying when the same pattern may already be expressible within an existing framework — especially one that also provides type information, runtime discoverability, and validation.

This is why JSON schemas attached to nodes may offer a consistent way to navigate both current and future requirements without the overhead of a new ENSIP for every new metadata term we want to standardize.

We are already seeing signs of this through collaborations with other ENS ecosystem players. Enscribe has found value in this approach for contract metadata representation. The Public Goods team is exploring how to express grants on-chain. Neither of these efforts required changes to the ENSIP process itself — they are schema definitions, published and referenced, and fully expressible within the Node Metadata Standard.

This approach keeps ENS’s role clearer: a naming and identity layer with typed, verifiable metadata, without binding the namespace too tightly to domain-specific terminology that may shift as the agent ecosystem matures.

It is also plausible that agent wallets, skills, and/or services could be represented under the same generalised approach.

To be clear, this is not a criticism of the motivations behind either ENSIP-26 or the Agent Identity Profile. Both are responding to real interoperability and trust problems. The narrower argument here is that ENS may already have a structural tool capable of encompassing those needs, and that using it consistently could serve the ecosystem better than accumulating agent-specific keywords without a shared type system underneath them.


Three things really stand out to me about this exposition:

  1. the synthesis of what each proposal does and how the first two fit inside the third
  2. the step-change in client DevEx by introducing representation at runtime, and
  3. the ambitious push to end ENSIP proliferation for metadata conventions

NMS governs data shapes via schemas, but an implicit limitation follows: it can only govern key shapes whose patterns were anticipated in advance, whereas flat parameterized keys govern by convention and can accommodate novel key shapes that emerge outside any schema’s regex.

A separation of concerns is the natural resolution: NMS handles the identity and metadata layer — typed keys, self-describing schemas, what the agent is, who controls it.

Any new metadata convention focused on routing and discovery (open keyspace, flat records, how you reach it across protocols) warrants its own ENSIP.

ENSIP-26 was recently updated, and the information here is now out of date. For the latest overview of ENSIP-26, please see this post: https://discuss.ens.domains/t/ensip-26-ens-native-ai-identity/21968

Anyone is able to define their own schemas and attach their own descriptors for what is deemed valid in that node.

Not sure I understand this comment. Having a class and schema effectively future-proofs against unknown future variants.

Consider: aliens have arrived and now want to be represented on ENS. They have varying defining features and attributes. No new ENSIP is needed, as class and schema would be sufficient to describe whatever is required.

“Anyone can define their own schema” means each publisher encodes their own valid protocols with their own semantics. For a routing convention expected to work consistently across the ENS ecosystem, that’s fragmentation.

This works well for metadata conventions — typed fields, organizational roles, identity attributes. The self-describing runtime advantage you describe is real for that layer.

But for protocol conventions like routing and discovery, the NMS spec itself draws a boundary.

From the Parameterized Key Names section:

“Defining which values are allowed to be passed inside of the brackets when setting and retrieving records is up to schema publishers and is outside the scope of this ENSIP.” (Node Classification and Metadata)

A schema description is discoverable, not enforceable. For agent-endpoint[<protocol>], patternProperties validates the key shape — but which protocols are valid and how clients must behave remain outside scope.
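A quick sketch of that boundary, using an illustrative regex mirroring what a hypothetical patternProperties entry would express:

```python
import re

# The key shape agent-endpoint[<protocol>] can be checked with a regex,
# mirroring what a JSON Schema patternProperties entry would express.
# Which protocol names are actually valid, and how clients must behave,
# is a policy decision the schema cannot make.
KEY_SHAPE = re.compile(r"^agent-endpoint\[[a-z0-9-]+\]$")

print(bool(KEY_SHAPE.match("agent-endpoint[a2a]")))  # -> True
print(bool(KEY_SHAPE.match("agent-endpoint")))       # -> False
# Shape-valid but semantically unknown protocol: the regex cannot reject it.
print(bool(KEY_SHAPE.match("agent-endpoint[carrier-pigeon]")))  # -> True
```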

The same limitation surfaces in the AIP-as-schema:

"agent-manifest": {
  "type": "string",
  "description": "...Signature MUST be verifiable against agent-controller via EIP-712."
}

The MUST lives in a description string. JSON Schema validates that agent-manifest is a string — it cannot enforce the verification.

Normative meaning is back in prose, not in the structure.

An ENSIP resolves this by making the convention normative once, for everyone — binding clients to behavior that a schema description can only suggest.

This example holds for identity and attributes — class and schema define what they are. But if alien protocols require clients to behave in specific ways, “schema would be sufficient” means every client author reads the description field and guesses.

A separation of concerns seems the best fit to address ENSIP proliferation without sacrificing standardization of protocol conventions.

NMS handles qualities, attributes — what a node is.

ENSIPs govern the behavioral contracts — what clients must do.

Having a schema gives a pretty strong indication on how the attributes should be processed. It’s the opposite of a guess.


It’s concretely more enforceable. A client can pluck the JSON schema, feed it into something like zod, and actually validate the data.

JSON Schemas additionally have features like pattern, enum, format, dependentRequired to further indicate how the node should be shaped and validated without having to guess how it should be interpreted. Meaning they can evolve and converge over time. (With no ENSIPs)
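As a hand-rolled illustration of two of those keywords on a hypothetical schema fragment (a real client would delegate this to an actual JSON Schema validator):

```python
# Hypothetical schema fragment; field names are illustrative only.
schema = {
    "properties": {
        "agent-endpoint": {"type": "string", "format": "uri"},
        "protocol": {"enum": ["a2a", "rest", "mcp"]},
    },
    # dependentRequired: if "agent-endpoint" is set, "protocol" must be too.
    "dependentRequired": {"agent-endpoint": ["protocol"]},
}

def check_enum_and_dependents(record: dict, schema: dict) -> list:
    """Hand-rolled check of 'enum' and 'dependentRequired' for illustration."""
    errors = []
    for key, spec in schema["properties"].items():
        if key in record and "enum" in spec and record[key] not in spec["enum"]:
            errors.append(f"{key}: {record[key]!r} not in {spec['enum']}")
    for trigger, deps in schema.get("dependentRequired", {}).items():
        if trigger in record:
            errors.extend(f"missing {d} (required with {trigger})"
                          for d in deps if d not in record)
    return errors

print(check_enum_and_dependents({"agent-endpoint": "https://x.example"}, schema))
# -> ['missing protocol (required with agent-endpoint)']
```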


At best, adding arbitrary keys omits the classification step that most unstructured-data protocols settled long ago. In my view, whether we need a classification layer before resolving attributes is not an open design question, and omitting that layer goes against conventional wisdom.

Consider the following:

JSON-LD requires @type before properties are interpretable. The Streaming JSON-LD spec reinforces our method, it mandates that @context and @type MUST be processed before all other entries in a node, because they change the meaning of everything that follows. Properties like name, url, address are meaningless until you know whether the node is a Person, Organization, or Event. Google’s structured data documentation makes this explicit.

RDF requires rdf:type (class membership) before predicates resolve. The RDF Primer states it directly: “RDF Schema uses the notion of class to specify categories that can be used to classify resources. The relation between an instance and its class is stated through the type property.”

Any plain key-name is an attribute on a node that has no classification step. A client encountering this key has to pattern-match against key names to infer it’s looking at an agent.

class is the classification step for ENS. schema gives you the structure.

Schemas are explicit about structure and silent on client behavior.

My point with separation of concerns is that there needs to be a baseline on distinguishing between behavioral contracts, in this case, ENSIPs, and how they are represented and validated at the data layer through schemas.

In other words: Schemas describe the shape of data and ENSIPs prescribe how clients must behave.

We need more than data validation — we need security guarantees. Without prescriptive cryptographic verification (i.e. ENSIP-25), a zod-valid agent record can still be a spoofed manifest.

These are sufficient for classification and attribution, but not enforceability. Type is not a verification protocol. Take the following real-world example: W3C Verifiable Credentials.

VCs use JSON-LD’s @type and still required a separate normative specification for the behavioral layer (credential issuance, verification, revocation).

Indicating how attributes should be processed is orthogonal to whether or not they can be enforced. NMS works well for classification and structural validation — what a node is and what shape its data takes — but ENSIPs are still necessary to create a baseline for how clients must behave.