[EP 6.11] [Executable] Collective Working Group Funding Request (April 2025)

Status: Live Onchain
Author: 5pence.eth
Voting: Vote on Tally or Agora

Abstract

This proposal executes funding requests for the Meta-Governance and Public Goods Working Groups for the April 2025 funding window as passed in EP 6.6.1 and EP 6.6.2. This bundled executable proposal follows the requirements set out in Rule 10.1 of the Working Group Rules (EP 1.8).

Proposal Components


1) Meta-governance Funding Request [EP 6.6.1]

The Meta-Governance Working Group requests funding to fulfill anticipated budgetary needs through the next formal funding window in October 2025.

Destination                | USDC    | ETH | $ENS
ENS Meta-Gov Main Multisig | 589,000 | 0   | 100,000

This amount will cover all expected expenses outlined in the social proposal while leaving a prudent reserve to ensure continuity if future funding is delayed.


2) Public Goods Funding Request [EP 6.6.2]

The ENS Public Goods Working Group requests funding to support operations through the next formal funding window in October 2025.

Destination          | USDC    | ETH | $ENS
ENS PG Main Multisig | 356,000 | 0   | 0

This amount represents the adjusted funding request after discussions in the forum, reducing the original request from 521k to 356k by adjusting the Strategic Grants and discretionary funding categories.


Specification

The following transfers are to be made from the DAO treasury:

  1. Transfer 589,000 USDC to the Meta-governance safe:
    • Address: 0x91c32893216dE3eA0a55ABb9851f581d4503d39b
  2. Transfer 100,000 ENS to the Meta-governance safe:
    • Address: 0x91c32893216dE3eA0a55ABb9851f581d4503d39b
  3. Transfer 356,000 USDC to the Public Goods safe:
    • Address: 0xcD42b4c4D102cc22864e3A1341Bb0529c17fD87d

Total transfer amount: 945,000 USDC and 100,000 ENS
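
For reference, the exact payloads below can be re-derived independently. The following is a minimal Solidity sketch (not taken from the proposal repo) that rebuilds each ERC-20 transfer payload with abi.encodeWithSelector, assuming the standard 6 decimals for USDC and 18 for $ENS; the Ep611Payloads name and helper functions are illustrative only.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Minimal sketch (not from the proposal repo): rebuilds the three ERC-20
// `transfer` payloads from the Specification so they can be compared
// byte-for-byte against the calldata published below.
interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

library Ep611Payloads {
    // USDC uses 6 decimals and $ENS uses 18, so the whole-token figures
    // from the Specification are scaled to base units before encoding.
    function usdcTransfer(address safe, uint256 wholeUsdc) internal pure returns (bytes memory) {
        return abi.encodeWithSelector(IERC20.transfer.selector, safe, wholeUsdc * 1e6);
    }

    function ensTransfer(address safe, uint256 wholeEns) internal pure returns (bytes memory) {
        return abi.encodeWithSelector(IERC20.transfer.selector, safe, wholeEns * 1e18);
    }
}

For example, usdcTransfer(metagovSafe, 589_000) and ensTransfer(metagovSafe, 100_000) should reproduce the 6.6.1 payloads, and usdcTransfer(pgSafe, 356_000) the 6.6.2 payload.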


Calldata:

6.6.1 USDC Tx to Metagov

{
  "target": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
  "value": 0,
  "calldata": "0xa9059cbb00000000000000000000000091c32893216de3ea0a55abb9851f581d4503d39b0000000000000000000000000000000000000000000000000000000892322c200"
}

6.6.1 ENS Tx to Metagov

{
  "target": "0xC18360217D8F7Ab5e7c516566761Ea12Ce7F9D72",
  "value": 0,
  "calldata": "0xa9059cbb00000000000000000000000091c32893216de3ea0a55abb9851f581d4503d39b00000000000000000000000000000000000000000000152d02c7e14af6800000"
}

6.6.2 USDC Tx to Public Goods

{
  "target": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
  "value": 0,
  "calldata": "0xa9059cbb000000000000000000000000cd42b4c4d102cc22864e3a1341bb0529c17fd87d00000000000000000000000000000000000000000000000000000052e340e800"
}

The simulation and tests of EP 6.11 can be found here. The proposal was simulated by proposing, passing, and executing it, then asserting the state differences after the ENS and USDC transfer operations. An extra validation will be required once the proposal is onchain, so that the calldata can be verified against the description.

This can be checked by cloning the repo and running:
forge test --match-path src/ens/proposals/ep-6-11/* -vvvv
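
As an illustration of what such a state-diff assertion looks like, here is a much-simplified, hypothetical Foundry sketch. It is not the test from the linked repo: it impersonates the DAO timelock with a prank instead of running the full propose/vote/queue/execute flow, and it reads the token, timelock, and safe addresses from assumed environment variables.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Test.sol";

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
    function balanceOf(address account) external view returns (uint256);
}

// Hypothetical, simplified sketch: impersonates the DAO timelock on a
// mainnet fork and asserts the balance deltas of the two working-group
// safes. The real EP 6.11 tests run the full governance flow instead.
contract Ep611TransfersSketch is Test {
    function test_transferDeltas() public {
        vm.createSelectFork(vm.envString("MAINNET_RPC_URL")); // assumed env var
        IERC20 usdc = IERC20(vm.envAddress("USDC"));
        IERC20 ens = IERC20(vm.envAddress("ENS_TOKEN"));
        address timelock = vm.envAddress("ENS_TIMELOCK");
        address metagov = vm.envAddress("METAGOV_SAFE");
        address pg = vm.envAddress("PG_SAFE");

        uint256 metagovUsdcBefore = usdc.balanceOf(metagov);
        uint256 metagovEnsBefore = ens.balanceOf(metagov);
        uint256 pgUsdcBefore = usdc.balanceOf(pg);

        // Execute the three transfers from the Specification as the treasury.
        vm.startPrank(timelock);
        usdc.transfer(metagov, 589_000 * 1e6);
        ens.transfer(metagov, 100_000 * 1e18);
        usdc.transfer(pg, 356_000 * 1e6);
        vm.stopPrank();

        // Assert the state differences after the transfers.
        assertEq(usdc.balanceOf(metagov) - metagovUsdcBefore, 589_000 * 1e6);
        assertEq(ens.balanceOf(metagov) - metagovEnsBefore, 100_000 * 1e18);
        assertEq(usdc.balanceOf(pg) - pgUsdcBefore, 356_000 * 1e6);
    }
}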


The onchain proposal validation has been completed, and the proposal successfully passed verification against its description. The test file can be found here, and it can be run using the same command:

forge test --match-path src/ens/proposals/ep-6-11/* -vvvv
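
For context on what "verified against its description" means: in the standard OpenZeppelin Governor design that the ENS governor builds on, the proposal ID commits to the targets, values, calldatas, and the keccak256 hash of the description string, so any divergence between the posted calldata and the described actions yields a different onchain proposal ID. A minimal sketch of that derivation, assuming the standard scheme (the real hashProposal takes the description hash directly):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Sketch of the standard OpenZeppelin Governor proposal-ID derivation that
// the onchain validation leans on: the ID commits to targets, values,
// calldatas, and the hash of the description, so posted calldata can be
// checked against the described actions by recomputing this value.
library ProposalId {
    function compute(
        address[] memory targets,
        uint256[] memory values,
        bytes[] memory calldatas,
        string memory description
    ) internal pure returns (uint256) {
        return uint256(
            keccak256(abi.encode(targets, values, calldatas, keccak256(bytes(description))))
        );
    }
}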


These proposal validations are interesting. Could you clarify the intention behind them? I am concerned that much of the potential value is lost due to the lack of a consistent source of truth.

The updated simulation validates that the calldata posted on chain matches the calldata generated by the test script. This is interesting, but it assumes (incorrectly in my opinion) two things:

  1. The calldata that you’ve added to the script is the actual calldata that was posted on chain.
  2. That the calldata you are generating with abi.encodeWithSelector is what is intended.

For context, I looked at this because, in advance of submitting the proposal, @5pence.eth asked me to double-check his calldata. He had noted that his encoded numbers looked incorrect and didn't match the intention as described.

The original Blockful validation essentially demonstrates that if the calldata Blockful generated is submitted and executed, it will result in a particular chain state. But what if the calldata Blockful generates is not what actually gets posted onchain?

In my opinion, submission and validation should be a single flow: any submission interface should generate a single Tenderly trace of the potential execution. That trace is the validation.

  • When I tested with Agora, the flow was buggy. I've seen it working before, but when I tried it, it was erroring.

  • With Tally, independent Tenderly traces were generated for each transaction, rather than a single simulation that considers how all the elements of a proposal interplay and what their net effects are.

Ping @brennan_agora and @dennison


In the end the proposal was posted using Tally. Spence opted not to include the calldata in the description, noting that doing so adds no value and creates an opportunity for what is posted in the description to diverge from what is actually submitted onchain.

The reality is that an optimal proposal draft submission process should allow intuitive generation of calldata and create a Tenderly trace of its execution. The complementary forum post associated with a potential executable should link to the draft on the governance platform, so that once discussion has concluded and social consensus has been reached, Metagov can simply click 'Submit'.


I’m not sure why they do this, but if you look closely, the traces after the first have state overrides that mirror the changes made by previous transactions.


A mistake happened on our side. After discussing it during an internal all-hands with the whole team, here is what happened and what we'll improve:

What happened?

There was an internal misunderstanding of the process: specifically, what should be reviewed and how it should be communicated. The smart contract developer who performed the review read the proposal, created the calldata, and tested it. The problem is that this wasn't the calldata Spence had posted in the forum. We should have reviewed and validated the calldata that was posted, rather than generating our own.

To be clear, there are two validation points:

  • the forum validation (where we made the mistake)
  • the onchain validation when the proposal is submitted (which was done correctly)

The actual onchain submission was right and correctly verified, but our forum validation process missed its intended purpose and could have let an error through. In this case, the error would have meant asking for less capital than needed, and Spence would have needed to submit one more proposal.

Our next steps

We’re implementing immediate improvements to our internal process:

  • Improve our internal review process by ensuring detailed pair reviews and documenting the process to ensure consistency
  • Improve our documentation of each proposal and the verification performed on it

Here is our SLA for SPP2 as well. This process will be continually updated and made more automated. We'll keep the DAO updated on these improvements in our quarterly reports.

We appreciate the feedback and the opportunity to strengthen our contribution to ENS governance security.

Transparency is one of our core values: we will always hold ourselves accountable and clearly communicate mistakes. Security is crucial; it should be treated seriously, and in the open.


I appreciate the postmortem! I don’t think “looking more carefully” scales, though - I think we should be aiming for automated ways to make sure there’s a match between the data being tested and what’s being voted on.

We’ve long had issues with the manual workflow involved in putting a proposal through. My own opinion is that we should start to use Tally’s draft function (or any equivalent through another governance portal), and to only number EPs when they get posted onchain. By doing that, it’d be easy for Blockful and others to verify the payload of Tally drafts, and it’d also be easy to set up an automated process that detects new onchain and Snapshot votes and automatically creates PRs for them, while assigning them an EP number.

I think this is the crux of it.

It isn't particularly complex, technologically, to build tooling that generates calldata, simulates it, and posts it onchain as an executable. We shouldn't need to verify calldata, because the code that generates it should also submit it. There would be a single source of truth: the blockchain.

This post (SPP2 Stream Implementation - Preparing the Executable Proposal) from @5pence.eth links to a repo we were playing about with that achieves two of those three puzzle pieces in the context of that executable: it generates calldata and creates Tenderly simulations. It would not be much work to have it submit the executable as well, nor to add a user interface on top.
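
As a rough illustration of that missing third piece, a hypothetical Foundry script could hand the same generated arrays straight to the governor's propose function (the standard OpenZeppelin Governor signature), so the bytes that were simulated are exactly the bytes that get submitted. Addresses are read from assumed environment variables and the contract name is illustrative only.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Script.sol";

interface IGovernor {
    // Standard OpenZeppelin Governor signature.
    function propose(
        address[] memory targets,
        uint256[] memory values,
        bytes[] memory calldatas,
        string memory description
    ) external returns (uint256);
}

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

// Hypothetical sketch: the same code path that generates (and can simulate)
// the calldata also submits it, so the bytes voted on are exactly the bytes
// that were reviewed.
contract SubmitEp611Sketch is Script {
    function run() external {
        address metagov = vm.envAddress("METAGOV_SAFE");
        address pg = vm.envAddress("PG_SAFE");

        address[] memory targets = new address[](3);
        targets[0] = vm.envAddress("USDC");
        targets[1] = vm.envAddress("ENS_TOKEN");
        targets[2] = vm.envAddress("USDC");

        uint256[] memory values = new uint256[](3); // no ETH sent

        bytes[] memory calldatas = new bytes[](3);
        calldatas[0] = abi.encodeWithSelector(IERC20.transfer.selector, metagov, 589_000 * 1e6);
        calldatas[1] = abi.encodeWithSelector(IERC20.transfer.selector, metagov, 100_000 * 1e18);
        calldatas[2] = abi.encodeWithSelector(IERC20.transfer.selector, pg, 356_000 * 1e6);

        vm.startBroadcast();
        IGovernor(vm.envAddress("ENS_GOVERNOR")).propose(
            targets,
            values,
            calldatas,
            "[EP 6.11] [Executable] Collective Working Group Funding Request (April 2025)"
        );
        vm.stopBroadcast();
    }
}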

There are a number of posts about governance tooling and its funding. For example: Programmatic Tooling Rewards: A Proposal for Sustainable Governance Infrastructure

Similarly, there are numerous fantastic governance tooling providers, many of whom applied for the Service Provider Program. But… if they don't do what we actually need, we should just pay someone to do it properly. Just saying.

Given that @blockful got funding to do Governance stuff, they should probably do it…


One of the additions we will propose for the governor upgrade is draft functionality. That will give us zero margin for error, a single reference for review, and no lock-in to a specific platform.

We will be pushing for increased automation on several fronts and will post some ideas to start a discussion soon. Thanks for the feedback, @nick.eth and @clowes.eth!