This proposal executes funding requests for the Meta-Governance and Public Goods Working Groups for the April 2025 funding window as passed in EP 6.6.1 and EP 6.6.2. This bundled executable proposal follows the requirements set out in Rule 10.1 of the Working Group Rules (EP 1.8).
This amount will cover all expected expenses outlined in the social proposal while leaving a prudent reserve to ensure continuity if future funding is delayed.
This amount represents the adjusted funding request after discussions in the forum, reducing the original request from 521k to 356k by adjusting the Strategic Grants and discretionary funding categories.
Specification
The following transfers are to be made from the DAO treasury:
Transfer 589,000 USDC to the Meta-governance safe:
The simulation and tests of EP 6.11 can be found here. The proposal was simulated by proposing, passing, and executing it, then asserting the difference between chain states after the ENS and USDC transfer operations. An additional validation will be required once the proposal is onchain, so the calldata can be verified against the description.
This can be checked by cloning the repo and running: `forge test --match-path src/ens/proposals/ep-6-11/* -vvvv`
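For readers who want to see the shape of such a test, here is a minimal, self-contained sketch of the propose/execute/assert-state-diff pattern. It is not the actual EP 6.11 test: it uses a mock token and Foundry's `makeAddr` labels instead of the real USDC, timelock, and safe contracts, and it replays the transfer calldata directly rather than routing it through the governor.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Minimal ERC20 stand-in so the sketch runs without a mainnet fork.
contract MockUSDC {
    mapping(address => uint256) public balanceOf;

    function mint(address to, uint256 amount) external {
        balanceOf[to] += amount;
    }

    function transfer(address to, uint256 amount) external returns (bool) {
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        return true;
    }
}

contract FundingTransferSketch is Test {
    MockUSDC usdc;
    address timelock;    // stand-in for the DAO treasury/timelock
    address metagovSafe; // stand-in for the Meta-governance safe

    function setUp() public {
        usdc = new MockUSDC();
        timelock = makeAddr("timelock");
        metagovSafe = makeAddr("metagovSafe");
        usdc.mint(timelock, 1_000_000e6); // USDC uses 6 decimals
    }

    function test_transferCalldataMovesFunds() public {
        uint256 balanceBefore = usdc.balanceOf(metagovSafe);

        // The same calldata an executable proposal would carry.
        bytes memory data = abi.encodeWithSelector(
            MockUSDC.transfer.selector,
            metagovSafe,
            589_000e6
        );

        // The real test proposes, votes, queues, and executes through the
        // governor; this sketch replays the call from the timelock directly.
        vm.prank(timelock);
        (bool ok, ) = address(usdc).call(data);
        assertTrue(ok);

        // Assert the state difference the proposal is supposed to produce.
        assertEq(usdc.balanceOf(metagovSafe), balanceBefore + 589_000e6);
    }
}
```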
The onchain proposal validation has been completed and successfully passed verification against its description. The test file can be found here, and it can be run using the same command:
`forge test --match-path src/ens/proposals/ep-6-11/* -vvvv`
These proposal validations are interesting. I'm curious if you could clarify the intention behind them? I am concerned that a lot of the potential value is lost due to the lack of a consistent source of truth.
The updated simulation validates that the calldata posted onchain matches the calldata generated by the test script. This is interesting, but it assumes (incorrectly, in my opinion) two things:
1. The calldata that you've added to the script is the actual calldata that was posted onchain.
2. The calldata you are generating with `abi.encodeWithSelector` is what is intended.
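To illustrate the circularity (this is a hypothetical sketch, not blockful's test): if the "posted" bytes are hand-pasted into the repo, a comparison like the following only proves that the paste matches the script, not that either matches the chain or the written intent.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

contract CalldataMatchSketch is Test {
    function test_generatedMatchesPosted() public {
        address safe = makeAddr("metagovSafe"); // placeholder recipient

        // Assumption 2: this encoding is what the proposal intends.
        bytes memory generated = abi.encodeWithSelector(
            bytes4(keccak256("transfer(address,uint256)")),
            safe,
            589_000e6
        );

        // Assumption 1: these bytes really are what was posted onchain.
        // Here they are a stand-in; if they are hand-pasted rather than
        // fetched from the ProposalCreated event, the check proves nothing
        // about the chain.
        bytes memory posted = generated;

        assertEq(generated, posted);
    }
}
```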
The context for me looking at this was that, in advance of submitting the proposal, @5pence.eth asked me to double-check his calldata. He had noted that his encoded numbers looked to be incorrect and didn't match the intention as described.
The original blockful validation essentially demonstrates that if the calldata that blockful have generated is submitted and executed, then it will result in a particular chain state. But what if the calldata that blockful generates is not what actually gets posted onchain?
Submission and validation should, in my opinion, be a single flow: any submission interface should generate a single Tenderly trace of the potential execution. That trace is the validation.
When I tested with Agora, the flow was buggy. I've seen it working before, but when I tried it, it was erroring.
With Tally, independent Tenderly traces were generated for each transaction, rather than a single simulation that considers how all the elements of a proposal interplay and their net effects.
In the end the proposal was posted using Tally. Spence opted not to include the calldata in the description, noting that doing so adds no value and presents an opportunity for what is posted in the description to diverge from what is actually submitted onchain.
The reality is that an optimal proposal draft submission process should allow intuitive generation of calldata and create a Tenderly trace of the execution. The complementary forum post associated with a potential executable should link to the draft on the governance platform, so that once discussion has concluded and social consensus has been reached, Metagov can simply click "Submit".
I'm not sure why they do this, but if you look closely, the traces after the first have state overrides that mirror the changes made by previous transactions.
A mistake happened on our side. After discussing it during an internal all-hands with the whole team, here is what happened and what we'll improve:
What happened?
There was an internal misunderstanding of the process: specifically, what should be reviewed and how it should be communicated. The smart contract developer who performed the review read the proposal, created the calldata, and tested it. The problem is, this wasn't the calldata that Spence had posted in the forum. We should have reviewed and validated the calldata that was posted, not generated our own.
To be clear, there are two validation points:
1. The forum validation (where we made the mistake).
2. The onchain validation when the proposal is submitted (which was done correctly).
The actual onchain submission was correct and properly verified, but our forum validation missed its intended purpose and could have let an error through. In this case, the proposal would have asked for less capital than needed, and Spence would have had to submit one more proposal.
Our next steps
We're implementing immediate improvements to our internal process:
- Improve our internal review process by ensuring detailed pair reviews and documenting the process to ensure consistency
- Improve our documentation of each proposal and the verification performed
Here is our SLA for SPP2 as well. This process will be continually updated and made more automated. We'll keep the DAO updated on these improvements in our quarterly reports.
We appreciate the feedback and the opportunity to strengthen our contribution to ENS governance security.
Transparency is one of our core values: we will always hold ourselves accountable and clearly communicate mistakes. Security is crucial; it should be treated seriously, and in the open.
I appreciate the postmortem! I don't think "looking more carefully" scales, though. I think we should be aiming for automated ways to make sure there's a match between the data being tested and what's being voted on.
We've long had issues with the manual workflow involved in putting a proposal through. My own opinion is that we should start using Tally's draft function (or an equivalent on another governance portal), and only number EPs when they get posted onchain. That way, it'd be easy for Blockful and others to verify the payload of Tally drafts, and it'd also be easy to set up an automated process that detects new onchain and Snapshot votes and automatically creates PRs for them, assigning each an EP number.
It isn't particularly technologically complex to build tooling that generates an executable, simulates it, and posts it onchain. We shouldn't need to verify calldata, because the code that generates it should also submit it. There would be a single source of truth: the blockchain.
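As a rough illustration of that single flow (a sketch with placeholder addresses, not a reference to any existing repo), a Foundry script can build the calldata and broadcast the `propose` call in one step, so the bytes that were simulated are the bytes that land onchain:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Script} from "forge-std/Script.sol";

// OpenZeppelin-style governor entry point.
interface IGovernor {
    function propose(
        address[] calldata targets,
        uint256[] calldata values,
        bytes[] calldata calldatas,
        string calldata description
    ) external returns (uint256);
}

contract SubmitProposalSketch is Script {
    // Placeholders -- substitute the live governor, token, and safe.
    IGovernor constant GOVERNOR = IGovernor(address(0x1111));
    address constant USDC = address(0x2222);
    address constant METAGOV_SAFE = address(0x3333);

    function run() external {
        address[] memory targets = new address[](1);
        uint256[] memory values = new uint256[](1);
        bytes[] memory calldatas = new bytes[](1);

        targets[0] = USDC;
        values[0] = 0;
        // The calldata is generated here and nowhere else.
        calldatas[0] = abi.encodeWithSelector(
            bytes4(keccak256("transfer(address,uint256)")),
            METAGOV_SAFE,
            589_000e6
        );

        // The same run that generated the bytes submits them, leaving no
        // copy-paste step in between.
        vm.startBroadcast();
        GOVERNOR.propose(targets, values, calldatas, "EP x.y: funding transfer");
        vm.stopBroadcast();
    }
}
```

Running such a script with `forge script` against a fork gives the simulation; the same invocation with `--broadcast` against mainnet gives the submission.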
This post (SPP2 Stream Implementation - Preparing the Executable Proposal) from @5pence.eth links to a repo we were playing about with that achieves two of those three puzzle pieces in the context of that executable - it generates calldata and creates Tenderly simulations. It would not be that much work to have it submit the executable. It would not be that much work to add a user interface to this.
Similarly, there are numerous fantastic governance tooling providers, many of whom applied for the Service Provider Program. But… if they don't do what we actually need, we should just pay someone to do it properly. Just saying.
Given that @blockful got funding to do governance stuff, they should probably do it…
One of the additions we will propose for the governor upgrade is draft functionality. Then we'll have zero margin for error, one unique reference for reviewing, and no lock-in to a specific platform.
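On the "unique reference" point: assuming the governor keeps the OpenZeppelin `hashProposal` scheme, the proposal id is already a pure function of the proposal contents, so any reviewer can recompute it offchain and compare it against the id emitted onchain. A minimal sketch of that recomputation:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Recomputing an OpenZeppelin-style proposal id. Because the id is derived
// purely from the proposal contents, it can serve as the single reference
// for review, independent of any governance platform.
library ProposalId {
    function hashProposal(
        address[] memory targets,
        uint256[] memory values,
        bytes[] memory calldatas,
        bytes32 descriptionHash // keccak256(bytes(description))
    ) internal pure returns (uint256) {
        return uint256(
            keccak256(abi.encode(targets, values, calldatas, descriptionHash))
        );
    }
}
```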
We will be pushing for increased automation on several fronts and will post some ideas to start a discussion soon. Thanks for the feedback, @nick.eth and @clowes.eth!