As we near the final stages of this year’s Service Provider Program (SPP) process, I wanted to resurface and build on a conversation that began during the program’s renewal late last year.
At the time, there was discussion about whether service providers should be allowed to submit two separate budget proposals—one for basic scope, one for extended scope. As @nick.eth wrote in response to that suggestion:
In light of today’s All Hands Delegate Call, which focused heavily on the SPP vote, this exact concern resurfaced. Several delegates expressed confusion—particularly around how to handle two budget proposals per provider.
This reinforces Nick’s earlier point: asking delegates to evaluate both basic and extended scope versions introduces cognitive and coordination overhead, weakening clarity and confidence in the process.
Instead, I believe we can address the underlying concern—how to express preferences across different scopes of work—through a combination of two ideas:
- Encouraging each service provider to define a Minimum Viable Budget (MVB) within a single proposal, along with the option to outline what additional scope could be delivered with increased funding.
- Using a modular, two-step voting model that separates the selection of teams from the allocation of funding.
This isn’t a call for immediate change, but rather a forward-looking suggestion—something we might reflect on, iterate together, and consider for the next SPP cycle (2026).
This model is designed to make the funding process more legible, modular, and community-driven, while still respecting the realities of delegate voting power.
Step 1: Select Teams via Snapshot + Copeland Method
We begin with a Snapshot vote using the Copeland method, where delegates compare proposals head-to-head. This helps surface the most broadly supported teams.
To keep things focused, we introduce a hard cutoff: only the top 10 teams move on to the next phase. This ensures that the final funding round isn’t overly diluted and reflects the strongest community-aligned proposals.
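To make the selection step concrete, here is a minimal sketch of Copeland scoring with a top-N cutoff. The ballot format (a voting-power weight plus an ordered list of preferred teams, with unranked teams treated as tied at the bottom) is an assumption for illustration, not Snapshot's actual data model:

```python
from itertools import combinations

def copeland_top_n(ballots, candidates, n=10):
    """Rank candidates by Copeland score (pairwise wins minus losses)
    and keep the top n.

    ballots: list of (weight, ranking) tuples, where ranking is a list
             of candidate names, most-preferred first. Unranked
             candidates are treated as tied below all ranked ones.
    """
    score = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_support = b_support = 0.0
        for weight, ranking in ballots:
            pos = {c: i for i, c in enumerate(ranking)}
            ra = pos.get(a, len(ranking))
            rb = pos.get(b, len(ranking))
            if ra < rb:
                a_support += weight
            elif rb < ra:
                b_support += weight
        # Head-to-head result: winner gains a point, loser drops one
        if a_support > b_support:
            score[a] += 1
            score[b] -= 1
        elif b_support > a_support:
            score[b] += 1
            score[a] -= 1
    ranked = sorted(candidates, key=lambda c: score[c], reverse=True)
    return ranked[:n]
```

Note that ties in Copeland scores near the cutoff would need an explicit tie-breaking rule (total pairwise support, for instance), which the sketch above leaves out.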
Step 2: Allocate Funding via Ranked-Choice + QF Adjustment
In the second vote, delegates rank the finalist teams. These rankings are converted into Borda-style scores, where each rank is assigned a point value—for example: 1st = 5 points, 2nd = 4 points, 3rd = 3 points, etc.
Each team’s points are then multiplied by the voting power of the delegate who submitted the ballot.
We then apply a Quadratic Funding-style adjustment: the square root of each team’s total weighted score is taken to reward broad support rather than concentrated backing. These adjusted scores are normalized and used to allocate the $4.5M proportionally.
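The allocation math above can be sketched in a few lines. The point scale here (with k finalists, 1st place earns k points, 2nd earns k-1, and so on) is one plausible reading of the Borda-style scoring described; the exact scale, and whether unranked teams earn zero, would need to be pinned down in the actual mechanism:

```python
import math

def allocate_funding(ballots, finalists, pool=4_500_000):
    """Allocate a fixed pool across finalist teams.

    ballots: list of (voting_power, ranking) tuples; ranking is a list
             of finalist names, most-preferred first.
    """
    k = len(finalists)
    # Borda-style points weighted by each delegate's voting power
    weighted = {t: 0.0 for t in finalists}
    for power, ranking in ballots:
        for i, team in enumerate(ranking):
            weighted[team] += power * (k - i)
    # QF-style adjustment: square root dampens concentrated backing
    adjusted = {t: math.sqrt(s) for t, s in weighted.items()}
    # Normalize adjusted scores and split the pool proportionally
    total = sum(adjusted.values())
    return {t: pool * a / total for t, a in adjusted.items()}
```

One property worth noticing: because the square root is concave, a team backed heavily by one large delegate ends up with a smaller share than a team with the same weighted total spread across many delegates' rankings, which is exactly the "broad support over concentrated backing" effect described above.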
A note on budgets in this model:
Even though funding in this model is allocated through ranked-choice voting and Quadratic Funding-style weighting, budgets remain a critical part of the process. They serve several important purposes:
- A request, not a guarantee: Providers still need to communicate what they’re asking for. Their proposed budget sets expectations—but it’s ultimately up to the DAO to determine how much support that proposal receives relative to others.
- A signal of scope: Budgets anchor the proposal in concrete terms. They help delegates understand what will be delivered and how the provider thinks about cost and value.
- A flexible planning tool: By defining a Minimum Viable Budget (MVB) and outlining what additional scope could be unlocked with more funding, teams can help delegates make more informed decisions during ranking (e.g., "At $200k, we can deliver X. If we receive $500k, we can also deliver Y and Z.").
Why this approach is worth exploring:
- It separates the decision of who gets funded from how much they receive, giving clarity and purpose to each vote.
- It respects token-weighted governance, while softening its edges through mechanisms that reward distributed support.
- It removes the need to vote on two separate proposals per provider, while still allowing nuanced preferences across funding levels to emerge through ranked voting.
- It sets up a framework that’s scalable and repeatable for future rounds, allowing us to evolve the program alongside the growing maturity of the DAO.
This proposal isn’t meant to disrupt the current process—we’re in the final innings, and much good work has already been done. But I hope it serves as a useful prompt for future design conversations.
Feedback, critique, and remixing welcome.