A Dynamic Task Reward System for Workgroups

Workgroup challenges

In observing and participating in workgroups, I’ve noticed the following emerging challenges:

  • A difficulty for stewards to gauge the value of contributions: a task-based model incentivizes completing low-effort tasks, while a time-based model incentivizes slow work.

  • To maximize the value ENS receives when it pays for work, so that ENS doesn’t overpay for simple work done inefficiently or needlessly burn through treasury funds.

  • To incentivize active participation and on-time completion of tasks, so that workgroups are a viable alternative to hiring outside contractors.

  • To make compensation predictable for users working in workgroups, avoiding confusion and conflict.

A dynamic reward system for tasks

One solution is to set an initial reward for each task that increases hourly or daily until someone accepts it. This offers the following benefits:

  • Incentivizes active participation: users compete for tasks by actively visiting the task platform until the price of a task is in line with their expected compensation.

  • Maximizes the value ENS receives: users compete to accept tasks at the lowest price they’re willing to complete them for, which helps ensure that ENS neither under- nor overpays.
    Users A and B might be equally competent, but B might be willing to complete the task for less and would pick it up before A.

  • Naturally scales with task complexity: when fewer users are willing or able to complete a task, its reward keeps increasing until it attracts someone who is.

  • Lets users agree beforehand what their time is worth, which avoids time-consuming “I did x, I should’ve earned more than those who did y” conflicts.

  • Speeds up work by limiting the number of tasks a user can claim at a time to n.

  • Removes the burden on stewards of subjectively gauging the value of work that may fall outside their expertise, without knowing how much time or skill completing it required.
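The escalating reward at the heart of this proposal can be sketched in a few lines. This is a rough illustration only; the base reward, hourly increment, and cap are made-up parameters, not values the post proposes:

```python
from datetime import datetime, timedelta

def current_reward(posted_at: datetime, now: datetime,
                   base: float = 50.0, hourly_increase: float = 5.0,
                   cap: float = 500.0) -> float:
    """Reward starts at `base` and grows by `hourly_increase` for each
    full hour the task sits unclaimed, up to `cap`."""
    hours_open = max(0, int((now - posted_at).total_seconds() // 3600))
    return min(base + hourly_increase * hours_open, cap)

posted = datetime(2022, 3, 1, 9, 0)
# Three hours later, the reward has risen from 50 to 65.
print(current_reward(posted, posted + timedelta(hours=3)))  # 65.0
```

The cap here is an added safeguard (not from the post) against a reward growing without bound if nobody is able to complete the task.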

Potential problems that could arise from this system

Problem: Users accept tasks at a low price, but do unacceptable work.
Solution: After a task is completed, stewards approve the quality of the work through a vote to accept it, reject it, or send it back with suggested edits. Having tasks rejected or repeatedly sent back for edits would count against a user, disincentivizing this behaviour.

Problem: Users form a price cartel and mutually agree not to take on tasks under a certain price (this seems unlikely to occur).
Solution: Add more users to the workgroup. The more users that participate, the harder it is to sustain a price cartel.


While there are advanced platforms that could probably be leveraged for this, such as Amazon Mechanical Turk, I think the best option for ENS would be a simple dashboard that links to Clarity, since Clarity doesn’t seem to support any sort of pricing system.

The purpose of this post is to kickstart discussion of these ideas, including whether and how they should be implemented, and to serve as a springboard for further ideas.


This is really great thinking, but how about giving the reward only when a user actually completes the task? Essentially a bounty program: the first to submit the completed, satisfactory task gets the bounty.

That’s a good idea, unless it would be too stressful? I was playing around with some additional ideas that didn’t make it into my post of ways to incentivize speed gradually, like a declining reward after the due date. I might post those ideas as comments tomorrow :slight_smile:
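For what it’s worth, the declining-reward idea mentioned here could look something like this rough sketch; the decline rate and floor are made-up parameters, not anything proposed in the thread:

```python
def reward_after_due(reward_at_due: float, hours_late: float,
                     hourly_decline: float = 2.0,
                     floor: float = 0.0) -> float:
    """After the due date, the reward declines linearly per hour late,
    never dropping below `floor`."""
    return max(reward_at_due - hourly_decline * max(hours_late, 0.0), floor)

# Ten hours past due, a 100-unit reward has declined to 80.
print(reward_after_due(100.0, 10))  # 80.0
```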

There are some drawbacks, I imagine: potentially wasted effort if multiple people or teams work on the same task, only to lose the race to the bounty. That’s the biggest reservation I can think of. Edit: yeah, please post those. The more ideas the better.

I like the idea, but I worry this turns into an eternal case of running a freelancer-type system, where the overhead of managing the contributors can exceed the benefit of the work they provide.


The system I think is best assumes that competent people will be competent in the tasks they are doing. There is always a window of risk where someone who is getting paid to do something can walk away, and the money for that period is lost, but a system of assumed competence creates an environment of respect and consideration that sets the tone for any organization. Systems that assume people will always do the least amount of work, at the lowest quality, for the most money tend to promote that type of culture, in my opinion.

I actually think this will reduce overhead. The work in my workgroup has completely stalled: I got through 80% of the tasks in ~6 hours, and the entire project could’ve been completed in 2 days by a small team. Yet we’re now in month 2, and the focus is on more calls and more managing, all of which is registered as time spent working.

While no work is being done.

With the establishment of the subgroup and the designation of a Lead Contributor, this should resolve itself. Contributors will have clear direction and a pathway to getting the content into change requests and merged into the GitBook.


This is Learn Docs v1, and we already had a clear direction. The tasks have been finished for a month with nothing preventing work, and this isn’t the first excuse for work not being done.

What you’re writing sounds nice, but the last thing that’s needed is more bureaucracy when the excessive focus on bureaucracy is already the issue. What’s needed is a focus on work: incentives to do work, and disincentives against pretending to work.

Otherwise we’re fostering a system of bureaucracy that already feels very Kafkaesque to those working within it.

PS: I want to be clear that I’m not personally blaming you for this, or any of the stewards. I chalk this up to being an emerging problem in the DAO that just needs to be nipped in the bud.


Don’t want to completely derail the conversation, but is there room for other types of contributions in this model? For example, not everyone is a “doer.” Making a really self-aware comment here. I’m a thinker, with an initiative problem. I’d like to think that is still valuable.

Absolutely, I meant this to be a tool to use where applicable rather than a system forced upon everyone. I’m also hoping that users here on the forum will help improve it past what I could figure out on my own.

TL;DR. Unless a perfect solution arises, what inefficiencies in the contributor compensation model is the DAO willing to accept?

I’m not taking it personally; as a contributor to that project myself, it’s been an interesting journey… There are a few lessons to be learned. From my perspective, the stalled progress demonstrates that the direction wasn’t entirely clear. Moving forward, this does need to be corrected, and I agree with the emerging challenges mentioned in your original post.

  1. A difficulty for stewards to gauge the value of contributions
  2. To maximize the value ENS receives when it pays for work

A task-based model makes the most sense. The time-based model is very one-dimensional: one user may take 2 hours while another takes 30 minutes, yet the output is the same. It’s also easily gamed. We should compensate based on the task delivered, maximizing the value ENS gets for the work performed.

As you mentioned, this has pitfalls. The correct amount of compensation needs to be applied to disincentivize the favoring of low-effort tasks. If the DAO overpays for easy tasks and underpays for more complex tasks, your scenario will likely play out. Keep in mind that low-effort tasks may still be more competitive even with accurate pricing. More complex tasks, by nature, require additional skills and time possessed by fewer people.

If task-based compensation is preferred and value is subjective, how do we begin to derive the amount awarded without leaning on traditionally applied hourly rates?

Additionally, because value is subjective, any system is likely to contain inefficiencies. A bounty model attempts to address this by having the DAO declare the value at the outset, but even with bounties there are downsides…

Unless a perfect solution arises, what inefficiencies in the contributor compensation model is the DAO willing to accept? Are we okay with paying too much, mercenary contributors, more centralized tasking, etc.?

  1. To incentivize active participation and completion of tasks on time
  2. To make the compensation predictable to users

I feel that choosing the proper methods of compensation and rewards will sort these two challenges out. If we can make compensation predictable and ensure that contributors are providing value and are valued themselves, active participation will follow.

Ideally, involvement should be transparent and inviting to potential contributors.

This topic seems to be one of the cruxes of DAO governance. I’m not sure there is an obvious system to implement. I checked out Amazon Mechanical Turk; it seems like an interesting solution. I’m not sure how much more integrated with Clarity we will get. I know there has been discussion about not using it outside of the Learn Docs, and several other team management tools have been proposed.

That sounds very good, but it was and is already very clear: create the Learn Docs and get them onto GitBook. It’s not that complex.

This is a task-based system, but with a dynamic reward.

This question already demonstrates what I wrote about stewards having difficulty gauging the value of contributions under a static reward system, whether task- or time-based. This is now several days after your last attempt, where you also found it difficult to do so.

I fail to see how again restating the problem as a question, without providing a solution, helps. My original post offers a solution to it.

That’s why I made this post. You’re again restating an already mentioned problem without a solution.

That’s what my post proposes to do. Again, this restates an already mentioned problem without offering a solution.

With the current number of participants, a dynamically increasing reward system attached to tasks could likely be implemented and maintained in Google Docs, as it supports time arithmetic.
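As a rough sketch of the per-row time arithmetic a spreadsheet would perform, here is the same computation in Python; the `rewards_now` helper, task names, and rates are all made up for illustration:

```python
from datetime import datetime

def rewards_now(tasks, now):
    """Compute each task's current reward from its row, the way a
    spreadsheet column of time arithmetic would."""
    out = {}
    for name, posted, base, per_hour in tasks:
        hours_open = (now - posted).total_seconds() / 3600
        out[name] = round(base + per_hour * hours_open, 2)
    return out

# Hypothetical rows: (task, posted at, starting reward, hourly increase).
tasks = [
    ("Write intro page", datetime(2022, 3, 1, 9, 0), 40.0, 4.0),
    ("Review glossary",  datetime(2022, 3, 2, 9, 0), 60.0, 6.0),
]

print(rewards_now(tasks, datetime(2022, 3, 2, 12, 0)))
# → {'Write intro page': 148.0, 'Review glossary': 78.0}
```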

EDIT: Updated for clarity after some confusion as to whether it was advocating a task or time-based system.