A Dynamic Task Reward System for Workgroups

Workgroup challenges

In observing and participating in workgroups I’ve noticed the following emerging challenges:

  • A difficulty for stewards to gauge the value of contributions, whether with a task-based model (which would incentivize completing low-effort tasks) or a time-based model (which would incentivize slow work).

  • To maximize the value ENS receives when it pays for work, so that ENS doesn’t overpay for simple work done inefficiently and burn through treasury funds needlessly.

  • To incentivize active participation and completion of tasks on time so that the workgroups are a viable alternative to hiring outside contractors.

  • To make the compensation predictable to users working in workgroups to avoid confusion and conflict.

A dynamic reward system for tasks

A solution to this is to set an initial reward value for each task that increases hourly or daily until someone accepts it. This offers the following benefits, in that it:

  • Incentivizes active participation by having users compete for tasks, actively visiting the task platform until the price of a task is in line with their expectation of compensation.

  • Maximizes the value received by ENS, since users compete to accept tasks at the lowest price they’re willing to complete them for, which ensures that ENS neither under- nor overpays for tasks.
    Users A and B might be equally competent, but B might be willing to complete the task for less and would pick it up before A.

  • Naturally scales with the complexity of the task: when fewer users are willing or able to complete a task, its reward keeps increasing until it attracts someone who is.

  • Has users agree beforehand on what their time is worth, which avoids time-consuming “I did x, I should’ve earned more than those who did y” conflicts.

  • Speeds up work by limiting the number of tasks a user can claim at a time to n.

  • Removes the burden on stewards of subjectively gauging the value of work that may be outside their area of expertise, often without knowing how much time was spent or expertise was required to complete it.
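As a minimal sketch of the mechanism above (the linear schedule, the cap, and all the numbers are hypothetical choices, not a spec):

```python
from datetime import datetime, timedelta, timezone

def current_reward(base: float, posted_at: datetime,
                   increment_per_hour: float, cap: float,
                   now: datetime) -> float:
    """Reward starts at `base` and grows linearly for every hour the task
    sits unclaimed; accepting the task locks in the value at that moment."""
    hours_open = max(0.0, (now - posted_at).total_seconds() / 3600)
    return min(base + increment_per_hour * hours_open, cap)

# A task posted 48 hours ago, starting at 50 USDC and rising 2 USDC/hour:
posted = datetime(2022, 1, 1, tzinfo=timezone.utc)
print(current_reward(50, posted, 2, 500, now=posted + timedelta(hours=48)))  # 146.0
```

An hourly job (or a spreadsheet that recomputes on open) could republish the current value so contributors always see the live price of each open task.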

Potential problems that could arise from this system

Problem: Users accept tasks at a low price, but do unacceptable work.
Solution: Stewards approve the quality of work after a task is completed through a vote to accept it, reject it, or send it back with suggested edits. Having tasks rejected or repeatedly sent back for edits would disincentivize this behaviour.

Problem: Users form a price cartel and mutually agree not to take on tasks under a certain price (this seems unlikely to occur).
Solution: Add more users to the workgroup. The more users that participate, the harder it is to sustain a price cartel.


While there are advanced platforms that could probably be leveraged for this, such as Amazon Mechanical Turk, I think the best option for ENS would be a simple dashboard that links to Clarity, since Clarity doesn’t seem to support any sort of pricing system.

The purpose of this post is to kickstart discussion around these ideas and whether and how they should be implemented, as well as to serve as a springboard for further ideas.


This is really great thinking, but how about the reward being given when a user actually completes the task? Essentially a bounty program. First to submit the completed, satisfactory task gets the bounty.

That’s a good idea, unless it would be too stressful? I was playing around with some additional ideas that didn’t make it into my post of ways to incentivize speed gradually, like a declining reward after the due date. I might post those ideas as comments tomorrow :slight_smile:
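For illustration only, a declining post-deadline reward (the decay rate, the floor, and the function name are all hypothetical) might look like:

```python
def reward_after_deadline(reward: float, hours_late: float,
                          decay_per_hour: float = 0.01,
                          floor: float = 0.0) -> float:
    """Reward shrinks by a fixed percentage for every hour past the due
    date, never dropping below a floor."""
    if hours_late <= 0:
        return reward  # on time: full reward
    return max(reward * (1 - decay_per_hour) ** hours_late, floor)

print(reward_after_deadline(100, 0))   # 100 (delivered on time)
print(reward_after_deadline(100, 24))  # ~78.6 (one day late at 1%/hour)
```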

There are some drawbacks I imagine. Potentially wasted effort if multiple people or teams are working on the same tasks, only to lose the race to the bounty. That’s the biggest reservation I can think of. Edit - yeah, please post those. The more ideas the better.

I like the idea, but I worry this turns into an eternal case of running a freelancer-type system, where the overhead of managing the contributors can exceed the benefit of the work they provide.


The system I think is best is to assume that competent people will be competent in the tasks they are doing. There is always a window of risk where someone who is getting paid to do something can walk away and the money for that period is lost, but using a system of assumed competence creates an environment of respect and consideration that sets the tone for any organization. Systems that assume people will always do the least amount of work, at the lowest quality, for the most money tend to promote that type of culture, in my opinion.


I actually think that this will reduce overhead. The work in my workgroup has completely stalled: I got through 80% of the tasks in ~6 hours, and the entire project could’ve been completed in 2 days by a small team. Yet we’re now on month 2, and the focus is on more calls and more managing, all of which is registered as time spent working.

While no work is being done.

With the establishment of the subgroup and the designation of a Lead Contributor, this should resolve itself. Contributors will have clear direction and a pathway to getting the content into change requests and merged into the GitBook.


This is Learn Docs V1, and we already had a clear direction. The tasks have been finished for a month with nothing preventing work, and this isn’t the first excuse for work not being done.

What you’re writing sounds nice, but the last thing that’s needed is more bureaucracy when the excessive focus on bureaucracy is already the issue. What’s needed is a focus on work: incentives to do work, and disincentives against pretending to work.

Otherwise we’re fostering a system of bureaucracy that already feels very Kafkaesque to those working within it.

PS: I want to be clear that I’m not personally blaming you for this, or any of the stewards. I chalk this up to being an emerging problem in the DAO that just needs to be nipped in the bud.


Don’t want to completely derail the conversation, but is there room for other types of contributions in this model? For example, not everyone is a “doer.” Making a really self-aware comment here. I’m a thinker, with an initiative problem. I’d like to think that is still valuable.


Absolutely, I meant this to be a tool to use where applicable rather than a system forced upon everyone. I’m also hoping that users here on the forum will help improve it past what I could figure out on my own.

TL;DR. Unless a perfect solution arises, what inefficiencies in the contributor compensation model is the DAO willing to accept?

I’m not taking it personally; as a contributor to that project myself, it’s been an interesting journey… There are a few lessons to be learned. From my perspective, the stalled progress demonstrates that the direction wasn’t entirely clear. Moving forward, this does need to be corrected, and I agree with the emerging challenges mentioned in your original post.

  1. A difficulty for stewards to gauge the value of contributions
  2. To maximize the value ENS receives when it pays for work

A task-based model makes the most sense. The time-based model is very one-dimensional. One user may take 2hrs while another takes 30min; the output is the same. It’s also easily gamed. We should compensate based on the task delivered, maximizing the value ENS gets for the work performed.

As you mentioned, this has pitfalls. The correct amount of compensation needs to be applied to disincentivize the favoring of low-effort tasks. If the DAO overpays for easy tasks and underpays for more complex tasks, your scenario will likely play out. Keep in mind that low-effort tasks may still be more competitive even with accurate pricing. More complex tasks, by nature, require additional skills and time possessed by fewer people.

If task-based compensation is preferred and value is subjective, how do we begin to derive the amount awarded without leaning on traditionally applied hourly rates?

Additionally, because value is subjective, any system is likely to contain inefficiencies. A bounty model attempts to address this by having the DAO declare the value at the outset, but even with bounties there are downsides…

Unless a perfect solution arises, what inefficiencies in the contributor compensation model is the DAO willing to accept? Are we okay with paying too much, mercenary contributors, more centralized tasking, etc.?

  1. To incentivize active participation and completion of tasks on time
  2. To make the compensation predictable to users

I feel that choosing the proper methods of compensation and rewards will sort these two challenges out. If we can make compensation predictable and ensure that contributors are providing value and are valued themselves, active participation will follow.

Ideally, involvement should be transparent and inviting to potential contributors.

This topic seems to be one of the cruxes of DAO governance. I’m not sure there is an obvious system to implement. I checked out Amazon Mechanical Turk, it seems like an interesting solution. I’m not sure how much further integrated with Clarity we will get. I know there has been discussion about not using it outside of the Learn Docs, and several other team management tools have been proposed.

That sounds very good, but it was and is already very clear: create the Learn Docs, get them onto GitBook. It’s not that complex.

This is a task-based system, but with a dynamic reward.

This question already demonstrates what I wrote about stewards having difficulty gauging the value of contributions with a static reward system, whether task- or time-based. This is now several days after your last attempt, where you also found it difficult to do so.

I fail to see how restating the problem as a question without providing a solution again helps. A solution to it is offered in my original post.

That’s why I made this post. You’re again restating an already mentioned problem without a solution.

That’s what my post proposes to do. Again, this restates an already-mentioned problem without a solution.

With the current number of participants, a dynamically increasing reward system attached to tasks could likely be implemented and maintained in Google Sheets, as it supports time arithmetic.

EDIT: Updated for clarity after some confusion as to whether it was advocating a task or time-based system.

@Cthulu, it’s been four months since you listed the emerging challenges in the original post.

If you had to make a list today, would the same challenges still be there?

Yes; if anything, I think these problems are starting to show more clearly, as other users in the DAO have independently come to the same conclusions and come up with their own solutions to them:

For example, @nick.eth recently identified that most of the work is done by a very small number of active workers in this thread: Transitioning the DAO to an RFP model

You recently called for a bulletin board in this thread: Community Contribution scheme - Bulletin board for outstanding tasks - #5 by vegayp

Where it was also made known that @vegayp was working on a bulletin board solution for his WG.

These are all solutions that attempt to solve parts of the same underlying issues that made me write this. I’m sure there are more than I’ve listed as well.

I fully expect that most if not all of what I proposed here will be implemented gradually over a very long period of time as these problems are re-identified and consensus starts building around individual points.


Glad we are revisiting all of this. It’s needed now more than ever, I believe. I think we should come together for a real-time meeting on this; not sure where, maybe in Discord or Google Meet. Hashing this out with real-time dialogue, versus these threads, might accelerate a solution.

Maybe it could be part of an Ecosystem WG meeting? @slobo.eth what do you think?

I don’t think I ever went back and commented here, but last year I brought up looking at tooling to track contributions. Linking it here for reference: Proposal for implementing an open, transparent Contribution Graph reward mechanism for DAO + Contributors

I’m not biased toward whatever tool is used. I can foresee more than one tool being used for this, but I’m only familiar with Sourcecred’s. Ideas on others would be great.

An example scenario of how this could work:

1. Add a “praise” channel in Discord. It could be gated like the WG channels.
2. I notice that @estmcmxci has spearheaded onboarding translators. I think that’s great, so I tag his Discord name in the praise channel and say something like:

Praise to @estmcmxci for doing a great job bringing together translators. 

3. A bot monitors this channel for “Praise to”, tracking contributions by which user is tagged in a praise “shout out”.
4. If others notice the praise and react with an emoji, the bot adds a little more weight to the praise, equaling a little more reward. Using the ENS coin emoji could carry a certain weight, while a plain :+1: could carry less. If a core team member uses the ENS fairy emoji, this could have extra weight (the bot knows them by role), and a steward adding an emoji to the praise could likewise have extra weight. The weighting template is customizable.

Well how do you keep users from just continually abusing it?

Assume a set budget of, let’s say, 100 ENS per month for the praise channel, and that 10 praises were posted for 10 different users (you can’t give yourself praise).

@estmcmxci had a weighted score for the month of 25 (he got a lot of emoji responses, including an ENS fairy one, plus extra-weighted ones from some stewards and a core team member).
@user2 had a praise but only earned a few :+1: reactions, with no weighted emojis from the stewards or team, so his weighted contribution is a score of 5.
@user3 had a praise but didn’t get any emojis on it, so his weighted contribution is 1.
@user4 and so on. Until you have all the weighted scores.

Of the total 100 ENS governance tokens awarded:

  • @estmcmxci gets the most as his weight was highest. The amount of the 100 ENS is determined by the total contribution weights of all the users. So :point_down:
  • @user2 gets 1/5 of what @estmcmxci does.
  • @user3 who got no reactions for their “praise to” tag in the channel gets 1/25th of what @estmcmxci does.
  • @user4 and so on…
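The split in the example above is plain proportional allocation; a sketch using the three listed scores (the function name is mine):

```python
def distribute_budget(budget: float, scores: dict) -> dict:
    """Each praised user receives a share of the fixed monthly budget
    proportional to their weighted praise score."""
    total = sum(scores.values())
    return {user: budget * score / total for user, score in scores.items()}

shares = distribute_budget(100, {"estmcmxci": 25, "user2": 5, "user3": 1})
# user2's share is 1/5 of estmcmxci's, and user3's is 1/25 of it.
```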

Anyhow, we’re going to need tooling to do this, probably more than one tool. Again, it may be good to bring this topic up on a meeting agenda so we can all put our heads together, have a dialogue, and come up with actionable solutions soon. :rocket: :full_moon:

Edit: A visible praise channel is great for easy awareness of what people are doing and contributing. It doesn’t solve everything brought up in @cthulu.eth’s OP… We need a combination of solutions.


I believe @daylon.eth has a similar idea as you, of using positive reinforcement as an incentive, but doing so via DAO awards here: [Temp Check] Community Contribution & Steward Performance (CCSP) Awards Campaign - #23 by daylon.eth

Perhaps there can be a collaboration to merge these two ideas? :slight_smile:


Actually, could the Praise system developed by Commons Stack and used by the TEC (@Griff) maybe be integrated with ENS?

I support the idea, even if it’s not Praise specifically, of a public acknowledgement/awareness system for contributors, and, in general, of fostering a better culture at ENS.


It is always a good idea to provide praise and acknowledgement for good work. In the crypto space people often tend to over-financialise and view incentives only through the lens of payments. However, traditional social media systems have shown us that people will do a lot of work just for fake internet points / recognition by peers.

As to the question of assigning budgets and bounties: this is a hard problem. During my time at colony.io I spent a lot of time thinking about this issue. Currently my thinking is that we can use collective decision making to distribute budgets to projects / departments, (and these in turn might use similar schemes to distribute the funds to milestones / teams etc,) but at the local level, where the day to day work takes place, it is more efficient to allow a team leader to manage the work and pay of the tasks at hand. Dividing huge projects into an endless stream of tasks is nearly impossible to do and certainly impossible for any individual to keep track of. Since we do end up working together with our colleagues in more structured groups, our funding distribution should probably reflect this reality. I imagine a system for collectively assigning budgets to teams, while allowing team leaders to assign the funds to tasks / people, subject to some form of arbitration system to guard against abuse.

Regardless of how you might want to structure your funding flows, you’ll need the right tools for the job.
Back in 2018, we at Colony came up with a scheme (BudgetBox) for collectively deciding how to distribute funds ( Introducing BudgetBox ) but for many reasons we never implemented it. Recently, in collaboration with giveth.io and generalmagic.io we have begun work on a prototype under the name “PairWise”.
If y’all are still interested in collective dynamic budget allocations, I’d invite you to take another look at BudgetBox, take a look at what we are building right now and play around with it yourselves GitHub - GeneralMagicio/budget-boxes / https://pairwise.generalmagic.io/ . Who knows, perhaps this is something you can imagine integrating into your workflows too.

Demo video: /ipfs/QmaeCn8j9EMttdJzwDoyni1gzuwmyXN7snZyFjGG1fKgvo (eg https://gateway.pinata.cloud/ipfs/QmaeCn8j9EMttdJzwDoyni1gzuwmyXN7snZyFjGG1fKgvo)