
Update Oct 7th, 2024:

The new Staging Ground badges are now live. All are listed in the Staging Ground section of the badges home page. Anyone who has previously met the criteria to earn one of the newly released badges should be receiving it shortly.

We have also added a widget in the right sidebar where askers and reviewers can track their Staging Ground badge progress.

[Screenshot of the Staging Ground home page, including the badge progress widget]

Update Sep 19th, 2024:

The Graduate badge has been downgraded to a silver badge (keeping the criteria described in the table below), and we will add a gold version with higher criteria in a future iteration.

Update Sep 12th, 2024:

In response to feedback received here, we are announcing the new "Staging Ground Reviewer Stats Widget".


With the general availability launch of Staging Ground in June, we monitored several reviewer engagement metrics: the number of unique reviewers, the average number of reviewers per day, the adoption rate (the number of users who actually became reviewers, out of all the eligible active users who have access), and the number of questions reviewed per reviewer. Unfortunately, at this point, these numbers are not growing.

Staging Ground exists in service of our mission to provide a repository of human knowledge about programming, for the public good. It acknowledges the need for new users to be supported by creating a positive, rewarding, and sheltered entry point. For this reason, Staging Ground is one of the most important initiatives Stack Exchange currently has to offer.

Staging Ground, however, has one small problem: it takes a lot of work. If it succeeds, it will only be on the backs of community members, like many of you, who work to make it a success. However, even this is not enough — a lesson we’ve learned time and again (for instance, with review queues). For it to succeed, there must be enough reviewers over a sustained period to meet the number of questions coming in.

In order to make Staging Ground sustainable, keeping existing reviewers engaged and attracting new reviewers is a priority. In addition to adding more badges for reviewers to earn, the possibility of earning rep is a potential incentive to increase reviewer engagement.

This is not a new idea; prior to the launch of the Staging Ground beta, the product team discussed the possibility of awarding reviewers rep. During the beta, it was discussed again in the Staging Ground Testing Teams instance (only available to those who participated in the beta). We collected feedback from beta participants about the idea, including whether the broader community would like it, and tweaks that could be made to do it well.

We have taken note of the concerns raised regarding the potential for increased fraud if curators are rewarded in this manner. While these concerns are valid, we still firmly believe that curators deserve recognition for their invaluable contributions. Historically, we have acknowledged their efforts through badges, and we plan to continue to do this, but we feel that the dedication and meticulous care curators demonstrate in maintaining the network's high standards of quality merit something more substantial as well.

Staging Ground reviewers play a crucial role in ensuring that the content on our platform remains accurate, relevant, and of the highest quality. They work tirelessly behind the scenes to uphold the integrity of the information shared, but their efforts often go unnoticed. By rewarding reviewers with rep, we not only acknowledge their hard work but also incentivize others to contribute to the community in meaningful ways.

We understand that this is a contentious issue, as reputation has traditionally been reserved for content creators. However, the landscape of our community is evolving, and with the addition of Staging Ground, it is essential to adapt our recognition systems to reflect the diverse ways in which members contribute. Rewarding Staging Ground reviewers with reputation points is a step towards a more inclusive and comprehensive acknowledgment of all forms of contribution.

We understand that this would be a fundamental change to the system. We are offering you a proposal for this idea — and let us be clear, this proposal is targeted to Staging Ground only.

Feedback is welcome, particularly about any concerns that you believe would cause major friction for community members or moderators. We are committed to working on mod tooling to identify bad actors and seek input from you all about what you believe are the most crucial needs at this time to make this proposal viable.

We will outline this proposal as follows:

  1. Goals and rationale

  2. Overview of the rep incentive

  3. Planned updates

  4. Moderator input on new tooling

  5. Call for feedback


Goals and rationale

There are three main goals we hope to achieve with this proposal:

Goal: Current reviewer retention.

  • Rationale: Ensure existing participants can continue to help guide new askers. Since the launch in June, at least 2,000 reviewers have participated in Staging Ground at least once. However, our true baseline for consistent participation is the average number of daily reviewers, which hovers between 130 and 140 on a given day.

Goal: Attract new reviewers

  • Rationale: Entice potential reviewers who may not have been interested previously. There are plans to lower the reviewer entry criteria for users with fewer than 500 rep who have demonstrated particular behaviors that make them eligible (to be defined).

Goal: Encourage reviewer investment in the question’s outcomes on the main site

  • Rationale: Reviewers play a leadership role in guiding new askers, so they should be invested in the outcomes of the posts they contribute to.

Overview of the rep incentive - How it could work

At a high level, changes to the rep system within Staging Ground should be focused on incentivizing the desired behavior: approvals on posts that are ready to be posted on Stack Overflow.

How we see this working: In Staging Ground, when you approve a post, you have a stake in that question and gain (or lose) a portion of the rep that the asker gained (or lost). Reputation would be granted on the basis of the reviewer’s final action only.

As such, the first iteration could look like this:

| Reviewer's last action | Reputation +/- | Reason | Limits or exclusions |
| --- | --- | --- | --- |
| Approve OR Approve with minor edits | +5 rep | Each upvote the post receives | Limit 5 upvotes (max 25 rep gained); exclusion: reviewer votes on the asker's question |
| Approve OR Approve with minor edits | +10 rep | Question remains open on Stack Overflow and was not closed at any point within the first week | |
| Approve OR Approve with minor edits | -2 rep | Each downvote the post receives | Limit 5 downvotes (max 10 rep lost) |
| Approve OR Approve with minor edits | -5 rep | Question closed on Stack Overflow | Exclusions: "needs details or clarity" or duplicate closure reasons |
| Approve OR Approve with minor edits | -20 rep | Question is deleted on Stack Overflow as spam or offensive | |

This proposal bakes in other limitations such as:

  • Reviewers are still subject to the existing rep cap — no more than 200 rep earned per day.
  • Rep gained is locked in after 14 days to protect against long-tail vandalism.
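Read together, the table and the limits above amount to a small per-post scoring function. As a rough sketch only (the function name, signature, and closed-reason strings are invented here for illustration; the proposal itself is not finalized), the first iteration might look like this:

```python
def reviewer_rep_for_post(upvotes, downvotes, stayed_open_first_week,
                          closed=False, closed_reason=None,
                          deleted_as_spam=False):
    """Rep delta for the approving reviewer under the proposed first iteration.

    The existing 200 rep/day cap and the 14-day lock-in apply across all of a
    reviewer's posts, so they are not modeled in this per-post sketch.
    """
    rep = 5 * min(upvotes, 5)       # +5 per upvote, limit 5 (max +25)
    rep -= 2 * min(downvotes, 5)    # -2 per downvote, limit 5 (max -10)
    if stayed_open_first_week:
        rep += 10                   # never closed within the first week
    if closed and closed_reason not in ("needs details or clarity", "duplicate"):
        rep -= 5                    # closure penalty, minus the listed exclusions
    if deleted_as_spam:
        rep -= 20                   # deleted as spam or offensive
    return rep
```

For example, under this reading a post that collects 7 upvotes and 1 downvote and survives its first week would net the approving reviewer 25 - 2 + 10 = 33 rep, still subject to the daily cap.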

This proposal is not finalized. One of the most important factors is hearing feedback to ensure that we are creating something that would be sustainable and add value to the Stack Overflow community. We are also reviewing the feedback we have already gathered from moderators and plan to add a few more considerations in future iterations, such as tweaks to which reviewer actions can earn rep and to the success indicators for graduated posts.

Planned updates

Beyond this rep incentive proposal we are planning on releasing several upcoming changes, based on valuable feedback from community members and moderators. These changes are designed to incentivize reviewers and enhance moderation within Staging Ground, empowering mods to manage content more efficiently. Changes we are planning on making are:

  1. Advanced reviewer stats

    • We have introduced advanced reviewer statistics to enhance the existing Moderator tools 'links' page. This new feature, built in response to feedback, will provide moderators with a comprehensive overview of reviewer activities, allowing for better monitoring and management. The functionality includes the ability to filter Staging Ground reviewers based on a minimum number of reviews, specifically focusing on the number of posts reviewed rather than individual actions within a post.
  2. Ability to formally document and warn reviewers without suspending them (will also benefit existing review queues)

  3. New badges to incentivize reviewers

    • We have observed a strong correlation between earning Staging Ground badges and an increased level of participation. Data indicates that reviewers who have earned badges are more engaged, performing more review actions and leaving more comments compared to those without badges.

    • Since the launch, nearly 300 reviewers have earned at least one SG badge. On average, these badge-earning reviewers perform 14 times more review actions per reviewer than those without badges, representing a nearly 1,300% increase. Seventy percent of all reviewer actions are taken by those who have earned badges. Additionally, reviewers with badges leave nearly five times more comments, although it is important to note that not all reviews require comments.

    To further boost engagement and motivation, we are excited to introduce a new set of badges designed to recognize and reward the efforts of Staging Ground participants, both askers and reviewers.

    | Name | Type | Criteria | Awarded multiple times? |
    | --- | --- | --- | --- |
    | Eager Learner | Bronze | Leave 2 comments in Staging Ground. | No |
    | Swift Editor | Bronze | Edit your own Staging Ground question within 5 hours of receiving feedback. | No |
    | Polisher | Silver | Post 2 questions on the main site from Staging Ground after making minor edits to your draft. | Yes |
    | Graduate | Gold | Ask two successful questions (receiving a post score of +2, or a post score of +1 and an answer) on the main site from Staging Ground. | No |
    | First Review | Bronze | Review your first post in Staging Ground. | No |
    | Variety Reviewer | Bronze | Review questions from 50 distinct tags in Staging Ground. | Yes |
    | Daily Practice | Silver | Review 50 Staging Ground questions in one day. | Yes |
    | Steadfast Reviewer | Silver | Review at least one question in Staging Ground per day for 7 consecutive days. | Yes |
    | Champion Reviewer | Gold | Review 1,000 posts in Staging Ground. | Yes |

Moderator input on new tooling

We understand these issues are multifaceted, and while our goal has always been to underpromise and overdeliver, it's important to set the expectation that we cannot fix everything at once. But we have a plan to move forward.

We’ve collected some feedback internally from the team who worked on the suspicious votes page and from mods about some of the pain points that they have expressed in the past, such as difficulty in being able to identify sock puppets, targeted voting, and the recommended course of actions that should be taken (more manual or less manual actions to invalidate votes).

We are proactively engaging with moderators to identify specific areas where we should focus our efforts in developing tools to combat voting fraud and abuse. While we do not have any specific details to share right now, we believe that by collaborating closely with mods, we can aim to triage the most critical asks that require immediate attention. We will consider their feedback to help develop these tools so that we can alleviate any burden these potential changes may cause.

In closing

To be clear up front: We acknowledge that all these goals are fundamentally audacious, and we are entering new territory.

We are trying something new, and this is just our first attempt at dipping our toe in the waters of rewarding curators with rep, a form of recognition which we believe is overdue. We expect to need to make adjustments and improvements on this as we continue to iterate, and community input will be crucial in that process. We know there is probably no perfect system that rewards people for optimal decisions 100% of the time, but we hope to start by creating something that works really well in Staging Ground.

We look forward to your feedback.

  • Honestly, if you pushed 100% of users who should go to the SG to the SG, I'm sure you'd see an uptake. It's incredibly frustrating that the vast majority of posts I see from users that should go to the SG currently don't. That SG posts are shown in the filter is great, but such a small number still go there that I wouldn't be surprised if many day-to-day users don't see a post that's in the SG in their search. – Thom A, Aug 26 at 16:25
  • The criteria for the Graduate badge are too loose. Maybe a 2-tier badge, with the silver one as is and a gold one reserved for asking "stellar" questions, something like 5 upvotes or so. – M--, Aug 26 at 16:53
  • How do you deal with a case where multiple reviewers work on a question by providing feedback and performing review actions, but only one person (or some?) approves the question? The other reviewers who helped the OP shape the question don't get any reputation, while another reviewer who checks the question for the first time sees it is all good and approves it. This might turn into jealousy: "Why is he/she getting all the reputation, when I did all the (review) work?" – Progman, Aug 26 at 17:12
  • So... a user who goes through the SG providing edits/comments requesting asker feedback to improve posts, without actually reviewing, would earn nothing? – Kevin B, Aug 26 at 17:17
  • Is it possible that too many motivated people quit the site after you announced the closure of public data dumps, or any of the other bad staff decisions? – Aug 26 at 17:54
  • I, for one, am just happy to see a post that doesn't invoke AI as a "solution". – dbugger, Aug 26 at 18:22
  • Giving out reputation, I think, would be a mistake. It would definitely attract more reviewers, and the result would be a huge and possibly fatal (to the goals of SG) decrease in the quality of SG reviews. – Aug 26 at 18:51
  • @dbugger you will be unhappy to know that it's just on its way. – starball, Aug 26 at 19:06
  • Carefully look over the results and aftermath of the reputation-for-edits system used in Documentation before proceeding. You don't want to repeat some of those mistakes, and you certainly want to amplify what went well. – Aug 26 at 20:44
  • What is "long-tail vandalism"? – starball, Aug 26 at 21:36
  • The staging ground is a solid idea, but it is just awkward in implementation. It slows question delivery to the top-questions list until it is reviewed by someone actually monitoring tags relevant to the question. No shortage for C/C++ or other standard questions, but I see many I'll never review just from not being in an area of current expertise. I also see some questions that seem to bounce around in staging for many hours. Yes, the questioner still gets the benefit of review by those with approval privilege, but I wonder if that delay may be something that detracts from the site? – Aug 26 at 23:01
  • BTW, now that you are looking into new badges, I think taking care of this FR: meta.stackoverflow.com/questions/394048/… is also on point. That'd help to weed out dupes if/when they graduate from SG. – M--, Aug 27 at 0:26
  • "keeping existing reviewers engaged and attracting new reviewers is a priority" I like helping; that feeling itself is the reward I want. But contributing my time to making Stack Exchange Inc's product better doesn't feel rewarding because of how the company treats us. – Sam Dean, Aug 27 at 9:39
  • @Itallmakescents People shouldn't use Staging Ground comments for trying to find the answer, and most Staging Ground comments would just clutter the post if included on the main site. – dan1st, Sep 2 at 4:16
  • @chivracq What do you mean by "liking one day to reach 2k"? I don't think we should make assumptions like "it will take me t time to get that much reputation", as we don't really know that much about how it works (and I also don't see many reviews from you before the time of the announcement), and don't forget about daily rep limits etc. Anyways, I think the only thing to do is be patient until we get an update (and to be honest, I hope the currently proposed system is reconsidered, since there are quite a few issues with it). – dan1st, Oct 7 at 11:44

24 Answers


Why I generally do not review in SG:

  1. Plenty of items do not go through it. Yes, I understand that is the idea. It still makes me feel that any time I spend would be better spent on published questions, because that would also catch the ones that did not go through SG.
  2. The standards of SG do not seem to match mine. I envisioned SG differently. Apparently, the aim is "barely adequate questions". So things that need edits, or even clarification, are supposed to be published. It reinforces my feeling that my time is better spent on published questions; otherwise I would do double work.
  3. I have seen enough users who publish even non-adequate questions. Some do it because they do not know better and slipped up. Some seem to not really care, because they just went through and approved a batch of SG items essentially at once.

Perhaps a more general problem with the entire design can be summed up with: Time is not infinite.

There are already a lot of areas where users spend their time: review queues, manually reviewing questions, asking, answering, editing, retagging, monitoring specific areas of the site, etc. Then Staging Ground is introduced, and users are expected to invest even more time. But that time has to come from somewhere. Either they reduce their activity elsewhere on the site, or they take away time from off-site activities. Time investment is a zero-sum game; we all have 24 hours a day. And reviewing in Staging Ground, in particular, tends to take more time per post than other activities.

Of course, I cannot speak for everybody. But at least for me, the required time investment, without any meaningful reduction elsewhere, makes me prefer not to participate. My specific points 1-3 are all about having to review newly posted questions anyway. If I have to do that regardless of spending any time on SG, then I would rather just cut out the time I spend in SG.

  • Regarding 2.: I'd say the aim is "questions that seem good to a non-SME", because it's not feasible to expect subject-matter experts to review each question. For 3.: Yes, and I think this should ideally be handled by a system that results in these users losing (or at least not significantly gaining) reputation in the long term (at least if they just publish any questions they see). – dan1st, Aug 26 at 17:15
  • @dan1st That's not my overall impression from SG. "Barely adequate" is. Questions with wrong tags or titles are regularly published when I've checked. And I don't expect perfect tagging, like adding the exact tag that some topic needs. I expect users to look at the tags currently on the question and amend or remove unnecessary ones. Same with titles: a lot of questions are still published with "mystery titles", ones where you cannot even guess what the question is about without reading it. – VLAZ, Aug 26 at 17:25
  • @dan1st Here is an example: Top dynamic bar in wordpress site. The only two words from the title that have any relevance to the question are "top" and "bar". Even then, the question is not about a top bar but about the timing of some events (which is reflected on that bar but could easily be anywhere else). WordPress does not come into play. It's useless or even misleading in the title and in the tags. – VLAZ, Aug 26 at 17:25
  • @dan1st Although, I guess that question has other problems: it was marked as requiring major changes and indeed doesn't have a reproducible example. Yet it was published. I did not search long to find that example; it was the most recent published question (although it seems I had some tags in the filter, I just now noticed). – VLAZ, Aug 26 at 17:25
  • Given that the user approving the question you linked only approved and skipped questions in the Staging Ground (at least I can see many approvals and skips and no other review actions), I think there are other problems in that case. – dan1st, Aug 26 at 17:48
  • Regarding titles/edits: this may be a controversial opinion, but I think the body and the question having a minimal reproducer/actually asking a proper question is significantly more important. If a question on the main site doesn't have good tagging/a good title, I don't think that's that big of an issue. But I agree that I should probably care more about these two things than I currently do. However, I am not sure whether non-SMEs could really recognize and write good titles and know how relevant a topic actually is. – dan1st, Aug 26 at 17:49
  • Like if a user approves 23 questions within 15 min, I don't think that's a good example of a typical Staging Ground reviewer or expected quality. – dan1st, Aug 26 at 17:56
  • "If a question on the main site doesn't have good tagging/a good title, I don't think that's that big of an issue" and I disagree with that. I spend at least one hour a day fixing tags. And whenever I actually use SO as a knowledge base by searching it, I end up essentially having to read tea leaves. Because when you search (on-site or off-site), the main information about a question you get is the title. When the title is not helpful, I have to review each entry that even sounds adjacent to what I want. What if I want to do something with WordPress and top bars? – VLAZ, Aug 26 at 18:09
  • "Like if a user approves 23 questions within 15min, I don't think that's a good example for a typical Staging Ground reviewer or expected quality." Sorry, I'm not going to comb through all the items to find you examples that you would agree with. The fact that it happens, and, you should also acknowledge, more than once, is enough for me. I've checked out SG published items multiple times after launch. I've found problems often enough. But if we're going the route of "your example is real but doesn't count", then it'd be a waste of my time to get involved. – VLAZ, Aug 26 at 18:12
  • I am not saying there are no problems. In fact, I agree that many approved questions have a lot of problems. However, I think questions reviewed by people who just approve questions without looking into them much aren't a good indication of the expected quality level. – dan1st, Aug 26 at 18:14
  • I checked and picked the first one that I found. Which happened to be the first one. If you want me to go through the published items, verify which is from a "bad reviewer" or not, and then give an example from a "good reviewer", I'm not going to do that. – VLAZ, Aug 26 at 18:26
  • I am not asking for a better example. But you mentioned this as an example of the "aim"/expected quality level (at least I interpreted it that way), while examples (especially in cases like that) of approved posts would be examples of the actual quality level of approved posts, not what's expected from reviewers. – dan1st, Aug 28 at 4:48

TL;DR

I think introducing reputation in the Staging Ground is necessary but your proposal has some (quite significant) issues.

Rep in the Staging Ground

Thank you. As one of the people who participated in both beta tests, I'm convinced that reputation in the Staging Ground is an absolute necessity. I am completely in favor of the proposal to add reputation to the Staging Ground. I also think that most beta testers (who participated in the discussions about reputation in the Staging Ground), and probably also many other active reviewers, feel similarly.

However, there are still a few things that should be addressed (in my opinion). I mentioned some of these already in the post on the Beta team, but there are still a few worth mentioning here. Don't make the changes you are proposing as-is!

I don't expect you to implement all my suggestions but I'm asking you to think about and consider them as well as explain your choice.

Analysis on existing data

This is a significant change to the site, and getting it wrong would be hard to undo. Considering that you now have quite a bit of data from the beta tests, and also from reviews since the Staging Ground was released publicly, I am sure you did a lot of analysis (right?). Would it be possible to share the results with us? I'd be specifically interested in:

  • What is the distribution of rep that would be gained by reviewers?
  • Are there outliers (both with respect to users and questions) with respect to (simulated) gained reputation in the Staging Ground and how significant are these outliers?
  • If someone would just approve questions in the Staging Ground (analyzing non-SG questions as if they were directly approved in the Staging Ground), would they get a net positive or negative reputation? If someone is approving every post they see, they should (in my opinion) not get any reputation out of it.
  • Is there a significant difference in gained reputation between posts approved outright and posts approved after at least one author edit in response to "Major Changes"?

Too extreme values

The numbers you suggested are quite high. For example, you are suggesting giving reviewers 5 reputation per upvote, which is a lot for a post they haven't written.

Use the score

I want to suggest using the post score instead of raw upvotes/downvotes. For example, if a post receives 1 upvote and 1 downvote, I don't think the reviewer should get reputation. It would be better if an upvote and a downvote neutralized each other in that regard.
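To make the difference concrete, here is a hypothetical comparison (both functions and their scaling factors are illustrative only, not part of any official proposal) between counting votes independently, as the announcement's table does, and rewarding the net post score:

```python
def rep_from_votes(upvotes, downvotes):
    # Proposal as written: each vote counts on its own
    # (+5 per upvote up to 5, -2 per downvote up to 5).
    return 5 * min(upvotes, 5) - 2 * min(downvotes, 5)

def rep_from_score(upvotes, downvotes):
    # Suggested alternative: reward only the net score, so opposing votes cancel.
    score = upvotes - downvotes
    if score > 0:
        return 5 * min(score, 5)    # +5 per point of positive net score, capped
    return 2 * max(score, -5)       # -2 per point of negative net score, capped
```

Under the proposal as written, a post with 1 upvote and 1 downvote nets the reviewer +3 rep; under the net-score variant it nets 0.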

Major Changes

The most important action in the Staging Ground is "Require Major Changes", and this is also the option that takes the most work for a reviewer. I suggest also giving some reputation to reviewers requesting changes (with appropriate limits TBD) if another reviewer approved the post (which happens quite often).

For example, you could give these reviewers some reputation for positive scores only (and take some of that away if the score decreases but make sure people don't lose reputation for this action (except for spam posts)).

Answering should give more rep

Asking and answering are the main forms of gaining reputation and this should stay that way. Please make sure that typical users can get more reputation by answering questions than by reviewing Staging Ground posts (in comparison to the time it takes to do so).

Duplicates

Reviewers aren't supposed to be subject-matter experts. You can't expect them to find all duplicates. Please don't deduct reputation for posts being closed as a duplicate of another post. (Never mind: I didn't read the exclusion; duplicate closures are already excluded in the table above.)

On the other hand, finding duplicates can also be a lot of work. What do you think about giving a small amount of reputation to the reviewer who originally found the duplicate, if the asker agrees (either by directly agreeing, or by another reviewer using the same duplicate and the asker not re-evaluating)? Reputation for duplicate finding could also be a good idea on the main site (see this comment).

Giving a lot of reputation for staying open

You include giving 10 rep for

Question remains open on Stack Overflow and was not closed at any point within the first week

This seems excessive to me. I don't think there should be a (significant) reputation gain for posts that are not receiving certain actions. If it was like 1-2 reputation, I'd probably be ok with it but 10 reputation for every post that stays open is way too much.

What happened to the original proposal?

In the teams post you linked, there was a different proposal. Can you please explain why you made the changes you made?

The original proposal was (mostly) the following:

  • +3 rep on upvoted posts, both when the reviewer approved or requested major changes
  • -1 rep on downvoted posts you approved
  • -5 rep for non-duplicate closures
  • +1 rep for closing as off-topic (I agree with the removal of this)
  • +2 rep for duplicate closure (I do think this could be done in a better way than in the original proposal which I mentioned in my suggestions there)

There was a lot of discussion on that proposal including a few problems that were noticed with it but overall, I think this original proposal was better than the current one so I would be interested in what exactly motivated your changes. Did you start over completely?

Giving the wrong incentives

Your current proposal of how much reputation to award or remove seems to incentivize any reviews, no matter how bad they are. It looks like it would award a lot of reputation even to users who deliberately approve bad posts, just because many of their reviewed posts stay open.

Here are some things that should (in my opinion) be incentivized:

  • Using "Requires Major Changes" in a way that improves the post
  • Finding Duplicates
  • Approving good posts

And the following should NOT be incentivized:

  • Approving any post without actually looking into it
  • Other forms of Robo-Reviewing

If users who are approving like 15 posts in 10 minutes (especially when they aren't skipping many posts) get a net positive out of this, you probably have the wrong system.
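The "15 posts in 10 minutes" pattern above is straightforward to detect mechanically. As a purely illustrative sketch (this is not an existing Stack Overflow tool; the window and threshold are simply the numbers from the sentence above), a sliding-window check over a reviewer's approval timestamps could look like this:

```python
def flags_rapid_approvals(timestamps, window_seconds=600, threshold=15):
    """Return True if `threshold` or more approvals fall within any window
    of `window_seconds`. `timestamps` are sorted epoch seconds."""
    left = 0
    for right, t in enumerate(timestamps):
        # Shrink the window from the left until it spans <= window_seconds.
        while t - timestamps[left] > window_seconds:
            left += 1
        if right - left + 1 >= threshold:
            return True
    return False
```

Fifteen approvals in a single burst trip the flag, while the same fifteen spread over several hours do not; a real implementation would of course need tuning and moderator review rather than automatic action.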

Abuse

With any way of introducing reputation to a system like the Staging Ground, there will be some people trying to abuse it. Your proposed system, with these extreme values for upvotes and for questions staying open, is especially bad, as people would get reputation for many reviews no matter how bad those reviews are, which (I think) makes it very easy to abuse.

Before making any system that introduces reputation incentives to the Staging Ground, make sure you have good abuse prevention and detection mechanisms in place and ask the moderators whether these mechanisms are sufficient and what else is needed for that. If moderators tell you the mechanisms you built are insufficient, listen to them.

Aside from building said mechanisms, make sure you are ready to quickly build new ones after reputation in the Staging Ground goes live, depending on the exact needs.

Other things

Providing incentives to review questions is one (important) thing, but it's also important to make sure these reviewers know what they are supposed to do. Please provide an onboarding experience for reviewers that:

  • shows them example posts and how to review these
  • tells the reviewers what is expected from them, how the review options work and what a good post looks like
  • they can view again at any point
  • Agreed. The rep is far too extreme. One thing I've learned from Stack Overflow is people will up- or down-vote <expletive deleted>ing anything. – Aug 26 at 20:37
  • @user4581301 Stack Overflow blog, 2013: "It turns out that people will do anything for fake internet points", though the next sentence is "Just kidding." – Aug 27 at 16:54
  • Having played my share of MMOs and seen what people will do to earn a cosmetic item, like a cape, that confers absolutely no in-game advantage (and if you know your Edna Mode, you know a cape is a disadvantage), I think "Just kidding" is being overly kind. – Sep 5 at 0:04
30

Who does a reputation reward work best for? Same question for badges?

I personally feel both of these rewards are great at attracting attention from newer users, but have little to no effect on experienced users (or even diminish it in some cases).

I think a much better tool for encouraging participation would be making it clear what positive effect our contributions are having on the network at large. Provide good reporting on the SG that everyone can access, including users who can't review, so they can learn to trust the process.

The entire point of this platform is being the place where you can find answers. While I think the Staging Ground is important and the work it is doing is needed, I also feel its goals are misplaced. It shouldn't be an exercise in how many questions we can get approved/posted; it should be a process of preventing posts that will not fare well from being published. All of the rep incentives work against that.

1
  • 3
    I like gamification in Steam, but quite soon lost interest here on SO. Holiday hats, badges, score aren't my goal anymore. The little green checkmark near my answer is all I need. SG is simply not my thing, feels like a waste of time, it has a weird UI, it's different from normal topics, it feels like another Documentation (or whatever the name of that long-dead thing was) ... and I have no idea what needs to be done to see me there. Making reputation (the currency here) more useful perhaps, and making SG/queue the best income? Bounties are very slow and I simply don't bother.
    – Sinatr
    Commented Aug 30 at 15:12
26

Don't exclude “needs details or clarity” from closure penalties

For approving questions that are later closed, you list “needs details or clarity” as one of the exclusions from the -5 rep penalty. It's not totally clear what the motivation for that is, but there are a few problems with it:

  1. It's often used on stuff that's totally incomprehensible, wildly underexplained, using images of text, etc. Consider this example from SG (reviewed as requiring major changes), or this example from the main site (screenshot for <10k; closed as "needs details or clarity"). It doesn't take a subject-matter expert to identify that these are totally unsuitable.

  2. It's nearly interchangeable with "Needs debugging details" (other than that "Needs debugging details" can't reasonably apply to how-to questions), which does not have that exclusion. I don't think that there's a clear reason that one should be excluded, but not the other. If anything, it can be harder for non–subject-matter experts to identify whether a minimal reproducible example is adequate.

17

I think that the main problem with Staging Ground and the lack of reviewers is that there are not enough questions in the SG to begin with.

To provide adequate guidance in the SG (guidance that would ensure the success of the question once published), the user giving it needs to be a subject matter expert, or at least knowledgeable enough.

I can cast a close vote on an obviously poor question that lacks details, asks multiple questions, or lacks a reproducible example, and hope that once it is closed the OP can follow the guidelines in the post-close notice and read more about asking in the Help center.

But in the SG, if I don't know anything about that tech stack, I cannot provide sufficient guidance beyond the very basic kind found in the canned comments. Leaving such basic comments and hints (which we must give in the SG) can only address some of the issues in the question. It often results in the OP making some changes and then replying to your comment, asking for further feedback or hoping that the question can be published; but at that point I may not be able to provide adequate feedback anymore, and I may not be in a position to tell whether the question is ready for publishing or not.

There are plenty of users who frequent various tags and post comments and give feedback on numerous posts in their area of expertise. But they simply don't get to review SG questions, because there aren't any, or only a very small number of them.

Instead of figuring out how to give reputation to reviewers, you should try applying the SG at a large scale first. That would be the most appropriate test for the whole SG. That way you could get more feedback about the workflow, potential problems, and possible improvements from users who commonly do curation work in their area of expertise.

The worst that could happen is that poorly asked and researched questions will not be instantly answered, and that if they eventually get auto-published because they were not reviewed, the posting delay will discourage people from asking without putting in some effort first. And those askers who do put in effort will be rewarded, as their questions will be better received once published.

This would also prevent new users from asking one really bad question or several poor questions in a row and getting an almost instant question ban right after joining the site.

11
  • 1
    "Leaving canned comments (which we are forced to do in SG)", the forced part is not true... I've never used any of the 'Canned Comments', I always leave custom Comments... (But I also always select "problematic" Questions (=> for 'Require major changes') that will require addressing several issues that no just one 'Canned Comment' could address all together...)
    – chivracq
    Commented Aug 26 at 20:40
  • 6
    @chivracq The point is that if you are not SME you cannot provide more specific feedback outside what common canned comments can do and that unlike for voting to close you do have to leave a comment which in turn makes OP coming back to you for more when you are not able to be of more help. For instance, if you see that there is no MRE you can leave comment to add one, and when the user does that you will not be able to tell whether that is appropriate mre and the question is ready for being published or not. This is why having actual SME would help, instead of having random reviewers.
    – Dalija Prasnikar Mod
    Commented Aug 26 at 21:27
  • That does not mean that random reviewers cannot do anything, but such reviews will be rather limited and SG will not fulfill its goal to the fullest.
    – Dalija Prasnikar Mod
    Commented Aug 26 at 21:28
  • 1
    Yep indeed, some knowledge of the technologies/programming language involved in the Qt always helps of course, I skip if I don't know anything... But I only reacted to "forced to use ('Canned Comments')", I prefer to place manually [mre] in my Comment, together with the other issues the Asker will need to address... (Title not descriptive, wrong Tags, no input/output expected, Error Msg missing or as an image, too many typos/missing words, etc...)
    – chivracq
    Commented Aug 26 at 21:56
  • Having a lot of Staging Ground posts that don't get reviewed (in response to opening the SG to a larger scale) will do damage. If the author Re-Evaluates the post after an edit (in response to "Requires Major Changes") and no reviewers make another review, the post will be stuck in the Staging Ground forever (this also happened during the beta tests/strike).
    – dan1st
    Commented Aug 27 at 4:28
  • @dan1st I think that such situations would happen less if the reviewers are subject matter experts. We get notified if the question has been updated, so I expect that such people would provide new feedback if necessary. The problem happens when you are not in a position to judge the improvement and you hope that someone else will step in. Also, if posts getting stuck in re-evaluation happens more often, then maybe the workflow needs improvement.
    – Dalija Prasnikar Mod
    Commented Aug 27 at 6:12
  • 1
    @dan1st I don't think having SG on a large scale will be problematic, if many new questions go through the SG and it shows up by default on the questions listing. Users who want to answer will be forced to review if they want to post answers to these questions. Trying to incentivize users to take the correct review actions instead of them being a FGITW is going to be the main problem with scaling. Commented Aug 27 at 7:27
  • I don't know what would happen if the Staging Ground was suddenly scaled to significantly more questions but I do see questions stuck in Re-Evaluate as a potential issue. That being said, I mostly feel comfortable reviewing questions where I'm not a SME but I prefer reviewing in questions where I know about the topics (beta team link).
    – dan1st
    Commented Aug 27 at 15:48
  • 1
    @dan1st Are posts getting stuck waiting for a re-review that common a problem? I far more often see posts stuck in "needs major update" where it seems the OP just gives up / abandons their post, and in that case it's really only their loss. I'd consider it a small victory that a low-quality post was not released to the general population. Perhaps posts that have been updated/edited and marked for re-review should also auto-publish after some delay, say 24 or 48 hours, in case it was the reviewers who gave up.🤷🏻‍♂️
    – Drew Reese
    Commented Aug 28 at 6:08
  • 2
    The goal is to eventually allow more questions into SG. But to do that, we need ample reviewers. The proposal outlined above, as well as features already in the works, are ways in which we hope to motivate reviewers, to accommodate the scaling of SG.
    – Sasha StaffMod
    Commented Aug 28 at 20:04
  • 1
    @Sasha I am worried that you will only motivate robo-reviewers by adding reputation. People are willing to robo-review even for badges, let alone reputation.
    – Dalija Prasnikar Mod
    Commented Aug 28 at 20:17
16

I'll mostly repeat what I said in the beta SO for Teams site.

I don't think this should reward rep. Precedent is relatively weak, and robo-reviewers are already enough of a problem in review queues where all they can get is badges.

If it must reward rep, the proposed values are too lucrative. I propose maximum absolute values of 2, except that publishing spam or R/A should be a review ban and something harsher, like -50 rep.

From an abuse-case perspective, you are essentially creating a proxy channel for serial-voting on reviewers (assuming SG reviews are, or eventually will be, visible in a user's profile page, which they should be). For example, say I want to act on my pettiness and have something against reviewer A. I can downvote everything reviewer A approves and make their rep go down. I don't see what's so difficult about figuring out what kind of mod tooling you need to detect rep-based abuse, at least for the first-order, most obvious abuse case this enables: integrate the review system into the existing serial-vote-detection system. That is, in that system, treat a vote by user B on a post approved by user A as a vote by B on a post by A.
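The mapping described here is simple enough to sketch (a toy illustration only; the function name and data shapes are hypothetical, not the actual detection system):

```python
# Toy sketch of folding SG approvals into serial-vote detection:
# a vote by B on a post approved by A is also counted as a vote by B "on" A.
# Data shapes and names are hypothetical.
from collections import Counter

def vote_pair_counts(votes, approver_of):
    """votes: iterable of (voter, post_id, author).
    approver_of: post_id -> approving reviewer, for posts that went through SG.
    Returns a Counter of (voter, target) pairs for anomaly checks."""
    pairs = Counter()
    for voter, post_id, author in votes:
        pairs[(voter, author)] += 1          # the normal attribution
        reviewer = approver_of.get(post_id)
        if reviewer is not None and reviewer != author:
            pairs[(voter, reviewer)] += 1    # also attribute to the approver
    return pairs
```

An unusually high count for one (voter, target) pair would then feed into whatever thresholding the existing serial-vote detector already applies.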

Rep gained is locked in after 14 days to protect against long-tail vandalism.

How is this a net improvement? You want to know what thinking "long game" looks like? "I can make a voting sock, very slowly upvote posts reviewed by my other account, wait for 14 days to pass since each vote, edit the post, and then remove the sock upvote to cover my tracks. Receiver account keeps the rep, but the vote is gone." Unless mod tooling keeps track of votes as transactions, in addition to the latest existence of a vote by a user on a post, and factors that into fraud detection with parameters wide enough to catch it, that's going under the radar.
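Keeping votes as transactions rather than just latest state, as suggested here, might look like this toy ledger (hypothetical event shapes; not the actual mod tooling):

```python
# Toy append-only vote ledger: a cast-then-retracted vote stays visible to
# fraud detection instead of vanishing with a latest-state-only view.
# Event and field names are hypothetical.

def retracted_pairs(events):
    """events: iterable of (action, voter, post_id), action in {"cast", "retract"}.
    Returns the set of (voter, post_id) votes that were later removed."""
    active, removed = set(), set()
    for action, voter, post_id in events:
        key = (voter, post_id)
        if action == "cast":
            active.add(key)
        elif action == "retract" and key in active:
            active.discard(key)
            removed.add(key)
    return removed
```

A detector with access to the removed set could still flag the sock's upvote even after the 14-day lock-in and the cover-up edit.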

We are trying something new, and this is just our first attempt at dipping our toe in the waters of rewarding curators with rep, a form of recognition which we believe is overdue

Why this specifically, then? See for example, https://meta.stackexchange.com/a/387398/997587.

In future iterations, we are open to the possibility of allowing multiple reviewers to gain rep from the same Staging Ground post, rather than just the final approver, as this is often a collaborative process. - Sasha

I can get the motivation (I actually mostly look at just new stuff in SG), but it's a nested can of worms in what's already a can of worms. I can imagine people abusing it in similar ways to how you can abuse "leaving a review" in First Questions by just upvoting a comment. Except now they are doing it to get rep and not just a badge. And what if I only review new stuff, and other people publish something in a state that I don't think is ready? Would you penalize someone for a downvoted post in that scenario? It's messy.

14

I'm an active SG participant and participated in the SG betas.

Unfortunately, at this point, these numbers are not growing.

Not enough new posts that ought to go through SG first are fed through the system.

Staging Ground, however, has one small problem: it takes a lot of work. If It succeeds, it will only be on the backs of community members, like many of you, who work to make it a success. However, even this is not enough — a lesson we’ve learned time and again (for instance, with review queues). For it to succeed, there must be enough reviewers over a sustained period to meet the number of questions coming in.

Is this really a problem though? SG posts that receive no interaction are eventually auto-approved and published to the main site where they are handled through normal means anyway.

In order to make Staging Ground sustainable, keeping existing reviewers engaged and attracting new reviewers is a priority. In addition to adding more badges for reviewers to earn, the possibility of earning rep is a potential incentive to increase reviewer engagement.

I can't really agree with the gamification aspect of SG reviews. With gamification come rep farmers, which bring audits, which leads back to the review queue issues. The review queue audits every 3rd to 5th review are the reason I quit the review queues. I'm not saying audits will come, but there is history here. I can certainly say that if review audits were introduced to the Staging Ground, my participation would be considerably less.

I think recognition in the form of earned badges to highlight a user's contribution specifically to SG could be acceptable though.

I could get behind a rep loss/penalty for SG post reviewers that approve posts that are immediately down-voted and/or closed on the main site. The entire point of the SG is that posts have been peer-reviewed and meet the site's standards before being published. Quality should be the bar, not quantity.

3
  • 7
    And think of the fun to be had policing the really smurfy people dropping revenge downvotes on some poor shmuck's question to punish the reviewer. Commented Aug 26 at 20:38
  • 1
    "The review queue audits every 3rd to 5th review are the reason I quit the review queues." same here! Commented Aug 29 at 18:06
  • 1
    The most annoying thing about the audits is the 10-30 second lag that precedes them. Kind of ruins the whole point of the audit if you know one is coming. Commented Aug 30 at 20:54
11

Bold ideas, and I like the badges, mostly (the naming leaves a bit to be desired, IMHO).

However, I think the suggested reputation table is far too generous to participants; it rewards secondary efforts as much as or more than original efforts. Here is my suggestion for a more measured reputation reward table:

| Reviewer's last action | Reputation +/- | Reason | Limits or exclusions |
| --- | --- | --- | --- |
| Approve with minor edits | +1 rep | Each positive point of score a post reaches, for questions that remain open and were not closed or deleted after 7 days | Limit: 5 upvotes (max 5 rep gained). Exclusion: reviewer votes on the asker's question |
| Approve OR Approve with minor edits | -2 rep | Question closed on Stack Overflow | Exclusion: question closed as a duplicate |
| Approve OR Approve with minor edits | -20 rep | Question is deleted on Stack Overflow as spam or offensive | |

Additionally, similar to the Suggested Edits review queue, there should be a "global" reputation cap for Staging Ground reputation: no more than 1000 reputation should be awarded.

Likewise users should not be penalized more than 500 reputation for negative results. Instead, users should be permanently suspended from the Staging Ground if they reach that point. Auto-suspensions should also probably be implemented at intervals of 100 reputation lost and 250 reputation lost, for 1 month and 6 months, respectively.

Notes:

  • I combined the two "gain" rows into one; I think the notion of giving someone 10 reputation just because they read a post and click a button is... excessive.

  • Approval alone does not merit any reputation reward, so that option has been removed from the remaining "gain" row.
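For concreteness, the suggested table could be read as a per-question rule like the following (a sketch only; the function and argument names are mine, the values come from the table above):

```python
# Sketch of the suggested per-question rep rule. Values come from the
# table above; function and argument names are hypothetical.
UPVOTE_CAP = 5  # at most +5 rep per approved question

def rep_delta(upvotes=0, closed=False, closed_as_dupe=False,
              deleted_spam_or_offensive=False):
    """Rep change for the approving reviewer of one published question."""
    if deleted_spam_or_offensive:
        return -20
    if closed and not closed_as_dupe:
        return -2
    # +1 per point of positive score, for questions still open after 7 days
    return min(max(upvotes, 0), UPVOTE_CAP)
```

The global +1000 / -500 caps and the auto-suspension thresholds would then be enforced on the running total across questions.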

8
  • 3
    "Approval alone does not merit any reputation reward", I don't agree, 'Approval' after a 'Require major changes' (by the same Reviewer) is what takes the most work/time in the 'SG'...
    – chivracq
    Commented Aug 27 at 14:32
  • 1
    @chivracq Doing something else and then approving it is not the same as "Approval alone". Also, "require major changes" is just two extra clicks: 1) toggle on comment template 2) choose the appropriate reason template. Hardly effort worth rewarding reputation.
    – TylerH
    Commented Aug 27 at 14:36
  • 3
    Hum, ah OK..., "just 2 extra clicks", yeah...!, we don't review in the same way, like I explained in a Comment under Dalija's Answer, I choose only/mostly "problematic" Questions with several issues that need to be addressed, I never use 'Canned Comments' but write a custom Comment listing all the issues, ... and I follow up on the Qt('s) I review... And this takes time, and often some back and forth Comments with the Asker, it's not just 2 clicks like you present it...
    – chivracq
    Commented Aug 27 at 14:45
  • 2
    I agree that "Requires Major Changes" is the review action that takes the most work but it's also the most important review action and I also think people requesting changes should be awarded if the post is improved and approved by someone else. I don't think people should get rep for the "Requires Major Changes" action alone but I do think there should be an incentive for that action.
    – dan1st
    Commented Aug 27 at 16:03
  • 1
    @chivracq Following up on questions you review is not relevant here. If a question needs major changes, you click the "needs major changes" review option, which requires that you leave a comment. You can click twice to toggle templates and choose a pre-formatted comment, or if one does not apply, don't click twice and just write a custom comment. But that's hardly "not reviewing the same way". The only difference between what we've described is using a canned comment versus writing a custom one. And, to be clear, writing comments isn't worth a reputation reward, either.
    – TylerH
    Commented Aug 27 at 16:03
  • @chivracq It sounds like you're conflating "only clicking approve" with in-depth conversations with SG askers. The "approve only" that I removed from the table in my answer is only for the former option, not the latter.
    – TylerH
    Commented Aug 27 at 16:04
  • Following up on a Qt that I placed in 'Requires major changes' with, say 3 issues that I listed (choose 3 from [Title not descriptive / Wrong Tags / [mre] needed / Input/expected Output missing / Error Msg missing or not as an image / Improve English if too many obvious typos/missing words/no punctuation / No research / ...]) means the Asker only addressed 1 or 2 from the 3, it has nothing to do with "(going into) in-depth conversations with SG askers", no need to exaggerate from one extreme to another... (+ Not mentioned yet, but for "that kind of Qt's", I will also look for a Dupe Target.)
    – chivracq
    Commented Aug 27 at 17:14
  • "Following up on questions you review is not relevant here." => Beh yes, of course it is...!, because I only approve(d) Qt's that I handled "from the beginning" (= with 'New' Status) and for which I asked the Asker to address several issues, => 'Requires major changes'... (+ Assessing if the Qt is 'Off-Topic' (the easiest case actually), then searching for Duplicates, also takes quite some time, and only then I go into helping the Asker shape their Qt to be suitable for posting on 'Main'...)
    – chivracq
    Commented Aug 27 at 17:20
9

Just to sum up how the current proposal's reputation gains might work in practice…

Take a reviewer who approves, say, 35 posts a day (looking at a real reviewer who approved 35 posts within 2 hours in the past day). If the posts they approved remain open for 7 days, they will earn 200 rep 7 days from today.

If 10 of those posts were closed, they will still have earned 200 rep 7 days from today, and they get to keep that rep regardless of whether a post gets closed some time after those 7 days.

This isn't taking into account any rep they will earn from the first 5 upvotes each post receives at any time in the future. As of right now, only one of their approved posts is closed, because the post wasn't in English, suggesting they either were just robo-reviewing or don't have a firm grasp on what posts should be approved. None have been downvoted; several have been answered. Yet the more reviews they approve at the current reward/penalty rates, with the rep cap, the less likely they are to miss out on the rep cap 1 week from now, just from quickly approving a bunch of stuff. Granted, this is just one user, and I see no evidence they repeated this, but if they earned rep from it... surely they would? I do see other users who have likely reviewed in this way as well, all before a rep reward and badges even exist.

Now, we could certainly expect mods to police this and curators to flag this behavior, but... in practice, is that going to happen? It's certainly not being incentivized. Are we instead going to get an annoying audit system that gets it wrong half the time? Or... will the increased number of questions being approved just be taken as proof of the incentives working?

3
  • 1
    It is not going to happen. Applied at scale, we will not be able to moderate such reviewers any more than we can now moderate poorly asked questions. Yes, we might be able to suspend some of them, but many will fly under the radar. And if reviewers can get a net positive out of SG by approving anything remotely resembling a question, they will do that. Negative reputation could also deter some reviewers who are not bad ones, but might approve a question where only a SME can really tell something is missing.
    – Dalija Prasnikar Mod
    Commented Aug 27 at 6:39
  • Minor correction, both of these users did in fact earn badges
    – Kevin B
    Commented Aug 27 at 6:43
  • 2
    At the end I think that adding reputation will end up in disaster for everyone involved except for robo-reviewers who are smart enough to not to get caught. I am also afraid that initially curators will be closing and downvoting SG published questions more eagerly to counteract bad reviews and this will mostly hurt askers.
    – Dalija Prasnikar Mod
    Commented Aug 27 at 6:43
8

Staging Ground's impending failure is merely a microcosm of the curation problem that Stack Overflow has as a whole. And that problem is simply that there is quite literally nothing, short of paying actual money, that you can do to incentivise people to increase their curation activities.

The reason for this is the one that the community has been warning Stack Exchange Inc. about for years: question quality, or rather the lack thereof. Nobody, and I mean nobody, wants to wade through oceans of sand in order to maybe find a pearl - because there are so few pearls and so much ever-increasing sand, that no amount of fake internet points you could get as a reward for finding a pearl could ever be worth the effort.

I don't even know why I'm writing this answer because you've consistently demonstrated that you have no intention of honestly addressing this fundamental underlying problem. Until you do, these band-aid initiatives like Staging Ground will continue to accomplish nothing and you'll continue to wonder why. So yeah, go ahead and award more rep for participation there. Go ahead and add more badges. Go ahead and do whatever you tell yourselves will make the Staging Ground line go up, because the latter is clearly the only thing that you actually care about.

I'm just looking forward to seeing a staff post in six months or a year, "Driving Staging Ground engagement has caused <new problem> that we want to fix with <new solution>". I'm sure y'all will be exactly as receptive to community suggestions then as you have been in the past, just as I'm sure that when your new solution causes a new problem you'll be ready to tackle the latter head-on.

After all, lines need to go up, right? Even if those lines are "problems we've inflicted on ourselves" and nobody sane would ever consider that a measure of success. But I'm sure you'll manage to convince yourselves once again that the definition of insanity isn't, in fact, doing the same thing multiple times while expecting different results.

10
  • Isn't SG increasing curation activities, as it widens the potential user base of curators?
    – A-Tech
    Commented Aug 30 at 8:40
  • 4
    @A-Tech The problem with Staging Ground, and indeed all attempts so far to foster curation, is that they assume that getting more people to curate is the answer. Except that doesn't work, because as Wikipedia's stats show (and this is mirrored on Stack Overflow), a small minority of curators perform the vast majority of curation. Thus, any honest effort at increasing curation should be focused on retaining and incentivising the minority who do curate, not the majority who don't.
    – Ian Kemp
    Commented Aug 30 at 9:37
  • 2
    @A-Tech That does not mean there is no value in incentivising curation in order to encourage new curators - far from it - but what the statistics show (and they do not lie) is that historically, very few new curators go on to become established (i.e. part of the minority that does the majority of curation). This is because the things that drive established curators to curate (e.g. a sense of community) are completely different to those that drive new curators to dip their toes in the water - and Stack Exchange Inc. has continually ignored this discrepancy.
    – Ian Kemp
    Commented Aug 30 at 9:43
  • 4
    @A-Tech The problem with ignoring the established curators is that every time one of them stops curating because their wants and needs are being ignored, you lose a disproportionately large amount of curation activity that would require many new curators to cover. So in order for curation activities to continue at the same pace or (ideally) increase, you have to have a model that gets lots of new curators into the system quickly - but there is no site in the world that has ever been able to make that work without literal financial incentives.
    – Ian Kemp
    Commented Aug 30 at 9:47
  • 2
    @A-Tech So what we have is SE Inc. continually rolling out incentives to encourage the majority to curate, which as we have already seen doesn't happen; and continually ignoring the established curators, who get fed up and leave. Thus instead of the curation gap (ratio between number of people who actually curate content, and amount of content to curate) growing smaller over time, it gets ever-larger. Which means that the ever-fewer curators remaining have to shoulder ever-more curation effort, which burns them out quicker, which leads to fewer curators who need to do more curation, ...
    – Ian Kemp
    Commented Aug 30 at 9:51
  • 2
    @A-Tech At the end of the day, then, SE Inc.'s repeated attempts to target new curators are ultimately doomed to failure and always will be because data doesn't lie - yet the company persists in making new attempts. They also persist in ignoring the data around established curator retention, and why that should be a massive (and arguably larger) focus than new curators. And they do this simply because they are unwilling to tackle the issues that established curators have long complained about.
    – Ian Kemp
    Commented Aug 30 at 9:57
  • 1
    I've been on this site only a short time (active, at least), but as far as I can tell the SG is a wish from long-time curators? Or do you and curators wish for something else entirely in terms of implementing SG? From what I have observed, the general problems/complaints about curating are: 1. too few curators, 2. need for better tools, 3. bad posts need to be caught sooner. Which is what SG addresses, in part? But do correct me if I am wrong.
    – A-Tech
    Commented Aug 30 at 10:31
  • 3
    @A-Tech "Which is what SG addresses in part?" At the moment, it's an insignificant part. We still have to review every question because 1. not all go through SG 2. even the ones that do may be graduated without changes that they need. So, right now SG is a bit like a gate in the middle of a field with no walls around it - it is not sufficient to really maintain the perimeter
    – VLAZ
    Commented Aug 30 at 11:59
  • @VLAZ Wait, doesn't that loop us around back to: every question by new users must go through SG -> we need more reviewers/curators for that -> more curators are not the answer for SO's issues? Or am I severely missing the point?
    – A-Tech
    Commented Aug 30 at 12:12
  • 3
    @A-Tech yes, it does loop there. And this is the problem with the approach - it can't scale to be useful, because it requires far more time and effort. As I already said - I'm not reviewing there because I feel it's a waste of my time. Time I can invest elsewhere. Would SE take steps to reduce time I spend on other activities, I might dip into SG more.
    – VLAZ
    Commented Aug 30 at 12:15
8

Imaginary internet points are not a sustainable reward mechanism for activities that require time and expertise. Curation isn't fun. I will solve protein folding puzzles all day long without compensation because I love a puzzle and I'm advancing human knowledge (or was until AI started doing it better lol, but I probably helped get it there). Trying to get some stranger's question into shape when they are just as likely to get upset about it as appreciate my effort, with very little recognition from my peers or the company, when there will always be more questions that need curating no matter how hard I work at it, is not something I'm willing to do for free, especially now that the data dumps aren't publicly available.

Curating on Stack Exchange is work that requires not only expertise in the topic and English fluency but also expert knowledge of Stack Exchange rules and community norms. There is no reality where reputation points will be enough reward for most people in the current SO environment.

When are humans willing to do thankless work without getting paid? When it's fun. When it's directly in service of a cause greater than lining corporate coffers. When they are able to connect with the people they're helping. When they form a community and connect with other helpers.

I only see two options, because curation will never be fun for most people. Pay people in some sort of proxy currency or go back to Stack Overflow's origins where a small core group of dedicated people work hard to do something for the good of everyone with the company as a partner. Since it's really hard for the part of the company who makes things to change the part of the company who's just interested in making money, and a small core of people working really hard doesn't scale well without a significant investment in technology, I think the only viable option is to give people the opportunity to earn something of tangible value by curating.

Let people earn credits to use the AI or other paid services you offer. Let them earn extended profiles. I don't know what would work best. The problem with reputation is that it's not good for much once you've earned a certain amount. If people are going to work for the company's benefit, the company should compensate them with something that has congruent value.

5
  • But is this work really for "the company's benefit"? It seems like it's more for the benefit of the user who asked the question and also the larger community who uses this Q&A site and therefore seeks to keep it reasonably free of low-quality content. If one just looks at the company's benefit, there's no real advantage to having Staging Ground reviewers—or even an advantage to having a Staging Ground at all, is there?
    – Cody Gray Mod
    Commented Sep 3 at 23:45
  • 1
@CodyGray "there's no real advantage to having Staging Ground reviewers [for the company]" I'd say there is. From a bit afar, SG is just curation work. Curation work increases the quality and value of the site, and the company directly profits from that through a higher-quality dataset, more visitors, and more ad impressions. Also, the report here is that the number of SG reviewers is not growing (maybe even falling). ColleenV gives one possible explanation; other answers give others. Maybe people are simply not that helpful anymore, which would be yet another. Commented Sep 4 at 20:15
  • 2
I would argue curation work in general has been pretty negative for the number of user contributions over time, and that the pearls would receive the same traffic from Google regardless of whether or not we had shoveled away the sand. The value of curation is in keeping experts interested enough to continue providing expert answers.
    – Kevin B
    Commented Sep 4 at 20:21
  • @CodyGray Who benefits was a lot more clearcut when there was unfettered access to the data dumps. Curation is critically important when you are selling the content to train commercial AIs. The most engaged, effective curators don't do it for rep or another badge. It's for something bigger. SO's interests are no longer aligned with its purported purpose of knowledge for the public good. No non-paid AI is going to be allowed to summarize this donated content or translate it. It's fine if you still believe the company isn't so greedy they'll kill the goose; some of us aren't so sure.
    – ColleenV
    Commented Sep 9 at 19:25
  • @KevinB There will be no Google search traffic once the AI they are using your content to train is adopted widely enough. If they don't convince people to curate, the data doesn't have much value. SO doesn't make money from the number of contributions. It makes money from the ability to separate quality content in context from chaff now.
    – ColleenV
    Commented Sep 9 at 19:29
7

I'd love to see some stats on how many people with posts in the SG respond to feedback within an hour/day/other window. I just tried looking through the first page of SG questions tagged with python, and I could only see one that had any semblance of an update.

This was only a small sample, but I wonder if a larger one would give more favourable results. No amount of internet points would make it worthwhile if it turns out review feedback is falling on deaf ears.

2
  • 2
    Should we maybe also give rep to askers if they actually edit their question after receiving a comment? Rep seems to be the universal motivation booster. Commented Aug 28 at 6:45
@NoDataDumpNoContribution - that would probably work quite well; it gives people another way to get that 50 rep for commenting, and it wouldn't really be impactful if it has a cap like the one editing posts has
    – Sayse
    Commented Aug 28 at 7:22
7

This proposal bakes in other limitations such as:

Reviewers are still subject to the existing rep cap — no more than 200 rep earned per day.

There's a 1000 rep cap for suggesting edits, see Why is there a limit on suggested edit reputation reward?.

In this proposal you make no mention of any all-time rep cap for SG, just the 200 daily rep cap. That means what you're aiming at is Bodies in Seats: as many reviewers as you can get, in exchange for imaginary internet points (changing what reputation is), spending as much time as they can, regardless of whether it's unhealthy and promotes internet addiction (something that's been mentioned quite a few times about the Fanatic badge).

So no, an unhealthy, uncapped, internet-addiction-promoting gamification isn't something I support. Put an all-time cap on it.

0
7

My personal opinion
For me personally, the risk of actually losing reputation for trying to help new members would just make me stay away from the SG altogether. I haven't contributed much to it, simply because I'm not always confident in my judgement, and I don't want anyone potentially suffering the consequences of that. But being punished for doing my best certainly doesn't encourage me at all.

Issues

  • I have no control over what happens with a post after it's published on SO. If the OP decides to completely change their question and it results in a lot of downvotes / deletion, the reviewer(s) suffer for it. I know the question's edit history could be used to sort this out, but that would just add a lot of workload for moderators.
  • I have no control over votes either. People can down- or upvote content for any reason they like. It'd be unfair to a reviewer if they lose reputation on a question that's been downvoted just because someone didn't like the OP's profile picture.

A better approach (in my opinion)
Reviewers should earn the right to earn reputation for reviewing. Have certain badges in place that are required to be eligible for reputation gain from reviewing. Something like a number of successful approvals that remained on SO with a positive score for 1+ week. This should prevent most people from trying to abuse the SG to earn reputation. They'd have to prove themselves capable of properly reviewing content first.

Similarly, you could revoke the badge (or whatever method you wish to use to make someone eligible for reputation gain through reviewing) if too many approved posts end up downvoted / deleted.

Using this approach, you don't end up (unfairly) punishing reviewers. You still encourage them to become a reviewer. And you discourage abuse.

2
  • 1
If you are reviewing more than 10 questions and not reviewing in a way that actively damages the site (unfortunately, also if you'd be reviewing in a harmful way with the current proposal AFAIK), you are very likely earning more reputation than you are losing.
    – dan1st
    Commented Aug 29 at 15:36
  • 2
    @dan1st Perhaps. But the idea is to attract more reviewers. If someone isn't comfortable in their reviewing capabilities yet, they'd probably try reviewing just a few questions to see how it goes. If that doesn't go well, they shouldn't earn rep for it. But they shouldn't be punished for it either. If they are, they most likely won't make another attempt again. Besides, is it a good idea to make decisions based on "it most likely won't happen"?
    – icecub
    Commented Aug 30 at 8:18
5

My thoughts:

  • Asker-only badges will be unobtainable for users already beyond the threshold that makes them no longer eligible for Staging Ground, unless they do questionable things like making a new account to earn the badges and then merging accounts.

  • I am not sure if downvotes on questions should penalize the users who approved them. What about 'not useful' questions that are nevertheless on-topic and clear? What alternative would there be, aside from someone either tanking the penalty and approving them, or leaving them in Staging Ground limbo forever?

    I do however agree with a rep penalty for off-topic questions that get closed, or spam/abusive questions, because those are posts that absolutely should not be published.

  • I agree with other answers that state that the rep gains and penalties are too extreme. Perhaps only +2 for posts that get upvoted, and -2 for posts that get closed. The primary source of reputation should still be generating useful content yourself.

  • Will these reputation changes be awarded retroactively? Based on the wording, the implication is no, but I am not certain.

2
  • 1
    "I am not sure if downvotes on questions should penalize the users who approved them." - I think this should "average out" when reviewing sufficiently many posts.
    – dan1st
    Commented Aug 27 at 4:41
  • 1
    If rep changes are applied retroactively, I think it might be unfair to include posts reviewed during the beta tests, as only a few people had access during that time, so it might be better to exclude that period (if rep changes were awarded retroactively).
    – dan1st
    Commented Sep 20 at 15:12
5

While your proposal incentivizes making review actions (specifically approvals when it comes to rep), it does nothing for actually improving posts (or letting the asker do so).

While I already gave you feedback on the reputation incentives, I want to suggest a new badge in this answer.

I would like to see a badge (or multiple badges with different rarities) that is awarded when a user reviews at least N questions as "Requires Major Changes" that later receive an author edit and are afterwards approved (by any reviewer) without any edits (maybe excluding the author's) in the 48 hours after the post is published.

The value of N would need to be chosen appropriately, maybe 10 for a bronze badge and 50 for a silver badge.

Adding such a badge would provide an incentive to

  • ask the author to improve their posts if these posts are missing something
  • edit the post if there are things that should be changed (spelling, formatting, title, tags, whatever)
  • make sure approved posts have everything necessary

You could also make the rarest version of this badge awardable multiple times, to encourage this behavior also in users who have already earned these badges.

1
  • 1
    Thanks for suggesting this new badge. We will consider it for future iterations, along with other badge suggestions we have received. If taken up, we may not implement it in exactly the way you have laid out here, but we will definitely think about how to use badges to specifically incentivize reviewers to get authors to improve their posts.
    – Sasha StaffMod
    Commented Sep 11 at 14:10
3

Why do we need to track the progress of reaching 50 reviews in one day as a badge when we already have a progress indicator right there at the top of the right column constantly reminding us that we haven't done our share of work?

On that note, how much time are we expecting users to generally spend on a review on average? Should we even be showing the upper limit of reviews in this way, as if it's a daily goal, despite it being something most reasonable people reviewing for the right reasons probably won't reach because reviewing in this way actually takes a decent amount of time?

4
  • 1
    IMO it should be treated similar to voting limits. You don't see the limit until you're close to reaching it. I certainly understand it might be more effective to gamify it, but that's ignoring the quality of the results.
    – Kevin B
    Commented Oct 7 at 18:13
The old "you get what you measure". If you measure maximum reviews per day, then... you get more people doing all the reviews they can per day. Note how this doesn't measure whether they have done the reviews right.
    – VLAZ
    Commented Oct 7 at 18:16
Though I suppose after today we'll be able to track the people awarded this badge to spot those who are abusing the system
    – Kevin B
    Commented Oct 7 at 18:55
  • These are good questions. I don't have any concrete answers to share right now, but just wanted to let you know that we are considering these concerns.
    – Sasha StaffMod
    Commented Oct 7 at 20:18
2

I've not worked out how the code editor in the staging ground reviews works. AFAICS, it's completely different from that used in the main SO interface. I assume some people can make it work, but I can't edit the code in questions using it. This is, IMO, a major disincentive (demotivator) for reviewing staging ground questions.

2
  • 9
    If you have concrete issues with it, it's probably best to make a new MSO post.
    – dan1st
    Commented Aug 26 at 18:07
  • 3
    ...tagging said new MSO post [stacks-editor].
    – Ryan M Mod
    Commented Aug 27 at 4:29
2

The number of unique reviewers, the average number of reviewers per day, the adoption rate (the number of users who actually became reviewers, out of all the eligible active users who have access), and the number of questions reviewed per reviewer. Unfortunately, at this point, these numbers are not growing.

These seem like extremely incorrect metrics. Maximizing the number of unique reviewers is silly--if most users have absolutely no interest in helping new users, they shouldn't have to. And they probably wouldn't do a good job at it if forced to, either.

Instead, you probably want to cater to the smaller portion of users who do enjoy being helpful to new users. (Thankfully I think the changes proposed here are fairly in line with keeping those users engaged.)

It seems like the key performance indicators for reviewer engagement are:

  • Average time to first response: If there are too many questions and not enough reviewers, this will be longer. If there are enough reviewers for the question rate, it'll stay the same. If people have to wait two days for somebody to even tell them that their question is bad, they won't return to Stack Overflow. Meanwhile, if they get quick feedback and then get approved, they're more likely to return.
  • Queue length/fall-through rate: If there are too many questions and not enough reviewers, the backlog of active questions will keep growing. If an increasing number of questions are getting no response and being posted on the main site anyway, that's a problem.
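For illustration, the two indicators above could be computed from a log of question events; this is a minimal sketch with made-up sample data, and the field names (`asked`, `first_review`, `fell_through`) are hypothetical, not an actual SE data schema:

```python
from datetime import datetime, timedelta

# Made-up sample log; field names are assumptions for illustration only.
questions = [
    {"asked": datetime(2024, 8, 1, 9, 0),
     "first_review": datetime(2024, 8, 1, 10, 30), "fell_through": False},
    {"asked": datetime(2024, 8, 1, 12, 0),
     "first_review": None, "fell_through": True},
    {"asked": datetime(2024, 8, 2, 8, 0),
     "first_review": datetime(2024, 8, 2, 8, 45), "fell_through": False},
]

def avg_time_to_first_response(qs):
    """Mean delay between asking and the first review action (reviewed posts only)."""
    delays = [q["first_review"] - q["asked"] for q in qs if q["first_review"]]
    return sum(delays, timedelta()) / len(delays) if delays else None

def fall_through_rate(qs):
    """Fraction of posts that left the queue without receiving any review."""
    return sum(q["fell_through"] for q in qs) / len(qs)

print(avg_time_to_first_response(questions))  # 1:07:30
print(fall_through_rate(questions))
```

Tracking these over time (rather than reviewer head-counts) would show directly whether askers are being served.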
2
  • 1
    I think both of your proposed metrics fall short because currently the number of questions that make it into the SG is based on the number of people reviewing, so should on average not change at all (assuming that the scaling factors are properly picked).
    – cafce25
    Commented Aug 28 at 18:15
  • @cafce25 I see. That does poke a pretty big hole in this, yeah. Thanks for explaining.
    – Kaia
    Commented Aug 28 at 18:20
2

The overall topic of these proposals is rewarding curators with rep/badges in order to increase reviewer participation in the Staging Ground. Surely, in the beginning there was a novelty effect, but by now we're seeing the real performance. However, curators not only work in the Staging Ground but also outside of it. Badges are also given out for curation actions outside of it, but rep isn't (except for +2 rep per edit for users with less than 2k total rep). Sometimes rep is even taken away (for downvotes, for example). And in this proposal too, negative rep is proposed for certain actions.

Your goal is to increase reviewers' motivation but also to steer their behavior through gamification. It worked in the past, so it will probably work in the future too. However, the extent to which you can increase motivation might be limited. Not all people value badges and reputation that much. It might not be enough, or the two goals might counteract each other. For example, the -20 rep hit for questions that are approved but later get deleted might have a significant chilling effect on motivation, because mistakes (which can happen) are punished too harshly.

My personal motivation is torn. On the one hand, I think the Staging Ground is what was always missing on this Q&A platform. Questions should only be open for answers once they are answerable, and this should be checked beforehand, not at the same time as everything else. On the other hand, there are downsides too (good questions have to wait longer, it is a lot of additional work, and additional checks and decisions are needed for every question before it even gets answers). Curation surely is hard work and should be rewarded, but I still don't care about rep, and rep mingles everything together anyway. Also, in general, my motivation to contribute to the platform is low, and for whatever reason the good idea of the Staging Ground may simply die from a lack of curators willing to do the work. That doesn't make it a bad idea though, just an unreachable goal.

One thing that I would really like to know: how often is advice in the Staging Ground ignored, and how often is it actually taken into account and results in a better, answerable question? If, for example, I were a Staging Ground reviewer and felt that I was annoying people rather than helping them, I probably would not want to continue. If curators do try their best, are question askers doing their job too, or not?

Otherwise: streamline, streamline, streamline. It should be as smooth an experience as possible for everyone. Do comments really have to be deleted upon approval? Are automatic upvotes upon approval a good idea? Can it be integrated better with the "normal" ground? Make it easy to discover a sufficient amount of fitting questions for reviewers, make giving feedback as quick as possible, and try to decide things as automatically as possible (for example, a 5-character edit hardly ever solves a serious problem in a question). That should help those who discover the Staging Ground to actually work in it for a longer period.

Sorry for not being able to go into greater detail.

6
  • 1
"Do comments really have to be deleted upon approval?" Yes, because the feedback is irrelevant once the question is improved and published (if there is something missing, it should probably not be published). "Make it easy to discover a sufficient amount of fitting questions for reviewers" like the Staging Ground filters? "make giving feedback as quick as possible" using comment templates?
    – dan1st
    Commented Sep 5 at 15:41
  • @dan1st I think no, because we didn't do it outside of the staging ground and questions do not always get improved when they are published. You assume that every published question doesn't need these comments anymore and I think this is not true. We could maybe put them in a comment thread, but not lose them completely. This is one of the few things that is negative with the staging ground for me. It might be fine for others though. Commented Sep 5 at 16:55
  • "because we didn't do it outside" - it's not really possible to decide which comments are good automatically outside the SG and flagging them as NLN is an option (though that's work for people handling it); "and questions do not always get improved when they are published" - the better way to handle it is try to increase the fraction of posts getting improved; "You assume that every published question doesn't need these comments anymore and I think this is not true" - if posts get the improvement these posts need and people don't write stuff for answering in comments, yes.
    – dan1st
    Commented Sep 5 at 17:51
  • "We could maybe put them in a comment thread, but not lose them completely" - I see no problem with that as long as it doesn't clutter up anything and it's hidden by default. I see no problem with a [feature-request] about that. "This is one of the few things that is negative with the staging ground for me." - I can understand that but as I mentioned in the comment before, I think the way to improve that is ensure feedback is addressed and making sure SG comments are used for providing feedback and not trying to answer the question.
    – dan1st
    Commented Sep 5 at 17:52
By the time something leaves SG, everything raised in comments there should already be addressed. That's the design (to my understanding).
    – starball
    Commented Oct 8 at 1:55
@starball That's the design, yes, but there is always a difference between theory and practice. "Should be addressed" means that in a certain number of cases it will be addressed and in others it won't, because no process is perfect. Questions leaving the SG still get closed afterwards, only at a lower rate. Commented Oct 8 at 5:51
2

For badges that are awarded multiple times based on stat counts, how is it counted?

  • If I have a total of two "variety" badges, does that mean the total count of distinct tags over all the SG items I have ever reviewed is 100-149?
  • Is it possible to get two "steadfast" badges by reviewing 8 days in a row? (two distinct overlapping windows of 7 days in a row).
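To make the second question concrete, the two possible readings (overlapping vs. non-overlapping 7-day windows) can be contrasted in a tiny sketch; this is hypothetical logic to illustrate the ambiguity, not the actual badge-awarding code:

```python
def overlapping_windows(streak_days, window=7):
    """Count every run of `window` consecutive days; runs may overlap."""
    return max(streak_days - window + 1, 0)

def disjoint_windows(streak_days, window=7):
    """Count only non-overlapping runs of `window` consecutive days."""
    return streak_days // window

# An 8-day streak: 2 badges under the overlapping reading, 1 if disjoint.
print(overlapping_windows(8), disjoint_windows(8))  # 2 1
```

Which reading the system actually implements determines whether an 8-day streak earns one badge or two.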
4
Well, it seems very weirdly counted indeed; I have 57 total questions reviewed but 4 variety badges, when I mostly review a single tag. I'd almost think something is bugged here.
    – cafce25
    Commented Oct 7 at 17:26
  • 1
    I also asked about this here. You get two steadfast reviewer badges when reviewing Staging Ground questions for 14 consecutive days.
    – dan1st
    Commented Oct 7 at 17:32
  • 1
Just closing the loop: I answered in dan1st's question that these are distinct sets, meaning yes, if you have 2 variety badges, you have reviewed posts with a total number of distinct tags between 100 and 149. And the steadfast badge requires two non-overlapping windows, so an 8-day streak won't earn you two badges
    – KyleMit StaffMod
    Commented Oct 7 at 18:11
  • I apparently have 9 variety badges (and got 9 notifications all at once), and I've only apparently "helped" 266 askers while being quite narrowly focused on Python questions... maybe this one is too easy? And why not have silver and gold variations instead of repeatedly awarding it? Commented Oct 8 at 0:46
-3

Having the usual off-topic reasons, such as "no debugging details" or "too broad", available on the SG would be helpful, since that's the feedback new users need the most. Having none of them means the reviewer has to type something different for every vote, or copy a personal template that might not be correct.

3
  • 9
    These are not off-topic reasons, these are reasons that need major changes. Please leave a comment that explains the issues and use the "Requires Major changes" action. Ideally, off-topic reasons are problems that are either terminal to the post (e.g. Not about programming) or require substantial changes/reframing to be on-topic (e.g. Primarily opinion-based).
    – Spevacus
    Commented Sep 2 at 14:17
  • 2
    SG is fundamentally not about closing questions. It's about giving the feedback to posters for what they need to do to post a question. The off-topic reasons are for when the question is just never going to be on-topic without extensively re-writing it. If a question needs details (or less details) then that's the feedback you need to leave - the essence of the question will not change with these updates. Also, there are already templates for comments, including adding an MCVE: i.sstatic.net/oTNXQ41A.png
    – VLAZ
    Commented Sep 2 at 15:03
  • 9
    Thank you for writing this answer. It demonstrates that one of the things that need significant improvement in the SG is reviewer onboarding.
    – dan1st
    Commented Sep 2 at 15:34
-6

I think SG is a cool concept. However, it adds another task for a community that is already moderating quite actively by flagging and editing, so SG does not add much value. And if we introduce reputation as a reward, it'll take away focus from other queues.

1
  • 2
    Isn't the point of SG to prevent bad questions being published in the first place, so we can reduce the amount of curation needed?
    – A-Tech
    Commented Aug 30 at 11:00
-12

For me, questions marked "Staging Ground" are an automatic turn-off. If I think I know the answer, I can't answer until it's been turned into a real question. And even if I leave a helpful hint as a comment, it will be deleted once the question is published.

I haven't avoided Staging Ground entirely, but I do find it to be a waste of time. Rep changes aren't going to make a difference for me; maybe they will for the newbies who are scrambling to get enough rep to interact with the site normally. But seriously, how much help are they going to be?

12
  • 15
    Note that the Staging Ground is about improving questions, not answering questions (if a question is well-written and answerable, you can approve it). These are completely different goals. That being said, if you are not interested in seeing Staging Ground posts, you can disable that in your settings.
    – dan1st
    Commented Aug 26 at 18:00
  • 20
"If I think I know the answer, I can't answer until it's been turned into a real question." Uhh, you have enough reputation to be able to review on SG; if you feel a question is answerable, what stops you from approving it? If you feel some details etc. are missing, making it not worthy of publishing, then how is it answerable? This is the second time I've heard this view; can you clarify your experience in these cases? Do you just not know that you can approve, or do you feel there's something missing in the question or it needs to be improved in a certain way? Commented Aug 26 at 18:27
  • 5
    Well, if the only thing you're interested in is answering questions then SG isn't for you, and yeah, it's a waste of time. We do need reviewers though. Commented Aug 26 at 18:43
  • 4
    @AbdulAzizBarkat if you approve a question, you're immediately whisked away from it to the next question in SG. It takes a lot of work to get back to the question. Commented Aug 26 at 19:34
  • 1
    @PresidentJamesK.Polk the question was whether a change in rewards would incentivize some additional reviewers. My answer is that for this sample size of one, it wouldn't. I posted this thinking that there might be a few people who agreed with me. Commented Aug 26 at 19:38
  • 1
I had your usage of SG in mind during the beta when I was giving feedback. I wanted the SG to be a place where answerers like yourself could find questions you want to answer, fix + approve them, and thus get the first crack at answering them as your reward. But... the way it's implemented, it's not very convenient for that use case. That was why I suggested having SG posts appear in the question list; surely if you saw an SG post you could answer, it'd be easy for you to approve it and answer it... but the review process required if the post needs any changes from the OP gets in the way.
    – Kevin B
    Commented Aug 26 at 19:41
  • 8
    "if you approve a question, you're immediately whisked away from it to the next question in SG. It takes a lot of work to get back to the question." - A single back navigation action seems to work just fine here. 🤷🏻‍♂️
    – Drew Reese
    Commented Aug 26 at 20:10
  • 3
    Is "I would be more motivated to review in SG if it were easier to fix up and answer suitable questions that I find there" a fair read of this answer?
    – Ryan M Mod
    Commented Aug 27 at 1:13
  • 2
    @RyanM I got along just fine without SG for 15 years. I'm trying to say that for me it adds negative value to the site and there's likely nothing you can do to change that opinion. Commented Aug 27 at 2:52
  • @DrewReese I did finally figure that out, but I think it required an additional refresh to see it in non-SG state. Certainly it wasn't obvious. Commented Aug 27 at 2:53
  • True, it seems to require a page reload to see the link to the published post.
    – Drew Reese
    Commented Aug 27 at 2:55
  • 3
    There's also a notification / toast at the top of the page after redirection. You can click that to go back to the original post. Also, vote on this - Allow me to stay on a Staging Ground post after I submit a review action
    – Phil
    Commented Aug 27 at 5:31
