Step 3 of 5

Quality Standards

Understand what good work looks like and how your output is evaluated.


This platform exists because human judgment matters. Every task you submit becomes part of the dataset that trains and evaluates AI systems. Substandard work is not just rejected; it introduces noise into datasets that teams rely on for training and evaluation. Read every section before starting your first task.


01. Why Quality Is Everything

Data annotation is the foundation of AI. Every label, rating, and piece of feedback you produce feeds directly into models that serve millions of people. The quality of the AI depends entirely on the quality of the annotation that trained it.

You annotate the data → the AI learns from your work.

Unlike many jobs where a mistake affects one outcome, a bad annotation can propagate through thousands of model training steps. The quality of your work has a direct, lasting impact on the dataset — and on your standing in this workspace.

Exceptional quality is the baseline expectation, not the goal. Adequate work is not sufficient on this platform. You are expected to produce work you are genuinely proud of, every time.

Quality Determines Your Workload

Your quality record directly determines how much work you receive and which projects you have access to. This is not a soft signal — it is how workload is allocated.

Better work → more work

More tasks, access to higher-value projects, and a long-term place on the platform.

Poor work → less and less

Reduced task volume, removal from projects, and eventually removal from the platform entirely.

02. What Gets Evaluated

Every task you submit passes through a two-step pipeline. You complete the work first, then a reviewer evaluates and builds on what you produced. The exact criteria vary by project — each project defines its own rubric, and that rubric is always documented in the project instructions. What is constant across every project is simpler: did you apply real judgment, complete the task fully, and follow the guidelines as written?

Reviewers work from the same instructions you do. There is no hidden standard. Read the guidelines, apply them exactly, and you already know what the reviewer will be looking for.

03. The Attempter Standard

As an attempter, you produce the first version of the annotation. The reviewer will build on what you submit — they can improve it, add to it, or correct it. But they cannot rescue a submission that lacks real effort or judgment. The quality of your foundation determines what the finished task can be.

Before you submit, ask yourself:

  • Have I read the full task instructions, including any special rules for this project?
  • Does my response answer exactly what was asked — not a simpler version of it?
  • Have I handled edge cases and ambiguities explicitly?
  • Would a strict reviewer find anything to criticize in my work?
  • Am I submitting because I'm done, or because I want to move on?

Speed does not matter if the work isn't good. Each task has a set time allocation — you are expected to use that time. Rushing through a task to submit quickly is one of the most common causes of quality failures.

If you are unsure how to handle something in a task, re-read the instructions. If the instructions don't cover it, make a reasonable judgment and document your reasoning in your response. Don't leave ambiguity unaddressed.

04. The Reviewer Role

Reviewing is not just a pass/fail check. As a reviewer, you complete the task — you take the attempter's first version and bring it to its final state. That might mean approving it as-is, adding corrections or additional detail on top of what the attempter did, or sending it back if the foundation is too weak to build on.

Approve

The attempter's work meets the standard as submitted. Approve it and move it forward.

Improve & approve

The work is solid but incomplete or imperfect. Add to it, correct it, and approve the final version.

Reject

The foundation is too weak to build on. Return it with specific feedback so the attempter can fix it.

Reviewer responsibilities:

  • Read the attempter's work carefully before deciding anything — don't skim and approve
  • Add corrections, missing context, or improvements directly when the work is close
  • Send back with clear, actionable feedback when the issues are too fundamental to patch
  • Apply the same standard consistently — who attempted it is irrelevant
  • Approve only work you would stand behind if asked to justify your decision

Approving bad work is also a quality failure

Reviewers are held to the same quality bar as attempters. If you approve a task that should have been sent back, that is on your record. A rubber-stamp review waves bad data through and attaches your approval to it.

05. Common Failure Points

These are the most frequent reasons tasks get sent back or flagged. Most of them are avoidable with careful attention.

Skipping the instructions

Starting a task without fully reading the project guidelines. Instructions are the only source of truth — guessing the format or criteria always leads to failures.

Partial responses

Answering only part of what the task asks. Multi-part tasks require all parts to be addressed, even if some are harder than others.

Vague or generic language

Using filler phrases like "this is a good answer" without explaining why. Evaluations require specific, justified reasoning — not general impressions.

Inconsistent scoring

Rating two similar responses differently without justification. Reviewers flag inconsistency immediately because it signals the attempter wasn't applying a real standard.

Ignoring edge cases

Handling only the obvious case while leaving unusual scenarios unaddressed. Instructions usually specify how to handle edge cases — apply them.

Rushing

Submitting before the task is genuinely complete. Task time allocations exist for a reason. A submission that arrives in a fraction of the expected time will be scrutinized.

06. Consequences of Poor Work

The consequences of consistently poor work are direct and serious. This is not a system that gives many warnings before acting.

Low-quality submissions are flagged immediately

If you submit work that clearly doesn't meet the standard — minimal effort responses, copy-pasted content, or responses that ignore the instructions entirely — it is typically flagged during review and may be returned for correction or escalation according to project policy.

  • Task may be returned to you or rejected
  • Quality outcomes may be recorded for project evaluation
  • Repeated failures can lead to reduced project access
  • Severe or repeated violations can trigger further action per policy

Your Quality Record Follows You

Every task you submit contributes to your quality record on this platform. That record directly determines how much work you receive and how long you stay on the platform.

Better work leads to more work. Contributors with strong quality records get access to more projects, higher task volumes, and more varied work over time. Poor work leads to less and less. Task volume drops, projects become unavailable, and continued issues result in permanent removal. There is no floor that resets — your record is cumulative.

Pre-submit checklist

  • Re-read the task instructions one final time
  • Verify your response addresses every part of the task
  • Check your reasoning is specific — not vague or generic
  • Confirm you've followed the format specified in the guidelines
  • Ask yourself: would I be comfortable explaining this decision?

Quality standards apply to all roles. Attempters and reviewers are held to the same bar.