Coop User Guide

The Coop UI has several pages accessible from a left-hand menu:

  • Overview
  • Automated Enforcement
  • Policies
  • Review Console

There is also a bottom menu with buttons for logging out, managing settings, and viewing your profile.

Dashboard

Coop overview with key operational metrics such as total actions taken, jobs pending review, percentage breakdown of automated vs manual actions, and top policy violations

The Overview dashboard provides top-level metrics for Coop, filterable by hourly or daily breakdown across a window of time, including:

  • Total actions taken
  • Jobs pending review
  • Automated vs manual actions
  • Top policy violations
  • Decisions per moderator
  • Actions per rule (if rules are enabled)
  • Count of violations by Policy

Settings

Configuring Items and Actions

Coop Item Dashboard showing the configuration of an Item Type: a post from the firehose with a schema of fields that get sent to Coop including text, unique ID as well as what format the fields are in

Item Types represent the different types of entities on your platform. For example, if you’ve built a social network that allows users to create profiles, upload posts, and comment on other users’ posts, then your Item Types might be Profile, Post, Comment, and Comment Thread. If you’ve built a marketplace platform, your Item Types might be Buyer, Seller, Product Listing, Product Review, Direct Message, Transaction, etc. Every Item you send Coop needs to be an instance of exactly one of these Item Types.

When creating an Item Type, define its schema: the fields that will be sent to Coop and shown to reviewers. These fields are also available in rule logic, where they can be combined with signals for routing or automation.
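As an illustration only, assuming a "Post" Item Type with text, author, and timestamp fields (the endpoint path, auth header, and field names below are hypothetical, not Coop's documented API), sending an Item to Coop might look like this:

```python
import requests

# Hypothetical example: the endpoint path, field names, and auth header are
# illustrative; use the schema you defined for your Item Type in Coop.
COOP_BASE_URL = "https://your-coop-instance.com"
COOP_API_KEY = "your-api-key"

post_item = {
    "item_type": "Post",         # must match an Item Type configured in Coop
    "id": "post_12345",          # the unique ID field from your schema
    "fields": {
        "text": "Check out this great deal!",
        "author_id": "user_678",
        "created_at": "2024-05-01T12:34:56Z",
    },
}

response = requests.post(
    f"{COOP_BASE_URL}/api/items",                         # hypothetical ingestion endpoint
    json=post_item,
    headers={"Authorization": f"Bearer {COOP_API_KEY}"},  # hypothetical auth scheme
    timeout=10,
)
response.raise_for_status()
```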

Coop Action Dashboard showing a table of configured custom actions like: send warning, label as spam, delete content, delete account, ban email

Actions represent any action you can perform on Items. Some common examples include Label, Send Warning, Delete, Ban, Mute, Send to Manual Review, Approve, etc.

Creating an action in Coop: add the name, description, what item types the action can run on, and a callback URL

For every Action you define in Coop, you have to expose the action through an API endpoint that can receive requests from Coop. Whenever your rules determine that some Item should receive an Action, Coop will send a POST request to the Action’s API endpoint. When your server receives that POST request, your code should actually perform the corresponding action.
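As a hedged sketch, assuming a Flask server and a hypothetical payload shape (Coop's actual request fields are not documented here), an endpoint for a "Delete Post" Action might look like this:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical route and payload shape; Coop's actual request fields may differ.
@app.route("/coop/actions/delete-post", methods=["POST"])
def handle_delete_post():
    payload = request.get_json(force=True)
    item_id = payload.get("item_id")

    # Perform the real action on your platform here, e.g.:
    # delete_post(item_id)  # your own platform logic

    # Acknowledge the request so Coop can record the action as completed.
    return jsonify({"status": "ok", "item_id": item_id}), 200

if __name__ == "__main__":
    app.run(port=8080)
```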

Coop integration page showing two cards to set up API access to Google’s Content Safety API and OpenAI’s Moderation API

Coop uses an API key to gate its endpoints. Use the UI to generate an API key, which Coop will use to authenticate any requests it makes to your organization’s endpoints.

Integrations

API Keys: Coop uses API keys to authenticate requests. The UI allows rotating the key and adding a webhook signature verification key

Coop comes with pre-built integrations to common software used for online safety. Add your API key to enable integrations like OpenAI’s Moderation API or Google’s Content Safety API, and set up your instance of Meta’s Hasher-Matcher-Actioner.
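The settings above also mention a webhook signature verification key. A minimal sketch of one common verification pattern, assuming HMAC-SHA256 over the raw request body (the header format and signing scheme are assumptions, not Coop specifics):

```python
import hashlib
import hmac

def verify_coop_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Return True if the HMAC-SHA256 of the raw body matches the signature.

    The hex-digest header format is an assumption for illustration; check
    your Coop settings for the actual signing scheme.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```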

User Management

User management page in Coop showing different users with different emails and roles assigned to them. There is an Approval Status column and a column showing when they were created

Coop uses role-based access controls to make sure the right people can access and view the right data. You can use the UI to invite more users and either copy the link for them to sign up with an account, or set up an email service to email the link to the invited user.

User Roles

Coop comes with 7 predefined roles that can be further customized:

| User Role | Access Manual Review Tool | View all Queues | Create, Delete and Edit Queues | Create, Delete and Edit Rules | Access NCMEC data | Access Insights |
| --- | --- | --- | --- | --- | --- | --- |
| Admin | Yes | Yes | Yes | Yes | Yes | Yes |
| Moderator Manager | Yes | Yes | Yes | No | Yes | Yes |
| Analyst/Rules Manager | No | Yes | No | Yes | No | Yes |
| Child Safety Moderator | Yes | No | No | No | Yes | No |
| Moderator | Yes | No | No | No | No | No |
| External Moderator | Yes | No | No | No | No | No |

Admin
Admins manage their entire organizations. They have full control over all of the organization’s resources and settings within Coop.

Analyst
Analysts can view metrics for all Rules, create or edit Draft and Background Rules, and run Backtests. They cannot create or edit Live Rules, run Retroaction on Live Rules, or edit any other resources (Actions, Item Types, Signals, other Users, etc.). In short, they can experiment with Background Rules and view Rule metrics, but cannot affect any Live Rules or other features that actually mutate your data.

Child Safety Moderator
Child Safety Moderators have the same permissions as Moderators, but they are also able to review Child Safety jobs and can see previous Child Safety decisions.

External Moderator
External Moderators can only review jobs in the Manual Review tool. They cannot see any decisions or use any other tooling.

Moderator
Moderators can view the Manual Review tool, but are only able to review jobs from queues that they’ve been given permission to see. They can also view overall Manual Review metrics. They cannot see any Child Safety-related jobs or decisions.

Moderator Manager
Moderator managers can view and edit queues within the Manual Review Tool. They have full control over the permissions that moderators have, and the Routing Rules that determine how to route each incoming job to the right queue.

Rules Manager
Rules Managers can create, edit, and deploy Rules, and they can view all metrics related to Rules. They cannot create, edit, or delete other organization-level settings, including Actions, Item Types, Manual Review Queues, or other Users in the organization.

Once you invite a new user to Coop, you can either configure an email service to send the link to that person or copy the invite link and share it directly with them.

SSO

Learn how to configure SSO using Okta SAML.

Coop only supports SSO through Okta SAML.

Prerequisites

To configure Okta SAML SSO, you must:

  • Be in Admin mode in Okta.
  • Have group names that match exactly between Okta and SAML.
  • Have admin permissions in Coop.
  • Have the ability to create a custom SAML application.

Configuration

  1. Create a custom SAML application in Okta. Use the following settings.

    | Setting | Value |
    | --- | --- |
    | Single sign-on URL | Your organization’s callback URL (e.g. https://your-coop-instance.com/login/saml/12345/callback). You can find your callback link in Coop under Settings → SSO. |
    | Audience URI (SP Entity ID) | Your Coop instance base URL (e.g. https://your-coop-instance.com). |
    | email attribute (in Attribute Statements) | email. This field depends on your Identity Provider’s attribute mappings (e.g. Google SSO may use “Primary Email”). |
  2. In the Feedback tab, check I’m a software vendor. I’d like to integrate my app with Okta.

  3. In your app’s settings, go to the Sign On tab. Under SAML Signing Certificates → SHA-2, click Actions → View IdP metadata.

  4. Copy the contents of the XML file. In Coop, go to Settings → SSO and paste the XML into the Identity Provider Metadata field.

  5. On the same page, enter email in the Attributes section.

  6. In your Okta app under Assignments, assign users or groups to your app.

Wellness and Safety

Reviewer safety and well-being is critical. Trust & Safety is an incredibly difficult field of work, and it can take a severe mental toll, especially for the moderators who are on the front line, reviewing disturbing content for hours on end every day.

That’s why Coop includes customizable settings to prioritize reviewer safety. These come in two forms:

Coop’s organization safety settings where you can set the baseline resilience settings on media such as a blurring scale, grayscale toggle, and auto-mute video toggle

  1. Company-wide Safety Settings: If you are an Admin in your platform’s Coop organization, you can set the default safety settings for every employee at your company who has their own login credentials. Those are configured in your Employee Safety settings page.

Coop’s individual user safety settings where users can override the baseline resilience settings with their own preferences on media such as a blurring scale, grayscale toggle, and auto-mute video toggle

  2. Personal Safety Settings: Any user can customize their own personal safety settings in their account settings page. These will override the default, company-wide safety settings and allow users to create the best experience for themselves.

For both levels of safety settings, you can customize the following properties:

  1. Image & video blur strength: you can configure whether images and videos are always blurred by default, along with the strength of the blur. When images are blurred within Coop, hovering over the image with your mouse will unblur the image, and moving your mouse outside the image will blur it again. When videos are blurred within Coop, playing the video will unblur it.
  2. Image & video grayscale: you can decide whether images and videos are displayed in grayscale or in full color.
  3. Video muted by default: you can ensure videos are always muted by default, even if your device’s volume is on.

Policies

Policy Dashboard showing 4 policies: Fraud, Nudity, Scams, Spam. There’s a button to create new policies as well as options to add sub-policies for each existing policy and edit or delete them

Policies are categories of harm that are prohibited or monitored on your platform. Some typical examples include Spam, Nudity, Fraud, Violence, etc. Policies can have sub-policies underneath them, so the Spam policy could have sub-policies like Commercial Spam, Repetitive Content, Fake Engagement, Scams and Phishing, etc., all of which are specific types of Spam that could occur on your platform.

It is often useful (and in some cases required by legislation) to tie every Action you take to one or more specific Policies. For example, you could Delete a comment under your Nudity policy, or you could Delete it under your Spam policy. Coop allows you to track those differences and measure how many Actions you’ve taken for each Policy. That way, you can see how effectively you’re enforcing each Policy over time, identify Policies for which your enforcement is poor or degrading, and report performance metrics to your team (or to regulators).

Policies added in Coop’s UI will be visible to reviewers directly in the review flow, so they can easily reference policies and enforcement guidelines.

Learn more about Policies from the Trust & Safety Professional Association.

Manual Review

Queues

Coop’s manual review tool showing 5 example queues: media, default queue, fraud, scams, and spam. There are buttons to create a new queue, start reviewing each queue, delete all jobs, and edit or delete individual queues.

Coop uses Queues (i.e. ordered containers) to organize tasks. When you create a task in Coop, the task will enter a Queue and wait to be reviewed. Then, when a user visits the Queue dashboard and clicks “Start Reviewing” on a particular Queue, Coop pulls the oldest task in the Queue so the user can review it. After a user makes a decision on a task, they will automatically see the next oldest task until the Queue is empty or the user stops. Coop will automatically make sure that two moderators won’t receive the same job to avoid duplicate work.

Queues can be starred (per user) to pin them to the top of the review console, and each queue shows its number of pending jobs.

You may want to restrict which Actions can be triggered from a particular Queue. When you create or edit a Queue, you can configure which Actions are hidden for that Queue through its “Hidden Actions” setting.

Task View

A review job in Coop with user data redacted. Shows what time the report was received, what kind of item it is, who created the report and for what reason, the # of reports on the task, and information about the post itself. Users can add internal comments for their teammates and view the policy to check what enforcement guidelines are. There is a list of decisions users can make on the task and a button to skip the task

The task (aka Job) view shows information about the flagged content or actor being reviewed. Each task has its own hyperlink and can be shared with anyone at your organization who is allowed to access Coop.
When a user reviews a task, Coop will show as much relevant information about the task and its corresponding Item as possible. As a baseline, the Item’s fields will render, and depending on the Item’s type (e.g. whether it’s a DM, Comment, Profile, etc.), Coop will try to render additional relevant information and surrounding context, including:

  • Associated user account connected to the content
  • Additional content associated with the same user

Default actions include:

  • Ignore (task is closed with no action taken)
  • Enqueue to NCMEC (moves the task to the NCMEC review queue and converts it into a User type review, aggregating all media associated with the user)
  • Move (moves the task to another existing queue)

Any Actions you configure will also show up. You can hide specific actions in specific queues when creating or editing a queue.

Investigation

The Investigation tool in Coop with user data redacted. Shows all information Coop knows about a given entity/user.

You can use the Investigation tool (either plug in the unique ID of the Item or click through from a manual review task) to see more information about it:

  • The Item itself, including all relevant attributes and metadata Coop has on that Item.
  • The user who created the Item (if applicable), and all the attributes and metadata Coop has on the user.
  • The details and full context of any previous actions taken on the Item, and the user who created it.
  • Other Items that are closely related to the main Item you’re investigating. For example, if you’re investigating a single comment within a larger comment thread, we’ll show you preceding and subsequent comments in the thread.

You can also manually take action on the Item as you’re investigating it. This is useful when you want to take action but aren’t reviewing the Item within the context of a Review Queue.

Automated Enforcement

Read more about rules in Coop in the rules doc.

Matching Banks

The Matching Banks page showing Text Banks and Location Banks in different tabs. There are 3 text banks: scam keywords, sexually explicit, and a regex check for URL shorteners

A Matching Bank is a way to keep track of a large list of values that you’d want to check for matches. For example, you could create a Matching Bank that holds 10,000 keywords, and you can check every new post or comment to see if the text in the post or comment matches any of the 10,000 banned keywords in the Matching Bank. You can build Rules that reference your Matching Banks, so you don’t have to list out all 10,000 banned keywords every time you create a Rule that checks for matches against those keywords.

You can also use regex in rules, for example a regex that checks whether a URL has been shortened:

A text bank using regex to check for URL shorteners

These banks can then be used as signals in both automated enforcement rules and routing rules for manual review.
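A minimal sketch of how such checks might look in your own code (the keyword list, regex pattern, and function names here are illustrative, not Coop internals):

```python
import re

# Illustrative bank of banned keywords; a real Matching Bank could hold thousands.
KEYWORD_BANK = {"free crypto", "guaranteed returns", "wire transfer now"}

# Illustrative regex matching links on a few common URL-shortener domains.
URL_SHORTENER_RE = re.compile(
    r"https?://(?:www\.)?(?:bit\.ly|t\.co|tinyurl\.com|goo\.gl|ow\.ly)/\S+",
    re.IGNORECASE,
)

def matches_keyword_bank(text: str, bank: set[str]) -> bool:
    """True if any banked keyword appears in the text (case-insensitive)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in bank)

def contains_shortened_url(text: str) -> bool:
    """True if the text contains a link on a known URL-shortener domain."""
    return bool(URL_SHORTENER_RE.search(text))

post_text = "Get free crypto today: https://bit.ly/3abcDEF"
print(matches_keyword_bank(post_text, KEYWORD_BANK))  # True
print(contains_shortened_url(post_text))              # True
```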

Hash Banks

Coop integrates with hasher-matcher-actioner (HMA), providing a configurable way to match known CSAM, non-consensual intimate imagery, terrorist and violent extremist content, and any internal hash banks you maintain. Setup requires API credentials from supported hash databases like NCMEC and Tech Against Terrorism. Read more in the HMA documentation.

HMA matches are available as signals in Coop’s signal library.

Location Banks

Creating a location bank for the middle of Times Square using a geohash

A Location Matching Bank holds a list of geohashes or Google Maps Places, which are two types of references to geographical locations. This can be helpful if you have a list of locations where you want to apply a different set of Rules. For example, you can apply stricter sexual content Rules on college and high school campuses than you do elsewhere.
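Because a location’s geohash begins with the geohash of every larger cell that contains it, a bank of geohash prefixes can be checked with simple string matching. A minimal sketch, with illustrative (not real) bank contents:

```python
# Illustrative geohash prefixes; real bank contents would come from Coop's UI.
LOCATION_BANK = {"dr5ru7", "9q8yyk"}

def matches_location_bank(item_geohash: str, bank: set[str]) -> bool:
    """True if the item's geohash falls inside any banked cell (prefix match)."""
    return any(item_geohash.startswith(prefix) for prefix in bank)

print(matches_location_bank("dr5ru7k3x", LOCATION_BANK))  # True: within a banked cell
```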

Appeals

Coop’s review console in the Appeals tab, showing one queue for any appeals submitted by users who have had an action applied on them or their content

When you make a decision that affects a user on your platform, you may want to give them the ability to “appeal” your decision (and the EU’s Digital Services Act requires this for many platforms). If a user appeals a moderation decision, you’ll send that Appeal to Coop’s Appeal API so that a moderator can review the Appeal and decide whether or not to overturn the original decision.

The Job view of an Appeal task. Information displays who submitted the appeal and the original action as well as the original content. Decisions are to accept or reject the appeal.

Reports and appeals can be closely related. For example:

  • User A can report User B’s profile, which you would send to Coop through the Report API.
  • If a moderator suspends User B, then User B may appeal that suspension decision, which you would send to Coop through the Appeal API.
  • A different moderator can then accept the Appeal and overturn the suspension, or they can reject the Appeal and sustain the suspension.
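As a hedged sketch of the Appeal API call described above (the endpoint path, field names, and auth scheme are assumptions; consult the Appeal API reference for the actual schema):

```python
import requests

COOP_BASE_URL = "https://your-coop-instance.com"
COOP_API_KEY = "your-api-key"

appeal = {
    # Hypothetical field names for illustration only.
    "appealing_user_id": "user_B",
    "original_action": "suspend_account",
    "item_id": "profile_user_B",
    "appeal_text": "My account was suspended by mistake.",
}

resp = requests.post(
    f"{COOP_BASE_URL}/api/appeals",                       # hypothetical endpoint path
    json=appeal,
    headers={"Authorization": f"Bearer {COOP_API_KEY}"},  # hypothetical auth scheme
    timeout=10,
)
resp.raise_for_status()
```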

Recent Decision Log

Coop logs all actions taken in a Recent Decisions Log that includes basic information like the decision(s) taken on each Job and the user who made the decision.

Coop’s recent decisions page showing a log of all actions taken in Coop and who took the action. There are buttons to refresh the table, download all decisions, and download only the tasks skipped by users

You can also see the full Job itself if you decide you want to investigate further, take additional Actions, or overturn a previous Decision.

NCMEC Review and Reporting

Coop is integrated with the CyberTip Reporting API from the National Center for Missing and Exploited Children. In order to review accounts and content to report to NCMEC, you must have:

  1. A manual review queue called “NCMEC Review”
  2. A User Item Type that associated content references through a RELATED_ITEM field. This field stores a structured reference to the User item; NCMEC-type jobs extract the user identifier from it to look up the full user in the Item Investigation Service (see the illustrative sketch after this list).
    1. Coop will automatically convert jobs on content Item Types into an NCMEC-type job, aggregating all media associated with the user. This creates a single, detailed NCMEC report around one user, rather than multiple NCMEC reports for multiple pieces of content from the same user.
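For illustration only, a content Item carrying a RELATED_ITEM-style reference to its User might look like the following; the exact field layout is an assumption, not Coop’s actual wire format:

```python
# Hypothetical "Post" item whose schema includes a RELATED_ITEM field pointing
# at the user who created it. An NCMEC-type job would read the user ID from
# this structured reference to look up the full user.
post_item = {
    "item_type": "Post",
    "id": "post_12345",
    "fields": {
        "text": "...",
        "author": {               # RELATED_ITEM-style structured reference (illustrative)
            "item_type": "User",
            "id": "user_678",
        },
    },
}
```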

Coop’s NCMEC Reporting task view. This differs from the usual task view as it aggregates all media associated with a user. There are keyboard shortcuts to apply specific industry classifications, a dropdown for selecting the incident type, and options to add labels per NCMEC’s CyberTip fields

The NCMEC job UI includes:

  • Incident Type category (from the Reporting API). Values include:
    • Child Pornography (possession, manufacture, and distribution)
    • Child Sex Trafficking
    • Child Sex Tourism
    • Child Sexual Molestation
    • Misleading Domain Name
    • Misleading Words or Digital Images on the Internet
    • Online Enticement of Children for Sexual Acts
    • Unsolicited Obscene Material Sent to a Child
  • Industry categorization (A categorization from the ESP-designated categorization scale):
    • A1
    • A2
    • B1
    • B2
  • Labels (file annotations):
    • animeDrawingVirtualHentai: The file is depicting anime, drawing, cartoon, virtual or hentai.
    • potentialMeme: The file is being shared/posted out of mimicry or other seemingly non-malicious intent.
    • viral: The file is circulating rapidly from one user to another.
    • possibleSelfProduction: The file contains content that is believed to be self-produced.
    • physicalHarm: The file depicts an intentional act of causing physical injury or trauma to a person.
    • violenceGore: The file depicts graphic violence, including but not limited to acts of brutality or detailed or vivid gruesomeness.
    • bestiality: The file involves an animal.
    • liveStreaming: The file depicts content that was streamed live at the time it was uploaded.
    • infant: The file depicts an infant.
    • generativeAi: The file contains content that is believed to be Generative Artificial Intelligence.