Changes Coming To Reddit: Human Verification

Recently, in a post titled “Humans welcome (bots must wear name tags),” u/spez and Reddit announced a new wave of borderline privacy violations coming to Reddit: human verification checks. The post is below. Please read through it carefully, and then let’s talk about it.

Humans welcome (bots must wear name tags)

TL;DR:

  • Reddit is for people
  • “Good bots” will be labeled as [App]
  • We’ll continue to remove spam and bad bot activity
  • Automated or suspicious accounts may be asked to verify that there’s a human behind them
  • We are not doing sitewide human verification
  • We don’t need or want your identity

Hi everyone,

The internet feels different lately. It’s getting harder to tell who—or what—you’re interacting with. But Reddit’s purpose is for people to talk to people. And we want it to stay that way.

Our product has always been human conversation: messy, opinionated, sometimes great, sometimes not, but always real (or at least, really creative writing). As AI becomes a bigger part of the internet, we want to make sure that when you’re on Reddit, you know when you’re talking to a person and when you’re not.

So we’re making a few changes.

Our strategy here is to go from the bottom up (i.e., deal with the bots), because on Reddit, you should assume that anyone you’re talking to is a human unless otherwise labeled. A few of the principles behind how we’re approaching this:

  • Verifying someone is human is not the same as knowing who they are
  • We don’t have or want your real-world identity
  • Automated use of Reddit can be useful in some cases (i.e., “good bots”), but we have to be careful

What’s happening

1. Clear labeling for non-human accounts

At the end of last year, we launched verified profiles for brands, publishers, and creators. For professional accounts, being clearly labeled increases transparency and helps their content be accepted in relevant communities.

Next, we’re standardizing how automation shows up on Reddit. Accounts that use automation in allowed ways (what many call “good bots”) will be labeled as [App]. If you see that label, you know you’re interacting with a machine, not a person.

Developers can register their apps to receive this label (there will be more about this in r/redditdev).

2. Continued removal of nefarious bots and spam

We hate it as much as you do and already remove the vast majority of it (an average of 100K accounts per day), often before anyone sees it. We’ll continue to remove nefarious bot content, including spam. 

3. Human verification for automated or otherwise fishy behavior

If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it. This will be rare and will not apply to most users. Accounts that can’t pass may be restricted. 

To be clear, this is not sitewide human verification, let alone sitewide ID verification.

4. Reporting suspected automation

Redditors have long been the best bullshit detectors, and increasingly great Turing testers. We’ll make reporting easier and more flexible (these days, we can infer most issues from a report without a lot of context). I’d also like to include comments from other users pointing something out (e.g., “nice post, bot, now fuck off”), since that’s most users’ preferred reporting method.

Privacy

Both due to AI reshaping the internet and increasing regulation around the world requiring various forms of identity or age verification, we are exploring ways to confirm humanness and comply with these regulations without compromising user privacy. The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all.

If we need to verify an account is human, we’ll do it in a privacy-first way. Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn’t have to sacrifice one for the other.

When confirming that there is a human behind an account, we prefer third-party tools that keep a distance between verification and Reddit itself. Any system we use will not expose your real-world identity to Reddit, nor your Reddit username or activity to any third party. There are a handful of ways to do this, and I’m sure there will be more. Each has its tradeoffs:

  • Passkeys (which are well supported by Apple, Google, YubiKey, and various password managers) – These are lightweight, require a human to do something, and don’t require your ID. The tradeoff is that there is no proof of individuality or anything other than “a human probably did something.” Nevertheless, it’s a great starting point.
  • Third-party biometric services – For example, World ID (yes, the Orb company, though they have non-Orb solutions as well). This technology unlocks proof-of-individual without requiring your name, government ID, or a centralized database. I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix.
  • Third-party government ID services – In some countries, such as the UK and Australia, governments require us to use these. These are the least secure, least private, and least preferred. When we are forced to do this, we design the integrations so that we never actually see your ID information, so your Reddit data cannot be tied to you.

What about AI-generated content?

There is, of course, the gray area of humans using AI to write. We see it too and agree that it can feel off, but we’re not going to overcorrect on that now, at least at a sitewide level. We’ll monitor its usage and see what happens as we crack down even more on automated accounts. As always, communities can set their own standards if they want. 

For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you’re seeing. Before there was AI slop, there was slop. It’s not a new problem, and it’s one that Reddit, with its voting and moderation system, is better than most at dealing with.

Things are changing quickly, and we’ll adapt as best we can. We welcome any thoughts and criticism.

Thanks,

u/spez

What is this saying in plain English?

1) Automation becomes a bigger risk to your account

What that means in practice for adult creators:

  • If you use auto-posting tools (think schedulers!), scripting, mass commenting, scraping, auto-DMs, auto-upvoting, or anything “bot-like,” you are more likely to get flagged and restricted.
  • Even harmless “efficiency” behavior can look automated, especially if you post the same promo copy repeatedly across subs.
  • If you truly do run any automation (this includes schedulers!), it may end up clearly labeled as non-human, which can kill trust in NSFW communities and get you filtered by moderators even if it is allowed.

2) “Human verification” is not “ID verification,” but it can still create privacy pressure

They say they are not doing sitewide human verification and do not want your identity; they only want confidence that there is a person behind an account.

But they also describe possible verification methods and tradeoffs:

  • Passkeys (low friction, no ID, but weak “proof of individuality”)
  • Third-party biometrics (they mention World ID as an example)
  • Third-party government ID checks (they call these the least preferred, but note they may be required in places like the UK and Australia)

For adult creators, the privacy reality is:

  • Even if Reddit never sees your real name, any verification step adds a new third party and a new data risk surface (breaches, subpoenas, policy drift, vendor changes).
  • If you rely on anonymity, your goal is to avoid situations where you are pushed toward higher-risk verification options. That means avoiding behavior that triggers verification in the first place.

3) “Report suspected automation” can be weaponized against creators

They want to make reporting bots easier, and they even joke about users calling someone a bot in comments.

Adult creators are more exposed to bad-faith reporting because:

  • Promotional patterns can look repetitive.
  • People dislike NSFW promo and may try to silence you by calling you a bot.
  • Brigading (a coordinated effort by a group of users to disrupt, harass, or manipulate an online community) is a thing, especially in adult spaces.

So, you want your posting style to look unmistakably human.

What does this mean for scheduler users?

I wanted to know this immediately, as y’all know I depend on a scheduler, Fangrowth.io, for my base Reddit posting. So I reached out to the developer of Fangrowth.io and asked for her perspective. She seems fairly confident that things will not change too drastically.

My gut feeling is that things will operate as they normally do, especially if you still use your Reddit account to comment from time to time, so it doesn’t seem like a “dead” account to their systems.

She did give us a tip to help keep us scheduler users flying under the radar as much as possible:

…the safest way to operate is to ensure a “human touch” and that can be just logging in and commenting or engaging with Reddit content.

We at Creators Spicy Tea already strongly suggest that if you use a scheduler, you maintain manual interaction as often as possible. Be certain to give the below post a read-through for more tips and reminders on how to safely and effectively use schedulers on Reddit.

Final Thoughts

I personally do not have the physical time in my day to stop using a scheduler. I know I also do not have the consistency in my neuro-spicy brain to keep up without one. As such, I will be continuing to use Fangrowth.io despite the above privacy risks. This is a thought-out, researched, and educated decision I am making for my business. It is the same decision every creator is now being forced to make if we want to keep using Reddit.

The internet is becoming much, much less anonymous, and now Reddit has followed, regardless of the “we don’t want your identity” reassurances. Reddit itself may not want or use it. But we each now have to decide whether we are willing to trust these third-party systems, or whether we can even trust those reassurances at all.

Stay safe, y’all.

