What prediction market builders are missing about Sybils

There was a day in late 2025 when the numbers on prediction markets stopped feeling like probabilities and started feeling like predictions without evidence. You’d watch a contract on a major election or a macro outcome sit nearly untraded, and then, before the underlying data had even begun to materialize, the price would settle into a near-certainty. It wasn’t subtle; the market moved, liquidity surged, and the “crowd wisdom” lit up green long before any new economic report, press release, or signal had a chance to surface.

That’s how fast this world has grown.

Monthly prediction-market volumes jumped from under $100 million in early 2024 to well over $13 billion by November 2025, a roughly 130-fold increase in less than two years. Platforms like Kalshi and Polymarket posted record volumes, with Kalshi hitting roughly $5.8 billion and Polymarket exceeding $3.7 billion in spot trading for November.

For founders whose products are built on forecasting accuracy and real-world signals, these numbers are exciting and uncomfortable.

Because when markets move that fast, before information should logically be priced in, you start asking yourself hard questions. Was that prediction born from collective human insight? Or was it a coordinated push from accounts acting in unison without people behind them?

If you spend any time in prediction-market communities, “bot” and “fake accounts” aren’t fringe words anymore. They’re whispered in Discord channels, tested in back channels, and sometimes, inevitably, raised on Zoom calls between founders when the charts look too clean, too early, too precise.

The concern isn’t just theory. These markets have become large enough that institutional players are watching. Intercontinental Exchange, the owner of the NYSE, signaled plans to invest up to $2 billion in Polymarket as it eyes broader adoption. Coinbase now integrates prediction markets into its product suite. DraftKings and FanDuel are launching prediction market products in dozens of states. The narrative is that prediction markets are becoming real financial infrastructure, not just niche experiments.

And yet, if the base layer of participation, the assumption that one identity equals one real person with one independent view, is shaky, all of that looks a lot like a house built on borrowed confidence.

When prices lock in before the world has changed, when liquidity appears as if on cue, the magic isn’t in the efficiency. It’s in the absence of cost for influence.

And that’s the uncomfortable truth every founder building a prediction market has to face.

What happens when identity becomes cheap

You don’t have to dig deep to start seeing the skeletons in prediction markets. Not in theory. In practice. On chain. Underneath the headlines about billions in volume.

There’s research that shows how fake identities distort real markets. A Columbia University analysis of Polymarket’s on-chain data found that, over the platform’s history, about 25% of all trading activity wasn’t normal human flow but patterns resembling artificial or coordinated behavior. That’s a quarter of volume that doesn’t represent diverse human expectations.

Even outside academic papers, practitioners have seen it. Reports of bot-like bettors extracting roughly $40 million in arbitrage profits on Polymarket in a single year have made the rounds. That’s not small-time noise. That’s systematic exploitation of price inefficiencies and automation wrapping around them. 

When you watch that kind of activity (bots trading faster than markets can respond, clusters of wallets acting in concert), a strange thing begins to happen. Markets start anticipating outcomes before new information arrives. Prices move without new facts. Confidence grows before news breaks. And the signal markets are supposed to distill starts feeling manufactured, not wise.

Prediction markets were meant to aggregate belief, not to amplify coordination without consequence. But when one person, or one script, can imitate hundreds of identities, the market stops being a loom of diverse expectation and becomes an echo chamber of repetition.

It’s not just a single statistic or a spooky back-test. The very dynamics that make prediction markets appealing (price as information, liquidity as confidence) are the same dynamics that Sybil identities twist to their advantage. A collection of fake accounts can:

  • inflate volume numbers that journalists and investors salivate over,
  • create the illusion of consensus where none exists,
  • manufacture depth that isn’t human,
  • and make a market feel alive when it’s really just an engine of duplicated influence.

Look at Polymarket’s experience with bots and automated traders. It’s one of the largest decentralized prediction markets in existence, and yet patterns of artificial trading have been visible enough for external researchers and even community watchers to call them out publicly. 

Founders building markets often chalk early convergence up to efficiency, but that’s just a comforting story. When pricing should take days or weeks to reflect new data and instead happens in minutes with no new inputs, that’s a signal you’ve priced the quantity of accounts, not the quality of belief. That’s coordination at near-zero cost.

It doesn’t have to be a conspiracy. It doesn’t have to be a whale with insider info or a million-dollar trader pushing markets. Sometimes the manipulation is mechanical: multiple accounts, scripts front-running liquidity changes, automated bots executing dormant strategies that no single human could manage across dozens of identities.

You start to see it in patterns that don’t make intuitive sense: price moves that don’t match news cycles, spikes in volume at odd hours that don’t match the time-zone distribution of real players, and clusters of addresses exhibiting nearly identical behavior.
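
That last pattern is the easiest to make concrete. As a rough illustration (not any platform’s actual detection logic; the wallet labels, window size, and thresholds are all invented), a minimal sketch of a behavioral-similarity check might group wallets whose trades land in the same time buckets, on the same markets, on the same side, and flag pairs that overlap almost completely:

```python
# Minimal sketch: flag wallet pairs whose trade timing and direction overlap almost
# completely. Wallet labels, window size, and thresholds are illustrative, not any
# platform's real detection logic.
from collections import defaultdict
from itertools import combinations

def behavior_signature(trades, window_secs=300):
    """Map each wallet to the set of (time bucket, market, side) tuples it traded in."""
    sigs = defaultdict(set)
    for wallet, ts, market, side in trades:
        sigs[wallet].add((ts // window_secs, market, side))
    return sigs

def flag_clusters(trades, min_overlap=0.9, min_events=10):
    """Return wallet pairs whose behavior signatures have near-total Jaccard overlap."""
    sigs = behavior_signature(trades)
    flagged = []
    for a, b in combinations(sigs, 2):
        sa, sb = sigs[a], sigs[b]
        if len(sa) < min_events or len(sb) < min_events:
            continue  # too little activity to judge
        jaccard = len(sa & sb) / len(sa | sb)
        if jaccard >= min_overlap:
            flagged.append((a, b, round(jaccard, 3)))
    return flagged

# Two wallets mirroring each other every five minutes, plus one independent trader.
trades = [("0xA", t, "election-2026", "YES") for t in range(0, 6000, 300)]
trades += [("0xB", t + 5, "election-2026", "YES") for t in range(0, 6000, 300)]
trades += [("0xC", t, "election-2026", "NO") for t in range(150, 3000, 700)]

print(flag_clusters(trades))  # [('0xA', '0xB', 1.0)]
```

Real data is noisier than this, but the idea holds: independent humans rarely mirror each other this precisely, for this long.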

And markets aren’t innocent here. Open, permissionless markets give anyone the ability to trade, and that’s part of the appeal. But permissionless participation without identity assurance also gives anyone the ability to game the probability signals that underlie your entire product.

In academic terms, Sybil attacks have been shown to seriously alter the equilibrium in market-like systems. When a single agent can field many identities, they can increase their relative impact in ways that normal participants can’t match. The very incentives that make markets also make them vulnerable to identity manipulation if the base layer doesn’t verify who is participating. 

So the experiences founders are living through (convergence before information, liquidity that feels too coordinated, too tidy, too quick) aren’t glitches. They’re symptoms. Cracks in the participation layer. When identity is cheap, influence is cheap too. And that turns markets into something that looks like forecasting while behaving like engineered output.

How Sybils bend a market without ever touching the truth

Most manipulation in prediction markets doesn’t look dramatic. There’s no single wallet throwing around an obscene size. No obvious whale pushing prices off a cliff. That would be too visible. Too easy to flag.

What actually happens is quieter.

A market opens. A new question goes live. The first liquidity shows up almost immediately. Not because thousands of people suddenly formed an opinion, but because a handful of coordinated identities did. They seed both sides. They probe depth. They watch how the pricing function reacts. Within minutes, the market “finds” a range.

To an outside observer, it looks healthy. Volume is there. Spreads tighten as prices move. The chart starts behaving the way charts are supposed to behave.

But underneath, participation is thin. Shallow. Repeated.

A single operator can split conviction across dozens of accounts, nudging prices without ever appearing dominant. If one account moves the market too much, ten accounts move it just enough. If one bet looks suspicious, a hundred small bets look like a consensus.

This is how early convergence happens. Not because the market knows something, but because it’s hearing the same voice over and over again and mistaking it for a crowd.
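
Here’s a small worked sketch of why splitting works so well, using an LMSR-style cost function, a standard prediction-market pricing rule (assumed here for illustration, not necessarily the mechanism any particular platform runs). Because the rule is path-independent, twenty wallets buying 10 shares each land on exactly the same price, at exactly the same total cost, as one wallet buying 200:

```python
import math

# LMSR-style two-outcome market: cost C(q) = b * ln(e^(q_yes/b) + e^(q_no/b)),
# price of YES = e^(q_yes/b) / (e^(q_yes/b) + e^(q_no/b)).
# Illustrative parameters; not any specific platform's matching engine.

B = 100.0  # liquidity parameter

def cost(q_yes, q_no, b=B):
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def yes_price(q_yes, q_no, b=B):
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares):
    """Buy YES shares; return the new outstanding quantities and the amount paid."""
    paid = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    return q_yes + shares, q_no, paid

# One wallet, one 200-share trade.
q_yes = q_no = spent_single = 0.0
q_yes, q_no, paid = buy_yes(q_yes, q_no, 200)
spent_single += paid
price_single = yes_price(q_yes, q_no)

# Twenty "different" wallets, 10 shares each -- same operator, same direction.
q_yes = q_no = spent_split = 0.0
for _ in range(20):
    q_yes, q_no, paid = buy_yes(q_yes, q_no, 10)
    spent_split += paid
price_split = yes_price(q_yes, q_no)

print(round(price_single, 4), round(price_split, 4))  # identical final price (~0.8808)
print(round(spent_single, 2), round(spent_split, 2))  # identical total cost
```

Path independence is the whole trick: the market registers the same total demand either way, but the chart now looks like twenty people agreeing instead of one person insisting.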

You see it most clearly in markets that should take time. Long-horizon outcomes. Elections months away. Regulatory decisions with no new information flow. Yet prices lock in early, hardening confidence. Later information barely moves the needle.

That’s inertia created by duplicated participation.

Another pattern shows up around resolution windows. As settlement approaches, activity spikes in clusters. Synchronized positioning across multiple accounts, designed to extract value from predictable behavior in the pricing curve.

The market doesn’t break. It performs. Smoothly. Efficiently. Exactly as designed.

And that’s the problem.

Because once Sybils learn the mechanics of your market, they stop trying to outguess reality. They start outguessing your rules. They don’t care whether an outcome is true. They care whether it’s profitable to make it look settled.

Over time, this changes what kind of participants stick around.

Real humans hesitate. They see prices move in ways that don’t match their understanding. They feel late even when they aren’t. They second-guess their own information because the market already “decided.”

Meanwhile, automated and coordinated actors thrive. They aren’t discouraged by early convergence. They create it.

This is where prediction markets quietly stop being instruments of collective intelligence and start becoming instruments of coordination. Not coordination around truth, but coordination around incentives that no longer reflect individual belief.

Founders notice this before they can articulate it. They’ll say things like:

“The market feels solved too early.”

“Price action doesn’t line up with news anymore.”

“Liquidity shows up, but discussion doesn’t.”

What they’re seeing is not a UX issue or a liquidity design flaw. It’s an identity problem leaking upward into market behavior.

When one participant can masquerade as many, the market doesn’t aggregate beliefs. It aggregates replication. And replication always wins against diversity when there’s no cost to being many.

At that point, your prediction market still clears trades. Still settles outcomes. Still produces charts and probabilities. But it stops doing the one thing it was meant to do.

Listen to many humans at once.

Why decentralization alone doesn’t fix this

Most prediction market builders already know the textbook answer. Decentralization removes intermediaries. It opens participation. It makes markets permissionless. In theory, that should be enough. In practice, it isn’t.

Decentralization changes who can participate. It doesn’t change how many times the same actor can show up. And in prediction markets, that detail matters more than most people admit.

When identity is cheap, decentralization amplifies the problem. One actor can fragment themselves across hundreds of accounts, seed early liquidity, reinforce a narrative, and manufacture consensus long before real information arrives. On-chain transparency doesn’t stop this. It often helps coordinate it. Every wallet is visible. Every move is observable. Sybils don’t hide. They blend.

This is why you see markets that feel “decided” too early. Odds snap into place before the world has had time to produce signal. Volume appears out of nowhere, then stalls. The market looks confident, but the confidence isn’t earned. It’s constructed.

Builders sometimes mistake this for efficiency. They tell themselves the crowd is smart. That price discovery is just faster now. But fast convergence without distributed risk isn’t wisdom. It’s leverage. And when that leverage comes from duplicated identities, the market stops being a reflection of belief and starts being a megaphone.

Decentralization also doesn’t stop incentive abuse. In fact, it can make it easier. When rewards, reputation, or governance rights are tied to participation, multi-account strategies turn prediction markets into extraction tools. One human, many wallets. One opinion, many votes. The market still clears, but the signal is warped.

There’s a reason even traditional prediction platforms throttle participation, cap exposure, or quietly apply trust scores behind the scenes. They’re trying to compensate for a missing assumption. That assumption isn’t liquidity. It isn’t decentralization. It’s personhood.

Without some guarantee that participation maps to people, markets optimize for coordination instead of truth. They reward scale instead of insight. They converge early, then stop learning.

This isn’t a moral failure. It’s a structural one.

Prediction markets don’t break because they’re decentralized. They break because decentralization alone doesn’t tell you who is actually participating. And if your market knows the answer before reality does, the problem isn’t the oracle.

It’s the identity layer underneath it. So what’s the solution?

When markets start verifying people instead of accounts

Once you accept that decentralization alone doesn’t solve this, the shape of the problem changes. The issue stops being liquidity curves, incentive tuning, or oracle design. It collapses into something more basic. Who is allowed to participate, and how many times?

Most prediction markets today verify accounts. Wallets. API keys. Sometimes devices. Sometimes documents. All of those are proxies. None of them is the thing they’re standing in for.

Markets don’t need to know who someone is. They need to know whether the participant showing up is one person or one person pretending to be many.

That distinction matters more here than almost anywhere else. In prediction markets, influence scales with presence. Early trades anchor prices. Repeated participation compounds confidence. If one actor can occupy ten slots instead of one, they don’t need better information. They just need more surface area.

This is where identity stops being a UX detail and starts being market infrastructure.

A participation layer that can say, with high confidence, that each slot in the market maps to a single human changes the dynamics immediately. Because it restores cost to influence. The market can still be open. Still permissionless. Still global. But showing up twice stops being free.

That’s the direction systems like Humanode are pushing at the infrastructure level. Not by tying markets to passports or profiles, but by anchoring participation to uniqueness. One living person, one active presence, without exposing raw biometric data or asking for legal identity. Just a cryptographic assurance that the same human isn’t echoing themselves across the market.

In that model, verification happens once, off the critical path. After that, markets don’t need to interrogate users at every turn. They can assume something simple and powerful: this participant hasn’t already spoken ten times.
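
In application terms, the shape of that check is simple. A hypothetical sketch (the class, method names, and proof object below are illustrative stand-ins, not Humanode’s or any market’s actual API): verify a uniqueness attestation once when an address registers, then gate every order on a cheap lookup:

```python
# Hypothetical sketch of a personhood-gated order flow. The class, method names, and
# `uniqueness_proof` object are illustrative stand-ins for whatever attestation the
# identity layer actually provides; this is not a real API.
from dataclasses import dataclass

@dataclass
class Order:
    address: str
    market_id: str
    side: str      # "YES" or "NO"
    size: float

class PersonhoodGate:
    def __init__(self):
        self.verified_addresses = set()  # addresses already bound to a unique human

    def register(self, address: str, uniqueness_proof: bytes) -> bool:
        """One-time check, off the critical path: admit the address only if the proof
        attests that its owner is a unique human not already registered."""
        if self._verify_uniqueness_proof(address, uniqueness_proof):
            self.verified_addresses.add(address)
            return True
        return False

    def accept_order(self, order: Order) -> bool:
        """Per-order check is just a set lookup: no re-verification, no documents."""
        return order.address in self.verified_addresses

    def _verify_uniqueness_proof(self, address: str, proof: bytes) -> bool:
        # Placeholder: a real system would validate a cryptographic attestation from
        # the personhood layer here. This stub accepts any non-empty proof.
        return len(proof) > 0

gate = PersonhoodGate()
gate.register("0xA1", b"attestation-from-identity-layer")
print(gate.accept_order(Order("0xA1", "election-2026", "YES", 25.0)))  # True
print(gate.accept_order(Order("0xB2", "election-2026", "YES", 25.0)))  # False: never verified
```

The expensive part happens once at registration; the market itself only ever asks a yes-or-no question.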

For prediction markets, that assumption does something subtle but important. It slows convergence back down to the speed of reality. Prices still move early, but not instantly. Liquidity still arrives, but it looks uneven, human, opinionated. Confidence grows when information grows, not before. The market starts listening to many humans again.

This doesn’t eliminate speculation. It doesn’t guarantee truth. It doesn’t make markets moral. What it does is remove the easiest way to fake consensus. And once that lever is gone, everything else becomes easier to reason about.

If a market converges early after that, it’s worth paying attention. If it doesn’t, that’s information too.

Prediction markets don’t fail because people are wrong. They fail when the system can’t tell how many people are speaking. Fix that, and the rest of the design questions become worth arguing about again.

Why PoP-style participation changes incentive tuning without touching market mechanics

Nothing about a prediction market’s core machinery needs to change for Sybils to matter less. The pricing curve can stay the same. The matching engine can stay the same. Even the UI can stay the same. What changes is quieter and more fundamental.

Cost.

When participation maps to a person instead of an account, influence acquires friction. Existential friction. You can still be wrong. You can still speculate. You can still trade aggressively. But you can’t cheaply multiply yourself to make the market hear you louder than you are.

That single constraint reshapes incentives without rewriting rules.

Early liquidity stops being a free tactic. Seeding a market suddenly means committing belief, not spraying it across ten wallets to see what sticks. Artificial consensus becomes expensive because repetition no longer scales. Coordination still exists, but it has to recruit humans instead of spinning up scripts.

This is where Proof of Personhood changes market behavior without acting like a referee. It just quietly enforces a condition markets already assume but rarely verify: that one signal comes from one mind.

Once that assumption holds, tuning becomes easier. You don’t need aggressive throttles or punitive limits to slow convergence. You don’t need to overcorrect incentives because you’re afraid of being farmed. Reward structures can be calibrated for disagreement instead of defense. Long-horizon markets can remain open without being “solved” on day one. Confidence regains its meaning.

Markets stop optimizing for the scale of identity and start responding to the distribution of belief. That distinction matters when you look at where this is being built in practice.

On Humanode, Sybil resistance isn’t just a moderation feature or a post-hoc filter. It’s infrastructure. The chain itself is designed around the idea that one human equals one active validator. 

Projects building on top inherit that assumption without having to reinvent it.

Episteme is an example of what this unlocks for prediction markets. It doesn’t redesign market mechanics or add exotic constraints. It keeps the familiar logic of forecasting intact, while anchoring participation to Proof of Personhood. Traders are constrained by uniqueness.

That subtle shift does something powerful. It restores proportionality. One belief has one weight. One participant has one voice. Influence stops being a function of how many times you can show up.
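
A simplified sketch of what that proportionality means in practice (illustrative only, not Episteme’s actual aggregation logic): if forecasts are weighted per verified human instead of per account, duplicating accounts stops adding weight.

```python
# Simplified sketch: aggregate an implied probability per verified human rather than
# per account. The account-to-human mapping is assumed to come from a personhood
# layer; all names and numbers are illustrative.

def account_weighted(forecasts):
    """Naive aggregation: every account counts once, so Sybil accounts count many times."""
    return sum(p for _, p in forecasts) / len(forecasts)

def person_weighted(forecasts, human_of):
    """One human, one voice: average each human's own accounts before averaging humans."""
    by_human = {}
    for account, p in forecasts:
        by_human.setdefault(human_of[account], []).append(p)
    return sum(sum(ps) / len(ps) for ps in by_human.values()) / len(by_human)

# Nine accounts say 0.9, but all nine belong to the same operator; two independent
# humans say 0.4 and 0.5.
forecasts = [(f"acct{i}", 0.9) for i in range(9)] + [("acct9", 0.4), ("acct10", 0.5)]
human_of = {f"acct{i}": "human_A" for i in range(9)}
human_of.update({"acct9": "human_B", "acct10": "human_C"})

print(round(account_weighted(forecasts), 3))           # 0.818 -- swamped by duplication
print(round(person_weighted(forecasts, human_of), 3))  # 0.6   -- one belief, one weight
```

The duplicated accounts still exist; they just stop buying extra weight.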

The market becomes harder to fake. And when the market stops rewarding duplication, it starts rewarding disagreement again. Which is the whole point.

Prediction markets don’t fail because they’re wrong. They fail when they stop listening. PoP doesn’t make them smarter. It makes them honest about who is actually speaking.