What can go wrong in PoBU? (Threat model)
In the last piece, we covered why PoBU is probabilistic: the strength of the “one eligible account per human” rule depends on two things, biometric error rates and policy choices.
Now comes the next natural question:
Even if the rule is clearly defined, what can still go wrong in a real system?
This is where the threat model comes in.
In simple terms, this part of the paper lists the main ways a PoBU system can fail, be weakened, or be attacked in practice.
The paper highlights four main risk areas:
- issuer concentration
- compromise / coercion
- availability
- privacy / linkability
Let’s unpack them in plain language.
1) Issuer concentration
PoBU needs a process that decides whether someone is eligible.
The paper flags a risk here: what if too much of that power sits with too few parties?
In normal terms, this means:
If only a small number of actors control who gets marked as eligible, they become a powerful gate in the system.
Even if the chain itself is open, the eligibility layer can still become concentrated.
Why this matters is simple. PoBU is trying to define participation at the level of unique humans. But if the ability to approve that participation becomes too concentrated, then the system can be shaped by a small group at the point where eligibility is decided.
So this threat is not about the idea of PoBU itself. It is about who controls the “entry point.”
2) Compromise / coercion
The paper also flags compromise and coercion.
This is about what happens if eligibility is not simply “owned and used safely” by the intended person.
In plain language, even if the rule is one eligible account per human, someone can still try to break the system by:
- stealing access
- coercing access out of the rightful holder
- pressuring people to act on someone else’s behalf
- or otherwise controlling eligibility through the humans who hold it
This matters because PoBU limits how many eligible accounts a person can have, but it does not magically remove the risk that real humans can be manipulated, pressured, or compromised.
So this threat is about the difference between:
- who should control eligibility, and
- who actually controls it in practice
That gap matters a lot in any real system.
3) Availability
This one is easy to understand.
The eligibility system has to be available and working.
If it is down, unstable, or unreachable, people may not be able to:
- prove uniqueness
- renew eligibility
- or recover after resets
So availability is the “can people actually use the system when they need to?” problem.
A simple way to think about it:
Even a well-designed system becomes a problem if people cannot access it at the right time.
In PoBU terms, eligibility is not just a definition on paper. It is something people need to interact with over time. If that process is unavailable, participation itself gets affected.
4) Privacy / linkability
PoBU is not about putting civil identity on-chain.
But the paper still flags privacy and linkability as a risk.
Why?
Because even if a system does not store “who you are,” it can still create patterns that make activity easier to connect over time.
That means the risk is not only “identity exposure.” It can also be “making people easier to track or connect across actions.”
A normal way to think about this:
A system may avoid storing your name, but still leave enough traces that someone can connect your actions together.
That is why privacy and linkability appear in the threat model. The system is about unique-human eligibility, so it still has to care about what gets exposed around that process.
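To make the linkability idea concrete, here is a minimal sketch. It is not from the paper, and every name in it is hypothetical. It contrasts a stable identifier, which lets an observer connect all of a user’s actions, with per-context pseudonyms derived by hashing, which break that link between contexts:

```python
import hashlib

def scoped_pseudonym(user_secret: str, context: str) -> str:
    """Derive a per-context pseudonym: the same user looks
    different in each context, so contexts cannot be cross-linked
    from the identifier alone."""
    digest = hashlib.sha256(f"{user_secret}:{context}".encode())
    return digest.hexdigest()[:16]

# A stable identifier reused everywhere makes actions trivially linkable:
stable_id = "user-42"
actions = [("vote", stable_id), ("claim", stable_id), ("post", stable_id)]
# Every record carries the same identifier, so an observer can
# connect all three actions to one person.

# Per-context pseudonyms avoid that particular trace:
secret = "user-42-private-seed"
vote_id = scoped_pseudonym(secret, "voting")
claim_id = scoped_pseudonym(secret, "airdrop")
# vote_id != claim_id, so the identifiers alone do not connect the
# two contexts (timing and other metadata can still leak patterns).
```

The point of the sketch is only the failure mode, not a recommended design: even a system that never stores a name can leak linkable traces, which is exactly why the threat model calls this out.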
Why this section matters
This part of the paper is important because it shows PoBU is not presented as “problem solved.”
The paper does not only define the rule.
It also names the main places where real-world systems can break:
- too much control at the eligibility issuer layer
- compromised or coerced eligibility in practice
- system downtime / unavailability
- privacy and tracking risks
That makes the paper easier to trust as a technical framework, because it describes not only what should happen but also what can go wrong.
You can read the paper here: https://papers.humanode.io/pobu.pdf
What’s next
The next theme after this is how the paper connects the PoBU idea to a real running system and evaluates it using public chain-derived data.
That’s where the paper moves from definition and risks to measurement on a live network.