PoBU: What the chain can’t show (yet)

In the last piece, we covered what the PoBU paper measures on a running chain: validator-set breadth and churn, and block-author concentration. Those are chain-derived signals anyone can reproduce.

This next theme is about a simple line the paper keeps repeating in different ways:

PoBU is about unique humans. But the chain mostly shows keys.

So what can the chain prove today? And what would make the evaluation stronger, if it can be published safely?

What the chain can show today

The paper deliberately sticks to publicly reproducible, chain-derived measurements.

That means it focuses on what the chain can show without needing private data:

  • how broad the active validator set is over time
  • how much it changes (entries, exits, churn)
  • how concentrated block production is across authors (top-k share, HHI, Gini)

These are treated as necessary signals for broad participation and low concentration at the consensus key level. The paper is careful about that wording: necessary, not sufficient.
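To make those concentration metrics concrete, here is a small sketch (not from the paper) of how top-k share, HHI, and Gini could be computed from per-author block counts. The validator names and counts are invented for illustration:

```python
from collections import Counter

def top_k_share(counts, k):
    """Fraction of blocks authored by the k most prolific authors."""
    total = sum(counts)
    return sum(sorted(counts, reverse=True)[:k]) / total

def hhi(counts):
    """Herfindahl-Hirschman index: sum of squared author shares (1.0 = one author)."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

def gini(counts):
    """Gini coefficient of the author-count distribution (0 = perfectly equal)."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

# Hypothetical per-author block counts, as might be tallied from sampled blocks.
author_blocks = Counter({"v1": 5, "v2": 3, "v3": 1, "v4": 1})
counts = list(author_blocks.values())

print(top_k_share(counts, 2))  # top-2 authors' share of all blocks
print(hhi(counts))             # close to 0.36 for this toy distribution
print(gini(counts))            # close to 0.35 for this toy distribution
```

All three numbers rise as block production concentrates in fewer keys, which is exactly why the paper reports them together at the key level.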

What the chain cannot show directly

The paper is also direct about what these measurements do not capture.

First, the metrics are computed over on-chain keys, not over verified unique humans.

Second, they do not directly measure identity-layer parameters, including biometric performance and failure rates. The paper treats those as important, but outside what it reports here.

Third, even the chain-derived author metrics have limits, because they are computed from sampled blocks with disclosed step sizes, and sampling can introduce bias.
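To see why disclosing step sizes matters, consider a hypothetical worst case (not from the paper): under a strict round-robin authoring schedule, sampling every k-th block with k aligned to the rotation length makes production look perfectly concentrated even when it is perfectly uniform. The schedule and step size here are invented:

```python
from collections import Counter

# Hypothetical round-robin schedule: 4 validators take turns authoring.
validators = ["v0", "v1", "v2", "v3"]
blocks = [validators[height % 4] for height in range(1000)]

full = Counter(blocks)          # uniform: every author appears 250 times
sampled = Counter(blocks[::4])  # step size 4 aligns with the rotation period

print(len(full))     # 4 distinct authors in the full data
print(len(sampled))  # 1 distinct author in the sampled view
```

Real authoring schedules are not this regular, but the sketch shows the general point: a sampled author distribution can differ sharply from the true one, so the step size is part of the evidence.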

So the picture is:

  • chain data is great for measuring participation and concentration at the key level
  • but some of the most important “human-level” parameters live off-chain.

What would strengthen evaluation (if it can be published safely)

The paper explicitly lists measurements that would strengthen the evaluation, but are not reported in this draft unless they can be published as safe aggregates.

It names identity-layer aggregates such as:

  • issuance and revocation counts
  • uniqueness and presentation-attack (PAD) error summaries
  • and enrollment/renewal success rates and related summaries.

It also mentions governance participation distributions as another area not reported here.

The important point is not the list itself.

It’s the direction:

  • chain-derived metrics show what’s visible on-chain
  • safe identity-layer aggregates would help connect those results to the off-chain parameters PoBU depends on.

Why the paper is cautious about this

The paper explains the constraint clearly: PoBU’s critical security parameters partly live off-chain, and this draft stays grounded in what is publicly reproducible from chain data.

That is why the paper:

  • reports key-level evidence from chain data
  • discloses sampling and limitations
  • and points to safe aggregates as future work.

What comes next

The next theme after this one is the paper’s forward path: the concrete future work it lists for making PoBU evaluation stronger over time, including publishing a more precise interface and safe aggregates, and other steps the paper calls out.