When AI Agents Become “Users,” Who Counts as a Person?

By Sasha Shilina

“No one can be replaced by anyone else.”
— Hannah Arendt, The Human Condition

Something subtle is shifting in the language of tech. Satya Nadella, CEO of Microsoft, recently said he looks at AI agents as “users,” and described a future in which a digital worker could have “its own identity,” its own tools, even its own desktop. That may sound like ordinary platform-speak. It is not. It is a sign that the internet is beginning to reorganize itself around a new kind of participant: not only humans, not only institutions, but software entities that can act with growing autonomy.

Once that happens, identity stops being a matter of account security and becomes a matter of political design. What is acting here? What kind of presence is this? Who should count, and where? A platform can call all of these things “users” if it wants. But the category is doing too much work. A user is merely something a system can authenticate, meter, permission, and monetize. A person is something else. A person is finite. A person can be harmed. A person can bear responsibility. A person is not infinitely reproducible.

That last distinction is the one that matters.

An AI agent can be copied, deployed, forked, scaled, and multiplied with industrial ease. A human being cannot. As long as the internet was mostly populated by human-operated accounts and familiar bot spam, that difference remained blurred but manageable. In an agentic internet, it becomes the fault line. If systems cannot distinguish between a living human participant and an endlessly replicable software actor, then voice, governance, reputation, and access all begin to drift toward the logic of scale. The system will favor whatever can appear most often, operate most cheaply, replicate most easily.

This is where Humanode becomes unusually important.

For years, Humanode’s core idea looked, to many outsiders, like a highly specific crypto answer to a highly specific crypto problem. We proposed Proof of Biometric Uniqueness, or PoBU, as an alternative to systems where power depends on money or hardware. Instead of stake-weighted or compute-weighted participation, we ask a different question: are you a unique human being? Humanode’s own framing is explicit here. PoBU is meant to make participation depend not on wealth or machine power, but on human uniqueness itself. 

That already mattered in the older Web3 world of Sybil attacks, farmed wallets, governance capture, and fake crowds. But it matters even more now.

Because the rise of AI agents changes the scale of the problem. What once looked like fraud or spam starts to look like ontology. We are no longer just dealing with fake accounts. We are dealing with a network filling up with actors that can transact, coordinate, and perhaps one day govern, while not being human at all. In that world, proof of personhood is no longer a niche mechanism for cleaner airdrops or fairer DAO votes. It starts to look like infrastructure.

Humanode’s design is powerful precisely because it does not try to answer the bureaucratic question, “Who are you?” It aims to answer a narrower and, in some contexts, more important one: “Are you one real, living, unique human?” We repeatedly frame Biomapper and related systems in those terms: no KYC, no personal identification documents, no raw identity exposure, but a cryptographic proof that a wallet or participant corresponds to a unique human presence. The universal-pass vision, SRGate, pushes this further: verify uniqueness once, then carry that proof across apps and chains.

It is a theory of political scarcity.

In a world of agentic abundance, humanness becomes newly valuable because it is one of the few things that cannot be mass-produced. This sounds obvious, almost embarrassingly so, but the internet has spent years pretending otherwise. It flattened participants into addresses, accounts, handles, activity metrics. The assumption underneath was that open systems could simply process whatever arrived. But once software entities begin arriving in serious numbers, participation itself ceases to be a stable category. Activity no longer guarantees presence. Presence no longer guarantees personhood.

Humanode insists that some domains of digital life should still care about that distinction.

Governance is the clearest example. If a voting system is supposed to represent people, then it must have some way of knowing whether it is counting people rather than scripts, fleets, or delegated software workers. The same is true of public-goods funding, anti-Sybil reputation systems, community coordination, and any app that claims to care about unique human participation rather than raw engagement. Humanode’s app-layer story has already been moving in this direction. 

That practical turn matters. It means Humanode is not only making a philosophical case about the importance of embodied uniqueness in digital systems. It is trying to operationalize a human layer before the internet becomes too crowded with non-human actors for “human participation” to remain a meaningful category.

And yet the solution cannot simply be: force everyone to biomap everywhere. That would be another kind of disaster — an internet transformed into a permanent checkpoint, where every action requires liveness, every interaction demands proof, and humanness itself becomes an exhausting administrative performance. The point of Humanode is more interesting than that. The promise is selective human guarantees: contexts in which being a real, living, unique human can matter without turning the whole of digital life into a biometric border crossing.

This is a subtle but essential distinction: Humanode does not need to become the identity system for everything in order to matter. In fact, it probably should not. The point is that some parts of the emerging internet need a protected category of human standing. Without that, the word “user” will swallow too much. It will come to include humans, agents, institutions, and swarms under one flattened operational label. Once that happens, the human does not disappear in some dramatic sci-fi scene. The human becomes administratively ordinary.

That is the real risk of the agentic turn: that systems will quietly stop caring about the difference.

Nadella’s remark reveals how fast platform language is moving in that direction. If agents are users, and users are the basic unit around which software markets, permissions, and workplaces are organized, then the burden shifts onto other layers of infrastructure to preserve what is specific about human presence. Humanode is one of the clearest attempts to build such a layer. Our wager is that some rights, some powers, some forms of coordination should remain anchored in living human uniqueness rather than capital, documents, or synthetic scale. That wager may look eccentric only if one assumes the future internet will still be mostly human by default. It will not. The software industry is already preparing for agentic users, agentic workflows, agentic markets. In that environment, PoBU ceases to be decorative. It becomes a way of defending the non-fungibility of the human before “user” becomes the only category left.

Humanode begins from a simple premise: some parts of digital life should still be organized around unique human beings. As agents become ordinary participants in the network, that premise becomes harder to preserve and more important to defend.