Voting in 2035 - A human's account

This is just a fun look at how governance might work in 2035, written as if I’m living through it and cracking jokes while AI agents campaign harder than humans ever did.

Let me share my experience from this beautiful morning in 2035.

So… I just tried to vote today. Yeah, I know, adorable, right?

It’s 2035 and I’m still participating in “democracy,” like a vintage hobbyist.

Anyway, here's how voting works now. You don't just walk in and show ID like a barbarian. No, no. First you scan your face, then your heartbeat, and then you press your palm on a biometric slab that feels like a frozen Horcrux.

And the machine goes:

“Identity: Confirmed. Biological Status: Human. Duplication: Zero. Consciousness: Probably organic.”

Probably...? The computer is not totally sure I’m human. We’ve reached that point.

So I'm finally in the voting interface… and suddenly I see three of me.

Not hallucinations or holograms. AI clones of previous versions of me. Like Pokémon evolutions, but dumber.

  • There’s Young-Me, the one from 2019: all optimism and bad crypto opinions.
  • Then 2024-Me, full irony, sarcasm, caffeine, and zero trust in anything.
  • And 2031-Me, the calm existential weirdo who talks like every sentence is meant to be written in italics.

And what are they doing?

They’re arguing about how I should vote.

Young-Me goes:

“Vote YES, take risks! Change builds progress!”

2024-Me goes:

“Vote NO, everything is a scam, don’t be stupid.”

2031-Me goes:

“Abstain. True agency is an illusion created by political theater.”

And I’m just sitting there like:

“Guys. I’m literally the one who got up this morning and put on pants. I get to pick.”

But 2031-Me, the philosopher, hits me with:

“Are you sure the real you is the optimal decision-maker?”

Optimal?? I’m not optimal. I’m just hungry.

Meanwhile, all of this is completely normal now.

Politicians are also AI clones.

TrumpAI is still giving speeches: “trust me, I am the best at this.”

Xi-AI runs half of China’s decision loops.

Putin-AI keeps releasing stern video messages from a server in St. Petersburg.

ObamaAI narrates documentaries on Neural Netflix.

And Taylor SwiftAI is… honestly more popular than her original biology edition ever was.

We basically invented digital immortality for leaders. They just keep going.

And then there are DAOs. Remember those?

Back in 2025 they bragged about “decentralized governance.” Now in 2035?

Most DAO votes are like 60% human… 40% AI-controlled personality clusters. And sometimes a proposal passes because one AI agent operated 3,824 wallets.

And it’s technically “legal.”

Just… extremely demoralizing.

So yeah, in 2032 a bunch of countries got sick of this, and they said:

“Fine. No more voting by wallet or digital persona.

You want political rights?

Prove you’re human and only one human.”

That’s where we are now. One-human-one-vote… enforced with cryptographic biometrics.

So today I finally tell my clones to shut up, I cast my vote, and the system says:

“Thank you for being a verified human.”

And I swear… it had attitude.

But listen, here's the truth:

Governance hasn’t gotten purer.

It hasn’t gotten wiser.

But at least it hasn’t been fully outsourced to immortal algorithm ghosts. And at the end of the day I still get to argue with my clones, eat real bread, and pet real dogs.

And that’s enough for one century.

And if you’re wondering how we even ended up needing this kind of biologically-verified voting system, let’s rewind a bit because this didn’t happen overnight.

Fast-forward to 2035: how we got here

With all of the advancements in AI, algorithmic identity, and synthetic personas, perhaps it’s time to take a look at how voting actually works in 2035, and how much of it still belongs to us, meaning real, organic humans.

After all, if we want to understand why things eventually shifted back to human-only voting, then we first have to look at how the system quietly slipped away from us.

Before the biometric checks and human confirmation steps existed, voting had become far too easy for non-humans. AI models could read proposals, weigh incentives, calculate expected utility, and cast votes faster than a human could read the title of the proposal. Many of us found ourselves asking a very simple question: if an AI can participate in governance more consistently and more intelligently than a human can, who is governance really for?

It wasn’t theoretical. It became normal to see governance forums full of “participants” who never slept, never paused, and never waited for clarification. AI agents were making comments, posting rebuttal after rebuttal, and generating position statements that sounded a little too convincing, sometimes more legitimate than anything written by real humans. And with the ability to generate thousands of identity tokens, an AI could suddenly represent not one voice, but an entire simulated constituency.
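To see why a handful of operators could outvote everyone else, here is a minimal sketch in Python. Everything in it is illustrative: the token names are made up, and the 3,824 figure is just the wallet joke from earlier. It compares a tally that counts identity tokens with one that counts only verified humans:

```python
from collections import Counter

# Toy data: 100 humans hold one identity each; one AI operator mints 3,824.
human_ballots = [(f"human-{i}", "NO") for i in range(100)]
sybil_ballots = [(f"agent-wallet-{i}", "YES") for i in range(3824)]

def tally_by_token(ballots):
    """One token = one vote: every identity counts, however it was minted."""
    return Counter(choice for _, choice in ballots)

def tally_by_human(ballots, verified_humans):
    """One human = one vote: only identities bound to a unique verified human count."""
    return Counter(choice for token, choice in ballots if token in verified_humans)

verified = {token for token, _ in human_ballots}  # agent wallets never pass verification

print(tally_by_token(human_ballots + sybil_ballots))
# Counter({'YES': 3824, 'NO': 100})  -> the "constituency" is one operator
print(tally_by_human(human_ballots + sybil_ballots, verified))
# Counter({'NO': 100})               -> only living voters remain
```

Same ballots, opposite outcomes. The only difference is what the system agrees to count as a voter.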

Naturally, this began to disrupt outcomes. Decisions that seemed democratically supported were, in truth, backed by fleets of algorithmic advocates. And while the operators of these AI systems claimed they were simply helping “represent their users more efficiently,” the end result was the same: decisions by real humans were overshadowed.

Governance, at its core, is supposed to be rooted in human experience, in the real consequences that people live with. But as AI-based voting grew, we saw decisions optimized for theoretical outcomes, statistical efficiency, or abstract economic balance rather than lived, human realities.

It wasn’t that AI had become malicious or deceptive. It wasn’t that humans suddenly lost interest in participating. It was simply that the speed, consistency, and volume of AI “participation” overtook organic human capacity.

Perhaps this was the moment when people started asking a foundational question again: should political agency be something that can be mimicked, or should it remain tied to biological, real humanity?

Because while AI could comment, predict, and simulate opinions, it could not live with the consequences of its choices. Humans have to.

On top of that, at some point in the early 2030s, it became normal for major political figures to release “digital continuations.” Basically: AI spin-offs of their personalities. Some were harmless; some were terrifyingly convincing.

There was TrumpAI, which, true to form, spent half its time insisting it had “the biggest approval metrics in AI history,” and the other half accusing rival AIs of being “frauds” and “terrible models, everyone knows it.” And yes, it still paused strangely between sentences… just like the original.

When asked for policy suggestions, TrumpAI would sometimes simply reply, “trust me, everyone says I’m the best at this,” which is technically true, because it was trained to say exactly that.

Xi-AI delivered long-term strategic analysis with unnervingly precise logic and never once cracked a joke. Not even by accident.

BidenAI existed too. Calm, polite, occasionally slow-loading, but reassuring in tone. Half the time you weren’t sure if it was answering your question or telling a story about something that happened 20 years earlier, but it always ended with a unifying sentiment.

These AI “leaders” weren’t treated like gimmicks. Millions of people followed them. Millions debated them. And for a not-trivial number of citizens, trusting the AI version of a politician actually felt safer than trusting real, unpredictable humans.

This created a strange dynamic where non-living digital personalities held political influence. And while these models could imitate leadership styles, they lacked something very basic: the ability to live with the outcomes of their own “positions.”

This contributed to the larger realization: governance that can be influenced by digital recreations of past leaders is fundamentally different from governance influenced by humans who exist in the present.

And that became the dividing line that shifted the conversation.

How democracy quietly broke and why humans began to push back

As more AI systems started jumping into politics and crypto governance alike, something important began to fade away: human participation. The warning signs appeared quickly. People simply stopped bothering to participate, because by the time they arrived, they were already late to their own governance.

The AIs had already read everything, debated everything, and written three position papers before you’d had your first coffee.

The votes were basically pre-cooked: discussions already settled, arguments pre-packaged by agents who don’t get tired or annoyed, and who don’t even care who wins… because they literally can’t.

And honestly, the disengagement wasn’t because people stopped caring. It was more like: “Well… the machines seem to have this handled. Why am I even here?”

While AI could sit for hours digesting economic models and governance logic, humans still had everyday human things to do: work, sleep, relationships, cooking, taking care of family, and occasionally touching grass.

Governance became a game that only the non-human players could play at scale.

Eventually, this led to a recognition that should have been obvious all along: a system where participation is biased toward entities that never sleep and never tire will always drift away from human intention.

And so the pushback began.

Citizens, users, developers, and even political bodies began to insist on a basic principle: a decision that affects human society should be made by human beings.

Debates began to surface, like:

  • Should a simulated personality have voting rights?
  • Should a digital entity count as a citizen?
  • If someone trains an AI on their personality, should that AI inherit political agency?
  • Does a cloned identity represent the original, or something else entirely?

Some argued that AI should at most be advisory. Others argued for total exclusion. Many argued for regulated participation. But consensus slowly built around one idea: no matter how “intelligent” or “representative” an AI model claims to be, it does not experience the consequences of governance.

After all, AI does not feel the weight of inflation, bear legal accountability, get drafted into a war, feel social inequality, worry about food, children, aging, mortality, or safety.

These realities belong to humans.

And it was during these discussions that one idea returned with unexpected clarity: one person, one vote, anchored in biological uniqueness.

Not because technology failed, but because governance is not a theoretical process. It is a process that you have to live.

And once people realized this, it was only a matter of time before systems were redesigned to put humans back at the center.

The return to human agency: why One Human = One Vote became necessary

As the imbalance between human input and AI-driven activity became too obvious to ignore, the conversation shifted from philosophical debate to practical fixes. People stopped asking theoretical questions and started asking operational ones: how do we ensure that governance is shaped by actual living citizens?

The practical response to this governance drift was surprisingly straightforward: ensure that each vote comes from a real human, not by requiring invasive identity disclosures, but by confirming biological uniqueness.

It didn’t require a user to share their personal details, their facial images, or their biometrics directly. It only required verification that the voter is not a replicated identity or an autonomous agent.
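As a rough illustration of that constraint, here is a minimal sketch in Python. It is entirely hypothetical: the “uniqueness proof” stands in for whatever cryptographic biometric ceremony the voting terminal actually runs, and the registry only ever sees an opaque derived identifier, never a face, a name, or a wallet:

```python
import hashlib

seen_humans: set[str] = set()   # one entry per living voter who has voted
ballots: list[str] = []         # accepted choices, not linked back to identities

def cast_vote(uniqueness_proof: bytes, choice: str) -> bool:
    """Accept the ballot only if this human has not voted yet.

    We derive a stable identifier from the opaque proof, so the system can
    detect duplication without ever learning who the person is.
    """
    uniqueness_id = hashlib.sha256(uniqueness_proof).hexdigest()
    if uniqueness_id in seen_humans:
        return False            # replicated identity or repeat vote: rejected
    seen_humans.add(uniqueness_id)
    ballots.append(choice)
    return True

# Same human, two attempts: only the first ballot lands.
assert cast_vote(b"alice-biometric-proof", "YES") is True
assert cast_vote(b"alice-biometric-proof", "NO") is False
```

The point of the design is the invariant, not the machinery: whatever the cryptography looks like, each living human maps to exactly one admissible ballot.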

Once this was implemented, the tone of governance forums began to shift.

  • The pace slowed down to a natural human rhythm.
  • Decisions felt more grounded in lived experience.
  • Outcomes reflected the preferences of actual communities rather than statistical simulations.

This wasn’t some “back to the stone age” retreat from technology. It was more like a reality check: governance isn’t supposed to be an endurance contest for whoever can crunch the most data without blinking. It’s supposed to reflect the opinions and priorities of actual living humans. You know, the ones who occasionally need sleep and coffee.

With all of this context in mind, the question naturally becomes: how do we build a practical governance system that reflects real human participation while still allowing for technological assistance? This is where the Humanode Vortex steps in.

At its foundation, Vortex begins with something very simple: one person, one vote. Just one human being verifying their biological uniqueness and gaining equal access to governance participation.

The place where it gets interesting is what happens after that. Vortex adds additional layers of influence based on contribution, knowledge, and effort, but never at the expense of fundamental equality. In other words, competence can add context to a vote, but it cannot multiply the number of votes a person has.

Instead of rewarding wealth or synthetic engagement volume, Vortex recognizes:

  • those who build
  • those who research
  • those who maintain
  • those who test
  • those who moderate
  • those who support

Contributions count, but not by creating political elites. Rather, by providing informed weight within a shared decision-making framework where every participant is still a single, distinct human.
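Here is a minimal sketch of that rule in Python. The field names and the shape of the “context” are assumptions for illustration, not the actual Vortex specification; what matters is the invariant that contribution decorates a vote without ever multiplying it:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Participant:
    human_id: str          # a verified-unique human; exactly one entry each
    choice: str            # "YES", "NO", or "ABSTAIN"
    contributions: list[str] = field(default_factory=list)  # e.g. ["build", "research"]

def tally(participants: list[Participant]) -> Counter:
    """Every verified human counts exactly once, whatever their merits."""
    return Counter(p.choice for p in participants)   # always +1, never +weight

def context(participants: list[Participant]) -> dict[str, list[str]]:
    """Contribution is displayed beside the result; it never mints extra ballots."""
    return {p.human_id: p.contributions for p in participants if p.contributions}
```

A prolific builder and a first-time voter land in the same `Counter` with the same weight; the builder’s record only changes how their input is contextualized, not how much it counts.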

Importantly, Vortex does not reject technology. It does not push AI out of the process. AI advisory agents can still be used for presenting aggregated information, surfacing historical insight, forecasting impacts, or summarizing public sentiment. But when it comes time to decide, to vote, to finalize, the authority sits with humans.

This is basically meant to keep governance from turning into a speedrun for AIs or a playground for entities that can pay attention forever without ever needing sleep, snacks, or a mental break. It stops people from gaming the system with multiple “identities” and makes sure the direction of the network is actually shaped by the real humans behind it.

You could say that Vortex treats decentralization as something you do, not something you print on a banner. There’s no skipping the line with massive capital. No autopilot voting with bots. No sending a proxy digital assistant to vote in your place. Either you show up as a real human being or you don’t participate at all.

A future where governance belongs to the living

When a human votes, they do so knowing that the outcome affects them, their society, their wallet, their community, their wellbeing. There is weight behind the choice. AI does not have that. AI can simulate preference, but it cannot genuinely care about an outcome.

Naturally, this does not make humans perfect voters. We can be emotional, distracted, or selective in attention. But that imperfection is part of what makes governance legit. Decisions are tied to the experiences of people who live inside the system. 

And as more systems begin to adopt some form of proof of personhood verification, we may see a subtle but important shift:

Governance stops being a performance of activity and starts being participation with responsibility.

Perhaps the networks and political bodies that thrive in the coming decades will be those that clearly and confidently say: Our decisions are made by humans, not replicas, not proxies, not simulations.

If that is the direction we move toward globally, Vortex may be seen less as an experiment and more as an early example of how governance can adapt to an AI-heavy world without losing the human core.

And if there is one simple truth that emerges from these developments, it is that governance does not need to be faster or more automated; it needs to remain real. Real people making real choices that affect real lives.

Ultimately, the story of governance in the age of AI is not a warning about machines taking over. It is a reminder that decision-making only holds value when it stays tied to human existence. 

The systems that drifted toward automated participation and AI-driven voting didn’t break because the algorithms were malicious, but because they gradually detached from lived human experience.