May 15, 2021


Epoch-CRDT mechanism between equal human nodes


By Dato Kavazi, Max Kupriianov and Victor Smirnov

As the Humanode network consists of equal human beings who become equal network peers, we can take a different approach to reaching consensus than the mechanisms dominant in permissionless blockchains today: Nakamoto consensus, used by PoW networks, and Byzantine Fault Tolerance (BFT), widely adopted by PoS networks. Our choice is a new type of mechanism between equal human nodes, inspired by the unique characteristics of Conflict-Free Replicated Data Types (CRDTs). Merged with Merkle Clocks (H. Sanjuan, S. Poyhtari, P. Teixeira & I. Psaras, 2020), CRDTs provide the properties a permissionless distributed network needs: strong eventual consistency, causality of events, and availability without choosing a leader and without requiring all peers to stay online to maintain the same state of the ledger. Merkle-CRDTs are agnostic to the underlying peer discovery and data transfer systems. IPFS serves both as an environment for working with Merkle-DAGs and as a networking transport, broadcasting, discovery, and data transfer layer.

Merkle-CRDTs work only in environments where the peers are equal, which hinders the adoption of this technology by current blockchain protocols with unequal power distribution. In our protocol, human nodes are equal in their voting and validation power, as they are all human.

In Merkle-CRDT terminology, the word “node” describes a vertex in a DAG and is a separate entity from a human node; human nodes participating in replication are addressed as “replicas”.


Merkle DAGs are distributed authenticated hash-linked data structures.

A Directed Acyclic Graph (DAG) is a type of graph in which edges have direction and cycles are not allowed. A linked list 1 → 2 → 3 may serve as an example of a DAG: list member 1 references 2, while 2 references 3. As 1 has a link to 2, 2 is a child of 1, and we call 1 a parent of node 2. Nodes that are not children of any other node in the DAG are called root nodes. List member 3 cannot reference 1, as this would create a cycle.

Figure 1. Root and child nodes in a Merkle-DAG

The best-known example of a Merkle data structure is the Merkle Tree (R. C. Merkle, 1988). It is used by all existing blockchains; the data inputs represent transactions (Ta, Tb, Tc, Td). Each input is first turned into a hash using a cryptographic hash function such as SHA-256. We then combine Hash A and Hash B and hash the result, which gives us Hash AB. Doing the same with Hashes AB and CD yields the root of the tree, Hash ABCD.
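
The pairwise hashing described above can be sketched in a few lines (SHA-256 via Python's hashlib; the transaction payloads Ta..Td are placeholders, and an even number of leaves is assumed):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest, applied at every level of the tree."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then combine pairs level by level up to the root."""
    level = [h(leaf) for leaf in leaves]          # Hash A, Hash B, Hash C, Hash D
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]                               # Hash ABCD

root = merkle_root([b"Ta", b"Tb", b"Tc", b"Td"])
```

Verifying any single leaf against `root` then takes only a logarithmic number of hashes, which is where the advantage over hash lists comes from.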

Figure 2. A Merkle Tree example

Merkle trees offer huge benefits over hash lists, as they require fewer actions to check the integrity of the data.

A Merkle-DAG is a bit different. In a Merkle-DAG there is a parent Merkle-DAG and local or sub-Merkle-DAGs, and the parent Merkle-DAG always consists of the roots of the sub-Merkle-DAGs (child-free nodes). Instead of storing and exchanging all the nodes, replicas interact over and sync only these child-free nodes, i.e., the roots. This data structure is similar to a Merkle tree but allows more freedom: a Merkle-DAG does not need to be balanced, and its non-leaf nodes are allowed to contain data. The human nodes work as replicas, storing the hashing result of the root nodes and comparing them amongst each other. If the hashing result of a root node differs between replicas, then one of the children in the DAG was altered. The Humanode protocol will detect such intrusions and slash malicious actors accordingly.

The main idea is that although we might not know the order in which all events happened globally, at least a replica knows the order of the events it issued itself. Any other replica that receives this information can reconstruct the order of events based on these preconditions. Commonly, logical clocks are representations of causal history and provide a partial ordering between events. They help us understand whether A happened before B, B before A, or the two happened concurrently. The practical implementation of logical clocks usually involves metadata that travels attached to every event in the system.
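
As one common implementation of logical clocks, vector clocks make the happened-before relation concrete. This sketch compares two equal-length clocks (one counter per replica, an illustrative simplification) and reports the partial order:

```python
def compare(a, b):
    """Partial order over two equal-length vector clocks."""
    less = any(x < y for x, y in zip(a, b))
    more = any(x > y for x, y in zip(a, b))
    if less and more:
        return "concurrent"   # neither causally precedes the other
    if less:
        return "before"       # a happened before b
    if more:
        return "after"        # b happened before a
    return "equal"
```

For example, an event stamped (1, 0) happened before one stamped (1, 1), while (2, 0) and (0, 1) are concurrent: this is exactly the case a plain timestamp cannot express.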

Information-centric networks (ICN) perform content naming, routing, and forwarding based directly on content names. Transport-agnostic state synchronization via Merkle-CRDTs is handled directly through named network objects, which brings the standard ICN advantages to Merkle-CRDTs.

Conflict-Free Replicated Data Types (CRDTs)

The main challenge for any decentralized ledger technology is to maintain the consistency and integrity of causal information. Causal history is defined as the history of states an object went through, which consists of queries, updates, merges, etc.

Introduced by M. Shapiro et al., Conflict-Free Replicated Data Types (CRDTs) are data types that guarantee the convergence of data among replicas despite failures that partition replicas in distributed systems (M. Shapiro et al., 2011).

CRDTs are widely deployed in practice among tech giants as a perfect fit for collaborative applications. CRDTs are used by Apple in Notes to sync offline edits and by League of Legends for in-game chat, handling 7.5 million concurrent users and 11,000 messages per second.

CRDTs achieve strong eventual consistency (SEC), which means that if two replicas have received the same set of updates, regardless of order, their state will be the same. This is quite a useful property for a decentralized ledger technology.

Among the advantages of CRDTs are:

  • Updates do not require synchronization;
  • CRDTs avoid the complexity of conflict resolution and roll-back;
  • Conflict-freedom ensures safety and liveness despite any number of network failures;
  • Monotonicity: a mathematically ensured absence of conflict;
  • Replicas provably converge to a correct common state;
  • Replicas remain responsive, available, and scalable despite high network latency, faults, or disconnection.
Figure 3. Convergence of states in state-based CRDTs

The figure shows how different replicas S1, S2, S3 change their local state, represented by a simple set, and then disseminate the state (blue lines) to other replicas, eventually converging to the common state among all 3 replicas including one that was offline.
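
The convergence in Figure 3 can be sketched with the simplest state-based CRDT, a grow-only set, whose merge operation is plain set union (commutative, associative, and idempotent):

```python
def merge(a: frozenset, b: frozenset) -> frozenset:
    """State-based CRDT merge for a grow-only set: set union."""
    return a | b

# Three replicas make local updates, then disseminate their states.
s1, s2, s3 = frozenset({"a"}), frozenset({"b"}), frozenset({"c"})

# No matter the order or grouping of merges, all replicas converge.
left = merge(merge(s1, s2), s3)
right = merge(s3, merge(s2, s1))
```

Because the merge is idempotent, receiving the same state twice (e.g., after a replica comes back online) changes nothing: this is what lets offline replicas catch up safely.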

CRDT Add-Wins / Observed-Remove Set

Figure 4. AWOR-set CRDT

There are many different types of CRDTs out there. We chose to implement the Add-Wins set (AW-Set), also known as the Observed-Remove Set (OR-Set), which supports adding and removing elements and is easy to understand. The outcome of a sequence of adds and removes depends only on its causal history and conforms to the sequential specification of a set. In the case of a concurrent add and remove of the same element, the add takes precedence (M. Shapiro, N. Preguiça, C. Baquero & M. Zawirski, 2011).

The intuition is to tag each added element uniquely, without exposing the unique tags in the interface. When removing an element, all associated unique tags observed at the source replica are removed, and only those.

The two add(a) operations generate unique tags α and β. The remove(a) called at the top replica translates to removing (a, α) downstream. The add called at the second replica is concurrent with the remove at the first one; therefore (a, β) remains in the final state.

OR-Set is a CRDT. Concurrent adds commute since each one is unique. Concurrent removes commute because any common pairs have the same effect, and any disjoint pairs have independent effects.
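
The scenario above can be sketched with a minimal, tombstone-based OR-Set (a simplification: production implementations garbage-collect tombstones using causal context):

```python
import uuid

class ORSet:
    """Observed-Remove (Add-Wins) set: every add carries a unique tag,
    and remove deletes only the tags observed locally at call time."""

    def __init__(self):
        self.added = set()    # observed (element, tag) pairs
        self.removed = set()  # tombstones for removed (element, tag) pairs

    def add(self, element):
        self.added.add((element, uuid.uuid4().hex))  # fresh unique tag

    def remove(self, element):
        # Remove only the tags this replica has observed, and only those.
        self.removed |= {(e, t) for (e, t) in self.added if e == element}

    def merge(self, other):
        # State-based merge: union of both components (commutative, idempotent).
        self.added |= other.added
        self.removed |= other.removed

    def value(self):
        return {e for (e, t) in self.added - self.removed}

# Concurrent add and remove of "a": the concurrent add wins after merging.
r1, r2 = ORSet(), ORSet()
r1.add("a")        # generates tag α
r2.merge(r1)       # r2 observes (a, α)
r1.remove("a")     # tombstones only (a, α)
r2.add("a")        # concurrent add with a fresh tag β
r1.merge(r2)       # (a, β) survives: add wins
```

After the final merge, `r1.value()` contains "a", matching the outcome described for Figure 4.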

IPFS Cluster

Merkle-CRDT has been implemented as a pluggable datastore component and integrated as a consensus engine into the IPFS Cluster. The IPFS Cluster provides data orchestration across a swarm of IPFS daemons by allocating, replicating, and tracking a global pinset distributed among multiple peers. Cluster peers form a distributed network and maintain a global, replicated, and conflict-free list of pins.

IPFS utilizes a distributed hash table (DHT) to announce and discover which replicas provide certain Merkle-DAG nodes. It also implements a block-exchange protocol, Bitswap, to retrieve DAG nodes from any provider. At the same time, IPFS is built on top of libp2p, which provides a broadcasting mechanism. With InterPlanetary Linked Data (IPLD) maintaining content identifier (CID) support for nodes, and all the properties mentioned above, IPFS is very well suited for Merkle-DAGs. IPFS acts as an asynchronous messaging layer that provides communication between separate replicas through two main components: the DAG-syncer and the broadcaster.

As the Humanode protocol is a public cluster, 2/3 of human nodes must be non-malicious and maintain server availability. A Sybil attack is still possible, but it requires the coordination of more than 66% of active human nodes. This is unlike PoW or PoS, where renting equipment or purchasing tokens lets a small group of ultra-high-net-worth individuals halt the network or verify a fraudulent transaction. Hence, the security of the network grows with the number of active human nodes and cannot be easily breached by capital attacks.

Merkle Clocks

A Merkle Clock is a Merkle-DAG where each node represents an event. By merging DAGs between replicas, we build a new DAG. New nodes are added as root nodes, while previous root nodes become their children. It is worth mentioning that a Merkle Clock may have several roots at a given time. A Merkle Clock ensures consistency of causal history by resolving any issues connected to the order of events.

For example, if we have two Merkle Clocks, they will converge based on the preconditions of the particular case. If the roots of the two Merkle Clocks are the same, no action is required, as the DAGs are already identical. If some nodes differ, the system determines the order of events through DAG-syncing and updates all of the replicas accordingly. If the order of events cannot be determined even after DAG-syncing, it means that neither replica has data linked to the other's, and the Merkle Clock merges both nodes by keeping both DAGs until a resolution is eventually reached. The two Merkle Clocks can be fully disjoint or not, depending on whether they share some of their deeper nodes. This makes Merkle Clocks a convergent replicated data type.

A Merkle Clock DAG can be seen as a grow-only set of nodes, itself a CRDT, and therefore it converges across multiple replicas.

There are four main steps in replica synchronization through Merkle Clocks:

  1. Broadcasting of the content identifier (CID) of the new root to other replicas. The whole Merkle Clock is identified by the CID of its root, and its full DAG can be analyzed from it if there is a need.
  2. Replicas make a quick comparison by querying and pulling only those nodes that they do not already have. This is possible due to the immutable nature of Merkle-DAGs.
  3. Merkle-DAG nodes can be pulled from any source willing to provide them, as they are self-verified through their CIDs and immune to corruption and tampering.
  4. The design of the system does not allow duplication: identical nodes are de-duplicated in the process, as there can only be one unique representation for every event.
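
As a rough sketch of steps 2-4 above, with a hex SHA-256 digest standing in for a real IPFS CID and plain dicts standing in for DAG nodes:

```python
import hashlib
import json

def cid(node: dict) -> str:
    """Content identifier: hash of the node's canonical encoding
    (a stand-in for real IPFS multihash CIDs)."""
    return hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()

def sync(local: dict, remote: dict, root: str):
    """Walk the remote Merkle Clock from its root CID, pulling only the
    nodes the local replica does not already have."""
    stack = [root]
    while stack:
        c = stack.pop()
        if c in local:                  # already have it: de-duplicate, stop descending
            continue
        node = remote[c]                # pull from any provider
        assert cid(node) == c           # self-verification against the identifier
        local[c] = node
        stack.extend(node["children"])

# Remote clock: n2 (new root) -> n1. The local replica already holds n1.
n1 = {"payload": "event-1", "children": []}
c1 = cid(n1)
n2 = {"payload": "event-2", "children": [c1]}
c2 = cid(n2)
remote = {c1: n1, c2: n2}
local = {c1: n1}
sync(local, remote, c2)                 # fetches only the missing n2
```

Only the CID of the new root needs to be broadcast; everything below it is fetched on demand and verified against its identifier.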

In a Merkle Clock, every replica pulls causal histories from other replicas. A newly born replica with no previous history will fetch the full causal history automatically. A Merkle Clock replaces timestamps and the logical clocks that are usually used as part of CRDTs. By implementing Merkle Clocks, we are able to disconnect the causality of information from the number of replicas, which stands as a limitation in many solutions using logical clocks. By doing so, we can reduce the size of the messages in CRDT implementation and try to solve the problem of keeping up the operation of the clock when replicas randomly join and leave the system.

Merkle Clocks solve many problems of distributed causal history ordering by design. In many systems, dropped messages might prevent replicas from learning about the existence of new roots; in a Merkle Clock, any newer DAG supersedes the older ones and links to the full causal history, so replicas can download and synchronize all missing parts and the system heals on its own through eventual consistency. Out-of-order delivery cannot harm the system for the same reason. Duplicated nodes are simply ignored by replicas, as they are already incorporated into their Merkle Clocks.

The main problem with Merkle Clocks is that they cannot define which of the events happened first if they have multiple roots, so they would stay in a diverged state until the order is resolved through the logic of causal history in other replicas. To solve this problem, we need to address approaches to sorting out concurrent events that would qualify as data-layer conflict resolution. This is where Merkle-CRDTs come in handy.


The Merkle-CRDT approach facilitates the use of CRDTs in a P2P environment with a large number of replicas (human nodes) without the need for message delivery guarantees (H. Sanjuan, S. Poyhtari, P. Teixeira & I. Psaras, 2020). A Merkle-CRDT is a Merkle Clock whose nodes carry an arbitrary CRDT payload, which qualifies as data-layer conflict resolution. This approach largely resolves the ambiguity of concurrent events, because for the payload to converge properly it must be of a convergent data type.

Figure 5. Merkle-CRDT mechanism

Replica B issues a payload by creating a new DAG node and broadcasts the new CID to the rest of the replicas in the network. Replica A receives the broadcast, uses the DAG-syncer to retrieve all nodes from (sub)Merkle-DAG B that are not in (sub)Merkle-DAG A, and stores them in a special D-set. If the D-set is empty, no further action is required: both Merkle-DAGs are already the same. If the D-set is not empty, the system sorts the CIDs inside the set using the Merkle Clocks of both sub-DAGs. If, after sorting, all nodes of (sub)Merkle-DAG A turn out to be included in (sub)Merkle-DAG B and the nodes in the D-set come from (sub)Merkle-DAG B, Replica A processes the payloads associated with the CIDs in the D-set from lowest to highest, and the Merkle Clock of (sub)Merkle-DAG B becomes the new local Merkle-CRDT of Replica A.
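
A minimal sketch of this merge logic, with plain strings standing in for CIDs and the Merkle Clock comparison reduced to a set difference:

```python
def merge_remote(local_nodes: set, remote_nodes: set, apply_payload):
    """Merkle-CRDT merge sketch: compute the D-set of nodes the local
    replica is missing, then process their payloads in a deterministic
    (sorted-CID) order before adopting the remote clock."""
    d_set = remote_nodes - local_nodes
    if not d_set:
        return local_nodes              # both DAGs are already the same
    for c in sorted(d_set):             # from lowest to highest CID
        apply_payload(c)
    return local_nodes | d_set

# Replica A already has two nodes; Replica B's broadcast brings two more.
applied = []
local = {"Qma", "Qmb"}
remote = {"Qma", "Qmb", "Qmc", "Qme"}
merged = merge_remote(local, remote, applied.append)
```

Because every replica sorts the same D-set the same way, concurrent payloads are applied in the same order everywhere, which is what makes the merge deterministic.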

CRDTs and Merkle-DAGs complement one another: the former ensures eventual global state convergence without complicated and costly consensus mechanisms; the latter allows the Humanode network to take advantage of the IPFS content-addressing layer for discovery and self-verification of data. By embedding CRDT objects inside Merkle-DAG nodes, we get a convergent system.

A Merkle-CRDT node carries:

  • A CID – content identifier
  • An opaque data object with CRDT properties
  • A set of children identifiers

Delta-state CRDTs

Delta-state CRDTs solve the problem of propagating and merging the full state each time a change is made by a replica (human node). This is done via delta-mutations (P. S. Almeida, A. Shoker & C. Baquero, 2015).

In state-based CRDTs, the entire state is periodically propagated and merged with other replicas. As the local state grows, this approach becomes impractical. Delta-state CRDTs disseminate only the recent changes incurred in the local state instead of the whole state. Using delta-CRDTs increases memory and computation efficiency while maintaining mathematically verified strong eventual consistency (T. Blau, 2020).
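
A sketch of the idea with a grow-only set that accumulates a delta instead of shipping its full state (the names are illustrative, not the go-ds-crdt API):

```python
class DeltaGSet:
    """Grow-only set that disseminates delta-mutations rather than full state."""

    def __init__(self):
        self.state = set()
        self.pending_delta = set()      # changes not yet disseminated

    def add(self, element):
        if element not in self.state:
            self.state.add(element)
            self.pending_delta.add(element)

    def take_delta(self):
        """Return and clear the accumulated delta for dissemination."""
        d, self.pending_delta = self.pending_delta, set()
        return d

    def merge(self, delta):
        """Joining a delta uses the same join as full states: set union."""
        self.state |= delta

# Only the small deltas travel over the network, not the whole set.
r1, r2 = DeltaGSet(), DeltaGSet()
r1.add("tx1")
r2.merge(r1.take_delta())
r1.add("tx2")
r2.merge(r1.take_delta())
```

Since the join is the same operation as for full states, a delta can be re-delivered or merged out of order without breaking convergence.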

Residual increase in number of validator human nodes

As Merkle-CRDT is an emerging technology that has never been implemented as an algorithm for data convergence in a public permissionless cluster on IPFS, the number of validator nodes will at first be restricted to thousands, potentially scaling to millions depending on research and development on public Merkle-CRDT clusters.

The limitation will be set through Vortex voting. While the limit on validator human nodes is still low, it remains possible to attack the system through coordination among more than 66% of active validator nodes. To lower the number of malicious human nodes verifying transactions in the early days of the protocol, we impose additional selection criteria on candidate human nodes based on Proof-of-Time and Proof-of-Devotion. Those with a higher tier or a longer governing history will be the first candidates to enter the Humanode public cluster on the mainnet as validators. Everybody else will remain a human node and will perform relay-node functions to speed up data delivery.

Humanode Transactions Flow

Humanode reuses existing building blocks from the IPFS project to implement the networking and conflict-free data replication layers. By utilizing Merkle-CRDT for state replication, it becomes possible to implement a robust key-value store that holds the application state, similar to the Merkle trees used elsewhere, such as IAVL+ in Cosmos.

We know the advantages and limitations of the selected state replication mechanism; the main goal is to design application-level logic that ensures transactions are conflict-free and the state converges to the truth as fast as possible.

State abstractions

In order to avoid the burden of working with Merkle-CRDT directly, we leverage a Datastore component that provides a unified interface to access data. Datastores are general enough to be backed by all kinds of storage: in-memory caches, databases, a remote datastore, flat files on disk, etc. In our case, we pick the go-ds-crdt package from the IPFS project, which implements a Datastore backed by Merkle-CRDT and is based on the following external components:

  • A user-provided, thread-safe go-datastore implementation to be used as permanent storage. This is the lowest level of storage, which uses disk access to cache all temporary and final state. For this purpose BadgerDB has been selected, as it is used in many other places by IPFS and the Humanode Network's code.
  • A user-defined Broadcaster component to broadcast and receive updates from a set of replicas, i.e., the Humanode transaction validators. Since our application uses libp2p, we use libp2p PubSub and the provided PubsubBroadcaster.
  • A user-defined DAGSyncer component to publish and retrieve Merkle-DAGs to and from the network. For that interface, the node initializes an IPFS-Lite client, which implements it.

When all external components are properly initialized and mounted, the Datastore is ready to be used by a node instance. Since it's a key-value store, we use key prefixes to define separate data buckets.

For the application level, which represents transaction processing and validation logic, we defined the following set of buckets to hold the state:

  • Epochs — a KV subspace (bucket) that holds epoch snapshots. An epoch snapshot is similar to a block in the classic blockchain, but within our network transactions are grouped in epochs. More on that below. Each epoch has a number as a key that points to a list of transactions.
  • Transactions — a bucket holding the set of confirmed transactions. Consensus participants decide whether or not to include a transaction in that set. If the transaction is confirmed, its CID and validator signatures are added to the set, and the body (data, metadata, result, etc.) is replicated through IPFS using the DAGSyncer.

An IAVL+ (cosmos/iavl) tree is also kept outside Merkle-CRDT replication, i.e., as a simple locally cached state:

  • Accounts — a bucket that holds account balances and nonces by address. Each time an account is modified, i.e. its balance changes or nonce increases, we update a value associated with the address key at a certain epoch.
  • Validators — a bucket holding a set of validators. To add a validator, consensus must achieve 2/3 quorum. Validator meta-data (name, URL, etc) can be modified by a single transaction from the validator's address. This bucket is a cached version of the state that can be recovered from replaying transactions in the corresponding epochs.

This local state allows us to get the most relevant state on a given epoch by replaying all transactions within that epoch. The concept is similar to the state at certain block heights. Merkelized implementation provided by Cosmos also has the ability to pre-apply unconfirmed transactions to obtain a pending state.

Changes for the validator set are snapshotted for each Epoch individually, so the set cannot be affected mid-Epoch.

Consensus caveats

Transactions on the Humanode network are similar to normal blockchain transactions, like those seen in Ethereum or Cosmos. Our main consideration is that we don't use blocks to cryptographically verify transaction continuity. Other chains that use blocks have shown the consequences and drawbacks of that approach, and its effects are much more harmful than useful for a chain in the long term. In modern PoS systems, blocks with 300-800 ms timeouts make less and less sense: they are an inaccurate way to measure time, they produce a lot of metadata garbage, and they shadow the real problems.

Real problem I: Transaction ordering

One of the most subtle problems in blockchain programming is transaction ordering and propagation. What happens if a user sends transactions 1 and 2 to node A? What if they send 1 to node A and 2 to node B simultaneously? Different blockchains have their own approaches; some frameworks like Hyperledger delegate the decision to a replaceable component that allows custom ordering logic suitable for the application.

In Ethereum, all incoming transactions on a node fall into the pool of unconfirmed transactions (the Mempool), where they are sorted by gas price. Basically, the ordering logic for unconfirmed transactions aims to maximize a miner's profit. This opens a whole world of front-running possibilities: while one transaction is pending, another user can create and sign the same transaction with a higher gas price. This is harmful to applications in the DeFi space, where, for instance, an order on a DEX can be taken by whoever was first to propagate the transaction.

In Cosmos (Tendermint), transactions in the Mempool are ordered in a FIFO queue; once a proposer is selected for a round, the proposer node takes a FIFO snapshot and processes it for that round (tendermint-core/mempool). This makes the outcome rather undefined, but at least it somehow relies on time and network latency: whoever was lucky enough to get the transaction to the right proposer wins. Proposer selection frequency depends on stake volume, so the nodes with a bigger stake will be chosen as proposers most of the time, therefore providing the most canonical FIFO snapshots.

Humanode consensus cannot count on leaders or on gas price. So we decided to stop playing around with 0-ish-second block times and simply introduce time into all transactions: yes, Humanode transactions are timestamped. This way, ordering becomes truly distributed. The clients define the timestamp, local nodes ensure its validity within a certain bound, and transactions are ordered by timestamp. If two transactions have an equal timestamp, the ordering is defined by the transaction hash. (Since the timestamp is part of the transaction, and thus of the transaction hash, it is impossible to quickly "pre-mine" transaction hashes that sort better before the timestamp itself expires.)
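
The ordering rule can be sketched as follows (the transaction fields and hash encoding are illustrative, not the actual Humanode wire format):

```python
import hashlib

def tx_hash(tx: dict) -> str:
    """The hash covers the timestamp too, so a 'better-sorting' hash cannot
    be mined without also changing (and eventually expiring) the timestamp."""
    blob = f"{tx['timestamp']}|{tx['sender']}|{tx['payload']}".encode()
    return hashlib.sha256(blob).hexdigest()

def order(txs: list) -> list:
    """Deterministic ordering: timestamp first, transaction hash as tie-breaker."""
    return sorted(txs, key=lambda tx: (tx["timestamp"], tx_hash(tx)))

txs = [
    {"timestamp": 1621036802, "sender": "bob",   "payload": "pay 5"},
    {"timestamp": 1621036800, "sender": "alice", "payload": "pay 1"},
    {"timestamp": 1621036800, "sender": "carol", "payload": "pay 2"},
]
ordered = order(txs)
```

Every node that sees the same set of transactions computes the same order, no matter in which sequence the transactions arrived.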

Real problem II: Block time and Epochs

Blocks in different blockchains are part of the DNA and are therefore used for many application-level purposes. In PoW chains such as Ethereum V1 and Bitcoin, studying blocks allows you to eventually tell whether a transaction was accepted or not, i.e., whether the user's payment or kitty sale went through. The time to confirm a user action equals (blocktime * safe_threshold), which may take minutes or even hours. This is proven to be a UX hell, and newer PoS chains such as Ethereum V2 or Cosmos-based chains (Kava, Matic) explicitly target the instant-finality model, with safe_threshold = 1 and a block time preferably ≤ 1 s.

In Humanode Consensus we don’t use blocks to tell whether a transaction was confirmed: confirmation is a property of the concrete transaction. Some transactions might take extra CPU time (e.g., for a TEE call) but still be cryptographically verifiable on a global scale.

Another use case for blocks was keeping track of time on-chain: transactions could target approximately the time at which a certain block becomes confirmed. In many sub-second-block-time blockchains, this "height" simply becomes a timestamp, since one block resolves to approximately one second. When block time varies (Binance Chain shows 300 ms, 500 ms, and 3 s in different explorers), it becomes totally unclear how to handle that.

In Humanode Consensus we decided to use the term Epoch for a similar concept. Humanode transactions are grouped by Epochs: 1-minute intervals of UNIX time, rounded to the minute, starting from the network genesis. When the network is launched, the Epoch is effectively 1; after one minute passes, the epoch is incremented. Working with time in decentralized systems is a risky job, but applied properly it provides its own benefits.
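
Under these definitions, the epoch number is a pure function of the current UNIX time and the genesis timestamp (a sketch; the actual rounding rules may differ):

```python
def epoch_number(now: int, genesis: int) -> int:
    """Epochs are 1-minute UNIX-time intervals starting from network genesis;
    the first epoch after launch is 1."""
    return (now - genesis) // 60 + 1

GENESIS = 1621036800  # illustrative genesis timestamp

# The whole first minute after genesis belongs to epoch 1,
# and the counter increments exactly at each minute boundary.
launch_epoch = epoch_number(GENESIS, GENESIS)
```

Because every node derives the same epoch from a shared genesis constant, no coordination is needed to agree on which epoch a timestamp falls into.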

Since epochs are snapshotted every minute, not much metadata is persisted for empty epochs. Every transaction has its own confirmation status, meaning that we don't use epochs to signal confirmations. Transactions are timestamped by clients, and this timestamp must be validated against the epoch boundary on the local node. Basically, the system is decentralized and tries to be aware of time, but allows deviations and clock skew within reasonable bounds.

Having Epochs actually solves multiple problems: the ordering of transactions, validation deadlines, and state replays within certain time intervals. And finally, it provides a reliable clock at the smart-contract level (accurate within 1-minute bounds).

An Epoch is considered sealed after another Epoch on top of it has elapsed. Sealed here means that no new transactions targeting that Epoch will be broadcast (and ongoing validation will be ceased even before reaching a 2/3 quorum). The only transactions still allowed will be those with a 2/3 quorum already collected before this deadline that somehow failed to propagate on time.

Figure 6: Epochs and flow of time

Real problem III: Transaction conflicts

While the underlying Merkle-CRDT data structure ensures that the state eventually converges, it is still vulnerable to head conflicts: what if different nodes provide different values for the same key at the same position of the logical clock? Luckily, the design of Merkle-CRDTs delegates conflict resolution to the application level, where we can ensure that this type of conflict is very unlikely. For both CRDT-enabled buckets:

  • The Epochs bucket picks the Epoch with the longer transaction list. All transactions carry 2/3 consensus signatures, so the longest valid list prevails. Transactions within an Epoch are deterministically ordered by their individual timestamps.
  • The Transactions bucket is a set. If a transaction has 2/3 consensus signatures attached, it is added; once added, the transaction is immutable and cannot be changed. The transaction hash is its CID, i.e., the transaction data pre-defines its key, which will therefore be unique. Conflicts within the signature set are not expected, since double-signing is disallowed (no conflicting signatures from the same validator).

We generally prefer append-only sorted lists when CRDTs are involved.

Application-level data races are another type of problem and cannot simply be avoided with append-only lists. We leverage a Merkleized IAVL+ tree that keeps snapshots of the Accounts and Validators sets on every Epoch. During transaction validation, validators access their local state within the pending epoch, converged by replaying past transactions, to get the most up-to-date information on the account and validator sets.

An account balance is increased or decreased as a side effect of executing state transactions: a user might send his HMND coins or pay gas, or he may be the beneficiary of such a transfer. Every transaction keeps a map of addresses and their balance deltas. During a replay, balances are modified as final_balance(epoch) = balance(epoch-1) + (delta_1 + delta_2 + ...). A data race might occur if the same account sends the same transaction from multiple nodes, so an account nonce is introduced, similar to Ethereum, Cosmos, and other chains.

Figure 7: Account balance updates from validator-signed broadcasts over Epoch-CRDT
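
The replay rule final_balance(epoch) = balance(epoch-1) + (delta_1 + delta_2 + ...) can be sketched as follows (the field names are illustrative):

```python
def replay_balances(prev_balances: dict, transactions: list) -> dict:
    """Replay an epoch: apply each confirmed transaction's balance-delta map
    on top of the previous epoch's snapshot, leaving the snapshot intact."""
    balances = dict(prev_balances)
    for tx in transactions:
        for address, delta in tx["deltas"].items():
            balances[address] = balances.get(address, 0) + delta
    return balances

# One confirmed transaction: alice sends 30 HMND to bob and pays 1 in gas.
prev = {"alice": 100}
epoch_txs = [{"deltas": {"alice": -31, "bob": +30}}]
new = replay_balances(prev, epoch_txs)
```

Because the previous snapshot is copied rather than mutated, the state at any earlier epoch remains queryable, mirroring the "state at block height" concept.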

While the account nonce prevents transaction re-send attacks, it also helps keep the ordering of transactions coming from the same account. The account nonce is monotonically increased with every new transaction. Depending on the approach, the node might wait until the nonce "gap" is closed (the Ethereum approach) or explicitly reject transactions with non-sequential nonces from the Mempool (the Tendermint approach).
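
Both policies can be sketched in one admission check (a simplification that assumes the next expected nonce is account_nonce + 1):

```python
def admit(tx_nonce: int, account_nonce: int, mode: str) -> str:
    """Nonce continuity check. mode='wait' keeps a gapped Tx until the gap
    closes (Ethereum-style); mode='reject' drops it (Tendermint-style)."""
    if tx_nonce == account_nonce + 1:
        return "execute"                # exactly the next expected nonce
    if tx_nonce <= account_nonce:
        return "reject"                 # replay of an already-used nonce
    return "wait" if mode == "wait" else "reject"
```

Either way, a re-sent transaction with an already-used nonce is always rejected; the two policies differ only in how they treat future nonces.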

The data race problem grows when multiple accounts apply transactions to a shared state, such as a smart contract. In this case, while nonce continuity is enforced for each account, the order of transactions within a single Epoch is determined by the transaction timestamp. So what happens if two distinct accounts transact over the same contract and their transactions may cancel each other out (like spending a shared balance)? In networks like Ethereum, all transactions are validated sequentially, and this problem simply does not exist. But with the parallelism introduced in Epoch-CRDT, we must provide a mechanism similar to a mutex (a mutually exclusive access lock), ensuring that the smart-contract state also has something similar to an account nonce that is sequentially increased. So while a single transaction is being executed against the smart-contract state, the other one waits. The smart-contract side of the Humanode network is not fully developed yet, and it is a very deep topic of research in general, but this conveys the idea for now.

Transaction lifetime

So, given all the drill about possible consensus challenges in a parallel and self-replicating distributed ledger, let's jump into the workflow of basic transaction processing.

Fig 8: Transactions flow in the Epoch-CRDT mechanism utilizing Merkle-CRDT and IPFS
  1. The user signs a transaction locally using his cached private key (HD-derived from biometric data); the transaction is timestamped using the best effort on clock accuracy.
  2. The user broadcasts the Tx to the closest node; the node checks the timestamp against the local clock and epoch, allowing a certain clock skew (up to 5 s).
  3. The transaction is added to the local mempool store with a TTL (the time until the Tx is evicted).
  4. The node checks the account's nonce against the local state until the TTL expires. If the nonce continuity requirement is not met, the Tx is rejected.
  5. Once the nonce requirement is met, the Tx is executed against the local state at the pending epoch:
    1. During execution, balances are validated and a map of balance deltas is filled. Gas cost is deducted from the sender's balance as well;
    2. If successfully validated, the transaction is broadcast into consensus using PubSub;
    3. The transaction data, metadata, and execution result are written into the IPFS namespace, and its CID is broadcast.
  6. Validators pick up broadcast Txns, load the data from the shared IPFS namespace, and run them against the local state at the pending epoch. All previously confirmed transactions within the same epoch prevail (FIFO order).
  7. Signatures are broadcast to PubSub and collected locally:
    1. When someone (leaderlessly) collects a set of 2/3+ validator signatures on a transaction, a Merkle-CRDT datastore write is broadcast (modifying the Transactions set). Multiple broadcasts of the same change are not a problem.
  8. The transaction CID and its 2/3+ signatures are in the CRDT-replicated state; the transaction body is pinned by validators in the IPFS layer.
  9. Every validator can propose an Epoch change by simply writing a list of transactions into the Merkle-CRDT datastore; only the longest list wins. The datastore is sealed automatically and doesn't accept writes after the next Epoch elapses.
  10. All confirmed transactions are replayed against the local Accounts and Validators state. Basically, confirming a transaction changes the total balance shown for the account; or, in the case of a validator-set-modifying transaction, the validator set is changed for the next Epoch.
  11. The transaction can be viewed in the explorer by its hash (CID):
    1. The user can query his current balance from any Humanode server that is in sync with the network.


An Epoch-CRDT mechanism based on equal human nodes brings a new approach to transacting value in a decentralized fashion. Delta-state CRDTs allow data convergence and the merging of causal information without the need for additional consensus: it is mathematically guaranteed by the nature of CRDTs that all replicas will converge to the same state. Epoch-based timestamped transactions overcome common blockchain limitations arising from transaction ordering, forced block-time finality, and transactional conflicts. The epoch-based approach with consensus on broadcast makes it possible to parallelize transactions like never before. IPFS fits perfectly as a layer for broadcasting, syncing, and storage through components such as libp2p, BadgerDB, and the DAG-syncer. With the parallelism introduced in Epoch-CRDT, we must provide a mechanism ensuring that the state of smart contracts also has something similar to an account nonce that is sequentially increased. The smart-contract side of the Humanode network is not fully developed yet, and it is a vast field for research.

For more information check out Humanode’s:

Whitepaper | Website | Telegram ANN | Telegram Chat

Twitter | GitHub | Youtube | LinkedIn
