Influential thinkers of the 1960s and 1970s, including Ralph Merkle and Leslie Lamport, made significant contributions to what is now commonly referred to as the field of hash-based cryptography. Designs conceptualised during this era include hash-chains (the subject of this article), Merkle Trees (an essential cryptographic primitive), and the hash-linked structures that would later underpin blockchains (without which decentralised finance would not be possible). While this list is not exhaustive, it captures some of the schemes from hash-based cryptography that are currently making the most impact. Other schemes, such as the Lamport One-Time Signature and the Winternitz One-Time Signature, are also increasingly being researched in the context of post-quantum cryptography. One interesting notion that characterises all of this work is that at its initial time of conception it was not investigated thoroughly, and was not until much more recently, with the rising popularity of applications such as Bitcoin and Ethereum.

Contents:

FAQ
Introduction
Merkle Trees
Hash-Chain
Refined-Winternitz One-Time Signature
Conclusion

FAQ

What is hash-based cryptography?

Relying on the presumed hardness of finding a preimage or a collision for any given output, hash functions can be used to build schemes for integrity in network communications. The use of hash functions, either in chains or other patterns, forms the basis of hash-based cryptography. Even with quantum computation available, Grover’s Algorithm would not make hash functions unviable to use, so long as their output sizes are doubled to maintain the same security level. However, hash-based cryptography is characterised by One-Time Signatures (which can only be used once before becoming vulnerable) and Few-Time Signatures (which can only be used a small number of times), both of which require large amounts of computation and large proofs for a single message.

Why is hash-based cryptography important?

In a future where quantum computation seems increasingly likely, and in which the reliance of networks on cryptography for providing confidentiality, integrity, and availability continues to grow, new cryptographic schemes are necessary. Currently widespread and computationally cheap forms of public-key cryptography are vulnerable to quantum algorithms, whereas the assumptions that underlie hash-based cryptography largely hold up against such algorithms. While hash functions alone cannot provide confidentiality, they can provide integrity, which is necessary for ensuring that a malicious actor cannot forge a message and claim it was sent by a genuine actor.

What types of one-time signatures are there?

One-Time Signatures are schemes derived from the field of hash-based cryptography that allow for the integrity of messages sent over a network even where confidentiality does not exist. ‘One-Time’ refers to the notion that any signature of this type can only be used once before it is vulnerable to being exploited to forge signatures for new messages. Therefore, each time a user desires to prove the integrity of their message with a One-Time Signature they must generate a completely new signature with a new public key. It is this element that makes them computationally prohibitive to implement, as each single message requires significant computation and/or proof sizes. There are two commonly proposed varieties of One-Time Signature: Lamport Signatures and Winternitz Signatures.

What is a hash-chain?

Visualising a hash function as taking some input and producing a hashed output, it can be said that the output sits at the zeroth index of a chain whose input is the ‘first’ element. In the same way, the hash of this output sits at the first index, and so on, so that the nth index is the nth hash of the original input. A hash-chain is simply this concept of taking some object and hashing it many times to produce a long chain of objects, each the hash of the object that came before it. An important characteristic of this concept is that if any actor apart from the one who generated the chain knows only the ith index, they cannot find any jth index of the hash-chain where j is less than i. Hence, in some instances a hash-chain can serve as n proofs of an actor’s genuine knowledge of the original input, the ‘first’ in the chain.

What is a hash-ladder?

If an actor produces two objects, they can form an nth index hash-chain from each object. If they desire to prove their knowledge of either object to a verifier they would be able to provide the ith index of the first hash-chain and the jth index of the second hash-chain, where j is n minus i. If a malicious actor intercepts this communication, without any further information provided, they would not be able to prove knowledge of either object using some k not equal to i, as doing so would require reversing a hash function. This task is considered computationally infeasible, even with quantum computation. It is this concept of using two hash-chains in collaboration that can be abstractly visualised as a “hash-ladder”.

How are Verkle Trees created?

Verkle Trees are more generalised versions of Merkle Trees. Where a Merkle Tree typically involves hashing the concatenation of two objects at each step to form the next node along the tree, Verkle Trees allow the concatenation of n objects at each step; one commonly proposed value is n of 16. The reason for this change is that it allows Verkle Trees to be “wider” and as such “shallower”, meaning less data is needed for each Verkle proof and less hashing needs to be done to produce a tree. Such a change remains theoretical for now, as no efficient algorithm has yet been fully tested to make it practical. Typically an actor producing a proof would need to include all 16 objects together to prove that they hash together to form the next node - but with an efficient algorithm it is possible that only 2 “objects” (where the definition of object in this case is more vague) would be required.

Introduction

Large strides have already been made to improve blockchain theory, and cryptographers are beginning to find constructions of “Verkle Trees” (wider generalisations of Merkle Trees). However, hash-chain designs and one-time signatures have not received the same portion of research; this article dedicates itself to a wider exploration of these schemes. To frame this investigation of hash-chains and one-time signatures, some clarification of the wider purpose of hash-based cryptography is warranted. Where cybersecurity is heavily reliant on the CIA triad (confidentiality, integrity, availability), hash schemes provide integrity in the most computationally efficient manner possible. On the other hand, they do not provide any capability to support availability or confidentiality, and they never could: a hash function discards information about its input, so it cannot be used to recover a message the way a cipher can. Therefore, focusing on what improvements can be made for the property of integrity is where viability for the aforementioned concepts can be found and applied. The reason for this has been explored more widely through an earlier explanation of the Functional-Verifiability scale. To condense that explanation: it has been shown on numerous occasions that, under certain conditions, One-Way-Hash functions allow an actor $P$ to prove they know some object $x$ to any verifier $V$, though the value of $x$ must eventually be revealed. Advantageously, under those conditions no other actor can achieve the same function without actually knowing $x$. These conditions, which allow for verification without revealing $x$ prematurely to other actors, can especially be leveraged by hash-chain designs.

Merkle Trees

It has been alluded to previously that Merkle Trees are essential cryptographic primitives that underpin many digital systems, especially blockchain networks. The concept of a Merkle Tree is simple in nature and, perhaps by extension of this simplicity, its applications are wide. In general, a Merkle Tree allows an actor $P$ to prove they know any one of $n$ different objects while revealing only a single object $x$ to a verifier $V$ at a time, without any other actor being able to replicate this under the right conditions (conditions that will be expanded upon in further detail below). Understanding how a Merkle Tree is constructed and how it is applied will assist in explaining further hash-based cryptography topics.

An illustration of a simplified Merkle Tree.

Above is an illustration of a simple Merkle Tree, where the root node verifies the knowledge and existence of 4 objects by an actor $P$. It should be noted that the four child nodes on the bottom of the tree are actually the One-Way-Hash outputs of the objects themselves. By sharing the root node alone $P$ cannot prove their knowledge of any of the objects, but for any verifier $V$ that has already received the root node there is a proof that can be constructed. Where $V$ wants to verify $P$’s knowledge of the second object, they can perform the following operations.

// Function to be run by the verifier V
function proveObj(objToProve, proof, rootNode) {
	let node = objToProve;
	for (let i = 0; i < proof.length; i++) {
		// Simplified: assumes each proof element concatenates on the right;
		// a full implementation would track left/right position at each level
		node = hash(node, proof[i]);
	}
	if (node == rootNode) {
		return true;
	} else {
		return false;
	}
}

// Verifier may know the root node and the object that needs to be proved
let obj = 0xb3b...; // Second object from illustration
let root = 0x32b...; // Root node from illustration

// Verifier receives a proof from the actor P
let proof = [0x11c..., 0x2ef...]; // Proof from illustration

proveObj(obj, proof, root); // Will return true as indicated by the above diagram

To put the code excerpt into words: the verifier $V$ requires from $P$ all the nodes circled in the diagram (where the root node is public to the verifier in advance), in addition to the object that knowledge is to be proven of. By hashing the object $x$ alongside each value contained in the proof, one by one, the verifier is able to construct each step up the tree until they arrive at the root node, which requires only a straightforward comparison to validate.

hash(0xb3b..., 0x11c...); // 0xaf0...
hash(0xaf0..., 0x2ef...); // 0x32b...
0x32b... == 0x32b...; // True

Where it is generally assumed that a SHA-256 hash function is used, any proof will be $256 \times \log_2(n)$ bits in length, given $n$ is the number of objects that knowledge can be proven of under the root node. Therefore the current example would require a proof size of 512 bits to prove knowledge of any of the 4 objects. If knowledge needed to be proven of one of any 1024 objects, the proof size does not increase too much, still being only 2560 bits in size (5 times the proof size for 256 times the objects). This is one area from which potential applications can be extracted: the observation that it may be possible to prove knowledge of one of an incredibly large number of objects with relatively manageable proof sizes. Combined with this concept, each of the objects in a Merkle Tree can itself be the root node of another Merkle Tree, creating a Merkle Tree of Merkle Trees. This would allow $P$ to generate many public keys in advance that they can use one by one for different applications (though the proof size is not reduced any further by this approach).
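
To complement the verification example above, the following is a minimal sketch of how such a tree might be constructed, assuming the same generic hash() function used throughout this article and a list of objects whose length is a power of two (the variable names obj1 to obj4 are illustrative only).

// Build a Merkle Tree bottom-up; returns the list of levels, root last
function buildTree(objects) {
	let level = objects.map(obj => hash(obj)); // Leaf nodes are hashes of the objects
	let levels = [level];
	while (level.length > 1) {
		let next = [];
		for (let i = 0; i < level.length; i += 2) {
			next.push(hash(level[i], level[i + 1])); // Parent = hash of concatenated children
		}
		levels.push(next);
		level = next;
	}
	return levels;
}

let levels = buildTree([obj1, obj2, obj3, obj4]); // Hypothetical objects
let root = levels[levels.length - 1][0]; // The root node shared with verifiers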

Verkle Trees

Theoretical work is being applied to implementing varieties of Merkle Trees that require significantly smaller proof sizes. These have become known across development teams in Ethereum as ‘Verkle Trees’. Whereas a proof for a Merkle Tree is deterministic and cannot feasibly be forged or admit any chance of error, Verkle Trees aim for a probabilistic proof. Such proofs are not inherently weaker than deterministic varieties so long as there is an infeasibly small chance that an incorrect proof will be mistaken for a correct one. While no concrete working approach has been settled on yet, it is believed that most Verkle Trees will use a polynomial-proof (a proof that is infeasible for a malicious actor to forge, based on computationally hard problems over polynomials). Instead of a design that looks like the illustration above, each node would have $k$ child nodes instead of being limited to 2.

In most instances it is suggested that $k=16$, and as such there would be sixteen child nodes per node. This would effectively mean that if an actor $P$ had 256 objects they wanted to prove knowledge of, they could represent this with a Verkle Tree of height 2 (an identical height to the Merkle Tree that can only prove knowledge of 4 objects). If this idea were applied directly to a Merkle Tree there would be no advantage, as any proof would still require all 15 other child nodes at each level to allow the verifier $V$ to reach the root node. This is where switching to a polynomial-proof becomes important. While the exact architecture of a polynomial-proof cannot be explored here, as it is far beyond the scope of hash-based cryptography, the result of these proofs can be described. Polynomial-proofs would allow for proof sizes of $b \times \log_k(n)$ bits, where $b$ is the size of the proof for each step in the tree and $n$ is the number of objects knowledge can be proven of for a single root node. Therefore, as long as $b$ is not exponentially larger than 256, it can be seen that a large $k$ would allow for extremely small proof sizes for a large $n$.

Assuming that $b=512$ and $k=16$, and that $P$ wants to prove their knowledge of any one of 1,048,576 objects the following can be seen to be true.

$$ \begin{align*} p &= b \times \log_k(n) \\ &= 512 \times \log_{16}(1048576) \\ &= 2560 \ \mathrm{bits} \end{align*} $$

Therefore a proof capable of proving knowledge of any one of more than a million objects can be constructed with the same size as a Merkle proof that can prove knowledge of any one of 1024 objects, while using a relatively small $k$ compared to what might be possible. As long as it is valid to assume that there is an infeasible chance of an incorrect proof being mistaken for a correct one, the Verkle Tree provides a huge potential upgrade to the Merkle Tree. This article will nonetheless continue with the Merkle Tree, considering the theoretical unknowns associated with proposing refined cryptographic schemes for a Verkle Tree design.

Hash-Chain

Verifiability Assumptions

There have been numerous references throughout the article so far to the concept of One-Way-Hash functions allowing for the verification of knowledge of some object $x$ without allowing other actors the same capacity under certain conditions. More detail is provided in this section to depict precisely what these conditions entail. Given an actor $P$ wants to prove their knowledge of some object $x$ to any verifiers $V$ without any other actor being able to achieve the same function, they need to use a general hashing pattern. Given that $V$ knows an immutable public key $p$ that has been certified and originates from the actor $P$, then if $V$ is provided with an object $x$ such that the hash of $x$ is equal to $p$, it is known that only $P$ could have produced $x$ (much like the case in a Merkle Tree). This general structure is the required pattern.

function verifyObject(pubKey, obj) {
	obj = hash(obj);
	if (pubKey == obj) {
		return true;
	}
	return false;
}

let p = 0x2c9...; // Verifier fetches P's public key
let x = 0x349...; // An object claimed to be received from P
verifyObject(p, x); // Returns true

This pattern can be generalised further, where the following can be said. Given that $V$ knows an immutable public key $p$ that has been certified and originates from the actor $P$, then if $V$ is provided with an object $x$ such that the $n^{th}$ hash of $x$ is equal to $p$, it is known that only $P$ could have produced $x$. It is this more generalised pattern that begins to approach the concept of a hash-chain as originally envisioned.

function verifyObj(pubKey, obj, n) {
	for (let i = 0; i < n; i++) {
		obj = hash(obj);
	}
	if (pubKey == obj) {
		return true;
	}
	return false;
}

let p = 0x2c9...; // Verifier fetches P's public key
let x = 0xeb1...; // An object claimed to be received from P
let n = 32; // Number of times x has been hashed to create p
verifyObj(p, x, n); // Returns true

This generalisation may appear redundant, requiring additional computation of hashes to prove knowledge of the same number of objects (one), but the capability provided by this flexibility is revealed through further conceptualisation of a hash-chain design. Therefore, the conditions required for ensuring an environment wherein only the actor $P$ may prove their knowledge of $x$, and as such that they created the object (i.e. integrity), are as follows.

  1. Any verifier $V$ has been able to certify an immutable copy of $p$, a public key that originates from the actor $P$.
  2. That $x$ is an object that can be hashed $n$ times to equal the public key $p$.

These assumptions are not too difficult to enforce, depending on any other additional requirements of the network that communication is occurring on, but these conditions alone are not necessarily ideal. For true cryptographic security, where no malicious actor could forge knowledge of $x$ without knowing $x$, an additional assumption of confidentiality is required.

  3. No malicious actors may be able to discern the value of $x$ until it has been verified immutably.

Maintaining this latter requirement necessitates the existence of some form of encryption, meaning that post-quantum security cannot be guaranteed by hash-chains alone. This requirement is identical for Merkle Trees as well. However, it only needs to be fulfilled where knowing $x$ before it has been verified immutably would allow a malicious actor to forge signatures from $P$. As long as the design of the cryptographic scheme does not rely on this assumption, it is satisfactory to need only the first two conditions from the list above. Exactly what applications can be made of hash-chains is explored further below.

Applications

As depicted in the generalised scenario above, it is possible to construct a public-key $p$ that is able to verify the knowledge of an object $x$ with hash functions. By hashing $x$ a total of $n$ times to produce $p$ (a key generation sketched below), it is possible to use that key to prove that an actor has knowledge of the original object $x$. Further generalisations can be applied, however, to allow for $p$ to prove the knowledge of $n$ objects, in a manner that requires less storage and less computation than a Merkle Tree while being cryptographically safe under the same assumptions.
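
As a minimal sketch of this key generation, assuming a generic hash() function and a randbits() helper for sampling random bits (as also used in the code examples later in this article):

// Generate a hash-chain public key p as the nth hash of a private object x
function genChainKey(n) {
	let x = randbits(256); // The private object, kept secret by P
	let p = x;
	for (let i = 0; i < n; i++) {
		p = hash(p);
	}
	return [x, p]; // p is shared; x and its intermediate hashes serve as proofs
}

let [x, p] = genChainKey(32);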

function verifyObj(pubKey, obj, n) {
	for (let i = 0; i < n; i++) {
		obj = hash(obj);
		if (pubKey == obj) {
			return true;
		}
	}
	return false;
}

let p = 0x2c9...; // Verifier fetches P's public key
let x = 0xeb1...; // An object claimed to be received from P
let n = 32; // Maximum number of times x could have been hashed to create p
verifyObj(p, x, n); // Returns true

Under this second generalisation of the verifyObj() function it can be seen that further adjustments to the design have been applied. Now, instead of simply checking if the $n^{th}$ hash of the object $x$ is equal to $p$, the function checks if the $i^{th}$ hash of the object $x$ is equal to $p$ where $0 < i \leq n$. Hence, where the $i^{th}$ hash of some private-key $s$ is equal to $h_{i-1}$ and the $n^{th}$ hash of $s$ equals $p$, the public-key can be used to prove knowledge of any one of the objects $h_{i-1}$ where $0 < i \leq n$. In more tangible terms, an actor $P$ can prove knowledge of any of the hashes of $s$ that lie between $s$ and $p$ on some “chain”. This generalisation does come with an additional condition and a limitation though. The limitation is that if none of the hashes between $s$ and $p$ carry any particular value in having knowledge of them proven, this design would be of little value, as indicated previously. If knowledge of any of the hashes is valuable, however, one additional condition must be applied for cryptographic security.

  4. If the object $h_j$ has been provided to prove knowledge of said object, the actor $P$ cannot also provide the object $h_k$ where $j < k$ to prove their knowledge of this object.

Satisfying this condition requires that both the actor $P$ and any verifiers $V$ are aware of which objects have previously had knowledge of them proven. Otherwise a malicious actor can acquire the object $h_j$ and generate the object $h_k$ by hashing it the necessary number of times, hence “proving their knowledge” of $h_k$. One alternative way to avoid this problem, or the need to track and store which objects have been previously used, is to use a hash-ladder variation of a hash-chain. Illustrated below is such an example, where two hash-chains are generated from two different private-keys $s_1$ and $s_2$.

Visualisation of a “hash-ladder” variation of the hash-chain - equivalent to two separate hash-chains used together to form the “rungs” of the “ladder”.

Whereas previously the knowledge of $h_j$ would allow a malicious actor to resolve $h_k$ where $j < k$, using the ladder design the malicious actor would never be able to resolve the corresponding object without reversing a hash function. Where the $i^{th}$ rung of the ladder over the private-key pair $(s_1, \ s_2)$ can be denoted as $(h_{0,i-1}, \ h_{1,n+1-i})$, finding the $(i+1)^{th}$ rung requires the object $(h_{0,i}, \ h_{1,n-i})$. The second part of this object, $h_{1,n-i}$, comes before $h_{1,n+1-i}$ on its hash-chain, hence reversal of the hash function would be required.
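
A minimal sketch of how a verifier might check a single rung follows, assuming the public key is the pair formed by hashing each private key $n$ times (the function and variable names here are illustrative only):

// Verify the ith rung (obj1, obj2) of a hash-ladder of length n
function verifyRung(pubKeyPair, obj1, obj2, i, n) {
	// obj1 claims to be the ith hash of s1; hash it n - i more times
	for (let j = 0; j < n - i; j++) {
		obj1 = hash(obj1);
	}
	// obj2 claims to be the (n - i)th hash of s2; hash it i more times
	for (let j = 0; j < i; j++) {
		obj2 = hash(obj2);
	}
	// Both chains must terminate at their respective public keys
	return obj1 == pubKeyPair[0] && obj2 == pubKeyPair[1];
}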

Using the hash-ladder design restricts a malicious actor from forging a proof of knowledge of some additional object, but the design loses its security if it is used by $P$ to prove knowledge of two or more objects. If $P$ provides both $(h_{0,j}, \ h_{1,n-j})$ and $(h_{0,k}, \ h_{1,n-k})$ as objects to prove knowledge of, then any malicious actor can provide an object $(h_{0,r}, \ h_{1,n-r})$ where $j < r < k$ or $j > r > k$ as a forged proof without needing to reverse any hash functions. Restricting each hash-ladder to proving knowledge of a single object is not viable either, as Merkle Trees are far more computationally sensible where significantly more proofs are needed. However, by taking note of this design and its capability to remove consideration of the fourth condition, and combining it with another design that removes the need to consider the third condition, a new powerful signature can be constructed.

Refined-Winternitz One-Time Signature

One-Time Signatures are cryptographic schemes that avoid any reliance on encryption, instead being entirely reliant on hash-based cryptography to guarantee the origin of a message (i.e. integrity). The intention of this scheme is similar to what has previously been discussed with Merkle Trees and hash-chains, but it is a more concrete implementation. While it is especially useful to be able to ensure integrity in network communications with the simplest cryptographic assumptions possible (especially for post-quantum security), there are some drawbacks to the design that must be contended with. As the name suggests, the signature is only usable once before it can no longer safely ensure the integrity of messages. While Few-Time Signatures can be constructed from One-Time Signatures, producing two separate One-Time Signatures is as computationally expensive as a Few-Time Signature that can verify the origins of two messages. In addition, the storage required for these signatures can be excessive.

There are two primary varieties of One-Time Signature: the Lamport OTS and the Winternitz OTS. It will be seen through further inspection that the Lamport OTS is in some sense a specific application of the Winternitz OTS, which will be detailed throughout this section. There are a number of steps that must be completed to construct a valid Winternitz OTS proof.

Proof Construction

  1. Produce a $b/w$ long list of private-keys $s_i$.
  2. Hash each $s_i$ exactly $2^w$ times to produce a list of public-keys $p_i$ that is shared.
  3. Given a message $m$, produce the hash of the message $h(m)$.
  4. Split $h(m)$ into blocks $w$ bits long; each block $h(m)_i$ will correspond with a block $s_i$.
  5. Hash each $s_i$ exactly $h(m)_i$ times to produce $j_i$.
  6. Send $m$ and the list of $j_i$ together to any verifier $V$ to prove the integrity of the message $m$.

Given the above guidelines, an actor $P$ would be able to construct a valid Winternitz OTS signature to prove the integrity of a given message $m$. Later it will be observed exactly how a verifier can use the information provided, namely $(m, \ j, \ p)$, to verify the origins of $m$. There are two free variables $b, \ w$ listed: $b$ is the size of the output of the hash function $h$ in bits, and $w$ is a coefficient that can be adjusted to directly control the computation and storage requirements of the signature. Below is illustrated a common scenario where $b=256$ and $w=8$, meaning that $h(m)$ is split into byte-sized blocks and that each $s_i$ is hashed 256 times to produce $p_i$.

Construction of a Winternitz One-Time Signature on a byte by byte basis.

Using this specific common example, the list of instructions to generate a Winternitz OTS becomes simpler.

  1. Produce a $32$ long list of private-keys $s_i$.
  2. Hash each $s_i$ exactly $256$ times to produce a list of public-keys $p_i$ that is shared.
  3. Given a message $m$, produce the hash of the message $h(m)$.
  4. Split $h(m)$ into blocks $8$ bits long; each block $h(m)_i$ will correspond with a block $s_i$.
  5. Hash each $s_i$ exactly $h(m)_i$ times to produce $j_i$.
  6. Send $m$ and the list of $j_i$ together to any verifier $V$ to prove the integrity of the message $m$.

While choices of $b$ and $w$ may seem to be aesthetic decisions, their values have a large impact on the cost of computation and storage associated with each Winternitz OTS. When the actor $P$ sends their signature blocks $j_i$ alongside their message, the size of each $j_i$ is equal to $b$ bits and there are $b/w$ individual $j_i$ that make up a signature. Overall this means the signature size is $b^2/w$ bits. Therefore, one can either reduce $b$ or increase $w$ to reduce the size of the signature (which is important for reducing the storage required). Reducing $b$ directly reduces the security level of the signature though, and hence this action is not typically recommended (halving $b$ halves the bit-security of the hash, making forgery exponentially easier). Increasing $w$ therefore presents itself as the safest way of decreasing signature sizes; however, it comes with the drawback of increasing the computational cost of producing a signature.
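
As a worked example of the signature-size formula, the common scenario above gives:

$$ \frac{b^2}{w} = \frac{256^2}{8} = 8192 \ \mathrm{bits} $$

matching the storage column of the table below.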

Firstly, the generation of a public-key requires $2^w b/w$ computations of a hash function. This is equivalent to producing a $2^w$ long hash-chain for each of the $b/w$ private-keys $s_i$. Any increase in $w$ leads to a sharp increase in the computation required, as the $2^w$ factor grows much faster than the $1/w$ factor shrinks. Secondly, unless the actor $P$ retains all hash-chains in storage, they will need to perform on average half the computation required for generating a public-key in order to produce a signature. On average the verifier will also need to perform the remaining half of the computation to verify the origins of the message (which equates to roughly doubling the initial computation that must be performed across all actors). Why the verifier must perform these operations will be explored in the next section.
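
For instance, in the same $b=256$, $w=8$ scenario, key generation costs:

$$ 2^w \times \frac{b}{w} = 256 \times 32 = 8192 \ \mathrm{hash \ operations} $$

with roughly half of that (4096 operations on average) repeated by the signer for each signature if the chains are not stored.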

Winternitz coefficient | Hash output size | Hash operations required | Storage required (bits)
2 | 256 | 512 | 32768
4 | 256 | 1024 | 16384
8 | 256 | 8192 | 8192
16 | 256 | 1048576 | 4096
32 | 256 | 34359738368 | 2048
2 | 128 | 256 | 8192
4 | 128 | 512 | 4096
8 | 128 | 4096 | 2048
16 | 128 | 524288 | 1024
32 | 128 | 17179869184 | 512

It can be seen that while doubling the Winternitz coefficient $w$ halves the storage requirements (and halving the hash output size $b$ quarters them), the hash operations required increase exponentially with $w$. It can also be observed that $w=8$ not only appears to be an aesthetically sound design choice, but also appears to provide the best balance between storage and computation requirements. Whereas traditional signature schemes involving public-key cryptography require roughly $b$ bits for a signature and a single hash operation, Winternitz OTSs require substantially more resources. They are as such potentially limited in viability.

Verification

  1. Receive a message $m$ and a list of signature blocks $j_i$ from an actor $P$.
  2. Hash the message to find $h(m)$.
  3. Split $h(m)$ into blocks $h(m)_i$ each $w$ bits long that corresponds with a $j_i$ each.
  4. Hash each signature block $j_i$ exactly $2^w-h(m)_i$ times.
  5. Check if the result of the above steps on each $j_i$ is equal to the corresponding public-keys $p_i$.
  6. If equal, it is true that the actor that knows $s$ was the signer of the message $m$.

It can be seen from the above that any verifier $V$ will on average need to perform $2^{w-1}$ hash operations for each $j_i$, as mentioned previously. This is because each signature block $j_i$ sits at the $h(m)_i^{th}$ position in a $2^w$ long hash-chain, and $h(m)_i$ will on average equal $2^{w-1}$, so both the actor $P$ and the verifier $V$ will need to perform $2^{w-1}$ hash operations per block on average. These considerations are separate from the original public-key generation. Upon performing these required operations, any verifier will be able to deterministically ascertain whether the message $m$ was sent by the actor $P$. As also explored previously, the only requirement of this signature scheme is that it is computationally infeasible for a malicious actor to find an $m_f$ where $h(m)=h(m_f)$.
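
In the running $b=256$, $w=8$ scenario, this averages out as:

$$ \frac{b}{w} \times 2^{w-1} = 32 \times 128 = 4096 \ \mathrm{hash \ operations} $$

for the verifier, mirroring the signer’s average cost.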

Illustrated below is a simple code example of how a valid WOTS may be constructed in JavaScript, given the information and examples provided above. The common scenario of $w=8$ and $b=256$ has been selected, though changing these values requires negligible effort.

function genKeys() {
	let privKey = [];
	let pubKey = [];
	for (let i = 0; i < 32; i++) {
		let obj = randbits(256); // Each private-key block is 256 random bits
		privKey.push(obj);
		for (let j = 0; j < 256; j++) {
			obj = hash(obj);
		}
		pubKey.push(obj); // Each public-key block is the 256th hash of its private block
	}
	return [privKey, pubKey];
}

function genSig(message, privKey) {
	// Assumes hash(message) is accessible here as an array of 256 bits
	let digest = hash(message);
	let signature = [...privKey]; // Copy so the private key is not mutated
	for (let i = 0; i < 32; i++) {
		// Interpret the ith byte of the digest as an integer 0..255
		let count = 0;
		for (let j = 0; j < 8; j++) {
			count = count * 2 + digest[i * 8 + j];
		}
		// Advance the ith chain by count hashes
		for (let j = 0; j < count; j++) {
			signature[i] = hash(signature[i]);
		}
	}
	return signature;
}

function verifySig(message, signature, pubKey) {
	let digest = hash(message); // Again assumed to be an array of 256 bits
	for (let i = 0; i < 32; i++) {
		// Interpret the ith byte of the digest as an integer 0..255
		let count = 0;
		for (let j = 0; j < 8; j++) {
			count = count * 2 + digest[i * 8 + j];
		}
		// Complete the remaining 256 - count steps of the chain
		let node = signature[i];
		for (let j = 0; j < 256 - count; j++) {
			node = hash(node);
		}
		if (node != pubKey[i]) {
			return false;
		}
	}
	return true;
}
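
Put together, a hypothetical usage of these functions might look as follows (the message contents are illustrative only):

let message = "an example message"; // Hypothetical message to be signed
let [privKey, pubKey] = genKeys(); // pubKey is certified and shared in advance
let signature = genSig(message, privKey);
verifySig(message, signature, pubKey); // Returns true for an untampered message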

Refinement

Taking this simplified adaptation of a Winternitz OTS alone, however, leaves it vulnerable to forgery attacks by malicious actors. Without any further checks or algorithms to protect against these occurrences, a malicious actor could intercept the message $m$ and create their own message $m_f$ which appears to have been signed by $P$. Where the actor $P$ signs each $j_i$ by finding the $h(m)_i^{th}$ hash of the hash-chain between $s_i$ and $p_i$, and it has been identified previously that any malicious actor can resolve $j_k$ in a hash-chain given that $h(m)_k > h(m)_i$, the vulnerability becomes clearer. If $P$ has signed a message wherein $h(m)_i=92$, then any other actor could use this information to generate a signature block for a message wherein $92 \leq h(m)_k \leq 255$. There is a high likelihood that one of these 163 other possible signature blocks could allow a malicious actor to forge a “useful” message (i.e. sign a virus as having been sent by $P$). On average, given a message $m$ and the signatures $j_i$, a malicious actor would be able to forge up to $2^{224}$ possible messages $m_f$; put differently, by hashing random candidate messages a malicious actor would expect to find a forgeable one after only around $2^{32}$ attempts, which is computationally feasible.
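
To see where the $2^{224}$ figure comes from: each of the $b/w = 32$ blocks has on average $2^{w-1} = 128$ equal-or-larger values available to an attacker, so the number of forgeable digests is on average approximately:

$$ \left(2^{w-1}\right)^{b/w} = 128^{32} = 2^{224} $$

which is a fraction $2^{224}/2^{256} = 2^{-32}$ of all possible digests.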

Therefore, additional checks and algorithms are needed to prevent this possibility. To this end, some of the lessons raised in the previous discussion of hash-chains can be used to produce a more complete Winternitz OTS in which no signatures can be forged (as long as the OTS is only used once). If a hash-ladder is constructed such that only one of the “rungs” of the “ladder” is used (in this case, for generating a signature block $j_i$), then it is impossible to discover any of the other rungs without reversing a hash function. Given the pattern $(h(s_i)_{h(m)_i}, \ h(s_g)_{2^w - h(m)_i})$, the rung wherein $h(m)_i=92$ would be $(h(s_i)_{92}, \ h(s_g)_{164})$. If a malicious actor wanted to generate a signature for a rung wherein $h(m)_k=93$, they would need to calculate $(h(s_i)_{93}, \ h(s_g)_{163})$ - a computationally infeasible task.

Incorporating the exact hash-ladder design illustrated previously into the simplified version of the Winternitz OTS put forth above, a more refined design can be visualised. While a necessary element, its incorporation results in exactly double the hash operations required for both public-key generation and the construction/verification of signatures, and double the storage required as well.

function genKeys() {
	let privKey = [];
	let pubKey = [];
	for (let i = 0; i < 32; i++) {
		// Each block now has two private keys, one per side of the ladder
		let obj = randbits(256);
		let objTwo = randbits(256);
		privKey.push([obj, objTwo]);
		for (let j = 0; j < 256; j++) {
			obj = hash(obj);
			objTwo = hash(objTwo);
		}
		pubKey.push([obj, objTwo]);
	}
	return [privKey, pubKey];
}

function genSig(message, privKey) {
	let digest = hash(message); // Assumed accessible as an array of 256 bits
	let signature = privKey.map(pair => [...pair]); // Copy to avoid mutating the private key
	for (let i = 0; i < 32; i++) {
		// Interpret the ith byte of the digest as an integer 0..255
		let count = 0;
		for (let j = 0; j < 8; j++) {
			count = count * 2 + digest[i * 8 + j];
		}
		// First chain is advanced count times, second chain 256 - count times
		for (let j = 0; j < count; j++) {
			signature[i][0] = hash(signature[i][0]);
		}
		for (let j = 0; j < 256 - count; j++) {
			signature[i][1] = hash(signature[i][1]);
		}
	}
	return signature;
}

function verifySig(message, signature, pubKey) {
	let digest = hash(message); // Assumed accessible as an array of 256 bits
	for (let i = 0; i < 32; i++) {
		// Interpret the ith byte of the digest as an integer 0..255
		let count = 0;
		for (let j = 0; j < 8; j++) {
			count = count * 2 + digest[i * 8 + j];
		}
		// Complete the remaining steps of both chains
		let nodeOne = signature[i][0];
		let nodeTwo = signature[i][1];
		for (let j = 0; j < 256 - count; j++) {
			nodeOne = hash(nodeOne);
		}
		for (let j = 0; j < count; j++) {
			nodeTwo = hash(nodeTwo);
		}
		if (nodeOne != pubKey[i][0] || nodeTwo != pubKey[i][1]) {
			return false;
		}
	}
	return true;
}

Lamport One-Time Signature

One of the primary alternatives to generating a Winternitz OTS is to generate a Lamport OTS. An explanation is presented below as to how to produce and verify a proof through a Lamport OTS, with a code sketch following the verification steps. In addition, it will be shown how a Lamport OTS is essentially a specific version of the Winternitz OTS where $w=1$.

Construction

  1. Produce a $512$ long list of private-keys $s_i$.
  2. Hash each $s_i$ exactly $1$ time to produce a list of public-keys $p_i$ that is shared.
  3. Given a message $m$, produce the hash of the message $h(m)$.
  4. Split $h(m)$ into blocks $1$ bit long; each block $h(m)_i$ will correspond with either a block $s_i$ or $s_{i+256}$.
  5. If $h(m)_i=0$ then $j_i=s_i$, else if $h(m)_i=1$ then $j_i=s_{i+256}$.
  6. Send $m$ and the list of $j_i$ together to any verifier $V$ to prove the integrity of the message $m$.

Verification

  1. Receive a message $m$ and a list of signature blocks $j_i$ from an actor $P$.
  2. Hash the message to find $h(m)$.
  3. Split $h(m)$ into blocks $h(m)_i$ each $1$ bit long that corresponds with a $j_i$ each.
  4. Hash each signature block $j_i$ exactly $1$ time.
  5. Check if the result of the above steps on each $j_i$ is equal to the corresponding public-keys $p_i$ or $p_{i+256}$ (if $h(m)_i=0$ or $h(m)_i=1$ respectively).
  6. If equal, it is true that the actor that knows $s$ was the signer of the message $m$.
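
As referenced above, the following is a minimal sketch of the classic Lamport OTS under the same assumptions as the earlier code examples (a generic hash() function, a randbits() helper, and a message digest accessible as an array of 256 bits):

function genLamportKeys() {
	let privKey = [];
	let pubKey = [];
	for (let i = 0; i < 512; i++) {
		let s = randbits(256);
		privKey.push(s);
		pubKey.push(hash(s)); // Each key is hashed exactly once
	}
	return [privKey, pubKey];
}

function genLamportSig(message, privKey) {
	let digest = hash(message);
	let signature = [];
	for (let i = 0; i < 256; i++) {
		// Reveal s_i for a 0 bit, s_{i+256} for a 1 bit
		signature.push(digest[i] == 0 ? privKey[i] : privKey[i + 256]);
	}
	return signature;
}

function verifyLamportSig(message, signature, pubKey) {
	let digest = hash(message);
	for (let i = 0; i < 256; i++) {
		// Compare against p_i or p_{i+256} depending on the bit
		let expected = digest[i] == 0 ? pubKey[i] : pubKey[i + 256];
		if (hash(signature[i]) != expected) {
			return false;
		}
	}
	return true;
}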

Refinement

By adjusting the above steps into a more generalised version compatible with the previous implementations of the Winternitz OTS, the storage requirements remain the same, but the number of hash operations required doubles (though the number of hash operations required for $w=1$ is negligible either way).

Construction

  1. Produce two $256$ long lists of private-keys $s_i$ and $s_g$.
  2. Hash each $s_i$ and $s_g$ exactly $2$ times to produce lists of public-keys $p_i$ and $p_g$ that are shared.
  3. Given a message $m$, produce the hash of the message $h(m)$.
  4. Split $h(m)$ into blocks $1$ bit long; each block $h(m)_i$ will correspond with blocks $s_i$ and $s_g$.
  5. Hash each $s_i$ exactly $h(m)_i$ times and each $s_g$ exactly $2-h(m)_i$ times to produce $j_i$ and $j_g$.
  6. Send $m$ and the list of $j_i$ and $j_g$ together to any verifier $V$ to prove the integrity of the message $m$.

Verification

  1. Receive a message $m$ and a list of signature blocks $j_i$ and $j_g$ from an actor $P$.
  2. Hash the message to find $h(m)$.
  3. Split $h(m)$ into blocks $h(m)_i$ each $1$ bit long that corresponds with a $j_i$ each.
  4. Hash each signature block $j_i$ exactly $2-h(m)_i$ times and each $j_g$ exactly $h(m)_i$ times.
  5. Check if the result of the above steps on each $j_i$ and $j_g$ is equal to the corresponding public-keys $p_i$ and $p_g$.
  6. If equal, it is true that the actor that knows $s$ was the signer of the message $m$.

It can be seen that using a $w=1$ generalisation of the Lamport OTS achieves the same effect as the specialised version, at the cost of requiring double the hash operations. However, $w=2$ relies on the same number of hash operations while halving the required storage. Considering the affordability of computing hash functions in comparison to storage across a network (especially in blockchain networks), any change that can halve the required storage for only double the hash operations is a positive one. Attempting to apply the Lamport specialisation to other Winternitz coefficients also does not provide opportune results in the trade-off between computational and storage requirements. For instance, applying the specialisation to $w=8$ would allow for only $32$ hash operations (one for each $8$ bits), but would require $2^{21}$ bits worth of storage for a public-key (and private-key by extension), as derived below. If there were a computational development wherein storage was significantly cheaper to acquire than the computation to perform a hash function, then these specialisations would become useful. The below table focuses on the hash operations required for generating a signature, and not the computational resources needed to create a public-key in the first place.
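
The $2^{21}$ storage figure follows from storing one $b$-bit key for each of the $2^w$ possible values of each of the $b/w$ blocks:

$$ 2^w \times \frac{b}{w} \times b = 256 \times 32 \times 256 = 2^{21} \ \mathrm{bits} $$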

Winternitz coefficient | Hash output size | Hash operations required | Storage required (bits)
2 | 256 | 128 | 131072
4 | 256 | 64 | 262144
8 | 256 | 32 | 2097152
16 | 256 | 16 | 268435456
32 | 256 | 8 | 8796093022208
2 | 128 | 64 | 32768
4 | 128 | 32 | 65536
8 | 128 | 16 | 524288
16 | 128 | 8 | 67108864
32 | 128 | 4 | 2199023255552

One manner in which these storage costs could be managed more efficiently is through the use of Merkle Trees. While the concern of communicating such large quantities of data cannot be entirely avoided, it can be alleviated to a degree. Considering the Lamport specialisation for $w=8$, the public-key could be recorded as the root of a Merkle Tree built over all $2^{13}$ key blocks. There would be exactly $2^{13}$ hashed values forming the leaves of a Merkle Tree of height $13$ (meaning roughly $2^{14}$ hash operations would need to be performed to generate the public-key). As such, the public-key to be shared would only be $256$ bits in size. Any verifier verifying a given message $m$ against this public-key would need to receive roughly $2^{16}$ bits of data instead of $2^{21}$ bits, which is a significant reduction but still inefficient for network communications. Use of 16-ary Verkle Trees (where $k=16$) could reduce the communication needed further, to roughly $2^{12}$ bits assuming an ideal implementation of such a scheme, which might begin to approach a viable system.

Construction of a Lamport One-Time Signature on a bit by bit basis ($w=2$).

Conclusion

As can be seen, there are many levels of consideration in the application of hash-based cryptography to network communications such as blockchains. Variations of not only Merkle/Verkle Trees, but also hash-chains and one-time signatures, can be utilised to perform a variety of tasks under simplified cryptographic assumptions. While their collective costs in computational resources are currently too great for primary consideration, the advent of quantum computation will likely force the continued refinement of these simplified schemes. Without being able to rely on such cryptographic assumptions, integrity would be impossible to maintain in a post-quantum environment.

However, importance must be placed on designing more efficient forms of the methods of ensuring integrity discussed here. Advances in storage requirements, developments of Verkle Trees, or Fully-Homomorphic Hash functions are some ways in which these efficiencies could be achieved. While the first two of these have been partially discussed, Fully-Homomorphic Hash functions have not been covered in the same detail. This is because all current designs are considered highly theoretical, and the exact future implementation of this primitive may have significant impacts on the cryptographic assumptions of the hash-chains or one-time signatures that would rely on it.

With in-depth research and further analysis of these fields it is possible that, much like blockchain, more of hash-based cryptography can be brought into viability to advance the field. The more reliant the world becomes on cryptography, the more essential this work becomes.