Real-Time Block Rate Targeting

A proof-of-work blockchain uses a retargeting algorithm, also termed a difficulty adjustment algorithm, to manage the rate of block production in the presence of changing hashrate. To derive the parameters that guide the search for the next block, nearly all such algorithms rely on averages of past inter-block time observations, as measured by on-chain timestamps. We are motivated to seek better responsiveness to changing hashrate, while improving stability of the block production rate and retaining the progress-free property of mining. We describe a class of retargeting algorithms for which the sole inter-block time input is that of the block being searched for, and whose response is nonlinear in that time. We discuss how these algorithms allow the other consensus rules that govern allowable timestamps to be tightened, which may improve the blockchain’s effectiveness as a time-stamping machine.

1. Introduction

1.1. Traditional retargeting algorithm-A traditional retargeting algorithm sets a hash target G_n that defines a valid cryptographic hash of block n. The hash is required to be no larger than the target:

    H(block n) ≤ G_n    (1)

which gives rise to the block's proof-of-work. 1 G_n is a function of data found in earlier blocks and is committed to as part of the header of block n. It is set so as to expect future inter-block times of a desired value T, given the hashrate estimate derived from past inter-block times. In the case of Bitcoin, it is recalculated every 2016 blocks and left unchanged otherwise. Omitting some details inessential to our examination:

    G_n = G_{n−1} · (s_{n−1} − s_{n−2016}) / (2016 T),  n mod 2016 = 0
    G_n = G_{n−1},  otherwise    (2)

where s_n is the committed timestamp of block n and T (= 600 seconds) is the desired inter-block time. G_n is known as soon as block n − 1 is published. Equation 2 makes use of the scaling property of the exponential distribution. 2

Post-retarget, assuming hashrate continues at its average, a valid block n will ideally be found after an inter-block time t_n that is exponentially distributed with rate

    λ = 1/T    (3)

which describes a homogeneous Poisson process. Of course, hashrate will not be constant on the blockchain of interest and, as a random variable, t_n exhibits sample variance, so retargeting continues ad infinitum. Further, the act of retargeting itself introduces inhomogeneity. 3

Equation 2 contains a well-known error due to Satoshi which does not affect our analysis, and which we will not refer to again: the term s_{n−2016} should have been s_{n−2017}.
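The window-based rule of Eq. 2 can be sketched as follows. Names are illustrative; this is not the code of any actual client.

```python
# Illustrative sketch of the traditional window-based retarget (Eq. 2);
# names are hypothetical, not taken from any real implementation.

RETARGET_INTERVAL = 2016
T = 600.0  # desired inter-block time, seconds

def next_target(prev_target: float, timestamps: list[float], n: int) -> float:
    """Return the hash target G_n from the committed timestamps s_0..s_{n-1}."""
    if n % RETARGET_INTERVAL != 0:
        return prev_target  # target is unchanged between retargets
    # Ratio of actual elapsed time over the window to the expected 2016 * T.
    elapsed = timestamps[n - 1] - timestamps[n - RETARGET_INTERVAL]
    return prev_target * elapsed / (RETARGET_INTERVAL * T)

# If the window took about half the expected time (hashrate roughly doubled),
# the target shrinks by about half, doubling difficulty.
ts = [i * (T / 2) for i in range(2016)]
assert next_target(1.0, ts, 2016) < 0.51
assert next_target(1.0, ts, 2017) == 1.0
```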
1.2. Unresponsiveness of the traditional algorithm-The observed inter-block time t n has a standard deviation as large as its mean, which likely drove the design decision to aggregate 2016 consecutive samples before retargeting Bitcoin.
That decision reduced the coefficient of variation from 100% to 2.2%, but has resulted in the long-term block production rate overshooting its target by more than 5% in the environment of ever-increasing hashrate.
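The 2.2% figure follows directly from averaging: the mean of N i.i.d. exponential samples has a coefficient of variation of 1/√N. A quick check:

```python
# CV of a single exponential inter-block time is 100% (std = mean);
# averaging N = 2016 i.i.d. samples reduces it by a factor of sqrt(N).
import math

N = 2016
cv_window = 1.0 / math.sqrt(N)
assert abs(cv_window - 0.022) < 0.001  # about 2.2%
```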
More recently, Bitcoin has shared the pool of global SHA256 hashrate with other blockchains such as Bitcoin Cash, which have been assigned value in the eyes of the market. As miners are free to allocate their hashrate, a dynamic situation has emerged wherein miners follow diverse strategies to deploy hashrate among the blockchains where it can be applied.
In this context, the Bitcoin retargeting algorithm has a weakness. It reacts to hashrate changes only once every two weeks, while relative changes in the prices of the tokens (BTC, BCH, etc.) often occur in seconds. Such changes directly influence relative profitability and therefore miners' allocation decisions.
We do not attempt to survey the altcoin landscape or its myriad retargeting algorithms, but the phenomenon of coin-hopping is nothing new to its inhabitants. Swings in the varied types of altcoin hashrate have been dramatic enough to end the existence of an altcoin as its miners suddenly depart and blocks are produced so slowly that users lose patience altogether. 4,5 Using a dynamic economic model, Noda et al. find that the traditional retargeting algorithm does not converge after a severe price shock when hashrate is allowed to vary depending on expected dollar reward value. 6 The Bitcoin Cash algorithm, with its moving 144-block window, reacts more quickly to relative price changes, adjusts its own block production, and mitigates the profitability disparity, to the benefit of both blockchains. It is found to converge slowly, but exhibits a worrisome resonant hashrate oscillation with a period of 144 blocks. Nevertheless, we see the advantage of responsiveness and are prompted to ask how we can increase it.
1.3. The memoryless property vs. progress-freedom-The exponential is the unique memoryless statistical distribution (Ross, chapter 4). 7 Being memoryless is the property that the likelihood of observing any particular wall-clock time interval from now until the appearance of block n is independent of the time passed since the appearance of block n − 1, whether that time is zero, 48 seconds, or any other value.
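The memoryless property can be checked numerically from the exponential survival function S(t) = e^(−λt): the conditional chance of surviving past s + t given survival past s equals the unconditional chance of surviving past t. A minimal sketch:

```python
# Numerical check of memorylessness: P(X > s + t | X > s) = P(X > t)
# for an exponential X, regardless of the time s already elapsed.
import math

def survival(t: float, rate: float) -> float:
    return math.exp(-rate * t)

rate = 1 / 600.0  # one block per 600 s on average
for s in (0.0, 48.0, 3000.0):
    conditional = survival(s + 120.0, rate) / survival(s, rate)
    assert abs(conditional - survival(120.0, rate)) < 1e-12
```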
Progress-freedom is distinct from being memoryless. We define progress-freedom to be the property that a miner's chance of finding a block at any moment is independent of how much work he has already done since the appearance of block n − 1.
Progress-freedom is a critical property for a proof-of-work blockchain because it results in a miner's chance to find the next block being proportional to his hashrate. If a miner were to somehow get credit for progressive work done toward a block solution, larger miners would increase their chances of finding the block with each passing second, at the expense of smaller miners, creating an incentive to centralize mining.
Progress-freedom is a property of the iterated cryptographic hash search, whose trials are independent of each other, even if the hash target has a time dependency. We describe a system that remains progress-free, but aims to be more responsive to hashrate changes, to exhibit more stable inter-block times, and to be more resistant to target and timestamp manipulation attacks, while necessarily discarding the memoryless property.

2. Engineering the Block Production Rate
Consider a more general block production rate function

    λ(t) = a k t^{k−1}    (5)

Equation 5 contemplates a system where, during the search for block n, the instantaneous block production rate is not constant, but rather a monomial function of the time t since block n − 1. Equation 3 is a special case (a = 1/T, k = 1) of Equation 5. By "time t" we refer to idealized global wall-clock time, and we assume for the moment that timestamps are always recorded as the global time of the respective event. The accuracy of this assumption depends on the incentive framework we create. This is discussed in Section 5.
For the simple reason that it depends only on time, and not on accumulated work, Equation 5 is progress-free. Time-dependent rate functions are well studied in statistics and are also known as hazard rate functions (the hazard is often something like a machine failing).
We set k > 1, so that with each passing second, the instantaneous block production rate increases. However, this even more strongly affects the likelihood that the block has already been found, which is expressed by the cumulative distribution function F(t). These factors are illustrated in Fig. 1. The net result is a quasi-symmetrical distribution of inter-block times (Fig. 2), whose shape is narrower with higher values of k. We are free to choose k, but we also require the mean inter-block time to be T. Therefore, we are not free to choose a; we derive it. It turns out that the hazard rate function uniquely defines the distribution of observed times by the relation:

    F(t) = 1 − exp(−∫_0^t λ(s) ds)    (6)

Substituting Eq. 5 into Eq. 6 and solving the integral gives the cumulative distribution function

    F(t) = 1 − e^{−a t^k}    (7)

and probability density function (= F′(t))

    f(t) = a k t^{k−1} e^{−a t^k}    (8)

which is a Weibull distribution (notably, Weibull reduces to exponential when k = 1) (Ross, chapter 5.2). 7 The expected value, or mean, inter-block time E[t_n] for block n is

    E[t_n] = a^{−1/k} Γ(1 + 1/k)    (9)

Setting Eq. 9 equal to T and solving for a gives the value of a in terms of the constants k and T:

    a = (Γ(1 + 1/k) / T)^k    (10)
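These relations are easy to exercise numerically. A sketch, assuming the Weibull forms above (Python's `math.gamma` is the Γ function):

```python
# Sketch of the Weibull relations: derive a from k and T so that the mean
# inter-block time equals T, then confirm the round trip.
import math

def derive_a(k: float, T: float) -> float:
    """a = (Gamma(1 + 1/k) / T)**k, so that the mean is T."""
    return (math.gamma(1 + 1 / k) / T) ** k

def mean_time(a: float, k: float) -> float:
    """E[t] = a**(-1/k) * Gamma(1 + 1/k)."""
    return a ** (-1 / k) * math.gamma(1 + 1 / k)

def cdf(t: float, a: float, k: float) -> float:
    """F(t) = 1 - exp(-a * t**k); reduces to the exponential CDF at k = 1."""
    return 1.0 - math.exp(-a * t ** k)

k, T = 6.0, 600.0
a = derive_a(k, T)
assert abs(mean_time(a, k) - T) < 1e-9
# k = 1, a = 1/T recovers the exponential: F(T) = 1 - e^{-1}.
assert abs(cdf(600.0, 1 / 600.0, 1.0) - (1 - math.exp(-1))) < 1e-12
```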

3. Real-Time Targeting
To keep the blockchain verifiable, t_n must be the difference in committed timestamps between the block n being searched for and the previous block:

    t_n = s_n − s_{n−1}

The hash target G_{n−1} of the previous block, starting with the genesis block, is known. This committed target continues to be interpreted according to Eq. 1, namely as the maximum hash that would be valid if the search for block n were carried out using the traditional algorithm.
But we introduce changes to the actual search algorithm, described in the next three subsections.
3.1. Subtargeting-To conduct the block search using the increasing block production rate of Eq. 5, we define a subtarget at time t since block n − 1, such that

    g(t) / G_{n−1} = λ(t) / λ    (11)

whereby we have applied the desired block production rate, and corrected for the traditional production rate λ. Combining Eqs. 3, 5, 10, and 11 gives

    g(t) = G_{n−1} k Γ(1 + 1/k)^k (t/T)^{k−1}    (12)

The proof-of-work validity constraint / mining target depends on t and is given by

    H(block n) ≤ g(t)    (13)

3.2. Modified retargeting-The subtargeting process results in a block found at time t_n which is distributed as shown in Fig. 2. The quasi-symmetry and low variance compared to exponential (the standard deviation is 81% lower for k = 6) are key. To a chosen degree, and before the next block's base target is even computed, t_n is forced toward its mean by subtargeting. In other words, the observed inter-block time reacts less strongly to changed hashrate than it would under the traditional algorithm.
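The spread reduction can be checked from the Weibull moments: with the mean pinned at T, the standard deviation relative to the exponential's (which equals T) is √(Γ(1 + 2/k)/Γ(1 + 1/k)² − 1). A sketch:

```python
# Check of the spread reduction for the subtargeted inter-block time.
# With the mean fixed at T, the Weibull standard deviation divided by the
# exponential's standard deviation (= T) depends only on k.
import math

def relative_std(k: float) -> float:
    return math.sqrt(math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1)

assert abs(relative_std(1.0) - 1.0) < 1e-12  # k = 1 is the exponential itself
assert abs(relative_std(6.0) - 0.19) < 0.01  # roughly 81% lower for k = 6
```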
While it reduces the incidence of overly short or long inter-block times, the less reactive inter-block time works against us when retargeting for the next block. For that operation, we need to amplify the deviation from the mean back to its traditional equivalent.
To achieve this transformation is straightforward given the form of the function we chose for Eq. 5. At each moment, we compute F(t) from Eq. 7, and plug it into the inverse CDF of the exponential distribution, giving an equivalent time u in that space:

    u = −T ln(1 − F(t))    (14)

We can see in Fig. 2 that stretching the RTT distribution of t out to the equivalent exponentially distributed time u will make a short inter-block time shorter, and a long inter-block time longer. The amplified reaction leads to noisier, but appropriately reactive, difficulty.
When block n is found (or found prospectively in a block template produced for a miner), we compute u_n directly from t_n by combining Eqs. 3, 7, 10, and 12:

    u_n = Γ(1 + 1/k)^k t_n^k / T^{k−1}    (15)

u_n is the inter-block time that would have been observed if the traditional retargeting algorithm were being used. Each block header commits to a homogeneous-Poisson-equivalent target

    G_n = G_{n−1} u_n / T    (16)

which is analogous to Eq. 2, but which
• references the timestamp in block n itself: G_n ← u_n ← t_n ← s_n.
• uses only one inter-block time (t_n) as input.
• is recomputed with every block.
• is not used to validate the hash of block n (although the target itself is validated), but rather block n + 1 (per Section 3.1). Note that the subtarget g(t) does not influence G_n except through its role in the discovery of t_n.
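Putting the pieces together: u_n has the closed form Γ(1 + 1/k)^k · t_n^k / T^{k−1}, and the committed target scales as G_n = G_{n−1} · u_n / T. A sketch (illustrative names) showing how deviations from the mean are amplified:

```python
# Sketch of the per-block retarget: map the observed t_n to its
# homogeneous-Poisson-equivalent u_n, then scale the committed target.
import math

def equivalent_time(t_n: float, k: float, T: float) -> float:
    """u_n = Gamma(1 + 1/k)**k * t_n**k / T**(k-1)."""
    return math.gamma(1 + 1 / k) ** k * t_n ** k / T ** (k - 1)

def next_target(G_prev: float, u_n: float, T: float) -> float:
    """G_n = G_{n-1} * u_n / T: easier after a slow block, harder after a fast one."""
    return G_prev * u_n / T

k, T = 6.0, 600.0
fast = equivalent_time(300.0, k, T)   # a fast block maps to a much shorter u_n
slow = equivalent_time(1200.0, k, T)  # a slow block maps to a much longer u_n
assert fast < 300.0 and slow > 1200.0
assert next_target(1.0, fast, T) < 1.0 < next_target(1.0, slow, T)
```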
Remarkably, there is no averaging whatsoever of past inter-block times. Instead, the subtargeting process is a kind of auction, which starts at a level difficult for the network to achieve, and continually becomes easier until the network achieves it with the hashrate available. For example, with k = 4 and T = 600s, producing a block with t_n = 1s is some 80 million times more difficult than in the traditional system.
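The 80-million figure can be checked against the subtarget's closed form: the difficulty ratio at time t is G_{n−1}/g(t) = 1 / (k Γ(1 + 1/k)^k (t/T)^{k−1}):

```python
# Check of the auction figure: with k = 4 and T = 600 s, producing a block
# at t = 1 s is roughly 8e7 times harder than under the traditional target.
import math

k, T, t = 4.0, 600.0, 1.0
difficulty_ratio = 1.0 / (k * math.gamma(1 + 1 / k) ** k * (t / T) ** (k - 1))
assert 7.5e7 < difficulty_ratio < 8.5e7
```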
3.3. Required adjustment to T-We stated in Section 1.1 that the process of retargeting itself makes the global block production process inhomogeneous, 3 even without our nonlinear construction.
Rosenfeld finds that with constant hashrate, the traditional Bitcoin retargeting algorithm produces a mean inter-block time that is 1/2015 longer than the intended 600 seconds. 8 Since we fully retarget every block, this effect is much larger and cannot be ignored. We show in Appendix 7 that to achieve a true mean inter-block time of T, the adjusted value

    T̂ = T / Γ(1 − 1/k)    (18)

must be used in place of T by software implementing the algorithm, just as though it were the desired inter-block time.
This adjustment is sufficient to produce an observed mean of T under constant hashrate. Variable or trending hashrate will produce a different mean, but this is true of every retargeting algorithm.
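A Monte Carlo sketch of the effect, under a simplified constant-hashrate model with illustrative names (not a full chain simulation): retargeting every block using the adjusted value T̂ = T / Γ(1 − 1/k) as the "desired" time yields an observed mean close to the true target T.

```python
# Monte Carlo sketch: per-block retargeting at constant hashrate, with
# T_hat = T / Gamma(1 - 1/k) standing in for the desired time, recovers a
# true mean inter-block time close to T. The model is illustrative only.
import math, random

def simulate_mean_time(T_hat: float, k: float, blocks: int, seed: int = 1) -> float:
    rng = random.Random(seed)
    a_nom = (math.gamma(1 + 1 / k) / T_hat) ** k  # protocol's a, from T_hat
    g_rel = 1.0   # committed target relative to the constant-hashrate equilibrium
    total = 0.0
    for _ in range(blocks):
        e = rng.expovariate(1.0)               # Exp(1) mining threshold
        t = (e / (a_nom * g_rel)) ** (1 / k)   # Weibull-distributed search time
        u = T_hat * a_nom * t ** k             # Poisson-equivalent time
        g_rel *= u / T_hat                     # per-block retarget
        total += t
    return total / blocks

k, T = 6.0, 600.0
T_hat = T / math.gamma(1 - 1 / k)
mean = simulate_mean_time(T_hat, k, 200_000)
assert abs(mean - T) / T < 0.02  # within 2% of the desired 600 s
```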

4. Parent Chainwork
Since a block's chainwork now depends sensitively on its inter-block time t_n, the block comparator is changed to reference the chainwork of the parent, block n − 1, rather than the chainwork of the block itself.
This change ensures that blocks with the same parent have the same chainwork, and reduces the opportunity for miners to conduct inexpensive "orphaning" attacks by starting a competing chain with a block whose timestamp is one second earlier than another miner's block.

5. Tighter Timestamp Validity Rules
In Section 3.2, we mentioned that it is very difficult to mine a block with very small t_n. It is evident, too, that it is impossible to mine a block with t_n = 0, as Eq. 5 makes plain that the block production rate is zero in that case. t_n < 0 is just as nonsensical.
Consequently, it is natural that t_n > 0 be enforced as a consensus rule. Furthermore, once negative inter-block timestamp differences are disallowed, it may not be possible to recover from a situation where timestamps get ahead of the real-world clock. So it is very natural that we also tighten the maximum allowable timestamp to "now" instead of Bitcoin's "now + 2 hours." In any case, we doubt that miners will be eager to accept a peer's "block from the future" that was easier to find than allowed right now.
How well the committed timestamps match actual time is an emergent result of the system, but since there is strong incentive to choose the latest possible non-future timestamp, i.e. a timestamp of "now," we have reason to be optimistic.
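The tightened rules reduce to a simple per-block check. A hypothetical sketch (the function name and wall-clock parameter are assumptions):

```python
# Hypothetical sketch of the tightened timestamp consensus rules:
# strictly positive inter-block time and no timestamps from the future.
def timestamp_valid(s_n: float, s_prev: float, now: float) -> bool:
    t_n = s_n - s_prev
    return t_n > 0 and s_n <= now  # replaces Bitcoin's "now + 2 hours" slack

assert timestamp_valid(1000.0, 400.0, now=1000.0)
assert not timestamp_valid(400.0, 400.0, now=1000.0)   # t_n = 0 rejected
assert not timestamp_valid(1200.0, 400.0, now=1000.0)  # future block rejected
```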

6. Conclusion
We have described Real-Time Targeting (RTT), a class of algorithms for retargeting the block production rate of a proof-of-work blockchain.
By changing the hash target during the search for a block, it is no longer necessary to mine a block at the old difficulty in order to change to a new difficulty. Instead, every block resets to a new difficulty that immediately reflects the network hashrate estimate gathered during the search for that block itself, with no averaging of past inter-block times.
We found that with this algorithm, tighter timestamp validity rules are necessary, and that miners have incentive to use accurate timestamps.
Our aim in the preceding sections has been to promote understanding of these algorithms when they are seen in operation. We have not attempted to formally specify a particular algorithm, to exhaustively quantify its responsiveness or overall performance, nor to compare its performance to other algorithms. These pursuits remain for the future.