Capacity Metrics
This document has been assembled to share metrics for Bitcoin SV, with the aim to harmonize the values of metrics that are used in published materials.
One of the primary goals of Bitcoin SV is to “scale the block size”, but what does this mean? What we actually mean is “scaling capacity”.
The capacity of the Bitcoin network has been constrained for many years due to the decisions made by the developers of the primary Bitcoin node. The particular limit that constrained the capacity was the “maximum block size” which was 1 Megabyte. This is why the discussion has focused on block size, but this is not really the right metric.
Constraints
There are two particular operations that primarily constrain the processing time of a transaction.
- UTXO lookups (a database lookup)
- Signature validations (very CPU-intensive)
When we talk about the capacity of a node to process transactions, we are really talking about how many of these operations it can handle per second. Both operations depend on the number of inputs in a transaction, which is largely unrelated to the transaction's overall size. However, because most transactions contain a fairly similar number of inputs, it is reasonable to approximate this capacity as "transactions per second", a measurement that is more easily understood.
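As a rough illustration of this point, the sketch below models per-transaction validation cost as a function of input count. The relative per-operation costs are made-up placeholders for illustration, not measured figures.

```python
# A minimal sketch, assuming illustrative (made-up) relative costs per operation;
# real costs depend on hardware, database layout, and node implementation.
UTXO_LOOKUP_COST = 1.0   # hypothetical relative cost of one UTXO database lookup
SIG_CHECK_COST = 5.0     # hypothetical relative cost of one signature validation

def tx_validation_cost(num_inputs: int) -> float:
    """Each input needs one UTXO lookup and (typically) one signature check,
    so validation work grows with the input count, not the byte size."""
    return num_inputs * (UTXO_LOOKUP_COST + SIG_CHECK_COST)

# A large 1-input data transaction costs less to validate than a small
# 3-input payment transaction, despite being many times larger in bytes.
print(tx_validation_cost(1))   # 6.0
print(tx_validation_cost(3))   # 18.0
```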
There are other constraints, such as network propagation and block building time, that have constrained Bitcoin SV over the last year, but those constraints have already been largely eliminated.
Therefore, the most suitable metric for measuring the capacity of Bitcoin SV is “the number of transactions per second” (or “tps”). As we continue removing the limits on the size of blocks that can be processed by Bitcoin SV, we should move to describing capacity in terms of “transactions per second” instead of “block size”.
It is important to note the key effect of this: a small number of large transactions is much quicker to process than a large number of small transactions, even if they total the same number of bytes. Therefore, big blocks full of large data transactions are easier to create and process than big blocks full of small transactions.
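A rough worked example of this effect, comparing two blocks of equal size. The 200 MB block size, the 2-input payment transactions, and the 1-input data transactions are illustrative assumptions, not measurements.

```python
# Two blocks of the same byte size, filled with different transaction mixes.
BLOCK_BYTES = 200_000_000                      # a 200 MB block in both scenarios

# Scenario A: the block is filled with small payment transactions (assumed sizes)
payment_size, payment_inputs = 400, 2
payments_in_block = BLOCK_BYTES // payment_size
payment_ops = payments_in_block * payment_inputs   # UTXO lookups / signature checks

# Scenario B: the block is filled with large data transactions (assumed sizes)
data_size, data_inputs = 100_000, 1
data_txs_in_block = BLOCK_BYTES // data_size
data_ops = data_txs_in_block * data_inputs

print(payments_in_block, payment_ops)   # -> 500000 1000000
print(data_txs_in_block, data_ops)      # -> 2000 2000
```

Under these assumptions, the block of small payment transactions requires roughly 500 times as many input validations as the block of large data transactions, even though both blocks contain the same number of bytes.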
Peak vs sustained capacity
One other important point to note is that peak capacity is generally higher than sustained capacity. A peak is a very short-term burst of activity above the norm. For example, if the network is quiet and then enough transactions are suddenly broadcast to fill a 200 MB block within 10 minutes before stopping, we would regard that as a peak load. If that flow of transactions continued for many hours, we would regard it as a sustained load. Because the network and the nodes themselves start from an unstressed state, they will generally perform better and process a single 200 MB block more effectively than a long run of 200 MB blocks. Below a certain threshold the performance in both scenarios should be the same, but once the load rises beyond that threshold a difference in performance between the two scenarios begins to appear, and the difference grows as the load moves further beyond that threshold.
Some sample metrics
As Bitcoin SV has lifted the restrictions on transactions, we have started to see different types of transactions, including data transactions. Data transactions can be up to 300 times as large as payment transactions. For this reason, we also consider a balanced mix of transactions.
| tps | tx/day | Block size (payments) | Block size (balanced) |
|---|---|---|---|
| 5 | 432,000 | 1.2 MB (BTC) | 102 MB |
| 130 | 11 million | 31.2 MB (BCH) | 2.6 GB |
| 520 | 45 million | 125 MB (BSV Nov 18) | 11 GB |
| 1,000 | 86 million | 240 MB | 20 GB |
| 9,000 | 777 million | 2.16 GB (BSV Quasar) | 184 GB |
| 10,000 | 864 million | 2.4 GB | 205 GB |
| 50,000 | 4 billion | 11.1 GB | 0.9 TB |
| 100,000 | 9 billion | 25 GB | 2 TB |
| 200,000 | 17 billion | 47 GB | 4 TB |
| 4,000,000 | 345 billion | 0.96 TB | 82 TB |
Calculations
Calculations assume a block every 10 minutes. To achieve 1 tps, we need 600 transactions in a block.
Payment transactions are small; we use an average of 400 bytes per transaction.
Balanced transactions are a mix of 30% payment transactions (400 bytes), 40% medium transactions (10,000 bytes), and 30% large transactions (100,000 bytes). This results in an approximate average size of 34,120 bytes.
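The arithmetic behind the table can be sketched as below. Sizes are computed in decimal units (1 MB = 1,000,000 bytes); some of the larger entries in the table are rounded, so they may differ slightly from these computed figures.

```python
# A minimal sketch of the calculation described above: one block every 10 minutes,
# 400-byte payment transactions, and a 34,120-byte average for the balanced mix.
SECONDS_PER_DAY = 86_400
TX_PER_BLOCK_PER_TPS = 600          # 1 tps requires 600 transactions per 10-minute block
PAYMENT_TX_BYTES = 400
BALANCED_TX_BYTES = 0.3 * 400 + 0.4 * 10_000 + 0.3 * 100_000   # = 34,120 bytes

def capacity_row(tps: int):
    tx_per_day = tps * SECONDS_PER_DAY
    payment_block = tps * TX_PER_BLOCK_PER_TPS * PAYMENT_TX_BYTES
    balanced_block = tps * TX_PER_BLOCK_PER_TPS * BALANCED_TX_BYTES
    return tx_per_day, payment_block, balanced_block

for tps in (5, 130, 520, 1_000, 9_000):
    tx_per_day, payment_block, balanced_block = capacity_row(tps)
    print(f"{tps:>6} tps: {tx_per_day:,} tx/day, "
          f"{payment_block / 1e6:,.1f} MB payments, "
          f"{balanced_block / 1e9:,.2f} GB balanced")
# e.g. 5 tps -> 432,000 tx/day, 1.2 MB payment block, ~0.10 GB (102 MB) balanced block
```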