diff --git a/swip-21.md b/swip-21.md
new file mode 100644
index 0000000..4bf21d7
--- /dev/null
+++ b/swip-21.md
@@ -0,0 +1,214 @@
+---
+title: Reserve doubling
+SWIP: 21
+author: Viktor Tron (@zelig), Callum Toner (@callum), Mark Bliss (@n00b21337)
+discussions-to: https://discord.gg/Q6BvSkCv (Swarm Discord)
+status: Draft
+type: Standards Track
+category: Core
+created: 2024-05-04
+---
+
+# Reserve doubling
+
+How to extend the node storage capacity dedicated to the reserve so that operators can calibrate the profitability of their nodes.
+
+## Abstract
+
+No matter how much storage space a node could dedicate to its reserve, in the current setup of the redistribution game it is only rewarded for a prescribed number of chunks, called the reserve size. The current reserve size of circa 4 million chunks (2^22 to be precise, around 16 GB, which with indexes effectively requires 25 GB) is proving too small for the profitable operation of nodes. The purpose of this SWIP is to change this and allow nodes to double their reserve size (potentially multiple times) and be rewarded accordingly.
+
+## Background and objectives
+
+The reserve size was deliberately chosen to be relatively small so that Swarm could experience scaling in the number of nodes relatively quickly. After a successful period with multiple instances of storage radius increases, we now consider Swarm’s storage capacity scaling safe and tested. Operating a node requires other resources such as CPU, memory, and bandwidth. To help operators calibrate a profitable setup, it is desirable that storage capacity can be set dynamically.
+
+Increasing the node reserve capacity (and being incentivized and paid accordingly for it) vastly decreases how many peer connections need to be maintained per unit of storage. This makes operating a node far more economical, as it will earn more revenue for storing more data at virtually the same operating cost.
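As a rough sanity check on the figures in the abstract (assuming Swarm's 4 KiB chunk size; the variable names below are illustrative only), the reserve sizes before and after one doubling work out as follows:

```python
# Back-of-the-envelope reserve sizes, assuming 4 KiB (2^12 byte) chunks.
CHUNK_SIZE = 4096  # bytes per chunk

reserve_chunks = 2**22                        # current prescribed reserve size
reserve_bytes = reserve_chunks * CHUNK_SIZE   # 2^34 bytes

doubled_chunks = 2**23                        # after one doubling
doubled_bytes = doubled_chunks * CHUNK_SIZE   # 2^35 bytes

print(reserve_chunks, reserve_bytes / 2**30)  # 4194304 chunks, 16.0 GiB
print(doubled_chunks, doubled_bytes / 2**30)  # 8388608 chunks, 32.0 GiB
```

Note that the 25 GB figure quoted above is larger than the raw 16 GB of chunk data because of the additional index storage.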
+On the other hand, this allows the network to operate with the same storage capacity and security using fewer nodes: if all storer nodes double their reserve, only half as many nodes are required to maintain the same quality of service.
+
+## Context
+
+The redistribution game requires nodes to sample the chunks closest to the current round anchor, called the *playing reserve* (validated by the *retrievability* check), and mandates that each participating node’s overlay address be close to this same round anchor (the *responsibility* check). Close here means that the address must match the anchor in at least *d* bits, where *d* is the committed depth of storage. Nodes are encouraged to report the largest depth *d* they can, since both the selection of the truth and of the winner are determined by the node’s stake density, i.e., the effective stake per volume of responsibility (stake \* 2^d). Overreporting the depth is prevented by checking that the sample comes from a large enough chunk pool, corresponding to the prescribed minimum playing reserve size (the *resources* check). We can therefore rephrase participation as follows: a node is selected if the anchor falls within its neighborhood, it can hold all the chunks in that neighborhood given the prescribed reserve capacity, and it must sample its reserve (filtering for chunks that match the anchor at least up to the storage depth).
+
+## Solution
+
+The solution to increasing the reserve size is indirect. The game stays almost the same as it is currently, i.e., nodes sample the part of their storage designated by the anchor, with a depth corresponding to the storage depth *d*. However, the constraint on the proximity of the node overlay to the anchor can be weakened: the node’s overlay will no longer need to match the anchor up to *d* bits.
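The responsibility check and stake density described in the Context section can be sketched as follows. This is an illustrative model only, not Bee's actual implementation; `proximity`, `may_participate`, and `stake_density` are hypothetical helper names, with `proximity` counting the leading bits in which two overlay addresses agree:

```python
def proximity(a: bytes, b: bytes) -> int:
    """Number of leading bits in which addresses a and b agree."""
    for i, (x, y) in enumerate(zip(a, b)):
        diff = x ^ y
        if diff:
            # bit_length of the XOR gives the position of the first differing bit
            return 8 * i + (8 - diff.bit_length())
    return 8 * min(len(a), len(b))

def may_participate(overlay: bytes, anchor: bytes, d: int) -> bool:
    """Responsibility check: the overlay must match the anchor in at least d bits."""
    return proximity(overlay, anchor) >= d

def stake_density(stake: int, d: int) -> int:
    """Effective stake per volume of responsibility: stake * 2^d."""
    return stake * 2**d

# Example: 32-byte addresses that agree in exactly the first 5 bits.
anchor  = bytes([0b10110000]) + bytes(31)
overlay = bytes([0b10110111]) + bytes(31)
print(may_participate(overlay, anchor, d=5))  # True
print(may_participate(overlay, anchor, d=6))  # False
```

The check also illustrates why honest depth reporting matters: a larger committed *d* multiplies the stake density and thus the chance of selection, but it simultaneously narrows the neighborhood the node is responsible for, which the resources check then verifies against the minimum playing reserve size.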
+This effectively means that a node is allowed to play in a round in which its *sister neighborhood* is selected, i.e., the responsibility criterion is modified to allow the node overlay to match the anchor with *d’\