- Why is `m_assumeutxo_data` hardcoded in the first place if we do not want to trust others' UTXO sets? (We are being forced to use only that version of the UTXO set.)
The concern is people putting up websites with instructions for "even faster sync time!" with UTXO set downloads. If such a website were to become popular, and then compromised, there is a non-negligible chance of this actually resulting in a malicious UTXO set being loaded and accepted by users, even if only temporarily (anything is possible in such a UTXO set, including the attacker giving themselves 1 million BTC).
By placing the commitment hash in the source code, it becomes subject to Bitcoin Core's review ecosystem. I think it is unfair to call this just "developers decide", because:
- Active review community. Anyone can, and many people do, look over changes to the source code. A change to the `m_assumeutxo_data` value is easy to review (just check it against an existing node's hash), and gets a lot of scrutiny.
- Bitcoin Core has reproducible builds. Anyone, including non-developers, can participate in building releases, and they should end up with bit-for-bit identical binaries to the ones published. This establishes confidence that the binaries people actually run match the released source code, including the `m_assumeutxo_data` value.
If you think of "developers" as the entire community of people participating in these processes, then it is of course not incorrect to say that it is effectively this group making that decision. But I think the scale and transparency of the whole thing matters. This is not a single person choosing a value before a release, without oversight, as an instruction on a website might be. And of course, users are inherently trusting this group of people and this process anyway for the validation software itself, even if we try to minimize the extent to which that trust is needed.
- Why is `m_assumeutxo_data` set to 840,000 and not to the same block as `assumevalid`?
The original idea behind assumeutxo, although nobody is working on completing it right now, included automatic snapshotting and distribution of snapshots over the network, so that users would not need to go find a source.
In such a model, there would be a predefined schedule of heights at which snapshots are made. For example, there could be one every 52500 blocks (roughly once per year), and all nodes supporting the feature would make a snapshot at that height when it is reached, and keep the last few snapshots around for download over the P2P network. New nodes starting up, with `m_assumeutxo_data` values set to whatever the last multiple of 52500 was at the time of release, can then synchronize from any snapshot-providing node on the network, even if the provider is running older software than the receiver.
While there is no progress at the moment on the P2P side of this, it still suggests using a snapshot height schedule that is not tied to Bitcoin Core releases.
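The schedule described above can be sketched in a few lines. This is purely illustrative (the interval comes from the example in the text; the function names are not Bitcoin Core code): scheduled snapshot heights are just multiples of the interval, and a node would keep the last few.

```python
# Hypothetical sketch of the snapshot schedule described above.
# The interval is the example value from the text; nothing here is
# actual Bitcoin Core code.
SNAPSHOT_INTERVAL = 52500  # roughly one year of blocks

def latest_snapshot_height(tip_height: int) -> int:
    """Highest scheduled snapshot height at or below the current tip."""
    return (tip_height // SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL

def recent_snapshot_heights(tip_height: int, keep: int = 3) -> list[int]:
    """The last few scheduled heights a snapshot-providing node would keep."""
    latest = latest_snapshot_height(tip_height)
    heights = [latest - i * SNAPSHOT_INTERVAL for i in range(keep)]
    return [h for h in reversed(heights) if h > 0]
```

Note that 840,000 happens to be exactly 16 × 52500, which is consistent with a release picking "the last multiple of 52500 at the time of release" rather than an `assumevalid`-style block.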
- I understand that we do not want people to start trusting random UTXO sets out of laziness about waiting for a sync, but couldn't we use some kind of signed-by-self UTXO sets? It would be nice if, as a user, you could back up the current UTXO set, sign it in some way, and be able to load and verify it in the future to sync a new node.
If it is just for yourself, you can make a backup of the chainstate directory (while the node is not running). Assumeutxo has a number of features that matter in the wide-distribution model, but do not apply to personal backups:
- The snapshot data is canonical. Anyone can create a snapshot at a particular height, and everyone will obtain an identical snapshot file, making it easy to compare and to distribute (possibly from multiple sources, bittorrent-style).
- Snapshot loading still involves background revalidation. It gives you a node that is immediately synced to the snapshot point, and can continue validation from that point on, but for security, the node will still separately perform a background revalidation of the snapshot itself (from genesis to the snapshot point).
If you trust the snapshot creator and loader completely (because you are both of them yourself), the overhead of these features is unnecessary. By making a backup of your chainstate (which holds the UTXO set), you can at any point, on any system, jump to that point in validation. It is a database, so it is not byte-for-byte comparable between systems, but it is compatible. The side "restoring" the backup will not know it is loading something created externally, so it will not perform background revalidation, but if you ultimately trust the data anyway, that would just be duplicated work.