Update process if timestamp meta contains hashes for snapshot:
- local root is loaded, root is updated until final
- local timestamp is loaded
- timestamp is updated (the snapshot meta, i.e. version, hashes and length, changes here)
- local snapshot load is attempted: its hashes do not match the meta in the new timestamp, so it is not loaded
- remote snapshot is loaded: rollback checks are not done because no local snapshot was loaded (see the sketch after this list)
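
To make the ordering concrete, here is a rough sketch of the client-side flow. The helper names, the duck-typed arguments and the broad exception handling are illustrative assumptions, not the actual ngclient code:

```python
# Illustrative only: helper names and the caught exception are assumptions,
# not the exact python-tuf Updater internals.
def load_snapshot(trusted_set, local_storage, remote_fetcher) -> None:
    try:
        # The new timestamp's meta hashes no longer match the local snapshot,
        # so this raises and the local snapshot is never loaded ...
        trusted_set.update_snapshot(local_storage.read("snapshot"))
    except Exception:
        pass  # local snapshot is simply discarded

    # ... which means trusted_set has no snapshot to compare against here:
    # the remote snapshot is accepted without any rollback (version) check.
    trusted_set.update_snapshot(remote_fetcher.download("snapshot"))
```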
The same issue does not exist for targets, since there are no rollback checks for targets.
Similar issues with version numbers and expiry are being fixed by delaying those checks until we know we have the "final" snapshot. We don't want to do exactly the same thing here, as the hashes are meant to prevent even parsing data we don't trust... The deciding factor seems to be "trusted local metadata" versus "untrusted data from the network". TrustedMetadataSet does not currently have this information, so possibly we need to add an argument to update_snapshot():
```python
def update_snapshot(self, data: bytes, trusted: Optional[bool] = False)
```
where trusted data would not be hash/length checked, but untrusted data would. This looks a bit ugly, since none of the other update functions need the argument, but it would seem to work in all cases:
- when loading local snapshot, we know the data has at some point been trusted (signatures have been checked): it does not need to match hashes now
- if there is no local snapshot and we're updating from remote, the remote data must match meta hashes in timestamp
- if there is a local snapshot and we're updating from remote, the remote data must match meta hashes in timestamp
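
A minimal sketch of how the flag could gate the check, written as a free function rather than the method itself. It assumes MetaFile.verify_length_and_hashes() is the check being skipped and that the snapshot entry lives at meta["snapshot.json"], and it leaves out the signature, type, version and expiry verification the real method also does:

```python
from typing import Optional

from tuf.api.metadata import Metadata, Snapshot, Timestamp


def update_snapshot_sketch(
    trusted_timestamp: Metadata[Timestamp],
    data: bytes,
    trusted: Optional[bool] = False,
) -> Metadata[Snapshot]:
    # The timestamp has already been verified, so its meta entry for
    # snapshot is trusted information.
    meta = trusted_timestamp.signed.meta["snapshot.json"]
    if not trusted:
        # Untrusted data from the network must match the hashes/length the
        # new timestamp declares for snapshot, before we even parse it.
        meta.verify_length_and_hashes(data)
    # Trusted local data was signature-checked when it was persisted, so it
    # is parsed even if it no longer matches the new timestamp's hashes.
    return Metadata.from_bytes(data)
```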
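
And a hypothetical caller for the three cases above, showing which call would pass trusted=True. This is simplified: the real client would only download when the local snapshot is missing or not the "final" one:

```python
from typing import Optional


def load_snapshot(trusted_set, local_bytes: Optional[bytes], remote_bytes: bytes) -> None:
    # Case 1: a local snapshot exists. It was signature-checked when it was
    # first persisted, so it is loaded without the timestamp meta hash check.
    if local_bytes is not None:
        try:
            trusted_set.update_snapshot(local_bytes, trusted=True)
        except Exception:
            pass  # an unusable local snapshot is simply not loaded

    # Cases 2 and 3: with or without a loaded local snapshot, data fetched
    # from the network is untrusted and must match the meta hashes/length
    # in the new timestamp.
    trusted_set.update_snapshot(remote_bytes)
```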