feat: use files hosted on cloudflare r2 instead of git lfs #94
Conversation
info: the file-combination (chunking) step is gone, since the 4 GiB file limit is an LFS restriction; we can host the combined compressed state file directly on R2 instead of having to combine the chunks ourselves.
dapplion
left a comment
Nice, better hosting solution. But you need to check integrity.
```rust
)];

fn get_chunks(chain: &str) -> Vec<(&'static str, u64)> {
    const R2_BASE: &str = "https://initstate.gnosischain.com";
```
Can you make this a CLI arg that defaults to this value? Ok to do in another PR
created an issue for later
```rust
fn get_compressed_state_size(chain: &str) -> u64 {
    match chain {
        "gnosis" => 5_672_943_444,
        "chiado" => 22_468_068,
```
You should hardcode the expected file hash and check integrity after downloading. Verifying that the local file is intact is much better than only checking its size.
The trust assumption is otherwise very sketchy: "hey, download a random file from our URL that you can't change and hope it's fine." Even if no one uses it, you should add a hackmd somewhere with the steps to recreate those files, and serve them via an independent route.
added hash verification for the downloaded files, and a markdown doc with steps to recreate the state files
```rust
    println!("✅ compressed state verified");
} else {
    println!("✅ compressed state already present");
    println!("✅ compressed state verified");
```
Use tracing to be consistent with the rest of reth logging
tracing cannot be used here, as this part of the code runs before tracing is initialized (at node startup)
dapplion
left a comment
Great job! Thanks
we are hitting git lfs download limits after only a small number of downloads; this fixes the problem by serving the files from r2, which allows unlimited downloads, and ditching git lfs along with the complexity it came with.
the served files still match the canonical ones in the git repo, so everyone can verify them.