Issue summary
On a longer-running Forest instance, the Forest DB grows by roughly 30G per day. This forces every node operator to implement their own shrinking mechanism, the simplest being:
- export a snapshot and shut the node down (or shut the node down and download a snapshot from a trusted source, which may be faster),
- import the fresh snapshot,
- repeat whenever available disk space runs low.
We can do something better ourselves (though following roughly the same logic). The rough idea is to mark entries as exportable and then delete the rest from the database. In theory, this should bring us back to the just-after-import DB size.
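As a rough illustration of the mark-then-delete idea, here is a minimal mark-and-sweep sketch over a toy in-memory block store. The `Db`, `mark_reachable`, and `sweep` names (and the `Cid`-as-string type) are illustrative assumptions, not Forest APIs; it assumes the mark phase walks exactly the blocks a snapshot export would keep.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-ins, not Forest types: a CID is just a string here,
// and the "database" is a map from CID to the CIDs that block links to.
type Cid = String;

struct Db {
    blocks: HashMap<Cid, Vec<Cid>>,
}

impl Db {
    // Mark phase: walk everything reachable from the snapshot roots,
    // i.e. the set of entries a snapshot export would include.
    fn mark_reachable(&self, roots: &[Cid]) -> HashSet<Cid> {
        let mut marked = HashSet::new();
        let mut queue: Vec<Cid> = roots.to_vec();
        while let Some(cid) = queue.pop() {
            if marked.insert(cid.clone()) {
                if let Some(links) = self.blocks.get(&cid) {
                    queue.extend(links.iter().cloned());
                }
            }
        }
        marked
    }

    // Sweep phase: delete every block that was not marked,
    // returning how many entries were removed.
    fn sweep(&mut self, marked: &HashSet<Cid>) -> usize {
        let before = self.blocks.len();
        self.blocks.retain(|cid, _| marked.contains(cid));
        before - self.blocks.len()
    }
}

fn main() {
    let mut db = Db {
        blocks: HashMap::from([
            ("root".into(), vec!["a".into()]),
            ("a".into(), vec![]),
            ("stale".into(), vec![]), // unreachable from the snapshot root
        ]),
    };
    let marked = db.mark_reachable(&["root".into()]);
    let removed = db.sweep(&marked);
    println!("removed {removed} stale entries, {} remain", db.blocks.len());
}
```

After the sweep, only the blocks a snapshot export would contain remain, which is what should put the DB back near its just-after-import size.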
Task summary
Acceptance Criteria
Other information and links
Not exactly the way we decided to move forward at the moment, but worth mentioning: #1708