Hi all,
Sometimes the setup crashes during learning, or the run happens on a server that has to reboot. In these cases we have to rerun the whole learning algorithm, and it may take a long time to get back to the hypothesis we had before the crash. We can reuse a bit of the previous hypothesis (it can act like a cache, and we can find counterexamples with it), but the learning algorithm will still pose all of its queries from scratch. See the sketch right below for the kind of caching I mean.
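To make the "cache" idea concrete, here is a minimal sketch of a membership-query cache that is written through to disk, so that after a crash the learner still re-poses its queries, but the expensive system is only consulted for words it has never seen. All names here are hypothetical, not from any particular library:

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a membership-query cache that survives crashes
// by persisting every answered query to disk.
public class PersistentQueryCache {
    private final Map<String, Boolean> answers;      // input word -> membership answer
    private final Function<String, Boolean> oracle;  // the real (expensive) system under learning
    private final File store;

    @SuppressWarnings("unchecked")
    public PersistentQueryCache(Function<String, Boolean> oracle, File store) throws IOException {
        this.oracle = oracle;
        this.store = store;
        Map<String, Boolean> loaded = new HashMap<>();
        if (store.exists()) {  // resume: reload the answers recorded before the crash
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(store))) {
                loaded = (Map<String, Boolean>) in.readObject();
            } catch (ClassNotFoundException e) {
                throw new IOException(e);
            }
        }
        this.answers = loaded;
    }

    // Answer from the cache if possible; otherwise ask the real system and persist.
    public boolean query(String word) throws IOException {
        Boolean cached = answers.get(word);
        if (cached != null) {
            return cached;
        }
        boolean answer = oracle.apply(word);
        answers.put(word, answer);
        flush();  // write-through, so a crash loses at most the in-flight query
        return answer;
    }

    private void flush() throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(store))) {
            out.writeObject(new HashMap<>(answers));
        }
    }
}
```

With something like this, rerunning the algorithm after a crash is still O(number of queries) in bookkeeping, but the costly interactions with the real system are not repeated.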
Is it possible to serialize the state of the learning algorithm?
If not, what can I do to improve the situation? For L*-based algorithms it shouldn't be too hard, I think: just (de)serialize the observation table, roughly as in the sketch below.
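What I have in mind for the L* case is roughly the following. This is a sketch only: the class and field names are made up, and a real implementation would have to checkpoint whatever internal state the concrete learner actually keeps beyond the table itself:

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch of an L* observation table that can be checkpointed.
// S = prefixes (rows), E = suffixes (columns), T = observed answers.
public class ObservationTableCheckpoint implements Serializable {
    private static final long serialVersionUID = 1L;

    public final Set<String> prefixes = new LinkedHashSet<>();   // S and S·A rows
    public final Set<String> suffixes = new LinkedHashSet<>();   // E columns
    public final Map<String, Boolean> table = new HashMap<>();   // (prefix + suffix) -> answer

    // Write the full table state to disk, e.g. after every refinement step.
    public void save(File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(this);
        }
    }

    // Restore the table after a crash and continue learning from it.
    public static ObservationTableCheckpoint load(File file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (ObservationTableCheckpoint) in.readObject();
        }
    }
}
```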