A guest with enough RAM, e.g. 128 GiB, is likely to detect the savevm
downtime and to complain about stalled CPUs. This happens because we
re-read the timebase just before migrating it, and thus don't account
for all of the time between VM stop and pre-save (a simplified sketch
of this interplay follows after the sign-off tags below).

A very similar situation was already addressed for live migration of
paused guests (commit d14f33976282). Extend that logic to cover savevm
as well.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1893787
Signed-off-by: Greg Kurz <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: David Gibson <[email protected]>
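For readers following the reasoning above, here is a simplified sketch of
the interplay between the stop-time snapshot and the migration pre-save
hook. It is not the verbatim hw/ppc/ppc.c code: the struct and function
names below are illustrative stand-ins, and only runstate_check(),
RUN_STATE_PAUSED and RUN_STATE_SAVE_VM are taken directly from the patch.

#include <stdint.h>
#include <stdbool.h>
#include "sysemu/runstate.h"   /* runstate_check(), RUN_STATE_* (QEMU internal) */

/* Cut-down stand-in for the timebase state; the real struct has more fields. */
typedef struct {
    uint64_t guest_timebase;   /* timebase value captured at save time */
    bool runstate_paused;      /* snapshot was taken while the CPUs were stopped */
} SketchTimebase;

/* Called when the CPUs stop (pause, or the VM being stopped for savevm). */
static void sketch_timebase_save(SketchTimebase *tb)
{
    /* ... snapshot host ticks + tb_offset into tb->guest_timebase ... */

    /*
     * Mark the snapshot as authoritative when the VM is stopped for a
     * plain pause and, with this patch, for savevm as well.
     */
    tb->runstate_paused =
        runstate_check(RUN_STATE_PAUSED) || runstate_check(RUN_STATE_SAVE_VM);
}

/* Runs later, once migration/savevm actually serializes device state. */
static int sketch_timebase_pre_save(void *opaque)
{
    SketchTimebase *tb = opaque;

    /*
     * Without the RUN_STATE_SAVE_VM check, savevm fell through here and
     * re-read the timebase long after the CPUs had been stopped, so the
     * whole "VM stop -> pre-save" window showed up as elapsed timebase
     * and a large guest reported stalled CPUs.
     */
    if (!tb->runstate_paused) {
        sketch_timebase_save(tb);
    }

    return 0;
}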
*/
tb->guest_timebase = ticks + first_ppc_cpu->env.tb_env->tb_offset;
- tb->runstate_paused = runstate_check(RUN_STATE_PAUSED);
+ tb->runstate_paused =
+ runstate_check(RUN_STATE_PAUSED) || runstate_check(RUN_STATE_SAVE_VM);
}
static void timebase_load(PPCTimebase *tb)

@@ ... @@ static int timebase_pre_save(void *opaque)
{
PPCTimebase *tb = opaque;
- /* guest_timebase won't be overridden in case of paused guest */
+ /* guest_timebase won't be overridden in case of paused guest or savevm */
if (!tb->runstate_paused) {
timebase_save(tb);
}
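For reference, a pre-save hook like the one patched above is invoked through
a VMStateDescription. The sketch below shows the wiring; the .pre_save member
and the VMSTATE_* macros are the standard QEMU vmstate API, but the
description name and field layout here are illustrative rather than the exact
timebase vmstate.

#include "migration/vmstate.h"   /* VMStateDescription, VMSTATE_* (QEMU internal) */

static const VMStateDescription vmstate_timebase_sketch = {
    .name = "timebase-sketch",            /* illustrative name */
    .version_id = 1,
    .minimum_version_id = 1,
    .pre_save = sketch_timebase_pre_save, /* runs before the fields are sent */
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(guest_timebase, SketchTimebase),
        VMSTATE_END_OF_LIST()
    },
};

With RUN_STATE_SAVE_VM now included in runstate_paused, this hook keeps the
snapshot taken when the CPUs stopped instead of re-reading the timebase while
the snapshot is being written out.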