If a BDS gets deleted during blk_drain_all(), it might miss a
call to bdrv_do_drained_end(). This means missing a call to
aio_enable_external() and the AIO context remains disabled
forever. This can cause a device to become unresponsive and to
disrupt the guest execution, i.e. hang, loop forever or worse.
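For illustration only, here is a minimal stand-alone sketch of the
failure mode and of what the fix amounts to. This is not QEMU code:
AioCtx, Node, drained_begin(), drained_end() and node_close() are
made-up stand-ins for the real AioContext/BlockDriverState machinery.

  #include <assert.h>
  #include <stdio.h>

  typedef struct AioCtx {
      int external_disable_cnt;   /* > 0 means external handlers are off */
  } AioCtx;

  typedef struct Node {
      AioCtx *ctx;
      int quiesce_counter;        /* nested drained sections on this node */
  } Node;

  static void drained_begin(Node *n)
  {
      if (n->quiesce_counter++ == 0) {
          n->ctx->external_disable_cnt++;   /* aio_disable_external() */
      }
  }

  static void drained_end(Node *n)
  {
      assert(n->quiesce_counter > 0);
      if (--n->quiesce_counter == 0) {
          n->ctx->external_disable_cnt--;   /* aio_enable_external() */
      }
  }

  /* The fix, in spirit: end every pending section before the node dies. */
  static void node_close(Node *n)
  {
      while (n->quiesce_counter) {
          drained_end(n);
      }
  }

  int main(void)
  {
      AioCtx ctx = { 0 };
      Node n = { &ctx, 0 };

      drained_begin(&n);   /* drain_all is in flight...                 */
      node_close(&n);      /* ...and the node is deleted before it ends */

      /* Without the loop in node_close() this would be stuck at 1. */
      printf("external_disable_cnt = %d\n", ctx.external_disable_cnt);
      return 0;
  }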
This scenario is quite easy to encounter with virtio-scsi on
POWER when issuing multiple blockdev-create QMP commands while
the guest is booting and still running the SLOF firmware. This
happens because SLOF disables/re-enables PCI devices multiple
times via the IO/MEM/MASTER bits of the PCI_COMMAND register
after the initial probe/feature negotiation, as it tends to work
with a single device at a time at various stages, such as probing
and running block/network bootloaders, without doing a full reset
in between. This naturally generates many dataplane stops and
starts, and thus many drain sections that can race with
blockdev_create_run(). In the end, SLOF bails out.
It is also reproducible on x86, but it requires generating
artificial dataplane start/stop activity with the stop/cont QMP
commands. In this case, SeaBIOS ends up looping forever, waiting
for the virtio-scsi device to send a response to a command it
never received.
Add a helper that pairs every previously called
bdrv_do_drained_begin() with a matching bdrv_do_drained_end() and
call it from bdrv_close().
While at it, update the "/bdrv-drain/graph-change/drain_all" test
in test-bdrv-drain so that it catches the issue.
BugId: https://bugzilla.redhat.com/show_bug.cgi?id=1874441
Signed-off-by: Greg Kurz <[email protected]>
Message-Id: <160346526998.272601.9045392804399803158[email protected]>
Signed-off-by: Kevin Wolf <[email protected]>
     }
     QLIST_INIT(&bs->aio_notifiers);
     bdrv_drained_end(bs);
+
+    /*
+     * If we're still inside some bdrv_drain_all_begin()/end() sections, end
+     * them now since this BDS won't exist anymore when bdrv_drain_all_end()
+     * gets called.
+     */
+    if (bs->quiesce_counter) {
+        bdrv_drain_all_end_quiesce(bs);
+    }
 }
 
 void bdrv_close_all(void)
     }
 }
 
+void bdrv_drain_all_end_quiesce(BlockDriverState *bs)
+{
+    int drained_end_counter = 0;
+
+    g_assert(bs->quiesce_counter > 0);
+    g_assert(!bs->refcnt);
+
+    while (bs->quiesce_counter) {
+        bdrv_do_drained_end(bs, false, NULL, true, &drained_end_counter);
+    }
+    BDRV_POLL_WHILE(bs, qatomic_read(&drained_end_counter) > 0);
+}
+
 void bdrv_drain_all_end(void)
 {
     BlockDriverState *bs = NULL;
  */
 void bdrv_drained_end_no_poll(BlockDriverState *bs, int *drained_end_counter);
 
+/**
+ * End all quiescent sections started by bdrv_drain_all_begin(). This is
+ * only needed when deleting a BDS before bdrv_drain_all_end() is called.
+ */
+void bdrv_drain_all_end_quiesce(BlockDriverState *bs);
+
 /**
  * End a quiescent section started by bdrv_subtree_drained_begin().
  */
     g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
     g_assert_cmpint(b_s->drain_count, ==, 0);
+    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);
     bdrv_unref(bs_b);
     blk_unref(blk_b);