Peter Maydell [Tue, 19 Jun 2018 15:04:43 +0000 (16:04 +0100)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- Active mirror (blockdev-mirror copy-mode=write-blocking)
- bdrv_drain_*() fixes and test cases
- Fix crash with scsi-hd and drive_del
# gpg: Signature made Mon 18 Jun 2018 17:44:10 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (35 commits)
iotests: Add test for active mirroring
block/mirror: Add copy mode QAPI interface
block/mirror: Add active mirroring
job: Add job_progress_increase_remaining()
block/mirror: Add MirrorBDSOpaque
block/dirty-bitmap: Add bdrv_dirty_iter_next_area
test-hbitmap: Add non-advancing iter_next tests
hbitmap: Add @advance param to hbitmap_iter_next()
block: Generalize should_update_child() rule
block/mirror: Use source as a BdrvChild
block/mirror: Wait for in-flight op conflicts
block/mirror: Use CoQueue to wait on in-flight ops
block/mirror: Convert to coroutines
block/mirror: Pull out mirror_perform()
block: fix QEMU crash with scsi-hd and drive_del
test-bdrv-drain: Test graph changes in drain_all section
block: Allow graph changes in bdrv_drain_all_begin/end sections
block: ignore_bds_parents parameter for drain functions
block: Move bdrv_drain_all_begin() out of coroutine context
block: Allow AIO_WAIT_WHILE with NULL ctx
...
Peter Maydell [Tue, 19 Jun 2018 14:19:07 +0000 (15:19 +0100)]
Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2018-06-18' into staging
Monitor patches for 2018-06-18
# gpg: Signature made Mon 18 Jun 2018 14:50:29 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <[email protected]>"
# gpg: aka "Markus Armbruster <[email protected]>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-monitor-2018-06-18:
monitor: add lock to protect mon_fdsets
monitor: move init global earlier
monitor: remove event_clock_type
monitor: fix comment for monitor_lock
monitor: more comments on lock-free elements
monitor: protect mon->fds with mon_lock
monitor: rename out_lock to mon_lock
* remotes/kraxel/tags/usb-20180618-pull-request:
Revert "bus: do not unref the added child bus on realize"
Revert "usb: release the created buses"
Revert "usb-ccid: fix bus leak"
Peter Maydell [Tue, 19 Jun 2018 10:15:27 +0000 (11:15 +0100)]
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180618' into staging
ppc patch queue 2018-06-18
Next batch of ppc and spapr related patches for the 3.0 release.
* Improved handling of Spectre/Meltdown mitigations for POWER8
* Numerous Mac machine type cleanups and improvements
* Cleanup to cpu realize/unrealize path for spapr
* Create a place for machine-specific per-cpu information, and
start moving some things to it
* Assorted bugfixes
* remotes/dgibson/tags/ppc-for-3.0-20180618: (28 commits)
spapr: fix xics_system_init() error path
target/ppc, spapr: Move VPA information to machine_data
ppc/pnv: introduce a pnv_chip_core_realize() routine
spapr_cpu_core: introduce spapr_create_vcpu()
spapr_cpu_core: add missing rollback on realization path
spapr_cpu_core: fix potential leak in spapr_cpu_core_realize()
spapr_cpu_core: convert last snprintf() to g_strdup_printf()
pnv: Add cpu unrealize path
pnv: Clean up cpu realize path
pnv_core: Allocate cpu thread objects individually
pnv: Fix some error handling cpu realize()
spapr: Clean up cpu realize/unrealize paths
sm501: Do not clear read only bits when writing registers
mos6522: expose mos6522_update_irq() through MOS6522DeviceClass
mos6522: remove additional interrupt flag filter from mos6522_update_irq()
mos6522: only clear the shift register interrupt upon write
xics_kvm: fix a build break
mac_newworld: add PMU device
adb: add property to disable direct reg 3 writes
adb: fix read reg 3 byte ordering
...
Kevin Wolf [Mon, 18 Jun 2018 15:20:41 +0000 (17:20 +0200)]
Merge remote-tracking branch 'mreitz/tags/pull-block-2018-06-18' into queue-block
Block patches:
- Active mirror (blockdev-mirror copy-mode=write-blocking)
# gpg: Signature made Mon Jun 18 17:08:19 2018 CEST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <[email protected]>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* mreitz/tags/pull-block-2018-06-18:
iotests: Add test for active mirroring
block/mirror: Add copy mode QAPI interface
block/mirror: Add active mirroring
job: Add job_progress_increase_remaining()
block/mirror: Add MirrorBDSOpaque
block/dirty-bitmap: Add bdrv_dirty_iter_next_area
test-hbitmap: Add non-advancing iter_next tests
hbitmap: Add @advance param to hbitmap_iter_next()
block: Generalize should_update_child() rule
block/mirror: Use source as a BdrvChild
block/mirror: Wait for in-flight op conflicts
block/mirror: Use CoQueue to wait on in-flight ops
block/mirror: Convert to coroutines
block/mirror: Pull out mirror_perform()
Max Reitz [Wed, 13 Jun 2018 18:18:22 +0000 (20:18 +0200)]
block/mirror: Add copy mode QAPI interface
This patch allows the user to specify whether to use active or only
background mode for mirror block jobs. Currently, this setting will
remain constant for the duration of the entire block job.
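As a hedged illustration of how the new option might be used over QMP (job and node names are placeholders), based on the copy-mode=write-blocking value mentioned in the pull request summary:

    { "execute": "blockdev-mirror",
      "arguments": { "job-id": "mirror0",
                     "device": "drive0",
                     "target": "target0",
                     "sync": "full",
                     "copy-mode": "write-blocking" } }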
Max Reitz [Wed, 13 Jun 2018 18:18:21 +0000 (20:18 +0200)]
block/mirror: Add active mirroring
This patch implements active synchronous mirroring. In active mode, the
passive mechanism will still be in place and is used to copy all
initially dirty clusters off the source disk; but every write request
will write data both to the source and the target disk, so the source
cannot be dirtied faster than data is mirrored to the target. Also,
once the block job has converged (BLOCK_JOB_READY sent), source and
target are guaranteed to stay in sync (unless an error occurs).
Active mode is completely optional and currently disabled at runtime. A
later patch will add a way for users to enable it.
Max Reitz [Wed, 13 Jun 2018 18:18:17 +0000 (20:18 +0200)]
test-hbitmap: Add non-advancing iter_next tests
Add a function that wraps hbitmap_iter_next() and always calls it in
non-advancing mode first, and in advancing mode next. The result should
always be the same.
By using this function everywhere we called hbitmap_iter_next() before,
we should get good test coverage for non-advancing hbitmap_iter_next().
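A minimal sketch of such a wrapper (the wrapper name is illustrative; it assumes the bool @advance parameter added by the previous patch):

    static int64_t check_hbitmap_iter_next(HBitmapIter *hbi)
    {
        /* query the next set bit without moving the iterator ... */
        int64_t ret1 = hbitmap_iter_next(hbi, false);
        /* ... then advance for real; both calls must agree */
        int64_t ret2 = hbitmap_iter_next(hbi, true);

        g_assert_cmpint(ret1, ==, ret2);
        return ret1;
    }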
Max Reitz [Wed, 13 Jun 2018 18:18:15 +0000 (20:18 +0200)]
block: Generalize should_update_child() rule
Currently, bdrv_replace_node() refuses to create loops from one BDS to
itself if the BDS to be replaced is the backing node of the BDS to
replace it: Say there is a node A and a node B. Replacing B by A means
making all references to B point to A. If B is a child of A (i.e. A has
a reference to B), that would mean we would have to make this reference
point to A itself -- so we'd create a loop.
bdrv_replace_node() (through should_update_child()) refuses to do so if
B is the backing node of A. There is no reason why we should create
loops if B is not the backing node of A, though. The BDS graph should
never contain loops, so we should always refuse to create them.
If B is a child of A and B is to be replaced by A, we should simply
leave B in place there because it is the most sensible choice.
A more specific argument would be: Putting filter drivers into the BDS
graph is basically the same as appending an overlay to a backing chain.
But the main child BDS of a filter driver is not "backing" but "file",
so restricting the no-loop rule to backing nodes would fail here.
Max Reitz [Wed, 13 Jun 2018 18:18:13 +0000 (20:18 +0200)]
block/mirror: Wait for in-flight op conflicts
This patch makes the mirror code differentiate between simply waiting
for any operation to complete (mirror_wait_for_free_in_flight_slot())
and specifically waiting for all operations touching a certain range of
the virtual disk to complete (mirror_wait_on_conflicts()).
Max Reitz [Wed, 13 Jun 2018 18:18:12 +0000 (20:18 +0200)]
block/mirror: Use CoQueue to wait on in-flight ops
Attach a CoQueue to each in-flight operation so if we need to wait for
any we can use it to wait instead of just blindly yielding and hoping
for some operation to wake us.
A later patch will use this infrastructure to allow requests accessing
the same area of the virtual disk to specifically wait for each other.
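A hedged sketch of the idea using QEMU's CoQueue primitives (struct and field names are illustrative, and the fragments assume a MirrorOp *op from the surrounding mirror code):

    typedef struct MirrorOp {
        int64_t offset;
        uint64_t bytes;
        CoQueue waiting_requests;   /* coroutines waiting for this op */
    } MirrorOp;

    /* when the operation is set up */
    qemu_co_queue_init(&op->waiting_requests);

    /* a conflicting request can now wait for exactly this op ... */
    qemu_co_queue_wait(&op->waiting_requests, NULL);

    /* ... and when the op completes, all waiters are woken up */
    qemu_co_queue_restart_all(&op->waiting_requests);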
Max Reitz [Wed, 13 Jun 2018 18:18:11 +0000 (20:18 +0200)]
block/mirror: Convert to coroutines
In order to talk to the source BDS (and maybe in the future to the
target BDS as well) directly, we need to convert our existing AIO
requests into coroutine I/O requests.
Max Reitz [Wed, 13 Jun 2018 18:18:10 +0000 (20:18 +0200)]
block/mirror: Pull out mirror_perform()
When converting mirror's I/O to coroutines, we are going to need a point
where these coroutines are created. mirror_perform() is going to be
that point.
Peter Xu [Fri, 8 Jun 2018 03:55:11 +0000 (11:55 +0800)]
monitor: add lock to protect mon_fdsets
Introduce a new global big lock for mon_fdsets and take it where needed.
The monitor_fdset_get_fd() handling is a bit tricky: now we need to call
qemu_mutex_unlock(), which might pollute errno, so we need to make sure
the correct errno is passed up to the callers. To make things simpler,
we let monitor_fdset_get_fd() return the -errno directly when an error
occurs, then in qemu_open() we move it back into errno.
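A hedged sketch of the qemu_open() side of this contract (surrounding error handling elided):

    int fd = monitor_fdset_get_fd(fdset_id, flags);
    if (fd < 0) {
        errno = -fd;    /* the helper returned -errno; restore it */
        return -1;
    }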
Peter Xu [Fri, 8 Jun 2018 03:55:10 +0000 (11:55 +0800)]
monitor: move init global earlier
Before this patch, monitor fd helpers might be called even earlier than
monitor_init_globals(). This can be problematic.
After the previous work, monitor_init_globals() no longer depends on
accelerator initialization. Call it earlier (before CLI parsing, which
is where the monitor APIs might first be called) to make sure it runs
before any of the monitor APIs.
Peter Xu [Fri, 8 Jun 2018 03:55:09 +0000 (11:55 +0800)]
monitor: remove event_clock_type
Instead, use a dynamic function to detect which clock we'll use. The
problem with the old code is that it made monitor initialization depend on
configure_accelerator() (that's where qtest_enabled() starts to take
effect). After this change, we don't have such a dependency any more.
We just need to make sure configure_accelerator() has been called by the
time we start to use the clock. It is now only used in
monitor_qapi_event_queue() and monitor_qapi_event_handler(), so we're good.
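A minimal sketch of what such a dynamic helper could look like (the helper name is illustrative):

    static inline QEMUClockType monitor_get_event_clock(void)
    {
        /* qtest wants deterministic timestamps, so use the virtual clock */
        return qtest_enabled() ? QEMU_CLOCK_VIRTUAL : QEMU_CLOCK_REALTIME;
    }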
Peter Xu [Fri, 8 Jun 2018 03:55:05 +0000 (11:55 +0800)]
monitor: rename out_lock to mon_lock
The out_lock protects a few Monitor fields. In the future the monitor
code will start to run in multiple threads, so we are going to turn it
into a bigger lock that protects not only the output buffer but also
most of the rest.
The problem is that we should avoid making block driver graph
changes while we have in-flight requests. Let's drain all I/O
for this BB before calling bdrv_root_unref_child().
Kevin Wolf [Wed, 28 Mar 2018 16:29:18 +0000 (18:29 +0200)]
block: Allow graph changes in bdrv_drain_all_begin/end sections
bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different.
If the graph changes so that root nodes are added or removed, we would
have to compensate for this. bdrv_next() returns each root node only
once even if it's the root node for multiple BlockBackends or for a
monitor-owned block driver tree, which would only complicate things.
The much easier and more obviously correct way is to fundamentally
change the way the functions work: Iterate over all BlockDriverStates,
no matter who owns them, and drain them individually. Compensation is
only necessary when a new BDS is created inside a drain_all section.
Removal of a BDS doesn't require any action because it's gone afterwards
anyway.
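A hedged sketch of the new iteration pattern (simplified; the real code splits quiescing and polling into separate phases):

    BlockDriverState *bs = NULL;

    /* visit every BlockDriverState in the system, whoever owns it */
    while ((bs = bdrv_next_all_states(bs))) {
        AioContext *ctx = bdrv_get_aio_context(bs);

        aio_context_acquire(ctx);
        bdrv_drained_begin(bs);     /* quiesce this node individually */
        aio_context_release(ctx);
    }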
Kevin Wolf [Tue, 29 May 2018 15:17:45 +0000 (17:17 +0200)]
block: ignore_bds_parents parameter for drain functions
In the future, bdrv_drain_all_begin/end() will drain all individual
nodes separately rather than whole subtrees. This means that we don't
want to propagate the drain to all parents any more: If the parent is a
BDS, it will already be drained separately. Recursing to all parents is
unnecessary work and would make it an O(n²) operation.
Prepare the drain function for the changed drain_all by adding an
ignore_bds_parents parameter to the internal implementation that
prevents the propagation of the drain to BDS parents. We still (have to)
propagate it to non-BDS parents like BlockBackends or Jobs because those
are not drained separately.
Kevin Wolf [Tue, 10 Apr 2018 14:07:55 +0000 (16:07 +0200)]
block: Move bdrv_drain_all_begin() out of coroutine context
Before we can introduce a single polling loop for all nodes in
bdrv_drain_all_begin(), we must make sure to run it outside of coroutine
context like we already do for bdrv_do_drained_begin().
Kevin Wolf [Tue, 10 Apr 2018 13:51:52 +0000 (15:51 +0200)]
block: Allow AIO_WAIT_WHILE with NULL ctx
bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop thread and use the AioWait notification mechanism.
Just randomly picking the AioContext of any non-mainloop thread would
work, but instead of bothering to find such a context in the caller, we
can just as well accept NULL for ctx.
Kevin Wolf [Fri, 23 Mar 2018 19:29:24 +0000 (20:29 +0100)]
block: Defer .bdrv_drain_begin callback to polling phase
We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're
done with propagating the drain through the graph and are doing the
single final BDRV_POLL_WHILE().
Just schedule the coroutine with the callback and increase bs->in_flight
to make sure that the polling phase will wait for it.
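A hedged sketch of the scheduling step (data and the coroutine entry point belong to the surrounding drain code and are only named here for illustration):

    /* don't poll here: account for the callback and let the final
     * BDRV_POLL_WHILE() wait for it */
    bdrv_inc_in_flight(bs);
    data->co = qemu_coroutine_create(bdrv_drain_invoke_entry, data);
    aio_co_schedule(bdrv_get_aio_context(bs), data->co);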
Kevin Wolf [Fri, 23 Mar 2018 14:57:20 +0000 (15:57 +0100)]
block: Don't poll in parent drain callbacks
bdrv_do_drained_begin() is only safe if we have a single
BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow
that parent callbacks introduce a nested polling loop that could cause
graph changes while we're traversing the graph.
Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single
node without waiting for its requests to complete. These requests will
be waited for in the BDRV_POLL_WHILE() call down the call chain.
Kevin Wolf [Fri, 23 Mar 2018 11:40:21 +0000 (12:40 +0100)]
test-bdrv-drain: Test node deletion in subtree recursion
If bdrv_do_drained_begin() polls during its subtree recursion, the graph
can change and mess up the bs->children iteration. Test that this
doesn't happen.
Kevin Wolf [Fri, 23 Mar 2018 11:40:41 +0000 (12:40 +0100)]
block: Drain recursively with a single BDRV_POLL_WHILE()
Anything can happen inside BDRV_POLL_WHILE(), including graph
changes that may interfere with its callers (e.g. child list iteration
in recursive callers of bdrv_do_drained_begin).
Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the
end of bdrv_do_drained_begin() to avoid such effects. The recursion
happens now inside the loop condition. As the graph can only change
between bdrv_drain_poll() calls, but not inside of it, doing the
recursion here is safe.
Max Reitz [Wed, 28 Feb 2018 18:04:54 +0000 (19:04 +0100)]
test-bdrv-drain: Add test for node deletion
This patch adds two bdrv-drain tests for what happens if some BDS goes
away during the drainage.
The basic idea is that you have a parent BDS with some child nodes.
Then, you drain one of the children. Because of that, the party who
actually owns the parent decides to (A) delete it, or (B) detach all its
children from it -- both while the child is still being drained.
A real-world case where this can happen is the mirror block job, which
may exit if you drain one of its children.
Kevin Wolf [Thu, 22 Mar 2018 13:35:58 +0000 (14:35 +0100)]
block: Remove bdrv_drain_recurse()
For bdrv_drain(), recursively waiting for child node requests is
pointless because we didn't quiesce their parents, so new requests could
come in anyway. Letting the function work only on a single node makes it
more consistent.
For subtree drains and drain_all, we already have the recursion in
bdrv_do_drained_begin(), so the extra recursion doesn't add anything
either.
Kevin Wolf [Thu, 22 Mar 2018 13:11:20 +0000 (14:11 +0100)]
block: Really pause block jobs on drain
We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.
This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node to
be drained.
For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.
Kevin Wolf [Thu, 22 Mar 2018 09:57:14 +0000 (10:57 +0100)]
block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE()
Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.
This patch effectively reverts said commit (the context has changed a
bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in the
loop condition for drain.
The effect is probably hard to measure in any real-world use case
because actual I/O will dominate, but if I run only the initialisation
part of 'qemu-img convert' where it calls bdrv_block_status() for the
whole image to find out how much data there is to copy, this phase actually
needs only roughly half the time after this patch.
Kevin Wolf [Wed, 4 Apr 2018 11:26:16 +0000 (13:26 +0200)]
tests/test-bdrv-drain: bdrv_drain_all() works in coroutines now
Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(),
coroutine context is automatically left with a BH, preventing the
deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now
that we even removed the old polling code as dead code, it's obvious
that it's compatible now.
Enable the coroutine test cases for bdrv_drain_all().
Kevin Wolf [Thu, 14 Dec 2017 10:25:16 +0000 (11:25 +0100)]
block: Don't manually poll in bdrv_drain_all()
All involved nodes are already idle; we called bdrv_do_drain_begin() on
them.
The comment in the code suggested that this was not correct because the
completion of a request on one node could spawn a new request on a
different node (which might have been drained before, so we wouldn't
drain the new request). In reality, new requests to different nodes
aren't spawned out of nothing, but only in the context of a parent
request, and they aren't submitted to random nodes, but only to child
nodes. As long as we still poll for the completion of the parent request
(which we do), draining each root node separately is good enough.
Remove the additional polling code from bdrv_drain_all_begin() and
replace it with an assertion that all nodes are already idle after we
drained them separately.
Kevin Wolf [Thu, 14 Dec 2017 09:27:23 +0000 (10:27 +0100)]
block: Use bdrv_do_drain_begin/end in bdrv_drain_all()
bdrv_do_drain_begin/end() already implement everything that
bdrv_drain_all_begin/end() need and currently still do manually: disable
external events, call parent drain callbacks, call block driver
callbacks.
It also does two more things:
The first is incrementing bs->quiesce_counter. bdrv_drain_all() already
stood out in the test case by behaving differently from the other drain
variants. Adding this is not only safe, but in fact a bug fix.
The second is calling bdrv_drain_recurse(). We already do that later in
the same function in a loop, so basically doing an early first iteration
doesn't hurt.
Kevin Wolf [Sat, 9 Dec 2017 23:11:13 +0000 (00:11 +0100)]
test-bdrv-drain: bdrv_drain() works with cross-AioContext events
As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext
we're polling (which it always is for bdrv_drain()) so that aio_poll()
is woken up in the end.
Add a test case that shows that it works. Remove the comment in
bdrv_drain() that claims otherwise.
liujunjie [Thu, 7 Jun 2018 08:02:37 +0000 (16:02 +0800)]
ps2: check PS2Queue wptr pointer in post_load routine
Commit 802cbcb7300 fixed most of the issues seen during qemu guest
migration, but the queue size still needs to be checked against
PS2_QUEUE_SIZE: if it is equal, wptr should be set to 0. Otherwise wptr
would grow larger than PS2_QUEUE_SIZE and never wrap back when
ps2_queue_noirq is called. This could lead to an OOB access, so add a
check to avoid it.
Gerd Hoffmann [Wed, 13 Jun 2018 12:29:45 +0000 (14:29 +0200)]
hw/display: add ramfb, a simple boot framebuffer living in guest ram
The boot framebuffer is expected to be configured by the firmware, so it
uses fw_cfg as interface. Initialization goes as follows:
(1) Check whether etc/ramfb is present.
(2) Allocate framebuffer from RAM.
(3) Fill struct RAMFBCfg, write it to etc/ramfb.
Done. You can write stuff to the framebuffer now, and it should appear
automagically on the screen.
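As a hedged sketch, the fw_cfg blob written in step (3) could look roughly like this (fields big-endian, fourcc being a DRM format code such as XRGB8888):

    struct QEMU_PACKED RAMFBCfg {
        uint64_t addr;      /* guest-physical address of the framebuffer */
        uint32_t fourcc;    /* pixel format (DRM fourcc) */
        uint32_t flags;
        uint32_t width;
        uint32_t height;
        uint32_t stride;
    };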
Note that this isn't very efficient because it does a full display
update on each refresh. No dirty tracking. Dirty tracking would have
to be active for the whole ram slot, so that wouldn't be very efficient
either. For a boot display which is active for a short time only this
isn't a big deal. As a permanent guest display something better should be
used (if possible).
This is the ramfb core code. Some windup is needed for display devices
which want to have a ramfb boot display.
Revert "bus: do not unref the added child bus on realize"
This is wrong. object_finalize_child_property()'s unref balances the
ref in object_property_add_child(). qbus_realize's unref balances the
ref that was initially placed by object_new/object_initialize.
Greg Kurz [Fri, 15 Jun 2018 16:58:00 +0000 (18:58 +0200)]
spapr: fix xics_system_init() error path
Commit 3d85885a1b1f3 tried to fix error handling, but it actually
went in the wrong direction by dropping the local Error *.
In the default KVM case, the rationale is to try the in-kernel XICS first,
and if that is not possible, to fall back to userland XICS. Passing errp everywhere
makes this fallback impossible if errp is &error_fatal (which happens to
be the case). And anyway, if the caller would pass a regular &local_err,
things would be worse: we could possibly pass an already set *errp to
error_setg() and crash, or return an error even in case of success.
So we definitely need a local Error * and only propagate it when we're
done with the fallback logic. This is what this patch does.
Mark Cave-Ayland [Fri, 15 Jun 2018 07:33:03 +0000 (08:33 +0100)]
SPARC64: add icount support
This patch adds gen_io_start()/gen_io_end() to various instructions as required
in order to boot my OpenBIOS test images on qemu-system-sparc64 with icount
enabled.
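A hedged sketch of the pattern applied to such an instruction (assuming the translator exposes the TB via dc->base.tb, as in other targets):

    if (tb_cflags(dc->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }
    /* ...emit the timer or I/O access for the instruction here... */
    if (tb_cflags(dc->base.tb) & CF_USE_ICOUNT) {
        gen_io_end();
    }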
Thomas Huth [Thu, 5 Apr 2018 09:32:30 +0000 (11:32 +0200)]
hw/sparc64/sun4u: Fix introspection by converting prom instance_init to realize
The instance_init function of devices should always succeed to be able
to introspect the device. However, the instance_init function of the
"openprom" device can currently fail, for example like this:
Fix the issue by converting the instance_init function into a realize()
function instead, which is allowed to fail (and is not called during
device introspection).
hw/isa/smc37c669: Change the parallel I/O base to 378H
On the Alpha DP264 machine, the Cirrus VGA is I/O mapped
in the 3C0H-3CFH range, thus the I/O base used by the parallel
device clashes, and since a4cb773928e the VGA is not
working:
David Gibson [Wed, 13 Jun 2018 06:22:18 +0000 (16:22 +1000)]
target/ppc, spapr: Move VPA information to machine_data
CPUPPCState currently contains a number of fields containing the state of
the VPA. The VPA is a PAPR specific concept covering several guest/host
shared memory areas used to communicate some information with the
hypervisor.
As a PAPR concept this is really machine specific information, although it
is per-cpu, so it doesn't really belong in the core CPU state structure.
There's also other information that's per-cpu, but platform/machine
specific. So create a (void *)machine_data in PowerPCCPU which can be
used by the machine to locate per-cpu data. Initialization, lifetime and
cleanup of machine_data are entirely up to the machine type.
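A hedged sketch of what the machine-owned per-CPU state might look like for spapr (struct and field names are illustrative of the VPA-related fields being moved):

    typedef struct sPAPRCPUState {
        uint64_t vpa_addr;
        uint64_t slb_shadow_addr, slb_shadow_size;
        uint64_t dtl_addr, dtl_size;
    } sPAPRCPUState;

    /* the machine allocates it and fully owns its lifetime */
    cpu->machine_data = g_new0(sPAPRCPUState, 1);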
Cédric Le Goater [Thu, 14 Jun 2018 14:00:41 +0000 (16:00 +0200)]
ppc/pnv: introduce a pnv_chip_core_realize() routine
This extracts from the PnvChip realize routine the part creating the
cores. On Power9, we will need to create the cores after the Xive
interrupt controller is created.
Greg Kurz [Thu, 14 Jun 2018 21:50:42 +0000 (23:50 +0200)]
spapr_cpu_core: add missing rollback on realization path
The spapr_realize_vcpu() function doesn't rollback in case of error.
This isn't a problem with coldplugged CPUs because the machine won't
start and QEMU will exit. Hotplug is a different story though: the
CPU thread is started under object_property_set_bool() and it assumes
it can access the CPU object.
If icp_create() fails, we return an error without unregistering the
reset handler for this CPU, and we leave the underlying QEMU thread for
this CPU alive. Since spapr_cpu_core_realize() doesn't care to unrealize
already realized CPUs either, but happily frees all of them anyway, the
CPU thread crashes instantly:
(qemu) device_add host-spapr-cpu-core,core-id=1,id=gku
GKU: failing icp_create (cpu 0x11497fd0)
^^^^^^^^^^
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffee3feaa0 (LWP 24725)]
0x00000000104c8374 in object_dynamic_cast_assert (obj=0x11497fd0,
^^^^^^^^^^^^^^
pointer to the CPU object
623 trace_object_dynamic_cast_assert(obj ? obj->class->type->name
(gdb) p obj->class->type
$1 = (Type) 0x0
(gdb) p * obj
$2 = {class = 0x10ea9c10, free = 0x11244620,
^^^^^^^^^^
should be g_free
(gdb) p g_free
$3 = {<text variable, no debug info>} 0x7ffff282bef0 <g_free>
obj is a dangling pointer to the CPU that was just destroyed in
spapr_cpu_core_realize().
This patch adds proper rollback to both spapr_realize_vcpu() and
spapr_cpu_core_realize().
Signed-off-by: Greg Kurz <[email protected]>
[dwg: Fixed a conflict due to a change in my tree] Signed-off-by: David Gibson <[email protected]>
Greg Kurz [Thu, 14 Jun 2018 21:50:27 +0000 (23:50 +0200)]
spapr_cpu_core: fix potential leak in spapr_cpu_core_realize()
Commit 94ad93bd97684 (QEMU 2.12) switched to instantiating CPUs separately
but missed adapting the error path accordingly. If something fails in
the CPU creation loop, then the CPU object that was just created is leaked.
The error paths in this function are a bit obfuscated, and adding
yet another label to free this CPU object makes it worse. We should
move the block of the loop to a separate function, with a proper
rollback path, but this is a bigger cleanup.
For now, let's just fix the bug by adding the missing calls to
object_unref(). This will allow easier backport to older QEMU
versions.
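A hedged, simplified sketch of the unref-on-error pattern the fix adds inside the creation loop (the failing call and surrounding names are illustrative):

    Object *obj = object_new(scc->cpu_type);

    object_property_set_bool(obj, true, "realized", &local_err);
    if (local_err) {
        object_unref(obj);  /* balance the reference taken by object_new() */
        goto err;
    }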
David Gibson [Wed, 13 Jun 2018 02:08:42 +0000 (12:08 +1000)]
pnv: Add cpu unrealize path
Currently we don't have any unrealize path for pnv cpu cores. We get away
with this because we don't yet support cpu hotplug for pnv.
However, we're going to want it eventually, and in the meantime, it makes
it non-obvious why there are a bunch of allocations on the realize() path
that don't have matching frees.
David Gibson [Wed, 13 Jun 2018 03:34:36 +0000 (13:34 +1000)]
pnv: Clean up cpu realize path
pnv_cpu_init() is only called from the pnv cpu core realize path, and
really only can be called from there. So fold it into its caller, which
we also rename for brevity.
David Gibson [Wed, 13 Jun 2018 01:57:37 +0000 (11:57 +1000)]
pnv_core: Allocate cpu thread objects individually
Currently, we allocate space for all the cpu objects within a single core
in one big block. This was copied from an older version of the spapr code
and requires some ugly pointer manipulation to extract the individual
objects.
This design was due to a misunderstanding of qemu lifetime conventions and
has already been changed in spapr (in 94ad93bd "spapr_cpu_core: instantiate
CPUs separately".
Make an equivalent change in pnv_core to get rid of the nasty pointer
arithmetic.
David Gibson [Wed, 13 Jun 2018 01:55:31 +0000 (11:55 +1000)]
pnv: Fix some error handling cpu realize()
In pnv_core_realize() we call two functions with an Error * parameter in
succession, which will go badly if they both cause errors. In fact, a
failure in either of them indicates a qemu internal error, so we can just
use &error_abort in both cases.
David Gibson [Wed, 13 Jun 2018 01:48:26 +0000 (11:48 +1000)]
spapr: Clean up cpu realize/unrealize paths
spapr_cpu_init() and spapr_cpu_destroy() are only called from the spapr
cpu core realize/unrealize paths, and really can only be called from there.
Those are all short functions, so fold the pairs together for simplicity.
While we're there rename some functions and change some parameter types
for brevity and clarity.
BALATON Zoltan [Thu, 14 Jun 2018 00:17:00 +0000 (02:17 +0200)]
sm501: Do not clear read only bits when writing registers
When writing registers that have read-only bits we have to avoid
changing these bits as they may have non-zero values. Make sure we use
the correct masks to mask out read-only and reserved bits when
changing registers.
Also remove extra spaces from dram_control and arbitration_control
assignments.
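The general pattern, as a hedged sketch with a hypothetical SM501_REG_RO_MASK (the actual masks are per-register):

    /* keep read-only/reserved bits at their current value,
     * update only the writable bits from the guest write */
    s->dram_control = (s->dram_control & SM501_REG_RO_MASK) |
                      (value & ~SM501_REG_RO_MASK);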
Mark Cave-Ayland [Wed, 13 Jun 2018 08:30:15 +0000 (09:30 +0100)]
mos6522: expose mos6522_update_irq() through MOS6522DeviceClass
In the case where we have an interrupt generated externally from inputs to
bits 1 and 2 of port A and/or port B, it is necessary to expose
mos6522_update_irq() so it can be called by the interrupt source.
Mark Cave-Ayland [Wed, 13 Jun 2018 08:30:14 +0000 (09:30 +0100)]
mos6522: remove additional interrupt flag filter from mos6522_update_irq()
The datasheet indicates that the interrupt is generated by ANDing the
interrupt flags register (IFR) with the interrupt enable register (IER)
but currently there is an extra filter for the SR and timer interrupts.
Remove this extra filter to allow interrupts to be generated by external
inputs on bits 1 and 2 of ports A and B.
Cédric Le Goater [Tue, 12 Jun 2018 10:11:35 +0000 (12:11 +0200)]
xics_kvm: fix a build break
On CentOS 7.5, gcc-4.8.5-28.el7_5.1.ppc64le fails to build QEMU due to:
hw/intc/xics_kvm.c: In function ‘ics_set_kvm_state’:
hw/intc/xics_kvm.c:281:13: error: ‘ret’ may be used uninitialized in this
function [-Werror=maybe-uninitialized]
return ret;
Fix the breakage and also remove the extra error reporting as
kvm_device_access() already provides a substantial error message.
Mark Cave-Ayland [Tue, 12 Jun 2018 16:44:01 +0000 (17:44 +0100)]
adb: add property to disable direct reg 3 writes
MacOS 9 has a bug in its PMU driver whereby after configuring the ADB bus
devices it sends another write to reg 3 on both devices resetting them
both back to the same address.
Add a new disable_direct_reg3_writes property to ADBDevice to disable these
direct writes, which can be enabled just for the upcoming pmu-adb support.
Mark Cave-Ayland [Tue, 12 Jun 2018 16:44:00 +0000 (17:44 +0100)]
adb: fix read reg 3 byte ordering
According to the Apple ADB documentation, register 3 is a 2-byte register
with the device address in the first byte, and the handler ID in the second
byte.
This is currently the opposite way round to how QEMU returns them, so switch the
order around.
Mark Cave-Ayland [Tue, 12 Jun 2018 16:43:57 +0000 (17:43 +0100)]
mac_newworld: add via machine option to control mac99 VIA/ADB configuration
This option allows the VIA configuration to be switched between 3
different possible setups: cuda, pmu-adb and pmu with USB rather than ADB
keyboard/mouse.
For the moment we don't do anything with the configuration except to pass
it to the macio device (the via-cuda parent) and also to the firmware via
the fw_cfg interface so that it can present the correct device tree.
The default is cuda which is the current default and so will have no
change in behaviour.
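As a hedged usage example, the option would be given on the command line roughly like this (the value being one of cuda, pmu-adb or pmu):

    qemu-system-ppc -machine mac99,via=pmu-adb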
Greg Kurz [Tue, 12 Jun 2018 17:01:26 +0000 (19:01 +0200)]
spapr: fix leak in h_client_architecture_support()
If the negotiated compat mode can't be set, but raw mode is supported,
we decide to ignore the error. And so, we should free it to prevent a
memory leak.
ppc/spapr_caps: Don't disable cap_cfpc on POWER8 by default
In default_caps_with_cpu() we set spapr_cap_cfpc to broken for POWER8
processors and before.
Since we no longer require private l1d cache on POWER8 for this cap to
be set to workaround, change this to default to broken for POWER7
processors and before.
Cleber Rosa [Wed, 30 May 2018 18:41:56 +0000 (14:41 -0400)]
Acceptance tests: add Linux kernel boot and console checking test
This test boots a Linux kernel, and checks that the given command
line was effective in two ways:
* It makes the kernel use the set "console device" as a console
* The kernel records the command line as expected in the console
Given that way too many error conditions may occur, and detecting the
kernel boot progress status may not be trivial, this test relies on a
timeout to handle unexpected situations. Also, it's *not* tagged as a
quick test for obvious reasons.
It may be useful, while interactively running/debugging this test, or
tests similar to this one, to show some of the logging channels.
Example:
$ avocado --show=QMP,console run boot_linux_console.py
Cleber Rosa [Wed, 30 May 2018 18:41:55 +0000 (14:41 -0400)]
scripts/qemu.py: introduce set_console() method
The set_console() method is intended to ease higher level use cases
that require a console device.
The amount of intelligence is limited on purpose, requiring either the
device type explicitly, or the existence of a machine (pattern)
definition.
Because of the console device type selection criteria (by machine
type), users should also be able to define that. It'll then be used
for both '-machine' and for the console device type selection.
Users of the set_console() method will certainly be interested in
accessing the console device, and for that a console_socket property
has been added.