target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac
All remaining tests for VFP4 are for fused multiply-add insns.
Since the MVFR1 field is used for both VFP and NEON, move its adjustment
from the !has_neon block to the (!has_vfp && !has_neon) block.
Test for vfp of the appropriate width alongside the test for simdfmac
within translate-vfp.inc.c. Within disas_neon_data_insn, we have
already tested for ARM_FEATURE_NEON.
We cannot easily create "any" functions for these, because the
ID_AA64PFR0 fields for FP and SIMD signal "enabled" with zero,
which means that an aarch32-only CPU will return incorrect results
when testing the AArch64 registers.
To use these, we must either have context or additionally test
vs ARM_FEATURE_AARCH64.
Use this in the places that were checking ARM_FEATURE_VFP, and
are obviously testing for the existence of the register set
as opposed to testing for some particular instruction extension.
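As a rough sketch of the resulting translate-time check (the feature-test
names follow the dc_isar_feature() style used in target/arm, but the exact
hunk may differ):

    /* Sketch, not the literal patch: a double-precision VFP FMAC insn
     * needs both the fused-multiply-add feature and a DP-capable FPU. */
    if (!dc_isar_feature(aa32_simdfmac, s) ||
        !dc_isar_feature(aa32_fpdp_v2, s)) {
        return false;
    }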
Sai Pavan Boddu [Mon, 24 Feb 2020 09:39:22 +0000 (15:09 +0530)]
arm_gic: Mask the unsupported priority bits
The GICv2 allows the implementation to implement a variable number
of priority bits; unimplemented bits in the priority registers
are read as zeros, writes ignored. We were previously always
implementing a full 8 bits of priority, which is allowed but not
what the real hardware typically does (which is usually to have
4 or 5 bits of priority).
Add a new device property to allow the number of implemented
priority bits to be specified.
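As a standalone sketch of the masking this implies (the helper name and the
property plumbing are illustrative assumptions, not code from the patch):

    #include <stdint.h>

    /* Keep only the top n_prio_bits of an 8-bit priority value; the
     * unimplemented low bits are read-as-zero, writes-ignored. */
    static inline uint8_t gic_prio_mask(unsigned n_prio_bits, uint8_t value)
    {
        return value & ~((1u << (8 - n_prio_bits)) - 1);
    }

    /* e.g. with 5 priority bits a write of 0xff is stored as 0xf8 */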
Peter Maydell [Fri, 28 Feb 2020 14:02:31 +0000 (14:02 +0000)]
Merge remote-tracking branch 'remotes/juanquintela/tags/pull-migration-pull-request' into staging
Migration pull request
# gpg: Signature made Fri 28 Feb 2020 09:21:31 GMT
# gpg: using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <[email protected]>" [full]
# gpg: aka "Juan Quintela <[email protected]>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/pull-migration-pull-request:
savevm: Don't call colo_init_ram_cache twice
migration/colo: wrap incoming checkpoint process into new helper
migration: fix COLO broken caused by a previous commit
migration/block: rename BLOCK_SIZE macro
migration/savevm: release gslist after dump_vmstate_json
test-vmstate: Fix memleaks in test_load_qlist
migration/vmstate: Remove redundant statement in vmstate_save_state_v()
multifd: Add zstd compression multifd support
multifd: Add multifd-zstd-level parameter
configure: Enable test and libs for zstd
multifd: Add zlib compression multifd support
multifd: Add multifd-zlib-level parameter
multifd: Make no compression operations into its own structure
migration: Add support for modules
multifd: Add multifd-compression parameter
Peter Maydell [Fri, 28 Feb 2020 10:27:34 +0000 (10:27 +0000)]
Merge remote-tracking branch 'remotes/aperard/tags/pull-xen-20200227' into staging
Xen queue 2020-02-27
* fix for xen-block
* fix in exec.c for migration of xen guest
* one cleanup patch
# gpg: Signature made Thu 27 Feb 2020 11:57:12 GMT
# gpg: using RSA key F80C006308E22CFD8A92E7980CF5572FD7FB55AF
# gpg: issuer "[email protected]"
# gpg: Good signature from "Anthony PERARD <[email protected]>" [marginal]
# gpg: aka "Anthony PERARD <[email protected]>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5379 2F71 024C 600F 778A 7161 D8D5 7199 DF83 42C8
# Subkey fingerprint: F80C 0063 08E2 2CFD 8A92 E798 0CF5 572F D7FB 55AF
* remotes/aperard/tags/pull-xen-20200227:
Memory: Only call ramblock_ptr when needed in qemu_ram_writeback
xen-bus/block: explicitly assign event channels to an AioContext
hw/xen/xen_pt_load_rom: Remove unused includes
Stefan Hajnoczi [Tue, 18 Feb 2020 11:02:09 +0000 (11:02 +0000)]
migration/block: rename BLOCK_SIZE macro
Both <linux/fs.h> and <sys/mount.h> define BLOCK_SIZE macros. Avoid
using that name in migration/block.c.
I noticed this when including <liburing.h> (Linux io_uring) from
"block/aio.h" and compilation failed. Although patches adding that
include haven't been sent yet, it makes sense to rename the macro now in
case someone else stumbles on it in the meantime.
Pan Nengyuan [Wed, 19 Feb 2020 09:47:05 +0000 (17:47 +0800)]
migration/savevm: release gslist after dump_vmstate_json
The 'list' is not freed at the end of dump_vmstate_json_to_file(). Although the function is called only once, free the list anyway to keep the code clean.
Fix the leak reported by ASAN as follows:
Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x7fb946abd768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
#1 0x7fb945eca445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
#2 0x7fb945ee2066 in g_slice_alloc (/lib64/libglib-2.0.so.0+0x6a066)
#3 0x7fb945ee3139 in g_slist_prepend (/lib64/libglib-2.0.so.0+0x6b139)
#4 0x5585db591581 in object_class_get_list_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1084
#5 0x5585db590f66 in object_class_foreach_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1028
#6 0x7fb945eb35f7 in g_hash_table_foreach (/lib64/libglib-2.0.so.0+0x3b5f7)
#7 0x5585db59110c in object_class_foreach /mnt/sdb/qemu-new/qemu/qom/object.c:1038
#8 0x5585db5916b6 in object_class_get_list /mnt/sdb/qemu-new/qemu/qom/object.c:1092
#9 0x5585db335ca0 in dump_vmstate_json_to_file /mnt/sdb/qemu-new/qemu/migration/savevm.c:638
#10 0x5585daa5bcbf in main /mnt/sdb/qemu-new/qemu/vl.c:4420
#11 0x7fb941204812 in __libc_start_main ../csu/libc-start.c:308
#12 0x5585da29420d in _start (/mnt/sdb/qemu-new/qemu/build/x86_64-softmmu/qemu-system-x86_64+0x27f020d)
Indirect leak of 7472 byte(s) in 467 object(s) allocated from:
#0 0x7fb946abd768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
#1 0x7fb945eca445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
#2 0x7fb945ee2066 in g_slice_alloc (/lib64/libglib-2.0.so.0+0x6a066)
#3 0x7fb945ee3139 in g_slist_prepend (/lib64/libglib-2.0.so.0+0x6b139)
#4 0x5585db591581 in object_class_get_list_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1084
#5 0x5585db590f66 in object_class_foreach_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1028
#6 0x7fb945eb35f7 in g_hash_table_foreach (/lib64/libglib-2.0.so.0+0x3b5f7)
#7 0x5585db59110c in object_class_foreach /mnt/sdb/qemu-new/qemu/qom/object.c:1038
#8 0x5585db5916b6 in object_class_get_list /mnt/sdb/qemu-new/qemu/qom/object.c:1092
#9 0x5585db335ca0 in dump_vmstate_json_to_file /mnt/sdb/qemu-new/qemu/migration/savevm.c:638
#10 0x5585daa5bcbf in main /mnt/sdb/qemu-new/qemu/vl.c:4420
#11 0x7fb941204812 in __libc_start_main ../csu/libc-start.c:308
#12 0x5585da29420d in _start (/mnt/sdb/qemu-new/qemu/build/x86_64-softmmu/qemu-system-x86_64+0x27f020d)
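A minimal sketch of the fix described above (illustrative fragment, not the
exact diff; the type argument is assumed): the GSList returned by
object_class_get_list() only needs its cells released once the JSON has been
written out.

    GSList *list = object_class_get_list(TYPE_DEVICE, false);
    /* ... walk the list and dump the vmstate JSON ... */
    g_slist_free(list);   /* release the cells g_slist_prepend() allocated */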
Peter Maydell [Thu, 27 Feb 2020 18:16:03 +0000 (18:16 +0000)]
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2020-02-26' into staging
nbd patches for 2020-02-26
- ensure multiple meta contexts work
- allow leading / in export names
- fix a failure path memory leak
# gpg: Signature made Thu 27 Feb 2020 01:53:03 GMT
# gpg: using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <[email protected]>" [full]
# gpg: aka "Eric Blake (Free Software Programmer) <[email protected]>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2020-02-26:
block/nbd: fix memory leak in nbd_open()
block/nbd: extract the common cleanup code
nbd-client: Support leading / in NBD URI
nbd: Fix regression with multiple meta contexts
Peter Maydell [Thu, 27 Feb 2020 17:12:31 +0000 (17:12 +0000)]
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-plugins-250220-1' into staging
Testing and plugin updates:
- fix pauth TCG tests
- tweak away rcutorture failures
- various Travis updates
- relax iotest size check a little
- fix for -trace/-D clash
- fix cross compile detection for tcg tests
- document plugin query lifetime
- fix missing break in plugin core
- fix some plugin warnings
- better progressive instruction decode
- avoid trampling vaddr in plugins
# gpg: Signature made Tue 25 Feb 2020 20:21:56 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <[email protected]>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-plugins-250220-1:
tests/tcg: take into account expected clashes pauth-4
tests/tcg: fix typo in configure.sh test for v8.3
tcg: save vaddr temp for plugin usage
tests/tcg: give debug builds a little bit longer
tests/plugins: make howvec clean-up after itself.
target/riscv: progressively load the instruction during decode
qemu/bitops.h: Add extract8 and extract16
tests/plugin: prevent uninitialized warning
plugins/core: add missing break in cb_to_tcg_flags
docs/devel: document query handle lifetimes
tracing: only allow -trace to override -D if set
tests/iotests: be a little more forgiving on the size test
travis.yml: single-thread build-tcg stages
travis.yml: Fix Travis YAML configuration warnings
travis.yml: Test the s390-ccw build, too
tests/rcutorture: mild documenting refactor of update thread
tests/rcutorture: better document locking of stats
tests/rcutorture: update usage hint
tests/tcg: include a skip runner for pauth3 with plugins
Igor Mammedov [Thu, 27 Feb 2020 16:14:54 +0000 (11:14 -0500)]
softmmu/vl.c: fix too slow TCG regression
Commit a1b18df9a4 moved -m option parsing after configure_accelerators(),
which broke TCG accelerator initialization: size_code_gen_buffer() reads
the global ram_size, which is still 0 at that moment.
Partially revert a1b18df9a4 by returning set_memory_options() to its
original location, and only keep the 32-bit host VA check and the
'memory-backend' size check introduced by fe64d06afc at the current place.
tests/acceptance: Count multiple Tux logos displayed on framebuffer
Add a test that verifies that each core properly displays the Tux
logo on the framebuffer device.
We simply follow the OpenCV "Template Matching with Multiple Objects"
tutorial, replacing Lionel Messi with Tux:
https://docs.opencv.org/4.2.0/d4/dc6/tutorial_py_template_matching.html
When OpenCV and NumPy are installed, this test can be run using:
$ avocado --show=app,framebuffer \
run -t cpu:i6400 \
tests/acceptance/machine_mips_malta.py
JOB ID : 54f3d8efd8674f289b8aa01a87f5d70c5814544c
JOB LOG : avocado/job-results/job-2020-02-01T20.52-54f3d8e/job.log
(1/3) tests/acceptance/machine_mips_malta.py:MaltaMachineFramebuffer.test_mips_malta_i6400_framebuffer_logo_1core:
framebuffer: found Tux at position (x, y) = (0, 0)
PASS (3.37 s)
(2/3) tests/acceptance/machine_mips_malta.py:MaltaMachineFramebuffer.test_mips_malta_i6400_framebuffer_logo_7cores:
framebuffer: found Tux at position (x, y) = (0, 0)
framebuffer: found Tux at position (x, y) = (88, 0)
framebuffer: found Tux at position (x, y) = (176, 0)
framebuffer: found Tux at position (x, y) = (264, 0)
framebuffer: found Tux at position (x, y) = (352, 0)
framebuffer: found Tux at position (x, y) = (440, 0)
framebuffer: found Tux at position (x, y) = (528, 0)
PASS (5.80 s)
(3/3) tests/acceptance/machine_mips_malta.py:MaltaMachineFramebuffer.test_mips_malta_i6400_framebuffer_logo_8cores:
framebuffer: found Tux at position (x, y) = (0, 0)
framebuffer: found Tux at position (x, y) = (88, 0)
framebuffer: found Tux at position (x, y) = (176, 0)
framebuffer: found Tux at position (x, y) = (264, 0)
framebuffer: found Tux at position (x, y) = (352, 0)
framebuffer: found Tux at position (x, y) = (440, 0)
framebuffer: found Tux at position (x, y) = (528, 0)
framebuffer: found Tux at position (x, y) = (616, 0)
PASS (6.67 s)
RESULTS : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME : 16.79 s
If the AVOCADO_CV2_SCREENDUMP_PNG_PATH environment variable is set, the
test will save the screenshot with matched squares to it.
Test inspired by the following post:
https://www.mips.com/blog/how-to-run-smp-linux-in-qemu-on-a-mips64-release-6-cpu/
Kernel built with the following Docker file:
https://github.com/philmd/qemu-testing-blob/blob/malta_i6400/mips/malta/mips64el/Dockerfile
Reactivate MIPS KVM maintainership with a modest goal of keeping
the support alive, checking common KVM code changes against MIPS
functionality, etc. (hence the status "Odd Fixes"), in the hope that
this component will again be fully maintained at some later, but not
too distant, point in the future.
James Hogan [Sat, 21 Dec 2019 15:53:06 +0000 (15:53 +0000)]
MAINTAINERS: Orphan MIPS KVM CPUs
I haven't been active for 18 months, and don't have the hardware set up
to test KVM for MIPS, so mark it as orphaned and remove myself as
maintainer. Hopefully somebody from MIPS can pick this up.
Anthony PERARD [Thu, 19 Dec 2019 15:43:22 +0000 (15:43 +0000)]
Memory: Only call ramblock_ptr when needed in qemu_ram_writeback
It is possible that a ramblock doesn't have memory that QEMU can
access; this is the case with the Xen hypervisor.
To avoid triggering an assert, only call ramblock_ptr() when
needed in qemu_ram_writeback(). This should fix migration of Xen
guests that was broken with bd108a44bc29 ("migration: ram: Switch to
ram block writeback").
Paul Durrant [Mon, 16 Dec 2019 14:34:51 +0000 (14:34 +0000)]
xen-bus/block: explicitly assign event channels to an AioContext
It is not safe to close an event channel from the QEMU main thread when
that channel's poller is running in IOThread context.
This patch adds a new xen_device_set_event_channel_context() function
to explicitly assign the channel AioContext, and modifies
xen_device_bind_event_channel() to initially assign the channel's poller
to the QEMU main thread context. The code in xen-block's dataplane is
then modified to assign the channel to IOThread context during
xen_block_dataplane_start() and de-assign it in
xen_block_dataplane_stop(), such that the channel is always assigned
back to main thread context before it is closed. aio_set_fd_handler()
already deals with all the necessary synchronization when moving an fd
between AioContext-s so no extra code is needed to manage this.
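The resulting calls might be shaped like this (field names and the exact
signature of the new helper are assumptions for illustration only):

    /* xen_block_dataplane_start(): hand the channel's poller to the IOThread */
    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
                                         dataplane->ctx, &error_abort);

    /* xen_block_dataplane_stop(): move it back before the channel is closed */
    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
                                         qemu_get_aio_context(), &error_abort);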
Janosch Frank [Thu, 27 Feb 2020 09:23:41 +0000 (04:23 -0500)]
s390x: Rename and use constants for short PSW address and mask
Let's rename PSW_MASK_ESA_ADDR to PSW_MASK_SHORT_ADDR because we're
not working with an ESA PSW, which would not support the extended
addressing bit. Also let's actually use it.
Additionally we introduce PSW_MASK_SHORT_CTRL and use it throughout
the codebase.
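For orientation: a short (ESA-format) PSW is 64 bits wide with the
instruction address in bits 33-63, so the two constants split the word
roughly as below (values illustrate the layout and are not copied from the
patch):

    /* Low 31 bits: instruction address of a short PSW. */
    #define PSW_MASK_SHORT_ADDR 0x000000007fffffffULL
    /* Everything above: system mask, key, addressing-mode bit, etc. */
    #define PSW_MASK_SHORT_CTRL 0xffffffff80000000ULL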
Raphael Norwitz [Thu, 16 Jan 2020 02:57:04 +0000 (21:57 -0500)]
Fixed assert in vhost_user_set_mem_table_postcopy
The current vhost_user_set_mem_table_postcopy() implementation
populates each region of the VHOST_USER_SET_MEM_TABLE message without
first checking if there are more than VHOST_MEMORY_MAX_NREGIONS already
populated. This can cause memory corruption if too many regions are
added to the message during the postcopy step.
This change moves an existing assert up such that attempting to
construct a VHOST_USER_SET_MEM_TABLE message with too many memory
regions will gracefully bring down qemu instead of corrupting memory.
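A minimal sketch of the reordering (the loop body and names are illustrative,
not the literal diff): the bound must be checked before a region slot is
written, not afterwards.

    for (i = 0; i < dev->mem->nregions; i++) {
        /* ... decide whether this region needs to be sent ... */
        assert(fd_num < VHOST_MEMORY_MAX_NREGIONS);       /* check first ... */
        msg.payload.memory.regions[fd_num] = region;      /* ... then fill   */
        fds[fd_num++] = fd;
    }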
Johannes Berg [Thu, 23 Jan 2020 08:17:08 +0000 (09:17 +0100)]
libvhost-user: implement in-band notifications
Add support for VHOST_USER_PROTOCOL_F_IN_BAND_NOTIFICATIONS, but
as it's not desired by default, don't enable it unless the device
implementation opts in by returning it from its protocol features
callback.
Note that I updated vu_set_vring_err_exec(), but didn't add any
sending of the VHOST_USER_SLAVE_VRING_ERR message as there's no
write to the err_fd today either.
This also adds vu_queue_notify_sync() which can be used to force
a synchronous notification if inband notifications are supported.
Previously, I had left out the slave->master direction handling
of F_REPLY_ACK, this now adds some code to support it as well.
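From a device implementation's point of view, opting in is just a matter of
advertising the bit from its protocol-features callback, roughly like this
(callback shape and macro spelling follow the description above and may not
match the headers exactly):

    static uint64_t my_dev_get_protocol_features(VuDev *dev)
    {
        /* opt in to message-based (in-band) kick/call notifications */
        return 1ULL << VHOST_USER_PROTOCOL_F_IN_BAND_NOTIFICATIONS;
    }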
Johannes Berg [Thu, 23 Jan 2020 08:17:07 +0000 (09:17 +0100)]
docs: vhost-user: add in-band kick/call messages
For good reason, vhost-user is currently built asynchronously: that
way, better performance can be obtained. However, for certain use
cases such as simulation, this is problematic.
Consider an event-based simulation in which both the device and CPU
are scheduled according to a simulation "calendar". Now, consider
the CPU sending I/O to the device, over a vring in the vhost-user
protocol. In this case, the CPU must wait for the vring interrupt
to have been processed by the device, so that the device is able to
put an entry onto the simulation calendar to obtain time to handle
the interrupt. Note that this doesn't mean the I/O is actually done
at this time; it just means that the handling of it is scheduled
before the CPU can continue running.
This cannot be done with the asynchronous eventfd based vring kick
and call design.
Extend the protocol slightly, so that a message can be used for kick
and call instead, if VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS is
negotiated. This in itself doesn't guarantee synchronisation, but both
sides can also negotiate VHOST_USER_PROTOCOL_F_REPLY_ACK and thus get
a reply to this message by setting the need_reply flag, and ensure
synchronisation this way.
To really use it in both directions, VHOST_USER_PROTOCOL_F_SLAVE_REQ
is also needed.
Since it is used for simulation purposes and too many messages on
the socket can lock up the virtual machine, document that this should
only be used together with the mentioned features.
Johannes Berg [Thu, 23 Jan 2020 08:17:05 +0000 (09:17 +0100)]
libvhost-user-glib: use g_main_context_get_thread_default()
If we use NULL, we just get the main program default mainloop
here. Using g_main_context_get_thread_default() has basically
the same effect, but it lets us start different devices in
different threads with different mainloops, which can be useful.
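A self-contained sketch of the pattern this enables, one GLib mainloop per
device thread via the thread-default context (plain GLib, nothing
vhost-user specific):

    #include <glib.h>

    static gpointer device_thread(gpointer data)
    {
        GMainContext *ctx = g_main_context_new();
        g_main_context_push_thread_default(ctx);   /* "the default" here now */

        /* code that calls g_main_context_get_thread_default() will attach
         * its sources to ctx instead of the global default mainloop */
        GMainLoop *loop = g_main_loop_new(ctx, FALSE);
        g_main_loop_run(loop);

        g_main_loop_unref(loop);
        g_main_context_pop_thread_default(ctx);
        g_main_context_unref(ctx);
        return NULL;
    }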
Johannes Berg [Thu, 23 Jan 2020 08:17:04 +0000 (09:17 +0100)]
libvhost-user-glib: fix VugDev main fd cleanup
If you try to make a device implementation that can handle multiple
connections and allow disconnections (which requires overriding the
VHOST_USER_NONE handling), then glib will warn that we remove a src
while it's still on the mainloop, and will poll() an FD that doesn't
exist anymore.
Fix this by making vug_source_new() require pairing with the new
vug_source_destroy() so we can keep the GSource referenced in the
meantime.
Note that this requires calling the new API in vhost-user-input.
vhost-user-gpu also uses vug_source_new(), but never seems to free
the result at all, so I haven't changed anything there.
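The GLib half of such a destroy helper presumably boils down to the usual
pair of calls, sketched here (the vug_* wrapper itself is only described
above, so treat this as an illustration):

    /* Detach the source from its GMainContext and drop the reference the
     * creator still holds, so nothing is left on the mainloop. */
    static void source_destroy(GSource *src)
    {
        g_source_destroy(src);   /* removes it from the context it ran on */
        g_source_unref(src);     /* releases the caller's reference */
    }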
This is really simple, since we know whether a response is
already requested or not, so we can just send a (successful)
response when there isn't one already.
Given that, it's not all _that_ useful but the master can at
least be sure the message was processed, and we can exercise
more code paths using the example code.
Eric Auger [Fri, 14 Feb 2020 13:27:44 +0000 (14:27 +0100)]
hw/arm/virt: Add the virtio-iommu device tree mappings
Adds the "virtio,pci-iommu" node in the host bridge node and
the RID mapping, excluding the IOMMU RID.
This is done in the virtio-iommu-pci hotplug handler which
gets called only if no firmware is loaded or if -no-acpi is
passed on the command line. As non-DT integration is
not yet supported by the kernel, we must make sure we
are in DT mode. This limitation will be removed as soon
as the topology description feature gets supported.
Eric Auger [Fri, 14 Feb 2020 13:27:43 +0000 (14:27 +0100)]
virtio-iommu-pci: Add virtio iommu pci support
This patch adds virtio-iommu-pci, which is the pci proxy for
the virtio-iommu device.
Currently, non-DT integration is not yet supported by the kernel.
So the machine must implement a hotplug handler for the
virtio-iommu-pci device that creates the device tree iommu-map
bindings as documented in kernel documentation:
Eric Auger [Fri, 14 Feb 2020 13:27:42 +0000 (14:27 +0100)]
virtio-iommu: Support migration
Add Migration support. We rely on recently added gtree and qlist
migration. We only migrate the domain gtree. The endpoint gtree
is re-constructed in a post-load operation.
Eric Auger [Fri, 14 Feb 2020 13:27:38 +0000 (14:27 +0100)]
virtio-iommu: Implement attach/detach command
This patch implements the endpoint attach/detach to/from
a domain.
Domain and endpoint internal datatypes are introduced.
Both are stored in RB trees. The domain owns a list of
endpoints attached to it. Helpers to get/put
endpoints and domains are also introduced.
As for the IOMMU memory regions, a callback is called on
PCI bus enumeration that initializes for a given device
on the bus hierarchy an IOMMU memory region. The PCI bus
hierarchy is stored locally in IOMMUPciBus and IOMMUDevice
objects.
At the time of the enumeration, the bus number may not be
computed yet.
So operations that will need to retrieve the IOMMUDevice
and its IOMMU memory region from the bus number and devfn,
once the bus number is guaranteed to be frozen, use an array
of IOMMUPciBus, lazily populated.
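A rough, self-contained sketch of the bookkeeping described above, using
GLib trees keyed by ID (the struct and field names are invented for
illustration):

    #include <stdint.h>
    #include <glib.h>

    typedef struct viommu_domain {
        uint32_t id;
        GSList  *endpoints;            /* endpoints currently attached */
    } viommu_domain;

    typedef struct viommu_endpoint {
        uint32_t id;
        viommu_domain *domain;         /* NULL while detached */
    } viommu_endpoint;

    static gint cmp_id(gconstpointer a, gconstpointer b, gpointer data)
    {
        guint ua = GPOINTER_TO_UINT(a), ub = GPOINTER_TO_UINT(b);
        return ua < ub ? -1 : (ua > ub ? 1 : 0);
    }

    static GTree *domains;             /* id -> viommu_domain */

    static void viommu_init(void)
    {
        domains = g_tree_new_full(cmp_id, NULL, NULL, g_free);
    }

    static void viommu_attach(viommu_endpoint *ep, viommu_domain *dom)
    {
        dom->endpoints = g_slist_prepend(dom->endpoints, ep);
        ep->domain = dom;
    }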
Eric Auger [Fri, 14 Feb 2020 13:27:37 +0000 (14:27 +0100)]
virtio-iommu: Decode the command payload
This patch adds the command payload decoding and
introduces the functions that will do the actual
command handling. Those functions are not yet implemented.
Stefan Hajnoczi [Fri, 7 Feb 2020 10:46:19 +0000 (10:46 +0000)]
virtio: gracefully handle invalid region caches
The virtqueue code sets up MemoryRegionCaches to access the virtqueue
guest RAM data structures. The code currently assumes that
VRingMemoryRegionCaches is initialized before device emulation code
accesses the virtqueue. An assertion will fail in
vring_get_region_caches() when this is not true. Device fuzzing found a
case where this assumption is false (see below).
Virtqueue guest RAM addresses can also be changed from a vCPU thread
while an IOThread is accessing the virtqueue. This breaks the same
assumption but this time the caches could become invalid partway through
the virtqueue code. The code fetches the caches RCU pointer multiple
times so we will need to validate the pointer every time it is fetched.
Add checks each time we call vring_get_region_caches() and treat invalid
caches as a nop: memory stores are ignored and memory reads return 0.
#0 0x00007ffff5520625 in raise () at /lib64/libc.so.6
#1 0x00007ffff55098d9 in abort () at /lib64/libc.so.6
#2 0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
#3 0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
#4 0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
#5 vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
#6 0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
#7 virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
#8 virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
#9 0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
#10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
#11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
#12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
#13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
#14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
#17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
#18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
#19 0x00005555559b20c9 in main_loop () at vl.c:1683
#20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441
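The resulting defensive pattern is roughly the following (simplified; only
vring_get_region_caches() is taken from the trace above, the surrounding
lines are illustrative):

    VRingMemoryRegionCaches *caches;

    caches = vring_get_region_caches(vq);   /* may now return NULL */
    if (!caches) {
        return 0;     /* treat as a nop: reads yield 0, writes are dropped */
    }
    /* ... use caches->desc / caches->avail / caches->used as before ... */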
Pan Nengyuan [Thu, 5 Dec 2019 03:45:28 +0000 (11:45 +0800)]
block/nbd: fix memory leak in nbd_open()
In the current implementation there will be a memory leak when
nbd_client_connect() returns an error status. Here is an easy way to
reproduce:
1. run qemu-iotests as follows and check the result with ASAN:
./check -raw 143
The ASAN output backtrace follows:
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
#1 0x7f6295e7e015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015)
#2 0x56281dab4642 in qobject_input_start_struct /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:295
#3 0x56281dab1a04 in visit_start_struct /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:49
#4 0x56281dad1827 in visit_type_SocketAddress qapi/qapi-visit-sockets.c:386
#5 0x56281da8062f in nbd_config /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
#6 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
#7 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Direct leak of 15 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
#1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
#2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
#3 0x56281da804ac in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1834
#4 0x56281da804ac in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
#1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
#2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
#3 0x56281dab41a3 in qobject_input_type_str_keyval /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:536
#4 0x56281dab2ee9 in visit_type_str /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:297
#5 0x56281dad0fa1 in visit_type_UnixSocketAddress_members qapi/qapi-visit-sockets.c:141
#6 0x56281dad17b6 in visit_type_SocketAddress_members qapi/qapi-visit-sockets.c:366
#7 0x56281dad186a in visit_type_SocketAddress qapi/qapi-visit-sockets.c:393
#8 0x56281da8062f in nbd_config /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
#9 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
#10 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Eric Blake [Wed, 12 Feb 2020 02:31:01 +0000 (20:31 -0600)]
nbd-client: Support leading / in NBD URI
The NBD URI specification [1] states that only one leading slash at
the beginning of the URI path component is stripped, not all such
slashes. This becomes important to a patch I just proposed to nbdkit
[2], which would allow the exportname to select a file embedded within
an ext2 image: ext2fs demands an absolute pathname beginning with '/',
and because qemu was inadvertently stripping it, my nbdkit patch had
to work around the behavior.
Note that the qemu bug only affects handling of URIs such as
nbd://host:port//abs/path (where '/abs/path' should be the export
name); it is still possible to use --image-opts and pass the desired
export name with a leading slash directly through JSON even without
this patch.
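A self-contained sketch of the intended stripping, at most one leading
slash, so nbd://host:port//abs/path yields the export name '/abs/path':

    #include <stdio.h>

    /* Per the NBD URI spec, strip exactly one leading '/' from the URI
     * path component when deriving the export name. */
    static const char *export_name_from_path(const char *uri_path)
    {
        return uri_path[0] == '/' ? uri_path + 1 : uri_path;
    }

    int main(void)
    {
        printf("%s\n", export_name_from_path("//abs/path"));   /* /abs/path */
        printf("%s\n", export_name_from_path("/export"));      /* export    */
        return 0;
    }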
Eric Blake [Thu, 6 Feb 2020 17:38:32 +0000 (11:38 -0600)]
nbd: Fix regression with multiple meta contexts
Detected by a hang in the libnbd testsuite. If a client requests
multiple meta contexts (both base:allocation and qemu:dirty-bitmap:x)
at the same time, our attempt to silence a false-positive warning
about a potential uninitialized variable introduced botched logic: we
were short-circuiting the second context, and never sending the
NBD_REPLY_FLAG_DONE. Combining two 'if' into one 'if/else' in
bdf200a55 was wrong (I'm a bit embarrassed that such a change was my
initial suggestion after the v1 patch, then I did not review the v2
patch that actually got committed). Revert that, and instead silence
the false positive warning by replacing 'return ret' with 'return 0'
(the value it always has at that point in the code, even though it
eluded the deduction abilities of the robot that reported the false
positive).
There is a special quiesce PSW that we check to report "shutdown"; any other
disabled wait is detected as "crashed". Architecturally we must only check PSW bits
116-127. Fix this.
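In other words, only the low 12 bits of the PSW address (bits 116-127 of
the 128-bit PSW) take part in the comparison; a sketch of such a check, with
the quiesce address value written here as an assumption:

    #include <stdbool.h>
    #include <stdint.h>

    /* Bits 116-127 of the PSW are the low 12 bits of its address part. */
    static bool is_quiesce_psw_addr(uint64_t psw_addr)
    {
        return (psw_addr & 0xfffULL) == 0xfffULL;  /* 0xfff: assumed value */
    }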
Janosch Frank [Fri, 14 Feb 2020 15:16:21 +0000 (10:16 -0500)]
s390x: Add missing vcpu reset functions
Up to now we only had an ioctl to reset vcpu data that QEMU couldn't reach
for the initial reset, which was also called for the clear reset. To
be architecture compliant, we also need to clear local interrupts on a
normal reset.
Because of this and the upcoming protvirt support we need to add
ioctls for the missing clear and normal resets.
Thomas Huth [Thu, 30 Jan 2020 13:34:17 +0000 (14:34 +0100)]
target/s390x/translate: Fix RNSBG instruction
RNSBG is handled via the op_rosbg() helper function. But RNSBG has
the opcode 0xEC54, i.e. 0x54 as second byte, while op_rosbg() currently
checks for 0x55. This seems to be a typo; fix it to use 0x54 instead,
so that op_rosbg() does not abort() anymore if a program uses RNSBG.
I've checked with a simple test function that I now get the same results
with KVM and with TCG:
Alex Bennée [Tue, 25 Feb 2020 12:47:10 +0000 (12:47 +0000)]
tests/tcg: take into account expected clashes pauth-4
Pointer authentication isn't perfect so measure the percentage of
failed checks. As we want to vary the pointer we work through a bunch
of different addresses.
Alex Bennée [Tue, 25 Feb 2020 17:49:08 +0000 (17:49 +0000)]
tcg: save vaddr temp for plugin usage
While do_gen_mem_cb does copy (via extu_tl_i64) vaddr into a new temp,
this won't help if the vaddr temp gets clobbered by the actual
load/store op. To avoid this clobbering we explicitly copy vaddr
before the op to ensure it is live by the time we do the
instrumentation.
Alex Bennée [Tue, 25 Feb 2020 12:47:06 +0000 (12:47 +0000)]
tests/plugins: make howvec clean-up after itself.
TCG plugins are responsible for their own memory usage, and although
the plugin_exit is tied to the end of execution in this case, it is
still poor practice. Ensure we delete the hash table and related data
when we are done to be a good plugin citizen.
Alex Bennée [Tue, 25 Feb 2020 12:47:05 +0000 (12:47 +0000)]
target/riscv: progressively load the instruction during decode
The plugin system would throw up a harmless warning when it detected
that a disassembly of an instruction didn't use all its bytes. Fix
the riscv decoder to only load the instruction bytes it needs as it
needs them.
This drops opcode from the ctx in favour of passing the appropriately
sized opcode down a few levels of the decode.
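Conceptually the decoder now reads 16 bits first and only fetches the
second halfword when the encoding says the instruction is 32 bits wide,
along these lines (standalone sketch, not the riscv code itself):

    #include <stdint.h>

    /* RISC-V: if the two lowest bits of the first halfword are not 0b11,
     * this is a 16-bit compressed instruction; otherwise fetch 16 more
     * bits to form the full 32-bit opcode. */
    static uint32_t load_insn(uint16_t (*load16)(uint64_t pc), uint64_t pc)
    {
        uint32_t opcode = load16(pc);

        if ((opcode & 0x3) != 0x3) {
            return opcode;                                  /* 16-bit insn */
        }
        return opcode | ((uint32_t)load16(pc + 2) << 16);   /* 32-bit insn */
    }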
Alex Bennée [Tue, 25 Feb 2020 20:20:09 +0000 (20:20 +0000)]
tests/iotests: be a little more forgiving on the size test
At least on ZFS this was failing as 512 was less than or equal to 512.
I suspect the reason is additional compression done by ZFS and the way
qemu-img obtains the actual size.
Loosen the criteria to make sure after is not bigger than before and
also dump the values in the report.
This fixes the following warnings that Travis has detected in the
YAML configuration:
- 'on root: missing os, using the default "linux"'
- 'on root: the key matrix is an alias for jobs, using jobs'
- 'on jobs.include.python: unexpected sequence, using the first value (3.5)'
- 'on jobs.include.python: unexpected sequence, using the first value (3.6)'
Thomas Huth [Tue, 25 Feb 2020 12:46:56 +0000 (12:46 +0000)]
travis.yml: Test the s390-ccw build, too
Since we can now use a s390x host on Travis, we can also build and
test the s390-ccw bios images there. For this we have to make sure
that roms/SLOF is checked out, too, and then move the generated *.img
files to the right location before running the tests.
Alex Bennée [Tue, 25 Feb 2020 12:46:55 +0000 (12:46 +0000)]
tests/rcutorture: mild documenting refactor of update thread
This is mainly to help with reasoning about what the test is trying to do.
We can move rcu_stress_idx to a local variable as there is only ever
one updater thread. I've also added an assert to catch the case where
we end up updating the current structure to itself, which is the only
way I can see causing the mberror cases we are seeing on Travis.
We shall see if the rcutorture test failures go away now.
Pan Nengyuan [Mon, 24 Feb 2020 04:13:35 +0000 (12:13 +0800)]
vhost-user-blk: delete virtioqueues in unrealize to fix memleaks
The virtio queues are not deleted in unrealize, nor on the error path in
realize. This patch fixes these memleaks; the leak stack is as follows:
Direct leak of 114688 byte(s) in 16 object(s) allocated from:
#0 0x7f24024fdbf0 in calloc (/lib64/libasan.so.3+0xcabf0)
#1 0x7f2401642015 in g_malloc0 (/lib64/libglib-2.0.so.0+0x50015)
#2 0x55ad175a6447 in virtio_add_queue /mnt/sdb/qemu/hw/virtio/virtio.c:2327
#3 0x55ad17570cf9 in vhost_user_blk_device_realize /mnt/sdb/qemu/hw/block/vhost-user-blk.c:419
#4 0x55ad175a3707 in virtio_device_realize /mnt/sdb/qemu/hw/virtio/virtio.c:3509
#5 0x55ad176ad0d1 in device_set_realized /mnt/sdb/qemu/hw/core/qdev.c:876
#6 0x55ad1781ff9d in property_set_bool /mnt/sdb/qemu/qom/object.c:2080
#7 0x55ad178245ae in object_property_set_qobject /mnt/sdb/qemu/qom/qom-qobject.c:26
#8 0x55ad17821eb4 in object_property_set_bool /mnt/sdb/qemu/qom/object.c:1338
#9 0x55ad177aeed7 in virtio_pci_realize /mnt/sdb/qemu/hw/virtio/virtio-pci.c:1801
Pan Nengyuan [Tue, 25 Feb 2020 07:55:54 +0000 (15:55 +0800)]
virtio-crypto: do delete ctrl_vq in virtio_crypto_device_unrealize
Similar to other virtio devices, ctrl_vq is not deleted in virtio_crypto_device_unrealize; this patch fixes it.
This device already maintains its vq pointers, so we use the new virtio_delete_queue function directly to do the cleanup.
The leak stack:
Direct leak of 10752 byte(s) in 3 object(s) allocated from:
#0 0x7f4c024b1970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
#1 0x7f4c018be49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
#2 0x55a2f8017279 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333
#3 0x55a2f8057035 in virtio_crypto_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto.c:814
#4 0x55a2f8005d80 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531
#5 0x55a2f8497d1b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891
#6 0x55a2f8b48595 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238
#7 0x55a2f8b54fad in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26
#8 0x55a2f8b4de2c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390
#9 0x55a2f80609c9 in virtio_crypto_pci_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto-pci.c:58
Pan Nengyuan [Tue, 25 Feb 2020 07:55:53 +0000 (15:55 +0800)]
virtio-pmem: do delete rq_vq in virtio_pmem_unrealize
Similar to other virtio devices, rq_vq is not deleted in
virtio_pmem_unrealize; this patch fixes it. This device already
maintains a vq pointer, so we use the new virtio_delete_queue
function directly to do the cleanup.
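The common shape of all three fixes, sketched for the rq_vq case (the
unrealize signature and field names are assumptions; virtio_delete_queue()
is the helper named above):

    static void virtio_pmem_unrealize(DeviceState *dev)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(dev);
        VirtIOPMEM *pmem = VIRTIO_PMEM(dev);

        virtio_delete_queue(pmem->rq_vq);   /* free the queue added in realize */
        virtio_cleanup(vdev);
    }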