Laurent Vivier [Tue, 21 Jun 2016 17:51:14 +0000 (19:51 +0200)]
linux-user: fix netlink memory corruption
The netlink emulation byte-swaps data in place in guest memory (which is bad).
This is fine when the data come from the host, as they are generated by
the host anyway.
But it does not work when the data come from the guest: the guest may
try to reuse the data after they have been byte-swapped.
This is what happens in glibc:
glibc generates a sequence number in nlh.nlmsg_seq and calls
sendto() with this nlh. In sendto(), we byte-swap nlh.nlmsg_seq.
Later, after the recvmsg(), glibc compares nlh.nlmsg_seq with the
sequence number returned by the kernel, and of course the comparison
fails (so glibc hangs), because nlh.nlmsg_seq is not valid anymore.
The involved code in glibc is:
sysdeps/unix/sysv/linux/check_pf.c:make_request()

    ...
    req.nlh.nlmsg_seq = time (NULL);
    ...
    if (TEMP_FAILURE_RETRY (__sendto (fd, (void *) &req, sizeof (req), 0,
                                      (struct sockaddr *) &nladdr,
                                      sizeof (nladdr))) < 0)
    <here req.nlh.nlmsg_seq has been byte-swapped>
    ...
    do
      {
        ...
        ssize_t read_len = TEMP_FAILURE_RETRY (__recvmsg (fd, &msg, 0));
        ...
        struct nlmsghdr *nlmh;
        for (nlmh = (struct nlmsghdr *) buf;
             NLMSG_OK (nlmh, (size_t) read_len);
             nlmh = (struct nlmsghdr *) NLMSG_NEXT (nlmh, read_len))
          {
            <we compare nlmh->nlmsg_seq with corrupted req.nlh.nlmsg_seq>
            if (nladdr.nl_pid != 0 || (pid_t) nlmh->nlmsg_pid != pid
                || nlmh->nlmsg_seq != req.nlh.nlmsg_seq)
              continue;
            ...
            else if (nlmh->nlmsg_type == NLMSG_DONE)
              /* We found the end, leave the loop. */
              done = true;
          }
      }
    while (! done);
Because of the "continue" on "nlmh->nlmsg_seq != req.nlh.nlmsg_seq",
"done" can never be set to "true", and we have an infinite loop.
This is why commands like "apt-get update" or "dnf update" hang.
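For illustration, here is a minimal sketch of the bug pattern and of a
safe alternative; the helper names are invented for this example and
this is not QEMU's actual code:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    /* BAD: byte-swaps the header in place inside the guest's buffer.
     * If the guest reads nlmsg_seq again afterwards (as glibc does),
     * it sees the swapped value. */
    static void swap_nlmsghdr_in_place(struct nlmsghdr *guest_nlh)
    {
        guest_nlh->nlmsg_seq = __builtin_bswap32(guest_nlh->nlmsg_seq);
        /* ... other header fields ... */
    }

    /* SAFE: swap a private copy and hand that to the host syscall,
     * leaving the guest's buffer untouched. */
    static ssize_t send_netlink_copy(int fd, const struct nlmsghdr *guest_nlh,
                                     size_t len)
    {
        struct nlmsghdr *copy = malloc(len);
        ssize_t ret;

        if (!copy) {
            return -1;
        }
        memcpy(copy, guest_nlh, len);
        copy->nlmsg_seq = __builtin_bswap32(copy->nlmsg_seq);
        /* ... swap the remaining header fields ... */
        ret = send(fd, copy, len, 0);
        free(copy);
        return ret;
    }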
* remotes/jnsnow/tags/ide-pull-request:
block: ignore flush requests when storage is clean
tests: in IDE and AHCI tests perform DMA write before flushing
ide: set retry_unit for PIO and FLUSH requests
ide: refactor retry_unit set and clear into separate function
* remotes/stefanha/tags/tracing-pull-request:
trace: Add QAPI/QMP interfaces to query and control per-vCPU tracing state
trace: Allow event name pattern in "info trace-events"
trace: Conditionally trace events based on their per-vCPU state
trace: Add per-vCPU tracing states for events with the 'vcpu' property
trace: Cosmetic changes on fast-path tracing
disas: Remove unused macro '_'
trace: Identify events with the 'vcpu' property
trace: [bsd-user] Commandline arguments to control tracing
trace: [linux-user] Commandline arguments to control tracing
Peter Maydell [Tue, 19 Jul 2016 08:02:05 +0000 (09:02 +0100)]
Merge remote-tracking branch 'remotes/awilliam/tags/vfio-update-20160718.0' into staging
VFIO update 2016-07-18
One fix for 2.7-rc0 which hides the ARI extended capability, fixing
multifunction support in PCIe configurations where the assigned device
function topology does not match the host (Alex Williamson)
block: ignore flush requests when storage is clean
Some guests (win2008 server, for example) do a lot of unnecessary
flushing when the underlying media has not changed. This adds
additional overhead on the host when calling fsync/fdatasync.
This change introduces a write generation scheme in BlockDriverState.
The current write generation is checked against the last flushed
generation to avoid unnecessary flushes.
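The idea can be sketched as follows (field and function names are
illustrative, not the actual BlockDriverState members):

    typedef struct BlockState {
        unsigned int write_gen;   /* incremented on every completed write */
        unsigned int flushed_gen; /* generation at the last successful flush */
    } BlockState;

    static void note_write_done(BlockState *bs)
    {
        bs->write_gen++;
    }

    static int maybe_flush(BlockState *bs, int (*flush_fn)(void))
    {
        unsigned int gen = bs->write_gen;  /* snapshot before flushing */
        int ret;

        if (gen == bs->flushed_gen) {
            return 0;                      /* storage is clean: skip fsync */
        }
        ret = flush_fn();
        if (ret == 0) {
            bs->flushed_gen = gen;
        }
        return ret;
    }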
The problem with excessive flushing was found by a performance test
which does parallel directory tree creation (from 2 processes).
Results improved from 0.424 loops/sec to 0.432 loops/sec.
Each loop creates 10^3 directories with 10 files in each.
This affected some blkdebug testcases that were expecting error logs from
failure-injected flushes which are now skipped entirely
(tests 026, 071, 089).
This also affects the performance of block jobs: the BLOCK_JOB_READY
events for drive-mirror and active block-commit commands now arrive
faster, before the QMP command successfully returns to the caller
(tests 141, 144).
The following sequence of tests discovered a problem in IDE emulation:
1. Send DMA write to IDE device 0
2. Send CMD_FLUSH_CACHE to the same IDE device, which will be failed by the
block layer using a blkdebug script in tests/ide-test:test_retry_flush
When doing a DMA request, ide/core.c sets s->retry_unit to s->unit in
ide_start_dma. When the DMA completes, ide_set_inactive sets retry_unit
to -1. After that, ide_flush_cache runs and fails thanks to blkdebug.
ide_flush_cb calls ide_handle_rw_error, which asserts that s->retry_unit
== s->unit. But s->retry_unit is still -1 from the previous DMA
completion, and flush does not use anything related to retry.
This patch restricts retry unit assertion only to ops that actually use
retry logic.
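A rough sketch of the idea, with invented types and names:

    #include <assert.h>
    #include <stdbool.h>

    typedef struct IDEState { int unit; int retry_unit; } IDEState;

    enum { OP_PIO, OP_DMA, OP_FLUSH };

    static bool op_uses_retry(int op)
    {
        return op == OP_PIO || op == OP_DMA; /* FLUSH has no retry state */
    }

    static void handle_rw_error(IDEState *s, int op)
    {
        if (op_uses_retry(op)) {
            assert(s->retry_unit == s->unit);
        }
        /* ... apply the configured error policy ... */
    }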
trace: Conditionally trace events based on their per-vCPU state
Events with the 'vcpu' property are conditionally emitted according to
their per-vCPU state. Other events are emitted normally based on their
global tracing state.
Note that the per-vCPU condition check applies to all tracing backends.
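Conceptually, the emission condition looks like this (a simplified
sketch, not the generated tracing code):

    #include <stdbool.h>

    typedef struct TraceEvent {
        bool enabled;       /* global tracing state */
        bool per_vcpu;      /* event has the 'vcpu' property */
        unsigned id;        /* index into the per-vCPU state bitmap */
    } TraceEvent;

    typedef struct CPUState {
        unsigned long trace_dstate; /* one bit per 'vcpu' event */
    } CPUState;

    static bool trace_event_should_fire(const TraceEvent *ev,
                                        const CPUState *cpu)
    {
        if (!ev->enabled) {
            return false;                 /* global state gates everything */
        }
        if (ev->per_vcpu) {
            return cpu->trace_dstate & (1ul << ev->id);
        }
        return true;                      /* non-vCPU events: global only */
    }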
Eliminates a future compilation error when UI code includes the tracing
headers (indirectly pulling "disas/bfd.h" through "qom/cpu.h"), whose
'_' macro would clash with GLib's i18n '_' macro.
trace: [bsd-user] Commandline arguments to control tracing
[Changed const char *trace_file to char *trace_file since it's a
heap-allocated string that needs to be freed. This type is also
returned by trace_opt_parse() and used in vl.c.
Also fixed coding style on the for(;;) and else statements, as suggested
by Eric Blake <[email protected]>, since the patch modifies these lines or
comes close enough.
--Stefan]
trace: [linux-user] Commandline arguments to control tracing
[Changed const char *trace_file to char *trace_file since it's a
heap-allocated string that needs to be freed. This type is also
returned by trace_opt_parse() and used in vl.c.
--Stefan]
Alex Williamson [Mon, 18 Jul 2016 16:55:17 +0000 (10:55 -0600)]
vfio/pci: Hide ARI capability
QEMU supports ARI on downstream ports and assigned devices may support
ARI in their extended capabilities. The endpoint ARI capability
specifies the next function, so that the OS doesn't need to walk
each possible function; however, this next function is relative to the
host, not the guest. This leads to device discovery issues when we
combine separate functions into virtual multi-function packages in a
guest. For example, SR-IOV VFs are not enumerated by simply probing
the function address space, therefore the ARI next-function field is
zero. When we combine multiple VFs together as a multi-function
device in the guest, the guest OS identifies that ARI is enabled, relies
on this next-function field, and stops looking for additional functions
after the first is found.
Long term we should expose the ARI capability to the guest to enable
configurations with more than 8 functions per slot, but this requires
additional QEMU PCI infrastructure to manage the next-function field
for multiple, otherwise independent devices. In the short term,
hiding this capability allows equivalent functionality to what we
currently have on non-express chipsets.
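In spirit, the change filters the ARI capability ID while building the
guest's view of the extended capability list; a simplified sketch (the
real logic lives in hw/vfio/pci.c and handles many more IDs):

    #include <stdbool.h>
    #include <stdint.h>

    #define PCI_EXT_CAP_ID_ARI 0x0e   /* from the PCIe spec */

    /* Decide whether a host extended capability is exposed to the guest. */
    static bool expose_ext_cap_to_guest(uint16_t cap_id)
    {
        switch (cap_id) {
        case PCI_EXT_CAP_ID_ARI:
            return false;   /* next-function is host-relative: hide it */
        default:
            return true;
        }
    }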
Renames look like this with git-diff(1) when diff.renames = true is set:
diff --git a/a b/b
similarity index 100%
rename from a
rename to b
This raises the "Does not appear to be a unified-diff format patch"
error because checkpatch.pl only considers a diff valid if it contains
at least one "@@" hunk.
This patch accepts renames and copies too so that checkpatch.pl exits
successfully when a diff only renames/copies files. The git diff
extended header format is described on the git-diff(1) man page.
Roman Pen [Wed, 13 Jul 2016 13:03:24 +0000 (15:03 +0200)]
linux-aio: prevent submitting more than MAX_EVENTS
Invoking io_setup(MAX_EVENTS) asks the kernel to create a ring buffer
with the specified number of events. But the kernel's ring buffer
allocation logic is a bit tricky (the ring buffer is page-size aligned
and some per-CPU allocations are required), so eventually more events
than requested are allocated. From the userspace side we have to follow
the convention: either never io_submit() more than MAX_EVENTS requests,
or change the logic that consumes completed events accordingly. The
pitfall is in the following sequence:
    MAX_EVENTS = 128
    io_setup(MAX_EVENTS)
    io_submit(MAX_EVENTS)
    io_submit(MAX_EVENTS)
    /* now 256 events are in-flight */
    io_getevents(MAX_EVENTS) = 128
    /* we can handle only 128 events at once; to be sure
     * that nothing is pending, the io_getevents(MAX_EVENTS)
     * call must be invoked once more, or a hang will happen. */
To prevent the hang, or the need to reiterate the io_getevents() call,
this patch restricts the number of in-flight requests, which is now
limited to MAX_EVENTS.
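The accounting boils down to a simple in-flight counter; a sketch with
illustrative names (not the actual linux-aio.c code):

    #include <errno.h>

    #define MAX_EVENTS 128

    struct LaioState {
        int in_flight;   /* submitted but not yet reaped */
    };

    static int laio_try_submit(struct LaioState *s /* , struct iocb *iocb */)
    {
        if (s->in_flight >= MAX_EVENTS) {
            return -EAGAIN;          /* defer: never exceed the setup size */
        }
        /* io_submit(ctx, 1, &iocb); */
        s->in_flight++;
        return 0;
    }

    static void laio_completions_reaped(struct LaioState *s, int nr)
    {
        s->in_flight -= nr;          /* frees slots for queued requests */
    }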
Peter Maydell [Mon, 18 Jul 2016 10:24:15 +0000 (11:24 +0100)]
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.7-20160718' into staging
ppc patch queue 2016-07-18
Here's what ought to be the final ppc pull request before the 2.7 hard
freeze. This set contains a rework of the DBDMA device for Mac
platforms, and some assorted cleanups and bugfixes.
# gpg: Signature made Mon 18 Jul 2016 05:35:27 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <[email protected]>"
# gpg: aka "David Gibson (Red Hat) <[email protected]>"
# gpg: aka "David Gibson (ozlabs.org) <[email protected]>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160718:
ppc: Yet another fix for the huge page support detection mechanism
target-ppc: fix left shift overflow in hpte_page_shift
ppc/mmu-hash64: Remove duplicated #include statement
ppc: abort if compat property contains an unknown value
spapr: Ensure CPU cores are added contiguously and removed in LIFO order
vfio/spapr: Remove stale ioctl() call
ppc: Fix support for odd MSR combinations
dbdma: reset io->processing flag for unassigned DBDMA channel rw accesses
dbdma: set FLUSH bit upon reception of flush command for unassigned DBDMA channels
dbdma: fix load_word/store_word value endianness
dbdma: fix endian of DBDMA_CMDPTR_LO during branch
dbdma: add per-channel debugging enabled via DEBUG_DBDMA_CHANMASK
dbdma: always define DBDMA_DPRINTF and enable debug with DEBUG_DBDMA
spapr: fix core unplug crash
Thomas Huth [Fri, 15 Jul 2016 08:10:25 +0000 (10:10 +0200)]
ppc: Yet another fix for the huge page support detection mechanism
Commit 86b50f2e1bef ("Disable huge page support if it is not available
for main RAM") already made sure that huge page support is not announced
to the guest if the normal RAM of non-NUMA configurations is not backed
by a huge page filesystem. However, there is one more case that can go
wrong: NUMA is enabled, but the RAM of the NUMA nodes is not configured
with huge page support (and only the memory of a DIMM is configured with
it). When QEMU is started with the following command line, for example,
the Linux guest currently crashes because it is trying to use huge pages
on a memory region that does not support huge pages:
To fix this issue, we've got to make sure to disable huge page support,
too, when there is a NUMA node that is not using a memory backend with
huge page support.
Fixes: 86b50f2e1befc33407bdfeb6f45f7b0d2439a740
Signed-off-by: Thomas Huth <[email protected]>
Signed-off-by: David Gibson <[email protected]>
Greg Kurz [Wed, 13 Jul 2016 10:00:17 +0000 (12:00 +0200)]
ppc: abort if compat property contains an unknown value
It is not possible to set the compat property to an unknown value with
powerpc_set_compat(). Something must have gone terribly wrong in QEMU
if we detect an "Internal error" in powerpc_get_compat(). Let's abort then.
This patch also drops the "max_compat ? *max_compat : -1" construct. It is
useless since max_compat is dereferenced a few lines above.
spapr: Ensure CPU cores are added contiguously and removed in LIFO order
If CPU core addition or removal is allowed in random order leading to
holes in the core id range (and hence in the cpu_index range), migration
can fail as migration with holes in cpu_index range isn't yet handled
correctly.
Prevent this situation by enforcing the addition in contiguous order
and removal in LIFO order so that we never end up with holes in
cpu_index range.
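Sketched as two checks (assuming dense, zero-based core ids for
simplicity; this is not the actual spapr code):

    #include <stdbool.h>

    /* Plugging is only allowed at the next contiguous slot... */
    static bool core_plug_allowed(int core_id, int nr_plugged)
    {
        return core_id == nr_plugged;
    }

    /* ...and unplugging only from the end (LIFO), so the cpu_index
     * range never develops holes. */
    static bool core_unplug_allowed(int core_id, int nr_plugged)
    {
        return core_id == nr_plugged - 1;
    }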
David Gibson [Tue, 12 Jul 2016 06:54:03 +0000 (16:54 +1000)]
vfio/spapr: Remove stale ioctl() call
This ioctl() call to VFIO_IOMMU_SPAPR_TCE_REMOVE was left over from an
earlier version of the code and has since been folded into
vfio_spapr_remove_window().
It wasn't caught because, although the argument structure has been
removed, the name "remove" still resolves to the libc function remove(),
so this didn't trigger a compile failure.
The ioctl() was also almost certain to fail silently and harmlessly with
the bogus argument, so this wasn't caught in testing.
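The pitfall is easy to reproduce; this stand-alone snippet compiles
without complaint (SOME_IOCTL stands in for VFIO_IOMMU_SPAPR_TCE_REMOVE):

    #include <stdio.h>      /* declares int remove(const char *pathname); */
    #include <sys/ioctl.h>

    #define SOME_IOCTL 0    /* stand-in for VFIO_IOMMU_SPAPR_TCE_REMOVE */

    static void stale_call(int fd)
    {
        /* The local "remove" structure was deleted, but this still
         * compiles: "remove" now names the libc function, so &remove is
         * merely a function pointer handed to the variadic ioctl(). */
        ioctl(fd, SOME_IOCTL, &remove);
    }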
MacOS uses an architecturally illegal MSR combination that
seems nonetheless supported by 32-bit processors, which is
to have MSR[PR]=1 and one or more of MSR[DR/IR/EE]=0.
This adds support for it. To work properly we also need to include
support for PR=1,{I,D}R=0 in the MMU index used by the QEMU TLB.
Mark Cave-Ayland [Sun, 10 Jul 2016 18:08:54 +0000 (19:08 +0100)]
dbdma: add per-channel debugging enabled via DEBUG_DBDMA_CHANMASK
By default, large amounts of DBDMA debug output are produced when often
just 1 or 2 channels are of interest. Introduce DEBUG_DBDMA_CHANMASK to
allow the developer to select the channels of interest at compile time,
and then further add the extra channel information to each debug
statement where possible.
Also clearly mark the start/end of DBDMA_run_bh to allow tracking the bottom
half execution.
This happens because spapr_core_unplug() assumes cpu_dt_id == core_id.
As long as cpu_dt_id is derived from the non-stable cpu_index, this is
only true when you plug cores with contiguous ids.
It is safer to be consistent: the DR connector was created with an
index that is immediately written to cc->core_id, and spapr_core_plug()
also relies on cc->core_id.
Peter Maydell [Thu, 14 Jul 2016 16:32:53 +0000 (17:32 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20160714' into staging
target-arm queue:
* add virtio-mmio transport base address to device path
(avoid an assertion failure with multiple virtio-scsi-devices)
* revert hw/ptimer commit 5a50307 which causes regressions on
SPARC guests
* use Neon to accelerate zero-page checking on AArch64 hosts
* set the MPIDR for TCG to match how KVM does it (and fit with
GICv2/GICv3 restrictions on SGI target lists)
* add some missing AArch32 TLBI hypervisor TLB operations
* m25p80: Fix QIOR/DIOR handling for Winbond
* hw/misc: fix typo in Aspeed SCU hw-strap2 property name
* ast2400: pretend DMAs are done for U-boot
* ast2400: some minor code cleanups
* remotes/pmaydell/tags/pull-target-arm-20160714:
ast2400: externalize revision numbers
ast2400: pretend DMAs are done for U-boot
ast2400: replace aspeed_smc_is_implemented()
hw/misc: fix typo in Aspeed SCU hw-strap2 property name
m25p80: Fix QIOR/DIOR handling for Winbond
target-arm: Add missed AArch32 TLBI sytem registers
hw/arm/virt: tcg: adjust MPIDR like KVM
gic: provide defines for v2/v3 targetlist sizes
target-arm: Use Neon for zero checking
Revert "hw/ptimer: Perform counter wrap around if timer already expired"
virtio-mmio: format transport base address in BusClass.get_dev_path
AST2400_A0_SILICON_REV is defined twice. Fix this by including the
definition in the header file as well as the routine to check if a
silicon revision is supported. It will be useful to reuse in other
controllers.
Let's also add AST2500_A0_SILICON_REV for future use.
U-boot does SPI timing calibration using DMA transfers. To let the
initialization continue, we fake success by setting the DMA status of
the Interrupt Control Register.
For the moment, DMA support is not required as it is not used in
normal operation.
aspeed_smc_is_implemented() filters invalid registers in a peculiar
way. Let's remove it and open-code the if conditions. This serves the
same purpose, the aesthetics are better, and new registers can easily be
added.
Winbond flashes also support continuous read mode but, unlike other
flash types, the read mode clock cycles are counted as part of the dummy
cycles. This patch adds proper handling of the read mode byte and
updates the number of dummy cycles needed. QPI mode and dummy cycle
configuration are not supported.
Andrew Jones [Thu, 14 Jul 2016 15:51:37 +0000 (16:51 +0100)]
hw/arm/virt: tcg: adjust MPIDR like KVM
KVM adjusts the MPIDR of guest vcpus based on the architecture of
the host, 32-bit vs. 64-bit, and, for 64-bit, also on the type of
GIC the guest is using. To be consistent and improve SGI efficiency
we make the same adjustments for TCG as 64-bit KVM hosts. We do not
attempt consistency with 32-bit KVM hosts, as that would reduce SGI
efficiency, and KVM is expected to change.
As MPIDR is a system register, and thus guest visible, we only make
adjustments for current and later versioned machines.
Revert "hw/ptimer: Perform counter wrap around if timer already expired"
Software should see the timer counter wrap around only after the IRQ has
been triggered. This fixes a regression introduced by commit 5a50307
("hw/ptimer: Perform counter wrap around if timer already expired"),
which resulted in a monotonic timer jumping backwards on an emulated
SPARC machine running a NetBSD guest OS, as reported by Mark
Cave-Ayland.
The reason is that the vmstate sections for the two scsi-hd devices are
not uniquely identifiable by name.
The direct parent buses of the scsi-hd devices -- scsi0.0 and scsi1.0 --
support the BusClass.get_dev_path member function. scsibus_get_dev_path()
formats a device path prefix with the help of its topologically parent
bus, and then appends the chan:id:lun triplet to it. For both scsi-hd
devices, this triplet is 0:0:0.
(Here we use "device path" in the QEMU migration sense, for vmstate
section identification, not in the OFW or UEFI device path senses.)
The virtio-scsi HBA is plugged into the virtio-mmio bus (implemented by
the internal VirtIOMMIOProxy device). This bus class
(TYPE_VIRTIO_MMIO_BUS) inherits, as its get_dev_path() member function,
the virtio_bus_get_dev_path() method from its parent class
(TYPE_VIRTIO_BUS).
virtio_bus_get_dev_path() does not format any kind of device address on
its own; "virtio addresses" are transport-specific. Therefore
virtio_bus_get_dev_path() asks the topologically parent bus of the proxy
object (implementing the specific virtio transport) to format the address
of the proxy object.
(For virtio-pci devices (where the proxy is an instance of VirtIOPCIProxy,
plugged into a PCI bus), this ends up in pcibus_get_dev_path().)
However, VirtIOMMIOProxy is usually (in practice: always) plugged into
"main-system-bus", the singleton TYPE_SYSTEM_BUS object. This BusClass
does not support formatting QEMU vmstate device paths at all (as
SysBusDevice objects can have zero or more IO ports and zero or more MMIO
regions). Hence the formatting request delegated from
virtio_bus_get_dev_path() gets answered with NULL.
The end result is that the two scsi-hd devices end up with the same device
path "0:0:0", which triggers the assert.
We can solve this by recognizing that virtio-mmio transports are
distinguished from each other by their base addresses in MMIO address
space. Implement virtio_mmio_bus_get_dev_path() as follows:
(1) The virtio device whose devpath is to be formatted resides on a
virtio-mmio bus that is implemented by a VirtIOMMIOProxy object. Ask
the parent bus of VirtIOMMIOProxy to format the device path of
VirtIOMMIOProxy, as a path prefix. (This is identical to what
virtio_bus_get_dev_path() does.)
(2) Append the base address of VirtIOMMIOProxy to the device path, such
as:
- virtio-mmio@000000000a003e00,
- virtio-mmio@000000000a003c00.
Given that these device paths are placed in the migration stream, step (2)
above, if done unconditionally, would break migration. So make that step
conditional on a new VirtIOMMIOProxy property, which is enabled for 2.7
machine types and later.
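A simplified sketch of the resulting path construction (the real
implementation lives in hw/virtio/virtio-mmio.c and differs in plumbing
details):

    #include <glib.h>
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>

    static char *format_virtio_mmio_dev_path(const char *parent_path,
                                             uint64_t base, bool format_base)
    {
        if (!format_base) {
            /* Pre-2.7 machine types keep the legacy (ambiguous) path so
             * that existing migration streams remain compatible. */
            return g_strdup(parent_path ? parent_path : "");
        }
        /* e.g. "virtio-mmio@000000000a003e00" */
        if (parent_path) {
            return g_strdup_printf("%s/virtio-mmio@%016" PRIx64,
                                   parent_path, base);
        }
        return g_strdup_printf("virtio-mmio@%016" PRIx64, base);
    }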
* remotes/bonzini/tags/for-upstream:
hostmem: detect host backend memory is being used properly
hostmem: fix QEMU crash by 'info memdev'
char: do not use atexit cleanup handler
net: do not use atexit for cleanup
slirp: use exit notifier for slirp_smb_cleanup
tap: use an exit notifier to call down_script
util: Fix MIN_NON_ZERO
qemu-sockets: use qapi_free_SocketAddress in cleanup
disas: avoid including everything in headers compiled from C++
json-streamer: fix double-free on exiting during a parse
main-loop: check return value before using pointer
Use "-s" instead of "--quiet" to resolve non-fatal build error on FreeBSD.
scsi-bus: Use longer sense buffer with scanners
scsi-bus: Add SCSI scanner support
Max Filippov [Wed, 6 Jul 2016 06:31:32 +0000 (09:31 +0300)]
target-xtensa: xtfpga: fix FLASH interface width
The FLASH chip on XTFPGA boards is connected through a 16-bit-wide
interface. The latest U-Boot can tell the difference and does not work
correctly with a 32-bit-wide interface.
Set the FLASH chip 'width' property to 2.
Peter Maydell [Thu, 14 Jul 2016 10:48:46 +0000 (11:48 +0100)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Wed 13 Jul 2016 12:46:17 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (34 commits)
iotests: Make 157 actually format-agnostic
vvfat: Fix qcow write target driver specification
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
hmp: use snapshot name to determine whether a snapshot is 'fully available'
qemu-iotests: Test naming of throttling groups
blockdev: Fix regression with the default naming of throttling groups
vmdk: fix metadata write regression
Improve block job rate limiting for small bandwidth values
qcow2: Fix qcow2_get_cluster_offset()
qemu-io: Use correct range limitations
qcow2: Avoid making the L1 table too big
qemu-img: Use strerror() for generic resize error
block: Remove BB options from blockdev-add
qemu-iotests: Test setting WCE with qdev
block/qdev: Allow configuring rerror/werror with qdev properties
commit: Fix use of error handling policy
block/qdev: Allow configuring WCE with qdev properties
block/qdev: Allow node name for drive properties
coroutine: move entry argument to qemu_coroutine_create
test-coroutine: prepare for the next patch
...
Max Reitz [Mon, 11 Jul 2016 13:22:46 +0000 (15:22 +0200)]
iotests: Make 157 actually format-agnostic
iotest 157 pretends not to care about the image format used, but in fact
it does due to the format name not being filtered in its output. This
patch adds filtering and changes the reference output accordingly.
Max Reitz [Mon, 11 Jul 2016 13:54:52 +0000 (15:54 +0200)]
vvfat: Fix qcow write target driver specification
First, bdrv_open_child() expects all options for the child to be
prefixed by the child's name (and a separating dot). Second,
bdrv_open_child() does not take ownership of the QDict passed to it but
only extracts all options for the child, so if a QDict is created for
the sole purpose of passing it to bdrv_open_child(), it needs to be
freed afterwards.
This patch makes vvfat adhere to both of these rules.
Lin Ma [Thu, 7 Jul 2016 05:26:04 +0000 (13:26 +0800)]
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
Currently, the output of 'info snapshots' only shows fully available
snapshots. It is opaque and hides some snapshot information from users.
This is not convenient for users who want to know more about all of the
snapshot information on every block device via the monitor.
Following Kevin's and Max's proposals, this patch makes the output more
detailed:
(qemu) info snapshots
List of snapshots present on all disks:
 ID        TAG               VM SIZE                DATE     VM CLOCK
 --        checkpoint-1         165M 2016-05-22 16:58:07 00:02:06.813
List of partial (non-loadable) snapshots on 'drive_image1':
 ID        TAG               VM SIZE                DATE     VM CLOCK
 1         snap1                   0 2016-05-22 16:57:31 00:01:30.567
Lin Ma [Thu, 7 Jul 2016 05:26:03 +0000 (13:26 +0800)]
hmp: use snapshot name to determine whether a snapshot is 'fully available'
Currently qemu uses the snapshot id to determine whether a snapshot is
fully available, which causes incorrect output in some scenarios.
For instance:
(qemu) info block
drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
(qcow2)
Cache mode: writeback
drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
(qcow2)
Cache mode: writeback
(qemu)
(qemu) info snapshots
There is no snapshot available.
(qemu)
(qemu) snapshot_blkdev_internal drive_image1 snap1
(qemu)
(qemu) info snapshots
There is no suitable snapshot available
(qemu)
(qemu) savevm checkpoint-1
(qemu)
(qemu) info snapshots
ID        TAG               VM SIZE                DATE     VM CLOCK
1         snap1                   0 2016-05-22 16:57:31 00:01:30.567
(qemu)
$ qemu-img snapshot -l disk0.qcow2
Snapshot list:
ID        TAG               VM SIZE                DATE     VM CLOCK
1         snap1                   0 2016-05-22 16:57:31 00:01:30.567
2         checkpoint-1         165M 2016-05-22 16:58:07 00:02:06.813
$ qemu-img snapshot -l disk1.qcow2
Snapshot list:
ID        TAG               VM SIZE                DATE     VM CLOCK
1         checkpoint-1            0 2016-05-22 16:58:07 00:02:06.813
The patch uses snapshot name instead of snapshot id to determine whether a
snapshot is fully available and uses '--' instead of snapshot id in output
because the snapshot id is not guaranteed to be the same on all images.
For instance:
(qemu) info snapshots
List of snapshots present on all disks:
ID        TAG               VM SIZE                DATE     VM CLOCK
--        checkpoint-1         165M 2016-05-22 16:58:07 00:02:06.813
Alberto Garcia [Fri, 8 Jul 2016 14:03:01 +0000 (17:03 +0300)]
qemu-iotests: Test naming of throttling groups
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.
This patch adds a new test to check the naming of throttling groups.
Alberto Garcia [Fri, 8 Jul 2016 14:03:00 +0000 (17:03 +0300)]
blockdev: Fix regression with the default naming of throttling groups
When I/O limits are set for a block device, the name of the throttling
group is taken from the BlockBackend if the user doesn't specify one.
Commit efaa7c4eeb7490c6f37f3 moved the naming of the BlockBackend in
blockdev_init() to the end of the function, after I/O limits are set.
The consequence is that the throttling group gets an empty name.
Reda Sallahi [Thu, 7 Jul 2016 08:42:49 +0000 (10:42 +0200)]
vmdk: fix metadata write regression
Commit "cdeaf1f vmdk: add bdrv_co_write_zeroes" causes a regression on
writes. It writes metadata after every write instead of doing it only once
for each cluster.
vmdk_pwritev() writes metadata whenever m_data is set as valid, so this
patch sets m_data as valid only when we have a new cluster which hasn't
been allocated before, or a zero grain.
Sascha Silbe [Tue, 28 Jun 2016 15:28:41 +0000 (17:28 +0200)]
Improve block job rate limiting for small bandwidth values
ratelimit_calculate_delay() previously reset the accounting every time
slice, no matter how much data had been processed before. This had (at
least) two consequences:
1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
Not sure if there are real-world use cases where this would be a
problem. Mirroring and backup over a slow link (e.g. DSL) would
come to mind, though.
2. Tests for block job operations (e.g. cancel) were rather racy
All block jobs currently use a time slice of 100ms. That's a
reasonable value to get smooth output during regular
operation. However this also meant that the state of block jobs
changed every 100ms, no matter how low the configured limit was. On
busy hosts, qemu often transferred additional chunks until the test
case had a chance to cancel the job.
Fix the block job rate limit code to delay for more than one time
slice to address the above issues. To make it easier to handle
oversized chunks we switch the semantics from returning a delay
_before_ the current request to a delay _after_ the current
request. If necessary, this delay consists of multiple time slice
units.
Since the mirror job sends multiple chunks in one go even if the rate
limit was exceeded in between, we need to keep track of the start of
the current time slice so we can correctly re-compute the delay for
the updated amount of data.
The minimum bandwidth now is 1 data unit per time slice. The block
jobs are currently passing the amount of data transferred in sectors
and using 100ms time slices, so this translates to 5120
bytes/second. With chunk sizes usually being O(512KiB), tests have
plenty of time (O(100s)) to operate on block jobs. The chance of a
race condition now is fairly remote, except possibly on insanely
loaded systems.
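A simplified sketch of the reworked computation (field names are
approximations of the real RateLimit state; the actual code lives in
include/qemu/ratelimit.h):

    #include <stdint.h>

    typedef struct RateLimit {
        int64_t  slice_start_ns;  /* start of the current time slice */
        int64_t  slice_ns;        /* slice length, e.g. 100 ms */
        uint64_t slice_quota;     /* data units allowed per slice */
        uint64_t dispatched;      /* units accounted in this slice */
    } RateLimit;

    /* Returns the delay in ns to apply *after* the current request. */
    static int64_t ratelimit_delay(RateLimit *l, int64_t now_ns, uint64_t n)
    {
        uint64_t slices;

        if (now_ns > l->slice_start_ns + l->slice_ns) {
            l->slice_start_ns = now_ns;       /* new slice, reset account */
            l->dispatched = 0;
        }
        l->dispatched += n;
        if (l->dispatched <= l->slice_quota) {
            return 0;                         /* within budget: no delay */
        }
        /* An oversized chunk may have to wait for multiple slices. */
        slices = (l->dispatched + l->slice_quota - 1) / l->slice_quota;
        return l->slice_start_ns + (int64_t)slices * l->slice_ns - now_ns;
    }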
Max Reitz [Mon, 20 Jun 2016 14:26:23 +0000 (16:26 +0200)]
qcow2: Fix qcow2_get_cluster_offset()
Recently, qcow2_get_cluster_offset() has been changed to work with bytes
instead of sectors. This invalidated some assertions and introduced a
possible integer multiplication overflow.
This patch removes the now wrong assertion, adding comments and more
assertions to prove its correctness (and fixing the overflow which would
become apparent with the original assertion removed).
Max Reitz [Mon, 20 Jun 2016 14:26:22 +0000 (16:26 +0200)]
qemu-io: Use correct range limitations
create_iovec() has a comment lamenting the lack of SIZE_T_MAX. Since
there actually is a SIZE_MAX, use it.
Two places use INT_MAX for checking the upper bound of a sector count
that is used as an argument for a blk_*() function (blk_discard() and
blk_write_compressed(), respectively). BDRV_REQUEST_MAX_SECTORS should
be used instead.
And finally, do_co_pwrite_zeroes() used to similarly check that the
sector count does not exceed INT_MAX. However, this function is now
backed by blk_co_pwrite_zeroes() which takes bytes as an argument
instead of sectors. Therefore, it should be the byte count that does not
exceed INT_MAX, not the sector count.
Kevin Wolf [Thu, 30 Jun 2016 13:52:37 +0000 (15:52 +0200)]
block: Remove BB options from blockdev-add
werror/rerror are now available as qdev options. The stats-* options are
removed without an existing replacement; they should probably be
configurable with a separate QMP command like I/O throttling settings.
Removing id is left for another day because this involves updating
qemu-iotests cases to use node-name for everything. Before we can do
that, however, all QMP commands must support node-name.
Kevin Wolf [Wed, 29 Jun 2016 15:41:35 +0000 (17:41 +0200)]
block/qdev: Allow configuring rerror/werror with qdev properties
The rerror/werror policies are implemented in the devices, so that's
where they should be configured. In comparison to the old options in
-drive, the qdev properties are only added to those devices that
actually support them.
If the option isn't given (or "auto" is specified), the setting of the
BlockBackend is used for compatibility with the old options. For block
jobs, "auto" is the same as "enospc".
Kevin Wolf [Wed, 29 Jun 2016 15:38:57 +0000 (17:38 +0200)]
commit: Fix use of error handling policy
Commit implemented the 'enospc' policy as 'ignore' if the error was not
ENOSPC. The QAPI documentation promises that it's treated as 'stop'.
Using the common block job error handling function fixes this and also
adds the missing QMP event.
Kevin Wolf [Thu, 23 Jun 2016 13:12:35 +0000 (15:12 +0200)]
block/qdev: Allow configuring WCE with qdev properties
As cache.writeback is a BlockBackend property and as such more related
to the guest device than the BlockDriverState, we already removed it
from the blockdev-add interface. This patch adds the new way to set it,
as a qdev property of the corresponding guest device.
For example: -drive if=none,file=test.img,node-name=img
-device ide-hd,drive=img,write-cache=off
hostmem: detect host backend memory is being used properly
Currently, we use memory_region_is_mapped() to detect if the host
backend memory is being used. This works if the memory is directly
mapped into the guest's address space; however, this is not true for
nvdimm, which uses an aliased memory region to map the memory. This is
why this bug can happen:
https://bugzilla.redhat.com/show_bug.cgi?id=1352769
Fix it by introducing a new field, is_mapped, to HostMemoryBackend;
we set/clear this field accordingly when the device links/unlinks to
the host backend memory.
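Sketched, the flag works like this (simplified; names approximate the
real accessors in backends/hostmem.c):

    #include <stdbool.h>

    typedef struct HostMemoryBackend {
        /* ... */
        bool is_mapped;   /* set once a device links to this backend */
    } HostMemoryBackend;

    /* Called when a device property links to the backend. */
    static int host_memory_backend_claim(HostMemoryBackend *backend)
    {
        if (backend->is_mapped) {
            return -1;                /* already in use by another device */
        }
        backend->is_mapped = true;
        return 0;
    }

    /* Called when the device unlinks from the backend. */
    static void host_memory_backend_release(HostMemoryBackend *backend)
    {
        backend->is_mapped = false;
    }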
'info memdev' crashes QEMU:
(qemu) info memdev
Unexpected error in parse_str() at qapi/string-input-visitor.c:111:
Parameter 'null' expects an int64 value or range
It is caused by a null uint16List being returned when 'host-nodes' has
its default value.
It turns out qemu is calling exit() in various places from various
threads without taking much care of resource state. The atexit()
cleanup handlers cannot easily destroy resources that are in use (by
the same thread or others).
Since c1111a24a3, TCG arm guests run into the following abort() when
running tests: the chardev mutex is locked during the write, so
qemu_mutex_destroy() returns an error:
#0 0x00007fffdbb806f5 in raise () at /lib64/libc.so.6
#1 0x00007fffdbb822fa in abort () at /lib64/libc.so.6
#2 0x00005555557616fe in error_exit (err=<optimized out>, msg=msg@entry=0x555555c38c30 <__func__.14622> "qemu_mutex_destroy")
at /home/drjones/code/qemu/util/qemu-thread-posix.c:39
#3 0x0000555555b0be20 in qemu_mutex_destroy (mutex=mutex@entry=0x5555566aa0e0) at /home/drjones/code/qemu/util/qemu-thread-posix.c:57
#4 0x00005555558aab00 in qemu_chr_free_common (chr=0x5555566aa0e0) at /home/drjones/code/qemu/qemu-char.c:4029
#5 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4038
#6 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4044
#7 0x00005555558b062c in qemu_chr_cleanup () at /home/drjones/code/qemu/qemu-char.c:4557
#8 0x00007fffdbb851e8 in __run_exit_handlers () at /lib64/libc.so.6
#9 0x00007fffdbb85235 in () at /lib64/libc.so.6
#10 0x00005555558d1b39 in testdev_write (testdev=0x5555566aa0a0) at /home/drjones/code/qemu/backends/testdev.c:71
#11 0x00005555558d1b39 in testdev_write (chr=<optimized out>, buf=0x7fffc343fd9a "", len=0) at /home/drjones/code/qemu/backends/testdev.c:95
#12 0x00005555558adced in qemu_chr_fe_write (s=0x5555566aa0e0, buf=buf@entry=0x7fffc343fd98 "0q", len=len@entry=2) at /home/drjones/code/qemu/qemu-char.c:282
Instead of using an atexit() handler, only run the chardev cleanup, as
initially proposed, at the end of main(), where there are fewer chances
(hic) of conflicts or other races.
Paolo Bonzini [Fri, 8 Jul 2016 15:28:34 +0000 (17:28 +0200)]
net: do not use atexit for cleanup
This will be necessary in the next patch, which stops using atexit for
character devices; without it, vhost-user and the redirector filter
will cause a use-after-free. Relying on the ordering of atexit calls
is also brittle, even now that both the network and chardev
subsystems are using atexit.
Paolo Bonzini [Tue, 12 Jul 2016 07:57:12 +0000 (09:57 +0200)]
slirp: use exit notifier for slirp_smb_cleanup
We would like to move net_cleanup() back to the end of the main
function, like it used to be until f30dbae63a46f23116715dff8d130c, but
minimum cleanup is needed regardless at exit() time for slirp's SMB
functionality. Use an exit notifier to call slirp_smb_cleanup.
If net_cleanup() is called first, then remove the exit notifier as it
will become a dangling pointer otherwise.
We would like to move net_cleanup() back to the end of the main
function, like it used to be until f30dbae63a46f23116715dff8d130c, but
minimum tap cleanup is necessary regardless at exit() time. Use an exit
notifier to call the TAP down_script. If net_cleanup() is called first,
then remove the exit notifier as it will become a dangling pointer
otherwise.
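The pattern used by both the slirp and tap patches looks roughly like
this (the registration helpers below stand in for QEMU's
qemu_add_exit_notifier()/qemu_remove_exit_notifier()):

    typedef struct Notifier Notifier;
    struct Notifier {
        void (*notify)(Notifier *notifier, void *data);
    };

    /* Hypothetical registration helpers standing in for QEMU's own. */
    void exit_notifier_add(Notifier *n);
    void exit_notifier_remove(Notifier *n);

    static void tap_exit_notify(Notifier *n, void *data)
    {
        /* run down_script ... */
    }

    static Notifier tap_exit = { .notify = tap_exit_notify };

    static void tap_setup(void)
    {
        exit_notifier_add(&tap_exit);    /* covers exit() paths */
    }

    static void net_cleanup_tap(void)
    {
        /* run down_script ... */
        /* Drop the notifier so it does not dangle after cleanup. */
        exit_notifier_remove(&tap_exit);
    }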
Kevin Wolf [Tue, 21 Jun 2016 18:46:05 +0000 (20:46 +0200)]
block/qdev: Allow node name for drive properties
If a node name instead of a BlockBackend name is specified as the drive
for a guest device, an anonymous BlockBackend is created now.
The order of operations in release_drive() must be reversed in order to
avoid a use-after-free bug because now blk_detach_dev() frees the last
reference if an anonymous BlockBackend is used.
usb-storage uses a hack where it forwards its BlockBackend as a property
to another device that it internally creates. This hack must be updated
so that it doesn't drop its original BB before it can be passed to the
other device. This used to work because we always had the monitor
reference around, but with node-names the device reference is the only
one now.
Paolo Bonzini [Mon, 4 Jul 2016 17:10:01 +0000 (19:10 +0200)]
coroutine: move entry argument to qemu_coroutine_create
In practice the entry argument is always known at creation time, and
it is confusing that sometimes qemu_coroutine_enter is used with a
non-NULL argument to re-enter a coroutine (this happens in
block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value
at creation time, for consistency with e.g. aio_bh_new.
The conversion was mostly mechanical, except for the aforementioned few
places where the semantic patch stumbled (as expected) and for
test_co_queue, which would otherwise produce an uninitialized variable
warning.
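In simplified prototypes, the API change is:

    /* Before: the opaque argument was supplied on first enter.
     * (Coroutine and CoroutineEntry typedefs elided.) */
    Coroutine *qemu_coroutine_create(CoroutineEntry *entry);
    void qemu_coroutine_enter(Coroutine *co, void *opaque);

    /* After: the argument is bound at creation time, matching the style
     * of e.g. aio_bh_new(); re-entering takes no argument. */
    Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque);
    void qemu_coroutine_enter(Coroutine *co);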
Paolo Bonzini [Mon, 4 Jul 2016 17:10:00 +0000 (19:10 +0200)]
test-coroutine: prepare for the next patch
The next patch moves the coroutine argument from first-enter to
creation time. In this case, the coroutine has not been initialized
yet when it is created, so change the variable to a pointer.
Paolo Bonzini [Mon, 4 Jul 2016 17:09:59 +0000 (19:09 +0200)]
coroutine: use QSIMPLEQ instead of QTAILQ
CoQueue does not need to remove any element but the head of the list;
processing is always strictly FIFO. Therefore, the simpler singly-linked
QSIMPLEQ can be used instead.
Alberto Garcia [Tue, 5 Jul 2016 14:29:02 +0000 (17:29 +0300)]
blockjob: Update description of the 'device' field in the QMP API
The 'device' field in all BLOCK_JOB_* events and 'block-job-*' commands
is no longer the device name, but the ID of the job. This patch
updates the documentation to clarify that.
Alberto Garcia [Tue, 5 Jul 2016 14:29:01 +0000 (17:29 +0300)]
qemu-img: Set the ID of the block job in img_commit()
img_commit() creates a block job without an ID. This is no longer
allowed now that we require it to be unique and well-formed. We were
solving this by having a fallback in block_job_create(), but now that
we extended the API of commit_active_start() we can finally set an
explicit ID and revert that change.
Alberto Garcia [Tue, 5 Jul 2016 14:28:58 +0000 (17:28 +0300)]
backup: Add 'job-id' parameter to 'blockdev-backup' and 'drive-backup'
This patch adds a new optional 'job-id' parameter to 'blockdev-backup'
and 'drive-backup', allowing the user to specify the ID of the block
job to be created.