Peter Maydell [Thu, 14 Jul 2016 16:32:53 +0000 (17:32 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20160714' into staging
target-arm queue:
* add virtio-mmio transport base address to device path
(avoid an assertion failure with multiple virtio-scsi-devices)
* revert hw/ptimer commit 5a50307 which causes regressions on
SPARC guests
* use Neon to accelerate zero-page checking on AArch64 hosts
* set the MPIDR for TCG to match how KVM does it (and fit with
GICv2/GICv3 restrictions on SGI target lists)
* add some missing AArch32 TLBI hypervisor TLB operations
* m25p80: Fix QIOR/DIOR handling for Winbond
* hw/misc: fix typo in Aspeed SCU hw-strap2 property name
* ast2400: pretend DMAs are done for U-boot
* ast2400: some minor code cleanups
* remotes/pmaydell/tags/pull-target-arm-20160714:
ast2400: externalize revision numbers
ast2400: pretend DMAs are done for U-boot
ast2400: replace aspeed_smc_is_implemented()
hw/misc: fix typo in Aspeed SCU hw-strap2 property name
m25p80: Fix QIOR/DIOR handling for Winbond
target-arm: Add missed AArch32 TLBI system registers
hw/arm/virt: tcg: adjust MPIDR like KVM
gic: provide defines for v2/v3 targetlist sizes
target-arm: Use Neon for zero checking
Revert "hw/ptimer: Perform counter wrap around if timer already expired"
virtio-mmio: format transport base address in BusClass.get_dev_path
AST2400_A0_SILICON_REV is defined twice. Fix this by including the
definition in the header file as well as the routine to check if a
silicon revision is supported. It will be useful to reuse in other
controllers.
Let's also add AST2500_A0_SILICON_REV for future use.
U-boot does SPI timing calibration using DMA transfers. To let the
initialization continue, we fake success by setting the DMA status of
the Interrupt Control Register.
For the moment, DMA support is not required as it is not used in
normal operation.
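A minimal sketch of the trick; the register offset and bit name below are assumptions for illustration, not values taken from the patch:

    #include <stdint.h>

    #define R_INTR_CTRL            (0x08 / 4)   /* assumed Interrupt Control Register index */
    #define INTR_CTRL_DMA_STATUS   (1u << 11)   /* assumed "DMA done" status bit */

    /* regs[] is assumed to be the controller's register file, indexed by
     * 32-bit word. Reporting the DMA as already complete lets U-Boot's
     * SPI timing calibration loop terminate without real DMA support. */
    static inline void fake_dma_done(uint32_t *regs)
    {
        regs[R_INTR_CTRL] |= INTR_CTRL_DMA_STATUS;
    }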
aspeed_smc_is_implemented() filters invalid registers in a peculiar
way. Let's remove it and open-code the if conditions. This serves the
same purpose, reads better, and new registers can easily be added.
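For illustration, the open-coded check in a read handler could look like the fragment below; the register names, the valid range, and the log message are assumptions rather than code copied from the patch:

    /* sketch of the read path: accept only the registers we model */
    if (addr == R_CONF || addr == R_TIMINGS || addr == R_INTR_CTRL ||
        (addr >= R_CTRL0 && addr < R_CTRL0 + s->num_cs)) {
        return s->regs[addr];
    }
    qemu_log_mask(LOG_UNIMP, "%s: register 0x%x not implemented\n",
                  __func__, (unsigned)(addr << 2));
    return 0;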
Winbond flash also supports continuous read mode but, unlike other
flash types, the read-mode clock cycles are counted as part of the
dummy cycles. This patch adds proper handling of the read mode byte
and updates the required number of dummy cycles accordingly. QPI mode
and dummy cycle configuration are not supported.
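A sketch of the dummy-cycle adjustment this implies; the opcodes and cycle counts are assumptions about Winbond dual/quad I/O fast reads, not values taken from the patch:

    #include <stdint.h>

    /* Sketch only: once the mode byte has been clocked in, its clock
     * cycles count against the dummy cycles, so fewer extra dummy
     * cycles remain before data is returned. */
    static int winbond_extra_dummies(uint8_t cmd)
    {
        switch (cmd) {
        case 0xBB: /* DIOR: the mode byte covers all 4 dummy cycles */
            return 0;
        case 0xEB: /* QIOR: 2 mode-byte cycles plus 4 remaining dummies */
            return 4;
        default:
            return -1; /* not a dual/quad I/O fast read */
        }
    }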
Andrew Jones [Thu, 14 Jul 2016 15:51:37 +0000 (16:51 +0100)]
hw/arm/virt: tcg: adjust MPIDR like KVM
KVM adjusts the MPIDR of guest vcpus based on the architecture of
the host, 32-bit vs. 64-bit, and, for 64-bit, also on the type of
GIC the guest is using. To be consistent and improve SGI efficiency
we make the same adjustments for TCG as 64-bit KVM hosts. We neglect
to add consistency with 32-bit KVM hosts, as that would reduce SGI
efficiency and KVM is expected to change.
As MPIDR is a system register, and thus guest visible, we only make
adjustments for current and later versioned machines.
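A minimal sketch of the kind of adjustment described; the cluster sizes (8 for GICv2, 16 for GICv3) follow the targetlist defines mentioned in the queue summary and are assumptions here:

    #include <stdint.h>

    /* Split the vcpu index into Aff1/Aff0 so that Aff0 never exceeds
     * what a single SGI target list can address. */
    static uint64_t cpu_mp_affinity(unsigned idx, unsigned clustersz)
    {
        uint64_t aff1 = idx / clustersz;
        uint64_t aff0 = idx % clustersz;

        return (aff1 << 8) | aff0;   /* Aff1 in bits [15:8], Aff0 in [7:0] */
    }

With clustersz = 16, vcpu 17 gets Aff1 = 1 and Aff0 = 1, so SGI target lists never need to address more CPUs per cluster than the GIC allows.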
Revert "hw/ptimer: Perform counter wrap around if timer already expired"
Software should see the timer counter wrap around only after the IRQ has
been triggered.
This fixes a regression introduced by commit 5a50307 ("hw/ptimer: Perform
counter wrap around if timer already expired"), which resulted in a
monotonic timer jumping backwards on an emulated SPARC machine running a
NetBSD guest, as reported by Mark Cave-Ayland.
The reason is that the vmstate sections for the two scsi-hd devices are
not uniquely identifiable by name.
The direct parent buses of the scsi-hd devices -- scsi0.0 and scsi1.0 --
support the BusClass.get_dev_path member function. scsibus_get_dev_path()
formats a device path prefix with the help of its topologically parent
bus, and then appends the chan:id:lun triplet to it. For both scsi-hd
devices, this triplet is 0:0:0.
(Here we use "device path" in the QEMU migration sense, for vmstate
section identification, not in the OFW or UEFI device path senses.)
The virtio-scsi HBA is plugged into the virtio-mmio bus (implemented by
the internal VirtIOMMIOProxy device). This bus class
(TYPE_VIRTIO_MMIO_BUS) inherits, as its get_dev_path() member function,
the virtio_bus_get_dev_path() method from its parent class
(TYPE_VIRTIO_BUS).
virtio_bus_get_dev_path() does not format any kind of device address on
its own; "virtio addresses" are transport-specific. Therefore
virtio_bus_get_dev_path() asks the topologically parent bus of the proxy
object (implementing the specific virtio transport) to format the address
of the proxy object.
(For virtio-pci devices (where the proxy is an instance of VirtIOPCIProxy,
plugged into a PCI bus), this ends up in pcibus_get_dev_path().)
However, VirtIOMMIOProxy is usually (in practice: always) plugged into
"main-system-bus", the singleton TYPE_SYSTEM_BUS object. This BusClass
does not support formatting QEMU vmstate device paths at all (as
SysBusDevice objects can have zero or more IO ports and zero or more MMIO
regions). Hence the formatting request delegated from
virtio_bus_get_dev_path() gets answered with NULL.
The end result is that the two scsi-hd devices end up with the same device
path "0:0:0", which triggers the assert.
We can solve this by recognizing that virtio-mmio transports are
distinguished from each other by their base addresses in MMIO address
space. Implement virtio_mmio_bus_get_dev_path() as follows:
(1) The virtio device whose devpath is to be formatted resides on a
virtio-mmio bus that is implemented by a VirtIOMMIOProxy object. Ask
the parent bus of VirtIOMMIOProxy to format the device path of
VirtIOMMIOProxy, as a path prefix. (This is identical to what
virtio_bus_get_dev_path() does.)
(2) Append the base address of VirtIOMMIOProxy to the device path, such
as:
- virtio-mmio@000000000a003e00,
- virtio-mmio@000000000a003c00.
Given that these device paths are placed in the migration stream, step (2)
above, if done unconditionally, would break migration. So make that step
conditional on a new VirtIOMMIOProxy property, which is enabled for 2.7
machine types and later.
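A hedged sketch of the resulting get_dev_path implementation; the format_transport_address field, the VIRTIO_MMIO() cast and the way the MMIO base is read from the proxy's SysBusDevice are assumptions about the final code:

    static char *virtio_mmio_bus_get_dev_path(DeviceState *dev)
    {
        BusState *mmio_bus = qdev_get_parent_bus(dev);
        VirtIOMMIOProxy *proxy = VIRTIO_MMIO(mmio_bus->parent);
        char *proxy_path = qdev_get_dev_path(DEVICE(proxy));  /* step (1) */
        char *path;

        if (!proxy->format_transport_address) {
            return proxy_path;        /* pre-2.7 compatibility behaviour */
        }

        /* step (2): append the transport's MMIO base address */
        path = g_strdup_printf("%s%svirtio-mmio@%016" PRIx64,
                               proxy_path ? proxy_path : "",
                               proxy_path ? "/" : "",
                               (uint64_t)SYS_BUS_DEVICE(proxy)->mmio[0].addr);
        g_free(proxy_path);
        return path;
    }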
* remotes/bonzini/tags/for-upstream:
hostmem: detect host backend memory is being used properly
hostmem: fix QEMU crash by 'info memdev'
char: do not use atexit cleanup handler
net: do not use atexit for cleanup
slirp: use exit notifier for slirp_smb_cleanup
tap: use an exit notifier to call down_script
util: Fix MIN_NON_ZERO
qemu-sockets: use qapi_free_SocketAddress in cleanup
disas: avoid including everything in headers compiled from C++
json-streamer: fix double-free on exiting during a parse
main-loop: check return value before using pointer
Use "-s" instead of "--quiet" to resolve non-fatal build error on FreeBSD.
scsi-bus: Use longer sense buffer with scanners
scsi-bus: Add SCSI scanner support
Max Filippov [Wed, 6 Jul 2016 06:31:32 +0000 (09:31 +0300)]
target-xtensa: xtfpga: fix FLASH interface width
The FLASH chip on XTFPGA boards is connected through a 16-bit-wide
interface. The latest U-Boot can tell the difference and does not work
correctly with a 32-bit-wide interface.
Set the FLASH chip 'width' property to 2.
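For illustration, a board model could set the property like this (a sketch assuming the flash chip is created through qdev and that the device type name is "cfi.pflash01"):

    DeviceState *dev = qdev_create(NULL, "cfi.pflash01");
    qdev_prop_set_uint32(dev, "width", 2);   /* 16-bit-wide interface */
    qdev_init_nofail(dev);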
Peter Maydell [Thu, 14 Jul 2016 10:48:46 +0000 (11:48 +0100)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Wed 13 Jul 2016 12:46:17 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (34 commits)
iotests: Make 157 actually format-agnostic
vvfat: Fix qcow write target driver specification
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
hmp: use snapshot name to determine whether a snapshot is 'fully available'
qemu-iotests: Test naming of throttling groups
blockdev: Fix regression with the default naming of throttling groups
vmdk: fix metadata write regression
Improve block job rate limiting for small bandwidth values
qcow2: Fix qcow2_get_cluster_offset()
qemu-io: Use correct range limitations
qcow2: Avoid making the L1 table too big
qemu-img: Use strerror() for generic resize error
block: Remove BB options from blockdev-add
qemu-iotests: Test setting WCE with qdev
block/qdev: Allow configuring rerror/werror with qdev properties
commit: Fix use of error handling policy
block/qdev: Allow configuring WCE with qdev properties
block/qdev: Allow node name for drive properties
coroutine: move entry argument to qemu_coroutine_create
test-coroutine: prepare for the next patch
...
* mreitz/tags/pull-block-for-kevin-2016-07-13:
iotests: Make 157 actually format-agnostic
vvfat: Fix qcow write target driver specification
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
hmp: use snapshot name to determine whether a snapshot is 'fully available'
qemu-iotests: Test naming of throttling groups
blockdev: Fix regression with the default naming of throttling groups
vmdk: fix metadata write regression
Improve block job rate limiting for small bandwidth values
qcow2: Fix qcow2_get_cluster_offset()
qemu-io: Use correct range limitations
qcow2: Avoid making the L1 table too big
qemu-img: Use strerror() for generic resize error
Max Reitz [Mon, 11 Jul 2016 13:22:46 +0000 (15:22 +0200)]
iotests: Make 157 actually format-agnostic
iotest 157 pretends not to care about the image format used, but in fact
it does due to the format name not being filtered in its output. This
patch adds filtering and changes the reference output accordingly.
Max Reitz [Mon, 11 Jul 2016 13:54:52 +0000 (15:54 +0200)]
vvfat: Fix qcow write target driver specification
First, bdrv_open_child() expects all options for the child to be
prefixed by the child's name (and a separating dot). Second,
bdrv_open_child() does not take ownership of the QDict passed to it but
only extracts all options for the child, so if a QDict is created for
the sole purpose of passing it to bdrv_open_child(), it needs to be
freed afterwards.
This patch makes vvfat adhere to both of these rules.
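A sketch of the two rules, assuming the child is called "write-target"; the option key and the driver value are illustrative:

    QDict *opts = qdict_new();

    /* Rule 1: options for the child carry the "<child-name>." prefix. */
    qdict_put(opts, "write-target.driver", qstring_from_str("qcow"));

    /* ... hand opts to bdrv_open_child() with bdref_key "write-target" ... */

    /* Rule 2: bdrv_open_child() only extracts the child's options, so
     * the caller still owns opts and must release it afterwards. */
    QDECREF(opts);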
Lin Ma [Thu, 7 Jul 2016 05:26:04 +0000 (13:26 +0800)]
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
Currently, the output of 'info snapshots' shows only fully available
snapshots. This is opaque and hides some snapshot information from users.
It is inconvenient when users want to know more about all the snapshot
information on every block device via the monitor.
Following Kevin's and Max's proposals, this patch makes the output more
detailed:
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
List of partial (non-loadable) snapshots on 'drive_image1':
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
Lin Ma [Thu, 7 Jul 2016 05:26:03 +0000 (13:26 +0800)]
hmp: use snapshot name to determine whether a snapshot is 'fully available'
Currently QEMU uses the snapshot ID to determine whether a snapshot is
fully available, which causes incorrect output in some scenarios.
For instance:
(qemu) info block
drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
(qcow2)
Cache mode: writeback
drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
(qcow2)
Cache mode: writeback
(qemu)
(qemu) info snapshots
There is no snapshot available.
(qemu)
(qemu) snapshot_blkdev_internal drive_image1 snap1
(qemu)
(qemu) info snapshots
There is no suitable snapshot available
(qemu)
(qemu) savevm checkpoint-1
(qemu)
(qemu) info snapshots
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
(qemu)
$ qemu-img snapshot -l disk0.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
2 checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
$ qemu-img snapshot -l disk1.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 checkpoint-1 0 2016-05-22 16:58:07 00:02:06.813
The patch uses the snapshot name instead of the snapshot ID to determine
whether a snapshot is fully available, and uses '--' instead of the
snapshot ID in the output, because the snapshot ID is not guaranteed to be
the same on all images.
For instance:
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
Alberto Garcia [Fri, 8 Jul 2016 14:03:01 +0000 (17:03 +0300)]
qemu-iotests: Test naming of throttling groups
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.
This patch adds a new test to check the naming of throttling groups.
Alberto Garcia [Fri, 8 Jul 2016 14:03:00 +0000 (17:03 +0300)]
blockdev: Fix regression with the default naming of throttling groups
When I/O limits are set for a block device, the name of the throttling
group is taken from the BlockBackend if the user doesn't specify one.
Commit efaa7c4eeb7490c6f37f3 moved the naming of the BlockBackend in
blockdev_init() to the end of the function, after I/O limits are set.
The consequence is that the throttling group gets an empty name.
Reda Sallahi [Thu, 7 Jul 2016 08:42:49 +0000 (10:42 +0200)]
vmdk: fix metadata write regression
Commit "cdeaf1f vmdk: add bdrv_co_write_zeroes" causes a regression on
writes. It writes metadata after every write instead of doing it only once
for each cluster.
vmdk_pwritev() writes metadata whenever m_data is set as valid so this patch
sets m_data as valid only when we have a new cluster which hasn't been
allocated before or a zero grain.
Sascha Silbe [Tue, 28 Jun 2016 15:28:41 +0000 (17:28 +0200)]
Improve block job rate limiting for small bandwidth values
ratelimit_calculate_delay() previously reset the accounting every time
slice, no matter how much data had been processed before. This had (at
least) two consequences:
1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
Not sure if there are real-world use cases where this would be a
problem. Mirroring and backup over a slow link (e.g. DSL) would
come to mind, though.
2. Tests for block job operations (e.g. cancel) were rather racy.
All block jobs currently use a time slice of 100ms. That's a
reasonable value to get smooth output during regular
operation. However this also meant that the state of block jobs
changed every 100ms, no matter how low the configured limit was. On
busy hosts, qemu often transferred additional chunks until the test
case had a chance to cancel the job.
Fix the block job rate limit code to delay for more than one time
slice to address the above issues. To make it easier to handle
oversized chunks we switch the semantics from returning a delay
_before_ the current request to a delay _after_ the current
request. If necessary, this delay consists of multiple time slice
units.
Since the mirror job sends multiple chunks in one go even if the rate
limit was exceeded in between, we need to keep track of the start of
the current time slice so we can correctly re-compute the delay for
the updated amount of data.
The minimum bandwidth now is 1 data unit per time slice. The block
jobs are currently passing the amount of data transferred in sectors
and using 100ms time slices, so this translates to 5120
bytes/second. With chunk sizes usually being O(512KiB), tests have
plenty of time (O(100s)) to operate on block jobs. The chance of a
race condition now is fairly remote, except possibly on insanely
loaded systems.
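A self-contained sketch of the new semantics; the structure and names are simplified stand-ins for the real include/qemu/ratelimit.h code:

    #include <stdint.h>

    typedef struct {
        int64_t slice_start;    /* start of the current time slice (ns) */
        int64_t slice_end;      /* end of the, possibly extended, slice */
        uint64_t slice_ns;      /* slice length, e.g. 100 ms in ns */
        uint64_t quota;         /* data units allowed per slice */
        uint64_t dispatched;    /* data units accounted so far */
    } Limit;

    /* Returns how long to delay *after* the current request; the delay
     * may span several slice lengths for oversized chunks. */
    static int64_t limit_delay(Limit *l, int64_t now, uint64_t n)
    {
        if (now >= l->slice_end) {          /* previous slice over: reset */
            l->slice_start = now;
            l->slice_end = now + l->slice_ns;
            l->dispatched = 0;
        }

        l->dispatched += n;
        if (l->dispatched < l->quota) {
            return 0;                       /* still within budget */
        }

        /* Overshot: extend the slice by as many whole slices as the
         * excess requires and delay until that extended slice ends. */
        uint64_t slices = (l->dispatched + l->quota - 1) / l->quota;
        l->slice_end = l->slice_start + (int64_t)(slices * l->slice_ns);
        return l->slice_end - now;
    }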
Max Reitz [Mon, 20 Jun 2016 14:26:23 +0000 (16:26 +0200)]
qcow2: Fix qcow2_get_cluster_offset()
Recently, qcow2_get_cluster_offset() has been changed to work with bytes
instead of sectors. This invalidated some assertions and introduced a
possible integer multiplication overflow.
This patch removes the now wrong assertion, adding comments and more
assertions to prove its correctness (and fixing the overflow which would
become apparent with the original assertion removed).
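The overflow is the classic case of a 32-bit multiplication being widened too late; a minimal illustration (variable names are illustrative, not from the patch):

    #include <stdint.h>

    static uint64_t cluster_bytes(unsigned nb_clusters, unsigned cluster_size)
    {
        /* Widen one operand first; a plain nb_clusters * cluster_size
         * would be evaluated in 32 bits and could overflow before the
         * result is assigned to a 64-bit variable. */
        return (uint64_t)nb_clusters * cluster_size;
    }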
Max Reitz [Mon, 20 Jun 2016 14:26:22 +0000 (16:26 +0200)]
qemu-io: Use correct range limitations
create_iovec() has a comment lamenting the lack of SIZE_T_MAX. Since
there actually is a SIZE_MAX, use it.
Two places use INT_MAX for checking the upper bound of a sector count
that is used as an argument for a blk_*() function (blk_discard() and
blk_write_compressed(), respectively). BDRV_REQUEST_MAX_SECTORS should
be used instead.
And finally, do_co_pwrite_zeroes() used to similarly check that the
sector count does not exceed INT_MAX. However, this function is now
backed by blk_co_pwrite_zeroes() which takes bytes as an argument
instead of sectors. Therefore, it should be the byte count that does not
exceed INT_MAX, not the sector count.
Kevin Wolf [Thu, 30 Jun 2016 13:52:37 +0000 (15:52 +0200)]
block: Remove BB options from blockdev-add
werror/rerror are now available as qdev options. The stats-* options are
removed without an existing replacement; they should probably be
configurable with a separate QMP command like I/O throttling settings.
Removing id is left for another day because this involves updating
qemu-iotests cases to use node-name for everything. Before we can do
that, however, all QMP commands must support node-name.
Kevin Wolf [Wed, 29 Jun 2016 15:41:35 +0000 (17:41 +0200)]
block/qdev: Allow configuring rerror/werror with qdev properties
The rerror/werror policies are implemented in the devices, so that's
where they should be configured. In comparison to the old options in
-drive, the qdev properties are only added to those devices that
actually support them.
If the option isn't given (or "auto" is specified), the setting of the
BlockBackend is used for compatibility with the old options. For block
jobs, "auto" is the same as "enospc".
Kevin Wolf [Wed, 29 Jun 2016 15:38:57 +0000 (17:38 +0200)]
commit: Fix use of error handling policy
Commit implemented the 'enospc' policy as 'ignore' if the error was not
ENOSPC. The QAPI documentation promises that it's treated as 'stop'.
Using the common block job error handling function fixes this and also
adds the missing QMP event.
Kevin Wolf [Thu, 23 Jun 2016 13:12:35 +0000 (15:12 +0200)]
block/qdev: Allow configuring WCE with qdev properties
As cache.writeback is a BlockBackend property and as such more related
to the guest device than the BlockDriverState, we already removed it
from the blockdev-add interface. This patch adds the new way to set it,
as a qdev property of the corresponding guest device.
For example: -drive if=none,file=test.img,node-name=img
-device ide-hd,drive=img,write-cache=off
hostmem: detect host backend memory is being used properly
Currently, we use memory_region_is_mapped() to detect if the host
backend memory is being used. This works if the memory is directly
mapped into guest's address space, however, it is not true for
nvdimm as it uses aliased memory region to map the memory. This is
why this bug can happen:
https://bugzilla.redhat.com/show_bug.cgi?id=1352769
Fix it by introducing a new field, is_mapped, in HostMemoryBackend; we
set/clear this field accordingly when a device links to or unlinks from
the host backend memory.
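A sketch of the new field and its accessors; the function and field names follow the commit description and should be read as assumptions about the final API:

    /* in HostMemoryBackend (sketch):  bool is_mapped; */

    void host_memory_backend_set_mapped(HostMemoryBackend *backend, bool mapped)
    {
        backend->is_mapped = mapped;   /* set on device link, cleared on unlink */
    }

    bool host_memory_backend_is_mapped(HostMemoryBackend *backend)
    {
        return backend->is_mapped;     /* replaces memory_region_is_mapped() */
    }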
'info memdev' crashes QEMU:
(qemu) info memdev
Unexpected error in parse_str() at qapi/string-input-visitor.c:111:
Parameter 'null' expects an int64 value or range
It is caused by a null uint16List being returned when 'host-nodes' has its
default value.
It turns out QEMU calls exit() in various places from various threads
without taking much care of resource state. The atexit() cleanup handlers
cannot easily destroy resources that are still in use (by the same thread
or by others).
Since c1111a24a3, TCG ARM guests run into the following abort() when
running tests: the chardev mutex is locked during the write, so
qemu_mutex_destroy() returns an error:
#0 0x00007fffdbb806f5 in raise () at /lib64/libc.so.6
#1 0x00007fffdbb822fa in abort () at /lib64/libc.so.6
#2 0x00005555557616fe in error_exit (err=<optimized out>, msg=msg@entry=0x555555c38c30 <__func__.14622> "qemu_mutex_destroy")
at /home/drjones/code/qemu/util/qemu-thread-posix.c:39
#3 0x0000555555b0be20 in qemu_mutex_destroy (mutex=mutex@entry=0x5555566aa0e0) at /home/drjones/code/qemu/util/qemu-thread-posix.c:57
#4 0x00005555558aab00 in qemu_chr_free_common (chr=0x5555566aa0e0) at /home/drjones/code/qemu/qemu-char.c:4029
#5 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4038
#6 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4044
#7 0x00005555558b062c in qemu_chr_cleanup () at /home/drjones/code/qemu/qemu-char.c:4557
#8 0x00007fffdbb851e8 in __run_exit_handlers () at /lib64/libc.so.6
#9 0x00007fffdbb85235 in () at /lib64/libc.so.6
#10 0x00005555558d1b39 in testdev_write (testdev=0x5555566aa0a0) at /home/drjones/code/qemu/backends/testdev.c:71
#11 0x00005555558d1b39 in testdev_write (chr=<optimized out>, buf=0x7fffc343fd9a "", len=0) at /home/drjones/code/qemu/backends/testdev.c:95
#12 0x00005555558adced in qemu_chr_fe_write (s=0x5555566aa0e0, buf=buf@entry=0x7fffc343fd98 "0q", len=len@entry=2) at /home/drjones/code/qemu/qemu-char.c:282
Instead of using an atexit() handler, only run the chardev cleanup, as
initially proposed, at the end of main(), where there is less chance of
conflicts or other races.
Paolo Bonzini [Fri, 8 Jul 2016 15:28:34 +0000 (17:28 +0200)]
net: do not use atexit for cleanup
This will be necessary in the next patch, which stops using atexit for
character devices; without it, vhost-user and the redirector filter
will cause a use-after-free. Relying on the ordering of atexit calls
is also brittle, even now that both the network and chardev
subsystems are using atexit.
Paolo Bonzini [Tue, 12 Jul 2016 07:57:12 +0000 (09:57 +0200)]
slirp: use exit notifier for slirp_smb_cleanup
We would like to move net_cleanup() back to the end of the main function,
as it used to be until f30dbae63a46f23116715dff8d130c, but a minimum of
cleanup is still needed at exit() time for slirp's SMB functionality. Use
an exit notifier to call slirp_smb_cleanup().
If net_cleanup() is called first, remove the exit notifier, as it would
otherwise become a dangling pointer.
We would like to move net_cleanup() back to the end of the main function,
as it used to be until f30dbae63a46f23116715dff8d130c, but a minimum of
tap cleanup is still necessary at exit() time. Use an exit notifier to
call the TAP down_script. If net_cleanup() is called first, remove the
exit notifier, as it would otherwise become a dangling pointer.
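Both patches follow the same pattern; a minimal sketch of QEMU's exit notifiers (the state structure and the cleanup body are placeholders):

    #include "qemu/osdep.h"
    #include "qemu/notify.h"

    typedef struct SomeState {
        Notifier exit;                       /* embedded exit notifier */
        /* ... */
    } SomeState;

    static void some_exit_notify(Notifier *n, void *data)
    {
        SomeState *s = container_of(n, SomeState, exit);
        /* minimal cleanup: run down_script, remove the SMB dir, ... */
        (void)s;
    }

    static void some_setup(SomeState *s)
    {
        s->exit.notify = some_exit_notify;
        qemu_add_exit_notifier(&s->exit);    /* runs even on a bare exit() */
    }

    static void some_cleanup(SomeState *s)
    {
        some_exit_notify(&s->exit, NULL);
        /* net_cleanup() came first: drop the notifier so that exit()
         * does not call into freed memory later */
        qemu_remove_exit_notifier(&s->exit);
    }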
Kevin Wolf [Tue, 21 Jun 2016 18:46:05 +0000 (20:46 +0200)]
block/qdev: Allow node name for drive properties
If a node name instead of a BlockBackend name is specified as the driver
for a guest device, an anonymous BlockBackend is created now.
The order of operations in release_drive() must be reversed in order to
avoid a use-after-free bug because now blk_detach_dev() frees the last
reference if an anonymous BlockBackend is used.
usb-storage uses a hack where it forwards its BlockBackend as a property
to another device that it internally creates. This hack must be updated
so that it doesn't drop its original BB before it can be passed to the
other device. This used to work because we always had the monitor
reference around, but with node-names the device reference is the only
one now.
Paolo Bonzini [Mon, 4 Jul 2016 17:10:01 +0000 (19:10 +0200)]
coroutine: move entry argument to qemu_coroutine_create
In practice the entry argument is always known at creation time, and
it is confusing that sometimes qemu_coroutine_enter is used with a
non-NULL argument to re-enter a coroutine (this happens in
block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value
at creation time, for consistency with e.g. aio_bh_new.
except for the aforementioned few places where the semantic patch
stumbled (as expected) and for test_co_queue, which would otherwise
produce an uninitialized variable warning.
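A sketch of the API change from a caller's point of view (the entry function and opaque value are illustrative):

    static void coroutine_fn my_entry(void *opaque)
    {
        /* ... coroutine body ... */
    }

    static void start_it(void *opaque)
    {
        /* After this series the opaque value is bound at creation time
         * and qemu_coroutine_enter() takes no extra argument.
         * Previously:  co = qemu_coroutine_create(my_entry);
         *              qemu_coroutine_enter(co, opaque);          */
        Coroutine *co = qemu_coroutine_create(my_entry, opaque);
        qemu_coroutine_enter(co);
    }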
Paolo Bonzini [Mon, 4 Jul 2016 17:10:00 +0000 (19:10 +0200)]
test-coroutine: prepare for the next patch
The next patch moves the coroutine argument from first-enter to
creation time. In this case, the coroutine has not been initialized
yet when the coroutine is created, so change it to a pointer.
Paolo Bonzini [Mon, 4 Jul 2016 17:09:59 +0000 (19:09 +0200)]
coroutine: use QSIMPLEQ instead of QTAILQ
CoQueue does not need to remove any element but the head of the list;
processing is always strictly FIFO. Therefore, the simpler singly-linked
QSIMPLEQ can be used instead.
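A generic illustration of the strictly-FIFO pattern QSIMPLEQ supports; the types below are placeholders, not the actual CoQueue fields:

    #include "qemu/queue.h"

    typedef struct Waiter {
        QSIMPLEQ_ENTRY(Waiter) next;   /* singly linked: one pointer per node */
    } Waiter;

    typedef struct Fifo {
        QSIMPLEQ_HEAD(, Waiter) entries;
    } Fifo;

    static void fifo_init(Fifo *q)
    {
        QSIMPLEQ_INIT(&q->entries);
    }

    static void fifo_push(Fifo *q, Waiter *w)
    {
        QSIMPLEQ_INSERT_TAIL(&q->entries, w, next);
    }

    static Waiter *fifo_pop(Fifo *q)
    {
        Waiter *w = QSIMPLEQ_FIRST(&q->entries);
        if (w) {
            /* only the head is ever removed, which is all that a
             * strictly FIFO CoQueue needs */
            QSIMPLEQ_REMOVE_HEAD(&q->entries, next);
        }
        return w;
    }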
Alberto Garcia [Tue, 5 Jul 2016 14:29:02 +0000 (17:29 +0300)]
blockjob: Update description of the 'device' field in the QMP API
The 'device' field in all BLOCK_JOB_* events and 'block-job-*' commands
is no longer the device name, but the ID of the job. This patch
updates the documentation to clarify that.
Alberto Garcia [Tue, 5 Jul 2016 14:29:01 +0000 (17:29 +0300)]
qemu-img: Set the ID of the block job in img_commit()
img_commit() creates a block job without an ID. This is no longer
allowed now that we require it to be unique and well-formed. We were
solving this by having a fallback in block_job_create(), but now that
we extended the API of commit_active_start() we can finally set an
explicit ID and revert that change.
Alberto Garcia [Tue, 5 Jul 2016 14:28:58 +0000 (17:28 +0300)]
backup: Add 'job-id' parameter to 'blockdev-backup' and 'drive-backup'
This patch adds a new optional 'job-id' parameter to 'blockdev-backup'
and 'drive-backup', allowing the user to specify the ID of the block
job to be created.
Alberto Garcia [Tue, 5 Jul 2016 14:28:57 +0000 (17:28 +0300)]
mirror: Add 'job-id' parameter to 'blockdev-mirror' and 'drive-mirror'
This patch adds a new optional 'job-id' parameter to 'blockdev-mirror'
and 'drive-mirror', allowing the user to specify the ID of the block
job to be created.
Alberto Garcia [Tue, 5 Jul 2016 14:28:56 +0000 (17:28 +0300)]
blockjob: Add 'job_id' parameter to block_job_create()
When a new job is created, the job ID is taken from the device name of
the BDS. This patch adds a new 'job_id' parameter to let the caller
provide one instead.
This patch also verifies that the ID is always unique and well-formed.
This causes problems in a couple of places where no ID is being set,
because the BDS does not have a device name.
In the case of test_block_job_start() (from test-blockjob-txn.c) we
can simply use this new 'job_id' parameter to set the missing ID.
In the case of img_commit() (from qemu-img.c) we still don't have the
API to make commit_active_start() set the job ID, so we solve it by
setting a default value. We'll get rid of this as soon as we extend
the API.
Alberto Garcia [Tue, 5 Jul 2016 14:28:55 +0000 (17:28 +0300)]
block: Use block_job_get() in find_block_job()
find_block_job() looks for a block backend with a specified name,
checks whether it has a block job and acquires its AioContext.
We want to identify jobs by their ID and not by the block backend
they're attached to, so this patch ignores the backends altogether and
gets the job directly. Apart from making the code simpler, this will
allow us to find block jobs once they start having user-specified IDs.
To ensure backward compatibility we keep ERROR_CLASS_DEVICE_NOT_ACTIVE
as the error class if the job doesn't exist. In subsequent patches
we'll also need to keep the device name as the default job ID if the
user doesn't specify a different one.
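A hedged sketch of the reworked lookup; the job->blk field access and the exact signature in blockdev.c are assumptions:

    static BlockJob *find_block_job(const char *id, AioContext **aio_context,
                                    Error **errp)
    {
        BlockJob *job = block_job_get(id);     /* look up directly by ID */

        if (!job) {
            /* keep the historical error class for compatibility */
            error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
                      "Block job '%s' not found", id);
            return NULL;
        }

        *aio_context = blk_get_aio_context(job->blk);
        aio_context_acquire(*aio_context);
        return job;
    }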
Alberto Garcia [Tue, 5 Jul 2016 14:28:53 +0000 (17:28 +0300)]
blockjob: Update description of the 'id' field
The 'id' field of the BlockJob structure will be able to hold any ID,
not only a device name. This patch updates the description of that
field and the error messages where it is being used.
Soon we'll add the ability to set an arbitrary ID when creating a
block job.
Mark Cave-Ayland [Tue, 12 Jul 2016 07:04:09 +0000 (08:04 +0100)]
OpenBIOS: switch over to official OpenBIOS git repo
This update should preserve git history, and switches git.qemu-project.org
over to be a mirror of the new official git repo hosted at
https://github.com/openbios from a git-svn import of the old coreboot SVN
repository. All prior history from the SVN repository should still be preserved
(i.e. commit hashes are the same for historical commits).
No other source changes are made by this commit since both the old and new
HEADs contain the same source tree (albeit with different metadata), whilst
the previous git-svn HEAD can be retrieved via the svn-head branch.
By arranging for explicit writes to cpu_fsr after floating point
operations, we are able to mark the helpers as not writing to
tcg globals, which means that we don't need to invalidate the
integer register set across said calls.
Knowing the value of %asi at translation time means that we
can handle the common settings without a function call.
The steady state appears to be %asi == ASI_P, so that sparcv9
code can use offset forms of lda/sta. The %asi register gets
pushed and popped on entry to certain functions, but it rarely
takes on values other than ASI_P or ASI_AIUP. Therefore we're
unlikely to be expanding the set of TBs created.
Doing this instead of saving the raw PS_PRIV and TL. This means
that all nucleus mode TBs (TL > 0) can be shared. This fixes a
bug in that we didn't include HS_PRIV in the TB flags, and so could
produce incorrect TB matches for hypervisor state.
The LSU and DMMU states were unused by the translator. Including
them in TB flags meant unnecessary mismatches from tb_find_fast.
The global is only ever read for one insn; we can just as well
use a load from env instead and generate the same code. This
also allows us to indicate that the associated helpers do not
touch TCG globals.
Paolo Bonzini [Thu, 7 Jul 2016 12:07:33 +0000 (14:07 +0200)]
disas: avoid including everything in headers compiled from C++
disas/arm-a64.cc is careful to include only the bare minimum that
it needs---qemu/osdep.h and disas/bfd.h. Unfortunately, disas/bfd.h
then includes qemu-common.h, which brings in qemu/option.h and from
there we get the kitchen sink.
This causes problems because for example QEMU's atomic macros
conflict with C++ atomic types. But really all that bfd.h needs
is the fprintf_function typedef, so replace the inclusion of
qemu-common.h with qemu/fprintf-fn.h.
Paolo Bonzini [Mon, 4 Jul 2016 12:40:59 +0000 (14:40 +0200)]
json-streamer: fix double-free on exiting during a parse
Now that json-streamer tries not to leak tokens on incomplete parse,
the tokens can be freed twice if QEMU destroys the json-streamer
object during the parser->emit call. To fix this, create the new
empty GQueue earlier, so that it is already in place when the old
one is passed to parser->emit.
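The fix amounts to swapping in a fresh queue before handing the old one to the callback; a sketch of the idea (the field names are assumptions about the JSONMessageParser layout):

    /* Install the new, empty queue first, then emit the old one.  Even
     * if parser->emit() destroys the streamer, the old queue is no
     * longer reachable from it and cannot be freed a second time. */
    GQueue *done = parser->tokens;
    parser->tokens = g_queue_new();
    parser->emit(parser, done);
    parser->token_size = 0;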
Sean Bruno [Tue, 14 Jun 2016 18:07:34 +0000 (11:07 -0700)]
Use "-s" instead of "--quiet" to resolve non-fatal build error on FreeBSD.
The --quiet argument is not available on all operating systems. Use -s
instead to match the rest of the Makefile uses. This fixes a non-fatal
error seen on FreeBSD.