* remotes/xtensa/tags/20190326-xtensa:
tests/tcg/xtensa: clean up test set
target/xtensa: don't announce exit simcall
target/xtensa: fix break_dependency for repeated resources
Peter Maydell [Tue, 26 Mar 2019 15:52:46 +0000 (15:52 +0000)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- Fix slow pre-zeroing in qemu-img convert
- Test case for block job pausing on I/O errors
# gpg: Signature made Tue 26 Mar 2019 15:28:00 GMT
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
qemu-io: Add write -n for BDRV_REQ_NO_FALLBACK
qemu-img: Use BDRV_REQ_NO_FALLBACK for pre-zeroing
file-posix: Support BDRV_REQ_NO_FALLBACK for zero writes
block: Advertise BDRV_REQ_NO_FALLBACK in filter drivers
block: Add BDRV_REQ_NO_FALLBACK
block: Remove error messages in bdrv_make_zero()
iotests: add 248: test resume mirror after auto pause on ENOSPC
* remotes/pmaydell/tags/pull-target-arm-20190326:
gdbstub: fix vCont packet handling when no thread is specified
target/arm: Set SIMDMISC and FPMISC for 32-bit -cpu max
Luc Michel [Tue, 26 Mar 2019 12:53:26 +0000 (12:53 +0000)]
gdbstub: fix vCont packet handling when no thread is specified
The vCont packet accepts a series of actions, each being applied on a
given thread ID. Giving no thread ID for an action is valid and means
"all threads".
This commit fixes vCont packets being incorrectly rejected when no
thread ID was given for an action.
In multiprocess mode, the GDB Remote Protocol specification is unclear
on what "all threads" means. We choose to apply the action on all
threads of all attached processes.
This commit is based on the initial fix by Lucien Murray-Pitts.
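A minimal sketch of the intended dispatch, assuming a hypothetical apply_action_to_cpu() helper (CPU_FOREACH() is QEMU's real iteration macro):
    /* Sketch only, not the actual gdbstub code. */
    static void vcont_apply(char action, int signal, CPUState *target_cpu)
    {
        CPUState *cpu;

        if (target_cpu) {
            /* An explicit thread-id was given: act on that thread only. */
            apply_action_to_cpu(action, signal, target_cpu);
            return;
        }
        /* No thread-id: "all threads", i.e. every CPU of every attached process. */
        CPU_FOREACH(cpu) {
            apply_action_to_cpu(action, signal, cpu);
        }
    }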
Kevin Wolf [Fri, 22 Mar 2019 12:49:28 +0000 (13:49 +0100)]
qemu-img: Use BDRV_REQ_NO_FALLBACK for pre-zeroing
If qemu-img convert sees that the target image isn't zero-initialised
yet, it tries to do an efficient zero write for the whole image first
to save the overhead of repeated explicit zero writes during the
conversion. Obviously, this provides an advantage only if the
pre-zeroing is actually efficient. Otherwise, we can end up writing
zeroes slowly while zeroing out the whole image, and then overwrite the
same blocks again with real data, potentially doubling the written data.
Pass BDRV_REQ_NO_FALLBACK to blk_make_zero() to avoid this case. If we
can't efficiently zero out, we'll instead write explicit zeroes only if
there is no data to be written to a block.
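A minimal sketch of the pre-zeroing decision, assuming illustrative variable names around the real blk_make_zero()/BDRV_REQ_NO_FALLBACK interfaces:
    /* Sketch, not the verbatim qemu-img code. */
    if (!target_is_zero_initialized) {
        ret = blk_make_zero(target_blk, BDRV_REQ_NO_FALLBACK);
        if (ret == 0) {
            target_is_zero_initialized = true;   /* fast pre-zeroing worked */
        }
        /* On failure (e.g. -ENOTSUP), fall back to writing explicit zeroes
         * only for blocks that carry no data, instead of zeroing twice. */
    }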
Kevin Wolf [Fri, 22 Mar 2019 12:45:23 +0000 (13:45 +0100)]
file-posix: Support BDRV_REQ_NO_FALLBACK for zero writes
We know that the kernel implements a slow fallback code path for
BLKZEROOUT, so if BDRV_REQ_NO_FALLBACK is given, we shouldn't call it.
The other operations we call in the context of .bdrv_co_pwrite_zeroes
should usually be quick, so no modification should be needed for them.
If we ever notice that there are additional problematic cases, we can
still make these conditional as well.
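A minimal sketch of the check, with illustrative field names around the real BLKZEROOUT ioctl:
    /* Sketch: decline the request instead of letting the kernel fall back
     * to slow, piecewise zero writes inside BLKZEROOUT. */
    if (aiocb->flags & BDRV_REQ_NO_FALLBACK) {
        return -ENOTSUP;    /* the caller will write explicit zeroes itself */
    }
    if (ioctl(aiocb->aio_fildes, BLKZEROOUT, range) == 0) {
        return 0;
    }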
Kevin Wolf [Fri, 22 Mar 2019 12:42:39 +0000 (13:42 +0100)]
block: Advertise BDRV_REQ_NO_FALLBACK in filter drivers
Filter drivers that support .bdrv_co_pwrite_zeroes can safely advertise
BDRV_REQ_NO_FALLBACK because they just forward the request flags to
their child node.
Kevin Wolf [Fri, 22 Mar 2019 12:38:43 +0000 (13:38 +0100)]
block: Add BDRV_REQ_NO_FALLBACK
For qemu-img convert, we want an operation that zeroes out the whole
image if this can be done efficiently, but that returns an error
otherwise so we don't write explicit zeroes and immediately overwrite
them with the real data, potentially doubling the amount of data to be
written.
Kevin Wolf [Fri, 22 Mar 2019 12:53:46 +0000 (13:53 +0100)]
block: Remove error messages in bdrv_make_zero()
There is only a single caller of bdrv_make_zero(), which is qemu-img
convert. If the function fails, we just fall back to a different method
of zeroing out blocks on the target image. There is no good reason to
print error messages on stderr when the higher level operation will
actually succeed.
Peter Maydell [Tue, 26 Mar 2019 10:27:20 +0000 (10:27 +0000)]
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-rc1-v2' into staging
A second RISC-V Patch for 4.0.0-rc1
Sorry for sending two back-to-back pull requests. It looks like I
misunderstood Kito and there were actually two patches necessary to fix
the GCC test suite runs.
# gpg: Signature made Tue 26 Mar 2019 10:20:20 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "[email protected]"
# gpg: Good signature from "Palmer Dabbelt <[email protected]>" [unknown]
# gpg: aka "Palmer Dabbelt <[email protected]>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-rc1-v2:
target/riscv: Fix wrong expanding for c.fswsp
Peter Maydell [Tue, 26 Mar 2019 09:28:24 +0000 (09:28 +0000)]
Merge remote-tracking branch 'remotes/armbru/tags/pull-misc-2019-03-26' into staging
Miscellaneous patches for 2019-03-26
# gpg: Signature made Tue 26 Mar 2019 07:10:23 GMT
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <[email protected]>" [full]
# gpg: aka "Markus Armbruster <[email protected]>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-misc-2019-03-26:
qapi/qmp-dispatch: fix return value in do_qmp_dispatch
json: Fix off-by-one assert check in next_state()
xen-block: Replace qdict_put_obj() by qdict_put() where appropriate
util/error: Remove an unnecessary NULL check
Peter Maydell [Tue, 26 Mar 2019 08:51:35 +0000 (08:51 +0000)]
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-rc1' into staging
A Single RISC-V Patch for 4.0-rc1
If this is too late I'm OK with it being in rc2, but it fixes a concrete
regression and nobody has complained yet so I'd prefer it to be in rc1
if possible.
The fix is to zero-extend the inputs to DIVUW and REMUW, which was
exposed by the GCC test suite.
# gpg: Signature made Tue 26 Mar 2019 05:54:20 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "[email protected]"
# gpg: Good signature from "Palmer Dabbelt <[email protected]>" [unknown]
# gpg: aka "Palmer Dabbelt <[email protected]>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-rc1:
target/riscv: Zero extend the inputs of divuw and remuw
pflash: Require backend size to match device, improve errors
We reject undersized backends with a rather enigmatic "failed to read
the initial flash content" error. For instance:
$ qemu-system-ppc64 -S -display none -M sam460ex -drive if=pflash,format=raw,file=eins.img
qemu-system-ppc64: Initialization of device cfi.pflash02 failed: failed to read the initial flash content
We happily accept oversized images, ignoring their tail. Throwing
away parts of firmware that way is pretty much certain to end in an
even more enigmatic failure to boot.
Require the backend's size to match the device's size exactly. Report
mismatch like this:
qapi/qmp-dispatch: fix return value in do_qmp_dispatch
There is no harm, but it just looks weird to return a bool from a
pointer-returning function. Introduced in 69240fe62d1 with the whole
failure-checking "if" chunk.
Liam Merwick [Thu, 21 Mar 2019 11:57:52 +0000 (11:57 +0000)]
json: Fix off-by-one assert check in next_state()
The assert checking if the value of lexer->state in next_state(),
which is used as an index to the 'json_lexer' array, incorrectly
checks for an index value less than or equal to ARRAY_SIZE(json_lexer).
Fix assert so that it just checks for an index less than the array size.
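A minimal before/after sketch of the bound check (the real next_state() takes additional parameters):
    /* Before: off by one, ARRAY_SIZE(json_lexer) itself is not a valid index. */
    assert(lexer->state <= ARRAY_SIZE(json_lexer));
    /* After: */
    assert(lexer->state < ARRAY_SIZE(json_lexer));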
Peter Maydell [Mon, 25 Mar 2019 18:15:43 +0000 (18:15 +0000)]
Merge remote-tracking branch 'remotes/juanquintela/tags/migration-pull-request' into staging
Pull request
- Rebase last pull request
- Drop multifd
- several other minor fixes
# gpg: Signature made Mon 25 Mar 2019 17:46:29 GMT
# gpg: using RSA key F487EF185872D723
# gpg: Good signature from "Juan Quintela <[email protected]>" [full]
# gpg: aka "Juan Quintela <[email protected]>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/migration-pull-request:
migration/postcopy: Update the bandwidth during postcopy
Migration/colo.c: Make user obtain the last COLO mode info after failover
Migration/colo.c: Add the necessary checks for colo_do_failover
Migration/colo.c: Add new COLOExitReason to handle all failover state
Migration/colo.c: Fix COLO failover status error
migration/rdma: Check qemu_rdma_init_one_block
migration: add support for a "tls-authz" migration parameter
multifd: Drop x-
multifd: Add some padding
multifd: Change default packet size
multifd: Be flexible about packet size
multifd: Drop x-multifd-page-count parameter
multifd: Create new next_packet_size field
multifd: Rename "size" member to pages_alloc
multifd: Only send pages when packet are not empty
migration/postcopy: Update the bandwidth during postcopy
The recently added max-postcopy-bandwidth parameter is only read
at the transition from precopy to postcopy, whereas the older
max-bandwidth parameter updates the migration bandwidth when changed,
even if the migration is already running.
Fix this discrepancy so that:
a) You can change the bandwidth during postcopy by setting
max-postcopy-bandwidth
b) Changing max-bandwidth during postcopy has no effect
(it currently changes the postcopy bandwidth which isn't
expected).
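A minimal sketch of the intended runtime behaviour when these parameters change (field and helper names here are illustrative, not the exact migration.c code):
    if (params->has_max_postcopy_bandwidth) {
        s->parameters.max_postcopy_bandwidth = params->max_postcopy_bandwidth;
        if (s->to_dst_file && migration_in_postcopy()) {
            /* a) takes effect immediately while postcopy is running */
            qemu_file_set_rate_limit(s->to_dst_file,
                                     s->parameters.max_postcopy_bandwidth);
        }
    }
    if (params->has_max_bandwidth) {
        s->parameters.max_bandwidth = params->max_bandwidth;
        if (s->to_dst_file && !migration_in_postcopy()) {
            /* b) ignored during postcopy; only applied in precopy */
            qemu_file_set_rate_limit(s->to_dst_file,
                                     s->parameters.max_bandwidth);
        }
    }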
Zhang Chen [Fri, 22 Mar 2019 10:13:33 +0000 (18:13 +0800)]
Migration/colo.c: Make user obtain the last COLO mode info after failover
Add last_colo_mode to save the COLO mode at the point of failover.
This solves the issue that users want to obtain the last COLO mode
via query_colo_status after failover.
Zhang Chen [Fri, 22 Mar 2019 10:13:31 +0000 (18:13 +0800)]
Migration/colo.c: Add new COLOExitReason to handle all failover state
In this patch we add a processing state to COLOExitReason, because we
have to identify whether COLO is in the failover processing state or
the failover error state. This way we can handle all failover states.
We also improve the description of COLOExitReason along the way.
migration: add support for a "tls-authz" migration parameter
The QEMU instance that runs as the server for the migration data
transport (ie the target QEMU) needs to be able to configure access
control so it can prevent unauthorized clients initiating an incoming
migration. This adds a new 'tls-authz' migration parameter that is used
to provide the QOM ID of a QAuthZ subclass instance that provides the
access control check. This is checked against the x509 certificate
obtained during the TLS handshake.
For example, when starting a QEMU for incoming migration, it is
possible to give an example identity of the source QEMU that is
intended to be connecting later:
Juan Quintela [Wed, 20 Feb 2019 11:44:07 +0000 (12:44 +0100)]
multifd: Be flexible about packet size
This way we can change the packet size in the future and everything
will still work. We choose an arbitrarily large number (100 times the
configured size) as a limit on how big we are willing to reallocate.
Juan Quintela [Wed, 20 Feb 2019 11:06:03 +0000 (12:06 +0100)]
multifd: Drop x-multifd-page-count parameter
Libvirt doesn't want to expose (and explain) it. From now on we
measure packet size in bytes instead of pages, so it is the same
independently of the architecture. We use the x86 page size.
Note that the following patch makes this size flexible.
Juan Quintela [Fri, 4 Jan 2019 18:12:35 +0000 (19:12 +0100)]
multifd: Only send pages when packet are not empty
We sometimes send packets without pages for synchronization. The
iov functions do the right thing, but we will be changing this code in
future patches.
Peter Maydell [Mon, 25 Mar 2019 17:01:10 +0000 (17:01 +0000)]
Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging
Pull request
Compilation fixes and cleanups for QEMU 4.0.0.
# gpg: Signature made Mon 25 Mar 2019 15:58:28 GMT
# gpg: using RSA key 9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <[email protected]>" [full]
# gpg: aka "Stefan Hajnoczi <[email protected]>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/tracing-pull-request:
trace-events: Fix attribution of trace points to source
trace-events: Delete unused trace points
scripts/cleanup-trace-events: Update for current practice
trace-events: Shorten file names in comments
trace-events: Consistently point to docs/devel/tracing.txt
trace: avoid SystemTap dtrace(1) warnings on empty files
trace: handle tracefs path truncation
Peter Maydell [Mon, 25 Mar 2019 15:58:49 +0000 (15:58 +0000)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190325' into staging
target-arm queue:
* Fix non-parallel expansion of CASP
* nrf51_gpio: reflect pull-up/pull-down to IRQs
* Fix crash if guest tries to enable non-existent PMU counters
* Add PMUv2 to the Cortex-A15 and Cortex-A7
* Make pmccntr_op_start/finish static
* remotes/pmaydell/tags/pull-target-arm-20190325:
target/arm: make pmccntr_op_start/finish static
target/arm: cortex-a7 and cortex-a15 have pmus
target/arm: fix crash on pmu register access
target/arm: add PCI_TESTDEV back to default config
nrf51_gpio: reflect pull-up/pull-down to IRQs
target/arm: Fix non-parallel expansion of CASP
Andrew Jones [Mon, 25 Mar 2019 14:16:47 +0000 (14:16 +0000)]
target/arm: cortex-a7 and cortex-a15 have pmus
cortex-a7 and cortex-a15 have pmus (PMUv2) and they advertise
them in ID_DFR0. Let's allow them to function. This also enables
the pmu cpu property to work with these cpu types, i.e. we can
now do '-cpu cortex-a15,pmu=off' to remove the pmu.
Andrew Jones [Mon, 25 Mar 2019 14:16:47 +0000 (14:16 +0000)]
target/arm: fix crash on pmu register access
Fix a QEMU NULL dereference that occurs when the guest attempts to
enable PMU counters with a non-v8 cpu model or a v8 cpu model
which has not configured a PMU.
Paolo Bonzini [Mon, 25 Mar 2019 14:16:46 +0000 (14:16 +0000)]
nrf51_gpio: reflect pull-up/pull-down to IRQs
Some drivers do I2C bitbanging by keeping the output to 0 and flipping
the GPIO direction between input and output (see for example in Linux
gpio_set_open_drain_value_commit, in drivers/gpio/gpiolib.c).
When the GPIO is set to input, the pull-up resistor brings the output
to 1, while when the GPIO is set to output, the output driver brings
the output to 0.
Implement this for the nRF51 GPIO device model. First, if both input and
output are floating, and there is a pull-up or pull-down resistor
configured, do not just set s->in, but also make any devices listening
on the output qemu_irq receive that value. Second, if the pin is
driven both internally (as an output) and externally, you don't get a
short circuit as long as both sides drive the pin to the same value.
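A minimal sketch of the effective pin level with a pull resistor, using illustrative names rather than the actual nrf51_gpio code:
    /* Sketch: level seen by devices listening on the output qemu_irq. */
    static int effective_level(bool output_en, bool out,
                               bool pull_up, bool pull_down)
    {
        if (output_en) {
            return out;        /* pin is driven internally */
        }
        if (pull_up) {
            return 1;          /* floating input pulled high */
        }
        if (pull_down) {
            return 0;          /* floating input pulled low */
        }
        return -1;             /* genuinely floating: no defined level */
    }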
Peter Maydell [Mon, 25 Mar 2019 13:31:12 +0000 (13:31 +0000)]
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-fpu-fixes-250319-1' into staging
Mix of testing & fpu fixes
- more splitting of Travis matrix to avoid timeouts
- Fused Multiply-Add fixes for MIPS and hardfloat
- cleanups to docker travis emulation
# gpg: Signature made Mon 25 Mar 2019 10:44:44 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <[email protected]>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-fpu-fixes-250319-1:
docker: trivial changes to `make docker` help
docker: Fix travis script unable to find source dir
docker: Fix travis.py parser and misc change
hardfloat: fix float32/64 fused multiply-add
target/mips: Fix minor bug in FPU
.travis.yml: reduce number of targets built while disabling things
.travis.yml: --disable-user for --without-default-devices
.travis.yml: split some more system builds
configure: add --target-list-exclude
docker: Fix travis script unable to find source dir
The script generated from QEMU_SRC/.travis.yml uses BUILD_DIR and
SRC_DIR paths relative to the current dir, unless these variables
are exported in the environment.
Since commit 05790dafef1 BUILD_DIR is exported in the runner script,
but SRC_DIR is not, so make docker-travis fails because the reference
to the source dir is wrong. Let's unset both BUILD_DIR and SRC_DIR
before calling the script, given that it is already executed from the
source dir (as in Travis).
Fix the travis.py script, which failed to parse the current
QEMU_SRC/.travis.yml file: the file no longer builds combinations from
env/matrix but uses explicit includes instead, and the compiler can be
omitted from matrix/include, in which case Travis chooses the first
entry of the global compiler list.
Also replace yaml.load() with yaml.safe_load() to quiet the
following deprecation warning:
https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation
The wrong type of NaN was generated for IEEE 754-2008 by the MADDF.<D|S>
and MSUBF.<D|S> instructions when the arguments were (Inf, Zero, NaN) or
(Zero, Inf, NaN).
The if-else statement establishes whether the system conforms to IEEE
754-1985 or IEEE 754-2008, and defines different behaviors accordingly.
In the IEEE 754-2008 case, for the mentioned inputs,
<MADDF|MSUBF>.<D|S> returns the input value 'c' [2] (page 53) and
raises floating point exception 'Invalid Operation' [1] (pages 349,
350).
These scenarios were tested and the results in QEMU emulation match
the results obtained on the machine that has a MIPS64R6 CPU.
[1] MIPS Architecture for Programmers Volume II-a: The MIPS64
Instruction Set Reference Manual, Revision 6.06
[2] MIPS Architecture for Programmers Volume IV-j: The MIPS64
SIMD Architecture Module, Revision 1.12
Alex Bennée [Tue, 19 Mar 2019 12:09:49 +0000 (12:09 +0000)]
.travis.yml: split some more system builds
We define a new class of targets (MAIN_SOFTMMU_TARGETS) to cover the
major architectures. We either just build those or use the new
target-list-exclude mechanism to remove them from the list. This will
hopefully stop some of the longer builds hitting the Travis timeout
limit.
Alex Bennée [Tue, 19 Mar 2019 11:59:12 +0000 (11:59 +0000)]
configure: add --target-list-exclude
This is an inverse selection which excludes a selected set of targets
from the default target list. It will mostly be useful for CI
configurations but it might be useful for some users as well.
You cannot specify --target-list and --target-list-exclude at the same
time.
Peter Maydell [Mon, 25 Mar 2019 07:59:40 +0000 (07:59 +0000)]
Merge remote-tracking branch 'remotes/elmarco/tags/slirp-pull-request' into staging
slirp: clarify license of slirp as BSD-3
# gpg: Signature made Fri 22 Mar 2019 19:16:50 GMT
# gpg: using RSA key DAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <[email protected]>" [full]
# gpg: aka "Marc-André Lureau <[email protected]>" [full]
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* remotes/elmarco/tags/slirp-pull-request:
slirp: is not maintained by Kelly Price for a long time
slirp: remove reference to COPYRIGHT file
slirp: clarify license of slirp files using SPDX: implicit via unstated
slirp: clarify license of slirp files using SPDX: implicit via COPYRIGHT
slirp: clarify license of slirp files using SPDX: explicit MIT
slirp: clarify license of slirp files using SPDX: explicit BSD
slirp: relicense GPL files to BSD-3
slirp: update COPYRIGHT to use full 3-Clause BSD License
Max Filippov [Fri, 22 Mar 2019 19:49:17 +0000 (12:49 -0700)]
tests/tcg/xtensa: clean up test set
Drop test_fail: we know that exit simcall works. Now that it's not run
automatically there's no point in keeping it.
Drop test_pipeline: we're not modeling pipeline, we don't control ccount
and there's no plan to do so.
Enable test_boolean: it won't break on cores without boolean option, it
will do testing on cores with boolean option.
The slirp COPYRIGHT file contains a BSD-3 license. Instead of referring to
another project file, the SPDX license notice present in all source
files now states that unequivocally.
In order to make slirp a standalone project, the project must have a
clear license, and be compatible with the GPL or LGPL.
Since commit 2f5f89963186d42a7ded253bc6cf5b32abb45cec ("Remove the
advertising clause from the slirp license"), slirp is BSD-3. But new
files have been added under slirp/ with QEMU GPL license since then.
The copyright holders have been asked to relicense files to BSD-3 and
gave their permission:
> Is the code in question copyright you personally, or copyright
> IBM as your employer at the time ? If the latter, it is IBM that
> would need to approve the relicensing.
That was done. I had our legal team approve the change of license.
slirp: update COPYRIGHT to use full 3-Clause BSD License
According to commit 2f5f89963186d42a7ded253bc6cf5b32abb45cec ("Remove
the advertising clause from the slirp license"), Danny Gasparovski
gave permission to license slirp code under 3-clause BSD license:
I have no objection to having Slirp code in QEMU be licensed under
the 3-clause BSD license.
slirp/COPYRIGHT's initial version in 2004 (commit 5fafdf24) listed
only three clauses, but used the poisonous advertising clause as
clause 3, which is the controversial clause of the non-free 4-clause
license. It appears that the BSD-4 license was copied and the wrong
clause was deleted when creating COPYRIGHT. That is an easy mistake
to make, since the 3-clause license was created by removing clause 3
of the 4-clause license: you sometimes see the three-clause version
with clauses 1, 2, 4, but more commonly a renumbered version with
clauses 1, 2, 3 to close the gap. Paying attention only to clause
numbers instead of content makes it easy to confuse which clause to
delete when going from 4-clause to 3-clause.
Commit 2f5f89963 removed the poisonous wrong clause on the grounds of
moving from 4-clause to 3-clause, but did not add the missing clause,
which makes COPYRIGHT look like the 2-clause version. Still, we have
a decent enough trail showing the intent for 3-clause.
trace-events: Fix attribution of trace points to source
Some trace points are attributed to the wrong source file. Happens
when we neglect to update trace-events for code motion, or add events
in the wrong place, or misspell the file name.
Clean up with help of cleanup-trace-events.pl. Same funnies as in the
previous commit, of course. Manually shorten its change to
linux-user/trace-events to */signal.c.
We spell out sub/dir/ in sub/dir/trace-events' comments pointing to
source files. That's because when trace-events got split up, the
comments were moved verbatim.
Delete the sub/dir/ part from these comments. Gets rid of several
misspellings.
Stefan Hajnoczi [Thu, 21 Mar 2019 17:08:31 +0000 (17:08 +0000)]
trace: avoid SystemTap dtrace(1) warnings on empty files
target/hppa/trace-events only contains disabled events, resulting in a
trace-dtrace.dtrace file that says "provider qemu {}". SystemTap's
dtrace(1) tool prints a warning when processing this input file.
This patch avoids the error by emitting an empty file instead of
"provider qemu {}" when there are no enabled trace events.
Stefan Hajnoczi [Thu, 21 Mar 2019 17:08:30 +0000 (17:08 +0000)]
trace: handle tracefs path truncation
If the tracefs mountpoint has a very long path we may exceed PATH_MAX.
This is a system misconfiguration and the user must resolve it so that
applications can perform path-based system calls successfully.
This issue does not occur on real-world systems since tracefs is mounted
on /sys/kernel/debug/tracing/, but the compiler is smart enough to
foresee the possibility and warn about the unchecked snprintf(3) return
value. This patch fixes the compiler warning.
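A minimal sketch of the kind of truncation check the warning asks for (path components illustrative):
    char path[PATH_MAX];
    int len = snprintf(path, sizeof(path), "%s/tracing/trace_marker",
                       debugfs);                 /* debugfs mountpoint string */
    if (len < 0 || len >= (int)sizeof(path)) {
        /* Truncated: refuse to open a wrong, shortened path and report the
         * misconfiguration instead. */
        return false;
    }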
Peter Maydell [Fri, 22 Mar 2019 09:37:38 +0000 (09:37 +0000)]
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging
x86 queue for -rc1
A few fixes that missed -rc0:
* CPU model documentation updates (Daniel P. Berrangé)
* Fix bogus OSPKE warnings (Eduardo Habkost)
* Work around KVM bugs when handing arch_capabilities
(Eduardo Habkost)
# gpg: Signature made Thu 21 Mar 2019 19:32:02 GMT
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <[email protected]>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-next-pull-request:
docs: add note about stibp CPU feature for spectre v2
docs: clarify that spec-ctrl is only needed for Spectre v2
i386: Disable OSPKE on CPU model definitions
i386: Make arch_capabilities migratable
i386: kvm: Disable arch_capabilities if MSR can't be set
Palmer Dabbelt [Thu, 21 Mar 2019 14:59:20 +0000 (07:59 -0700)]
target/riscv: Zero extend the inputs of divuw and remuw
While running the GCC test suite against 4.0.0-rc0, Kito found a
regression introduced by the decodetree conversion that caused divuw and
remuw to sign-extend their inputs. The ISA manual says they are
supposed to be zero extended:
DIVW and DIVUW instructions are only valid for RV64, and divide the
lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as
signed and unsigned integers respectively, placing the 32-bit
quotient in rd, sign-extended to 64 bits. REMW and REMUW
instructions are only valid for RV64, and provide the corresponding
signed and unsigned remainder operations respectively. Both REMW
and REMUW always sign-extend the 32-bit result to 64 bits, including
on a divide by zero.
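A minimal sketch of the translation-time fix; tcg_gen_ext32u_tl()/tcg_gen_ext32s_tl() are real TCG ops, while the gen_divu() helper is illustrative:
    tcg_gen_ext32u_tl(src1, src1);        /* zero-extend low 32 bits of rs1 */
    tcg_gen_ext32u_tl(src2, src2);        /* zero-extend low 32 bits of rs2 */
    gen_divu(dest, src1, src2);           /* illustrative unsigned-divide step */
    tcg_gen_ext32s_tl(dest, dest);        /* DIVUW/REMUW sign-extend the result */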
Here's Kito's reduced test case from the GCC test suite
Max Filippov [Fri, 22 Mar 2019 03:22:03 +0000 (20:22 -0700)]
target/xtensa: fix break_dependency for repeated resources
break_dependency incorrectly handles the case of dependency on an opcode
that references the same register multiple times. E.g. the following
instruction is translated incorrectly:
{ or a2, a3, a3 ; or a3, a2, a2 }
This happens because resource indices of both dependency graph nodes are
incremented, and a copy for the second instance of the same register in
the ending node is not done.
Only increment resource index of the ending node of the dependency.
Add test.
Greg Kurz [Thu, 28 Feb 2019 15:06:06 +0000 (16:06 +0100)]
crypto/block: remove redundant struct packing to fix build with gcc 9
Build fails with gcc 9:
crypto/block-luks.c:689:18: error: taking address of packed member of ‘struct QCryptoBlockLUKSHeader’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
689 | be32_to_cpus(&luks->header.payload_offset);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
crypto/block-luks.c:690:18: error: taking address of packed member of ‘struct QCryptoBlockLUKSHeader’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
690 | be32_to_cpus(&luks->header.key_bytes);
| ^~~~~~~~~~~~~~~~~~~~~~~
crypto/block-luks.c:691:18: error: taking address of packed member of ‘struct QCryptoBlockLUKSHeader’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
691 | be32_to_cpus(&luks->header.master_key_iterations);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... a bunch of similar errors...
crypto/block-luks.c:1288:22: error: taking address of packed member of ‘struct QCryptoBlockLUKSKeySlot’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
1288 | be32_to_cpus(&luks->header.key_slots[i].stripes);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
All members of the QCryptoBlockLUKSKeySlot and QCryptoBlockLUKSHeader are
naturally aligned and we already check at build time there isn't any
unwanted padding. Drop the QEMU_PACKED attribute.
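A minimal sketch of the resulting design choice, with an illustrative size constant:
    /* No QEMU_PACKED: members are naturally aligned, so taking their address
     * for be32_to_cpus() is safe.  A build-time assert still guards the
     * on-disk layout. */
    QEMU_BUILD_BUG_ON(sizeof(struct QCryptoBlockLUKSHeader) !=
                      QCRYPTO_BLOCK_LUKS_HEADER_SIZE /* illustrative constant */);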
io: fix handling of EOF / error conditions in websock GSource
We were never reporting the G_IO_HUP event when an end of file was hit
on the websocket channel.
We also didn't report G_IO_ERR when we hit a fatal error processing the
websocket protocol.
The latter in particular meant that the chardev code would not notice
when an eof/error was encountered on the websocket channel, unless the
guest OS happened to trigger a write operation.
This meant that once the first client had quit, the chardev would never
listen to accept a new client.
docs: add note about stibp CPU feature for spectre v2
While the stibp CPU feature is not commonly used by guest OSes for Spectre
mitigation due to its performance impact, it is nonetheless best
practice to expose it to all guests. This allows the guest OS to
decide whether to make use of it.
docs: clarify that spec-ctrl is only needed for Spectre v2
The docs currently say that the spec-ctrl feature is needed for both
Spectre variants, but it is only used to address Spectre v2. Also
remove the note about retpolines. The guest OS is usually treated
as a blackbox from host mgmt pov, so it won't have knowledge about
use of retpolines and thus should unconditionally expose spec-ctrl,
allowing the guest to decide whether to use it or not.
This happens because OSPKE was never returned by
GET_SUPPORTED_CPUID or x86_cpu_get_supported_feature_word().
OSPKE is a runtime flag automatically set by the KVM module or by
TCG code; it was always cleared by x86_cpu_filter_features() and
was not supposed to appear in the CPU model table.
Remove the OSPKE flag from the CPU model table entries, to avoid
the bogus warning and avoid returning invalid feature data on
query-cpu-* QMP commands. As OSPKE was always cleared by
x86_cpu_filter_features(), this won't have any guest-visible
impact.
Include a test case that should detect the problem if we introduce
a similar bug again.
Eduardo Habkost [Fri, 25 Jan 2019 22:06:06 +0000 (20:06 -0200)]
i386: Make arch_capabilities migratable
Now that kvm_arch_get_supported_cpuid() will only return
arch_capabilities if QEMU is able to initialize the MSR properly,
we know that the feature is safely migratable.
Eduardo Habkost [Fri, 25 Jan 2019 22:06:05 +0000 (20:06 -0200)]
i386: kvm: Disable arch_capabilities if MSR can't be set
KVM has two bugs in the handling of MSR_IA32_ARCH_CAPABILITIES:
1) Linux commit 1eaafe91a0df ("kvm: x86: IA32_ARCH_CAPABILITIES
is always supported") makes GET_SUPPORTED_CPUID return
arch_capabilities even if running on SVM. This makes "-cpu
host,migratable=off" incorrectly expose arch_capabilities on CPUID on
AMD hosts (where the MSR is not emulated by KVM).
2) KVM_GET_MSR_INDEX_LIST does not return MSR_IA32_ARCH_CAPABILITIES if
the MSR is not supported by the host CPU. This makes QEMU not
initialize the MSR properly at kvm_put_msrs() on those hosts.
Work around both bugs on the QEMU side, by checking if the MSR
was returned by KVM_GET_MSR_INDEX_LIST before returning the
feature flag on kvm_arch_get_supported_cpuid().
This has the unfortunate side effect of making arch_capabilities
unavailable on hosts without hardware support for the MSR until bug #2
is fixed on KVM, but I can't see another way to work around bug #1
without that side effect.
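A minimal sketch of the shape of the workaround in kvm_arch_get_supported_cpuid(); the has_msr_arch_capabs check is written as an assumption here:
    /* Sketch only: drop the CPUID bit unless KVM_GET_MSR_INDEX_LIST also
     * listed the MSR, so kvm_put_msrs() is known to initialize it. */
    if (function == 7 && index == 0 && reg == R_EDX) {
        if (!has_msr_arch_capabs) {
            ret &= ~CPUID_7_0_EDX_ARCH_CAPABILITIES;
        }
    }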
Peter Maydell [Tue, 19 Mar 2019 16:27:14 +0000 (16:27 +0000)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- mirror: Fix early return from drain (could cause deadlocks)
- vmdk: Fixed probing for version 3 images
- vl: Fix to create migration object before block backends again (fixes
segfault for block drivers that set migration blockers)
- Several minor fixes, documentation and test case improvements
# gpg: Signature made Tue 19 Mar 2019 14:59:17 GMT
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
qemu-iotests: Treat custom TEST_DIR in 051
blockdev: Check @replaces in blockdev_mirror_common
block: Make bdrv_{copy_on_read,crypto_luks,replication} static
blockjob: fix user pause in block_job_error_action
qemu-iotests: Fix 232 for non-qcow2
vl: Fix to create migration object before block backends again
iotests: 153: Wait for an answer to QMP commands
block: Silence Coverity in bdrv_drop_intermediate()
vmdk: Support version=3 in VMDK descriptor files
qapi: fix block-latency-histogram-set description and examples
qcow2: Fix data file error condition in qcow2_co_create()
mirror: Confirm we're quiesced only if the job is paused or cancelled
Roger Pau Monne [Mon, 18 Mar 2019 17:37:31 +0000 (18:37 +0100)]
xen-mapcache: use MAP_FIXED flag so the mmap address hint is always honored
Or, if it's not possible to honor the hinted address, an error is returned
instead. This makes it easier to spot the actual failure, instead of
failing later on when the caller of xen_remap_bucket realizes the
mapping has not been created at the requested address.
Also note that at least on FreeBSD using MAP_FIXED will cause mmap to
try harder to honor the passed address.
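A minimal sketch of the mapping call with MAP_FIXED and an explicit failure path (variable names illustrative):
    void *p = mmap(vaddr_hint, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, fd, offset);
    if (p == MAP_FAILED) {
        int err = errno;
        /* Fail here, at the point of the broken mapping, rather than later
         * when a caller of xen_remap_bucket touches the wrong address. */
        error_report("xen: mmap at %p failed: %s", vaddr_hint, strerror(err));
        return -err;
    }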
Max Reitz [Wed, 13 Feb 2019 22:53:01 +0000 (23:53 +0100)]
blockdev: Check @replaces in blockdev_mirror_common
There is no reason why the constraints we put on @replaces should be
limited to drive-mirror. Therefore, move the sanity checks from
qmp_drive_mirror() to blockdev_mirror_common() so they apply to
blockdev-mirror as well.