* remotes/juanquintela/tags/fail-pull-request:
multifd: Use number of channels as listen backlog
socket: Add num connections to qio_net_listener_open_sync()
socket: Add num connections to qio_channel_socket_async()
socket: Add num connections to qio_channel_socket_sync()
socket: Add backlog parameter to socket_listen
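As a rough, self-contained POSIX illustration of the idea (not the QIO code itself): size the listen() backlog to the number of expected incoming connections, here the multifd channel count, so simultaneous connection attempts are not refused:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Create a listening TCP socket whose backlog matches the number of
     * expected incoming connections (e.g. the multifd channel count), so
     * simultaneous connection attempts are not dropped by the kernel. */
    static int listen_for_channels(uint16_t port, int num_channels)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, num_channels) < 0) {   /* backlog = channel count */
            close(fd);
            return -1;
        }
        return fd;
    }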
Peter Maydell [Wed, 4 Sep 2019 13:44:54 +0000 (14:44 +0100)]
Merge remote-tracking branch 'remotes/ehabkost/tags/machine-next-pull-request' into staging
Machine + x86 queue, 2019-09-03
Bug fixes:
* Fix die-id validation regression (Eduardo Habkost)
* vmmouse: Properly reset state (Jan Kiszka)
* hostmem-file: fix pmem file size check (Stefan Hajnoczi)
* Keep query-hotpluggable-cpus output compatible with older QEMU
if '-smp dies' is not set (Igor Mammedov)
* migration: Do not re-read the clock on pre_save in case of paused guest
(Maxiwell S. Garcia)
Cleanups:
* NUMA code cleanups (Tao Xu)
* Remove stale externs from includes (Alex Bennée)
Features:
* qapi: report the default CPU type for each machine (Daniel P. Berrangé)
* remotes/ehabkost/tags/machine-next-pull-request:
migration: Do not re-read the clock on pre_save in case of paused guest
x86: do not advertise die-id in query-hotpluggable-cpus if '-smp dies' is not set
i386/vmmouse: Properly reset state
hostmem-file: fix pmem file size check
qapi: report the default CPU type for each machine
pc: Don't make die-id mandatory unless necessary
pc: Improve error message when die-id is omitted
pc: Fix error message on die-id validation
numa: move numa global variable numa_info into MachineState
numa: move numa global variable have_numa_distance into MachineState
numa: move numa global variable nb_numa_nodes into MachineState
hw/arm: simplify arm_load_dtb
includes: remove stale [smp|max]_cpus externs
Peter Maydell [Wed, 4 Sep 2019 12:59:01 +0000 (13:59 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190903' into staging
target-arm queue:
* Revert and correctly fix refactoring of unallocated_encoding()
* Take exceptions on ATS instructions when needed
* aspeed/timer: Provide back-pressure information for short periods
* memory: Remove unused memory_region_iommu_replay_all()
* hw/arm/smmuv3: Log a guest error when decoding an invalid STE
* hw/arm/smmuv3: Remove spurious error messages on IOVA invalidations
* target/arm: Fix SMMLS argument order
* hw/arm: Use ARM_CPU_TYPE_NAME() macro when appropriate
* hw/arm: Correct reference counting for creation of various objects
* includes: remove stale [smp|max]_cpus externs
* tcg/README: fix typo
* atomic_template: fix indentation in GEN_ATOMIC_HELPER
* include/exec/cpu-defs.h: fix typo
* target/arm: Free TCG temps in trans_VMOV_64_sp()
* target/arm: Don't abort on M-profile exception return in linux-user mode
* remotes/pmaydell/tags/pull-target-arm-20190903: (21 commits)
target/arm: Don't abort on M-profile exception return in linux-user mode
target/arm: Free TCG temps in trans_VMOV_64_sp()
include/exec/cpu-defs.h: fix typo
atomic_template: fix indentation in GEN_ATOMIC_HELPER
tcg/README: fix typo s/afterwise/afterwards/
includes: remove stale [smp|max]_cpus externs
hw/net/xilinx_axi: Use object_initialize_child for correct ref. counting
hw/dma/xilinx_axi: Use object_initialize_child for correct ref. counting
hw/arm/fsl-imx: Add the cpu as child of the SoC object
hw/arm: Use sysbus_init_child_obj for correct reference counting
hw/arm: Use object_initialize_child for correct reference counting
hw/arm: Use ARM_CPU_TYPE_NAME() macro when appropriate
target/arm: Fix SMMLS argument order
hw/arm/smmuv3: Remove spurious error messages on IOVA invalidations
hw/arm/smmuv3: Log a guest error when decoding an invalid STE
memory: Remove unused memory_region_iommu_replay_all()
aspeed/timer: Provide back-pressure information for short periods
target/arm: Take exceptions on ATS instructions when needed
target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions
target/arm: Factor out unallocated_encoding for aarch32
...
Peter Maydell [Wed, 4 Sep 2019 11:28:43 +0000 (12:28 +0100)]
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2019-09-03' into staging
Block patches:
- qemu-io now accepts a file to read a write pattern from
- Ensure that raw files have their first block allocated so we can probe
the O_DIRECT alignment if necessary
- Various fixes
Peter Maydell [Wed, 4 Sep 2019 10:25:13 +0000 (11:25 +0100)]
Merge remote-tracking branch 'remotes/stsquad/tags/pull-gdbstub-gitdm-testing-020919-1' into staging
Various maintainer updates
- fixes for gdbstub regressions
- bunch of gitdm/mailmap updates
- module fixes for Travis
- docker fixes for shippable
# gpg: Signature made Mon 02 Sep 2019 11:19:04 BST
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <[email protected]>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-gdbstub-gitdm-testing-020919-1:
tests/docker: upgrade docker.py to python3
tests: fix modules-test with no default machine
build-sys: build ui-spice-app as a module
contrib/gitdm: Add RT-RK to the domain-map
.mailmap/aliases: add some further commentary
mailmap: Add many entries to improve 'git shortlog' statistics
mailmap: Update philmd email address
mailmap: Reorder by sections
contrib/gitdm: Add [email protected] to group-map-redhat
contrib/gitdm: filetype interface is not in order, fix
gdbstub: Fix handler for 'F' packet
gdbstub: Fix handling of '!' packet with new infra
Peter Maydell [Wed, 4 Sep 2019 09:16:00 +0000 (10:16 +0100)]
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-aug-29-2019' into staging
MIPS queue for August 29th, 2019
# gpg: Signature made Thu 29 Aug 2019 11:19:28 BST
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <[email protected]>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-aug-29-2019: (31 commits)
target/mips: Fix emulation of ST.W in system mode
target/mips: Clean up handling of CP0 register 31
target/mips: Clean up handling of CP0 register 30
target/mips: Clean up handling of CP0 register 29
target/mips: Clean up handling of CP0 register 28
target/mips: Clean up handling of CP0 register 27
target/mips: Clean up handling of CP0 register 26
target/mips: Clean up handling of CP0 register 25
target/mips: Clean up handling of CP0 register 24
target/mips: Clean up handling of CP0 register 23
target/mips: Clean up handling of CP0 register 20
target/mips: Clean up handling of CP0 register 19
target/mips: Clean up handling of CP0 register 18
target/mips: Clean up handling of CP0 register 17
target/mips: Clean up handling of CP0 register 16
target/mips: Clean up handling of CP0 register 15
target/mips: Clean up handling of CP0 register 14
target/mips: Clean up handling of CP0 register 13
target/mips: Clean up handling of CP0 register 12
target/mips: Clean up handling of CP0 register 11
...
migration: Do not re-read the clock on pre_save in case of paused guest
Re-reading the clock makes the guest aware of the paused time between the
'stop' and 'migrate' commands. This is an issue for an already-paused
VM because some side effects, like process stalls, could happen
after migration.
So, this patch checks the guest's runstate in the pre_save handler and
does not re-read the clock if the guest is paused (cold migration).
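A rough sketch of the approach, with hypothetical helper names rather than the actual migration code: the pre_save hook only samples the clock while the guest is running.

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical stand-ins for QEMU's runstate and clock helpers. */
    static bool guest_is_running;           /* false while the VM is paused  */
    static int64_t saved_guest_clock_ns;    /* value captured at 'stop' time */

    static int64_t host_clock_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* pre_save: only re-read the clock for a running guest; for a paused
     * guest (cold migration) keep the value captured when it was stopped,
     * so the paused interval is not leaked into the guest after migration. */
    static int clock_pre_save(void *opaque)
    {
        if (guest_is_running) {
            saved_guest_clock_ns = host_clock_ns();
        }
        return 0;
    }

    int main(void)
    {
        guest_is_running = false;           /* VM was stopped before 'migrate' */
        saved_guest_clock_ns = host_clock_ns();
        clock_pre_save(NULL);               /* must NOT advance the saved clock */
        return 0;
    }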
Igor Mammedov [Mon, 2 Sep 2019 12:02:22 +0000 (08:02 -0400)]
x86: do not advertise die-id in query-hotpluggable-cpus if '-smp dies' is not set
Commit 176d2cda0 (i386/cpu: Consolidate die-id validity in smp context) added
a new 'die-id' topology property to CPUs and exposed it via the QMP command
query-hotpluggable-cpus, which broke -device/device_add cpu-foo for existing
users that do not support die-id/dies yet. That would be fine if it affected
new machine types only, but it also affected old machine types, which breaks
migration from old QEMU to new. For example, the following CLI:
OLD-QEMU -M pc-i440fx-4.0 -smp 1,max_cpus=2 \
-device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0
is not able to start with new QEMU, which complains about an invalid die-id.
After discovering the regression, the patch
"pc: Don't make die-id mandatory unless necessary"
made die-id optional so the old CLI would work.
However, that is not enough, as new QEMU still exposes die-id via the
query-hotpluggable-cpus QMP command, so users that started an old machine
type on new QEMU, passing all properties (including die-id) received from the
QMP command (as required), won't be able to start old QEMU with the same
properties since it doesn't support die-id.
Fix it by hiding die-id in query-hotpluggable-cpus for all machine types when
'-smp dies' is not provided on the CLI or is set to 1. In that case
smp_dies == 1 and the APIC ID is calculated the default way (as before die
support), so no compat code is needed: in both cases the topology provided
to the guest via CPUID is the same.
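A schematic sketch of the idea, with simplified structures rather than the actual x86 code: only advertise die-id when more than one die was configured.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the QAPI CpuInstanceProperties. */
    typedef struct {
        int64_t socket_id, die_id, core_id, thread_id;
        bool has_die_id;
    } CpuProps;

    static CpuProps describe_cpu(unsigned smp_dies, int64_t socket,
                                 int64_t die, int64_t core, int64_t thread)
    {
        CpuProps p = {
            .socket_id = socket, .core_id = core, .thread_id = thread,
        };
        /* Only expose die-id when '-smp dies' was given with a value > 1;
         * with smp_dies == 1 the topology (and APIC ID) is unchanged, so
         * older tooling keeps working without compat code. */
        if (smp_dies > 1) {
            p.has_die_id = true;
            p.die_id = die;
        }
        return p;
    }

    int main(void)
    {
        CpuProps p = describe_cpu(1, 0, 0, 0, 0);
        printf("die-id advertised: %s\n", p.has_die_id ? "yes" : "no");
        return 0;
    }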
Stefan Hajnoczi [Fri, 23 Aug 2019 13:56:32 +0000 (14:56 +0100)]
hostmem-file: fix pmem file size check
Commit 314aec4a6e06844937f1677f6cba21981005f389 ("hostmem-file: reject
invalid pmem file sizes") added a file size check that verifies the
hostmem object's size parameter against the actual devdax pmem file.
This is useful because getting the size wrong results in confusing
errors inside the guest.
However, the code doesn't work properly for files where struct
stat::st_size is zero. Hostmem-file's ->alloc() function returns early
without setting an Error, causing an assertion failure later on.
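A self-contained sketch of the pitfall in plain POSIX (not the hostmem-file code): st_size == 0, as reported for a devdax device, must be handled explicitly rather than failing silently.

    #include <stdio.h>
    #include <sys/stat.h>

    /* Return 0 if 'size' is acceptable for the backing file, -1 on error.
     * A devdax pmem device typically reports st_size == 0, so a zero
     * st_size must not be treated as "file is smaller than requested". */
    static int check_backing_size(const char *path, long long size)
    {
        struct stat st;

        if (stat(path, &st) < 0) {
            perror(path);
            return -1;
        }
        if (st.st_size == 0) {
            /* Size unknown from stat(); fall back to another probe (or
             * trust the user-provided size) instead of failing silently. */
            return 0;
        }
        if (size > st.st_size) {
            fprintf(stderr, "requested size %lld exceeds file size %lld\n",
                    size, (long long)st.st_size);
            return -1;
        }
        return 0;
    }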
qapi: report the default CPU type for each machine
When the user doesn't request an explicit CPU model with libvirt or QEMU,
a machine-type-specific CPU model is picked. Currently there is no way
to determine what this QEMU built-in default is, so libvirt cannot
report this back to the user in the XML config.
This extends the "query-machines" QMP command so that it reports the
default CPU model typename for each machine.
Eduardo Habkost [Fri, 16 Aug 2019 17:07:50 +0000 (14:07 -0300)]
pc: Don't make die-id mandatory unless necessary
We have this issue reported when using libvirt to hotplug CPUs:
https://bugzilla.redhat.com/show_bug.cgi?id=1741451
Basically, libvirt is not copying die-id from
query-hotpluggable-cpus, but die-id is now mandatory.
We could blame libvirt and say it is not following the documented
interface, because we have this buried in the QAPI schema
documentation:
> Note: currently there are 5 properties that could be present
> but management should be prepared to pass through other
> properties with device_add command to allow for future
> interface extension. This also requires the field names to be kept in
> sync with the properties passed to -device/device_add.
But I don't think this would be reasonable of us. We can just
make QEMU more flexible and let die-id be omitted when there's
no ambiguity. This allows us to keep compatibility with
existing libvirt versions.
Test case included to ensure we don't break this again.
Eduardo Habkost [Thu, 15 Aug 2019 18:38:02 +0000 (15:38 -0300)]
pc: Improve error message when die-id is omitted
The error message when die-id is omitted doesn't make sense:
$ qemu-system-x86_64 -smp 1,sockets=6,maxcpus=6 \
-device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0
qemu-system-x86_64: -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0: \
Invalid CPU die-id: 4294967295 must be in range 0:0
Fix it, so it will now read:
qemu-system-x86_64: -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0: \
CPU die-id is not set
Eduardo Habkost [Thu, 15 Aug 2019 18:38:01 +0000 (15:38 -0300)]
pc: Fix error message on die-id validation
The error message for die-id range validation is incorrect. Example:
$ qemu-system-x86_64 -smp 1,sockets=6,maxcpus=6 \
-device qemu64-x86_64-cpu,socket-id=1,die-id=1,core-id=0,thread-id=0
qemu-system-x86_64: -device qemu64-x86_64-cpu,socket-id=1,die-id=1,core-id=0,thread-id=0: \
Invalid CPU die-id: 1 must be in range 0:5
The actual range for die-id in this example is 0:0.
Fix the error message to use smp_dies and print the correct range.
Peter Maydell [Tue, 3 Sep 2019 16:20:39 +0000 (17:20 +0100)]
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.2-20190829' into staging
ppc patch queue 2019-08-29
Another pull request for ppc-for-4.2. Includes:
* Several powernv patches which were pulled last minute from the
last PULL, now that some problems with them have been sorted out
* A fix for -no-reboot which has been broken since the
pseries-rhel4.1.0 machine type
* Add some host threads information which AIX guests will need to
properly scale the PURR and SPURR
* Change behaviour to match x86 when unplugging function 0 of a
multifunction PCI device
* A number of TCG fixes in FPU emulation
And a handful of other assorted fixes and cleanups.
* remotes/dgibson/tags/ppc-for-4.2-20190829:
spapr: Set compat mode in spapr_core_plug()
spapr/pci: Convert types to QEMU coding style
spapr_pci: Advertise BAR reallocation capability
spapr: Use SHUTDOWN_CAUSE_SUBSYSTEM_RESET for CAS reboots
powerpc/spapr: Add host threads parameter to ibm,get_system_parameter
pseries: Update SLOF firmware image
target/ppc: Refactor emulation of vmrgew and vmrgow instructions
target/ppc: Fix do_float_check_status vs inexact
target/ppc: Set float_tininess_before_rounding at cpu reset
pseries: Fix compat_pvr on reset
spapr_pci: remove all child functions in function zero unplug
ppc: Fix xscvdpspn for SNAN
ppc: Fix xsmaddmdp and friends
tests/boot-serial-test: add support for all the PowerNV machines
ppc/pnv: Introduce PowerNV machines with fixed CPU models
ppc/pnv: Generate phandle for the "interrupt-parent" property
ppc/pnv: add more dummy XSCOM addresses for the P9 CAPP
ppc/pnv: update skiboot to v6.4
ppc/pnv: Set default ram size to 1.75GB
Peter Maydell [Tue, 3 Sep 2019 15:48:37 +0000 (16:48 +0100)]
Merge remote-tracking branch 'remotes/cleber/tags/python-next-pull-request' into staging
Python (acceptance tests) queue, 2019-08-28
# gpg: Signature made Thu 29 Aug 2019 02:11:22 BST
# gpg: using RSA key 7ABB96EB8B46B94D5E0FE9BB657E8D33A5F209F3
# gpg: Good signature from "Cleber Rosa <[email protected]>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 7ABB 96EB 8B46 B94D 5E0F E9BB 657E 8D33 A5F2 09F3
* remotes/cleber/tags/python-next-pull-request:
VNC Acceptance test: simplify test names
Boot Linux Console Test: add a test for ppc64 + pseries
Acceptance tests: drop left over usage of ":avocado: enable"
tests/requirements.txt: pin paramiko version requirement
tests.acceptance.avocado_qemu: Add support for powerpc
Peter Maydell [Thu, 22 Aug 2019 13:15:34 +0000 (14:15 +0100)]
target/arm: Don't abort on M-profile exception return in linux-user mode
An attempt to do an exception-return (branch to one of the magic
addresses) in linux-user mode for M-profile should behave like
a normal branch, because linux-user mode is always going to be
in 'handler' mode. This used to work, but we broke it when we added
support for the M-profile security extension in commit d02a8698d7ae2bfed.
In that commit we allowed even handler-mode calls to magic return
values to be checked for and dealt with by causing an
EXCP_EXCEPTION_EXIT exception to be taken, because this is
needed for the FNC_RETURN return-from-non-secure-function-call
handling. For system mode we added a check in do_v7m_exception_exit()
to make any spurious calls from Handler mode behave correctly, but
forgot that linux-user mode would also be affected.
How an attempted return-from-non-secure-function-call in linux-user
mode should be handled is not clear -- on real hardware it would
result in return to secure code (not to the Linux kernel) which
could then handle the error in any way it chose. For QEMU we take
the simple approach of treating this erroneous return the same way
it would be handled on a CPU without the security extensions --
treat it as a normal branch.
The upshot of all this is that for linux-user mode we should never
do any of the bx_excret magic, so the code change is simple.
This ought to be a weird corner case that only affects broken guest
code (because Linux user processes should never be attempting to do
exception returns or NS function returns), except that the code that
assigns addresses in RAM for the process and stack in our linux-user
code does not attempt to avoid this magic address range, so
legitimate code attempting to return to a trampoline routine on the
stack can fall into this case. This change fixes those programs,
but we should also look at restricting the range of memory we
use for M-profile linux-user guests to the area that would be
real RAM in hardware.
Peter Maydell [Tue, 27 Aug 2019 12:19:31 +0000 (13:19 +0100)]
target/arm: Free TCG temps in trans_VMOV_64_sp()
The function neon_store_reg32() doesn't free the TCG temp that it
is passed, so the caller must do that. We got this right in most
places but forgot to free the TCG temps in trans_VMOV_64_sp().
Both functions, object_initialize() and object_property_add_child()
increase the reference counter of the new object, so one of the
references has to be dropped afterwards to get the reference
counting right. Otherwise the child object will not be properly
cleaned up when the parent gets destroyed.
Thus let's now use object_initialize_child() instead to get the
reference counting right.
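A self-contained toy illustration of the refcount problem (not QOM code): if both the init step and the add-as-child step take a reference, the creator must drop its own reference or the object can never be freed.

    #include <stdio.h>

    typedef struct {
        int refcount;
    } Obj;

    static void obj_ref(Obj *o)   { o->refcount++; }
    static void obj_unref(Obj *o) { if (--o->refcount == 0) printf("freed\n"); }

    int main(void)
    {
        Obj child = { 0 };

        obj_ref(&child);   /* "object_initialize": creator's reference          */
        obj_ref(&child);   /* "object_property_add_child": parent's reference   */

        /* The combined helper (object_initialize_child) effectively leaves
         * only the parent's reference.  Without dropping the creator's
         * reference the count stays at 2 and the child is never freed. */
        obj_unref(&child); /* drop the creator's reference */
        obj_unref(&child); /* parent releases its reference on destruction */
        return 0;
    }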
hw/arm: Use sysbus_init_child_obj for correct reference counting
Both object_initialize() and qdev_set_parent_bus() increase the
reference counter of the new object, so one of the references has
to be dropped afterwards to get the reference counting right.
In machine model code this refcount leak is not particularly
problematic because (unlike devices) machines will never be
created on demand via QMP, and they are never destroyed.
But in any case let's use the new sysbus_init_child_obj() instead
to get the reference counting right.
The previous simplification got the order of operands to the
subtraction wrong. Since the 64-bit product is the subtrahend,
we must use a 64-bit subtract to properly compute the borrow
from the low-part of the product.
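A self-contained demonstration of the borrow issue in generic C (not the target/arm code): subtracting only the high halves loses the borrow generated by the low 32 bits of the product.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t ra = 0, rn = 1, rm = 1;
        int64_t product = (int64_t)rn * rm;          /* 64-bit product     */

        /* Correct: 64-bit subtraction, then take the high word. */
        int32_t good = (int32_t)((((int64_t)ra << 32) - product) >> 32);

        /* Wrong: subtract only the high halves, ignoring the borrow. */
        int32_t bad = ra - (int32_t)(product >> 32);

        printf("correct = %" PRId32 ", wrong = %" PRId32 "\n", good, bad);
        /* correct = -1, wrong = 0: the low 32 bits of the product generate
         * a borrow that the 32-bit high-half subtraction misses. */
        return 0;
    }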
Eric Auger [Thu, 22 Aug 2019 17:23:50 +0000 (19:23 +0200)]
hw/arm/smmuv3: Remove spurious error messages on IOVA invalidations
An IOVA/ASID invalidation is notified to all IOMMU Memory Regions
through smmuv3_inv_notifiers_iova/smmuv3_notify_iova.
When the notification occurs it is possible that some of the
PCIe devices associated to the notified regions do not have a
valid stream table entry. In that case we output a LOG_GUEST_ERROR
message, for example:
"invalid sid=<SID> (L1STD span=0)"
"smmuv3_notify_iova error decoding the configuration for iommu mr=<MR>"
This is unfortunate as the user gets the impression that there
are some translation decoding errors whereas there are not.
This patch adds a new field in SMMUEventInfo that tells whether
the detection of an invalid STE must lead to an error report.
invalid_ste_allowed is set before doing the invalidations and
kept unset on actual translation.
The other configuration decoding error messages are kept since if the
STE is valid then the rest of the config must be correct.
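A schematic sketch of the mechanism with simplified types (not the SMMUv3 code): the event-info flag keeps the STE decoder quiet during invalidation sweeps and only reports guest errors on real translations.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        unsigned sid;
        bool invalid_ste_allowed;  /* true while walking invalidation notifiers */
    } EventInfo;

    static bool ste_is_valid(unsigned sid) { return sid != 0; } /* toy check */

    static int decode_ste(EventInfo *ev)
    {
        if (!ste_is_valid(ev->sid)) {
            if (!ev->invalid_ste_allowed) {
                /* Only on a real translation is an invalid STE a guest error. */
                fprintf(stderr, "invalid sid=%u (L1STD span=0)\n", ev->sid);
            }
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        EventInfo invalidation = { .sid = 0, .invalid_ste_allowed = true };
        EventInfo translation  = { .sid = 0, .invalid_ste_allowed = false };

        decode_ste(&invalidation);  /* silent: device has no valid STE yet   */
        decode_ste(&translation);   /* reported: genuine configuration error */
        return 0;
    }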
Andrew Jeffery [Thu, 4 Jul 2019 05:51:50 +0000 (07:51 +0200)]
aspeed/timer: Provide back-pressure information for short periods
First up: This is not the way the hardware behaves.
However, it helps resolve real-world problems with short periods being
used under Linux. Commit 4451d3f59f2a ("clocksource/drivers/fttmr010:
Fix set_next_event handler") in Linux fixed the timer driver to
correctly schedule the next event for the Aspeed controller, and in
combination with 5daa8212c08e ("ARM: dts: aspeed: Describe random number
device") Linux will now set a timer with a period as low as 1us.
Configuring a qemu timer with such a short period results in spending
time handling the interrupt in the model rather than executing guest
code, leading to noticeable "sticky" behaviour in the guest.
The behaviour of Linux is correct with respect to the hardware, so we
need to improve our handling under emulation. The approach chosen is to
provide back-pressure information by calculating an acceptable minimum
number of ticks to be set on the model. Under Linux an additional read
is added in the timer configuration path to detect back-pressure, which
will never occur on hardware. However if back-pressure is observed, the
driver alerts the clock event subsystem, which then performs its own
next event dilation via a config option - d1748302f70b ("clockevents:
Make minimum delay adjustments configurable")
A minimum period of 5us was experimentally determined on a Lenovo
T480s, which I've increased to 20us for "safety".
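A rough sketch of the back-pressure calculation with approximate names (not the Aspeed model code): clamp the tick count so the emulated period never drops below TIMER_MIN_NS.

    #include <stdint.h>
    #include <stdio.h>

    #define NANOSECONDS_PER_SECOND 1000000000ULL
    #define TIMER_MIN_NS           20000ULL   /* 20us "safety" floor */

    /* Minimum number of ticks that still gives a period >= TIMER_MIN_NS
     * at the given input clock rate (rounded up). */
    static uint64_t calculate_min_ticks(uint64_t rate_hz, uint64_t ticks)
    {
        uint64_t min_ticks =
            (TIMER_MIN_NS * rate_hz + NANOSECONDS_PER_SECOND - 1)
            / NANOSECONDS_PER_SECOND;

        return ticks < min_ticks ? min_ticks : ticks;
    }

    int main(void)
    {
        /* A 1MHz timer asked for a 1-tick (1us) period is clamped to 20 ticks. */
        printf("%llu\n", (unsigned long long)calculate_min_ticks(1000000, 1));
        return 0;
    }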
Signed-off-by: Andrew Jeffery <[email protected]>
Reviewed-by: Joel Stanley <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Tested-by: Joel Stanley <[email protected]>
Signed-off-by: Cédric Le Goater <[email protected]>
Message-id: 20190704055150[email protected]
[clg: - changed the computation of min_ticks to be done each time the
        timer value is reloaded. It removes the ordering issue of the
        timer and scu reset handlers but is slightly slower
      - introduced TIMER_MIN_NS
      - introduced calculate_min_ticks() ]
Signed-off-by: Cédric Le Goater <[email protected]>
Signed-off-by: Peter Maydell <[email protected]>
Peter Maydell [Fri, 16 Aug 2019 12:58:02 +0000 (13:58 +0100)]
target/arm: Take exceptions on ATS instructions when needed
The translation table walk for an ATS instruction can result in
various faults. In general these are just reported back via the
PAR_EL1 fault status fields, but in some cases the architecture
requires that the fault is turned into an exception:
* synchronous stage 2 faults of any kind during AT S1E0* and
AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
* synchronous external aborts are taken as Data Abort exceptions
(This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and
G5.13.4.)
Peter Maydell [Fri, 16 Aug 2019 12:58:01 +0000 (13:58 +0100)]
target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions
Currently the only part of an ARMCPRegInfo which is allowed to cause
a CPU exception is the access function, which returns a value indicating
that some flavour of UNDEF should be generated.
For the ATS system instructions, we would like to conditionally
generate exceptions as part of the writefn, because some faults
during the page table walk (like external aborts) should cause
an exception to be raised rather than returning a value.
There are several ways we could do this:
* plumb the GETPC() value from the top level set_cp_reg/get_cp_reg
helper functions through into the readfn and writefn hooks
* add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC()
value
* require the ATS instructions to provide a dummy accessfn,
which serves no purpose except to cause the code generation
to emit TCG ops to sync the CPU state
* add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly
throwing an exception in its read/write hooks, and make the
codegen sync the CPU state before calling the hooks if the
flag is set
This patch opts for the last of these, as it is fairly simple
to implement and doesn't require invasive changes like updating
the readfn/writefn hook function prototype signature.
target/arm: Factor out unallocated_encoding for aarch32
Make this a static function private to translate.c.
Thus we can use the same idiom between aarch64 and aarch32
without actually sharing function implementations.
Despite the fact that the text for the call to gen_exception_insn
is identical for aarch64 and aarch32, the implementation inside
gen_exception_insn is totally different.
* remotes/ehabkost/tags/python-next-pull-request:
configure: more resilient Python version capture
BootLinuxSshTest: Only use 'test' for unittest.TestCase method names
Tao Xu [Fri, 9 Aug 2019 06:57:22 +0000 (14:57 +0800)]
numa: move numa global variable nb_numa_nodes into MachineState
Add a struct NumaState in MachineState and move the existing numa global
nb_numa_nodes (renamed to "num_nodes") into NumaState. Also add a
numa_support variable to MachineClass to decide which machines support NUMA.
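A sketch of the resulting layout, covering the fields this series moves (see also the have_numa_distance and numa_info patches above); simplified, not the full QEMU definition.

    #include <stdbool.h>

    #define MAX_NODES 128            /* illustrative limit */

    typedef struct NodeInfo {
        unsigned long long node_mem; /* per-node memory size, simplified */
    } NodeInfo;

    /* Per-machine NUMA state replacing the old globals:
     *   nb_numa_nodes      -> numa_state->num_nodes
     *   have_numa_distance -> numa_state->have_numa_distance
     *   numa_info[]        -> numa_state->nodes[]                      */
    typedef struct NumaState {
        int num_nodes;
        bool have_numa_distance;
        NodeInfo nodes[MAX_NODES];
    } NumaState;

    typedef struct MachineState {
        NumaState *numa_state;     /* NULL when the machine has no NUMA support */
    } MachineState;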
Tao Xu [Fri, 9 Aug 2019 06:57:21 +0000 (14:57 +0800)]
hw/arm: simplify arm_load_dtb
In struct arm_boot_info, kernel_filename, initrd_filename and
kernel_cmdline are copied from MachineState. This patch adds
MachineState as a parameter to arm_load_dtb() and moves the copying
of kernel_filename, initrd_filename and kernel_cmdline into
arm_load_kernel().
Thomas Huth [Fri, 23 Aug 2019 08:42:03 +0000 (10:42 +0200)]
tests/check-block: Skip iotests when sanitizers are enabled
The sanitizers (especially the address sanitizer from Clang) sometimes
print warnings or false positives - this spoils the output of the
iotests, causing some of the tests to fail.
Thus let's skip the automatic iotests during "make check" when the
user has configured QEMU with --enable-sanitizers.
Thomas Huth [Fri, 23 Aug 2019 13:35:52 +0000 (15:35 +0200)]
iotests: Check for enabled drivers before testing them
It is possible to enable only a subset of the block drivers with the
"--block-drv-rw-whitelist" option of the "configure" script. All other
drivers are marked as unusable (or only included as read-only with the
"--block-drv-ro-whitelist" option). If an iotest is now using such a
disabled block driver, it is failing - which is bad, since at least the
tests in the "auto" group should be able to deal with this situation.
Thus let's introduce a "_require_drivers" function that can be used by
the shell tests to check for the availability of certain drivers first,
and marks the test as "not run" if one of the drivers is missing.
This patch mainly targets the tests in the "auto" group, which should
never fail in such a case, but also improves some of the other tests
along the way. Note that we also assume that the "qcow2" and "file"
drivers are always available - otherwise it does not make sense to
run "make check-block" at all (which only tests with qcow2 by default).
Max Reitz [Mon, 19 Aug 2019 20:18:44 +0000 (22:18 +0200)]
iotests: Add -display none to the qemu options
Without this argument, qemu will print an angry message about not being
able to connect to a display server if $DISPLAY is not set. For me,
that breaks iotests.supported_formats() because it thus only sees
["Could", "not", "connect"] as the supported formats.
Max Reitz [Thu, 15 Aug 2019 15:36:37 +0000 (17:36 +0200)]
iotests: Disable 110 for vmdk.twoGbMaxExtentSparse
The error message for the test case where we have a quorum node for
which no directory name can be generated is different: For
twoGbMaxExtentSparse, it complains that it cannot open the extent file.
For other (sub)formats, it just notes that it cannot determine the
backing file path. Both are fine, but just disable twoGbMaxExtentSparse
for simplicity's sake.
Max Reitz [Thu, 15 Aug 2019 15:36:36 +0000 (17:36 +0200)]
iotests: Disable broken streamOptimized tests
streamOptimized does not support writes that do not span exactly one
cluster. Furthermore, it cannot rewrite already allocated clusters.
As such, many iotests do not work with it. Disable them.
Max Reitz [Thu, 15 Aug 2019 15:36:35 +0000 (17:36 +0200)]
vmdk: Reject invalid compressed writes
Compressed writes generally have to write full clusters, not just in
theory but also in practice when it comes to vmdk's streamOptimized
subformat. It currently is just silently broken for writes with
non-zero in-cluster offsets:
(The technical reason is that vmdk_write_extent() actually writes the
incomplete compressed data to offset 4k. When reading the
data, vmdk_read_extent() looks at offset 0 and finds the compressed data
size to be 0, because that is what it reads from there. This yields an
error.)
For incomplete writes with zero in-cluster offsets, the error path when
reading the rest of the cluster is a bit different, but the result is
the same.
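A simplified sketch of the kind of check that rejects such writes (hypothetical helper, not the vmdk driver code): a compressed write must cover exactly one whole cluster.

    #include <errno.h>
    #include <stdint.h>

    /* Reject compressed writes that do not start on a cluster boundary or
     * do not cover a whole cluster (the final, short cluster at end of
     * file is ignored here for simplicity). */
    static int check_compressed_write(uint64_t offset, uint64_t bytes,
                                      uint64_t cluster_size)
    {
        if (offset % cluster_size != 0 || bytes != cluster_size) {
            return -EINVAL;
        }
        return 0;
    }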
We had a test for a case where relative extent paths did not work, but
unfortunately we just fixed the underlying problem, so it works now.
This patch adds a new test case that still fails.
Max Reitz [Thu, 15 Aug 2019 15:36:32 +0000 (17:36 +0200)]
iotests: Fix _filter_img_create()
fe646693acc changed qemu-img create's output so that it no longer prints
single quotes around parameter values. The subformat and adapter_type
filters in _filter_img_create() have never been adapted to that change.
Nir Soffer [Tue, 27 Aug 2019 01:05:28 +0000 (04:05 +0300)]
iotests: Test allocate_first_block() with O_DIRECT
Using block_resize we can test allocate_first_block() with a file
descriptor opened with O_DIRECT, ensuring that it works for any size
larger than 4096 bytes.
Testing smaller sizes is tricky as the result depends on the filesystem
used for testing. For example on NFS any size will work since O_DIRECT
does not require any alignment.
Nir Soffer [Tue, 27 Aug 2019 01:05:27 +0000 (04:05 +0300)]
block: posix: Always allocate the first block
When creating an image with preallocation "off" or "falloc", the first
block of the image is typically not allocated. When using Gluster
storage backed by an XFS filesystem, reading this block using direct I/O
succeeds regardless of request length, fooling alignment detection.
In this case we fall back to a safe value (4096) instead of the optimal
value (512), which may lead to unneeded data copying when aligning
requests. Allocating the first block avoids the fallback.
Since we allocate the first block even with preallocation=off, we no
longer create images with zero disk size.
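A simplified POSIX sketch of the idea (not the block/file-posix.c code): write one zeroed, aligned block at offset 0 so the first block is allocated and O_DIRECT alignment probing sees real I/O behaviour. 4096 is an assumed conservative block size; the real implementation also caps the write at the image size.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Allocate the first block of 'fd' by writing zeroes at offset 0,
     * using an aligned buffer so the same code works with O_DIRECT. */
    static int allocate_first_block(int fd)
    {
        const size_t len = 4096;   /* assumed filesystem block size bound */
        void *buf;
        ssize_t ret;

        if (posix_memalign(&buf, len, len) != 0) {
            return -1;
        }
        memset(buf, 0, len);
        ret = pwrite(fd, buf, len, 0);
        free(buf);
        return ret == (ssize_t)len ? 0 : -1;
    }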
It's wrong to OR shared permissions. It may lead to a crash on further
permission updates.
Also, there is no need to consider previously calculated permissions, as
at this point we have already bound all new parents and the
bdrv_get_cumulative_perm result is enough. So fix the bug by just setting
the permissions from the bdrv_get_cumulative_perm result.
The bug was introduced long ago in 234ac1a9025, in 2.9.
Peter Maydell [Tue, 3 Sep 2019 08:43:26 +0000 (09:43 +0100)]
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Pull request
# gpg: Signature made Tue 27 Aug 2019 21:16:27 BST
# gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <[email protected]>" [full]
# gpg: aka "Stefan Hajnoczi <[email protected]>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
block/qcow2: implement .bdrv_co_pwritev(_compressed)_part
block/qcow2: implement .bdrv_co_preadv_part
block/qcow2: refactor qcow2_co_preadv to use buffer-based io
block/io: introduce bdrv_co_p{read, write}v_part
block/io: bdrv_aligned_pwritev: use and support qiov_offset
block/io: bdrv_aligned_preadv: use and support qiov_offset
block/io: bdrv_co_do_copy_on_readv: lazy allocation
block/io: bdrv_co_do_copy_on_readv: use and support qiov_offset
block: define .*_part io handlers in BlockDriver
block/io: refactor padding
util/iov: improve qemu_iovec_is_zero
util/iov: introduce qemu_iovec_init_extended
Alex Bennée [Thu, 29 Aug 2019 11:08:27 +0000 (12:08 +0100)]
tests/docker: upgrade docker.py to python3
The recent podman changes (9459f754134bb) imported enum, which is part
of the python3 standard library but only available as an external
library for python2. This causes problems in fairly restricted
environments such as Shippable. Let's bite the bullet and make the
script fully python3. To that end:
- drop the from __future__ import (we are there now ;-)
- avoid the StringIO import hack
- be consistent with the mode we read/write dockerfiles
- s/iteritems/items/
- ensure check_output returns strings for processing
This reverts commit 45db1ac157 ("modules-test: ui-spice-app is not
built as module") and fixes commit d8aec9d9f1 ("display: add -display
spice-app launching a Spice client").
Alex Bennée [Wed, 28 Aug 2019 11:03:23 +0000 (12:03 +0100)]
.mailmap/aliases: add some further commentary
The two files are not interchangeable, but a change to one *might*
require a change to the other, so let's flag that up with an explanation
of what both files are trying to achieve. While we are at it, document
in the header the many forms .mailmap entries can take.
mailmap: Add many entries to improve 'git shortlog' statistics
All of these emails have at least 1 commit with a utf8/latin1 encoding
issue, or one with no author name.
When there are multiple commits, keep the most-used author name.
Our mailmap currently has 4 sections, somewhat documented.
Reorder a few entries not related to "addresses from the original
git import" into the 3rd section, and add a comment to describe
it.
contrib/gitdm: filetype interface is not in order, fix
gitdm prints the rather cryptic message "interface not found, appended
to the last order". This is because filetypes.txt has the filetype
"interface" but neglects to mention it in "order". Fix that.