Peter Maydell [Tue, 17 Oct 2017 14:26:51 +0000 (15:26 +0100)]
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20171017' into staging
ppc patch queue 2017-10-17
Here's the currently accumulated set of ppc patches for qemu.
* The biggest set here is the ppc parts of Igor Mammedov's cleanups
to cpu model handling
* The above also includes some generic patches which are required as
prerequisites for the ppc parts. They don't seem to have been
merged by Eduardo yet, so I hope they're ok to include here.
* Apart from that it's basically just assorted bug fixes and cleanups
* remotes/dgibson/tags/ppc-for-2.11-20171017: (34 commits)
spapr_cpu_core: rewrite machine type sanity check
spapr_pci: fail gracefully with non-pseries machine types
spapr: Correct RAM size calculation for HPT resizing
ppc: pnv: consolidate type definitions and batch register them
ppc: pnv: drop PnvChipClass::cpu_model field
ppc: pnv: define core types statically
ppc: pnv: drop PnvCoreClass::cpu_oc field
ppc: pnv: normalize core/chip type names
ppc: pnv: use generic cpu_model parsing
ppc: spapr: use generic cpu_model parsing
ppc: move ppc_cpu_lookup_alias() before its first user
ppc: spapr: use cpu model names as tcg defaults instead of aliases
ppc: spapr: register 'host' core type along with the rest of core types
ppc: spapr: use cpu type name directly
ppc: spapr: define core types statically
ppc: move '-cpu foo,compat=xxx' parsing into ppc_cpu_parse_featurestr()
ppc: spapr: replace ppc_cpu_parse_features() with cpu_parse_cpu_model()
ppc: 40p/prep: replace cpu_model with cpu_type
ppc: virtex-ml507: replace cpu_model with cpu_type
ppc: replace cpu_model with cpu_type on ref405ep,taihu boards
...
Peter Maydell [Tue, 17 Oct 2017 10:29:51 +0000 (11:29 +0100)]
Merge remote-tracking branch 'remotes/berrange/tags/pull-qio-2017-10-16-1' into staging
Merge QIO 2017/10/16 v1
# gpg: Signature made Mon 16 Oct 2017 17:10:54 BST
# gpg: using RSA key 0xBE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <[email protected]>"
# gpg: aka "Daniel P. Berrange <[email protected]>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/pull-qio-2017-10-16-1:
io: fix mem leak in websock error path
io: add trace points for websocket HTTP protocol headers
io: cope with websock 'Connection' header having multiple values
io: get rid of bounce buffering in websock write path
io: pass a struct iovec into qio_channel_websock_encode
io: get rid of qio_channel_websock_encode helper method
io: simplify websocket ping reply handling
io: monitor encoutput buffer size from websocket GSource
sockets: Handle race condition between binds to the same port
sockets: factor out create_fast_reuse_socket
sockets: factor out a new try_bind() function
Peter Maydell [Tue, 17 Oct 2017 09:44:23 +0000 (10:44 +0100)]
Merge remote-tracking branch 'remotes/gkurz/tags/for-upstream' into staging
This fixes a potential data leak to the guest.
# gpg: Signature made Mon 16 Oct 2017 16:08:25 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <[email protected]>"
# gpg: aka "Greg Kurz <[email protected]>"
# gpg: aka "Greg Kurz <[email protected]>"
# gpg: aka "Gregory Kurz (Groug) <[email protected]>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: use g_malloc0 to allocate space for xattr
* remotes/kraxel/tags/ui-20171016-pull-request:
gtk: fix wrong id between texture and framebuffer
ui/gtk: Fix deprecation of vte_terminal_copy_clipboard
pc-bios/keymaps: keymaps update
Add pc-bios/keymaps/Makefile
tools: add qemu-keymap
ui: don't export qemu_input_event_new_key
ui: convert key events to QKeyCodes immediately
ui: convert common input code to keycodemapdb
ui: add keycodemapdb repository as a GIT submodule
docker: don't rely on submodules existing in the main checkout
build: automatically handle GIT submodule checkout for dtc
Gerd Hoffmann [Tue, 10 Oct 2017 14:13:22 +0000 (16:13 +0200)]
vga: handle cirrus vbe mode wraparounds.
Commit "3d90c62548 vga: stop passing pointers to vga_draw_line*
functions" is incomplete. It doesn't handle the case that the vga
rendering code tries to create a shared surface, i.e. a pixman image
backed by vga video memory. That can not work in case the guest display
wraps from end of video memory to the start. So force shadowing in that
case. Also adjust the snapshot region calculation.
Can trigger with cirrus only, when programming vbe modes using the bochs
api (stdvga, also qxl and virtio-vga in vga compat mode) wrap arounds
can't happen.
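A minimal sketch of the wrap test the fix implies (helper name and placement assumed, not the literal patch): shared-surface scanout is only safe when the whole region fits before the end of video memory.
    #include <stdbool.h>
    #include <stdint.h>

    /* Hedged sketch: does the scanout region wrap past the end of video
     * memory?  If so, the surface must be shadowed rather than shared
     * with a pixman image backed by vram. */
    static bool scanout_wraps(uint64_t start, uint64_t size,
                              uint64_t vram_size)
    {
        return start + size > vram_size;   /* wrapped -> force shadowing */
    }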
Greg Kurz [Thu, 12 Oct 2017 16:30:14 +0000 (18:30 +0200)]
spapr_pci: fail gracefully with non-pseries machine types
QEMU currently crashes when the user tries to add an spapr-pci-host-bridge
on a non-pseries machine:
$ qemu-system-ppc64 -M ppce500 -device spapr-pci-host-bridge,index=1
hw/ppc/spapr_pci.c:1535:spapr_phb_realize:
Object 0x1003dacae60 is not an instance of type spapr-machine
Aborted (core dumped)
The same thing happens with the deprecated but still available child type
spapr-pci-vfio-host-bridge.
Fix both by checking the machine type with object_dynamic_cast().
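A minimal sketch of the check, assuming it sits near the top of spapr_phb_realize() (not the literal patch):
    sPAPRMachineState *spapr =
        (sPAPRMachineState *)object_dynamic_cast(qdev_get_machine(),
                                                 TYPE_SPAPR_MACHINE);

    if (!spapr) {
        /* Report a graceful error instead of aborting in SPAPR_MACHINE() */
        error_setg(errp, "spapr-pci-host-bridge needs a pseries machine");
        return;
    }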
David Gibson [Tue, 10 Oct 2017 13:16:57 +0000 (00:16 +1100)]
spapr: Correct RAM size calculation for HPT resizing
In order to prevent the guest from forcing the allocation of large amounts
of qemu memory (or host kernel memory, in the case of KVM HV), we limit
the size of the Hashed Page Table (HPT) it is allowed to allocate, based on
its RAM size.
However, the current calculation is not correct: it only adds up the size
of plugged memory, ignoring the base memory size. This patch corrects it.
While we're there, use get_plugged_memory_size() instead of directly
calling pc_existing_dimms_capacity(). The only difference is that it
will abort on failure, which is right: a failure here indicates something
wrong within qemu.
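A minimal sketch of the corrected calculation (the helper name here is illustrative; get_plugged_memory_size() is named by the commit message):
    static hwaddr spapr_hpt_ram_limit(MachineState *machine)
    {
        /* Base RAM plus hotplugged DIMMs, not the plugged memory alone;
         * get_plugged_memory_size() aborts on failure, which is the
         * behaviour the commit wants. */
        return machine->ram_size + get_plugged_memory_size();
    }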
Igor Mammedov [Mon, 9 Oct 2017 19:51:09 +0000 (21:51 +0200)]
ppc: pnv: define core types statically
The pnv core type definition doesn't have any fields that
require it to be defined at runtime. So replace the code
that fills in TypeInfo at runtime with a static TypeInfo
array that does the same at compile time.
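A sketch of the pattern being described, with illustrative type names rather than the actual pnv core list:
    static const TypeInfo pnv_core_infos[] = {
        {
            .name   = "power8-pnv-core",    /* illustrative name */
            .parent = TYPE_PNV_CORE,
        },
        {
            .name   = "power9-pnv-core",    /* illustrative name */
            .parent = TYPE_PNV_CORE,
        },
    };

    static void pnv_core_register_types(void)
    {
        int i;

        /* Registration loop stays trivial because the array is static. */
        for (i = 0; i < ARRAY_SIZE(pnv_core_infos); i++) {
            type_register_static(&pnv_core_infos[i]);
        }
    }

    type_init(pnv_core_register_types)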
Igor Mammedov [Mon, 9 Oct 2017 19:51:08 +0000 (21:51 +0200)]
ppc: pnv: drop PnvCoreClass::cpu_oc field
Deduce the cpu type directly from the core type instead of
maintaining a type mapping in PnvCoreClass::cpu_oc and doing
extra cpu_model parsing in pnv_core_class_init().
Igor Mammedov [Mon, 9 Oct 2017 19:51:06 +0000 (21:51 +0200)]
ppc: pnv: use generic cpu_model parsing
Use the common cpu_model parsing in vl.c and set the default cpu_model
using the generic MachineClass::default_cpu_type.
Besides switching to the generic infrastructure, this solves several
issues:
* ppc_cpu_class_by_name() is used to deal with lower/upper case
and alias translation into the actual cpu type, which fixes the
'-M powernv -cpu power8' and '-M powernv -cpu power9_v1.0'
use cases which error out with:
'invalid CPU model 'FOO' for powernv machine'
* it allows switching to lower-case typenames for the pnv chip/core names
(by convention typenames should be lower-case)
* it replaces aliased names /power8, power9, .../ with exact cpu model
names (i.e. typenames should be stable, but aliases might be pointed
to another cpu model within the family or be changed by kvm). It will
also help to simplify the pnv_chip/core code and get rid of the
dependency on cpu_model parsing.
Igor Mammedov [Mon, 9 Oct 2017 19:51:05 +0000 (21:51 +0200)]
ppc: spapr: use generic cpu_model parsing
Use the generic cpu_model parsing introduced by
(6063d4c0f vl.c: convert cpu_model to cpu type and set of global properties before machine_init())
It allows us to:
* replace sPAPRMachineClass::tcg_default_cpu with
MachineClass::default_cpu_type
* drop cpu_parse_cpu_model() from hw/ppc/spapr.c and reuse
the one in vl.c
* simplify spapr_get_cpu_core_type() by removing the no longer
needed recursion, since alias look-up happens earlier in vl.c
and spapr_get_cpu_core_type() only works with the cpu type
resulting from it
* make spapr no longer need to parse or depend on the being-phased-out
MachineState::cpu_model; all the parsing is done by generic
code and a target specific callback.
Igor Mammedov [Mon, 9 Oct 2017 19:51:04 +0000 (21:51 +0200)]
ppc: move ppc_cpu_lookup_alias() before its first user
The next commit will drop the ppc_cpu_lookup_alias() declaration from the
header and make it static, which would break its last user
ppc_cpu_class_by_name(), since ppc_cpu_class_by_name() is defined before
ppc_cpu_lookup_alias().
To avoid this, move ppc_cpu_lookup_alias() right before
ppc_cpu_class_by_name().
Igor Mammedov [Mon, 9 Oct 2017 19:51:00 +0000 (21:51 +0200)]
ppc: spapr: define core types statically
The spapr core type definition doesn't have any fields that
require it to be defined at runtime. So replace the code
that fills in TypeInfo at runtime with a static TypeInfo
array that does the same at compile time.
Igor Mammedov [Mon, 9 Oct 2017 19:50:59 +0000 (21:50 +0200)]
ppc: move '-cpu foo,compat=xxx' parsing into ppc_cpu_parse_featurestr()
There is a dedicated callback, CPUClass::parse_features,
whose purpose is to convert -cpu features into a set of
global properties AND to deal with compat/legacy features
that can't be directly translated into CPU properties.
Create a ppc variant of it (ppc_cpu_parse_featurestr) and
move the 'compat=val' handling from spapr_cpu_core.c into it.
That removes a dependency of board/core code on cpu_model
parsing and lets us reuse the common -cpu parsing
introduced by 6063d4c0.
Set the "max-cpu-compat" property only if it exists; in practice
this should limit the 'compat' hack to the spapr machine and
avoids having to include machine/spapr headers in target/ppc/cpu.c.
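A simplified, self-contained sketch of the string handling involved (GLib only; the upstream callback additionally turns the extracted value into the machine's "max-cpu-compat" global property, and the helper name here is illustrative):
    #include <glib.h>
    #include <stdio.h>
    #include <string.h>

    /* Pull a "compat=<val>" token out of a -cpu feature string, leaving
     * the remaining tokens for the generic feature parser. */
    static char *extract_compat(const char *features, char **rest)
    {
        char **tokens = g_strsplit(features, ",", 0);
        GString *remaining = g_string_new(NULL);
        char *compat = NULL;

        for (int i = 0; tokens[i]; i++) {
            if (g_str_has_prefix(tokens[i], "compat=")) {
                g_free(compat);
                compat = g_strdup(tokens[i] + strlen("compat="));
            } else {
                if (remaining->len) {
                    g_string_append_c(remaining, ',');
                }
                g_string_append(remaining, tokens[i]);
            }
        }
        g_strfreev(tokens);
        *rest = g_string_free(remaining, FALSE);
        return compat;
    }

    int main(void)
    {
        char *rest = NULL;
        char *compat = extract_compat("compat=power8,vsx=off", &rest);

        printf("compat=%s rest=%s\n", compat ? compat : "(none)", rest);
        g_free(compat);
        g_free(rest);
        return 0;
    }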
    for (i = 0; i < ARRAY_SIZE(type_infos); i++) {
        type_register_static(&type_infos[i]);
    }
}
type_init(foo_register_types)
with a single line
DEFINE_TYPES(type_infos)
where types have a static definition which could be consolidated in
a single array of TypeInfo structures.
It saves us ~6-10 LOC per use case and would help to replace the
imperative foo_register_types() there with a declarative style of
type registration.
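For illustration, the declarative form then looks roughly like this (placeholder type names, DEFINE_TYPES() as described above):
    static const TypeInfo foo_types[] = {
        { .name = "foo-device", .parent = TYPE_DEVICE },
        { .name = "bar-device", .parent = TYPE_DEVICE },
    };

    DEFINE_TYPES(foo_types)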
hw/ppc/spapr.c: abort unplug_request if previous unplug isn't done
LMB removal is completed only when the spapr_lmb_release callback
is called after all DRCs of the dimm are detached. During this
time, it is possible that an unplug request for the same dimm
arrives, trying to detach DRCs that were already detached by the guest
in the first unplug_request.
BQL doesn't help in this case - the lock will prevent any concurrent
removal from happening until the end of spapr_memory_unplug_request
only. What happens is that the second unplug_request ends up calling
spapr_drc_detach() on a DRC that was detached already, causing an
assert error in spapr_drc_detach() (e.g.
https://bugs.launchpad.net/qemu/+bug/1718118).
spapr_lmb_release uses a structure called sPAPRDIMMState, stored in the
spapr->pending_dimm_unplugs QTAILQ, to track how many LMB DRCs are left
to be detached by the guest. When there are no more DRCs left, this
structure is deleted and the pc-dimm unplug handler is called to
finish the process.
This patch reuses the sPAPRDIMMState to allow unplug_request to know
if there is an ongoing unplug process for a given dimm, aborting the
unplug request in this case, by doing the following changes:
- in spapr_lmb_release callback, move the dimm state removal to the
end, after pc-dimm unplug handler. With this change we can check for
the existence of the dimm state to see if the unplug process is
done.
- use spapr_pending_dimm_unplugs_find in spapr_memory_unplug_request
to check if the dimm state exists. If positive, there is an unplug
operation already in progress for this dimm, meaning that we should
abort it and warn the user about it.
Fixes: https://bugs.launchpad.net/qemu/+bug/1718118
Signed-off-by: Daniel Henrique Barboza <[email protected]>
Signed-off-by: David Gibson <[email protected]>
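A minimal sketch of the second change, assuming it sits in spapr_memory_unplug_request() and that spapr_pending_dimm_unplugs_find() takes the machine state and the DIMM (not the literal patch):
    sPAPRDIMMState *ds = spapr_pending_dimm_unplugs_find(spapr, dimm);

    if (ds) {
        /* An unplug for this DIMM is still in flight: abort and warn
         * instead of detaching already-detached DRCs. */
        error_setg(errp, "Memory unplug already in progress for device %s",
                   dev->id);
        return;
    }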
Sandipan Das [Fri, 6 Oct 2017 06:42:44 +0000 (12:12 +0530)]
target/ppc: Fix carry flag setting for shift algebraic instructions
For POWER ISA v3.0, the XER bit CA32 needs to be set by the shift
right algebraic instructions whenever the CA bit is to be set. This
change affects the following instructions:
* Shift Right Algebraic Word (sraw[.])
* Shift Right Algebraic Word Immediate (srawi[.])
* Shift Right Algebraic Doubleword (srad[.])
* Shift Right Algebraic Doubleword Immediate (sradi[.])
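As a self-contained model of the required semantics (not the TCG translation code itself), for the 32-bit immediate form:
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool ca; bool ca32; } ShiftCarry;

    /* Model of srawi carry semantics: CA is set when the source is
     * negative and 1-bits are shifted out; on ISA v3.0 CPUs, CA32 must be
     * set whenever CA is set for these instructions. */
    static ShiftCarry srawi_carry(int32_t rs, unsigned sh /* 0..31 */,
                                  bool isa_v3)
    {
        ShiftCarry c = { false, false };
        uint32_t shifted_out = sh ? ((uint32_t)rs & ((1u << sh) - 1)) : 0;

        c.ca = (rs < 0) && (shifted_out != 0);
        c.ca32 = isa_v3 && c.ca;
        return c;
    }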
David Gibson [Fri, 6 Oct 2017 11:21:18 +0000 (22:21 +1100)]
target/ppc: Add POWER9 DD2.0 model information
At the moment the only POWER9 model which is listed in qemu is v1.0 (aka
"DD1"). This is a very early (read, buggy) version which will never be
released to the public - it was included in qemu only for the convenience
of those doing bringup on the early silicon. For bonus points, we actually
had its PVR incorrect in the table (0x004e0000 instead of 0x004e0100). We
also never actually implemented the differences in behaviour (read, bugs)
that marked DD1 in qemu.
Now that we know the PVR for the substantially better v2.0 (DD2) chip,
include it and make it the default POWER9 in qemu. For the time being we
leave the DD1 definition in place for the poor souls (read, me) who still
need to work with DD1 hardware.
Greg Kurz [Wed, 4 Oct 2017 09:02:31 +0000 (11:02 +0200)]
spapr: sanity check size of the CAS buffer
The CAS buffer is provided by SLOF. A broken SLOF could pass a silly
size: either smaller than the diff header, in which case the current
code will try to allocate 16 Exabytes of memory and g_malloc0() will
abort, or bigger than the maximum memory provisioned for SLOF (i.e.,
40 Megabytes), which doesn't make sense. Both cases indicate that
SLOF has a bug.
Let's print out an explicit error message and exit since rebooting as
we do with other errors would only result in a reset loop.
Signed-off-by: Greg Kurz <[email protected]>
[dwg: Fix format specifier that broke 32-bit builds]
Signed-off-by: David Gibson <[email protected]>
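A minimal sketch of the check (constant and variable names assumed, not the exact upstream patch):
    if (size < sizeof(hdr) || size > FW_MAX_SIZE) {
        /* SLOF is broken; rebooting would just loop, so bail out. */
        error_report("SLOF provided an unexpected CAS buffer size %" PRIu64,
                     (uint64_t)size);
        exit(EXIT_FAILURE);
    }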
Thomas Huth [Tue, 3 Oct 2017 10:14:04 +0000 (12:14 +0200)]
target/ppc: Remove unused PPC 460 and 460F definitions
We don't have any 460 or 460F CPUs in QEMU, so the init functions
are just dead code. Let's simply remove them (translate_init.c
is already big enough without them).
filter-mirror: segfault when specifying non existent device
When using filter-mirror as in the example below, where the interface
'ndev0' does not exist on the host, QEMU crashes with a segmentation
fault.
$ qemu-system-x86_64 -S -machine pc -netdev user,id=ndev0 -object filter-mirror,id=test-object,netdev=ndev0
This happens because the function filter_mirror_setup() does not check
whether the device actually exists and keeps on processing, calling
qemu_chr_find(). This patch fixes this issue.
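A minimal sketch of the kind of validation this implies in filter_mirror_setup() (property and variable names assumed, not the literal patch):
    if (!s->outdev) {
        error_setg(errp, "filter-mirror parameter 'outdev' not specified");
        return;
    }

    chr = qemu_chr_find(s->outdev);
    if (chr == NULL) {
        error_setg(errp, "device '%s' not found", s->outdev);
        return;
    }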
Peter Maydell [Thu, 12 Oct 2017 13:17:47 +0000 (14:17 +0100)]
include/hw/or-irq.h: Drop unused in_irqs field
The struct OrIRQState has an unused member field in_irqs.
This is a legacy of earlier versions of the patch; the
code that used it was dropped from the final version of
the code that went into master, but we forgot to delete
the no-longer-used struct field. Do so now.
Comments explaining why we include a header tend to go bad. This
one's almost comical: not only doesn't qemu-options.hx use
MAP_POPULATE anymore (since commit ef36fa1, v2.0.0, 2013), even the
include it applies to got moved away in commit 02d0e09 (v2.7.0).
* remotes/huth/tags/pull-request-2017-10-16:
default-configs: Enable CONFIG_VMXNET3_PCI only on x86
tests/prom-env: Bump the timeout, and test pseries only in slow mode
tests: use g_new() family of functions
M68K: use g_new() family of functions
hw/m68k: Replace fprintf(stderr, "*\n" with error_report()
Peter Maydell [Mon, 16 Oct 2017 16:29:16 +0000 (17:29 +0100)]
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, virtio: fixes, features
A bunch of fixes all over the place.
A new vmcore device - the user interface around it is still somewhat
controversial, but I feel most of the code is fine, suggestions can be
addressed by adding patches on top.
Signed-off-by: Michael S. Tsirkin <[email protected]>
# gpg: Signature made Sun 15 Oct 2017 04:02:23 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <[email protected]>"
# gpg: aka "Michael S. Tsirkin <[email protected]>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (26 commits)
tests/pxe: Test more NICs when running in SPEED=slow mode
pc: remove useless hot_add_cpu initialisation
isapc: Remove unnecessary migration compatibility code
virtio-pci: Replace modern_as with direct access to modern_bar
virtio: fix descriptor counting in virtqueue_pop
hw/gen_pcie_root_port: make IO RO 0 on IO disabled
pci: Validate interfaces on base_class_init
xen/pt: Mark TYPE_XEN_PT_DEVICE as hybrid
pci: Add INTERFACE_CONVENTIONAL_PCI_DEVICE to Conventional PCI devices
pci: Add INTERFACE_PCIE_DEVICE to all PCIe devices
pci: Add interface names to hybrid PCI devices
pci: conventional-pci-device and pci-express-device interfaces
PCI: PCIe access should always be little endian
virtio/pci/migration: Convert to VMState
hw/pci-bridge/pcie_pci_bridge: properly handle MSI unavailability case
pci: allow 32-bit PCI IO accesses to pass through the PCI bridge
virtio/vhost: reset dev->log after syncing
MAINTAINERS: add Dump maintainers
scripts/dump-guest-memory.py: add vmcoreinfo
kdump: set vmcoreinfo location
...
io: cope with websock 'Connection' header having multiple values
The noVNC server sends a header "Connection: keep-alive, Upgrade" which
fails our simple equality test. Split the header on ',', trim whitespace
and then check for the 'upgrade' token.
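A self-contained sketch of the token check (GLib; the upstream helper differs in detail):
    #include <glib.h>
    #include <stdbool.h>

    /* True if any comma-separated token of the Connection header is
     * "upgrade", case-insensitively, e.g. "keep-alive, Upgrade". */
    static bool connection_has_upgrade(const char *connection)
    {
        char **tokens = g_strsplit(connection, ",", 0);
        bool found = false;

        for (int i = 0; tokens[i] && !found; i++) {
            found = g_ascii_strcasecmp(g_strstrip(tokens[i]), "upgrade") == 0;
        }
        g_strfreev(tokens);
        return found;
    }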
io: get rid of bounce buffering in websock write path
Currently most outbound I/O on the websock channel gets copied into the
rawoutput buffer, and then immediately copied again into the encoutput
buffer, with a header prepended. Now that qio_channel_websock_encode
accepts a struct iovec, we can trivially remove this bounce buffering
and write directly to encoutput.
In doing so, we also now correctly validate the encoutput size against
the QIO_CHANNEL_WEBSOCK_MAX_BUFFER limit.
io: pass a struct iovec into qio_channel_websock_encode
Instead of requiring use of another Buffer, pass a struct iovec
into qio_channel_websock_encode, which gives callers more
flexibility in how they process data.
io: get rid of qio_channel_websock_encode helper method
The qio_channel_websock_encode method is only used in one place,
everything else calls qio_channel_websock_encode_buffer directly.
It can also be pushed up a level into the qio_channel_websock_writev
method, since every other caller of qio_channel_websock_write_wire
has already filled encoutput.
We must ensure we don't get flooded with ping replies if the outbound
channel is slow. Currently we do this by keeping the ping reply in a
separate temporary buffer and only writing it if the encoutput buffer
is completely empty. This is overly pessimistic, as it is reasonable
to add a ping reply to the encoutput buffer even if it has previous
data in it, as long as that previous data doesn't include a ping
reply.
To track this better, put the ping reply directly into the encoutput
buffer, and then record the size of encoutput at this time in
pong_remain. As we write encoutput to the underlying channel, we
can decrement the pong_remain counter. Once it hits zero, we can
accept further ping replies for transmission.
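A simplified, self-contained model of that bookkeeping (field names follow the description above; the real code operates on the websock channel's output buffers):
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        size_t encoutput_len;   /* bytes currently queued for the wire */
        size_t pong_remain;     /* bytes to flush before the next pong */
    } PongState;

    static bool can_queue_pong(const PongState *s)
    {
        return s->pong_remain == 0;
    }

    static void queue_pong(PongState *s, size_t pong_len)
    {
        s->encoutput_len += pong_len;
        s->pong_remain = s->encoutput_len;   /* includes the pong itself */
    }

    static void wrote_to_wire(PongState *s, size_t written)
    {
        s->encoutput_len -= written;
        s->pong_remain -= written < s->pong_remain ? written : s->pong_remain;
    }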
io: monitor encoutput buffer size from websocket GSource
The websocket GSource is monitoring the size of the rawoutput
buffer to determine if the channel can accept more writes.
The rawoutput buffer, however, is merely a temporary staging
buffer before data is copied into the encoutput buffer. Thus
its size will always be zero when the GSource runs.
This flaw causes the encoutput buffer to grow without bound
if the other end of the underlying data channel doesn't
read data being sent. This can be seen with VNC if a client
is on a slow WAN link and the guest OS is sending many screen
updates. A malicious VNC client can act like it is on a slow
link by playing a video in the guest and then reading data
very slowly, causing QEMU host memory to expand arbitrarily.
This issue is assigned CVE-2017-15268, publicly reported in
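A minimal sketch of the corrected readiness check in the GSource (variable and field names assumed; QIO_CHANNEL_WEBSOCK_MAX_BUFFER is the limit mentioned in the related commits):
    /* Report writability from the buffer that actually reaches the wire,
     * not from the transient rawoutput staging buffer. */
    if (wioc->encoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
        cond |= G_IO_OUT;
    }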
Knut Omang [Mon, 7 Aug 2017 10:58:42 +0000 (12:58 +0200)]
sockets: Handle race condition between binds to the same port
If an offset of ports is specified to the inet_listen_saddr() function,
and two or more processes try to bind from these ports at the same time,
occasionally more than one process may be able to bind to the same
port. The condition is detected by listen(), but too late to avoid a failure.
This function is called by socket_listen() and used
by all socket listening code in QEMU, so all cases where any form of dynamic
port selection is used should be subject to this issue.
Add code to close and re-establish the socket when this
condition is observed, hiding the race condition from the user.
Also clean up some issues with error handling to allow more
accurate reporting of the cause of an error.
This has been developed and tested by means of the
test-listen unit test in the previous commit.
Enable the test for make check now that it passes.
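A simplified, self-contained POSIX sketch of the close-and-retry idea (the QEMU code does this inside inet_listen_saddr() while scanning the configured port range):
    #include <errno.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int bind_and_listen(const struct sockaddr_in *addr)
    {
        for (int attempt = 0; attempt < 2; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }
            if (bind(fd, (const struct sockaddr *)addr, sizeof(*addr)) < 0) {
                close(fd);
                return -1;          /* genuinely unavailable */
            }
            if (listen(fd, 1) == 0) {
                return fd;          /* success */
            }
            close(fd);
            if (errno != EADDRINUSE) {
                return -1;
            }
            /* Another process bound the same port between our bind() and
             * listen(); recreate the socket and try again (QEMU would move
             * on to the next port in the range). */
        }
        return -1;
    }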
Peter Maydell [Mon, 16 Oct 2017 14:54:42 +0000 (15:54 +0100)]
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2017-10-14' into staging
nbd patches for 2017-10-14
- Marc-André Lureau - NBD: use g_new() family of functions
- Vladimir Sementsov-Ogievskiy - first half of 00/13 nbd minimal structured read
# gpg: Signature made Sun 15 Oct 2017 01:38:47 BST
# gpg: using RSA key 0xA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <[email protected]>"
# gpg: aka "Eric Blake (Free Software Programmer) <[email protected]>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2017-10-14:
nbd: header constants indenting
nbd/server: simplify reply transmission
nbd/server: refactor nbd_co_send_simple_reply parameters
nbd/server: do not use NBDReply structure
nbd/server: structurize simple reply header sending
nbd: rename some simple-request related objects to be _simple_
block/nbd-client: refactor nbd_co_receive_reply
block/nbd-client: assert qiov len once in nbd_co_request
NBD: use g_new() family of functions
The gd_gl_area_scanout_texture() function must destroy the framebuffer
if there is no texture id, not if there is no framebuffer id.
The effect was a black screen with "-vga virtio -display gtk,gl=on"
options.
The bug was introduced by a4f113fd "gtk: use framebuffer helper functions."
Always use QKeyCode in the InputKeyEvent struct, by converting key
numbers to QKeyCode at the time the event is created. This allows
the code processing / consuming key events to assume QKeyCode is
used. The only place we accept a key number in the InputKeyEvent
struct is with QMP commands sent by the user.
- KEY_PLAYPAUSE now maps to Q_KEY_CODE_AUDIOPLAY, instead of
KEY_PLAYCD. KEY_PLAYPAUSE is defined across almost all scancodes
sets, while KEY_PLAYCD only appears in AT set1, so the former is
a more useful mapping.
- Q_KEY_CODE_MENU was incorrectly mapped to the compose
scancode (0xdd) and is now mapped to 0x9e
- Q_KEY_CODE_FIND was mapped to 0xe065 (Search) instead
of to 0xe041 (Find)
- Q_KEY_CODE_HIRAGANA was mapped to 0x70 (Katakanahiragana)
instead of 0x77 (Hiragana)
- Q_KEY_CODE_PRINT was mapped to 0xb7 which is not a defined
scan code in AT set 1, it is now mapped to 0x54 (sysrq)
ui: add keycodemapdb repository as a GIT submodule
The https://gitlab.com/keycodemap/keycodemapdb/ repo contains a
data file mapping between all the different scancode/keycode/keysym
sets that are known, and a tool to auto-generate lookup tables for
different combinations.
It is used by GTK-VNC, SPICE-GTK and libvirt for mapping keys.
Using it in QEMU will let us replace many hand written lookup
tables with auto-generated tables from a master data source,
reducing bugs. Adding new QKeyCodes will now only require the
master table to be updated, all ~20 other tables will be
automatically updated to follow.
docker: don't rely on submodules existing in the main checkout
When building the tarball to pass into the docker/vm test image,
the code relies on the git submodules being checked out in the
main checkout.
i.e. if the developer has not run 'git submodule update --init dtc'
many of the docker tests will fail due to the libfdt package not
being present in the test images. Patchew manually checks out the
dtc submodule in the main git checkout, but this is a bad idea.
When running tests we want to have a predictable set of submodules
included in the source that's tested. The build environment is
completely independent of the developers host OS, so the submodules
the developer has checked out should not be considered relevant for
the tests.
This changes the archive-source.sh script so that it clones the
current git checkout into a temporary directory, checks out a
fixed set of submodules, builds the tarball and finally removes
the temporary git clone.
build: automatically handle GIT submodule checkout for dtc
Currently if DTC is required by configure and not available in the host
OS install, we exit with an error message telling the user to checkout a
git submodule or install the library.
This introduces automatic handling of the git submodule checkout process
and enables it for dtc. This only runs if building from GIT, so users of
release tarballs still need the system library install. The current state
of the git checkout is stashed in .git-submodule-status, and a helper
program is used to determine if this state matches the desired submodule
state. A dependency against 'Makefile' ensures that the submodule state
is refreshed at the start of the build process.
The 9p back-end first queries the size of an extended attribute,
allocates space for it via g_malloc() and then retrieves its
value into the allocated buffer. A race between querying the attribute
size and retrieving its value could lead to disclosure of memory bytes.
Use g_malloc0() to avoid it.
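A simplified sketch of the pattern (plain Linux getxattr(); the 9p back-end goes through its own fs driver layer, and the helper name here is illustrative):
    #include <glib.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    static void *read_xattr_zeroed(const char *path, const char *name,
                                   ssize_t *lenp)
    {
        ssize_t size = getxattr(path, name, NULL, 0);   /* query the size */

        if (size <= 0) {
            return NULL;
        }
        /* Zero-filled: if the value shrank between the size query and the
         * read, the tail of the buffer is zeroes, not stale heap bytes. */
        void *buf = g_malloc0(size);
        *lenp = getxattr(path, name, buf, size);
        return buf;
    }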
Peter Maydell [Mon, 16 Oct 2017 12:04:43 +0000 (13:04 +0100)]
Merge remote-tracking branch 'remotes/stefanberger/tags/pull-tpm-2017-10-04-3' into staging
Merge tpm 2017/10/04 v3
# gpg: Signature made Fri 13 Oct 2017 12:37:07 BST
# gpg: using RSA key 0x75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <[email protected]>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
* remotes/stefanberger/tags/pull-tpm-2017-10-04-3:
specs: Describe the TPM support in QEMU
tpm: Move tpm_cleanup() to right place
tpm: Added support for TPM emulator
tpm-passthrough: move reusable code to utils
tpm-backend: Move realloc_buffer() implementation to tpm-tis model
tpm-backend: Add new API to read backend TpmInfo
tpm-backend: Made few interface methods optional
tpm-backend: Initialize and free data members in it's own methods
tpm-backend: Move thread handling inside TPMBackend
tpm-backend: Remove unneeded member variable from backend class
tpm: Use EMSGSIZE instead of EBADMSG to compile on OpenBSD
Thomas Huth [Thu, 21 Sep 2017 05:39:15 +0000 (07:39 +0200)]
tests/prom-env: Bump the timeout, and test pseries only in slow mode
If QEMU has been compiled with the flags --enable-tcg-interpreter and
--enable-debug, the guest is running incredibly slow. The prom-env
test is approximately 10 times slower than normal in this case, and
it takes up to 500 seconds until the test with the pseries machine
finishes. While we should still look for ways to speed up the test
on the pseries machine here, let's bump the timeout to 600 seconds to
allow the test to pass with this unusual configuration for now.
Also move the pseries test into the "slow" category - since it is
really a very slow test.
Alistair Francis [Sat, 30 Sep 2017 00:16:06 +0000 (17:16 -0700)]
hw/m68k: Replace fprintf(stderr, "*\n" with error_report()
Replace a large number of the fprintf(stderr, "*\n" calls with
error_report(). The functions were renamed with these commands and then
compiler issues were manually fixed.
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N; {s|fprintf(stderr, "\(.*\)\\n"\(.*\));|error_report("\1"\2);|Ig}' \
{} +
Some lines were then manually tweaked to pass checkpatch.
Peter Maydell [Mon, 16 Oct 2017 09:22:39 +0000 (10:22 +0100)]
Merge remote-tracking branch 'remotes/elmarco/tags/vu-pull-request' into staging
# gpg: Signature made Thu 12 Oct 2017 21:52:28 BST
# gpg: using RSA key 0xDAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <[email protected]>"
# gpg: aka "Marc-André Lureau <[email protected]>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* remotes/elmarco/tags/vu-pull-request:
libvhost-user: Support VHOST_USER_SET_SLAVE_REQ_FD
libvhost-user: Update and fix feature and request lists
vhost-user-bridge: Only process received packets on started queues
libvhost-user: vu_queue_started
Thomas Huth [Wed, 20 Sep 2017 09:02:07 +0000 (11:02 +0200)]
tests/pxe: Test more NICs when running in SPEED=slow mode
The pxe-test is a very good test to exercise NICs, thus we should use
it to test all NICs that can be used by the BIOS for booting via network.
However, to avoid increasing the default testing time too much, the
additional NICs are only tested in the "make check SPEED=slow" mode.
The virtio-net NIC on ppc64 is now also only tested in slow mode, since
the test on ppc64 is really quite slow and we've got test coverage for
virtio-net in big endian mode now on s390x, too.
Laurent Vivier [Tue, 10 Oct 2017 13:32:41 +0000 (15:32 +0200)]
pc: remove useless hot_add_cpu initialisation
Since 4458fb3a79 (pc: Eliminate pc_default_machine_options()),
hot_add_cpu is set in pc_machine_class_init(), so we don't
need to set it in pc_q35_machine_options(), pc_i440fx_machine_options()
and xenfv_machine_options(), except to clear it in
pc_i440fx_1_4_machine_opt().
We don't touch isapc when we change guest ABI and add new entries
to PC_COMPAT_* or new PCMachineClass compat flags. This means
isapc never guaranteed guest ABI and cross-QEMU-version live
migration compatibility. There's no point in keeping code for
kvm-pv-eoi and APIC ID compatibility in pc_init_isa().
virtio-pci: Replace modern_as with direct access to modern_bar
The modern BAR is now accessed via yet another address space created just
for that purpose; it does not really need a FlatView and dispatch tree,
as it has a single memory region, so this is just a waste of memory. Things
get even worse when there are dozens or hundreds of virtio-pci devices:
since these address spaces are global, changing any of them triggers
rebuilding all address spaces.
This replaces indirect accesses to the modern BAR with a simple lookup
and direct calls to memory_region_dispatch_read/write.
This is expected to save lots of memory at boot time after applying:
[Qemu-devel] [PULL 00/32] Misc changes for 2017-09-22
While changing the s/g list allocation, commit 3b3b0628
also changed the descriptor counting to count iovec entries
as split by cpu_physical_memory_map(). Previously only the
actual descriptor entries were counted and the split into
the iovec happened afterwards in virtqueue_map().
Count the entries again instead to avoid erroneous
"Looped descriptor" errors.
hw/gen_pcie_root_port: make IO RO 0 on IO disabled
IO_LIMIT and IO_BASE registers should not be writable if
gen_pcie_root_port's io-reserve property is set to 0.
The COMMAND register should have the IO flag read only.