Peter Krempa [Fri, 18 Oct 2019 08:14:54 +0000 (10:14 +0200)]
qapi: Allow introspecting fix for savevm's cooperation with blockdev
'savevm' was buggy as it considered all monitor-owned block device
nodes for the snapshot. With the introduction of -blockdev, common
usage made all nodes, including protocol and backing file nodes,
monitor-owned and thus considered for the snapshot.
This is a problem since 'file' protocol nodes can't have internal
snapshots and it does not make sense to take a snapshot of nodes
representing backing files.
This was fixed by commit 05f4aced658a02b02. Clients need to be able to
detect whether this fix is present.
Since savevm does not have a QMP alternative, add the feature to the
'human-monitor-command' backdoor, which is how this command is invoked
in modern usage.
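For management applications, the presence of the fix can then be probed
via QMP schema introspection. A minimal sketch in Python, assuming
`schema` holds the parsed result of query-qmp-schema and assuming the
feature name 'savevm-monitor-nodes' (the exact name is whatever this
series defines):

    # Sketch: look for the feature flag on 'human-monitor-command' in
    # the output of the QMP command 'query-qmp-schema'.  The feature
    # name below is assumed to match the one added by this series.
    def has_savevm_fix(schema):
        for entry in schema:
            if (entry.get('meta-type') == 'command'
                    and entry.get('name') == 'human-monitor-command'):
                return 'savevm-monitor-nodes' in entry.get('features', [])
        return False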
Commit 8aa3a33e44 "tests/qapi-schema: Test for good feature lists in
structs" neglected to cover documentation comments, and the previous
commit followed its example. Make up for them.
Peter Krempa [Fri, 18 Oct 2019 08:14:51 +0000 (10:14 +0200)]
qapi: Add feature flags to commands
Similarly to features for struct types, introduce feature flags for
commands as well. This will allow notifying management layers of fixes
and compatible changes in the behaviour of a command which may not be
detectable in any other way.
The changes were heavily inspired by commit 6a8c0b51025.
The QAPI code generator clocks in at some 3100 SLOC in 8 source files.
Almost 60% of the code is in qapi/common.py. Split it into more
focused modules:
* Move QAPISchemaPragma and QAPISourceInfo to qapi/source.py.
* Move QAPIError and its sub-classes to qapi/error.py.
* Move QAPISchemaParser and QAPIDoc to parser.py. Use the opportunity
to put QAPISchemaParser first.
* Move check_expr() & friends to qapi/expr.py. Use the opportunity to
put the code into a more sensible order.
* Move QAPISchema & friends to qapi/schema.py
* Move QAPIGen and its sub-classes, ifcontext,
QAPISchemaMonolithicCVisitor, and QAPISchemaModularCVisitor to qapi/gen.py
* Delete camel_case(), it's unused since commit e98859a9b9 "qapi:
Clean up after recent conversions to QAPISchemaVisitor"
A number of helper functions remain in qapi/common.py. I considered
moving the code generator helpers to qapi/gen.py, but decided not to.
Perhaps we should rewrite them as methods of QAPIGen some day.
qapi: Move gen_enum(), gen_enum_lookup() back to qapi/types.py
The next commit will split up qapi/common.py. gen_enum() needs
QAPISchemaEnumMember, and that's in the way. Move it to qapi/types.py
along with its buddy gen_enum_lookup().
Permit me a short digression on history: how did gen_enum() end up
in qapi/common.py? Commit 21cd70dfc1 "qapi script: add event support"
duplicated qapi-types.py's gen_enum() and gen_enum_lookup() in
qapi-event.py. Simply importing them would have been cleaner, but
wasn't possible as qapi-types.py was a program, not a module. Commit efd2eaa6c2 "qapi: De-duplicate enum code generation" de-duplicated by
moving them to qapi.py, which was a module.
Since then, program qapi-types.py has morphed into module types.py.
It's where gen_enum() and gen_enum_lookup() started, and where they
belong.
"make check-qapi-schema" takes around 10s user + system time for me.
With -j, it takes a bit over 3s real time. We have worse tests. It's
still annoying when you work on the QAPI generator.
Some 1.4s user + system time is consumed by make figuring out what to
do, measured by making a target that does nothing. There's nothing I
can do about that right now. But let's see what we can do about the
other 8s.
Almost 7s are spent running test-qapi.py for every test case, the rest
normalizing and diffing test-qapi.py output. We have 190 test cases.
If I downgrade to python2, it's 4.5s, but python2 is a goner.
Hacking up test-qapi.py to exit(0) without doing anything makes it
only marginally faster. The problem is Python startup overhead.
Our configure puts -B into $(PYTHON). Running without -B is faster:
4.4s.
We could improve the Makefile to run test cases only when the test
case or the generator changed. But I'm after improvement in the case
where the generator changed.
test-qapi.py is designed to be the simplest possible building block
for a shell script to do the complete job (it's actually a Makefile,
not a shell script; no real difference). Python is just not meant for
that. It's for bigger blocks.
Move the post-processing and diffing into test-qapi.py, and make it
capable of testing multiple schema files. Set executable bits while
there.
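The resulting shape is roughly the following (an illustrative sketch
with made-up helper names, not the actual script): one Python process
iterates over all schema files, normalizes the frontend output, and
diffs it against the expected files:

    # Illustrative sketch: batch all test schemas in one process.
    # 'frontend' stands in for the real test-qapi machinery.
    import difflib
    import sys

    def check_schemas(schema_paths, frontend, schema_dir):
        status = 0
        for path in schema_paths:
            # Strip the schema directory prefix so expected output is
            # stable across build trees.
            actual = frontend(path).replace(schema_dir + '/', '')
            with open(path.rsplit('.', 1)[0] + '.out') as f:
                expected = f.readlines()
            diff = list(difflib.unified_diff(expected,
                                             actual.splitlines(True),
                                             'expected', 'actual'))
            if diff:
                sys.stdout.writelines(diff)
                status = 1
        return status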
Running it once per test case now takes slightly longer than 8s. But
running it once for all of them takes under 0.2s.
Messing with the Makefile to run it only on the tests that need
retesting is clearly not worth the bother.
Expected error output changes because the new normalization strips off
$(SRCDIR)/tests/qapi-schema/ instead of just $(SRCDIR)/.
The .exit files go away, because there is no exit status to test
anymore.
The frontend can't be run more than once due to its global state.
A future commit will want to do that.
The only global frontend state remaining is accidental:
QAPISchemaParser.__init__()'s parameter previously_included=[].
Python evaluates the default once, at definition time. Any
modifications to it are visible in subsequent calls. Well-known
Python trap. Change the default to None and replace it by the real
default in the function body. Use the opportunity to convert
previously_included to a set.
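For illustration, the trap and the usual fix look like this (minimal
sketch, not the actual parser code):

    def parse(fname, previously_included=[]):    # default created once!
        previously_included.append(fname)
        return previously_included

    parse('a.json')      # ['a.json']
    parse('b.json')      # ['a.json', 'b.json'] -- state leaked between calls

    def parse_fixed(fname, previously_included=None):
        if previously_included is None:
            previously_included = set()          # fresh default per call
        previously_included.add(fname)
        return previously_included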
qapi: Store pragma state in QAPISourceInfo, not global state
The frontend can't be run more than once due to its global state.
A future commit will want to do that.
Recent commit "qapi: Move context-sensitive checking to the proper
place" got rid of many global variables already, but pragma state is
still stored in global variables (that's why a pragma directive's
scope is the complete schema).
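A rough sketch of the direction (not the actual QEMU code; the
attribute names here are assumptions): pragma state travels with the
source info of each schema file instead of living in module-level
globals:

    # Illustrative sketch: per-source pragma state instead of globals.
    class QAPISchemaPragma:
        def __init__(self):
            self.doc_required = False
            self.returns_whitelist = []
            self.name_case_whitelist = []

    class QAPISourceInfo:
        def __init__(self, fname, line, parent):
            self.fname = fname
            self.line = line
            self.parent = parent
            # Pragma state is carried here rather than in a global.
            self.pragma = parent.pragma if parent else QAPISchemaPragma()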
qapi: Don't suppress doc generation without pragma doc-required
Commit bc52d03ff5 "qapi: Make doc comments optional where we don't
need them" made scripts/qapi2texi.py fail[*] unless the schema had
pragma 'doc-required': true. The stated reason was inability to cope
with incomplete documentation.
When commit fb0bc835e5 "qapi-gen: New common driver for code and doc
generators" folded scripts/qapi2texi.py into scripts/qapi-gen.py, it
turned the failure into silent suppression.
The doc generator can cope with incomplete documentation now. I don't
know since when, or what the problem was, or even whether it ever
existed.
Drop the silent suppression.
[*] The fail part was broken, fixed in commit e8ba07ea9a.
* remotes/kraxel/tags/audio-20191018-pull-request:
paaudio: fix channel order for usb-audio 5.1 and 7.1 streams
usbaudio: change playback counters to 64 bit
usb-audio: support more than two channels of audio
usb-audio: do not count on avail bytes actually available
audio: basic support for multichannel audio
audio: replace shift in audio_pcm_info with bytes_per_frame
audio: support more than two channels in volume setting
paaudio: get/put_buffer functions
audio: make mixeng optional
audio: add mixing-engine option (documentation)
audio: paaudio: ability to specify stream name
audio: paaudio: fix connection and stream name
audio: fix parameter dereference before NULL check
* remotes/jnsnow/tags/bitmaps-pull-request:
dirty-bitmaps: remove deprecated autoload parameter
MAINTAINERS: Add Vladimir as a reviewer for bitmaps
qcow2-bitmap: move bitmap reopen-rw code to qcow2_reopen_commit
block/qcow2-bitmap: fix and improve qcow2_reopen_bitmaps_rw
iotests: add test 260 to check bitmap life after snapshot + commit
block/qcow2-bitmap: do not remove bitmaps on reopen-ro
block/qcow2-bitmap: drop qcow2_reopen_bitmaps_rw_hint()
block/qcow2-bitmap: get rid of bdrv_has_changed_persistent_bitmaps
iotests: add test-case to 165 to test reopening qcow2 bitmaps to RW
block: reverse order for reopen commits
block: switch reopen queue from QSIMPLEQ to QTAILQ
block/dirty-bitmap: refactor bdrv_dirty_bitmap_next
block/dirty-bitmap: drop BdrvDirtyBitmap.mutex
block/dirty-bitmap: add bs link
block/dirty-bitmap: drop meta
block/qcow2: proper locking on bitmap add/remove paths
block/dirty-bitmap: return int from bdrv_remove_persistent_dirty_bitmap
block: move bdrv_can_store_new_dirty_bitmap to block/dirty-bitmap.c
util/hbitmap: strict hbitmap_reset
* remotes/kraxel/tags/ui-20191018-pull-request:
ui: fix keymap file search in input-barrier object
curses: correctly pass the color pair to setcchar()
curses: use the bit mask constants provided by curses
ui: Fix hanging up Cocoa display on macOS 10.15 (Catalina)
Matthew Kilgore [Fri, 4 Oct 2019 03:53:38 +0000 (23:53 -0400)]
curses: correctly pass the color pair to setcchar()
The current code does not correctly pass the color pair information to
setcchar(); instead it always passes zero. This results in the curses
output always being white on black.
This patch fixes this by using PAIR_NUMBER() to retrieve the color pair
number from the chtype value, and then passes that value as an argument
to setcchar().
Matthew Kilgore [Fri, 4 Oct 2019 03:53:37 +0000 (23:53 -0400)]
curses: use the bit mask constants provided by curses
The curses API provides the A_ATTRIBUTES and A_CHARTEXT bit masks for
getting the attributes and character parts of a chtype, respectively.
We should use the provided constants instead of a hard-coded 0xff.
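The same masks and the PAIR_NUMBER() counterpart exist in Python's
curses binding, which makes for a compact illustration of both fixes (a
sketch only; the actual change is in QEMU's C code, and it assumes
curses has been initialised with initscr() and start_color()):

    import curses

    def split_chtype(ch):
        # Separate a chtype into its parts using the API's masks rather
        # than a hard-coded 0xff, and derive the colour pair from the
        # attributes (PAIR_NUMBER() in the C API).
        char = ch & curses.A_CHARTEXT
        attrs = ch & curses.A_ATTRIBUTES
        pair = curses.pair_number(attrs)
        return char, attrs, pair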
Hikaru Nishida [Tue, 15 Oct 2019 01:07:34 +0000 (10:07 +0900)]
ui: Fix hanging up Cocoa display on macOS 10.15 (Catalina)
The macOS API documentation says that no events will be processed
before applicationDidFinishLaunching is called. However, on macOS
Catalina some events are fired before it is called. This causes a
deadlock on iothread_lock in handleEvent, because that lock is only
released after app_started_sem is posted.
This patch avoids processing events before app_started_sem is posted,
preventing this deadlock.
Kővágó, Zoltán [Sun, 13 Oct 2019 19:58:06 +0000 (21:58 +0200)]
usbaudio: change playback counters to 64 bit
With stereo playback, they need about 375 minutes of continuous audio
playback to overflow, which is usually not a problem (as stopping and
later resuming playback resets the counters). But with 7.1 audio, they
only need about 95 minutes to overflow.
After the overflow, the buf->prod % USBAUDIO_PACKET_SIZE(channels)
assertion no longer holds true, which will result in overflowing the
buffer. With 64 bit variables, it would take about 762000 years to
overflow.
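A quick back-of-the-envelope check of those figures, assuming 48 kHz
playback with 16-bit samples (the rate and sample size are assumptions
for the estimate):

    # Time until a byte counter overflows, assuming 48 kHz, 16-bit samples.
    RATE = 48000            # frames per second (assumed)
    SAMPLE_BYTES = 2        # 16-bit samples

    def minutes_to_overflow(channels, bits=32):
        bytes_per_second = RATE * SAMPLE_BYTES * channels
        return (1 << bits) / bytes_per_second / 60

    print(minutes_to_overflow(2))            # ~373 min for stereo
    print(minutes_to_overflow(8))            # ~93 min for 7.1
    print(minutes_to_overflow(8, bits=64)
          / (60 * 24 * 365))                 # ~760000 years with 64 bits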
Kővágó, Zoltán [Sun, 13 Oct 2019 19:58:05 +0000 (21:58 +0200)]
usb-audio: support more than two channels of audio
This commit adds support for 5.1 and 7.1 audio playback by adding a new
property to usb-audio:
* multi=on|off
Whether to enable the 5.1 and 7.1 audio support. When off (default)
it continues to emulate the old stereo-only device. When on, it
emulates a slightly different audio device that supports 5.1 and 7.1
audio.
Kővágó, Zoltán [Sun, 13 Oct 2019 19:58:02 +0000 (21:58 +0200)]
audio: replace shift in audio_pcm_info with bytes_per_frame
The bit shifting trick worked because the number of bytes per frame was
always a power-of-two (since QEMU only supports mono, stereo and 8, 16
and 32 bit samples). But if we want to add support for surround sound,
this no longer holds true.
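For example, 5.1 playback with 16-bit samples needs 6 x 2 = 12 bytes
per frame, which is not a power of two, so the conversion has to be an
explicit multiply rather than a shift (illustrative sketch, not the
actual C code):

    def bytes_per_frame(channels, sample_bits):
        return channels * sample_bits // 8

    assert bytes_per_frame(2, 16) == 4     # stereo: still a power of two
    assert bytes_per_frame(6, 16) == 12    # 5.1: no shift can express this

    def frames_to_bytes(frames, channels, sample_bits):
        return frames * bytes_per_frame(channels, sample_bits)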
Kővágó, Zoltán [Sun, 13 Oct 2019 19:57:58 +0000 (21:57 +0200)]
audio: add mixing-engine option (documentation)
This will allow us to disable mixeng when we use a decent backend.
Disabling mixeng has a few advantages:
* we no longer convert the audio output from one format to another when
the underlying audio system would just convert it to a third format
anyway; only the underlying system converts, and only when needed.
* the underlying system probably has better resampling and sample format
converting methods anyway...
* we may support formats that the mixeng currently does not support (S24
or float samples, more than two channels)
* when using an audio server (like pulseaudio) different sound card
outputs will show up as separate streams, even if we use only one
backend
Disadvantages:
* audio capturing no longer works (wavcapture and the vnc audio
extension)
* some backends only support a single playback stream or are very picky
about the audio format; in those cases we can't disable mixeng.
Originally the two main use cases for disabling mixeng were: using
unsupported audio formats (5.1 and 7.1 audio) and having different
pulseaudio streams per audio frontend. Since we can have multiple
-audiodevs, the latter is not that important, so currently you only need
this option if you want to use 5.1 or 7.1 audio (implemented in a later
patch); otherwise it's probably better to stick with the old and tried
mixeng, since it's less picky about the backends.
The ideal solution would be to port as much as possible to gstreamer,
but this is currently out of scope:
https://wiki.qemu.org/Internships/ProjectIdeas/AudioGStreamer
Kővágó, Zoltán [Tue, 10 Sep 2019 23:26:19 +0000 (01:26 +0200)]
audio: paaudio: fix connection and stream name
The connection name was previously erroneously set to the server socket
path, while stream names were simply "qemu". After this patch, the
connection name will be the vm name (falling back to "qemu" if not
specified), while stream names will be the audiodev's id.
This parameter has been deprecated since 2.12.0 and is eligible for
removal. Remove this parameter as it is actually completely ignored;
let's not give false hope.
qcow2-bitmap: move bitmap reopen-rw code to qcow2_reopen_commit
The only reason I can imagine for this strange code at the very end of
bdrv_reopen_commit is that bs->read_only is updated after
drv->bdrv_reopen_commit is called in bdrv_reopen_commit. At the same
time, prior to the previous commit, qcow2_reopen_bitmaps_rw did a wrong
check for being writable, when it actually only needs a writable file
child, not a writable node itself.
Now that this is fixed, let's move the code to its correct place.
block/qcow2-bitmap: fix and improve qcow2_reopen_bitmaps_rw
- Correct the check for write access to the file child, and do it in
the correct place (only if we want to write).
- Support reopen rw -> rw (which will be used in a following commit);
for example, !bdrv_dirty_bitmap_readonly() is not a corruption if the
bitmap is marked IN_USE in the image.
- Consider an unexpected bitmap a corruption, and check the other
combinations of in-image and in-RAM bitmaps.
block/qcow2-bitmap: do not remove bitmaps on reopen-ro
qcow2_reopen_bitmaps_ro wants to store bitmaps and then mark them all
readonly. But the latter doesn't work, as
qcow2_store_persistent_dirty_bitmaps removes bitmaps after storing.
That's OK for inactivation but a bad idea for reopen-ro, and it leads
to the following bug:
Assume we have persistent bitmap 'bitmap0'.
- Create external snapshot
    bitmap0 is stored and therefore removed
- Commit snapshot
    now we have no bitmaps
- Do some writes from guest (*)
    they are not marked in bitmap
- Shutdown
- Start
    bitmap0 is loaded as valid, but it is actually broken! It misses
    writes (*)
- Incremental backup
    it will be inconsistent
So, let's stop removing bitmaps on reopen-ro. But don't rejoice:
reopening bitmaps to rw is broken too, so the whole scenario still will
not work after this patch and we can't yet enable the corresponding
test cases in iotest 260. Reopening bitmaps rw will be fixed in the
following patches.
block/qcow2-bitmap: get rid of bdrv_has_changed_persistent_bitmaps
Firstly, there is no reason to optimize the failure path. Secondly, the
function name is ambiguous: it checks for readonly and similar things,
but someone may think it will also ignore normal bitmaps that are
simply unchanged, which sits badly with the fact that we should drop
the IN_USE flag for unchanged bitmaps in the image.
This is needed to fix reopening qcow2 with bitmaps to RW. Currently
that can't work, as qcow2 needs write access to its file child to mark
bitmaps in the image with the IN_USE flag, but children usually go
after parents in the reopen queue, so the file child is still RO at
qcow2 reopen commit time. Reverse the reopen order to fix it.
bdrv_dirty_bitmap_next is always used in the same pattern. So, split it
into _next and _first instead of combining two functions into one, and
add a FOR_EACH_DIRTY_BITMAP macro.
The mutex field is just a pointer to bs->dirty_bitmap_mutex, so there
is no need to store it in BdrvDirtyBitmap now that we have a bs pointer
in it (since the previous patch).
Drop the mutex field. Consistently use bdrv_dirty_bitmaps_lock/unlock
in block/dirty-bitmap.c to make it more obvious that it's not a
per-bitmap lock. Still, for simplicity, keep the
bdrv_dirty_bitmap_lock/unlock functions as an external API.
block/qcow2: proper locking on bitmap add/remove paths
qmp_block_dirty_bitmap_add and do_block_dirty_bitmap_remove have
acquired the aio context since 0a6c86d024c52b. But this is not enough:
we must also hold the qcow2 mutex when accessing in-image metadata,
especially when freeing qcow2 clusters.
To achieve this, move qcow2_can_store_new_dirty_bitmap and
qcow2_remove_persistent_dirty_bitmap to coroutine context.
Since we now run in coroutines in the correct aio context, we no longer
need the context acquisition in blockdev.c, so drop it.
hbitmap_reset has an unobvious property: it rounds the requested region
up. This may provoke bugs, as in the recently fixed write-blocking mode
of mirror: the user calls reset on an unaligned region, not keeping in
mind that there may be unrelated dirty bytes covered by the rounded-up
region, and the information about this unrelated "dirtiness" is lost.
Make hbitmap_reset strict: assert that the arguments are aligned,
allowing only one exception when @start + @count == hb->orig_size.
This is needed to accommodate users of hbitmap_next_dirty_area, which
cares about hb->orig_size.
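Sketched in pseudo-form (not QEMU's C code), the new precondition
amounts to:

    # Sketch of the stricter hbitmap_reset() precondition: both ends of
    # the region must be aligned to the bitmap granularity, except that
    # the region may end exactly at orig_size.
    def check_reset_args(start, count, granularity, orig_size):
        assert start % granularity == 0
        assert (count % granularity == 0
                or start + count == orig_size)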
* remotes/ehabkost/tags/machine-next-pull-request:
target/i386: Add Snowridge-v2 (no MPX) CPU model
i386: Omit all-zeroes entries from KVM CPUID table
i386: Fix legacy guest with xsave panic on host kvm without update cpuid.
target/i386: drop the duplicated definition of cpuid AVX512_VBMI macro
target/i386: clean up comments over 80 chars per line
memory-device: break the loop if tmp exceed the hinted range
memory-device: not necessary to use goto for the last check
hw/misc/vmcoreinfo: Add comment about reset handler
hw/input/lm832x: Convert reset handler to DeviceReset
hw/isa/vt82c686: Convert reset handler to DeviceReset
hw/ide/via82c: Convert reset handler to DeviceReset
hw/ide/sii3112: Convert reset handler to DeviceReset
hw/ide/piix: Convert reset handler to DeviceReset
hw/isa/piix4: Convert reset handler to DeviceReset
hw/acpi/piix4: Convert reset handler to DeviceReset
numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
tests: cpu-plug-test: fix device_add for pc/q35 machines
tests: add qtest_qmp_device_add_qdict() helper
Peter Maydell [Thu, 17 Oct 2019 15:48:56 +0000 (16:48 +0100)]
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20191013' into staging
Host vector support for tcg/ppc.
Fix thread=single cpu kicking.
# gpg: Signature made Mon 14 Oct 2019 15:11:55 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "[email protected]"
# gpg: Good signature from "Richard Henderson <[email protected]>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20191013: (23 commits)
cpus: kick all vCPUs when running thread=single
tcg/ppc: Update vector support for v3.00 dup/dupi
tcg/ppc: Update vector support for v3.00 load/store
tcg/ppc: Update vector support for v3.00 Altivec
tcg/ppc: Update vector support for v2.07 FP
tcg/ppc: Update vector support for v2.07 VSX
tcg/ppc: Update vector support for v2.07 Altivec
tcg/ppc: Update vector support for VSX
tcg/ppc: Enable Altivec detection
tcg/ppc: Support vector dup2
tcg/ppc: Support vector multiply
tcg/ppc: Support vector shift by immediate
tcg/ppc: Add support for vector saturated add/subtract
tcg/ppc: Add support for vector add/subtract
tcg/ppc: Add support for vector maximum/minimum
tcg/ppc: Add support for load/store/logic/comparison
tcg/ppc: Enable tcg backend vector compilation
tcg/ppc: Replace HAVE_ISEL macro with a variable
tcg/ppc: Replace HAVE_ISA_2_06
tcg/ppc: Create TCGPowerISA and have_isa
...
Peter Maydell [Thu, 17 Oct 2019 14:30:44 +0000 (15:30 +0100)]
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
virtio, vhost, acpi: features, fixes, tests
ARM ACPI memory hotplug support +
tests for new arm/virt ACPI tables.
Virtio fs support (no migration).
A vhost-user reconnect bugfix.
Signed-off-by: Michael S. Tsirkin <[email protected]>
# gpg: Signature made Tue 15 Oct 2019 22:02:19 BST
# gpg: using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <[email protected]>" [full]
# gpg: aka "Michael S. Tsirkin <[email protected]>" [full]
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
virtio: add vhost-user-fs-pci device
virtio: add vhost-user-fs base device
virtio: Add virtio_fs linux headers
tests/acpi: add expected tables for arm/virt
tests: document how to update acpi tables
tests: Add bios tests to arm/virt
tests: allow empty expected files
tests/acpi: add empty files
tests: Update ACPI tables list for upcoming arm/virt tests
docs/specs: Add ACPI GED documentation
hw/arm: Use GED for system_powerdown event
hw/arm: Factor out powerdown notifier from GPIO
hw/arm/virt-acpi-build: Add PC-DIMM in SRAT
hw/arm/virt: Enable device memory cold/hot plug with ACPI boot
hw/arm/virt: Add memory hotplug framework
hw/acpi: Add ACPI Generic Event Device Support
hw/acpi: Do not create memory hotplug method when handler is not defined
hw/acpi: Make ACPI IO address space configurable
vhost-user: save features if the char dev is closed
Eduardo Habkost [Mon, 14 Oct 2019 15:01:33 +0000 (12:01 -0300)]
sphinx: Use separate doctree directories for different builders
sphinx-build is buggy when multiple processes are using the same
doctree directory in parallel. See the 3-year-old Sphinx bug
report at: https://github.com/sphinx-doc/sphinx/issues/2946
Instead of avoiding parallel builds or adding some kind of
locking, I'm using the simplest solution: just using a different
doctree cache for each builder.
Eduardo Habkost [Thu, 22 Aug 2019 22:52:10 +0000 (19:52 -0300)]
i386: Omit all-zeroes entries from KVM CPUID table
KVM has an 80-entry limit at KVM_SET_CPUID2. With the
introduction of CPUID[0x1F], it is now possible to hit this limit
with unusual CPU configurations, e.g.:
$ ./x86_64-softmmu/qemu-system-x86_64 \
-smp 1,dies=2,maxcpus=2 \
-cpu EPYC,check=off,enforce=off \
-machine accel=kvm
qemu-system-x86_64: kvm_init_vcpu failed: Argument list too long
This happens because QEMU adds a lot of all-zeroes CPUID entries
for unused CPUID leaves. In the example above, we end up
creating 48 all-zeroes CPUID entries.
KVM already returns all-zeroes when emulating the CPUID
instruction if an entry is missing, so the all-zeroes entries are
redundant. Skip those entries. This reduces the CPUID table
size by half while keeping CPUID output unchanged.
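The idea, sketched (illustrative only; the real change is in the C code
that builds the KVM_SET_CPUID2 table):

    # Sketch: drop CPUID entries whose output registers are all zero,
    # since KVM already returns zeroes for entries that are missing.
    def prune_cpuid_entries(entries):
        return [e for e in entries
                if e['eax'] or e['ebx'] or e['ecx'] or e['edx']]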
Bingsong Si [Thu, 22 Aug 2019 04:29:01 +0000 (12:29 +0800)]
i386: Fix legacy guest with xsave panic on host kvm without update cpuid.
Without kvm commit 412a3c41, CPUID(EAX=0xd,ECX=0).EBX is always equal
to 0 even after the guest updates xcr0; this will crash legacy guests
(e.g., CentOS 6).
Below is the call trace on the guest.
Tao Xu [Thu, 26 Sep 2019 02:10:55 +0000 (10:10 +0800)]
target/i386: drop the duplicated definition of cpuid AVX512_VBMI macro
Drop the duplicated definition of cpuid AVX512_VBMI macro and rename
it as CPUID_7_0_ECX_AVX512_VBMI. Rename CPUID_7_0_ECX_VBMI2 as
CPUID_7_0_ECX_AVX512_VBMI2.
Tao Xu [Thu, 26 Sep 2019 02:10:54 +0000 (10:10 +0800)]
target/i386: clean up comments over 80 chars per line
Add some comments and clean up comments over 80 chars per line. There
is also an extra line in the comment for CPUID_8000_0008_EBX_WBNOINVD;
remove the extra newline and spaces.
Wei Yang [Tue, 30 Jul 2019 00:37:40 +0000 (08:37 +0800)]
memory-device: break the loop if tmp exceed the hinted range
The memory-device list built by memory_device_build_list is ordered by
its address, this means if the tmp range exceed the hinted range, all
the following range will not overlap with it.
And this won't change default pc-dimm mapping and address assignment stay
the same as before this change.
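Sketched in pseudo-form (illustrative, not the actual C code): because
the ranges are sorted by start address, the scan can stop at the first
range starting at or beyond the end of the hinted range:

    # Sketch: ranges sorted by start address; once a range starts at or
    # after the end of the hinted range, no later range can overlap it.
    def overlaps_hint(sorted_ranges, hint_start, hint_size):
        hint_end = hint_start + hint_size
        for start, size in sorted_ranges:
            if start >= hint_end:
                break
            if start + size > hint_start:
                return True
        return False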
Igor Mammedov [Fri, 30 Aug 2019 11:07:23 +0000 (07:07 -0400)]
tests: cpu-plug-test: fix device_add for pc/q35 machines
Commit bc1fb850a3 silently broke the device_add test for CPU hotplug,
which resulted in the test passing successfully even though it wasn't
actually run.
Fix it by making sure that all non-present CPUs reported
by "query-hotpluggable-cpus" are hotplugged, instead of making up
and hardcoding values.
Using query-hotpluggable-cpus also allows consolidating the device_add
CPU test cases and reusing the same test function for all targets.
While at it, also add a check that at least one CPU was hotplugged,
to avoid silent breakage in the future.
Igor Mammedov [Fri, 30 Aug 2019 11:07:22 +0000 (07:07 -0400)]
tests: add qtest_qmp_device_add_qdict() helper
Add an API that takes a QDict directly, so users can skip the steps of
first building a JSON dictionary and converting it back to a QDict, as
the existing qtest_qmp_device_add() does, and instead use a QDict
directly without intermediate conversion.
Peter Maydell [Tue, 15 Oct 2019 17:15:59 +0000 (18:15 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20191015' into staging
target-arm queue:
* Add Aspeed AST2600 SoC support (but no new board model yet)
* aspeed/wdt: Check correct register for clock source
* bcm2835: code cleanups, better logging, trace events
* implement v2.0 of the Arm semihosting specification
* provide new 'transaction-based' ptimer API and use it
for the Arm devices that use ptimers
* ARM: KVM: support more than 256 CPUs
* remotes/pmaydell/tags/pull-target-arm-20191015: (67 commits)
hw/misc/bcm2835_mbox: Add trace events
hw/arm/bcm2835: Add various unimplemented peripherals
hw/arm/bcm2835: Rename some definitions
hw/arm/bcm2835_peripherals: Name various address spaces
hw/arm/bcm2835_peripherals: Improve logging
hw/arm/raspi: Use the IEC binary prefix definitions
aspeed/soc: Add ASPEED Video stub
aspeed: add support for the Aspeed MII controller of the AST2600
aspeed: Parameterise number of MACs
m25p80: Add support for w25q512jv
aspeed/soc: Add AST2600 support
aspeed: Introduce an object class per SoC
aspeed/i2c: Add AST2600 support
aspeed/i2c: Introduce an object class per SoC
hw/gpio: Add in AST2600 specific implementation
aspeed/smc: Add AST2600 support
aspeed/smc: Introduce segment operations
hw: wdt_aspeed: Add AST2600 support
watchdog/aspeed: Introduce an object class per SoC
aspeed/sdmc: Add AST2600 support
...
Properties are structures specific to the ARM MBOX.
Since one call in bcm2835_property.c concerns the mbox block,
name this trace event in the same bcm2835_mbox* namespace.
hw/arm/bcm2835: Add various unimplemented peripherals
Base addresses and sizes taken from the "BCM2835 ARM Peripherals"
datasheet from February 06 2012:
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
Various logging improvements at once:
- Use 0x prefix for hex numbers
- Display value written during write accesses
- Move some logs from GUEST_ERROR to UNIMP
Initial definitions for a simple machine using an AST2600 SoC (Cortex
CPU).
The Cortex CPU and its interrupt controller are too complex to handle
in the common Aspeed SoC framework. We introduce a new Aspeed SoC
class with instance_init and realize handlers to handle the differences
with the AST2400 and the AST2500 SoCs. This will add extra work to
keep both models in sync with future extensions, but it makes the code
clearer.
The I2C controller of the AST2400 and AST2500 SoCs has one IRQ shared
by all I2C busses. The AST2600 SoC I2C controller has one IRQ per bus
and 16 busses.
The AST2600 SoC SMC controller is now a SPI-only controller and has a
few extensions which we will need to take into account when SW
requires them. This is enough to support u-boot and Linux.
The most important changes will be on the register range 0x34 - 0x3C
memops. Introduce class read/write operations to handle the
differences between SoCs.
Joel Stanley [Wed, 25 Sep 2019 14:32:28 +0000 (16:32 +0200)]
hw: aspeed_scu: Add AST2600 support
The SCU controller on the AST2600 SoC has extra registers. Increase
the number of regs of the model and introduce a new field in the class
to customize the MemoryRegion operations depending on the SoC model.