Do not retain a GString in thread-local storage. Allocate a
new one and free it on every invocation. Do not g_strdup the
result; return the buffer from the GString. Do not use
warn_report.
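For illustration, a minimal sketch of the requested pattern using plain
GLib (the function name and arguments are made up for this example, not
the actual QEMU code): allocate a fresh GString on every call and hand
its buffer to the caller via g_string_free(..., FALSE) instead of
g_strdup'ing a copy.

    #include <glib.h>
    #include <inttypes.h>

    static char *disas_one_insn(uint64_t addr /* , ... */)
    {
        GString *s = g_string_new(NULL);

        /* ... append the disassembly text for the insn at addr ... */
        g_string_append_printf(s, "0x%" PRIx64 ":  <mnemonic>", addr);

        /*
         * FALSE means "do not free the character data": only the GString
         * wrapper is released and its buffer is returned to the caller,
         * who owns it and eventually releases it with g_free().
         */
        return g_string_free(s, FALSE);
    }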
Using cs_disasm allocated memory via the &insn parameter, but
that was never freed. Use cs_disasm_iter so that we use the
memory that we've already allocated, and so that we only try
to disassemble one insn, as desired. Do not allocate 1k to
hold the bytes for a single instruction.
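A hedged sketch of the cs_disasm_iter() usage described here (the
architecture, mode and buffer handling are illustrative): the cs_insn
scratch buffer is allocated once with cs_malloc() and reused, and only
a single instruction is decoded per call.

    #include <capstone/capstone.h>
    #include <stdio.h>

    static void disas_single_insn_example(const uint8_t *buf, size_t len,
                                          uint64_t pc)
    {
        csh handle;
        cs_insn *insn;
        const uint8_t *code = buf;
        size_t size = len;               /* only the bytes we actually have */

        if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK) {
            return;
        }

        insn = cs_malloc(handle);        /* reusable scratch insn */
        if (insn) {
            if (cs_disasm_iter(handle, &code, &size, &pc, insn)) {
                printf("%s\t%s\n", insn->mnemonic, insn->op_str);
            }
            cs_free(insn, 1);            /* free the single scratch insn */
        }
        cs_close(&handle);
    }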
Rename several functions, dropping "generic" and making "host"
vs "target" clearer. Make a bunch of functions static that are
not used outside this file. Replace INIT_DISASSEMBLE_INFO with
a trio of functions.
There are better ways to do this, e.g. meson cmake subproject,
but that requires cmake 3.7 and some of our CI environments
only provide cmake 3.5.
Nor can we add a meson.build file to capstone/, because the git
submodule would then always report "untracked files". Fixing that
would require creating our own branch on the qemu git mirror, at
which point we could just as easily create a native meson subproject.
Instead, build the library via the main meson.build.
This improves the current state of affairs in that we will now re-link
the qemu executables against a changed libcapstone.a, which we would
not do beforehand. In addition, the use of the configuration header file
instead of command-line -DEFINES means that we will rebuild the
capstone objects with changes to meson.build.
* remotes/kevin/tags/for-upstream: (37 commits)
qcow2: Use L1E_SIZE in qcow2_write_l1_entry()
qemu-storage-daemon: Fix help line for --export
iotests: Test block-export-* QMP interface
iotests: Allow supported and unsupported formats at the same time
iotests: Introduce qemu_nbd_list_log()
iotests: Factor out qemu_tool_pipe_and_status()
nbd: Deprecate nbd-server-add/remove
nbd: Merge nbd_export_new() and nbd_export_create()
block/export: Move writable to BlockExportOptions
block/export: Add query-block-exports
block/export: Create BlockBackend in blk_exp_add()
block/export: Move blk to BlockExport
block/export: Add BLOCK_EXPORT_DELETED event
block/export: Add block-export-del
block/export: Move strong user reference to block_exports
block/export: Add 'id' option to block-export-add
block/export: Add blk_exp_close_all(_type)
block/export: Allocate BlockExport in blk_exp_add()
block/export: Add node-name to BlockExportOptions
block/export: Move AioContext from NBDExport to BlockExport
...
Kevin Wolf [Thu, 24 Sep 2020 15:27:16 +0000 (17:27 +0200)]
iotests: Allow supported and unsupported formats at the same time
This is useful for specifying 'generic' as supported (which includes
only writable image formats), but still excluding some incompatible
writable formats.
Kevin Wolf [Thu, 24 Sep 2020 15:27:14 +0000 (17:27 +0200)]
iotests: Factor out qemu_tool_pipe_and_status()
We have three almost identical functions that call an external process
and return its output and return code. Refactor them into small wrappers
around a common function.
Kevin Wolf [Thu, 24 Sep 2020 15:27:12 +0000 (17:27 +0200)]
nbd: Merge nbd_export_new() and nbd_export_create()
There is no real reason any more why nbd_export_new() and
nbd_export_create() should be separate functions. The latter only
performs a few checks before it calls the former.
What makes the current state stand out is that it's the only function in
BlockExportDriver that is not a static function inside nbd/server.c, but
a small wrapper in blockdev-nbd.c that then calls back into nbd/server.c
for the real functionality.
Move all the checks to nbd/server.c and make the resulting function
static to improve readability.
Kevin Wolf [Thu, 24 Sep 2020 15:27:11 +0000 (17:27 +0200)]
block/export: Move writable to BlockExportOptions
The 'writable' option is a basic option that will probably be applicable
to most if not all export types that we will implement. Move it from NBD
to the generic BlockExport layer.
Kevin Wolf [Thu, 24 Sep 2020 15:27:07 +0000 (17:27 +0200)]
block/export: Add BLOCK_EXPORT_DELETED event
Clients may want to know when an export has finally disappeared
(block-export-del returns earlier than that in the general case), so add
a QAPI event for it.
Kevin Wolf [Thu, 24 Sep 2020 15:27:05 +0000 (17:27 +0200)]
block/export: Move strong user reference to block_exports
The reference owned by the user/monitor that is created when adding the
export and dropped when removing it was tied to the 'exports' list in
nbd/server.c. Every block export will have a user reference, so move it
to the block export level and tie it to the 'block_exports' list in
block/export/export.c instead. This is necessary for introducing a QMP
command for removing exports.
Note that exports are present in block_exports even after the user has
requested shutdown. This is different from NBD's exports where exports
are immediately removed on a shutdown request, even if they are still in
the process of shutting down. To avoid the user interacting with an
export that is shutting down (and possibly removing it a second time),
we need to remember whether the user actually still owns it.
Kevin Wolf [Thu, 24 Sep 2020 15:27:04 +0000 (17:27 +0200)]
block/export: Add 'id' option to block-export-add
We'll need an id to identify block exports in monitor commands. This
adds one.
Note that this is different from the 'name' option in the NBD server,
which is the externally visible export name. While block export ids need
to be unique in the whole process, export names must be unique only for
the same server. Different export types or (potentially in the future)
multiple NBD servers can have the same export name externally, but still
need different block export ids internally.
Kevin Wolf [Thu, 24 Sep 2020 15:27:03 +0000 (17:27 +0200)]
block/export: Add blk_exp_close_all(_type)
This adds a function to shut down all block exports, and another one to
shut down the block exports of a single type. The latter is used for now
when stopping the NBD server. As soon as we implement support for
multiple NBD servers, we'll need a per-server list of exports and it
will be replaced by a function using that.
As a side effect, the BlockExport layer has a list tracking all existing
exports now. closed_exports loses its only user and can go away.
Kevin Wolf [Thu, 24 Sep 2020 15:27:02 +0000 (17:27 +0200)]
block/export: Allocate BlockExport in blk_exp_add()
Instead of letting the driver allocate and return the BlockExport
object, allocate it already in blk_exp_add() and pass it. This allows us
to initialise the generic part before calling into the driver so that
the driver can just use these values instead of having to parse the
options a second time.
For symmetry, move freeing the BlockExport to blk_exp_unref().
Kevin Wolf [Thu, 24 Sep 2020 15:27:01 +0000 (17:27 +0200)]
block/export: Add node-name to BlockExportOptions
Every block export needs a block node to export, so add a 'node-name'
option to BlockExportOptions and remove the replaced option 'device'
from BlockExportOptionsNbd.
To maintain compatibility in nbd-server-add, BlockExportOptionsNbd needs
to be wrapped by a new type NbdServerAddOptions that adds 'device' back
because nbd-server-add doesn't use the BlockExportOptions base type at
all (so even without changing it to a 'node-name' option in
block-export-add, this compatibility code would be necessary).
Kevin Wolf [Thu, 24 Sep 2020 15:26:58 +0000 (17:26 +0200)]
nbd/server: Simplify export shutdown
Closing export is somewhat convoluted because nbd_export_close() and
nbd_export_put() call each other and the ways they actually end up being
nested is not necessarily obvious.
However, it is not really necessary to call nbd_export_close() from
nbd_export_put() when putting the last reference because it only does
three things:
1. Close all clients. We are going down to refcount 0, and all clients
hold a reference, so we know there is no active client any more.
2. Close the user reference (represented by exp->name being non-NULL).
The same argument applies: If the export were still named, we would
still have a reference.
3. Free exp->description. This is really cleanup work to be done when
the export is finally freed. There is no reason to clear it already
while clients are still in the process of shutting down.
So after moving the cleanup of exp->description, the code can be
simplified so that only nbd_export_close() calls nbd_export_put(), but
never the other way around.
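A simplified sketch of the resulting call structure (illustrative
stand-in types, not the actual nbd/server.c code): nbd_export_close()
may drop the last reference and therefore call nbd_export_put(), but
nbd_export_put() never calls back into nbd_export_close(), and the
description is only freed once the refcount hits zero.

    #include <glib.h>

    /* minimal stand-in for the real NBDExport, for illustration only */
    typedef struct NBDExport {
        int refcount;
        char *description;
    } NBDExport;

    static void nbd_export_put(NBDExport *exp)
    {
        if (--exp->refcount == 0) {
            g_free(exp->description);   /* cleanup moved to the final free */
            g_free(exp);
        }
        /* no call back into nbd_export_close() */
    }

    static void nbd_export_close(NBDExport *exp)
    {
        /* ... disconnect all clients, drop the named/user reference ... */
        nbd_export_put(exp);            /* may be the last reference */
    }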
Kevin Wolf [Thu, 24 Sep 2020 15:26:57 +0000 (17:26 +0200)]
qemu-nbd: Use blk_exp_add() to create the export
With this change, NBD exports are now only created through the
BlockExport interface. This allows us finally to move things from the
NBD layer to the BlockExport layer if they make sense for other export
types, too.
blk_exp_add() returns only a weak reference, so the explicit
nbd_export_put() goes away.
Kevin Wolf [Thu, 24 Sep 2020 15:26:56 +0000 (17:26 +0200)]
nbd: Remove NBDExport.close callback
The export close callback is unused by the built-in NBD server. qemu-nbd
uses it only during shutdown to wait for the unrefed export to actually
go away. It can just use nbd_export_close_all() instead and do without
the callback.
This removes the close callback from nbd_export_new() and makes both
callers of it more similar.
Kevin Wolf [Thu, 24 Sep 2020 15:26:55 +0000 (17:26 +0200)]
nbd: Add writethrough to block-export-add
qemu-nbd allows use of writethrough cache modes, which mean that write
requests made through NBD will cause a flush before they complete.
Expose the same functionality in block-export-add.
Kevin Wolf [Thu, 24 Sep 2020 15:26:53 +0000 (17:26 +0200)]
block/export: Remove magic from block-export-add
nbd-server-add tries to be convenient and adds two questionable
features that we don't want to share in block-export-add, even for NBD
exports:
1. When requesting a writable export of a read-only device, the export
is silently downgraded to read-only. This should be an error in the
context of block-export-add.
2. When using a BlockBackend name, unplugging the device from the guest
will automatically stop the NBD server, too. This may sometimes be
what you want, but it could also be very surprising. Let's keep
things explicit with block-export-add. If the user wants to stop the
export, they should tell us so.
Move these things into the nbd-server-add QMP command handler so that
they apply only there.
Kevin Wolf [Thu, 24 Sep 2020 15:26:52 +0000 (17:26 +0200)]
qemu-nbd: Use raw block driver for --offset
Instead of implementing qemu-nbd --offset in the NBD code, just put a
raw block node with the requested offset on top of the user image and
rely on that doing the job.
Not only does this simplify the nbd_export_new() interface and bring it
closer to the set of options that the nbd-server-add QMP command offers,
it also eliminates a potential source of bugs in the NBD code, which
previously had to add the offset manually in all relevant places.
Kevin Wolf [Thu, 24 Sep 2020 15:26:50 +0000 (17:26 +0200)]
block/export: Add BlockExport infrastructure and block-export-add
We want to have a common set of commands for all types of block exports.
Currently, this is only NBD, but we're going to add more types.
This patch adds the basic BlockExport and BlockExportDriver structs and
a QMP command block-export-add that creates a new export based on the
given BlockExportOptions.
qmp_nbd_server_add() becomes a wrapper around qmp_block_export_add().
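Roughly, the new abstraction looks like the sketch below (stand-in
typedefs and illustrative field names; the real structs in the block
export headers differ in detail): a common BlockExport base object plus
a per-type driver, so that block-export-add can dispatch on the
requested export type.

    #include <stddef.h>

    /* illustrative stand-ins for QAPI-generated and core types */
    typedef struct BlockExport BlockExport;
    typedef struct BlockExportOptions BlockExportOptions;
    typedef struct Error Error;

    typedef struct BlockExportDriver {
        int type;              /* which export type this driver implements */
        size_t instance_size;  /* size of the type-specific BlockExport */
        int (*create)(BlockExport *exp, BlockExportOptions *opts, Error **errp);
        void (*delete)(BlockExport *exp);
    } BlockExportDriver;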
However, there are only two routes to help_oneline being called:
help_f -> help_all -> help_oneline(ct->name, ct)
help_f -> help_onecmd(argv[1], ct)
In the first case, 'cmd' and 'ct->name' are the same thing,
so it is impossible for the if (cmd) test to be false and for ct->name
to still be validly printed - this is what upsets gcc
( https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96739 )
In the second case, cmd is argv[1] and we know we've got argv[1]
so again (cmd) is non-NULL.
Simplify help_oneline by just printing cmd.
(Also strengthen argc check just to be pedantic)
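For illustration, the simplified shape might look like this (stand-in
cmdinfo_t fields; the real function body differs in detail):

    #include <stdio.h>

    /* minimal stand-in for the real cmdinfo_t */
    typedef struct cmdinfo {
        const char *name;
        const char *args;
        const char *oneline;
    } cmdinfo_t;

    static void help_oneline(const char *cmd, const cmdinfo_t *ct)
    {
        /* 'cmd' is non-NULL on both call paths, so print it directly
         * instead of an old "cmd ? cmd : ct->name" style fallback. */
        printf("%s ", cmd);
        if (ct->args) {
            printf("%s ", ct->args);
        }
        printf("-- %s\n", ct->oneline);
    }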
Thomas Huth [Fri, 18 Sep 2020 15:35:14 +0000 (17:35 +0200)]
tests/check-block: Do not run the iotests with old versions of bash
macOS is shipped with a very old version of the bash (3.2), which
is currently not suitable for running the iotests anymore (e.g.
it is missing support for "readarray" which is used in the file
tests/qemu-iotests/common.filter). Add a check to skip the iotests
in this case - if someone still wants to run the iotests on macOS,
they can install a newer version from homebrew, for example.
* remotes/cohuck/tags/s390x-20201002:
s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA)
s390x/tcg: We support Miscellaneous-Instruction-Extensions Facility 2
s390x/tcg: Implement MULTIPLY SINGLE (MSC, MSGC, MSGRKC, MSRKC)
s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC)
s390x/tcg: Implement MULTIPLY HALFWORD (MGH)
s390x/tcg: Implement MULTIPLY (MG, MGRK)
s390x/tcg: Implement SUBTRACT HALFWORD (SGH)
s390x/tcg: Implement ADD HALFWORD (AGH)
s390x/cpumodel: S390_FEAT_MISC_INSTRUCTION_EXT -> S390_FEAT_MISC_INSTRUCTION_EXT2
vfio-ccw: plug memory leak while getting region info
s390x/tcg: Implement MONITOR CALL
s390: guest support for diagnose 0x318
s390/sclp: add extended-length sccb support for kvm guest
s390/sclp: use cpu offset to locate cpu entries
s390/sclp: check sccb len before filling in data
s390/sclp: read sccb from mem based on provided length
s390/sclp: rework sclp boundary checks
s390/sclp: get machine once during read scp/cpu info
hw/s390x/css: Remove double initialization
Peter Maydell [Fri, 2 Oct 2020 12:39:20 +0000 (13:39 +0100)]
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-python-021020-1' into staging
Python testing updates:
- drop python 3.5 test from travis
- replace Debian 9 containers with 10
- increase cross build timeout
- bump minimum python version in configure
- move user plugins tests to gitlab
- split deprecated builds into build and test
# gpg: Signature made Fri 02 Oct 2020 12:34:36 BST
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <[email protected]>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-python-021020-1:
gitlab: split deprecated job into build/check stages
gitlab: move linux-user plugins test across to gitlab
configure: Bump the minimum required Python version to 3.6
gitlab-ci: Increase the timeout for the cross-compiler builds
tests/docker: Remove old Debian 9 containers
shippable.yml: Remove the Debian9-based MinGW cross-compiler tests
tests/docker: Update the tricore container to debian 10
gitlab-ci: Remove the Debian9-based containers and containers-layer3
tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI
travis.yml: Drop the Python 3.5 build
travis.yml: Drop the superfluous Python 3.6 build
travis.yml: Update Travis to use Bionic and Focal instead of Xenial
travis.yml: Drop the default softmmu builds
migration: Silence compiler warning in global_state_store_running()
s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC)
Just like BRANCH ON CONDITION - however the address is read from memory
(always 8 bytes are read), we have to wrap the address manually. The
address is read using current CPU DAT/address-space controls, just like
ordinary data.
vfio-ccw: plug memory leak while getting region info
vfio_get_dev_region_info() unconditionally allocates memory
for a passed-in vfio_region_info structure (and does not re-use
an already allocated structure). Therefore, we have to free
the structure we pass to that function in vfio_ccw_get_region()
for every region we successfully obtained information for.
Fixes: 8fadea24de4e ("vfio-ccw: support async command subregion")
Fixes: 46ea3841edaf ("vfio-ccw: Add support for the schib region")
Fixes: f030532f2ad6 ("vfio-ccw: Add support for the CRW region and IRQ")
Reported-by: Alex Williamson <[email protected]>
Signed-off-by: Cornelia Huck <[email protected]>
Reviewed-by: Eric Farman <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-Id: <20200928101701[email protected]>
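The pattern of the fix, sketched and simplified (the real
vfio_ccw_get_region() queries several subregions in turn; 'type' and
'subtype' stand in for the concrete region identifiers):
vfio_get_dev_region_info() allocates the struct vfio_region_info on a
successful lookup, so each such lookup must be paired with a g_free().

    /* assumes QEMU's hw/vfio/vfio-common.h and GLib */
    static void query_region_example(VFIODevice *vdev, uint32_t type,
                                     uint32_t subtype)
    {
        struct vfio_region_info *info = NULL;

        if (vfio_get_dev_region_info(vdev, type, subtype, &info) == 0) {
            /* ... record info->offset and info->size ... */
            g_free(info);   /* plug the leak: free what the helper allocated */
        }
    }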
Recent upstream Linux uses the MONITOR CALL instruction for things like
BUG_ON() and WARN_ON(). We currently inject an operation exception when
we hit a MONITOR CALL instruction - which is wrong, as the instruction
is not glued to specific CPU features.
Doing a simple WARN_ON_ONCE() currently results in a panic:
[ 18.162801] illegal operation: 0001 ilc:2 [#1] SMP
[ 18.162889] Modules linked in:
[...]
[ 18.165476] Kernel panic - not syncing: Fatal exception: panic_on_oops
With a proper implementation, we now get:
[ 18.242754] ------------[ cut here ]------------
[ 18.242855] WARNING: CPU: 7 PID: 1 at init/main.c:1534 [...]
[ 18.242919] Modules linked in:
[...]
[ 18.246262] ---[ end trace a420477d71dc97b4 ]---
[ 18.259014] Freeing unused kernel memory: 4220K
DIAGNOSE 0x318 (diag318) is an s390 instruction that allows the storage
of diagnostic information that is collected by the firmware in the case
of hardware/firmware service events.
QEMU handles the instruction by storing the info in the CPU state. A
subsequent register sync will communicate the data to the hypervisor.
QEMU handles the migration via a VM State Description.
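A hedged sketch of how such per-CPU data is typically migrated with a
VMStateDescription (section and field names here are illustrative, not
necessarily what the actual patch uses):

    /* assumes QEMU's migration/vmstate.h and the s390x CPU definitions */
    static const VMStateDescription vmstate_diag318_example = {
        .name = "cpu/diag318",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(env.diag318_info, S390CPU), /* stored diag data */
            VMSTATE_END_OF_LIST()
        }
    };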
This feature depends on the Extended-Length SCCB (els) feature. If
els is not present, then a warning will be printed and the SCLP bit
that allows the Linux kernel to execute the instruction will not be
set.
Availability of this instruction is determined by byte 134 (aka fac134)
bit 0 of the SCLP Read Info block. This coincidentally expands into the
space used for CPU entries, which means VMs running with the diag318
capability may not be able to read information regarding all CPUs
unless the guest kernel supports an extended-length SCCB.
This feature is not supported in protected virtualization mode.
s390/sclp: add extended-length sccb support for kvm guest
As more features and facilities are added to the Read SCP Info (RSCPI)
response, more space is required to store them. The space used to store
these new features intrudes on the space originally used to store CPU
entries. This means as more features and facilities are added to the
RSCPI response, less space can be used to store CPU entries.
With the Extended-Length SCCB (ELS) facility, a KVM guest can execute
the RSCPI command and determine if the SCCB is large enough to store a
complete response. If it is not large enough, then the required length
will be set in the SCCB header.
The caller of the SCLP command is responsible for creating a
large-enough SCCB to store a complete response. Proper checking should
be in place, and the caller should execute the command once more with
the large-enough SCCB.
This facility also enables an extended SCCB for the Read CPU Info
(RCPUI) command.
When this facility is enabled, the boundary violation response cannot
be a result from the RSCPI, RSCPI Forced, or RCPUI commands.
In order to tolerate kernels that do not yet have full support for this
feature, a "fixed" offset to the start of the CPU Entries within the
Read SCP Info struct is set to allow for the original 248 max entries
when this feature is disabled.
Additionally, this is introduced as a CPU feature to protect the guest
from migrating to a machine that does not support storing an extended
SCCB. This could otherwise hinder the VM from being able to read all
available CPU entries after migration (such as during re-ipl).
The start of the CPU entry region in the Read SCP Info response data is
denoted by the offset_cpu field. As such, QEMU needs to begin creating
entries at this address.
This is in preparation for when Read SCP Info inevitably introduces new
bytes that push the start of the CPUEntry field further away.
Read CPU Info is unlikely to ever change, so let's not bother
accounting for the offset there.
The SCCB must be checked for a sufficient length before it is filled
with any data. If the length is insufficient, then the SCLP command
is suppressed and the proper response code is set in the SCCB header.
While we're at it, let's clean up the length check by placing the
calculation inside a macro.
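An illustrative version of such a check (macro name and exact shape are
assumptions; the real code in hw/s390x/sclp.c differs in detail): the
required length is the fixed part of the response plus one CPU entry per
possible CPU, and a too-short SCCB only gets a response code.

    /* assumes QEMU's hw/s390x/sclp.h (SCCB, CPUEntry, ReadInfo, response codes) */
    #define SCCB_REQUIRED_LEN(s, max_cpus) \
        (sizeof(s) + (max_cpus) * sizeof(CPUEntry))

    static bool sccb_len_ok_example(SCCB *sccb, unsigned int max_cpus)
    {
        if (be16_to_cpu(sccb->h.length) <
            SCCB_REQUIRED_LEN(ReadInfo, max_cpus)) {
            sccb->h.response_code =
                cpu_to_be16(SCLP_RC_INSUFFICIENT_SCCB_LENGTH);
            return false;
        }
        return true;
    }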
s390/sclp: read sccb from mem based on provided length
The header contained within the SCCB passed to the SCLP service call
contains the actual length of the SCCB. Instead of allocating a static
4K size for the work sccb, let's allow for a variable size determined
by the value in the header. The proper checks are already in place to
ensure the SCCB length is sufficient to store a full response and that
the length does not cross any explicitly-set boundaries.
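A simplified sketch of the approach (the real service-call handler does
more validation around this): read just the header to learn the guest's
SCCB length, then allocate and copy exactly that much instead of a
fixed 4K buffer.

    /* assumes QEMU's hw/s390x/sclp.h and exec/cpu-common.h */
    static SCCB *read_work_sccb_example(uint64_t sccb_addr)
    {
        SCCBHeader header;
        uint16_t len;
        SCCB *work_sccb;

        cpu_physical_memory_read(sccb_addr, &header, sizeof(header));
        len = be16_to_cpu(header.length);

        work_sccb = g_malloc0(len);                  /* sized by the guest */
        cpu_physical_memory_read(sccb_addr, work_sccb, len);
        return work_sccb;                            /* caller g_free()s it */
    }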
s390/sclp: get machine once during read scp/cpu info
Functions within read scp/cpu info will need access to the machine
state. Let's make a call to retrieve the machine state once and
pass the appropriate data to the respective functions.
Alex Bennée [Fri, 2 Oct 2020 09:15:38 +0000 (10:15 +0100)]
gitlab: split deprecated job into build/check stages
While the job is pretty fast for only a few targets, we still want to
catch breakage of the build. By splitting out the test step we can mark
it allow_failure while still ensuring we don't miss the build
breaking.
Alex Bennée [Fri, 2 Oct 2020 10:32:23 +0000 (11:32 +0100)]
gitlab: move linux-user plugins test across to gitlab
Even with the recent split moving beefier plugins into contrib and
dropping them from the check-tcg tests we are still hitting time
limits. This possibly points to a slowdown of --debug-tcg but seeing
as we are migrating stuff to gitlab we might as well move there and
bump the timeout.
Thomas Huth [Fri, 25 Sep 2020 15:40:27 +0000 (16:40 +0100)]
configure: Bump the minimum required Python version to 3.6
All our supported build platforms have Python 3.6 or newer nowadays, and
there are some useful features in Python 3.6 which are not available in
3.5 yet (e.g. the type hint annotations which will allow us to statically
type the QAPI parser), so let's bump the minimum Python version to 3.6 now.
Thomas Huth [Fri, 25 Sep 2020 15:40:26 +0000 (16:40 +0100)]
gitlab-ci: Increase the timeout for the cross-compiler builds
Some of the cross-compiler builds (the mips build and the win64 build
for example) are quite slow and sometimes hit the 1h time limit.
Increase the limit a little bit to make sure that we do not get failures
in the CI runs just because a build runs a few minutes longer.
Thomas Huth [Fri, 25 Sep 2020 15:40:24 +0000 (16:40 +0100)]
shippable.yml: Remove the Debian9-based MinGW cross-compiler tests
We're not supporting Debian 9 anymore, and we are now testing
MinGW cross-compiler builds in the gitlab-CI, too, so we do not
really need these jobs in the shippable.yml anymore.
Thomas Huth [Fri, 25 Sep 2020 15:40:22 +0000 (16:40 +0100)]
gitlab-ci: Remove the Debian9-based containers and containers-layer3
According to our support policy, Debian 9 is not supported by the
QEMU project anymore. Since we now switched the MinGW cross-compiler
builds to Fedora, we do not need these Debian9-based containers
in the gitlab-CI anymore, and can now also get rid of the "layer3"
container build stage this way.
Thomas Huth [Fri, 25 Sep 2020 15:40:21 +0000 (16:40 +0100)]
tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI
According to our support policy, we do not support Debian 9 in QEMU
anymore, and we only support building the Windows binaries with a
very recent version of the MinGW toolchain. So we should not test
the MinGW cross-compilation with Debian 9 anymore, but switch to
something newer like Fedora. To do this, we need a separate Fedora
container for each build that provides the QEMU_CONFIGURE_OPTS
environment variable.
Unfortunately, the MinGW 64-bit compiler seems to be a little bit
slow, so we also have to disable some features like "capstone" in the
build here to make sure that the CI pipelines still finish within a
reasonable amount of time.
Thomas Huth [Fri, 25 Sep 2020 15:40:18 +0000 (16:40 +0100)]
travis.yml: Update Travis to use Bionic and Focal instead of Xenial
According to our support policy, we do not support Xenial anymore.
Time to switch the bigger parts of the builds to Focal instead.
Some few jobs have to be updated to Bionic instead, since they are
currently still failing on Focal otherwise. Also "--disable-pie" is
causing linker problems with newer versions of Ubuntu ... so remove
that switch from the jobs now (we still test it in a gitlab CI job,
so we don't lose much test coverage here).
Thomas Huth [Fri, 25 Sep 2020 15:40:17 +0000 (16:40 +0100)]
travis.yml: Drop the default softmmu builds
The total runtime of all Travis jobs is very long and we are testing
all softmmu targets in the gitlab-CI already - so we can speed up the
Travis testing a little bit by not testing the softmmu targets here
anymore.
Thomas Huth [Fri, 25 Sep 2020 15:40:16 +0000 (16:40 +0100)]
migration: Silence compiler warning in global_state_store_running()
GCC 9.3.0 on Ubuntu complains:
In file included from /usr/include/string.h:495,
from /home/travis/build/huth/qemu/include/qemu/osdep.h:87,
from ../migration/global_state.c:13:
In function ‘strncpy’,
inlined from ‘global_state_store_running’ at ../migration/global_state.c:47:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: error:
‘__builtin_strncpy’ specified bound 100 equals destination size [-Werror=stringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... but we apparently really want to do a strncpy here - the size is already
checked with the assert() statement right in front of it. To silence the
warning, simply replace it with our strpadcpy() function.
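A sketch of the replacement (the field is a stand-in for the actual
GlobalState.runstate buffer): strpadcpy() copies the string and pads
the rest of the buffer, so GCC no longer sees strncpy() called with a
bound equal to the destination size.

    #include <assert.h>
    #include <string.h>
    #include "qemu/cutils.h"   /* declares strpadcpy() */

    static char runstate[100];  /* illustrative stand-in field */

    static void store_runstate_example(const char *state)
    {
        /* the length is already guaranteed to fit ... */
        assert(strlen(state) < sizeof(runstate));
        /* ... so pad-copy instead of strncpy() with a full-size bound */
        strpadcpy(runstate, sizeof(runstate), state, '\0');
    }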
* remotes/jsnow-gitlab/tags/ide-pull-request:
ide: cancel pending callbacks on SRST
ide: clear interrupt on command write
ide: remove magic constants from the device register
ide: reorder set/get sector functions
ide: model HOB correctly
ide: don't tamper with the device register
ide: rename cmd_write to ctrl_write
hw/ide/ahci: Do not dma_memory_unmap(NULL)
MAINTAINERS: Update my git address
John Snow [Fri, 24 Jul 2020 05:23:00 +0000 (01:23 -0400)]
ide: cancel pending callbacks on SRST
The SRST implementation did not keep up with the rest of IDE; it is
possible to perform a weak reset on an IDE device to remove the BSY/DRQ
bits, and then issue writes to the control/device registers which can
cause chaos with the state machine.
Fix that by actually performing a real reset.
Reported-by: Alexander Bulekov <[email protected]>
Fixes: https://bugs.launchpad.net/qemu/+bug/1878253
Fixes: https://bugs.launchpad.net/qemu/+bug/1887303
Fixes: https://bugs.launchpad.net/qemu/+bug/1887309
Signed-off-by: John Snow <[email protected]>
John Snow [Fri, 24 Jul 2020 05:22:58 +0000 (01:22 -0400)]
ide: remove magic constants from the device register
(In QEMU, we call this the "select" register.)
My memory isn't good enough to memorize what these magic runes
do. Label them to prevent mixups from happening in the future.
Side note: I assume it's safe to always set 0xA0 even though ATA2 claims
these bits are reserved, because ATA3 immediately reinstated that these
bits should be always on. ATA4 and subsequent specs only claim that the
fields are obsolete, so I assume it's safe to leave these set and that
it should work with the widest array of guests.
John Snow [Fri, 24 Jul 2020 05:22:56 +0000 (01:22 -0400)]
ide: model HOB correctly
I have been staring at this FIXME for years and I never knew what it
meant. I finally stumbled across it!
When writing to the command registers, the old value is shifted into a
HOB copy of the register and the new value is written into the primary
register. When reading registers, the value retrieved is dependent on
the HOB bit in the CONTROL register.
By setting bit 7 (0x80) in CONTROL, any register read will, if it has
one, yield the HOB value for that register instead.
Our code has a problem: We were using bit 7 of the DEVICE register to
model this. We use bus->cmd roughly as the control register already, as
it stores the value from ide_ctrl_write.
Lastly, all command register writes reset the HOB, so fix that, too.
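Illustrative sketch of the read side (not the exact QEMU code; the
register and bit chosen here are just examples): the HOB bit lives in
the CONTROL register that we already track as bus->cmd, and it selects
between the current and the HOB copy of a command-block register.

    /* assumes QEMU's hw/ide/internal.h (IDEBus, IDEState) */
    static uint8_t ide_lcyl_read_example(IDEBus *bus, IDEState *s)
    {
        bool hob = bus->cmd & 0x80;          /* HOB bit in CONTROL, not DEVICE */

        return hob ? s->hob_lcyl : s->lcyl;  /* LBA-mid / cylinder-low */
    }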
John Snow [Fri, 24 Jul 2020 05:22:55 +0000 (01:22 -0400)]
ide: don't tamper with the device register
In real ISA operation, register writes go out to an entire bus channel
and all listening devices receive the write. The devices do not toggle
the DEV bit based on their own configuration, nor does the HBA
intermediate or tamper with that value.
The reality of the matter is that DEV0/DEV1 accordingly will react to
command register writes based on whether or not the device was selected.
This does not fix a known bug, but it makes the code slightly simpler
and more obvious.
This is because we don't check the return value from dma_memory_map()
which can return NULL, then we call dma_memory_unmap(NULL) which is
illegal. Fix this by only unmapping if the value is not NULL (and the
size is not the expected one).
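A simplified sketch of the guarded unmap (modelled on the description
above, not the exact AHCI helper): dma_memory_map() may return NULL or
map fewer bytes than requested, and only a non-NULL partial mapping is
handed back.

    /* assumes QEMU's sysemu/dma.h */
    static void map_page_example(AddressSpace *as, uint8_t **ptr,
                                 uint64_t addr, uint32_t wanted)
    {
        dma_addr_t len = wanted;

        *ptr = dma_memory_map(as, addr, &len, DMA_DIRECTION_FROM_DEVICE);
        if (*ptr && len < wanted) {
            /* partial mapping: give it back rather than use it */
            dma_memory_unmap(as, *ptr, len, DMA_DIRECTION_FROM_DEVICE, len);
            *ptr = NULL;
        }
    }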
Peter Maydell [Thu, 1 Oct 2020 15:41:30 +0000 (16:41 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20201001' into staging
target-arm queue:
* Make isar_feature_aa32_fp16_arith() handle M-profile
* Fix SVE splice
* Fix SVE LDR/STR
* Remove ignore_memory_transaction_failures on the raspi2
* raspi: Various cleanup/refactoring
* remotes/pmaydell/tags/pull-target-arm-20201001:
hw/arm/raspi: Remove use of the 'version' value in the board code
hw/arm/raspi: Use RaspiProcessorId to set the firmware load address
hw/arm/raspi: Introduce RaspiProcessorId enum
hw/arm/raspi: Use more specific machine names
hw/arm/raspi: Avoid using TypeInfo::class_data pointer
hw/arm/raspi: Move arm_boot_info structure to RaspiMachineState
hw/arm/raspi: Load the firmware on the first core
hw/arm/raspi: Display the board revision in the machine description
hw/arm/raspi: Remove ignore_memory_transaction_failures on the raspi2
hw/arm/bcm2835: Add more unimplemented peripherals
hw/arm/raspi: Define various blocks base addresses
target/arm: Fix SVE splice
target/arm: Fix sve ldr/str
target/arm: Make isar_feature_aa32_fp16_arith() handle M-profile
target/arm: Add ID register values for Cortex-M0
hw/intc/armv7m_nvic: Only show ID register values for Main Extension CPUs
target/arm: Move id_pfr0, id_pfr1 into ARMISARegisters
target/arm: Replace ARM_FEATURE_PXN with ID_MMFR0.VMSA check
hw/arm/raspi: Remove use of the 'version' value in the board code
We expected the 'version' ID to match the board processor ID,
but this is not always true (for example boards with revision
id 0xa02042/0xa22042 are Raspberry Pi 2 with a BCM2837 SoC).
This was not important because we were not modelling them, but
since the recent refactoring now allows us to model these boards, it
is safer to check the processor id directly. Remove the version
check.
As we only support a reduced set of the REV_CODE_PROCESSOR id
encoded in the board revision, define the PROCESSOR_ID values
as an enum. We can simplify the board_soc_type and cores_count
methods.
Now that we can instantiate different machines based on their
board_rev register value, we can have various raspi2 and raspi3.
In commit fc78a990ec103 we corrected the machine description.
Correct the machine names too. For backward compatibility, add
an alias to the previous generic name.
hw/arm/raspi: Avoid using TypeInfo::class_data pointer
Using class_data pointer to create a MachineClass is not
the recommended way anymore. The correct way is to open-code
the MachineClass::fields in the class_init() method.
We cannot use TYPE_RASPI_MACHINE::class_base_init() because
it is called *before* each machine class_init(), therefore the
board_rev field is not populated. We have to manually call
raspi_machine_class_common_init() for each machine.
The 'first_cpu' is more a QEMU accelerator-related concept
than a variable the machine requires to use.
Since the machine is aware of its CPUs, directly use the
first one to load the firmware.
hw/arm/raspi: Remove ignore_memory_transaction_failures on the raspi2
Commit 1c3db49d39 added the raspi3, which uses the same peripherals
as the raspi2 (but with different ARM cores). The raspi3 was
introduced without the ignore_memory_transaction_failures flag.
Almost 2 years later, the machine is usable running U-Boot and
Linux.
In commit 00cbd5bd74 we mapped a lot of unimplemented devices,
commit d442d95f added the thermal block, and commit 0e5bbd7406 the
system timer.
As we are happy with the raspi3, let's remove this flag on the
raspi2.