Add a Python script with the new logic for finding tests:
Current ./check behavior:
- tests are named [0-9][0-9][0-9]
- tests must be registered in the group file (even if a test doesn't
belong to any group, like 142)
Behavior of findtests.py:
- the group file is dropped
- tests are all files in the tests/ subdirectory (except for .out
files), so there is no longer any need to "register the test"; just
create it with an appropriate name in the tests/ subdirectory. Old
names like [0-9][0-9][0-9] (in the root iotests directory) are
supported too, but not recommended for new tests
- groups are parsed from the '# group: ' line inside test files (see
the sketch after this list)
- an optional file group.local may be used to define some additional
groups for downstreams
- the 'disabled' group is used to temporarily disable tests. So,
instead of commenting out tests in the old 'group' file, you can now
add them to the disabled group with the help of the 'group.local' file
- selecting test ranges like 5-15 is no longer supported
(to support restarting a failed ./check command from the middle of the
process, a new argument is added: --start-from)
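For illustration, a minimal Python sketch of such '# group: ' parsing
(not the actual findtests.py code; the file layout and names here are
assumptions) could look like this:

    import os
    import re

    def parse_test_groups(tests_dir):
        """Map group name -> list of test names in tests_dir."""
        groups = {}
        for name in sorted(os.listdir(tests_dir)):
            if name.endswith('.out'):
                continue  # reference outputs are not tests
            with open(os.path.join(tests_dir, name)) as f:
                for line in f:
                    m = re.match(r'# group: (.+)', line)
                    if m:
                        for group in m.group(1).split():
                            groups.setdefault(group, []).append(name)
                        break
        return groups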
Benefits:
- no rebase conflicts in the group file when porting patches from
branch to branch
- no conflicts in upstream when different series want to occupy the
same test number
- meaningful names for test files
For example, with numeric names, a person who wants to add some test
about block-stream will most probably just create a new test. But if a
test-block-stream test already existed, they would look at it first and
maybe just add a test case to it.
And anyway, meaningful names are better.
This commit doesn't update the check behavior (that will be done in a
further commit); still, the documentation is changed as if the new
behavior were already here. Let's live with this small inconsistency
for the following few commits, until the final change.
Kevin Wolf [Mon, 18 Jan 2021 12:34:47 +0000 (13:34 +0100)]
block: Separate blk_is_writable() and blk_supports_write_perm()
Currently, blk_is_read_only() tells whether a given BlockBackend can
only be used in read-only mode because its root node is read-only. Some
callers actually try to answer a slightly different question: Is the
BlockBackend configured to be writable, by taking write permissions on
the root node?
This can differ, for example, for CD-ROM devices which don't take write
permissions, but may be backed by a writable image file. scsi-cd allows
write requests to the drive if blk_is_read_only() returns false.
However, the write request will immediately run into an assertion
failure because the write permission is missing.
This patch introduces separate functions for both questions.
blk_supports_write_perm() answers the question whether the block
node/image file can support writable devices, whereas blk_is_writable()
tells whether the BlockBackend is currently configured to be writable.
All calls of blk_is_read_only() are converted to one of the two new
functions.
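As a toy model of the distinction (a Python sketch with invented field
names; the real code is C in QEMU's block layer):

    class BlockBackend:
        def __init__(self, root_read_only, has_write_perm):
            self.root_read_only = root_read_only  # root node is read-only
            self.has_write_perm = has_write_perm  # write permission taken

    def blk_supports_write_perm(blk):
        # Could this backend support a writable device at all?
        return not blk.root_read_only

    def blk_is_writable(blk):
        # Is the backend actually configured for writing right now?
        return blk.has_write_perm

    # A CD-ROM backed by a writable image: write perm is supported,
    # but the device does not take it, so writes must be rejected.
    cd = BlockBackend(root_read_only=False, has_write_perm=False)
    assert blk_supports_write_perm(cd) and not blk_is_writable(cd)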
Peter Maydell [Wed, 27 Jan 2021 17:40:24 +0000 (17:40 +0000)]
Merge remote-tracking branch 'remotes/edgar/tags/edgar/xilinx-next-2021-01-27.for-upstream' into staging
For upstream
# gpg: Signature made Wed 27 Jan 2021 07:41:20 GMT
# gpg: using RSA key AC44FEDC14F7F1EBEDBF415129C596780F6BCA83
# gpg: Good signature from "Edgar E. Iglesias (Xilinx key) <[email protected]>" [unknown]
# gpg: aka "Edgar E. Iglesias <[email protected]>" [full]
# Primary key fingerprint: AC44 FEDC 14F7 F1EB EDBF 4151 29C5 9678 0F6B CA83
* remotes/edgar/tags/edgar/xilinx-next-2021-01-27.for-upstream:
target/microblaze: Add security attributes on memory transactions
target/microblaze: use MMUAccessType instead of int in mmu_translate
target/microblaze: Add use-non-secure property
Joe Komlodi [Fri, 22 Jan 2021 00:18:53 +0000 (16:18 -0800)]
target/microblaze: Add use-non-secure property
This property is used to control the security of the following interfaces
on MicroBlaze:
M_AXI_DP - data interface
M_AXI_IP - instruction interface
M_AXI_DC - dcache interface
M_AXI_IC - icache interface
It works by enabling or disabling the use of the non_secure[3:0] signals.
Interfaces and their corresponding values are taken from:
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug984-vivado-microblaze-ref.pdf
page 153.
Peter Maydell [Tue, 26 Jan 2021 21:08:13 +0000 (21:08 +0000)]
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2021-01-26' into staging
Block patches:
- Make backup block jobs use asynchronous requests with the block-copy
module
- Use COR filter node for stream block jobs
- Make coroutine-sigaltstack’s qemu_coroutine_new() function thread-safe
- Report error string when file locking fails with an unexpected errno
- iotest fixes, additions, and some refactoring
* remotes/maxreitz/tags/pull-block-2021-01-26: (53 commits)
iotests/178: Pass value to invalid option
iotests/118: Drop 'change' test
iotests: Add test for the regression fixed in c8bf9a9169
block: report errno when flock fcntl fails
simplebench: add bench-backup.py
simplebench: bench_block_job: add cmd_options argument
simplebench/bench_block_job: use correct shebang line with python3
block/block-copy: drop unused argument of block_copy()
block/block-copy: drop unused block_copy_set_progress_callback()
qapi: backup: disable copy_range by default
backup: move to block-copy
block/backup: drop extra gotos from backup_run()
block/block-copy: make progress_bytes_callback optional
iotests: 257: prepare for backup over block-copy
iotests: 219: prepare for backup over block-copy
iotests: 185: prepare for backup over block-copy
iotests/129: Limit backup's max-chunk/max-workers
iotests: 56: prepare for backup over block-copy
qapi: backup: add max-chunk and max-workers to x-perf struct
job: call job_enter from job_pause
...
Max Reitz [Tue, 26 Jan 2021 12:38:34 +0000 (13:38 +0100)]
iotests/178: Pass value to invalid option
ccd3b3b8112 has deprecated short-hand boolean options (i.e., options
without values). All options without values are interpreted as boolean
options, so this includes the invalid option "snapshot.foo" used in
iotest 178.
A further commit will add a benchmark
(scripts/simplebench/bench-backup.py), which will show that backup
works better with async parallel requests (see the previous commit) and
with copy_range disabled. So, let's disable copy_range by default.
Note: the option was added several commits ago with a default of true
to follow the old behavior (the feature was enabled unconditionally),
and only now are we going to change the default behavior.
block/block-copy: make progress_bytes_callback optional
We are going to stop using this callback in the following commit.
Still, the callback handling code will be dropped in a separate commit.
So, for now, let's make it optional.
Iotest 257 dumps a lot of in-progress information about the backup job,
such as offset and bitmap dirtiness. A further commit will move backup
to be one block-copy call, which will introduce async parallel requests
instead of plain cluster-by-cluster copying. To keep things
deterministic, allow only one worker (only one copy request at a time)
for this test.
The further change of moving backup to be one block-copy call will
make the copying chunk size and the cluster size two separate things.
So, even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 219 depends on the specified chunk size. Update it to use an
explicit chunk size for backup, as is done for mirror.
The further change of moving backup to be one block-copy call will
make the copying chunk size and the cluster size two separate things.
So, even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. This will change, as the first
iteration would result in offset=1M independently of the speed.
So, let's explicitly specify what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we don't need to limit
max-workers, as the block-copy rate limiter will handle the situation
and won't start new workers when the speed limit is obviously reached.
Max Reitz [Wed, 20 Jan 2021 10:20:43 +0000 (11:20 +0100)]
iotests/129: Limit backup's max-chunk/max-workers
Right now, this does not change anything, because backup ignores
max-chunk and max-workers. However, as soon as backup is switched over
to block-copy for the background copying process, we will need it to
keep 129 passing.
After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, we'll have to wait for the paused
status, as we need to wait for several parallel requests. So, let's
gently wait instead of just asserting that the job is already paused.
qapi: backup: add max-chunk and max-workers to x-perf struct
Add new parameters to configure future backup features. The patch
introduces neither aio backup requests (so we actually have only one
worker) nor requests larger than one cluster. Still, we formally
satisfy these maximums anyway, so add the parameters now to facilitate
the further patch which will really change the backup job behavior.
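An iotests-style usage example (a sketch; vm is the test VM, the node
names are made up, and iotests' qmp wrapper turns the x_perf keyword
into the x-perf argument):

    # Backup with at most one in-flight request of at most 64k,
    # and copy offloading disabled.
    result = vm.qmp('blockdev-backup',
                    job_id='backup0', device='source-node',
                    target='target-node', sync='full',
                    x_perf={'use-copy-range': False,
                            'max-workers': 1,
                            'max-chunk': 64 * 1024})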
If the main job coroutine called job_yield() (while some background
process is in progress), we should give it a chance to call
job_pause_point(). It will be used in backup when it is moved on top of
async block-copy.
Note that job_user_pause() is not enough: we want to handle
child_job_drained_begin() as well, which calls job_pause().
Still, if the job is already in job_do_yield() in job_pause_point(), we
should not enter it.
The iotest 109 output is modified: on stop we do bdrv_drain_all(),
which now triggers the job pause immediately (and the pause after ready
is standby).
We are going to directly use one async block-copy operation for the
backup job, so we need a rate limiter.
We want to maintain the current backup behavior: only background
copying is limited, and copy-before-write operations only participate
in the limit calculation. Therefore we need one rate limiter for the
block-copy state and a boolean flag in the block-copy call state for
the actual limitation.
Note that we can't just account each chunk in the limiter after
successful copying: that would not save us from starting a lot of async
sub-requests which exceed the limit by too much. Instead, let's use the
following scheme on sub-request creation (see the sketch below):
1. If at the moment the limit is not exceeded, create the request and
account it immediately.
2. If at the moment the limit is already exceeded, don't create the
sub-request and handle the limit instead (by sleeping).
With this approach we'll never exceed the limit by more than one
sub-request (which pretty much matches the current backup behavior).
Note also that if there is an in-flight block-copy async call,
block_copy_kick() should be used after set-speed to apply the new
setting faster. For that, block_copy_kick() is made public in this
patch.
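A self-contained Python sketch of this scheme (the real implementation
is C in the block-copy code; names here are simplified):

    import time

    class RateLimiter:
        """Allow at most `speed` bytes per second, on average."""
        def __init__(self, speed):
            self.speed = speed
            self.start = time.monotonic()
            self.accounted = 0

        def exceeded(self):
            elapsed = time.monotonic() - self.start
            return self.accounted > elapsed * self.speed

        def account(self, nbytes):
            self.accounted += nbytes

    def try_create_subrequest(limiter, nbytes):
        if not limiter.exceeded():
            limiter.account(nbytes)  # 1. account on creation, immediately
            return True              #    caller starts the async request
        time.sleep(0.01)             # 2. limit exceeded: sleep instead
        return False                 #    of creating; caller retries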
Refactor the common path to use a BlockCopyCallState pointer as a
parameter, to prepare it for use in asynchronous block-copy (at least,
we'll need to run block-copy in a coroutine, passing all the parameters
as one pointer).
Experiments show that copy_range does not always make things faster.
So, to make experimentation simpler, let's add a parameter. Some more
perf parameters will be added soon, so here is a new struct.
For now, add the new backup QMP parameter with an x- prefix for the
following reasons:
- We are going to add more performance parameters; some will be
related to the whole block-copy process, some only to background
copying in backup (ignored for copy-before-write operations).
- On the other hand, we are going to use the block-copy interface in
other block jobs, which will need performance options as well. And it
should be the same structure or at least somehow related.
So, there are too many unclear things about how the interface should
look, and for now we need the new options mostly for testing. Let's
keep them experimental for a while.
In do_backup_common(), the new x-perf parameter is handled in a way
that makes adding further options simpler.
We add use-copy-range with default=true, and we'll change the default
in a further patch, after moving backup to use block-copy.
Max Reitz [Mon, 25 Jan 2021 12:03:05 +0000 (13:03 +0100)]
coroutine-sigaltstack: Add SIGUSR2 mutex
Disposition (action) for any given signal is global for the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: One of them may revert
the SIGUSR2 handler to SIG_DFL, between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2. That
SIGUSR2 will then terminate the QEMU process abnormally.
We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler. To do so, wrap the whole section where
that is done in a mutex.
Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new(). Laszlo has posted a patch to do so here:
However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with it. The mutex proposed
here may negatively impact performance, but the change is much simpler.
Max Reitz [Mon, 18 Jan 2021 10:57:18 +0000 (11:57 +0100)]
iotests/129: Limit mirror job's buffer size
Issuing 'stop' on the VM drains all nodes. If the mirror job has many
large requests in flight, this may lead to significant I/O that looks a
bit like 'stop' would make the job try to complete (which is what 129
should verify not to happen).
We can limit the I/O in flight by limiting the buffer size, so mirror
will make very little progress during the 'stop' drain.
(We do not need to do anything about commit, which has a buffer size of
512 kB by default; or backup, which goes cluster by cluster. Once we
have asynchronous requests for backup, that will change, but then we can
fine-tune the backup job to only perform a single request on a very
small chunk, too.)
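In iotests terms, the limit amounts to something like the following
sketch (device and target names are invented):

    # Keep at most 64k of mirror I/O in flight, so the 'stop' drain
    # sees very little progress.
    result = vm.qmp('drive-mirror', device='drive0',
                    target='/tmp/target.img', sync='full',
                    buf_size=65536)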
Max Reitz [Mon, 18 Jan 2021 10:57:17 +0000 (11:57 +0100)]
iotests/129: Actually test a commit job
Before this patch, test_block_commit() performs an active commit, which
under the hood is a mirror job. If we want to test various different
block jobs, we should perhaps run an actual commit job instead.
Doing so requires adding an overlay above the source node before the
commit is done (and then specifying the source node as the top node for
the commit job).
Max Reitz [Mon, 18 Jan 2021 10:57:16 +0000 (11:57 +0100)]
iotests/129: Use throttle node
Throttling on the BB has not affected block jobs in a while, so it is
possible that one of the jobs in 129 finishes before the VM is stopped.
We can fix that by running the job from a throttle node.
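A sketch of such a setup (the parameter spellings are assumptions from
memory and may differ between QEMU versions):

    # Create a throttle group, then a throttle filter node on top of
    # the image; the job then runs on the filter node.
    vm.qmp('object-add', qom_type='throttle-group', id='tg0',
           props={'x-bps-total': 4096})
    vm.qmp('blockdev-add', driver='throttle', node_name='throttled',
           throttle_group='tg0',
           file={'driver': 'qcow2',
                 'file': {'driver': 'file', 'filename': 'test.img'}})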
Max Reitz [Mon, 18 Jan 2021 10:57:15 +0000 (11:57 +0100)]
iotests/129: Do not check @busy
@busy is false when the job is paused, which happens all the time
because that is how jobs yield (e.g. for mirror at least since commit 565ac01f8d3).
Back when 129 was added (2015), perhaps there was no better way of
checking whether the job was still actually running. Now we have the
@status field (as of 58b295ba52c, i.e. 2018), which can give us exactly
that information.
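An iotests-style check can thus look at @status instead of @busy
(sketch):

    result = vm.qmp('query-block-jobs')
    # @status stays 'running' even while the job coroutine yields
    # (which is when @busy would be false).
    assert result['return'][0]['status'] == 'running'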
Max Reitz [Mon, 18 Jan 2021 10:57:12 +0000 (11:57 +0100)]
iotests/297: Rewrite in Python and extend reach
Instead of checking iotests.py only, check all Python files in the
qemu-iotests/ directory. Of course, most of them do not pass, so there
is an extensive skip list for now. (The only files that do pass are
209, 254, 283, and iotests.py.)
(Alternatively, we could have the opposite, i.e. an explicit list of
files that we do want to check, but I think it is better to check files
by default.)
Unless started in debug mode (./check -d), the output has no information
on which files are tested, so we will not have a problem e.g. with
backports, where some files may be missing when compared to upstream.
Besides the technical rewrite, some more things are changed:
- For the pylint invocation, PYTHONPATH is adjusted. This mirrors
setting MYPYPATH for mypy.
- Also, MYPYPATH is now derived from PYTHONPATH, so that we include
paths set by the environment. Maybe at some point we want to let the
check script add '../../python/' to PYTHONPATH so that iotests.py does
not need to do that.
- Passing --notes=FIXME,XXX to pylint suppresses warnings for TODO
comments. TODO is fine, we do not need 297 to complain about such
comments.
- The "Success" line from mypy's output is suppressed, because (A) it
does not add useful information, and (B) it would leak information
about the files having been tested to the reference output, which we
decidedly do not want.
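The invocation boils down to something like this sketch (simplified;
the real 297 assembles the environment and the file list more
carefully):

    import os
    import subprocess

    env = os.environ.copy()
    env['PYTHONPATH'] = os.pathsep.join(
        filter(None, ['../../python/', env.get('PYTHONPATH')]))
    env['MYPYPATH'] = env['PYTHONPATH']

    files = ['209', '254', '283', 'iotests.py']  # the ones passing today

    subprocess.run(['python3', '-m', 'pylint',
                    '--notes=FIXME,XXX'] + files, env=env)
    subprocess.run(['python3', '-m', 'mypy'] + files, env=env)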
Max Reitz [Mon, 18 Jan 2021 10:57:11 +0000 (11:57 +0100)]
iotests.py: Assume a couple of variables as given
There are a couple of environment variables that we fetch with
os.environ.get() without supplying a default. Clearly they are required
and expected to be set by the ./check script (as evidenced by
execute_setup_common(), which checks for test_dir and
qemu_default_machine to be set, and aborts if they are not).
Using .get() this way has the disadvantage of returning an Optional[str]
type, which mypy will complain about when tests just assume these values
to be str.
Use [] instead, which raises a KeyError for environment variables that
are not set. When this exception is raised, catch it and move the abort
code from execute_setup_common() there.
Drop the 'assert iotests.sock_dir is not None' from iotest 300, because
that sort of thing is precisely what this patch wants to prevent.
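The difference in a nutshell (a standalone illustration, not the actual
iotests.py code):

    import os
    import sys

    opt = os.environ.get('TEST_DIR')  # mypy infers Optional[str]:
                                      # every str use needs a None check

    try:
        test_dir = os.environ['TEST_DIR']  # mypy infers str
    except KeyError:
        sys.exit('Please run this test via the "check" script')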
This patch completes the series with the COR-filter applied to
block-stream operations.
Adding the filter makes it possible in the future to implement
discarding copied regions in backing files during the block-stream job,
to reduce disk overuse (we need control over permissions).
Also, the filter is now smart enough to do copy-on-read with a
specified base, so we benefit on guest reads even when doing
block-stream of only part of the backing chain.
Several iotests are slightly modified due to the filter insertion.
iotests: 30: prepare to COR filter insertion by stream job
test_stream_parallel runs parallel stream jobs, intersecting so that
the top of one is the base of another. That's OK now, but it would be a
problem if we insert the filter, as one job would want to use another
job's filter as its above_base node.
The correct thing to do is to move to the new interface: a "bottom"
argument instead of base. This guarantees that the jobs' actions don't
intersect.
The code already doesn't freeze the base node, and we try to keep it
prepared for the situation when the base node is changed during the
operation. In other words, block-stream doesn't own its base node.
Let's introduce a new interface which should replace the current one
and which fits the code better. Specifying the bottom node instead of
the base, and requiring it to be a non-filter, gives us the following
benefits:
- it drops the difference between above_base and base_overlay, which
will be renamed to just bottom when the old interface is dropped
- it is a clean way to work with parallel streams/commits on the same
backing chain, which otherwise become a problem when we introduce a
filter for the stream job
- it is a cleaner interface. Nobody will be surprised that the base
node may disappear during block-stream when there is no mention of
"base" in the interface.
An example of the new invocation is sketched below.
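Compare the two invocations (a sketch; node names are invented):

    # Old interface: point at the base; everything above it is copied.
    vm.qmp('block-stream', job_id='stream0', device='top',
           base='base-node')
    # New interface: name the bottom-most node that is still copied;
    # it must not be a filter.
    vm.qmp('block-stream', job_id='stream1', device='top',
           bottom='mid-node')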
Stream in stream_prepare() calls bdrv_change_backing_file() to change
the backing file in the metadata of bs.
It may use either the backing-file parameter given by the user or just
take the filename of the base at job start.
The backing file format is determined from the base on job finish.
There are some problems with this design; we solve only two of them
with this patch:
1. Consider the scenario with backing-file unset. The current concept
of stream supports changing the base during the job (we don't freeze
the link to the base). So, we should not save the base filename at job
start; let's determine the name of the base on job finish instead.
2. Using the direct base to determine the filename and format is not
very good: the base node may be a filter, so its filename may be JSON,
and its format_name is not suitable for storing into qcow2 metadata as
the backing file format.
copy-on-read: skip non-guest reads if no copy needed
If the flag BDRV_REQ_PREFETCH is set, skip the idling read/write
operations in the COR driver. This can be taken into account when
optimizing the COR algorithms. At the moment, that check is made during
the block stream job.
Add the BDRV_REQ_PREFETCH flag to the supported_read_flags of the
COR-filter.
block: Modify the comment for the flag BDRV_REQ_PREFETCH as we are
going to use it alone and pass it to the COR-filter driver for further
processing.
block: include supported_read_flags into BDS structure
Add the new member supported_read_flags to the BlockDriverState
structure. It will control the flags set for copy-on-read operations.
Make the block generic layer evaluate supported read flags before they
go to a block driver.
Provide an API for COR-filter removal. Also, drop the filter child
permissions for an inactive state when the filter node is being
removed.
To insert the filter, the block generic layer function
bdrv_insert_node() can be used.
The new function bdrv_cor_filter_drop() may be considered an
intermediate solution until the QEMU permission update system has been
overhauled. Then we will be able to implement the API function
bdrv_remove_node() on the block generic layer.
Unfortunately, the commit "iotests: handle tmpfs" breaks running
iotests with -nbd -nocache, as _check_o_direct tries to create
$TEST_IMG.test_o_direct; but in the case of nbd, TEST_IMG is something
like nbd+unix:///... , and the test fails with the message:
qemu-img: nbd+unix:///?socket[...]test_o_direct: Protocol driver
'nbd' does not support image creation, and opening the image
failed: Failed to connect to '/tmp/tmp.[...]/nbd/test_o_direct': No
such file or directory
Peter Maydell [Tue, 26 Jan 2021 09:51:02 +0000 (09:51 +0000)]
Merge remote-tracking branch 'remotes/stefanberger/tags/pull-tpm-2021-01-25-1' into staging
Merge tpm 2021/01/25 v1
# gpg: Signature made Tue 26 Jan 2021 01:58:26 GMT
# gpg: using RSA key B818B9CADF9089C2D5CEC66B75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <[email protected]>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
Roman Bolshakov [Mon, 7 Dec 2020 06:43:52 +0000 (09:43 +0300)]
tpm: tpm_spapr: Remove unused tracepoint
Linking of qemu-system-ppc64 fails on macOS with dtrace enabled:
error: probe tpm_spapr_show_buffer doesn't exist
error: Could not register probes
ld: error creating dtrace DOF section for architecture x86_64
The failure is explained in 8c8ed03850208e4 ("net/colo: Match is-enabled
probe to tracepoint"). In short, an is-enabled probe can't be used
without a matching trace probe, and in this particular case the
tpm_util_show_buffer probe should be enabled to print the TPM buffer.
Peter Maydell [Mon, 25 Jan 2021 15:56:13 +0000 (15:56 +0000)]
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Mon 25 Jan 2021 09:05:51 GMT
# gpg: using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <[email protected]>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
net: checksum: Introduce fine control over checksum type
net: checksum: Add IP header checksum calculation
net: checksum: Skip fragmented IP packets
net: Fix handling of id in netdev_add and netdev_del
Peter Maydell [Mon, 25 Jan 2021 13:48:38 +0000 (13:48 +0000)]
Merge remote-tracking branch 'remotes/gkurz-gitlab/tags/9p-next-pull-request' into staging
This fixes a Coverity report and improves the fid reclaim logic.
# gpg: Signature made Mon 25 Jan 2021 09:37:28 GMT
# gpg: using RSA key B4828BAF943140CEF2A3491071D4D5E5822F73D6
# gpg: Good signature from "Greg Kurz <[email protected]>" [full]
# gpg: aka "Gregory Kurz <[email protected]>" [full]
# gpg: aka "[jpeg image of size 3330]" [full]
# Primary key fingerprint: B482 8BAF 9431 40CE F2A3 4910 71D4 D5E5 822F 73D6
* remotes/gkurz-gitlab/tags/9p-next-pull-request:
9pfs: Convert reclaim list to QSLIST
9pfs: Improve unreclaim loop
9pfs: Convert V9fsFidState::fid_list to QSIMPLEQ
9pfs: Convert V9fsFidState::clunked to bool
9pfs/proxy: Check return value of proxy_marshal()
Peter Maydell [Mon, 25 Jan 2021 11:52:00 +0000 (11:52 +0000)]
Merge remote-tracking branch 'remotes/philmd-gitlab/tags/sdmmc-20210124' into staging
SD/MMC patches
- Various improvements for SD cards in SPI mode (Bin Meng)
# gpg: Signature made Sun 24 Jan 2021 19:16:55 GMT
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <[email protected]>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* remotes/philmd-gitlab/tags/sdmmc-20210124:
hw/sd: sd.h: Cosmetic change of using spaces
hw/sd: ssi-sd: Use macros for the dummy value and tokens in the transfer
hw/sd: ssi-sd: Fix the wrong command index for STOP_TRANSMISSION
hw/sd: ssi-sd: Add a state representing Nac
hw/sd: ssi-sd: Suffix a data block with CRC16
util: Add CRC16 (CCITT) calculation routines
hw/sd: sd: Drop sd_crc16()
hw/sd: sd: Support CMD59 for SPI mode
hw/sd: ssi-sd: Fix incorrect card response sequence
Bin Meng [Fri, 11 Dec 2020 09:35:12 +0000 (17:35 +0800)]
net: checksum: Introduce fine control over checksum type
At present, net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per-type setting in
their buffer descriptors (BDs) to control which checksum should be
offloaded. To support such hardware behavior, introduce a 'csum_flag'
parameter to the net_checksum_calculate() API to allow fine control
over which type of checksum is calculated.
Existing users of this API are updated accordingly.
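Conceptually, the per-type control looks like this Python model (the
real API is C; the exact flag names here are an assumption):

    from enum import IntFlag

    class Csum(IntFlag):
        IP = 1
        TCP = 2
        UDP = 4
        ALL = IP | TCP | UDP

    def net_checksum_calculate(packet, csum_flag):
        if csum_flag & Csum.IP:
            pass  # fill in the IP header checksum
        if csum_flag & Csum.TCP:
            pass  # fill in the TCP checksum over the whole datagram
        if csum_flag & Csum.UDP:
            pass  # fill in the UDP checksum over the whole datagram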
Guishan Qin [Fri, 11 Dec 2020 09:35:11 +0000 (17:35 +0800)]
net: checksum: Add IP header checksum calculation
At present, net_checksum_calculate() only calculates the TCP/UDP
checksum in an IP packet and assumes the IP header checksum to be
provided by the software, e.g. the Linux kernel always calculates the
IP header checksum. However, this might not always be the case: for an
IP-checksum-offload-enabled stack like VxWorks, the IP header checksum
can be zero.
This adds the checksum calculation of the IP header.
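The IP header checksum itself is the standard ones'-complement sum,
e.g.:

    def ip_header_checksum(header: bytes) -> int:
        """RFC 791 checksum of an IPv4 header, which must have its
        checksum field (bytes 10-11) zeroed beforehand."""
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
        while total > 0xffff:  # fold the carries back in
            total = (total & 0xffff) + (total >> 16)
        return ~total & 0xffff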
Markus Carlstedt [Fri, 11 Dec 2020 09:35:10 +0000 (17:35 +0800)]
net: checksum: Skip fragmented IP packets
To calculate the TCP/UDP checksum, we need the whole datagram. Unless
the hardware has some logic to collect all fragments of a datagram
before calculating the checksum, this can only be done by the network
stack, which is normally the case for the NICs we have seen so far.
Skip these fragmented IP packets to avoid checksum corruption.
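Detecting a fragment needs only the flags/fragment-offset word of the
IP header, e.g.:

    def is_ip_fragment(header: bytes) -> bool:
        """True if MF is set or the fragment offset is non-zero."""
        frag = (header[6] << 8) | header[7]
        # 0x2000 is the MF (more fragments) bit, 0x1fff the offset mask.
        return (frag & 0x3fff) != 0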
net: Fix handling of id in netdev_add and netdev_del
CLI -netdev accumulates in option group "netdev".
Before commit 08712fcb85 "net: Track netdevs in NetClientState rather
than QemuOpt", netdev_add added to the option group, and netdev_del
removed from it, both HMP and QMP. Thus, every netdev had a
corresponding QemuOpts in this option group.
Commit 08712fcb85 dropped this for QMP netdev_add and for both flavors
of netdev_del. Now a netdev has a corresponding QemuOpts only when it
was created with the CLI or HMP. Two issues:
* QMP and HMP netdev_del can leave QemuOpts behind, breaking HMP
netdev_add. Reproducer:
$ qemu-system-x86_64 -S -display none -nodefaults -monitor stdio
QEMU 5.1.92 monitor - type 'help' for more information
(qemu) netdev_add user,id=net0
(qemu) info network
net0: index=0,type=user,net=10.0.2.0,restrict=off
(qemu) netdev_del net0
(qemu) info network
(qemu) netdev_add user,id=net0
upstream-qemu: Duplicate ID 'net0' for netdev
Try "help netdev_add" for more information
Fix by restoring the QemuOpts deletion in qmp_netdev_del(), but with
a guard, because the QemuOpts need not exist.
* QMP netdev_add loses its "no duplicate ID" check. Reproducer:
Fix by adding a duplicate ID check to net_client_init1() to replace
the lost one. The check is redundant for callers where QemuOpts
still checks, i.e. for CLI and HMP.
Peter Maydell [Sun, 24 Jan 2021 19:36:45 +0000 (19:36 +0000)]
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210124' into staging
Fix tcg constant temp overflow.
Fix running during atomic single-step.
Partial support for apple silicon.
Cleanups for accel/tcg.
# gpg: Signature made Sun 24 Jan 2021 18:08:57 GMT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "[email protected]"
# gpg: Good signature from "Richard Henderson <[email protected]>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth-gitlab/tags/pull-tcg-20210124:
tcg: Restart code generation when we run out of temps
tcg: Toggle page execution for Apple Silicon
accel/tcg: Restrict cpu_io_recompile() from other accelerators
accel/tcg: Declare missing cpu_loop_exit*() stubs
accel/tcg: Restrict tb_gen_code() from other accelerators
accel/tcg: Move tb_flush_jmp_cache() to cputlb.c
accel/tcg: Make cpu_gen_init() static
tcg: Optimize inline dup_const for MO_64
qemu/compiler: Split out qemu_build_not_reached_always
tcg: update the cpu running flag in cpu_exec_step_atomic
Bin Meng [Sat, 23 Jan 2021 10:40:00 +0000 (18:40 +0800)]
hw/sd: ssi-sd: Fix the wrong command index for STOP_TRANSMISSION
This fixes the wrong command index for STOP_TRANSMISSION, the
command required to interrupt the multiple block read command, in the
old code. It should be CMD12 (0x4c), not CMD13 (0x4d).
Bin Meng [Sat, 23 Jan 2021 10:39:59 +0000 (18:39 +0800)]
hw/sd: ssi-sd: Add a state representing Nac
Per the "Physical Layer Specification Version 8.00" chapter 7.5.2,
"Data Read", there is a minimum 8 clock cycles (Nac) after the card
response and before data block shows up on the data out line. This
applies to both single and multiple block read operations.
Current implementation of single block read already satisfies the
timing requirement as in the RESPONSE state after all responses are
transferred the state remains unchanged. In the next 8 clock cycles
it jumps to DATA_START state if data is ready.
However we need an explicit state when expanding our support to
multiple block read in the future. Let's add a new state PREP_DATA
explicitly in the ssi-sd state machine to represent Nac.
Note we don't change the single block read state machine to make it
jump from the RESPONSE state to the DATA_START state, as that would
effectively generate a 16-clock-cycle Nac, which might not be safe. As
the spec says the maximum Nac shall be calculated from several fields
encoded in the CSD register, we don't want to complicate things by
updating the CSD to ensure our Nac stays within range.
Bin Meng [Sat, 23 Jan 2021 10:39:58 +0000 (18:39 +0800)]
hw/sd: ssi-sd: Suffix a data block with CRC16
Per the SD spec, a valid data block is suffixed with a 16-bit CRC
generated by the standard CCITT polynomial x^16+x^12+x^5+1. This part
is currently missing in the ssi-sd state machine. Without it, all
data block transfers fail in guest software because the expected
CRC16 is missing on the data out line.
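For reference, a CRC with that polynomial (0x1021, and a zero initial
value, as used for SD data blocks) can be computed like this:

    def crc16_ccitt(data: bytes, crc: int = 0) -> int:
        """Bitwise CRC16 with polynomial x^16+x^12+x^5+1 (0x1021)."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
                crc &= 0xffff
        return crc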
Bin Meng [Sat, 23 Jan 2021 10:39:56 +0000 (18:39 +0800)]
hw/sd: sd: Drop sd_crc16()
commit f6fb1f9b319f ("sdcard: Correct CRC16 offset in sd_function_switch()")
changed the 16-bit CRC to be stored at offset 64. In fact, this CRC
calculation is completely wrong. From the original code, it wants
to calculate the CRC16 of the first 64 bytes of sd->data[]; however,
passing 64 as the `width` to sd_crc16() actually counts 256 bytes
starting from `message` for the CRC16 calculation, which is not
what we want.
Besides that, it seems the existing sd_crc16() algorithm does not match
the SD spec, which says the CRC16 is the CCITT one, but the calculation
does not produce the expected result. It turns out the CRC16 was never
transferred outside the sd core, as in sd_read_byte() we see:
if (sd->data_offset >= 64)
sd->state = sd_transfer_state;
tcg: Restart code generation when we run out of temps
Some large translation blocks can generate so many unique
constants that we run out of temps to hold them. In this
case, longjmp back to the start of code generation and
restart with a smaller translation block.
Bin Meng [Sat, 23 Jan 2021 10:39:55 +0000 (18:39 +0800)]
hw/sd: sd: Support CMD59 for SPI mode
After the card is put into SPI mode, CRC check for all commands
including CMD0 will be done according to CMD59 setting. But this
command is currently unimplemented. Simply allow the decoding of
CMD59, but the CRC remains unchecked.
Per the "Physical Layer Specification Version 8.00" chapter 7.5.1,
"Command/Response", there is a minimum 8 clock cycles (Ncr) before
the card response shows up on the data out line. However current
implementation jumps directly to the sending response state after
all 6 bytes command is received, which is a spec violation.
Add a new state PREP_RESP in the ssi-sd state machine to handle it.
* remotes/bonzini-gitlab/tags/for-upstream: (30 commits)
qemu-option: warn for short-form boolean options
qemu-option: move help handling to get_opt_name_value
qemu-option: clean up id vs. list->merge_lists
vnc: support "-vnc help"
qmp: remove deprecated "change" command
hmp: remove "change vnc TARGET" command
acceptance: switch to QMP change-vnc-password command
meson.build: Detect bzip2 program
meson.build: Declare global edk2_targets / install_edk2_blobs variables
meson: Add a section header for library dependencies
meson: Display crypto-related information altogether
meson: Display block layer information altogether
meson: Display accelerators and selected targets altogether
meson: Summarize compilation-related information altogether
meson: Summarize overall features altogether
meson: Display host binaries information altogether
meson: Summarize information related to directories first
meson: convert wixl detection to Meson
nsis: adjust for new MinGW paths
meson: Declare have_virtfs_proxy_helper in main meson.build
...
Roman Bolshakov [Wed, 13 Jan 2021 03:28:07 +0000 (06:28 +0300)]
tcg: Toggle page execution for Apple Silicon
Pages can't be both write and executable at the same time on Apple
Silicon. macOS provides public API to switch write protection [1] for
JIT applications, like TCG.
cpu_loop_exit*() functions are declared in accel/tcg/cpu-exec-common.c
and are not available when the TCG accelerator is not built. Add stubs
so that linking without TCG succeeds.
Problematic files:
- hw/semihosting/console.c in qemu_semihosting_console_inc()
- hw/ppc/spapr_hcall.c in h_confer()
- hw/s390x/ipl.c in s390_ipl_reset_request()
- hw/misc/mips_itu.c
Paolo Bonzini [Mon, 9 Nov 2020 09:13:39 +0000 (04:13 -0500)]
qemu-option: warn for short-form boolean options
Options such as "server" or "nowait", which are commonly found in
-chardev, are sugar for "server=on" and "wait=off". This is quite
surprising and also does not have any notion of typing attached. It is
even possible to do "-device e1000,noid" and get a device with
"id=off".
Deprecate it and print a warning when it is encountered. In general,
this short form for boolean options only seems to be in wide use for
-chardev and -spice.
Paolo Bonzini [Tue, 3 Nov 2020 13:48:11 +0000 (08:48 -0500)]
qemu-option: move help handling to get_opt_name_value
Right now, help options are parsed normally and then checked
specially in opt_validate, but only if coming from
qemu_opts_parse_noisily. has_help_option does the check on its own.
opt_validate() has two callers: qemu_opt_set(), which passes null and is
therefore unaffected, and opts_do_parse(), which is affected.
opts_do_parse() is called by qemu_opts_do_parse(), which passes null and
is therefore unaffected, and opts_parse().
opts_parse() is called by qemu_opts_parse() and qemu_opts_set_defaults(),
which pass null and are therefore unaffected, and
qemu_opts_parse_noisily().
Move the check from opt_validate to the parsing workhorse of QemuOpts,
get_opt_name_value. This will come in handy in the next patch, which
will raise a warning for "-object memory-backend-ram,share" ("flag" option
with no =on/=off part) but not for "-object memory-backend-ram,help".
As a result:
- opts_parse and opts_do_parse do not return an error anymore
when help is requested; qemu_opts_parse_noisily does not have
to work around that anymore.
- various crazy ways to request help are not recognized anymore:
- "help=..."
- "nohelp" (sugar for "help=off")
- "?=..."
- "no?" (sugar for "?=off")
- "help" would be recognized as help request even if there is a (foolishly
named) parameter "help". No such parameters exist, though.
qemu_boot_opts ("-boot")
    in hw/nvram/fw_cfg.c and hw/s390x/ipl.c:
        QTAILQ_FIRST(&qemu_find_opts("boot-opts")->head)
    in softmmu/vl.c:
        qemu_opts_find(qemu_find_opts("boot-opts"), NULL)
qemu_name_opts ("-name")
    qemu_opts_foreach->parse_name
        parse_name does not use id
i.e. they don't need an id. Sometimes its presence is ignored
(e.g. when using qemu_opts_foreach), sometimes all the options
with the id are skipped, sometimes only the first option on the
command line is considered. -boot does two different things
depending on who's looking at the options.
With this patch we just forbid id on merge-lists QemuOptsLists; if the
command line still works, it has the same semantics as before.
qemu_opts_create's fail_if_exists parameter is now unnecessary:
- it is unused if id is NULL
- opts_parse only passes false if reached from qemu_opts_set_defaults,
in which case this patch enforces that id must be NULL
- other callers that can pass a non-NULL id always set it to true
Assert that it is true in the only case where "fail_if_exists" matters,
i.e. "id && !lists->merge_lists". This means that if an id is present,
duplicates are always forbidden, which was already the status quo.
Discounting the case that aborts as it's not user-controlled (it's
"just" a matter of inspecting qemu_opts_create callers), the paths
through qemu_opts_create can be summarized as:
- merge_lists = true: singleton opts with NULL id; non-NULL id fails
- merge_lists = false: always return new opts; non-NULL id fails if dup
Paolo Bonzini [Wed, 20 Jan 2021 14:42:33 +0000 (15:42 +0100)]
hmp: remove "change vnc TARGET" command
The HMP command "change vnc TARGET" is messy:
- it takes an ugly shortcut to determine if the option has an "id",
with incorrect results if "id=" is not preceded by an unescaped
comma.
- it deletes the existing QemuOpts and does not try to roll back
if the parsing fails (which is not causing problems, but only due to
how VNC options are parsed)
- because it uses the same parsing function as "-vnc", it forces
the latter to not support "-vnc help".
On top of this, it uses a deprecated QMP command, thus getting in
the way of removing that QMP command. Since the use case for the
command is not clear, just remove it and send "change vnc password"
directly to the QMP "change-vnc-password" command.