Peter Maydell [Fri, 14 Jul 2017 15:13:29 +0000 (16:13 +0100)]
Merge remote-tracking branch 'remotes/berrange/tags/pull-sockets-2017-07-11-3' into staging
Merge sockets 2017/07/11 v3
# gpg: Signature made Fri 14 Jul 2017 16:09:03 BST
# gpg: using RSA key 0xBE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <[email protected]>"
# gpg: aka "Daniel P. Berrange <[email protected]>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/pull-sockets-2017-07-11-3:
io: preserve ipv4/ipv6 flags when resolving InetSocketAddress
sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
sockets: don't block IPv4 clients when listening on "::"
sockets: ensure we can bind to both ipv4 & ipv6 separately
io: preserve ipv4/ipv6 flags when resolving InetSocketAddress
The original InetSocketAddress struct may have has_ipv4 and
has_ipv6 fields set, which will control both the ai_family
used during DNS resolution, and later use of the V6ONLY
flag.
Currently the standalone DNS resolver code drops the
has_ipv4 & has_ipv6 flags after resolving, which means
the later bind() code won't correctly set V6ONLY.
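As a rough, standalone illustration of the fix's intent (the struct below is a simplified stand-in for the QAPI-generated InetSocketAddress, not QEMU's actual resolver code):

    #include <stdbool.h>

    /* Simplified stand-in for the QAPI-generated InetSocketAddress. */
    typedef struct InetAddrSketch {
        char *host;
        char *port;
        bool has_ipv4, ipv4;
        bool has_ipv6, ipv6;
    } InetAddrSketch;

    /* After DNS resolution, copy the user's ipv4/ipv6 preferences onto each
     * resolved address so the later bind() code can still decide whether to
     * set IPV6_V6ONLY. */
    static void preserve_ip_version_flags(InetAddrSketch *resolved,
                                          const InetAddrSketch *original)
    {
        resolved->has_ipv4 = original->has_ipv4;
        resolved->ipv4 = original->ipv4;
        resolved->has_ipv6 = original->has_ipv6;
        resolved->ipv6 = original->ipv6;
    }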
sockets: ensure we don't accept IPv4 clients when IPv4 is disabled
Currently if you disable listening on IPv4 addresses, via the
CLI flag ipv4=off, we still mistakenly accept IPv4 clients via
the IPv6 listener socket due to IPV6_V6ONLY flag being unset.
We must ensure IPV6_V6ONLY is always set if ipv4=off.
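A minimal standalone sketch of the core of the fix (not the actual util/qemu-sockets.c code): with ipv4=off, the IPv6 listener must have IPV6_V6ONLY enabled so IPv4-mapped clients are refused:

    #include <netinet/in.h>
    #include <stdbool.h>
    #include <sys/socket.h>

    /* Returns 0 on success, -1 on error (errno set by setsockopt()). */
    static int apply_v6only(int listen_fd, bool ipv4_disabled)
    {
        int on = ipv4_disabled ? 1 : 0;

        /* With ipv4=off, refuse IPv4-mapped clients on the IPv6 socket. */
        return setsockopt(listen_fd, IPPROTO_IPV6, IPV6_V6ONLY,
                          &on, sizeof(on));
    }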
sockets: don't block IPv4 clients when listening on "::"
When inet_parse() parses the hostname, it forces the
has_ipv6 & ipv6 flags if the address contains a ":". This
means that if the user had set the ipv4=on flag, to try to
restrict the listener to just ipv4, an error would not have
been raised. E.g.
-incoming tcp:[::]:9000,ipv4
should have raised an error because listening for IPv4
on "::" is a non-sensical combination. With this removed,
we now call getaddrinfo() on "::" passing PF_INET and
so getaddrinfo reports an error about the hostname being
incompatible with the requested protocol:
qemu-system-x86_64: -incoming tcp:[::]:9000,ipv4: address resolution
failed for :::9000: Address family for hostname not supported
Likewise it is explicitly setting the has_ipv4 & ipv4
flags when the address contains only digits + '.'. This
has no ill-effect, but also has no benefit, so is removed.
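A small standalone program (illustrative only, not QEMU code) showing why getaddrinfo() now does the policing for us: asking for an IPv4-only lookup of "::" fails at resolution time:

    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        struct addrinfo hints = {
            .ai_family = AF_INET,        /* what ipv4=on restricts us to */
            .ai_socktype = SOCK_STREAM,
        };
        struct addrinfo *res = NULL;
        int rc = getaddrinfo("::", "9000", &hints, &res);

        if (rc != 0) {
            /* e.g. "Address family for hostname not supported" */
            printf("resolution failed: %s\n", gai_strerror(rc));
            return 1;
        }
        freeaddrinfo(res);
        return 0;
    }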
sockets: ensure we can bind to both ipv4 & ipv6 separately
When binding to an IPv6 socket we currently force the
IPV6_V6ONLY flag to off. This means that the IPv6 socket
will accept both IPv4 & IPv6 sockets when QEMU is launched
with something like
-vnc :::1
While this is good for that case, it is bad for other
cases. For example if an empty hostname is given,
getaddrinfo resolves it to 2 addresses 0.0.0.0 and ::,
in that order. We will thus bind to 0.0.0.0 first, and
then fail to bind to :: on the same port. The same
problem can happen if any other hostname lookup causes
the IPv4 address to be reported before the IPv6 address.
When we get an IPv6 bind failure, we should re-try the
same port, but with IPV6_V6ONLY turned on again, to
avoid a clash with any IPv4 listener.
This ensures that
-vnc :1
will bind successfully to both 0.0.0.0 and ::, and also
prevents
-vnc :1,to=2
from mistakenly using a 2nd port for the :: listener.
This is a regression due to commit 396f935 "ui: add ability to
specify multiple VNC listen addresses".
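A hedged, standalone sketch of the retry idea described above (simplified; the real change lives in QEMU's socket listening helpers): if binding the IPv6 wildcard fails because the port is already taken by an IPv4 listener, enable IPV6_V6ONLY and retry the same port:

    #include <errno.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int bind_ipv6_with_v6only_fallback(int fd,
                                              const struct sockaddr_in6 *sa)
    {
        if (bind(fd, (const struct sockaddr *)sa, sizeof(*sa)) == 0) {
            return 0;
        }
        if (errno == EADDRINUSE) {
            int on = 1;

            /* 0.0.0.0 may already hold this port: restrict this socket to
             * IPv6 only and try the same port once more. */
            if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY,
                           &on, sizeof(on)) == 0) {
                return bind(fd, (const struct sockaddr *)sa, sizeof(*sa));
            }
        }
        return -1;
    }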
Peter Maydell [Fri, 14 Jul 2017 13:19:35 +0000 (14:19 +0100)]
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20170714' into staging
s390x/kvm/migration/cpumodel: fixes, enhancements and cleanups
- add a network boot rom for s390 (Thomas Huth)
- migration of storage attributes like the CMMA used/unused state
- PCI related enhancements - full support for aen, ais and zpci
- migration support for css with vmstates (Halil Pasic)
- cpu model enhancements for cpu features
- guarded storage support
# gpg: Signature made Fri 14 Jul 2017 11:33:04 BST
# gpg: using RSA key 0x117BBC80B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <[email protected]>"
# Primary key fingerprint: F922 9381 A334 08F9 DBAB FBCA 117B BC80 B5A6 1C7C
* remotes/borntraeger/tags/s390x-20170714: (40 commits)
s390x/gdb: add gs registers
s390x/arch_dump: also dump guarded storage control block
s390x/kvm: enable guarded storage
s390x/kvm: Enable KSS facility for nested virtualization
s390x/cpumodel: add esop/esop2 to z12 model
s390x/cpumodel: we are always in zarchitecture mode
s390x/cpumodel: wire up new hardware features
s390x/flic: migrate ais states
s390x/cpumodel: add zpci, aen and ais facilities
s390x: initialize cpu firstly
pc-bios/s390: rebuild s390-ccw.img
pc-bios/s390: add s390-netboot.img
pc-bios/s390-ccw: Link libnet into the netboot image and do the TFTP load
pc-bios/s390-ccw: Add virtio-net driver code
pc-bios/s390-ccw: Add core files for the network bootloading program
roms/SLOF: Update submodule to latest status
pc-bios/s390-ccw: Add code for virtio feature negotiation
pc-bios/s390-ccw: Remove unused structs from virtio.h
pc-bios/s390-ccw: Move byteswap functions to a separate header
pc-bios/s390-ccw: Add a write() function for stdio
...
* remotes/bonzini/tags/for-upstream: (55 commits)
spapr_rng: Convert to DEFINE_PROP_LINK
cpu: Convert to DEFINE_PROP_LINK
mips_cmgcr: Convert to DEFINE_PROP_LINK
ivshmem: Convert to DEFINE_PROP_LINK
dimm: Convert to DEFINE_PROP_LINK
virtio-crypto: Convert to DEFINE_PROP_LINK
virtio-rng: Convert to DEFINE_PROP_LINK
virtio-scsi: Convert to DEFINE_PROP_LINK
virtio-blk: Convert to DEFINE_PROP_LINK
qdev: Add const qualifier to PropertyInfo definitions
qmp: Use ObjectProperty.type if present
qdev: Introduce DEFINE_PROP_LINK
qdev: Introduce PropertyInfo.create
qom: enforce readonly nature of link's check callback
translate-all: remove redundant !tcg_enabled check in dump_exec_info
vl: fix breakage of -tb-size
nbd: Implement NBD_INFO_BLOCK_SIZE on client
nbd: Implement NBD_INFO_BLOCK_SIZE on server
nbd: Implement NBD_OPT_GO on client
nbd: Implement NBD_OPT_GO on server
...
Fan Zhang [Wed, 15 Feb 2017 03:47:49 +0000 (04:47 +0100)]
s390x/kvm: enable guarded storage
Introduce guarded storage support for KVM guests on s390.
We need to enable the capability, extend machine check validity,
sigp store-additional-status-at-address, and migration.
The feature is fenced for older machine type versions.
Jason J. Herne [Mon, 10 Apr 2017 13:39:00 +0000 (09:39 -0400)]
s390x/cpumodel: add esop/esop2 to z12 model
Add esop and esop2 features to z12 model where esop2 was originally
introduced. Disable esop and esop2 when using compatibility machine
v2.9 or earlier.
Jason J. Herne [Thu, 1 Sep 2016 08:56:49 +0000 (10:56 +0200)]
s390x/cpumodel: we are always in zarchitecture mode
In QEMU, a guest VCPU has always started in z/Architecture mode and has
never been able to leave it. Now we have an architected way of showing this
condition.
The SIGP SET ARCHITECTURE instruction is simply rejected. Linux as a guest
seems not to care about the return value, which is a good thing.
The new handling is just like already being in z/Architecture mode.
We'll not try to fake the absence of this facility, but we still don't
indicate the facility in case some strange CPU model turned z/Architecture
off completely (which doesn't work either way, but lets us see how a
guest would react to the lack of this facility).
Yi Min Zhao [Tue, 16 May 2017 10:58:44 +0000 (18:58 +0800)]
s390x/flic: migrate ais states
During migration we should transfer ais states to the target guest.
This patch introduces a subsection to kvm_s390_flic_vmstate and a new
vmsd for qemu_flic. The ais states need to be migrated only when
ais is supported.
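As a rough sketch of what such a subsection usually looks like in QEMU's vmstate API (illustrative only; the field names, types and the .needed predicate here are assumptions based on the description above, not the actual patch):

    /* Fragment for use inside QEMU (e.g. hw/intc/s390_flic.c); it relies on
     * migration/vmstate.h and the flic state structure. */
    static bool ais_needed(void *opaque)
    {
        QEMUS390FLICState *flic = opaque;

        /* Only send the subsection when AIS is supported. */
        return flic->ais_supported;
    }

    static const VMStateDescription vmstate_qemu_flic_ais = {
        .name = "s390-flic/ais",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = ais_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT8(simm, QEMUS390FLICState),  /* assumed uint8_t */
            VMSTATE_UINT8(nimm, QEMUS390FLICState),
            VMSTATE_END_OF_LIST()
        }
    };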
Yi Min Zhao [Wed, 14 Jun 2017 05:25:58 +0000 (13:25 +0800)]
s390x/cpumodel: add zpci, aen and ais facilities
zPCI instructions and facilities are available since IBM zEnterprise
EC12. To support z/PCI in QEMU we enable zpci, aen and ais facilities
starting with zEC12 GA1. We always set the zpci and aen bits in the max cpu
model; later they might be switched off when the real cpu model is applied.
For the ais bit, we only provide it in the full cpu model beginning with
zEC12 and defer its enablement in the default cpu model to a later point
in time. At the same time, disable them for 2.9 and older machines.
Now that the AIS facility is introduced, we can check whether it is enabled
and initialize flic->ais_supported with the real value.
4b996d0 pc-bios/s390-ccw: Link libnet into the netboot image and do the TFTP load
e6879a6 pc-bios/s390-ccw: Add virtio-net driver code
766500f pc-bios/s390-ccw: Add core files for the network bootloading program
f807e55 pc-bios/s390-ccw: Add code for virtio feature negotiation
b4e3b4f pc-bios/s390-ccw: Remove unused structs from virtio.h
dd3dc5e pc-bios/s390-ccw: Move byteswap functions to a separate header
a20b4fe pc-bios/s390-ccw: Add a write() function for stdio
262e07c pc-bios/s390-ccw: Move virtio-block related functions into a separate file
7438d32 pc-bios/s390-ccw: Move ebc2asc to sclp.c
8760bad pc-bios/s390-ccw: Move libc functions to separate header
c68f450 pc-bios/s390-ccw: use STRIP variable in Makefile
It's already possible to do a network boot of an s390x guest with an
external netboot image based on a Linux installation, but it would
be much more convenient if the s390-ccw firmware supported network
booting right out of the box, without the need to assemble such an
external image first.
This is an s390-netboot.img that can be used for network booting.
You can download a combined kernel + initrd image via TFTP
by starting QEMU for example with:
Thomas Huth [Wed, 12 Jul 2017 12:49:51 +0000 (14:49 +0200)]
pc-bios/s390-ccw: Add core files for the network bootloading program
This is just a preparation for the next steps: Add a makefile and a
stripped down copy of pc-bios/s390-ccw/main.c as a basis for the network
bootloader program, linked against the libc from SLOF already (which we
will need for SLOF's libnet). The networking code is not included yet.
Thomas Huth [Wed, 12 Jul 2017 12:49:46 +0000 (14:49 +0200)]
pc-bios/s390-ccw: Add a write() function for stdio
The stdio functions from the SLOF libc need a write() function for
printing text to stdout/stderr. Let's implement this function by
refactoring the code from sclp_print().
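A hedged sketch of the kind of write() described above (standalone apart from sclp_print(), which is assumed here to take a NUL-terminated string; not the actual pc-bios/s390-ccw code):

    #include <string.h>
    #include <sys/types.h>

    void sclp_print(const char *str);   /* provided by the firmware SCLP code */

    ssize_t write(int fd, const void *str, size_t len)
    {
        char buf[256];
        const char *p = str;
        size_t remaining = len;

        (void)fd;                        /* stdout and stderr both go to SCLP */

        while (remaining > 0) {
            size_t chunk = remaining < sizeof(buf) - 1 ? remaining
                                                       : sizeof(buf) - 1;
            memcpy(buf, p, chunk);
            buf[chunk] = '\0';           /* sclp_print() expects a C string */
            sclp_print(buf);
            p += chunk;
            remaining -= chunk;
        }
        return len;
    }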
Thomas Huth [Wed, 12 Jul 2017 12:49:44 +0000 (14:49 +0200)]
pc-bios/s390-ccw: Move ebc2asc to sclp.c
We will later need this array in a file that we will link to the
netboot code, too. Since there is some ebcdic conversion done
in sclp_get_loadparm_ascii(), the sclp.c file seems to be a good
candidate.
Thomas Huth [Wed, 12 Jul 2017 12:49:43 +0000 (14:49 +0200)]
pc-bios/s390-ccw: Move libc functions to separate header
The upcoming netboot code will use the libc from SLOF. To be able
to still use s390-ccw.h there, the libc related functions in this
header have to be moved to a different location.
And while we're at it, remove the duplicate memcpy() function from
sclp.c.
Halil Pasic [Tue, 11 Jul 2017 14:54:41 +0000 (16:54 +0200)]
s390x/css: use SubchDev.orb
Instead of passing around a pointer to ORB let us simplify some
function signatures by using the previously introduced ORB saved at the
subchannel (SubchDev).
Halil Pasic [Tue, 11 Jul 2017 14:54:39 +0000 (16:54 +0200)]
s390x/css: add ORB to SubchDev
Since we are going to need a migration compatibility breaking change to
activate ChannelSubSys migration let us use the opportunity to introduce
ORB to the SubchDev before that (otherwise we would need separate
handling e.g. a compat property).
The ORB will be useful for implementing IDA, or async handling of
subchannel work.
Halil Pasic [Tue, 11 Jul 2017 14:54:38 +0000 (16:54 +0200)]
s390x/css: add missing css state conditionally
Although we have recently vmstatified the migration of some css
infrastructure, for some css entities there is still state to be
migrated left, because the focus was keeping migration stream
compatibility (that is basically everything as-is).
Let us add vmstate helpers and extend existing vmstate descriptions so
that we have everything we need. Let us guard the added state via
css_migration_enabled, so we keep the compatible behavior if css
migration is disabled.
Let's also annotate the bits which do not need to be migrated for better
readability.
Halil Pasic [Tue, 11 Jul 2017 14:54:37 +0000 (16:54 +0200)]
s390x: add css_migration_enabled to machine class
Currently the migration of the channel subsystem (css) is only partial
and is done by the virtio ccw proxies -- the only migratable css devices
existing at the moment.
With the current work on emulated and passthrough devices we need to
decouple the migration of the channel subsystem state from virtio ccw,
and have a separate section for it. A new section however necessarily
breaks the migration compatibility.
So let us introduce a switch at the machine class, and put it in 'off'
state for now. We will turn the switch 'on' for future machines once all
preparations are met. For compatibility machines the switch will stay
'off'.
Halil Pasic [Tue, 11 Jul 2017 14:54:36 +0000 (16:54 +0200)]
s390x: add helper get_machine_class
We will need the machine class at machine initialization time, so the
usual way via qdev won't do. Let's cache the machine class and also use
the default values of the base machine for capability discovery.
Yi Min Zhao [Fri, 17 Feb 2017 07:26:48 +0000 (15:26 +0800)]
s390x/css: update css_adapter_interrupt
Let's use the new inject_airq callback of flic to inject adapter
interrupts. For the kvm case, if the kernel flic doesn't support the new
interface, the irq routine remains unchanged. For the non-kvm case,
qemu-flic handles the suppression process.
Yi Min Zhao [Fri, 17 Feb 2017 07:00:59 +0000 (15:00 +0800)]
s390x/flic: introduce inject_airq callback
Let's introduce a specialized way to inject adapter interrupts that,
unlike the common interrupt injection method, allows the characteristics
of the adapter to be taken into account.
For adapters subject to AIS facility:
- for non-kvm case, we handle the suppression for a given ISC in QEMU.
- for kvm case, we pass adapter id to kvm to do airq injection.
Add tracepoints for suppressed airq and suppressing airq.
Fei Li [Fri, 17 Feb 2017 06:23:44 +0000 (14:23 +0800)]
s390x/flic: introduce modify_ais_mode callback
In order to emulate the adapter interruption suppression (AIS)
facility properly, the guest needs to be able to modify the AIS mask.
Interrupt suppression will be handled via the flic (for kvm, via a
recently introduced kernel backend; for !kvm, in the flic code), so
let's introduce a method to change the mode via the flic interface.
We introduce the 'simm' and 'nimm' fields to QEMUS390FLICState
to store interruption modes for each ISC. Each bit in 'simm' and
'nimm' targets one ISC, and together they indicate three modes:
ALL-Interruptions, SINGLE-Interruption and NO-Interruptions. This
interface can initiate most transitions between the states; transition
from SINGLE-Interruption to NO-Interruptions via adapter interrupt
injection will be introduced in a following patch. The meaningful
combinations are as follows:
interruption mode | simm bit | nimm bit
------------------|----------|----------
ALL               |    0     |    0
SINGLE            |    1     |    0
NO                |    1     |    1
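A standalone sketch of how those bit pairs could be decoded per ISC (the bit layout, bit N = ISC N, is an assumption for illustration, not the actual flic code):

    #include <stdint.h>

    enum ais_mode { AIS_MODE_ALL, AIS_MODE_SINGLE, AIS_MODE_NO };

    static enum ais_mode ais_mode_for_isc(uint8_t simm, uint8_t nimm, int isc)
    {
        uint8_t mask = 1u << isc;

        if (!(simm & mask)) {
            return AIS_MODE_ALL;                    /* simm=0, nimm=0 */
        }
        return (nimm & mask) ? AIS_MODE_NO          /* simm=1, nimm=1 */
                             : AIS_MODE_SINGLE;     /* simm=1, nimm=0 */
    }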
Fei Li [Tue, 7 Mar 2017 03:07:44 +0000 (04:07 +0100)]
s390x: add flags field for registering I/O adapter
Introduce a new 'flags' field to IoAdapter to contain further
characteristics of the adapter, like whether the adapter is subject to
adapter-interruption suppression.
For the kvm case, pass this value in the 'flags' field when
registering an adapter.
Jason J. Herne [Wed, 22 Feb 2017 15:32:20 +0000 (10:32 -0500)]
s390x/cpumodel: clean up spacing and comments
Clean up spacing and add comments to clarify difference between base, full and
default models.
Not having spacing around the model definitions in gen-features.c is
particularly frustrating as the reader tends to misinterpret which model they
are looking at or editing.
Claudio Imbrenda [Mon, 15 Aug 2016 16:44:04 +0000 (18:44 +0200)]
s390x/migration: Monitor commands for storage attributes
Add an "info" monitor command to non-destructively inspect the state of
the storage attributes of the guest, and a normal command to toggle
migration mode (useful for debugging).
Unlike the usual object_property_add_link() invocations in other
devices, ivshmem checks the "is mapped" state of the backend in addition
to qdev_prop_allow_set_link_before_realize. To convert it without
specializing DEFINE_PROP_LINK which always uses the qdev callback, move
the extra check to device realize time.
Unlike the usual object_property_add_link() invocations in other
devices, dimm checks the "is mapped" state of the backend in addition to
qdev_prop_allow_set_link_before_realize. To convert it without
specializing DEFINE_PROP_LINK which always uses the qdev general check
callback, move the extra check to device realize time.
Unlike other object_property_add_link() occurrences in virtio devices,
virtio-crypto checks the "in use" state of the linked backend object in
addition to qdev_prop_allow_set_link_before_realize. To convert it
without needing to specialize DEFINE_PROP_LINK which always uses the
qdev callback, move the "in use" check to device realize time.
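A hedged sketch of the overall conversion pattern described in the paragraphs above (illustrative names, not the actual ivshmem/dimm/virtio-crypto code): the link becomes a qdev property, and the extra "is the backend already in use?" check moves into the device's realize function:

    /* Fragment for use inside QEMU; MyDeviceState/MY_DEVICE are hypothetical. */
    static Property my_dev_properties[] = {
        DEFINE_PROP_LINK("memdev", MyDeviceState, hostmem,
                         TYPE_MEMORY_BACKEND, HostMemoryBackend *),
        DEFINE_PROP_END_OF_LIST(),
    };

    static void my_dev_realize(DeviceState *dev, Error **errp)
    {
        MyDeviceState *s = MY_DEVICE(dev);

        /* Check previously done in the link's set-before-realize callback. */
        if (host_memory_backend_is_mapped(s->hostmem)) {
            error_setg(errp, "memory backend is already in use");
            return;
        }
        /* ... rest of realize ... */
    }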
Igor Mammedov [Fri, 14 Jul 2017 02:14:50 +0000 (10:14 +0800)]
qom: enforce readonly nature of link's check callback
A link's check callback is supposed to verify/permit setting the link;
however, currently nothing restricts the callback from misusing this
and modifying the target object from within.
Make sure that read-only semantics are checked by the compiler
to prevent the callback's misuse.
translate-all: remove redundant !tcg_enabled check in dump_exec_info
This check is redundant because it is already performed by the only
caller of dump_exec_info -- the caller was updated by b7da97eef
("monitor: Check whether TCG is enabled before running the "info jit"
code").
Checking twice wouldn't necessarily be too bad, but here the check also
returns with tb_lock held. So we can either do the check before tb_lock is
acquired, or just get rid of it. Given that it is redundant, I am going
for the latter option.
Commit e7b161d573 ("vl: add tcg_enabled() for tcg related code") adds
a check to exit the program when !tcg_enabled() while parsing the -tb-size
flag.
It turns out that when the -tb-size flag is evaluated, tcg_enabled() can
only return 0, since it is set (or not) much later by configure_accelerator().
Fix it by unconditionally exiting if the flag is passed to a QEMU binary
built with !CONFIG_TCG.
Eric Blake [Fri, 7 Jul 2017 20:30:49 +0000 (15:30 -0500)]
nbd: Implement NBD_INFO_BLOCK_SIZE on client
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server whether it intends to
obey block sizes.
When using the block layer as the client, we will obey block
sizes; but when used as 'qemu-nbd -c' to hand off to the
kernel nbd module as the client, we are still waiting for the
kernel to implement a way for us to learn if it will honor
block sizes (perhaps by an addition to sysfs, rather than an
ioctl), as well as any way to tell the kernel what additional
block sizes to obey (NBD_SET_BLKSIZE appears to be accurate
for the minimum size, but preferred and maximum sizes would
probably be new ioctl()s), so until then, we need to make our
request for block sizes conditional.
When using ioctl(NBD_SET_BLKSIZE) to hand off to the kernel,
use the minimum block size as the sector size if it is larger
than 512, which also has the nice effect of cooperating with
(non-qemu) servers that don't do read-modify-write when
exposing a block device with 4k sectors; it might also allow
us to visit a file larger than 2T on a 32-bit kernel.
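A standalone illustration of the hand-off mentioned above (not qemu-nbd's actual code): tell the kernel nbd driver to use the advertised minimum block size as the sector size when it is larger than 512:

    #include <linux/nbd.h>
    #include <sys/ioctl.h>

    /* Returns the ioctl() result; error handling is left to the caller. */
    static int nbd_apply_min_block(int nbd_fd, unsigned long min_block)
    {
        unsigned long blksize = min_block > 512 ? min_block : 512;

        return ioctl(nbd_fd, NBD_SET_BLKSIZE, blksize);
    }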
Eric Blake [Fri, 7 Jul 2017 20:30:48 +0000 (15:30 -0500)]
nbd: Implement NBD_INFO_BLOCK_SIZE on server
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix (commit df7b97ff), our real minimum
transfer size is always 1 (the block layer takes care of
read-modify-write on our behalf), but we're still more efficient
if we advertise 512 when the client supports it, as follows:
- OPT_INFO, but no NBD_INFO_BLOCK_SIZE: advertise 512, then
fail with NBD_REP_ERR_BLOCK_SIZE_REQD; client is free to try
something else since we don't disconnect
- OPT_INFO with NBD_INFO_BLOCK_SIZE: advertise 512
- OPT_GO, but no NBD_INFO_BLOCK_SIZE: advertise 1
- OPT_GO with NBD_INFO_BLOCK_SIZE: advertise 512
We can also advertise the optimum block size (presumably the
cluster size, when exporting a qcow2 file), and our absolute
maximum transfer size of 32M, to help newer clients avoid
EINVAL failures or abrupt disconnects on oversize requests.
We do not reject clients for using the older NBD_OPT_EXPORT_NAME;
we are no worse off for those clients than we used to be.
Eric Blake [Fri, 7 Jul 2017 20:30:47 +0000 (15:30 -0500)]
nbd: Implement NBD_OPT_GO on client
NBD_OPT_EXPORT_NAME is lousy: per the NBD protocol, any failure
requires the server to close the connection rather than report an
error to us. Therefore, upstream NBD recently added NBD_OPT_GO as
the improved version of the option that does what we want [1]: it
reports sane errors on failures, and on success provides at least
as much info as NBD_OPT_EXPORT_NAME.
This is a first cut at use of the information types. Note that we
do not need to use NBD_OPT_INFO, and that use of NBD_OPT_GO means
we no longer have to use NBD_OPT_LIST to learn whether a server
requires TLS (this requires servers that gracefully handle unknown
NBD_OPT, many servers prior to qemu 2.5 were buggy, but I have patched
qemu, upstream nbd, and nbdkit in the meantime, in part because of
interoperability testing with this patch). We still fall back to
NBD_OPT_LIST when NBD_OPT_GO is not supported on the server, as it
is still one last chance for a nicer error message. Later patches
will use further info, like NBD_INFO_BLOCK_SIZE.
Eric Blake [Fri, 7 Jul 2017 20:30:46 +0000 (15:30 -0500)]
nbd: Implement NBD_OPT_GO on server
NBD_OPT_EXPORT_NAME is lousy: per the NBD protocol, any failure
requires us to close the connection rather than report an error.
Therefore, upstream NBD recently added NBD_OPT_GO as the improved
version of the option that does what we want [1], along with
NBD_OPT_INFO that returns the same information but does not
transition to transmission phase.
This is a first cut at the information types, and only passes the
same information already available through NBD_OPT_LIST and
NBD_OPT_EXPORT_NAME; items like NBD_INFO_BLOCK_SIZE (and thus any
use of NBD_REP_ERR_BLOCK_SIZE_REQD) are intentionally left for
later patches.
Eric Blake [Fri, 7 Jul 2017 20:30:45 +0000 (15:30 -0500)]
nbd: Refactor reply to NBD_OPT_EXPORT_NAME
Reply directly in nbd_negotiate_handle_export_name(), rather than
waiting until nbd_negotiate_options() completes. This will make it
easier to implement NBD_OPT_GO. Pass additional parameters around,
rather than stashing things inside NBDClient.
Eric Blake [Fri, 7 Jul 2017 20:30:43 +0000 (15:30 -0500)]
nbd: Expose and debug more NBD constants
The NBD protocol has several constants defined in various extensions
that we are about to implement. Expose them to the code, along with
an easy way to map various constants to strings during diagnostic
messages.
Eric Blake [Fri, 7 Jul 2017 20:30:41 +0000 (15:30 -0500)]
nbd: Create struct for tracking export info
The NBD Protocol is introducing some additional information
about exports, such as minimum request size and alignment, as
well as an advertised maximum request size. It will be easier
to feed this information back to the block layer if we gather
all the information into a struct, rather than adding yet more
pointer parameters during negotiation.
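A hedged sketch of the sort of struct the paragraph describes (field names are illustrative, not necessarily those of QEMU's export-info struct):

    #include <stdint.h>

    struct NbdExportInfoSketch {
        uint64_t size;        /* export size in bytes */
        uint16_t flags;       /* transmission flags (read-only, flush, ...) */
        uint32_t min_block;   /* minimum request size / alignment */
        uint32_t opt_block;   /* preferred request size */
        uint32_t max_block;   /* maximum request size */
    };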
This finishes QOM'fication of IOMMUMemoryRegion by introducing
a IOMMUMemoryRegionClass. This also provides a fastpath analog for
IOMMU_MEMORY_REGION_GET_CLASS().
This defines new QOM object - IOMMUMemoryRegion - with MemoryRegion
as a parent.
This moves IOMMU-related fields from MR to IOMMU MR. However, to avoid
dynamic QOM casting in the fast path (address_space_translate, etc.),
this adds an @is_iommu boolean flag to MR and provides a new helper to
do a simple cast to IOMMU MR - memory_region_get_iommu. The flag
is set in the instance init callback. This defines
memory_region_is_iommu as memory_region_get_iommu() != NULL.
This switches MemoryRegion to IOMMUMemoryRegion in most places except
the ones where MemoryRegion may be an alias.
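A simplified sketch of the fast-path helper described above (hedged; close in spirit but not necessarily identical to the final memory API code):

    /* Fragment for use inside QEMU's memory API headers. */
    static inline IOMMUMemoryRegion *memory_region_get_iommu(MemoryRegion *mr)
    {
        if (mr->alias) {
            return memory_region_get_iommu(mr->alias);
        }
        if (mr->is_iommu) {
            return (IOMMUMemoryRegion *)mr;
        }
        return NULL;
    }

    #define memory_region_is_iommu(mr) (memory_region_get_iommu(mr) != NULL)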
The parallel device doesn't register a be->chr_can_read function, but the
remote disconnect event is handled in chr_read. So a connected parallel
device cannot detect a remote disconnect event. Chardevs with
chr_can_read=NULL have the same problem.
Alex Bennée [Wed, 12 Jul 2017 10:52:16 +0000 (11:52 +0100)]
gdbstub: don't fail on vCont; C04:0; c packets
A thread-id of 0 means any CPU, but we then ignore the fact that in this
case we find first_cpu, which can have an index of 0. Instead of
bailing out, just test whether we have managed to match the thread-id up
to a CPU.
Otherwise you get:
gdb_handle_packet: command='vCont;C04:0;c'
put_packet: reply='E22'
The actual reason for gdb sending vCont;C04:0;c was fixed in a
previous commit where we ensure the first_cpu's tid is correctly
reported to gdb; however, we should still behave correctly the next time
it does send 0.
Alex Bennée [Wed, 12 Jul 2017 10:52:15 +0000 (11:52 +0100)]
qom/cpu: remove host_tid field
This was only used by the gdbstub and even then was only being set for
subsequent threads. Rather than continue duplicating the number, just
make the gdbstub get the information from the TaskState structure.
Now that the tid is correctly reported for all threads, the bug I was
seeing with "vCont;C04:0;c" packets is fixed, as the correct tid is
reported to gdb.
I moved cpu_gdb_index into the gdbstub to facilitate easy access to
the TaskState which is used elsewhere in gdbstub.
To prevent BSD failing to build I've included ts_tid into its
TaskStruct but not populated it - which was the same state as the old
cpu->host_tid. I'll leave it up to the BSD maintainers to actually
populate this properly if they want a working gdbstub with
user-threads.
Alex Bennée [Wed, 12 Jul 2017 10:52:14 +0000 (11:52 +0100)]
gdbstub: rename cpu_index -> cpu_gdb_index
This is to make it clear the index is purely a gdbstub function and
should not be confused with the value of cpu->cpu_index. At the same
time we move the function from the header to gdbstub itself which will
help with later changes.
Alex Bennée [Wed, 12 Jul 2017 10:52:13 +0000 (11:52 +0100)]
gdbstub: modernise DEBUG_GDB
Convert to a gdb_debug() helper which compiles away to nothing when not
used but still ensures the format strings are checked. There is some
minor code motion for the incorrect checksum message to report it
before we attempt to send the reply.
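A standalone sketch of that common pattern (illustrative, not the exact gdbstub macro): the format string is always compiled and type-checked, but the call compiles away when the debug flag is 0:

    #include <stdio.h>

    #define DEBUG_GDB 0

    #define gdb_debug(fmt, ...)                                       \
        do {                                                          \
            if (DEBUG_GDB) {                                          \
                fprintf(stderr, "gdbstub: " fmt, ## __VA_ARGS__);     \
            }                                                         \
        } while (0)

    /* Usage: gdb_debug("command='%s'\n", line_buf); */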
mttcg/i386: Patch instruction using async_safe_* framework
In mttcg, calling pause_all_vcpus() during execution from the
generated TBs causes a deadlock if some vCPU is waiting for exclusive
execution in start_exclusive(). Fix this by using the async_safe_*
framework instead of pausing vcpus for patching instructions.
When accessing a guest's ram block during a DMA operation, use
'qemu_ram_ptr_length' to get the ram block pointer. It ensures
that a DMA operation of the given length is possible and avoids
any OOB memory access situations.
QEMU 2.9.50 monitor - type 'help' for more information
(qemu) chardev-change charchannel2 \
socket,host=127.0.0.1,port=4242,server,nowait
For a backend change, a number of ioctls have to be replayed to sync
the current setup of a frontend to a backend tty. This is hopefully
enough so we don't have to track, store and replay the whole original
control byte sequence.
Anton Nefedov [Thu, 6 Jul 2017 12:08:50 +0000 (15:08 +0300)]
char: chardevice hotswap
This patch adds the possibility to change a char device without removing
the frontend.
Ideally, it would happen transparently to the frontend, i.e. the
frontend would continue its regular operation.
However, backends are not stateless and are set up by the frontends
via qemu_chr_fe_<> functions, and it's not (generally) possible to replay
that setup entirely in backend code, as different chardevs respond
to the setup calls differently, and frontends work differently based
on those setup responses.
Moreover, a frontend can generally get and save the backend pointer
(qemu_chr_fe_get_driver()), and it will become invalid after a backend change.
So, a frontend which would like to support chardev hotswap has to register
a "backend change" handler, and redo its backend setup there.
A sync read should block until all requested data is
available (instead of retrying in qemu_chr_fe_read_all). Change the
channel to blocking during sync_read.