linux-user: Tidy and enforce reserved_va initialization
We had a check using TARGET_VIRT_ADDR_SPACE_BITS to make sure
that the allocation coming in from the command-line option was
not too large, but that didn't include target-specific knowledge
about other restrictions on user-space.
Remove several target-specific hacks in linux-user/main.c.
For MIPS and Nios, we can replace them with proper adjustments
to the respective target's TARGET_VIRT_ADDR_SPACE_BITS definition.
For ARM, we had no existing ifdef but I suspect that the current
default value of 0xf7000000 was chosen with this in mind. Define
a workable value in linux-user/arm/, and also document why the
special case is required.
Most of the users of page_set_flags pass (page, page + len) as
the end points. One might consider this an error, since the other
users supply the last byte of the region as the endpoint.
However, the first thing that page_set_flags does is round end UP
to the start of the next page, which means that computing
page + len - 1 is ultimately pointless. Therefore, accept this
usage and do not assert when given the exact size of the vm as
the endpoint.
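A tiny sketch of why the two usages are equivalent (a fragment only;
the surrounding code of page_set_flags is elided):
    /* page_set_flags() rounds the end address up to a page boundary
     * before doing anything else, so (page, page + len) and
     * (page, page + len - 1) cover exactly the same set of pages. */
    end = TARGET_PAGE_ALIGN(end);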
Peter Maydell [Tue, 3 Oct 2017 16:21:43 +0000 (17:21 +0100)]
linux-user: Allow -R values up to 0xffff0000 for 32-bit ARM guests
The 32-bit ARM validate_guest_space() check tests whether the
specified -R value leaves enough space for us to put the
commpage in at 0xffff0f00. However it was incorrectly doing
a <= check for the check against (guest_base + guest_size),
which meant that it wasn't permitting the guest space to
butt right up against the commpage.
Fix the comparison, so that -R values all the way up to 0xffff0000
work correctly.
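A minimal sketch of the corrected bound, assuming guest memory occupies
[guest_base, guest_base + guest_size) and the commpage lives in the page
at 0xffff0000; the real validate_guest_space() logic does more than this:
    /* The guest area may end exactly where the commpage's page starts,
     * so only reject a genuine overlap (strict '>', not '>='). */
    if (guest_base + guest_size > 0xffff0000ul) {
        return -1;   /* no room left for the commpage */
    }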
Riku Voipio [Tue, 8 Aug 2017 13:01:19 +0000 (16:01 +0300)]
linux-user: fix O_TMPFILE handling
Since O_TMPFILE might differ between guest and host,
add it to the bitmask_transtbl. While at it, fix the definitions
of O_DIRECTORY etc., which should match the arm32 values according
to kernel sources.
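A hedged sketch of the kind of entry this adds to the fcntl flag
translation table in linux-user/syscall.c (exact table name and guards
may differ):
    #if defined(O_TMPFILE)
        { TARGET_O_TMPFILE, TARGET_O_TMPFILE, O_TMPFILE, O_TMPFILE },
    #endif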
This fixes open14 and openat03 ltp testcases. Fixes:
Peter Maydell [Mon, 16 Oct 2017 09:22:39 +0000 (10:22 +0100)]
Merge remote-tracking branch 'remotes/elmarco/tags/vu-pull-request' into staging
# gpg: Signature made Thu 12 Oct 2017 21:52:28 BST
# gpg: using RSA key 0xDAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <[email protected]>"
# gpg: aka "Marc-André Lureau <[email protected]>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* remotes/elmarco/tags/vu-pull-request:
libvhost-user: Support VHOST_USER_SET_SLAVE_REQ_FD
libvhost-user: Update and fix feature and request lists
vhost-user-bridge: Only process received packets on started queues
libvhost-user: vu_queue_started
Peter Maydell [Thu, 12 Oct 2017 16:06:50 +0000 (17:06 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20171012' into staging
target-arm queue:
* v8M: SG, BLXNS, secure-return
* v8M: fixes for coverity issues in previous patches
* arm: fix armv7m_init() declaration to match definition
* watchdog/aspeed: fix variable type to store reload value
* remotes/pmaydell/tags/pull-target-arm-20171012:
nvic: Fix miscalculation of offsets into ITNS array
nvic: Add missing 'break'
target/arm: Implement SG instruction corner cases
target/arm: Support some Thumb insns being always unconditional
target-arm: Simplify insn_crosses_page()
target/arm: Pull Thumb insn word loads up to top level
target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
target/arm: Implement secure function return
target/arm: Implement BLXNS
target/arm: Implement SG instruction
target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
arm: fix armv7m_init() declaration to match definition
watchdog/aspeed: fix variable type to store reload value
Peter Maydell [Tue, 10 Oct 2017 15:54:16 +0000 (16:54 +0100)]
nvic: Fix miscalculation of offsets into ITNS array
This calculation of the first exception vector in
the ITNS<n> register being accessed:
int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
is incorrect, because offset is in bytes, so we only want
to multiply by 8.
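Following the note above, the corrected calculation becomes:
    int startvec = 8 * (offset - 0x380) + NVIC_FIRST_IRQ;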
Spotted by Coverity (CID 1381484, CID 1381488), though it is
not correct that it actually overflows the buffer, because
we have a 'startvec + i < s->num_irq' guard.
Peter Maydell [Wed, 11 Oct 2017 17:24:36 +0000 (18:24 +0100)]
nvic: Add missing 'break'
Coverity points out that we forgot the 'break' for
the SAU_CTRL write case (CID 1381683). This has
no actual visible consequences because it happens
that the following case is effectively a no-op.
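A hedged sketch of the shape of the fix in the nvic_writel() switch
(the register offsets and the surrounding code are illustrative only):
    case 0xdd0: /* SAU_CTRL */
        cpu->env.sau.ctrl = value & 3;
        break;      /* the 'break' Coverity noticed was missing */
    case 0xdd4: /* SAU_TYPE is read-only, so falling into it was a no-op */
        break;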
Peter Maydell [Mon, 9 Oct 2017 13:48:39 +0000 (14:48 +0100)]
target/arm: Implement SG instruction corner cases
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and
does nothing else)
* SG instruction in v8M without Security Extension (NOP)
Peter Maydell [Mon, 9 Oct 2017 13:48:38 +0000 (14:48 +0100)]
target/arm: Support some Thumb insns being always unconditional
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.
This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).
Peter Maydell [Mon, 9 Oct 2017 13:48:37 +0000 (14:48 +0100)]
target-arm: Simplify insn_crosses_page()
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know
for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3
means that dc->pc can't possibly be 4 aligned, so there's
no need to check that (the check was partly there to ensure
that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise
check of the length of the next insn, rather than opencoding
an inaccurate check
Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.
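A hedged sketch of the simplified helper, following the description
above (the DisasContext field names may differ slightly):
    static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
    {
        /* The caller only asks when dc->pc is within 3 bytes of the end
         * of the page, so the insn crosses the page boundary exactly
         * when its first halfword says it is a 32-bit Thumb insn. */
        uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

        return !thumb_insn_is_16bit(s, insn);
    }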
Peter Maydell [Mon, 9 Oct 2017 13:48:36 +0000 (14:48 +0100)]
target/arm: Pull Thumb insn word loads up to top level
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.
This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().
The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)
Peter Maydell [Mon, 9 Oct 2017 13:48:35 +0000 (14:48 +0100)]
target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).
This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)
Peter Maydell [Mon, 9 Oct 2017 13:48:34 +0000 (14:48 +0100)]
target/arm: Implement secure function return
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.
Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().
Peter Maydell [Mon, 9 Oct 2017 13:48:31 +0000 (14:48 +0100)]
target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.
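A hedged sketch of the kind of cases added to the switch (the exact set
of M profile secure MMU index names at this point in the tree may
differ):
    case ARMMMUIdx_MSUser:
    case ARMMMUIdx_MSPriv:
        return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
    case ARMMMUIdx_MSNegPri:
        return arm_to_core_mmu_idx(ARMMMUIdx_MSNegPri);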
Peter Maydell [Thu, 12 Oct 2017 09:02:09 +0000 (10:02 +0100)]
Merge remote-tracking branch 'remotes/ehabkost/tags/python-next-pull-request' into staging
Python queue, 2017-10-11
# gpg: Signature made Wed 11 Oct 2017 19:49:40 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <[email protected]>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/python-next-pull-request:
scripts: Remove debug parameter from QEMUMachine
scripts: Remove debug parameter from QEMUMonitorProtocol
guestperf: Configure logging on all shell frontends
basevm: Call logging.basicConfig()
iotests: Set up Python logging
Eduardo Habkost [Thu, 5 Oct 2017 17:20:13 +0000 (14:20 -0300)]
scripts: Remove debug parameter from QEMUMachine
All scripts that use the QEMUMachine and QEMUQtestMachine classes
(device-crash-test, tests/migration/*, iotests.py, basevm.py)
already configure logging.
The basicConfig() call inside QEMUMachine.__init__() is being
kept just to make sure a script would still work if it didn't
configure logging.
Eduardo Habkost [Thu, 5 Oct 2017 17:20:12 +0000 (14:20 -0300)]
scripts: Remove debug parameter from QEMUMonitorProtocol
Use logging module for the QMP debug messages. The only scripts
that set debug=True are iotests.py and guestperf/engine.py, and
they already call logging.basicConfig() to set up logging.
Scripts that don't configure logging are safe as long as they
don't need debugging output, because debug messages don't trigger
the "No handlers could be found for logger" message from the
Python logging module.
Scripts that already configure logging but don't use debug=True
(e.g. scripts/vm/basevm.py) will get QMP debugging enabled for
free.
Just setting level=DEBUG when debug is enabled is not enough: we
need to set up a log handler if we want debug messages generated
using logging.getLogger(...).debug() to be printed.
This was not a problem before because logging.debug() calls
logging.basicConfig() implicitly, but it's safer to not rely on
that.
Peter Maydell [Wed, 11 Oct 2017 12:10:36 +0000 (13:10 +0100)]
Merge remote-tracking branch 'remotes/elmarco/tags/vus-pull-request' into staging
# gpg: Signature made Tue 10 Oct 2017 22:33:56 BST
# gpg: using RSA key 0xDAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <[email protected]>"
# gpg: aka "Marc-André Lureau <[email protected]>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* remotes/elmarco/tags/vus-pull-request: (27 commits)
vhost-user-scsi: remove server_sock from VusDev
vhost-user-scsi: use libvhost-user glib helper
libvhost-user: add glib source helper
vhost-user-scsi: use glib logging
vhost-user-scsi: simplify source handling
vhost-user-scsi: drop extra callback pointer
vhost-user-scsi: don't copy iscsi/scsi-lowlevel.h
vhost-user-scsi: avoid use of iscsi_ namespace
vhost-user-scsi: rename VUS types
vhost-user-scsi: remove unimplemented functions
vhost-user-scsi: remove VUS_MAX_LUNS
vhost-user-scsi: remove vdev_scsi_add_iscsi_lun()
vhost-user-scsi: assert() in iscsi_add_lun()
vhost-user-scsi: use NULL pointer
vhost-user-scsi: simplify unix path cleanup
vhost-user-scsi: remove vdev_scsi_find_by_vu()
vhost-user-scsi: also free the gtree
vhost-user-scsi: glib calls that allocate don't return NULL
vhost-user-scsi: use glib allocation
vhost-user-scsi: code style fixes
...
Peter Maydell [Wed, 11 Oct 2017 08:56:16 +0000 (09:56 +0100)]
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20171010' into staging
Queued TCG patches
# gpg: Signature made Tue 10 Oct 2017 20:23:12 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <[email protected]>"
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20171010:
tcg/mips: delete commented out extern keyword.
tcg: define TCG_HIGHWATER
util: move qemu_real_host_page_size/mask to osdep.h
tcg: take .helpers out of TCGContext
tci: move tci_regs to tcg_qemu_tb_exec's stack
exec-all: extract tb->tc_* into a separate struct tc_tb
translate-all: define and use DEBUG_TB_CHECK_GATE
translate-all: define and use DEBUG_TB_INVALIDATE_GATE
exec-all: introduce TB_PAGE_ADDR_FMT
translate-all: define and use DEBUG_TB_FLUSH_GATE
exec-all: bring tb->invalid into tb->cflags
tcg: consolidate TB lookups in tb_lookup__cpu_state
tcg: remove addr argument from lookup_tb_ptr
tcg/mips: constify tcg_target_callee_save_regs
tcg/i386: constify tcg_target_callee_save_regs
cpu-exec: rename have_tb_lock to acquired_tb_lock in tb_find
translate-all: make have_tb_lock static
exec-all: fix typos in TranslationBlock's documentation
tcg: fix corruption of code_time profiling counter upon tb_flush
cputlb: bring back tlb_flush_count under !TLB_DEBUG
- PLOG is unused
- code is compiled out unless debug is enabled
- logging is too verbose
- you can pipe to ts to add timestamps if needed, or use structured
logging with more recent glib
Always remove the unix path when leaving the program (instead of when
freeing scsi_dev). Note that unix_sock_new() also unlink()s the existing
path before creating the socket.
And fix the following warning when DEBUG_TB_INVALIDATE is enabled
in translate-all.c:
CC mipsn32-linux-user/accel/tcg/translate-all.o
/data/src/qemu/accel/tcg/translate-all.c: In function ‘tb_alloc_page’:
/data/src/qemu/accel/tcg/translate-all.c:1201:16: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘tb_page_addr_t {aka unsigned int}’ [-Werror=format=]
printf("protecting code page: 0x" TARGET_FMT_lx "\n",
^
cc1: all warnings being treated as errors
/data/src/qemu/rules.mak:66: recipe for target 'accel/tcg/translate-all.o' failed
make[1]: *** [accel/tcg/translate-all.o] Error 1
Makefile:328: recipe for target 'subdir-mipsn32-linux-user' failed
make: *** [subdir-mipsn32-linux-user] Error 2
tcg: consolidate TB lookups in tb_lookup__cpu_state
This avoids duplicating code. cpu_exec_step will also use the
new common function once we integrate parallel_cpus into tb->cflags.
Note that in this commit we also fix a race, described by Richard Henderson
during review. Think of this scenario with threads A and B:
(A) Lookup succeeds for TB in hash without tb_lock
(B) Sets the TB's tb->invalid flag
(B) Removes the TB from tb_htable
(B) Clears all CPU's tb_jmp_cache
(A) Store TB into local tb_jmp_cache
Given that order of events, (A) will keep executing that invalid TB until
another flush of its tb_jmp_cache happens, which in theory might never happen.
We can fix this by checking the tb->invalid flag every time we look up a TB
from tb_jmp_cache, so that in the above scenario, next time we try to find
that TB in tb_jmp_cache, we won't, and will therefore be forced to look it
up in tb_htable.
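A hedged sketch of the guarded fast-path lookup described above (helper
and field names follow the commit text; exact details may differ):
    tb = atomic_rcu_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
    if (likely(tb &&
               tb->pc == pc &&
               tb->cs_base == cs_base &&
               tb->flags == flags &&
               !atomic_read(&tb->invalid))) {
        return tb;                    /* cached TB is still valid */
    }
    /* otherwise fall back to the tb_htable lookup */
    tb = tb_htable_lookup(cpu, pc, cs_base, flags);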
Performance-wise, I measured a small improvement when booting debian-arm.
Note that inlining pays off:
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.
This change paves the way to having a common "tb_lookup" function.
tcg: fix corruption of code_time profiling counter upon tb_flush
Whenever there is an overflow in code_gen_buffer (e.g. we run out
of space in it and have to flush it), the code_time profiling counter
ends up with an invalid value (that is, code_time -= profile_getclock(),
without later on getting += profile_getclock() due to the goto).
Fix it by using the ti variable, so that we only update code_time
when there is no overflow. Note that in case there is an overflow
we fail to account for the elapsed coding time, but this is quite rare
so we can probably live with it.
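A hedged sketch of the shape of the fix (field and label names are
approximate):
    ti = profile_getclock();
    gen_code_size = tcg_gen_code(&tcg_ctx, tb);
    if (unlikely(gen_code_size < 0)) {
        goto buffer_overflow;   /* ti is discarded; code_time untouched */
    }
    tcg_ctx.code_time += profile_getclock() - ti;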
"info jit" before/after, roughly at the same time during debian-arm bootup:
cputlb: bring back tlb_flush_count under !TLB_DEBUG
Commit f0aff0f124 ("cputlb: add assert_cpu_is_self checks") buried
the increment of tlb_flush_count under TLB_DEBUG. This results in
"info jit" always (mis)reporting 0 TLB flushes when !TLB_DEBUG.
Besides, under MTTCG tlb_flush_count is updated by several threads,
so in order not to lose counts we'd either have to use atomic ops
or distribute the counter, which is more scalable.
This patch does the latter by embedding tlb_flush_count in CPUArchState.
The global count is then easily obtained by iterating over the CPU list.
Note that this change also requires updating the accessors to
tlb_flush_count to use atomic_read/set whenever there may be conflicting
accesses (as defined in C11) to it.
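A hedged sketch of how the global figure is then obtained (close to
what the text describes; exact names may differ):
    size_t tlb_flush_count(void)
    {
        CPUState *cpu;
        size_t count = 0;

        CPU_FOREACH(cpu) {
            CPUArchState *env = cpu->env_ptr;

            count += atomic_read(&env->tlb_flush_count);
        }
        return count;
    }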
Peter Maydell [Tue, 10 Oct 2017 12:25:46 +0000 (13:25 +0100)]
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86 and machine queue, 2017-10-09
Includes x86, QOM, CPU, and option/config parsing patches.
Highlights:
* Deprecation of -nodefconfig option;
* MachineClass::valid_cpu_types field.
# gpg: Signature made Tue 10 Oct 2017 03:31:33 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <[email protected]>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
x86: Correct translation of some rdgsbase and wrgsbase encodings
vl: exit if maxcpus is negative
qom: update doc comment for type_register[_static]()
config: qemu_config_parse() return number of config groups
qemu-options: Deprecate -nodefconfig
vl: Eliminate defconfig variable
machine: Add a valid_cpu_types property
qom/cpu: move cpu_model null check to cpu_class_by_name()
x86: Correct translation of some rdgsbase and wrgsbase encodings
It looks like there was a transcription error when writing this code
initially. The code previously only decoded src or dst of rax. This
resolves
https://bugs.launchpad.net/qemu/+bug/1719984.
Igor Mammedov [Wed, 4 Oct 2017 10:08:00 +0000 (12:08 +0200)]
qom: update doc comment for type_register[_static]()
The type_register()/type_register_static() functions in the current
implementation can't fail (they never return 0), and none of the
users check for errors anyway, so update the doc comment to reflect
the current behaviour.
Eduardo Habkost [Wed, 4 Oct 2017 02:50:42 +0000 (23:50 -0300)]
config: qemu_config_parse() return number of config groups
Change qemu_config_parse() to return the number of config groups
in success and -EINVAL on error. This will allow callers of
qemu_config_parse() to check if something was really loaded from
the config file.
All existing callers of qemu_config_parse() and
qemu_read_config_file() only check if the return value was
negative, so the change shouldn't affect them.
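A hedged usage sketch of what the new return value allows a caller to
do (error handling abbreviated):
    int ret = qemu_config_parse(fp, vm_config_groups, fname);
    if (ret < 0) {
        /* parse error */
    } else if (ret == 0) {
        /* file parsed fine but contained no config groups */
    }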
Eduardo Habkost [Wed, 4 Oct 2017 03:00:25 +0000 (00:00 -0300)]
qemu-options: Deprecate -nodefconfig
Since 2012 (commit ba6212d8 "Eliminate cpus-x86_64.conf file") we
have no default config files that would be disabled using
-nodefconfig. Update documentation and document -nodefconfig as
deprecated.
This patch adds a MachineClass field that can be set in the machine C
code to specify a list of supported CPU types. If the supported CPU
types are specified, the CPU entered by the user (via -cpu at runtime)
is checked against the supported types and QEMU exits if it isn't
supported.
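A hedged sketch of how a board might use the new field (the machine and
CPU type names here are purely illustrative):
    static void my_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);
        static const char * const valid_cpu_types[] = {
            ARM_CPU_TYPE_NAME("cortex-m3"),
            NULL,
        };

        mc->valid_cpu_types = valid_cpu_types;
    }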
Peter Maydell [Fri, 6 Oct 2017 16:43:02 +0000 (17:43 +0100)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Fri 06 Oct 2017 16:52:59 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <[email protected]>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (54 commits)
block/mirror: check backing in bdrv_mirror_top_flush
qcow2: truncate the tail of the image file after shrinking the image
qcow2: fix return error code in qcow2_truncate()
iotests: Fix 195 if IMGFMT is part of TEST_DIR
block/mirror: check backing in bdrv_mirror_top_refresh_filename
block: support passthrough of BDRV_REQ_FUA in crypto driver
block: convert qcrypto_block_encrypt|decrypt to take bytes offset
block: convert crypto driver to bdrv_co_preadv|pwritev
block: fix data type casting for crypto payload offset
crypto: expose encryption sector size in APIs
block: use 1 MB bounce buffers for crypto instead of 16KB
iotests: Add test 197 for covering copy-on-read
block: Perform copy-on-read in loop
block: Add blkdebug hook for copy-on-read
iotests: Restore stty settings on completion
block: Uniform handling of 0-length bdrv_get_block_status()
qemu-io: Add -C for opening with copy-on-read
commit: Remove overlay_bs
qemu-iotests: Test commit block job where top has two parents
qemu-iotests: Allow QMP pretty printing in common.qemu
...
Peter Maydell [Fri, 6 Oct 2017 16:00:42 +0000 (17:00 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20171006' into staging
target-arm:
* v8M: more preparatory work
* nvic: reset properly rather than leaving the nvic in a weird state
* xlnx-zynqmp: Mark the "xlnx, zynqmp" device with user_creatable = false
* sd: fix out-of-bounds check for multi block reads
* arm: Fix SMC reporting to EL2 when QEMU provides PSCI
* remotes/pmaydell/tags/pull-target-arm-20171006:
nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit
target/arm: Factor out "get mmuidx for specified security state"
target/arm: Fix calculation of secure mm_idx values
target/arm: Implement security attribute lookups for memory accesses
nvic: Implement Security Attribution Unit registers
target/arm: Add v8M support to exception entry code
target/arm: Add support for restoring v8M additional state context
target/arm: Update excret sanity checks for v8M
target/arm: Add new-in-v8M SFSR and SFAR
target/arm: Don't warn about exception return with PC low bit set for v8M
target/arm: Warn about restoring to unaligned stack
target/arm: Check for xPSR mismatch usage faults earlier for v8M
target/arm: Restore SPSEL to correct CONTROL register on exception return
target/arm: Restore security state on exception return
target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
target/arm: Don't switch to target stack early in v7M exception return
nvic: Clear the vector arrays and prigroup on reset
hw/arm/xlnx-zynqmp: Mark the "xlnx, zynqmp" device with user_creatable = false
hw/sd: fix out-of-bounds check for multi block reads
arm: Fix SMC reporting to EL2 when QEMU provides PSCI
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit
When we added support for the new SHCSR bits in v8M in commit
437d59c17e9, the code to support writing to the new HARDFAULTPENDED
bit was accidentally only added for non-secure writes; the
secure banked version of the bit should also be writable.
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
target/arm: Factor out "get mmuidx for specified security state"
For the SG instruction and secure function return we are going
to want to do memory accesses using the MMU index of the CPU
in secure state, even though the CPU is currently in non-secure
state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
and use it in cpu_mmu_index().
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
target/arm: Fix calculation of secure mm_idx values
In cpu_mmu_index() we try to do this:
    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }
but it will give the wrong answer, because ARMMMUIdx_MSUser
includes the 0x40 ARM_MMU_IDX_M field, and so does the
mmu_idx we're adding to, and we'll end up with 0x8n rather
than 0x4n. This error is then nullified by the call to
arm_to_core_mmu_idx() which masks out the high part, but
we're about to factor out the code that calculates the
ARMMMUIdx values so it can be used without passing it through
arm_to_core_mmu_idx(), so fix this bug first.
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
target/arm: Implement security attribute lookups for memory accesses
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.
The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.
In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory
perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a
Secure & NonSecure code region
The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.
This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
nvic: Implement Security Attribution Unit registers
Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.
When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.
Number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a
QEMU CPU property for it. We can easily add one when
we have a board that requires it.
Peter Maydell [Fri, 6 Oct 2017 15:46:49 +0000 (16:46 +0100)]
target/arm: Add v8M support to exception entry code
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid
leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting
Peter Maydell [Fri, 6 Oct 2017 15:46:48 +0000 (16:46 +0100)]
target/arm: Add support for restoring v8M additional state context
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.
Peter Maydell [Fri, 6 Oct 2017 15:46:48 +0000 (16:46 +0100)]
target/arm: Update excret sanity checks for v8M
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.