Thomas Gleixner [Thu, 21 May 2020 20:05:21 +0000 (22:05 +0200)]
genirq: Provide irq_enter/exit_rcu()
irq_enter()/exit() currently include RCU handling. To properly separate the RCU
handling code, provide variants which contain only the non-RCU related
functionality.
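A minimal sketch of the resulting split in kernel/softirq.c, assuming the tick housekeeping and hardirq accounting stay in the _rcu() variant while the old name becomes a wrapper adding the RCU call (details in the real tree may differ slightly):

  void irq_enter_rcu(void)
  {
          /* Non-RCU part: tick housekeeping and hardirq accounting */
          if (is_idle_task(current) && !in_interrupt()) {
                  local_bh_disable();
                  tick_irq_enter();
                  _local_bh_enable();
          }
          __irq_enter();
  }

  void irq_enter(void)
  {
          /* Full variant: RCU handling plus the non-RCU part above */
          rcu_irq_enter();
          irq_enter_rcu();
  }

irq_exit()/irq_exit_rcu() mirror this on the exit side, with rcu_irq_exit() as the final step of the full variant.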
Thomas Gleixner [Thu, 21 May 2020 20:05:19 +0000 (22:05 +0200)]
x86/idtentry: Switch to conditional RCU handling
Switch all idtentry_enter/exit() users over to the new conditional RCU
handling scheme and make the user mode entries in #DB, #INT3 and #MCE use
the user mode idtentry functions.
Thomas Gleixner [Thu, 21 May 2020 20:05:18 +0000 (22:05 +0200)]
x86/entry: Provide idtentry_enter/exit_user()
As there are exceptions which already handle entry from user mode and from
kernel mode separately, providing explicit user entry/exit handling callbacks
makes sense and makes the code easier to understand.
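A sketch of what such explicit user mode helpers can look like (assumed shape, based on the description above; the real functions live in arch/x86/entry/common.c):

  noinstr void idtentry_enter_user(struct pt_regs *regs)
  {
          /* Exceptions from user space only need the context tracking switch */
          enter_from_user_mode();
  }

  noinstr void idtentry_exit_user(struct pt_regs *regs)
  {
          lockdep_assert_irqs_disabled();

          /* Same work/signal/reschedule handling as the syscall exit path */
          prepare_exit_to_usermode(regs);
  }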
Thomas Gleixner [Thu, 21 May 2020 20:05:17 +0000 (22:05 +0200)]
x86/entry: Provide idtentry_entry/exit_cond_rcu()
After a lengthy discussion [1] it turned out that RCU does not need a full
rcu_irq_enter/exit() when RCU is already watching. All it needs if
NOHZ_FULL is active is to check whether the tick needs to be restarted.
This avoids a separate variant for the pagefault handler, which cannot
invoke rcu_irq_enter() on a kernel pagefault that might sleep.
The cond_rcu argument is only temporary and will be removed once the
existing users of idtentry_enter/exit() have been cleaned up. After that
the code can be significantly simplified.
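A simplified sketch of the resulting decision logic (lockdep, tracing and instrumentation annotations omitted); the return value tells the exit path whether a matching rcu_irq_exit() is required:

  noinstr bool idtentry_enter_cond_rcu(struct pt_regs *regs)
  {
          if (user_mode(regs)) {
                  enter_from_user_mode();
                  return false;
          }

          if (!rcu_is_watching()) {
                  /* Idle (or NOHZ_FULL user mode) was interrupted: full RCU entry */
                  rcu_irq_enter();
                  return true;
          }

          /* RCU is watching: only check whether the tick must be restarted */
          rcu_irq_enter_check_tick();
          return false;
  }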
Thomas Gleixner [Tue, 25 Feb 2020 22:33:31 +0000 (23:33 +0100)]
x86/entry: Convert double fault exception to IDTENTRY_DF
Convert #DF to IDTENTRY_DF
- Implement the C entry point with DEFINE_IDTENTRY_DF
- Emit the ASM stub with DECLARE_IDTENTRY_DF on 64bit
- Remove the ASM idtentry in 64bit
- Adjust the 32bit shim code
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 21 Apr 2020 19:22:36 +0000 (21:22 +0200)]
x86/mce: Address objtool noinstr complaints
Mark the relevant functions noinstr and use the plain, non-instrumented MSR
accessors. The only odd part is the instrumentation_begin()/end() pair around
the indirect machine_check_vector() call, as objtool can't figure that out.
The possibly invoked functions are annotated correctly.
Also use notrace variant of nmi_enter/exit(). If MCEs happen then hardware
latency tracing is the least of the worries.
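The kernel mode path then looks roughly like this (sketch; helper name as in the converted entry code, the annotations tell objtool that the indirect call targets are instrumentation-safe):

  static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
  {
          nmi_enter();
          /*
           * The targets of machine_check_vector() are all noinstr, but
           * objtool cannot verify an indirect call, so annotate it.
           */
          instrumentation_begin();
          machine_check_vector(regs);
          instrumentation_end();
          nmi_exit();
  }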
Thomas Gleixner [Mon, 4 May 2020 17:56:26 +0000 (19:56 +0200)]
x86/traps: Restructure #DB handling
Now that there are separate entry points, move the kernel/user mode specific
checks into the entry functions so the common handling code does not need
the extra mode checks. Make the code more readable while at it.
Thomas Gleixner [Tue, 25 Feb 2020 22:33:29 +0000 (23:33 +0100)]
x86/entry: Implement user mode C entry points for #DB and #MCE
The MCE entry point uses the same mechanism as the IST entry point for
now. For #DB split the inner workings and just keep the nmi_enter/exit()
magic in the IST variant. Fixup the ASM code to emit the proper
noist_##cfunc call.
Thomas Gleixner [Tue, 25 Feb 2020 22:33:28 +0000 (23:33 +0100)]
x86/idtentry: Provide IDTENTRY_NOIST variants for #DB and #MC
Provide NOIST entry point macros which allow implementing NOIST variants
of the C entry points. These are invoked when #DB or #MC enter from user
space. This allows explicit handling of the difference between user mode
and kernel mode entry later.
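Given the noist_##cfunc naming mentioned above, one plausible way to express the NOIST variants is to simply reuse the existing machinery (sketch, not necessarily the exact macros):

  /* Same prototype set, but for the noist_ prefixed symbol names */
  #define DECLARE_IDTENTRY_NOIST(vector, func) \
          DECLARE_IDTENTRY(vector, noist_##func)

  /* NOIST entry points are not IST and do their own state handling */
  #define DEFINE_IDTENTRY_NOIST(func) \
          DEFINE_IDTENTRY_RAW(noist_##func)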
Thomas Gleixner [Tue, 25 Feb 2020 22:33:26 +0000 (23:33 +0100)]
x86/entry: Convert Debug exception to IDTENTRY_DB
Convert #DB to IDTENTRY_DB:
- Implement the C entry point with DEFINE_IDTENTRY_DB
- Emit the ASM stub with DECLARE_IDTENTRY_DB
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:33:24 +0000 (23:33 +0100)]
x86/idtentry: Provide IDTENTRY_XEN for XEN/PV
XEN/PV has special wrappers for NMI and DB exceptions. They redirect these
exceptions through regular IDTENTRY points. Provide the necessary IDTENTRY
macros to make this work.
Thomas Gleixner [Sat, 4 Apr 2020 13:39:13 +0000 (15:39 +0200)]
x86/mce: Use untraced rd/wrmsr in the MCE offline/crash check
mce_check_crashing_cpu() is called right at the entry of the MCE
handler. It uses mce_rdmsrl() and mce_wrmsrl(), which are wrappers around
rdmsr() and wrmsr() to handle the MCE error injection mechanism. That is
pointless in this context, i.e. when the MCE hits an offline CPU or the
system is already marked crashing.
The MSR access can also be traced, so use the untraceable variants. This
is also safe vs. XEN paravirt as these MSRs are not affected by XEN PV
modifications.
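A simplified sketch of the affected check with the untraceable accessors (condition abbreviated; the real function in arch/x86/kernel/cpu/mce/core.c also covers the crashing case and vendor quirks):

  bool mce_check_crashing_cpu(void)
  {
          unsigned int cpu = smp_processor_id();

          if (cpu_is_offline(cpu) /* || the system is already crashing */) {
                  u64 mcgstatus;

                  /* __rdmsr()/__wrmsr() skip tracepoints and injection hooks */
                  mcgstatus = __rdmsr(MSR_IA32_MCG_STATUS);

                  if (mcgstatus & MCG_STATUS_RIPV) {
                          __wrmsr(MSR_IA32_MCG_STATUS, 0, 0);
                          return true;
                  }
          }
          return false;
  }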
Thomas Gleixner [Tue, 25 Feb 2020 22:33:23 +0000 (23:33 +0100)]
x86/entry: Convert Machine Check to IDTENTRY_IST
Convert #MC to IDTENTRY_MCE:
- Implement the C entry points with DEFINE_IDTENTRY_MCE
- Emit the ASM stub with DECLARE_IDTENTRY_MCE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the error code from *machine_check_vector() as
it is always 0 and not used by any of the functions
it can point to. Fixup all the functions as well.
Thomas Gleixner [Fri, 3 Apr 2020 20:37:31 +0000 (22:37 +0200)]
x86/mce: Move nmi_enter/exit() into the entry point
There is no reason to have nmi_enter/exit() in the actual MCE
handlers. Move it to the entry point. This also covers the previously
uncovered initial handler, which only prints.
Thomas Gleixner [Tue, 25 Feb 2020 22:16:16 +0000 (23:16 +0100)]
x86/entry: Convert INT3 exception to IDTENTRY_RAW
Convert #BP to IDTENTRY_RAW:
- Implement the C entry point with DEFINE_IDTENTRY_RAW
- Invoke idtentry_enter/exit() from the function body
- Emit the ASM stub with DECLARE_IDTENTRY_RAW
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
No functional change.
This could be a plain IDTENTRY, but as Peter pointed out INT3 would be
broken vs. the static key in the context tracking code, as that static key
might be in the middle of being patched and contain an INT3 itself, which
would recurse forever. IDTENTRY_RAW is therefore chosen so this issue can
be addressed without lots of code churn.
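A simplified sketch of the converted entry point under these assumptions (the text poke check must run before any state tracking or instrumentable code; handle_bp() is a made-up name standing for the former handler body):

  DEFINE_IDTENTRY_RAW(exc_int3)
  {
          /* Text poking may be in progress: handle that before anything else */
          if (poke_int3_handler(regs))
                  return;

          idtentry_enter(regs);
          instrumentation_begin();
          handle_bp(regs);        /* hypothetical: the old #BP handling */
          instrumentation_end();
          idtentry_exit(regs);
  }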
Peter Zijlstra [Thu, 20 Feb 2020 12:28:06 +0000 (13:28 +0100)]
x86/int3: Inline bsearch()
Avoid calling out to bsearch() by inlining it. For normal kernel configs
this was the last external call, and poke_int3_handler() is now fully
self-sufficient -- no calls to external code.
Peter Zijlstra [Wed, 19 Feb 2020 17:25:09 +0000 (18:25 +0100)]
lib/bsearch: Provide __always_inline variant
For code that needs the ultimate performance (it can inline the @cmp
function too) or simply needs to avoid calling external functions for
whatever reason, provide an __always_inline variant of bsearch().
[ tglx: Renamed to __inline_bsearch() as suggested by Andy ]
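The variant is essentially the classic binary search, forced inline so that both the loop and the @cmp callback can be flattened into the caller (sketch of the shape in include/linux/bsearch.h):

  static __always_inline
  void *__inline_bsearch(const void *key, const void *base, size_t num,
                         size_t size, cmp_func_t cmp)
  {
          const char *pivot;
          int result;

          while (num > 0) {
                  pivot = base + (num >> 1) * size;
                  result = cmp(key, pivot);

                  if (result == 0)
                          return (void *)pivot;

                  if (result > 0) {
                          base = pivot + size;
                          num--;
                  }
                  num >>= 1;
          }

          return NULL;
  }

bsearch() itself can then stay as a small out-of-line wrapper around the inline variant.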
Thomas Gleixner [Tue, 21 Jan 2020 14:53:09 +0000 (15:53 +0100)]
x86/int3: Ensure that poke_int3_handler() is not traced
In order to ensure that poke_int3_handler() is completely self-contained --
it is called while modifying other text, imagine the fun of hitting another
INT3 -- ensure that everything it uses is not traced.
The primary means here is to force inlining; bsearch() is notrace because
all of lib/ is.
Thomas Gleixner [Tue, 25 Feb 2020 22:16:30 +0000 (23:16 +0100)]
x86/entry/32: Convert IRET exception to IDTENTRY_SW
Convert the IRET exception handler to IDTENTRY_SW. This is slightly
different from the conversions of hardware exceptions as the IRET exception
is invoked via an exception table when IRET faults. So it just uses the
IDTENTRY_SW mechanism for consistency. No ASM code is emitted as it does
not fit the pattern of the other idtentry exceptions.
- Implement the C entry point with DEFINE_IDTENTRY_SW() which maps to
DEFINE_IDTENTRY()
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:29 +0000 (23:16 +0100)]
x86/entry: Convert SIMD coprocessor error exception to IDTENTRY
Convert #XF to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Handle INVD_BUG in C
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:28 +0000 (23:16 +0100)]
x86/entry: Convert Alignment check exception to IDTENTRY
Convert #AC to IDTENTRY_ERRORCODE:
- Implement the C entry point with DEFINE_IDTENTRY_ERRORCODE
- Emit the ASM stub with DECLARE_IDTENTRY_ERRORCODE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:27 +0000 (23:16 +0100)]
x86/entry: Convert Coprocessor error exception to IDTENTRY
Convert #MF to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:26 +0000 (23:16 +0100)]
x86/entry: Convert Spurious interrupt bug exception to IDTENTRY
Convert #SPURIOUS to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:25 +0000 (23:16 +0100)]
x86/entry: Convert General protection exception to IDTENTRY
Convert #GP to IDTENTRY_ERRORCODE:
- Implement the C entry point with DEFINE_IDTENTRY_ERRORCODE
- Emit the ASM stub with DECLARE_IDTENTRY_ERRORCODE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:24 +0000 (23:16 +0100)]
x86/entry: Convert Stack segment exception to IDTENTRY
Convert #SS to IDTENTRY_ERRORCODE:
- Implement the C entry point with DEFINE_IDTENTRY_ERRORCODE
- Emit the ASM stub with DECLARE_IDTENTRY_ERRORCODE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:23 +0000 (23:16 +0100)]
x86/entry: Convert Segment not present exception to IDTENTRY
Convert #NP to IDTENTRY_ERRORCODE:
- Implement the C entry point with DEFINE_IDTENTRY_ERRORCODE
- Emit the ASM stub with DECLARE_IDTENTRY_ERRORCODE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:22 +0000 (23:16 +0100)]
x86/entry: Convert Invalid TSS exception to IDTENTRY
Convert #TS to IDTENTRY_ERRORCODE:
- Implement the C entry point with DEFINE_IDTENTRY_ERRORCODE
- Emit the ASM stub with DECLARE_IDTENTRY_ERRORCODE
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:20 +0000 (23:16 +0100)]
x86/entry: Convert Coprocessor segment overrun exception to IDTENTRY
Convert #OLD_MF to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:19 +0000 (23:16 +0100)]
x86/entry: Convert Device not available exception to IDTENTRY
Convert #NM to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:18 +0000 (23:16 +0100)]
x86/entry: Convert Invalid Opcode exception to IDTENTRY
Convert #UD to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Fixup the F00F bug call in fault.c
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:17 +0000 (23:16 +0100)]
x86/entry: Convert Bounds exception to IDTENTRY
Convert #BR to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
- Remove the RCU warning as the new entry macro ensures correctness
Thomas Gleixner [Tue, 25 Feb 2020 22:16:15 +0000 (23:16 +0100)]
x86/entry: Convert Overflow exception to IDTENTRY
Convert #OF to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
- Remove the old prototypes
Thomas Gleixner [Tue, 25 Feb 2020 22:16:14 +0000 (23:16 +0100)]
x86/entry: Convert Divide Error to IDTENTRY
Convert #DE to IDTENTRY:
- Implement the C entry point with DEFINE_IDTENTRY
- Emit the ASM stub with DECLARE_IDTENTRY
- Remove the ASM idtentry in 64bit
- Remove the open coded ASM entry code in 32bit
- Fixup the XEN/PV code
Thomas Gleixner [Tue, 25 Feb 2020 22:16:13 +0000 (23:16 +0100)]
x86/traps: Prepare for using DEFINE_IDTENTRY
Prepare for using IDTENTRY to define the C exception/trap entry points. It
would be possible to glue this into the existing macro maze, but it's
simpler and, in the end, more readable to just make them distinct.
Provide a trivial inline helper to read the trap address and add a comment
explaining the logic behind it.
The existing macros will be removed once all instances are converted.
Thomas Gleixner [Tue, 25 Feb 2020 22:16:12 +0000 (23:16 +0100)]
x86/idtentry: Provide macros to define/declare IDT entry points
Provide DECLARE/DEFINE_IDTENTRY() macros.
DEFINE_IDTENTRY() provides a wrapper which acts as the function
definition. The exception handler body is just appended to it with curly
brackets. The entry point is marked noinstr so that irq tracing and the
enter_from_user_mode() can be moved into the C-entry point. As all
C-entries use the same macro (or a later variant), the necessary entry
handling can be implemented in one central place.
DECLARE_IDTENTRY() provides the function prototypes:
- The C entry point cfunc
- The ASM entry point asm_cfunc
- The XEN/PV entry point xen_asm_cfunc
They all follow the same naming convention.
When included from ASM code, DECLARE_IDTENTRY() is a macro which emits the
low level entry point in assembly by instantiating the 'idtentry' ASM macro.
IDTENTRY is the simplest variant which just has a pt_regs argument. It's
going to be used for all exceptions which have no error code.
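A sketch of the pair under the naming convention described above (simplified; the real macros carry additional declarations and attributes):

  /* Prototypes for the C, ASM and XEN/PV entry points */
  #define DECLARE_IDTENTRY(vector, func)                          \
          asmlinkage void asm_##func(void);                       \
          asmlinkage void xen_asm_##func(void);                   \
          __visible void func(struct pt_regs *regs)

  /* noinstr wrapper which establishes state, then runs the appended body */
  #define DEFINE_IDTENTRY(func)                                   \
  static __always_inline void __##func(struct pt_regs *regs);     \
                                                                  \
  __visible noinstr void func(struct pt_regs *regs)               \
  {                                                               \
          idtentry_enter(regs);                                   \
          instrumentation_begin();                                \
          __##func(regs);                                         \
          instrumentation_end();                                  \
          idtentry_exit(regs);                                    \
  }                                                               \
                                                                  \
  static __always_inline void __##func(struct pt_regs *regs)

A handler is then written as DEFINE_IDTENTRY(exc_divide_error) { ... } and gets the established state for free.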
Thomas Gleixner [Wed, 23 Oct 2019 12:27:10 +0000 (14:27 +0200)]
x86/traps: Make interrupt enable/disable symmetric in C code
Traps enable interrupts conditionally but rely on the ASM return code to
disable them again. That results in redundant interrupt disable and trace
calls.
Make the trap handlers disable interrupts before returning to avoid that,
which allows simplification of the ASM entry code in follow up changes.
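A sketch of the resulting pattern, using the existing cond_local_irq_enable() helper plus a mirroring disable helper (the handler shown is illustrative only):

  static void cond_local_irq_enable(struct pt_regs *regs)
  {
          if (regs->flags & X86_EFLAGS_IF)
                  local_irq_enable();
  }

  static void cond_local_irq_disable(struct pt_regs *regs)
  {
          if (regs->flags & X86_EFLAGS_IF)
                  local_irq_disable();
  }

  dotraplinkage void do_bounds(struct pt_regs *regs, long error_code)
  {
          cond_local_irq_enable(regs);
          /* ... exception handling ... */
          cond_local_irq_disable(regs);   /* symmetric: no ASM cleanup needed */
  }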
Thomas Gleixner [Wed, 4 Mar 2020 11:49:18 +0000 (12:49 +0100)]
x86/speculation/mds: Mark mds_user_clear_cpu_buffers() __always_inline
Prevent the compiler from uninlining and creating traceable/probeable
functions, as this is invoked _after_ context tracking switched to
CONTEXT_USER and RCU went idle.
Thomas Gleixner [Wed, 4 Mar 2020 11:51:59 +0000 (12:51 +0100)]
x86/entry: Move irq flags tracing to prepare_exit_to_usermode()
This is another step towards more C-code and less convoluted ASM.
Similar to the entry path, invoke the tracer before context tracking, which
might turn off RCU, and invoke lockdep as the last step before going back to
user space. Annotate the code sections in exit_to_user_mode() accordingly
so objtool won't complain about the tracer invocation.
Thomas Gleixner [Tue, 25 Feb 2020 22:08:05 +0000 (23:08 +0100)]
x86/entry: Move irq tracing on syscall entry to C-code
Now that the C entry points are safe, move the irq flags tracing code into
the entry helper:
- Invoke lockdep before calling into context tracking
- Use the safe trace_hardirqs_on_prepare() trace function after context
tracking has established state and RCU is watching.
enter_from_user_mode() is also still invoked from the exception/interrupt
entry code which still contains the ASM irq flags tracing. So this is just
a redundant and harmless invocation of tracing / lockdep until these are
removed as well.
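A sketch of the resulting ordering on syscall entry (names as assumed here; the safe trace hook may be named differently in the final tree):

  static noinstr void enter_from_user_mode(void)
  {
          /* 1) lockdep first, while everything is still untraceable */
          lockdep_hardirqs_off(CALLER_ADDR0);

          /* 2) context tracking: switch to CONTEXT_KERNEL, RCU starts watching */
          user_exit_irqoff();

          /* 3) only now is it safe to call into instrumentable tracing code */
          instrumentation_begin();
          trace_hardirqs_off_prepare();   /* assumed name of the safe tracer hook */
          instrumentation_end();
  }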
Thomas Gleixner [Tue, 10 Mar 2020 13:46:27 +0000 (14:46 +0100)]
x86/entry/common: Protect against instrumentation
Mark the various syscall entries with noinstr to protect them against
instrumentation and add the instrumentation_begin()/end() annotations to mark the
parts of the functions which are safe to call out into instrumentable code.
Thomas Gleixner [Sat, 29 Feb 2020 14:12:33 +0000 (15:12 +0100)]
x86/entry: Mark enter_from_user_mode() noinstr
Both the callers in the low level ASM code and __context_tracking_exit()
which is invoked from enter_from_user_mode() via user_exit_irqoff() are
marked NOKPROBE. Allowing enter_from_user_mode() to be probed is
inconsistent at best.
Aside from that, while function tracing per se is safe, the function trace
entry/exit points can also be used via BPF, which is not safe before
context tracking has reached CONTEXT_KERNEL and adjusted RCU.
Mark it noinstr which moves it into the instrumentation protected text
section and includes notrace.
Note, this needs further fixups in context tracking to ensure that the
full call chain is protected. Will be addressed in follow up changes.
Thomas Gleixner [Wed, 25 Mar 2020 18:47:40 +0000 (19:47 +0100)]
x86/entry/32: Move non entry code into .text section
All ASM code which is not part of the entry functionality can move out into
the .text section. No reason to keep it in the non-instrumentable entry
section.
Thomas Gleixner [Wed, 25 Mar 2020 18:45:26 +0000 (19:45 +0100)]
x86/entry/64: Move non entry code into .text section
All ASM code which is not part of the entry functionality can move out into
the .text section. No reason to keep it in the non-instrumentable entry
section.
Thomas Gleixner [Tue, 10 Mar 2020 22:47:39 +0000 (23:47 +0100)]
lib/smp_processor_id: Move it into noinstr section
That code is already not traceable. Move it into the noinstr section so the
objtool section validation does not trigger.
Annotate the warning code as "safe". While it might not be safe under all
circumstances, getting the information out is important enough.
Should this ever trigger from the sensitive code which is shielded against
instrumentation, e.g. low level entry, then the printk is the least of the
worries.
Addresses the objtool warnings:
vmlinux.o: warning: objtool: context_tracking_recursion_enter()+0x7: call to __this_cpu_preempt_check() leaves .noinstr.text section
vmlinux.o: warning: objtool: __context_tracking_exit()+0x17: call to __this_cpu_preempt_check() leaves .noinstr.text section
vmlinux.o: warning: objtool: __context_tracking_enter()+0x2a: call to __this_cpu_preempt_check() leaves .noinstr.text section
Thomas Gleixner [Wed, 4 Mar 2020 10:05:22 +0000 (11:05 +0100)]
context_tracking: Ensure that the critical path cannot be instrumented
Context tracking lacks a few protection mechanisms against instrumentation:
- While the core functions are marked NOKPROBE they lack protection
against function tracing, which is required as the function entry/exit
points can be utilized by BPF.
- Static functions invoked from the protected functions need to be marked
as well, as they can otherwise be instrumented.
- Using plain inline allows the compiler to emit traceable and probeable
functions.
Fix this by marking the functions noinstr and converting the plain inlines
to __always_inline.
The NOKPROBE_SYMBOL() annotations are removed as the .noinstr.text section
is already excluded from being probed.
Cures the following objtool warnings:
vmlinux.o: warning: objtool: enter_from_user_mode()+0x34: call to __context_tracking_exit() leaves .noinstr.text section
vmlinux.o: warning: objtool: prepare_exit_to_usermode()+0x29: call to __context_tracking_enter() leaves .noinstr.text section
vmlinux.o: warning: objtool: syscall_return_slowpath()+0x29: call to __context_tracking_enter() leaves .noinstr.text section
vmlinux.o: warning: objtool: do_syscall_64()+0x7f: call to __context_tracking_enter() leaves .noinstr.text section
vmlinux.o: warning: objtool: do_int80_syscall_32()+0x3d: call to __context_tracking_enter() leaves .noinstr.text section
vmlinux.o: warning: objtool: do_fast_syscall_32()+0x9c: call to __context_tracking_enter() leaves .noinstr.text section
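For illustration, a sketch of the kind of change applied to the entry points and helpers in kernel/context_tracking.c (declarations shown only):

  /* Before: kprobes are kept away, but ftrace/BPF can still attach */
  void __context_tracking_enter(enum ctx_state state);
  NOKPROBE_SYMBOL(__context_tracking_enter);

  /* After: noinstr implies notrace and moves the function into
   * .noinstr.text, which is also excluded from kprobes, so the
   * NOKPROBE_SYMBOL() annotation goes away; static helpers become
   * __always_inline so no probeable out-of-line copies are emitted. */
  void noinstr __context_tracking_enter(enum ctx_state state);

  static __always_inline bool context_tracking_recursion_enter(void);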
Peter Zijlstra [Thu, 20 Feb 2020 12:17:27 +0000 (13:17 +0100)]
x86/doublefault: Remove memmove() call
Use of memmove() in #DF is problematic considering tracing and other
instrumentation.
Remove the memmove() call and simply write out what needs doing; this
even clarifies the code, win-win! The code copies from the espfix64
stack to the normal task stack; there is no possible way for that to
overlap.
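The replacement is a handful of explicit assignments; a sketch of the idea, copying the five-word hardware frame field by field (regs and gpregs are the variables in the #DF espfix path):

  /* regs->sp points at the failing IRET frame on the espfix64 stack */
  unsigned long *p = (unsigned long *)regs->sp;

  gpregs->ip    = p[0];
  gpregs->cs    = p[1];
  gpregs->flags = p[2];
  gpregs->sp    = p[3];
  gpregs->ss    = p[4];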
Andy Lutomirski [Mon, 24 Feb 2020 12:24:58 +0000 (13:24 +0100)]
x86/hw_breakpoint: Prevent data breakpoints on cpu_entry_area
A data breakpoint near the top of an IST stack will cause unrecoverable
recursion. A data breakpoint on the GDT, IDT, or TSS is terrifying.
Prevent either of these from happening.
x86/idt: Keep spurious entries unset in system_vectors
Since commit dc20b2d52653 ("x86/idt: Move interrupt gate initialization to
IDT code") non-assigned system vectors are also marked as used in the
'used_vectors' (now 'system_vectors') bitmap. This makes the check in
arch_show_interrupts() of whether a particular system vector is allocated
always pass, so e.g. the 'Hyper-V reenlightenment interrupts' entry always
shows up in /proc/interrupts.
Another side effect of having all unassigned system vectors marked as used
is that irq_matrix_debug_show() will wrongly count them among 'System'
vectors.
As it is now ensured that alloc_intr_gate() is not called after init, it is
possible to leave unused entries in 'system_vectors' unset to fix these
issues.
There seems to be no reason to allocate interrupt gates after init. Mark
alloc_intr_gate() as __init and add WARN_ON() checks making sure it is
only used before idt_setup_apic_and_irq_gates() finalizes IDT setup and
maps all un-allocated entries to spurious entries.
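A sketch of what the guarded allocation can look like under these constraints (the flag name is assumed):

  static bool idt_setup_done __initdata;   /* assumed flag, set once setup is final */

  void __init alloc_intr_gate(unsigned int n, const void *addr)
  {
          /* Allocating a gate after IDT setup finished would be a bug */
          if (WARN_ON(idt_setup_done))
                  return;

          if (!WARN_ON(test_and_set_bit(n, system_vectors)))
                  set_intr_gate(n, addr);
  }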
x86/xen: Split HVM vector callback setup and interrupt gate allocation
As a preparatory change for making alloc_intr_gate() __init, split
xen_callback_vector() into callback vector setup via hypercall
(xen_setup_callback_vector()) and interrupt gate allocation
(xen_alloc_callback_vector()).
xen_setup_callback_vector() is being called twice: on init and upon
system resume from xen_hvm_post_suspend(). alloc_intr_gate() only
needs to be called once.
Lai Jiangshan [Sun, 19 Apr 2020 14:40:49 +0000 (14:40 +0000)]
x86/entry/64: Remove an unused label
The label .Lcommon_\sym was introduced by commit 39e9543344fa
("x86-64: Reduce amount of redundant code generated for invalidate_interruptNN"),
and all other relevant information was removed by commit 52aec3308db8
("x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR").
Peter Zijlstra [Fri, 24 Jan 2020 21:13:03 +0000 (22:13 +0100)]
locking/atomics: Flip fallbacks and instrumentation
Currently instrumentation of atomic primitives is done at the architecture
level, while composites or fallbacks are provided at the generic level.
The result is that there are no uninstrumented variants of the
fallbacks. Since there is now a need for such variants to isolate text poke
from any form of instrumentation, invert this ordering.
Doing this means moving the instrumentation into the generic code as
well as having (for now) two variants of the fallbacks.
Notes:
- the various *cond_read* primitives are not proper fallbacks
and got moved into linux/atomic.h. No arch_ variants are
generated because the base primitives smp_cond_load*()
are instrumented.
- once all architectures are moved over to arch_atomic_ one of the
fallback variants can be removed and some 2300 lines reclaimed.
- atomic_{read,set}*() are no longer double-instrumented
Marco Elver [Tue, 26 Nov 2019 14:04:05 +0000 (15:04 +0100)]
asm-generic/atomic: Use __always_inline for fallback wrappers
Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable such as atomic ops. This can cause problems, for example in
UACCESS regions.
While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.
For x86 tinyconfig we observe:
- vmlinux baseline: 1315988
- vmlinux with patch: 1315928 (-60 bytes)
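The generated fallback wrappers are trivial; a sketch of the pattern in the generated headers:

  #ifndef atomic_inc
  /* fallback when the architecture does not provide atomic_inc() */
  static __always_inline void
  atomic_inc(atomic_t *v)
  {
          atomic_add(1, v);
  }
  #define atomic_inc atomic_inc
  #endif

Previously these were plain 'static inline', which a size-optimizing compiler may emit out of line.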
Linus Torvalds [Thu, 11 Jun 2020 01:09:13 +0000 (18:09 -0700)]
Merge branch 'work.epoll' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull epoll update from Al Viro:
"epoll conversion to read_iter from Jens; I thought there might be more
epoll stuff this cycle, but uaccess took too much time"
* 'work.epoll' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
eventfd: convert to f_op->read_iter()
Linus Torvalds [Wed, 10 Jun 2020 23:05:54 +0000 (16:05 -0700)]
Merge branch 'work.sysctl' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull sysctl fixes from Al Viro:
"Fixups to regressions in sysctl series"
* 'work.sysctl' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
sysctl: reject gigantic reads/write to sysctl files
cdrom: fix an incorrect __user annotation on cdrom_sysctl_info
trace: fix an incorrect __user annotation on stack_trace_sysctl
random: fix an incorrect __user annotation on proc_do_entropy
net/sysctl: remove leftover __user annotations on neigh_proc_dointvec*
net/sysctl: use cpumask_parse in flow_limit_cpu_sysctl
Linus Torvalds [Wed, 10 Jun 2020 23:04:27 +0000 (16:04 -0700)]
Merge branch 'uaccess.i915' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull i915 uaccess updates from Al Viro:
"Low-hanging fruit in i915; there are several trickier followups, but
that'll wait for the next cycle"
* 'uaccess.i915' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
i915:get_engines(): get rid of pointless access_ok()
i915: alloc_oa_regs(): get rid of pointless access_ok()
i915 compat ioctl(): just use drm_ioctl_kernel()
i915: switch copy_perf_config_registers_or_number() to unsafe_put_user()
i915: switch query_{topology,engine}_info() to copy_to_user()
Linus Torvalds [Wed, 10 Jun 2020 23:02:54 +0000 (16:02 -0700)]
Merge branch 'uaccess.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull misc uaccess updates from Al Viro:
"Assorted uaccess patches for this cycle - the stuff that didn't fit
into thematic series"
* 'uaccess.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
bpf: make bpf_check_uarg_tail_zero() use check_zeroed_user()
x86: kvm_hv_set_msr(): use __put_user() instead of 32bit __clear_user()
user_regset_copyout_zero(): use clear_user()
TEST_ACCESS_OK _never_ had been checked anywhere
x86: switch cp_stat64() to unsafe_put_user()
binfmt_flat: don't use __put_user()
binfmt_elf_fdpic: don't use __... uaccess primitives
binfmt_elf: don't bother with __{put,copy_to}_user()
pselect6() and friends: take handling the combined 6th/7th args into helper
Linus Torvalds [Wed, 10 Jun 2020 21:46:54 +0000 (14:46 -0700)]
Merge branch 'rwonce/rework' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux
Pull READ/WRITE_ONCE rework from Will Deacon:
"This the READ_ONCE rework I've been working on for a while, which
bumps the minimum GCC version and improves code-gen on arm64 when
stack protector is enabled"
[ Side note: I'm _really_ tempted to raise the minimum gcc version to
4.9, so that we can just say that we require _Generic() support.
That would allow us to more cleanly handle a lot of the cases where we
depend on very complex macros with 'sizeof' or __builtin_choose_expr()
with __builtin_types_compatible_p() etc.
This branch has a workaround for sparse not handling _Generic(),
either, but that was already fixed in the sparse development branch,
so it's really just gcc-4.9 that we'd require. - Linus ]
* 'rwonce/rework' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux:
compiler_types.h: Use unoptimized __unqual_scalar_typeof for sparse
compiler_types.h: Optimize __unqual_scalar_typeof compilation time
compiler.h: Enforce that READ_ONCE_NOCHECK() access size is sizeof(long)
compiler-types.h: Include naked type in __pick_integer_type() match
READ_ONCE: Fix comment describing 2x32-bit atomicity
gcov: Remove old GCC 3.4 support
arm64: barrier: Use '__unqual_scalar_typeof' for acquire/release macros
locking/barriers: Use '__unqual_scalar_typeof' for load-acquire macros
READ_ONCE: Drop pointer qualifiers when reading from scalar types
READ_ONCE: Enforce atomicity for {READ,WRITE}_ONCE() memory accesses
READ_ONCE: Simplify implementations of {READ,WRITE}_ONCE()
arm64: csum: Disable KASAN for do_csum()
fault_inject: Don't rely on "return value" from WRITE_ONCE()
net: tls: Avoid assigning 'const' pointer to non-const pointer
netfilter: Avoid assigning 'const' pointer to non-const pointer
compiler/gcc: Raise minimum GCC version for kernel builds to 4.8
Linus Torvalds [Wed, 10 Jun 2020 21:12:15 +0000 (14:12 -0700)]
Merge tag 'docs-5.8-2' of git://git.lwn.net/linux
Pull more documentation updates from Jonathan Corbet:
"A handful of late-arriving docs fixes, along with a patch changing a
lot of HTTP links to HTTPS that had to be yanked and redone before the
first pull"
* tag 'docs-5.8-2' of git://git.lwn.net/linux:
docs/memory-barriers.txt/kokr: smp_mb__{before,after}_atomic(): update Documentation
Documentation: devres: add missing entry for devm_platform_get_and_ioremap_resource()
Replace HTTP links with HTTPS ones: documentation
docs: it_IT: address invalid reference warnings
doc: zh_CN: use doc reference to resolve undefined label warning
docs: Update the location of the LF NDA program
docs: dev-tools: coccinelle: underlines
Linus Torvalds [Wed, 10 Jun 2020 21:09:08 +0000 (14:09 -0700)]
Merge tag 'acpi-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more ACPI updates from Rafael Wysocki:
"Update the ACPICA code in the kernel to upstream revision 20200528
with the following changes:
- Remove some dead code from the acpidump utility (Bob Moore)
- Add new OperationRegion subtype keyword PlatformRtMechanism to the
compiler (Erik Kaneda)"
* tag 'acpi-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPICA: Update version to 20200528
ACPICA: iASL: add new OperationRegion subtype keyword PlatformRtMechanism
ACPICA: acpidump: Removed dead code from oslinuxtbl.c
Linus Torvalds [Wed, 10 Jun 2020 21:04:39 +0000 (14:04 -0700)]
Merge tag 'pm-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more power management updates from Rafael Wysocki:
"These are operating performance points (OPP) framework updates mostly,
including support for interconnect bandwidth in the OPP core, plus a
few cpufreq changes, including boost support in the CPPC cpufreq
driver, an ACPI device power management fix and a hibernation code
cleanup.
Specifics:
- Add support for interconnect bandwidth to the OPP core (Georgi
Djakov, Saravana Kannan, Sibi Sankar, Viresh Kumar).
- Add support for regulator enable/disable to the OPP core (Kamil
Konieczny).
- Add boost support to the CPPC cpufreq driver (Xiongfeng Wang).
- Make the tegra186 cpufreq driver set the
CPUFREQ_NEED_INITIAL_FREQ_CHECK flag (Mian Yousaf Kaukab).
- Prevent the ACPI power management from using power resources with
devices where the list of power resources for power state D0 (full
power) is missing (Rafael Wysocki).
- Annotate a hibernation-related function with __init (Christophe
JAILLET)"
* tag 'pm-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI: PM: Avoid using power resources if there are none for D0
cpufreq: CPPC: add SW BOOST support
cpufreq: change '.set_boost' to act on one policy
PM: hibernate: Add __init annotation to swsusp_header_init()
opp: Don't parse icc paths unnecessarily
opp: Remove bandwidth votes when target_freq is zero
opp: core: add regulators enable and disable
opp: Reorder the code for !target_freq case
opp: Expose bandwidth information via debugfs
cpufreq: dt: Add support for interconnect bandwidth scaling
opp: Update the bandwidth on OPP frequency changes
opp: Add sanity checks in _read_opp_key()
opp: Add support for parsing interconnect bandwidth
cpufreq: tegra186: add CPUFREQ_NEED_INITIAL_FREQ_CHECK flag
OPP: Add helpers for reading the binding properties
dt-bindings: opp: Introduce opp-peak-kBps and opp-avg-kBps bindings
Linus Torvalds [Wed, 10 Jun 2020 20:42:09 +0000 (13:42 -0700)]
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio updates from Michael Tsirkin:
- virtio-mem: paravirtualized memory hotplug
- support doorbell mapping for vdpa
- config interrupt support in ifc
- fixes all over the place
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (40 commits)
vhost/test: fix up after API change
virtio_mem: convert device block size into 64bit
virtio-mem: drop unnecessary initialization
ifcvf: implement config interrupt in IFCVF
vhost: replace -1 with VHOST_FILE_UNBIND in ioctls
vhost_vdpa: Support config interrupt in vdpa
ifcvf: ignore continuous setting same status value
virtio-mem: Don't rely on implicit compiler padding for requests
virtio-mem: Try to unplug the complete online memory block first
virtio-mem: Use -ETXTBSY as error code if the device is busy
virtio-mem: Unplug subblocks right-to-left
virtio-mem: Drop manual check for already present memory
virtio-mem: Add parent resource for all added "System RAM"
virtio-mem: Better retry handling
virtio-mem: Offline and remove completely unplugged memory blocks
mm/memory_hotplug: Introduce offline_and_remove_memory()
virtio-mem: Allow to offline partially unplugged memory blocks
mm: Allow to offline unmovable PageOffline() pages via MEM_GOING_OFFLINE
virtio-mem: Paravirtualized memory hotunplug part 2
virtio-mem: Paravirtualized memory hotunplug part 1
...
Linus Torvalds [Wed, 10 Jun 2020 20:25:40 +0000 (13:25 -0700)]
Merge tag 'for-linus-5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml
Pull UML updates from Richard Weinberger:
- Use fdatasync() in ubd
- Add a generic "fd" vector transport
- Minor cleanups and fixes
* tag 'for-linus-5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
um: virtio: Replace zero-length array with flexible-array
um: Use fdatasync() when mapping the UBD FSYNC command
um: Do not evaluate compiler's library path when cleaning
um: Neaten vu_err macro definition
um: Add a generic "fd" vector transport
um: Add include: memset() and memcpy() are in <string.h>