# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

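#
# Illustrative sketch (not part of the upstream file): because the per-arch
# Kconfig is sourced before the generic defaults below, an architecture can
# override a default simply by providing its own definition first. A
# hypothetical arch/foo/Kconfig could, for instance, take precedence over the
# generic "default 2" of PGTABLE_LEVELS defined later in this file:
#
#	config PGTABLE_LEVELS
#		int
#		default 4 if FOO_VA_BITS_48
#		default 3
#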
menu "General architecture-dependent options"

config ARCH_HAS_SUBPAGE_FAULTS
	bool
	help
	  Select if the architecture can check permissions at sub-page
	  granularity (e.g. arm64 MTE). The probe_user_*() functions
	  must be implemented.

config HOTPLUG_SMT
	bool

config SMT_NUM_THREADS_DYNAMIC
	bool

# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
config HOTPLUG_CORE_SYNC
	bool

# Basic CPU dead synchronization selected by architecture
config HOTPLUG_CORE_SYNC_DEAD
	bool
	select HOTPLUG_CORE_SYNC

# Full CPU synchronization with alive state selected by architecture
config HOTPLUG_CORE_SYNC_FULL
	bool
	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
	select HOTPLUG_CORE_SYNC

config HOTPLUG_SPLIT_STARTUP
	bool
	select HOTPLUG_CORE_SYNC_FULL

config HOTPLUG_PARALLEL
	bool
	select HOTPLUG_SPLIT_STARTUP

config GENERIC_ENTRY
	bool

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	select TASKS_RCU if PREEMPTION
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

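#
# Illustrative sketch (hypothetical arch/foo/Kconfig fragment, not part of this
# file): an architecture advertises kprobes support by selecting the HAVE_*
# capability symbols from its own Kconfig; only then does the "Kprobes" prompt
# above become selectable.
#
#	config FOO
#		def_bool y
#		select HAVE_KPROBES
#		select HAVE_KRETPROBES
#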
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	select OBJTOOL if HAVE_JUMP_LABEL_HACK
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. The update
	  of the condition is slower, but those are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPTION

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for
	  more information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
	bool
	help
	  Since kretprobes modifies the return address on the stack, the
	  stacktrace may see the kretprobe trampoline address instead
	  of the correct one. If the architecture stacktrace code and
	  unwinder can adjust such entries, select this configuration.

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

config HAVE_FUNCTION_DESCRIPTORS
	bool

config TRACE_IRQFLAGS_SUPPORT
	bool

config TRACE_IRQFLAGS_NMI_SUPPORT
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls ptrace_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls resume_user_mode_work()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

config ARCH_HAS_CPU_FINALIZE_INIT
	bool

# The architecture has a per-task state that includes the mm's PASID
config ARCH_HAS_CPU_PASID
	bool
	select IOMMU_MM_DATA

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_WANTS_NO_INSTR
	bool
	help
	  An architecture should select this if the noinstr macro is being used on
	  functions to denote that the toolchain should avoid instrumenting such
	  functions and is required for correctness.

config ARCH_32BIT_OFF_T
	bool
	depends on !64BIT
	help
	  All new 32-bit architectures should have 64-bit off_t type on
	  userspace side which corresponds to the loff_t kernel type. This
	  is the requirement for modern ABIs. Some existing architectures
	  still support 32-bit off_t. This option is enabled for all such
	  architectures explicitly.

# Selected by 64 bit architectures which have a 32 bit f_tinode in struct ustat
config ARCH_32BIT_USTAT_F_TINODE
	bool

config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for symbols
	  exported from assembly code.

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h
	  For example the kprobes-based event tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_RUST
	bool
	help
	  This symbol should be selected by an architecture if it
	  supports Rust.

config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also has support for calculating CPU cycle events
	  to determine how many clock cycles have elapsed in a given period.

config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	help
	  The arch provides its own hardlockup detector implementation instead
	  of the generic ones.

	  It uses the same command line parameters, and sysctl interface,
	  as the generic hardlockup detectors.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool

config MMU_GATHER_TABLE_FREE
	bool

config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE

config MMU_GATHER_PAGE_SIZE
	bool

config MMU_GATHER_NO_RANGE
	bool
	select MMU_GATHER_MERGE_VMAS

config MMU_GATHER_NO_FLUSH_CACHE
	bool

config MMU_GATHER_MERGE_VMAS
	bool

config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE

config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.

# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
# to/from kernel threads when the same mm is running on a lot of CPUs (a large
# multi-threaded application), by reducing contention on the mm refcount.
#
# This can be disabled if the architecture ensures no CPUs are using an mm as a
# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
# final exit(2) TLB flush, for example.
#
# To implement this, an arch *must*:
# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
# converted already).
config MMU_LAZY_TLB_REFCOUNT
	def_bool y
	depends on !MMU_LAZY_TLB_SHOOTDOWN

# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
# mm as a lazy tlb beyond its last reference count, by shooting down these
# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
# be using the mm as a lazy tlb, so that they may switch themselves to using
# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
# may be using mm as a lazy tlb mm.
#
# To implement this, an arch *must*:
# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
#   at least all possible CPUs in which the mm is lazy.
# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
config MMU_LAZY_TLB_SHOOTDOWN
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WEAK_RELEASE_ACQUIRE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide an override for __NR_seccomp_sigreturn,
	  and compat syscalls if the asm-generic/seccomp.h defaults need adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32

config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.

config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution. By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.

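#
# Illustrative sketch (hypothetical arch/foo/Kconfig fragment, not part of this
# file): once an architecture implements the syscall_*() helpers and the other
# requirements listed under HAVE_ARCH_SECCOMP_FILTER above, it opts in with a
# single select; SECCOMP and SECCOMP_FILTER then become available for it.
#
#	config FOO
#		def_bool y
#		select HAVE_ARCH_SECCOMP_FILTER
#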
config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

config SECCOMP_CACHE_DEBUG
	bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
	depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
	depends on PROC_FS
	help
	  This enables the /proc/pid/seccomp_cache interface to monitor
	  seccomp cache data. The file format is subject to change. Reading
	  the file requires CAP_SYS_ADMIN.

	  This option is for debugging only. Enabling presents the risk that
	  an adversary may be able to infer the seccomp filter logic.

	  If unsure, say N.

config HAVE_ARCH_STACKLEAK
	bool
	help
	  An architecture should select this if it has the code which
	  fills the used part of the kernel stack with the STACKLEAK_POISON
	  value before returning from system calls.

config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config STACKPROTECTOR_STRONG
	bool "Strong Stack Protector"
	depends on STACKPROTECTOR
	depends on $(cc-option,-fstack-protector-strong)
	default y
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

config ARCH_SUPPORTS_SHADOW_CALL_STACK
	bool
	help
	  An architecture should select this if it supports the compiler's
	  Shadow Call Stack and implements runtime support for shadow stack
	  switching.

config SHADOW_CALL_STACK
	bool "Shadow Call Stack"
	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	depends on MMU
	help
	  This option enables the compiler's Shadow Call Stack, which
	  uses a shadow stack to protect function return addresses from
	  being overwritten by an attacker. More information can be found
	  in the compiler's documentation:

	  - Clang: https://clang.llvm.org/docs/ShadowCallStack.html
	  - GCC: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html#Instrumentation-Options

	  Note that security guarantees in the kernel differ from the
	  ones documented for user space. The kernel must store addresses
	  of shadow stacks in memory, which means an attacker capable of
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.

config DYNAMIC_SCS
	bool
	help
	  Set by the arch code if it relies on code patching to insert the
	  shadow call stack push and pop instructions rather than on the
	  compiler.

config LTO
	bool
	help
	  Selected if the kernel will be built using the compiler's LTO feature.

config LTO_CLANG
	bool
	select LTO
	help
	  Selected if the kernel will be built using Clang's LTO feature.

config ARCH_SUPPORTS_LTO_CLANG
	bool
	help
	  An architecture should select this option if it supports:
	  - compiling with Clang,
	  - compiling inline assembly with Clang's integrated assembler,
	  - and linking with LLD.

config ARCH_SUPPORTS_LTO_CLANG_THIN
	bool
	help
	  An architecture should select this option if it can support Clang's
	  ThinLTO mode.

config HAS_LTO_CLANG
	def_bool y
	depends on CC_IS_CLANG && LD_IS_LLD && AS_IS_LLVM
	depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
	depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
	depends on ARCH_SUPPORTS_LTO_CLANG
	depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
	# https://github.com/ClangBuiltLinux/linux/issues/1721
	depends on (!KASAN || KASAN_HW_TAGS || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on (!KCOV || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on !GCOV_KERNEL
	help
	  The compiler and Kconfig options support building with Clang's
	  LTO.

choice
	prompt "Link Time Optimization (LTO)"
	default LTO_NONE
	help
	  This option enables Link Time Optimization (LTO), which allows the
	  compiler to optimize binaries globally.

	  If unsure, select LTO_NONE. Note that LTO is very resource-intensive
	  so it's disabled by default.

config LTO_NONE
	bool "None"
	help
	  Build the kernel normally, without Link Time Optimization (LTO).

config LTO_CLANG_FULL
	bool "Clang Full LTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG
	depends on !COMPILE_TEST
	select LTO_CLANG
	help
	  This option enables Clang's full Link Time Optimization (LTO), which
	  allows the compiler to optimize the kernel globally. If you enable
	  this option, the compiler generates LLVM bitcode instead of ELF
	  object files, and the actual compilation from bitcode happens at
	  the LTO link step, which may take several minutes depending on the
	  kernel configuration. More information can be found from LLVM's
	  documentation:

	    https://llvm.org/docs/LinkTimeOptimization.html

	  During link time, this option can use a large amount of RAM, and
	  may take much longer than the ThinLTO option.

config LTO_CLANG_THIN
	bool "Clang ThinLTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG && ARCH_SUPPORTS_LTO_CLANG_THIN
	select LTO_CLANG
	help
	  This option enables Clang's ThinLTO, which allows for parallel
	  optimization and faster incremental compiles compared to the
	  CONFIG_LTO_CLANG_FULL option. More information can be found
	  from Clang's documentation:

	    https://clang.llvm.org/docs/ThinLTO.html

	  If unsure, say Y.
endchoice

config ARCH_SUPPORTS_CFI_CLANG
	bool
	help
	  An architecture should select this option if it can support Clang's
	  Control-Flow Integrity (CFI) checking.

config ARCH_USES_CFI_TRAPS
	bool

config CFI_CLANG
	bool "Use Clang's Control Flow Integrity (CFI)"
	depends on ARCH_SUPPORTS_CFI_CLANG
	depends on $(cc-option,-fsanitize=kcfi)
	help
	  This option enables Clang's forward-edge Control Flow Integrity
	  (CFI) checking, where the compiler injects a runtime check to each
	  indirect function call to ensure the target is a valid function with
	  the correct static type. This restricts possible call targets and
	  makes it more difficult for an attacker to exploit bugs that allow
	  the modification of stored function pointers. More information can be
	  found from Clang's documentation:

	    https://clang.llvm.org/docs/ControlFlowIntegrity.html

config CFI_PERMISSIVE
	bool "Use CFI in permissive mode"
	depends on CFI_CLANG
	help
	  When selected, Control Flow Integrity (CFI) violations result in a
	  warning instead of a kernel panic. This option should only be used
	  for finding indirect call type mismatches during development.

	  If unsure, say N.

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING_USER
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
	  optimized behind a static key or through the slow path using the
	  TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
	  already protected inside ct_irq_enter/ct_irq_exit() but preemption
	  or signal handling on irq exit still needs to be protected.

config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
	bool
	help
	  Architecture neither relies on exception_enter()/exception_exit()
	  nor on schedule_user(). Also preempt_schedule_notrace() and
	  preempt_schedule_irq() can't be called in a preemptible section
	  while context tracking is CONTEXT_USER. This feature reflects a sane
	  entry implementation where the following requirements are met on
	  critical entry code, ie: before user_exit() or after user_enter():

	  - Critical entry code isn't preemptible (or better yet:
	    not interruptible).
	  - No use of RCU read side critical sections, unless ct_nmi_enter()
	    got called.
	  - No use of instrumentation, unless instrumentation_begin() got
	    called.

config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and syscall slow path to implement context
	  tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

#
# Archs that select this would be capable of PMD-sized vmaps (i.e.,
# arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
# must be used to enable allocations to use hugepages.
#
config HAVE_ARCH_HUGE_VMALLOC
	depends on HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

# Archs that want to use pmd_mkwrite on kernel memory need it defined even
# if there are no userspace memory management features that use it
config ARCH_WANT_KERNEL_PMD_MKWRITE
	bool

config ARCH_WANT_PMD_MKWRITE
	def_bool TRANSPARENT_HUGEPAGE || ARCH_WANT_KERNEL_PMD_MKWRITE

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
	bool
	help
	  For architectures like powerpc/32 which have constraints on module
	  allocation and need to allocate module data outside of module area.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  Architecture executes not only the irq handler but also irq_exit()
	  on the irq stack. This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config HAVE_SOFTIRQ_ON_OWN_STACK
	bool
	help
	  Architecture provides a function to run __do_softirq() on a
	  separate stack.

config SOFTIRQ_ON_OWN_STACK
	def_bool HAVE_SOFTIRQ_ON_OWN_STACK && !PREEMPT_RT

config ALTERNATE_USER_ADDRESS_SPACE
	bool
	help
	  Architectures set this when the CPU uses separate address
	  spaces for kernel and user space pointers. In this case, the
	  access_ok() check on a __user pointer is skipped.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable

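#
# Illustrative sketch (hypothetical values, not part of the upstream file): an
# architecture that selects HAVE_ARCH_MMAP_RND_BITS is expected to supply the
# bounds used by the ARCH_MMAP_RND_BITS prompt above, e.g. in arch/foo/Kconfig:
#
#	config ARCH_MMAP_RND_BITS_MIN
#		default 8
#
#	config ARCH_MMAP_RND_BITS_MAX
#		default 24 if 64BIT
#		default 16
#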
config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable

config HAVE_ARCH_COMPAT_MMAP_BASES
	bool
	help
	  This allows 64-bit applications to invoke the 32-bit mmap() syscall
	  and, vice versa, 32-bit applications to call the 64-bit mmap().
	  Required for applications doing different bitness syscalls.

config PAGE_SIZE_LESS_THAN_64KB
	def_bool y
	depends on !ARM64_64K_PAGES
	depends on !PAGE_SIZE_64KB
	depends on !PARISC_PAGE_SIZE_64KB
	depends on PAGE_SIZE_LESS_THAN_256KB

config PAGE_SIZE_LESS_THAN_256KB
	def_bool y
	depends on !PAGE_SIZE_256KB

# This allows using a set of generic functions to determine mmap base
# address by giving priority to the top-down scheme only if the process
# is not in legacy mode (compat task, unlimited stack size or
# sysctl_legacy_va_layout).
# An architecture that selects this option can provide its own version of:
# - STACK_RND_MASK
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	bool
	depends on MMU
	select ARCH_HAS_ELF_RANDOMIZE

config HAVE_OBJTOOL
	bool

config HAVE_JUMP_LABEL_HACK
	bool

config HAVE_NOINSTR_HACK
	bool

config HAVE_NOINSTR_VALIDATION
	bool

config HAVE_UACCESS_VALIDATION
	bool
	select OBJTOOL

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports objtool compile-time frame pointer rule
	  validation.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has either save_stack_trace_tsk_reliable() or
	  arch_stack_walk_reliable() function which only returns a stack trace
	  if it can guarantee the trace is reliable.

config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config HAVE_ARCH_NVRAM_OPS
	bool

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2)

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config COMPAT_32BIT_TIME
	bool "Provide system calls for 32-bit time_t"
	default !64BIT || COMPAT
	help
	  This enables 32 bit time_t support in addition to 64 bit time_t support.
	  This is relevant on all 32-bit architectures, and 64-bit architectures
	  as part of compat syscall handling.

config ARCH_NO_PREEMPT
	bool

config ARCH_SUPPORTS_RT
	bool

config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
	help
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  To use this with software KASAN modes, the architecture must support
	  backing virtual mappings with real shadow memory, and KASAN_VMALLOC
	  must be enabled.

config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stack
	  offset randomization with calls to add_random_kstack_offset()
	  during syscall entry and choose_random_kstack_offset() during
	  syscall exit. Careful removal of -fstack-protector-strong and
	  -fstack-protector should also be applied to the entry code and
	  closely examined, as the artificial stack bump looks like an array
	  to the compiler, so it will attempt to add canary checks regardless
	  of the static branch state.

config RANDOMIZE_KSTACK_OFFSET
	bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
	default y
	depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
	help
	  The kernel stack offset can be randomized (after pt_regs) by
	  roughly 5 bits of entropy, frustrating memory corruption
	  attacks that depend on stack address determinism or
	  cross-syscall address exposures.

	  The feature is controlled via the "randomize_kstack_offset=on/off"
	  kernel boot param, and if turned off has zero overhead due to its use
	  of static branches (see JUMP_LABEL).

	  If unsure, say Y.

config RANDOMIZE_KSTACK_OFFSET_DEFAULT
	bool "Default state of kernel stack offset randomization"
	depends on RANDOMIZE_KSTACK_OFFSET
	help
	  Kernel stack offset randomization is controlled by kernel boot param
	  "randomize_kstack_offset=on/off", and this config chooses the default
	  boot state.

config ARCH_OPTIONAL_KERNEL_RWX
	def_bool n

config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	def_bool n

config ARCH_HAS_STRICT_KERNEL_RWX
	def_bool n

config STRICT_KERNEL_RWX
	bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_KERNEL_RWX
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, kernel text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. executing the heap
	  or modifying text)

	  These features are considered standard security practice these days.
	  You should say Y here in almost all cases.

config ARCH_HAS_STRICT_MODULE_RWX
	def_bool n

config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, module text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. writing to text)

# select if the architecture provides an asm/dma-direct.h header
config ARCH_HAS_PHYS_TO_DMA
	bool

config HAVE_ARCH_COMPILER_H
	bool
	help
	  An architecture can select this if it provides an
	  asm/compiler.h header that should be included after
	  linux/compiler-*.h in order to override macro definitions that those
	  headers generally provide.

config HAVE_ARCH_PREL32_RELOCATIONS
	bool
	help
	  May be selected by an architecture if it supports place-relative
	  32-bit relocations, both in the toolchain and in the module loader,
	  in which case relative references can be used in special sections
	  for PCI fixup, initcalls etc which are only half the size on 64 bit
	  architectures, and don't require runtime relocation on relocatable
	  kernels.

config ARCH_USE_MEMREMAP_PROT
	bool

config LOCK_EVENT_COUNTS
	bool "Locking event counts collection"
	depends on DEBUG_FS
	help
	  Enable light-weight counting of various locking related events
	  in the system with minimal performance impact. This reduces
	  the chance of application behavior change because of timing
	  differences. The counts are reported via debugfs.

# Select if the architecture has support for applying RELR relocations.
config ARCH_HAS_RELR
	bool

config RELR
	bool "Use RELR relocation packing"
	depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR
	default y
	help
	  Store the kernel's dynamic relocations in the RELR relocation packing
	  format. Requires a compatible linker (LLD supports this feature), as
	  well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy
	  are compatible).

config ARCH_HAS_MEM_ENCRYPT
	bool

config ARCH_HAS_CC_PLATFORM
	bool

config HAVE_SPARSE_SYSCALL_NR
	bool
	help
	  An architecture should select this if its syscall numbering is sparse
	  to save space. For example, MIPS architecture has a syscall array with
	  entries at 4000, 5000 and 6000 locations. This option turns on syscall
	  related optimizations for a given architecture.

config ARCH_HAS_VDSO_DATA
	bool

config HAVE_STATIC_CALL
	bool

config HAVE_STATIC_CALL_INLINE
	bool
	depends on HAVE_STATIC_CALL
	select OBJTOOL

config HAVE_PREEMPT_DYNAMIC
	bool

config HAVE_PREEMPT_DYNAMIC_CALL
	bool
	depends on HAVE_STATIC_CALL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static calls.

	  Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
	  preemption function will be patched directly.

	  Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
	  call to a preemption function will go through a trampoline, and the
	  trampoline will be patched.

	  It is strongly advised to support inline static call to avoid any
	  overhead.

config HAVE_PREEMPT_DYNAMIC_KEY
	bool
	depends on HAVE_ARCH_JUMP_LABEL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static keys.

	  Each preemption function will be given an early return based on a
	  static key. This should have slightly lower overhead than non-inline
	  static calls, as this effectively inlines each trampoline into the
	  start of its callee. This may avoid redundant work, and may
	  integrate better with CFI schemes.

	  This will have greater overhead than using inline static calls as
	  the call to the preemption function cannot be entirely elided.

config ARCH_WANT_LD_ORPHAN_WARN
	bool
	help
	  An arch should select this symbol once all linker sections are explicitly
	  included, size-asserted, or discarded in the linker scripts. This is
	  important because we never want expected sections to be placed heuristically
	  by the linker, since the locations of such sections can change between linker
	  versions.

config HAVE_ARCH_PFN_VALID
	bool

config ARCH_SUPPORTS_DEBUG_PAGEALLOC
	bool

config ARCH_SUPPORTS_PAGE_TABLE_CHECK
	bool

config ARCH_SPLIT_ARG64
	bool
	help
	  If a 32-bit architecture requires 64-bit arguments to be split into
	  pairs of 32-bit arguments, select this option.

config ARCH_HAS_ELFCORE_COMPAT
	bool

config ARCH_HAS_PARANOID_L1D_FLUSH
	bool

config ARCH_HAVE_TRACE_MMIO_ACCESS
	bool

config DYNAMIC_SIGFRAME
	bool

# Select, if arch has a named attribute group bound to NUMA device nodes.
config HAVE_ARCH_NODE_DEV_GROUP
	bool

config ARCH_HAS_HW_PTE_YOUNG
	bool
	help
	  Architectures that select this option are capable of setting the
	  accessed bit in PTE entries when using them as part of linear address
	  translations. Architectures that require runtime check should select
	  this option and override arch_has_hw_pte_young().

config ARCH_HAS_NONLEAF_PMD_YOUNG
	bool
	help
	  Architectures that select this option are capable of setting the
	  accessed bit in non-leaf PMD entries when using them as part of linear
	  address translations. Page table walkers that clear the accessed bit
	  may use this capability to reduce their search space.

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"

config FUNCTION_ALIGNMENT_4B
	bool

config FUNCTION_ALIGNMENT_8B
	bool

config FUNCTION_ALIGNMENT_16B
	bool

config FUNCTION_ALIGNMENT_32B
	bool

config FUNCTION_ALIGNMENT_64B
	bool

config FUNCTION_ALIGNMENT
	int
	default 64 if FUNCTION_ALIGNMENT_64B
	default 32 if FUNCTION_ALIGNMENT_32B
	default 16 if FUNCTION_ALIGNMENT_16B
	default 8 if FUNCTION_ALIGNMENT_8B
	default 4 if FUNCTION_ALIGNMENT_4B
	default 0

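#
# Illustrative sketch (hypothetical, not part of the upstream file): an
# architecture requests a minimum function alignment by selecting one of the
# FUNCTION_ALIGNMENT_*B symbols; the first matching "default" above then
# resolves FUNCTION_ALIGNMENT to the corresponding byte value used by the
# build system.
#
#	config FOO
#		def_bool y
#		select FUNCTION_ALIGNMENT_16B	# FUNCTION_ALIGNMENT resolves to 16
#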
endmenu