Kui-Feng Lee [Fri, 19 Jan 2024 22:50:03 +0000 (14:50 -0800)]
libbpf: Find correct module BTFs for struct_ops maps and progs.
Locate the module BTFs for struct_ops maps and progs and pass them to the
kernel. This ensures that the kernel correctly resolves type IDs from the
appropriate module BTFs.
For the map of a struct_ops object, the FD of the module BTF is stored in
the bpf_map to keep a reference to the module BTF. The FD is passed to the
kernel as value_type_btf_obj_fd when the struct_ops object is loaded.
For a bpf_struct_ops prog, attach_btf_obj_fd of bpf_prog is the FD of a
module BTF in the kernel.
Kui-Feng Lee [Fri, 19 Jan 2024 22:50:01 +0000 (14:50 -0800)]
bpf: validate value_type
A value_type should consist of three components: refcnt, state, and data.
refcnt and state have been moved to struct bpf_struct_ops_common_value to
make it easier to check the value type.
Kui-Feng Lee [Fri, 19 Jan 2024 22:50:00 +0000 (14:50 -0800)]
bpf: hold module refcnt in bpf_struct_ops map creation and prog verification.
Hold a module refcnt to ensure that a module remains accessible whenever a
struct_ops object of a struct_ops type provided by that module is still in use.
struct bpf_struct_ops_map doesn't hold a refcnt to the btf anymore since the
module already holds a refcnt to its btf. But struct_ops programs are
different. They hold their associated btf, not the module, since they only
need the btf to verify their types (signatures).
However, the verifier temporarily holds a refcnt on the associated module of
a struct_ops type while verifying a struct_ops prog. The verifier needs the
verifier operators (struct bpf_verifier_ops) provided by the owner module to
verify a prog's data accesses, provide information, and generate code.
This patch also adds a count of links (links_cnt) to bpf_struct_ops_map. It
prevents bpf_struct_ops_map_put_progs() from accessing the btf after
module_put() has been called in bpf_struct_ops_map_free().
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:59 +0000 (14:49 -0800)]
bpf: pass attached BTF to the bpf_struct_ops subsystem
Pass the fd of a btf from the userspace to the bpf() syscall, and then
convert the fd into a btf. The btf is generated from the module that
defines the target BPF struct_ops type.
In order to inform the kernel about the module that defines the target
struct_ops type, the userspace program needs to provide a btf fd for the
respective module's btf. This btf contains essential information on the
types defined within the module, including the target struct_ops type.
A btf fd must be provided to the kernel for struct_ops maps and for the bpf
programs attached to those maps.
In the case of the bpf programs, the attach_btf_obj_fd parameter is passed
as part of the bpf_attr and is converted into a btf. This btf is then
stored in the prog->aux->attach_btf field. Here, it just lets the verifier
access attach_btf directly.
In the case of struct_ops maps, a btf fd is passed as value_type_btf_obj_fd
of bpf_attr. The bpf_struct_ops_map_alloc() function converts the fd to a
btf and stores it as st_map->btf. A flag BPF_F_VTYPE_BTF_OBJ_FD is added
for map_flags to indicate that the value of value_type_btf_obj_fd is set.
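Below is a minimal, hedged sketch of the map-create side from user space,
assuming a uapi header that already carries the new value_type_btf_obj_fd
field and BPF_F_VTYPE_BTF_OBJ_FD flag described above; the helper name and
how the module BTF fd and type id were obtained are illustrative only.

  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  /* Hedged sketch: create a struct_ops map whose value type lives in a
   * module BTF. value_type_btf_obj_fd and BPF_F_VTYPE_BTF_OBJ_FD are the
   * additions described in this patch and need a new enough uapi header.
   */
  static int create_struct_ops_map(int mod_btf_fd, __u32 value_type_id,
                                   __u32 value_size)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.map_type = BPF_MAP_TYPE_STRUCT_OPS;
          attr.key_size = sizeof(int);
          attr.value_size = value_size;
          attr.max_entries = 1;
          attr.map_flags = BPF_F_VTYPE_BTF_OBJ_FD;        /* value_type_btf_obj_fd is set */
          attr.btf_vmlinux_value_type_id = value_type_id; /* type id inside the module BTF */
          attr.value_type_btf_obj_fd = mod_btf_fd;        /* fd of the module BTF */

          return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
  }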
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:58 +0000 (14:49 -0800)]
bpf: lookup struct_ops types from a given module BTF.
This is a preparation for searching for struct_ops types from a specified
module. BTF is always btf_vmlinux now. This patch passes a pointer of BTF
to bpf_struct_ops_find_value() and bpf_struct_ops_find(). Once the new
registration API of struct_ops types is used, other BTFs besides
btf_vmlinux can also be passed to them.
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:57 +0000 (14:49 -0800)]
bpf: pass btf object id in bpf_map_info.
Include the btf object id (btf_obj_id) in bpf_map_info so that tools (e.g.
bpftool struct_ops dump) know the correct kernel btf to use to look up
type information of struct_ops types.
Since struct_ops types can be defined and registered in a module, the
type information of a struct_ops type is defined in the btf of the
module defining it. Userspace tools need to know which btf is for
the module defining a struct_ops type.
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:56 +0000 (14:49 -0800)]
bpf: make struct_ops_map support btfs other than btf_vmlinux.
Once new struct_ops can be registered from modules, btf_vmlinux is no
longer the only btf that struct_ops_map would face. st_map should remember
what btf it should use to get type information.
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:55 +0000 (14:49 -0800)]
bpf: add struct_ops_tab to btf.
Maintain a registry of registered struct_ops types in the per-btf (module)
struct_ops_tab. This registry allows for easy lookup of struct_ops types
that are registered by a specific module.
It is preparation work for supporting kernel module struct_ops in a
later patch. Each struct_ops will be registered under its own kernel
module btf and will be stored in the newly added btf->struct_ops_tab. The
bpf verifier and bpf syscall (e.g. prog and map cmd) can find the
struct_ops and its btf type/size/id... information from
btf->struct_ops_tab.
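As a rough, hedged sketch (not necessarily the exact kernel layout), the
per-btf table can be pictured as a growable array of struct_ops descriptors
owned by the module's BTF:

  /* Hedged sketch of a per-BTF struct_ops registry; field names are
   * illustrative and may differ from the final kernel structure.
   */
  struct btf_struct_ops_tab {
          u32 cnt;                          /* number of registered struct_ops types */
          u32 capacity;                     /* allocated slots */
          struct bpf_struct_ops_desc ops[]; /* one descriptor per registered type */
  };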
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:54 +0000 (14:49 -0800)]
bpf, net: introduce bpf_struct_ops_desc.
Move some members of bpf_struct_ops to bpf_struct_ops_desc. type_id is
no longer available in bpf_struct_ops. Modules should get it from the btf
received by the kmod's init function.
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:53 +0000 (14:49 -0800)]
bpf: get type information with BTF_ID_LIST
Get ready to remove bpf_struct_ops_init() in the future. By using
BTF_ID_LIST, it is possible to gather type information at build time
instead of at runtime.
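For illustration, a hedged sketch of the build-time approach with the
existing BTF_ID_LIST/BTF_ID macros; tcp_congestion_ops is used here only
as a well-known struct_ops type name:

  #include <linux/btf_ids.h>

  /* The ids below are resolved by resolve_btfids at build time instead of
   * being looked up at runtime in bpf_struct_ops_init().
   */
  BTF_ID_LIST(bpf_struct_ops_btf_ids)
  BTF_ID(struct, tcp_congestion_ops)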
Kui-Feng Lee [Fri, 19 Jan 2024 22:49:52 +0000 (14:49 -0800)]
bpf: refactor struct_ops type initialization into a function.
Move the majority of the code to bpf_struct_ops_init_one(), which can then
be utilized for the initialization of newly registered dynamically
allocated struct_ops types in the following patches.
====================
bpf: Add cookies retrieval for perf/kprobe multi links
hi,
this patchset adds support for retrieving cookies from the existing tracing
links that did not yet support it, plus changes to bpftool to display
them. It's a leftover we discussed some time ago [1].
Each cookie is attached to a specific address, and because we sort addresses
before printing, we need to sort cookies the same way, hence adding
struct addr_cookie to keep and sort them together (see the sketch below).
Also add the missing dd.sym_count check to show_kprobe_multi_json.
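A hedged sketch of the bpftool-side pairing (the exact bpftool code may
differ): keep each address together with its cookie so a single sort keeps
them aligned.

  #include <stdlib.h>
  #include <linux/types.h>

  struct addr_cookie {
          __u64 addr;
          __u64 cookie;
  };

  static int cmp_addr_cookie(const void *a, const void *b)
  {
          const struct addr_cookie *x = a, *y = b;

          if (x->addr == y->addr)
                  return 0;
          return x->addr < y->addr ? -1 : 1;
  }

  /* qsort(data, dd.sym_count, sizeof(data[0]), cmp_addr_cookie); */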
Jiri Olsa [Fri, 19 Jan 2024 11:04:59 +0000 (12:04 +0100)]
bpf: Store cookies in kprobe_multi bpf_link_info data
Store cookies in the kprobe_multi bpf_link_info data. The cookies
field is optional and, if provided, it needs to be an array of
__u64 of kprobe_multi.count length.
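A hedged user-space sketch of how the new field could be consumed (assuming
a uapi header with the new kprobe_multi.cookies member); the two-call
pattern mirrors how addrs are already retrieved:

  #include <errno.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <bpf/bpf.h>

  static int dump_kprobe_multi_cookies(int link_fd)
  {
          struct bpf_link_info info = {};
          __u32 len = sizeof(info);
          __u64 *cookies;
          int err;

          /* First call: learn kprobe_multi.count. */
          err = bpf_obj_get_info_by_fd(link_fd, &info, &len);
          if (err)
                  return err;

          cookies = calloc(info.kprobe_multi.count, sizeof(*cookies));
          if (!cookies)
                  return -ENOMEM;

          /* Second call: the kernel fills the user-provided cookies array. */
          info.kprobe_multi.cookies = (__u64)(uintptr_t)cookies;
          err = bpf_obj_get_info_by_fd(link_fd, &info, &len);

          /* ... print cookies[0..count-1] ... */
          free(cookies);
          return err;
  }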
The "p" constraint is a tricky one. It is documented in the GCC manual
section "Simple Constraints":
An operand that is a valid memory address is allowed. This is for
``load address'' and ``push address'' instructions.
p in the constraint must be accompanied by address_operand as the
predicate in the match_operand. This predicate interprets the mode
specified in the match_operand as the mode of the memory reference for
which the address would be valid.
There are two problems:
1. It is questionable whether that constraint was ever intended to be
used in inline assembly templates, because its behavior really
depends on compiler internals. A "memory address" is not the same
as a "memory operand" or a "memory reference" (constraint "m"), and
in fact its usage in the template above results in an error on both
x86_64-linux-gnu and bpf-unknown-none:
foo.c: In function ‘bar’:
foo.c:6:3: error: invalid 'asm': invalid expression as operand
6 | asm volatile ("r1 = %[jorl]" : : [jorl]"p"(&jorl));
| ^~~
I would assume the same happens with aarch64, riscv, and most/all
other targets in GCC, that do not accept operands of the form A + B
that are not wrapped either in a const or in a memory reference.
To avoid that error, the usage of the "p" constraint in internal GCC
instruction templates is supposed to be complemented by the 'a'
modifier, like in:
%aN means expect operand N to be a memory address
(not a memory reference!) and print a reference
to that address.
That works because when the modifier 'a' is found, GCC prints an
"operand address", which is not the same as an "operand".
But...
2. Even if we used the internal 'a' modifier (we shouldn't), the 'rN =
rM' instruction really requires a register argument. In cases
involving automatics, like in the examples above, we easily end up with:
bar:
#APP
r1 = r10-4
#NO_APP
In other cases we could conceivably also end up with a 64-bit label that
may overflow the 32-bit immediate operand of `rN = imm32'
instructions:
r1 = foo
All of which is clearly wrong.
clang happens to do "the right thing" in the current usage of __imm_ptr
in the BPF tests, because even with -O2 it seems to "reload" the
fp-relative address of the automatic to a register like in:
bar:
r1 = r10
r1 += -4
#APP
r1 = r1
#NO_APP
Which is what GCC would generate with -O0. Whether this is by chance
or by design, the compiler shouldn't be expected to do that reload
driven by the "p" constraint.
This patch changes the usage of the "p" constraint in the BPF
selftests macros to use the "r" constraint instead. If a register is
what is required, we should let the compiler know.
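For reference, the same template from the error above rewritten with the
"r" constraint the selftests now use; the compiler then materializes the
address in a register itself:

  int bar(void)
  {
          int jorl;

          /* "r" asks for a register operand, so the compiler reloads the
           * fp-relative address of 'jorl' into a register before the asm.
           */
          asm volatile ("r1 = %[jorl]" : : [jorl]"r"(&jorl));
          return 0;
  }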
The eth_buffer, iph_buffer_tcp and other automatics are fixed size
only if the compiler optimizes away the constant global variables.
clang does this, but GCC does not, turning these automatics into
variable length arrays.
This patch removes the global variables and turns these values into
preprocessor constants. This makes the selftest build properly
with GCC.
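A hedged illustration of the change (the buffer name comes from the commit;
the macro name and value are hypothetical):

  #include <linux/types.h>

  /* Before: a const global that GCC does not fold away, so the buffer
   * below becomes a variable length array:
   *
   *   const volatile __u32 eth_buffer_len = ...;
   *   __u8 eth_buffer[eth_buffer_len];
   *
   * After: a preprocessor constant, so the array is fixed-size for both
   * clang and GCC.
   */
  #define ETH_BUFFER_LEN 14
  __u8 eth_buffer[ETH_BUFFER_LEN];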
Andrii Nakryiko [Fri, 19 Jan 2024 21:02:01 +0000 (13:02 -0800)]
libbpf: call dup2() syscall directly
We've run into issues with using the dup2() API in a production setting, where
libbpf is linked into a large production environment and ends up calling
unintended custom implementations of dup2(). These custom implementations
don't provide the atomic FD replacement guarantees of the dup2() syscall,
leading to subtle and hard-to-debug issues.
To prevent this in the future and guarantee that no libc implementation
will do their own custom non-atomic dup2() implementation, call dup2()
syscall directly with syscall(SYS_dup2).
Note that some architectures don't seem to provide dup2 and have dup3
instead. Try to detect this and pick the best syscall.
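A hedged sketch of that fallback (the real libbpf helper may differ in
detail): use SYS_dup2 where the architecture defines it, otherwise fall
back to SYS_dup3 with flags 0.

  #include <unistd.h>
  #include <sys/syscall.h>

  static int sys_dup2(int oldfd, int newfd)
  {
  #ifdef __NR_dup2
          return syscall(__NR_dup2, oldfd, newfd);
  #else
          /* dup3() differs from dup2() when oldfd == newfd; the FD
           * replacement use case here does not rely on that corner case.
           */
          return syscall(__NR_dup3, oldfd, newfd, 0);
  #endif
  }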
Hou Tao [Fri, 19 Jan 2024 10:25:28 +0000 (18:25 +0800)]
bpf, arm64: Enable the inline of bpf_kptr_xchg()
ARM64 bpf jit satisfies the following two conditions:
1) support BPF_XCHG() on pointer-sized word.
2) the implementation of xchg is the same as atomic_xchg() on
pointer-sized words. Both of these two functions use arch_xchg() to
implement the exchange.
So enable the inline of bpf_kptr_xchg() for arm64 bpf jit.
Dave Thaler [Thu, 18 Jan 2024 23:29:54 +0000 (15:29 -0800)]
bpf, docs: Clarify that MOVSX is only for BPF_X not BPF_K
Per discussion on the mailing list at
https://mailarchive.ietf.org/arch/msg/bpf/uQiqhURdtxV_ZQOTgjCdm-seh74/
the MOVSX operation is only defined to support register extension.
The document didn't previously state this and incorrectly implied
that one could use an immediate value.
bpf: Define struct bpf_tcp_req_attrs when CONFIG_SYN_COOKIES=n.
kernel test robot reported the warning below:
>> net/core/filter.c:11842:13: warning: declaration of 'struct bpf_tcp_req_attrs' will not be visible outside of this function [-Wvisibility]
11842 | struct bpf_tcp_req_attrs *attrs, int attrs__sz)
| ^
1 warning generated.
struct bpf_tcp_req_attrs is defined under CONFIG_SYN_COOKIES
but used in kfunc without the config.
Let's move struct bpf_tcp_req_attrs definition outside of
CONFIG_SYN_COOKIES guard.
Hao Sun [Wed, 17 Jan 2024 09:40:12 +0000 (10:40 +0100)]
bpf: Refactor ptr alu checking rules to allow alu explicitly
Current checking rules are structured to disallow alu on particular ptr
types explicitly, so default cases are allowed implicitly. This may lead
to newly added ptr types being allowed unexpectedly. So restructure it to
allow alu explicitly. The tradeoff is mainly a bit more cases added in
the switch. The following table from Eduard summarizes the rules:
The refactored rules are equivalent to the original ones. Note that
PTR_TO_FUNC and CONST_PTR_TO_DYNPTR are not rejected here because: (1)
check_mem_access() rejects load/store on those ptrs, and those ptrs
passed to calls with an offset are rejected by check_func_arg_reg_off();
(2) someone may rely on the verifier not rejecting programs early.
Andrey Grafin [Wed, 17 Jan 2024 13:06:19 +0000 (16:06 +0300)]
selftest/bpf: Add map_in_maps with BPF_MAP_TYPE_PERF_EVENT_ARRAY values
Check that bpf_object__load() successfully creates map_in_maps
with BPF_MAP_TYPE_PERF_EVENT_ARRAY values.
These changes cover the fix in the previous patch
"libbpf: Apply map_set_def_max_entries() for inner_maps on creation".
The command line output is:
- w/o fix
$ sudo ./test_maps
libbpf: map 'mim_array_pe': failed to create inner map: -22
libbpf: map 'mim_array_pe': failed to create: Invalid argument(-22)
libbpf: failed to load object './test_map_in_map.bpf.o'
Failed to load test prog
Andrey Grafin [Wed, 17 Jan 2024 13:06:18 +0000 (16:06 +0300)]
libbpf: Apply map_set_def_max_entries() for inner_maps on creation
This patch allows bpf_object__load() to auto-create BPF_MAP_TYPE_ARRAY_OF_MAPS
and BPF_MAP_TYPE_HASH_OF_MAPS with values of BPF_MAP_TYPE_PERF_EVENT_ARRAY.
The previous behaviour created a zero-filled btf_map_def for inner maps and
tried to use it for map creation, but the Linux kernel forbids creating
a BPF_MAP_TYPE_PERF_EVENT_ARRAY map with max_entries=0.
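A hedged BPF-side sketch of such a map-in-map declaration (names loosely
follow the selftest in the next patch); with this fix, bpf_object__load()
propagates the inner definition, including max_entries, when auto-creating
the inner map:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Inner map template: a perf event array, which must not have
   * max_entries == 0 at creation time.
   */
  struct inner_pe_map {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(int));
          __uint(max_entries, 64);
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
          __uint(key_size, sizeof(int));
          __uint(max_entries, 1);
          __array(values, struct inner_pe_map);
  } mim_array_pe SEC(".maps");

  char _license[] SEC("license") = "GPL";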
Daniel Borkmann [Wed, 17 Jan 2024 09:16:11 +0000 (10:16 +0100)]
bpf: Sync uapi bpf.h header for the tooling infra
Both commit 91051f003948 ("tcp: Dump bound-only sockets in inet_diag.")
and commit 985b8ea9ec7e ("bpf, docs: Fix bpf_redirect_peer header doc")
missed the tooling header sync. Fix it.
Martin KaFai Lau [Tue, 16 Jan 2024 22:42:40 +0000 (14:42 -0800)]
Merge branch 'bpf: tcp: Support arbitrary SYN Cookie at TC.'
Kuniyuki Iwashima says:
====================
Under SYN Flood, the TCP stack generates SYN Cookies to remain stateless
for the connection request until a valid ACK is received in response to the
SYN+ACK. The cookie contains two kinds of host-specific bits, a timestamp and
secrets, so it can only be validated by its generator. This means a SYN
Cookie consumes network resources between the client and the server;
intermediate nodes must remember which node to route the ACK to for the cookie.
SYN Proxy reduces such unwanted resource allocation by handling 3WHS at
the edge network. After SYN Proxy completes 3WHS, it forwards SYN to the
backend server and completes another 3WHS. However, since the server's
ISN differs from the cookie, the proxy must manage the ISN mappings and
fix up SEQ/ACK numbers in every packet for each connection. If a proxy
node goes down, all the connections through it are terminated. Keeping
state at the proxy is painful from that perspective.
At AWS, we use a dirty hack to build truly stateless SYN Proxy at scale.
Our SYN Proxy consists of the front proxy layer and the backend kernel
module. (See slides of LPC2023 [0], p37 - p48)
The cookie that SYN Proxy generates differs from the kernel's cookie in
that it contains a secret (called rolling salt) (i) shared by all the proxy
nodes so that any node can validate ACK and (ii) updated periodically so
that old cookies cannot be validated and we need not encode a timestamp for
the cookie. Also, the ISN carries WScale, SACK, and ECN instead of encoding
them in TS val. This is so as not to sacrifice any connection quality, since
some customers turn off the TCP timestamps option due to a retro CVE.
After 3WHS, the proxy restores SYN, encapsulates ACK into SYN, and forwards
the TCP-in-TCP packet to the backend server. Our kernel module works at
Netfilter input/output hooks and first feeds SYN to the TCP stack to
initiate 3WHS. When the module is triggered for SYN+ACK, it looks up the
corresponding request socket and overwrites tcp_rsk(req)->snt_isn with the
proxy's cookie. Then, the module can complete 3WHS with the original ACK
as is.
This way, our SYN Proxy does not manage the ISN mappings nor wait for
SYN+ACK from the backend, and thus can remain stateless. It's working very
well for high-bandwidth services like multiple Tbps, but we are looking
for a way to drop the dirty hack and further optimise the sequences.
If we could validate an arbitrary SYN Cookie on the backend server with
BPF, the proxy would not need to restore SYN nor pass it along. After validating
ACK, the proxy node just needs to forward it, and then the server can do
the lightweight validation (e.g. check if ACK came from proxy nodes, etc)
and create a connection from the ACK.
This series allows us to create a full sk from an arbitrary SYN Cookie,
which is done in 3 steps.
1) At tc, BPF prog calls a new kfunc to create a reqsk and configure
it based on the argument populated from SYN Cookie. The reqsk has
its listener as req->rsk_listener and is passed to the TCP stack as
skb->sk.
2) During TCP socket lookup for the skb, skb_steal_sock() returns a
listener in the reuseport group that inet_reqsk(skb->sk)->rsk_listener
belongs to.
3) In cookie_v[46]_check(), the reqsk (skb->sk) is fully initialised and
a full sk is created.
Changes:
v8
* Rebase on Yonghong's cpuv4 fix
* Patch 5
* Fill the trailing 3-bytes padding in struct bpf_tcp_req_attrs
and test it as null
* Patch 6
* Remove unused IPPROTP_MPTCP definition
v6: https://lore.kernel.org/bpf/20231214155424[email protected]/
* Patch 5 & 6
* /struct /s/tcp_cookie_attributes/bpf_tcp_req_attrs/
* Don't reuse struct tcp_options_received and use u8 for each attrs
* Patch 6
* Check retval of test__start_subtest()
v5: https://lore.kernel.org/netdev/20231211073650[email protected]/
* Split patch 1-3
* Patch 3
* Clear req->rsk_listener in skb_steal_sock()
* Patch 4 & 5
* Move sysctl validation and tsoff init from cookie_bpf_check() to kfunc
* Patch 5
* Do not increment LINUX_MIB_SYNCOOKIES(RECV|FAILED)
* Patch 6
* Remove __always_inline
* Test if tcp_handle_{syn,ack}() is executed
* Move some definition to bpf_tracing_net.h
* s/BPF_F_CURRENT_NETNS/-1/
v4: https://lore.kernel.org/bpf/20231205013420[email protected]/
* Patch 1 & 2
* s/CONFIG_SYN_COOKIE/CONFIG_SYN_COOKIES/
* Patch 1
* Don't set rcv_wscale for BPF SYN Cookie case.
* Patch 2
* Add test for tcp_opt.{unused,rcv_wscale} in kfunc
* Modify skb_steal_sock() to avoid resetting skb-sk
* Support SO_REUSEPORT lookup
* Patch 3
* Add CONFIG_SYN_COOKIES to Kconfig for CI
* Define BPF_F_CURRENT_NETNS
v3: https://lore.kernel.org/netdev/20231121184245[email protected]/
* Guard kfunc and req->syncookie part in inet6?_steal_sock() with
CONFIG_SYN_COOKIE
v2: https://lore.kernel.org/netdev/20231120222341[email protected]/
* Drop SOCK_OPS and move SYN Cookie validation logic to TC with kfunc.
* Add cleanup patches to reduce discrepancy between cookie_v[46]_check()
bpf_sk_assign_tcp_reqsk() takes skb, a listener sk, and struct
bpf_tcp_req_attrs and allocates reqsk and configures it. Then,
bpf_sk_assign_tcp_reqsk() links reqsk with skb and the listener.
The notable thing here is that we do not hold a refcnt for either the reqsk
or the listener. To differentiate that, we mark reqsk->syncookie, which
is only used in TX for now. So, if reqsk->syncookie is 1 in RX, it
means that the reqsk was allocated by the kfunc.
When the skb is freed, sock_pfree() checks if reqsk->syncookie is 1,
and in that case, we set reqsk->rsk_listener to NULL before calling
reqsk_free(), as the reqsk does not hold a refcnt on the listener.
When the TCP stack looks up a socket from the skb, we steal the
listener from the reqsk in skb_steal_sock() and create a full sk
in cookie_v[46]_check().
The refcnt of reqsk will finally be set to 1 in tcp_get_cookie_sock()
after creating a full sk.
Note that we can extend struct bpf_tcp_req_attrs in the future when
we add a new attribute that is determined in 3WHS.
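A heavily hedged sketch of a TC program using the kfunc; the extern
declaration follows the argument list described above, while find_listener()
and the attrs population are hypothetical placeholders for the cookie
parsing and listener lookup a real program would do.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  extern int bpf_sk_assign_tcp_reqsk(struct __sk_buff *skb, struct sock *sk,
                                     struct bpf_tcp_req_attrs *attrs,
                                     int attrs__sz) __ksym;

  /* Hypothetical helper: look up the listener socket for this ACK. */
  static struct sock *find_listener(struct __sk_buff *skb);

  SEC("tc")
  int syn_proxy_ack(struct __sk_buff *skb)
  {
          struct bpf_tcp_req_attrs attrs = {};
          struct sock *listener;

          /* ... validate the arbitrary SYN Cookie carried in the ACK and
           * fill attrs (mss, wscale, sack/ecn bits) from it ...
           */
          listener = find_listener(skb);
          if (!listener)
                  return 0; /* TC_ACT_OK */

          /* Allocate and attach the reqsk; on success the TCP stack will
           * finish the handshake in cookie_v[46]_check().
           */
          bpf_sk_assign_tcp_reqsk(skb, listener, &attrs, sizeof(attrs));
          bpf_sk_release(listener);
          return 0; /* TC_ACT_OK */
  }

  char _license[] SEC("license") = "GPL";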
bpf: tcp: Handle BPF SYN Cookie in cookie_v[46]_check().
We will support arbitrary SYN Cookie with BPF in the following
patch.
If the BPF prog validates the ACK and the kfunc allocates a reqsk, it will
be carried to cookie_v[46]_check() as skb->sk. If skb->sk is not
NULL, we call cookie_bpf_check().
Then, we clear skb->sk and skb->destructor, which is needed so that we do
not hold a refcnt on the reqsk and the listener. See the following patch
for details.
After that, we finish initialisation for the remaining fields with
cookie_tcp_reqsk_init().
Note that the server side WScale is set only for non-BPF SYN Cookie.
bpf: tcp: Handle BPF SYN Cookie in skb_steal_sock().
We will support arbitrary SYN Cookie with BPF.
If BPF prog validates ACK and kfunc allocates a reqsk, it will
be carried to TCP stack as skb->sk with req->syncookie 1. Also,
the reqsk has its listener as req->rsk_listener with no refcnt
taken.
When the TCP stack looks up a socket from the skb, we steal
inet_reqsk(skb->sk)->rsk_listener in skb_steal_sock() so that
the skb will be processed in cookie_v[46]_check() with the
listener.
Note that we do not clear skb->sk and skb->destructor so that we
can carry the reqsk to cookie_v[46]_check().
Artem Savkov [Wed, 10 Jan 2024 08:57:37 +0000 (09:57 +0100)]
selftests/bpf: Fix potential premature unload in bpf_testmod
It is possible for bpf_kfunc_call_test_release() to be called from
bpf_map_free_deferred() when bpf_testmod is already unloaded and
perf_test_struct.cnt which it tries to decrease is no longer in memory.
This patch tries to fix the issue by waiting for all references to be
dropped in bpf_testmod_exit().
The issue can be triggered by running 'test_progs -t map_kptr' in 6.5,
but is obscured in 6.6 by d119357d07435 ("rcu-tasks: Treat only
synchronous grace periods urgently").
If BPF prog validates ACK and kfunc allocates a reqsk, it will
be carried to TCP stack as skb->sk with req->syncookie 1.
In skb_steal_sock(), we need to check inet_reqsk(sk)->syncookie
to see if the reqsk is created by kfunc. However, inet_reqsk()
is not available in sock.h.
Let's move skb_steal_sock() to request_sock.h.
While at it, we refactor skb_steal_sock() so it returns early if
skb->sk is NULL to minimise the following patch.
Tiezhu Yang [Tue, 16 Jan 2024 06:19:20 +0000 (14:19 +0800)]
bpftool: Silence build warning about calloc()
There exists the following warning when building bpftool:
CC prog.o
prog.c: In function ‘profile_open_perf_events’:
prog.c:2301:24: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
2301 | sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
| ^~~
prog.c:2301:24: note: earlier argument should specify number of elements, later size of each element
Tested with the latest upstream GCC, which contains the new warning option
-Wcalloc-transposed-args. The first argument to calloc() is documented to
be the number of elements in the array, while the second argument is the
size of each element; just switch the first and second arguments of calloc()
to silence the build warning. Compile tested only.
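The fix itself is just the argument order (the destination variable name
here is illustrative):

  /* number of elements first, element size second */
  fds = calloc(obj->rodata->num_cpu * obj->rodata->num_metric, sizeof(int));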
A few minor improvements for the bpf_cmp() macro:
. reduce number of args in __bpf_cmp()
. rename NOFLIP to UNLIKELY
. add a comment about 64-bit truncation in "i" constraint
. use "ri" constraint for sizeof(rhs) <= 4
. improve error message for bpf_cmp_likely()
Before:
progs/iters_task_vma.c:31:7: error: variable 'ret' is uninitialized when used here [-Werror,-Wuninitialized]
31 | if (bpf_cmp_likely(seen, <==, 1000))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../bpf/bpf_experimental.h:325:3: note: expanded from macro 'bpf_cmp_likely'
325 | ret;
| ^~~
progs/iters_task_vma.c:31:7: note: variable 'ret' is declared here
../bpf/bpf_experimental.h:310:3: note: expanded from macro 'bpf_cmp_likely'
310 | bool ret;
| ^
After:
progs/iters_task_vma.c:31:7: error: invalid operand for instruction
31 | if (bpf_cmp_likely(seen, <==, 1000))
| ^
../bpf/bpf_experimental.h:324:17: note: expanded from macro 'bpf_cmp_likely'
324 | asm volatile("r0 " #OP " invalid compare");
| ^
<inline asm>:1:5: note: instantiated into assembly here
1 | r0 <== invalid compare
| ^
Yonghong Song [Thu, 11 Jan 2024 05:21:36 +0000 (21:21 -0800)]
docs/bpf: Fix an incorrect statement in verifier.rst
In verifier.rst, I found an incorrect statement (maybe a typo) in section
'Liveness marks tracking'. Basically, the wrong register is attributed
to have a read mark. This may confuse the user.
Yonghong Song [Wed, 10 Jan 2024 05:13:55 +0000 (21:13 -0800)]
selftests/bpf: Add a selftest with not-8-byte aligned BPF_ST
Add a selftest with a 4 bytes BPF_ST of 0 where the store is not
8-byte aligned. The goal is to ensure that STACK_ZERO is properly
marked in stack slots and the STACK_ZERO value can propagate
properly during the load.
Yonghong Song [Wed, 10 Jan 2024 05:13:48 +0000 (21:13 -0800)]
bpf: Track aligned st store as imprecise spilled registers
With patch set [1], precision backtracing supports register spill/fill
to/from the stack. The patch [2] allows initial imprecise register spill
with content 0. This is a common case for cpuv3 and lower for
initializing the stack variables with pattern
r1 = 0
*(u64 *)(r10 - 8) = r1
and the [2] has demonstrated good verification improvement.
For cpuv4, the initialization could be
*(u64 *)(r10 - 8) = 0
The current verifier marks the r10-8 contents with STACK_ZERO.
Similar to [2], let us permit the above insn to behave like
imprecise register spill which can reduce number of verified states.
The change is in function check_stack_write_fixed_off().
Before this patch, spilled zero will be marked as STACK_ZERO
which can provide precise values. In check_stack_write_var_off(),
STACK_ZERO will be maintained if writing a const zero
so later it can provide precise values if needed.
The above handling of '*(u64 *)(r10 - 8) = 0' as a spill
will have issues in check_stack_write_var_off() as the spill
will be converted to STACK_MISC and the precise value 0
is lost. To fix this issue, if the spill slots with const
zero and the BPF_ST write also with const zero, the spill slots
are preserved, which can later provide precise values
if needed. Without the change in check_stack_write_var_off(),
the test_verifier subtest 'BPF_ST_MEM stack imm zero, variable offset'
will fail.
I checked cpuv3 and cpuv4 with and without this patch with veristat.
There is no state change for cpuv3 since '*(u64 *)(r10 - 8) = 0'
is only generated with cpuv4.
Currently, when a scalar bounded register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this register
was MOVed before.
Assign an ID on spill if none is set, so that equal scalars could be
tracked if a register is spilled to the stack and filled into another
register.
One test is adjusted to reflect the change in register IDs.
Extract the common code that generates a register ID for src_reg before
MOV if needed into a new function. This function will also be used in
a following commit.
selftests/bpf: Add a test case for 32-bit spill tracking
When a range check is performed on a register that was 32-bit spilled to
the stack, the IDs of the two instances of the register are the same, so
the range should also be the same.
bpf: Make bpf_for_each_spilled_reg consider narrow spills
Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same IDs but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.
bpf: make infinite loop detection in is_state_visited() exact
The current infinite loop detection mechanism is speculative:
- first, states_maybe_looping() check is done which simply does memcmp
for R1-R10 in current frame;
- second, states_equal(..., exact=false) is called. With exact=false
states_equal() would compare scalars for equality only if in old
state scalar has precision mark.
Such logic might be problematic if compiler makes some unlucky stack
spill/fill decisions. An artificial example of a false positive looks
as follows:
selftests/bpf: Fix the u64_offset_to_skb_data test
The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
instead makes a 16-bit one. Fix the test according to its intention and
update the comments accordingly (umax is no longer 0xffff). The 16-bit
fill is covered by u16_offset_to_skb_data.
reviews.llvm.org was LLVM's Phabricator instance for code review. It
has been abandoned in favor of GitHub pull requests. While the majority
of links in the kernel sources still work because of the work Fangrui
has done turning the dynamic Phabricator instance into a static archive,
there are some issues with that work, so preemptively convert all the
links in the kernel sources to point to the commit on GitHub.
Most of the commits have the corresponding differential review link in
the commit message itself so there should not be any loss of fidelity in
the relevant information.
Additionally, fix a typo in the xdpwall.c print ("LLMV" -> "LLVM") while
in the area.
Andrii Nakryiko [Tue, 9 Jan 2024 23:17:38 +0000 (15:17 -0800)]
selftests/bpf: detect testing prog flags support
Various tests specify extra testing prog_flags when loading BPF
programs, like BPF_F_TEST_RND_HI32, and more recently also
BPF_F_TEST_REG_INVARIANTS. While BPF_F_TEST_RND_HI32 is old enough to
not cause much problem on older kernels, BPF_F_TEST_REG_INVARIANTS is
very fresh and unconditionally specifying it causes selftests to fail on
even slightly outdated kernels.
This breaks libbpf CI test against 4.9 and 5.15 kernels, it can break
some local development (done outside of VM), etc.
To prevent this, and guard against similar problems in the future, do
runtime detection of supported "testing flags", and only provide those
that host kernel recognizes.
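A hedged sketch of such a probe (not the exact selftests code): try to load
a trivial program with the candidate flag and treat failure as "unsupported".

  #include <stdbool.h>
  #include <unistd.h>
  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static bool prog_flags_supported(__u32 flags)
  {
          const struct bpf_insn insns[] = {
                  { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
                  { .code = BPF_JMP | BPF_EXIT },
          };
          LIBBPF_OPTS(bpf_prog_load_opts, opts, .prog_flags = flags);
          int fd;

          fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL",
                             insns, sizeof(insns) / sizeof(insns[0]), &opts);
          if (fd < 0)
                  return false;
          close(fd);
          return true;
  }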
Dave Thaler [Mon, 8 Jan 2024 21:42:31 +0000 (13:42 -0800)]
Introduce concept of conformance groups
The discussion of what the actual conformance groups should be
is still in progress, so this is just part 1 which only uses
"legacy" for deprecated instructions and "basic" for everything
else. Subsequent patches will add more groups as discussion
continues.
Andrii Nakryiko [Fri, 5 Jan 2024 00:09:05 +0000 (16:09 -0800)]
bpf: support multiple tags per argument
Add the ability to iterate over multiple decl_tag types pointing to the same
function argument. Use this to support multiple __arg_xxx tags per
global subprog argument.
We leave btf_find_decl_tag_value() intact, but change its implementation
to use a new btf_find_next_decl_tag() which can be straightforwardly
used to find next BTF type ID of a matching btf_decl_tag type.
btf_prepare_func_args() is switched from btf_find_decl_tag_value() to
btf_find_next_decl_tag() to gain multiple tags per argument support.
Andrii Nakryiko [Fri, 5 Jan 2024 00:09:03 +0000 (16:09 -0800)]
bpf: make sure scalar args don't accept __arg_nonnull tag
Move scalar arg processing in btf_prepare_func_args() after all pointer
arg processing is done. This makes it easier to do validation. One
example of unintended behavior right now is ability to specify
__arg_nonnull for integer/enum arguments. This patch fixes this.
... in verbose successful test log is very confusing. Use smaller
identifier-like test tag to denote that we are asserting specs array
allocation success.
====================
The motivation of inlining bpf_kptr_xchg() comes from the performance
profiling of bpf memory allocator benchmark [1]. The benchmark uses
bpf_kptr_xchg() to stash the allocated objects and to pop the stashed
objects for free. After inlining bpf_kptr_xchg(), the performance for
object free on an 8-CPU VM increases by about 2%~10%. However the performance
gain comes with costs: both the kasan and kcsan checks on the pointer
will be unavailable. Initially the inline was implemented in do_jit() for
x86-64 directly, but I think it will be more portable to implement the
inline in the verifier.
Patch #1 supports inlining the bpf_kptr_xchg() helper and enables it on
x86-64. Patch #2 factors out a helper for the newly-added test in patch #3.
Patch #3 tests whether the inlining of bpf_kptr_xchg() is expected.
Please see individual patches for more details. And comments are always
welcome.
Change Log:
v3:
* rebased on bpf-next tree
* patch 1 & 2: Add Rvb-by and Ack-by tags from Eduard
* patch 3: use inline assembly and naked function instead of c code
(suggested by Eduard)
v2: https://lore.kernel.org/bpf/20231223104042.1432300[email protected]/
* rebased on bpf-next tree
* drop patch #1 in v1 due to discussion in [2]
* patch #1: add the motivation in the commit message, merge patch #1
and #3 into the new patch in v2. (Daniel)
* patch #2/#3: newly-added patch to test the inlining of
bpf_kptr_xchg() (Eduard)
Hou Tao [Fri, 5 Jan 2024 10:48:19 +0000 (18:48 +0800)]
selftests/bpf: Test the inlining of bpf_kptr_xchg()
The test uses bpf_prog_get_info_by_fd() to obtain the xlated
instructions of the program first. Since these instructions have
already been rewritten by the verifier, the test then checks whether
the rewritten instructions are as expected. And to ensure LLVM generates
code exactly as expected, use inline assembly and a naked function.
Hou Tao [Fri, 5 Jan 2024 10:48:17 +0000 (18:48 +0800)]
bpf: Support inlining bpf_kptr_xchg() helper
The motivation of inlining bpf_kptr_xchg() comes from the performance
profiling of bpf memory allocator benchmark. The benchmark uses
bpf_kptr_xchg() to stash the allocated objects and to pop the stashed
objects for free. After inlining bpf_kptr_xchg(), the performance for
object free on an 8-CPU VM increases by about 2%~10%. The inline also has a
downside: both the kasan and kcsan checks on the pointer will be
unavailable.
bpf_kptr_xchg() can be inlined by converting the calling of
bpf_kptr_xchg() into an atomic_xchg() instruction. But the conversion
depends on two conditions:
1) JIT backend supports atomic_xchg() on pointer-sized word
2) For the specific arch, the implementation of xchg is the same as
atomic_xchg() on pointer-sized words.
It seems most 64-bit JIT backends satisfy these two conditions. But
as a precaution, define a weak function bpf_jit_supports_ptr_xchg()
to state whether such conversion is safe and only support the inline for
64-bit hosts.
For x86-64, it supports BPF_XCHG atomic operation and both xchg() and
atomic_xchg() use arch_xchg() to implement the exchange, so enabling the
inline of bpf_kptr_xchg() on x86-64 first.
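For context, a hedged sketch of the BPF-side pattern that benefits (stashing
an allocated object into a map-value kptr slot); bpf_obj_new()/bpf_obj_drop()
come from the selftests' bpf_experimental.h, and the map/type names are
illustrative:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include "bpf_experimental.h"

  struct node { int val; };

  struct map_value {
          struct node __kptr *node;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct map_value);
  } stash SEC(".maps");

  SEC("tc")
  int stash_obj(void *ctx)
  {
          struct map_value *v;
          struct node *n, *old;
          int key = 0;

          v = bpf_map_lookup_elem(&stash, &key);
          if (!v)
                  return 0;

          n = bpf_obj_new(typeof(*n));
          if (!n)
                  return 0;

          /* With this patch the verifier can inline this call into a single
           * BPF_XCHG instruction on supported 64-bit JITs.
           */
          old = bpf_kptr_xchg(&v->node, n);
          if (old)
                  bpf_obj_drop(old);
          return 0;
  }

  char _license[] SEC("license") = "GPL";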
Eric Dumazet [Mon, 22 Jan 2024 11:26:03 +0000 (11:26 +0000)]
inet_diag: skip over empty buckets
After the removal of inet_diag_table_mutex, sock_diag_table_mutex
and sock_diag_mutex, I was able to see spinlock contention from
inet_diag_dump_icsk() when running 100 parallel invocations.
Eric Dumazet [Mon, 22 Jan 2024 11:26:02 +0000 (11:26 +0000)]
sock_diag: remove sock_diag_mutex
sock_diag_rcv() is still serializing its operations using
a mutex, for no good reason.
This came with commit 0a9c73014415 ("[INET_DIAG]: Fix oops
in netlink_rcv_skb"), but the root cause has been fixed
with commit cd40b7d3983c ("[NET]: make netlink user -> kernel
interface synchronious")
Remove this mutex to let multiple threads run concurrently.
Eric Dumazet [Mon, 22 Jan 2024 11:26:00 +0000 (11:26 +0000)]
sock_diag: allow concurrent operations
sock_diag_broadcast_destroy_work() and __sock_diag_cmd()
are currently using sock_diag_table_mutex to protect
against concurrent sock_diag_handlers[] changes.
This makes inet_diag dump serialized, thus less scalable
than legacy /proc files.
Linus Torvalds [Fri, 19 Jan 2024 01:33:50 +0000 (17:33 -0800)]
Merge tag 'net-6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf and netfilter.
Previous releases - regressions:
- Revert "net: rtnetlink: Enslave device before bringing it up",
breaks the case inverse to the one it was trying to fix
- net: dsa: fix oob access in DSA's netdevice event handler
dereference netdev_priv() before check its a DSA port
- sched: track device in tcf_block_get/put_ext() only for clsact
binder types
- net: tls, fix WARNING in __sk_msg_free when record becomes full
during splice and MORE hint set
- sfp-bus: fix SFP mode detect from bitrate
- drv: stmmac: prevent DSA tags from breaking COE
Previous releases - always broken:
- bpf: fix no forward progress in bpf_iter_udp if output buffer is
too small
- bpf: reject variable offset alu on registers with a type of
PTR_TO_FLOW_KEYS to prevent oob access
- netfilter: tighten input validation
- net: add more sanity check in virtio_net_hdr_to_skb()
- rxrpc: fix use of Don't Fragment flag on RESPONSE packets, avoid
infinite loop
- amt: do not use the portion of skb->cb area which may get clobbered
- mptcp: improve validation of the MPTCPOPT_MP_JOIN MCTCP option
Misc:
- spring cleanup of inactive maintainers"
* tag 'net-6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (88 commits)
i40e: Include types.h to some headers
ipv6: mcast: fix data-race in ipv6_mc_down / mld_ifc_work
selftests: mlxsw: qos_pfc: Adjust the test to support 8 lanes
selftests: mlxsw: qos_pfc: Remove wrong description
mlxsw: spectrum_router: Register netdevice notifier before nexthop
mlxsw: spectrum_acl_tcam: Fix stack corruption
mlxsw: spectrum_acl_tcam: Fix NULL pointer dereference in error path
mlxsw: spectrum_acl_erp: Fix error flow of pool allocation failure
ethtool: netlink: Add missing ethnl_ops_begin/complete
selftests: bonding: Add more missing config options
selftests: netdevsim: add a config file
libbpf: warn on unexpected __arg_ctx type when rewriting BTF
selftests/bpf: add tests confirming type logic in kernel for __arg_ctx
bpf: enforce types for __arg_ctx-tagged arguments in global subprogs
bpf: extract bpf_ctx_convert_map logic and make it more reusable
libbpf: feature-detect arg:ctx tag support in kernel
ipvs: avoid stat macros calls from preemptible context
netfilter: nf_tables: reject NFT_SET_CONCAT with not field length description
netfilter: nf_tables: skip dead set elements in netlink dump
netfilter: nf_tables: do not allow mismatch field size and set key length
...
Linus Torvalds [Fri, 19 Jan 2024 01:29:01 +0000 (17:29 -0800)]
Merge tag 'i2c-for-6.8-rc1-rebased' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c updates from Wolfram Sang:
"This removes the currently unused CLASS_DDC support (controllers set
the flag, but there is no client to use it).
Also, CLASS_SPD support gets simplified to prepare removal in the
future. Class based instantiation is not recommended these days
anyhow.
Furthermore, I2C core now creates a debugfs directory per I2C adapter.
Current bus driver users were converted to use it.
Finally, quite some driver updates. Standing out are patches for the
wmt-driver which is refactored to support more variants.
This is the rebased pull request where a large series for the
designware driver was dropped"
* tag 'i2c-for-6.8-rc1-rebased' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux: (38 commits)
MAINTAINERS: use proper email for my I2C work
i2c: stm32f7: add support for stm32mp25 soc
i2c: stm32f7: perform I2C_ISR read once at beginning of event isr
dt-bindings: i2c: document st,stm32mp25-i2c compatible
i2c: stm32f7: simplify status messages in case of errors
i2c: stm32f7: perform most of irq job in threaded handler
i2c: stm32f7: use dev_err_probe upon calls of devm_request_irq
i2c: i801: Add lis3lv02d for Dell XPS 15 7590
i2c: i801: Add lis3lv02d for Dell Precision 3540
i2c: wmt: Reduce redundant: REG_CR setting
i2c: wmt: Reduce redundant: function parameter
i2c: wmt: Reduce redundant: clock mode setting
i2c: wmt: Reduce redundant: wait event complete
i2c: wmt: Reduce redundant: bus busy check
i2c: mux: reg: Remove class-based device auto-detection support
i2c: make i2c_bus_type const
dt-bindings: at24: add ROHM BR24G04
eeprom: at24: use of_match_ptr()
i2c: cpm: Remove linux,i2c-index conversion from be32
i2c: imx: Make SDA actually optional for bus recovering
...
Linus Torvalds [Fri, 19 Jan 2024 01:25:39 +0000 (17:25 -0800)]
Merge tag 'rtc-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux
Pull RTC updates from Alexandre Belloni:
"There are three new drivers this cycle. Also the cmos driver is
getting fixes for longstanding wakeup issues on AMD.
New drivers:
- Analog Devices MAX31335
- Nuvoton ma35d1
- Texas Instrument TPS6594 PMIC RTC
Drivers:
- cmos: use ACPI alarm instead of HPET on recent AMD platforms
- nuvoton: add NCT3015Y-R and NCT3018Y-R support
- rv8803: proper suspend/resume and wakeup-source support"
* tag 'rtc-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux: (26 commits)
rtc: nuvoton: Compatible with NCT3015Y-R and NCT3018Y-R
rtc: da9063: Use dev_err_probe()
rtc: da9063: Use device_get_match_data()
rtc: da9063: Make IRQ as optional
rtc: max31335: Fix comparison in max31335_volatile_reg()
rtc: max31335: use regmap_update_bits_check
rtc: max31335: remove unecessary locking
rtc: max31335: add driver support
dt-bindings: rtc: max31335: add max31335 bindings
rtc: rv8803: add wakeup-source support
rtc: ac100: remove misuses of kernel-doc
rtc: class: Remove usage of the deprecated ida_simple_xx() API
rtc: MAINTAINERS: drop Alessandro Zummo
rtc: ma35d1: remove hardcoded UIE support
dt-bindings: rtc: qcom-pm8xxx: fix inconsistent example
rtc: rv8803: Add power management support
rtc: ds3232: avoid unused-const-variable warning
rtc: lpc24xx: add missing dependency
rtc: tps6594: Add driver for TPS6594 RTC
rtc: Add driver for Nuvoton ma35d1 rtc controller
...
Linus Torvalds [Fri, 19 Jan 2024 01:21:35 +0000 (17:21 -0800)]
Merge tag 'input-for-v6.8-rc0' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Pull input updates from Dmitry Torokhov:
- a new driver for Adafruit Seesaw gamepad device
- Zforce touchscreen will handle standard device properties for axis
swap/inversion
- handling of advanced sensitivity settings in Microchip CAP11xx
capacitive sensor driver
- more drivers have been converted to use newer gpiod API
- support for dedicated wakeup IRQs in gpio-keys driver
- support for slider gestures and OTP variants in iqs269a driver
- atkbd will report keyboard version as 0xab83 in cases when GET ID
command was skipped (to deal with problematic firmware on newer
laptops), restoring the previous behavior
- other assorted cleanups and changes
* tag 'input-for-v6.8-rc0' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input: (44 commits)
Input: atkbd - use ab83 as id when skipping the getid command
Input: driver for Adafruit Seesaw Gamepad
dt-bindings: input: bindings for Adafruit Seesaw Gamepad
Input: da9063_onkey - avoid explicitly setting input's parent
Input: da9063_onkey - avoid using OF-specific APIs
Input: iqs269a - add support for OTP variants
dt-bindings: input: iqs269a: Add bindings for OTP variants
Input: iqs269a - add support for slider gestures
dt-bindings: input: iqs269a: Add bindings for slider gestures
Input: gpio-keys - filter gpio_keys -EPROBE_DEFER error messages
Input: zforce_ts - accept standard touchscreen properties
dt-bindings: touchscreen: neonode,zforce: Use standard properties
dt-bindings: touchscreen: convert neonode,zforce to json-schema
dt-bindings: input: convert drv266x to json-schema
Input: da9063 - use dev_err_probe()
Input: da9063 - drop redundant prints in probe()
Input: da9063 - simplify obtaining OF match data
Input: as5011 - convert to GPIO descriptor
Input: omap-keypad - drop optional GPIO support
Input: tca6416-keypad - drop unused include
...
Linus Torvalds [Fri, 19 Jan 2024 01:08:31 +0000 (17:08 -0800)]
Merge tag 'soundwire-6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire
Pull soundwire updates from Vinod Koul:
- Core: add concept of controller_id to deal with clear Controller /
Manager hierarchy
- bunch of qcom driver refactoring for qcom_swrm_stream_alloc_ports(),
qcom_swrm_stream_alloc_ports() and setting controller id to hw master
id
* tag 'soundwire-6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire:
soundwire: amd: drop bus freq calculation and set 'max_clk_freq'
soundwire: generic_bandwidth_allocation use bus->params.max_dr_freq
soundwire: qcom: set controller id to hw master id
soundwire: fix initializing sysfs for same devices on different buses
soundwire: bus: introduce controller_id
soundwire: stream: constify sdw_port_config when adding devices
soundwire: qcom: move sconfig in qcom_swrm_stream_alloc_ports() out of critical section
soundwire: qcom: drop unneeded qcom_swrm_stream_alloc_ports() cleanup
Linus Torvalds [Fri, 19 Jan 2024 01:03:51 +0000 (17:03 -0800)]
Merge tag 'gpio-fixes-for-v6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux
Pull gpio fixes from Bartosz Golaszewski:
"Apart from some regular driver fixes there's a relatively big revert
of the locking changes that were introduced to GPIOLIB in this merge
window.
This is because it turned out that some legacy GPIO interfaces - that
need to translate a number from the global GPIO numberspace to the
address of the relevant descriptor, thus running a GPIO device lookup
and taking the GPIO device list lock - are still used in old code from
atomic context resulting in "scheduling while atomic" errors.
I'll try to make the read-only part of the list access entirely
lockless using SRCU but this will take some time so let's go back to
the old global spinlock for now.
Summary:
- revert the changes aiming to use a read-write semaphore to protect
the list of GPIO devices due to calls to legacy API taking that
lock from atomic context in old code
- fix inverted logic in DEFINE_FREE() for GPIO device references
- check the return value of bgpio_init() in gpio-mlxbf3
- fix node address in the DT bindings example for gpio-xilinx
- fix signedness bug in gpio-rtd
- fix kernel-doc warnings in gpio-en7523"
* tag 'gpio-fixes-for-v6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
gpiolib: revert the attempt to protect the GPIO device list with an rwsem
gpio: EN7523: fix kernel-doc warnings
gpiolib: Fix scope-based gpio_device refcounting
gpio: mlxbf3: add an error code check in mlxbf3_gpio_probe
dt-bindings: gpio: xilinx: Fix node address in gpio
gpio: rtd: Fix signedness bug in probe
Linus Torvalds [Fri, 19 Jan 2024 00:58:21 +0000 (16:58 -0800)]
Merge tag 'pwm/for-6.8-2' of gitolite.kernel.org:pub/scm/linux/kernel/git/ukleinek/linux
Pull pwm fixes from Uwe Kleine-König:
- fix a duplicate cleanup in an error path introduced in
this merge window
- fix an out-of-bounds access
In practice it doesn't happen - otherwise someone would have noticed
since v5.17-rc1 I guess - because the device tree binding for the two
drivers using of_pwm_single_xlate() only have args->args_count == 1.
A device-tree that doesn't conform to the respective bindings could
trigger that easily however.
- correct the request callback of the jz4740 pwm driver which used
dev_err_probe() long after .probe() completed.
This is conceptually wrong because dev_err_probe() might call
device_set_deferred_probe_reason() which is nonsensical after the
driver is bound.
* tag 'pwm/for-6.8-2' of gitolite.kernel.org:pub/scm/linux/kernel/git/ukleinek/linux:
pwm: jz4740: Don't use dev_err_probe() in .request()
pwm: Fix out-of-bounds access in of_pwm_single_xlate()
pwm: bcm2835: Remove duplicate call to clk_rate_exclusive_put()
Linus Torvalds [Fri, 19 Jan 2024 00:53:35 +0000 (16:53 -0800)]
Merge tag 'backlight-next-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/backlight
Pull backlight updates from Lee Jones:
"New Drivers:
- Add support for Monolithic Power Systems MP3309C WLED Step-up Converter
Fix-ups:
- Use/convert to new/better APIs/helpers/MACROs instead of
hand-rolling implementations
- Device Tree Binding updates
- Demote non-kerneldoc header comments
- Improve error handling; return proper error values, simplify, avoid
duplicates, etc
- Convert over to the new (kinda) GPIOD API
Bug Fixes:
- Fix uninitialised local variable"
* tag 'backlight-next-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/backlight:
backlight: hx8357: Convert to agnostic GPIO API
backlight: ili922x: Add an error code check in ili922x_write()
backlight: ili922x: Drop kernel-doc for local macros
backlight: mp3309c: Fix uninitialized local variable
backlight: pwm_bl: Use dev_err_probe
backlight: mp3309c: Add support for MPS MP3309C
dt-bindings: backlight: mp3309c: Remove two required properties
Linus Torvalds [Fri, 19 Jan 2024 00:49:34 +0000 (16:49 -0800)]
Merge tag 'dma-mapping-6.8-2024-01-18' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fixes from Christoph Hellwig:
- fix kerneldoc warnings (Randy Dunlap)
- better bounds checking in swiotlb (ZhangPeng)
* tag 'dma-mapping-6.8-2024-01-18' of git://git.infradead.org/users/hch/dma-mapping:
dma-debug: fix kernel-doc warnings
swiotlb: check alloc_size before the allocation of a new memory pool
Linus Torvalds [Fri, 19 Jan 2024 00:46:18 +0000 (16:46 -0800)]
Merge tag 'memblock-v6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock update from Mike Rapoport:
"Code readability improvement.
Use NUMA_NO_NODE instead of -1 as return value of
memblock_search_pfn_nid() to improve code readability
and consistency with the callers of that function"
* tag 'memblock-v6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
memblock: Return NUMA_NO_NODE instead of -1 to improve code readability
Linus Torvalds [Fri, 19 Jan 2024 00:44:03 +0000 (16:44 -0800)]
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio updates from Michael Tsirkin:
- vdpa/mlx5: support for resumable vqs
- virtio_scsi: mq_poll support
- virtio_pmem: support SHMEM_REGION
- virtio_balloon: stay awake while adjusting balloon
- virtio: support for no-reset virtio PCI PM
- Fixes, cleanups
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
vdpa/mlx5: Add mkey leak detection
vdpa/mlx5: Introduce reference counting to mrs
vdpa/mlx5: Use vq suspend/resume during .set_map
vdpa/mlx5: Mark vq state for modification in hw vq
vdpa/mlx5: Mark vq addrs for modification in hw vq
vdpa/mlx5: Introduce per vq and device resume
vdpa/mlx5: Allow modifying multiple vq fields in one modify command
vdpa/mlx5: Expose resumable vq capability
vdpa: Block vq property changes in DRIVER_OK
vdpa: Track device suspended state
scsi: virtio_scsi: Add mq_poll support
virtio_pmem: support feature SHMEM_REGION
virtio_balloon: stay awake while adjusting balloon
vdpa: Remove usage of the deprecated ida_simple_xx() API
virtio: Add support for no-reset virtio PCI PM
virtio_net: fix missing dma unmap for resize
vhost-vdpa: account iommu allocations
vdpa: Fix an error handling path in eni_vdpa_probe()
Linus Torvalds [Fri, 19 Jan 2024 00:42:18 +0000 (16:42 -0800)]
Merge tag 'hwmon-for-v6.8-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fix from Guenter Roeck:
"Fix crash seen when instantiating npcm750-pwm-fan"
* tag 'hwmon-for-v6.8-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (npcm750-pwm-fan) Fix crash observed when instantiating nuvoton,npcm750-pwm-fan
Linus Torvalds [Fri, 19 Jan 2024 00:22:43 +0000 (16:22 -0800)]
Merge tag 'cxl-for-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl
Pull CXL (Compute Express Link) updates from Dan Williams:
"The bulk of this update is support for enumerating the performance
capabilities of CXL memory targets and connecting that to a platform
CXL memory QoS class. Some follow-on work remains to hook up this data
into core-mm policy, but that is saved for v6.9.
The next significant update is unifying how CXL event records (things
like background scrub errors) are processed between so called
"firmware first" and native error record retrieval. The CXL driver
handler that processes the record retrieved from the device mailbox is
now the handler for that same record format coming from an EFI/ACPI
notification source.
This also contains miscellaneous feature updates, like Get Timestamp,
and other fixups.
Summary:
- Add support for parsing the Coherent Device Attribute Table (CDAT)
- Add support for calculating a platform CXL QoS class from CDAT data
- Unify the tracing of EFI CXL Events with native CXL Events.
- Add Get Timestamp support
- Miscellaneous cleanups and fixups"
* tag 'cxl-for-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (41 commits)
cxl/core: use sysfs_emit() for attr's _show()
cxl/pci: Register for and process CPER events
PCI: Introduce cleanup helpers for device reference counts and locks
acpi/ghes: Process CXL Component Events
cxl/events: Create a CXL event union
cxl/events: Separate UUID from event structures
cxl/events: Remove passing a UUID to known event traces
cxl/events: Create common event UUID defines
cxl/events: Promote CXL event structures to a core header
cxl: Refactor to use __free() for cxl_root allocation in cxl_endpoint_port_probe()
cxl: Refactor to use __free() for cxl_root allocation in cxl_find_nvdimm_bridge()
cxl: Fix device reference leak in cxl_port_perf_data_calculate()
cxl: Convert find_cxl_root() to return a 'struct cxl_root *'
cxl: Introduce put_cxl_root() helper
cxl/port: Fix missing target list lock
cxl/port: Fix decoder initialization when nr_targets > interleave_ways
cxl/region: fix x9 interleave typo
cxl/trace: Pass UUID explicitly to event traces
cxl/region: use %pap format to print resource_size_t
cxl/region: Add dev_dbg() detail on failure to allocate HPA space
...
- Virtio infrastructure and a new virtio-vfio-pci variant driver, which
provides emulation of a legacy virtio interfaces on modern virtio
hardware for virtio-net VF devices where the PF driver exposes
support for legacy admin queues, ie. an emulated IO BAR on an SR-IOV
VF to provide driver ABI compatibility to legacy devices (Yishai
Hadas & Feng Liu)
- Migration fixes for the hisi-acc-vfio-pci variant driver (Shameer
Kolothum)
- Kconfig dependency fix for new virtio-vfio-pci variant driver (Arnd
Bergmann)
* tag 'vfio-v6.8-rc1' of https://github.com/awilliam/linux-vfio: (22 commits)
vfio/virtio: fix virtio-pci dependency
hisi_acc_vfio_pci: Update migration data pointer correctly on saving/resume
vfio/virtio: Declare virtiovf_pci_aer_reset_done() static
vfio/virtio: Introduce a vfio driver over virtio devices
vfio/pci: Expose vfio_pci_core_iowrite/read##size()
vfio/pci: Expose vfio_pci_core_setup_barmap()
virtio-pci: Introduce APIs to execute legacy IO admin commands
virtio-pci: Initialize the supported admin commands
virtio-pci: Introduce admin commands
virtio-pci: Introduce admin command sending function
virtio-pci: Introduce admin virtqueue
virtio: Define feature bit for administration virtqueue
vfio/type1: account iommu allocations
vfio/pds: Add multi-region support
vfio/pds: Move seq/ack bitmaps into region struct
vfio/pds: Pass region info to relevant functions
vfio/pds: Move and rename region specific info
vfio/pds: Only use a single SGL for both seq and ack
vfio/pds: Fix calculations in pds_vfio_dirty_sync
MAINTAINERS: Add vfio debugfs interface doc link
...
Linus Torvalds [Thu, 18 Jan 2024 23:28:15 +0000 (15:28 -0800)]
Merge tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd
Pull iommufd updates from Jason Gunthorpe:
"This brings the first of three planned user IO page table invalidation
operations:
- IOMMU_HWPT_INVALIDATE allows invalidating the IOTLB integrated into
the iommu itself. The Intel implementation will also generate an
ATC invalidation to flush the device IOTLB as it unambiguously
knows the device, but other HW will not.
It goes along with the prior PR to implement userspace IO page tables
(aka nested translation for VMs) to allow Intel to have full
functionality for simple cases. An Intel implementation of the
operation is provided.
Also fix a small bug in the selftest mock iommu driver probe"
* tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd:
iommufd/selftest: Check the bus type during probe
iommu/vt-d: Add iotlb flush for nested domain
iommufd: Add data structure for Intel VT-d stage-1 cache invalidation
iommufd/selftest: Add coverage for IOMMU_HWPT_INVALIDATE ioctl
iommufd/selftest: Add IOMMU_TEST_OP_MD_CHECK_IOTLB test op
iommufd/selftest: Add mock_domain_cache_invalidate_user support
iommu: Add iommu_copy_struct_from_user_array helper
iommufd: Add IOMMU_HWPT_INVALIDATE
iommu: Add cache_invalidate_user op
Linus Torvalds [Thu, 18 Jan 2024 23:16:57 +0000 (15:16 -0800)]
Merge tag 'iommu-updates-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull iommu updates from Joerg Roedel:
"Core changes:
- Fix race conditions in device probe path
- Retire IOMMU bus_ops
- Support for passing custom allocators to page table drivers
- Clean up Kconfig around IOMMU_SVA
- Support for sharing SVA domains with all devices bound to a mm
- Firmware data parsing cleanup
- Tracing improvements for iommu-dma code
- Some smaller fixes and cleanups
ARM-SMMU drivers:
- Device-tree binding updates:
- Add additional compatible strings for Qualcomm SoCs
- Document Adreno clocks for Qualcomm's SM8350 SoC
- SMMUv2:
- Implement support for the ->domain_alloc_paging() callback
- Ensure Secure context is restored following suspend of Qualcomm
SMMU implementation
- SMMUv3:
- Disable stalling mode for the "quiet" context descriptor
- Minor refactoring and driver cleanups
Intel VT-d driver:
- Cleanup and refactoring
AMD IOMMU driver:
- Improve IO TLB invalidation logic
- Small cleanups and improvements
Linus Torvalds [Thu, 18 Jan 2024 23:01:28 +0000 (15:01 -0800)]
Merge tag 'percpu-for-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu
Pull percpu updates from Dennis Zhou:
"Enable percpu page allocator for RISC-V.
There are RISC-V systems with sparse NUMA configurations and a small
vmalloc space, causing dynamic percpu allocations to fail as the
backing chunk stride is too far apart"
* tag 'percpu-for-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
riscv: Enable pcpu page first chunk allocator
mm: Introduce flush_cache_vmap_early()
Linus Torvalds [Thu, 18 Jan 2024 22:45:33 +0000 (14:45 -0800)]
Merge tag 'eventfs-v6.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull eventfs updates from Steven Rostedt:
- Remove "lookup" parameter of create_dir_dentry() and
create_file_dentry(). These functions were called by lookup and the
readdir logic, where readdir needed it to up the ref count of the
dentry but the lookup did not. A "lookup" parameter was passed in to
tell it what to do, but this complicated the code. It is better to
just always up the ref count and require the caller to decrement it,
even for lookup.
- Modify the .iterate_shared callback to not use the dcache_readdir()
logic and just handle what gets displayed by that one function. This
removes the need for eventfs to hijack the file->private_data from
the dcache_readdir() "cursor" pointer, and makes the code a bit more
sane
- Use the root and instance inodes for default ownership. Instead of
walking the dentry tree and updating each dentry gid, use the
getattr(), setattr() and permission() callbacks to set the ownership
and permissions using the root or instance as the default
- Some other optimizations with the eventfs iterate_shared logic
- Hard-code the inodes for eventfs to the same number for files, and
the same number for directories
- Have getdents() not create dentries/inodes in iterate_shared(), as
it now has hard-coded inode numbers
- Use kcalloc() instead of kzalloc() on a list of elements
- Fix a seq_buf warning and make DECLARE_SEQ_BUF() usable for static
declarations.
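The kcalloc() item above is the usual overflow-safe way to allocate
an array of elements: kcalloc() checks the count * size
multiplication for overflow, which an open-coded
kzalloc(count * size, ...) would not. A minimal, purely illustrative
sketch (the helper name is made up, not eventfs code):

#include <linux/slab.h>

/* Allocate a zeroed array of 'count' pointers, overflow-checked. */
static void **alloc_entry_array(size_t count)
{
	return kcalloc(count, sizeof(void *), GFP_KERNEL);
}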
* tag 'eventfs-v6.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
seq_buf: Make DECLARE_SEQ_BUF() usable
eventfs: Use kcalloc() instead of kzalloc()
eventfs: Do not create dentries nor inodes in iterate_shared
eventfs: Have the inodes all for files and directories all be the same
eventfs: Shortcut eventfs_iterate() by skipping entries already read
eventfs: Read ei->entries before ei->children in eventfs_iterate()
eventfs: Do ctx->pos update for all iterations in eventfs_iterate()
eventfs: Have eventfs_iterate() stop immediately if ei->is_freed is set
tracefs/eventfs: Use root and instance inodes as default ownership
eventfs: Stop using dcache_readdir() for getdents()
eventfs: Remove "lookup" parameter from create_dir/file_dentry()
Linus Torvalds [Thu, 18 Jan 2024 22:35:29 +0000 (14:35 -0800)]
Merge tag 'trace-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:
- Allow kernel trace instance creation to specify what events are
created
Inside the kernel, a subsystem may create a tracing instance that it
can use to send events to user space. This subsystem may not care
about the thousands of events that exist in eventfs. Allow the
subsystem to specify which systems of events it cares about, so that
only those events are exposed to this instance.
- Allow the ring buffer to be broken up into bigger sub-buffers than
just the architecture page size.
A new tracefs file called "buffer_subbuf_size_kb" is created. The
user can now specify the minimum size of a sub-buffer in kilobytes.
Note that the implementation currently makes the sub-buffer size a
power of 2 pages (1, 2, 4, 8, 16, ...); the user only writes a size
in kilobytes, and the sub-buffer is rounded up to the next size that
can accommodate it. If the user writes 10, the size becomes 4 pages
on x86 (16K), as that is the next available size that can hold 10K
(a user-space sketch of this file follows this list).
- Update the debug output when a corrupt timestamp is detected in the
ring buffer. If the ring buffer detects inconsistent timestamps,
there's a debug config option that will dump the contents of the
sub-buffer's meta data for debugging. Add some more information to
this dump to help with debugging.
- Add more timestamp debugging checks (only triggers when the config is
enabled)
- Increase the trace_seq iterator to 2 page sizes.
- Allow strings written into trace_marker to be larger: up to just
under 2 page sizes (based on what trace_seq can hold).
- Increase the trace_marker_raw write to be as big as a sub-buffer can
hold.
- Remove the 32-bit timestamp logic, now that rb_time_cmpxchg() has
been removed.
- More selftests were added.
- Some code clean ups as well.
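As a rough sketch of driving the new sub-buffer size file from user
space (assuming tracefs is mounted at /sys/kernel/tracing; the file
name comes from the description above), the program below requests a
10 KB sub-buffer and reads back the size the kernel actually applied,
which should be rounded up to a power-of-2 number of pages (16 on a
4 KB-page x86 system):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	ssize_t n;
	int fd = open("/sys/kernel/tracing/buffer_subbuf_size_kb", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "10\n", 3) < 0)	/* ask for at least 10 KB */
		perror("write");
	lseek(fd, 0, SEEK_SET);
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("effective sub-buffer size (KB): %s", buf);
	}
	close(fd);
	return 0;
}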
* tag 'trace-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (29 commits)
ring-buffer: Remove stale comment from ring_buffer_size()
tracing histograms: Simplify parse_actions() function
tracing/selftests: Remove exec permissions from trace_marker.tc test
ring-buffer: Use subbuf_order for buffer page masking
tracing: Update subbuffer with kilobytes not page order
ringbuffer/selftest: Add basic selftest to test changing subbuf order
ring-buffer: Add documentation on the buffer_subbuf_order file
ring-buffer: Just update the subbuffers when changing their allocation order
ring-buffer: Keep the same size when updating the order
tracing: Stop the tracing while changing the ring buffer subbuf size
tracing: Update snapshot order along with main buffer order
ring-buffer: Make sure the spare sub buffer used for reads has same size
ring-buffer: Do no swap cpu buffers if order is different
ring-buffer: Clear pages on error in ring_buffer_subbuf_order_set() failure
ring-buffer: Read and write to ring buffers with custom sub buffer size
ring-buffer: Set new size of the ring buffer sub page
ring-buffer: Add interface for configuring trace sub buffer size
ring-buffer: Page size per ring buffer
ring-buffer: Have ring_buffer_print_page_header() be able to access ring_buffer_iter
ring-buffer: Check if absolute timestamp goes backwards
...
Linus Torvalds [Thu, 18 Jan 2024 22:21:22 +0000 (14:21 -0800)]
Merge tag 'probes-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull probes update from Masami Hiramatsu:
- Update the Kprobes trace event to show the actual function name in
the notrace-symbol warning.
Instead of using the user-specified symbol name, use the "%ps" printk
format to show the actual symbol at the probe address. Since a kprobe
event accepts an offset from a symbol that can be bigger than the
symbol size, the user-specified symbol may not be the actual probed
symbol.
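A purely illustrative kernel-side sketch of the "%ps" approach; the
helper name and message are hypothetical, not the actual trace_kprobe
code, and only show how "%ps" resolves the symbol that really
contains the probed address:

#include <linux/kernel.h>
#include <linux/printk.h>

/*
 * Hypothetical helper: when a probe lands inside a notrace function,
 * "%ps" prints the symbol that actually contains 'addr', which may
 * differ from the user-supplied name if the user gave symbol+offset
 * reaching past the end of that symbol.
 */
static void report_notrace_reject(const char *user_symbol, void *addr)
{
	pr_warn("probe on %s rejected: %ps is a notrace function\n",
		user_symbol, addr);
}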
* tag 'probes-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
trace/kprobe: Display the actual notrace function when rejecting a probe