Maciej Fijalkowski [Wed, 16 Sep 2020 21:10:05 +0000 (23:10 +0200)]
bpf: propagate poke descriptors to subprograms
Previously, there was no need for poke descriptors to be present in a
subprogram's bpf_prog_aux struct, since tailcalls were simply not allowed
in them. Each subprog is JITed independently, so in order to enable
JITing of subprograms that use tailcalls, do the following:
- in fixup_bpf_calls() store the index of tailcall insn onto the generated
poke descriptor,
- when insn patching occurs, adjust the tailcall insn idx from
bpf_patch_insn_data,
- then in jit_subprogs() check whether the given poke descriptor belongs
to the current subprog by checking if that previously stored absolute
index of tail call insn is in the scope of the insns of given subprog,
- update insn->imm with the new poke descriptor slot so that the proper
poke descriptor will be grabbed while JITing.
This way, each of the main program's poke descriptors is distributed
across the subprograms' poke descriptor arrays, so the main program's
descriptors can be untracked from the prog array map.
Also add the subprog's aux struct to the BPF map's poke_progs list by
calling map_poke_track() on it.
In case of any error, call map_poke_untrack() on the subprog aux structs
that have already been registered with the prog array map.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 16 Sep 2020 21:10:04 +0000 (23:10 +0200)]
bpf, x64: use %rcx instead of %rax for tail call retpolines
Currently, %rax is used to store the jump target when the BPF program is
emitting the retpoline instructions that handle the indirect tailcall.
There is a plan to use %rax for a different purpose: storing the tail
call counter. In order to preserve this value across tailcalls, adjust
the BPF indirect tailcalls so that the target program resides in %rcx,
and teach the retpoline instructions about the new location of the jump
target.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 16 Sep 2020 00:48:19 +0000 (17:48 -0700)]
selftests/bpf: Merge most of test_btf into test_progs
Merge 183 tests from test_btf into test_progs framework to be exercised
regularly. All the test_btf tests that were moved are modeled as proper
sub-tests in test_progs framework for ease of debugging and reporting.
No functional or behavioral changes were intended; I tried to preserve
the original behavior as much as possible. E.g., `test_progs -v` will
activate the "always_log" flag to emit the BTF validation log.
The only difference is in reducing the max_entries limit for pretty-printing
tests from (128 * 1024) to just 128 to reduce test run time without
reducing the coverage.
Example test run:
$ sudo ./test_progs -n 8
...
#8 btf:OK
Summary: 1/183 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200916004819.3767489-1-andriin@fb.com
Alexei Starovoitov [Wed, 16 Sep 2020 01:28:27 +0000 (18:28 -0700)]
Merge branch 'bpf_metadata'
Stanislav Fomichev says:
====================
Currently, if a user wants to store arbitrary metadata for an eBPF
program, for example, the program build commit hash or version, they
could store it in a map, and conveniently libbpf uses .data section to
populate an internal map. However, if the program does not actually
reference the map, then the map would be de-refcounted and freed.
This patch set introduces a new syscall BPF_PROG_BIND_MAP to add a map
to a program's used_maps, even if the program instructions do not
reference the map.
libbpf is extended to always BPF_PROG_BIND_MAP the .rodata section so the
metadata is kept in place.
bpftool is also extended to print metadata in the 'bpftool prog' list.
The variable is considered metadata if it starts with the
magic 'bpf_metadata_' prefix; everything after the prefix is the
metadata name.
An example use of this would be BPF C file declaring:
volatile const char bpf_metadata_commit_hash[] SEC(".rodata") = "abcdef123456";
and bpftool would emit:
$ bpftool prog
[...]
metadata:
commit_hash = "
abcdef123456"
v6 changes:
* libbpf: drop FEAT_GLOBAL_DATA from probe_prog_bind_map (Andrii Nakryiko)
* bpftool: combine find_metadata_map_id & find_metadata;
drops extra bpf_map_get_fd_by_id calls (Andrii Nakryiko)
* bpftool: use strncmp instead of strstr (Andrii Nakryiko)
* bpftool: memset(map_info) and extra empty line (Andrii Nakryiko)
v5 changes:
* selftest: verify that prog holds rodata (Andrii Nakryiko)
* selftest: use volatile for metadata (Andrii Nakryiko)
* bpftool: use sizeof in BPF_METADATA_PREFIX_LEN (Andrii Nakryiko)
* bpftool: new find_metadata that does map lookup (Andrii Nakryiko)
* libbpf: don't generalize probe_create_global_data (Andrii Nakryiko)
* libbpf: use OPTS_VALID in bpf_prog_bind_map (Andrii Nakryiko)
* libbpf: keep LIBBPF_0.2.0 sorted (Andrii Nakryiko)
v4 changes:
* Don't return EEXIST from syscall if already bound (Andrii Nakryiko)
* Removed --metadata argument (Andrii Nakryiko)
* Removed custom .metadata section (Alexei Starovoitov)
* Addressed Andrii's suggestions about btf helpers and vsi (Andrii Nakryiko)
* Moved bpf_prog_find_metadata into bpftool (Alexei Starovoitov)
v3 changes:
* API changes for bpf_prog_find_metadata (Toke Høiland-Jørgensen)
v2 changes:
* Made struct bpf_prog_bind_opts in libbpf so flags is optional.
* Deduped probe_kern_global_data and probe_prog_bind_map into a common
helper.
* Added comment regarding why EEXIST is ignored in libbpf bind map.
* Froze all LIBBPF_MAP_METADATA internal maps.
* Moved bpf_prog_bind_map into new LIBBPF_0.1.1 in libbpf.map.
* Added p_err() calls on error cases in bpftool show_prog_metadata.
* Reverse christmas tree coding style in bpftool show_prog_metadata.
* Made bpftool gen skeleton recognize .metadata as an internal map and
generate datasec definition in skeleton.
* Added C test using skeleton to assert that the metadata is what we
expect and rebinding causes EEXIST.
v1 changes:
* Fixed a few missing unlocks, and missing close while iterating map fds.
* Move mutex initialization to right after prog aux allocation, and mutex
destroy to right after prog aux free.
* s/ADD_MAP/BIND_MAP/
* Use mutex only instead of RCU to protect the used_map array & count.
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
====================
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
YiFei Zhu [Tue, 15 Sep 2020 23:45:43 +0000 (16:45 -0700)]
selftests/bpf: Test load and dump metadata with bpftool and skel
This is a simple test to check that loading and dumping metadata
in bpftool works, whether or not metadata contents are used by the
program.
A C test is also added to make sure the skeleton code can read the
metadata values.
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Link: https://lore.kernel.org/bpf/20200915234543.3220146-6-sdf@google.com
YiFei Zhu [Tue, 15 Sep 2020 23:45:42 +0000 (16:45 -0700)]
bpftool: Support dumping metadata
Dump metadata in the 'bpftool prog' list if it's present.
For formatting purposes, some BTF code is put directly in the
metadata dumping. Sanity checks are done on the map and the kind of the
btf_type to make sure we are actually dumping what we expect.
A helper, jsonw_reset, is added to the json writer so we can reuse the
same writer without emitting extraneous commas.
Sample output:
$ bpftool prog
6: cgroup_skb name prog tag bcf7977d3b93787c gpl
[...]
btf_id 4
metadata:
a = "foo"
b = 1
$ bpftool prog --json --pretty
[{
"id": 6,
[...]
"btf_id": 4,
"metadata": {
"a": "foo",
"b": 1
}
}
]
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Link: https://lore.kernel.org/bpf/20200915234543.3220146-5-sdf@google.com
YiFei Zhu [Tue, 15 Sep 2020 23:45:41 +0000 (16:45 -0700)]
libbpf: Add BPF_PROG_BIND_MAP syscall and use it on .rodata section
The patch adds a simple wrapper, bpf_prog_bind_map, around the syscall.
When libbpf loads a program, it will probe the kernel for support of
this syscall and unconditionally bind the .rodata section to the
program.
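For illustration, a minimal sketch of how a caller might use the new wrapper
(not from the patch itself; the helper name bind_metadata_map and the fd
handling are hypothetical):

	#include <bpf/bpf.h>

	/* Hypothetical helper: keep an already-created map in the program's
	 * used_maps, even though no instruction references it.
	 */
	static int bind_metadata_map(int prog_fd, int map_fd)
	{
		DECLARE_LIBBPF_OPTS(bpf_prog_bind_opts, opts); /* flags stay 0 */

		return bpf_prog_bind_map(prog_fd, map_fd, &opts);
	}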
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Link: https://lore.kernel.org/bpf/20200915234543.3220146-4-sdf@google.com
YiFei Zhu [Tue, 15 Sep 2020 23:45:40 +0000 (16:45 -0700)]
bpf: Add BPF_PROG_BIND_MAP syscall
This syscall binds a map to a program. It returns success if the map is
already bound to the program.
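For illustration only, assuming headers new enough to define
BPF_PROG_BIND_MAP and the prog_bind_map member of union bpf_attr, invoking
the command directly could look like this sketch:

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>

	static int prog_bind_map(int prog_fd, int map_fd)
	{
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.prog_bind_map.prog_fd = prog_fd;
		attr.prog_bind_map.map_fd = map_fd;
		attr.prog_bind_map.flags = 0;	/* no flags defined yet */

		return syscall(__NR_bpf, BPF_PROG_BIND_MAP, &attr, sizeof(attr));
	}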
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Link: https://lore.kernel.org/bpf/20200915234543.3220146-3-sdf@google.com
YiFei Zhu [Tue, 15 Sep 2020 23:45:39 +0000 (16:45 -0700)]
bpf: Mutex protect used_maps array and count
To support modifying the used_maps array, we use a mutex to protect
the use of the counter and the array. The mutex is initialized right
after the prog aux is allocated, and destroyed right before prog
aux is freed. This way we guarantee it's initialized for both cBPF
and eBPF.
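A rough kernel-side sketch of the resulting access pattern (not a verbatim
excerpt; it assumes the new mutex field is named used_maps_mutex and
do_something() stands for whatever the caller does per map):

	/* every reader or writer of used_maps now holds the per-prog mutex */
	mutex_lock(&prog->aux->used_maps_mutex);
	for (i = 0; i < prog->aux->used_map_cnt; i++)
		do_something(prog->aux->used_maps[i]);
	mutex_unlock(&prog->aux->used_maps_mutex);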
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: YiFei Zhu <zhuyifei1999@gmail.com>
Link: https://lore.kernel.org/bpf/20200915234543.3220146-2-sdf@google.com
Yonghong Song [Mon, 14 Sep 2020 22:32:10 +0000 (15:32 -0700)]
libbpf: Fix a compilation error with xsk.c for ubuntu 16.04
When syncing the latest libbpf repo to bcc, Ubuntu 16.04 (4.4.0 LTS kernel)
failed to compile xsk.c:
In file included from /tmp/debuild.0jkauG/bcc/src/cc/libbpf/src/xsk.c:23:0:
/tmp/debuild.0jkauG/bcc/src/cc/libbpf/src/xsk.c: In function ‘xsk_get_ctx’:
/tmp/debuild.0jkauG/bcc/src/cc/libbpf/include/linux/list.h:81:9: warning: implicit
declaration of function ‘container_of’ [-Wimplicit-function-declaration]
container_of(ptr, type, member)
^
/tmp/debuild.0jkauG/bcc/src/cc/libbpf/include/linux/list.h:83:9: note: in expansion
of macro ‘list_entry’
list_entry((ptr)->next, type, member)
...
src/cc/CMakeFiles/bpf-static.dir/build.make:209: recipe for target
'src/cc/CMakeFiles/bpf-static.dir/libbpf/src/xsk.c.o' failed
Commit 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
added the include file <linux/list.h>, which uses the macro "container_of".
The xsk.c file also includes <linux/ethtool.h> before <linux/list.h>.
In a more recent distro kernel, <linux/ethtool.h> includes <linux/kernel.h>,
which contains the macro definition for "container_of", so compilation is fine.
But in the Ubuntu 16.04 kernel, <linux/ethtool.h> does not include
<linux/kernel.h>, which causes the above compilation error.
Let us explicitly add <linux/kernel.h> in xsk.c to avoid the compilation error
on old distros.
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200914223210.1831262-1-yhs@fb.com
Yonghong Song [Mon, 14 Sep 2020 18:31:10 +0000 (11:31 -0700)]
bpftool: Fix build failure
When building bpf selftests like
make -C tools/testing/selftests/bpf -j20
I hit the following errors:
...
GEN /net-next/tools/testing/selftests/bpf/tools/build/bpftool/Documentation/bpftool-gen.8
<stdin>:75: (WARNING/2) Block quote ends without a blank line; unexpected unindent.
<stdin>:71: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
<stdin>:85: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
<stdin>:57: (WARNING/2) Block quote ends without a blank line; unexpected unindent.
<stdin>:66: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
<stdin>:109: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
<stdin>:175: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
<stdin>:273: (WARNING/2) Literal block ends without a blank line; unexpected unindent.
make[1]: *** [/net-next/tools/testing/selftests/bpf/tools/build/bpftool/Documentation/bpftool-perf.8] Error 12
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [/net-next/tools/testing/selftests/bpf/tools/build/bpftool/Documentation/bpftool-iter.8] Error 12
make[1]: *** [/net-next/tools/testing/selftests/bpf/tools/build/bpftool/Documentation/bpftool-struct_ops.8] Error 12
...
I am using:
-bash-4.4$ rst2man --version
rst2man (Docutils 0.11 [repository], Python 2.7.5, on linux2)
-bash-4.4$
The Makefile generated final .rst file (e.g., bpftool-cgroup.rst) looks like
...
ID AttachType AttachFlags Name
\n SEE ALSO\n========\n\t**bpf**\ (2),\n\t**bpf-helpers**\
(7),\n\t**bpftool**\ (8),\n\t**bpftool-btf**\
(8),\n\t**bpftool-feature**\ (8),\n\t**bpftool-gen**\
(8),\n\t**bpftool-iter**\ (8),\n\t**bpftool-link**\
(8),\n\t**bpftool-map**\ (8),\n\t**bpftool-net**\
(8),\n\t**bpftool-perf**\ (8),\n\t**bpftool-prog**\
(8),\n\t**bpftool-struct_ops**\ (8)\n
The rst2man generated .8 file looks like
Literal block ends without a blank line; unexpected unindent.
.sp
n SEEALSOn========nt**bpf**(2),nt**bpf\-helpers**(7),nt**bpftool**(8),nt**bpftool\-btf**(8),nt**
bpftool\-feature**(8),nt**bpftool\-gen**(8),nt**bpftool\-iter**(8),nt**bpftool\-link**(8),nt**
bpftool\-map**(8),nt**bpftool\-net**(8),nt**bpftool\-perf**(8),nt**bpftool\-prog**(8),nt**
bpftool\-struct_ops**(8)n
Looks like that particular version of rst2man prefers actual new lines
instead of \n.
Since `echo -e` may not be available in some environments, let us use
`printf`. The format string "%b" is used with `printf` to ensure all
escape characters are interpreted properly.
Fixes: 18841da98100 ("tools: bpftool: Automate generation for "SEE ALSO" sections in man pages")
Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200914183110.999906-1-yhs@fb.com
Magnus Karlsson [Mon, 14 Sep 2020 14:50:36 +0000 (16:50 +0200)]
xsk: Fix refcount warning in xp_dma_map
Fix a potential refcount warning in xp_dma_map about a zero value being
increased to one, by initializing the refcount to one to start with
instead of zero plus a refcount_inc().
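An illustrative fragment of the change (assuming the refcount field is
named 'users'); going from 0 to 1 via refcount_inc() is what trips the
refcount_t sanity check:

	/* before: the 0 -> 1 transition via refcount_inc() warns */
	refcount_set(&dma_map->users, 0);
	/* ... */
	refcount_inc(&dma_map->users);

	/* after: start at one */
	refcount_set(&dma_map->users, 1);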
Fixes: 921b68692abb ("xsk: Enable sharing of dma mappings")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/1600095036-23868-1-git-send-email-magnus.karlsson@gmail.com
Magnus Karlsson [Thu, 10 Sep 2020 08:31:06 +0000 (10:31 +0200)]
samples/bpf: Add quiet option to xdpsock
Add a quiet option (-Q) that disables the statistics printouts of
xdpsock. This is good to have when measuring 0% loss rate performance,
as performance will be quite terrible if the application uses printfs.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-4-git-send-email-magnus.karlsson@gmail.com
Magnus Karlsson [Thu, 10 Sep 2020 08:31:05 +0000 (10:31 +0200)]
samples/bpf: Fix possible deadlock in xdpsock
Fix a possible deadlock in the l2fwd application in xdpsock that can
occur when there is no space in the Tx ring. There are two ways to get
the kernel to consume entries in the Tx ring: calling sendto() to make
it send packets and freeing entries from the completion ring, as the
kernel will not send a packet if there is no space for it to add a
completion entry in the completion ring. The Tx loop in l2fwd only
used to call sendto(). This patch adds cleaning of the completion ring
in that loop.
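A simplified sketch of the resulting Tx path (complete_tx_l2fwd() and
kick_tx() are the sample's existing helpers; exact details may differ from
the patch):

	ret = xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx);
	while (ret != rcvd) {
		/* Tx ring full: free completed entries and kick the kernel
		 * so it can make progress, then retry the reservation.
		 */
		complete_tx_l2fwd(xsk, fds);
		kick_tx(xsk);
		ret = xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx);
	}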
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-3-git-send-email-magnus.karlsson@gmail.com
Magnus Karlsson [Thu, 10 Sep 2020 08:31:04 +0000 (10:31 +0200)]
samples/bpf: Fix one packet sending in xdpsock
Fix the sending of a single packet (or small burst) in xdpsock when
executing in copy mode. Currently, the l2fwd application in xdpsock
only transmits the packets after a batch of them has been received,
which might be confusing if you only send one packet and expect that
it is returned pronto. Fix this by calling sendto() more often and adding
a comment in the code that states that this can be optimized if
needed.
Reported-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-2-git-send-email-magnus.karlsson@gmail.com
Ilya Leoshkevich [Wed, 9 Sep 2020 23:21:41 +0000 (01:21 +0200)]
s390/bpf: Fix multiple tail calls
In order to branch around tail calls (due to out-of-bounds index,
exceeding tail call count or missing tail call target), JIT uses
label[0] field, which contains the address of the instruction following
the tail call. When there are multiple tail calls, label[0] value comes
from handling of a previous tail call, which is incorrect.
Fix by getting rid of label array and resolving the label address
locally: for all 3 branches that jump to it, emit 0 offsets at the
beginning, and then backpatch them with the correct value.
Also, do not use the long jump infrastructure: the tail call sequence
is known to be short, so make all 3 jumps short.
Fixes: 6651ee070b31 ("s390/bpf: implement bpf_tail_call() helper")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200909232141.3099367-1-iii@linux.ibm.com
Alexei Starovoitov [Fri, 11 Sep 2020 03:53:01 +0000 (20:53 -0700)]
Merge branch 'improve-bpf-tcp-cc-init'
Neal Cardwell says:
====================
This patch series reorganizes TCP congestion control initialization so that if
EBPF code called by tcp_init_transfer() sets the congestion control algorithm
by calling setsockopt(TCP_CONGESTION) then the TCP stack initializes the
congestion control module immediately, instead of having tcp_init_transfer()
later initialize the congestion control module.
This increases flexibility for the EBPF code that runs at connection
establishment time, and simplifies the code.
This has the following benefits:
(1) This allows CC module customizations made by the EBPF called in
tcp_init_transfer() to persist, and not be wiped out by a later
call to tcp_init_congestion_control() in tcp_init_transfer().
(2) Does not flip the order of EBPF and CC init, to avoid causing bugs
for existing code upstream that depends on the current order.
(3) Does not cause 2 initializations for CC in the case where the
EBPF called in tcp_init_transfer() wants to set the CC to a new CC
algorithm.
(4) Allows follow-on simplifications to the code in net/core/filter.c
and net/ipv4/tcp_cong.c, which currently both have some complexity
to special-case CC initialization to avoid double CC
initialization if EBPF sets the CC.
changes in v2:
o rebase onto bpf-next
o add another follow-on simplification suggested by Martin KaFai Lau:
"tcp: simplify tcp_set_congestion_control() load=false case"
changes in v3:
o no change in commits
o resent patch series from @gmail.com, since mail from ncardwell@google.com
stopped being accepted at netdev@vger.kernel.org mid-way through processing
the v2 patch series (between patches 2 and 3), confusing patchwork about
which patches belonged to the v2 patch series
====================
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Neal Cardwell [Thu, 10 Sep 2020 19:35:36 +0000 (15:35 -0400)]
tcp: Simplify tcp_set_congestion_control() load=false case
Simplify tcp_set_congestion_control() by removing the initialization
code path for the !load case.
There are only two call sites for tcp_set_congestion_control(). The
EBPF call site is the only one that passes load=false; it also passes
cap_net_admin=true. Because of that, the exact same behavior can be
achieved by removing the special if (!load) branch of the logic. Both
before and after this commit, the EBPF case will call
bpf_try_module_get(), and if that succeeds then call
tcp_reinit_congestion_control() or if that fails then return EBUSY.
Note that this returns the logic to a structure very similar to the
structure before commit 91b5b21c7c16 ("bpf: Add support for changing
congestion control"),
except that the CAP_NET_ADMIN status is passed in as a function
argument.
This clean-up was suggested by Martin KaFai Lau.
Suggested-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Lawrence Brakmo <brakmo@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Kevin Yang <yyd@google.com>
Neal Cardwell [Thu, 10 Sep 2020 19:35:35 +0000 (15:35 -0400)]
tcp: simplify _bpf_setsockopt(): Remove flags argument
Now that the previous patches have removed the code that uses the
flags argument to _bpf_setsockopt(), we can remove that argument.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Kevin Yang <yyd@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Neal Cardwell [Thu, 10 Sep 2020 19:35:34 +0000 (15:35 -0400)]
tcp: simplify tcp_set_congestion_control(): Always reinitialize
Now that the previous patches ensure that all call sites for
tcp_set_congestion_control() want to initialize congestion control, we
can simplify tcp_set_congestion_control() by removing the reinit
argument and the code to support it.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Kevin Yang <yyd@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Neal Cardwell [Thu, 10 Sep 2020 19:35:33 +0000 (15:35 -0400)]
tcp: Simplify EBPF TCP_CONGESTION to always init CC
Now that the previous patch ensures we don't initialize the congestion
control twice, when EBPF sets the congestion control algorithm at
connection establishment, we can simplify the code by initializing
the congestion control module at that time.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Kevin Yang <yyd@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Neal Cardwell [Thu, 10 Sep 2020 19:35:32 +0000 (15:35 -0400)]
tcp: Only init congestion control if not initialized already
Change tcp_init_transfer() to only initialize congestion control if it
has not been initialized already.
With this new approach, we can arrange things so that if the EBPF code
sets the congestion control by calling setsockopt(TCP_CONGESTION) then
tcp_init_transfer() will not re-initialize the CC module.
This is an approach that has the following beneficial properties:
(1) This allows CC module customizations made by the EBPF called in
tcp_init_transfer() to persist, and not be wiped out by a later
call to tcp_init_congestion_control() in tcp_init_transfer().
(2) Does not flip the order of EBPF and CC init, to avoid causing bugs
for existing code upstream that depends on the current order.
(3) Does not cause 2 initializations for CC in the case where the
EBPF called in tcp_init_transfer() wants to set the CC to a new CC
algorithm.
(4) Allows follow-on simplifications to the code in net/core/filter.c
and net/ipv4/tcp_cong.c, which currently both have some complexity
to special-case CC initialization to avoid double CC
initialization if EBPF sets the CC.
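A condensed sketch of the new guard in tcp_init_transfer() (flag name per
this series; surrounding code omitted):

	/* The BPF program run just before this point may already have chosen
	 * and initialized a CC module via setsockopt(TCP_CONGESTION); only
	 * initialize here if that did not happen.
	 */
	if (!inet_csk(sk)->icsk_ca_initialized)
		tcp_init_congestion_control(sk);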
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Kevin Yang <yyd@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Quentin Monnet [Thu, 10 Sep 2020 20:39:35 +0000 (21:39 +0100)]
tools: bpftool: Automate generation for "SEE ALSO" sections in man pages
The "SEE ALSO" sections of bpftool's manual pages refer to bpf(2),
bpf-helpers(7), then all existing bpftool man pages (save the current
one).
This leads to nearly-identical lists being duplicated in all manual
pages. Ideally, when a new page is created, all lists should be updated
accordingly, but this has led to omissions and inconsistencies multiple
times in the past.
Let's take it out of the RST files and generate the "SEE ALSO" sections
automatically in the Makefile when generating the man pages. The lists
are not really useful in the RST anyway because all other pages are
available in the same directory.
v3:
- Fix conflict with a previous patchset that introduced RST2MAN_OPTS
variable passed to rst2man.
v2:
- Use "echo -n" instead of "printf" in Makefile, to avoid any risk of
passing a format string directly to the command.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910203935.25304-1-quentin@isovalent.com
Song Liu [Thu, 10 Sep 2020 20:33:14 +0000 (13:33 -0700)]
bpf: Fix comment for helper bpf_current_task_under_cgroup()
This should be "current" not "skb".
Fixes: c6b5fb8690fa ("bpf: add documentation for eBPF helpers (42-50)")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/bpf/20200910203314.70018-1-songliubraving@fb.com
Yonghong Song [Thu, 10 Sep 2020 20:27:18 +0000 (13:27 -0700)]
selftests/bpf: Define string const as global for test_sysctl_prog.c
When tweaking llvm optimizations, I found that the selftest build failed
with the following error:
libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1
libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name'
in section '.rodata.str1.1'
Error: failed to open BPF object file: Relocation failed
make: *** [/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h] Error 255
make: *** Deleting file `/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h'
The local string constant "tcp_mem_name" is put into the '.rodata.str1.1'
section, which libbpf cannot handle. With untweaked upstream llvm,
"tcp_mem_name" is completely inlined after loop unrolling.
Commit 7fb5eefd7639 ("selftests/bpf: Fix test_sysctl_loop{1, 2} failure
due to clang change") solved a similar problem by defining the string
const as a global. Let us do the same here for test_sysctl_prog.c so it
can weather future potential llvm changes.
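A sketch of the kind of change this implies for test_sysctl_prog.c
(simplified; the actual string literal is as in the existing test):

	/* before: function-local, may land in .rodata.str1.1 */
	/*	volatile char tcp_mem_name[] = "net/ipv4/tcp_mem"; */

	/* after: global, emitted into .rodata, which libbpf can process */
	const char tcp_mem_name[] = "net/ipv4/tcp_mem";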
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910202718.956042-1-yhs@fb.com
Ilya Leoshkevich [Thu, 10 Sep 2020 17:13:36 +0000 (19:13 +0200)]
selftests/bpf: Fix test_ksyms on non-SMP kernels
On non-SMP kernels __per_cpu_start is not 0, so look it up in kallsyms.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910171336.3161995-1-iii@linux.ibm.com
Lorenz Bauer [Thu, 10 Sep 2020 11:02:48 +0000 (12:02 +0100)]
bpf: Plug hole in struct bpf_sk_lookup_kern
As Alexei points out [1], struct bpf_sk_lookup_kern has two 4-byte holes.
This leads to suboptimal instructions being generated (IPv4, x86):
1372 struct bpf_sk_lookup_kern ctx = {
0xffffffff81b87f30 <+624>: xor %eax,%eax
0xffffffff81b87f32 <+626>: mov $0x6,%ecx
0xffffffff81b87f37 <+631>: lea 0x90(%rsp),%rdi
0xffffffff81b87f3f <+639>: movl $0x110002,0x88(%rsp)
0xffffffff81b87f4a <+650>: rep stos %rax,%es:(%rdi)
0xffffffff81b87f4d <+653>: mov 0x8(%rsp),%eax
0xffffffff81b87f51 <+657>: mov %r13d,0x90(%rsp)
0xffffffff81b87f59 <+665>: incl %gs:0x7e4970a0(%rip)
0xffffffff81b87f60 <+672>: mov %eax,0x8c(%rsp)
0xffffffff81b87f67 <+679>: movzwl 0x10(%rsp),%eax
0xffffffff81b87f6c <+684>: mov %ax,0xa8(%rsp)
0xffffffff81b87f74 <+692>: movzwl 0x38(%rsp),%eax
0xffffffff81b87f79 <+697>: mov %ax,0xaa(%rsp)
Fix this by moving around sport and dport. pahole confirms there
are no more holes:
struct bpf_sk_lookup_kern {
u16 family; /* 0 2 */
u16 protocol; /* 2 2 */
__be16 sport; /* 4 2 */
u16 dport; /* 6 2 */
struct {
__be32 saddr; /* 8 4 */
__be32 daddr; /* 12 4 */
} v4; /* 8 8 */
struct {
const struct in6_addr * saddr; /* 16 8 */
const struct in6_addr * daddr; /* 24 8 */
} v6; /* 16 16 */
struct sock * selected_sk; /* 32 8 */
bool no_reuseport; /* 40 1 */
/* size: 48, cachelines: 1, members: 8 */
/* padding: 7 */
/* last cacheline: 48 bytes */
};
The assembly also doesn't contain the pesky rep stos anymore:
1372 struct bpf_sk_lookup_kern ctx = {
0xffffffff81b87f60 <+624>: movzwl 0x10(%rsp),%eax
0xffffffff81b87f65 <+629>: movq $0x0,0xa8(%rsp)
0xffffffff81b87f71 <+641>: movq $0x0,0xb0(%rsp)
0xffffffff81b87f7d <+653>: mov %ax,0x9c(%rsp)
0xffffffff81b87f85 <+661>: movzwl 0x38(%rsp),%eax
0xffffffff81b87f8a <+666>: movq $0x0,0xb8(%rsp)
0xffffffff81b87f96 <+678>: mov %ax,0x9e(%rsp)
0xffffffff81b87f9e <+686>: mov 0x8(%rsp),%eax
0xffffffff81b87fa2 <+690>: movq $0x0,0xc0(%rsp)
0xffffffff81b87fae <+702>: movl $0x110002,0x98(%rsp)
0xffffffff81b87fb9 <+713>: mov %eax,0xa0(%rsp)
0xffffffff81b87fc0 <+720>: mov %r13d,0xa4(%rsp)
1: https://lore.kernel.org/bpf/CAADnVQKE6y9h2fwX6OS837v-Uf+aBXnT_JXiN_bbo2gitZQ3tA@mail.gmail.com/
Fixes: e9ddbb7707ff ("bpf: Introduce SK_LOOKUP program type with a dedicated attach point")
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20200910110248.198326-1-lmb@cloudflare.com
Quentin Monnet [Thu, 10 Sep 2020 10:26:52 +0000 (11:26 +0100)]
tools: bpftool: Add "inner_map" to "bpftool map create" outer maps
There is no support for creating maps of type array-of-maps or
hash-of-maps in bpftool. This is because the kernel needs an inner_map_fd
to collect metadata on the inner maps to be supported by the new map,
but bpftool does not provide a way to pass this file descriptor.
Add a new optional "inner_map" keyword that can be used to pass a
reference to a map, retrieve a fd to that map, and pass it as the
inner_map_fd.
Add related documentation and bash completion. Note that we can
reference the inner map by its name, meaning the keyword "name" can
appear several times with different meanings (the mandatory outer map
name, and possibly a name used to find the inner_map_fd). The bash
completion will offer it just once, and will not suggest "name" on the
following command:
# bpftool map create /sys/fs/bpf/my_outer_map type hash_of_maps \
inner_map name my_inner_map [TAB]
Fixing that specific case seems too convoluted. Completion will work as
expected, however, if the outer map name comes first and the "inner_map
name ..." is passed second.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910102652.10509-4-quentin@isovalent.com
Quentin Monnet [Thu, 10 Sep 2020 10:26:51 +0000 (11:26 +0100)]
tools: bpftool: Keep errors for map-of-map dumps if distinct from ENOENT
When dumping outer maps or prog_array maps, and on lookup failure,
bpftool simply skips the entry with no error message. This is because
the kernel returns non-zero when no value is found for the provided key,
which frequently happen for those maps if they have not been filled.
When such a case occurs, errno is set to ENOENT. It seems unlikely we
could receive other error codes at this stage (we successfully retrieved
map info just before), but to be on the safe side, let's skip the entry
only if errno was ENOENT, and not for the other errors.
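A minimal sketch of the intended behaviour in the dump loop (not the exact
bpftool code):

	if (bpf_map_lookup_elem(fd, key, value)) {
		if (errno == ENOENT)
			continue;	/* no value for this key: skip quietly */
		p_err("can't lookup element: %s", strerror(errno));
		break;
	}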
v3: New patch
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910102652.10509-3-quentin@isovalent.com
Quentin Monnet [Thu, 10 Sep 2020 10:26:50 +0000 (11:26 +0100)]
tools: bpftool: Clean up function to dump map entry
The function used to dump a map entry in bpftool is a bit difficult to
follow, as a consequence of earlier refactorings. There is a variable
("num_elems") which does not appear to be necessary, and the error
handling would look cleaner if moved to its own function. Let's clean it
up. No functional change.
v2:
- v1 was erroneously removing the check on fd maps in an attempt to get
support for outer map dumps. This is already working. Instead, v2
focuses on cleaning up the dump_map_elem() function, to avoid
similar confusion in the future.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200910102652.10509-2-quentin@isovalent.com
Lorenz Bauer [Wed, 9 Sep 2020 16:27:12 +0000 (17:27 +0100)]
selftests: bpf: Test iterating a sockmap
Add a test that exercises a basic sockmap / sockhash iteration. For
now we simply count the number of elements seen. Once sockmap update
from iterators works we can extend this to perform a full copy.
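A sketch of what such a counting iterator program could look like (the
context layout follows the next patch in this series; names like
count_elems and elems are illustrative):

	#include "bpf_iter.h"		/* declares struct bpf_iter__sockmap */
	#include <bpf/bpf_helpers.h>

	char _license[] SEC("license") = "GPL";

	__u32 elems = 0;

	SEC("iter/sockmap")
	int count_elems(struct bpf_iter__sockmap *ctx)
	{
		struct sock *sk = ctx->sk;
		void *key = ctx->key;

		/* both pointers may be NULL, e.g. at the end of iteration */
		if (key && sk)
			elems++;
		return 0;
	}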
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200909162712.221874-4-lmb@cloudflare.com
Lorenz Bauer [Wed, 9 Sep 2020 16:27:11 +0000 (17:27 +0100)]
net: Allow iterating sockmap and sockhash
Add bpf_iter support for sockmap / sockhash, based on the bpf_sk_storage and
hashtable implementation. sockmap and sockhash share the same iteration
context: a pointer to an arbitrary key and a pointer to a socket. Both
pointers may be NULL, and so BPF has to perform a NULL check before accessing
them. Technically it's not possible for sockhash iteration to yield a NULL
socket, but we ignore this to be able to use a single iteration point.
Iteration will visit all keys that remain unmodified during the lifetime of
the iterator. It may or may not visit newly added ones.
Switch from using rcu_dereference_raw to plain rcu_dereference, so we gain
another guard rail if CONFIG_PROVE_RCU is enabled.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200909162712.221874-3-lmb@cloudflare.com
Lorenz Bauer [Wed, 9 Sep 2020 16:27:10 +0000 (17:27 +0100)]
net: sockmap: Remove unnecessary sk_fullsock checks
The lookup paths for sockmap and sockhash currently include a check
that returns NULL if the socket we just found is not a full socket.
However, this check is not necessary. On insertion we ensure that
we have a full socket (caveat around sock_ops), so request sockets
are not a problem. Time-wait sockets are allocated separate from
the original socket and then fed into the hashdance. They don't
affect the sockets already stored in the sockmap.
Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200909162712.221874-2-lmb@cloudflare.com
Quentin Monnet [Wed, 9 Sep 2020 16:24:59 +0000 (17:24 +0100)]
tools: bpftool: Include common options from separate file
Nearly all man pages for bpftool have the same common set of option
flags (--help, --version, --json, --pretty, --debug). The description is
duplicated across all the pages, which is more difficult to maintain if
the description of an option changes. It may also be confusing to sort
out what options are not "common" and should not be copied when creating
new manual pages.
Let's move the description for those common options to a separate file,
which is included with a RST directive when generating the man pages.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200909162500.17010-3-quentin@isovalent.com
Quentin Monnet [Wed, 9 Sep 2020 16:24:58 +0000 (17:24 +0100)]
tools: bpftool: Print optional built-in features along with version
Bpftool has a number of features that can be included or left aside
during compilation. This includes:
- Support for libbfd, providing the disassembler for JIT-compiled
programs.
- Support for BPF skeletons, used for profiling programs or iterating on
the PIDs of processes associated with BPF objects.
In order to make it easy for users to understand what features were
compiled for a given bpftool binary, print the status of the two
features above when showing the version number for bpftool ("bpftool -V"
or "bpftool version"). Document this in the main manual page. Example
invocations:
$ bpftool version
./bpftool v5.9.0-rc1
features: libbfd, skeletons
$ bpftool -p version
{
"version": "5.9.0-rc1",
"features": {
"libbfd": true,
"skeletons": true
}
}
Some other parameters are optional at compilation
("DISASM_FOUR_ARGS_SIGNATURE", LIBCAP support) but they do not impact
significantly bpftool's behaviour from a user's point of view, so their
status is not reported.
Available commands and supported program types depend on the version
number, and are therefore not reported either. Note that they are
already available, albeit without JSON, via bpftool's help messages.
v3:
- Use a simple list instead of boolean values for plain output.
v2:
- Fix JSON (object instead or array for the features).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200909162500.17010-2-quentin@isovalent.com
Quentin Monnet [Wed, 9 Sep 2020 16:22:51 +0000 (17:22 +0100)]
selftests, bpftool: Add bpftool (and eBPF helpers) documentation build
eBPF selftests include a script to check that bpftool builds correctly
with different command lines. Let's add one build for bpftool's
documentation so as to detect errors or warnings reported by rst2man when
compiling the man pages. Also add a build to the selftests Makefile to
make sure we build bpftool documentation along with bpftool when
building the selftests.
This also builds and checks warnings for the man page for eBPF helpers,
which is built along bpftool's documentation.
This change adds rst2man as a dependency for selftests (it comes with
Python's "docutils").
v2:
- Use "--exit-status=1" option for rst2man instead of counting lines
from stderr.
- Also build bpftool as part as the selftests build (and not only when
the tests are actually run).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200909162251.15498-3-quentin@isovalent.com
Quentin Monnet [Wed, 9 Sep 2020 16:22:50 +0000 (17:22 +0100)]
tools: bpftool: Log info-level messages when building bpftool man pages
To build man pages for bpftool (and for eBPF helper functions), rst2man
can log different levels of information. Let's make it log all levels
to keep the RST files clean.
Doing so, rst2man complains about double colons, used for literal
blocks, that look like underlines for section titles. Let's add the
necessary blank lines.
v2:
- Use "--verbose" instead of "-r 1" (same behaviour but more readable).
- Pass it through a RST2MAN_OPTS variable so we can easily pass other
options too.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200909162251.15498-2-quentin@isovalent.com
Chen Zhou [Tue, 8 Sep 2020 13:22:01 +0000 (21:22 +0800)]
bpf: Remove duplicate headers
Remove duplicate headers which are included twice.
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200908132201.184005-1-chenzhou10@huawei.com
Andrii Nakryiko [Tue, 8 Sep 2020 18:01:27 +0000 (11:01 -0700)]
perf: Stop using deprecated bpf_program__title()
Switch from deprecated bpf_program__title() API to
bpf_program__section_name(). Also drop unnecessary error checks because
neither bpf_program__title() nor bpf_program__section_name() can fail or
return NULL.
Fixes: 521095842027 ("libbpf: Deprecate notion of BPF program "title" in favor of "section name"")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/bpf/20200908180127.1249-1-andriin@fb.com
Yonghong Song [Wed, 9 Sep 2020 17:15:42 +0000 (10:15 -0700)]
selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change
Andrii reported that with the latest clang, when building selftests, we have
errors like:
error: progs/test_sysctl_loop1.c:23:16: in function sysctl_tcp_mem i32 (%struct.bpf_sysctl*):
Looks like the BPF stack limit of 512 bytes is exceeded.
Please move large on stack variables into BPF per-cpu array map.
The error is triggered by the following LLVM patch:
https://reviews.llvm.org/D87134
For example, the following code is from test_sysctl_loop1.c:
static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
{
volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string";
...
}
Without the above LLVM patch, the compiler optimized the loading of the string
(59 bytes long) into 7 64-bit loads, 1 8-bit load and 1 16-bit load,
occupying 64 bytes of stack.
With the above LLVM patch, the compiler only uses 8-bit loads, but each
subregister is 32-bit, so the stack requirement becomes 4 * 59 = 236 bytes.
Together with other stuff on the stack, the total stack size exceeds 512 bytes,
hence the compiler complains and quits.
To fix the issue, removing the "volatile" keyword or changing "volatile" to
"const"/"static const" does not work: the string is put in the .rodata.str1.1
section, which libbpf does not process, and it errors out with
libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1
libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name'
in section '.rodata.str1.1'
Defining the string const as a global variable fixes the issue, as it puts the
string constant in the '.rodata' section, which is recognized by libbpf. In the
future, when libbpf can process '.rodata.str*.*' properly, the global definition
can be changed back to a local one.
Defining tcp_mem_name as a global, however, triggered a verifier failure.
./test_progs -n 7/21
libbpf: load bpf program failed: Permission denied
libbpf: -- BEGIN DUMP LOG ---
libbpf:
invalid stack off=0 size=1
verification time 6975 usec
stack depth 160+64
processed 889 insns (limit 1000000) max_states_per_insn 4 total_states 14 peak_states 14 mark_read 10
libbpf: -- END LOG --
libbpf: failed to load program 'sysctl_tcp_mem'
libbpf: failed to load object 'test_sysctl_loop2.o'
test_bpf_verif_scale:FAIL:114
#7/21 test_sysctl_loop2.o:FAIL
This actually exposed a bpf program bug. In test_sysctl_loop{1,2}, we have code
like
const char tcp_mem_name[] = "<...long string...>";
...
char name[64];
...
for (i = 0; i < sizeof(tcp_mem_name); ++i)
if (name[i] != tcp_mem_name[i])
return 0;
In the above code, if sizeof(tcp_mem_name) > 64, the name[i] access may be
out of bounds. The sizeof(tcp_mem_name) is 59 for test_sysctl_loop1.c and
79 for test_sysctl_loop2.c.
Without the promotion-to-global change, the old compiler generates code where
the overflowing stack access is actually filled with a valid value, hiding
the bpf program bug. With the promotion-to-global change, the code is different;
more specifically, the previous loading of constants onto the stack is gone,
"name" occupies stack[-64:0], and the overflowing access triggers a verifier error.
To fix the issue, adjust "name" buffer size properly.
Reported-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200909171542.3673449-1-yhs@fb.com
Yonghong Song [Tue, 8 Sep 2020 17:57:03 +0000 (10:57 -0700)]
selftests/bpf: Add test for map_ptr arithmetic
Change the map_ptr_kern.c selftest to disable inlining for one of the
subtests, which would fail the test without the previous verifier change.
Also add verifier tests for both "map_ptr += scalar" and
"scalar += map_ptr" arithmetic.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200908175703.2463721-1-yhs@fb.com
Yonghong Song [Tue, 8 Sep 2020 17:57:02 +0000 (10:57 -0700)]
bpf: Permit map_ptr arithmetic with opcode add and offset 0
Commit 41c48f3a98231 ("bpf: Support access to bpf map fields") added
support for accessing bpf map fields with CO-RE. For example,
struct bpf_map {
__u32 max_entries;
} __attribute__((preserve_access_index));
struct bpf_array {
struct bpf_map map;
__u32 elem_size;
} __attribute__((preserve_access_index));
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__uint(max_entries, 4);
__type(key, __u32);
__type(value, __u32);
} m_array SEC(".maps");
SEC("cgroup_skb/egress")
int cg_skb(void *ctx)
{
struct bpf_array *array = (struct bpf_array *)&m_array;
/* .. array->map.max_entries .. */
}
In kernel, bpf_htab has similar structure,
struct bpf_htab {
struct bpf_map map;
...
}
In the above cg_skb(), to access array->map.max_entries with CO-RE, clang will
generate two builtins.
base = &m_array;
/* access array.map */
map_addr = __builtin_preserve_struct_access_info(base, 0, 0);
/* access array.map.max_entries */
max_entries_addr = __builtin_preserve_struct_access_info(map_addr, 0, 0);
max_entries = *max_entries_addr;
In the current llvm, if the two builtins are in the same function, or end up
in the same function after inlining, the compiler is smart enough to chain
them together and generates code like below:
base = &m_array;
max_entries = *(base + reloc_offset); /* reloc_offset = 0 in this case */
and we are fine.
But if we force no inlining for one of the functions in the test_map_ptr() selftest, e.g.,
check_default(), the above two __builtin_preserve_* will be in two different
functions. In this case, we will have code like:
func check_hash():
reloc_offset_map = 0;
base = &m_array;
map_base = base + reloc_offset_map;
check_default(map_base, ...)
func check_default(map_base, ...):
max_entries = *(map_base + reloc_offset_max_entries);
In kernel, map_ptr (CONST_PTR_TO_MAP) does not allow any arithmetic.
The above "map_base = base + reloc_offset_map" will trigger a verifier failure.
; VERIFY(check_default(&hash->map, map));
0: (18) r7 = 0xffffb4fe8018a004
2: (b4) w1 = 110
3: (63) *(u32 *)(r7 +0) = r1
R1_w=invP110 R7_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R10=fp0
; VERIFY_TYPE(BPF_MAP_TYPE_HASH, check_hash);
4: (18) r1 = 0xffffb4fe8018a000
6: (b4) w2 = 1
7: (63) *(u32 *)(r1 +0) = r2
R1_w=map_value(id=0,off=0,ks=4,vs=8,imm=0) R2_w=invP1 R7_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R10=fp0
8: (b7) r2 = 0
9: (18) r8 = 0xffff90bcb500c000
11: (18) r1 = 0xffff90bcb500c000
13: (0f) r1 += r2
R1 pointer arithmetic on map_ptr prohibited
To fix the issue, let us permit map_ptr + 0 arithmetic which will
result in exactly the same map_ptr.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200908175702.2463625-1-yhs@fb.com
Quentin Monnet [Fri, 4 Sep 2020 16:14:54 +0000 (17:14 +0100)]
tools, bpf: Synchronise BPF UAPI header with tools
Synchronise the bpf.h header under tools, to report the fixes recently
brought to the documentation for the BPF helpers.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904161454.31135-4-quentin@isovalent.com
Quentin Monnet [Fri, 4 Sep 2020 16:14:53 +0000 (17:14 +0100)]
bpf: Fix formatting in documentation for BPF helpers
Fix a formatting error in the description of bpf_load_hdr_opt() (rst2man
complains about a wrong indentation, but what is missing is actually a
blank line before the bullet list).
Fix and harmonise the formatting for other helpers.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904161454.31135-3-quentin@isovalent.com
Quentin Monnet [Fri, 4 Sep 2020 16:14:52 +0000 (17:14 +0100)]
tools: bpftool: Fix formatting in bpftool-link documentation
Fix a formatting error in the documentation for bpftool-link, so that
the man page can build correctly.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904161454.31135-2-quentin@isovalent.com
Daniel T. Lee [Fri, 4 Sep 2020 06:34:34 +0000 (15:34 +0900)]
samples, bpf: Add xsk_fwd test file to .gitignore
This commit adds the xsk_fwd test file, which was newly added to
samples/bpf, to .gitignore.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904063434.24963-2-danieltimlee@gmail.com
Daniel T. Lee [Fri, 4 Sep 2020 06:34:33 +0000 (15:34 +0900)]
samples, bpf: Replace bpf_program__title() with bpf_program__section_name()
Since commit 521095842027 ("libbpf: Deprecate notion of BPF program
"title" in favor of "section name""), the term "title" has been replaced
with "section name" in libbpf.
As bpf_program__title() has been deprecated, this commit switches to
bpf_program__section_name(). This also resolves the compilation warning.
Fixes: 521095842027 ("libbpf: Deprecate notion of BPF program "title" in favor of "section name"")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904063434.24963-1-danieltimlee@gmail.com
Andrii Nakryiko [Fri, 4 Sep 2020 04:16:11 +0000 (21:16 -0700)]
libbpf: Fix potential multiplication overflow
Detected by LGTM static analysis in the GitHub repo: fix a potential
multiplication overflow before the result is cast to size_t.
Fixes: 8505e8709b5e ("libbpf: Implement generalized .BTF.ext func/line info adjustment")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904041611.1695163-2-andriin@fb.com
Andrii Nakryiko [Fri, 4 Sep 2020 04:16:10 +0000 (21:16 -0700)]
libbpf: Fix another __u64 cast in printf
Another instance of __u64 needing either %lu or %llu, depending on the
architecture. Fix it with a cast to `unsigned long long`.
Fixes: 7e06aad52929 ("libbpf: Add multi-prog section support for struct_ops")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200904041611.1695163-1-andriin@fb.com
Hao Luo [Thu, 3 Sep 2020 20:05:28 +0000 (13:05 -0700)]
selftests/bpf: Fix check in global_data_init.
The return value of bpf_object__open_file() should be checked with
libbpf_get_error() rather than against NULL. This fix prevents test_progs
from crashing when test_global_data.o is not present.
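Roughly, the corrected pattern (a sketch, not the exact test code):

	#include <stdio.h>
	#include <bpf/libbpf.h>

	static struct bpf_object *open_obj(const char *path)
	{
		struct bpf_object *obj = bpf_object__open_file(path, NULL);

		/* errors are encoded in the returned pointer, so a plain
		 * NULL check misses them
		 */
		if (libbpf_get_error(obj)) {
			fprintf(stderr, "failed to open %s\n", path);
			return NULL;
		}
		return obj;
	}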
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200903200528.747884-1-haoluo@google.com
Alexei Starovoitov [Fri, 4 Sep 2020 00:14:40 +0000 (17:14 -0700)]
Merge branch 'libbpf-support-bpf-to-bpf-calls'
Andrii Nakryiko says:
====================
Currently, libbpf supports a limited form of BPF-to-BPF subprogram calls. The
restriction is that an entry-point BPF program should use *all* of the defined
sub-programs in the BPF .o file. If any of the subprograms is not used, such an
entry-point BPF program will be rejected by the verifier as containing unreachable
dead code. This is not a big limitation for cases with a single entry-point BPF
program, but it is quite a heavy restriction for multi-program files that use only
a partially overlapping set of subprograms.
This patch set removes all such restrictions and adds complete support for
using BPF sub-program calls on BPF side. This is achieved through libbpf
tracking subprograms individually and detecting which subprograms are used by
any given entry-point BPF program, and subsequently only appending and
relocating code for just those used subprograms.
In addition, libbpf now also supports multiple entry-point BPF programs within
the same ELF section. This allows structuring code so that there are a few
variants of BPF programs of the same type attaching to the same target
(e.g., for tracepoints and kprobes) without the need to worry about ELF
section name clashes.
This patch set opens the way for wider adoption of BPF subprogram calls,
especially for real-world production use-cases with a complicated net of
subprograms. It will allow further scaling of the BPF verification process through
good use of global functions, which can be verified independently. This is
also an important prerequisite for static linking, which allows static BPF
libraries not to worry about naming clashes for section names, as well as to use
static non-inlined functions (subprograms) without worrying about the verifier
rejecting the program due to dead code.
The patch set is structured as follows:
- patches 1-6 contain all the libbpf changes necessary to support multi-prog
sections and bpf2bpf subcalls;
- patch 7 adds dedicated selftests validating all combinations of possible
sub-calls (within and across sections, static vs global functions);
- patch 8 deprecates bpf_program__title() in favor of
bpf_program__section_name(). The intent was to also deprecate
bpf_object__find_program_by_title() as it's now nonsensical with multiple
programs per section. But there were too many selftest uses of it, and
I didn't want to delay these patches further and make the set even bigger, so I
left it for a follow-up cleanup;
- patches 9-10 remove uses of title-related APIs from bpftool and
bpf_program__title() use from selftests;
- patch 11 is converting fexit_bpf2bpf to have explicit subtest (it does
contain 4 subtests, which are not handled as sub-tests);
- patches 12-14 convert few complicated BPF selftests to use __noinline
functions to further validate correctness of libbpf's bpf2bpf processing
logic.
v2->v3:
- explained subprog relocation algorithm in more details (Alexei);
- pyperf, strobelight and cls_redirect got new subprog variants, leaving
other modes intact (Alexei);
v1->v2:
- rename DEPRECATED to LIBBPF_DEPRECATED to avoid name clashes;
- fix test_subprogs build;
- convert a bunch of complicated selftests to __noinline (Alexei).
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:42 +0000 (13:35 -0700)]
selftests/bpf: Add __noinline variant of cls_redirect selftest
As one of the most complicated and close-to-real-world programs, cls_redirect
is a good candidate to exercise libbpf's logic of handling bpf2bpf calls. So
add a variant using explicit __noinline for the majority of functions, except a
few of the most basic ones. Without inlining those few functions, the verifier
starts to complain about the program instruction limit of 1M instructions being
exceeded, most probably due to the instruction overhead of doing a sub-program
call.
Convert the user-space part of the selftest to have two sub-tests: with and
without inlining.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Lorenz Bauer <lmb@cloudflare.com>
Link: https://lore.kernel.org/bpf/20200903203542.15944-15-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:41 +0000 (13:35 -0700)]
selftests/bpf: Modernize xdp_noinline test w/ skeleton and __noinline
Update xdp_noinline to use BPF skeleton and force __noinline on helper
sub-programs. Also, split the existing logic into v4-only and v6-only variants
to complicate sub-program calling patterns (partially overlapping sets of
functions for the entry-point BPF programs).
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-14-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:40 +0000 (13:35 -0700)]
selftests/bpf: Add subprogs to pyperf, strobemeta, and l4lb_noinline tests
Add use of non-inlined subprogs to a few bigger selftests to exercise libbpf's
bpf2bpf handling logic. Also split the l4lb_all selftest into two sub-tests.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-13-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:39 +0000 (13:35 -0700)]
selftests/bpf: Turn fexit_bpf2bpf into test with subtests
There are clearly 4 subtests, so make it official.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-12-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:38 +0000 (13:35 -0700)]
libbpf: Deprecate notion of BPF program "title" in favor of "section name"
BPF program title is an ambiguous and misleading term. It is really the ELF
section name, so let's just call it that and deprecate the bpf_program__title()
API in favor of bpf_program__section_name().
Additionally, using bpf_object__find_program_by_title() is now inherently
dangerous and ambiguous, as multiple BPF programs can have the same section
name. So deprecate this API as well and recommend switching to the
non-ambiguous bpf_object__find_program_by_name().
Internally, clean up usage and misuse of the BPF program section name for
denoting the BPF program name. Shorten the field name to prog->sec_name to be
consistent with all other prog->sec_* variables.
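A hedged user-space sketch of the replacement calls (the object file and
program name here are made up for illustration):
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj = bpf_object__open("prog.bpf.o");

	if (libbpf_get_error(obj))
		return 1;

	/* unambiguous: look up the program by its C function name */
	struct bpf_program *prog = bpf_object__find_program_by_name(obj, "prog_a");

	if (prog)
		printf("section: %s\n", bpf_program__section_name(prog));

	bpf_object__close(obj);
	return 0;
}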
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-11-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:37 +0000 (13:35 -0700)]
selftests/bpf: Don't use deprecated libbpf APIs
Remove all uses of bpf_program__title() and
bpf_object__find_program_by_title().
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-10-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:36 +0000 (13:35 -0700)]
tools/bpftool: Replace bpf_program__title() with bpf_program__section_name()
bpf_program__title() is deprecated, switch to bpf_program__section_name() and
avoid compilation warnings.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-9-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:35 +0000 (13:35 -0700)]
selftests/bpf: Add selftest for multi-prog sections and bpf-to-bpf calls
Add a selftest exercising bpf-to-bpf subprogram calls, as well as multiple
entry-point BPF programs per section. Also make sure that BPF CO-RE works for
such setups, both for sub-programs and for multi-entry sections.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-8-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:34 +0000 (13:35 -0700)]
libbpf: Add multi-prog section support for struct_ops
Adjust struct_ops handling code to work with multi-program ELF sections
properly.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-7-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:33 +0000 (13:35 -0700)]
libbpf: Implement generalized .BTF.ext func/line info adjustment
Complete multi-prog section and multi sub-prog support in libbpf by properly
adjusting .BTF.ext's line and function information. Mark the exposed
btf_ext__reloc_func_info() and btf_ext__reloc_line_info() APIs as deprecated.
These APIs have a simplistic assumption that all sub-programs are going to be
appended to all main BPF programs, which doesn't hold in real life. It's
unlikely there are any users of this API, as it's very libbpf
internals-specific.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-6-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:32 +0000 (13:35 -0700)]
libbpf: Make RELO_CALL work for multi-prog sections and sub-program calls
This patch implements general and correct logic for bpf-to-bpf sub-program
calls. Only sub-programs used (called into) from an entry-point (main) BPF
program are going to be appended at the end of the main BPF program. This
ensures that the BPF verifier won't encounter any dead code due to copying an
unreferenced sub-program. This change means that each entry-point (main) BPF
program might have a different set of sub-programs appended to it, potentially
in a different order. This has implications for how sub-program call
relocations need to be handled, described below.
All relocations are now split into two categories: data references (maps and
global variables) and code references (sub-program calls). This distinction is
important because data references need to be relocated just once per each BPF
program and sub-program. These relocations are agnostic to instruction
locations, because they are not code-relative and they are relocating against
static targets (maps, variables with fixed offsets, etc.).
Sub-program RELO_CALL relocations, on the other hand, are highly dependent on
code position, because they are recorded as an instruction-relative offset. So
BPF sub-programs (those that do calls into other sub-programs) can't be
relocated once; they need to be relocated each time such a sub-program is
appended at the end of the main entry-point BPF program. As mentioned above,
each main BPF program might have a different subset and a different order of
sub-programs, so call relocations can't be done just once. Splitting data
reference and call relocations as described above allows doing this
efficiently and cleanly.
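As a rough standalone model of why call relocations must be redone per entry
point (a sketch of the idea, not libbpf's actual code): the BPF call
instruction encodes an instruction-relative offset, and the same subprogram
lands at a different position in each main program it is appended to.
#include <stdio.h>

/* For a BPF pseudo-call, insn->imm is the target instruction index relative
 * to the instruction following the call (target - call - 1). */
static int call_imm(int call_insn_idx, int subprog_start_idx)
{
	return subprog_start_idx - call_insn_idx - 1;
}

int main(void)
{
	/* The same subprogram appended to two different main programs starts
	 * at a different index, so the same source-level call gets a
	 * different imm in each copy. */
	printf("main A: call at 3, subprog appended at 10 -> imm %d\n",
	       call_imm(3, 10));
	printf("main B: call at 7, subprog appended at 25 -> imm %d\n",
	       call_imm(7, 25));
	return 0;
}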
bpf_object__find_program_by_name() will now ignore non-entry BPF programs.
Previously one could have looked up the '.text' fake BPF program, but the
existence of such a BPF program was always an implementation detail and you
couldn't do much useful with it. Now, though, all non-entry sub-programs get
their own BPF program with a name corresponding to the function name, so there
is no more '.text' name for a BPF program. This means there is effectively no
regression w.r.t. API behavior. But this is an important aspect to highlight,
because it's going to be critical once libbpf implements static linking of BPF
programs. Non-entry static BPF programs will be allowed to have conflicting
names, but global and main-entry BPF program names should be unique, just like
in the normal user-space linking process. So it's important to restrict this
aspect right now, keep static and non-entry functions as internal
implementation details, and not have to deal with regressions in behavior
later.
This patch leaves .BTF.ext adjustment as is until next patch.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200903203542.15944-5-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:31 +0000 (13:35 -0700)]
libbpf: Support CO-RE relocations for multi-prog sections
Fix up the CO-RE relocation code to handle relocations against ELF sections
containing multiple BPF programs. This requires looking up a BPF program by
its section name and the instruction index it contains. While it could have
been done as a simple loop, that could run into performance issues pretty
quickly, as the number of CO-RE relocations can be quite large in real-world
applications, and each CO-RE relocation now incurs a BPF program lookup. So
instead of a simple loop, implement a binary search by section name + insn
offset.
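A standalone sketch of the lookup idea (simplified; not libbpf's actual data
structures): keep programs sorted by section and starting instruction offset,
then binary-search for the program containing a given (section, insn) pair.
#include <stdio.h>

struct prog { int sec_idx; int sec_insn_off; int insns_cnt; };

/* Programs must be sorted by (sec_idx, sec_insn_off). Returns index or -1. */
static int find_prog(const struct prog *progs, int n, int sec_idx, int insn_off)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;
		const struct prog *p = &progs[mid];

		if (p->sec_idx < sec_idx ||
		    (p->sec_idx == sec_idx &&
		     p->sec_insn_off + p->insns_cnt <= insn_off))
			lo = mid + 1;
		else if (p->sec_idx > sec_idx ||
			 (p->sec_idx == sec_idx && p->sec_insn_off > insn_off))
			hi = mid - 1;
		else
			return mid;   /* insn_off falls inside progs[mid] */
	}
	return -1;
}

int main(void)
{
	struct prog progs[] = { {1, 0, 8}, {1, 8, 16}, {2, 0, 4} };

	printf("%d\n", find_prog(progs, 3, 1, 10));  /* prints 1 */
	return 0;
}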
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200903203542.15944-4-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:30 +0000 (13:35 -0700)]
libbpf: Parse multi-function sections into multiple BPF programs
Teach libbpf how to parse code sections into potentially multiple bpf_program
instances, based on ELF FUNC symbols. Each BPF program will keep track of its
position within its containing ELF section for translating section instruction
offsets into program instruction offsets: regardless of a BPF program's
location in the ELF section, its first instruction is always at local
instruction offset 0, so when libbpf is working with relocations (which use
section-based instruction offsets) this is critical for making proper
translations.
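A minimal sketch of that translation (field names are illustrative, not
libbpf's): a relocation's section-relative instruction index is converted into
an index local to the program that contains it.
/* sec_insn_off: index of the program's first instruction within its ELF
 * section; sec_rel_idx: section-relative index from a relocation record. */
static int sec_idx_to_prog_idx(int sec_insn_off, int sec_rel_idx)
{
	return sec_rel_idx - sec_insn_off;   /* program's insn 0 is local offset 0 */
}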
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200903203542.15944-3-andriin@fb.com
Andrii Nakryiko [Thu, 3 Sep 2020 20:35:29 +0000 (13:35 -0700)]
libbpf: Ensure ELF symbols table is found before further ELF processing
libbpf ELF parsing logic might need symbols available before ELF parsing is
completed, so we need to make sure that the symbol table section is found in
a separate pass before all the subsequent sections are processed.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200903203542.15944-2-andriin@fb.com
Magnus Karlsson [Wed, 2 Sep 2020 07:36:04 +0000 (09:36 +0200)]
xsk: Fix use-after-free in failed shared_umem bind
Fix use-after-free when a shared umem bind fails. The code incorrectly
tried to free the allocated buffer pool both in the bind code and then
later also when the socket was released. Fix this by setting the
buffer pool pointer to NULL after the bind code has freed the pool, so
that the socket release code will not try to free the pool. This is
the same solution as the regular, non-shared umem code path has. This
was missing from the shared umem path.
Fixes: b5aea28dca13 ("xsk: Add shared umem support between queue ids")
Reported-by: syzbot+5334f62e4d22804e646a@syzkaller.appspotmail.com
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1599032164-25684-1-git-send-email-magnus.karlsson@intel.com
Gustavo A. R. Silva [Wed, 2 Sep 2020 15:07:50 +0000 (10:07 -0500)]
xsk: Fix null check on error return path
Currently, dma_map is being null-checked, when the right object to be
null-checked is dma_map->dma_pages instead.
Fix this by null-checking dma_map->dma_pages.
Fixes: 921b68692abb ("xsk: Enable sharing of dma mappings")
Addresses-Coverity-ID:
1496811 ("Logically dead code")
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20200902150750.GA7257@embeddedor
Magnus Karlsson [Wed, 2 Sep 2020 09:06:09 +0000 (11:06 +0200)]
xsk: Fix possible segfault at xskmap entry insertion
Fix possible segfault when entry is inserted into xskmap. This can
happen if the socket is in a state where the umem has been set up, the
Rx ring created but it has yet to be bound to a device. In this case
the pool has not yet been created and we cannot reference it for the
existence of the fill ring. Fix this by removing the whole
xsk_is_setup_for_bpf_map function. Once upon a time, it was used to
make sure that the Rx and fill rings were set up before the driver
could call xsk_rcv, since there are no tests for the existence of
these rings in the data path. But these days, we have a state variable
that we test instead. When it is XSK_BOUND, everything has been set up
correctly and the socket has been bound. So no reason to have the
xsk_is_setup_for_bpf_map function anymore.
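A standalone model of the resulting check (simplified; the real code tests the
socket's state field in the xskmap update path under appropriate locking):
#include <stdio.h>

enum xsk_state { XSK_READY, XSK_BOUND, XSK_UNBOUND };

struct xsk_model { enum xsk_state state; };

/* Once the state is XSK_BOUND, the umem, the rings and the buffer pool are
 * all guaranteed to exist, so no per-object existence checks are needed. */
static int xskmap_can_insert(const struct xsk_model *xs)
{
	return xs->state == XSK_BOUND;
}

int main(void)
{
	struct xsk_model xs = { .state = XSK_READY };

	printf("before bind: %d\n", xskmap_can_insert(&xs));   /* 0 */
	xs.state = XSK_BOUND;
	printf("after bind:  %d\n", xskmap_can_insert(&xs));   /* 1 */
	return 0;
}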
Fixes: 7361f9c3d719 ("xsk: Move fill and completion rings to buffer pool")
Reported-by: syzbot+febe51d44243fbc564ee@syzkaller.appspotmail.com
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1599037569-26690-1-git-send-email-magnus.karlsson@intel.com
Magnus Karlsson [Wed, 2 Sep 2020 08:52:23 +0000 (10:52 +0200)]
xsk: Fix possible segfault in xsk umem diagnostics
Fix possible segfault in the xsk diagnostics code when dumping
information about the umem. This can happen when a umem has been
created, but the socket has not been bound yet. In this case, the xsk
buffer pool does not exist yet and we cannot dump the information
that was moved from the umem to the buffer pool. Fix this by testing
for the existence of the buffer pool and, if it is not there, not dumping
any of that information.
Fixes: c2d3d6a47462 ("xsk: Move queue_id, dev and need_wakeup to buffer pool")
Fixes: 7361f9c3d719 ("xsk: Move fill and completion rings to buffer pool")
Reported-by: syzbot+3f04d36b7336f7868066@syzkaller.appspotmail.com
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1599036743-26454-1-git-send-email-magnus.karlsson@intel.com
Yonghong Song [Wed, 2 Sep 2020 02:31:13 +0000 (19:31 -0700)]
selftests/bpf: Test task_file iterator without visiting pthreads
Modified the existing bpf_iter_test_file.c program to check whether
all accessed files are from the main thread or not.
$ ./test_progs -n 4
...
#4/7 task_file:OK
...
#4 bpf_iter:OK
Summary: 1/24 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200902023113.1672863-1-yhs@fb.com
Yonghong Song [Wed, 2 Sep 2020 02:31:12 +0000 (19:31 -0700)]
bpf: Avoid iterating duplicated files for task_file iterator
Currently, task_file iterator iterates all files from all tasks.
This may potentially visit a lot of duplicated files if there are
many tasks sharing the same files, e.g., typical pthreads
where these pthreads and the main thread are sharing the same files.
This patch changes the task_file iterator to skip a particular task
if that task shares the same files as its group_leader (the task
having the same tgid and also task->tgid == task->pid).
This preserves the same result, visiting all files from all
tasks, and reduces the runtime cost significantly, e.g., if there are
a lot of pthreads and the process has a lot of open files.
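A standalone model of the skip condition (simplified types; the real iterator
compares task->files against the group leader's files pointer):
#include <stdio.h>

struct task_model {
	int pid, tgid;
	void *files;                      /* stands in for the files_struct pointer */
	const struct task_model *leader;  /* group leader: the task with pid == tgid */
};

/* Skip a thread that is not the group leader and shares the leader's files;
 * its files will already be visited through the leader. */
static int skip_task(const struct task_model *t)
{
	return t->pid != t->tgid && t->files == t->leader->files;
}

int main(void)
{
	int shared_files;
	struct task_model leader = { .pid = 100, .tgid = 100, .files = &shared_files };
	struct task_model thread = { .pid = 101, .tgid = 100, .files = &shared_files,
				     .leader = &leader };

	leader.leader = &leader;
	printf("leader: %d, thread: %d\n",
	       skip_task(&leader), skip_task(&thread));   /* prints 0, 1 */
	return 0;
}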
Suggested-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Link: https://lore.kernel.org/bpf/20200902023112.1672792-1-yhs@fb.com
David S. Miller [Tue, 1 Sep 2020 20:23:58 +0000 (13:23 -0700)]
Merge branch 'dpaa2-eth-add-a-dpaa2_eth_-prefix-to-all-functions'
Ioana Ciornei says:
====================
dpaa2-eth: add a dpaa2_eth_ prefix to all functions
This is just a quick cleanup that aims at adding a dpaa2_eth_ prefix to
all functions within the dpaa2-eth driver even if those are static and
private to the driver. The main reason for doing this is that looking at a
perf top output, for example, is becoming an inconvenience because one cannot
easily determine which entries are dpaa2-eth related or not.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Mon, 31 Aug 2020 18:12:40 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth-dcb.c
Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Mon, 31 Aug 2020 18:12:39 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-eth.c
Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Mon, 31 Aug 2020 18:12:38 +0000 (21:12 +0300)]
dpaa2-eth: add a dpaa2_eth_ prefix to all functions in dpaa2-ethtool.c
Some static functions in the dpaa2-eth driver don't have the dpaa2_eth_
prefix and this is becoming an inconvenience when looking at, for
example, a perf top output and trying to determine easily which entries
are dpaa2-eth related. Amend this by adding the prefix to all the
functions.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eelco Chaudron [Mon, 31 Aug 2020 09:57:57 +0000 (11:57 +0200)]
net: openvswitch: fixes crash if nf_conncount_init() fails
If nf_conncount_init() fails, the dispatched work is currently not canceled,
causing problems when the timer fires. This change fixes this by not
scheduling the work until all initialization is successful.
Fixes: a65878d6f00b ("net: openvswitch: fixes potential deadlock in dp cleanup code")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Reviewed-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Mon, 31 Aug 2020 16:59:57 +0000 (11:59 -0500)]
ibmvnic: Harden device Command Response Queue handshake
In some cases, the device or firmware may be busy when the
driver attempts to perform the CRQ initialization handshake.
If the partner is busy, the hypervisor will return the H_CLOSED
return code. The aim of this patch is, if the device is not
ready, to query the device a number of times, with a small wait
time between queries. If all initialization requests fail,
the driver will remain in a dormant state, awaiting a signal
from the device that it is ready for operation.
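A rough user-space model of the retry shape described here (the constants and
the "busy" return code are stand-ins for the real hypervisor interface):
#include <stdio.h>
#include <unistd.h>

#define CRQ_BUSY  1   /* stands in for H_CLOSED from the hypervisor */
#define CRQ_OK    0
#define MAX_TRIES 5

static int try_crq_init(int attempt)
{
	return attempt < 2 ? CRQ_BUSY : CRQ_OK;   /* pretend the device is busy twice */
}

int main(void)
{
	int rc = CRQ_BUSY;

	for (int i = 0; i < MAX_TRIES && rc != CRQ_OK; i++) {
		rc = try_crq_init(i);
		if (rc != CRQ_OK)
			usleep(100 * 1000);       /* small wait between queries */
	}
	/* if still busy, stay dormant and wait for the device to signal readiness */
	printf("%s\n", rc == CRQ_OK ? "initialized" : "dormant, awaiting device");
	return 0;
}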
Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 1 Sep 2020 20:05:08 +0000 (13:05 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2020-09-01
The following pull-request contains BPF updates for your *net-next* tree.
There are two small conflicts when pulling, resolve as follows:
1) Merge conflict in tools/lib/bpf/libbpf.c between
88a82120282b ("libbpf: Factor
out common ELF operations and improve logging") in bpf-next and
1e891e513e16
("libbpf: Fix map index used in error message") in net-next. Resolve by taking
the hunk in bpf-next:
[...]
scn = elf_sec_by_idx(obj, obj->efile.btf_maps_shndx);
data = elf_sec_data(obj, scn);
if (!scn || !data) {
pr_warn("elf: failed to get %s map definitions for %s\n",
MAPS_ELF_SEC, obj->path);
return -EINVAL;
}
[...]
2) Merge conflict in drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c between
9647c57b11e5 ("xsk: i40e: ice: ixgbe: mlx5: Test for dma_need_sync earlier for
better performance") in bpf-next and
e20f0dbf204f ("net/mlx5e: RX, Add a prefetch
command for small L1_CACHE_BYTES") in net-next. Resolve the two locations by retaining
net_prefetch() and taking xsk_buff_dma_sync_for_cpu() from bpf-next. Should look like:
[...]
xdp_set_data_meta_invalid(xdp);
xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
net_prefetch(xdp->data);
[...]
We've added 133 non-merge commits during the last 14 day(s) which contain
a total of 246 files changed, 13832 insertions(+), 3105 deletions(-).
The main changes are:
1) Initial support for sleepable BPF programs along with bpf_copy_from_user() helper
for tracing to reliably access user memory, from Alexei Starovoitov.
2) Add BPF infra for writing and parsing TCP header options, from Martin KaFai Lau.
3) bpf_d_path() helper for returning full path for given 'struct path', from Jiri Olsa.
4) AF_XDP support for shared umems between devices and queues, from Magnus Karlsson.
5) Initial prep work for full BPF-to-BPF call support in libbpf, from Andrii Nakryiko.
6) Generalize bpf_sk_storage map & add local storage for inodes, from KP Singh.
7) Implement sockmap/hash updates from BPF context, from Lorenz Bauer.
8) BPF xor verification for scalar types & add BPF link iterator, from Yonghong Song.
9) Use target's prog type for BPF_PROG_TYPE_EXT prog verification, from Udip Pant.
10) Rework BPF tracing samples to use libbpf loader, from Daniel T. Lee.
11) Fix xdpsock sample to really cycle through all buffers, from Weqaar Janjua.
12) Improve type safety for tun/veth XDP frame handling, from Maciej Żenczykowski.
13) Various smaller cleanups and improvements all over the place.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Tue, 1 Sep 2020 14:11:15 +0000 (22:11 +0800)]
liquidio: Remove unneeded cast from memory allocation
Remove unneeded return value cast.
This is detected by coccinelle.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Tue, 1 Sep 2020 14:10:28 +0000 (22:10 +0800)]
net: sungem: Remove unneeded cast from memory allocation
Remove dma_alloc_coherent return value cast.
This is detected by coccinelle.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yutaro Hayakawa [Tue, 1 Sep 2020 13:59:45 +0000 (22:59 +0900)]
net/tls: Implement getsockopt SOL_TLS TLS_RX
Implement the getsockopt SOL_TLS TLS_RX which is currently missing. The
primary usecase is to use it in conjunction with TCP_REPAIR to
checkpoint/restore the TLS record layer state.
TLS connection state usually exists on the user space library. So
basically we can easily extract it from there, but when the TLS
connections are delegated to kTLS, that is not the case. We need to
have a way to extract the TLS state from the kernel for both the TX and
RX sides.
The new TLS_RX getsockopt copies the crypto_info to user in the same
way as TLS_TX does.
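A hedged user-space sketch of the new call, assuming an AES-GCM-128 kTLS
session is already established on fd (SOL_TLS is defined as a fallback in case
the libc headers lack it):
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif

static int dump_tls_rx_state(int fd)
{
	struct tls12_crypto_info_aes_gcm_128 ci;
	socklen_t len = sizeof(ci);

	memset(&ci, 0, sizeof(ci));
	if (getsockopt(fd, SOL_TLS, TLS_RX, &ci, &len))
		return -1;

	/* ci now holds the RX crypto state (key, IV, salt, rec_seq), mirroring
	 * what getsockopt(TLS_TX) already returned for the TX direction */
	printf("version: %u, cipher: %u\n", ci.info.version, ci.info.cipher_type);
	return 0;
}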
We have described use cases in our research work in Netdev 0x14
Transport Workshop [1].
Also, there is a TLS implementation called tlse [2] which supports
TLS connection migration. They have support of kTLS and their code
shows that they are expecting the future support of this option.
[1] https://speakerdeck.com/yutarohayakawa/prism-proxies-without-the-pain
[2] https://github.com/eduardsui/tlse
Signed-off-by: Yutaro Hayakawa <yhayakawa3720@gmail.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 1 Sep 2020 18:42:15 +0000 (11:42 -0700)]
Merge branch 'net-openvswitch-improve-the-codes'
Tonghao Zhang says:
====================
net: openvswitch: improve the codes
These patches are not bug fixes, they just improve the code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Tue, 1 Sep 2020 12:26:14 +0000 (20:26 +0800)]
net: openvswitch: remove unused keep_flows
keep_flows was introduced by [1], where it was used as a flag to indicate
whether or not to delete flows. When rehashing or expanding the table
instance, we will not flush the flows. Now that it is not used anymore,
remove it.
[1] - https://github.com/openvswitch/ovs/commit/
acd051f1761569205827dc9b037e15568a8d59f8
Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Tue, 1 Sep 2020 12:26:13 +0000 (20:26 +0800)]
net: openvswitch: refactor flow free function
Decrease table->count and ufid_count unconditionally, because the only
case where count and ufid_count are not used for counting is when
flushing the flows. To simplify the code, remove the "count"
argument of table_instance_flow_free.
To avoid bugs when deleting flows in the future, add a
WARN_ON to the flow flush function.
Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Tue, 1 Sep 2020 12:26:12 +0000 (20:26 +0800)]
net: openvswitch: improve the coding style
Don't change the logic, just improve the coding style.
Cc: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Björn Töpel [Tue, 1 Sep 2020 08:39:28 +0000 (10:39 +0200)]
bpf: {cpu,dev}map: Change various functions return type from int to void
The functions bq_enqueue(), bq_flush_to_queue(), and bq_xmit_all() in
{cpu,dev}map.c always return zero. Changing the return type from int
to void makes the code easier to follow.
Suggested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20200901083928.6199-1-bjorn.topel@gmail.com
Alexei Starovoitov [Mon, 31 Aug 2020 20:16:51 +0000 (13:16 -0700)]
bpf: Remove bpf_lsm_file_mprotect from sleepable list.
Technically the bpf programs can sleep while attached to bpf_lsm_file_mprotect,
but such programs need to access user memory, so they're in the might_fault()
category. This means they cannot be called from the file_mprotect LSM hook,
which takes a write lock on mm->mmap_lock.
Adjust the test accordingly.
Also add might_fault() to __bpf_prog_enter_sleepable() to catch such deadlocks early.
Fixes: 1e6c62a88215 ("bpf: Introduce sleepable BPF programs")
Fixes: e68a144547fc ("selftests/bpf: Add sleepable tests")
Reported-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200831201651.82447-1-alexei.starovoitov@gmail.com
Weqaar Janjua [Fri, 28 Aug 2020 16:17:17 +0000 (00:17 +0800)]
samples/bpf: Fix to xdpsock to avoid recycling frames
The txpush program in the xdpsock sample application is supposed
to send out all packets in the umem in a round-robin fashion.
The problem is that it only cycled through the first BATCH_SIZE
worth of packets. Fix this so that it cycles through all buffers
in the umem as intended.
Fixes: 248c7f9c0e21 ("samples/bpf: convert xdpsock to use libbpf for AF_XDP access")
Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20200828161717.42705-1-weqaar.a.janjua@intel.com
Magnus Karlsson [Fri, 28 Aug 2020 12:51:05 +0000 (14:51 +0200)]
samples/bpf: Optimize l2fwd performance in xdpsock
Optimize the throughput performance of the l2fwd sub-app in the
xdpsock sample application by removing a duplicate syscall and
increasing the size of the fill ring.
The latter needs some further explanation. We recommend that you set
the fill ring size >= HW RX ring size + AF_XDP RX ring size. Make sure
you fill up the fill ring with buffers at regular intervals, and with
this setting you will avoid allocation failures in the driver. These
are usually quite expensive since drivers have not been written to
assume that allocation failures are common. For regular sockets,
kernel-allocated memory is used that only runs out in OOM situations,
which should be rare.
These two performance optimizations together lead to a 6%
improvement for the l2fwd app on my machine.
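A hedged snippet showing that sizing recommendation with libbpf's AF_XDP
helpers (the hardware RX ring size is an assumed example value; check your
NIC's actual setting):
#include <bpf/xsk.h>

#define HW_RX_RING_SIZE 512   /* assumption: whatever the NIC RX ring is configured to */

/* fill ring >= HW RX ring size + AF_XDP RX ring size, per the recommendation above */
static const struct xsk_umem_config umem_cfg = {
	.fill_size = HW_RX_RING_SIZE + XSK_RING_CONS__DEFAULT_NUM_DESCS,
	.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
	.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
	.frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
};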
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1598619065-1944-1-git-send-email-magnus.karlsson@intel.com
Miaohe Lin [Mon, 31 Aug 2020 06:26:34 +0000 (02:26 -0400)]
net: ipv4: remove unused arg exact_dif in compute_score
The arg exact_dif is not used anymore, remove it. inet_exact_dif_match()
is no longer needed after the above is removed, so remove it too.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Miaohe Lin [Mon, 31 Aug 2020 06:26:10 +0000 (02:26 -0400)]
net: ipv6: remove unused arg exact_dif in compute_score
The arg exact_dif is not used anymore, remove it. inet6_exact_dif_match()
is no longer needed after the above is removed, so remove it too.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 31 Aug 2020 19:52:33 +0000 (12:52 -0700)]
Merge branch 'net-phy-add-Lynx-PCS-MDIO-module'
Ioana Ciornei says:
====================
net: phy: add Lynx PCS MDIO module
Add support for the Lynx PCS as a separate module in drivers/net/phy/.
The advantage of this structure is that multiple ethernet or switch
drivers used on NXP hardware (ENETC, Seville, Felix DSA switch etc) can
share the same implementation of PCS configuration and runtime
management.
The module implements phylink_pcs_ops and exports a phylink_pcs
(incorporated into a lynx_pcs) which can be directly passed to phylink
through phylink_pcs_set.
The first 3 patches add some missing pieces in phylink and the locked
mdiobus write accessor. Next, the Lynx PCS MDIO module is added as a
standalone module. The majority of the code is extracted from the Felix
DSA driver. The last patch makes the necessary changes in the Felix and
Seville drivers in order to use the new common PCS implementation.
At the moment, USXGMII (only with in-band AN), SGMII, QSGMII (with and
without in-band AN) and 2500Base-X (only w/o in-band AN) are supported
by the Lynx PCS MDIO module since these were also supported by Felix and
no functional change is intended at this time.
Changes in v2:
* got rid of the mdio_lynx_pcs structure and directly exported the
functions without the need of an indirection
* made the necessary adjustments for this in the Felix DSA driver
* solved the broken allmodconfig build test by making the module
tristate instead of bool
* fixed a memory leakage in the Felix driver (the pcs structure was
allocated twice)
Changes in v3:
* added support for PHYLINK PCS ops in DSA (patch 5/9)
* cleanup in Felix PHYLINK operations and migrate to
phylink_mac_link_up() being the callback of choice for applying MAC
configuration (patches 6-8)
Changes in v4:
* use the newly introduced phylink PCS mechanism
* install the phylink_pcs in the phylink_mac_config DSA ops
* remove the direct implementations of the PCS ops
* do no use the SGMII_ prefix when referring to the IF_MORE register
* add a phylink helper to decode the USXGMII code word
* remove cleanup patches for Felix (these have been already accepted)
* Seville (recently introduced) now has PCS support through the same
Lynx PCS module
Changes in v5:
- move the pcs-lynx driver to drivers/net/pcs
- reword the commit message a bit in 4/5
- add error checking and error propagation in 4/5
- s/IF_MODE_DUPLEX/IF_MODE_HALF_DUPLEX in 4/5
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Sun, 30 Aug 2020 08:34:02 +0000 (11:34 +0300)]
net: dsa: ocelot: use the Lynx PCS helpers in Felix and Seville
Use the helper functions introduced by the newly added
Lynx PCS MDIO module in the Felix VSC9959 and Seville VSC9953.
Instead of representing the PCS as a phy_device, a mdio_device structure
will be passed to the Lynx module which is now actually implementing all
the PCS configuration and status reporting.
All code previously used for PCS monitoring and runtime configuration
is removed and replaced with calls to the Lynx PCS operations.
Tested on the following SERDES protocols of LS1028A: 0x7777
(2500Base-X), 0x85bb (QSGMII), 0x9999 (SGMII) and 0x13bb (USXGMII).
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Sun, 30 Aug 2020 08:34:01 +0000 (11:34 +0300)]
net: phy: add Lynx PCS module
Add a Lynx PCS module which exposes the necessary operations to drive
the PCS using phylink.
The majority of the code is extracted from the Felix DSA driver, which
will also be modified in a later patch, and exposed as a separate module
for code reusability purposes.
As such, this aims at feature and bug parity with the existing Felix DSA
driver, and thus USXGMII, SGMII, QSGMII and 2500Base-X (only w/o in-band
AN) are supported by the Lynx PCS module since these were also supported
by Felix.
The module can only be enabled by the drivers that need it and is not
user selectable.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Sun, 30 Aug 2020 08:34:00 +0000 (11:34 +0300)]
net: mdiobus: add clause 45 mdiobus write accessor
Add the locked variant of the clause 45 mdiobus write accessor -
mdiobus_c45_write().
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Sun, 30 Aug 2020 08:33:59 +0000 (11:33 +0300)]
net: phylink: consider QSGMII interface mode in phylink_mii_c22_pcs_get_state
The same link partner advertisement word is used for both QSGMII and
SGMII, thus treat both interface modes using the same
phylink_decode_sgmii_word() function.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Sun, 30 Aug 2020 08:33:58 +0000 (11:33 +0300)]
net: phylink: add helper function to decode USXGMII word
With the new addition of the USXGMII link partner ability constants we
can now introduce a phylink helper that decodes the USXGMII word and
populates the appropriate fields in the phylink_link_state structure
based on them.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Sat, 29 Aug 2020 11:58:23 +0000 (19:58 +0800)]
net/wan/fsl_ucc_hdlc: Add MODULE_DESCRIPTION
Add missing MODULE_DESCRIPTION.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Sat, 29 Aug 2020 11:57:37 +0000 (19:57 +0800)]
net: hns: Remove unused macro AE_NAME_PORT_ID_IDX
There is no caller in tree.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Sat, 29 Aug 2020 11:56:23 +0000 (19:56 +0800)]
net: dl2k: Remove unused macro DRV_NAME
There is no caller in tree any more.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>