Git Repo - J-linux.git/commitdiff
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf...
author Jakub Kicinski <[email protected]>
Mon, 29 Apr 2024 18:59:20 +0000 (11:59 -0700)
committer Jakub Kicinski <[email protected]>
Mon, 29 Apr 2024 20:12:19 +0000 (13:12 -0700)
Daniel Borkmann says:

====================
pull-request: bpf-next 2024-04-29

We've added 147 non-merge commits during the last 32 day(s) which contain
a total of 158 files changed, 9400 insertions(+), 2213 deletions(-).

The main changes are:

1) Add an internal-only BPF per-CPU instruction for resolving per-CPU
   memory addresses and implement support in x86 BPF JIT. This allows
   inlining per-CPU array and hashmap lookups
   and the bpf_get_smp_processor_id() helper, from Andrii Nakryiko.

2) Add BPF link support for sk_msg and sk_skb programs, from Yonghong Song.

3) Optimize x86 BPF JIT's emit_mov_imm64, and add support for various
   atomics in bpf_arena which can be JITed as a single x86 instruction,
   from Alexei Starovoitov.

4) Add support for passing mark with bpf_fib_lookup helper,
   from Anton Protopopov.

5) Add a new bpf_wq API for deferring events and refactor sleepable
   bpf_timer code to keep common code where possible,
   from Benjamin Tissoires. [A hedged usage sketch follows this list.]

6) Fix BPF_PROG_TEST_RUN infra with regard to bpf_dummy_struct_ops programs
   to check when NULL is passed for non-nullable parameters,
   from Eduard Zingerman.

7) Harden the BPF verifier's and/or/xor value tracking,
   from Harishankar Vishwanathan.

8) Introduce crypto kfuncs to make BPF programs able to utilize the kernel
   crypto subsystem, from Vadim Fedorenko.

9) Various improvements to the BPF instruction set standardization doc,
   from Dave Thaler.

10) Extend libbpf APIs to partially consume items from the BPF ringbuffer,
    from Andrea Righi. [A user-space sketch follows the tag message.]

11) Bigger batch of BPF selftests refactoring to use common network helpers
    and to drop duplicate code, from Geliang Tang.

12) Support bpf_tail_call_static() helper for BPF programs with GCC 13,
    from Jose E. Marchesi.

13) Add bpf_preempt_{disable,enable}() kfuncs in order to allow a BPF
    program to have code sections where preemption is disabled,
    from Kumar Kartikeya Dwivedi. [A short sketch follows this list.]

14) Allow invoking BPF kfuncs from BPF_PROG_TYPE_SYSCALL programs,
    from David Vernet.

15) Extend the BPF verifier to allow different input maps for a given
    bpf_for_each_map_elem() helper call in a BPF program, from Philo Lu.

16) Add support for PROBE_MEM32 and bpf_addr_space_cast instructions
    for riscv64 and arm64 JITs to enable BPF Arena, from Puranjay Mohan.

17) Shut up a false-positive KMSAN splat in interpreter mode by unpoisoning
    the stack memory, from Martin KaFai Lau.

18) Improve xsk selftest coverage with new tests on maximum and minimum
    hardware ring size configurations, from Tushar Vyavahare.

19) Various ReST man page fixes as well as documentation and bash completion
    improvements for bpftool, from Rameez Rehman & Quentin Monnet.

20) Fix libbpf with regard to dumping subsequent char arrays,
    from Quentin Deslandes.
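
For item 5 above, here is a minimal sketch of how a BPF program is expected
to wire up the new bpf_wq API. The extern kfunc declarations and the
callback signature below are assumptions modeled on the existing bpf_timer
pattern (in-tree users pick these up from the selftests' bpf_experimental.h),
so verify them against this tree before relying on them:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  struct elem {
          struct bpf_wq wq;       /* must be embedded in a map value */
          __u64 payload;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct elem);
  } wq_map SEC(".maps");

  /* Assumed kfunc signatures -- modeled on bpf_timer, not verified here. */
  extern int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags) __ksym;
  extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
                                      int (callback_fn)(void *map, int *key, void *value),
                                      unsigned int flags, void *aux__ign) __ksym;
  extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __ksym;

  /* Deferred work; runs later from a sleepable (workqueue) context. */
  static int wq_cb(void *map, int *key, void *value)
  {
          return 0;
  }

  SEC("tc")
  int defer_work(struct __sk_buff *skb)
  {
          int key = 0;
          struct elem *val = bpf_map_lookup_elem(&wq_map, &key);

          if (!val)
                  return 0;
          if (bpf_wq_init(&val->wq, &wq_map, 0))
                  return 0;
          if (bpf_wq_set_callback_impl(&val->wq, wq_cb, 0, NULL))
                  return 0;
          bpf_wq_start(&val->wq, 0);
          return 0;
  }

  char _license[] SEC("license") = "GPL";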
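
For item 13, the preemption kfuncs are paired, argument-less calls; the
verifier changes in this series track them and reject programs that exit
with preemption still disabled (see active_preempt_lock in the verifier
diff below). A minimal sketch with assumed extern declarations:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  /* Assumed declarations; in-tree selftests get these from bpf_experimental.h. */
  extern void bpf_preempt_disable(void) __ksym;
  extern void bpf_preempt_enable(void) __ksym;

  SEC("tc")
  int preempt_section(struct __sk_buff *skb)
  {
          bpf_preempt_disable();
          /* short, non-sleepable critical section: no sleeping helpers here */
          bpf_preempt_enable();   /* must be balanced before the program exits */
          return 0;
  }

  char _license[] SEC("license") = "GPL";

The bpf_guard_preempt() entry in the shortlog below appears to be a
convenience wrapper that pairs these two calls automatically.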

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (147 commits)
  bpf, docs: Clarify PC use in instruction-set.rst
  bpf_helpers.h: Define bpf_tail_call_static when building with GCC
  bpf, docs: Add introduction for use in the ISA Internet Draft
  selftests/bpf: extend BPF_SOCK_OPS_RTT_CB test for srtt and mrtt_us
  bpf: add mrtt and srtt as BPF_SOCK_OPS_RTT_CB args
  selftests/bpf: dummy_st_ops should reject 0 for non-nullable params
  bpf: check bpf_dummy_struct_ops program params for test runs
  selftests/bpf: do not pass NULL for non-nullable params in dummy_st_ops
  selftests/bpf: adjust dummy_st_ops_success to detect additional error
  bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable
  selftests/bpf: Add ring_buffer__consume_n test.
  bpf: Add bpf_guard_preempt() convenience macro
  selftests: bpf: crypto: add benchmark for crypto functions
  selftests: bpf: crypto skcipher algo selftests
  bpf: crypto: add skcipher to bpf crypto
  bpf: make common crypto API for TC/XDP programs
  bpf: update the comment for BTF_FIELDS_MAX
  selftests/bpf: Fix wq test.
  selftests/bpf: Use make_sockaddr in test_sock_addr
  selftests/bpf: Use connect_to_addr in test_sock_addr
  ...
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
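
Item 10 in the summary above is the user-space side of this pull: libbpf
gains a way to consume a bounded number of ring buffer records per call
(exercised by the ring_buffer__consume_n selftest in the shortlog). A
hedged usage sketch; check the exact prototype against the libbpf headers
in this tree:

  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Drain at most 32 records per pass so one busy ring buffer cannot
   * starve the rest of the event loop. ring_buffer__consume_n() is
   * assumed to return the number of records consumed or a negative errno.
   */
  static int drain_some(struct ring_buffer *rb)
  {
          int n = ring_buffer__consume_n(rb, 32);

          if (n < 0)
                  fprintf(stderr, "ring_buffer__consume_n: %d\n", n);
          return n;
  }
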
12 files changed:
MAINTAINERS
arch/x86/net/bpf_jit_comp.c
include/linux/bpf.h
include/net/tcp.h
kernel/bpf/Makefile
kernel/bpf/syscall.c
kernel/bpf/verifier.c
kernel/trace/bpf_trace.c
net/core/filter.c
net/core/sock_map.c
net/ipv4/tcp_input.c
tools/testing/selftests/bpf/Makefile

diff --combined MAINTAINERS
index ab89edc6974d17670575651e136c27f70065eea9,c9f887fbb47751be1ed67141dbfc7a2dcc1588a8..943921d642add01a6b2736625bb38da28fc22a57
@@@ -2191,6 -2191,7 +2191,6 @@@ N:      mx
  
  ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
  M:    Shawn Guo <[email protected]>
 -M:    Li Yang <[email protected]>
  L:    [email protected] (moderated for non-subscribers)
  S:    Maintained
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
@@@ -2707,7 -2708,7 +2707,7 @@@ F:      sound/soc/rockchip
  N:    rockchip
  
  ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  R:    Alim Akhtar <[email protected]>
  L:    [email protected] (moderated for non-subscribers)
  L:    [email protected]
@@@ -3572,7 -3573,6 +3572,7 @@@ S:      Supporte
  C:    irc://irc.oftc.net/bcache
  T:    git https://evilpiepirate.org/git/bcachefs.git
  F:    fs/bcachefs/
 +F:    Documentation/filesystems/bcachefs/
  
  BDISP ST MEDIA DRIVER
  M:    Fabien Dessenne <[email protected]>
@@@ -3822,6 -3822,14 +3822,14 @@@ F:    kernel/bpf/tnum.
  F:    kernel/bpf/trampoline.c
  F:    kernel/bpf/verifier.c
  
+ BPF [CRYPTO]
+ M:    Vadim Fedorenko <[email protected]>
+ L:    [email protected]
+ S:    Maintained
+ F:    crypto/bpf_crypto_skcipher.c
+ F:    include/linux/bpf_crypto.h
+ F:    kernel/bpf/crypto.c
  BPF [DOCUMENTATION] (Related to Standardization)
  R:    David Vernet <[email protected]>
  L:    [email protected]
@@@ -4869,6 -4877,7 +4877,6 @@@ F:      drivers/power/supply/cw2015_battery.
  CEPH COMMON CODE (LIBCEPH)
  M:    Ilya Dryomov <[email protected]>
  M:    Xiubo Li <[email protected]>
 -R:    Jeff Layton <[email protected]>
  L:    [email protected]
  S:    Supported
  W:    http://ceph.com/
@@@ -4880,6 -4889,7 +4888,6 @@@ F:      net/ceph
  CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
  M:    Xiubo Li <[email protected]>
  M:    Ilya Dryomov <[email protected]>
 -R:    Jeff Layton <[email protected]>
  L:    [email protected]
  S:    Supported
  W:    http://ceph.com/
@@@ -5555,7 -5565,7 +5563,7 @@@ F:      drivers/cpuidle/cpuidle-big_little.
  CPUIDLE DRIVER - ARM EXYNOS
  M:    Daniel Lezcano <[email protected]>
  M:    Kukjin Kim <[email protected]>
 -R:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +R:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  L:    [email protected]
  S:    Maintained
@@@ -6154,6 -6164,7 +6162,6 @@@ DEVICE-MAPPER  (LVM
  M:    Alasdair Kergon <[email protected]>
  M:    Mike Snitzer <[email protected]>
  M:    Mikulas Patocka <[email protected]>
 -M:    [email protected]
  L:    [email protected]
  S:    Maintained
  Q:    http://patchwork.kernel.org/project/dm-devel/list/
@@@ -7829,8 -7840,9 +7837,8 @@@ W:      http://aeschi.ch.eu.org/efs
  F:    fs/efs/
  
  EHEA (IBM pSeries eHEA 10Gb ethernet adapter) DRIVER
 -M:    Douglas Miller <[email protected]>
  L:    [email protected]
 -S:    Maintained
 +S:    Orphan
  F:    drivers/net/ethernet/ibm/ehea/
  
  ELM327 CAN NETWORK DRIVER
@@@ -8012,8 -8024,6 +8020,8 @@@ F:      include/linux/mii.
  F:    include/linux/of_net.h
  F:    include/linux/phy.h
  F:    include/linux/phy_fixed.h
 +F:    include/linux/phy_link_topology.h
 +F:    include/linux/phy_link_topology_core.h
  F:    include/linux/phylib_stubs.h
  F:    include/linux/platform_data/mdio-bcm-unimac.h
  F:    include/linux/platform_data/mdio-gpio.h
@@@ -8522,6 -8532,7 +8530,6 @@@ S:      Maintaine
  F:    drivers/video/fbdev/fsl-diu-fb.*
  
  FREESCALE DMA DRIVER
 -M:    Li Yang <[email protected]>
  M:    Zhang Wei <[email protected]>
  L:    [email protected]
  S:    Maintained
@@@ -8686,9 -8697,10 +8694,9 @@@ F:     drivers/soc/fsl/qe/tsa.
  F:    include/dt-bindings/soc/cpm1-fsl,tsa.h
  
  FREESCALE QUICC ENGINE UCC ETHERNET DRIVER
 -M:    Li Yang <[email protected]>
  L:    [email protected]
  L:    [email protected]
 -S:    Maintained
 +S:    Orphan
  F:    drivers/net/ethernet/freescale/ucc_geth*
  
  FREESCALE QUICC ENGINE UCC HDLC DRIVER
@@@ -8705,9 -8717,10 +8713,9 @@@ S:     Maintaine
  F:    drivers/tty/serial/ucc_uart.c
  
  FREESCALE SOC DRIVERS
 -M:    Li Yang <[email protected]>
  L:    [email protected]
  L:    [email protected] (moderated for non-subscribers)
 -S:    Maintained
 +S:    Orphan
  F:    Documentation/devicetree/bindings/misc/fsl,dpaa2-console.yaml
  F:    Documentation/devicetree/bindings/soc/fsl/
  F:    drivers/soc/fsl/
@@@ -8741,15 -8754,17 +8749,15 @@@ F:   Documentation/devicetree/bindings/so
  F:    sound/soc/fsl/fsl_qmc_audio.c
  
  FREESCALE USB PERIPHERAL DRIVERS
 -M:    Li Yang <[email protected]>
  L:    [email protected]
  L:    [email protected]
 -S:    Maintained
 +S:    Orphan
  F:    drivers/usb/gadget/udc/fsl*
  
  FREESCALE USB PHY DRIVER
 -M:    Ran Wang <[email protected]>
  L:    [email protected]
  L:    [email protected]
 -S:    Maintained
 +S:    Orphan
  F:    drivers/usb/phy/phy-fsl-usb*
  
  FREEVXFS FILESYSTEM
@@@ -8994,7 -9009,7 +9002,7 @@@ F:      drivers/i2c/muxes/i2c-mux-gpio.
  F:    include/linux/platform_data/i2c-mux-gpio.h
  
  GENERIC GPIO RESET DRIVER
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  S:    Maintained
  F:    drivers/reset/reset-gpio.c
  
@@@ -9577,7 -9592,7 +9585,7 @@@ F:      kernel/power
  
  HID CORE LAYER
  M:    Jiri Kosina <[email protected]>
 -M:    Benjamin Tissoires <ben[email protected]>
 +M:    Benjamin Tissoires <ben[email protected]>
  L:    [email protected]
  S:    Maintained
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
@@@ -9645,9 -9660,7 +9653,9 @@@ L:      [email protected]
  S:    Maintained
  F:    drivers/hid/hid-logitech-hidpp.c
  
 -HIGH-RESOLUTION TIMERS, CLOCKEVENTS
 +HIGH-RESOLUTION TIMERS, TIMER WHEEL, CLOCKEVENTS
 +M:    Anna-Maria Behnsen <[email protected]>
 +M:    Frederic Weisbecker <[email protected]>
  M:    Thomas Gleixner <[email protected]>
  L:    [email protected]
  S:    Maintained
@@@ -9655,13 -9668,9 +9663,13 @@@ T:    git git://git.kernel.org/pub/scm/lin
  F:    Documentation/timers/
  F:    include/linux/clockchips.h
  F:    include/linux/hrtimer.h
 +F:    include/linux/timer.h
  F:    kernel/time/clockevents.c
  F:    kernel/time/hrtimer.c
 -F:    kernel/time/timer_*.c
 +F:    kernel/time/timer.c
 +F:    kernel/time/timer_list.c
 +F:    kernel/time/timer_migration.*
 +F:    tools/testing/selftests/timers/
  
  HIGH-SPEED SCC DRIVER FOR AX.25
  L:    [email protected]
@@@ -10024,7 -10033,7 +10032,7 @@@ F:   drivers/media/platform/st/sti/hv
  
  HWPOISON MEMORY FAILURE HANDLING
  M:    Miaohe Lin <[email protected]>
 -R:    Naoya Horiguchi <naoya.horiguchi@nec.com>
 +R:    Naoya Horiguchi <nao.horiguchi@gmail.com>
  L:    [email protected]
  S:    Maintained
  F:    mm/hwpoison-inject.c
@@@ -11995,7 -12004,7 +12003,7 @@@ F:   include/keys/encrypted-type.
  F:    security/keys/encrypted-keys/
  
  KEYS-TRUSTED
 -M:    James Bottomley <[email protected].com>
 +M:    James Bottomley <James.Bottomley@HansenPartnership.com>
  M:    Jarkko Sakkinen <[email protected]>
  M:    Mimi Zohar <[email protected]>
  L:    [email protected]
@@@ -12388,26 -12397,6 +12396,26 @@@ F: drivers/ata
  F:    include/linux/ata.h
  F:    include/linux/libata.h
  
 +LIBETH COMMON ETHERNET LIBRARY
 +M:    Alexander Lobakin <[email protected]>
 +L:    [email protected]
 +L:    [email protected] (moderated for non-subscribers)
 +S:    Supported
 +T:    git https://github.com/alobakin/linux.git
 +F:    drivers/net/ethernet/intel/libeth/
 +F:    include/net/libeth/
 +K:    libeth
 +
 +LIBIE COMMON INTEL ETHERNET LIBRARY
 +M:    Alexander Lobakin <[email protected]>
 +L:    [email protected] (moderated for non-subscribers)
 +L:    [email protected]
 +S:    Supported
 +T:    git https://github.com/alobakin/linux.git
 +F:    drivers/net/ethernet/intel/libie/
 +F:    include/linux/net/intel/libie/
 +K:    libie
 +
  LIBNVDIMM BTT: BLOCK TRANSLATION TABLE
  M:    Vishal Verma <[email protected]>
  M:    Dan Williams <[email protected]>
@@@ -13309,7 -13298,7 +13317,7 @@@ F:   drivers/iio/adc/max11205.
  
  MAXIM MAX17040 FAMILY FUEL GAUGE DRIVERS
  R:    Iskren Chernev <[email protected]>
 -R:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +R:    Krzysztof Kozlowski <krzk@kernel.org>
  R:    Marek Szyprowski <[email protected]>
  R:    Matheus Castello <[email protected]>
  L:    [email protected]
@@@ -13319,7 -13308,7 +13327,7 @@@ F:   drivers/power/supply/max17040_batter
  
  MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS
  R:    Hans de Goede <[email protected]>
 -R:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +R:    Krzysztof Kozlowski <krzk@kernel.org>
  R:    Marek Szyprowski <[email protected]>
  R:    Sebastian Krzyszkowiak <[email protected]>
  R:    Purism Kernel Team <[email protected]>
@@@ -13377,7 -13366,7 +13385,7 @@@ F:   Documentation/devicetree/bindings/po
  F:    drivers/power/supply/max77976_charger.c
  
  MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  S:    Maintained
  B:    mailto:[email protected]
@@@ -13388,7 -13377,7 +13396,7 @@@ F:   drivers/power/supply/max77693_charge
  
  MAXIM PMIC AND MUIC DRIVERS FOR EXYNOS BASED BOARDS
  M:    Chanwoo Choi <[email protected]>
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  S:    Maintained
  B:    mailto:[email protected]
@@@ -14033,7 -14022,6 +14041,7 @@@ F:   drivers/net/ethernet/mellanox/mlx4/e
  
  MELLANOX ETHERNET DRIVER (mlx5e)
  M:    Saeed Mahameed <[email protected]>
 +M:    Tariq Toukan <[email protected]>
  L:    [email protected]
  S:    Supported
  W:    http://www.mellanox.com
@@@ -14101,7 -14089,6 +14109,7 @@@ F:   include/uapi/rdma/mlx4-abi.
  MELLANOX MLX5 core VPI driver
  M:    Saeed Mahameed <[email protected]>
  M:    Leon Romanovsky <[email protected]>
 +M:    Tariq Toukan <[email protected]>
  L:    [email protected]
  L:    [email protected]
  S:    Supported
@@@ -14172,7 -14159,7 +14180,7 @@@ F:   mm/mm_init.
  F:    tools/testing/memblock/
  
  MEMORY CONTROLLER DRIVERS
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  S:    Maintained
  B:    mailto:[email protected]
@@@ -14377,7 -14364,7 +14385,7 @@@ F:   drivers/dma/at_xdmac.
  F:    include/dt-bindings/dma/at91.h
  
  MICROCHIP AT91 SERIAL DRIVER
 -M:    Richard Genoud <richard.genoud@gmail.com>
 +M:    Richard Genoud <richard.genoud@bootlin.com>
  S:    Maintained
  F:    Documentation/devicetree/bindings/serial/atmel,at91-usart.yaml
  F:    drivers/tty/serial/atmel_serial.c
@@@ -15280,7 -15267,6 +15288,7 @@@ F:   net/*/netfilter.
  F:    net/*/netfilter/
  F:    net/bridge/br_netfilter*.c
  F:    net/netfilter/
 +F:    tools/testing/selftests/net/netfilter/
  
  NETROM NETWORK LAYER
  M:    Ralf Baechle <[email protected]>
@@@ -15554,7 -15540,7 +15562,7 @@@ F:   include/uapi/linux/nexthop.
  F:    net/ipv4/nexthop.c
  
  NFC SUBSYSTEM
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  S:    Maintained
  F:    Documentation/devicetree/bindings/net/nfc/
@@@ -15649,10 -15635,9 +15657,10 @@@ F: drivers/misc/nsm.
  F:    include/uapi/linux/nsm.h
  
  NOHZ, DYNTICKS SUPPORT
 +M:    Anna-Maria Behnsen <[email protected]>
  M:    Frederic Weisbecker <[email protected]>
 -M:    Thomas Gleixner <[email protected]>
  M:    Ingo Molnar <[email protected]>
 +M:    Thomas Gleixner <[email protected]>
  L:    [email protected]
  S:    Maintained
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/nohz
@@@ -15931,7 -15916,7 +15939,7 @@@ F:   Documentation/devicetree/bindings/re
  F:    drivers/regulator/pf8x00-regulator.c
  
  NXP PTN5150A CC LOGIC AND EXTCON DRIVER
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  S:    Maintained
  F:    Documentation/devicetree/bindings/extcon/extcon-ptn5150.yaml
@@@ -16542,7 -16527,7 +16550,7 @@@ K:   of_overlay_remov
  
  OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
  M:    Rob Herring <[email protected]>
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>
 +M:    Krzysztof Kozlowski <krzk+dt@kernel.org>
  M:    Conor Dooley <[email protected]>
  L:    [email protected]
  S:    Maintained
@@@ -16748,9 -16733,9 +16756,9 @@@ F:   include/uapi/linux/ppdev.
  
  PARAVIRT_OPS INTERFACE
  M:    Juergen Gross <[email protected]>
 -R:    Ajay Kaher <akaher@vmware.com>
 -R:    Alexey Makhalov <amakhalov@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +R:    Ajay Kaher <ajay.kaher@broadcom.com>
 +R:    Alexey Makhalov <alexey.amakhalov@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  L:    [email protected]
  S:    Supported
@@@ -16989,6 -16974,7 +16997,6 @@@ F:   drivers/pci/controller/dwc/pci-exyno
  
  PCI DRIVER FOR SYNOPSYS DESIGNWARE
  M:    Jingoo Han <[email protected]>
 -M:    Gustavo Pimentel <[email protected]>
  M:    Manivannan Sadhasivam <[email protected]>
  L:    [email protected]
  S:    Maintained
@@@ -17499,7 -17485,7 +17507,7 @@@ F:   Documentation/devicetree/bindings/pi
  F:    drivers/pinctrl/renesas/
  
  PIN CONTROLLER - SAMSUNG
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  M:    Sylwester Nawrocki <[email protected]>
  R:    Alim Akhtar <[email protected]>
  L:    [email protected] (moderated for non-subscribers)
@@@ -17612,20 -17598,15 +17620,20 @@@ F:        drivers/pnp
  F:    include/linux/pnp.h
  
  POSIX CLOCKS and TIMERS
 +M:    Anna-Maria Behnsen <[email protected]>
 +M:    Frederic Weisbecker <[email protected]>
  M:    Thomas Gleixner <[email protected]>
  L:    [email protected]
  S:    Maintained
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
  F:    fs/timerfd.c
  F:    include/linux/time_namespace.h
 -F:    include/linux/timer*
 +F:    include/linux/timerfd.h
 +F:    include/uapi/linux/time.h
 +F:    include/uapi/linux/timerfd.h
  F:    include/trace/events/timer*
 -F:    kernel/time/*timer*
 +F:    kernel/time/itimer.c
 +F:    kernel/time/posix-*
  F:    kernel/time/namespace.c
  
  POWER MANAGEMENT CORE
@@@ -17795,14 -17776,6 +17803,14 @@@ F: include/net/psample.
  F:    include/uapi/linux/psample.h
  F:    net/psample
  
 +PSE NETWORK DRIVER
 +M:    Oleksij Rempel <[email protected]>
 +M:    Kory Maincent <[email protected]>
 +L:    [email protected]
 +S:    Maintained
 +F:    Documentation/devicetree/bindings/net/pse-pd/
 +F:    drivers/net/pse-pd/
 +
  PSTORE FILESYSTEM
  M:    Kees Cook <[email protected]>
  R:    Tony Luck <[email protected]>
@@@ -19475,7 -19448,7 +19483,7 @@@ F:   Documentation/devicetree/bindings/so
  F:    sound/soc/samsung/
  
  SAMSUNG EXYNOS PSEUDO RANDOM NUMBER GENERATOR (RNG) DRIVER
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  L:    [email protected]
  S:    Maintained
@@@ -19510,7 -19483,7 +19518,7 @@@ S:   Maintaine
  F:    drivers/platform/x86/samsung-laptop.c
  
  SAMSUNG MULTIFUNCTION PMIC DEVICE DRIVERS
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  L:    [email protected]
  S:    Maintained
@@@ -19536,7 -19509,7 +19544,7 @@@ F:   drivers/media/platform/samsung/s3c-c
  F:    include/media/drv-intf/s3c_camif.h
  
  SAMSUNG S3FWRN5 NFC DRIVER
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  S:    Maintained
  F:    Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
  F:    drivers/nfc/s3fwrn5
@@@ -19557,7 -19530,7 +19565,7 @@@ S:   Supporte
  F:    drivers/media/i2c/s5k5baf.c
  
  SAMSUNG S5P Security SubSystem (SSS) DRIVER
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  M:    Vladimir Zapolskiy <[email protected]>
  L:    [email protected]
  L:    [email protected]
@@@ -19579,7 -19552,7 +19587,7 @@@ F:   Documentation/devicetree/bindings/me
  F:    drivers/media/platform/samsung/exynos4-is/
  
  SAMSUNG SOC CLOCK DRIVERS
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  M:    Sylwester Nawrocki <[email protected]>
  M:    Chanwoo Choi <[email protected]>
  R:    Alim Akhtar <[email protected]>
@@@ -19611,7 -19584,7 +19619,7 @@@ F:   drivers/net/ethernet/samsung/sxgbe
  
  SAMSUNG THERMAL DRIVER
  M:    Bartlomiej Zolnierkiewicz <[email protected]>
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  L:    [email protected]
  L:    [email protected]
  S:    Maintained
@@@ -19698,7 -19671,7 +19706,7 @@@ F:   drivers/scsi/sg.
  F:    include/scsi/sg.h
  
  SCSI SUBSYSTEM
 -M:    "James E.J. Bottomley" <[email protected].com>
 +M:    "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
  M:    "Martin K. Petersen" <[email protected]>
  L:    [email protected]
  S:    Maintained
@@@ -21700,7 -21673,6 +21708,7 @@@ TEAM DRIVE
  M:    Jiri Pirko <[email protected]>
  L:    [email protected]
  S:    Supported
 +F:    Documentation/netlink/specs/team.yaml
  F:    drivers/net/team/
  F:    include/linux/if_team.h
  F:    include/uapi/linux/if_team.h
@@@ -22316,20 -22288,13 +22324,20 @@@ S:        Supporte
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
  F:    include/linux/clocksource.h
  F:    include/linux/time.h
 +F:    include/linux/timekeeper_internal.h
 +F:    include/linux/timekeeping.h
  F:    include/linux/timex.h
  F:    include/uapi/linux/time.h
  F:    include/uapi/linux/timex.h
  F:    kernel/time/alarmtimer.c
 -F:    kernel/time/clocksource.c
 -F:    kernel/time/ntp.c
 -F:    kernel/time/time*.c
 +F:    kernel/time/clocksource*
 +F:    kernel/time/ntp*
 +F:    kernel/time/time.c
 +F:    kernel/time/timeconst.bc
 +F:    kernel/time/timeconv.c
 +F:    kernel/time/timecounter.c
 +F:    kernel/time/timekeeping*
 +F:    kernel/time/time_test.c
  F:    tools/testing/selftests/timers/
  
  TIPC NETWORK LAYER
@@@ -22453,7 -22418,6 +22461,7 @@@ S:   Maintaine
  W:    https://kernsec.org/wiki/index.php/Linux_Kernel_Integrity
  Q:    https://patchwork.kernel.org/project/linux-integrity/list/
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd.git
 +F:    Documentation/devicetree/bindings/tpm/
  F:    drivers/char/tpm/
  
  TPS546D24 DRIVER
@@@ -22600,7 -22564,6 +22608,7 @@@ Q:   https://patchwork.kernel.org/project
  B:    https://bugzilla.kernel.org
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git turbostat
  F:    tools/power/x86/turbostat/
 +F:    tools/testing/selftests/turbostat/
  
  TW5864 VIDEO4LINUX DRIVER
  M:    Bluecherry Maintainers <[email protected]>
@@@ -22870,7 -22833,7 +22878,7 @@@ F:   drivers/usb/host/ehci
  
  USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...)
  M:    Jiri Kosina <[email protected]>
 -M:    Benjamin Tissoires <ben[email protected]>
 +M:    Benjamin Tissoires <ben[email protected]>
  L:    [email protected]
  S:    Maintained
  T:    git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
@@@ -23466,7 -23429,6 +23474,7 @@@ F:   include/linux/virtio*.
  F:    include/linux/vringh.h
  F:    include/uapi/linux/virtio_*.h
  F:    tools/virtio/
 +F:    tools/testing/selftests/drivers/net/virtio_net/
  
  VIRTIO CRYPTO DRIVER
  M:    Gonglei <[email protected]>
@@@ -23680,9 -23642,9 +23688,9 @@@ S:   Supporte
  F:    drivers/misc/vmw_balloon.c
  
  VMWARE HYPERVISOR INTERFACE
 -M:    Ajay Kaher <akaher@vmware.com>
 -M:    Alexey Makhalov <amakhalov@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Ajay Kaher <ajay.kaher@broadcom.com>
 +M:    Alexey Makhalov <alexey.amakhalov@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  L:    [email protected]
  S:    Supported
@@@ -23691,34 -23653,34 +23699,34 @@@ F:        arch/x86/include/asm/vmware.
  F:    arch/x86/kernel/cpu/vmware.c
  
  VMWARE PVRDMA DRIVER
 -M:    Bryan Tan <bryantan@vmware.com>
 -M:    Vishnu Dasa <vdasa@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Bryan Tan <bryan-bt.tan@broadcom.com>
 +M:    Vishnu Dasa <vishnu.dasa@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    drivers/infiniband/hw/vmw_pvrdma/
  
  VMWARE PVSCSI DRIVER
 -M:    Vishal Bhakta <vbhakta@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Vishal Bhakta <vishal.bhakta@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    drivers/scsi/vmw_pvscsi.c
  F:    drivers/scsi/vmw_pvscsi.h
  
  VMWARE VIRTUAL PTP CLOCK DRIVER
 -M:    Jeff Sipek <jsipek@vmware.com>
 -R:    Ajay Kaher <akaher@vmware.com>
 -R:    Alexey Makhalov <amakhalov@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Nick Shi <nick.shi@broadcom.com>
 +R:    Ajay Kaher <ajay.kaher@broadcom.com>
 +R:    Alexey Makhalov <alexey.amakhalov@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    drivers/ptp/ptp_vmw.c
  
  VMWARE VMCI DRIVER
 -M:    Bryan Tan <bryantan@vmware.com>
 -M:    Vishnu Dasa <vdasa@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Bryan Tan <bryan-bt.tan@broadcom.com>
 +M:    Vishnu Dasa <vishnu.dasa@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    drivers/misc/vmw_vmci/
@@@ -23733,16 -23695,16 +23741,16 @@@ F:        drivers/input/mouse/vmmouse.
  F:    drivers/input/mouse/vmmouse.h
  
  VMWARE VMXNET3 ETHERNET DRIVER
 -M:    Ronak Doshi <doshir@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Ronak Doshi <ronak.doshi@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    drivers/net/vmxnet3/
  
  VMWARE VSOCK VMCI TRANSPORT DRIVER
 -M:    Bryan Tan <bryantan@vmware.com>
 -M:    Vishnu Dasa <vdasa@vmware.com>
 -R:    VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
 +M:    Bryan Tan <bryan-bt.tan@broadcom.com>
 +M:    Vishnu Dasa <vishnu.dasa@broadcom.com>
 +R:    Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
  L:    [email protected]
  S:    Supported
  F:    net/vmw_vsock/vmci_transport*
@@@ -23810,7 -23772,7 +23818,7 @@@ S:   Orpha
  F:    drivers/mmc/host/vub300.c
  
  W1 DALLAS'S 1-WIRE BUS
 -M:    Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 +M:    Krzysztof Kozlowski <krzk@kernel.org>
  S:    Maintained
  F:    Documentation/devicetree/bindings/w1/
  F:    Documentation/w1/
diff --combined arch/x86/net/bpf_jit_comp.c
index 788a3d6f62765cae75804faeddf56c3916901b41,673fdbd765d780d469877f839a7f9fdc3abf7efb..ff217cc35ce926694320e6b9f6b28b986a304f83
@@@ -480,7 -480,7 +480,7 @@@ static int emit_call(u8 **pprog, void *
  static int emit_rsb_call(u8 **pprog, void *func, void *ip)
  {
        OPTIMIZER_HIDE_VAR(func);
 -      x86_call_depth_emit_accounting(pprog, func);
 +      ip += x86_call_depth_emit_accounting(pprog, func, ip);
        return emit_patch(pprog, func, ip, 0xE8);
  }
  
@@@ -816,9 -816,10 +816,10 @@@ done
  static void emit_mov_imm64(u8 **pprog, u32 dst_reg,
                           const u32 imm32_hi, const u32 imm32_lo)
  {
+       u64 imm64 = ((u64)imm32_hi << 32) | (u32)imm32_lo;
        u8 *prog = *pprog;
  
-       if (is_uimm32(((u64)imm32_hi << 32) | (u32)imm32_lo)) {
+       if (is_uimm32(imm64)) {
                /*
                 * For emitting plain u32, where sign bit must not be
                 * propagated LLVM tends to load imm64 over mov32
                 * 'mov %eax, imm32' instead.
                 */
                emit_mov_imm32(&prog, false, dst_reg, imm32_lo);
+       } else if (is_simm32(imm64)) {
+               emit_mov_imm32(&prog, true, dst_reg, imm32_lo);
        } else {
                /* movabsq rax, imm64 */
                EMIT2(add_1mod(0x48, dst_reg), add_1reg(0xB8, dst_reg));
@@@ -1169,6 -1172,54 +1172,54 @@@ static int emit_atomic(u8 **pprog, u8 a
        return 0;
  }
  
+ static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size,
+                            u32 dst_reg, u32 src_reg, u32 index_reg, int off)
+ {
+       u8 *prog = *pprog;
+       EMIT1(0xF0); /* lock prefix */
+       switch (size) {
+       case BPF_W:
+               EMIT1(add_3mod(0x40, dst_reg, src_reg, index_reg));
+               break;
+       case BPF_DW:
+               EMIT1(add_3mod(0x48, dst_reg, src_reg, index_reg));
+               break;
+       default:
+               pr_err("bpf_jit: 1 and 2 byte atomics are not supported\n");
+               return -EFAULT;
+       }
+       /* emit opcode */
+       switch (atomic_op) {
+       case BPF_ADD:
+       case BPF_AND:
+       case BPF_OR:
+       case BPF_XOR:
+               /* lock *(u32/u64*)(dst_reg + idx_reg + off) <op>= src_reg */
+               EMIT1(simple_alu_opcodes[atomic_op]);
+               break;
+       case BPF_ADD | BPF_FETCH:
+               /* src_reg = atomic_fetch_add(dst_reg + idx_reg + off, src_reg); */
+               EMIT2(0x0F, 0xC1);
+               break;
+       case BPF_XCHG:
+               /* src_reg = atomic_xchg(dst_reg + idx_reg + off, src_reg); */
+               EMIT1(0x87);
+               break;
+       case BPF_CMPXCHG:
+               /* r0 = atomic_cmpxchg(dst_reg + idx_reg + off, r0, src_reg); */
+               EMIT2(0x0F, 0xB1);
+               break;
+       default:
+               pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
+               return -EFAULT;
+       }
+       emit_insn_suffix_SIB(&prog, dst_reg, src_reg, index_reg, off);
+       *pprog = prog;
+       return 0;
+ }
  #define DONT_CLEAR 1
  
  bool ex_handler_bpf(const struct exception_table_entry *x, struct pt_regs *regs)
@@@ -1382,6 -1433,16 +1433,16 @@@ static int do_jit(struct bpf_prog *bpf_
                                maybe_emit_mod(&prog, AUX_REG, dst_reg, true);
                                EMIT3(0x0F, 0x44, add_2reg(0xC0, AUX_REG, dst_reg));
                                break;
+                       } else if (insn_is_mov_percpu_addr(insn)) {
+                               /* mov <dst>, <src> (if necessary) */
+                               EMIT_mov(dst_reg, src_reg);
+ #ifdef CONFIG_SMP
+                               /* add <dst>, gs:[<off>] */
+                               EMIT2(0x65, add_1mod(0x48, dst_reg));
+                               EMIT3(0x03, add_2reg(0x04, 0, dst_reg), 0x25);
+                               EMIT((u32)(unsigned long)&this_cpu_off, 4);
+ #endif
+                               break;
                        }
                        fallthrough;
                case BPF_ALU | BPF_MOV | BPF_X:
@@@ -1969,19 -2030,31 +2030,28 @@@ populate_extable
                                return err;
                        break;
  
+               case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
+               case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
+                       start_of_ldx = prog;
+                       err = emit_atomic_index(&prog, insn->imm, BPF_SIZE(insn->code),
+                                               dst_reg, src_reg, X86_REG_R12, insn->off);
+                       if (err)
+                               return err;
+                       goto populate_extable;
                        /* call */
                case BPF_JMP | BPF_CALL: {
 -                      int offs;
 +                      u8 *ip = image + addrs[i - 1];
  
                        func = (u8 *) __bpf_call_base + imm32;
                        if (tail_call_reachable) {
                                RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
 -                              if (!imm32)
 -                                      return -EINVAL;
 -                              offs = 7 + x86_call_depth_emit_accounting(&prog, func);
 -                      } else {
 -                              if (!imm32)
 -                                      return -EINVAL;
 -                              offs = x86_call_depth_emit_accounting(&prog, func);
 +                              ip += 7;
                        }
 -                      if (emit_call(&prog, func, image + addrs[i - 1] + offs))
 +                      if (!imm32)
 +                              return -EINVAL;
 +                      ip += x86_call_depth_emit_accounting(&prog, func, ip);
 +                      if (emit_call(&prog, func, ip))
                                return -EINVAL;
                        break;
                }
@@@ -2831,7 -2904,7 +2901,7 @@@ static int __arch_prepare_bpf_trampolin
                 * Direct-call fentry stub, as such it needs accounting for the
                 * __fentry__ call.
                 */
 -              x86_call_depth_emit_accounting(&prog, NULL);
 +              x86_call_depth_emit_accounting(&prog, NULL, image);
        }
        EMIT1(0x55);             /* push rbp */
        EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
@@@ -3362,6 -3435,11 +3432,11 @@@ bool bpf_jit_supports_subprog_tailcalls
        return true;
  }
  
+ bool bpf_jit_supports_percpu_insn(void)
+ {
+       return true;
+ }
  void bpf_jit_free(struct bpf_prog *prog)
  {
        if (prog->jited) {
@@@ -3465,6 -3543,21 +3540,21 @@@ bool bpf_jit_supports_arena(void
        return true;
  }
  
+ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
+ {
+       if (!in_arena)
+               return true;
+       switch (insn->code) {
+       case BPF_STX | BPF_ATOMIC | BPF_W:
+       case BPF_STX | BPF_ATOMIC | BPF_DW:
+               if (insn->imm == (BPF_AND | BPF_FETCH) ||
+                   insn->imm == (BPF_OR | BPF_FETCH) ||
+                   insn->imm == (BPF_XOR | BPF_FETCH))
+                       return false;
+       }
+       return true;
+ }
  bool bpf_jit_supports_ptr_xchg(void)
  {
        return true;
diff --combined include/linux/bpf.h
index e52d5b3ee45e16db0c9a7da07293576ce59f576f,364563b74db61f0abbf28f86298f1a42d4423585..90094400cc63d107e4a330a8721b4ec25fbcd93c
@@@ -184,8 -184,8 +184,8 @@@ struct bpf_map_ops 
  };
  
  enum {
-       /* Support at most 10 fields in a BTF type */
-       BTF_FIELDS_MAX     = 10,
+       /* Support at most 11 fields in a BTF type */
+       BTF_FIELDS_MAX     = 11,
  };
  
  enum btf_field_type {
        BPF_GRAPH_NODE = BPF_RB_NODE | BPF_LIST_NODE,
        BPF_GRAPH_ROOT = BPF_RB_ROOT | BPF_LIST_HEAD,
        BPF_REFCOUNT   = (1 << 9),
+       BPF_WORKQUEUE  = (1 << 10),
  };
  
  typedef void (*btf_dtor_kfunc_t)(void *);
@@@ -238,6 -239,7 +239,7 @@@ struct btf_record 
        u32 field_mask;
        int spin_lock_off;
        int timer_off;
+       int wq_off;
        int refcount_off;
        struct btf_field fields[];
  };
@@@ -312,6 -314,8 +314,8 @@@ static inline const char *btf_field_typ
                return "bpf_spin_lock";
        case BPF_TIMER:
                return "bpf_timer";
+       case BPF_WORKQUEUE:
+               return "bpf_wq";
        case BPF_KPTR_UNREF:
        case BPF_KPTR_REF:
                return "kptr";
@@@ -340,6 -344,8 +344,8 @@@ static inline u32 btf_field_type_size(e
                return sizeof(struct bpf_spin_lock);
        case BPF_TIMER:
                return sizeof(struct bpf_timer);
+       case BPF_WORKQUEUE:
+               return sizeof(struct bpf_wq);
        case BPF_KPTR_UNREF:
        case BPF_KPTR_REF:
        case BPF_KPTR_PERCPU:
@@@ -367,6 -373,8 +373,8 @@@ static inline u32 btf_field_type_align(
                return __alignof__(struct bpf_spin_lock);
        case BPF_TIMER:
                return __alignof__(struct bpf_timer);
+       case BPF_WORKQUEUE:
+               return __alignof__(struct bpf_wq);
        case BPF_KPTR_UNREF:
        case BPF_KPTR_REF:
        case BPF_KPTR_PERCPU:
@@@ -406,6 -414,7 +414,7 @@@ static inline void bpf_obj_init_field(c
                /* RB_ROOT_CACHED 0-inits, no need to do anything after memset */
        case BPF_SPIN_LOCK:
        case BPF_TIMER:
+       case BPF_WORKQUEUE:
        case BPF_KPTR_UNREF:
        case BPF_KPTR_REF:
        case BPF_KPTR_PERCPU:
@@@ -525,6 -534,7 +534,7 @@@ static inline void zero_map_value(struc
  void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
                           bool lock_src);
  void bpf_timer_cancel_and_free(void *timer);
+ void bpf_wq_cancel_and_free(void *timer);
  void bpf_list_head_free(const struct btf_field *field, void *list_head,
                        struct bpf_spin_lock *spin_lock);
  void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
@@@ -1265,6 -1275,7 +1275,7 @@@ int bpf_dynptr_check_size(u32 size)
  u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
  const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
  void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
+ bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
  
  #ifdef CONFIG_BPF_JIT
  int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
@@@ -1573,26 -1584,12 +1584,26 @@@ struct bpf_link 
        enum bpf_link_type type;
        const struct bpf_link_ops *ops;
        struct bpf_prog *prog;
 -      struct work_struct work;
 +      /* rcu is used before freeing, work can be used to schedule that
 +       * RCU-based freeing before that, so they never overlap
 +       */
 +      union {
 +              struct rcu_head rcu;
 +              struct work_struct work;
 +      };
  };
  
  struct bpf_link_ops {
        void (*release)(struct bpf_link *link);
 +      /* deallocate link resources callback, called without RCU grace period
 +       * waiting
 +       */
        void (*dealloc)(struct bpf_link *link);
 +      /* deallocate link resources callback, called after RCU grace period;
 +       * if underlying BPF program is sleepable we go through tasks trace
 +       * RCU GP and then "classic" RCU GP
 +       */
 +      void (*dealloc_deferred)(struct bpf_link *link);
        int (*detach)(struct bpf_link *link);
        int (*update_prog)(struct bpf_link *link, struct bpf_prog *new_prog,
                           struct bpf_prog *old_prog);
@@@ -2209,6 -2206,7 +2220,7 @@@ void bpf_map_free_record(struct bpf_ma
  struct btf_record *btf_record_dup(const struct btf_record *rec);
  bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *rec_b);
  void bpf_obj_free_timer(const struct btf_record *rec, void *obj);
+ void bpf_obj_free_workqueue(const struct btf_record *rec, void *obj);
  void bpf_obj_free_fields(const struct btf_record *rec, void *obj);
  void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu);
  
@@@ -3010,6 -3008,7 +3022,7 @@@ int sock_map_prog_detach(const union bp
  int sock_map_update_elem_sys(struct bpf_map *map, void *key, void *value, u64 flags);
  int sock_map_bpf_prog_query(const union bpf_attr *attr,
                            union bpf_attr __user *uattr);
+ int sock_map_link_create(const union bpf_attr *attr, struct bpf_prog *prog);
  
  void sock_map_unhash(struct sock *sk);
  void sock_map_destroy(struct sock *sk);
@@@ -3108,6 -3107,11 +3121,11 @@@ static inline int sock_map_bpf_prog_que
  {
        return -EINVAL;
  }
+ static inline int sock_map_link_create(const union bpf_attr *attr, struct bpf_prog *prog)
+ {
+       return -EOPNOTSUPP;
+ }
  #endif /* CONFIG_BPF_SYSCALL */
  #endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */
  
diff --combined include/net/tcp.h
index a9eb21251195c3707c892247bfba5b430e7bc1bb,0f75d03287c25d964e3f3db53e8041b7bc75d018..fe98fb01879bf524c3aa356ff1ea8d3680f80b76
@@@ -52,8 -52,6 +52,8 @@@ extern struct inet_hashinfo tcp_hashinf
  DECLARE_PER_CPU(unsigned int, tcp_orphan_count);
  int tcp_orphan_count_sum(void);
  
 +DECLARE_PER_CPU(u32, tcp_tw_isn);
 +
  void tcp_time_wait(struct sock *sk, int state, int timeo);
  
  #define MAX_TCP_HEADER        L1_CACHE_ALIGN(128 + MAX_HEADER)
@@@ -355,7 -353,7 +355,7 @@@ void tcp_rcv_established(struct sock *s
  void tcp_rcv_space_adjust(struct sock *sk);
  int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp);
  void tcp_twsk_destructor(struct sock *sk);
 -void tcp_twsk_purge(struct list_head *net_exit_list, int family);
 +void tcp_twsk_purge(struct list_head *net_exit_list);
  ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
                        struct pipe_inode_info *pipe, size_t len,
                        unsigned int flags);
@@@ -394,8 -392,7 +394,8 @@@ enum tcp_tw_status 
  
  enum tcp_tw_status tcp_timewait_state_process(struct inet_timewait_sock *tw,
                                              struct sk_buff *skb,
 -                                            const struct tcphdr *th);
 +                                            const struct tcphdr *th,
 +                                            u32 *tw_isn);
  struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
                           struct request_sock *req, bool fastopen,
                           bool *lost_race);
@@@ -670,8 -667,7 +670,8 @@@ int tcp_fragment(struct sock *sk, enum 
  void tcp_send_probe0(struct sock *);
  int tcp_write_wakeup(struct sock *, int mib);
  void tcp_send_fin(struct sock *sk);
 -void tcp_send_active_reset(struct sock *sk, gfp_t priority);
 +void tcp_send_active_reset(struct sock *sk, gfp_t priority,
 +                         enum sk_rst_reason reason);
  int tcp_send_synack(struct sock *);
  void tcp_push_one(struct sock *, unsigned int mss_now);
  void __tcp_send_ack(struct sock *sk, u32 rcv_nxt);
@@@ -746,7 -742,7 +746,7 @@@ int tcp_mtu_to_mss(struct sock *sk, in
  int tcp_mss_to_mtu(struct sock *sk, int mss);
  void tcp_mtup_init(struct sock *sk);
  
 -static inline void tcp_bound_rto(const struct sock *sk)
 +static inline void tcp_bound_rto(struct sock *sk)
  {
        if (inet_csk(sk)->icsk_rto > TCP_RTO_MAX)
                inet_csk(sk)->icsk_rto = TCP_RTO_MAX;
@@@ -929,19 -925,6 +929,19 @@@ static inline u32 tcp_rsk_tsval(const s
  
  #define TCPHDR_SYN_ECN        (TCPHDR_SYN | TCPHDR_ECE | TCPHDR_CWR)
  
 +/* State flags for sacked in struct tcp_skb_cb */
 +enum tcp_skb_cb_sacked_flags {
 +      TCPCB_SACKED_ACKED      = (1 << 0),     /* SKB ACK'd by a SACK block    */
 +      TCPCB_SACKED_RETRANS    = (1 << 1),     /* SKB retransmitted            */
 +      TCPCB_LOST              = (1 << 2),     /* SKB is lost                  */
 +      TCPCB_TAGBITS           = (TCPCB_SACKED_ACKED | TCPCB_SACKED_RETRANS |
 +                                 TCPCB_LOST), /* All tag bits                 */
 +      TCPCB_REPAIRED          = (1 << 4),     /* SKB repaired (no skb_mstamp_ns)      */
 +      TCPCB_EVER_RETRANS      = (1 << 7),     /* Ever retransmitted frame     */
 +      TCPCB_RETRANS           = (TCPCB_SACKED_RETRANS | TCPCB_EVER_RETRANS |
 +                                 TCPCB_REPAIRED),
 +};
 +
  /* This is what the send packet queuing engine uses to pass
   * TCP per-packet control information to the transmission code.
   * We also store the host-order sequence numbers in here too.
@@@ -952,10 -935,13 +952,10 @@@ struct tcp_skb_cb 
        __u32           seq;            /* Starting sequence number     */
        __u32           end_seq;        /* SEQ + FIN + SYN + datalen    */
        union {
 -              /* Note : tcp_tw_isn is used in input path only
 -               *        (isn chosen by tcp_timewait_state_process())
 -               *
 +              /* Note :
                 *        tcp_gso_segs/size are used in write queue only,
                 *        cf tcp_skb_pcount()/tcp_skb_mss()
                 */
 -              __u32           tcp_tw_isn;
                struct {
                        u16     tcp_gso_segs;
                        u16     tcp_gso_size;
        __u8            tcp_flags;      /* TCP header flags. (tcp[13])  */
  
        __u8            sacked;         /* State flags for SACK.        */
 -#define TCPCB_SACKED_ACKED    0x01    /* SKB ACK'd by a SACK block    */
 -#define TCPCB_SACKED_RETRANS  0x02    /* SKB retransmitted            */
 -#define TCPCB_LOST            0x04    /* SKB is lost                  */
 -#define TCPCB_TAGBITS         0x07    /* All tag bits                 */
 -#define TCPCB_REPAIRED                0x10    /* SKB repaired (no skb_mstamp_ns)      */
 -#define TCPCB_EVER_RETRANS    0x80    /* Ever retransmitted frame     */
 -#define TCPCB_RETRANS         (TCPCB_SACKED_RETRANS|TCPCB_EVER_RETRANS| \
 -                              TCPCB_REPAIRED)
 -
        __u8            ip_dsfield;     /* IPv4 tos or IPv6 dsfield     */
        __u8            txstamp_ack:1,  /* Record TX timestamp for ack? */
                        eor:1,          /* Is skb MSG_EOR marked? */
@@@ -1544,10 -1539,11 +1544,10 @@@ static inline int tcp_space_from_win(co
        return __tcp_space_from_win(tcp_sk(sk)->scaling_ratio, win);
  }
  
 -/* Assume a conservative default of 1200 bytes of payload per 4K page.
 +/* Assume a 50% default for skb->len/skb->truesize ratio.
   * This may be adjusted later in tcp_measure_rcv_mss().
   */
 -#define TCP_DEFAULT_SCALING_RATIO ((1200 << TCP_RMEM_TO_WIN_SCALE) / \
 -                                 SKB_TRUESIZE(4096))
 +#define TCP_DEFAULT_SCALING_RATIO (1 << (TCP_RMEM_TO_WIN_SCALE - 1))
  
  static inline void tcp_scaling_ratio_init(struct sock *sk)
  {
@@@ -2288,8 -2284,7 +2288,8 @@@ struct tcp_request_sock_ops 
        struct dst_entry *(*route_req)(const struct sock *sk,
                                       struct sk_buff *skb,
                                       struct flowi *fl,
 -                                     struct request_sock *req);
 +                                     struct request_sock *req,
 +                                     u32 tw_isn);
        u32 (*init_seq)(const struct sk_buff *skb);
        u32 (*init_ts_off)(const struct net *net, const struct sk_buff *skb);
        int (*send_synack)(const struct sock *sk, struct dst_entry *dst,
@@@ -2711,10 -2706,10 +2711,10 @@@ static inline bool tcp_bpf_ca_needs_ecn
        return (tcp_call_bpf(sk, BPF_SOCK_OPS_NEEDS_ECN, 0, NULL) == 1);
  }
  
- static inline void tcp_bpf_rtt(struct sock *sk)
+ static inline void tcp_bpf_rtt(struct sock *sk, long mrtt, u32 srtt)
  {
        if (BPF_SOCK_OPS_TEST_FLAG(tcp_sk(sk), BPF_SOCK_OPS_RTT_CB_FLAG))
-               tcp_call_bpf(sk, BPF_SOCK_OPS_RTT_CB, 0, NULL);
+               tcp_call_bpf_2arg(sk, BPF_SOCK_OPS_RTT_CB, mrtt, srtt);
  }
  
  #if IS_ENABLED(CONFIG_SMC)
diff --combined kernel/bpf/Makefile
index e497011261b897784db588160df007554552a60d,736bd22e5ce082e569ead2e94082b9b14f18b9e2..85786fd97d2aa1e8fd3e2c528477829ea020dd89
@@@ -4,7 -4,7 +4,7 @@@ ifneq ($(CONFIG_BPF_JIT_ALWAYS_ON),y
  # ___bpf_prog_run() needs GCSE disabled on x86; see 3193c0836f203 for details
  cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse
  endif
 -CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy)
 +CFLAGS_core.o += -Wno-override-init $(cflags-nogcse-yy)
  
  obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o
  obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
@@@ -44,6 -44,9 +44,9 @@@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct
  obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
  obj-${CONFIG_BPF_LSM} += bpf_lsm.o
  endif
+ ifeq ($(CONFIG_CRYPTO),y)
+ obj-$(CONFIG_BPF_SYSCALL) += crypto.o
+ endif
  obj-$(CONFIG_BPF_PRELOAD) += preload/
  
  obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
diff --combined kernel/bpf/syscall.c
index c0f2f052a02cf49343ed5b16bca1172fc852b635,63e368337483b4ea3fafa449add5dfba7de5d88a..f655adf42e3960f8078f97d38eb1da0d1e71dc2d
@@@ -559,6 -559,7 +559,7 @@@ void btf_record_free(struct btf_record 
                case BPF_SPIN_LOCK:
                case BPF_TIMER:
                case BPF_REFCOUNT:
+               case BPF_WORKQUEUE:
                        /* Nothing to release */
                        break;
                default:
@@@ -608,6 -609,7 +609,7 @@@ struct btf_record *btf_record_dup(cons
                case BPF_SPIN_LOCK:
                case BPF_TIMER:
                case BPF_REFCOUNT:
+               case BPF_WORKQUEUE:
                        /* Nothing to acquire */
                        break;
                default:
@@@ -659,6 -661,13 +661,13 @@@ void bpf_obj_free_timer(const struct bt
        bpf_timer_cancel_and_free(obj + rec->timer_off);
  }
  
+ void bpf_obj_free_workqueue(const struct btf_record *rec, void *obj)
+ {
+       if (WARN_ON_ONCE(!btf_record_has_field(rec, BPF_WORKQUEUE)))
+               return;
+       bpf_wq_cancel_and_free(obj + rec->wq_off);
+ }
  void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
  {
        const struct btf_field *fields;
                case BPF_TIMER:
                        bpf_timer_cancel_and_free(field_ptr);
                        break;
+               case BPF_WORKQUEUE:
+                       bpf_wq_cancel_and_free(field_ptr);
+                       break;
                case BPF_KPTR_UNREF:
                        WRITE_ONCE(*(u64 *)field_ptr, 0);
                        break;
@@@ -1085,7 -1097,7 +1097,7 @@@ static int map_check_btf(struct bpf_ma
  
        map->record = btf_parse_fields(btf, value_type,
                                       BPF_SPIN_LOCK | BPF_TIMER | BPF_KPTR | BPF_LIST_HEAD |
-                                      BPF_RB_ROOT | BPF_REFCOUNT,
+                                      BPF_RB_ROOT | BPF_REFCOUNT | BPF_WORKQUEUE,
                                       map->value_size);
        if (!IS_ERR_OR_NULL(map->record)) {
                int i;
                                }
                                break;
                        case BPF_TIMER:
+                       case BPF_WORKQUEUE:
                                if (map->map_type != BPF_MAP_TYPE_HASH &&
                                    map->map_type != BPF_MAP_TYPE_LRU_HASH &&
                                    map->map_type != BPF_MAP_TYPE_ARRAY) {
@@@ -3024,46 -3037,17 +3037,46 @@@ void bpf_link_inc(struct bpf_link *link
        atomic64_inc(&link->refcnt);
  }
  
 +static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)
 +{
 +      struct bpf_link *link = container_of(rcu, struct bpf_link, rcu);
 +
 +      /* free bpf_link and its containing memory */
 +      link->ops->dealloc_deferred(link);
 +}
 +
 +static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)
 +{
 +      if (rcu_trace_implies_rcu_gp())
 +              bpf_link_defer_dealloc_rcu_gp(rcu);
 +      else
 +              call_rcu(rcu, bpf_link_defer_dealloc_rcu_gp);
 +}
 +
  /* bpf_link_free is guaranteed to be called from process context */
  static void bpf_link_free(struct bpf_link *link)
  {
 +      bool sleepable = false;
 +
        bpf_link_free_id(link->id);
        if (link->prog) {
 +              sleepable = link->prog->sleepable;
                /* detach BPF program, clean up used resources */
                link->ops->release(link);
                bpf_prog_put(link->prog);
        }
 -      /* free bpf_link and its containing memory */
 -      link->ops->dealloc(link);
 +      if (link->ops->dealloc_deferred) {
 +              /* schedule BPF link deallocation; if underlying BPF program
 +               * is sleepable, we need to first wait for RCU tasks trace
 +               * sync, then go through "classic" RCU grace period
 +               */
 +              if (sleepable)
 +                      call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);
 +              else
 +                      call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
 +      }
 +      if (link->ops->dealloc)
 +              link->ops->dealloc(link);
  }
  
  static void bpf_link_put_deferred(struct work_struct *work)
@@@ -3568,7 -3552,7 +3581,7 @@@ static int bpf_raw_tp_link_fill_link_in
  
  static const struct bpf_link_ops bpf_raw_tp_link_lops = {
        .release = bpf_raw_tp_link_release,
 -      .dealloc = bpf_raw_tp_link_dealloc,
 +      .dealloc_deferred = bpf_raw_tp_link_dealloc,
        .show_fdinfo = bpf_raw_tp_link_show_fdinfo,
        .fill_link_info = bpf_raw_tp_link_fill_link_info,
  };
@@@ -5242,6 -5226,10 +5255,10 @@@ static int link_create(union bpf_attr *
        case BPF_PROG_TYPE_SK_LOOKUP:
                ret = netns_bpf_link_create(attr, prog);
                break;
+       case BPF_PROG_TYPE_SK_MSG:
+       case BPF_PROG_TYPE_SK_SKB:
+               ret = sock_map_link_create(attr, prog);
+               break;
  #ifdef CONFIG_NET
        case BPF_PROG_TYPE_XDP:
                ret = bpf_xdp_link_attach(attr, prog);
diff --combined kernel/bpf/verifier.c
index 36f5a945520555520d91d1e9ac763aacb32cf476,4e474ef44e9cf767e8ef440881fa1df11b49ae92..87ff414899cf37fb93b06a8d9e1e1ceaabbe578d
@@@ -172,7 -172,7 +172,7 @@@ static bool bpf_global_percpu_ma_set
  
  /* verifier_state + insn_idx are pushed to stack when branch is encountered */
  struct bpf_verifier_stack_elem {
-       /* verifer state is 'st'
+       /* verifier state is 'st'
         * before processing instruction 'insn_idx'
         * and after processing instruction 'prev_insn_idx'
         */
  #define BPF_MAP_KEY_POISON    (1ULL << 63)
  #define BPF_MAP_KEY_SEEN      (1ULL << 62)
  
- #define BPF_MAP_PTR_UNPRIV    1UL
- #define BPF_MAP_PTR_POISON    ((void *)((0xeB9FUL << 1) +     \
-                                         POISON_POINTER_DELTA))
- #define BPF_MAP_PTR(X)                ((struct bpf_map *)((X) & ~BPF_MAP_PTR_UNPRIV))
  #define BPF_GLOBAL_PERCPU_MA_MAX_SIZE  512
  
  static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
@@@ -209,21 -204,22 +204,22 @@@ static bool is_trusted_reg(const struc
  
  static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
  {
-       return BPF_MAP_PTR(aux->map_ptr_state) == BPF_MAP_PTR_POISON;
+       return aux->map_ptr_state.poison;
  }
  
  static bool bpf_map_ptr_unpriv(const struct bpf_insn_aux_data *aux)
  {
-       return aux->map_ptr_state & BPF_MAP_PTR_UNPRIV;
+       return aux->map_ptr_state.unpriv;
  }
  
  static void bpf_map_ptr_store(struct bpf_insn_aux_data *aux,
-                             const struct bpf_map *map, bool unpriv)
+                             struct bpf_map *map,
+                             bool unpriv, bool poison)
  {
-       BUILD_BUG_ON((unsigned long)BPF_MAP_PTR_POISON & BPF_MAP_PTR_UNPRIV);
        unpriv |= bpf_map_ptr_unpriv(aux);
-       aux->map_ptr_state = (unsigned long)map |
-                            (unpriv ? BPF_MAP_PTR_UNPRIV : 0UL);
+       aux->map_ptr_state.unpriv = unpriv;
+       aux->map_ptr_state.poison = poison;
+       aux->map_ptr_state.map_ptr = map;
  }
  
  static bool bpf_map_key_poisoned(const struct bpf_insn_aux_data *aux)
@@@ -336,6 -332,10 +332,10 @@@ struct bpf_kfunc_call_arg_meta 
                u8 spi;
                u8 frameno;
        } iter;
+       struct {
+               struct bpf_map *ptr;
+               int uid;
+       } map;
        u64 mem_size;
  };
  
@@@ -501,8 -501,12 +501,12 @@@ static bool is_dynptr_ref_function(enu
  }
  
  static bool is_sync_callback_calling_kfunc(u32 btf_id);
+ static bool is_async_callback_calling_kfunc(u32 btf_id);
+ static bool is_callback_calling_kfunc(u32 btf_id);
  static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
  
+ static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id);
  static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
  {
        return func_id == BPF_FUNC_for_each_map_elem ||
@@@ -530,7 -534,8 +534,8 @@@ static bool is_sync_callback_calling_in
  
  static bool is_async_callback_calling_insn(struct bpf_insn *insn)
  {
-       return bpf_helper_call(insn) && is_async_callback_calling_function(insn->imm);
+       return (bpf_helper_call(insn) && is_async_callback_calling_function(insn->imm)) ||
+              (bpf_pseudo_kfunc_call(insn) && is_async_callback_calling_kfunc(insn->imm));
  }
  
  static bool is_may_goto_insn(struct bpf_insn *insn)
@@@ -1429,6 -1434,8 +1434,8 @@@ static int copy_verifier_state(struct b
        }
        dst_state->speculative = src->speculative;
        dst_state->active_rcu_lock = src->active_rcu_lock;
+       dst_state->active_preempt_lock = src->active_preempt_lock;
+       dst_state->in_sleepable = src->in_sleepable;
        dst_state->curframe = src->curframe;
        dst_state->active_lock.ptr = src->active_lock.ptr;
        dst_state->active_lock.id = src->active_lock.id;
@@@ -1842,6 -1849,8 +1849,8 @@@ static void mark_ptr_not_null_reg(struc
                         */
                        if (btf_record_has_field(map->inner_map_meta->record, BPF_TIMER))
                                reg->map_uid = reg->id;
+                       if (btf_record_has_field(map->inner_map_meta->record, BPF_WORKQUEUE))
+                               reg->map_uid = reg->id;
                } else if (map->map_type == BPF_MAP_TYPE_XSKMAP) {
                        reg->type = PTR_TO_XDP_SOCK;
                } else if (map->map_type == BPF_MAP_TYPE_SOCKMAP ||
@@@ -2135,7 -2144,7 +2144,7 @@@ static void __reg64_deduce_bounds(struc
  static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
  {
        /* Try to tighten 64-bit bounds from 32-bit knowledge, using 32-bit
-        * values on both sides of 64-bit range in hope to have tigher range.
+        * values on both sides of 64-bit range in hope to have tighter range.
         * E.g., if r1 is [0x1'00000000, 0x3'80000000], and we learn from
         * 32-bit signed > 0 operation that s32 bounds are now [1; 0x7fffffff].
         * With this, we can substitute 1 as low 32-bits of _low_ 64-bit bound
         * _high_ 64-bit bound (0x380000000 -> 0x37fffffff) and arrive at a
         * better overall bounds for r1 as [0x1'000000001; 0x3'7fffffff].
         * We just need to make sure that derived bounds we are intersecting
-        * with are well-formed ranges in respecitve s64 or u64 domain, just
+        * with are well-formed ranges in respective s64 or u64 domain, just
         * like we do with similar kinds of 32-to-64 or 64-to-32 adjustments.
         */
        __u64 new_umin, new_umax;
@@@ -2402,7 -2411,7 +2411,7 @@@ static void init_func_state(struct bpf_
  /* Similar to push_stack(), but for async callbacks */
  static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env,
                                                int insn_idx, int prev_insn_idx,
-                                               int subprog)
+                                               int subprog, bool is_sleepable)
  {
        struct bpf_verifier_stack_elem *elem;
        struct bpf_func_state *frame;
         * Initialize it similar to do_check_common().
         */
        elem->st.branches = 1;
+       elem->st.in_sleepable = is_sleepable;
        frame = kzalloc(sizeof(*frame), GFP_KERNEL);
        if (!frame)
                goto err;
@@@ -3615,7 -3625,8 +3625,8 @@@ static int backtrack_insn(struct bpf_ve
                                 * sreg needs precision before this insn
                                 */
                                bt_clear_reg(bt, dreg);
-                               bt_set_reg(bt, sreg);
+                               if (sreg != BPF_REG_FP)
+                                       bt_set_reg(bt, sreg);
                        } else {
                                /* dreg = K
                                 * dreg needs precision after this insn.
                                 * both dreg and sreg need precision
                                 * before this insn
                                 */
-                               bt_set_reg(bt, sreg);
+                               if (sreg != BPF_REG_FP)
+                                       bt_set_reg(bt, sreg);
                        } /* else dreg += K
                           * dreg still needs precision before this insn
                           */
@@@ -5274,7 -5286,8 +5286,8 @@@ bad_type
  
  static bool in_sleepable(struct bpf_verifier_env *env)
  {
-       return env->prog->sleepable;
+       return env->prog->sleepable ||
+              (env->cur_state && env->cur_state->in_sleepable);
  }
  
  /* The non-sleepable programs and sleepable programs with explicit bpf_rcu_read_lock()
@@@ -5297,6 -5310,7 +5310,7 @@@ BTF_ID(struct, cgroup
  BTF_ID(struct, bpf_cpumask)
  #endif
  BTF_ID(struct, task_struct)
+ BTF_ID(struct, bpf_crypto_ctx)
  BTF_SET_END(rcu_protected_types)
  
  static bool rcu_protected_object(const struct btf *btf, u32 btf_id)
@@@ -6972,6 -6986,9 +6986,9 @@@ static int check_mem_access(struct bpf_
        return err;
  }
  
+ static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
+                            bool allow_trust_mismatch);
  static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
  {
        int load_reg;
            is_pkt_reg(env, insn->dst_reg) ||
            is_flow_key_reg(env, insn->dst_reg) ||
            is_sk_reg(env, insn->dst_reg) ||
-           is_arena_reg(env, insn->dst_reg)) {
+           (is_arena_reg(env, insn->dst_reg) && !bpf_jit_supports_insn(insn, true))) {
                verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
                        insn->dst_reg,
                        reg_type_str(env, reg_state(env, insn->dst_reg)->type));
        if (err)
                return err;
  
+       if (is_arena_reg(env, insn->dst_reg)) {
+               err = save_aux_ptr_type(env, PTR_TO_ARENA, false);
+               if (err)
+                       return err;
+       }
        /* Check whether we can write into the same memory. */
        err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                               BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
@@@ -7590,6 -7612,23 +7612,23 @@@ static int process_timer_func(struct bp
        return 0;
  }
  
+ static int process_wq_func(struct bpf_verifier_env *env, int regno,
+                          struct bpf_kfunc_call_arg_meta *meta)
+ {
+       struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+       struct bpf_map *map = reg->map_ptr;
+       u64 val = reg->var_off.value;
+       if (map->record->wq_off != val + reg->off) {
+               verbose(env, "off %lld doesn't point to 'struct bpf_wq' that is at %d\n",
+                       val + reg->off, map->record->wq_off);
+               return -EINVAL;
+       }
+       meta->map.uid = reg->map_uid;
+       meta->map.ptr = map;
+       return 0;
+ }
+
  static int process_kptr_func(struct bpf_verifier_env *env, int regno,
                             struct bpf_call_arg_meta *meta)
  {
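process_wq_func() anchors a struct bpf_wq kfunc argument to the map value it is embedded in. From the BPF program side the intended usage looks roughly like the sketch below; the kfunc wrappers and the (map, key, value) callback prototype follow the bpf_wq series' bpf_experimental.h and selftests, so treat them as illustrative rather than authoritative:

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>
    #include "bpf_experimental.h"   /* declares the bpf_wq_* kfuncs in the series */

    struct elem {
            struct bpf_wq wq;
            int counter;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, int);
            __type(value, struct elem);
    } array SEC(".maps");

    static int wq_cb(void *map, int *key, void *value)
    {
            /* runs later, in sleepable workqueue context */
            return 0;
    }

    SEC("tc")
    int schedule_deferred_work(void *ctx)
    {
            struct elem *val;
            int key = 0;

            val = bpf_map_lookup_elem(&array, &key);
            if (!val)
                    return 0;
            if (bpf_wq_init(&val->wq, &array, 0))
                    return 0;
            if (bpf_wq_set_callback(&val->wq, wq_cb, 0))
                    return 0;
            bpf_wq_start(&val->wq, 0);
            return 0;
    }

    char _license[] SEC("license") = "GPL";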
@@@ -9484,7 -9523,7 +9523,7 @@@ static int push_callback_call(struct bp
         */
        env->subprog_info[subprog].is_cb = true;
        if (bpf_pseudo_kfunc_call(insn) &&
-           !is_sync_callback_calling_kfunc(insn->imm)) {
+           !is_callback_calling_kfunc(insn->imm)) {
                verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n",
                        func_id_name(insn->imm), insn->imm);
                return -EFAULT;
        if (is_async_callback_calling_insn(insn)) {
                struct bpf_verifier_state *async_cb;
  
-               /* there is no real recursion here. timer callbacks are async */
+               /* there is no real recursion here. timer and workqueue callbacks are async */
                env->subprog_info[subprog].is_async_cb = true;
                async_cb = push_async_cb(env, env->subprog_info[subprog].start,
-                                        insn_idx, subprog);
+                                        insn_idx, subprog,
+                                        is_bpf_wq_set_callback_impl_kfunc(insn->imm));
                if (!async_cb)
                        return -EFAULT;
                callee = async_cb->frame[0];
@@@ -9561,6 -9601,13 +9601,13 @@@ static int check_func_call(struct bpf_v
                        return -EINVAL;
                }
  
+               /* Only global subprogs cannot be called with preemption disabled. */
+               if (env->cur_state->active_preempt_lock) {
+                       verbose(env, "global function calls are not allowed with preemption disabled,\n"
+                                    "use static function instead\n");
+                       return -EINVAL;
+               }
                if (err) {
                        verbose(env, "Caller passes invalid args into func#%d ('%s')\n",
                                subprog, sub_name);
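A concrete example of what this check enforces, using the bpf_preempt_{disable,enable}() kfuncs added in this series (declarations sketched as __ksym externs; only the static call is accepted inside the non-preemptible section):

    extern void bpf_preempt_disable(void) __ksym;
    extern void bpf_preempt_enable(void) __ksym;

    __noinline int global_work(int x)        /* global subprog */
    {
            return x + 1;
    }

    static __noinline int static_work(int x) /* static subprog */
    {
            return x + 1;
    }

    SEC("tc")
    int preempt_section(void *ctx)
    {
            int v = 0;

            bpf_preempt_disable();
            /* v = global_work(v);   rejected: global function call with
             *                       preemption disabled                  */
            v = static_work(v);   /* allowed: static function call        */
            bpf_preempt_enable();
            return v;
    }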
@@@ -9653,12 -9700,8 +9700,8 @@@ static int set_map_elem_callback_state(
        struct bpf_map *map;
        int err;
  
-       if (bpf_map_ptr_poisoned(insn_aux)) {
-               verbose(env, "tail_call abusing map_ptr\n");
-               return -EINVAL;
-       }
-       map = BPF_MAP_PTR(insn_aux->map_ptr_state);
+       /* valid map_ptr and poison value does not matter */
+       map = insn_aux->map_ptr_state.map_ptr;
        if (!map->ops->map_set_for_each_callback_args ||
            !map->ops->map_for_each_callback) {
                verbose(env, "callback function not allowed for map\n");
@@@ -10017,12 -10060,12 +10060,12 @@@ record_func_map(struct bpf_verifier_en
                return -EACCES;
        }
  
-       if (!BPF_MAP_PTR(aux->map_ptr_state))
+       if (!aux->map_ptr_state.map_ptr)
+               bpf_map_ptr_store(aux, meta->map_ptr,
+                                 !meta->map_ptr->bypass_spec_v1, false);
+       else if (aux->map_ptr_state.map_ptr != meta->map_ptr)
                bpf_map_ptr_store(aux, meta->map_ptr,
-                                 !meta->map_ptr->bypass_spec_v1);
-       else if (BPF_MAP_PTR(aux->map_ptr_state) != meta->map_ptr)
-               bpf_map_ptr_store(aux, BPF_MAP_PTR_POISON,
-                                 !meta->map_ptr->bypass_spec_v1);
+                                 !meta->map_ptr->bypass_spec_v1, true);
        return 0;
  }
  
@@@ -10201,8 -10244,8 +10244,8 @@@ static int check_helper_call(struct bpf
        if (env->ops->get_func_proto)
                fn = env->ops->get_func_proto(func_id, env->prog);
        if (!fn) {
-               verbose(env, "unknown func %s#%d\n", func_id_name(func_id),
-                       func_id);
+               verbose(env, "program of this type cannot use helper %s#%d\n",
+                       func_id_name(func_id), func_id);
                return -EINVAL;
        }
  
                        env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
        }
  
+       if (env->cur_state->active_preempt_lock) {
+               if (fn->might_sleep) {
+                       verbose(env, "sleepable helper %s#%d in non-preemptible region\n",
+                               func_id_name(func_id), func_id);
+                       return -EINVAL;
+               }
+               if (in_sleepable(env) && is_storage_get_function(func_id))
+                       env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
+       }
        meta.func_id = func_id;
        /* check args */
        for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
@@@ -10839,6 -10893,7 +10893,7 @@@ enum 
        KF_ARG_LIST_NODE_ID,
        KF_ARG_RB_ROOT_ID,
        KF_ARG_RB_NODE_ID,
+       KF_ARG_WORKQUEUE_ID,
  };
  
  BTF_ID_LIST(kf_arg_btf_ids)
@@@ -10847,6 -10902,7 +10902,7 @@@ BTF_ID(struct, bpf_list_head
  BTF_ID(struct, bpf_list_node)
  BTF_ID(struct, bpf_rb_root)
  BTF_ID(struct, bpf_rb_node)
+ BTF_ID(struct, bpf_wq)
  
  static bool __is_kfunc_ptr_arg_type(const struct btf *btf,
                                    const struct btf_param *arg, int type)
@@@ -10890,6 -10946,11 +10946,11 @@@ static bool is_kfunc_arg_rbtree_node(co
        return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_RB_NODE_ID);
  }
  
+ static bool is_kfunc_arg_wq(const struct btf *btf, const struct btf_param *arg)
+ {
+       return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_WORKQUEUE_ID);
+ }
+
  static bool is_kfunc_arg_callback(struct bpf_verifier_env *env, const struct btf *btf,
                                  const struct btf_param *arg)
  {
@@@ -10959,6 -11020,7 +11020,7 @@@ enum kfunc_ptr_arg_type 
        KF_ARG_PTR_TO_NULL,
        KF_ARG_PTR_TO_CONST_STR,
        KF_ARG_PTR_TO_MAP,
+       KF_ARG_PTR_TO_WORKQUEUE,
  };
  
  enum special_kfunc_type {
        KF_bpf_percpu_obj_new_impl,
        KF_bpf_percpu_obj_drop_impl,
        KF_bpf_throw,
+       KF_bpf_wq_set_callback_impl,
+       KF_bpf_preempt_disable,
+       KF_bpf_preempt_enable,
        KF_bpf_iter_css_task_new,
  };
  
@@@ -11008,6 -11073,7 +11073,7 @@@ BTF_ID(func, bpf_dynptr_clone
  BTF_ID(func, bpf_percpu_obj_new_impl)
  BTF_ID(func, bpf_percpu_obj_drop_impl)
  BTF_ID(func, bpf_throw)
+ BTF_ID(func, bpf_wq_set_callback_impl)
  #ifdef CONFIG_CGROUPS
  BTF_ID(func, bpf_iter_css_task_new)
  #endif
@@@ -11036,6 -11102,9 +11102,9 @@@ BTF_ID(func, bpf_dynptr_clone
  BTF_ID(func, bpf_percpu_obj_new_impl)
  BTF_ID(func, bpf_percpu_obj_drop_impl)
  BTF_ID(func, bpf_throw)
+ BTF_ID(func, bpf_wq_set_callback_impl)
+ BTF_ID(func, bpf_preempt_disable)
+ BTF_ID(func, bpf_preempt_enable)
  #ifdef CONFIG_CGROUPS
  BTF_ID(func, bpf_iter_css_task_new)
  #else
@@@ -11062,6 -11131,16 +11131,16 @@@ static bool is_kfunc_bpf_rcu_read_unloc
        return meta->func_id == special_kfunc_list[KF_bpf_rcu_read_unlock];
  }
  
+ static bool is_kfunc_bpf_preempt_disable(struct bpf_kfunc_call_arg_meta *meta)
+ {
+       return meta->func_id == special_kfunc_list[KF_bpf_preempt_disable];
+ }
+
+ static bool is_kfunc_bpf_preempt_enable(struct bpf_kfunc_call_arg_meta *meta)
+ {
+       return meta->func_id == special_kfunc_list[KF_bpf_preempt_enable];
+ }
+
  static enum kfunc_ptr_arg_type
  get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
                       struct bpf_kfunc_call_arg_meta *meta,
        if (is_kfunc_arg_map(meta->btf, &args[argno]))
                return KF_ARG_PTR_TO_MAP;
  
+       if (is_kfunc_arg_wq(meta->btf, &args[argno]))
+               return KF_ARG_PTR_TO_WORKQUEUE;
        if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
                if (!btf_type_is_struct(ref_t)) {
                        verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
@@@ -11366,12 -11448,28 +11448,28 @@@ static bool is_sync_callback_calling_kf
        return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl];
  }
  
+ static bool is_async_callback_calling_kfunc(u32 btf_id)
+ {
+       return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl];
+ }
+
  static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
  {
        return bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
               insn->imm == special_kfunc_list[KF_bpf_throw];
  }
  
+ static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id)
+ {
+       return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl];
+ }
+
+ static bool is_callback_calling_kfunc(u32 btf_id)
+ {
+       return is_sync_callback_calling_kfunc(btf_id) ||
+              is_async_callback_calling_kfunc(btf_id);
+ }
+
  static bool is_rbtree_lock_required_kfunc(u32 btf_id)
  {
        return is_bpf_rbtree_api_kfunc(btf_id);
@@@ -11716,6 -11814,34 +11814,34 @@@ static int check_kfunc_args(struct bpf_
                case KF_ARG_PTR_TO_NULL:
                        continue;
                case KF_ARG_PTR_TO_MAP:
+                       if (!reg->map_ptr) {
+                               verbose(env, "pointer in R%d isn't map pointer\n", regno);
+                               return -EINVAL;
+                       }
+                       if (meta->map.ptr && reg->map_ptr->record->wq_off >= 0) {
+                               /* Use map_uid (which is unique id of inner map) to reject:
+                                * inner_map1 = bpf_map_lookup_elem(outer_map, key1)
+                                * inner_map2 = bpf_map_lookup_elem(outer_map, key2)
+                                * if (inner_map1 && inner_map2) {
+                                *     wq = bpf_map_lookup_elem(inner_map1);
+                                *     if (wq)
+                                *         // mismatch would have been allowed
+                                *         bpf_wq_init(wq, inner_map2);
+                                * }
+                                *
+                                * Comparing map_ptr is enough to distinguish normal and outer maps.
+                                */
+                               if (meta->map.ptr != reg->map_ptr ||
+                                   meta->map.uid != reg->map_uid) {
+                                       verbose(env,
+                                               "workqueue pointer in R1 map_uid=%d doesn't match map pointer in R2 map_uid=%d\n",
+                                               meta->map.uid, reg->map_uid);
+                                       return -EINVAL;
+                               }
+                       }
+                       meta->map.ptr = reg->map_ptr;
+                       meta->map.uid = reg->map_uid;
+                       fallthrough;
                case KF_ARG_PTR_TO_ALLOC_BTF_ID:
                case KF_ARG_PTR_TO_BTF_ID:
                        if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
                case KF_ARG_PTR_TO_CALLBACK:
                case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
                case KF_ARG_PTR_TO_CONST_STR:
+               case KF_ARG_PTR_TO_WORKQUEUE:
                        /* Trusted by default */
                        break;
                default:
                        if (ret)
                                return ret;
                        break;
+               case KF_ARG_PTR_TO_WORKQUEUE:
+                       if (reg->type != PTR_TO_MAP_VALUE) {
+                               verbose(env, "arg#%d doesn't point to a map value\n", i);
+                               return -EINVAL;
+                       }
+                       ret = process_wq_func(env, regno, meta);
+                       if (ret < 0)
+                               return ret;
+                       break;
                }
        }
  
@@@ -12093,11 -12229,11 +12229,11 @@@ static int check_return_code(struct bpf
  static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
                            int *insn_idx_p)
  {
-       const struct btf_type *t, *ptr_type;
+       bool sleepable, rcu_lock, rcu_unlock, preempt_disable, preempt_enable;
        u32 i, nargs, ptr_type_id, release_ref_obj_id;
        struct bpf_reg_state *regs = cur_regs(env);
        const char *func_name, *ptr_type_name;
-       bool sleepable, rcu_lock, rcu_unlock;
+       const struct btf_type *t, *ptr_type;
        struct bpf_kfunc_call_arg_meta meta;
        struct bpf_insn_aux_data *insn_aux;
        int err, insn_idx = *insn_idx_p;
                }
        }
  
+       if (is_bpf_wq_set_callback_impl_kfunc(meta.func_id)) {
+               err = push_callback_call(env, insn, insn_idx, meta.subprogno,
+                                        set_timer_callback_state);
+               if (err) {
+                       verbose(env, "kfunc %s#%d failed callback verification\n",
+                               func_name, meta.func_id);
+                       return err;
+               }
+       }
        rcu_lock = is_kfunc_bpf_rcu_read_lock(&meta);
        rcu_unlock = is_kfunc_bpf_rcu_read_unlock(&meta);
  
+       preempt_disable = is_kfunc_bpf_preempt_disable(&meta);
+       preempt_enable = is_kfunc_bpf_preempt_enable(&meta);
        if (env->cur_state->active_rcu_lock) {
                struct bpf_func_state *state;
                struct bpf_reg_state *reg;
                return -EINVAL;
        }
  
+       if (env->cur_state->active_preempt_lock) {
+               if (preempt_disable) {
+                       env->cur_state->active_preempt_lock++;
+               } else if (preempt_enable) {
+                       env->cur_state->active_preempt_lock--;
+               } else if (sleepable) {
+                       verbose(env, "kernel func %s is sleepable within non-preemptible region\n", func_name);
+                       return -EACCES;
+               }
+       } else if (preempt_disable) {
+               env->cur_state->active_preempt_lock++;
+       } else if (preempt_enable) {
+               verbose(env, "unmatched attempt to enable preemption (kernel function %s)\n", func_name);
+               return -EINVAL;
+       }
        /* In case of release function, we get register number of refcounted
         * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now.
         */
@@@ -13318,7 -13483,6 +13483,6 @@@ static void scalar32_min_max_and(struc
        bool src_known = tnum_subreg_is_const(src_reg->var_off);
        bool dst_known = tnum_subreg_is_const(dst_reg->var_off);
        struct tnum var32_off = tnum_subreg(dst_reg->var_off);
-       s32 smin_val = src_reg->s32_min_value;
        u32 umax_val = src_reg->u32_max_value;
  
        if (src_known && dst_known) {
         */
        dst_reg->u32_min_value = var32_off.value;
        dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
-       if (dst_reg->s32_min_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ANDing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->s32_min_value = S32_MIN;
-               dst_reg->s32_max_value = S32_MAX;
-       } else {
-               /* ANDing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
+       } else {
+               dst_reg->s32_min_value = S32_MIN;
+               dst_reg->s32_max_value = S32_MAX;
        }
  }
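The same rule is applied to the 32-bit and 64-bit variants of AND/OR/XOR throughout this file: the unsigned result range can be reused as the signed range only when it does not cross the sign boundary. A standalone restatement with worked examples (kernel integer types assumed):

    /* Model of the bounds transfer used above (64-bit case). */
    static void transfer_u64_bounds_to_s64(u64 umin, u64 umax, s64 *smin, s64 *smax)
    {
            if ((s64)umin <= (s64)umax) {
                    /* e.g. [1, 0x7fffffffffffffff]: both ends stay on the
                     * non-negative side, so the range is valid as-is in the
                     * signed domain.
                     */
                    *smin = umin;
                    *smax = umax;
            } else {
                    /* e.g. [1, 0x8000000000000000]: (s64)umax == S64_MIN < 1,
                     * the range wraps across the sign boundary, so nothing
                     * useful can be said about the signed bounds.
                     */
                    *smin = S64_MIN;
                    *smax = S64_MAX;
            }
    }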
  
@@@ -13351,7 -13513,6 +13513,6 @@@ static void scalar_min_max_and(struct b
  {
        bool src_known = tnum_is_const(src_reg->var_off);
        bool dst_known = tnum_is_const(dst_reg->var_off);
-       s64 smin_val = src_reg->smin_value;
        u64 umax_val = src_reg->umax_value;
  
        if (src_known && dst_known) {
         */
        dst_reg->umin_value = dst_reg->var_off.value;
        dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
-       if (dst_reg->smin_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ANDing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->smin_value = S64_MIN;
-               dst_reg->smax_value = S64_MAX;
-       } else {
-               /* ANDing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
+       } else {
+               dst_reg->smin_value = S64_MIN;
+               dst_reg->smax_value = S64_MAX;
        }
        /* We may learn something more from the var_off */
        __update_reg_bounds(dst_reg);
@@@ -13387,7 -13546,6 +13546,6 @@@ static void scalar32_min_max_or(struct 
        bool src_known = tnum_subreg_is_const(src_reg->var_off);
        bool dst_known = tnum_subreg_is_const(dst_reg->var_off);
        struct tnum var32_off = tnum_subreg(dst_reg->var_off);
-       s32 smin_val = src_reg->s32_min_value;
        u32 umin_val = src_reg->u32_min_value;
  
        if (src_known && dst_known) {
         */
        dst_reg->u32_min_value = max(dst_reg->u32_min_value, umin_val);
        dst_reg->u32_max_value = var32_off.value | var32_off.mask;
-       if (dst_reg->s32_min_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ORing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->s32_min_value = S32_MIN;
-               dst_reg->s32_max_value = S32_MAX;
-       } else {
-               /* ORing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
+       } else {
+               dst_reg->s32_min_value = S32_MIN;
+               dst_reg->s32_max_value = S32_MAX;
        }
  }
  
@@@ -13420,7 -13576,6 +13576,6 @@@ static void scalar_min_max_or(struct bp
  {
        bool src_known = tnum_is_const(src_reg->var_off);
        bool dst_known = tnum_is_const(dst_reg->var_off);
-       s64 smin_val = src_reg->smin_value;
        u64 umin_val = src_reg->umin_value;
  
        if (src_known && dst_known) {
         */
        dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
        dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;
-       if (dst_reg->smin_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ORing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->smin_value = S64_MIN;
-               dst_reg->smax_value = S64_MAX;
-       } else {
-               /* ORing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
+       } else {
+               dst_reg->smin_value = S64_MIN;
+               dst_reg->smax_value = S64_MAX;
        }
        /* We may learn something more from the var_off */
        __update_reg_bounds(dst_reg);
@@@ -13456,7 -13609,6 +13609,6 @@@ static void scalar32_min_max_xor(struc
        bool src_known = tnum_subreg_is_const(src_reg->var_off);
        bool dst_known = tnum_subreg_is_const(dst_reg->var_off);
        struct tnum var32_off = tnum_subreg(dst_reg->var_off);
-       s32 smin_val = src_reg->s32_min_value;
  
        if (src_known && dst_known) {
                __mark_reg32_known(dst_reg, var32_off.value);
        dst_reg->u32_min_value = var32_off.value;
        dst_reg->u32_max_value = var32_off.value | var32_off.mask;
  
-       if (dst_reg->s32_min_value >= 0 && smin_val >= 0) {
-               /* XORing two positive sign numbers gives a positive,
-                * so safe to cast u32 result into s32.
-                */
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
        } else {
@@@ -13484,7 -13636,6 +13636,6 @@@ static void scalar_min_max_xor(struct b
  {
        bool src_known = tnum_is_const(src_reg->var_off);
        bool dst_known = tnum_is_const(dst_reg->var_off);
-       s64 smin_val = src_reg->smin_value;
  
        if (src_known && dst_known) {
                /* dst_reg->var_off.value has been updated earlier */
        dst_reg->umin_value = dst_reg->var_off.value;
        dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;
  
-       if (dst_reg->smin_value >= 0 && smin_val >= 0) {
-               /* XORing two positive sign numbers gives a positive,
-                * so safe to cast u64 result into s64.
-                */
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
        } else {
@@@ -14726,7 -14877,7 +14877,7 @@@ static void regs_refine_cond_op(struct 
  
  /* Adjusts the register min/max values in the case that the dst_reg and
   * src_reg are both SCALAR_VALUE registers (or we are simply doing a BPF_K
-  * check, in which case we havea fake SCALAR_VALUE representing insn->imm).
+  * check, in which case we have a fake SCALAR_VALUE representing insn->imm).
   * Technically we can do similar adjustments for pointers to the same object,
   * but we don't support that right now.
   */
@@@ -15341,6 -15492,11 +15492,11 @@@ static int check_ld_abs(struct bpf_veri
                return -EINVAL;
        }
  
+       if (env->cur_state->active_preempt_lock) {
+               verbose(env, "BPF_LD_[ABS|IND] cannot be used inside bpf_preempt_disable-ed region\n");
+               return -EINVAL;
+       }
        if (regs[ctx_reg].type != PTR_TO_CTX) {
                verbose(env,
                        "at the time of BPF_LD_ABS|IND R6 != pointer to skb\n");
@@@ -16908,6 -17064,12 +17064,12 @@@ static bool states_equal(struct bpf_ver
        if (old->active_rcu_lock != cur->active_rcu_lock)
                return false;
  
+       if (old->active_preempt_lock != cur->active_preempt_lock)
+               return false;
+       if (old->in_sleepable != cur->in_sleepable)
+               return false;
        /* for states to be equal callsites have to be the same
         * and all frame states need to be equivalent
         */
                        err = propagate_liveness(env, &sl->state, cur);
  
                        /* if previous state reached the exit with precision and
-                        * current state is equivalent to it (except precsion marks)
+                        * current state is equivalent to it (except precision marks)
                         * the precision needs to be propagated back in
                         * the current state.
                         */
@@@ -17542,7 -17704,7 +17704,7 @@@ static bool reg_type_mismatch(enum bpf_
  }
  
  static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
-                            bool allow_trust_missmatch)
+                            bool allow_trust_mismatch)
  {
        enum bpf_reg_type *prev_type = &env->insn_aux_data[env->insn_idx].ptr_type;
  
                 * src_reg == stack|map in some other branch.
                 * Reject it.
                 */
-               if (allow_trust_missmatch &&
+               if (allow_trust_mismatch &&
                    base_type(type) == PTR_TO_BTF_ID &&
                    base_type(*prev_type) == PTR_TO_BTF_ID) {
                        /*
@@@ -17856,6 -18018,13 +18018,13 @@@ process_bpf_exit_full
                                        return -EINVAL;
                                }
  
+                               if (env->cur_state->active_preempt_lock && !env->cur_state->curframe) {
+                                       verbose(env, "%d bpf_preempt_enable%s missing\n",
+                                               env->cur_state->active_preempt_lock,
+                                               env->cur_state->active_preempt_lock == 1 ? " is" : "(s) are");
+                                       return -EINVAL;
+                               }
                                /* We must do check_reference_leak here before
                                 * prepare_func_exit to handle the case when
                                 * state->curframe > 0, it may be a callback
@@@ -18153,6 -18322,13 +18322,13 @@@ static int check_map_prog_compatibility
                }
        }
  
+       if (btf_record_has_field(map->record, BPF_WORKQUEUE)) {
+               if (is_tracing_prog_type(prog_type)) {
+                       verbose(env, "tracing progs cannot use bpf_wq yet\n");
+                       return -EINVAL;
+               }
+       }
        if ((bpf_prog_is_offloaded(prog->aux) || bpf_map_is_offloaded(map)) &&
            !bpf_offload_prog_map_match(prog, map)) {
                verbose(env, "offload device mismatch between prog and map\n");
@@@ -18348,6 -18524,8 +18524,8 @@@ static int resolve_pseudo_ldimm64(struc
                        }
  
                        if (env->used_map_cnt >= MAX_USED_MAPS) {
+                               verbose(env, "The total number of maps per program has reached the limit of %u\n",
+                                       MAX_USED_MAPS);
                                fdput(f);
                                return -E2BIG;
                        }
                                }
                                if (!env->prog->jit_requested) {
                                        verbose(env, "JIT is required to use arena\n");
 +                                      fdput(f);
                                        return -EOPNOTSUPP;
                                }
                                if (!bpf_jit_supports_arena()) {
                                        verbose(env, "JIT doesn't support arena\n");
 +                                      fdput(f);
                                        return -EOPNOTSUPP;
                                }
                                env->prog->aux->arena = (void *)map;
                                if (!bpf_arena_get_user_vm_start(env->prog->aux->arena)) {
                                        verbose(env, "arena's user address must be set via map_extra or mmap()\n");
 +                                      fdput(f);
                                        return -EINVAL;
                                }
                        }
@@@ -18962,6 -19137,12 +19140,12 @@@ static int convert_ctx_accesses(struct 
                           insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
                           insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
                        type = BPF_WRITE;
+               } else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
+                           insn->code == (BPF_STX | BPF_ATOMIC | BPF_DW)) &&
+                          env->insn_aux_data[i + delta].ptr_type == PTR_TO_ARENA) {
+                       insn->code = BPF_STX | BPF_PROBE_ATOMIC | BPF_SIZE(insn->code);
+                       env->prog->aux->num_exentries++;
+                       continue;
                } else {
                        continue;
                }
@@@ -19148,12 -19329,19 +19332,19 @@@ static int jit_subprogs(struct bpf_veri
                env->insn_aux_data[i].call_imm = insn->imm;
                /* point imm to __bpf_call_base+1 from JITs point of view */
                insn->imm = 1;
-               if (bpf_pseudo_func(insn))
+               if (bpf_pseudo_func(insn)) {
+ #if defined(MODULES_VADDR)
+                       u64 addr = MODULES_VADDR;
+ #else
+                       u64 addr = VMALLOC_START;
+ #endif
                        /* jit (e.g. x86_64) may emit fewer instructions
                         * if it learns a u32 imm is the same as a u64 imm.
-                        * Force a non zero here.
+                        * Set close enough to possible prog address.
                         */
-                       insn[1].imm = 1;
+                       insn[0].imm = (u32)addr;
+                       insn[1].imm = addr >> 32;
+               }
        }
  
        err = bpf_prog_alloc_jited_linfo(prog);
                             BPF_CLASS(insn->code) == BPF_ST) &&
                             BPF_MODE(insn->code) == BPF_PROBE_MEM32)
                                num_exentries++;
+                       if (BPF_CLASS(insn->code) == BPF_STX &&
+                            BPF_MODE(insn->code) == BPF_PROBE_ATOMIC)
+                               num_exentries++;
                }
                func[i]->aux->num_exentries = num_exentries;
                func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
@@@ -19557,6 -19748,13 +19751,13 @@@ static int fixup_kfunc_call(struct bpf_
                   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
                insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
                *cnt = 1;
+       } else if (is_bpf_wq_set_callback_impl_kfunc(desc->func_id)) {
+               struct bpf_insn ld_addrs[2] = { BPF_LD_IMM64(BPF_REG_4, (long)env->prog->aux) };
+               insn_buf[0] = ld_addrs[0];
+               insn_buf[1] = ld_addrs[1];
+               insn_buf[2] = *insn;
+               *cnt = 3;
        }
        return 0;
  }
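The two instructions emitted by BPF_LD_IMM64 above materialize the owning prog->aux into R4, i.e. the fourth, hidden argument of the kfunc being patched. For reference, the kernel-side kfunc has a shape roughly like the following (signature reproduced from memory of the bpf_wq series; the __ign suffix marks the argument the verifier supplies, BPF programs never pass it themselves):

    __bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
                    int (callback_fn)(void *map, int *key, void *value),
                    unsigned int flags,
                    void *aux__ign /* filled in by the fixup above */);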
@@@ -19832,7 -20030,7 +20033,7 @@@ static int do_misc_fixups(struct bpf_ve
                            !bpf_map_ptr_unpriv(aux)) {
                                struct bpf_jit_poke_descriptor desc = {
                                        .reason = BPF_POKE_REASON_TAIL_CALL,
-                                       .tail_call.map = BPF_MAP_PTR(aux->map_ptr_state),
+                                       .tail_call.map = aux->map_ptr_state.map_ptr,
                                        .tail_call.key = bpf_map_key_immediate(aux),
                                        .insn_idx = i + delta,
                                };
                                return -EINVAL;
                        }
  
-                       map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
+                       map_ptr = aux->map_ptr_state.map_ptr;
                        insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
                                                  map_ptr->max_entries, 2);
                        insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
                        if (bpf_map_ptr_poisoned(aux))
                                goto patch_call_imm;
  
-                       map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
+                       map_ptr = aux->map_ptr_state.map_ptr;
                        ops = map_ptr->ops;
                        if (insn->imm == BPF_FUNC_map_lookup_elem &&
                            ops->map_gen_lookup) {
@@@ -20075,6 -20273,30 +20276,30 @@@ patch_map_ops_generic
                        goto next_insn;
                }
  
+ #ifdef CONFIG_X86_64
+               /* Implement bpf_get_smp_processor_id() inline. */
+               if (insn->imm == BPF_FUNC_get_smp_processor_id &&
+                   prog->jit_requested && bpf_jit_supports_percpu_insn()) {
+                       /* BPF_FUNC_get_smp_processor_id inlining is an
+                        * optimization, so if pcpu_hot.cpu_number is ever
+                        * changed in some incompatible and hard to support
+                        * way, it's fine to back out this inlining logic
+                        */
+                       insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
+                       insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+                       insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+                       cnt = 3;
+                       new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+                       if (!new_prog)
+                               return -ENOMEM;
+                       delta    += cnt - 1;
+                       env->prog = prog = new_prog;
+                       insn      = new_prog->insnsi + i + delta;
+                       goto next_insn;
+               }
+ #endif
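In C terms the three patched instructions compute the equivalent of the sketch below: take the address of the per-CPU variable, turn it into this CPU's instance via the new internal per-CPU instruction (a single gs-relative operation once JITed on x86-64), and load the 32-bit CPU number:

    /* Rough C equivalent of the inlined sequence above (x86-64 only). */
    static inline u32 inlined_smp_processor_id(void)
    {
            return this_cpu_read(pcpu_hot.cpu_number);
    }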
                /* Implement bpf_get_func_arg inline. */
                if (prog_type == BPF_PROG_TYPE_TRACING &&
                    insn->imm == BPF_FUNC_get_func_arg) {
                        goto next_insn;
                }
  
+               /* Implement bpf_get_branch_snapshot inline. */
+               if (IS_ENABLED(CONFIG_PERF_EVENTS) &&
+                   prog->jit_requested && BITS_PER_LONG == 64 &&
+                   insn->imm == BPF_FUNC_get_branch_snapshot) {
+                       /* We are dealing with the following func protos:
+                        * u64 bpf_get_branch_snapshot(void *buf, u32 size, u64 flags);
+                        * int perf_snapshot_branch_stack(struct perf_branch_entry *entries, u32 cnt);
+                        */
+                       const u32 br_entry_size = sizeof(struct perf_branch_entry);
+                       /* struct perf_branch_entry is part of UAPI and is
+                        * used as an array element, so extremely unlikely to
+                        * ever grow or shrink
+                        */
+                       BUILD_BUG_ON(br_entry_size != 24);
+                       /* if (unlikely(flags)) return -EINVAL */
+                       insn_buf[0] = BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 0, 7);
+                       /* Transform size (bytes) into number of entries (cnt = size / 24).
+                        * But to avoid expensive division instruction, we implement
+                        * divide-by-3 through multiplication, followed by further
+                        * division by 8 through 3-bit right shift.
+                        * Refer to book "Hacker's Delight, 2nd ed." by Henry S. Warren, Jr.,
+                        * p. 227, chapter "Unsigned Division by 3" for details and proofs.
+                        *
+                        * N / 3 <=> M * N / 2^33, where M = (2^33 + 1) / 3 = 0xaaaaaaab.
+                        */
+                       insn_buf[1] = BPF_MOV32_IMM(BPF_REG_0, 0xaaaaaaab);
+                       insn_buf[2] = BPF_ALU64_REG(BPF_MUL, BPF_REG_2, BPF_REG_0);
+                       insn_buf[3] = BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36);
+                       /* call perf_snapshot_branch_stack implementation */
+                       insn_buf[4] = BPF_EMIT_CALL(static_call_query(perf_snapshot_branch_stack));
+                       /* if (entry_cnt == 0) return -ENOENT */
+                       insn_buf[5] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4);
+                       /* return entry_cnt * sizeof(struct perf_branch_entry) */
+                       insn_buf[6] = BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, br_entry_size);
+                       insn_buf[7] = BPF_JMP_A(3);
+                       /* return -EINVAL; */
+                       insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
+                       insn_buf[9] = BPF_JMP_A(1);
+                       /* return -ENOENT; */
+                       insn_buf[10] = BPF_MOV64_IMM(BPF_REG_0, -ENOENT);
+                       cnt = 11;
+                       new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+                       if (!new_prog)
+                               return -ENOMEM;
+                       delta    += cnt - 1;
+                       env->prog = prog = new_prog;
+                       insn      = new_prog->insnsi + i + delta;
+                       continue;
+               }
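The divide-by-24 trick above can be sanity-checked in userspace: with M = 0xaaaaaaab = (2^33 + 1) / 3, (size * M) >> 36 equals size / 24 for any realistic buffer size; e.g. size = 72 (three entries) gives 72 * M = 0x3000000018, which shifted right by 36 is 3. A throwaway check:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* size / 24 via multiply + shift, as emitted above */
            for (uint64_t size = 0; size <= 64 * 24; size++)
                    assert(((size * 0xaaaaaaabULL) >> 36) == size / 24);
            return 0;
    }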
                /* Implement bpf_kptr_xchg inline */
                if (prog->jit_requested && BITS_PER_LONG == 64 &&
                    insn->imm == BPF_FUNC_kptr_xchg &&
diff --combined kernel/trace/bpf_trace.c
index 802e4f77a118c2c8e68e225ec1bbbcaf9c4081b2,afb232b1d7c237a82634c3e57e9577db496154ed..0ba722b57af3ded2f9c87164e6395b1b4020b02e
@@@ -1188,9 -1188,6 +1188,6 @@@ static const struct bpf_func_proto bpf_
  
  BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
  {
- #ifndef CONFIG_X86
-       return -ENOENT;
- #else
        static const u32 br_entry_size = sizeof(struct perf_branch_entry);
        u32 entry_cnt = size / br_entry_size;
  
                return -ENOENT;
  
        return entry_cnt * br_entry_size;
- #endif
  }
  
  static const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
@@@ -2740,7 -2736,7 +2736,7 @@@ static int bpf_kprobe_multi_link_fill_l
  
  static const struct bpf_link_ops bpf_kprobe_multi_link_lops = {
        .release = bpf_kprobe_multi_link_release,
 -      .dealloc = bpf_kprobe_multi_link_dealloc,
 +      .dealloc_deferred = bpf_kprobe_multi_link_dealloc,
        .fill_link_info = bpf_kprobe_multi_link_fill_link_info,
  };
  
@@@ -3169,9 -3165,6 +3165,9 @@@ static void bpf_uprobe_multi_link_relea
  
        umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
        bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt);
 +      if (umulti_link->task)
 +              put_task_struct(umulti_link->task);
 +      path_put(&umulti_link->path);
  }
  
  static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
        struct bpf_uprobe_multi_link *umulti_link;
  
        umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
 -      if (umulti_link->task)
 -              put_task_struct(umulti_link->task);
 -      path_put(&umulti_link->path);
        kvfree(umulti_link->uprobes);
        kfree(umulti_link);
  }
@@@ -3254,7 -3250,7 +3250,7 @@@ static int bpf_uprobe_multi_link_fill_l
  
  static const struct bpf_link_ops bpf_uprobe_multi_link_lops = {
        .release = bpf_uprobe_multi_link_release,
 -      .dealloc = bpf_uprobe_multi_link_dealloc,
 +      .dealloc_deferred = bpf_uprobe_multi_link_dealloc,
        .fill_link_info = bpf_uprobe_multi_link_fill_link_info,
  };
  
diff --combined net/core/filter.c
index 5662464e1abd29230fb72c0db46620153fee7ef9,786d792ac8161e76339154b8ac2f8b9335182349..6d319c76188b6a475605f26d575b09eb46a2531b
@@@ -87,6 -87,9 +87,9 @@@
  
  #include "dev.h"
  
+ /* Keep the struct bpf_fib_lookup small so that it fits into a cacheline */
+ static_assert(sizeof(struct bpf_fib_lookup) == 64, "struct bpf_fib_lookup size check");
  static const struct bpf_func_proto *
  bpf_sk_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
  
@@@ -2215,7 -2218,7 +2218,7 @@@ static int bpf_out_neigh_v6(struct net 
        rcu_read_lock();
        if (!nh) {
                dst = skb_dst(skb);
 -              nexthop = rt6_nexthop(container_of(dst, struct rt6_info, dst),
 +              nexthop = rt6_nexthop(dst_rt6_info(dst),
                                      &ipv6_hdr(skb)->daddr);
        } else {
                nexthop = &nh->ipv6_nh;
@@@ -4662,7 -4665,7 +4665,7 @@@ set_compat
        to->tunnel_tos = info->key.tos;
        to->tunnel_ttl = info->key.ttl;
        if (flags & BPF_F_TUNINFO_FLAGS)
 -              to->tunnel_flags = info->key.tun_flags;
 +              to->tunnel_flags = ip_tunnel_flags_to_be16(info->key.tun_flags);
        else
                to->tunnel_ext = 0;
  
@@@ -4705,7 -4708,7 +4708,7 @@@ BPF_CALL_3(bpf_skb_get_tunnel_opt, stru
        int err;
  
        if (unlikely(!info ||
 -                   !(info->key.tun_flags & TUNNEL_OPTIONS_PRESENT))) {
 +                   !ip_tunnel_is_options_present(info->key.tun_flags))) {
                err = -ENOENT;
                goto err_clear;
        }
@@@ -4775,15 -4778,15 +4778,15 @@@ BPF_CALL_4(bpf_skb_set_tunnel_key, stru
        memset(info, 0, sizeof(*info));
        info->mode = IP_TUNNEL_INFO_TX;
  
 -      info->key.tun_flags = TUNNEL_KEY | TUNNEL_CSUM | TUNNEL_NOCACHE;
 -      if (flags & BPF_F_DONT_FRAGMENT)
 -              info->key.tun_flags |= TUNNEL_DONT_FRAGMENT;
 -      if (flags & BPF_F_ZERO_CSUM_TX)
 -              info->key.tun_flags &= ~TUNNEL_CSUM;
 -      if (flags & BPF_F_SEQ_NUMBER)
 -              info->key.tun_flags |= TUNNEL_SEQ;
 -      if (flags & BPF_F_NO_TUNNEL_KEY)
 -              info->key.tun_flags &= ~TUNNEL_KEY;
 +      __set_bit(IP_TUNNEL_NOCACHE_BIT, info->key.tun_flags);
 +      __assign_bit(IP_TUNNEL_DONT_FRAGMENT_BIT, info->key.tun_flags,
 +                   flags & BPF_F_DONT_FRAGMENT);
 +      __assign_bit(IP_TUNNEL_CSUM_BIT, info->key.tun_flags,
 +                   !(flags & BPF_F_ZERO_CSUM_TX));
 +      __assign_bit(IP_TUNNEL_SEQ_BIT, info->key.tun_flags,
 +                   flags & BPF_F_SEQ_NUMBER);
 +      __assign_bit(IP_TUNNEL_KEY_BIT, info->key.tun_flags,
 +                   !(flags & BPF_F_NO_TUNNEL_KEY));
  
        info->key.tun_id = cpu_to_be64(from->tunnel_id);
        info->key.tos = from->tunnel_tos;
@@@ -4821,15 -4824,13 +4824,15 @@@ BPF_CALL_3(bpf_skb_set_tunnel_opt, stru
  {
        struct ip_tunnel_info *info = skb_tunnel_info(skb);
        const struct metadata_dst *md = this_cpu_ptr(md_dst);
 +      IP_TUNNEL_DECLARE_FLAGS(present) = { };
  
        if (unlikely(info != &md->u.tun_info || (size & (sizeof(u32) - 1))))
                return -EINVAL;
        if (unlikely(size > IP_TUNNEL_OPTS_MAX))
                return -ENOMEM;
  
 -      ip_tunnel_info_opts_set(info, from, size, TUNNEL_OPTIONS_PRESENT);
 +      ip_tunnel_set_options_present(present);
 +      ip_tunnel_info_opts_set(info, from, size, present);
  
        return 0;
  }
@@@ -5886,7 -5887,10 +5889,10 @@@ static int bpf_ipv4_fib_lookup(struct n
  
                err = fib_table_lookup(tb, &fl4, &res, FIB_LOOKUP_NOREF);
        } else {
-               fl4.flowi4_mark = 0;
+               if (flags & BPF_FIB_LOOKUP_MARK)
+                       fl4.flowi4_mark = params->mark;
+               else
+                       fl4.flowi4_mark = 0;
                fl4.flowi4_secid = 0;
                fl4.flowi4_tun_key.tun_id = 0;
                fl4.flowi4_uid = sock_net_uid(net, NULL);
@@@ -6029,7 -6033,10 +6035,10 @@@ static int bpf_ipv6_fib_lookup(struct n
                err = ipv6_stub->fib6_table_lookup(net, tb, oif, &fl6, &res,
                                                   strict);
        } else {
-               fl6.flowi6_mark = 0;
+               if (flags & BPF_FIB_LOOKUP_MARK)
+                       fl6.flowi6_mark = params->mark;
+               else
+                       fl6.flowi6_mark = 0;
                fl6.flowi6_secid = 0;
                fl6.flowi6_tun_key.tun_id = 0;
                fl6.flowi6_uid = sock_net_uid(net, NULL);
@@@ -6107,7 -6114,7 +6116,7 @@@ set_fwd_params
  
  #define BPF_FIB_LOOKUP_MASK (BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_OUTPUT | \
                             BPF_FIB_LOOKUP_SKIP_NEIGH | BPF_FIB_LOOKUP_TBID | \
-                            BPF_FIB_LOOKUP_SRC)
+                            BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_MARK)
  
  BPF_CALL_4(bpf_xdp_fib_lookup, struct xdp_buff *, ctx,
           struct bpf_fib_lookup *, params, int, plen, u32, flags)
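With the new flag, a program can ask the FIB lookup to honour a routing mark instead of having it forced to zero. A sketch of the tc-side usage (constants and the remaining struct bpf_fib_lookup fields come from the UAPI headers; packet parsing is elided):

    SEC("tc")
    int fwd_with_mark(struct __sk_buff *skb)
    {
            struct bpf_fib_lookup params = {};
            long rc;

            params.family  = AF_INET;
            params.ifindex = skb->ingress_ifindex;
            params.mark    = skb->mark;   /* only consulted with the flag below */
            /* ... fill l4_protocol, addresses, etc. from the packet ... */

            rc = bpf_fib_lookup(skb, &params, sizeof(params), BPF_FIB_LOOKUP_MARK);
            return rc == BPF_FIB_LKUP_RET_SUCCESS ? TC_ACT_OK : TC_ACT_SHOT;
    }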
diff --combined net/core/sock_map.c
index 8598466a3805784f58497d9607c5ace6f081cefb,63c016b4c1696ab8573f827fbf3fdaaa218f7972..9402889840bf7e4fe2adb743d387b9dcdbe17024
@@@ -24,8 -24,16 +24,16 @@@ struct bpf_stab 
  #define SOCK_CREATE_FLAG_MASK                         \
        (BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
  
+ /* This mutex is used to
+  *  - protect race between prog/link attach/detach and link prog update, and
+  *  - protect race between releasing and accessing map in bpf_link.
+  * A single global mutex lock is used since it is expected contention is low.
+  */
+ static DEFINE_MUTEX(sockmap_mutex);
+
  static int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
-                               struct bpf_prog *old, u32 which);
+                               struct bpf_prog *old, struct bpf_link *link,
+                               u32 which);
  static struct sk_psock_progs *sock_map_progs(struct bpf_map *map);
  
  static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
@@@ -71,7 -79,9 +79,9 @@@ int sock_map_get_from_fd(const union bp
        map = __bpf_map_get(f);
        if (IS_ERR(map))
                return PTR_ERR(map);
-       ret = sock_map_prog_update(map, prog, NULL, attr->attach_type);
+       mutex_lock(&sockmap_mutex);
+       ret = sock_map_prog_update(map, prog, NULL, NULL, attr->attach_type);
+       mutex_unlock(&sockmap_mutex);
        fdput(f);
        return ret;
  }
@@@ -103,7 -113,9 +113,9 @@@ int sock_map_prog_detach(const union bp
                goto put_prog;
        }
  
-       ret = sock_map_prog_update(map, NULL, prog, attr->attach_type);
+       mutex_lock(&sockmap_mutex);
+       ret = sock_map_prog_update(map, NULL, prog, NULL, attr->attach_type);
+       mutex_unlock(&sockmap_mutex);
  put_prog:
        bpf_prog_put(prog);
  put_map:
@@@ -411,9 -423,6 +423,9 @@@ static int __sock_map_delete(struct bpf
        struct sock *sk;
        int err = 0;
  
 +      if (irqs_disabled())
 +              return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
 +
        spin_lock_bh(&stab->lock);
        sk = *psk;
        if (!sk_test || sk_test == sk)
@@@ -936,9 -945,6 +948,9 @@@ static long sock_hash_delete_elem(struc
        struct bpf_shtab_elem *elem;
        int ret = -ENOENT;
  
 +      if (irqs_disabled())
 +              return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
 +
        hash = sock_hash_bucket_hash(key, key_size);
        bucket = sock_hash_select_bucket(htab, hash);
  
@@@ -1460,55 -1466,84 +1472,84 @@@ static struct sk_psock_progs *sock_map_
        return NULL;
  }
  
- static int sock_map_prog_lookup(struct bpf_map *map, struct bpf_prog ***pprog,
-                               u32 which)
+ static int sock_map_prog_link_lookup(struct bpf_map *map, struct bpf_prog ***pprog,
+                                    struct bpf_link ***plink, u32 which)
  {
        struct sk_psock_progs *progs = sock_map_progs(map);
+       struct bpf_prog **cur_pprog;
+       struct bpf_link **cur_plink;
  
        if (!progs)
                return -EOPNOTSUPP;
  
        switch (which) {
        case BPF_SK_MSG_VERDICT:
-               *pprog = &progs->msg_parser;
+               cur_pprog = &progs->msg_parser;
+               cur_plink = &progs->msg_parser_link;
                break;
  #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
        case BPF_SK_SKB_STREAM_PARSER:
-               *pprog = &progs->stream_parser;
+               cur_pprog = &progs->stream_parser;
+               cur_plink = &progs->stream_parser_link;
                break;
  #endif
        case BPF_SK_SKB_STREAM_VERDICT:
                if (progs->skb_verdict)
                        return -EBUSY;
-               *pprog = &progs->stream_verdict;
+               cur_pprog = &progs->stream_verdict;
+               cur_plink = &progs->stream_verdict_link;
                break;
        case BPF_SK_SKB_VERDICT:
                if (progs->stream_verdict)
                        return -EBUSY;
-               *pprog = &progs->skb_verdict;
+               cur_pprog = &progs->skb_verdict;
+               cur_plink = &progs->skb_verdict_link;
                break;
        default:
                return -EOPNOTSUPP;
        }
  
+       *pprog = cur_pprog;
+       if (plink)
+               *plink = cur_plink;
        return 0;
  }
  
+ /* Handle the following four cases:
+  * prog_attach: prog != NULL, old == NULL, link == NULL
+  * prog_detach: prog == NULL, old != NULL, link == NULL
+  * link_attach: prog != NULL, old == NULL, link != NULL
+  * link_detach: prog == NULL, old != NULL, link != NULL
+  */
  static int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
-                               struct bpf_prog *old, u32 which)
+                               struct bpf_prog *old, struct bpf_link *link,
+                               u32 which)
  {
        struct bpf_prog **pprog;
+       struct bpf_link **plink;
        int ret;
  
-       ret = sock_map_prog_lookup(map, &pprog, which);
+       ret = sock_map_prog_link_lookup(map, &pprog, &plink, which);
        if (ret)
                return ret;
  
-       if (old)
-               return psock_replace_prog(pprog, prog, old);
+       /* for prog_attach/prog_detach/link_attach, return error if a bpf_link
+        * exists for that prog.
+        */
+       if ((!link || prog) && *plink)
+               return -EBUSY;
  
-       psock_set_prog(pprog, prog);
-       return 0;
+       if (old) {
+               ret = psock_replace_prog(pprog, prog, old);
+               if (!ret)
+                       *plink = NULL;
+       } else {
+               psock_set_prog(pprog, prog);
+               if (link)
+                       *plink = link;
+       }
+       return ret;
  }
  
  int sock_map_bpf_prog_query(const union bpf_attr *attr,
  
        rcu_read_lock();
  
-       ret = sock_map_prog_lookup(map, &pprog, attr->query.attach_type);
+       ret = sock_map_prog_link_lookup(map, &pprog, NULL, attr->query.attach_type);
        if (ret)
                goto end;
  
@@@ -1663,6 -1698,196 +1704,196 @@@ void sock_map_close(struct sock *sk, lo
  }
  EXPORT_SYMBOL_GPL(sock_map_close);
  
+ struct sockmap_link {
+       struct bpf_link link;
+       struct bpf_map *map;
+       enum bpf_attach_type attach_type;
+ };
+ static void sock_map_link_release(struct bpf_link *link)
+ {
+       struct sockmap_link *sockmap_link = container_of(link, struct sockmap_link, link);
+       mutex_lock(&sockmap_mutex);
+       if (!sockmap_link->map)
+               goto out;
+       WARN_ON_ONCE(sock_map_prog_update(sockmap_link->map, NULL, link->prog, link,
+                                         sockmap_link->attach_type));
+       bpf_map_put_with_uref(sockmap_link->map);
+       sockmap_link->map = NULL;
+ out:
+       mutex_unlock(&sockmap_mutex);
+ }
+ static int sock_map_link_detach(struct bpf_link *link)
+ {
+       sock_map_link_release(link);
+       return 0;
+ }
+ static void sock_map_link_dealloc(struct bpf_link *link)
+ {
+       kfree(link);
+ }
+ /* Handle the following two cases:
+  * case 1: link != NULL, prog != NULL, old != NULL
+  * case 2: link != NULL, prog != NULL, old == NULL
+  */
+ static int sock_map_link_update_prog(struct bpf_link *link,
+                                    struct bpf_prog *prog,
+                                    struct bpf_prog *old)
+ {
+       const struct sockmap_link *sockmap_link = container_of(link, struct sockmap_link, link);
+       struct bpf_prog **pprog, *old_link_prog;
+       struct bpf_link **plink;
+       int ret = 0;
+       mutex_lock(&sockmap_mutex);
+       /* If old prog is not NULL, ensure old prog is the same as link->prog. */
+       if (old && link->prog != old) {
+               ret = -EPERM;
+               goto out;
+       }
+       /* Ensure link->prog has the same type/attach_type as the new prog. */
+       if (link->prog->type != prog->type ||
+           link->prog->expected_attach_type != prog->expected_attach_type) {
+               ret = -EINVAL;
+               goto out;
+       }
+       ret = sock_map_prog_link_lookup(sockmap_link->map, &pprog, &plink,
+                                       sockmap_link->attach_type);
+       if (ret)
+               goto out;
+       /* return error if the stored bpf_link does not match the incoming bpf_link. */
+       if (link != *plink) {
+               ret = -EBUSY;
+               goto out;
+       }
+       if (old) {
+               ret = psock_replace_prog(pprog, prog, old);
+               if (ret)
+                       goto out;
+       } else {
+               psock_set_prog(pprog, prog);
+       }
+       bpf_prog_inc(prog);
+       old_link_prog = xchg(&link->prog, prog);
+       bpf_prog_put(old_link_prog);
+ out:
+       mutex_unlock(&sockmap_mutex);
+       return ret;
+ }
+ static u32 sock_map_link_get_map_id(const struct sockmap_link *sockmap_link)
+ {
+       u32 map_id = 0;
+       mutex_lock(&sockmap_mutex);
+       if (sockmap_link->map)
+               map_id = sockmap_link->map->id;
+       mutex_unlock(&sockmap_mutex);
+       return map_id;
+ }
+ static int sock_map_link_fill_info(const struct bpf_link *link,
+                                  struct bpf_link_info *info)
+ {
+       const struct sockmap_link *sockmap_link = container_of(link, struct sockmap_link, link);
+       u32 map_id = sock_map_link_get_map_id(sockmap_link);
+       info->sockmap.map_id = map_id;
+       info->sockmap.attach_type = sockmap_link->attach_type;
+       return 0;
+ }
+ static void sock_map_link_show_fdinfo(const struct bpf_link *link,
+                                     struct seq_file *seq)
+ {
+       const struct sockmap_link *sockmap_link = container_of(link, struct sockmap_link, link);
+       u32 map_id = sock_map_link_get_map_id(sockmap_link);
+       seq_printf(seq, "map_id:\t%u\n", map_id);
+       seq_printf(seq, "attach_type:\t%u\n", sockmap_link->attach_type);
+ }
+ static const struct bpf_link_ops sock_map_link_ops = {
+       .release = sock_map_link_release,
+       .dealloc = sock_map_link_dealloc,
+       .detach = sock_map_link_detach,
+       .update_prog = sock_map_link_update_prog,
+       .fill_link_info = sock_map_link_fill_info,
+       .show_fdinfo = sock_map_link_show_fdinfo,
+ };
+ int sock_map_link_create(const union bpf_attr *attr, struct bpf_prog *prog)
+ {
+       struct bpf_link_primer link_primer;
+       struct sockmap_link *sockmap_link;
+       enum bpf_attach_type attach_type;
+       struct bpf_map *map;
+       int ret;
+       if (attr->link_create.flags)
+               return -EINVAL;
+       map = bpf_map_get_with_uref(attr->link_create.target_fd);
+       if (IS_ERR(map))
+               return PTR_ERR(map);
+       if (map->map_type != BPF_MAP_TYPE_SOCKMAP && map->map_type != BPF_MAP_TYPE_SOCKHASH) {
+               ret = -EINVAL;
+               goto out;
+       }
+       sockmap_link = kzalloc(sizeof(*sockmap_link), GFP_USER);
+       if (!sockmap_link) {
+               ret = -ENOMEM;
+               goto out;
+       }
+       attach_type = attr->link_create.attach_type;
+       bpf_link_init(&sockmap_link->link, BPF_LINK_TYPE_SOCKMAP, &sock_map_link_ops, prog);
+       sockmap_link->map = map;
+       sockmap_link->attach_type = attach_type;
+       ret = bpf_link_prime(&sockmap_link->link, &link_primer);
+       if (ret) {
+               kfree(sockmap_link);
+               goto out;
+       }
+       mutex_lock(&sockmap_mutex);
+       ret = sock_map_prog_update(map, prog, NULL, &sockmap_link->link, attach_type);
+       mutex_unlock(&sockmap_mutex);
+       if (ret) {
+               bpf_link_cleanup(&link_primer);
+               goto out;
+       }
+       /* Increase refcnt for the prog since when old prog is replaced with
+        * psock_replace_prog() and psock_set_prog() its refcnt will be decreased.
+        *
+        * Actually, we do not need to increase refcnt for the prog since bpf_link
+        * will hold a reference. But in order to have less complexity w.r.t.
+        * replacing/setting prog, let us increase the refcnt to make things simpler.
+        */
+       bpf_prog_inc(prog);
+       return bpf_link_settle(&link_primer);
+ out:
+       bpf_map_put_with_uref(map);
+       return ret;
+ }
  static int sock_map_iter_attach_target(struct bpf_prog *prog,
                                       union bpf_iter_link_info *linfo,
                                       struct bpf_iter_aux_info *aux)
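
sock_map_link_update_prog() above only swaps in a program with the same type and expected_attach_type, and when an explicit old program is given (BPF_F_REPLACE) it must match the program currently held by the link, otherwise -EPERM. A hedged user-space sketch of updating and then detaching such a link (the fds are assumed to exist; libbpf APIs as named):

/* Illustrative: replace the program behind an existing sockmap link,
 * then detach it. link_fd/new_prog_fd/old_prog_fd are placeholders.
 */
#include <linux/bpf.h>
#include <bpf/bpf.h>

int update_and_detach(int link_fd, int new_prog_fd, int old_prog_fd)
{
	LIBBPF_OPTS(bpf_link_update_opts, opts,
		.flags = BPF_F_REPLACE,		/* require old_prog_fd to match */
		.old_prog_fd = old_prog_fd,
	);
	int err;

	/* -EPERM if old_prog_fd is not the link's current prog,
	 * -EINVAL if new_prog_fd has a different type/attach type.
	 */
	err = bpf_link_update(link_fd, new_prog_fd, &opts);
	if (err)
		return err;

	/* Drops the map's reference and clears the slot via
	 * sock_map_link_release() above.
	 */
	return bpf_link_detach(link_fd);
}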
diff --combined net/ipv4/tcp_input.c
index 384fa5e2f0655389ac678b5d13553949598a9c74,d1115d7c3936aa86ecdaa396395104bcca652a9a..53e1150f706fdd00fc1908985ca6b4f201d7d717
@@@ -563,20 -563,19 +563,20 @@@ static void tcp_init_buffer_space(struc
        maxwin = tcp_full_space(sk);
  
        if (tp->window_clamp >= maxwin) {
 -              tp->window_clamp = maxwin;
 +              WRITE_ONCE(tp->window_clamp, maxwin);
  
                if (tcp_app_win && maxwin > 4 * tp->advmss)
 -                      tp->window_clamp = max(maxwin -
 -                                             (maxwin >> tcp_app_win),
 -                                             4 * tp->advmss);
 +                      WRITE_ONCE(tp->window_clamp,
 +                                 max(maxwin - (maxwin >> tcp_app_win),
 +                                     4 * tp->advmss));
        }
  
        /* Force reservation of one segment. */
        if (tcp_app_win &&
            tp->window_clamp > 2 * tp->advmss &&
            tp->window_clamp + tp->advmss > maxwin)
 -              tp->window_clamp = max(2 * tp->advmss, maxwin - tp->advmss);
 +              WRITE_ONCE(tp->window_clamp,
 +                         max(2 * tp->advmss, maxwin - tp->advmss));
  
        tp->rcv_ssthresh = min(tp->rcv_ssthresh, tp->window_clamp);
        tp->snd_cwnd_stamp = tcp_jiffies32;
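
The window_clamp stores above are converted to WRITE_ONCE() because the field can now be read without the socket lock; the annotation pair prevents load/store tearing. A generic illustration of the reader side of that pairing (the helper name is hypothetical, not from this commit):

/* Illustrative only: how a lockless reader pairs with the WRITE_ONCE()
 * annotations added above.
 */
static inline u32 tcp_read_window_clamp(const struct tcp_sock *tp)
{
	/* READ_ONCE() prevents load tearing against concurrent
	 * WRITE_ONCE(tp->window_clamp, ...) updates done under the socket lock.
	 */
	return READ_ONCE(tp->window_clamp);
}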
@@@ -774,8 -773,7 +774,8 @@@ void tcp_rcv_space_adjust(struct sock *
                        WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
  
                        /* Make the window clamp follow along.  */
 -                      tp->window_clamp = tcp_win_from_space(sk, rcvbuf);
 +                      WRITE_ONCE(tp->window_clamp,
 +                                 tcp_win_from_space(sk, rcvbuf));
                }
        }
        tp->rcvq_space.space = copied;
@@@ -913,7 -911,7 +913,7 @@@ static void tcp_rtt_estimator(struct so
                        tp->rtt_seq = tp->snd_nxt;
                        tp->mdev_max_us = tcp_rto_min_us(sk);
  
-                       tcp_bpf_rtt(sk);
+                       tcp_bpf_rtt(sk, mrtt_us, srtt);
                }
        } else {
                /* no previous measure. */
                tp->mdev_max_us = tp->rttvar_us;
                tp->rtt_seq = tp->snd_nxt;
  
-               tcp_bpf_rtt(sk);
+               tcp_bpf_rtt(sk, mrtt_us, srtt);
        }
        tp->srtt_us = max(1U, srtt);
  }
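
tcp_bpf_rtt() now forwards the latest RTT sample (mrtt_us) and the smoothed value (srtt) to the BPF_SOCK_OPS_RTT_CB callback. A sockops program could read them from the args array; a minimal sketch, assuming the call-site ordering above maps to args[0]/args[1] (verify the exact indices and units against the uapi header and the selftest):

/* Hedged sketch of a sockops program consuming the extended RTT callback. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

__u32 last_mrtt_us;
__u32 last_srtt;

SEC("sockops")
int rtt_cb(struct bpf_sock_ops *skops)
{
	if (skops->op == BPF_SOCK_OPS_RTT_CB) {
		/* Assumed ordering, mirroring tcp_bpf_rtt(sk, mrtt_us, srtt). */
		last_mrtt_us = skops->args[0];
		last_srtt = skops->args[1];
	}
	return 1;
}

char LICENSE[] SEC("license") = "GPL";

Note that the RTT callback only fires once the program has enabled it on the connection, typically by setting BPF_SOCK_OPS_RTT_CB_FLAG via bpf_sock_ops_cb_flags_set().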
@@@ -4805,8 -4803,10 +4805,8 @@@ static bool tcp_try_coalesce(struct soc
        if (!mptcp_skb_can_collapse(to, from))
                return false;
  
 -#ifdef CONFIG_TLS_DEVICE
 -      if (from->decrypted != to->decrypted)
 +      if (skb_cmp_decrypted(from, to))
                return false;
 -#endif
  
        if (!skb_try_coalesce(to, from, fragstolen, &delta))
                return false;
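
The open-coded CONFIG_TLS_DEVICE checks on skb->decrypted in this and the later hunks are replaced by the skb_cmp_decrypted()/skb_copy_decrypted() helpers, which keep the ifdef in one place. Roughly, the comparison helper has this shape (an approximation; the real definition lives in include/linux/skbuff.h and may differ in detail):

/* Approximate shape of the helper used above. */
static inline int skb_cmp_decrypted(const struct sk_buff *skb1,
				    const struct sk_buff *skb2)
{
#ifdef CONFIG_TLS_DEVICE
	return skb2->decrypted - skb1->decrypted;
#else
	return 0;
#endif
}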
@@@ -5174,16 -5174,6 +5174,16 @@@ static void tcp_data_queue(struct sock 
         */
        if (TCP_SKB_CB(skb)->seq == tp->rcv_nxt) {
                if (tcp_receive_window(tp) == 0) {
 +                      /* Some stacks are known to send bare FIN packets
 +                       * in a loop even if we send RWIN 0 in our ACK.
 +                       * Accepting this FIN does not hurt memory pressure
 +                       * because the FIN flag will simply be merged to the
 +                       * receive queue tail skb in most cases.
 +                       */
 +                      if (!skb->len &&
 +                          (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN))
 +                              goto queue_and_out;
 +
                        reason = SKB_DROP_REASON_TCP_ZEROWINDOW;
                        NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPZEROWINDOWDROP);
                        goto out_of_window;
@@@ -5198,7 -5188,7 +5198,7 @@@ queue_and_out
                        inet_csk_schedule_ack(sk);
                        sk->sk_data_ready(sk);
  
 -                      if (skb_queue_len(&sk->sk_receive_queue)) {
 +                      if (skb_queue_len(&sk->sk_receive_queue) && skb->len) {
                                reason = SKB_DROP_REASON_PROTO_MEM;
                                NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVQDROP);
                                goto drop;
@@@ -5385,7 -5375,9 +5385,7 @@@ restart
                        break;
  
                memcpy(nskb->cb, skb->cb, sizeof(skb->cb));
 -#ifdef CONFIG_TLS_DEVICE
 -              nskb->decrypted = skb->decrypted;
 -#endif
 +              skb_copy_decrypted(nskb, skb);
                TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(nskb)->end_seq = start;
                if (list)
                        __skb_queue_before(list, skb, nskb);
                                    !mptcp_skb_can_collapse(nskb, skb) ||
                                    (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
                                        goto end;
 -#ifdef CONFIG_TLS_DEVICE
 -                              if (skb->decrypted != nskb->decrypted)
 +                              if (skb_cmp_decrypted(skb, nskb))
                                        goto end;
 -#endif
                        }
                }
        }
@@@ -6432,8 -6426,7 +6432,8 @@@ consume
  
                if (!tp->rx_opt.wscale_ok) {
                        tp->rx_opt.snd_wscale = tp->rx_opt.rcv_wscale = 0;
 -                      tp->window_clamp = min(tp->window_clamp, 65535U);
 +                      WRITE_ONCE(tp->window_clamp,
 +                                 min(tp->window_clamp, 65535U));
                }
  
                if (tp->rx_opt.saw_tstamp) {
@@@ -7006,7 -6999,7 +7006,7 @@@ EXPORT_SYMBOL(inet_reqsk_alloc)
  /*
   * Return true if a syncookie should be sent
   */
 -static bool tcp_syn_flood_action(const struct sock *sk, const char *proto)
 +static bool tcp_syn_flood_action(struct sock *sk, const char *proto)
  {
        struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
        const char *msg = "Dropping request";
@@@ -7107,6 -7100,7 +7107,6 @@@ int tcp_conn_request(struct request_soc
                     struct sock *sk, struct sk_buff *skb)
  {
        struct tcp_fastopen_cookie foc = { .len = -1 };
 -      __u32 isn = TCP_SKB_CB(skb)->tcp_tw_isn;
        struct tcp_options_received tmp_opt;
        struct tcp_sock *tp = tcp_sk(sk);
        struct net *net = sock_net(sk);
        struct dst_entry *dst;
        struct flowi fl;
        u8 syncookies;
 +      u32 isn;
  
  #ifdef CONFIG_TCP_AO
        const struct tcp_ao_hdr *aoh;
  #endif
  
 -      syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
 +      isn = __this_cpu_read(tcp_tw_isn);
 +      if (isn) {
 +              /* TW buckets are converted to open requests without
 +               * limitations, they conserve resources and peer is
 +               * evidently real one.
 +               */
 +              __this_cpu_write(tcp_tw_isn, 0);
 +      } else {
 +              syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
  
 -      /* TW buckets are converted to open requests without
 -       * limitations, they conserve resources and peer is
 -       * evidently real one.
 -       */
 -      if ((syncookies == 2 || inet_csk_reqsk_queue_is_full(sk)) && !isn) {
 -              want_cookie = tcp_syn_flood_action(sk, rsk_ops->slab_name);
 -              if (!want_cookie)
 -                      goto drop;
 +              if (syncookies == 2 || inet_csk_reqsk_queue_is_full(sk)) {
 +                      want_cookie = tcp_syn_flood_action(sk,
 +                                                         rsk_ops->slab_name);
 +                      if (!want_cookie)
 +                              goto drop;
 +              }
        }
  
        if (sk_acceptq_is_full(sk)) {
        /* Note: tcp_v6_init_req() might override ir_iif for link locals */
        inet_rsk(req)->ir_iif = inet_request_bound_dev_if(sk, skb);
  
 -      dst = af_ops->route_req(sk, skb, &fl, req);
 +      dst = af_ops->route_req(sk, skb, &fl, req, isn);
        if (!dst)
                goto drop_and_free;
  
index f06c527eee34a1dc921dea35fa8574ed42ee5245,ca8b73f7c774e7aae09a264b7a56c180cc457de8..82247aeef8571442e5c7e88ef3c22632e175dd74
@@@ -102,6 -102,7 +102,6 @@@ TEST_PROGS := test_kmod.sh 
        test_xdp_redirect_multi.sh \
        test_xdp_meta.sh \
        test_xdp_veth.sh \
 -      test_offload.py \
        test_sock_addr.sh \
        test_tunnel.sh \
        test_lwt_seg6local.sh \
@@@ -135,7 -136,18 +135,7 @@@ TEST_GEN_PROGS_EXTENDED = test_sock_add
  
  TEST_GEN_FILES += liburandom_read.so urandom_read sign-file uprobe_multi
  
 -# Emit succinct information message describing current building step
 -# $1 - generic step name (e.g., CC, LINK, etc);
 -# $2 - optional "flavor" specifier; if provided, will be emitted as [flavor];
 -# $3 - target (assumed to be file); only file name will be emitted;
 -# $4 - optional extra arg, emitted as-is, if provided.
 -ifeq ($(V),1)
 -Q =
 -msg =
 -else
 -Q = @
 -msg = @printf '  %-8s%s %s%s\n' "$(1)" "$(if $(2), [$(2)])" "$(notdir $(3))" "$(if $(4), $(4))";
 -MAKEFLAGS += --no-print-directory
 +ifneq ($(V),1)
  submake_extras := feature_display=0
  endif
  
@@@ -278,11 -290,12 +278,12 @@@ UNPRIV_HELPERS  := $(OUTPUT)/unpriv_hel
  TRACE_HELPERS := $(OUTPUT)/trace_helpers.o
  JSON_WRITER           := $(OUTPUT)/json_writer.o
  CAP_HELPERS   := $(OUTPUT)/cap_helpers.o
+ NETWORK_HELPERS := $(OUTPUT)/network_helpers.o
  
  $(OUTPUT)/test_dev_cgroup: $(CGROUP_HELPERS) $(TESTING_HELPERS)
  $(OUTPUT)/test_skb_cgroup_id_user: $(CGROUP_HELPERS) $(TESTING_HELPERS)
  $(OUTPUT)/test_sock: $(CGROUP_HELPERS) $(TESTING_HELPERS)
- $(OUTPUT)/test_sock_addr: $(CGROUP_HELPERS) $(TESTING_HELPERS)
+ $(OUTPUT)/test_sock_addr: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(NETWORK_HELPERS)
  $(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS)
  $(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS)
  $(OUTPUT)/get_cgroup_id_user: $(CGROUP_HELPERS) $(TESTING_HELPERS)
@@@ -443,7 -456,7 +444,7 @@@ LINKED_SKELS := test_static_linked.skel
  LSKELS := fentry_test.c fexit_test.c fexit_sleep.c atomics.c          \
        trace_printk.c trace_vprintk.c map_ptr_kern.c                   \
        core_kern.c core_kern_overflow.c test_ringbuf.c                 \
-       test_ringbuf_map_key.c
+       test_ringbuf_n.c test_ringbuf_map_key.c
  
  # Generate both light skeleton and libbpf skeleton for these
  LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \
@@@ -646,7 -659,7 +647,7 @@@ $(eval $(call DEFINE_TEST_RUNNER,test_p
  # Define test_progs-cpuv4 test runner.
  ifneq ($(CLANG_CPUV4),)
  TRUNNER_BPF_BUILD_RULE := CLANG_CPUV4_BPF_BUILD_RULE
- TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+ TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) -DENABLE_ATOMICS_TESTS
  $(eval $(call DEFINE_TEST_RUNNER,test_progs,cpuv4))
  endif
  
@@@ -683,7 -696,7 +684,7 @@@ $(OUTPUT)/test_verifier: test_verifier.
  
  # Include find_bit.c to compile xskxceiver.
  EXTRA_SRC := $(TOOLSDIR)/lib/find_bit.c
- $(OUTPUT)/xskxceiver: $(EXTRA_SRC) xskxceiver.c xskxceiver.h $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT)
+ $(OUTPUT)/xskxceiver: $(EXTRA_SRC) xskxceiver.c xskxceiver.h $(OUTPUT)/network_helpers.o $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT)
        $(call msg,BINARY,,$@)
        $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
  
@@@ -717,6 -730,7 +718,7 @@@ $(OUTPUT)/bench_local_storage_rcu_tasks
  $(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.skel.h
  $(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h
  $(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h
+ $(OUTPUT)/bench_bpf_crypto.o: $(OUTPUT)/crypto_bench.skel.h
  $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
  $(OUTPUT)/bench: LDLIBS += -lm
  $(OUTPUT)/bench: $(OUTPUT)/bench.o \
                 $(OUTPUT)/bench_bpf_hashmap_lookup.o \
                 $(OUTPUT)/bench_local_storage_create.o \
                 $(OUTPUT)/bench_htab_mem.o \
+                $(OUTPUT)/bench_bpf_crypto.o \
                 #
        $(call msg,BINARY,,$@)
        $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
@@@ -747,7 -762,7 +750,7 @@@ $(OUTPUT)/veristat: $(OUTPUT)/veristat.
  
  $(OUTPUT)/uprobe_multi: uprobe_multi.c
        $(call msg,BINARY,,$@)
-       $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
+       $(Q)$(CC) $(CFLAGS) -O0 $(LDFLAGS) $^ $(LDLIBS) -o $@
  
  EXTRA_CLEAN := $(SCRATCH_DIR) $(HOST_SCRATCH_DIR)                     \
        prog_tests/tests.h map_tests/tests.h verifier/tests.h           \