================================
Documentation for /proc/sys/net/
================================

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.

Table : Subdirectories in /proc/sys/net

========= =================== = ========== ==================
Directory Content             Directory  Content
========= =================== = ========== ==================
core      General parameter     appletalk  Appletalk protocol
unix      Unix domain sockets   netrom     NET/ROM
802       E802 protocol         ax25       AX25
ethernet  Ethernet protocol     rose       X.25 PLP layer
ipv4      IP version 4          x25        X.25 protocol
bridge    Bridging              decnet     DEC net
ipv6      IP version 6          tipc       TIPC
========= =================== = ========== ==================

1. /proc/sys/net/core - Network core options
=============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows executing bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has been
loaded through bpf(2) and passed the in-kernel verifier, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:

And the older cBPF JIT supported on the following archs:

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc, but not the aforementioned eBPF
programs loaded through bpf(2).

- 0 - disable the JIT (default value)
- 1 - enable the JIT
- 2 - enable the JIT and ask the compiler to emit traces on kernel log.
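
The JIT can be toggled at run time; a minimal sketch, assuming a root shell
and the net.core paths documented in this file::

  # enable the BPF JIT compiler
  sysctl -w net.core.bpf_jit_enable=1
  # read the current setting back
  sysctl net.core.bpf_jit_enable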

bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. It is supported by the
eBPF JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

- 0 - disable JIT hardening (default value)
- 1 - enable JIT hardening for unprivileged users only
- 2 - enable JIT hardening for all users
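
For example, hardening could be restricted to unprivileged users only
(value 1 from the list above)::

  sysctl -w net.core.bpf_jit_harden=1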

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, then compiled images are at addresses
unknown to the kernel, meaning they neither show up in traces nor
in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

- 0 - disable JIT kallsyms export (default value)
- 1 - enable JIT kallsyms export for privileged users only
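
A hedged example of how the export can be inspected; the bpf_prog_* symbol
naming shown in the comment is only what current kernels typically emit for
JITed programs, and symbols appear only once BPF programs are loaded::

  sysctl -w net.core.bpf_jit_kallsyms=1
  # JITed program symbols show up as bpf_prog_<tag> entries
  grep bpf_prog_ /proc/kallsyms | head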

bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.
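
The currently configured limit can simply be read back, for example::

  cat /proc/sys/net/core/bpf_jit_limit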

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or GRO_HW,
a hardware aggregated packet is counted as one packet in this context.

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing is competing with the registered NAPI poll function
of the driver for the per softirq cycle netdev_budget. This parameter influences
the proportion of the configured netdev_budget that is spent on RPS based packet
processing during RX softirq cycles. It is further meant for making the current
dev_weight adaptable for asymmetric CPU needs on the RX/TX side of the network stack
(see dev_weight_tx_bias). It is effective on a per-CPU basis. The determination is
based on dev_weight and is calculated multiplicatively (dev_weight * dev_weight_rx_bias).

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX softirq cycle.
Effective on a per-CPU basis. Allows scaling of the current dev_weight for asymmetric
net stack processing needs. Be careful to avoid making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).
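
As an illustration of the multiplicative relation (the values below are only
hypothetical, not recommendations): with dev_weight = 64 and
dev_weight_rx_bias = 1, a CPU may process up to 64 * 1 = 64 RPS packets per
RX softirq cycle, and likewise 64 * dev_weight_tx_bias packets per TX cycle.
The current values can be read together::

  sysctl net.core.dev_weight net.core.dev_weight_rx_bias net.core.dev_weight_tx_bias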

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
default to noqueue.
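
For example, fq_codel could be made the default for qdiscs created from now
on; existing interfaces generally keep their current qdisc until it is
recreated (e.g. via tc or an interface restart)::

  sysctl -w net.core.default_qdisc=fq_codel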

busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.
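
A sketch of the global enablement mentioned above, using the 50 microsecond
value the text recommends::

  sysctl -w net.core.busy_read=50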

busy_poll
---------

Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
Recommended value depends on the number of sockets you poll on.
For several sockets 50, for several hundreds 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.

Will increase power usage.
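
Following the guidance above, a host polling on a handful of sockets might
use (the values are examples, not defaults)::

  sysctl -w net.core.busy_poll=50
  sysctl -w net.core.busy_read=50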

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.
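
The four buffer limits can be inspected or raised together; the 4 MiB figure
below is purely illustrative::

  sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max
  sysctl -w net.core.rmem_max=4194304
  sysctl -w net.core.wmem_max=4194304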

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.
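
The current rate-limit settings can be read back (the paths are assumed from
the entry names above)::

  sysctl net.core.message_cost net.core.message_burst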

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network like duplicate address or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.
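
A sketch of raising both limits; the numbers are arbitrary examples, and the
time_squeeze column of /proc/net/softnet_stat is one common hint that the
budget is being exhausted::

  cat /proc/net/softnet_stat
  sysctl -w net.core.netdev_budget=600
  sysctl -w net.core.netdev_budget_usecs=4000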

netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than the kernel can process them.
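
For example, to enlarge the backlog queue (again only an illustrative
value)::

  sysctl -w net.core.netdev_max_backlog=2000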

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40 bytes host key that is
set by default.

Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver ever called the netdev_rss_key_fill()
function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
  ...
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might add some delay to the timestamps,
but permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data.

fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created when a new
network namespace is created, if the corresponding tunnel is present
in the initial network namespace.
If set to 1, these devices are not automatically created, and
user space is responsible for creating them if needed.

Default : 0 (for compatibility reasons)
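
For example, to suppress fallback tunnel devices in namespaces created from
now on (the namespace name "test" below is just an illustration)::

  sysctl -w net.core.fb_tunnels_only_for_init_net=1
  ip netns add test
  ip -n test link show    # no tunl0/gre0/... fallback devices expected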

devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values.

Default : 0 (for compatibility reasons)
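
As a sketch, to have both address families inherit the current init_net
settings in namespaces created afterwards::

  sysctl -w net.core.devconf_inherit_init_net=1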

2. /proc/sys/net/unix - Parameters for Unix domain sockets
-----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in a Unix domain
socket's buffer. It will not take effect unless PF_UNIX flag is specified.

3. /proc/sys/net/ipv4 - IPv4 settings
--------------------------------------

Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt for
descriptions of these entries.

4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the uid
owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to the
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800 68043600

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication has
already been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.
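
For instance, to let failed updates linger on the defer queue for up to one
second (an arbitrary example value)::

  sysctl -w net.tipc.named_timeout=1000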