D: Soundblaster driver fixes, ISAPnP quirk
S: California, USA
+N: Jarkko Lavinen
+D: OMAP MMC support
+
N: Jonathan Layes
D: ARPD support
S: North Little Rock, Arkansas 72115
S: USA
+N: Christopher Li
+D: Sparse maintainer 2009 - 2018
+
N: Stephan Linz
causing system reset or hang due to sending
INIT from AP to BSP.
- disable_counter_freezing [HW]
+ perf_v4_pmi= [X86,INTEL]
+ Format: <bool>
Disable Intel PMU counter freezing feature.
The feature only exists starting from
Arch Perfmon v4 (Skylake and newer).
prevent spurious wakeup);
n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
pause after every control message);
+ o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
+ delay after resetting its port);
Example: quirks=0781:5580:bk,0a5c:5834:gij
usbhid.mousepoll=
a governor ``sysfs`` interface to it. Next, the governor is started by
invoking its ``->start()`` callback.
-That callback it expected to register per-CPU utilization update callbacks for
+That callback is expected to register per-CPU utilization update callbacks for
all of the online CPUs belonging to the given policy with the CPU scheduler.
The utilization update callbacks will be invoked by the CPU scheduler on
important events, like task enqueue and dequeue, on every iteration of the
The security list is not a disclosure channel. For that, see Coordination
below.
-Once a robust fix has been developed, our preference is to release the
-fix in a timely fashion, treating it no differently than any of the other
-thousands of changes and fixes the Linux kernel project releases every
-month.
-
-However, at the request of the reporter, we will postpone releasing the
-fix for up to 5 business days after the date of the report or after the
-embargo has lifted; whichever comes first. The only exception to that
-rule is if the bug is publicly known, in which case the preference is to
-release the fix as soon as it's available.
+Once a robust fix has been developed, the release process starts. Fixes
+for publicly known bugs are released immediately.
+
+Although our preference is to release fixes for publicly undisclosed bugs
+as soon as they become available, this may be postponed at the request of
+the reporter or an affected party for up to 7 calendar days from the start
+of the release process, with an exceptional extension to 14 calendar days
+if it is agreed that the criticality of the bug requires more time. The
+only valid reason for deferring the publication of a fix is to accommodate
+the logistics of QA and large scale rollouts which require release
+coordination.
Whilst embargoed information may be shared with trusted individuals in
order to develop a fix, such information will not be published alongside
new entry and return the previous entry stored at that index. You can
use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
``NULL`` entry. There is no difference between an entry that has never
-been stored to and one that has most recently had ``NULL`` stored to it.
+been stored to, one that has been erased and one that has most recently
+had ``NULL`` stored to it.
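
For example, a minimal sketch of these calls (assuming an XArray defined
with :c:func:`DEFINE_XARRAY`)::

    DEFINE_XARRAY(array);

    void *old = xa_store(&array, index, ptr, GFP_KERNEL);
    /* old is whatever was previously stored at index, or NULL */
    old = xa_erase(&array, index);
    /* same effect as xa_store(&array, index, NULL, GFP_KERNEL) */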
You can conditionally replace an entry at an index by using
:c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
indices. Storing into one index may result in the entry retrieved by
some, but not all of the other indices changing.
+Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+will not need to allocate memory. The :c:func:`xa_reserve` function
+will store a reserved entry at the indicated index. Users of the normal
+API will see this entry as containing ``NULL``. If you do not need to
+use the reserved entry, you can call :c:func:`xa_release` to remove the
+unused entry. If another user has stored to the entry in the meantime,
+:c:func:`xa_release` will do nothing; if instead you want the entry to
+become ``NULL``, you should use :c:func:`xa_erase`.
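+
+For example, a minimal sketch of the reserve pattern (``use_it`` stands in
+for the caller's decision)::
+
+    /* Ensure a later store at this index will not need to allocate. */
+    if (xa_reserve(&array, index, GFP_KERNEL))
+        return -ENOMEM;
+    ...
+    if (use_it)
+        xa_store(&array, index, ptr, GFP_KERNEL);
+    else
+        xa_release(&array, index);  /* removes the entry only if still NULL */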
+
+If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+will return ``true``.
+
Finally, you can remove all entries from an XArray by calling
:c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
to free the entries first. You can do this by iterating over all present
entries in the XArray using the :c:func:`xa_for_each` iterator.
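
For example, a sketch of freeing all entries before destroying the array
(note the ``xa_for_each`` argument list has varied between kernel versions;
this assumes the form taking a maximum index and a filter)::

    void *entry;
    unsigned long index;

    xa_for_each(&array, entry, index, ULONG_MAX, XA_PRESENT)
        kfree(entry);
    xa_destroy(&array);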
-ID assignment
--------------
+Allocating XArrays
+------------------
+
+If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+the XArray changes to track whether entries are in use or not.
You can call :c:func:`xa_alloc` to store the entry at any unused index
in the XArray. If you need to modify the array from interrupt context,
you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
-interrupts while allocating the ID. Unlike :c:func:`xa_store`, allocating
-a ``NULL`` pointer does not delete an entry. Instead it reserves an
-entry like :c:func:`xa_reserve` and you can release it using either
-:c:func:`xa_erase` or :c:func:`xa_release`. To use ID assignment, the
-XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
-by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+interrupts while allocating the ID.
+
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+will mark the entry as being allocated. Unlike a normal XArray, storing
+``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+you only want to free the entry if it's ``NULL``).
+
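+For example, a minimal sketch of ID assignment (the ``xa_alloc`` argument
+list has changed across kernel releases; this assumes the form that takes
+the maximum ID directly)::
+
+    u32 id;
+
+    if (xa_alloc(&array, &id, UINT_MAX, ptr, GFP_KERNEL))
+        return -1;  /* out of memory, or no free index */
+    /* ptr is now stored at index id */
+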
+You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+is used to track whether an entry is free or not. The other marks are
+available for your use.
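+
+For example, the remaining marks can be used as usual::
+
+    bool marked;
+
+    xa_set_mark(&array, index, XA_MARK_1);
+    marked = xa_get_mark(&array, index, XA_MARK_1);  /* now true */
+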
Memory allocation
-----------------
Takes xa_lock internally:
* :c:func:`xa_store`
+ * :c:func:`xa_store_bh`
+ * :c:func:`xa_store_irq`
* :c:func:`xa_insert`
* :c:func:`xa_erase`
* :c:func:`xa_erase_bh`
* :c:func:`xa_alloc`
* :c:func:`xa_alloc_bh`
* :c:func:`xa_alloc_irq`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_reserve_bh`
+ * :c:func:`xa_reserve_irq`
* :c:func:`xa_destroy`
* :c:func:`xa_set_mark`
* :c:func:`xa_clear_mark`
* :c:func:`__xa_erase`
* :c:func:`__xa_cmpxchg`
* :c:func:`__xa_alloc`
+ * :c:func:`__xa_reserve`
* :c:func:`__xa_set_mark`
* :c:func:`__xa_clear_mark`
using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
in the interrupt handler. Some of the more common patterns have helper
-functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
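
For example, a sketch of the mixed-context pattern, using the advanced API
with the lock held explicitly::

    /* Process context */
    xa_lock_irq(&array);
    __xa_erase(&array, index);
    xa_unlock_irq(&array);

    /* Interrupt handler */
    xa_lock(&array);
    __xa_set_mark(&array, index, XA_MARK_1);
    xa_unlock(&array);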
Sometimes you need to protect access to the XArray with a mutex because
that lock sits above another mutex in the locking hierarchy. That does
- :c:func:`xa_is_zero`
- Zero entries appear as ``NULL`` through the Normal API, but occupy
an entry in the XArray which can be used to reserve the index for
- future use.
+ future use. This is used by allocating XArrays for allocated entries
+ which are ``NULL``.
Other internal entries may be added in the future. As far as possible, they
will be handled by :c:func:`xas_retry`.
This will give fine-grained information about all the CPU frequency
transitions. The cat output here is a two-dimensional matrix, where an entry
<i,j> (row i, column j) represents the count of transitions from
-Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
-Freq_j is in descending order with increasing columns. The output here also
-contains the actual freq values for each row and column for better readability.
+Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the order in which
+the driver initially provided the frequency table to the cpufreq core, so
+they may be sorted (ascending or descending) or unsorted. The output also
+contains the actual freq values for each row and column for better
+readability.
If the transition table is bigger than PAGE_SIZE, reading this will
return an -EFBIG error.
+++ /dev/null
-Generic ARM big LITTLE cpufreq driver's DT glue
------------------------------------------------
-
-This is DT specific glue layer for generic cpufreq driver for big LITTLE
-systems.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
-
-FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
-must be present contiguously. Generic DT driver will check only node 'x' for
-cpu:x.
-
-Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
- for details
-
-Optional properties:
-- clock-latency: Specify the possible maximum transition latency for clock,
- in unit of nanoseconds.
-
-Examples:
-
-cpus {
- #address-cells = <1>;
- #size-cells = <0>;
-
- cpu@0 {
- compatible = "arm,cortex-a15";
- reg = <0>;
- next-level-cache = <&L2>;
- operating-points = <
- /* kHz uV */
- 792000 1100000
- 396000 950000
- 198000 850000
- >;
- clock-latency = <61036>; /* two CLK32 periods */
- };
-
- cpu@1 {
- compatible = "arm,cortex-a15";
- reg = <1>;
- next-level-cache = <&L2>;
- };
-
- cpu@100 {
- compatible = "arm,cortex-a7";
- reg = <100>;
- next-level-cache = <&L2>;
- operating-points = <
- /* kHz uV */
- 792000 950000
- 396000 750000
- 198000 450000
- >;
- clock-latency = <61036>; /* two CLK32 periods */
- };
-
- cpu@101 {
- compatible = "arm,cortex-a7";
- reg = <101>;
- next-level-cache = <&L2>;
- };
-};
reg = <1>;
clocks = <&clk32m>;
interrupt-parent = <&gpio4>;
- interrupts = <13 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
vdd-supply = <&reg5v0>;
xceiver-supply = <&reg5v0>;
};
- compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
"renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
"renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
+ "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
"renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
"renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
"renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
"renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
"renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
"renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
+ "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
"renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
"renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
compatible device.
- "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+ "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+ compatible device.
When compatible with the generic version, nodes must list the
SoC-specific version corresponding to the platform first
followed by the generic version.
- reg: physical base address and size of the R-Car CAN register map.
- interrupts: interrupt specifier for the sole interrupt.
-- clocks: phandles and clock specifiers for 3 CAN clock inputs.
-- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+ devices.
+ phandles and clock specifiers for 3 CAN clock inputs for every other
+ SoC.
+- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+ 3 clock input name strings for every other SoC: "clkp1", "clkp2",
+ "can_clk".
- pinctrl-0: pin control group to be used for this controller.
- pinctrl-names: must be "default".
-Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
-compatible:
-In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
-and can be used by both CAN and CAN FD controller at the same time. It needs to
-be scaled to maximum frequency if any of these controllers use it. This is done
+Required properties for R8A7795, R8A7796 and R8A77965:
+For the denoted SoCs, "clkp2" can be the CANFD clock. This is a div6 clock
+and can be used by both the CAN and CAN FD controllers at the same time. It
+needs to be scaled to the maximum frequency if any of these controllers use
+it. This is done
using the below properties:
- assigned-clocks: phandle of clkp2(CANFD) clock.
Optional properties:
- renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
<0x0> (default) : Peripheral clock (clkp1)
- <0x1> : Peripheral clock (clkp2)
- <0x3> : Externally input clock
+ <0x1> : Peripheral clock (clkp2) (not supported by
+ RZ/G2 devices)
+ <0x3> : External input clock
Example
-------
Current Binding
---------------
-Switches are true Linux devices and can be probes by any means. Once
+Switches are true Linux devices and can be probed by any means. Once
probed, they register to the DSA framework, passing a node
pointer. This node is expected to fulfil the following binding, and
may contain additional properties as required by the device it is
Required properties:
- compatible: should be "socionext,uniphier-scssi"
- reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device
Example:
spi0: spi@54006000 {
compatible = "socionext,uniphier-scssi";
reg = <0x54006000 0x100>;
- #address-cells = <1>;
- #size-cells = <0>;
+ interrupts = <0 39 4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_spi0>;
clocks = <&peri_clk 11>;
resets = <&peri_rst 11>;
};
* REL_WHEEL, REL_HWHEEL:
- These codes are used for vertical and horizontal scroll wheels,
- respectively. The value is the number of "notches" moved on the wheel, the
- physical size of which varies by device. For high-resolution wheels (which
- report multiple events for each notch of movement, or do not have notches)
- this may be an approximation based on the high-resolution scroll events.
-
-* REL_WHEEL_HI_RES:
-
- - If a vertical scroll wheel supports high-resolution scrolling, this code
- will be emitted in addition to REL_WHEEL. The value is the (approximate)
- distance travelled by the user's finger, in microns.
+ respectively.
EV_ABS
------
the desired operation. Both drivers and applications must set the remainder of
the :c:type:`v4l2_format` structure to 0.
-.. _v4l2-meta-format:
+.. c:type:: v4l2_meta_format
.. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|
- ``sdr``
- Definition of a data format, see :ref:`pixfmt`, used by SDR
capture and output devices.
+ * -
+ - struct :c:type:`v4l2_meta_format`
+ - ``meta``
+ - Definition of a metadata format, see :ref:`meta-formats`, used by
+ metadata capture devices.
* -
- __u8
- ``raw_data``\ [200]
u32 rxrpc_kernel_check_life(struct socket *sock,
struct rxrpc_call *call);
+ void rxrpc_kernel_probe_life(struct socket *sock,
+ struct rxrpc_call *call);
- This returns a number that is updated when ACKs are received from the peer
- (notably including PING RESPONSE ACKs which we can elicit by sending PING
- ACKs to see if the call still exists on the server). The caller should
- compare the numbers of two calls to see if the call is still alive after
- waiting for a suitable interval.
+ The first function returns a number that is updated when ACKs are received
+ from the peer (notably including PING RESPONSE ACKs which we can elicit by
+ sending PING ACKs to see if the call still exists on the server). The
+ caller should compare the numbers of two calls to see if the call is still
+ alive after waiting for a suitable interval.
This allows the caller to work out if the server is still contactable and
if the call is still alive on the server whilst waiting for the server to
process a client operation.
- This function may transmit a PING ACK.
+ The second function causes a ping ACK to be transmitted to try to provoke
+ the peer into responding, which would then cause the value returned by the
+ first function to change. Note that this must be called in TASK_RUNNING
+ state.
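+
+     As an illustrative sketch (not an in-tree caller; handle_dead_call()
+     is a stand-in for the caller's recovery path), the pair might be used
+     like this:
+
+	u32 life = rxrpc_kernel_check_life(sock, call);
+
+	rxrpc_kernel_probe_life(sock, call);
+	/* ... wait a suitable interval in TASK_RUNNING state ... */
+	if (rxrpc_kernel_check_life(sock, call) == life)
+		/* no ACK arrived; assume the call is no longer live */
+		handle_dead_call(call);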
(*) Get reply timestamp.
8169 10/100/1000 GIGABIT ETHERNET DRIVER
S: Maintained
F: drivers/net/ethernet/realtek/r8169.c
F: include/dt-bindings/reset/altr,rst-mgr-a10sr.h
ALTERA TRIPLE SPEED ETHERNET DRIVER
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
S: Supported
-F: arch/x86/net/bpf_jit*
+F: arch/*/net/*
F: Documentation/networking/filter.txt
F: Documentation/bpf/
F: include/linux/bpf*
F: tools/lib/bpf/
F: tools/testing/selftests/bpf/
+BPF JIT for ARM
+S: Maintained
+F: arch/arm/net/
+
+BPF JIT for ARM64
+S: Supported
+F: arch/arm64/net/
+
+BPF JIT for MIPS (32-BIT AND 64-BIT)
+S: Maintained
+F: arch/mips/net/
+
+BPF JIT for NFP NICs
+S: Supported
+F: drivers/net/ethernet/netronome/nfp/bpf/
+
+BPF JIT for POWERPC (32-BIT AND 64-BIT)
+S: Maintained
+F: arch/powerpc/net/
+
+BPF JIT for S390
+S: Maintained
+F: arch/s390/net/
+X: arch/s390/net/pnet.c
+
+BPF JIT for SPARC (32-BIT AND 64-BIT)
+S: Maintained
+F: arch/sparc/net/
+
+BPF JIT for X86 32-BIT
+S: Maintained
+F: arch/x86/net/bpf_jit_comp32.c
+
+BPF JIT for X86 64-BIT
+S: Supported
+F: arch/x86/net/
+X: arch/x86/net/bpf_jit_comp32.c
+
BROADCOM B44 10/100 ETHERNET DRIVER
F: include/net/caif/
F: net/caif/
+CAKE QDISC
+S: Maintained
+F: net/sched/sch_cake.c
+
CALGARY x86-64 IOMMU
ETHERNET PHY LIBRARY
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-mdio
GPIO SUBSYSTEM
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
S: Maintained
F: Documentation/fb/intelfb.txt
F: drivers/video/fbdev/intelfb/
+INTEL GPIO DRIVERS
+S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
+F: drivers/gpio/gpio-ich.c
+F: drivers/gpio/gpio-intel-mid.c
+F: drivers/gpio/gpio-lynxpoint.c
+F: drivers/gpio/gpio-merrifield.c
+F: drivers/gpio/gpio-ml-ioh.c
+F: drivers/gpio/gpio-pch.c
+F: drivers/gpio/gpio-sch.c
+F: drivers/gpio/gpio-sodaville.c
+
INTEL GVT-g DRIVERS (Intel GPU Virtualization)
S: Supported
F: drivers/gpu/drm/i915/gvt/
-INTEL PMIC GPIO DRIVER
-S: Maintained
-F: drivers/gpio/gpio-*cove.c
-F: drivers/gpio/gpio-msic.c
-
INTEL HID EVENT DRIVER
S: Supported
F: drivers/platform/x86/intel_menlow.c
-INTEL MERRIFIELD GPIO DRIVER
-S: Maintained
-F: drivers/gpio/gpio-merrifield.c
-
INTEL MIC DRIVERS (mic)
F: arch/x86/include/asm/intel_pmc_ipc.h
F: arch/x86/include/asm/intel_punit_ipc.h
+INTEL PMIC GPIO DRIVERS
+S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
+F: drivers/gpio/gpio-*cove.c
+F: drivers/gpio/gpio-msic.c
+
INTEL MULTIFUNCTION PMIC DEVICE DRIVERS
S: Maintained
F: drivers/staging/media/omap4iss/
OMAP MMC SUPPORT
-S: Maintained
+S: Odd Fixes
F: drivers/mmc/host/omap.c
OMAP POWER MANAGEMENT SUPPORT
PIN CONTROLLER - INTEL
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
S: Maintained
F: drivers/pinctrl/intel/
F: Documentation/devicetree/bindings/sound/
F: Documentation/sound/soc/
F: sound/soc/
+F: include/dt-bindings/sound/
F: include/sound/soc*
SOUNDWIRE SUBSYSTEM
F: drivers/tty/vcc.c
SPARSE CHECKER
W: https://sparse.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
-T: git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
S: Maintained
F: include/linux/compiler.h
STABLE BRANCH
S: Supported
F: Documentation/process/stable-kernel-rules.rst
VERSION = 4
PATCHLEVEL = 20
SUBLEVEL = 0
-EXTRAVERSION = -rc2
-NAME = "People's Front"
+EXTRAVERSION = -rc4
+NAME = Shy Crocodile
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
#include <linux/kernel.h>
extern unsigned int processor_id;
+struct proc_info_list *lookup_processor(u32 midr);
#ifdef CONFIG_CPU_CP15
#define read_cpuid(reg) \
/*
* Don't change this structure - ASM code relies on it.
*/
-extern struct processor {
+struct processor {
/* MISC
* get data abort address/flags
*/
unsigned int suspend_size;
void (*do_suspend)(void *);
void (*do_resume)(void *);
-} processor;
+};
#ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
extern void cpu_proc_init(void);
extern void cpu_proc_fin(void);
extern int cpu_do_idle(void);
extern void cpu_do_suspend(void *);
extern void cpu_do_resume(void *);
#else
-#define cpu_proc_init processor._proc_init
-#define cpu_proc_fin processor._proc_fin
-#define cpu_reset processor.reset
-#define cpu_do_idle processor._do_idle
-#define cpu_dcache_clean_area processor.dcache_clean_area
-#define cpu_set_pte_ext processor.set_pte_ext
-#define cpu_do_switch_mm processor.switch_mm
-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend processor.do_suspend
-#define cpu_do_resume processor.do_resume
+extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised. We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f) cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+ unsigned int cpu = smp_processor_id();
+ *cpu_vtable[cpu] = *p;
+ WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+ cpu_vtable[0]->dcache_clean_area);
+ WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+ cpu_vtable[0]->set_pte_ext);
+}
+#else
+#define PROC_VTABLE(f) processor.f
+#define PROC_TABLE(f) processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+ processor = *p;
+}
+#endif
+
+#define cpu_proc_init PROC_VTABLE(_proc_init)
+#define cpu_check_bugs PROC_VTABLE(check_bugs)
+#define cpu_proc_fin PROC_VTABLE(_proc_fin)
+#define cpu_reset PROC_VTABLE(reset)
+#define cpu_do_idle PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend PROC_VTABLE(do_suspend)
+#define cpu_do_resume PROC_VTABLE(do_resume)
#endif
extern void cpu_resume(void);
void check_other_bugs(void)
{
#ifdef MULTI_CPU
- if (processor.check_bugs)
- processor.check_bugs();
+ if (cpu_check_bugs)
+ cpu_check_bugs();
#endif
}
unsigned long frame_pointer)
{
unsigned long return_hooker = (unsigned long) &return_to_handler;
- struct ftrace_graph_ent trace;
unsigned long old;
- int err;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
old = *parent;
*parent = return_hooker;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace)) {
+ if (function_graph_enter(old, self_addr, frame_pointer, NULL))
*parent = old;
- return;
- }
-
- err = ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, NULL);
- if (err == -EBUSY) {
- *parent = old;
- return;
- }
}
#ifdef CONFIG_DYNAMIC_FTRACE
#endif
.size __mmap_switched_data, . - __mmap_switched_data
+ __FINIT
+ .text
+
/*
* This provides a C-API version of __lookup_processor_type
*/
ldmfd sp!, {r4 - r6, r9, pc}
ENDPROC(lookup_processor_type)
- __FINIT
- .text
-
/*
* Read processor ID register (CP#15, CR0), and look up in the linker-built
* supported processor list. Note that we can't use the absolute addresses
#ifdef MULTI_CPU
struct processor processor __ro_after_init;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+ [0] = &processor,
+};
+#endif
#endif
#ifdef MULTI_TLB
struct cpu_tlb_fns cpu_tlb __ro_after_init;
}
#endif
-static void __init setup_processor(void)
+/*
+ * locate processor in the list of supported processor types. The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
{
- struct proc_info_list *list;
+ struct proc_info_list *list = lookup_processor_type(midr);
- /*
- * locate processor in the list of supported processor
- * types. The linker builds this table for us from the
- * entries in arch/arm/mm/proc-*.S
- */
- list = lookup_processor_type(read_cpuid_id());
if (!list) {
- pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
- read_cpuid_id());
- while (1);
+ pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+ smp_processor_id(), midr);
+ while (1)
+ /* can't use cpu_relax() here as it may require MMU setup */;
}
+ return list;
+}
+
+static void __init setup_processor(void)
+{
+ unsigned int midr = read_cpuid_id();
+ struct proc_info_list *list = lookup_processor(midr);
+
cpu_name = list->cpu_name;
__cpu_architecture = __get_cpu_architecture();
-#ifdef MULTI_CPU
- processor = *list->proc;
-#endif
+ init_proc_vtable(list->proc);
#ifdef MULTI_TLB
cpu_tlb = *list->tlb;
#endif
#endif
pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
- cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+ list->cpu_name, midr, midr & 15,
proc_arch[cpu_architecture()], get_cr());
snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
+#include <asm/procinfo.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
#endif
}
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+ if (!cpu_vtable[cpu])
+ cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+ return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+ init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+ return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
int __cpu_up(unsigned int cpu, struct task_struct *idle)
{
int ret;
if (!smp_ops.smp_boot_secondary)
return -ENOSYS;
+ ret = secondary_biglittle_prepare(cpu);
+ if (ret)
+ return ret;
+
/*
* We need to tell the secondary core where to find
* its stack and the page tables.
struct mm_struct *mm = &init_mm;
unsigned int cpu;
+ secondary_biglittle_init();
+
/*
* The identity mapping is uncached (strongly ordered), so
* switch away from it before attempting any exclusive accesses.
return 0;
}
-#else
-static inline int omapdss_init_fbdev(void)
+
+static const char * const omapdss_compat_names[] __initconst = {
+ "ti,omap2-dss",
+ "ti,omap3-dss",
+ "ti,omap4-dss",
+ "ti,omap5-dss",
+ "ti,dra7-dss",
+};
+
+static struct device_node * __init omapdss_find_dss_of_node(void)
{
- return 0;
+ struct device_node *node;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
+ node = of_find_compatible_node(NULL, NULL,
+ omapdss_compat_names[i]);
+ if (node)
+ return node;
+ }
+
+ return NULL;
}
+
+static int __init omapdss_init_of(void)
+{
+ int r;
+ struct device_node *node;
+ struct platform_device *pdev;
+
+ /* only create dss helper devices if dss is enabled in the .dts */
+
+ node = omapdss_find_dss_of_node();
+ if (!node)
+ return 0;
+
+ if (!of_device_is_available(node))
+ return 0;
+
+ pdev = of_find_device_by_node(node);
+
+ if (!pdev) {
+ pr_err("Unable to find DSS platform device\n");
+ return -ENODEV;
+ }
+
+ r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+ if (r) {
+ pr_err("Unable to populate DSS submodule devices\n");
+ return r;
+ }
+
+ return omapdss_init_fbdev();
+}
+omap_device_initcall(omapdss_init_of);
#endif /* CONFIG_FB_OMAP2 */
static void dispc_disable_outputs(void)
return r;
}
-
-static const char * const omapdss_compat_names[] __initconst = {
- "ti,omap2-dss",
- "ti,omap3-dss",
- "ti,omap4-dss",
- "ti,omap5-dss",
- "ti,dra7-dss",
-};
-
-static struct device_node * __init omapdss_find_dss_of_node(void)
-{
- struct device_node *node;
- int i;
-
- for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
- node = of_find_compatible_node(NULL, NULL,
- omapdss_compat_names[i]);
- if (node)
- return node;
- }
-
- return NULL;
-}
-
-static int __init omapdss_init_of(void)
-{
- int r;
- struct device_node *node;
- struct platform_device *pdev;
-
- /* only create dss helper devices if dss is enabled in the .dts */
-
- node = omapdss_find_dss_of_node();
- if (!node)
- return 0;
-
- if (!of_device_is_available(node))
- return 0;
-
- pdev = of_find_device_by_node(node);
-
- if (!pdev) {
- pr_err("Unable to find DSS platform device\n");
- return -ENODEV;
- }
-
- r = of_platform_populate(node, NULL, NULL, &pdev->dev);
- if (r) {
- pr_err("Unable to populate DSS submodule devices\n");
- return r;
- }
-
- return omapdss_init_fbdev();
-}
-omap_device_initcall(omapdss_init_of);
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
- if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
- if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
- processor.switch_mm = cpu_v7_hvc_switch_mm;
+ cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
- processor.switch_mm = cpu_v7_smc_switch_mm;
+ cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break;
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
- return;
-
-bl_error:
- pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
- cpu);
}
#else
static void cpu_v7_spectre_init(void)
*/
ufp_exc->fpexc = hwstate->fpexc;
ufp_exc->fpinst = hwstate->fpinst;
- ufp_exc->fpinst2 = ufp_exc->fpinst2;
+ ufp_exc->fpinst2 = hwstate->fpinst2;
/* Ensure that VFP is disabled. */
vfp_flush_hwstate(thread);
SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \
SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
-#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
+#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffffUL
#error "Inconsistent SCTLR_EL2 set/clear bits"
#endif
SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\
SCTLR_ELx_DSSBS | SCTLR_EL1_NTWI | SCTLR_EL1_RES0)
-#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
+#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffffUL
#error "Inconsistent SCTLR_EL1 set/clear bits"
#endif
.cpu_enable = cpu_enable_hw_dbm,
},
#endif
-#ifdef CONFIG_ARM64_SSBD
{
.desc = "CRC32 instructions",
.capability = ARM64_HAS_CRC32,
.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
.min_field_value = 1,
},
+#ifdef CONFIG_ARM64_SSBD
{
.desc = "Speculative Store Bypassing Safe (SSBS)",
.capability = ARM64_SSBS,
{
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long old;
- struct ftrace_graph_ent trace;
- int err;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
*/
old = *parent;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace))
- return;
-
- err = ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, NULL);
- if (err == -EBUSY)
- return;
- else
+ if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
*parent = return_hooker;
}
arm64_memblock_init();
paging_init();
+ efi_apply_persistent_mem_reservations();
acpi_table_upgrade();
* >0 - successfully JITed a 16-byte eBPF instruction.
* <0 - failed to JIT.
*/
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+ bool extra_pass)
{
const u8 code = insn->code;
const u8 dst = bpf2a64[insn->dst_reg];
case BPF_JMP | BPF_CALL:
{
const u8 r0 = bpf2a64[BPF_REG_0];
- const u64 func = (u64)__bpf_call_base + imm;
+ bool func_addr_fixed;
+ u64 func_addr;
+ int ret;
- if (ctx->prog->is_func)
- emit_addr_mov_i64(tmp, func, ctx);
+ ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+ &func_addr, &func_addr_fixed);
+ if (ret < 0)
+ return ret;
+ if (func_addr_fixed)
+ /* We can use optimized emission here. */
+ emit_a64_mov_i64(tmp, func_addr, ctx);
else
- emit_a64_mov_i64(tmp, func, ctx);
+ emit_addr_mov_i64(tmp, func_addr, ctx);
emit(A64_BLR(tmp), ctx);
emit(A64_MOV(1, r0, A64_R(0)), ctx);
break;
return 0;
}
-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
{
const struct bpf_prog *prog = ctx->prog;
int i;
const struct bpf_insn *insn = &prog->insnsi[i];
int ret;
- ret = build_insn(insn, ctx);
+ ret = build_insn(insn, ctx, extra_pass);
if (ret > 0) {
i++;
if (ctx->image == NULL)
/* 1. Initial fake pass to compute ctx->idx. */
/* Fake pass to fill in ctx->offset. */
- if (build_body(&ctx)) {
+ if (build_body(&ctx, extra_pass)) {
prog = orig_prog;
goto out_off;
}
build_prologue(&ctx, was_classic);
- if (build_body(&ctx)) {
+ if (build_body(&ctx, extra_pass)) {
bpf_jit_binary_free(header);
prog = orig_prog;
goto out_off;
*/
extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
-#define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+#define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+extern int __node_distance(int from, int to);
+#define node_distance(from,to) __node_distance(from, to)
extern int paddr_to_nid(unsigned long paddr);
if (!slit_table) {
for (i = 0; i < MAX_NUMNODES; i++)
for (j = 0; j < MAX_NUMNODES; j++)
- node_distance(i, j) = i == j ? LOCAL_DISTANCE :
- REMOTE_DISTANCE;
+ slit_distance(i, j) = i == j ?
+ LOCAL_DISTANCE : REMOTE_DISTANCE;
return;
}
if (!pxm_bit_test(j))
continue;
node_to = pxm_to_node(j);
- node_distance(node_from, node_to) =
+ slit_distance(node_from, node_to) =
slit_table->entry[i * slit_table->locality_count + j];
}
}
*/
u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
+int __node_distance(int from, int to)
+{
+ return slit_distance(from, to);
+}
+EXPORT_SYMBOL(__node_distance);
+
/* Identify which cnode a physical address resides on */
int
paddr_to_nid(unsigned long paddr)
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
{
unsigned long old;
- int faulted, err;
- struct ftrace_graph_ent trace;
+ int faulted;
unsigned long return_hooker = (unsigned long)
&return_to_handler;
return;
}
- err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
- if (err == -EBUSY) {
+ if (function_graph_enter(old, self_addr, 0, NULL))
*parent = old;
- return;
- }
-
- trace.func = self_addr;
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace)) {
- current->curr_ret_stack--;
- *parent = old;
- }
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
CONFIG_RTC_DRV_DS1307=y
CONFIG_STAGING=y
CONFIG_OCTEON_ETHERNET=y
+CONFIG_OCTEON_USB=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_RAS=y
CONFIG_EXT4_FS=y
unsigned long fp)
{
unsigned long old_parent_ra;
- struct ftrace_graph_ent trace;
unsigned long return_hooker = (unsigned long)
&return_to_handler;
int faulted, insns;
if (unlikely(faulted))
goto out;
- if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp,
- NULL) == -EBUSY) {
- *parent_ra_addr = old_parent_ra;
- return;
- }
-
/*
* Get the recorded ip of the current mcount calling site in the
* __mcount_loc section, which will be used to filter the function
*/
insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
- trace.func = self_ra - (MCOUNT_INSN_SIZE * insns);
+ self_ra -= (MCOUNT_INSN_SIZE * insns);
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace)) {
- current->curr_ret_stack--;
+ if (function_graph_enter(old_parent_ra, self_ra, fp, NULL))
*parent_ra_addr = old_parent_ra;
- }
return;
out:
ftrace_graph_stop();
/* call board setup routine */
plat_mem_setup();
+ memblock_set_bottom_up(true);
/*
* Make sure all kernel memory is in the maps. The "UP" and
unsigned long size = 0x200 + VECTORSPACING*64;
phys_addr_t ebase_pa;
- memblock_set_bottom_up(true);
ebase = (unsigned long)
memblock_alloc_from(size, 1 << fls(size), 0);
- memblock_set_bottom_up(false);
/*
* Try to ensure ebase resides in KSeg0 if possible.
if (board_ebase_setup)
board_ebase_setup();
per_cpu_trap_init(true);
+ memblock_set_bottom_up(false);
/*
* Copy the generic exception handlers to their final destination.
cpumask_clear(&__node_data[(node)]->cpumask);
}
}
+ max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
+
for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
node = cpu / loongson_sysconf.cores_per_node;
if (node >= num_online_nodes())
void __init paging_init(void)
{
- unsigned node;
unsigned long zones_size[MAX_NR_ZONES] = {0, };
pagetable_init();
-
- for_each_online_node(node) {
- unsigned long start_pfn, end_pfn;
-
- get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
-
- if (end_pfn > max_low_pfn)
- max_low_pfn = end_pfn;
- }
#ifdef CONFIG_ZONE_DMA32
zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
#endif
mlreset();
szmem();
+ max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
for (node = 0; node < MAX_COMPACT_NODES; node++) {
if (node_online(node)) {
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
- unsigned node;
pagetable_init();
-
- for_each_online_node(node) {
- unsigned long start_pfn, end_pfn;
-
- get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
-
- if (end_pfn > max_low_pfn)
- max_low_pfn = end_pfn;
- }
zones_size[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(zones_size);
}
unsigned long frame_pointer)
{
unsigned long return_hooker = (unsigned long)&return_to_handler;
- struct ftrace_graph_ent trace;
unsigned long old;
- int err;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
old = *parent;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace))
- return;
-
- err = ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, NULL);
-
- if (err == -EBUSY)
- return;
-
- *parent = return_hooker;
+ if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
+ *parent = return_hooker;
}
noinline void ftrace_graph_caller(void)
volatile unsigned int *a;
a = __ldcw_align(x);
- /* Release with ordered store. */
- __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
+ mb();
+ *a = 1;
}
static inline int arch_spin_trylock(arch_spinlock_t *x)
unsigned long self_addr)
{
unsigned long old;
- struct ftrace_graph_ent trace;
extern int parisc_return_to_handler;
if (unlikely(ftrace_graph_is_dead()))
old = *parent;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace))
- return;
-
- if (ftrace_push_return_trace(old, self_addr, &trace.depth,
- 0, NULL) == -EBUSY)
- return;
-
- /* activate parisc_return_to_handler() as return point */
- *parent = (unsigned long) &parisc_return_to_handler;
+ if (!function_graph_enter(old, self_addr, 0, NULL))
+ /* activate parisc_return_to_handler() as return point */
+ *parent = (unsigned long) &parisc_return_to_handler;
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
sub,<> %r28, %r25, %r0
2: stw %r24, 0(%r26)
/* Free lock */
- stw,ma %r20, 0(%sr2,%r20)
+ sync
+ stw %r20, 0(%sr2,%r20)
#if ENABLE_LWS_DEBUG
/* Clear thread register indicator */
stw %r0, 4(%sr2,%r20)
3:
/* Error occurred on load or store */
/* Free lock */
- stw,ma %r20, 0(%sr2,%r20)
+ sync
+ stw %r20, 0(%sr2,%r20)
#if ENABLE_LWS_DEBUG
stw %r0, 4(%sr2,%r20)
#endif
cas2_end:
/* Free lock */
- stw,ma %r20, 0(%sr2,%r20)
+ sync
+ stw %r20, 0(%sr2,%r20)
/* Enable interrupts */
ssm PSW_SM_I, %r0
/* Return to userspace, set no error */
22:
/* Error occurred on load or store */
/* Free lock */
- stw,ma %r20, 0(%sr2,%r20)
+ sync
+ stw %r20, 0(%sr2,%r20)
ssm PSW_SM_I, %r0
ldo 1(%r0),%r28
b lws_exit
* their hooks, a bitfield is reserved for use by the platform near the
* top of MMIO addresses (not PIO, those have to cope the hard way).
*
- * This bit field is 12 bits and is at the top of the IO virtual
- * addresses PCI_IO_INDIRECT_TOKEN_MASK.
+ * The highest addresses in the kernel virtual space are:
*
- * The kernel virtual space is thus:
+ * d0003fffffffffff # with Hash MMU
+ * c00fffffffffffff # with Radix MMU
*
- * 0xD000000000000000 : vmalloc
- * 0xD000080000000000 : PCI PHB IO space
- * 0xD000080080000000 : ioremap
- * 0xD0000fffffffffff : end of ioremap region
- *
- * Since the top 4 bits are reserved as the region ID, we use thus
- * the next 12 bits and keep 4 bits available for the future if the
- * virtual address space is ever to be extended.
+ * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
+ * that can be used for the field.
*
* The direct IO mapping operations will then mask off those bits
* before doing the actual access, though that only happen when
*/
#ifdef CONFIG_PPC_INDIRECT_MMIO
-#define PCI_IO_IND_TOKEN_MASK 0x0fff000000000000ul
-#define PCI_IO_IND_TOKEN_SHIFT 48
+#define PCI_IO_IND_TOKEN_SHIFT 52
+#define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT)
#define PCI_FIX_ADDR(addr) \
((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
#define PCI_GET_ADDR_TOKEN(addr) \
__PPC_RS(t) | __PPC_RA0(a) | __PPC_RB(b))
#define PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \
__PPC_RT(t) | __PPC_RB(b))
+#define __PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \
+ ___PPC_RT(t) | ___PPC_RB(b))
#define PPC_ICBT(c,a,b) stringify_in_c(.long PPC_INST_ICBT | \
__PPC_CT(c) | __PPC_RA0(a) | __PPC_RB(b))
/* PASemi instructions */
#ifdef CONFIG_PPC64
unsigned long ppr;
+ unsigned long __pad; /* Maintain 16 byte interrupt stack alignment */
#endif
};
#endif
{
unsigned long pa;
+ BUILD_BUG_ON(STACK_INT_FRAME_SIZE % 16);
+
pa = memblock_alloc_base_nid(THREAD_SIZE, THREAD_SIZE, limit,
early_cpu_to_node(cpu), MEMBLOCK_NONE);
if (!pa) {
*/
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
{
- struct ftrace_graph_ent trace;
unsigned long return_hooker;
if (unlikely(ftrace_graph_is_dead()))
return_hooker = ppc_function_entry(return_to_handler);
- trace.func = ip;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace))
- goto out;
-
- if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
- NULL) == -EBUSY)
- goto out;
-
- parent = return_hooker;
+ if (!function_graph_enter(parent, ip, 0, NULL))
+ parent = return_hooker;
out:
return parent;
}
ret = kvmhv_enter_nested_guest(vcpu);
if (ret == H_INTERRUPT) {
kvmppc_set_gpr(vcpu, 3, 0);
+ vcpu->arch.hcall_needed = 0;
return -EINTR;
}
break;
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace
/*
* Tracepoint for guest mode entry.
#endif /* _TRACE_KVM_H */
/* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace
+
#include <trace/define_trace.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm_booke
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_booke
#define kvm_trace_symbol_exit \
{0, "CRITICAL"}, \
#endif
/* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_booke
+
#include <trace/define_trace.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm_hv
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_hv
#define kvm_trace_symbol_hcall \
{H_REMOVE, "H_REMOVE"}, \
#endif /* _TRACE_KVM_HV_H */
/* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_hv
+
#include <trace/define_trace.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm_pr
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_pr
TRACE_EVENT(kvm_book3s_reenter,
TP_PROTO(int r, struct kvm_vcpu *vcpu),
#endif /* _TRACE_KVM_H */
/* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_pr
+
#include <trace/define_trace.h>
switch (rc) {
case H_FUNCTION:
- printk(KERN_INFO
+ printk_once(KERN_INFO
"VPHN is not supported. Disabling polling...\n");
stop_topology_update();
break;
#include <asm/mmu.h>
#include <asm/mmu_context.h>
#include <asm/paca.h>
+#include <asm/ppc-opcode.h>
#include <asm/cputable.h>
#include <asm/cacheflush.h>
#include <asm/smp.h>
return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags);
}
-static void assert_slb_exists(unsigned long ea)
+static void assert_slb_presence(bool present, unsigned long ea)
{
#ifdef CONFIG_DEBUG_VM
unsigned long tmp;
WARN_ON_ONCE(mfmsr() & MSR_EE);
- asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0");
- WARN_ON(tmp == 0);
-#endif
-}
-
-static void assert_slb_notexists(unsigned long ea)
-{
-#ifdef CONFIG_DEBUG_VM
- unsigned long tmp;
+ if (!cpu_has_feature(CPU_FTR_ARCH_206))
+ return;
- WARN_ON_ONCE(mfmsr() & MSR_EE);
+ asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
- asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0");
- WARN_ON(tmp != 0);
+ WARN_ON(present == (tmp == 0));
#endif
}
*/
slb_shadow_update(ea, ssize, flags, index);
- assert_slb_notexists(ea);
+ assert_slb_presence(false, ea);
asm volatile("slbmte %0,%1" :
: "r" (mk_vsid_data(ea, ssize, flags)),
"r" (mk_esid_data(ea, ssize, index))
"r" (be64_to_cpu(p->save_area[index].esid)));
}
- assert_slb_exists(local_paca->kstack);
+ assert_slb_presence(true, local_paca->kstack);
}
/*
:: "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].vsid)),
"r" (be64_to_cpu(p->save_area[KSTACK_INDEX].esid))
: "memory");
- assert_slb_exists(get_paca()->kstack);
+ assert_slb_presence(true, get_paca()->kstack);
get_paca()->slb_cache_ptr = 0;
ea = (unsigned long)
get_paca()->slb_cache[i] << SID_SHIFT;
/*
- * Could assert_slb_exists here, but hypervisor
- * or machine check could have come in and
- * removed the entry at this point.
+ * Could assert_slb_presence(true) here, but
+ * hypervisor or machine check could have come
+ * in and removed the entry at this point.
*/
slbie_data = ea;
* User preloads should add isync afterwards in case the kernel
* accesses user memory before it returns to userspace with rfid.
*/
- assert_slb_notexists(ea);
+ assert_slb_presence(false, ea);
asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data));
barrier();
return -EFAULT;
if (ea < H_VMALLOC_END)
- flags = get_paca()->vmalloc_sllp;
+ flags = local_paca->vmalloc_sllp;
else
flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_io_psize].sllp;
} else {
PPC_BLR();
}
-static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx,
+ u64 func)
+{
+#ifdef PPC64_ELF_ABI_v1
+ /* func points to the function descriptor */
+ PPC_LI64(b2p[TMP_REG_2], func);
+ /* Load actual entry point from function descriptor */
+ PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+ /* ... and move it to LR */
+ PPC_MTLR(b2p[TMP_REG_1]);
+ /*
+ * Load TOC from function descriptor at offset 8.
+ * We can clobber r2 since we get called through a
+ * function pointer (so caller will save/restore r2)
+ * and since we don't use a TOC ourself.
+ */
+ PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+#else
+ /* We can clobber r12 */
+ PPC_FUNC_ADDR(12, func);
+ PPC_MTLR(12);
+#endif
+ PPC_BLRL();
+}
+
+static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+ u64 func)
{
unsigned int i, ctx_idx = ctx->idx;
{
const struct bpf_insn *insn = fp->insnsi;
int flen = fp->len;
- int i;
+ int i, ret;
/* Start of epilogue code - will only be valid 2nd pass onwards */
u32 exit_addr = addrs[flen];
u32 src_reg = b2p[insn[i].src_reg];
s16 off = insn[i].off;
s32 imm = insn[i].imm;
+ bool func_addr_fixed;
+ u64 func_addr;
u64 imm64;
- u8 *func;
u32 true_cond;
u32 tmp_idx;
case BPF_JMP | BPF_CALL:
ctx->seen |= SEEN_FUNC;
- /* bpf function call */
- if (insn[i].src_reg == BPF_PSEUDO_CALL)
- if (!extra_pass)
- func = NULL;
- else if (fp->aux->func && off < fp->aux->func_cnt)
- /* use the subprog id from the off
- * field to lookup the callee address
- */
- func = (u8 *) fp->aux->func[off]->bpf_func;
- else
- return -EINVAL;
- /* kernel helper call */
- else
- func = (u8 *) __bpf_call_base + imm;
-
- bpf_jit_emit_func_call(image, ctx, (u64)func);
+ ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
+ &func_addr, &func_addr_fixed);
+ if (ret < 0)
+ return ret;
+ if (func_addr_fixed)
+ bpf_jit_emit_func_call_hlp(image, ctx, func_addr);
+ else
+ bpf_jit_emit_func_call_rel(image, ctx, func_addr);
/* move return value from r3 to BPF_REG_0 */
PPC_MR(b2p[BPF_REG_0], 3);
break;
}
EXPORT_SYMBOL(pnv_pci_get_npu_dev);
-#define NPU_DMA_OP_UNSUPPORTED() \
- dev_err_once(dev, "%s operation unsupported for NVLink devices\n", \
- __func__)
-
-static void *dma_npu_alloc(struct device *dev, size_t size,
- dma_addr_t *dma_handle, gfp_t flag,
- unsigned long attrs)
-{
- NPU_DMA_OP_UNSUPPORTED();
- return NULL;
-}
-
-static void dma_npu_free(struct device *dev, size_t size,
- void *vaddr, dma_addr_t dma_handle,
- unsigned long attrs)
-{
- NPU_DMA_OP_UNSUPPORTED();
-}
-
-static dma_addr_t dma_npu_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size,
- enum dma_data_direction direction,
- unsigned long attrs)
-{
- NPU_DMA_OP_UNSUPPORTED();
- return 0;
-}
-
-static int dma_npu_map_sg(struct device *dev, struct scatterlist *sglist,
- int nelems, enum dma_data_direction direction,
- unsigned long attrs)
-{
- NPU_DMA_OP_UNSUPPORTED();
- return 0;
-}
-
-static int dma_npu_dma_supported(struct device *dev, u64 mask)
-{
- NPU_DMA_OP_UNSUPPORTED();
- return 0;
-}
-
-static u64 dma_npu_get_required_mask(struct device *dev)
-{
- NPU_DMA_OP_UNSUPPORTED();
- return 0;
-}
-
-static const struct dma_map_ops dma_npu_ops = {
- .map_page = dma_npu_map_page,
- .map_sg = dma_npu_map_sg,
- .alloc = dma_npu_alloc,
- .free = dma_npu_free,
- .dma_supported = dma_npu_dma_supported,
- .get_required_mask = dma_npu_get_required_mask,
-};
-
/*
* Returns the PE assoicated with the PCI device of the given
* NPU. Returns the linked pci device if pci_dev != NULL.
rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
/*
- * We don't initialise npu_pe->tce32_table as we always use
- * dma_npu_ops which are nops.
+ * NVLink devices use the same TCE table configuration as
+ * their parent device so drivers shouldn't be doing DMA
+ * operations directly on these devices.
*/
- set_dma_ops(&npe->pdev->dev, &dma_npu_ops);
+ set_dma_ops(&npe->pdev->dev, NULL);
}
/*
# arch specific predefines for sparse
CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS)
+# Default target when executing plain make
+boot := arch/riscv/boot
+KBUILD_IMAGE := $(boot)/Image.gz
+
head-y := arch/riscv/kernel/head.o
core-y += arch/riscv/kernel/ arch/riscv/mm/
libs-y += arch/riscv/lib/
-all: vmlinux
+PHONY += vdso_install
+vdso_install:
+ $(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@
+
+all: Image.gz
+
+Image: vmlinux
+ $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+
+Image.%: Image
+ $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+
+zinstall install:
+ $(Q)$(MAKE) $(build)=$(boot) $@
--- /dev/null
+Image
+Image.gz
--- /dev/null
+#
+# arch/riscv/boot/Makefile
+#
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies.
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2018, Anup Patel.
+#
+# Based on the ia64 and arm64 boot/Makefile.
+#
+
+OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
+
+targets := Image
+
+$(obj)/Image: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+$(obj)/Image.gz: $(obj)/Image FORCE
+ $(call if_changed,gzip)
+
+install:
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+ $(obj)/Image System.map "$(INSTALL_PATH)"
+
+zinstall:
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+ $(obj)/Image.gz System.map "$(INSTALL_PATH)"
--- /dev/null
+#!/bin/sh
+#
+# arch/riscv/boot/install.sh
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995 by Linus Torvalds
+#
+# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
+# Adapted from code in arch/i386/boot/install.sh by Russell King
+#
+# "make install" script for the RISC-V Linux port
+#
+# Arguments:
+# $1 - kernel version
+# $2 - kernel image file
+# $3 - kernel map file
+# $4 - default install path (blank if root directory)
+#
+
+verify () {
+ if [ ! -f "$1" ]; then
+ echo "" 1>&2
+ echo " *** Missing file: $1" 1>&2
+ echo ' *** You need to run "make" before "make install".' 1>&2
+ echo "" 1>&2
+ exit 1
+ fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
+# User may have a custom install script
+if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
+if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+
+if [ "$(basename $2)" = "Image.gz" ]; then
+# Compressed install
+ echo "Installing compressed kernel"
+ base=vmlinuz
+else
+# Normal install
+ echo "Installing normal kernel"
+ base=vmlinux
+fi
+
+if [ -f $4/$base-$1 ]; then
+ mv $4/$base-$1 $4/$base-$1.old
+fi
+cat $2 > $4/$base-$1
+
+# Install system map file
+if [ -f $4/System.map-$1 ]; then
+ mv $4/System.map-$1 $4/System.map-$1.old
+fi
+cp $3 $4/System.map-$1
CONFIG_NFS_V4_2=y
CONFIG_ROOT_NFS=y
CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_PRINTK_TIME=y
# CONFIG_RCU_TRACE is not set
#define MODULE_ARCH_VERMAGIC "riscv"
+struct module;
u64 module_emit_got_entry(struct module *mod, u64 val);
u64 module_emit_plt_entry(struct module *mod, u64 val);
unsigned long sstatus;
unsigned long sbadaddr;
unsigned long scause;
- /* a0 value before the syscall */
- unsigned long orig_a0;
+ /* a0 value before the syscall */
+ unsigned long orig_a0;
};
#ifdef CONFIG_64BIT
static inline unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
- return __asm_copy_to_user(to, from, n);
+ return __asm_copy_from_user(to, from, n);
}
static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
- return __asm_copy_from_user(to, from, n);
+ return __asm_copy_to_user(to, from, n);
}
extern long strncpy_from_user(char *dest, const char __user *src, long count);
/*
* There is explicitly no include guard here because this file is expected to
- * be included multiple times. See uapi/asm/syscalls.h for more info.
+ * be included multiple times.
*/
-#define __ARCH_WANT_NEW_STAT
#define __ARCH_WANT_SYS_CLONE
+
#include <uapi/asm/unistd.h>
-#include <uapi/asm/syscalls.h>
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2017-2018 SiFive
- */
-
-/*
- * There is explicitly no include guard here because this file is expected to
- * be included multiple times in order to define the syscall macros via
- * __SYSCALL.
- */
-
-/*
- * Allows the instruction cache to be flushed from userspace. Despite RISC-V
- * having a direct 'fence.i' instruction available to userspace (which we
- * can't trap!), that's not actually viable when running on Linux because the
- * kernel might schedule a process on another hart. There is no way for
- * userspace to handle this without invoking the kernel (as it doesn't know the
- * thread->hart mappings), so we've defined a RISC-V specific system call to
- * flush the instruction cache.
- *
- * __NR_riscv_flush_icache is defined to flush the instruction cache over an
- * address range, with the flush applying to either all threads or just the
- * caller. We don't currently do anything with the address range, that's just
- * in there for forwards compatibility.
- */
-#ifndef __NR_riscv_flush_icache
-#define __NR_riscv_flush_icache (__NR_arch_specific_syscall + 15)
-#endif
-__SYSCALL(__NR_riscv_flush_icache, sys_riscv_flush_icache)
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifdef __LP64__
+#define __ARCH_WANT_NEW_STAT
+#endif /* __LP64__ */
+
+#include <asm-generic/unistd.h>
+
+/*
+ * Allows the instruction cache to be flushed from userspace. Despite RISC-V
+ * having a direct 'fence.i' instruction available to userspace (which we
+ * can't trap!), that's not actually viable when running on Linux because the
+ * kernel might schedule a process on another hart. There is no way for
+ * userspace to handle this without invoking the kernel (as it doesn't know the
+ * thread->hart mappings), so we've defined a RISC-V specific system call to
+ * flush the instruction cache.
+ *
+ * __NR_riscv_flush_icache is defined to flush the instruction cache over an
+ * address range, with the flush applying to either all threads or just the
+ * caller. We don't currently do anything with the address range; it's just
+ * there for forward compatibility.
+ */
+#ifndef __NR_riscv_flush_icache
+#define __NR_riscv_flush_icache (__NR_arch_specific_syscall + 15)
+#endif
+__SYSCALL(__NR_riscv_flush_icache, sys_riscv_flush_icache)
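Since the comment above stresses that userspace cannot simply execute fence.i and must instead call into the kernel, a minimal userspace sketch may help. This is illustrative only; the flags value of 0 meaning "flush for all threads" is an assumption based on the comment, not a constant defined in this hunk:

/* Hypothetical userspace sketch: flush the icache over [start, end)
 * after writing code to memory. flags == 0 is assumed to request a
 * flush visible to all threads, per the comment above. The syscall
 * number comes from the header added in this hunk. */
#include <unistd.h>
#include <sys/syscall.h>

static void flush_icache(void *start, void *end)
{
	syscall(__NR_riscv_flush_icache, start, end, 0UL);
}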
static void print_isa(struct seq_file *f, const char *orig_isa)
{
- static const char *ext = "mafdc";
+ static const char *ext = "mafdcsu";
const char *isa = orig_isa;
const char *e;
/*
* Check the rest of the ISA string for valid extensions, printing those
* we find. RISC-V ISA strings define an order, so we only print the
- * extension bits when they're in order.
+ * extension bits when they're in order. Hide the supervisor (S)
+ * extension from userspace as it's not accessible from there.
*/
for (e = ext; *e != '\0'; ++e) {
if (isa[0] == e[0]) {
- seq_write(f, isa, 1);
+ if (isa[0] != 's')
+ seq_write(f, isa, 1);
+
isa++;
}
}
{
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long old;
- struct ftrace_graph_ent trace;
int err;
	if (unlikely(atomic_read(&current->tracing_graph_pause)))
*/
old = *parent;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- if (!ftrace_graph_entry(&trace))
- return;
-
- err = ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, parent);
- if (err == -EBUSY)
- return;
- *parent = return_hooker;
+ if (function_graph_enter(old, self_addr, frame_pointer, parent))
+ *parent = return_hooker;
}
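This hunk, like the matching hunks for the other architectures below, folds the open-coded entry-filter and return-stack push into the new consolidated function_graph_enter() helper. A rough sketch of that helper, reconstructed from the code being removed here (the exact kernel implementation may order these steps differently):

/* Sketch only: combines the two steps each architecture used to
 * open-code. Returns nonzero on failure, in which case the caller
 * leaves the original return address in place. */
int function_graph_enter(unsigned long ret, unsigned long func,
			 unsigned long frame_pointer, unsigned long *retp)
{
	struct ftrace_graph_ent trace;

	trace.func = func;
	trace.depth = current->curr_ret_stack + 1;

	/* Only trace if the calling function expects to. */
	if (!ftrace_graph_entry(&trace))
		return -EBUSY;

	return ftrace_push_return_trace(ret, func, &trace.depth,
					frame_pointer, retp);
}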
#ifdef CONFIG_DYNAMIC_FTRACE
amoadd.w a3, a2, (a3)
bnez a3, .Lsecondary_start
+ /* Clear BSS for flat non-ELF images */
+ la a3, __bss_start
+ la a4, __bss_stop
+ ble a4, a3, clear_bss_done
+clear_bss:
+ REG_S zero, (a3)
+ add a3, a3, RISCV_SZPTR
+ blt a3, a4, clear_bss
+clear_bss_done:
+
/* Save hart ID and DTB physical address */
mv s0, a0
mv s1, a1
{
if (v != (u32)v) {
pr_err("%s: value %016llx out of range for 32-bit field\n",
- me->name, v);
+ me->name, (long long)v);
return -EINVAL;
}
*location = v;
if (offset != (s32)offset) {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
- me->name, v, location);
+ me->name, (long long)v, location);
return -EINVAL;
}
if (IS_ENABLED(CMODEL_MEDLOW)) {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
- me->name, v, location);
+ me->name, (long long)v, location);
return -EINVAL;
}
} else {
pr_err(
"%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",
- me->name, v, location);
+ me->name, (long long)v, location);
return -EINVAL;
}
} else {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
- me->name, v, location);
+ me->name, (long long)v, location);
return -EINVAL;
}
}
if (offset != fill_v) {
pr_err(
"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
- me->name, v, location);
+ me->name, (long long)v, location);
return -EINVAL;
}
*(.sbss*)
}
- BSS_SECTION(0, 0, 0)
+ BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
EXCEPTION_TABLE(0x10)
NOTES
lib-y += memset.o
lib-y += uaccess.o
-lib-(CONFIG_64BIT) += tishift.o
+lib-$(CONFIG_64BIT) += tishift.o
lib-$(CONFIG_32BIT) += udivdi3.o
*/
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
{
- struct ftrace_graph_ent trace;
-
if (unlikely(ftrace_graph_is_dead()))
goto out;
	if (unlikely(atomic_read(&current->tracing_graph_pause)))
goto out;
ip -= MCOUNT_INSN_SIZE;
- trace.func = ip;
- trace.depth = current->curr_ret_stack + 1;
- /* Only trace if the calling function expects to. */
- if (!ftrace_graph_entry(&trace))
- goto out;
- if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
- NULL) == -EBUSY)
- goto out;
- parent = (unsigned long) return_to_handler;
+ if (!function_graph_enter(parent, ip, 0, NULL))
+ parent = (unsigned long) return_to_handler;
out:
return parent;
}
break;
case PERF_TYPE_HARDWARE:
+ if (is_sampling_event(event)) /* No sampling support */
+ return -ENOENT;
ev = attr->config;
/* Count user space (problem-state) only */
if (!attr->exclude_user && attr->exclude_kernel) {
}
pgd = mm->pgd;
+ mm_dec_nr_pmds(mm);
mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN);
mm->context.asce_limit = _REGION3_SIZE;
mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
{
unsigned long old;
- int faulted, err;
- struct ftrace_graph_ent trace;
+ int faulted;
unsigned long return_hooker = (unsigned long)&return_to_handler;
if (unlikely(ftrace_graph_is_dead()))
return;
}
- err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
- if (err == -EBUSY) {
+ if (function_graph_enter(old, self_addr, 0, NULL))
__raw_writel(old, parent);
- return;
- }
-
- trace.func = self_addr;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace)) {
- current->curr_ret_stack--;
- __raw_writel(old, parent);
- }
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
unsigned long frame_pointer)
{
unsigned long return_hooker = (unsigned long) &return_to_handler;
- struct ftrace_graph_ent trace;
	if (unlikely(atomic_read(&current->tracing_graph_pause)))
return parent + 8UL;
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace))
- return parent + 8UL;
-
- if (ftrace_push_return_trace(parent, self_addr, &trace.depth,
- frame_pointer, NULL) == -EBUSY)
+ if (function_graph_enter(parent, self_addr, frame_pointer, NULL))
return parent + 8UL;
return return_hooker;
}
/* Just skip the save instruction and the ctx register move. */
-#define BPF_TAILCALL_PROLOGUE_SKIP 16
+#define BPF_TAILCALL_PROLOGUE_SKIP 32
#define BPF_TAILCALL_CNT_SP_OFF (STACK_BIAS + 128)
static void build_prologue(struct jit_ctx *ctx)
const u8 vfp = bpf2sparc[BPF_REG_FP];
emit(ADD | IMMED | RS1(FP) | S13(STACK_BIAS) | RD(vfp), ctx);
+ } else {
+ emit_nop(ctx);
}
emit_reg_move(I0, O0, ctx);
+ emit_reg_move(I1, O1, ctx);
+ emit_reg_move(I2, O2, ctx);
+ emit_reg_move(I3, O3, ctx);
+ emit_reg_move(I4, O4, ctx);
/* If you add anything here, adjust BPF_TAILCALL_PROLOGUE_SKIP above. */
}
const u8 tmp2 = bpf2sparc[TMP_REG_2];
u32 opcode = 0, rs2;
+ if (insn->dst_reg == BPF_REG_FP)
+ ctx->saw_frame_pointer = true;
+
ctx->tmp_2_used = true;
emit_loadimm(imm, tmp2, ctx);
const u8 tmp = bpf2sparc[TMP_REG_1];
u32 opcode = 0, rs2;
+ if (insn->dst_reg == BPF_REG_FP)
+ ctx->saw_frame_pointer = true;
+
switch (BPF_SIZE(code)) {
case BPF_W:
opcode = ST32;
const u8 tmp2 = bpf2sparc[TMP_REG_2];
const u8 tmp3 = bpf2sparc[TMP_REG_3];
+ if (insn->dst_reg == BPF_REG_FP)
+ ctx->saw_frame_pointer = true;
+
ctx->tmp_1_used = true;
ctx->tmp_2_used = true;
ctx->tmp_3_used = true;
const u8 tmp2 = bpf2sparc[TMP_REG_2];
const u8 tmp3 = bpf2sparc[TMP_REG_3];
+ if (insn->dst_reg == BPF_REG_FP)
+ ctx->saw_frame_pointer = true;
+
ctx->tmp_1_used = true;
ctx->tmp_2_used = true;
ctx->tmp_3_used = true;
struct bpf_prog *tmp, *orig_prog = prog;
struct sparc64_jit_data *jit_data;
struct bpf_binary_header *header;
+ u32 prev_image_size, image_size;
bool tmp_blinded = false;
bool extra_pass = false;
struct jit_ctx ctx;
- u32 image_size;
u8 *image_ptr;
- int pass;
+ int pass, i;
if (!prog->jit_requested)
return orig_prog;
header = jit_data->header;
extra_pass = true;
image_size = sizeof(u32) * ctx.idx;
+ prev_image_size = image_size;
+ pass = 1;
goto skip_init_ctx;
}
memset(&ctx, 0, sizeof(ctx));
ctx.prog = prog;
- ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL);
+ ctx.offset = kmalloc_array(prog->len, sizeof(unsigned int), GFP_KERNEL);
if (ctx.offset == NULL) {
prog = orig_prog;
goto out_off;
}
- /* Fake pass to detect features used, and get an accurate assessment
- * of what the final image size will be.
+ /* Longest sequence emitted is for bswap32, 12 instructions. Pre-cook
+ * the offset array so that we converge faster.
*/
- if (build_body(&ctx)) {
- prog = orig_prog;
- goto out_off;
- }
- build_prologue(&ctx);
- build_epilogue(&ctx);
-
- /* Now we know the actual image size. */
- image_size = sizeof(u32) * ctx.idx;
- header = bpf_jit_binary_alloc(image_size, &image_ptr,
- sizeof(u32), jit_fill_hole);
- if (header == NULL) {
- prog = orig_prog;
- goto out_off;
- }
+ for (i = 0; i < prog->len; i++)
+ ctx.offset[i] = i * (12 * 4);
- ctx.image = (u32 *)image_ptr;
-skip_init_ctx:
- for (pass = 1; pass < 3; pass++) {
+ prev_image_size = ~0U;
+ for (pass = 1; pass < 40; pass++) {
ctx.idx = 0;
build_prologue(&ctx);
-
if (build_body(&ctx)) {
- bpf_jit_binary_free(header);
prog = orig_prog;
goto out_off;
}
-
build_epilogue(&ctx);
if (bpf_jit_enable > 1)
- pr_info("Pass %d: shrink = %d, seen = [%c%c%c%c%c%c]\n", pass,
- image_size - (ctx.idx * 4),
+ pr_info("Pass %d: size = %u, seen = [%c%c%c%c%c%c]\n", pass,
+ ctx.idx * 4,
ctx.tmp_1_used ? '1' : ' ',
ctx.tmp_2_used ? '2' : ' ',
ctx.tmp_3_used ? '3' : ' ',
ctx.saw_frame_pointer ? 'F' : ' ',
ctx.saw_call ? 'C' : ' ',
ctx.saw_tail_call ? 'T' : ' ');
+
+ if (ctx.idx * 4 == prev_image_size)
+ break;
+ prev_image_size = ctx.idx * 4;
+ cond_resched();
+ }
+
+ /* Now we know the actual image size. */
+ image_size = sizeof(u32) * ctx.idx;
+ header = bpf_jit_binary_alloc(image_size, &image_ptr,
+ sizeof(u32), jit_fill_hole);
+ if (header == NULL) {
+ prog = orig_prog;
+ goto out_off;
+ }
+
+ ctx.image = (u32 *)image_ptr;
+skip_init_ctx:
+ ctx.idx = 0;
+
+ build_prologue(&ctx);
+
+ if (build_body(&ctx)) {
+ bpf_jit_binary_free(header);
+ prog = orig_prog;
+ goto out_off;
+ }
+
+ build_epilogue(&ctx);
+
+ if (ctx.idx * 4 != prev_image_size) {
+ pr_err("bpf_jit: Failed to converge, prev_size=%u size=%d\n",
+ prev_image_size, ctx.idx * 4);
+ bpf_jit_binary_free(header);
+ prog = orig_prog;
+ goto out_off;
}
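The pre-cooked bound above follows from simple arithmetic: the longest expansion of a single eBPF instruction is 12 SPARC instructions (bswap32) of 4 bytes each, so seeding ctx.offset[i] = i * 48 starts every offset estimate at or above its final value. Each subsequent pass re-emits with tighter offsets, so the image size should only shrink until two consecutive passes agree, at which point all branch offsets are final; the 40-pass cap and the post-allocation size check guard against the unexpected case where the loop fails to settle.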
if (bpf_jit_enable > 1)
if (config == -1LL)
return -EINVAL;
- /*
- * Branch tracing:
- */
- if (attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS &&
- !attr->freq && hwc->sample_period == 1) {
- /* BTS is not supported by this architecture. */
- if (!x86_pmu.bts_active)
- return -EOPNOTSUPP;
-
- /* BTS is currently only allowed for user-mode. */
- if (!attr->exclude_kernel)
- return -EOPNOTSUPP;
-
- /* disallow bts if conflicting events are present */
- if (x86_add_exclusive(x86_lbr_exclusive_lbr))
- return -EBUSY;
-
- event->destroy = hw_perf_lbr_event_destroy;
- }
-
hwc->config |= config;
return 0;
return handled;
}
-static bool disable_counter_freezing;
+static bool disable_counter_freezing = true;
static int __init intel_perf_counter_freezing_setup(char *s)
{
- disable_counter_freezing = true;
- pr_info("Intel PMU Counter freezing feature disabled\n");
+ bool res;
+
+ if (kstrtobool(s, &res))
+ return -EINVAL;
+
+ disable_counter_freezing = !res;
return 1;
}
-__setup("disable_counter_freezing", intel_perf_counter_freezing_setup);
+__setup("perf_v4_pmi=", intel_perf_counter_freezing_setup);
/*
* Simplified handler for Arch Perfmon v4:
static struct event_constraint *
intel_bts_constraints(struct perf_event *event)
{
- struct hw_perf_event *hwc = &event->hw;
- unsigned int hw_event, bts_event;
-
- if (event->attr.freq)
- return NULL;
-
- hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
- bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
-
- if (unlikely(hw_event == bts_event && hwc->sample_period == 1))
+ if (unlikely(intel_pmu_has_bts(event)))
return &bts_constraint;
return NULL;
return flags;
}
+static int intel_pmu_bts_config(struct perf_event *event)
+{
+ struct perf_event_attr *attr = &event->attr;
+
+ if (unlikely(intel_pmu_has_bts(event))) {
+ /* BTS is not supported by this architecture. */
+ if (!x86_pmu.bts_active)
+ return -EOPNOTSUPP;
+
+ /* BTS is currently only allowed for user-mode. */
+ if (!attr->exclude_kernel)
+ return -EOPNOTSUPP;
+
+ /* BTS is not allowed for precise events. */
+ if (attr->precise_ip)
+ return -EOPNOTSUPP;
+
+ /* disallow bts if conflicting events are present */
+ if (x86_add_exclusive(x86_lbr_exclusive_lbr))
+ return -EBUSY;
+
+ event->destroy = hw_perf_lbr_event_destroy;
+ }
+
+ return 0;
+}
+
+static int core_pmu_hw_config(struct perf_event *event)
+{
+ int ret = x86_pmu_hw_config(event);
+
+ if (ret)
+ return ret;
+
+ return intel_pmu_bts_config(event);
+}
+
static int intel_pmu_hw_config(struct perf_event *event)
{
int ret = x86_pmu_hw_config(event);
+ if (ret)
+ return ret;
+
+ ret = intel_pmu_bts_config(event);
if (ret)
return ret;
/*
* BTS is set up earlier in this path, so don't account twice
*/
- if (!intel_pmu_has_bts(event)) {
+ if (!unlikely(intel_pmu_has_bts(event))) {
/* disallow lbr if conflicting events are present */
if (x86_add_exclusive(x86_lbr_exclusive_lbr))
return -EBUSY;
.enable_all = core_pmu_enable_all,
.enable = core_pmu_enable_event,
.disable = x86_pmu_disable_event,
- .hw_config = x86_pmu_hw_config,
+ .hw_config = core_pmu_hw_config,
.schedule_events = x86_schedule_events,
.eventsel = MSR_ARCH_PERFMON_EVENTSEL0,
.perfctr = MSR_ARCH_PERFMON_PERFCTR0,
struct intel_uncore_extra_reg shared_regs[0];
};
-#define UNCORE_BOX_FLAG_INITIATED 0
-#define UNCORE_BOX_FLAG_CTL_OFFS8 1 /* event config registers are 8-byte apart */
+/* CFL uncore 8th cbox MSRs */
+#define CFL_UNC_CBO_7_PERFEVTSEL0 0xf70
+#define CFL_UNC_CBO_7_PER_CTR0 0xf76
+
+#define UNCORE_BOX_FLAG_INITIATED 0
+/* event config registers are 8-byte apart */
+#define UNCORE_BOX_FLAG_CTL_OFFS8 1
+/* CFL 8th CBOX has different MSR space */
+#define UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS 2
struct uncore_event_desc {
struct kobj_attribute attr;
static inline
unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx)
{
- return box->pmu->type->event_ctl +
- (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
- uncore_msr_box_offset(box);
+ if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) {
+ return CFL_UNC_CBO_7_PERFEVTSEL0 +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx);
+ } else {
+ return box->pmu->type->event_ctl +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
+ uncore_msr_box_offset(box);
+ }
}
static inline
unsigned uncore_msr_perf_ctr(struct intel_uncore_box *box, int idx)
{
- return box->pmu->type->perf_ctr +
- (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
- uncore_msr_box_offset(box);
+ if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) {
+ return CFL_UNC_CBO_7_PER_CTR0 +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx);
+ } else {
+ return box->pmu->type->perf_ctr +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
+ uncore_msr_box_offset(box);
+ }
}
static inline
#define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC 0x1910
#define PCI_DEVICE_ID_INTEL_SKL_SD_IMC 0x190f
#define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC 0x191f
+#define PCI_DEVICE_ID_INTEL_KBL_Y_IMC 0x590c
+#define PCI_DEVICE_ID_INTEL_KBL_U_IMC 0x5904
+#define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC 0x5914
+#define PCI_DEVICE_ID_INTEL_KBL_SD_IMC 0x590f
+#define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC 0x591f
+#define PCI_DEVICE_ID_INTEL_CFL_2U_IMC 0x3ecc
+#define PCI_DEVICE_ID_INTEL_CFL_4U_IMC 0x3ed0
+#define PCI_DEVICE_ID_INTEL_CFL_4H_IMC 0x3e10
+#define PCI_DEVICE_ID_INTEL_CFL_6H_IMC 0x3ec4
+#define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC 0x3e0f
+#define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC 0x3e1f
+#define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC 0x3ec2
+#define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC 0x3e30
+#define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC 0x3e18
+#define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC 0x3ec6
+#define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC 0x3e31
+#define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC 0x3e33
+#define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC 0x3eca
+#define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC 0x3e32
/* SNB event control */
#define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff
wrmsrl(SKL_UNC_PERF_GLOBAL_CTL,
SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL);
}
+
+ /* The 8th CBOX has different MSR space */
+ if (box->pmu->pmu_idx == 7)
+ __set_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags);
}
static void skl_uncore_msr_enable_box(struct intel_uncore_box *box)
static struct intel_uncore_type skl_uncore_cbox = {
.name = "cbox",
.num_counters = 4,
- .num_boxes = 5,
+ .num_boxes = 8,
.perf_ctr_bits = 44,
.fixed_ctr_bits = 48,
.perf_ctr = SNB_UNC_CBO_0_PER_CTR0,
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
-
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
+ { /* IMC */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC),
+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+ },
{ /* end: all zeroes */ },
};
IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */
IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */
IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */
+ IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver), /* 7th Gen Core Y */
+ IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U */
+ IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U Quad Core */
+ IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Dual Core */
+ IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Quad Core */
+ IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 2 Cores */
+ IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 4 Cores */
+ IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 4 Cores */
+ IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 6 Cores */
+ IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 2 Cores Desktop */
+ IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Desktop */
+ IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Desktop */
+ IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Desktop */
+ IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Work Station */
+ IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Work Station */
+ IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Work Station */
+ IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Server */
+ IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Server */
+ IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Server */
{ /* end marker */ }
};
static inline bool intel_pmu_has_bts(struct perf_event *event)
{
- if (event->attr.config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS &&
- !event->attr.freq && event->hw.sample_period == 1)
- return true;
+ struct hw_perf_event *hwc = &event->hw;
+ unsigned int hw_event, bts_event;
+
+ if (event->attr.freq)
+ return false;
+
+ hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
+ bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
- return false;
+ return hw_event == bts_event && hwc->sample_period == 1;
}
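To make the predicate concrete, here is a hedged example of an event configuration that would satisfy the checks above; intel_pmu_bts_config() earlier in this series additionally requires the event to be user-mode only and non-precise:

/* Illustrative only: a perf_event_attr that routes to BTS under the
 * predicate above: branch instructions, fixed sample period of 1,
 * no frequency mode, and user-mode only. */
struct perf_event_attr attr = {
	.type           = PERF_TYPE_HARDWARE,
	.config         = PERF_COUNT_HW_BRANCH_INSTRUCTIONS,
	.sample_period  = 1,	/* attr.freq == 0, period == 1 */
	.exclude_kernel = 1,	/* BTS is currently user-mode only */
};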
int intel_pmu_save_and_restart(struct perf_event *event);
bool (*has_wbinvd_exit)(void);
u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
- void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+ /* Returns actual tsc_offset set in active VMCS */
+ u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
{
unsigned long old;
int faulted;
- struct ftrace_graph_ent trace;
unsigned long return_hooker = (unsigned long)
&return_to_handler;
return;
}
- trace.func = self_addr;
- trace.depth = current->curr_ret_stack + 1;
-
- /* Only trace if the calling function expects to */
- if (!ftrace_graph_entry(&trace)) {
+ if (function_graph_enter(old, self_addr, frame_pointer, parent))
*parent = old;
- return;
- }
-
- if (ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, parent) == -EBUSY) {
- *parent = old;
- return;
- }
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#define PRIo64 "o"
/* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
-#define apic_debug(fmt, arg...)
+#define apic_debug(fmt, arg...) do {} while (0)
/* 14 is the version for Xeon and Pentium 8.4.8*/
#define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))
rcu_read_lock();
map = rcu_dereference(kvm->arch.apic_map);
+ if (unlikely(!map)) {
+ count = -EOPNOTSUPP;
+ goto out;
+ }
+
if (min > map->max_apic_id)
goto out;
/* Bits above cluster_size are masked in the caller. */
}
static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
- const u8 *new, int *bytes)
+ int *bytes)
{
- u64 gentry;
+ u64 gentry = 0;
int r;
/*
/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
*gpa &= ~(gpa_t)7;
*bytes = 8;
- r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
- if (r)
- gentry = 0;
- new = (const u8 *)&gentry;
}
- switch (*bytes) {
- case 4:
- gentry = *(const u32 *)new;
- break;
- case 8:
- gentry = *(const u64 *)new;
- break;
- default:
- gentry = 0;
- break;
+ if (*bytes == 4 || *bytes == 8) {
+ r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);
+ if (r)
+ gentry = 0;
}
return gentry;
pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
- gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);
-
/*
	 * No need to care whether the memory allocation is successful
	 * or not since pte prefetch is skipped if it does not have
mmu_topup_memory_caches(vcpu);
spin_lock(&vcpu->kvm->mmu_lock);
+
+ gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
+
++vcpu->kvm->stat.mmu_pte_write;
kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
return vcpu->arch.tsc_offset;
}
-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
{
struct vcpu_svm *svm = to_svm(vcpu);
u64 g_tsc_offset = 0;
svm->vmcb->control.tsc_offset = offset + g_tsc_offset;
mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+ return svm->vmcb->control.tsc_offset;
}
static void avic_init_vmcb(struct vcpu_svm *svm)
static int avic_init_access_page(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
- int ret;
+ int ret = 0;
+ mutex_lock(&kvm->slots_lock);
if (kvm->arch.apic_access_page_done)
- return 0;
+ goto out;
- ret = x86_set_memory_region(kvm,
- APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
- APIC_DEFAULT_PHYS_BASE,
- PAGE_SIZE);
+ ret = __x86_set_memory_region(kvm,
+ APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+ APIC_DEFAULT_PHYS_BASE,
+ PAGE_SIZE);
if (ret)
- return ret;
+ goto out;
kvm->arch.apic_access_page_done = true;
- return 0;
+out:
+ mutex_unlock(&kvm->slots_lock);
+ return ret;
}
static int avic_init_backing_page(struct kvm_vcpu *vcpu)
return ERR_PTR(err);
}
+static void svm_clear_current_vmcb(struct vmcb *vmcb)
+{
+ int i;
+
+ for_each_online_cpu(i)
+ cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+}
+
static void svm_free_vcpu(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
+ /*
+ * The vmcb page can be recycled, causing a false negative in
+ * svm_vcpu_load(). So, ensure that no logical CPU has this
+ * vmcb page recorded as its current vmcb.
+ */
+ svm_clear_current_vmcb(svm->vmcb);
+
__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
__free_page(virt_to_page(svm->nested.hsave));
__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
kvm_vcpu_uninit(vcpu);
kmem_cache_free(kvm_vcpu_cache, svm);
- /*
- * The vmcb page can be recycled, causing a false negative in
- * svm_vcpu_load(). So do a full IBPB now.
- */
- indirect_branch_prediction_barrier();
}
static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
.has_wbinvd_exit = svm_has_wbinvd_exit,
.read_l1_tsc_offset = svm_read_l1_tsc_offset,
- .write_tsc_offset = svm_write_tsc_offset,
+ .write_l1_tsc_offset = svm_write_l1_tsc_offset,
.set_tdp_cr3 = set_tdp_cr3,
* refer SDM volume 3b section 21.6.13 & 22.1.3.
*/
static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;
+module_param(ple_gap, uint, 0444);
static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;
module_param(ple_window, uint, 0444);
struct shared_msr_entry *guest_msrs;
int nmsrs;
int save_nmsrs;
+ bool guest_msrs_dirty;
unsigned long host_idt_base;
#ifdef CONFIG_X86_64
u64 msr_host_kernel_gs_base;
static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
u16 error_code);
static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
u32 msr, int type);
static DEFINE_PER_CPU(struct vmcs *, vmxarea);
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- /* We don't support disabling the feature for simplicity. */
- if (vmx->nested.enlightened_vmcs_enabled)
- return 0;
-
- vmx->nested.enlightened_vmcs_enabled = true;
-
/*
* vmcs_version represents the range of supported Enlightened VMCS
* versions: lower 8 bits is the minimal version, higher 8 bits is the
if (vmcs_version)
*vmcs_version = (KVM_EVMCS_VERSION << 8) | 1;
+ /* We don't support disabling the feature for simplicity. */
+ if (vmx->nested.enlightened_vmcs_enabled)
+ return 0;
+
+ vmx->nested.enlightened_vmcs_enabled = true;
+
vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
vmx->nested.msrs.exit_ctls_high &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL;
vmx->req_immediate_exit = false;
+ /*
+ * Note that guest MSRs to be saved/restored can also be changed
+ * when guest state is loaded. This happens when guest transitions
+ * to/from long-mode by setting MSR_EFER.LMA.
+ */
+ if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {
+ vmx->guest_msrs_dirty = false;
+ for (i = 0; i < vmx->save_nmsrs; ++i)
+ kvm_set_shared_msr(vmx->guest_msrs[i].index,
+ vmx->guest_msrs[i].data,
+ vmx->guest_msrs[i].mask);
+
+ }
+
if (vmx->loaded_cpu_state)
return;
vmcs_writel(HOST_GS_BASE, gs_base);
host_state->gs_base = gs_base;
}
-
- for (i = 0; i < vmx->save_nmsrs; ++i)
- kvm_set_shared_msr(vmx->guest_msrs[i].index,
- vmx->guest_msrs[i].data,
- vmx->guest_msrs[i].mask);
}
static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
move_msr_up(vmx, index, save_nmsrs++);
vmx->save_nmsrs = save_nmsrs;
+ vmx->guest_msrs_dirty = true;
if (cpu_has_vmx_msr_bitmap())
vmx_update_msr_bitmap(&vmx->vcpu);
return vcpu->arch.tsc_offset;
}
-/*
- * writes 'offset' into guest's timestamp counter offset register
- */
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
{
+ u64 active_offset = offset;
if (is_guest_mode(vcpu)) {
/*
* We're here if L1 chose not to trap WRMSR to TSC. According
* set for L2 remains unchanged, and still needs to be added
* to the newly set TSC to get L2's TSC.
*/
- struct vmcs12 *vmcs12;
- /* recalculate vmcs02.TSC_OFFSET: */
- vmcs12 = get_vmcs12(vcpu);
- vmcs_write64(TSC_OFFSET, offset +
- (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
- vmcs12->tsc_offset : 0));
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+ active_offset += vmcs12->tsc_offset;
} else {
trace_kvm_write_tsc_offset(vcpu->vcpu_id,
vmcs_read64(TSC_OFFSET), offset);
- vmcs_write64(TSC_OFFSET, offset);
}
+
+ vmcs_write64(TSC_OFFSET, active_offset);
+ return active_offset;
}
/*
spin_unlock(&vmx_vpid_lock);
}
-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
u32 msr, int type)
{
int f = sizeof(unsigned long);
}
}
-static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
u32 msr, int type)
{
int f = sizeof(unsigned long);
}
}
-static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
u32 msr, int type, bool value)
{
if (value)
struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
- vmcs12->hdr.revision_id = evmcs->revision_id;
-
/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
vmcs12->tpr_threshold = evmcs->tpr_threshold;
vmcs12->guest_rip = evmcs->guest_rip;
vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);
- if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) {
+ /*
+ * Currently, KVM only supports eVMCS version 1
+ * (== KVM_EVMCS_VERSION) and thus we expect the guest to set this
+ * value in the first u32 field of the eVMCS, which should specify
+ * the eVMCS VersionNumber.
+ *
+ * The guest should learn which eVMCS versions the host supports by
+ * examining CPUID.0x4000000A.EAX[0:15]. The host userspace VMM is
+ * expected to set this CPUID leaf according to the value returned
+ * in vmcs_version from nested_enable_evmcs().
+ *
+ * However, it turns out that Microsoft Hyper-V fails to comply
+ * with its own invented interface: when Hyper-V uses eVMCS, it
+ * just sets the first u32 field of the eVMCS to the revision_id
+ * specified in MSR_IA32_VMX_BASIC, instead of the eVMCS version
+ * number, which is one of the supported versions specified in
+ * CPUID.0x4000000A.EAX[0:15].
+ *
+ * To work around this Hyper-V bug, we accept either a supported
+ * eVMCS version or the VMCS12 revision_id as valid values for the
+ * first u32 field of the eVMCS.
+ */
+ if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
+ (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
nested_release_evmcs(vcpu);
return 0;
}
* present in struct hv_enlightened_vmcs, ...). Make sure there
* are no leftovers.
*/
- if (from_launch)
- memset(vmx->nested.cached_vmcs12, 0,
- sizeof(*vmx->nested.cached_vmcs12));
+ if (from_launch) {
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ memset(vmcs12, 0, sizeof(*vmcs12));
+ vmcs12->hdr.revision_id = VMCS12_REVISION;
+ }
}
return 1;
.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
- .write_tsc_offset = vmx_write_tsc_offset,
+ .write_l1_tsc_offset = vmx_write_l1_tsc_offset,
.set_tdp_cr3 = vmx_set_cr3,
static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
{
- kvm_x86_ops->write_tsc_offset(vcpu, offset);
- vcpu->arch.tsc_offset = offset;
+ vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset);
}
static inline bool kvm_check_tsc_unstable(void)
static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
s64 adjustment)
{
- kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment);
+ u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+ kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment);
}
static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment)
clock_pairing.nsec = ts.tv_nsec;
clock_pairing.tsc = kvm_read_l1_tsc(vcpu, cycle);
clock_pairing.flags = 0;
+ memset(&clock_pairing.pad, 0, sizeof(clock_pairing.pad));
ret = 0;
if (kvm_write_guest(vcpu->kvm, paddr, &clock_pairing,
else {
if (vcpu->arch.apicv_active)
kvm_x86_ops->sync_pir_to_irr(vcpu);
- kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
+ if (ioapic_in_kernel(vcpu->kvm))
+ kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
}
if (is_guest_mode(vcpu))
# error Linux requires the Xtensa Windowed Registers Option.
#endif
-#define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH
+/* Xtensa ABI requires stack alignment to be at least 16 */
+
+#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)
+
+#define ARCH_SLAB_MINALIGN STACK_ALIGN
/*
* User space process size: 1 GB.
DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp));
DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable));
#if XTENSA_HAVE_COPROCESSORS
- DEFINE(THREAD_XTREGS_CP0, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP1, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP2, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP3, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP4, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP5, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP6, offsetof (struct thread_info, xtregs_cp));
- DEFINE(THREAD_XTREGS_CP7, offsetof (struct thread_info, xtregs_cp));
+ DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0));
+ DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1));
+ DEFINE(THREAD_XTREGS_CP2, offsetof(struct thread_info, xtregs_cp.cp2));
+ DEFINE(THREAD_XTREGS_CP3, offsetof(struct thread_info, xtregs_cp.cp3));
+ DEFINE(THREAD_XTREGS_CP4, offsetof(struct thread_info, xtregs_cp.cp4));
+ DEFINE(THREAD_XTREGS_CP5, offsetof(struct thread_info, xtregs_cp.cp5));
+ DEFINE(THREAD_XTREGS_CP6, offsetof(struct thread_info, xtregs_cp.cp6));
+ DEFINE(THREAD_XTREGS_CP7, offsetof(struct thread_info, xtregs_cp.cp7));
#endif
DEFINE(THREAD_XTREGS_USER, offsetof (struct thread_info, xtregs_user));
DEFINE(XTREGS_USER_SIZE, sizeof(xtregs_user_t));
initialize_mmu
#if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
rsr a2, excsave1
- movi a3, 0x08000000
+ movi a3, XCHAL_KSEG_PADDR
+ bltu a2, a3, 1f
+ sub a2, a2, a3
+ movi a3, XCHAL_KSEG_SIZE
bgeu a2, a3, 1f
- movi a3, 0xd0000000
+ movi a3, XCHAL_KSEG_CACHED_VADDR
add a2, a2, a3
wsr a2, excsave1
1:
void coprocessor_flush_all(struct thread_info *ti)
{
- unsigned long cpenable;
+ unsigned long cpenable, old_cpenable;
int i;
preempt_disable();
+ RSR_CPENABLE(old_cpenable);
cpenable = ti->cpenable;
+ WSR_CPENABLE(cpenable);
for (i = 0; i < XCHAL_CP_MAX; i++) {
if ((cpenable & 1) != 0 && coprocessor_owner[i] == ti)
coprocessor_flush(ti, i);
cpenable >>= 1;
}
+ WSR_CPENABLE(old_cpenable);
preempt_enable();
}
}
+#if XTENSA_HAVE_COPROCESSORS
+#define CP_OFFSETS(cp) \
+ { \
+ .elf_xtregs_offset = offsetof(elf_xtregs_t, cp), \
+ .ti_offset = offsetof(struct thread_info, xtregs_cp.cp), \
+ .sz = sizeof(xtregs_ ## cp ## _t), \
+ }
+
+static const struct {
+ size_t elf_xtregs_offset;
+ size_t ti_offset;
+ size_t sz;
+} cp_offsets[] = {
+ CP_OFFSETS(cp0),
+ CP_OFFSETS(cp1),
+ CP_OFFSETS(cp2),
+ CP_OFFSETS(cp3),
+ CP_OFFSETS(cp4),
+ CP_OFFSETS(cp5),
+ CP_OFFSETS(cp6),
+ CP_OFFSETS(cp7),
+};
+#endif
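The point of the per-coprocessor offset table above is that elf_xtregs_t and struct thread_info lay out the coprocessor save areas independently, so the padding between cp0..cp7 need not match in the two structures. A single bulk xtregs_coprocessor_t copy (as the code below used to do) is only correct when the layouts happen to coincide; copying each coprocessor's block to its own offset stays correct when the alignment padding differs.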
+
static int ptrace_getxregs(struct task_struct *child, void __user *uregs)
{
struct pt_regs *regs = task_pt_regs(child);
struct thread_info *ti = task_thread_info(child);
elf_xtregs_t __user *xtregs = uregs;
int ret = 0;
+ int i __maybe_unused;
if (!access_ok(VERIFY_WRITE, uregs, sizeof(elf_xtregs_t)))
return -EIO;
#if XTENSA_HAVE_COPROCESSORS
/* Flush all coprocessor registers to memory. */
coprocessor_flush_all(ti);
- ret |= __copy_to_user(&xtregs->cp0, &ti->xtregs_cp,
- sizeof(xtregs_coprocessor_t));
+
+ for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i)
+ ret |= __copy_to_user((char __user *)xtregs +
+ cp_offsets[i].elf_xtregs_offset,
+ (const char *)ti +
+ cp_offsets[i].ti_offset,
+ cp_offsets[i].sz);
#endif
	ret |= __copy_to_user(&xtregs->opt, &regs->xtregs_opt,
sizeof(xtregs->opt));
struct pt_regs *regs = task_pt_regs(child);
elf_xtregs_t *xtregs = uregs;
int ret = 0;
+ int i __maybe_unused;
if (!access_ok(VERIFY_READ, uregs, sizeof(elf_xtregs_t)))
return -EFAULT;
coprocessor_flush_all(ti);
coprocessor_release_all(ti);
- ret |= __copy_from_user(&ti->xtregs_cp, &xtregs->cp0,
- sizeof(xtregs_coprocessor_t));
+ for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i)
+ ret |= __copy_from_user((char *)ti + cp_offsets[i].ti_offset,
+ (const char __user *)xtregs +
+ cp_offsets[i].elf_xtregs_offset,
+ cp_offsets[i].sz);
#endif
	ret |= __copy_from_user(&regs->xtregs_opt, &xtregs->opt,
sizeof(xtregs->opt));
if (bio_flagged(bio_src, BIO_THROTTLED))
bio_set_flag(bio, BIO_THROTTLED);
bio->bi_opf = bio_src->bi_opf;
+ bio->bi_ioprio = bio_src->bi_ioprio;
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;
* dispatch may still be in-progress since we dispatch requests
* from more than one contexts.
*
- * No need to quiesce queue if it isn't initialized yet since
- * blk_freeze_queue() should be enough for cases of passthrough
- * request.
+ * We rely on the driver to deal with the race in case queue
+ * initialization isn't done.
*/
if (q->mq_ops && blk_queue_init_done(q))
blk_mq_quiesce_queue(q);
return -EINVAL;
while (nr_sects) {
- unsigned int req_sects = min_t(unsigned int, nr_sects,
+ sector_t req_sects = min_t(sector_t, nr_sects,
bio_allowed_max_sectors(q));
+ WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
+
bio = blk_next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
return NULL;
bio->bi_disk = bio_src->bi_disk;
bio->bi_opf = bio_src->bi_opf;
+ bio->bi_ioprio = bio_src->bi_ioprio;
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
{
struct crypto_report_cipher rcipher;
- strlcpy(rcipher.type, "cipher", sizeof(rcipher.type));
+ strncpy(rcipher.type, "cipher", sizeof(rcipher.type));
rcipher.blocksize = alg->cra_blocksize;
rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
{
struct crypto_report_comp rcomp;
- strlcpy(rcomp.type, "compression", sizeof(rcomp.type));
+ strncpy(rcomp.type, "compression", sizeof(rcomp.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
sizeof(struct crypto_report_comp), &rcomp))
goto nla_put_failure;
{
struct crypto_report_acomp racomp;
- strlcpy(racomp.type, "acomp", sizeof(racomp.type));
+ strncpy(racomp.type, "acomp", sizeof(racomp.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP,
sizeof(struct crypto_report_acomp), &racomp))
{
struct crypto_report_akcipher rakcipher;
- strlcpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
+ strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER,
sizeof(struct crypto_report_akcipher), &rakcipher))
{
struct crypto_report_kpp rkpp;
- strlcpy(rkpp.type, "kpp", sizeof(rkpp.type));
+ strncpy(rkpp.type, "kpp", sizeof(rkpp.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_KPP,
sizeof(struct crypto_report_kpp), &rkpp))
static int crypto_report_one(struct crypto_alg *alg,
struct crypto_user_alg *ualg, struct sk_buff *skb)
{
- strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
- strlcpy(ualg->cru_driver_name, alg->cra_driver_name,
+ strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
+ strncpy(ualg->cru_driver_name, alg->cra_driver_name,
sizeof(ualg->cru_driver_name));
- strlcpy(ualg->cru_module_name, module_name(alg->cra_module),
+ strncpy(ualg->cru_module_name, module_name(alg->cra_module),
sizeof(ualg->cru_module_name));
ualg->cru_type = 0;
if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
struct crypto_report_larval rl;
- strlcpy(rl.type, "larval", sizeof(rl.type));
+ strncpy(rl.type, "larval", sizeof(rl.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL,
sizeof(struct crypto_report_larval), &rl))
goto nla_put_failure;
u64 v64;
u32 v32;
+ memset(&raead, 0, sizeof(raead));
+
strncpy(raead.type, "aead", sizeof(raead.type));
v32 = atomic_read(&alg->encrypt_cnt);
u64 v64;
u32 v32;
+ memset(&rcipher, 0, sizeof(rcipher));
+
strlcpy(rcipher.type, "cipher", sizeof(rcipher.type));
v32 = atomic_read(&alg->encrypt_cnt);
u64 v64;
u32 v32;
+ memset(&rcomp, 0, sizeof(rcomp));
+
strlcpy(rcomp.type, "compression", sizeof(rcomp.type));
v32 = atomic_read(&alg->compress_cnt);
rcomp.stat_compress_cnt = v32;
u64 v64;
u32 v32;
+ memset(&racomp, 0, sizeof(racomp));
+
strlcpy(racomp.type, "acomp", sizeof(racomp.type));
v32 = atomic_read(&alg->compress_cnt);
racomp.stat_compress_cnt = v32;
u64 v64;
u32 v32;
+ memset(&rakcipher, 0, sizeof(rakcipher));
+
strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
v32 = atomic_read(&alg->encrypt_cnt);
rakcipher.stat_encrypt_cnt = v32;
struct crypto_stat rkpp;
u32 v;
+ memset(&rkpp, 0, sizeof(rkpp));
+
strlcpy(rkpp.type, "kpp", sizeof(rkpp.type));
v = atomic_read(&alg->setsecret_cnt);
u64 v64;
u32 v32;
+ memset(&rhash, 0, sizeof(rhash));
+
strncpy(rhash.type, "ahash", sizeof(rhash.type));
v32 = atomic_read(&alg->hash_cnt);
u64 v64;
u32 v32;
+ memset(&rhash, 0, sizeof(rhash));
+
strncpy(rhash.type, "shash", sizeof(rhash.type));
v32 = atomic_read(&alg->hash_cnt);
u64 v64;
u32 v32;
+ memset(&rrng, 0, sizeof(rrng));
+
strncpy(rrng.type, "rng", sizeof(rrng.type));
v32 = atomic_read(&alg->generate_cnt);
struct crypto_user_alg *ualg,
struct sk_buff *skb)
{
+ memset(ualg, 0, sizeof(*ualg));
+
strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
strlcpy(ualg->cru_driver_name, alg->cra_driver_name,
sizeof(ualg->cru_driver_name));
if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
struct crypto_stat rl;
+ memset(&rl, 0, sizeof(rl));
strlcpy(rl.type, "larval", sizeof(rl.type));
if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL,
sizeof(struct crypto_stat), &rl))
ctx->cryptd_tfm = cryptd_tfm;
- reqsize = sizeof(struct skcipher_request);
- reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base);
+ reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm));
+ reqsize = max(reqsize, crypto_skcipher_reqsize(&cryptd_tfm->base));
+ reqsize += sizeof(struct skcipher_request);
crypto_skcipher_set_reqsize(tfm, reqsize);
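Taking the maximum of the child cipher's reqsize and the cryptd wrapper's reqsize, rather than only the wrapper's as before, is presumably needed because the same request memory serves both paths: when the simd wrapper can run synchronously it hands the request straight to the child skcipher, and otherwise it queues it through cryptd, so the subrequest area must be large enough for whichever path is taken.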
config XPOWER_PMIC_OPREGION
bool "ACPI operation region support for XPower AXP288 PMIC"
- depends on MFD_AXP20X_I2C && IOSF_MBI
+ depends on MFD_AXP20X_I2C && IOSF_MBI=y
help
This config adds ACPI operation region support for XPower AXP288 PMIC.
{"PNP0200", 0}, /* AT DMA Controller */
{"ACPI0009", 0}, /* IOxAPIC */
{"ACPI000A", 0}, /* IOAPIC */
+ {"SMB0001", 0}, /* ACPI SMBUS virtual device */
{"", 0},
};
{
acpi_status status;
u32 buffer_length;
- u32 data_length;
void *buffer;
union acpi_operand_object *buffer_desc;
u32 function;
case ACPI_ADR_SPACE_SMBUS:
buffer_length = ACPI_SMBUS_BUFFER_SIZE;
- data_length = ACPI_SMBUS_DATA_SIZE;
function = ACPI_WRITE | (obj_desc->field.attribute << 16);
break;
case ACPI_ADR_SPACE_IPMI:
buffer_length = ACPI_IPMI_BUFFER_SIZE;
- data_length = ACPI_IPMI_DATA_SIZE;
function = ACPI_WRITE;
break;
/* Add header length to get the full size of the buffer */
buffer_length += ACPI_SERIAL_HEADER_SIZE;
- data_length = source_desc->buffer.pointer[1];
function = ACPI_WRITE | (accessor_type << 16);
break;
return_ACPI_STATUS(AE_AML_INVALID_SPACE_ID);
}
-#if 0
- OBSOLETE ?
- /* Check for possible buffer overflow */
- if (data_length > source_desc->buffer.length) {
- ACPI_ERROR((AE_INFO,
- "Length in buffer header (%u)(%u) is greater than "
- "the physical buffer length (%u) and will overflow",
- data_length, buffer_length,
- source_desc->buffer.length));
-
- return_ACPI_STATUS(AE_AML_BUFFER_LIMIT);
- }
-#endif
-
/* Create the transfer/bidirectional/return buffer */
buffer_desc = acpi_ut_create_buffer_object(buffer_length);
/* Copy the input buffer data to the transfer buffer */
buffer = buffer_desc->buffer.pointer;
- memcpy(buffer, source_desc->buffer.pointer, data_length);
+ memcpy(buffer, source_desc->buffer.pointer,
+ min(buffer_length, source_desc->buffer.length));
/* Lock entire transaction if requested */
return rc;
if (ars_status_process_records(acpi_desc))
- return -ENOMEM;
+ dev_err(acpi_desc->dev, "Failed to process ARS records\n");
- return 0;
+ return rc;
}
static int ars_register(struct acpi_nfit_desc *acpi_desc,
struct nvdimm *nvdimm, unsigned int cmd)
{
struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);
- struct nfit_spa *nfit_spa;
- int rc = 0;
if (nvdimm)
return 0;
* just needs guarantees that any ARS it initiates are not
* interrupted by any intervening start requests from userspace.
*/
- mutex_lock(&acpi_desc->init_mutex);
- list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
- if (acpi_desc->scrub_spa
- || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state)
- || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) {
- rc = -EBUSY;
- break;
- }
- mutex_unlock(&acpi_desc->init_mutex);
+ if (work_busy(&acpi_desc->dwork.work))
+ return -EBUSY;
- return rc;
+ return 0;
}
int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
/* These specific Samsung models/firmware-revs do not handle LPM well */
{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },
- { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
+ { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, },
/* devices that don't properly handle queued TRIM commands */
{ "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
func_enter ();
- fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n",
+ fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n",
queue, nentries);
p = aligned_kmalloc (sz, GFP_KERNEL, 0x10);
{
func_enter ();
- fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue);
+ fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue);
write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME);
write_fs (dev, FP_SA(queue), 0);
bio.bi_end_io = floppy_rb0_cb;
bio_set_op_attrs(&bio, REQ_OP_READ, 0);
+ init_completion(&cbdata.complete);
+
submit_bio(&bio);
process_fd_request();
- init_completion(&cbdata.complete);
wait_for_completion(&cbdata.complete);
__free_page(page);
/* Ensure the arm clock divider is what we expect */
ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);
if (ret) {
+ int ret1;
+
dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
- regulator_set_voltage_tol(arm_reg, volt_old, 0);
+ ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0);
+ if (ret1)
+ dev_warn(cpu_dev,
+ "failed to restore vddarm voltage: %d\n", ret1);
return ret;
}
{},
};
+static const struct of_device_id *ti_cpufreq_match_node(void)
+{
+ struct device_node *np;
+ const struct of_device_id *match;
+
+ np = of_find_node_by_path("/");
+ match = of_match_node(ti_cpufreq_of_match, np);
+ of_node_put(np);
+
+ return match;
+}
+
static int ti_cpufreq_probe(struct platform_device *pdev)
{
u32 version[VERSION_COUNT];
- struct device_node *np;
const struct of_device_id *match;
struct opp_table *ti_opp_table;
struct ti_cpufreq_data *opp_data;
const char * const reg_names[] = {"vdd", "vbb"};
int ret;
- np = of_find_node_by_path("/");
- match = of_match_node(ti_cpufreq_of_match, np);
- of_node_put(np);
+ match = dev_get_platdata(&pdev->dev);
if (!match)
return -ENODEV;
static int ti_cpufreq_init(void)
{
- platform_device_register_simple("ti-cpufreq", -1, NULL, 0);
+ const struct of_device_id *match;
+
+ /* Check to ensure we are on a compatible platform */
+ match = ti_cpufreq_match_node();
+ if (match)
+ platform_device_register_data(NULL, "ti-cpufreq", -1, match,
+ sizeof(*match));
+
return 0;
}
module_init(ti_cpufreq_init);
{
int ret;
struct cpuidle_driver *drv;
- struct cpuidle_device *dev;
drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);
if (!drv)
goto out_kfree_drv;
}
- ret = cpuidle_register_driver(drv);
- if (ret) {
- if (ret != -EBUSY)
- pr_err("Failed to register cpuidle driver\n");
- goto out_kfree_drv;
- }
-
/*
* Call arch CPU operations in order to initialize
* idle states suspend back-end specific data
ret = arm_cpuidle_init(cpu);
/*
- * Skip the cpuidle device initialization if the reported
+ * Allow the initialization to continue for other CPUs if the reported
* failure is a HW misconfiguration/breakage (-ENXIO).
*/
- if (ret == -ENXIO)
- return 0;
-
if (ret) {
pr_err("CPU %d failed to init idle CPU ops\n", cpu);
- goto out_unregister_drv;
- }
-
- dev = kzalloc(sizeof(*dev), GFP_KERNEL);
- if (!dev) {
- ret = -ENOMEM;
- goto out_unregister_drv;
+ ret = ret == -ENXIO ? 0 : ret;
+ goto out_kfree_drv;
}
- dev->cpu = cpu;
- ret = cpuidle_register_device(dev);
- if (ret) {
- pr_err("Failed to register cpuidle device for CPU %d\n",
- cpu);
- goto out_kfree_dev;
- }
+ ret = cpuidle_register(drv, NULL);
+ if (ret)
+ goto out_kfree_drv;
return 0;
-out_kfree_dev:
- kfree(dev);
-out_unregister_drv:
- cpuidle_unregister_driver(drv);
out_kfree_drv:
kfree(drv);
return ret;
while (--cpu >= 0) {
dev = per_cpu(cpuidle_devices, cpu);
drv = cpuidle_get_cpu_driver(dev);
- cpuidle_unregister_device(dev);
- cpuidle_unregister_driver(drv);
- kfree(dev);
+ cpuidle_unregister(drv);
kfree(drv);
}
int *splits_in_nents;
int *splits_out_nents = NULL;
struct sec_request_el *el, *temp;
+ bool split = skreq->src != skreq->dst;
mutex_init(&sec_req->lock);
sec_req->req_base = &skreq->base;
if (ret)
goto err_free_split_sizes;
- if (skreq->src != skreq->dst) {
+ if (split) {
sec_req->len_out = sg_nents(skreq->dst);
ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,
&splits_out, &splits_out_nents,
split_sizes[i],
skreq->src != skreq->dst,
splits_in[i], splits_in_nents[i],
- splits_out[i],
- splits_out_nents[i], info);
+ split ? splits_out[i] : NULL,
+ split ? splits_out_nents[i] : 0,
+ info);
if (IS_ERR(el)) {
ret = PTR_ERR(el);
goto err_free_elements;
* more refined but this is unlikely to happen so no need.
*/
- /* Cleanup - all elements in pointer arrays have been coppied */
- kfree(splits_in_nents);
- kfree(splits_in);
- kfree(splits_out_nents);
- kfree(splits_out);
- kfree(split_sizes);
-
/* Grab a big lock for a long time to avoid concurrency issues */
mutex_lock(&queue->queuelock);
(!queue->havesoftqueue ||
kfifo_avail(&queue->softqueue) > steps)) ||
!list_empty(&ctx->backlog)) {
+ ret = -EBUSY;
if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
list_add_tail(&sec_req->backlog_head, &ctx->backlog);
mutex_unlock(&queue->queuelock);
- return -EBUSY;
+ goto out;
}
- ret = -EBUSY;
mutex_unlock(&queue->queuelock);
goto err_free_elements;
}
if (ret)
goto err_free_elements;
- return -EINPROGRESS;
+ ret = -EINPROGRESS;
+out:
+ /* Cleanup - all elements in pointer arrays have been copied */
+ kfree(splits_in_nents);
+ kfree(splits_in);
+ kfree(splits_out_nents);
+ kfree(splits_out);
+ kfree(split_sizes);
+ return ret;
err_free_elements:
list_for_each_entry_safe(el, temp, &sec_req->elements, head) {
crypto_skcipher_ivsize(atfm),
DMA_BIDIRECTIONAL);
err_unmap_out_sg:
- if (skreq->src != skreq->dst)
+ if (split)
sec_unmap_sg_on_err(skreq->dst, steps, splits_out,
splits_out_nents, sec_req->len_out,
info->dev);
exp_info.ops = &udmabuf_ops;
exp_info.size = ubuf->pagecount << PAGE_SHIFT;
exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
buf = dma_buf_export(&exp_info);
if (IS_ERR(buf)) {
(params.mmap & ~PAGE_MASK)));
init_screen_info();
+
+ /* ARM does not permit early mappings to persist across paging_init() */
+ if (IS_ENABLED(CONFIG_ARM))
+ efi_memmap_unmap();
}
static int __init register_gop_device(void)
{
u64 mapsize;
- if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
+ if (!efi_enabled(EFI_BOOT)) {
pr_info("EFI services will not be available.\n");
return 0;
}
early_memunmap(tbl, sizeof(*tbl));
}
+ return 0;
+}
+int __init efi_apply_persistent_mem_reservations(void)
+{
if (efi.mem_reserve != EFI_INVALID_TABLE_ADDR) {
unsigned long prsv = efi.mem_reserve;
}
static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock);
+static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init;
-int efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+static int __init efi_memreserve_map_root(void)
{
- struct linux_efi_memreserve *rsv, *parent;
-
if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR)
return -ENODEV;
- rsv = kmalloc(sizeof(*rsv), GFP_KERNEL);
- if (!rsv)
+ efi_memreserve_root = memremap(efi.mem_reserve,
+ sizeof(*efi_memreserve_root),
+ MEMREMAP_WB);
+ if (WARN_ON_ONCE(!efi_memreserve_root))
return -ENOMEM;
+ return 0;
+}
- parent = memremap(efi.mem_reserve, sizeof(*rsv), MEMREMAP_WB);
- if (!parent) {
- kfree(rsv);
- return -ENOMEM;
+int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+{
+ struct linux_efi_memreserve *rsv;
+ int rc;
+
+ if (efi_memreserve_root == (void *)ULONG_MAX)
+ return -ENODEV;
+
+ if (!efi_memreserve_root) {
+ rc = efi_memreserve_map_root();
+ if (rc)
+ return rc;
}
+ rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC);
+ if (!rsv)
+ return -ENOMEM;
+
rsv->base = addr;
rsv->size = size;
spin_lock(&efi_mem_reserve_persistent_lock);
- rsv->next = parent->next;
- parent->next = __pa(rsv);
+ rsv->next = efi_memreserve_root->next;
+ efi_memreserve_root->next = __pa(rsv);
spin_unlock(&efi_mem_reserve_persistent_lock);
- memunmap(parent);
+ return 0;
+}
+static int __init efi_memreserve_root_init(void)
+{
+ if (efi_memreserve_root)
+ return 0;
+ if (efi_memreserve_map_root())
+ efi_memreserve_root = (void *)ULONG_MAX;
return 0;
}
+early_initcall(efi_memreserve_root_init);
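For orientation, the node layout assumed by the field accesses in this hunk looks roughly like the following; this is a sketch inferred from the code above, not a definition quoted from the headers:

/* Sketch inferred from the accesses above: efi.mem_reserve points at
 * the head of a singly linked list of reservations kept by physical
 * address, so each node must be mapped (memremap) before use. */
struct linux_efi_memreserve {
	phys_addr_t	base;	/* start of the reserved range */
	phys_addr_t	size;	/* length of the reserved range */
	phys_addr_t	next;	/* __pa() of the next node, or 0 */
};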
#ifdef CONFIG_KEXEC
static int update_efi_random_seed(struct notifier_block *nb,
efi_guid_t memreserve_table_guid = LINUX_EFI_MEMRESERVE_TABLE_GUID;
efi_status_t status;
+ if (IS_ENABLED(CONFIG_ARM))
+ return;
+
status = efi_call_early(allocate_pool, EFI_LOADER_DATA, sizeof(*rsv),
(void **)&rsv);
if (status != EFI_SUCCESS) {
return efi_status;
}
}
+
+ /* shrink the FDT back to its minimum size */
+ fdt_pack(fdt);
+
return EFI_SUCCESS;
fdt_set_fail:
void __init efi_memmap_unmap(void)
{
+ if (!efi_enabled(EFI_MEMMAP))
+ return;
+
if (!efi.memmap.late) {
unsigned long size;
} \
\
init_completion(&efi_rts_work.efi_rts_comp); \
- INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \
+ INIT_WORK(&efi_rts_work.work, efi_call_rts); \
efi_rts_work.arg1 = _arg1; \
efi_rts_work.arg2 = _arg2; \
efi_rts_work.arg3 = _arg3; \
#include <linux/of.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
+#include <linux/sched.h>
#include <linux/serdev.h>
#include <linux/slab.h>
int ret;
/* write is only buffered synchronously */
- ret = serdev_device_write(serdev, buf, count, 0);
+ ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);
if (ret < 0)
return ret;
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
+#include <linux/sched.h>
#include <linux/serdev.h>
#include <linux/slab.h>
#include <linux/wait.h>
int ret;
/* write is only buffered synchronously */
- ret = serdev_device_write(serdev, buf, count, 0);
+ ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);
if (ret < 0)
return ret;
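Both hunks fix the same bug: the final argument of serdev_device_write() is a timeout, and 0 does not mean "wait forever", so a synchronous buffered write could give up before all of the data was queued; MAX_SCHEDULE_TIMEOUT (hence the new <linux/sched.h> includes) waits as long as needed. A kernel-style sketch of the corrected call, as a fragment rather than a complete driver:

#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */
#include <linux/serdev.h>

/*
 * Sketch only: queue a synchronous buffered write through serdev,
 * waiting for buffer space rather than failing when it is full.
 */
static int example_send(struct serdev_device *serdev,
			const unsigned char *buf, size_t count)
{
	int ret;

	/* A timeout of 0 would mean "don't wait"; wait indefinitely. */
	ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;

	return 0;
}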
#define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__)
enum {
- GPIO_MOCKUP_DIR_OUT = 0,
- GPIO_MOCKUP_DIR_IN = 1,
+ GPIO_MOCKUP_DIR_IN = 0,
+ GPIO_MOCKUP_DIR_OUT = 1,
};
/*
{
struct gpio_mockup_chip *chip = gpiochip_get_data(gc);
- return chip->lines[offset].dir;
+ return !chip->lines[offset].dir;
}
static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset)
if (pxa_gpio_has_pinctrl()) {
ret = pinctrl_gpio_direction_input(chip->base + offset);
- if (!ret)
- return 0;
+ if (ret)
+ return ret;
}
spin_lock_irqsave(&gpio_lock, flags);
gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
if (!gdev->descs) {
status = -ENOMEM;
- goto err_free_gdev;
+ goto err_free_ida;
}
if (chip->ngpio == 0) {
kfree_const(gdev->label);
err_free_descs:
kfree(gdev->descs);
-err_free_gdev:
+err_free_ida:
ida_simple_remove(&gpio_ida, gdev->id);
+err_free_gdev:
/* failures here can mean systems won't boot... */
pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,
gdev->base, gdev->base + gdev->ngpio - 1,
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
- amdgpu_dpm_switch_power_profile(adev,
- PP_SMC_POWER_PROFILE_COMPUTE, !idle);
+ if (adev->powerplay.pp_funcs &&
+ adev->powerplay.pp_funcs->switch_power_profile)
+ amdgpu_dpm_switch_power_profile(adev,
+ PP_SMC_POWER_PROFILE_COMPUTE,
+ !idle);
}
bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
"dither",
amdgpu_dither_enum_list, sz);
+ if (amdgpu_device_has_dc_support(adev)) {
+ adev->mode_info.max_bpc_property =
+ drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16);
+ if (!adev->mode_info.max_bpc_property)
+ return -ENOMEM;
+ }
+
return 0;
}
struct drm_property *audio_property;
/* FMT dithering */
struct drm_property *dither_property;
+ /* maximum number of bits per channel for monitor color */
+ struct drm_property *max_bpc_property;
/* hardcoded DFP edid from BIOS */
struct edid *bios_hardcoded_edid;
int bios_hardcoded_edid_size;
if (level == adev->vm_manager.root_level)
/* For the root directory */
- return round_up(adev->vm_manager.max_pfn, 1 << shift) >> shift;
+ return round_up(adev->vm_manager.max_pfn, 1ULL << shift) >> shift;
else if (level != AMDGPU_VM_PTB)
/* Everything in between */
return 512;
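The widened shift matters because large address spaces push the root-directory shift to 32 or more: a 32-bit "1 << shift" is undefined for such counts (and in practice truncated), while 1ULL << shift stays in 64 bits. The uint64_t casts in the following hunks fix the same class of bug for the incr, entry_end and dst computations. A standalone demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int shift = 39;	/* a plausible root-PD shift */

	/*
	 * A 32-bit "1 << shift" is undefined for shift >= 32; x86
	 * masks the count to 5 bits, so 1 << 39 silently becomes
	 * 1 << 7. The mask is made explicit here to keep the demo
	 * itself well defined.
	 */
	uint32_t bad = 1U << (shift & 31);

	/* Widening the constant keeps the arithmetic in 64 bits. */
	uint64_t good = 1ULL << shift;

	printf("1U   << 39 -> %u\n", bad);	/* 128 */
	printf("1ULL << 39 -> %llu\n", (unsigned long long)good);
	return 0;
}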
continue;
}
- /* First check if the entry is already handled */
- if (cursor.pfn < frag_start) {
- cursor.entry->huge = true;
- amdgpu_vm_pt_next(adev, &cursor);
- continue;
- }
-
/* If it isn't already handled it can't be a huge page */
if (cursor.entry->huge) {
/* Add the entry to the relocated list to update it. */
if (!amdgpu_vm_pt_descendant(adev, &cursor))
return -ENOENT;
continue;
- } else if (frag >= parent_shift) {
+ } else if (frag >= parent_shift &&
+ cursor.level - 1 != adev->vm_manager.root_level) {
/* If the fragment size is even larger than the parent
- * shift we should go up one level and check it again.
+ * shift we should go up one level and check it again
+ * unless one level up is the root level.
*/
if (!amdgpu_vm_pt_ancestor(&cursor))
return -ENOENT;
}
/* Looks good so far, calculate parameters for the update */
- incr = AMDGPU_GPU_PAGE_SIZE << shift;
+ incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift;
mask = amdgpu_vm_entries_mask(adev, cursor.level);
pe_start = ((cursor.pfn >> shift) & mask) * 8;
- entry_end = (mask + 1) << shift;
+ entry_end = (uint64_t)(mask + 1) << shift;
entry_end += cursor.pfn & ~(entry_end - 1);
entry_end = min(entry_end, end);
flags | AMDGPU_PTE_FRAG(frag));
pe_start += nptes * 8;
- dst += nptes * AMDGPU_GPU_PAGE_SIZE << shift;
+ dst += (uint64_t)nptes * AMDGPU_GPU_PAGE_SIZE << shift;
frag_start = upd_end;
if (frag_start >= frag_end) {
}
} while (frag_start < entry_end);
- if (frag >= shift)
+ if (amdgpu_vm_pt_descendant(adev, &cursor)) {
+ /* Mark all child entries as huge */
+ while (cursor.pfn < frag_start) {
+ cursor.entry->huge = true;
+ amdgpu_vm_pt_next(adev, &cursor);
+ }
+
+ } else if (frag >= shift) {
+ /* or just move on to the next on the same level. */
amdgpu_vm_pt_next(adev, &cursor);
+ }
}
return 0;
#endif
WREG32_FIELD15(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1);
+ udelay(50);
/* carrizo do enable cp interrupt after cp inited */
- if (!(adev->flags & AMD_IS_APU))
+ if (!(adev->flags & AMD_IS_APU)) {
gfx_v9_0_enable_gui_idle_interrupt(adev, true);
-
- udelay(50);
+ udelay(50);
+ }
#ifdef AMDGPU_RLC_DEBUG_RETRY
/* RLC_GPM_GENERAL_6 : RLC Ucode version */
/* Program the system aperture low logical page number. */
WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);
+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)
/*
* to get rid of the VM fault and hardware hang.
*/
WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
- max((adev->gmc.vram_end >> 18) + 0x1,
+ max((adev->gmc.fb_end >> 18) + 0x1,
adev->gmc.agp_end >> 18));
else
WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);
+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
/* Set default page address. */
value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
MODULE_FIRMWARE("amdgpu/pitcairn_mc.bin");
MODULE_FIRMWARE("amdgpu/verde_mc.bin");
MODULE_FIRMWARE("amdgpu/oland_mc.bin");
+MODULE_FIRMWARE("amdgpu/hainan_mc.bin");
MODULE_FIRMWARE("amdgpu/si58_mc.bin");
#define MC_SEQ_MISC0__MT__MASK 0xf0000000
/* Program the system aperture low logical page number. */
WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);
+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)
/*
* to get rid of the VM fault and hardware hang.
*/
WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
- max((adev->gmc.vram_end >> 18) + 0x1,
+ max((adev->gmc.fb_end >> 18) + 0x1,
adev->gmc.agp_end >> 18));
else
WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);
+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
/* Set default page address. */
value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
#define mmMP0_MISC_LIGHT_SLEEP_CTRL 0x01ba
#define mmMP0_MISC_LIGHT_SLEEP_CTRL_BASE_IDX 0
+/* for Vega20 register name change */
+#define mmHDP_MEM_POWER_CTRL 0x00d4
+#define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK 0x00000001L
+#define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK 0x00000002L
+#define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK 0x00010000L
+#define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK 0x00020000L
+#define mmHDP_MEM_POWER_CTRL_BASE_IDX 0
/*
* Indirect registers accessor
*/
{
uint32_t def, data;
- def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS));
+ if (adev->asic_type == CHIP_VEGA20) {
+ def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL));
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
- data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
- else
- data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
+ data |= HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK |
+ HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK |
+ HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK |
+ HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK;
+ else
+ data &= ~(HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK |
+ HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK |
+ HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK |
+ HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK);
- if (def != data)
- WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data);
+ if (def != data)
+ WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL), data);
+ } else {
+ def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS));
+
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
+ data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+ else
+ data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+
+ if (def != data)
+ WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data);
+ }
}
static void soc15_update_drm_clock_gating(struct amdgpu_device *adev, bool enable)
else
wptr_off = adev->wb.gpu_addr + (adev->irq.ih.wptr_offs * 4);
WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO, lower_32_bits(wptr_off));
- WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFF);
+ WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFFFF);
/* set rptr, wptr to 0 */
WREG32_SOC15(OSSSYS, 0, mmIH_RB_RPTR, 0);
static enum dc_color_depth
convert_color_depth_from_display_info(const struct drm_connector *connector)
{
+ struct dm_connector_state *dm_conn_state =
+ to_dm_connector_state(connector->state);
uint32_t bpc = connector->display_info.bpc;
+ /* TODO: Remove this when there's support for max_bpc in drm */
+ if (dm_conn_state && bpc > dm_conn_state->max_bpc)
+ /* Round down to nearest even number. */
+ bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1);
+
switch (bpc) {
case 0:
/*
} else if (property == adev->mode_info.underscan_property) {
dm_new_state->underscan_enable = val;
ret = 0;
+ } else if (property == adev->mode_info.max_bpc_property) {
+ dm_new_state->max_bpc = val;
+ ret = 0;
}
return ret;
} else if (property == adev->mode_info.underscan_property) {
*val = dm_state->underscan_enable;
ret = 0;
+ } else if (property == adev->mode_info.max_bpc_property) {
+ *val = dm_state->max_bpc;
+ ret = 0;
}
return ret;
}
drm_object_attach_property(&aconnector->base.base,
adev->mode_info.underscan_vborder_property,
0);
+ drm_object_attach_property(&aconnector->base.base,
+ adev->mode_info.max_bpc_property,
+ 0);
}
enum amdgpu_rmx_type scaling;
uint8_t underscan_vborder;
uint8_t underscan_hborder;
+ uint8_t max_bpc;
bool underscan_enable;
bool freesync_enable;
bool freesync_capable;
master->connector_id);
aconnector->mst_encoder = dm_dp_create_fake_mst_encoder(master);
+ drm_connector_attach_encoder(&aconnector->base,
+ &aconnector->mst_encoder->base);
- /*
- * TODO: understand why this one is needed
- */
drm_object_attach_property(
&connector->base,
dev->mode_config.path_property,
struct smu7_single_dpm_table *sclk_table = &(data->dpm_table.sclk_table);
struct smu7_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.sclk_table);
- int value;
+ int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
+ int golden_value = golden_sclk_table->dpm_levels
+ [golden_sclk_table->count - 1].value;
- value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) *
- 100 /
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
struct smu7_single_dpm_table *mclk_table = &(data->dpm_table.mclk_table);
struct smu7_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mclk_table);
- int value;
+ int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
+ int golden_value = golden_mclk_table->dpm_levels
+ [golden_mclk_table->count - 1].value;
- value = (mclk_table->dpm_levels[mclk_table->count - 1].value -
- golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) *
- 100 /
- golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
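These helpers now compute the overclock percentage as (current - golden) * 100 / golden rounded up with DIV_ROUND_UP(), matching the vega10/vega12/vega20 hunks below, instead of plain truncating division. A standalone check of the difference (the clock values are made up):

#include <stdio.h>

/* The kernel's rounding helper, from include/linux/kernel.h. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	int golden = 1600;	/* made-up golden (default) clock, MHz */
	int value = 1650;	/* made-up current top level, MHz */

	int truncated = (value - golden) * 100 / golden;	/* old */
	int rounded = DIV_ROUND_UP((value - golden) * 100, golden);

	printf("truncated:  %d%%\n", truncated);	/* 3 */
	printf("rounded up: %d%%\n", rounded);		/* 4 */
	return 0;
}

Note that DIV_ROUND_UP() is only an exact ceiling for non-negative numerators, so underclocked (negative) deltas round slightly differently.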
for (i = 0; i < wm_with_clock_ranges->num_wm_dmif_sets; i++) {
table->WatermarkRow[1][i].MinClock =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz /
+ 1000));
table->WatermarkRow[1][i].MaxClock =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz /
+ 1000));
table->WatermarkRow[1][i].MinUclk =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz /
+ 1000));
table->WatermarkRow[1][i].MaxUclk =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz /
+ 1000));
table->WatermarkRow[1][i].WmSetting = (uint8_t)
wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id;
}
for (i = 0; i < wm_with_clock_ranges->num_wm_mcif_sets; i++) {
table->WatermarkRow[0][i].MinClock =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz /
+ 1000));
table->WatermarkRow[0][i].MaxClock =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz /
+ 1000));
table->WatermarkRow[0][i].MinUclk =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz /
+ 1000));
table->WatermarkRow[0][i].MaxUclk =
cpu_to_le16((uint16_t)
- (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz) /
- 1000);
+ (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz /
+ 1000));
table->WatermarkRow[0][i].WmSetting = (uint8_t)
wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id;
}
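Every one of these watermark conversions had the uint16_t cast binding to the raw kHz value, truncating anything above 65535 kHz (about 65 MHz) before the divide; moving the division inside the parentheses divides first, so the cast only sees the in-range MHz result. A standalone illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t khz = 800000;	/* e.g. an 800 MHz clock, in kHz */

	/* Old: the cast truncates to 16 bits (800000 & 0xFFFF = 13568)
	 * before the division. */
	uint16_t wrong = (uint16_t)(khz) / 1000;

	/* New: divide first, then cast the now in-range MHz value. */
	uint16_t right = (uint16_t)(khz / 1000);

	printf("wrong: %u MHz\n", wrong);	/* 13 */
	printf("right: %u MHz\n", right);	/* 800 */
	return 0;
}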
struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
struct vega10_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
- int value;
-
- value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
- golden_sclk_table->dpm_levels
- [golden_sclk_table->count - 1].value) *
- 100 /
- golden_sclk_table->dpm_levels
+ int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
+ int golden_value = golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
+
return value;
}
struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
struct vega10_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
- int value;
-
- value = (mclk_table->dpm_levels
- [mclk_table->count - 1].value -
- golden_mclk_table->dpm_levels
- [golden_mclk_table->count - 1].value) *
- 100 /
- golden_mclk_table->dpm_levels
+ int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
+ int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
+
return value;
}
struct vega12_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
struct vega12_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
- int value;
+ int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
+ int golden_value = golden_sclk_table->dpm_levels
+ [golden_sclk_table->count - 1].value;
- value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) *
- 100 /
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
struct vega12_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
struct vega12_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
- int value;
-
- value = (mclk_table->dpm_levels
- [mclk_table->count - 1].value -
- golden_mclk_table->dpm_levels
- [golden_mclk_table->count - 1].value) *
- 100 /
- golden_mclk_table->dpm_levels
+ int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
+ int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
+
return value;
}
data->phy_clk_quad_eqn_b = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT;
data->phy_clk_quad_eqn_c = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT;
- data->registry_data.disallowed_features = 0x0;
+ /*
+ * Disable the following features for now:
+ * GFXCLK DS
+ * SOCLK DS
+ * LCLK DS
+ * DCEFCLK DS
+ * FCLK DS
+ * MP1CLK DS
+ * MP0CLK DS
+ */
+ data->registry_data.disallowed_features = 0xE0041C00;
data->registry_data.od_state_in_dc_support = 0;
data->registry_data.thermal_support = 1;
data->registry_data.skip_baco_hardware = 0;
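The 0xE0041C00 value is the seven deep-sleep feature bits from the comment OR'd into one mask. A sketch of how such a mask decomposes into named bit positions; the FEATURE_DS_*_BIT values below are chosen only so the OR reproduces 0xE0041C00 and are illustrative of the SMU11 firmware interface rather than quoted from it:

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical bit positions, picked so that the OR below yields
 * 0xE0041C00; the authoritative definitions live in the SMU11
 * firmware headers.
 */
enum {
	FEATURE_DS_GFXCLK_BIT	= 10,
	FEATURE_DS_SOCCLK_BIT	= 11,
	FEATURE_DS_LCLK_BIT	= 12,
	FEATURE_DS_DCEFCLK_BIT	= 18,
	FEATURE_DS_FCLK_BIT	= 29,
	FEATURE_DS_MP1CLK_BIT	= 30,
	FEATURE_DS_MP0CLK_BIT	= 31,
};

int main(void)
{
	uint32_t mask = (1U << FEATURE_DS_GFXCLK_BIT) |
			(1U << FEATURE_DS_SOCCLK_BIT) |
			(1U << FEATURE_DS_LCLK_BIT) |
			(1U << FEATURE_DS_DCEFCLK_BIT) |
			(1U << FEATURE_DS_FCLK_BIT) |
			(1U << FEATURE_DS_MP1CLK_BIT) |
			(1U << FEATURE_DS_MP0CLK_BIT);

	printf("disallowed_features = 0x%08X\n", mask);	/* 0xE0041C00 */
	return 0;
}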
&(data->dpm_table.gfx_table);
struct vega20_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
- int value;
+ int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
+ int golden_value = golden_sclk_table->dpm_levels
+ [golden_sclk_table->count - 1].value;
/* od percentage */
- value = DIV_ROUND_UP((sclk_table->dpm_levels[sclk_table->count - 1].value -
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 100,
- golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value);
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
&(data->dpm_table.mem_table);
struct vega20_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
- int value;
+ int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
+ int golden_value = golden_mclk_table->dpm_levels
+ [golden_mclk_table->count - 1].value;
/* od percentage */
- value = DIV_ROUND_UP((mclk_table->dpm_levels[mclk_table->count - 1].value -
- golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 100,
- golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value);
+ value -= golden_value;
+ value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
MODULE_DEVICE_TABLE(pci, pciidlist);
+static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
+{
+ struct apertures_struct *ap;
+ bool primary = false;
+
+ ap = alloc_apertures(1);
+ if (!ap)
+ return;
+
+ ap->ranges[0].base = pci_resource_start(pdev, 0);
+ ap->ranges[0].size = pci_resource_len(pdev, 0);
+
+#ifdef CONFIG_X86
+ primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
+#endif
+ drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary);
+ kfree(ap);
+}
+
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
+ ast_kick_out_firmware_fb(pdev);
+
return drm_get_pci_dev(pdev, ent, &driver);
}
drm_mode_config_cleanup(dev);
ast_mm_fini(ast);
- pci_iounmap(dev->pdev, ast->ioregs);
+ if (ast->ioregs != ast->regs + AST_IO_MM_OFFSET)
+ pci_iounmap(dev->pdev, ast->ioregs);
pci_iounmap(dev->pdev, ast->regs);
kfree(ast);
}
}
ast_bo_unreserve(bo);
+ ast_set_offset_reg(crtc);
ast_set_start_address_crt1(crtc, (u32)gpu_addr);
return 0;
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = i2c->dev->dev_private;
- uint32_t val;
+ uint32_t val, val2, count, pass;
+
+ count = 0;
+ pass = 0;
+ val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
+ do {
+ val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
+ if (val == val2) {
+ pass++;
+ } else {
+ pass = 0;
+ val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
+ }
+ } while ((pass < 5) && (count++ < 0x10000));
- val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4;
return val & 1 ? 1 : 0;
}
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = i2c->dev->dev_private;
- uint32_t val;
+ uint32_t val, val2, count, pass;
+
+ count = 0;
+ pass = 0;
+ val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
+ do {
+ val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
+ if (val == val2) {
+ pass++;
+ } else {
+ pass = 0;
+ val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
+ }
+ } while ((pass < 5) && (count++ < 0x10000));
- val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5;
return val & 1 ? 1 : 0;
}
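Both the get_sda and get_scl hunks apply the same debounce: resample the CRTC status bit until five consecutive reads agree, with a 0x10000-iteration escape hatch, to filter glitches on the bit-banged DDC lines. The pattern in isolation, with a hypothetical read_line() standing in for the masked register read:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the masked CRTC register read. */
static uint32_t read_line(void)
{
	return rand() & 1;	/* replace with the real sampling */
}

/*
 * Accept a line level only after 5 consecutive identical samples,
 * giving up (and returning the last sample) after 0x10000 tries.
 */
static uint32_t read_line_debounced(void)
{
	uint32_t val, val2, count = 0, pass = 0;

	val = read_line();
	do {
		val2 = read_line();
		if (val == val2) {
			pass++;		/* stable: count it */
		} else {
			pass = 0;	/* glitch: restart from a fresh sample */
			val = read_line();
		}
	} while ((pass < 5) && (count++ < 0x10000));

	return val & 1;
}

int main(void)
{
	printf("line = %u\n", read_line_debounced());
	return 0;
}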
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((clock & 0x01) ? 0 : 1);
- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfe, ujcrb7);
+ ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf4, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x01);
if (ujcrb7 == jtemp)
break;
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((data & 0x01) ? 0 : 1) << 2;
- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfb, ujcrb7);
+ ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf1, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x04);
if (ujcrb7 == jtemp)
break;
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07));
/* dummy write to fire HWC */
- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xCB, 0xFF, 0x00);
+ ast_show_cursor(crtc);
return 0;
}
lockdep_assert_held_once(&dev->master_mutex);
+ WARN_ON(fpriv->is_master);
old_master = fpriv->master;
fpriv->master = drm_master_create(dev);
if (!fpriv->master) {
/* drop references and restore old master on failure */
drm_master_put(&fpriv->master);
fpriv->master = old_master;
+ fpriv->is_master = 0;
return ret;
}
mutex_lock(&mgr->lock);
mstb = mgr->mst_primary;
+ if (!mstb)
+ goto out;
+
for (i = 0; i < lct - 1; i++) {
int shift = (i % 2) ? 0 : 4;
int port_num = (rad[i / 2] >> shift) & 0xf;
mutex_lock(&fb_helper->lock);
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
+ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
+ continue;
+
ret = __drm_fb_helper_add_one_connector(fb_helper, connector);
if (ret)
goto fail;
/**
* drm_driver_legacy_fb_format - compute drm fourcc code from legacy description
+ * @dev: DRM device
* @bpp: bits per pixels
* @depth: bit depth per pixel
- * @native: use host native byte order
*
* Computes a drm fourcc pixel format code for the given @bpp/@depth values.
 * Unlike drm_mode_legacy_fb_format() this looks at the driver's mode_config,
}
mutex_lock(&dev_priv->drm.struct_mutex);
+ mmio_hw_access_pre(dev_priv);
ret = i915_gem_gtt_insert(&dev_priv->ggtt.vm, node,
size, I915_GTT_PAGE_SIZE,
I915_COLOR_UNEVICTABLE,
start, end, flags);
+ mmio_hw_access_post(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
if (ret)
gvt_err("fail to alloc %s gm space from host\n",
static void intel_vgpu_destroy_ggtt_mm(struct intel_vgpu *vgpu)
{
- struct intel_gvt_partial_pte *pos;
+ struct intel_gvt_partial_pte *pos, *next;
- list_for_each_entry(pos,
- &vgpu->gtt.ggtt_mm->ggtt_mm.partial_pte_list, list) {
+ list_for_each_entry_safe(pos, next,
+ &vgpu->gtt.ggtt_mm->ggtt_mm.partial_pte_list,
+ list) {
gvt_dbg_mm("partial PTE update on hold 0x%lx : 0x%llx\n",
pos->offset, pos->data);
kfree(pos);
int ring_id, i;
for (ring_id = 0; ring_id < ARRAY_SIZE(regs); ring_id++) {
+ if (!HAS_ENGINE(dev_priv, ring_id))
+ continue;
offset.reg = regs[ring_id];
for (i = 0; i < GEN9_MOCS_SIZE; i++) {
gen9_render_mocs.control_table[ring_id][i] =
else if (gen >= 4)
len = 4;
else
- len = 3;
+ len = 6;
batch = reloc_gpu(eb, vma, len);
if (IS_ERR(batch))
*batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
*batch++ = addr;
*batch++ = target_offset;
+
+ /* And again for good measure (blb/pnv) */
+ *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
+ *batch++ = addr;
+ *batch++ = target_offset;
}
goto out;
ggtt->vm.insert_page = bxt_vtd_ggtt_insert_page__BKL;
if (ggtt->vm.clear_range != nop_clear_range)
ggtt->vm.clear_range = bxt_vtd_ggtt_clear_range__BKL;
+
+ /* Prevent recursively calling stop_machine() and deadlocks. */
+ dev_info(dev_priv->drm.dev,
+ "Disabling error capture for VT-d workaround\n");
+ i915_disable_error_state(dev_priv, -ENODEV);
}
ggtt->invalidate = gen6_ggtt_invalidate;
return 0;
}
+ if (IS_ERR(error))
+ return PTR_ERR(error);
+
if (*error->error_msg)
err_printf(m, "%s\n", error->error_msg);
err_printf(m, "Kernel: " UTS_RELEASE "\n");
error = i915_capture_gpu_state(i915);
if (!error) {
DRM_DEBUG_DRIVER("out of memory, not capturing error state\n");
+ i915_disable_error_state(i915, -ENOMEM);
return;
}
i915->gpu_error.first_error = NULL;
spin_unlock_irq(&i915->gpu_error.lock);
- i915_gpu_state_put(error);
+ if (!IS_ERR(error))
+ i915_gpu_state_put(error);
+}
+
+void i915_disable_error_state(struct drm_i915_private *i915, int err)
+{
+ spin_lock_irq(&i915->gpu_error.lock);
+ if (!i915->gpu_error.first_error)
+ i915->gpu_error.first_error = ERR_PTR(err);
+ spin_unlock_irq(&i915->gpu_error.lock);
}
struct i915_gpu_state *i915_first_error_state(struct drm_i915_private *i915);
void i915_reset_error_state(struct drm_i915_private *i915);
+void i915_disable_error_state(struct drm_i915_private *i915, int err);
#else
static inline struct i915_gpu_state *
i915_first_error_state(struct drm_i915_private *i915)
{
- return NULL;
+ return ERR_PTR(-ENODEV);
}
static inline void i915_reset_error_state(struct drm_i915_private *i915)
{
}
+static inline void i915_disable_error_state(struct drm_i915_private *i915,
+ int err)
+{
+}
+
#endif /* IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) */
#endif /* _I915_GPU_ERROR_H_ */
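The disable path threads an ERR_PTR() through gpu_error.first_error: NULL still means "no error state", a real pointer means a captured state, and an encoded errno means capture is disabled, which is why readers must now check IS_ERR() before putting the state. A minimal userspace re-implementation of the idiom (the real helpers live in include/linux/err.h):

#include <errno.h>
#include <stdio.h>

/*
 * Minimal re-implementation of the kernel's ERR_PTR idiom: an errno
 * smuggled through a pointer, so a single field can mean "no state
 * captured", "state captured", or "capture disabled".
 */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *first_error = ERR_PTR(-ENODEV);	/* capture disabled */

	if (!first_error)
		printf("no error state recorded\n");
	else if (IS_ERR(first_error))
		printf("capture disabled, errno %ld\n", -PTR_ERR(first_error));
	else
		printf("real error state present\n");
	return 0;
}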
u8 eu_disabled_mask;
u32 n_disabled;
- if (!(sseu->subslice_mask[ss] & BIT(ss)))
+ if (!(sseu->subslice_mask[s] & BIT(ss)))
/* skip disabled subslice */
continue;
return;
valid_fb:
+ intel_state->base.rotation = plane_config->rotation;
intel_fill_fb_ggtt_view(&intel_state->view, fb,
intel_state->base.rotation);
intel_state->color_plane[0].stride =
* chroma samples for both of the luma samples, and thus we don't
* actually get the expected MPEG2 chroma siting convention :(
* The same behaviour is observed on pre-SKL platforms as well.
+ *
+ * Theory behind the formula (note that we ignore sub-pixel
+ * source coordinates):
+ * s = source sample position
+ * d = destination sample position
+ *
+ * Downscaling 4:1:
+ * -0.5
+ * | 0.0
+ * | | 1.5 (initial phase)
+ * | | |
+ * v v v
+ * | s | s | s | s |
+ * | d |
+ *
+ * Upscaling 1:4:
+ * -0.5
+ * | -0.375 (initial phase)
+ * | | 0.0
+ * | | |
+ * v v v
+ * | s |
+ * | d | d | d | d |
*/
-u16 skl_scaler_calc_phase(int sub, bool chroma_cosited)
+u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited)
{
int phase = -0x8000;
u16 trip = 0;
if (chroma_cosited)
phase += (sub - 1) * 0x8000 / sub;
+ phase += scale / (2 * sub);
+
+ /*
+ * Hardware initial phase limited to [-0.5:1.5].
+ * Since the max hardware scale factor is 3.0, we
+ * should never actually exceed 1.0 here.
+ */
+ WARN_ON(phase < -0x8000 || phase > 0x18000);
+
if (phase < 0)
phase = 0x10000 + phase;
else
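With the scale factor folded in, skl_scaler_calc_phase() builds the initial phase in .16 fixed point as -0.5, plus (sub - 1)/(2*sub) for cosited chroma, plus scale/(2*sub). A standalone worked example reproducing the numbers from the diagram above (the PS_PHASE_TRIP/PS_PHASE_MASK register encoding at the end of the function is omitted):

#include <stdio.h>

/*
 * Worked example of the phase formula above, in .16 fixed point;
 * only the signed initial phase is computed.
 */
static int calc_phase(int sub, int scale, int chroma_cosited)
{
	int phase = -0x8000;			/* -0.5 */

	if (chroma_cosited)
		phase += (sub - 1) * 0x8000 / sub;

	phase += scale / (2 * sub);		/* + scale/2 per the diagram */

	return phase;
}

int main(void)
{
	/* Downscaling 4:1 (scale = 4.0): the 1.5 from the diagram. */
	printf("4:1 luma: %+.3f\n", calc_phase(1, 0x40000, 0) / 65536.0);

	/* 1:1 (scale = 1.0): phase comes out at exactly 0.0. */
	printf("1:1 luma: %+.3f\n", calc_phase(1, 0x10000, 0) / 65536.0);

	/* Upscaling 1:4 (scale = 0.25): the -0.375 from the diagram. */
	printf("1:4 luma: %+.3f\n", calc_phase(1, 0x4000, 0) / 65536.0);
	return 0;
}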
if (crtc->config->pch_pfit.enabled) {
u16 uv_rgb_hphase, uv_rgb_vphase;
+ int pfit_w, pfit_h, hscale, vscale;
int id;
if (WARN_ON(crtc->config->scaler_state.scaler_id < 0))
return;
- uv_rgb_hphase = skl_scaler_calc_phase(1, false);
- uv_rgb_vphase = skl_scaler_calc_phase(1, false);
+ pfit_w = (crtc->config->pch_pfit.size >> 16) & 0xFFFF;
+ pfit_h = crtc->config->pch_pfit.size & 0xFFFF;
+
+ hscale = (crtc->config->pipe_src_w << 16) / pfit_w;
+ vscale = (crtc->config->pipe_src_h << 16) / pfit_h;
+
+ uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false);
+ uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false);
id = scaler_state->scaler_id;
I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN |
plane_config->tiling = I915_TILING_X;
fb->modifier = I915_FORMAT_MOD_X_TILED;
}
+
+ if (val & DISPPLANE_ROTATE_180)
+ plane_config->rotation = DRM_MODE_ROTATE_180;
}
+ if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B &&
+ val & DISPPLANE_MIRROR)
+ plane_config->rotation |= DRM_MODE_REFLECT_X;
+
pixel_format = val & DISPPLANE_PIXFORMAT_MASK;
fourcc = i9xx_format_to_fourcc(pixel_format);
fb->format = drm_format_info(fourcc);
goto error;
}
+ /*
+ * DRM_MODE_ROTATE_ is counter-clockwise to stay compatible with Xrandr,
+ * while i915 HW rotation is clockwise; that's why the values are swapped.
+ */
+ switch (val & PLANE_CTL_ROTATE_MASK) {
+ case PLANE_CTL_ROTATE_0:
+ plane_config->rotation = DRM_MODE_ROTATE_0;
+ break;
+ case PLANE_CTL_ROTATE_90:
+ plane_config->rotation = DRM_MODE_ROTATE_270;
+ break;
+ case PLANE_CTL_ROTATE_180:
+ plane_config->rotation = DRM_MODE_ROTATE_180;
+ break;
+ case PLANE_CTL_ROTATE_270:
+ plane_config->rotation = DRM_MODE_ROTATE_90;
+ break;
+ }
+
+ if (INTEL_GEN(dev_priv) >= 10 &&
+ val & PLANE_CTL_FLIP_HORIZONTAL)
+ plane_config->rotation |= DRM_MODE_REFLECT_X;
+
base = I915_READ(PLANE_SURF(pipe, plane_id)) & 0xfffff000;
plane_config->base = base;
ret = drm_atomic_add_affected_planes(state, crtc);
if (ret)
goto out;
+
+ /*
+ * FIXME hack to force a LUT update to avoid the
+ * plane update forcing the pipe gamma on without
+ * having a proper LUT loaded. Remove once we
+ * have readout for pipe gamma enable.
+ */
+ crtc_state->color_mgmt_changed = true;
}
}
if (!intel_connector)
return NULL;
+ intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
+ intel_connector->mst_port = intel_dp;
+ intel_connector->port = port;
+
connector = &intel_connector->base;
ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs,
DRM_MODE_CONNECTOR_DisplayPort);
drm_connector_helper_add(connector, &intel_dp_mst_connector_helper_funcs);
- intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
- intel_connector->mst_port = intel_dp;
- intel_connector->port = port;
-
for_each_pipe(dev_priv, pipe) {
struct drm_encoder *enc =
&intel_dp->mst_encoders[pipe]->base.base;
unsigned int tiling;
int size;
u32 base;
+ u8 rotation;
};
#define SKL_MIN_SRC_W 8
void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state);
-u16 skl_scaler_calc_phase(int sub, bool chroma_center);
+u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_center);
int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state);
int skl_max_scale(const struct intel_crtc_state *crtc_state,
u32 pixel_format);
drm_for_each_connector_iter(connector, &conn_iter) {
struct intel_connector *intel_connector = to_intel_connector(connector);
- if (intel_connector->encoder->hpd_pin == pin) {
+ /* Don't check MST ports, they don't have pins */
+ if (!intel_connector->mst_port &&
+ intel_connector->encoder->hpd_pin == pin) {
if (connector->polled != intel_connector->polled)
DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",
connector->name);
struct intel_encoder *encoder;
bool storm_detected = false;
bool queue_dig = false, queue_hp = false;
+ u32 long_hpd_pulse_mask = 0;
+ u32 short_hpd_pulse_mask = 0;
+ enum hpd_pin pin;
if (!pin_mask)
return;
spin_lock(&dev_priv->irq_lock);
+
+ /*
+ * Determine whether ->hpd_pulse() exists for each pin, and
+ * whether we have a short or a long pulse. This is needed
+ * as each pin may have up to two encoders (HDMI and DP) and
+ * only one of them (DP) will have ->hpd_pulse().
+ */
for_each_intel_encoder(&dev_priv->drm, encoder) {
- enum hpd_pin pin = encoder->hpd_pin;
bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);
+ enum port port = encoder->port;
+ bool long_hpd;
+ pin = encoder->hpd_pin;
if (!(BIT(pin) & pin_mask))
continue;
- if (has_hpd_pulse) {
- bool long_hpd = long_mask & BIT(pin);
- enum port port = encoder->port;
+ if (!has_hpd_pulse)
+ continue;
- DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),
- long_hpd ? "long" : "short");
- /*
- * For long HPD pulses we want to have the digital queue happen,
- * but we still want HPD storm detection to function.
- */
- queue_dig = true;
- if (long_hpd) {
- dev_priv->hotplug.long_port_mask |= (1 << port);
- } else {
- /* for short HPD just trigger the digital queue */
- dev_priv->hotplug.short_port_mask |= (1 << port);
- continue;
- }
+ long_hpd = long_mask & BIT(pin);
+
+ DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),
+ long_hpd ? "long" : "short");
+ queue_dig = true;
+
+ if (long_hpd) {
+ long_hpd_pulse_mask |= BIT(pin);
+ dev_priv->hotplug.long_port_mask |= BIT(port);
+ } else {
+ short_hpd_pulse_mask |= BIT(pin);
+ dev_priv->hotplug.short_port_mask |= BIT(port);
}
+ }
+
+ /* Now process each pin just once */
+ for_each_hpd_pin(pin) {
+ bool long_hpd;
+
+ if (!(BIT(pin) & pin_mask))
+ continue;
if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) {
/*
if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED)
continue;
- if (!has_hpd_pulse) {
+ /*
+ * Delegate to ->hpd_pulse() if one of the encoders for this
+ * pin has it, otherwise let the hotplug_work deal with this
+ * pin directly.
+ */
+ if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) {
+ long_hpd = long_hpd_pulse_mask & BIT(pin);
+ } else {
dev_priv->hotplug.event_bits |= BIT(pin);
+ long_hpd = true;
queue_hp = true;
}
+ if (!long_hpd)
+ continue;
+
if (intel_hpd_irq_storm_detect(dev_priv, pin)) {
dev_priv->hotplug.event_bits &= ~BIT(pin);
storm_detected = true;
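The hotplug rework is two passes: the first loops over encoders and only accumulates per-pin long/short pulse bitmasks (a pin can back both an HDMI and a DP encoder, and only the DP one has ->hpd_pulse()), and the second walks the pins so each is processed exactly once. A reduced standalone sketch of that mask-then-process shape, with a hypothetical encoder table:

#include <stdio.h>

#define NUM_PINS 8

/* Hypothetical encoder: the hpd pin it sits on and whether it has a
 * ->hpd_pulse()-style handler (DP does, HDMI does not). */
struct encoder {
	int pin;
	int has_pulse;
};

int main(void)
{
	/* Two encoders (HDMI + DP) sharing pin 3; pin 3 fired long. */
	struct encoder encoders[] = { { 3, 0 }, { 3, 1 } };
	unsigned int pin_mask = 1u << 3, long_mask = 1u << 3;
	unsigned int long_pulse = 0, short_pulse = 0;
	unsigned int i;
	int pin;

	/* Pass 1: per encoder, accumulate pulse-capable pin masks. */
	for (i = 0; i < sizeof(encoders) / sizeof(encoders[0]); i++) {
		if (!(pin_mask & (1u << encoders[i].pin)))
			continue;
		if (!encoders[i].has_pulse)
			continue;
		if (long_mask & (1u << encoders[i].pin))
			long_pulse |= 1u << encoders[i].pin;
		else
			short_pulse |= 1u << encoders[i].pin;
	}

	/* Pass 2: per pin, exactly once, however many encoders share it. */
	for (pin = 0; pin < NUM_PINS; pin++) {
		if (!(pin_mask & (1u << pin)))
			continue;
		if ((long_pulse | short_pulse) & (1u << pin))
			printf("pin %d: delegate to ->hpd_pulse() (%s)\n", pin,
			       (long_pulse & (1u << pin)) ? "long" : "short");
		else
			printf("pin %d: handle directly as long\n", pin);
	}
	return 0;
}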
reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
- /* True 32b PPGTT with dynamic page allocation: update PDP
+ /*
+ * True 32b PPGTT with dynamic page allocation: update PDP
* registers and point the unallocated PDPs to scratch page.
* PML4 is allocated during ppgtt init, so this is not needed
* in 48-bit mode.
if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))
execlists_update_context_pdps(ppgtt, reg_state);
+ /*
+ * Make sure the context image is complete before we submit it to HW.
+ *
+ * Ostensibly, writes (including the WCB) should be flushed prior to
+ * an uncached write such as our mmio register access, but the empirical
+ * evidence (esp. on Braswell) suggests that the WC write into memory
+ * may not be visible to the HW prior to the completion of the UC
+ * register write and that we may begin execution from the context
+ * before its image is complete leading to invalid PD chasing.
+ */
+ wmb();
return ce->lrc_desc;
}
uint32_t method1, method2;
int cpp;
+ if (mem_value == 0)
+ return U32_MAX;
+
if (!intel_wm_plane_visible(cstate, pstate))
return 0;
uint32_t method1, method2;
int cpp;
+ if (mem_value == 0)
+ return U32_MAX;
+
if (!intel_wm_plane_visible(cstate, pstate))
return 0;
{
int cpp;
+ if (mem_value == 0)
+ return U32_MAX;
+
if (!intel_wm_plane_visible(cstate, pstate))
return 0;
intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
}
+static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv)
+{
+ /*
+ * On some SNB machines (Thinkpad X220 Tablet at least)
+ * LP3 usage can cause vblank interrupts to be lost.
+ * The DEIIR bit will go high but it looks like the CPU
+ * never gets interrupted.
+ *
+ * It's not clear whether other interrupt sources could
+ * be affected or if this is somehow limited to vblank
+ * interrupts only. To play it safe, we disable LP3
+ * watermarks entirely.
+ */
+ if (dev_priv->wm.pri_latency[3] == 0 &&
+ dev_priv->wm.spr_latency[3] == 0 &&
+ dev_priv->wm.cur_latency[3] == 0)
+ return;
+
+ dev_priv->wm.pri_latency[3] = 0;
+ dev_priv->wm.spr_latency[3] = 0;
+ dev_priv->wm.cur_latency[3] = 0;
+
+ DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n");
+ intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency);
+ intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
+ intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
+}
+
static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv)
{
intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency);
intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
- if (IS_GEN6(dev_priv))
+ if (IS_GEN6(dev_priv)) {
snb_wm_latency_quirk(dev_priv);
+ snb_wm_lp3_irq_quirk(dev_priv);
+ }
}
static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
gen4_render_ring_flush(struct i915_request *rq, u32 mode)
{
u32 cmd, *cs;
+ int i;
/*
* read/write caches:
cmd |= MI_INVALIDATE_ISP;
}
- cs = intel_ring_begin(rq, 2);
+ i = 2;
+ if (mode & EMIT_INVALIDATE)
+ i += 20;
+
+ cs = intel_ring_begin(rq, i);
if (IS_ERR(cs))
return PTR_ERR(cs);
*cs++ = cmd;
- *cs++ = MI_NOOP;
+
+ /*
+ * A random delay to let the CS invalidate take effect? Without this
+ * delay, the GPU relocation path fails as the CS does not see
+ * the updated contents. Just as important, if we apply the flushes
+ * to the EMIT_FLUSH branch (i.e. immediately after the relocation
+ * write and before the invalidate on the next batch), the relocations
+ * still fail. This implies that it is a delay following invalidation
+ * that is required to reset the caches as opposed to a delay to
+ * ensure the memory is written.
+ */
+ if (mode & EMIT_INVALIDATE) {
+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |
+ PIPE_CONTROL_GLOBAL_GTT;
+ *cs++ = 0;
+ *cs++ = 0;
+
+ for (i = 0; i < 12; i++)
+ *cs++ = MI_FLUSH;
+
+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |
+ PIPE_CONTROL_GLOBAL_GTT;
+ *cs++ = 0;
+ *cs++ = 0;
+ }
+
+ *cs++ = cmd;
+
intel_ring_advance(rq, cs);
return 0;
.hsw.has_fuses = true,
},
},
+ {
+ .name = "DC off",
+ .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS,
+ .ops = &gen9_dc_off_power_well_ops,
+ .id = DISP_PW_ID_NONE,
+ },
{
.name = "power well 2",
.domains = ICL_PW_2_POWER_DOMAINS,
.hsw.has_fuses = true,
},
},
- {
- .name = "DC off",
- .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS,
- .ops = &gen9_dc_off_power_well_ops,
- .id = DISP_PW_ID_NONE,
- },
{
.name = "power well 3",
.domains = ICL_PW_3_POWER_DOMAINS,
void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
u8 req_slices)
{
- u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices;
- u32 val;
+ const u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices;
bool ret;
if (req_slices > intel_dbuf_max_slices(dev_priv)) {
if (req_slices == hw_enabled_slices || req_slices == 0)
return;
- val = I915_READ(DBUF_CTL_S2);
if (req_slices > hw_enabled_slices)
ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, true);
else
return min(8192 * cpp, 32768);
}
+static void
+skl_program_scaler(struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+{
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+ int scaler_id = plane_state->scaler_id;
+ const struct intel_scaler *scaler =
+ &crtc_state->scaler_state.scalers[scaler_id];
+ int crtc_x = plane_state->base.dst.x1;
+ int crtc_y = plane_state->base.dst.y1;
+ uint32_t crtc_w = drm_rect_width(&plane_state->base.dst);
+ uint32_t crtc_h = drm_rect_height(&plane_state->base.dst);
+ u16 y_hphase, uv_rgb_hphase;
+ u16 y_vphase, uv_rgb_vphase;
+ int hscale, vscale;
+
+ hscale = drm_rect_calc_hscale(&plane_state->base.src,
+ &plane_state->base.dst,
+ 0, INT_MAX);
+ vscale = drm_rect_calc_vscale(&plane_state->base.src,
+ &plane_state->base.dst,
+ 0, INT_MAX);
+
+ /* TODO: handle sub-pixel coordinates */
+ if (plane_state->base.fb->format->format == DRM_FORMAT_NV12) {
+ y_hphase = skl_scaler_calc_phase(1, hscale, false);
+ y_vphase = skl_scaler_calc_phase(1, vscale, false);
+
+ /* MPEG2 chroma siting convention */
+ uv_rgb_hphase = skl_scaler_calc_phase(2, hscale, true);
+ uv_rgb_vphase = skl_scaler_calc_phase(2, vscale, false);
+ } else {
+ /* not used */
+ y_hphase = 0;
+ y_vphase = 0;
+
+ uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false);
+ uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false);
+ }
+
+ I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id),
+ PS_SCALER_EN | PS_PLANE_SEL(plane->id) | scaler->mode);
+ I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0);
+ I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id),
+ PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase));
+ I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id),
+ PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase));
+ I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y);
+ I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), (crtc_w << 16) | crtc_h);
+}
+
void
skl_update_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
- const struct drm_framebuffer *fb = plane_state->base.fb;
enum plane_id plane_id = plane->id;
enum pipe pipe = plane->pipe;
u32 plane_ctl = plane_state->ctl;
u32 aux_stride = skl_plane_stride(plane_state, 1);
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
- uint32_t crtc_w = drm_rect_width(&plane_state->base.dst);
- uint32_t crtc_h = drm_rect_height(&plane_state->base.dst);
uint32_t x = plane_state->color_plane[0].x;
uint32_t y = plane_state->color_plane[0].y;
uint32_t src_w = drm_rect_width(&plane_state->base.src) >> 16;
/* Sizes are 0 based */
src_w--;
src_h--;
- crtc_w--;
- crtc_h--;
spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
(plane_state->color_plane[1].y << 16) |
plane_state->color_plane[1].x);
- /* program plane scaler */
if (plane_state->scaler_id >= 0) {
- int scaler_id = plane_state->scaler_id;
- const struct intel_scaler *scaler =
- &crtc_state->scaler_state.scalers[scaler_id];
- u16 y_hphase, uv_rgb_hphase;
- u16 y_vphase, uv_rgb_vphase;
-
- /* TODO: handle sub-pixel coordinates */
- if (fb->format->format == DRM_FORMAT_NV12) {
- y_hphase = skl_scaler_calc_phase(1, false);
- y_vphase = skl_scaler_calc_phase(1, false);
-
- /* MPEG2 chroma siting convention */
- uv_rgb_hphase = skl_scaler_calc_phase(2, true);
- uv_rgb_vphase = skl_scaler_calc_phase(2, false);
- } else {
- /* not used */
- y_hphase = 0;
- y_vphase = 0;
-
- uv_rgb_hphase = skl_scaler_calc_phase(1, false);
- uv_rgb_vphase = skl_scaler_calc_phase(1, false);
- }
-
- I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id),
- PS_SCALER_EN | PS_PLANE_SEL(plane_id) | scaler->mode);
- I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0);
- I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id),
- PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase));
- I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id),
- PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase));
- I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y);
- I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id),
- ((crtc_w + 1) << 16)|(crtc_h + 1));
+ skl_program_scaler(plane, crtc_state, plane_state);
I915_WRITE_FW(PLANE_POS(pipe, plane_id), 0);
} else {
struct drm_crtc base;
struct drm_pending_vblank_event *event;
struct meson_drm *priv;
+ bool enabled;
};
#define to_meson_crtc(x) container_of(x, struct meson_crtc, base)
};
-static void meson_crtc_atomic_enable(struct drm_crtc *crtc,
- struct drm_crtc_state *old_state)
+static void meson_crtc_enable(struct drm_crtc *crtc)
{
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
struct drm_crtc_state *crtc_state = crtc->state;
writel_bits_relaxed(VPP_POSTBLEND_ENABLE, VPP_POSTBLEND_ENABLE,
priv->io_base + _REG(VPP_MISC));
+ drm_crtc_vblank_on(crtc);
+
+ meson_crtc->enabled = true;
+}
+
+static void meson_crtc_atomic_enable(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state)
+{
+ struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
+ struct meson_drm *priv = meson_crtc->priv;
+
+ DRM_DEBUG_DRIVER("\n");
+
+ if (!meson_crtc->enabled)
+ meson_crtc_enable(crtc);
+
priv->viu.osd1_enabled = true;
}
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
struct meson_drm *priv = meson_crtc->priv;
+ drm_crtc_vblank_off(crtc);
+
priv->viu.osd1_enabled = false;
priv->viu.osd1_commit = false;
crtc->state->event = NULL;
}
+
+ meson_crtc->enabled = false;
}
static void meson_crtc_atomic_begin(struct drm_crtc *crtc,
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
unsigned long flags;
+ if (crtc->state->enable && !meson_crtc->enabled)
+ meson_crtc_enable(crtc);
+
if (crtc->state->event) {
WARN_ON(drm_crtc_vblank_get(crtc) != 0);
.reg_read = meson_dw_hdmi_reg_read,
.reg_write = meson_dw_hdmi_reg_write,
.max_register = 0x10000,
+ .fast_io = true,
};
static bool meson_hdmi_connector_is_available(struct device *dev)
*/
/* HHI Registers */
+#define HHI_GCLK_MPEG2 0x148 /* 0x52 offset in data sheet */
#define HHI_VDAC_CNTL0 0x2F4 /* 0xbd offset in data sheet */
#define HHI_VDAC_CNTL1 0x2F8 /* 0xbe offset in data sheet */
#define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 offset in data sheet */
{ 5, &meson_hdmi_encp_mode_1080i60 },
{ 20, &meson_hdmi_encp_mode_1080i50 },
{ 32, &meson_hdmi_encp_mode_1080p24 },
+ { 33, &meson_hdmi_encp_mode_1080p50 },
{ 34, &meson_hdmi_encp_mode_1080p30 },
{ 31, &meson_hdmi_encp_mode_1080p50 },
{ 16, &meson_hdmi_encp_mode_1080p60 },
unsigned int sof_lines;
unsigned int vsync_lines;
+ /* Use VENCI for 480i and 576i and double HDMI pixels */
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK) {
+ hdmi_repeat = true;
+ use_enci = true;
+ venc_hdmi_latency = 1;
+ }
+
if (meson_venc_hdmi_supported_vic(vic)) {
vmode = meson_venc_hdmi_get_vic_vmode(vic);
if (!vmode) {
} else {
meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt);
vmode = &vmode_dmt;
- }
-
- /* Use VENCI for 480i and 576i and double HDMI pixels */
- if (mode->flags & DRM_MODE_FLAG_DBLCLK) {
- hdmi_repeat = true;
- use_enci = true;
- venc_hdmi_latency = 1;
+ use_enci = false;
}
/* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */
void meson_venc_enable_vsync(struct meson_drm *priv)
{
writel_relaxed(2, priv->io_base + _REG(VENC_INTCTRL));
+ regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), BIT(25));
}
void meson_venc_disable_vsync(struct meson_drm *priv)
{
+ regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), 0);
writel_relaxed(0, priv->io_base + _REG(VENC_INTCTRL));
}
if (lut_sel == VIU_LUT_OSD_OETF) {
writel(0, priv->io_base + _REG(addr_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),
priv->io_base + _REG(data_port));
writel(r_map[OSD_OETF_LUT_SIZE - 1] | (g_map[0] << 16),
priv->io_base + _REG(data_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),
priv->io_base + _REG(data_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),
priv->io_base + _REG(data_port));
} else if (lut_sel == VIU_LUT_OSD_EOTF) {
writel(0, priv->io_base + _REG(addr_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),
priv->io_base + _REG(data_port));
writel(r_map[OSD_EOTF_LUT_SIZE - 1] | (g_map[0] << 16),
priv->io_base + _REG(data_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),
priv->io_base + _REG(data_port));
- for (i = 0; i < 20; i++)
+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),
priv->io_base + _REG(data_port));
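These LUT loops pack two 16-bit entries into each 32-bit data-port write, so the iteration count has to be half the LUT size; the hard-coded 20 was only correct for one table length. A standalone sketch of the packing (LUT_SIZE here is illustrative):

#include <stdint.h>
#include <stdio.h>

#define LUT_SIZE 33	/* illustrative odd-sized table */

int main(void)
{
	uint16_t map[LUT_SIZE];
	int i;

	for (i = 0; i < LUT_SIZE; i++)
		map[i] = (uint16_t)i;

	/*
	 * Two 16-bit entries per 32-bit register write, hence SIZE / 2
	 * iterations; a hard-coded count over- or under-fills any LUT
	 * of a different length.
	 */
	for (i = 0; i < LUT_SIZE / 2; i++)
		printf("write 0x%08X\n",
		       map[i * 2] | ((uint32_t)map[i * 2 + 1] << 16));

	/*
	 * LUT_SIZE is odd, so the last entry is written separately; in
	 * the driver it is paired with the first entry of the next
	 * channel's map, as the hunks above show.
	 */
	printf("tail  0x%04X\n", map[LUT_SIZE - 1]);
	return 0;
}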
/* DSI on OMAP3 doesn't have register DSI_GNQ, set number
 * of data lanes to 3 by default */
- if (dsi->data->quirks & DSI_QUIRK_GNQ)
+ if (dsi->data->quirks & DSI_QUIRK_GNQ) {
+ dsi_runtime_get(dsi);
/* NB_DATA_LANES */
dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9);
- else
+ dsi_runtime_put(dsi);
+ } else {
dsi->num_lanes_supported = 3;
+ }
r = dsi_init_output(dsi);
if (r)
}
r = of_platform_populate(dev->of_node, NULL, NULL, dev);
- if (r)
+ if (r) {
DSSERR("Failed to populate DSI child devices: %d\n", r);
+ goto err_uninit_output;
+ }
r = component_add(&pdev->dev, &dsi_component_ops);
if (r)
- goto err_uninit_output;
+ goto err_of_depopulate;
return 0;
+err_of_depopulate:
+ of_platform_depopulate(dev);
err_uninit_output:
dsi_uninit_output(dsi);
err_pm_disable:
/* wait for current handler to finish before turning the DSI off */
synchronize_irq(dsi->irq);
- dispc_runtime_put(dsi->dss->dispc);
-
return 0;
}
static int dsi_runtime_resume(struct device *dev)
{
struct dsi_data *dsi = dev_get_drvdata(dev);
- int r;
-
- r = dispc_runtime_get(dsi->dss->dispc);
- if (r)
- return r;
dsi->is_enabled = true;
/* ensure the irq handler sees the is_enabled value */
dss);
/* Add all the child devices as components. */
+ r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+ if (r)
+ goto err_uninit_debugfs;
+
omapdss_gather_components(&pdev->dev);
device_for_each_child(&pdev->dev, &match, dss_add_child_component);
r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match);
if (r)
- goto err_uninit_debugfs;
+ goto err_of_depopulate;
return 0;
+err_of_depopulate:
+ of_platform_depopulate(&pdev->dev);
+
err_uninit_debugfs:
dss_debugfs_remove_file(dss->debugfs.clk);
dss_debugfs_remove_file(dss->debugfs.dss);
{
struct dss_device *dss = platform_get_drvdata(pdev);
+ of_platform_depopulate(&pdev->dev);
+
component_master_del(&pdev->dev, &dss_component_ops);
dss_debugfs_remove_file(dss->debugfs.clk);
hdmi->dss = dss;
- r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp);
+ r = hdmi_runtime_get(hdmi);
if (r)
return r;
+ r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp);
+ if (r)
+ goto err_runtime_put;
+
r = hdmi4_cec_init(hdmi->pdev, &hdmi->core, &hdmi->wp);
if (r)
goto err_pll_uninit;
hdmi->debugfs = dss_debugfs_create_file(dss, "hdmi", hdmi_dump_regs,
hdmi);
+ hdmi_runtime_put(hdmi);
+
return 0;
err_cec_uninit:
hdmi4_cec_uninit(&hdmi->core);
err_pll_uninit:
hdmi_pll_uninit(&hdmi->pll);
+err_runtime_put:
+ hdmi_runtime_put(hdmi);
return r;
}
return 0;
}
-static int hdmi_runtime_suspend(struct device *dev)
-{
- struct omap_hdmi *hdmi = dev_get_drvdata(dev);
-
- dispc_runtime_put(hdmi->dss->dispc);
-
- return 0;
-}
-
-static int hdmi_runtime_resume(struct device *dev)
-{
- struct omap_hdmi *hdmi = dev_get_drvdata(dev);
- int r;
-
- r = dispc_runtime_get(hdmi->dss->dispc);
- if (r < 0)
- return r;
-
- return 0;
-}
-
-static const struct dev_pm_ops hdmi_pm_ops = {
- .runtime_suspend = hdmi_runtime_suspend,
- .runtime_resume = hdmi_runtime_resume,
-};
-
static const struct of_device_id hdmi_of_match[] = {
{ .compatible = "ti,omap4-hdmi", },
{},
.remove = hdmi4_remove,
.driver = {
.name = "omapdss_hdmi",
- .pm = &hdmi_pm_ops,
.of_match_table = hdmi_of_match,
.suppress_bind_attrs = true,
},
return 0;
}
-static int hdmi_runtime_suspend(struct device *dev)
-{
- struct omap_hdmi *hdmi = dev_get_drvdata(dev);
-
- dispc_runtime_put(hdmi->dss->dispc);
-
- return 0;
-}
-
-static int hdmi_runtime_resume(struct device *dev)
-{
- struct omap_hdmi *hdmi = dev_get_drvdata(dev);
- int r;
-
- r = dispc_runtime_get(hdmi->dss->dispc);
- if (r < 0)
- return r;
-
- return 0;
-}
-
-static const struct dev_pm_ops hdmi_pm_ops = {
- .runtime_suspend = hdmi_runtime_suspend,
- .runtime_resume = hdmi_runtime_resume,
-};
-
static const struct of_device_id hdmi_of_match[] = {
{ .compatible = "ti,omap5-hdmi", },
{ .compatible = "ti,dra7-hdmi", },
.remove = hdmi5_remove,
.driver = {
.name = "omapdss_hdmi5",
- .pm = &hdmi_pm_ops,
.of_match_table = hdmi_of_match,
.suppress_bind_attrs = true,
},
if (venc->tv_dac_clk)
clk_disable_unprepare(venc->tv_dac_clk);
- dispc_runtime_put(venc->dss->dispc);
-
return 0;
}
static int venc_runtime_resume(struct device *dev)
{
struct venc_device *venc = dev_get_drvdata(dev);
- int r;
-
- r = dispc_runtime_get(venc->dss->dispc);
- if (r < 0)
- return r;
if (venc->tv_dac_clk)
clk_prepare_enable(venc->tv_dac_clk);
static void omap_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
+ struct omap_drm_private *priv = crtc->dev->dev_private;
struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
int ret;
DBG("%s", omap_crtc->name);
+ priv->dispc_ops->runtime_get(priv->dispc);
+
spin_lock_irq(&crtc->dev->event_lock);
drm_crtc_vblank_on(crtc);
ret = drm_crtc_vblank_get(crtc);
static void omap_crtc_atomic_disable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
+ struct omap_drm_private *priv = crtc->dev->dev_private;
struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
DBG("%s", omap_crtc->name);
spin_unlock_irq(&crtc->dev->event_lock);
drm_crtc_vblank_off(crtc);
+
+ priv->dispc_ops->runtime_put(priv->dispc);
}
static enum drm_mode_status omap_crtc_mode_valid(struct drm_crtc *crtc,
static void __rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
{
- struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];
+ struct rcar_du_device *rcdu = rgrp->dev;
+
+ /*
+ * Group start/stop is controlled by the DRES and DEN bits of DSYSR0
+ * for the first group and DSYSR2 for the second group. On most DU
+ * instances, this maps to the first CRTC of the group, and we can just
+ * use rcar_du_crtc_dsysr_clr_set() to access the correct DSYSR. On
+ * M3-N, however, DU2 doesn't exist, but DSYSR2 does. We thus need to
+ * access the register directly using group read/write.
+ */
+ if (rcdu->info->channels_mask & BIT(rgrp->index * 2)) {
+ struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];
- rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,
- start ? DSYSR_DEN : DSYSR_DRES);
+ rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,
+ start ? DSYSR_DEN : DSYSR_DRES);
+ } else {
+ rcar_du_group_write(rgrp, DSYSR,
+ start ? DSYSR_DEN : DSYSR_DRES);
+ }
}
void rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
return 0;
}
+ /* We know for sure we don't want an async update here. Set
+ * state->legacy_cursor_update to false to prevent
+ * drm_atomic_helper_setup_commit() from auto-completing
+ * commit->flip_done.
+ */
+ state->legacy_cursor_update = false;
ret = drm_atomic_helper_setup_commit(state, nonblock);
if (ret)
return ret;
static void vc4_plane_atomic_async_update(struct drm_plane *plane,
struct drm_plane_state *state)
{
- struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
+ struct vc4_plane_state *vc4_state, *new_vc4_state;
if (plane->state->fb != state->fb) {
vc4_plane_async_set_fb(plane, state->fb);
plane->state->src_y = state->src_y;
/* Update the display list based on the new crtc_x/y. */
- vc4_plane_atomic_check(plane, plane->state);
+ vc4_plane_atomic_check(plane, state);
+
+ new_vc4_state = to_vc4_plane_state(state);
+ vc4_state = to_vc4_plane_state(plane->state);
+
+ /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */
+ vc4_state->dlist[vc4_state->pos0_offset] =
+ new_vc4_state->dlist[vc4_state->pos0_offset];
+ vc4_state->dlist[vc4_state->pos2_offset] =
+ new_vc4_state->dlist[vc4_state->pos2_offset];
+ vc4_state->dlist[vc4_state->ptr0_offset] =
+ new_vc4_state->dlist[vc4_state->ptr0_offset];
/* Note that we can't just call vc4_plane_write_dlist()
* because that would smash the context data that the HVS is
#define USB_VENDOR_ID_CIDC 0x1677
+#define I2C_VENDOR_ID_CIRQUE 0x0488
+#define I2C_PRODUCT_ID_CIRQUE_121F 0x121F
+
#define USB_VENDOR_ID_CJTOUCH 0x24b8
#define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020
#define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040
#define USB_VENDOR_ID_LG 0x1fd2
#define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
#define USB_DEVICE_ID_LG_MELFAS_MT 0x6007
+#define I2C_DEVICE_ID_LG_8001 0x8001
#define USB_VENDOR_ID_LOGITECH 0x046d
#define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
#define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9
#define USB_DEVICE_ID_MS_POWER_COVER 0x07da
#define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd
+#define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb
#define USB_VENDOR_ID_MOJO 0x8282
#define USB_DEVICE_ID_RETRO_ADAPTER 0x3201
#define USB_VENDOR_ID_SYMBOL 0x05e0
#define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800
#define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300
+#define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200
#define USB_VENDOR_ID_SYNAPTICS 0x06cb
#define USB_DEVICE_ID_SYNAPTICS_TP 0x0001
#define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22
#define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
#define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72
+#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f
+#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22
#define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM,
USB_DEVICE_ID_ELECOM_BM084),
HID_BATTERY_QUIRK_IGNORE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL,
+ USB_DEVICE_ID_SYMBOL_SCANNER_3),
+ HID_BATTERY_QUIRK_IGNORE },
{}
};
}
EXPORT_SYMBOL_GPL(hidinput_disconnect);
-/**
- * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll
- * events given a high-resolution wheel
- * movement.
- * @counter: a hid_scroll_counter struct describing the wheel.
- * @hi_res_value: the movement of the wheel, in the mouse's high-resolution
- * units.
- *
- * Given a high-resolution movement, this function converts the movement into
- * microns and emits high-resolution scroll events for the input device. It also
- * uses the multiplier from &struct hid_scroll_counter to emit low-resolution
- * scroll events when appropriate for backwards-compatibility with userspace
- * input libraries.
- */
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
- int hi_res_value)
-{
- int low_res_value, remainder, multiplier;
-
- input_report_rel(counter->dev, REL_WHEEL_HI_RES,
- hi_res_value * counter->microns_per_hi_res_unit);
-
- /*
- * Update the low-res remainder with the high-res value,
- * but reset if the direction has changed.
- */
- remainder = counter->remainder;
- if ((remainder ^ hi_res_value) < 0)
- remainder = 0;
- remainder += hi_res_value;
-
- /*
- * Then just use the resolution multiplier to see if
- * we should send a low-res (aka regular wheel) event.
- */
- multiplier = counter->resolution_multiplier;
- low_res_value = remainder / multiplier;
- remainder -= low_res_value * multiplier;
- counter->remainder = remainder;
-
- if (low_res_value)
- input_report_rel(counter->dev, REL_WHEEL, low_res_value);
-}
-EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
#define HIDPP_QUIRK_NO_HIDINPUT BIT(23)
#define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24)
#define HIDPP_QUIRK_UNIFYING BIT(25)
-#define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28)
-
-/* Convenience constant to check for any high-res support. */
-#define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \
- HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \
- HIDPP_QUIRK_HI_RES_SCROLL_X2121)
#define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT
unsigned long capabilities;
struct hidpp_battery battery;
- struct hid_scroll_counter vertical_wheel_counter;
};
/* HID++ 1.0 error codes */
#define HIDPP_SET_LONG_REGISTER 0x82
#define HIDPP_GET_LONG_REGISTER 0x83
-/**
- * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register.
- * @hidpp_dev: the device to set the register on.
- * @register_address: the address of the register to modify.
- * @byte: the byte of the register to modify. Should be less than 3.
- * Return: 0 if successful, otherwise a negative error code.
- */
-static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev,
- u8 register_address, u8 byte, u8 bit)
+#define HIDPP_REG_GENERAL 0x00
+
+static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
{
struct hidpp_report response;
int ret;
u8 params[3] = { 0 };
ret = hidpp_send_rap_command_sync(hidpp_dev,
- REPORT_ID_HIDPP_SHORT,
- HIDPP_GET_REGISTER,
- register_address,
- NULL, 0, &response);
+ REPORT_ID_HIDPP_SHORT,
+ HIDPP_GET_REGISTER,
+ HIDPP_REG_GENERAL,
+ NULL, 0, &response);
if (ret)
return ret;
memcpy(params, response.rap.params, 3);
- params[byte] |= BIT(bit);
+ /* Set the battery bit */
+ params[0] |= BIT(4);
return hidpp_send_rap_command_sync(hidpp_dev,
- REPORT_ID_HIDPP_SHORT,
- HIDPP_SET_REGISTER,
- register_address,
- params, 3, &response);
-}
-
-
-#define HIDPP_REG_GENERAL 0x00
-
-static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
-{
- return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4);
-}
-
-#define HIDPP_REG_FEATURES 0x01
-
-/* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". */
-static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)
-{
- return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);
+ REPORT_ID_HIDPP_SHORT,
+ HIDPP_SET_REGISTER,
+ HIDPP_REG_GENERAL,
+ params, 3, &response);
}
#define HIDPP_REG_BATTERY_STATUS 0x07
return ret;
}
-/* -------------------------------------------------------------------------- */
-/* 0x2120: Hi-resolution scrolling */
-/* -------------------------------------------------------------------------- */
-
-#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120
-
-#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10
-
-static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,
- bool enabled, u8 *multiplier)
-{
- u8 feature_index;
- u8 feature_type;
- int ret;
- u8 params[1];
- struct hidpp_report response;
-
- ret = hidpp_root_get_feature(hidpp,
- HIDPP_PAGE_HI_RESOLUTION_SCROLLING,
- &feature_index,
- &feature_type);
- if (ret)
- return ret;
-
- params[0] = enabled ? BIT(0) : 0;
- ret = hidpp_send_fap_command_sync(hidpp, feature_index,
- CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE,
- params, sizeof(params), &response);
- if (ret)
- return ret;
- *multiplier = response.fap.params[1];
- return 0;
-}
-
-/* -------------------------------------------------------------------------- */
-/* 0x2121: HiRes Wheel */
-/* -------------------------------------------------------------------------- */
-
-#define HIDPP_PAGE_HIRES_WHEEL 0x2121
-
-#define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00
-#define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20
-
-static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp,
- u8 *multiplier)
-{
- u8 feature_index;
- u8 feature_type;
- int ret;
- struct hidpp_report response;
-
- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
- &feature_index, &feature_type);
- if (ret)
- goto return_default;
-
- ret = hidpp_send_fap_command_sync(hidpp, feature_index,
- CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY,
- NULL, 0, &response);
- if (ret)
- goto return_default;
-
- *multiplier = response.fap.params[0];
- return 0;
-return_default:
- hid_warn(hidpp->hid_dev,
- "Couldn't get wheel multiplier (error %d), assuming %d.\n",
- ret, *multiplier);
- return ret;
-}
-
-static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert,
- bool high_resolution, bool use_hidpp)
-{
- u8 feature_index;
- u8 feature_type;
- int ret;
- u8 params[1];
- struct hidpp_report response;
-
- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
- &feature_index, &feature_type);
- if (ret)
- return ret;
-
- params[0] = (invert ? BIT(2) : 0) |
- (high_resolution ? BIT(1) : 0) |
- (use_hidpp ? BIT(0) : 0);
-
- return hidpp_send_fap_command_sync(hidpp, feature_index,
- CMD_HIRES_WHEEL_SET_WHEEL_MODE,
- params, sizeof(params), &response);
-}
-
/* -------------------------------------------------------------------------- */
/* 0x4301: Solar Keyboard */
/* -------------------------------------------------------------------------- */
input_report_rel(mydata->input, REL_Y, v);
v = hid_snto32(data[6], 8);
- hid_scroll_counter_handle_scroll(
- &hidpp->vertical_wheel_counter, v);
+ input_report_rel(mydata->input, REL_WHEEL, v);
input_sync(mydata->input);
}
return 0;
}
-/* -------------------------------------------------------------------------- */
-/* High-resolution scroll wheels */
-/* -------------------------------------------------------------------------- */
-
-/**
- * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel.
- * @product_id: the HID product ID of the device being described.
- * @microns_per_hi_res_unit: the distance moved by the user's finger for each
- * high-resolution unit reported by the device, in
- * 256ths of a millimetre.
- */
-struct hi_res_scroll_info {
- __u32 product_id;
- int microns_per_hi_res_unit;
-};
-
-static struct hi_res_scroll_info hi_res_scroll_devices[] = {
- { /* Anywhere MX */
- .product_id = 0x1017, .microns_per_hi_res_unit = 445 },
- { /* Performance MX */
- .product_id = 0x101a, .microns_per_hi_res_unit = 406 },
- { /* M560 */
- .product_id = 0x402d, .microns_per_hi_res_unit = 435 },
- { /* MX Master 2S */
- .product_id = 0x4069, .microns_per_hi_res_unit = 406 },
-};
-
-static int hi_res_scroll_look_up_microns(__u32 product_id)
-{
- int i;
- int num_devices = sizeof(hi_res_scroll_devices)
- / sizeof(hi_res_scroll_devices[0]);
- for (i = 0; i < num_devices; i++) {
- if (hi_res_scroll_devices[i].product_id == product_id)
- return hi_res_scroll_devices[i].microns_per_hi_res_unit;
- }
- /* We don't have a value for this device, so use a sensible default. */
- return 406;
-}
-
-static int hi_res_scroll_enable(struct hidpp_device *hidpp)
-{
- int ret;
- u8 multiplier = 8;
-
- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) {
- ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false);
- hidpp_hrw_get_wheel_capability(hidpp, &multiplier);
- } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) {
- ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true,
- &multiplier);
- } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */
- ret = hidpp10_enable_scrolling_acceleration(hidpp);
-
- if (ret)
- return ret;
-
- hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;
- hidpp->vertical_wheel_counter.microns_per_hi_res_unit =
- hi_res_scroll_look_up_microns(hidpp->hid_dev->product);
- hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n",
- multiplier,
- hidpp->vertical_wheel_counter.microns_per_hi_res_unit);
- return 0;
-}
-
/* -------------------------------------------------------------------------- */
/* Generic HID++ devices */
/* -------------------------------------------------------------------------- */
wtp_populate_input(hidpp, input, origin_is_hid_core);
else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560)
m560_populate_input(hidpp, input, origin_is_hid_core);
-
- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) {
- input_set_capability(input, EV_REL, REL_WHEEL_HI_RES);
- hidpp->vertical_wheel_counter.dev = input;
- }
}
static int hidpp_input_configured(struct hid_device *hdev,
return 0;
}
-static int hidpp_event(struct hid_device *hdev, struct hid_field *field,
- struct hid_usage *usage, __s32 value)
-{
- /* This function will only be called for scroll events, due to the
- * restriction imposed in hidpp_usages.
- */
- struct hidpp_device *hidpp = hid_get_drvdata(hdev);
- struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter;
- /* A scroll event may occur before the multiplier has been retrieved or
- * the input device set, or high-res scroll enabling may fail. In such
- * cases we must return early (falling back to default behaviour) to
- * avoid a crash in hid_scroll_counter_handle_scroll.
- */
- if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0
- || counter->dev == NULL || counter->resolution_multiplier == 0)
- return 0;
-
- hid_scroll_counter_handle_scroll(counter, value);
- return 1;
-}
-
static int hidpp_initialize_battery(struct hidpp_device *hidpp)
{
static atomic_t battery_no = ATOMIC_INIT(0);
if (hidpp->battery.ps)
power_supply_changed(hidpp->battery.ps);
- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)
- hi_res_scroll_enable(hidpp);
-
if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)
/* if the input nodes are already created, we can stop now */
return;
mutex_destroy(&hidpp->send_mutex);
}
-#define LDJ_DEVICE(product) \
- HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \
- USB_VENDOR_ID_LOGITECH, (product))
-
static const struct hid_device_id hidpp_devices[] = {
{ /* wireless touchpad */
- LDJ_DEVICE(0x4011),
+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, 0x4011),
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT |
HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS },
{ /* wireless touchpad T650 */
- LDJ_DEVICE(0x4101),
+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, 0x4101),
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
{ /* wireless touchpad T651 */
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_T651),
.driver_data = HIDPP_QUIRK_CLASS_WTP },
- { /* Mouse Logitech Anywhere MX */
- LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
- { /* Mouse Logitech Cube */
- LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
- { /* Mouse Logitech M335 */
- LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech M515 */
- LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
{ /* Mouse logitech M560 */
- LDJ_DEVICE(0x402d),
- .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560
- | HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
- { /* Mouse Logitech M705 (firmware RQM17) */
- LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
- { /* Mouse Logitech M705 (firmware RQM67) */
- LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech M720 */
- LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech MX Anywhere 2 */
- LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech MX Anywhere 2S */
- LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech MX Master */
- LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { LDJ_DEVICE(0x4071), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech MX Master 2S */
- LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
- { /* Mouse Logitech Performance MX */
- LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, 0x402d),
+ .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },
{ /* Keyboard logitech K400 */
- LDJ_DEVICE(0x4024),
+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, 0x4024),
.driver_data = HIDPP_QUIRK_CLASS_K400 },
{ /* Solar Keyboard Logitech K750 */
- LDJ_DEVICE(0x4002),
+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, 0x4002),
.driver_data = HIDPP_QUIRK_CLASS_K750 },
- { LDJ_DEVICE(HID_ANY_ID) },
+ { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL),
.driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS},
MODULE_DEVICE_TABLE(hid, hidpp_devices);
-static const struct hid_usage_id hidpp_usages[] = {
- { HID_GD_WHEEL, EV_REL, REL_WHEEL },
- { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1}
-};
-
static struct hid_driver hidpp_driver = {
.name = "logitech-hidpp-device",
.id_table = hidpp_devices,
.probe = hidpp_probe,
.remove = hidpp_remove,
.raw_event = hidpp_raw_event,
- .usage_table = hidpp_usages,
- .event = hidpp_event,
.input_configured = hidpp_input_configured,
.input_mapping = hidpp_input_mapping,
.input_mapped = hidpp_input_mapped,
MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT,
USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) },
+ /* Cirque devices */
+ { .driver_data = MT_CLS_WIN_8_DUAL,
+ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ I2C_VENDOR_ID_CIRQUE,
+ I2C_PRODUCT_ID_CIRQUE_121F) },
+
/* CJTouch panels */
{ .driver_data = MT_CLS_NSMU,
MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET },
* In order to avoid breaking them this driver creates a layered hidraw device,
* so it can detect when the client is running and then:
* - it will not send any command to the controller.
- * - this input device will be disabled, to avoid double input of the same
+ * - this input device will be removed, to avoid double input of the same
* user action.
+ * When the client is closed, this input device will be created again.
*
* For additional functions, such as changing the right-pad margin or switching
* the led, you can use the user-space tool at:
spinlock_t lock;
struct hid_device *hdev, *client_hdev;
struct mutex mutex;
- bool client_opened, input_opened;
+ bool client_opened;
struct input_dev __rcu *input;
unsigned long quirks;
struct work_struct work_connect;
}
}
-static void steam_update_lizard_mode(struct steam_device *steam)
-{
- mutex_lock(&steam->mutex);
- if (!steam->client_opened) {
- if (steam->input_opened)
- steam_set_lizard_mode(steam, false);
- else
- steam_set_lizard_mode(steam, lizard_mode);
- }
- mutex_unlock(&steam->mutex);
-}
-
static int steam_input_open(struct input_dev *dev)
{
struct steam_device *steam = input_get_drvdata(dev);
return ret;
mutex_lock(&steam->mutex);
- steam->input_opened = true;
if (!steam->client_opened && lizard_mode)
steam_set_lizard_mode(steam, false);
mutex_unlock(&steam->mutex);
struct steam_device *steam = input_get_drvdata(dev);
mutex_lock(&steam->mutex);
- steam->input_opened = false;
if (!steam->client_opened && lizard_mode)
steam_set_lizard_mode(steam, true);
mutex_unlock(&steam->mutex);
return 0;
}
-static int steam_register(struct steam_device *steam)
+static int steam_input_register(struct steam_device *steam)
{
struct hid_device *hdev = steam->hdev;
struct input_dev *input;
return 0;
}
- /*
- * Unlikely, but getting the serial could fail, and it is not so
- * important, so make up a serial number and go on.
- */
- if (steam_get_serial(steam) < 0)
- strlcpy(steam->serial_no, "XXXXXXXXXX",
- sizeof(steam->serial_no));
-
- hid_info(hdev, "Steam Controller '%s' connected",
- steam->serial_no);
-
input = input_allocate_device();
if (!input)
return -ENOMEM;
goto input_register_fail;
rcu_assign_pointer(steam->input, input);
-
- /* ignore battery errors, we can live without it */
- if (steam->quirks & STEAM_QUIRK_WIRELESS)
- steam_battery_register(steam);
-
return 0;
input_register_fail:
return ret;
}
-static void steam_unregister(struct steam_device *steam)
+static void steam_input_unregister(struct steam_device *steam)
{
struct input_dev *input;
+ rcu_read_lock();
+ input = rcu_dereference(steam->input);
+ rcu_read_unlock();
+ if (!input)
+ return;
+ RCU_INIT_POINTER(steam->input, NULL);
+ synchronize_rcu();
+ input_unregister_device(input);
+}
+
+static void steam_battery_unregister(struct steam_device *steam)
+{
struct power_supply *battery;
rcu_read_lock();
- input = rcu_dereference(steam->input);
battery = rcu_dereference(steam->battery);
rcu_read_unlock();
- if (battery) {
- RCU_INIT_POINTER(steam->battery, NULL);
- synchronize_rcu();
- power_supply_unregister(battery);
+ if (!battery)
+ return;
+ RCU_INIT_POINTER(steam->battery, NULL);
+ synchronize_rcu();
+ power_supply_unregister(battery);
+}
+
+static int steam_register(struct steam_device *steam)
+{
+ int ret;
+
+ /*
+ * This function can be called several times in a row with the
+ * wireless adaptor, without steam_unregister() between them, because
+ * another client sends a get_connection_status command, for example.
+ * The battery and serial number are set just once per device.
+ */
+ if (!steam->serial_no[0]) {
+ /*
+ * Unlikely, but getting the serial could fail, and it is not so
+ * important, so make up a serial number and go on.
+ */
+ if (steam_get_serial(steam) < 0)
+ strlcpy(steam->serial_no, "XXXXXXXXXX",
+ sizeof(steam->serial_no));
+
+ hid_info(steam->hdev, "Steam Controller '%s' connected",
+ steam->serial_no);
+
+ /* ignore battery errors, we can live without it */
+ if (steam->quirks & STEAM_QUIRK_WIRELESS)
+ steam_battery_register(steam);
+
+ mutex_lock(&steam_devices_lock);
+ list_add(&steam->list, &steam_devices);
+ mutex_unlock(&steam_devices_lock);
}
- if (input) {
- RCU_INIT_POINTER(steam->input, NULL);
- synchronize_rcu();
+
+ mutex_lock(&steam->mutex);
+ if (!steam->client_opened) {
+ steam_set_lizard_mode(steam, lizard_mode);
+ ret = steam_input_register(steam);
+ } else {
+ ret = 0;
+ }
+ mutex_unlock(&steam->mutex);
+
+ return ret;
+}
+
+static void steam_unregister(struct steam_device *steam)
+{
+ steam_battery_unregister(steam);
+ steam_input_unregister(steam);
+ if (steam->serial_no[0]) {
hid_info(steam->hdev, "Steam Controller '%s' disconnected",
steam->serial_no);
- input_unregister_device(input);
+ mutex_lock(&steam_devices_lock);
+ list_del(&steam->list);
+ mutex_unlock(&steam_devices_lock);
+ steam->serial_no[0] = 0;
}
}
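Both unregister helpers above follow the canonical RCU unpublish sequence: hide the published pointer, wait out any readers that may still hold the old value, then release the object. A minimal self-contained sketch of the same pattern, with hypothetical names:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct widget { int data; };
    static struct widget __rcu *published;

    static void widget_unpublish(void)
    {
            struct widget *w;

            rcu_read_lock();
            w = rcu_dereference(published);
            rcu_read_unlock();
            if (!w)
                    return;

            /* Unpublish, then wait for every in-flight RCU reader. */
            RCU_INIT_POINTER(published, NULL);
            synchronize_rcu();

            /* No reader can still observe 'w'; releasing it is now safe. */
            kfree(w);
    }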
mutex_lock(&steam->mutex);
steam->client_opened = true;
mutex_unlock(&steam->mutex);
+
+ steam_input_unregister(steam);
+
return ret;
}
mutex_lock(&steam->mutex);
steam->client_opened = false;
- if (steam->input_opened)
- steam_set_lizard_mode(steam, false);
- else
- steam_set_lizard_mode(steam, lizard_mode);
mutex_unlock(&steam->mutex);
hid_hw_close(steam->hdev);
+ if (steam->connected) {
+ steam_set_lizard_mode(steam, lizard_mode);
+ steam_input_register(steam);
+ }
}
static int steam_client_ll_raw_request(struct hid_device *hdev,
}
}
- mutex_lock(&steam_devices_lock);
- steam_update_lizard_mode(steam);
- list_add(&steam->list, &steam_devices);
- mutex_unlock(&steam_devices_lock);
-
return 0;
hid_hw_open_fail:
return;
}
- mutex_lock(&steam_devices_lock);
- list_del(&steam->list);
- mutex_unlock(&steam_devices_lock);
-
hid_destroy_device(steam->client_hdev);
steam->client_opened = false;
cancel_work_sync(&steam->work_connect);
static void steam_do_connect_event(struct steam_device *steam, bool connected)
{
unsigned long flags;
+ bool changed;
spin_lock_irqsave(&steam->lock, flags);
+ changed = steam->connected != connected;
steam->connected = connected;
spin_unlock_irqrestore(&steam->lock, flags);
- if (schedule_work(&steam->work_connect) == 0)
+ if (changed && schedule_work(&steam->work_connect) == 0)
dbg_hid("%s: connected=%d event already queued\n",
__func__, connected);
}
return 0;
rcu_read_lock();
input = rcu_dereference(steam->input);
- if (likely(input)) {
+ if (likely(input))
steam_do_input_event(steam, input, data);
- } else {
- dbg_hid("%s: input data without connect event\n",
- __func__);
- steam_do_connect_event(steam, true);
- }
rcu_read_unlock();
break;
case STEAM_EV_CONNECT:
mutex_lock(&steam_devices_lock);
list_for_each_entry(steam, &steam_devices, list) {
- steam_update_lizard_mode(steam);
+ mutex_lock(&steam->mutex);
+ if (!steam->client_opened)
+ steam_set_lizard_mode(steam, lizard_mode);
+ mutex_unlock(&steam->mutex);
}
mutex_unlock(&steam_devices_lock);
return 0;
I2C_HID_QUIRK_NO_RUNTIME_PM },
{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33,
I2C_HID_QUIRK_DELAY_AFTER_SLEEP },
+ { USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001,
+ I2C_HID_QUIRK_NO_RUNTIME_PM },
{ 0, 0 }
};
#include <linux/atomic.h>
#include <linux/compat.h>
+#include <linux/cred.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/hid.h>
goto err_free;
}
- len = min(sizeof(hid->name), sizeof(ev->u.create2.name));
- strlcpy(hid->name, ev->u.create2.name, len);
- len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys));
- strlcpy(hid->phys, ev->u.create2.phys, len);
- len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq));
- strlcpy(hid->uniq, ev->u.create2.uniq, len);
+ /* @hid is zero-initialized, so strncpy() is correct here; strlcpy() is not */
+ len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;
+ strncpy(hid->name, ev->u.create2.name, len);
+ len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;
+ strncpy(hid->phys, ev->u.create2.phys, len);
+ len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;
+ strncpy(hid->uniq, ev->u.create2.uniq, len);
hid->ll_driver = &uhid_hid_driver;
hid->bus = ev->u.create2.bus;
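To spell out the comment above: because the hid structure is allocated zeroed, copying at most sizeof(dst) - 1 bytes with strncpy() leaves the final byte 0, so the result is always NUL-terminated. strlcpy(), by contrast, first runs strlen() on the source, which may read past the end of an unterminated buffer. A small illustration (sketch, not driver code):

    char dst[8] = { 0 };          /* zeroed, like the kzalloc'ed hid device */
    char src[8] = "abcdefgh";     /* fills the array exactly, no NUL */

    strncpy(dst, src, sizeof(dst) - 1);   /* dst[7] stays 0: terminated */
    /* strlcpy(dst, src, sizeof(dst)) would strlen(src) and read past
     * the end of the unterminated source. */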
switch (uhid->input_buf.type) {
case UHID_CREATE:
+ /*
+ * 'struct uhid_create_req' contains a __user pointer which is
+ * copied from, so it's unsafe to allow this with elevated
+ * privileges (e.g. from a setuid binary) or via kernel_write().
+ */
+ if (file->f_cred != current_cred() || uaccess_kernel()) {
+ pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n",
+ task_tgid_vnr(current), current->comm);
+ ret = -EACCES;
+ goto unlock;
+ }
ret = uhid_dev_create(uhid, &uhid->input_buf);
break;
case UHID_CREATE2:
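The guard above generalizes to any write() handler whose payload embeds __user pointers: refuse requests issued under a different security context than the opener's, and requests originating in the kernel via kernel_write(). A condensed sketch of that shape (hypothetical handler, same checks as the hunk):

    static ssize_t demo_write(struct file *file, const char __user *buf,
                              size_t count, loff_t *off)
    {
            /*
             * The payload carries a __user pointer that is dereferenced
             * later, so reject callers whose credentials differ from the
             * opener's (e.g. a setuid binary that inherited the fd) and
             * any in-kernel writes.
             */
            if (file->f_cred != current_cred() || uaccess_kernel())
                    return -EACCES;

            /* ... copy in and act on the request ... */
            return count;
    }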
out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;
+ /* fallthrough */
+
+ case KVP_OP_GET_IP_INFO:
utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,
MAX_ADAPTER_ID_SIZE,
UTF16_LITTLE_ENDIAN,
process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);
break;
case KVP_OP_GET_IP_INFO:
- /* We only need to pass on message->kvp_hdr.operation. */
+ /*
+ * We only need to pass the operation, adapter_id and addr_family
+ * info on to the userland kvp daemon.
+ */
+ process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);
break;
case KVP_OP_SET:
switch (in_msg->body.kvp_set.data.value_type) {
}
- break;
-
- case KVP_OP_GET:
+ /*
+ * The key is always a string - utf16 encoding.
+ */
message->body.kvp_set.data.key_size =
utf16s_to_utf8s(
(wchar_t *)in_msg->body.kvp_set.data.key,
UTF16_LITTLE_ENDIAN,
message->body.kvp_set.data.key,
HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
+
+ break;
+
+ case KVP_OP_GET:
+ message->body.kvp_get.data.key_size =
+ utf16s_to_utf8s(
+ (wchar_t *)in_msg->body.kvp_get.data.key,
+ in_msg->body.kvp_get.data.key_size,
+ UTF16_LITTLE_ENDIAN,
+ message->body.kvp_get.data.key,
+ HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
break;
case KVP_OP_DELETE:
break;
case INA2XX_CURRENT:
/* signed register, result in mA */
- val = regval * data->current_lsb_uA;
+ val = (s16)regval * data->current_lsb_uA;
val = DIV_ROUND_CLOSEST(val, 1000);
break;
case INA2XX_CALIBRATION:
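The added cast matters because the raw register read comes back as an unsigned value; a negative current encoded as 0xFFFF would otherwise scale as 65535 rather than -1. In miniature:

    u16 regval = 0xFFFF;                /* two's-complement -1 */
    long wrong = regval * 1000;         /* 65535000: sign lost */
    long right = (s16)regval * 1000;    /* -1000: sign-extended first */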
}
data->groups[group++] = &ina2xx_group;
- if (id->driver_data == ina226)
+ if (chip == ina226)
data->groups[group++] = &ina226_group;
hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
return PTR_ERR(hwmon_dev);
dev_info(dev, "power monitor %s (Rshunt = %li uOhm)\n",
- id->name, data->rshunt);
+ client->name, data->rshunt);
return 0;
}
*/
#define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \
((rval) + (s)) * (d)))
-#define MLXREG_FAN_GET_FAULT(val, mask) (!!((val) ^ (mask)))
+#define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask)))
#define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \
MLXREG_FAN_MAX_STATE, \
MLXREG_FAN_MAX_DUTY))
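The one-character change inverts the macro's meaning: !!((val) ^ (mask)) is true whenever any bit differs, while !((val) ^ (mask)) is true only when the two are equal, i.e. a fault is now reported when the register reads back exactly as the fault pattern:

    /* The new definition reduces to an equality test: */
    bool fault = !((val) ^ (mask));   /* same as: fault = (val == mask); */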
{
struct device *dev = &pdev->dev;
struct rpi_hwmon_data *data;
- int ret;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
/* Parent driver assure that firmware is correct */
data->fw = dev_get_drvdata(dev->parent);
- /* Init throttled */
- ret = rpi_firmware_property(data->fw, RPI_FIRMWARE_GET_THROTTLED,
- &data->last_throttled,
- sizeof(data->last_throttled));
-
data->hwmon_dev = devm_hwmon_device_register_with_info(dev, "rpi_volt",
data,
&rpi_chip_info,
* somewhere else in the code
*/
#define SENSOR_ATTR_TEMP(index) { \
- SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 4 ? S_IWUSR : 0), \
+ SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 5 ? S_IWUSR : 0), \
show_temp_mode, store_temp_mode, NOT_USED, index - 1), \
SENSOR_ATTR_2(temp##index##_input, S_IRUGO, show_temp, \
NULL, TEMP_READ, index - 1), \
case NETDEV_CHANGEADDR:
cmds[0] = netdev_del_cmd;
- cmds[1] = add_default_gid_cmd;
- cmds[2] = add_cmd;
+ if (ndev->reg_state == NETREG_REGISTERED) {
+ cmds[1] = add_default_gid_cmd;
+ cmds[2] = add_cmd;
+ }
break;
case NETDEV_CHANGEUPPER:
up_read(&per_mm->umem_rwsem);
}
-static int invalidate_page_trampoline(struct ib_umem_odp *item, u64 start,
- u64 end, void *cookie)
-{
- ib_umem_notifier_start_account(item);
- item->umem.context->invalidate_range(item, start, start + PAGE_SIZE);
- ib_umem_notifier_end_account(item);
- return 0;
-}
-
static int invalidate_range_start_trampoline(struct ib_umem_odp *item,
u64 start, u64 end, void *cookie)
{
put_page(page);
if (remove_existing_mapping && umem->context->invalidate_range) {
- invalidate_page_trampoline(
+ ib_umem_notifier_start_account(umem_odp);
+ umem->context->invalidate_range(
umem_odp,
- ib_umem_start(umem) + (page_index >> umem->page_shift),
- ib_umem_start(umem) + ((page_index + 1) >>
- umem->page_shift),
- NULL);
+ ib_umem_start(umem) + (page_index << umem->page_shift),
+ ib_umem_start(umem) +
+ ((page_index + 1) << umem->page_shift));
+ ib_umem_notifier_end_account(umem_odp);
ret = -EAGAIN;
}
/* Registered a new RoCE device instance to netdev */
rc = bnxt_re_register_netdev(rdev);
if (rc) {
+ rtnl_unlock();
pr_err("Failed to register with netedev: %#x\n", rc);
return -EINVAL;
}
"Failed to register with IB: %#x", rc);
bnxt_re_remove_one(rdev);
bnxt_re_dev_unreg(rdev);
+ goto exit;
}
break;
case NETDEV_UP:
}
smp_mb__before_atomic();
atomic_dec(&rdev->sched_count);
+exit:
kfree(re_work);
}
return hns_roce_cmq_send(hr_dev, &desc, 1);
}
-static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
- unsigned long mtpt_idx)
+static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry,
+ struct hns_roce_mr *mr)
{
- struct hns_roce_v2_mpt_entry *mpt_entry;
struct scatterlist *sg;
u64 page_addr;
u64 *pages;
int len;
int entry;
+ mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
+ mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
+ roce_set_field(mpt_entry->byte_48_mode_ba,
+ V2_MPT_BYTE_48_PBL_BA_H_M, V2_MPT_BYTE_48_PBL_BA_H_S,
+ upper_32_bits(mr->pbl_ba >> 3));
+
+ pages = (u64 *)__get_free_page(GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ i = 0;
+ for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
+ len = sg_dma_len(sg) >> PAGE_SHIFT;
+ for (j = 0; j < len; ++j) {
+ page_addr = sg_dma_address(sg) +
+ (j << mr->umem->page_shift);
+ pages[i] = page_addr >> 6;
+ /* Record the first 2 entry directly to MTPT table */
+ if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
+ goto found;
+ i++;
+ }
+ }
+found:
+ mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
+ roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M,
+ V2_MPT_BYTE_56_PA0_H_S, upper_32_bits(pages[0]));
+
+ mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1]));
+ roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M,
+ V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1]));
+ roce_set_field(mpt_entry->byte_64_buf_pa1,
+ V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
+ V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
+ mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
+
+ free_page((unsigned long)pages);
+
+ return 0;
+}
+
+static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
+ unsigned long mtpt_idx)
+{
+ struct hns_roce_v2_mpt_entry *mpt_entry;
+ int ret;
+
mpt_entry = mb_buf;
memset(mpt_entry, 0, sizeof(*mpt_entry));
mr->pbl_ba_pg_sz + PG_SHIFT_OFFSET);
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
V2_MPT_BYTE_4_PD_S, mr->pd);
- mpt_entry->byte_4_pd_hop_st = cpu_to_le32(mpt_entry->byte_4_pd_hop_st);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RA_EN_S, 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1);
(mr->access & IB_ACCESS_REMOTE_WRITE ? 1 : 0));
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S,
(mr->access & IB_ACCESS_LOCAL_WRITE ? 1 : 0));
- mpt_entry->byte_8_mw_cnt_en = cpu_to_le32(mpt_entry->byte_8_mw_cnt_en);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S,
mr->type == MR_TYPE_MR ? 0 : 1);
roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_INNER_PA_VLD_S,
1);
- mpt_entry->byte_12_mw_pa = cpu_to_le32(mpt_entry->byte_12_mw_pa);
mpt_entry->len_l = cpu_to_le32(lower_32_bits(mr->size));
mpt_entry->len_h = cpu_to_le32(upper_32_bits(mr->size));
if (mr->type == MR_TYPE_DMA)
return 0;
- mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
-
- mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
- roce_set_field(mpt_entry->byte_48_mode_ba, V2_MPT_BYTE_48_PBL_BA_H_M,
- V2_MPT_BYTE_48_PBL_BA_H_S,
- upper_32_bits(mr->pbl_ba >> 3));
- mpt_entry->byte_48_mode_ba = cpu_to_le32(mpt_entry->byte_48_mode_ba);
-
- pages = (u64 *)__get_free_page(GFP_KERNEL);
- if (!pages)
- return -ENOMEM;
-
- i = 0;
- for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
- len = sg_dma_len(sg) >> PAGE_SHIFT;
- for (j = 0; j < len; ++j) {
- page_addr = sg_dma_address(sg) +
- (j << mr->umem->page_shift);
- pages[i] = page_addr >> 6;
-
- /* Record the first 2 entry directly to MTPT table */
- if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
- goto found;
- i++;
- }
- }
-
-found:
- mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
- roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M,
- V2_MPT_BYTE_56_PA0_H_S,
- upper_32_bits(pages[0]));
- mpt_entry->byte_56_pa0_h = cpu_to_le32(mpt_entry->byte_56_pa0_h);
-
- mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1]));
- roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M,
- V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1]));
+ ret = set_mtpt_pbl(mpt_entry, mr);
- free_page((unsigned long)pages);
-
- roce_set_field(mpt_entry->byte_64_buf_pa1,
- V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
- V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
- mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
- mpt_entry->byte_64_buf_pa1 = cpu_to_le32(mpt_entry->byte_64_buf_pa1);
-
- return 0;
+ return ret;
}
static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
u64 size, void *mb_buf)
{
struct hns_roce_v2_mpt_entry *mpt_entry = mb_buf;
+ int ret = 0;
if (flags & IB_MR_REREG_PD) {
roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
V2_MPT_BYTE_8_BIND_EN_S,
(mr_access_flags & IB_ACCESS_MW_BIND ? 1 : 0));
roce_set_bit(mpt_entry->byte_8_mw_cnt_en,
- V2_MPT_BYTE_8_ATOMIC_EN_S,
- (mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0));
+ V2_MPT_BYTE_8_ATOMIC_EN_S,
+ mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RR_EN_S,
- (mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0));
+ mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RW_EN_S,
- (mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0));
+ mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0);
roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S,
- (mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0));
+ mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0);
}
if (flags & IB_MR_REREG_TRANS) {
mpt_entry->len_l = cpu_to_le32(lower_32_bits(size));
mpt_entry->len_h = cpu_to_le32(upper_32_bits(size));
- mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
- mpt_entry->pbl_ba_l =
- cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
- roce_set_field(mpt_entry->byte_48_mode_ba,
- V2_MPT_BYTE_48_PBL_BA_H_M,
- V2_MPT_BYTE_48_PBL_BA_H_S,
- upper_32_bits(mr->pbl_ba >> 3));
- mpt_entry->byte_48_mode_ba =
- cpu_to_le32(mpt_entry->byte_48_mode_ba);
-
mr->iova = iova;
mr->size = size;
+
+ ret = set_mtpt_pbl(mpt_entry, mr);
}
- return 0;
+ return ret;
}
static int hns_roce_v2_frmr_write_mtpt(void *mb_buf, struct hns_roce_mr *mr)
MLX5_IB_WIDTH_12X = 1 << 4
};
-static int translate_active_width(struct ib_device *ibdev, u8 active_width,
+static void translate_active_width(struct ib_device *ibdev, u8 active_width,
u8 *ib_width)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
- int err = 0;
- if (active_width & MLX5_IB_WIDTH_1X) {
+ if (active_width & MLX5_IB_WIDTH_1X)
*ib_width = IB_WIDTH_1X;
- } else if (active_width & MLX5_IB_WIDTH_2X) {
- mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n",
- (int)active_width);
- err = -EINVAL;
- } else if (active_width & MLX5_IB_WIDTH_4X) {
+ else if (active_width & MLX5_IB_WIDTH_4X)
*ib_width = IB_WIDTH_4X;
- } else if (active_width & MLX5_IB_WIDTH_8X) {
+ else if (active_width & MLX5_IB_WIDTH_8X)
*ib_width = IB_WIDTH_8X;
- } else if (active_width & MLX5_IB_WIDTH_12X) {
+ else if (active_width & MLX5_IB_WIDTH_12X)
*ib_width = IB_WIDTH_12X;
- } else {
- mlx5_ib_dbg(dev, "Invalid active_width %d\n",
+ else {
+ mlx5_ib_dbg(dev, "Invalid active_width %d, setting width to default value: 4x\n",
(int)active_width);
- err = -EINVAL;
+ *ib_width = IB_WIDTH_4X;
}
- return err;
+ return;
}
static int mlx5_mtu_to_ib_mtu(int mtu)
if (err)
goto out;
- err = translate_active_width(ibdev, ib_link_width_oper,
- &props->active_width);
- if (err)
- goto out;
+ translate_active_width(ibdev, ib_link_width_oper, &props->active_width);
+
err = mlx5_query_port_ib_proto_oper(mdev, &props->active_speed, port);
if (err)
goto out;
goto srcu_unlock;
}
+ if (!mr->umem->is_odp) {
+ mlx5_ib_dbg(dev, "skipping non ODP MR (lkey=0x%06x) in page fault handler.\n",
+ key);
+ if (bytes_mapped)
+ *bytes_mapped += bcnt;
+ ret = 0;
+ goto srcu_unlock;
+ }
+
ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped);
if (ret < 0)
goto srcu_unlock;
head = frame;
bcnt -= frame->bcnt;
+ offset = 0;
}
break;
if (access_flags & IB_ACCESS_REMOTE_READ)
*hw_access_flags |= MLX5_QP_BIT_RRE;
- if ((access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
- qp->ibqp.qp_type == IB_QPT_RC) {
+ if (access_flags & IB_ACCESS_REMOTE_ATOMIC) {
int atomic_mode;
atomic_mode = get_atomic_mode(dev, qp->ibqp.qp_type);
goto out;
}
- if (wr->opcode == IB_WR_LOCAL_INV ||
- wr->opcode == IB_WR_REG_MR) {
+ if (wr->opcode == IB_WR_REG_MR) {
fence = dev->umr_fence;
next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
- } else if (wr->send_flags & IB_SEND_FENCE) {
- if (qp->next_fence)
- fence = MLX5_FENCE_MODE_SMALL_AND_FENCE;
- else
- fence = MLX5_FENCE_MODE_FENCE;
- } else {
- fence = qp->next_fence;
+ } else {
+ if (wr->send_flags & IB_SEND_FENCE) {
+ if (qp->next_fence)
+ fence = MLX5_FENCE_MODE_SMALL_AND_FENCE;
+ else
+ fence = MLX5_FENCE_MODE_FENCE;
+ } else {
+ fence = qp->next_fence;
+ }
}
switch (ibqp->qp_type) {
* rvt_create_ah - create an address handle
* @pd: the protection domain
* @ah_attr: the attributes of the AH
+ * @udata: pointer to user's input output buffer information.
*
* This may be called from interrupt context.
*
* Return: newly allocated ah
*/
struct ib_ah *rvt_create_ah(struct ib_pd *pd,
- struct rdma_ah_attr *ah_attr)
+ struct rdma_ah_attr *ah_attr,
+ struct ib_udata *udata)
{
struct rvt_ah *ah;
struct rvt_dev_info *dev = ib_to_rvt(pd->device);
#include <rdma/rdma_vt.h>
struct ib_ah *rvt_create_ah(struct ib_pd *pd,
- struct rdma_ah_attr *ah_attr);
+ struct rdma_ah_attr *ah_attr,
+ struct ib_udata *udata);
int rvt_destroy_ah(struct ib_ah *ibah);
int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
IB_MR_CHECK_SIG_STATUS, &mr_status);
if (ret) {
pr_err("ib_check_mr_status failed, ret %d\n", ret);
- goto err;
+ /* Not a lot we can do, return ambiguous guard error */
+ *sector = 0;
+ return 0x1;
}
if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
}
return 0;
-err:
- /* Not alot we can do here, return ambiguous guard error */
- return 0x1;
}
void iser_err_comp(struct ib_wc *wc, const char *type)
entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
&entry, sizeof(entry));
- entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL;
+ entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
+ (BIT_ULL(52)-1)) & ~7ULL;
memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
&entry, sizeof(entry));
writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
}
if (old_ce)
- iounmap(old_ce);
+ memunmap(old_ce);
ret = 0;
if (devfn < 0x80)
pr_err("%s: Page request without PASID: %08llx %08llx\n",
iommu->name, ((unsigned long long *)req)[0],
((unsigned long long *)req)[1]);
- goto bad_req;
+ goto no_pasid;
}
if (!svm || svm->pasid != req->pasid) {
static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
{
+ if (!domain->mmu)
+ return;
+
/*
* Disable the context. Flush the TLB as required when modifying the
* context registers.
}
if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) {
- dprintk(1, "%s: transmit queue full\n", __func__);
+ dprintk(2, "%s: transmit queue full\n", __func__);
return -EBUSY;
}
{
struct cec_log_addrs *las = &adap->log_addrs;
struct cec_msg msg = { };
+ const unsigned int max_retries = 2;
+ unsigned int i;
int err;
if (cec_has_log_addr(adap, log_addr))
/* Send poll message */
msg.len = 1;
msg.msg[0] = (log_addr << 4) | log_addr;
- err = cec_transmit_msg_fh(adap, &msg, NULL, true);
- /*
- * While trying to poll the physical address was reset
- * and the adapter was unconfigured, so bail out.
- */
- if (!adap->is_configuring)
- return -EINTR;
+ for (i = 0; i < max_retries; i++) {
+ err = cec_transmit_msg_fh(adap, &msg, NULL, true);
- if (err)
- return err;
+ /*
+ * While trying to poll the physical address was reset
+ * and the adapter was unconfigured, so bail out.
+ */
+ if (!adap->is_configuring)
+ return -EINTR;
+
+ if (err)
+ return err;
- if (msg.tx_status & CEC_TX_STATUS_OK)
+ /*
+ * The message was aborted due to a disconnect or
+ * unconfigure, just bail out.
+ */
+ if (msg.tx_status & CEC_TX_STATUS_ABORTED)
+ return -EINTR;
+ if (msg.tx_status & CEC_TX_STATUS_OK)
+ return 0;
+ if (msg.tx_status & CEC_TX_STATUS_NACK)
+ break;
+ /*
+ * Retry up to max_retries times if the message was neither
+ * OKed nor NACKed. This can happen due to e.g. a Lost
+ * Arbitration condition.
+ */
+ }
+
+ /*
+ * If we are unable to get an OK or a NACK after max_retries attempts
+ * (and note that each attempt already consists of four polls), then
+ * we assume that something is really weird and that it is not a
+ * good idea to try and claim this logical address.
+ */
+ if (i == max_retries)
return 0;
/*
ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint);
if (ret) {
dev_err(dev, "failed to parse endpoint\n");
- ret = ret;
goto put_node;
}
static void cio2_pci_remove(struct pci_dev *pci_dev)
{
struct cio2_device *cio2 = pci_get_drvdata(pci_dev);
- unsigned int i;
+ media_device_unregister(&cio2->media_dev);
cio2_notifier_exit(cio2);
+ cio2_queues_exit(cio2);
cio2_fbpt_exit_dummy(cio2);
- for (i = 0; i < CIO2_QUEUES; i++)
- cio2_queue_exit(cio2, &cio2->queue[i]);
v4l2_device_unregister(&cio2->v4l2_dev);
- media_device_unregister(&cio2->media_dev);
media_device_cleanup(&cio2->media_dev);
mutex_destroy(&cio2->lock);
}
static void isp_unregister_entities(struct isp_device *isp)
{
+ media_device_unregister(&isp->media_dev);
+
omap3isp_csi2_unregister_entities(&isp->isp_csi2a);
omap3isp_ccp2_unregister_entities(&isp->isp_ccp2);
omap3isp_ccdc_unregister_entities(&isp->isp_ccdc);
omap3isp_stat_unregister_entities(&isp->isp_hist);
v4l2_device_unregister(&isp->v4l2_dev);
- media_device_unregister(&isp->media_dev);
media_device_cleanup(&isp->media_dev);
}
#define MAX_WIDTH 4096U
#define MIN_WIDTH 640U
#define MAX_HEIGHT 2160U
-#define MIN_HEIGHT 480U
+#define MIN_HEIGHT 360U
#define dprintk(dev, fmt, arg...) \
v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
static const struct media_device_ops m2m_media_ops = {
.req_validate = vb2_request_validate,
- .req_queue = vb2_m2m_request_queue,
+ .req_queue = v4l2_m2m_request_queue,
};
static int vim2m_probe(struct platform_device *pdev)
p_mpeg2_slice_params->forward_ref_index >= VIDEO_MAX_FRAME)
return -EINVAL;
+ if (p_mpeg2_slice_params->pad ||
+ p_mpeg2_slice_params->picture.pad ||
+ p_mpeg2_slice_params->sequence.pad)
+ return -EINVAL;
+
return 0;
case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:
}
EXPORT_SYMBOL_GPL(v4l2_event_pending);
+static void __v4l2_event_unsubscribe(struct v4l2_subscribed_event *sev)
+{
+ struct v4l2_fh *fh = sev->fh;
+ unsigned int i;
+
+ lockdep_assert_held(&fh->subscribe_lock);
+ assert_spin_locked(&fh->vdev->fh_lock);
+
+ /* Remove any pending events for this subscription */
+ for (i = 0; i < sev->in_use; i++) {
+ list_del(&sev->events[sev_pos(sev, i)].list);
+ fh->navailable--;
+ }
+ list_del(&sev->list);
+}
+
int v4l2_event_subscribe(struct v4l2_fh *fh,
const struct v4l2_event_subscription *sub, unsigned elems,
const struct v4l2_subscribed_event_ops *ops)
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+ if (!found_ev)
+ list_add(&sev->list, &fh->subscribed);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
if (found_ev) {
/* Already listening */
kvfree(sev);
- goto out_unlock;
- }
-
- if (sev->ops && sev->ops->add) {
+ } else if (sev->ops && sev->ops->add) {
ret = sev->ops->add(sev, elems);
if (ret) {
+ spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ __v4l2_event_unsubscribe(sev);
+ spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
kvfree(sev);
- goto out_unlock;
}
}
- spin_lock_irqsave(&fh->vdev->fh_lock, flags);
- list_add(&sev->list, &fh->subscribed);
- spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
-
-out_unlock:
mutex_unlock(&fh->subscribe_lock);
return ret;
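The reordering is the point of this hunk: the usual reason is that an ops->add() implementation may queue an initial event for the new subscription, which only works if the subscription is already on fh->subscribed. So the entry is published under the lock first and rolled back if add fails. The general shape, as a sketch with hypothetical names:

    spin_lock_irqsave(&lock, flags);
    list_add(&sev->list, &subscribed);     /* visible before ->add() runs */
    spin_unlock_irqrestore(&lock, flags);

    ret = ops->add(sev, elems);            /* may deliver an initial event */
    if (ret) {
            spin_lock_irqsave(&lock, flags);
            list_del(&sev->list);          /* roll back on failure */
            spin_unlock_irqrestore(&lock, flags);
            kvfree(sev);
    }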
{
struct v4l2_subscribed_event *sev;
unsigned long flags;
- int i;
if (sub->type == V4L2_EVENT_ALL) {
v4l2_event_unsubscribe_all(fh);
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
sev = v4l2_event_subscribed(fh, sub->type, sub->id);
- if (sev != NULL) {
- /* Remove any pending events for this subscription */
- for (i = 0; i < sev->in_use; i++) {
- list_del(&sev->events[sev_pos(sev, i)].list);
- fh->navailable--;
- }
- list_del(&sev->list);
- }
+ if (sev != NULL)
+ __v4l2_event_unsubscribe(sev);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
}
EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue);
-void vb2_m2m_request_queue(struct media_request *req)
+void v4l2_m2m_request_queue(struct media_request *req)
{
struct media_request_object *obj, *obj_safe;
struct v4l2_m2m_ctx *m2m_ctx = NULL;
if (m2m_ctx)
v4l2_m2m_try_schedule(m2m_ctx);
}
-EXPORT_SYMBOL_GPL(vb2_m2m_request_queue);
+EXPORT_SYMBOL_GPL(v4l2_m2m_request_queue);
/* Videobuf2 ioctl helpers */
MODULE_DEVICE_TABLE(of, atmel_ssc_dt_ids);
#endif
-static inline const struct atmel_ssc_platform_data * __init
+static inline const struct atmel_ssc_platform_data *
atmel_ssc_get_driver_data(struct platform_device *pdev)
{
if (pdev->dev.of_node) {
#include <linux/delay.h>
#include <linux/bitops.h>
#include <asm/uv/uv_hub.h>
+
+#include <linux/nospec.h>
+
#include "gru.h"
#include "grutables.h"
#include "gruhandles.h"
/* Currently, only dump by gid is implemented */
if (req.gid >= gru_max_gids)
return -EINVAL;
+ req.gid = array_index_nospec(req.gid, gru_max_gids);
gru = GID_TO_GRU(req.gid);
ubuf = req.buf;
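This is the stock Spectre-v1 mitigation: bounds-check the untrusted index first, then clamp it with array_index_nospec() so the CPU cannot use a speculatively out-of-range value to index kernel memory. A minimal sketch:

    #include <linux/nospec.h>

    static int demo_lookup(unsigned int idx, unsigned int nr, const int *table)
    {
            if (idx >= nr)
                    return -EINVAL;

            /* Architecturally a no-op; under speculation, clamps idx < nr. */
            idx = array_index_nospec(idx, nr);
            return table[idx];
    }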
* - JMicron (hardware and technical support)
*/
+#include <linux/bitfield.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/highmem.h>
u32 dsm_fns;
int drv_strength;
bool d3_retune;
+ bool rpm_retune_ok;
+ u32 glk_rx_ctrl1;
+ u32 glk_tun_val;
};
static const guid_t intel_dsm_guid =
return ret;
}
+#ifdef CONFIG_PM
+#define GLK_RX_CTRL1 0x834
+#define GLK_TUN_VAL 0x840
+#define GLK_PATH_PLL GENMASK(13, 8)
+#define GLK_DLY GENMASK(6, 0)
+/* Workaround firmware failing to restore the tuning value */
+static void glk_rpm_retune_wa(struct sdhci_pci_chip *chip, bool susp)
+{
+ struct sdhci_pci_slot *slot = chip->slots[0];
+ struct intel_host *intel_host = sdhci_pci_priv(slot);
+ struct sdhci_host *host = slot->host;
+ u32 glk_rx_ctrl1;
+ u32 glk_tun_val;
+ u32 dly;
+
+ if (intel_host->rpm_retune_ok || !mmc_can_retune(host->mmc))
+ return;
+
+ glk_rx_ctrl1 = sdhci_readl(host, GLK_RX_CTRL1);
+ glk_tun_val = sdhci_readl(host, GLK_TUN_VAL);
+
+ if (susp) {
+ intel_host->glk_rx_ctrl1 = glk_rx_ctrl1;
+ intel_host->glk_tun_val = glk_tun_val;
+ return;
+ }
+
+ if (!intel_host->glk_tun_val)
+ return;
+
+ if (glk_rx_ctrl1 != intel_host->glk_rx_ctrl1) {
+ intel_host->rpm_retune_ok = true;
+ return;
+ }
+
+ dly = FIELD_PREP(GLK_DLY, FIELD_GET(GLK_PATH_PLL, glk_rx_ctrl1) +
+ (intel_host->glk_tun_val << 1));
+ if (dly == FIELD_GET(GLK_DLY, glk_rx_ctrl1))
+ return;
+
+ glk_rx_ctrl1 = (glk_rx_ctrl1 & ~GLK_DLY) | dly;
+ sdhci_writel(host, glk_rx_ctrl1, GLK_RX_CTRL1);
+
+ intel_host->rpm_retune_ok = true;
+ chip->rpm_retune = true;
+ mmc_retune_needed(host->mmc);
+ pr_info("%s: Requiring re-tune after rpm resume", mmc_hostname(host->mmc));
+}
+
+static void glk_rpm_retune_chk(struct sdhci_pci_chip *chip, bool susp)
+{
+ if (chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
+ !chip->rpm_retune)
+ glk_rpm_retune_wa(chip, susp);
+}
+
+static int glk_runtime_suspend(struct sdhci_pci_chip *chip)
+{
+ glk_rpm_retune_chk(chip, true);
+
+ return sdhci_cqhci_runtime_suspend(chip);
+}
+
+static int glk_runtime_resume(struct sdhci_pci_chip *chip)
+{
+ glk_rpm_retune_chk(chip, false);
+
+ return sdhci_cqhci_runtime_resume(chip);
+}
+#endif
+
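The workaround relies on the <linux/bitfield.h> helpers: GENMASK() builds a contiguous mask, FIELD_GET() extracts the field shifted down to bit 0, and FIELD_PREP() shifts a value back into the field's position. A small sketch independent of the GLK registers:

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    #define DEMO_PLL	GENMASK(13, 8)
    #define DEMO_DLY	GENMASK(6, 0)

    static u32 demo_update_dly(u32 reg)
    {
            /* Read bits 13:8, already shifted down to bit 0. */
            u32 pll = FIELD_GET(DEMO_PLL, reg);

            /* Write pll + 2 into bits 6:0 of the register image. */
            return (reg & ~DEMO_DLY) | FIELD_PREP(DEMO_DLY, pll + 2);
    }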
#ifdef CONFIG_ACPI
static int ni_set_max_freq(struct sdhci_pci_slot *slot)
{
.resume = sdhci_cqhci_resume,
#endif
#ifdef CONFIG_PM
- .runtime_suspend = sdhci_cqhci_runtime_suspend,
- .runtime_resume = sdhci_cqhci_runtime_resume,
+ .runtime_suspend = glk_runtime_suspend,
+ .runtime_resume = glk_runtime_resume,
#endif
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
device_init_wakeup(&pdev->dev, true);
if (slot->cd_idx >= 0) {
- ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx,
+ ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx,
slot->cd_override_level, 0, NULL);
+ if (ret && ret != -EPROBE_DEFER)
+ ret = mmc_gpiod_request_cd(host->mmc, NULL,
+ slot->cd_idx,
+ slot->cd_override_level,
+ 0, NULL);
if (ret == -EPROBE_DEFER)
goto remove;
int ret;
nand_np = dev->of_node;
- nfc_np = of_find_compatible_node(dev->of_node, NULL,
- "atmel,sama5d3-nfc");
+ nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc");
if (!nfc_np) {
dev_err(dev, "Could not find device node for sama5d3-nfc\n");
return -ENODEV;
}
if (caps->legacy_of_bindings) {
+ struct device_node *nfc_node;
u32 ale_offs = 21;
/*
* If we are parsing legacy DT props and the DT contains a
* valid NFC node, forward the request to the sama5 logic.
*/
- if (of_find_compatible_node(pdev->dev.of_node, NULL,
- "atmel,sama5d3-nfc"))
+ nfc_node = of_get_compatible_child(pdev->dev.of_node,
+ "atmel,sama5d3-nfc");
+ if (nfc_node) {
caps = &atmel_sama5_nand_caps;
+ of_node_put(nfc_node);
+ }
/*
* Even if the compatible says we are dealing with an
#define NAND_VERSION_MINOR_SHIFT 16
/* NAND OP_CMDs */
-#define PAGE_READ 0x2
-#define PAGE_READ_WITH_ECC 0x3
-#define PAGE_READ_WITH_ECC_SPARE 0x4
-#define PROGRAM_PAGE 0x6
-#define PAGE_PROGRAM_WITH_ECC 0x7
-#define PROGRAM_PAGE_SPARE 0x9
-#define BLOCK_ERASE 0xa
-#define FETCH_ID 0xb
-#define RESET_DEVICE 0xd
+#define OP_PAGE_READ 0x2
+#define OP_PAGE_READ_WITH_ECC 0x3
+#define OP_PAGE_READ_WITH_ECC_SPARE 0x4
+#define OP_PROGRAM_PAGE 0x6
+#define OP_PAGE_PROGRAM_WITH_ECC 0x7
+#define OP_PROGRAM_PAGE_SPARE 0x9
+#define OP_BLOCK_ERASE 0xa
+#define OP_FETCH_ID 0xb
+#define OP_RESET_DEVICE 0xd
/* Default Value for NAND_DEV_CMD_VLD */
#define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \
if (read) {
if (host->use_ecc)
- cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE;
+ cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE;
else
- cmd = PAGE_READ | PAGE_ACC | LAST_PAGE;
+ cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE;
} else {
- cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE;
+ cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE;
}
if (host->use_ecc) {
* in use. we configure the controller to perform a raw read of 512
* bytes to read onfi params
*/
- nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE);
+ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, 0);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD,
- BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
+ OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, page_addr);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0,
if (column == -1)
return 0;
- nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID);
+ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID);
nandc_set_reg(nandc, NAND_ADDR0, column);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT,
struct nand_chip *chip = &host->chip;
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
- nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE);
+ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE);
nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
ndelay(cqspi->wr_delay);
while (remaining > 0) {
+ size_t write_words, mod_bytes;
+
write_bytes = remaining > page_size ? page_size : remaining;
- iowrite32_rep(cqspi->ahb_base, txbuf,
- DIV_ROUND_UP(write_bytes, 4));
+ write_words = write_bytes / 4;
+ mod_bytes = write_bytes % 4;
+ /* Write 4 bytes at a time then single bytes. */
+ if (write_words) {
+ iowrite32_rep(cqspi->ahb_base, txbuf, write_words);
+ txbuf += (write_words * 4);
+ }
+ if (mod_bytes) {
+ unsigned int temp = 0xFFFFFFFF;
+
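+ /*
+ * Padding with 0xFF is a safe filler here: NOR programming can only
+ * clear bits, so the padding bytes leave the flash content intact.
+ */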
+ memcpy(&temp, txbuf, mod_bytes);
+ iowrite32(temp, cqspi->ahb_base);
+ txbuf += mod_bytes;
+ }
if (!wait_for_completion_timeout(&cqspi->transfer_complete,
msecs_to_jiffies(CQSPI_TIMEOUT_MS))) {
goto failwr;
}
- txbuf += write_bytes;
remaining -= write_bytes;
if (remaining > 0)
* @nor: pointer to a 'struct spi_nor'
* @addr: offset in the serial flash memory
* @len: number of bytes to read
- * @buf: buffer where the data is copied into
+ * @buf: buffer where the data is copied into (dma-safe memory)
*
* Return: 0 on success, -errno otherwise.
*/
return left->size - right->size;
}
+/**
+ * spi_nor_sort_erase_mask() - sort erase mask
+ * @map: the erase map of the SPI NOR
+ * @erase_mask: the erase type mask to be sorted
+ *
+ * Replicate the sort done for the map's erase types in BFPT: sort the erase
+ * mask in ascending order with the smallest erase type size starting from
+ * BIT(0) in the sorted erase mask.
+ *
+ * Return: sorted erase mask.
+ */
+static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask)
+{
+ struct spi_nor_erase_type *erase_type = map->erase_type;
+ int i;
+ u8 sorted_erase_mask = 0;
+
+ if (!erase_mask)
+ return 0;
+
+ /* Replicate the sort done for the map's erase types. */
+ for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
+ if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx))
+ sorted_erase_mask |= BIT(i);
+
+ return sorted_erase_mask;
+}
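A worked example of the sort, with hypothetical sizes: suppose the map's erase types, ascending by size after the BFPT sort, are 4 KiB (manufacturer idx 2), 32 KiB (idx 0) and 64 KiB (idx 1). An input mask of BIT(0) | BIT(1), i.e. 32 KiB and 64 KiB in manufacturer order, then comes back as BIT(1) | BIT(2), the same two types re-expressed in ascending-size positions:

    /* erase_type[] after the BFPT sort (hypothetical values):  */
    /*   [0] = { .size = SZ_4K,  .idx = 2 }                     */
    /*   [1] = { .size = SZ_32K, .idx = 0 }                     */
    /*   [2] = { .size = SZ_64K, .idx = 1 }                     */
    /* spi_nor_sort_erase_mask(map, BIT(0) | BIT(1))            */
    /*   == BIT(1) | BIT(2)                                     */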
+
/**
* spi_nor_regions_sort_erase_types() - sort erase types in each region
* @map: the erase map of the SPI NOR
static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)
{
struct spi_nor_erase_region *region = map->regions;
- struct spi_nor_erase_type *erase_type = map->erase_type;
- int i;
u8 region_erase_mask, sorted_erase_mask;
while (region) {
region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK;
- /* Replicate the sort done for the map's erase types. */
- sorted_erase_mask = 0;
- for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
- if (erase_type[i].size &&
- region_erase_mask & BIT(erase_type[i].idx))
- sorted_erase_mask |= BIT(i);
+ sorted_erase_mask = spi_nor_sort_erase_mask(map,
+ region_erase_mask);
/* Overwrite erase mask. */
region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) |
* spi_nor_get_map_in_use() - get the configuration map in use
* @nor: pointer to a 'struct spi_nor'
* @smpt: pointer to the sector map parameter table
+ * @smpt_len: sector map parameter table length
+ *
+ * Return: pointer to the map in use, ERR_PTR(-errno) otherwise.
*/
-static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt)
+static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt,
+ u8 smpt_len)
{
- const u32 *ret = NULL;
- u32 i, addr;
+ const u32 *ret;
+ u8 *buf;
+ u32 addr;
int err;
+ u8 i;
u8 addr_width, read_opcode, read_dummy;
- u8 read_data_mask, data_byte, map_id;
+ u8 read_data_mask, map_id;
+
+ /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. */
+ buf = kmalloc(sizeof(*buf), GFP_KERNEL);
+ if (!buf)
+ return ERR_PTR(-ENOMEM);
addr_width = nor->addr_width;
read_dummy = nor->read_dummy;
read_opcode = nor->read_opcode;
map_id = 0;
- i = 0;
/* Determine if there are any optional Detection Command Descriptors */
- while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) {
+ for (i = 0; i < smpt_len; i += 2) {
+ if (smpt[i] & SMPT_DESC_TYPE_MAP)
+ break;
+
read_data_mask = SMPT_CMD_READ_DATA(smpt[i]);
nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]);
nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]);
nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]);
addr = smpt[i + 1];
- err = spi_nor_read_raw(nor, addr, 1, &data_byte);
- if (err)
+ err = spi_nor_read_raw(nor, addr, 1, buf);
+ if (err) {
+ ret = ERR_PTR(err);
goto out;
+ }
/*
* Build an index value that is used to select the Sector Map
* Configuration that is currently in use.
*/
- map_id = map_id << 1 | !!(data_byte & read_data_mask);
- i = i + 2;
+ map_id = map_id << 1 | !!(*buf & read_data_mask);
}
- /* Find the matching configuration map */
- while (SMPT_MAP_ID(smpt[i]) != map_id) {
+ /*
+ * If command descriptors are provided, they always precede map
+ * descriptors in the table. There is no need to start the iteration
+ * over smpt array all over again.
+ *
+ * Find the matching configuration map.
+ */
+ ret = ERR_PTR(-EINVAL);
+ while (i < smpt_len) {
+ if (SMPT_MAP_ID(smpt[i]) == map_id) {
+ ret = smpt + i;
+ break;
+ }
+
+ /*
+ * If there are no more configuration map descriptors and no
+ * configuration ID matched the configuration identifier, the
+ * sector address map is unknown.
+ */
if (smpt[i] & SMPT_DESC_END)
- goto out;
+ break;
+
/* increment the table index to the next map */
i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1;
}
- ret = smpt + i;
/* fall through */
out:
+ kfree(buf);
nor->addr_width = addr_width;
nor->read_dummy = read_dummy;
nor->read_opcode = read_opcode;
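The detection loop above accumulates map_id one bit per command descriptor, most significant bit first, then matches it against SMPT_MAP_ID() of each map descriptor. A standalone sketch of that accumulation, with assumed masked read results:

#include <stdio.h>

int main(void)
{
	/* assumed (*buf & read_data_mask) results, first descriptor first */
	const int bits[3] = { 1, 0, 1 };
	unsigned int map_id = 0;
	int i;

	for (i = 0; i < 3; i++)
		map_id = map_id << 1 | bits[i];
	printf("map_id = %u\n", map_id);	/* ((0<<1|1)<<1|0)<<1|1 = 5 */
	return 0;
}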
u64 offset;
u32 region_count;
int i, j;
- u8 erase_type;
+ u8 erase_type, uniform_erase_type;
region_count = SMPT_MAP_REGION_COUNT(*smpt);
/*
return -ENOMEM;
map->regions = region;
- map->uniform_erase_type = 0xff;
+ uniform_erase_type = 0xff;
offset = 0;
/* Populate regions. */
for (i = 0; i < region_count; i++) {
* Save the erase types that are supported in all regions and
* can erase the entire flash memory.
*/
- map->uniform_erase_type &= erase_type;
+ uniform_erase_type &= erase_type;
offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) +
region[i].size;
}
+ map->uniform_erase_type = spi_nor_sort_erase_mask(map,
+ uniform_erase_type);
+
spi_nor_region_mark_end(&region[i - 1]);
return 0;
for (i = 0; i < smpt_header->length; i++)
smpt[i] = le32_to_cpu(smpt[i]);
- sector_map = spi_nor_get_map_in_use(nor, smpt);
- if (!sector_map) {
- ret = -EINVAL;
+ sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length);
+ if (IS_ERR(sector_map)) {
+ ret = PTR_ERR(sector_map);
goto out;
}
if (err)
goto exit;
- /* Parse other parameter headers. */
+ /* Parse optional parameter tables. */
for (i = 0; i < header.nph; i++) {
param_header = &param_headers[i];
break;
}
- if (err)
- goto exit;
+ if (err) {
+ dev_warn(dev, "Failed to parse optional parameter table: %04x\n",
+ SFDP_PARAM_HEADER_ID(param_header));
+ /*
+ * Let's not drop all information we extracted so far
+ * if optional table parsers fail. In case of failing,
+ * each optional parser is responsible to roll back to
+ * the previously known spi_nor data.
+ */
+ err = 0;
+ }
}
exit:
}
EXPORT_SYMBOL_GPL(can_put_echo_skb);
+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
+{
+ struct can_priv *priv = netdev_priv(dev);
+ struct sk_buff *skb;
+ struct canfd_frame *cf;
+
+ if (idx >= priv->echo_skb_max) {
+ netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
+ __func__, idx, priv->echo_skb_max);
+ return NULL;
+ }
+
+ skb = priv->echo_skb[idx];
+ if (!skb) {
+ netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n",
+ __func__, idx);
+ return NULL;
+ }
+
+ /* Using "struct canfd_frame::len" for the frame
+ * length is supported on both CAN and CANFD frames.
+ */
+ cf = (struct canfd_frame *)skb->data;
+ *len_ptr = cf->len;
+ priv->echo_skb[idx] = NULL;
+
+ return skb;
+}
+
/*
* Get the skb from the stack and loop it back locally
*
*/
unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
{
- struct can_priv *priv = netdev_priv(dev);
-
- BUG_ON(idx >= priv->echo_skb_max);
-
- if (priv->echo_skb[idx]) {
- struct sk_buff *skb = priv->echo_skb[idx];
- struct can_frame *cf = (struct can_frame *)skb->data;
- u8 dlc = cf->can_dlc;
+ struct sk_buff *skb;
+ u8 len;
- netif_rx(priv->echo_skb[idx]);
- priv->echo_skb[idx] = NULL;
+ skb = __can_get_echo_skb(dev, idx, &len);
+ if (!skb)
+ return 0;
- return dlc;
- }
+ netif_rx(skb);
- return 0;
+ return len;
}
EXPORT_SYMBOL_GPL(can_get_echo_skb);
/* FLEXCAN interrupt flag register (IFLAG) bits */
/* Errata ERR005829 step7: Reserve first valid MB */
-#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
-#define FLEXCAN_TX_MB_OFF_FIFO 9
+#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
#define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0
-#define FLEXCAN_TX_MB_OFF_TIMESTAMP 1
-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_OFF_TIMESTAMP + 1)
-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST 63
-#define FLEXCAN_IFLAG_MB(x) BIT(x)
+#define FLEXCAN_TX_MB 63
+#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1)
+#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1)
+#define FLEXCAN_IFLAG_MB(x) BIT((x) & 0x1f)
#define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7)
#define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6)
#define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5)
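With the TX mailbox now fixed at 63, its interrupt flag lives in the second 32-bit IFLAG register, which is why the macro masks with 0x1f and the handler below switches from iflag1 to iflag2. A quick check of the arithmetic, with the mailbox number taken from the defines above:

#include <stdio.h>

int main(void)
{
	unsigned int mb = 63;		/* FLEXCAN_TX_MB */
	unsigned int bit = mb & 0x1f;	/* bit position within one register */
	unsigned int reg = mb >> 5;	/* 0 -> iflag1, 1 -> iflag2 */

	printf("mailbox %u -> iflag%u, bit %u\n", mb, reg + 1, bit);
	return 0;
}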
struct can_rx_offload offload;
struct flexcan_regs __iomem *regs;
- struct flexcan_mb __iomem *tx_mb;
struct flexcan_mb __iomem *tx_mb_reserved;
- u8 tx_mb_idx;
u32 reg_ctrl_default;
u32 reg_imask1_default;
u32 reg_imask2_default;
static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
const struct flexcan_priv *priv = netdev_priv(dev);
+ struct flexcan_regs __iomem *regs = priv->regs;
struct can_frame *cf = (struct can_frame *)skb->data;
u32 can_id;
u32 data;
if (cf->can_dlc > 0) {
data = be32_to_cpup((__be32 *)&cf->data[0]);
- priv->write(data, &priv->tx_mb->data[0]);
+ priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]);
}
if (cf->can_dlc > 4) {
data = be32_to_cpup((__be32 *)&cf->data[4]);
- priv->write(data, &priv->tx_mb->data[1]);
+ priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]);
}
can_put_echo_skb(skb, dev, 0);
- priv->write(can_id, &priv->tx_mb->can_id);
- priv->write(ctrl, &priv->tx_mb->can_ctrl);
+ priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id);
+ priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl);
/* Errata ERR005829 step8:
* Write twice INACTIVE(0x8) code to first MB.
static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
{
struct flexcan_priv *priv = netdev_priv(dev);
+ struct flexcan_regs __iomem *regs = priv->regs;
struct sk_buff *skb;
struct can_frame *cf;
bool rx_errors = false, tx_errors = false;
+ u32 timestamp;
+
+ timestamp = priv->read(&regs->timer) << 16;
skb = alloc_can_err_skb(dev, &cf);
if (unlikely(!skb))
if (tx_errors)
dev->stats.tx_errors++;
- can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
+ can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
}
static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
{
struct flexcan_priv *priv = netdev_priv(dev);
+ struct flexcan_regs __iomem *regs = priv->regs;
struct sk_buff *skb;
struct can_frame *cf;
enum can_state new_state, rx_state, tx_state;
int flt;
struct can_berr_counter bec;
+ u32 timestamp;
+
+ timestamp = priv->read(&regs->timer) << 16;
flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK;
if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) {
if (unlikely(new_state == CAN_STATE_BUS_OFF))
can_bus_off(dev);
- can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
+ can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
}
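Both error paths above capture the 16-bit free-running timer and shift it into the upper half of the u32 sort key; the TX-complete path later does the same with the mailbox CTRL word, whose low 16 bits are assumed here to hold the frame timestamp. A sketch of why the shift makes the two key sources directly comparable:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t timer = 0x00001234;	/* free-running timer, low 16 bits valid */
	uint32_t reg_ctrl = 0x0c40beef;	/* CODE/DLC high, timestamp 0xbeef low */
	uint32_t key_err = timer << 16;		/* 0x12340000 */
	uint32_t key_tx = reg_ctrl << 16;	/* 0xbeef0000 */

	printf("err key 0x%08x, tx key 0x%08x\n", key_err, key_tx);
	return 0;
}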
static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
priv->write(BIT(n - 32), &regs->iflag2);
} else {
priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1);
- priv->read(&regs->timer);
}
+ /* Read the Free Running Timer. It is optional but recommended
+ * to unlock Mailbox as soon as possible and make it available
+ * for reception.
+ */
+ priv->read(&regs->timer);
+
return 1;
}
struct flexcan_regs __iomem *regs = priv->regs;
u32 iflag1, iflag2;
- iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default;
- iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default &
- ~FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
+ iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default &
+ ~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
+ iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default;
return (u64)iflag2 << 32 | iflag1;
}
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->regs;
irqreturn_t handled = IRQ_NONE;
- u32 reg_iflag1, reg_esr;
+ u32 reg_iflag2, reg_esr;
enum can_state last_state = priv->can.state;
- reg_iflag1 = priv->read(&regs->iflag1);
-
/* reception interrupt */
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
u64 reg_iflag;
break;
}
} else {
+ u32 reg_iflag1;
+
+ reg_iflag1 = priv->read(&regs->iflag1);
if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) {
handled = IRQ_HANDLED;
can_rx_offload_irq_offload_fifo(&priv->offload);
}
}
+ reg_iflag2 = priv->read(&regs->iflag2);
+
/* transmission complete interrupt */
- if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) {
+ if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) {
+ u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl);
+
handled = IRQ_HANDLED;
- stats->tx_bytes += can_get_echo_skb(dev, 0);
+ stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload,
+ 0, reg_ctrl << 16);
stats->tx_packets++;
can_led_event(dev, CAN_LED_EVENT_TX);
/* after sending a RTR frame MB is in RX mode */
priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
- &priv->tx_mb->can_ctrl);
- priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1);
+ &regs->mb[FLEXCAN_TX_MB].can_ctrl);
+ priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2);
netif_wake_queue(dev);
}
reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ |
- FLEXCAN_MCR_IDAM_C;
+ FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB);
- if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
+ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
reg_mcr &= ~FLEXCAN_MCR_FEN;
- reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last);
- } else {
- reg_mcr |= FLEXCAN_MCR_FEN |
- FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
- }
+ else
+ reg_mcr |= FLEXCAN_MCR_FEN;
+
netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr);
priv->write(reg_mcr, &regs->mcr);
priv->write(reg_ctrl2, &regs->ctrl2);
}
- /* clear and invalidate all mailboxes first */
- for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) {
- priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
- &regs->mb[i].can_ctrl);
- }
-
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
- for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++)
+ for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) {
priv->write(FLEXCAN_MB_CODE_RX_EMPTY,
&regs->mb[i].can_ctrl);
+ }
+ } else {
+ /* clear and invalidate unused mailboxes first */
+ for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i < ARRAY_SIZE(regs->mb); i++) {
+ priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
+ &regs->mb[i].can_ctrl);
+ }
}
/* Errata ERR005829: mark first TX mailbox as INACTIVE */
/* mark TX mailbox as INACTIVE */
priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
- &priv->tx_mb->can_ctrl);
+ &regs->mb[FLEXCAN_TX_MB].can_ctrl);
/* acceptance mask/acceptance code (accept everything) */
priv->write(0x0, &regs->rxgmask);
priv->devtype_data = devtype_data;
priv->reg_xceiver = reg_xceiver;
- if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
- priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP;
+ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP];
- } else {
- priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO;
+ else
priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO];
- }
- priv->tx_mb = ®s->mb[priv->tx_mb_idx];
- priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
- priv->reg_imask2_default = 0;
+ priv->reg_imask1_default = 0;
+ priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
priv->offload.mailbox_read = flexcan_mailbox_read;
#define RCAR_CAN_DRV_NAME "rcar_can"
+#define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \
+ BIT(CLKR_CLKEXT))
+
/* Mailbox configuration:
* mailbox 60 - 63 - Rx FIFO mailboxes
* mailbox 56 - 59 - Tx FIFO mailboxes
goto fail_clk;
}
- if (clock_select >= ARRAY_SIZE(clock_names)) {
+ if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) {
err = -EINVAL;
dev_err(&pdev->dev, "invalid CAN clock selected\n");
goto fail_clk;
}
EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb)
+int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+ struct sk_buff *skb, u32 timestamp)
+{
+ struct can_rx_offload_cb *cb;
+ unsigned long flags;
+
+ if (skb_queue_len(&offload->skb_queue) >
+ offload->skb_queue_len_max)
+ return -ENOMEM;
+
+ cb = can_rx_offload_get_cb(skb);
+ cb->timestamp = timestamp;
+
+ spin_lock_irqsave(&offload->skb_queue.lock, flags);
+ __skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
+ spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
+
+ can_rx_offload_schedule(offload);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
+
+unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
+ unsigned int idx, u32 timestamp)
+{
+ struct net_device *dev = offload->dev;
+ struct net_device_stats *stats = &dev->stats;
+ struct sk_buff *skb;
+ u8 len;
+ int err;
+
+ skb = __can_get_echo_skb(dev, idx, &len);
+ if (!skb)
+ return 0;
+
+ err = can_rx_offload_queue_sorted(offload, skb, timestamp);
+ if (err) {
+ stats->rx_errors++;
+ stats->tx_fifo_errors++;
+ }
+
+ return len;
+}
+EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
+
+int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+ struct sk_buff *skb)
{
if (skb_queue_len(&offload->skb_queue) >
offload->skb_queue_len_max)
return 0;
}
-EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb);
+EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight)
{
{
struct hi3110_priv *priv = netdev_priv(net);
struct spi_device *spi = priv->spi;
- unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING;
+ unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH;
int ret;
ret = open_candev(net);
context = &priv->tx_contexts[i];
context->echo_index = i;
- can_put_echo_skb(skb, netdev, context->echo_index);
++priv->active_tx_contexts;
if (priv->active_tx_contexts >= (int)dev->max_tx_urbs)
netif_stop_queue(netdev);
dev_kfree_skb(skb);
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
- can_free_echo_skb(netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(netdev);
context->priv = priv;
+ can_put_echo_skb(skb, netdev, context->echo_index);
+
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
new_state : CAN_STATE_ERROR_ACTIVE;
can_change_state(netdev, cf, tx_state, rx_state);
+
+ if (priv->can.restart_ms &&
+ old_state >= CAN_STATE_BUS_OFF &&
+ new_state < CAN_STATE_BUS_OFF)
+ cf->can_id |= CAN_ERR_RESTARTED;
}
if (new_state == CAN_STATE_BUS_OFF) {
can_bus_off(netdev);
}
-
- if (priv->can.restart_ms &&
- old_state >= CAN_STATE_BUS_OFF &&
- new_state < CAN_STATE_BUS_OFF)
- cf->can_id |= CAN_ERR_RESTARTED;
}
if (!skb) {
#include <linux/slab.h>
#include <linux/usb.h>
-#include <linux/can.h>
-#include <linux/can/dev.h>
-#include <linux/can/error.h>
-
#define UCAN_DRIVER_NAME "ucan"
#define UCAN_MAX_RX_URBS 8
/* the CAN controller needs a while to enable/disable the bus */
/* disconnect the device */
static void ucan_disconnect(struct usb_interface *intf)
{
- struct usb_device *udev;
struct ucan_priv *up = usb_get_intfdata(intf);
- udev = interface_to_usbdev(intf);
-
usb_set_intfdata(intf, NULL);
if (up) {
rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);
if (rc)
dev_err(&adapter->pdev->dev, "Device reset failed\n");
+ /* stop submitting admin commands on a device that was reset */
+ ena_com_set_admin_running_state(adapter->ena_dev, false);
}
ena_destroy_all_io_queues(adapter);
netif_dbg(adapter, ifdown, netdev, "%s\n", __func__);
+ if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
+ return 0;
+
if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
ena_down(adapter);
ena_down(adapter);
/* Stop the device from sending AENQ events (in case reset flag is set
- * and device is up, ena_close already reset the device
- * In case the reset flag is set and the device is up, ena_down()
- * already perform the reset, so it can be skipped.
+ * and device is up, ena_down() already reset the device.
*/
if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up))
ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);
ena_com_abort_admin_commands(ena_dev);
ena_com_wait_for_abort_completion(ena_dev);
ena_com_admin_destroy(ena_dev);
- ena_com_mmio_reg_read_request_destroy(ena_dev);
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);
+ ena_com_mmio_reg_read_request_destroy(ena_dev);
err:
clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
ena_com_rss_destroy(ena_dev);
err_free_msix:
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR);
+ /* stop submitting admin commands on a device that was reset */
+ ena_com_set_admin_running_state(ena_dev, false);
ena_free_mgmnt_irq(adapter);
ena_disable_msix(adapter);
err_worker_destroy:
cancel_work_sync(&adapter->reset_task);
- unregister_netdev(netdev);
-
- /* If the device is running then we want to make sure the device will be
- * reset to make sure no more events will be issued by the device.
- */
- if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
- set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
-
rtnl_lock();
ena_destroy_device(adapter, true);
rtnl_unlock();
+ unregister_netdev(netdev);
+
free_netdev(netdev);
ena_com_rss_destroy(ena_dev);
#define DRV_MODULE_VER_MAJOR 2
#define DRV_MODULE_VER_MINOR 0
-#define DRV_MODULE_VER_SUBMINOR 1
+#define DRV_MODULE_VER_SUBMINOR 2
#define DRV_MODULE_NAME "ena"
#ifndef DRV_MODULE_VERSION
prop = of_get_property(nd, "tpe-link-test?", NULL);
if (!prop)
- goto no_link_test;
+ goto node_put;
if (strcmp(prop, "true")) {
printk(KERN_NOTICE "SunLance: warning: overriding option "
auxio_set_lte(AUXIO_LTE_ON);
}
+node_put:
+ of_node_put(nd);
no_link_test:
lp->auto_select = 1;
lp->tpe = 0;
#define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \
E1HVN_MAX)
+/* Following is the DMAE channel number allocation for the clients.
+ * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively.
+ * Driver: 0-3 and 8-11 (for PF dmae operations)
+ * 4 and 12 (for stats requests)
+ */
+#define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */
+
/* PCIE link and speed */
#define PCICFG_LINK_WIDTH 0x1f00000
#define PCICFG_LINK_WIDTH_SHIFT 20
rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag);
rdata->path_id = BP_PATH(bp);
rdata->network_cos_mode = start_params->network_cos_mode;
+ rdata->dmae_cmd_id = BNX2X_FW_DMAE_C;
rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port);
rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port);
} else {
if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) {
if (dev->features & NETIF_F_RXCSUM)
- cpr->rx_l4_csum_errors++;
+ bnapi->cp_ring.rx_l4_csum_errors++;
}
}
return rc;
}
+static int bnxt_dbg_hwrm_ring_info_get(struct bnxt *bp, u8 ring_type,
+ u32 ring_id, u32 *prod, u32 *cons)
+{
+ struct hwrm_dbg_ring_info_get_output *resp = bp->hwrm_cmd_resp_addr;
+ struct hwrm_dbg_ring_info_get_input req = {0};
+ int rc;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_RING_INFO_GET, -1, -1);
+ req.ring_type = ring_type;
+ req.fw_ring_id = cpu_to_le32(ring_id);
+ mutex_lock(&bp->hwrm_cmd_lock);
+ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ if (!rc) {
+ *prod = le32_to_cpu(resp->producer_index);
+ *cons = le32_to_cpu(resp->consumer_index);
+ }
+ mutex_unlock(&bp->hwrm_cmd_lock);
+ return rc;
+}
+
static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
{
struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
bnxt_queue_sp_work(bp);
}
}
+
+ if ((bp->flags & BNXT_FLAG_CHIP_P5) && netif_carrier_ok(dev)) {
+ set_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event);
+ bnxt_queue_sp_work(bp);
+ }
bnxt_restart_timer:
mod_timer(&bp->timer, jiffies + bp->current_interval);
}
bnxt_rtnl_unlock_sp(bp);
}
+static void bnxt_chk_missed_irq(struct bnxt *bp)
+{
+ int i;
+
+ if (!(bp->flags & BNXT_FLAG_CHIP_P5))
+ return;
+
+ for (i = 0; i < bp->cp_nr_rings; i++) {
+ struct bnxt_napi *bnapi = bp->bnapi[i];
+ struct bnxt_cp_ring_info *cpr;
+ u32 fw_ring_id;
+ int j;
+
+ if (!bnapi)
+ continue;
+
+ cpr = &bnapi->cp_ring;
+ for (j = 0; j < 2; j++) {
+ struct bnxt_cp_ring_info *cpr2 = cpr->cp_ring_arr[j];
+ u32 val[2];
+
+ if (!cpr2 || cpr2->has_more_work ||
+ !bnxt_has_work(bp, cpr2))
+ continue;
+
+ if (cpr2->cp_raw_cons != cpr2->last_cp_raw_cons) {
+ cpr2->last_cp_raw_cons = cpr2->cp_raw_cons;
+ continue;
+ }
+ fw_ring_id = cpr2->cp_ring_struct.fw_ring_id;
+ bnxt_dbg_hwrm_ring_info_get(bp,
+ DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL,
+ fw_ring_id, &val[0], &val[1]);
+ cpr->missed_irqs++;
+ }
+ }
+}
+
static void bnxt_cfg_ntp_filters(struct bnxt *);
static void bnxt_sp_task(struct work_struct *work)
if (test_and_clear_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event))
bnxt_tc_flow_stats_work(bp);
+ if (test_and_clear_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event))
+ bnxt_chk_missed_irq(bp);
+
/* These functions below will clear BNXT_STATE_IN_SP_TASK. They
* must be the last functions to be called before exiting.
*/
}
bnxt_hwrm_func_qcfg(bp);
+ bnxt_hwrm_vnic_qcaps(bp);
bnxt_hwrm_port_led_qcaps(bp);
bnxt_ethtool_init(bp);
bnxt_dcb_init(bp);
VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6;
}
- bnxt_hwrm_vnic_qcaps(bp);
if (bnxt_rfs_supported(bp)) {
dev->hw_features |= NETIF_F_NTUPLE;
if (bnxt_rfs_capable(bp)) {
u8 had_work_done:1;
u8 has_more_work:1;
+ u32 last_cp_raw_cons;
+
struct bnxt_coal rx_ring_coal;
u64 rx_packets;
u64 rx_bytes;
dma_addr_t hw_stats_map;
u32 hw_stats_ctx_id;
u64 rx_l4_csum_errors;
+ u64 missed_irqs;
struct bnxt_ring_struct cp_ring_struct;
#define BNXT_LINK_SPEED_CHNG_SP_EVENT 14
#define BNXT_FLOW_STATS_SP_EVENT 15
#define BNXT_UPDATE_PHY_SP_EVENT 16
+#define BNXT_RING_COAL_NOW_SP_EVENT 17
struct bnxt_hw_resc hw_resc;
struct bnxt_pf_info pf;
return rc;
}
-#define BNXT_NUM_STATS 21
+#define BNXT_NUM_STATS 22
#define BNXT_RX_STATS_ENTRY(counter) \
{ BNXT_RX_STATS_OFFSET(counter), __stringify(counter) }
for (k = 0; k < stat_fields; j++, k++)
buf[j] = le64_to_cpu(hw_stats[k]);
buf[j++] = cpr->rx_l4_csum_errors;
+ buf[j++] = cpr->missed_irqs;
bnxt_sw_func_stats[RX_TOTAL_DISCARDS].counter +=
le64_to_cpu(cpr->hw_stats->rx_discard_pkts);
buf += ETH_GSTRING_LEN;
sprintf(buf, "[%d]: rx_l4_csum_errors", i);
buf += ETH_GSTRING_LEN;
+ sprintf(buf, "[%d]: missed_irqs", i);
+ buf += ETH_GSTRING_LEN;
}
for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) {
strcpy(buf, bnxt_sw_func_stats[i].string);
record->asic_state = 0;
strlcpy(record->system_name, utsname()->nodename,
sizeof(record->system_name));
- record->year = cpu_to_le16(tm.tm_year);
- record->month = cpu_to_le16(tm.tm_mon);
+ record->year = cpu_to_le16(tm.tm_year + 1900);
+ record->month = cpu_to_le16(tm.tm_mon + 1);
record->day = cpu_to_le16(tm.tm_mday);
record->hour = cpu_to_le16(tm.tm_hour);
record->minute = cpu_to_le16(tm.tm_min);
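The fix accounts for struct tm's conventions: tm_year counts years since 1900 and tm_mon runs from 0, so both need offsets before being logged as a calendar date. A quick userspace illustration:

#include <stdio.h>
#include <time.h>

int main(void)
{
	time_t now = time(NULL);
	struct tm tm;

	gmtime_r(&now, &tm);
	/* tm_year is years since 1900, tm_mon is 0..11 */
	printf("%04d-%02d-%02d\n", tm.tm_year + 1900, tm.tm_mon + 1,
	       tm.tm_mday);
	return 0;
}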
if (ulp_id == BNXT_ROCE_ULP) {
unsigned int max_stat_ctxs;
+ if (bp->flags & BNXT_FLAG_CHIP_P5)
+ return -EOPNOTSUPP;
+
max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
bp->num_stat_ctxs == max_stat_ctxs)
{
struct tg3 *tp = netdev_priv(dev);
int i, irq_sync = 0, err = 0;
+ bool reset_phy = false;
if ((ering->rx_pending > tp->rx_std_ring_mask) ||
(ering->rx_jumbo_pending > tp->rx_jmb_ring_mask) ||
if (netif_running(dev)) {
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
- err = tg3_restart_hw(tp, false);
+ /* Reset PHY to avoid PHY lock up */
+ if (tg3_asic_rev(tp) == ASIC_REV_5717 ||
+ tg3_asic_rev(tp) == ASIC_REV_5719 ||
+ tg3_asic_rev(tp) == ASIC_REV_5720)
+ reset_phy = true;
+
+ err = tg3_restart_hw(tp, reset_phy);
if (!err)
tg3_netif_start(tp);
}
{
struct tg3 *tp = netdev_priv(dev);
int err = 0;
+ bool reset_phy = false;
if (tp->link_config.autoneg == AUTONEG_ENABLE)
tg3_warn_mgmt_link_flap(tp);
if (netif_running(dev)) {
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
- err = tg3_restart_hw(tp, false);
+ /* Reset PHY to avoid PHY lock up */
+ if (tg3_asic_rev(tp) == ASIC_REV_5717 ||
+ tg3_asic_rev(tp) == ASIC_REV_5719 ||
+ tg3_asic_rev(tp) == ASIC_REV_5720)
+ reset_phy = true;
+
+ err = tg3_restart_hw(tp, reset_phy);
if (!err)
tg3_netif_start(tp);
}
{
struct nicpf *nic = pci_get_drvdata(pdev);
+ if (!nic)
+ return;
+
if (nic->flags & NIC_SRIOV_ENABLED)
pci_disable_sriov(pdev);
bool if_up = netif_running(nic->netdev);
struct bpf_prog *old_prog;
bool bpf_attached = false;
+ int ret = 0;
/* For now just support only the usual MTU sized frames */
if (prog && (dev->mtu > 1500)) {
if (nic->xdp_prog) {
/* Attach BPF program */
nic->xdp_prog = bpf_prog_add(nic->xdp_prog, nic->rx_queues - 1);
- if (!IS_ERR(nic->xdp_prog))
+ if (!IS_ERR(nic->xdp_prog)) {
bpf_attached = true;
+ } else {
+ ret = PTR_ERR(nic->xdp_prog);
+ nic->xdp_prog = NULL;
+ }
}
/* Calculate Tx queues needed for XDP and network stack */
netif_trans_update(nic->netdev);
}
- return 0;
+ return ret;
}
static int nicvf_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
if (!sq->dmem.base)
return;
- if (sq->tso_hdrs)
+ if (sq->tso_hdrs) {
dma_free_coherent(&nic->pdev->dev,
sq->dmem.q_len * TSO_HEADER_SIZE,
sq->tso_hdrs, sq->tso_hdrs_phys);
+ sq->tso_hdrs = NULL;
+ }
/* Free pending skbs in the queue */
smp_rmb();
config CHELSIO_T4
tristate "Chelsio Communications T4/T5/T6 Ethernet support"
depends on PCI && (IPV6 || IPV6=n)
- depends on THERMAL || !THERMAL
select FW_LOADER
select MDIO
select ZLIB_DEFLATE
cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o
cxgb4-$(CONFIG_CHELSIO_T4_FCOE) += cxgb4_fcoe.o
cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o
-ifdef CONFIG_THERMAL
-cxgb4-objs += cxgb4_thermal.o
-endif
+cxgb4-$(CONFIG_THERMAL) += cxgb4_thermal.o
if (!is_t4(adapter->params.chip))
cxgb4_ptp_init(adapter);
- if (IS_ENABLED(CONFIG_THERMAL) &&
+ if (IS_REACHABLE(CONFIG_THERMAL) &&
!is_t4(adapter->params.chip) && (adapter->flags & FW_OK))
cxgb4_thermal_init(adapter);
if (!is_t4(adapter->params.chip))
cxgb4_ptp_stop(adapter);
- if (IS_ENABLED(CONFIG_THERMAL))
+ if (IS_REACHABLE(CONFIG_THERMAL))
cxgb4_thermal_remove(adapter);
/* If we allocated filters, free up state associated with any
u64_stats_update_begin(&port->tx_stats_syncp);
port->tx_frag_stats[nfrags]++;
- u64_stats_update_end(&port->ir_stats_syncp);
+ u64_stats_update_end(&port->tx_stats_syncp);
}
}
struct net_device *netdev = dev_id;
struct ftmac100 *priv = netdev_priv(netdev);
- if (likely(netif_running(netdev))) {
- /* Disable interrupts for polling */
- ftmac100_disable_all_int(priv);
+ /* Disable interrupts for polling */
+ ftmac100_disable_all_int(priv);
+ if (likely(netif_running(netdev)))
napi_schedule(&priv->napi);
- }
return IRQ_HANDLED;
}
}
ret = register_netdev(ndev);
- if (ret) {
- free_netdev(ndev);
+ if (ret)
goto alloc_fail;
- }
return 0;
for (j = 0; j < rx_pool->size; j++) {
if (rx_pool->rx_buff[j].skb) {
- dev_kfree_skb_any(rx_pool->rx_buff[i].skb);
- rx_pool->rx_buff[i].skb = NULL;
+ dev_kfree_skb_any(rx_pool->rx_buff[j].skb);
+ rx_pool->rx_buff[j].skb = NULL;
}
}
return 0;
}
- mutex_lock(&adapter->reset_lock);
-
if (adapter->state != VNIC_CLOSED) {
rc = ibmvnic_login(netdev);
- if (rc) {
- mutex_unlock(&adapter->reset_lock);
+ if (rc)
return rc;
- }
rc = init_resources(adapter);
if (rc) {
netdev_err(netdev, "failed to initialize resources\n");
release_resources(adapter);
- mutex_unlock(&adapter->reset_lock);
return rc;
}
}
rc = __ibmvnic_open(netdev);
netif_carrier_on(netdev);
- mutex_unlock(&adapter->reset_lock);
-
return rc;
}
return 0;
}
- mutex_lock(&adapter->reset_lock);
rc = __ibmvnic_close(netdev);
ibmvnic_cleanup(netdev);
- mutex_unlock(&adapter->reset_lock);
return rc;
}
struct ibmvnic_rwi *rwi, u32 reset_state)
{
u64 old_num_rx_queues, old_num_tx_queues;
+ u64 old_num_rx_slots, old_num_tx_slots;
struct net_device *netdev = adapter->netdev;
int i, rc;
old_num_rx_queues = adapter->req_rx_queues;
old_num_tx_queues = adapter->req_tx_queues;
+ old_num_rx_slots = adapter->req_rx_add_entries_per_subcrq;
+ old_num_tx_slots = adapter->req_tx_entries_per_subcrq;
ibmvnic_cleanup(netdev);
if (rc)
return rc;
} else if (adapter->req_rx_queues != old_num_rx_queues ||
- adapter->req_tx_queues != old_num_tx_queues) {
- adapter->map_id = 1;
+ adapter->req_tx_queues != old_num_tx_queues ||
+ adapter->req_rx_add_entries_per_subcrq !=
+ old_num_rx_slots ||
+ adapter->req_tx_entries_per_subcrq !=
+ old_num_tx_slots) {
release_rx_pools(adapter);
release_tx_pools(adapter);
- rc = init_rx_pools(netdev);
- if (rc)
- return rc;
- rc = init_tx_pools(netdev);
- if (rc)
- return rc;
-
release_napi(adapter);
- rc = init_napi(adapter);
+ release_vpd_data(adapter);
+
+ rc = init_resources(adapter);
if (rc)
return rc;
+
} else {
rc = reset_tx_pools(adapter);
if (rc)
adapter->state = VNIC_PROBED;
return 0;
}
- /* netif_set_real_num_xx_queues needs to take rtnl lock here
- * unless wait_for_reset is set, in which case the rtnl lock
- * has already been taken before initializing the reset
- */
- if (!adapter->wait_for_reset) {
- rtnl_lock();
- rc = init_resources(adapter);
- rtnl_unlock();
- } else {
- rc = init_resources(adapter);
- }
+
+ rc = init_resources(adapter);
if (rc)
return rc;
struct ibmvnic_rwi *rwi;
struct ibmvnic_adapter *adapter;
struct net_device *netdev;
+ bool we_lock_rtnl = false;
u32 reset_state;
int rc = 0;
adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
netdev = adapter->netdev;
- mutex_lock(&adapter->reset_lock);
+ /* netif_set_real_num_xx_queues needs to take rtnl lock here
+ * unless wait_for_reset is set, in which case the rtnl lock
+ * has already been taken before initializing the reset
+ */
+ if (!adapter->wait_for_reset) {
+ rtnl_lock();
+ we_lock_rtnl = true;
+ }
reset_state = adapter->state;
rwi = get_next_rwi(adapter);
if (rc) {
netdev_dbg(adapter->netdev, "Reset failed\n");
free_all_rwi(adapter);
- mutex_unlock(&adapter->reset_lock);
- return;
}
adapter->resetting = false;
- mutex_unlock(&adapter->reset_lock);
+ if (we_lock_rtnl)
+ rtnl_unlock();
}
static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
INIT_LIST_HEAD(&adapter->rwi_list);
- mutex_init(&adapter->reset_lock);
mutex_init(&adapter->rwi_lock);
adapter->resetting = false;
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
adapter->state = VNIC_REMOVING;
- unregister_netdev(netdev);
- mutex_lock(&adapter->reset_lock);
+ rtnl_lock();
+ unregister_netdevice(netdev);
release_resources(adapter);
release_sub_crqs(adapter, 1);
adapter->state = VNIC_REMOVED;
- mutex_unlock(&adapter->reset_lock);
+ rtnl_unlock();
device_remove_file(&dev->dev, &dev_attr_failover);
free_netdev(netdev);
dev_set_drvdata(&dev->dev, NULL);
struct tasklet_struct tasklet;
enum vnic_state state;
enum ibmvnic_reset_reason reset_reason;
- struct mutex reset_lock, rwi_lock;
+ struct mutex rwi_lock;
struct list_head rwi_list;
struct work_struct ibmvnic_reset;
bool resetting;
}
vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED;
- set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->state);
+ set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state);
}
/**
}
/**
- * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid
+ * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid
* @vsi: Current VSI
* @umem: UMEM to store
* @qid: Ring/qid to associate with the UMEM
}
/**
- * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid
+ * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid
* @vsi: Current VSI
* @qid: Ring/qid associated with the UMEM
**/
}
/**
- * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid
+ * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid
* @vsi: Current VSI
* @umem: UMEM
* @qid: Rx ring to associate UMEM to
}
/**
- * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid
+ * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid
* @vsi: Current VSI
* @qid: Rx ring to associate UMEM to
*
}
/**
- * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM
+ * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid
* @vsi: Current VSI
* @umem: UMEM to enable/associate to a ring, or NULL to disable
* @qid: Rx ring to (dis)associate UMEM (from)to
*
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
*
* Returns 0 on success, <0 on failure
**/
* @rx_ring: Rx ring
* @xdp: xdp_buff used as input to the XDP program
*
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
*
* Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
**/
nvm_word = E1000_INVM_DEFAULT_AL;
tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL;
igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE);
+ phy_word = E1000_PHY_PLL_UNCONF;
for (i = 0; i < E1000_MAX_PLL_TRIES; i++) {
/* check current state directly from internal PHY */
igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
*autoneg = false;
if (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
- hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) {
+ hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 ||
+ hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 ||
+ hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1) {
*speed = IXGBE_LINK_SPEED_1GB_FULL;
return 0;
}
err = register_netdev(net_dev);
if (err)
goto err_unprepare_clk;
- return err;
+
+ return 0;
err_unprepare_clk:
clk_disable_unprepare(priv->clk);
err_uninit_dma:
xrx200_hw_cleanup(priv);
- return 0;
+ return err;
}
static int xrx200_remove(struct platform_device *pdev)
if (state->interface != PHY_INTERFACE_MODE_NA &&
state->interface != PHY_INTERFACE_MODE_QSGMII &&
state->interface != PHY_INTERFACE_MODE_SGMII &&
- state->interface != PHY_INTERFACE_MODE_2500BASEX &&
!phy_interface_mode_is_8023z(state->interface) &&
!phy_interface_mode_is_rgmii(state->interface)) {
bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
/* Asymmetric pause is unsupported */
phylink_set(mask, Pause);
- /* We cannot use 1Gbps when using the 2.5G interface. */
- if (state->interface == PHY_INTERFACE_MODE_2500BASEX) {
- phylink_set(mask, 2500baseT_Full);
- phylink_set(mask, 2500baseX_Full);
- } else {
- phylink_set(mask, 1000baseT_Full);
- phylink_set(mask, 1000baseX_Full);
- }
+ /* Half-duplex at speeds higher than 100Mbit is unsupported */
+ phylink_set(mask, 1000baseT_Full);
+ phylink_set(mask, 1000baseX_Full);
if (!phy_interface_mode_is_8023z(state->interface)) {
/* 10M and 100M are only supported in non-802.3z mode */
static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count,
int align, u32 skip_mask, u32 *puid)
{
- u32 uid;
+ u32 uid = 0;
u32 res;
struct mlx4_zone_allocator *zone_alloc = zone->allocator;
struct mlx4_zone_entry *curr_node;
struct resource_allocator {
spinlock_t alloc_lock; /* protect quotas */
union {
- int res_reserved;
- int res_port_rsvd[MLX4_MAX_PORTS];
+ unsigned int res_reserved;
+ unsigned int res_port_rsvd[MLX4_MAX_PORTS];
};
union {
int res_free;
container_of((void *)mpt_entry, struct mlx4_cmd_mailbox,
buf);
+ (*mpt_entry)->lkey = 0;
err = mlx4_SW2HW_MPT(dev, mailbox, key);
}
unsigned long state;
int ix;
+ unsigned int hw_mtu;
struct net_dim dim; /* Dynamic Interrupt Moderation */
eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper);
*speed = mlx5e_port_ptys2speed(eth_proto_oper);
- if (!(*speed)) {
- mlx5_core_warn(mdev, "cannot get port speed\n");
+ if (!(*speed))
err = -EINVAL;
- }
return err;
}
case 40000:
if (!write)
*fec_policy = MLX5_GET(pplm_reg, pplm,
- fec_override_cap_10g_40g);
+ fec_override_admin_10g_40g);
else
MLX5_SET(pplm_reg, pplm,
fec_override_admin_10g_40g, *fec_policy);
case 10000:
case 40000:
*fec_cap = MLX5_GET(pplm_reg, pplm,
- fec_override_admin_10g_40g);
+ fec_override_cap_10g_40g);
break;
case 25000:
*fec_cap = MLX5_GET(pplm_reg, pplm,
int mlx5e_set_fec_mode(struct mlx5_core_dev *dev, u8 fec_policy)
{
+ u8 fec_policy_nofec = BIT(MLX5E_FEC_NOFEC);
bool fec_mode_not_supp_in_speed = false;
- u8 no_fec_policy = BIT(MLX5E_FEC_NOFEC);
u32 out[MLX5_ST_SZ_DW(pplm_reg)] = {};
u32 in[MLX5_ST_SZ_DW(pplm_reg)] = {};
int sz = MLX5_ST_SZ_BYTES(pplm_reg);
- u32 current_fec_speed;
+ u8 fec_policy_auto = 0;
u8 fec_caps = 0;
int err;
int i;
if (err)
return err;
- err = mlx5e_port_linkspeed(dev, &current_fec_speed);
- if (err)
- return err;
+ MLX5_SET(pplm_reg, out, local_port, 1);
- memset(in, 0, sz);
- MLX5_SET(pplm_reg, in, local_port, 1);
- for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS && !!fec_policy; i++) {
+ for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS; i++) {
mlx5e_get_fec_cap_field(out, &fec_caps, fec_supported_speeds[i]);
- /* policy supported for link speed */
- if (!!(fec_caps & fec_policy)) {
- mlx5e_fec_admin_field(in, &fec_policy, 1,
+ /* policy supported for link speed, or policy is auto */
+ if (fec_caps & fec_policy || fec_policy == fec_policy_auto) {
+ mlx5e_fec_admin_field(out, &fec_policy, 1,
fec_supported_speeds[i]);
} else {
- if (fec_supported_speeds[i] == current_fec_speed)
- return -EOPNOTSUPP;
- mlx5e_fec_admin_field(in, &no_fec_policy, 1,
- fec_supported_speeds[i]);
+ /* turn off FEC if supported. Else, leave it the same */
+ if (fec_caps & fec_policy_nofec)
+ mlx5e_fec_admin_field(out, &fec_policy_nofec, 1,
+ fec_supported_speeds[i]);
fec_mode_not_supp_in_speed = true;
}
}
"FEC policy 0x%x is not supported for some speeds",
fec_policy);
- return mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 1);
+ return mlx5_core_access_reg(dev, out, sz, out, sz, MLX5_REG_PPLM, 0, 1);
}
int err;
err = mlx5e_port_linkspeed(priv->mdev, &speed);
- if (err)
+ if (err) {
+ mlx5_core_warn(priv->mdev, "cannot get port speed\n");
return 0;
+ }
xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100;
ethtool_link_ksettings_add_link_mode(link_ksettings, supported,
Autoneg);
- err = get_fec_supported_advertised(mdev, link_ksettings);
- if (err)
+ if (get_fec_supported_advertised(mdev, link_ksettings))
netdev_dbg(netdev, "%s: FEC caps query failed: %d\n",
__func__, err);
rq->channel = c;
rq->ix = c->ix;
rq->mdev = mdev;
+ rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
rq->stats = &c->priv->channel_stats[c->ix].rq;
rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL;
int err;
u32 i;
+ err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn);
+ if (err)
+ return err;
+
err = mlx5_cqwq_create(mdev, ¶m->wq, param->cqc, &cq->wq,
&cq->wq_ctrl);
if (err)
return err;
- mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn);
-
mcq->cqe_sz = 64;
mcq->set_ci_db = cq->wq_ctrl.db.db;
mcq->arm_db = cq->wq_ctrl.db.db + 1;
int eqn;
int err;
+ err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used);
+ if (err)
+ return err;
+
inlen = MLX5_ST_SZ_BYTES(create_cq_in) +
sizeof(u64) * cq->wq_ctrl.buf.npages;
in = kvzalloc(inlen, GFP_KERNEL);
mlx5_fill_page_frag_array(&cq->wq_ctrl.buf,
(__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas));
- mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used);
-
MLX5_SET(cqc, cqc, cq_period_mode, param->cq_period_mode);
MLX5_SET(cqc, cqc, c_eqn, eqn);
MLX5_SET(cqc, cqc, uar_page, mdev->priv.uar->index);
int err;
int eqn;
+ err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
+ if (err)
+ return err;
+
c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
if (!c)
return -ENOMEM;
c->xdp = !!params->xdp_prog;
c->stats = &priv->channel_stats[ix].ch;
- mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
c->irq_desc = irq_to_desc(irq);
netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
return 0;
}
+#ifdef CONFIG_MLX5_ESWITCH
static int set_feature_tc_num_filters(struct net_device *netdev, bool enable)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
return 0;
}
+#endif
static int set_feature_rx_all(struct net_device *netdev, bool enable)
{
err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro);
err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_FILTER,
set_feature_cvlan_filter);
+#ifdef CONFIG_MLX5_ESWITCH
err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_TC, set_feature_tc_num_filters);
+#endif
err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXALL, set_feature_rx_all);
err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXFCS, set_feature_rx_fcs);
err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_RX, set_feature_rx_vlan);
}
if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
+ bool is_linear = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev, &new_channels.params);
u8 ppw_old = mlx5e_mpwqe_log_pkts_per_wqe(params);
u8 ppw_new = mlx5e_mpwqe_log_pkts_per_wqe(&new_channels.params);
- reset = reset && (ppw_old != ppw_new);
+ reset = reset && (is_linear || (ppw_old != ppw_new));
}
if (!reset) {
FT_CAP(modify_root) &&
FT_CAP(identified_miss_table_mode) &&
FT_CAP(flow_table_modify)) {
+#ifdef CONFIG_MLX5_ESWITCH
netdev->hw_features |= NETIF_F_HW_TC;
+#endif
#ifdef CONFIG_MLX5_EN_ARFS
netdev->hw_features |= NETIF_F_NTUPLE;
#endif
int mlx5e_attach_netdev(struct mlx5e_priv *priv)
{
const struct mlx5e_profile *profile;
+ int max_nch;
int err;
profile = priv->profile;
clear_bit(MLX5E_STATE_DESTROYING, &priv->state);
+ /* max number of channels may have changed */
+ max_nch = mlx5e_get_max_num_channels(priv->mdev);
+ if (priv->channels.params.num_channels > max_nch) {
+ mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch);
+ priv->channels.params.num_channels = max_nch;
+ mlx5e_build_default_indir_rqt(priv->channels.params.indirection_rqt,
+ MLX5E_INDIR_RQT_SIZE, max_nch);
+ }
+
err = profile->init_tx(priv);
if (err)
goto out;
u32 frag_size;
bool consumed;
+ /* Check packet size. Note LRO doesn't use linear SKB */
+ if (unlikely(cqe_bcnt > rq->hw_mtu)) {
+ rq->stats->oversize_pkts_sw_drop++;
+ return NULL;
+ }
+
va = page_address(di->page) + head_offset;
data = va + rx_headroom;
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt32);
return 1;
}
-#ifdef CONFIG_INET
-/* loopback test */
-#define MLX5E_TEST_PKT_SIZE (MLX5E_RX_MAX_HEAD - NET_IP_ALIGN)
-static const char mlx5e_test_text[ETH_GSTRING_LEN] = "MLX5E SELF TEST";
-#define MLX5E_TEST_MAGIC 0x5AEED15C001ULL
-
struct mlx5ehdr {
__be32 version;
__be64 magic;
- char text[ETH_GSTRING_LEN];
};
+#ifdef CONFIG_INET
+/* loopback test */
+#define MLX5E_TEST_PKT_SIZE (sizeof(struct ethhdr) + sizeof(struct iphdr) +\
+ sizeof(struct udphdr) + sizeof(struct mlx5ehdr))
+#define MLX5E_TEST_MAGIC 0x5AEED15C001ULL
+
static struct sk_buff *mlx5e_test_get_udp_skb(struct mlx5e_priv *priv)
{
struct sk_buff *skb = NULL;
struct ethhdr *ethh;
struct udphdr *udph;
struct iphdr *iph;
- int datalen, iplen;
-
- datalen = MLX5E_TEST_PKT_SIZE -
- (sizeof(*ethh) + sizeof(*iph) + sizeof(*udph));
+ int iplen;
skb = netdev_alloc_skb(priv->netdev, MLX5E_TEST_PKT_SIZE);
if (!skb) {
/* Fill UDP header */
udph->source = htons(9);
udph->dest = htons(9); /* Discard Protocol */
- udph->len = htons(datalen + sizeof(struct udphdr));
+ udph->len = htons(sizeof(struct mlx5ehdr) + sizeof(struct udphdr));
udph->check = 0;
/* Fill IP header */
iph->ttl = 32;
iph->version = 4;
iph->protocol = IPPROTO_UDP;
- iplen = sizeof(struct iphdr) + sizeof(struct udphdr) + datalen;
+ iplen = sizeof(struct iphdr) + sizeof(struct udphdr) +
+ sizeof(struct mlx5ehdr);
iph->tot_len = htons(iplen);
iph->frag_off = 0;
iph->saddr = 0;
mlxh = skb_put(skb, sizeof(*mlxh));
mlxh->version = 0;
mlxh->magic = cpu_to_be64(MLX5E_TEST_MAGIC);
- strlcpy(mlxh->text, mlx5e_test_text, sizeof(mlxh->text));
- datalen -= sizeof(*mlxh);
- skb_put_zero(skb, datalen);
skb->csum = 0;
skb->ip_summed = CHECKSUM_PARTIAL;
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_cqes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_strides) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_oversize_pkts_sw_drop) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) },
s->rx_wqe_err += rq_stats->wqe_err;
s->rx_mpwqe_filler_cqes += rq_stats->mpwqe_filler_cqes;
s->rx_mpwqe_filler_strides += rq_stats->mpwqe_filler_strides;
+ s->rx_oversize_pkts_sw_drop += rq_stats->oversize_pkts_sw_drop;
s->rx_buff_alloc_err += rq_stats->buff_alloc_err;
s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks;
s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts;
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, wqe_err) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_cqes) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_strides) },
+ { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, oversize_pkts_sw_drop) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
u64 rx_wqe_err;
u64 rx_mpwqe_filler_cqes;
u64 rx_mpwqe_filler_strides;
+ u64 rx_oversize_pkts_sw_drop;
u64 rx_buff_alloc_err;
u64 rx_cqe_compress_blks;
u64 rx_cqe_compress_pkts;
u64 wqe_err;
u64 mpwqe_filler_cqes;
u64 mpwqe_filler_strides;
+ u64 oversize_pkts_sw_drop;
u64 buff_alloc_err;
u64 cqe_compress_blks;
u64 cqe_compress_pkts;
inner_headers);
}
- if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
- struct flow_dissector_key_eth_addrs *key =
+ if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+ struct flow_dissector_key_basic *key =
skb_flow_dissector_target(f->dissector,
- FLOW_DISSECTOR_KEY_ETH_ADDRS,
+ FLOW_DISSECTOR_KEY_BASIC,
f->key);
- struct flow_dissector_key_eth_addrs *mask =
+ struct flow_dissector_key_basic *mask =
skb_flow_dissector_target(f->dissector,
- FLOW_DISSECTOR_KEY_ETH_ADDRS,
+ FLOW_DISSECTOR_KEY_BASIC,
f->mask);
+ MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype,
+ ntohs(mask->n_proto));
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
+ ntohs(key->n_proto));
- ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
- dmac_47_16),
- mask->dst);
- ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
- dmac_47_16),
- key->dst);
-
- ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
- smac_47_16),
- mask->src);
- ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
- smac_47_16),
- key->src);
-
- if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst))
+ if (mask->n_proto)
*match_level = MLX5_MATCH_L2;
}
*match_level = MLX5_MATCH_L2;
}
- } else {
+ } else if (*match_level != MLX5_MATCH_NONE) {
MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1);
MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+ *match_level = MLX5_MATCH_L2;
}
if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CVLAN)) {
}
}
- if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
- struct flow_dissector_key_basic *key =
+ if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ struct flow_dissector_key_eth_addrs *key =
skb_flow_dissector_target(f->dissector,
- FLOW_DISSECTOR_KEY_BASIC,
+ FLOW_DISSECTOR_KEY_ETH_ADDRS,
f->key);
- struct flow_dissector_key_basic *mask =
+ struct flow_dissector_key_eth_addrs *mask =
skb_flow_dissector_target(f->dissector,
- FLOW_DISSECTOR_KEY_BASIC,
+ FLOW_DISSECTOR_KEY_ETH_ADDRS,
f->mask);
- MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype,
- ntohs(mask->n_proto));
- MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
- ntohs(key->n_proto));
- if (mask->n_proto)
+ ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+ dmac_47_16),
+ mask->dst);
+ ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+ dmac_47_16),
+ key->dst);
+
+ ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+ smac_47_16),
+ mask->src);
+ ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+ smac_47_16),
+ key->src);
+
+ if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst))
*match_level = MLX5_MATCH_L2;
}
/* the HW doesn't need L3 inline to match on frag=no */
if (!(key->flags & FLOW_DIS_IS_FRAGMENT))
- *match_level = MLX5_INLINE_MODE_L2;
+ *match_level = MLX5_MATCH_L2;
/* *** L2 attributes parsing up to here *** */
else
- *match_level = MLX5_INLINE_MODE_IP;
+ *match_level = MLX5_MATCH_L3;
}
}
if (!actions_match_supported(priv, exts, parse_attr, flow, extack))
return -EOPNOTSUPP;
- if (attr->out_count > 1 && !mlx5_esw_has_fwd_fdb(priv->mdev)) {
+ if (attr->mirror_count > 0 && !mlx5_esw_has_fwd_fdb(priv->mdev)) {
NL_SET_ERR_MSG_MOD(extack,
"current firmware doesn't support split rule for port mirroring");
netdev_warn_once(priv->netdev, "current firmware doesn't support split rule for port mirroring\n");
};
static const struct rhashtable_params rhash_sa = {
- .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa),
- .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa),
+ /* Keep out "cmd" field from the key as it's
+ * value is not constant during the lifetime
+ * of the key object.
+ */
+ .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) -
+ FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd),
+ .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) +
+ FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd),
.head_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hash),
.automatic_shrinking = true,
.min_size = 1,
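The key_len/key_offset pair above carves the mutable cmd word off the front of hw_sa so it never takes part in hashing or comparison. A userspace sketch of the same carving, on a hypothetical stand-in struct:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical stand-in for the hardware SA layout */
struct sa_v1 {
	uint32_t cmd;		/* mutable over the object's lifetime */
	uint8_t spi[4];
	uint8_t key[16];
};

int main(void)
{
	size_t key_offset = offsetof(struct sa_v1, spi);	/* skip cmd */
	size_t key_len = sizeof(struct sa_v1) - sizeof(uint32_t);

	printf("hash bytes [%zu, %zu) of each object\n",
	       key_offset, key_offset + key_len);
	return 0;
}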
netif_carrier_off(epriv->netdev);
mlx5_fs_remove_rx_underlay_qpn(mdev, ipriv->qp.qpn);
- mlx5i_uninit_underlay_qp(epriv);
mlx5e_deactivate_priv_channels(epriv);
mlx5e_close_channels(&epriv->channels);
+ mlx5i_uninit_underlay_qp(epriv);
unlock:
mutex_unlock(&epriv->state_lock);
return 0;
netif_wake_queue(adapter->netdev);
}
- if (!napi_complete_done(napi, weight))
+ if (!napi_complete(napi))
goto done;
/* enable isr */
lan743x_csr_read(adapter, INT_STS);
done:
- return weight;
+ return 0;
}
static void lan743x_tx_ring_cleanup(struct lan743x_tx *tx)
tx->vector_flags = lan743x_intr_get_vector_flags(adapter,
INT_BIT_DMA_TX_
(tx->channel_number));
- netif_napi_add(adapter->netdev,
- &tx->napi, lan743x_tx_napi_poll,
- tx->ring_size - 1);
+ netif_tx_napi_add(adapter->netdev,
+ &tx->napi, lan743x_tx_napi_poll,
+ tx->ring_size - 1);
napi_enable(&tx->napi);
data = 0;
static const struct pci_device_id lan743x_pcidev_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7430) },
+ { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7431) },
{ 0, }
};
/* SMSC acquired EFAR late 1990's, MCHP acquired SMSC 2012 */
#define PCI_VENDOR_ID_SMSC PCI_VENDOR_ID_EFAR
#define PCI_DEVICE_ID_SMSC_LAN7430 (0x7430)
+#define PCI_DEVICE_ID_SMSC_LAN7431 (0x7431)
#define PCI_CONFIG_LENGTH (0x1000)
static void
qed_dcbx_set_params(struct qed_dcbx_results *p_data,
struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
- bool enable, u8 prio, u8 tc,
+ bool app_tlv, bool enable, u8 prio, u8 tc,
enum dcbx_protocol_type type,
enum qed_pci_personality personality)
{
p_data->arr[type].dont_add_vlan0 = true;
/* QM reconf data */
- if (p_hwfn->hw_info.personality == personality)
+ if (app_tlv && p_hwfn->hw_info.personality == personality)
qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc);
/* Configure dcbx vlan priority in doorbell block for roce EDPM */
static void
qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
- bool enable, u8 prio, u8 tc,
+ bool app_tlv, bool enable, u8 prio, u8 tc,
enum dcbx_protocol_type type)
{
enum qed_pci_personality personality;
personality = qed_dcbx_app_update[i].personality;
- qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable,
+ qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable,
prio, tc, type, personality);
}
}
enable = true;
}
- qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,
- priority, tc, type);
+ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true,
+ enable, priority, tc, type);
}
}
continue;
enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version;
- qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,
+ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable,
priority, tc, type);
}
"no error",
"length error",
"function disabled",
- "VF sent command to attnetion address",
+ "VF sent command to attention address",
"host sent prod update command",
"read of during interrupt register while in MIMD mode",
"access to PXP BAR reserved address",
qed_iscsi_free(p_hwfn);
qed_ooo_free(p_hwfn);
}
+
+ if (QED_IS_RDMA_PERSONALITY(p_hwfn))
+ qed_rdma_info_free(p_hwfn);
+
qed_iov_free(p_hwfn);
qed_l2_free(p_hwfn);
qed_dmae_info_free(p_hwfn);
struct qed_qm_info *qm_info = &p_hwfn->qm_info;
/* Can't have multiple flags set here */
- if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1)
+ if (bitmap_weight((unsigned long *)&pq_flags,
+ sizeof(pq_flags) * BITS_PER_BYTE) > 1) {
+ DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags);
+ goto err;
+ }
+
+ if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) {
+ DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags);
goto err;
+ }
switch (pq_flags) {
case PQ_FLAGS_RLS:
}
err:
- DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
- return NULL;
+ return &qm_info->start_pq;
}
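The original call passed sizeof(pq_flags), a byte count, where bitmap_weight() expects a bit count, so set bits above position 3 were never seen. A userspace model of the pitfall:

#include <stdint.h>
#include <stdio.h>

static unsigned int weight(uint32_t flags, unsigned int nbits)
{
	unsigned int i, w = 0;

	for (i = 0; i < nbits; i++)
		w += (flags >> i) & 1;
	return w;
}

int main(void)
{
	uint32_t pq_flags = 1u << 16;

	/* scanning sizeof() == 4 bit positions misses bit 16 entirely */
	printf("bytes as bits: %u\n", weight(pq_flags, sizeof(pq_flags)));
	/* scanning sizeof() * 8 == 32 positions counts it */
	printf("correct:       %u\n", weight(pq_flags, sizeof(pq_flags) * 8));
	return 0;
}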
/* save pq index in qm info */
{
u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn);
+ if (max_tc == 0) {
+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",
+ PQ_FLAGS_MCOS);
+ return p_hwfn->qm_info.start_pq;
+ }
+
if (tc > max_tc)
DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc);
}
u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf)
{
u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn);
+ if (max_vf == 0) {
+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",
+ PQ_FLAGS_VFS);
+ return p_hwfn->qm_info.start_pq;
+ }
+
if (vf > max_vf)
DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf);
}
u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc)
goto alloc_err;
}
+ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {
+ rc = qed_rdma_info_alloc(p_hwfn);
+ if (rc)
+ goto alloc_err;
+ }
+
/* DMA info initialization */
rc = qed_dmae_info_alloc(p_hwfn);
if (rc)
if (!p_ptt)
return -EAGAIN;
- /* If roce info is allocated it means roce is initialized and should
- * be enabled in searcher.
- */
if (p_hwfn->p_rdma_info &&
- p_hwfn->b_rdma_enabled_in_prs)
+ p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs)
qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);
/* Re-open incoming traffic */
*/
do {
index = p_sb_attn->sb_index;
+ /* finish reading index before the loop condition */
+ dma_rmb();
attn_bits = le32_to_cpu(p_sb_attn->atten_bits);
attn_acks = le32_to_cpu(p_sb_attn->atten_ack);
} while (index != p_sb_attn->sb_index);
return -EBUSY;
}
rc = qed_mcp_drain(hwfn, ptt);
+ qed_ptt_release(hwfn, ptt);
if (rc)
return rc;
- qed_ptt_release(hwfn, ptt);
}
return 0;
return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id;
}
-static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- struct qed_rdma_start_in_params *params)
+int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn)
{
struct qed_rdma_info *p_rdma_info;
- u32 num_cons, num_tasks;
- int rc = -ENOMEM;
- DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n");
-
- /* Allocate a struct with current pf rdma info */
p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL);
if (!p_rdma_info)
- return rc;
+ return -ENOMEM;
+
+ spin_lock_init(&p_rdma_info->lock);
p_hwfn->p_rdma_info = p_rdma_info;
+ return 0;
+}
+
+void qed_rdma_info_free(struct qed_hwfn *p_hwfn)
+{
+ kfree(p_hwfn->p_rdma_info);
+ p_hwfn->p_rdma_info = NULL;
+}
+
+static int qed_rdma_alloc(struct qed_hwfn *p_hwfn)
+{
+ struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info;
+ u32 num_cons, num_tasks;
+ int rc = -ENOMEM;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n");
+
if (QED_IS_IWARP_PERSONALITY(p_hwfn))
p_rdma_info->proto = PROTOCOLID_IWARP;
else
/* Allocate a struct with device params and fill it */
p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL);
if (!p_rdma_info->dev)
- goto free_rdma_info;
+ return rc;
/* Allocate a struct with port params and fill it */
p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL);
kfree(p_rdma_info->port);
free_rdma_dev:
kfree(p_rdma_info->dev);
-free_rdma_info:
- kfree(p_rdma_info);
return rc;
}
kfree(p_rdma_info->port);
kfree(p_rdma_info->dev);
-
- kfree(p_rdma_info);
}
static void qed_rdma_free_tid(void *rdma_cxt, u32 itid)
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n");
- spin_lock_init(&p_hwfn->p_rdma_info->lock);
-
qed_rdma_init_devinfo(p_hwfn, params);
qed_rdma_init_port(p_hwfn);
qed_rdma_init_events(p_hwfn, params);
/* Disable RoCE search */
qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0);
p_hwfn->b_rdma_enabled_in_prs = false;
-
+ p_hwfn->p_rdma_info->active = 0;
qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0);
ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN);
u8 max_stats_queues;
int rc;
- if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) {
+ if (!rdma_cxt || !in_params || !out_params ||
+ !p_hwfn->p_rdma_info->active) {
DP_ERR(p_hwfn->cdev,
"qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
rdma_cxt, in_params, out_params);
{
bool result;
- /* if rdma info has not been allocated, naturally there are no qps */
- if (!p_hwfn->p_rdma_info)
+ /* if rdma wasn't activated yet, naturally there are no qps */
+ if (!p_hwfn->p_rdma_info->active)
return false;
spin_lock_bh(&p_hwfn->p_rdma_info->lock);
if (!p_ptt)
goto err;
- rc = qed_rdma_alloc(p_hwfn, p_ptt, params);
+ rc = qed_rdma_alloc(p_hwfn);
if (rc)
goto err1;
goto err2;
qed_ptt_release(p_hwfn, p_ptt);
+ p_hwfn->p_rdma_info->active = 1;
return rc;
u16 max_queue_zones;
enum protocol_type proto;
struct qed_iwarp_info iwarp;
+ u8 active:1;
};
struct qed_rdma_qp {
#if IS_ENABLED(CONFIG_QED_RDMA)
void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn);
+void qed_rdma_info_free(struct qed_hwfn *p_hwfn);
#else
static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {}
static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt) {}
+static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;}
+static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {}
#endif
int
"tx_jumbo",
"rx_mac_control_frames",
"tx_mac_control_frames",
- "rx_frame_alignement_errors",
+ "rx_frame_alignment_errors",
"rx_long_ok",
"rx_long_err",
"tx_sqe_errors",
* assume the pin serves as pull-up. If direction is
* output, the default value is high.
*/
- gpiod_set_value(bitbang->mdo, 1);
+ gpiod_set_value_cansleep(bitbang->mdo, 1);
return;
}
struct mdio_gpio_info *bitbang =
container_of(ctrl, struct mdio_gpio_info, ctrl);
- return gpiod_get_value(bitbang->mdio);
+ return gpiod_get_value_cansleep(bitbang->mdio);
}
static void mdio_set(struct mdiobb_ctrl *ctrl, int what)
container_of(ctrl, struct mdio_gpio_info, ctrl);
if (bitbang->mdo)
- gpiod_set_value(bitbang->mdo, what);
+ gpiod_set_value_cansleep(bitbang->mdo, what);
else
- gpiod_set_value(bitbang->mdio, what);
+ gpiod_set_value_cansleep(bitbang->mdio, what);
}
static void mdc_set(struct mdiobb_ctrl *ctrl, int what)
struct mdio_gpio_info *bitbang =
container_of(ctrl, struct mdio_gpio_info, ctrl);
- gpiod_set_value(bitbang->mdc, what);
+ gpiod_set_value_cansleep(bitbang->mdc, what);
}
static const struct mdiobb_ops mdio_gpio_ops = {
phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
mutex_lock(&phydev->lock);
- rc = phy_select_page(phydev, MSCC_PHY_PAGE_EXTENDED_2);
- if (rc < 0)
- goto out_unlock;
- reg_val = phy_read(phydev, MSCC_PHY_RGMII_CNTL);
- reg_val &= ~(RGMII_RX_CLK_DELAY_MASK);
- reg_val |= (RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS);
- phy_write(phydev, MSCC_PHY_RGMII_CNTL, reg_val);
+ reg_val = RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS;
+
+ rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2,
+ MSCC_PHY_RGMII_CNTL, RGMII_RX_CLK_DELAY_MASK,
+ reg_val);
-out_unlock:
- rc = phy_restore_page(phydev, rc, rc > 0 ? 0 : rc);
mutex_unlock(&phydev->lock);
return rc;
new_driver->mdiodrv.driver.remove = phy_remove;
new_driver->mdiodrv.driver.owner = owner;
+ /* The following works around an issue where the PHY driver doesn't bind
+ * to the device, resulting in the genphy driver being used instead of
+ * the dedicated driver. The root cause of the issue isn't known yet
+ * and seems to be in the base driver core. Once this is fixed we may
+ * remove this workaround.
+ */
+ new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS;
+
retval = driver_register(&new_driver->mdiodrv.driver);
if (retval) {
pr_err("%s: Error %d in registering driver\n",
* it just reports sending a packet to the target
* (without actual packet transfer).
*/
- dev_kfree_skb_any(skb);
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
+ dev_kfree_skb_any(skb);
}
}
team->en_port_count--;
team_queue_override_port_del(team, port);
team_adjust_ops(team);
- team_notify_peers(team);
- team_mcast_rejoin(team);
team_lower_state_changed(port);
}
if (!rx_batched || (!more && skb_queue_empty(queue))) {
local_bh_disable();
+ skb_record_rx_queue(skb, tfile->queue_index);
netif_receive_skb(skb);
local_bh_enable();
return;
struct sk_buff *nskb;
local_bh_disable();
- while ((nskb = __skb_dequeue(&process_queue)))
+ while ((nskb = __skb_dequeue(&process_queue))) {
+ skb_record_rx_queue(nskb, tfile->queue_index);
netif_receive_skb(nskb);
+ }
+ skb_record_rx_queue(skb, tfile->queue_index);
netif_receive_skb(skb);
local_bh_enable();
}
if (!rcu_dereference(tun->steering_prog))
rxhash = __skb_get_hash_symmetric(skb);
+ skb_record_rx_queue(skb, tfile->queue_index);
netif_receive_skb(skb);
stats = get_cpu_ptr(tun->pcpu_stats);
struct usb_device *udev;
struct usb_interface *intf;
struct net_device *net;
- struct sk_buff *tx_skb;
struct urb *tx_urb;
struct urb *rx_urb;
unsigned char *tx_buf;
case -ENOENT:
case -ECONNRESET:
case -ESHUTDOWN:
+ case -EPROTO:
return;
case 0:
break;
dev_err(&dev->intf->dev, "%s: urb status: %d\n",
__func__, status);
- dev_kfree_skb_irq(dev->tx_skb);
if (status == 0)
netif_wake_queue(dev->net);
else
if (skb->len > IPHETH_BUF_SIZE) {
WARN(1, "%s: skb too large: %d bytes\n", __func__, skb->len);
dev->net->stats.tx_dropped++;
- dev_kfree_skb_irq(skb);
+ dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n",
__func__, retval);
dev->net->stats.tx_errors++;
- dev_kfree_skb_irq(skb);
+ dev_kfree_skb_any(skb);
} else {
- dev->tx_skb = skb;
-
dev->net->stats.tx_packets++;
dev->net->stats.tx_bytes += skb->len;
+ dev_consume_skb_any(skb);
netif_stop_queue(net);
}
VIRTIO_NET_F_GUEST_TSO4,
VIRTIO_NET_F_GUEST_TSO6,
VIRTIO_NET_F_GUEST_ECN,
- VIRTIO_NET_F_GUEST_UFO
+ VIRTIO_NET_F_GUEST_UFO,
+ VIRTIO_NET_F_GUEST_CSUM
};
struct virtnet_stat_desc {
if (!vi->guest_offloads)
return 0;
- if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))
- offloads = 1ULL << VIRTIO_NET_F_GUEST_CSUM;
-
return virtnet_set_guest_offloads(vi, offloads);
}
if (!vi->guest_offloads)
return 0;
- if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))
- offloads |= 1ULL << VIRTIO_NET_F_GUEST_CSUM;
return virtnet_set_guest_offloads(vi, offloads);
}
&& (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) ||
virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) ||
virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) ||
- virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO))) {
- NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO, disable LRO first");
+ virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) ||
+ virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) {
+ NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first");
return -EOPNOTSUPP;
}
u32 bitmap;
if (drop) {
- if (vif->type == NL80211_IFTYPE_STATION) {
+ if (vif && vif->type == NL80211_IFTYPE_STATION) {
bitmap = ~(1 << WMI_MGMT_TID);
list_for_each_entry(arvif, &ar->arvifs, list) {
if (arvif->vdev_type == WMI_VDEV_TYPE_STA)
struct ath_vif *avp = (void *)vif->drv_priv;
struct ath_node *an = &avp->mcast_node;
+ mutex_lock(&sc->mutex);
if (IS_ENABLED(CONFIG_ATH9K_TX99)) {
if (sc->cur_chan->nvifs >= 1) {
mutex_unlock(&sc->mutex);
sc->tx99_vif = vif;
}
- mutex_lock(&sc->mutex);
-
ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type);
sc->cur_chan->nvifs++;
* for subsequent chanspecs.
*/
channel->flags = IEEE80211_CHAN_NO_HT40 |
- IEEE80211_CHAN_NO_80MHZ;
+ IEEE80211_CHAN_NO_80MHZ |
+ IEEE80211_CHAN_NO_160MHZ;
ch.bw = BRCMU_CHAN_BW_20;
cfg->d11inf.encchspec(&ch);
chaninfo = ch.chspec;
}
break;
case BRCMU_CHSPEC_D11AC_BW_160:
+ ch->bw = BRCMU_CHAN_BW_160;
+ ch->sb = brcmu_maskget16(ch->chspec, BRCMU_CHSPEC_D11AC_SB_MASK,
+ BRCMU_CHSPEC_D11AC_SB_SHIFT);
switch (ch->sb) {
case BRCMU_CHAN_SB_LLL:
ch->control_ch_num -= CH_70MHZ_APART;
* GPL LICENSE SUMMARY
*
* Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ACPI_WRDS_WIFI_DATA_SIZE (ACPI_SAR_TABLE_SIZE + 2)
#define ACPI_EWRD_WIFI_DATA_SIZE ((ACPI_SAR_PROFILE_NUM - 1) * \
ACPI_SAR_TABLE_SIZE + 3)
-#define ACPI_WGDS_WIFI_DATA_SIZE 18
+#define ACPI_WGDS_WIFI_DATA_SIZE 19
#define ACPI_WRDD_WIFI_DATA_SIZE 2
#define ACPI_SPLC_WIFI_DATA_SIZE 2
const struct iwl_fw_runtime_ops *ops, void *ops_ctx,
struct dentry *dbgfs_dir);
-void iwl_fw_runtime_exit(struct iwl_fw_runtime *fwrt);
+static inline void iwl_fw_runtime_free(struct iwl_fw_runtime *fwrt)
+{
+ kfree(fwrt->dump.d3_debug_data);
+ fwrt->dump.d3_debug_data = NULL;
+}
void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt);
IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n");
BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS *
- ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE);
+ ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE);
BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES);
return -ENOENT;
}
+static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)
+{
+ return -ENOENT;
+}
+
static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
{
return 0;
IWL_DEBUG_RADIO(mvm,
"WRDS SAR BIOS table invalid or unavailable. (%d)\n",
ret);
- /* if not available, don't fail and don't bother with EWRD */
- return 0;
+ /*
+ * If not available, don't fail and don't bother with EWRD.
+ * Return 1 to tell that we can't use WGDS either.
+ */
+ return 1;
}
ret = iwl_mvm_sar_get_ewrd_table(mvm);
/* choose profile 1 (WRDS) as default for both chains */
ret = iwl_mvm_sar_select_profile(mvm, 1, 1);
- /* if we don't have profile 0 from BIOS, just skip it */
+ /*
+ * If we don't have profile 0 from BIOS, just skip it. This
+ * means that SAR Geo will not be enabled either, even if we
+ * have other valid profiles.
+ */
if (ret == -ENOENT)
- return 0;
+ return 1;
return ret;
}
iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN);
ret = iwl_mvm_sar_init(mvm);
- if (ret)
- goto error;
+ if (ret == 0) {
+ ret = iwl_mvm_sar_geo_init(mvm);
+ } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) {
+ /*
+ * If basic SAR is not available, we check for WGDS,
+ * which should *not* be available either. If it is
+ * available, issue an error, because we can't use SAR
+ * Geo without basic SAR.
+ */
+ IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n");
+ }
- ret = iwl_mvm_sar_geo_init(mvm);
- if (ret)
+ if (ret < 0)
goto error;
iwl_mvm_leds_sync(mvm);
goto out;
}
- if (changed)
- *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE);
+ if (changed) {
+ u32 status = le32_to_cpu(resp->status);
+
+ *changed = (status == MCC_RESP_NEW_CHAN_PROFILE ||
+ status == MCC_RESP_ILLEGAL);
+ }
regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,
__le32_to_cpu(resp->n_channels),
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
}
- if (!fw_has_capa(&mvm->fw->ucode_capa,
- IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))
- return;
-
/* if beacon filtering isn't on mac80211 does it anyway */
if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
return;
}
IWL_DEBUG_LAR(mvm,
- "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') change: %d n_chans: %d\n",
- status, mcc, mcc >> 8, mcc & 0xff,
- !!(status == MCC_RESP_NEW_CHAN_PROFILE), n_channels);
+ "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') n_chans: %d\n",
+ status, mcc, mcc >> 8, mcc & 0xff, n_channels);
exit:
iwl_free_resp(&cmd);
iwl_mvm_thermal_exit(mvm);
out_free:
iwl_fw_flush_dump(&mvm->fwrt);
+ iwl_fw_runtime_free(&mvm->fwrt);
if (iwlmvm_mod_params.init_dbg)
return op_mode;
iwl_mvm_tof_clean(mvm);
+ iwl_fw_runtime_free(&mvm->fwrt);
mutex_destroy(&mvm->mutex);
mutex_destroy(&mvm->d0i3_suspend_mutex);
config MT76_CORE
tristate
+config MT76_LEDS
+ bool
+ depends on MT76_CORE
+ depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS
+ default y
+
config MT76_USB
tristate
depends on MT76_CORE
mt76_check_sband(dev, NL80211_BAND_2GHZ);
mt76_check_sband(dev, NL80211_BAND_5GHZ);
- ret = mt76_led_init(dev);
- if (ret)
- return ret;
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ ret = mt76_led_init(dev);
+ if (ret)
+ return ret;
+ }
return ieee80211_register_hw(hw);
}
struct mac_address macaddr_list[8];
struct mutex phy_mutex;
- struct mutex mutex;
u8 txdone_seq;
DECLARE_KFIFO_PTR(txstatus_fifo, struct mt76x02_tx_status);
mt76x2_dfs_init_detector(dev);
/* init led callbacks */
- dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink;
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness;
+ dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink;
+ }
ret = mt76_register_device(&dev->mt76, true, mt76x02_rates,
ARRAY_SIZE(mt76x02_rates));
if (val != ~0 && val > 0xffff)
return -EINVAL;
- mutex_lock(&dev->mutex);
+ mutex_lock(&dev->mt76.mutex);
mt76x2_mac_set_tx_protection(dev, val);
- mutex_unlock(&dev->mutex);
+ mutex_unlock(&dev->mt76.mutex);
return 0;
}
struct resource res[2];
mmc_pm_flag_t mmcflags;
int ret = -ENOMEM;
- int irq, wakeirq;
+ int irq, wakeirq, num_irqs;
const char *chip_family;
/* We are only able to handle the wlan function */
irqd_get_trigger_type(irq_get_irq_data(irq));
res[0].name = "irq";
- res[1].start = wakeirq;
- res[1].flags = IORESOURCE_IRQ |
- irqd_get_trigger_type(irq_get_irq_data(wakeirq));
- res[1].name = "wakeirq";
- ret = platform_device_add_resources(glue->core, res, ARRAY_SIZE(res));
+ if (wakeirq > 0) {
+ res[1].start = wakeirq;
+ res[1].flags = IORESOURCE_IRQ |
+ irqd_get_trigger_type(irq_get_irq_data(wakeirq));
+ res[1].name = "wakeirq";
+ num_irqs = 2;
+ } else {
+ num_irqs = 1;
+ }
+ ret = platform_device_add_resources(glue->core, res, num_irqs);
if (ret) {
dev_err(glue->dev, "can't add resources\n");
goto out_dev_put;
bool ioq_live;
bool assoc_active;
+ atomic_t err_work_active;
u64 association_id;
struct list_head ctrl_list; /* rport->ctrl_list */
struct blk_mq_tag_set tag_set;
struct delayed_work connect_work;
+ struct work_struct err_work;
struct kref ref;
u32 flags;
struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops;
int i;
+ /* ensure we've initialized the ops once */
+ if (!(aen_op->flags & FCOP_FLAGS_AEN))
+ return;
+
for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++)
__nvme_fc_abort_op(ctrl, aen_op);
}
static void
nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
{
- /* only proceed if in LIVE state - e.g. on first error */
+ int active;
+
+ /*
+ * if an error (io timeout, etc) occurs while (re)connecting,
+ * it's an error on creating the new association.
+ * Start the error recovery thread if it hasn't already
+ * been started. It is expected there could be multiple
+ * ios hitting this path before things are cleaned up.
+ */
+ if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
+ active = atomic_xchg(&ctrl->err_work_active, 1);
+ if (!active && !schedule_work(&ctrl->err_work)) {
+ atomic_set(&ctrl->err_work_active, 0);
+ WARN_ON(1);
+ }
+ return;
+ }
+
+ /* Otherwise, only proceed if in LIVE state - e.g. on first error */
if (ctrl->ctrl.state != NVME_CTRL_LIVE)
return;
{
struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
+ cancel_work_sync(&ctrl->err_work);
cancel_delayed_work_sync(&ctrl->connect_work);
/*
* kill the association on the link side. this will block
}
static void
-nvme_fc_reset_ctrl_work(struct work_struct *work)
+__nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl)
{
- struct nvme_fc_ctrl *ctrl =
- container_of(work, struct nvme_fc_ctrl, ctrl.reset_work);
- int ret;
-
- nvme_stop_ctrl(&ctrl->ctrl);
+ nvme_stop_keep_alive(&ctrl->ctrl);
/* will block while waiting for io to terminate */
nvme_fc_delete_association(ctrl);
- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+ if (ctrl->ctrl.state != NVME_CTRL_CONNECTING &&
+ !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
dev_err(ctrl->ctrl.device,
"NVME-FC{%d}: error_recovery: Couldn't change state "
"to CONNECTING\n", ctrl->cnum);
- return;
- }
+}
+
+static void
+nvme_fc_reset_ctrl_work(struct work_struct *work)
+{
+ struct nvme_fc_ctrl *ctrl =
+ container_of(work, struct nvme_fc_ctrl, ctrl.reset_work);
+ int ret;
+
+ __nvme_fc_terminate_io(ctrl);
+
+ nvme_stop_ctrl(&ctrl->ctrl);
if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE)
ret = nvme_fc_create_association(ctrl);
ctrl->cnum);
}
+static void
+nvme_fc_connect_err_work(struct work_struct *work)
+{
+ struct nvme_fc_ctrl *ctrl =
+ container_of(work, struct nvme_fc_ctrl, err_work);
+
+ __nvme_fc_terminate_io(ctrl);
+
+ atomic_set(&ctrl->err_work_active, 0);
+
+ /*
+ * Rescheduling the connection after recovering
+ * from the io error is left to the reconnect work
+ * item, which is what should have stalled waiting on
+ * the io that had the error that scheduled this work.
+ */
+}
+
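
The err_work_active flag is the usual "schedule at most once" gate:
atomic_xchg() returns the previous value, so of several racing callers exactly
one sees 0 and gets to queue the work, and the worker re-arms the flag when it
finishes. The same gate in userspace C11, with a hypothetical do_recovery()
standing in for the work item:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int err_work_active;

    static void do_recovery(void) { puts("recovering"); }  /* stand-in */

    /* Callable from many contexts; the body runs at most once per episode. */
    static void kick_error_recovery(void)
    {
            /* exchange returns the old value: only the first caller sees 0 */
            if (atomic_exchange(&err_work_active, 1))
                    return;

            do_recovery();
            atomic_store(&err_work_active, 0);      /* re-arm */
    }

    int main(void)
    {
            kick_error_recovery();
            return 0;
    }
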
static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = {
.name = "fc",
.module = THIS_MODULE,
ctrl->cnum = idx;
ctrl->ioq_live = false;
ctrl->assoc_active = false;
+ atomic_set(&ctrl->err_work_active, 0);
init_waitqueue_head(&ctrl->ioabort_wait);
get_device(ctrl->dev);
INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work);
INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);
+ INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work);
spin_lock_init(&ctrl->lock);
/* io queue count */
fail_ctrl:
nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING);
cancel_work_sync(&ctrl->ctrl.reset_work);
+ cancel_work_sync(&ctrl->err_work);
cancel_delayed_work_sync(&ctrl->connect_work);
ctrl->ctrl.opts = NULL;
int bytes;
int bit_offset;
int nbits;
+ struct device_node *np;
struct nvmem_device *nvmem;
struct list_head node;
};
mutex_lock(&nvmem_mutex);
list_del(&cell->node);
mutex_unlock(&nvmem_mutex);
+ of_node_put(cell->np);
kfree(cell->name);
kfree(cell);
}
return -ENOMEM;
cell->nvmem = nvmem;
+ cell->np = of_node_get(child);
cell->offset = be32_to_cpup(addr++);
cell->bytes = be32_to_cpup(addr);
cell->name = kasprintf(GFP_KERNEL, "%pOFn", child);
#if IS_ENABLED(CONFIG_OF)
static struct nvmem_cell *
-nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index)
+nvmem_find_cell_by_node(struct nvmem_device *nvmem, struct device_node *np)
{
struct nvmem_cell *cell = NULL;
- int i = 0;
mutex_lock(&nvmem_mutex);
list_for_each_entry(cell, &nvmem->cells, node) {
- if (index == i++)
+ if (np == cell->np)
break;
}
mutex_unlock(&nvmem_mutex);
if (IS_ERR(nvmem))
return ERR_CAST(nvmem);
- cell = nvmem_find_cell_by_index(nvmem, index);
+ cell = nvmem_find_cell_by_node(nvmem, cell_np);
if (!cell) {
__nvmem_device_put(nvmem);
return ERR_PTR(-ENOENT);
*/
count = of_count_phandle_with_args(dev->of_node,
"operating-points-v2", NULL);
- if (count != 1)
- return -ENODEV;
-
- index = 0;
+ if (count == 1)
+ index = 0;
}
opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
int ret;
vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data,
- new_supply_vbb->u_volt);
+ new_supply_vdd->u_volt);
+
+ if (new_supply_vdd->u_volt_min < vdd_uv)
+ new_supply_vdd->u_volt_min = vdd_uv;
/* Scaling up? Scale voltage before frequency */
if (freq > old_freq) {
.probe = ti_opp_supply_probe,
.driver = {
.name = "ti_opp_supply",
- .owner = THIS_MODULE,
.of_match_table = of_match_ptr(ti_opp_supply_of_match),
},
};
{
struct pci_dev *pci_dev = to_pci_dev(dev);
struct acpi_device *adev = ACPI_COMPANION(dev);
- int node;
if (!adev)
return;
- node = acpi_get_node(adev->handle);
- if (node != NUMA_NO_NODE)
- set_dev_node(dev, node);
-
pci_acpi_optimize_delay(pci_dev, adev->handle);
pci_acpi_add_pm_notifier(adev, pci_dev);
static struct meson_bank meson_gxbb_aobus_banks[] = {
/* name first last irq pullen pull dir out in */
- BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
+ BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
};
static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
static struct meson_bank meson_gxl_aobus_banks[] = {
/* name first last irq pullen pull dir out in */
- BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
+ BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
};
static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
dev_dbg(pc->dev, "pin %u: disable bias\n", pin);
meson_calc_reg_and_bit(bank, pin, REG_PULL, &reg, &bit);
- ret = regmap_update_bits(pc->reg_pull, reg,
+ ret = regmap_update_bits(pc->reg_pullen, reg,
BIT(bit), 0);
if (ret)
return ret;
static struct meson_bank meson8_aobus_banks[] = {
/* name first last irq pullen pull dir out in */
- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
+ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
};
static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
static struct meson_bank meson8b_aobus_banks[] = {
/* name first lastc irq pullen pull dir out in */
- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
+ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
};
static struct meson_pinctrl_data meson8b_cbus_pinctrl_data = {
tv64.tv_sec = rtc_tm_to_time64(&tm);
#if BITS_PER_LONG == 32
- if (tv64.tv_sec > INT_MAX)
+ if (tv64.tv_sec > INT_MAX) {
+ err = -ERANGE;
goto err_read;
+ }
#endif
err = do_settimeofday64(&tv64);
struct cmos_rtc *cmos = dev_get_drvdata(dev);
unsigned char rtc_control;
+ /* This is not only an rtc_op, but is also called directly */
if (!is_valid_irq(cmos->irq))
return -EIO;
unsigned char mon, mday, hrs, min, sec, rtc_control;
int ret;
+ /* This is not only an rtc_op, but is also called directly */
if (!is_valid_irq(cmos->irq))
return -EIO;
struct cmos_rtc *cmos = dev_get_drvdata(dev);
unsigned long flags;
- if (!is_valid_irq(cmos->irq))
- return -EINVAL;
-
spin_lock_irqsave(&rtc_lock, flags);
if (enabled)
.alarm_irq_enable = cmos_alarm_irq_enable,
};
+static const struct rtc_class_ops cmos_rtc_ops_no_alarm = {
+ .read_time = cmos_read_time,
+ .set_time = cmos_set_time,
+ .proc = cmos_procfs,
+};
+
/*----------------------------------------------------------------*/
/*
dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq);
goto cleanup1;
}
+
+ cmos_rtc.rtc->ops = &cmos_rtc_ops;
+ } else {
+ cmos_rtc.rtc->ops = &cmos_rtc_ops_no_alarm;
}
- cmos_rtc.rtc->ops = &cmos_rtc_ops;
cmos_rtc.rtc->nvram_old_abi = true;
retval = rtc_register_device(cmos_rtc.rtc);
if (retval)
memcpy(buf + 1, val, val_size);
ret = i2c_master_send(client, buf, val_size + 1);
+
+ kfree(buf);
+
if (ret != val_size + 1)
return ret < 0 ? ret : -EIO;
* orb specified one of the unsupported formats, we defer
* checking for IDAWs in unsupported formats to here.
*/
- if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw))
+ if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) {
+ kfree(p);
return -EOPNOTSUPP;
+ }
if ((!ccw_is_chain(ccw)) && (!ccw_is_tic(ccw)))
break;
ret = pfn_array_alloc_pin(pat->pat_pa, cp->mdev, ccw->cda, ccw->count);
if (ret < 0)
- goto out_init;
+ goto out_unpin;
/* Translate this direct ccw to a idal ccw. */
idaws = kcalloc(ret, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
#include "vfio_ccw_private.h"
struct workqueue_struct *vfio_ccw_work_q;
-struct kmem_cache *vfio_ccw_io_region;
+static struct kmem_cache *vfio_ccw_io_region;
/*
* Helpers
if (ret)
goto out_free;
- ret = vfio_ccw_mdev_reg(sch);
- if (ret)
- goto out_disable;
-
INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo);
atomic_set(&private->avail, 1);
private->state = VFIO_CCW_STATE_STANDBY;
+ ret = vfio_ccw_mdev_reg(sch);
+ if (ret)
+ goto out_disable;
+
return 0;
out_disable:
drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;
if (!!devres != !!drvres)
return -ENODEV;
+ /* (re-)init queue's state machine */
+ ap_queue_reinit_state(to_ap_queue(dev));
}
/* Add queue/card to list of active queues/cards */
struct ap_device *ap_dev = to_ap_dev(dev);
struct ap_driver *ap_drv = ap_dev->drv;
+ if (is_queue_dev(dev))
+ ap_queue_remove(to_ap_queue(dev));
if (ap_drv->remove)
ap_drv->remove(ap_dev);
aq->ap_dev.device.parent = &ac->ap_dev.device;
dev_set_name(&aq->ap_dev.device,
"%02x.%04x", id, dom);
- /* Start with a device reset */
- spin_lock_bh(&aq->lock);
- ap_wait(ap_sm_event(aq, AP_EVENT_POLL));
- spin_unlock_bh(&aq->lock);
/* Register device */
rc = device_register(&aq->ap_dev.device);
if (rc) {
void ap_queue_remove(struct ap_queue *aq);
void ap_queue_suspend(struct ap_device *ap_dev);
void ap_queue_resume(struct ap_device *ap_dev);
+void ap_queue_reinit_state(struct ap_queue *aq);
struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,
int comp_device_type, unsigned int functions);
{
ap_flush_queue(aq);
del_timer_sync(&aq->timeout);
+
+ /* reset with zero, also clears irq registration */
+ spin_lock_bh(&aq->lock);
+ ap_zapq(aq->qid);
+ aq->state = AP_STATE_BORKED;
+ spin_unlock_bh(&aq->lock);
}
EXPORT_SYMBOL(ap_queue_remove);
+
+void ap_queue_reinit_state(struct ap_queue *aq)
+{
+ spin_lock_bh(&aq->lock);
+ aq->state = AP_STATE_RESET_START;
+ ap_wait(ap_sm_event(aq, AP_EVENT_POLL));
+ spin_unlock_bh(&aq->lock);
+}
+EXPORT_SYMBOL(ap_queue_reinit_state);
struct ap_queue *aq = to_ap_queue(&ap_dev->device);
struct zcrypt_queue *zq = aq->private;
- ap_queue_remove(aq);
if (zq)
zcrypt_queue_unregister(zq);
}
struct ap_queue *aq = to_ap_queue(&ap_dev->device);
struct zcrypt_queue *zq = aq->private;
- ap_queue_remove(aq);
if (zq)
zcrypt_queue_unregister(zq);
}
struct ap_queue *aq = to_ap_queue(&ap_dev->device);
struct zcrypt_queue *zq = aq->private;
- ap_queue_remove(aq);
if (zq)
zcrypt_queue_unregister(zq);
}
break;
clear_bit_inv(bit, bv);
+ ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
barrier();
smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET);
- ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
}
if (ism->sba->e) {
{
struct qeth_ipa_cmd *cmd;
struct qeth_arp_query_info *qinfo;
- struct qeth_snmp_cmd *snmp;
unsigned char *data;
+ void *snmp_data;
__u16 data_len;
QETH_CARD_TEXT(card, 3, "snpcmdcb");
cmd = (struct qeth_ipa_cmd *) sdata;
data = (unsigned char *)((char *)cmd - reply->offset);
qinfo = (struct qeth_arp_query_info *) reply->param;
- snmp = &cmd->data.setadapterparms.data.snmp;
if (cmd->hdr.return_code) {
QETH_CARD_TEXT_(card, 4, "scer1%x", cmd->hdr.return_code);
return 0;
}
data_len = *((__u16 *)QETH_IPA_PDU_LEN_PDU1(data));
- if (cmd->data.setadapterparms.hdr.seq_no == 1)
- data_len -= (__u16)((char *)&snmp->data - (char *)cmd);
- else
- data_len -= (__u16)((char *)&snmp->request - (char *)cmd);
+ if (cmd->data.setadapterparms.hdr.seq_no == 1) {
+ snmp_data = &cmd->data.setadapterparms.data.snmp;
+ data_len -= offsetof(struct qeth_ipa_cmd,
+ data.setadapterparms.data.snmp);
+ } else {
+ snmp_data = &cmd->data.setadapterparms.data.snmp.request;
+ data_len -= offsetof(struct qeth_ipa_cmd,
+ data.setadapterparms.data.snmp.request);
+ }
/* check if there is enough room in userspace */
if ((qinfo->udata_len - qinfo->udata_offset) < data_len) {
QETH_CARD_TEXT_(card, 4, "sseqn%i",
cmd->data.setadapterparms.hdr.seq_no);
/*copy entries to user buffer*/
- if (cmd->data.setadapterparms.hdr.seq_no == 1) {
- memcpy(qinfo->udata + qinfo->udata_offset,
- (char *)snmp,
- data_len + offsetof(struct qeth_snmp_cmd, data));
- qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data);
- } else {
- memcpy(qinfo->udata + qinfo->udata_offset,
- (char *)&snmp->request, data_len);
- }
+ memcpy(qinfo->udata + qinfo->udata_offset, snmp_data, data_len);
qinfo->udata_offset += data_len;
+
/* check if all replies received ... */
QETH_CARD_TEXT_(card, 4, "srtot%i",
cmd->data.setadapterparms.hdr.used_total);
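
The rewrite above replaces pointer subtraction with offsetof(), which states
the intent directly: the payload length is the PDU length minus the offset of
the SNMP data within the command. A freestanding illustration of the idiom
(made-up layout, not the qeth structures):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    struct cmd {
            uint16_t hdr;
            uint16_t seq_no;
            struct {
                    uint32_t token;
                    uint8_t data[64];
            } snmp;
    };

    int main(void)
    {
            uint16_t pdu_len = 40;  /* total bytes on the wire, say */

            /* Bytes of SNMP payload = total minus everything before it. */
            uint16_t data_len = pdu_len - offsetof(struct cmd, snmp.data);

            printf("payload: %u bytes (snmp.data at offset %zu)\n",
                   data_len, offsetof(struct cmd, snmp.data));
            return 0;
    }
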
config SCSI_MYRS
tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)"
depends on PCI
+ depends on !CPU_BIG_ENDIAN || COMPILE_TEST
select RAID_ATTRS
help
This driver adds support for the Mylex DAC960, AcceleRAID, and
out:
if (!hostdata->selecting)
- return NULL;
+ return false;
hostdata->selecting = NULL;
return ret;
}
{
struct hisi_hba *hisi_hba = dq->hisi_hba;
struct hisi_sas_slot *s, *s1, *s2 = NULL;
- struct list_head *dq_list;
int dlvry_queue = dq->id;
int wp;
- dq_list = &dq->list;
list_for_each_entry_safe(s, s1, &dq->list, delivery) {
if (!s->ready)
break;
{
struct hisi_hba *hisi_hba = dq->hisi_hba;
struct hisi_sas_slot *s, *s1, *s2 = NULL;
- struct list_head *dq_list;
int dlvry_queue = dq->id;
int wp;
- dq_list = &dq->list;
list_for_each_entry_safe(s, s1, &dq->list, delivery) {
if (!s->ready)
break;
{
struct hisi_hba *hisi_hba = dq->hisi_hba;
struct hisi_sas_slot *s, *s1, *s2 = NULL;
- struct list_head *dq_list;
int dlvry_queue = dq->id;
int wp;
- dq_list = &dq->list;
list_for_each_entry_safe(s, s1, &dq->list, delivery) {
if (!s->ready)
break;
rport = lpfc_ndlp_get_nrport(ndlp);
if (rport)
nrport = rport->remoteport;
+ else
+ nrport = NULL;
spin_unlock(&phba->hbalock);
if (!nrport)
continue;
enquiry2->fw.firmware_type = '0';
enquiry2->fw.turn_id = 0;
}
- sprintf(cb->fw_version, "%d.%02d-%c-%02d",
+ snprintf(cb->fw_version, sizeof(cb->fw_version),
+ "%d.%02d-%c-%02d",
enquiry2->fw.major_version,
enquiry2->fw.minor_version,
enquiry2->fw.firmware_type,
dma_addr_t ctlr_info_addr;
union myrs_sgl *sgl;
unsigned char status;
- struct myrs_ctlr_info old;
+ unsigned short ldev_present, ldev_critical, ldev_offline;
+
+ ldev_present = cs->ctlr_info->ldev_present;
+ ldev_critical = cs->ctlr_info->ldev_critical;
+ ldev_offline = cs->ctlr_info->ldev_offline;
- memcpy(&old, cs->ctlr_info, sizeof(struct myrs_ctlr_info));
ctlr_info_addr = dma_map_single(&cs->pdev->dev, cs->ctlr_info,
sizeof(struct myrs_ctlr_info),
DMA_FROM_DEVICE);
cs->ctlr_info->rbld_active +
cs->ctlr_info->exp_active != 0)
cs->needs_update = true;
- if (cs->ctlr_info->ldev_present != old.ldev_present ||
- cs->ctlr_info->ldev_critical != old.ldev_critical ||
- cs->ctlr_info->ldev_offline != old.ldev_offline)
+ if (cs->ctlr_info->ldev_present != ldev_present ||
+ cs->ctlr_info->ldev_critical != ldev_critical ||
+ cs->ctlr_info->ldev_offline != ldev_offline)
shost_printk(KERN_INFO, cs->host,
"Logical drive count changes (%d/%d/%d)\n",
cs->ctlr_info->ldev_critical,
fcport->loop_id = FC_NO_LOOP_ID;
qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED);
fcport->supported_classes = FC_COS_UNSPECIFIED;
+ fcport->fp_speed = PORT_SPEED_UNKNOWN;
fcport->ct_desc.ct_sns = dma_alloc_coherent(&vha->hw->pdev->dev,
sizeof(struct ct_sns_pkt), &fcport->ct_desc.ct_sns_dma,
MODULE_PARM_DESC(ql2xplogiabsentdevice,
"Option to enable PLOGI to devices that are not present after "
"a Fabric scan. This is needed for several broken switches. "
- "Default is 0 - no PLOGI. 1 - perfom PLOGI.");
+ "Default is 0 - no PLOGI. 1 - perform PLOGI.");
int ql2xloginretrycount = 0;
module_param(ql2xloginretrycount, int, S_IRUGO);
static void
__qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
{
- int cnt;
+ int cnt, status;
unsigned long flags;
srb_t *sp;
scsi_qla_host_t *vha = qp->vha;
if (!sp_get(sp)) {
spin_unlock_irqrestore
(qp->qp_lock_ptr, flags);
- qla2xxx_eh_abort(
+ status = qla2xxx_eh_abort(
GET_CMD_SP(sp));
spin_lock_irqsave
(qp->qp_lock_ptr, flags);
+ /*
+ * Get rid of extra reference caused
+ * by early exit from qla2xxx_eh_abort
+ */
+ if (status == FAST_IO_FAIL)
+ atomic_dec(&sp->ref_count);
}
}
sp->done(sp, res);
*/
scsi_mq_uninit_cmd(cmd);
+ /*
+ * queue is still alive, so grab a reference to prevent it
+ * from being cleaned up while the queue is being run.
+ */
+ percpu_ref_get(&q->q_usage_counter);
+
__blk_mq_end_request(req, error);
if (scsi_target(sdev)->single_lun ||
kblockd_schedule_work(&sdev->requeue_work);
else
blk_mq_run_hw_queues(q, true);
+
+ percpu_ref_put(&q->q_usage_counter);
} else {
unsigned long flags;
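
The percpu_ref get/put pair pins q->q_usage_counter across the requeue so the
queue cannot complete its teardown while it is still being run. The bracket
pattern reduced to a plain atomic counter (a sketch; the real percpu_ref is
considerably more involved):

    #include <stdatomic.h>

    struct queue {
            atomic_int usage;       /* stands in for q->q_usage_counter */
    };

    static void queue_run(struct queue *q) { (void)q; /* kick the queue */ }

    /* Pin the queue for the duration of the requeue work. */
    static void requeue(struct queue *q)
    {
            atomic_fetch_add(&q->usage, 1);         /* percpu_ref_get() */
            queue_run(q);           /* safe: teardown must wait for the ref */
            atomic_fetch_sub(&q->usage, 1);         /* percpu_ref_put() */
    }
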
#include "unipro.h"
#include "ufs-hisi.h"
#include "ufshci.h"
+#include "ufs_quirks.h"
static int ufs_hisi_check_hibern8(struct ufs_hba *hba)
{
static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba)
{
+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME) {
+ pr_info("ufs flash device must set VS_DebugSaveConfigTime 0x10\n");
+ /* VS_DebugSaveConfigTime */
+ ufshcd_dme_set(hba, UIC_ARG_MIB(0xD0A0), 0x10);
+ /* sync length */
+ ufshcd_dme_set(hba, UIC_ARG_MIB(0x1556), 0x48);
+ }
+
/* update */
ufshcd_dme_set(hba, UIC_ARG_MIB(0x15A8), 0x1);
/* PA_TxSkip */
*/
#define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8)
+/*
+ * Some UFS devices require VS_DebugSaveConfigTime to be 0x10;
+ * enabling this quirk ensures this.
+ */
+#define UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME (1 << 9)
+
#endif /* UFS_QUIRKS_H_ */
UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME),
+ UFS_FIX(UFS_VENDOR_SKHYNIX, "hB8aL1" /*H28U62301AMR*/,
+ UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME),
END_FIX
};
err = -ENOMEM;
goto out_error;
}
-
- /*
- * Do not use blk-mq at this time because blk-mq does not support
- * runtime pm.
- */
- host->use_blk_mq = false;
-
hba = shost_priv(host);
hba->host = host;
hba->dev = dev;
u8 la = txn->la;
bool usr_msg = false;
- if (txn->mc & SLIM_MSG_CLK_PAUSE_SEQ_FLG)
- return -EPROTONOSUPPORT;
-
if (txn->mt == SLIM_MSG_MT_CORE &&
(txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION &&
txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW))
#define SLIM_MSG_MC_NEXT_REMOVE_CHANNEL 0x58
#define SLIM_MSG_MC_RECONFIGURE_NOW 0x5F
-/*
- * Clock pause flag to indicate that the reconfig message
- * corresponds to clock pause sequence
- */
-#define SLIM_MSG_CLK_PAUSE_SEQ_FLG (1U << 8)
-
/* Clock pause values per SLIMbus spec */
#define SLIM_CLK_FAST 0
#define SLIM_CLK_CONST_PHASE 1
mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
mtk_spi_setup_packet(master);
- cnt = len / 4;
+ cnt = mdata->xfer_len / 4;
iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
trans->tx_buf + mdata->num_xfered, cnt);
- remainder = len % 4;
+ remainder = mdata->xfer_len % 4;
if (remainder > 0) {
reg_val = 0;
memcpy(&reg_val,
/* work with hotplug and coldplug */
MODULE_ALIAS("platform:omap2_mcspi");
-#ifdef CONFIG_SUSPEND
-static int omap2_mcspi_suspend_noirq(struct device *dev)
+static int __maybe_unused omap2_mcspi_suspend(struct device *dev)
{
- return pinctrl_pm_select_sleep_state(dev);
+ struct spi_master *master = dev_get_drvdata(dev);
+ struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
+ int error;
+
+ error = pinctrl_pm_select_sleep_state(dev);
+ if (error)
+ dev_warn(mcspi->dev, "%s: failed to set pins: %i\n",
+ __func__, error);
+
+ error = spi_master_suspend(master);
+ if (error)
+ dev_warn(mcspi->dev, "%s: master suspend failed: %i\n",
+ __func__, error);
+
+ return pm_runtime_force_suspend(dev);
}
-static int omap2_mcspi_resume_noirq(struct device *dev)
+static int __maybe_unused omap2_mcspi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
dev_warn(mcspi->dev, "%s: failed to set pins: %i\n",
__func__, error);
- return 0;
-}
+ error = spi_master_resume(master);
+ if (error)
+ dev_warn(mcspi->dev, "%s: master resume failed: %i\n",
+ __func__, error);
-#else
-#define omap2_mcspi_suspend_noirq NULL
-#define omap2_mcspi_resume_noirq NULL
-#endif
+ return pm_runtime_force_resume(dev);
+}
static const struct dev_pm_ops omap2_mcspi_pm_ops = {
- .suspend_noirq = omap2_mcspi_suspend_noirq,
- .resume_noirq = omap2_mcspi_resume_noirq,
+ SET_SYSTEM_SLEEP_PM_OPS(omap2_mcspi_suspend,
+ omap2_mcspi_resume)
.runtime_resume = omap_mcspi_runtime_resume,
};
ipipeif_write(val, ipipeif_base_addr, IPIPEIF_CFG2);
break;
}
+ /* fall through */
case IPIPEIF_SDRAM_YUV:
/* Set clock divider */
static const struct media_device_ops cedrus_m2m_media_ops = {
.req_validate = cedrus_request_validate,
- .req_queue = vb2_m2m_request_queue,
+ .req_queue = v4l2_m2m_request_queue,
};
static int cedrus_probe(struct platform_device *pdev)
void transport_generic_request_failure(struct se_cmd *cmd,
sense_reason_t sense_reason)
{
- int ret = 0;
+ int ret = 0, post_ret;
pr_debug("-----[ Storage Engine Exception; sense_reason %d\n",
sense_reason);
transport_complete_task_attr(cmd);
if (cmd->transport_complete_callback)
- cmd->transport_complete_callback(cmd, false, NULL);
+ cmd->transport_complete_callback(cmd, false, &post_ret);
if (transport_check_aborted_status(cmd, 1))
return;
if (ret)
goto err_uio_dev_add_attributes;
+ info->uio_dev = idev;
+
if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
/*
* Note that we deliberately don't use devm_request_irq
*/
ret = request_irq(info->irq, uio_interrupt,
info->irq_flags, info->name, idev);
- if (ret)
+ if (ret) {
+ info->uio_dev = NULL;
goto err_request_irq;
+ }
}
- info->uio_dev = idev;
return 0;
err_request_irq:
{ USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */
.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
},
+ { USB_DEVICE(0x0572, 0x1349), /* Hiro (Conexant) USB MODEM H50228 */
+ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ },
{ USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */
.driver_info = QUIRK_CONTROL_LINE_STATE, },
{ USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
int i, status;
u16 portchange, portstatus;
struct usb_port *port_dev = hub->ports[port1 - 1];
+ int reset_recovery_time;
if (!hub_is_superspeed(hub->hdev)) {
if (warm) {
USB_PORT_FEAT_C_BH_PORT_RESET);
usb_clear_port_feature(hub->hdev, port1,
USB_PORT_FEAT_C_PORT_LINK_STATE);
- usb_clear_port_feature(hub->hdev, port1,
+
+ if (udev)
+ usb_clear_port_feature(hub->hdev, port1,
USB_PORT_FEAT_C_CONNECTION);
/*
done:
if (status == 0) {
- /* TRSTRCY = 10 ms; plus some extra */
if (port_dev->quirks & USB_PORT_QUIRK_FAST_ENUM)
usleep_range(10000, 12000);
- else
- msleep(10 + 40);
+ else {
+ /* TRSTRCY = 10 ms; plus some extra */
+ reset_recovery_time = 10 + 40;
+
+ /* Hub needs extra delay after resetting its port. */
+ if (hub->hdev->quirks & USB_QUIRK_HUB_SLOW_RESET)
+ reset_recovery_time += 100;
+
+ msleep(reset_recovery_time);
+ }
if (udev) {
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
case 'n':
flags |= USB_QUIRK_DELAY_CTRL_MSG;
break;
+ case 'o':
+ flags |= USB_QUIRK_HUB_SLOW_RESET;
+ break;
/* Ignore unrecognized flag characters */
}
}
{ USB_DEVICE(0x1a0a, 0x0200), .driver_info =
USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+ /* Terminus Technology Inc. Hub */
+ { USB_DEVICE(0x1a40, 0x0101), .driver_info = USB_QUIRK_HUB_SLOW_RESET },
+
/* Corsair K70 RGB */
{ USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT },
{ USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT |
USB_QUIRK_DELAY_CTRL_MSG },
+ /* Corsair K70 LUX RGB */
+ { USB_DEVICE(0x1b1c, 0x1b33), .driver_info = USB_QUIRK_DELAY_INIT },
+
/* Corsair K70 LUX */
{ USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT },
{ USB_DEVICE(0x2040, 0x7200), .driver_info =
USB_QUIRK_CONFIG_INTF_STRINGS },
+ /* Raydium Touchscreen */
+ { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
+
+ { USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM },
+
/* DJI CineSSD */
{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
dwc2 = platform_device_alloc("dwc2", PLATFORM_DEVID_AUTO);
if (!dwc2) {
dev_err(dev, "couldn't allocate dwc2 device\n");
+ ret = -ENOMEM;
goto err;
}
err5:
dwc3_event_buffers_cleanup(dwc);
+ dwc3_ulpi_exit(dwc);
err4:
dwc3_free_scratch_buffers(dwc);
static void dwc3_pci_remove(struct pci_dev *pci)
{
struct dwc3_pci *dwc = pci_get_drvdata(pci);
+ struct pci_dev *pdev = dwc->pci;
- gpiod_remove_lookup_table(&platform_bytcr_gpios);
+ if (pdev->device == PCI_DEVICE_ID_INTEL_BYT)
+ gpiod_remove_lookup_table(&platform_bytcr_gpios);
#ifdef CONFIG_PM
cancel_work_sync(&dwc->wakeup_work);
#endif
/* Now prepare one extra TRB to align transfer size */
trb = &dep->trb_pool[dep->trb_enqueue];
__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr,
- maxp - rem, false, 0,
+ maxp - rem, false, 1,
req->request.stream_id,
req->request.short_not_ok,
req->request.no_interrupt);
/* Now prepare one extra TRB to align transfer size */
trb = &dep->trb_pool[dep->trb_enqueue];
__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem,
- false, 0, req->request.stream_id,
+ false, 1, req->request.stream_id,
req->request.short_not_ok,
req->request.no_interrupt);
} else if (req->request.zero && req->request.length &&
/* Now prepare one extra TRB to handle ZLP */
trb = &dep->trb_pool[dep->trb_enqueue];
__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
- false, 0, req->request.stream_id,
+ false, 1, req->request.stream_id,
req->request.short_not_ok,
req->request.no_interrupt);
} else {
* with one TRB pending in the ring. We need to manually clear HWO bit
* from that TRB.
*/
- if ((req->zero || req->unaligned) && (trb->ctrl & DWC3_TRB_CTRL_HWO)) {
+ if ((req->zero || req->unaligned) && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) {
trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
return 1;
}
struct mm_struct *mm;
struct work_struct work;
- struct work_struct cancellation_work;
struct usb_ep *ep;
struct usb_request *req;
return 0;
}
-static void ffs_aio_cancel_worker(struct work_struct *work)
-{
- struct ffs_io_data *io_data = container_of(work, struct ffs_io_data,
- cancellation_work);
-
- ENTER();
-
- usb_ep_dequeue(io_data->ep, io_data->req);
-}
-
static int ffs_aio_cancel(struct kiocb *kiocb)
{
struct ffs_io_data *io_data = kiocb->private;
- struct ffs_data *ffs = io_data->ffs;
+ struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
int value;
ENTER();
- if (likely(io_data && io_data->ep && io_data->req)) {
- INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker);
- queue_work(ffs->io_completion_wq, &io_data->cancellation_work);
- value = -EINPROGRESS;
- } else {
+ spin_lock_irq(&epfile->ffs->eps_lock);
+
+ if (likely(io_data && io_data->ep && io_data->req))
+ value = usb_ep_dequeue(io_data->ep, io_data->req);
+ else
value = -EINVAL;
- }
+
+ spin_unlock_irq(&epfile->ffs->eps_lock);
return value;
}
struct xhci_hcd_histb *histb = platform_get_drvdata(dev);
struct usb_hcd *hcd = histb->hcd;
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ struct usb_hcd *shared_hcd = xhci->shared_hcd;
xhci->xhc_state |= XHCI_STATE_REMOVING;
- usb_remove_hcd(xhci->shared_hcd);
+ usb_remove_hcd(shared_hcd);
+ xhci->shared_hcd = NULL;
device_wakeup_disable(&dev->dev);
usb_remove_hcd(hcd);
- usb_put_hcd(xhci->shared_hcd);
+ usb_put_hcd(shared_hcd);
xhci_histb_host_disable(histb);
usb_put_hcd(hcd);
status |= USB_PORT_STAT_SUSPEND;
}
if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME &&
- !DEV_SUPERSPEED_ANY(raw_port_status)) {
+ !DEV_SUPERSPEED_ANY(raw_port_status) && hcd->speed < HCD_USB3) {
if ((raw_port_status & PORT_RESET) ||
!(raw_port_status & PORT_PE))
return 0xffffffff;
time_left = wait_for_completion_timeout(
&bus_state->rexit_done[wIndex],
msecs_to_jiffies(
- XHCI_MAX_REXIT_TIMEOUT));
+ XHCI_MAX_REXIT_TIMEOUT_MS));
spin_lock_irqsave(&xhci->lock, flags);
if (time_left) {
} else {
int port_status = readl(port->addr);
xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n",
- XHCI_MAX_REXIT_TIMEOUT,
+ XHCI_MAX_REXIT_TIMEOUT_MS,
port_status);
status |= USB_PORT_STAT_SUSPEND;
clear_bit(wIndex, &bus_state->rexit_ports);
unsigned long flags;
struct xhci_hub *rhub;
struct xhci_port **ports;
+ u32 portsc_buf[USB_MAXCHILDREN];
+ bool wake_enabled;
rhub = xhci_get_rhub(hcd);
ports = rhub->ports;
max_ports = rhub->num_ports;
bus_state = &xhci->bus_state[hcd_index(hcd)];
+ wake_enabled = hcd->self.root_hub->do_remote_wakeup;
spin_lock_irqsave(&xhci->lock, flags);
- if (hcd->self.root_hub->do_remote_wakeup) {
+ if (wake_enabled) {
if (bus_state->resuming_ports || /* USB2 */
bus_state->port_remote_wakeup) { /* USB3 */
spin_unlock_irqrestore(&xhci->lock, flags);
return -EBUSY;
}
}
-
- port_index = max_ports;
+ /*
+ * Prepare ports for suspend, but don't write anything before all ports
+ * are checked and we know bus suspend can proceed
+ */
bus_state->bus_suspended = 0;
+ port_index = max_ports;
while (port_index--) {
- /* suspend the port if the port is not suspended */
u32 t1, t2;
- int slot_id;
t1 = readl(ports[port_index]->addr);
t2 = xhci_port_state_to_neutral(t1);
+ portsc_buf[port_index] = 0;
- if ((t1 & PORT_PE) && !(t1 & PORT_PLS_MASK)) {
- xhci_dbg(xhci, "port %d not suspended\n", port_index);
- slot_id = xhci_find_slot_id_by_port(hcd, xhci,
- port_index + 1);
- if (slot_id) {
+ /* Bail out if a USB3 port has a new device in link training */
+ if ((t1 & PORT_PLS_MASK) == XDEV_POLLING) {
+ bus_state->bus_suspended = 0;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
+ return -EBUSY;
+ }
+
+ /* suspend ports in U0, or bail out for new connect changes */
+ if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ if ((t1 & PORT_CSC) && wake_enabled) {
+ bus_state->bus_suspended = 0;
spin_unlock_irqrestore(&xhci->lock, flags);
- xhci_stop_device(xhci, slot_id, 1);
- spin_lock_irqsave(&xhci->lock, flags);
+ xhci_dbg(xhci, "Bus suspend bailout, port connect change\n");
+ return -EBUSY;
}
+ xhci_dbg(xhci, "port %d not suspended\n", port_index);
t2 &= ~PORT_PLS_MASK;
t2 |= PORT_LINK_STROBE | XDEV_U3;
set_bit(port_index, &bus_state->bus_suspended);
* including the USB 3.0 roothub, but only if CONFIG_PM
* is enabled, so also enable remote wake here.
*/
- if (hcd->self.root_hub->do_remote_wakeup) {
+ if (wake_enabled) {
if (t1 & PORT_CONNECT) {
t2 |= PORT_WKOC_E | PORT_WKDISC_E;
t2 &= ~PORT_WKCONN_E;
t1 = xhci_port_state_to_neutral(t1);
if (t1 != t2)
- writel(t2, ports[port_index]->addr);
+ portsc_buf[port_index] = t2;
+ }
+
+ /* write port settings, stopping and suspending ports if needed */
+ port_index = max_ports;
+ while (port_index--) {
+ if (!portsc_buf[port_index])
+ continue;
+ if (test_bit(port_index, &bus_state->bus_suspended)) {
+ int slot_id;
+
+ slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+ port_index + 1);
+ if (slot_id) {
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ xhci_stop_device(xhci, slot_id, 1);
+ spin_lock_irqsave(&xhci->lock, flags);
+ }
+ }
+ writel(portsc_buf[port_index], ports[port_index]->addr);
}
hcd->state = HC_STATE_SUSPENDED;
bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
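
The suspend rework above is a two-phase commit over the ports: first scan
every port read-only, bailing out with -EBUSY on anything that forbids
suspend, and only then write the staged PORTSC values back. That ordering
means a failed check never leaves some ports already suspended. The shape in
miniature:

    #include <errno.h>

    #define MAX_PORTS 8

    static unsigned int portsc[MAX_PORTS];  /* live registers, say */

    static int port_blocks_suspend(unsigned int sc) { return sc & 0x1; }
    static unsigned int port_suspended_value(unsigned int sc) { return sc | 0x4; }

    /* Phase 1: validate and stage; phase 2: apply. No partial writes. */
    static int bus_suspend(void)
    {
            unsigned int staged[MAX_PORTS];
            int i;

            for (i = 0; i < MAX_PORTS; i++) {
                    if (port_blocks_suspend(portsc[i]))
                            return -EBUSY;  /* nothing written yet */
                    staged[i] = port_suspended_value(portsc[i]);
            }
            for (i = 0; i < MAX_PORTS; i++)
                    portsc[i] = staged[i];  /* commit */
            return 0;
    }
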
struct xhci_hcd_mtk *mtk = platform_get_drvdata(dev);
struct usb_hcd *hcd = mtk->hcd;
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ struct usb_hcd *shared_hcd = xhci->shared_hcd;
- usb_remove_hcd(xhci->shared_hcd);
+ usb_remove_hcd(shared_hcd);
+ xhci->shared_hcd = NULL;
device_init_wakeup(&dev->dev, false);
usb_remove_hcd(hcd);
- usb_put_hcd(xhci->shared_hcd);
+ usb_put_hcd(shared_hcd);
usb_put_hcd(hcd);
xhci_mtk_sch_exit(mtk);
xhci_mtk_clks_disable(mtk);
if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241)
xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7;
+ if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM ||
+ pdev->vendor == PCI_VENDOR_ID_CAVIUM) &&
+ pdev->device == 0x9026)
+ xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT;
+
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
"QUIRK: Resetting on resume");
if (xhci->shared_hcd) {
usb_remove_hcd(xhci->shared_hcd);
usb_put_hcd(xhci->shared_hcd);
+ xhci->shared_hcd = NULL;
}
/* Workaround for spurious wakeups at shutdown with HSW */
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
struct clk *clk = xhci->clk;
struct clk *reg_clk = xhci->reg_clk;
+ struct usb_hcd *shared_hcd = xhci->shared_hcd;
xhci->xhc_state |= XHCI_STATE_REMOVING;
- usb_remove_hcd(xhci->shared_hcd);
+ usb_remove_hcd(shared_hcd);
+ xhci->shared_hcd = NULL;
usb_phy_shutdown(hcd->usb_phy);
usb_remove_hcd(hcd);
- usb_put_hcd(xhci->shared_hcd);
+ usb_put_hcd(shared_hcd);
clk_disable_unprepare(clk);
clk_disable_unprepare(reg_clk);
usb_wakeup_notification(udev->parent, udev->portnum);
}
+/*
+ * Quirk handler for errata seen on Cavium ThunderX2 processor XHCI
+ * controller.
+ * As per ThunderX2 errata-129, a USB 2 device may come up as USB 1
+ * if a connection to a USB 1 device is followed by another connection
+ * to a USB 2 device.
+ *
+ * Reset the PHY after the USB device is disconnected if device speed
+ * is less than HCD_USB3.
+ * Retry the reset sequence max of 4 times checking the PLL lock status.
+ *
+ */
+static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci)
+{
+ struct usb_hcd *hcd = xhci_to_hcd(xhci);
+ u32 pll_lock_check;
+ u32 retry_count = 4;
+
+ do {
+ /* Assert PHY reset */
+ writel(0x6F, hcd->regs + 0x1048);
+ udelay(10);
+ /* De-assert the PHY reset */
+ writel(0x7F, hcd->regs + 0x1048);
+ udelay(200);
+ pll_lock_check = readl(hcd->regs + 0x1070);
+ } while (!(pll_lock_check & 0x1) && --retry_count);
+}
+
static void handle_port_status(struct xhci_hcd *xhci,
union xhci_trb *event)
{
goto cleanup;
}
+ /* We might get interrupts after shared_hcd is removed */
+ if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) {
+ xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n");
+ bogus_port_status = true;
+ goto cleanup;
+ }
+
hcd = port->rhub->hcd;
bus_state = &xhci->bus_state[hcd_index(hcd)];
hcd_portnum = port->hcd_portnum;
* RExit to a disconnect state). If so, let the driver know it's
* out of the RExit state.
*/
- if (!DEV_SUPERSPEED_ANY(portsc) &&
+ if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 &&
test_and_clear_bit(hcd_portnum,
&bus_state->rexit_ports)) {
complete(&bus_state->rexit_done[hcd_portnum]);
goto cleanup;
}
- if (hcd->speed < HCD_USB3)
+ if (hcd->speed < HCD_USB3) {
xhci_test_and_clear_bit(xhci, port, PORT_PLC);
+ if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) &&
+ (portsc & PORT_CSC) && !(portsc & PORT_CONNECT))
+ xhci_cavium_reset_phy_quirk(xhci);
+ }
cleanup:
/* Update event ring dequeue pointer before dropping the lock */
goto cleanup;
case COMP_RING_UNDERRUN:
case COMP_RING_OVERRUN:
+ case COMP_STOPPED_LENGTH_INVALID:
goto cleanup;
default:
xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
usb_remove_hcd(xhci->shared_hcd);
usb_put_hcd(xhci->shared_hcd);
+ xhci->shared_hcd = NULL;
usb_remove_hcd(tegra->hcd);
usb_put_hcd(tegra->hcd);
/* Only halt host and free memory after both hcds are removed */
if (!usb_hcd_is_primary_hcd(hcd)) {
- /* usb core will free this hcd shortly, unset pointer */
- xhci->shared_hcd = NULL;
mutex_unlock(&xhci->mutex);
return;
}
* It can take up to 20 ms to transition from RExit to U0 on the
* Intel Lynx Point LP xHCI host.
*/
-#define XHCI_MAX_REXIT_TIMEOUT (20 * 1000)
+#define XHCI_MAX_REXIT_TIMEOUT_MS 20
static inline unsigned int hcd_index(struct usb_hcd *hcd)
{
#define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31)
#define XHCI_ZERO_64B_REGS BIT_ULL(32)
#define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33)
+#define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)
unsigned int num_active_eps;
unsigned int limit_active_eps;
{ APPLEDISPLAY_DEVICE(0x9219) },
{ APPLEDISPLAY_DEVICE(0x921c) },
{ APPLEDISPLAY_DEVICE(0x921d) },
+ { APPLEDISPLAY_DEVICE(0x9222) },
{ APPLEDISPLAY_DEVICE(0x9236) },
/* Terminating entry */
if (fc->ac.error < 0)
return;
- d_drop(new_dentry);
-
inode = afs_iget(fc->vnode->vfs_inode.i_sb, fc->key,
newfid, newstatus, newcb, fc->cbi);
if (IS_ERR(inode)) {
vnode = AFS_FS_I(inode);
set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
afs_vnode_commit_status(fc, vnode, 0);
- d_add(new_dentry, inode);
+ d_instantiate(new_dentry, inode);
}
/*
afs_io_error(call, afs_io_error_fs_probe_fail);
goto out;
case -ECONNRESET: /* Responded, but call expired. */
+ case -ERFKILL:
+ case -EADDRNOTAVAIL:
case -ENETUNREACH:
case -EHOSTUNREACH:
+ case -EHOSTDOWN:
case -ECONNREFUSED:
case -ETIMEDOUT:
case -ETIME:
static int afs_do_probe_fileserver(struct afs_net *net,
struct afs_server *server,
struct key *key,
- unsigned int server_index)
+ unsigned int server_index,
+ struct afs_error *_e)
{
struct afs_addr_cursor ac = {
.index = 0,
};
- int ret;
+ bool in_progress = false;
+ int err;
_enter("%pU", &server->uuid);
server->probe.rtt = UINT_MAX;
for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) {
- ret = afs_fs_get_capabilities(net, server, &ac, key, server_index,
+ err = afs_fs_get_capabilities(net, server, &ac, key, server_index,
true);
- if (ret != -EINPROGRESS) {
- afs_fs_probe_done(server);
- return ret;
- }
+ if (err == -EINPROGRESS)
+ in_progress = true;
+ else
+ afs_prioritise_error(_e, err, ac.abort_code);
}
- return 0;
+ if (!in_progress)
+ afs_fs_probe_done(server);
+ return in_progress;
}
/*
struct afs_server_list *list)
{
struct afs_server *server;
- int i, ret;
+ struct afs_error e;
+ bool in_progress = false;
+ int i;
+ e.error = 0;
+ e.responded = false;
for (i = 0; i < list->nr_servers; i++) {
server = list->servers[i].server;
if (test_bit(AFS_SERVER_FL_PROBED, &server->flags))
continue;
- if (!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags)) {
- ret = afs_do_probe_fileserver(net, server, key, i);
- if (ret)
- return ret;
- }
+ if (!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags) &&
+ afs_do_probe_fileserver(net, server, key, i, &e))
+ in_progress = true;
}
- return 0;
+ return in_progress ? 0 : e.error;
}
/*
int afs_validate(struct afs_vnode *vnode, struct key *key)
{
time64_t now = ktime_get_real_seconds();
- bool valid = false;
+ bool valid;
int ret;
_enter("{v={%llx:%llu} fl=%lx},%x",
vnode->cb_v_break = vnode->volume->cb_v_break;
valid = false;
} else if (vnode->status.type == AFS_FTYPE_DIR &&
- test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) &&
- vnode->cb_expires_at - 10 > now) {
- valid = true;
- } else if (!test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) &&
- vnode->cb_expires_at - 10 > now) {
+ (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) ||
+ vnode->cb_expires_at - 10 <= now)) {
+ valid = false;
+ } else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) ||
+ vnode->cb_expires_at - 10 <= now) {
+ valid = false;
+ } else {
valid = true;
}
} else if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) {
valid = true;
+ } else {
+ vnode->cb_s_break = vnode->cb_interest->server->cb_s_break;
+ vnode->cb_v_break = vnode->volume->cb_v_break;
+ valid = false;
}
read_sequnlock_excl(&vnode->cb_lock);
unsigned mtu; /* MTU of interface */
};
+/*
+ * Error prioritisation and accumulation.
+ */
+struct afs_error {
+ short error; /* Accumulated error */
+ bool responded; /* T if server responded */
+};
+
/*
* Cursor for iterating over a server's address list.
*/
* misc.c
*/
extern int afs_abort_to_error(u32);
+extern void afs_prioritise_error(struct afs_error *, int, u32);
/*
* mntpt.c
default: return -EREMOTEIO;
}
}
+
+/*
+ * Select the error to report from a set of errors.
+ */
+void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)
+{
+ switch (error) {
+ case 0:
+ return;
+ default:
+ if (e->error == -ETIMEDOUT ||
+ e->error == -ETIME)
+ return;
+ case -ETIMEDOUT:
+ case -ETIME:
+ if (e->error == -ENOMEM ||
+ e->error == -ENONET)
+ return;
+ case -ENOMEM:
+ case -ENONET:
+ if (e->error == -ERFKILL)
+ return;
+ case -ERFKILL:
+ if (e->error == -EADDRNOTAVAIL)
+ return;
+ case -EADDRNOTAVAIL:
+ if (e->error == -ENETUNREACH)
+ return;
+ case -ENETUNREACH:
+ if (e->error == -EHOSTUNREACH)
+ return;
+ case -EHOSTUNREACH:
+ if (e->error == -EHOSTDOWN)
+ return;
+ case -EHOSTDOWN:
+ if (e->error == -ECONNREFUSED)
+ return;
+ case -ECONNREFUSED:
+ if (e->error == -ECONNRESET)
+ return;
+ case -ECONNRESET: /* Responded, but call expired. */
+ if (e->responded)
+ return;
+ e->error = error;
+ return;
+
+ case -ECONNABORTED:
+ e->responded = true;
+ e->error = afs_abort_to_error(abort_code);
+ return;
+ }
+}
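The switch above is a deliberate fall-through ladder: a freshly reported error only replaces the accumulated one when nothing more network-specific has already been recorded, and an abort from a server that actually responded outranks every transport error. A minimal illustration of the resulting precedence, assuming the struct afs_error defined earlier (the fed-in error values are made up):

	struct afs_error e = { .error = 0, .responded = false };

	afs_prioritise_error(&e, -ETIMEDOUT, 0);    /* recorded: nothing seen yet */
	afs_prioritise_error(&e, -ECONNREFUSED, 0); /* replaces the timeout */
	afs_prioritise_error(&e, -ENOMEM, 0);       /* dropped: outranked */
	afs_prioritise_error(&e, -ECONNABORTED, 1); /* wins: the server responded,
						     * so the (made-up) abort code
						     * is translated to an errno */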
struct afs_addr_list *alist;
struct afs_server *server;
struct afs_vnode *vnode = fc->vnode;
- u32 rtt, abort_code;
+ struct afs_error e;
+ u32 rtt;
int error = fc->ac.error, i;
_enter("%lx[%d],%lx[%d],%d,%d",
if (fc->error != -EDESTADDRREQ)
goto iterate_address;
/* Fall through */
+ case -ERFKILL:
+ case -EADDRNOTAVAIL:
case -ENETUNREACH:
case -EHOSTUNREACH:
+ case -EHOSTDOWN:
case -ECONNREFUSED:
_debug("no conn");
fc->error = error;
if (fc->flags & AFS_FS_CURSOR_VBUSY)
goto restart_from_beginning;
- abort_code = 0;
- error = -EDESTADDRREQ;
+ e.error = -EDESTADDRREQ;
+ e.responded = false;
for (i = 0; i < fc->server_list->nr_servers; i++) {
struct afs_server *s = fc->server_list->servers[i].server;
- int probe_error = READ_ONCE(s->probe.error);
- switch (probe_error) {
- case 0:
- continue;
- default:
- if (error == -ETIMEDOUT ||
- error == -ETIME)
- continue;
- case -ETIMEDOUT:
- case -ETIME:
- if (error == -ENOMEM ||
- error == -ENONET)
- continue;
- case -ENOMEM:
- case -ENONET:
- if (error == -ENETUNREACH)
- continue;
- case -ENETUNREACH:
- if (error == -EHOSTUNREACH)
- continue;
- case -EHOSTUNREACH:
- if (error == -ECONNREFUSED)
- continue;
- case -ECONNREFUSED:
- if (error == -ECONNRESET)
- continue;
- case -ECONNRESET: /* Responded, but call expired. */
- if (error == -ECONNABORTED)
- continue;
- case -ECONNABORTED:
- abort_code = s->probe.abort_code;
- error = probe_error;
- continue;
- }
+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),
+ s->probe.abort_code);
}
- if (error == -ECONNABORTED)
- error = afs_abort_to_error(abort_code);
-
failed_set_error:
fc->error = error;
failed:
_leave(" = f [abort]");
return false;
+ case -ERFKILL:
+ case -EADDRNOTAVAIL:
case -ENETUNREACH:
case -EHOSTUNREACH:
+ case -EHOSTDOWN:
case -ECONNREFUSED:
case -ETIMEDOUT:
case -ETIME:
struct afs_net *net = afs_v2net(fc->vnode);
if (fc->error == -EDESTADDRREQ ||
+ fc->error == -EADDRNOTAVAIL ||
fc->error == -ENETUNREACH ||
fc->error == -EHOSTUNREACH)
afs_dump_edestaddrreq(fc);
{
signed long rtt2, timeout;
long ret;
+ bool stalled = false;
u64 rtt;
u32 life, last_life;
life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);
if (timeout == 0 &&
- life == last_life && signal_pending(current))
+ life == last_life && signal_pending(current)) {
+ if (stalled)
break;
+ __set_current_state(TASK_RUNNING);
+ rxrpc_kernel_probe_life(call->net->socket, call->rxcall);
+ timeout = rtt2;
+ stalled = true;
+ continue;
+ }
if (life != last_life) {
timeout = rtt2;
last_life = life;
+ stalled = false;
}
timeout = schedule_timeout(timeout);
afs_io_error(call, afs_io_error_vl_probe_fail);
goto out;
case -ECONNRESET: /* Responded, but call expired. */
+ case -ERFKILL:
+ case -EADDRNOTAVAIL:
case -ENETUNREACH:
case -EHOSTUNREACH:
+ case -EHOSTDOWN:
case -ECONNREFUSED:
case -ETIMEDOUT:
case -ETIME:
* Probe all of a vlserver's addresses to find out the best route and to
* query its capabilities.
*/
-static int afs_do_probe_vlserver(struct afs_net *net,
- struct afs_vlserver *server,
- struct key *key,
- unsigned int server_index)
+static bool afs_do_probe_vlserver(struct afs_net *net,
+ struct afs_vlserver *server,
+ struct key *key,
+ unsigned int server_index,
+ struct afs_error *_e)
{
struct afs_addr_cursor ac = {
.index = 0,
};
- int ret;
+ bool in_progress = false;
+ int err;
_enter("%s", server->name);
server->probe.rtt = UINT_MAX;
for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) {
- ret = afs_vl_get_capabilities(net, &ac, key, server,
+ err = afs_vl_get_capabilities(net, &ac, key, server,
server_index, true);
- if (ret != -EINPROGRESS) {
- afs_vl_probe_done(server);
- return ret;
- }
+ if (err == -EINPROGRESS)
+ in_progress = true;
+ else
+ afs_prioritise_error(_e, err, ac.abort_code);
}
- return 0;
+ if (!in_progress)
+ afs_vl_probe_done(server);
+ return in_progress;
}
/*
struct afs_vlserver_list *vllist)
{
struct afs_vlserver *server;
- int i, ret;
+ struct afs_error e;
+ bool in_progress = false;
+ int i;
+ e.error = 0;
+ e.responded = false;
for (i = 0; i < vllist->nr_servers; i++) {
server = vllist->servers[i].server;
if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags))
continue;
- if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags)) {
- ret = afs_do_probe_vlserver(net, server, key, i);
- if (ret)
- return ret;
- }
+ if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags) &&
+ afs_do_probe_vlserver(net, server, key, i, &e))
+ in_progress = true;
}
- return 0;
+ return in_progress ? 0 : e.error;
}
/*
{
struct afs_addr_list *alist;
struct afs_vlserver *vlserver;
+ struct afs_error e;
u32 rtt;
- int error = vc->ac.error, abort_code, i;
+ int error = vc->ac.error, i;
_enter("%lx[%d],%lx[%d],%d,%d",
vc->untried, vc->index,
goto failed;
}
+ case -ERFKILL:
+ case -EADDRNOTAVAIL:
case -ENETUNREACH:
case -EHOSTUNREACH:
+ case -EHOSTDOWN:
case -ECONNREFUSED:
case -ETIMEDOUT:
case -ETIME:
if (vc->flags & AFS_VL_CURSOR_RETRY)
goto restart_from_beginning;
- abort_code = 0;
- error = -EDESTADDRREQ;
+ e.error = -EDESTADDRREQ;
+ e.responded = false;
for (i = 0; i < vc->server_list->nr_servers; i++) {
struct afs_vlserver *s = vc->server_list->servers[i].server;
- int probe_error = READ_ONCE(s->probe.error);
- switch (probe_error) {
- case 0:
- continue;
- default:
- if (error == -ETIMEDOUT ||
- error == -ETIME)
- continue;
- case -ETIMEDOUT:
- case -ETIME:
- if (error == -ENOMEM ||
- error == -ENONET)
- continue;
- case -ENOMEM:
- case -ENONET:
- if (error == -ENETUNREACH)
- continue;
- case -ENETUNREACH:
- if (error == -EHOSTUNREACH)
- continue;
- case -EHOSTUNREACH:
- if (error == -ECONNREFUSED)
- continue;
- case -ECONNREFUSED:
- if (error == -ECONNRESET)
- continue;
- case -ECONNRESET: /* Responded, but call expired. */
- if (error == -ECONNABORTED)
- continue;
- case -ECONNABORTED:
- abort_code = s->probe.abort_code;
- error = probe_error;
- continue;
- }
+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),
+ s->probe.abort_code);
}
- if (error == -ECONNABORTED)
- error = afs_abort_to_error(abort_code);
-
failed_set_error:
vc->error = error;
failed:
struct afs_net *net = vc->cell->net;
if (vc->error == -EDESTADDRREQ ||
+ vc->error == -EADDRNOTAVAIL ||
vc->error == -ENETUNREACH ||
vc->error == -EHOSTUNREACH)
afs_vl_dump_edestaddrreq(vc);
ret = ioprio_check_cap(iocb->aio_reqprio);
if (ret) {
pr_debug("aio ioprio check cap error: %d\n", ret);
+ fput(req->ki_filp);
return ret;
}
int mirror_num = 0;
int failed_mirror = 0;
- clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
while (1) {
+ clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,
mirror_num);
if (!ret) {
break;
}
- /*
- * This buffer's crc is fine, but its contents are corrupted, so
- * there is no reason to read the other copies, they won't be
- * any less wrong.
- */
- if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) ||
- ret == -EUCLEAN)
- break;
-
num_copies = btrfs_num_copies(fs_info,
eb->start, eb->len);
if (num_copies == 1)
atomic_inc(&root->log_batch);
+ /*
+ * Before we acquired the inode's lock, someone may have dirtied more
+ * pages in the target range. We need to make sure that writeback for
+ * any such pages does not start while we are logging the inode, because
+ * if it does, any of the following might happen when we are not doing a
+ * full inode sync:
+ *
+ * 1) We log an extent after its writeback finishes but before its
+ * checksums are added to the csum tree, leading to -EIO errors
+ * when attempting to read the extent after a log replay.
+ *
+ * 2) We can end up logging an extent before its writeback finishes.
+ * Therefore after the log replay we will have a file extent item
+ * pointing to an unwritten extent (and no data checksums as well).
+ *
+ * So trigger writeback for any eventual new dirty pages and then we
+ * wait for all ordered extents to complete below.
+ */
+ ret = start_ordered_ops(inode, start, end);
+ if (ret) {
+ inode_unlock(inode);
+ goto out;
+ }
+
/*
* We have to do this here to avoid the priority inversion of waiting on
* IO of a lower priority task while holding a transaction open.
int i;
u64 *i_qgroups;
struct btrfs_fs_info *fs_info = trans->fs_info;
- struct btrfs_root *quota_root = fs_info->quota_root;
+ struct btrfs_root *quota_root;
struct btrfs_qgroup *srcgroup;
struct btrfs_qgroup *dstgroup;
u32 level_size = 0;
if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
goto out;
+ quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
goto out;
restart:
if (update_backref_cache(trans, &rc->backref_cache)) {
btrfs_end_transaction(trans);
+ trans = NULL;
continue;
}
kfree(m);
}
-static void tail_append_pending_moves(struct pending_dir_move *moves,
+static void tail_append_pending_moves(struct send_ctx *sctx,
+ struct pending_dir_move *moves,
struct list_head *stack)
{
if (list_empty(&moves->list)) {
list_add_tail(&moves->list, stack);
list_splice_tail(&list, stack);
}
+ if (!RB_EMPTY_NODE(&moves->node)) {
+ rb_erase(&moves->node, &sctx->pending_dir_moves);
+ RB_CLEAR_NODE(&moves->node);
+ }
}
static int apply_children_dir_moves(struct send_ctx *sctx)
return 0;
INIT_LIST_HEAD(&stack);
- tail_append_pending_moves(pm, &stack);
+ tail_append_pending_moves(sctx, pm, &stack);
while (!list_empty(&stack)) {
pm = list_first_entry(&stack, struct pending_dir_move, list);
goto out;
pm = get_pending_dir_moves(sctx, parent_ino);
if (pm)
- tail_append_pending_moves(pm, &stack);
+ tail_append_pending_moves(sctx, pm, &stack);
}
return 0;
vol = memdup_user((void __user *)arg, sizeof(*vol));
if (IS_ERR(vol))
return PTR_ERR(vol);
+ vol->name[BTRFS_PATH_NAME_MAX] = '\0';
switch (cmd) {
case BTRFS_IOC_SCAN_DEV:
return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT));
}
-static void *dax_make_page_entry(struct page *page)
-{
- pfn_t pfn = page_to_pfn_t(page);
- return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0);
-}
-
static bool dax_is_locked(void *entry)
{
return xa_to_value(entry) & DAX_LOCKED;
return 0;
}
-static int dax_is_pmd_entry(void *entry)
+static unsigned long dax_is_pmd_entry(void *entry)
{
return xa_to_value(entry) & DAX_PMD;
}
-static int dax_is_pte_entry(void *entry)
+static bool dax_is_pte_entry(void *entry)
{
return !(xa_to_value(entry) & DAX_PMD);
}
ewait.wait.func = wake_exceptional_entry_func;
for (;;) {
- entry = xas_load(xas);
- if (!entry || xa_is_internal(entry) ||
- WARN_ON_ONCE(!xa_is_value(entry)) ||
+ entry = xas_find_conflict(xas);
+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
!dax_is_locked(entry))
return entry;
{
void *old;
+ BUG_ON(dax_is_locked(entry));
xas_reset(xas);
xas_lock_irq(xas);
old = xas_store(xas, entry);
return NULL;
}
+/*
+ * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page
+ * @page: The page whose entry we want to lock
+ *
+ * Context: Process context.
+ * Return: %true if the entry was locked or does not need to be locked.
+ */
bool dax_lock_mapping_entry(struct page *page)
{
XA_STATE(xas, NULL, 0);
void *entry;
+ bool locked;
+ /* Ensure page->mapping isn't freed while we look at it */
+ rcu_read_lock();
for (;;) {
struct address_space *mapping = READ_ONCE(page->mapping);
+ locked = false;
if (!dax_mapping(mapping))
- return false;
+ break;
/*
* In the device-dax case there's no need to lock, a
* otherwise we would not have a valid pfn_to_page()
* translation.
*/
+ locked = true;
if (S_ISCHR(mapping->host->i_mode))
- return true;
+ break;
xas.xa = &mapping->i_pages;
xas_lock_irq(&xas);
xas_set(&xas, page->index);
entry = xas_load(&xas);
if (dax_is_locked(entry)) {
+ rcu_read_unlock();
entry = get_unlocked_entry(&xas);
- /* Did the page move while we slept? */
- if (dax_to_pfn(entry) != page_to_pfn(page)) {
- xas_unlock_irq(&xas);
- continue;
- }
+ xas_unlock_irq(&xas);
+ put_unlocked_entry(&xas, entry);
+ rcu_read_lock();
+ continue;
}
dax_lock_entry(&xas, entry);
xas_unlock_irq(&xas);
- return true;
+ break;
}
+ rcu_read_unlock();
+ return locked;
}
void dax_unlock_mapping_entry(struct page *page)
{
struct address_space *mapping = page->mapping;
XA_STATE(xas, &mapping->i_pages, page->index);
+ void *entry;
if (S_ISCHR(mapping->host->i_mode))
return;
- dax_unlock_entry(&xas, dax_make_page_entry(page));
+ rcu_read_lock();
+ entry = xas_load(&xas);
+ rcu_read_unlock();
+ entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));
+ dax_unlock_entry(&xas, entry);
}
/*
retry:
xas_lock_irq(xas);
entry = get_unlocked_entry(xas);
- if (xa_is_internal(entry))
- goto fallback;
if (entry) {
- if (WARN_ON_ONCE(!xa_is_value(entry))) {
+ if (!xa_is_value(entry)) {
xas_set_err(xas, EIO);
goto out_unlock;
}
/* Did we race with someone splitting entry or so? */
if (!entry ||
(order == 0 && !dax_is_pte_entry(entry)) ||
- (order == PMD_ORDER && (xa_is_internal(entry) ||
- !dax_is_pmd_entry(entry)))) {
+ (order == PMD_ORDER && !dax_is_pmd_entry(entry))) {
put_unlocked_entry(&xas, entry);
xas_unlock_irq(&xas);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
#include <linux/oom.h>
#include <linux/compat.h>
#include <linux/vmalloc.h>
+#include <linux/freezer.h>
#include <linux/uaccess.h>
#include <asm/mmu_context.h>
while (sig->notify_count) {
__set_current_state(TASK_KILLABLE);
spin_unlock_irq(lock);
- schedule();
+ freezable_schedule();
if (unlikely(__fatal_signal_pending(tsk)))
goto killed;
spin_lock_irq(lock);
__set_current_state(TASK_KILLABLE);
write_unlock_irq(&tasklist_lock);
cgroup_threadgroup_change_end(tsk);
- schedule();
+ freezable_schedule();
if (unlikely(__fatal_signal_pending(tsk)))
goto killed;
}
struct dentry *parent = dget_parent(dentry);
dput(dentry);
- if (IS_ROOT(dentry)) {
+ if (dentry == parent) {
dput(parent);
return false;
}
tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
if (IS_ERR(tmp)) {
dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp));
+ err = PTR_ERR(tmp);
goto out_err;
}
if (tmp != dentry) {
if (sb->s_magic != EXT2_SUPER_MAGIC)
goto cantfind_ext2;
+ opts.s_mount_opt = 0;
/* Set defaults before we parse the mount options */
def_mount_opts = le32_to_cpu(es->s_default_mount_opts);
if (def_mount_opts & EXT2_DEFM_DEBUG)
}
cleanup:
- brelse(bh);
if (!(bh && header == HDR(bh)))
kfree(header);
+ brelse(bh);
up_write(&EXT2_I(inode)->xattr_sem);
return error;
static void fuse_drop_waiting(struct fuse_conn *fc)
{
- if (fc->connected) {
- atomic_dec(&fc->num_waiting);
- } else if (atomic_dec_and_test(&fc->num_waiting)) {
+ /*
+ * lockless check of fc->connected is okay, because atomic_dec_and_test()
+ * provides a memory barrier matched with the one in fuse_wait_aborted()
+ * to ensure no wake-up is missed.
+ */
+ if (atomic_dec_and_test(&fc->num_waiting) &&
+ !READ_ONCE(fc->connected)) {
/* wake up aborters */
wake_up_all(&fc->blocked_waitq);
}
req->in.args[1].size = total_len;
err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique);
- if (err)
+ if (err) {
fuse_retrieve_end(fc, req);
+ fuse_put_request(fc, req);
+ }
return err;
}
void fuse_wait_aborted(struct fuse_conn *fc)
{
+ /* matches implicit memory barrier in fuse_drop_waiting() */
+ smp_mb();
wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);
}
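The pairing the two comments describe, reduced to a self-contained sketch (hypothetical names, not the FUSE code itself): atomic_dec_and_test() implies a full barrier on the waker side, and the waiter's explicit smp_mb() orders its write of connected before it samples the counter, so at least one side must observe the other's update. Clearing connected is folded into the waiter here for brevity.

	static atomic_t nr_waiting;
	static int connected = 1;
	static DECLARE_WAIT_QUEUE_HEAD(blocked_waitq);

	static void drop_waiting(void)		/* waker side */
	{
		/* atomic_dec_and_test() implies a full memory barrier */
		if (atomic_dec_and_test(&nr_waiting) && !READ_ONCE(connected))
			wake_up_all(&blocked_waitq);
	}

	static void wait_aborted(void)		/* waiter side */
	{
		WRITE_ONCE(connected, 0);
		smp_mb();	/* pairs with the decrement above */
		wait_event(blocked_waitq, atomic_read(&nr_waiting) == 0);
	}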
}
if (io->async) {
+ bool blocking = io->blocking;
+
fuse_aio_complete(io, ret < 0 ? ret : 0, -1);
/* we have a non-extending, async request, so return */
- if (!io->blocking)
+ if (!blocking)
return -EIOCBQUEUED;
wait_for_completion(&wait);
ret = gfs2_meta_inode_buffer(ip, &dibh);
if (ret)
goto unlock;
- iomap->private = dibh;
+ mp->mp_bh[0] = dibh;
if (gfs2_is_stuffed(ip)) {
if (flags & IOMAP_WRITE) {
len = lblock_stop - lblock + 1;
iomap->length = len << inode->i_blkbits;
- get_bh(dibh);
- mp->mp_bh[0] = dibh;
-
height = ip->i_height;
while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height])
height++;
iomap->bdev = inode->i_sb->s_bdev;
unlock:
up_read(&ip->i_rw_mutex);
- if (ret && dibh)
- brelse(dibh);
return ret;
do_alloc:
static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
loff_t length, unsigned flags,
- struct iomap *iomap)
+ struct iomap *iomap,
+ struct metapath *mp)
{
- struct metapath mp = { .mp_aheight = 1, };
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_sbd *sdp = GFS2_SB(inode);
unsigned int data_blocks = 0, ind_blocks = 0, rblocks;
unstuff = gfs2_is_stuffed(ip) &&
pos + length > gfs2_max_stuffed_size(ip);
- ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);
+ ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp);
if (ret)
- goto out_release;
+ goto out_unlock;
alloc_required = unstuff || iomap->type == IOMAP_HOLE;
ret = gfs2_quota_lock_check(ip, &ap);
if (ret)
- goto out_release;
+ goto out_unlock;
ret = gfs2_inplace_reserve(ip, &ap);
if (ret)
ret = gfs2_unstuff_dinode(ip, NULL);
if (ret)
goto out_trans_end;
- release_metapath(&mp);
- brelse(iomap->private);
- iomap->private = NULL;
+ release_metapath(mp);
ret = gfs2_iomap_get(inode, iomap->offset, iomap->length,
- flags, iomap, &mp);
+ flags, iomap, mp);
if (ret)
goto out_trans_end;
}
if (iomap->type == IOMAP_HOLE) {
- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);
+ ret = gfs2_iomap_alloc(inode, iomap, flags, mp);
if (ret) {
gfs2_trans_end(sdp);
gfs2_inplace_release(ip);
goto out_qunlock;
}
}
- release_metapath(&mp);
if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))
iomap->page_done = gfs2_iomap_journaled_page_done;
return 0;
out_qunlock:
if (alloc_required)
gfs2_quota_unlock(ip);
-out_release:
- if (iomap->private)
- brelse(iomap->private);
- release_metapath(&mp);
+out_unlock:
gfs2_write_unlock(inode);
return ret;
}
trace_gfs2_iomap_start(ip, pos, length, flags);
if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) {
- ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap);
+ ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);
} else {
ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);
- release_metapath(&mp);
+
/*
* Silently fall back to buffered I/O for stuffed files or if
* we've hit a hole (see gfs2_file_direct_write).
iomap->type != IOMAP_MAPPED)
ret = -ENOTBLK;
}
+ if (!ret) {
+ get_bh(mp.mp_bh[0]);
+ iomap->private = mp.mp_bh[0];
+ }
+ release_metapath(&mp);
trace_gfs2_iomap_end(ip, iomap, ret);
return ret;
}
if (ret < 0)
goto out;
- /* issue read-ahead on metadata */
- if (mp.mp_aheight > 1) {
- for (; ret > 1; ret--) {
- metapointer_range(&mp, mp.mp_aheight - ret,
+ /* On the first pass, issue read-ahead on metadata. */
+ if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) {
+ unsigned int height = mp.mp_aheight - 1;
+
+ /* No read-ahead for data blocks. */
+ if (mp.mp_aheight - 1 == strip_h)
+ height--;
+
+ for (; height >= mp.mp_aheight - ret; height--) {
+ metapointer_range(&mp, height,
start_list, start_aligned,
end_list, end_aligned,
&start, &end);
if (gl) {
glock_clear_object(gl, rgd);
+ gfs2_rgrp_brelse(rgd);
gfs2_glock_put(gl);
}
* @rgd: the struct gfs2_rgrpd describing the RG to read in
*
* Read in all of a Resource Group's header and bitmap blocks.
- * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps.
+ * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps.
*
* Returns: errno
*/
return LRU_REMOVED;
}
- /* recently referenced inodes get one more pass */
- if (inode->i_state & I_REFERENCED) {
+ /*
+ * Recently referenced inodes and inodes with many attached pages
+ * get one more pass.
+ */
+ if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {
inode->i_state &= ~I_REFERENCED;
spin_unlock(&inode->i_lock);
return LRU_ROTATE;
iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
{
+ loff_t orig_pos = *pos;
+ loff_t isize = i_size_read(inode);
unsigned block_bits = inode->i_blkbits;
unsigned block_size = (1 << block_bits);
unsigned poff = offset_in_page(*pos);
unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
unsigned first = poff >> block_bits;
unsigned last = (poff + plen - 1) >> block_bits;
- unsigned end = offset_in_page(i_size_read(inode)) >> block_bits;
/*
* If the block size is smaller than the page size we need to check the
* handle both halves separately so that we properly zero data in the
* page cache for blocks that are entirely outside of i_size.
*/
- if (first <= end && last > end)
- plen -= (last - end) * block_size;
+ if (orig_pos <= isize && orig_pos + length > isize) {
+ unsigned end = offset_in_page(isize - 1) >> block_bits;
+
+ if (first <= end && last > end)
+ plen -= (last - end) * block_size;
+ }
*offp = poff;
*lenp = plen;
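A worked example of the case the old computation got wrong, assuming 4k pages and 1k blocks (block_bits = 10): with isize = 8192, exactly page-aligned, and a read of the second page (*pos = 4096, length = 4096), the old code computed end = offset_in_page(8192) >> 10 = 0 and trimmed plen from 4096 down to 1024 even though every block lies inside i_size. The new code only trims when the request actually straddles i_size, i.e. orig_pos <= isize and orig_pos + length > isize; here 4096 + 4096 is not greater than 8192, so the full page is read.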
struct bio *bio;
bool need_zeroout = false;
bool use_fua = false;
- int nr_pages, ret;
+ int nr_pages, ret = 0;
size_t copied = 0;
if ((pos | length | align) & ((1 << blkbits) - 1))
if (iomap->flags & IOMAP_F_NEW) {
need_zeroout = true;
- } else {
+ } else if (iomap->type == IOMAP_MAPPED) {
/*
- * Use a FUA write if we need datasync semantics, this
- * is a pure data IO that doesn't require any metadata
- * updates and the underlying device supports FUA. This
- * allows us to avoid cache flushes on IO completion.
+ * Use a FUA write if we need datasync semantics, this is a pure
+ * data IO that doesn't require any metadata updates (including
+ * after IO completion such as unwritten extent conversion) and
+ * the underlying device supports FUA. This allows us to avoid
+ * cache flushes on IO completion.
*/
if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) &&
(dio->flags & IOMAP_DIO_WRITE_FUA) &&
ret = bio_iov_iter_get_pages(bio, &iter);
if (unlikely(ret)) {
+ /*
+ * We have to stop part way through an IO. We must fall
+ * through to the sub-block tail zeroing here, otherwise
+ * this short IO may expose stale data in the tail of
+ * the block we haven't written data to.
+ */
bio_put(bio);
- return copied ? copied : ret;
+ goto zero_tail;
}
n = bio->bi_iter.bi_size;
dio->submit.cookie = submit_bio(bio);
} while (nr_pages);
- if (need_zeroout) {
+ /*
+ * We need to zeroout the tail of a sub-block write if the extent type
+ * requires zeroing or the write extends beyond EOF. If we don't zero
+ * the block tail in the latter case, we can expose stale data via mmap
+ * reads of the EOF block.
+ */
+zero_tail:
+ if (need_zeroout ||
+ ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) {
/* zero out from the end of the write to the end of the block */
pad = pos & (fs_block_size - 1);
if (pad)
iomap_dio_zero(dio, iomap, pos, fs_block_size - pad);
}
- return copied;
+ return copied ? copied : ret;
}
static loff_t
dio->wait_for_completion = true;
ret = 0;
}
+
+ /*
+ * Splicing to pipes can fail on a full pipe. We have to
+ * swallow this to make it look like a short IO
+ * otherwise the higher splice layers will completely
+ * mishandle the error and stop moving data.
+ */
+ if (ret == -EFAULT)
+ ret = 0;
break;
}
pos += ret;
hlist_for_each_entry(mp, chain, m_hash) {
if (mp->m_dentry == dentry) {
- /* might be worth a WARN_ON() */
- if (d_unlinked(dentry))
- return ERR_PTR(-ENOENT);
mp->m_count++;
return mp;
}
int ret;
if (d_mountpoint(dentry)) {
+ /* might be worth a WARN_ON() */
+ if (d_unlinked(dentry))
+ return ERR_PTR(-ENOENT);
mountpoint:
read_seqlock_excl(&mount_lock);
mp = lookup_mountpoint(dentry);
out_iput:
rcu_read_unlock();
trace_nfs4_cb_getattr(cps->clp, &args->fh, inode, -ntohl(res->status));
- iput(inode);
+ nfs_iput_and_deactive(inode);
out:
dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));
return res->status;
}
trace_nfs4_cb_recall(cps->clp, &args->fh, inode,
&args->stateid, -ntohl(res));
- iput(inode);
+ nfs_iput_and_deactive(inode);
out:
dprintk("%s: exit with status = %d\n", __func__, ntohl(res));
return res;
{
struct cb_offloadargs *args = data;
struct nfs_server *server;
- struct nfs4_copy_state *copy;
+ struct nfs4_copy_state *copy, *tmp_copy;
bool found = false;
+ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
+ if (!copy)
+ return htonl(NFS4ERR_SERVERFAULT);
+
spin_lock(&cps->clp->cl_lock);
rcu_read_lock();
list_for_each_entry_rcu(server, &cps->clp->cl_superblocks,
client_link) {
- list_for_each_entry(copy, &server->ss_copies, copies) {
+ list_for_each_entry(tmp_copy, &server->ss_copies, copies) {
if (memcmp(args->coa_stateid.other,
- copy->stateid.other,
+ tmp_copy->stateid.other,
sizeof(args->coa_stateid.other)))
continue;
- nfs4_copy_cb_args(copy, args);
- complete(&copy->completion);
+ nfs4_copy_cb_args(tmp_copy, args);
+ complete(&tmp_copy->completion);
found = true;
goto out;
}
out:
rcu_read_unlock();
if (!found) {
- copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
- if (!copy) {
- spin_unlock(&cps->clp->cl_lock);
- return htonl(NFS4ERR_SERVERFAULT);
- }
memcpy(&copy->stateid, &args->coa_stateid, NFS4_STATEID_SIZE);
nfs4_copy_cb_args(copy, args);
list_add_tail(&copy->copies, &cps->clp->pending_cb_stateids);
- }
+ } else
+ kfree(copy);
spin_unlock(&cps->clp->cl_lock);
return 0;
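The reshuffle above is an instance of a common pattern: kzalloc(..., GFP_NOFS) may sleep, so it cannot run under the spinlock; a spare is allocated up front and freed again when the search under the lock turns up an existing entry. In miniature, with hypothetical names:

	struct item *new = kzalloc(sizeof(*new), GFP_NOFS);

	if (!new)
		return -ENOMEM;
	spin_lock(&lock);
	found = lookup_locked(key);		/* hypothetical lookup */
	if (!found)
		list_add_tail(&new->list, &pending);
	spin_unlock(&lock);
	if (found)
		kfree(new);			/* the spare was not needed */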
const struct nfs_fh *fhandle)
{
struct nfs_delegation *delegation;
- struct inode *res = NULL;
+ struct inode *freeme, *res = NULL;
list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
spin_lock(&delegation->lock);
if (delegation->inode != NULL &&
nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) {
- res = igrab(delegation->inode);
+ freeme = igrab(delegation->inode);
+ if (freeme && nfs_sb_active(freeme->i_sb))
+ res = freeme;
spin_unlock(&delegation->lock);
if (res != NULL)
return res;
+ if (freeme) {
+ rcu_read_unlock();
+ iput(freeme);
+ rcu_read_lock();
+ }
return ERR_PTR(-EAGAIN);
}
spin_unlock(&delegation->lock);
task))
return;
- if (ff_layout_read_prepare_common(task, hdr))
- return;
-
- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
- hdr->args.lock_context, FMODE_READ) == -EIO)
- rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+ ff_layout_read_prepare_common(task, hdr);
}
static void ff_layout_read_call_done(struct rpc_task *task, void *data)
task))
return;
- if (ff_layout_write_prepare_common(task, hdr))
- return;
-
- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
- hdr->args.lock_context, FMODE_WRITE) == -EIO)
- rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+ ff_layout_write_prepare_common(task, hdr);
}
static void ff_layout_write_call_done(struct rpc_task *task, void *data)
fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
if (fh)
hdr->args.fh = fh;
+
+ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+ goto out_failed;
+
/*
* Note that if we ever decide to split across DSes,
* then we may need to handle dense-like offsets.
if (fh)
hdr->args.fh = fh;
+ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+ goto out_failed;
+
/*
* Note that if we ever decide to split across DSes,
* then we may need to handle dense-like offsets.
unsigned int maxnum);
struct nfs_fh *
nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx);
+int
+nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg,
+ u32 mirror_idx,
+ nfs4_stateid *stateid);
struct nfs4_pnfs_ds *
nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
return fh;
}
+int
+nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg,
+ u32 mirror_idx,
+ nfs4_stateid *stateid)
+{
+ struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx);
+
+ if (!ff_layout_mirror_valid(lseg, mirror, false)) {
+ pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n",
+ __func__, mirror_idx);
+ goto out;
+ }
+
+ nfs4_stateid_copy(stateid, &mirror->stateid);
+ return 1;
+out:
+ return 0;
+}
+
/**
* nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call
* @lseg: the layout segment we're operating on
struct file *dst,
nfs4_stateid *src_stateid)
{
- struct nfs4_copy_state *copy;
+ struct nfs4_copy_state *copy, *tmp_copy;
int status = NFS4_OK;
bool found_pending = false;
struct nfs_open_context *ctx = nfs_file_open_context(dst);
+ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
+ if (!copy)
+ return -ENOMEM;
+
spin_lock(&server->nfs_client->cl_lock);
- list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids,
+ list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids,
copies) {
- if (memcmp(&res->write_res.stateid, &copy->stateid,
+ if (memcmp(&res->write_res.stateid, &tmp_copy->stateid,
NFS4_STATEID_SIZE))
continue;
found_pending = true;
- list_del(&copy->copies);
+ list_del(&tmp_copy->copies);
break;
}
if (found_pending) {
spin_unlock(&server->nfs_client->cl_lock);
+ kfree(copy);
+ copy = tmp_copy;
goto out;
}
- copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
- if (!copy) {
- spin_unlock(&server->nfs_client->cl_lock);
- return -ENOMEM;
- }
memcpy(&copy->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE);
init_completion(&copy->completion);
copy->parent_state = ctx->state;
NFS4CLNT_MOVED,
NFS4CLNT_LEASE_MOVED,
NFS4CLNT_DELEGATION_EXPIRED,
+ NFS4CLNT_RUN_MANAGER,
+ NFS4CLNT_DELEGRETURN_RUNNING,
};
#define NFS4_RENEW_TIMEOUT 0x01
struct task_struct *task;
char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
+ set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
return;
__module_get(THIS_MODULE);
/* Ensure exclusive access to NFSv4 state */
do {
+ clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) {
section = "purge state";
status = nfs4_purge_lease(clp);
}
nfs4_end_drain_session(clp);
- if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
- nfs_client_return_marked_delegations(clp);
- continue;
+ nfs4_clear_state_manager_bit(clp);
+
+ if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) {
+ if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
+ nfs_client_return_marked_delegations(clp);
+ set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
+ }
+ clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state);
}
- nfs4_clear_state_manager_bit(clp);
/* Did we race with an attempt to give us more work? */
- if (clp->cl_state == 0)
- break;
+ if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
+ return;
if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
- break;
- } while (refcount_read(&clp->cl_count) > 1);
- return;
+ return;
+ } while (refcount_read(&clp->cl_count) > 1 && !signalled());
+ goto out_drain;
+
out_error:
if (strlen(section))
section_sep = ": ";
" with error %d\n", section_sep, section,
clp->cl_hostname, -status);
ssleep(1);
+out_drain:
nfs4_end_drain_session(clp);
nfs4_clear_state_manager_bit(clp);
}
{
__be32 status;
+ if (!cstate->save_fh.fh_dentry)
+ return nfserr_nofilehandle;
+
status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh,
src_stateid, RD_STATE, src, NULL);
if (status) {
return;
if (nbh == NULL) { /* blocksize == pagesize */
- xa_lock_irq(&btnc->i_pages);
- __xa_erase(&btnc->i_pages, newkey);
- xa_unlock_irq(&btnc->i_pages);
+ xa_erase_irq(&btnc->i_pages, newkey);
unlock_page(ctxt->bh->b_page);
} else
brelse(nbh);
continue;
mark = iter_info->marks[type];
/*
- * if the event is for a child and this inode doesn't care about
- * events on the child, don't send it!
+ * If the event is for a child and this mark doesn't care about
+ * events on a child, don't send it!
*/
- if (type == FSNOTIFY_OBJ_TYPE_INODE &&
- (event_mask & FS_EVENT_ON_CHILD) &&
- !(mark->mask & FS_EVENT_ON_CHILD))
+ if (event_mask & FS_EVENT_ON_CHILD &&
+ (type != FSNOTIFY_OBJ_TYPE_INODE ||
+ !(mark->mask & FS_EVENT_ON_CHILD)))
continue;
marks_mask |= mark->mask;
parent = dget_parent(dentry);
p_inode = parent->d_inode;
- if (unlikely(!fsnotify_inode_watches_children(p_inode)))
+ if (unlikely(!fsnotify_inode_watches_children(p_inode))) {
__fsnotify_update_child_dentry_flags(p_inode);
- else if (p_inode->i_fsnotify_mask & mask) {
+ } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) {
struct name_snapshot name;
/* we are notifying a parent so come up with the new mask which
sb = mnt->mnt.mnt_sb;
mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask;
}
+ /* An event "on child" is not intended for a mount/sb mark */
+ if (mask & FS_EVENT_ON_CHILD)
+ mnt_or_sb_mask = 0;
/*
* Optimization: srcu_read_lock() has a memory barrier which can
/* this io's submitter should not have unlocked this before we could */
BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));
- if (bytes > 0 && private)
- ret = ocfs2_dio_end_io_write(inode, private, offset, bytes);
+ if (bytes <= 0)
+ mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld",
+ (long long)bytes);
+ if (private) {
+ if (bytes > 0)
+ ret = ocfs2_dio_end_io_write(inode, private, offset,
+ bytes);
+ else
+ ocfs2_dio_free_write_ctx(inode, private);
+ }
ocfs2_iocb_clear_rw_locked(iocb);
##__VA_ARGS__); \
} while (0)
+#define mlog_ratelimited(mask, fmt, ...) \
+do { \
+ static DEFINE_RATELIMIT_STATE(_rs, \
+ DEFAULT_RATELIMIT_INTERVAL, \
+ DEFAULT_RATELIMIT_BURST); \
+ if (__ratelimit(&_rs)) \
+ mlog(mask, fmt, ##__VA_ARGS__); \
+} while (0)
+
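Each expansion of the macro declares its own static ratelimit state, so the throttling is per call site rather than global. A hypothetical call site (format string and argument made up):

	mlog_ratelimited(ML_ERROR, "heartbeat write timed out on region %s\n",
			 reg_name);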
#define mlog_errno(st) ({ \
int _st = (st); \
if (_st != -ERESTARTSYS && _st != -EINTR && \
cxt->pstore.data = cxt;
/*
- * Console can handle any buffer size, so prefer LOG_LINE_MAX. If we
- * have to handle dumps, we must have at least record_size buffer. And
- * for ftrace, bufsize is irrelevant (if bufsize is 0, buf will be
- * ZERO_SIZE_PTR).
+ * Since bufsize is only used for dmesg crash dumps, it
+ * must match the size of the dprz record (after PRZ header
+ * and ECC bytes have been accounted for).
*/
- if (cxt->console_size)
- cxt->pstore.bufsize = 1024; /* LOG_LINE_MAX */
- cxt->pstore.bufsize = max(cxt->record_size, cxt->pstore.bufsize);
- cxt->pstore.buf = kmalloc(cxt->pstore.bufsize, GFP_KERNEL);
+ cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;
+ cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL);
if (!cxt->pstore.buf) {
- pr_err("cannot allocate pstore buffer\n");
+ pr_err("cannot allocate pstore crash dump buffer\n");
err = -ENOMEM;
goto fail_clear;
}
off = same->src_offset;
len = same->src_length;
- ret = -EISDIR;
if (S_ISDIR(src->i_mode))
- goto out;
+ return -EISDIR;
- ret = -EINVAL;
if (!S_ISREG(src->i_mode))
- goto out;
+ return -EINVAL;
+
+ if (!file->f_op->remap_file_range)
+ return -EOPNOTSUPP;
ret = remap_verify_area(file, off, len, false);
if (ret < 0)
- goto out;
+ return ret;
ret = 0;
if (off + len > i_size_read(src))
fdput(dst_fd);
next_loop:
if (fatal_signal_pending(current))
- goto out;
+ break;
}
-
-out:
return ret;
}
EXPORT_SYMBOL(vfs_dedupe_file_range);
}
}
brelse(bh);
- return 0;
+ return err;
}
int sysv_write_inode(struct inode *inode, struct writeback_control *wbc)
ret = udf_dstrCS0toChar(sb, outstr, 31, pvoldesc->volIdent, 32);
- if (ret < 0)
- goto out_bh;
-
- strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
+ if (ret < 0) {
+ strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName");
+ pr_warn("incorrect volume identification, setting to "
+ "'InvalidName'\n");
+ } else {
+ strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
+ }
udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);
ret = udf_dstrCS0toChar(sb, outstr, 127, pvoldesc->volSetIdent, 128);
- if (ret < 0)
+ if (ret < 0) {
+ ret = 0;
goto out_bh;
-
+ }
outstr[ret] = 0;
udf_debug("volSetIdent[] = '%s'\n", outstr);
return u_len;
}
+/*
+ * Convert CS0 dstring to output charset. Warning: This function may truncate
+ * input string if it is too long as it is used for informational strings only
+ * and it is better to truncate the string than to refuse mounting a media.
+ */
int udf_dstrCS0toChar(struct super_block *sb, uint8_t *utf_o, int o_len,
const uint8_t *ocu_i, int i_len)
{
if (i_len > 0) {
s_len = ocu_i[i_len - 1];
if (s_len >= i_len) {
- pr_err("incorrect dstring lengths (%d/%d)\n",
- s_len, i_len);
- return -EINVAL;
+ pr_warn("incorrect dstring lengths (%d/%d),"
+ " truncating\n", s_len, i_len);
+ s_len = i_len - 1;
+ /* 2-byte encoding? Need to round properly... */
+ if (ocu_i[0] == 16)
+ s_len -= (s_len - 1) & 2;
}
}
case BMAP_LEFT_FILLING | BMAP_RIGHT_FILLING | BMAP_RIGHT_CONTIG:
/*
* Filling in all of a previously delayed allocation extent.
- * The right neighbor is contiguous, the left is not.
+ * The right neighbor is contiguous, the left is not. Take care
+ * with delay -> unwritten extent allocation here because the
+ * delalloc record we are overwriting is always written.
*/
PREV.br_startblock = new->br_startblock;
PREV.br_blockcount += RIGHT.br_blockcount;
+ PREV.br_state = new->br_state;
xfs_iext_next(ifp, &bma->icur);
xfs_iext_remove(bma->ip, &bma->icur, state);
static xfs_extlen_t
xfs_inobt_max_size(
- struct xfs_mount *mp)
+ struct xfs_mount *mp,
+ xfs_agnumber_t agno)
{
+ xfs_agblock_t agblocks = xfs_ag_block_count(mp, agno);
+
/* Bail out if we're uninitialized, which can happen in mkfs. */
if (mp->m_inobt_mxr[0] == 0)
return 0;
return xfs_btree_calc_size(mp->m_inobt_mnr,
- (uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock /
- XFS_INODES_PER_CHUNK);
+ (uint64_t)agblocks * mp->m_sb.sb_inopblock /
+ XFS_INODES_PER_CHUNK);
}
static int
if (error)
return error;
- *ask += xfs_inobt_max_size(mp);
+ *ask += xfs_inobt_max_size(mp, agno);
*used += tree_len;
return 0;
}
goto out_unlock;
}
-static int
+int
xfs_flush_unmap_range(
struct xfs_inode *ip,
xfs_off_t offset,
* Writeback and invalidate cache for the remainder of the file as we're
* about to shift down every extent from offset to EOF.
*/
- error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, offset, -1);
- if (error)
- return error;
- error = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
- offset >> PAGE_SHIFT, -1);
- if (error)
- return error;
+ error = xfs_flush_unmap_range(ip, offset, XFS_ISIZE(ip));
/*
* Clean out anything hanging around in the cow fork now that
int whichfork, xfs_extnum_t *nextents,
xfs_filblks_t *count);
+int xfs_flush_unmap_range(struct xfs_inode *ip, xfs_off_t offset,
+ xfs_off_t len);
+
#endif /* __XFS_BMAP_UTIL_H__ */
}
/*
- * Requeue a failed buffer for writeback
+ * Requeue a failed buffer for writeback.
*
- * Return true if the buffer has been re-queued properly, false otherwise
+ * We clear the log item failed state here as well, but we have to be careful
+ * about reference counts because the only active reference counts on the buffer
+ * may be the failed log items. Hence if we clear the log item failed state
+ * before queuing the buffer for IO we can release all active references to
+ * the buffer and free it, leading to use-after-free problems in
+ * xfs_buf_delwri_queue. It makes no difference to the buffer or log items which
+ * order we process them in - the buffer is locked, and we own the buffer list
+ * so nothing on them is going to change while we are performing this action.
+ *
+ * Hence we can safely queue the buffer for IO before we clear the failed log
+ * item state, therefore always having an active reference to the buffer and
+ * avoiding the transient zero-reference state that leads to use-after-free.
+ *
+ * Return true if the buffer was added to the buffer list, false if it was
+ * already on the buffer list.
*/
bool
xfs_buf_resubmit_failed_buffers(
struct list_head *buffer_list)
{
struct xfs_log_item *lip;
+ bool ret;
+
+ ret = xfs_buf_delwri_queue(bp, buffer_list);
/*
- * Clear XFS_LI_FAILED flag from all items before resubmit
- *
- * XFS_LI_FAILED set/clear is protected by ail_lock, caller this
+ * XFS_LI_FAILED set/clear is protected by ail_lock, caller of this
* function already has it acquired
*/
list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
xfs_clear_li_failed(lip);
- /* Add this buffer back to the delayed write list */
- return xfs_buf_delwri_queue(bp, buffer_list);
+ return ret;
}
}
-loff_t
+STATIC loff_t
xfs_file_remap_range(
struct file *file_in,
loff_t pos_in,
if (error)
return error;
+ xfs_trim_extent(imap, got.br_startoff, got.br_blockcount);
trace_xfs_reflink_cow_alloc(ip, &got);
return 0;
}
if (ret)
goto out_unlock;
- /* Zap any page cache for the destination file's range. */
- truncate_inode_pages_range(&inode_out->i_data,
- round_down(pos_out, PAGE_SIZE),
- round_up(pos_out + *len, PAGE_SIZE) - 1);
+ /*
+ * If pos_out > EOF, we may have dirtied blocks between EOF and
+ * pos_out. In that case, we need to extend the flush and unmap to cover
+ * from EOF to the end of the copy length.
+ */
+ if (pos_out > XFS_ISIZE(dest)) {
+ loff_t flen = *len + (pos_out - XFS_ISIZE(dest));
+ ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen);
+ } else {
+ ret = xfs_flush_unmap_range(dest, pos_out, *len);
+ }
+ if (ret)
+ goto out_unlock;
return 1;
out_unlock:
),
TP_fast_assign(
__entry->dev = bp->b_target->bt_dev;
- __entry->bno = bp->b_bn;
+ if (bp->b_bn == XFS_BUF_DADDR_NULL)
+ __entry->bno = bp->b_maps[0].bm_bn;
+ else
+ __entry->bno = bp->b_bn;
__entry->nblks = bp->b_length;
__entry->hold = atomic_read(&bp->b_hold);
__entry->pincount = atomic_read(&bp->b_pin_count);
void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
unsigned int idx);
+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr);
unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx);
void can_free_echo_skb(struct net_device *dev, unsigned int idx);
int can_rx_offload_add_fifo(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight);
int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 reg);
int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload);
-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb);
+int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+ struct sk_buff *skb, u32 timestamp);
+unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
+ unsigned int idx, u32 timestamp);
+int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+ struct sk_buff *skb);
void can_rx_offload_reset(struct can_rx_offload *offload);
void can_rx_offload_del(struct can_rx_offload *offload);
void can_rx_offload_enable(struct can_rx_offload *offload);
#include <linux/dma-mapping.h>
#include <linux/mem_encrypt.h>
-#define DIRECT_MAPPING_ERROR 0
+#define DIRECT_MAPPING_ERROR (~(dma_addr_t)0)
#ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
#include <asm/dma-direct.h>
extern void efi_reboot(enum reboot_mode reboot_mode, const char *__unused);
extern bool efi_is_table_address(unsigned long phys_addr);
+
+extern int efi_apply_persistent_mem_reservations(void);
#else
static inline bool efi_enabled(int feature)
{
{
return false;
}
+
+static inline int efi_apply_persistent_mem_reservations(void)
+{
+ return 0;
+}
#endif
extern int efi_status_to_err(efi_status_t status);
void bpf_jit_free(struct bpf_prog *fp);
+int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+ const struct bpf_insn *insn, bool extra_pass,
+ u64 *func_addr, bool *func_addr_fixed);
+
struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *fp);
void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other);
extern void return_to_handler(void);
extern int
-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,
- unsigned long frame_pointer, unsigned long *retp);
+function_graph_enter(unsigned long ret, unsigned long func,
+ unsigned long frame_pointer, unsigned long *retp);
unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
unsigned long ret, unsigned long *retp);
int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
int interrupt);
-
-/**
- * struct hid_scroll_counter - Utility class for processing high-resolution
- * scroll events.
- * @dev: the input device for which events should be reported.
- * @microns_per_hi_res_unit: the amount moved by the user's finger for each
- * high-resolution unit reported by the mouse, in
- * microns.
- * @resolution_multiplier: the wheel's resolution in high-resolution mode as a
- * multiple of its lower resolution. For example, if
- * moving the wheel by one "notch" would result in a
- * value of 1 in low-resolution mode but 8 in
- * high-resolution, the multiplier is 8.
- * @remainder: counts the number of high-resolution units moved since the last
- * low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should
- * only be used by class methods.
- */
-struct hid_scroll_counter {
- struct input_dev *dev;
- int microns_per_hi_res_unit;
- int resolution_multiplier;
-
- int remainder;
-};
-
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
- int hi_res_value);
-
/* HID quirks API */
unsigned long hid_lookup_quirk(const struct hid_device *hdev);
int hid_quirks_init(char **quirks_param, __u16 bus, int count);
u8 wq_signature[0x1];
u8 cont_srq[0x1];
- u8 dbr_umem_valid[0x1];
+ u8 reserved_at_22[0x1];
u8 rlky[0x1];
u8 basic_cyclic_rcv_wqe[0x1];
u8 log_rq_stride[0x3];
u8 xrcd[0x18];
u8 page_offset[0x6];
- u8 reserved_at_46[0x2];
+ u8 reserved_at_46[0x1];
+ u8 dbr_umem_valid[0x1];
u8 cqn[0x18];
u8 reserved_at_60[0x20];
struct mlx5_ifc_xrc_srqc_bits xrc_srq_context_entry;
- u8 reserved_at_280[0x40];
+ u8 reserved_at_280[0x60];
+
u8 xrc_srq_umem_valid[0x1];
- u8 reserved_at_2c1[0x5bf];
+ u8 reserved_at_2e1[0x1f];
+
+ u8 reserved_at_300[0x580];
u8 pas[0][0x40];
};
}
/* fall through */
case NET_DIM_START_MEASURE:
+ net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr,
+ &dim->start_sample);
dim->state = NET_DIM_MEASURE_IN_PROGRESS;
break;
case NET_DIM_APPLY_NEW_PROFILE:
struct nf_conntrack_tuple tuple;
};
+enum grep_conntrack {
+ GRE_CT_UNREPLIED,
+ GRE_CT_REPLIED,
+ GRE_CT_MAX
+};
+
+struct netns_proto_gre {
+ struct nf_proto_net nf;
+ rwlock_t keymap_lock;
+ struct list_head keymap_list;
+ unsigned int gre_timeouts[GRE_CT_MAX];
+};
+
/* add new tuple->key_reply pair to keymap */
int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir,
struct nf_conntrack_tuple *t);
*
* @buf_lock: spinlock to serialize access to @buf
* @buf: preallocated crash dump buffer
- * @bufsize: size of @buf available for crash dump writes
+ * @bufsize: size of @buf available for crash dump bytes (must match
+ * smallest number of bytes available for writing to a
+ * backend entry, since compressed bytes don't take kindly
+ * to being truncated)
*
* @read_mutex: serializes @open, @read, @close, and @erase callbacks
* @flags: bitfield of frontends the backend can accept writes for
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* Index of current stored address in ret_stack: */
int curr_ret_stack;
+ int curr_ret_depth;
/* Stack of return addresses for return function tracing: */
struct ftrace_ret_stack *ret_stack;
}
}
+static inline void skb_zcopy_set_nouarg(struct sk_buff *skb, void *val)
+{
+ skb_shinfo(skb)->destructor_arg = (void *)((uintptr_t) val | 0x1UL);
+ skb_shinfo(skb)->tx_flags |= SKBTX_ZEROCOPY_FRAG;
+}
+
+static inline bool skb_zcopy_is_nouarg(struct sk_buff *skb)
+{
+ return (uintptr_t) skb_shinfo(skb)->destructor_arg & 0x1UL;
+}
+
+static inline void *skb_zcopy_get_nouarg(struct sk_buff *skb)
+{
+ return (void *)((uintptr_t) skb_shinfo(skb)->destructor_arg & ~0x1UL);
+}
+
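A sketch of the round trip the three helpers above provide, on the assumption that the stashed cookie is pointer-aligned so bit 0 is free to act as the tag; the function and its cookie are hypothetical:

	static void *stash_and_recover(struct sk_buff *skb, void *cookie)
	{
		/* cookie must have bit 0 clear (pointer-aligned) */
		skb_zcopy_set_nouarg(skb, cookie);

		/* later, e.g. on tx completion: this skb carries no real
		 * ubuf_info, so the uarg callback path is skipped */
		if (skb_zcopy_is_nouarg(skb))
			return skb_zcopy_get_nouarg(skb); /* tag stripped */
		return NULL;
	}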
/* Release a reference on a zerocopy structure */
static inline void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy)
{
if (uarg->callback == sock_zerocopy_callback) {
uarg->zerocopy = uarg->zerocopy && zerocopy;
sock_zerocopy_put(uarg);
- } else {
+ } else if (!skb_zcopy_is_nouarg(skb)) {
uarg->callback(uarg, zerocopy);
}
u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */
u32 lsndtime; /* timestamp of last sent data packet (for restart window) */
u32 last_oow_ack_time; /* timestamp of last out-of-window ACK */
+ u32 compressed_ack_rcv_nxt;
u32 tsoffset; /* timestamp offset */
struct tracepoint_func *it_func_ptr; \
void *it_func; \
void *__data; \
- int __maybe_unused idx = 0; \
+ int __maybe_unused __idx = 0; \
\
if (!(cond)) \
return; \
* doesn't work from the idle path. \
*/ \
if (rcuidle) { \
- idx = srcu_read_lock_notrace(&tracepoint_srcu); \
+ __idx = srcu_read_lock_notrace(&tracepoint_srcu);\
rcu_irq_enter_irqson(); \
} \
\
\
if (rcuidle) { \
rcu_irq_exit_irqson(); \
- srcu_read_unlock_notrace(&tracepoint_srcu, idx);\
+ srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\
} \
\
preempt_enable_notrace(); \
/* Device needs a pause after every control message. */
#define USB_QUIRK_DELAY_CTRL_MSG BIT(13)
+/* Hub needs extra delay after resetting its port. */
+#define USB_QUIRK_HUB_SLOW_RESET BIT(14)
+
#endif /* __LINUX_USB_QUIRKS_H */
void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
-void *xa_cmpxchg(struct xarray *, unsigned long index,
- void *old, void *entry, gfp_t);
-int xa_reserve(struct xarray *, unsigned long index, gfp_t);
+void *xa_erase(struct xarray *, unsigned long index);
void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
void *entry, gfp_t);
bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
return xa->xa_flags & XA_FLAGS_MARK(mark);
}
-/**
- * xa_erase() - Erase this entry from the XArray.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument. The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
- *
- * Context: Process context. Takes and releases the xa_lock.
- * Return: The entry which used to be at this index.
- */
-static inline void *xa_erase(struct xarray *xa, unsigned long index)
-{
- return xa_store(xa, index, NULL, 0);
-}
-
-/**
- * xa_insert() - Store this entry in the XArray unless another entry is
- * already present.
- * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
- *
- * If you would rather see the existing entry in the array, use xa_cmpxchg().
- * This function is for users who don't care what the entry is, only that
- * one is present.
- *
- * Context: Process context. Takes and releases the xa_lock.
- * May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded. -EEXIST if another entry was present.
- * -ENOMEM if memory could not be allocated.
- */
-static inline int xa_insert(struct xarray *xa, unsigned long index,
- void *entry, gfp_t gfp)
-{
- void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
- if (!curr)
- return 0;
- if (xa_is_err(curr))
- return xa_err(curr);
- return -EEXIST;
-}
-
-/**
- * xa_release() - Release a reserved entry.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * After calling xa_reserve(), you can call this function to release the
- * reservation. If the entry at @index has been stored to, this function
- * will do nothing.
- */
-static inline void xa_release(struct xarray *xa, unsigned long index)
-{
- xa_cmpxchg(xa, index, NULL, NULL, 0);
-}
-
/**
* xa_for_each() - Iterate over a portion of an XArray.
* @xa: XArray.
void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
void *entry, gfp_t);
int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
+int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
return -EEXIST;
}
+/**
+ * xa_store_bh() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables softirqs
+ * while holding the array lock.
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr;
+
+ xa_lock_bh(xa);
+ curr = __xa_store(xa, index, entry, gfp);
+ xa_unlock_bh(xa);
+
+ return curr;
+}
+
+/**
+ * xa_store_irq() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables interrupts
+ * while holding the array lock.
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr;
+
+ xa_lock_irq(xa);
+ curr = __xa_store(xa, index, entry, gfp);
+ xa_unlock_irq(xa);
+
+ return curr;
+}
+
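A brief sketch of when the locked variants matter: if an array is also read or written from softirq context, process-context stores must use the _bh form so the xa_lock is never held with softirqs enabled underneath it. The table and call sites here are hypothetical:

	static DEFINE_XARRAY(conn_table);

	/* process context */
	static void conn_publish(unsigned long id, void *conn)
	{
		xa_store_bh(&conn_table, id, conn, GFP_ATOMIC);
	}

	/* softirq context, e.g. a receive handler; xa_load() is
	 * RCU-protected and needs no extra locking here */
	static void *conn_find(unsigned long id)
	{
		return xa_load(&conn_table, id);
	}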
/**
* xa_erase_bh() - Erase this entry from the XArray.
* @xa: XArray.
* the third argument. The XArray does not need to allocate memory, so
* the user does not need to provide GFP flags.
*
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
* disabling softirqs.
* Return: The entry which used to be at this index.
*/
return entry;
}
+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Any context. Takes and releases the xa_lock. May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+ void *old, void *entry, gfp_t gfp)
+{
+ void *curr;
+
+ xa_lock(xa);
+ curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+ xa_unlock(xa);
+
+ return curr;
+}
+
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+ if (!curr)
+ return 0;
+ if (xa_is_err(curr))
+ return xa_err(curr);
+ return -EEXIST;
+}
+
/**
* xa_alloc() - Find somewhere to store this entry in the XArray.
* @xa: XArray.
* Updates the @id pointer with the index, then stores the entry at that
* index. A concurrent lookup will not see an uninitialised @id.
*
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
* disabling softirqs. May sleep if the @gfp flags permit.
* Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
* there is no more space in the XArray.
return err;
}
+/**
+ * xa_reserve() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * Ensures there is somewhere to store an entry at @index in the array.
+ * If there is already something stored at @index, this function does
+ * nothing. If there was nothing there, the entry is marked as reserved.
+ * Loading from a reserved entry returns a %NULL pointer.
+ *
+ * If you do not use the entry that you have reserved, call xa_release()
+ * or xa_erase() to free any unnecessary memory.
+ *
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+ int ret;
+
+ xa_lock(xa);
+ ret = __xa_reserve(xa, index, gfp);
+ xa_unlock(xa);
+
+ return ret;
+}
+
+/**
+ * xa_reserve_bh() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * A softirq-disabling version of xa_reserve().
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+ int ret;
+
+ xa_lock_bh(xa);
+ ret = __xa_reserve(xa, index, gfp);
+ xa_unlock_bh(xa);
+
+ return ret;
+}
+
+/**
+ * xa_reserve_irq() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * An interrupt-disabling version of xa_reserve().
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+ int ret;
+
+ xa_lock_irq(xa);
+ ret = __xa_reserve(xa, index, gfp);
+ xa_unlock_irq(xa);
+
+ return ret;
+}
+
+/**
+ * xa_release() - Release a reserved entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * After calling xa_reserve(), you can call this function to release the
+ * reservation. If the entry at @index has been stored to, this function
+ * will do nothing.
+ */
+static inline void xa_release(struct xarray *xa, unsigned long index)
+{
+ xa_cmpxchg(xa, index, NULL, NULL, 0);
+}
+
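The reserve/release pair separates the allocation from the store, which
is useful when the store must later happen in a context that cannot
sleep. A hedged sketch, assuming a hypothetical setup_slot() helper:

static int setup_slot(struct xarray *xa, unsigned long index, void *obj)
{
	/* Allocate now, while sleeping is still allowed. */
	int err = xa_reserve(xa, index, GFP_KERNEL);

	if (err)
		return err;
	if (!obj) {
		/* Changed our mind: give the reserved memory back. */
		xa_release(xa, index);
		return 0;
	}
	/* The store itself will not need to allocate. */
	return xa_err(xa_store(xa, index, obj, GFP_ATOMIC));
}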
/* Everything below here is the Advanced API. Proceed with caution. */
/*
/* v4l2 request helper */
-void vb2_m2m_request_queue(struct media_request *req);
+void v4l2_m2m_request_queue(struct media_request *req);
/* v4l2 ioctl helpers */
struct sockaddr_rxrpc *, struct key *);
int rxrpc_kernel_check_call(struct socket *, struct rxrpc_call *,
enum rxrpc_call_completion *, u32 *);
-u32 rxrpc_kernel_check_life(struct socket *, struct rxrpc_call *);
+u32 rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *);
+void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *);
u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *);
bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *,
ktime_t *);
const struct nf_nat_range2 *range,
const struct net_device *out);
-void nf_nat_masquerade_ipv4_register_notifier(void);
+int nf_nat_masquerade_ipv4_register_notifier(void);
void nf_nat_masquerade_ipv4_unregister_notifier(void);
#endif /*_NF_NAT_MASQUERADE_IPV4_H_ */
unsigned int
nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
const struct net_device *out);
-void nf_nat_masquerade_ipv6_register_notifier(void);
+int nf_nat_masquerade_ipv6_register_notifier(void);
void nf_nat_masquerade_ipv6_unregister_notifier(void);
#endif /* _NF_NAT_MASQUERADE_IPV6_H_ */
SCTP_DEFAULT_MINSEGMENT));
}
+static inline bool sctp_transport_pmtu_check(struct sctp_transport *t)
+{
+ __u32 pmtu = sctp_dst_mtu(t->dst);
+
+ if (t->pathmtu == pmtu)
+ return true;
+
+ t->pathmtu = pmtu;
+
+ return false;
+}
+
#endif /* __net_sctp_h__ */
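The helper doubles as a cache refresh: it returns true when the cached
t->pathmtu already matches the dst, and otherwise updates the cache and
returns false so the caller can resynchronise. A hedged sketch of the
expected calling pattern (the caller is assumed, not shown in this hunk):

	/* e.g. in the transmit path, before building a packet: */
	if (!sctp_transport_pmtu_check(tp))
		sctp_assoc_sync_pmtu(tp->asoc);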
((i) < rtd->num_codecs) && ((dai) = rtd->codec_dais[i]); \
(i)++)
#define for_each_rtd_codec_dai_rollback(rtd, i, dai) \
- for (; ((i--) >= 0) && ((dai) = rtd->codec_dais[i]);)
+ for (; ((--i) >= 0) && ((dai) = rtd->codec_dais[i]);)
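The pre-decrement matters at the lower bound; a worked expansion of the
two conditions (added for illustration):

/*
 * With i == 0:
 *   (i--) >= 0  tests the old value: the test passes, i becomes -1,
 *               and rtd->codec_dais[-1] is read out of bounds.
 *   (--i) >= 0  decrements first: -1 >= 0 fails and the loop stops
 *               without touching the array.
 */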
/* mixer control */
TP_fast_assign(
__entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent)));
- strlcpy(__entry->domain, domain, DOMAIN_LEN);
- strlcpy(__entry->type, type, DOMAIN_LEN);
+ strlcpy(__entry->domain, domain, sizeof(__entry->domain));
+ strlcpy(__entry->type, type, sizeof(__entry->type));
__entry->percentile = percentile;
__entry->numerator = numerator;
__entry->denominator = denominator;
TP_fast_assign(
__entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent)));
- strlcpy(__entry->domain, domain, DOMAIN_LEN);
+ strlcpy(__entry->domain, domain, sizeof(__entry->domain));
__entry->depth = depth;
),
TP_fast_assign(
__entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent)));
- strlcpy(__entry->domain, domain, DOMAIN_LEN);
+ strlcpy(__entry->domain, domain, sizeof(__entry->domain));
),
TP_printk("%d,%d %s", MAJOR(__entry->dev), MINOR(__entry->dev),
enum rxrpc_propose_ack_trace {
rxrpc_propose_ack_client_tx_end,
rxrpc_propose_ack_input_data,
+ rxrpc_propose_ack_ping_for_check_life,
rxrpc_propose_ack_ping_for_keepalive,
rxrpc_propose_ack_ping_for_lost_ack,
rxrpc_propose_ack_ping_for_lost_reply,
#define rxrpc_propose_ack_traces \
EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \
EM(rxrpc_propose_ack_input_data, "DataIn ") \
+ EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \
EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \
EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \
EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \
#ifdef CREATE_TRACE_POINTS
static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p)
{
+ unsigned int state;
+
#ifdef CONFIG_SCHED_DEBUG
BUG_ON(p != current);
#endif /* CONFIG_SCHED_DEBUG */
if (preempt)
return TASK_REPORT_MAX;
- return 1 << task_state_index(p);
+ /*
+	 * task_state_index() uses fls() and returns a value in the 0-8 range.
+	 * Decrement it by 1 (except for TASK_RUNNING, i.e. 0) before using
+	 * it as the left-shift amount to recover the correct task->state
+	 * mapping.
+ */
+ state = task_state_index(p);
+
+ return state ? (1 << (state - 1)) : state;
}
#endif /* CREATE_TRACE_POINTS */
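A worked mapping under the fixed code (state values as defined in
include/linux/sched.h, shown for illustration):

/*
 *   TASK_RUNNING         (0x0) -> index 0 -> reported as 0
 *   TASK_INTERRUPTIBLE   (0x1) -> index 1 -> 1 << 0 == 0x1
 *   TASK_UNINTERRUPTIBLE (0x2) -> index 2 -> 1 << 1 == 0x2
 *
 * The old "1 << task_state_index(p)" reported every state one bit too
 * high and could never report TASK_RUNNING as 0.
 */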
* the situation described above.
*/
#define REL_RESERVED 0x0a
-#define REL_WHEEL_HI_RES 0x0b
#define REL_MAX 0x0f
#define REL_CNT (REL_MAX+1)
#define ABS_MISC 0x28
-/*
- * 0x2e is reserved and should not be used in input drivers.
- * It was used by HID as ABS_MISC+6 and userspace needs to detect if
- * the next ABS_* event is correct or is just ABS_MISC + n.
- * We define here ABS_RESERVED so userspace can rely on it and detect
- * the situation described above.
- */
-#define ABS_RESERVED 0x2e
-
#define ABS_MT_SLOT 0x2f /* MT slot being modified */
#define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */
#define ABS_MT_TOUCH_MINOR 0x31 /* Minor axis (omit if circular) */
#ifndef __LINUX_V4L2_CONTROLS_H
#define __LINUX_V4L2_CONTROLS_H
+#include <linux/types.h>
+
/* Control classes */
#define V4L2_CTRL_CLASS_USER 0x00980000 /* Old-style 'user' controls */
#define V4L2_CTRL_CLASS_MPEG 0x00990000 /* MPEG-compression controls */
__u8 profile_and_level_indication;
__u8 progressive_sequence;
__u8 chroma_format;
+ __u8 pad;
};
struct v4l2_mpeg2_picture {
__u8 alternate_scan;
__u8 repeat_first_field;
__u8 progressive_frame;
+ __u8 pad;
};
struct v4l2_ctrl_mpeg2_slice_params {
__u8 backward_ref_index;
__u8 forward_ref_index;
+ __u8 pad;
};
struct v4l2_ctrl_mpeg2_quantization {
bpf_prog_unlock_free(fp);
}
+int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+ const struct bpf_insn *insn, bool extra_pass,
+ u64 *func_addr, bool *func_addr_fixed)
+{
+ s16 off = insn->off;
+ s32 imm = insn->imm;
+ u8 *addr;
+
+ *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
+ if (!*func_addr_fixed) {
+		/* Placeholder address until the last pass has collected
+		 * all addresses for JITed subprograms, at which point we
+		 * can pick them up from prog->aux.
+		 */
+ if (!extra_pass)
+ addr = NULL;
+ else if (prog->aux->func &&
+ off >= 0 && off < prog->aux->func_cnt)
+ addr = (u8 *)prog->aux->func[off]->bpf_func;
+ else
+ return -EINVAL;
+ } else {
+		/* Address of a BPF helper call. Since it is part of the
+		 * core kernel, it is always at a fixed location.
+		 * __bpf_call_base and the helper at offset imm from it
+		 * both live in the core kernel.
+		 */
+ addr = (u8 *)__bpf_call_base + imm;
+ }
+
+ *func_addr = (unsigned long)addr;
+ return 0;
+}
+
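An arch JIT would typically consume this helper while lowering a
BPF_JMP | BPF_CALL instruction; a minimal sketch (the ctx and emit_*
helpers are hypothetical placeholders):

	u64 func_addr;
	bool func_addr_fixed;
	int err;

	err = bpf_jit_get_func_addr(prog, insn, extra_pass,
				    &func_addr, &func_addr_fixed);
	if (err)
		return err;
	if (func_addr_fixed)
		emit_direct_call(ctx, func_addr);	/* helper: fixed address */
	else
		emit_patchable_call(ctx, func_addr);	/* subprog: may still move */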
static int bpf_jit_blind_insn(const struct bpf_insn *from,
const struct bpf_insn *aux,
struct bpf_insn *to_buff)
return -ENOENT;
new = kmalloc_node(sizeof(struct bpf_storage_buffer) +
- map->value_size, __GFP_ZERO | GFP_USER,
+ map->value_size,
+ __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN,
map->numa_node);
if (!new)
return -ENOMEM;
#include <linux/bpf.h>
#include <linux/list.h>
#include <linux/slab.h>
+#include <linux/capability.h>
#include "percpu_freelist.h"
#define QUEUE_STACK_CREATE_FLAG_MASK \
/* Called from syscall */
static int queue_stack_map_alloc_check(union bpf_attr *attr)
{
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
/* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 0 ||
+ attr->value_size == 0 ||
attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK)
return -EINVAL;
{
int ret, numa_node = bpf_map_attr_numa_node(attr);
struct bpf_queue_stack *qs;
- u32 size, value_size;
- u64 queue_size, cost;
-
- size = attr->max_entries + 1;
- value_size = attr->value_size;
-
- queue_size = sizeof(*qs) + (u64) value_size * size;
+ u64 size, queue_size, cost;
- cost = queue_size;
+ size = (u64) attr->max_entries + 1;
+ cost = queue_size = sizeof(*qs) + size * attr->value_size;
if (cost >= U32_MAX - PAGE_SIZE)
return ERR_PTR(-E2BIG);
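Widening to u64 closes an overflow that the old u32 arithmetic let
through; a worked example (values chosen for illustration):

/*
 * Old code, with attr->max_entries == UINT_MAX and value_size == 4:
 *   size = max_entries + 1        -> wraps to 0 in u32
 *   queue_size = sizeof(*qs) + 0  -> passes the U32_MAX check, yet the
 *                                    map is allocated far too small.
 * New code: size == 0x100000000ULL, cost ~= 16 GiB -> rejected (-E2BIG).
 */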
return;
/* NOTE: fake 'exit' subprog should be updated as well. */
for (i = 0; i <= env->subprog_cnt; i++) {
- if (env->subprog_info[i].start < off)
+ if (env->subprog_info[i].start <= off)
continue;
env->subprog_info[i].start += len - 1;
}
kdb_printf("no process for cpu %ld\n", cpu);
return 0;
}
- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
kdb_parse(buf);
return 0;
}
kdb_printf("btc: cpu status: ");
kdb_parse("cpu\n");
for_each_online_cpu(cpu) {
- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
kdb_parse(buf);
touch_nmi_watchdog();
}
int count;
int i;
int diag, dtab_count;
- int key;
+ int key, buf_size, ret;
diag = kdbgetintenv("DTABCOUNT", &dtab_count);
else
p_tmp = tmpbuffer;
len = strlen(p_tmp);
- count = kallsyms_symbol_complete(p_tmp,
- sizeof(tmpbuffer) -
- (p_tmp - tmpbuffer));
+ buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer);
+ count = kallsyms_symbol_complete(p_tmp, buf_size);
if (tab == 2 && count > 0) {
kdb_printf("\n%d symbols are found.", count);
if (count > dtab_count) {
}
kdb_printf("\n");
for (i = 0; i < count; i++) {
- if (WARN_ON(!kallsyms_symbol_next(p_tmp, i)))
+ ret = kallsyms_symbol_next(p_tmp, i, buf_size);
+ if (WARN_ON(!ret))
break;
- kdb_printf("%s ", p_tmp);
+ if (ret != -E2BIG)
+ kdb_printf("%s ", p_tmp);
+ else
+ kdb_printf("%s... ", p_tmp);
*(p_tmp + len) = '\0';
}
if (i >= dtab_count)
case KT_LATIN:
if (isprint(keychar))
break; /* printable characters */
- /* drop through */
+ /* fall through */
case KT_SPEC:
if (keychar == K_ENTER)
break;
- /* drop through */
+ /* fall through */
default:
return -1; /* ignore unprintables */
}
if (reason == KDB_REASON_DEBUG) {
/* special case below */
} else {
- kdb_printf("\nEntering kdb (current=0x%p, pid %d) ",
+ kdb_printf("\nEntering kdb (current=0x%px, pid %d) ",
kdb_current, kdb_current ? kdb_current->pid : 0);
#if defined(CONFIG_SMP)
kdb_printf("on processor %d ", raw_smp_processor_id());
*/
switch (db_result) {
case KDB_DB_BPT:
- kdb_printf("\nEntering kdb (0x%p, pid %d) ",
+ kdb_printf("\nEntering kdb (0x%px, pid %d) ",
kdb_current, kdb_current->pid);
#if defined(CONFIG_SMP)
kdb_printf("on processor %d ", raw_smp_processor_id());
char cbuf[32];
char *c = cbuf;
int i;
+ int j;
unsigned long word;
memset(cbuf, '\0', sizeof(cbuf));
wc.word = word;
#define printable_char(c) \
({unsigned char __c = c; isascii(__c) && isprint(__c) ? __c : '.'; })
- switch (bytesperword) {
- case 8:
+ for (j = 0; j < bytesperword; j++)
*c++ = printable_char(*cp++);
- *c++ = printable_char(*cp++);
- *c++ = printable_char(*cp++);
- *c++ = printable_char(*cp++);
- addr += 4;
- case 4:
- *c++ = printable_char(*cp++);
- *c++ = printable_char(*cp++);
- addr += 2;
- case 2:
- *c++ = printable_char(*cp++);
- addr++;
- case 1:
- *c++ = printable_char(*cp++);
- addr++;
- break;
- }
+ addr += bytesperword;
#undef printable_char
}
}
if (mod->state == MODULE_STATE_UNFORMED)
continue;
- kdb_printf("%-20s%8u 0x%p ", mod->name,
+ kdb_printf("%-20s%8u 0x%px ", mod->name,
mod->core_layout.size, (void *)mod);
#ifdef CONFIG_MODULE_UNLOAD
kdb_printf("%4d ", module_refcount(mod));
kdb_printf(" (Loading)");
else
kdb_printf(" (Live)");
- kdb_printf(" 0x%p", mod->core_layout.base);
+ kdb_printf(" 0x%px", mod->core_layout.base);
#ifdef CONFIG_MODULE_UNLOAD
{
return;
cpu = kdb_process_cpu(p);
- kdb_printf("0x%p %8d %8d %d %4d %c 0x%p %c%s\n",
+ kdb_printf("0x%px %8d %8d %d %4d %c 0x%px %c%s\n",
(void *)p, p->pid, p->parent->pid,
kdb_task_has_cpu(p), kdb_process_cpu(p),
kdb_task_state_char(p),
} else {
if (KDB_TSK(cpu) != p)
kdb_printf(" Error: does not match running "
- "process table (0x%p)\n", KDB_TSK(cpu));
+ "process table (0x%px)\n", KDB_TSK(cpu));
}
}
}
for_each_kdbcmd(kp, i) {
if (kp->cmd_name && (strcmp(kp->cmd_name, cmd) == 0)) {
kdb_printf("Duplicate kdb command registered: "
- "%s, func %p help %s\n", cmd, func, help);
+ "%s, func %px help %s\n", cmd, func, help);
return 1;
}
}
unsigned long sym_start;
unsigned long sym_end;
} kdb_symtab_t;
-extern int kallsyms_symbol_next(char *prefix_name, int flag);
+extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size);
extern int kallsyms_symbol_complete(char *prefix_name, int max_len);
/* Exported Symbols for kernel loadable modules to use. */
int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
{
if (KDB_DEBUG(AR))
- kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname,
+ kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname,
symtab);
memset(symtab, 0, sizeof(*symtab));
symtab->sym_start = kallsyms_lookup_name(symname);
char *knt1 = NULL;
if (KDB_DEBUG(AR))
- kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab);
+ kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
memset(symtab, 0, sizeof(*symtab));
if (addr < 4096)
symtab->mod_name = "kernel";
if (KDB_DEBUG(AR))
kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, "
- "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret,
+ "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
symtab->sym_start, symtab->mod_name, symtab->sym_name,
symtab->sym_name);
* Parameters:
* prefix_name prefix of a symbol name to lookup
* flag 0 means search from the head, 1 means continue search.
+ *	buf_size	maximum length that can be written to the
+ *			prefix_name buffer
* Returns:
* 1 if a symbol matches the given prefix.
* 0 if no string found
*/
-int kallsyms_symbol_next(char *prefix_name, int flag)
+int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size)
{
int prefix_len = strlen(prefix_name);
static loff_t pos;
pos = 0;
while ((name = kdb_walk_kallsyms(&pos))) {
- if (strncmp(name, prefix_name, prefix_len) == 0) {
- strncpy(prefix_name, name, strlen(name)+1);
- return 1;
- }
+ if (!strncmp(name, prefix_name, prefix_len))
+ return strscpy(prefix_name, name, buf_size);
}
return 0;
}
*word = w8;
break;
}
- /* drop through */
+ /* fall through */
default:
diag = KDB_BADWIDTH;
kdb_printf("kdb_getphysword: bad width %ld\n", (long) size);
*word = w8;
break;
}
- /* drop through */
+ /* fall through */
default:
diag = KDB_BADWIDTH;
kdb_printf("kdb_getword: bad width %ld\n", (long) size);
diag = kdb_putarea(addr, w8);
break;
}
- /* drop through */
+ /* fall through */
default:
diag = KDB_BADWIDTH;
kdb_printf("kdb_putword: bad width %ld\n", (long) size);
__func__, dah_first);
if (dah_first) {
h_used = (struct debug_alloc_header *)debug_alloc_pool;
- kdb_printf("%s: h_used %p size %d\n", __func__, h_used,
+ kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
h_used->size);
}
do {
h_used = (struct debug_alloc_header *)
((char *)h_free + dah_overhead + h_free->size);
- kdb_printf("%s: h_used %p size %d caller %p\n",
+ kdb_printf("%s: h_used %px size %d caller %px\n",
__func__, h_used, h_used->size, h_used->caller);
h_free = (struct debug_alloc_header *)
(debug_alloc_pool + h_free->next);
((char *)h_free + dah_overhead + h_free->size);
if ((char *)h_used - debug_alloc_pool !=
sizeof(debug_alloc_pool_aligned))
- kdb_printf("%s: h_used %p size %d caller %p\n",
+ kdb_printf("%s: h_used %px size %d caller %px\n",
__func__, h_used, h_used->size, h_used->caller);
out:
spin_unlock(&dap_lock);
}
if (!dev_is_dma_coherent(dev) &&
- (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+ (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0 &&
+ dev_addr != DIRECT_MAPPING_ERROR)
arch_sync_dma_for_device(dev, phys, size, dir);
return dev_addr;
BUG_ON((uprobe->offset & ~PAGE_MASK) +
UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
- smp_wmb(); /* pairs with rmb() in find_active_uprobe() */
+ smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
set_bit(UPROBE_COPY_INSN, &uprobe->flags);
out:
* After we hit the bp, _unregister + _register can install the
* new and not-yet-analyzed uprobe at the same address, restart.
*/
- smp_rmb(); /* pairs with wmb() in install_breakpoint() */
if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
goto out;
+ /*
+ * Pairs with the smp_wmb() in prepare_uprobe().
+ *
+ * Guarantees that if we see the UPROBE_COPY_INSN bit set, then
+ * we must also see the stores to &uprobe->arch performed by the
+ * prepare_uprobe() call.
+ */
+ smp_rmb();
+
/* Tracing handlers use ->utask to communicate with fetch methods */
if (!get_utask())
goto out;
return target;
}
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
+static unsigned long cpu_util_without(int cpu, struct task_struct *p);
-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
+static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
{
- return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
+ return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
}
/*
avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
- spare_cap = capacity_spare_wake(i, p);
+ spare_cap = capacity_spare_without(i, p);
if (spare_cap > max_spare_cap)
max_spare_cap = spare_cap;
return prev_cpu;
/*
- * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
- * last_update_time.
+ * We need task's util for capacity_spare_without, sync it up to
+ * prev_cpu's last_update_time.
*/
if (!(sd_flag & SD_BALANCE_FORK))
sync_entity_load_avg(&p->se);
}
/*
- * cpu_util_wake: Compute CPU utilization with any contributions from
- * the waking task p removed.
+ * cpu_util_without: compute cpu utilization without any contributions from *p
+ * @cpu: the CPU whose utilization is requested
+ * @p: the task whose utilization should be discounted
+ *
+ * The utilization of a CPU is defined by the utilization of tasks currently
+ * enqueued on that CPU as well as tasks which are currently sleeping after an
+ * execution on that CPU.
+ *
+ * This method returns the utilization of the specified CPU by discounting the
+ * utilization of the specified task, whenever the task is currently
+ * contributing to the CPU utilization.
*/
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
+static unsigned long cpu_util_without(int cpu, struct task_struct *p)
{
struct cfs_rq *cfs_rq;
unsigned int util;
cfs_rq = &cpu_rq(cpu)->cfs;
util = READ_ONCE(cfs_rq->avg.util_avg);
- /* Discount task's blocked util from CPU's util */
+ /* Discount task's util from CPU's util */
util -= min_t(unsigned int, util, task_util(p));
/*
* a) if *p is the only task sleeping on this CPU, then:
* cpu_util (== task_util) > util_est (== 0)
* and thus we return:
- * cpu_util_wake = (cpu_util - task_util) = 0
+ * cpu_util_without = (cpu_util - task_util) = 0
*
* b) if other tasks are SLEEPING on this CPU, which is now exiting
* IDLE, then:
* cpu_util >= task_util
* cpu_util > util_est (== 0)
* and thus we discount *p's blocked utilization to return:
- * cpu_util_wake = (cpu_util - task_util) >= 0
+ * cpu_util_without = (cpu_util - task_util) >= 0
*
* c) if other tasks are RUNNABLE on that CPU and
* util_est > cpu_util
* covered by the following code when estimated utilization is
* enabled.
*/
- if (sched_feat(UTIL_EST))
- util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
+ if (sched_feat(UTIL_EST)) {
+ unsigned int estimated =
+ READ_ONCE(cfs_rq->avg.util_est.enqueued);
+
+ /*
+ * Despite the following checks we still have a small window
+ * for a possible race, when an execl's select_task_rq_fair()
+ * races with LB's detach_task():
+ *
+ * detach_task()
+ * p->on_rq = TASK_ON_RQ_MIGRATING;
+ * ---------------------------------- A
+ * deactivate_task() \
+ * dequeue_task() + RaceTime
+ * util_est_dequeue() /
+ * ---------------------------------- B
+ *
+	 * The additional check on "current == p" is required to
+	 * properly fix the execl regression and helps further
+	 * reduce the chances of the above race.
+ */
+ if (unlikely(task_on_rq_queued(p) || current == p)) {
+ estimated -= min_t(unsigned int, estimated,
+ (_task_util_est(p) | UTIL_AVG_UNCHANGED));
+ }
+ util = max(util, estimated);
+ }
/*
* Utilization (estimated) can exceed the CPU capacity, thus let's
*/
void cgroup_move_task(struct task_struct *task, struct css_set *to)
{
- bool move_psi = !psi_disabled;
unsigned int task_flags = 0;
struct rq_flags rf;
struct rq *rq;
- if (move_psi) {
- rq = task_rq_lock(task, &rf);
+ if (psi_disabled) {
+ /*
+ * Lame to do this here, but the scheduler cannot be locked
+ * from the outside, so we move cgroups from inside sched/.
+ */
+ rcu_assign_pointer(task->cgroups, to);
+ return;
+ }
- if (task_on_rq_queued(task))
- task_flags = TSK_RUNNING;
- else if (task->in_iowait)
- task_flags = TSK_IOWAIT;
+ rq = task_rq_lock(task, &rf);
- if (task->flags & PF_MEMSTALL)
- task_flags |= TSK_MEMSTALL;
+ if (task_on_rq_queued(task))
+ task_flags = TSK_RUNNING;
+ else if (task->in_iowait)
+ task_flags = TSK_IOWAIT;
- if (task_flags)
- psi_task_change(task, task_flags, 0);
- }
+ if (task->flags & PF_MEMSTALL)
+ task_flags |= TSK_MEMSTALL;
- /*
- * Lame to do this here, but the scheduler cannot be locked
- * from the outside, so we move cgroups from inside sched/.
- */
+ if (task_flags)
+ psi_task_change(task, task_flags, 0);
+
+ /* See comment above */
rcu_assign_pointer(task->cgroups, to);
- if (move_psi) {
- if (task_flags)
- psi_task_change(task, 0, task_flags);
+ if (task_flags)
+ psi_task_change(task, 0, task_flags);
- task_rq_unlock(rq, task, &rf);
- }
+ task_rq_unlock(rq, task, &rf);
}
#endif /* CONFIG_CGROUPS */
i++;
} else if (fmt[i] == 'p' || fmt[i] == 's') {
mod[fmt_cnt]++;
- i++;
- if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
+ /* disallow any further format extensions */
+ if (fmt[i + 1] != 0 &&
+ !isspace(fmt[i + 1]) &&
+ !ispunct(fmt[i + 1]))
return -EINVAL;
fmt_cnt++;
- if (fmt[i - 1] == 's') {
+ if (fmt[i] == 's') {
if (str_seen)
/* allow only one '%s' per fmt string */
return -EINVAL;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static int profile_graph_entry(struct ftrace_graph_ent *trace)
{
- int index = trace->depth;
+ int index = current->curr_ret_stack;
function_profile_call(trace->func, 0, NULL, NULL);
if (!fgraph_graph_time) {
int index;
- index = trace->depth;
+ index = current->curr_ret_stack;
/* Append this call time to the parent time to subtract */
if (index)
atomic_set(&t->tracing_graph_pause, 0);
atomic_set(&t->trace_overrun, 0);
t->curr_ret_stack = -1;
+ t->curr_ret_depth = -1;
/* Make sure the tasks see the -1 first: */
smp_wmb();
t->ret_stack = ret_stack_list[start++];
void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
{
t->curr_ret_stack = -1;
+ t->curr_ret_depth = -1;
/*
* The idle task has no parent, it either has its own
* stack or no stack at all.
/* Make sure we do not use the parent ret_stack */
t->ret_stack = NULL;
t->curr_ret_stack = -1;
+ t->curr_ret_depth = -1;
if (ftrace_graph_active) {
struct ftrace_ret_stack *ret_stack;
* can only be modified by current, we can reuse trace_recursion.
*/
TRACE_IRQ_BIT,
+
+ /* Set if the function is in the set_graph_function file */
+ TRACE_GRAPH_BIT,
+
+ /*
+	 * In the very unlikely case that an interrupt came in
+	 * at the start of graph tracing, and we want to trace
+	 * the function in that interrupt, the depth can be greater
+	 * than zero, because of the preempted start of a previous
+	 * trace. In an even more unlikely case, the depth could be 2
+	 * if a softirq interrupted the start of graph tracing,
+	 * followed by an interrupt preempting the start of graph
+	 * tracing in the softirq. The depth can even be 3
+	 * if an NMI came in at the start of an interrupt function
+	 * that preempted a softirq's start of a function that
+	 * preempted normal context! Luckily, it can't be
+	 * greater than 3, so the next two bits are a mask
+	 * of what the depth is when we set TRACE_GRAPH_BIT.
+ */
+
+ TRACE_GRAPH_DEPTH_START_BIT,
+ TRACE_GRAPH_DEPTH_END_BIT,
};
#define trace_recursion_set(bit) do { (current)->trace_recursion |= (1<<(bit)); } while (0)
#define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0)
#define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit)))
+#define trace_recursion_depth() \
+ (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3)
+#define trace_recursion_set_depth(depth) \
+ do { \
+ current->trace_recursion &= \
+ ~(3 << TRACE_GRAPH_DEPTH_START_BIT); \
+ current->trace_recursion |= \
+ ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \
+ } while (0)
+
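A quick sanity check of the two-bit depth encoding (illustrative only):

/*
 *   trace_recursion_set_depth(2)   stores binary 10 in the two DEPTH
 *                                  bits;
 *   trace_recursion_depth() == 2   reads them back, masked with 3,
 *                                  matching the "can't be greater than
 *                                  3" argument above.
 */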
#define TRACE_CONTEXT_BITS 4
#define TRACE_FTRACE_START TRACE_FTRACE_BIT
extern struct ftrace_hash *ftrace_graph_hash;
extern struct ftrace_hash *ftrace_graph_notrace_hash;
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
{
+ unsigned long addr = trace->func;
int ret = 0;
preempt_disable_notrace();
}
if (ftrace_lookup_ip(ftrace_graph_hash, addr)) {
+
+ /*
+ * This needs to be cleared on the return functions
+ * when the depth is zero.
+ */
+ trace_recursion_set(TRACE_GRAPH_BIT);
+ trace_recursion_set_depth(trace->depth);
+
/*
* If no irqs are to be traced, but a set_graph_function
* is set, and called by an interrupt handler, we still
return ret;
}
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{
+ if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+ trace->depth == trace_recursion_depth())
+ trace_recursion_clear(TRACE_GRAPH_BIT);
+}
+
static inline int ftrace_graph_notrace_addr(unsigned long addr)
{
int ret = 0;
return ret;
}
#else
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
{
return 1;
}
{
return 0;
}
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{ }
#endif /* CONFIG_DYNAMIC_FTRACE */
extern unsigned int fgraph_max_depth;
static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
{
/* trace it when it is-nested-in or is a function enabled. */
- return !(trace->depth || ftrace_graph_addr(trace->func)) ||
+ return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
+ ftrace_graph_addr(trace)) ||
(trace->depth < 0) ||
(fgraph_max_depth && trace->depth >= fgraph_max_depth);
}
struct trace_seq *s, u32 flags);
/* Add a function return address to the trace stack on thread info. */
-int
-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,
+static int
+ftrace_push_return_trace(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp)
{
unsigned long long calltime;
#ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
current->ret_stack[index].retp = retp;
#endif
- *depth = current->curr_ret_stack;
+ return 0;
+}
+
+int function_graph_enter(unsigned long ret, unsigned long func,
+ unsigned long frame_pointer, unsigned long *retp)
+{
+ struct ftrace_graph_ent trace;
+
+ trace.func = func;
+ trace.depth = ++current->curr_ret_depth;
+
+ if (ftrace_push_return_trace(ret, func,
+ frame_pointer, retp))
+ goto out;
+
+ /* Only trace if the calling function expects to */
+ if (!ftrace_graph_entry(&trace))
+ goto out_ret;
return 0;
+ out_ret:
+ current->curr_ret_stack--;
+ out:
+ current->curr_ret_depth--;
+ return -EBUSY;
}
/* Retrieve a function return address from the trace stack on thread info. */
trace->func = current->ret_stack[index].func;
trace->calltime = current->ret_stack[index].calltime;
	trace->overrun = atomic_read(&current->trace_overrun);
- trace->depth = index;
+ trace->depth = current->curr_ret_depth--;
+ /*
+ * We still want to trace interrupts coming in if
+ * max_depth is set to 1. Make sure the decrement is
+ * seen before ftrace_graph_return.
+ */
+ barrier();
}
/*
ftrace_pop_return_trace(&trace, &ret, frame_pointer);
trace.rettime = trace_clock_local();
+ ftrace_graph_return(&trace);
+ /*
+ * The ftrace_graph_return() may still access the current
+ * ret_stack structure, we need to make sure the update of
+ * curr_ret_stack is after that.
+ */
barrier();
current->curr_ret_stack--;
/*
return ret;
}
- /*
- * The trace should run after decrementing the ret counter
- * in case an interrupt were to come in. We don't want to
- * lose the interrupt if max_depth is set.
- */
- ftrace_graph_return(&trace);
-
if (unlikely(!ret)) {
ftrace_graph_stop();
WARN_ON(1);
int cpu;
int pc;
+ ftrace_graph_addr_finish(trace);
+
local_irq_save(flags);
cpu = raw_smp_processor_id();
data = per_cpu_ptr(tr->trace_buffer.data, cpu);
static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
{
+ ftrace_graph_addr_finish(trace);
+
if (tracing_thresh &&
(trace->rettime - trace->calltime < tracing_thresh))
return;
unsigned long flags;
int pc;
+ ftrace_graph_addr_finish(trace);
+
if (!func_prolog_dec(tr, &data, &flags))
return;
unsigned long flags;
int pc;
+ ftrace_graph_addr_finish(trace);
+
if (!func_prolog_preempt_disable(tr, &data, &pc))
return;
return bytes;
}
+static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes,
+ __wsum *csum, struct iov_iter *i)
+{
+ struct pipe_inode_info *pipe = i->pipe;
+ size_t n, r;
+ size_t off = 0;
+ __wsum sum = *csum, next;
+ int idx;
+
+ if (!sanity(i))
+ return 0;
+
+ bytes = n = push_pipe(i, bytes, &idx, &r);
+ if (unlikely(!n))
+ return 0;
+ for ( ; n; idx = next_idx(idx, pipe), r = 0) {
+ size_t chunk = min_t(size_t, n, PAGE_SIZE - r);
+ char *p = kmap_atomic(pipe->bufs[idx].page);
+ next = csum_partial_copy_nocheck(addr, p + r, chunk, 0);
+ sum = csum_block_add(sum, next, off);
+ kunmap_atomic(p);
+ i->idx = idx;
+ i->iov_offset = r + chunk;
+ n -= chunk;
+ off += chunk;
+ addr += chunk;
+ }
+ i->count -= bytes;
+ *csum = sum;
+ return bytes;
+}
+
size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
const char *from = addr;
const char *from = addr;
__wsum sum, next;
size_t off = 0;
+
+ if (unlikely(iov_iter_is_pipe(i)))
+ return csum_and_copy_to_pipe_iter(addr, bytes, csum, i);
+
sum = *csum;
- if (unlikely(iov_iter_is_pipe(i) || iov_iter_is_discard(i))) {
+ if (unlikely(iov_iter_is_discard(i))) {
WARN_ON(1); /* for now */
return 0;
}
if (req->fw->size > PAGE_SIZE) {
pr_err("Testing interface must use PAGE_SIZE firmware for now\n");
rc = -EINVAL;
+ goto out;
}
memcpy(buf, req->fw->data, req->fw->size);
XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));
/* We should see two elements in the array */
+ rcu_read_lock();
xas_for_each(&xas, entry, ULONG_MAX)
seen++;
+ rcu_read_unlock();
XA_BUG_ON(xa, seen != 2);
/* One of which is marked */
xas_set(&xas, 0);
seen = 0;
+ rcu_read_lock();
xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
seen++;
+ rcu_read_unlock();
XA_BUG_ON(xa, seen != 1);
}
XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
xa_erase_index(xa, 12345678);
XA_BUG_ON(xa, !xa_empty(xa));
+ /* And so does xa_insert */
+ xa_reserve(xa, 12345678, GFP_KERNEL);
+ XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
+ xa_erase_index(xa, 12345678);
+ XA_BUG_ON(xa, !xa_empty(xa));
+
/* Can iterate through a reserved entry */
xa_store_index(xa, 5, GFP_KERNEL);
xa_reserve(xa, 6, GFP_KERNEL);
XA_BUG_ON(xa, xa_load(xa, max) != NULL);
XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
+ xas_lock(&xas);
XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
+ xas_unlock(&xas);
XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
XA_BUG_ON(xa, xa_load(xa, max) != NULL);
XA_STATE(xas, xa, index);
xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
+ xas_lock(&xas);
XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
XA_BUG_ON(xa, xas.xa_index != index);
XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+ xas_unlock(&xas);
XA_BUG_ON(xa, !xa_empty(xa));
}
#endif
rcu_read_unlock();
/* We can erase multiple values with a single store */
- xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
+ xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
XA_BUG_ON(xa, !xa_empty(xa));
/* Even when the first slot is empty but the others aren't */
}
}
-static noinline void check_find(struct xarray *xa)
+static noinline void check_find_1(struct xarray *xa)
{
unsigned long i, j, k;
XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
}
XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_find_2(struct xarray *xa)
+{
+ void *entry;
+ unsigned long i, j, index = 0;
+
+ xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+ XA_BUG_ON(xa, true);
+ }
+
+ for (i = 0; i < 1024; i++) {
+ xa_store_index(xa, index, GFP_KERNEL);
+ j = 0;
+ index = 0;
+ xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+ XA_BUG_ON(xa, xa_mk_value(index) != entry);
+ XA_BUG_ON(xa, index != j++);
+ }
+ }
+
+ xa_destroy(xa);
+}
+
+static noinline void check_find(struct xarray *xa)
+{
+ check_find_1(xa);
+ check_find_2(xa);
check_multi_find(xa);
check_multi_find_2(xa);
}
__check_store_range(xa, 4095 + i, 4095 + j);
__check_store_range(xa, 4096 + i, 4096 + j);
__check_store_range(xa, 123456 + i, 123456 + j);
- __check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
+ __check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
}
}
}
XA_STATE(xas, xa, 1 << order);
xa_store_order(xa, 0, order, xa, GFP_KERNEL);
+ rcu_read_lock();
xas_load(&xas);
XA_BUG_ON(xa, xas.xa_node->count == 0);
XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
+ rcu_read_unlock();
xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
GFP_KERNEL);
EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);
-void __noreturn
-__ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
{
unsigned long flags;
* (see the xa_cmpxchg() implementation for an example).
*
* Return: If the slot already existed, returns the contents of this slot.
- * If the slot was newly created, returns NULL. If it failed to create the
- * slot, returns NULL and indicates the error in @xas.
+ * If the slot was newly created, returns %NULL. If it failed to create the
+ * slot, returns %NULL and indicates the error in @xas.
*/
static void *xas_create(struct xa_state *xas)
{
XA_STATE(xas, xa, index);
return xas_result(&xas, xas_store(&xas, NULL));
}
-EXPORT_SYMBOL_GPL(__xa_erase);
+EXPORT_SYMBOL(__xa_erase);
/**
- * xa_store() - Store this entry in the XArray.
+ * xa_erase() - Erase this entry from the XArray.
* @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
+ * @index: Index of entry.
*
- * After this function returns, loads from this index will return @entry.
- * Storing into an existing multislot entry updates the entry of every index.
- * The marks associated with @index are unaffected unless @entry is %NULL.
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
*
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
- * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
- * failed.
+ * Context: Any context. Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
*/
-void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+void *xa_erase(struct xarray *xa, unsigned long index)
{
- XA_STATE(xas, xa, index);
- void *curr;
-
- if (WARN_ON_ONCE(xa_is_internal(entry)))
- return XA_ERROR(-EINVAL);
+ void *entry;
- do {
- xas_lock(&xas);
- curr = xas_store(&xas, entry);
- if (xa_track_free(xa) && entry)
- xas_clear_mark(&xas, XA_FREE_MARK);
- xas_unlock(&xas);
- } while (xas_nomem(&xas, gfp));
+ xa_lock(xa);
+ entry = __xa_erase(xa, index);
+ xa_unlock(xa);
- return xas_result(&xas, curr);
+ return entry;
}
-EXPORT_SYMBOL(xa_store);
+EXPORT_SYMBOL(xa_erase);
/**
* __xa_store() - Store this entry in the XArray.
if (WARN_ON_ONCE(xa_is_internal(entry)))
return XA_ERROR(-EINVAL);
+ if (xa_track_free(xa) && !entry)
+ entry = XA_ZERO_ENTRY;
do {
curr = xas_store(&xas, entry);
- if (xa_track_free(xa) && entry)
+ if (xa_track_free(xa))
xas_clear_mark(&xas, XA_FREE_MARK);
} while (__xas_nomem(&xas, gfp));
EXPORT_SYMBOL(__xa_store);
/**
- * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * xa_store() - Store this entry in the XArray.
* @xa: XArray.
* @index: Index into array.
- * @old: Old value to test against.
- * @entry: New value to place in array.
+ * @entry: New entry.
* @gfp: Memory allocation flags.
*
- * If the entry at @index is the same as @old, replace it with @entry.
- * If the return value is equal to @old, then the exchange was successful.
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multislot entry updates the entry of every index.
+ * The marks associated with @index are unaffected unless @entry is %NULL.
*
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old value at this index or xa_err() if an error happened.
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
*/
-void *xa_cmpxchg(struct xarray *xa, unsigned long index,
- void *old, void *entry, gfp_t gfp)
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
{
- XA_STATE(xas, xa, index);
void *curr;
- if (WARN_ON_ONCE(xa_is_internal(entry)))
- return XA_ERROR(-EINVAL);
-
- do {
- xas_lock(&xas);
- curr = xas_load(&xas);
- if (curr == XA_ZERO_ENTRY)
- curr = NULL;
- if (curr == old) {
- xas_store(&xas, entry);
- if (xa_track_free(xa) && entry)
- xas_clear_mark(&xas, XA_FREE_MARK);
- }
- xas_unlock(&xas);
- } while (xas_nomem(&xas, gfp));
+ xa_lock(xa);
+ curr = __xa_store(xa, index, entry, gfp);
+ xa_unlock(xa);
- return xas_result(&xas, curr);
+ return curr;
}
-EXPORT_SYMBOL(xa_cmpxchg);
+EXPORT_SYMBOL(xa_store);
/**
* __xa_cmpxchg() - Store this entry in the XArray.
if (WARN_ON_ONCE(xa_is_internal(entry)))
return XA_ERROR(-EINVAL);
+ if (xa_track_free(xa) && !entry)
+ entry = XA_ZERO_ENTRY;
do {
curr = xas_load(&xas);
curr = NULL;
if (curr == old) {
xas_store(&xas, entry);
- if (xa_track_free(xa) && entry)
+ if (xa_track_free(xa))
xas_clear_mark(&xas, XA_FREE_MARK);
}
} while (__xas_nomem(&xas, gfp));
EXPORT_SYMBOL(__xa_cmpxchg);
/**
- * xa_reserve() - Reserve this index in the XArray.
+ * __xa_reserve() - Reserve this index in the XArray.
* @xa: XArray.
* @index: Index into array.
* @gfp: Memory allocation flags.
* Ensures there is somewhere to store an entry at @index in the array.
* If there is already something stored at @index, this function does
* nothing. If there was nothing there, the entry is marked as reserved.
- * Loads from @index will continue to see a %NULL pointer until a
- * subsequent store to @index.
+ * Loading from a reserved entry returns a %NULL pointer.
*
* If you do not use the entry that you have reserved, call xa_release()
* or xa_erase() to free any unnecessary memory.
*
- * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe
- * if specified in XArray flags. May sleep if the @gfp flags permit.
+ * Context: Any context. Expects the xa_lock to be held on entry. May
+ * release the lock, sleep and reacquire the lock if the @gfp flags permit.
* Return: 0 if the reservation succeeded or -ENOMEM if it failed.
*/
-int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
{
XA_STATE(xas, xa, index);
- unsigned int lock_type = xa_lock_type(xa);
void *curr;
do {
- xas_lock_type(&xas, lock_type);
curr = xas_load(&xas);
- if (!curr)
+ if (!curr) {
xas_store(&xas, XA_ZERO_ENTRY);
- xas_unlock_type(&xas, lock_type);
- } while (xas_nomem(&xas, gfp));
+ if (xa_track_free(xa))
+ xas_clear_mark(&xas, XA_FREE_MARK);
+ }
+ } while (__xas_nomem(&xas, gfp));
return xas_error(&xas);
}
-EXPORT_SYMBOL(xa_reserve);
+EXPORT_SYMBOL(__xa_reserve);
#ifdef CONFIG_XARRAY_MULTI
static void xas_set_range(struct xa_state *xas, unsigned long first,
do {
xas_lock(&xas);
if (entry) {
- unsigned int order = (last == ~0UL) ? 64 :
- ilog2(last + 1);
+ unsigned int order = BITS_PER_LONG;
+ if (last + 1)
+ order = __ffs(last + 1);
xas_set_order(&xas, last, order);
xas_create(&xas);
if (xas_error(&xas))
* @index: Index of entry.
* @mark: Mark number.
*
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
*
* Context: Any context. Expects xa_lock to be held on entry.
*/
if (entry)
xas_set_mark(&xas, mark);
}
-EXPORT_SYMBOL_GPL(__xa_set_mark);
+EXPORT_SYMBOL(__xa_set_mark);
/**
* __xa_clear_mark() - Clear this mark on this entry while locked.
if (entry)
xas_clear_mark(&xas, mark);
}
-EXPORT_SYMBOL_GPL(__xa_clear_mark);
+EXPORT_SYMBOL(__xa_clear_mark);
/**
* xa_get_mark() - Inquire whether this mark is set on this entry.
* @index: Index of entry.
* @mark: Mark number.
*
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
*
* Context: Process context. Takes and releases the xa_lock.
*/
entry = xas_find_marked(&xas, max, filter);
else
entry = xas_find(&xas, max);
+ if (xas.xa_node == XAS_BOUNDS)
+ break;
if (xas.xa_shift) {
if (xas.xa_index & ((1UL << xas.xa_shift) - 1))
continue;
*
* The @filter may be an XArray mark value, in which case entries which are
* marked with that mark will be copied. It may also be %XA_PRESENT, in
- * which case all entries which are not NULL will be copied.
+ * which case all entries which are not %NULL will be copied.
*
* The entries returned may not represent a snapshot of the XArray at a
* moment in time. For example, if another thread stores to index 5, then
* @vma: vm_area_struct mapping @address
* @address: virtual address to look up
* @flags: flags modifying lookup behaviour
- * @page_mask: on output, *page_mask is set according to the size of the page
+ * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a
+ * pointer to output page_mask
*
* @flags can have FOLL_ flags set, defined in <linux/mm.h>
*
- * Returns the mapped (struct page *), %NULL if no mapping exists, or
+ * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches
+ * the device's dev_pagemap metadata to avoid repeating expensive lookups.
+ *
+ * On output, the @ctx->page_mask is set according to the size of the page.
+ *
+ * Return: the mapped (struct page *), %NULL if no mapping exists, or
* an error pointer if there is a mapping to something not represented
* by a page descriptor (see also vm_normal_page()).
*/
int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
struct vm_area_struct *vma)
{
- pte_t *src_pte, *dst_pte, entry;
+ pte_t *src_pte, *dst_pte, entry, dst_entry;
struct page *ptepage;
unsigned long addr;
int cow;
break;
}
- /* If the pagetables are shared don't copy or take references */
- if (dst_pte == src_pte)
+ /*
+ * If the pagetables are shared don't copy or take references.
+ * dst_pte == src_pte is the common case of src/dest sharing.
+ *
+ * However, src could have 'unshared' and dst shares with
+ * another vma. If dst_pte !none, this implies sharing.
+ * Check here before taking page table lock, and once again
+ * after taking the lock below.
+ */
+ dst_entry = huge_ptep_get(dst_pte);
+ if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
continue;
dst_ptl = huge_pte_lock(h, dst, dst_pte);
src_ptl = huge_pte_lockptr(h, src, src_pte);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
entry = huge_ptep_get(src_pte);
- if (huge_pte_none(entry)) { /* skip none entry */
+ dst_entry = huge_ptep_get(dst_pte);
+ if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
+ /*
+ * Skip if src entry none. Also, skip in the
+ * unlikely case dst entry !none as this implies
+ * sharing with another vma.
+ */
;
} else if (unlikely(is_hugetlb_entry_migration(entry) ||
is_hugetlb_entry_hwpoisoned(entry))) {
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
/*
- * Common iterator interface used to define for_each_mem_range().
+ * Common iterator interface used to define for_each_mem_pfn_range().
*/
void __init_memblock __next_mem_pfn_range(int *idx, int nid,
unsigned long *out_start_pfn,
unsigned int cpuset_mems_cookie;
int reserve_flags;
- /*
- * In the slowpath, we sanity check order to avoid ever trying to
- * reclaim >= MAX_ORDER areas which will never succeed. Callers may
- * be using allocators in order of preference for an area that is
- * too large.
- */
- if (order >= MAX_ORDER) {
- WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
- return NULL;
- }
-
/*
* We also sanity check to catch abuse of atomic reserves being used by
* callers that are not in atomic context.
gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
struct alloc_context ac = { };
+ /*
+ * There are several places where we assume that the order value is sane
+ * so bail out early if the request is out of bound.
+ */
+ if (unlikely(order >= MAX_ORDER)) {
+ WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
+ return NULL;
+ }
+
gfp_mask &= gfp_allowed_mask;
alloc_mask = gfp_mask;
if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
if (PageReserved(page))
goto unmovable;
+ /*
+ * If the zone is movable and we have ruled out all reserved
+ * pages then it should be reasonably safe to assume the rest
+ * is movable.
+ */
+ if (zone_idx(zone) == ZONE_MOVABLE)
+ continue;
+
/*
* Hugepages are not in LRU lists, but they're movable.
	 * We need not scan over tail pages because we don't
inode_lock(inode);
/* We're holding i_mutex so we can access i_size directly */
- if (offset < 0)
- offset = -EINVAL;
- else if (offset >= inode->i_size)
+ if (offset < 0 || offset >= inode->i_size)
offset = -ENXIO;
else {
start = offset >> PAGE_SHIFT;
unsigned int type;
int i;
- p = kzalloc(sizeof(*p), GFP_KERNEL);
+ p = kvzalloc(sizeof(*p), GFP_KERNEL);
if (!p)
return ERR_PTR(-ENOMEM);
}
if (type >= MAX_SWAPFILES) {
spin_unlock(&swap_lock);
- kfree(p);
+ kvfree(p);
return ERR_PTR(-EPERM);
}
if (type >= nr_swapfiles) {
smp_wmb();
nr_swapfiles++;
} else {
- kfree(p);
+ kvfree(p);
p = swap_info[type];
/*
* Do not memset this entry: a racing procfs swap_next()
/*
* The fast way of checking if there are any vmstat diffs.
- * This works because the diffs are byte sized items.
*/
- if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS))
+ if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *
+ sizeof(p->vm_stat_diff[0])))
return true;
#ifdef CONFIG_NUMA
- if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS))
+ if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *
+ sizeof(p->vm_numa_stat_diff[0])))
return true;
#endif
}
#define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
#define BUDDY_MASK (0x3)
+#define BUDDY_SHIFT 2
/**
* struct z3fold_pool - stores metadata for each z3fold pool
MIDDLE_CHUNK_MAPPED,
NEEDS_COMPACTING,
PAGE_STALE,
- UNDER_RECLAIM
+ PAGE_CLAIMED, /* by either reclaim or free */
};
/*****************
clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
clear_bit(NEEDS_COMPACTING, &page->private);
clear_bit(PAGE_STALE, &page->private);
- clear_bit(UNDER_RECLAIM, &page->private);
+ clear_bit(PAGE_CLAIMED, &page->private);
spin_lock_init(&zhdr->page_lock);
kref_init(&zhdr->refcount);
unsigned long handle;
handle = (unsigned long)zhdr;
- if (bud != HEADLESS)
- handle += (bud + zhdr->first_num) & BUDDY_MASK;
+ if (bud != HEADLESS) {
+ handle |= (bud + zhdr->first_num) & BUDDY_MASK;
+ if (bud == LAST)
+ handle |= (zhdr->last_chunks << BUDDY_SHIFT);
+ }
return handle;
}
return (struct z3fold_header *)(handle & PAGE_MASK);
}
+/* only for LAST bud, returns zero otherwise */
+static unsigned short handle_to_chunks(unsigned long handle)
+{
+ return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
+}
+
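A worked encoding, assuming enum buddy makes LAST == 3 and
zhdr->first_num == 0 (illustrative values):

/*
 *   handle = (unsigned long)zhdr | 3 | (5 << BUDDY_SHIFT)
 *          = zhdr | 0x17                        (for last_chunks == 5)
 *   handle_to_buddy(handle)  -> 0x17 & BUDDY_MASK        == 3 (LAST)
 *   handle_to_chunks(handle) -> (0x17 & ~PAGE_MASK) >> 2 == 5
 */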
/*
* (handle & BUDDY_MASK) < zhdr->first_num is possible in encode_handle
 * but that doesn't matter, because the masking will result in the
page = virt_to_page(zhdr);
if (test_bit(PAGE_HEADLESS, &page->private)) {
- /* HEADLESS page stored */
- bud = HEADLESS;
- } else {
- z3fold_page_lock(zhdr);
- bud = handle_to_buddy(handle);
-
- switch (bud) {
- case FIRST:
- zhdr->first_chunks = 0;
- break;
- case MIDDLE:
- zhdr->middle_chunks = 0;
- zhdr->start_middle = 0;
- break;
- case LAST:
- zhdr->last_chunks = 0;
- break;
- default:
- pr_err("%s: unknown bud %d\n", __func__, bud);
- WARN_ON(1);
- z3fold_page_unlock(zhdr);
- return;
+ /* if a headless page is under reclaim, just leave.
+ * NB: we use test_and_set_bit for a reason: if the bit
+ * has not been set before, we release this page
+ * immediately so we don't care about its value any more.
+ */
+ if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+ spin_lock(&pool->lock);
+ list_del(&page->lru);
+ spin_unlock(&pool->lock);
+ free_z3fold_page(page);
+ atomic64_dec(&pool->pages_nr);
}
+ return;
}
- if (bud == HEADLESS) {
- spin_lock(&pool->lock);
- list_del(&page->lru);
- spin_unlock(&pool->lock);
- free_z3fold_page(page);
- atomic64_dec(&pool->pages_nr);
+ /* Non-headless case */
+ z3fold_page_lock(zhdr);
+ bud = handle_to_buddy(handle);
+
+ switch (bud) {
+ case FIRST:
+ zhdr->first_chunks = 0;
+ break;
+ case MIDDLE:
+ zhdr->middle_chunks = 0;
+ break;
+ case LAST:
+ zhdr->last_chunks = 0;
+ break;
+ default:
+ pr_err("%s: unknown bud %d\n", __func__, bud);
+ WARN_ON(1);
+ z3fold_page_unlock(zhdr);
return;
}
atomic64_dec(&pool->pages_nr);
return;
}
- if (test_bit(UNDER_RECLAIM, &page->private)) {
+ if (test_bit(PAGE_CLAIMED, &page->private)) {
z3fold_page_unlock(zhdr);
return;
}
}
list_for_each_prev(pos, &pool->lru) {
page = list_entry(pos, struct page, lru);
+
+ /* this bit could have been set by free, in which case
+ * we pass over to the next page in the pool.
+ */
+ if (test_and_set_bit(PAGE_CLAIMED, &page->private))
+ continue;
+
+ zhdr = page_address(page);
if (test_bit(PAGE_HEADLESS, &page->private))
- /* candidate found */
break;
- zhdr = page_address(page);
- if (!z3fold_page_trylock(zhdr))
+ if (!z3fold_page_trylock(zhdr)) {
+ zhdr = NULL;
continue; /* can't evict at this point */
+ }
kref_get(&zhdr->refcount);
list_del_init(&zhdr->buddy);
zhdr->cpu = -1;
- set_bit(UNDER_RECLAIM, &page->private);
break;
}
+ if (!zhdr)
+ break;
+
list_del_init(&page->lru);
spin_unlock(&pool->lock);
if (test_bit(PAGE_HEADLESS, &page->private)) {
if (ret == 0) {
free_z3fold_page(page);
+ atomic64_dec(&pool->pages_nr);
return 0;
}
spin_lock(&pool->lock);
spin_unlock(&pool->lock);
} else {
z3fold_page_lock(zhdr);
- clear_bit(UNDER_RECLAIM, &page->private);
+ clear_bit(PAGE_CLAIMED, &page->private);
if (kref_put(&zhdr->refcount,
release_z3fold_page_locked)) {
atomic64_dec(&pool->pages_nr);
set_bit(MIDDLE_CHUNK_MAPPED, &page->private);
break;
case LAST:
- addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
+ addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT);
break;
default:
pr_err("unknown buddy id %d\n", buddy);
*/
int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface)
{
+ static const size_t tvlv_padding = sizeof(__be32);
struct batadv_elp_packet *elp_packet;
unsigned char *elp_buff;
u32 random_seqno;
size_t size;
int res = -ENOMEM;
- size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN;
+ size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding;
hard_iface->bat_v.elp_skb = dev_alloc_skb(size);
if (!hard_iface->bat_v.elp_skb)
goto out;
skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN);
- elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN);
+ elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb,
+ BATADV_ELP_HLEN + tvlv_padding);
elp_packet = (struct batadv_elp_packet *)elp_buff;
elp_packet->packet_type = BATADV_ELP;
kfree(entry);
packet = (struct batadv_frag_packet *)skb_out->data;
- size = ntohs(packet->total_size);
+ size = ntohs(packet->total_size) + hdr_size;
/* Make room for the rest of the fragments. */
if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
struct metadata_dst *tunnel_dst;
};
+/* private vlan flags */
+enum {
+ BR_VLFLAG_PER_PORT_STATS = BIT(0),
+};
+
/**
* struct net_bridge_vlan - per-vlan entry
*
* @vnode: rhashtable member
* @vid: VLAN id
* @flags: bridge vlan flags
+ * @priv_flags: private (in-kernel) bridge vlan flags
* @stats: per-cpu VLAN statistics
* @br: if MASTER flag set, this points to a bridge struct
* @port: if MASTER flag unset, this points to a port struct
struct rhash_head tnode;
u16 vid;
u16 flags;
+ u16 priv_flags;
struct br_vlan_stats __percpu *stats;
union {
struct net_bridge *br;
v = container_of(rcu, struct net_bridge_vlan, rcu);
WARN_ON(br_vlan_is_master(v));
/* if we had per-port stats configured then free them here */
- if (v->brvlan->stats != v->stats)
+ if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS)
free_percpu(v->stats);
v->stats = NULL;
kfree(v);
err = -ENOMEM;
goto out_filt;
}
+ v->priv_flags |= BR_VLFLAG_PER_PORT_STATS;
} else {
v->stats = masterv->stats;
}
} else
ifindex = ro->ifindex;
- if (ro->fd_frames) {
+ dev = dev_get_by_index(sock_net(sk), ifindex);
+ if (!dev)
+ return -ENXIO;
+
+ err = -EINVAL;
+ if (ro->fd_frames && dev->mtu == CANFD_MTU) {
if (unlikely(size != CANFD_MTU && size != CAN_MTU))
- return -EINVAL;
+ goto put_dev;
} else {
if (unlikely(size != CAN_MTU))
- return -EINVAL;
+ goto put_dev;
}
- dev = dev_get_by_index(sock_net(sk), ifindex);
- if (!dev)
- return -ENXIO;
-
skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
msg->msg_flags & MSG_DONTWAIT, &err);
if (!skb)
struct bio_vec bvec;
int ret;
- /* sendpage cannot properly handle pages with page_count == 0,
- * we need to fallback to sendmsg if that's the case */
- if (page_count(page) >= 1)
+ /*
+ * sendpage cannot properly handle pages with page_count == 0,
+ * we need to fall back to sendmsg if that's the case.
+ *
+ * Same goes for slab pages: skb_can_coalesce() allows
+ * coalescing neighboring slab objects into a single frag which
+ * triggers one of hardened usercopy checks.
+ */
+ if (page_count(page) >= 1 && !PageSlab(page))
return __ceph_tcp_sendpage(sock, page, offset, size, more);
bvec.bv_page = page;
skb->vlan_tci = 0;
skb->dev = napi->dev;
skb->skb_iif = 0;
+
+ /* eth_type_trans() assumes pkt_type is PACKET_HOST */
+ skb->pkt_type = PACKET_HOST;
+
skb->encapsulation = 0;
skb_shinfo(skb)->gso_type = 0;
skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
if (work_done)
timeout = n->dev->gro_flush_timeout;
+ /* When the NAPI instance uses a timeout and keeps postponing
+	 * it, we need to somehow bound the time packets are kept in
+	 * the GRO layer.
+ */
+ napi_gro_flush(n, !!timeout);
if (timeout)
hrtimer_start(&n->timer, ns_to_ktime(timeout),
HRTIMER_MODE_REL_PINNED);
- else
- napi_gro_flush(n, false);
}
if (unlikely(!list_empty(&n->poll_list))) {
/* If n->poll_list is not empty, we need to mask irqs */
} else {
struct in6_addr *src6 = (struct in6_addr *)&tuple->ipv6.saddr;
struct in6_addr *dst6 = (struct in6_addr *)&tuple->ipv6.daddr;
- u16 hnum = ntohs(tuple->ipv6.dport);
int sdif = inet6_sdif(skb);
if (proto == IPPROTO_TCP)
sk = __inet6_lookup(net, &tcp_hashinfo, skb, 0,
src6, tuple->ipv6.sport,
- dst6, hnum,
+ dst6, ntohs(tuple->ipv6.dport),
dif, sdif, &refcounted);
else if (likely(ipv6_bpf_stub))
sk = ipv6_bpf_stub->udp6_lib_lookup(net,
src6, tuple->ipv6.sport,
- dst6, hnum,
+ dst6, tuple->ipv6.dport,
dif, sdif,
&udp_table, skb);
#endif
nf_reset(skb);
nf_reset_trace(skb);
+#ifdef CONFIG_NET_SWITCHDEV
+ skb->offload_fwd_mark = 0;
+ skb->offload_mr_fwd_mark = 0;
+#endif
+
if (!xnet)
return;
unsigned int fraglen;
unsigned int fraggap;
unsigned int alloclen;
- unsigned int pagedlen = 0;
+ unsigned int pagedlen;
struct sk_buff *skb_prev;
alloc_new_skb:
skb_prev = skb;
if (datalen > mtu - fragheaderlen)
datalen = maxfraglen - fragheaderlen;
fraglen = datalen + fragheaderlen;
+ pagedlen = 0;
if ((flags & MSG_MORE) &&
!(rt->dst.dev->features&NETIF_F_SG))
iph->version = 4;
iph->ihl = sizeof(struct iphdr) >> 2;
- iph->frag_off = df;
+ iph->frag_off = ip_mtu_locked(&rt->dst) ? 0 : df;
iph->protocol = proto;
iph->tos = tos;
iph->daddr = dst;
int ret;
ret = xt_register_target(&masquerade_tg_reg);
+ if (ret)
+ return ret;
- if (ret == 0)
- nf_nat_masquerade_ipv4_register_notifier();
+ ret = nf_nat_masquerade_ipv4_register_notifier();
+ if (ret)
+ xt_unregister_target(&masquerade_tg_reg);
return ret;
}
.notifier_call = masq_inet_event,
};
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
-void nf_nat_masquerade_ipv4_register_notifier(void)
+int nf_nat_masquerade_ipv4_register_notifier(void)
{
+ int ret = 0;
+
+ mutex_lock(&masq_mutex);
/* check if the notifier was already set */
- if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
- return;
+ if (++masq_refcnt > 1)
+ goto out_unlock;
/* Register for device down reports */
- register_netdevice_notifier(&masq_dev_notifier);
+ ret = register_netdevice_notifier(&masq_dev_notifier);
+ if (ret)
+ goto err_dec;
/* Register IP address change reports */
- register_inetaddr_notifier(&masq_inet_notifier);
+ ret = register_inetaddr_notifier(&masq_inet_notifier);
+ if (ret)
+ goto err_unregister;
+
+ mutex_unlock(&masq_mutex);
+ return ret;
+
+err_unregister:
+ unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+ masq_refcnt--;
+out_unlock:
+ mutex_unlock(&masq_mutex);
+ return ret;
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier);
void nf_nat_masquerade_ipv4_unregister_notifier(void)
{
+ mutex_lock(&masq_mutex);
/* check if the notifier still has clients */
- if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
- return;
+ if (--masq_refcnt > 0)
+ goto out_unlock;
unregister_netdevice_notifier(&masq_dev_notifier);
unregister_inetaddr_notifier(&masq_inet_notifier);
+out_unlock:
+ mutex_unlock(&masq_mutex);
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
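The error handling above follows a reusable shape: a reference count guarded by a mutex, where only the first registration does real work and a failure unwinds in reverse order before dropping the count again (the IPv6 variant further below is identical). A rough standalone sketch of the idiom, with a pthread mutex standing in for the kernel mutex and reg_a()/reg_b()/unreg_a() as assumed placeholders for the two notifier registrations:

    #include <pthread.h>

    static int refcnt;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* First caller performs the real registrations; later callers just
     * bump the count. On failure, undo in reverse order and drop the
     * count again so a later caller can retry. */
    static int shared_register(int (*reg_a)(void), int (*reg_b)(void),
                               void (*unreg_a)(void))
    {
        int ret = 0;

        pthread_mutex_lock(&lock);
        if (++refcnt > 1)
            goto out_unlock;        /* someone already registered */

        ret = reg_a();
        if (ret)
            goto err_dec;
        ret = reg_b();
        if (ret)
            goto err_unreg;
        goto out_unlock;

    err_unreg:
        unreg_a();
    err_dec:
        refcnt--;
    out_unlock:
        pthread_mutex_unlock(&lock);
        return ret;
    }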
if (ret < 0)
return ret;
- nf_nat_masquerade_ipv4_register_notifier();
+ ret = nf_nat_masquerade_ipv4_register_notifier();
+ if (ret)
+ nft_unregister_expr(&nft_masq_ipv4_type);
return ret;
}
u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
u32 delta_us;
- if (!delta)
- delta = 1;
- delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
- tcp_rcv_rtt_update(tp, delta_us, 0);
+ if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+ if (!delta)
+ delta = 1;
+ delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+ tcp_rcv_rtt_update(tp, delta_us, 0);
+ }
}
}
if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
flag & FLAG_ACKED) {
u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
- u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
- seq_rtt_us = ca_rtt_us = delta_us;
+ if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+ seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+ ca_rtt_us = seq_rtt_us;
+ }
}
rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */
if (seq_rtt_us < 0)
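Both hunks above add the same guard: delta counts TCP_TS_HZ ticks and is about to be scaled by USEC_PER_SEC / TCP_TS_HZ, so an unchecked product can overflow and feed a bogus RTT sample into the estimator. The check-before-multiply idiom they use, as a minimal standalone sketch (function and parameter names are assumptions):

    #include <limits.h>
    #include <stdint.h>

    /* Store ticks * us_per_tick in *out_us and return 1 if the product
     * fits in an int; return 0 otherwise (us_per_tick must be > 0).
     * "ticks < INT_MAX / us_per_tick" is the conservative bound the
     * kernel check above uses. */
    static int scale_ticks_to_us(uint32_t ticks, uint32_t us_per_tick,
                                 uint32_t *out_us)
    {
        if (ticks >= INT_MAX / us_per_tick)
            return 0;               /* would overflow: drop the sample */
        *out_us = ticks * us_per_tick;
        return 1;
    }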
* If the sack array is full, forget about the last one.
*/
if (this_sack >= TCP_NUM_SACKS) {
- if (tp->compressed_ack)
+ if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)
tcp_send_ack(sk);
this_sack--;
tp->rx_opt.num_sacks--;
if (TCP_SKB_CB(from)->has_rxtstamp) {
TCP_SKB_CB(to)->has_rxtstamp = true;
to->tstamp = from->tstamp;
+ skb_hwtstamps(to)->hwtstamp = skb_hwtstamps(from)->hwtstamp;
}
return true;
if (!tcp_is_sack(tp) ||
tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr)
goto send_now;
- tp->compressed_ack++;
+
+ if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) {
+ tp->compressed_ack_rcv_nxt = tp->rcv_nxt;
+ if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)
+ NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED,
+ tp->compressed_ack - TCP_FASTRETRANS_THRESH);
+ tp->compressed_ack = 0;
+ }
+
+ if (++tp->compressed_ack <= TCP_FASTRETRANS_THRESH)
+ goto send_now;
if (hrtimer_is_queued(&tp->compressed_ack_timer))
return;
{
struct tcp_sock *tp = tcp_sk(sk);
- if (unlikely(tp->compressed_ack)) {
+ if (unlikely(tp->compressed_ack > TCP_FASTRETRANS_THRESH)) {
NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED,
- tp->compressed_ack);
- tp->compressed_ack = 0;
+ tp->compressed_ack - TCP_FASTRETRANS_THRESH);
+ tp->compressed_ack = TCP_FASTRETRANS_THRESH;
if (hrtimer_try_to_cancel(&tp->compressed_ack_timer) == 1)
__sock_put(sk);
}
{
struct inet_connection_sock *icsk = inet_csk(sk);
u32 elapsed, start_ts;
+ s32 remaining;
start_ts = tcp_retransmit_stamp(sk);
if (!icsk->icsk_user_timeout || !start_ts)
return icsk->icsk_rto;
elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts;
- if (elapsed >= icsk->icsk_user_timeout)
+ remaining = icsk->icsk_user_timeout - elapsed;
+ if (remaining <= 0)
return 1; /* user timeout has passed; fire ASAP */
- else
- return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed));
+
+ return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining));
}
/**
(boundary - linear_backoff_thresh) * TCP_RTO_MAX;
timeout = jiffies_to_msecs(timeout);
}
- return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout;
+ return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0;
}
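The cast in the new return statement is the wraparound-safe way to compare free-running 32-bit timestamps: do the subtraction in unsigned arithmetic, then reinterpret the result as signed, so a start_ts that was sampled slightly in the future no longer looks like an enormous elapsed time. A minimal sketch of the difference (standalone helpers, not kernel code):

    #include <stdint.h>

    /* Naive form: if start was sampled even one tick after now, the
     * unsigned difference is huge and the timeout appears expired. */
    static int timed_out_naive(uint32_t now, uint32_t start, uint32_t timeout)
    {
        return (now - start) >= timeout;
    }

    /* Form used in the hunk above: reinterpreting the difference as
     * signed makes a "negative" elapsed time compare as not expired. */
    static int timed_out_safe(uint32_t now, uint32_t start, uint32_t timeout)
    {
        return (int32_t)(now - start - timeout) >= 0;
    }

    /* With now = 100, start = 110, timeout = 50 the naive form returns
     * 1 (a false positive) while the safe form returns 0. */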
/* A write timeout has occurred. Process the after effects. */
bh_lock_sock(sk);
if (!sock_owned_by_user(sk)) {
- if (tp->compressed_ack)
+ if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)
tcp_send_ack(sk);
} else {
if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED,
static void addrconf_dad_work(struct work_struct *w);
static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
bool send_na);
-static void addrconf_dad_run(struct inet6_dev *idev);
+static void addrconf_dad_run(struct inet6_dev *idev, bool restart);
static void addrconf_rs_timer(struct timer_list *t);
static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa);
static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa);
void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ struct netdev_notifier_change_info *change_info;
struct netdev_notifier_changeupper_info *info;
struct inet6_dev *idev = __in6_dev_get(dev);
struct net *net = dev_net(dev);
break;
}
- if (idev) {
+ if (!IS_ERR_OR_NULL(idev)) {
if (idev->if_flags & IF_READY) {
/* device is already configured -
* but resend MLD reports, we might
* multicast snooping switches
*/
ipv6_mc_up(idev);
+ change_info = ptr;
+ if (change_info->flags_changed & IFF_NOARP)
+ addrconf_dad_run(idev, true);
rt6_sync_up(dev, RTNH_F_LINKDOWN);
break;
}
if (!IS_ERR_OR_NULL(idev)) {
if (run_pending)
- addrconf_dad_run(idev);
+ addrconf_dad_run(idev, false);
/* Device has an address by now */
rt6_sync_up(dev, RTNH_F_DEAD);
addrconf_verify_rtnl();
}
-static void addrconf_dad_run(struct inet6_dev *idev)
+static void addrconf_dad_run(struct inet6_dev *idev, bool restart)
{
struct inet6_ifaddr *ifp;
read_lock_bh(&idev->lock);
list_for_each_entry(ifp, &idev->addr_list, if_list) {
spin_lock(&ifp->lock);
- if (ifp->flags & IFA_F_TENTATIVE &&
- ifp->state == INET6_IFADDR_STATE_DAD)
+ if ((ifp->flags & IFA_F_TENTATIVE &&
+ ifp->state == INET6_IFADDR_STATE_DAD) || restart) {
+ if (restart)
+ ifp->state = INET6_IFADDR_STATE_PREDAD;
addrconf_dad_kick(ifp);
+ }
spin_unlock(&ifp->lock);
}
read_unlock_bh(&idev->lock);
unsigned int fraglen;
unsigned int fraggap;
unsigned int alloclen;
- unsigned int pagedlen = 0;
+ unsigned int pagedlen;
alloc_new_skb:
/* There's no room in the current skb */
if (skb)
if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
fraglen = datalen + fragheaderlen;
+ pagedlen = 0;
if ((flags & MSG_MORE) &&
!(rt->dst.dev->features&NETIF_F_SG))
unsigned int hh_len;
struct dst_entry *dst;
struct flowi6 fl6 = {
- .flowi6_oif = sk ? sk->sk_bound_dev_if : 0,
+ .flowi6_oif = sk && sk->sk_bound_dev_if ? sk->sk_bound_dev_if :
+ rt6_need_strict(&iph->daddr) ? skb_dst(skb)->dev->ifindex : 0,
.flowi6_mark = skb->mark,
.flowi6_uid = sock_net_uid(net, sk),
.daddr = iph->daddr,
int err;
err = xt_register_target(&masquerade_tg6_reg);
- if (err == 0)
- nf_nat_masquerade_ipv6_register_notifier();
+ if (err)
+ return err;
+
+ err = nf_nat_masquerade_ipv6_register_notifier();
+ if (err)
+ xt_unregister_target(&masquerade_tg6_reg);
return err;
}
* of ipv6 addresses being deleted), we also need to add an upper
* limit to the number of queued work items.
*/
-static int masq_inet_event(struct notifier_block *this,
- unsigned long event, void *ptr)
+static int masq_inet6_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
{
struct inet6_ifaddr *ifa = ptr;
const struct net_device *dev;
return NOTIFY_DONE;
}
-static struct notifier_block masq_inet_notifier = {
- .notifier_call = masq_inet_event,
+static struct notifier_block masq_inet6_notifier = {
+ .notifier_call = masq_inet6_event,
};
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
-void nf_nat_masquerade_ipv6_register_notifier(void)
+int nf_nat_masquerade_ipv6_register_notifier(void)
{
+ int ret = 0;
+
+ mutex_lock(&masq_mutex);
/* check if the notifier is already set */
- if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
- return;
+ if (++masq_refcnt > 1)
+ goto out_unlock;
+
+ ret = register_netdevice_notifier(&masq_dev_notifier);
+ if (ret)
+ goto err_dec;
+
+ ret = register_inet6addr_notifier(&masq_inet6_notifier);
+ if (ret)
+ goto err_unregister;
- register_netdevice_notifier(&masq_dev_notifier);
- register_inet6addr_notifier(&masq_inet_notifier);
+ mutex_unlock(&masq_mutex);
+ return ret;
+
+err_unregister:
+ unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+ masq_refcnt--;
+out_unlock:
+ mutex_unlock(&masq_mutex);
+ return ret;
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier);
void nf_nat_masquerade_ipv6_unregister_notifier(void)
{
+ mutex_lock(&masq_mutex);
/* check if the notifier still has clients */
- if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
- return;
+ if (--masq_refcnt > 0)
+ goto out_unlock;
- unregister_inet6addr_notifier(&masq_inet_notifier);
+ unregister_inet6addr_notifier(&masq_inet6_notifier);
unregister_netdevice_notifier(&masq_dev_notifier);
+out_unlock:
+ mutex_unlock(&masq_mutex);
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
if (ret < 0)
return ret;
- nf_nat_masquerade_ipv6_register_notifier();
+ ret = nf_nat_masquerade_ipv6_register_notifier();
+ if (ret)
+ nft_unregister_expr(&nft_masq_ipv6_type);
return ret;
}
if (rt) {
rcu_read_lock();
if (rt->rt6i_flags & RTF_CACHE) {
- if (dst_hold_safe(&rt->dst))
- rt6_remove_exception_rt(rt);
+ rt6_remove_exception_rt(rt);
} else {
struct fib6_info *from;
struct fib6_node *fn;
void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu)
{
+ int oif = sk->sk_bound_dev_if;
struct dst_entry *dst;
- ip6_update_pmtu(skb, sock_net(sk), mtu,
- sk->sk_bound_dev_if, sk->sk_mark, sk->sk_uid);
+ if (!oif && skb->dev)
+ oif = l3mdev_master_ifindex(skb->dev);
+
+ ip6_update_pmtu(skb, sock_net(sk), mtu, oif, sk->sk_mark, sk->sk_uid);
dst = __sk_dst_get(sk);
if (!dst || !dst->obsolete ||
if (cfg->fc_flags & RTF_GATEWAY &&
!ipv6_addr_equal(&cfg->fc_gateway, &rt->rt6i_gateway))
goto out;
- if (dst_hold_safe(&rt->dst))
- rc = rt6_remove_exception_rt(rt);
+
+ rc = rt6_remove_exception_rt(rt);
out:
return rc;
}
goto err_sock;
}
- sk = sock->sk;
-
- sock_hold(sk);
- tunnel->sock = sk;
tunnel->l2tp_net = net;
-
pn = l2tp_pernet(net);
spin_lock_bh(&pn->l2tp_tunnel_list_lock);
list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+ sk = sock->sk;
+ sock_hold(sk);
+ tunnel->sock = sk;
+
if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
struct udp_tunnel_sock_cfg udp_cfg = {
.sk_user_data = tunnel,
static struct notifier_block ip_vs_dst_notifier = {
.notifier_call = ip_vs_dst_event,
+#ifdef CONFIG_IP_VS_IPV6
+ .priority = ADDRCONF_NOTIFY_PRIORITY + 5,
+#endif
};
int __net_init ip_vs_control_net_init(struct netns_ipvs *ipvs)
struct nf_conntrack_zone zone;
int cpu;
u32 jiffies32;
+ bool dead;
struct rcu_head rcu_head;
};
conn->zone = *zone;
conn->cpu = raw_smp_processor_id();
conn->jiffies32 = (u32)jiffies;
- spin_lock(&list->list_lock);
+ conn->dead = false;
+ spin_lock_bh(&list->list_lock);
if (list->dead == true) {
kmem_cache_free(conncount_conn_cachep, conn);
- spin_unlock(&list->list_lock);
+ spin_unlock_bh(&list->list_lock);
return NF_CONNCOUNT_SKIP;
}
list_add_tail(&conn->node, &list->head);
list->count++;
- spin_unlock(&list->list_lock);
+ spin_unlock_bh(&list->list_lock);
return NF_CONNCOUNT_ADDED;
}
EXPORT_SYMBOL_GPL(nf_conncount_add);
{
bool free_entry = false;
- spin_lock(&list->list_lock);
+ spin_lock_bh(&list->list_lock);
- if (list->count == 0) {
- spin_unlock(&list->list_lock);
- return free_entry;
+ if (conn->dead) {
+ spin_unlock_bh(&list->list_lock);
+ return free_entry;
}
list->count--;
+ conn->dead = true;
list_del_rcu(&conn->node);
- if (list->count == 0)
+ if (list->count == 0) {
+ list->dead = true;
free_entry = true;
+ }
- spin_unlock(&list->list_lock);
+ spin_unlock_bh(&list->list_lock);
call_rcu(&conn->rcu_head, __conn_free);
return free_entry;
}
{
spin_lock_init(&list->list_lock);
INIT_LIST_HEAD(&list->head);
- list->count = 1;
+ list->count = 0;
list->dead = false;
}
EXPORT_SYMBOL_GPL(nf_conncount_list_init);
struct nf_conn *found_ct;
unsigned int collected = 0;
bool free_entry = false;
+ bool ret = false;
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
found = find_or_evict(net, list, conn, &free_entry);
if (collected > CONNCOUNT_GC_MAX_NODES)
return false;
}
- return false;
+
+ spin_lock_bh(&list->list_lock);
+ if (!list->count) {
+ list->dead = true;
+ ret = true;
+ }
+ spin_unlock_bh(&list->list_lock);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
while (gc_count) {
rbconn = gc_nodes[--gc_count];
spin_lock(&rbconn->list.list_lock);
- if (rbconn->list.count == 0 && rbconn->list.dead == false) {
- rbconn->list.dead = true;
- rb_erase(&rbconn->node, root);
- call_rcu(&rbconn->rcu_head, __tree_nodes_free);
- }
+ rb_erase(&rbconn->node, root);
+ call_rcu(&rbconn->rcu_head, __tree_nodes_free);
spin_unlock(&rbconn->list.list_lock);
}
}
nf_conncount_list_init(&rbconn->list);
list_add(&conn->node, &rbconn->list.head);
count = 1;
+ rbconn->list.count = count;
rb_link_node(&rbconn->node, parent, rbnode);
rb_insert_color(&rbconn->node, root);
#include <linux/netfilter/nf_conntrack_proto_gre.h>
#include <linux/netfilter/nf_conntrack_pptp.h>
-enum grep_conntrack {
- GRE_CT_UNREPLIED,
- GRE_CT_REPLIED,
- GRE_CT_MAX
-};
-
static const unsigned int gre_timeouts[GRE_CT_MAX] = {
[GRE_CT_UNREPLIED] = 30*HZ,
[GRE_CT_REPLIED] = 180*HZ,
};
static unsigned int proto_gre_net_id __read_mostly;
-struct netns_proto_gre {
- struct nf_proto_net nf;
- rwlock_t keymap_lock;
- struct list_head keymap_list;
- unsigned int gre_timeouts[GRE_CT_MAX];
-};
static inline struct netns_proto_gre *gre_pernet(struct net *net)
{
{
int ret;
+ BUILD_BUG_ON(offsetof(struct netns_proto_gre, nf) != 0);
+
ret = register_pernet_subsys(&proto_gre_net_ops);
if (ret < 0)
goto out_pernet;
static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
struct nft_rule *rule)
{
- struct nft_expr *expr;
+ struct nft_expr *expr, *next;
/*
* Careful: some expressions might not be initialized in case this
*/
expr = nft_expr_first(rule);
while (expr != nft_expr_last(rule) && expr->ops) {
+ next = nft_expr_next(expr);
nf_tables_expr_destroy(ctx, expr);
- expr = nft_expr_next(expr);
+ expr = next;
}
kfree(rule);
}
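The fix above is the classic safe-iteration pattern: fetch the successor before destroying the current element, because nf_tables_expr_destroy() frees the storage that nft_expr_next() would otherwise have to read. The same shape on a plain singly linked list, as an illustrative sketch:

    #include <stdlib.h>

    struct node {
        struct node *next;
    };

    static void destroy_all(struct node *head)
    {
        struct node *n, *next;

        for (n = head; n; n = next) {
            next = n->next;     /* read before free(): n is gone after */
            free(n);
        }
    }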
if (chain->use == UINT_MAX)
return -EOVERFLOW;
- }
-
- if (nla[NFTA_RULE_POSITION]) {
- if (!(nlh->nlmsg_flags & NLM_F_CREATE))
- return -EOPNOTSUPP;
- pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
- old_rule = __nft_rule_lookup(chain, pos_handle);
- if (IS_ERR(old_rule)) {
- NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
- return PTR_ERR(old_rule);
+ if (nla[NFTA_RULE_POSITION]) {
+ pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
+ old_rule = __nft_rule_lookup(chain, pos_handle);
+ if (IS_ERR(old_rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
+ return PTR_ERR(old_rule);
+ }
}
}
}
if (nlh->nlmsg_flags & NLM_F_REPLACE) {
- if (!nft_is_active_next(net, old_rule)) {
- err = -ENOENT;
- goto err2;
- }
- trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE,
- old_rule);
+ trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
if (trans == NULL) {
err = -ENOMEM;
goto err2;
}
- nft_deactivate_next(net, old_rule);
- chain->use--;
-
- if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) {
- err = -ENOMEM;
+ err = nft_delrule(&ctx, old_rule);
+ if (err < 0) {
+ nft_trans_destroy(trans);
goto err2;
}
call_rcu(&old->h, __nf_tables_commit_chain_free_rules_old);
}
-static void nf_tables_commit_chain_active(struct net *net, struct nft_chain *chain)
+static void nf_tables_commit_chain(struct net *net, struct nft_chain *chain)
{
struct nft_rule **g0, **g1;
bool next_genbit;
/* step 2. Make rules_gen_X visible to packet path */
list_for_each_entry(table, &net->nft.tables, list) {
- list_for_each_entry(chain, &table->chains, list) {
- if (!nft_is_active_next(net, chain))
- continue;
- nf_tables_commit_chain_active(net, chain);
- }
+ list_for_each_entry(chain, &table->chains, list)
+ nf_tables_commit_chain(net, chain);
}
/*
case IPPROTO_TCP:
timeouts = nf_tcp_pernet(net)->timeouts;
break;
- case IPPROTO_UDP:
+ case IPPROTO_UDP: /* fallthrough */
+ case IPPROTO_UDPLITE:
timeouts = nf_udp_pernet(net)->timeouts;
break;
case IPPROTO_DCCP:
case IPPROTO_SCTP:
#ifdef CONFIG_NF_CT_PROTO_SCTP
timeouts = nf_sctp_pernet(net)->timeouts;
+#endif
+ break;
+ case IPPROTO_GRE:
+#ifdef CONFIG_NF_CT_PROTO_GRE
+ if (l4proto->net_id) {
+ struct netns_proto_gre *net_gre;
+
+ net_gre = net_generic(net, *l4proto->net_id);
+ timeouts = net_gre->gre_timeouts;
+ }
#endif
break;
case 255:
timeouts = &nf_generic_pernet(net)->timeout;
break;
default:
- WARN_ON_ONCE(1);
+ WARN_ONCE(1, "Missing timeouts for proto %d", l4proto->l4proto);
break;
}
void *info)
{
struct xt_match *match = expr->ops->data;
+ struct module *me = match->me;
struct xt_mtdtor_param par;
par.net = ctx->net;
par.match->destroy(&par);
if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops)))
- module_put(match->me);
+ module_put(me);
}
static void
{
int err;
- register_netdevice_notifier(&flow_offload_netdev_notifier);
+ err = register_netdevice_notifier(&flow_offload_netdev_notifier);
+ if (err)
+ goto err;
err = nft_register_expr(&nft_flow_offload_type);
if (err < 0)
register_expr:
unregister_netdevice_notifier(&flow_offload_netdev_notifier);
+err:
return err;
}
return 0;
}
-static void __net_exit xt_rateest_net_exit(struct net *net)
-{
- struct xt_rateest_net *xn = net_generic(net, xt_rateest_id);
- int i;
-
- for (i = 0; i < ARRAY_SIZE(xn->hash); i++)
- WARN_ON_ONCE(!hlist_empty(&xn->hash[i]));
-}
-
static struct pernet_operations xt_rateest_net_ops = {
.init = xt_rateest_net_init,
- .exit = xt_rateest_net_exit,
.id = &xt_rateest_id,
.size = sizeof(struct xt_rateest_net),
};
/* copy match config into hashtable config */
ret = cfg_copy(&hinfo->cfg, (void *)cfg, 3);
-
- if (ret)
+ if (ret) {
+ vfree(hinfo);
return ret;
+ }
hinfo->cfg.size = size;
if (hinfo->cfg.max == 0)
int ret;
ret = cfg_copy(&cfg, (void *)&info->cfg, 1);
-
if (ret)
return ret;
int ret;
ret = cfg_copy(&cfg, (void *)&info->cfg, 2);
-
if (ret)
return ret;
return ret;
ret = cfg_copy(&cfg, (void *)&info->cfg, 1);
-
if (ret)
return ret;
return ret;
ret = cfg_copy(&cfg, (void *)&info->cfg, 2);
-
if (ret)
return ret;
void *ph;
__u32 ts;
- ph = skb_shinfo(skb)->destructor_arg;
+ ph = skb_zcopy_get_nouarg(skb);
packet_dec_pending(&po->tx_ring);
ts = __packet_set_timestamp(po, ph, skb);
skb->mark = po->sk.sk_mark;
skb->tstamp = sockc->transmit_time;
sock_tx_timestamp(&po->sk, sockc->tsflags, &skb_shinfo(skb)->tx_flags);
- skb_shinfo(skb)->destructor_arg = ph.raw;
+ skb_zcopy_set_nouarg(skb, ph.raw);
skb_reserve(skb, hlen);
skb_reset_network_header(skb);
* getting ACKs from the server. Returns a number representing the life state
* which can be compared to that returned by a previous call.
*
- * If this is a client call, ping ACKs will be sent to the server to find out
- * whether it's still responsive and whether the call is still alive on the
- * server.
+ * If the life state stalls, rxrpc_kernel_probe_life() should be called, and
+ * the caller should then wait for a period of around 2*RTT.
*/
-u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call)
+u32 rxrpc_kernel_check_life(const struct socket *sock,
+ const struct rxrpc_call *call)
{
return call->acks_latest;
}
EXPORT_SYMBOL(rxrpc_kernel_check_life);
+/**
+ * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive
+ * @sock: The socket the call is on
+ * @call: The call to check
+ *
+ * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to
+ * find out whether a call is still alive by pinging it. This should cause the
+ * life state to be bumped in about 2*RTT.
+ *
+ * This must be called in TASK_RUNNING state on pain of might_sleep() objecting.
+ */
+void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call)
+{
+ rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false,
+ rxrpc_propose_ack_ping_for_check_life);
+ rxrpc_send_ack_packet(call, true, NULL);
+}
+EXPORT_SYMBOL(rxrpc_kernel_probe_life);
+
/**
* rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
* @sock: The socket the call is on
goto out_release;
}
} else {
- return err;
+ ret = err;
+ goto out_free;
}
p = to_pedit(*a);
u32 tcfp_ewma_rate;
s64 tcfp_burst;
u32 tcfp_mtu;
- s64 tcfp_toks;
- s64 tcfp_ptoks;
s64 tcfp_mtu_ptoks;
- s64 tcfp_t_c;
struct psched_ratecfg rate;
bool rate_present;
struct psched_ratecfg peak;
struct tcf_police {
struct tc_action common;
struct tcf_police_params __rcu *params;
+
+ spinlock_t tcfp_lock ____cacheline_aligned_in_smp;
+ s64 tcfp_toks;
+ s64 tcfp_ptoks;
+ s64 tcfp_t_c;
};
#define to_police(pc) ((struct tcf_police *)pc)
return ret;
}
ret = ACT_P_CREATED;
+ spin_lock_init(&(to_police(*a)->tcfp_lock));
} else if (!ovr) {
tcf_idr_release(*a, bind);
return -EEXIST;
}
new->tcfp_burst = PSCHED_TICKS2NS(parm->burst);
- new->tcfp_toks = new->tcfp_burst;
- if (new->peak_present) {
+ if (new->peak_present)
new->tcfp_mtu_ptoks = (s64)psched_l2t_ns(&new->peak,
new->tcfp_mtu);
- new->tcfp_ptoks = new->tcfp_mtu_ptoks;
- }
if (tb[TCA_POLICE_AVRATE])
new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]);
}
spin_lock_bh(&police->tcf_lock);
- new->tcfp_t_c = ktime_get_ns();
+ spin_lock_bh(&police->tcfp_lock);
+ police->tcfp_t_c = ktime_get_ns();
+ police->tcfp_toks = new->tcfp_burst;
+ if (new->peak_present)
+ police->tcfp_ptoks = new->tcfp_mtu_ptoks;
+ spin_unlock_bh(&police->tcfp_lock);
police->tcf_action = parm->action;
rcu_swap_protected(police->params,
new,
}
now = ktime_get_ns();
- toks = min_t(s64, now - p->tcfp_t_c, p->tcfp_burst);
+ spin_lock_bh(&police->tcfp_lock);
+ toks = min_t(s64, now - police->tcfp_t_c, p->tcfp_burst);
if (p->peak_present) {
- ptoks = toks + p->tcfp_ptoks;
+ ptoks = toks + police->tcfp_ptoks;
if (ptoks > p->tcfp_mtu_ptoks)
ptoks = p->tcfp_mtu_ptoks;
ptoks -= (s64)psched_l2t_ns(&p->peak,
qdisc_pkt_len(skb));
}
- toks += p->tcfp_toks;
+ toks += police->tcfp_toks;
if (toks > p->tcfp_burst)
toks = p->tcfp_burst;
toks -= (s64)psched_l2t_ns(&p->rate, qdisc_pkt_len(skb));
if ((toks|ptoks) >= 0) {
- p->tcfp_t_c = now;
- p->tcfp_toks = toks;
- p->tcfp_ptoks = ptoks;
+ police->tcfp_t_c = now;
+ police->tcfp_toks = toks;
+ police->tcfp_ptoks = ptoks;
+ spin_unlock_bh(&police->tcfp_lock);
ret = p->tcfp_result;
goto inc_drops;
}
+ spin_unlock_bh(&police->tcfp_lock);
}
inc_overlimits:
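For reference, the arithmetic that the new tcfp_lock serializes is a standard nanosecond token bucket: credit accrues with elapsed time up to a burst cap, and each packet spends its serialization time at the configured rate. A rough single-rate sketch of that calculation (the peak-rate bucket above works the same way; ns_per_byte is an assumed stand-in for psched_l2t_ns()):

    #include <stdint.h>

    struct tbf {
        int64_t toks;        /* unspent credit, in ns */
        int64_t burst;       /* credit cap, in ns */
        int64_t t_c;         /* time of last conforming packet, ns */
        int64_t ns_per_byte; /* serialization cost at the given rate */
    };

    /* Return 1 and commit the new state if the packet conforms to the
     * configured rate; return 0 and leave the state untouched if not. */
    static int tbf_conform(struct tbf *b, int64_t now, int64_t pkt_len)
    {
        int64_t toks = now - b->t_c;    /* credit earned since last time */

        if (toks > b->burst)
            toks = b->burst;            /* cap newly earned credit */
        toks += b->toks;
        if (toks > b->burst)
            toks = b->burst;            /* cap total credit */
        toks -= pkt_len * b->ns_per_byte;

        if (toks < 0)
            return 0;
        b->t_c = now;
        b->toks = toks;
        return 1;
    }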
goto begin;
}
prefetch(&skb->end);
- f->credit -= qdisc_pkt_len(skb);
+ plen = qdisc_pkt_len(skb);
+ f->credit -= plen;
- if (ktime_to_ns(skb->tstamp) || !q->rate_enable)
+ if (!q->rate_enable)
goto out;
rate = q->flow_max_rate;
- if (skb->sk)
- rate = min(skb->sk->sk_pacing_rate, rate);
-
- if (rate <= q->low_rate_threshold) {
- f->credit = 0;
- plen = qdisc_pkt_len(skb);
- } else {
- plen = max(qdisc_pkt_len(skb), q->quantum);
- if (f->credit > 0)
- goto out;
+
+ /* If EDT time was provided for this skb, we need to
+ * update f->time_next_packet only if this qdisc enforces
+ * a flow max rate.
+ */
+ if (!skb->tstamp) {
+ if (skb->sk)
+ rate = min(skb->sk->sk_pacing_rate, rate);
+
+ if (rate <= q->low_rate_threshold) {
+ f->credit = 0;
+ } else {
+ plen = max(plen, q->quantum);
+ if (f->credit > 0)
+ goto out;
+ }
}
if (rate != ~0UL) {
u64 len = (u64)plen * NSEC_PER_SEC;
sctp_transport_route(tp, NULL, sp);
if (asoc->param_flags & SPP_PMTUD_ENABLE)
sctp_assoc_sync_pmtu(asoc);
+ } else if (!sctp_transport_pmtu_check(tp)) {
+ if (asoc->param_flags & SPP_PMTUD_ENABLE)
+ sctp_assoc_sync_pmtu(asoc);
}
if (asoc->pmtu_pending) {
return retval;
}
-static void sctp_packet_release_owner(struct sk_buff *skb)
-{
- sk_free(skb->sk);
-}
-
-static void sctp_packet_set_owner_w(struct sk_buff *skb, struct sock *sk)
-{
- skb_orphan(skb);
- skb->sk = sk;
- skb->destructor = sctp_packet_release_owner;
-
- /*
- * The data chunks have already been accounted for in sctp_sendmsg(),
- * therefore only reserve a single byte to keep socket around until
- * the packet has been transmitted.
- */
- refcount_inc(&sk->sk_wmem_alloc);
-}
-
static void sctp_packet_gso_append(struct sk_buff *head, struct sk_buff *skb)
{
if (SCTP_OUTPUT_CB(head)->last == head)
head->truesize += skb->truesize;
head->data_len += skb->len;
head->len += skb->len;
+ refcount_add(skb->truesize, &head->sk->sk_wmem_alloc);
__skb_header_release(skb);
}
if (!head)
goto out;
skb_reserve(head, packet->overhead + MAX_HEADER);
- sctp_packet_set_owner_w(head, sk);
+ skb_set_owner_w(head, sk);
/* set sctp header */
sh = skb_push(head, sizeof(struct sctphdr));
unsigned int optlen)
{
struct sctp_assoc_value params;
- struct sctp_association *asoc;
- int retval = -EINVAL;
if (optlen != sizeof(params))
- goto out;
-
- if (copy_from_user(¶ms, optval, optlen)) {
- retval = -EFAULT;
- goto out;
- }
-
- asoc = sctp_id2assoc(sk, params.assoc_id);
- if (asoc) {
- asoc->prsctp_enable = !!params.assoc_value;
- } else if (!params.assoc_id) {
- struct sctp_sock *sp = sctp_sk(sk);
+ return -EINVAL;
- sp->ep->prsctp_enable = !!params.assoc_value;
- } else {
- goto out;
- }
+ if (copy_from_user(¶ms, optval, optlen))
+ return -EFAULT;
- retval = 0;
+ sctp_sk(sk)->ep->prsctp_enable = !!params.assoc_value;
-out:
- return retval;
+ return 0;
}
static int sctp_setsockopt_default_prinfo(struct sock *sk,
goto out;
}
- stream->incnt = incnt;
stream->outcnt = outcnt;
asoc->strreset_outstanding = !!out + !!in;
smc = smc_sk(sk);
/* cleanup for a dangling non-blocking connect */
+ if (smc->connect_info && sk->sk_state == SMC_INIT)
+ tcp_abort(smc->clcsock->sk, ECONNABORTED);
flush_work(&smc->connect_work);
kfree(smc->connect_info);
smc->connect_info = NULL;
mutex_lock(&smc_create_lgr_pending);
local_contact = smc_conn_create(smc, false, aclc->hdr.flag, ibdev,
- ibport, &aclc->lcl, NULL, 0);
+ ibport, ntoh24(aclc->qpn), &aclc->lcl,
+ NULL, 0);
if (local_contact < 0) {
if (local_contact == -ENOMEM)
reason_code = SMC_CLC_DECL_MEM;/* insufficient memory*/
int rc = 0;
mutex_lock(&smc_create_lgr_pending);
- local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0,
+ local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0, 0,
NULL, ismdev, aclc->gid);
if (local_contact < 0)
return smc_connect_abort(smc, SMC_CLC_DECL_MEM, 0);
int *local_contact)
{
/* allocate connection / link group */
- *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport,
+ *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport, 0,
&pclc->lcl, NULL, 0);
if (*local_contact < 0) {
if (*local_contact == -ENOMEM)
struct smc_clc_msg_smcd *pclc_smcd;
pclc_smcd = smc_get_clc_msg_smcd(pclc);
- *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, NULL,
+ *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, 0, NULL,
ismdev, pclc_smcd->gid);
if (*local_contact < 0) {
if (*local_contact == -ENOMEM)
sizeof(struct smc_cdc_msg) > SMC_WR_BUF_SIZE,
"must increase SMC_WR_BUF_SIZE to at least sizeof(struct smc_cdc_msg)");
BUILD_BUG_ON_MSG(
- sizeof(struct smc_cdc_msg) != SMC_WR_TX_SIZE,
+ offsetofend(struct smc_cdc_msg, reserved) > SMC_WR_TX_SIZE,
"must adapt SMC_WR_TX_SIZE to sizeof(struct smc_cdc_msg); if not all smc_wr upper layer protocols use the same message size any more, must start to set link->wr_tx_sges[i].length on each individual smc_wr_tx_send()");
BUILD_BUG_ON_MSG(
sizeof(struct smc_cdc_tx_pend) > SMC_WR_TX_PEND_PRIV_SIZE,
int smcd_cdc_msg_send(struct smc_connection *conn)
{
struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ union smc_host_cursor curs;
struct smcd_cdc_msg cdc;
int rc, diff;
memset(&cdc, 0, sizeof(cdc));
cdc.common.type = SMC_CDC_MSG_TYPE;
- cdc.prod_wrap = conn->local_tx_ctrl.prod.wrap;
- cdc.prod_count = conn->local_tx_ctrl.prod.count;
-
- cdc.cons_wrap = conn->local_tx_ctrl.cons.wrap;
- cdc.cons_count = conn->local_tx_ctrl.cons.count;
- cdc.prod_flags = conn->local_tx_ctrl.prod_flags;
- cdc.conn_state_flags = conn->local_tx_ctrl.conn_state_flags;
+ curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.prod.acurs);
+ cdc.prod.wrap = curs.wrap;
+ cdc.prod.count = curs.count;
+ curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.cons.acurs);
+ cdc.cons.wrap = curs.wrap;
+ cdc.cons.count = curs.count;
+ cdc.cons.prod_flags = conn->local_tx_ctrl.prod_flags;
+ cdc.cons.conn_state_flags = conn->local_tx_ctrl.conn_state_flags;
rc = smcd_tx_ism_write(conn, &cdc, sizeof(cdc), 0, 1);
if (rc)
return rc;
- smc_curs_copy(&conn->rx_curs_confirmed, &conn->local_tx_ctrl.cons,
- conn);
+ smc_curs_copy(&conn->rx_curs_confirmed, &curs, conn);
/* Calculate transmitted data and increment free send buffer space */
diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin,
&conn->tx_curs_sent);
static void smcd_cdc_rx_tsklet(unsigned long data)
{
struct smc_connection *conn = (struct smc_connection *)data;
+ struct smcd_cdc_msg *data_cdc;
struct smcd_cdc_msg cdc;
struct smc_sock *smc;
if (!conn)
return;
- memcpy(&cdc, conn->rmb_desc->cpu_addr, sizeof(cdc));
+ data_cdc = (struct smcd_cdc_msg *)conn->rmb_desc->cpu_addr;
+ smcd_curs_copy(&cdc.prod, &data_cdc->prod, conn);
+ smcd_curs_copy(&cdc.cons, &data_cdc->cons, conn);
smc = container_of(conn, struct smc_sock, conn);
smc_cdc_msg_recv(smc, (struct smc_cdc_msg *)&cdc);
}
struct smc_cdc_producer_flags prod_flags;
struct smc_cdc_conn_state_flags conn_state_flags;
u8 reserved[18];
-} __packed; /* format defined in RFC7609 */
+};
+
+/* SMC-D cursor format */
+union smcd_cdc_cursor {
+ struct {
+ u16 wrap;
+ u32 count;
+ struct smc_cdc_producer_flags prod_flags;
+ struct smc_cdc_conn_state_flags conn_state_flags;
+ } __packed;
+#ifdef KERNEL_HAS_ATOMIC64
+ atomic64_t acurs; /* for atomic processing */
+#else
+ u64 acurs; /* for atomic processing */
+#endif
+} __aligned(8);
/* CDC message for SMC-D */
struct smcd_cdc_msg {
struct smc_wr_rx_hdr common; /* Type = 0xFE */
u8 res1[7];
- u16 prod_wrap;
- u32 prod_count;
- u8 res2[2];
- u16 cons_wrap;
- u32 cons_count;
- struct smc_cdc_producer_flags prod_flags;
- struct smc_cdc_conn_state_flags conn_state_flags;
+ union smcd_cdc_cursor prod;
+ union smcd_cdc_cursor cons;
u8 res3[8];
-} __packed;
+} __aligned(8);
static inline bool smc_cdc_rxed_any_close(struct smc_connection *conn)
{
#endif
}
+static inline void smcd_curs_copy(union smcd_cdc_cursor *tgt,
+ union smcd_cdc_cursor *src,
+ struct smc_connection *conn)
+{
+#ifndef KERNEL_HAS_ATOMIC64
+ unsigned long flags;
+
+ spin_lock_irqsave(&conn->acurs_lock, flags);
+ tgt->acurs = src->acurs;
+ spin_unlock_irqrestore(&conn->acurs_lock, flags);
+#else
+ atomic64_set(&tgt->acurs, atomic64_read(&src->acurs));
+#endif
+}
+
/* calculate cursor difference between old and new, where old <= new */
static inline int smc_curs_diff(unsigned int size,
union smc_host_cursor *old,
static inline void smcd_cdc_msg_to_host(struct smc_host_cdc_msg *local,
struct smcd_cdc_msg *peer)
{
- local->prod.wrap = peer->prod_wrap;
- local->prod.count = peer->prod_count;
- local->cons.wrap = peer->cons_wrap;
- local->cons.count = peer->cons_count;
- local->prod_flags = peer->prod_flags;
- local->conn_state_flags = peer->conn_state_flags;
+ union smc_host_cursor temp;
+
+ temp.wrap = peer->prod.wrap;
+ temp.count = peer->prod.count;
+ atomic64_set(&local->prod.acurs, atomic64_read(&temp.acurs));
+
+ temp.wrap = peer->cons.wrap;
+ temp.count = peer->cons.count;
+ atomic64_set(&local->cons.acurs, atomic64_read(&temp.acurs));
+ local->prod_flags = peer->cons.prod_flags;
+ local->conn_state_flags = peer->cons.conn_state_flags;
}
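The union layout introduced here is what makes the lock-free copies possible: the wrap counter, offset and flag bytes overlay one 64-bit word, so a cursor is always read and written in a single atomic operation and a reader can never observe a torn wrap/count pair. A simplified standalone sketch of the trick (field widths are assumptions):

    #include <stdatomic.h>
    #include <stdint.h>

    union cursor {
        struct {
            uint16_t wrap;      /* times the ring buffer has wrapped */
            uint32_t count;     /* byte offset within the ring */
        };
        _Atomic uint64_t acurs; /* whole cursor as one atomic word */
    };

    /* One 64-bit load plus one 64-bit store: wrap and count always
     * travel together, so no lock is needed where the hardware has
     * 64-bit atomics (the KERNEL_HAS_ATOMIC64 case above). */
    static void cursor_copy(union cursor *dst, union cursor *src)
    {
        atomic_store(&dst->acurs, atomic_load(&src->acurs));
    }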
static inline void smc_cdc_msg_to_host(struct smc_host_cdc_msg *local,
if (!lgr->is_smcd && lnk->state != SMC_LNK_INACTIVE)
smc_llc_link_inactive(lnk);
+ if (lgr->is_smcd)
+ smc_ism_signal_shutdown(lgr);
smc_lgr_free(lgr);
}
}
}
/* Called when SMC-D device is terminated or peer is lost */
-void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid)
+void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, unsigned short vlan)
{
struct smc_link_group *lgr, *l;
LIST_HEAD(lgr_free_list);
list_for_each_entry_safe(lgr, l, &smc_lgr_list.list, list) {
if (lgr->is_smcd && lgr->smcd == dev &&
(!peer_gid || lgr->peer_gid == peer_gid) &&
- !list_empty(&lgr->list)) {
+ (vlan == VLAN_VID_MASK || lgr->vlan_id == vlan)) {
__smc_lgr_terminate(lgr);
list_move(&lgr->list, &lgr_free_list);
}
list_for_each_entry_safe(lgr, l, &lgr_free_list, list) {
list_del_init(&lgr->list);
cancel_delayed_work_sync(&lgr->free_work);
+ if (!peer_gid && vlan == VLAN_VID_MASK) /* dev terminated? */
+ smc_ism_signal_shutdown(lgr);
smc_lgr_free(lgr);
}
}
static bool smcr_lgr_match(struct smc_link_group *lgr,
struct smc_clc_msg_local *lcl,
- enum smc_lgr_role role)
+ enum smc_lgr_role role, u32 clcqpn)
{
return !memcmp(lgr->peer_systemid, lcl->id_for_peer,
SMC_SYSTEMID_LEN) &&
SMC_GID_SIZE) &&
!memcmp(lgr->lnk[SMC_SINGLE_LINK].peer_mac, lcl->mac,
sizeof(lcl->mac)) &&
- lgr->role == role;
+ lgr->role == role &&
+ (lgr->role == SMC_SERV ||
+ lgr->lnk[SMC_SINGLE_LINK].peer_qpn == clcqpn);
}
static bool smcd_lgr_match(struct smc_link_group *lgr,
/* create a new SMC connection (and a new link group if necessary) */
int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact,
- struct smc_ib_device *smcibdev, u8 ibport,
+ struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn,
struct smc_clc_msg_local *lcl, struct smcd_dev *smcd,
u64 peer_gid)
{
list_for_each_entry(lgr, &smc_lgr_list.list, list) {
write_lock_bh(&lgr->conns_lock);
if ((is_smcd ? smcd_lgr_match(lgr, smcd, peer_gid) :
- smcr_lgr_match(lgr, lcl, role)) &&
+ smcr_lgr_match(lgr, lcl, role, clcqpn)) &&
!lgr->sync_err &&
lgr->vlan_id == vlan_id &&
(role == SMC_CLNT ||
smc_llc_link_inactive(lnk);
}
cancel_delayed_work_sync(&lgr->free_work);
+ if (lgr->is_smcd)
+ smc_ism_signal_shutdown(lgr);
smc_lgr_free(lgr); /* free link group */
}
}
void smc_lgr_forget(struct smc_link_group *lgr);
void smc_lgr_terminate(struct smc_link_group *lgr);
void smc_port_terminate(struct smc_ib_device *smcibdev, u8 ibport);
-void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid);
+void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid,
+ unsigned short vlan);
int smc_buf_create(struct smc_sock *smc, bool is_smcd);
int smc_uncompress_bufsize(u8 compressed);
int smc_rmb_rtoken_handling(struct smc_connection *conn,
void smc_conn_free(struct smc_connection *conn);
int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact,
- struct smc_ib_device *smcibdev, u8 ibport,
+ struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn,
struct smc_clc_msg_local *lcl, struct smcd_dev *smcd,
u64 peer_gid);
void smcd_conn_free(struct smc_connection *conn);
#define ISM_EVENT_REQUEST 0x0001
#define ISM_EVENT_RESPONSE 0x0002
#define ISM_EVENT_REQUEST_IR 0x00000001
+#define ISM_EVENT_CODE_SHUTDOWN 0x80
#define ISM_EVENT_CODE_TESTLINK 0x83
+union smcd_sw_event_info {
+ u64 info;
+ struct {
+ u8 uid[SMC_LGR_ID_SIZE];
+ unsigned short vlan_id;
+ u16 code;
+ };
+};
+
static void smcd_handle_sw_event(struct smc_ism_event_work *wrk)
{
- union {
- u64 info;
- struct {
- u32 uid;
- unsigned short vlanid;
- u16 code;
- };
- } ev_info;
+ union smcd_sw_event_info ev_info;
+ ev_info.info = wrk->event.info;
switch (wrk->event.code) {
+ case ISM_EVENT_CODE_SHUTDOWN: /* Peer shut down DMBs */
+ smc_smcd_terminate(wrk->smcd, wrk->event.tok, ev_info.vlan_id);
+ break;
case ISM_EVENT_CODE_TESTLINK: /* Activity timer */
- ev_info.info = wrk->event.info;
if (ev_info.code == ISM_EVENT_REQUEST) {
ev_info.code = ISM_EVENT_RESPONSE;
wrk->smcd->ops->signal_event(wrk->smcd,
}
}
+int smc_ism_signal_shutdown(struct smc_link_group *lgr)
+{
+ int rc;
+ union smcd_sw_event_info ev_info;
+
+ memcpy(ev_info.uid, lgr->id, SMC_LGR_ID_SIZE);
+ ev_info.vlan_id = lgr->vlan_id;
+ ev_info.code = ISM_EVENT_REQUEST;
+ rc = lgr->smcd->ops->signal_event(lgr->smcd, lgr->peer_gid,
+ ISM_EVENT_REQUEST_IR,
+ ISM_EVENT_CODE_SHUTDOWN,
+ ev_info.info);
+ return rc;
+}
+
/* worker for SMC-D events */
static void smc_ism_event_work(struct work_struct *work)
{
switch (wrk->event.type) {
case ISM_EVENT_GID: /* GID event, token is peer GID */
- smc_smcd_terminate(wrk->smcd, wrk->event.tok);
+ smc_smcd_terminate(wrk->smcd, wrk->event.tok, VLAN_VID_MASK);
break;
case ISM_EVENT_DMB:
break;
spin_unlock(&smcd_dev_list.lock);
flush_workqueue(smcd->event_wq);
destroy_workqueue(smcd->event_wq);
- smc_smcd_terminate(smcd, 0);
+ smc_smcd_terminate(smcd, 0, VLAN_VID_MASK);
device_del(&smcd->dev);
}
int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc);
int smc_ism_write(struct smcd_dev *dev, const struct smc_ism_position *pos,
void *data, size_t len);
+int smc_ism_signal_shutdown(struct smc_link_group *lgr);
#endif
pend = container_of(wr_pend_priv, struct smc_wr_tx_pend, priv);
if (pend->idx < link->wr_tx_cnt) {
+ u32 idx = pend->idx;
+
/* clear the full struct smc_wr_tx_pend including .priv */
memset(&link->wr_tx_pends[pend->idx], 0,
sizeof(link->wr_tx_pends[pend->idx]));
memset(&link->wr_tx_bufs[pend->idx], 0,
sizeof(link->wr_tx_bufs[pend->idx]));
- test_and_clear_bit(pend->idx, link->wr_tx_mask);
+ test_and_clear_bit(idx, link->wr_tx_mask);
return 1;
}
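The hunk above fixes a subtle ordering bug: pend points into the very array being memset(), so pend->idx was wiped before the bit-clear and the old code always cleared bit 0. Saving the index in a local first is the general cure whenever a handle lives inside the storage being cleared. A minimal illustration with assumed types:

    #include <string.h>

    struct pend {
        unsigned int idx;
        /* ... other per-slot state ... */
    };

    static void release_slot(struct pend *slots, struct pend *pend,
                             unsigned long *busy_mask)
    {
        unsigned int idx = pend->idx;   /* save: the memset() wipes it */

        memset(&slots[idx], 0, sizeof(slots[idx]));
        *busy_mask &= ~(1UL << idx);    /* pend->idx would now read 0 */
    }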
struct socket *sock = file->private_data;
if (unlikely(!sock->ops->splice_read))
- return -EINVAL;
+ return generic_file_splice_read(file, ppos, pipe, len, flags);
return sock->ops->splice_read(sock, ppos, pipe, len, flags);
}
{
struct auth_cred *acred = &container_of(cred, struct generic_cred,
gc_base)->acred;
- bool ret;
-
- get_rpccred(cred);
- ret = test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags);
- put_rpccred(cred);
-
- return ret;
+ return test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags);
}
static const struct rpc_credops generic_credops = {
return &gss_auth->rpc_auth;
}
+static struct gss_cred *
+gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred)
+{
+ struct gss_cred *new;
+
+ /* Make a copy of the cred so that we can reference count it */
+ new = kzalloc(sizeof(*gss_cred), GFP_NOIO);
+ if (new) {
+ struct auth_cred acred = {
+ .uid = gss_cred->gc_base.cr_uid,
+ };
+ struct gss_cl_ctx *ctx =
+ rcu_dereference_protected(gss_cred->gc_ctx, 1);
+
+ rpcauth_init_cred(&new->gc_base, &acred,
+ &gss_auth->rpc_auth,
+ &gss_nullops);
+ new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE;
+ new->gc_service = gss_cred->gc_service;
+ new->gc_principal = gss_cred->gc_principal;
+ kref_get(&gss_auth->kref);
+ rcu_assign_pointer(new->gc_ctx, ctx);
+ gss_get_ctx(ctx);
+ }
+ return new;
+}
+
/*
- * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call
+ * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call
* to the server with the GSS control procedure field set to
* RPC_GSS_PROC_DESTROY. This should normally cause the server to release
* all RPCSEC_GSS state associated with that context.
*/
-static int
-gss_destroying_context(struct rpc_cred *cred)
+static void
+gss_send_destroy_context(struct rpc_cred *cred)
{
struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base);
struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth);
struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1);
+ struct gss_cred *new;
struct rpc_task *task;
- if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0)
- return 0;
-
- ctx->gc_proc = RPC_GSS_PROC_DESTROY;
- cred->cr_ops = &gss_nullops;
-
- /* Take a reference to ensure the cred will be destroyed either
- * by the RPC call or by the put_rpccred() below */
- get_rpccred(cred);
+ new = gss_dup_cred(gss_auth, gss_cred);
+ if (new) {
+ ctx->gc_proc = RPC_GSS_PROC_DESTROY;
- task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT);
- if (!IS_ERR(task))
- rpc_put_task(task);
+ task = rpc_call_null(gss_auth->client, &new->gc_base,
+ RPC_TASK_ASYNC|RPC_TASK_SOFT);
+ if (!IS_ERR(task))
+ rpc_put_task(task);
- put_rpccred(cred);
- return 1;
+ put_rpccred(&new->gc_base);
+ }
}
/* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure
gss_destroy_cred(struct rpc_cred *cred)
{
- if (gss_destroying_context(cred))
- return;
+ if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0)
+ gss_send_destroy_context(cred);
gss_destroy_nullcred(cred);
}
static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
size_t nbytes)
{
- static __be32 *p;
+ __be32 *p;
int space_left;
int frag1bytes, frag2bytes;
WARN_ON_ONCE(xdr->iov);
return;
}
- if (fraglen) {
+ if (fraglen)
xdr->end = head->iov_base + head->iov_len;
- xdr->page_ptr--;
- }
/* (otherwise assume xdr->end is already set) */
+ xdr->page_ptr--;
head->iov_len = len;
buf->len = len;
xdr->p = head->iov_base + head->iov_len;
/* Apply trial address if we just left trial period */
if (!trial && !self) {
- tipc_net_finalize(net, tn->trial_addr);
+ tipc_sched_net_finalize(net, tn->trial_addr);
+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
}
goto exit;
}
- /* Trial period over ? */
- if (!time_before(jiffies, tn->addr_trial_end)) {
- /* Did we just leave it ? */
- if (!tipc_own_addr(net))
- tipc_net_finalize(net, tn->trial_addr);
-
- msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
- msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));
+ /* Did we just leave trial period ? */
+ if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {
+ mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);
+ spin_unlock_bh(&d->lock);
+ tipc_sched_net_finalize(net, tn->trial_addr);
+ return;
}
/* Adjust timeout interval according to discovery phase */
d->timer_intv = TIPC_DISC_SLOW;
else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST)
d->timer_intv = TIPC_DISC_FAST;
+ msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
}
mod_timer(&d->timer, jiffies + d->timer_intv);
* - A local spin_lock protecting the queue of subscriber events.
*/
+struct tipc_net_work {
+ struct work_struct work;
+ struct net *net;
+ u32 addr;
+};
+
+static void tipc_net_finalize(struct net *net, u32 addr);
+
int tipc_net_init(struct net *net, u8 *node_id, u32 addr)
{
if (tipc_own_id(net)) {
return 0;
}
-void tipc_net_finalize(struct net *net, u32 addr)
+static void tipc_net_finalize(struct net *net, u32 addr)
{
struct tipc_net *tn = tipc_net(net);
- if (!cmpxchg(&tn->node_addr, 0, addr)) {
- tipc_set_node_addr(net, addr);
- tipc_named_reinit(net);
- tipc_sk_reinit(net);
- tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
- TIPC_CLUSTER_SCOPE, 0, addr);
- }
+ if (cmpxchg(&tn->node_addr, 0, addr))
+ return;
+ tipc_set_node_addr(net, addr);
+ tipc_named_reinit(net);
+ tipc_sk_reinit(net);
+ tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
+ TIPC_CLUSTER_SCOPE, 0, addr);
+}
+
+static void tipc_net_finalize_work(struct work_struct *work)
+{
+ struct tipc_net_work *fwork;
+
+ fwork = container_of(work, struct tipc_net_work, work);
+ tipc_net_finalize(fwork->net, fwork->addr);
+ kfree(fwork);
+}
+
+void tipc_sched_net_finalize(struct net *net, u32 addr)
+{
+ struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC);
+
+ if (!fwork)
+ return;
+ INIT_WORK(&fwork->work, tipc_net_finalize_work);
+ fwork->net = net;
+ fwork->addr = addr;
+ schedule_work(&fwork->work);
}
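tipc_sched_net_finalize() is the usual defer-to-safer-context pattern: the calling context cannot run the finalization directly, so the arguments are packaged into a heap-allocated work item that a worker later consumes and frees. A rough standalone sketch of the shape, with a detached pthread standing in for schedule_work() and an assumed argument layout:

    #include <pthread.h>
    #include <stdlib.h>

    struct finalize_work {
        unsigned int addr;      /* assumed stand-in for (net, addr) */
    };

    static void *finalize_worker(void *arg)
    {
        struct finalize_work *w = arg;

        /* ... the real finalization: free to sleep, take locks ... */
        free(w);                /* the work item owns itself */
        return NULL;
    }

    static int sched_finalize(unsigned int addr)
    {
        struct finalize_work *w = malloc(sizeof(*w));
        pthread_t tid;

        if (!w)
            return -1;
        w->addr = addr;
        if (pthread_create(&tid, NULL, finalize_worker, w)) {
            free(w);
            return -1;
        }
        return pthread_detach(tid);
    }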
void tipc_net_stop(struct net *net)
extern const struct nla_policy tipc_nl_net_policy[];
int tipc_net_init(struct net *net, u8 *node_id, u32 addr);
-void tipc_net_finalize(struct net *net, u32 addr);
+void tipc_sched_net_finalize(struct net *net, u32 addr);
void tipc_net_stop(struct net *net);
int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
/* tipc_node_cleanup - delete nodes that do not
* have active links for NODE_CLEANUP_AFTER time
*/
-static int tipc_node_cleanup(struct tipc_node *peer)
+static bool tipc_node_cleanup(struct tipc_node *peer)
{
struct tipc_net *tn = tipc_net(peer->net);
bool deleted = false;
- spin_lock_bh(&tn->node_list_lock);
+ /* If lock held by tipc_node_stop() the node will be deleted anyway */
+ if (!spin_trylock_bh(&tn->node_list_lock))
+ return false;
+
tipc_node_write_lock(peer);
if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
/**
* tipc_sk_anc_data_recv - optionally capture ancillary data for received message
* @m: descriptor for message info
- * @msg: received message header
+ * @skb: received message buffer
* @tsk: TIPC port associated with message
*
* Note: Ancillary data is not captured if not requested by receiver.
*
* Returns 0 if successful, otherwise errno
*/
-static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,
+static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,
struct tipc_sock *tsk)
{
+ struct tipc_msg *msg;
u32 anc_data[3];
u32 err;
u32 dest_type;
if (likely(m->msg_controllen == 0))
return 0;
+ msg = buf_msg(skb);
/* Optionally capture errored message object(s) */
err = msg ? msg_errcode(msg) : 0;
if (res)
return res;
if (anc_data[1]) {
+ if (skb_linearize(skb))
+ return -ENOMEM;
+ msg = buf_msg(skb);
res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1],
msg_data(msg));
if (res)
/* Collect msg meta data, including error code and rejected data */
tipc_sk_set_orig_addr(m, skb);
- rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+ rc = tipc_sk_anc_data_recv(m, skb, tsk);
if (unlikely(rc))
goto exit;
+ hdr = buf_msg(skb);
/* Capture data if non-error msg, otherwise just set return value */
if (likely(!err)) {
/* Collect msg meta data, incl. error code and rejected data */
if (!copied) {
tipc_sk_set_orig_addr(m, skb);
- rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+ rc = tipc_sk_anc_data_recv(m, skb, tsk);
if (rc)
break;
+ hdr = buf_msg(skb);
}
/* Copy data if msg ok, otherwise return error/partial data */
# Try to figure out the source directory prefix so we can remove it from the
# addr2line output. HACK ALERT: This assumes that start_kernel() is in
-# kernel/init.c! This only works for vmlinux. Otherwise it falls back to
+# init/main.c! This only works for vmlinux. Otherwise it falls back to
# printing the absolute path.
find_dir_prefix() {
local objfile=$1
self.curline = 0
try:
for line in fd:
- line = line.decode(locale.getpreferredencoding(False), errors='ignore')
self.curline += 1
if self.curline > maxlines:
break
pks.pkey_algo = "rsa";
pks.hash_algo = hash_algo_name[hdr->hash_algo];
+ pks.encoding = "pkcs1";
pks.digest = (u8 *)data;
pks.digest_size = datalen;
pks.s = hdr->sig;
addr_buf = address;
while (walk_size < addrlen) {
+ if (walk_size + sizeof(sa_family_t) > addrlen)
+ return -EINVAL;
+
addr = addr_buf;
switch (addr->sa_family) {
case AF_UNSPEC:
{ RTM_NEWSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_NEWCACHEREPORT, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_READ },
};
static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =
switch (sclass) {
case SECCLASS_NETLINK_ROUTE_SOCKET:
- /* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */
+ /* RTM_MAX always points to RTM_SETxxxx, ie RTM_NEWxxx + 3.
+ * If the BUILD_BUG_ON() below fails you must update the
+ * structures at the top of this file with the new mappings
+ * before updating the BUILD_BUG_ON() macro!
+ */
BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3));
err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
sizeof(nlmsg_route_perms));
break;
case SECCLASS_NETLINK_XFRM_SOCKET:
+ /* If the BUILD_BUG_ON() below fails you must update the
+ * structures at the top of this file with the new mappings
+ * before updating the BUILD_BUG_ON() macro!
+ */
BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);
err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
sizeof(nlmsg_xfrm_perms));
char *rangep[2];
if (!pol->mls_enabled) {
- if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0')
- return 0;
- return -EINVAL;
+ /*
+ * With no MLS, only return -EINVAL if there is a MLS field
+ * and it did not come from an xattr.
+ */
+ if (oldc && def_sid == SECSID_NULL)
+ return -EINVAL;
+ return 0;
}
/*
return 0;
}
+/* add a new kcontrol object; call with card->controls_rwsem locked */
+static int __snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)
+{
+ struct snd_ctl_elem_id id;
+ unsigned int idx;
+ unsigned int count;
+
+ id = kcontrol->id;
+ if (id.index > UINT_MAX - kcontrol->count)
+ return -EINVAL;
+
+ if (snd_ctl_find_id(card, &id)) {
+ dev_err(card->dev,
+ "control %i:%i:%i:%s:%i is already present\n",
+ id.iface, id.device, id.subdevice, id.name, id.index);
+ return -EBUSY;
+ }
+
+ if (snd_ctl_find_hole(card, kcontrol->count) < 0)
+ return -ENOMEM;
+
+ list_add_tail(&kcontrol->list, &card->controls);
+ card->controls_count += kcontrol->count;
+ kcontrol->id.numid = card->last_numid + 1;
+ card->last_numid += kcontrol->count;
+
+ id = kcontrol->id;
+ count = kcontrol->count;
+ for (idx = 0; idx < count; idx++, id.index++, id.numid++)
+ snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
+
+ return 0;
+}
+
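Splitting out __snd_ctl_add() is the usual locked/unlocked helper convention: the double-underscore variant documents that the caller already holds controls_rwsem, while the public wrapper below takes the lock itself, and the user-control path further down can then do extra bookkeeping inside the same critical section. The shape of the idiom as a small sketch (a pthread rwlock standing in for the rwsem; names are illustrative):

    #include <pthread.h>

    static pthread_rwlock_t controls_rwsem = PTHREAD_RWLOCK_INITIALIZER;

    /* Call with controls_rwsem write-held. */
    static int __add_control(int id)
    {
        /* ... validate, check for duplicates, link into the list ... */
        (void)id;
        return 0;
    }

    /* Public entry point: takes the lock, delegates to the helper. */
    static int add_control(int id)
    {
        int err;

        pthread_rwlock_wrlock(&controls_rwsem);
        err = __add_control(id);
        pthread_rwlock_unlock(&controls_rwsem);
        return err;
    }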
/**
* snd_ctl_add - add the control instance to the card
* @card: the card instance
*/
int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)
{
- struct snd_ctl_elem_id id;
- unsigned int idx;
- unsigned int count;
int err = -EINVAL;
if (! kcontrol)
return err;
if (snd_BUG_ON(!card || !kcontrol->info))
goto error;
- id = kcontrol->id;
- if (id.index > UINT_MAX - kcontrol->count)
- goto error;
down_write(&card->controls_rwsem);
- if (snd_ctl_find_id(card, &id)) {
- up_write(&card->controls_rwsem);
- dev_err(card->dev, "control %i:%i:%i:%s:%i is already present\n",
- id.iface,
- id.device,
- id.subdevice,
- id.name,
- id.index);
- err = -EBUSY;
- goto error;
- }
- if (snd_ctl_find_hole(card, kcontrol->count) < 0) {
- up_write(&card->controls_rwsem);
- err = -ENOMEM;
- goto error;
- }
- list_add_tail(&kcontrol->list, &card->controls);
- card->controls_count += kcontrol->count;
- kcontrol->id.numid = card->last_numid + 1;
- card->last_numid += kcontrol->count;
- id = kcontrol->id;
- count = kcontrol->count;
+ err = __snd_ctl_add(card, kcontrol);
up_write(&card->controls_rwsem);
- for (idx = 0; idx < count; idx++, id.index++, id.numid++)
- snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
+ if (err < 0)
+ goto error;
return 0;
error:
kctl->tlv.c = snd_ctl_elem_user_tlv;
/* This function manages to free the instance on failure. */
- err = snd_ctl_add(card, kctl);
- if (err < 0)
- return err;
+ down_write(&card->controls_rwsem);
+ err = __snd_ctl_add(card, kctl);
+ if (err < 0) {
+ snd_ctl_free_one(kctl);
+ goto unlock;
+ }
offset = snd_ctl_get_ioff(kctl, &info->id);
snd_ctl_build_ioff(&info->id, kctl, offset);
/*
* which locks the element.
*/
- down_write(&card->controls_rwsem);
card->user_ctl_count++;
- up_write(&card->controls_rwsem);
+ unlock:
+ up_write(&card->controls_rwsem);
return 0;
}
runtime->oss.channels = params_channels(params);
runtime->oss.rate = params_rate(params);
- vfree(runtime->oss.buffer);
- runtime->oss.buffer = vmalloc(runtime->oss.period_bytes);
+ kvfree(runtime->oss.buffer);
+ runtime->oss.buffer = kvzalloc(runtime->oss.period_bytes, GFP_KERNEL);
if (!runtime->oss.buffer) {
err = -ENOMEM;
goto failure;
{
struct snd_pcm_runtime *runtime;
runtime = substream->runtime;
- vfree(runtime->oss.buffer);
+ kvfree(runtime->oss.buffer);
runtime->oss.buffer = NULL;
#ifdef CONFIG_SND_PCM_OSS_PLUGINS
snd_pcm_oss_plugin_clear(substream);
return -ENXIO;
size /= 8;
if (plugin->buf_frames < frames) {
- vfree(plugin->buf);
- plugin->buf = vmalloc(size);
+ kvfree(plugin->buf);
+ plugin->buf = kvzalloc(size, GFP_KERNEL);
plugin->buf_frames = frames;
}
if (!plugin->buf) {
if (plugin->private_free)
plugin->private_free(plugin);
kfree(plugin->buf_channels);
- vfree(plugin->buf);
+ kvfree(plugin->buf);
kfree(plugin);
return 0;
}
if (err < 0) {
if (chip->release_dma)
chip->release_dma(chip, chip->dma_private_data, chip->dma1);
- snd_free_pages(runtime->dma_area, runtime->dma_bytes);
return err;
}
chip->playback_substream = substream;
if (err < 0) {
if (chip->release_dma)
chip->release_dma(chip, chip->dma_private_data, chip->dma2);
- snd_free_pages(runtime->dma_area, runtime->dma_bytes);
return err;
}
chip->capture_substream = substream;
{
struct snd_ac97 *ac97 = snd_kcontrol_chip(kcontrol);
int reg = kcontrol->private_value & 0xff;
- int shift = (kcontrol->private_value >> 8) & 0xff;
+ int shift = (kcontrol->private_value >> 8) & 0x0f;
int mask = (kcontrol->private_value >> 16) & 0xff;
// int invert = (kcontrol->private_value >> 24) & 0xff;
unsigned short value, old, new;
/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0),
/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ SND_PCI_QUIRK(0x1849, 0x0397, "Asrock N68C-S UCC", 0),
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0),
/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE),
SND_PCI_QUIRK(0x1102, 0x0010, "Sound Blaster Z", QUIRK_SBZ),
SND_PCI_QUIRK(0x1102, 0x0023, "Sound Blaster Z", QUIRK_SBZ),
+ SND_PCI_QUIRK(0x1102, 0x0033, "Sound Blaster ZxR", QUIRK_SBZ),
SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI),
SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
snd_hda_power_down(codec);
if (spec->mem_base)
- iounmap(spec->mem_base);
+ pci_iounmap(codec->bus->pci, spec->mem_base);
kfree(spec->spec_init_verbs);
kfree(codec->spec);
}
break;
case QUIRK_AE5:
codec_dbg(codec, "%s: QUIRK_AE5 applied.\n", __func__);
- snd_hda_apply_pincfgs(codec, r3di_pincfgs);
+ snd_hda_apply_pincfgs(codec, ae5_pincfgs);
break;
}
case 0x10ec0285:
case 0x10ec0298:
case 0x10ec0289:
+ case 0x10ec0300:
alc_update_coef_idx(codec, 0x10, 1<<9, 0);
break;
case 0x10ec0275:
ALC269_TYPE_ALC215,
ALC269_TYPE_ALC225,
ALC269_TYPE_ALC294,
+ ALC269_TYPE_ALC300,
ALC269_TYPE_ALC700,
};
case ALC269_TYPE_ALC215:
case ALC269_TYPE_ALC225:
case ALC269_TYPE_ALC294:
+ case ALC269_TYPE_ALC300:
case ALC269_TYPE_ALC700:
ssids = alc269_ssids;
break;
spec->gen.preferred_dacs = preferred_pairs;
}
+/* The DAC of NID 0x3 will introduce click/pop noise on headphones, so invalidate it */
+static void alc285_fixup_invalidate_dacs(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ if (action != HDA_FIXUP_ACT_PRE_PROBE)
+ return;
+
+ snd_hda_override_wcaps(codec, 0x03, 0);
+}
+
/* for hda_fixup_thinkpad_acpi() */
#include "thinkpad_helper.c"
ALC255_FIXUP_DELL_HEADSET_MIC,
ALC295_FIXUP_HP_X360,
ALC221_FIXUP_HP_HEADSET_MIC,
+ ALC285_FIXUP_LENOVO_HEADPHONE_NOISE,
+ ALC295_FIXUP_HP_AUTO_MUTE,
};
static const struct hda_fixup alc269_fixups[] = {
[ALC269_FIXUP_HP_MUTE_LED_MIC3] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_hp_mute_led_mic3,
+ .chained = true,
+ .chain_id = ALC295_FIXUP_HP_AUTO_MUTE
},
[ALC269_FIXUP_HP_GPIO_LED] = {
.type = HDA_FIXUP_FUNC,
.chained = true,
.chain_id = ALC269_FIXUP_HEADSET_MIC
},
+ [ALC285_FIXUP_LENOVO_HEADPHONE_NOISE] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_invalidate_dacs,
+ },
+ [ALC295_FIXUP_HP_AUTO_MUTE] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_auto_mute_via_amp,
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
{0x12, 0x90a60130},
{0x19, 0x03a11020},
{0x21, 0x0321101f}),
+ SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_HEADPHONE_NOISE,
+ {0x12, 0x90a60130},
+ {0x14, 0x90170110},
+ {0x19, 0x04a11040},
+ {0x21, 0x04211020}),
SND_HDA_PIN_QUIRK(0x10ec0288, 0x1028, "Dell", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
{0x12, 0x90a60120},
{0x14, 0x90170110},
spec->gen.mixer_nid = 0; /* ALC2x4 does not have any loopback mixer path */
alc_update_coef_idx(codec, 0x6b, 0x0018, (1<<4) | (1<<3)); /* UAJ MIC Vref control by verb */
break;
+ case 0x10ec0300:
+ spec->codec_variant = ALC269_TYPE_ALC300;
+ spec->gen.mixer_nid = 0; /* no loopback on ALC300 */
+ break;
case 0x10ec0700:
case 0x10ec0701:
case 0x10ec0703:
HDA_CODEC_ENTRY(0x10ec0295, "ALC295", patch_alc269),
HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269),
HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269),
HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861),
HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd),
HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
*/
snd_hdac_codec_read(hdev, hdev->afg, 0, AC_VERB_SET_POWER_STATE,
AC_PWRST_D3);
- err = snd_hdac_display_power(bus, false);
- if (err < 0) {
- dev_err(dev, "Cannot turn on display power on i915\n");
- return err;
- }
hlink = snd_hdac_ext_bus_get_link(bus, dev_name(dev));
if (!hlink) {
snd_hdac_ext_bus_link_put(bus, hlink);
- return 0;
+ err = snd_hdac_display_power(bus, false);
+ if (err < 0)
+ dev_err(dev, "Cannot turn off display power on i915\n");
+
+ return err;
}
static int hdac_hdmi_runtime_resume(struct device *dev)
#define PCM186X_MAX_REGISTER PCM186X_CURR_TRIM_CTRL
/* PCM186X_PAGE */
-#define PCM186X_RESET 0xff
+#define PCM186X_RESET 0xfe
/* PCM186X_ADCX_INPUT_SEL_X */
#define PCM186X_ADC_INPUT_SEL_POL BIT(7)
};
static const struct snd_soc_dapm_widget pcm3060_dapm_widgets[] = {
- SND_SOC_DAPM_OUTPUT("OUTL+"),
- SND_SOC_DAPM_OUTPUT("OUTR+"),
- SND_SOC_DAPM_OUTPUT("OUTL-"),
- SND_SOC_DAPM_OUTPUT("OUTR-"),
+ SND_SOC_DAPM_OUTPUT("OUTL"),
+ SND_SOC_DAPM_OUTPUT("OUTR"),
SND_SOC_DAPM_INPUT("INL"),
SND_SOC_DAPM_INPUT("INR"),
};
static const struct snd_soc_dapm_route pcm3060_dapm_map[] = {
- { "OUTL+", NULL, "Playback" },
- { "OUTR+", NULL, "Playback" },
- { "OUTL-", NULL, "Playback" },
- { "OUTR-", NULL, "Playback" },
+ { "OUTL", NULL, "Playback" },
+ { "OUTR", NULL, "Playback" },
{ "Capture", NULL, "INL" },
{ "Capture", NULL, "INR" },
static void wm_adsp2_show_fw_status(struct wm_adsp *dsp)
{
- u16 scratch[4];
+ unsigned int scratch[4];
+ unsigned int addr = dsp->base + ADSP2_SCRATCH0;
+ unsigned int i;
int ret;
- ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2_SCRATCH0,
- scratch, sizeof(scratch));
- if (ret) {
- adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);
- return;
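+		/* read one register at a time; an on-stack buffer is not DMA-safe for a raw bulk read */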
+ for (i = 0; i < ARRAY_SIZE(scratch); ++i) {
+ ret = regmap_read(dsp->regmap, addr + i, &scratch[i]);
+ if (ret) {
+ adsp_err(dsp, "Failed to read SCRATCH%u: %d\n", i, ret);
+ return;
+ }
}
adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",
- be16_to_cpu(scratch[0]),
- be16_to_cpu(scratch[1]),
- be16_to_cpu(scratch[2]),
- be16_to_cpu(scratch[3]));
+ scratch[0], scratch[1], scratch[2], scratch[3]);
}
static void wm_adsp2v2_show_fw_status(struct wm_adsp *dsp)
{
- u32 scratch[2];
+ unsigned int scratch[2];
int ret;
- ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,
- scratch, sizeof(scratch));
-
+ ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,
+ &scratch[0]);
if (ret) {
- adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);
+ adsp_err(dsp, "Failed to read SCRATCH0_1: %d\n", ret);
return;
}
- scratch[0] = be32_to_cpu(scratch[0]);
- scratch[1] = be32_to_cpu(scratch[1]);
+ ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH2_3,
+ &scratch[1]);
+ if (ret) {
+ adsp_err(dsp, "Failed to read SCRATCH2_3: %d\n", ret);
+ return;
+ }
adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",
scratch[0] & 0xFFFF,
codec, then enable this option by saying Y or m. This is a
recommended option
-config SND_SOC_INTEL_SKYLAKE_SSP_CLK
- tristate
-
config SND_SOC_INTEL_SKYLAKE
tristate "SKL/BXT/KBL/GLK/CNL... Platforms"
depends on PCI && ACPI
+ select SND_SOC_INTEL_SKYLAKE_COMMON
+ help
+	  If you have an Intel Skylake/Broxton/ApolloLake/KabyLake/
+ GeminiLake or CannonLake platform with the DSP enabled in the BIOS
+ then enable this option by saying Y or m.
+
+if SND_SOC_INTEL_SKYLAKE
+
+config SND_SOC_INTEL_SKYLAKE_SSP_CLK
+ tristate
+
+config SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
+ bool "HDAudio codec support"
+ help
+	  If you have an Intel Skylake/Broxton/ApolloLake/KabyLake/
+	  GeminiLake or CannonLake platform with an HDAudio codec,
+	  then enable this option by saying Y.
+
+config SND_SOC_INTEL_SKYLAKE_COMMON
+ tristate
select SND_HDA_EXT_CORE
select SND_HDA_DSP_LOADER
select SND_SOC_TOPOLOGY
select SND_SOC_INTEL_SST
+ select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
select SND_SOC_ACPI_INTEL_MATCH
help
If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/
GeminiLake or CannonLake platform with the DSP enabled in the BIOS
then enable this option by saying Y or m.
+endif ## SND_SOC_INTEL_SKYLAKE
+
config SND_SOC_ACPI_INTEL_MATCH
tristate
select SND_SOC_ACPI if ACPI
Say Y if you have such a device.
If unsure select "N".
-config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH
- tristate "SKL/KBL/BXT/APL with HDA Codecs"
- select SND_SOC_HDAC_HDMI
- select SND_SOC_HDAC_HDA
- help
- This adds support for ASoC machine driver for Intel platforms
- SKL/KBL/BXT/APL with iDisp, HDA audio codecs.
- Say Y or m if you have such a device. This is a recommended option.
- If unsure select "N".
-
config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH
tristate "GLK with RT5682 and MAX98357A in I2S Mode"
depends on MFD_INTEL_LPSS && I2C && ACPI
endif ## SND_SOC_INTEL_SKYLAKE
+if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
+
+config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH
+ tristate "SKL/KBL/BXT/APL with HDA Codecs"
+ select SND_SOC_HDAC_HDMI
+ # SND_SOC_HDAC_HDA is already selected
+ help
+ This adds support for ASoC machine driver for Intel platforms
+ SKL/KBL/BXT/APL with iDisp, HDA audio codecs.
+ Say Y or m if you have such a device. This is a recommended option.
+ If unsure select "N".
+
+endif ## SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
+
endif ## SND_SOC_INTEL_MACH
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
+#include <linux/dmi.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#define CHT_PLAT_CLK_3_HZ 19200000
#define CHT_CODEC_DAI "HiFi"
+#define QUIRK_PMC_PLT_CLK_0 0x01
+
struct cht_mc_private {
struct clk *mclk;
struct snd_soc_jack jack;
.num_controls = ARRAY_SIZE(cht_mc_controls),
};
+static const struct dmi_system_id cht_max98090_quirk_table[] = {
+ {
+ /* Swanky model Chromebook (Toshiba Chromebook 2) */
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Swanky"),
+ },
+ .driver_data = (void *)QUIRK_PMC_PLT_CLK_0,
+ },
+ {}
+};
+
static int snd_cht_mc_probe(struct platform_device *pdev)
{
+ const struct dmi_system_id *dmi_id;
struct device *dev = &pdev->dev;
int ret_val = 0;
struct cht_mc_private *drv;
+ const char *mclk_name;
+ int quirks = 0;
+
+ dmi_id = dmi_first_match(cht_max98090_quirk_table);
+ if (dmi_id)
+ quirks = (unsigned long)dmi_id->driver_data;
drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
if (!drv)
snd_soc_card_cht.dev = &pdev->dev;
snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);
- drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
+ if (quirks & QUIRK_PMC_PLT_CLK_0)
+ mclk_name = "pmc_plt_clk_0";
+ else
+ mclk_name = "pmc_plt_clk_3";
+
+ drv->mclk = devm_clk_get(&pdev->dev, mclk_name);
if (IS_ERR(drv->mclk)) {
dev_err(&pdev->dev,
- "Failed to get MCLK from pmc_plt_clk_3: %ld\n",
- PTR_ERR(drv->mclk));
+ "Failed to get MCLK from %s: %ld\n",
+ mclk_name, PTR_ERR(drv->mclk));
return PTR_ERR(drv->mclk);
}
#include "skl.h"
#include "skl-sst-dsp.h"
#include "skl-sst-ipc.h"
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
#include "../../../soc/codecs/hdac_hda.h"
+#endif
/*
* initialize the PCI registers
platform_device_unregister(skl->clk_dev);
}
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
+
#define IDISP_INTEL_VENDOR_ID 0x80860000
/*
#endif
}
+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */
+
/*
* Probe the given codec address
*/
(AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;
unsigned int res = -1;
struct skl *skl = bus_to_skl(bus);
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
struct hdac_hda_priv *hda_codec;
- struct hdac_device *hdev;
int err;
+#endif
+ struct hdac_device *hdev;
mutex_lock(&bus->cmd_mutex);
snd_hdac_bus_send_cmd(bus, cmd);
return -EIO;
dev_dbg(bus->dev, "codec #%d probed OK: %x\n", addr, res);
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
hda_codec = devm_kzalloc(&skl->pci->dev, sizeof(*hda_codec),
GFP_KERNEL);
if (!hda_codec)
load_codec_module(&hda_codec->codec);
}
return 0;
+#else
+ hdev = devm_kzalloc(&skl->pci->dev, sizeof(*hdev), GFP_KERNEL);
+ if (!hdev)
+ return -ENOMEM;
+
+ return snd_hdac_ext_bus_device_init(bus, addr, hdev);
+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */
}
/* Codec initialization */
}
}
+ /*
+ * we are done probing so decrement link counts
+ */
+ list_for_each_entry(hlink, &bus->hlink_list, list)
+ snd_hdac_ext_bus_link_put(bus, hlink);
+
if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {
err = snd_hdac_display_power(bus, false);
if (err < 0) {
}
}
- /*
- * we are done probing so decrement link counts
- */
- list_for_each_entry(hlink, &bus->hlink_list, list)
- snd_hdac_ext_bus_link_put(bus, hlink);
-
/* configure PM */
pm_runtime_put_noidle(bus->dev);
pm_runtime_allow(bus->dev);
hbus = skl_to_hbus(skl);
bus = skl_to_bus(skl);
-#if IS_ENABLED(CONFIG_SND_SOC_HDAC_HDA)
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
ext_ops = snd_soc_hdac_hda_get_ops();
#endif
snd_hdac_ext_bus_init(bus, &pci->dev, &bus_core_ops, io_ops, ext_ops);
#include "../codecs/twl6040.h"
struct abe_twl6040 {
+ struct snd_soc_card card;
+ struct snd_soc_dai_link dai_links[2];
int jack_detection; /* board can detect jack events */
int mclk_freq; /* MCLK frequency speed for twl6040 */
};
ARRAY_SIZE(dmic_audio_map));
}
-/* Digital audio interface glue - connects codec <--> CPU */
-static struct snd_soc_dai_link abe_twl6040_dai_links[] = {
- {
- .name = "TWL6040",
- .stream_name = "TWL6040",
- .codec_dai_name = "twl6040-legacy",
- .codec_name = "twl6040-codec",
- .init = omap_abe_twl6040_init,
- .ops = &omap_abe_ops,
- },
- {
- .name = "DMIC",
- .stream_name = "DMIC Capture",
- .codec_dai_name = "dmic-hifi",
- .codec_name = "dmic-codec",
- .init = omap_abe_dmic_init,
- .ops = &omap_abe_dmic_ops,
- },
-};
-
-/* Audio machine driver */
-static struct snd_soc_card omap_abe_card = {
- .owner = THIS_MODULE,
-
- .dapm_widgets = twl6040_dapm_widgets,
- .num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets),
- .dapm_routes = audio_map,
- .num_dapm_routes = ARRAY_SIZE(audio_map),
-};
-
static int omap_abe_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
- struct snd_soc_card *card = &omap_abe_card;
+ struct snd_soc_card *card;
struct device_node *dai_node;
struct abe_twl6040 *priv;
int num_links = 0;
return -ENODEV;
}
- card->dev = &pdev->dev;
-
priv = devm_kzalloc(&pdev->dev, sizeof(struct abe_twl6040), GFP_KERNEL);
if (priv == NULL)
return -ENOMEM;
+ card = &priv->card;
+ card->dev = &pdev->dev;
+ card->owner = THIS_MODULE;
+ card->dapm_widgets = twl6040_dapm_widgets;
+ card->num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets);
+ card->dapm_routes = audio_map;
+ card->num_dapm_routes = ARRAY_SIZE(audio_map);
+
if (snd_soc_of_parse_card_name(card, "ti,model")) {
dev_err(&pdev->dev, "Card name is not provided\n");
return -ENODEV;
dev_err(&pdev->dev, "McPDM node is not provided\n");
return -EINVAL;
}
- abe_twl6040_dai_links[0].cpu_of_node = dai_node;
- abe_twl6040_dai_links[0].platform_of_node = dai_node;
+
+	priv->dai_links[0].name = "TWL6040";
+ priv->dai_links[0].stream_name = "TWL6040";
+ priv->dai_links[0].cpu_of_node = dai_node;
+ priv->dai_links[0].platform_of_node = dai_node;
+ priv->dai_links[0].codec_dai_name = "twl6040-legacy";
+ priv->dai_links[0].codec_name = "twl6040-codec";
+ priv->dai_links[0].init = omap_abe_twl6040_init;
+ priv->dai_links[0].ops = &omap_abe_ops;
dai_node = of_parse_phandle(node, "ti,dmic", 0);
if (dai_node) {
num_links = 2;
- abe_twl6040_dai_links[1].cpu_of_node = dai_node;
- abe_twl6040_dai_links[1].platform_of_node = dai_node;
+		priv->dai_links[1].name = "DMIC";
+ priv->dai_links[1].stream_name = "DMIC Capture";
+ priv->dai_links[1].cpu_of_node = dai_node;
+ priv->dai_links[1].platform_of_node = dai_node;
+ priv->dai_links[1].codec_dai_name = "dmic-hifi";
+ priv->dai_links[1].codec_name = "dmic-codec";
+ priv->dai_links[1].init = omap_abe_dmic_init;
+ priv->dai_links[1].ops = &omap_abe_dmic_ops;
} else {
num_links = 1;
}
return -ENODEV;
}
- card->dai_link = abe_twl6040_dai_links;
+ card->dai_link = priv->dai_links;
card->num_links = num_links;
snd_soc_card_set_drvdata(card, priv);
struct device *dev;
void __iomem *io_base;
struct clk *fclk;
+ struct pm_qos_request pm_qos_req;
+ int latency;
int fclk_freq;
int out_freq;
int clk_div;
mutex_lock(&dmic->mutex);
+ pm_qos_remove_request(&dmic->pm_qos_req);
+
if (!dai->active)
dmic->active = 0;
/* packet size is threshold * channels */
dma_data = snd_soc_dai_get_dma_data(dai, substream);
dma_data->maxburst = dmic->threshold * channels;
+ dmic->latency = (OMAP_DMIC_THRES_MAX - dmic->threshold) * USEC_PER_SEC /
+ params_rate(params);
return 0;
}
struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
u32 ctrl;
+ if (pm_qos_request_active(&dmic->pm_qos_req))
+ pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+
/* Configure uplink threshold */
omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
pkt_size = channels;
}
- latency = ((((buffer_size - pkt_size) / channels) * 1000)
- / (params->rate_num / params->rate_den));
-
+ latency = (buffer_size - pkt_size) / channels;
+ latency = latency * USEC_PER_SEC /
+ (params->rate_num / params->rate_den);
mcbsp->latency[substream->stream] = latency;
omap_mcbsp_set_threshold(substream, pkt_size);
unsigned long phys_base;
void __iomem *io_base;
int irq;
+ struct pm_qos_request pm_qos_req;
+ int latency[2];
struct mutex mutex;
struct snd_soc_dai *dai)
{
struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai);
+ int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK);
+ int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
+ int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
mutex_lock(&mcpdm->mutex);
}
}
+ if (mcpdm->latency[stream2])
+ pm_qos_update_request(&mcpdm->pm_qos_req,
+ mcpdm->latency[stream2]);
+ else if (mcpdm->latency[stream1])
+ pm_qos_remove_request(&mcpdm->pm_qos_req);
+
+ mcpdm->latency[stream1] = 0;
+
mutex_unlock(&mcpdm->mutex);
}
int stream = substream->stream;
struct snd_dmaengine_dai_dma_data *dma_data;
u32 threshold;
- int channels;
+ int channels, latency;
int link_mask = 0;
channels = params_channels(params);
dma_data->maxburst =
(MCPDM_DN_THRES_MAX - threshold) * channels;
+ latency = threshold;
} else {
/* If playback is not running assume a stereo stream to come */
if (!mcpdm->config[!stream].link_mask)
mcpdm->config[!stream].link_mask = (0x3 << 3);
dma_data->maxburst = threshold * channels;
+ latency = (MCPDM_DN_THRES_MAX - threshold);
}
+	/*
+	 * The DMA must service a DMA request within the latency time (usec)
+	 * to avoid FIFO under/overflow (a worked example follows this hunk).
+	 */
+ mcpdm->latency[stream] = latency * USEC_PER_SEC / params_rate(params);
+
+ if (!mcpdm->latency[stream])
+ mcpdm->latency[stream] = 10;
+
/* Check if we need to restart McPDM with this stream */
if (mcpdm->config[stream].link_mask &&
mcpdm->config[stream].link_mask != link_mask)
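A worked instance of the latency arithmetic above, as a standalone sketch;
the FIFO headroom and sample rate values are hypothetical, and the 10 us
floor mirrors the driver code::

    #include <stdio.h>

    #define USEC_PER_SEC	1000000L

    int main(void)
    {
        long fifo_headroom = 30;   /* samples until under/overflow (assumed) */
        long rate = 96000;         /* assumed sample rate in Hz */

        /* same formula as mcpdm->latency[stream] in the hunk above */
        long latency_us = fifo_headroom * USEC_PER_SEC / rate;

        if (!latency_us)
            latency_us = 10;       /* the driver's minimum bound */

        printf("PM QoS CPU DMA latency bound: %ld us\n", latency_us);
        return 0;
    }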
struct snd_soc_dai *dai)
{
struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai);
+ struct pm_qos_request *pm_qos_req = &mcpdm->pm_qos_req;
+ int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK);
+ int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
+ int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
+ int latency = mcpdm->latency[stream2];
+
+	/* Prevent the OMAP hardware from entering the off state between FIFO fills */
+ if (!latency || mcpdm->latency[stream1] < latency)
+ latency = mcpdm->latency[stream1];
+
+ if (pm_qos_request_active(pm_qos_req))
+ pm_qos_update_request(pm_qos_req, latency);
+ else if (latency)
+ pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
if (!omap_mcpdm_active(mcpdm)) {
omap_mcpdm_start(mcpdm);
free_irq(mcpdm->irq, (void *)mcpdm);
pm_runtime_disable(mcpdm->dev);
+ if (pm_qos_request_active(&mcpdm->pm_qos_req))
+ pm_qos_remove_request(&mcpdm->pm_qos_req);
+
return 0;
}
struct device_node *cpu = NULL;
struct device *dev = card->dev;
struct snd_soc_dai_link *link;
+ struct of_phandle_args args;
int ret, num_links;
ret = snd_soc_of_parse_card_name(card, "model");
goto err;
}
- link->cpu_of_node = of_parse_phandle(cpu, "sound-dai", 0);
- if (!link->cpu_of_node) {
+ ret = of_parse_phandle_with_args(cpu, "sound-dai",
+ "#sound-dai-cells", 0, &args);
+ if (ret) {
dev_err(card->dev, "error getting cpu phandle\n");
- ret = -EINVAL;
goto err;
}
+ link->cpu_of_node = args.np;
+ link->id = args.args[0];
ret = snd_soc_of_get_dai_name(cpu, &link->cpu_dai_name);
if (ret) {
}
static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = {
- SND_SOC_DAPM_AIF_OUT("HDMI_RX", "HDMI Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_RX", "Slimbus Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_RX", "Slimbus1 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_RX", "Slimbus2 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_RX", "Slimbus3 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_RX", "Slimbus4 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_RX", "Slimbus5 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_RX", "Slimbus6 Playback", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_0_TX", "Slimbus Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_1_TX", "Slimbus1 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_2_TX", "Slimbus2 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_3_TX", "Slimbus3 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_4_TX", "Slimbus4 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_5_TX", "Slimbus5 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SLIMBUS_6_TX", "Slimbus6 Capture", 0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_RX", "Quaternary MI2S Playback",
+ SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_MI2S_TX", "Quaternary MI2S Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_MI2S_RX", "Tertiary MI2S Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_MI2S_TX", "Tertiary MI2S Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX", "Secondary MI2S Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_MI2S_TX", "Secondary MI2S Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX_SD1",
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1",
"Secondary MI2S Playback SD1",
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRI_MI2S_RX", "Primary MI2S Playback",
+ SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRI_MI2S_TX", "Primary MI2S Capture",
+ SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_0", "Primary TDM0 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_1", "Primary TDM1 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_2", "Primary TDM2 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_3", "Primary TDM3 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_4", "Primary TDM4 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_5", "Primary TDM5 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_6", "Primary TDM6 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_7", "Primary TDM7 Playback",
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_0", "Primary TDM0 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_1", "Primary TDM1 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_2", "Primary TDM2 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_3", "Primary TDM3 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_4", "Primary TDM4 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_5", "Primary TDM5 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_6", "Primary TDM6 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_7", "Primary TDM7 Capture",
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_0", "Secondary TDM0 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_1", "Secondary TDM1 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_2", "Secondary TDM2 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_3", "Secondary TDM3 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_4", "Secondary TDM4 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_5", "Secondary TDM5 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_6", "Secondary TDM6 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_7", "Secondary TDM7 Playback",
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_0", "Secondary TDM0 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_1", "Secondary TDM1 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_2", "Secondary TDM2 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_3", "Secondary TDM3 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_4", "Secondary TDM4 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_5", "Secondary TDM5 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_6", "Secondary TDM6 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_7", "Secondary TDM7 Capture",
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_0", "Tertiary TDM0 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_1", "Tertiary TDM1 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_2", "Tertiary TDM2 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_3", "Tertiary TDM3 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_4", "Tertiary TDM4 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_5", "Tertiary TDM5 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_6", "Tertiary TDM6 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_7", "Tertiary TDM7 Playback",
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_0", "Tertiary TDM0 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_1", "Tertiary TDM1 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_2", "Tertiary TDM2 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_3", "Tertiary TDM3 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_4", "Tertiary TDM4 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_5", "Tertiary TDM5 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_6", "Tertiary TDM6 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_7", "Tertiary TDM7 Capture",
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_0", "Quaternary TDM0 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_1", "Quaternary TDM1 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_2", "Quaternary TDM2 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_3", "Quaternary TDM3 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_4", "Quaternary TDM4 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_5", "Quaternary TDM5 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_6", "Quaternary TDM6 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_7", "Quaternary TDM7 Playback",
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_0", "Quaternary TDM0 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_1", "Quaternary TDM1 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_2", "Quaternary TDM2 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_3", "Quaternary TDM3 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_4", "Quaternary TDM4 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_5", "Quaternary TDM5 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_6", "Quaternary TDM6 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_7", "Quaternary TDM7 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_0", "Quinary TDM0 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_1", "Quinary TDM1 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_2", "Quinary TDM2 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_3", "Quinary TDM3 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_4", "Quinary TDM4 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_5", "Quinary TDM5 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_6", "Quinary TDM6 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_7", "Quinary TDM7 Playback",
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_0", "Quinary TDM0 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_1", "Quinary TDM1 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_2", "Quinary TDM2 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_3", "Quinary TDM3 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_4", "Quinary TDM4 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_5", "Quinary TDM5 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_6", "Quinary TDM6 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL,
0, 0, 0, 0),
- SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_7", "Quinary TDM7 Capture",
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL,
0, 0, 0, 0),
};
#define AFE_PORT_I2S_SD1 0x2
#define AFE_PORT_I2S_SD2 0x3
#define AFE_PORT_I2S_SD3 0x4
-#define AFE_PORT_I2S_SD0_MASK BIT(0x1)
-#define AFE_PORT_I2S_SD1_MASK BIT(0x2)
-#define AFE_PORT_I2S_SD2_MASK BIT(0x3)
-#define AFE_PORT_I2S_SD3_MASK BIT(0x4)
-#define AFE_PORT_I2S_SD0_1_MASK GENMASK(2, 1)
-#define AFE_PORT_I2S_SD2_3_MASK GENMASK(4, 3)
-#define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(3, 1)
-#define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(4, 1)
+#define AFE_PORT_I2S_SD0_MASK BIT(0x0)
+#define AFE_PORT_I2S_SD1_MASK BIT(0x1)
+#define AFE_PORT_I2S_SD2_MASK BIT(0x2)
+#define AFE_PORT_I2S_SD3_MASK BIT(0x3)
+#define AFE_PORT_I2S_SD0_1_MASK GENMASK(1, 0)
+#define AFE_PORT_I2S_SD2_3_MASK GENMASK(3, 2)
+#define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(2, 0)
+#define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(3, 0)
#define AFE_PORT_I2S_QUAD01 0x5
#define AFE_PORT_I2S_QUAD23 0x6
#define AFE_PORT_I2S_6CHS 0x7
.rate_max = 48000, \
}, \
.name = "MultiMedia"#num, \
- .probe = fe_dai_probe, \
.id = MSM_FRONTEND_DAI_MULTIMEDIA##num, \
}
}
}
-static const struct snd_soc_dapm_route afe_pcm_routes[] = {
- {"MM_DL1", NULL, "MultiMedia1 Playback" },
- {"MM_DL2", NULL, "MultiMedia2 Playback" },
- {"MM_DL3", NULL, "MultiMedia3 Playback" },
- {"MM_DL4", NULL, "MultiMedia4 Playback" },
- {"MM_DL5", NULL, "MultiMedia5 Playback" },
- {"MM_DL6", NULL, "MultiMedia6 Playback" },
- {"MM_DL7", NULL, "MultiMedia7 Playback" },
- {"MM_DL7", NULL, "MultiMedia8 Playback" },
- {"MultiMedia1 Capture", NULL, "MM_UL1"},
- {"MultiMedia2 Capture", NULL, "MM_UL2"},
- {"MultiMedia3 Capture", NULL, "MM_UL3"},
- {"MultiMedia4 Capture", NULL, "MM_UL4"},
- {"MultiMedia5 Capture", NULL, "MM_UL5"},
- {"MultiMedia6 Capture", NULL, "MM_UL6"},
- {"MultiMedia7 Capture", NULL, "MM_UL7"},
- {"MultiMedia8 Capture", NULL, "MM_UL8"},
-
-};
-
-static int fe_dai_probe(struct snd_soc_dai *dai)
-{
- struct snd_soc_dapm_context *dapm;
-
- dapm = snd_soc_component_get_dapm(dai->component);
- snd_soc_dapm_add_routes(dapm, afe_pcm_routes,
- ARRAY_SIZE(afe_pcm_routes));
-
- return 0;
-}
-
-
static const struct snd_soc_component_driver q6asm_fe_dai_component = {
.name = DRV_NAME,
.ops = &q6asm_dai_ops,
{"MM_UL6", NULL, "MultiMedia6 Mixer"},
{"MM_UL7", NULL, "MultiMedia7 Mixer"},
{"MM_UL8", NULL, "MultiMedia8 Mixer"},
+
+ {"MM_DL1", NULL, "MultiMedia1 Playback" },
+ {"MM_DL2", NULL, "MultiMedia2 Playback" },
+ {"MM_DL3", NULL, "MultiMedia3 Playback" },
+ {"MM_DL4", NULL, "MultiMedia4 Playback" },
+ {"MM_DL5", NULL, "MultiMedia5 Playback" },
+ {"MM_DL6", NULL, "MultiMedia6 Playback" },
+ {"MM_DL7", NULL, "MultiMedia7 Playback" },
+ {"MM_DL8", NULL, "MultiMedia8 Playback" },
+
+ {"MultiMedia1 Capture", NULL, "MM_UL1"},
+ {"MultiMedia2 Capture", NULL, "MM_UL2"},
+ {"MultiMedia3 Capture", NULL, "MM_UL3"},
+ {"MultiMedia4 Capture", NULL, "MM_UL4"},
+ {"MultiMedia5 Capture", NULL, "MM_UL5"},
+ {"MultiMedia6 Capture", NULL, "MM_UL6"},
+ {"MultiMedia7 Capture", NULL, "MM_UL7"},
+ {"MultiMedia8 Capture", NULL, "MM_UL8"},
+
};
static int routing_hw_params(struct snd_pcm_substream *substream,
static const struct snd_dmaengine_pcm_config rk_dmaengine_pcm_config = {
.pcm_hardware = &snd_rockchip_hardware,
+ .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
.prealloc_buffer_size = 32 * 1024,
};
if (rsnd_ssi_is_multi_slave(mod, io))
return 0;
- if (ssi->rate) {
+ if (ssi->usrcnt > 1) {
if (ssi->rate != rate) {
dev_err(dev, "SSI parent/child should use same rate\n");
return -EINVAL;
snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines)
{
struct snd_soc_acpi_mach *mach;
+ struct snd_soc_acpi_mach *mach_alt;
for (mach = machines; mach->id[0]; mach++) {
if (acpi_dev_present(mach->id, NULL, -1)) {
- if (mach->machine_quirk)
- mach = mach->machine_quirk(mach);
+ if (mach->machine_quirk) {
+ mach_alt = mach->machine_quirk(mach);
+ if (!mach_alt)
+ continue; /* not full match, ignore */
+ mach = mach_alt;
+ }
+
return mach;
}
}
}
card->instantiated = 1;
+ dapm_mark_endpoints_dirty(card);
snd_soc_dapm_sync(&card->dapm);
mutex_unlock(&card->mutex);
mutex_unlock(&client_mutex);
char *mclk_name, *p, *s = (char *)pname;
int ret, i = 0;
- mclk = devm_kzalloc(dev, sizeof(mclk), GFP_KERNEL);
+ mclk = devm_kzalloc(dev, sizeof(*mclk), GFP_KERNEL);
if (!mclk)
return -ENOMEM;
config SND_SUN50I_CODEC_ANALOG
tristate "Allwinner sun50i Codec Analog Controls Support"
depends on (ARM64 && ARCH_SUNXI) || COMPILE_TEST
- select SND_SUNXI_ADDA_PR_REGMAP
+ select SND_SUN8I_ADDA_PR_REGMAP
help
Say Y or M if you want to add support for the analog controls for
the codec embedded in Allwinner A64 SoC.
{ "Right Digital DAC Mixer", "AIF1 Slot 0 Digital DAC Playback Switch",
"AIF1 Slot 0 Right"},
- /* ADC routes */
+ /* ADC Routes */
+ { "AIF1 Slot 0 Right ADC", NULL, "ADC" },
+ { "AIF1 Slot 0 Left ADC", NULL, "ADC" },
+
+ /* ADC Mixer Routes */
{ "Left Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
"AIF1 Slot 0 Left ADC" },
{ "Right Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
static int sun8i_codec_remove(struct platform_device *pdev)
{
- struct snd_soc_card *card = platform_get_drvdata(pdev);
- struct sun8i_codec *scodec = snd_soc_card_get_drvdata(card);
-
pm_runtime_disable(&pdev->dev);
if (!pm_runtime_status_suspended(&pdev->dev))
sun8i_codec_runtime_suspend(&pdev->dev);
- clk_disable_unprepare(scodec->clk_module);
- clk_disable_unprepare(scodec->clk_bus);
-
return 0;
}
runtime->hw = snd_cs4231_playback;
err = snd_cs4231_open(chip, CS4231_MODE_PLAY);
- if (err < 0) {
- snd_free_pages(runtime->dma_area, runtime->dma_bytes);
+ if (err < 0)
return err;
- }
chip->playback_substream = substream;
chip->p_periods_sent = 0;
snd_pcm_set_sync(substream);
runtime->hw = snd_cs4231_capture;
err = snd_cs4231_open(chip, CS4231_MODE_RECORD);
- if (err < 0) {
- snd_free_pages(runtime->dma_area, runtime->dma_bytes);
+ if (err < 0)
return err;
- }
chip->capture_substream = substream;
chip->c_periods_sent = 0;
snd_pcm_set_sync(substream);
.ifnum = QUIRK_NO_INTERFACE
}
},
+/* Dell WD19 Dock */
+{
+ USB_DEVICE(0x0bda, 0x402e),
+ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+ .vendor_name = "Dell",
+ .product_name = "WD19 Dock",
+ .profile_name = "Dell-WD15-Dock",
+ .ifnum = QUIRK_NO_INTERFACE
+ }
+},
#undef USB_DEVICE_VENDOR_SPEC
#define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */
#define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */
#define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */
+#define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */
+#define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */
/* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
#define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
SEE ALSO
========
- **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool**\ (8),
+ **bpftool-prog**\ (8),
+ **bpftool-map**\ (8),
+ **bpftool-net**\ (8),
+ **bpftool-perf**\ (8)
SEE ALSO
========
- **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool**\ (8),
+ **bpftool-prog**\ (8),
+ **bpftool-cgroup**\ (8),
+ **bpftool-net**\ (8),
+ **bpftool-perf**\ (8)
SEE ALSO
========
- **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool**\ (8),
+ **bpftool-prog**\ (8),
+ **bpftool-map**\ (8),
+ **bpftool-cgroup**\ (8),
+ **bpftool-perf**\ (8)
SEE ALSO
========
- **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool**\ (8),
+ **bpftool-prog**\ (8),
+ **bpftool-map**\ (8),
+ **bpftool-cgroup**\ (8),
+ **bpftool-net**\ (8)
Generate human-readable JSON output. Implies **-j**.
-f, --bpffs
- Show file names of pinned programs.
+ When showing BPF programs, show file names of pinned
+ programs.
EXAMPLES
========
SEE ALSO
========
- **bpftool**\ (8), **bpftool-map**\ (8), **bpftool-cgroup**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool**\ (8),
+ **bpftool-map**\ (8),
+ **bpftool-cgroup**\ (8),
+ **bpftool-net**\ (8),
+ **bpftool-perf**\ (8)
SEE ALSO
========
- **bpftool-map**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8)
- **bpftool-perf**\ (8), **bpftool-net**\ (8)
+ **bpf**\ (2),
+ **bpf-helpers**\ (7),
+ **bpftool-prog**\ (8),
+ **bpftool-map**\ (8),
+ **bpftool-cgroup**\ (8),
+ **bpftool-net**\ (8),
+ **bpftool-perf**\ (8)
return 0;
}
-int open_obj_pinned(char *path)
+int open_obj_pinned(char *path, bool quiet)
{
int fd;
fd = bpf_obj_get(path);
if (fd < 0) {
- p_err("bpf obj get (%s): %s", path,
- errno == EACCES && !is_bpffs(dirname(path)) ?
- "directory not in bpf file system (bpffs)" :
- strerror(errno));
+ if (!quiet)
+ p_err("bpf obj get (%s): %s", path,
+ errno == EACCES && !is_bpffs(dirname(path)) ?
+ "directory not in bpf file system (bpffs)" :
+ strerror(errno));
return -1;
}
enum bpf_obj_type type;
int fd;
- fd = open_obj_pinned(path);
+ fd = open_obj_pinned(path, false);
if (fd < 0)
return -1;
return NULL;
}
- while ((n = getline(&line, &line_n, fdi))) {
+ while ((n = getline(&line, &line_n, fdi)) > 0) {
char *value;
int len;
while ((ftse = fts_read(fts))) {
if (!(ftse->fts_info & FTS_F))
continue;
- fd = open_obj_pinned(ftse->fts_path);
+ fd = open_obj_pinned(ftse->fts_path, true);
if (fd < 0)
continue;
int get_fd_type(int fd);
const char *get_fd_type_name(enum bpf_obj_type type);
char *get_fdinfo(int fd, const char *key);
-int open_obj_pinned(char *path);
+int open_obj_pinned(char *path, bool quiet);
int open_obj_pinned_any(char *path, enum bpf_obj_type exp_type);
int do_pin_any(int argc, char **argv, int (*get_fd_by_id)(__u32));
int do_pin_fd(int fd, const char *name);
if (!hash_empty(prog_table.table)) {
struct pinned_obj *obj;
- printf("\n");
hash_for_each_possible(prog_table.table, obj, hash, info->id) {
if (obj->id == info->id)
- printf("\tpinned %s\n", obj->path);
+ printf("\n\tpinned %s", obj->path);
}
}
}
NEXT_ARG();
} else if (is_prefix(*argv, "map")) {
+ void *new_map_replace;
char *endptr, *name;
int fd;
if (fd < 0)
goto err_free_reuse_maps;
- map_replace = reallocarray(map_replace, old_map_fds + 1,
- sizeof(*map_replace));
- if (!map_replace) {
+ new_map_replace = reallocarray(map_replace,
+ old_map_fds + 1,
+ sizeof(*map_replace));
+ if (!new_map_replace) {
p_err("mem alloc failed");
goto err_free_reuse_maps;
}
+ map_replace = new_map_replace;
+
map_replace[old_map_fds].idx = idx;
map_replace[old_map_fds].name = name;
map_replace[old_map_fds].fd = fd;
dwarf_getlocations \
fortify-source \
sync-compare-and-swap \
+ get_current_dir_name \
glibc \
gtk2 \
gtk2-infobar \
test-dwarf_getlocations.bin \
test-fortify-source.bin \
test-sync-compare-and-swap.bin \
+ test-get_current_dir_name.bin \
test-glibc.bin \
test-gtk2.bin \
test-gtk2-infobar.bin \
$(OUTPUT)test-libelf.bin:
$(BUILD) -lelf
+$(OUTPUT)test-get_current_dir_name.bin:
+ $(BUILD)
+
$(OUTPUT)test-glibc.bin:
$(BUILD)
# include "test-libelf-mmap.c"
#undef main
+#define main main_test_get_current_dir_name
+# include "test-get_current_dir_name.c"
+#undef main
+
#define main main_test_glibc
# include "test-glibc.c"
#undef main
main_test_hello();
main_test_libelf();
main_test_libelf_mmap();
+ main_test_get_current_dir_name();
main_test_glibc();
main_test_dwarf();
main_test_dwarf_getlocations();
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <stdlib.h>
+
+int main(void)
+{
+ free(get_current_dir_name());
+ return 0;
+}
#define TIOCGPTLCK _IOR('T', 0x39, int) /* Get Pty lock state */
#define TIOCGEXCL _IOR('T', 0x40, int) /* Get exclusive mode state */
#define TIOCGPTPEER _IO('T', 0x41) /* Safely open the slave */
+#define TIOCGISO7816 _IOR('T', 0x42, struct serial_iso7816)
+#define TIOCSISO7816 _IOWR('T', 0x43, struct serial_iso7816)
#define FIONCLEX 0x5450
#define FIOCLEX 0x5451
*/
#define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51
+/*
+ * Once upon a time we supposed that writes through the GGTT would be
+ * immediately in physical memory (once flushed out of the CPU path). However,
+ * on a few different processors and chipsets, this is not necessarily the case
+ * as the writes appear to be buffered internally. Thus a read of the backing
+ * storage (physical memory) via a different path (with different physical tags
+ * to the indirect write via the GGTT) will see stale values from before
+ * the GGTT write. Inside the kernel, we can for the most part keep track of
+ * the different read/write domains in use (e.g. set-domain), but the assumption
+ * of coherency is baked into the ABI, hence reporting its true state in this
+ * parameter.
+ *
+ * Reports true when writes via mmap_gtt are immediately visible following an
+ * lfence to flush the WCB.
+ *
+ * Reports false when writes via mmap_gtt are indeterminately delayed in an
+ * internal buffer and are _not_ immediately visible to third parties accessing
+ * directly via mmap_cpu/mmap_wc. Using mmap_gtt as part of an IPC
+ * communication channel when this parameter reports false is strongly discouraged.
+ */
+#define I915_PARAM_MMAP_GTT_COHERENT 52
+
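As a rough illustration of how user space consumes this, the following minimal
sketch queries the parameter through the standard GETPARAM ioctl; the device
node path is an assumption, the libdrm/kernel uapi headers are assumed to be
installed, and error handling is reduced to a bail-out::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    int main(void)
    {
        int coherent = 0;
        struct drm_i915_getparam gp = {
            .param = I915_PARAM_MMAP_GTT_COHERENT,
            .value = &coherent,
        };
        int fd = open("/dev/dri/card0", O_RDWR);  /* assumed device node */

        if (fd < 0 || ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
            return 1;  /* older kernel: assume GGTT mmaps are not coherent */

        printf("mmap_gtt writes %s coherent\n", coherent ? "are" : "are not");
        return 0;
    }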
typedef struct drm_i915_getparam {
__s32 param;
/*
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef __LINUX_PKT_CLS_H
+#define __LINUX_PKT_CLS_H
+
+#include <linux/types.h>
+#include <linux/pkt_sched.h>
+
+#define TC_COOKIE_MAX_SIZE 16
+
+/* Action attributes */
+enum {
+ TCA_ACT_UNSPEC,
+ TCA_ACT_KIND,
+ TCA_ACT_OPTIONS,
+ TCA_ACT_INDEX,
+ TCA_ACT_STATS,
+ TCA_ACT_PAD,
+ TCA_ACT_COOKIE,
+ __TCA_ACT_MAX
+};
+
+#define TCA_ACT_MAX __TCA_ACT_MAX
+#define TCA_OLD_COMPAT (TCA_ACT_MAX+1)
+#define TCA_ACT_MAX_PRIO 32
+#define TCA_ACT_BIND 1
+#define TCA_ACT_NOBIND 0
+#define TCA_ACT_UNBIND 1
+#define TCA_ACT_NOUNBIND 0
+#define TCA_ACT_REPLACE 1
+#define TCA_ACT_NOREPLACE 0
+
+#define TC_ACT_UNSPEC (-1)
+#define TC_ACT_OK 0
+#define TC_ACT_RECLASSIFY 1
+#define TC_ACT_SHOT 2
+#define TC_ACT_PIPE 3
+#define TC_ACT_STOLEN 4
+#define TC_ACT_QUEUED 5
+#define TC_ACT_REPEAT 6
+#define TC_ACT_REDIRECT 7
+#define TC_ACT_TRAP 8 /* For hw path, this means "trap to cpu"
+ * and don't further process the frame
+ * in hardware. For sw path, this is
+ * equivalent of TC_ACT_STOLEN - drop
+ * the skb and act like everything
+ * is alright.
+ */
+#define TC_ACT_VALUE_MAX TC_ACT_TRAP
+
+/* There is a special kind of action called an "extended action",
+ * which needs a value parameter. These have a local opcode located in
+ * the highest nibble, starting from 1. The rest of the bits
+ * are used to carry the value. These two parts together make
+ * a combined opcode (a worked example follows this header listing).
+ */
+#define __TC_ACT_EXT_SHIFT 28
+#define __TC_ACT_EXT(local) ((local) << __TC_ACT_EXT_SHIFT)
+#define TC_ACT_EXT_VAL_MASK ((1 << __TC_ACT_EXT_SHIFT) - 1)
+#define TC_ACT_EXT_OPCODE(combined) ((combined) & (~TC_ACT_EXT_VAL_MASK))
+#define TC_ACT_EXT_CMP(combined, opcode) (TC_ACT_EXT_OPCODE(combined) == opcode)
+
+#define TC_ACT_JUMP __TC_ACT_EXT(1)
+#define TC_ACT_GOTO_CHAIN __TC_ACT_EXT(2)
+#define TC_ACT_EXT_OPCODE_MAX TC_ACT_GOTO_CHAIN
+
+/* Action type identifiers */
+enum {
+ TCA_ID_UNSPEC=0,
+ TCA_ID_POLICE=1,
+ /* other actions go here */
+ __TCA_ID_MAX=255
+};
+
+#define TCA_ID_MAX __TCA_ID_MAX
+
+struct tc_police {
+ __u32 index;
+ int action;
+#define TC_POLICE_UNSPEC TC_ACT_UNSPEC
+#define TC_POLICE_OK TC_ACT_OK
+#define TC_POLICE_RECLASSIFY TC_ACT_RECLASSIFY
+#define TC_POLICE_SHOT TC_ACT_SHOT
+#define TC_POLICE_PIPE TC_ACT_PIPE
+
+ __u32 limit;
+ __u32 burst;
+ __u32 mtu;
+ struct tc_ratespec rate;
+ struct tc_ratespec peakrate;
+ int refcnt;
+ int bindcnt;
+ __u32 capab;
+};
+
+struct tcf_t {
+ __u64 install;
+ __u64 lastuse;
+ __u64 expires;
+ __u64 firstuse;
+};
+
+struct tc_cnt {
+ int refcnt;
+ int bindcnt;
+};
+
+#define tc_gen \
+ __u32 index; \
+ __u32 capab; \
+ int action; \
+ int refcnt; \
+ int bindcnt
+
+enum {
+ TCA_POLICE_UNSPEC,
+ TCA_POLICE_TBF,
+ TCA_POLICE_RATE,
+ TCA_POLICE_PEAKRATE,
+ TCA_POLICE_AVRATE,
+ TCA_POLICE_RESULT,
+ TCA_POLICE_TM,
+ TCA_POLICE_PAD,
+ __TCA_POLICE_MAX
+#define TCA_POLICE_RESULT TCA_POLICE_RESULT
+};
+
+#define TCA_POLICE_MAX (__TCA_POLICE_MAX - 1)
+
+/* tca flags definitions */
+#define TCA_CLS_FLAGS_SKIP_HW (1 << 0) /* don't offload filter to HW */
+#define TCA_CLS_FLAGS_SKIP_SW (1 << 1) /* don't use filter in SW */
+#define TCA_CLS_FLAGS_IN_HW (1 << 2) /* filter is offloaded to HW */
+#define TCA_CLS_FLAGS_NOT_IN_HW (1 << 3) /* filter isn't offloaded to HW */
+#define TCA_CLS_FLAGS_VERBOSE (1 << 4) /* verbose logging */
+
+/* U32 filters */
+
+#define TC_U32_HTID(h) ((h)&0xFFF00000)
+#define TC_U32_USERHTID(h) (TC_U32_HTID(h)>>20)
+#define TC_U32_HASH(h) (((h)>>12)&0xFF)
+#define TC_U32_NODE(h) ((h)&0xFFF)
+#define TC_U32_KEY(h) ((h)&0xFFFFF)
+#define TC_U32_UNSPEC 0
+#define TC_U32_ROOT (0xFFF00000)
+
+enum {
+ TCA_U32_UNSPEC,
+ TCA_U32_CLASSID,
+ TCA_U32_HASH,
+ TCA_U32_LINK,
+ TCA_U32_DIVISOR,
+ TCA_U32_SEL,
+ TCA_U32_POLICE,
+ TCA_U32_ACT,
+ TCA_U32_INDEV,
+ TCA_U32_PCNT,
+ TCA_U32_MARK,
+ TCA_U32_FLAGS,
+ TCA_U32_PAD,
+ __TCA_U32_MAX
+};
+
+#define TCA_U32_MAX (__TCA_U32_MAX - 1)
+
+struct tc_u32_key {
+ __be32 mask;
+ __be32 val;
+ int off;
+ int offmask;
+};
+
+struct tc_u32_sel {
+ unsigned char flags;
+ unsigned char offshift;
+ unsigned char nkeys;
+
+ __be16 offmask;
+ __u16 off;
+ short offoff;
+
+ short hoff;
+ __be32 hmask;
+ struct tc_u32_key keys[0];
+};
+
+struct tc_u32_mark {
+ __u32 val;
+ __u32 mask;
+ __u32 success;
+};
+
+struct tc_u32_pcnt {
+ __u64 rcnt;
+ __u64 rhit;
+ __u64 kcnts[0];
+};
+
+/* Flags */
+
+#define TC_U32_TERMINAL 1
+#define TC_U32_OFFSET 2
+#define TC_U32_VAROFFSET 4
+#define TC_U32_EAT 8
+
+#define TC_U32_MAXDEPTH 8
+
+
+/* RSVP filter */
+
+enum {
+ TCA_RSVP_UNSPEC,
+ TCA_RSVP_CLASSID,
+ TCA_RSVP_DST,
+ TCA_RSVP_SRC,
+ TCA_RSVP_PINFO,
+ TCA_RSVP_POLICE,
+ TCA_RSVP_ACT,
+ __TCA_RSVP_MAX
+};
+
+#define TCA_RSVP_MAX (__TCA_RSVP_MAX - 1)
+
+struct tc_rsvp_gpi {
+ __u32 key;
+ __u32 mask;
+ int offset;
+};
+
+struct tc_rsvp_pinfo {
+ struct tc_rsvp_gpi dpi;
+ struct tc_rsvp_gpi spi;
+ __u8 protocol;
+ __u8 tunnelid;
+ __u8 tunnelhdr;
+ __u8 pad;
+};
+
+/* ROUTE filter */
+
+enum {
+ TCA_ROUTE4_UNSPEC,
+ TCA_ROUTE4_CLASSID,
+ TCA_ROUTE4_TO,
+ TCA_ROUTE4_FROM,
+ TCA_ROUTE4_IIF,
+ TCA_ROUTE4_POLICE,
+ TCA_ROUTE4_ACT,
+ __TCA_ROUTE4_MAX
+};
+
+#define TCA_ROUTE4_MAX (__TCA_ROUTE4_MAX - 1)
+
+
+/* FW filter */
+
+enum {
+ TCA_FW_UNSPEC,
+ TCA_FW_CLASSID,
+ TCA_FW_POLICE,
+ TCA_FW_INDEV, /* used by CONFIG_NET_CLS_IND */
+ TCA_FW_ACT, /* used by CONFIG_NET_CLS_ACT */
+ TCA_FW_MASK,
+ __TCA_FW_MAX
+};
+
+#define TCA_FW_MAX (__TCA_FW_MAX - 1)
+
+/* TC index filter */
+
+enum {
+ TCA_TCINDEX_UNSPEC,
+ TCA_TCINDEX_HASH,
+ TCA_TCINDEX_MASK,
+ TCA_TCINDEX_SHIFT,
+ TCA_TCINDEX_FALL_THROUGH,
+ TCA_TCINDEX_CLASSID,
+ TCA_TCINDEX_POLICE,
+ TCA_TCINDEX_ACT,
+ __TCA_TCINDEX_MAX
+};
+
+#define TCA_TCINDEX_MAX (__TCA_TCINDEX_MAX - 1)
+
+/* Flow filter */
+
+enum {
+ FLOW_KEY_SRC,
+ FLOW_KEY_DST,
+ FLOW_KEY_PROTO,
+ FLOW_KEY_PROTO_SRC,
+ FLOW_KEY_PROTO_DST,
+ FLOW_KEY_IIF,
+ FLOW_KEY_PRIORITY,
+ FLOW_KEY_MARK,
+ FLOW_KEY_NFCT,
+ FLOW_KEY_NFCT_SRC,
+ FLOW_KEY_NFCT_DST,
+ FLOW_KEY_NFCT_PROTO_SRC,
+ FLOW_KEY_NFCT_PROTO_DST,
+ FLOW_KEY_RTCLASSID,
+ FLOW_KEY_SKUID,
+ FLOW_KEY_SKGID,
+ FLOW_KEY_VLAN_TAG,
+ FLOW_KEY_RXHASH,
+ __FLOW_KEY_MAX,
+};
+
+#define FLOW_KEY_MAX (__FLOW_KEY_MAX - 1)
+
+enum {
+ FLOW_MODE_MAP,
+ FLOW_MODE_HASH,
+};
+
+enum {
+ TCA_FLOW_UNSPEC,
+ TCA_FLOW_KEYS,
+ TCA_FLOW_MODE,
+ TCA_FLOW_BASECLASS,
+ TCA_FLOW_RSHIFT,
+ TCA_FLOW_ADDEND,
+ TCA_FLOW_MASK,
+ TCA_FLOW_XOR,
+ TCA_FLOW_DIVISOR,
+ TCA_FLOW_ACT,
+ TCA_FLOW_POLICE,
+ TCA_FLOW_EMATCHES,
+ TCA_FLOW_PERTURB,
+ __TCA_FLOW_MAX
+};
+
+#define TCA_FLOW_MAX (__TCA_FLOW_MAX - 1)
+
+/* Basic filter */
+
+enum {
+ TCA_BASIC_UNSPEC,
+ TCA_BASIC_CLASSID,
+ TCA_BASIC_EMATCHES,
+ TCA_BASIC_ACT,
+ TCA_BASIC_POLICE,
+ __TCA_BASIC_MAX
+};
+
+#define TCA_BASIC_MAX (__TCA_BASIC_MAX - 1)
+
+
+/* Cgroup classifier */
+
+enum {
+ TCA_CGROUP_UNSPEC,
+ TCA_CGROUP_ACT,
+ TCA_CGROUP_POLICE,
+ TCA_CGROUP_EMATCHES,
+ __TCA_CGROUP_MAX,
+};
+
+#define TCA_CGROUP_MAX (__TCA_CGROUP_MAX - 1)
+
+/* BPF classifier */
+
+#define TCA_BPF_FLAG_ACT_DIRECT (1 << 0)
+
+enum {
+ TCA_BPF_UNSPEC,
+ TCA_BPF_ACT,
+ TCA_BPF_POLICE,
+ TCA_BPF_CLASSID,
+ TCA_BPF_OPS_LEN,
+ TCA_BPF_OPS,
+ TCA_BPF_FD,
+ TCA_BPF_NAME,
+ TCA_BPF_FLAGS,
+ TCA_BPF_FLAGS_GEN,
+ TCA_BPF_TAG,
+ TCA_BPF_ID,
+ __TCA_BPF_MAX,
+};
+
+#define TCA_BPF_MAX (__TCA_BPF_MAX - 1)
+
+/* Flower classifier */
+
+enum {
+ TCA_FLOWER_UNSPEC,
+ TCA_FLOWER_CLASSID,
+ TCA_FLOWER_INDEV,
+ TCA_FLOWER_ACT,
+ TCA_FLOWER_KEY_ETH_DST, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ETH_DST_MASK, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ETH_SRC, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ETH_SRC_MASK, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ETH_TYPE, /* be16 */
+ TCA_FLOWER_KEY_IP_PROTO, /* u8 */
+ TCA_FLOWER_KEY_IPV4_SRC, /* be32 */
+ TCA_FLOWER_KEY_IPV4_SRC_MASK, /* be32 */
+ TCA_FLOWER_KEY_IPV4_DST, /* be32 */
+ TCA_FLOWER_KEY_IPV4_DST_MASK, /* be32 */
+ TCA_FLOWER_KEY_IPV6_SRC, /* struct in6_addr */
+ TCA_FLOWER_KEY_IPV6_SRC_MASK, /* struct in6_addr */
+ TCA_FLOWER_KEY_IPV6_DST, /* struct in6_addr */
+ TCA_FLOWER_KEY_IPV6_DST_MASK, /* struct in6_addr */
+ TCA_FLOWER_KEY_TCP_SRC, /* be16 */
+ TCA_FLOWER_KEY_TCP_DST, /* be16 */
+ TCA_FLOWER_KEY_UDP_SRC, /* be16 */
+ TCA_FLOWER_KEY_UDP_DST, /* be16 */
+
+ TCA_FLOWER_FLAGS,
+ TCA_FLOWER_KEY_VLAN_ID, /* be16 */
+ TCA_FLOWER_KEY_VLAN_PRIO, /* u8 */
+ TCA_FLOWER_KEY_VLAN_ETH_TYPE, /* be16 */
+
+ TCA_FLOWER_KEY_ENC_KEY_ID, /* be32 */
+ TCA_FLOWER_KEY_ENC_IPV4_SRC, /* be32 */
+ TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK,/* be32 */
+ TCA_FLOWER_KEY_ENC_IPV4_DST, /* be32 */
+ TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,/* be32 */
+ TCA_FLOWER_KEY_ENC_IPV6_SRC, /* struct in6_addr */
+ TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK,/* struct in6_addr */
+ TCA_FLOWER_KEY_ENC_IPV6_DST, /* struct in6_addr */
+ TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,/* struct in6_addr */
+
+ TCA_FLOWER_KEY_TCP_SRC_MASK, /* be16 */
+ TCA_FLOWER_KEY_TCP_DST_MASK, /* be16 */
+ TCA_FLOWER_KEY_UDP_SRC_MASK, /* be16 */
+ TCA_FLOWER_KEY_UDP_DST_MASK, /* be16 */
+ TCA_FLOWER_KEY_SCTP_SRC_MASK, /* be16 */
+ TCA_FLOWER_KEY_SCTP_DST_MASK, /* be16 */
+
+ TCA_FLOWER_KEY_SCTP_SRC, /* be16 */
+ TCA_FLOWER_KEY_SCTP_DST, /* be16 */
+
+ TCA_FLOWER_KEY_ENC_UDP_SRC_PORT, /* be16 */
+ TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK, /* be16 */
+ TCA_FLOWER_KEY_ENC_UDP_DST_PORT, /* be16 */
+ TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK, /* be16 */
+
+ TCA_FLOWER_KEY_FLAGS, /* be32 */
+ TCA_FLOWER_KEY_FLAGS_MASK, /* be32 */
+
+ TCA_FLOWER_KEY_ICMPV4_CODE, /* u8 */
+ TCA_FLOWER_KEY_ICMPV4_CODE_MASK,/* u8 */
+ TCA_FLOWER_KEY_ICMPV4_TYPE, /* u8 */
+ TCA_FLOWER_KEY_ICMPV4_TYPE_MASK,/* u8 */
+ TCA_FLOWER_KEY_ICMPV6_CODE, /* u8 */
+ TCA_FLOWER_KEY_ICMPV6_CODE_MASK,/* u8 */
+ TCA_FLOWER_KEY_ICMPV6_TYPE, /* u8 */
+ TCA_FLOWER_KEY_ICMPV6_TYPE_MASK,/* u8 */
+
+ TCA_FLOWER_KEY_ARP_SIP, /* be32 */
+ TCA_FLOWER_KEY_ARP_SIP_MASK, /* be32 */
+ TCA_FLOWER_KEY_ARP_TIP, /* be32 */
+ TCA_FLOWER_KEY_ARP_TIP_MASK, /* be32 */
+ TCA_FLOWER_KEY_ARP_OP, /* u8 */
+ TCA_FLOWER_KEY_ARP_OP_MASK, /* u8 */
+ TCA_FLOWER_KEY_ARP_SHA, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ARP_SHA_MASK, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ARP_THA, /* ETH_ALEN */
+ TCA_FLOWER_KEY_ARP_THA_MASK, /* ETH_ALEN */
+
+ TCA_FLOWER_KEY_MPLS_TTL, /* u8 - 8 bits */
+ TCA_FLOWER_KEY_MPLS_BOS, /* u8 - 1 bit */
+ TCA_FLOWER_KEY_MPLS_TC, /* u8 - 3 bits */
+ TCA_FLOWER_KEY_MPLS_LABEL, /* be32 - 20 bits */
+
+ TCA_FLOWER_KEY_TCP_FLAGS, /* be16 */
+ TCA_FLOWER_KEY_TCP_FLAGS_MASK, /* be16 */
+
+ TCA_FLOWER_KEY_IP_TOS, /* u8 */
+ TCA_FLOWER_KEY_IP_TOS_MASK, /* u8 */
+ TCA_FLOWER_KEY_IP_TTL, /* u8 */
+ TCA_FLOWER_KEY_IP_TTL_MASK, /* u8 */
+
+ TCA_FLOWER_KEY_CVLAN_ID, /* be16 */
+ TCA_FLOWER_KEY_CVLAN_PRIO, /* u8 */
+ TCA_FLOWER_KEY_CVLAN_ETH_TYPE, /* be16 */
+
+ TCA_FLOWER_KEY_ENC_IP_TOS, /* u8 */
+ TCA_FLOWER_KEY_ENC_IP_TOS_MASK, /* u8 */
+ TCA_FLOWER_KEY_ENC_IP_TTL, /* u8 */
+ TCA_FLOWER_KEY_ENC_IP_TTL_MASK, /* u8 */
+
+ TCA_FLOWER_KEY_ENC_OPTS,
+ TCA_FLOWER_KEY_ENC_OPTS_MASK,
+
+ TCA_FLOWER_IN_HW_COUNT,
+
+ __TCA_FLOWER_MAX,
+};
+
+#define TCA_FLOWER_MAX (__TCA_FLOWER_MAX - 1)
+
+enum {
+ TCA_FLOWER_KEY_ENC_OPTS_UNSPEC,
+ TCA_FLOWER_KEY_ENC_OPTS_GENEVE, /* Nested
+ * TCA_FLOWER_KEY_ENC_OPT_GENEVE_
+ * attributes
+ */
+ __TCA_FLOWER_KEY_ENC_OPTS_MAX,
+};
+
+#define TCA_FLOWER_KEY_ENC_OPTS_MAX (__TCA_FLOWER_KEY_ENC_OPTS_MAX - 1)
+
+enum {
+ TCA_FLOWER_KEY_ENC_OPT_GENEVE_UNSPEC,
+ TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS, /* u16 */
+ TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE, /* u8 */
+ TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA, /* 4 to 128 bytes */
+
+ __TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX,
+};
+
+#define TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX \
+ (__TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX - 1)
+
+enum {
+ TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT = (1 << 0),
+ TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST = (1 << 1),
+};
+
+/* Match-all classifier */
+
+enum {
+ TCA_MATCHALL_UNSPEC,
+ TCA_MATCHALL_CLASSID,
+ TCA_MATCHALL_ACT,
+ TCA_MATCHALL_FLAGS,
+ __TCA_MATCHALL_MAX,
+};
+
+#define TCA_MATCHALL_MAX (__TCA_MATCHALL_MAX - 1)
+
+/* Extended Matches */
+
+struct tcf_ematch_tree_hdr {
+ __u16 nmatches;
+ __u16 progid;
+};
+
+enum {
+ TCA_EMATCH_TREE_UNSPEC,
+ TCA_EMATCH_TREE_HDR,
+ TCA_EMATCH_TREE_LIST,
+ __TCA_EMATCH_TREE_MAX
+};
+#define TCA_EMATCH_TREE_MAX (__TCA_EMATCH_TREE_MAX - 1)
+
+struct tcf_ematch_hdr {
+ __u16 matchid;
+ __u16 kind;
+ __u16 flags;
+ __u16 pad; /* currently unused */
+};
+
+/* 0 1
+ * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+ * +-----------------------+-+-+---+
+ * | Unused |S|I| R |
+ * +-----------------------+-+-+---+
+ *
+ * R(2) ::= relation to next ematch
+ * where: 0 0 END (last ematch)
+ * 0 1 AND
+ * 1 0 OR
+ * 1 1 Unused (invalid)
+ * I(1) ::= invert result
+ * S(1) ::= simple payload
+ */
+#define TCF_EM_REL_END 0
+#define TCF_EM_REL_AND (1<<0)
+#define TCF_EM_REL_OR (1<<1)
+#define TCF_EM_INVERT (1<<2)
+#define TCF_EM_SIMPLE (1<<3)
+
+#define TCF_EM_REL_MASK 3
+#define TCF_EM_REL_VALID(v) (((v) & TCF_EM_REL_MASK) != TCF_EM_REL_MASK)
+
+enum {
+ TCF_LAYER_LINK,
+ TCF_LAYER_NETWORK,
+ TCF_LAYER_TRANSPORT,
+ __TCF_LAYER_MAX
+};
+#define TCF_LAYER_MAX (__TCF_LAYER_MAX - 1)
+
+/* Ematch type assignments
+ * 1..32767 Reserved for ematches inside kernel tree
+ * 32768..65535 Free to use, not reliable
+ */
+#define TCF_EM_CONTAINER 0
+#define TCF_EM_CMP 1
+#define TCF_EM_NBYTE 2
+#define TCF_EM_U32 3
+#define TCF_EM_META 4
+#define TCF_EM_TEXT 5
+#define TCF_EM_VLAN 6
+#define TCF_EM_CANID 7
+#define TCF_EM_IPSET 8
+#define TCF_EM_IPT 9
+#define TCF_EM_MAX 9
+
+enum {
+ TCF_EM_PROG_TC
+};
+
+enum {
+ TCF_EM_OPND_EQ,
+ TCF_EM_OPND_GT,
+ TCF_EM_OPND_LT
+};
+
+#endif
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __LINUX_TC_BPF_H
+#define __LINUX_TC_BPF_H
+
+#include <linux/pkt_cls.h>
+
+#define TCA_ACT_BPF 13
+
+struct tc_act_bpf {
+ tc_gen;
+};
+
+enum {
+ TCA_ACT_BPF_UNSPEC,
+ TCA_ACT_BPF_TM,
+ TCA_ACT_BPF_PARMS,
+ TCA_ACT_BPF_OPS_LEN,
+ TCA_ACT_BPF_OPS,
+ TCA_ACT_BPF_FD,
+ TCA_ACT_BPF_NAME,
+ TCA_ACT_BPF_PAD,
+ TCA_ACT_BPF_TAG,
+ TCA_ACT_BPF_ID,
+ __TCA_ACT_BPF_MAX,
+};
+#define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
+
+#endif
#include "elf.h"
#include "warn.h"
+#define MAX_NAME_LEN 128
+
struct section *find_section_by_name(struct elf *elf, const char *name)
{
struct section *sec;
/* Create parent/child links for any cold subfunctions */
list_for_each_entry(sec, &elf->sections, list) {
list_for_each_entry(sym, &sec->symbol_list, list) {
+ char pname[MAX_NAME_LEN + 1];
+ size_t pnamelen;
if (sym->type != STT_FUNC)
continue;
sym->pfunc = sym->cfunc = sym;
if (!coldstr)
continue;
- coldstr[0] = '\0';
- pfunc = find_symbol_by_name(elf, sym->name);
- coldstr[0] = '.';
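+ /* use a copy of the name; truncating sym->name in place would corrupt later lookups */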
+ pnamelen = coldstr - sym->name;
+ if (pnamelen > MAX_NAME_LEN) {
+ WARN("%s(): parent function name exceeds maximum length of %d characters",
+ sym->name, MAX_NAME_LEN);
+ return -1;
+ }
+
+ strncpy(pname, sym->name, pnamelen);
+ pname[pnamelen] = '\0';
+ pfunc = find_symbol_by_name(elf, pname);
if (!pfunc) {
WARN("%s(): can't find parent function",
sym->name);
- goto err;
+ return -1;
}
sym->pfunc = pfunc;
endif
endif
+ifeq ($(feature-get_current_dir_name), 1)
+ CFLAGS += -DHAVE_GET_CURRENT_DIR_NAME
+endif
+
ifdef NO_LIBELF
NO_DWARF := 1
NO_DEMANGLE := 1
config=0
sample_period=*
sample_type=263
-read_format=0
+read_format=0|4
disabled=1
inherit=1
pinned=0
"TCSETSW2", "TCSETSF2", "TIOCGRS48", "TIOCSRS485", "TIOCGPTN", "TIOCSPTLCK",
"TIOCGDEV", "TCSETX", "TCSETXF", "TCSETXW", "TIOCSIG", "TIOCVHANGUP", "TIOCGPKT",
"TIOCGPTLCK", [_IOC_NR(TIOCGEXCL)] = "TIOCGEXCL", "TIOCGPTPEER",
+ "TIOCGISO7816", "TIOCSISO7816",
[_IOC_NR(FIONCLEX)] = "FIONCLEX", "FIOCLEX", "FIOASYNC", "TIOCSERCONFIG",
"TIOCSERGWILD", "TIOCSERSWILD", "TIOCGLCKTRMIOS", "TIOCSLCKTRMIOS",
"TIOCSERGSTRUCT", "TIOCSERGETLSR", "TIOCSERGETMULTI", "TIOCSERSETMULTI",
libperf-y += evsel.o
libperf-y += evsel_fprintf.o
libperf-y += find_bit.o
+libperf-y += get_current_dir_name.o
libperf-y += kallsyms.o
libperf-y += levenshtein.o
libperf-y += llvm-utils.o
attr->exclude_user = 1;
}
- if (evsel->own_cpus)
+ if (evsel->own_cpus || evsel->unit)
evsel->attr.read_format |= PERF_FORMAT_ID;
/*
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+//
+#ifndef HAVE_GET_CURRENT_DIR_NAME
+#include "util.h"
+#include <unistd.h>
+#include <stdlib.h>
+
+/* Android's 'bionic' library, for one, doesn't have this */
+
+char *get_current_dir_name(void)
+{
+ char pwd[PATH_MAX];
+
+ return getcwd(pwd, sizeof(pwd)) == NULL ? NULL : strdup(pwd);
+}
+#endif // HAVE_GET_CURRENT_DIR_NAME
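
A usage sketch (illustrative, not part of the patch): like the glibc
version, this fallback returns heap memory that the caller must free.

	#define _GNU_SOURCE	/* glibc declares get_current_dir_name() */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		char *cwd = get_current_dir_name();

		if (!cwd)
			return 1;
		printf("cwd: %s\n", cwd);
		free(cwd);	/* strdup()'d in the fallback, too */
		return 0;
	}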
#include <stdio.h>
#include <string.h>
#include <unistd.h>
+#include <asm/bug.h>
struct namespaces *namespaces__new(struct namespaces_event *event)
{
char curpath[PATH_MAX];
int oldns = -1;
int newns = -1;
+ char *oldcwd = NULL;
if (nc == NULL)
return;
if (snprintf(curpath, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX)
return;
+ oldcwd = get_current_dir_name();
+ if (!oldcwd)
+ return;
+
oldns = open(curpath, O_RDONLY);
if (oldns < 0)
- return;
+ goto errout;
newns = open(nsi->mntns_path, O_RDONLY);
if (newns < 0)
if (setns(newns, CLONE_NEWNS) < 0)
goto errout;
+ nc->oldcwd = oldcwd;
nc->oldns = oldns;
nc->newns = newns;
return;
errout:
+ free(oldcwd);
if (oldns > -1)
close(oldns);
if (newns > -1)
void nsinfo__mountns_exit(struct nscookie *nc)
{
- if (nc == NULL || nc->oldns == -1 || nc->newns == -1)
+ if (nc == NULL || nc->oldns == -1 || nc->newns == -1 || !nc->oldcwd)
return;
setns(nc->oldns, CLONE_NEWNS);
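+	/* switching mount namespaces resets the cwd; restore the one saved on entry */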
+ if (nc->oldcwd) {
+ WARN_ON_ONCE(chdir(nc->oldcwd));
+ zfree(&nc->oldcwd);
+ }
+
if (nc->oldns > -1) {
close(nc->oldns);
nc->oldns = -1;
struct nscookie {
int oldns;
int newns;
+ char *oldcwd;
};
int nsinfo__init(struct nsinfo *nsi);
const char *perf_tip(const char *dirpath);
+#ifndef HAVE_GET_CURRENT_DIR_NAME
+char *get_current_dir_name(void);
+#endif
+
#ifndef HAVE_SCHED_GETCPU_SUPPORT
int sched_getcpu(void);
#endif
WARNINGS += $(call cc-supports,-Wdeclaration-after-statement)
WARNINGS += -Wshadow
-CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \
+override CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \
-DPACKAGE_BUGREPORT=\"$(PACKAGE_BUGREPORT)\" -D_GNU_SOURCE
UTIL_OBJS = utils/helpers/amd.o utils/helpers/msr.o \
LIB_OBJS = lib/cpufreq.o lib/cpupower.o lib/cpuidle.o
LIB_OBJS := $(addprefix $(OUTPUT),$(LIB_OBJS))
-CFLAGS += -pipe
+override CFLAGS += -pipe
ifeq ($(strip $(NLS)),true)
INSTALL_NLS += install-gmo
COMPILE_NLS += create-gmo
- CFLAGS += -DNLS
+ override CFLAGS += -DNLS
endif
ifeq ($(strip $(CPUFREQ_BENCH)),true)
UTIL_SRC += $(LIB_SRC)
endif
-CFLAGS += $(WARNINGS)
+override CFLAGS += $(WARNINGS)
ifeq ($(strip $(V)),false)
QUIET=@
# if DEBUG is enabled, then we do not strip or optimize
ifeq ($(strip $(DEBUG)),true)
- CFLAGS += -O1 -g -DDEBUG
+ override CFLAGS += -O1 -g -DDEBUG
STRIPCMD = /bin/true -Since_we_are_debugging
else
- CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
+ override CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
endif
ifeq ($(strip $(STATIC)),true)
LIBS = -L../ -L$(OUTPUT) -lm
OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o \
- $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/sysfs.o
+ $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/cpupower.o
else
LIBS = -L../ -L$(OUTPUT) -lm -lcpupower
OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o
default: all
$(OUTPUT)centrino-decode: ../i386/centrino-decode.c
- $(CC) $(CFLAGS) -o $@ $<
+ $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $<
$(OUTPUT)powernow-k8-decode: ../i386/powernow-k8-decode.c
- $(CC) $(CFLAGS) -o $@ $<
+ $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $<
all: $(OUTPUT)centrino-decode $(OUTPUT)powernow-k8-decode
snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s",
cpu, fname);
- return sysfs_read_file(path, buf, buflen);
+ return cpupower_read_sysfs(path, buf, buflen);
}
/* helper function to write a new value to a /sys file */
snprintf(path, sizeof(path), PATH_TO_CPU "cpuidle/%s", fname);
- return sysfs_read_file(path, buf, buflen);
+ return cpupower_read_sysfs(path, buf, buflen);
}
#include "cpupower.h"
#include "cpupower_intern.h"
-unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen)
+unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen)
{
int fd;
ssize_t numread;
snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/topology/%s",
cpu, fname);
- if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
+ if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0)
return -1;
*result = strtol(linebuf, &endp, 0);
if (endp == linebuf || errno == ERANGE)
#define MAX_LINE_LEN 4096
#define SYSFS_PATH_MAX 255
-unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen);
+unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen);
[6] = NFIT_DIMM_HANDLE(1, 0, 0, 0, 1),
};
-static unsigned long dimm_fail_cmd_flags[NUM_DCR];
-static int dimm_fail_cmd_code[NUM_DCR];
+static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)];
+static int dimm_fail_cmd_code[ARRAY_SIZE(handle)];
static const struct nd_intel_smart smart_def = {
.flags = ND_INTEL_SMART_HEALTH_VALID
unsigned long deadline;
spinlock_t lock;
} ars_state;
- struct device *dimm_dev[NUM_DCR];
+ struct device *dimm_dev[ARRAY_SIZE(handle)];
struct nd_intel_smart *smart;
struct nd_intel_smart_threshold *smart_threshold;
struct badrange badrange;
u32 nfit_handle = __to_nfit_memdev(nfit_mem)->device_handle;
int i;
- for (i = 0; i < NUM_DCR; i++)
+ for (i = 0; i < ARRAY_SIZE(handle); i++)
if (nfit_handle == handle[i])
dev_set_drvdata(nfit_test->dimm_dev[i],
nfit_mem);
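
The point of the change above is to size the per-DIMM arrays from the
handle[] table itself so they can never drift out of sync with it. A
standalone illustration of the kernel-style ARRAY_SIZE() macro (local
definition, hypothetical data):

	#include <stdio.h>

	#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

	int main(void)
	{
		unsigned int handle[] = { 1, 2, 3, 4, 5, 6, 7 };
		/* resizes automatically if entries are added to handle[] */
		unsigned long fail_flags[ARRAY_SIZE(handle)];

		printf("%zu handles, %zu flag slots\n",
		       ARRAY_SIZE(handle), ARRAY_SIZE(fail_flags));
		return 0;
	}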
TARGETS += mount
TARGETS += mqueue
TARGETS += net
+TARGETS += netfilter
TARGETS += nsfs
TARGETS += powerpc
TARGETS += proc
goto err;
}
- assert(system("ping localhost -6 -c 10000 -f -q > /dev/null") == 0);
+ if (system("which ping6 &>/dev/null") == 0)
+ assert(!system("ping6 localhost -c 10000 -f -q > /dev/null"));
+ else
+ assert(!system("ping -6 localhost -c 10000 -f -q > /dev/null"));
if (bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_EGRESS, 0, NULL, NULL,
&prog_cnt)) {
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
},
+ {
+ "calls: ctx read at start of subprog",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+ BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_EXIT_INSN(),
+ BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+ .errstr_unpriv = "function calls to other bpf functions are allowed for root only",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
};
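
Roughly, the new test corresponds to this restricted-C shape (a sketch;
the test itself is hand-assembled BPF, and the function names here are
invented): a subprogram whose very first instruction dereferences the
context pointer, called twice from the main program.

	static __attribute__((noinline)) int read_ctx(void *ctx)
	{
		return *(volatile unsigned char *)ctx; /* ctx read at subprog start */
	}

	int filter(void *ctx)
	{
		read_ctx(ctx);
		return read_ctx(ctx);
	}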
static int probe_filter_length(const struct bpf_insn *fp)
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for netfilter selftests
+
+TEST_PROGS := nft_trans_stress.sh
+
+include ../lib.mk
--- /dev/null
+CONFIG_NET_NS=y
+CONFIG_NF_TABLES_INET=y
--- /dev/null
+#!/bin/bash
+#
+# This test is for stress-testing the nf_tables config plane path vs.
+# packet path processing: Make sure we never release rules that are
+# still visible to other cpus.
+#
+# set -e
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+testns=testns1
+tables="foo bar baz quux"
+
+nft --version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not run test without nft tool"
+ exit $ksft_skip
+fi
+
+ip -Version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+tmp=$(mktemp)
+
+for table in $tables; do
+ echo add table inet "$table" >> "$tmp"
+ echo flush table inet "$table" >> "$tmp"
+
+ echo "add chain inet $table INPUT { type filter hook input priority 0; }" >> "$tmp"
+ echo "add chain inet $table OUTPUT { type filter hook output priority 0; }" >> "$tmp"
+ for c in $(seq 1 400); do
+ chain=$(printf "chain%03u" "$c")
+ echo "add chain inet $table $chain" >> "$tmp"
+ done
+
+ for c in $(seq 1 400); do
+ chain=$(printf "chain%03u" "$c")
+ for BASE in INPUT OUTPUT; do
+ echo "add rule inet $table $BASE counter jump $chain" >> "$tmp"
+ done
+ echo "add rule inet $table $chain counter return" >> "$tmp"
+ done
+done
+
+ip netns add "$testns"
+ip -netns "$testns" link set lo up
+
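+# flood-ping from every CPU, pinned with taskset, so the packet path stays busy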
+lscpu | grep ^CPU\(s\): | ( read cpu cpunum ;
+cpunum=$((cpunum-1))
+for i in $(seq 0 $cpunum);do
+ mask=$(printf 0x%x $((1<<$i)))
+ ip netns exec "$testns" taskset $mask ping -4 127.0.0.1 -fq > /dev/null &
+ ip netns exec "$testns" taskset $mask ping -6 ::1 -fq > /dev/null &
+done)
+
+sleep 1
+
+for i in $(seq 1 10) ; do ip netns exec "$testns" nft -f "$tmp" & done
+
+for table in $tables;do
+ randsleep=$((RANDOM%10))
+ sleep $randsleep
+ ip netns exec "$testns" nft delete table inet $table 2>/dev/null
+done
+
+randsleep=$((RANDOM%10))
+sleep $randsleep
+
+pkill -9 ping
+
+wait
+
+rm -f "$tmp"
+ip netns del "$testns"
return 0;
}
-#define REG_POISON 0x5a5aUL
-#define POISONED_REG(n) ((REG_POISON << 48) | ((n) << 32) | (REG_POISON << 16) | (n))
+#define REG_POISON 0x5a5a
+#define POISONED_REG(n) ((((unsigned long)REG_POISON) << 48) | ((n) << 32) | \
+ (((unsigned long)REG_POISON) << 16) | (n))
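
The explicit casts keep the shifts well defined: REG_POISON is now a
plain int constant, and shifting a 32-bit int by 48 would be undefined
behaviour. A standalone sketch (the register number is passed as
unsigned long for the same reason):

	#include <stdio.h>

	#define REG_POISON	0x5a5a
	#define POISONED_REG(n)	((((unsigned long)REG_POISON) << 48) | ((n) << 32) | \
				 (((unsigned long)REG_POISON) << 16) | (n))

	int main(void)
	{
		printf("r3 poison: 0x%lx\n", POISONED_REG(3UL));
		return 0;	/* prints 0x5a5a00035a5a0003 */
	}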
static inline void poison_regs(void)
{
}
}
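+/*
+ * On ELFv1 (AIX-style descriptor) ABIs, where the compiler defines
+ * _CALL_AIXDESC, a function pointer points at an "opd" descriptor rather
+ * than at code, so the bogus target must be wrapped in a descriptor too.
+ */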
+#ifdef _CALL_AIXDESC
+struct opd {
+ unsigned long ip;
+ unsigned long toc;
+ unsigned long env;
+};
+static struct opd bad_opd = {
+ .ip = BAD_NIP,
+};
+#define BAD_FUNC (&bad_opd)
+#else
+#define BAD_FUNC BAD_NIP
+#endif
+
int test_wild_bctr(void)
{
int (*func_ptr)(void);
poison_regs();
- func_ptr = (int (*)(void))BAD_NIP;
+ func_ptr = (int (*)(void))BAD_FUNC;
func_ptr();
FAIL_IF(1); /* we didn't segv? */
(rawout, serr) = proc.communicate()
if proc.returncode != 0 and len(serr) > 0:
- foutput = serr.decode("utf-8")
+ foutput = serr.decode("utf-8", errors="ignore")
else:
- foutput = rawout.decode("utf-8")
+ foutput = rawout.decode("utf-8", errors="ignore")
proc.stdout.close()
proc.stderr.close()
file=sys.stderr)
print("\n{} *** Error message: \"{}\"".format(prefix, foutput),
file=sys.stderr)
+ print("returncode {}; expected {}".format(proc.returncode,
+ exit_codes))
print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr)
print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr)
print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr)
print('-----> execute stage')
pm.call_pre_execute()
(p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"])
- exit_code = p.returncode
+ if p:
+ exit_code = p.returncode
+ else:
+ exit_code = None
+
pm.call_post_execute()
- if (exit_code != int(tidx["expExitCode"])):
+ if (exit_code is None or exit_code != int(tidx["expExitCode"])):
result = False
- print("exit:", exit_code, int(tidx["expExitCode"]))
+ print("exit: {!r}".format(exit_code))
+ print("exit: {}".format(int(tidx["expExitCode"])))
+ #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"])))
print(procout)
else:
if args.verbose > 0: