<sect2>
<title>Buffer Object Eviction</title>
<para>
- This section documents the interface function for evicting buffer
+ This section documents the interface functions for evicting buffer
objects to make space available in the virtual gpu address spaces.
  Note that this is mostly orthogonal to shrinking buffer object
  caches, which has the goal to make main memory (shared with the gpu
  through the unified memory architecture) available.
</para>
!Idrivers/gpu/drm/i915/i915_gem_evict.c
</sect2>
+ <sect2>
+ <title>Buffer Object Memory Shrinking</title>
+ <para>
+ This section documents the interface function for shrinking memory
+ usage of buffer object caches. Shrinking is used to make main memory
+ available. Note that this is mostly orthogonal to evicting buffer
+ objects, which has the goal to make space in gpu virtual address
+ spaces.
+ </para>
+!Idrivers/gpu/drm/i915/i915_gem_shrinker.c
+ </sect2>
</sect1>
<sect1>
- pclkN, clkN: Pairs of parent of input clock and input clock to the
devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
are supported currently.
+- power-domains: phandle pointing to the parent power domain, for more details
+ see Documentation/devicetree/bindings/power/power_domain.txt
Node of a device using power domains must have a power-domains property
defined with a phandle to the respective power domain.
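+
+Example of a consumer node (the device, address and labels are illustrative):
+
+	mfc: codec@13400000 {
+		power-domains = <&pd_mfc>;
+	};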
Required root node property:
compatible = "st,stih407";
+Boards with the ST STiH410 SoC shall have the following properties:
+Required root node property:
+compatible = "st,stih410";
+
Boards with the ST STiH418 SoC shall have the following properties:
Required root node property:
compatible = "st,stih418";
APM X-Gene SoC.
Required properties for all the ethernet interfaces:
-- compatible: Should be "apm,xgene-enet"
+- compatible: Should state binding information from the following list,
+ - "apm,xgene-enet": RGMII based 1G interface
+ - "apm,xgene1-sgenet": SGMII based 1G interface
+ - "apm,xgene1-xgenet": XFI based 10G interface
- reg: Address and length of the register set for the device. It contains the
information of registers in the same order as described by reg-names
- reg-names: Should contain the register set names
providing multiple PM domains (e.g. power controllers), but can be any value
as specified by device tree binding documentation of particular provider.
+Optional properties:
+ - power-domains : A phandle and PM domain specifier as defined by bindings of
+ the power controller specified by phandle.
+   Some power domains might be powered from another power domain (or have
+   other hardware-specific dependencies). To represent such a dependency,
+   the standard PM domain consumer binding is used. When provided, all
+   domains created by the given provider should be subdomains of the domain
+   specified by this binding. More details about the power domain specifier
+   are available in the next section.
+
Example:
power: power-controller@12340000 {
The node above defines a power controller that is a PM domain provider and
expects one cell as its phandle argument.
+Example 2:
+
+ parent: power-controller@12340000 {
+ compatible = "foo,power-controller";
+ reg = <0x12340000 0x1000>;
+ #power-domain-cells = <1>;
+ };
+
+	child: power-controller@12341000 {
+ compatible = "foo,power-controller";
+ reg = <0x12341000 0x1000>;
+ power-domains = <&parent 0>;
+ #power-domain-cells = <1>;
+ };
+
+The nodes above define two power controllers: 'parent' and 'child'.
+Domains created by the 'child' power controller are subdomains of the '0'
+power domain provided by the 'parent' power controller.
+
==PM domain consumers==
Required properties:
--- /dev/null
+* UART (Universal Asynchronous Receiver/Transmitter)
+
+Required properties:
+- compatible : one of:
+ - "ns8250"
+ - "ns16450"
+ - "ns16550a"
+ - "ns16550"
+ - "ns16750"
+ - "ns16850"
+ - For Tegra20, must contain "nvidia,tegra20-uart"
+ - For other Tegra, must contain '"nvidia,<chip>-uart",
+ "nvidia,tegra20-uart"' where <chip> is tegra30, tegra114, tegra124,
+ tegra132, or tegra210.
+ - "nxp,lpc3220-uart"
+ - "ralink,rt2880-uart"
+ - "ibm,qpace-nwp-serial"
+ - "altr,16550-FIFO32"
+ - "altr,16550-FIFO64"
+ - "altr,16550-FIFO128"
+ - "fsl,16550-FIFO64"
+ - "fsl,ns16550"
+ - "serial" if the port type is unknown.
+- reg : offset and length of the register set for the device.
+- interrupts : should contain uart interrupt.
+- clock-frequency : the input clock frequency for the UART
+ or
+  clocks phandle to refer to the clk used as per
+  Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+Optional properties:
+- current-speed : the current active speed of the UART.
+- reg-offset : offset to apply to the mapbase from the start of the registers.
+- reg-shift : quantity to shift the register offsets by.
+- reg-io-width : the size (in bytes) of the IO accesses that should be
+ performed on the device. There are some systems that require 32-bit
+ accesses to the UART (e.g. TI davinci).
+- used-by-rtas : set to indicate that the port is in use by the OpenFirmware
+ RTAS and should not be registered.
+- no-loopback-test: set to indicate that the port does not implement
+  loopback test mode
+- fifo-size: the fifo size of the UART.
+- auto-flow-control: one way to enable automatic flow control support. The
+ driver is allowed to detect support for the capability even without this
+ property.
+
+Note:
+* fsl,ns16550:
+ ------------
+ Freescale DUART is very similar to the PC16552D (and to a
+ pair of NS16550A), albeit with some nonstandard behavior such as
+ erratum A-004737 (relating to incorrect BRK handling).
+
+ Represents a single port that is compatible with the DUART found
+ on many Freescale chips (examples include mpc8349, mpc8548,
+ mpc8641d, p4080 and ls2085a).
+
+Example:
+
+ uart@80230000 {
+ compatible = "ns8250";
+ reg = <0x80230000 0x100>;
+ clock-frequency = <3686400>;
+ interrupts = <10>;
+ reg-shift = <2>;
+ };
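+
+As a further illustration, a port that requires 32-bit register accesses
+(e.g. TI davinci, as noted above) would additionally specify:
+
+	reg-io-width = <4>;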
--- /dev/null
+ETRAX FS UART
+
+Required properties:
+- compatible : "axis,etraxfs-uart"
+- reg: offset and length of the register set for the device.
+- interrupts: device interrupt
+
+Optional properties:
+- {dtr,dsr,ri,cd}-gpios: specify a GPIO for DTR/DSR/RI/CD
+ line respectively.
+
+Example:
+
+serial@b0026000 {
+ compatible = "axis,etraxfs-uart";
+ reg = <0xb0026000 0x1000>;
+ interrupts = <68>;
+ status = "disabled";
+};
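+
+A node using the optional modem-control lines could, for example, add
+(the GPIO controller label and pin numbers are illustrative):
+
+	dtr-gpios = <&sysgpio 0 GPIO_ACTIVE_LOW>;
+	dsr-gpios = <&sysgpio 1 GPIO_ACTIVE_LOW>;
+	ri-gpios = <&sysgpio 2 GPIO_ACTIVE_LOW>;
+	cd-gpios = <&sysgpio 3 GPIO_ACTIVE_LOW>;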
+++ /dev/null
-* UART (Universal Asynchronous Receiver/Transmitter)
-
-Required properties:
-- compatible : one of:
- - "ns8250"
- - "ns16450"
- - "ns16550a"
- - "ns16550"
- - "ns16750"
- - "ns16850"
- - For Tegra20, must contain "nvidia,tegra20-uart"
- - For other Tegra, must contain '"nvidia,<chip>-uart",
- "nvidia,tegra20-uart"' where <chip> is tegra30, tegra114, tegra124,
- tegra132, or tegra210.
- - "nxp,lpc3220-uart"
- - "ralink,rt2880-uart"
- - "ibm,qpace-nwp-serial"
- - "altr,16550-FIFO32"
- - "altr,16550-FIFO64"
- - "altr,16550-FIFO128"
- - "fsl,16550-FIFO64"
- - "fsl,ns16550"
- - "serial" if the port type is unknown.
-- reg : offset and length of the register set for the device.
-- interrupts : should contain uart interrupt.
-- clock-frequency : the input clock frequency for the UART
- or
- clocks phandle to refer to the clk used as per Documentation/devicetree
- /bindings/clock/clock-bindings.txt
-
-Optional properties:
-- current-speed : the current active speed of the UART.
-- reg-offset : offset to apply to the mapbase from the start of the registers.
-- reg-shift : quantity to shift the register offsets by.
-- reg-io-width : the size (in bytes) of the IO accesses that should be
- performed on the device. There are some systems that require 32-bit
- accesses to the UART (e.g. TI davinci).
-- used-by-rtas : set to indicate that the port is in use by the OpenFirmware
- RTAS and should not be registered.
-- no-loopback-test: set to indicate that the port does not implements loopback
- test mode
-- fifo-size: the fifo size of the UART.
-- auto-flow-control: one way to enable automatic flow control support. The
- driver is allowed to detect support for the capability even without this
- property.
-
-Note:
-* fsl,ns16550:
- ------------
- Freescale DUART is very similar to the PC16552D (and to a
- pair of NS16550A), albeit with some nonstandard behavior such as
- erratum A-004737 (relating to incorrect BRK handling).
-
- Represents a single port that is compatible with the DUART found
- on many Freescale chips (examples include mpc8349, mpc8548,
- mpc8641d, p4080 and ls2085a).
-
-Example:
-
- uart@80230000 {
- compatible = "ns8250";
- reg = <0x80230000 0x100>;
- clock-frequency = <3686400>;
- interrupts = <10>;
- reg-shift = <2>;
- };
+ and Cc: the DT maintainers. Use scripts/get_maintainer.pl to identify
+ all of the DT maintainers.
+
3) The Documentation/ portion of the patch should come in the series before
the code implementing the binding.
ams AMS AG
amstaos AMS-Taos Inc.
apm Applied Micro Circuits Corporation (APM)
+arasan Arasan Chip Systems
arm ARM Ltd.
armadeus ARMadeus Systems SARL
asahi-kasei Asahi Kasei Corp.
auo AU Optronics Corporation
avago Avago Technologies
avic Shanghai AVIC Optoelectronics Co., Ltd.
+axis Axis Communications AB
bosch Bosch Sensortec GmbH
brcm Broadcom Corporation
buffalo Buffalo, Inc.
- atmel,disable : Should be present if you want to disable the watchdog.
- atmel,idle-halt : Should be present if you want to stop the watchdog when
entering idle state.
+  CAUTION: This property should be used with care: it stops the watchdog
+  from counting while the CPU is in idle state, so the watchdog reset
+  time depends on mean CPU usage. The watchdog will never fire if the
+  CPU stops working while in idle state, which is probably not what you
+  want.
- atmel,dbg-halt : Should be present if you want to stop the watchdog when
entering debug state.
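+
+Example (a minimal sketch; the base address and compatible string are
+SoC-specific):
+
+	watchdog@fffffd40 {
+		compatible = "atmel,at91sam9260-wdt";
+		reg = <0xfffffd40 0x10>;
+		atmel,watchdog-type = "hardware";
+		atmel,reset-type = "all";
+		atmel,dbg-halt;
+	};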
F: arch/arm/boot/dts/imx*
F: arch/arm/configs/imx*_defconfig
+ARM/FREESCALE VYBRID ARM ARCHITECTURE
+S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+F: arch/arm/mach-imx/*vf610*
+F: arch/arm/boot/dts/vf*
+
ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
S: Maintained
F: arch/arm/mach-mvebu/
-F: drivers/rtc/armada38x-rtc
+F: drivers/rtc/rtc-armada38x.c
ARM/Marvell Berlin SoC support
S: Maintained
F: arch/arm/mach-dove/
F: drivers/*/*rockchip*
F: drivers/*/*/*rockchip*
F: sound/soc/rockchip/
+N: rockchip
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
F: include/linux/platform_data/at24.h
ATA OVER ETHERNET (AOE) DRIVER
-W: http://support.coraid.com/support/linux
+W: http://www.openaoe.org/
S: Supported
F: Documentation/aoe/
F: drivers/block/aoe/
F: drivers/net/ethernet/atheros/
ATM
W: http://linux-atm.sourceforge.net
BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE
T: git git://github.com/broadcom/mach-bcm
CAN NETWORK LAYER
-W: http://gitorious.org/linux-can
+W: https://github.com/linux-can
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
S: Maintained
-W: http://gitorious.org/linux-can
+W: https://github.com/linux-can
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
S: Maintained
F: Documentation/hwmon/dme1737
F: drivers/hwmon/dme1737.c
+DMI/SMBIOS SUPPORT
+S: Maintained
+F: drivers/firmware/dmi-id.c
+F: drivers/firmware/dmi_scan.c
+F: include/linux/dmi.h
+
DOCKING STATION DRIVER
S: Supported
F: drivers/gpu/drm/rcar-du/
F: drivers/gpu/drm/shmobile/
-F: include/linux/platform_data/rcar-du.h
F: include/linux/platform_data/shmob_drm.h
DSBR100 USB FM RADIO DRIVER
F: Documentation/usb/ohci.txt
F: drivers/usb/host/ohci*
+USB OTG FSM (Finite State Machine)
+T: git git://github.com/hzpeterchen/linux-usb.git
+S: Maintained
+F: drivers/usb/common/usb-otg-fsm.c
+
USB OVER IP DRIVER
VERSION = 4
PATCHLEVEL = 0
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc6
NAME = Hurr durr I'ma sheep
# *DOCUMENTATION*
sigset_t *set)
{
int err;
- err = __copy_to_user(&(sf->uc.uc_mcontext.regs), regs,
+ err = __copy_to_user(&(sf->uc.uc_mcontext.regs.scratch), regs,
sizeof(sf->uc.uc_mcontext.regs.scratch));
err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
if (!err)
set_current_blocked(&set);
- err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs),
+ err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs.scratch),
sizeof(sf->uc.uc_mcontext.regs.scratch));
return err;
/* Don't restart from sigreturn */
syscall_wont_restart(regs);
+ /*
+ * Ensure that sigreturn always returns to user mode (in case the
+ * regs saved on user stack got fudged between save and sigreturn)
+ * Otherwise it is easy to panic the kernel with a custom
+	 * signal handler and/or restorer which clobbers the status32/ret
+ * to return to a bogus location in kernel mode.
+ */
+ regs->status32 |= STATUS_U_MASK;
+
return regs->r0;
badframe:
/*
	 * handler returns using sigreturn stub provided already by userspace
+ * If not, nuke the process right away
*/
- BUG_ON(!(ksig->ka.sa.sa_flags & SA_RESTORER));
+	if (!(ksig->ka.sa.sa_flags & SA_RESTORER))
+ return 1;
+
regs->blink = (unsigned long)ksig->ka.sa.sa_restorer;
/* User Stack for signal handler will be above the frame just carved */
handle_signal(struct ksignal *ksig, struct pt_regs *regs)
{
sigset_t *oldset = sigmask_to_save();
- int ret;
+ int failed;
/* Set up the stack frame */
- ret = setup_rt_frame(ksig, oldset, regs);
+ failed = setup_rt_frame(ksig, oldset, regs);
- signal_setup_done(ret, ksig, 0);
+ signal_setup_done(failed, ksig, 0);
}
void do_signal(struct pt_regs *regs)
select GENERIC_CLOCKEVENTS
select GPIO_PXA
select HAVE_IDE
+ select IRQ_DOMAIN
select MULTI_IRQ_HANDLER
select PLAT_PXA
select SPARSE_IRQ
machine-$(CONFIG_ARCH_CLPS711X) += clps711x
machine-$(CONFIG_ARCH_CNS3XXX) += cns3xxx
machine-$(CONFIG_ARCH_DAVINCI) += davinci
+machine-$(CONFIG_ARCH_DIGICOLOR) += digicolor
machine-$(CONFIG_ARCH_DOVE) += dove
machine-$(CONFIG_ARCH_EBSA110) += ebsa110
machine-$(CONFIG_ARCH_EFM32) += efm32
cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
cd-inverted;
};
+
+&aes {
+ status = "okay";
+};
+
+&sham {
+ status = "okay";
+};
&mmc1 {
vmmc-supply = <&ldo3_reg>;
};
-
-&sham {
- status = "okay";
-};
-
-&aes {
- status = "okay";
-};
dual_emac_res_vlan = <3>;
};
+&phy_sel {
+ rmii-clock-ext;
+};
+
&mac {
pinctrl-names = "default", "sleep";
pinctrl-0 = <&cpsw_default>;
ehrpwm0_tbclk: ehrpwm0_tbclk@44e10664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <0>;
reg = <0x0664>;
};
ehrpwm1_tbclk: ehrpwm1_tbclk@44e10664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <1>;
reg = <0x0664>;
};
ehrpwm2_tbclk: ehrpwm2_tbclk@44e10664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <2>;
reg = <0x0664>;
};
ehrpwm0_tbclk: ehrpwm0_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <0>;
reg = <0x0664>;
};
ehrpwm1_tbclk: ehrpwm1_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <1>;
reg = <0x0664>;
};
ehrpwm2_tbclk: ehrpwm2_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <2>;
reg = <0x0664>;
};
ehrpwm3_tbclk: ehrpwm3_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <4>;
reg = <0x0664>;
};
ehrpwm4_tbclk: ehrpwm4_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <5>;
reg = <0x0664>;
};
ehrpwm5_tbclk: ehrpwm5_tbclk {
#clock-cells = <0>;
compatible = "ti,gate-clock";
- clocks = <&dpll_per_m2_ck>;
+ clocks = <&l4ls_gclk>;
ti,bit-shift = <6>;
reg = <0x0664>;
};
pinctrl_usart3_rts: usart3_rts-0 {
atmel,pins =
- <AT91_PIOB 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC8 periph B */
+ <AT91_PIOC 8 AT91_PERIPH_B AT91_PINCTRL_NONE>;
};
pinctrl_usart3_cts: usart3_cts-0 {
atmel,pins =
- <AT91_PIOB 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC10 periph B */
+ <AT91_PIOC 10 AT91_PERIPH_B AT91_PINCTRL_NONE>;
};
};
};
usb1: gadget@fffa4000 {
- compatible = "atmel,at91rm9200-udc";
+ compatible = "atmel,at91sam9260-udc";
reg = <0xfffa4000 0x4000>;
interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>;
clocks = <&udc_clk>, <&udpck>;
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
};
usb1: gadget@fffa4000 {
- compatible = "atmel,at91rm9200-udc";
+ compatible = "atmel,at91sam9261-udc";
reg = <0xfffa4000 0x4000>;
interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>;
- clocks = <&usb>, <&udc_clk>, <&udpck>;
- clock-names = "usb_clk", "udc_clk", "udpck";
+ clocks = <&udc_clk>, <&udpck>;
+ clock-names = "pclk", "hclk";
+ atmel,matrix = <&matrix>;
status = "disabled";
};
};
matrix: matrix@ffffee00 {
- compatible = "atmel,at91sam9260-bus-matrix";
+ compatible = "atmel,at91sam9260-bus-matrix", "syscon";
reg = <0xffffee00 0x200>;
};
sram1: sram@00500000 {
compatible = "mmio-sram";
- reg = <0x00300000 0x4000>;
+ reg = <0x00500000 0x4000>;
};
ahb {
};
usb1: gadget@fff78000 {
- compatible = "atmel,at91rm9200-udc";
+ compatible = "atmel,at91sam9263-udc";
reg = <0xfff78000 0x4000>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH 2>;
clocks = <&udc_clk>, <&udpck>;
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
reg = <0x00800000 0x100000>;
interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>;
- clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>;
+ clocks = <&utmi>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>;
clock-names = "usb_clk", "ehci_clk", "hclk", "uhpck";
status = "disabled";
};
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
reg = <0x00500000 0x80000
0xf803c000 0x400>;
interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>;
- clocks = <&usb>, <&udphs_clk>;
+ clocks = <&utmi>, <&udphs_clk>;
clock-names = "hclk", "pclk";
status = "disabled";
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
reg = <0x00700000 0x100000>;
interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>;
- clocks = <&usb>, <&uhphs_clk>, <&uhpck>;
+ clocks = <&utmi>, <&uhphs_clk>, <&uhpck>;
clock-names = "usb_clk", "ehci_clk", "uhpck";
status = "disabled";
};
>;
};
+ mmc_pins: pinmux_mmc_pins {
+ pinctrl-single,pins = <
+ DM816X_IOPAD(0x0a70, MUX_MODE0) /* SD_POW */
+ DM816X_IOPAD(0x0a74, MUX_MODE0) /* SD_CLK */
+ DM816X_IOPAD(0x0a78, MUX_MODE0) /* SD_CMD */
+ DM816X_IOPAD(0x0a7C, MUX_MODE0) /* SD_DAT0 */
+ DM816X_IOPAD(0x0a80, MUX_MODE0) /* SD_DAT1 */
+ DM816X_IOPAD(0x0a84, MUX_MODE0) /* SD_DAT2 */
+			DM816X_IOPAD(0x0a88, MUX_MODE0)	/* SD_DAT3 */
+ DM816X_IOPAD(0x0a8c, MUX_MODE2) /* GP1[7] */
+ DM816X_IOPAD(0x0a90, MUX_MODE2) /* GP1[8] */
+ >;
+ };
+
usb0_pins: pinmux_usb0_pins {
pinctrl-single,pins = <
DM816X_IOPAD(0x0d00, MUX_MODE0) /* USB0_DRVVBUS */
};
&mmc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc_pins>;
vmmc-supply = <&vmmcsd_fixed>;
+ bus-width = <4>;
+ cd-gpios = <&gpio2 7 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio2 8 GPIO_ACTIVE_LOW>;
};
/* At least dm8168-evm rev c won't support multipoint, later may */
};
gpio1: gpio@48032000 {
- compatible = "ti,omap3-gpio";
+ compatible = "ti,omap4-gpio";
ti,hwmods = "gpio1";
+ ti,gpio-always-on;
reg = <0x48032000 0x1000>;
- interrupts = <97>;
+ interrupts = <96>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
};
gpio2: gpio@4804c000 {
- compatible = "ti,omap3-gpio";
+ compatible = "ti,omap4-gpio";
ti,hwmods = "gpio2";
+ ti,gpio-always-on;
reg = <0x4804c000 0x1000>;
- interrupts = <99>;
+ interrupts = <98>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
};
gpmc: gpmc@50000000 {
dcan1_pins_default: dcan1_pins_default {
pinctrl-single,pins = <
- 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */
- 0x3d4 (MUX_MODE15) /* dcan1_rx.off */
- 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */
+ 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */
+ 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */
>;
};
dcan1_pins_sleep: dcan1_pins_sleep {
pinctrl-single,pins = <
- 0x3d0 (MUX_MODE15) /* dcan1_tx.off */
- 0x3d4 (MUX_MODE15) /* dcan1_rx.off */
- 0x418 (MUX_MODE15) /* wakeup0.off */
+ 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */
+ 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */
>;
};
};
"wkupclk", "refclk",
"div-clk", "phy-div";
#phy-cells = <0>;
- ti,hwmods = "pcie1-phy";
};
pcie2_phy: pciephy@4a095000 {
"wkupclk", "refclk",
"div-clk", "phy-div";
#phy-cells = <0>;
- ti,hwmods = "pcie2-phy";
status = "disabled";
};
};
dcan1_pins_default: dcan1_pins_default {
pinctrl-single,pins = <
- 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */
- 0x3d4 (MUX_MODE15) /* dcan1_rx.off */
- 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */
+ 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */
+ 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */
>;
};
dcan1_pins_sleep: dcan1_pins_sleep {
pinctrl-single,pins = <
- 0x3d0 (MUX_MODE15) /* dcan1_tx.off */
- 0x3d4 (MUX_MODE15) /* dcan1_rx.off */
- 0x418 (MUX_MODE15) /* wakeup0.off */
+ 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */
+ 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */
>;
};
ti,invert-autoidle-bit;
};
+ dpll_core_byp_mux: dpll_core_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x012c>;
+ };
+
dpll_core_ck: dpll_core_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-core-clock";
- clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ clocks = <&sys_clkin1>, <&dpll_core_byp_mux>;
reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>;
};
clock-div = <1>;
};
+ dpll_dsp_byp_mux: dpll_dsp_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x0240>;
+ };
+
dpll_dsp_ck: dpll_dsp_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>;
+ clocks = <&sys_clkin1>, <&dpll_dsp_byp_mux>;
reg = <0x0234>, <0x0238>, <0x0240>, <0x023c>;
};
clock-div = <1>;
};
+ dpll_iva_byp_mux: dpll_iva_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x01ac>;
+ };
+
dpll_iva_ck: dpll_iva_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>;
+ clocks = <&sys_clkin1>, <&dpll_iva_byp_mux>;
reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>;
};
clock-div = <1>;
};
+ dpll_gpu_byp_mux: dpll_gpu_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x02e4>;
+ };
+
dpll_gpu_ck: dpll_gpu_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ clocks = <&sys_clkin1>, <&dpll_gpu_byp_mux>;
reg = <0x02d8>, <0x02dc>, <0x02e4>, <0x02e0>;
};
clock-div = <1>;
};
+ dpll_ddr_byp_mux: dpll_ddr_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x021c>;
+ };
+
dpll_ddr_ck: dpll_ddr_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ clocks = <&sys_clkin1>, <&dpll_ddr_byp_mux>;
reg = <0x0210>, <0x0214>, <0x021c>, <0x0218>;
};
ti,invert-autoidle-bit;
};
+ dpll_gmac_byp_mux: dpll_gmac_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x02b4>;
+ };
+
dpll_gmac_ck: dpll_gmac_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
+ clocks = <&sys_clkin1>, <&dpll_gmac_byp_mux>;
reg = <0x02a8>, <0x02ac>, <0x02b4>, <0x02b0>;
};
clock-div = <1>;
};
+ dpll_eve_byp_mux: dpll_eve_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x0290>;
+ };
+
dpll_eve_ck: dpll_eve_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>;
+ clocks = <&sys_clkin1>, <&dpll_eve_byp_mux>;
reg = <0x0284>, <0x0288>, <0x0290>, <0x028c>;
};
clock-div = <1>;
};
+ dpll_per_byp_mux: dpll_per_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x014c>;
+ };
+
dpll_per_ck: dpll_per_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>;
+ clocks = <&sys_clkin1>, <&dpll_per_byp_mux>;
reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>;
};
clock-div = <1>;
};
+ dpll_usb_byp_mux: dpll_usb_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x018c>;
+ };
+
dpll_usb_ck: dpll_usb_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-j-type-clock";
- clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>;
+ clocks = <&sys_clkin1>, <&dpll_usb_byp_mux>;
reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>;
};
*/
#include "skeleton.dtsi"
+#include "exynos4-cpu-thermal.dtsi"
#include <dt-bindings/clock/exynos3250.h>
/ {
interrupts = <0 216 0>;
clocks = <&cmu CLK_TMU_APBIF>;
clock-names = "tmu_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
status = "disabled";
};
--- /dev/null
+/*
+ * Device tree sources for Exynos4 thermal zone
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal.h>
+
+/ {
+thermal-zones {
+ cpu_thermal: cpu-thermal {
+ thermal-sensors = <&tmu 0>;
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ trips {
+ cpu_alert0: cpu-alert-0 {
+ temperature = <70000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu_alert1: cpu-alert-1 {
+ temperature = <95000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu_alert2: cpu-alert-2 {
+ temperature = <110000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu_crit0: cpu-crit-0 {
+ temperature = <120000>; /* millicelsius */
+ hysteresis = <0>; /* millicelsius */
+ type = "critical";
+ };
+ };
+ cooling-maps {
+ map0 {
+ trip = <&cpu_alert0>;
+ };
+ map1 {
+ trip = <&cpu_alert1>;
+ };
+ };
+ };
+};
+};
i2c5 = &i2c_5;
i2c6 = &i2c_6;
i2c7 = &i2c_7;
+ i2c8 = &i2c_8;
csis0 = &csis_0;
csis1 = &csis_1;
fimc0 = &fimc_0;
compatible = "samsung,exynos4210-pd";
reg = <0x10023C20 0x20>;
#power-domain-cells = <0>;
+ power-domains = <&pd_lcd0>;
};
pd_cam: cam-power-domain@10023C00 {
status = "disabled";
};
+ i2c_8: i2c@138E0000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "samsung,s3c2440-hdmiphy-i2c";
+ reg = <0x138E0000 0x100>;
+ interrupts = <0 93 0>;
+ clocks = <&clock CLK_I2C_HDMI>;
+ clock-names = "i2c";
+ status = "disabled";
+
+ hdmi_i2c_phy: hdmiphy@38 {
+ compatible = "exynos4210-hdmiphy";
+ reg = <0x38>;
+ };
+ };
+
spi_0: spi@13920000 {
compatible = "samsung,exynos4210-spi";
reg = <0x13920000 0x100>;
status = "disabled";
};
+ tmu: tmu@100C0000 {
+ #include "exynos4412-tmu-sensor-conf.dtsi"
+ };
+
+ hdmi: hdmi@12D00000 {
+ compatible = "samsung,exynos4210-hdmi";
+ reg = <0x12D00000 0x70000>;
+ interrupts = <0 92 0>;
+ clock-names = "hdmi", "sclk_hdmi", "sclk_pixel", "sclk_hdmiphy",
+ "mout_hdmi";
+ clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>,
+ <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>,
+ <&clock CLK_MOUT_HDMI>;
+ phy = <&hdmi_i2c_phy>;
+ power-domains = <&pd_tv>;
+ samsung,syscon-phandle = <&pmu_system_controller>;
+ status = "disabled";
+ };
+
+ mixer: mixer@12C10000 {
+ compatible = "samsung,exynos4210-mixer";
+ interrupts = <0 91 0>;
+ reg = <0x12C10000 0x2100>, <0x12c00000 0x300>;
+ power-domains = <&pd_tv>;
+ status = "disabled";
+ };
+
ppmu_dmc0: ppmu_dmc0@106a0000 {
compatible = "samsung,exynos-ppmu";
reg = <0x106a0000 0x2000>;
status = "okay";
};
+ tmu@100C0000 {
+ status = "okay";
+ };
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ cooling-maps {
+ map0 {
+ /* Corresponds to 800MHz at freq_table */
+ cooling-device = <&cpu0 2 2>;
+ };
+ map1 {
+ /* Corresponds to 200MHz at freq_table */
+ cooling-device = <&cpu0 4 4>;
+ };
+ };
+ };
+ };
+
camera {
pinctrl-names = "default";
pinctrl-0 = <>;
assigned-clock-rates = <0>, <160000000>;
};
};
+
+ hdmi_en: voltage-regulator-hdmi-5v {
+ compatible = "regulator-fixed";
+ regulator-name = "HDMI_5V";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpe0 1 0>;
+ enable-active-high;
+ };
+
+ hdmi_ddc: i2c-ddc {
+ compatible = "i2c-gpio";
+ gpios = <&gpe4 2 0 &gpe4 3 0>;
+ i2c-gpio,delay-us = <100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ pinctrl-0 = <&i2c_ddc_bus>;
+ pinctrl-names = "default";
+ status = "okay";
+ };
+
+ mixer@12C10000 {
+ status = "okay";
+ };
+
+ hdmi@12D00000 {
+ hpd-gpio = <&gpx3 7 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_hpd>;
+ hdmi-en-supply = <&hdmi_en>;
+ vdd-supply = <&ldo3_reg>;
+ vdd_osc-supply = <&ldo4_reg>;
+ vdd_pll-supply = <&ldo3_reg>;
+ ddc = <&hdmi_ddc>;
+ status = "okay";
+ };
+
+ i2c@138E0000 {
+ status = "okay";
+ };
+};
+
+&pinctrl_1 {
+ hdmi_hpd: hdmi-hpd {
+ samsung,pins = "gpx3-7";
+ samsung,pin-pud = <0>;
+ };
+};
+
+&pinctrl_0 {
+ i2c_ddc_bus: i2c-ddc-bus {
+ samsung,pins = "gpe4-2", "gpe4-3";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <3>;
+ samsung,pin-drv = <0>;
+ };
};
&mdma1 {
#include "exynos4.dtsi"
#include "exynos4210-pinctrl.dtsi"
+#include "exynos4-cpu-thermal.dtsi"
/ {
compatible = "samsung,exynos4210", "samsung,exynos4";
#address-cells = <1>;
#size-cells = <0>;
- cpu@900 {
+ cpu0: cpu@900 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0x900>;
+ cooling-min-level = <4>;
+ cooling-max-level = <2>;
+ #cooling-cells = <2>; /* min followed by max */
};
cpu@901 {
reg = <0x03860000 0x1000>;
};
- tmu@100C0000 {
+ tmu: tmu@100C0000 {
compatible = "samsung,exynos4210-tmu";
interrupt-parent = <&combiner>;
reg = <0x100C0000 0x100>;
interrupts = <2 4>;
clocks = <&clock CLK_TMU_APBIF>;
clock-names = "tmu_apbif";
+ samsung,tmu_gain = <15>;
+ samsung,tmu_reference_voltage = <7>;
status = "disabled";
};
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ thermal-sensors = <&tmu 0>;
+
+ trips {
+ cpu_alert0: cpu-alert-0 {
+ temperature = <85000>; /* millicelsius */
+ };
+ cpu_alert1: cpu-alert-1 {
+ temperature = <100000>; /* millicelsius */
+ };
+ cpu_alert2: cpu-alert-2 {
+ temperature = <110000>; /* millicelsius */
+ };
+ };
+ };
+ };
+
g2d@12800000 {
compatible = "samsung,s5pv210-g2d";
reg = <0x12800000 0x1000>;
};
};
+ mixer: mixer@12C10000 {
+ clock-names = "mixer", "hdmi", "sclk_hdmi", "vp", "mout_mixer",
+ "sclk_mixer";
+ clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+ <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>,
+ <&clock CLK_MOUT_MIXER>, <&clock CLK_SCLK_MIXER>;
+ };
+
ppmu_lcd1: ppmu_lcd1@12240000 {
compatible = "samsung,exynos-ppmu";
reg = <0x12240000 0x2000>;
#address-cells = <1>;
#size-cells = <0>;
- cpu@A00 {
+ cpu0: cpu@A00 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0xA00>;
+ cooling-min-level = <13>;
+ cooling-max-level = <7>;
+ #cooling-cells = <2>; /* min followed by max */
};
cpu@A01 {
regulator-always-on;
};
+ ldo8_reg: ldo@8 {
+ regulator-compatible = "LDO8";
+ regulator-name = "VDD10_HDMI_1.0V";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ };
+
+ ldo10_reg: ldo@10 {
+ regulator-compatible = "LDO10";
+ regulator-name = "VDDQ_MIPIHSI_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
ldo11_reg: LDO11 {
regulator-name = "VDD18_ABB1_1.8V";
regulator-min-microvolt = <1800000>;
ehci: ehci@12580000 {
status = "okay";
};
+
+ tmu@100C0000 {
+ vtmu-supply = <&ldo10_reg>;
+ status = "okay";
+ };
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ cooling-maps {
+ map0 {
+ /* Corresponds to 800MHz at freq_table */
+ cooling-device = <&cpu0 7 7>;
+ };
+ map1 {
+ /* Corresponds to 200MHz at freq_table */
+ cooling-device = <&cpu0 13 13>;
+ };
+ };
+ };
+ };
+
+ mixer: mixer@12C10000 {
+ status = "okay";
+ };
+
+ hdmi@12D00000 {
+ hpd-gpio = <&gpx3 7 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_hpd>;
+ vdd-supply = <&ldo8_reg>;
+ vdd_osc-supply = <&ldo10_reg>;
+ vdd_pll-supply = <&ldo8_reg>;
+ ddc = <&hdmi_ddc>;
+ status = "okay";
+ };
+
+ hdmi_ddc: i2c@13880000 {
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_bus>;
+ };
+
+ i2c@138E0000 {
+ status = "okay";
+ };
};
&pinctrl_1 {
samsung,pin-pud = <0>;
samsung,pin-drv = <0>;
};
+
+ hdmi_hpd: hdmi-hpd {
+ samsung,pins = "gpx3-7";
+ samsung,pin-pud = <1>;
+ };
};
--- /dev/null
+/*
+ * Device tree sources for Exynos4412 TMU sensor configuration
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <8>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <55>;
+samsung,tmu_min_efuse_value = <40>;
+samsung,tmu_max_efuse_value = <100>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <85>;
+samsung,tmu_default_temp_offset = <50>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
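+
+/* Note: this fragment is meant to be included from inside a TMU device
+ * node, as the SoC dtsi files in this series do. */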
pulldown-ohm = <100000>; /* 100K */
io-channels = <&adc 2>; /* Battery temperature */
};
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ cooling-maps {
+ map0 {
+ /* Corresponds to 800MHz at freq_table */
+ cooling-device = <&cpu0 7 7>;
+ };
+ map1 {
+ /* Corresponds to 200MHz at freq_table */
+ cooling-device = <&cpu0 13 13>;
+ };
+ };
+ };
+ };
};
&pmu_system_controller {
#address-cells = <1>;
#size-cells = <0>;
- cpu@A00 {
+ cpu0: cpu@A00 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0xA00>;
+ cooling-min-level = <13>;
+ cooling-max-level = <7>;
+ #cooling-cells = <2>; /* min followed by max */
};
cpu@A01 {
#include "exynos4.dtsi"
#include "exynos4x12-pinctrl.dtsi"
+#include "exynos4-cpu-thermal.dtsi"
/ {
aliases {
clock-names = "tmu_apbif";
status = "disabled";
};
+
+ hdmi: hdmi@12D00000 {
+ compatible = "samsung,exynos4212-hdmi";
+ };
+
+ mixer: mixer@12C10000 {
+ compatible = "samsung,exynos4212-mixer";
+ clock-names = "mixer", "hdmi", "sclk_hdmi", "vp";
+ clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+ <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>;
+ };
};
#include <dt-bindings/clock/exynos5250.h>
#include "exynos5.dtsi"
#include "exynos5250-pinctrl.dtsi"
-
+#include "exynos4-cpu-thermal.dtsi"
#include <dt-bindings/clock/exynos-audss-clk.h>
/ {
#address-cells = <1>;
#size-cells = <0>;
- cpu@0 {
+ cpu0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a15";
reg = <0>;
clock-frequency = <1700000000>;
+ cooling-min-level = <15>;
+ cooling-max-level = <9>;
+ #cooling-cells = <2>; /* min followed by max */
};
cpu@1 {
device_type = "cpu";
#power-domain-cells = <0>;
};
+ pd_disp1: disp1-power-domain@100440A0 {
+ compatible = "samsung,exynos4210-pd";
+ reg = <0x100440A0 0x20>;
+ #power-domain-cells = <0>;
+ };
+
clock: clock-controller@10010000 {
compatible = "samsung,exynos5250-clock";
reg = <0x10010000 0x30000>;
status = "disabled";
};
- tmu@10060000 {
+ tmu: tmu@10060000 {
compatible = "samsung,exynos5250-tmu";
reg = <0x10060000 0x100>;
interrupts = <0 65 0>;
clocks = <&clock CLK_TMU>;
clock-names = "tmu_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
+ };
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ thermal-sensors = <&tmu 0>;
+
+ cooling-maps {
+ map0 {
+ /* Corresponds to 800MHz at freq_table */
+ cooling-device = <&cpu0 9 9>;
+ };
+ map1 {
+ /* Corresponds to 200MHz at freq_table */
+ cooling-device = <&cpu0 15 15>;
+ };
+ };
+ };
};
serial@12C00000 {
hdmi: hdmi {
compatible = "samsung,exynos4212-hdmi";
reg = <0x14530000 0x70000>;
+ power-domains = <&pd_disp1>;
interrupts = <0 95 0>;
clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>,
<&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>,
mixer {
compatible = "samsung,exynos5250-mixer";
reg = <0x14450000 0x10000>;
+ power-domains = <&pd_disp1>;
interrupts = <0 94 0>;
- clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>;
- clock-names = "mixer", "sclk_hdmi";
+ clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+ <&clock CLK_SCLK_HDMI>;
+ clock-names = "mixer", "hdmi", "sclk_hdmi";
};
dp_phy: video-phy@10040720 {
};
dp: dp-controller@145B0000 {
+ power-domains = <&pd_disp1>;
clocks = <&clock CLK_DP>;
clock-names = "dp";
phys = <&dp_phy>;
};
fimd: fimd@14400000 {
+ power-domains = <&pd_disp1>;
clocks = <&clock CLK_SCLK_FIMD1>, <&clock CLK_FIMD1>;
clock-names = "sclk_fimd", "fimd";
};
--- /dev/null
+/*
+ * Device tree sources for default Exynos5420 thermal zone definition
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+ cpu-alert-0 {
+ temperature = <85000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu-alert-1 {
+ temperature = <103000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu-alert-2 {
+ temperature = <110000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "active";
+ };
+ cpu-crit-0 {
+		temperature = <120000>; /* millicelsius */
+ hysteresis = <0>; /* millicelsius */
+ type = "critical";
+ };
+};
compatible = "samsung,exynos5420-mixer";
reg = <0x14450000 0x10000>;
interrupts = <0 94 0>;
- clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>;
- clock-names = "mixer", "sclk_hdmi";
+ clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+ <&clock CLK_SCLK_HDMI>;
+ clock-names = "mixer", "hdmi", "sclk_hdmi";
power-domains = <&disp_pd>;
};
interrupts = <0 65 0>;
clocks = <&clock CLK_TMU>;
clock-names = "tmu_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
};
tmu_cpu1: tmu@10064000 {
interrupts = <0 183 0>;
clocks = <&clock CLK_TMU>;
clock-names = "tmu_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
};
tmu_cpu2: tmu@10068000 {
interrupts = <0 184 0>;
clocks = <&clock CLK_TMU>, <&clock CLK_TMU>;
clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
};
tmu_cpu3: tmu@1006c000 {
interrupts = <0 185 0>;
clocks = <&clock CLK_TMU>, <&clock CLK_TMU_GPU>;
clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
};
tmu_gpu: tmu@100a0000 {
interrupts = <0 215 0>;
clocks = <&clock CLK_TMU_GPU>, <&clock CLK_TMU>;
clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+ #include "exynos4412-tmu-sensor-conf.dtsi"
+ };
+
+ thermal-zones {
+ cpu0_thermal: cpu0-thermal {
+ thermal-sensors = <&tmu_cpu0>;
+ #include "exynos5420-trip-points.dtsi"
+ };
+ cpu1_thermal: cpu1-thermal {
+ thermal-sensors = <&tmu_cpu1>;
+ #include "exynos5420-trip-points.dtsi"
+ };
+ cpu2_thermal: cpu2-thermal {
+ thermal-sensors = <&tmu_cpu2>;
+ #include "exynos5420-trip-points.dtsi"
+ };
+ cpu3_thermal: cpu3-thermal {
+ thermal-sensors = <&tmu_cpu3>;
+ #include "exynos5420-trip-points.dtsi"
+ };
+ gpu_thermal: gpu-thermal {
+ thermal-sensors = <&tmu_gpu>;
+ #include "exynos5420-trip-points.dtsi"
+ };
};
watchdog: watchdog@101D0000 {
--- /dev/null
+/*
+ * Device tree sources for Exynos5440 TMU sensor configuration
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <5>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <0x5d2d>;
+samsung,tmu_min_efuse_value = <16>;
+samsung,tmu_max_efuse_value = <76>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <70>;
+samsung,tmu_default_temp_offset = <25>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
--- /dev/null
+/*
+ * Device tree sources for default Exynos5440 thermal zone definition
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+ cpu-alert-0 {
+ temperature = <100000>; /* millicelsius */
+ hysteresis = <0>; /* millicelsius */
+ type = "active";
+ };
+ cpu-crit-0 {
+		temperature = <105000>; /* millicelsius */
+ hysteresis = <0>; /* millicelsius */
+ type = "critical";
+ };
+};
interrupts = <0 58 0>;
clocks = <&clock CLK_B_125>;
clock-names = "tmu_apbif";
+ #include "exynos5440-tmu-sensor-conf.dtsi"
};
tmuctrl_1: tmuctrl@16011C {
interrupts = <0 58 0>;
clocks = <&clock CLK_B_125>;
clock-names = "tmu_apbif";
+ #include "exynos5440-tmu-sensor-conf.dtsi"
};
tmuctrl_2: tmuctrl@160120 {
interrupts = <0 58 0>;
clocks = <&clock CLK_B_125>;
clock-names = "tmu_apbif";
+ #include "exynos5440-tmu-sensor-conf.dtsi"
+ };
+
+ thermal-zones {
+ cpu0_thermal: cpu0-thermal {
+ thermal-sensors = <&tmuctrl_0>;
+ #include "exynos5440-trip-points.dtsi"
+ };
+ cpu1_thermal: cpu1-thermal {
+ thermal-sensors = <&tmuctrl_1>;
+ #include "exynos5440-trip-points.dtsi"
+ };
+ cpu2_thermal: cpu2-thermal {
+ thermal-sensors = <&tmuctrl_2>;
+ #include "exynos5440-trip-points.dtsi"
+ };
};
sata@210000 {
regulator-max-microvolt = <5000000>;
gpio = <&gpio3 22 0>;
enable-active-high;
+ vin-supply = <&swbst_reg>;
};
reg_usb_h1_vbus: regulator@1 {
regulator-max-microvolt = <5000000>;
gpio = <&gpio1 29 0>;
enable-active-high;
+ vin-supply = <&swbst_reg>;
};
reg_audio: regulator@2 {
regulator-max-microvolt = <5000000>;
gpio = <&gpio4 0 0>;
enable-active-high;
+ vin-supply = <&swbst_reg>;
};
reg_usb_otg2_vbus: regulator@1 {
regulator-max-microvolt = <5000000>;
gpio = <&gpio4 2 0>;
enable-active-high;
+ vin-supply = <&swbst_reg>;
};
reg_aud3v: regulator@2 {
ti,hwmods = "aes";
reg = <0x480c5000 0x50>;
interrupts = <0>;
+ dmas = <&sdma 65 &sdma 66>;
+ dma-names = "tx", "rx";
};
prm: prm@48306000 {
ti,hwmods = "sham";
reg = <0x480c3000 0x64>;
interrupts = <49>;
+ dmas = <&sdma 69>;
+ dma-names = "rx";
};
smartreflex_core: smartreflex@480cb000 {
core_thermal: core_thermal {
polling-delay-passive = <250>; /* milliseconds */
- polling-delay = <1000>; /* milliseconds */
+ polling-delay = <500>; /* milliseconds */
/* sensor ID */
thermal-sensors = <&bandgap 2>;
gpu_thermal: gpu_thermal {
polling-delay-passive = <250>; /* milliseconds */
- polling-delay = <1000>; /* milliseconds */
+ polling-delay = <500>; /* milliseconds */
/* sensor ID */
thermal-sensors = <&bandgap 1>;
};
};
+&cpu_thermal {
+ polling-delay = <500>; /* milliseconds */
+};
+
/include/ "omap54xx-clocks.dtsi"
ti,index-starts-at-one;
};
+ dpll_core_byp_mux: dpll_core_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x012c>;
+ };
+
dpll_core_ck: dpll_core_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-core-clock";
- clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>;
+ clocks = <&sys_clkin>, <&dpll_core_byp_mux>;
reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>;
};
clock-div = <1>;
};
+ dpll_iva_byp_mux: dpll_iva_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x01ac>;
+ };
+
dpll_iva_ck: dpll_iva_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>;
+ clocks = <&sys_clkin>, <&dpll_iva_byp_mux>;
reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>;
};
};
};
&cm_core_clocks {
+
+ dpll_per_byp_mux: dpll_per_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x014c>;
+ };
+
dpll_per_ck: dpll_per_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
- clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>;
+ clocks = <&sys_clkin>, <&dpll_per_byp_mux>;
reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>;
};
ti,index-starts-at-one;
};
+ dpll_usb_byp_mux: dpll_usb_byp_mux {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>;
+ ti,bit-shift = <23>;
+ reg = <0x018c>;
+ };
+
dpll_usb_ck: dpll_usb_ck {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-j-type-clock";
- clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>;
+ clocks = <&sys_clkin>, <&dpll_usb_byp_mux>;
reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>;
};
"mac_clk_rx", "mac_clk_tx",
"clk_mac_ref", "clk_mac_refout",
"aclk_mac", "pclk_mac";
+ status = "disabled";
};
usb_host0_ehci: usb@ff500000 {
atmel,watchdog-type = "hardware";
atmel,reset-type = "all";
atmel,dbg-halt;
- atmel,idle-halt;
status = "disabled";
};
compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
reg = <0x00700000 0x100000>;
interrupts = <32 IRQ_TYPE_LEVEL_HIGH 2>;
- clocks = <&usb>, <&uhphs_clk>, <&uhpck>;
+ clocks = <&utmi>, <&uhphs_clk>, <&uhpck>;
clock-names = "usb_clk", "ehci_clk", "uhpck";
status = "disabled";
};
gpio4 = &pioE;
tcb0 = &tcb0;
tcb1 = &tcb1;
+ i2c0 = &i2c0;
i2c2 = &i2c2;
};
cpus {
compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
reg = <0x00600000 0x100000>;
interrupts = <46 IRQ_TYPE_LEVEL_HIGH 2>;
- clocks = <&usb>, <&uhphs_clk>, <&uhpck>;
+ clocks = <&utmi>, <&uhphs_clk>, <&uhpck>;
clock-names = "usb_clk", "ehci_clk", "uhpck";
status = "disabled";
};
lcdck: lcdck {
#clock-cells = <0>;
- reg = <4>;
- clocks = <&smd>;
+ reg = <3>;
+ clocks = <&mck>;
};
smdck: smdck {
reg = <50>;
};
- lcd_clk: lcd_clk {
+ lcdc_clk: lcdc_clk {
#clock-cells = <0>;
reg = <51>;
};
#address-cells = <1>;
#size-cells = <0>;
reg = <0xfff01000 0x1000>;
- interrupts = <0 156 4>;
+ interrupts = <0 155 4>;
num-cs = <4>;
clocks = <&spi_m_clk>;
status = "disabled";
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&l4_sp_clk>;
+ dmas = <&pdma 28>,
+ <&pdma 29>;
+ dma-names = "tx", "rx";
};
uart1: serial1@ffc03000 {
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&l4_sp_clk>;
+ dmas = <&pdma 30>,
+ <&pdma 31>;
+ dma-names = "tx", "rx";
};
rst: rstmgr@ffd05000 {
model = "Olimex A10-OLinuXino-LIME";
compatible = "olimex,a10-olinuxino-lime", "allwinner,sun4i-a10";
+ cpus {
+ cpu0: cpu@0 {
+ /*
+ * The A10-Lime is known to be unstable
+ * when running at 1008 MHz
+ */
+ operating-points = <
+ /* kHz uV */
+ 912000 1350000
+ 864000 1300000
+ 624000 1250000
+ >;
+ cooling-max-level = <2>;
+ };
+ };
+
soc@01c00000 {
emac: ethernet@01c0b000 {
pinctrl-names = "default";
clock-latency = <244144>; /* 8 32k periods */
operating-points = <
/* kHz uV */
- 1056000 1500000
1008000 1400000
912000 1350000
864000 1300000
>;
#cooling-cells = <2>;
cooling-min-level = <0>;
- cooling-max-level = <4>;
+ cooling-max-level = <3>;
};
};
clock-latency = <244144>; /* 8 32k periods */
operating-points = <
/* kHz uV */
- 1104000 1500000
1008000 1400000
912000 1350000
864000 1300000
>;
#cooling-cells = <2>;
cooling-min-level = <0>;
- cooling-max-level = <6>;
+ cooling-max-level = <5>;
};
};
clock-latency = <244144>; /* 8 32k periods */
operating-points = <
/* kHz uV */
- 1008000 1450000
960000 1400000
912000 1400000
864000 1300000
>;
#cooling-cells = <2>;
cooling-min-level = <0>;
- cooling-max-level = <7>;
+ cooling-max-level = <6>;
};
cpu@1 {
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_NETDEVICES=y
+CONFIG_ARM_AT91_ETHER=y
CONFIG_MACB=y
# CONFIG_NET_VENDOR_BROADCOM is not set
CONFIG_DM9000=y
CONFIG_PCI_RCAR_GEN2_PCIE=y
CONFIG_PCIEPORTBUS=y
CONFIG_SMP=y
-CONFIG_NR_CPUS=8
+CONFIG_NR_CPUS=16
CONFIG_HIGHPTE=y
CONFIG_CMA=y
CONFIG_ARM_APPENDED_DTB=y
CONFIG_PWM_TWL_LED=m
CONFIG_OMAP_USB2=m
CONFIG_TI_PIPE3=y
+CONFIG_TWL4030_USB=m
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_FS_XATTR is not set
CONFIG_SYSVIPC=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED=y
-CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_EMBEDDED=y
CONFIG_SLAB=y
CONFIG_PERF_EVENTS=y
CONFIG_ARCH_SUNXI=y
CONFIG_SMP=y
+CONFIG_NR_CPUS=8
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
CONFIG_HIGHPTE=y
CONFIG_USB=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
CONFIG_USB_MON=y
-CONFIG_USB_ISP1760_HCD=y
CONFIG_USB_STORAGE=y
+CONFIG_USB_ISP1760=y
CONFIG_MMC=y
CONFIG_MMC_ARMMMCI=y
CONFIG_NEW_LEDS=y
# define VFP_ABI_FRAME 0
# define BSAES_ASM_EXTENDED_KEY
# define XTS_CHAIN_TWEAK
-# define __ARM_ARCH__ 7
+# define __ARM_ARCH__ __LINUX_ARM_ARCH__
+# define __ARM_MAX_ARCH__ 7
#endif
#ifdef __thumb__
# define adrl adr
#endif
-#if __ARM_ARCH__>=7
+#if __ARM_MAX_ARCH__>=7
+.arch armv7-a
+.fpu neon
+
.text
.syntax unified @ ARMv7-capable assembler is expected to handle this
#ifdef __thumb2__
.code 32
#endif
-.fpu neon
-
.type _bsaes_decrypt8,%function
.align 4
_bsaes_decrypt8:
vld1.8 {q8}, [r0] @ initial tweak
adr r2, .Lxts_magic
+#ifndef XTS_CHAIN_TWEAK
tst r9, #0xf @ if not multiple of 16
it ne @ Thumb2 thing, sanity check in ARM
subne r9, #0x10 @ subtract another 16 bytes
+#endif
subs r9, #0x80
blo .Lxts_dec_short
# define VFP_ABI_FRAME 0
# define BSAES_ASM_EXTENDED_KEY
# define XTS_CHAIN_TWEAK
-# define __ARM_ARCH__ 7
+# define __ARM_ARCH__ __LINUX_ARM_ARCH__
+# define __ARM_MAX_ARCH__ 7
#endif
#ifdef __thumb__
# define adrl adr
#endif
-#if __ARM_ARCH__>=7
+#if __ARM_MAX_ARCH__>=7
+.arch armv7-a
+.fpu neon
+
.text
.syntax unified @ ARMv7-capable assembler is expected to handle this
#ifdef __thumb2__
.code 32
#endif
-.fpu neon
-
.type _bsaes_decrypt8,%function
.align 4
_bsaes_decrypt8:
vld1.8 {@XMM[8]}, [r0] @ initial tweak
adr $magic, .Lxts_magic
+#ifndef XTS_CHAIN_TWEAK
tst $len, #0xf @ if not multiple of 16
it ne @ Thumb2 thing, sanity check in ARM
subne $len, #0x10 @ subtract another 16 bytes
+#endif
subs $len, #0x80
blo .Lxts_dec_short
(__boundary - 1 < (end) - 1)? __boundary: (end); \
})
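+
+/* Stage-2 and host page tables use the same top-level indexing on ARM */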
+#define kvm_pgd_index(addr) pgd_index(addr)
+
static inline bool kvm_page_empty(void *ptr)
{
struct page *ptr_page = virt_to_page(ptr);
return page_count(ptr_page) == 1;
}
-
#define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
#define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp)
#define kvm_pud_table_empty(kvm, pudp) (0)
#define KVM_PREALLOC_LEVEL 0
-static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
+static inline void *kvm_get_hwpgd(struct kvm *kvm)
{
- return 0;
+ return kvm->arch.pgd;
}
-static inline void kvm_free_hwpgd(struct kvm *kvm) { }
-
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
+static inline unsigned int kvm_get_hwpgd_size(void)
{
- return kvm->arch.pgd;
+ return PTRS_PER_S2_PGD * sizeof(pgd_t);
}
struct kvm;
#define AT91_DBGU 0xfc00c000 /* SAMA5D4_BASE_USART3 */
#endif
-/* Keep in sync with mach-at91/include/mach/hardware.h */
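+/* Map the DBGU physical address to its virtual address (identity without MMU) */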
+#ifdef CONFIG_MMU
#define AT91_IO_P2V(x) ((x) - 0x01000000)
+#else
+#define AT91_IO_P2V(x) (x)
+#endif
#define AT91_DBGU_SR (0x14) /* Status Register */
#define AT91_DBGU_THR (0x1c) /* Transmitter Holding Register */
if (cpu_arch)
cpu_arch += CPU_ARCH_ARMv3;
} else if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) {
- unsigned int mmfr0;
-
/* Revised CPUID format. Read the Memory Model Feature
* Register 0 and check for VMSAv7 or PMSAv7 */
- asm("mrc p15, 0, %0, c0, c1, 4"
- : "=r" (mmfr0));
+ unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0);
if ((mmfr0 & 0x0000000f) >= 0x00000003 ||
(mmfr0 & 0x000000f0) >= 0x00000030)
cpu_arch = CPU_ARCH_ARMv7;
phys_addr_t addr = start, end = start + size;
phys_addr_t next;
- pgd = pgdp + pgd_index(addr);
+ pgd = pgdp + kvm_pgd_index(addr);
do {
next = kvm_pgd_addr_end(addr, end);
if (!pgd_none(*pgd))
phys_addr_t next;
pgd_t *pgd;
- pgd = kvm->arch.pgd + pgd_index(addr);
+ pgd = kvm->arch.pgd + kvm_pgd_index(addr);
do {
next = kvm_pgd_addr_end(addr, end);
stage2_flush_puds(kvm, pgd, addr, next);
__phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
}
+/* Free the HW pgd, one page at a time */
+static void kvm_free_hwpgd(void *hwpgd)
+{
+ free_pages_exact(hwpgd, kvm_get_hwpgd_size());
+}
+
+/* Allocate the HW PGD, making sure that each page gets its own refcount */
+static void *kvm_alloc_hwpgd(void)
+{
+ unsigned int size = kvm_get_hwpgd_size();
+
+ return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
+}
+
/**
* kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
* @kvm: The KVM struct pointer for the VM.
*/
int kvm_alloc_stage2_pgd(struct kvm *kvm)
{
- int ret;
pgd_t *pgd;
+ void *hwpgd;
if (kvm->arch.pgd != NULL) {
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
+ hwpgd = kvm_alloc_hwpgd();
+ if (!hwpgd)
+ return -ENOMEM;
+
+ /* When the kernel uses more levels of page tables than the
+ * guest, we allocate a fake PGD and pre-populate it to point
+ * to the next-level page table, which will be the real
+ * initial page table pointed to by the VTTBR.
+ *
+ * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
+ * the PMD and the kernel will use folded pud.
+ * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
+ * pages.
+ */
if (KVM_PREALLOC_LEVEL > 0) {
+ int i;
+
/*
* Allocate fake pgd for the page table manipulation macros to
* work. This is not used by the hardware and we have no
*/
pgd = (pgd_t *)kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
GFP_KERNEL | __GFP_ZERO);
+
+ if (!pgd) {
+ kvm_free_hwpgd(hwpgd);
+ return -ENOMEM;
+ }
+
+ /* Plug the HW PGD into the fake one. */
+ for (i = 0; i < PTRS_PER_S2_PGD; i++) {
+ if (KVM_PREALLOC_LEVEL == 1)
+ pgd_populate(NULL, pgd + i,
+ (pud_t *)hwpgd + i * PTRS_PER_PUD);
+ else if (KVM_PREALLOC_LEVEL == 2)
+ pud_populate(NULL, pud_offset(pgd, 0) + i,
+ (pmd_t *)hwpgd + i * PTRS_PER_PMD);
+ }
} else {
/*
* Allocate actual first-level Stage-2 page table used by the
* hardware for Stage-2 page table walks.
*/
- pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, S2_PGD_ORDER);
+ pgd = (pgd_t *)hwpgd;
}
- if (!pgd)
- return -ENOMEM;
-
- ret = kvm_prealloc_hwpgd(kvm, pgd);
- if (ret)
- goto out_err;
-
kvm_clean_pgd(pgd);
kvm->arch.pgd = pgd;
return 0;
-out_err:
- if (KVM_PREALLOC_LEVEL > 0)
- kfree(pgd);
- else
- free_pages((unsigned long)pgd, S2_PGD_ORDER);
- return ret;
}
/**
return;
unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
- kvm_free_hwpgd(kvm);
+ kvm_free_hwpgd(kvm_get_hwpgd(kvm));
if (KVM_PREALLOC_LEVEL > 0)
kfree(kvm->arch.pgd);
- else
- free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
+
kvm->arch.pgd = NULL;
}
pgd_t *pgd;
pud_t *pud;
- pgd = kvm->arch.pgd + pgd_index(addr);
+ pgd = kvm->arch.pgd + kvm_pgd_index(addr);
if (WARN_ON(pgd_none(*pgd))) {
if (!cache)
return NULL;
pgd_t *pgd;
phys_addr_t next;
- pgd = kvm->arch.pgd + pgd_index(addr);
+ pgd = kvm->arch.pgd + kvm_pgd_index(addr);
do {
/*
* Release kvm_mmu_lock periodically if the memory region is
phys_addr_t sram_pbase;
unsigned long sram_base;
struct device_node *node;
- struct platform_device *pdev;
+ struct platform_device *pdev = NULL;
- node = of_find_compatible_node(NULL, NULL, "mmio-sram");
- if (!node) {
- pr_warn("%s: failed to find sram node!\n", __func__);
- return;
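+	/* Use the first "mmio-sram" node that has a registered platform device */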
+ for_each_compatible_node(node, NULL, "mmio-sram") {
+ pdev = of_find_device_by_node(node);
+ if (pdev) {
+ of_node_put(node);
+ break;
+ }
}
- pdev = of_find_device_by_node(node);
if (!pdev) {
pr_warn("%s: failed to find sram device!\n", __func__);
- goto put_node;
+ return;
}
sram_pool = dev_get_gen_pool(&pdev->dev);
if (!sram_pool) {
pr_warn("%s: sram pool unavailable!\n", __func__);
- goto put_node;
+ return;
}
sram_base = gen_pool_alloc(sram_pool, at91_slow_clock_sz);
if (!sram_base) {
pr_warn("%s: unable to alloc ocram!\n", __func__);
- goto put_node;
+ return;
}
sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base);
slow_clock = __arm_ioremap_exec(sram_pbase, at91_slow_clock_sz, false);
-
-put_node:
- of_node_put(node);
}
#endif
" mcr p15, 0, %0, c7, c0, 4\n\t"
" str %5, [%1, %2]"
:
- : "r" (0), "r" (AT91_BASE_SYS), "r" (AT91RM9200_SDRAMC_LPR),
+ : "r" (0), "r" (at91_ramc_base[0]), "r" (AT91RM9200_SDRAMC_LPR),
"r" (1), "r" (AT91RM9200_SDRAMC_SRR),
"r" (lpr));
}
*/
#undef SLOWDOWN_MASTER_CLOCK
-#define MCKRDY_TIMEOUT 1000
-#define MOSCRDY_TIMEOUT 1000
-#define PLLALOCK_TIMEOUT 1000
-#define PLLBLOCK_TIMEOUT 1000
-
pmc .req r0
sdramc .req r1
ramc1 .req r2
* Wait until master clock is ready (after switching master clock source)
*/
.macro wait_mckrdy
- mov tmp2, #MCKRDY_TIMEOUT
-1: sub tmp2, tmp2, #1
- cmp tmp2, #0
- beq 2f
- ldr tmp1, [pmc, #AT91_PMC_SR]
+1: ldr tmp1, [pmc, #AT91_PMC_SR]
tst tmp1, #AT91_PMC_MCKRDY
beq 1b
-2:
.endm
/*
* Wait until master oscillator has stabilized.
*/
.macro wait_moscrdy
- mov tmp2, #MOSCRDY_TIMEOUT
-1: sub tmp2, tmp2, #1
- cmp tmp2, #0
- beq 2f
- ldr tmp1, [pmc, #AT91_PMC_SR]
+1: ldr tmp1, [pmc, #AT91_PMC_SR]
tst tmp1, #AT91_PMC_MOSCS
beq 1b
-2:
.endm
/*
* Wait until PLLA has locked.
*/
.macro wait_pllalock
- mov tmp2, #PLLALOCK_TIMEOUT
-1: sub tmp2, tmp2, #1
- cmp tmp2, #0
- beq 2f
- ldr tmp1, [pmc, #AT91_PMC_SR]
+1: ldr tmp1, [pmc, #AT91_PMC_SR]
tst tmp1, #AT91_PMC_LOCKA
beq 1b
-2:
.endm
/*
* Wait until PLLB has locked.
*/
.macro wait_pllblock
- mov tmp2, #PLLBLOCK_TIMEOUT
-1: sub tmp2, tmp2, #1
- cmp tmp2, #0
- beq 2f
- ldr tmp1, [pmc, #AT91_PMC_SR]
+1: ldr tmp1, [pmc, #AT91_PMC_SR]
tst tmp1, #AT91_PMC_LOCKB
beq 1b
-2:
.endm
.text
+ .arm
+
/* void at91_slow_clock(void __iomem *pmc, void __iomem *sdramc,
* void __iomem *ramc1, int memctrl)
*/
cmp memctrl, #AT91_MEMCTRL_DDRSDR
bne sdr_sr_enable
+ /* LPDDR1 --> force DDR2 mode during self-refresh */
+ ldr tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+ str tmp1, .saved_sam9_mdr
+ bic tmp1, tmp1, #~AT91_DDRSDRC_MD
+ cmp tmp1, #AT91_DDRSDRC_MD_LOW_POWER_DDR
+ ldreq tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+ biceq tmp1, tmp1, #AT91_DDRSDRC_MD
+ orreq tmp1, tmp1, #AT91_DDRSDRC_MD_DDR2
+ streq tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+
/* prepare for DDRAM self-refresh mode */
ldr tmp1, [sdramc, #AT91_DDRSDRC_LPR]
str tmp1, .saved_sam9_lpr
/* figure out if we use the second ram controller */
cmp ramc1, #0
- ldrne tmp2, [ramc1, #AT91_DDRSDRC_LPR]
- strne tmp2, .saved_sam9_lpr1
- bicne tmp2, #AT91_DDRSDRC_LPCB
- orrne tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH
+ beq ddr_no_2nd_ctrl
+
+ ldr tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+ str tmp2, .saved_sam9_mdr1
+ bic tmp2, tmp2, #~AT91_DDRSDRC_MD
+ cmp tmp2, #AT91_DDRSDRC_MD_LOW_POWER_DDR
+ ldreq tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+ biceq tmp2, tmp2, #AT91_DDRSDRC_MD
+ orreq tmp2, tmp2, #AT91_DDRSDRC_MD_DDR2
+ streq tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+
+ ldr tmp2, [ramc1, #AT91_DDRSDRC_LPR]
+ str tmp2, .saved_sam9_lpr1
+ bic tmp2, #AT91_DDRSDRC_LPCB
+ orr tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH
/* Enable DDRAM self-refresh mode */
+ str tmp2, [ramc1, #AT91_DDRSDRC_LPR]
+ddr_no_2nd_ctrl:
str tmp1, [sdramc, #AT91_DDRSDRC_LPR]
- strne tmp2, [ramc1, #AT91_DDRSDRC_LPR]
b sdr_sr_done
/* Turn off the main oscillator */
ldr tmp1, [pmc, #AT91_CKGR_MOR]
bic tmp1, tmp1, #AT91_PMC_MOSCEN
+ orr tmp1, tmp1, #AT91_PMC_KEY
str tmp1, [pmc, #AT91_CKGR_MOR]
/* Wait for interrupt */
/* Turn on the main oscillator */
ldr tmp1, [pmc, #AT91_CKGR_MOR]
orr tmp1, tmp1, #AT91_PMC_MOSCEN
+ orr tmp1, tmp1, #AT91_PMC_KEY
str tmp1, [pmc, #AT91_CKGR_MOR]
wait_moscrdy
*/
cmp memctrl, #AT91_MEMCTRL_DDRSDR
bne sdr_en_restore
+ /* Restore MDR in case of LPDDR1 */
+ ldr tmp1, .saved_sam9_mdr
+ str tmp1, [sdramc, #AT91_DDRSDRC_MDR]
/* Restore LPR on AT91 with DDRAM */
ldr tmp1, .saved_sam9_lpr
str tmp1, [sdramc, #AT91_DDRSDRC_LPR]
/* if we use the second ram controller */
cmp ramc1, #0
+ ldrne tmp2, .saved_sam9_mdr1
+ strne tmp2, [ramc1, #AT91_DDRSDRC_MDR]
ldrne tmp2, .saved_sam9_lpr1
strne tmp2, [ramc1, #AT91_DDRSDRC_LPR]
.saved_sam9_lpr1:
.word 0
+.saved_sam9_mdr:
+ .word 0
+
+.saved_sam9_mdr1:
+ .word 0
+
ENTRY(at91_slow_clock_sz)
.word .-at91_slow_clock
*/
void exynos_cpu_power_down(int cpu)
{
- if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") ||
- of_machine_is_compatible("samsung,exynos5800"))) {
+ if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) {
/*
* Bypass power down for CPU0 during suspend. Check for
* the SYS_PWR_REG value to decide if we are suspending
of_genpd_add_provider_simple(np, &pd->pd);
}
+ /* Assign the child power domains to their parents */
+ for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
+ struct generic_pm_domain *child_domain, *parent_domain;
+ struct of_phandle_args args;
+
+ args.np = np;
+ args.args_count = 0;
+ child_domain = of_genpd_get_from_provider(&args);
+		if (IS_ERR(child_domain))
+ continue;
+
+ if (of_parse_phandle_with_args(np, "power-domains",
+ "#power-domain-cells", 0, &args) != 0)
+ continue;
+
+ parent_domain = of_genpd_get_from_provider(&args);
+		if (IS_ERR(parent_domain))
+ continue;
+
+ if (pm_genpd_add_subdomain(parent_domain, child_domain))
+ pr_warn("%s failed to add subdomain: %s\n",
+ parent_domain->name, child_domain->name);
+ else
+ pr_info("%s has as child subdomain: %s.\n",
+ parent_domain->name, child_domain->name);
+ of_node_put(np);
+ }
+
return 0;
}
arch_initcall(exynos4_pm_init_power_domain);
static u32 exynos_irqwake_intmask = 0xffffffff;
static const struct exynos_wkup_irq exynos3250_wkup_irq[] = {
- { 73, BIT(1) }, /* RTC alarm */
- { 74, BIT(2) }, /* RTC tick */
+ { 105, BIT(1) }, /* RTC alarm */
+ { 106, BIT(2) }, /* RTC tick */
{ /* sentinel */ },
};
* set bit IOMUXC_GPR1[21]. Or the PTP clock must be from pad
* (external OSC), and we need to clear the bit.
*/
- clksel = ptp_clk == enet_ref ? IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
- IMX6Q_GPR1_ENET_CLK_SEL_PAD;
+ clksel = clk_is_match(ptp_clk, enet_ref) ?
+ IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
+ IMX6Q_GPR1_ENET_CLK_SEL_PAD;
gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
if (!IS_ERR(gpr))
regmap_update_bits(gpr, IOMUXC_GPR1,
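
For context on the clk_is_match() change above: once each consumer gets its own struct clk handle, two handles for the same hardware clock may compare unequal as pointers, so identity has to be tested through the API. A minimal sketch (hypothetical helper name):

#include <linux/clk.h>

static bool demo_same_hw_clock(struct clk *a, struct clk *b)
{
	/* Not "a == b": distinct handles can wrap the same clk_hw. */
	return clk_is_match(a, b);
}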
return kasprintf(GFP_KERNEL, "OMAP4");
else if (soc_is_omap54xx())
return kasprintf(GFP_KERNEL, "OMAP5");
+ else if (soc_is_am33xx() || soc_is_am335x())
+ return kasprintf(GFP_KERNEL, "AM33xx");
else if (soc_is_am43xx())
return kasprintf(GFP_KERNEL, "AM43xx");
else if (soc_is_dra7xx())
if (ret == -EBUSY)
pr_warn("omap_hwmod: %s: failed to hardreset\n", oh->name);
- if (!ret) {
+ if (oh->clkdm) {
/*
* Set the clockdomain to HW_AUTO, assuming that the
* previous state was HW_AUTO.
*/
- if (oh->clkdm && hwsup)
+ if (hwsup)
clkdm_allow_idle(oh->clkdm);
- } else {
- if (oh->clkdm)
- clkdm_hwmod_disable(oh->clkdm, oh);
+
+ clkdm_hwmod_disable(oh->clkdm, oh);
}
return ret;
INIT_LIST_HEAD(&oh->master_ports);
INIT_LIST_HEAD(&oh->slave_ports);
spin_lock_init(&oh->_lock);
+ lockdep_set_class(&oh->_lock, &oh->hwmod_key);
oh->_state = _HWMOD_STATE_REGISTERED;
u32 _sysc_cache;
void __iomem *_mpu_rt_va;
spinlock_t _lock;
+ struct lock_class_key hwmod_key; /* unique lock class */
struct list_head node;
struct omap_hwmod_ocp_if *_mpu_port;
unsigned int (*xlate_irq)(unsigned int);
*
*/
-static struct omap_hwmod_class dra7xx_pcie_hwmod_class = {
+static struct omap_hwmod_class dra7xx_pciess_hwmod_class = {
.name = "pcie",
};
/* pcie1 */
-static struct omap_hwmod dra7xx_pcie1_hwmod = {
+static struct omap_hwmod dra7xx_pciess1_hwmod = {
.name = "pcie1",
- .class = &dra7xx_pcie_hwmod_class,
+ .class = &dra7xx_pciess_hwmod_class,
.clkdm_name = "pcie_clkdm",
.main_clk = "l4_root_clk_div",
- .prcm = {
- .omap4 = {
- .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET,
- .modulemode = MODULEMODE_SWCTRL,
- },
- },
-};
-
-/* pcie2 */
-static struct omap_hwmod dra7xx_pcie2_hwmod = {
- .name = "pcie2",
- .class = &dra7xx_pcie_hwmod_class,
- .clkdm_name = "pcie_clkdm",
- .main_clk = "l4_root_clk_div",
- .prcm = {
- .omap4 = {
- .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET,
- .modulemode = MODULEMODE_SWCTRL,
- },
- },
-};
-
-/*
- * 'PCIE PHY' class
- *
- */
-
-static struct omap_hwmod_class dra7xx_pcie_phy_hwmod_class = {
- .name = "pcie-phy",
-};
-
-/* pcie1 phy */
-static struct omap_hwmod dra7xx_pcie1_phy_hwmod = {
- .name = "pcie1-phy",
- .class = &dra7xx_pcie_phy_hwmod_class,
- .clkdm_name = "l3init_clkdm",
- .main_clk = "l4_root_clk_div",
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L3INIT_PCIESS1_CLKCTRL_OFFSET,
},
};
-/* pcie2 phy */
-static struct omap_hwmod dra7xx_pcie2_phy_hwmod = {
- .name = "pcie2-phy",
- .class = &dra7xx_pcie_phy_hwmod_class,
- .clkdm_name = "l3init_clkdm",
+/* pcie2 */
+static struct omap_hwmod dra7xx_pciess2_hwmod = {
+ .name = "pcie2",
+ .class = &dra7xx_pciess_hwmod_class,
+ .clkdm_name = "pcie_clkdm",
.main_clk = "l4_root_clk_div",
.prcm = {
.omap4 = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
-/* l3_main_1 -> pcie1 */
-static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie1 = {
+/* l3_main_1 -> pciess1 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess1 = {
.master = &dra7xx_l3_main_1_hwmod,
- .slave = &dra7xx_pcie1_hwmod,
+ .slave = &dra7xx_pciess1_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
-/* l4_cfg -> pcie1 */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1 = {
+/* l4_cfg -> pciess1 */
+static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess1 = {
.master = &dra7xx_l4_cfg_hwmod,
- .slave = &dra7xx_pcie1_hwmod,
+ .slave = &dra7xx_pciess1_hwmod,
.clk = "l4_root_clk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
-/* l3_main_1 -> pcie2 */
-static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie2 = {
+/* l3_main_1 -> pciess2 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess2 = {
.master = &dra7xx_l3_main_1_hwmod,
- .slave = &dra7xx_pcie2_hwmod,
+ .slave = &dra7xx_pciess2_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
-/* l4_cfg -> pcie2 */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2 = {
- .master = &dra7xx_l4_cfg_hwmod,
- .slave = &dra7xx_pcie2_hwmod,
- .clk = "l4_root_clk_div",
- .user = OCP_USER_MPU | OCP_USER_SDMA,
-};
-
-/* l4_cfg -> pcie1 phy */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1_phy = {
- .master = &dra7xx_l4_cfg_hwmod,
- .slave = &dra7xx_pcie1_phy_hwmod,
- .clk = "l4_root_clk_div",
- .user = OCP_USER_MPU | OCP_USER_SDMA,
-};
-
-/* l4_cfg -> pcie2 phy */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2_phy = {
+/* l4_cfg -> pciess2 */
+static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess2 = {
.master = &dra7xx_l4_cfg_hwmod,
- .slave = &dra7xx_pcie2_phy_hwmod,
+ .slave = &dra7xx_pciess2_hwmod,
.clk = "l4_root_clk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
&dra7xx_l4_cfg__mpu,
&dra7xx_l4_cfg__ocp2scp1,
&dra7xx_l4_cfg__ocp2scp3,
- &dra7xx_l3_main_1__pcie1,
- &dra7xx_l4_cfg__pcie1,
- &dra7xx_l3_main_1__pcie2,
- &dra7xx_l4_cfg__pcie2,
- &dra7xx_l4_cfg__pcie1_phy,
- &dra7xx_l4_cfg__pcie2_phy,
+ &dra7xx_l3_main_1__pciess1,
+ &dra7xx_l4_cfg__pciess1,
+ &dra7xx_l3_main_1__pciess2,
+ &dra7xx_l4_cfg__pciess2,
&dra7xx_l3_main_1__qspi,
&dra7xx_l4_per3__rtcss,
&dra7xx_l4_cfg__sata,
static void __init omap3_evm_legacy_init(void)
{
+ hsmmc2_internal_input_clk();
legacy_init_wl12xx(WL12XX_REFCLOCK_38, 0, 149);
}
{
saved_mask[0] =
omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST,
- OMAP4_PRM_IRQSTATUS_MPU_OFFSET);
+ OMAP4_PRM_IRQENABLE_MPU_OFFSET);
saved_mask[1] =
omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST,
- OMAP4_PRM_IRQSTATUS_MPU_2_OFFSET);
+ OMAP4_PRM_IRQENABLE_MPU_2_OFFSET);
omap4_prm_write_inst_reg(0, OMAP4430_PRM_OCP_SOCKET_INST,
OMAP4_PRM_IRQENABLE_MPU_OFFSET);
#include <linux/platform_data/video-pxafb.h>
#include <mach/bitfield.h>
#include <linux/platform_data/mmc-pxamci.h>
+#include <linux/smc91x.h>
#include "generic.h"
#include "devices.h"
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#define ICHP_VAL_IRQ (1 << 31)
#define ICHP_IRQ(i) (((i) >> 16) & 0x7fff)
#define IPR_VALID (1 << 31)
-#define IRQ_BIT(n) (((n) - PXA_IRQ(0)) & 0x1f)
#define MAX_INTERNAL_IRQS 128
static void __iomem *pxa_irq_base;
static int pxa_internal_irq_nr;
static bool cpu_has_ipr;
+static struct irq_domain *pxa_irq_domain;
static inline void __iomem *irq_base(int i)
{
void pxa_mask_irq(struct irq_data *d)
{
void __iomem *base = irq_data_get_irq_chip_data(d);
+ irq_hw_number_t irq = irqd_to_hwirq(d);
uint32_t icmr = __raw_readl(base + ICMR);
- icmr &= ~(1 << IRQ_BIT(d->irq));
+ icmr &= ~BIT(irq & 0x1f);
__raw_writel(icmr, base + ICMR);
}
void pxa_unmask_irq(struct irq_data *d)
{
void __iomem *base = irq_data_get_irq_chip_data(d);
+ irq_hw_number_t irq = irqd_to_hwirq(d);
uint32_t icmr = __raw_readl(base + ICMR);
- icmr |= 1 << IRQ_BIT(d->irq);
+ icmr |= BIT(irq & 0x1f);
__raw_writel(icmr, base + ICMR);
}
} while (1);
}
-void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int))
+static int pxa_irq_map(struct irq_domain *h, unsigned int virq,
+ irq_hw_number_t hw)
{
- int irq, i, n;
+ void __iomem *base = irq_base(hw / 32);
- BUG_ON(irq_nr > MAX_INTERNAL_IRQS);
+ /* initialize interrupt priority */
+ if (cpu_has_ipr)
+ __raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw));
+
+ irq_set_chip_and_handler(virq, &pxa_internal_irq_chip,
+ handle_level_irq);
+ irq_set_chip_data(virq, base);
+ set_irq_flags(virq, IRQF_VALID);
+
+ return 0;
+}
+
+static struct irq_domain_ops pxa_irq_ops = {
+ .map = pxa_irq_map,
+ .xlate = irq_domain_xlate_onecell,
+};
+
+static __init void
+pxa_init_irq_common(struct device_node *node, int irq_nr,
+ int (*fn)(struct irq_data *, unsigned int))
+{
+ int n;
pxa_internal_irq_nr = irq_nr;
- cpu_has_ipr = !cpu_is_pxa25x();
- pxa_irq_base = io_p2v(0x40d00000);
+ pxa_irq_domain = irq_domain_add_legacy(node, irq_nr,
+ PXA_IRQ(0), 0,
+ &pxa_irq_ops, NULL);
+ if (!pxa_irq_domain)
+ panic("Unable to add PXA IRQ domain\n");
+ irq_set_default_host(pxa_irq_domain);
for (n = 0; n < irq_nr; n += 32) {
void __iomem *base = irq_base(n >> 5);
__raw_writel(0, base + ICMR); /* disable all IRQs */
__raw_writel(0, base + ICLR); /* all IRQs are IRQ, not FIQ */
- for (i = n; (i < (n + 32)) && (i < irq_nr); i++) {
- /* initialize interrupt priority */
- if (cpu_has_ipr)
- __raw_writel(i | IPR_VALID, pxa_irq_base + IPR(i));
-
- irq = PXA_IRQ(i);
- irq_set_chip_and_handler(irq, &pxa_internal_irq_chip,
- handle_level_irq);
- irq_set_chip_data(irq, base);
- set_irq_flags(irq, IRQF_VALID);
- }
}
-
/* only unmasked interrupts kick us out of idle */
__raw_writel(1, irq_base(0) + ICCR);
pxa_internal_irq_chip.irq_set_wake = fn;
}
+void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int))
+{
+ BUG_ON(irq_nr > MAX_INTERNAL_IRQS);
+
+ pxa_irq_base = io_p2v(0x40d00000);
+ cpu_has_ipr = !cpu_is_pxa25x();
+ pxa_init_irq_common(NULL, irq_nr, fn);
+}
+
#ifdef CONFIG_PM
static unsigned long saved_icmr[MAX_INTERNAL_IRQS/32];
static unsigned long saved_ipr[MAX_INTERNAL_IRQS];
};
#ifdef CONFIG_OF
-static struct irq_domain *pxa_irq_domain;
-
-static int pxa_irq_map(struct irq_domain *h, unsigned int virq,
- irq_hw_number_t hw)
-{
- void __iomem *base = irq_base(hw / 32);
-
- /* initialize interrupt priority */
- if (cpu_has_ipr)
- __raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw));
-
- irq_set_chip_and_handler(hw, &pxa_internal_irq_chip,
- handle_level_irq);
- irq_set_chip_data(hw, base);
- set_irq_flags(hw, IRQF_VALID);
-
- return 0;
-}
-
-static struct irq_domain_ops pxa_irq_ops = {
- .map = pxa_irq_map,
- .xlate = irq_domain_xlate_onecell,
-};
-
static const struct of_device_id intc_ids[] __initconst = {
{ .compatible = "marvell,pxa-intc", },
{}
{
struct device_node *node;
struct resource res;
- int n, ret;
+ int ret;
node = of_find_matching_node(NULL, intc_ids);
if (!node) {
return;
}
- pxa_irq_domain = irq_domain_add_legacy(node, pxa_internal_irq_nr, 0, 0,
- &pxa_irq_ops, NULL);
- if (!pxa_irq_domain)
- panic("Unable to add PXA IRQ domain\n");
-
- irq_set_default_host(pxa_irq_domain);
-
- for (n = 0; n < pxa_internal_irq_nr; n += 32) {
- void __iomem *base = irq_base(n >> 5);
-
- __raw_writel(0, base + ICMR); /* disable all IRQs */
- __raw_writel(0, base + ICLR); /* all IRQs are IRQ, not FIQ */
- }
-
- /* only unmasked interrupts kick us out of idle */
- __raw_writel(1, irq_base(0) + ICCR);
-
- pxa_internal_irq_chip.irq_set_wake = fn;
+ pxa_init_irq_common(node, pxa_internal_irq_nr, fn);
}
#endif /* CONFIG_OF */
};
struct smc91x_platdata smc91x_platdata = {
- .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT;
+ .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
};
static struct platform_device smc91x_device = {
};
static struct platform_device can_regulator_device = {
- .name = "reg-fixed-volage",
+ .name = "reg-fixed-voltage",
.id = 0,
.dev = {
.platform_data = &can_regulator_pdata,
.id = 0,
.res = smc91x_resources,
.num_res = ARRAY_SIZE(smc91x_resources),
- .data = &smc91c_platdata,
- .size_data = sizeof(smc91c_platdata),
+ .data = &smc91x_platdata,
+ .size_data = sizeof(smc91x_platdata),
};
int ret, irq;
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
.dev = {
- .platform_data = &smc91c_platdata,
+ .platform_data = &smc91x_platdata,
},
};
extern unsigned long socfpga_cpu1start_addr;
-#define SOCFPGA_SCU_VIRT_BASE 0xfffec000
+#define SOCFPGA_SCU_VIRT_BASE 0xfee00000
#endif
#include <asm/hardware/cache-l2x0.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
+#include <asm/cacheflush.h>
#include "core.h"
(u32 *) &socfpga_cpu1start_addr))
pr_err("SMP: Need cpu1-start-addr in device tree.\n");
+ /* Ensure that socfpga_cpu1start_addr is visible to other CPUs */
+ smp_wmb();
+ sync_cache_w(&socfpga_cpu1start_addr);
+
sys_manager_base_addr = of_iomap(np, 0);
np = of_find_compatible_node(NULL, NULL, "altr,rst-mgr");
"st,stih415",
"st,stih416",
"st,stih407",
+ "st,stih410",
"st,stih418",
NULL
};
menuconfig ARCH_SUNXI
bool "Allwinner SoCs" if ARCH_MULTI_V7
select ARCH_REQUIRE_GPIOLIB
+ select ARCH_HAS_RESET_CONTROLLER
select CLKSRC_MMIO
select GENERIC_IRQ_CHIP
select PINCTRL
select SUN4I_TIMER
+ select RESET_CONTROLLER
if ARCH_SUNXI
config MACH_SUN6I
bool "Allwinner A31 (sun6i) SoCs support"
default ARCH_SUNXI
- select ARCH_HAS_RESET_CONTROLLER
select ARM_GIC
select MFD_SUN6I_PRCM
- select RESET_CONTROLLER
select SUN5I_HSTIMER
config MACH_SUN7I
config MACH_SUN8I
bool "Allwinner A23 (sun8i) SoCs support"
default ARCH_SUNXI
- select ARCH_HAS_RESET_CONTROLLER
select ARM_GIC
select MFD_SUN6I_PRCM
- select RESET_CONTROLLER
config MACH_SUN9I
bool "Allwinner (sun9i) SoCs support"
default ARCH_SUNXI
- select ARCH_HAS_RESET_CONTROLLER
select ARM_GIC
- select RESET_CONTROLLER
endif
}
ret = l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_512K);
- if (ret)
- return;
-
- switch (assoc) {
- case 16:
- *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK;
- *aux_val |= L310_AUX_CTRL_ASSOCIATIVITY_16;
- *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK;
- break;
- case 8:
- *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK;
- *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK;
- break;
- default:
- pr_err("L2C-310 OF cache associativity %d invalid, only 8 or 16 permitted\n",
- assoc);
- break;
+ if (!ret) {
+ switch (assoc) {
+ case 16:
+ *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK;
+ *aux_val |= L310_AUX_CTRL_ASSOCIATIVITY_16;
+ *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK;
+ break;
+ case 8:
+ *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK;
+ *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK;
+ break;
+ default:
+ pr_err("L2C-310 OF cache associativity %d invalid, only 8 or 16 permitted\n",
+ assoc);
+ break;
+ }
}
prefetch = l2x0_saved_regs.prefetch_ctrl;
*/
if (sizeof(mask) != sizeof(dma_addr_t) &&
mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) < max_pfn) {
+ dma_to_pfn(dev, ~0) < max_pfn - 1) {
if (warn) {
dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
mask);
pr_alert("Unhandled fault: %s (0x%03x) at 0x%08lx\n",
inf->name, fsr, addr);
+ show_pte(current->mm, addr);
info.si_signo = inf->sig;
info.si_errno = 0;
WARN_ON_ONCE(1);
}
- if (!is_module_address(start) || !is_module_address(end - 1))
+ if (start < MODULES_VADDR || start >= MODULES_END)
+ return -EINVAL;
+
+	if (end < MODULES_VADDR || end >= MODULES_END)
return -EINVAL;
data.set_mask = set_mask;
struct device *dev = &pdev->dev;
const struct of_device_id *match;
const struct dmtimer_platform_data *pdata;
+ int ret;
match = of_match_device(of_match_ptr(omap_timer_match), dev);
pdata = match ? match->data : dev->platform_data;
}
if (!timer->reserved) {
- pm_runtime_get_sync(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "%s: pm_runtime_get_sync failed!\n",
+ __func__);
+ goto err_get_sync;
+ }
__omap_dm_timer_init_regs(timer);
pm_runtime_put(dev);
}
dev_dbg(dev, "Device Probed.\n");
return 0;
+
+err_get_sync:
+ pm_runtime_put_noidle(dev);
+ pm_runtime_disable(dev);
+ return ret;
}
/**
}
spin_unlock_irqrestore(&dm_timer_lock, flags);
+ pm_runtime_disable(&pdev->dev);
+
return ret;
}
};
sgenet0: ethernet@1f210000 {
- compatible = "apm,xgene-enet";
+ compatible = "apm,xgene1-sgenet";
status = "disabled";
reg = <0x0 0x1f210000 0x0 0xd100>,
<0x0 0x1f200000 0x0 0Xc300>,
};
xgenet: ethernet@1f610000 {
- compatible = "apm,xgene-enet";
+ compatible = "apm,xgene1-xgenet";
status = "disabled";
reg = <0x0 0x1f610000 0x0 0xd100>,
<0x0 0x1f600000 0x0 0Xc300>,
*/
/* SoC fixed clocks */
- soc_uartclk: refclk72738khz {
+ soc_uartclk: refclk7273800hz {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <7273800>;
__ret; \
})
-#define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-
-#define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
- cmpxchg_double_local(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \
- o1, o2, n1, n2)
+#define _protect_cmpxchg_local(pcp, o, n) \
+({ \
+ typeof(*raw_cpu_ptr(&(pcp))) __ret; \
+ preempt_disable(); \
+ __ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n); \
+ preempt_enable(); \
+ __ret; \
+})
+
+#define this_cpu_cmpxchg_1(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_2(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+
+#define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
+({ \
+ int __ret; \
+ preempt_disable(); \
+ __ret = cmpxchg_double_local( raw_cpu_ptr(&(ptr1)), \
+ raw_cpu_ptr(&(ptr2)), \
+ o1, o2, n1, n2); \
+ preempt_enable(); \
+ __ret; \
+})
#define cmpxchg64(ptr,o,n) cmpxchg((ptr),(o),(n))
#define cmpxchg64_local(ptr,o,n) cmpxchg_local((ptr),(o),(n))
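
A minimal sketch of how the now preemption-safe accessors are used, assuming a hypothetical per-CPU counter demo_counter; the retry loop tolerates migration between the read and the cmpxchg because each accessor pins the CPU only for its own duration:

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, demo_counter);

static void demo_inc(void)
{
	unsigned long old;

	/* Re-read and retry if we migrated between the two accessors. */
	do {
		old = this_cpu_read(demo_counter);
	} while (this_cpu_cmpxchg(demo_counter, old, old + 1) != old);
}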
* 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
* not known to exist and will break with this configuration.
*
+ * VTCR_EL2.PS is extracted from ID_AA64MMFR0_EL1.PARange at boot time
+ * (see hyp-init.S).
+ *
* Note that when using 4K pages, we concatenate two first level page tables
* together.
*
#ifdef CONFIG_ARM64_64K_PAGES
/*
* Stage2 translation configuration:
- * 40bits output (PS = 2)
* 40bits input (T0SZ = 24)
* 64kB pages (TG0 = 1)
* 2 level page tables (SL = 1)
#else
/*
* Stage2 translation configuration:
- * 40bits output (PS = 2)
* 40bits input (T0SZ = 24)
* 4kB pages (TG0 = 0)
* 3 level page tables (SL = 1)
#define PTRS_PER_S2_PGD (1 << PTRS_PER_S2_PGD_SHIFT)
#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+#define kvm_pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
/*
* If we are concatenating first level stage-2 page tables, we would have less
* than or equal to 16 pointers in the fake PGD, because that's what the
#define KVM_PREALLOC_LEVEL (0)
#endif
-/**
- * kvm_prealloc_hwpgd - allocate inital table for VTTBR
- * @kvm: The KVM struct pointer for the VM.
- * @pgd: The kernel pseudo pgd
- *
- * When the kernel uses more levels of page tables than the guest, we allocate
- * a fake PGD and pre-populate it to point to the next-level page table, which
- * will be the real initial page table pointed to by the VTTBR.
- *
- * When KVM_PREALLOC_LEVEL==2, we allocate a single page for the PMD and
- * the kernel will use folded pud. When KVM_PREALLOC_LEVEL==1, we
- * allocate 2 consecutive PUD pages.
- */
-static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
-{
- unsigned int i;
- unsigned long hwpgd;
-
- if (KVM_PREALLOC_LEVEL == 0)
- return 0;
-
- hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, PTRS_PER_S2_PGD_SHIFT);
- if (!hwpgd)
- return -ENOMEM;
-
- for (i = 0; i < PTRS_PER_S2_PGD; i++) {
- if (KVM_PREALLOC_LEVEL == 1)
- pgd_populate(NULL, pgd + i,
- (pud_t *)hwpgd + i * PTRS_PER_PUD);
- else if (KVM_PREALLOC_LEVEL == 2)
- pud_populate(NULL, pud_offset(pgd, 0) + i,
- (pmd_t *)hwpgd + i * PTRS_PER_PMD);
- }
-
- return 0;
-}
-
static inline void *kvm_get_hwpgd(struct kvm *kvm)
{
pgd_t *pgd = kvm->arch.pgd;
return pmd_offset(pud, 0);
}
-static inline void kvm_free_hwpgd(struct kvm *kvm)
+static inline unsigned int kvm_get_hwpgd_size(void)
{
- if (KVM_PREALLOC_LEVEL > 0) {
- unsigned long hwpgd = (unsigned long)kvm_get_hwpgd(kvm);
- free_pages(hwpgd, PTRS_PER_S2_PGD_SHIFT);
- }
+ if (KVM_PREALLOC_LEVEL > 0)
+ return PTRS_PER_S2_PGD * PAGE_SIZE;
+ return PTRS_PER_S2_PGD * sizeof(pgd_t);
}
static inline bool kvm_page_empty(void *ptr)
{
unsigned int cpu = smp_processor_id();
+ /*
+ * init_mm.pgd does not contain any user mappings and it is always
+ * active for kernel addresses in TTBR1. Just set the reserved TTBR0.
+ */
+ if (next == &init_mm) {
+ cpu_set_reserved_ttbr0();
+ return;
+ }
+
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
check_and_switch_context(next, tsk);
}
return ret;
}
+#define _percpu_read(pcp) \
+({ \
+ typeof(pcp) __retval; \
+ preempt_disable(); \
+ __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), \
+ sizeof(pcp)); \
+ preempt_enable(); \
+ __retval; \
+})
+
+#define _percpu_write(pcp, val) \
+do { \
+ preempt_disable(); \
+ __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), \
+ sizeof(pcp)); \
+ preempt_enable(); \
+} while(0) \
+
+#define _pcp_protect(operation, pcp, val) \
+({ \
+ typeof(pcp) __retval; \
+ preempt_disable(); \
+ __retval = (typeof(pcp))operation(raw_cpu_ptr(&(pcp)), \
+ (val), sizeof(pcp)); \
+ preempt_enable(); \
+ __retval; \
+})
+
#define _percpu_add(pcp, val) \
- __percpu_add(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+ _pcp_protect(__percpu_add, pcp, val)
-#define _percpu_add_return(pcp, val) (typeof(pcp)) (_percpu_add(pcp, val))
+#define _percpu_add_return(pcp, val) _percpu_add(pcp, val)
#define _percpu_and(pcp, val) \
- __percpu_and(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+ _pcp_protect(__percpu_and, pcp, val)
#define _percpu_or(pcp, val) \
- __percpu_or(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
-
-#define _percpu_read(pcp) (typeof(pcp)) \
- (__percpu_read(raw_cpu_ptr(&(pcp)), sizeof(pcp)))
-
-#define _percpu_write(pcp, val) \
- __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp))
+ _pcp_protect(__percpu_or, pcp, val)
#define _percpu_xchg(pcp, val) (typeof(pcp)) \
- (__percpu_xchg(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp)))
+ _pcp_protect(__percpu_xchg, pcp, (unsigned long)(val))
#define this_cpu_add_1(pcp, val) _percpu_add(pcp, val)
#define this_cpu_add_2(pcp, val) _percpu_add(pcp, val)
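
The race these wrappers close, sketched with a hypothetical per-CPU statistic: without the internal preempt_disable(), a task could compute this CPU's address, be migrated, and then update another CPU's copy:

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, demo_stat);

static void demo_account(void)
{
	/*
	 * Racy with preemption enabled: the pointer may belong to the
	 * wrong CPU by the time the increment lands.
	 *
	 *	unsigned long *p = raw_cpu_ptr(&demo_stat);
	 *	*p += 1;
	 *
	 * Safe: preemption is now disabled inside the accessor.
	 */
	this_cpu_add(demo_stat, 1);
}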
#include <asm/memory.h>
-#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
+#define cpu_switch_mm(pgd,mm) \
+do { \
+ BUG_ON(pgd == swapper_pg_dir); \
+ cpu_do_switch_mm(virt_to_phys(pgd),mm); \
+} while (0)
#define cpu_get_pgd() \
({ \
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
unsigned long addr)
{
+ __flush_tlb_pgtable(tlb->mm, addr);
pgtable_page_dtor(pte);
tlb_remove_entry(tlb, pte);
}
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
unsigned long addr)
{
+ __flush_tlb_pgtable(tlb->mm, addr);
tlb_remove_entry(tlb, virt_to_page(pmdp));
}
#endif
static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
unsigned long addr)
{
+ __flush_tlb_pgtable(tlb->mm, addr);
tlb_remove_entry(tlb, virt_to_page(pudp));
}
#endif
flush_tlb_all();
}
+/*
+ * Used to invalidate the TLB (walk caches) corresponding to intermediate page
+ * table levels (pgd/pud/pmd).
+ */
+static inline void __flush_tlb_pgtable(struct mm_struct *mm,
+ unsigned long uaddr)
+{
+ unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48);
+
+ dsb(ishst);
+ asm("tlbi vae1is, %0" : : "r" (addr));
+ dsb(ish);
+}
/*
* On AArch64, the cache coherency is handled via the set_pte_at() function.
*/
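
For reference, the operand built in __flush_tlb_pgtable() above packs the page number into bits [43:0] and the ASID into bits [63:48] of the TLBI VAE1IS argument. A stand-alone arithmetic illustration, with made-up address and ASID values:

#include <stdio.h>

int main(void)
{
	unsigned long uaddr = 0x0000007fb0001000UL;	/* example VA */
	unsigned long asid  = 0x42;			/* example ASID */
	unsigned long op    = uaddr >> 12 | asid << 48;

	printf("tlbi vae1is operand: %#lx\n", op);
	return 0;
}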
static void efi_set_pgd(struct mm_struct *mm)
{
- cpu_switch_mm(mm->pgd, mm);
+ if (mm == &init_mm)
+ cpu_set_reserved_ttbr0();
+ else
+ cpu_switch_mm(mm->pgd, mm);
+
flush_tlb_all();
if (icache_is_aivivt())
__flush_icache_all();
efi_set_pgd(current->active_mm);
preempt_enable();
}
+
+/*
+ * UpdateCapsule() depends on the system being shutdown via
+ * ResetSystem().
+ */
+bool efi_poweroff_required(void)
+{
+ return efi_enabled(EFI_RUNTIME_SERVICES);
+}
* zeroing of .bss would clobber it.
*/
.pushsection .data..cacheline_aligned
-ENTRY(__boot_cpu_mode)
.align L1_CACHE_SHIFT
+ENTRY(__boot_cpu_mode)
.long BOOT_CPU_MODE_EL2
.long 0
.popsection
#include <stdarg.h>
#include <linux/compat.h>
+#include <linux/efi.h>
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/kernel.h>
local_irq_disable();
smp_send_stop();
+ /*
+ * UpdateCapsule() depends on the system being reset via
+ * ResetSystem().
+ */
+ if (efi_enabled(EFI_RUNTIME_SERVICES))
+ efi_reboot(reboot_mode, NULL);
+
/* Now call the architecture specific reboot code. */
if (arm_pm_restart)
arm_pm_restart(reboot_mode, cmd);
}
early_param("coherent_pool", early_coherent_pool);
-static void *__alloc_from_pool(size_t size, struct page **ret_page)
+static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
{
unsigned long val;
void *ptr = NULL;
*ret_page = phys_to_page(phys);
ptr = (void *)val;
+ if (flags & __GFP_ZERO)
+ memset(ptr, 0, size);
}
return ptr;
flags |= GFP_DMA;
if (IS_ENABLED(CONFIG_DMA_CMA) && (flags & __GFP_WAIT)) {
struct page *page;
+ void *addr;
size = PAGE_ALIGN(size);
page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
return NULL;
*dma_handle = phys_to_dma(dev, page_to_phys(page));
- return page_address(page);
+ addr = page_address(page);
+ if (flags & __GFP_ZERO)
+ memset(addr, 0, size);
+ return addr;
} else {
return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
}
if (!coherent && !(flags & __GFP_WAIT)) {
struct page *page = NULL;
- void *addr = __alloc_from_pool(size, &page);
+ void *addr = __alloc_from_pool(size, &page, flags);
if (addr)
*dma_handle = phys_to_dma(dev, page_to_phys(page));
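
The motivation for threading __GFP_ZERO through both allocation paths above is that callers widely assume coherent DMA memory comes back zeroed. A hedged caller-side sketch (hypothetical helper name):

#include <linux/dma-mapping.h>

static void *demo_alloc_ring(struct device *dev, size_t size,
			     dma_addr_t *dma)
{
	/* Request zeroed memory explicitly; the arm64 paths above now
	 * honor the flag on every allocation route. */
	return dma_alloc_coherent(dev, size, dma, GFP_KERNEL | __GFP_ZERO);
}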
*/
#define pgtable_cache_init() do { } while (0)
+/*
+ * c6x is !MMU, so define the simpliest implementation
+ * c6x is !MMU, so define the simplest implementation
+#define pgprot_writecombine pgprot_noncached
+
#include <asm-generic/pgtable.h>
#endif /* _ASM_C6X_PGTABLE_H */
#define _ASM_METAG_IO_H
#include <linux/types.h>
+#include <asm/pgtable-bits.h>
#define IO_SPACE_LIMIT 0
--- /dev/null
+/*
+ * Meta page table definitions.
+ */
+
+#ifndef _METAG_PGTABLE_BITS_H
+#define _METAG_PGTABLE_BITS_H
+
+#include <asm/metag_mem.h>
+
+/*
+ * Definitions for MMU descriptors
+ *
+ * These are the hardware bits in the MMCU pte entries.
+ * Derived from the Meta toolkit headers.
+ */
+#define _PAGE_PRESENT MMCU_ENTRY_VAL_BIT
+#define _PAGE_WRITE MMCU_ENTRY_WR_BIT
+#define _PAGE_PRIV MMCU_ENTRY_PRIV_BIT
+/* Write combine bit - this can cause writes to occur out of order */
+#define _PAGE_WR_COMBINE MMCU_ENTRY_WRC_BIT
+/* Sys coherent bit - this bit is never used by Linux */
+#define _PAGE_SYS_COHERENT MMCU_ENTRY_SYS_BIT
+#define _PAGE_ALWAYS_ZERO_1 0x020
+#define _PAGE_CACHE_CTRL0 0x040
+#define _PAGE_CACHE_CTRL1 0x080
+#define _PAGE_ALWAYS_ZERO_2 0x100
+#define _PAGE_ALWAYS_ZERO_3 0x200
+#define _PAGE_ALWAYS_ZERO_4 0x400
+#define _PAGE_ALWAYS_ZERO_5 0x800
+
+/* These are software bits that we stuff into the gaps in the hardware
+ * pte entries that are not used. Note, these DO get stored in the actual
+ * hardware, but the hardware just does not use them.
+ */
+#define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1
+#define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2
+
+/* Pages owned, and protected by, the kernel. */
+#define _PAGE_KERNEL _PAGE_PRIV
+
+/* No cacheing of this page */
+#define _PAGE_CACHE_WIN0 (MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S)
+/* burst cacheing - good for data streaming */
+#define _PAGE_CACHE_WIN1 (MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S)
+/* One cache way per thread */
+#define _PAGE_CACHE_WIN2 (MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S)
+/* Full on cacheing */
+#define _PAGE_CACHE_WIN3 (MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S)
+
+#define _PAGE_CACHEABLE (_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE)
+
+/* which bits are used for cache control ... */
+#define _PAGE_CACHE_MASK (_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \
+ _PAGE_WR_COMBINE)
+
+/* This is a mask of the bits that pte_modify is allowed to change. */
+#define _PAGE_CHG_MASK (PAGE_MASK)
+
+#define _PAGE_SZ_SHIFT 1
+#define _PAGE_SZ_4K (0x0)
+#define _PAGE_SZ_8K (0x1 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_16K (0x2 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_32K (0x3 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_64K (0x4 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_128K (0x5 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_256K (0x6 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_512K (0x7 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_1M (0x8 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_2M (0x9 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_4M (0xa << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_MASK (0xf << _PAGE_SZ_SHIFT)
+
+#if defined(CONFIG_PAGE_SIZE_4K)
+#define _PAGE_SZ (_PAGE_SZ_4K)
+#elif defined(CONFIG_PAGE_SIZE_8K)
+#define _PAGE_SZ (_PAGE_SZ_8K)
+#elif defined(CONFIG_PAGE_SIZE_16K)
+#define _PAGE_SZ (_PAGE_SZ_16K)
+#endif
+#define _PAGE_TABLE (_PAGE_SZ | _PAGE_PRESENT)
+
+#if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
+# define _PAGE_SZHUGE (_PAGE_SZ_8K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
+# define _PAGE_SZHUGE (_PAGE_SZ_16K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
+# define _PAGE_SZHUGE (_PAGE_SZ_32K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+# define _PAGE_SZHUGE (_PAGE_SZ_64K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
+# define _PAGE_SZHUGE (_PAGE_SZ_128K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
+# define _PAGE_SZHUGE (_PAGE_SZ_256K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
+# define _PAGE_SZHUGE (_PAGE_SZ_512K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
+# define _PAGE_SZHUGE (_PAGE_SZ_1M)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
+# define _PAGE_SZHUGE (_PAGE_SZ_2M)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
+# define _PAGE_SZHUGE (_PAGE_SZ_4M)
+#endif
+
+#endif /* _METAG_PGTABLE_BITS_H */
#ifndef _METAG_PGTABLE_H
#define _METAG_PGTABLE_H
+#include <asm/pgtable-bits.h>
#include <asm-generic/pgtable-nopmd.h>
/* Invalid regions on Meta: 0x00000000-0x001FFFFF and 0xFFFF0000-0xFFFFFFFF */
#define VMALLOC_END 0x7FFFFFFF
#endif
-/*
- * Definitions for MMU descriptors
- *
- * These are the hardware bits in the MMCU pte entries.
- * Derived from the Meta toolkit headers.
- */
-#define _PAGE_PRESENT MMCU_ENTRY_VAL_BIT
-#define _PAGE_WRITE MMCU_ENTRY_WR_BIT
-#define _PAGE_PRIV MMCU_ENTRY_PRIV_BIT
-/* Write combine bit - this can cause writes to occur out of order */
-#define _PAGE_WR_COMBINE MMCU_ENTRY_WRC_BIT
-/* Sys coherent bit - this bit is never used by Linux */
-#define _PAGE_SYS_COHERENT MMCU_ENTRY_SYS_BIT
-#define _PAGE_ALWAYS_ZERO_1 0x020
-#define _PAGE_CACHE_CTRL0 0x040
-#define _PAGE_CACHE_CTRL1 0x080
-#define _PAGE_ALWAYS_ZERO_2 0x100
-#define _PAGE_ALWAYS_ZERO_3 0x200
-#define _PAGE_ALWAYS_ZERO_4 0x400
-#define _PAGE_ALWAYS_ZERO_5 0x800
-
-/* These are software bits that we stuff into the gaps in the hardware
- * pte entries that are not used. Note, these DO get stored in the actual
- * hardware, but the hardware just does not use them.
- */
-#define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1
-#define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2
-
-/* Pages owned, and protected by, the kernel. */
-#define _PAGE_KERNEL _PAGE_PRIV
-
-/* No cacheing of this page */
-#define _PAGE_CACHE_WIN0 (MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S)
-/* burst cacheing - good for data streaming */
-#define _PAGE_CACHE_WIN1 (MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S)
-/* One cache way per thread */
-#define _PAGE_CACHE_WIN2 (MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S)
-/* Full on cacheing */
-#define _PAGE_CACHE_WIN3 (MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S)
-
-#define _PAGE_CACHEABLE (_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE)
-
-/* which bits are used for cache control ... */
-#define _PAGE_CACHE_MASK (_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \
- _PAGE_WR_COMBINE)
-
-/* This is a mask of the bits that pte_modify is allowed to change. */
-#define _PAGE_CHG_MASK (PAGE_MASK)
-
-#define _PAGE_SZ_SHIFT 1
-#define _PAGE_SZ_4K (0x0)
-#define _PAGE_SZ_8K (0x1 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_16K (0x2 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_32K (0x3 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_64K (0x4 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_128K (0x5 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_256K (0x6 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_512K (0x7 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_1M (0x8 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_2M (0x9 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_4M (0xa << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_MASK (0xf << _PAGE_SZ_SHIFT)
-
-#if defined(CONFIG_PAGE_SIZE_4K)
-#define _PAGE_SZ (_PAGE_SZ_4K)
-#elif defined(CONFIG_PAGE_SIZE_8K)
-#define _PAGE_SZ (_PAGE_SZ_8K)
-#elif defined(CONFIG_PAGE_SIZE_16K)
-#define _PAGE_SZ (_PAGE_SZ_16K)
-#endif
-#define _PAGE_TABLE (_PAGE_SZ | _PAGE_PRESENT)
-
-#if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
-# define _PAGE_SZHUGE (_PAGE_SZ_8K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
-# define _PAGE_SZHUGE (_PAGE_SZ_16K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
-# define _PAGE_SZHUGE (_PAGE_SZ_32K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
-# define _PAGE_SZHUGE (_PAGE_SZ_64K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
-# define _PAGE_SZHUGE (_PAGE_SZ_128K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
-# define _PAGE_SZHUGE (_PAGE_SZ_256K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
-# define _PAGE_SZHUGE (_PAGE_SZ_512K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
-# define _PAGE_SZHUGE (_PAGE_SZ_1M)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
-# define _PAGE_SZHUGE (_PAGE_SZ_2M)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
-# define _PAGE_SZHUGE (_PAGE_SZ_4M)
-#endif
-
/*
* The Linux memory management assumes a three-level page table setup. On
* Meta, we use that, but "fold" the mid level into the top-level page
* The LP register should point to the location where the called function
* should return. [note that MAKE_SYS_CALL uses label 1] */
/* See if the system call number is valid */
+ blti r12, 5f
addi r11, r12, -__NR_syscalls;
- bgei r11,5f;
+ bgei r11, 5f;
/* Figure out which function to use for this system call. */
/* Note Microblaze barrel shift is optional, so don't rely on it */
add r12, r12, r12; /* convert num -> ptr */
/* The syscall number is invalid, return an error. */
5:
- rtsd r15, 8; /* looks like a normal subroutine return */
+ braid ret_from_trap
addi r3, r0, -ENOSYS;
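
In C terms, the blti/bgei pair above implements a signed range check before the dispatch table is indexed; a stand-alone sketch with an illustrative syscall count:

#include <errno.h>

#define NR_SYSCALLS 400			/* illustrative value only */

static long demo_validate(long num)
{
	if (num < 0 || num >= NR_SYSCALLS)
		return -ENOSYS;
	return 0;	/* safe to index the dispatch table */
}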
/* Entry point used to return from a syscall/trap */
bri 1b
/* Maybe handle a signal */
-5:
+5:
andi r11, r19, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME;
beqi r11, 4f; /* Signals to handle, handle them */
#include <uapi/asm/ptrace.h>
+/* This struct defines the way the registers are stored on the
+ stack during a system call. */
+
#ifndef __ASSEMBLY__
+struct pt_regs {
+ unsigned long r8; /* r8-r15 Caller-saved GP registers */
+ unsigned long r9;
+ unsigned long r10;
+ unsigned long r11;
+ unsigned long r12;
+ unsigned long r13;
+ unsigned long r14;
+ unsigned long r15;
+ unsigned long r1; /* Assembler temporary */
+ unsigned long r2; /* Retval LS 32bits */
+ unsigned long r3; /* Retval MS 32bits */
+ unsigned long r4; /* r4-r7 Register arguments */
+ unsigned long r5;
+ unsigned long r6;
+ unsigned long r7;
+ unsigned long orig_r2; /* Copy of r2 ?? */
+ unsigned long ra; /* Return address */
+ unsigned long fp; /* Frame pointer */
+ unsigned long sp; /* Stack pointer */
+ unsigned long gp; /* Global pointer */
+ unsigned long estatus;
+ unsigned long ea; /* Exception return address (pc) */
+ unsigned long orig_r7;
+};
+
+/*
+ * This is the extended stack used by signal handlers and the context
+ * switcher: it's pushed after the normal "struct pt_regs".
+ */
+struct switch_stack {
+ unsigned long r16; /* r16-r23 Callee-saved GP registers */
+ unsigned long r17;
+ unsigned long r18;
+ unsigned long r19;
+ unsigned long r20;
+ unsigned long r21;
+ unsigned long r22;
+ unsigned long r23;
+ unsigned long fp;
+ unsigned long gp;
+ unsigned long ra;
+};
+
#define user_mode(regs) (((regs)->estatus & ESTATUS_EU))
#define instruction_pointer(regs) ((regs)->ra)
+++ /dev/null
-/*
- * Copyright (C) 2004 Microtronix Datacom Ltd
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#ifndef _ASM_NIOS2_UCONTEXT_H
-#define _ASM_NIOS2_UCONTEXT_H
-
-typedef int greg_t;
-#define NGREG 32
-typedef greg_t gregset_t[NGREG];
-
-struct mcontext {
- int version;
- gregset_t gregs;
-};
-
-#define MCONTEXT_VERSION 2
-
-struct ucontext {
- unsigned long uc_flags;
- struct ucontext *uc_link;
- stack_t uc_stack;
- struct mcontext uc_mcontext;
- sigset_t uc_sigmask; /* mask last for extensibility */
-};
-
-#endif
include include/uapi/asm-generic/Kbuild.asm
header-y += elf.h
-header-y += ucontext.h
+
+generic-y += ucontext.h
typedef unsigned long elf_greg_t;
-#define ELF_NGREG \
- ((sizeof(struct pt_regs) + sizeof(struct switch_stack)) / \
- sizeof(elf_greg_t))
+#define ELF_NGREG 49
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef unsigned long elf_fpregset_t;
#define NUM_PTRACE_REG (PTR_TLBMISC + 1)
-/* this struct defines the way the registers are stored on the
- stack during a system call.
-
- There is a fake_regs in setup.c that has to match pt_regs.*/
-
-struct pt_regs {
- unsigned long r8; /* r8-r15 Caller-saved GP registers */
- unsigned long r9;
- unsigned long r10;
- unsigned long r11;
- unsigned long r12;
- unsigned long r13;
- unsigned long r14;
- unsigned long r15;
- unsigned long r1; /* Assembler temporary */
- unsigned long r2; /* Retval LS 32bits */
- unsigned long r3; /* Retval MS 32bits */
- unsigned long r4; /* r4-r7 Register arguments */
- unsigned long r5;
- unsigned long r6;
- unsigned long r7;
- unsigned long orig_r2; /* Copy of r2 ?? */
- unsigned long ra; /* Return address */
- unsigned long fp; /* Frame pointer */
- unsigned long sp; /* Stack pointer */
- unsigned long gp; /* Global pointer */
- unsigned long estatus;
- unsigned long ea; /* Exception return address (pc) */
- unsigned long orig_r7;
-};
-
-/*
- * This is the extended stack used by signal handlers and the context
- * switcher: it's pushed after the normal "struct pt_regs".
- */
-struct switch_stack {
- unsigned long r16; /* r16-r23 Callee-saved GP registers */
- unsigned long r17;
- unsigned long r18;
- unsigned long r19;
- unsigned long r20;
- unsigned long r21;
- unsigned long r22;
- unsigned long r23;
- unsigned long fp;
- unsigned long gp;
- unsigned long ra;
+/* User structures for general purpose registers. */
+struct user_pt_regs {
+ __u32 regs[49];
};
#endif /* __ASSEMBLY__ */
* details.
*/
-#ifndef _ASM_NIOS2_SIGCONTEXT_H
-#define _ASM_NIOS2_SIGCONTEXT_H
+#ifndef _UAPI__ASM_SIGCONTEXT_H
+#define _UAPI__ASM_SIGCONTEXT_H
-#include <asm/ptrace.h>
+#include <linux/types.h>
+
+#define MCONTEXT_VERSION 2
struct sigcontext {
- struct pt_regs regs;
- unsigned long sc_mask; /* old sigmask */
+ int version;
+ unsigned long gregs[32];
};
#endif
struct ucontext *uc, int *pr2)
{
int temp;
- greg_t *gregs = uc->uc_mcontext.gregs;
+ unsigned long *gregs = uc->uc_mcontext.gregs;
int err;
/* Always make any pending restarted system calls return -EINTR */
static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
{
struct switch_stack *sw = (struct switch_stack *)regs - 1;
- greg_t *gregs = uc->uc_mcontext.gregs;
+ unsigned long *gregs = uc->uc_mcontext.gregs;
int err = 0;
err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
break;
}
-survive:
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
*/
out_of_memory:
up_read(&mm->mmap_sem);
- if (is_global_init(tsk)) {
- yield();
- down_read(&mm->mmap_sem);
- goto survive;
- }
if (!user_mode(regs))
goto no_context;
pagefault_out_of_memory();
if (likely(pgd != NULL)) {
memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER);
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
actual_pgd += PTRS_PER_PGD;
/* Populate first pmd with allocated memory. We mark it
* with PxD_FLAG_ATTACHED as a signal to the system that this
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
pgd -= PTRS_PER_PGD;
#endif
free_pages((unsigned long)pgd, PGD_ALLOC_ORDER);
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
-#ifdef CONFIG_64BIT
-	if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+	if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED) {
- /* This is the permanent pmd attached to the pgd;
- * cannot free it */
+ /*
+ * This is the permanent pmd attached to the pgd;
+ * cannot free it.
+ * Increment the counter to compensate for the decrement
+ * done by generic mm code.
+ */
+ mm_inc_nr_pmds(mm);
		return;
-#endif
+	}
free_pages((unsigned long)pmd, PMD_ORDER);
}
static inline void
pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
{
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
/* preserve the gateway marker if this is the beginning of
* the permanent pmd */
if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
#define ENTRY_COMP(_name_) .word sys_##_name_
#endif
- ENTRY_SAME(restart_syscall) /* 0 */
- ENTRY_SAME(exit)
+90: ENTRY_SAME(restart_syscall) /* 0 */
+91: ENTRY_SAME(exit)
ENTRY_SAME(fork_wrapper)
ENTRY_SAME(read)
ENTRY_SAME(write)
ENTRY_SAME(bpf)
ENTRY_COMP(execveat)
- /* Nothing yet */
+
+.ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b))
+.error "size of syscall table does not fit value of __NR_Linux_syscalls"
+.endif
#undef ENTRY_SAME
#undef ENTRY_DIFF
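
The .ifne guard above is a build-time size assertion; the same invariant for a C-defined table can be written with a C11 static assertion (all names illustrative):

typedef long (*syscall_fn)(void);

static long demo_restart(void) { return 0; }
static long demo_exit(void) { return 0; }

static syscall_fn demo_table[] = { demo_restart, demo_exit };

_Static_assert(sizeof(demo_table) / sizeof(demo_table[0]) == 2,
	       "size of syscall table does not fit the syscall count");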
#define PPC_INST_MFSPR_PVR_MASK 0xfc1fffff
#define PPC_INST_MFTMR 0x7c0002dc
#define PPC_INST_MSGSND 0x7c00019c
+#define PPC_INST_MSGCLR 0x7c0001dc
#define PPC_INST_MSGSNDP 0x7c00011c
#define PPC_INST_MTTMR 0x7c0003dc
#define PPC_INST_NOP 0x60000000
___PPC_RB(b) | __PPC_EH(eh))
#define PPC_MSGSND(b) stringify_in_c(.long PPC_INST_MSGSND | \
___PPC_RB(b))
+#define PPC_MSGCLR(b) stringify_in_c(.long PPC_INST_MSGCLR | \
+ ___PPC_RB(b))
#define PPC_MSGSNDP(b) stringify_in_c(.long PPC_INST_MSGSNDP | \
___PPC_RB(b))
#define PPC_POPCNTB(a, s) stringify_in_c(.long PPC_INST_POPCNTB | \
#define SRR1_ISI_N_OR_G 0x10000000 /* ISI: Access is no-exec or G */
#define SRR1_ISI_PROT 0x08000000 /* ISI: Other protection fault */
#define SRR1_WAKEMASK 0x00380000 /* reason for wakeup */
+#define SRR1_WAKEMASK_P8 0x003c0000 /* reason for wakeup on POWER8 */
#define SRR1_WAKESYSERR 0x00300000 /* System error */
#define SRR1_WAKEEE 0x00200000 /* External interrupt */
#define SRR1_WAKEMT 0x00280000 /* mtctrl */
#define SRR1_WAKEHMI 0x00280000 /* Hypervisor maintenance */
#define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */
+#define SRR1_WAKEDBELL 0x00140000 /* Privileged doorbell on P8 */
#define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */
#define SRR1_WAKERESET 0x00100000 /* System reset */
+#define SRR1_WAKEHDBELL 0x000c0000 /* Hypervisor doorbell on P8 */
#define SRR1_WAKESTATE 0x00030000 /* Powersave exit mask [46:47] */
#define SRR1_WS_DEEPEST 0x00030000 /* Some resources not maintained,
* may not be recoverable */
.machine_check_early = __machine_check_early_realmode_p8,
.platform = "power8",
},
+ { /* Power8NVL */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x004c0000,
+ .cpu_name = "POWER8NVL (raw)",
+ .cpu_features = CPU_FTRS_POWER8,
+ .cpu_user_features = COMMON_USER_POWER8,
+ .cpu_user_features2 = COMMON_USER2_POWER8,
+ .mmu_features = MMU_FTRS_POWER8,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .num_pmcs = 6,
+ .pmc_type = PPC_PMC_IBM,
+ .oprofile_cpu_type = "ppc64/power8",
+ .oprofile_type = PPC_OPROFILE_INVALID,
+ .cpu_setup = __setup_cpu_power8,
+ .cpu_restore = __restore_cpu_power8,
+ .flush_tlb = __flush_tlb_power8,
+ .machine_check_early = __machine_check_early_realmode_p8,
+ .platform = "power8",
+ },
{ /* Power8 DD1: Does not support doorbell IPIs */
.pvr_mask = 0xffffff00,
.pvr_value = 0x004d0100,
#include <asm/dbell.h>
#include <asm/irq_regs.h>
+#include <asm/kvm_ppc.h>
#ifdef CONFIG_SMP
void doorbell_setup_this_cpu(void)
may_hard_irq_enable();
+ kvmppc_set_host_ipi(smp_processor_id(), 0);
__this_cpu_inc(irq_stat.doorbell_irqs);
smp_ipi_demux();
bne 9f /* continue in V mode if we are. */
5:
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
/*
* We are coming from kernel context. Check if we are coming from
* guest. if yes, then we can continue. We will fall through
spin_lock(&vcpu->arch.vpa_update_lock);
lppaca = (struct lppaca *)vcpu->arch.vpa.pinned_addr;
if (lppaca)
- yield_count = lppaca->yield_count;
+ yield_count = be32_to_cpu(lppaca->yield_count);
spin_unlock(&vcpu->arch.vpa_update_lock);
return yield_count;
}
static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
bool preserve_top32)
{
+ struct kvm *kvm = vcpu->kvm;
struct kvmppc_vcore *vc = vcpu->arch.vcore;
u64 mask;
+ mutex_lock(&kvm->lock);
spin_lock(&vc->lock);
/*
* If ILE (interrupt little-endian) has changed, update the
* MSR_LE bit in the intr_msr for each vcpu in this vcore.
*/
if ((new_lpcr & LPCR_ILE) != (vc->lpcr & LPCR_ILE)) {
- struct kvm *kvm = vcpu->kvm;
struct kvm_vcpu *vcpu;
int i;
- mutex_lock(&kvm->lock);
kvm_for_each_vcpu(i, vcpu, kvm) {
if (vcpu->arch.vcore != vc)
continue;
else
vcpu->arch.intr_msr &= ~MSR_LE;
}
- mutex_unlock(&kvm->lock);
}
/*
mask &= 0xFFFFFFFF;
vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
spin_unlock(&vc->lock);
+ mutex_unlock(&kvm->lock);
}
static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
/* Save HEIR (HV emulation assist reg) in emul_inst
if this is an HEI (HV emulation interrupt, e40) */
li r3,KVM_INST_FETCH_FAILED
+ stw r3,VCPU_LAST_INST(r9)
cmpwi r12,BOOK3S_INTERRUPT_H_EMUL_ASSIST
bne 11f
mfspr r3,SPRN_HEIR
#include <asm/runlatch.h>
#include <asm/code-patching.h>
#include <asm/dbell.h>
+#include <asm/kvm_ppc.h>
+#include <asm/ppc-opcode.h>
#include "powernv.h"
static void pnv_smp_cpu_kill_self(void)
{
unsigned int cpu;
- unsigned long srr1;
+ unsigned long srr1, wmask;
u32 idle_states;
/* Standard hot unplug procedure */
generic_set_cpu_dead(cpu);
smp_wmb();
+ wmask = SRR1_WAKEMASK;
+ if (cpu_has_feature(CPU_FTR_ARCH_207S))
+ wmask = SRR1_WAKEMASK_P8;
+
idle_states = pnv_get_supported_cpuidle_states();
/* We don't want to take decrementer interrupts while we are offline,
* so clear LPCR:PECE1. We keep PECE2 enabled.
* having finished executing in a KVM guest, then srr1
* contains 0.
*/
- if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE) {
+ if ((srr1 & wmask) == SRR1_WAKEEE) {
icp_native_flush_interrupt();
local_paca->irq_happened &= PACA_IRQ_HARD_DIS;
smp_mb();
+ } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) {
+ unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER);
+ asm volatile(PPC_MSGCLR(%0) : : "r" (msg));
+ kvmppc_set_host_ipi(cpu, 0);
}
if (cpu_core_split_required())
static struct kobject *mobility_kobj;
struct update_props_workarea {
- u32 phandle;
- u32 state;
- u64 reserved;
- u32 nprops;
+ __be32 phandle;
+ __be32 state;
+ __be64 reserved;
+ __be32 nprops;
} __packed;
#define NODE_ACTION_MASK 0xff000000
return rc;
}
-static int delete_dt_node(u32 phandle)
+static int delete_dt_node(__be32 phandle)
{
struct device_node *dn;
- dn = of_find_node_by_phandle(phandle);
+ dn = of_find_node_by_phandle(be32_to_cpu(phandle));
if (!dn)
return -ENOENT;
return 0;
}
-static int update_dt_node(u32 phandle, s32 scope)
+static int update_dt_node(__be32 phandle, s32 scope)
{
struct update_props_workarea *upwa;
struct device_node *dn;
char *prop_data;
char *rtas_buf;
int update_properties_token;
+ u32 nprops;
u32 vd;
update_properties_token = rtas_token("ibm,update-properties");
if (!rtas_buf)
return -ENOMEM;
- dn = of_find_node_by_phandle(phandle);
+ dn = of_find_node_by_phandle(be32_to_cpu(phandle));
if (!dn) {
kfree(rtas_buf);
return -ENOENT;
break;
prop_data = rtas_buf + sizeof(*upwa);
+ nprops = be32_to_cpu(upwa->nprops);
	/* On the first call to ibm,update-properties for a node the
	 * first property value descriptor contains an empty property
	 * name, the property value length encoded as u32, and the
	 * property value is the node path being updated.
	 */
if (*prop_data == 0) {
prop_data++;
- vd = *(u32 *)prop_data;
+ vd = be32_to_cpu(*(__be32 *)prop_data);
prop_data += vd + sizeof(vd);
- upwa->nprops--;
+ nprops--;
}
- for (i = 0; i < upwa->nprops; i++) {
+ for (i = 0; i < nprops; i++) {
char *prop_name;
prop_name = prop_data;
prop_data += strlen(prop_name) + 1;
- vd = *(u32 *)prop_data;
+ vd = be32_to_cpu(*(__be32 *)prop_data);
prop_data += sizeof(vd);
switch (vd) {
return 0;
}
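
The conversions above all follow one rule: the RTAS work area is big-endian by ABI regardless of kernel endianness, so fields are declared __be32 and converted exactly once at the point of use. A condensed sketch:

#include <linux/types.h>
#include <asm/byteorder.h>

struct demo_workarea {
	__be32 phandle;
	__be32 nprops;
};

static u32 demo_nprops(const struct demo_workarea *wa)
{
	/* Typed as __be32, so sparse flags any unconverted use. */
	return be32_to_cpu(wa->nprops);
}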
-static int add_dt_node(u32 parent_phandle, u32 drc_index)
+static int add_dt_node(__be32 parent_phandle, __be32 drc_index)
{
struct device_node *dn;
struct device_node *parent_dn;
int rc;
- parent_dn = of_find_node_by_phandle(parent_phandle);
+ parent_dn = of_find_node_by_phandle(be32_to_cpu(parent_phandle));
if (!parent_dn)
return -ENOENT;
int pseries_devicetree_update(s32 scope)
{
char *rtas_buf;
- u32 *data;
+ __be32 *data;
int update_nodes_token;
int rc;
if (rc && rc != 1)
break;
- data = (u32 *)rtas_buf + 4;
- while (*data & NODE_ACTION_MASK) {
+ data = (__be32 *)rtas_buf + 4;
+ while (be32_to_cpu(*data) & NODE_ACTION_MASK) {
int i;
- u32 action = *data & NODE_ACTION_MASK;
- int node_count = *data & NODE_COUNT_MASK;
+ u32 action = be32_to_cpu(*data) & NODE_ACTION_MASK;
+ u32 node_count = be32_to_cpu(*data) & NODE_COUNT_MASK;
data++;
for (i = 0; i < node_count; i++) {
- u32 phandle = *data++;
- u32 drc_index;
+ __be32 phandle = *data++;
+ __be32 drc_index;
switch (action) {
case DELETE_DT_NODE:
extern unsigned long mmap_rnd_mask;
-#define STACK_RND_MASK (mmap_rnd_mask)
+#define STACK_RND_MASK (test_thread_flag(TIF_31BIT) ? 0x7ff : mmap_rnd_mask)
#define ARCH_DLINFO \
do { \
#define S390_ARCH_FAC_MASK_SIZE_U64 \
(S390_ARCH_FAC_MASK_SIZE_BYTE / sizeof(u64))
-struct s390_model_fac {
- /* facilities used in SIE context */
- __u64 sie[S390_ARCH_FAC_LIST_SIZE_U64];
- /* subset enabled by kvm */
- __u64 kvm[S390_ARCH_FAC_LIST_SIZE_U64];
+struct kvm_s390_fac {
+ /* facility list requested by guest */
+ __u64 list[S390_ARCH_FAC_LIST_SIZE_U64];
+ /* facility mask supported by kvm & hosting machine */
+ __u64 mask[S390_ARCH_FAC_LIST_SIZE_U64];
};
struct kvm_s390_cpu_model {
- struct s390_model_fac *fac;
+ struct kvm_s390_fac *fac;
struct cpuid cpu_id;
unsigned short ibc;
};
{
int cpu = smp_processor_id();
+ S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
if (prev == next)
return;
if (MACHINE_HAS_TLB_LC)
atomic_dec(&prev->context.attach_count);
if (MACHINE_HAS_TLB_LC)
cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
- S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
}
#define finish_arch_post_lock_switch finish_arch_post_lock_switch
#endif
}
-static inline void clear_page(void *page)
-{
- register unsigned long reg1 asm ("1") = 0;
- register void *reg2 asm ("2") = page;
- register unsigned long reg3 asm ("3") = 4096;
- asm volatile(
- " mvcl 2,0"
- : "+d" (reg2), "+d" (reg3) : "d" (reg1)
- : "memory", "cc");
-}
+#define clear_page(page) memset((page), 0, PAGE_SIZE)
/*
* copy_page uses the mvcl instruction with 0xb0 padding byte in order to
unsigned long ftrace_plt;
+static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn)
+{
+#ifdef CC_USING_HOTPATCH
+ /* brcl 0,0 */
+ insn->opc = 0xc004;
+ insn->disp = 0;
+#else
+ /* stg r14,8(r15) */
+ insn->opc = 0xe3e0;
+ insn->disp = 0xf0080024;
+#endif
+}
+
+static inline int is_kprobe_on_ftrace(struct ftrace_insn *insn)
+{
+#ifdef CONFIG_KPROBES
+ if (insn->opc == BREAKPOINT_INSTRUCTION)
+ return 1;
+#endif
+ return 0;
+}
+
+static inline void ftrace_generate_kprobe_nop_insn(struct ftrace_insn *insn)
+{
+#ifdef CONFIG_KPROBES
+ insn->opc = BREAKPOINT_INSTRUCTION;
+ insn->disp = KPROBE_ON_FTRACE_NOP;
+#endif
+}
+
+static inline void ftrace_generate_kprobe_call_insn(struct ftrace_insn *insn)
+{
+#ifdef CONFIG_KPROBES
+ insn->opc = BREAKPOINT_INSTRUCTION;
+ insn->disp = KPROBE_ON_FTRACE_CALL;
+#endif
+}
+
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr)
{
return -EFAULT;
if (addr == MCOUNT_ADDR) {
/* Initial code replacement */
-#ifdef CC_USING_HOTPATCH
- /* We expect to see brcl 0,0 */
- ftrace_generate_nop_insn(&orig);
-#else
- /* We expect to see stg r14,8(r15) */
- orig.opc = 0xe3e0;
- orig.disp = 0xf0080024;
-#endif
+ ftrace_generate_orig_insn(&orig);
ftrace_generate_nop_insn(&new);
- } else if (old.opc == BREAKPOINT_INSTRUCTION) {
+ } else if (is_kprobe_on_ftrace(&old)) {
/*
* If we find a breakpoint instruction, a kprobe has been
* placed at the beginning of the function. We write the
* bytes of the original instruction so that the kprobes
* handler can execute a nop, if it reaches this breakpoint.
*/
- new.opc = orig.opc = BREAKPOINT_INSTRUCTION;
- orig.disp = KPROBE_ON_FTRACE_CALL;
- new.disp = KPROBE_ON_FTRACE_NOP;
+ ftrace_generate_kprobe_call_insn(&orig);
+ ftrace_generate_kprobe_nop_insn(&new);
} else {
/* Replace ftrace call with a nop. */
ftrace_generate_call_insn(&orig, rec->ip);
if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old)))
return -EFAULT;
- if (old.opc == BREAKPOINT_INSTRUCTION) {
+ if (is_kprobe_on_ftrace(&old)) {
/*
* If we find a breakpoint instruction, a kprobe has been
* placed at the beginning of the function. We write the
* bytes of the original instruction so that the kprobes
* handler can execute a brasl if it reaches this breakpoint.
*/
- new.opc = orig.opc = BREAKPOINT_INSTRUCTION;
- orig.disp = KPROBE_ON_FTRACE_NOP;
- new.disp = KPROBE_ON_FTRACE_CALL;
+ ftrace_generate_kprobe_nop_insn(&orig);
+ ftrace_generate_kprobe_call_insn(&new);
} else {
/* Replace nop with an ftrace call. */
ftrace_generate_nop_insn(&orig);
insn->offset = (entry->target - entry->code) >> 1;
}
-static void jump_label_bug(struct jump_entry *entry, struct insn *insn)
+static void jump_label_bug(struct jump_entry *entry, struct insn *expected,
+ struct insn *new)
{
unsigned char *ipc = (unsigned char *)entry->code;
- unsigned char *ipe = (unsigned char *)insn;
+ unsigned char *ipe = (unsigned char *)expected;
+ unsigned char *ipn = (unsigned char *)new;
pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc);
pr_emerg("Found: %02x %02x %02x %02x %02x %02x\n",
ipc[0], ipc[1], ipc[2], ipc[3], ipc[4], ipc[5]);
pr_emerg("Expected: %02x %02x %02x %02x %02x %02x\n",
ipe[0], ipe[1], ipe[2], ipe[3], ipe[4], ipe[5]);
+ pr_emerg("New: %02x %02x %02x %02x %02x %02x\n",
+ ipn[0], ipn[1], ipn[2], ipn[3], ipn[4], ipn[5]);
panic("Corrupted kernel text");
}
}
if (init) {
if (memcmp((void *)entry->code, &orignop, sizeof(orignop)))
- jump_label_bug(entry, &old);
+ jump_label_bug(entry, &orignop, &new);
} else {
if (memcmp((void *)entry->code, &old, sizeof(old)))
- jump_label_bug(entry, &old);
+ jump_label_bug(entry, &old, &new);
}
probe_kernel_write((void *)entry->code, &new, sizeof(new));
}
const Elf_Shdr *sechdrs,
struct module *me)
{
+ jump_label_apply_nops(me);
vfree(me->arch.syminfo);
me->arch.syminfo = NULL;
return 0;
static struct attribute *cpumsf_pmu_events_attr[] = {
CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC),
- CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG),
+ NULL,
NULL,
};
return -EINVAL;
}
- if (si.ad)
+ if (si.ad) {
sfb_set_limits(CPUM_SF_MIN_SDB, CPUM_SF_MAX_SDB);
+ cpumsf_pmu_events_attr[1] =
+ CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG);
+ }
sfdbg = debug_register(KMSG_COMPONENT, 2, 1, 80);
if (!sfdbg)
static DEFINE_PER_CPU(struct cpuid, cpu_id);
-void cpu_relax(void)
+void notrace cpu_relax(void)
{
if (!smp_cpu_mtid && MACHINE_HAS_DIAG44)
asm volatile("diag 0,0,0x44");
lhi %r1,1
sigp %r1,%r0,SIGP_SET_ARCHITECTURE
sam64
+#ifdef CONFIG_SMP
+ larl %r1,smp_cpu_mt_shift
+ icm %r1,15,0(%r1)
+ jz smt_done
+ llgfr %r1,%r1
+smt_loop:
+ sigp %r1,%r0,SIGP_SET_MULTI_THREADING
+ brc 8,smt_done /* accepted */
+ brc 2,smt_loop /* busy, try again */
+smt_done:
+#endif
larl %r1,.Lnew_pgm_check_psw
lpswe 0(%r1)
pgm_check_entry:
case KVM_CAP_ONE_REG:
case KVM_CAP_ENABLE_CAP:
case KVM_CAP_S390_CSS_SUPPORT:
- case KVM_CAP_IRQFD:
case KVM_CAP_IOEVENTFD:
case KVM_CAP_DEVICE_CTRL:
case KVM_CAP_ENABLE_CAP_VM:
memcpy(&kvm->arch.model.cpu_id, &proc->cpuid,
sizeof(struct cpuid));
kvm->arch.model.ibc = proc->ibc;
- memcpy(kvm->arch.model.fac->kvm, proc->fac_list,
+ memcpy(kvm->arch.model.fac->list, proc->fac_list,
S390_ARCH_FAC_LIST_SIZE_BYTE);
} else
ret = -EFAULT;
}
memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid));
proc->ibc = kvm->arch.model.ibc;
- memcpy(&proc->fac_list, kvm->arch.model.fac->kvm, S390_ARCH_FAC_LIST_SIZE_BYTE);
+ memcpy(&proc->fac_list, kvm->arch.model.fac->list, S390_ARCH_FAC_LIST_SIZE_BYTE);
if (copy_to_user((void __user *)attr->addr, proc, sizeof(*proc)))
ret = -EFAULT;
kfree(proc);
}
get_cpu_id((struct cpuid *) &mach->cpuid);
mach->ibc = sclp_get_ibc();
- memcpy(&mach->fac_mask, kvm_s390_fac_list_mask,
- kvm_s390_fac_list_mask_size() * sizeof(u64));
+ memcpy(&mach->fac_mask, kvm->arch.model.fac->mask,
+ S390_ARCH_FAC_LIST_SIZE_BYTE);
memcpy((unsigned long *)&mach->fac_list, S390_lowcore.stfle_fac_list,
- S390_ARCH_FAC_LIST_SIZE_U64);
+ S390_ARCH_FAC_LIST_SIZE_BYTE);
if (copy_to_user((void __user *)attr->addr, mach, sizeof(*mach)))
ret = -EFAULT;
kfree(mach);
static int kvm_s390_query_ap_config(u8 *config)
{
u32 fcn_code = 0x04000000UL;
- u32 cc;
+ u32 cc = 0;
+ memset(config, 0, 128);
asm volatile(
"lgr 0,%1\n"
"lgr 2,%2\n"
".long 0xb2af0000\n" /* PQAP(QCI) */
- "ipm %0\n"
+ "0: ipm %0\n"
"srl %0,28\n"
- : "=r" (cc)
+ "1:\n"
+ EX_TABLE(0b, 1b)
+ : "+r" (cc)
: "r" (fcn_code), "r" (config)
: "cc", "0", "2", "memory"
);
kvm_s390_set_crycb_format(kvm);
- /* Disable AES/DEA protected key functions by default */
- kvm->arch.crypto.aes_kw = 0;
- kvm->arch.crypto.dea_kw = 0;
+ /* Enable AES/DEA protected key functions by default */
+ kvm->arch.crypto.aes_kw = 1;
+ kvm->arch.crypto.dea_kw = 1;
+ get_random_bytes(kvm->arch.crypto.crycb->aes_wrapping_key_mask,
+ sizeof(kvm->arch.crypto.crycb->aes_wrapping_key_mask));
+ get_random_bytes(kvm->arch.crypto.crycb->dea_wrapping_key_mask,
+ sizeof(kvm->arch.crypto.crycb->dea_wrapping_key_mask));
return 0;
}
/*
* The architectural maximum amount of facilities is 16 kbit. To store
* this amount, 2 kbyte of memory is required. Thus we need a full
- * page to hold the active copy (arch.model.fac->sie) and the current
- * facilities set (arch.model.fac->kvm). Its address size has to be
+ * page to hold the guest facility list (arch.model.fac->list) and the
+ * facility mask (arch.model.fac->mask). Its address size has to be
* 31 bits and word aligned.
*/
kvm->arch.model.fac =
- (struct s390_model_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ (struct kvm_s390_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!kvm->arch.model.fac)
goto out_nofac;
- memcpy(kvm->arch.model.fac->kvm, S390_lowcore.stfle_fac_list,
- S390_ARCH_FAC_LIST_SIZE_U64);
-
- /*
- * If this KVM host runs *not* in a LPAR, relax the facility bits
- * of the kvm facility mask by all missing facilities. This will allow
- * to determine the right CPU model by means of the remaining facilities.
- * Live guest migration must prohibit the migration of KVMs running in
- * a LPAR to non LPAR hosts.
- */
- if (!MACHINE_IS_LPAR)
- for (i = 0; i < kvm_s390_fac_list_mask_size(); i++)
- kvm_s390_fac_list_mask[i] &= kvm->arch.model.fac->kvm[i];
-
- /*
- * Apply the kvm facility mask to limit the kvm supported/tolerated
- * facility list.
- */
+ /* Populate the facility mask initially. */
+ memcpy(kvm->arch.model.fac->mask, S390_lowcore.stfle_fac_list,
+ S390_ARCH_FAC_LIST_SIZE_BYTE);
for (i = 0; i < S390_ARCH_FAC_LIST_SIZE_U64; i++) {
if (i < kvm_s390_fac_list_mask_size())
- kvm->arch.model.fac->kvm[i] &= kvm_s390_fac_list_mask[i];
+ kvm->arch.model.fac->mask[i] &= kvm_s390_fac_list_mask[i];
else
- kvm->arch.model.fac->kvm[i] = 0UL;
+ kvm->arch.model.fac->mask[i] = 0UL;
}
+ /* Populate the facility list initially. */
+ memcpy(kvm->arch.model.fac->list, kvm->arch.model.fac->mask,
+ S390_ARCH_FAC_LIST_SIZE_BYTE);
+
kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id);
kvm->arch.model.ibc = sclp_get_ibc() & 0x0fff;
mutex_lock(&vcpu->kvm->lock);
vcpu->arch.cpu_id = vcpu->kvm->arch.model.cpu_id;
- memcpy(vcpu->kvm->arch.model.fac->sie, vcpu->kvm->arch.model.fac->kvm,
- S390_ARCH_FAC_LIST_SIZE_BYTE);
vcpu->arch.sie_block->ibc = vcpu->kvm->arch.model.ibc;
mutex_unlock(&vcpu->kvm->lock);
vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca;
set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn);
}
- vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->sie;
+ vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->list;
spin_lock_init(&vcpu->arch.local_int.lock);
vcpu->arch.local_int.float_int = &kvm->arch.float_int;
/* test availability of facility in a kvm instance */
static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr)
{
- return __test_facility(nr, kvm->arch.model.fac->kvm);
+ return __test_facility(nr, kvm->arch.model.fac->mask) &&
+ __test_facility(nr, kvm->arch.model.fac->list);
}
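With the mask/list split, test_kvm_facility() only reports a facility when it is both supported by the host (mask) and offered to the guest (list). The underlying bit test follows the s390 convention that facility bit 0 is the most significant bit of the first doubleword; a standalone sketch (the helper names are illustrative):

    #include <stdint.h>

    /* s390 facility numbering: bit 0 is the MSB of word 0 */
    static int fac_bit_set(unsigned long nr, const uint64_t *fac)
    {
            return (fac[nr >> 6] >> (63 - (nr & 63))) & 1;
    }

    /* usable only if the bit is set in the mask AND in the list */
    static int test_facility_both(unsigned long nr, const uint64_t *mask,
                                  const uint64_t *list)
    {
            return fac_bit_set(nr, mask) && fac_bit_set(nr, list);
    }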
/* are cpu states controlled by user space */
* We need to shift the lower 32 facility bits (bit 0-31) from a u64
* into a u32 memory representation. They will remain bits 0-31.
*/
- fac = *vcpu->kvm->arch.model.fac->sie >> 32;
+ fac = *vcpu->kvm->arch.model.fac->list >> 32;
rc = write_guest_lc(vcpu, offsetof(struct _lowcore, stfl_fac_list),
&fac, sizeof(fac));
if (rc)
addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48);
return (void __iomem *) addr + offset;
}
-EXPORT_SYMBOL_GPL(pci_iomap_range);
+EXPORT_SYMBOL(pci_iomap_range);
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
}
spin_unlock(&zpci_iomap_lock);
}
-EXPORT_SYMBOL_GPL(pci_iounmap);
+EXPORT_SYMBOL(pci_iounmap);
static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
int size, u32 *val)
airq_iv_free_bit(zpci_aisb_iv, zdev->aisb);
}
-static void zpci_map_resources(struct zpci_dev *zdev)
+static void zpci_map_resources(struct pci_dev *pdev)
{
- struct pci_dev *pdev = zdev->pdev;
resource_size_t len;
int i;
}
}
-static void zpci_unmap_resources(struct zpci_dev *zdev)
+static void zpci_unmap_resources(struct pci_dev *pdev)
{
- struct pci_dev *pdev = zdev->pdev;
resource_size_t len;
int i;
zdev->pdev = pdev;
pdev->dev.groups = zpci_attr_groups;
- zpci_map_resources(zdev);
+ zpci_map_resources(pdev);
for (i = 0; i < PCI_BAR_COUNT; i++) {
res = &pdev->resource[i];
return 0;
}
+void pcibios_release_device(struct pci_dev *pdev)
+{
+ zpci_unmap_resources(pdev);
+}
+
int pcibios_enable_device(struct pci_dev *pdev, int mask)
{
struct zpci_dev *zdev = get_zdev(pdev);
zdev->pdev = pdev;
zpci_debug_init_device(zdev);
zpci_fmb_enable_device(zdev);
- zpci_map_resources(zdev);
return pci_enable_resources(pdev, mask);
}
{
struct zpci_dev *zdev = get_zdev(pdev);
- zpci_unmap_resources(zdev);
zpci_fmb_disable_device(zdev);
zpci_debug_exit_device(zdev);
zdev->pdev = NULL;
#ifdef CONFIG_HIBERNATE_CALLBACKS
static int zpci_restore(struct device *dev)
{
- struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct zpci_dev *zdev = get_zdev(pdev);
int ret = 0;
if (zdev->state != ZPCI_FN_STATE_ONLINE)
if (ret)
goto out;
- zpci_map_resources(zdev);
+ zpci_map_resources(pdev);
zpci_register_ioat(zdev, 0, zdev->start_dma + PAGE_OFFSET,
zdev->start_dma + zdev->iommu_size - 1,
(u64) zdev->dma_table);
static int zpci_freeze(struct device *dev)
{
- struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct zpci_dev *zdev = get_zdev(pdev);
if (zdev->state != ZPCI_FN_STATE_ONLINE)
return 0;
zpci_unregister_ioat(zdev, 0);
+ zpci_unmap_resources(pdev);
return clp_disable_fh(zdev);
}
if (copy_from_user(buf, user_buffer, length))
goto out;
- memcpy_toio(io_addr, buf, length);
- ret = 0;
+ ret = zpci_memcpy_toio(io_addr, buf, length);
out:
if (buf != local_buf)
kfree(buf);
goto out;
io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
- ret = -EFAULT;
- if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
+ if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) {
+ ret = -EFAULT;
goto out;
-
- memcpy_fromio(buf, io_addr, length);
-
- if (copy_to_user(user_buffer, buf, length))
+ }
+ ret = zpci_memcpy_fromio(buf, io_addr, length);
+ if (ret)
goto out;
+ if (copy_to_user(user_buffer, buf, length))
+ ret = -EFAULT;
- ret = 0;
out:
if (buf != local_buf)
kfree(buf);
default "arch/sparc/configs/sparc32_defconfig" if SPARC32
default "arch/sparc/configs/sparc64_defconfig" if SPARC64
+config ARCH_PROC_KCORE_TEXT
+ def_bool y
+
config IOMMU_HELPER
bool
default y if SPARC64
unsigned long reg_val);
#endif
+
+#define HV_FAST_M7_GET_PERFREG 0x43
+#define HV_FAST_M7_SET_PERFREG 0x44
+
+#ifndef __ASSEMBLY__
+unsigned long sun4v_m7_get_perfreg(unsigned long reg_num,
+ unsigned long *reg_val);
+unsigned long sun4v_m7_set_perfreg(unsigned long reg_num,
+ unsigned long reg_val);
+#endif
+
/* Function numbers for HV_CORE_TRAP. */
#define HV_CORE_SET_VER 0x00
#define HV_CORE_PUTCHAR 0x01
#define HV_GRP_SDIO 0x0108
#define HV_GRP_SDIO_ERR 0x0109
#define HV_GRP_REBOOT_DATA 0x0110
+#define HV_GRP_M7_PERF 0x0114
#define HV_GRP_NIAG_PERF 0x0200
#define HV_GRP_FIRE_PERF 0x0201
#define HV_GRP_N2_CPU 0x0202
{
}
-#define ioread8(X) readb(X)
-#define ioread16(X) readw(X)
-#define ioread16be(X) __raw_readw(X)
-#define ioread32(X) readl(X)
-#define ioread32be(X) __raw_readl(X)
-#define iowrite8(val,X) writeb(val,X)
-#define iowrite16(val,X) writew(val,X)
-#define iowrite16be(val,X) __raw_writew(val,X)
-#define iowrite32(val,X) writel(val,X)
-#define iowrite32be(val,X) __raw_writel(val,X)
+#define ioread8 readb
+#define ioread16 readw
+#define ioread16be __raw_readw
+#define ioread32 readl
+#define ioread32be __raw_readl
+#define iowrite8 writeb
+#define iowrite16 writew
+#define iowrite16be __raw_writew
+#define iowrite32 writel
+#define iowrite32be __raw_writel
/* Create a virtual mapping cookie for an IO port range */
void __iomem *ioport_map(unsigned long port, unsigned int nr);
extern int this_is_starfire;
void check_if_starfire(void);
-int starfire_hard_smp_processor_id(void);
void starfire_hookup(int);
unsigned int starfire_translate(unsigned long imap, unsigned int upaid);
void do_privop(struct pt_regs *regs);
void do_privact(struct pt_regs *regs);
void do_cee(struct pt_regs *regs);
-void do_cee_tl1(struct pt_regs *regs);
-void do_dae_tl1(struct pt_regs *regs);
-void do_iae_tl1(struct pt_regs *regs);
void do_div0_tl1(struct pt_regs *regs);
-void do_fpdis_tl1(struct pt_regs *regs);
void do_fpieee_tl1(struct pt_regs *regs);
void do_fpother_tl1(struct pt_regs *regs);
void do_ill_tl1(struct pt_regs *regs);
{ .group = HV_GRP_VT_CPU, },
{ .group = HV_GRP_T5_CPU, },
{ .group = HV_GRP_DIAG, .flags = FLAG_PRE_API },
+ { .group = HV_GRP_M7_PERF, },
};
static DEFINE_SPINLOCK(hvapi_lock);
retl
nop
ENDPROC(sun4v_t5_set_perfreg)
+
+ENTRY(sun4v_m7_get_perfreg)
+ mov %o1, %o4
+ mov HV_FAST_M7_GET_PERFREG, %o5
+ ta HV_FAST_TRAP
+ stx %o1, [%o4]
+ retl
+ nop
+ENDPROC(sun4v_m7_get_perfreg)
+
+ENTRY(sun4v_m7_set_perfreg)
+ mov HV_FAST_M7_SET_PERFREG, %o5
+ ta HV_FAST_TRAP
+ retl
+ nop
+ENDPROC(sun4v_m7_set_perfreg)
.pcr_nmi_disable = PCR_N4_PICNPT,
};
+static u64 m7_pcr_read(unsigned long reg_num)
+{
+ unsigned long val;
+
+ (void) sun4v_m7_get_perfreg(reg_num, &val);
+
+ return val;
+}
+
+static void m7_pcr_write(unsigned long reg_num, u64 val)
+{
+ (void) sun4v_m7_set_perfreg(reg_num, val);
+}
+
+static const struct pcr_ops m7_pcr_ops = {
+ .read_pcr = m7_pcr_read,
+ .write_pcr = m7_pcr_write,
+ .read_pic = n4_pic_read,
+ .write_pic = n4_pic_write,
+ .nmi_picl_value = n4_picl_value,
+ .pcr_nmi_enable = (PCR_N4_PICNPT | PCR_N4_STRACE |
+ PCR_N4_UTRACE | PCR_N4_TOE |
+ (26 << PCR_N4_SL_SHIFT)),
+ .pcr_nmi_disable = PCR_N4_PICNPT,
+};
static unsigned long perf_hsvc_group;
static unsigned long perf_hsvc_major;
perf_hsvc_group = HV_GRP_T5_CPU;
break;
+ case SUN4V_CHIP_SPARC_M7:
+ perf_hsvc_group = HV_GRP_M7_PERF;
+ break;
+
default:
return -ENODEV;
}
pcr_ops = &n5_pcr_ops;
break;
+ case SUN4V_CHIP_SPARC_M7:
+ pcr_ops = &m7_pcr_ops;
+ break;
+
default:
ret = -ENODEV;
break;
.num_pic_regs = 4,
};
+static void sparc_m7_write_pmc(int idx, u64 val)
+{
+ u64 pcr;
+
+ pcr = pcr_ops->read_pcr(idx);
+ /* ensure ov and ntc are reset */
+ pcr &= ~(PCR_N4_OV | PCR_N4_NTC);
+
+ pcr_ops->write_pic(idx, val & 0xffffffff);
+
+ pcr_ops->write_pcr(idx, pcr);
+}
+
+static const struct sparc_pmu sparc_m7_pmu = {
+ .event_map = niagara4_event_map,
+ .cache_map = &niagara4_cache_map,
+ .max_events = ARRAY_SIZE(niagara4_perfmon_event_map),
+ .read_pmc = sparc_vt_read_pmc,
+ .write_pmc = sparc_m7_write_pmc,
+ .upper_shift = 5,
+ .lower_shift = 5,
+ .event_mask = 0x7ff,
+ .user_bit = PCR_N4_UTRACE,
+ .priv_bit = PCR_N4_STRACE,
+
+ /* We explicitly don't support hypervisor tracing. */
+ .hv_bit = 0,
+
+ .irq_bit = PCR_N4_TOE,
+ .upper_nop = 0,
+ .lower_nop = 0,
+ .flags = 0,
+ .max_hw_events = 4,
+ .num_pcrs = 4,
+ .num_pic_regs = 4,
+};
static const struct sparc_pmu *sparc_pmu __read_mostly;
static u64 event_encoding(u64 event_id, int idx)
cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
}
+static void sparc_pmu_start(struct perf_event *event, int flags);
+
/* On this PMU each PIC has its own PCR control register. */
static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
{
struct perf_event *cp = cpuc->event[i];
struct hw_perf_event *hwc = &cp->hw;
int idx = hwc->idx;
- u64 enc;
if (cpuc->current_idx[i] != PIC_NO_INDEX)
continue;
- sparc_perf_event_set_period(cp, hwc, idx);
cpuc->current_idx[i] = idx;
- enc = perf_event_get_enc(cpuc->events[i]);
- cpuc->pcr[idx] &= ~mask_for_index(idx);
- if (hwc->state & PERF_HES_STOPPED)
- cpuc->pcr[idx] |= nop_for_index(idx);
- else
- cpuc->pcr[idx] |= event_encoding(enc, idx);
+ sparc_pmu_start(cp, PERF_EF_RELOAD);
}
out:
for (i = 0; i < cpuc->n_events; i++) {
int i;
local_irq_save(flags);
- perf_pmu_disable(event->pmu);
for (i = 0; i < cpuc->n_events; i++) {
if (event == cpuc->event[i]) {
}
}
- perf_pmu_enable(event->pmu);
local_irq_restore(flags);
}
unsigned long flags;
local_irq_save(flags);
- perf_pmu_disable(event->pmu);
n0 = cpuc->n_events;
if (n0 >= sparc_pmu->max_hw_events)
ret = 0;
out:
- perf_pmu_enable(event->pmu);
local_irq_restore(flags);
return ret;
}
sparc_pmu = &niagara4_pmu;
return true;
}
+ if (!strcmp(sparc_pmu_type, "sparc-m7")) {
+ sparc_pmu = &sparc_m7_pmu;
+ return true;
+ }
return false;
}
printk(" TPC[%lx] O7[%lx] I7[%lx] RPC[%lx]\n",
gp->tpc, gp->o7, gp->i7, gp->rpc);
}
+
+ touch_nmi_watchdog();
}
memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
(cpu == this_cpu ? '*' : ' '), cpu,
pp->pcr[0], pp->pcr[1], pp->pcr[2], pp->pcr[3],
pp->pic[0], pp->pic[1], pp->pic[2], pp->pic[3]);
+
+ touch_nmi_watchdog();
}
memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
scheduler_ipi();
}
-/* This is a nop because we capture all other cpus
- * anyways when making the PROM active.
- */
+static void stop_this_cpu(void *dummy)
+{
+ prom_stopself();
+}
+
void smp_send_stop(void)
{
+ int cpu;
+
+ if (tlb_type == hypervisor) {
+ for_each_online_cpu(cpu) {
+ if (cpu == smp_processor_id())
+ continue;
+#ifdef CONFIG_SUN_LDOMS
+ if (ldom_domaining_enabled) {
+ unsigned long hv_err;
+ hv_err = sun4v_cpu_stop(cpu);
+ if (hv_err)
+ printk(KERN_ERR "sun4v_cpu_stop() "
+ "failed err=%lu\n", hv_err);
+ } else
+#endif
+ prom_stopcpu_cpuid(cpu);
+ }
+ } else
+ smp_call_function(stop_this_cpu, NULL, 0);
}
/**
this_is_starfire = 1;
}
-int starfire_hard_smp_processor_id(void)
-{
- return upa_readl(0x1fff40000d0UL);
-}
-
/*
* Each Starfire board has 32 registers which perform translation
* and delivery of traditional interrupt packets into the extended
long err;
/* No need for backward compatibility. We can start fresh... */
- if (call <= SEMCTL) {
+ if (call <= SEMTIMEDOP) {
switch (call) {
case SEMOP:
err = sys_semtimedop(first, ptr,
}
user_instruction_dump ((unsigned int __user *) regs->tpc);
}
+ if (panic_on_oops)
+ panic("Fatal exception");
if (regs->tstate & TSTATE_PRIV)
do_exit(SIGKILL);
do_exit(SIGSEGV);
die_if_kernel("TL0: Cache Error Exception", regs);
}
-void do_cee_tl1(struct pt_regs *regs)
-{
- exception_enter();
- dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
- die_if_kernel("TL1: Cache Error Exception", regs);
-}
-
-void do_dae_tl1(struct pt_regs *regs)
-{
- exception_enter();
- dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
- die_if_kernel("TL1: Data Access Exception", regs);
-}
-
-void do_iae_tl1(struct pt_regs *regs)
-{
- exception_enter();
- dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
- die_if_kernel("TL1: Instruction Access Exception", regs);
-}
-
void do_div0_tl1(struct pt_regs *regs)
{
exception_enter();
die_if_kernel("TL1: DIV0 Exception", regs);
}
-void do_fpdis_tl1(struct pt_regs *regs)
-{
- exception_enter();
- dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
- die_if_kernel("TL1: FPU Disabled", regs);
-}
-
void do_fpieee_tl1(struct pt_regs *regs)
{
exception_enter();
.text
ENTRY(memmove) /* o0=dst o1=src o2=len */
- mov %o0, %g1
+ brz,pn %o2, 99f
+ mov %o0, %g1
+
cmp %o0, %o1
- bleu,pt %xcc, memcpy
+ bleu,pt %xcc, 2f
add %o1, %o2, %g7
cmp %g7, %o0
bleu,pt %xcc, memcpy
stb %g7, [%o0]
bne,pt %icc, 1b
sub %o0, 1, %o0
-
+99:
retl
mov %g1, %o0
+
+ /* We can't just call memcpy for these memmove cases. On some
+ * chips the memcpy uses cache initializing stores and when dst
+ * and src are close enough, those can clobber the source data
+ * before we've loaded it in.
+ */
+2: or %o0, %o1, %g7
+ or %o2, %g7, %g7
+ andcc %g7, 0x7, %g0
+ bne,pn %xcc, 4f
+ nop
+
+3: ldx [%o1], %g7
+ add %o1, 8, %o1
+ subcc %o2, 8, %o2
+ add %o0, 8, %o0
+ bne,pt %icc, 3b
+ stx %g7, [%o0 - 0x8]
+ ba,a,pt %xcc, 99b
+
+4: ldub [%o1], %g7
+ add %o1, 1, %o1
+ subcc %o2, 1, %o2
+ add %o0, 1, %o0
+ bne,pt %icc, 4b
+ stb %g7, [%o0 - 0x1]
+ ba,a,pt %xcc, 99b
ENDPROC(memmove)
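A C rendering of the new overlap branch (labels 2 to 4 above), illustrative only: when dst <= src a plain forward loop is safe even for nearby regions, whereas the optimized memcpy is not, because its cache-initializing stores can overwrite source bytes that have not been loaded yet.

    #include <stddef.h>
    #include <stdint.h>

    /* Forward copy with no block-initializing stores; mirrors the asm:
     * 8-byte words when dst, src and len are all 8-byte aligned,
     * otherwise one byte at a time. */
    static void *forward_copy(void *dst, const void *src, size_t n)
    {
            unsigned char *d = dst;
            const unsigned char *s = src;

            if ((((uintptr_t)d | (uintptr_t)s | n) & 7) == 0) {
                    for (; n; n -= 8, d += 8, s += 8)
                            *(uint64_t *)d = *(const uint64_t *)s;
            } else {
                    while (n--)
                            *d++ = *s++;
            }
            return dst;
    }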
return 0;
}
-device_initcall(report_memory);
+arch_initcall(report_memory);
#ifdef CONFIG_SMP
#define do_flush_tlb_kernel_range smp_flush_tlb_kernel_range
static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
-struct kaslr_setup_data {
- __u64 next;
- __u32 type;
- __u32 len;
- __u8 data[1];
-} kaslr_setup_data;
-
#define I8254_PORT_CONTROL 0x43
#define I8254_PORT_COUNTER0 0x40
#define I8254_CMD_READBACK 0xC0
return slots_fetch_random();
}
-static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled)
-{
- struct setup_data *data;
-
- kaslr_setup_data.type = SETUP_KASLR;
- kaslr_setup_data.len = 1;
- kaslr_setup_data.next = 0;
- kaslr_setup_data.data[0] = enabled;
-
- data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
-
- while (data && data->next)
- data = (struct setup_data *)(unsigned long)data->next;
-
- if (data)
- data->next = (unsigned long)&kaslr_setup_data;
- else
- params->hdr.setup_data = (unsigned long)&kaslr_setup_data;
-
-}
-
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
+unsigned char *choose_kernel_location(unsigned char *input,
unsigned long input_size,
unsigned char *output,
unsigned long output_size)
#ifdef CONFIG_HIBERNATION
if (!cmdline_find_option_bool("kaslr")) {
debug_putstr("KASLR disabled by default...\n");
- add_kaslr_setup_data(params, 0);
goto out;
}
#else
if (cmdline_find_option_bool("nokaslr")) {
debug_putstr("KASLR disabled by cmdline...\n");
- add_kaslr_setup_data(params, 0);
goto out;
}
#endif
- add_kaslr_setup_data(params, 1);
/* Record the various known unsafe memory ranges. */
mem_avoid_init((unsigned long)input, input_size,
* the entire decompressed kernel plus relocation table, or the
* entire decompressed kernel plus .bss and .brk sections.
*/
- output = choose_kernel_location(real_mode, input_data, input_len,
- output,
+ output = choose_kernel_location(input_data, input_len, output,
output_len > run_size ? output_len
: run_size);
#if CONFIG_RANDOMIZE_BASE
/* aslr.c */
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
+unsigned char *choose_kernel_location(unsigned char *input,
unsigned long input_size,
unsigned char *output,
unsigned long output_size);
bool has_cpuflag(int flag);
#else
static inline
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
+unsigned char *choose_kernel_location(unsigned char *input,
unsigned long input_size,
unsigned char *output,
unsigned long output_size)
src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
if (!src)
return -ENOMEM;
- assoc = (src + req->cryptlen + auth_tag_len);
+ assoc = (src + req->cryptlen);
scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0);
scatterwalk_map_and_copy(assoc, req->assoc, 0,
req->assoclen, 0);
scatterwalk_done(&src_sg_walk, 0, 0);
scatterwalk_done(&assoc_sg_walk, 0, 0);
} else {
- scatterwalk_map_and_copy(dst, req->dst, 0, req->cryptlen, 1);
+ scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1);
kfree(src);
}
return retval;
preempt_disable();
tsk->thread.fpu_counter = 0;
__drop_fpu(tsk);
- clear_used_math();
+ clear_stopped_child_used_math(tsk);
preempt_enable();
}
extern unsigned long max_low_pfn_mapped;
extern unsigned long max_pfn_mapped;
-extern bool kaslr_enabled;
-
static inline phys_addr_t get_max_mapped(void)
{
return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
extern int (*pcibios_enable_irq)(struct pci_dev *dev);
extern void (*pcibios_disable_irq)(struct pci_dev *dev);
+extern bool mp_should_keep_irq(struct device *dev);
+
struct pci_raw_ops {
int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 *val);
#define SETUP_DTB 2
#define SETUP_PCI 3
#define SETUP_EFI 4
-#define SETUP_KASLR 5
/* ram_size flags */
#define RAMDISK_IMAGE_START_MASK 0x07FF
return 0;
}
+/*
+ * ACPI offers an alternative platform interface model that removes
+ * ACPI hardware requirements for platforms that do not implement
+ * the PC Architecture.
+ *
+ * We initialize the Hardware-reduced ACPI model here:
+ */
+static void __init acpi_reduced_hw_init(void)
+{
+ if (acpi_gbl_reduced_hardware) {
+ /*
+ * Override x86_init functions and bypass legacy pic
+ * in Hardware-reduced ACPI mode
+ */
+ x86_init.timers.timer_init = x86_init_noop;
+ x86_init.irqs.pre_vector_init = x86_init_noop;
+ legacy_pic = &null_legacy_pic;
+ }
+}
+
/*
* If your system is blacklisted here, but you find that acpi=force
*/
early_acpi_process_madt();
+ /*
+ * Hardware-reduced ACPI mode initialization:
+ */
+ acpi_reduced_hw_init();
+
return 0;
}
static unsigned int get_apic_id(unsigned long x)
{
unsigned long value;
- unsigned int id;
+ unsigned int id = (x >> 24) & 0xff;
- rdmsrl(MSR_FAM10H_NODE_ID, value);
- id = ((x >> 24) & 0xffU) | ((value << 2) & 0xff00U);
+ if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) {
+ rdmsrl(MSR_FAM10H_NODE_ID, value);
+ id |= (value << 2) & 0xff00;
+ }
return id;
}
static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
{
- if (c->phys_proc_id != node) {
- c->phys_proc_id = node;
- per_cpu(cpu_llc_id, smp_processor_id()) = node;
+ u64 val;
+ u32 nodes = 1;
+
+ this_cpu_write(cpu_llc_id, node);
+
+ /* Account for nodes per socket in multi-core-module processors */
+ if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) {
+ rdmsrl(MSR_FAM10H_NODE_ID, val);
+ nodes = ((val >> 3) & 7) + 1;
}
+
+ c->phys_proc_id = node / nodes;
}
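Worked numbers for the new topology math: MSR_FAM10H_NODE_ID bits [5:3] hold the node count per socket minus one (the layout as used in the patch itself), so a NUMA node id is divided down to a physical package id. A standalone check:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t val = 1 << 3;                  /* NODE_ID: 2 nodes per socket */
            uint32_t nodes = ((val >> 3) & 7) + 1;  /* -> 2 */
            int node = 5;

            printf("phys_proc_id = %d\n", node / nodes);    /* -> 2 */
            return 0;
    }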
static int __init numachip_system_init(void)
* Has incomplete stack frame and undefined top of stack.
*/
ret_from_sys_call:
- testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
- jnz int_ret_from_sys_call_fixup /* Go to the slow path */
-
LOCKDEP_SYS_EXIT
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
+
+ /*
+ * We must check ti flags with interrupts (or at least preemption)
+ * off because we must *never* return to userspace without
+ * processing exit work that is enqueued if we're preempted here.
+ * In particular, returning to userspace with any of the one-shot
+ * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is
+ * very bad.
+ */
+ testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
+ jnz int_ret_from_sys_call_fixup /* Go to the slow path */
+
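A user-space toy of the ordering requirement spelled out in the new comment: the work flags may only be tested once further interruption is excluded, otherwise a flag raised between the test and the return to userspace is silently lost. Every name below is illustrative; none of it is kernel code:

    #include <stdatomic.h>

    static atomic_int work_pending;         /* stands in for TI_flags */

    static void block_interrupts(void) { /* DISABLE_INTERRUPTS stand-in */ }
    static void handle_exit_work(void) { atomic_store(&work_pending, 0); }

    static void return_to_user(void)
    {
            /* exclude new work first, then test; the reverse order would
             * let work queued in the gap be skipped until the next entry */
            block_interrupts();
            while (atomic_load(&work_pending))
                    handle_exit_work();
            /* sysretq equivalent would go here */
    }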
CFI_REMEMBER_STATE
/*
* sysretq will re-enable interrupts:
int_ret_from_sys_call_fixup:
FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
- jmp int_ret_from_sys_call
+ jmp int_ret_from_sys_call_irqs_off
/* Do syscall tracing */
tracesys:
GLOBAL(int_ret_from_sys_call)
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
+int_ret_from_sys_call_irqs_off:
movl $_TIF_ALLWORK_MASK,%edi
/* edi: mask to check */
GLOBAL(int_with_check)
#ifdef CONFIG_RANDOMIZE_BASE
static unsigned long module_load_offset;
+static int randomize_modules = 1;
/* Mutex protects the module_load_offset. */
static DEFINE_MUTEX(module_kaslr_mutex);
+static int __init parse_nokaslr(char *p)
+{
+ randomize_modules = 0;
+ return 0;
+}
+early_param("nokaslr", parse_nokaslr);
+
static unsigned long int get_module_load_offset(void)
{
- if (kaslr_enabled) {
+ if (randomize_modules) {
mutex_lock(&module_kaslr_mutex);
/*
* Calculate the module_load_offset the first time this
unsigned long max_low_pfn_mapped;
unsigned long max_pfn_mapped;
-bool __read_mostly kaslr_enabled = false;
-
#ifdef CONFIG_DMI
RESERVE_BRK(dmi_alloc, 65536);
#endif
}
#endif /* CONFIG_BLK_DEV_INITRD */
-static void __init parse_kaslr_setup(u64 pa_data, u32 data_len)
-{
- kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data));
-}
-
static void __init parse_setup_data(void)
{
struct setup_data *data;
case SETUP_EFI:
parse_efi_setup(pa_data, data_len);
break;
- case SETUP_KASLR:
- parse_kaslr_setup(pa_data, data_len);
- break;
default:
break;
}
static int
dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
{
- if (kaslr_enabled)
- pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
- (unsigned long)&_text - __START_KERNEL,
- __START_KERNEL,
- __START_KERNEL_map,
- MODULES_VADDR-1);
- else
- pr_emerg("Kernel Offset: disabled\n");
+ pr_emerg("Kernel Offset: 0x%lx from 0x%lx "
+ "(relocation range: 0x%lx-0x%lx)\n",
+ (unsigned long)&_text - __START_KERNEL, __START_KERNEL,
+ __START_KERNEL_map, MODULES_VADDR-1);
return 0;
}
goto exit;
conditional_sti(regs);
- if (!user_mode(regs))
+ if (!user_mode_vm(regs))
die("bounds", regs, error_code);
if (!cpu_feature_enabled(X86_FEATURE_MPX)) {
* then it's very likely the result of an icebp/int01 trap.
* User wants a sigtrap for that.
*/
- if (!dr6 && user_mode(regs))
+ if (!dr6 && user_mode_vm(regs))
user_icebp = 1;
/* Catch kmemcheck conditions first of all! */
* thread's fpu state, reconstruct fxstate from the fsave
* header. Sanitize the copied state etc.
*/
- struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave;
+ struct fpu *fpu = &tsk->thread.fpu;
struct user_i387_ia32_struct env;
int err = 0;
*/
drop_fpu(tsk);
- if (__copy_from_user(xsave, buf_fx, state_size) ||
+ if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) ||
__copy_from_user(&env, buf, sizeof(env))) {
+ fpu_finit(fpu);
err = -1;
} else {
sanitize_restored_xstate(tsk, &env, xstate_bv, fx_only);
- set_used_math();
}
+ set_used_math();
if (use_eager_fpu()) {
preempt_disable();
math_state_restore();
return -EOPNOTSUPP;
if (len != 1) {
+ memset(val, 0, len);
pr_pic_unimpl("non byte read\n");
return 0;
}
struct kvm_ioapic *ioapic, int vector, int trigger_mode)
{
int i;
+ struct kvm_lapic *apic = vcpu->arch.apic;
for (i = 0; i < IOAPIC_NUM_PINS; i++) {
union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i];
kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, i);
spin_lock(&ioapic->lock);
- if (trigger_mode != IOAPIC_LEVEL_TRIG)
+ if (trigger_mode != IOAPIC_LEVEL_TRIG ||
+ kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI)
continue;
ASSERT(ent->fields.trig_mode == IOAPIC_LEVEL_TRIG);
static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector)
{
- if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) &&
- kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) {
+ if (kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) {
int trigger_mode;
if (apic_test_vector(vector, apic->regs + APIC_TMR))
trigger_mode = IOAPIC_LEVEL_TRIG;
{
unsigned long *msr_bitmap;
- if (irqchip_in_kernel(vcpu->kvm) && apic_x2apic_mode(vcpu->arch.apic)) {
+ if (is_guest_mode(vcpu))
+ msr_bitmap = vmx_msr_bitmap_nested;
+ else if (irqchip_in_kernel(vcpu->kvm) &&
+ apic_x2apic_mode(vcpu->arch.apic)) {
if (is_long_mode(vcpu))
msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
else
if (enable_ept) {
/* nested EPT: emulate EPT also to L1 */
vmx->nested.nested_vmx_secondary_ctls_high |=
- SECONDARY_EXEC_ENABLE_EPT |
- SECONDARY_EXEC_UNRESTRICTED_GUEST;
+ SECONDARY_EXEC_ENABLE_EPT;
vmx->nested.nested_vmx_ept_caps = VMX_EPT_PAGE_WALK_4_BIT |
VMX_EPTP_WB_BIT | VMX_EPT_2MB_PAGE_BIT |
VMX_EPT_INVEPT_BIT;
} else
vmx->nested.nested_vmx_ept_caps = 0;
+ if (enable_unrestricted_guest)
+ vmx->nested.nested_vmx_secondary_ctls_high |=
+ SECONDARY_EXEC_UNRESTRICTED_GUEST;
+
/* miscellaneous data */
rdmsr(MSR_IA32_VMX_MISC,
vmx->nested.nested_vmx_misc_low,
}
if (cpu_has_vmx_msr_bitmap() &&
- exec_control & CPU_BASED_USE_MSR_BITMAPS &&
- nested_vmx_merge_msr_bitmap(vcpu, vmcs12)) {
- vmcs_write64(MSR_BITMAP, __pa(vmx_msr_bitmap_nested));
+ exec_control & CPU_BASED_USE_MSR_BITMAPS) {
+ nested_vmx_merge_msr_bitmap(vcpu, vmcs12);
+ /* MSR_BITMAP will be set by following vmx_set_efer. */
} else
exec_control &= ~CPU_BASED_USE_MSR_BITMAPS;
case KVM_CAP_USER_NMI:
case KVM_CAP_REINJECT_CONTROL:
case KVM_CAP_IRQ_INJECT_STATUS:
- case KVM_CAP_IRQFD:
case KVM_CAP_IOEVENTFD:
case KVM_CAP_IOEVENTFD_NO_LENGTH:
case KVM_CAP_PIT2:
}
}
-/*
- * Some device drivers assume dev->irq won't change after calling
- * pci_disable_device(). So delay releasing of IRQ resource to driver
- * unbinding time. Otherwise it will break PM subsystem and drivers
- * like xen-pciback etc.
- */
-static int pci_irq_notifier(struct notifier_block *nb, unsigned long action,
- void *data)
-{
- struct pci_dev *dev = to_pci_dev(data);
-
- if (action != BUS_NOTIFY_UNBOUND_DRIVER)
- return NOTIFY_DONE;
-
- if (pcibios_disable_irq)
- pcibios_disable_irq(dev);
-
- return NOTIFY_OK;
-}
-
-static struct notifier_block pci_irq_nb = {
- .notifier_call = pci_irq_notifier,
- .priority = INT_MIN,
-};
-
int __init pcibios_init(void)
{
if (!raw_pci_ops) {
if (pci_bf_sort >= pci_force_bf)
pci_sort_breadthfirst();
-
- bus_register_notifier(&pci_bus_type, &pci_irq_nb);
-
return 0;
}
return 0;
}
+void pcibios_disable_device (struct pci_dev *dev)
+{
+ if (!pci_dev_msi_enabled(dev) && pcibios_disable_irq)
+ pcibios_disable_irq(dev);
+}
+
int pci_ext_cfg_avail(void)
{
if (raw_pci_ext_ops)
static void intel_mid_pci_irq_disable(struct pci_dev *dev)
{
- if (dev->irq_managed && dev->irq > 0) {
+ if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed &&
+ dev->irq > 0) {
mp_unmap_irq(dev->irq);
dev->irq_managed = 0;
- dev->irq = 0;
}
}
return 0;
}
+bool mp_should_keep_irq(struct device *dev)
+{
+ if (dev->power.is_prepared)
+ return true;
+#ifdef CONFIG_PM
+ if (dev->power.runtime_status == RPM_SUSPENDING)
+ return true;
+#endif
+
+ return false;
+}
+
static void pirq_disable_irq(struct pci_dev *dev)
{
- if (io_apic_assign_pci_irqs && dev->irq_managed && dev->irq) {
+ if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) &&
+ dev->irq_managed && dev->irq) {
mp_unmap_irq(dev->irq);
dev->irq = 0;
dev->irq_managed = 0;
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
+ nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */
ALIGN
__kernel_sigreturn:
.LSTART_sigreturn:
if (p2m_pfn == PFN_DOWN(__pa(p2m_missing)))
p2m_init(p2m);
else
- p2m_init_identity(p2m, pfn);
+ p2m_init_identity(p2m, pfn & ~(P2M_PER_PAGE - 1));
spin_lock_irqsave(&p2m_update_lock, flags);
if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) {
struct bio_vec *bprev;
- bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1];
+ bprev = &rq->biotail->bi_io_vec[rq->biotail->bi_vcnt - 1];
if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset))
return false;
}
/*
* We're out of tags on this hardware queue, kick any
* pending IO submits before going to sleep waiting for
- * some to complete.
+ * some to complete. Note that hctx can be NULL here for
+ * reserved tag allocation.
*/
- blk_mq_run_hw_queue(hctx, false);
+ if (hctx)
+ blk_mq_run_hw_queue(hctx, false);
/*
* Retry tag allocation after running the hardware queue.
*/
if (percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release,
PERCPU_REF_INIT_ATOMIC, GFP_KERNEL))
- goto err_map;
+ goto err_mq_usage;
setup_timer(&q->timeout, blk_mq_rq_timer, (unsigned long) q);
blk_queue_rq_timeout(q, 30000);
blk_mq_init_cpu_queues(q, set->nr_hw_queues);
if (blk_mq_init_hw_queues(q, set))
- goto err_hw;
+ goto err_mq_usage;
mutex_lock(&all_q_mutex);
list_add_tail(&q->all_q_node, &all_q_list);
return q;
-err_hw:
+err_mq_usage:
blk_cleanup_queue(q);
err_hctxs:
kfree(map);
struct lpss_device_desc {
unsigned int flags;
+ const char *clk_con_id;
unsigned int prv_offset;
size_t prv_size_override;
void (*setup)(struct lpss_private_data *pdata);
static struct lpss_device_desc lpt_uart_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
+ .clk_con_id = "baudclk",
.prv_offset = 0x800,
.setup = lpss_uart_setup,
};
static struct lpss_device_desc byt_uart_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
+ .clk_con_id = "baudclk",
.prv_offset = 0x800,
.setup = lpss_uart_setup,
};
return PTR_ERR(clk);
pdata->clk = clk;
- clk_register_clkdev(clk, NULL, devname);
+ clk_register_clkdev(clk, dev_desc->clk_con_id, devname);
return 0;
}
if (!pin || !dev->irq_managed || dev->irq <= 0)
return;
+ /* Keep IOAPIC pin configuration when suspending */
+ if (dev->dev.power.is_prepared)
+ return;
+#ifdef CONFIG_PM
+ if (dev->dev.power.runtime_status == RPM_SUSPENDING)
+ return;
+#endif
+
entry = acpi_pci_irq_lookup(dev, pin);
if (!entry)
return;
if (gsi >= 0) {
acpi_unregister_gsi(gsi);
dev->irq_managed = 0;
- dev->irq = 0;
}
}
return NULL;
/* libsas case */
- if (!ap->scsi_host) {
+ if (ap->flags & ATA_FLAG_SAS_HOST) {
tag = ata_sas_allocate_tag(ap);
if (tag < 0)
return NULL;
tag = qc->tag;
if (likely(ata_tag_valid(tag))) {
qc->tag = ATA_TAG_POISON;
- if (!ap->scsi_host)
+ if (ap->flags & ATA_FLAG_SAS_HOST)
ata_sas_free_tag(tag, ap);
}
}
*/
ata_msleep(ap, 1);
+ sata_set_spd(link);
+
/*
* Now, bring the host controller online again, this can take time
* as PHY reset and communication establishment, 1st D2H FIS and
extern struct regcache_ops regcache_lzo_ops;
extern struct regcache_ops regcache_flat_ops;
+static inline const char *regmap_name(const struct regmap *map)
+{
+ if (map->dev)
+ return dev_name(map->dev);
+
+ return map->name;
+}
+
#endif
if (pos == 0) {
memmove(blk + offset * map->cache_word_size,
blk, rbnode->blklen * map->cache_word_size);
- bitmap_shift_right(present, present, offset, blklen);
+ bitmap_shift_left(present, present, offset, blklen);
}
/* update the rbnode block, its size and the base register */
ret = map->cache_ops->read(map, reg, value);
if (ret == 0)
- trace_regmap_reg_read_cache(map->dev, reg, *value);
+ trace_regmap_reg_read_cache(map, reg, *value);
return ret;
}
dev_dbg(map->dev, "Syncing %s cache\n",
map->cache_ops->name);
name = map->cache_ops->name;
- trace_regcache_sync(map->dev, name, "start");
+ trace_regcache_sync(map, name, "start");
if (!map->cache_dirty)
goto out;
regmap_async_complete(map);
- trace_regcache_sync(map->dev, name, "stop");
+ trace_regcache_sync(map, name, "stop");
return ret;
}
name = map->cache_ops->name;
dev_dbg(map->dev, "Syncing %s cache from %d-%d\n", name, min, max);
- trace_regcache_sync(map->dev, name, "start region");
+ trace_regcache_sync(map, name, "start region");
if (!map->cache_dirty)
goto out;
regmap_async_complete(map);
- trace_regcache_sync(map->dev, name, "stop region");
+ trace_regcache_sync(map, name, "stop region");
return ret;
}
map->lock(map->lock_arg);
- trace_regcache_drop_region(map->dev, min, max);
+ trace_regcache_drop_region(map, min, max);
ret = map->cache_ops->drop(map, min, max);
map->lock(map->lock_arg);
WARN_ON(map->cache_bypass && enable);
map->cache_only = enable;
- trace_regmap_cache_only(map->dev, enable);
+ trace_regmap_cache_only(map, enable);
map->unlock(map->lock_arg);
}
EXPORT_SYMBOL_GPL(regcache_cache_only);
map->lock(map->lock_arg);
WARN_ON(map->cache_only && enable);
map->cache_bypass = enable;
- trace_regmap_cache_bypass(map->dev, enable);
+ trace_regmap_cache_bypass(map, enable);
map->unlock(map->lock_arg);
}
EXPORT_SYMBOL_GPL(regcache_cache_bypass);
for (i = start; i < end; i++) {
regtmp = block_base + (i * map->reg_stride);
- if (!regcache_reg_present(cache_present, i))
+ if (!regcache_reg_present(cache_present, i) ||
+ !regmap_writeable(map, regtmp))
continue;
val = regcache_get_val(map, block, i);
for (i = start; i < end; i++) {
regtmp = block_base + (i * map->reg_stride);
- if (!regcache_reg_present(cache_present, i)) {
+ if (!regcache_reg_present(cache_present, i) ||
+ !regmap_writeable(map, regtmp)) {
ret = regcache_sync_block_raw_flush(map, &data,
base, regtmp);
if (ret != 0)
goto err_alloc;
}
- ret = request_threaded_irq(irq, NULL, regmap_irq_thread, irq_flags,
+ ret = request_threaded_irq(irq, NULL, regmap_irq_thread,
+ irq_flags | IRQF_ONESHOT,
chip->name, d);
if (ret != 0) {
dev_err(map->dev, "Failed to request IRQ %d for %s: %d\n",
if (map->async && map->bus->async_write) {
struct regmap_async *async;
- trace_regmap_async_write_start(map->dev, reg, val_len);
+ trace_regmap_async_write_start(map, reg, val_len);
spin_lock_irqsave(&map->async_lock, flags);
async = list_first_entry_or_null(&map->async_free,
return ret;
}
- trace_regmap_hw_write_start(map->dev, reg,
- val_len / map->format.val_bytes);
+ trace_regmap_hw_write_start(map, reg, val_len / map->format.val_bytes);
/* If we're doing a single register write we can probably just
* send the work_buf directly, otherwise try to do a gather
kfree(buf);
}
- trace_regmap_hw_write_done(map->dev, reg,
- val_len / map->format.val_bytes);
+ trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes);
return ret;
}
map->format.format_write(map, reg, val);
- trace_regmap_hw_write_start(map->dev, reg, 1);
+ trace_regmap_hw_write_start(map, reg, 1);
ret = map->bus->write(map->bus_context, map->work_buf,
map->format.buf_size);
- trace_regmap_hw_write_done(map->dev, reg, 1);
+ trace_regmap_hw_write_done(map, reg, 1);
return ret;
}
dev_info(map->dev, "%x <= %x\n", reg, val);
#endif
- trace_regmap_reg_write(map->dev, reg, val);
+ trace_regmap_reg_write(map, reg, val);
return map->reg_write(context, reg, val);
}
for (i = 0; i < num_regs; i++) {
int reg = regs[i].reg;
int val = regs[i].def;
- trace_regmap_hw_write_start(map->dev, reg, 1);
+ trace_regmap_hw_write_start(map, reg, 1);
map->format.format_reg(u8, reg, map->reg_shift);
u8 += reg_bytes + pad_bytes;
map->format.format_val(u8, val, 0);
for (i = 0; i < num_regs; i++) {
int reg = regs[i].reg;
- trace_regmap_hw_write_done(map->dev, reg, 1);
+ trace_regmap_hw_write_done(map, reg, 1);
}
return ret;
}
*/
u8[0] |= map->read_flag_mask;
- trace_regmap_hw_read_start(map->dev, reg,
- val_len / map->format.val_bytes);
+ trace_regmap_hw_read_start(map, reg, val_len / map->format.val_bytes);
ret = map->bus->read(map->bus_context, map->work_buf,
map->format.reg_bytes + map->format.pad_bytes,
val, val_len);
- trace_regmap_hw_read_done(map->dev, reg,
- val_len / map->format.val_bytes);
+ trace_regmap_hw_read_done(map, reg, val_len / map->format.val_bytes);
return ret;
}
dev_info(map->dev, "%x => %x\n", reg, *val);
#endif
- trace_regmap_reg_read(map->dev, reg, *val);
+ trace_regmap_reg_read(map, reg, *val);
if (!map->cache_bypass)
regcache_write(map, reg, *val);
struct regmap *map = async->map;
bool wake;
- trace_regmap_async_io_complete(map->dev);
+ trace_regmap_async_io_complete(map);
spin_lock(&map->async_lock);
list_move(&async->list, &map->async_free);
if (!map->bus || !map->bus->async_write)
return 0;
- trace_regmap_async_complete_start(map->dev);
+ trace_regmap_async_complete_start(map);
wait_event(map->async_waitq, regmap_async_is_done(map));
map->async_ret = 0;
spin_unlock_irqrestore(&map->async_lock, flags);
- trace_regmap_async_complete_done(map->dev);
+ trace_regmap_async_complete_done(map);
return ret;
}
return -EINVAL;
}
- nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL);
- if (!nbd_dev)
- return -ENOMEM;
-
part_shift = 0;
if (max_part > 0) {
part_shift = fls(max_part);
if (nbds_max > 1UL << (MINORBITS - part_shift))
return -EINVAL;
+ nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL);
+ if (!nbd_dev)
+ return -ENOMEM;
+
for (i = 0; i < nbds_max; i++) {
struct gendisk *disk = alloc_disk(1 << part_shift);
if (!disk)
}
get_device(dev->device);
+ INIT_LIST_HEAD(&dev->node);
INIT_WORK(&dev->probe_work, nvme_async_probe);
schedule_work(&dev->probe_work);
return 0;
{
int rc;
- rc = device_add(&chip->dev);
+ rc = cdev_add(&chip->cdev, chip->dev.devt, 1);
if (rc) {
dev_err(&chip->dev,
- "unable to device_register() %s, major %d, minor %d, err=%d\n",
+ "unable to cdev_add() %s, major %d, minor %d, err=%d\n",
chip->devname, MAJOR(chip->dev.devt),
MINOR(chip->dev.devt), rc);
+ device_unregister(&chip->dev);
return rc;
}
- rc = cdev_add(&chip->cdev, chip->dev.devt, 1);
+ rc = device_add(&chip->dev);
if (rc) {
dev_err(&chip->dev,
- "unable to cdev_add() %s, major %d, minor %d, err=%d\n",
+ "unable to device_register() %s, major %d, minor %d, err=%d\n",
chip->devname, MAJOR(chip->dev.devt),
MINOR(chip->dev.devt), rc);
- device_unregister(&chip->dev);
return rc;
}
* tpm_chip_register() - create a character device for the TPM chip
* @chip: TPM chip to use.
*
- * Creates a character device for the TPM chip and adds sysfs interfaces for
- * the device, PPI and TCPA. As the last step this function adds the
- * chip to the list of TPM chips available for use.
+ * Creates a character device for the TPM chip and adds sysfs attributes for
+ * the device. As the last step this function adds the chip to the list of TPM
+ * chips available for in-kernel use.
*
- * NOTE: This function should be only called after the chip initialization
- * is complete.
- *
- * Called from tpm_<specific>.c probe function only for devices
- * the driver has determined it should claim. Prior to calling
- * this function the specific probe function has called pci_enable_device
- * upon errant exit from this function specific probe function should call
- * pci_disable_device
+ * This function should be only called after the chip initialization is
+ * complete.
*/
int tpm_chip_register(struct tpm_chip *chip)
{
int rc;
- rc = tpm_dev_add_device(chip);
- if (rc)
- return rc;
-
/* Populate sysfs for TPM1 devices. */
if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
rc = tpm_sysfs_add_device(chip);
chip->bios_dir = tpm_bios_log_setup(chip->devname);
}
+ rc = tpm_dev_add_device(chip);
+ if (rc)
+ return rc;
+
/* Make the chip available. */
spin_lock(&driver_lock);
list_add_rcu(&chip->list, &tpm_chip_list);
{
struct ibmvtpm_dev *ibmvtpm;
struct ibmvtpm_crq crq;
- u64 *word = (u64 *) &crq;
+ __be64 *word = (__be64 *)&crq;
int rc;
ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip);
memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count);
crq.valid = (u8)IBMVTPM_VALID_CMD;
crq.msg = (u8)VTPM_TPM_COMMAND;
- crq.len = (u16)count;
- crq.data = ibmvtpm->rtce_dma_handle;
+ crq.len = cpu_to_be16(count);
+ crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle);
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(word[0]),
- cpu_to_be64(word[1]));
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]),
+ be64_to_cpu(word[1]));
if (rc != H_SUCCESS) {
dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
rc = 0;
struct ibmvtpm_crq {
u8 valid;
u8 msg;
- u16 len;
- u32 data;
- u64 reserved;
+ __be16 len;
+ __be32 data;
+ __be64 reserved;
} __attribute__((packed, aligned(8)));
struct ibmvtpm_crq_queue {
* notification
*/
struct work_struct control_work;
+ struct work_struct config_work;
struct list_head ports;
portdev = vdev->priv;
+ if (!use_multiport(portdev))
+ schedule_work(&portdev->config_work);
+}
+
+static void config_work_handler(struct work_struct *work)
+{
+ struct ports_device *portdev;
+
+ portdev = container_of(work, struct ports_device, control_work);
if (!use_multiport(portdev)) {
+ struct virtio_device *vdev;
struct port *port;
u16 rows, cols;
+ vdev = portdev->vdev;
virtio_cread(vdev, struct virtio_console_config, cols, &cols);
virtio_cread(vdev, struct virtio_console_config, rows, &rows);
virtio_device_ready(portdev->vdev);
+ INIT_WORK(&portdev->config_work, &config_work_handler);
+ INIT_WORK(&portdev->control_work, &control_work_handler);
+
if (multiport) {
unsigned int nr_added_bufs;
spin_lock_init(&portdev->c_ivq_lock);
spin_lock_init(&portdev->c_ovq_lock);
- INIT_WORK(&portdev->control_work, &control_work_handler);
nr_added_bufs = fill_queue(portdev->c_ivq,
&portdev->c_ivq_lock);
/* Finish up work that's lined up */
if (use_multiport(portdev))
cancel_work_sync(&portdev->control_work);
+ else
+ cancel_work_sync(&portdev->config_work);
list_for_each_entry_safe(port, port2, &portdev->ports, list)
unplug_port(port);
virtqueue_disable_cb(portdev->c_ivq);
cancel_work_sync(&portdev->control_work);
+ cancel_work_sync(&portdev->config_work);
/*
* Once more: if control_work_handler() was running, it would
* enable the cb as the last step.
divider->flags);
}
-/*
- * The reverse of DIV_ROUND_UP: The maximum number which
- * divided by m is r
- */
-#define MULT_ROUND_UP(r, m) ((r) * (m) + (m) - 1)
-
static bool _is_valid_table_div(const struct clk_div_table *table,
unsigned int div)
{
unsigned long parent_rate, unsigned long rate,
unsigned long flags)
{
- int up, down, div;
+ int up, down;
+ unsigned long up_rate, down_rate;
- up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate);
+ up = DIV_ROUND_UP(parent_rate, rate);
+ down = parent_rate / rate;
if (flags & CLK_DIVIDER_POWER_OF_TWO) {
- up = __roundup_pow_of_two(div);
- down = __rounddown_pow_of_two(div);
+ up = __roundup_pow_of_two(up);
+ down = __rounddown_pow_of_two(down);
} else if (table) {
- up = _round_up_table(table, div);
- down = _round_down_table(table, div);
+ up = _round_up_table(table, up);
+ down = _round_down_table(table, down);
}
- return (up - div) <= (div - down) ? up : down;
+ up_rate = DIV_ROUND_UP(parent_rate, up);
+ down_rate = DIV_ROUND_UP(parent_rate, down);
+
+ return (rate - up_rate) <= (down_rate - rate) ? up : down;
}
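The new rounding compares achievable output rates rather than divider values, which matters because the rate error is not symmetric in the divisor. A standalone check with illustrative numbers (parent 100 MHz, target 27 MHz):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long parent = 100000000, rate = 27000000;
            unsigned long up   = DIV_ROUND_UP(parent, rate); /* 4 -> 25 MHz */
            unsigned long down = parent / rate;              /* 3 -> ~33.3 MHz */
            unsigned long up_rate   = DIV_ROUND_UP(parent, up);
            unsigned long down_rate = DIV_ROUND_UP(parent, down);

            /* choose the divider whose rate lands closer to the target */
            printf("div = %lu\n",
                   (rate - up_rate) <= (down_rate - rate) ? up : down); /* 4 */
            return 0;
    }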
static int _div_round(const struct clk_div_table *table,
return i;
}
parent_rate = __clk_round_rate(__clk_get_parent(hw->clk),
- MULT_ROUND_UP(rate, i));
+ rate * i);
now = DIV_ROUND_UP(parent_rate, i);
if (_is_best_div(rate, now, best, flags)) {
bestdiv = i;
bestdiv = readl(divider->reg) >> divider->shift;
bestdiv &= div_mask(divider->width);
bestdiv = _get_div(divider->table, bestdiv, divider->flags);
- return bestdiv;
+ return DIV_ROUND_UP(*prate, bestdiv);
}
return divider_round_rate(hw, rate, prate, divider->table,
return rate;
}
-EXPORT_SYMBOL_GPL(clk_core_get_rate);
/**
* clk_get_rate - return the rate of clk
return clk_core_get_phase(clk->core);
}
+/**
+ * clk_is_match - check if two clk's point to the same hardware clock
+ * @p: clk compared against q
+ * @q: clk compared against p
+ *
+ * Returns true if the two struct clk pointers both point to the same hardware
+ * clock node. Put differently, returns true if struct clk *p and struct clk *q
+ * share the same struct clk_core object.
+ *
+ * Returns false otherwise. Note that two NULL clks are treated as matching.
+ */
+bool clk_is_match(const struct clk *p, const struct clk *q)
+{
+ /* trivial case: identical struct clk's or both NULL */
+ if (p == q)
+ return true;
+
+ /* true if clk->core pointers match. Avoid dereferencing garbage */
+ if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q))
+ if (p->core == q->core)
+ return true;
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(clk_is_match);
+
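A toy model of the new helper's semantics, illustrative only (the real function additionally treats IS_ERR pointers as non-matching): two distinct struct clk handles compare equal exactly when they share a clk_core.

    #include <stdbool.h>
    #include <stdio.h>

    struct clk_core { int id; };
    struct clk { struct clk_core *core; };

    static bool toy_clk_is_match(const struct clk *p, const struct clk *q)
    {
            if (p == q)                     /* identical handles or both NULL */
                    return true;
            if (p && q)                     /* distinct handles, same hardware */
                    return p->core == q->core;
            return false;
    }

    int main(void)
    {
            struct clk_core hw = { 1 };
            struct clk a = { &hw }, b = { &hw };

            printf("%d\n", toy_clk_is_match(&a, &b));       /* 1 */
            return 0;
    }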
/**
* __clk_init - initialize the data structures in a struct clk
* @dev: device initializing this clk, placeholder for now
},
};
+static struct clk_regmap pll4_vote = {
+ .enable_reg = 0x34c0,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "pll4_vote",
+ .parent_names = (const char *[]){ "pll4" },
+ .num_parents = 1,
+ .ops = &clk_pll_vote_ops,
+ },
+};
+
static struct clk_pll pll8 = {
.l_reg = 0x3144,
.m_reg = 0x3148,
static struct clk_regmap *gcc_msm8960_clks[] = {
[PLL3] = &pll3.clkr,
+ [PLL4_VOTE] = &pll4_vote,
[PLL8] = &pll8.clkr,
[PLL8_VOTE] = &pll8_vote,
[PLL14] = &pll14.clkr,
static struct clk_regmap *gcc_apq8064_clks[] = {
[PLL3] = &pll3.clkr,
+ [PLL4_VOTE] = &pll4_vote,
[PLL8] = &pll8.clkr,
[PLL8_VOTE] = &pll8_vote,
[PLL14] = &pll14.clkr,
.remove = lcc_ipq806x_remove,
.driver = {
.name = "lcc-ipq806x",
- .owner = THIS_MODULE,
.of_match_table = lcc_ipq806x_match_table,
},
};
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
- .n_val_shift = 16,
- .m_val_shift = 16,
+ .n_val_shift = 24,
+ .m_val_shift = 8,
.width = 8,
},
.p = {
return PTR_ERR(regmap);
/* Use the correct frequency plan depending on speed of PLL4 */
- val = regmap_read(regmap, 0x4, &val);
+ regmap_read(regmap, 0x4, &val);
if (val == 0x12) {
slimbus_src.freq_tbl = clk_tbl_aif_osr_492;
mi2s_osr_src.freq_tbl = clk_tbl_aif_osr_492;
.remove = lcc_msm8960_remove,
.driver = {
.name = "lcc-msm8960",
- .owner = THIS_MODULE,
.of_match_table = lcc_msm8960_match_table,
},
};
struct fapll_data *fd = to_fapll(hw);
u32 v = readl_relaxed(fd->base);
- v |= (1 << FAPLL_MAIN_PLLEN);
+ v |= FAPLL_MAIN_PLLEN;
writel_relaxed(v, fd->base);
return 0;
struct fapll_data *fd = to_fapll(hw);
u32 v = readl_relaxed(fd->base);
- v &= ~(1 << FAPLL_MAIN_PLLEN);
+ v &= ~FAPLL_MAIN_PLLEN;
writel_relaxed(v, fd->base);
}
struct fapll_data *fd = to_fapll(hw);
u32 v = readl_relaxed(fd->base);
- return v & (1 << FAPLL_MAIN_PLLEN);
+ return v & FAPLL_MAIN_PLLEN;
}
static unsigned long ti_fapll_recalc_rate(struct clk_hw *hw,
config SH_TIMER_CMT
bool "Renesas CMT timer driver" if COMPILE_TEST
depends on GENERIC_CLOCKEVENTS
+ depends on HAS_IOMEM
default SYS_SUPPORTS_SH_CMT
help
This enables build of a clocksource and clockevent driver for
config SH_TIMER_MTU2
bool "Renesas MTU2 timer driver" if COMPILE_TEST
depends on GENERIC_CLOCKEVENTS
+ depends on HAS_IOMEM
default SYS_SUPPORTS_SH_MTU2
help
This enables build of a clockevent driver for the Multi-Function
config SH_TIMER_TMU
bool "Renesas TMU timer driver" if COMPILE_TEST
depends on GENERIC_CLOCKEVENTS
+ depends on HAS_IOMEM
default SYS_SUPPORTS_SH_TMU
help
This enables build of a clocksource and clockevent driver for
clock_event_ddata.base = base;
clock_event_ddata.periodic_top = DIV_ROUND_CLOSEST(rate, 1024 * HZ);
- setup_irq(irq, &efm32_clock_event_irq);
-
clockevents_config_and_register(&clock_event_ddata.evtdev,
DIV_ROUND_CLOSEST(rate, 1024),
0xf, 0xffff);
+ setup_irq(irq, &efm32_clock_event_irq);
+
return 0;
err_get_irq:
#include <linux/irq.h>
#include <linux/irqreturn.h>
#include <linux/reset.h>
-#include <linux/sched_clock.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
.dev_id = &sun5i_clockevent,
};
-static u64 sun5i_timer_sched_read(void)
-{
- return ~readl(timer_base + TIMER_CNTVAL_LO_REG(1));
-}
-
static void __init sun5i_timer_init(struct device_node *node)
{
struct reset_control *rstc;
writel(TIMER_CTL_ENABLE | TIMER_CTL_RELOAD,
timer_base + TIMER_CTL_REG(1));
- sched_clock_register(sun5i_timer_sched_read, 32, rate);
clocksource_mmio_init(timer_base + TIMER_CNTVAL_LO_REG(1), node->name,
rate, 340, 32, clocksource_mmio_readl_down);
ticks_per_jiffy = DIV_ROUND_UP(rate, HZ);
- ret = setup_irq(irq, &sun5i_timer_irq);
- if (ret)
- pr_warn("failed to setup irq %d\n", irq);
-
/* Enable timer0 interrupt */
val = readl(timer_base + TIMER_IRQ_EN_REG);
writel(val | TIMER_IRQ_EN(0), timer_base + TIMER_IRQ_EN_REG);
clockevents_config_and_register(&sun5i_clockevent, rate,
TIMER_SYNC_TICKS, 0xffffffff);
+
+ ret = setup_irq(irq, &sun5i_timer_irq);
+ if (ret)
+ pr_warn("failed to setup irq %d\n", irq);
}
CLOCKSOURCE_OF_DECLARE(sun5i_a13, "allwinner,sun5i-a13-hstimer",
sun5i_timer_init);
deepidle = true;
ret = mvebu_v7_cpu_suspend(deepidle);
+ cpu_pm_exit();
+
if (ret)
return ret;
- cpu_pm_exit();
-
return index;
}
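The reordering above is the whole fix: cpu_pm_exit() must undo cpu_pm_enter() on every path, including the failed-suspend one, so it is called before the error check rather than after it. The shape of the pattern, sketched with illustrative stub names:

    #include <stdbool.h>

    static void cpu_pm_enter_stub(void) { }
    static void cpu_pm_exit_stub(void)  { }
    static int  platform_suspend_stub(bool deep) { (void)deep; return 0; }

    /* sketch: always undo the enter step, then honor the error */
    static int enter_idle_state(int index, bool deep)
    {
            int ret;

            cpu_pm_enter_stub();
            ret = platform_suspend_stub(deep);
            cpu_pm_exit_stub();     /* runs even when the suspend failed */

            return ret ? ret : index;
    }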
.states[0] = ARM_CPUIDLE_WFI_STATE,
.states[1] = {
.enter = mvebu_v7_enter_idle,
- .exit_latency = 10,
+ .exit_latency = 100,
.power_usage = 50,
- .target_residency = 100,
+ .target_residency = 1000,
.name = "MV CPU IDLE",
.desc = "CPU power down",
},
.states[2] = {
.enter = mvebu_v7_enter_idle,
- .exit_latency = 100,
+ .exit_latency = 1000,
.power_usage = 5,
- .target_residency = 1000,
+ .target_residency = 10000,
.flags = MVEBU_V7_FLAG_DEEP_IDLE,
.name = "MV CPU DEEP IDLE",
.desc = "CPU and L2 Fabric power down",
#define DRIVER_NAME "pl08xdmac"
+#define PL80X_DMA_BUSWIDTHS \
+ BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \
+ BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+ BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+ BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)
+
static struct amba_driver pl08x_amba_driver;
struct pl08x_driver_data;
pl08x->memcpy.device_pause = pl08x_pause;
pl08x->memcpy.device_resume = pl08x_resume;
pl08x->memcpy.device_terminate_all = pl08x_terminate_all;
+ pl08x->memcpy.src_addr_widths = PL80X_DMA_BUSWIDTHS;
+ pl08x->memcpy.dst_addr_widths = PL80X_DMA_BUSWIDTHS;
+ pl08x->memcpy.directions = BIT(DMA_MEM_TO_MEM);
+ pl08x->memcpy.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
/* Initialize slave engine */
dma_cap_set(DMA_SLAVE, pl08x->slave.cap_mask);
pl08x->slave.device_pause = pl08x_pause;
pl08x->slave.device_resume = pl08x_resume;
pl08x->slave.device_terminate_all = pl08x_terminate_all;
+ pl08x->slave.src_addr_widths = PL80X_DMA_BUSWIDTHS;
+ pl08x->slave.dst_addr_widths = PL80X_DMA_BUSWIDTHS;
+ pl08x->slave.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ pl08x->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
/* Get the platform data */
pl08x->pd = dev_get_platdata(&adev->dev);
}
/*
- * atc_get_current_descriptors -
- * locate the descriptor which equal to physical address in DSCR
- * @atchan: the channel we want to start
- * @dscr_addr: physical descriptor address in DSCR
+ * atc_get_desc_by_cookie - get the descriptor of a cookie
+ * @atchan: the DMA channel
+ * @cookie: the cookie to get the descriptor for
*/
-static struct at_desc *atc_get_current_descriptors(struct at_dma_chan *atchan,
- u32 dscr_addr)
+static struct at_desc *atc_get_desc_by_cookie(struct at_dma_chan *atchan,
+ dma_cookie_t cookie)
{
- struct at_desc *desc, *_desc, *child, *desc_cur = NULL;
+ struct at_desc *desc, *_desc;
- list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) {
- if (desc->lli.dscr == dscr_addr) {
- desc_cur = desc;
- break;
- }
+ list_for_each_entry_safe(desc, _desc, &atchan->queue, desc_node) {
+ if (desc->txd.cookie == cookie)
+ return desc;
+ }
- list_for_each_entry(child, &desc->tx_list, desc_node) {
- if (child->lli.dscr == dscr_addr) {
- desc_cur = child;
- break;
- }
- }
+ list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) {
+ if (desc->txd.cookie == cookie)
+ return desc;
}
- return desc_cur;
+ return NULL;
}
-/*
- * atc_get_bytes_left -
- * Get the number of bytes residue in dma buffer,
- * @chan: the channel we want to start
+/**
+ * atc_calc_bytes_left - calculates the number of bytes left according to the
+ * value read from CTRLA.
+ *
+ * @current_len: the number of bytes left before reading CTRLA
+ * @ctrla: the value of CTRLA
+ * @desc: the descriptor containing the transfer width
+ */
+static inline int atc_calc_bytes_left(int current_len, u32 ctrla,
+ struct at_desc *desc)
+{
+ return current_len - ((ctrla & ATC_BTSIZE_MAX) << desc->tx_width);
+}
+
+/**
+ * atc_calc_bytes_left_from_reg - calculates the number of bytes left according
+ * to the current value of CTRLA.
+ *
+ * @current_len: the number of bytes left before reading CTRLA
+ * @atchan: the channel to read CTRLA for
+ * @desc: the descriptor containing the transfer width
+ */
+static inline int atc_calc_bytes_left_from_reg(int current_len,
+ struct at_dma_chan *atchan, struct at_desc *desc)
+{
+ u32 ctrla = channel_readl(atchan, CTRLA);
+
+ return atc_calc_bytes_left(current_len, ctrla, desc);
+}
+
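The helper above reduces the residue bookkeeping to one line of integer
arithmetic: mask the block count out of CTRLA and scale it by the log2
transfer width stored in the descriptor. A standalone sketch of the same
calculation (the mask value and sample numbers are assumptions for
illustration only):

#include <stdio.h>
#include <stdint.h>

#define DEMO_BTSIZE_MAX 0xffffu	/* assumed CTRLA block-count field mask */

/* mirrors atc_calc_bytes_left(): subtract the CTRLA block count,
 * scaled by the log2 transfer width, from the running byte count */
static int demo_calc_bytes_left(int current_len, uint32_t ctrla,
				unsigned int tx_width)
{
	return current_len - ((ctrla & DEMO_BTSIZE_MAX) << tx_width);
}

int main(void)
{
	/* 4096 bytes in flight, CTRLA reports 512 blocks, 4-byte width */
	printf("bytes left: %d\n", demo_calc_bytes_left(4096, 512, 2));
	return 0;
}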
+/**
+ * atc_get_bytes_left - get the number of bytes residue for a cookie
+ * @chan: DMA channel
+ * @cookie: transaction identifier to check status of
*/
-static int atc_get_bytes_left(struct dma_chan *chan)
+static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
- struct at_dma *atdma = to_at_dma(chan->device);
- int chan_id = atchan->chan_common.chan_id;
struct at_desc *desc_first = atc_first_active(atchan);
- struct at_desc *desc_cur;
- int ret = 0, count = 0;
+ struct at_desc *desc;
+ int ret;
+ u32 ctrla, dscr;
/*
- * Initialize necessary values in the first time.
- * remain_desc record remain desc length.
+	 * If the cookie doesn't match the currently running transfer, we can
+	 * return the total length of the associated DMA transfer, because it
+	 * is still queued.
*/
- if (atchan->remain_desc == 0)
- /* First descriptor embedds the transaction length */
- atchan->remain_desc = desc_first->len;
+ desc = atc_get_desc_by_cookie(atchan, cookie);
+ if (desc == NULL)
+ return -EINVAL;
+ else if (desc != desc_first)
+ return desc->total_len;
- /*
- * This happens when current descriptor transfer complete.
- * The residual buffer size should reduce current descriptor length.
- */
- if (unlikely(test_bit(ATC_IS_BTC, &atchan->status))) {
- clear_bit(ATC_IS_BTC, &atchan->status);
- desc_cur = atc_get_current_descriptors(atchan,
- channel_readl(atchan, DSCR));
- if (!desc_cur) {
- ret = -EINVAL;
- goto out;
- }
+	/* cookie matches the currently running transfer */
+ ret = desc_first->total_len;
- count = (desc_cur->lli.ctrla & ATC_BTSIZE_MAX)
- << desc_first->tx_width;
- if (atchan->remain_desc < count) {
- ret = -EINVAL;
- goto out;
+ if (desc_first->lli.dscr) {
+ /* hardware linked list transfer */
+
+ /*
+ * Calculate the residue by removing the length of the child
+ * descriptors already transferred from the total length.
+ * To get the current child descriptor we can use the value of
+ * the channel's DSCR register and compare it against the value
+ * of the hardware linked list structure of each child
+ * descriptor.
+ */
+
+ ctrla = channel_readl(atchan, CTRLA);
+ rmb(); /* ensure CTRLA is read before DSCR */
+ dscr = channel_readl(atchan, DSCR);
+
+ /* for the first descriptor we can be more accurate */
+ if (desc_first->lli.dscr == dscr)
+ return atc_calc_bytes_left(ret, ctrla, desc_first);
+
+ ret -= desc_first->len;
+ list_for_each_entry(desc, &desc_first->tx_list, desc_node) {
+ if (desc->lli.dscr == dscr)
+ break;
+
+ ret -= desc->len;
}
- atchan->remain_desc -= count;
- ret = atchan->remain_desc;
- } else {
/*
- * Get residual bytes when current
- * descriptor transfer in progress.
+ * For the last descriptor in the chain we can calculate
+ * the remaining bytes using the channel's register.
+ * Note that the transfer width of the first and last
+ * descriptor may differ.
*/
- count = (channel_readl(atchan, CTRLA) & ATC_BTSIZE_MAX)
- << (desc_first->tx_width);
- ret = atchan->remain_desc - count;
+ if (!desc->lli.dscr)
+ ret = atc_calc_bytes_left_from_reg(ret, atchan, desc);
+ } else {
+ /* single transfer */
+ ret = atc_calc_bytes_left_from_reg(ret, atchan, desc_first);
}
- /*
- * Check fifo empty.
- */
- if (!(dma_readl(atdma, CHSR) & AT_DMA_EMPT(chan_id)))
- atc_issue_pending(chan);
-out:
return ret;
}
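The linked-list branch above is worth spelling out: while a descriptor is
in flight, the channel's DSCR register holds that descriptor's link to the
next one, so comparing DSCR against each child's lli.dscr tells the driver
where the hardware currently is, and everything before that point can be
subtracted from the total. A userspace model of that walk (all names and
numbers are hypothetical):

#include <stdio.h>
#include <stdint.h>

/* miniature stand-in for at_desc */
struct demo_desc {
	uint32_t dscr;	/* link to the next descriptor; DSCR holds this
			 * value while the descriptor is in flight */
	int len;	/* bytes covered by this descriptor */
};

/* subtract every descriptor completed before the one the hardware
 * is working on */
static int demo_residue(const struct demo_desc *chain, int n,
			int total_len, uint32_t hw_dscr)
{
	int ret = total_len;
	int i;

	for (i = 0; i < n; i++) {
		if (chain[i].dscr == hw_dscr)
			break;
		ret -= chain[i].len;
	}
	return ret;
}

int main(void)
{
	struct demo_desc chain[] = {
		{ 0x1000, 256 }, { 0x2000, 256 }, { 0x3000, 256 },
	};

	/* hardware is on the second descriptor: one of three done */
	printf("residue: %d\n", demo_residue(chain, 3, 768, 0x2000));
	return 0;
}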
/* Give information to tasklet */
set_bit(ATC_IS_ERROR, &atchan->status);
}
- if (pending & AT_DMA_BTC(i))
- set_bit(ATC_IS_BTC, &atchan->status);
tasklet_schedule(&atchan->tasklet);
ret = IRQ_HANDLED;
}
desc->lli.ctrlb = ctrlb;
desc->txd.cookie = 0;
+ desc->len = xfer_count << src_width;
atc_desc_chain(&first, &prev, desc);
}
/* First descriptor of the chain embedds additional information */
first->txd.cookie = -EBUSY;
- first->len = len;
+ first->total_len = len;
+
+ /* set transfer width for the calculation of the residue */
first->tx_width = src_width;
+ prev->tx_width = src_width;
	/* set end-of-link to the last link descriptor of the list */
set_desc_eol(desc);
| ATC_SRC_WIDTH(mem_width)
| len >> mem_width;
desc->lli.ctrlb = ctrlb;
+ desc->len = len;
atc_desc_chain(&first, &prev, desc);
total_len += len;
| ATC_DST_WIDTH(mem_width)
| len >> reg_width;
desc->lli.ctrlb = ctrlb;
+ desc->len = len;
atc_desc_chain(&first, &prev, desc);
total_len += len;
/* First descriptor of the chain embedds additional information */
first->txd.cookie = -EBUSY;
- first->len = total_len;
+ first->total_len = total_len;
+
+ /* set transfer width for the calculation of the residue */
first->tx_width = reg_width;
+ prev->tx_width = reg_width;
/* first link descriptor of list is responsible of flags */
first->txd.flags = flags; /* client is in control of this ack */
| ATC_FC_MEM2PER
| ATC_SIF(atchan->mem_if)
| ATC_DIF(atchan->per_if);
+ desc->len = period_len;
break;
case DMA_DEV_TO_MEM:
| ATC_FC_PER2MEM
| ATC_SIF(atchan->per_if)
| ATC_DIF(atchan->mem_if);
+ desc->len = period_len;
break;
default:
/* First descriptor of the chain embedds additional information */
first->txd.cookie = -EBUSY;
- first->len = buf_len;
+ first->total_len = buf_len;
first->tx_width = reg_width;
return &first->txd;
spin_lock_irqsave(&atchan->lock, flags);
/* Get number of bytes left in the active transactions */
- bytes = atc_get_bytes_left(chan);
+ bytes = atc_get_bytes_left(chan, cookie);
spin_unlock_irqrestore(&atchan->lock, flags);
spin_lock_irqsave(&atchan->lock, flags);
atchan->descs_allocated = i;
- atchan->remain_desc = 0;
list_splice(&tmp_list, &atchan->free_list);
dma_cookie_init(chan);
spin_unlock_irqrestore(&atchan->lock, flags);
list_splice_init(&atchan->free_list, &list);
atchan->descs_allocated = 0;
atchan->status = 0;
- atchan->remain_desc = 0;
dev_vdbg(chan2dev(chan), "free_chan_resources: done\n");
}
* @at_lli: hardware lli structure
* @txd: support for the async_tx api
 * @desc_node: node on the channel's descriptors list
- * @len: total transaction bytecount
+ * @len: descriptor byte count
* @tx_width: transfer width
+ * @total_len: total transaction byte count
*/
struct at_desc {
/* FIRST values the hardware uses */
struct list_head desc_node;
size_t len;
u32 tx_width;
+ size_t total_len;
};
static inline struct at_desc *
enum atc_status {
ATC_IS_ERROR = 0,
ATC_IS_PAUSED = 1,
- ATC_IS_BTC = 2,
ATC_IS_CYCLIC = 24,
};
* @save_cfg: configuration register that is saved on suspend/resume cycle
* @save_dscr: for cyclic operations, preserve next descriptor address in
* the cyclic list on suspend/resume cycle
- * @remain_desc: to save remain desc length
* @dma_sconfig: configuration for slave transfers, passed via
* .device_config
* @lock: serializes enqueue/dequeue operations to descriptors lists
struct tasklet_struct tasklet;
u32 save_cfg;
u32 save_dscr;
- u32 remain_desc;
struct dma_slave_config dma_sconfig;
spinlock_t lock;
#include "internal.h"
+#define DRV_NAME "dw_dmac"
+
static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
.remove = dw_remove,
.shutdown = dw_shutdown,
.driver = {
- .name = "dw_dmac",
+ .name = DRV_NAME,
.pm = &dw_dev_pm_ops,
.of_match_table = of_match_ptr(dw_dma_of_id_table),
.acpi_match_table = ACPI_PTR(dw_dma_acpi_id_table),
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller platform driver");
+MODULE_ALIAS("platform:" DRV_NAME);
dev_err(sdma->dev, "Timeout waiting for CH0 ready\n");
}
+ /* Set bits of CONFIG register with dynamic context switching */
+ if (readl(sdma->regs + SDMA_H_CONFIG) == 0)
+ writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG);
+
return ret ? 0 : -ETIMEDOUT;
}
writel_relaxed(ccb_phys, sdma->regs + SDMA_H_C0PTR);
- /* Set bits of CONFIG register with given context switching mode */
- writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG);
-
/* Initializes channel's priorities */
sdma_set_channel_priority(&sdma->channel[0], 7);
Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
chipset. If M is selected the module will be called savage.
+config DRM_VGEM
+ tristate "Virtual GEM provider"
+ depends on DRM
+ help
+ Choose this option to get a virtual graphics memory manager,
+ as used by Mesa's software renderer for enhanced performance.
+ If M is selected the module will be called vgem.
+
source "drivers/gpu/drm/exynos/Kconfig"
source "drivers/gpu/drm/rockchip/Kconfig"
obj-$(CONFIG_DRM_SAVAGE)+= savage/
obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
obj-$(CONFIG_DRM_VIA) +=via/
+obj-$(CONFIG_DRM_VGEM) += vgem/
obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
obj-$(CONFIG_DRM_EXYNOS) +=exynos/
obj-$(CONFIG_DRM_ROCKCHIP) +=rockchip/
{
struct kfd_ioctl_get_clock_counters_args *args = data;
struct kfd_dev *dev;
- struct timespec time;
+ struct timespec64 time;
dev = kfd_device_by_id(args->gpu_id);
if (dev == NULL)
return -EINVAL;
/* Reading GPU clock counter from KGD */
- args->gpu_clock_counter = kfd2kgd->get_gpu_clock_counter(dev->kgd);
+ args->gpu_clock_counter =
+ dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);
/* No access to rdtsc. Using raw monotonic time */
- getrawmonotonic(&time);
- args->cpu_clock_counter = (uint64_t)timespec_to_ns(&time);
+ getrawmonotonic64(&time);
+ args->cpu_clock_counter = (uint64_t)timespec64_to_ns(&time);
- get_monotonic_boottime(&time);
- args->system_clock_counter = (uint64_t)timespec_to_ns(&time);
+ get_monotonic_boottime64(&time);
+ args->system_clock_counter = (uint64_t)timespec64_to_ns(&time);
/* Since the counter is in nano-seconds we use 1GHz frequency */
args->system_clock_freq = 1000000000;
return NULL;
}
-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev)
+struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
+ struct pci_dev *pdev, const struct kfd2kgd_calls *f2g)
{
struct kfd_dev *kfd;
kfd->device_info = device_info;
kfd->pdev = pdev;
kfd->init_complete = false;
+ kfd->kfd2kgd = f2g;
+
+ mutex_init(&kfd->doorbell_mutex);
+ memset(&kfd->doorbell_available_index, 0,
+ sizeof(kfd->doorbell_available_index));
return kfd;
}
/* add another 512KB for all other allocations on gart (HPD, fences) */
size += 512 * 1024;
- if (kfd2kgd->init_gtt_mem_allocation(kfd->kgd, size, &kfd->gtt_mem,
- &kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)) {
+ if (kfd->kfd2kgd->init_gtt_mem_allocation(
+ kfd->kgd, size, &kfd->gtt_mem,
+			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)) {
dev_err(kfd_device,
"Could not allocate %d bytes for device (%x:%x)\n",
size, kfd->pdev->vendor, kfd->pdev->device);
kfd_topology_add_device_error:
kfd_gtt_sa_fini(kfd);
kfd_gtt_sa_init_error:
- kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
+ kfd->kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
dev_err(kfd_device,
"device (%x:%x) NOT added due to errors\n",
kfd->pdev->vendor, kfd->pdev->device);
amd_iommu_free_device(kfd->pdev);
kfd_topology_remove_device(kfd);
kfd_gtt_sa_fini(kfd);
- kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
+ kfd->kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
}
kfree(kfd);
void program_sh_mem_settings(struct device_queue_manager *dqm,
struct qcm_process_device *qpd)
{
- return kfd2kgd->program_sh_mem_settings(dqm->dev->kgd, qpd->vmid,
+ return dqm->dev->kfd2kgd->program_sh_mem_settings(
+ dqm->dev->kgd, qpd->vmid,
qpd->sh_mem_config,
qpd->sh_mem_ape1_base,
qpd->sh_mem_ape1_limit,
{
uint32_t pasid_mapping;
- pasid_mapping = (pasid == 0) ? 0 : (uint32_t)pasid |
- ATC_VMID_PASID_MAPPING_VALID;
- return kfd2kgd->set_pasid_vmid_mapping(dqm->dev->kgd, pasid_mapping,
+ pasid_mapping = (pasid == 0) ? 0 :
+ (uint32_t)pasid |
+ ATC_VMID_PASID_MAPPING_VALID;
+
+ return dqm->dev->kfd2kgd->set_pasid_vmid_mapping(
+ dqm->dev->kgd, pasid_mapping,
vmid);
}
pipe_hpd_addr = dqm->pipelines_addr + i * CIK_HPD_EOP_BYTES;
pr_debug("kfd: pipeline address %llX\n", pipe_hpd_addr);
/* = log2(bytes/4)-1 */
- kfd2kgd->init_pipeline(dqm->dev->kgd, inx,
+ dqm->dev->kfd2kgd->init_pipeline(dqm->dev->kgd, inx,
CIK_HPD_EOP_BYTES_LOG2 - 3, pipe_hpd_addr);
}
pr_debug(" sdma queue id: %d\n", q->properties.sdma_queue_id);
pr_debug(" sdma engine id: %d\n", q->properties.sdma_engine_id);
+ init_sdma_vm(dqm, q, qpd);
retval = mqd->init_mqd(mqd, &q->mqd, &q->mqd_mem_obj,
&q->gart_mqd_addr, &q->properties);
if (retval != 0) {
return retval;
}
- init_sdma_vm(dqm, q, qpd);
+ retval = mqd->load_mqd(mqd, q->mqd, 0,
+ 0, NULL);
+ if (retval != 0) {
+ deallocate_sdma_queue(dqm, q->sdma_id);
+ mqd->uninit_mqd(mqd, q->mqd, q->mqd_mem_obj);
+ return retval;
+ }
+
return 0;
}
return retval;
}
-static int fence_wait_timeout(unsigned int *fence_addr,
+static int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
unsigned int fence_value,
unsigned long timeout)
{
pm_send_query_status(&dqm->packets, dqm->fence_gpu_addr,
KFD_FENCE_COMPLETED);
/* should be timed out */
- fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
+ amdkfd_fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS);
pm_release_ib(&dqm->packets);
dqm->active_runlist = false;
 * and that assures that any user process won't get access to the
* kernel doorbells page
*/
-static DEFINE_MUTEX(doorbell_mutex);
-static unsigned long doorbell_available_index[
- DIV_ROUND_UP(KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)] = { 0 };
#define KERNEL_DOORBELL_PASID 1
#define KFD_SIZE_OF_DOORBELL_IN_BYTES 4
BUG_ON(!kfd || !doorbell_off);
- mutex_lock(&doorbell_mutex);
- inx = find_first_zero_bit(doorbell_available_index,
+ mutex_lock(&kfd->doorbell_mutex);
+ inx = find_first_zero_bit(kfd->doorbell_available_index,
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
- __set_bit(inx, doorbell_available_index);
- mutex_unlock(&doorbell_mutex);
+ __set_bit(inx, kfd->doorbell_available_index);
+ mutex_unlock(&kfd->doorbell_mutex);
if (inx >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
return NULL;
inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
- mutex_lock(&doorbell_mutex);
- __clear_bit(inx, doorbell_available_index);
- mutex_unlock(&doorbell_mutex);
+ mutex_lock(&kfd->doorbell_mutex);
+ __clear_bit(inx, kfd->doorbell_available_index);
+ mutex_unlock(&kfd->doorbell_mutex);
}
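The doorbell hunks above move a global mutex/bitmap pair into struct
kfd_dev so each device tracks its own doorbell indices. The allocation
scheme itself is the classic find-first-zero-bit pattern; a userspace
model of it, without the locking (names are hypothetical):

#include <stdio.h>
#include <stdint.h>

#define DEMO_MAX_DOORBELLS 64

struct demo_dev {
	uint64_t doorbell_bitmap;	/* one bit per doorbell index */
};

static int demo_doorbell_alloc(struct demo_dev *dev)
{
	int inx;

	/* find_first_zero_bit() + __set_bit() analogue */
	for (inx = 0; inx < DEMO_MAX_DOORBELLS; inx++) {
		if (!(dev->doorbell_bitmap & (1ULL << inx))) {
			dev->doorbell_bitmap |= 1ULL << inx;
			return inx;
		}
	}
	return -1;	/* all doorbells in use */
}

static void demo_doorbell_free(struct demo_dev *dev, int inx)
{
	dev->doorbell_bitmap &= ~(1ULL << inx);	/* __clear_bit() */
}

int main(void)
{
	struct demo_dev dev = { 0 };
	int a = demo_doorbell_alloc(&dev);
	int b = demo_doorbell_alloc(&dev);

	printf("allocated %d and %d\n", a, b);
	demo_doorbell_free(&dev, a);
	printf("after free, next: %d\n", demo_doorbell_alloc(&dev));
	return 0;
}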
inline void write_kernel_doorbell(u32 __iomem *db, u32 value)
BUG_ON(!kq || !dev);
BUG_ON(type != KFD_QUEUE_TYPE_DIQ && type != KFD_QUEUE_TYPE_HIQ);
- pr_debug("kfd: In func %s initializing queue type %d size %d\n",
+ pr_debug("amdkfd: In func %s initializing queue type %d size %d\n",
		__func__, type, queue_size);
nop.opcode = IT_NOP;
prop.doorbell_ptr = kfd_get_kernel_doorbell(dev, &prop.doorbell_off);
- if (prop.doorbell_ptr == NULL)
+ if (prop.doorbell_ptr == NULL) {
+		pr_err("amdkfd: error init doorbell\n");
goto err_get_kernel_doorbell;
+ }
retval = kfd_gtt_sa_allocate(dev, queue_size, &kq->pq);
- if (retval != 0)
+ if (retval != 0) {
+ pr_err("amdkfd: error init pq queues size (%d)\n", queue_size);
goto err_pq_allocate_vidmem;
+ }
kq->pq_kernel_addr = kq->pq->cpu_ptr;
kq->pq_gpu_addr = kq->pq->gpu_addr;
err_eop_allocate_vidmem:
kfd_gtt_sa_free(dev, kq->pq);
err_pq_allocate_vidmem:
- pr_err("kfd: error init pq\n");
kfd_release_kernel_doorbell(dev, prop.doorbell_ptr);
err_get_kernel_doorbell:
- pr_err("kfd: error init doorbell");
return false;
}
else if (kq->queue->properties.type == KFD_QUEUE_TYPE_DIQ)
kfd_gtt_sa_free(kq->dev, kq->fence_mem_obj);
+ kq->mqd->uninit_mqd(kq->mqd, kq->queue->mqd, kq->queue->mqd_mem_obj);
+
kfd_gtt_sa_free(kq->dev, kq->rptr_mem);
kfd_gtt_sa_free(kq->dev, kq->wptr_mem);
kq->ops_asic_specific.uninitialize(kq);
queue_address = (unsigned int *)kq->pq_kernel_addr;
queue_size_dwords = kq->queue->properties.queue_size / sizeof(uint32_t);
- pr_debug("kfd: In func %s\nrptr: %d\nwptr: %d\nqueue_address 0x%p\n",
+ pr_debug("amdkfd: In func %s\nrptr: %d\nwptr: %d\nqueue_address 0x%p\n",
__func__, rptr, wptr, queue_address);
available_size = (rptr - 1 - wptr + queue_size_dwords) %
}
if (kq->ops.initialize(kq, dev, type, KFD_KERNEL_QUEUE_SIZE) == false) {
- pr_err("kfd: failed to init kernel queue\n");
+ pr_err("amdkfd: failed to init kernel queue\n");
kfree(kq);
return NULL;
}
BUG_ON(!dev);
- pr_err("kfd: starting kernel queue test\n");
+ pr_err("amdkfd: starting kernel queue test\n");
kq = kernel_queue_init(dev, KFD_QUEUE_TYPE_HIQ);
BUG_ON(!kq);
buffer[i] = kq->nop_packet;
kq->ops.submit_packet(kq);
- pr_err("kfd: ending kernel queue test\n");
+ pr_err("amdkfd: ending kernel queue test\n");
}
#define KFD_DRIVER_MINOR 7
#define KFD_DRIVER_PATCHLEVEL 1
-const struct kfd2kgd_calls *kfd2kgd;
static const struct kgd2kfd_calls kgd2kfd = {
.exit = kgd2kfd_exit,
.probe = kgd2kfd_probe,
MODULE_PARM_DESC(max_num_of_queues_per_device,
"Maximum number of supported queues per device (1 = Minimum, 4096 = default)");
-bool kgd2kfd_init(unsigned interface_version,
- const struct kfd2kgd_calls *f2g,
- const struct kgd2kfd_calls **g2f)
+bool kgd2kfd_init(unsigned interface_version, const struct kgd2kfd_calls **g2f)
{
/*
* Only one interface version is supported,
if (interface_version != KFD_INTERFACE_VERSION)
return false;
- /* Protection against multiple amd kgd loads */
- if (kfd2kgd)
- return true;
-
- kfd2kgd = f2g;
*g2f = &kgd2kfd;
return true;
{
int err;
- kfd2kgd = NULL;
-
/* Verify module parameters */
if ((sched_policy < KFD_SCHED_POLICY_HWS) ||
(sched_policy > KFD_SCHED_POLICY_NO_HWS)) {
static int load_mqd(struct mqd_manager *mm, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr)
{
- return kfd2kgd->hqd_load(mm->dev->kgd, mqd, pipe_id, queue_id, wptr);
+	return mm->dev->kfd2kgd->hqd_load(mm->dev->kgd, mqd, pipe_id,
+					  queue_id, wptr);
}
static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
uint32_t pipe_id, uint32_t queue_id,
uint32_t __user *wptr)
{
- return kfd2kgd->hqd_sdma_load(mm->dev->kgd, mqd);
+ return mm->dev->kfd2kgd->hqd_sdma_load(mm->dev->kgd, mqd);
}
static int update_mqd(struct mqd_manager *mm, void *mqd,
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_destroy(mm->dev->kgd, type, timeout,
+ return mm->dev->kfd2kgd->hqd_destroy(mm->dev->kgd, type, timeout,
pipe_id, queue_id);
}
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_sdma_destroy(mm->dev->kgd, mqd, timeout);
+ return mm->dev->kfd2kgd->hqd_sdma_destroy(mm->dev->kgd, mqd, timeout);
}
static bool is_occupied(struct mqd_manager *mm, void *mqd,
uint32_t queue_id)
{
- return kfd2kgd->hqd_is_occupied(mm->dev->kgd, queue_address,
+ return mm->dev->kfd2kgd->hqd_is_occupied(mm->dev->kgd, queue_address,
pipe_id, queue_id);
}
uint64_t queue_address, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_sdma_is_occupied(mm->dev->kgd, mqd);
+ return mm->dev->kfd2kgd->hqd_sdma_is_occupied(mm->dev->kgd, mqd);
}
/*
struct kgd2kfd_shared_resources shared_resources;
+ const struct kfd2kgd_calls *kfd2kgd;
+ struct mutex doorbell_mutex;
+ unsigned long doorbell_available_index[DIV_ROUND_UP(
+ KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)];
+
void *gtt_mem;
uint64_t gtt_start_gpu_addr;
void *gtt_start_cpu_ptr;
/* KGD2KFD callbacks */
void kgd2kfd_exit(void);
-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev);
+struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
+ struct pci_dev *pdev, const struct kfd2kgd_calls *f2g);
bool kgd2kfd_device_init(struct kfd_dev *kfd,
- const struct kgd2kfd_shared_resources *gpu_resources);
+ const struct kgd2kfd_shared_resources *gpu_resources);
void kgd2kfd_device_exit(struct kfd_dev *kfd);
-extern const struct kfd2kgd_calls *kfd2kgd;
-
enum kfd_mempool {
KFD_MEMPOOL_SYSTEM_CACHEABLE = 1,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE = 2,
/* The Device Queue Manager that owns this data */
struct device_queue_manager *dqm;
struct process_queue_manager *pqm;
- /* Device Queue Manager lock */
- struct mutex *lock;
/* Queues list */
struct list_head queues_list;
struct list_head priv_queue_list;
p = my_work->p;
+ pr_debug("Releasing process (pasid %d) in workqueue\n",
+ p->pasid);
+
mutex_lock(&p->mutex);
list_for_each_entry_safe(pdd, temp, &p->per_device_data,
per_device_list) {
+ pr_debug("Releasing pdd (topology id %d) for process (pasid %d) in workqueue\n",
+ pdd->dev->id, p->pasid);
+
amd_iommu_unbind_pasid(pdd->dev->pdev, p->pasid);
list_del(&pdd->per_device_list);
}
sysfs_show_32bit_prop(buffer, "max_engine_clk_fcompute",
- kfd2kgd->get_max_engine_clock_in_mhz(
+ dev->gpu->kfd2kgd->get_max_engine_clock_in_mhz(
dev->gpu->kgd));
sysfs_show_64bit_prop(buffer, "local_mem_size",
- kfd2kgd->get_vmem_size(dev->gpu->kgd));
+ dev->gpu->kfd2kgd->get_vmem_size(
+ dev->gpu->kgd));
sysfs_show_32bit_prop(buffer, "fw_version",
- kfd2kgd->get_fw_version(
+ dev->gpu->kfd2kgd->get_fw_version(
dev->gpu->kgd,
KGD_ENGINE_MEC1));
}
buf[2] = gpu->pdev->subsystem_device;
buf[3] = gpu->pdev->device;
buf[4] = gpu->pdev->bus->number;
- buf[5] = (uint32_t)(kfd2kgd->get_vmem_size(gpu->kgd) & 0xffffffff);
- buf[6] = (uint32_t)(kfd2kgd->get_vmem_size(gpu->kgd) >> 32);
+ buf[5] = (uint32_t)(gpu->kfd2kgd->get_vmem_size(gpu->kgd)
+ & 0xffffffff);
+ buf[6] = (uint32_t)(gpu->kfd2kgd->get_vmem_size(gpu->kgd) >> 32);
for (i = 0, hashout = 0; i < 7; i++)
hashout ^= hash_32(buf[i], KFD_GPU_ID_HASH_WIDTH);
size_t doorbell_start_offset;
};
-/**
- * struct kgd2kfd_calls
- *
- * @exit: Notifies amdkfd that kgd module is unloaded
- *
- * @probe: Notifies amdkfd about a probe done on a device in the kgd driver.
- *
- * @device_init: Initialize the newly probed device (if it is a device that
- * amdkfd supports)
- *
- * @device_exit: Notifies amdkfd about a removal of a kgd device
- *
- * @suspend: Notifies amdkfd about a suspend action done to a kgd device
- *
- * @resume: Notifies amdkfd about a resume action done to a kgd device
- *
- * This structure contains function callback pointers so the kgd driver
- * will notify to the amdkfd about certain status changes.
- *
- */
-struct kgd2kfd_calls {
- void (*exit)(void);
- struct kfd_dev* (*probe)(struct kgd_dev *kgd, struct pci_dev *pdev);
- bool (*device_init)(struct kfd_dev *kfd,
- const struct kgd2kfd_shared_resources *gpu_resources);
- void (*device_exit)(struct kfd_dev *kfd);
- void (*interrupt)(struct kfd_dev *kfd, const void *ih_ring_entry);
- void (*suspend)(struct kfd_dev *kfd);
- int (*resume)(struct kfd_dev *kfd);
-};
-
/**
* struct kfd2kgd_calls
*
enum kgd_engine_type type);
};
+/**
+ * struct kgd2kfd_calls
+ *
+ * @exit: Notifies amdkfd that kgd module is unloaded
+ *
+ * @probe: Notifies amdkfd about a probe done on a device in the kgd driver.
+ *
+ * @device_init: Initialize the newly probed device (if it is a device that
+ * amdkfd supports)
+ *
+ * @device_exit: Notifies amdkfd about a removal of a kgd device
+ *
+ * @suspend: Notifies amdkfd about a suspend action done to a kgd device
+ *
+ * @resume: Notifies amdkfd about a resume action done to a kgd device
+ *
+ * This structure contains function callback pointers so that the kgd
+ * driver can notify amdkfd about certain status changes.
+ *
+ */
+struct kgd2kfd_calls {
+ void (*exit)(void);
+ struct kfd_dev* (*probe)(struct kgd_dev *kgd, struct pci_dev *pdev,
+ const struct kfd2kgd_calls *f2g);
+ bool (*device_init)(struct kfd_dev *kfd,
+ const struct kgd2kfd_shared_resources *gpu_resources);
+ void (*device_exit)(struct kfd_dev *kfd);
+ void (*interrupt)(struct kfd_dev *kfd, const void *ih_ring_entry);
+ void (*suspend)(struct kfd_dev *kfd);
+ int (*resume)(struct kfd_dev *kfd);
+};
+
bool kgd2kfd_init(unsigned interface_version,
- const struct kfd2kgd_calls *f2g,
const struct kgd2kfd_calls **g2f);
#endif /* KGD_KFD_INTERFACE_H_INCLUDED */
crtc->enabled = true;
}
+void atmel_hlcdc_crtc_suspend(struct drm_crtc *c)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (crtc->enabled) {
+ atmel_hlcdc_crtc_disable(c);
+ /* save enable state for resume */
+ crtc->enabled = true;
+ }
+}
+
+void atmel_hlcdc_crtc_resume(struct drm_crtc *c)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (crtc->enabled) {
+ crtc->enabled = false;
+ atmel_hlcdc_crtc_enable(c);
+ }
+}
+
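The trick in the two helpers above is that crtc->enabled does double duty
as saved state: suspend disables the hardware but puts the flag back, and
resume clears it first so the enable path runs from a clean state. A
minimal model of the idiom (all names are hypothetical):

#include <stdbool.h>
#include <stdio.h>

struct demo_crtc {
	bool enabled;
};

static void demo_disable(struct demo_crtc *c)
{
	printf("hardware off\n");
	c->enabled = false;
}

static void demo_enable(struct demo_crtc *c)
{
	printf("hardware on\n");
	c->enabled = true;
}

static void demo_suspend(struct demo_crtc *c)
{
	if (c->enabled) {
		demo_disable(c);
		c->enabled = true;	/* save enable state for resume */
	}
}

static void demo_resume(struct demo_crtc *c)
{
	if (c->enabled) {
		c->enabled = false;	/* let the enable path run again */
		demo_enable(c);
	}
}

int main(void)
{
	struct demo_crtc c = { .enabled = true };

	demo_suspend(&c);
	demo_resume(&c);
	return 0;
}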
static int atmel_hlcdc_crtc_atomic_check(struct drm_crtc *c,
struct drm_crtc_state *s)
{
return 0;
drm_modeset_lock_all(drm_dev);
- list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
- if (crtc->enabled) {
- crtc_funcs->disable(crtc);
- /* save enable state for resume */
- crtc->enabled = true;
- }
- }
+ list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head)
+ atmel_hlcdc_crtc_suspend(crtc);
drm_modeset_unlock_all(drm_dev);
return 0;
}
return 0;
drm_modeset_lock_all(drm_dev);
- list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
- if (crtc->enabled) {
- crtc->enabled = false;
- crtc_funcs->enable(crtc);
- }
- }
+ list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head)
+ atmel_hlcdc_crtc_resume(crtc);
drm_modeset_unlock_all(drm_dev);
return 0;
}
void atmel_hlcdc_crtc_cancel_page_flip(struct drm_crtc *crtc,
struct drm_file *file);
+void atmel_hlcdc_crtc_suspend(struct drm_crtc *crtc);
+void atmel_hlcdc_crtc_resume(struct drm_crtc *crtc);
+
int atmel_hlcdc_crtc_create(struct drm_device *dev);
int atmel_hlcdc_create_outputs(struct drm_device *dev);
bochs_vga_writeb(bochs, 0x3c0, 0x20); /* unblank */
+ bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_XRES, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_YRES, bochs->yres);
connector->funcs->atomic_destroy_state(connector,
state->connector_states[i]);
+ state->connectors[i] = NULL;
state->connector_states[i] = NULL;
}
crtc->funcs->atomic_destroy_state(crtc,
state->crtc_states[i]);
+ state->crtcs[i] = NULL;
state->crtc_states[i] = NULL;
}
plane->funcs->atomic_destroy_state(plane,
state->plane_states[i]);
+ state->planes[i] = NULL;
state->plane_states[i] = NULL;
}
}
*/
void drm_atomic_state_free(struct drm_atomic_state *state)
{
+ if (!state)
+ return;
+
drm_atomic_state_clear(state);
DRM_DEBUG_ATOMIC("Freeing atomic state %p\n", state);
struct drm_mode_config *config = &dev->mode_config;
/* FIXME: Mode prop is missing, which also controls ->enable. */
- if (property == config->prop_active) {
+ if (property == config->prop_active)
state->active = val;
- } else if (crtc->funcs->atomic_set_property)
+ else if (crtc->funcs->atomic_set_property)
return crtc->funcs->atomic_set_property(crtc, state, property, val);
- return -EINVAL;
+ else
+ return -EINVAL;
+
+ return 0;
}
EXPORT_SYMBOL(drm_atomic_crtc_set_property);
const struct drm_crtc_state *state,
struct drm_property *property, uint64_t *val)
{
- if (crtc->funcs->atomic_get_property)
+ struct drm_device *dev = crtc->dev;
+ struct drm_mode_config *config = &dev->mode_config;
+
+ if (property == config->prop_active)
+ *val = state->active;
+ else if (crtc->funcs->atomic_get_property)
return crtc->funcs->atomic_get_property(crtc, state, property, val);
- return -EINVAL;
+ else
+ return -EINVAL;
+
+ return 0;
}
/**
old_crtc_state = old_state->crtc_states[drm_crtc_index(old_conn_state->crtc)];
- if (!old_crtc_state->active)
+ if (!old_crtc_state->active ||
+ !needs_modeset(old_conn_state->crtc->state))
continue;
encoder = old_conn_state->best_encoder;
if (!connector || !connector->state->best_encoder)
continue;
- if (!connector->state->crtc->state->active)
+ if (!connector->state->crtc->state->active ||
+ !needs_modeset(connector->state->crtc->state))
continue;
encoder = connector->state->best_encoder;
#include "drm_crtc_internal.h"
#include "drm_internal.h"
-static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev,
- struct drm_mode_fb_cmd2 *r,
- struct drm_file *file_priv);
+static struct drm_framebuffer *
+internal_framebuffer_create(struct drm_device *dev,
+ struct drm_mode_fb_cmd2 *r,
+ struct drm_file *file_priv);
/* Avoid boilerplate. I'm tired of typing. */
#define DRM_ENUM_NAME_FN(fnname, list) \
*/
if (req->flags & DRM_MODE_CURSOR_BO) {
if (req->handle) {
- fb = add_framebuffer_internal(dev, &fbreq, file_priv);
+ fb = internal_framebuffer_create(dev, &fbreq, file_priv);
if (IS_ERR(fb)) {
DRM_DEBUG_KMS("failed to wrap cursor buffer in drm framebuffer\n");
return PTR_ERR(fb);
}
-
- drm_framebuffer_reference(fb);
} else {
fb = NULL;
}
return 0;
}
-static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev,
- struct drm_mode_fb_cmd2 *r,
- struct drm_file *file_priv)
+static struct drm_framebuffer *
+internal_framebuffer_create(struct drm_device *dev,
+ struct drm_mode_fb_cmd2 *r,
+ struct drm_file *file_priv)
{
struct drm_mode_config *config = &dev->mode_config;
struct drm_framebuffer *fb;
return fb;
}
- mutex_lock(&file_priv->fbs_lock);
- r->fb_id = fb->base.id;
- list_add(&fb->filp_head, &file_priv->fbs);
- DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id);
- mutex_unlock(&file_priv->fbs_lock);
-
return fb;
}
int drm_mode_addfb2(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
+ struct drm_mode_fb_cmd2 *r = data;
struct drm_framebuffer *fb;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
- fb = add_framebuffer_internal(dev, data, file_priv);
+ fb = internal_framebuffer_create(dev, r, file_priv);
if (IS_ERR(fb))
return PTR_ERR(fb);
+ /* Transfer ownership to the filp for reaping on close */
+
+ DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id);
+ mutex_lock(&file_priv->fbs_lock);
+ r->fb_id = fb->base.id;
+ list_add(&fb->filp_head, &file_priv->fbs);
+ mutex_unlock(&file_priv->fbs_lock);
+
return 0;
}
struct drm_framebuffer *old_fb)
{
struct drm_device *dev = crtc->dev;
- struct drm_display_mode *adjusted_mode, saved_mode;
+ struct drm_display_mode *adjusted_mode, saved_mode, saved_hwmode;
struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
struct drm_encoder_helper_funcs *encoder_funcs;
int saved_x, saved_y;
}
saved_mode = crtc->mode;
+ saved_hwmode = crtc->hwmode;
saved_x = crtc->x;
saved_y = crtc->y;
}
DRM_DEBUG_KMS("[CRTC:%d]\n", crtc->base.id);
+ crtc->hwmode = *adjusted_mode;
+
/* Prepare the encoders and CRTCs before setting the mode. */
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
encoder->bridge->funcs->enable(encoder->bridge);
}
- /* Store real post-adjustment hardware mode. */
- crtc->hwmode = *adjusted_mode;
-
/* Calculate and store various constants which
* are later needed by vblank and swap-completion
* timestamping. They are derived from true hwmode.
if (!ret) {
crtc->enabled = saved_enabled;
crtc->mode = saved_mode;
+ crtc->hwmode = saved_hwmode;
crtc->x = saved_x;
crtc->y = saved_y;
}
break;
case DP_AUX_NATIVE_REPLY_NACK:
- DRM_DEBUG_KMS("native nack\n");
+ DRM_DEBUG_KMS("native nack (result=%d, size=%zu)\n", ret, msg->size);
return -EREMOTEIO;
case DP_AUX_NATIVE_REPLY_DEFER:
return ret;
case DP_AUX_I2C_REPLY_NACK:
- DRM_DEBUG_KMS("I2C nack\n");
+		DRM_DEBUG_KMS("I2C nack (result=%d, size=%zu)\n", ret, msg->size);
aux->i2c_nack_count++;
return -EREMOTEIO;
struct drm_dp_sideband_msg_tx *txmsg)
{
bool ret;
- mutex_lock(&mgr->qlock);
+
+ /*
+ * All updates to txmsg->state are protected by mgr->qlock, and the two
+ * cases we check here are terminal states. For those the barriers
+ * provided by the wake_up/wait_event pair are enough.
+ */
ret = (txmsg->state == DRM_DP_SIDEBAND_TX_RX ||
txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT);
- mutex_unlock(&mgr->qlock);
return ret;
}
return 0;
}
-/* must be called holding qlock */
static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
{
struct drm_dp_sideband_msg_tx *txmsg;
int ret;
+ WARN_ON(!mutex_is_locked(&mgr->qlock));
+
/* construct a chunk from the first msg in the tx_msg queue */
if (list_empty(&mgr->tx_msg_downq)) {
mgr->tx_down_in_progress = false;
}
EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi);
+int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
+{
+ int slots = 0;
+ port = drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return slots;
+
+ slots = port->vcpi.num_slots;
+ drm_dp_put_port(port);
+ return slots;
+}
+EXPORT_SYMBOL(drm_dp_mst_get_vcpi_slots);
+
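drm_dp_mst_get_vcpi_slots() above follows the MST validate/use/put
pattern: take a reference on the port only if it is still part of the
topology, read the value, then drop the reference, returning a safe
default when the port has already gone away. A userspace model of the
pattern (refcounting reduced to a plain counter, names hypothetical):

#include <stdio.h>

struct demo_port {
	int refcount;
	int valid;	/* still part of the topology? */
	int num_slots;
};

static struct demo_port *demo_get_validated_port(struct demo_port *p)
{
	if (!p->valid)
		return NULL;	/* port vanished under us */
	p->refcount++;
	return p;
}

static void demo_put_port(struct demo_port *p)
{
	p->refcount--;
}

static int demo_get_vcpi_slots(struct demo_port *port)
{
	int slots = 0;

	port = demo_get_validated_port(port);
	if (!port)
		return slots;	/* safe default for a dead port */

	slots = port->num_slots;
	demo_put_port(port);
	return slots;
}

int main(void)
{
	struct demo_port p = { .refcount = 1, .valid = 1, .num_slots = 5 };

	printf("slots: %d\n", demo_get_vcpi_slots(&p));
	p.valid = 0;	/* simulate unplug */
	printf("slots after unplug: %d\n", demo_get_vcpi_slots(&p));
	return 0;
}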
/**
* drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI
* @mgr: manager for this port
int width, int height)
{
struct drm_cmdline_mode *cmdline_mode;
- struct drm_display_mode *mode = NULL;
+ struct drm_display_mode *mode;
bool prefer_non_interlace;
cmdline_mode = &fb_helper_conn->connector->cmdline_mode;
if (cmdline_mode->specified == false)
- return mode;
+ return NULL;
/* attempt to find a matching mode in the list of modes
* we have gotten so far, if not add a CVT mode that conforms
goto create_mode;
prefer_non_interlace = !cmdline_mode->interlace;
- again:
+again:
list_for_each_entry(mode, &fb_helper_conn->connector->modes, head) {
/* check width/height */
if (mode->hdisplay != cmdline_mode->xres ||
return 0;
}
-#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
- [DRM_IOCTL_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0, .name = #ioctl}
+#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
+ [DRM_IOCTL_NR(ioctl)] = { \
+ .cmd = ioctl, \
+ .func = _func, \
+ .flags = _flags, \
+ .name = #ioctl \
+ }
/** Ioctl table */
static const struct drm_ioctl_desc drm_ioctls[] = {
int retcode = -EINVAL;
char stack_kdata[128];
char *kdata = NULL;
- unsigned int usize, asize;
+ unsigned int usize, asize, drv_size;
dev = file_priv->minor->dev;
if (drm_device_is_unplugged(dev))
return -ENODEV;
- if ((nr >= DRM_CORE_IOCTL_COUNT) &&
- ((nr < DRM_COMMAND_BASE) || (nr >= DRM_COMMAND_END)))
- goto err_i1;
- if ((nr >= DRM_COMMAND_BASE) && (nr < DRM_COMMAND_END) &&
- (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls)) {
- u32 drv_size;
+ if (nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END) {
+ /* driver ioctl */
+ if (nr - DRM_COMMAND_BASE >= dev->driver->num_ioctls)
+ goto err_i1;
ioctl = &dev->driver->ioctls[nr - DRM_COMMAND_BASE];
- drv_size = _IOC_SIZE(ioctl->cmd_drv);
- usize = asize = _IOC_SIZE(cmd);
- if (drv_size > asize)
- asize = drv_size;
- cmd = ioctl->cmd_drv;
- }
- else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) {
- u32 drv_size;
-
+ } else {
+ /* core ioctl */
+ if (nr >= DRM_CORE_IOCTL_COUNT)
+ goto err_i1;
ioctl = &drm_ioctls[nr];
+ }
- drv_size = _IOC_SIZE(ioctl->cmd);
- usize = asize = _IOC_SIZE(cmd);
- if (drv_size > asize)
- asize = drv_size;
-
- cmd = ioctl->cmd;
- } else
- goto err_i1;
+ drv_size = _IOC_SIZE(ioctl->cmd);
+ usize = _IOC_SIZE(cmd);
+ asize = max(usize, drv_size);
+ cmd = ioctl->cmd;
DRM_DEBUG("pid=%d, dev=0x%lx, auth=%d, %s\n",
task_pid_nr(current),
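The point of asize = max(usize, drv_size) above is that the kernel-side
argument buffer must be big enough for the size the ioctl table declares,
even when userspace passes a smaller (older) struct; the tail beyond what
userspace supplied is zero-filled. A standalone model of that size
handling (not the actual drm_ioctl() code paths):

#include <stdio.h>
#include <string.h>

#define DEMO_MAX(a, b) ((a) > (b) ? (a) : (b))

/* copy what userspace provided and zero the rest of the (possibly
 * larger) kernel-side argument buffer */
static void demo_copy_args(char *kdata, const char *udata,
			   unsigned int usize, unsigned int drv_size)
{
	unsigned int asize = DEMO_MAX(usize, drv_size);

	memcpy(kdata, udata, usize);
	if (asize > usize)
		memset(kdata + usize, 0, asize - usize);
}

int main(void)
{
	char user_args[8] = "old";	/* old, smaller userspace struct */
	char kernel_args[16];

	demo_copy_args(kernel_args, user_args,
		       sizeof(user_args), sizeof(kernel_args));
	printf("tail byte: %d\n", kernel_args[12]);	/* prints 0 */
	return 0;
}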
*/
bool drm_ioctl_flags(unsigned int nr, unsigned int *flags)
{
- if ((nr >= DRM_COMMAND_END && nr < DRM_CORE_IOCTL_COUNT) ||
- (nr < DRM_COMMAND_BASE)) {
- *flags = drm_ioctls[nr].flags;
- return true;
- }
+ if (nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END)
+ return false;
+
+ if (nr >= DRM_CORE_IOCTL_COUNT)
+ return false;
- return false;
+ *flags = drm_ioctls[nr].flags;
+ return true;
}
EXPORT_SYMBOL(drm_ioctl_flags);
unsigned rem;
rem = do_div(tmp, alignment);
- if (tmp)
+ if (rem)
start += alignment - rem;
}
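The one-character fix above matters: do_div() leaves the quotient in its
first argument and returns the remainder, so testing the quotient rounded
every start address past the first alignment boundary up by a full
alignment, including addresses that were already aligned. A standalone
demonstration of the corrected round-up (plain modulo stands in for
do_div()):

#include <stdio.h>

static unsigned long demo_align_up(unsigned long start,
				   unsigned long alignment)
{
	unsigned long rem = start % alignment;

	/* only round when there actually is a remainder */
	if (rem)
		start += alignment - rem;
	return start;
}

int main(void)
{
	printf("%lu\n", demo_align_up(4096, 4096));	/* 4096, not 8192 */
	printf("%lu\n", demo_align_up(4100, 4096));	/* 8192 */
	return 0;
}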
*/
bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2)
{
+ if (!mode1 && !mode2)
+ return true;
+
+ if (!mode1 || !mode2)
+ return false;
+
	/* do a clock check; convert to PICOS so fb modes get matched
	 * the same */
if (mode1->clock && mode2->clock) {
/**
* drm_mode_connector_list_update - update the mode list for the connector
* @connector: the connector to update
- * @merge_type_bits: whether to merge or overright type bits.
+ * @merge_type_bits: whether to merge or overwrite type bits
*
* This moves the modes from the @connector probed_modes list
* to the actual mode list. It compares the probed mode against the current
config DRM_EXYNOS_DP
bool "EXYNOS DRM DP driver support"
- depends on (DRM_EXYNOS_FIMD || DRM_EXYNOS7DECON) && ARCH_EXYNOS && (DRM_PTN3460=n || DRM_PTN3460=y || DRM_PTN3460=DRM_EXYNOS)
+ depends on (DRM_EXYNOS_FIMD || DRM_EXYNOS7_DECON) && ARCH_EXYNOS && (DRM_PTN3460=n || DRM_PTN3460=y || DRM_PTN3460=DRM_EXYNOS)
default DRM_EXYNOS
select DRM_PANEL
help
of_node_put(i80_if_timings);
ctx->regs = of_iomap(dev->of_node, 0);
- if (IS_ERR(ctx->regs)) {
- ret = PTR_ERR(ctx->regs);
+ if (!ctx->regs) {
+ ret = -ENOMEM;
goto err_del_component;
}
+++ /dev/null
-/*
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- * Authors:
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#include <drm/drmP.h>
-#include <drm/drm_crtc_helper.h>
-
-#include <drm/exynos_drm.h>
-#include "exynos_drm_drv.h"
-#include "exynos_drm_encoder.h"
-#include "exynos_drm_connector.h"
-
-#define to_exynos_connector(x) container_of(x, struct exynos_drm_connector,\
- drm_connector)
-
-struct exynos_drm_connector {
- struct drm_connector drm_connector;
- uint32_t encoder_id;
- struct exynos_drm_display *display;
-};
-
-static int exynos_drm_connector_get_modes(struct drm_connector *connector)
-{
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
- struct exynos_drm_display *display = exynos_connector->display;
- struct edid *edid = NULL;
- unsigned int count = 0;
- int ret;
-
- /*
- * if get_edid() exists then get_edid() callback of hdmi side
- * is called to get edid data through i2c interface else
- * get timing from the FIMD driver(display controller).
- *
- * P.S. in case of lcd panel, count is always 1 if success
- * because lcd panel has only one mode.
- */
- if (display->ops->get_edid) {
- edid = display->ops->get_edid(display, connector);
- if (IS_ERR_OR_NULL(edid)) {
- ret = PTR_ERR(edid);
- edid = NULL;
- DRM_ERROR("Panel operation get_edid failed %d\n", ret);
- goto out;
- }
-
- count = drm_add_edid_modes(connector, edid);
- if (!count) {
- DRM_ERROR("Add edid modes failed %d\n", count);
- goto out;
- }
-
- drm_mode_connector_update_edid_property(connector, edid);
- } else {
- struct exynos_drm_panel_info *panel;
- struct drm_display_mode *mode = drm_mode_create(connector->dev);
- if (!mode) {
- DRM_ERROR("failed to create a new display mode.\n");
- return 0;
- }
-
- if (display->ops->get_panel)
- panel = display->ops->get_panel(display);
- else {
- drm_mode_destroy(connector->dev, mode);
- return 0;
- }
-
- drm_display_mode_from_videomode(&panel->vm, mode);
- mode->width_mm = panel->width_mm;
- mode->height_mm = panel->height_mm;
- connector->display_info.width_mm = mode->width_mm;
- connector->display_info.height_mm = mode->height_mm;
-
- mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
- drm_mode_set_name(mode);
- drm_mode_probed_add(connector, mode);
-
- count = 1;
- }
-
-out:
- kfree(edid);
- return count;
-}
-
-static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
- struct drm_display_mode *mode)
-{
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
- struct exynos_drm_display *display = exynos_connector->display;
- int ret = MODE_BAD;
-
- DRM_DEBUG_KMS("%s\n", __FILE__);
-
- if (display->ops->check_mode)
- if (!display->ops->check_mode(display, mode))
- ret = MODE_OK;
-
- return ret;
-}
-
-static struct drm_encoder *exynos_drm_best_encoder(
- struct drm_connector *connector)
-{
- struct drm_device *dev = connector->dev;
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
- return drm_encoder_find(dev, exynos_connector->encoder_id);
-}
-
-static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
- .get_modes = exynos_drm_connector_get_modes,
- .mode_valid = exynos_drm_connector_mode_valid,
- .best_encoder = exynos_drm_best_encoder,
-};
-
-static int exynos_drm_connector_fill_modes(struct drm_connector *connector,
- unsigned int max_width, unsigned int max_height)
-{
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
- struct exynos_drm_display *display = exynos_connector->display;
- unsigned int width, height;
-
- width = max_width;
- height = max_height;
-
- /*
- * if specific driver want to find desired_mode using maxmum
- * resolution then get max width and height from that driver.
- */
- if (display->ops->get_max_resol)
- display->ops->get_max_resol(display, &width, &height);
-
- return drm_helper_probe_single_connector_modes(connector, width,
- height);
-}
-
-/* get detection status of display device. */
-static enum drm_connector_status
-exynos_drm_connector_detect(struct drm_connector *connector, bool force)
-{
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
- struct exynos_drm_display *display = exynos_connector->display;
- enum drm_connector_status status = connector_status_disconnected;
-
- if (display->ops->is_connected) {
- if (display->ops->is_connected(display))
- status = connector_status_connected;
- else
- status = connector_status_disconnected;
- }
-
- return status;
-}
-
-static void exynos_drm_connector_destroy(struct drm_connector *connector)
-{
- struct exynos_drm_connector *exynos_connector =
- to_exynos_connector(connector);
-
- drm_connector_unregister(connector);
- drm_connector_cleanup(connector);
- kfree(exynos_connector);
-}
-
-static struct drm_connector_funcs exynos_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
- .fill_modes = exynos_drm_connector_fill_modes,
- .detect = exynos_drm_connector_detect,
- .destroy = exynos_drm_connector_destroy,
-};
-
-struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
- struct drm_encoder *encoder)
-{
- struct exynos_drm_connector *exynos_connector;
- struct exynos_drm_display *display = exynos_drm_get_display(encoder);
- struct drm_connector *connector;
- int type;
- int err;
-
- exynos_connector = kzalloc(sizeof(*exynos_connector), GFP_KERNEL);
- if (!exynos_connector)
- return NULL;
-
- connector = &exynos_connector->drm_connector;
-
- switch (display->type) {
- case EXYNOS_DISPLAY_TYPE_HDMI:
- type = DRM_MODE_CONNECTOR_HDMIA;
- connector->interlace_allowed = true;
- connector->polled = DRM_CONNECTOR_POLL_HPD;
- break;
- case EXYNOS_DISPLAY_TYPE_VIDI:
- type = DRM_MODE_CONNECTOR_VIRTUAL;
- connector->polled = DRM_CONNECTOR_POLL_HPD;
- break;
- default:
- type = DRM_MODE_CONNECTOR_Unknown;
- break;
- }
-
- drm_connector_init(dev, connector, &exynos_connector_funcs, type);
- drm_connector_helper_add(connector, &exynos_connector_helper_funcs);
-
- err = drm_connector_register(connector);
- if (err)
- goto err_connector;
-
- exynos_connector->encoder_id = encoder->base.id;
- exynos_connector->display = display;
- connector->dpms = DRM_MODE_DPMS_OFF;
- connector->encoder = encoder;
-
- err = drm_mode_connector_attach_encoder(connector, encoder);
- if (err) {
- DRM_ERROR("failed to attach a connector to a encoder\n");
- goto err_sysfs;
- }
-
- DRM_DEBUG_KMS("connector has been created\n");
-
- return connector;
-
-err_sysfs:
- drm_connector_unregister(connector);
-err_connector:
- drm_connector_cleanup(connector);
- kfree(exynos_connector);
- return NULL;
-}
+++ /dev/null
-/*
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- * Authors:
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#ifndef _EXYNOS_DRM_CONNECTOR_H_
-#define _EXYNOS_DRM_CONNECTOR_H_
-
-struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
- struct drm_encoder *encoder);
-
-#endif
}
}
-static int fimd_ctx_initialize(struct fimd_context *ctx,
+static int fimd_iommu_attach_devices(struct fimd_context *ctx,
struct drm_device *drm_dev)
{
- struct exynos_drm_private *priv;
- priv = drm_dev->dev_private;
-
- ctx->drm_dev = drm_dev;
- ctx->pipe = priv->pipe++;
/* attach this sub driver to iommu mapping if supported. */
if (is_drm_iommu_supported(ctx->drm_dev)) {
return 0;
}
-static void fimd_ctx_remove(struct fimd_context *ctx)
+static void fimd_iommu_detach_devices(struct fimd_context *ctx)
{
/* detach this sub driver from iommu mapping if supported. */
if (is_drm_iommu_supported(ctx->drm_dev))
{
struct fimd_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
+ struct exynos_drm_private *priv = drm_dev->dev_private;
int ret;
- ret = fimd_ctx_initialize(ctx, drm_dev);
- if (ret) {
- DRM_ERROR("fimd_ctx_initialize failed.\n");
- return ret;
- }
+ ctx->drm_dev = drm_dev;
+ ctx->pipe = priv->pipe++;
ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
EXYNOS_DISPLAY_TYPE_LCD,
&fimd_crtc_ops, ctx);
- if (IS_ERR(ctx->crtc)) {
- fimd_ctx_remove(ctx);
- return PTR_ERR(ctx->crtc);
- }
if (ctx->display)
exynos_drm_create_enc_conn(drm_dev, ctx->display);
+ ret = fimd_iommu_attach_devices(ctx, drm_dev);
+ if (ret)
+ return ret;
+
return 0;
}
fimd_dpms(ctx->crtc, DRM_MODE_DPMS_OFF);
+ fimd_iommu_detach_devices(ctx);
+
if (ctx->display)
exynos_dpi_remove(ctx->display);
-
- fimd_ctx_remove(ctx);
}
static const struct component_ops fimd_component_ops = {
struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane);
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(plane->crtc);
- if (exynos_crtc->ops->win_disable)
+ if (exynos_crtc && exynos_crtc->ops->win_disable)
exynos_crtc->ops->win_disable(exynos_crtc,
exynos_plane->zpos);
struct regmap *regmap;
struct regmap *packet_memory_regmap;
enum drm_connector_status status;
- int dpms_mode;
+ bool powered;
unsigned int f_tmds;
unsigned int current_edid_segment;
uint8_t edid_buf[256];
+ bool edid_read;
wait_queue_head_t wq;
struct drm_encoder *encoder;
adv7511->rgb = config->input_colorspace == HDMI_COLORSPACE_RGB;
}
+static void adv7511_power_on(struct adv7511 *adv7511)
+{
+ adv7511->current_edid_segment = -1;
+
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+ ADV7511_INT0_EDID_READY);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
+ ADV7511_INT1_DDC_ERROR);
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ ADV7511_POWER_POWER_DOWN, 0);
+
+ /*
+	 * Per spec it is allowed to pulse the HPD signal to indicate that the
+	 * EDID information has changed. Some monitors do this when they wake
+	 * up from standby or are enabled. When HPD goes low the adv7511 is
+	 * reset and the outputs are disabled, which might cause the monitor
+	 * to go to standby again. To avoid this we ignore the HPD pin for
+	 * the first few seconds after enabling the output.
+ */
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+ ADV7511_REG_POWER2_HDP_SRC_MASK,
+ ADV7511_REG_POWER2_HDP_SRC_NONE);
+
+ /*
+ * Most of the registers are reset during power down or when HPD is low.
+ */
+ regcache_sync(adv7511->regmap);
+
+ adv7511->powered = true;
+}
+
+static void adv7511_power_off(struct adv7511 *adv7511)
+{
+ /* TODO: setup additional power down modes */
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ ADV7511_POWER_POWER_DOWN,
+ ADV7511_POWER_POWER_DOWN);
+ regcache_mark_dirty(adv7511->regmap);
+
+ adv7511->powered = false;
+}
+
/* -----------------------------------------------------------------------------
* Interrupt and hotplug detection
*/
return false;
}
-static irqreturn_t adv7511_irq_handler(int irq, void *devid)
-{
- struct adv7511 *adv7511 = devid;
-
- if (adv7511_hpd(adv7511))
- drm_helper_hpd_irq_event(adv7511->encoder->dev);
-
- wake_up_all(&adv7511->wq);
-
- return IRQ_HANDLED;
-}
-
-static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
- unsigned int irq)
+static int adv7511_irq_process(struct adv7511 *adv7511)
{
unsigned int irq0, irq1;
- unsigned int pending;
int ret;
ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
if (ret < 0)
- return 0;
+ return ret;
+
ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
if (ret < 0)
- return 0;
+ return ret;
- pending = (irq1 << 8) | irq0;
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
- return pending & irq;
+ if (irq0 & ADV7511_INT0_HDP)
+ drm_helper_hpd_irq_event(adv7511->encoder->dev);
+
+ if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
+ adv7511->edid_read = true;
+
+ if (adv7511->i2c_main->irq)
+ wake_up_all(&adv7511->wq);
+ }
+
+ return 0;
}
-static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
- int timeout)
+static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+{
+ struct adv7511 *adv7511 = devid;
+ int ret;
+
+ ret = adv7511_irq_process(adv7511);
+ return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
+}
+
+/* -----------------------------------------------------------------------------
+ * EDID retrieval
+ */
+
+static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
{
- unsigned int pending;
int ret;
if (adv7511->i2c_main->irq) {
ret = wait_event_interruptible_timeout(adv7511->wq,
- adv7511_is_interrupt_pending(adv7511, irq),
- msecs_to_jiffies(timeout));
- if (ret <= 0)
- return 0;
- pending = adv7511_is_interrupt_pending(adv7511, irq);
+ adv7511->edid_read, msecs_to_jiffies(timeout));
} else {
- if (timeout < 25)
- timeout = 25;
- do {
- pending = adv7511_is_interrupt_pending(adv7511, irq);
- if (pending)
+ for (; timeout > 0; timeout -= 25) {
+ ret = adv7511_irq_process(adv7511);
+ if (ret < 0)
break;
+
+ if (adv7511->edid_read)
+ break;
+
msleep(25);
- timeout -= 25;
- } while (timeout >= 25);
+ }
}
- return pending;
+ return adv7511->edid_read ? 0 : -EIO;
}
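adv7511_wait_for_edid() above has two paths: with an interrupt line it can
sleep on the wait queue, otherwise it polls in 25 ms steps until the flag
is set or the timeout budget is spent. A userspace model of the polling
fallback (the sleep is stubbed out; all names are hypothetical):

#include <stdbool.h>
#include <stdio.h>

/* poll in fixed steps until the condition holds or the budget runs out */
static bool demo_poll_done(int timeout_ms, bool (*poll)(void *), void *ctx)
{
	for (; timeout_ms > 0; timeout_ms -= 25) {
		if (poll(ctx))
			return true;
		/* msleep(25) would go here in the kernel */
	}
	return false;
}

static bool demo_check(void *ctx)
{
	int *countdown = ctx;

	return --(*countdown) <= 0;	/* "EDID ready" after a few polls */
}

int main(void)
{
	int countdown = 3;

	printf("ready: %s\n",
	       demo_poll_done(200, demo_check, &countdown) ? "yes" : "no");
	return 0;
}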
-/* -----------------------------------------------------------------------------
- * EDID retrieval
- */
-
static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
size_t len)
{
return ret;
if (status != 2) {
+ adv7511->edid_read = false;
regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
block);
- ret = adv7511_wait_for_interrupt(adv7511,
- ADV7511_INT0_EDID_READY |
- ADV7511_INT1_DDC_ERROR, 200);
-
- if (!(ret & ADV7511_INT0_EDID_READY))
- return -EIO;
+ ret = adv7511_wait_for_edid(adv7511, 200);
+ if (ret < 0)
+ return ret;
}
- regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
-
/* Break this apart, hopefully more I2C controllers will
* support 64 byte transfers than 256 byte transfers
*/
unsigned int count;
/* Reading the EDID only works if the device is powered */
- if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
+ if (!adv7511->powered) {
regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
+ ADV7511_INT0_EDID_READY);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
+ ADV7511_INT1_DDC_ERROR);
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
ADV7511_POWER_POWER_DOWN, 0);
adv7511->current_edid_segment = -1;
edid = drm_do_get_edid(connector, adv7511_get_edid_block, adv7511);
- if (adv7511->dpms_mode != DRM_MODE_DPMS_ON)
+ if (!adv7511->powered)
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
ADV7511_POWER_POWER_DOWN,
ADV7511_POWER_POWER_DOWN);
{
struct adv7511 *adv7511 = encoder_to_adv7511(encoder);
- switch (mode) {
- case DRM_MODE_DPMS_ON:
- adv7511->current_edid_segment = -1;
-
- regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN, 0);
- /*
- * Per spec it is allowed to pulse the HDP signal to indicate
- * that the EDID information has changed. Some monitors do this
- * when they wakeup from standby or are enabled. When the HDP
- * goes low the adv7511 is reset and the outputs are disabled
- * which might cause the monitor to go to standby again. To
- * avoid this we ignore the HDP pin for the first few seconds
- * after enabling the output.
- */
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
- ADV7511_REG_POWER2_HDP_SRC_MASK,
- ADV7511_REG_POWER2_HDP_SRC_NONE);
- /* Most of the registers are reset during power down or
- * when HPD is low
- */
- regcache_sync(adv7511->regmap);
- break;
- default:
- /* TODO: setup additional power down modes */
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN,
- ADV7511_POWER_POWER_DOWN);
- regcache_mark_dirty(adv7511->regmap);
- break;
- }
-
- adv7511->dpms_mode = mode;
+ if (mode == DRM_MODE_DPMS_ON)
+ adv7511_power_on(adv7511);
+ else
+ adv7511_power_off(adv7511);
}
static enum drm_connector_status
* there is a pending HPD interrupt and the cable is connected there was
* at least one transition from disconnected to connected and the chip
* has to be reinitialized. */
- if (status == connector_status_connected && hpd &&
- adv7511->dpms_mode == DRM_MODE_DPMS_ON) {
+ if (status == connector_status_connected && hpd && adv7511->powered) {
regcache_mark_dirty(adv7511->regmap);
- adv7511_encoder_dpms(encoder, adv7511->dpms_mode);
+ adv7511_power_on(adv7511);
adv7511_get_modes(encoder, connector);
if (adv7511->status == connector_status_connected)
status = connector_status_disconnected;
if (!adv7511)
return -ENOMEM;
- adv7511->dpms_mode = DRM_MODE_DPMS_OFF;
+ adv7511->powered = false;
adv7511->status = connector_status_disconnected;
ret = adv7511_parse_dt(dev->of_node, &link_config);
regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
ADV7511_CEC_CTRL_POWER_DOWN);
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN, ADV7511_POWER_POWER_DOWN);
-
- adv7511->current_edid_segment = -1;
+ adv7511_power_off(adv7511);
i2c_set_clientdata(i2c, adv7511);
i915_gem_execbuffer.o \
i915_gem_gtt.o \
i915_gem.o \
+ i915_gem_shrinker.o \
i915_gem_stolen.o \
i915_gem_tiling.o \
i915_gem_userptr.o \
}
i = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter, npages, first_page)
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
pages[i++] = sg_page_iter_page(&sg_iter);
+ if (i == npages)
+ break;
+ }
addr = vmap(pages, i, 0, PAGE_KERNEL);
if (addr == NULL) {
seq_printf(m, "Current P-state: %d\n",
(rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT);
} else if (IS_GEN6(dev) || (IS_GEN7(dev) && !IS_VALLEYVIEW(dev)) ||
- IS_BROADWELL(dev)) {
+ IS_BROADWELL(dev) || IS_GEN9(dev)) {
u32 gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS);
u32 rp_state_limits = I915_READ(GEN6_RP_STATE_LIMITS);
u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
reqf = I915_READ(GEN6_RPNSWREQ);
- reqf &= ~GEN6_TURBO_DISABLE;
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
- reqf >>= 24;
- else
- reqf >>= 25;
+ if (IS_GEN9(dev))
+ reqf >>= 23;
+ else {
+ reqf &= ~GEN6_TURBO_DISABLE;
+ if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ reqf >>= 24;
+ else
+ reqf >>= 25;
+ }
reqf = intel_gpu_freq(dev_priv, reqf);
rpmodectl = I915_READ(GEN6_RP_CONTROL);
rpdownei = I915_READ(GEN6_RP_CUR_DOWN_EI);
rpcurdown = I915_READ(GEN6_RP_CUR_DOWN);
rpprevdown = I915_READ(GEN6_RP_PREV_DOWN);
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ if (IS_GEN9(dev))
+ cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
+ else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
else
cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
pm_ier, pm_imr, pm_isr, pm_iir, pm_mask);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
seq_printf(m, "Render p-state ratio: %d\n",
- (gt_perf_status & 0xff00) >> 8);
+ (gt_perf_status & (IS_GEN9(dev) ? 0x1ff00 : 0xff00)) >> 8);
seq_printf(m, "Render p-state VID: %d\n",
gt_perf_status & 0xff);
seq_printf(m, "Render p-state limit: %d\n",
GEN6_CURBSYTAVG_MASK);
max_freq = (rp_state_cap & 0xff0000) >> 16;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Lowest (RPN) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
max_freq = (rp_state_cap & 0xff00) >> 8;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Nominal (RP1) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
max_freq = rp_state_cap & 0xff;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
seq_printf(m, "Max overclocked frequency: %dMHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.max_freq));
+
+ seq_printf(m, "Idle freq: %d MHz\n",
+ intel_gpu_freq(dev_priv, dev_priv->rps.idle_freq));
} else if (IS_VALLEYVIEW(dev)) {
u32 freq_sts;
seq_printf(m, "min GPU freq: %d MHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.min_freq));
+ seq_printf(m, "idle GPU freq: %d MHz\n",
+ intel_gpu_freq(dev_priv, dev_priv->rps.idle_freq));
+
seq_printf(m,
"efficient (RPe) frequency: %d MHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.efficient_freq));
if (ret)
return ret;
- if (dev_priv->ips.pwrctx) {
- seq_puts(m, "power context ");
- describe_obj(m, dev_priv->ips.pwrctx);
- seq_putc(m, '\n');
- }
-
- if (dev_priv->ips.renderctx) {
- seq_puts(m, "render context ");
- describe_obj(m, dev_priv->ips.renderctx);
- seq_putc(m, '\n');
- }
-
list_for_each_entry(ctx, &dev_priv->context_list, link) {
if (!i915.enable_execlists &&
ctx->legacy_hw_ctx.rcs_state == NULL)
enum pipe pipe;
bool enabled = false;
+ if (!HAS_PSR(dev)) {
+ seq_puts(m, "PSR not supported\n");
+ return 0;
+ }
+
intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->psr.lock);
seq_printf(m, "Re-enable work scheduled: %s\n",
yesno(work_busy(&dev_priv->psr.work.work)));
- if (HAS_PSR(dev)) {
- if (HAS_DDI(dev))
- enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE;
- else {
- for_each_pipe(dev_priv, pipe) {
- stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) &
- VLV_EDP_PSR_CURR_STATE_MASK;
- if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
- (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
- enabled = true;
- }
+ if (HAS_DDI(dev))
+ enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE;
+ else {
+ for_each_pipe(dev_priv, pipe) {
+ stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) &
+ VLV_EDP_PSR_CURR_STATE_MASK;
+ if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
+ (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
+ enabled = true;
}
}
seq_printf(m, "HW Enabled & Active bit: %s", yesno(enabled));
yesno((bool)dev_priv->psr.link_standby));
/* CHV PSR has no kind of performance counter */
- if (HAS_PSR(dev) && HAS_DDI(dev)) {
+ if (HAS_DDI(dev)) {
psrperf = I915_READ(EDP_PSR_PERF_CNT(dev)) &
EDP_PSR_PERF_CNT_MASK;
u8 crc[6];
drm_modeset_lock_all(dev);
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.dpms != DRM_MODE_DPMS_ON)
continue;
active = cursor_position(dev, crtc->pipe, &x, &y);
seq_printf(m, "\tcursor visible? %s, position (%d, %d), size %dx%d, addr 0x%08x, active? %s\n",
yesno(crtc->cursor_base),
- x, y, crtc->cursor_width, crtc->cursor_height,
+ x, y, crtc->base.cursor->state->crtc_w,
+ crtc->base.cursor->state->crtc_h,
crtc->cursor_addr, yesno(active));
}
for_each_pipe(dev_priv, pipe) {
seq_printf(m, "Pipe %c\n", pipe_name(pipe));
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
entry = &ddb->plane[pipe][plane];
seq_printf(m, " Plane%-8d%8u%8u%8u\n", plane + 1,
entry->start, entry->end,
return 0;
}
+static void drrs_status_per_crtc(struct seq_file *m,
+ struct drm_device *dev, struct intel_crtc *intel_crtc)
+{
+ struct intel_encoder *intel_encoder;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct i915_drrs *drrs = &dev_priv->drrs;
+ int vrefresh = 0;
+
+ for_each_encoder_on_crtc(dev, &intel_crtc->base, intel_encoder) {
+ /* Encoder connected on this CRTC */
+ switch (intel_encoder->type) {
+ case INTEL_OUTPUT_EDP:
+ seq_puts(m, "eDP:\n");
+ break;
+ case INTEL_OUTPUT_DSI:
+ seq_puts(m, "DSI:\n");
+ break;
+ case INTEL_OUTPUT_HDMI:
+ seq_puts(m, "HDMI:\n");
+ break;
+ case INTEL_OUTPUT_DISPLAYPORT:
+ seq_puts(m, "DP:\n");
+ break;
+ default:
+ seq_printf(m, "Other encoder (id=%d).\n",
+ intel_encoder->type);
+ return;
+ }
+ }
+
+ if (dev_priv->vbt.drrs_type == STATIC_DRRS_SUPPORT)
+ seq_puts(m, "\tVBT: DRRS_type: Static");
+ else if (dev_priv->vbt.drrs_type == SEAMLESS_DRRS_SUPPORT)
+ seq_puts(m, "\tVBT: DRRS_type: Seamless");
+ else if (dev_priv->vbt.drrs_type == DRRS_NOT_SUPPORTED)
+ seq_puts(m, "\tVBT: DRRS_type: None");
+ else
+ seq_puts(m, "\tVBT: DRRS_type: FIXME: Unrecognized Value");
+
+ seq_puts(m, "\n\n");
+
+ if (intel_crtc->config->has_drrs) {
+ struct intel_panel *panel;
+
+ mutex_lock(&drrs->mutex);
+ /* DRRS Supported */
+ seq_puts(m, "\tDRRS Supported: Yes\n");
+
+ /* disable_drrs() will make drrs->dp NULL */
+ if (!drrs->dp) {
+ seq_puts(m, "Idleness DRRS: Disabled");
+ mutex_unlock(&drrs->mutex);
+ return;
+ }
+
+ panel = &drrs->dp->attached_connector->panel;
+ seq_printf(m, "\t\tBusy_frontbuffer_bits: 0x%X",
+ drrs->busy_frontbuffer_bits);
+
+ seq_puts(m, "\n\t\t");
+ if (drrs->refresh_rate_type == DRRS_HIGH_RR) {
+ seq_puts(m, "DRRS_State: DRRS_HIGH_RR\n");
+ vrefresh = panel->fixed_mode->vrefresh;
+ } else if (drrs->refresh_rate_type == DRRS_LOW_RR) {
+ seq_puts(m, "DRRS_State: DRRS_LOW_RR\n");
+ vrefresh = panel->downclock_mode->vrefresh;
+ } else {
+ seq_printf(m, "DRRS_State: Unknown(%d)\n",
+ drrs->refresh_rate_type);
+ mutex_unlock(&drrs->mutex);
+ return;
+ }
+ seq_printf(m, "\t\tVrefresh: %d", vrefresh);
+
+ seq_puts(m, "\n\t\t");
+ mutex_unlock(&drrs->mutex);
+ } else {
+ /* DRRS not supported. Print the VBT parameter */
+ seq_puts(m, "\tDRRS Supported: No");
+ }
+ seq_puts(m, "\n");
+}
+
+static int i915_drrs_status(struct seq_file *m, void *unused)
+{
+ struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct intel_crtc *intel_crtc;
+ int active_crtc_cnt = 0;
+
+ for_each_intel_crtc(dev, intel_crtc) {
+ drm_modeset_lock(&intel_crtc->base.mutex, NULL);
+
+ if (intel_crtc->active) {
+ active_crtc_cnt++;
+ seq_printf(m, "\nCRTC %d: ", active_crtc_cnt);
+
+ drrs_status_per_crtc(m, dev, intel_crtc);
+ }
+
+ drm_modeset_unlock(&intel_crtc->base.mutex);
+ }
+
+ if (!active_crtc_cnt)
+ seq_puts(m, "No active crtc found\n");
+
+ return 0;
+}
+
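Once registered in i915_debugfs_list (see the hunk below), the DRRS state can be
inspected at runtime by reading the new debugfs entry, typically at
/sys/kernel/debug/dri/<minor>/i915_drrs_status; the exact path depends on where
debugfs is mounted on the system.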
struct pipe_crc_info {
const char *name;
struct drm_device *dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned int s_tot = 0, ss_tot = 0, ss_per = 0, eu_tot = 0, eu_per = 0;
- if (INTEL_INFO(dev)->gen < 9)
+ if ((INTEL_INFO(dev)->gen < 8) || IS_BROADWELL(dev))
return -ENODEV;
seq_puts(m, "SSEU Device Info\n");
yesno(INTEL_INFO(dev)->has_eu_pg));
seq_puts(m, "SSEU Device Status\n");
- if (IS_SKYLAKE(dev)) {
+ if (IS_CHERRYVIEW(dev)) {
+ const int ss_max = 2;
+ int ss;
+ u32 sig1[ss_max], sig2[ss_max];
+
+ sig1[0] = I915_READ(CHV_POWER_SS0_SIG1);
+ sig1[1] = I915_READ(CHV_POWER_SS1_SIG1);
+ sig2[0] = I915_READ(CHV_POWER_SS0_SIG2);
+ sig2[1] = I915_READ(CHV_POWER_SS1_SIG2);
+
+ for (ss = 0; ss < ss_max; ss++) {
+ unsigned int eu_cnt;
+
+ if (sig1[ss] & CHV_SS_PG_ENABLE)
+ /* skip disabled subslice */
+ continue;
+
+ s_tot = 1;
+ ss_per++;
+ eu_cnt = ((sig1[ss] & CHV_EU08_PG_ENABLE) ? 0 : 2) +
+ ((sig1[ss] & CHV_EU19_PG_ENABLE) ? 0 : 2) +
+ ((sig1[ss] & CHV_EU210_PG_ENABLE) ? 0 : 2) +
+ ((sig2[ss] & CHV_EU311_PG_ENABLE) ? 0 : 2);
+ eu_tot += eu_cnt;
+ eu_per = max(eu_per, eu_cnt);
+ }
+ ss_tot = ss_per;
+ } else if (IS_SKYLAKE(dev)) {
const int s_max = 3, ss_max = 4;
int s, ss;
u32 s_reg[s_max], eu_reg[2*s_max], eu_mask[2];
{"i915_wa_registers", i915_wa_registers, 0},
{"i915_ddb_info", i915_ddb_info, 0},
{"i915_sseu_status", i915_sseu_status, 0},
+ {"i915_drrs_status", i915_drrs_status, 0},
};
#define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list)
case I915_PARAM_CHIPSET_ID:
value = dev->pdev->device;
break;
+ case I915_PARAM_REVISION:
+ value = dev->pdev->revision;
+ break;
case I915_PARAM_HAS_GEM:
value = 1;
break;
case I915_PARAM_MMAP_VERSION:
value = 1;
break;
+ case I915_PARAM_SUBSLICE_TOTAL:
+ value = INTEL_INFO(dev)->subslice_total;
+ if (!value)
+ return -ENODEV;
+ break;
+ case I915_PARAM_EU_TOTAL:
+ value = INTEL_INFO(dev)->eu_total;
+ if (!value)
+ return -ENODEV;
+ break;
default:
DRM_DEBUG("Unknown parameter %d\n", param->param);
return -EINVAL;
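For illustration, userspace can reach the newly exposed parameters through the
existing GETPARAM ioctl. A minimal sketch using libdrm's drmIoctl() — the helper
name and include path are assumptions for this example, not part of the change:

#include <xf86drm.h>
#include <i915_drm.h>	/* include path may vary with the libdrm install */

/* hypothetical helper: returns 0 on success, -1 with errno set on failure */
static int query_eu_total(int fd, int *eu_total)
{
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_EU_TOTAL,
		.value = eu_total,	/* kernel reports -ENODEV if unknown */
	};

	return drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp);
}

The same pattern works for I915_PARAM_SUBSLICE_TOTAL and I915_PARAM_REVISION.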
/* Initialize slice/subslice/EU info */
if (IS_CHERRYVIEW(dev)) {
- u32 fuse, mask_eu;
+ u32 fuse, eu_dis;
fuse = I915_READ(CHV_FUSE_GT);
- mask_eu = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK |
- CHV_FGT_EU_DIS_SS0_R1_MASK |
- CHV_FGT_EU_DIS_SS1_R0_MASK |
- CHV_FGT_EU_DIS_SS1_R1_MASK);
- info->eu_total = 16 - hweight32(mask_eu);
+
+ info->slice_total = 1;
+
+ if (!(fuse & CHV_FGT_DISABLE_SS0)) {
+ info->subslice_per_slice++;
+ eu_dis = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK |
+ CHV_FGT_EU_DIS_SS0_R1_MASK);
+ info->eu_total += 8 - hweight32(eu_dis);
+ }
+
+ if (!(fuse & CHV_FGT_DISABLE_SS1)) {
+ info->subslice_per_slice++;
+ eu_dis = fuse & (CHV_FGT_EU_DIS_SS1_R0_MASK |
+ CHV_FGT_EU_DIS_SS1_R1_MASK);
+ info->eu_total += 8 - hweight32(eu_dis);
+ }
+
+ info->subslice_total = info->subslice_per_slice;
+ /*
+ * CHV is expected to always have a uniform distribution of EUs
+ * across subslices.
+ */
+ info->eu_per_subslice = info->subslice_total ?
+ info->eu_total / info->subslice_total :
+ 0;
+ /*
+ * CHV supports subslice power gating on devices with more than
+ * one subslice, and supports EU power gating on devices with
+ * more than one EU pair per subslice.
+ */
+ info->has_slice_pg = 0;
+ info->has_subslice_pg = (info->subslice_total > 1);
+ info->has_eu_pg = (info->eu_per_subslice > 2);
} else if (IS_SKYLAKE(dev)) {
const int s_max = 3, ss_max = 4, eu_max = 8;
int s, ss;
DRM_IOCTL_DEF_DRV(I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, intel_sprite_get_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
};
static const struct intel_device_info intel_cherryview_info = {
- .is_preliminary = 1,
.gen = 8, .num_pipes = 3,
.need_gfx_hws = 1, .has_hotplug = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
return ret;
}
- /*
- * FIXME: This races pretty badly against concurrent holders of
- * ring interrupts. This is possible since we've started to drop
- * dev->struct_mutex in select places when waiting for the gpu.
- */
-
/*
* rps/rc6 re-init is necessary to restore state lost after the
* reset and the re-install of gt irqs. Skip for ironlake per
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
-#define DRIVER_DATE "20150227"
+#define DRIVER_DATE "20150327"
#undef WARN_ON
/* Many gcc seem to no see through this and fall over :( */
#define WARN_ON(x) WARN((x), "WARN_ON(" #x ")")
#endif
+#undef WARN_ON_ONCE
+#define WARN_ON_ONCE(x) WARN_ONCE((x), "WARN_ON_ONCE(" #x ")")
+
#define MISSING_CASE(x) WARN(1, "Missing switch case (%lu) in %s\n", \
(long) (x), __func__);
#define for_each_pipe(__dev_priv, __p) \
for ((__p) = 0; (__p) < INTEL_INFO(__dev_priv)->num_pipes; (__p)++)
-#define for_each_plane(pipe, p) \
- for ((p) = 0; (p) < INTEL_INFO(dev)->num_sprites[(pipe)] + 1; (p)++)
-#define for_each_sprite(p, s) for ((s) = 0; (s) < INTEL_INFO(dev)->num_sprites[(p)]; (s)++)
+#define for_each_plane(__dev_priv, __pipe, __p) \
+ for ((__p) = 0; \
+ (__p) < INTEL_INFO(__dev_priv)->num_sprites[(__pipe)] + 1; \
+ (__p)++)
+#define for_each_sprite(__dev_priv, __p, __s) \
+ for ((__s) = 0; \
+ (__s) < INTEL_INFO(__dev_priv)->num_sprites[(__p)]; \
+ (__s)++)
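As a usage sketch, the reworked iterators now take dev_priv explicitly instead
of picking up a dev variable from the caller's scope; the helper name and loop
body below are purely illustrative:

/* hypothetical debug helper */
static void i915_dump_planes(struct drm_i915_private *dev_priv)
{
	enum pipe pipe;
	int plane;

	for_each_pipe(dev_priv, pipe)
		for_each_plane(dev_priv, pipe, plane)
			DRM_DEBUG_KMS("pipe %c plane %d\n",
				      pipe_name(pipe), plane);
}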
#define for_each_crtc(dev, crtc) \
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
&(dev)->mode_config.encoder_list, \
base.head)
+#define for_each_intel_connector(dev, intel_connector) \
+ list_for_each_entry(intel_connector, \
+ &dev->mode_config.connector_list, \
+ base.head)
+
#define for_each_encoder_on_crtc(dev, __crtc, intel_encoder) \
list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \
if ((intel_encoder)->base.crtc == (__crtc))
u32 forcewake;
u32 error; /* gen6+ */
u32 err_int; /* gen7 */
+ u32 fault_data0; /* gen8, gen9 */
+ u32 fault_data1; /* gen8, gen9 */
u32 done_reg;
u32 gac_eco;
u32 gam_ecochk;
* Returns true on success, false on failure.
*/
bool (*find_dpll)(const struct intel_limit *limit,
- struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
int target, int refclk,
struct dpll *match_clock,
struct dpll *best_clock);
struct drm_crtc *crtc,
uint32_t sprite_width, uint32_t sprite_height,
int pixel_size, bool enable, bool scaled);
- void (*modeset_global_resources)(struct drm_device *dev);
+ void (*modeset_global_resources)(struct drm_atomic_state *state);
/* Returns the active state of the crtc, and if the crtc is active,
* fills out the pipe-config with the hw state. */
bool (*get_pipe_config)(struct intel_crtc *,
struct list_head link;
};
+enum fb_op_origin {
+ ORIGIN_GTT,
+ ORIGIN_CPU,
+ ORIGIN_CS,
+ ORIGIN_FLIP,
+};
+
struct i915_fbc {
unsigned long uncompressed_size;
unsigned threshold;
unsigned int fb_id;
+ unsigned int possible_framebuffer_bits;
+ unsigned int busy_bits;
struct intel_crtc *crtc;
int y;
* possible. */
bool enabled;
- /* On gen8 some rings cannont perform fbc clean operation so for now
- * we are doing this on SW with mmio.
- * This variable works in the opposite information direction
- * of ring->fbc_dirty telling software on frontbuffer tracking
- * to perform the cache clean on sw side.
- */
- bool need_sw_cache_clean;
-
struct intel_fbc_work {
struct delayed_work work;
struct drm_crtc *crtc;
u8 max_freq_softlimit; /* Max frequency permitted by the driver */
u8 max_freq; /* Maximum frequency, RP0 if not overclocking */
u8 min_freq; /* AKA RPn. Minimum frequency */
+ u8 idle_freq; /* Frequency to request when we are idle */
u8 efficient_freq; /* AKA RPe. Pre-determined balanced frequency */
u8 rp1_freq; /* "less than" RP0 power/freqency */
u8 rp0_freq; /* Non-overclocked max frequency. */
u32 cz_freq;
- u32 ei_interrupt_count;
-
int last_adj;
enum { LOW_POWER, BETWEEN, HIGH_POWER } power;
int c_m;
int r_t;
-
- struct drm_i915_gem_object *pwrctx;
- struct drm_i915_gem_object *renderctx;
};
struct drm_i915_private;
enum intel_ddb_partitioning partitioning;
};
+struct vlv_wm_values {
+ struct {
+ uint16_t primary;
+ uint16_t sprite[2];
+ uint8_t cursor;
+ } pipe[3];
+
+ struct {
+ uint16_t plane;
+ uint8_t cursor;
+ } sr;
+
+ struct {
+ uint8_t cursor;
+ uint8_t sprite[2];
+ uint8_t primary;
+ } ddl[3];
+};
+
struct skl_ddb_entry {
uint16_t start, end; /* in number of blocks, 'end' is exclusive */
};
union {
struct ilk_wm_values hw;
struct skl_wm_values skl_hw;
+ struct vlv_wm_values vlv;
};
} wm;
#define NUM_L3_SLICES(dev) (IS_HSW_GT3(dev) ? 2 : HAS_L3_DPF(dev))
#define GT_FREQUENCY_MULTIPLIER 50
+#define GEN9_FREQ_SCALER 3
#include "i915_trace.h"
struct i915_params {
int modeset;
int panel_ignore_lid;
- unsigned int powersave;
int semaphores;
unsigned int lvds_downclock;
int lvds_channel_mode;
bool enable_hangcheck;
bool fastboot;
bool prefault_disable;
+ bool load_detect_test;
bool reset;
bool disable_display;
bool disable_vtd_wa;
int use_mmio_flip;
- bool mmio_debug;
+ int mmio_debug;
bool verbose_state_checks;
bool nuclear_pageflip;
};
int i915_gem_wait_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void i915_gem_load(struct drm_device *dev);
-unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
- long target,
- unsigned flags);
-#define I915_SHRINK_PURGEABLE 0x1
-#define I915_SHRINK_UNBOUND 0x2
-#define I915_SHRINK_BOUND 0x4
void *i915_gem_object_alloc(struct drm_device *dev);
void i915_gem_object_free(struct drm_i915_gem_object *obj);
void i915_gem_object_init(struct drm_i915_gem_object *obj,
#define PIN_GLOBAL 0x4
#define PIN_OFFSET_BIAS 0x8
#define PIN_OFFSET_MASK (~4095)
-int __must_check i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view);
-static inline
-int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags)
-{
- return i915_gem_object_pin_view(obj, vm, alignment, flags,
- &i915_ggtt_view_normal);
-}
+int __must_check
+i915_gem_object_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ uint32_t alignment,
+ uint64_t flags);
+int __must_check
+i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view,
+ uint32_t alignment,
+ uint64_t flags);
int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
u32 flags);
int __must_check
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
u32 alignment,
- struct intel_engine_cs *pipelined);
-void i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj);
+ struct intel_engine_cs *pipelined,
+ const struct i915_ggtt_view *view);
+void i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj,
int align);
int i915_gem_open(struct drm_device *dev, struct drm_file *file);
void i915_gem_restore_fences(struct drm_device *dev);
-unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view);
-static inline
-unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o,
- struct i915_address_space *vm)
+unsigned long
+i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view);
+unsigned long
+i915_gem_obj_offset(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm);
+static inline unsigned long
+i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *o)
{
- return i915_gem_obj_offset_view(o, vm, I915_GGTT_VIEW_NORMAL);
+ return i915_gem_obj_ggtt_offset_view(o, &i915_ggtt_view_normal);
}
+
bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o);
-bool i915_gem_obj_bound_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view);
-static inline
+bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view);
bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_bound_view(o, vm, I915_GGTT_VIEW_NORMAL);
-}
+ struct i915_address_space *vm);
unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
struct i915_address_space *vm);
-struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view);
-static inline
-struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_to_vma_view(obj, vm, &i915_ggtt_view_normal);
-}
-
struct i915_vma *
-i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view);
+i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
-static inline
struct i915_vma *
i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_lookup_or_create_vma_view(obj, vm,
- &i915_ggtt_view_normal);
-}
+ struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
-struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj);
-static inline bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj) {
- struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->pin_count > 0)
- return true;
- return false;
+static inline struct i915_vma *
+i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj)
+{
+ return i915_gem_obj_to_ggtt_view(obj, &i915_ggtt_view_normal);
}
+bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj);
/* Some GGTT VM helpers */
#define i915_obj_to_ggtt(obj) \
static inline bool i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *obj)
{
- return i915_gem_obj_bound(obj, i915_obj_to_ggtt(obj));
-}
-
-static inline unsigned long
-i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *obj)
-{
- return i915_gem_obj_offset(obj, i915_obj_to_ggtt(obj));
+ return i915_gem_obj_ggtt_bound_view(obj, &i915_ggtt_view_normal);
}
static inline unsigned long
return i915_vma_unbind(i915_gem_obj_to_ggtt(obj));
}
-void i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj);
+void i915_gem_object_ggtt_unpin_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
+static inline void
+i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj)
+{
+ i915_gem_object_ggtt_unpin_view(obj, &i915_ggtt_view_normal);
+}
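A pin taken through a particular GGTT view has to be released through the same
view. A minimal sketch under that assumption — the helper, alignment and flags
are illustrative, not taken from the patch:

/* hypothetical caller */
static int i915_pin_for_display(struct drm_i915_gem_object *obj)
{
	int ret;

	ret = i915_gem_object_ggtt_pin(obj, &i915_ggtt_view_normal,
				       4096, PIN_MAPPABLE);
	if (ret)
		return ret;

	/* ... use the GGTT mapping ... */

	i915_gem_object_ggtt_unpin_view(obj, &i915_ggtt_view_normal);
	return 0;
}

For the normal view, the i915_gem_object_ggtt_unpin() wrapper above expands to
exactly this unpin call.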
/* i915_gem_context.c */
int __must_check i915_gem_context_init(struct drm_device *dev);
u32 gtt_offset,
u32 size);
+/* i915_gem_shrinker.c */
+unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target,
+ unsigned flags);
+#define I915_SHRINK_PURGEABLE 0x1
+#define I915_SHRINK_UNBOUND 0x2
+#define I915_SHRINK_BOUND 0x4
+unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
+void i915_gem_shrinker_init(struct drm_i915_private *dev_priv);
+
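For reference, a caller wanting to free roughly nr_pages pages would typically
target purgeable objects first and only then widen the scope, mirroring the
shrinker's own scan path; the helper below is a sketch, not part of the
interface:

/* hypothetical helper, assuming dev->struct_mutex is already held */
static unsigned long i915_try_to_free_pages(struct drm_i915_private *dev_priv,
					    unsigned long nr_pages)
{
	unsigned long freed;

	freed = i915_gem_shrink(dev_priv, nr_pages,
				I915_SHRINK_BOUND |
				I915_SHRINK_UNBOUND |
				I915_SHRINK_PURGEABLE);
	if (freed < nr_pages)
		freed += i915_gem_shrink(dev_priv, nr_pages - freed,
					 I915_SHRINK_BOUND |
					 I915_SHRINK_UNBOUND);
	return freed;
}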
/* i915_gem_tiling.c */
static inline bool i915_gem_object_needs_bit17_swizzle(struct drm_i915_gem_object *obj)
{
/*
- * Copyright © 2008 Intel Corporation
+ * Copyright © 2008-2015 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
#include "i915_vgpu.h"
#include "i915_trace.h"
#include "intel_drv.h"
-#include <linux/oom.h>
#include <linux/shmem_fs.h>
#include <linux/slab.h>
#include <linux/swap.h>
struct drm_i915_fence_reg *fence,
bool enable);
-static unsigned long i915_gem_shrinker_count(struct shrinker *shrinker,
- struct shrink_control *sc);
-static unsigned long i915_gem_shrinker_scan(struct shrinker *shrinker,
- struct shrink_control *sc);
-static int i915_gem_shrinker_oom(struct notifier_block *nb,
- unsigned long event,
- void *ptr);
-static unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
-
static bool cpu_cache_is_coherent(struct drm_device *dev,
enum i915_cache_level level)
{
struct drm_device *dev = obj->base.dev;
void *vaddr = obj->phys_handle->vaddr + args->offset;
char __user *user_data = to_user_ptr(args->data_ptr);
- int ret;
+ int ret = 0;
/* We manually control the domain here and pretend that it
* remains coherent i.e. in the GTT domain, like shmem_pwrite.
if (ret)
return ret;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
if (__copy_from_user_inatomic_nocache(vaddr, user_data, args->size)) {
unsigned long unwritten;
mutex_unlock(&dev->struct_mutex);
unwritten = copy_from_user(vaddr, user_data, args->size);
mutex_lock(&dev->struct_mutex);
- if (unwritten)
- return -EFAULT;
+ if (unwritten) {
+ ret = -EFAULT;
+ goto out;
+ }
}
drm_clflush_virt_range(vaddr, args->size);
i915_gem_chipset_flush(dev);
- return 0;
+
+out:
+ intel_fb_obj_flush(obj, false);
+ return ret;
}
void *i915_gem_object_alloc(struct drm_device *dev)
offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
+
while (remain > 0) {
/* Operation in this page
*
if (fast_user_write(dev_priv->gtt.mappable, page_base,
page_offset, user_data, page_length)) {
ret = -EFAULT;
- goto out_unpin;
+ goto out_flush;
}
remain -= page_length;
offset += page_length;
}
+out_flush:
+ intel_fb_obj_flush(obj, false);
out_unpin:
i915_gem_object_ggtt_unpin(obj);
out:
if (ret)
return ret;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
+
i915_gem_object_pin_pages(obj);
offset = args->offset;
if (needs_clflush_after)
i915_gem_chipset_flush(dev);
+ intel_fb_obj_flush(obj, false);
return ret;
}
return i915_gem_mmap_gtt(file, dev, args->handle, &args->offset);
}
-static inline int
-i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
-{
- return obj->madv == I915_MADV_DONTNEED;
-}
-
/* Immediately discard the backing storage */
static void
i915_gem_object_truncate(struct drm_i915_gem_object *obj)
return 0;
}
-unsigned long
-i915_gem_shrink(struct drm_i915_private *dev_priv,
- long target, unsigned flags)
-{
- const struct {
- struct list_head *list;
- unsigned int bit;
- } phases[] = {
- { &dev_priv->mm.unbound_list, I915_SHRINK_UNBOUND },
- { &dev_priv->mm.bound_list, I915_SHRINK_BOUND },
- { NULL, 0 },
- }, *phase;
- unsigned long count = 0;
-
- /*
- * As we may completely rewrite the (un)bound list whilst unbinding
- * (due to retiring requests) we have to strictly process only
- * one element of the list at the time, and recheck the list
- * on every iteration.
- *
- * In particular, we must hold a reference whilst removing the
- * object as we may end up waiting for and/or retiring the objects.
- * This might release the final reference (held by the active list)
- * and result in the object being freed from under us. This is
- * similar to the precautions the eviction code must take whilst
- * removing objects.
- *
- * Also note that although these lists do not hold a reference to
- * the object we can safely grab one here: The final object
- * unreferencing and the bound_list are both protected by the
- * dev->struct_mutex and so we won't ever be able to observe an
- * object on the bound_list with a reference count equals 0.
- */
- for (phase = phases; phase->list; phase++) {
- struct list_head still_in_list;
-
- if ((flags & phase->bit) == 0)
- continue;
-
- INIT_LIST_HEAD(&still_in_list);
- while (count < target && !list_empty(phase->list)) {
- struct drm_i915_gem_object *obj;
- struct i915_vma *vma, *v;
-
- obj = list_first_entry(phase->list,
- typeof(*obj), global_list);
- list_move_tail(&obj->global_list, &still_in_list);
-
- if (flags & I915_SHRINK_PURGEABLE &&
- !i915_gem_object_is_purgeable(obj))
- continue;
-
- drm_gem_object_reference(&obj->base);
-
- /* For the unbound phase, this should be a no-op! */
- list_for_each_entry_safe(vma, v,
- &obj->vma_list, vma_link)
- if (i915_vma_unbind(vma))
- break;
-
- if (i915_gem_object_put_pages(obj) == 0)
- count += obj->base.size >> PAGE_SHIFT;
-
- drm_gem_object_unreference(&obj->base);
- }
- list_splice(&still_in_list, phase->list);
- }
-
- return count;
-}
-
-static unsigned long
-i915_gem_shrink_all(struct drm_i915_private *dev_priv)
-{
- i915_gem_evict_everything(dev_priv->dev);
- return i915_gem_shrink(dev_priv, LONG_MAX,
- I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
-}
-
static int
i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
{
WARN_ON(i915_verify_lists(ring->dev));
- /* Move any buffers on the active list that are no longer referenced
- * by the ringbuffer to the flushing/inactive lists as appropriate,
- * before we free the context associated with the requests.
+ /* Retire requests first as we use it above for the early return.
+ * If we retire requests last, we may use a later seqno and so clear
+ * the requests lists without clearing the active list, leading to
+ * confusion.
*/
- while (!list_empty(&ring->active_list)) {
- struct drm_i915_gem_object *obj;
-
- obj = list_first_entry(&ring->active_list,
- struct drm_i915_gem_object,
- ring_list);
-
- if (!i915_gem_request_completed(obj->last_read_req, true))
- break;
-
- i915_gem_object_move_to_inactive(obj);
- }
-
-
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
i915_gem_free_request(request);
}
+ /* Move any buffers on the active list that are no longer referenced
+ * by the ringbuffer to the flushing/inactive lists as appropriate,
+ * before we free the context associated with the requests.
+ */
+ while (!list_empty(&ring->active_list)) {
+ struct drm_i915_gem_object *obj;
+
+ obj = list_first_entry(&ring->active_list,
+ struct drm_i915_gem_object,
+ ring_list);
+
+ if (!i915_gem_request_completed(obj->last_read_req, true))
+ break;
+
+ i915_gem_object_move_to_inactive(obj);
+ }
+
if (unlikely(ring->trace_irq_req &&
i915_gem_request_completed(ring->trace_irq_req, true))) {
ring->irq_put(ring);
req = obj->last_read_req;
/* Do this after OLR check to make sure we make forward progress polling
- * on this IOCTL with a timeout <=0 (like busy ioctl)
+ * on this IOCTL with a timeout == 0 (like busy ioctl)
*/
- if (args->timeout_ns <= 0) {
+ if (args->timeout_ns == 0) {
ret = -ETIME;
goto out;
}
i915_gem_request_reference(req);
mutex_unlock(&dev->struct_mutex);
- ret = __i915_wait_request(req, reset_counter, true, &args->timeout_ns,
+ ret = __i915_wait_request(req, reset_counter, true,
+ args->timeout_ns > 0 ? &args->timeout_ns : NULL,
file->driver_priv);
mutex_lock(&dev->struct_mutex);
i915_gem_request_unreference(req);
static struct i915_vma *
i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view,
unsigned alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view)
+ uint64_t flags)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_vma *vma;
int ret;
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return ERR_PTR(-EINVAL);
+
fence_size = i915_gem_get_gtt_size(dev,
obj->base.size,
obj->tiling_mode);
i915_gem_object_pin_pages(obj);
- vma = i915_gem_obj_lookup_or_create_vma_view(obj, vm, view);
+ vma = ggtt_view ? i915_gem_obj_lookup_or_create_ggtt_vma(obj, ggtt_view) :
+ i915_gem_obj_lookup_or_create_vma(obj, vm);
+
if (IS_ERR(vma))
goto err_unpin;
if (ret)
goto err_remove_node;
+ /* allocate before insert / bind */
+ if (vma->vm->allocate_va_range) {
+ trace_i915_va_alloc(vma->vm, vma->node.start, vma->node.size,
+ VM_TO_TRACE_NAME(vma->vm));
+ ret = vma->vm->allocate_va_range(vma->vm,
+ vma->node.start,
+ vma->node.size);
+ if (ret)
+ goto err_remove_node;
+ }
+
trace_i915_vma_bind(vma, flags);
ret = i915_vma_bind(vma, obj->cache_level,
flags & PIN_GLOBAL ? GLOBAL_BIND : 0);
}
if (write)
- intel_fb_obj_invalidate(obj, NULL);
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
trace_i915_gem_object_change_domain(obj,
old_read_domains,
int
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
u32 alignment,
- struct intel_engine_cs *pipelined)
+ struct intel_engine_cs *pipelined,
+ const struct i915_ggtt_view *view)
{
u32 old_read_domains, old_write_domain;
bool was_pin_display;
* (e.g. libkms for the bootup splash), we have to ensure that we
* always use map_and_fenceable for all scanout buffers.
*/
- ret = i915_gem_obj_ggtt_pin(obj, alignment, PIN_MAPPABLE);
+ ret = i915_gem_object_ggtt_pin(obj, view, alignment,
+ view->type == I915_GGTT_VIEW_NORMAL ?
+ PIN_MAPPABLE : 0);
if (ret)
goto err_unpin_display;
}
void
-i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj)
+i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
{
- i915_gem_object_ggtt_unpin(obj);
+ i915_gem_object_ggtt_unpin_view(obj, view);
+
obj->pin_display = is_pin_display(obj);
}
}
if (write)
- intel_fb_obj_invalidate(obj, NULL);
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
trace_i915_gem_object_change_domain(obj,
old_read_domains,
return false;
}
-int
-i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view)
+static int
+i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view,
+ uint32_t alignment,
+ uint64_t flags)
{
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
struct i915_vma *vma;
if (WARN_ON((flags & (PIN_MAPPABLE | PIN_GLOBAL)) == PIN_MAPPABLE))
return -EINVAL;
- vma = i915_gem_obj_to_vma_view(obj, vm, view);
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return -EINVAL;
+
+ vma = ggtt_view ? i915_gem_obj_to_ggtt_view(obj, ggtt_view) :
+ i915_gem_obj_to_vma(obj, vm);
+
+ if (IS_ERR(vma))
+ return PTR_ERR(vma);
+
if (vma) {
if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
return -EBUSY;
if (i915_vma_misplaced(vma, alignment, flags)) {
+ unsigned long offset;
+ offset = ggtt_view ? i915_gem_obj_ggtt_offset_view(obj, ggtt_view) :
+ i915_gem_obj_offset(obj, vm);
WARN(vma->pin_count,
- "bo is already pinned with incorrect alignment:"
+ "bo is already pinned in %s with incorrect alignment:"
" offset=%lx, req.alignment=%x, req.map_and_fenceable=%d,"
" obj->map_and_fenceable=%d\n",
- i915_gem_obj_offset_view(obj, vm, view->type),
+ ggtt_view ? "ggtt" : "ppgtt",
+ offset,
alignment,
!!(flags & PIN_MAPPABLE),
obj->map_and_fenceable);
bound = vma ? vma->bound : 0;
if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
- vma = i915_gem_object_bind_to_vm(obj, vm, alignment,
- flags, view);
+ /* In true PPGTT, bind has possibly changed PDEs, which
+ * means we must do a context switch before the GPU can
+ * accurately read some of the VMAs.
+ */
+ vma = i915_gem_object_bind_to_vm(obj, vm, ggtt_view, alignment,
+ flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
}
return 0;
}
+int
+i915_gem_object_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ uint32_t alignment,
+ uint64_t flags)
+{
+ return i915_gem_object_do_pin(obj, vm,
+ i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL,
+ alignment, flags);
+}
+
+int
+i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view,
+ uint32_t alignment,
+ uint64_t flags)
+{
+ if (WARN_ONCE(!view, "no view specified"))
+ return -EINVAL;
+
+ return i915_gem_object_do_pin(obj, i915_obj_to_ggtt(obj), view,
+ alignment, flags | PIN_GLOBAL);
+}
+
void
-i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj)
+i915_gem_object_ggtt_unpin_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
{
- struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
+ struct i915_vma *vma = i915_gem_obj_to_ggtt_view(obj, view);
BUG_ON(!vma);
- BUG_ON(vma->pin_count == 0);
- BUG_ON(!i915_gem_obj_ggtt_bound(obj));
+ WARN_ON(vma->pin_count == 0);
+ WARN_ON(!i915_gem_obj_ggtt_bound_view(obj, view));
- if (--vma->pin_count == 0)
+ if (--vma->pin_count == 0 && view->type == I915_GGTT_VIEW_NORMAL)
obj->pin_mappable = false;
}
obj->madv = args->madv;
/* if the object is no longer attached, discard its backing storage */
- if (i915_gem_object_is_purgeable(obj) && obj->pages == NULL)
+ if (obj->madv == I915_MADV_DONTNEED && obj->pages == NULL)
i915_gem_object_truncate(obj);
args->retained = obj->madv != __I915_MADV_PURGED;
intel_runtime_pm_put(dev_priv);
}
-struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view)
+struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm)
{
struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->vm == vm && vma->ggtt_view.type == view->type)
+ list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm)
return vma;
+ }
+ return NULL;
+}
+
+struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
+{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
+ struct i915_vma *vma;
+
+ if (WARN_ONCE(!view, "no view specified"))
+ return ERR_PTR(-EINVAL);
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view))
+ return vma;
return NULL;
}
if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
return -EIO;
+ /* Double layer security blanket, see i915_gem_init() */
+ intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
if (dev_priv->ellc_size)
I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
for_each_ring(ring, dev_priv, i) {
ret = ring->init_hw(ring);
if (ret)
- return ret;
+ goto out;
}
for (i = 0; i < NUM_L3_SLICES(dev); i++)
DRM_ERROR("Context enable failed %d\n", ret);
i915_gem_cleanup_ringbuffer(dev);
- return ret;
+ goto out;
}
+out:
+ intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
}
dev_priv->gt.stop_ring = intel_logical_ring_stop;
}
+ /* This is just a security blanket to placate dragons.
+ * On some systems, we very sporadically observe that the first TLBs
+ * used by the CS may be stale, despite us poking the TLB reset. If
+ * we hold the forcewake during initialisation these problems
+ * just magically go away.
+ */
+ intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
ret = i915_gem_init_userptr(dev);
if (ret)
goto out_unlock;
}
out_unlock:
+ intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
mutex_unlock(&dev->struct_mutex);
return ret;
dev_priv->mm.interruptible = true;
- dev_priv->mm.shrinker.scan_objects = i915_gem_shrinker_scan;
- dev_priv->mm.shrinker.count_objects = i915_gem_shrinker_count;
- dev_priv->mm.shrinker.seeks = DEFAULT_SEEKS;
- register_shrinker(&dev_priv->mm.shrinker);
-
- dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
- register_oom_notifier(&dev_priv->mm.oom_notifier);
+ i915_gem_shrinker_init(dev_priv);
i915_gem_batch_pool_init(dev, &dev_priv->mm.batch_pool);
}
}
-static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
-{
- if (!mutex_is_locked(mutex))
- return false;
-
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
- return mutex->owner == task;
-#else
- /* Since UP may be pre-empted, we cannot assume that we own the lock */
- return false;
-#endif
-}
-
-static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
+/* All the new VM stuff */
+unsigned long
+i915_gem_obj_offset(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm)
{
- if (!mutex_trylock(&dev->struct_mutex)) {
- if (!mutex_is_locked_by(&dev->struct_mutex, current))
- return false;
+ struct drm_i915_private *dev_priv = o->base.dev->dev_private;
+ struct i915_vma *vma;
- if (to_i915(dev)->mm.shrinker_no_lock_stealing)
- return false;
+ WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
- *unlock = false;
- } else
- *unlock = true;
+ list_for_each_entry(vma, &o->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm)
+ return vma->node.start;
+ }
- return true;
+ WARN(1, "%s vma for this object not found.\n",
+ i915_is_ggtt(vm) ? "global" : "ppgtt");
+ return -1;
}
-static int num_vma_bound(struct drm_i915_gem_object *obj)
+unsigned long
+i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view)
{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
struct i915_vma *vma;
- int count = 0;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (drm_mm_node_allocated(&vma->node))
- count++;
-
- return count;
-}
-
-static unsigned long
-i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
- struct drm_i915_private *dev_priv =
- container_of(shrinker, struct drm_i915_private, mm.shrinker);
- struct drm_device *dev = dev_priv->dev;
- struct drm_i915_gem_object *obj;
- unsigned long count;
- bool unlock;
-
- if (!i915_gem_shrinker_lock(dev, &unlock))
- return 0;
-
- count = 0;
- list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
- if (obj->pages_pin_count == 0)
- count += obj->base.size >> PAGE_SHIFT;
-
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- if (!i915_gem_obj_is_pinned(obj) &&
- obj->pages_pin_count == num_vma_bound(obj))
- count += obj->base.size >> PAGE_SHIFT;
- }
-
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
+ list_for_each_entry(vma, &o->vma_list, vma_link)
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view))
+ return vma->node.start;
- return count;
+ WARN(1, "global vma for this object not found.\n");
+ return -1;
}
-/* All the new VM stuff */
-unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view)
+bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm)
{
- struct drm_i915_private *dev_priv = o->base.dev->dev_private;
struct i915_vma *vma;
- WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
-
list_for_each_entry(vma, &o->vma_list, vma_link) {
- if (vma->vm == vm && vma->ggtt_view.type == view)
- return vma->node.start;
-
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm && drm_mm_node_allocated(&vma->node))
+ return true;
}
- WARN(1, "%s vma for this object not found.\n",
- i915_is_ggtt(vm) ? "global" : "ppgtt");
- return -1;
+
+ return false;
}
-bool i915_gem_obj_bound_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view)
+bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view)
{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
struct i915_vma *vma;
list_for_each_entry(vma, &o->vma_list, vma_link)
- if (vma->vm == vm &&
- vma->ggtt_view.type == view &&
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view) &&
drm_mm_node_allocated(&vma->node))
return true;
BUG_ON(list_empty(&o->vma_list));
- list_for_each_entry(vma, &o->vma_list, vma_link)
+ list_for_each_entry(vma, &o->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
if (vma->vm == vm)
return vma->node.size;
-
+ }
return 0;
}
-static unsigned long
-i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
+bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv =
- container_of(shrinker, struct drm_i915_private, mm.shrinker);
- struct drm_device *dev = dev_priv->dev;
- unsigned long freed;
- bool unlock;
-
- if (!i915_gem_shrinker_lock(dev, &unlock))
- return SHRINK_STOP;
-
- freed = i915_gem_shrink(dev_priv,
- sc->nr_to_scan,
- I915_SHRINK_BOUND |
- I915_SHRINK_UNBOUND |
- I915_SHRINK_PURGEABLE);
- if (freed < sc->nr_to_scan)
- freed += i915_gem_shrink(dev_priv,
- sc->nr_to_scan - freed,
- I915_SHRINK_BOUND |
- I915_SHRINK_UNBOUND);
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
-
- return freed;
-}
-
-static int
-i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
-{
- struct drm_i915_private *dev_priv =
- container_of(nb, struct drm_i915_private, mm.oom_notifier);
- struct drm_device *dev = dev_priv->dev;
- struct drm_i915_gem_object *obj;
- unsigned long timeout = msecs_to_jiffies(5000) + 1;
- unsigned long pinned, bound, unbound, freed_pages;
- bool was_interruptible;
- bool unlock;
-
- while (!i915_gem_shrinker_lock(dev, &unlock) && --timeout) {
- schedule_timeout_killable(1);
- if (fatal_signal_pending(current))
- return NOTIFY_DONE;
- }
- if (timeout == 0) {
- pr_err("Unable to purge GPU memory due lock contention.\n");
- return NOTIFY_DONE;
- }
-
- was_interruptible = dev_priv->mm.interruptible;
- dev_priv->mm.interruptible = false;
-
- freed_pages = i915_gem_shrink_all(dev_priv);
-
- dev_priv->mm.interruptible = was_interruptible;
-
- /* Because we may be allocating inside our own driver, we cannot
- * assert that there are no objects with pinned pages that are not
- * being pointed to by hardware.
- */
- unbound = bound = pinned = 0;
- list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
- if (!obj->base.filp) /* not backed by a freeable object */
- continue;
-
- if (obj->pages_pin_count)
- pinned += obj->base.size;
- else
- unbound += obj->base.size;
- }
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- if (!obj->base.filp)
+ struct i915_vma *vma;
+ list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
-
- if (obj->pages_pin_count)
- pinned += obj->base.size;
- else
- bound += obj->base.size;
+ if (vma->pin_count > 0)
+ return true;
}
-
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
-
- if (freed_pages || unbound || bound)
- pr_info("Purging GPU memory, %lu bytes freed, %lu bytes still pinned.\n",
- freed_pages << PAGE_SHIFT, pinned);
- if (unbound || bound)
- pr_err("%lu and %lu bytes still available in the "
- "bound and unbound GPU page lists.\n",
- bound, unbound);
-
- *(unsigned long *)ptr += freed_pages;
- return NOTIFY_DONE;
+ return false;
}
-struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj)
-{
- struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
- struct i915_vma *vma;
-
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->vm == ggtt &&
- vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
- return vma;
-
- return NULL;
-}
return ret;
}
+static inline bool should_skip_switch(struct intel_engine_cs *ring,
+ struct intel_context *from,
+ struct intel_context *to)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ if (to->remap_slice)
+ return false;
+
+ if (to->ppgtt) {
+ if (from == to && !test_bit(ring->id,
+ &to->ppgtt->pd_dirty_rings))
+ return true;
+ } else if (dev_priv->mm.aliasing_ppgtt) {
+ if (from == to && !test_bit(ring->id,
+ &dev_priv->mm.aliasing_ppgtt->pd_dirty_rings))
+ return true;
+ }
+
+ return false;
+}
+
+static bool
+needs_pd_load_pre(struct intel_engine_cs *ring, struct intel_context *to)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ if (!to->ppgtt)
+ return false;
+
+ if (INTEL_INFO(ring->dev)->gen < 8)
+ return true;
+
+ if (ring != &dev_priv->ring[RCS])
+ return true;
+
+ return false;
+}
+
+static bool
+needs_pd_load_post(struct intel_engine_cs *ring, struct intel_context *to,
+ u32 hw_flags)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ if (!to->ppgtt)
+ return false;
+
+ if (!IS_GEN8(ring->dev))
+ return false;
+
+ if (ring != &dev_priv->ring[RCS])
+ return false;
+
+ if (hw_flags & MI_RESTORE_INHIBIT)
+ return true;
+
+ return false;
+}
+
static int do_switch(struct intel_engine_cs *ring,
struct intel_context *to)
{
BUG_ON(!i915_gem_obj_is_pinned(from->legacy_hw_ctx.rcs_state));
}
- if (from == to && !to->remap_slice)
+ if (should_skip_switch(ring, from, to))
return 0;
/* Trying to pin first makes error handling easier. */
*/
from = ring->last_context;
- if (to->ppgtt) {
+ if (needs_pd_load_pre(ring, to)) {
+ /* Older GENs and non-render rings still want the load first:
+ * "PP_DCLV followed by PP_DIR_BASE register through Load
+ * Register Immediate commands in Ring Buffer before submitting
+ * a context." */
trace_switch_mm(ring, to);
ret = to->ppgtt->switch_mm(to->ppgtt, ring);
if (ret)
goto unpin_out;
+
+ /* Doing a PD load always reloads the page dirs */
+ clear_bit(ring->id, &to->ppgtt->pd_dirty_rings);
}
if (ring != &dev_priv->ring[RCS]) {
goto unpin_out;
}
- if (!to->legacy_hw_ctx.initialized || i915_gem_context_is_default(to))
+ if (!to->legacy_hw_ctx.initialized) {
hw_flags |= MI_RESTORE_INHIBIT;
+ /* NB: If we inhibit the restore, the context is not allowed to
+ * die because future work may end up depending on valid address
+ * space. This means we must enforce that a page table load
+ * occurs whenever the restore is inhibited. */
+ } else if (to->ppgtt &&
+ test_and_clear_bit(ring->id, &to->ppgtt->pd_dirty_rings))
+ hw_flags |= MI_FORCE_RESTORE;
+
+ /* We should never emit switch_mm more than once */
+ WARN_ON(needs_pd_load_pre(ring, to) &&
+ needs_pd_load_post(ring, to, hw_flags));
ret = mi_set_context(ring, to, hw_flags);
if (ret)
goto unpin_out;
+ /* GEN8 does *not* require an explicit reload if the PDPs have been
+ * set up, and we do not wish to move them.
+ */
+ if (needs_pd_load_post(ring, to, hw_flags)) {
+ trace_switch_mm(ring, to);
+ ret = to->ppgtt->switch_mm(to->ppgtt, ring);
+ /* The hardware context switch is emitted, but we haven't
+ * actually changed the state - so it's probably safe to bail
+ * here. Still, let the user know something dangerous has
+ * happened.
+ */
+ if (ret) {
+ DRM_ERROR("Failed to change address space on context switch\n");
+ goto unpin_out;
+ }
+ }
+
for (i = 0; i < MAX_L3_SLICES; i++) {
if (!(to->remap_slice & (1<<i)))
continue;
i915_gem_context_unreference(from);
}
- uninitialized = !to->legacy_hw_ctx.initialized && from == NULL;
+ uninitialized = !to->legacy_hw_ctx.initialized;
to->legacy_hw_ctx.initialized = true;
done:
*
* This function is used by the object/vma binding code.
*
+ * Since this function is only used to free up virtual address space, it
+ * ignores only pinned VMAs, not objects whose backing storage itself is
+ * pinned. Hence obj->pages_pin_count does not protect against eviction.
+ *
* To clarify: This is for freeing up virtual address space, not for freeing
* memory in e.g. the shrinker.
*/
{
return (HAS_LLC(obj->base.dev) ||
obj->base.write_domain == I915_GEM_DOMAIN_CPU ||
- !obj->map_and_fenceable ||
obj->cache_level != I915_CACHE_NONE);
}
return 0;
}
+static void
+clflush_write32(void *addr, uint32_t value)
+{
+ /* This is not a fast path, so KISS. */
+ drm_clflush_virt_range(addr, sizeof(uint32_t));
+ *(uint32_t *)addr = value;
+ drm_clflush_virt_range(addr, sizeof(uint32_t));
+}
+
+static int
+relocate_entry_clflush(struct drm_i915_gem_object *obj,
+ struct drm_i915_gem_relocation_entry *reloc,
+ uint64_t target_offset)
+{
+ struct drm_device *dev = obj->base.dev;
+ uint32_t page_offset = offset_in_page(reloc->offset);
+ uint64_t delta = (int)reloc->delta + target_offset;
+ char *vaddr;
+ int ret;
+
+ ret = i915_gem_object_set_to_gtt_domain(obj, true);
+ if (ret)
+ return ret;
+
+ vaddr = kmap_atomic(i915_gem_object_get_page(obj,
+ reloc->offset >> PAGE_SHIFT));
+ clflush_write32(vaddr + page_offset, lower_32_bits(delta));
+
+ if (INTEL_INFO(dev)->gen >= 8) {
+ page_offset = offset_in_page(page_offset + sizeof(uint32_t));
+
+ if (page_offset == 0) {
+ kunmap_atomic(vaddr);
+ vaddr = kmap_atomic(i915_gem_object_get_page(obj,
+ (reloc->offset + sizeof(uint32_t)) >> PAGE_SHIFT));
+ }
+
+ clflush_write32(vaddr + page_offset, upper_32_bits(delta));
+ }
+
+ kunmap_atomic(vaddr);
+
+ return 0;
+}
+
static int
i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
struct eb_vmas *eb,
if (use_cpu_reloc(obj))
ret = relocate_entry_cpu(obj, reloc, target_offset);
- else
+ else if (obj->map_and_fenceable)
ret = relocate_entry_gtt(obj, reloc, target_offset);
+ else if (cpu_has_clflush)
+ ret = relocate_entry_clflush(obj, reloc, target_offset);
+ else {
+ WARN_ONCE(1, "Impossible case in relocation handling\n");
+ ret = -ENODEV;
+ }
if (ret)
return ret;
return ret;
}
+static bool only_mappable_for_reloc(unsigned int flags)
+{
+ return (flags & (EXEC_OBJECT_NEEDS_FENCE | __EXEC_OBJECT_NEEDS_MAP)) ==
+ __EXEC_OBJECT_NEEDS_MAP;
+}
+
static int
i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
struct intel_engine_cs *ring,
int ret;
flags = 0;
- if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
- flags |= PIN_GLOBAL | PIN_MAPPABLE;
- if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
- flags |= PIN_GLOBAL;
- if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
- flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS;
+ if (!drm_mm_node_allocated(&vma->node)) {
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
+ flags |= PIN_GLOBAL | PIN_MAPPABLE;
+ if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
+ flags |= PIN_GLOBAL;
+ if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
+ flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS;
+ }
ret = i915_gem_object_pin(obj, vma->vm, entry->alignment, flags);
+ if ((ret == -ENOSPC || ret == -E2BIG) &&
+ only_mappable_for_reloc(entry->flags))
+ ret = i915_gem_object_pin(obj, vma->vm,
+ entry->alignment,
+ flags & ~(PIN_GLOBAL | PIN_MAPPABLE));
if (ret)
return ret;
vma->node.start & (entry->alignment - 1))
return true;
- if (entry->flags & __EXEC_OBJECT_NEEDS_MAP && !obj->map_and_fenceable)
- return true;
-
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS &&
vma->node.start < BATCH_OFFSET_BIAS)
return true;
+ /* avoid costly ping-pong once a batch bo ended up non-mappable */
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP && !obj->map_and_fenceable)
+ return !only_mappable_for_reloc(entry->flags);
+
return false;
}
obj->dirty = 1;
i915_gem_request_assign(&obj->last_write_req, req);
- intel_fb_obj_invalidate(obj, ring);
+ intel_fb_obj_invalidate(obj, ring, ORIGIN_CS);
/* update for the implicit flush after a batch */
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
if (ret)
goto error;
+ if (ctx->ppgtt)
+ WARN(ctx->ppgtt->pd_dirty_rings & (1<<ring->id),
+ "%s didn't clear reload\n", ring->name);
+ else if (dev_priv->mm.aliasing_ppgtt)
+ WARN(dev_priv->mm.aliasing_ppgtt->pd_dirty_rings &
+ (1<<ring->id), "%s didn't clear reload\n", ring->name);
+
instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
instp_mask = I915_EXEC_CONSTANTS_MASK;
switch (instp_mode) {
goto err;
}
- if (i915_needs_cmd_parser(ring)) {
+ if (i915_needs_cmd_parser(ring) && args->batch_len) {
batch_obj = i915_gem_execbuffer_parse(ring,
&shadow_exec_entry,
eb,
* - The batch is already pinned into the relevant ppgtt, so we
* already have the backing storage fully allocated.
* - No other BO uses the global gtt (well contexts, but meh),
- * so we don't really have issues with mutliple objects not
+ * so we don't really have issues with multiple objects not
* fitting due to fragmentation.
* So this is actually safe.
*/
* i915_ggtt_view_type and struct i915_ggtt_view.
*
* A new flavour of core GEM functions which work with GGTT bound objects were
- * added with the _view suffix. They take the struct i915_ggtt_view parameter
- * encapsulating all metadata required to implement a view.
+ * added with the _ggtt_ infix, and sometimes with _view postfix to avoid
+ * renaming in large amounts of code. They take the struct i915_ggtt_view
+ * parameter encapsulating all metadata required to implement a view.
*
* As a helper for callers which are only interested in the normal view,
* globally const i915_ggtt_view_normal singleton instance exists. All old core
*/
const struct i915_ggtt_view i915_ggtt_view_normal;
+const struct i915_ggtt_view i915_ggtt_view_rotated = {
+ .type = I915_GGTT_VIEW_ROTATED
+};
static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv);
static void chv_setup_private_ppat(struct drm_i915_private *dev_priv);
u32 flags);
static void ppgtt_unbind_vma(struct i915_vma *vma);
-static inline gen8_gtt_pte_t gen8_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid)
+static inline gen8_pte_t gen8_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid)
{
- gen8_gtt_pte_t pte = valid ? _PAGE_PRESENT | _PAGE_RW : 0;
+ gen8_pte_t pte = valid ? _PAGE_PRESENT | _PAGE_RW : 0;
pte |= addr;
switch (level) {
return pte;
}
-static inline gen8_ppgtt_pde_t gen8_pde_encode(struct drm_device *dev,
- dma_addr_t addr,
- enum i915_cache_level level)
+static inline gen8_pde_t gen8_pde_encode(struct drm_device *dev,
+ dma_addr_t addr,
+ enum i915_cache_level level)
{
- gen8_ppgtt_pde_t pde = _PAGE_PRESENT | _PAGE_RW;
+ gen8_pde_t pde = _PAGE_PRESENT | _PAGE_RW;
pde |= addr;
if (level != I915_CACHE_NONE)
pde |= PPAT_CACHED_PDE_INDEX;
return pde;
}
-static gen6_gtt_pte_t snb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t snb_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
return pte;
}
-static gen6_gtt_pte_t ivb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t ivb_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
return pte;
}
-static gen6_gtt_pte_t byt_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 flags)
+static gen6_pte_t byt_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 flags)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
if (!(flags & PTE_READ_ONLY))
return pte;
}
-static gen6_gtt_pte_t hsw_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t hsw_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
if (level != I915_CACHE_NONE)
return pte;
}
-static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t iris_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
switch (level) {
return pte;
}
-static void unmap_and_free_pt(struct i915_page_table_entry *pt, struct drm_device *dev)
+#define i915_dma_unmap_single(px, dev) \
+ __i915_dma_unmap_single((px)->daddr, dev)
+
+static inline void __i915_dma_unmap_single(dma_addr_t daddr,
+ struct drm_device *dev)
+{
+ struct device *device = &dev->pdev->dev;
+
+ dma_unmap_page(device, daddr, 4096, PCI_DMA_BIDIRECTIONAL);
+}
+
+/**
+ * i915_dma_map_single() - Create a dma mapping for a page table/dir/etc.
+ * @px: Page table/dir/etc to get a DMA map for
+ * @dev: drm device
+ *
+ * Page table allocations are unified across all gens. They always require a
+ * single 4k allocation, as well as a DMA mapping. If we keep the structs
+ * symmetric here, the simple macro covers us for every page table type.
+ *
+ * Return: 0 on success.
+ */
+#define i915_dma_map_single(px, dev) \
+ i915_dma_map_page_single((px)->page, (dev), &(px)->daddr)
+
+static inline int i915_dma_map_page_single(struct page *page,
+ struct drm_device *dev,
+ dma_addr_t *daddr)
+{
+ struct device *device = &dev->pdev->dev;
+
+ *daddr = dma_map_page(device, page, 0, 4096, PCI_DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(device, *daddr))
+ return -ENOMEM;
+
+ return 0;
+}
+
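
The symmetry argument in the kernel-doc above is worth seeing in isolation: because every page-table-like struct carries the same page/daddr pair, one macro serves all of them. A hedged userspace model, with a stub standing in for dma_map_page():

    #include <stdio.h>

    typedef unsigned long dma_addr_t;

    struct page_table { void *page; dma_addr_t daddr; };
    struct page_dir   { void *page; dma_addr_t daddr; };

    /* Stub standing in for the real DMA mapping call. */
    static int map_page_single(void *page, dma_addr_t *daddr)
    {
        *daddr = (dma_addr_t)page;   /* identity "mapping" for the demo */
        return 0;
    }

    /* One macro covers both struct types because the field names match. */
    #define my_dma_map_single(px) map_page_single((px)->page, &(px)->daddr)

    int main(void)
    {
        struct page_table pt = { .page = &pt };
        struct page_dir   pd = { .page = &pd };

        my_dma_map_single(&pt);
        my_dma_map_single(&pd);
        printf("%#lx %#lx\n", pt.daddr, pd.daddr);
        return 0;
    }
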
+static void unmap_and_free_pt(struct i915_page_table_entry *pt,
+ struct drm_device *dev)
{
if (WARN_ON(!pt->page))
return;
+
+ i915_dma_unmap_single(pt, dev);
__free_page(pt->page);
+ kfree(pt->used_ptes);
kfree(pt);
}
static struct i915_page_table_entry *alloc_pt_single(struct drm_device *dev)
{
struct i915_page_table_entry *pt;
+ const size_t count = INTEL_INFO(dev)->gen >= 8 ?
+ GEN8_PTES : GEN6_PTES;
+ int ret = -ENOMEM;
pt = kzalloc(sizeof(*pt), GFP_KERNEL);
if (!pt)
return ERR_PTR(-ENOMEM);
- pt->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
- if (!pt->page) {
- kfree(pt);
- return ERR_PTR(-ENOMEM);
- }
+ pt->used_ptes = kcalloc(BITS_TO_LONGS(count), sizeof(*pt->used_ptes),
+ GFP_KERNEL);
+
+ if (!pt->used_ptes)
+ goto fail_bitmap;
+
+ pt->page = alloc_page(GFP_KERNEL);
+ if (!pt->page)
+ goto fail_page;
+
+ ret = i915_dma_map_single(pt, dev);
+ if (ret)
+ goto fail_dma;
return pt;
+
+fail_dma:
+ __free_page(pt->page);
+fail_page:
+ kfree(pt->used_ptes);
+fail_bitmap:
+ kfree(pt);
+
+ return ERR_PTR(ret);
}
/**
* Return: 0 if allocation succeeded.
*/
static int alloc_pt_range(struct i915_page_directory_entry *pd, uint16_t pde, size_t count,
- struct drm_device *dev)
+ struct drm_device *dev)
{
int i, ret;
/* 512 is the max page tables per page_directory on any platform. */
- if (WARN_ON(pde + count > GEN6_PPGTT_PD_ENTRIES))
+ if (WARN_ON(pde + count > I915_PDES))
return -EINVAL;
for (i = pde; i < pde + count; i++) {
int i, ret;
/* bit of a hack to find the actual last used pd */
- int used_pd = ppgtt->num_pd_entries / GEN8_PDES_PER_PAGE;
+ int used_pd = ppgtt->num_pd_entries / I915_PDES;
for (i = used_pd - 1; i >= 0; i--) {
dma_addr_t addr = ppgtt->pdp.page_directory[i]->daddr;
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen8_gtt_pte_t *pt_vaddr, scratch_pte;
+ gen8_pte_t *pt_vaddr, scratch_pte;
unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
page_table = pt->page;
last_pte = pte + num_entries;
- if (last_pte > GEN8_PTES_PER_PAGE)
- last_pte = GEN8_PTES_PER_PAGE;
+ if (last_pte > GEN8_PTES)
+ last_pte = GEN8_PTES;
pt_vaddr = kmap_atomic(page_table);
kunmap_atomic(pt_vaddr);
pte = 0;
- if (++pde == GEN8_PDES_PER_PAGE) {
+ if (++pde == I915_PDES) {
pdpe++;
pde = 0;
}
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen8_gtt_pte_t *pt_vaddr;
+ gen8_pte_t *pt_vaddr;
unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
pt_vaddr[pte] =
gen8_pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true);
- if (++pte == GEN8_PTES_PER_PAGE) {
+ if (++pte == GEN8_PTES) {
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
- if (++pde == GEN8_PDES_PER_PAGE) {
+ if (++pde == I915_PDES) {
pdpe++;
pde = 0;
}
if (!pd->page)
return;
- for (i = 0; i < GEN8_PDES_PER_PAGE; i++) {
+ for (i = 0; i < I915_PDES; i++) {
if (WARN_ON(!pd->page_table[i]))
continue;
pci_unmap_page(hwdev, ppgtt->pdp.page_directory[i]->daddr, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
+ for (j = 0; j < I915_PDES; j++) {
struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
struct i915_page_table_entry *pt;
dma_addr_t addr;
for (i = 0; i < ppgtt->num_pd_pages; i++) {
ret = alloc_pt_range(ppgtt->pdp.page_directory[i],
- 0, GEN8_PDES_PER_PAGE, ppgtt->base.dev);
+ 0, I915_PDES, ppgtt->base.dev);
if (ret)
goto unwind_out;
}
if (ret)
goto err_out;
- ppgtt->num_pd_entries = max_pdp * GEN8_PDES_PER_PAGE;
+ ppgtt->num_pd_entries = max_pdp * I915_PDES;
return 0;
return 0;
}
-/**
+/*
* GEN8 legacy ppgtt programming is accomplished through a max 4 PDP registers
* with a net effect resembling a 2-level page table in normal x86 terms. Each
* PDP represents 1GB of memory 4 * 512 * 512 * 4096 = 4GB legacy 32b address
static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt, uint64_t size)
{
const int max_pdp = DIV_ROUND_UP(size, 1 << 30);
- const int min_pt_pages = GEN8_PDES_PER_PAGE * max_pdp;
+ const int min_pt_pages = I915_PDES * max_pdp;
int i, j, ret;
if (size % (1<<30))
DRM_INFO("Pages will be wasted unless GTT size (%llu) is divisible by 1GB\n", size);
- /* 1. Do all our allocations for page directories and page tables. */
- ret = gen8_ppgtt_alloc(ppgtt, max_pdp);
+ /* 1. Do all our allocations for page directories and page tables.
+ * We allocate more than was asked so that we can point the unused parts
+ * to valid entries that point to the scratch page. Dynamic page tables
+ * will fix this eventually.
+ */
+ ret = gen8_ppgtt_alloc(ppgtt, GEN8_LEGACY_PDPES);
if (ret)
return ret;
/*
* 2. Create DMA mappings for the page directories and page tables.
*/
- for (i = 0; i < max_pdp; i++) {
+ for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
ret = gen8_ppgtt_setup_page_directories(ppgtt, i);
if (ret)
goto bail;
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
+ for (j = 0; j < I915_PDES; j++) {
ret = gen8_ppgtt_setup_page_tables(ppgtt, i, j);
if (ret)
goto bail;
* plugged in correctly. So we do that now/here. For aliasing PPGTT, we
* will never need to touch the PDEs again.
*/
- for (i = 0; i < max_pdp; i++) {
+ for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
- gen8_ppgtt_pde_t *pd_vaddr;
+ gen8_pde_t *pd_vaddr;
pd_vaddr = kmap_atomic(ppgtt->pdp.page_directory[i]->page);
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
+ for (j = 0; j < I915_PDES; j++) {
struct i915_page_table_entry *pt = pd->page_table[j];
dma_addr_t addr = pt->daddr;
pd_vaddr[j] = gen8_pde_encode(ppgtt->base.dev, addr,
ppgtt->base.insert_entries = gen8_ppgtt_insert_entries;
ppgtt->base.cleanup = gen8_ppgtt_cleanup;
ppgtt->base.start = 0;
- ppgtt->base.total = ppgtt->num_pd_entries * GEN8_PTES_PER_PAGE * PAGE_SIZE;
- ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ /* This is the area that we advertise as usable for the caller */
+ ppgtt->base.total = max_pdp * I915_PDES * GEN8_PTES * PAGE_SIZE;
+
+ /* Set all ptes to the valid scratch page, including those above the requested space */
+ ppgtt->base.clear_range(&ppgtt->base, 0,
+ ppgtt->num_pd_pages * GEN8_PTES * PAGE_SIZE,
+ true);
DRM_DEBUG_DRIVER("Allocated %d pages for page directories (%d wasted)\n",
ppgtt->num_pd_pages, ppgtt->num_pd_pages - max_pdp);
{
struct drm_i915_private *dev_priv = ppgtt->base.dev->dev_private;
struct i915_address_space *vm = &ppgtt->base;
- gen6_gtt_pte_t __iomem *pd_addr;
- gen6_gtt_pte_t scratch_pte;
+ gen6_pte_t __iomem *pd_addr;
+ gen6_pte_t scratch_pte;
uint32_t pd_entry;
int pte, pde;
scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true, 0);
- pd_addr = (gen6_gtt_pte_t __iomem *)dev_priv->gtt.gsm +
- ppgtt->pd.pd_offset / sizeof(gen6_gtt_pte_t);
+ pd_addr = (gen6_pte_t __iomem *)dev_priv->gtt.gsm +
+ ppgtt->pd.pd_offset / sizeof(gen6_pte_t);
seq_printf(m, " VM %p (pd_offset %x-%x):\n", vm,
ppgtt->pd.pd_offset,
ppgtt->pd.pd_offset + ppgtt->num_pd_entries);
for (pde = 0; pde < ppgtt->num_pd_entries; pde++) {
u32 expected;
- gen6_gtt_pte_t *pt_vaddr;
+ gen6_pte_t *pt_vaddr;
dma_addr_t pt_addr = ppgtt->pd.page_table[pde]->daddr;
pd_entry = readl(pd_addr + pde);
expected = (GEN6_PDE_ADDR_ENCODE(pt_addr) | GEN6_PDE_VALID);
seq_printf(m, "\tPDE: %x\n", pd_entry);
pt_vaddr = kmap_atomic(ppgtt->pd.page_table[pde]->page);
- for (pte = 0; pte < I915_PPGTT_PT_ENTRIES; pte+=4) {
+ for (pte = 0; pte < GEN6_PTES; pte+=4) {
unsigned long va =
- (pde * PAGE_SIZE * I915_PPGTT_PT_ENTRIES) +
+ (pde * PAGE_SIZE * GEN6_PTES) +
(pte * PAGE_SIZE);
int i;
bool found = false;
}
}
-static void gen6_write_pdes(struct i915_hw_ppgtt *ppgtt)
+/* Write the PDE at index @pde in page directory @pd to point at page table @pt */
+static void gen6_write_pde(struct i915_page_directory_entry *pd,
+ const int pde, struct i915_page_table_entry *pt)
{
- struct drm_i915_private *dev_priv = ppgtt->base.dev->dev_private;
- gen6_gtt_pte_t __iomem *pd_addr;
- uint32_t pd_entry;
- int i;
+ /* Caller needs to make sure the write completes if necessary */
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(pd, struct i915_hw_ppgtt, pd);
+ u32 pd_entry;
- WARN_ON(ppgtt->pd.pd_offset & 0x3f);
- pd_addr = (gen6_gtt_pte_t __iomem*)dev_priv->gtt.gsm +
- ppgtt->pd.pd_offset / sizeof(gen6_gtt_pte_t);
- for (i = 0; i < ppgtt->num_pd_entries; i++) {
- dma_addr_t pt_addr;
+ pd_entry = GEN6_PDE_ADDR_ENCODE(pt->daddr);
+ pd_entry |= GEN6_PDE_VALID;
- pt_addr = ppgtt->pd.page_table[i]->daddr;
- pd_entry = GEN6_PDE_ADDR_ENCODE(pt_addr);
- pd_entry |= GEN6_PDE_VALID;
+ writel(pd_entry, ppgtt->pd_addr + pde);
+}
- writel(pd_entry, pd_addr + i);
- }
- readl(pd_addr);
+/* Write the addresses of all the page tables in the page directory @pd to
+ * incrementing page directory entries. */
+static void gen6_write_page_range(struct drm_i915_private *dev_priv,
+ struct i915_page_directory_entry *pd,
+ uint32_t start, uint32_t length)
+{
+ struct i915_page_table_entry *pt;
+ uint32_t pde, temp;
+
+ gen6_for_each_pde(pt, pd, start, length, temp, pde)
+ gen6_write_pde(pd, pde, pt);
+
+ /* Make sure write is complete before other code can use this page
+ * table. Also required for WC mapped PTEs */
+ readl(dev_priv->gtt.gsm);
}
static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen6_gtt_pte_t *pt_vaddr, scratch_pte;
+ gen6_pte_t *pt_vaddr, scratch_pte;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- unsigned act_pt = first_entry / I915_PPGTT_PT_ENTRIES;
- unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES;
+ unsigned act_pt = first_entry / GEN6_PTES;
+ unsigned first_pte = first_entry % GEN6_PTES;
unsigned last_pte, i;
scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true, 0);
while (num_entries) {
last_pte = first_pte + num_entries;
- if (last_pte > I915_PPGTT_PT_ENTRIES)
- last_pte = I915_PPGTT_PT_ENTRIES;
+ if (last_pte > GEN6_PTES)
+ last_pte = GEN6_PTES;
pt_vaddr = kmap_atomic(ppgtt->pd.page_table[act_pt]->page);
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen6_gtt_pte_t *pt_vaddr;
+ gen6_pte_t *pt_vaddr;
unsigned first_entry = start >> PAGE_SHIFT;
- unsigned act_pt = first_entry / I915_PPGTT_PT_ENTRIES;
- unsigned act_pte = first_entry % I915_PPGTT_PT_ENTRIES;
+ unsigned act_pt = first_entry / GEN6_PTES;
+ unsigned act_pte = first_entry % GEN6_PTES;
struct sg_page_iter sg_iter;
pt_vaddr = NULL;
vm->pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true, flags);
- if (++act_pte == I915_PPGTT_PT_ENTRIES) {
+ if (++act_pte == GEN6_PTES) {
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
act_pt++;
kunmap_atomic(pt_vaddr);
}
-static void gen6_ppgtt_unmap_pages(struct i915_hw_ppgtt *ppgtt)
+/* PDE TLBs are a pain to invalidate pre GEN8. It requires a context reload. If we
+ * are switching between contexts with the same LRCA, we also must do a force
+ * restore.
+ */
+static inline void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt)
+{
+ /* Mark the PDs dirty on every ring so they are reloaded before the next batch. */
+ ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->base.dev)->ring_mask;
+}
+
+static void gen6_initialize_pt(struct i915_address_space *vm,
+ struct i915_page_table_entry *pt)
{
+ gen6_pte_t *pt_vaddr, scratch_pte;
int i;
- for (i = 0; i < ppgtt->num_pd_entries; i++)
- pci_unmap_page(ppgtt->base.dev->pdev,
- ppgtt->pd.page_table[i]->daddr,
- 4096, PCI_DMA_BIDIRECTIONAL);
+ WARN_ON(vm->scratch.addr == 0);
+
+ scratch_pte = vm->pte_encode(vm->scratch.addr,
+ I915_CACHE_LLC, true, 0);
+
+ pt_vaddr = kmap_atomic(pt->page);
+
+ for (i = 0; i < GEN6_PTES; i++)
+ pt_vaddr[i] = scratch_pte;
+
+ kunmap_atomic(pt_vaddr);
+}
+
+static int gen6_alloc_va_range(struct i915_address_space *vm,
+ uint64_t start, uint64_t length)
+{
+ DECLARE_BITMAP(new_page_tables, I915_PDES);
+ struct drm_device *dev = vm->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(vm, struct i915_hw_ppgtt, base);
+ struct i915_page_table_entry *pt;
+ const uint32_t start_save = start, length_save = length;
+ uint32_t pde, temp;
+ int ret;
+
+ WARN_ON(upper_32_bits(start));
+
+ bitmap_zero(new_page_tables, I915_PDES);
+
+ /* The allocation is done in two stages so that we can bail out with
+ * a minimal amount of pain. The first stage finds new page tables that
+ * need allocation. The second stage marks the used ptes within the page
+ * tables.
+ */
+ gen6_for_each_pde(pt, &ppgtt->pd, start, length, temp, pde) {
+ if (pt != ppgtt->scratch_pt) {
+ WARN_ON(bitmap_empty(pt->used_ptes, GEN6_PTES));
+ continue;
+ }
+
+ /* We've already allocated a page table */
+ WARN_ON(!bitmap_empty(pt->used_ptes, GEN6_PTES));
+
+ pt = alloc_pt_single(dev);
+ if (IS_ERR(pt)) {
+ ret = PTR_ERR(pt);
+ goto unwind_out;
+ }
+
+ gen6_initialize_pt(vm, pt);
+
+ ppgtt->pd.page_table[pde] = pt;
+ set_bit(pde, new_page_tables);
+ trace_i915_page_table_entry_alloc(vm, pde, start, GEN6_PDE_SHIFT);
+ }
+
+ start = start_save;
+ length = length_save;
+
+ gen6_for_each_pde(pt, &ppgtt->pd, start, length, temp, pde) {
+ DECLARE_BITMAP(tmp_bitmap, GEN6_PTES);
+
+ bitmap_zero(tmp_bitmap, GEN6_PTES);
+ bitmap_set(tmp_bitmap, gen6_pte_index(start),
+ gen6_pte_count(start, length));
+
+ if (test_and_clear_bit(pde, new_page_tables))
+ gen6_write_pde(&ppgtt->pd, pde, pt);
+
+ trace_i915_page_table_entry_map(vm, pde, pt,
+ gen6_pte_index(start),
+ gen6_pte_count(start, length),
+ GEN6_PTES);
+ bitmap_or(pt->used_ptes, tmp_bitmap, pt->used_ptes,
+ GEN6_PTES);
+ }
+
+ WARN_ON(!bitmap_empty(new_page_tables, I915_PDES));
+
+ /* Make sure write is complete before other code can use this page
+ * table. Also required for WC mapped PTEs */
+ readl(dev_priv->gtt.gsm);
+
+ mark_tlbs_dirty(ppgtt);
+ return 0;
+
+unwind_out:
+ for_each_set_bit(pde, new_page_tables, I915_PDES) {
+ struct i915_page_table_entry *pt = ppgtt->pd.page_table[pde];
+
+ ppgtt->pd.page_table[pde] = ppgtt->scratch_pt;
+ unmap_and_free_pt(pt, vm->dev);
+ }
+
+ mark_tlbs_dirty(ppgtt);
+ return ret;
}
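
The two-stage structure documented in the comment above is what keeps the unwind path cheap: only tables recorded in new_page_tables are freed on failure, while pre-existing ones are left alone. A compressed standalone model of that control flow (NULL plays the role of the scratch table here; the allocator stub is illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    #define NPDES 4

    static int alloc_should_fail;       /* flip to exercise the unwind path */

    static int *alloc_table(void)
    {
        return alloc_should_fail ? NULL : calloc(1, sizeof(int));
    }

    static int alloc_range(int **pd, int from, int to)
    {
        unsigned char is_new[NPDES] = { 0 };
        int pde;

        /* Stage 1: allocate every missing table, remembering which are new. */
        for (pde = from; pde < to; pde++) {
            if (pd[pde])
                continue;               /* already populated */
            pd[pde] = alloc_table();
            if (!pd[pde])
                goto unwind;
            is_new[pde] = 1;
        }

        /* Stage 2: mark usage; nothing in this stage can fail. */
        for (pde = from; pde < to; pde++)
            *pd[pde] = 1;
        return 0;

    unwind:
        /* Free only the tables this call created; older ones survive. */
        while (pde-- > from) {
            if (is_new[pde]) {
                free(pd[pde]);
                pd[pde] = NULL;         /* back to the "scratch" state */
            }
        }
        return -1;
    }

    int main(void)
    {
        int *pd[NPDES] = { 0 };

        printf("first: %d\n", alloc_range(pd, 1, 3));  /* allocates pdes 1-2 */
        alloc_should_fail = 1;
        printf("second: %d\n", alloc_range(pd, 0, 4)); /* fails; pdes 1-2 survive */
        return 0;
    }
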
static void gen6_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
{
int i;
- for (i = 0; i < ppgtt->num_pd_entries; i++)
- unmap_and_free_pt(ppgtt->pd.page_table[i], ppgtt->base.dev);
+ for (i = 0; i < ppgtt->num_pd_entries; i++) {
+ struct i915_page_table_entry *pt = ppgtt->pd.page_table[i];
+
+ if (pt != ppgtt->scratch_pt)
+ unmap_and_free_pt(ppgtt->pd.page_table[i], ppgtt->base.dev);
+ }
+ unmap_and_free_pt(ppgtt->scratch_pt, ppgtt->base.dev);
unmap_and_free_pd(&ppgtt->pd);
}
drm_mm_remove_node(&ppgtt->node);
- gen6_ppgtt_unmap_pages(ppgtt);
gen6_ppgtt_free(ppgtt);
}
* size. We allocate at the top of the GTT to avoid fragmentation.
*/
BUG_ON(!drm_mm_initialized(&dev_priv->gtt.base.mm));
+ ppgtt->scratch_pt = alloc_pt_single(ppgtt->base.dev);
+ if (IS_ERR(ppgtt->scratch_pt))
+ return PTR_ERR(ppgtt->scratch_pt);
+
+ gen6_initialize_pt(&ppgtt->base, ppgtt->scratch_pt);
+
alloc:
ret = drm_mm_insert_node_in_range_generic(&dev_priv->gtt.base.mm,
&ppgtt->node, GEN6_PD_SIZE,
0, dev_priv->gtt.base.total,
0);
if (ret)
- return ret;
+ goto err_out;
retried = true;
goto alloc;
}
if (ret)
- return ret;
+ goto err_out;
+
if (ppgtt->node.start < dev_priv->gtt.mappable_end)
DRM_DEBUG("Forced to use aperture for PDEs\n");
- ppgtt->num_pd_entries = GEN6_PPGTT_PD_ENTRIES;
+ ppgtt->num_pd_entries = I915_PDES;
return 0;
+
+err_out:
+ unmap_and_free_pt(ppgtt->scratch_pt, ppgtt->base.dev);
+ return ret;
}
static int gen6_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt)
{
- int ret;
-
- ret = gen6_ppgtt_allocate_page_directories(ppgtt);
- if (ret)
- return ret;
-
- ret = alloc_pt_range(&ppgtt->pd, 0, ppgtt->num_pd_entries,
- ppgtt->base.dev);
-
- if (ret) {
- drm_mm_remove_node(&ppgtt->node);
- return ret;
- }
-
- return 0;
+ return gen6_ppgtt_allocate_page_directories(ppgtt);
}
-static int gen6_ppgtt_setup_page_tables(struct i915_hw_ppgtt *ppgtt)
+static void gen6_scratch_va_range(struct i915_hw_ppgtt *ppgtt,
+ uint64_t start, uint64_t length)
{
- struct drm_device *dev = ppgtt->base.dev;
- int i;
-
- for (i = 0; i < ppgtt->num_pd_entries; i++) {
- struct page *page;
- dma_addr_t pt_addr;
-
- page = ppgtt->pd.page_table[i]->page;
- pt_addr = pci_map_page(dev->pdev, page, 0, 4096,
- PCI_DMA_BIDIRECTIONAL);
+ struct i915_page_table_entry *unused;
+ uint32_t pde, temp;
- if (pci_dma_mapping_error(dev->pdev, pt_addr)) {
- gen6_ppgtt_unmap_pages(ppgtt);
- return -EIO;
- }
-
- ppgtt->pd.page_table[i]->daddr = pt_addr;
- }
-
- return 0;
+ gen6_for_each_pde(unused, &ppgtt->pd, start, length, temp, pde)
+ ppgtt->pd.page_table[pde] = ppgtt->scratch_pt;
}
-static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
+static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt, bool aliasing)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (ret)
return ret;
- ret = gen6_ppgtt_setup_page_tables(ppgtt);
- if (ret) {
- gen6_ppgtt_free(ppgtt);
- return ret;
+ if (aliasing) {
+ /* preallocate all pts */
+ ret = alloc_pt_range(&ppgtt->pd, 0, ppgtt->num_pd_entries,
+ ppgtt->base.dev);
+
+ if (ret) {
+ gen6_ppgtt_cleanup(&ppgtt->base);
+ return ret;
+ }
}
+ ppgtt->base.allocate_va_range = gen6_alloc_va_range;
ppgtt->base.clear_range = gen6_ppgtt_clear_range;
ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
ppgtt->base.cleanup = gen6_ppgtt_cleanup;
ppgtt->base.start = 0;
- ppgtt->base.total = ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES * PAGE_SIZE;
+ ppgtt->base.total = ppgtt->num_pd_entries * GEN6_PTES * PAGE_SIZE;
ppgtt->debug_dump = gen6_dump_ppgtt;
ppgtt->pd.pd_offset =
- ppgtt->node.start / PAGE_SIZE * sizeof(gen6_gtt_pte_t);
+ ppgtt->node.start / PAGE_SIZE * sizeof(gen6_pte_t);
+
+ ppgtt->pd_addr = (gen6_pte_t __iomem *)dev_priv->gtt.gsm +
+ ppgtt->pd.pd_offset / sizeof(gen6_pte_t);
- ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ if (aliasing)
+ ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ else
+ gen6_scratch_va_range(ppgtt, 0, ppgtt->base.total);
+
+ gen6_write_page_range(dev_priv, &ppgtt->pd, 0, ppgtt->base.total);
DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n",
ppgtt->node.size >> 20,
ppgtt->node.start / PAGE_SIZE);
- gen6_write_pdes(ppgtt);
DRM_DEBUG("Adding PPGTT at offset %x\n",
ppgtt->pd.pd_offset << 10);
return 0;
}
-static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt,
+ bool aliasing)
{
struct drm_i915_private *dev_priv = dev->dev_private;
ppgtt->base.scratch = dev_priv->gtt.base.scratch;
if (INTEL_INFO(dev)->gen < 8)
- return gen6_ppgtt_init(ppgtt);
+ return gen6_ppgtt_init(ppgtt, aliasing);
else
return gen8_ppgtt_init(ppgtt, dev_priv->gtt.base.total);
}
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = 0;
- ret = __hw_ppgtt_init(dev, ppgtt);
+ ret = __hw_ppgtt_init(dev, ppgtt, false);
if (ret == 0) {
kref_init(&ppgtt->ref);
drm_mm_init(&ppgtt->base.mm, ppgtt->base.start,
return;
}
- list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
- /* TODO: Perhaps it shouldn't be gen6 specific */
- if (i915_is_ggtt(vm)) {
- if (dev_priv->mm.aliasing_ppgtt)
- gen6_write_pdes(dev_priv->mm.aliasing_ppgtt);
- continue;
- }
+ if (USES_PPGTT(dev)) {
+ list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
+ /* TODO: Perhaps it shouldn't be gen6 specific */
+
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(vm, struct i915_hw_ppgtt,
+ base);
- gen6_write_pdes(container_of(vm, struct i915_hw_ppgtt, base));
+ if (i915_is_ggtt(vm))
+ ppgtt = dev_priv->mm.aliasing_ppgtt;
+
+ gen6_write_page_range(dev_priv, &ppgtt->pd,
+ 0, ppgtt->base.total);
+ }
}
i915_ggtt_flush(dev_priv);
return 0;
}
-static inline void gen8_set_pte(void __iomem *addr, gen8_gtt_pte_t pte)
+static inline void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
{
#ifdef writeq
writeq(pte, addr);
{
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
- gen8_gtt_pte_t __iomem *gtt_entries =
- (gen8_gtt_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
+ gen8_pte_t __iomem *gtt_entries =
+ (gen8_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
int i = 0;
struct sg_page_iter sg_iter;
dma_addr_t addr = 0; /* shut up gcc */
{
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
- gen6_gtt_pte_t __iomem *gtt_entries =
- (gen6_gtt_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
+ gen6_pte_t __iomem *gtt_entries =
+ (gen6_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
int i = 0;
struct sg_page_iter sg_iter;
dma_addr_t addr = 0;
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- gen8_gtt_pte_t scratch_pte, __iomem *gtt_base =
- (gen8_gtt_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
+ gen8_pte_t scratch_pte, __iomem *gtt_base =
+ (gen8_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
const int max_entries = gtt_total_entries(dev_priv->gtt) - first_entry;
int i;
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- gen6_gtt_pte_t scratch_pte, __iomem *gtt_base =
- (gen6_gtt_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
+ gen6_pte_t scratch_pte, __iomem *gtt_base =
+ (gen6_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
const int max_entries = gtt_total_entries(dev_priv->gtt) - first_entry;
int i;
struct drm_device *dev = vma->vm->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj = vma->obj;
+ struct sg_table *pages = obj->pages;
/* Currently applicable only to VLV */
if (obj->gt_ro)
flags |= PTE_READ_ONLY;
+ if (i915_is_ggtt(vma->vm))
+ pages = vma->ggtt_view.pages;
+
/* If there is no aliasing PPGTT, or the caller needs a global mapping,
* or we have a global mapping already but the cacheability flags have
* changed, set the global PTEs.
if (!dev_priv->mm.aliasing_ppgtt || flags & GLOBAL_BIND) {
if (!(vma->bound & GLOBAL_BIND) ||
(cache_level != obj->cache_level)) {
- vma->vm->insert_entries(vma->vm, vma->ggtt_view.pages,
+ vma->vm->insert_entries(vma->vm, pages,
vma->node.start,
cache_level, flags);
vma->bound |= GLOBAL_BIND;
(!(vma->bound & LOCAL_BIND) ||
(cache_level != obj->cache_level))) {
struct i915_hw_ppgtt *appgtt = dev_priv->mm.aliasing_ppgtt;
- appgtt->base.insert_entries(&appgtt->base,
- vma->ggtt_view.pages,
+ appgtt->base.insert_entries(&appgtt->base, pages,
vma->node.start,
cache_level, flags);
vma->bound |= LOCAL_BIND;
if (!ppgtt)
return -ENOMEM;
- ret = __hw_ppgtt_init(dev, ppgtt);
- if (ret != 0)
+ ret = __hw_ppgtt_init(dev, ppgtt, true);
+ if (ret) {
+ kfree(ppgtt);
return ret;
+ }
dev_priv->mm.aliasing_ppgtt = ppgtt;
}
gtt_size = gen8_get_total_gtt_size(snb_gmch_ctl);
}
- *gtt_total = (gtt_size / sizeof(gen8_gtt_pte_t)) << PAGE_SHIFT;
+ *gtt_total = (gtt_size / sizeof(gen8_pte_t)) << PAGE_SHIFT;
if (IS_CHERRYVIEW(dev))
chv_setup_private_ppat(dev_priv);
*stolen = gen6_get_stolen_size(snb_gmch_ctl);
gtt_size = gen6_get_total_gtt_size(snb_gmch_ctl);
- *gtt_total = (gtt_size / sizeof(gen6_gtt_pte_t)) << PAGE_SHIFT;
+ *gtt_total = (gtt_size / sizeof(gen6_pte_t)) << PAGE_SHIFT;
ret = ggtt_probe_common(dev, gtt_size);
return 0;
}
-static struct i915_vma *__i915_gem_vma_create(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view)
+static struct i915_vma *
+__i915_gem_vma_create(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view)
{
- struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+ struct i915_vma *vma;
+
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return ERR_PTR(-EINVAL);
+ vma = kzalloc(sizeof(*vma), GFP_KERNEL);
if (vma == NULL)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&vma->exec_list);
vma->vm = vm;
vma->obj = obj;
- vma->ggtt_view = *view;
if (INTEL_INFO(vm->dev)->gen >= 6) {
if (i915_is_ggtt(vm)) {
+ vma->ggtt_view = *ggtt_view;
+
vma->unbind_vma = ggtt_unbind_vma;
vma->bind_vma = ggtt_bind_vma;
} else {
}
} else {
BUG_ON(!i915_is_ggtt(vm));
+ vma->ggtt_view = *ggtt_view;
vma->unbind_vma = i915_ggtt_unbind_vma;
vma->bind_vma = i915_ggtt_bind_vma;
}
}
struct i915_vma *
-i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm)
+{
+ struct i915_vma *vma;
+
+ vma = i915_gem_obj_to_vma(obj, vm);
+ if (!vma)
+ vma = __i915_gem_vma_create(obj, vm,
+ i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL);
+
+ return vma;
+}
+
+struct i915_vma *
+i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view)
{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
struct i915_vma *vma;
- vma = i915_gem_obj_to_vma_view(obj, vm, view);
+ if (WARN_ON(!view))
+ return ERR_PTR(-EINVAL);
+
+ vma = i915_gem_obj_to_ggtt_view(obj, view);
+
+ if (IS_ERR(vma))
+ return vma;
+
if (!vma)
- vma = __i915_gem_vma_create(obj, vm, view);
+ vma = __i915_gem_vma_create(obj, ggtt, view);
return vma;
+
+}
+
+static void
+rotate_pages(dma_addr_t *in, unsigned int width, unsigned int height,
+ struct sg_table *st)
+{
+ unsigned int column, row;
+ unsigned int src_idx;
+ struct scatterlist *sg = st->sgl;
+
+ st->nents = 0;
+
+ for (column = 0; column < width; column++) {
+ src_idx = width * (height - 1) + column;
+ for (row = 0; row < height; row++) {
+ st->nents++;
+ /* We don't need the pages, but need to initialize
+ * the entries so the sg list can be happily traversed.
+ * The only things we need are the DMA addresses.
+ */
+ sg_set_page(sg, NULL, PAGE_SIZE, 0);
+ sg_dma_address(sg) = in[src_idx];
+ sg_dma_len(sg) = PAGE_SIZE;
+ sg = sg_next(sg);
+ src_idx -= width;
+ }
+ }
+}
+
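
rotate_pages() fills the destination in column-major order while the source index walks up one row per step (src_idx -= width), i.e. a 90 degree rotation of the page grid. A standalone check of just that index math for an assumed 3x2 grid (width x height), printing the dst <- src mapping:

    #include <stdio.h>

    int main(void)
    {
        const unsigned int width = 3, height = 2;
        unsigned int column, row, src_idx;

        for (column = 0; column < width; column++) {
            /* Start at the bottom of the column, walk upwards. */
            src_idx = width * (height - 1) + column;
            for (row = 0; row < height; row++) {
                printf("dst %u <- src %u\n", column * height + row, src_idx);
                src_idx -= width;   /* may wrap on the last step, unused then */
            }
        }
        return 0;
    }
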
+static struct sg_table *
+intel_rotate_fb_obj_pages(struct i915_ggtt_view *ggtt_view,
+ struct drm_i915_gem_object *obj)
+{
+ struct drm_device *dev = obj->base.dev;
+ struct intel_rotation_info *rot_info = &ggtt_view->rotation_info;
+ unsigned long size, pages, rot_pages;
+ struct sg_page_iter sg_iter;
+ unsigned long i;
+ dma_addr_t *page_addr_list;
+ struct sg_table *st;
+ unsigned int tile_pitch, tile_height;
+ unsigned int width_pages, height_pages;
+ int ret = -ENOMEM;
+
+ pages = obj->base.size / PAGE_SIZE;
+
+ /* Calculate tiling geometry. */
+ tile_height = intel_tile_height(dev, rot_info->pixel_format,
+ rot_info->fb_modifier);
+ tile_pitch = PAGE_SIZE / tile_height;
+ width_pages = DIV_ROUND_UP(rot_info->pitch, tile_pitch);
+ height_pages = DIV_ROUND_UP(rot_info->height, tile_height);
+ rot_pages = width_pages * height_pages;
+ size = rot_pages * PAGE_SIZE;
+
+ /* Allocate a temporary list of source pages for random access. */
+ page_addr_list = drm_malloc_ab(pages, sizeof(dma_addr_t));
+ if (!page_addr_list)
+ return ERR_PTR(ret);
+
+ /* Allocate target SG list. */
+ st = kmalloc(sizeof(*st), GFP_KERNEL);
+ if (!st)
+ goto err_st_alloc;
+
+ ret = sg_alloc_table(st, rot_pages, GFP_KERNEL);
+ if (ret)
+ goto err_sg_alloc;
+
+ /* Populate source page list from the object. */
+ i = 0;
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) {
+ page_addr_list[i] = sg_page_iter_dma_address(&sg_iter);
+ i++;
+ }
+
+ /* Rotate the pages. */
+ rotate_pages(page_addr_list, width_pages, height_pages, st);
+
+ DRM_DEBUG_KMS(
+ "Created rotated page mapping for object size %lu (pitch=%u, height=%u, pixel_format=0x%x, %ux%u tiles, %lu pages).\n",
+ size, rot_info->pitch, rot_info->height,
+ rot_info->pixel_format, width_pages, height_pages,
+ rot_pages);
+
+ drm_free_large(page_addr_list);
+
+ return st;
+
+err_sg_alloc:
+ kfree(st);
+err_st_alloc:
+ drm_free_large(page_addr_list);
+
+ DRM_DEBUG_KMS(
+ "Failed to create rotated mapping for object size %lu! (%d) (pitch=%u, height=%u, pixel_format=0x%x, %ux%u tiles, %lu pages)\n",
+ size, ret, rot_info->pitch, rot_info->height,
+ rot_info->pixel_format, width_pages, height_pages,
+ rot_pages);
+ return ERR_PTR(ret);
}
-static inline
-int i915_get_vma_pages(struct i915_vma *vma)
+static inline int
+i915_get_ggtt_vma_pages(struct i915_vma *vma)
{
+ int ret = 0;
+
if (vma->ggtt_view.pages)
return 0;
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
vma->ggtt_view.pages = vma->obj->pages;
+ else if (vma->ggtt_view.type == I915_GGTT_VIEW_ROTATED)
+ vma->ggtt_view.pages =
+ intel_rotate_fb_obj_pages(&vma->ggtt_view, vma->obj);
else
WARN_ONCE(1, "GGTT view %u not implemented!\n",
vma->ggtt_view.type);
if (!vma->ggtt_view.pages) {
- DRM_ERROR("Failed to get pages for VMA view type %u!\n",
+ DRM_ERROR("Failed to get pages for GGTT view type %u!\n",
vma->ggtt_view.type);
- return -EINVAL;
+ ret = -EINVAL;
+ } else if (IS_ERR(vma->ggtt_view.pages)) {
+ ret = PTR_ERR(vma->ggtt_view.pages);
+ vma->ggtt_view.pages = NULL;
+ DRM_ERROR("Failed to get pages for VMA view type %u (%d)!\n",
+ vma->ggtt_view.type, ret);
}
- return 0;
+ return ret;
}
/**
int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
u32 flags)
{
- int ret = i915_get_vma_pages(vma);
+ if (i915_is_ggtt(vma->vm)) {
+ int ret = i915_get_ggtt_vma_pages(vma);
- if (ret)
- return ret;
+ if (ret)
+ return ret;
+ }
vma->bind_vma(vma, cache_level, flags);
struct drm_i915_file_private;
-typedef uint32_t gen6_gtt_pte_t;
-typedef uint64_t gen8_gtt_pte_t;
-typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
+typedef uint32_t gen6_pte_t;
+typedef uint64_t gen8_pte_t;
+typedef uint64_t gen8_pde_t;
#define gtt_total_entries(gtt) ((gtt).base.total >> PAGE_SHIFT)
-#define I915_PPGTT_PT_ENTRIES (PAGE_SIZE / sizeof(gen6_gtt_pte_t))
+
/* gen6-hsw has bit 11-4 for physical addr bit 39-32 */
#define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0))
#define GEN6_PTE_ADDR_ENCODE(addr) GEN6_GTT_ADDR_ENCODE(addr)
#define GEN6_PTE_UNCACHED (1 << 1)
#define GEN6_PTE_VALID (1 << 0)
-#define GEN6_PPGTT_PD_ENTRIES 512
-#define GEN6_PD_SIZE (GEN6_PPGTT_PD_ENTRIES * PAGE_SIZE)
+#define I915_PTES(pte_len) (PAGE_SIZE / (pte_len))
+#define I915_PTE_MASK(pte_len) (I915_PTES(pte_len) - 1)
+#define I915_PDES 512
+#define I915_PDE_MASK (I915_PDES - 1)
+#define NUM_PTE(pde_shift) (1 << (pde_shift - PAGE_SHIFT))
+
+#define GEN6_PTES I915_PTES(sizeof(gen6_pte_t))
+#define GEN6_PD_SIZE (I915_PDES * PAGE_SIZE)
#define GEN6_PD_ALIGN (PAGE_SIZE * 16)
+#define GEN6_PDE_SHIFT 22
#define GEN6_PDE_VALID (1 << 0)
#define GEN7_PTE_CACHE_L3_LLC (3 << 1)
#define GEN8_PTE_SHIFT 12
#define GEN8_PTE_MASK 0x1ff
#define GEN8_LEGACY_PDPES 4
-#define GEN8_PTES_PER_PAGE (PAGE_SIZE / sizeof(gen8_gtt_pte_t))
-#define GEN8_PDES_PER_PAGE (PAGE_SIZE / sizeof(gen8_ppgtt_pde_t))
+#define GEN8_PTES I915_PTES(sizeof(gen8_pte_t))
#define PPAT_UNCACHED_INDEX (_PAGE_PWT | _PAGE_PCD)
#define PPAT_CACHED_PDE_INDEX 0 /* WB LLC */
enum i915_ggtt_view_type {
I915_GGTT_VIEW_NORMAL = 0,
+ I915_GGTT_VIEW_ROTATED
+};
+
+struct intel_rotation_info {
+ unsigned int height;
+ unsigned int pitch;
+ uint32_t pixel_format;
+ uint64_t fb_modifier;
};
struct i915_ggtt_view {
enum i915_ggtt_view_type type;
struct sg_table *pages;
+
+ union {
+ struct intel_rotation_info rotation_info;
+ };
};
extern const struct i915_ggtt_view i915_ggtt_view_normal;
+extern const struct i915_ggtt_view i915_ggtt_view_rotated;
enum i915_cache_level;
struct i915_page_table_entry {
struct page *page;
dma_addr_t daddr;
+
+ unsigned long *used_ptes;
};
struct i915_page_directory_entry {
dma_addr_t daddr;
};
- struct i915_page_table_entry *page_table[GEN6_PPGTT_PD_ENTRIES]; /* PDEs */
+ struct i915_page_table_entry *page_table[I915_PDES]; /* PDEs */
};
struct i915_page_directory_pointer_entry {
struct list_head inactive_list;
/* FIXME: Need a more generic return type */
- gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 flags); /* Create a valid PTE */
+ gen6_pte_t (*pte_encode)(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 flags); /* Create a valid PTE */
+ int (*allocate_va_range)(struct i915_address_space *vm,
+ uint64_t start,
+ uint64_t length);
void (*clear_range)(struct i915_address_space *vm,
uint64_t start,
uint64_t length,
struct i915_address_space base;
struct kref ref;
struct drm_mm_node node;
+ unsigned long pd_dirty_rings;
unsigned num_pd_entries;
unsigned num_pd_pages; /* gen8+ */
union {
struct i915_page_directory_entry pd;
};
+ struct i915_page_table_entry *scratch_pt;
+
struct drm_i915_file_private *file_priv;
+ gen6_pte_t __iomem *pd_addr;
+
int (*enable)(struct i915_hw_ppgtt *ppgtt);
int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
struct intel_engine_cs *ring);
void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
};
+/* Iterates over every pde between start and start + length. If start and
+ * start+length are not perfectly divisible, the macro will round down and up
+ * as needed. The macro modifies pde, start, and length. temp is a scratch
+ * variable. On gen6/7, start = 0 and length = 2G effectively iterates over
+ * every PDE in the system.
+ *
+ * XXX: temp is not actually needed, but it saves doing the ALIGN operation.
+ */
+#define gen6_for_each_pde(pt, pd, start, length, temp, iter) \
+ for (iter = gen6_pde_index(start); \
+ pt = (pd)->page_table[iter], length > 0 && iter < I915_PDES; \
+ iter++, \
+ temp = ALIGN(start+1, 1 << GEN6_PDE_SHIFT) - start, \
+ temp = min_t(unsigned, temp, length), \
+ start += temp, length -= temp)
+
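
A standalone replay of the macro's start/length stepping, showing how a range gets chunked at 4 MiB PDE boundaries exactly as the temp arithmetic above does (ALIGN and MIN are local stand-ins for the kernel macros):

    #include <stdio.h>

    #define GEN6_PDE_SHIFT 22
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
    #define MIN(a, b)   ((a) < (b) ? (a) : (b))

    int main(void)
    {
        /* 6 MiB starting at 3 MiB: expect chunks of 1 MiB, 4 MiB, 1 MiB. */
        unsigned int start = 3 << 20, length = 6 << 20, temp;

        while (length > 0) {
            temp = ALIGN(start + 1, 1u << GEN6_PDE_SHIFT) - start;
            temp = MIN(temp, length);
            printf("pde %u: %u KiB\n", start >> GEN6_PDE_SHIFT, temp >> 10);
            start += temp;
            length -= temp;
        }
        return 0;
    }
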
+static inline uint32_t i915_pte_index(uint64_t address, uint32_t pde_shift)
+{
+ const uint32_t mask = NUM_PTE(pde_shift) - 1;
+
+ return (address >> PAGE_SHIFT) & mask;
+}
+
+/* Helper to count the number of PTEs within the given length. This count
+ * does not cross a page table boundary, so the max value would be
+ * GEN6_PTES for GEN6, and GEN8_PTES for GEN8.
+*/
+static inline uint32_t i915_pte_count(uint64_t addr, size_t length,
+ uint32_t pde_shift)
+{
+ const uint64_t mask = ~((1 << pde_shift) - 1);
+ uint64_t end;
+
+ WARN_ON(length == 0);
+ WARN_ON(offset_in_page(addr|length));
+
+ end = addr + length;
+
+ if ((addr & mask) != (end & mask))
+ return NUM_PTE(pde_shift) - i915_pte_index(addr, pde_shift);
+
+ return i915_pte_index(end, pde_shift) - i915_pte_index(addr, pde_shift);
+}
+
+static inline uint32_t i915_pde_index(uint64_t addr, uint32_t shift)
+{
+ return (addr >> shift) & I915_PDE_MASK;
+}
+
+static inline uint32_t gen6_pte_index(uint32_t addr)
+{
+ return i915_pte_index(addr, GEN6_PDE_SHIFT);
+}
+
+static inline size_t gen6_pte_count(uint32_t addr, uint32_t length)
+{
+ return i915_pte_count(addr, length, GEN6_PDE_SHIFT);
+}
+
+static inline uint32_t gen6_pde_index(uint32_t addr)
+{
+ return i915_pde_index(addr, GEN6_PDE_SHIFT);
+}
+
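
A standalone check of the i915_pte_index()/i915_pte_count() math above, with the gen6 constants (4 KiB pages, GEN6_PDE_SHIFT = 22, so one PDE covers 4 MiB / 1024 PTEs). The helpers are re-declared locally so this compiles outside the driver; the printed values are easy to verify by hand:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT     12
    #define GEN6_PDE_SHIFT 22
    #define NUM_PTE(pde_shift) (1 << ((pde_shift) - PAGE_SHIFT))

    static uint32_t pte_index(uint64_t addr, uint32_t pde_shift)
    {
        return (addr >> PAGE_SHIFT) & (NUM_PTE(pde_shift) - 1);
    }

    static uint32_t pte_count(uint64_t addr, uint64_t length, uint32_t pde_shift)
    {
        const uint64_t mask = ~((1ull << pde_shift) - 1);
        uint64_t end = addr + length;

        if ((addr & mask) != (end & mask))   /* range crosses a PDE boundary */
            return NUM_PTE(pde_shift) - pte_index(addr, pde_shift);
        return pte_index(end, pde_shift) - pte_index(addr, pde_shift);
    }

    int main(void)
    {
        /* 8 pages starting at 1 MiB: stays inside one 4 MiB PDE. */
        printf("%u\n", pte_count(0x100000, 8 * 4096, GEN6_PDE_SHIFT)); /* 8 */
        /* 8 pages starting 4 pages below a PDE boundary: clipped to 4. */
        printf("%u\n", pte_count(0x400000 - 4 * 4096, 8 * 4096,
                                 GEN6_PDE_SHIFT));                     /* 4 */
        return 0;
    }
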
int i915_gem_gtt_init(struct drm_device *dev);
void i915_gem_init_global_gtt(struct drm_device *dev);
void i915_global_gtt_cleanup(struct drm_device *dev);
int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj);
void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj);
+static inline bool
+i915_ggtt_view_equal(const struct i915_ggtt_view *a,
+ const struct i915_ggtt_view *b)
+{
+ if (WARN_ON(!a || !b))
+ return false;
+
+ return a->type == b->type;
+}
+
#endif
--- /dev/null
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/oom.h>
+#include <linux/shmem_fs.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include <linux/pci.h>
+#include <linux/dma-buf.h>
+#include <drm/drmP.h>
+#include <drm/i915_drm.h>
+
+#include "i915_drv.h"
+#include "i915_trace.h"
+
+static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
+{
+ if (!mutex_is_locked(mutex))
+ return false;
+
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
+ return mutex->owner == task;
+#else
+ /* Since UP may be pre-empted, we cannot assume that we own the lock */
+ return false;
+#endif
+}
+
+/**
+ * i915_gem_shrink - Shrink buffer object caches
+ * @dev_priv: i915 device
+ * @target: amount of memory to make available, in pages
+ * @flags: control flags for selecting cache types
+ *
+ * This function is the main interface to the shrinker. It will try to release
+ * up to @target pages of main memory backing storage from buffer objects.
+ * Selection of the specific caches can be done with @flags. This is e.g. useful
+ * when purgeable objects should be removed from caches preferentially.
+ *
+ * Note that it's not guaranteed that the released amount is actually available
+ * as free system memory - the pages might still be in use due to other reasons
+ * (like cpu mmaps), or the mm core might have reused them before we could grab
+ * them. Therefore code that needs to explicitly shrink buffer object caches
+ * (e.g. to avoid deadlocks in memory reclaim) must fall back to
+ * i915_gem_shrink_all().
+ *
+ * Also note that any kind of pinning (both per-vma address space pins and
+ * backing storage pins at the buffer object level) result in the shrinker code
+ * having to skip the object.
+ *
+ * Returns:
+ * The number of pages of backing storage actually released.
+ */
+unsigned long
+i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target, unsigned flags)
+{
+ const struct {
+ struct list_head *list;
+ unsigned int bit;
+ } phases[] = {
+ { &dev_priv->mm.unbound_list, I915_SHRINK_UNBOUND },
+ { &dev_priv->mm.bound_list, I915_SHRINK_BOUND },
+ { NULL, 0 },
+ }, *phase;
+ unsigned long count = 0;
+
+ /*
+ * As we may completely rewrite the (un)bound list whilst unbinding
+ * (due to retiring requests) we have to strictly process only
+ * one element of the list at a time, and recheck the list
+ * on every iteration.
+ *
+ * In particular, we must hold a reference whilst removing the
+ * object as we may end up waiting for and/or retiring the objects.
+ * This might release the final reference (held by the active list)
+ * and result in the object being freed from under us. This is
+ * similar to the precautions the eviction code must take whilst
+ * removing objects.
+ *
+ * Also note that although these lists do not hold a reference to
+ * the object we can safely grab one here: The final object
+ * unreferencing and the bound_list are both protected by the
+ * dev->struct_mutex and so we won't ever be able to observe an
+ * object on the bound_list with a reference count of 0.
+ */
+ for (phase = phases; phase->list; phase++) {
+ struct list_head still_in_list;
+
+ if ((flags & phase->bit) == 0)
+ continue;
+
+ INIT_LIST_HEAD(&still_in_list);
+ while (count < target && !list_empty(phase->list)) {
+ struct drm_i915_gem_object *obj;
+ struct i915_vma *vma, *v;
+
+ obj = list_first_entry(phase->list,
+ typeof(*obj), global_list);
+ list_move_tail(&obj->global_list, &still_in_list);
+
+ if (flags & I915_SHRINK_PURGEABLE &&
+ obj->madv != I915_MADV_DONTNEED)
+ continue;
+
+ drm_gem_object_reference(&obj->base);
+
+ /* For the unbound phase, this should be a no-op! */
+ list_for_each_entry_safe(vma, v,
+ &obj->vma_list, vma_link)
+ if (i915_vma_unbind(vma))
+ break;
+
+ if (i915_gem_object_put_pages(obj) == 0)
+ count += obj->base.size >> PAGE_SHIFT;
+
+ drm_gem_object_unreference(&obj->base);
+ }
+ list_splice(&still_in_list, phase->list);
+ }
+
+ return count;
+}
+
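
The still_in_list dance above is a general pattern for walking a list that may be rewritten underneath you: detach the head, park it on a private list, process it, and splice everything back at the end so nothing is visited twice or lost. A compact userspace model of the list mechanics (a minimal circular doubly linked list standing in for <linux/list.h>; the kernel splices back in O(1) rather than node by node):

    #include <stdio.h>

    struct node { struct node *prev, *next; int id; };

    static void list_init(struct node *h) { h->prev = h->next = h; }
    static int  list_empty(struct node *h) { return h->next == h; }

    static void list_del(struct node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
    }

    static void list_add_tail(struct node *n, struct node *h)
    {
        n->prev = h->prev; n->next = h;
        h->prev->next = n; h->prev = n;
    }

    int main(void)
    {
        struct node list, still, objs[3] = { {0, 0, 1}, {0, 0, 2}, {0, 0, 3} };
        int i;

        list_init(&list); list_init(&still);
        for (i = 0; i < 3; i++)
            list_add_tail(&objs[i], &list);

        /* Take one element at a time; "processing" may rewrite the list. */
        while (!list_empty(&list)) {
            struct node *obj = list.next;

            list_del(obj);
            list_add_tail(obj, &still);     /* park on the private list */
            printf("processing %d\n", obj->id);
        }

        /* Splice survivors back so the list is whole again. */
        while (!list_empty(&still)) {
            struct node *obj = still.next;

            list_del(obj);
            list_add_tail(obj, &list);
        }
        return 0;
    }
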
+/**
+ * i915_gem_shrink_all - Shrink buffer object caches completely
+ * @dev_priv: i915 device
+ *
+ * This is a simple wrapper around i915_gem_shrink() to aggressively shrink all
+ * caches completely. It first waits for and retires all outstanding requests
+ * so that backing storage of active objects can be released, too.
+ *
+ * This should only be used in code to intentionally quiesce the gpu or as a
+ * last-ditch effort when memory seems to have run out.
+ *
+ * Returns:
+ * The number of pages of backing storage actually released.
+ */
+unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv)
+{
+ i915_gem_evict_everything(dev_priv->dev);
+ return i915_gem_shrink(dev_priv, LONG_MAX,
+ I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
+}
+
+static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
+{
+ if (!mutex_trylock(&dev->struct_mutex)) {
+ if (!mutex_is_locked_by(&dev->struct_mutex, current))
+ return false;
+
+ if (to_i915(dev)->mm.shrinker_no_lock_stealing)
+ return false;
+
+ *unlock = false;
+ } else
+ *unlock = true;
+
+ return true;
+}
+
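
mutex_is_locked_by() exists so the trylock above can "steal" the lock: if reclaim re-enters the shrinker while this task already holds struct_mutex (and lock stealing isn't disabled), the shrinker runs without taking or later releasing the mutex. A hedged pthread model of that decision; the owner field is tracked manually since pthreads exposes no owner query, and the bookkeeping here is illustration only, not itself thread-safe:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct owned_mutex {
        pthread_mutex_t m;
        pthread_t owner;
        bool held;
    };

    static bool shrinker_lock(struct owned_mutex *om, bool *unlock)
    {
        if (pthread_mutex_trylock(&om->m) != 0) {
            /* Contended: proceed only if *we* are the holder. */
            if (!(om->held && pthread_equal(om->owner, pthread_self())))
                return false;
            *unlock = false;   /* caller must not drop a lock it inherited */
        } else {
            om->owner = pthread_self();
            om->held = true;
            *unlock = true;
        }
        return true;
    }

    int main(void)
    {
        struct owned_mutex om = { .m = PTHREAD_MUTEX_INITIALIZER };
        bool unlock;

        /* First entry takes the lock; re-entry on the same thread steals it. */
        printf("%d ", shrinker_lock(&om, &unlock));   /* 1, unlock == true */
        printf("%d\n", shrinker_lock(&om, &unlock));  /* 1, unlock == false */
        return 0;
    }
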
+static int num_vma_bound(struct drm_i915_gem_object *obj)
+{
+ struct i915_vma *vma;
+ int count = 0;
+
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (drm_mm_node_allocated(&vma->node))
+ count++;
+
+ return count;
+}
+
+static unsigned long
+i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(shrinker, struct drm_i915_private, mm.shrinker);
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_i915_gem_object *obj;
+ unsigned long count;
+ bool unlock;
+
+ if (!i915_gem_shrinker_lock(dev, &unlock))
+ return 0;
+
+ count = 0;
+ list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
+ if (obj->pages_pin_count == 0)
+ count += obj->base.size >> PAGE_SHIFT;
+
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ if (!i915_gem_obj_is_pinned(obj) &&
+ obj->pages_pin_count == num_vma_bound(obj))
+ count += obj->base.size >> PAGE_SHIFT;
+ }
+
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ return count;
+}
+
+static unsigned long
+i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(shrinker, struct drm_i915_private, mm.shrinker);
+ struct drm_device *dev = dev_priv->dev;
+ unsigned long freed;
+ bool unlock;
+
+ if (!i915_gem_shrinker_lock(dev, &unlock))
+ return SHRINK_STOP;
+
+ freed = i915_gem_shrink(dev_priv,
+ sc->nr_to_scan,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_PURGEABLE);
+ if (freed < sc->nr_to_scan)
+ freed += i915_gem_shrink(dev_priv,
+ sc->nr_to_scan - freed,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND);
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ return freed;
+}
+
+static int
+i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(nb, struct drm_i915_private, mm.oom_notifier);
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_i915_gem_object *obj;
+ unsigned long timeout = msecs_to_jiffies(5000) + 1;
+ unsigned long pinned, bound, unbound, freed_pages;
+ bool was_interruptible;
+ bool unlock;
+
+ while (!i915_gem_shrinker_lock(dev, &unlock) && --timeout) {
+ schedule_timeout_killable(1);
+ if (fatal_signal_pending(current))
+ return NOTIFY_DONE;
+ }
+ if (timeout == 0) {
+ pr_err("Unable to purge GPU memory due lock contention.\n");
+ return NOTIFY_DONE;
+ }
+
+ was_interruptible = dev_priv->mm.interruptible;
+ dev_priv->mm.interruptible = false;
+
+ freed_pages = i915_gem_shrink_all(dev_priv);
+
+ dev_priv->mm.interruptible = was_interruptible;
+
+ /* Because we may be allocating inside our own driver, we cannot
+ * assert that there are no objects with pinned pages that are not
+ * being pointed to by hardware.
+ */
+ unbound = bound = pinned = 0;
+ list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
+ if (!obj->base.filp) /* not backed by a freeable object */
+ continue;
+
+ if (obj->pages_pin_count)
+ pinned += obj->base.size;
+ else
+ unbound += obj->base.size;
+ }
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ if (!obj->base.filp)
+ continue;
+
+ if (obj->pages_pin_count)
+ pinned += obj->base.size;
+ else
+ bound += obj->base.size;
+ }
+
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ if (freed_pages || unbound || bound)
+ pr_info("Purging GPU memory, %lu bytes freed, %lu bytes still pinned.\n",
+ freed_pages << PAGE_SHIFT, pinned);
+ if (unbound || bound)
+ pr_err("%lu and %lu bytes still available in the "
+ "bound and unbound GPU page lists.\n",
+ bound, unbound);
+
+ *(unsigned long *)ptr += freed_pages;
+ return NOTIFY_DONE;
+}
+
+/**
+ * i915_gem_shrinker_init - Initialize i915 shrinker
+ * @dev_priv: i915 device
+ *
+ * This function registers and sets up the i915 shrinker and OOM handler.
+ */
+void i915_gem_shrinker_init(struct drm_i915_private *dev_priv)
+{
+ dev_priv->mm.shrinker.scan_objects = i915_gem_shrinker_scan;
+ dev_priv->mm.shrinker.count_objects = i915_gem_shrinker_count;
+ dev_priv->mm.shrinker.seeks = DEFAULT_SEEKS;
+ register_shrinker(&dev_priv->mm.shrinker);
+
+ dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
+ register_oom_notifier(&dev_priv->mm.oom_notifier);
+}
if (INTEL_INFO(dev)->gen >= 6) {
err_printf(m, "ERROR: 0x%08x\n", error->error);
+
+ if (INTEL_INFO(dev)->gen >= 8)
+ err_printf(m, "FAULT_TLB_DATA: 0x%08x 0x%08x\n",
+ error->fault_data1, error->fault_data0);
+
err_printf(m, "DONE_REG: 0x%08x\n", error->done_reg);
}
}
i915_error_object_free(error->semaphore_obj);
+
+ for (i = 0; i < error->vm_count; i++)
+ kfree(error->active_bo[i]);
+
kfree(error->active_bo);
+ kfree(error->active_bo_count);
+ kfree(error->pinned_bo);
+ kfree(error->pinned_bo_count);
kfree(error->overlay);
kfree(error->display);
kfree(error);
if (IS_GEN7(dev))
error->err_int = I915_READ(GEN7_ERR_INT);
+ if (INTEL_INFO(dev)->gen >= 8) {
+ error->fault_data0 = I915_READ(GEN8_FAULT_TLB_DATA0);
+ error->fault_data1 = I915_READ(GEN8_FAULT_TLB_DATA1);
+ }
+
if (IS_GEN6(dev)) {
error->forcewake = I915_READ(FORCEWAKE);
error->gab_ctl = I915_READ(GAB_CTL);
I915_WRITE(reg, dev_priv->pm_rps_events);
I915_WRITE(reg, dev_priv->pm_rps_events);
POSTING_READ(reg);
+ dev_priv->rps.pm_iir = 0;
spin_unlock_irq(&dev_priv->irq_lock);
}
__gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
I915_WRITE(gen6_pm_ier(dev_priv), I915_READ(gen6_pm_ier(dev_priv)) &
~dev_priv->pm_rps_events);
- I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);
- I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);
-
- dev_priv->rps.pm_iir = 0;
spin_unlock_irq(&dev_priv->irq_lock);
+
+ synchronize_irq(dev->irq);
}
/**
wake_up_all(&ring->irq_queue);
}
-static u32 vlv_c0_residency(struct drm_i915_private *dev_priv,
- struct intel_rps_ei *rps_ei)
+static void vlv_c0_read(struct drm_i915_private *dev_priv,
+ struct intel_rps_ei *ei)
{
- u32 cz_ts, cz_freq_khz;
- u32 render_count, media_count;
- u32 elapsed_render, elapsed_media, elapsed_time;
- u32 residency = 0;
-
- cz_ts = vlv_punit_read(dev_priv, PUNIT_REG_CZ_TIMESTAMP);
- cz_freq_khz = DIV_ROUND_CLOSEST(dev_priv->mem_freq * 1000, 4);
-
- render_count = I915_READ(VLV_RENDER_C0_COUNT_REG);
- media_count = I915_READ(VLV_MEDIA_C0_COUNT_REG);
-
- if (rps_ei->cz_clock == 0) {
- rps_ei->cz_clock = cz_ts;
- rps_ei->render_c0 = render_count;
- rps_ei->media_c0 = media_count;
-
- return dev_priv->rps.cur_freq;
- }
-
- elapsed_time = cz_ts - rps_ei->cz_clock;
- rps_ei->cz_clock = cz_ts;
+ ei->cz_clock = vlv_punit_read(dev_priv, PUNIT_REG_CZ_TIMESTAMP);
+ ei->render_c0 = I915_READ(VLV_RENDER_C0_COUNT);
+ ei->media_c0 = I915_READ(VLV_MEDIA_C0_COUNT);
+}
- elapsed_render = render_count - rps_ei->render_c0;
- rps_ei->render_c0 = render_count;
+static bool vlv_c0_above(struct drm_i915_private *dev_priv,
+ const struct intel_rps_ei *old,
+ const struct intel_rps_ei *now,
+ int threshold)
+{
+ u64 time, c0;
- elapsed_media = media_count - rps_ei->media_c0;
- rps_ei->media_c0 = media_count;
+ if (old->cz_clock == 0)
+ return false;
- /* Convert all the counters into common unit of milli sec */
- elapsed_time /= VLV_CZ_CLOCK_TO_MILLI_SEC;
- elapsed_render /= cz_freq_khz;
- elapsed_media /= cz_freq_khz;
+ time = now->cz_clock - old->cz_clock;
+ time *= threshold * dev_priv->mem_freq;
- /*
- * Calculate overall C0 residency percentage
- * only if elapsed time is non zero
+ /* Workload can be split between render + media, e.g. SwapBuffers
+ * being blitted in X after being rendered in mesa. To account for
+ * this we need to combine both engines into our activity counter.
*/
- if (elapsed_time) {
- residency =
- ((max(elapsed_render, elapsed_media) * 100)
- / elapsed_time);
- }
+ c0 = now->render_c0 - old->render_c0;
+ c0 += now->media_c0 - old->media_c0;
+ c0 *= 100 * VLV_CZ_CLOCK_TO_MILLI_SEC * 4 / 1000;
- return residency;
+ return c0 >= time;
}
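
Aside from summing render + media (per the comment above) rather than taking the max, vlv_c0_above() is the old percentage test with the divisions multiplied through: "100 * busy_ms / elapsed_ms >= threshold" becomes the exact u64 form "dc0 * 100 * CZ2MS * 4 / 1000 >= dt_ticks * threshold * mem_freq". A standalone check with assumed placeholder constants (the real CZ clock factor and mem_freq come from hardware, not these values):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define CZ_CLOCK_TO_MILLI_SEC 100000u   /* assumed, for illustration */

    static bool c0_above(uint64_t dt_ticks, uint64_t dc0, unsigned mem_freq,
                         unsigned threshold_pct)
    {
        uint64_t time = dt_ticks * threshold_pct * mem_freq;
        uint64_t c0 = dc0 * 100 * CZ_CLOCK_TO_MILLI_SEC * 4 / 1000;

        return c0 >= time;
    }

    int main(void)
    {
        /* ~90% busy over the interval vs. an 85% threshold: prints 1 */
        printf("%d\n", c0_above(1000000, 3600000, 1600, 85));
        return 0;
    }
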
-/**
- * vlv_calc_delay_from_C0_counters - Increase/Decrease freq based on GPU
- * busy-ness calculated from C0 counters of render & media power wells
- * @dev_priv: DRM device private
- *
- */
-static int vlv_calc_delay_from_C0_counters(struct drm_i915_private *dev_priv)
+void gen6_rps_reset_ei(struct drm_i915_private *dev_priv)
{
- u32 residency_C0_up = 0, residency_C0_down = 0;
- int new_delay, adj;
-
- dev_priv->rps.ei_interrupt_count++;
-
- WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
-
-
- if (dev_priv->rps.up_ei.cz_clock == 0) {
- vlv_c0_residency(dev_priv, &dev_priv->rps.up_ei);
- vlv_c0_residency(dev_priv, &dev_priv->rps.down_ei);
- return dev_priv->rps.cur_freq;
- }
+ vlv_c0_read(dev_priv, &dev_priv->rps.down_ei);
+ dev_priv->rps.up_ei = dev_priv->rps.down_ei;
+}
+static u32 vlv_wa_c0_ei(struct drm_i915_private *dev_priv, u32 pm_iir)
+{
+ struct intel_rps_ei now;
+ u32 events = 0;
- /*
- * To down throttle, C0 residency should be less than down threshold
- * for continous EI intervals. So calculate down EI counters
- * once in VLV_INT_COUNT_FOR_DOWN_EI
- */
- if (dev_priv->rps.ei_interrupt_count == VLV_INT_COUNT_FOR_DOWN_EI) {
+ if ((pm_iir & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED)) == 0)
+ return 0;
- dev_priv->rps.ei_interrupt_count = 0;
+ vlv_c0_read(dev_priv, &now);
+ if (now.cz_clock == 0)
+ return 0;
- residency_C0_down = vlv_c0_residency(dev_priv,
- &dev_priv->rps.down_ei);
- } else {
- residency_C0_up = vlv_c0_residency(dev_priv,
- &dev_priv->rps.up_ei);
+ if (pm_iir & GEN6_PM_RP_DOWN_EI_EXPIRED) {
+ if (!vlv_c0_above(dev_priv,
+ &dev_priv->rps.down_ei, &now,
+ VLV_RP_DOWN_EI_THRESHOLD))
+ events |= GEN6_PM_RP_DOWN_THRESHOLD;
+ dev_priv->rps.down_ei = now;
}
- new_delay = dev_priv->rps.cur_freq;
-
- adj = dev_priv->rps.last_adj;
- /* C0 residency is greater than UP threshold. Increase Frequency */
- if (residency_C0_up >= VLV_RP_UP_EI_THRESHOLD) {
- if (adj > 0)
- adj *= 2;
- else
- adj = 1;
-
- if (dev_priv->rps.cur_freq < dev_priv->rps.max_freq_softlimit)
- new_delay = dev_priv->rps.cur_freq + adj;
-
- /*
- * For better performance, jump directly
- * to RPe if we're below it.
- */
- if (new_delay < dev_priv->rps.efficient_freq)
- new_delay = dev_priv->rps.efficient_freq;
-
- } else if (!dev_priv->rps.ei_interrupt_count &&
- (residency_C0_down < VLV_RP_DOWN_EI_THRESHOLD)) {
- if (adj < 0)
- adj *= 2;
- else
- adj = -1;
- /*
- * This means, C0 residency is less than down threshold over
- * a period of VLV_INT_COUNT_FOR_DOWN_EI. So, reduce the freq
- */
- if (dev_priv->rps.cur_freq > dev_priv->rps.min_freq_softlimit)
- new_delay = dev_priv->rps.cur_freq + adj;
+ if (pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) {
+ if (vlv_c0_above(dev_priv,
+ &dev_priv->rps.up_ei, &now,
+ VLV_RP_UP_EI_THRESHOLD))
+ events |= GEN6_PM_RP_UP_THRESHOLD;
+ dev_priv->rps.up_ei = now;
}
- return new_delay;
+ return events;
}
static void gen6_pm_rps_work(struct work_struct *work)
mutex_lock(&dev_priv->rps.hw_lock);
+ pm_iir |= vlv_wa_c0_ei(dev_priv, pm_iir);
+
adj = dev_priv->rps.last_adj;
if (pm_iir & GEN6_PM_RP_UP_THRESHOLD) {
if (adj > 0)
else
new_delay = dev_priv->rps.min_freq_softlimit;
adj = 0;
- } else if (pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) {
- new_delay = vlv_calc_delay_from_C0_counters(dev_priv);
} else if (pm_iir & GEN6_PM_RP_DOWN_THRESHOLD) {
if (adj < 0)
adj *= 2;
* the work queue. */
static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
{
- /* TODO: RPS on GEN9+ is not supported yet. */
- if (WARN_ONCE(INTEL_INFO(dev_priv)->gen >= 9,
- "GEN9+: unexpected RPS IRQ\n"))
- return;
-
if (pm_iir & dev_priv->pm_rps_events) {
spin_lock(&dev_priv->irq_lock);
gen6_disable_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
ibx_irq_reset(dev);
}
-void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv)
+void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
+ unsigned int pipe_mask)
{
uint32_t extra_ier = GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN;
spin_lock_irq(&dev_priv->irq_lock);
- GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B, dev_priv->de_irq_mask[PIPE_B],
- ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
- GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C, dev_priv->de_irq_mask[PIPE_C],
- ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
+ if (pipe_mask & 1 << PIPE_A)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_A,
+ dev_priv->de_irq_mask[PIPE_A],
+ ~dev_priv->de_irq_mask[PIPE_A] | extra_ier);
+ if (pipe_mask & 1 << PIPE_B)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B,
+ dev_priv->de_irq_mask[PIPE_B],
+ ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
+ if (pipe_mask & 1 << PIPE_C)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C,
+ dev_priv->de_irq_mask[PIPE_C],
+ ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
spin_unlock_irq(&dev_priv->irq_lock);
}
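A hedged usage sketch for the new pipe_mask parameter; the caller shown is illustrative of a power-well enable path, not quoted from the patch.

/* Restore display IRQs for pipes B and C after their power well comes up. */
gen8_irq_power_well_post_enable(dev_priv,
				(1 << PIPE_B) | (1 << PIPE_C));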
/* Let's track the enabled rps events */
if (IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
/* WaGsvRC0ResidencyMethod:vlv */
- dev_priv->pm_rps_events = GEN6_PM_RP_UP_EI_EXPIRED;
+ dev_priv->pm_rps_events = GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED;
else
dev_priv->pm_rps_events = GEN6_PM_RPS_EVENTS;
struct i915_params i915 __read_mostly = {
.modeset = -1,
.panel_ignore_lid = 1,
- .powersave = 1,
.semaphores = -1,
.lvds_downclock = 0,
.lvds_channel_mode = 0,
.enable_ips = 1,
.fastboot = 0,
.prefault_disable = 0,
+ .load_detect_test = 0,
.reset = true,
.invert_brightness = 0,
.disable_display = 0,
"Override lid status (0=autodetect, 1=autodetect disabled [default], "
"-1=force lid closed, -2=force lid open)");
-module_param_named(powersave, i915.powersave, int, 0600);
-MODULE_PARM_DESC(powersave,
- "Enable powersavings, fbc, downclocking, etc. (default: true)");
-
module_param_named_unsafe(semaphores, i915.semaphores, int, 0400);
MODULE_PARM_DESC(semaphores,
"Use semaphores for inter-ring sync "
MODULE_PARM_DESC(fastboot,
"Try to skip unnecessary mode sets at boot time (default: false)");
-module_param_named(prefault_disable, i915.prefault_disable, bool, 0600);
+module_param_named_unsafe(prefault_disable, i915.prefault_disable, bool, 0600);
MODULE_PARM_DESC(prefault_disable,
"Disable page prefaulting for pread/pwrite/reloc (default:false). "
"For developers only.");
+module_param_named_unsafe(load_detect_test, i915.load_detect_test, bool, 0600);
+MODULE_PARM_DESC(load_detect_test,
+ "Force-enable the VGA load detect code for testing (default:false). "
+ "For developers only.");
+
module_param_named(invert_brightness, i915.invert_brightness, int, 0600);
MODULE_PARM_DESC(invert_brightness,
"Invert backlight brightness "
MODULE_PARM_DESC(use_mmio_flip,
"use MMIO flips (-1=never, 0=driver discretion [default], 1=always)");
-module_param_named(mmio_debug, i915.mmio_debug, bool, 0600);
+module_param_named(mmio_debug, i915.mmio_debug, int, 0600);
MODULE_PARM_DESC(mmio_debug,
- "Enable the MMIO debug code (default: false). This may negatively "
- "affect performance.");
+ "Enable the MMIO debug code for the first N failures (default: off). "
+ "This may negatively affect performance.");
module_param_named(verbose_state_checks, i915.verbose_state_checks, bool, 0600);
MODULE_PARM_DESC(verbose_state_checks,
#define DSPFREQSTAT_MASK (0x3 << DSPFREQSTAT_SHIFT)
#define DSPFREQGUAR_SHIFT 14
#define DSPFREQGUAR_MASK (0x3 << DSPFREQGUAR_SHIFT)
+#define DSP_MAXFIFO_PM5_STATUS (1 << 22) /* chv */
+#define DSP_AUTO_CDCLK_GATE_DISABLE (1 << 7) /* chv */
+#define DSP_MAXFIFO_PM5_ENABLE (1 << 6) /* chv */
#define _DP_SSC(val, pipe) ((val) << (2 * (pipe)))
#define DP_SSC_MASK(pipe) _DP_SSC(0x3, (pipe))
#define DP_SSC_PWR_ON(pipe) _DP_SSC(0x0, (pipe))
#define FB_GFX_FMIN_AT_VMIN_FUSE 0x137
#define FB_GFX_FMIN_AT_VMIN_FUSE_SHIFT 8
+#define PUNIT_REG_DDR_SETUP2 0x139
+#define FORCE_DDR_FREQ_REQ_ACK (1 << 8)
+#define FORCE_DDR_LOW_FREQ (1 << 1)
+#define FORCE_DDR_HIGH_FREQ (1 << 0)
+
#define PUNIT_GPU_STATUS_REG 0xdb
#define PUNIT_GPU_STATUS_MAX_FREQ_SHIFT 16
#define PUNIT_GPU_STATUS_MAX_FREQ_MASK 0xff
#define VLV_CZ_CLOCK_TO_MILLI_SEC 100000
#define VLV_RP_UP_EI_THRESHOLD 90
#define VLV_RP_DOWN_EI_THRESHOLD 70
-#define VLV_INT_COUNT_FOR_DOWN_EI 5
/* vlv2 north clock has */
#define CCK_FUSE_REG 0x8
#define DPIO_CHV_FIRST_MOD (0 << 8)
#define DPIO_CHV_SECOND_MOD (1 << 8)
#define DPIO_CHV_FEEDFWD_GAIN_SHIFT 0
+#define DPIO_CHV_FEEDFWD_GAIN_MASK (0xF << 0)
#define CHV_PLL_DW3(ch) _PIPE(ch, _CHV_PLL_DW3_CH0, _CHV_PLL_DW3_CH1)
#define _CHV_PLL_DW6_CH0 0x8018
#define _CHV_PLL_DW8_CH0 0x8020
#define _CHV_PLL_DW8_CH1 0x81A0
+#define DPIO_CHV_TDC_TARGET_CNT_SHIFT 0
+#define DPIO_CHV_TDC_TARGET_CNT_MASK (0x3FF << 0)
#define CHV_PLL_DW8(ch) _PIPE(ch, _CHV_PLL_DW8_CH0, _CHV_PLL_DW8_CH1)
#define _CHV_PLL_DW9_CH0 0x8024
#define _CHV_PLL_DW9_CH1 0x81A4
#define DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT 1 /* 3 bits */
+#define DPIO_CHV_INT_LOCK_THRESHOLD_MASK (7 << 1)
#define DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE 1 /* 1: coarse & 0 : fine */
#define CHV_PLL_DW9(ch) _PIPE(ch, _CHV_PLL_DW9_CH0, _CHV_PLL_DW9_CH1)
#define ERR_INT_FIFO_UNDERRUN_A (1<<0)
#define ERR_INT_FIFO_UNDERRUN(pipe) (1<<(pipe*3))
+#define GEN8_FAULT_TLB_DATA0 0x04b10
+#define GEN8_FAULT_TLB_DATA1 0x04b14
+
#define FPGA_DBG 0x42300
#define FPGA_DBG_RM_NOCLAIM (1<<31)
/* Fuse readout registers for GT */
#define CHV_FUSE_GT (VLV_DISPLAY_BASE + 0x2168)
+#define CHV_FGT_DISABLE_SS0 (1 << 10)
+#define CHV_FGT_DISABLE_SS1 (1 << 11)
#define CHV_FGT_EU_DIS_SS0_R0_SHIFT 16
#define CHV_FGT_EU_DIS_SS0_R0_MASK (0xf << CHV_FGT_EU_DIS_SS0_R0_SHIFT)
#define CHV_FGT_EU_DIS_SS0_R1_SHIFT 20
#define CDCLK_FREQ_SHIFT 4
#define CDCLK_FREQ_MASK (0x1f << CDCLK_FREQ_SHIFT)
#define CZCLK_FREQ_MASK 0xf
+
+#define GCI_CONTROL (VLV_DISPLAY_BASE + 0x650C)
+#define PFI_CREDIT_63 (9 << 28) /* chv only */
+#define PFI_CREDIT_31 (8 << 28) /* chv only */
+#define PFI_CREDIT(x) (((x) - 8) << 28) /* 8-15 */
+#define PFI_CREDIT_RESEND (1 << 27)
+#define VGA_FAST_MODE_DISABLE (1 << 14)
+
#define GMBUSFREQ_VLV (VLV_DISPLAY_BASE + 0x6510)
/*
#define GEN6_RP_STATE_LIMITS (MCHBAR_MIRROR_BASE_SNB + 0x5994)
#define GEN6_RP_STATE_CAP (MCHBAR_MIRROR_BASE_SNB + 0x5998)
+#define INTERVAL_1_28_US(us) (((us) * 100) >> 7)
+#define INTERVAL_1_33_US(us) (((us) * 3) >> 2)
+#define GT_INTERVAL_FROM_US(dev_priv, us) (IS_GEN9(dev_priv) ? \
+ INTERVAL_1_33_US(us) : \
+ INTERVAL_1_28_US(us))
+
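The two helpers above convert a microsecond value into GT timestamp ticks: pre-gen9 counters tick every 1.28 us (us * 100 / 128), gen9 every 1.33 us (us * 3 / 4). A quick worked example:

/*
 * Example: a 10 us interval.
 *   INTERVAL_1_28_US(10) = (10 * 100) >> 7 = 1000 / 128 = 7 ticks
 *   INTERVAL_1_33_US(10) = (10 * 3) >> 2   =   30 / 4   = 7 ticks
 * Both round down via the shift.
 */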
/*
* Logical Context regs
*/
/* Video Data Island Packet control */
#define VIDEO_DIP_DATA 0x61178
-/* Read the description of VIDEO_DIP_DATA (before Haswel) or VIDEO_DIP_ECC
+/* Read the description of VIDEO_DIP_DATA (before Haswell) or VIDEO_DIP_ECC
* (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte
* of the infoframe structure specified by CEA-861. */
#define VIDEO_DIP_DATA_SIZE 32
#define DPINVGTT_STATUS_MASK 0xff
#define DPINVGTT_STATUS_MASK_CHV 0xfff
-#define DSPARB 0x70030
+#define DSPARB (dev_priv->info.display_mmio_offset + 0x70030)
#define DSPARB_CSTART_MASK (0x7f << 7)
#define DSPARB_CSTART_SHIFT 7
#define DSPARB_BSTART_MASK (0x7f)
#define DSPARB_BEND_SHIFT 9 /* on 855 */
#define DSPARB_AEND_SHIFT 0
+#define DSPARB2 (VLV_DISPLAY_BASE + 0x70060) /* vlv/chv */
+#define DSPARB3 (VLV_DISPLAY_BASE + 0x7006c) /* chv */
+
/* pnv/gen4/g4x/vlv/chv */
#define DSPFW1 (dev_priv->info.display_mmio_offset + 0x70034)
#define DSPFW_SR_SHIFT 23
#define DSPFW_SPRITEB_MASK_VLV (0xff<<16) /* vlv/chv */
#define DSPFW_CURSORA_SHIFT 8
#define DSPFW_CURSORA_MASK (0x3f<<8)
-#define DSPFW_PLANEC_SHIFT_OLD 0
-#define DSPFW_PLANEC_MASK_OLD (0x7f<<0) /* pre-gen4 sprite C */
+#define DSPFW_PLANEC_OLD_SHIFT 0
+#define DSPFW_PLANEC_OLD_MASK (0x7f<<0) /* pre-gen4 sprite C */
#define DSPFW_SPRITEA_SHIFT 0
#define DSPFW_SPRITEA_MASK (0x7f<<0) /* g4x */
#define DSPFW_SPRITEA_MASK_VLV (0xff<<0) /* vlv/chv */
#define DSPFW_SPRITED_WM1_SHIFT 24
#define DSPFW_SPRITED_WM1_MASK (0xff<<24)
#define DSPFW_SPRITED_SHIFT 16
-#define DSPFW_SPRITED_MASK (0xff<<16)
+#define DSPFW_SPRITED_MASK_VLV (0xff<<16)
#define DSPFW_SPRITEC_WM1_SHIFT 8
#define DSPFW_SPRITEC_WM1_MASK (0xff<<8)
#define DSPFW_SPRITEC_SHIFT 0
-#define DSPFW_SPRITEC_MASK (0xff<<0)
+#define DSPFW_SPRITEC_MASK_VLV (0xff<<0)
#define DSPFW8_CHV (VLV_DISPLAY_BASE + 0x700b8)
#define DSPFW_SPRITEF_WM1_SHIFT 24
#define DSPFW_SPRITEF_WM1_MASK (0xff<<24)
#define DSPFW_SPRITEF_SHIFT 16
-#define DSPFW_SPRITEF_MASK (0xff<<16)
+#define DSPFW_SPRITEF_MASK_VLV (0xff<<16)
#define DSPFW_SPRITEE_WM1_SHIFT 8
#define DSPFW_SPRITEE_WM1_MASK (0xff<<8)
#define DSPFW_SPRITEE_SHIFT 0
-#define DSPFW_SPRITEE_MASK (0xff<<0)
+#define DSPFW_SPRITEE_MASK_VLV (0xff<<0)
#define DSPFW9_CHV (VLV_DISPLAY_BASE + 0x7007c) /* wtf #2? */
#define DSPFW_PLANEC_WM1_SHIFT 24
#define DSPFW_PLANEC_WM1_MASK (0xff<<24)
#define DSPFW_PLANEC_SHIFT 16
-#define DSPFW_PLANEC_MASK (0xff<<16)
+#define DSPFW_PLANEC_MASK_VLV (0xff<<16)
#define DSPFW_CURSORC_WM1_SHIFT 8
#define DSPFW_CURSORC_WM1_MASK (0x3f<<16)
#define DSPFW_CURSORC_SHIFT 0
/* vlv/chv high order bits */
#define DSPHOWM (VLV_DISPLAY_BASE + 0x70064)
#define DSPFW_SR_HI_SHIFT 24
-#define DSPFW_SR_HI_MASK (1<<24)
+#define DSPFW_SR_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
#define DSPFW_SPRITEF_HI_SHIFT 23
#define DSPFW_SPRITEF_HI_MASK (1<<23)
#define DSPFW_SPRITEE_HI_SHIFT 22
#define DSPFW_PLANEA_HI_MASK (1<<0)
#define DSPHOWM1 (VLV_DISPLAY_BASE + 0x70068)
#define DSPFW_SR_WM1_HI_SHIFT 24
-#define DSPFW_SR_WM1_HI_MASK (1<<24)
+#define DSPFW_SR_WM1_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
#define DSPFW_SPRITEF_WM1_HI_SHIFT 23
#define DSPFW_SPRITEF_WM1_HI_MASK (1<<23)
#define DSPFW_SPRITEE_WM1_HI_SHIFT 22
#define DSPFW_PLANEA_WM1_HI_MASK (1<<0)
/* drain latency register values*/
-#define DRAIN_LATENCY_PRECISION_16 16
-#define DRAIN_LATENCY_PRECISION_32 32
-#define DRAIN_LATENCY_PRECISION_64 64
#define VLV_DDL(pipe) (VLV_DISPLAY_BASE + 0x70050 + 4 * (pipe))
-#define DDL_CURSOR_PRECISION_HIGH (1<<31)
-#define DDL_CURSOR_PRECISION_LOW (0<<31)
#define DDL_CURSOR_SHIFT 24
-#define DDL_SPRITE_PRECISION_HIGH(sprite) (1<<(15+8*(sprite)))
-#define DDL_SPRITE_PRECISION_LOW(sprite) (0<<(15+8*(sprite)))
#define DDL_SPRITE_SHIFT(sprite) (8+8*(sprite))
-#define DDL_PLANE_PRECISION_HIGH (1<<7)
-#define DDL_PLANE_PRECISION_LOW (0<<7)
#define DDL_PLANE_SHIFT 0
+#define DDL_PRECISION_HIGH (1<<7)
+#define DDL_PRECISION_LOW (0<<7)
#define DRAIN_LATENCY_MASK 0x7f
+#define CBR1_VLV (VLV_DISPLAY_BASE + 0x70400)
+#define CBR_PND_DEADLINE_DISABLE (1<<31)
+
/* FIFO watermark sizes etc */
#define G4X_FIFO_LINE_SIZE 64
#define I915_FIFO_LINE_SIZE 64
#define GEN6_TURBO_DISABLE (1<<31)
#define GEN6_FREQUENCY(x) ((x)<<25)
#define HSW_FREQUENCY(x) ((x)<<24)
+#define GEN9_FREQUENCY(x) ((x)<<23)
#define GEN6_OFFSET(x) ((x)<<19)
#define GEN6_AGGRESSIVE_TURBO (0<<15)
#define GEN6_RC_VIDEO_FREQ 0xA00C
#define GEN6_RPSTAT1 0xA01C
#define GEN6_CAGF_SHIFT 8
#define HSW_CAGF_SHIFT 7
+#define GEN9_CAGF_SHIFT 23
#define GEN6_CAGF_MASK (0x7f << GEN6_CAGF_SHIFT)
#define HSW_CAGF_MASK (0x7f << HSW_CAGF_SHIFT)
+#define GEN9_CAGF_MASK (0x1ff << GEN9_CAGF_SHIFT)
#define GEN6_RP_CONTROL 0xA024
#define GEN6_RP_MEDIA_TURBO (1<<11)
#define GEN6_RP_MEDIA_MODE_MASK (3<<9)
#define GEN6_GT_GFX_RC6p 0x13810C
#define GEN6_GT_GFX_RC6pp 0x138110
-#define VLV_RENDER_C0_COUNT_REG 0x138118
-#define VLV_MEDIA_C0_COUNT_REG 0x13811C
+#define VLV_RENDER_C0_COUNT 0x138118
+#define VLV_MEDIA_C0_COUNT 0x13811C
#define GEN6_PCODE_MAILBOX 0x138124
#define GEN6_PCODE_READY (1<<31)
#define GEN6_RC6 3
#define GEN6_RC7 4
+#define CHV_POWER_SS0_SIG1 0xa720
+#define CHV_POWER_SS1_SIG1 0xa728
+#define CHV_SS_PG_ENABLE (1<<1)
+#define CHV_EU08_PG_ENABLE (1<<9)
+#define CHV_EU19_PG_ENABLE (1<<17)
+#define CHV_EU210_PG_ENABLE (1<<25)
+
+#define CHV_POWER_SS0_SIG2 0xa724
+#define CHV_POWER_SS1_SIG2 0xa72c
+#define CHV_EU311_PG_ENABLE (1<<1)
+
#define GEN9_SLICE0_PGCTL_ACK 0x804c
#define GEN9_SLICE1_PGCTL_ACK 0x8050
#define GEN9_SLICE2_PGCTL_ACK 0x8054
ret = intel_gpu_freq(dev_priv, (freq >> 8) & 0xff);
} else {
u32 rpstat = I915_READ(GEN6_RPSTAT1);
- if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
+ if (IS_GEN9(dev_priv))
+ ret = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
+ else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
ret = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
else
ret = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
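/*
 * For reference, the gen-specific CAGF (current actual GPU frequency)
 * field layouts decoded here, per the *_CAGF_* defines added earlier
 * in this series:
 *   GEN6:    bits 14:8  (0x7f  << 8)
 *   HSW/BDW: bits 13:7  (0x7f  << 7)
 *   GEN9:    bits 31:23 (0x1ff << 23)
 */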
__entry->obj, __entry->offset, __entry->size, __entry->vm)
);
+#define VM_TO_TRACE_NAME(vm) \
+ (i915_is_ggtt(vm) ? "G" : \
+ "P")
+
+DECLARE_EVENT_CLASS(i915_va,
+ TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
+ TP_ARGS(vm, start, length, name),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u64, start)
+ __field(u64, end)
+ __string(name, name)
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->start = start;
+ __entry->end = start + length - 1;
+ __assign_str(name, name);
+ ),
+
+ TP_printk("vm=%p (%s), 0x%llx-0x%llx",
+ __entry->vm, __get_str(name), __entry->start, __entry->end)
+);
+
+DEFINE_EVENT(i915_va, i915_va_alloc,
+ TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
+ TP_ARGS(vm, start, length, name)
+);
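A hedged usage sketch for the tracepoint above; the vma variable and call site are illustrative, not quoted from the patch.

/* Emit the event when a VA range is bound into an address space. */
trace_i915_va_alloc(vma->vm, vma->node.start, vma->node.size,
		    VM_TO_TRACE_NAME(vma->vm));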
+
+DECLARE_EVENT_CLASS(i915_page_table_entry,
+ TP_PROTO(struct i915_address_space *vm, u32 pde, u64 start, u64 pde_shift),
+ TP_ARGS(vm, pde, start, pde_shift),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u32, pde)
+ __field(u64, start)
+ __field(u64, end)
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->pde = pde;
+ __entry->start = start;
+ __entry->end = ((start + (1ULL << pde_shift)) & ~((1ULL << pde_shift)-1)) - 1;
+ ),
+
+ TP_printk("vm=%p, pde=%d (0x%llx-0x%llx)",
+ __entry->vm, __entry->pde, __entry->start, __entry->end)
+);
+
+DEFINE_EVENT(i915_page_table_entry, i915_page_table_entry_alloc,
+ TP_PROTO(struct i915_address_space *vm, u32 pde, u64 start, u64 pde_shift),
+ TP_ARGS(vm, pde, start, pde_shift)
+);
+
+/* Avoid extra math because we only support two sizes. The format is defined by
+ * bitmap_scnprintf. Each 32 bits is 8 hex digits followed by a comma. */
+#define TRACE_PT_SIZE(bits) \
+ ((((bits) == 1024) ? 288 : 144) + 1)
+
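The size bound follows from the %pb bitmap format noted above: each 32-bit word prints as 8 hex digits plus a comma, i.e. 9 characters per word.

/*
 * Worked example:
 *   1024 bits -> 32 words * 9 chars = 288
 *    512 bits -> 16 words * 9 chars = 144 (the other supported size)
 * plus 1 for the terminating NUL.
 */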
+DECLARE_EVENT_CLASS(i915_page_table_entry_update,
+ TP_PROTO(struct i915_address_space *vm, u32 pde,
+ struct i915_page_table_entry *pt, u32 first, u32 count, u32 bits),
+ TP_ARGS(vm, pde, pt, first, count, bits),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u32, pde)
+ __field(u32, first)
+ __field(u32, last)
+ __dynamic_array(char, cur_ptes, TRACE_PT_SIZE(bits))
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->pde = pde;
+ __entry->first = first;
+ __entry->last = first + count - 1;
+ scnprintf(__get_str(cur_ptes),
+ TRACE_PT_SIZE(bits),
+ "%*pb",
+ bits,
+ pt->used_ptes);
+ ),
+
+ TP_printk("vm=%p, pde=%d, updating %u:%u\t%s",
+ __entry->vm, __entry->pde, __entry->last, __entry->first,
+ __get_str(cur_ptes))
+);
+
+DEFINE_EVENT(i915_page_table_entry_update, i915_page_table_entry_map,
+ TP_PROTO(struct i915_address_space *vm, u32 pde,
+ struct i915_page_table_entry *pt, u32 first, u32 count, u32 bits),
+ TP_ARGS(vm, pde, pt, first, count, bits)
+);
+
TRACE_EVENT(i915_gem_object_change_domain,
TP_PROTO(struct drm_i915_gem_object *obj, u32 old_read, u32 old_write),
TP_ARGS(obj, old_read, old_write),
* The following structure pages are defined in GEN MMIO space
* for virtualization. (One page for now)
*/
-#define VGT_MAGIC 0x4776544776544776 /* 'vGTvGTvG' */
+#define VGT_MAGIC 0x4776544776544776ULL /* 'vGTvGTvG' */
#define VGT_VERSION_MAJOR 1
#define VGT_VERSION_MINOR 0
intel_crtc_duplicate_state(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct intel_crtc_state *crtc_state;
if (WARN_ON(!intel_crtc->config))
- return kzalloc(sizeof(*intel_crtc->config), GFP_KERNEL);
+ crtc_state = kzalloc(sizeof(*crtc_state), GFP_KERNEL);
+ else
+ crtc_state = kmemdup(intel_crtc->config,
+ sizeof(*intel_crtc->config), GFP_KERNEL);
- return kmemdup(intel_crtc->config, sizeof(*intel_crtc->config),
- GFP_KERNEL);
+ if (crtc_state)
+ crtc_state->base.crtc = crtc;
+
+ return &crtc_state->base;
}
/**
* broken monitor (without edid) to work behind a broken kvm (that fails
* to have the right resistors for HP detection) needs to fix this up.
* For now just bail out. */
- if (I915_HAS_HOTPLUG(dev)) {
+ if (I915_HAS_HOTPLUG(dev) && !i915.load_detect_test) {
status = connector_status_disconnected;
goto out;
}
if (intel_get_load_detect_pipe(connector, NULL, &tmp, &ctx)) {
if (intel_crt_detect_ddc(connector))
status = connector_status_connected;
- else
+ else if (INTEL_INFO(dev)->gen < 4)
status = intel_crt_load_detect(crt);
- intel_release_load_detect_pipe(connector, &tmp);
+ else
+ status = connector_status_unknown;
+ intel_release_load_detect_pipe(connector, &tmp, &ctx);
} else
status = connector_status_unknown;
.destroy = intel_crt_destroy,
.set_property = intel_crt_set_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_get_property = intel_connector_atomic_get_property,
};
static const struct ddi_buf_trans skl_ddi_translations_hdmi[] = {
/* Idx NT mV T mV db */
- { 0x00000018, 0x000000a0 }, /* 0: 400 400 0 */
- { 0x00004014, 0x00000098 }, /* 1: 400 600 3.5 */
- { 0x00006012, 0x00000088 }, /* 2: 400 800 6 */
- { 0x00000018, 0x0000003c }, /* 3: 450 450 0 */
- { 0x00000018, 0x00000098 }, /* 4: 600 600 0 */
- { 0x00003015, 0x00000088 }, /* 5: 600 800 2.5 */
- { 0x00005013, 0x00000080 }, /* 6: 600 1000 4.5 */
- { 0x00000018, 0x00000088 }, /* 7: 800 800 0 */
- { 0x00000096, 0x00000080 }, /* 8: 800 1000 2 */
- { 0x00000018, 0x00000080 }, /* 9: 1200 1200 0 */
+ { 0x00004014, 0x00000087 }, /* 0: 800 1000 2 */
};
enum port intel_ddi_get_encoder_port(struct intel_encoder *intel_encoder)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 reg;
- int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_800mV_0dB,
+ int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
size;
int hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
const struct ddi_buf_trans *ddi_translations_fdi;
n_edp_entries = ARRAY_SIZE(skl_ddi_translations_dp);
}
+ /*
+ * On SKL, the recommendation from the hw team is to always use
+ * a certain type of level shifter (and thus the corresponding
+ * 800mV+2dB entry). Given that's the only validated entry, we
+ * override what is in the VBT, at least until further notice.
+ */
+ hdmi_level = 0;
ddi_translations_hdmi = skl_ddi_translations_hdmi;
n_hdmi_entries = ARRAY_SIZE(skl_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 0;
} else if (IS_BROADWELL(dev)) {
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 7;
} else if (IS_HASWELL(dev)) {
ddi_translations_fdi = hsw_ddi_translations_fdi;
ddi_translations_dp = hsw_ddi_translations_dp;
ddi_translations_hdmi = hsw_ddi_translations_hdmi;
n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 6;
+ hdmi_default_entry = 6;
} else {
WARN(1, "ddi translation table missing\n");
ddi_translations_edp = bdw_ddi_translations_dp;
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 7;
}
switch (port) {
/* Choose a good default if VBT is badly populated */
if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
hdmi_level >= n_hdmi_entries)
- hdmi_level = hdmi_800mV_0dB;
+ hdmi_level = hdmi_default_entry;
/* Entry 9 is for HDMI: */
I915_WRITE(reg, ddi_translations_hdmi[hdmi_level].trans1);
}
static struct intel_encoder *
-intel_ddi_get_crtc_new_encoder(struct intel_crtc *crtc)
+intel_ddi_get_crtc_new_encoder(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
- struct intel_encoder *intel_encoder, *ret = NULL;
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
+ struct intel_encoder *ret = NULL;
+ struct drm_atomic_state *state;
int num_encoders = 0;
+ int i;
- for_each_intel_encoder(dev, intel_encoder) {
- if (intel_encoder->new_crtc == crtc) {
- ret = intel_encoder;
- num_encoders++;
- }
+ state = crtc_state->base.state;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i] ||
+ state->connector_states[i]->crtc != crtc_state->base.crtc)
+ continue;
+
+ ret = to_intel_encoder(state->connector_states[i]->best_encoder);
+ num_encoders++;
}
WARN(num_encoders != 1, "%d encoders on crtc for pipe %c\n", num_encoders,
case DPLL_CRTL1_LINK_RATE_810:
link_clock = 81000;
break;
+ case DPLL_CRTL1_LINK_RATE_1080:
+ link_clock = 108000;
+ break;
case DPLL_CRTL1_LINK_RATE_1350:
link_clock = 135000;
break;
+ case DPLL_CRTL1_LINK_RATE_1620:
+ link_clock = 162000;
+ break;
+ case DPLL_CRTL1_LINK_RATE_2160:
+ link_clock = 216000;
+ break;
case DPLL_CRTL1_LINK_RATE_2700:
link_clock = 270000;
break;
{
struct drm_device *dev = intel_crtc->base.dev;
struct intel_encoder *intel_encoder =
- intel_ddi_get_crtc_new_encoder(intel_crtc);
+ intel_ddi_get_crtc_new_encoder(crtc_state);
int clock = crtc_state->port_clock;
if (IS_SKYLAKE(dev))
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_trace.h"
+#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_crtc_helper.h>
struct intel_crtc_state *pipe_config);
static int intel_set_mode(struct drm_crtc *crtc, struct drm_display_mode *mode,
- int x, int y, struct drm_framebuffer *old_fb);
+ int x, int y, struct drm_framebuffer *old_fb,
+ struct drm_atomic_state *state);
static int intel_framebuffer_init(struct drm_device *dev,
struct intel_framebuffer *ifb,
struct drm_mode_fb_cmd2 *mode_cmd,
* them would make no difference.
*/
.dot = { .min = 25000 * 5, .max = 540000 * 5},
- .vco = { .min = 4860000, .max = 6480000 },
+ .vco = { .min = 4800000, .max = 6480000 },
.n = { .min = 1, .max = 1 },
.m1 = { .min = 2, .max = 2 },
.m2 = { .min = 24 << 22, .max = 175 << 22 },
* intel_pipe_has_type() but looking at encoder->new_crtc instead of
* encoder->crtc.
*/
-static bool intel_pipe_will_have_type(struct intel_crtc *crtc, int type)
+static bool intel_pipe_will_have_type(const struct intel_crtc_state *crtc_state,
+ int type)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
+ int i, num_connectors = 0;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ num_connectors++;
- for_each_intel_encoder(dev, encoder)
- if (encoder->new_crtc == crtc && encoder->type == type)
+ encoder = to_intel_encoder(connector_state->best_encoder);
+ if (encoder->type == type)
return true;
+ }
+
+ WARN_ON(num_connectors == 0);
return false;
}
-static const intel_limit_t *intel_ironlake_limit(struct intel_crtc *crtc,
- int refclk)
+static const intel_limit_t *
+intel_ironlake_limit(struct intel_crtc_state *crtc_state, int refclk)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev)) {
if (refclk == 100000)
limit = &intel_limits_ironlake_dual_lvds_100m;
return limit;
}
-static const intel_limit_t *intel_g4x_limit(struct intel_crtc *crtc)
+static const intel_limit_t *
+intel_g4x_limit(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev))
limit = &intel_limits_g4x_dual_channel_lvds;
else
limit = &intel_limits_g4x_single_channel_lvds;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_HDMI) ||
- intel_pipe_will_have_type(crtc, INTEL_OUTPUT_ANALOG)) {
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_HDMI) ||
+ intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_ANALOG)) {
limit = &intel_limits_g4x_hdmi;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_SDVO)) {
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_SDVO)) {
limit = &intel_limits_g4x_sdvo;
} else /* The option is for other outputs */
limit = &intel_limits_i9xx_sdvo;
return limit;
}
-static const intel_limit_t *intel_limit(struct intel_crtc *crtc, int refclk)
+static const intel_limit_t *
+intel_limit(struct intel_crtc_state *crtc_state, int refclk)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
if (HAS_PCH_SPLIT(dev))
- limit = intel_ironlake_limit(crtc, refclk);
+ limit = intel_ironlake_limit(crtc_state, refclk);
else if (IS_G4X(dev)) {
- limit = intel_g4x_limit(crtc);
+ limit = intel_g4x_limit(crtc_state);
} else if (IS_PINEVIEW(dev)) {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_pineview_lvds;
else
limit = &intel_limits_pineview_sdvo;
} else if (IS_VALLEYVIEW(dev)) {
limit = &intel_limits_vlv;
} else if (!IS_GEN2(dev)) {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i9xx_lvds;
else
limit = &intel_limits_i9xx_sdvo;
} else {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i8xx_lvds;
- else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_DVO))
+ else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_DVO))
limit = &intel_limits_i8xx_dvo;
else
limit = &intel_limits_i8xx_dac;
}
static bool
-i9xx_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+i9xx_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int err = target;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
/*
* For LVDS just rely on its current settings for dual-channel.
* We haven't figured out how to reliably set up different
}
static bool
-pnv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+pnv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int err = target;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
/*
* For LVDS just rely on its current settings for dual-channel.
* We haven't figured out how to reliably set up different
}
static bool
-g4x_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+g4x_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int max_n;
int err_most = (target >> 8) + (target >> 9);
found = false;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev))
clock.p2 = limit->p2.p2_fast;
else
return found;
}
+/*
+ * Check if the calculated PLL configuration is better than the best
+ * configuration and error found so far. Returns whether it is; the
+ * calculated error is reported via @error_ppm.
+ */
+static bool vlv_PLL_is_optimal(struct drm_device *dev, int target_freq,
+ const intel_clock_t *calculated_clock,
+ const intel_clock_t *best_clock,
+ unsigned int best_error_ppm,
+ unsigned int *error_ppm)
+{
+ /*
+ * For CHV ignore the error and consider only the P value.
+ * Prefer a bigger P value based on HW requirements.
+ */
+ if (IS_CHERRYVIEW(dev)) {
+ *error_ppm = 0;
+
+ return calculated_clock->p > best_clock->p;
+ }
+
+ if (WARN_ON_ONCE(!target_freq))
+ return false;
+
+ *error_ppm = div_u64(1000000ULL *
+ abs(target_freq - calculated_clock->dot),
+ target_freq);
+ /*
+ * Prefer a better P value over a better (smaller) error if the error
+ * is small. Ensure this preference for future configurations too by
+ * setting the error to 0.
+ */
+ if (*error_ppm < 100 && calculated_clock->p > best_clock->p) {
+ *error_ppm = 0;
+
+ return true;
+ }
+
+ return *error_ppm + 10 < best_error_ppm;
+}
+
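To make the ppm comparison concrete, a worked example with illustrative numbers:

/*
 * Example: target_freq = 270000 kHz, calculated dot = 269892 kHz.
 *   error_ppm = 1000000 * |270000 - 269892| / 270000
 *             = 1000000 * 108 / 270000 = 400 ppm
 * Against an initial best_error_ppm of 1000000, 400 + 10 < 1000000,
 * so this clock becomes the new best.
 */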
static bool
-vlv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+vlv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
unsigned int bestppm = 1000000;
clock.p = clock.p1 * clock.p2;
/* based on hardware requirement, prefer bigger m1,m2 values */
for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; clock.m1++) {
- unsigned int ppm, diff;
+ unsigned int ppm;
clock.m2 = DIV_ROUND_CLOSEST(target * clock.p * clock.n,
refclk * clock.m1);
&clock))
continue;
- diff = abs(clock.dot - target);
- ppm = div_u64(1000000ULL * diff, target);
-
- if (ppm < 100 && clock.p > best_clock->p) {
- bestppm = 0;
- *best_clock = clock;
- found = true;
- }
+ if (!vlv_PLL_is_optimal(dev, target,
+ &clock,
+ best_clock,
+ bestppm, &ppm))
+ continue;
- if (bestppm >= 10 && ppm < bestppm - 10) {
- bestppm = ppm;
- *best_clock = clock;
- found = true;
- }
+ *best_clock = clock;
+ bestppm = ppm;
+ found = true;
}
}
}
}
static bool
-chv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+chv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
+ unsigned int best_error_ppm;
intel_clock_t clock;
uint64_t m2;
int found = false;
memset(best_clock, 0, sizeof(*best_clock));
+ best_error_ppm = 1000000;
/*
* Based on hardware doc, the n always set to 1, and m1 always
for (clock.p2 = limit->p2.p2_fast;
clock.p2 >= limit->p2.p2_slow;
clock.p2 -= clock.p2 > 10 ? 2 : 1) {
+ unsigned int error_ppm;
clock.p = clock.p1 * clock.p2;
if (!intel_PLL_is_valid(dev, limit, &clock))
continue;
- /* based on hardware requirement, prefer bigger p
- */
- if (clock.p > best_clock->p) {
- *best_clock = clock;
- found = true;
- }
+ if (!vlv_PLL_is_optimal(dev, target, &clock, best_clock,
+ best_error_ppm, &error_ppm))
+ continue;
+
+ *best_clock = clock;
+ best_error_ppm = error_ppm;
+ found = true;
}
}
*
* We can ditch the crtc->primary->fb check as soon as we can
* properly reconstruct framebuffers.
+ *
+ * FIXME: The intel_crtc->active here should be switched to
+ * crtc->state->active once we have proper CRTC states wired up
+ * for atomic.
*/
- return intel_crtc->active && crtc->primary->fb &&
+ return intel_crtc->active && crtc->primary->state->fb &&
intel_crtc->config->base.adjusted_mode.crtc_clock;
}
u32 val;
if (INTEL_INFO(dev)->gen >= 9) {
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
val = I915_READ(PLANE_CTL(pipe, sprite));
I915_STATE_WARN(val & PLANE_CTL_ENABLE,
"plane %d assertion failure, should be off on pipe %c but is still active\n",
sprite, pipe_name(pipe));
}
} else if (IS_VALLEYVIEW(dev)) {
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
reg = SPCNTR(pipe, sprite);
val = I915_READ(reg);
I915_STATE_WARN(val & SP_ENABLE,
return false;
}
-int
-intel_fb_align_height(struct drm_device *dev, int height,
- uint32_t pixel_format,
- uint64_t fb_format_modifier)
+unsigned int
+intel_tile_height(struct drm_device *dev, uint32_t pixel_format,
+ uint64_t fb_format_modifier)
{
- int tile_height;
- uint32_t bits_per_pixel;
+ unsigned int tile_height;
+ uint32_t pixel_bytes;
switch (fb_format_modifier) {
case DRM_FORMAT_MOD_NONE:
tile_height = 32;
break;
case I915_FORMAT_MOD_Yf_TILED:
- bits_per_pixel = drm_format_plane_cpp(pixel_format, 0) * 8;
- switch (bits_per_pixel) {
+ pixel_bytes = drm_format_plane_cpp(pixel_format, 0);
+ switch (pixel_bytes) {
default:
- case 8:
+ case 1:
tile_height = 64;
break;
- case 16:
- case 32:
+ case 2:
+ case 4:
tile_height = 32;
break;
- case 64:
+ case 8:
tile_height = 16;
break;
- case 128:
+ case 16:
WARN_ONCE(1,
"128-bit pixels are not supported for display!");
tile_height = 16;
break;
}
- return ALIGN(height, tile_height);
+ return tile_height;
+}
+
+unsigned int
+intel_fb_align_height(struct drm_device *dev, unsigned int height,
+ uint32_t pixel_format, uint64_t fb_format_modifier)
+{
+ return ALIGN(height, intel_tile_height(dev, pixel_format,
+ fb_format_modifier));
+}
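A worked example of the helpers above, assuming a Yf-tiled framebuffer with a 4-byte-per-pixel format:

/*
 * Yf tiling, 4 bytes per pixel -> tile_height = 32, so
 *   intel_fb_align_height(dev, 100, fmt, I915_FORMAT_MOD_Yf_TILED)
 *     = ALIGN(100, 32) = 128
 */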
+
+static int
+intel_fill_fb_ggtt_view(struct i915_ggtt_view *view, struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state)
+{
+ struct intel_rotation_info *info = &view->rotation_info;
+
+ *view = i915_ggtt_view_normal;
+
+ if (!plane_state)
+ return 0;
+
+ if (!intel_rotation_90_or_270(plane_state->rotation))
+ return 0;
+
+ *view = i915_ggtt_view_rotated;
+
+ info->height = fb->height;
+ info->pixel_format = fb->pixel_format;
+ info->pitch = fb->pitches[0];
+ info->fb_modifier = fb->modifier[0];
+
+ if (!(info->fb_modifier == I915_FORMAT_MOD_Y_TILED ||
+ info->fb_modifier == I915_FORMAT_MOD_Yf_TILED)) {
+ DRM_DEBUG_KMS(
+ "Y or Yf tiling is needed for 90/270 rotation!\n");
+ return -EINVAL;
+ }
+
+ return 0;
}
int
intel_pin_and_fence_fb_obj(struct drm_plane *plane,
struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state,
struct intel_engine_cs *pipelined)
{
struct drm_device *dev = fb->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct i915_ggtt_view view;
u32 alignment;
int ret;
return -EINVAL;
}
+ ret = intel_fill_fb_ggtt_view(&view, fb, plane_state);
+ if (ret)
+ return ret;
+
/* Note that the w/a also requires 64 PTE of padding following the
* bo. We currently fill all unused PTE with the shadow page and so
* we should always have valid PTE following the scanout preventing
intel_runtime_pm_get(dev_priv);
dev_priv->mm.interruptible = false;
- ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined);
+ ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined,
+ &view);
if (ret)
goto err_interruptible;
return 0;
err_unpin:
- i915_gem_object_unpin_from_display_plane(obj);
+ i915_gem_object_unpin_from_display_plane(obj, &view);
err_interruptible:
dev_priv->mm.interruptible = true;
intel_runtime_pm_put(dev_priv);
return ret;
}
-static void intel_unpin_fb_obj(struct drm_i915_gem_object *obj)
+static void intel_unpin_fb_obj(struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state)
{
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct i915_ggtt_view view;
+ int ret;
+
WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex));
+ ret = intel_fill_fb_ggtt_view(&view, fb, plane_state);
+ WARN_ONCE(ret, "Couldn't get view from plane state!");
+
i915_gem_object_unpin_fence(obj);
- i915_gem_object_unpin_from_display_plane(obj);
+ i915_gem_object_unpin_from_display_plane(obj, &view);
}
/* Computes the linear offset to the base tile and adjusts x, y. bytes per pixel
}
static bool
-intel_alloc_plane_obj(struct intel_crtc *crtc,
- struct intel_initial_plane_config *plane_config)
+intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
+ struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_gem_object *obj = NULL;
mode_cmd.flags = DRM_MODE_FB_MODIFIERS;
mutex_lock(&dev->struct_mutex);
-
if (intel_framebuffer_init(dev, to_intel_framebuffer(fb),
&mode_cmd, obj)) {
DRM_DEBUG_KMS("intel fb init failed\n");
goto out_unref_obj;
}
-
- obj->frontbuffer_bits = INTEL_FRONTBUFFER_PRIMARY(crtc->pipe);
mutex_unlock(&dev->struct_mutex);
- DRM_DEBUG_KMS("plane fb obj %p\n", obj);
+ DRM_DEBUG_KMS("initial plane fb obj %p\n", obj);
return true;
out_unref_obj:
}
static void
-intel_find_plane_obj(struct intel_crtc *intel_crtc,
- struct intel_initial_plane_config *plane_config)
+intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
+ struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *c;
struct intel_crtc *i;
struct drm_i915_gem_object *obj;
+ struct drm_plane *primary = intel_crtc->base.primary;
+ struct drm_framebuffer *fb;
if (!plane_config->fb)
return;
- if (intel_alloc_plane_obj(intel_crtc, plane_config)) {
- struct drm_plane *primary = intel_crtc->base.primary;
-
- primary->fb = &plane_config->fb->base;
- primary->state->crtc = &intel_crtc->base;
- update_state_fb(primary);
-
- return;
+ if (intel_alloc_initial_plane_obj(intel_crtc, plane_config)) {
+ fb = &plane_config->fb->base;
+ goto valid_fb;
}
kfree(plane_config->fb);
if (!i->active)
continue;
- obj = intel_fb_obj(c->primary->fb);
- if (obj == NULL)
+ fb = c->primary->fb;
+ if (!fb)
continue;
+ obj = intel_fb_obj(fb);
if (i915_gem_obj_ggtt_offset(obj) == plane_config->base) {
- struct drm_plane *primary = intel_crtc->base.primary;
-
- if (obj->tiling_mode != I915_TILING_NONE)
- dev_priv->preserve_bios_swizzle = true;
-
- drm_framebuffer_reference(c->primary->fb);
- primary->fb = c->primary->fb;
- primary->state->crtc = &intel_crtc->base;
- update_state_fb(intel_crtc->base.primary);
- obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
- break;
+ drm_framebuffer_reference(fb);
+ goto valid_fb;
}
}
+ return;
+
+valid_fb:
+ obj = intel_fb_obj(fb);
+ if (obj->tiling_mode != I915_TILING_NONE)
+ dev_priv->preserve_bios_swizzle = true;
+
+ primary->fb = fb;
+ primary->state->crtc = &intel_crtc->base;
+ primary->crtc = &intel_crtc->base;
+ update_state_fb(primary);
+ obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
}
static void i9xx_update_primary_plane(struct drm_crtc *crtc,
I915_WRITE(reg, dspcntr);
- DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
- i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
- fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
if (INTEL_INFO(dev)->gen >= 4) {
I915_WRITE(DSPSURF(plane),
I915_WRITE(reg, dspcntr);
- DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
- i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
- fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
I915_WRITE(DSPSURF(plane),
i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset);
}
}
+unsigned long intel_plane_obj_offset(struct intel_plane *intel_plane,
+ struct drm_i915_gem_object *obj)
+{
+ const struct i915_ggtt_view *view = &i915_ggtt_view_normal;
+
+ if (intel_rotation_90_or_270(intel_plane->base.state->rotation))
+ view = &i915_ggtt_view_rotated;
+
+ return i915_gem_obj_ggtt_offset_view(obj, view);
+}
+
static void skylake_update_primary_plane(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
int x, int y)
struct drm_i915_gem_object *obj;
int pipe = intel_crtc->pipe;
u32 plane_ctl, stride_div;
+ unsigned long surf_addr;
if (!intel_crtc->primary_enabled) {
I915_WRITE(PLANE_CTL(pipe, 0), 0);
obj = intel_fb_obj(fb);
stride_div = intel_fb_stride_alignment(dev, fb->modifier[0],
fb->pixel_format);
+ surf_addr = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj);
I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
-
- DRM_DEBUG_KMS("Writing base %08lX %d,%d,%d,%d pitch=%d\n",
- i915_gem_obj_ggtt_offset(obj),
- x, y, fb->width, fb->height,
- fb->pitches[0]);
-
I915_WRITE(PLANE_POS(pipe, 0), 0);
I915_WRITE(PLANE_OFFSET(pipe, 0), (y << 16) | x);
I915_WRITE(PLANE_SIZE(pipe, 0),
(intel_crtc->config->pipe_src_h - 1) << 16 |
(intel_crtc->config->pipe_src_w - 1));
I915_WRITE(PLANE_STRIDE(pipe, 0), fb->pitches[0] / stride_div);
- I915_WRITE(PLANE_SURF(pipe, 0), i915_gem_obj_ggtt_offset(obj));
+ I915_WRITE(PLANE_SURF(pipe, 0), surf_addr);
POSTING_READ(PLANE_SURF(pipe, 0));
}
FDI_FE_ERRC_ENABLE);
}
-static bool pipe_has_enabled_pch(struct intel_crtc *crtc)
-{
- return crtc->base.state->enable && crtc->active &&
- crtc->config->has_pch_encoder;
-}
-
-static void ivb_modeset_global_resources(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *pipe_B_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
- struct intel_crtc *pipe_C_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_C]);
- uint32_t temp;
-
- /*
- * When everything is off disable fdi C so that we could enable fdi B
- * with all lanes. Note that we don't care about enabled pipes without
- * an enabled pch encoder.
- */
- if (!pipe_has_enabled_pch(pipe_B_crtc) &&
- !pipe_has_enabled_pch(pipe_C_crtc)) {
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
-
- temp = I915_READ(SOUTH_CHICKEN1);
- temp &= ~FDI_BC_BIFURCATION_SELECT;
- DRM_DEBUG_KMS("disabling fdi C rx\n");
- I915_WRITE(SOUTH_CHICKEN1, temp);
- }
-}
-
/* The FDI link training functions for ILK/Ibexpeak. */
static void ironlake_fdi_link_train(struct drm_crtc *crtc)
{
I915_READ(VSYNCSHIFT(cpu_transcoder)));
}
-static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
+static void cpt_set_fdi_bc_bifurcation(struct drm_device *dev, bool enable)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t temp;
temp = I915_READ(SOUTH_CHICKEN1);
- if (temp & FDI_BC_BIFURCATION_SELECT)
+ if (!!(temp & FDI_BC_BIFURCATION_SELECT) == enable)
return;
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
- temp |= FDI_BC_BIFURCATION_SELECT;
- DRM_DEBUG_KMS("enabling fdi C rx\n");
+ temp &= ~FDI_BC_BIFURCATION_SELECT;
+ if (enable)
+ temp |= FDI_BC_BIFURCATION_SELECT;
+
+ DRM_DEBUG_KMS("%sabling fdi C rx\n", enable ? "en" : "dis");
I915_WRITE(SOUTH_CHICKEN1, temp);
POSTING_READ(SOUTH_CHICKEN1);
}
static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
switch (intel_crtc->pipe) {
case PIPE_A:
break;
case PIPE_B:
if (intel_crtc->config->fdi_lanes > 2)
- WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
+ cpt_set_fdi_bc_bifurcation(dev, false);
else
- cpt_enable_fdi_bc_bifurcation(dev);
+ cpt_set_fdi_bc_bifurcation(dev, true);
break;
case PIPE_C:
- cpt_enable_fdi_bc_bifurcation(dev);
+ cpt_set_fdi_bc_bifurcation(dev, true);
break;
default:
}
}
+/*
+ * Disable a plane internally without actually modifying the plane's state.
+ * This will allow us to easily restore the plane later by just reprogramming
+ * its state.
+ */
+static void disable_plane_internal(struct drm_plane *plane)
+{
+ struct intel_plane *intel_plane = to_intel_plane(plane);
+ struct drm_plane_state *state =
+ plane->funcs->atomic_duplicate_state(plane);
+ struct intel_plane_state *intel_state = to_intel_plane_state(state);
+
+ intel_state->visible = false;
+ intel_plane->commit_plane(plane, intel_state);
+
+ intel_plane_destroy_state(plane, state);
+}
+
static void intel_disable_sprite_planes(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) {
intel_plane = to_intel_plane(plane);
- if (intel_plane->pipe == pipe)
- plane->funcs->disable_plane(plane);
+ if (plane->fb && intel_plane->pipe == pipe)
+ disable_plane_internal(plane);
}
}
return mask;
}
-static void modeset_update_crtc_power_domains(struct drm_device *dev)
+static void modeset_update_crtc_power_domains(struct drm_atomic_state *state)
{
+ struct drm_device *dev = state->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long pipe_domains[I915_MAX_PIPES] = { 0, };
struct intel_crtc *crtc;
}
if (dev_priv->display.modeset_global_resources)
- dev_priv->display.modeset_global_resources(dev);
+ dev_priv->display.modeset_global_resources(state);
for_each_intel_crtc(dev, crtc) {
enum intel_display_power_domain domain;
WARN_ON(dev_priv->display.get_display_clock_speed(dev) != dev_priv->vlv_cdclk_freq);
switch (cdclk) {
- case 400000:
- cmd = 3;
- break;
case 333333:
case 320000:
- cmd = 2;
- break;
case 266667:
- cmd = 1;
- break;
case 200000:
- cmd = 0;
break;
default:
MISSING_CASE(cdclk);
return;
}
+ /*
+ * Specs are full of misinformation, but testing on actual
+ * hardware has shown that we just need to write the desired
+ * CCK divider into the Punit register.
+ */
+ cmd = DIV_ROUND_CLOSEST(dev_priv->hpll_freq << 1, cdclk) - 1;
+
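/*
 * Illustrative arithmetic (the hpll_freq value is an assumption,
 * not from this patch): with cdclk = 320000 kHz and
 * hpll_freq = 1600000 kHz,
 *   cmd = DIV_ROUND_CLOSEST(1600000 << 1, 320000) - 1
 *       = DIV_ROUND_CLOSEST(3200000, 320000) - 1 = 10 - 1 = 9
 */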
mutex_lock(&dev_priv->rps.hw_lock);
val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
val &= ~DSPFREQGUAR_MASK_CHV;
int max_pixclk)
{
int freq_320 = (dev_priv->hpll_freq << 1) % 320000 != 0 ? 333333 : 320000;
-
- /* FIXME: Punit isn't quite ready yet */
- if (IS_CHERRYVIEW(dev_priv->dev))
- return 400000;
+ int limit = IS_CHERRYVIEW(dev_priv) ? 95 : 90;
/*
* Really only a few cases to deal with, as only 4 CDclks are supported:
* 200MHz
* 267MHz
* 320/333MHz (depends on HPLL freq)
- * 400MHz
- * So we check to see whether we're above 90% of the lower bin and
- * adjust if needed.
+ * 400MHz (VLV only)
+ * So we check to see whether we're above 90% (VLV) or 95% (CHV)
+ * of the lower bin and adjust if needed.
*
* We seem to get an unstable or solid color picture at 200MHz.
* Not sure what's wrong. For now use 200MHz only when all pipes
* are off.
*/
- if (max_pixclk > freq_320*9/10)
+ if (!IS_CHERRYVIEW(dev_priv) &&
+ max_pixclk > freq_320*limit/100)
return 400000;
- else if (max_pixclk > 266667*9/10)
+ else if (max_pixclk > 266667*limit/100)
return freq_320;
else if (max_pixclk > 0)
return 266667;
*prepare_pipes |= (1 << intel_crtc->pipe);
}
-static void valleyview_modeset_global_resources(struct drm_device *dev)
+static void vlv_program_pfi_credits(struct drm_i915_private *dev_priv)
{
+ unsigned int credits, default_credits;
+
+ if (IS_CHERRYVIEW(dev_priv))
+ default_credits = PFI_CREDIT(12);
+ else
+ default_credits = PFI_CREDIT(8);
+
+ if (DIV_ROUND_CLOSEST(dev_priv->vlv_cdclk_freq, 1000) >= dev_priv->rps.cz_freq) {
+ /* CHV suggested value is 31 or 63 */
+ if (IS_CHERRYVIEW(dev_priv))
+ credits = PFI_CREDIT_31;
+ else
+ credits = PFI_CREDIT(15);
+ } else {
+ credits = default_credits;
+ }
+
+ /*
+ * WA - write default credits before re-programming
+ * FIXME: should we also set the resend bit here?
+ */
+ I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
+ default_credits);
+
+ I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
+ credits | PFI_CREDIT_RESEND);
+
+ /*
+ * FIXME is this guaranteed to clear
+ * immediately or should we poll for it?
+ */
+ WARN_ON(I915_READ(GCI_CONTROL) & PFI_CREDIT_RESEND);
+}
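The PFI credit encodings used above pack into bits 31:28 of GCI_CONTROL, per the defines added earlier in this series:

/*
 * PFI_CREDIT(x) encodes x in the 8-15 range as ((x) - 8) << 28, so
 *   PFI_CREDIT(8) = 0 << 28, PFI_CREDIT(12) = 4 << 28, PFI_CREDIT(15) = 7 << 28.
 * The out-of-range values 31 and 63 get dedicated encodings,
 * PFI_CREDIT_31 = 8 << 28 and PFI_CREDIT_63 = 9 << 28 (chv only).
 */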
+
+static void valleyview_modeset_global_resources(struct drm_atomic_state *state)
+{
+ struct drm_device *dev = state->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int max_pixclk = intel_mode_max_pixclk(dev_priv);
int req_cdclk = valleyview_calc_cdclk(dev_priv, max_pixclk);
else
valleyview_set_cdclk(dev, req_cdclk);
+ vlv_program_pfi_credits(dev_priv);
+
intel_display_power_put(dev_priv, POWER_DOMAIN_PIPE_A);
}
}
return encoder->get_hw_state(encoder, &pipe);
}
+static int pipe_required_fdi_lanes(struct drm_device *dev, enum pipe pipe)
+{
+ struct intel_crtc *crtc =
+ to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
+
+ if (crtc->base.state->enable &&
+ crtc->config->has_pch_encoder)
+ return crtc->config->fdi_lanes;
+
+ return 0;
+}
+
static bool ironlake_check_fdi_lanes(struct drm_device *dev, enum pipe pipe,
struct intel_crtc_state *pipe_config)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *pipe_B_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
-
DRM_DEBUG_KMS("checking fdi config on pipe %c, lanes %i\n",
pipe_name(pipe), pipe_config->fdi_lanes);
if (pipe_config->fdi_lanes > 4) {
case PIPE_A:
return true;
case PIPE_B:
- if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled &&
- pipe_config->fdi_lanes > 2) {
+ if (pipe_config->fdi_lanes > 2 &&
+ pipe_required_fdi_lanes(dev, PIPE_C) > 0) {
DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n",
pipe_name(pipe), pipe_config->fdi_lanes);
return false;
}
return true;
case PIPE_C:
- if (!pipe_has_enabled_pch(pipe_B_crtc) ||
- pipe_B_crtc->config->fdi_lanes <= 2) {
- if (pipe_config->fdi_lanes > 2) {
- DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n",
- pipe_name(pipe), pipe_config->fdi_lanes);
- return false;
- }
- } else {
+ if (pipe_config->fdi_lanes > 2) {
+ DRM_DEBUG_KMS("only 2 lanes on pipe %c: required %i lanes\n",
+ pipe_name(pipe), pipe_config->fdi_lanes);
+ return false;
+ }
+ if (pipe_required_fdi_lanes(dev, PIPE_B) > 2) {
DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n");
return false;
}
* - LVDS dual channel mode
* - Double wide pipe
*/
- if ((intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ if ((intel_pipe_will_have_type(pipe_config, INTEL_OUTPUT_LVDS) &&
intel_is_dual_link_lvds(dev)) || pipe_config->double_wide)
pipe_config->pipe_src_w &= ~1;
u32 val;
int divider;
- /* FIXME: Punit isn't quite ready yet */
- if (IS_CHERRYVIEW(dev))
- return 400000;
-
if (dev_priv->hpll_freq == 0)
dev_priv->hpll_freq = valleyview_get_vco(dev_priv);
&& !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE);
}
-static int i9xx_get_refclk(struct intel_crtc *crtc, int num_connectors)
+static int i9xx_get_refclk(const struct intel_crtc_state *crtc_state,
+ int num_connectors)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int refclk;
+ WARN_ON(!crtc_state->base.state);
+
if (IS_VALLEYVIEW(dev)) {
refclk = 100000;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2) {
refclk = dev_priv->vbt.lvds_ssc_freq;
DRM_DEBUG_KMS("using SSC reference clock of %d kHz\n", refclk);
crtc_state->dpll_hw_state.fp0 = fp;
crtc->lowfreq_avail = false;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
- reduced_clock && i915.powersave) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
+ reduced_clock) {
crtc_state->dpll_hw_state.fp1 = fp2;
crtc->lowfreq_avail = true;
} else {
int pipe = crtc->pipe;
int dpll_reg = DPLL(crtc->pipe);
enum dpio_channel port = vlv_pipe_to_channel(pipe);
- u32 loopfilter, intcoeff;
+ u32 loopfilter, tribuf_calcntr;
u32 bestn, bestm1, bestm2, bestp1, bestp2, bestm2_frac;
- int refclk;
+ u32 dpio_val;
+ int vco;
bestn = pipe_config->dpll.n;
bestm2_frac = pipe_config->dpll.m2 & 0x3fffff;
bestm2 = pipe_config->dpll.m2 >> 22;
bestp1 = pipe_config->dpll.p1;
bestp2 = pipe_config->dpll.p2;
+ vco = pipe_config->dpll.vco;
+ dpio_val = 0;
+ loopfilter = 0;
/*
* Enable Refclk and SSC
1 << DPIO_CHV_N_DIV_SHIFT);
/* M2 fraction division */
- vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac);
+ if (bestm2_frac)
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac);
/* M2 fraction division enable */
- vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port),
- DPIO_CHV_FRAC_DIV_EN |
- (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT));
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW3(port));
+ dpio_val &= ~(DPIO_CHV_FEEDFWD_GAIN_MASK | DPIO_CHV_FRAC_DIV_EN);
+ dpio_val |= (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT);
+ if (bestm2_frac)
+ dpio_val |= DPIO_CHV_FRAC_DIV_EN;
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port), dpio_val);
+
+ /* Program digital lock detect threshold */
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW9(port));
+ dpio_val &= ~(DPIO_CHV_INT_LOCK_THRESHOLD_MASK |
+ DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE);
+ dpio_val |= (0x5 << DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT);
+ if (!bestm2_frac)
+ dpio_val |= DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE;
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW9(port), dpio_val);
/* Loop filter */
- refclk = i9xx_get_refclk(crtc, 0);
- loopfilter = 5 << DPIO_CHV_PROP_COEFF_SHIFT |
- 2 << DPIO_CHV_GAIN_CTRL_SHIFT;
- if (refclk == 100000)
- intcoeff = 11;
- else if (refclk == 38400)
- intcoeff = 10;
- else
- intcoeff = 9;
- loopfilter |= intcoeff << DPIO_CHV_INT_COEFF_SHIFT;
+ if (vco == 5400000) {
+ loopfilter |= (0x3 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x8 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x1 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x9;
+ } else if (vco <= 6200000) {
+ loopfilter |= (0x5 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0xB << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x9;
+ } else if (vco <= 6480000) {
+ loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x8;
+ } else {
+ /* Not supported. Apply the same limits as in the max case */
+ loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0;
+ }
vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW6(port), loopfilter);
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW8(port));
+ dpio_val &= ~DPIO_CHV_TDC_TARGET_CNT_MASK;
+ dpio_val |= (tribuf_calcntr << DPIO_CHV_TDC_TARGET_CNT_SHIFT);
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW8(port), dpio_val);
+
/* AFC Recal */
vlv_dpio_write(dev_priv, pipe, CHV_CMN_DW14(port),
vlv_dpio_read(dev_priv, pipe, CHV_CMN_DW14(port)) |
struct intel_crtc *crtc =
to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
struct intel_crtc_state pipe_config = {
+ .base.crtc = &crtc->base,
.pixel_multiplier = 1,
.dpll = *dpll,
};
i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock);
- is_sdvo = intel_pipe_will_have_type(crtc, INTEL_OUTPUT_SDVO) ||
- intel_pipe_will_have_type(crtc, INTEL_OUTPUT_HDMI);
+ is_sdvo = intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_SDVO) ||
+ intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_HDMI);
dpll = DPLL_VGA_MODE_DIS;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
dpll |= DPLLB_MODE_LVDS;
else
dpll |= DPLLB_MODE_DAC_SERIAL;
if (crtc_state->sdvo_tv_clock)
dpll |= PLL_REF_INPUT_TVCLKINBC;
- else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
dpll = DPLL_VGA_MODE_DIS;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
} else {
if (clock->p1 == 2)
dpll |= PLL_P2_DIVIDE_BY_4;
}
- if (!IS_I830(dev) && intel_pipe_will_have_type(crtc, INTEL_OUTPUT_DVO))
+ if (!IS_I830(dev) && intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_DVO))
dpll |= DPLL_DVO_2X_MODE;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
bool is_lvds = false, is_dsi = false;
struct intel_encoder *encoder;
const intel_limit_t *limit;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
+ int i;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != crtc)
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != &crtc->base)
continue;
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
return 0;
if (!crtc_state->clock_set) {
- refclk = i9xx_get_refclk(crtc, num_connectors);
+ refclk = i9xx_get_refclk(crtc_state, num_connectors);
/*
* Returns a set of divisors for the desired target clock with
* the clock equation: reflck * (5 * (m1 + 2) + (m2 + 2)) / (n +
* 2) / p1 / p2.
*/
- limit = intel_limit(crtc, refclk);
- ok = dev_priv->display.find_dpll(limit, crtc,
+ limit = intel_limit(crtc_state, refclk);
+ ok = dev_priv->display.find_dpll(limit, crtc_state,
crtc_state->port_clock,
refclk, NULL, &clock);
if (!ok) {
* we will disable the LVDS downclock feature.
*/
has_reduced_clock =
- dev_priv->display.find_dpll(limit, crtc,
+ dev_priv->display.find_dpll(limit, crtc_state,
dev_priv->lvds_downclock,
refclk, &clock,
&reduced_clock);
u32 val, base, offset;
int pipe = crtc->pipe, plane = crtc->plane;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
lpt_init_pch_refclk(dev);
}
-static int ironlake_get_refclk(struct drm_crtc *crtc)
+static int ironlake_get_refclk(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
- int num_connectors = 0;
+ int num_connectors = 0, i;
bool is_lvds = false;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != to_intel_crtc(crtc))
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
continue;
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int refclk;
const intel_limit_t *limit;
bool ret, is_lvds = false;
- is_lvds = intel_pipe_will_have_type(intel_crtc, INTEL_OUTPUT_LVDS);
+ is_lvds = intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS);
- refclk = ironlake_get_refclk(crtc);
+ refclk = ironlake_get_refclk(crtc_state);
/*
* Returns a set of divisors for the desired target clock with the given
* refclk, or FALSE. The returned values represent the clock equation:
* reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2.
*/
- limit = intel_limit(intel_crtc, refclk);
- ret = dev_priv->display.find_dpll(limit, intel_crtc,
+ limit = intel_limit(crtc_state, refclk);
+ ret = dev_priv->display.find_dpll(limit, crtc_state,
crtc_state->port_clock,
refclk, NULL, clock);
if (!ret)
* downclock feature.
*/
*has_reduced_clock =
- dev_priv->display.find_dpll(limit, intel_crtc,
+ dev_priv->display.find_dpll(limit, crtc_state,
dev_priv->lvds_downclock,
refclk, clock,
reduced_clock);
struct drm_crtc *crtc = &intel_crtc->base;
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_encoder *intel_encoder;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
+ struct intel_encoder *encoder;
uint32_t dpll;
- int factor, num_connectors = 0;
+ int factor, num_connectors = 0, i;
bool is_lvds = false, is_sdvo = false;
- for_each_intel_encoder(dev, intel_encoder) {
- if (intel_encoder->new_crtc != to_intel_crtc(crtc))
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
continue;
- switch (intel_encoder->type) {
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
+ switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
}
}
- if (is_lvds && has_reduced_clock && i915.powersave)
+ if (is_lvds && has_reduced_clock)
crtc->lowfreq_avail = true;
else
crtc->lowfreq_avail = false;
u32 val, base, offset, stride_mult, tiling;
int pipe = crtc->pipe;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
u32 val, base, offset;
int pipe = crtc->pipe;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
uint32_t cntl = 0, size = 0;
if (base) {
- unsigned int width = intel_crtc->cursor_width;
- unsigned int height = intel_crtc->cursor_height;
+ unsigned int width = intel_crtc->base.cursor->state->crtc_w;
+ unsigned int height = intel_crtc->base.cursor->state->crtc_h;
unsigned int stride = roundup_pow_of_two(width) * 4;
switch (stride) {
cntl = 0;
if (base) {
cntl = MCURSOR_GAMMA_ENABLE;
- switch (intel_crtc->cursor_width) {
+ switch (intel_crtc->base.cursor->state->crtc_w) {
case 64:
cntl |= CURSOR_MODE_64_ARGB_AX;
break;
cntl |= CURSOR_MODE_256_ARGB_AX;
break;
default:
- MISSING_CASE(intel_crtc->cursor_width);
+ MISSING_CASE(intel_crtc->base.cursor->state->crtc_w);
return;
}
cntl |= pipe << 28; /* Connect to correct pipe */
base = 0;
if (x < 0) {
- if (x + intel_crtc->cursor_width <= 0)
+ if (x + intel_crtc->base.cursor->state->crtc_w <= 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_X_SHIFT;
pos |= x << CURSOR_X_SHIFT;
if (y < 0) {
- if (y + intel_crtc->cursor_height <= 0)
+ if (y + intel_crtc->base.cursor->state->crtc_h <= 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_Y_SHIFT;
/* ILK+ do this automagically */
if (HAS_GMCH_DISPLAY(dev) &&
crtc->cursor->state->rotation == BIT(DRM_ROTATE_180)) {
- base += (intel_crtc->cursor_height *
- intel_crtc->cursor_width - 1) * 4;
+ base += (intel_crtc->base.cursor->state->crtc_h *
+ intel_crtc->base.cursor->state->crtc_w - 1) * 4;
}
if (IS_845G(dev) || IS_I865G(dev))
struct drm_device *dev = encoder->dev;
struct drm_framebuffer *fb;
struct drm_mode_config *config = &dev->mode_config;
+ struct drm_atomic_state *state = NULL;
+ struct drm_connector_state *connector_state;
int ret, i = -1;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
old->load_detect_temp = true;
old->release_fb = NULL;
+ state = drm_atomic_state_alloc(dev);
+ if (!state)
+ return false;
+
+ state->acquire_ctx = ctx;
+
+ connector_state = drm_atomic_get_connector_state(state, connector);
+ if (IS_ERR(connector_state)) {
+ ret = PTR_ERR(connector_state);
+ goto fail;
+ }
+
+ connector_state->crtc = crtc;
+ connector_state->best_encoder = &intel_encoder->base;
+
if (!mode)
mode = &load_detect_mode;
goto fail;
}
- if (intel_set_mode(crtc, mode, 0, 0, fb)) {
+ if (intel_set_mode(crtc, mode, 0, 0, fb, state)) {
DRM_DEBUG_KMS("failed to set mode on load-detect pipe\n");
if (old->release_fb)
old->release_fb->funcs->destroy(old->release_fb);
else
intel_crtc->new_config = NULL;
fail_unlock:
+ if (state) {
+ drm_atomic_state_free(state);
+ state = NULL;
+ }
+
if (ret == -EDEADLK) {
drm_modeset_backoff(ctx);
goto retry;
}
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old)
+ struct intel_load_detect_pipe *old,
+ struct drm_modeset_acquire_ctx *ctx)
{
+ struct drm_device *dev = connector->dev;
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_crtc *crtc = encoder->crtc;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_atomic_state *state;
+ struct drm_connector_state *connector_state;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
connector->base.id, connector->name,
encoder->base.id, encoder->name);
if (old->load_detect_temp) {
+ state = drm_atomic_state_alloc(dev);
+ if (!state)
+ goto fail;
+
+ state->acquire_ctx = ctx;
+
+ connector_state = drm_atomic_get_connector_state(state, connector);
+ if (IS_ERR(connector_state))
+ goto fail;
+
to_intel_connector(connector)->new_encoder = NULL;
intel_encoder->new_crtc = NULL;
intel_crtc->new_enabled = false;
intel_crtc->new_config = NULL;
- intel_set_mode(crtc, NULL, 0, 0, NULL);
+
+ connector_state->best_encoder = NULL;
+ connector_state->crtc = NULL;
+
+ intel_set_mode(crtc, NULL, 0, 0, NULL, state);
+
+ drm_atomic_state_free(state);
if (old->release_fb) {
drm_framebuffer_unregister_private(old->release_fb);
/* Switch crtc and encoder back off if necessary */
if (old->dpms_mode != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, old->dpms_mode);
+
+ return;
+fail:
+ DRM_DEBUG_KMS("Couldn't release load detect pipe.\n");
+ drm_atomic_state_free(state);
}
static int i9xx_pll_refclk(struct drm_device *dev,
intel_runtime_pm_get(dev_priv);
i915_update_gfx_val(dev_priv);
+ if (INTEL_INFO(dev)->gen >= 6)
+ gen6_rps_busy(dev_priv);
dev_priv->mm.busy = true;
}
dev_priv->mm.busy = false;
- if (!i915.powersave)
- goto out;
-
for_each_crtc(dev, crtc) {
if (!crtc->primary->fb)
continue;
if (INTEL_INFO(dev)->gen >= 6)
gen6_rps_idle(dev->dev_private);
-out:
intel_runtime_pm_put(dev_priv);
}
enum pipe pipe = to_intel_crtc(work->crtc)->pipe;
mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(intel_fb_obj(work->old_fb));
+ intel_unpin_fb_obj(work->old_fb, work->crtc->primary->state);
drm_gem_object_unreference(&work->pending_flip_obj->base);
- drm_framebuffer_unreference(work->old_fb);
intel_fbc_update(dev);
mutex_unlock(&dev->struct_mutex);
intel_frontbuffer_flip_complete(dev, INTEL_FRONTBUFFER_PRIMARY(pipe));
+ drm_framebuffer_unreference(work->old_fb);
BUG_ON(atomic_read(&to_intel_crtc(work->crtc)->unpin_work_count) == 0);
atomic_dec(&to_intel_crtc(work->crtc)->unpin_work_count);
struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- WARN_ON(!in_irq());
+ WARN_ON(!in_interrupt());
if (crtc == NULL)
return;
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
flush_workqueue(dev_priv->wq);
- ret = i915_mutex_lock_interruptible(dev);
- if (ret)
- goto cleanup;
-
/* Reference the objects for the scheduled work. */
drm_framebuffer_reference(work->old_fb);
drm_gem_object_reference(&obj->base);
work->pending_flip_obj = obj;
+ ret = i915_mutex_lock_interruptible(dev);
+ if (ret)
+ goto cleanup;
+
atomic_inc(&intel_crtc->unpin_work_count);
intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
ring = &dev_priv->ring[RCS];
}
- ret = intel_pin_and_fence_fb_obj(crtc->primary, fb, ring);
+ ret = intel_pin_and_fence_fb_obj(crtc->primary, fb,
+ crtc->primary->state, ring);
if (ret)
goto cleanup_pending;
- work->gtt_offset =
- i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset;
+ work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary), obj)
+ + intel_crtc->dspaddr_offset;
if (use_mmio_flip(ring, obj)) {
ret = intel_queue_mmio_flip(dev, crtc, fb, obj, ring,
return 0;
cleanup_unpin:
- intel_unpin_fb_obj(obj);
+ intel_unpin_fb_obj(fb, crtc->primary->state);
cleanup_pending:
atomic_dec(&intel_crtc->unpin_work_count);
+ mutex_unlock(&dev->struct_mutex);
+cleanup:
crtc->primary->fb = old_fb;
update_state_fb(crtc->primary);
+
+ drm_gem_object_unreference_unlocked(&obj->base);
drm_framebuffer_unreference(work->old_fb);
- drm_gem_object_unreference(&obj->base);
- mutex_unlock(&dev->struct_mutex);
-cleanup:
spin_lock_irq(&dev->event_lock);
intel_crtc->unpin_work = NULL;
spin_unlock_irq(&dev->event_lock);
struct intel_encoder *encoder;
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
connector->new_encoder =
to_intel_encoder(connector->base.encoder);
}
}
}
+/* Transitional helper to copy current connector/encoder state to
+ * connector->state. This is needed so that code that is partially
+ * converted to atomic does the right thing.
+ */
+static void intel_modeset_update_connector_atomic_state(struct drm_device *dev)
+{
+ struct intel_connector *connector;
+
+ for_each_intel_connector(dev, connector) {
+ if (connector->base.encoder) {
+ connector->base.state->best_encoder =
+ connector->base.encoder;
+ connector->base.state->crtc =
+ connector->base.encoder->crtc;
+ } else {
+ connector->base.state->best_encoder = NULL;
+ connector->base.state->crtc = NULL;
+ }
+ }
+}
+
/**
* intel_modeset_commit_output_state
*
struct intel_encoder *encoder;
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
connector->base.encoder = &connector->new_encoder->base;
}
crtc->base.state->enable = crtc->new_enabled;
crtc->base.enabled = crtc->new_enabled;
}
+
+ intel_modeset_update_connector_atomic_state(dev);
}
static void
struct intel_crtc_state *pipe_config)
{
struct drm_device *dev = crtc->base.dev;
+ struct drm_atomic_state *state;
struct intel_connector *connector;
- int bpp;
+ int bpp, i;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
pipe_config->pipe_bpp = bpp;
+ state = pipe_config->base.state;
+
/* Clamp display bpp to EDID value */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
- if (!connector->new_encoder ||
- connector->new_encoder->new_crtc != crtc)
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector = to_intel_connector(state->connectors[i]);
+ if (state->connector_states[i]->crtc != &crtc->base)
continue;
connected_sink_compute_bpp(connector, pipe_config);
* list to detect the problem on ddi platforms
* where there's just one encoder per digital port.
*/
- list_for_each_entry(connector,
- &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
struct intel_encoder *encoder = connector->new_encoder;
if (!encoder)
return true;
}
+static void
+clear_intel_crtc_state(struct intel_crtc_state *crtc_state)
+{
+ struct drm_crtc_state tmp_state;
+
+ /* Clear only the intel specific part of the crtc state */
+ tmp_state = crtc_state->base;
+ memset(crtc_state, 0, sizeof(*crtc_state));
+ crtc_state->base = tmp_state;
+}
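clear_intel_crtc_state() uses the usual save/zero/restore idiom: because the drm core state is embedded as the first member, it can be held in a temporary while the rest of the struct is wiped. A self-contained toy version of the same idiom, with purely illustrative names:

#include <string.h>

struct core_state { int id; };

struct wrapper_state {
	struct core_state base;	/* shared part, kept across the wipe */
	int private_a;		/* driver-private part, zeroed */
	int private_b;
};

static void clear_private_part(struct wrapper_state *s)
{
	struct core_state tmp = s->base;	/* save the shared part */

	memset(s, 0, sizeof(*s));		/* wipe everything */
	s->base = tmp;				/* restore the shared part */
}

int main(void)
{
	struct wrapper_state s = { { 42 }, 1, 2 };

	clear_private_part(&s);
	return (s.base.id == 42 && s.private_a == 0) ? 0 : 1;
}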
+
static struct intel_crtc_state *
intel_modeset_pipe_config(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_display_mode *mode)
+ struct drm_display_mode *mode,
+ struct drm_atomic_state *state)
{
struct drm_device *dev = crtc->dev;
struct intel_encoder *encoder;
+ struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
struct intel_crtc_state *pipe_config;
int plane_bpp, ret = -EINVAL;
+ int i;
bool retry = true;
if (!check_encoder_cloning(to_intel_crtc(crtc))) {
return ERR_PTR(-EINVAL);
}
- pipe_config = kzalloc(sizeof(*pipe_config), GFP_KERNEL);
- if (!pipe_config)
- return ERR_PTR(-ENOMEM);
+ pipe_config = intel_atomic_get_crtc_state(state, to_intel_crtc(crtc));
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+
+ clear_intel_crtc_state(pipe_config);
pipe_config->base.crtc = crtc;
drm_mode_copy(&pipe_config->base.adjusted_mode, mode);
* adjust it according to limitations or connector properties, and also
* a chance to reject the mode entirely.
*/
- for_each_intel_encoder(dev, encoder) {
+ for (i = 0; i < state->num_connector; i++) {
+ connector = to_intel_connector(state->connectors[i]);
+ if (!connector)
+ continue;
- if (&encoder->new_crtc->base != crtc)
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc)
continue;
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
if (!(encoder->compute_config(encoder, pipe_config))) {
DRM_DEBUG_KMS("Encoder config failure\n");
goto fail;
return pipe_config;
fail:
- kfree(pipe_config);
return ERR_PTR(ret);
}
* to be part of the prepare_pipes mask. We don't (yet) support global
* modeset across multiple crtcs, so modeset_pipes will only have one
* bit set at most. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.encoder == &connector->new_encoder->base)
continue;
continue;
/* planes */
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
hw_entry = &hw_ddb.plane[pipe][plane];
sw_entry = &sw_ddb->plane[pipe][plane];
{
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
/* This also checks the encoder/connector hw state with the
* ->get_hw_state callbacks. */
intel_connector_check_state(connector);
I915_STATE_WARN(encoder->connectors_active && !encoder->base.crtc,
"encoder's active_connectors set, but no crtc\n");
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.encoder != &encoder->base)
continue;
enabled = true;
intel_modeset_compute_config(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_framebuffer *fb,
+ struct drm_atomic_state *state,
unsigned *modeset_pipes,
unsigned *prepare_pipes,
unsigned *disable_pipes)
{
+ struct drm_device *dev = crtc->dev;
struct intel_crtc_state *pipe_config = NULL;
+ struct intel_crtc *intel_crtc;
+ int ret = 0;
+
+ ret = drm_atomic_add_affected_connectors(state, crtc);
+ if (ret)
+ return ERR_PTR(ret);
intel_modeset_affected_pipes(crtc, modeset_pipes,
prepare_pipes, disable_pipes);
- if ((*modeset_pipes) == 0)
- goto out;
+ for_each_intel_crtc_masked(dev, *disable_pipes, intel_crtc) {
+ pipe_config = intel_atomic_get_crtc_state(state, intel_crtc);
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+
+ pipe_config->base.enable = false;
+ }
/*
* Note this needs changes when we start tracking multiple modes
* (i.e. one pipe_config for each crtc) rather than just the one
* for this crtc.
*/
- pipe_config = intel_modeset_pipe_config(crtc, fb, mode);
- if (IS_ERR(pipe_config)) {
- goto out;
+ for_each_intel_crtc_masked(dev, *modeset_pipes, intel_crtc) {
+ /* FIXME: For now we still expect modeset_pipes to have at
+ * most one bit set. */
+ if (WARN_ON(&intel_crtc->base != crtc))
+ continue;
+
+ pipe_config = intel_modeset_pipe_config(crtc, fb, mode, state);
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+
+ intel_dump_pipe_config(to_intel_crtc(crtc), pipe_config,
+ "[modeset]");
}
- intel_dump_pipe_config(to_intel_crtc(crtc), pipe_config,
- "[modeset]");
-out:
- return pipe_config;
+ return intel_atomic_get_crtc_state(state, to_intel_crtc(crtc));
}
static int __intel_set_mode_setup_plls(struct drm_device *dev,
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_display_mode *saved_mode;
+ struct intel_crtc_state *crtc_state_copy = NULL;
struct intel_crtc *intel_crtc;
int ret = 0;
if (!saved_mode)
return -ENOMEM;
+ crtc_state_copy = kmalloc(sizeof(*crtc_state_copy), GFP_KERNEL);
+ if (!crtc_state_copy) {
+ ret = -ENOMEM;
+ goto done;
+ }
+
*saved_mode = crtc->mode;
if (modeset_pipes)
* update the output configuration. */
intel_modeset_update_state(dev, prepare_pipes);
- modeset_update_crtc_power_domains(dev);
+ modeset_update_crtc_power_domains(pipe_config->base.state);
/* Set up the DPLL and any encoders state that needs to adjust or depend
* on the DPLL.
if (ret && crtc->state->enable)
crtc->mode = *saved_mode;
+ if (ret == 0 && pipe_config) {
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ /* The pipe_config will be freed with the atomic state, so
+ * make a copy. */
+ memcpy(crtc_state_copy, intel_crtc->config,
+ sizeof(*crtc_state_copy));
+ intel_crtc->config = crtc_state_copy;
+ intel_crtc->base.state = &crtc_state_copy->base;
+
+ if (modeset_pipes)
+ intel_crtc->new_config = intel_crtc->config;
+ } else {
+ kfree(crtc_state_copy);
+ }
+
kfree(saved_mode);
return ret;
}
static int intel_set_mode(struct drm_crtc *crtc,
struct drm_display_mode *mode,
- int x, int y, struct drm_framebuffer *fb)
+ int x, int y, struct drm_framebuffer *fb,
+ struct drm_atomic_state *state)
{
struct intel_crtc_state *pipe_config;
unsigned modeset_pipes, prepare_pipes, disable_pipes;
+ int ret = 0;
- pipe_config = intel_modeset_compute_config(crtc, mode, fb,
+ pipe_config = intel_modeset_compute_config(crtc, mode, fb, state,
&modeset_pipes,
&prepare_pipes,
&disable_pipes);
- if (IS_ERR(pipe_config))
- return PTR_ERR(pipe_config);
+ if (IS_ERR(pipe_config)) {
+ ret = PTR_ERR(pipe_config);
+ goto out;
+ }
- return intel_set_mode_pipes(crtc, mode, x, y, fb, pipe_config,
- modeset_pipes, prepare_pipes,
- disable_pipes);
+ ret = intel_set_mode_pipes(crtc, mode, x, y, fb, pipe_config,
+ modeset_pipes, prepare_pipes,
+ disable_pipes);
+
+out:
+ return ret;
}
void intel_crtc_restore_mode(struct drm_crtc *crtc)
{
- intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y, crtc->primary->fb);
+ struct drm_device *dev = crtc->dev;
+ struct drm_atomic_state *state;
+ struct intel_encoder *encoder;
+ struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
+
+ state = drm_atomic_state_alloc(dev);
+ if (!state) {
+ DRM_DEBUG_KMS("[CRTC:%d] mode restore failed, out of memory",
+ crtc->base.id);
+ return;
+ }
+
+ state->acquire_ctx = dev->mode_config.acquire_ctx;
+
+ /* The force restore path in the HW readout code relies on the staged
+ * config still keeping the user requested config while the actual
+ * state has been overwritten by the configuration read from HW. We
+ * need to copy the staged config to the atomic state, otherwise the
+ * mode set will just reapply the state the HW is already in. */
+ for_each_intel_encoder(dev, encoder) {
+ if (&encoder->new_crtc->base != crtc)
+ continue;
+
+ for_each_intel_connector(dev, connector) {
+ if (connector->new_encoder != encoder)
+ continue;
+
+ connector_state = drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state)) {
+ DRM_DEBUG_KMS("Failed to add [CONNECTOR:%d:%s] to state: %ld\n",
+ connector->base.base.id,
+ connector->base.name,
+ PTR_ERR(connector_state));
+ continue;
+ }
+
+ connector_state->crtc = crtc;
+ connector_state->best_encoder = &encoder->base;
+ }
+ }
+
+ intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y, crtc->primary->fb,
+ state);
+
+ drm_atomic_state_free(state);
}
#undef for_each_intel_crtc_masked
}
count = 0;
- list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
connector->new_encoder =
to_intel_encoder(config->save_connector_encoders[count++]);
}
static int
intel_modeset_stage_output_state(struct drm_device *dev,
struct drm_mode_set *set,
- struct intel_set_config *config)
+ struct intel_set_config *config,
+ struct drm_atomic_state *state)
{
struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
struct intel_crtc *crtc;
int ro;
WARN_ON(!set->fb && (set->num_connectors != 0));
WARN_ON(set->fb && (set->num_connectors == 0));
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
/* Otherwise traverse passed in connector list and get encoders
* for them. */
for (ro = 0; ro < set->num_connectors; ro++) {
if (&connector->new_encoder->base != connector->base.encoder) {
- DRM_DEBUG_KMS("encoder changed, full mode switch\n");
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] encoder changed, full mode switch\n",
+ connector->base.base.id,
+ connector->base.name);
config->mode_changed = true;
}
}
/* connector->new_encoder is now updated for all connectors. */
/* Update crtc of enabled connectors. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
struct drm_crtc *new_crtc;
if (!connector->new_encoder)
}
connector->new_encoder->new_crtc = to_intel_crtc(new_crtc);
+ connector_state =
+ drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state))
+ return PTR_ERR(connector_state);
+
+ connector_state->crtc = new_crtc;
+ connector_state->best_encoder = &connector->new_encoder->base;
+
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [CRTC:%d]\n",
connector->base.base.id,
connector->base.name,
/* Check for any encoders that needs to be disabled. */
for_each_intel_encoder(dev, encoder) {
int num_connectors = 0;
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->new_encoder == encoder) {
WARN_ON(!connector->new_encoder->new_crtc);
num_connectors++;
/* Only now check for crtc changes so we don't miss encoders
* that will be disabled. */
if (&encoder->new_crtc->base != encoder->base.crtc) {
- DRM_DEBUG_KMS("crtc changed, full mode switch\n");
+ DRM_DEBUG_KMS("[ENCODER:%d:%s] crtc changed, full mode switch\n",
+ encoder->base.base.id,
+ encoder->base.name);
config->mode_changed = true;
}
}
/* Now we've also updated encoder->new_crtc for all encoders. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
- if (connector->new_encoder)
+ for_each_intel_connector(dev, connector) {
+ connector_state =
+ drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state))
+ return PTR_ERR(connector_state);
+
+ if (connector->new_encoder) {
if (connector->new_encoder != connector->encoder)
connector->encoder = connector->new_encoder;
+ } else {
+ connector_state->crtc = NULL;
+ }
}
for_each_intel_crtc(dev, crtc) {
crtc->new_enabled = false;
}
if (crtc->new_enabled != crtc->base.state->enable) {
- DRM_DEBUG_KMS("crtc %sabled, full mode switch\n",
+ DRM_DEBUG_KMS("[CRTC:%d] %sabled, full mode switch\n",
+ crtc->base.base.id,
crtc->new_enabled ? "en" : "dis");
config->mode_changed = true;
}
DRM_DEBUG_KMS("Trying to restore without FB -> disabling pipe %c\n",
pipe_name(crtc->pipe));
- list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->new_encoder &&
connector->new_encoder->new_crtc == crtc)
connector->new_encoder = NULL;
{
struct drm_device *dev;
struct drm_mode_set save_set;
+ struct drm_atomic_state *state = NULL;
struct intel_set_config *config;
struct intel_crtc_state *pipe_config;
unsigned modeset_pipes, prepare_pipes, disable_pipes;
* such cases. */
intel_set_config_compute_mode_changes(set, config);
- ret = intel_modeset_stage_output_state(dev, set, config);
+ state = drm_atomic_state_alloc(dev);
+ if (!state) {
+ ret = -ENOMEM;
+ goto out_config;
+ }
+
+ state->acquire_ctx = dev->mode_config.acquire_ctx;
+
+ ret = intel_modeset_stage_output_state(dev, set, config, state);
if (ret)
goto fail;
pipe_config = intel_modeset_compute_config(set->crtc, set->mode,
- set->fb,
+ set->fb, state,
&modeset_pipes,
&prepare_pipes,
&disable_pipes);
*/
}
- /* set_mode will free it in the mode_changed case */
- if (!config->mode_changed)
- kfree(pipe_config);
-
intel_update_pipe_size(to_intel_crtc(set->crtc));
if (config->mode_changed) {
fail:
intel_set_config_restore_state(dev, config);
+ drm_atomic_state_clear(state);
+
/*
* HACK: if the pipe was on, but we didn't have a framebuffer,
* force the pipe off to avoid oopsing in the modeset code
/* Try to restore the config */
if (config->mode_changed &&
intel_set_mode(save_set.crtc, save_set.mode,
- save_set.x, save_set.y, save_set.fb))
+ save_set.x, save_set.y, save_set.fb,
+ state))
DRM_ERROR("failed to restore config after modeset failure\n");
}
out_config:
+ if (state)
+ drm_atomic_state_free(state);
+
intel_set_config_free(config);
return ret;
}
BUG_ON(dev_priv->num_shared_dpll > I915_NUM_PLLS);
}
+/**
+ * intel_wm_need_update - Check whether watermarks need updating
+ * @plane: drm plane
+ * @state: new plane state
+ *
+ * Check current plane state versus the new one to determine whether
+ * watermarks need to be recalculated.
+ *
+ * Returns true or false.
+ */
+bool intel_wm_need_update(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ /* Update watermarks on tiling changes. */
+ if (!plane->state->fb || !state->fb ||
+ plane->state->fb->modifier[0] != state->fb->modifier[0] ||
+ plane->state->rotation != state->rotation)
+ return true;
+
+ return false;
+}
+
/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
if (ret)
DRM_DEBUG_KMS("failed to attach phys object\n");
} else {
- ret = intel_pin_and_fence_fb_obj(plane, fb, NULL);
+ ret = intel_pin_and_fence_fb_obj(plane, fb, new_state, NULL);
}
if (ret == 0)
if (plane->type != DRM_PLANE_TYPE_CURSOR ||
!INTEL_INFO(dev)->cursor_needs_physical) {
mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(obj);
+ intel_unpin_fb_obj(fb, old_state);
mutex_unlock(&dev->struct_mutex);
}
}
intel_crtc->atomic.update_fbc = true;
- /* Update watermarks on tiling changes. */
- if (!plane->state->fb || !state->base.fb ||
- plane->state->fb->modifier[0] !=
- state->base.fb->modifier[0])
+ if (intel_wm_need_update(plane, &state->base))
intel_crtc->atomic.update_wm = true;
}
struct drm_device *dev = plane->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc;
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_rect *src = &state->src;
crtc = crtc ? crtc : plane->crtc;
crtc->x = src->x1 >> 16;
crtc->y = src->y1 >> 16;
- intel_plane->obj = obj;
-
if (intel_crtc->active) {
if (state->visible) {
/* FIXME: kill this fastboot hack */
}
const struct drm_plane_funcs intel_plane_funcs = {
- .update_plane = drm_atomic_helper_update_plane,
- .disable_plane = drm_atomic_helper_disable_plane,
+ .update_plane = drm_plane_helper_update,
+ .disable_plane = drm_plane_helper_disable,
.destroy = intel_plane_destroy,
.set_property = drm_atomic_helper_plane_set_property,
.atomic_get_property = intel_plane_atomic_get_property,
finish:
if (intel_crtc->active) {
- if (intel_crtc->cursor_width != state->base.crtc_w)
+ if (plane->state->crtc_w != state->base.crtc_w)
intel_crtc->atomic.update_wm = true;
intel_crtc->atomic.fb_bits |=
struct drm_crtc *crtc = state->base.crtc;
struct drm_device *dev = plane->dev;
struct intel_crtc *intel_crtc;
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_i915_gem_object *obj = intel_fb_obj(state->base.fb);
uint32_t addr;
crtc->cursor_x = state->base.crtc_x;
crtc->cursor_y = state->base.crtc_y;
- intel_plane->obj = obj;
-
if (intel_crtc->cursor_bo == obj)
goto update;
intel_crtc->cursor_addr = addr;
intel_crtc->cursor_bo = obj;
update:
- intel_crtc->cursor_width = state->base.crtc_w;
- intel_crtc->cursor_height = state->base.crtc_h;
if (intel_crtc->active)
intel_crtc_update_cursor(crtc, state->visible);
if (HAS_DDI(dev)) {
int found;
- /* Haswell uses DDI functions to detect digital outputs */
+ /*
+ * Haswell uses DDI functions to detect digital outputs.
+ * On SKL pre-D0 the strap isn't connected, so we assume
+ * it's there.
+ */
found = I915_READ(DDI_BUF_CTL_A) & DDI_INIT_DISPLAY_DETECTED;
- /* DDI A only supports eDP */
- if (found)
+ /* WaIgnoreDDIAStrap: skl */
+ if (found ||
+ (IS_SKYLAKE(dev) && INTEL_REVID(dev) < SKL_REVID_D0))
intel_ddi_init(dev, PORT_A);
/* DDI B, C and D detection is indicated by the SFUSE_STRAP
* testing/debug of the plane operations (and only when a specific
* kernel module option is given), that shouldn't really matter.
*
+ * We are also relying on these states to convert the legacy mode set
+ * to use a drm_atomic_state struct. The states are kept consistent
+ * with the actual state, so it is safe to rely on them instead of
+ * the staged config.
+ *
* Once atomic support for crtc's + connectors lands, this loop should
* be removed since we'll be setting up real connector state, which
* will contain Intel-specific properties.
*/
- if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- head) {
- if (!WARN_ON(connector->state)) {
- connector->state =
- kzalloc(sizeof(*connector->state),
- GFP_KERNEL);
- }
+ list_for_each_entry(connector,
+ &dev->mode_config.connector_list,
+ head) {
+ if (!WARN_ON(connector->state)) {
+ connector->state = kzalloc(sizeof(*connector->state),
+ GFP_KERNEL);
}
}
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj)
{
- int aligned_height;
+ unsigned int aligned_height;
int ret;
u32 pitch_limit, stride_alignment;
case I915_FORMAT_MOD_X_TILED:
break;
default:
- DRM_ERROR("Unsupported fb modifier 0x%llx!\n",
- mode_cmd->modifier[0]);
+ DRM_DEBUG("Unsupported fb modifier 0x%llx!\n",
+ mode_cmd->modifier[0]);
return -EINVAL;
}
} else if (IS_IVYBRIDGE(dev)) {
/* FIXME: detect B0+ stepping and use auto training */
dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
- dev_priv->display.modeset_global_resources =
- ivb_modeset_global_resources;
} else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
dev_priv->display.fdi_link_train = hsw_fdi_link_train;
} else if (IS_VALLEYVIEW(dev)) {
for_each_pipe(dev_priv, pipe) {
intel_crtc_init(dev, pipe);
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
ret = intel_plane_init(dev, pipe, sprite);
if (ret)
DRM_DEBUG_KMS("pipe %c sprite %c init failed: %d\n",
* If the fb is shared between multiple heads, we'll
* just get the first one.
*/
- intel_find_plane_obj(crtc, &crtc->plane_config);
+ intel_find_initial_plane_obj(crtc, &crtc->plane_config);
}
}
}
/* We can't just switch on the pipe A, we need to set things up with a
* proper mode and output configuration. As a gross hack, enable pipe A
* by enabling the load detect pipe once. */
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder->type == INTEL_OUTPUT_ANALOG) {
crt = &connector->base;
break;
return;
if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, ctx))
- intel_release_load_detect_pipe(crt, &load_detect_temp);
+ intel_release_load_detect_pipe(crt, &load_detect_temp, ctx);
}
static bool
crtc->plane = plane;
/* ... and break all links. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder->base.crtc != &crtc->base)
continue;
}
/* multiple connectors may have the same encoder:
* handle them and break crtc link separately */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head)
+ for_each_intel_connector(dev, connector)
if (connector->encoder->base.crtc == &crtc->base) {
connector->encoder->base.crtc = NULL;
connector->encoder->connectors_active = false;
* a bug in one of the get_hw_state functions. Or someplace else
* in our code, like the register restore mess on resume. Clamp
* things to off as a safer default. */
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder != encoder)
continue;
connector->base.dpms = DRM_MODE_DPMS_OFF;
pipe_name(pipe));
}
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->get_hw_state(connector)) {
connector->base.dpms = DRM_MODE_DPMS_ON;
connector->encoder->connectors_active = true;
"[setup_hw_state]");
}
+ intel_modeset_update_connector_atomic_state(dev);
+
for (i = 0; i < dev_priv->num_shared_dpll; i++) {
struct intel_shared_dpll *pll = &dev_priv->shared_dplls[i];
struct drm_crtc *crtc =
dev_priv->pipe_to_crtc_mapping[pipe];
- intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y,
- crtc->primary->fb);
+ intel_crtc_restore_mode(crtc);
}
} else {
intel_modeset_update_staged_output_state(dev);
if (intel_pin_and_fence_fb_obj(c->primary,
c->primary->fb,
+ c->primary->state,
NULL)) {
DRM_ERROR("failed to pin boot fb on pipe %d\n",
to_intel_crtc(c)->pipe);
intel_fbc_disable(dev);
- ironlake_teardown_rc6(dev);
-
mutex_unlock(&dev->struct_mutex);
/* flush any delayed tasks or pending work */
{ DP_LINK_BW_5_4, /* m2_int = 27, m2_fraction = 0 */
{ .p1 = 2, .p2 = 1, .n = 1, .m1 = 2, .m2 = 0x6c00000 } }
};
+/* Skylake supports the following rates */
+static const int gen9_rates[] = { 162000, 216000, 270000,
+ 324000, 432000, 540000 };
+static const int chv_rates[] = { 162000, 202500, 210000, 216000,
+ 243000, 270000, 324000, 405000,
+ 420000, 432000, 540000 };
+static const int default_rates[] = { 162000, 270000, 540000 };
/**
* is_edp - is the given port attached to an eDP panel (either CPU or PCH)
static void vlv_steal_power_sequencer(struct drm_device *dev,
enum pipe pipe);
-int
-intel_dp_max_link_bw(struct intel_dp *intel_dp)
+static int
+intel_dp_max_link_bw(struct intel_dp *intel_dp)
{
int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
- struct drm_device *dev = intel_dp->attached_connector->base.dev;
switch (max_link_bw) {
case DP_LINK_BW_1_62:
case DP_LINK_BW_2_7:
- break;
- case DP_LINK_BW_5_4: /* 1.2 capable displays may advertise higher bw */
- if (((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) ||
- INTEL_INFO(dev)->gen >= 8) &&
- intel_dp->dpcd[DP_DPCD_REV] >= 0x12)
- max_link_bw = DP_LINK_BW_5_4;
- else
- max_link_bw = DP_LINK_BW_2_7;
+ case DP_LINK_BW_5_4:
break;
default:
WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
target_clock = fixed_mode->clock;
}
- max_link_clock = drm_dp_bw_code_to_link_rate(intel_dp_max_link_bw(intel_dp));
+ max_link_clock = intel_dp_max_link_rate(intel_dp);
max_lanes = intel_dp_max_lane_count(intel_dp);
max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
size_t txsize, rxsize;
int ret;
- txbuf[0] = msg->request << 4;
- txbuf[1] = msg->address >> 8;
+ txbuf[0] = (msg->request << 4) |
+ ((msg->address >> 16) & 0xf);
+ txbuf[1] = (msg->address >> 8) & 0xff;
txbuf[2] = msg->address & 0xff;
txbuf[3] = msg->size - 1;
case DP_AUX_NATIVE_WRITE:
case DP_AUX_I2C_WRITE:
txsize = msg->size ? HEADER_SIZE + msg->size : BARE_ADDRESS_SIZE;
- rxsize = 1;
+ rxsize = 2; /* 0 or 1 data bytes */
if (WARN_ON(txsize > 20))
return -E2BIG;
if (ret > 0) {
msg->reply = rxbuf[0] >> 4;
- /* Return payload size. */
- ret = msg->size;
+ if (ret > 1) {
+ /* Number of bytes written in a short write. */
+ ret = clamp_t(int, rxbuf[1], 0, msg->size);
+ } else {
+ /* Return payload size. */
+ ret = msg->size;
+ }
}
break;
}
static void
-skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_bw)
+skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_clock)
{
u32 ctrl1;
pipe_config->dpll_hw_state.cfgcr2 = 0;
ctrl1 = DPLL_CTRL1_OVERRIDE(SKL_DPLL0);
- switch (link_bw) {
- case DP_LINK_BW_1_62:
+ switch (link_clock / 2) {
+ case 81000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_810,
SKL_DPLL0);
break;
- case DP_LINK_BW_2_7:
+ case 135000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1350,
SKL_DPLL0);
break;
- case DP_LINK_BW_5_4:
+ case 270000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2700,
SKL_DPLL0);
break;
+ case 162000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1620,
+ SKL_DPLL0);
+ break;
+ /*
+ * TBD: For DP link rates 2.16 GHz and 4.32 GHz, VCO is 8640, which
+ * results in a CDCLK change. Need to handle the change of CDCLK by
+ * disabling pipes and re-enabling them.
+ */
+ case 108000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1080,
+ SKL_DPLL0);
+ break;
+ case 216000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2160,
+ SKL_DPLL0);
+ break;
}
pipe_config->dpll_hw_state.ctrl1 = ctrl1;
}
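The switch keys on link_clock / 2 because the DPLL_CRTL1_LINK_RATE_* macros are named after half the DP link clock, as the cases themselves spell out:

/*
 *  162000 kHz / 2 =  81000  ->  DPLL_CRTL1_LINK_RATE_810
 *  270000 kHz / 2 = 135000  ->  DPLL_CRTL1_LINK_RATE_1350
 *  540000 kHz / 2 = 270000  ->  DPLL_CRTL1_LINK_RATE_2700
 *  324000 kHz / 2 = 162000  ->  DPLL_CRTL1_LINK_RATE_1620
 *  216000 kHz / 2 = 108000  ->  DPLL_CRTL1_LINK_RATE_1080
 *  432000 kHz / 2 = 216000  ->  DPLL_CRTL1_LINK_RATE_2160
 */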
}
}
+static int
+intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
+{
+ if (intel_dp->num_sink_rates) {
+ *sink_rates = intel_dp->sink_rates;
+ return intel_dp->num_sink_rates;
+ }
+
+ *sink_rates = default_rates;
+
+ return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
+}
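The (intel_dp_max_link_bw(intel_dp) >> 3) + 1 fallback relies on a numeric property of the DPCD bandwidth codes: shifting each code right by three yields its index into default_rates[], so adding one gives the usable entry count. A tiny self-contained check:

#include <assert.h>

#define DP_LINK_BW_1_62 0x06
#define DP_LINK_BW_2_7  0x0a
#define DP_LINK_BW_5_4  0x14

int main(void)
{
	/* 0x06 >> 3 == 0, 0x0a >> 3 == 1, 0x14 >> 3 == 2, so the
	 * code-to-count conversion indexes default_rates[] exactly. */
	assert((DP_LINK_BW_1_62 >> 3) + 1 == 1);	/* {162000} */
	assert((DP_LINK_BW_2_7  >> 3) + 1 == 2);	/* + 270000 */
	assert((DP_LINK_BW_5_4  >> 3) + 1 == 3);	/* + 540000 */
	return 0;
}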
+
+static int
+intel_dp_source_rates(struct drm_device *dev, const int **source_rates)
+{
+ if (INTEL_INFO(dev)->gen >= 9) {
+ *source_rates = gen9_rates;
+ return ARRAY_SIZE(gen9_rates);
+ } else if (IS_CHERRYVIEW(dev)) {
+ *source_rates = chv_rates;
+ return ARRAY_SIZE(chv_rates);
+ }
+
+ *source_rates = default_rates;
+
+ if (IS_SKYLAKE(dev) && INTEL_REVID(dev) <= SKL_REVID_B0)
+ /* WaDisableHBR2:skl */
+ return (DP_LINK_BW_2_7 >> 3) + 1;
+ else if (INTEL_INFO(dev)->gen >= 8 ||
+ (IS_HASWELL(dev) && !IS_HSW_ULX(dev)))
+ return (DP_LINK_BW_5_4 >> 3) + 1;
+ else
+ return (DP_LINK_BW_2_7 >> 3) + 1;
+}
+
static void
intel_dp_set_clock(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config, int link_bw)
}
}
+static int intersect_rates(const int *source_rates, int source_len,
+ const int *sink_rates, int sink_len,
+ int *common_rates)
+{
+ int i = 0, j = 0, k = 0;
+
+ while (i < source_len && j < sink_len) {
+ if (source_rates[i] == sink_rates[j]) {
+ if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
+ return k;
+ common_rates[k] = source_rates[i];
+ ++k;
+ ++i;
+ ++j;
+ } else if (source_rates[i] < sink_rates[j]) {
+ ++i;
+ } else {
+ ++j;
+ }
+ }
+ return k;
+}
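intersect_rates() is the classic two-pointer walk over two ascending-sorted arrays, linear in the combined length. A standalone copy of the same algorithm (the WARN_ON swapped for a plain bounds check), with a usage example:

#include <stdio.h>

#define MAX_RATES 8

/* Both inputs must be sorted ascending, like the driver's rate tables. */
static int intersect_sorted(const int *a, int alen,
			    const int *b, int blen, int *out)
{
	int i = 0, j = 0, k = 0;

	while (i < alen && j < blen) {
		if (a[i] == b[j]) {
			if (k >= MAX_RATES)	/* stand-in for WARN_ON */
				return k;
			out[k++] = a[i];
			i++;
			j++;
		} else if (a[i] < b[j]) {
			i++;
		} else {
			j++;
		}
	}
	return k;
}

int main(void)
{
	static const int source[] = { 162000, 216000, 270000,
				      324000, 432000, 540000 };
	static const int sink[] = { 162000, 270000, 540000 };
	int common[MAX_RATES];
	int i, n;

	n = intersect_sorted(source, 6, sink, 3, common);
	for (i = 0; i < n; i++)
		printf("%d\n", common[i]);	/* 162000 270000 540000 */
	return 0;
}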
+
+static int intel_dp_common_rates(struct intel_dp *intel_dp,
+ int *common_rates)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ const int *source_rates, *sink_rates;
+ int source_len, sink_len;
+
+ sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+ source_len = intel_dp_source_rates(dev, &source_rates);
+
+ return intersect_rates(source_rates, source_len,
+ sink_rates, sink_len,
+ common_rates);
+}
+
+static void snprintf_int_array(char *str, size_t len,
+ const int *array, int nelem)
+{
+ int i;
+
+ str[0] = '\0';
+
+ for (i = 0; i < nelem; i++) {
+ int r = snprintf(str, len, "%d,", array[i]);
+ if (r >= len)
+ return;
+ str += r;
+ len -= r;
+ }
+}
+
+static void intel_dp_print_rates(struct intel_dp *intel_dp)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ const int *source_rates, *sink_rates;
+ int source_len, sink_len, common_len;
+ int common_rates[DP_MAX_SUPPORTED_RATES];
+ char str[128]; /* FIXME: too big for stack? */
+
+ if ((drm_debug & DRM_UT_KMS) == 0)
+ return;
+
+ source_len = intel_dp_source_rates(dev, &source_rates);
+ snprintf_int_array(str, sizeof(str), source_rates, source_len);
+ DRM_DEBUG_KMS("source rates: %s\n", str);
+
+ sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+ snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
+ DRM_DEBUG_KMS("sink rates: %s\n", str);
+
+ common_len = intel_dp_common_rates(intel_dp, common_rates);
+ snprintf_int_array(str, sizeof(str), common_rates, common_len);
+ DRM_DEBUG_KMS("common rates: %s\n", str);
+}
+
+static int rate_to_index(int find, const int *rates)
+{
+ int i;
+
+ for (i = 0; i < DP_MAX_SUPPORTED_RATES; ++i)
+ if (find == rates[i])
+ break;
+
+ return i;
+}
+
+int
+intel_dp_max_link_rate(struct intel_dp *intel_dp)
+{
+ int rates[DP_MAX_SUPPORTED_RATES] = {};
+ int len;
+
+ len = intel_dp_common_rates(intel_dp, rates);
+ if (WARN_ON(len <= 0))
+ return 162000;
+
+ return rates[rate_to_index(0, rates) - 1];
+}
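Note that rate_to_index() doubles as a length finder: called with 0 on a zero-padded array it returns the index of the first unused slot, so rates[rate_to_index(0, rates) - 1] picks the last, i.e. highest, common rate. Worked through for a typical table:

/*
 * common rates = { 162000, 270000, 540000, 0, 0, ... }
 *
 *   rate_to_index(0, rates)  == 3        (first zero entry = length)
 *   rates[3 - 1]             == 540000   (the max common link rate)
 *
 * intel_dp_rate_select(), just below, performs the same lookup against
 * the sink's own sink_rates[] table to produce the index written to
 * DP_LINK_RATE_SET.
 */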
+
+int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
+{
+ return rate_to_index(rate, intel_dp->sink_rates);
+}
+
bool
intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = dp_to_dig_port(intel_dp)->port;
- struct intel_crtc *intel_crtc = encoder->new_crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_connector *intel_connector = intel_dp->attached_connector;
int lane_count, clock;
int min_lane_count = 1;
int max_lane_count = intel_dp_max_lane_count(intel_dp);
/* Conveniently, the link BW constants become indices with a shift...*/
int min_clock = 0;
- int max_clock = intel_dp_max_link_bw(intel_dp) >> 3;
+ int max_clock;
int bpp, mode_rate;
- static int bws[] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7, DP_LINK_BW_5_4 };
int link_avail, link_clock;
+ int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+ int common_len;
+
+ common_len = intel_dp_common_rates(intel_dp, common_rates);
+
+ /* No common link rates between source and sink */
+ WARN_ON(common_len <= 0);
+
+ max_clock = common_len - 1;
if (HAS_PCH_SPLIT(dev) && !HAS_DDI(dev) && port != PORT_A)
pipe_config->has_pch_encoder = true;
return false;
DRM_DEBUG_KMS("DP link computation with max lane count %i "
- "max bw %02x pixel clock %iKHz\n",
- max_lane_count, bws[max_clock],
+ "max bw %d pixel clock %iKHz\n",
+ max_lane_count, common_rates[max_clock],
adjusted_mode->crtc_clock);
/* Walk through all bpp values. Luckily they're all nicely spaced with 2
bpp);
for (clock = min_clock; clock <= max_clock; clock++) {
- for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) {
- link_clock = drm_dp_bw_code_to_link_rate(bws[clock]);
+ for (lane_count = min_lane_count;
+ lane_count <= max_lane_count;
+ lane_count <<= 1) {
+
+ link_clock = common_rates[clock];
link_avail = intel_dp_max_data_rate(link_clock,
lane_count);
if (intel_dp->color_range)
pipe_config->limited_color_range = true;
- intel_dp->link_bw = bws[clock];
intel_dp->lane_count = lane_count;
+
+ if (intel_dp->num_sink_rates) {
+ intel_dp->link_bw = 0;
+ intel_dp->rate_select =
+ intel_dp_rate_select(intel_dp, common_rates[clock]);
+ } else {
+ intel_dp->link_bw =
+ drm_dp_link_rate_to_bw_code(common_rates[clock]);
+ intel_dp->rate_select = 0;
+ }
+
pipe_config->pipe_bpp = bpp;
- pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);
+ pipe_config->port_clock = common_rates[clock];
DRM_DEBUG_KMS("DP link bw %02x lane count %d clock %d bpp %d\n",
intel_dp->link_bw, intel_dp->lane_count,
}
if (IS_SKYLAKE(dev) && is_edp(intel_dp))
- skl_edp_set_pll_config(pipe_config, intel_dp->link_bw);
+ skl_edp_set_pll_config(pipe_config, common_rates[clock]);
else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hsw_dp_set_ddi_pll_sel(pipe_config, intel_dp->link_bw);
else
if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
+ if (intel_dp->num_sink_rates)
+ drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
+ &intel_dp->rate_select, 1);
link_config[0] = 0;
link_config[1] = DP_SET_ANSI_8B10B;
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ uint8_t rev;
if (intel_dp_dpcd_read_wake(&intel_dp->aux, 0x000, intel_dp->dpcd,
sizeof(intel_dp->dpcd)) < 0)
} else
intel_dp->use_tps3 = false;
+ /* Intermediate frequency support */
+ if (is_edp(intel_dp) &&
+ (intel_dp->dpcd[DP_EDP_CONFIGURATION_CAP] & DP_DPCD_DISPLAY_CONTROL_CAPABLE) &&
+ (intel_dp_dpcd_read_wake(&intel_dp->aux, DP_EDP_DPCD_REV, &rev, 1) == 1) &&
+ (rev >= 0x03)) { /* eDp v1.4 or higher */
+ __le16 sink_rates[DP_MAX_SUPPORTED_RATES];
+ int i;
+
+ intel_dp_dpcd_read_wake(&intel_dp->aux,
+ DP_SUPPORTED_LINK_RATES,
+ sink_rates,
+ sizeof(sink_rates));
+
+ for (i = 0; i < ARRAY_SIZE(sink_rates); i++) {
+ int val = le16_to_cpu(sink_rates[i]);
+
+ if (val == 0)
+ break;
+
+ intel_dp->sink_rates[i] = val * 200;
+ }
+ intel_dp->num_sink_rates = i;
+ }
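The val * 200 conversion works because DP_SUPPORTED_LINK_RATES entries are little-endian 16-bit values in units of 200 kHz (per eDP 1.4), which lands directly in the kHz units used for port_clock. A quick self-contained check:

#include <stdio.h>

int main(void)
{
	/* Raw table values (already le16_to_cpu'd) for the classic rates. */
	const unsigned raw[] = { 810, 1350, 2700 };
	int i;

	for (i = 0; i < 3; i++)
		printf("%u -> %u kHz\n", raw[i], raw[i] * 200);
	/* 810 -> 162000 kHz, 1350 -> 270000 kHz, 2700 -> 540000 kHz */
	return 0;
}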
+
+ intel_dp_print_rates(intel_dp);
+
if (!(intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
DP_DWN_STRM_PORT_PRESENT))
return true; /* native DP sink */
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_dp_connector_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_dp_connector_helper_funcs = {
dig_port = dp_to_dig_port(intel_dp);
encoder = &dig_port->base;
- intel_crtc = encoder->new_crtc;
+ intel_crtc = to_intel_crtc(encoder->base.crtc);
if (!intel_crtc) {
DRM_DEBUG_KMS("DRRS: intel_crtc not initialized\n");
if (!dev_priv->drrs.dp)
return;
+ cancel_delayed_work_sync(&dev_priv->drrs.work);
+
mutex_lock(&dev_priv->drrs.mutex);
crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
pipe = to_intel_crtc(crtc)->pipe;
if (dev_priv->drrs.refresh_rate_type == DRRS_LOW_RR) {
- cancel_delayed_work_sync(&dev_priv->drrs.work);
intel_dp_set_drrs_state(dev_priv->dev,
dev_priv->drrs.dp->attached_connector->panel.
fixed_mode->vrefresh);
if (!dev_priv->drrs.dp)
return;
+ cancel_delayed_work_sync(&dev_priv->drrs.work);
+
mutex_lock(&dev_priv->drrs.mutex);
crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
pipe = to_intel_crtc(crtc)->pipe;
dev_priv->drrs.busy_frontbuffer_bits &= ~frontbuffer_bits;
- cancel_delayed_work_sync(&dev_priv->drrs.work);
-
if (dev_priv->drrs.refresh_rate_type != DRRS_LOW_RR &&
!dev_priv->drrs.busy_frontbuffer_bits)
schedule_delayed_work(&dev_priv->drrs.work,
struct edid *edid;
enum pipe pipe = INVALID_PIPE;
- dev_priv->drrs.type = DRRS_NOT_SUPPORTED;
-
if (!is_edp(intel_dp))
return true;
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
struct intel_digital_port *intel_dig_port = intel_mst->primary;
struct intel_dp *intel_dp = &intel_dig_port->dp;
- struct drm_device *dev = encoder->base.dev;
- int bpp;
- int lane_count, slots;
+ struct drm_atomic_state *state;
+ int bpp, i;
+ int lane_count, slots, rate;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
- struct intel_connector *found = NULL, *intel_connector;
+ struct intel_connector *found = NULL;
int mst_pbn;
pipe_config->dp_encoder_is_mst = true;
* seem to suggest we should do otherwise.
*/
lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
- intel_dp->link_bw = intel_dp_max_link_bw(intel_dp);
+
+ rate = intel_dp_max_link_rate(intel_dp);
+
+ if (intel_dp->num_sink_rates) {
+ intel_dp->link_bw = 0;
+ intel_dp->rate_select = intel_dp_rate_select(intel_dp, rate);
+ } else {
+ intel_dp->link_bw = drm_dp_link_rate_to_bw_code(rate);
+ intel_dp->rate_select = 0;
+ }
+
intel_dp->lane_count = lane_count;
pipe_config->pipe_bpp = 24;
- pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);
+ pipe_config->port_clock = rate;
- list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
- if (intel_connector->new_encoder == encoder) {
- found = intel_connector;
+ state = pipe_config->base.state;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ if (state->connector_states[i]->best_encoder == &encoder->base) {
+ found = to_intel_connector(state->connectors[i]);
break;
}
}
struct drm_crtc *crtc = encoder->base.crtc;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, intel_connector) {
if (intel_connector->new_encoder == encoder) {
found = intel_connector;
break;
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_dp_mst_connector_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static int intel_dp_mst_get_modes(struct drm_connector *connector)
#include <drm/drm_fb_helper.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_rect.h>
+#include <drm/drm_atomic.h>
#define DIV_ROUND_CLOSEST_ULL(ll, d) \
({ unsigned long long _tmp = (ll)+(d)/2; do_div(_tmp, d); _tmp; })
ret__ = -ETIMEDOUT; \
break; \
} \
- if (W && drm_can_sleep()) { \
- msleep(W); \
+ if ((W) && drm_can_sleep()) { \
+ usleep_range((W)*1000, (W)*2000); \
} else { \
cpu_relax(); \
} \
struct drm_i915_gem_object *cursor_bo;
uint32_t cursor_addr;
- int16_t cursor_width, cursor_height;
uint32_t cursor_cntl;
uint32_t cursor_size;
uint32_t cursor_base;
bool enabled;
bool scaled;
u64 tiling;
+ unsigned int rotation;
};
struct intel_plane {
struct drm_plane base;
int plane;
enum pipe pipe;
- struct drm_i915_gem_object *obj;
bool can_scale;
int max_downscale;
+ /* FIXME convert to properties */
+ struct drm_intel_sprite_colorkey ckey;
+
/* Since we need to change the watermarks before/after
* enabling/disabling the planes, we need to store the parameters here
* as the other pieces of the struct may not reflect the values we want
void (*update_plane)(struct drm_plane *plane,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
struct intel_plane_state *state);
void (*commit_plane)(struct drm_plane *plane,
struct intel_plane_state *state);
- int (*update_colorkey)(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key);
- void (*get_colorkey)(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key);
};
struct intel_watermark_params {
};
#define to_intel_crtc(x) container_of(x, struct intel_crtc, base)
+#define to_intel_crtc_state(x) container_of(x, struct intel_crtc_state, base)
#define to_intel_connector(x) container_of(x, struct intel_connector, base)
#define to_intel_encoder(x) container_of(x, struct intel_encoder, base)
#define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base)
uint32_t color_range;
bool color_range_auto;
uint8_t link_bw;
+ uint8_t rate_select;
uint8_t lane_count;
uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
+ /* sink rates as reported by DP_SUPPORTED_LINK_RATES */
+ uint8_t num_sink_rates;
+ int sink_rates[DP_MAX_SUPPORTED_RATES];
struct drm_dp_aux aux;
uint8_t train_set[4];
int panel_power_up_delay;
}
int intel_get_crtc_scanline(struct intel_crtc *crtc);
-void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv);
+void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
+ unsigned int pipe_mask);
/* intel_crt.c */
void intel_crt_init(struct drm_device *dev);
/* intel_frontbuffer.c */
void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *ring);
+ struct intel_engine_cs *ring,
+ enum fb_op_origin origin);
void intel_frontbuffer_flip_prepare(struct drm_device *dev,
unsigned frontbuffer_bits);
void intel_frontbuffer_flip_complete(struct drm_device *dev,
intel_frontbuffer_flush(dev, frontbuffer_bits);
}
-int intel_fb_align_height(struct drm_device *dev, int height,
- uint32_t pixel_format,
- uint64_t fb_format_modifier);
+unsigned int intel_fb_align_height(struct drm_device *dev,
+ unsigned int height,
+ uint32_t pixel_format,
+ uint64_t fb_format_modifier);
void intel_fb_obj_flush(struct drm_i915_gem_object *obj, bool retire);
u32 intel_fb_stride_alignment(struct drm_device *dev, uint64_t fb_modifier,
struct intel_load_detect_pipe *old,
struct drm_modeset_acquire_ctx *ctx);
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old);
+ struct intel_load_detect_pipe *old,
+ struct drm_modeset_acquire_ctx *ctx);
int intel_pin_and_fence_fb_obj(struct drm_plane *plane,
struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state,
struct intel_engine_cs *pipelined);
struct drm_framebuffer *
__intel_framebuffer_create(struct drm_device *dev,
struct drm_property *property,
uint64_t val);
+unsigned int
+intel_tile_height(struct drm_device *dev, uint32_t pixel_format,
+ uint64_t fb_format_modifier);
+
+static inline bool
+intel_rotation_90_or_270(unsigned int rotation)
+{
+ return rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270));
+}
+
+bool intel_wm_need_update(struct drm_plane *plane,
+ struct drm_plane_state *state);
+
/* shared dpll functions */
struct intel_shared_dpll *intel_crtc_to_shared_dpll(struct intel_crtc *crtc);
void assert_shared_dpll(struct drm_i915_private *dev_priv,
void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc);
void intel_modeset_preclose(struct drm_device *dev, struct drm_file *file);
+unsigned long intel_plane_obj_offset(struct intel_plane *intel_plane,
+ struct drm_i915_gem_object *obj);
+
/* intel_dp.c */
void intel_dp_init(struct drm_device *dev, int output_reg, enum port port);
bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
void intel_dp_mst_suspend(struct drm_device *dev);
void intel_dp_mst_resume(struct drm_device *dev);
-int intel_dp_max_link_bw(struct intel_dp *intel_dp);
+int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
void vlv_power_sequencer_reset(struct drm_i915_private *dev_priv);
uint32_t intel_dp_pack_aux(const uint8_t *src, int src_bytes);
void intel_fbc_update(struct drm_device *dev);
void intel_fbc_init(struct drm_i915_private *dev_priv);
void intel_fbc_disable(struct drm_device *dev);
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value);
+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits,
+ enum fb_op_origin origin);
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits);
/* intel_hdmi.c */
void intel_hdmi_init(struct drm_device *dev, int hdmi_reg, enum port port);
void intel_disable_gt_powersave(struct drm_device *dev);
void intel_suspend_gt_powersave(struct drm_device *dev);
void intel_reset_gt_powersave(struct drm_device *dev);
-void ironlake_teardown_rc6(struct drm_device *dev);
void gen6_update_ring_freq(struct drm_device *dev);
+void gen6_rps_busy(struct drm_i915_private *dev_priv);
+void gen6_rps_reset_ei(struct drm_i915_private *dev_priv);
void gen6_rps_idle(struct drm_i915_private *dev_priv);
void gen6_rps_boost(struct drm_i915_private *dev_priv);
void ilk_wm_get_hw_state(struct drm_device *dev);
int intel_plane_restore(struct drm_plane *plane);
int intel_sprite_set_colorkey(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
bool intel_pipe_update_start(struct intel_crtc *crtc,
uint32_t *start_vbl_count);
void intel_pipe_update_end(struct intel_crtc *crtc, u32 start_vbl_count);
struct drm_crtc_state *intel_crtc_duplicate_state(struct drm_crtc *crtc);
void intel_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state);
+static inline struct intel_crtc_state *
+intel_atomic_get_crtc_state(struct drm_atomic_state *state,
+ struct intel_crtc *crtc)
+{
+ struct drm_crtc_state *crtc_state;
+ crtc_state = drm_atomic_get_crtc_state(state, &crtc->base);
+ if (IS_ERR(crtc_state))
+ return ERR_PTR(PTR_ERR(crtc_state));
+
+ return to_intel_crtc_state(crtc_state);
+}
/* intel_atomic_plane.c */
struct intel_plane_state *intel_create_plane_state(struct drm_plane *plane);
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_get_property = intel_connector_atomic_get_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
void intel_dsi_init(struct drm_device *dev)
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_get_property = intel_connector_atomic_get_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_dvo_connector_helper_funcs = {
return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
}
-static void snb_fbc_blit_update(struct drm_device *dev)
+static void intel_fbc_nuke(struct drm_i915_private *dev_priv)
{
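+ /* Writing FBC_REND_NUKE forces the FBC hardware to invalidate its
+ * compressed buffer and recompress the frontbuffer, which is
+ * presumably what "nuke" refers to here. */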
- struct drm_i915_private *dev_priv = dev->dev_private;
- u32 blt_ecoskpd;
-
- /* Make sure blitter notifies FBC of writes */
-
- /* Blitter is part of Media powerwell on VLV. No impact of
- * his param in other platforms for now */
- intel_uncore_forcewake_get(dev_priv, FORCEWAKE_MEDIA);
-
- blt_ecoskpd = I915_READ(GEN6_BLITTER_ECOSKPD);
- blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY <<
- GEN6_BLITTER_LOCK_SHIFT;
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY;
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- blt_ecoskpd &= ~(GEN6_BLITTER_FBC_NOTIFY <<
- GEN6_BLITTER_LOCK_SHIFT);
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- POSTING_READ(GEN6_BLITTER_ECOSKPD);
-
- intel_uncore_forcewake_put(dev_priv, FORCEWAKE_MEDIA);
+ I915_WRITE(MSG_FBC_REND_STATE, FBC_REND_NUKE);
+ POSTING_READ(MSG_FBC_REND_STATE);
}
static void ilk_fbc_enable(struct drm_crtc *crtc)
I915_WRITE(SNB_DPFC_CTL_SA,
SNB_CPU_FENCE_ENABLE | obj->fence_reg);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
- snb_fbc_blit_update(dev);
}
+ intel_fbc_nuke(dev_priv);
+
DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
}
SNB_CPU_FENCE_ENABLE | obj->fence_reg);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
- snb_fbc_blit_update(dev);
+ intel_fbc_nuke(dev_priv);
DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
}
return dev_priv->fbc.enabled;
}
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (!IS_GEN8(dev))
- return;
-
- if (!intel_fbc_enabled(dev))
- return;
-
- I915_WRITE(MSG_FBC_REND_STATE, value);
-}
-
static void intel_fbc_work_fn(struct work_struct *__work)
{
struct intel_fbc_work *work =
goto out_disable;
}
- if (!i915.enable_fbc || !i915.powersave) {
+ if (!i915.enable_fbc) {
if (set_no_fbc_reason(dev_priv, FBC_MODULE_PARAM))
DRM_DEBUG_KMS("fbc disabled per module param\n");
goto out_disable;
i915_gem_stolen_cleanup_compression(dev);
}
+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits,
+ enum fb_op_origin origin)
+{
+ struct drm_device *dev = dev_priv->dev;
+ unsigned int fbc_bits;
+
+ if (origin == ORIGIN_GTT)
+ return;
+
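+ /* Figure out which frontbuffer bits FBC cares about: the primary
+ * plane FBC is currently enabled on, the one a pending fbc_work is
+ * about to enable it on, or, failing both, every primary plane it
+ * could possibly be enabled on. */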
+ if (dev_priv->fbc.enabled)
+ fbc_bits = INTEL_FRONTBUFFER_PRIMARY(dev_priv->fbc.crtc->pipe);
+ else if (dev_priv->fbc.fbc_work)
+ fbc_bits = INTEL_FRONTBUFFER_PRIMARY(
+ to_intel_crtc(dev_priv->fbc.fbc_work->crtc)->pipe);
+ else
+ fbc_bits = dev_priv->fbc.possible_framebuffer_bits;
+
+ dev_priv->fbc.busy_bits |= (fbc_bits & frontbuffer_bits);
+
+ if (dev_priv->fbc.busy_bits)
+ intel_fbc_disable(dev);
+}
+
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits)
+{
+ struct drm_device *dev = dev_priv->dev;
+
+ if (!dev_priv->fbc.busy_bits)
+ return;
+
+ dev_priv->fbc.busy_bits &= ~frontbuffer_bits;
+
+ if (!dev_priv->fbc.busy_bits)
+ intel_fbc_update(dev);
+}
+
/**
* intel_fbc_init - Initialize FBC
* @dev_priv: the i915 device
*/
void intel_fbc_init(struct drm_i915_private *dev_priv)
{
+ enum pipe pipe;
+
if (!HAS_FBC(dev_priv)) {
dev_priv->fbc.enabled = false;
dev_priv->fbc.no_fbc_reason = FBC_UNSUPPORTED;
return;
}
+ for_each_pipe(dev_priv, pipe) {
+ dev_priv->fbc.possible_framebuffer_bits |=
+ INTEL_FRONTBUFFER_PRIMARY(pipe);
+
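+ /* FBC is usable on pipe A only on HSW and gen8+, so only that
+ * pipe's primary plane needs tracking there. */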
+ if (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)
+ break;
+ }
+
if (INTEL_INFO(dev_priv)->gen >= 7) {
dev_priv->display.fbc_enabled = ilk_fbc_enabled;
dev_priv->display.enable_fbc = gen7_fbc_enable;
return ret;
}
+static int intel_fbdev_blank(int blank, struct fb_info *info)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct intel_fbdev *ifbdev =
+ container_of(fb_helper, struct intel_fbdev, helper);
+ int ret;
+
+ ret = drm_fb_helper_blank(blank, info);
+
+ if (ret == 0) {
+ /*
+ * FIXME: fbdev presumes that all callbacks also work from
+ * atomic contexts and relies on that for emergency oops
+ * printing. KMS totally doesn't do that and the locking here is
+ * by far not the only place this goes wrong. Ignore this for
+ * now until we solve this for real.
+ */
+ mutex_lock(&fb_helper->dev->struct_mutex);
+ intel_fb_obj_invalidate(ifbdev->fb->obj, NULL, ORIGIN_GTT);
+ mutex_unlock(&fb_helper->dev->struct_mutex);
+ }
+
+ return ret;
+}
+
static struct fb_ops intelfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = drm_fb_helper_check_var,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
.fb_pan_display = drm_fb_helper_pan_display,
- .fb_blank = drm_fb_helper_blank,
+ .fb_blank = intel_fbdev_blank,
.fb_setcmap = drm_fb_helper_setcmap,
.fb_debug_enter = drm_fb_helper_debug_enter,
.fb_debug_leave = drm_fb_helper_debug_leave,
}
/* Flush everything out, we'll be doing GTT only from now on */
- ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL);
+ ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL, NULL);
if (ret) {
DRM_ERROR("failed to pin obj: %d\n", ret);
goto out_fb;
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
- if (!i915.powersave)
- return;
-
for_each_pipe(dev_priv, pipe) {
if (!(frontbuffer_bits & INTEL_FRONTBUFFER_ALL_MASK(pipe)))
continue;
intel_increase_pllclock(dev, pipe);
- if (ring && intel_fbc_enabled(dev))
- ring->fbc_dirty = true;
}
}
* intel_fb_obj_invalidate - invalidate frontbuffer object
* @obj: GEM object to invalidate
* @ring: set for asynchronous rendering
+ * @origin: which operation caused the invalidation
*
* This function gets called every time rendering on the given object starts and
* frontbuffer caching (fbc, low refresh rate for DRRS, panel self refresh) must
* scheduled.
*/
void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *ring)
+ struct intel_engine_cs *ring,
+ enum fb_op_origin origin)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
intel_psr_invalidate(dev, obj->frontbuffer_bits);
intel_edp_drrs_invalidate(dev, obj->frontbuffer_bits);
+ intel_fbc_invalidate(dev_priv, obj->frontbuffer_bits, origin);
}
/**
intel_edp_drrs_flush(dev, frontbuffer_bits);
intel_psr_flush(dev, frontbuffer_bits);
-
- /*
- * FIXME: Unconditional fbc flushing here is a rather gross hack and
- * needs to be reworked into a proper frontbuffer tracking scheme like
- * psr employs.
- */
- if (dev_priv->fbc.need_sw_cache_clean) {
- dev_priv->fbc.need_sw_cache_clean = false;
- bdw_fbc_sw_flush(dev, FBC_REND_CACHE_CLEAN);
- }
+ intel_fbc_flush(dev_priv, frontbuffer_bits);
}
/**
return MODE_OK;
}
-static bool hdmi_12bpc_possible(struct intel_crtc *crtc)
+static bool hdmi_12bpc_possible(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
+ struct drm_atomic_state *state;
struct intel_encoder *encoder;
+ struct drm_connector_state *connector_state;
int count = 0, count_hdmi = 0;
+ int i;
if (HAS_GMCH_DISPLAY(dev))
return false;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != crtc)
+ state = crtc_state->base.state;
+
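+ /* Count how many encoders feed this crtc, and how many of them
+ * are HDMI, based on the atomic connector states. */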
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
continue;
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
count_hdmi += encoder->type == INTEL_OUTPUT_HDMI;
count++;
}
*/
if (pipe_config->pipe_bpp > 8*3 && pipe_config->has_hdmi_sink &&
clock_12bpc <= portclock_limit &&
- hdmi_12bpc_possible(encoder->new_crtc)) {
+ hdmi_12bpc_possible(pipe_config)) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_hdmi_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_hdmi_connector_helper_funcs = {
struct intel_connector *intel_connector =
&lvds_encoder->attached_connector->base;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
- struct intel_crtc *intel_crtc = lvds_encoder->base.new_crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
unsigned int lvds_bpp;
/* Should never happen!! */
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_lvds_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_encoder_funcs intel_lvds_enc_funcs = {
if (ret != 0)
return ret;
- ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL);
+ ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL,
+ &i915_ggtt_view_normal);
if (ret != 0)
return ret;
return NULL;
}
+static void chv_set_memory_dvfs(struct drm_i915_private *dev_priv, bool enable)
+{
+ u32 val;
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+ val = vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2);
+ if (enable)
+ val &= ~FORCE_DDR_HIGH_FREQ;
+ else
+ val |= FORCE_DDR_HIGH_FREQ;
+ val &= ~FORCE_DDR_LOW_FREQ;
+ val |= FORCE_DDR_FREQ_REQ_ACK;
+ vlv_punit_write(dev_priv, PUNIT_REG_DDR_SETUP2, val);
+
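+ /* The Punit acknowledges the request by clearing the REQ_ACK bit;
+ * give it 3 ms before complaining. */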
+ if (wait_for((vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2) &
+ FORCE_DDR_FREQ_REQ_ACK) == 0, 3))
+ DRM_ERROR("timed out waiting for Punit DDR DVFS request\n");
+
+ mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+static void chv_set_memory_pm5(struct drm_i915_private *dev_priv, bool enable)
+{
+ u32 val;
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+ val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
+ if (enable)
+ val |= DSP_MAXFIFO_PM5_ENABLE;
+ else
+ val &= ~DSP_MAXFIFO_PM5_ENABLE;
+ vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, val);
+
+ mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
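+/* Shift a watermark value into its DSPFW field and apply the field mask. */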
+#define FW_WM(value, plane) \
+ (((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK)
+
void intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable)
{
struct drm_device *dev = dev_priv->dev;
if (IS_VALLEYVIEW(dev)) {
I915_WRITE(FW_BLC_SELF_VLV, enable ? FW_CSPWRDWNEN : 0);
+ if (IS_CHERRYVIEW(dev))
+ chv_set_memory_pm5(dev_priv, enable);
} else if (IS_G4X(dev) || IS_CRESTLINE(dev)) {
I915_WRITE(FW_BLC_SELF, enable ? FW_BLC_SELF_EN : 0);
} else if (IS_PINEVIEW(dev)) {
enable ? "enabled" : "disabled");
}
+
/*
* Latency for FIFO fetches is dependent on several factors:
* - memory configuration (speed, channels)
*/
static const int pessimal_latency_ns = 5000;
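+/* FIFO start points are 9 bits wide: the low 8 bits come from one
+ * DSPARB register and bit 8 from another. */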
+#define VLV_FIFO_START(dsparb, dsparb2, lo_shift, hi_shift) \
+ ((((dsparb) >> (lo_shift)) & 0xff) | ((((dsparb2) >> (hi_shift)) & 0x1) << 8))
+
+static int vlv_get_fifo_size(struct drm_device *dev,
+ enum pipe pipe, int plane)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int sprite0_start, sprite1_start, size;
+
+ switch (pipe) {
+ uint32_t dsparb, dsparb2, dsparb3;
+ case PIPE_A:
+ dsparb = I915_READ(DSPARB);
+ dsparb2 = I915_READ(DSPARB2);
+ sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 0, 0);
+ sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 8, 4);
+ break;
+ case PIPE_B:
+ dsparb = I915_READ(DSPARB);
+ dsparb2 = I915_READ(DSPARB2);
+ sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 16, 8);
+ sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 24, 12);
+ break;
+ case PIPE_C:
+ dsparb2 = I915_READ(DSPARB2);
+ dsparb3 = I915_READ(DSPARB3);
+ sprite0_start = VLV_FIFO_START(dsparb3, dsparb2, 0, 16);
+ sprite1_start = VLV_FIFO_START(dsparb3, dsparb2, 8, 20);
+ break;
+ default:
+ return 0;
+ }
+
+ switch (plane) {
+ case 0:
+ size = sprite0_start;
+ break;
+ case 1:
+ size = sprite1_start - sprite0_start;
+ break;
+ case 2:
+ size = 512 - 1 - sprite1_start;
+ break;
+ default:
+ return 0;
+ }
+
+ DRM_DEBUG_KMS("Pipe %c %s %c FIFO size: %d\n",
+ pipe_name(pipe), plane == 0 ? "primary" : "sprite",
+ plane == 0 ? plane_name(pipe) : sprite_name(pipe, plane - 1),
+ size);
+
+ return size;
+}
+
static int i9xx_get_fifo_size(struct drm_device *dev, int plane)
{
struct drm_i915_private *dev_priv = dev->dev_private;
crtc = single_enabled_crtc(dev);
if (crtc) {
const struct drm_display_mode *adjusted_mode;
- int pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
int clock;
adjusted_mode = &to_intel_crtc(crtc)->config->base.adjusted_mode;
pixel_size, latency->display_sr);
reg = I915_READ(DSPFW1);
reg &= ~DSPFW_SR_MASK;
- reg |= wm << DSPFW_SR_SHIFT;
+ reg |= FW_WM(wm, SR);
I915_WRITE(DSPFW1, reg);
DRM_DEBUG_KMS("DSPFW1 register is %x\n", reg);
pixel_size, latency->cursor_sr);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_CURSOR_SR_MASK;
- reg |= (wm & 0x3f) << DSPFW_CURSOR_SR_SHIFT;
+ reg |= FW_WM(wm, CURSOR_SR);
I915_WRITE(DSPFW3, reg);
/* Display HPLL off SR */
pixel_size, latency->display_hpll_disable);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_HPLL_SR_MASK;
- reg |= wm & DSPFW_HPLL_SR_MASK;
+ reg |= FW_WM(wm, HPLL_SR);
I915_WRITE(DSPFW3, reg);
/* cursor HPLL off SR */
pixel_size, latency->cursor_hpll_disable);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_HPLL_CURSOR_MASK;
- reg |= (wm & 0x3f) << DSPFW_HPLL_CURSOR_SHIFT;
+ reg |= FW_WM(wm, HPLL_CURSOR);
I915_WRITE(DSPFW3, reg);
DRM_DEBUG_KMS("DSPFW3 register is %x\n", reg);
clock = adjusted_mode->crtc_clock;
htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
/* Use the small buffer method to calculate plane watermark */
entries = ((clock * pixel_size / 1000) * display_latency_ns) / 1000;
/* Use the large buffer method to calculate cursor watermark */
line_time_us = max(htotal * 1000 / clock, 1);
line_count = (cursor_latency_ns / line_time_us + 1000) / 1000;
- entries = line_count * to_intel_crtc(crtc)->cursor_width * pixel_size;
+ entries = line_count * crtc->cursor->state->crtc_w * pixel_size;
tlb_miss = cursor->fifo_size*cursor->cacheline_size - hdisplay * 8;
if (tlb_miss > 0)
entries += tlb_miss;
clock = adjusted_mode->crtc_clock;
htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
line_time_us = max(htotal * 1000 / clock, 1);
line_count = (latency_ns / line_time_us + 1000) / 1000;
*display_wm = entries + display->guard_size;
/* calculate the self-refresh watermark for display cursor */
- entries = line_count * pixel_size * to_intel_crtc(crtc)->cursor_width;
+ entries = line_count * pixel_size * crtc->cursor->state->crtc_w;
entries = DIV_ROUND_UP(entries, cursor->cacheline_size);
*cursor_wm = entries + cursor->guard_size;
display, cursor);
}
-static bool vlv_compute_drain_latency(struct drm_crtc *crtc,
- int pixel_size,
- int *prec_mult,
- int *drain_latency)
-{
- struct drm_device *dev = crtc->dev;
- int entries;
- int clock = to_intel_crtc(crtc)->config->base.adjusted_mode.crtc_clock;
+#define FW_WM_VLV(value, plane) \
+ (((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK_VLV)
- if (WARN(clock == 0, "Pixel clock is zero!\n"))
- return false;
+static void vlv_write_wm_values(struct intel_crtc *crtc,
+ const struct vlv_wm_values *wm)
+{
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ enum pipe pipe = crtc->pipe;
- if (WARN(pixel_size == 0, "Pixel size is zero!\n"))
- return false;
+ I915_WRITE(VLV_DDL(pipe),
+ (wm->ddl[pipe].cursor << DDL_CURSOR_SHIFT) |
+ (wm->ddl[pipe].sprite[1] << DDL_SPRITE_SHIFT(1)) |
+ (wm->ddl[pipe].sprite[0] << DDL_SPRITE_SHIFT(0)) |
+ (wm->ddl[pipe].primary << DDL_PLANE_SHIFT));
- entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
- if (IS_CHERRYVIEW(dev))
- *prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_32 :
- DRAIN_LATENCY_PRECISION_16;
- else
- *prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_64 :
- DRAIN_LATENCY_PRECISION_32;
- *drain_latency = (64 * (*prec_mult) * 4) / entries;
+ I915_WRITE(DSPFW1,
+ FW_WM(wm->sr.plane, SR) |
+ FW_WM(wm->pipe[PIPE_B].cursor, CURSORB) |
+ FW_WM_VLV(wm->pipe[PIPE_B].primary, PLANEB) |
+ FW_WM_VLV(wm->pipe[PIPE_A].primary, PLANEA));
+ I915_WRITE(DSPFW2,
+ FW_WM_VLV(wm->pipe[PIPE_A].sprite[1], SPRITEB) |
+ FW_WM(wm->pipe[PIPE_A].cursor, CURSORA) |
+ FW_WM_VLV(wm->pipe[PIPE_A].sprite[0], SPRITEA));
+ I915_WRITE(DSPFW3,
+ FW_WM(wm->sr.cursor, CURSOR_SR));
+
+ if (IS_CHERRYVIEW(dev_priv)) {
+ I915_WRITE(DSPFW7_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+ I915_WRITE(DSPFW8_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_C].sprite[1], SPRITEF) |
+ FW_WM_VLV(wm->pipe[PIPE_C].sprite[0], SPRITEE));
+ I915_WRITE(DSPFW9_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_C].primary, PLANEC) |
+ FW_WM(wm->pipe[PIPE_C].cursor, CURSORC));
+ I915_WRITE(DSPHOWM,
+ FW_WM(wm->sr.plane >> 9, SR_HI) |
+ FW_WM(wm->pipe[PIPE_C].sprite[1] >> 8, SPRITEF_HI) |
+ FW_WM(wm->pipe[PIPE_C].sprite[0] >> 8, SPRITEE_HI) |
+ FW_WM(wm->pipe[PIPE_C].primary >> 8, PLANEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+ FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+ } else {
+ I915_WRITE(DSPFW7,
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+ I915_WRITE(DSPHOWM,
+ FW_WM(wm->sr.plane >> 9, SR_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+ FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+ }
- if (*drain_latency > DRAIN_LATENCY_MASK)
- *drain_latency = DRAIN_LATENCY_MASK;
+ POSTING_READ(DSPFW1);
- return true;
+ dev_priv->wm.vlv = *wm;
}
-/*
- * Update drain latency registers of memory arbiter
- *
- * Valleyview SoC has a new memory arbiter and needs drain latency registers
- * to be programmed. Each plane has a drain latency multiplier and a drain
- * latency value.
- */
+#undef FW_WM_VLV
-static void vlv_update_drain_latency(struct drm_crtc *crtc)
+static uint8_t vlv_compute_drain_latency(struct drm_crtc *crtc,
+ struct drm_plane *plane)
{
struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- int pixel_size;
- int drain_latency;
- enum pipe pipe = intel_crtc->pipe;
- int plane_prec, prec_mult, plane_dl;
- const int high_precision = IS_CHERRYVIEW(dev) ?
- DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64;
+ int entries, prec_mult, drain_latency, pixel_size;
+ int clock = intel_crtc->config->base.adjusted_mode.crtc_clock;
+ const int high_precision = IS_CHERRYVIEW(dev) ? 16 : 64;
- plane_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_PLANE_PRECISION_HIGH |
- DRAIN_LATENCY_MASK | DDL_CURSOR_PRECISION_HIGH |
- (DRAIN_LATENCY_MASK << DDL_CURSOR_SHIFT));
+ /*
+ * FIXME the plane might have an fb
+ * but be invisible (eg. due to clipping)
+ */
+ if (!intel_crtc->active || !plane->state->fb)
+ return 0;
- if (!intel_crtc_active(crtc)) {
- I915_WRITE(VLV_DDL(pipe), plane_dl);
- return;
- }
+ if (WARN(clock == 0, "Pixel clock is zero!\n"))
+ return 0;
- /* Primary plane Drain Latency */
- pixel_size = crtc->primary->fb->bits_per_pixel / 8; /* BPP */
- if (vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_PLANE_PRECISION_HIGH :
- DDL_PLANE_PRECISION_LOW;
- plane_dl |= plane_prec | drain_latency;
- }
+ pixel_size = drm_format_plane_cpp(plane->state->fb->pixel_format, 0);
- /* Cursor Drain Latency
- * BPP is always 4 for cursor
- */
- pixel_size = 4;
+ if (WARN(pixel_size == 0, "Pixel size is zero!\n"))
+ return 0;
- /* Program cursor DL only if it is enabled */
- if (intel_crtc->cursor_base &&
- vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_CURSOR_PRECISION_HIGH :
- DDL_CURSOR_PRECISION_LOW;
- plane_dl |= plane_prec | (drain_latency << DDL_CURSOR_SHIFT);
+ entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
+
+ prec_mult = high_precision;
+ drain_latency = 64 * prec_mult * 4 / entries;
+
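+ /* If the high-precision result overflows the DDL field, retry
+ * with the lower precision multiplier before clamping. */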
+ if (drain_latency > DRAIN_LATENCY_MASK) {
+ prec_mult /= 2;
+ drain_latency = 64 * prec_mult * 4 / entries;
}
- I915_WRITE(VLV_DDL(pipe), plane_dl);
-}
+ if (drain_latency > DRAIN_LATENCY_MASK)
+ drain_latency = DRAIN_LATENCY_MASK;
-#define single_plane_enabled(mask) is_power_of_2(mask)
+ return drain_latency | (prec_mult == high_precision ?
+ DDL_PRECISION_HIGH : DDL_PRECISION_LOW);
+}
-static void valleyview_update_wm(struct drm_crtc *crtc)
+static int vlv_compute_wm(struct intel_crtc *crtc,
+ struct intel_plane *plane,
+ int fifo_size)
{
- struct drm_device *dev = crtc->dev;
- static const int sr_latency_ns = 12000;
- struct drm_i915_private *dev_priv = dev->dev_private;
- int planea_wm, planeb_wm, cursora_wm, cursorb_wm;
- int plane_sr, cursor_sr;
- int ignore_plane_sr, ignore_cursor_sr;
- unsigned int enabled = 0;
- bool cxsr_enabled;
+ int clock, entries, pixel_size;
- vlv_update_drain_latency(crtc);
+ /*
+ * FIXME the plane might have an fb
+ * but be invisible (eg. due to clipping)
+ */
+ if (!crtc->active || !plane->base.state->fb)
+ return 0;
- if (g4x_compute_wm0(dev, PIPE_A,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planea_wm, &cursora_wm))
- enabled |= 1 << PIPE_A;
+ pixel_size = drm_format_plane_cpp(plane->base.state->fb->pixel_format, 0);
+ clock = crtc->config->base.adjusted_mode.crtc_clock;
- if (g4x_compute_wm0(dev, PIPE_B,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planeb_wm, &cursorb_wm))
- enabled |= 1 << PIPE_B;
+ entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
- if (single_plane_enabled(enabled) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &plane_sr, &ignore_cursor_sr) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- 2*sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &ignore_plane_sr, &cursor_sr)) {
- cxsr_enabled = true;
- } else {
- cxsr_enabled = false;
- intel_set_memory_cxsr(dev_priv, false);
- plane_sr = cursor_sr = 0;
+ /*
+ * Set up the watermark such that we don't start issuing memory
+ * requests until we are within PND's max deadline value (256us).
+	 * The idea is to stay idle as long as possible while still taking
+	 * advantage of PND's deadline scheduling. The limit of 8
+	 * cachelines (used when the FIFO will anyway drain in less time
+	 * than 256us) should match what would be done if trickle
+	 * feed were enabled.
+	 */
+ */
+ return fifo_size - clamp(DIV_ROUND_UP(256 * entries, 64), 0, fifo_size - 8);
+}
+
+static bool vlv_compute_sr_wm(struct drm_device *dev,
+ struct vlv_wm_values *wm)
+{
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct drm_crtc *crtc;
+ enum pipe pipe = INVALID_PIPE;
+ int num_planes = 0;
+ int fifo_size = 0;
+ struct intel_plane *plane;
+
+ wm->sr.cursor = wm->sr.plane = 0;
+
+ crtc = single_enabled_crtc(dev);
+ /* maxfifo not supported on pipe C */
+ if (crtc && to_intel_crtc(crtc)->pipe != PIPE_C) {
+ pipe = to_intel_crtc(crtc)->pipe;
+ num_planes = !!wm->pipe[pipe].primary +
+ !!wm->pipe[pipe].sprite[0] +
+ !!wm->pipe[pipe].sprite[1];
+ fifo_size = INTEL_INFO(dev_priv)->num_pipes * 512 - 1;
}
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
- "B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
- planea_wm, cursora_wm,
- planeb_wm, cursorb_wm,
- plane_sr, cursor_sr);
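+ /* maxfifo/SR requires exactly one enabled plane on the single
+ * active pipe. */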
+ if (fifo_size == 0 || num_planes > 1)
+ return false;
- I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2,
- (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
- I915_WRITE(DSPFW3,
- (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ wm->sr.cursor = vlv_compute_wm(to_intel_crtc(crtc),
+ to_intel_plane(crtc->cursor), 0x3f);
- if (cxsr_enabled)
- intel_set_memory_cxsr(dev_priv, true);
+ list_for_each_entry(plane, &dev->mode_config.plane_list, base.head) {
+ if (plane->base.type == DRM_PLANE_TYPE_CURSOR)
+ continue;
+
+ if (plane->pipe != pipe)
+ continue;
+
+ wm->sr.plane = vlv_compute_wm(to_intel_crtc(crtc),
+ plane, fifo_size);
+ if (wm->sr.plane != 0)
+ break;
+ }
+
+ return true;
}
-static void cherryview_update_wm(struct drm_crtc *crtc)
+static void valleyview_update_wm(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- static const int sr_latency_ns = 12000;
struct drm_i915_private *dev_priv = dev->dev_private;
- int planea_wm, planeb_wm, planec_wm;
- int cursora_wm, cursorb_wm, cursorc_wm;
- int plane_sr, cursor_sr;
- int ignore_plane_sr, ignore_cursor_sr;
- unsigned int enabled = 0;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ enum pipe pipe = intel_crtc->pipe;
bool cxsr_enabled;
+ struct vlv_wm_values wm = dev_priv->wm.vlv;
- vlv_update_drain_latency(crtc);
+ wm.ddl[pipe].primary = vlv_compute_drain_latency(crtc, crtc->primary);
+ wm.pipe[pipe].primary = vlv_compute_wm(intel_crtc,
+ to_intel_plane(crtc->primary),
+ vlv_get_fifo_size(dev, pipe, 0));
- if (g4x_compute_wm0(dev, PIPE_A,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planea_wm, &cursora_wm))
- enabled |= 1 << PIPE_A;
+ wm.ddl[pipe].cursor = vlv_compute_drain_latency(crtc, crtc->cursor);
+ wm.pipe[pipe].cursor = vlv_compute_wm(intel_crtc,
+ to_intel_plane(crtc->cursor),
+ 0x3f);
- if (g4x_compute_wm0(dev, PIPE_B,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planeb_wm, &cursorb_wm))
- enabled |= 1 << PIPE_B;
+ cxsr_enabled = vlv_compute_sr_wm(dev, &wm);
- if (g4x_compute_wm0(dev, PIPE_C,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planec_wm, &cursorc_wm))
- enabled |= 1 << PIPE_C;
+ if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0)
+ return;
- if (single_plane_enabled(enabled) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &plane_sr, &ignore_cursor_sr) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- 2*sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &ignore_plane_sr, &cursor_sr)) {
- cxsr_enabled = true;
- } else {
- cxsr_enabled = false;
- intel_set_memory_cxsr(dev_priv, false);
- plane_sr = cursor_sr = 0;
- }
+ DRM_DEBUG_KMS("Setting FIFO watermarks - %c: plane=%d, cursor=%d, "
+ "SR: plane=%d, cursor=%d\n", pipe_name(pipe),
+ wm.pipe[pipe].primary, wm.pipe[pipe].cursor,
+ wm.sr.plane, wm.sr.cursor);
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
- "B: plane=%d, cursor=%d, C: plane=%d, cursor=%d, "
- "SR: plane=%d, cursor=%d\n",
- planea_wm, cursora_wm,
- planeb_wm, cursorb_wm,
- planec_wm, cursorc_wm,
- plane_sr, cursor_sr);
+ /*
+	 * FIXME DDR DVFS introduces massive memory latencies which
+	 * are not known to the system agent, so any deadline specified
+	 * by the display may not be respected. To support DDR DVFS
+ * the watermark code needs to be rewritten to essentially
+ * bypass deadline mechanism and rely solely on the
+ * watermarks. For now disable DDR DVFS.
+ */
+ if (IS_CHERRYVIEW(dev_priv))
+ chv_set_memory_dvfs(dev_priv, false);
- I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2,
- (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
- I915_WRITE(DSPFW3,
- (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
- I915_WRITE(DSPFW9_CHV,
- (I915_READ(DSPFW9_CHV) & ~(DSPFW_PLANEC_MASK |
- DSPFW_CURSORC_MASK)) |
- (planec_wm << DSPFW_PLANEC_SHIFT) |
- (cursorc_wm << DSPFW_CURSORC_SHIFT));
+ if (!cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, false);
+
+ vlv_write_wm_values(intel_crtc, &wm);
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- int pipe = to_intel_plane(plane)->pipe;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ enum pipe pipe = intel_crtc->pipe;
int sprite = to_intel_plane(plane)->plane;
- int drain_latency;
- int plane_prec;
- int sprite_dl;
- int prec_mult;
- const int high_precision = IS_CHERRYVIEW(dev) ?
- DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64;
+ bool cxsr_enabled;
+ struct vlv_wm_values wm = dev_priv->wm.vlv;
- sprite_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_SPRITE_PRECISION_HIGH(sprite) |
- (DRAIN_LATENCY_MASK << DDL_SPRITE_SHIFT(sprite)));
+ if (enabled) {
+ wm.ddl[pipe].sprite[sprite] =
+ vlv_compute_drain_latency(crtc, plane);
- if (enabled && vlv_compute_drain_latency(crtc, pixel_size, &prec_mult,
- &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_SPRITE_PRECISION_HIGH(sprite) :
- DDL_SPRITE_PRECISION_LOW(sprite);
- sprite_dl |= plane_prec |
- (drain_latency << DDL_SPRITE_SHIFT(sprite));
+ wm.pipe[pipe].sprite[sprite] =
+ vlv_compute_wm(intel_crtc,
+ to_intel_plane(plane),
+ vlv_get_fifo_size(dev, pipe, sprite+1));
+ } else {
+ wm.ddl[pipe].sprite[sprite] = 0;
+ wm.pipe[pipe].sprite[sprite] = 0;
}
- I915_WRITE(VLV_DDL(pipe), sprite_dl);
+ cxsr_enabled = vlv_compute_sr_wm(dev, &wm);
+
+ if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0)
+ return;
+
+ DRM_DEBUG_KMS("Setting FIFO watermarks - %c: sprite %c=%d, "
+ "SR: plane=%d, cursor=%d\n", pipe_name(pipe),
+ sprite_name(pipe, sprite),
+ wm.pipe[pipe].sprite[sprite],
+ wm.sr.plane, wm.sr.cursor);
+
+ if (!cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, false);
+
+ vlv_write_wm_values(intel_crtc, &wm);
+
+ if (cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, true);
}
+#define single_plane_enabled(mask) is_power_of_2(mask)
+
static void g4x_update_wm(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
plane_sr, cursor_sr);
I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
+ FW_WM(plane_sr, SR) |
+ FW_WM(cursorb_wm, CURSORB) |
+ FW_WM(planeb_wm, PLANEB) |
+ FW_WM(planea_wm, PLANEA));
I915_WRITE(DSPFW2,
(I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
+ FW_WM(cursora_wm, CURSORA));
/* HPLL off in SR has some issues on G4x... disable it */
I915_WRITE(DSPFW3,
(I915_READ(DSPFW3) & ~(DSPFW_HPLL_SR_EN | DSPFW_CURSOR_SR_MASK)) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ FW_WM(cursor_sr, CURSOR_SR));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
int clock = adjusted_mode->crtc_clock;
int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- int pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
unsigned long line_time_us;
int entries;
entries, srwm);
entries = (((sr_latency_ns / line_time_us) + 1000) / 1000) *
- pixel_size * to_intel_crtc(crtc)->cursor_width;
+ pixel_size * crtc->cursor->state->crtc_w;
entries = DIV_ROUND_UP(entries,
i965_cursor_wm_info.cacheline_size);
cursor_sr = i965_cursor_wm_info.fifo_size -
srwm);
/* 965 has limitations... */
- I915_WRITE(DSPFW1, (srwm << DSPFW_SR_SHIFT) |
- (8 << DSPFW_CURSORB_SHIFT) |
- (8 << DSPFW_PLANEB_SHIFT) |
- (8 << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2, (8 << DSPFW_CURSORA_SHIFT) |
- (8 << DSPFW_PLANEC_SHIFT_OLD));
+ I915_WRITE(DSPFW1, FW_WM(srwm, SR) |
+ FW_WM(8, CURSORB) |
+ FW_WM(8, PLANEB) |
+ FW_WM(8, PLANEA));
+ I915_WRITE(DSPFW2, FW_WM(8, CURSORA) |
+ FW_WM(8, PLANEC_OLD));
/* update cursor SR watermark */
- I915_WRITE(DSPFW3, (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ I915_WRITE(DSPFW3, FW_WM(cursor_sr, CURSOR_SR));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
}
+#undef FW_WM
+
static void i9xx_update_wm(struct drm_crtc *unused_crtc)
{
struct drm_device *dev = unused_crtc->dev;
crtc = intel_get_crtc_for_plane(dev, 0);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *adjusted_mode;
- int cpp = crtc->primary->fb->bits_per_pixel / 8;
+ int cpp = crtc->primary->state->fb->bits_per_pixel / 8;
if (IS_GEN2(dev))
cpp = 4;
crtc = intel_get_crtc_for_plane(dev, 1);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *adjusted_mode;
- int cpp = crtc->primary->fb->bits_per_pixel / 8;
+ int cpp = crtc->primary->state->fb->bits_per_pixel / 8;
if (IS_GEN2(dev))
cpp = 4;
if (IS_I915GM(dev) && enabled) {
struct drm_i915_gem_object *obj;
- obj = intel_fb_obj(enabled->primary->fb);
+ obj = intel_fb_obj(enabled->primary->state->fb);
/* self-refresh seems busted with untiled */
if (obj->tiling_mode == I915_TILING_NONE)
int clock = adjusted_mode->crtc_clock;
int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(enabled)->config->pipe_src_w;
- int pixel_size = enabled->primary->fb->bits_per_pixel / 8;
+ int pixel_size = enabled->primary->state->fb->bits_per_pixel / 8;
unsigned long line_time_us;
int entries;
struct drm_display_mode *mode = &intel_crtc->config->base.adjusted_mode;
u32 linetime, ips_linetime;
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return 0;
/* The WMs are computed based on how long it takes to fill a single
enum pipe pipe = intel_crtc->pipe;
struct drm_plane *plane;
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return;
p->active = true;
p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal;
p->pixel_rate = ilk_pipe_pixel_rate(dev, crtc);
- p->pri.bytes_per_pixel = crtc->primary->fb->bits_per_pixel / 8;
- p->cur.bytes_per_pixel = 4;
+
+ if (crtc->primary->state->fb) {
+ p->pri.enabled = true;
+ p->pri.bytes_per_pixel =
+ crtc->primary->state->fb->bits_per_pixel / 8;
+ } else {
+ p->pri.enabled = false;
+ p->pri.bytes_per_pixel = 0;
+ }
+
+ if (crtc->cursor->state->fb) {
+ p->cur.enabled = true;
+ p->cur.bytes_per_pixel = 4;
+ } else {
+ p->cur.enabled = false;
+ p->cur.bytes_per_pixel = 0;
+ }
p->pri.horiz_pixels = intel_crtc->config->pipe_src_w;
- p->cur.horiz_pixels = intel_crtc->cursor_width;
- /* TODO: for now, assume primary and cursor planes are always enabled. */
- p->pri.enabled = true;
- p->cur.enabled = true;
+ p->cur.horiz_pixels = intel_crtc->base.cursor->state->crtc_w;
drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) {
struct intel_plane *intel_plane = to_intel_plane(plane);
nth_active_pipe = 0;
for_each_crtc(dev, crtc) {
- if (!intel_crtc_active(crtc))
+ if (!to_intel_crtc(crtc)->active)
continue;
if (crtc == for_crtc)
void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv,
struct skl_ddb_allocation *ddb /* out */)
{
- struct drm_device *dev = dev_priv->dev;
enum pipe pipe;
int plane;
u32 val;
for_each_pipe(dev_priv, pipe) {
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
val = I915_READ(PLANE_BUF_CFG(pipe, plane));
skl_ddb_entry_init_from_hw(&ddb->plane[pipe][plane],
val);
struct skl_ddb_allocation *ddb /* out */)
{
struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum pipe pipe = intel_crtc->pipe;
struct skl_ddb_entry *alloc = &ddb->pipe[pipe];
alloc->end -= cursor_blocks;
/* 1. Allocate the mininum required blocks for each active plane */
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
const struct intel_plane_wm_parameters *p;
p = ¶ms->plane[plane];
struct drm_plane *plane;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
- config->num_pipes_active += intel_crtc_active(crtc);
+ config->num_pipes_active += to_intel_crtc(crtc)->active;
/* FIXME: I don't think we need those two global parameters on SKL */
list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
struct drm_framebuffer *fb;
int i = 1; /* Index for sprite planes start */
- p->active = intel_crtc_active(crtc);
+ p->active = intel_crtc->active;
if (p->active) {
p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal;
p->pixel_rate = skl_pipe_pixel_rate(intel_crtc->config);
- /*
- * For now, assume primary and cursor planes are always enabled.
- */
- p->plane[0].enabled = true;
- p->plane[0].bytes_per_pixel =
- crtc->primary->fb->bits_per_pixel / 8;
- p->plane[0].horiz_pixels = intel_crtc->config->pipe_src_w;
- p->plane[0].vert_pixels = intel_crtc->config->pipe_src_h;
- p->plane[0].tiling = DRM_FORMAT_MOD_NONE;
fb = crtc->primary->state->fb;
- /*
- * Framebuffer can be NULL on plane disable, but it does not
- * matter for watermarks if we assume no tiling in that case.
- */
- if (fb)
+ if (fb) {
+ p->plane[0].enabled = true;
+ p->plane[0].bytes_per_pixel = fb->bits_per_pixel / 8;
p->plane[0].tiling = fb->modifier[0];
-
- p->cursor.enabled = true;
- p->cursor.bytes_per_pixel = 4;
- p->cursor.horiz_pixels = intel_crtc->cursor_width ?
- intel_crtc->cursor_width : 64;
+ } else {
+ p->plane[0].enabled = false;
+ p->plane[0].bytes_per_pixel = 0;
+ p->plane[0].tiling = DRM_FORMAT_MOD_NONE;
+ }
+ p->plane[0].horiz_pixels = intel_crtc->config->pipe_src_w;
+ p->plane[0].vert_pixels = intel_crtc->config->pipe_src_h;
+ p->plane[0].rotation = crtc->primary->state->rotation;
+
+ fb = crtc->cursor->state->fb;
+ if (fb) {
+ p->cursor.enabled = true;
+ p->cursor.bytes_per_pixel = fb->bits_per_pixel / 8;
+ p->cursor.horiz_pixels = crtc->cursor->state->crtc_w;
+ p->cursor.vert_pixels = crtc->cursor->state->crtc_h;
+ } else {
+ p->cursor.enabled = false;
+ p->cursor.bytes_per_pixel = 0;
+ p->cursor.horiz_pixels = 64;
+ p->cursor.vert_pixels = 64;
+ }
}
list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
if (p_params->tiling == I915_FORMAT_MOD_Y_TILED ||
p_params->tiling == I915_FORMAT_MOD_Yf_TILED) {
- uint32_t y_tile_minimum = plane_blocks_per_line * 4;
+ uint32_t min_scanlines = 4;
+ uint32_t y_tile_minimum;
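+		/* With 90/270 rotation, Y-tiled fetches presumably need
+		 * more scanlines the narrower the pixel format. */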
+ if (intel_rotation_90_or_270(p_params->rotation)) {
+ switch (p_params->bytes_per_pixel) {
+ case 1:
+ min_scanlines = 16;
+ break;
+ case 2:
+ min_scanlines = 8;
+ break;
+ case 8:
+ WARN(1, "Unsupported pixel depth for rotation");
+ }
+ }
+ y_tile_minimum = plane_blocks_per_line * min_scanlines;
selected_result = max(method2, y_tile_minimum);
} else {
if ((ddb_allocation / plane_blocks_per_line) >= 1)
static uint32_t
skl_compute_linetime_wm(struct drm_crtc *crtc, struct skl_pipe_wm_parameters *p)
{
- if (!intel_crtc_active(crtc))
+ if (!to_intel_crtc(crtc)->active)
return 0;
return DIV_ROUND_UP(8 * p->pipe_htotal * 1000, p->pixel_rate);
static void
skl_wm_flush_pipe(struct drm_i915_private *dev_priv, enum pipe pipe, int pass)
{
- struct drm_device *dev = dev_priv->dev;
int plane;
DRM_DEBUG_KMS("flush pipe %c (pass %d)\n", pipe_name(pipe), pass);
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
I915_WRITE(PLANE_SURF(pipe, plane),
I915_READ(PLANE_SURF(pipe, plane)));
}
*/
if (fb)
intel_plane->wm.tiling = fb->modifier[0];
+ intel_plane->wm.rotation = plane->state->rotation;
skl_update_wm(crtc);
}
hw->plane_trans[pipe][i] = I915_READ(PLANE_WM_TRANS(pipe, i));
hw->cursor_trans[pipe] = I915_READ(CUR_WM_TRANS(pipe));
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return;
hw->dirty[pipe] = true;
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hw->wm_linetime[pipe] = I915_READ(PIPE_WM_LINETIME(pipe));
- active->pipe_enabled = intel_crtc_active(crtc);
+ active->pipe_enabled = intel_crtc->active;
if (active->pipe_enabled) {
u32 tmp = hw->wm_pipe[pipe];
pixel_size, enabled, scaled);
}
-static struct drm_i915_gem_object *
-intel_alloc_context_page(struct drm_device *dev)
-{
- struct drm_i915_gem_object *ctx;
- int ret;
-
- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
- ctx = i915_gem_alloc_object(dev, 4096);
- if (!ctx) {
- DRM_DEBUG("failed to alloc power context, RC6 disabled\n");
- return NULL;
- }
-
- ret = i915_gem_obj_ggtt_pin(ctx, 4096, 0);
- if (ret) {
- DRM_ERROR("failed to pin power context: %d\n", ret);
- goto err_unref;
- }
-
- ret = i915_gem_object_set_to_gtt_domain(ctx, 1);
- if (ret) {
- DRM_ERROR("failed to set-domain on power context: %d\n", ret);
- goto err_unpin;
- }
-
- return ctx;
-
-err_unpin:
- i915_gem_object_ggtt_unpin(ctx);
-err_unref:
- drm_gem_object_unreference(&ctx->base);
- return NULL;
-}
-
/**
* Lock protecting IPS related data structures
*/
* ourselves, instead of doing a rmw cycle (which might result in us clearing
* all limits and the gpu stuck at whatever frequency it is at atm).
*/
-static u32 gen6_rps_limits(struct drm_i915_private *dev_priv, u8 val)
+static u32 intel_rps_limits(struct drm_i915_private *dev_priv, u8 val)
{
u32 limits;
* the hw runs at the minimal clock before selecting the desired
* frequency, if the down threshold expires in that window we will not
* receive a down interrupt. */
- limits = dev_priv->rps.max_freq_softlimit << 24;
- if (val <= dev_priv->rps.min_freq_softlimit)
- limits |= dev_priv->rps.min_freq_softlimit << 16;
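+	/* On GEN9 the interrupt-limit fields sit at different bit
+	 * positions, presumably to fit the wider SKL frequency values. */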
+ if (IS_GEN9(dev_priv->dev)) {
+ limits = (dev_priv->rps.max_freq_softlimit) << 23;
+ if (val <= dev_priv->rps.min_freq_softlimit)
+ limits |= (dev_priv->rps.min_freq_softlimit) << 14;
+ } else {
+ limits = dev_priv->rps.max_freq_softlimit << 24;
+ if (val <= dev_priv->rps.min_freq_softlimit)
+ limits |= dev_priv->rps.min_freq_softlimit << 16;
+ }
return limits;
}
static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val)
{
int new_power;
+ u32 threshold_up = 0, threshold_down = 0; /* in % */
+ u32 ei_up = 0, ei_down = 0;
new_power = dev_priv->rps.power;
switch (dev_priv->rps.power) {
break;
}
/* Max/min bins are special */
- if (val == dev_priv->rps.min_freq_softlimit)
+ if (val <= dev_priv->rps.min_freq_softlimit)
new_power = LOW_POWER;
- if (val == dev_priv->rps.max_freq_softlimit)
+ if (val >= dev_priv->rps.max_freq_softlimit)
new_power = HIGH_POWER;
if (new_power == dev_priv->rps.power)
return;
switch (new_power) {
case LOW_POWER:
/* Upclock if more than 95% busy over 16ms */
- I915_WRITE(GEN6_RP_UP_EI, 12500);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 11800);
+ ei_up = 16000;
+ threshold_up = 95;
/* Downclock if less than 85% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 21250);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 85;
break;
case BETWEEN:
/* Upclock if more than 90% busy over 13ms */
- I915_WRITE(GEN6_RP_UP_EI, 10250);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 9225);
+ ei_up = 13000;
+ threshold_up = 90;
/* Downclock if less than 75% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 18750);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 75;
break;
case HIGH_POWER:
/* Upclock if more than 85% busy over 10ms */
- I915_WRITE(GEN6_RP_UP_EI, 8000);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 6800);
+ ei_up = 10000;
+ threshold_up = 85;
/* Downclock if less than 60% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 15000);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 60;
break;
}
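+	/* Thresholds are a percentage of the evaluation interval; both
+	 * are converted from microseconds to hw units here. */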
+ I915_WRITE(GEN6_RP_UP_EI,
+ GT_INTERVAL_FROM_US(dev_priv, ei_up));
+ I915_WRITE(GEN6_RP_UP_THRESHOLD,
+ GT_INTERVAL_FROM_US(dev_priv, (ei_up * threshold_up / 100)));
+
+ I915_WRITE(GEN6_RP_DOWN_EI,
+ GT_INTERVAL_FROM_US(dev_priv, ei_down));
+ I915_WRITE(GEN6_RP_DOWN_THRESHOLD,
+ GT_INTERVAL_FROM_US(dev_priv, (ei_down * threshold_down / 100)));
+
+ I915_WRITE(GEN6_RP_CONTROL,
+ GEN6_RP_MEDIA_TURBO |
+ GEN6_RP_MEDIA_HW_NORMAL_MODE |
+ GEN6_RP_MEDIA_IS_GFX |
+ GEN6_RP_ENABLE |
+ GEN6_RP_UP_BUSY_AVG |
+ GEN6_RP_DOWN_IDLE_AVG);
+
dev_priv->rps.power = new_power;
dev_priv->rps.last_adj = 0;
}
u32 mask = 0;
if (val > dev_priv->rps.min_freq_softlimit)
- mask |= GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
+ mask |= GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
if (val < dev_priv->rps.max_freq_softlimit)
- mask |= GEN6_PM_RP_UP_THRESHOLD;
+ mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_UP_THRESHOLD;
- mask |= dev_priv->pm_rps_events & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED);
mask &= dev_priv->pm_rps_events;
return gen6_sanitize_rps_pm_mask(dev_priv, ~mask);
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
- WARN_ON(val > dev_priv->rps.max_freq_softlimit);
- WARN_ON(val < dev_priv->rps.min_freq_softlimit);
+ WARN_ON(val > dev_priv->rps.max_freq);
+ WARN_ON(val < dev_priv->rps.min_freq);
/* min/max delay may still have been modified so be sure to
* write the limits value.
if (val != dev_priv->rps.cur_freq) {
gen6_set_rps_thresholds(dev_priv, val);
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ if (IS_GEN9(dev))
+ I915_WRITE(GEN6_RPNSWREQ,
+ GEN9_FREQUENCY(val));
+ else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
I915_WRITE(GEN6_RPNSWREQ,
HSW_FREQUENCY(val));
else
/* Make sure we continue to get interrupts
* until we hit the minimum or maximum frequencies.
*/
- I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, gen6_rps_limits(dev_priv, val));
+ I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, intel_rps_limits(dev_priv, val));
I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
POSTING_READ(GEN6_RPNSWREQ);
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
- WARN_ON(val > dev_priv->rps.max_freq_softlimit);
- WARN_ON(val < dev_priv->rps.min_freq_softlimit);
+ WARN_ON(val > dev_priv->rps.max_freq);
+ WARN_ON(val < dev_priv->rps.min_freq);
if (WARN_ONCE(IS_CHERRYVIEW(dev) && (val & 1),
"Odd GPU freq value\n"))
static void vlv_set_rps_idle(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = dev_priv->dev;
+ u32 val = dev_priv->rps.idle_freq;
/* CHV and latest VLV don't need to force the gfx clock */
if (IS_CHERRYVIEW(dev) || dev->pdev->revision >= 0xd) {
- valleyview_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ valleyview_set_rps(dev_priv->dev, val);
return;
}
* When we are idle. Drop to min voltage state.
*/
- if (dev_priv->rps.cur_freq <= dev_priv->rps.min_freq_softlimit)
+ if (dev_priv->rps.cur_freq <= val)
return;
/* Mask turbo interrupt so that they will not come in between */
vlv_force_gfx_clock(dev_priv, true);
- dev_priv->rps.cur_freq = dev_priv->rps.min_freq_softlimit;
+ dev_priv->rps.cur_freq = val;
- vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ,
- dev_priv->rps.min_freq_softlimit);
+ vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val);
if (wait_for(((vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS))
& GENFREQSTATUS) == 0, 100))
vlv_force_gfx_clock(dev_priv, false);
- I915_WRITE(GEN6_PMINTRMSK,
- gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+ I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
+}
+
+void gen6_rps_busy(struct drm_i915_private *dev_priv)
+{
+ mutex_lock(&dev_priv->rps.hw_lock);
+ if (dev_priv->rps.enabled) {
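+		/* Restart the evaluation intervals so stale idle-time data
+		 * doesn't trigger an immediate frequency change. */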
+ if (dev_priv->pm_rps_events & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED))
+ gen6_rps_reset_ei(dev_priv);
+ I915_WRITE(GEN6_PMINTRMSK,
+ gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+ }
+ mutex_unlock(&dev_priv->rps.hw_lock);
}
void gen6_rps_idle(struct drm_i915_private *dev_priv)
if (IS_VALLEYVIEW(dev))
vlv_set_rps_idle(dev_priv);
else
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
dev_priv->rps.last_adj = 0;
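+		/* Mask all RPS interrupts while idle; gen6_rps_busy()
+		 * unmasks them again. */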
+ I915_WRITE(GEN6_PMINTRMSK, 0xffffffff);
}
mutex_unlock(&dev_priv->rps.hw_lock);
}
void gen6_rps_boost(struct drm_i915_private *dev_priv)
{
+ u32 val;
+
mutex_lock(&dev_priv->rps.hw_lock);
- if (dev_priv->rps.enabled) {
- intel_set_rps(dev_priv->dev, dev_priv->rps.max_freq_softlimit);
+ val = dev_priv->rps.max_freq_softlimit;
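+	/* Boost to the softlimit cap, but only while the GPU is busy and
+	 * actually running below it. */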
+ if (dev_priv->rps.enabled &&
+ dev_priv->mm.busy &&
+ dev_priv->rps.cur_freq < val) {
+ intel_set_rps(dev_priv->dev, val);
dev_priv->rps.last_adj = 0;
}
mutex_unlock(&dev_priv->rps.hw_lock);
dev_priv->rps.rp0_freq = (rp_state_cap >> 0) & 0xff;
dev_priv->rps.rp1_freq = (rp_state_cap >> 8) & 0xff;
dev_priv->rps.min_freq = (rp_state_cap >> 16) & 0xff;
+ if (IS_SKYLAKE(dev)) {
+	/* Store the frequency values in 16.66 MHz units, which is
+	 * the natural hardware unit for SKL */
+ dev_priv->rps.rp0_freq *= GEN9_FREQ_SCALER;
+ dev_priv->rps.rp1_freq *= GEN9_FREQ_SCALER;
+ dev_priv->rps.min_freq *= GEN9_FREQ_SCALER;
+ }
/* hw_max = RP0 until we check for overclocking */
dev_priv->rps.max_freq = dev_priv->rps.rp0_freq;
dev_priv->rps.max_freq);
}
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
gen6_init_rps_frequencies(dev);
- I915_WRITE(GEN6_RPNSWREQ, 0xc800000);
- I915_WRITE(GEN6_RC_VIDEO_FREQ, 0xc800000);
+ /* Program defaults and thresholds for RPS */
+ I915_WRITE(GEN6_RC_VIDEO_FREQ,
+ GEN9_FREQUENCY(dev_priv->rps.rp1_freq));
+
+ /* 1 second timeout */
+ I915_WRITE(GEN6_RP_DOWN_TIMEOUT,
+ GT_INTERVAL_FROM_US(dev_priv, 1000000));
- I915_WRITE(GEN6_RP_DOWN_TIMEOUT, 0xf4240);
- I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, 0x12060000);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 0xe808);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 0x3bd08);
- I915_WRITE(GEN6_RP_UP_EI, 0x101d0);
- I915_WRITE(GEN6_RP_DOWN_EI, 0x55730);
I915_WRITE(GEN6_RP_IDLE_HYSTERSIS, 0xa);
- I915_WRITE(GEN6_PMINTRMSK, 0x6);
- I915_WRITE(GEN6_RP_CONTROL, GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_MODE | GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE | GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
- gen6_enable_rps_interrupts(dev);
+ /* Rely on the call to gen6_set_rps() below to program the Up/Down
+ * EI and threshold registers, as well as the RP_CONTROL,
+ * RP_INTERRUPT_LIMITS and RPNSWREQ registers. */
+ dev_priv->rps.power = HIGH_POWER; /* force a reset */
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
/* 6: Ring frequency + overclocking (our driver does this later */
dev_priv->rps.power = HIGH_POWER; /* force a reset */
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
}
dev_priv->rps.power = HIGH_POWER; /* force a reset */
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
rc6vids = 0;
ret = sandybridge_pcode_read(dev_priv, GEN6_PCODE_READ_RC6VIDS, &rc6vids);
intel_gpu_freq(dev_priv, dev_priv->rps.min_freq),
dev_priv->rps.min_freq);
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
dev_priv->rps.min_freq) & 1,
"Odd GPU freq values\n");
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
-void ironlake_teardown_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (dev_priv->ips.renderctx) {
- i915_gem_object_ggtt_unpin(dev_priv->ips.renderctx);
- drm_gem_object_unreference(&dev_priv->ips.renderctx->base);
- dev_priv->ips.renderctx = NULL;
- }
-
- if (dev_priv->ips.pwrctx) {
- i915_gem_object_ggtt_unpin(dev_priv->ips.pwrctx);
- drm_gem_object_unreference(&dev_priv->ips.pwrctx->base);
- dev_priv->ips.pwrctx = NULL;
- }
-}
-
-static void ironlake_disable_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (I915_READ(PWRCTXA)) {
- /* Wake the GPU, prevent RC6, then restore RSTDBYCTL */
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) | RCX_SW_EXIT);
- wait_for(((I915_READ(RSTDBYCTL) & RSX_STATUS_MASK) == RSX_STATUS_ON),
- 50);
-
- I915_WRITE(PWRCTXA, 0);
- POSTING_READ(PWRCTXA);
-
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
- POSTING_READ(RSTDBYCTL);
- }
-}
-
-static int ironlake_setup_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (dev_priv->ips.renderctx == NULL)
- dev_priv->ips.renderctx = intel_alloc_context_page(dev);
- if (!dev_priv->ips.renderctx)
- return -ENOMEM;
-
- if (dev_priv->ips.pwrctx == NULL)
- dev_priv->ips.pwrctx = intel_alloc_context_page(dev);
- if (!dev_priv->ips.pwrctx) {
- ironlake_teardown_rc6(dev);
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static void ironlake_enable_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_engine_cs *ring = &dev_priv->ring[RCS];
- bool was_interruptible;
- int ret;
-
- /* rc6 disabled by default due to repeated reports of hanging during
- * boot and resume.
- */
- if (!intel_enable_rc6(dev))
- return;
-
- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
- ret = ironlake_setup_rc6(dev);
- if (ret)
- return;
-
- was_interruptible = dev_priv->mm.interruptible;
- dev_priv->mm.interruptible = false;
-
- /*
- * GPU can automatically power down the render unit if given a page
- * to save state.
- */
- ret = intel_ring_begin(ring, 6);
- if (ret) {
- ironlake_teardown_rc6(dev);
- dev_priv->mm.interruptible = was_interruptible;
- return;
- }
-
- intel_ring_emit(ring, MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN);
- intel_ring_emit(ring, MI_SET_CONTEXT);
- intel_ring_emit(ring, i915_gem_obj_ggtt_offset(dev_priv->ips.renderctx) |
- MI_MM_SPACE_GTT |
- MI_SAVE_EXT_STATE_EN |
- MI_RESTORE_EXT_STATE_EN |
- MI_RESTORE_INHIBIT);
- intel_ring_emit(ring, MI_SUSPEND_FLUSH);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_emit(ring, MI_FLUSH);
- intel_ring_advance(ring);
-
- /*
- * Wait for the command parser to advance past MI_SET_CONTEXT. The HW
- * does an implicit flush, combined with MI_FLUSH above, it should be
- * safe to assume that renderctx is valid
- */
- ret = intel_ring_idle(ring);
- dev_priv->mm.interruptible = was_interruptible;
- if (ret) {
- DRM_ERROR("failed to enable ironlake power savings\n");
- ironlake_teardown_rc6(dev);
- return;
- }
-
- I915_WRITE(PWRCTXA, i915_gem_obj_ggtt_offset(dev_priv->ips.pwrctx) | PWRCTX_EN);
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
-
- intel_print_rc6_info(dev, GEN6_RC_CTL_RC6_ENABLE);
-}
-
static unsigned long intel_pxfreq(u32 vidfreq)
{
unsigned long freq;
flush_delayed_work(&dev_priv->rps.delayed_resume_work);
- /*
- * TODO: disable RPS interrupts on GEN9+ too once RPS support
- * is added for it.
- */
- if (INTEL_INFO(dev)->gen < 9)
- gen6_disable_rps_interrupts(dev);
+ gen6_disable_rps_interrupts(dev);
}
/**
if (IS_IRONLAKE_M(dev)) {
ironlake_disable_drps(dev);
- ironlake_disable_rc6(dev);
} else if (INTEL_INFO(dev)->gen >= 6) {
intel_suspend_gt_powersave(dev);
mutex_lock(&dev_priv->rps.hw_lock);
- /*
- * TODO: reset/enable RPS interrupts on GEN9+ too, once RPS support is
- * added for it.
- */
- if (INTEL_INFO(dev)->gen < 9)
- gen6_reset_rps_interrupts(dev);
+ gen6_reset_rps_interrupts(dev);
if (IS_CHERRYVIEW(dev)) {
cherryview_enable_rps(dev);
gen6_enable_rps(dev);
__gen6_update_ring_freq(dev);
}
+
+ WARN_ON(dev_priv->rps.max_freq < dev_priv->rps.min_freq);
+ WARN_ON(dev_priv->rps.idle_freq > dev_priv->rps.max_freq);
+
+ WARN_ON(dev_priv->rps.efficient_freq < dev_priv->rps.min_freq);
+ WARN_ON(dev_priv->rps.efficient_freq > dev_priv->rps.max_freq);
+
dev_priv->rps.enabled = true;
- if (INTEL_INFO(dev)->gen < 9)
- gen6_enable_rps_interrupts(dev);
+ gen6_enable_rps_interrupts(dev);
mutex_unlock(&dev_priv->rps.hw_lock);
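
The WARN_ONs added above pin down the intended ordering of the RPS limits once initialization is done. As a standalone predicate (struct and function names invented for this sketch, not part of the driver), the four checks amount to:

struct rps_limits {
	int min_freq, max_freq, idle_freq, efficient_freq;
};

/* Mirrors the WARN_ONs: max >= min, idle <= max, and the
 * efficient ("RPe") frequency bounded by [min, max]. */
static int rps_limits_sane(const struct rps_limits *r)
{
	return r->max_freq >= r->min_freq &&
	       r->idle_freq <= r->max_freq &&
	       r->efficient_freq >= r->min_freq &&
	       r->efficient_freq <= r->max_freq;
}
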
if (IS_IRONLAKE_M(dev)) {
mutex_lock(&dev->struct_mutex);
ironlake_enable_drps(dev);
- ironlake_enable_rc6(dev);
intel_init_emon(dev);
mutex_unlock(&dev->struct_mutex);
} else if (INTEL_INFO(dev)->gen >= 6) {
gen6_check_mch_setup(dev);
}
+static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
+{
+ I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
+
+ /*
+ * Disable trickle feed and enable pnd deadline calculation
+ */
+ I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
+ I915_WRITE(CBR1_VLV, 0);
+}
+
static void valleyview_init_clock_gating(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
+ vlv_init_display_clock_gating(dev_priv);
/* WaDisableEarlyCull:vlv */
I915_WRITE(_3D_CHICKEN3,
I915_WRITE(GEN7_UCGCTL4,
I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE);
- I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
-
/*
* BSpec says this must be set, even though
* WaDisable4x2SubspanOptimization isn't listed for VLV.
{
struct drm_i915_private *dev_priv = dev->dev_private;
- I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
-
- I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
+ vlv_init_display_clock_gating(dev_priv);
/* WaVSRefCountFullforceMissDisable:chv */
/* WaDSRefCountFullforceMissDisable:chv */
else if (INTEL_INFO(dev)->gen == 8)
dev_priv->display.init_clock_gating = broadwell_init_clock_gating;
} else if (IS_CHERRYVIEW(dev)) {
- dev_priv->display.update_wm = cherryview_update_wm;
+ dev_priv->display.update_wm = valleyview_update_wm;
dev_priv->display.update_sprite_wm = valleyview_update_sprite_wm;
dev_priv->display.init_clock_gating =
cherryview_init_clock_gating;
int intel_gpu_freq(struct drm_i915_private *dev_priv, int val)
{
- if (IS_CHERRYVIEW(dev_priv->dev))
+ if (IS_GEN9(dev_priv->dev))
+ return (val * GT_FREQUENCY_MULTIPLIER) / GEN9_FREQ_SCALER;
+ else if (IS_CHERRYVIEW(dev_priv->dev))
return chv_gpu_freq(dev_priv, val);
else if (IS_VALLEYVIEW(dev_priv->dev))
return byt_gpu_freq(dev_priv, val);
int intel_freq_opcode(struct drm_i915_private *dev_priv, int val)
{
- if (IS_CHERRYVIEW(dev_priv->dev))
+ if (IS_GEN9(dev_priv->dev))
+ return (val * GEN9_FREQ_SCALER) / GT_FREQUENCY_MULTIPLIER;
+ else if (IS_CHERRYVIEW(dev_priv->dev))
return chv_freq_opcode(dev_priv, val);
else if (IS_VALLEYVIEW(dev_priv->dev))
return byt_freq_opcode(dev_priv, val);
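
On Gen9 the RPS frequency registers count in units of 50/3 MHz, hence the extra GEN9_FREQ_SCALER divide in both directions above. A minimal standalone sketch of the round trip, assuming GT_FREQUENCY_MULTIPLIER is 50 and GEN9_FREQ_SCALER is 3 as in the driver:

#include <stdio.h>

#define GT_FREQUENCY_MULTIPLIER	50
#define GEN9_FREQ_SCALER	3

static int gen9_gpu_freq(int val)	/* hw units -> MHz */
{
	return val * GT_FREQUENCY_MULTIPLIER / GEN9_FREQ_SCALER;
}

static int gen9_freq_opcode(int mhz)	/* MHz -> hw units */
{
	return mhz * GEN9_FREQ_SCALER / GT_FREQUENCY_MULTIPLIER;
}

int main(void)
{
	int val = 36;			/* e.g. 36 * 50/3 = 600 MHz */

	printf("%d units -> %d MHz -> %d units\n", val,
	       gen9_gpu_freq(val), gen9_freq_opcode(gen9_gpu_freq(val)));
	return 0;
}

Note the integer truncation: values that are not whole multiples of 50/3 MHz will not round-trip exactly.
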
WARN_ON(!(val & EDP_PSR_ENABLE));
I915_WRITE(EDP_PSR_CTL(dev), val & ~EDP_PSR_ENABLE);
-
- dev_priv->psr.active = false;
} else {
val = I915_READ(VLV_PSRCTL(pipe));
return 0;
}
-static int gen7_ring_fbc_flush(struct intel_engine_cs *ring, u32 value)
-{
- int ret;
-
- if (!ring->fbc_dirty)
- return 0;
-
- ret = intel_ring_begin(ring, 6);
- if (ret)
- return ret;
- /* WaFbcNukeOn3DBlt:ivb/hsw */
- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit(ring, MSG_FBC_REND_STATE);
- intel_ring_emit(ring, value);
- intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) | MI_SRM_LRM_GLOBAL_GTT);
- intel_ring_emit(ring, MSG_FBC_REND_STATE);
- intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
- intel_ring_advance(ring);
-
- ring->fbc_dirty = false;
- return 0;
-}
-
static int
gen7_render_ring_flush(struct intel_engine_cs *ring,
u32 invalidate_domains, u32 flush_domains)
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
- if (!invalidate_domains && flush_domains)
- return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
return 0;
}
return ret;
}
- ret = gen8_emit_pipe_control(ring, flags, scratch_addr);
- if (ret)
- return ret;
-
- if (!invalidate_domains && flush_domains)
- return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
- return 0;
+ return gen8_emit_pipe_control(ring, flags, scratch_addr);
}
static void ring_write_tail(struct intel_engine_cs *ring,
u32 invalidate, u32 flush)
{
struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t cmd;
int ret;
return ret;
cmd = MI_FLUSH_DW;
- if (INTEL_INFO(ring->dev)->gen >= 8)
+ if (INTEL_INFO(dev)->gen >= 8)
cmd += 1;
/* We always require a command barrier so that subsequent
cmd |= MI_INVALIDATE_TLB;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
- if (INTEL_INFO(ring->dev)->gen >= 8) {
+ if (INTEL_INFO(dev)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
}
intel_ring_advance(ring);
- if (!invalidate && flush) {
- if (IS_GEN7(dev))
- return gen7_ring_fbc_flush(ring, FBC_REND_CACHE_CLEAN);
- else if (IS_BROADWELL(dev))
- dev_priv->fbc.need_sw_cache_clean = true;
- }
-
return 0;
}
*/
struct drm_i915_gem_request *outstanding_lazy_request;
bool gpu_caches_dirty;
- bool fbc_dirty;
wait_queue_head_t irq_queue;
outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
- if (IS_BROADWELL(dev) || (INTEL_INFO(dev)->gen >= 9))
- gen8_irq_power_well_post_enable(dev_priv);
+ if (IS_BROADWELL(dev))
+ gen8_irq_power_well_post_enable(dev_priv,
+ 1 << PIPE_C | 1 << PIPE_B);
+}
+
+static void skl_power_well_post_enable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ struct drm_device *dev = dev_priv->dev;
+
+ /*
+ * After we re-enable the power well, if we touch VGA register 0x3d5
+ * we'll get unclaimed register interrupts. This stops after we write
+ * anything to the VGA MSR register. The vgacon module uses this
+ * register all the time, so if we unbind our driver and, as a
+ * consequence, bind vgacon, we'll get stuck in an infinite loop at
+ * console_unlock(). So here we touch the VGA MSR register, making
+ * sure vgacon can keep working normally without triggering interrupts
+ * and error messages.
+ */
+ if (power_well->data == SKL_DISP_PW_2) {
+ vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
+ outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
+ vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
+
+ gen8_irq_power_well_post_enable(dev_priv,
+ 1 << PIPE_C | 1 << PIPE_B);
+ }
+
+ if (power_well->data == SKL_DISP_PW_1) {
+ intel_prepare_ddi(dev);
+ gen8_irq_power_well_post_enable(dev_priv, 1 << PIPE_A);
+ }
}
static void hsw_set_power_well(struct drm_i915_private *dev_priv,
{
uint32_t tmp, fuse_status;
uint32_t req_mask, state_mask;
- bool check_fuse_status = false;
+ bool is_enabled, enable_requested, check_fuse_status = false;
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
fuse_status = I915_READ(SKL_FUSE_STATUS);
}
req_mask = SKL_POWER_WELL_REQ(power_well->data);
+ enable_requested = tmp & req_mask;
state_mask = SKL_POWER_WELL_STATE(power_well->data);
+ is_enabled = tmp & state_mask;
if (enable) {
- if (!(tmp & req_mask)) {
+ if (!enable_requested) {
I915_WRITE(HSW_PWR_WELL_DRIVER, tmp | req_mask);
- DRM_DEBUG_KMS("Enabling %s\n", power_well->name);
}
- if (!(tmp & state_mask)) {
+ if (!is_enabled) {
+ DRM_DEBUG_KMS("Enabling %s\n", power_well->name);
if (wait_for((I915_READ(HSW_PWR_WELL_DRIVER) &
state_mask), 1))
DRM_ERROR("%s enable timeout\n",
check_fuse_status = true;
}
} else {
- if (tmp & req_mask) {
+ if (enable_requested) {
I915_WRITE(HSW_PWR_WELL_DRIVER, tmp & ~req_mask);
POSTING_READ(HSW_PWR_WELL_DRIVER);
DRM_DEBUG_KMS("Disabling %s\n", power_well->name);
DRM_ERROR("PG2 distributing status timeout\n");
}
}
+
+ if (enable && !is_enabled)
+ skl_power_well_post_enable(dev_priv, power_well);
}
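
The enable_requested/is_enabled split above separates what software asked for from what the hardware currently reports, and runs the post-enable hook only on a real off-to-on transition. A self-contained sketch of that shape (all names invented, not the driver's):

#include <stdbool.h>

struct well {
	bool requested;		/* software request bit */
	bool enabled;		/* hardware state bit */
};

static void well_post_enable(struct well *w)
{
	/* re-init whatever the powered-off well lost, e.g. IRQ state */
}

static void well_set(struct well *w, bool enable)
{
	bool was_enabled = w->enabled;

	w->requested = enable;
	w->enabled = enable;	/* model: hardware follows the request */

	if (enable && !was_enabled)
		well_post_enable(w);
}
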
static void hsw_power_well_sync_hw(struct drm_i915_private *dev_priv,
}
/**
- * intel_aux_display_runtime_get - grab an auxilliary power domain reference
+ * intel_aux_display_runtime_get - grab an auxiliary power domain reference
* @dev_priv: i915 device instance
*
* This function grabs a power domain reference for the auxiliary power domain
}
/**
- * intel_aux_display_runtime_put - release an auxilliary power domain reference
+ * intel_aux_display_runtime_put - release an auxiliary power domain reference
* @dev_priv: i915 device instance
*
- * This function drops the auxilliary power domain reference obtained by
+ * This function drops the auxiliary power domain reference obtained by
* intel_aux_display_runtime_get() and might power down the corresponding
* hardware block right away if this is the last reference.
*/
switch (crtc->config->pixel_multiplier) {
default:
- WARN(1, "unknown pixel mutlipler specified\n");
+ WARN(1, "unknown pixel multiplier specified\n");
case 1: rate = SDVO_CLOCK_RATE_MULT_1X; break;
case 2: rate = SDVO_CLOCK_RATE_MULT_2X; break;
case 4: rate = SDVO_CLOCK_RATE_MULT_4X; break;
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_sdvo_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_sdvo_connector_helper_funcs = {
static void
skl_update_plane(struct drm_plane *drm_plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
struct drm_device *dev = drm_plane->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(drm_plane);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
const int pipe = intel_plane->pipe;
const int plane = intel_plane->plane + 1;
u32 plane_ctl, stride_div;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
+ unsigned long surf_addr;
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
-
- /* Mask out pixel format bits in case we change it */
- plane_ctl &= ~PLANE_CTL_FORMAT_MASK;
- plane_ctl &= ~PLANE_CTL_ORDER_RGBX;
- plane_ctl &= ~PLANE_CTL_YUV422_ORDER_MASK;
- plane_ctl &= ~PLANE_CTL_TILED_MASK;
- plane_ctl &= ~PLANE_CTL_ALPHA_MASK;
- plane_ctl &= ~PLANE_CTL_ROTATE_MASK;
-
- /* Trickle feed has to be enabled */
- plane_ctl &= ~PLANE_CTL_TRICKLE_FEED_DISABLE;
+ plane_ctl = PLANE_CTL_ENABLE |
+ PLANE_CTL_PIPE_CSC_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_RGB565:
if (drm_plane->state->rotation == BIT(DRM_ROTATE_180))
plane_ctl |= PLANE_CTL_ROTATE_180;
- plane_ctl |= PLANE_CTL_ENABLE;
- plane_ctl |= PLANE_CTL_PIPE_CSC_ENABLE;
-
intel_update_sprite_watermarks(drm_plane, crtc, src_w, src_h,
pixel_size, true,
src_w != crtc_w || src_h != crtc_h);
crtc_w--;
crtc_h--;
+ if (key->flags) {
+ I915_WRITE(PLANE_KEYVAL(pipe, plane), key->min_value);
+ I915_WRITE(PLANE_KEYMAX(pipe, plane), key->max_value);
+ I915_WRITE(PLANE_KEYMSK(pipe, plane), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ plane_ctl |= PLANE_CTL_KEY_ENABLE_DESTINATION;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ plane_ctl |= PLANE_CTL_KEY_ENABLE_SOURCE;
+
+ surf_addr = intel_plane_obj_offset(intel_plane, obj);
+
I915_WRITE(PLANE_OFFSET(pipe, plane), (y << 16) | x);
I915_WRITE(PLANE_STRIDE(pipe, plane), fb->pitches[0] / stride_div);
I915_WRITE(PLANE_POS(pipe, plane), (crtc_y << 16) | crtc_x);
I915_WRITE(PLANE_SIZE(pipe, plane), (crtc_h << 16) | crtc_w);
I915_WRITE(PLANE_CTL(pipe, plane), plane_ctl);
- I915_WRITE(PLANE_SURF(pipe, plane), i915_gem_obj_ggtt_offset(obj));
+ I915_WRITE(PLANE_SURF(pipe, plane), surf_addr);
POSTING_READ(PLANE_SURF(pipe, plane));
}
const int pipe = intel_plane->pipe;
const int plane = intel_plane->plane + 1;
- I915_WRITE(PLANE_CTL(pipe, plane),
- I915_READ(PLANE_CTL(pipe, plane)) & ~PLANE_CTL_ENABLE);
+ I915_WRITE(PLANE_CTL(pipe, plane), 0);
/* Activate double buffered register update */
- I915_WRITE(PLANE_CTL(pipe, plane), 0);
- POSTING_READ(PLANE_CTL(pipe, plane));
+ I915_WRITE(PLANE_SURF(pipe, plane), 0);
+ POSTING_READ(PLANE_SURF(pipe, plane));
intel_update_sprite_watermarks(drm_plane, crtc, 0, 0, 0, false, false);
}
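
Skylake plane registers are double buffered against the surface address register: control writes only latch into the hardware when PLANE_SURF is written, which is why the disable path above moves the arming write and the posting read from PLANE_CTL to PLANE_SURF. A minimal sketch of the sequence, with invented register helpers standing in for the MMIO accessors:

#include <stdint.h>

static uint32_t plane_ctl, plane_surf;

static void wr_ctl(uint32_t v)  { plane_ctl = v; }
static void wr_surf(uint32_t v) { plane_surf = v; /* latch point: takes effect at vblank */ }
static void posting_read(void)  { (void)plane_surf; /* flush the write */ }

static void plane_disable(void)
{
	wr_ctl(0);		/* clear enable and all control bits */
	wr_surf(0);		/* arm the double-buffered update */
	posting_read();
}
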
-static int
-skl_update_colorkey(struct drm_plane *drm_plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = drm_plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(drm_plane);
- const int pipe = intel_plane->pipe;
- const int plane = intel_plane->plane;
- u32 plane_ctl;
-
- I915_WRITE(PLANE_KEYVAL(pipe, plane), key->min_value);
- I915_WRITE(PLANE_KEYMAX(pipe, plane), key->max_value);
- I915_WRITE(PLANE_KEYMSK(pipe, plane), key->channel_mask);
-
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
- plane_ctl &= ~PLANE_CTL_KEY_ENABLE_MASK;
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- plane_ctl |= PLANE_CTL_KEY_ENABLE_DESTINATION;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- plane_ctl |= PLANE_CTL_KEY_ENABLE_SOURCE;
- I915_WRITE(PLANE_CTL(pipe, plane), plane_ctl);
-
- POSTING_READ(PLANE_CTL(pipe, plane));
-
- return 0;
-}
-
-static void
-skl_get_colorkey(struct drm_plane *drm_plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = drm_plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(drm_plane);
- const int pipe = intel_plane->pipe;
- const int plane = intel_plane->plane;
- u32 plane_ctl;
-
- key->min_value = I915_READ(PLANE_KEYVAL(pipe, plane));
- key->max_value = I915_READ(PLANE_KEYMAX(pipe, plane));
- key->channel_mask = I915_READ(PLANE_KEYMSK(pipe, plane));
-
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
-
- switch (plane_ctl & PLANE_CTL_KEY_ENABLE_MASK) {
- case PLANE_CTL_KEY_ENABLE_DESTINATION:
- key->flags = I915_SET_COLORKEY_DESTINATION;
- break;
- case PLANE_CTL_KEY_ENABLE_SOURCE:
- key->flags = I915_SET_COLORKEY_SOURCE;
- break;
- default:
- key->flags = I915_SET_COLORKEY_NONE;
- }
-}
-
static void
chv_update_csc(struct intel_plane *intel_plane, uint32_t format)
{
static void
vlv_update_plane(struct drm_plane *dplane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(dplane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int pipe = intel_plane->pipe;
int plane = intel_plane->plane;
u32 sprctl;
unsigned long sprsurf_offset, linear_offset;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- sprctl = I915_READ(SPCNTR(pipe, plane));
-
- /* Mask out pixel format bits in case we change it */
- sprctl &= ~SP_PIXFORMAT_MASK;
- sprctl &= ~SP_YUV_BYTE_ORDER_MASK;
- sprctl &= ~SP_TILED;
- sprctl &= ~SP_ROTATE_180;
+ sprctl = SP_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_YUYV:
if (obj->tiling_mode != I915_TILING_NONE)
sprctl |= SP_TILED;
- sprctl |= SP_ENABLE;
-
intel_update_sprite_watermarks(dplane, crtc, src_w, src_h,
pixel_size, true,
src_w != crtc_w || src_h != crtc_h);
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(SPKEYMINVAL(pipe, plane), key->min_value);
+ I915_WRITE(SPKEYMAXVAL(pipe, plane), key->max_value);
+ I915_WRITE(SPKEYMSK(pipe, plane), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_SOURCE)
+ sprctl |= SP_SOURCE_KEY;
+
if (IS_CHERRYVIEW(dev) && pipe == PIPE_B)
chv_update_csc(intel_plane, fb->pixel_format);
intel_update_primary_plane(intel_crtc);
- I915_WRITE(SPCNTR(pipe, plane), I915_READ(SPCNTR(pipe, plane)) &
- ~SP_ENABLE);
+ I915_WRITE(SPCNTR(pipe, plane), 0);
+
/* Activate double buffered register update */
I915_WRITE(SPSURF(pipe, plane), 0);
intel_update_sprite_watermarks(dplane, crtc, 0, 0, 0, false, false);
}
-static int
-vlv_update_colorkey(struct drm_plane *dplane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = dplane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(dplane);
- int pipe = intel_plane->pipe;
- int plane = intel_plane->plane;
- u32 sprctl;
-
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- return -EINVAL;
-
- I915_WRITE(SPKEYMINVAL(pipe, plane), key->min_value);
- I915_WRITE(SPKEYMAXVAL(pipe, plane), key->max_value);
- I915_WRITE(SPKEYMSK(pipe, plane), key->channel_mask);
-
- sprctl = I915_READ(SPCNTR(pipe, plane));
- sprctl &= ~SP_SOURCE_KEY;
- if (key->flags & I915_SET_COLORKEY_SOURCE)
- sprctl |= SP_SOURCE_KEY;
- I915_WRITE(SPCNTR(pipe, plane), sprctl);
-
- POSTING_READ(SPKEYMSK(pipe, plane));
-
- return 0;
-}
-
-static void
-vlv_get_colorkey(struct drm_plane *dplane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = dplane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(dplane);
- int pipe = intel_plane->pipe;
- int plane = intel_plane->plane;
- u32 sprctl;
-
- key->min_value = I915_READ(SPKEYMINVAL(pipe, plane));
- key->max_value = I915_READ(SPKEYMAXVAL(pipe, plane));
- key->channel_mask = I915_READ(SPKEYMSK(pipe, plane));
-
- sprctl = I915_READ(SPCNTR(pipe, plane));
- if (sprctl & SP_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
-}
static void
ivb_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- int pipe = intel_plane->pipe;
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ enum pipe pipe = intel_plane->pipe;
u32 sprctl, sprscale = 0;
unsigned long sprsurf_offset, linear_offset;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- sprctl = I915_READ(SPRCTL(pipe));
-
- /* Mask out pixel format bits in case we change it */
- sprctl &= ~SPRITE_PIXFORMAT_MASK;
- sprctl &= ~SPRITE_RGB_ORDER_RGBX;
- sprctl &= ~SPRITE_YUV_BYTE_ORDER_MASK;
- sprctl &= ~SPRITE_TILED;
- sprctl &= ~SPRITE_ROTATE_180;
+ sprctl = SPRITE_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
else
sprctl |= SPRITE_TRICKLE_FEED_DISABLE;
- sprctl |= SPRITE_ENABLE;
-
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
sprctl |= SPRITE_PIPE_CSC_ENABLE;
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(SPRKEYVAL(pipe), key->min_value);
+ I915_WRITE(SPRKEYMAX(pipe), key->max_value);
+ I915_WRITE(SPRKEYMSK(pipe), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ sprctl |= SPRITE_DEST_KEY;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ sprctl |= SPRITE_SOURCE_KEY;
+
I915_WRITE(SPRSTRIDE(pipe), fb->pitches[0]);
I915_WRITE(SPRPOS(pipe), (crtc_y << 16) | crtc_x);
I915_WRITE(SPRSURF(pipe), 0);
intel_flush_primary_plane(dev_priv, intel_crtc->plane);
-
- /*
- * Avoid underruns when disabling the sprite.
- * FIXME remove once watermark updates are done properly.
- */
- intel_crtc->atomic.wait_vblank = true;
- intel_crtc->atomic.update_sprite_watermarks |= (1 << drm_plane_index(plane));
-}
-
-static int
-ivb_update_colorkey(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 sprctl;
- int ret = 0;
-
- intel_plane = to_intel_plane(plane);
-
- I915_WRITE(SPRKEYVAL(intel_plane->pipe), key->min_value);
- I915_WRITE(SPRKEYMAX(intel_plane->pipe), key->max_value);
- I915_WRITE(SPRKEYMSK(intel_plane->pipe), key->channel_mask);
-
- sprctl = I915_READ(SPRCTL(intel_plane->pipe));
- sprctl &= ~(SPRITE_SOURCE_KEY | SPRITE_DEST_KEY);
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- sprctl |= SPRITE_DEST_KEY;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- sprctl |= SPRITE_SOURCE_KEY;
- I915_WRITE(SPRCTL(intel_plane->pipe), sprctl);
-
- POSTING_READ(SPRKEYMSK(intel_plane->pipe));
-
- return ret;
-}
-
-static void
-ivb_get_colorkey(struct drm_plane *plane, struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 sprctl;
-
- intel_plane = to_intel_plane(plane);
-
- key->min_value = I915_READ(SPRKEYVAL(intel_plane->pipe));
- key->max_value = I915_READ(SPRKEYMAX(intel_plane->pipe));
- key->channel_mask = I915_READ(SPRKEYMSK(intel_plane->pipe));
- key->flags = 0;
-
- sprctl = I915_READ(SPRCTL(intel_plane->pipe));
-
- if (sprctl & SPRITE_DEST_KEY)
- key->flags = I915_SET_COLORKEY_DESTINATION;
- else if (sprctl & SPRITE_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
}
static void
ilk_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int pipe = intel_plane->pipe;
unsigned long dvssurf_offset, linear_offset;
u32 dvscntr, dvsscale;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- dvscntr = I915_READ(DVSCNTR(pipe));
-
- /* Mask out pixel format bits in case we change it */
- dvscntr &= ~DVS_PIXFORMAT_MASK;
- dvscntr &= ~DVS_RGB_ORDER_XBGR;
- dvscntr &= ~DVS_YUV_BYTE_ORDER_MASK;
- dvscntr &= ~DVS_TILED;
- dvscntr &= ~DVS_ROTATE_180;
+ dvscntr = DVS_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
if (IS_GEN6(dev))
dvscntr |= DVS_TRICKLE_FEED_DISABLE; /* must disable */
- dvscntr |= DVS_ENABLE;
intel_update_sprite_watermarks(plane, crtc, src_w, src_h,
pixel_size, true,
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(DVSKEYVAL(pipe), key->min_value);
+ I915_WRITE(DVSKEYMAX(pipe), key->max_value);
+ I915_WRITE(DVSKEYMSK(pipe), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ dvscntr |= DVS_DEST_KEY;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ dvscntr |= DVS_SOURCE_KEY;
+
I915_WRITE(DVSSTRIDE(pipe), fb->pitches[0]);
I915_WRITE(DVSPOS(pipe), (crtc_y << 16) | crtc_x);
intel_update_primary_plane(intel_crtc);
- I915_WRITE(DVSCNTR(pipe), I915_READ(DVSCNTR(pipe)) & ~DVS_ENABLE);
+ I915_WRITE(DVSCNTR(pipe), 0);
/* Disable the scaler */
I915_WRITE(DVSSCALE(pipe), 0);
+
/* Flush double buffered register updates */
I915_WRITE(DVSSURF(pipe), 0);
intel_flush_primary_plane(dev_priv, intel_crtc->plane);
-
- /*
- * Avoid underruns when disabling the sprite.
- * FIXME remove once watermark updates are done properly.
- */
- intel_crtc->atomic.wait_vblank = true;
- intel_crtc->atomic.update_sprite_watermarks |= (1 << drm_plane_index(plane));
}
/**
hsw_disable_ips(intel_crtc);
}
-static int
-ilk_update_colorkey(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 dvscntr;
- int ret = 0;
-
- intel_plane = to_intel_plane(plane);
-
- I915_WRITE(DVSKEYVAL(intel_plane->pipe), key->min_value);
- I915_WRITE(DVSKEYMAX(intel_plane->pipe), key->max_value);
- I915_WRITE(DVSKEYMSK(intel_plane->pipe), key->channel_mask);
-
- dvscntr = I915_READ(DVSCNTR(intel_plane->pipe));
- dvscntr &= ~(DVS_SOURCE_KEY | DVS_DEST_KEY);
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- dvscntr |= DVS_DEST_KEY;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- dvscntr |= DVS_SOURCE_KEY;
- I915_WRITE(DVSCNTR(intel_plane->pipe), dvscntr);
-
- POSTING_READ(DVSKEYMSK(intel_plane->pipe));
-
- return ret;
-}
-
-static void
-ilk_get_colorkey(struct drm_plane *plane, struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 dvscntr;
-
- intel_plane = to_intel_plane(plane);
-
- key->min_value = I915_READ(DVSKEYVAL(intel_plane->pipe));
- key->max_value = I915_READ(DVSKEYMAX(intel_plane->pipe));
- key->channel_mask = I915_READ(DVSKEYMSK(intel_plane->pipe));
- key->flags = 0;
-
- dvscntr = I915_READ(DVSCNTR(intel_plane->pipe));
-
- if (dvscntr & DVS_DEST_KEY)
- key->flags = I915_SET_COLORKEY_DESTINATION;
- else if (dvscntr & DVS_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
-}
-
static bool colorkey_enabled(struct intel_plane *intel_plane)
{
- struct drm_intel_sprite_colorkey key;
-
- intel_plane->get_colorkey(&intel_plane->base, &key);
-
- return key.flags != I915_SET_COLORKEY_NONE;
+ return intel_plane->ckey.flags != I915_SET_COLORKEY_NONE;
}
static int
if (!intel_crtc->primary_enabled && !state->hides_primary)
intel_crtc->atomic.post_enable_primary = true;
- /* Update watermarks on tiling changes. */
- if (!plane->state->fb || !state->base.fb ||
- plane->state->fb->modifier[0] !=
- state->base.fb->modifier[0])
+ if (intel_wm_need_update(plane, &state->base))
intel_crtc->atomic.update_wm = true;
+
+ if (!state->visible) {
+ /*
+ * Avoid underruns when disabling the sprite.
+ * FIXME remove once watermark updates are done properly.
+ */
+ intel_crtc->atomic.wait_vblank = true;
+ intel_crtc->atomic.update_sprite_watermarks |=
+ (1 << drm_plane_index(plane));
+ }
}
return 0;
struct intel_crtc *intel_crtc;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_framebuffer *fb = state->base.fb;
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int crtc_x, crtc_y;
unsigned int crtc_w, crtc_h;
uint32_t src_x, src_y, src_w, src_h;
crtc = crtc ? crtc : plane->crtc;
intel_crtc = to_intel_crtc(crtc);
- plane->fb = state->base.fb;
- intel_plane->obj = obj;
+ plane->fb = fb;
if (intel_crtc->active) {
intel_crtc->primary_enabled = !state->hides_primary;
src_y = state->src.y1;
src_w = drm_rect_width(&state->src);
src_h = drm_rect_height(&state->src);
- intel_plane->update_plane(plane, crtc, fb, obj,
+ intel_plane->update_plane(plane, crtc, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
src_x, src_y, src_w, src_h);
} else {
if ((set->flags & (I915_SET_COLORKEY_DESTINATION | I915_SET_COLORKEY_SOURCE)) == (I915_SET_COLORKEY_DESTINATION | I915_SET_COLORKEY_SOURCE))
return -EINVAL;
+ if (IS_VALLEYVIEW(dev) &&
+ set->flags & I915_SET_COLORKEY_DESTINATION)
+ return -EINVAL;
+
drm_modeset_lock_all(dev);
plane = drm_plane_find(dev, set->plane_id);
}
intel_plane = to_intel_plane(plane);
- ret = intel_plane->update_colorkey(plane, set);
-
-out_unlock:
- drm_modeset_unlock_all(dev);
- return ret;
-}
-
-int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
-{
- struct drm_intel_sprite_colorkey *get = data;
- struct drm_plane *plane;
- struct intel_plane *intel_plane;
- int ret = 0;
+ intel_plane->ckey = *set;
- drm_modeset_lock_all(dev);
-
- plane = drm_plane_find(dev, get->plane_id);
- if (!plane) {
- ret = -ENOENT;
- goto out_unlock;
- }
-
- intel_plane = to_intel_plane(plane);
- intel_plane->get_colorkey(plane, get);
+ /*
+ * The only way this could fail would be due to
+ * the current plane state being unsupportable already,
+ * and we don't consider that an error for the
+ * colorkey ioctl. So just ignore any error.
+ */
+ intel_plane_restore(plane);
out_unlock:
drm_modeset_unlock_all(dev);
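
With the colorkey now cached in intel_plane->ckey and applied on the next plane update, the ioctl becomes a plain state setter. A hypothetical userspace sketch of programming a source key, assuming the struct drm_intel_sprite_colorkey layout and DRM_IOCTL_I915_SET_SPRITE_COLORKEY from <drm/i915_drm.h>, an already-open DRM fd, and a known sprite plane id:

#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int set_source_key(int drm_fd, __u32 plane_id, __u32 argb)
{
	struct drm_intel_sprite_colorkey key = {
		.plane_id     = plane_id,
		.min_value    = argb,
		.max_value    = argb,
		.channel_mask = 0x00ffffff,	/* match RGB, ignore alpha */
		.flags        = I915_SET_COLORKEY_SOURCE,
	};

	return ioctl(drm_fd, DRM_IOCTL_I915_SET_SPRITE_COLORKEY, &key);
}

Note that with this patch a destination key is rejected with -EINVAL on VLV/CHV up front, matching what the removed vlv_update_colorkey() used to enforce.
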
int intel_plane_restore(struct drm_plane *plane)
{
- if (!plane->crtc || !plane->fb)
+ if (!plane->crtc || !plane->state->fb)
return 0;
- return plane->funcs->update_plane(plane, plane->crtc, plane->fb,
+ return plane->funcs->update_plane(plane, plane->crtc, plane->state->fb,
plane->state->crtc_x, plane->state->crtc_y,
plane->state->crtc_w, plane->state->crtc_h,
plane->state->src_x, plane->state->src_y,
intel_plane->max_downscale = 16;
intel_plane->update_plane = ilk_update_plane;
intel_plane->disable_plane = ilk_disable_plane;
- intel_plane->update_colorkey = ilk_update_colorkey;
- intel_plane->get_colorkey = ilk_get_colorkey;
if (IS_GEN6(dev)) {
plane_formats = snb_plane_formats;
if (IS_VALLEYVIEW(dev)) {
intel_plane->update_plane = vlv_update_plane;
intel_plane->disable_plane = vlv_disable_plane;
- intel_plane->update_colorkey = vlv_update_colorkey;
- intel_plane->get_colorkey = vlv_get_colorkey;
plane_formats = vlv_plane_formats;
num_plane_formats = ARRAY_SIZE(vlv_plane_formats);
} else {
intel_plane->update_plane = ivb_update_plane;
intel_plane->disable_plane = ivb_disable_plane;
- intel_plane->update_colorkey = ivb_update_colorkey;
- intel_plane->get_colorkey = ivb_get_colorkey;
plane_formats = snb_plane_formats;
num_plane_formats = ARRAY_SIZE(snb_plane_formats);
intel_plane->max_downscale = 1;
intel_plane->update_plane = skl_update_plane;
intel_plane->disable_plane = skl_disable_plane;
- intel_plane->update_colorkey = skl_update_colorkey;
- intel_plane->get_colorkey = skl_get_colorkey;
plane_formats = skl_plane_formats;
num_plane_formats = ARRAY_SIZE(skl_plane_formats);
if (intel_get_load_detect_pipe(connector, &mode, &tmp, &ctx)) {
type = intel_tv_detect_type(intel_tv, connector);
- intel_release_load_detect_pipe(connector, &tmp);
+ intel_release_load_detect_pipe(connector, &tmp, &ctx);
status = type < 0 ?
connector_status_disconnected :
connector_status_connected;
.atomic_get_property = intel_connector_atomic_get_property,
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_tv_connector_helper_funcs = {
WARN(1, "Unclaimed register detected %s %s register 0x%x\n",
when, op, reg);
__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+ i915.mmio_debug--; /* Only report the first N failures */
}
}
static void
hsw_unclaimed_reg_detect(struct drm_i915_private *dev_priv)
{
- if (i915.mmio_debug)
+ static bool mmio_debug_once = true;
+
+ if (i915.mmio_debug || !mmio_debug_once)
return;
if (__raw_i915_read32(dev_priv, FPGA_DBG) & FPGA_DBG_RM_NOCLAIM) {
- DRM_ERROR("Unclaimed register detected. Please use the i915.mmio_debug=1 to debug this problem.");
+ DRM_DEBUG("Unclaimed register detected, "
+ "enabling oneshot unclaimed register reporting. "
+ "Please use i915.mmio_debug=N for more information.\n");
__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+ i915.mmio_debug = mmio_debug_once--;
}
}
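
The net effect is an escalation ladder: the first unclaimed access seen with debugging off arms exactly one round of verbose reporting, and each report then decrements the knob so the log cannot flood. A standalone sketch of the pattern with invented names:

#include <stdbool.h>

static int mmio_debug;	/* stands in for the i915.mmio_debug parameter */

static void report_unclaimed_access(void)
{
	if (!mmio_debug)
		return;
	/* ... print the offending register here ... */
	mmio_debug--;		/* only report the first N failures */
}

static void detect_unclaimed_access(bool hw_flagged)
{
	static int once = 1;

	if (mmio_debug || !once)
		return;
	if (hw_flagged)
		mmio_debug = once--;	/* arm a single reporting pass */
}
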
/* We need to init first for ECOBUS access and then
* determine later if we want to reinit, in case MT access is
- * not working
+ * not working. In this stage we don't know which flavour this
+ * ivb is, so it is better to reset also the gen6 fw registers
+ * before the ecobus check.
*/
+
+ __raw_i915_write32(dev_priv, FORCEWAKE, 0);
+ __raw_posting_read(dev_priv, ECOBUS);
+
fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER,
FORCEWAKE_MT, FORCEWAKE_MT_ACK);
/* switch mmio to cpu's native endianness */
#ifndef __BIG_ENDIAN
- if (ioread32_native(map + 0x000004) != 0x00000000)
+ if (ioread32_native(map + 0x000004) != 0x00000000) {
#else
- if (ioread32_native(map + 0x000004) == 0x00000000)
+ if (ioread32_native(map + 0x000004) == 0x00000000) {
#endif
iowrite32_native(0x01000001, map + 0x000004);
+ ioread32_native(map);
+ }
/* read boot0 and strapping information */
boot0 = ioread32_native(map + 0x000000);
device->oclass[NVDEV_ENGINE_MSVLD ] = &gk104_msvld_oclass;
device->oclass[NVDEV_ENGINE_MSPDEC ] = &gk104_mspdec_oclass;
device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass;
+#endif
+ break;
+ case 0x126:
+ device->cname = "GM206";
+ device->oclass[NVDEV_SUBDEV_VBIOS ] = &nvkm_bios_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = gk104_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_I2C ] = gm204_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gm107_fuse_oclass;
+#if 0
+ /* looks to be some non-trivial changes */
+ device->oclass[NVDEV_SUBDEV_CLK ] = &gk104_clk_oclass;
+ /* priv ring says no to 0x10eb14 writes */
+ device->oclass[NVDEV_SUBDEV_THERM ] = &gm107_therm_oclass;
+#endif
+ device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
+ device->oclass[NVDEV_SUBDEV_DEVINIT] = gm204_devinit_oclass;
+ device->oclass[NVDEV_SUBDEV_MC ] = gk20a_mc_oclass;
+ device->oclass[NVDEV_SUBDEV_BUS ] = gf100_bus_oclass;
+ device->oclass[NVDEV_SUBDEV_TIMER ] = &gk20a_timer_oclass;
+ device->oclass[NVDEV_SUBDEV_FB ] = gm107_fb_oclass;
+ device->oclass[NVDEV_SUBDEV_LTC ] = gm107_ltc_oclass;
+ device->oclass[NVDEV_SUBDEV_IBUS ] = &gk104_ibus_oclass;
+ device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
+ device->oclass[NVDEV_SUBDEV_MMU ] = &gf100_mmu_oclass;
+ device->oclass[NVDEV_SUBDEV_BAR ] = &gf100_bar_oclass;
+ device->oclass[NVDEV_SUBDEV_PMU ] = gk208_pmu_oclass;
+#if 0
+ device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
+#endif
+ device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
+#if 0
+ device->oclass[NVDEV_ENGINE_FIFO ] = gk208_fifo_oclass;
+ device->oclass[NVDEV_ENGINE_SW ] = gf100_sw_oclass;
+ device->oclass[NVDEV_ENGINE_GR ] = gm107_gr_oclass;
+#endif
+ device->oclass[NVDEV_ENGINE_DISP ] = gm204_disp_oclass;
+#if 0
+ device->oclass[NVDEV_ENGINE_CE0 ] = &gm204_ce0_oclass;
+ device->oclass[NVDEV_ENGINE_CE1 ] = &gm204_ce1_oclass;
+ device->oclass[NVDEV_ENGINE_CE2 ] = &gm204_ce2_oclass;
+ device->oclass[NVDEV_ENGINE_MSVLD ] = &gk104_msvld_oclass;
+ device->oclass[NVDEV_ENGINE_MSPDEC ] = &gk104_mspdec_oclass;
+ device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass;
#endif
break;
default:
{
struct nvkm_device *device = nv_device(subdev);
struct nv04_fifo_priv *priv = (void *)subdev;
- uint32_t status, reassign;
- int cnt = 0;
+ u32 mask = nv_rd32(priv, NV03_PFIFO_INTR_EN_0);
+ u32 stat = nv_rd32(priv, NV03_PFIFO_INTR_0) & mask;
+ u32 reassign, chid, get, sem;
reassign = nv_rd32(priv, NV03_PFIFO_CACHES) & 1;
- while ((status = nv_rd32(priv, NV03_PFIFO_INTR_0)) && (cnt++ < 100)) {
- uint32_t chid, get;
-
- nv_wr32(priv, NV03_PFIFO_CACHES, 0);
-
- chid = nv_rd32(priv, NV03_PFIFO_CACHE1_PUSH1) & priv->base.max;
- get = nv_rd32(priv, NV03_PFIFO_CACHE1_GET);
+ nv_wr32(priv, NV03_PFIFO_CACHES, 0);
- if (status & NV_PFIFO_INTR_CACHE_ERROR) {
- nv04_fifo_cache_error(device, priv, chid, get);
- status &= ~NV_PFIFO_INTR_CACHE_ERROR;
- }
+ chid = nv_rd32(priv, NV03_PFIFO_CACHE1_PUSH1) & priv->base.max;
+ get = nv_rd32(priv, NV03_PFIFO_CACHE1_GET);
- if (status & NV_PFIFO_INTR_DMA_PUSHER) {
- nv04_fifo_dma_pusher(device, priv, chid);
- status &= ~NV_PFIFO_INTR_DMA_PUSHER;
- }
+ if (stat & NV_PFIFO_INTR_CACHE_ERROR) {
+ nv04_fifo_cache_error(device, priv, chid, get);
+ stat &= ~NV_PFIFO_INTR_CACHE_ERROR;
+ }
- if (status & NV_PFIFO_INTR_SEMAPHORE) {
- uint32_t sem;
+ if (stat & NV_PFIFO_INTR_DMA_PUSHER) {
+ nv04_fifo_dma_pusher(device, priv, chid);
+ stat &= ~NV_PFIFO_INTR_DMA_PUSHER;
+ }
- status &= ~NV_PFIFO_INTR_SEMAPHORE;
- nv_wr32(priv, NV03_PFIFO_INTR_0,
- NV_PFIFO_INTR_SEMAPHORE);
+ if (stat & NV_PFIFO_INTR_SEMAPHORE) {
+ stat &= ~NV_PFIFO_INTR_SEMAPHORE;
+ nv_wr32(priv, NV03_PFIFO_INTR_0, NV_PFIFO_INTR_SEMAPHORE);
- sem = nv_rd32(priv, NV10_PFIFO_CACHE1_SEMAPHORE);
- nv_wr32(priv, NV10_PFIFO_CACHE1_SEMAPHORE, sem | 0x1);
+ sem = nv_rd32(priv, NV10_PFIFO_CACHE1_SEMAPHORE);
+ nv_wr32(priv, NV10_PFIFO_CACHE1_SEMAPHORE, sem | 0x1);
- nv_wr32(priv, NV03_PFIFO_CACHE1_GET, get + 4);
- nv_wr32(priv, NV04_PFIFO_CACHE1_PULL0, 1);
- }
+ nv_wr32(priv, NV03_PFIFO_CACHE1_GET, get + 4);
+ nv_wr32(priv, NV04_PFIFO_CACHE1_PULL0, 1);
+ }
- if (device->card_type == NV_50) {
- if (status & 0x00000010) {
- status &= ~0x00000010;
- nv_wr32(priv, 0x002100, 0x00000010);
- }
-
- if (status & 0x40000000) {
- nv_wr32(priv, 0x002100, 0x40000000);
- nvkm_fifo_uevent(&priv->base);
- status &= ~0x40000000;
- }
+ if (device->card_type == NV_50) {
+ if (stat & 0x00000010) {
+ stat &= ~0x00000010;
+ nv_wr32(priv, 0x002100, 0x00000010);
}
- if (status) {
- nv_warn(priv, "unknown intr 0x%08x, ch %d\n",
- status, chid);
- nv_wr32(priv, NV03_PFIFO_INTR_0, status);
- status = 0;
+ if (stat & 0x40000000) {
+ nv_wr32(priv, 0x002100, 0x40000000);
+ nvkm_fifo_uevent(&priv->base);
+ stat &= ~0x40000000;
}
-
- nv_wr32(priv, NV03_PFIFO_CACHES, reassign);
}
- if (status) {
- nv_error(priv, "still angry after %d spins, halt\n", cnt);
- nv_wr32(priv, 0x002140, 0);
- nv_wr32(priv, 0x000140, 0);
+ if (stat) {
+ nv_warn(priv, "unknown intr 0x%08x\n", stat);
+ nv_mask(priv, NV03_PFIFO_INTR_EN_0, stat, 0x00000000);
+ nv_wr32(priv, NV03_PFIFO_INTR_0, stat);
}
- nv_wr32(priv, 0x000100, 0x00000100);
+ nv_wr32(priv, NV03_PFIFO_CACHES, reassign);
}
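
The rewrite drops the old bounded retry loop in favour of a single pass: sample only the enabled interrupt bits, clear each one that is handled, and permanently mask whatever is left so one unknown source cannot wedge the machine in an interrupt storm. A standalone sketch of that dispatch shape (register helpers and bit values invented):

#include <stdint.h>

static uint32_t regs[2];	/* [0] = INTR_EN mask, [1] = INTR pending */

static uint32_t rd(int r)          { return regs[r]; }
static void wr(int r, uint32_t v)  { regs[r] = v; }

static void fifo_intr(void)
{
	uint32_t mask = rd(0);
	uint32_t stat = rd(1) & mask;	/* only look at enabled sources */

	if (stat & 0x1) {		/* known source: service and clear */
		/* ... handle it ... */
		stat &= ~0x1;
	}

	if (stat) {			/* unknown leftovers: disable and ack */
		wr(0, mask & ~stat);
		wr(1, stat);
	}
}
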
static int
const int s = 8;
const int b = mmio_vram(info, impl->bundle_size, (1 << s), access);
mmio_refn(info, 0x408004, 0x00000000, s, b);
- mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s));
mmio_refn(info, 0x418808, 0x00000000, s, b);
- mmio_refn(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s));
}
void
const int s = 8;
const int b = mmio_vram(info, impl->bundle_size, (1 << s), access);
mmio_refn(info, 0x408004, 0x00000000, s, b);
- mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s));
mmio_refn(info, 0x418808, 0x00000000, s, b);
- mmio_refn(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s));
mmio_wr32(info, 0x4064c8, (state_limit << 16) | token_limit);
}
const int s = 8;
const int b = mmio_vram(info, impl->bundle_size, (1 << s), access);
mmio_refn(info, 0x408004, 0x00000000, s, b);
- mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s));
mmio_refn(info, 0x418e24, 0x00000000, s, b);
- mmio_refn(info, 0x418e28, 0x80000000 | (impl->bundle_size >> s), 0, b);
+ mmio_wr32(info, 0x418e28, 0x80000000 | (impl->bundle_size >> s));
mmio_wr32(info, 0x4064c8, (state_limit << 16) | token_limit);
}
u16 ent = dcb_i2c_entry(bios, idx, &ver, &len);
if (ent) {
if (ver >= 0x41) {
- if (!(nv_ro32(bios, ent) & 0x80000000))
+ u32 ent_value = nv_ro32(bios, ent);
+ u8 i2c_port = (ent_value >> 27) & 0x1f;
+ u8 dpaux_port = (ent_value >> 22) & 0x1f;
+ /* value 0x1f means unused according to DCB 4.x spec */
+ if (i2c_port == 0x1f && dpaux_port == 0x1f)
info->type = DCB_I2C_UNUSED;
else
info->type = DCB_I2C_PMGR;
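
The DCB 4.1 layout packs the I2C port in bits 31:27 and the DP AUX port in bits 26:22, with 0x1f in both fields meaning unused; the old single-bit test would have reported an all-ones (unused) entry as PMGR, since bit 31 is part of the I2C-port field. A worked standalone example of the new decode (the sample entry value is made up):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t ent = 0xfffffff1;		/* hypothetical entry value */
	uint8_t i2c_port   = (ent >> 27) & 0x1f;
	uint8_t dpaux_port = (ent >> 22) & 0x1f;

	/* both fields 0x1f -> unused, even though bit 31 is set */
	printf("i2c=%#x aux=%#x unused=%d\n", i2c_port, dpaux_port,
	       i2c_port == 0x1f && dpaux_port == 0x1f);
	return 0;
}
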
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \
trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
- radeon_sync.o radeon_audio.o
+ radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o
radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o
misc |= ATOM_COMPOSITESYNC;
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
misc |= ATOM_INTERLACE;
- if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
misc |= ATOM_DOUBLE_CLOCK_MODE;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
args.ucCRTC = radeon_crtc->crtc_id;
misc |= ATOM_COMPOSITESYNC;
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
misc |= ATOM_INTERLACE;
- if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
misc |= ATOM_DOUBLE_CLOCK_MODE;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
args.ucCRTC = radeon_crtc->crtc_id;
}
}
+ if (radeon_encoder->is_mst_encoder) {
+ struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
+ struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
+
+ dp_clock = dig_connector->dp_clock;
+ }
+
/* use recommended ref_div for ss */
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (radeon_crtc->ss_enabled) {
radeon_crtc->bpc = 8;
radeon_crtc->ss_enabled = false;
- if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
+ if (radeon_encoder->is_mst_encoder) {
+ radeon_dp_mst_prepare_pll(crtc, mode);
+ } else if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
(radeon_encoder_get_dp_bridge_encoder_id(radeon_crtc->encoder) != ENCODER_OBJECT_ID_NONE)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
struct drm_connector *connector =
radeon_crtc->connector = NULL;
return false;
}
+ if (radeon_crtc->encoder) {
+ struct radeon_encoder *radeon_encoder =
+ to_radeon_encoder(radeon_crtc->encoder);
+
+ radeon_crtc->output_csc = radeon_encoder->output_csc;
+ }
if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode))
return false;
if (!atombios_crtc_prepare_pll(crtc, adjusted_mode))
#define HEADER_SIZE (BARE_ADDRESS_SIZE + 1)
static ssize_t
-radeon_dp_aux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+radeon_dp_aux_transfer_atom(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
{
struct radeon_i2c_chan *chan =
container_of(aux, struct radeon_i2c_chan, aux);
void radeon_dp_aux_init(struct radeon_connector *radeon_connector)
{
+ struct drm_device *dev = radeon_connector->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
int ret;
radeon_connector->ddc_bus->rec.hpd = radeon_connector->hpd.hpd;
radeon_connector->ddc_bus->aux.dev = radeon_connector->base.kdev;
- radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer;
+ if (ASIC_IS_DCE5(rdev)) {
+ if (radeon_auxch)
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_native;
+ else
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_atom;
+ } else {
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_atom;
+ }
ret = drm_dp_aux_register(&radeon_connector->ddc_bus->aux);
if (!ret)
/***** radeon specific DP functions *****/
-static int radeon_dp_get_max_link_rate(struct drm_connector *connector,
- u8 dpcd[DP_DPCD_SIZE])
+int radeon_dp_get_max_link_rate(struct drm_connector *connector,
+ u8 dpcd[DP_DPCD_SIZE])
{
int max_link_rate;
struct drm_connector *connector;
struct radeon_connector *radeon_connector;
struct radeon_connector_atom_dig *dig_connector;
+ struct radeon_encoder_atom_dig *dig_enc;
+ if (radeon_encoder_is_digital(encoder)) {
+ dig_enc = radeon_encoder->enc_priv;
+ if (dig_enc->active_mst_links)
+ return ATOM_ENCODER_MODE_DP_MST;
+ }
+ if (radeon_encoder->is_mst_encoder || radeon_encoder->offset)
+ return ATOM_ENCODER_MODE_DP_MST;
/* dp bridges are always DP */
if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)
return ATOM_ENCODER_MODE_DP;
};
void
-atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode)
+atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_mode, int enc_override)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
if (ENCODER_MODE_IS_DP(args.v3.ucEncoderMode) && (dp_clock == 270000))
args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ;
- args.v3.acConfig.ucDigSel = dig->dig_encoder;
+ if (enc_override != -1)
+ args.v3.acConfig.ucDigSel = enc_override;
+ else
+ args.v3.acConfig.ucDigSel = dig->dig_encoder;
args.v3.ucBitPerColor = radeon_atom_get_bpc(encoder);
break;
case 4:
else
args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_1_62GHZ;
}
- args.v4.acConfig.ucDigSel = dig->dig_encoder;
+
+ if (enc_override != -1)
+ args.v4.acConfig.ucDigSel = enc_override;
+ else
+ args.v4.acConfig.ucDigSel = dig->dig_encoder;
args.v4.ucBitPerColor = radeon_atom_get_bpc(encoder);
if (hpd_id == RADEON_HPD_NONE)
args.v4.ucHPD_ID = 0;
}
+void
+atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode)
+{
+ atombios_dig_encoder_setup2(encoder, action, panel_mode, -1);
+}
+
union dig_transmitter_control {
DIG_TRANSMITTER_CONTROL_PS_ALLOCATION v1;
DIG_TRANSMITTER_CONTROL_PARAMETERS_V2 v2;
};
void
-atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set)
+atombios_dig_transmitter_setup2(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set, int fe)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
args.v5.asConfig.ucHPDSel = 0;
else
args.v5.asConfig.ucHPDSel = hpd_id + 1;
- args.v5.ucDigEncoderSel = 1 << dig_encoder;
+ args.v5.ucDigEncoderSel = (fe != -1) ? (1 << fe) : (1 << dig_encoder);
args.v5.ucDPLaneSet = lane_set;
break;
default:
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
+void
+atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set)
+{
+ atombios_dig_transmitter_setup2(encoder, action, lane_num, lane_set, -1);
+}
+
bool
atombios_set_edp_panel_power(struct drm_connector *connector, int action)
{
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
+
+ /* don't power off encoders with active MST links */
+ if (dig->active_mst_links)
+ return;
+
if (ASIC_IS_DCE4(rdev)) {
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector)
atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0);
radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
}
+void
+atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, int fe)
+{
+ struct drm_device *dev = encoder->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
+ int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source);
+ uint8_t frev, crev;
+ union crtc_source_param args;
+
+ memset(&args, 0, sizeof(args));
+
+ if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+ return;
+
+ if (frev != 1 && crev != 2)
+ DRM_ERROR("Unknown table for MST %d, %d\n", frev, crev);
+
+ args.v2.ucCRTC = radeon_crtc->crtc_id;
+ args.v2.ucEncodeMode = ATOM_ENCODER_MODE_DP_MST;
+
+ switch (fe) {
+ case 0:
+ args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID;
+ break;
+ case 1:
+ args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID;
+ break;
+ case 2:
+ args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID;
+ break;
+ case 3:
+ args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID;
+ break;
+ case 4:
+ args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID;
+ break;
+ case 5:
+ args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID;
+ break;
+ case 6:
+ args.v2.ucEncoderID = ASIC_INT_DIG7_ENCODER_ID;
+ break;
+ }
+ atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+}
+
static void
atombios_apply_encoder_quirks(struct drm_encoder *encoder,
struct drm_display_mode *mode)
}
}
-static int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder)
+void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx)
+{
+ if (enc_idx < 0)
+ return;
+ rdev->mode_info.active_encoders &= ~(1 << enc_idx);
+}
+
+int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct drm_encoder *test_encoder;
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
uint32_t dig_enc_in_use = 0;
+ int enc_idx = -1;
+ if (fe_idx >= 0) {
+ enc_idx = fe_idx;
+ goto assigned;
+ }
if (ASIC_IS_DCE6(rdev)) {
/* DCE6 */
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
if (dig->linkb)
- return 3;
+ enc_idx = 3;
else
- return 2;
+ enc_idx = 2;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
if (dig->linkb)
- return 5;
+ enc_idx = 5;
else
- return 4;
+ enc_idx = 4;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
- return 6;
+ enc_idx = 6;
break;
}
+ goto assigned;
} else if (ASIC_IS_DCE4(rdev)) {
/* DCE4/5 */
if (ASIC_IS_DCE41(rdev) && !ASIC_IS_DCE61(rdev)) {
/* ontario follows DCE4 */
if (rdev->family == CHIP_PALM) {
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
} else
/* llano follows DCE3.2 */
- return radeon_crtc->crtc_id;
+ enc_idx = radeon_crtc->crtc_id;
} else {
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
if (dig->linkb)
- return 3;
+ enc_idx = 3;
else
- return 2;
+ enc_idx = 2;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
if (dig->linkb)
- return 5;
+ enc_idx = 5;
else
- return 4;
+ enc_idx = 4;
break;
}
}
+ goto assigned;
}
/* on DCE32 an encoder can drive any block so just use the crtc id */
if (ASIC_IS_DCE32(rdev)) {
- return radeon_crtc->crtc_id;
+ enc_idx = radeon_crtc->crtc_id;
+ goto assigned;
}
/* on DCE3 - LVTMA can only be driven by DIGB */
if (!(dig_enc_in_use & 1))
return 0;
return 1;
+
+assigned:
+ if (enc_idx == -1) {
+ DRM_ERROR("Got encoder index incorrect - returning 0\n");
+ return 0;
+ }
+ if (rdev->mode_info.active_encoders & (1 << enc_idx)) {
+ DRM_ERROR("chosen encoder in use %d\n", enc_idx);
+ }
+ rdev->mode_info.active_encoders |= (1 << enc_idx);
+ return enc_idx;
}
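
Encoder front ends are now tracked in an active_encoders bitmask so that the DP MST code can reserve extra front ends without colliding with the normal per-CRTC assignment. The bookkeeping reduces to a tiny allocate/release pair; a standalone sketch with an invented mask (the driver only warns on a collision, whereas this sketch rejects it outright):

static unsigned int active_encoders;

static int pick_encoder(int enc_idx)
{
	if (active_encoders & (1u << enc_idx))
		return -1;			/* already in use */
	active_encoders |= 1u << enc_idx;
	return enc_idx;
}

static void release_encoder(int enc_idx)
{
	if (enc_idx < 0)
		return;
	active_encoders &= ~(1u << enc_idx);
}
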
/* This only needs to be called once at startup */
ENCODER_OBJECT_ID_NONE)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
if (dig) {
- dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
+ if (dig->dig_encoder >= 0)
+ radeon_atom_release_dig_encoder(rdev, dig->dig_encoder);
+ dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder, -1);
if (radeon_encoder->active_device & ATOM_DEVICE_DFP_SUPPORT) {
if (rdev->family >= CHIP_R600)
dig->afmt = rdev->mode_info.afmt[dig->dig_encoder];
disable_done:
if (radeon_encoder_is_digital(encoder)) {
- dig = radeon_encoder->enc_priv;
- dig->dig_encoder = -1;
- }
- radeon_encoder->active_device = 0;
+ if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
+ if (rdev->asic->display.hdmi_enable)
+ radeon_hdmi_enable(rdev, encoder, false);
+ }
+ if (atombios_get_encoder_mode(encoder) != ATOM_ENCODER_MODE_DP_MST) {
+ dig = radeon_encoder->enc_priv;
+ radeon_atom_release_dig_encoder(rdev, dig->dig_encoder);
+ dig->dig_encoder = -1;
+ radeon_encoder->active_device = 0;
+ }
+ } else
+ radeon_encoder->active_device = 0;
}
/* these are handled by the primary encoders */
else /* current_index == 2 */
pl = &ps->high;
seq_printf(m, "uvd vclk: %d dclk: %d\n", rps->vclk, rps->dclk);
- if (rdev->family >= CHIP_CEDAR) {
- seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u vddci: %u\n",
- current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci);
- } else {
- seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u\n",
- current_index, pl->sclk, pl->mclk, pl->vddc);
- }
+ seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u vddci: %u\n",
+ current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci);
+ }
+}
+
+u32 btc_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+u32 btc_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
}
}
r600_dpm_print_ps_status(rdev, rps);
}
+u32 ci_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ u32 sclk = ci_get_average_sclk_freq(rdev);
+
+ return sclk;
+}
+
+u32 ci_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ u32 mclk = ci_get_average_mclk_freq(rdev);
+
+ return mclk;
+}
+
u32 ci_dpm_get_sclk(struct radeon_device *rdev, bool low)
{
struct ci_power_info *pi = ci_get_pi(rdev);
static void cik_enable_gui_idle_interrupt(struct radeon_device *rdev,
bool enable);
+/**
+ * cik_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int cik_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case GRBM_STATUS_SE2:
+ case GRBM_STATUS_SE3:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (SDMA0_STATUS_REG + SDMA0_REGISTER_OFFSET):
+ case (SDMA0_STATUS_REG + SDMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ /* TODO VCE */
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
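
The whitelist above backs a read-only register query, so tools can sample GPU status registers without debugfs. A hypothetical userspace sketch, assuming the RADEON_INFO_READ_REG request and struct drm_radeon_info from <drm/radeon_drm.h> (the info ioctl takes a pointer whose pointee carries the register offset in and the register value out), and GRBM_STATUS at offset 0x8010 on CIK:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/radeon_drm.h>

static int read_gpu_reg(int drm_fd, uint32_t *reg_inout)
{
	struct drm_radeon_info info;

	memset(&info, 0, sizeof(info));
	info.request = RADEON_INFO_READ_REG;	/* assumption: added with this patch set */
	info.value = (uintptr_t)reg_inout;	/* offset in, value out */

	return ioctl(drm_fd, DRM_IOCTL_RADEON_INFO, &info);
}

/* usage: uint32_t v = 0x8010; read_gpu_reg(fd, &v); -> v now holds GRBM_STATUS */
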
/* get temperature in millidegrees */
int ci_get_temp(struct radeon_device *rdev)
{
(CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
cp_int_cntl |= PRIV_INSTR_INT_ENABLE | PRIV_REG_INT_ENABLE;
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
dma_cntl = RREG32(SDMA0_CNTL + SDMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
dma_cntl1 = RREG32(SDMA0_CNTL + SDMA1_REGISTER_OFFSET) & ~TRAP_ENABLE;
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("cik_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("cik_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("cik_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("cik_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("cik_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("cik_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
WREG32(CP_INT_CNTL_RING0, cp_int_cntl);
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+ if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
}
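
For context, the hunks above follow the usual split-handler shape: the interrupt handler only acknowledges the RX (short-pulse) interrupt and records that DP work is pending, and the expensive sink handling runs later from process context (schedule_work(&rdev->dp_work) further down). A stand-alone sketch of that shape, with made-up register bits:

/*
 * Editorial sketch, not part of the patch: ack-then-defer in miniature.
 * The "top half" only acknowledges and sets a flag; the DP probing runs
 * in a deferred "bottom half". All names and bits here are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HPD_RX_INTERRUPT (1u << 10)	/* hypothetical status bit */
#define HPD_RX_INT_ACK   (1u << 11)	/* hypothetical ack bit */

static uint32_t fake_int_control = HPD_RX_INTERRUPT;	/* irq pending */

static bool top_half(void)
{
	bool queue_dp = false;

	if (fake_int_control & HPD_RX_INTERRUPT) {
		fake_int_control |= HPD_RX_INT_ACK;	/* models the WREG32 ack */
		fake_int_control &= ~HPD_RX_INTERRUPT;	/* model: hw drops pending */
		queue_dp = true;			/* defer the real work */
	}
	return queue_dp;
}

static void dp_work(void)
{
	printf("bottom half: probe DP sink, service MST messages\n");
}

int main(void)
{
	if (top_half())		/* stands in for schedule_work() */
		dp_work();
	return 0;
}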
/**
u8 me_id, pipe_id, queue_id;
u32 ring_index;
bool queue_hotplug = false;
+ bool queue_dp = false;
bool queue_reset = false;
u32 addr, status, mc_client;
bool queue_thermal = false;
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_reset) {
# define CLK_OD(x) ((x) << 6)
# define CLK_OD_MASK (0x1f << 6)
+#define UVD_STATUS 0xf6bc
+
/* UVD clocks */
#define CG_DCLK_CNTL 0xC050009C
}
}
+/**
+ * evergreen_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int evergreen_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case DMA_STATUS_REG:
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
void evergreen_tiling_fields(unsigned tiling_flags, unsigned *bankw,
unsigned *bankh, unsigned *mtaspect,
unsigned *tile_split)
return 0;
}
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
if (rdev->family == CHIP_ARUBA)
thermal_int = RREG32(TN_CG_THERMAL_INT_CTRL) &
~(THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW);
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("evergreen_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("evergreen_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("evergreen_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("evergreen_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("evergreen_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("evergreen_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.afmt[0]) {
DRM_DEBUG("evergreen_irq_set: hdmi 0\n");
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
+
if (rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG) {
tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET);
tmp |= AFMT_AZ_FORMAT_WTRIG_ACK;
u32 ring_index;
bool queue_hotplug = false;
bool queue_hdmi = false;
+ bool queue_dp = false;
bool queue_thermal = false;
u32 status, addr;
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_hdmi)
#define UVD_UDEC_DBW_ADDR_CONFIG 0xef54
#define UVD_RBC_RB_RPTR 0xf690
#define UVD_RBC_RB_WPTR 0xf694
+#define UVD_STATUS 0xf6bc
/*
* PM4
}
}
+u32 kv_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+ u32 current_index =
+ (RREG32_SMC(TARGET_AND_CURRENT_PROFILE_INDEX) & CURR_SCLK_INDEX_MASK) >>
+ CURR_SCLK_INDEX_SHIFT;
+ u32 sclk;
+
+ if (current_index >= SMU__NUM_SCLK_DPM_STATE) {
+ return 0;
+ } else {
+ sclk = be32_to_cpu(pi->graphics_level[current_index].SclkFrequency);
+ return sclk;
+ }
+}
+
+u32 kv_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
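A note on the be32_to_cpu() above: the SMU firmware's graphics_level table is stored big-endian, so the driver byte-swaps before using SclkFrequency on a little-endian host. A stand-alone model of the conversion; the 10 kHz clock unit used in the comment is the driver's usual convention, stated here as an assumption:

/* Editorial sketch, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

/* read a 32-bit big-endian value from memory, host-endianness agnostic */
static uint32_t be32_read(const void *p)
{
	const uint8_t *b = p;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8) | b[3];
}

int main(void)
{
	/* 90000 (0x00015f90) stored big-endian, as the SMU table would */
	uint8_t raw[4] = { 0x00, 0x01, 0x5f, 0x90 };
	uint32_t sclk = be32_read(raw);

	/* assuming 10 kHz units, 90000 corresponds to 900 MHz */
	printf("sclk = %u (%u MHz)\n", (unsigned)sclk, (unsigned)sclk / 100);
	return 0;
}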
void kv_dpm_print_power_state(struct radeon_device *rdev,
struct radeon_ps *rps)
{
return err;
}
+/**
+ * cayman_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int cayman_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (DMA_STATUS_REG + DMA0_REGISTER_OFFSET):
+ case (DMA_STATUS_REG + DMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
int tn_get_temp(struct radeon_device *rdev)
{
u32 temp = RREG32_SMC(TN_CURRENT_GNB_TEMP) & 0x7ff;
}
}
+u32 ni_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 ni_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->mclk;
+ }
+}
+
u32 ni_dpm_get_sclk(struct radeon_device *rdev, bool low)
{
struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
# define NI_REGAMMA_PROG_B 4
# define NI_OVL_REGAMMA_MODE(x) (((x) & 0x7) << 4)
+#define NI_DP_MSE_LINK_TIMING 0x73a0
+# define NI_DP_MSE_LINK_FRAME(x) (((x) & 0x3ff) << 0)
+# define NI_DP_MSE_LINK_LINE(x) (((x) & 0x3) << 16)
+
+#define NI_DP_MSE_MISC_CNTL 0x736c
+# define NI_DP_MSE_BLANK_CODE(x) (((x) & 0x1) << 0)
+# define NI_DP_MSE_TIMESTAMP_MODE(x) (((x) & 0x1) << 4)
+# define NI_DP_MSE_ZERO_ENCODER(x) (((x) & 0x1) << 8)
+
+#define NI_DP_MSE_RATE_CNTL 0x7384
+# define NI_DP_MSE_RATE_Y(x) (((x) & 0x3ffffff) << 0)
+# define NI_DP_MSE_RATE_X(x) (((x) & 0x3f) << 26)
+
+#define NI_DP_MSE_RATE_UPDATE 0x738c
+
+#define NI_DP_MSE_SAT0 0x7390
+# define NI_DP_MSE_SAT_SRC0(x) (((x) & 0x7) << 0)
+# define NI_DP_MSE_SAT_SLOT_COUNT0(x) (((x) & 0x3f) << 8)
+# define NI_DP_MSE_SAT_SRC1(x) (((x) & 0x7) << 16)
+# define NI_DP_MSE_SAT_SLOT_COUNT1(x) (((x) & 0x3f) << 24)
+
+#define NI_DP_MSE_SAT1 0x7394
+
+#define NI_DP_MSE_SAT2 0x7398
+
+#define NI_DP_MSE_SAT_UPDATE 0x739c
+
+#define NI_DIG_BE_CNTL 0x7140
+# define NI_DIG_FE_SOURCE_SELECT(x) (((x) & 0x7f) << 8)
+# define NI_DIG_FE_DIG_MODE(x) (((x) & 0x7) << 16)
+# define NI_DIG_MODE_DP_SST 0
+# define NI_DIG_MODE_LVDS 1
+# define NI_DIG_MODE_TMDS_DVI 2
+# define NI_DIG_MODE_TMDS_HDMI 3
+# define NI_DIG_MODE_DP_MST 5
+# define NI_DIG_HPD_SELECT(x) (((x) & 0x7) << 28)
+
+#define NI_DIG_FE_CNTL 0x7000
+# define NI_DIG_SOURCE_SELECT(x) (((x) & 0x3) << 0)
+# define NI_DIG_STEREOSYNC_SELECT(x) (((x) & 0x3) << 4)
+# define NI_DIG_STEREOSYNC_GATE_EN(x) (((x) & 0x1) << 8)
+# define NI_DIG_DUAL_LINK_ENABLE(x) (((x) & 0x1) << 16)
+# define NI_DIG_SWAP(x) (((x) & 0x1) << 18)
+# define NI_DIG_SYMCLK_FE_ON (0x1 << 24)
#endif
#define MC_PMG_CMD_MRS2 0x2b5c
#define MC_SEQ_PMG_CMD_MRS2_LP 0x2b60
+#define AUX_CONTROL 0x6200
+#define AUX_EN (1 << 0)
+#define AUX_LS_READ_EN (1 << 8)
+#define AUX_LS_UPDATE_DISABLE(x) (((x) & 0x1) << 12)
+#define AUX_HPD_DISCON(x) (((x) & 0x1) << 16)
+#define AUX_DET_EN (1 << 18)
+#define AUX_HPD_SEL(x) (((x) & 0x7) << 20)
+#define AUX_IMPCAL_REQ_EN (1 << 24)
+#define AUX_TEST_MODE (1 << 28)
+#define AUX_DEGLITCH_EN (1 << 29)
+#define AUX_SW_CONTROL 0x6204
+#define AUX_SW_GO (1 << 0)
+#define AUX_LS_READ_TRIG (1 << 2)
+#define AUX_SW_START_DELAY(x) (((x) & 0xf) << 4)
+#define AUX_SW_WR_BYTES(x) (((x) & 0x1f) << 16)
+
+#define AUX_SW_INTERRUPT_CONTROL 0x620c
+#define AUX_SW_DONE_INT (1 << 0)
+#define AUX_SW_DONE_ACK (1 << 1)
+#define AUX_SW_DONE_MASK (1 << 2)
+#define AUX_SW_LS_DONE_INT (1 << 4)
+#define AUX_SW_LS_DONE_MASK (1 << 6)
+#define AUX_SW_STATUS 0x6210
+#define AUX_SW_DONE (1 << 0)
+#define AUX_SW_REQ (1 << 1)
+#define AUX_SW_RX_TIMEOUT_STATE(x) (((x) & 0x7) << 4)
+#define AUX_SW_RX_TIMEOUT (1 << 7)
+#define AUX_SW_RX_OVERFLOW (1 << 8)
+#define AUX_SW_RX_HPD_DISCON (1 << 9)
+#define AUX_SW_RX_PARTIAL_BYTE (1 << 10)
+#define AUX_SW_NON_AUX_MODE (1 << 11)
+#define AUX_SW_RX_MIN_COUNT_VIOL (1 << 12)
+#define AUX_SW_RX_INVALID_STOP (1 << 14)
+#define AUX_SW_RX_SYNC_INVALID_L (1 << 17)
+#define AUX_SW_RX_SYNC_INVALID_H (1 << 18)
+#define AUX_SW_RX_INVALID_START (1 << 19)
+#define AUX_SW_RX_RECV_NO_DET (1 << 20)
+#define AUX_SW_RX_RECV_INVALID_H (1 << 22)
+#define AUX_SW_RX_RECV_INVALID_V (1 << 23)
+
+#define AUX_SW_DATA 0x6218
+#define AUX_SW_DATA_RW (1 << 0)
+#define AUX_SW_DATA_MASK(x) (((x) & 0xff) << 8)
+#define AUX_SW_DATA_INDEX(x) (((x) & 0x1f) << 16)
+#define AUX_SW_AUTOINCREMENT_DISABLE (1 << 31)
+
#define LB_SYNC_RESET_SEL 0x6b28
#define LB_SYNC_RESET_SEL_MASK (3 << 0)
#define LB_SYNC_RESET_SEL_SHIFT 0
#define UVD_UDEC_DBW_ADDR_CONFIG 0xEF54
#define UVD_RBC_RB_RPTR 0xF690
#define UVD_RBC_RB_WPTR 0xF694
+#define UVD_STATUS 0xf6bc
/*
* PM4
extern int evergreen_rlc_resume(struct radeon_device *rdev);
extern void rv770_set_clk_bypass_mode(struct radeon_device *rdev);
+/**
+ * r600_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int r600_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case R_000E50_SRBM_STATUS:
+ case DMA_STATUS_REG:
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
/**
* r600_get_xclk - get the xclk
*
extern int radeon_use_pflipirq;
extern int radeon_bapm;
extern int radeon_backlight;
+extern int radeon_auxch;
+extern int radeon_mst;
/*
* Copy from radeon_drv.h so we don't have to include both and have conflicting
u32 (*get_xclk)(struct radeon_device *rdev);
/* get the gpu clock counter */
uint64_t (*get_gpu_clock_counter)(struct radeon_device *rdev);
+ /* get register for info ioctl */
+ int (*get_allowed_info_register)(struct radeon_device *rdev, u32 reg, u32 *val);
/* gart */
struct {
void (*tlb_flush)(struct radeon_device *rdev);
u32 (*fan_ctrl_get_mode)(struct radeon_device *rdev);
int (*set_fan_speed_percent)(struct radeon_device *rdev, u32 speed);
int (*get_fan_speed_percent)(struct radeon_device *rdev, u32 *speed);
+ u32 (*get_current_sclk)(struct radeon_device *rdev);
+ u32 (*get_current_mclk)(struct radeon_device *rdev);
} dpm;
/* pageflipping */
struct {
struct radeon_rlc rlc;
struct radeon_mec mec;
struct work_struct hotplug_work;
+ struct work_struct dp_work;
struct work_struct audio_work;
int num_crtc; /* number of crtcs */
struct mutex dc_hw_i2c_mutex; /* display controller hw i2c mutex */
#define radeon_mc_wait_for_idle(rdev) (rdev)->asic->mc_wait_for_idle((rdev))
#define radeon_get_xclk(rdev) (rdev)->asic->get_xclk((rdev))
#define radeon_get_gpu_clock_counter(rdev) (rdev)->asic->get_gpu_clock_counter((rdev))
+#define radeon_get_allowed_info_register(rdev, r, v) (rdev)->asic->get_allowed_info_register((rdev), (r), (v))
#define radeon_dpm_init(rdev) rdev->asic->dpm.init((rdev))
#define radeon_dpm_setup_asic(rdev) rdev->asic->dpm.setup_asic((rdev))
#define radeon_dpm_enable(rdev) rdev->asic->dpm.enable((rdev))
#define radeon_dpm_vblank_too_short(rdev) rdev->asic->dpm.vblank_too_short((rdev))
#define radeon_dpm_powergate_uvd(rdev, g) rdev->asic->dpm.powergate_uvd((rdev), (g))
#define radeon_dpm_enable_bapm(rdev, e) rdev->asic->dpm.enable_bapm((rdev), (e))
+#define radeon_dpm_get_current_sclk(rdev) rdev->asic->dpm.get_current_sclk((rdev))
+#define radeon_dpm_get_current_mclk(rdev) rdev->asic->dpm.get_current_mclk((rdev))
/* Common functions */
/* AGP */
}
}
+static int radeon_invalid_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ return -EINVAL;
+}
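
The stub above lets every pre-r600 ASIC fill the new .get_allowed_info_register hook with a safe default instead of a NULL pointer, so callers never need a NULL check; the long run of table updates below wires it up. A stand-alone sketch of that pattern, with all names hypothetical:

/* Editorial sketch, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

#define EINVAL 22

struct fake_asic {
	int (*get_allowed_info_register)(uint32_t reg, uint32_t *val);
};

static int fake_invalid_get_reg(uint32_t reg, uint32_t *val)
{
	return -EINVAL;		/* this ASIC exposes no registers */
}

static int fake_r600_get_reg(uint32_t reg, uint32_t *val)
{
	switch (reg) {
	case 0x8010:			/* pretend GRBM_STATUS */
		*val = 0xdeadbeef;	/* stand-in for RREG32(reg) */
		return 0;
	default:
		return -EINVAL;		/* everything else is rejected */
	}
}

int main(void)
{
	struct fake_asic old = { fake_invalid_get_reg };
	struct fake_asic new = { fake_r600_get_reg };
	uint32_t v = 0;
	int r;

	/* caller invokes the hook unconditionally on either table */
	printf("old asic: %d\n", old.get_allowed_info_register(0x8010, &v));
	r = new.get_allowed_info_register(0x8010, &v);
	printf("new asic: %d (val 0x%08x)\n", r, (unsigned)v);
	return 0;
}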
/* helper to disable agp */
/**
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r100_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r100_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs400_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs400_gart_tlb_flush,
.get_page_entry = &rs400_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs600_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs600_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs690_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs400_gart_tlb_flush,
.get_page_entry = &rs400_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rv515_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r520_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.print_power_state = &rv6xx_dpm_print_power_state,
.debugfs_print_current_performance_level = &rv6xx_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv6xx_dpm_force_performance_level,
+ .get_current_sclk = &rv6xx_dpm_get_current_sclk,
+ .get_current_mclk = &rv6xx_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rs600_page_flip,
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.print_power_state = &rs780_dpm_print_power_state,
.debugfs_print_current_performance_level = &rs780_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rs780_dpm_force_performance_level,
+ .get_current_sclk = &rs780_dpm_get_current_sclk,
+ .get_current_mclk = &rs780_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rs600_page_flip,
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.debugfs_print_current_performance_level = &rv770_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &rv770_dpm_vblank_too_short,
+ .get_current_sclk = &rv770_dpm_get_current_sclk,
+ .get_current_mclk = &rv770_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rv770_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.debugfs_print_current_performance_level = &rv770_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &cypress_dpm_vblank_too_short,
+ .get_current_sclk = &rv770_dpm_get_current_sclk,
+ .get_current_mclk = &rv770_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.print_power_state = &sumo_dpm_print_power_state,
.debugfs_print_current_performance_level = &sumo_dpm_debugfs_print_current_performance_level,
.force_performance_level = &sumo_dpm_force_performance_level,
+ .get_current_sclk = &sumo_dpm_get_current_sclk,
+ .get_current_mclk = &sumo_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.debugfs_print_current_performance_level = &btc_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &btc_dpm_vblank_too_short,
+ .get_current_sclk = &btc_dpm_get_current_sclk,
+ .get_current_mclk = &btc_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = cayman_get_allowed_info_register,
.gart = {
.tlb_flush = &cayman_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.debugfs_print_current_performance_level = &ni_dpm_debugfs_print_current_performance_level,
.force_performance_level = &ni_dpm_force_performance_level,
.vblank_too_short = &ni_dpm_vblank_too_short,
+ .get_current_sclk = &ni_dpm_get_current_sclk,
+ .get_current_mclk = &ni_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = cayman_get_allowed_info_register,
.gart = {
.tlb_flush = &cayman_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.debugfs_print_current_performance_level = &trinity_dpm_debugfs_print_current_performance_level,
.force_performance_level = &trinity_dpm_force_performance_level,
.enable_bapm = &trinity_dpm_enable_bapm,
+ .get_current_sclk = &trinity_dpm_get_current_sclk,
+ .get_current_mclk = &trinity_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &si_get_xclk,
.get_gpu_clock_counter = &si_get_gpu_clock_counter,
+ .get_allowed_info_register = si_get_allowed_info_register,
.gart = {
.tlb_flush = &si_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.fan_ctrl_get_mode = &si_fan_ctrl_get_mode,
.get_fan_speed_percent = &si_fan_ctrl_get_fan_speed_percent,
.set_fan_speed_percent = &si_fan_ctrl_set_fan_speed_percent,
+ .get_current_sclk = &si_dpm_get_current_sclk,
+ .get_current_mclk = &si_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &cik_get_xclk,
.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
+ .get_allowed_info_register = cik_get_allowed_info_register,
.gart = {
.tlb_flush = &cik_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.fan_ctrl_get_mode = &ci_fan_ctrl_get_mode,
.get_fan_speed_percent = &ci_fan_ctrl_get_fan_speed_percent,
.set_fan_speed_percent = &ci_fan_ctrl_set_fan_speed_percent,
+ .get_current_sclk = &ci_dpm_get_current_sclk,
+ .get_current_mclk = &ci_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &cik_get_xclk,
.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
+ .get_allowed_info_register = cik_get_allowed_info_register,
.gart = {
.tlb_flush = &cik_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
.force_performance_level = &kv_dpm_force_performance_level,
.powergate_uvd = &kv_dpm_powergate_uvd,
.enable_bapm = &kv_dpm_enable_bapm,
+ .get_current_sclk = &kv_dpm_get_current_sclk,
+ .get_current_mclk = &kv_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
struct radeon_ring *ring);
void r600_gfx_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
+int r600_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
/* r600 irq */
int r600_irq_process(struct radeon_device *rdev);
int r600_irq_init(struct radeon_device *rdev);
struct seq_file *m);
int rv6xx_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 rv6xx_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rv6xx_dpm_get_current_mclk(struct radeon_device *rdev);
/* rs780 dpm */
int rs780_dpm_init(struct radeon_device *rdev);
int rs780_dpm_enable(struct radeon_device *rdev);
struct seq_file *m);
int rs780_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 rs780_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rs780_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* rv770,rv730,rv710,rv740
int rv770_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
bool rv770_dpm_vblank_too_short(struct radeon_device *rdev);
+u32 rv770_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rv770_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* evergreen
unsigned num_gpu_pages,
struct reservation_object *resv);
int evergreen_get_temp(struct radeon_device *rdev);
+int evergreen_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int sumo_get_temp(struct radeon_device *rdev);
int tn_get_temp(struct radeon_device *rdev);
int cypress_dpm_init(struct radeon_device *rdev);
bool btc_dpm_vblank_too_short(struct radeon_device *rdev);
void btc_dpm_debugfs_print_current_performance_level(struct radeon_device *rdev,
struct seq_file *m);
+u32 btc_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 btc_dpm_get_current_mclk(struct radeon_device *rdev);
int sumo_dpm_init(struct radeon_device *rdev);
int sumo_dpm_enable(struct radeon_device *rdev);
int sumo_dpm_late_enable(struct radeon_device *rdev);
struct seq_file *m);
int sumo_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 sumo_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 sumo_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* cayman
struct radeon_ring *ring);
void cayman_dma_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
+int cayman_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int ni_dpm_init(struct radeon_device *rdev);
void ni_dpm_setup_asic(struct radeon_device *rdev);
int ni_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
bool ni_dpm_vblank_too_short(struct radeon_device *rdev);
+u32 ni_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 ni_dpm_get_current_mclk(struct radeon_device *rdev);
int trinity_dpm_init(struct radeon_device *rdev);
int trinity_dpm_enable(struct radeon_device *rdev);
int trinity_dpm_late_enable(struct radeon_device *rdev);
int trinity_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
void trinity_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
+u32 trinity_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 trinity_dpm_get_current_mclk(struct radeon_device *rdev);
/* DCE6 - SI */
void dce6_bandwidth_update(struct radeon_device *rdev);
uint64_t si_get_gpu_clock_counter(struct radeon_device *rdev);
int si_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk);
int si_get_temp(struct radeon_device *rdev);
+int si_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int si_dpm_init(struct radeon_device *rdev);
void si_dpm_setup_asic(struct radeon_device *rdev);
int si_dpm_enable(struct radeon_device *rdev);
u32 speed);
u32 si_fan_ctrl_get_mode(struct radeon_device *rdev);
void si_fan_ctrl_set_mode(struct radeon_device *rdev, u32 mode);
+u32 si_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 si_dpm_get_current_mclk(struct radeon_device *rdev);
/* DCE8 - CIK */
void dce8_bandwidth_update(struct radeon_device *rdev);
struct radeon_ring *ring);
int ci_get_temp(struct radeon_device *rdev);
int kv_get_temp(struct radeon_device *rdev);
+int cik_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int ci_dpm_init(struct radeon_device *rdev);
int ci_dpm_enable(struct radeon_device *rdev);
enum radeon_dpm_forced_level level);
bool ci_dpm_vblank_too_short(struct radeon_device *rdev);
void ci_dpm_powergate_uvd(struct radeon_device *rdev, bool gate);
+u32 ci_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 ci_dpm_get_current_mclk(struct radeon_device *rdev);
int ci_fan_ctrl_get_fan_speed_percent(struct radeon_device *rdev,
u32 *speed);
enum radeon_dpm_forced_level level);
void kv_dpm_powergate_uvd(struct radeon_device *rdev, bool gate);
void kv_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
+u32 kv_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 kv_dpm_get_current_mclk(struct radeon_device *rdev);
/* uvd v1.0 */
uint32_t uvd_v1_0_get_rptr(struct radeon_device *rdev,
radeon_link_encoder_connector(dev);
+ radeon_setup_mst_connector(dev);
return true;
}
struct radeon_device *rdev = encoder->dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector = NULL;
u8 buffer[HDMI_INFOFRAME_HEADER_SIZE + HDMI_AVI_INFOFRAME_SIZE];
struct hdmi_avi_infoframe frame;
int err;
+ list_for_each_entry(connector,
+ &encoder->dev->mode_config.connector_list, head) {
+ if (connector->encoder == encoder) {
+ radeon_connector = to_radeon_connector(connector);
+ break;
+ }
+ }
+
+ if (!radeon_connector) {
+ DRM_ERROR("Couldn't find encoder's connector\n");
+ return -ENOENT;
+ }
+
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %d\n", err);
return err;
}
+ if (drm_rgb_quant_range_selectable(radeon_connector_edid(connector))) {
+ if (radeon_encoder->output_csc == RADEON_OUTPUT_CSC_TVRGB)
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_LIMITED;
+ else
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_FULL;
+ } else {
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_DEFAULT;
+ }
+
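The quantization-range logic above keys off whether the sink's EDID advertises a selectable RGB range and off the new output_csc property: limited range only when the CSC is set to TV RGB, full range otherwise, and the default for sinks with a fixed range. A stand-alone model of just that decision; the enum values are stand-ins, not the kernel's HDMI_QUANTIZATION_* constants:

/* Editorial sketch, not part of the patch. */
#include <stdbool.h>
#include <stdio.h>

enum quant_range { QUANT_DEFAULT, QUANT_LIMITED, QUANT_FULL };
enum output_csc { CSC_BYPASS, CSC_TVRGB, CSC_YCBCR601, CSC_YCBCR709 };

static enum quant_range pick_range(bool selectable, enum output_csc csc)
{
	if (!selectable)
		return QUANT_DEFAULT;	/* sink keeps its fixed range */
	return csc == CSC_TVRGB ? QUANT_LIMITED : QUANT_FULL;
}

int main(void)
{
	printf("tvrgb, selectable:  %d\n", pick_range(true, CSC_TVRGB));
	printf("bypass, selectable: %d\n", pick_range(true, CSC_BYPASS));
	printf("fixed-range sink:   %d\n", pick_range(false, CSC_BYPASS));
	return 0;
}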
err = hdmi_avi_infoframe_pack(&frame, buffer, sizeof(buffer));
if (err < 0) {
DRM_ERROR("failed to pack AVI infoframe: %d\n", err);
#include <drm/drm_edid.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
+#include <drm/drm_dp_mst_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
#include "radeon_audio.h"
#include <linux/pm_runtime.h>
+static int radeon_dp_handle_hpd(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ int ret;
+
+ ret = radeon_dp_mst_check_status(radeon_connector);
+ if (ret == -EINVAL)
+ return 1;
+ return 0;
+}
void radeon_connector_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+ struct radeon_connector_atom_dig *dig_connector =
+ radeon_connector->con_priv;
+
+ if (radeon_connector->is_mst_connector)
+ return;
+ if (dig_connector->is_mst) {
+ radeon_dp_handle_hpd(connector);
+ return;
+ }
+ }
/* bail if the connector does not have hpd pin, e.g.,
* VGA, TV, etc.
*/
radeon_property_change_mode(&radeon_encoder->base);
}
+ if (property == rdev->mode_info.output_csc_property) {
+ if (connector->encoder)
+ radeon_encoder = to_radeon_encoder(connector->encoder);
+ else {
+ struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
+ radeon_encoder = to_radeon_encoder(connector_funcs->best_encoder(connector));
+ }
+
+ if (radeon_encoder->output_csc == val)
+ return 0;
+
+ radeon_encoder->output_csc = val;
+
+ if (connector->encoder->crtc) {
+ struct drm_crtc *crtc = connector->encoder->crtc;
+ struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+
+ radeon_crtc->output_csc = radeon_encoder->output_csc;
+
+ (*crtc_funcs->load_lut)(crtc);
+ }
+ }
+
return 0;
}
struct drm_encoder *encoder = radeon_best_single_encoder(connector);
int r;
+ if (radeon_dig_connector->is_mst)
+ return connector_status_disconnected;
+
r = pm_runtime_get_sync(connector->dev->dev);
if (r < 0)
return connector_status_disconnected;
radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector);
if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
ret = connector_status_connected;
- if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
radeon_dp_getdpcd(radeon_connector);
+ r = radeon_dp_mst_probe(radeon_connector);
+ if (r == 1)
+ ret = connector_status_disconnected;
+ }
} else {
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
- if (radeon_dp_getdpcd(radeon_connector))
- ret = connector_status_connected;
+ if (radeon_dp_getdpcd(radeon_connector)) {
+ r = radeon_dp_mst_probe(radeon_connector);
+ if (r == 1)
+ ret = connector_status_disconnected;
+ else
+ ret = connector_status_connected;
+ }
} else {
/* try non-aux ddc (DP to DVI/HDMI/etc. adapter) */
if (radeon_ddc_probe(radeon_connector, false))
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
break;
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_DVID:
drm_object_attach_property(&radeon_connector->base.base,
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
/* no HPD on analog connectors */
radeon_connector->hpd.hpd = RADEON_HPD_NONE;
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
/* no HPD on analog connectors */
radeon_connector->hpd.hpd = RADEON_HPD_NONE;
connector->interlace_allowed = true;
rdev->mode_info.load_detect_property,
1);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_DVII)
connector->doublescan_allowed = true;
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_HDMIB)
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
connector->interlace_allowed = true;
/* in theory with a DP to VGA converter... */
connector->doublescan_allowed = false;
connector->display_info.subpixel_order = subpixel_order;
drm_connector_register(connector);
}
+
+void radeon_setup_mst_connector(struct drm_device *dev)
+{
+ struct radeon_device *rdev = dev->dev_private;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector;
+
+ if (!ASIC_IS_DCE5(rdev))
+ return;
+
+ if (radeon_mst == 0)
+ return;
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ int ret;
+
+ radeon_connector = to_radeon_connector(connector);
+
+ if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+ continue;
+
+ ret = radeon_dp_mst_init(radeon_connector);
+ }
+}
DRM_ERROR("registering gem debugfs failed (%d).\n", r);
}
+ r = radeon_mst_debugfs_init(rdev);
+ if (r) {
+ DRM_ERROR("registering mst debugfs failed (%d).\n", r);
+ }
+
if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) {
/* Acceleration not working on AGP card try again
* with fallback to PCI or PCIE GART
(NI_GRPH_REGAMMA_MODE(NI_REGAMMA_BYPASS) |
NI_OVL_REGAMMA_MODE(NI_REGAMMA_BYPASS)));
WREG32(NI_OUTPUT_CSC_CONTROL + radeon_crtc->crtc_offset,
- (NI_OUTPUT_CSC_GRPH_MODE(NI_OUTPUT_CSC_BYPASS) |
+ (NI_OUTPUT_CSC_GRPH_MODE(radeon_crtc->output_csc) |
NI_OUTPUT_CSC_OVL_MODE(NI_OUTPUT_CSC_BYPASS)));
/* XXX match this to the depth of the crtc fmt block, move to modeset? */
WREG32(0x6940 + radeon_crtc->crtc_offset, 0);
{ RADEON_FMT_DITHER_ENABLE, "on" },
};
+static struct drm_prop_enum_list radeon_output_csc_enum_list[] =
+{ { RADEON_OUTPUT_CSC_BYPASS, "bypass" },
+ { RADEON_OUTPUT_CSC_TVRGB, "tvrgb" },
+ { RADEON_OUTPUT_CSC_YCBCR601, "ycbcr601" },
+ { RADEON_OUTPUT_CSC_YCBCR709, "ycbcr709" },
+};
+
static int radeon_modeset_create_props(struct radeon_device *rdev)
{
int sz;
"dither",
radeon_dither_enum_list, sz);
+ sz = ARRAY_SIZE(radeon_output_csc_enum_list);
+ rdev->mode_info.output_csc_property =
+ drm_property_create_enum(rdev->ddev, 0,
+ "output_csc",
+ radeon_output_csc_enum_list, sz);
+
return 0;
}
--- /dev/null
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie
+ */
+#include <drm/drmP.h>
+#include <drm/radeon_drm.h>
+#include "radeon.h"
+#include "nid.h"
+
+#define AUX_RX_ERROR_FLAGS (AUX_SW_RX_OVERFLOW | \
+ AUX_SW_RX_HPD_DISCON | \
+ AUX_SW_RX_PARTIAL_BYTE | \
+ AUX_SW_NON_AUX_MODE | \
+ AUX_SW_RX_MIN_COUNT_VIOL | \
+ AUX_SW_RX_INVALID_STOP | \
+ AUX_SW_RX_SYNC_INVALID_L | \
+ AUX_SW_RX_SYNC_INVALID_H | \
+ AUX_SW_RX_INVALID_START | \
+ AUX_SW_RX_RECV_NO_DET | \
+ AUX_SW_RX_RECV_INVALID_H | \
+ AUX_SW_RX_RECV_INVALID_V)
+
+#define AUX_SW_REPLY_GET_BYTE_COUNT(x) (((x) >> 24) & 0x1f)
+
+#define BARE_ADDRESS_SIZE 3
+
+static const u32 aux_offset[] =
+{
+ 0x6200 - 0x6200,
+ 0x6250 - 0x6200,
+ 0x62a0 - 0x6200,
+ 0x6300 - 0x6200,
+ 0x6350 - 0x6200,
+ 0x63a0 - 0x6200,
+};
+
+ssize_t
+radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+{
+ struct radeon_i2c_chan *chan =
+ container_of(aux, struct radeon_i2c_chan, aux);
+ struct drm_device *dev = chan->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ int ret = 0, i;
+ uint32_t tmp, ack = 0;
+ int instance = chan->rec.i2c_id & 0xf;
+ u8 byte;
+ u8 *buf = msg->buffer;
+ int retry_count = 0;
+ int bytes;
+ int msize;
+ bool is_write = false;
+
+ if (WARN_ON(msg->size > 16))
+ return -E2BIG;
+
+ switch (msg->request & ~DP_AUX_I2C_MOT) {
+ case DP_AUX_NATIVE_WRITE:
+ case DP_AUX_I2C_WRITE:
+ is_write = true;
+ break;
+ case DP_AUX_NATIVE_READ:
+ case DP_AUX_I2C_READ:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* work out two sizes required */
+ msize = 0;
+ bytes = BARE_ADDRESS_SIZE;
+ if (msg->size) {
+ msize = msg->size - 1;
+ bytes++;
+ if (is_write)
+ bytes += msg->size;
+ }
+
+ mutex_lock(&chan->mutex);
+
+ /* switch the pad to aux mode */
+ tmp = RREG32(chan->rec.mask_clk_reg);
+ tmp |= (1 << 16);
+ WREG32(chan->rec.mask_clk_reg, tmp);
+
+ /* setup AUX control register with correct HPD pin */
+ tmp = RREG32(AUX_CONTROL + aux_offset[instance]);
+
+ tmp &= AUX_HPD_SEL(0x7);
+ tmp |= AUX_HPD_SEL(chan->rec.hpd);
+ tmp |= AUX_EN | AUX_LS_READ_EN;
+
+ WREG32(AUX_CONTROL + aux_offset[instance], tmp);
+
+ /* atombios appears to write this twice; copy that behaviour */
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes));
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes));
+
+ /* write the data header into the registers */
+ /* request, address, msg size */
+ byte = (msg->request << 4) | ((msg->address >> 16) & 0xf);
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte) | AUX_SW_AUTOINCREMENT_DISABLE);
+
+ byte = (msg->address >> 8) & 0xff;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ byte = msg->address & 0xff;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ byte = msize;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ /* if we are writing - write the msg buffer */
+ if (is_write) {
+ for (i = 0; i < msg->size; i++) {
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(buf[i]));
+ }
+ }
+
+ /* clear the ACK */
+ WREG32(AUX_SW_INTERRUPT_CONTROL + aux_offset[instance], AUX_SW_DONE_ACK);
+
+ /* write the size and GO bits */
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes) | AUX_SW_GO);
+
+ /* poll the status registers - TODO irq support */
+ do {
+ tmp = RREG32(AUX_SW_STATUS + aux_offset[instance]);
+ if (tmp & AUX_SW_DONE) {
+ break;
+ }
+ usleep_range(100, 200);
+ } while (retry_count++ < 1000);
+
+ if (retry_count >= 1000) {
+ DRM_ERROR("auxch hw never signalled completion, error %08x\n", tmp);
+ ret = -EIO;
+ goto done;
+ }
+
+ if (tmp & AUX_SW_RX_TIMEOUT) {
+ DRM_DEBUG_KMS("dp_aux_ch timed out\n");
+ ret = -ETIMEDOUT;
+ goto done;
+ }
+ if (tmp & AUX_RX_ERROR_FLAGS) {
+ DRM_DEBUG_KMS("dp_aux_ch flags not zero: %08x\n", tmp);
+ ret = -EIO;
+ goto done;
+ }
+
+ bytes = AUX_SW_REPLY_GET_BYTE_COUNT(tmp);
+ if (bytes) {
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_RW | AUX_SW_AUTOINCREMENT_DISABLE);
+
+ tmp = RREG32(AUX_SW_DATA + aux_offset[instance]);
+ ack = (tmp >> 8) & 0xff;
+
+ for (i = 0; i < bytes - 1; i++) {
+ tmp = RREG32(AUX_SW_DATA + aux_offset[instance]);
+ if (buf)
+ buf[i] = (tmp >> 8) & 0xff;
+ }
+ if (buf)
+ ret = bytes - 1;
+ }
+
+ WREG32(AUX_SW_INTERRUPT_CONTROL + aux_offset[instance], AUX_SW_DONE_ACK);
+
+ if (is_write)
+ ret = msg->size;
+done:
+ mutex_unlock(&chan->mutex);
+
+ if (ret >= 0)
+ msg->reply = ack >> 4;
+ return ret;
+}
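
The header bytes pushed into AUX_SW_DATA above follow DP AUX request syntax: a 4-bit command plus the top nibble of the 20-bit address in the first byte, then the middle and low address bytes, then the length minus one ("msize"). A stand-alone computation of those four bytes, with example values only:

/* Editorial sketch, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t request = 0x9;		/* DP_AUX_NATIVE_READ */
	uint32_t address = 0x00000;	/* DPCD receiver capability base */
	uint32_t size = 16;		/* bytes requested */
	uint8_t hdr[4];

	hdr[0] = (request << 4) | ((address >> 16) & 0xf);
	hdr[1] = (address >> 8) & 0xff;
	hdr[2] = address & 0xff;
	hdr[3] = size - 1;		/* length minus one */

	printf("header: %02x %02x %02x %02x\n",
	       hdr[0], hdr[1], hdr[2], hdr[3]);
	return 0;
}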
--- /dev/null
+
+#include <drm/drmP.h>
+#include <drm/drm_dp_mst_helper.h>
+#include <drm/drm_fb_helper.h>
+
+#include "radeon.h"
+#include "atom.h"
+#include "ni_reg.h"
+
+static struct radeon_encoder *radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector);
+
+static int radeon_atom_set_enc_offset(int id)
+{
+ static const int offsets[] = { EVERGREEN_CRTC0_REGISTER_OFFSET,
+ EVERGREEN_CRTC1_REGISTER_OFFSET,
+ EVERGREEN_CRTC2_REGISTER_OFFSET,
+ EVERGREEN_CRTC3_REGISTER_OFFSET,
+ EVERGREEN_CRTC4_REGISTER_OFFSET,
+ EVERGREEN_CRTC5_REGISTER_OFFSET,
+ 0x13830 - 0x7030 };
+
+ return offsets[id];
+}
+
+static int radeon_dp_mst_set_be_cntl(struct radeon_encoder *primary,
+ struct radeon_encoder_mst *mst_enc,
+ enum radeon_hpd_id hpd, bool enable)
+{
+ struct drm_device *dev = primary->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ uint32_t reg;
+ int retries = 0;
+ uint32_t temp;
+
+ reg = RREG32(NI_DIG_BE_CNTL + primary->offset);
+
+ /* set MST mode */
+ reg &= ~NI_DIG_FE_DIG_MODE(7);
+ reg |= NI_DIG_FE_DIG_MODE(NI_DIG_MODE_DP_MST);
+
+ if (enable)
+ reg |= NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
+ else
+ reg &= ~NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
+
+ reg |= NI_DIG_HPD_SELECT(hpd);
+ DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DIG_BE_CNTL + primary->offset, reg);
+ WREG32(NI_DIG_BE_CNTL + primary->offset, reg);
+
+ if (enable) {
+ uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
+
+ do {
+ temp = RREG32(NI_DIG_FE_CNTL + offset);
+ } while ((temp & NI_DIG_SYMCLK_FE_ON) && retries++ < 10000);
+ if (retries > 10000)
+ DRM_ERROR("timed out waiting for FE %d %d\n", primary->offset, mst_enc->fe);
+ }
+ return 0;
+}
+
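The bounded polls in this file share a common idiom: read a status register until a bit clears or a retry budget runs out. Because the counter is post-incremented inside the loop condition, the timeout comparison is easy to get wrong by one. A stand-alone sketch of the same idiom written so the timeout result is explicit; the fake hardware and all names are hypothetical:

/* Editorial sketch, not part of the patch. */
#include <stdbool.h>
#include <stdio.h>

static unsigned fake_reg = 1;	/* bit 0: "busy" */
static int reads;

static unsigned fake_rreg(void)
{
	/* pretend the hardware deasserts busy after three polls */
	if (++reads >= 3)
		fake_reg = 0;
	return fake_reg;
}

static bool poll_until_clear(unsigned mask, int max_tries)
{
	int i;

	for (i = 0; i < max_tries; i++) {
		if (!(fake_rreg() & mask))
			return true;	/* condition met */
	}
	return false;			/* timed out */
}

int main(void)
{
	if (!poll_until_clear(0x1, 10000))
		printf("timed out\n");
	else
		printf("cleared after %d reads\n", reads);
	return 0;
}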
+static int radeon_dp_mst_set_stream_attrib(struct radeon_encoder *primary,
+ int stream_number,
+ int fe,
+ int slots)
+{
+ struct drm_device *dev = primary->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ u32 temp, val;
+ int retries = 0;
+ int satreg, satidx;
+
+ satreg = stream_number >> 1;
+ satidx = stream_number & 1;
+
+ temp = RREG32(NI_DP_MSE_SAT0 + satreg * 4 + primary->offset);
+
+ val = NI_DP_MSE_SAT_SLOT_COUNT0(slots) | NI_DP_MSE_SAT_SRC0(fe);
+
+ val <<= (16 * satidx);
+
+ temp &= ~(0xffff << (16 * satidx));
+
+ temp |= val;
+
+ DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
+ WREG32(NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
+
+ WREG32(NI_DP_MSE_SAT_UPDATE + primary->offset, 1);
+
+ do {
+ temp = RREG32(NI_DP_MSE_SAT_UPDATE + primary->offset);
+ } while ((temp & 0x1) && retries++ < 10000);
+
+ if (retries > 10000)
+ DRM_ERROR("timed out waiting for SAT update %d\n", primary->offset);
+
+ /* MTP 16 ? */
+ return 0;
+}
+
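Each 32-bit SAT register above carries two stream entries of 16 bits, so stream_number >> 1 selects the register (SAT0/SAT1/SAT2 sit 4 bytes apart) and stream_number & 1 selects the half. A stand-alone version of that packing; the field layout mirrors the NI_DP_MSE_SAT_* defines and the values are made up:

/* Editorial sketch, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

#define SAT_SRC0(x)        (((x) & 0x7) << 0)
#define SAT_SLOT_COUNT0(x) (((x) & 0x3f) << 8)

static uint32_t pack_sat(uint32_t old, int stream_number, int fe, int slots)
{
	int satidx = stream_number & 1;
	uint32_t val = SAT_SLOT_COUNT0(slots) | SAT_SRC0(fe);

	val <<= (16 * satidx);			/* upper half for odd streams */
	old &= ~(0xffffu << (16 * satidx));	/* clear only that half */
	return old | val;
}

int main(void)
{
	uint32_t sat = 0;

	sat = pack_sat(sat, 0, 1, 5);	/* stream 0: fe 1, 5 slots */
	sat = pack_sat(sat, 1, 2, 9);	/* stream 1: fe 2, 9 slots */
	printf("SAT0 = 0x%08x\n", (unsigned)sat);	/* 0x09020501 */
	return 0;
}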
+static int radeon_dp_mst_update_stream_attribs(struct radeon_connector *mst_conn,
+ struct radeon_encoder *primary)
+{
+ struct drm_device *dev = mst_conn->base.dev;
+ struct stream_attribs new_attribs[6];
+ int i;
+ int idx = 0;
+ struct radeon_connector *radeon_connector;
+ struct drm_connector *connector;
+
+ memset(new_attribs, 0, sizeof(new_attribs));
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ struct radeon_encoder *subenc;
+ struct radeon_encoder_mst *mst_enc;
+
+ radeon_connector = to_radeon_connector(connector);
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ if (radeon_connector->mst_port != mst_conn)
+ continue;
+
+ subenc = radeon_connector->mst_encoder;
+ mst_enc = subenc->enc_priv;
+
+ if (!mst_enc->enc_active)
+ continue;
+
+ new_attribs[idx].fe = mst_enc->fe;
+ new_attribs[idx].slots = drm_dp_mst_get_vcpi_slots(&mst_conn->mst_mgr, mst_enc->port);
+ idx++;
+ }
+
+ for (i = 0; i < idx; i++) {
+ if (new_attribs[i].fe != mst_conn->cur_stream_attribs[i].fe ||
+ new_attribs[i].slots != mst_conn->cur_stream_attribs[i].slots) {
+ radeon_dp_mst_set_stream_attrib(primary, i, new_attribs[i].fe, new_attribs[i].slots);
+ mst_conn->cur_stream_attribs[i].fe = new_attribs[i].fe;
+ mst_conn->cur_stream_attribs[i].slots = new_attribs[i].slots;
+ }
+ }
+
+ for (i = idx; i < mst_conn->enabled_attribs; i++) {
+ radeon_dp_mst_set_stream_attrib(primary, i, 0, 0);
+ mst_conn->cur_stream_attribs[i].fe = 0;
+ mst_conn->cur_stream_attribs[i].slots = 0;
+ }
+ mst_conn->enabled_attribs = idx;
+ return 0;
+}
+
+static int radeon_dp_mst_set_vcp_size(struct radeon_encoder *mst, uint32_t x, uint32_t y)
+{
+ struct drm_device *dev = mst->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder_mst *mst_enc = mst->enc_priv;
+ uint32_t val, temp;
+ uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
+ int retries = 0;
+
+ val = NI_DP_MSE_RATE_X(x) | NI_DP_MSE_RATE_Y(y);
+
+ WREG32(NI_DP_MSE_RATE_CNTL + offset, val);
+
+ do {
+ temp = RREG32(NI_DP_MSE_RATE_UPDATE + offset);
+ } while ((temp & 0x1) && (retries++ < 10000));
+
+ if (retries > 10000)
+ DRM_ERROR("timed out wait for rate cntl %d\n", mst_enc->fe);
+ return 0;
+}
+
+static int radeon_dp_mst_get_ddc_modes(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_connector *master = radeon_connector->mst_port;
+ struct edid *edid;
+ int ret = 0;
+
+ edid = drm_dp_mst_get_edid(connector, &master->mst_mgr, radeon_connector->port);
+ radeon_connector->edid = edid;
+ DRM_DEBUG_KMS("edid retrieved %p\n", edid);
+ if (radeon_connector->edid) {
+ drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid);
+ ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid);
+ drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid);
+ return ret;
+ }
+ drm_mode_connector_update_edid_property(&radeon_connector->base, NULL);
+
+ return ret;
+}
+
+static int radeon_dp_mst_get_modes(struct drm_connector *connector)
+{
+ return radeon_dp_mst_get_ddc_modes(connector);
+}
+
+static enum drm_mode_status
+radeon_dp_mst_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+{
+ /* TODO - validate mode against available PBN for link */
+ if (mode->clock < 10000)
+ return MODE_CLOCK_LOW;
+
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ return MODE_H_ILLEGAL;
+
+ return MODE_OK;
+}
+
+struct drm_encoder *radeon_mst_best_encoder(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+
+ return &radeon_connector->mst_encoder->base;
+}
+
+static const struct drm_connector_helper_funcs radeon_dp_mst_connector_helper_funcs = {
+ .get_modes = radeon_dp_mst_get_modes,
+ .mode_valid = radeon_dp_mst_mode_valid,
+ .best_encoder = radeon_mst_best_encoder,
+};
+
+static enum drm_connector_status
+radeon_dp_mst_detect(struct drm_connector *connector, bool force)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_connector *master = radeon_connector->mst_port;
+
+ return drm_dp_mst_detect_port(connector, &master->mst_mgr, radeon_connector->port);
+}
+
+static void
+radeon_dp_mst_connector_destroy(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_encoder *radeon_encoder = radeon_connector->mst_encoder;
+
+ drm_encoder_cleanup(&radeon_encoder->base);
+ kfree(radeon_encoder);
+ drm_connector_cleanup(connector);
+ kfree(radeon_connector);
+}
+
+static void radeon_connector_dpms(struct drm_connector *connector, int mode)
+{
+ DRM_DEBUG_KMS("\n");
+}
+
+static const struct drm_connector_funcs radeon_dp_mst_connector_funcs = {
+ .dpms = radeon_connector_dpms,
+ .detect = radeon_dp_mst_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .destroy = radeon_dp_mst_connector_destroy,
+};
+
+static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_port *port,
+ const char *pathprop)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_connector *radeon_connector;
+ struct drm_connector *connector;
+
+ radeon_connector = kzalloc(sizeof(*radeon_connector), GFP_KERNEL);
+ if (!radeon_connector)
+ return NULL;
+
+ radeon_connector->is_mst_connector = true;
+ connector = &radeon_connector->base;
+ radeon_connector->port = port;
+ radeon_connector->mst_port = master;
+ DRM_DEBUG_KMS("\n");
+
+ drm_connector_init(dev, connector, &radeon_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort);
+ drm_connector_helper_add(connector, &radeon_dp_mst_connector_helper_funcs);
+ radeon_connector->mst_encoder = radeon_dp_create_fake_mst_encoder(master);
+
+ drm_object_attach_property(&connector->base, dev->mode_config.path_property, 0);
+ drm_mode_connector_set_path_property(connector, pathprop);
+ drm_reinit_primary_mode_group(dev);
+
+ mutex_lock(&dev->mode_config.mutex);
+ radeon_fb_add_connector(rdev, connector);
+ mutex_unlock(&dev->mode_config.mutex);
+
+ drm_connector_register(connector);
+ return connector;
+}
+
+static void radeon_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_connector *connector)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+
+ drm_connector_unregister(connector);
+ /* need to nuke the connector */
+ mutex_lock(&dev->mode_config.mutex);
+ /* dpms off */
+ radeon_fb_remove_connector(rdev, connector);
+
+ drm_connector_cleanup(connector);
+ mutex_unlock(&dev->mode_config.mutex);
+ drm_reinit_primary_mode_group(dev);
+
+ kfree(connector);
+ DRM_DEBUG_KMS("\n");
+}
+
+static void radeon_dp_mst_hotplug(struct drm_dp_mst_topology_mgr *mgr)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+
+ drm_kms_helper_hotplug_event(dev);
+}
+
+struct drm_dp_mst_topology_cbs mst_cbs = {
+ .add_connector = radeon_dp_add_mst_connector,
+ .destroy_connector = radeon_dp_destroy_mst_connector,
+ .hotplug = radeon_dp_mst_hotplug,
+};
+
+struct radeon_connector *radeon_mst_find_connector(struct drm_encoder *encoder)
+{
+ struct drm_device *dev = encoder->dev;
+ struct drm_connector *connector;
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ if (!connector->encoder)
+ continue;
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ DRM_DEBUG_KMS("checking %p vs %p\n", connector->encoder, encoder);
+ if (connector->encoder == encoder)
+ return radeon_connector;
+ }
+ return NULL;
+}
+
+void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode)
+{
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+ struct drm_device *dev = crtc->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(radeon_crtc->encoder);
+ struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
+ struct radeon_connector *radeon_connector = radeon_mst_find_connector(&radeon_encoder->base);
+ int dp_clock;
+ struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
+
+ if (radeon_connector) {
+ radeon_connector->pixelclock_for_modeset = mode->clock;
+ if (radeon_connector->base.display_info.bpc)
+ radeon_crtc->bpc = radeon_connector->base.display_info.bpc;
+ else
+ radeon_crtc->bpc = 8;
+ }
+
+ DRM_DEBUG_KMS("dp_clock %p %d\n", dig_connector, dig_connector->dp_clock);
+ dp_clock = dig_connector->dp_clock;
+ radeon_crtc->ss_enabled =
+ radeon_atombios_get_asic_ss_info(rdev, &radeon_crtc->ss,
+ ASIC_INTERNAL_SS_ON_DP,
+ dp_clock);
+}
+
+static void
+radeon_mst_encoder_dpms(struct drm_encoder *encoder, int mode)
+{
+ struct drm_device *dev = encoder->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder, *primary;
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder_atom_dig *dig_enc;
+ struct radeon_connector *radeon_connector;
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ int ret, slots;
+
+ if (!ASIC_IS_DCE5(rdev)) {
+ DRM_ERROR("got mst dpms on non-DCE5\n");
+ return;
+ }
+
+ radeon_connector = radeon_mst_find_connector(encoder);
+ if (!radeon_connector)
+ return;
+
+ radeon_encoder = to_radeon_encoder(encoder);
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ primary = mst_enc->primary;
+
+ dig_enc = primary->enc_priv;
+
+ crtc = encoder->crtc;
+ DRM_DEBUG_KMS("got connector %d\n", dig_enc->active_mst_links);
+
+ switch (mode) {
+ case DRM_MODE_DPMS_ON:
+ dig_enc->active_mst_links++;
+
+ radeon_crtc = to_radeon_crtc(crtc);
+
+ if (dig_enc->active_mst_links == 1) {
+ mst_enc->fe = dig_enc->dig_encoder;
+ mst_enc->fe_from_be = true;
+ atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
+
+ atombios_dig_encoder_setup(&primary->base, ATOM_ENCODER_CMD_SETUP, 0);
+ atombios_dig_transmitter_setup2(&primary->base, ATOM_TRANSMITTER_ACTION_ENABLE,
+ 0, 0, dig_enc->dig_encoder);
+
+ if (radeon_dp_needs_link_train(mst_enc->connector) ||
+ dig_enc->active_mst_links == 1) {
+ radeon_dp_link_train(&primary->base, &mst_enc->connector->base);
+ }
+
+ } else {
+ mst_enc->fe = radeon_atom_pick_dig_encoder(encoder, radeon_crtc->crtc_id);
+ if (mst_enc->fe == -1)
+ DRM_ERROR("failed to get frontend for dig encoder\n");
+ mst_enc->fe_from_be = false;
+ atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
+ }
+
+ DRM_DEBUG_KMS("dig encoder is %d %d %d\n", dig_enc->dig_encoder,
+ dig_enc->linkb, radeon_crtc->crtc_id);
+
+ ret = drm_dp_mst_allocate_vcpi(&radeon_connector->mst_port->mst_mgr,
+ radeon_connector->port,
+ mst_enc->pbn, &slots);
+ ret = drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr);
+
+ radeon_dp_mst_set_be_cntl(primary, mst_enc,
+ radeon_connector->mst_port->hpd.hpd, true);
+
+ mst_enc->enc_active = true;
+ radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
+ radeon_dp_mst_set_vcp_size(radeon_encoder, slots, 0);
+
+ atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0,
+ mst_enc->fe);
+ ret = drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
+
+ ret = drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
+
+ break;
+ case DRM_MODE_DPMS_STANDBY:
+ case DRM_MODE_DPMS_SUSPEND:
+ case DRM_MODE_DPMS_OFF:
+ DRM_ERROR("DPMS OFF %d\n", dig_enc->active_mst_links);
+
+ if (!mst_enc->enc_active)
+ return;
+
+ drm_dp_mst_reset_vcpi_slots(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
+ ret = drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr);
+
+ drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
+ /* and this can also fail */
+ drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
+
+ drm_dp_mst_deallocate_vcpi(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
+
+ mst_enc->enc_active = false;
+ radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
+
+ radeon_dp_mst_set_be_cntl(primary, mst_enc,
+ radeon_connector->mst_port->hpd.hpd, false);
+ atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0,
+ mst_enc->fe);
+
+ if (!mst_enc->fe_from_be)
+ radeon_atom_release_dig_encoder(rdev, mst_enc->fe);
+
+ mst_enc->fe_from_be = false;
+ dig_enc->active_mst_links--;
+ if (dig_enc->active_mst_links == 0) {
+ /* drop link */
+ }
+
+ break;
+ }
+
+}
+
+static bool radeon_mst_mode_fixup(struct drm_encoder *encoder,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ int bpp = 24;
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp);
+
+ mst_enc->primary->active_device = mst_enc->primary->devices & mst_enc->connector->devices;
+ DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n",
+ mst_enc->primary->active_device, mst_enc->primary->devices,
+ mst_enc->connector->devices, mst_enc->primary->base.encoder_type);
+
+ drm_mode_set_crtcinfo(adjusted_mode, 0);
+ {
+ struct radeon_connector_atom_dig *dig_connector;
+
+ dig_connector = mst_enc->connector->con_priv;
+ dig_connector->dp_lane_count = drm_dp_max_lane_count(dig_connector->dpcd);
+ dig_connector->dp_clock = radeon_dp_get_max_link_rate(&mst_enc->connector->base,
+ dig_connector->dpcd);
+ DRM_DEBUG_KMS("dig clock %p %d %d\n", dig_connector,
+ dig_connector->dp_lane_count, dig_connector->dp_clock);
+ }
+ return true;
+}
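
To put a number on the PBN value computed above: a 1080p60 mode has a pixel clock of 148,500 kHz, so at 24 bpp it moves 148500 x 3 = 445,500 kB/s; one PBN unit is 54/64 MB/s (843.75 kB/s), which works out to 528 PBN, or roughly 532 once drm_dp_calc_pbn_mode() applies the spec's 0.6% margin.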
+
+static void radeon_mst_encoder_prepare(struct drm_encoder *encoder)
+{
+ struct radeon_connector *radeon_connector;
+ struct radeon_encoder *radeon_encoder, *primary;
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder_atom_dig *dig_enc;
+
+ radeon_connector = radeon_mst_find_connector(encoder);
+ if (!radeon_connector) {
+ DRM_DEBUG_KMS("failed to find connector %p\n", encoder);
+ return;
+ }
+ radeon_encoder = to_radeon_encoder(encoder);
+
+ radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ primary = mst_enc->primary;
+
+ dig_enc = primary->enc_priv;
+
+ mst_enc->port = radeon_connector->port;
+
+ if (dig_enc->dig_encoder == -1) {
+ dig_enc->dig_encoder = radeon_atom_pick_dig_encoder(&primary->base, -1);
+ primary->offset = radeon_atom_set_enc_offset(dig_enc->dig_encoder);
+ atombios_set_mst_encoder_crtc_source(encoder, dig_enc->dig_encoder);
+ }
+ DRM_DEBUG_KMS("%d %d\n", dig_enc->dig_encoder, primary->offset);
+}
+
+static void
+radeon_mst_encoder_mode_set(struct drm_encoder *encoder,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ DRM_DEBUG_KMS("\n");
+}
+
+static void radeon_mst_encoder_commit(struct drm_encoder *encoder)
+{
+ radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
+ DRM_DEBUG_KMS("\n");
+}
+
+static const struct drm_encoder_helper_funcs radeon_mst_helper_funcs = {
+ .dpms = radeon_mst_encoder_dpms,
+ .mode_fixup = radeon_mst_mode_fixup,
+ .prepare = radeon_mst_encoder_prepare,
+ .mode_set = radeon_mst_encoder_mode_set,
+ .commit = radeon_mst_encoder_commit,
+};
+
+void radeon_dp_mst_encoder_destroy(struct drm_encoder *encoder)
+{
+ drm_encoder_cleanup(encoder);
+ kfree(encoder);
+}
+
+static const struct drm_encoder_funcs radeon_dp_mst_enc_funcs = {
+ .destroy = radeon_dp_mst_encoder_destroy,
+};
+
+static struct radeon_encoder *
+radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector)
+{
+ struct drm_device *dev = connector->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder;
+ struct radeon_encoder_mst *mst_enc;
+ struct drm_encoder *encoder;
+ struct drm_connector_helper_funcs *connector_funcs = connector->base.helper_private;
+ struct drm_encoder *enc_master = connector_funcs->best_encoder(&connector->base);
+
+ DRM_DEBUG_KMS("enc master is %p\n", enc_master);
+ radeon_encoder = kzalloc(sizeof(*radeon_encoder), GFP_KERNEL);
+ if (!radeon_encoder)
+ return NULL;
+
+ radeon_encoder->enc_priv = kzalloc(sizeof(*mst_enc), GFP_KERNEL);
+ if (!radeon_encoder->enc_priv) {
+ kfree(radeon_encoder);
+ return NULL;
+ }
+ encoder = &radeon_encoder->base;
+ switch (rdev->num_crtc) {
+ case 1:
+ encoder->possible_crtcs = 0x1;
+ break;
+ case 2:
+ default:
+ encoder->possible_crtcs = 0x3;
+ break;
+ case 4:
+ encoder->possible_crtcs = 0xf;
+ break;
+ case 6:
+ encoder->possible_crtcs = 0x3f;
+ break;
+ }
+
+ drm_encoder_init(dev, &radeon_encoder->base, &radeon_dp_mst_enc_funcs,
+ DRM_MODE_ENCODER_DPMST);
+ drm_encoder_helper_add(encoder, &radeon_mst_helper_funcs);
+
+ mst_enc = radeon_encoder->enc_priv;
+ mst_enc->connector = connector;
+ mst_enc->primary = to_radeon_encoder(enc_master);
+ radeon_encoder->is_mst_encoder = true;
+ return radeon_encoder;
+}
+
+int
+radeon_dp_mst_init(struct radeon_connector *radeon_connector)
+{
+ struct drm_device *dev = radeon_connector->base.dev;
+
+ if (!radeon_connector->ddc_bus->has_aux)
+ return 0;
+
+ radeon_connector->mst_mgr.cbs = &mst_cbs;
+ return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev->dev,
+ &radeon_connector->ddc_bus->aux, 16, 6,
+ radeon_connector->base.base.id);
+}
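
For reference, the two magic numbers passed to drm_dp_mst_topology_mgr_init() are, per the helper's signature, max_dpcd_transaction_bytes (16) and max_payloads (6); the final argument is just the connector's object id, recorded as the manager's base id.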
+
+int
+radeon_dp_mst_probe(struct radeon_connector *radeon_connector)
+{
+ struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ int ret;
+ u8 msg[1];
+
+ if (dig_connector->dpcd[DP_DPCD_REV] < 0x12)
+ return 0;
+
+ ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_MSTM_CAP, msg,
+ 1);
+ if (ret) {
+ if (msg[0] & DP_MST_CAP) {
+ DRM_DEBUG_KMS("Sink is MST capable\n");
+ dig_connector->is_mst = true;
+ } else {
+ DRM_DEBUG_KMS("Sink is not MST capable\n");
+ dig_connector->is_mst = false;
+ }
+
+ }
+ drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
+ dig_connector->is_mst);
+ return dig_connector->is_mst;
+}
+
+int
+radeon_dp_mst_check_status(struct radeon_connector *radeon_connector)
+{
+ struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ int retry;
+
+ if (dig_connector->is_mst) {
+ u8 esi[16] = { 0 };
+ int dret;
+ int ret = 0;
+ bool handled;
+
+ dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI, esi, 8);
+go_again:
+ if (dret == 8) {
+ DRM_DEBUG_KMS("got esi %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+ ret = drm_dp_mst_hpd_irq(&radeon_connector->mst_mgr, esi, &handled);
+
+ if (handled) {
+ for (retry = 0; retry < 3; retry++) {
+ int wret;
+ wret = drm_dp_dpcd_write(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI + 1, &esi[1], 3);
+ if (wret == 3)
+ break;
+ }
+
+ dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI, esi, 8);
+ if (dret == 8) {
+ DRM_DEBUG_KMS("got esi2 %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+ goto go_again;
+ }
+ } else
+ ret = 0;
+
+ return ret;
+ } else {
+ DRM_DEBUG_KMS("failed to get ESI - device may have failed %d\n", ret);
+ dig_connector->is_mst = false;
+ drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
+ dig_connector->is_mst);
+ /* send a hotplug event */
+ }
+ }
+ return -EINVAL;
+}
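
A quick map of the event-status-indicator bytes the loop above shuffles around (register names per the DP 1.2 section of drm_dp_helper.h):

    /* esi[0]  0x2002  DP_SINK_COUNT_ESI
     * esi[1]  0x2003  DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0 (MST/CP IRQs)
     * esi[2]  0x2004  DP_DEVICE_SERVICE_IRQ_VECTOR_ESI1
     * esi[3]  0x2005  DP_LINK_SERVICE_IRQ_VECTOR_ESI0
     */

Writing esi[1..3] back acknowledges the interrupt, which is why the retry loop writes three bytes starting at DP_SINK_COUNT_ESI + 1 and then re-reads the block to catch events that arrived in the meantime.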
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int radeon_debugfs_mst_info(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *)m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector;
+ struct radeon_connector_atom_dig *dig_connector;
+ int i;
+
+ drm_modeset_lock_all(dev);
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+ continue;
+
+ radeon_connector = to_radeon_connector(connector);
+ dig_connector = radeon_connector->con_priv;
+ if (radeon_connector->is_mst_connector)
+ continue;
+ if (!dig_connector->is_mst)
+ continue;
+ drm_dp_mst_dump_topology(m, &radeon_connector->mst_mgr);
+
+ for (i = 0; i < radeon_connector->enabled_attribs; i++)
+ seq_printf(m, "attrib %d: %d %d\n", i,
+ radeon_connector->cur_stream_attribs[i].fe,
+ radeon_connector->cur_stream_attribs[i].slots);
+ }
+ drm_modeset_unlock_all(dev);
+ return 0;
+}
+
+static struct drm_info_list radeon_debugfs_mst_list[] = {
+ {"radeon_mst_info", &radeon_debugfs_mst_info, 0, NULL},
+};
+#endif
+
+int radeon_mst_debugfs_init(struct radeon_device *rdev)
+{
+#if defined(CONFIG_DEBUG_FS)
+ return radeon_debugfs_add_files(rdev, radeon_debugfs_mst_list, 1);
+#endif
+ return 0;
+}
int radeon_use_pflipirq = 2;
int radeon_bapm = -1;
int radeon_backlight = -1;
+int radeon_auxch = -1;
+int radeon_mst = 0;
MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers");
module_param_named(no_wb, radeon_no_wb, int, 0444);
MODULE_PARM_DESC(msi, "MSI support (1 = enable, 0 = disable, -1 = auto)");
module_param_named(msi, radeon_msi, int, 0444);
-MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable)");
+MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default 10000 = 10 seconds, 0 = disable)");
module_param_named(lockup_timeout, radeon_lockup_timeout, int, 0444);
MODULE_PARM_DESC(fastfb, "Direct FB access for IGP chips (0 = disable, 1 = enable)");
MODULE_PARM_DESC(backlight, "backlight support (1 = enable, 0 = disable, -1 = auto)");
module_param_named(backlight, radeon_backlight, int, 0444);
+MODULE_PARM_DESC(auxch, "Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto)");
+module_param_named(auxch, radeon_auxch, int, 0444);
+
+MODULE_PARM_DESC(mst, "DisplayPort MST experimental support (1 = enable, 0 = disable)");
+module_param_named(mst, radeon_mst, int, 0444);
+
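Both knobs are ordinary module parameters, so the experimental paths can be exercised with, for example:

    modprobe radeon mst=1 auxch=1

or radeon.mst=1 on the kernel command line when the driver is built in.
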
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
radeon_connector = to_radeon_connector(connector);
- if (radeon_encoder->active_device & radeon_connector->devices)
+ if (radeon_encoder->is_mst_encoder) {
+ struct radeon_encoder_mst *mst_enc;
+
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ mst_enc = radeon_encoder->enc_priv;
+ if (mst_enc->connector == radeon_connector->mst_port)
+ return connector;
+ } else if (radeon_encoder->active_device & radeon_connector->devices)
return connector;
}
return NULL;
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_DisplayPort:
+ if (radeon_connector->is_mst_connector)
+ return false;
+
dig_connector = radeon_connector->con_priv;
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
(dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
}
info->par = rfbdev;
+ info->skip_vt_switch = true;
ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
if (ret) {
return true;
return false;
}
+
+void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector)
+{
+ drm_fb_helper_add_one_connector(&rdev->mode_info.rfbdev->helper, connector);
+}
+
+void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector)
+{
+ drm_fb_helper_remove_one_connector(&rdev->mode_info.rfbdev->helper, connector);
+}
return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
}
+struct radeon_wait_cb {
+ struct fence_cb base;
+ struct task_struct *task;
+};
+
+static void
+radeon_fence_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+ struct radeon_wait_cb *wait =
+ container_of(cb, struct radeon_wait_cb, base);
+
+ wake_up_process(wait->task);
+}
+
static signed long radeon_fence_default_wait(struct fence *f, bool intr,
signed long t)
{
struct radeon_fence *fence = to_radeon_fence(f);
struct radeon_device *rdev = fence->rdev;
- bool signaled;
+ struct radeon_wait_cb cb;
- fence_enable_sw_signaling(&fence->base);
+ cb.task = current;
- /*
- * This function has to return -EDEADLK, but cannot hold
- * exclusive_lock during the wait because some callers
- * may already hold it. This means checking needs_reset without
- * lock, and not fiddling with any gpu internals.
- *
- * The callback installed with fence_enable_sw_signaling will
- * run before our wait_event_*timeout call, so we will see
- * both the signaled fence and the changes to needs_reset.
- */
+ if (fence_add_callback(f, &cb.base, radeon_fence_wait_cb))
+ return t;
+
+ while (t > 0) {
+ if (intr)
+ set_current_state(TASK_INTERRUPTIBLE);
+ else
+ set_current_state(TASK_UNINTERRUPTIBLE);
+
+ /*
+ * radeon_test_signaled must be called after
+ * set_current_state to prevent a race with wake_up_process
+ */
+ if (radeon_test_signaled(fence))
+ break;
+
+ if (rdev->needs_reset) {
+ t = -EDEADLK;
+ break;
+ }
+
+ t = schedule_timeout(t);
+
+ if (t > 0 && intr && signal_pending(current))
+ t = -ERESTARTSYS;
+ }
+
+ __set_current_state(TASK_RUNNING);
+ fence_remove_callback(f, &cb.base);
- if (intr)
- t = wait_event_interruptible_timeout(rdev->fence_queue,
- ((signaled = radeon_test_signaled(fence)) ||
- rdev->needs_reset), t);
- else
- t = wait_event_timeout(rdev->fence_queue,
- ((signaled = radeon_test_signaled(fence)) ||
- rdev->needs_reset), t);
-
- if (t > 0 && !signaled)
- return -EDEADLK;
return t;
}
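
The open-coded wait above follows the classic sleep/wake-up pattern; a minimal sketch of why the ordering matters:

    set_current_state(TASK_INTERRUPTIBLE);  /* publish "about to sleep" */
    if (condition)                          /* re-check after publishing */
            break;                          /* a racing wake-up is not lost */
    schedule_timeout(t);

If wake_up_process() fires between the state change and schedule_timeout(), it flips the task back to TASK_RUNNING, so the schedule merely yields instead of sleeping and the loop re-checks the fence on the next pass.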
drm_helper_hpd_irq_event(dev);
}
+static void radeon_dp_work_func(struct work_struct *work)
+{
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ dp_work);
+ struct drm_device *dev = rdev->ddev;
+ struct drm_mode_config *mode_config = &dev->mode_config;
+ struct drm_connector *connector;
+
+ /* this should take a mutex */
+ if (mode_config->num_connector) {
+ list_for_each_entry(connector, &mode_config->connector_list, head)
+ radeon_connector_hotplug(connector);
+ }
+}
/**
* radeon_driver_irq_preinstall_kms - drm irq preinstall callback
*
}
INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
+ INIT_WORK(&rdev->dp_work, radeon_dp_work_func);
INIT_WORK(&rdev->audio_work, r600_audio_update_hdmi);
rdev->irq.installed = true;
bool radeon_kfd_init(void)
{
#if defined(CONFIG_HSA_AMD_MODULE)
- bool (*kgd2kfd_init_p)(unsigned, const struct kfd2kgd_calls*,
- const struct kgd2kfd_calls**);
+ bool (*kgd2kfd_init_p)(unsigned, const struct kgd2kfd_calls**);
kgd2kfd_init_p = symbol_request(kgd2kfd_init);
if (kgd2kfd_init_p == NULL)
return false;
- if (!kgd2kfd_init_p(KFD_INTERFACE_VERSION, &kfd2kgd, &kgd2kfd)) {
+ if (!kgd2kfd_init_p(KFD_INTERFACE_VERSION, &kgd2kfd)) {
symbol_put(kgd2kfd_init);
kgd2kfd = NULL;
return true;
#elif defined(CONFIG_HSA_AMD)
- if (!kgd2kfd_init(KFD_INTERFACE_VERSION, &kfd2kgd, &kgd2kfd)) {
+ if (!kgd2kfd_init(KFD_INTERFACE_VERSION, &kgd2kfd)) {
kgd2kfd = NULL;
return false;
void radeon_kfd_device_probe(struct radeon_device *rdev)
{
if (kgd2kfd)
- rdev->kfd = kgd2kfd->probe((struct kgd_dev *)rdev, rdev->pdev);
+ rdev->kfd = kgd2kfd->probe((struct kgd_dev *)rdev,
+ rdev->pdev, &kfd2kgd);
}
void radeon_kfd_device_init(struct radeon_device *rdev)
.compute_vmid_bitmap = 0xFF00,
.first_compute_pipe = 1,
- .compute_pipe_count = 8 - 1,
+ .compute_pipe_count = 4 - 1,
};
radeon_doorbell_get_kfd_info(rdev,
else
*value = 1;
break;
+ case RADEON_INFO_CURRENT_GPU_TEMP:
+ /* get temperature in millidegrees C */
+ if (rdev->asic->pm.get_temperature)
+ *value = radeon_get_temperature(rdev);
+ else
+ *value = 0;
+ break;
+ case RADEON_INFO_CURRENT_GPU_SCLK:
+	/* get sclk in MHz */
+ if (rdev->pm.dpm_enabled)
+ *value = radeon_dpm_get_current_sclk(rdev) / 100;
+ else
+ *value = rdev->pm.current_sclk / 100;
+ break;
+ case RADEON_INFO_CURRENT_GPU_MCLK:
+	/* get mclk in MHz */
+ if (rdev->pm.dpm_enabled)
+ *value = radeon_dpm_get_current_mclk(rdev) / 100;
+ else
+ *value = rdev->pm.current_mclk / 100;
+ break;
+ case RADEON_INFO_READ_REG:
+ if (copy_from_user(value, value_ptr, sizeof(uint32_t))) {
+ DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+ if (radeon_get_allowed_info_register(rdev, *value, value))
+ return -EINVAL;
+ break;
default:
DRM_DEBUG_KMS("Invalid request %d\n", info->request);
return -EINVAL;
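
A hedged userspace sketch of the new READ_REG query, assuming libdrm's drmCommandWriteRead() and the radeon_drm.h definitions (read_grbm_status and the 0x8010 offset are illustrative; error handling trimmed):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <radeon_drm.h>

    static uint32_t read_grbm_status(int fd)
    {
            uint32_t reg = 0x8010;             /* GRBM_STATUS on SI */
            struct drm_radeon_info info = {
                    .request = RADEON_INFO_READ_REG,
                    .value = (uintptr_t)&reg,  /* in: offset, out: value */
            };

            drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
            return reg;
    }

The same pointer doubles as input (register offset in bytes) and output (register value), matching the copy_from_user()/copy_to_user() pair in the ioctl handler.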
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_dp_helper.h>
+#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_fixed.h>
#include <drm/drm_crtc_helper.h>
#include <linux/i2c.h>
RADEON_HPD_NONE = 0xff,
};
+enum radeon_output_csc {
+ RADEON_OUTPUT_CSC_BYPASS = 0,
+ RADEON_OUTPUT_CSC_TVRGB = 1,
+ RADEON_OUTPUT_CSC_YCBCR601 = 2,
+ RADEON_OUTPUT_CSC_YCBCR709 = 3,
+};
+
#define RADEON_MAX_I2C_BUS 16
/* radeon gpio-based i2c
struct drm_property *audio_property;
/* FMT dithering */
struct drm_property *dither_property;
+ /* Output CSC */
+ struct drm_property *output_csc_property;
/* hardcoded DFP edid from BIOS */
struct edid *bios_hardcoded_edid;
int bios_hardcoded_edid_size;
u16 firmware_flags;
/* pointer to backlight encoder */
struct radeon_encoder *bl_encoder;
+
+ /* bitmask for active encoder frontends */
+ uint32_t active_encoders;
};
#define RADEON_MAX_BL_LEVEL 0xFF
u32 wm_low;
u32 wm_high;
struct drm_display_mode hw_mode;
+ enum radeon_output_csc output_csc;
};
struct radeon_encoder_primary_dac {
uint8_t backlight_level;
int panel_mode;
struct radeon_afmt *afmt;
+ int active_mst_links;
};
struct radeon_encoder_atom_dac {
enum radeon_tv_std tv_std;
};
+struct radeon_encoder_mst {
+ int crtc;
+ struct radeon_encoder *primary;
+ struct radeon_connector *connector;
+ struct drm_dp_mst_port *port;
+ int pbn;
+ int fe;
+ bool fe_from_be;
+ bool enc_active;
+};
+
struct radeon_encoder {
struct drm_encoder base;
uint32_t encoder_enum;
bool is_ext_encoder;
u16 caps;
struct radeon_audio_funcs *audio;
+ enum radeon_output_csc output_csc;
+ bool can_mst;
+ uint32_t offset;
+ bool is_mst_encoder;
+ /* front end for this mst encoder */
};
struct radeon_connector_atom_dig {
int dp_clock;
int dp_lane_count;
bool edp_on;
+ bool is_mst;
};
struct radeon_gpio_rec {
RADEON_FMT_DITHER_ENABLE = 1,
};
+struct stream_attribs {
+ uint16_t fe;
+ uint16_t slots;
+};
+
struct radeon_connector {
struct drm_connector base;
uint32_t connector_id;
enum radeon_connector_audio audio;
enum radeon_connector_dither dither;
int pixelclock_for_modeset;
+ bool is_mst_connector;
+ struct radeon_connector *mst_port;
+ struct drm_dp_mst_port *port;
+ struct drm_dp_mst_topology_mgr mst_mgr;
+
+ struct radeon_encoder *mst_encoder;
+ struct stream_attribs cur_stream_attribs[6];
+ int enabled_attribs;
};
struct radeon_framebuffer {
extern bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector);
extern int radeon_dp_get_panel_mode(struct drm_encoder *encoder,
struct drm_connector *connector);
+int radeon_dp_get_max_link_rate(struct drm_connector *connector,
+ u8 *dpcd);
extern void radeon_dp_set_rx_power_state(struct drm_connector *connector,
u8 power_state);
extern void radeon_dp_aux_init(struct radeon_connector *radeon_connector);
+extern ssize_t
+radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg);
+
extern void atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode);
+extern void atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_mode, int enc_override);
extern void radeon_atom_encoder_init(struct radeon_device *rdev);
extern void radeon_atom_disp_eng_pll_init(struct radeon_device *rdev);
extern void atombios_dig_transmitter_setup(struct drm_encoder *encoder,
int action, uint8_t lane_num,
uint8_t lane_set);
+extern void atombios_dig_transmitter_setup2(struct drm_encoder *encoder,
+ int action, uint8_t lane_num,
+ uint8_t lane_set, int fe);
+extern void atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder,
+ int fe);
extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder);
extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder);
void radeon_atom_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
void radeon_fb_output_poll_changed(struct radeon_device *rdev);
void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id);
+
+void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector);
+void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector);
+
void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id);
int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled);
+
+/* mst */
+int radeon_dp_mst_init(struct radeon_connector *radeon_connector);
+int radeon_dp_mst_probe(struct radeon_connector *radeon_connector);
+int radeon_dp_mst_check_status(struct radeon_connector *radeon_connector);
+int radeon_mst_debugfs_init(struct radeon_device *rdev);
+void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode);
+
+void radeon_setup_mst_connector(struct drm_device *dev);
+
+int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx);
+void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx);
#endif
else
rbo->placements[i].lpfn = 0;
}
-
- /*
- * Use two-ended allocation depending on the buffer size to
- * improve fragmentation quality.
- * 512kb was measured as the most optimal number.
- */
- if (rbo->tbo.mem.size > 512 * 1024) {
- for (i = 0; i < c; i++) {
- rbo->placements[i].flags |= TTM_PL_FLAG_TOPDOWN;
- }
- }
}
int radeon_bo_create(struct radeon_device *rdev,
ps->sclk_high, ps->max_voltage);
}
+/* get the current sclk in 10 khz units */
+u32 rs780_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ u32 current_fb_div = RREG32(FVTHROT_STATUS_REG0) & CURRENT_FEEDBACK_DIV_MASK;
+ u32 func_cntl = RREG32(CG_SPLL_FUNC_CNTL);
+ u32 ref_div = ((func_cntl & SPLL_REF_DIV_MASK) >> SPLL_REF_DIV_SHIFT) + 1;
+ u32 post_div = ((func_cntl & SPLL_SW_HILEN_MASK) >> SPLL_SW_HILEN_SHIFT) + 1 +
+ ((func_cntl & SPLL_SW_LOLEN_MASK) >> SPLL_SW_LOLEN_SHIFT) + 1;
+ u32 sclk = (rdev->clock.spll.reference_freq * current_fb_div) /
+ (post_div * ref_div);
+
+ return sclk;
+}
+
+/* get the current mclk in 10 khz units */
+u32 rs780_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct igp_power_info *pi = rs780_get_pi(rdev);
+
+ return pi->bootup_uma_clk;
+}
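
All of these get_current_sclk/get_current_mclk helpers report 10 kHz units, which is why the INFO ioctl above divides by 100: a raw reading of 84700, for example, reaches userspace as 847 MHz.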
+
int rs780_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level)
{
}
}
+/* get the current sclk in 10 khz units */
+u32 rv6xx_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv6xx_ps *ps = rv6xx_get_ps(rps);
+ struct rv6xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+/* get the current mclk in 10 khz units */
+u32 rv6xx_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv6xx_ps *ps = rv6xx_get_ps(rps);
+ struct rv6xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
+ }
+}
+
void rv6xx_dpm_fini(struct radeon_device *rdev)
{
int i;
}
}
+u32 rv770_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+u32 rv770_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
+ }
+}
+
void rv770_dpm_fini(struct radeon_device *rdev)
{
int i;
}
}
+/**
+ * si_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int si_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (DMA_STATUS_REG + DMA0_REGISTER_OFFSET):
+ case (DMA_STATUS_REG + DMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
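
Note that the whitelist deliberately exposes only read-only status registers (GRBM/SRBM, DMA and UVD status), so a userspace tool can poll for hangs without gaining a way to poke arbitrary MMIO.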
+
#define PCIE_BUS_CLK 10000
#define TCLK (PCIE_BUS_CLK / 10)
(CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
if (!ASIC_IS_NODCE(rdev)) {
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
}
dma_cntl = RREG32(DMA_CNTL + DMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("si_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("si_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("si_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("si_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("si_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("si_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
WREG32(CP_INT_CNTL_RING0, cp_int_cntl);
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+		tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
}
static void si_irq_disable(struct radeon_device *rdev)
u32 src_id, src_data, ring_id;
u32 ring_index;
bool queue_hotplug = false;
+ bool queue_dp = false;
bool queue_thermal = false;
u32 status, addr;
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_thermal && rdev->pm.dpm_enabled)
WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK);
if (!vclk || !dclk) {
- /* keep the Bypass mode, put PLL to sleep */
- WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK);
+ /* keep the Bypass mode */
return 0;
}
/* set VCO_MODE to 1 */
WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK);
- /* toggle UPLL_SLEEP to 1 then back to 0 */
- WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK);
+ /* disable sleep mode */
WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK);
/* deassert UPLL_RESET */
current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci, pl->pcie_gen + 1);
}
}
+
+u32 si_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 si_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->mclk;
+ }
+}
#define UVD_UDEC_DBW_ADDR_CONFIG 0xEF54
#define UVD_RBC_RB_RPTR 0xF690
#define UVD_RBC_RB_WPTR 0xF694
+#define UVD_STATUS 0xf6bc
#define UVD_CGC_CTRL 0xF4B0
# define DCM (1 << 0)
}
}
+u32 sumo_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct sumo_power_info *pi = sumo_get_pi(rdev);
+ struct radeon_ps *rps = &pi->current_rps;
+ struct sumo_ps *ps = sumo_get_ps(rps);
+ struct sumo_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURR_INDEX_MASK) >>
+ CURR_INDEX_SHIFT;
+
+ if (current_index == BOOST_DPM_LEVEL) {
+ pl = &pi->boost_pl;
+ return pl->sclk;
+ } else if (current_index >= ps->num_levels) {
+ return 0;
+ } else {
+ pl = &ps->levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 sumo_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct sumo_power_info *pi = sumo_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
void sumo_dpm_fini(struct radeon_device *rdev)
{
int i;
}
}
+u32 trinity_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct trinity_power_info *pi = trinity_get_pi(rdev);
+ struct radeon_ps *rps = &pi->current_rps;
+ struct trinity_ps *ps = trinity_get_ps(rps);
+ struct trinity_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_MASK) >>
+ CURRENT_STATE_SHIFT;
+
+ if (current_index >= ps->num_levels) {
+ return 0;
+ } else {
+ pl = &ps->levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 trinity_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct trinity_power_info *pi = trinity_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
void trinity_dpm_fini(struct radeon_device *rdev)
{
int i;
unsigned long flags;
if (event) {
- event->pipe = rcrtc->index;
-
WARN_ON(drm_crtc_vblank_get(crtc) != 0);
spin_lock_irqsave(&dev->event_lock, flags);
};
static struct drm_driver rcar_du_driver = {
- .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
+ .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME
+ | DRIVER_ATOMIC,
.load = rcar_du_load,
.unload = rcar_du_unload,
.preclose = rcar_du_preclose,
static void rcar_du_plane_atomic_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state)
{
+ if (state->fb)
+ drm_framebuffer_unreference(state->fb);
+
kfree(to_rcar_du_plane_state(state));
}
struct rockchip_drm_private *private;
struct dma_iommu_mapping *mapping;
struct device *dev = drm_dev->dev;
+ struct drm_connector *connector;
int ret;
private = devm_kzalloc(drm_dev->dev, sizeof(*private), GFP_KERNEL);
if (ret)
goto err_detach_device;
+ /*
+ * All components are now added, we can publish the connector sysfs
+ * entries to userspace. This will generate hotplug events and so
+ * userspace will expect to be able to access DRM at this point.
+ */
+ list_for_each_entry(connector, &drm_dev->mode_config.connector_list,
+ head) {
+ ret = drm_connector_register(connector);
+ if (ret) {
+ dev_err(drm_dev->dev,
+ "[CONNECTOR:%d:%s] drm_connector_register failed: %d\n",
+ connector->base.id,
+ connector->name, ret);
+ goto err_unbind;
+ }
+ }
+
/* init kms poll for handling hpd */
drm_kms_helper_poll_init(drm_dev);
drm_vblank_cleanup(drm_dev);
err_kms_helper_poll_fini:
drm_kms_helper_poll_fini(drm_dev);
+err_unbind:
component_unbind_all(dev, drm_dev);
err_detach_device:
arm_iommu_detach_device(dev);
DRM_DEBUG_KMS("FB [%dx%d]-%d kvaddr=%p offset=%ld size=%d\n",
fb->width, fb->height, fb->depth, rk_obj->kvaddr,
offset, size);
+
+ fbi->skip_vt_switch = true;
+
return 0;
err_drm_framebuffer_unref:
if (vop->is_enabled)
return;
+ ret = pm_runtime_get_sync(vop->dev);
+ if (ret < 0) {
+ dev_err(vop->dev, "failed to get pm runtime: %d\n", ret);
+ return;
+ }
+
ret = clk_enable(vop->hclk);
if (ret < 0) {
dev_err(vop->dev, "failed to enable hclk - %d\n", ret);
clk_disable(vop->dclk);
clk_disable(vop->aclk);
clk_disable(vop->hclk);
+ pm_runtime_put(vop->dev);
}
/*
u16 vsync_len = adjusted_mode->vsync_end - adjusted_mode->vsync_start;
u16 vact_st = adjusted_mode->vtotal - adjusted_mode->vsync_start;
u16 vact_end = vact_st + vdisplay;
- int ret;
+ int ret, ret_clk;
uint32_t val;
/*
default:
DRM_ERROR("unsupport connector_type[%d]\n",
vop->connector_type);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
};
VOP_CTRL_SET(vop, out_mode, vop->connector_out_mode);
ret = vop_crtc_mode_set_base(crtc, x, y, fb);
if (ret)
- return ret;
+ goto out;
/*
* reset dclk, take all mode config affect, so the clk would run in
reset_control_deassert(vop->dclk_rst);
clk_set_rate(vop->dclk, adjusted_mode->clock * 1000);
- ret = clk_enable(vop->dclk);
- if (ret < 0) {
- dev_err(vop->dev, "failed to enable dclk - %d\n", ret);
- return ret;
+out:
+ ret_clk = clk_enable(vop->dclk);
+ if (ret_clk < 0) {
+ dev_err(vop->dev, "failed to enable dclk - %d\n", ret_clk);
+ return ret_clk;
}
- return 0;
+ return ret;
}
static void vop_crtc_commit(struct drm_crtc *crtc)
#include <linux/clk.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_plane_helper.h>
}
static int
-sti_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode, int x, int y,
- struct drm_framebuffer *old_fb)
+sti_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode)
{
struct sti_mixer *mixer = to_sti_mixer(crtc);
struct device *dev = mixer->dev;
struct sti_compositor *compo = dev_get_drvdata(dev);
- struct sti_layer *layer;
struct clk *clk;
int rate = mode->clock * 1000;
int res;
- unsigned int w, h;
- DRM_DEBUG_KMS("CRTC:%d (%s) fb:%d mode:%d (%s)\n",
+ DRM_DEBUG_KMS("CRTC:%d (%s) mode:%d (%s)\n",
crtc->base.id, sti_mixer_to_str(mixer),
- crtc->primary->fb->base.id, mode->base.id, mode->name);
+ mode->base.id, mode->name);
DRM_DEBUG_KMS("%d %d %d %d %d %d %d %d %d %d 0x%x 0x%x\n",
mode->vrefresh, mode->clock,
sti_vtg_set_config(mixer->id == STI_MIXER_MAIN ?
compo->vtg_main : compo->vtg_aux, &crtc->mode);
- /* a GDP is reserved to the CRTC FB */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Can not find GDP0)\n");
- return -EINVAL;
- }
-
- /* copy the mode data adjusted by mode_fixup() into crtc->mode
- * so that hardware can be set to proper mode
- */
- memcpy(&crtc->mode, adjusted_mode, sizeof(*adjusted_mode));
-
- res = sti_mixer_set_layer_depth(mixer, layer);
- if (res) {
- DRM_ERROR("Can not set layer depth\n");
- return -EINVAL;
- }
res = sti_mixer_active_video_area(mixer, &crtc->mode);
if (res) {
DRM_ERROR("Can not set active video area\n");
return -EINVAL;
}
- w = crtc->primary->fb->width - x;
- h = crtc->primary->fb->height - y;
-
- return sti_layer_prepare(layer, crtc,
- crtc->primary->fb, &crtc->mode,
- mixer->id, 0, 0, w, h, x, y, w, h);
-}
-
-static int sti_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct sti_mixer *mixer = to_sti_mixer(crtc);
- struct sti_layer *layer;
- unsigned int w, h;
- int ret;
-
- DRM_DEBUG_KMS("CRTC:%d (%s) fb:%d (%d,%d)\n",
- crtc->base.id, sti_mixer_to_str(mixer),
- crtc->primary->fb->base.id, x, y);
-
- /* GDP is reserved to the CRTC FB */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Can not find GDP0)\n");
- ret = -EINVAL;
- goto out;
- }
-
- w = crtc->primary->fb->width - crtc->x;
- h = crtc->primary->fb->height - crtc->y;
-
- ret = sti_layer_prepare(layer, crtc,
- crtc->primary->fb, &crtc->mode,
- mixer->id, 0, 0, w, h,
- crtc->x, crtc->y, w, h);
- if (ret) {
- DRM_ERROR("Can not prepare layer\n");
- goto out;
- }
-
- sti_drm_crtc_commit(crtc);
-out:
- return ret;
+ return res;
}
static void sti_drm_crtc_disable(struct drm_crtc *crtc)
struct sti_mixer *mixer = to_sti_mixer(crtc);
struct device *dev = mixer->dev;
struct sti_compositor *compo = dev_get_drvdata(dev);
- struct sti_layer *layer;
if (!mixer->enabled)
return;
/* Disable Background */
sti_mixer_set_background_status(mixer, false);
- /* Disable GDP */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Cannot find GDP0\n");
- return;
- }
-
- /* Disable layer at mixer level */
- if (sti_mixer_set_layer_status(mixer, layer, false))
- DRM_ERROR("Can not disable %s layer at mixer\n",
- sti_layer_to_str(layer));
-
- /* Wait a while to be sure that a Vsync event is received */
- msleep(WAIT_NEXT_VSYNC_MS);
-
- /* Then disable layer itself */
- sti_layer_disable(layer);
-
drm_crtc_vblank_off(crtc);
/* Disable pixel clock and compo IP clocks */
mixer->enabled = false;
}
-static struct drm_crtc_helper_funcs sti_crtc_helper_funcs = {
- .dpms = sti_drm_crtc_dpms,
- .prepare = sti_drm_crtc_prepare,
- .commit = sti_drm_crtc_commit,
- .mode_fixup = sti_drm_crtc_mode_fixup,
- .mode_set = sti_drm_crtc_mode_set,
- .mode_set_base = sti_drm_crtc_mode_set_base,
- .disable = sti_drm_crtc_disable,
-};
+static void
+sti_drm_crtc_mode_set_nofb(struct drm_crtc *crtc)
+{
+ sti_drm_crtc_prepare(crtc);
+ sti_drm_crtc_mode_set(crtc, &crtc->state->adjusted_mode);
+}
-static int sti_drm_crtc_page_flip(struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_pending_vblank_event *event,
- uint32_t page_flip_flags)
+static void sti_drm_atomic_begin(struct drm_crtc *crtc)
{
- struct drm_device *drm_dev = crtc->dev;
- struct drm_framebuffer *old_fb;
struct sti_mixer *mixer = to_sti_mixer(crtc);
- unsigned long flags;
- int ret;
- DRM_DEBUG_KMS("fb %d --> fb %d\n",
- crtc->primary->fb->base.id, fb->base.id);
+ if (crtc->state->event) {
+ crtc->state->event->pipe = drm_crtc_index(crtc);
- mutex_lock(&drm_dev->struct_mutex);
+ WARN_ON(drm_crtc_vblank_get(crtc) != 0);
- old_fb = crtc->primary->fb;
- crtc->primary->fb = fb;
- ret = sti_drm_crtc_mode_set_base(crtc, crtc->x, crtc->y, old_fb);
- if (ret) {
- DRM_ERROR("failed\n");
- crtc->primary->fb = old_fb;
- goto out;
+ mixer->pending_event = crtc->state->event;
+ crtc->state->event = NULL;
}
+}
- if (event) {
- event->pipe = mixer->id;
-
- ret = drm_vblank_get(drm_dev, event->pipe);
- if (ret) {
- DRM_ERROR("Cannot get vblank\n");
- goto out;
- }
-
- spin_lock_irqsave(&drm_dev->event_lock, flags);
- if (mixer->pending_event) {
- drm_vblank_put(drm_dev, event->pipe);
- ret = -EBUSY;
- } else {
- mixer->pending_event = event;
- }
- spin_unlock_irqrestore(&drm_dev->event_lock, flags);
- }
-out:
- mutex_unlock(&drm_dev->struct_mutex);
- return ret;
+static void sti_drm_atomic_flush(struct drm_crtc *crtc)
+{
}
+static struct drm_crtc_helper_funcs sti_crtc_helper_funcs = {
+ .dpms = sti_drm_crtc_dpms,
+ .prepare = sti_drm_crtc_prepare,
+ .commit = sti_drm_crtc_commit,
+ .mode_fixup = sti_drm_crtc_mode_fixup,
+ .mode_set = drm_helper_crtc_mode_set,
+ .mode_set_nofb = sti_drm_crtc_mode_set_nofb,
+ .mode_set_base = drm_helper_crtc_mode_set_base,
+ .disable = sti_drm_crtc_disable,
+ .atomic_begin = sti_drm_atomic_begin,
+ .atomic_flush = sti_drm_atomic_flush,
+};
+
static void sti_drm_crtc_destroy(struct drm_crtc *crtc)
{
DRM_DEBUG_KMS("\n");
EXPORT_SYMBOL(sti_drm_crtc_disable_vblank);
static struct drm_crtc_funcs sti_crtc_funcs = {
- .set_config = drm_crtc_helper_set_config,
- .page_flip = sti_drm_crtc_page_flip,
+ .set_config = drm_atomic_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
.destroy = sti_drm_crtc_destroy,
.set_property = sti_drm_crtc_set_property,
+ .reset = drm_atomic_helper_crtc_reset,
+ .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
bool sti_drm_crtc_is_main(struct drm_crtc *crtc)
#include <linux/module.h>
#include <linux/of_platform.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_fb_cma_helper.h>
#define STI_MAX_FB_HEIGHT 4096
#define STI_MAX_FB_WIDTH 4096
+static void sti_drm_atomic_schedule(struct sti_drm_private *private,
+ struct drm_atomic_state *state)
+{
+ private->commit.state = state;
+ schedule_work(&private->commit.work);
+}
+
+static void sti_drm_atomic_complete(struct sti_drm_private *private,
+ struct drm_atomic_state *state)
+{
+ struct drm_device *drm = private->drm_dev;
+
+ /*
+ * Everything below can be run asynchronously without the need to grab
+ * any modeset locks at all under one condition: It must be guaranteed
+ * that the asynchronous work has either been cancelled (if the driver
+ * supports it, which at least requires that the framebuffers get
+ * cleaned up with drm_atomic_helper_cleanup_planes()) or completed
+ * before the new state gets committed on the software side with
+ * drm_atomic_helper_swap_state().
+ *
+ * This scheme allows new atomic state updates to be prepared and
+ * checked in parallel to the asynchronous completion of the previous
+ * update. Which is important since compositors need to figure out the
+ * composition of the next frame right after having submitted the
+ * current layout.
+ */
+
+ drm_atomic_helper_commit_modeset_disables(drm, state);
+ drm_atomic_helper_commit_planes(drm, state);
+ drm_atomic_helper_commit_modeset_enables(drm, state);
+
+ drm_atomic_helper_wait_for_vblanks(drm, state);
+
+ drm_atomic_helper_cleanup_planes(drm, state);
+ drm_atomic_state_free(state);
+}
+
+static void sti_drm_atomic_work(struct work_struct *work)
+{
+ struct sti_drm_private *private = container_of(work,
+ struct sti_drm_private, commit.work);
+
+ sti_drm_atomic_complete(private, private->commit.state);
+}
+
+static int sti_drm_atomic_commit(struct drm_device *drm,
+ struct drm_atomic_state *state, bool async)
+{
+ struct sti_drm_private *private = drm->dev_private;
+ int err;
+
+ err = drm_atomic_helper_prepare_planes(drm, state);
+ if (err)
+ return err;
+
+ /* serialize outstanding asynchronous commits */
+ mutex_lock(&private->commit.lock);
+ flush_work(&private->commit.work);
+
+ /*
+ * This is the point of no return - everything below never fails except
+ * when the hw goes bonghits. Which means we can commit the new state on
+ * the software side now.
+ */
+
+ drm_atomic_helper_swap_state(drm, state);
+
+ if (async)
+ sti_drm_atomic_schedule(private, state);
+ else
+ sti_drm_atomic_complete(private, state);
+
+ mutex_unlock(&private->commit.lock);
+ return 0;
+}
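
In the atomic helpers of this era both entry points funnel into this hook: drm_atomic_helper_page_flip() commits with async = true, so the flip returns as soon as the software state is swapped and the worker finishes the display update, while drm_atomic_helper_set_config() passes async = false and blocks through the vblank wait and plane cleanup.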
+
static struct drm_mode_config_funcs sti_drm_mode_config_funcs = {
.fb_create = drm_fb_cma_create,
+ .atomic_check = drm_atomic_helper_check,
+ .atomic_commit = sti_drm_atomic_commit,
};
static void sti_drm_mode_config_init(struct drm_device *dev)
dev->dev_private = (void *)private;
private->drm_dev = dev;
+ mutex_init(&private->commit.lock);
+ INIT_WORK(&private->commit.work, sti_drm_atomic_work);
+
drm_mode_config_init(dev);
drm_kms_helper_poll_init(dev);
return ret;
}
- drm_helper_disable_unused_functions(dev);
+ drm_mode_config_reset(dev);
#ifdef CONFIG_DRM_STI_FBDEV
drm_fbdev_cma_init(dev, 32,
struct sti_compositor *compo;
struct drm_property *plane_zorder_property;
struct drm_device *drm_dev;
+
+ struct {
+ struct drm_atomic_state *state;
+ struct work_struct work;
+ struct mutex lock;
+ } commit;
};
#endif
* License terms: GNU General Public License (GPL), version 2
*/
+#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_plane_helper.h>
+
#include "sti_compositor.h"
#include "sti_drm_drv.h"
#include "sti_drm_plane.h"
struct sti_mixer *mixer = to_sti_mixer(crtc);
int res;
- DRM_DEBUG_KMS("CRTC:%d (%s) drm plane:%d (%s) drm fb:%d\n",
+ DRM_DEBUG_KMS("CRTC:%d (%s) drm plane:%d (%s)\n",
crtc->base.id, sti_mixer_to_str(mixer),
- plane->base.id, sti_layer_to_str(layer), fb->base.id);
+ plane->base.id, sti_layer_to_str(layer));
DRM_DEBUG_KMS("(%dx%d)@(%d,%d)\n", crtc_w, crtc_h, crtc_x, crtc_y);
res = sti_mixer_set_layer_depth(mixer, layer);
{
DRM_DEBUG_DRIVER("\n");
- sti_drm_disable_plane(plane);
+ drm_plane_helper_disable(plane);
drm_plane_cleanup(plane);
}
}
static struct drm_plane_funcs sti_drm_plane_funcs = {
- .update_plane = sti_drm_update_plane,
- .disable_plane = sti_drm_disable_plane,
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
.destroy = sti_drm_plane_destroy,
.set_property = sti_drm_plane_set_property,
+ .reset = drm_atomic_helper_plane_reset,
+ .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+};
+
+static int sti_drm_plane_prepare_fb(struct drm_plane *plane,
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
+{
+ return 0;
+}
+
+static void sti_drm_plane_cleanup_fb(struct drm_plane *plane,
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_fb)
+{
+}
+
+static int sti_drm_plane_atomic_check(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ return 0;
+}
+
+static void sti_drm_plane_atomic_update(struct drm_plane *plane,
+ struct drm_plane_state *oldstate)
+{
+ struct drm_plane_state *state = plane->state;
+
+ sti_drm_update_plane(plane, state->crtc, state->fb,
+ state->crtc_x, state->crtc_y,
+ state->crtc_w, state->crtc_h,
+ state->src_x, state->src_y,
+ state->src_w, state->src_h);
+}
+
+static void sti_drm_plane_atomic_disable(struct drm_plane *plane,
+ struct drm_plane_state *oldstate)
+{
+ sti_drm_disable_plane(plane);
+}
+
+static const struct drm_plane_helper_funcs sti_drm_plane_helpers_funcs = {
+ .prepare_fb = sti_drm_plane_prepare_fb,
+ .cleanup_fb = sti_drm_plane_cleanup_fb,
+ .atomic_check = sti_drm_plane_atomic_check,
+ .atomic_update = sti_drm_plane_atomic_update,
+ .atomic_disable = sti_drm_plane_atomic_disable,
};
static void sti_drm_plane_attach_zorder_property(struct drm_plane *plane,
return NULL;
}
+ drm_plane_helper_add(&layer->plane, &sti_drm_plane_helpers_funcs);
+
for (i = 0; i < ARRAY_SIZE(sti_layer_default_zorder); i++)
if (sti_layer_default_zorder[i] == layer->desc)
break;
- default_zorder = i;
+ default_zorder = i + 1;
if (type == DRM_PLANE_TYPE_OVERLAY)
sti_drm_plane_attach_zorder_property(&layer->plane,
#include <linux/platform_device.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_panel.h>
}
static struct drm_connector_funcs sti_dvo_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_dvo_connector_detect,
.destroy = sti_dvo_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_dvo_find_encoder(struct drm_device *dev)
#include <linux/platform_device.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
/* HDformatter registers */
}
static struct drm_connector_funcs sti_hda_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_hda_connector_detect,
.destroy = sti_hda_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_hda_find_encoder(struct drm_device *dev)
#include <linux/reset.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_edid.h>
}
static struct drm_connector_funcs sti_hdmi_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_hdmi_connector_detect,
.destroy = sti_hdmi_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_hdmi_find_encoder(struct drm_device *dev)
--- /dev/null
+ccflags-y := -Iinclude/drm
+vgem-y := vgem_drv.o vgem_dma_buf.o
+
+obj-$(CONFIG_DRM_VGEM) += vgem.o
--- /dev/null
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include "vgem_drv.h"
+
+struct sg_table *vgem_gem_prime_get_sg_table(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ BUG_ON(obj->pages == NULL);
+
+ return drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
+}
+
+int vgem_gem_prime_pin(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ return vgem_gem_get_pages(obj);
+}
+
+void vgem_gem_prime_unpin(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ vgem_gem_put_pages(obj);
+}
+
+void *vgem_gem_prime_vmap(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ BUG_ON(obj->pages == NULL);
+
+ return vmap(obj->pages, obj->base.size / PAGE_SIZE, 0, PAGE_KERNEL);
+}
+
+void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
+ vunmap(vaddr);
+}
+
+struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+ struct dma_buf *dma_buf)
+{
+ struct drm_vgem_gem_object *obj = NULL;
+ int ret;
+
+ obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ if (obj == NULL) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ ret = drm_gem_object_init(dev, &obj->base, dma_buf->size);
+ if (ret) {
+ ret = -ENOMEM;
+ goto fail_free;
+ }
+
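+	/* Hold a reference on the dma-buf for the lifetime of the GEM
+	 * object; it is dropped again in vgem_gem_free_object(). */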
+ get_dma_buf(dma_buf);
+
+ obj->base.dma_buf = dma_buf;
+ obj->use_dma_buf = true;
+
+ return &obj->base;
+
+fail_free:
+ kfree(obj);
+fail:
+ return ERR_PTR(ret);
+}
--- /dev/null
+/*
+ * Copyright 2011 Red Hat, Inc.
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER
+ * IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ */
+
+/**
+ * This is vgem, a (non-hardware-backed) GEM service. This is used by Mesa's
+ * software renderer and the X server for efficient buffer sharing.
+ */
+
+#include <linux/module.h>
+#include <linux/ramfs.h>
+#include <linux/shmem_fs.h>
+#include <linux/dma-buf.h>
+#include "vgem_drv.h"
+
+#define DRIVER_NAME "vgem"
+#define DRIVER_DESC "Virtual GEM provider"
+#define DRIVER_DATE "20120112"
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 0
+
+void vgem_gem_put_pages(struct drm_vgem_gem_object *obj)
+{
+ drm_gem_put_pages(&obj->base, obj->pages, false, false);
+ obj->pages = NULL;
+}
+
+static void vgem_gem_free_object(struct drm_gem_object *obj)
+{
+ struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
+
+ drm_gem_free_mmap_offset(obj);
+
+ if (vgem_obj->use_dma_buf && obj->dma_buf) {
+ dma_buf_put(obj->dma_buf);
+ obj->dma_buf = NULL;
+ }
+
+ drm_gem_object_release(obj);
+
+ if (vgem_obj->pages)
+ vgem_gem_put_pages(vgem_obj);
+
+ vgem_obj->pages = NULL;
+
+ kfree(vgem_obj);
+}
+
+int vgem_gem_get_pages(struct drm_vgem_gem_object *obj)
+{
+ struct page **pages;
+
+ if (obj->pages || obj->use_dma_buf)
+ return 0;
+
+ pages = drm_gem_get_pages(&obj->base);
+ if (IS_ERR(pages)) {
+ return PTR_ERR(pages);
+ }
+
+ obj->pages = pages;
+
+ return 0;
+}
+
+static int vgem_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+ struct drm_vgem_gem_object *obj = vma->vm_private_data;
+ struct drm_device *dev = obj->base.dev;
+ loff_t num_pages;
+ pgoff_t page_offset;
+ int ret;
+
+ /* We don't use vmf->pgoff since that has the fake offset */
+ page_offset = ((unsigned long)vmf->virtual_address - vma->vm_start) >>
+ PAGE_SHIFT;
+
+ num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
+
+	if (page_offset >= num_pages)
+ return VM_FAULT_SIGBUS;
+
+ mutex_lock(&dev->struct_mutex);
+
+ ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address,
+ obj->pages[page_offset]);
+
+ mutex_unlock(&dev->struct_mutex);
+ switch (ret) {
+ case 0:
+ return VM_FAULT_NOPAGE;
+ case -ENOMEM:
+ return VM_FAULT_OOM;
+ case -EBUSY:
+ return VM_FAULT_RETRY;
+ case -EFAULT:
+ case -EINVAL:
+ return VM_FAULT_SIGBUS;
+ default:
+ WARN_ON(1);
+ return VM_FAULT_SIGBUS;
+ }
+}
+
+static struct vm_operations_struct vgem_gem_vm_ops = {
+ .fault = vgem_gem_fault,
+ .open = drm_gem_vm_open,
+ .close = drm_gem_vm_close,
+};
+
+/* ioctls */
+
+static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+ struct drm_file *file,
+ unsigned int *handle,
+ unsigned long size)
+{
+ struct drm_vgem_gem_object *obj;
+ struct drm_gem_object *gem_object;
+ int err;
+
+ size = roundup(size, PAGE_SIZE);
+
+ obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ gem_object = &obj->base;
+
+ err = drm_gem_object_init(dev, gem_object, size);
+ if (err)
+ goto out;
+
+ err = drm_gem_handle_create(file, gem_object, handle);
+ if (err)
+ goto handle_out;
+
+ drm_gem_object_unreference_unlocked(gem_object);
+
+ return gem_object;
+
+handle_out:
+ drm_gem_object_release(gem_object);
+out:
+ kfree(obj);
+ return ERR_PTR(err);
+}
+
+static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+{
+ struct drm_gem_object *gem_object;
+ uint64_t size;
+ uint64_t pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
+
+ size = args->height * pitch;
+ if (size == 0)
+ return -EINVAL;
+
+ gem_object = vgem_gem_create(dev, file, &args->handle, size);
+
+ if (IS_ERR(gem_object)) {
+ DRM_DEBUG_DRIVER("object creation failed\n");
+ return PTR_ERR(gem_object);
+ }
+
+ args->size = gem_object->size;
+ args->pitch = pitch;
+
+ DRM_DEBUG_DRIVER("Created object of size %lld\n", size);
+
+ return 0;
+}
+
+int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
+ uint32_t handle, uint64_t *offset)
+{
+ int ret = 0;
+ struct drm_gem_object *obj;
+
+ mutex_lock(&dev->struct_mutex);
+ obj = drm_gem_object_lookup(dev, file, handle);
+ if (!obj) {
+ ret = -ENOENT;
+ goto unlock;
+ }
+
+ if (!drm_vma_node_has_offset(&obj->vma_node)) {
+ ret = drm_gem_create_mmap_offset(obj);
+ if (ret)
+ goto unref;
+ }
+
+ BUG_ON(!obj->filp);
+
+ obj->filp->private_data = obj;
+
+ ret = vgem_gem_get_pages(to_vgem_bo(obj));
+ if (ret)
+ goto fail_get_pages;
+
+ *offset = drm_vma_node_offset_addr(&obj->vma_node);
+
+ goto unref;
+
+fail_get_pages:
+ drm_gem_free_mmap_offset(obj);
+unref:
+ drm_gem_object_unreference(obj);
+unlock:
+ mutex_unlock(&dev->struct_mutex);
+ return ret;
+}
+
+int vgem_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct drm_file *priv = filp->private_data;
+ struct drm_device *dev = priv->minor->dev;
+ struct drm_vma_offset_node *node;
+ struct drm_gem_object *obj;
+ struct drm_vgem_gem_object *vgem_obj;
+ int ret = 0;
+
+ mutex_lock(&dev->struct_mutex);
+
+ node = drm_vma_offset_exact_lookup(dev->vma_offset_manager,
+ vma->vm_pgoff,
+ vma_pages(vma));
+ if (!node) {
+ ret = -EINVAL;
+ goto out_unlock;
+ } else if (!drm_vma_node_is_allowed(node, filp)) {
+ ret = -EACCES;
+ goto out_unlock;
+ }
+
+ obj = container_of(node, struct drm_gem_object, vma_node);
+
+ vgem_obj = to_vgem_bo(obj);
+
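+	/* Imported dma-bufs are mapped by their exporter rather than
+	 * through vgem's own pages. */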
+ if (obj->dma_buf && vgem_obj->use_dma_buf) {
+ ret = dma_buf_mmap(obj->dma_buf, vma, 0);
+ goto out_unlock;
+ }
+
+ if (!obj->dev->driver->gem_vm_ops) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_ops = obj->dev->driver->gem_vm_ops;
+ vma->vm_private_data = vgem_obj;
+ vma->vm_page_prot =
+ pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+
+ mutex_unlock(&dev->struct_mutex);
+ drm_gem_vm_open(vma);
+ return ret;
+
+out_unlock:
+ mutex_unlock(&dev->struct_mutex);
+
+ return ret;
+}
+
+static struct drm_ioctl_desc vgem_ioctls[] = {
+};
+
+static const struct file_operations vgem_driver_fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .mmap = vgem_drm_gem_mmap,
+ .poll = drm_poll,
+ .read = drm_read,
+ .unlocked_ioctl = drm_ioctl,
+ .release = drm_release,
+};
+
+static struct drm_driver vgem_driver = {
+ .driver_features = DRIVER_GEM | DRIVER_PRIME,
+ .gem_free_object = vgem_gem_free_object,
+ .gem_vm_ops = &vgem_gem_vm_ops,
+ .ioctls = vgem_ioctls,
+ .fops = &vgem_driver_fops,
+ .dumb_create = vgem_gem_dumb_create,
+ .dumb_map_offset = vgem_gem_dumb_map,
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_export = drm_gem_prime_export,
+ .gem_prime_import = vgem_gem_prime_import,
+ .gem_prime_pin = vgem_gem_prime_pin,
+ .gem_prime_unpin = vgem_gem_prime_unpin,
+ .gem_prime_get_sg_table = vgem_gem_prime_get_sg_table,
+ .gem_prime_vmap = vgem_gem_prime_vmap,
+ .gem_prime_vunmap = vgem_gem_prime_vunmap,
+ .name = DRIVER_NAME,
+ .desc = DRIVER_DESC,
+ .date = DRIVER_DATE,
+ .major = DRIVER_MAJOR,
+ .minor = DRIVER_MINOR,
+};
+
+struct drm_device *vgem_device;
+
+static int __init vgem_init(void)
+{
+ int ret;
+
+ vgem_device = drm_dev_alloc(&vgem_driver, NULL);
+ if (!vgem_device) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = drm_dev_register(vgem_device, 0);
+
+ if (ret)
+ goto out_unref;
+
+ return 0;
+
+out_unref:
+ drm_dev_unref(vgem_device);
+out:
+ return ret;
+}
+
+static void __exit vgem_exit(void)
+{
+ drm_dev_unregister(vgem_device);
+ drm_dev_unref(vgem_device);
+}
+
+module_init(vgem_init);
+module_exit(vgem_exit);
+
+MODULE_AUTHOR("Red Hat, Inc.");
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL and additional rights");
--- /dev/null
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *
+ */
+
+#ifndef _VGEM_DRV_H_
+#define _VGEM_DRV_H_
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#define to_vgem_bo(x) container_of(x, struct drm_vgem_gem_object, base)
+struct drm_vgem_gem_object {
+ struct drm_gem_object base;
+ struct page **pages;
+ bool use_dma_buf;
+};
+
+/* vgem_drv.c */
+extern void vgem_gem_put_pages(struct drm_vgem_gem_object *obj);
+extern int vgem_gem_get_pages(struct drm_vgem_gem_object *obj);
+
+/* vgem_dma_buf.c */
+extern struct sg_table *vgem_gem_prime_get_sg_table(
+ struct drm_gem_object *gobj);
+extern int vgem_gem_prime_pin(struct drm_gem_object *gobj);
+extern void vgem_gem_prime_unpin(struct drm_gem_object *gobj);
+extern void *vgem_gem_prime_vmap(struct drm_gem_object *gobj);
+extern void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+extern struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+ struct dma_buf *dma_buf);
+
+
+#endif
*/
#define VMW_IOCTL_DEF(ioctl, func, flags) \
- [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = {DRM_##ioctl, flags, func, DRM_IOCTL_##ioctl}
+ [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = {DRM_IOCTL_##ioctl, flags, func}
/**
* Ioctl definitions.
goto out_err1;
}
- ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
- (dev_priv->vram_size >> PAGE_SHIFT));
- if (unlikely(ret != 0)) {
- DRM_ERROR("Failed initializing memory manager for VRAM.\n");
- goto out_err2;
- }
-
- dev_priv->has_gmr = true;
- if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
- refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
- VMW_PL_GMR) != 0) {
- DRM_INFO("No GMR memory available. "
- "Graphics memory resources are very limited.\n");
- dev_priv->has_gmr = false;
- }
-
- if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
- dev_priv->has_mob = true;
- if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
- VMW_PL_MOB) != 0) {
- DRM_INFO("No MOB memory available. "
- "3D will be disabled.\n");
- dev_priv->has_mob = false;
- }
- }
-
dev_priv->mmio_mtrr = arch_phys_wc_add(dev_priv->mmio_start,
dev_priv->mmio_size);
goto out_no_fman;
}
+
+ ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
+ (dev_priv->vram_size >> PAGE_SHIFT));
+ if (unlikely(ret != 0)) {
+ DRM_ERROR("Failed initializing memory manager for VRAM.\n");
+ goto out_no_vram;
+ }
+
+ dev_priv->has_gmr = true;
+ if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
+ refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
+ VMW_PL_GMR) != 0) {
+ DRM_INFO("No GMR memory available. "
+ "Graphics memory resources are very limited.\n");
+ dev_priv->has_gmr = false;
+ }
+
+ if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
+ dev_priv->has_mob = true;
+ if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
+ VMW_PL_MOB) != 0) {
+ DRM_INFO("No MOB memory available. "
+ "3D will be disabled.\n");
+ dev_priv->has_mob = false;
+ }
+ }
+
vmw_kms_save_vga(dev_priv);
/* Start kms and overlay systems, needs fifo. */
vmw_kms_close(dev_priv);
out_no_kms:
vmw_kms_restore_vga(dev_priv);
+ if (dev_priv->has_mob)
+ (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
+ if (dev_priv->has_gmr)
+ (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
+ (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
+out_no_vram:
vmw_fence_manager_takedown(dev_priv->fman);
out_no_fman:
if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
iounmap(dev_priv->mmio_virt);
out_err3:
arch_phys_wc_del(dev_priv->mmio_mtrr);
- if (dev_priv->has_mob)
- (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
- if (dev_priv->has_gmr)
- (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
- (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
-out_err2:
(void)ttm_bo_device_release(&dev_priv->bdev);
out_err1:
vmw_ttm_global_release(dev_priv);
}
vmw_kms_close(dev_priv);
vmw_overlay_close(dev_priv);
+
+ if (dev_priv->has_mob)
+ (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
+ if (dev_priv->has_gmr)
+ (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
+ (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
+
vmw_fence_manager_takedown(dev_priv->fman);
if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
drm_irq_uninstall(dev_priv->dev);
ttm_object_device_release(&dev_priv->tdev);
iounmap(dev_priv->mmio_virt);
arch_phys_wc_del(dev_priv->mmio_mtrr);
- if (dev_priv->has_mob)
- (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
- if (dev_priv->has_gmr)
- (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
- (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
(void)ttm_bo_device_release(&dev_priv->bdev);
vmw_ttm_global_release(dev_priv);
const struct drm_ioctl_desc *ioctl =
&vmw_ioctls[nr - DRM_COMMAND_BASE];
- if (unlikely(ioctl->cmd_drv != cmd)) {
+ if (unlikely(ioctl->cmd != cmd)) {
DRM_ERROR("Invalid command format, ioctl %d\n",
nr - DRM_COMMAND_BASE);
return -EINVAL;
{
struct drm_device *dev = pci_get_drvdata(pdev);
+ pci_disable_device(pdev);
drm_put_dev(dev);
}
ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo);
if (unlikely(ret != 0)) {
DRM_ERROR("Could not find or use MOB buffer.\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto out_no_reloc;
}
bo = &vmw_bo->base;
out_no_reloc:
vmw_dmabuf_unreference(&vmw_bo);
- vmw_bo_p = NULL;
+ *vmw_bo_p = NULL;
return ret;
}
ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo);
if (unlikely(ret != 0)) {
DRM_ERROR("Could not find or use GMR region.\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto out_no_reloc;
}
bo = &vmw_bo->base;
out_no_reloc:
vmw_dmabuf_unreference(&vmw_bo);
- vmw_bo_p = NULL;
+ *vmw_bo_p = NULL;
return ret;
}
NULL, arg->command_size, arg->throttle_us,
(void __user *)(unsigned long)arg->fence_rep,
NULL);
-
+ ttm_read_unlock(&dev_priv->reservation_sem);
if (unlikely(ret != 0))
- goto out_unlock;
+ return ret;
vmw_kms_cursor_post_execbuf(dev_priv);
-out_unlock:
- ttm_read_unlock(&dev_priv->reservation_sem);
- return ret;
+ return 0;
}
int i;
struct drm_mode_config *mode_config = &dev->mode_config;
- ret = ttm_read_lock(&dev_priv->reservation_sem, true);
- if (unlikely(ret != 0))
- return ret;
-
if (!arg->num_outputs) {
struct drm_vmw_rect def_rect = {0, 0, 800, 600};
vmw_du_update_layout(dev_priv, 1, &def_rect);
- goto out_unlock;
+ return 0;
}
rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect);
rects = kcalloc(arg->num_outputs, sizeof(struct drm_vmw_rect),
GFP_KERNEL);
- if (unlikely(!rects)) {
- ret = -ENOMEM;
- goto out_unlock;
- }
+ if (unlikely(!rects))
+ return -ENOMEM;
user_rects = (void __user *)(unsigned long)arg->rects;
ret = copy_from_user(rects, user_rects, rects_size);
out_free:
kfree(rects);
-out_unlock:
- ttm_read_unlock(&dev_priv->reservation_sem);
return ret;
}
{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED2, USB_DEVICE_ID_TOPSEED2_RF_COMBO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TWINHAN, USB_DEVICE_ID_TWINHAN_IR_REMOTE) },
#define USB_VENDOR_ID_LOGITECH 0x046d
#define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
#define USB_DEVICE_ID_LOGITECH_T651 0xb00c
+#define USB_DEVICE_ID_LOGITECH_C077 0xc007
#define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101
#define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110
#define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f
#define USB_VENDOR_ID_TIVO 0x150a
#define USB_DEVICE_ID_TIVO_SLIDE_BT 0x1200
#define USB_DEVICE_ID_TIVO_SLIDE 0x1201
+#define USB_DEVICE_ID_TIVO_SLIDE_PRO 0x1203
#define USB_VENDOR_ID_TOPSEED 0x0766
#define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
/* TiVo Slide Bluetooth remote, pairs with a Broadcom dongle */
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) },
{ HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) },
{ }
};
MODULE_DEVICE_TABLE(hid, tivo_devices);
{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3_JP, HID_QUIRK_NO_INIT_REPORTS },
(features->type == CINTIQ && !(data[1] & 0x40)))
return 1;
- if (features->quirks & WACOM_QUIRK_MULTI_INPUT)
+ if (wacom->shared) {
wacom->shared->stylus_in_proximity = true;
+ if (wacom->shared->touch_down)
+ return 1;
+ }
+
/* in Range while exiting */
if (((data[1] & 0xfe) == 0x20) && wacom->reporting_data) {
input_report_key(input, BTN_TOUCH, 0);
struct input_dev *input = wacom->input;
unsigned char *data = wacom->data;
int i;
- int current_num_contacts = 0;
+ int current_num_contacts = data[61];
int contacts_to_send = 0;
int num_contacts_left = 4; /* maximum contacts per packet */
int byte_per_packet = WACOM_BYTES_PER_24HDT_PACKET;
int y_offset = 2;
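+	/* Counts contacts seen while no pen is in proximity; checked at
+	 * the end of the packet series to decide whether touch stays down. */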
+ static int contact_with_no_pen_down_count = 0;
if (wacom->features.type == WACOM_27QHDT) {
current_num_contacts = data[63];
num_contacts_left = 10;
byte_per_packet = WACOM_BYTES_PER_QHDTHID_PACKET;
y_offset = 0;
- } else {
- current_num_contacts = data[61];
}
/*
* First packet resets the counter since only the first
* packet in series will have non-zero current_num_contacts.
*/
- if (current_num_contacts)
+ if (current_num_contacts) {
wacom->num_contacts_left = current_num_contacts;
+ contact_with_no_pen_down_count = 0;
+ }
contacts_to_send = min(num_contacts_left, wacom->num_contacts_left);
input_report_abs(input, ABS_MT_WIDTH_MINOR, min(w, h));
input_report_abs(input, ABS_MT_ORIENTATION, w > h);
}
+ contact_with_no_pen_down_count++;
}
}
input_mt_report_pointer_emulation(input, true);
wacom->num_contacts_left -= contacts_to_send;
- if (wacom->num_contacts_left <= 0)
+ if (wacom->num_contacts_left <= 0) {
wacom->num_contacts_left = 0;
-
- wacom->shared->touch_down = (wacom->num_contacts_left > 0);
+ wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
+ }
return 1;
}
int current_num_contacts = data[2];
int contacts_to_send = 0;
int x_offset = 0;
+ static int contact_with_no_pen_down_count = 0;
/* MTTPC does not support Height and Width */
if (wacom->features.type == MTTPC || wacom->features.type == MTTPC_B)
* First packet resets the counter since only the first
* packet in series will have non-zero current_num_contacts.
*/
- if (current_num_contacts)
+ if (current_num_contacts) {
wacom->num_contacts_left = current_num_contacts;
+ contact_with_no_pen_down_count = 0;
+ }
/* There are at most 5 contacts per packet */
contacts_to_send = min(5, wacom->num_contacts_left);
int y = get_unaligned_le16(&data[offset + x_offset + 9]);
input_report_abs(input, ABS_MT_POSITION_X, x);
input_report_abs(input, ABS_MT_POSITION_Y, y);
+ contact_with_no_pen_down_count++;
}
}
input_mt_report_pointer_emulation(input, true);
wacom->num_contacts_left -= contacts_to_send;
- if (wacom->num_contacts_left < 0)
+ if (wacom->num_contacts_left <= 0) {
wacom->num_contacts_left = 0;
-
- wacom->shared->touch_down = (wacom->num_contacts_left > 0);
+ wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
+ }
return 1;
}
{
unsigned char *data = wacom->data;
struct input_dev *input = wacom->input;
- bool prox;
+ bool prox = !wacom->shared->stylus_in_proximity;
int x = 0, y = 0;
if (wacom->features.touch_max > 1 || len > WACOM_PKGLEN_TPC2FG)
return 0;
- if (!wacom->shared->stylus_in_proximity) {
- if (len == WACOM_PKGLEN_TPC1FG) {
- prox = data[0] & 0x01;
- x = get_unaligned_le16(&data[1]);
- y = get_unaligned_le16(&data[3]);
- } else if (len == WACOM_PKGLEN_TPC1FG_B) {
- prox = data[2] & 0x01;
- x = get_unaligned_le16(&data[3]);
- y = get_unaligned_le16(&data[5]);
- } else {
- prox = data[1] & 0x01;
- x = le16_to_cpup((__le16 *)&data[2]);
- y = le16_to_cpup((__le16 *)&data[4]);
- }
- } else
- /* force touch out when pen is in prox */
- prox = 0;
+ if (len == WACOM_PKGLEN_TPC1FG) {
+ prox = prox && (data[0] & 0x01);
+ x = get_unaligned_le16(&data[1]);
+ y = get_unaligned_le16(&data[3]);
+ } else if (len == WACOM_PKGLEN_TPC1FG_B) {
+ prox = prox && (data[2] & 0x01);
+ x = get_unaligned_le16(&data[3]);
+ y = get_unaligned_le16(&data[5]);
+ } else {
+ prox = prox && (data[1] & 0x01);
+ x = le16_to_cpup((__le16 *)&data[2]);
+ y = le16_to_cpup((__le16 *)&data[4]);
+ }
if (prox) {
input_report_abs(input, ABS_X, x);
struct input_dev *pad_input = wacom->pad_input;
unsigned char *data = wacom->data;
int i;
+ int contact_with_no_pen_down_count = 0;
if (data[0] != 0x02)
return 0;
}
input_report_abs(input, ABS_MT_POSITION_X, x);
input_report_abs(input, ABS_MT_POSITION_Y, y);
+ contact_with_no_pen_down_count++;
}
}
input_report_key(pad_input, BTN_FORWARD, (data[1] & 0x04) != 0);
input_report_key(pad_input, BTN_BACK, (data[1] & 0x02) != 0);
input_report_key(pad_input, BTN_RIGHT, (data[1] & 0x01) != 0);
+ wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
return 1;
}
-static void wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data)
+static int wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data, int last_touch_count)
{
struct wacom_features *features = &wacom->features;
struct input_dev *input = wacom->input;
int slot = input_mt_get_slot_by_key(input, data[0]);
if (slot < 0)
- return;
+ return 0;
touch = touch && !wacom->shared->stylus_in_proximity;
input_report_abs(input, ABS_MT_POSITION_Y, y);
input_report_abs(input, ABS_MT_TOUCH_MAJOR, width);
input_report_abs(input, ABS_MT_TOUCH_MINOR, height);
+ last_touch_count++;
}
+ return last_touch_count;
}
static void wacom_bpt3_button_msg(struct wacom_wac *wacom, unsigned char *data)
unsigned char *data = wacom->data;
int count = data[1] & 0x07;
int i;
+ int contact_with_no_pen_down_count = 0;
if (data[0] != 0x02)
return 0;
int msg_id = data[offset];
if (msg_id >= 2 && msg_id <= 17)
- wacom_bpt3_touch_msg(wacom, data + offset);
+ contact_with_no_pen_down_count =
+ wacom_bpt3_touch_msg(wacom, data + offset,
+ contact_with_no_pen_down_count);
else if (msg_id == 128)
wacom_bpt3_button_msg(wacom, data + offset);
}
input_mt_report_pointer_emulation(input, true);
+ wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
return 1;
}
return 0;
}
+ if (wacom->shared->touch_down)
+ return 0;
+
prox = (data[1] & 0x20) == 0x20;
/*
status = driver->remove(client);
}
- if (dev->of_node)
- irq_dispose_mapping(client->irq);
-
dev_pm_domain_detach(&client->dev, true);
return status;
}
tape->best_dsc_rw_freq = clamp_t(unsigned long, t, IDETAPE_DSC_RW_MIN,
IDETAPE_DSC_RW_MAX);
printk(KERN_INFO "ide-tape: %s <-> %s: %dKBps, %d*%dkB buffer, "
- "%lums tDSC%s\n",
+ "%ums tDSC%s\n",
drive->name, tape->name, *(u16 *)&tape->caps[14],
(*(u16 *)&tape->caps[16] * 512) / tape->buffer_size,
tape->buffer_size / 1024,
- tape->best_dsc_rw_freq * 1000 / HZ,
+ jiffies_to_msecs(tape->best_dsc_rw_freq),
(drive->dev_flags & IDE_DFLAG_USING_DMA) ? ", DMA" : "");
ide_proc_register_driver(drive, tape->driver);
#define GUID_TBL_BLK_NUM_ENTRIES 8
#define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES)
+/* Counters should saturate once they reach their maximum value */
+#define ASSIGN_32BIT_COUNTER(counter, value) do {\
+ if ((value) > U32_MAX) \
+ counter = cpu_to_be32(U32_MAX); \
+ else \
+ counter = cpu_to_be32(value); \
+} while (0)
+
struct mlx4_mad_rcv_buf {
struct ib_grh grh;
u8 payload[256];
static void edit_counter(struct mlx4_counter *cnt,
struct ib_pma_portcounters *pma_cnt)
{
- pma_cnt->port_xmit_data = cpu_to_be32((be64_to_cpu(cnt->tx_bytes)>>2));
- pma_cnt->port_rcv_data = cpu_to_be32((be64_to_cpu(cnt->rx_bytes)>>2));
- pma_cnt->port_xmit_packets = cpu_to_be32(be64_to_cpu(cnt->tx_frames));
- pma_cnt->port_rcv_packets = cpu_to_be32(be64_to_cpu(cnt->rx_frames));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
+ (be64_to_cpu(cnt->tx_bytes) >> 2));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
+ (be64_to_cpu(cnt->rx_bytes) >> 2));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
+ be64_to_cpu(cnt->tx_frames));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets,
+ be64_to_cpu(cnt->rx_frames));
}
static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
spin_lock_bh(&ibdev->iboe.lock);
for (i = 0; i < MLX4_MAX_PORTS; ++i) {
struct net_device *curr_netdev = ibdev->iboe.netdevs[i];
+ enum ib_port_state curr_port_state;
- enum ib_port_state curr_port_state =
+ if (!curr_netdev)
+ continue;
+
+ curr_port_state =
(netif_running(curr_netdev) &&
netif_carrier_ok(curr_netdev)) ?
IB_PORT_ACTIVE : IB_PORT_DOWN;
input_set_drvdata(input, keypad);
- error = request_threaded_irq(irq, NULL,
- tc3589x_keypad_irq, plat->irqtype,
- "tc3589x-keypad", keypad);
+ error = request_threaded_irq(irq, NULL, tc3589x_keypad_irq,
+ plat->irqtype | IRQF_ONESHOT,
+ "tc3589x-keypad", keypad);
if (error < 0) {
dev_err(&pdev->dev,
"Could not allocate irq %d,error %d\n",
idev->private = m;
idev->input->name = MMA8450_DRV_NAME;
idev->input->id.bustype = BUS_I2C;
+ idev->input->dev.parent = &c->dev;
idev->poll = mma8450_poll;
idev->poll_interval = POLL_INTERVAL;
idev->poll_interval_max = POLL_INTERVAL_MAX;
return -ENOMEM;
error = alps_identify(psmouse, priv);
- if (error)
+ if (error) {
+ kfree(priv);
return error;
+ }
if (set_properties) {
psmouse->vendor = "ALPS";
#include <linux/input/mt.h>
#include <linux/module.h>
#include <linux/slab.h>
-#include <linux/unaligned/access_ok.h>
+#include <asm/unaligned.h>
#include "cyapa.h"
#include <linux/mutex.h>
#include <linux/completion.h>
#include <linux/slab.h>
-#include <linux/unaligned/access_ok.h>
+#include <asm/unaligned.h>
#include <linux/crc-itu-t.h>
#include "cyapa.h"
electrodes_tx = cyapa->electrodes_x;
max_element_cnt = ((cyapa->aligned_electrodes_rx + 7) &
~7u) * electrodes_tx;
- } else if (idac_data_type == GEN5_RETRIEVE_SELF_CAP_PWC_DATA) {
+ } else {
offset = 2;
max_element_cnt = cyapa->electrodes_x +
cyapa->electrodes_y;
#define FOC_MAX_FINGERS 5
-#define FOC_MAX_X 2431
-#define FOC_MAX_Y 1663
-
/*
* Current state of a single finger on the touchpad.
*/
input_mt_slot(dev, i);
input_mt_report_slot_state(dev, MT_TOOL_FINGER, active);
if (active) {
- input_report_abs(dev, ABS_MT_POSITION_X, finger->x);
+ unsigned int clamped_x, clamped_y;
+ /*
+ * The touchpad might report invalid data, so we clamp
+ * the resulting values so that we do not confuse
+ * userspace.
+ */
+ clamped_x = clamp(finger->x, 0U, priv->x_max);
+ clamped_y = clamp(finger->y, 0U, priv->y_max);
+ input_report_abs(dev, ABS_MT_POSITION_X, clamped_x);
input_report_abs(dev, ABS_MT_POSITION_Y,
- FOC_MAX_Y - finger->y);
+ priv->y_max - clamped_y);
}
}
input_mt_report_pointer_emulation(dev, true);
state->pressed = (packet[0] >> 4) & 1;
- /*
- * packet[5] contains some kind of tool size in the most
- * significant nibble. 0xff is a special value (latching) that
- * signals a large contact area.
- */
- if (packet[5] == 0xff) {
- state->fingers[finger].valid = false;
- return;
- }
-
state->fingers[finger].x = ((packet[1] & 0xf) << 8) | packet[2];
state->fingers[finger].y = (packet[3] << 8) | packet[4];
state->fingers[finger].valid = true;
return 0;
}
+
+static void focaltech_set_resolution(struct psmouse *psmouse, unsigned int resolution)
+{
+ /* not supported yet */
+}
+
+static void focaltech_set_rate(struct psmouse *psmouse, unsigned int rate)
+{
+ /* not supported yet */
+}
+
+static void focaltech_set_scale(struct psmouse *psmouse,
+ enum psmouse_scale scale)
+{
+ /* not supported yet */
+}
+
int focaltech_init(struct psmouse *psmouse)
{
struct focaltech_data *priv;
psmouse->cleanup = focaltech_reset;
/* resync is not supported yet */
psmouse->resync_time = 0;
+ /*
+ * rate/resolution/scale changes are not supported yet, and
+ * the generic implementations of these functions seem to
+ * confuse some touchpads
+ */
+ psmouse->set_resolution = focaltech_set_resolution;
+ psmouse->set_rate = focaltech_set_rate;
+ psmouse->set_scale = focaltech_set_scale;
return 0;
psmouse->rate = r;
}
+/*
+ * Here we set the mouse scaling.
+ */
+
+static void psmouse_set_scale(struct psmouse *psmouse, enum psmouse_scale scale)
+{
+ ps2_command(&psmouse->ps2dev, NULL,
+ scale == PSMOUSE_SCALE21 ? PSMOUSE_CMD_SETSCALE21 :
+ PSMOUSE_CMD_SETSCALE11);
+}
+
/*
* psmouse_poll() - default poll handler. Everyone except for ALPS uses it.
*/
psmouse->set_rate = psmouse_set_rate;
psmouse->set_resolution = psmouse_set_resolution;
+ psmouse->set_scale = psmouse_set_scale;
psmouse->poll = psmouse_poll;
psmouse->protocol_handler = psmouse_process_byte;
psmouse->pktsize = 3;
if (psmouse_max_proto != PSMOUSE_PS2) {
psmouse->set_rate(psmouse, psmouse->rate);
psmouse->set_resolution(psmouse, psmouse->resolution);
- ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ psmouse->set_scale(psmouse, PSMOUSE_SCALE11);
}
}
PSMOUSE_FULL_PACKET
} psmouse_ret_t;
+enum psmouse_scale {
+ PSMOUSE_SCALE11,
+ PSMOUSE_SCALE21
+};
+
struct psmouse {
void *private;
struct input_dev *dev;
psmouse_ret_t (*protocol_handler)(struct psmouse *psmouse);
void (*set_rate)(struct psmouse *psmouse, unsigned int rate);
void (*set_resolution)(struct psmouse *psmouse, unsigned int resolution);
+ void (*set_scale)(struct psmouse *psmouse, enum psmouse_scale scale);
int (*reconnect)(struct psmouse *psmouse);
void (*disconnect)(struct psmouse *psmouse);
#define X_MAX_POSITIVE 8176
#define Y_MAX_POSITIVE 8176
-/* maximum ABS_MT_POSITION displacement (in mm) */
-#define DMAX 10
-
/*****************************************************************************
* Stuff we need even when we do not want native Synaptics support
****************************************************************************/
static bool cr48_profile_sensor;
+#define ANY_BOARD_ID 0
struct min_max_quirk {
const char * const *pnp_ids;
+ struct {
+ unsigned long int min, max;
+ } board_id;
int x_min, x_max, y_min, y_max;
};
static const struct min_max_quirk min_max_pnpid_table[] = {
{
(const char * const []){"LEN0033", NULL},
+ {ANY_BOARD_ID, ANY_BOARD_ID},
1024, 5052, 2258, 4832
},
{
- (const char * const []){"LEN0035", "LEN0042", NULL},
+ (const char * const []){"LEN0042", NULL},
+ {ANY_BOARD_ID, ANY_BOARD_ID},
1232, 5710, 1156, 4696
},
{
(const char * const []){"LEN0034", "LEN0036", "LEN0037",
"LEN0039", "LEN2002", "LEN2004",
NULL},
+ {ANY_BOARD_ID, 2961},
1024, 5112, 2024, 4832
},
{
(const char * const []){"LEN2001", NULL},
+ {ANY_BOARD_ID, ANY_BOARD_ID},
1024, 5022, 2508, 4832
},
{
(const char * const []){"LEN2006", NULL},
+ {ANY_BOARD_ID, ANY_BOARD_ID},
1264, 5675, 1171, 4688
},
{ }
"LEN0041",
"LEN0042", /* Yoga */
"LEN0045",
- "LEN0046",
"LEN0047",
- "LEN0048",
"LEN0049",
"LEN2000",
"LEN2001", /* Edge E431 */
return 0;
}
+static int synaptics_more_extended_queries(struct psmouse *psmouse)
+{
+ struct synaptics_data *priv = psmouse->private;
+ unsigned char buf[3];
+
+ if (synaptics_send_cmd(psmouse, SYN_QUE_MEXT_CAPAB_10, buf))
+ return -1;
+
+ priv->ext_cap_10 = (buf[0]<<16) | (buf[1]<<8) | buf[2];
+
+ return 0;
+}
+
/*
- * Read the board id from the touchpad
+ * Read the board id and the "More Extended Queries" from the touchpad
* The board id is encoded in the "QUERY MODES" response
*/
-static int synaptics_board_id(struct psmouse *psmouse)
+static int synaptics_query_modes(struct psmouse *psmouse)
{
struct synaptics_data *priv = psmouse->private;
unsigned char bid[3];
+	/* firmware versions prior to 7.5 have no board_id encoded */
+ if (SYN_ID_FULL(priv->identity) < 0x705)
+ return 0;
+
if (synaptics_send_cmd(psmouse, SYN_QUE_MODES, bid))
return -1;
priv->board_id = ((bid[0] & 0xfc) << 6) | bid[1];
+
+ if (SYN_MEXT_CAP_BIT(bid[0]))
+ return synaptics_more_extended_queries(psmouse);
+
return 0;
}
{
struct synaptics_data *priv = psmouse->private;
unsigned char resp[3];
- int i;
if (SYN_ID_MAJOR(priv->identity) < 4)
return 0;
}
}
- for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
- if (psmouse_matches_pnp_id(psmouse,
- min_max_pnpid_table[i].pnp_ids)) {
- priv->x_min = min_max_pnpid_table[i].x_min;
- priv->x_max = min_max_pnpid_table[i].x_max;
- priv->y_min = min_max_pnpid_table[i].y_min;
- priv->y_max = min_max_pnpid_table[i].y_max;
- return 0;
- }
- }
-
if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 5 &&
SYN_CAP_MAX_DIMENSIONS(priv->ext_cap_0c)) {
if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MAX_COORDS, resp)) {
} else {
priv->x_max = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
priv->y_max = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
+ psmouse_info(psmouse,
+ "queried max coordinates: x [..%d], y [..%d]\n",
+ priv->x_max, priv->y_max);
}
}
- if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 &&
- SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c)) {
+ if (SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c) &&
+ (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 ||
+ /*
+ * Firmware v8.1 does not report proper number of extended
+ * capabilities, but has been proven to report correct min
+ * coordinates.
+ */
+ SYN_ID_FULL(priv->identity) == 0x801)) {
if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MIN_COORDS, resp)) {
psmouse_warn(psmouse,
"device claims to have min coordinates query, but I'm not able to read it.\n");
} else {
priv->x_min = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
priv->y_min = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
+ psmouse_info(psmouse,
+ "queried min coordinates: x [%d..], y [%d..]\n",
+ priv->x_min, priv->y_min);
}
}
return 0;
}
+/*
+ * Apply quirk(s) if the hardware matches
+ */
+
+static void synaptics_apply_quirks(struct psmouse *psmouse)
+{
+ struct synaptics_data *priv = psmouse->private;
+ int i;
+
+ for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
+ if (!psmouse_matches_pnp_id(psmouse,
+ min_max_pnpid_table[i].pnp_ids))
+ continue;
+
+ if (min_max_pnpid_table[i].board_id.min != ANY_BOARD_ID &&
+ priv->board_id < min_max_pnpid_table[i].board_id.min)
+ continue;
+
+ if (min_max_pnpid_table[i].board_id.max != ANY_BOARD_ID &&
+ priv->board_id > min_max_pnpid_table[i].board_id.max)
+ continue;
+
+ priv->x_min = min_max_pnpid_table[i].x_min;
+ priv->x_max = min_max_pnpid_table[i].x_max;
+ priv->y_min = min_max_pnpid_table[i].y_min;
+ priv->y_max = min_max_pnpid_table[i].y_max;
+ psmouse_info(psmouse,
+ "quirked min/max coordinates: x [%d..%d], y [%d..%d]\n",
+ priv->x_min, priv->x_max,
+ priv->y_min, priv->y_max);
+ break;
+ }
+}
+
static int synaptics_query_hardware(struct psmouse *psmouse)
{
if (synaptics_identify(psmouse))
return -1;
if (synaptics_firmware_id(psmouse))
return -1;
- if (synaptics_board_id(psmouse))
+ if (synaptics_query_modes(psmouse))
return -1;
if (synaptics_capability(psmouse))
return -1;
if (synaptics_resolution(psmouse))
return -1;
+ synaptics_apply_quirks(psmouse);
+
return 0;
}
return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4;
}
-static void synaptics_pass_pt_packet(struct serio *ptport, unsigned char *packet)
+static void synaptics_pass_pt_packet(struct psmouse *psmouse,
+ struct serio *ptport,
+ unsigned char *packet)
{
+ struct synaptics_data *priv = psmouse->private;
struct psmouse *child = serio_get_drvdata(ptport);
if (child && child->state == PSMOUSE_ACTIVATED) {
- serio_interrupt(ptport, packet[1], 0);
+ serio_interrupt(ptport, packet[1] | priv->pt_buttons, 0);
serio_interrupt(ptport, packet[4], 0);
serio_interrupt(ptport, packet[5], 0);
if (child->pktsize == 4)
serio_interrupt(ptport, packet[2], 0);
- } else
+ } else {
serio_interrupt(ptport, packet[1], 0);
+ }
}
static void synaptics_pt_activate(struct psmouse *psmouse)
}
}
+static void synaptics_parse_ext_buttons(const unsigned char buf[],
+ struct synaptics_data *priv,
+ struct synaptics_hw_state *hw)
+{
+ unsigned int ext_bits =
+ (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
+ unsigned int ext_mask = GENMASK(ext_bits - 1, 0);
+
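+	/* The extended buttons are split across two packet bytes: buf[4]
+	 * carries the low half of the bitmap and buf[5] the high half,
+	 * each masked down to the number of buttons actually present. */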
+ hw->ext_buttons = buf[4] & ext_mask;
+ hw->ext_buttons |= (buf[5] & ext_mask) << ext_bits;
+}
+
static bool is_forcepad;
static int synaptics_parse_hw_state(const unsigned char buf[],
hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
}
- if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
+ if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) > 0 &&
((buf[0] ^ buf[3]) & 0x02)) {
- switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
- default:
- /*
- * if nExtBtn is greater than 8 it should be
- * considered invalid and treated as 0
- */
- break;
- case 8:
- hw->ext_buttons |= ((buf[5] & 0x08)) ? 0x80 : 0;
- hw->ext_buttons |= ((buf[4] & 0x08)) ? 0x40 : 0;
- case 6:
- hw->ext_buttons |= ((buf[5] & 0x04)) ? 0x20 : 0;
- hw->ext_buttons |= ((buf[4] & 0x04)) ? 0x10 : 0;
- case 4:
- hw->ext_buttons |= ((buf[5] & 0x02)) ? 0x08 : 0;
- hw->ext_buttons |= ((buf[4] & 0x02)) ? 0x04 : 0;
- case 2:
- hw->ext_buttons |= ((buf[5] & 0x01)) ? 0x02 : 0;
- hw->ext_buttons |= ((buf[4] & 0x01)) ? 0x01 : 0;
- }
+ synaptics_parse_ext_buttons(buf, priv, hw);
}
} else {
hw->x = (((buf[1] & 0x1f) << 8) | buf[2]);
}
}
+static void synaptics_report_ext_buttons(struct psmouse *psmouse,
+ const struct synaptics_hw_state *hw)
+{
+ struct input_dev *dev = psmouse->dev;
+ struct synaptics_data *priv = psmouse->private;
+ int ext_bits = (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
+ char buf[6] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+ int i;
+
+ if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))
+ return;
+
+ /* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */
+ if (SYN_ID_FULL(priv->identity) == 0x801 &&
+ !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02))
+ return;
+
+ if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) {
+ for (i = 0; i < ext_bits; i++) {
+ input_report_key(dev, BTN_0 + 2 * i,
+ hw->ext_buttons & (1 << i));
+ input_report_key(dev, BTN_1 + 2 * i,
+ hw->ext_buttons & (1 << (i + ext_bits)));
+ }
+ return;
+ }
+
+ /*
+ * This generation of touchpads has the trackstick buttons
+ * physically wired to the touchpad. Re-route them through
+ * the pass-through interface.
+ */
+ if (!priv->pt_port)
+ return;
+
+ /* The trackstick expects at most 3 buttons */
+ priv->pt_buttons = SYN_CAP_EXT_BUTTON_STICK_L(hw->ext_buttons) |
+ SYN_CAP_EXT_BUTTON_STICK_R(hw->ext_buttons) << 1 |
+ SYN_CAP_EXT_BUTTON_STICK_M(hw->ext_buttons) << 2;
+
+ synaptics_pass_pt_packet(psmouse, priv->pt_port, buf);
+}
+
static void synaptics_report_buttons(struct psmouse *psmouse,
const struct synaptics_hw_state *hw)
{
struct input_dev *dev = psmouse->dev;
struct synaptics_data *priv = psmouse->private;
- int i;
input_report_key(dev, BTN_LEFT, hw->left);
input_report_key(dev, BTN_RIGHT, hw->right);
input_report_key(dev, BTN_BACK, hw->down);
}
- for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
- input_report_key(dev, BTN_0 + i, hw->ext_buttons & (1 << i));
+ synaptics_report_ext_buttons(psmouse, hw);
}
static void synaptics_report_mt_data(struct psmouse *psmouse,
pos[i].y = synaptics_invert_y(hw[i]->y);
}
- input_mt_assign_slots(dev, slot, pos, nsemi, DMAX * priv->x_res);
+ input_mt_assign_slots(dev, slot, pos, nsemi, 0);
for (i = 0; i < nsemi; i++) {
input_mt_slot(dev, slot[i]);
if (SYN_CAP_PASS_THROUGH(priv->capabilities) &&
synaptics_is_pt_packet(psmouse->packet)) {
if (priv->pt_port)
- synaptics_pass_pt_packet(priv->pt_port, psmouse->packet);
+ synaptics_pass_pt_packet(psmouse, priv->pt_port,
+ psmouse->packet);
} else
synaptics_process_packet(psmouse);
__set_bit(BTN_BACK, dev->keybit);
}
- for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
- __set_bit(BTN_0 + i, dev->keybit);
+ if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
+ for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
+ __set_bit(BTN_0 + i, dev->keybit);
__clear_bit(EV_REL, dev->evbit);
__clear_bit(REL_X, dev->relbit);
if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
__set_bit(INPUT_PROP_BUTTONPAD, dev->propbit);
- if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids))
+ if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) &&
+ !SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
__set_bit(INPUT_PROP_TOPBUTTONPAD, dev->propbit);
/* Clickpads report only left button */
__clear_bit(BTN_RIGHT, dev->keybit);
#define SYN_QUE_EXT_CAPAB_0C 0x0c
#define SYN_QUE_EXT_MAX_COORDS 0x0d
#define SYN_QUE_EXT_MIN_COORDS 0x0f
+#define SYN_QUE_MEXT_CAPAB_10 0x10
/* synaptics modes */
#define SYN_BIT_ABSOLUTE_MODE (1 << 7)
#define SYN_EXT_CAP_REQUESTS(c) (((c) & 0x700000) >> 20)
#define SYN_CAP_MULTI_BUTTON_NO(ec) (((ec) & 0x00f000) >> 12)
#define SYN_CAP_PRODUCT_ID(ec) (((ec) & 0xff0000) >> 16)
+#define SYN_MEXT_CAP_BIT(m) ((m) & (1 << 1))
/*
* The following describes response for the 0x0c query.
#define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400)
#define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800)
+/*
+ * The following describes response for the 0x10 query.
+ *
+ * byte mask name meaning
+ * ---- ---- ------- ------------
+ * 1 0x01 ext buttons are stick buttons exported in the extended
+ * capability are actually meant to be used
+ * by the trackstick (pass-through).
+ * 1 0x02 SecurePad the touchpad is a SecurePad, so it
+ * contains a built-in fingerprint reader.
+ * 1 0xe0 more ext count how many more extended queries are
+ * available after this one.
+ * 2 0xff SecurePad width the width of the SecurePad fingerprint
+ * reader.
+ * 3 0xff SecurePad height the height of the SecurePad fingerprint
+ * reader.
+ */
+#define SYN_CAP_EXT_BUTTONS_STICK(ex10) ((ex10) & 0x010000)
+#define SYN_CAP_SECUREPAD(ex10) ((ex10) & 0x020000)
+
+#define SYN_CAP_EXT_BUTTON_STICK_L(eb) (!!((eb) & 0x01))
+#define SYN_CAP_EXT_BUTTON_STICK_M(eb) (!!((eb) & 0x02))
+#define SYN_CAP_EXT_BUTTON_STICK_R(eb) (!!((eb) & 0x04))
+
/* synaptics modes query bits */
#define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7))
#define SYN_MODE_RATE(m) ((m) & (1 << 6))
unsigned long int capabilities; /* Capabilities */
unsigned long int ext_cap; /* Extended Capabilities */
unsigned long int ext_cap_0c; /* Ext Caps from 0x0c query */
+ unsigned long int ext_cap_10; /* Ext Caps from 0x10 query */
unsigned long int identity; /* Identification */
unsigned int x_res, y_res; /* X/Y resolution in units/mm */
unsigned int x_max, y_max; /* Max coordinates (from FW) */
bool disable_gesture; /* disable gestures */
struct serio *pt_port; /* Pass-through serio port */
+ unsigned char pt_buttons; /* Pass-through buttons */
/*
* Last received Advanced Gesture Mode (AGM) packet. An AGM packet
tristate "Allwinner sun4i resistive touchscreen controller support"
depends on ARCH_SUNXI || COMPILE_TEST
depends on HWMON
+ depends on THERMAL || !THERMAL_OF
help
This selects support for the resistive touchscreen controller
found on Allwinner sunxi SoCs.
config IOMMU_IO_PGTABLE_LPAE
bool "ARMv7/v8 Long Descriptor Format"
select IOMMU_IO_PGTABLE
+ depends on ARM || ARM64 || COMPILE_TEST
help
Enable support for the ARM long descriptor pagetable format.
This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page
bool "MSM IOMMU Support"
depends on ARM
depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
+ depends on BROKEN
select IOMMU_API
help
Support for the IOMMUs found on certain Qualcomm SOCs.
static int __init exynos_iommu_init(void)
{
+ struct device_node *np;
int ret;
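+
+	/* Skip initialisation entirely when the DT carries no matching
+	 * SysMMU node, so the driver stays inert on other platforms. */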
+ np = of_find_matching_node(NULL, sysmmu_of_match);
+ if (!np)
+ return 0;
+
+ of_node_put(np);
+
lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table",
LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
if (!lv2table_kmem_cache) {
((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1)) \
* (d)->bits_per_level) + (d)->pg_shift)
-#define ARM_LPAE_PAGES_PER_PGD(d) ((d)->pgd_size >> (d)->pg_shift)
+#define ARM_LPAE_PAGES_PER_PGD(d) \
+ DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
/*
* Calculate the index at level l used to map virtual address a using the
((l) == ARM_LPAE_START_LVL(d) ? ilog2(ARM_LPAE_PAGES_PER_PGD(d)) : 0)
#define ARM_LPAE_LVL_IDX(a,l,d) \
- (((a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \
+ (((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \
((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))
/* Calculate the block/page mapping size at level l for pagetable in d. */
struct kmem_cache *p;
const unsigned long flags = SLAB_HWCACHE_ALIGN;
	size_t align = 1 << 10; /* L2 pagetable alignment */
+ struct device_node *np;
+
+ np = of_find_matching_node(NULL, omap_iommu_of_match);
+ if (!np)
+ return 0;
+
+ of_node_put(np);
p = kmem_cache_create("iopte_cache", IOPTE_TABLE_SIZE, align, flags,
iopte_cachep_ctor);
static int __init rk_iommu_init(void)
{
+ struct device_node *np;
int ret;
+ np = of_find_matching_node(NULL, rk_iommu_dt_ids);
+ if (!np)
+ return 0;
+
+ of_node_put(np);
+
ret = bus_set_iommu(&platform_bus_type, &rk_iommu_ops);
if (ret)
return ret;
static void __iomem *main_int_base;
static struct irq_domain *armada_370_xp_mpic_domain;
static u32 doorbell_mask_reg;
+static int parent_irq;
#ifdef CONFIG_PCI_MSI
static struct irq_domain *armada_370_xp_msi_domain;
static DECLARE_BITMAP(msi_used, PCI_MSI_DOORBELL_NR);
{
if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
armada_xp_mpic_smp_cpu_init();
+
return NOTIFY_OK;
}
.priority = 100,
};
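+
+/* Secondary CPUs must unmask the cascaded parent per-CPU interrupt
+ * themselves when they come online; do it from a CPU hotplug notifier. */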
+static int mpic_cascaded_secondary_init(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+{
+ if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+ enable_percpu_irq(parent_irq, IRQ_TYPE_NONE);
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block mpic_cascaded_cpu_notifier = {
+ .notifier_call = mpic_cascaded_secondary_init,
+ .priority = 100,
+};
+
#endif /* CONFIG_SMP */
static struct irq_domain_ops armada_370_xp_mpic_irq_ops = {
struct device_node *parent)
{
struct resource main_int_res, per_cpu_int_res;
- int parent_irq, nr_irqs, i;
+ int nr_irqs, i;
u32 control;
BUG_ON(of_address_to_resource(node, 0, &main_int_res));
register_cpu_notifier(&armada_370_xp_mpic_cpu_notifier);
#endif
} else {
+#ifdef CONFIG_SMP
+ register_cpu_notifier(&mpic_cascaded_cpu_notifier);
+#endif
irq_set_chained_handler(parent_irq,
armada_370_xp_mpic_handle_cascade_irq);
}
{
struct its_cmd_block *cmd, *sync_cmd, *next_cmd;
struct its_collection *sync_col;
+ unsigned long flags;
- raw_spin_lock(&its->lock);
+ raw_spin_lock_irqsave(&its->lock, flags);
cmd = its_allocate_entry(its);
	if (!cmd) {	/* We're soooooo screwed... */
pr_err_ratelimited("ITS can't allocate, dropping command\n");
- raw_spin_unlock(&its->lock);
+ raw_spin_unlock_irqrestore(&its->lock, flags);
return;
}
sync_col = builder(cmd, desc);
post:
next_cmd = its_post_commands(its);
- raw_spin_unlock(&its->lock);
+ raw_spin_unlock_irqrestore(&its->lock, flags);
its_wait_for_range_completion(its, cmd, next_cmd);
}
{
int err;
int i;
- int psz = PAGE_SIZE;
+ int psz = SZ_64K;
u64 shr = GITS_BASER_InnerShareable;
for (i = 0; i < GITS_BASER_NR_REGS; i++) {
u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
u64 type = GITS_BASER_TYPE(val);
u64 entry_size = GITS_BASER_ENTRY_SIZE(val);
+ int order = get_order(psz);
+ int alloc_size;
u64 tmp;
void *base;
if (type == GITS_BASER_TYPE_NONE)
continue;
- /* We're lazy and only allocate a single page for now */
- base = (void *)get_zeroed_page(GFP_KERNEL);
+ /*
+ * Allocate as many entries as required to fit the
+ * range of device IDs that the ITS can grok... The ID
+ * space being incredibly sparse, this results in a
+ * massive waste of memory.
+ *
+ * For other tables, only allocate a single page.
+ */
+ if (type == GITS_BASER_TYPE_DEVICE) {
+ u64 typer = readq_relaxed(its->base + GITS_TYPER);
+ u32 ids = GITS_TYPER_DEVBITS(typer);
+
+ order = get_order((1UL << ids) * entry_size);
+ if (order >= MAX_ORDER) {
+ order = MAX_ORDER - 1;
+ pr_warn("%s: Device Table too large, reduce its page order to %u\n",
+ its->msi_chip.of_node->full_name, order);
+ }
+ }
+
+ alloc_size = (1 << order) * PAGE_SIZE;
+ base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!base) {
err = -ENOMEM;
goto out_free;
break;
}
- val |= (PAGE_SIZE / psz) - 1;
+ val |= (alloc_size / psz) - 1;
writeq_relaxed(val, its->base + GITS_BASER + i * 8);
tmp = readq_relaxed(its->base + GITS_BASER + i * 8);
}
pr_info("ITS: allocated %d %s @%lx (psz %dK, shr %d)\n",
- (int)(PAGE_SIZE / entry_size),
+ (int)(alloc_size / entry_size),
its_base_type_string[type],
(unsigned long)virt_to_phys(base),
psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT);
static struct its_device *its_find_device(struct its_node *its, u32 dev_id)
{
struct its_device *its_dev = NULL, *tmp;
+ unsigned long flags;
- raw_spin_lock(&its->lock);
+ raw_spin_lock_irqsave(&its->lock, flags);
list_for_each_entry(tmp, &its->its_device_list, entry) {
if (tmp->device_id == dev_id) {
}
}
- raw_spin_unlock(&its->lock);
+ raw_spin_unlock_irqrestore(&its->lock, flags);
return its_dev;
}
{
struct its_device *dev;
unsigned long *lpi_map;
+ unsigned long flags;
void *itt;
int lpi_base;
int nr_lpis;
nr_ites = max(2UL, roundup_pow_of_two(nvecs));
sz = nr_ites * its->ite_size;
sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
- itt = kmalloc(sz, GFP_KERNEL);
+ itt = kzalloc(sz, GFP_KERNEL);
lpi_map = its_lpi_alloc_chunks(nvecs, &lpi_base, &nr_lpis);
if (!dev || !itt || !lpi_map) {
dev->device_id = dev_id;
INIT_LIST_HEAD(&dev->entry);
- raw_spin_lock(&its->lock);
+ raw_spin_lock_irqsave(&its->lock, flags);
list_add(&dev->entry, &its->its_device_list);
- raw_spin_unlock(&its->lock);
+ raw_spin_unlock_irqrestore(&its->lock, flags);
/* Bind the device to the first possible CPU */
cpu = cpumask_first(cpu_online_mask);
static void its_free_device(struct its_device *its_dev)
{
- raw_spin_lock(&its_dev->its->lock);
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&its_dev->its->lock, flags);
list_del(&its_dev->entry);
- raw_spin_unlock(&its_dev->its->lock);
+ raw_spin_unlock_irqrestore(&its_dev->its->lock, flags);
kfree(its_dev->itt);
kfree(its_dev);
}
return 0;
}
+struct its_pci_alias {
+ struct pci_dev *pdev;
+ u32 dev_id;
+ u32 count;
+};
+
+static int its_pci_msi_vec_count(struct pci_dev *pdev)
+{
+ int msi, msix;
+
+ msi = max(pci_msi_vec_count(pdev), 0);
+ msix = max(pci_msix_vec_count(pdev), 0);
+
+ return max(msi, msix);
+}
+
+static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data)
+{
+ struct its_pci_alias *dev_alias = data;
+
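+	/* Record the alias currently being walked (the last one seen is
+	 * the ID the ITS will observe) and reserve an extra set of the
+	 * device's vectors for every alias other than the device itself. */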
+ dev_alias->dev_id = alias;
+ if (pdev != dev_alias->pdev)
+ dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev);
+
+ return 0;
+}
+
static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *info)
{
struct pci_dev *pdev;
struct its_node *its;
- u32 dev_id;
struct its_device *its_dev;
+ struct its_pci_alias dev_alias;
if (!dev_is_pci(dev))
return -EINVAL;
pdev = to_pci_dev(dev);
- dev_id = PCI_DEVID(pdev->bus->number, pdev->devfn);
+ dev_alias.pdev = pdev;
+ dev_alias.count = nvec;
+
+ pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias);
its = domain->parent->host_data;
- its_dev = its_find_device(its, dev_id);
- if (WARN_ON(its_dev))
- return -EINVAL;
+ its_dev = its_find_device(its, dev_alias.dev_id);
+ if (its_dev) {
+ /*
+ * We already have seen this ID, probably through
+ * another alias (PCI bridge of some sort). No need to
+ * create the device.
+ */
+ dev_dbg(dev, "Reusing ITT for devID %x\n", dev_alias.dev_id);
+ goto out;
+ }
- its_dev = its_create_device(its, dev_id, nvec);
+ its_dev = its_create_device(its, dev_alias.dev_id, dev_alias.count);
if (!its_dev)
return -ENOMEM;
- dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", nvec, ilog2(nvec));
-
+ dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n",
+ dev_alias.count, ilog2(dev_alias.count));
+out:
info->scratchpad[0].ptr = its_dev;
info->scratchpad[1].ptr = dev;
return 0;
.deactivate = its_irq_domain_deactivate,
};
+static int its_force_quiescent(void __iomem *base)
+{
+ u32 count = 1000000; /* 1s */
+ u32 val;
+
+ val = readl_relaxed(base + GITS_CTLR);
+ if (val & GITS_CTLR_QUIESCENT)
+ return 0;
+
+ /* Disable the generation of all interrupts to this ITS */
+ val &= ~GITS_CTLR_ENABLE;
+ writel_relaxed(val, base + GITS_CTLR);
+
+ /* Poll GITS_CTLR and wait until ITS becomes quiescent */
+ while (1) {
+ val = readl_relaxed(base + GITS_CTLR);
+ if (val & GITS_CTLR_QUIESCENT)
+ return 0;
+
+ count--;
+ if (!count)
+ return -EBUSY;
+
+ cpu_relax();
+ udelay(1);
+ }
+}
+
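Editor's note: its_force_quiescent() above is an instance of the common bounded-poll idiom: read a status register, give up with -EBUSY once the budget is exhausted, and be polite to sibling hardware threads in between. The same idiom in isolation, as a sketch with illustrative names:

    static int poll_status_bit(void __iomem *reg, u32 bit, u32 budget_us)
    {
            while (budget_us--) {
                    if (readl_relaxed(reg) & bit)
                            return 0;
                    cpu_relax();    /* yield to a sibling HW thread */
                    udelay(1);      /* roughly bounds the total wait */
            }
            return -EBUSY;
    }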
static int its_probe(struct device_node *node, struct irq_domain *parent)
{
struct resource res;
goto out_unmap;
}
+ err = its_force_quiescent(its_base);
+ if (err) {
+ pr_warn("%s: failed to quiesce, giving up\n",
+ node->full_name);
+ goto out_unmap;
+ }
+
pr_info("ITS: %s\n", node->full_name);
its = kzalloc(sizeof(*its), GFP_KERNEL);
writeq_relaxed(baser, its->base + GITS_CBASER);
tmp = readq_relaxed(its->base + GITS_CBASER);
writeq_relaxed(0, its->base + GITS_CWRITER);
- writel_relaxed(1, its->base + GITS_CTLR);
+ writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);
if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) {
pr_info("ITS: using cache flushing for cmd queue\n");
int its_cpu_init(void)
{
- if (!gic_rdists_supports_plpis()) {
- pr_info("CPU%d: LPIs not supported\n", smp_processor_id());
- return -ENXIO;
- }
-
if (!list_empty(&its_nodes)) {
+ if (!gic_rdists_supports_plpis()) {
+ pr_info("CPU%d: LPIs not supported\n", smp_processor_id());
+ return -ENXIO;
+ }
its_cpu_init_lpis();
its_cpu_init_collection();
}
tlist |= 1 << (mpidr & 0xf);
cpu = cpumask_next(cpu, mask);
- if (cpu == nr_cpu_ids)
+ if (cpu >= nr_cpu_ids)
goto out;
mpidr = cpu_logical_map(cpu);
static void gic_mask_irq(struct irq_data *d)
{
u32 mask = 1 << (gic_irq(d) % 32);
+ unsigned long flags;
- raw_spin_lock(&irq_controller_lock);
+ raw_spin_lock_irqsave(&irq_controller_lock, flags);
writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_CLEAR + (gic_irq(d) / 32) * 4);
if (gic_arch_extn.irq_mask)
gic_arch_extn.irq_mask(d);
- raw_spin_unlock(&irq_controller_lock);
+ raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
}
static void gic_unmask_irq(struct irq_data *d)
{
u32 mask = 1 << (gic_irq(d) % 32);
+ unsigned long flags;
- raw_spin_lock(&irq_controller_lock);
+ raw_spin_lock_irqsave(&irq_controller_lock, flags);
if (gic_arch_extn.irq_unmask)
gic_arch_extn.irq_unmask(d);
writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_SET + (gic_irq(d) / 32) * 4);
- raw_spin_unlock(&irq_controller_lock);
+ raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
}
static void gic_eoi_irq(struct irq_data *d)
{
void __iomem *base = gic_dist_base(d);
unsigned int gicirq = gic_irq(d);
+ unsigned long flags;
int ret;
/* Interrupt configuration for SGIs can't be changed */
type != IRQ_TYPE_EDGE_RISING)
return -EINVAL;
- raw_spin_lock(&irq_controller_lock);
+ raw_spin_lock_irqsave(&irq_controller_lock, flags);
if (gic_arch_extn.irq_set_type)
gic_arch_extn.irq_set_type(d, type);
ret = gic_configure_irq(gicirq, type, base, NULL);
- raw_spin_unlock(&irq_controller_lock);
+ raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
return ret;
}
void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
u32 val, mask, bit;
+ unsigned long flags;
if (!force)
cpu = cpumask_any_and(mask_val, cpu_online_mask);
if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
return -EINVAL;
- raw_spin_lock(&irq_controller_lock);
+ raw_spin_lock_irqsave(&irq_controller_lock, flags);
mask = 0xff << shift;
bit = gic_cpu_map[cpu] << shift;
val = readl_relaxed(reg) & ~mask;
writel_relaxed(val | bit, reg);
- raw_spin_unlock(&irq_controller_lock);
+ raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
return IRQ_SET_MASK_OK;
}
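Editor's note: the conversions above all replace plain raw_spin_lock() with the irqsave variant, making the lock safe regardless of the caller's IRQ state and preventing a deadlock against an interrupt handler taking the same lock. The idiom, sketched with the irq_controller_lock used above:

    static void example_locked_write(void __iomem *reg, u32 mask)
    {
            unsigned long flags;

            /* save and disable local IRQs, then take the lock... */
            raw_spin_lock_irqsave(&irq_controller_lock, flags);
            writel_relaxed(mask, reg);
            /* ...and restore the previous IRQ state on unlock */
            raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
    }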
if (ints[0] > 1)
membase = (unsigned long)ints[2];
if (str && *str) {
- strcpy(sid, str);
+ strlcpy(sid, str, sizeof(sid));
icn_id = sid;
if ((p = strchr(sid, ','))) {
*p++ = 0;
struct request_queue *q = bdev_get_queue(where->bdev);
unsigned short logical_block_size = queue_logical_block_size(q);
sector_t num_sectors;
+ unsigned int uninitialized_var(special_cmd_max_sectors);
- /* Reject unsupported discard requests */
- if ((rw & REQ_DISCARD) && !blk_queue_discard(q)) {
+ /*
+ * Reject unsupported discard and write same requests.
+ */
+ if (rw & REQ_DISCARD)
+ special_cmd_max_sectors = q->limits.max_discard_sectors;
+ else if (rw & REQ_WRITE_SAME)
+ special_cmd_max_sectors = q->limits.max_write_same_sectors;
+ if ((rw & (REQ_DISCARD | REQ_WRITE_SAME)) && special_cmd_max_sectors == 0) {
dec_count(io, region, -EOPNOTSUPP);
return;
}
store_io_and_region_in_bio(bio, io, region);
if (rw & REQ_DISCARD) {
- num_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining);
+ num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining);
bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT;
remaining -= num_sectors;
} else if (rw & REQ_WRITE_SAME) {
*/
dp->get_page(dp, &page, &len, &offset);
bio_add_page(bio, page, logical_block_size, offset);
- num_sectors = min_t(sector_t, q->limits.max_write_same_sectors, remaining);
+ num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining);
bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT;
offset = 0;
#include <linux/log2.h>
#include <linux/dm-kcopyd.h>
+#include "dm.h"
+
#include "dm-exception-store.h"
#define DM_MSG_PREFIX "snapshots"
struct list_head snapshots;
};
+/*
+ * This structure is allocated for each origin target
+ */
+struct dm_origin {
+ struct dm_dev *dev;
+ struct dm_target *ti;
+ unsigned split_boundary;
+ struct list_head hash_list;
+};
+
/*
* Size of the hash table for origin volumes. If we make this
* the size of the minors list then it should be nearly perfect
#define ORIGIN_HASH_SIZE 256
#define ORIGIN_MASK 0xFF
static struct list_head *_origins;
+static struct list_head *_dm_origins;
static struct rw_semaphore _origins_lock;
static DECLARE_WAIT_QUEUE_HEAD(_pending_exceptions_done);
_origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head),
GFP_KERNEL);
if (!_origins) {
- DMERR("unable to allocate memory");
+ DMERR("unable to allocate memory for _origins");
return -ENOMEM;
}
-
for (i = 0; i < ORIGIN_HASH_SIZE; i++)
INIT_LIST_HEAD(_origins + i);
+
+ _dm_origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head),
+ GFP_KERNEL);
+ if (!_dm_origins) {
+ DMERR("unable to allocate memory for _dm_origins");
+ kfree(_origins);
+ return -ENOMEM;
+ }
+ for (i = 0; i < ORIGIN_HASH_SIZE; i++)
+ INIT_LIST_HEAD(_dm_origins + i);
+
init_rwsem(&_origins_lock);
return 0;
static void exit_origin_hash(void)
{
kfree(_origins);
+ kfree(_dm_origins);
}
static unsigned origin_hash(struct block_device *bdev)
list_add_tail(&o->hash_list, sl);
}
+static struct dm_origin *__lookup_dm_origin(struct block_device *origin)
+{
+ struct list_head *ol;
+ struct dm_origin *o;
+
+ ol = &_dm_origins[origin_hash(origin)];
+ list_for_each_entry (o, ol, hash_list)
+ if (bdev_equal(o->dev->bdev, origin))
+ return o;
+
+ return NULL;
+}
+
+static void __insert_dm_origin(struct dm_origin *o)
+{
+ struct list_head *sl = &_dm_origins[origin_hash(o->dev->bdev)];
+ list_add_tail(&o->hash_list, sl);
+}
+
+static void __remove_dm_origin(struct dm_origin *o)
+{
+ list_del(&o->hash_list);
+}
+
/*
* _origins_lock must be held when calling this function.
* Returns number of snapshots registered using the supplied cow device, plus:
static void snapshot_resume(struct dm_target *ti)
{
struct dm_snapshot *s = ti->private;
- struct dm_snapshot *snap_src = NULL, *snap_dest = NULL;
+ struct dm_snapshot *snap_src = NULL, *snap_dest = NULL, *snap_merging = NULL;
+ struct dm_origin *o;
+ struct mapped_device *origin_md = NULL;
+ bool must_restart_merging = false;
down_read(&_origins_lock);
+
+ o = __lookup_dm_origin(s->origin->bdev);
+ if (o)
+ origin_md = dm_table_get_md(o->ti->table);
+ if (!origin_md) {
+ (void) __find_snapshots_sharing_cow(s, NULL, NULL, &snap_merging);
+ if (snap_merging)
+ origin_md = dm_table_get_md(snap_merging->ti->table);
+ }
+ if (origin_md == dm_table_get_md(ti->table))
+ origin_md = NULL;
+ if (origin_md) {
+ if (dm_hold(origin_md))
+ origin_md = NULL;
+ }
+
+ up_read(&_origins_lock);
+
+ if (origin_md) {
+ dm_internal_suspend_fast(origin_md);
+ if (snap_merging && test_bit(RUNNING_MERGE, &snap_merging->state_bits)) {
+ must_restart_merging = true;
+ stop_merge(snap_merging);
+ }
+ }
+
+ down_read(&_origins_lock);
+
(void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL);
if (snap_src && snap_dest) {
down_write(&snap_src->lock);
up_write(&snap_dest->lock);
up_write(&snap_src->lock);
}
+
up_read(&_origins_lock);
+ if (origin_md) {
+ if (must_restart_merging)
+ start_merge(snap_merging);
+ dm_internal_resume_fast(origin_md);
+ dm_put(origin_md);
+ }
+
/* Now we have correct chunk size, reregister */
reregister_snapshot(s);
* Origin: maps a linear range of a device, with hooks for snapshotting.
*/
-struct dm_origin {
- struct dm_dev *dev;
- unsigned split_boundary;
-};
-
/*
* Construct an origin mapping: <dev_path>
* The context for an origin is merely a 'struct dm_dev *'
goto bad_open;
}
+ o->ti = ti;
ti->private = o;
ti->num_flush_bios = 1;
static void origin_dtr(struct dm_target *ti)
{
struct dm_origin *o = ti->private;
+
dm_put_device(ti, o->dev);
kfree(o);
}
struct dm_origin *o = ti->private;
o->split_boundary = get_origin_minimum_chunksize(o->dev->bdev);
+
+ down_write(&_origins_lock);
+ __insert_dm_origin(o);
+ up_write(&_origins_lock);
+}
+
+static void origin_postsuspend(struct dm_target *ti)
+{
+ struct dm_origin *o = ti->private;
+
+ down_write(&_origins_lock);
+ __remove_dm_origin(o);
+ up_write(&_origins_lock);
}
static void origin_status(struct dm_target *ti, status_type_t type,
static struct target_type origin_target = {
.name = "snapshot-origin",
- .version = {1, 8, 1},
+ .version = {1, 9, 0},
.module = THIS_MODULE,
.ctr = origin_ctr,
.dtr = origin_dtr,
.map = origin_map,
.resume = origin_resume,
+ .postsuspend = origin_postsuspend,
.status = origin_status,
.merge = origin_merge,
.iterate_devices = origin_iterate_devices,
static struct target_type snapshot_target = {
.name = "snapshot",
- .version = {1, 12, 0},
+ .version = {1, 13, 0},
.module = THIS_MODULE,
.ctr = snapshot_ctr,
.dtr = snapshot_dtr,
static struct target_type merge_target = {
.name = dm_snapshot_merge_target_name,
- .version = {1, 2, 0},
+ .version = {1, 3, 0},
.module = THIS_MODULE,
.ctr = snapshot_ctr,
.dtr = snapshot_dtr,
return DM_MAPIO_REMAPPED;
case -ENODATA:
- if (get_pool_mode(tc->pool) == PM_READ_ONLY) {
- /*
- * This block isn't provisioned, and we have no way
- * of doing so.
- */
- handle_unserviceable_bio(tc->pool, bio);
- cell_defer_no_holder(tc, virt_cell);
- return DM_MAPIO_SUBMITTED;
- }
- /* fall through */
-
case -EWOULDBLOCK:
thin_defer_cell(tc, virt_cell);
return DM_MAPIO_SUBMITTED;
dm_get(md);
atomic_inc(&md->open_count);
-
out:
spin_unlock(&_minor_lock);
static void dm_blk_close(struct gendisk *disk, fmode_t mode)
{
- struct mapped_device *md = disk->private_data;
+ struct mapped_device *md;
spin_lock(&_minor_lock);
+ md = disk->private_data;
+ if (WARN_ON(!md))
+ goto out;
+
if (atomic_dec_and_test(&md->open_count) &&
(test_bit(DMF_DEFERRED_REMOVE, &md->flags)))
queue_work(deferred_remove_workqueue, &deferred_remove_work);
dm_put(md);
-
+out:
spin_unlock(&_minor_lock);
}
int minor = MINOR(disk_devt(md->disk));
unlock_fs(md);
- bdput(md->bdev);
destroy_workqueue(md->wq);
if (md->kworker_task)
mempool_destroy(md->rq_pool);
if (md->bs)
bioset_free(md->bs);
- blk_integrity_unregister(md->disk);
- del_gendisk(md->disk);
+
cleanup_srcu_struct(&md->io_barrier);
free_table_devices(&md->table_devices);
- free_minor(minor);
+ dm_stats_cleanup(&md->stats);
spin_lock(&_minor_lock);
md->disk->private_data = NULL;
spin_unlock(&_minor_lock);
-
+ if (blk_get_integrity(md->disk))
+ blk_integrity_unregister(md->disk);
+ del_gendisk(md->disk);
put_disk(md->disk);
blk_cleanup_queue(md->queue);
- dm_stats_cleanup(&md->stats);
+ bdput(md->bdev);
+ free_minor(minor);
+
module_put(THIS_MODULE);
kfree(md);
}
BUG_ON(test_bit(DMF_FREEING, &md->flags));
}
+int dm_hold(struct mapped_device *md)
+{
+ spin_lock(&_minor_lock);
+ if (test_bit(DMF_FREEING, &md->flags)) {
+ spin_unlock(&_minor_lock);
+ return -EBUSY;
+ }
+ dm_get(md);
+ spin_unlock(&_minor_lock);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dm_hold);
+
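Editor's note: dm_hold() is the try-get counterpart of dm_get(): it refuses the reference once DMF_FREEING is set, closing the race with device teardown that snapshot_resume() above has to worry about. A hedged usage sketch (the caller and the work it does are illustrative):

    static void example_use(struct mapped_device *md)
    {
            if (dm_hold(md))
                    return;         /* device is already being freed */
            /* ... safe to use md here ... */
            dm_put(md);             /* drop the reference we took */
    }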
const char *dm_device_name(struct mapped_device *md)
{
return md->name;
might_sleep();
- spin_lock(&_minor_lock);
map = dm_get_live_table(md, &srcu_idx);
+
+ spin_lock(&_minor_lock);
idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md))));
set_bit(DMF_FREEING, &md->flags);
spin_unlock(&_minor_lock);
if (dm_request_based(md))
flush_kthread_worker(&md->kworker);
+ /*
+ * Take suspend_lock so that presuspend and postsuspend methods
+ * do not race with internal suspend.
+ */
+ mutex_lock(&md->suspend_lock);
if (!dm_suspended_md(md)) {
dm_table_presuspend_targets(map);
dm_table_postsuspend_targets(map);
}
+ mutex_unlock(&md->suspend_lock);
/* dm_put_live_table must be before msleep, otherwise deadlock is possible */
dm_put_live_table(md, srcu_idx);
flush_workqueue(md->wq);
dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
}
+EXPORT_SYMBOL_GPL(dm_internal_suspend_fast);
void dm_internal_resume_fast(struct mapped_device *md)
{
done:
mutex_unlock(&md->suspend_lock);
}
+EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
/*-----------------------------------------------------------------
* Event notification.
}
if (err) {
mddev_detach(mddev);
- pers->free(mddev, mddev->private);
+ if (mddev->private)
+ pers->free(mddev, mddev->private);
module_put(pers->owner);
bitmap_destroy(mddev);
return err;
dump_zones(mddev);
ret = md_integrity_register(mddev);
- if (ret)
- raid0_free(mddev, conf);
return ret;
}
for (id = kempld_dmi_table;
id->matches[0].slot != DMI_NONE; id++)
if (strstr(id->ident, force_device_id))
- if (id->callback && id->callback(id))
+ if (id->callback && !id->callback(id))
break;
if (id->matches[0].slot == DMI_NONE)
return -ENODEV;
int rtsx_usb_ep0_read_register(struct rtsx_ucr *ucr, u16 addr, u8 *data)
{
u16 value;
+ u8 *buf;
+ int ret;
if (!data)
return -EINVAL;
- *data = 0;
+
+ buf = kzalloc(sizeof(u8), GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
addr |= EP0_READ_REG_CMD << EP0_OP_SHIFT;
value = swab16(addr);
- return usb_control_msg(ucr->pusb_dev,
+ ret = usb_control_msg(ucr->pusb_dev,
usb_rcvctrlpipe(ucr->pusb_dev, 0), RTSX_USB_REQ_REG_OP,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- value, 0, data, 1, 100);
+ value, 0, buf, 1, 100);
+ *data = *buf;
+
+ kfree(buf);
+ return ret;
}
EXPORT_SYMBOL_GPL(rtsx_usb_ep0_read_register);
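Editor's note: the rtsx_usb changes above exist because usb_control_msg() buffers must be DMA-capable; handing it a caller-supplied (possibly on-stack) pointer is unsafe, so the driver now bounces through a kmalloc'd buffer. A minimal sketch of the pattern, with a hypothetical vendor request:

    static int example_read_byte(struct usb_device *udev, u8 *out)
    {
            u8 *buf = kmalloc(1, GFP_KERNEL);  /* heap buffer is DMA-safe */
            int ret;

            if (!buf)
                    return -ENOMEM;
            ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                                  0x01 /* hypothetical request */,
                                  USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                  0, 0, buf, 1, 100);
            if (ret >= 0)
                    *out = *buf;
            kfree(buf);
            return ret;
    }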
int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
{
int ret;
+ u16 *buf;
if (!status)
return -EINVAL;
- if (polling_pipe == 0)
+ if (polling_pipe == 0) {
+ buf = kzalloc(sizeof(u16), GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
ret = usb_control_msg(ucr->pusb_dev,
usb_rcvctrlpipe(ucr->pusb_dev, 0),
RTSX_USB_REQ_POLL,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- 0, 0, status, 2, 100);
- else
+ 0, 0, buf, 2, 100);
+ *status = *buf;
+
+ kfree(buf);
+ } else {
ret = rtsx_usb_get_status_with_bulk(ucr, status);
+ }
/* usb_control_msg may return positive when success */
if (ret < 0)
PTR_ERR(pwrseq->reset_gpios[i]) != -ENOSYS) {
ret = PTR_ERR(pwrseq->reset_gpios[i]);
- while (--i)
+ while (i--)
gpiod_put(pwrseq->reset_gpios[i]);
goto clk_put;
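Editor's note: the one-character change above matters for cleanup loops. With entries 0..i-1 allocated, "while (i--)" releases every one of them, index 0 included; "while (--i)" exits when --i reaches 0, so it skips index 0, and if i was already 0 it decrements past zero before the first test. The same unwind spelled out as a counting loop, for comparison:

    int j;

    /* release everything allocated so far: indices i-1 down to 0 */
    for (j = i - 1; j >= 0; j--)
            gpiod_put(pwrseq->reset_gpios[j]);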
config MTD_NAND_HISI504
tristate "Support for NAND controller on Hisilicon SoC Hip04"
+ depends on HAS_DMA
help
Enables support for NAND controller on Hisilicon SoC Hip04.
nand_writel(info, NDCR, ndcr | int_mask);
}
+static void drain_fifo(struct pxa3xx_nand_info *info, void *data, int len)
+{
+ if (info->ecc_bch) {
+ int timeout;
+
+ /*
+ * According to the datasheet, when reading from NDDB
+ * with BCH enabled, after every 32-byte read, we
+ * have to make sure that the NDSR.RDDREQ bit is set.
+ *
+ * Drain the FIFO eight 32-bit words at a time, and skip
+ * the polling on the last read.
+ */
+ while (len > 8) {
+ __raw_readsl(info->mmio_base + NDDB, data, 8);
+
+ for (timeout = 0;
+ !(nand_readl(info, NDSR) & NDSR_RDDREQ);
+ timeout++) {
+ if (timeout >= 5) {
+ dev_err(&info->pdev->dev,
+ "Timeout on RDDREQ while draining the FIFO\n");
+ return;
+ }
+
+ mdelay(1);
+ }
+
+ data += 32;
+ len -= 8;
+ }
+ }
+
+ __raw_readsl(info->mmio_base + NDDB, data, len);
+}
+
static void handle_data_pio(struct pxa3xx_nand_info *info)
{
unsigned int do_bytes = min(info->data_size, info->chunk_size);
DIV_ROUND_UP(info->oob_size, 4));
break;
case STATE_PIO_READING:
- __raw_readsl(info->mmio_base + NDDB,
- info->data_buff + info->data_buff_pos,
- DIV_ROUND_UP(do_bytes, 4));
+ drain_fifo(info,
+ info->data_buff + info->data_buff_pos,
+ DIV_ROUND_UP(do_bytes, 4));
if (info->oob_size > 0)
- __raw_readsl(info->mmio_base + NDDB,
- info->oob_buff + info->oob_buff_pos,
- DIV_ROUND_UP(info->oob_size, 4));
+ drain_fifo(info,
+ info->oob_buff + info->oob_buff_pos,
+ DIV_ROUND_UP(info->oob_size, 4));
break;
default:
dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__,
int ret, irq, cs;
pdata = dev_get_platdata(&pdev->dev);
+ if (pdata->num_cs <= 0)
+ return -ENODEV;
info = devm_kzalloc(&pdev->dev, sizeof(*info) + (sizeof(*mtd) +
sizeof(*host)) * pdata->num_cs, GFP_KERNEL);
if (!info)
ubi_warn(ubi, "corrupted VID header at PEB %d, LEB %d:%d",
pnum, vol_id, lnum);
err = -EBADMSG;
- } else
+ } else {
err = -EINVAL;
ubi_ro_mode(ubi);
+ }
}
goto out_free;
} else if (err == UBI_IO_BITFLIPS)
config CAN_XILINXCAN
tristate "Xilinx CAN"
- depends on ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST
+ depends on ARCH_ZYNQ || ARM64 || MICROBLAZE || COMPILE_TEST
depends on COMMON_CLK && HAS_IOMEM
---help---
Xilinx CAN driver. This driver supports both soft AXI CAN IP and
skb->pkt_type = PACKET_BROADCAST;
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb_reset_mac_header(skb);
+ skb_reset_network_header(skb);
+ skb_reset_transport_header(skb);
+
can_skb_reserve(skb);
can_skb_prv(skb)->ifindex = dev->ifindex;
skb->pkt_type = PACKET_BROADCAST;
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb_reset_mac_header(skb);
+ skb_reset_network_header(skb);
+ skb_reset_transport_header(skb);
+
can_skb_reserve(skb);
can_skb_prv(skb)->ifindex = dev->ifindex;
* Copyright (C) 2015 Valeo S.A.
*/
+#include <linux/spinlock.h>
+#include <linux/kernel.h>
#include <linux/completion.h>
#include <linux/module.h>
#include <linux/netdevice.h>
struct kvaser_usb_net_priv {
struct can_priv can;
- atomic_t active_tx_urbs;
- struct usb_anchor tx_submitted;
+ spinlock_t tx_contexts_lock;
+ int active_tx_contexts;
struct kvaser_usb_tx_urb_context tx_contexts[MAX_TX_URBS];
+ struct usb_anchor tx_submitted;
struct completion start_comp, stop_comp;
struct kvaser_usb *dev;
while (pos <= actual_len - MSG_HEADER_LEN) {
tmp = buf + pos;
- if (!tmp->len)
- break;
+ /* Handle messages crossing the USB endpoint max packet
+ * size boundary. See kvaser_usb_read_bulk_callback()
+ * for further details.
+ */
+ if (tmp->len == 0) {
+ pos = round_up(pos,
+ dev->bulk_in->wMaxPacketSize);
+ continue;
+ }
if (pos + tmp->len > actual_len) {
dev_err(dev->udev->dev.parent,
struct kvaser_usb_net_priv *priv;
struct sk_buff *skb;
struct can_frame *cf;
+ unsigned long flags;
u8 channel, tid;
channel = msg->u.tx_acknowledge_header.channel;
stats->tx_packets++;
stats->tx_bytes += context->dlc;
- can_get_echo_skb(priv->netdev, context->echo_index);
- context->echo_index = MAX_TX_URBS;
- atomic_dec(&priv->active_tx_urbs);
+ spin_lock_irqsave(&priv->tx_contexts_lock, flags);
+ can_get_echo_skb(priv->netdev, context->echo_index);
+ context->echo_index = MAX_TX_URBS;
+ --priv->active_tx_contexts;
netif_wake_queue(priv->netdev);
+
+ spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
}
static void kvaser_usb_simple_msg_callback(struct urb *urb)
netdev_err(netdev, "Error transmitting URB\n");
usb_unanchor_urb(urb);
usb_free_urb(urb);
- kfree(buf);
return err;
}
return 0;
}
-static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
-{
- int i;
-
- usb_kill_anchored_urbs(&priv->tx_submitted);
- atomic_set(&priv->active_tx_urbs, 0);
-
- for (i = 0; i < MAX_TX_URBS; i++)
- priv->tx_contexts[i].echo_index = MAX_TX_URBS;
-}
-
static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
const struct kvaser_usb_error_summary *es,
struct can_frame *cf)
while (pos <= urb->actual_length - MSG_HEADER_LEN) {
msg = urb->transfer_buffer + pos;
- if (!msg->len)
- break;
+ /* The Kvaser firmware can only read and write messages that
+ * do not cross the USB endpoint's wMaxPacketSize boundary.
+ * If a follow-up command crosses such a boundary, the firmware
+ * puts a placeholder zero-length command in its place and then
+ * aligns the real command to the next max packet size.
+ *
+ * Handle such cases, or we are going to miss a significant
+ * number of events under heavy RX load on the bus.
+ */
+ if (msg->len == 0) {
+ pos = round_up(pos, dev->bulk_in->wMaxPacketSize);
+ continue;
+ }
if (pos + msg->len > urb->actual_length) {
dev_err(dev->udev->dev.parent, "Format error\n");
}
kvaser_usb_handle_message(dev, msg);
-
pos += msg->len;
}
return err;
}
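Editor's note: round_up() is what realigns the read position to the next endpoint packet boundary in the hunks above. For a power-of-two wMaxPacketSize of 64 (a typical full-speed bulk value; the number here is illustrative), a placeholder found at offset 70 makes the parser resume at offset 128:

    /* round_up(x, y) for power-of-two y: (x + y - 1) & ~(y - 1) */
    pos = round_up(70, 64);         /* -> 128, the next packet boundary */
    pos = round_up(64, 64);         /* -> 64, already aligned stays put */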
+static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
+{
+ int i;
+
+ priv->active_tx_contexts = 0;
+ for (i = 0; i < MAX_TX_URBS; i++)
+ priv->tx_contexts[i].echo_index = MAX_TX_URBS;
+}
+
+/* This function might sleep. Do not call it from the atomic context
+ * of URB completions.
+ */
+static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
+{
+ usb_kill_anchored_urbs(&priv->tx_submitted);
+ kvaser_usb_reset_tx_urb_contexts(priv);
+}
+
static void kvaser_usb_unlink_all_urbs(struct kvaser_usb *dev)
{
int i;
struct urb *urb;
void *buf;
struct kvaser_msg *msg;
- int i, err;
- int ret = NETDEV_TX_OK;
+ int i, err, ret = NETDEV_TX_OK;
u8 *msg_tx_can_flags = NULL; /* GCC */
+ unsigned long flags;
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
if (!buf) {
stats->tx_dropped++;
dev_kfree_skb(skb);
- goto nobufmem;
+ goto freeurb;
}
msg = buf;
if (cf->can_id & CAN_RTR_FLAG)
*msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME;
+ spin_lock_irqsave(&priv->tx_contexts_lock, flags);
for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) {
if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) {
context = &priv->tx_contexts[i];
+
+ context->echo_index = i;
+ can_put_echo_skb(skb, netdev, context->echo_index);
+ ++priv->active_tx_contexts;
+ if (priv->active_tx_contexts >= MAX_TX_URBS)
+ netif_stop_queue(netdev);
+
break;
}
}
+ spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
/* This should never happen; it implies a flow control bug */
if (!context) {
netdev_warn(netdev, "cannot find free context\n");
+
+ kfree(buf);
ret = NETDEV_TX_BUSY;
- goto releasebuf;
+ goto freeurb;
}
context->priv = priv;
- context->echo_index = i;
context->dlc = cf->can_dlc;
msg->u.tx_can.tid = context->echo_index;
kvaser_usb_write_bulk_callback, context);
usb_anchor_urb(urb, &priv->tx_submitted);
- can_put_echo_skb(skb, netdev, context->echo_index);
-
- atomic_inc(&priv->active_tx_urbs);
-
- if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
- netif_stop_queue(netdev);
-
err = usb_submit_urb(urb, GFP_ATOMIC);
if (unlikely(err)) {
+ spin_lock_irqsave(&priv->tx_contexts_lock, flags);
+
can_free_echo_skb(netdev, context->echo_index);
+ context->echo_index = MAX_TX_URBS;
+ --priv->active_tx_contexts;
+ netif_wake_queue(netdev);
+
+ spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
- atomic_dec(&priv->active_tx_urbs);
usb_unanchor_urb(urb);
stats->tx_dropped++;
else
netdev_warn(netdev, "Failed tx_urb %d\n", err);
- goto releasebuf;
+ goto freeurb;
}
- usb_free_urb(urb);
-
- return NETDEV_TX_OK;
+ ret = NETDEV_TX_OK;
-releasebuf:
- kfree(buf);
-nobufmem:
+freeurb:
usb_free_urb(urb);
return ret;
}
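Editor's note: the restructured xmit path claims a TX context and updates all bookkeeping inside tx_contexts_lock, so the URB completion handler (which takes the same lock) can never observe a half-reserved slot. The shape of the pattern, sketched with a hypothetical helper:

    spin_lock_irqsave(&priv->tx_contexts_lock, flags);
    context = find_free_context(priv);      /* hypothetical helper */
    if (context) {
            context->echo_index = i;        /* claim before unlocking */
            if (++priv->active_tx_contexts >= MAX_TX_URBS)
                    netif_stop_queue(netdev);
    }
    spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);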
struct kvaser_usb *dev = usb_get_intfdata(intf);
struct net_device *netdev;
struct kvaser_usb_net_priv *priv;
- int i, err;
+ int err;
err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, channel);
if (err)
priv = netdev_priv(netdev);
+ init_usb_anchor(&priv->tx_submitted);
init_completion(&priv->start_comp);
init_completion(&priv->stop_comp);
- init_usb_anchor(&priv->tx_submitted);
- atomic_set(&priv->active_tx_urbs, 0);
-
- for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++)
- priv->tx_contexts[i].echo_index = MAX_TX_URBS;
-
priv->dev = dev;
priv->netdev = netdev;
priv->channel = channel;
+ spin_lock_init(&priv->tx_contexts_lock);
+ kvaser_usb_reset_tx_urb_contexts(priv);
+
priv->can.state = CAN_STATE_STOPPED;
priv->can.clock.freq = CAN_USB_CLOCK;
priv->can.bittiming_const = &kvaser_usb_bittiming_const;
pdev->usb_if = ppdev->usb_if;
pdev->cmd_buffer_addr = ppdev->cmd_buffer_addr;
+
+ /* also copy the ctrlmode[_supported] settings */
+ dev->can.ctrlmode = ppdev->dev.can.ctrlmode;
+ dev->can.ctrlmode_supported = ppdev->dev.can.ctrlmode_supported;
}
pdev->usb_if->dev[dev->ctrl_idx] = dev;
{
struct pcnet32_private *lp;
int i, media;
- int fdx, mii, fset, dxsuflo;
+ int fdx, mii, fset, dxsuflo, sram;
int chip_version;
char *chipname;
struct net_device *dev;
}
/* initialize variables */
- fdx = mii = fset = dxsuflo = 0;
+ fdx = mii = fset = dxsuflo = sram = 0;
chip_version = (chip_version >> 12) & 0xffff;
switch (chip_version) {
chipname = "PCnet/FAST III 79C973"; /* PCI */
fdx = 1;
mii = 1;
+ sram = 1;
break;
case 0x2626:
chipname = "PCnet/Home 79C978"; /* PCI */
chipname = "PCnet/FAST III 79C975"; /* PCI */
fdx = 1;
mii = 1;
+ sram = 1;
break;
case 0x2628:
chipname = "PCnet/PRO 79C976";
dxsuflo = 1;
}
+ /*
+ * The Am79C973/Am79C975 controllers come with 12K of SRAM
+ * which we can use for the Tx/Rx buffers, but most importantly,
+ * the use of SRAM allows us to use the BCR18:NOUFLO bit to avoid
+ * Tx FIFO underflows.
+ */
+ if (sram) {
+ /*
+ * The SRAM is configured in two steps. First we set the
+ * SRAM size in the BCR25:SRAM_SIZE bits. According to the
+ * datasheet, each bit corresponds to a 512-byte page, so we
+ * can have at most 24 pages. SRAM_SIZE holds the upper
+ * 8 bits of the 16-bit SRAM size; the low 8 bits run from
+ * 0x00 to 0xff, so the address range is from 0x0000 up to
+ * 0x17ff. Therefore, SRAM_SIZE is set to 0x17. The next step
+ * is to set BCR26:SRAM_BND midway through so the Tx and Rx
+ * buffers can share the SRAM equally.
+ */
+ a->write_bcr(ioaddr, 25, 0x17);
+ a->write_bcr(ioaddr, 26, 0xc);
+ /* And finally enable the NOUFLO bit */
+ a->write_bcr(ioaddr, 18, a->read_bcr(ioaddr, 18) | (1 << 11));
+ }
+
dev = alloc_etherdev(sizeof(*lp));
if (!dev) {
ret = -ENOMEM;
if (!xgene_ring_mgr_init(pdata))
return -ENODEV;
- if (!efi_enabled(EFI_BOOT)) {
+ if (pdata->clk) {
clk_prepare_enable(pdata->clk);
clk_disable_unprepare(pdata->clk);
clk_prepare_enable(pdata->clk);
#ifdef CONFIG_ACPI
static const struct acpi_device_id xgene_enet_acpi_match[] = {
{ "APMC0D05", },
+ { "APMC0D30", },
+ { "APMC0D31", },
{ }
};
MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match);
#ifdef CONFIG_OF
static struct of_device_id xgene_enet_of_match[] = {
{.compatible = "apm,xgene-enet",},
+ {.compatible = "apm,xgene1-sgenet",},
+ {.compatible = "apm,xgene1-xgenet",},
{},
};
{
struct bcm_enet_priv *priv;
struct net_device *dev;
- int tx_work_done, rx_work_done;
+ int rx_work_done;
priv = container_of(napi, struct bcm_enet_priv, napi);
dev = priv->net_dev;
ENETDMAC_IR, priv->tx_chan);
/* reclaim sent skb */
- tx_work_done = bcm_enet_tx_reclaim(dev, 0);
+ bcm_enet_tx_reclaim(dev, 0);
spin_lock(&priv->rx_lock);
rx_work_done = bcm_enet_receive_queue(dev, budget);
spin_unlock(&priv->rx_lock);
- if (rx_work_done >= budget || tx_work_done > 0) {
- /* rx/tx queue is not yet empty/clean */
+ if (rx_work_done >= budget) {
+ /* rx queue is not yet empty/clean */
return rx_work_done;
}
slot->skb = skb;
slot->dma_addr = dma_addr;
- if (slot->dma_addr & 0xC0000000)
- bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
-
return 0;
}
ring->mmio_base);
goto err_dma_free;
}
- if (ring->dma_base & 0xC0000000)
- bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
BGMAC_DMA_RING_TX);
err = -ENOMEM;
goto err_dma_free;
}
- if (ring->dma_base & 0xC0000000)
- bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
BGMAC_DMA_RING_RX);
pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS,
PCICFG_VENDOR_ID_OFFSET);
+ /* Set PCIe reset type to fundamental for EEH recovery */
+ pdev->needs_freset = 1;
+
/* AER (Advanced Error reporting) configuration */
rc = pci_enable_pcie_error_reporting(pdev);
if (!rc)
NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 |
NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO |
NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX;
- if (!CHIP_IS_E1x(bp)) {
+ if (!chip_is_e1x) {
dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL |
NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT;
dev->hw_enc_features =
if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE))
return -EINVAL;
+ reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
if (wol->wolopts & WAKE_MAGICSECURE) {
bcmgenet_umac_writel(priv, get_unaligned_be16(&wol->sopass[0]),
UMAC_MPD_PW_MS);
bcmgenet_umac_writel(priv, get_unaligned_be32(&wol->sopass[2]),
UMAC_MPD_PW_LS);
- reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
reg |= MPD_PW_EN;
- bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
+ } else {
+ reg &= ~MPD_PW_EN;
}
+ bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
/* Flag the device and relevant IRQ as wakeup capable */
if (wol->wolopts) {
};
#if defined(CONFIG_OF)
-static struct macb_config pc302gem_config = {
+static const struct macb_config pc302gem_config = {
.caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE,
.dma_burst_length = 16,
};
-static struct macb_config sama5d3_config = {
+static const struct macb_config sama5d3_config = {
.caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE,
.dma_burst_length = 16,
};
-static struct macb_config sama5d4_config = {
+static const struct macb_config sama5d4_config = {
.caps = 0,
.dma_burst_length = 4,
};
if (bp->pdev->dev.of_node) {
match = of_match_node(macb_dt_ids, bp->pdev->dev.of_node);
if (match && match->data) {
- config = (const struct macb_config *)match->data;
+ config = match->data;
bp->caps = config->caps;
/*
/* Bitfields in MID */
#define MACB_IDNUM_OFFSET 16
-#define MACB_IDNUM_SIZE 16
+#define MACB_IDNUM_SIZE 12
#define MACB_REV_OFFSET 0
#define MACB_REV_SIZE 16
}
/* Installed successfully, update the cached header too. */
- memcpy(card_fw, fs_fw, sizeof(*card_fw));
+ *card_fw = *fs_fw;
card_fw_usable = 1;
*reset = 0; /* already reset as part of load_fw */
}
(unsigned int)tp->rx_ring[i].buffer1,
(unsigned int)tp->rx_ring[i].buffer2,
buf[0], buf[1], buf[2]);
- for (j = 0; buf[j] != 0xee && j < 1600; j++)
+ for (j = 0; ((j < 1600) && buf[j] != 0xee); j++)
if (j < 100)
pr_cont(" %02x", buf[j]);
pr_cont(" j=%d\n", j);
u16 vlan_tag;
u32 tx_rate;
u32 plink_tracking;
+ u32 privileges;
};
enum vf_state {
u8 __iomem *csr; /* CSR BAR used only for BE2/3 */
u8 __iomem *db; /* Door Bell */
+ u8 __iomem *pcicfg; /* On SH,BEx only. Shadow of PCI config space */
struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
struct be_dma_mem mbox_mem;
{
int num_eqs, i = 0;
- if (lancer_chip(adapter) && num > 8) {
- while (num) {
- num_eqs = min(num, 8);
- __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
- i += num_eqs;
- num -= num_eqs;
- }
- } else {
- __be_cmd_modify_eqd(adapter, set_eqd, num);
+ while (num) {
+ num_eqs = min(num, 8);
+ __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
+ i += num_eqs;
+ num -= num_eqs;
}
return 0;
/* Uses sycnhronous mcc */
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
- u32 num)
+ u32 num, u32 domain)
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_vlan_config *req;
be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
OPCODE_COMMON_NTWK_VLAN_CONFIG, sizeof(*req),
wrb, NULL);
+ req->hdr.domain = domain;
req->interface_id = if_id;
req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
int be_cmd_get_fw_ver(struct be_adapter *adapter);
int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *, int num);
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
- u32 num);
+ u32 num, u32 domain);
int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 status);
int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc);
int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc);
for_each_set_bit(i, adapter->vids, VLAN_N_VID)
vids[num++] = cpu_to_le16(i);
- status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num);
+ status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num, 0);
if (status) {
dev_err(dev, "Setting HW VLAN filtering failed\n");
/* Set to VLAN promisc mode as setting VLAN filter failed */
return 0;
}
+static int be_set_vf_tvt(struct be_adapter *adapter, int vf, u16 vlan)
+{
+ struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
+ u16 vids[BE_NUM_VLANS_SUPPORTED];
+ int vf_if_id = vf_cfg->if_handle;
+ int status;
+
+ /* Enable Transparent VLAN Tagging */
+ status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, vf_if_id, 0);
+ if (status)
+ return status;
+
+ /* Clear pre-programmed VLAN filters on VF if any, if TVT is enabled */
+ vids[0] = 0;
+ status = be_cmd_vlan_config(adapter, vf_if_id, vids, 1, vf + 1);
+ if (!status)
+ dev_info(&adapter->pdev->dev,
+ "Cleared guest VLANs on VF%d", vf);
+
+ /* After TVT is enabled, disallow VFs to program VLAN filters */
+ if (vf_cfg->privileges & BE_PRIV_FILTMGMT) {
+ status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges &
+ ~BE_PRIV_FILTMGMT, vf + 1);
+ if (!status)
+ vf_cfg->privileges &= ~BE_PRIV_FILTMGMT;
+ }
+ return 0;
+}
+
+static int be_clear_vf_tvt(struct be_adapter *adapter, int vf)
+{
+ struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
+ struct device *dev = &adapter->pdev->dev;
+ int status;
+
+ /* Reset Transparent VLAN Tagging. */
+ status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, vf + 1,
+ vf_cfg->if_handle, 0);
+ if (status)
+ return status;
+
+ /* Allow VFs to program VLAN filtering */
+ if (!(vf_cfg->privileges & BE_PRIV_FILTMGMT)) {
+ status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges |
+ BE_PRIV_FILTMGMT, vf + 1);
+ if (!status) {
+ vf_cfg->privileges |= BE_PRIV_FILTMGMT;
+ dev_info(dev, "VF%d: FILTMGMT priv enabled", vf);
+ }
+ }
+
+ dev_info(dev,
+ "Disable/re-enable i/f in VM to clear Transparent VLAN tag");
+ return 0;
+}
+
static int be_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos)
{
struct be_adapter *adapter = netdev_priv(netdev);
struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
- int status = 0;
+ int status;
if (!sriov_enabled(adapter))
return -EPERM;
if (vlan || qos) {
vlan |= qos << VLAN_PRIO_SHIFT;
- if (vf_cfg->vlan_tag != vlan)
- status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
- vf_cfg->if_handle, 0);
+ status = be_set_vf_tvt(adapter, vf, vlan);
} else {
- /* Reset Transparent Vlan Tagging. */
- status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID,
- vf + 1, vf_cfg->if_handle, 0);
+ status = be_clear_vf_tvt(adapter, vf);
}
if (status) {
dev_err(&adapter->pdev->dev,
- "VLAN %d config on VF %d failed : %#x\n", vlan,
- vf, status);
+ "VLAN %d config on VF %d failed : %#x\n", vlan, vf,
+ status);
return be_cmd_status(status);
}
vf_cfg->vlan_tag = vlan;
-
return 0;
}
}
}
} else {
- pci_read_config_dword(adapter->pdev,
- PCICFG_UE_STATUS_LOW, &ue_lo);
- pci_read_config_dword(adapter->pdev,
- PCICFG_UE_STATUS_HIGH, &ue_hi);
- pci_read_config_dword(adapter->pdev,
- PCICFG_UE_STATUS_LOW_MASK, &ue_lo_mask);
- pci_read_config_dword(adapter->pdev,
- PCICFG_UE_STATUS_HI_MASK, &ue_hi_mask);
+ ue_lo = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_LOW);
+ ue_hi = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_HIGH);
+ ue_lo_mask = ioread32(adapter->pcicfg +
+ PCICFG_UE_STATUS_LOW_MASK);
+ ue_hi_mask = ioread32(adapter->pcicfg +
+ PCICFG_UE_STATUS_HI_MASK);
ue_lo = (ue_lo & ~ue_lo_mask);
ue_hi = (ue_hi & ~ue_hi_mask);
u32 cap_flags, u32 vf)
{
u32 en_flags;
- int status;
en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS |
en_flags &= cap_flags;
- status = be_cmd_if_create(adapter, cap_flags, en_flags,
- if_handle, vf);
-
- return status;
+ return be_cmd_if_create(adapter, cap_flags, en_flags, if_handle, vf);
}
static int be_vfs_if_create(struct be_adapter *adapter)
if (!BE3_chip(adapter)) {
status = be_cmd_get_profile_config(adapter, &res,
vf + 1);
- if (!status)
+ if (!status) {
cap_flags = res.if_cap_flags;
+ /* Prevent VFs from enabling VLAN promiscuous
+ * mode
+ */
+ cap_flags &= ~BE_IF_FLAGS_VLAN_PROMISCUOUS;
+ }
}
status = be_if_create(adapter, &vf_cfg->if_handle,
struct device *dev = &adapter->pdev->dev;
struct be_vf_cfg *vf_cfg;
int status, old_vfs, vf;
- u32 privileges;
old_vfs = pci_num_vf(adapter->pdev);
for_all_vfs(adapter, vf_cfg, vf) {
/* Allow VFs to programs MAC/VLAN filters */
- status = be_cmd_get_fn_privileges(adapter, &privileges, vf + 1);
- if (!status && !(privileges & BE_PRIV_FILTMGMT)) {
+ status = be_cmd_get_fn_privileges(adapter, &vf_cfg->privileges,
+ vf + 1);
+ if (!status && !(vf_cfg->privileges & BE_PRIV_FILTMGMT)) {
status = be_cmd_set_fn_privileges(adapter,
- privileges |
+ vf_cfg->privileges |
BE_PRIV_FILTMGMT,
vf + 1);
- if (!status)
+ if (!status) {
+ vf_cfg->privileges |= BE_PRIV_FILTMGMT;
dev_info(dev, "VF%d has FILTMGMT privilege\n",
vf);
+ }
}
/* Allow full available bandwidth */
static int be_map_pci_bars(struct be_adapter *adapter)
{
+ struct pci_dev *pdev = adapter->pdev;
u8 __iomem *addr;
if (BEx_chip(adapter) && be_physfn(adapter)) {
- adapter->csr = pci_iomap(adapter->pdev, 2, 0);
+ adapter->csr = pci_iomap(pdev, 2, 0);
if (!adapter->csr)
return -ENOMEM;
}
- addr = pci_iomap(adapter->pdev, db_bar(adapter), 0);
+ addr = pci_iomap(pdev, db_bar(adapter), 0);
if (!addr)
goto pci_map_err;
adapter->db = addr;
+ if (skyhawk_chip(adapter) || BEx_chip(adapter)) {
+ if (be_physfn(adapter)) {
+ /* PCICFG is the 2nd BAR in BE2 */
+ addr = pci_iomap(pdev, BE2_chip(adapter) ? 1 : 0, 0);
+ if (!addr)
+ goto pci_map_err;
+ adapter->pcicfg = addr;
+ } else {
+ adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET;
+ }
+ }
+
be_roce_map_pci_bars(adapter);
return 0;
pci_map_err:
- dev_err(&adapter->pdev->dev, "Error in mapping PCI BARs\n");
+ dev_err(&pdev->dev, "Error in mapping PCI BARs\n");
be_unmap_pci_bars(adapter);
return -ENOMEM;
}
fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{
struct fec_enet_private *fep;
- struct bufdesc *bdp, *bdp_t;
+ struct bufdesc *bdp;
unsigned short status;
struct sk_buff *skb;
struct fec_enet_priv_tx_q *txq;
struct netdev_queue *nq;
int index = 0;
- int i, bdnum;
int entries_free;
fep = netdev_priv(ndev);
if (bdp == txq->cur_tx)
break;
- bdp_t = bdp;
- bdnum = 1;
- index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep);
- skb = txq->tx_skbuff[index];
- while (!skb) {
- bdp_t = fec_enet_get_nextdesc(bdp_t, fep, queue_id);
- index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep);
- skb = txq->tx_skbuff[index];
- bdnum++;
- }
- if (skb_shinfo(skb)->nr_frags &&
- (status = bdp_t->cbd_sc) & BD_ENET_TX_READY)
- break;
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
- for (i = 0; i < bdnum; i++) {
- if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- bdp->cbd_datlen, DMA_TO_DEVICE);
- bdp->cbd_bufaddr = 0;
- if (i < bdnum - 1)
- bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
- }
+ skb = txq->tx_skbuff[index];
txq->tx_skbuff[index] = NULL;
+ if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ bdp->cbd_datlen, DMA_TO_DEVICE);
+ bdp->cbd_bufaddr = 0;
+ if (!skb) {
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
+ continue;
+ }
/* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
vlan_packet_rcvd = true;
- skb_copy_to_linear_data_offset(skb, VLAN_HLEN,
- data, (2 * ETH_ALEN));
+ memmove(skb->data + VLAN_HLEN, data, ETH_ALEN * 2);
skb_pull(skb, VLAN_HLEN);
}
writel(int_events, fep->hwp + FEC_IEVENT);
fec_enet_collect_events(fep, int_events);
- if (fep->work_tx || fep->work_rx) {
+ if ((fep->work_tx || fep->work_rx) && fep->link) {
ret = IRQ_HANDLED;
if (napi_schedule_prep(&fep->napi)) {
regulator_disable(fep->reg_phy);
if (fep->ptp_clock)
ptp_clock_unregister(fep->ptp_clock);
- fec_enet_clk_enable(ndev, false);
of_node_put(fep->phy_node);
free_netdev(ndev);
return 0;
}
+static int gfar_of_group_count(struct device_node *np)
+{
+ struct device_node *child;
+ int num = 0;
+
+ for_each_available_child_of_node(np, child)
+ if (!of_node_cmp(child->name, "queue-group"))
+ num++;
+
+ return num;
+}
+
static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
{
const char *model;
num_rx_qs = 1;
} else { /* MQ_MG_MODE */
/* get the actual number of supported groups */
- unsigned int num_grps = of_get_available_child_count(np);
+ unsigned int num_grps = gfar_of_group_count(np);
if (num_grps == 0 || num_grps > MAXGROUPS) {
dev_err(&ofdev->dev, "Invalid # of int groups(%d)\n",
/* Parse and initialize group specific information */
if (priv->mode == MQ_MG_MODE) {
- for_each_child_of_node(np, child) {
+ for_each_available_child_of_node(np, child) {
+ if (of_node_cmp(child->name, "queue-group"))
+ continue;
+
err = gfar_parse_group(child, priv, model);
if (err)
goto err_grp_init;
ibmveth_replenish_task(adapter);
if (frames_processed < budget) {
+ napi_complete(napi);
+
/* We think we are done - reenable interrupts,
* then check once more to make sure we are done.
*/
BUG_ON(lpar_rc != H_SUCCESS);
- napi_complete(napi);
-
if (ibmveth_rxq_pending_buffer(adapter) &&
napi_reschedule(napi)) {
lpar_rc = h_vio_signal(adapter->vdev->unit_address,
/* Schedule multicast task to populate multicast list */
queue_work(mdev->workqueue, &priv->rx_mode_task);
- mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap);
-
#ifdef CONFIG_MLX4_EN_VXLAN
if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN)
vxlan_get_rx_port(dev);
queue_delayed_work(mdev->workqueue, &priv->service_task,
SERVICE_TASK_DELAY);
+ mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap);
+
return 0;
out:
unsigned long rx_chksum_none;
unsigned long rx_chksum_complete;
unsigned long tx_chksum_offload;
-#define NUM_PORT_STATS 9
+#define NUM_PORT_STATS 10
};
struct mlx4_en_perf_stats {
#include "smc91x.h"
#if defined(CONFIG_ASSABET_NEPONSET)
+#include <mach/assabet.h>
#include <mach/neponset.h>
#endif
const struct of_device_id *match = NULL;
struct smc_local *lp;
struct net_device *ndev;
- struct resource *res;
+ struct resource *res, *ires;
unsigned int __iomem *addr;
unsigned long irq_flags = SMC_IRQ_FLAGS;
- unsigned long irq_resflags;
int ret;
ndev = alloc_etherdev(sizeof(struct smc_local));
goto out_free_netdev;
}
- ndev->irq = platform_get_irq(pdev, 0);
- if (ndev->irq <= 0) {
+ ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (!ires) {
ret = -ENODEV;
goto out_release_io;
}
- /*
- * If this platform does not specify any special irqflags, or if
- * the resource supplies a trigger, override the irqflags with
- * the trigger flags from the resource.
- */
- irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq));
- if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK)
- irq_flags = irq_resflags & IRQF_TRIGGER_MASK;
+
+ ndev->irq = ires->start;
+
+ if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK)
+ irq_flags = ires->flags & IRQF_TRIGGER_MASK;
ret = smc_request_attrib(pdev, ndev);
if (ret)
struct stmmac_priv *priv = NULL;
struct plat_stmmacenet_data *plat_dat = NULL;
const char *mac = NULL;
+ int irq, wol_irq, lpi_irq;
+
+ /* Get IRQ information early so that we can ask for deferred
+ * probing, if needed, before going too far with resource allocation.
+ */
+ irq = platform_get_irq_byname(pdev, "macirq");
+ if (irq < 0) {
+ if (irq != -EPROBE_DEFER) {
+ dev_err(dev,
+ "MAC IRQ configuration information not found\n");
+ }
+ return irq;
+ }
+
+ /* On some platforms, e.g. SPEAr, the wake-up IRQ differs from the MAC
+ * IRQ. The external wake-up IRQ can be passed through the platform
+ * code, named "eth_wake_irq".
+ *
+ * If the wake-up interrupt is not passed from the platform,
+ * the driver will continue to use the MAC IRQ (ndev->irq).
+ */
+ wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq");
+ if (wol_irq < 0) {
+ if (wol_irq == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ wol_irq = irq;
+ }
+
+ lpi_irq = platform_get_irq_byname(pdev, "eth_lpi");
+ if (lpi_irq == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
addr = devm_ioremap_resource(dev, res);
return PTR_ERR(priv);
}
+ /* Copy IRQ values to priv structure, which is now available */
+ priv->dev->irq = irq;
+ priv->wol_irq = wol_irq;
+ priv->lpi_irq = lpi_irq;
+
/* Get MAC address if available (DT) */
if (mac)
memcpy(priv->dev->dev_addr, mac, ETH_ALEN);
- /* Get the MAC information */
- priv->dev->irq = platform_get_irq_byname(pdev, "macirq");
- if (priv->dev->irq < 0) {
- if (priv->dev->irq != -EPROBE_DEFER) {
- netdev_err(priv->dev,
- "MAC IRQ configuration information not found\n");
- }
- return priv->dev->irq;
- }
-
- /*
- * On some platforms e.g. SPEAr the wake up irq differs from the mac irq
- * The external wake up irq can be passed through the platform code
- * named as "eth_wake_irq"
- *
- * In case the wake up interrupt is not passed from the platform
- * so the driver will continue to use the mac irq (ndev->irq)
- */
- priv->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq");
- if (priv->wol_irq < 0) {
- if (priv->wol_irq == -EPROBE_DEFER)
- return -EPROBE_DEFER;
- priv->wol_irq = priv->dev->irq;
- }
-
- priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi");
- if (priv->lpi_irq == -EPROBE_DEFER)
- return -EPROBE_DEFER;
-
platform_set_drvdata(pdev, priv->dev);
pr_debug("STMMAC platform driver registration completed");
}
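Editor's note: the reordering above follows the usual deferred-probe discipline: query every IRQ first so that -EPROBE_DEFER can be returned before any resource has been mapped or allocated. Skeleton of the pattern, with illustrative names:

    static int example_probe(struct platform_device *pdev)
    {
            int irq = platform_get_irq_byname(pdev, "macirq");

            if (irq < 0)
                    return irq;  /* may be -EPROBE_DEFER; nothing to undo */
            /* only now touch memory regions, clocks, allocations, ... */
            return 0;
    }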
if (rx_count < budget) {
+ napi_complete(napi);
w5100_write(priv, W5100_IMR, IR_S0);
mmiowb();
- napi_complete(napi);
}
return rx_count;
}
if (rx_count < budget) {
+ napi_complete(napi);
w5300_write(priv, W5300_IMR, IR_S0);
mmiowb();
- napi_complete(napi);
}
return rx_count;
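Editor's note: w5100 and w5300 above (and ibmveth earlier in this series) get the same ordering fix: napi_complete() must run before device interrupts are re-enabled, otherwise an IRQ firing in the window would try to schedule a NAPI instance that is still marked as running. The canonical poll-function tail, sketched with a hypothetical enable_device_irq() stand-in:

    if (work_done < budget) {
            napi_complete(napi);            /* first: leave polled mode */
            enable_device_irq(priv);        /* then: let IRQs fire again */
    }
    return work_done;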
if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
- rcu_read_lock();
- list_for_each_entry_rcu(port, &team->port_list, list)
+ mutex_lock(&team->lock);
+ list_for_each_entry(port, &team->port_list, list)
if (team->ops.port_change_dev_addr)
team->ops.port_change_dev_addr(team, port);
- rcu_read_unlock();
+ mutex_unlock(&team->lock);
return 0;
}
};
#define CMD_PACKET_SIZE 64
-/* first command after power on can take around 8 seconds */
-#define CMD_TIMEOUT 15000
+#define CMD_TIMEOUT 100
#define CMD_REPLY_RETRY 5
#define CX82310_MTU 1514
ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, CMD_EP), buf,
CMD_PACKET_SIZE, &actual_len, CMD_TIMEOUT);
if (ret < 0) {
- dev_err(&dev->udev->dev, "send command %#x: error %d\n",
- cmd, ret);
+ if (cmd != CMD_GET_LINK_STATUS)
+ dev_err(&dev->udev->dev, "send command %#x: error %d\n",
+ cmd, ret);
goto end;
}
buf, CMD_PACKET_SIZE, &actual_len,
CMD_TIMEOUT);
if (ret < 0) {
- dev_err(&dev->udev->dev,
- "reply receive error %d\n", ret);
+ if (cmd != CMD_GET_LINK_STATUS)
+ dev_err(&dev->udev->dev,
+ "reply receive error %d\n",
+ ret);
goto end;
}
if (actual_len > 0)
int ret;
char buf[15];
struct usb_device *udev = dev->udev;
+ u8 link[3];
+ int timeout = 50;
/* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */
if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0
if (!dev->partial_data)
return -ENOMEM;
+ /* wait for firmware to become ready (indicated by the link being up) */
+ while (--timeout) {
+ ret = cx82310_cmd(dev, CMD_GET_LINK_STATUS, true, NULL, 0,
+ link, sizeof(link));
+ /* the command can time out during boot - it's not an error */
+ if (!ret && link[0] == 1 && link[2] == 1)
+ break;
+ msleep(500);
+ }
+ if (!timeout) {
+ dev_err(&udev->dev, "firmware not ready in time\n");
+ return -ETIMEDOUT;
+ }
+
/* enable ethernet mode (?) */
ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0);
if (ret) {
.tx_fixup = cx82310_tx_fixup,
};
+#define USB_DEVICE_CLASS(vend, prod, cl, sc, pr) \
+ .match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+ USB_DEVICE_ID_MATCH_DEV_INFO, \
+ .idVendor = (vend), \
+ .idProduct = (prod), \
+ .bDeviceClass = (cl), \
+ .bDeviceSubClass = (sc), \
+ .bDeviceProtocol = (pr)
+
static const struct usb_device_id products[] = {
{
- USB_DEVICE_AND_INTERFACE_INFO(0x0572, 0xcb01, 0xff, 0, 0),
+ USB_DEVICE_CLASS(0x0572, 0xcb01, 0xff, 0, 0),
.driver_info = (unsigned long) &cx82310_info
},
{ },
{
int i;
- for (i = 0; i < vi->max_queue_pairs; i++)
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ napi_hash_del(&vi->rq[i].napi);
netif_napi_del(&vi->rq[i].napi);
+ }
kfree(vi->rq);
kfree(vi->sq);
cancel_delayed_work_sync(&vi->refill);
if (netif_running(vi->dev)) {
- for (i = 0; i < vi->max_queue_pairs; i++) {
+ for (i = 0; i < vi->max_queue_pairs; i++)
napi_disable(&vi->rq[i].napi);
- napi_hash_del(&vi->rq[i].napi);
- netif_napi_del(&vi->rq[i].napi);
- }
}
remove_vq_common(vi);
goto drop;
flags &= ~VXLAN_HF_RCO;
- vni &= VXLAN_VID_MASK;
+ vni &= VXLAN_VNI_MASK;
}
/* For backwards compatibility, only allow reserved fields to be
flags &= ~VXLAN_GBP_USED_BITS;
}
- if (flags || (vni & ~VXLAN_VID_MASK)) {
+ if (flags || vni & ~VXLAN_VNI_MASK) {
/* If there are any unprocessed flags remaining treat
* this as a malformed packet. This behavior diverges from
* VXLAN RFC (RFC7348) which stipulates that bits in reserved
case 0x432a: /* BCM4321 */
case 0x432d: /* BCM4322 */
case 0x4352: /* BCM43222 */
+ case 0x435a: /* BCM43228 */
case 0x4333: /* BCM4331 */
case 0x43a2: /* BCM4360 */
case 0x43b3: /* BCM4352 */
void *dcmd_buf = NULL, *wr_pointer;
u16 msglen, maxmsglen = PAGE_SIZE - 0x100;
- brcmf_dbg(TRACE, "cmd %x set %d len %d\n", cmdhdr->cmd, cmdhdr->set,
- cmdhdr->len);
+ if (len < sizeof(*cmdhdr)) {
+ brcmf_err("vendor command too short: %d\n", len);
+ return -EINVAL;
+ }
vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev);
ifp = vif->ifp;
- len -= sizeof(struct brcmf_vndr_dcmd_hdr);
+ brcmf_dbg(TRACE, "ifidx=%d, cmd=%d\n", ifp->ifidx, cmdhdr->cmd);
+
+ if (cmdhdr->offset > len) {
+ brcmf_err("bad buffer offset %d > %d\n", cmdhdr->offset, len);
+ return -EINVAL;
+ }
+
+ len -= cmdhdr->offset;
ret_len = cmdhdr->len;
if (ret_len > 0 || len > 0) {
if (len > BRCMF_DCMD_MAXLEN) {
.nvm_calib_ver = EEPROM_1000_TX_POWER_VERSION, \
.base_params = &iwl1000_base_params, \
.eeprom_params = &iwl1000_eeprom_params, \
- .led_mode = IWL_LED_BLINK
+ .led_mode = IWL_LED_BLINK, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl1000_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 1000 BGN",
.base_params = &iwl1000_base_params, \
.eeprom_params = &iwl1000_eeprom_params, \
.led_mode = IWL_LED_RF_STATE, \
- .rx_with_siso_diversity = true
+ .rx_with_siso_diversity = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl100_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 100 BGN",
.nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \
.base_params = &iwl2000_base_params, \
.eeprom_params = &iwl20x0_eeprom_params, \
- .led_mode = IWL_LED_RF_STATE
+ .led_mode = IWL_LED_RF_STATE, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
+
const struct iwl_cfg iwl2000_2bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 2200 BGN",
.nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \
.base_params = &iwl2030_base_params, \
.eeprom_params = &iwl20x0_eeprom_params, \
- .led_mode = IWL_LED_RF_STATE
+ .led_mode = IWL_LED_RF_STATE, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl2030_2bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 2230 BGN",
.base_params = &iwl2000_base_params, \
.eeprom_params = &iwl20x0_eeprom_params, \
.led_mode = IWL_LED_RF_STATE, \
- .rx_with_siso_diversity = true
+ .rx_with_siso_diversity = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl105_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 105 BGN",
.base_params = &iwl2030_base_params, \
.eeprom_params = &iwl20x0_eeprom_params, \
.led_mode = IWL_LED_RF_STATE, \
- .rx_with_siso_diversity = true
+ .rx_with_siso_diversity = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl135_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 135 BGN",
.nvm_calib_ver = EEPROM_5000_TX_POWER_VERSION, \
.base_params = &iwl5000_base_params, \
.eeprom_params = &iwl5000_eeprom_params, \
- .led_mode = IWL_LED_BLINK
+ .led_mode = IWL_LED_BLINK, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl5300_agn_cfg = {
.name = "Intel(R) Ultimate N WiFi Link 5300 AGN",
.base_params = &iwl5000_base_params, \
.eeprom_params = &iwl5000_eeprom_params, \
.led_mode = IWL_LED_BLINK, \
- .internal_wimax_coex = true
+ .internal_wimax_coex = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl5150_agn_cfg = {
.name = "Intel(R) WiMAX/WiFi Link 5150 AGN",
.nvm_calib_ver = EEPROM_6005_TX_POWER_VERSION, \
.base_params = &iwl6000_g2_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
- .led_mode = IWL_LED_RF_STATE
+ .led_mode = IWL_LED_RF_STATE, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6005_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N 6205 AGN",
.nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \
.base_params = &iwl6000_g2_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
- .led_mode = IWL_LED_RF_STATE
+ .led_mode = IWL_LED_RF_STATE, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6030_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N 6230 AGN",
.nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \
.base_params = &iwl6000_g2_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
- .led_mode = IWL_LED_RF_STATE
+ .led_mode = IWL_LED_RF_STATE, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6035_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N 6235 AGN",
.nvm_calib_ver = EEPROM_6000_TX_POWER_VERSION, \
.base_params = &iwl6000_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
- .led_mode = IWL_LED_BLINK
+ .led_mode = IWL_LED_BLINK, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6000i_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N 6200 AGN",
.base_params = &iwl6050_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
.led_mode = IWL_LED_BLINK, \
- .internal_wimax_coex = true
+ .internal_wimax_coex = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6050_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N + WiMAX 6250 AGN",
.base_params = &iwl6050_base_params, \
.eeprom_params = &iwl6000_eeprom_params, \
.led_mode = IWL_LED_BLINK, \
- .internal_wimax_coex = true
+ .internal_wimax_coex = true, \
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K
const struct iwl_cfg iwl6150_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN",
if (!vif->bss_conf.assoc)
smps_mode = IEEE80211_SMPS_AUTOMATIC;
- if (IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status,
+ if (mvmvif->phy_ctxt &&
+ IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status,
mvmvif->phy_ctxt->id))
smps_mode = IEEE80211_SMPS_AUTOMATIC;
if (!vif->bss_conf.assoc)
smps_mode = IEEE80211_SMPS_AUTOMATIC;
- if (data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id))
+ if (mvmvif->phy_ctxt &&
+ data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id))
smps_mode = IEEE80211_SMPS_AUTOMATIC;
IWL_DEBUG_COEX(data->mvm,
hw->wiphy->bands[IEEE80211_BAND_5GHZ] =
&mvm->nvm_data->bands[IEEE80211_BAND_5GHZ];
- if (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_BEAMFORMER)
+ if ((mvm->fw->ucode_capa.capa[0] &
+ IWL_UCODE_TLV_CAPA_BEAMFORMER) &&
+ (mvm->fw->ucode_capa.api[0] &
+ IWL_UCODE_TLV_API_LQ_SS_PARAMS))
hw->wiphy->bands[IEEE80211_BAND_5GHZ]->vht_cap.cap |=
IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE;
}
mutex_lock(&mvm->mutex);
- iwl_mvm_cancel_scan(mvm);
+ /* Due to a race condition, it's possible that mac80211 asks
+ * us to stop a hw_scan when it's already stopped. This can
+ * happen, for instance, if we stopped the scan ourselves,
+ * called ieee80211_scan_completed() and userspace requested a
+ * scan cancel before ieee80211_scan_work() could run.
+ * To handle that, simply return if the scan is not running.
+ */
+ /* FIXME: for now, we ignore this race for UMAC scans, since
+ * they don't set the scan_status.
+ */
+ if ((mvm->scan_status == IWL_MVM_SCAN_OS) ||
+ (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN))
+ iwl_mvm_cancel_scan(mvm);
mutex_unlock(&mvm->mutex);
}
int ret;
mutex_lock(&mvm->mutex);
+
+ /* Due to a race condition, it's possible that mac80211 asks
+ * us to stop a sched_scan when it's already stopped. This
+ * can happen, for instance, if we stopped the scan ourselves,
+ * called ieee80211_sched_scan_stopped() and userspace requested a
+ * sched scan stop before ieee80211_sched_scan_stopped_work()
+ * could run. To handle this, simply return if the scan is
+ * not running.
+ */
+ /* FIXME: for now, we ignore this race for UMAC scans, since
+ * they don't set the scan_status.
+ */
+ if (mvm->scan_status != IWL_MVM_SCAN_SCHED &&
+ !(mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) {
+ mutex_unlock(&mvm->mutex);
+ return 0;
+ }
+
ret = iwl_mvm_scan_offload_stop(mvm, false);
mutex_unlock(&mvm->mutex);
iwl_mvm_wait_for_async_handlers(mvm);
return ret;
-
}
static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
#define MAX_NEXT_COLUMNS 7
#define MAX_COLUMN_CHECKS 3
+struct rs_tx_column;
+
typedef bool (*allow_column_func_t) (struct iwl_mvm *mvm,
struct ieee80211_sta *sta,
- struct iwl_scale_tbl_info *tbl);
+ struct iwl_scale_tbl_info *tbl,
+ const struct rs_tx_column *next_col);
struct rs_tx_column {
enum rs_column_mode mode;
};
static bool rs_ant_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
- struct iwl_scale_tbl_info *tbl)
+ struct iwl_scale_tbl_info *tbl,
+ const struct rs_tx_column *next_col)
{
- return iwl_mvm_bt_coex_is_ant_avail(mvm, tbl->rate.ant);
+ return iwl_mvm_bt_coex_is_ant_avail(mvm, next_col->ant);
}
static bool rs_mimo_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
- struct iwl_scale_tbl_info *tbl)
+ struct iwl_scale_tbl_info *tbl,
+ const struct rs_tx_column *next_col)
{
if (!sta->ht_cap.ht_supported)
return false;
}
static bool rs_siso_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
- struct iwl_scale_tbl_info *tbl)
+ struct iwl_scale_tbl_info *tbl,
+ const struct rs_tx_column *next_col)
{
if (!sta->ht_cap.ht_supported)
return false;
}
static bool rs_sgi_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
- struct iwl_scale_tbl_info *tbl)
+ struct iwl_scale_tbl_info *tbl,
+ const struct rs_tx_column *next_col)
{
struct rs_rate *rate = &tbl->rate;
struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
for (j = 0; j < MAX_COLUMN_CHECKS; j++) {
allow_func = next_col->checks[j];
- if (allow_func && !allow_func(mvm, sta, tbl))
+ if (allow_func && !allow_func(mvm, sta, tbl, next_col))
break;
}
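
For context on the signature change above: a column's eligibility checks must evaluate the column being considered, so the candidate is now passed in explicitly. A hedged sketch of a table entry (the .ant and .checks fields come from the full rs_tx_column definition, which this excerpt elides; RS_LEGACY and ANT_B are assumed values from the rate-scaling code):

    /* Hypothetical column entry -- the real tables live in rs.c.
     * With next_col threaded through, rs_ant_allow() can validate the
     * candidate column's antenna (next_col->ant) instead of the
     * currently active table's (tbl->rate.ant).
     */
    static const struct rs_tx_column rs_column_legacy_ant_b_sketch = {
            .mode = RS_LEGACY,
            .ant = ANT_B,
            .checks = {
                    rs_ant_allow,
            },
    };
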
if (mvm->scan_status == IWL_MVM_SCAN_NONE)
return 0;
- if (iwl_mvm_is_radio_killed(mvm))
+ if (iwl_mvm_is_radio_killed(mvm)) {
+ ret = 0;
goto out;
+ }
if (mvm->scan_status != IWL_MVM_SCAN_SCHED &&
(!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_LMAC_SCAN) ||
IWL_DEBUG_SCAN(mvm, "Send stop %sscan failed %d\n",
sched ? "offloaded " : "", ret);
iwl_remove_notification(&mvm->notif_wait, &wait_scan_done);
- return ret;
+ goto out;
}
IWL_DEBUG_SCAN(mvm, "Successfully sent stop %sscan\n",
sched ? "offloaded " : "");
ret = iwl_wait_notification(&mvm->notif_wait, &wait_scan_done, 1 * HZ);
- if (ret)
- return ret;
-
+out:
/*
* Clear the scan status so the next scan requests will succeed. This
* also ensures the Rx handler doesn't do anything, as the scan was
if (mvm->scan_status == IWL_MVM_SCAN_OS)
iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN);
-out:
mvm->scan_status = IWL_MVM_SCAN_NONE;
if (notify) {
ieee80211_scan_completed(mvm->hw, true);
}
- return 0;
+ return ret;
}
static void iwl_mvm_unified_scan_fill_tx_cmd(struct iwl_mvm *mvm,
* request
*/
list_for_each_entry(te_data, &mvm->time_event_list, list) {
- if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE &&
- te_data->running) {
+ if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) {
mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
is_p2p = true;
goto remove_te;
* request
*/
list_for_each_entry(te_data, &mvm->aux_roc_te_list, list) {
- if (te_data->running) {
- mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
- goto remove_te;
- }
+ mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
+ goto remove_te;
}
remove_te:
}
return true;
- } else if (0x86DD == ether_type) {
- return true;
+ } else if (ETH_P_IPV6 == ether_type) {
+ /* TODO: Handle any IPv6 cases that need special handling.
+ * For now, always return false
+ */
+ goto end;
}
end:
unsigned int num_queues = vif->num_queues;
int i;
unsigned int queue_index;
- struct xenvif_stats *vif_stats;
for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
unsigned long accum = 0;
for (queue_index = 0; queue_index < num_queues; ++queue_index) {
- vif_stats = &vif->queues[queue_index].stats;
+ void *vif_stats = &vif->queues[queue_index].stats;
accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset);
}
data[i] = accum;
static void make_tx_response(struct xenvif_queue *queue,
struct xen_netif_tx_request *txp,
s8 st);
+static void push_tx_responses(struct xenvif_queue *queue);
static inline int tx_work_todo(struct xenvif_queue *queue);
unsigned long flags;
do {
- int notify;
-
spin_lock_irqsave(&queue->response_lock, flags);
make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
- RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
+ push_tx_responses(queue);
spin_unlock_irqrestore(&queue->response_lock, flags);
- if (notify)
- notify_remote_via_irq(queue->tx_irq);
-
if (cons == end)
break;
txp = RING_GET_REQUEST(&queue->tx, cons++);
{
unsigned int offset = skb_headlen(skb);
skb_frag_t frags[MAX_SKB_FRAGS];
- int i;
+ int i, f;
struct ubuf_info *uarg;
struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
frags[i].page_offset = 0;
skb_frag_size_set(&frags[i], len);
}
- /* swap out with old one */
- memcpy(skb_shinfo(skb)->frags,
- frags,
- i * sizeof(skb_frag_t));
- skb_shinfo(skb)->nr_frags = i;
- skb->truesize += i * PAGE_SIZE;
- /* remove traces of mapped pages and frag_list */
+ /* Copied all the bits from the frag list -- free it. */
skb_frag_list_init(skb);
+ xenvif_skb_zerocopy_prepare(queue, nskb);
+ kfree_skb(nskb);
+
+ /* Release all the original (foreign) frags. */
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+ skb_frag_unref(skb, f);
uarg = skb_shinfo(skb)->destructor_arg;
/* increase inflight counter to offset decrement in callback */
atomic_inc(&queue->inflight_packets);
uarg->callback(uarg, true);
skb_shinfo(skb)->destructor_arg = NULL;
- xenvif_skb_zerocopy_prepare(queue, nskb);
- kfree_skb(nskb);
+ /* Fill the skb with the new (local) frags. */
+ memcpy(skb_shinfo(skb)->frags, frags, i * sizeof(skb_frag_t));
+ skb_shinfo(skb)->nr_frags = i;
+ skb->truesize += i * PAGE_SIZE;
return 0;
}
{
struct pending_tx_info *pending_tx_info;
pending_ring_idx_t index;
- int notify;
unsigned long flags;
pending_tx_info = &queue->pending_tx_info[pending_idx];
index = pending_index(queue->pending_prod++);
queue->pending_ring[index] = pending_idx;
- RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
+ push_tx_responses(queue);
spin_unlock_irqrestore(&queue->response_lock, flags);
-
- if (notify)
- notify_remote_via_irq(queue->tx_irq);
}
queue->tx.rsp_prod_pvt = ++i;
}
+static void push_tx_responses(struct xenvif_queue *queue)
+{
+ int notify;
+
+ RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
+ if (notify)
+ notify_remote_via_irq(queue->tx_irq);
+}
+
static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
u16 id,
s8 st,
bool
config OF_OVERLAY
- bool
- depends on OF
+ bool "Device Tree overlays"
select OF_DYNAMIC
select OF_RESOLVE
const char *path)
{
struct device_node *child;
- int len = strchrnul(path, '/') - path;
- int term;
+ int len;
+ len = strcspn(path, "/:");
if (!len)
return NULL;
- term = strchrnul(path, ':') - path;
- if (term < len)
- len = term;
-
__for_each_child_of_node(parent, child) {
const char *name = strrchr(child->full_name, '/');
if (WARN(!name, "malformed device_node %s\n", child->full_name))
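
The strcspn() rewrite above collapses the two strchrnul() calls into one scan: the node name ends at the first '/' (child separator) or ':' (options separator), whichever comes first. A standalone userspace illustration (plain C, not kernel code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            /* length of the leading segment containing neither '/' nor ':' */
            printf("%zu\n", strcspn("soc/node:opt", "/:"));  /* 3 -> "soc"  */
            printf("%zu\n", strcspn("node:test/opt", "/:")); /* 4 -> "node" */
            printf("%zu\n", strcspn("/abc", "/:"));          /* 0 -> empty  */
            return 0;
    }
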
/* The path could begin with an alias */
if (*path != '/') {
- char *p = strchrnul(path, '/');
- int len = separator ? separator - path : p - path;
+ int len;
+ const char *p = separator;
+
+ if (!p)
+ p = strchrnul(path, '/');
+ len = p - path;
/* of_aliases must not be NULL */
if (!of_aliases)
path++; /* Increment past '/' delimiter */
np = __of_find_node_by_path(np, path);
path = strchrnul(path, '/');
+ if (separator && separator < path)
+ break;
}
raw_spin_unlock_irqrestore(&devtree_lock, flags);
return np;
struct device_node *p;
const __be32 *intspec, *tmp, *addr;
u32 intsize, intlen;
- int i, res = -EINVAL;
+ int i, res;
pr_debug("of_irq_parse_one: dev=%s, index=%d\n", of_node_full_name(device), index);
/* Get size of interrupt specifier */
tmp = of_get_property(p, "#interrupt-cells", NULL);
- if (tmp == NULL)
+ if (tmp == NULL) {
+ res = -EINVAL;
goto out;
+ }
intsize = be32_to_cpu(*tmp);
pr_debug(" intsize=%d intlen=%d\n", intsize, intlen);
/* Check index */
- if ((index + 1) * intsize > intlen)
+ if ((index + 1) * intsize > intlen) {
+ res = -EINVAL;
goto out;
+ }
/* Copy intspec into irq structure */
intspec += index * intsize;
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/err.h>
+#include <linux/idr.h>
#include "of_private.h"
struct device_node *target, struct device_node *child)
{
const char *cname;
- struct device_node *tchild, *grandchild;
+ struct device_node *tchild;
int ret = 0;
cname = kbasename(child->full_name);
"option path test failed\n");
of_node_put(np);
+ np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+ selftest(np && !strcmp("test/option", options),
+ "option path test, subcase #1 failed\n");
+ of_node_put(np);
+
+ np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
+ selftest(np && !strcmp("test/option", options),
+ "option path test, subcase #2 failed\n");
+ of_node_put(np);
+
np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
selftest(np, "NULL option path test failed\n");
of_node_put(np);
"option alias path test failed\n");
of_node_put(np);
+ np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+ &options);
+ selftest(np && !strcmp("test/alias/option", options),
+ "option alias path test, subcase #1 failed\n");
+ of_node_put(np);
+
np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
selftest(np, "NULL option alias path test failed\n");
of_node_put(np);
rc = of_property_match_string(np, "phandle-list-names", "first");
selftest(rc == 0, "first expected:0 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "second");
- selftest(rc == 1, "second expected:0 got:%i\n", rc);
+ selftest(rc == 1, "second expected:1 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "third");
- selftest(rc == 2, "third expected:0 got:%i\n", rc);
+ selftest(rc == 2, "third expected:2 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "fourth");
selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
rc = of_property_match_string(np, "missing-property", "blah");
struct device_node *n1, *n2, *n21, *nremove, *parent, *np;
struct of_changeset chgset;
- of_changeset_init(&chgset);
n1 = __of_node_dup(NULL, "/testcase-data/changeset/n1");
selftest(n1, "testcase setup failure\n");
n2 = __of_node_dup(NULL, "/testcase-data/changeset/n2");
return pdev != NULL;
}
-#if IS_ENABLED(CONFIG_I2C)
+#if IS_BUILTIN(CONFIG_I2C)
/* get the i2c client device instantiated at the path */
static struct i2c_client *of_path_to_i2c_client(const char *path)
return;
}
-#if IS_ENABLED(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
+#if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
struct selftest_i2c_bus_data {
struct platform_device *pdev;
.id_table = selftest_i2c_dev_id,
};
-#if IS_ENABLED(CONFIG_I2C_MUX)
+#if IS_BUILTIN(CONFIG_I2C_MUX)
struct selftest_i2c_mux_data {
int nchans;
"could not register selftest i2c bus driver\n"))
return ret;
-#if IS_ENABLED(CONFIG_I2C_MUX)
+#if IS_BUILTIN(CONFIG_I2C_MUX)
ret = i2c_add_driver(&selftest_i2c_mux_driver);
if (selftest(ret == 0,
"could not register selftest i2c mux driver\n"))
static void of_selftest_overlay_i2c_cleanup(void)
{
-#if IS_ENABLED(CONFIG_I2C_MUX)
+#if IS_BUILTIN(CONFIG_I2C_MUX)
i2c_del_driver(&selftest_i2c_mux_driver);
#endif
platform_driver_unregister(&selftest_i2c_bus_driver);
of_selftest_overlay_10();
of_selftest_overlay_11();
-#if IS_ENABLED(CONFIG_I2C)
+#if IS_BUILTIN(CONFIG_I2C)
if (selftest(of_selftest_overlay_i2c_init() == 0, "i2c init failed\n"))
goto out;
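
The IS_ENABLED() to IS_BUILTIN() conversions above come down to kconfig macro semantics; a minimal sketch, assuming CONFIG_I2C=m:

    #include <linux/kconfig.h>

    #if IS_ENABLED(CONFIG_I2C)
    /* compiled when the i2c core is built in *or* modular (=y or =m) */
    #endif

    #if IS_BUILTIN(CONFIG_I2C)
    /* compiled only when the i2c core is built into the image (=y) */
    #endif

The OF selftests are always built in, so code guarded by IS_ENABLED() would reference i2c symbols that may live in a not-yet-loaded module; IS_BUILTIN() keeps such code out of that configuration entirely.
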
return false;
}
-static int xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
+static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int offset)
{
struct xgene_pcie_port *port = bus->sysdata;
return NULL;
xgene_pcie_set_rtdid_reg(bus, devfn);
- return xgene_pcie_get_cfg_base(bus);
+ return xgene_pcie_get_cfg_base(bus) + offset;
}
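
With map_bus() returning the fully offset config-space address, the driver can defer to the generic config accessors. A hedged sketch of the expected pci_ops wiring (whether this driver uses the 32-bit variants, pci_generic_config_read32/write32, is an assumption here):

    static struct pci_ops xgene_pcie_ops_sketch = {
            .map_bus = xgene_pcie_map_bus,
            .read = pci_generic_config_read32,   /* assumed accessor */
            .write = pci_generic_config_write32, /* assumed accessor */
    };
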
static struct pci_ops xgene_pcie_ops = {
struct pci_dev *pdev = to_pci_dev(dev);
char *driver_override, *old = pdev->driver_override, *cp;
- if (count > PATH_MAX)
+ /* We need to keep extra room for a newline */
+ if (count >= (PAGE_SIZE - 1))
return -EINVAL;
driver_override = kstrndup(buf, count, GFP_KERNEL);
{
struct pci_dev *pdev = to_pci_dev(dev);
- return sprintf(buf, "%s\n", pdev->driver_override);
+ return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override);
}
static DEVICE_ATTR_RW(driver_override);
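
Both driver_override hunks follow the general sysfs text-attribute size rule: the show() buffer is exactly one page and show() appends a newline, so store() must reject anything it could not echo back. A generic sketch with a hypothetical attribute (the foo_* names are illustrative only):

    #include <linux/device.h>
    #include <linux/string.h>

    static char foo_buf[PAGE_SIZE];

    static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
                             const char *buf, size_t count)
    {
            if (count >= PAGE_SIZE - 1)     /* leave room for '\n' + NUL */
                    return -EINVAL;
            memcpy(foo_buf, buf, count);
            foo_buf[count] = '\0';
            return count;
    }

    static ssize_t foo_show(struct device *dev, struct device_attribute *attr,
                            char *buf)
    {
            return snprintf(buf, PAGE_SIZE, "%s\n", foo_buf);
    }
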
tristate "CardBus yenta-compatible bridge support"
depends on PCI
select CARDBUS if !EXPERT
- select PCCARD_NONSTATIC if PCMCIA != n && ISA
- select PCCARD_PCI if PCMCIA !=n && !ISA
+ select PCCARD_NONSTATIC if PCMCIA != n
---help---
This option enables support for CardBus host bridges. Virtually
all modern PCMCIA bridges are CardBus compatible. A "bridge" is
config PD6729
tristate "Cirrus PD6729 compatible bridge support"
depends on PCMCIA && PCI
- select PCCARD_NONSTATIC if PCMCIA != n && ISA
- select PCCARD_PCI if PCMCIA !=n && !ISA
+ select PCCARD_NONSTATIC
help
This provides support for the Cirrus PD6729 PCI-to-PCMCIA bridge
device, found in some older laptops and PCMCIA card readers.
config I82092
tristate "i82092 compatible bridge support"
depends on PCMCIA && PCI
- select PCCARD_NONSTATIC if PCMCIA != n && ISA
- select PCCARD_PCI if PCMCIA !=n && !ISA
+ select PCCARD_NONSTATIC
help
This provides support for the Intel I82092AA PCI-to-PCMCIA bridge device,
found in some older laptops and more commonly in evaluation boards for the
Say Y here to support the CompactFlash controller on the
PA Semi Electra eval board.
-config PCCARD_PCI
- bool
-
config PCCARD_NONSTATIC
bool
pcmcia_rsrc-y += rsrc_mgr.o
pcmcia_rsrc-$(CONFIG_PCCARD_NONSTATIC) += rsrc_nonstatic.o
pcmcia_rsrc-$(CONFIG_PCCARD_IODYN) += rsrc_iodyn.o
-pcmcia_rsrc-$(CONFIG_PCCARD_PCI) += rsrc_pci.o
obj-$(CONFIG_PCCARD) += pcmcia_rsrc.o
+++ /dev/null
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-
-#include <pcmcia/ss.h>
-#include <pcmcia/cistpl.h>
-#include "cs_internal.h"
-
-
-struct pcmcia_align_data {
- unsigned long mask;
- unsigned long offset;
-};
-
-static resource_size_t pcmcia_align(void *align_data,
- const struct resource *res,
- resource_size_t size, resource_size_t align)
-{
- struct pcmcia_align_data *data = align_data;
- resource_size_t start;
-
- start = (res->start & ~data->mask) + data->offset;
- if (start < res->start)
- start += data->mask + 1;
- return start;
-}
-
-static struct resource *find_io_region(struct pcmcia_socket *s,
- unsigned long base, int num,
- unsigned long align)
-{
- struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_IO,
- dev_name(&s->dev));
- struct pcmcia_align_data data;
- int ret;
-
- data.mask = align - 1;
- data.offset = base & data.mask;
-
- ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num, 1,
- base, 0, pcmcia_align, &data);
- if (ret != 0) {
- kfree(res);
- res = NULL;
- }
- return res;
-}
-
-static int res_pci_find_io(struct pcmcia_socket *s, unsigned int attr,
- unsigned int *base, unsigned int num,
- unsigned int align, struct resource **parent)
-{
- int i, ret = 0;
-
- /* Check for an already-allocated window that must conflict with
- * what was asked for. It is a hack because it does not catch all
- * potential conflicts, just the most obvious ones.
- */
- for (i = 0; i < MAX_IO_WIN; i++) {
- if (!s->io[i].res)
- continue;
-
- if (!*base)
- continue;
-
- if ((s->io[i].res->start & (align-1)) == *base)
- return -EBUSY;
- }
-
- for (i = 0; i < MAX_IO_WIN; i++) {
- struct resource *res = s->io[i].res;
- unsigned int try;
-
- if (res && (res->flags & IORESOURCE_BITS) !=
- (attr & IORESOURCE_BITS))
- continue;
-
- if (!res) {
- if (align == 0)
- align = 0x10000;
-
- res = s->io[i].res = find_io_region(s, *base, num,
- align);
- if (!res)
- return -EINVAL;
-
- *base = res->start;
- s->io[i].res->flags =
- ((res->flags & ~IORESOURCE_BITS) |
- (attr & IORESOURCE_BITS));
- s->io[i].InUse = num;
- *parent = res;
- return 0;
- }
-
- /* Try to extend top of window */
- try = res->end + 1;
- if ((*base == 0) || (*base == try)) {
- ret = adjust_resource(s->io[i].res, res->start,
- resource_size(res) + num);
- if (ret)
- continue;
- *base = try;
- s->io[i].InUse += num;
- *parent = res;
- return 0;
- }
-
- /* Try to extend bottom of window */
- try = res->start - num;
- if ((*base == 0) || (*base == try)) {
- ret = adjust_resource(s->io[i].res,
- res->start - num,
- resource_size(res) + num);
- if (ret)
- continue;
- *base = try;
- s->io[i].InUse += num;
- *parent = res;
- return 0;
- }
- }
- return -EINVAL;
-}
-
-static struct resource *res_pci_find_mem(u_long base, u_long num,
- u_long align, int low, struct pcmcia_socket *s)
-{
- struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_MEM,
- dev_name(&s->dev));
- struct pcmcia_align_data data;
- unsigned long min;
- int ret;
-
- if (align < 0x20000)
- align = 0x20000;
- data.mask = align - 1;
- data.offset = base & data.mask;
-
- min = 0;
- if (!low)
- min = 0x100000UL;
-
- ret = pci_bus_alloc_resource(s->cb_dev->bus,
- res, num, 1, min, 0,
- pcmcia_align, &data);
-
- if (ret != 0) {
- kfree(res);
- res = NULL;
- }
- return res;
-}
-
-
-static int res_pci_init(struct pcmcia_socket *s)
-{
- if (!s->cb_dev || !(s->features & SS_CAP_PAGE_REGS)) {
- dev_err(&s->dev, "not supported by res_pci\n");
- return -EOPNOTSUPP;
- }
- return 0;
-}
-
-struct pccard_resource_ops pccard_nonstatic_ops = {
- .validate_mem = NULL,
- .find_io = res_pci_find_io,
- .find_mem = res_pci_find_mem,
- .init = res_pci_init,
- .exit = NULL,
-};
-EXPORT_SYMBOL(pccard_nonstatic_ops);
struct armada375_cluster_phy *cluster_phy;
u32 reg;
- cluster_phy = dev_get_drvdata(phy->dev.parent);
+ cluster_phy = phy_get_drvdata(phy);
if (!cluster_phy)
return -ENODEV;
cluster_phy->reg = usb_cluster_base;
dev_set_drvdata(dev, cluster_phy);
+ phy_set_drvdata(phy, cluster_phy);
phy_provider = devm_of_phy_provider_register(&pdev->dev,
armada375_usb_phy_xlate);
static int devm_phy_match(struct device *dev, void *res, void *match_data)
{
- return res == match_data;
+ struct phy **phy = res;
+
+ return *phy == match_data;
}
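
The dereference added above matches how the devres payload is laid out: the devm_phy_*() helpers store a struct phy * inside the devres area, so the match callback receives a pointer to that pointer. A sketch of the allocation side, following the pattern in drivers/phy/phy-core.c (abridged; devm_phy_release is that file's devres release callback):

    static struct phy *devm_phy_track_sketch(struct device *dev, struct phy *phy)
    {
            struct phy **ptr;

            ptr = devres_alloc(devm_phy_release, sizeof(*ptr), GFP_KERNEL);
            if (!ptr)
                    return ERR_PTR(-ENOMEM);
            *ptr = phy;             /* the payload is the pointer itself */
            devres_add(dev, ptr);   /* so match must compare *(struct phy **)res */
            return phy;
    }
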
/**
ret = phy_pm_runtime_get_sync(phy);
if (ret < 0 && ret != -ENOTSUPP)
return ret;
+ ret = 0; /* Override possible ret == -ENOTSUPP */
mutex_lock(&phy->mutex);
if (phy->init_count == 0 && phy->ops->init) {
dev_err(&phy->dev, "phy init failed --> %d\n", ret);
goto out;
}
- } else {
- ret = 0; /* Override possible ret == -ENOTSUPP */
}
++phy->init_count;
ret = phy_pm_runtime_get_sync(phy);
if (ret < 0 && ret != -ENOTSUPP)
return ret;
+ ret = 0; /* Override possible ret == -ENOTSUPP */
mutex_lock(&phy->mutex);
if (phy->init_count == 1 && phy->ops->exit) {
ret = phy_pm_runtime_get_sync(phy);
if (ret < 0 && ret != -ENOTSUPP)
return ret;
+ ret = 0; /* Override possible ret == -ENOTSUPP */
mutex_lock(&phy->mutex);
if (phy->power_count == 0 && phy->ops->power_on) {
dev_err(&phy->dev, "phy poweron failed --> %d\n", ret);
goto out;
}
- } else {
- ret = 0; /* Override possible ret == -ENOTSUPP */
}
++phy->power_count;
mutex_unlock(&phy->mutex);
const struct exynos_dp_video_phy_drvdata *drvdata;
};
-static void exynos_dp_video_phy_pwr_isol(struct exynos_dp_video_phy *state,
- unsigned int on)
-{
- unsigned int val;
-
- if (IS_ERR(state->regs))
- return;
-
- val = on ? 0 : EXYNOS5_PHY_ENABLE;
-
- regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset,
- EXYNOS5_PHY_ENABLE, val);
-}
-
static int exynos_dp_video_phy_power_on(struct phy *phy)
{
struct exynos_dp_video_phy *state = phy_get_drvdata(phy);
/* Disable power isolation on DP-PHY */
- exynos_dp_video_phy_pwr_isol(state, 0);
-
- return 0;
+ return regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset,
+ EXYNOS5_PHY_ENABLE, EXYNOS5_PHY_ENABLE);
}
static int exynos_dp_video_phy_power_off(struct phy *phy)
struct exynos_dp_video_phy *state = phy_get_drvdata(phy);
/* Enable power isolation on DP-PHY */
- exynos_dp_video_phy_pwr_isol(state, 1);
-
- return 0;
+ return regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset,
+ EXYNOS5_PHY_ENABLE, 0);
}
static struct phy_ops exynos_dp_video_phy_ops = {
} phys[EXYNOS_MIPI_PHYS_NUM];
spinlock_t slock;
void __iomem *regs;
- struct mutex mutex;
struct regmap *regmap;
};
else
reset = EXYNOS4_MIPI_PHY_SRESETN;
- if (state->regmap) {
- mutex_lock(&state->mutex);
+ spin_lock(&state->slock);
+
+ if (!IS_ERR(state->regmap)) {
regmap_read(state->regmap, offset, &val);
if (on)
val |= reset;
else if (!(val & EXYNOS4_MIPI_PHY_RESET_MASK))
val &= ~EXYNOS4_MIPI_PHY_ENABLE;
regmap_write(state->regmap, offset, val);
- mutex_unlock(&state->mutex);
} else {
addr = state->regs + EXYNOS_MIPI_PHY_CONTROL(id / 2);
- spin_lock(&state->slock);
val = readl(addr);
if (on)
val |= reset;
val &= ~EXYNOS4_MIPI_PHY_ENABLE;
writel(val, addr);
- spin_unlock(&state->slock);
}
+ spin_unlock(&state->slock);
return 0;
}
dev_set_drvdata(dev, state);
spin_lock_init(&state->slock);
- mutex_init(&state->mutex);
for (i = 0; i < EXYNOS_MIPI_PHYS_NUM; i++) {
struct phy *phy = devm_phy_create(dev, NULL,
.power_on = exynos4210_power_on,
.power_off = exynos4210_power_off,
},
- {},
};
const struct samsung_usb2_phy_config exynos4210_usb2_phy_config = {
.power_on = exynos4x12_power_on,
.power_off = exynos4x12_power_off,
},
- {},
};
const struct samsung_usb2_phy_config exynos3250_usb2_phy_config = {
{
struct exynos5_usbdrd_phy *phy_drd = dev_get_drvdata(dev);
- if (WARN_ON(args->args[0] > EXYNOS5_DRDPHYS_NUM))
+ if (WARN_ON(args->args[0] >= EXYNOS5_DRDPHYS_NUM))
return ERR_PTR(-ENODEV);
return phy_drd->phys[args->args[0]].phy;
.power_on = exynos5250_power_on,
.power_off = exynos5250_power_off,
},
- {},
};
const struct samsung_usb2_phy_config exynos5250_usb2_phy_config = {
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
priv->base = devm_ioremap(dev, res->start, resource_size(res));
if (!priv->base)
return -ENOMEM;
struct regmap *regmap;
struct mutex miphy_mutex;
struct miphy28lp_phy **phys;
+ int nphys;
};
struct miphy_initval {
return ERR_PTR(-EINVAL);
}
- for (index = 0; index < of_get_child_count(dev->of_node); index++)
+ for (index = 0; index < miphy_dev->nphys; index++)
if (phynode == miphy_dev->phys[index]->phy->dev.of_node) {
miphy_phy = miphy_dev->phys[index];
break;
static struct phy_ops miphy28lp_ops = {
.init = miphy28lp_init,
+ .owner = THIS_MODULE,
};
static int miphy28lp_probe_resets(struct device_node *node,
struct miphy28lp_dev *miphy_dev;
struct phy_provider *provider;
struct phy *phy;
- int chancount, port = 0;
- int ret;
+ int ret, port = 0;
miphy_dev = devm_kzalloc(&pdev->dev, sizeof(*miphy_dev), GFP_KERNEL);
if (!miphy_dev)
return -ENOMEM;
- chancount = of_get_child_count(np);
- miphy_dev->phys = devm_kzalloc(&pdev->dev, sizeof(phy) * chancount,
- GFP_KERNEL);
+ miphy_dev->nphys = of_get_child_count(np);
+ miphy_dev->phys = devm_kcalloc(&pdev->dev, miphy_dev->nphys,
+ sizeof(*miphy_dev->phys), GFP_KERNEL);
if (!miphy_dev->phys)
return -ENOMEM;
struct regmap *regmap;
struct mutex miphy_mutex;
struct miphy365x_phy **phys;
+ int nphys;
};
/*
return ERR_PTR(-EINVAL);
}
- for (index = 0; index < of_get_child_count(dev->of_node); index++)
+ for (index = 0; index < miphy_dev->nphys; index++)
if (phynode == miphy_dev->phys[index]->phy->dev.of_node) {
miphy_phy = miphy_dev->phys[index];
break;
struct miphy365x_dev *miphy_dev;
struct phy_provider *provider;
struct phy *phy;
- int chancount, port = 0;
- int ret;
+ int ret, port = 0;
miphy_dev = devm_kzalloc(&pdev->dev, sizeof(*miphy_dev), GFP_KERNEL);
if (!miphy_dev)
return -ENOMEM;
- chancount = of_get_child_count(np);
- miphy_dev->phys = devm_kzalloc(&pdev->dev, sizeof(phy) * chancount,
- GFP_KERNEL);
+ miphy_dev->nphys = of_get_child_count(np);
+ miphy_dev->phys = devm_kcalloc(&pdev->dev, miphy_dev->nphys,
+ sizeof(*miphy_dev->phys), GFP_KERNEL);
if (!miphy_dev->phys)
return -ENOMEM;
}
module_exit(omap_control_phy_exit);
-MODULE_ALIAS("platform: omap_control_phy");
+MODULE_ALIAS("platform:omap_control_phy");
MODULE_AUTHOR("Texas Instruments Inc.");
MODULE_DESCRIPTION("OMAP Control Module PHY Driver");
MODULE_LICENSE("GPL v2");
dev_warn(&pdev->dev,
"found usb_otg_ss_refclk960m, please fix DTS\n");
}
- } else {
- clk_prepare(phy->optclk);
}
+ if (!IS_ERR(phy->optclk))
+ clk_prepare(phy->optclk);
+
usb_add_phy_dev(&phy->phy);
return 0;
module_platform_driver(omap_usb2_driver);
-MODULE_ALIAS("platform: omap_usb2");
+MODULE_ALIAS("platform:omap_usb2");
MODULE_AUTHOR("Texas Instruments Inc.");
MODULE_DESCRIPTION("OMAP USB2 phy driver");
MODULE_LICENSE("GPL v2");
return ret;
clk_disable_unprepare(phy->clk);
- if (ret)
- return ret;
return 0;
}
/* Power up usb phy analog blocks by set siddq 0 */
ret = rockchip_usb_phy_power(phy, 0);
- if (ret)
+ if (ret) {
+ clk_disable_unprepare(phy->clk);
return ret;
+ }
return 0;
}
cpu_relax();
val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS);
if (val & PLL_LOCK)
- break;
+ return 0;
} while (!time_after(jiffies, timeout));
- if (!(val & PLL_LOCK)) {
- dev_err(phy->dev, "DPLL failed to lock\n");
- return -EBUSY;
- }
-
- return 0;
+ dev_err(phy->dev, "DPLL failed to lock\n");
+ return -EBUSY;
}
static int ti_pipe3_dpll_program(struct ti_pipe3 *phy)
module_platform_driver(ti_pipe3_driver);
-MODULE_ALIAS("platform: ti_pipe3");
+MODULE_ALIAS("platform:ti_pipe3");
MODULE_AUTHOR("Texas Instruments Inc.");
MODULE_DESCRIPTION("TI PIPE3 phy driver");
MODULE_LICENSE("GPL v2");
twl->dev = &pdev->dev;
twl->irq = platform_get_irq(pdev, 0);
twl->vbus_supplied = false;
- twl->linkstat = -EINVAL;
twl->linkstat = OMAP_MUSB_UNKNOWN;
twl->phy.dev = twl->dev;
for (i = 0; i < MAX_LANE; i++)
ctx->sata_param.speed[i] = 2; /* Default to Gen3 */
- ctx->dev = &pdev->dev;
platform_set_drvdata(pdev, ctx);
ctx->phy = devm_phy_create(ctx->dev, NULL, &xgene_phy_ops);
#define BYT_DIR_MASK (BIT(1) | BIT(2))
#define BYT_TRIG_MASK (BIT(26) | BIT(25) | BIT(24))
+#define BYT_CONF0_RESTORE_MASK (BYT_DIRECT_IRQ_EN | BYT_TRIG_MASK | \
+ BYT_PIN_MUX)
+#define BYT_VAL_RESTORE_MASK (BYT_DIR_MASK | BYT_LEVEL)
+
#define BYT_NGPIO_SCORE 102
#define BYT_NGPIO_NCORE 28
#define BYT_NGPIO_SUS 44
},
};
+struct byt_gpio_pin_context {
+ u32 conf0;
+ u32 val;
+};
+
struct byt_gpio {
struct gpio_chip chip;
struct platform_device *pdev;
spinlock_t lock;
void __iomem *reg_base;
struct pinctrl_gpio_range *range;
+ struct byt_gpio_pin_context *saved_context;
};
#define to_byt_gpio(c) container_of(c, struct byt_gpio, chip)
return vg->reg_base + reg_offset + reg;
}
-static bool is_special_pin(struct byt_gpio *vg, unsigned offset)
+static void byt_gpio_clear_triggering(struct byt_gpio *vg, unsigned offset)
+{
+ void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG);
+ unsigned long flags;
+ u32 value;
+
+ spin_lock_irqsave(&vg->lock, flags);
+ value = readl(reg);
+ value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL);
+ writel(value, reg);
+ spin_unlock_irqrestore(&vg->lock, flags);
+}
+
+static u32 byt_get_gpio_mux(struct byt_gpio *vg, unsigned offset)
{
/* SCORE pin 92-93 */
if (!strcmp(vg->range->name, BYT_SCORE_ACPI_UID) &&
offset >= 92 && offset <= 93)
- return true;
+ return 1;
/* SUS pin 11-21 */
if (!strcmp(vg->range->name, BYT_SUS_ACPI_UID) &&
offset >= 11 && offset <= 21)
- return true;
+ return 1;
- return false;
+ return 0;
}
static int byt_gpio_request(struct gpio_chip *chip, unsigned offset)
{
struct byt_gpio *vg = to_byt_gpio(chip);
void __iomem *reg = byt_gpio_reg(chip, offset, BYT_CONF0_REG);
- u32 value;
- bool special;
+ u32 value, gpio_mux;
/*
* In most cases, func pin mux 000 means GPIO function.
* But, some pins may have func pin mux 001 representing
- * GPIO function. Only allow user to export pin with
- * func pin mux preset as GPIO function by BIOS/FW.
+ * GPIO function.
+ *
+ * Because there are devices out there where some pins were not
+ * configured correctly, we allow changing the mux value from
+ * the request path (but print a warning about it).
*/
value = readl(reg) & BYT_PIN_MUX;
- special = is_special_pin(vg, offset);
- if ((special && value != 1) || (!special && value)) {
- dev_err(&vg->pdev->dev,
- "pin %u cannot be used as GPIO.\n", offset);
- return -EINVAL;
+ gpio_mux = byt_get_gpio_mux(vg, offset);
+ if (WARN_ON(gpio_mux != value)) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&vg->lock, flags);
+ value = readl(reg) & ~BYT_PIN_MUX;
+ value |= gpio_mux;
+ writel(value, reg);
+ spin_unlock_irqrestore(&vg->lock, flags);
+
+ dev_warn(&vg->pdev->dev,
+ "pin %u forcibly re-configured as GPIO\n", offset);
}
pm_runtime_get(&vg->pdev->dev);
static void byt_gpio_free(struct gpio_chip *chip, unsigned offset)
{
struct byt_gpio *vg = to_byt_gpio(chip);
- void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG);
- u32 value;
-
- /* clear interrupt triggering */
- value = readl(reg);
- value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL);
- writel(value, reg);
+ byt_gpio_clear_triggering(vg, offset);
pm_runtime_put(&vg->pdev->dev);
}
value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG |
BYT_TRIG_LVL);
- switch (type) {
- case IRQ_TYPE_LEVEL_HIGH:
- value |= BYT_TRIG_LVL;
- case IRQ_TYPE_EDGE_RISING:
- value |= BYT_TRIG_POS;
- break;
- case IRQ_TYPE_LEVEL_LOW:
- value |= BYT_TRIG_LVL;
- case IRQ_TYPE_EDGE_FALLING:
- value |= BYT_TRIG_NEG;
- break;
- case IRQ_TYPE_EDGE_BOTH:
- value |= (BYT_TRIG_NEG | BYT_TRIG_POS);
- break;
- }
writel(value, reg);
+ if (type & IRQ_TYPE_EDGE_BOTH)
+ __irq_set_handler_locked(d->irq, handle_edge_irq);
+ else if (type & IRQ_TYPE_LEVEL_MASK)
+ __irq_set_handler_locked(d->irq, handle_level_irq);
+
spin_unlock_irqrestore(&vg->lock, flags);
return 0;
struct irq_data *data = irq_desc_get_irq_data(desc);
struct byt_gpio *vg = to_byt_gpio(irq_desc_get_handler_data(desc));
struct irq_chip *chip = irq_data_get_irq_chip(data);
- u32 base, pin, mask;
+ u32 base, pin;
void __iomem *reg;
- u32 pending;
+ unsigned long pending;
unsigned virq;
- int looplimit = 0;
/* check from GPIO controller which pin triggered the interrupt */
for (base = 0; base < vg->chip.ngpio; base += 32) {
-
reg = byt_gpio_reg(&vg->chip, base, BYT_INT_STAT_REG);
-
- while ((pending = readl(reg))) {
- pin = __ffs(pending);
- mask = BIT(pin);
- /* Clear before handling so we can't lose an edge */
- writel(mask, reg);
-
+ pending = readl(reg);
+ for_each_set_bit(pin, &pending, 32) {
virq = irq_find_mapping(vg->chip.irqdomain, base + pin);
generic_handle_irq(virq);
-
- /* In case bios or user sets triggering incorretly a pin
- * might remain in "interrupt triggered" state.
- */
- if (looplimit++ > 32) {
- dev_err(&vg->pdev->dev,
- "Gpio %d interrupt flood, disabling\n",
- base + pin);
-
- reg = byt_gpio_reg(&vg->chip, base + pin,
- BYT_CONF0_REG);
- mask = readl(reg);
- mask &= ~(BYT_TRIG_NEG | BYT_TRIG_POS |
- BYT_TRIG_LVL);
- writel(mask, reg);
- mask = readl(reg); /* flush */
- break;
- }
}
}
chip->irq_eoi(data);
}
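
for_each_set_bit() (from <linux/bitops.h>) walks the set bits of a snapshot, which is what makes the old loop-limit workaround unnecessary: "pending" is read once, so a level interrupt that keeps re-asserting cannot livelock the loop, and per-pin acking moves to the irq_ack callback below. An illustrative fragment:

    unsigned long pending = 0x112;  /* bits 1, 4 and 8 set */
    unsigned int pin;

    for_each_set_bit(pin, &pending, 32)
            pr_info("pin %u\n", pin);       /* prints 1, then 4, then 8 */
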
+static void byt_irq_ack(struct irq_data *d)
+{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct byt_gpio *vg = to_byt_gpio(gc);
+ unsigned offset = irqd_to_hwirq(d);
+ void __iomem *reg;
+
+ reg = byt_gpio_reg(&vg->chip, offset, BYT_INT_STAT_REG);
+ writel(BIT(offset % 32), reg);
+}
+
static void byt_irq_unmask(struct irq_data *d)
{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct byt_gpio *vg = to_byt_gpio(gc);
+ unsigned offset = irqd_to_hwirq(d);
+ unsigned long flags;
+ void __iomem *reg;
+ u32 value;
+
+ spin_lock_irqsave(&vg->lock, flags);
+
+ reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG);
+ value = readl(reg);
+
+ switch (irqd_get_trigger_type(d)) {
+ case IRQ_TYPE_LEVEL_HIGH:
+ value |= BYT_TRIG_LVL;
+ case IRQ_TYPE_EDGE_RISING:
+ value |= BYT_TRIG_POS;
+ break;
+ case IRQ_TYPE_LEVEL_LOW:
+ value |= BYT_TRIG_LVL;
+ case IRQ_TYPE_EDGE_FALLING:
+ value |= BYT_TRIG_NEG;
+ break;
+ case IRQ_TYPE_EDGE_BOTH:
+ value |= (BYT_TRIG_NEG | BYT_TRIG_POS);
+ break;
+ }
+
+ writel(value, reg);
+
+ spin_unlock_irqrestore(&vg->lock, flags);
}
static void byt_irq_mask(struct irq_data *d)
{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct byt_gpio *vg = to_byt_gpio(gc);
+
+ byt_gpio_clear_triggering(vg, irqd_to_hwirq(d));
}
static struct irq_chip byt_irqchip = {
.name = "BYT-GPIO",
+ .irq_ack = byt_irq_ack,
.irq_mask = byt_irq_mask,
.irq_unmask = byt_irq_unmask,
.irq_set_type = byt_irq_type,
{
void __iomem *reg;
u32 base, value;
+ int i;
+
+ /*
+ * Clear interrupt triggers for all pins that are GPIOs and
+ * do not use direct IRQ mode. This will prevent spurious
+ * interrupts from misconfigured pins.
+ */
+ for (i = 0; i < vg->chip.ngpio; i++) {
+ value = readl(byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG));
+ if ((value & BYT_PIN_MUX) == byt_get_gpio_mux(vg, i) &&
+ !(value & BYT_DIRECT_IRQ_EN)) {
+ byt_gpio_clear_triggering(vg, i);
+ dev_dbg(&vg->pdev->dev, "disabling GPIO %d\n", i);
+ }
+ }
/* clear interrupt status trigger registers */
for (base = 0; base < vg->chip.ngpio; base += 32) {
gc->can_sleep = false;
gc->dev = dev;
+#ifdef CONFIG_PM_SLEEP
+ vg->saved_context = devm_kcalloc(&pdev->dev, gc->ngpio,
+ sizeof(*vg->saved_context), GFP_KERNEL);
+#endif
+
ret = gpiochip_add(gc);
if (ret) {
dev_err(&pdev->dev, "failed adding byt-gpio chip\n");
return 0;
}
+#ifdef CONFIG_PM_SLEEP
+static int byt_gpio_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct byt_gpio *vg = platform_get_drvdata(pdev);
+ int i;
+
+ for (i = 0; i < vg->chip.ngpio; i++) {
+ void __iomem *reg;
+ u32 value;
+
+ reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG);
+ value = readl(reg) & BYT_CONF0_RESTORE_MASK;
+ vg->saved_context[i].conf0 = value;
+
+ reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG);
+ value = readl(reg) & BYT_VAL_RESTORE_MASK;
+ vg->saved_context[i].val = value;
+ }
+
+ return 0;
+}
+
+static int byt_gpio_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct byt_gpio *vg = platform_get_drvdata(pdev);
+ int i;
+
+ for (i = 0; i < vg->chip.ngpio; i++) {
+ void __iomem *reg;
+ u32 value;
+
+ reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG);
+ value = readl(reg);
+ if ((value & BYT_CONF0_RESTORE_MASK) !=
+ vg->saved_context[i].conf0) {
+ value &= ~BYT_CONF0_RESTORE_MASK;
+ value |= vg->saved_context[i].conf0;
+ writel(value, reg);
+ dev_info(dev, "restored pin %d conf0 %#08x", i, value);
+ }
+
+ reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG);
+ value = readl(reg);
+ if ((value & BYT_VAL_RESTORE_MASK) !=
+ vg->saved_context[i].val) {
+ u32 v;
+
+ v = value & ~BYT_VAL_RESTORE_MASK;
+ v |= vg->saved_context[i].val;
+ if (v != value) {
+ writel(v, reg);
+ dev_dbg(dev, "restored pin %d val %#08x\n",
+ i, v);
+ }
+ }
+ }
+
+ return 0;
+}
+#endif
+
static int byt_gpio_runtime_suspend(struct device *dev)
{
return 0;
}
static const struct dev_pm_ops byt_gpio_pm_ops = {
- .runtime_suspend = byt_gpio_runtime_suspend,
- .runtime_resume = byt_gpio_runtime_resume,
+ SET_LATE_SYSTEM_SLEEP_PM_OPS(byt_gpio_suspend, byt_gpio_resume)
+ SET_RUNTIME_PM_OPS(byt_gpio_runtime_suspend, byt_gpio_runtime_resume,
+ NULL)
};
static const struct acpi_device_id byt_gpio_acpi_match[] = {
static int chv_gpio_direction_output(struct gpio_chip *chip, unsigned offset,
int value)
{
+ chv_gpio_set(chip, offset, value);
return pinctrl_gpio_direction_output(chip->base + offset);
}
/* the interrupt is already cleared before by reading ISR */
}
-static unsigned int gpio_irq_startup(struct irq_data *d)
+static int gpio_irq_request_res(struct irq_data *d)
{
struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d);
unsigned pin = d->hwirq;
int ret;
ret = gpiochip_lock_as_irq(&at91_gpio->chip, pin);
- if (ret) {
+ if (ret)
dev_err(at91_gpio->chip.dev, "unable to lock pin %lu IRQ\n",
d->hwirq);
- return ret;
- }
- gpio_irq_unmask(d);
- return 0;
+
+ return ret;
}
-static void gpio_irq_shutdown(struct irq_data *d)
+static void gpio_irq_release_res(struct irq_data *d)
{
struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d);
unsigned pin = d->hwirq;
- gpio_irq_mask(d);
gpiochip_unlock_as_irq(&at91_gpio->chip, pin);
}
static struct irq_chip gpio_irqchip = {
.name = "GPIO",
.irq_ack = gpio_irq_ack,
- .irq_startup = gpio_irq_startup,
- .irq_shutdown = gpio_irq_shutdown,
+ .irq_request_resources = gpio_irq_request_res,
+ .irq_release_resources = gpio_irq_release_res,
.irq_disable = gpio_irq_mask,
.irq_mask = gpio_irq_mask,
.irq_unmask = gpio_irq_unmask,
.pins = sun4i_a10_pins,
.npins = ARRAY_SIZE(sun4i_a10_pins),
.irq_banks = 1,
+ .irq_read_needs_mux = true,
};
static int sun4i_a10_pinctrl_probe(struct platform_device *pdev)
#include <linux/slab.h>
#include "../core.h"
+#include "../../gpio/gpiolib.h"
#include "pinctrl-sunxi.h"
static struct irq_chip sunxi_pinctrl_edge_irq_chip;
static int sunxi_pinctrl_gpio_get(struct gpio_chip *chip, unsigned offset)
{
struct sunxi_pinctrl *pctl = dev_get_drvdata(chip->dev);
-
u32 reg = sunxi_data_reg(offset);
u8 index = sunxi_data_offset(offset);
- u32 val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK;
+ u32 set_mux = pctl->desc->irq_read_needs_mux &&
+ test_bit(FLAG_USED_AS_IRQ, &chip->desc[offset].flags);
+ u32 val;
+
+ if (set_mux)
+ sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_INPUT);
+
+ val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK;
+
+ if (set_mux)
+ sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_IRQ);
return val;
}
#define IRQ_LEVEL_LOW 0x03
#define IRQ_EDGE_BOTH 0x04
+#define SUN4I_FUNC_INPUT 0
+#define SUN4I_FUNC_IRQ 6
+
struct sunxi_desc_function {
const char *name;
u8 muxval;
int npins;
unsigned pin_base;
unsigned irq_banks;
+ bool irq_read_needs_mux;
};
struct sunxi_pinctrl_function {
#define TIME_WINDOW_MAX_MSEC 40000
#define TIME_WINDOW_MIN_MSEC 250
-
+#define ENERGY_UNIT_SCALE 1000 /* scale from driver unit to powercap unit */
enum unit_type {
ARBITRARY_UNIT, /* no translation */
POWER_UNIT,
struct rapl_power_limit rpl[NR_POWER_LIMITS];
u64 attr_map; /* track capabilities */
unsigned int state;
+ unsigned int domain_energy_unit;
int package_id;
};
#define power_zone_to_rapl_domain(_zone) \
void (*set_floor_freq)(struct rapl_domain *rd, bool mode);
u64 (*compute_time_window)(struct rapl_package *rp, u64 val,
bool to_raw);
+ unsigned int dram_domain_energy_unit;
};
static struct rapl_defaults *rapl_defaults;
static int rapl_write_data_raw(struct rapl_domain *rd,
enum rapl_primitives prim,
unsigned long long value);
-static u64 rapl_unit_xlate(int package, enum unit_type type, u64 value,
+static u64 rapl_unit_xlate(struct rapl_domain *rd, int package,
+ enum unit_type type, u64 value,
int to_raw);
static void package_power_limit_irq_save(int package_id);
static int get_max_energy_counter(struct powercap_zone *pcd_dev, u64 *energy)
{
- *energy = rapl_unit_xlate(0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0);
+ struct rapl_domain *rd = power_zone_to_rapl_domain(pcd_dev);
+
+ *energy = rapl_unit_xlate(rd, 0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0);
return 0;
}
rd->msrs[4] = MSR_DRAM_POWER_INFO;
rd->rpl[0].prim_id = PL1_ENABLE;
rd->rpl[0].name = pl1_name;
+ rd->domain_energy_unit =
+ rapl_defaults->dram_domain_energy_unit;
+ if (rd->domain_energy_unit)
+ pr_info("DRAM domain energy unit %dpj\n",
+ rd->domain_energy_unit);
break;
}
if (mask) {
}
}
-static u64 rapl_unit_xlate(int package, enum unit_type type, u64 value,
+static u64 rapl_unit_xlate(struct rapl_domain *rd, int package,
+ enum unit_type type, u64 value,
int to_raw)
{
u64 units = 1;
struct rapl_package *rp;
+ u64 scale = 1;
rp = find_package_by_id(package);
if (!rp)
units = rp->power_unit;
break;
case ENERGY_UNIT:
- units = rp->energy_unit;
+ scale = ENERGY_UNIT_SCALE;
+ /* per domain unit takes precedence */
+ if (rd && rd->domain_energy_unit)
+ units = rd->domain_energy_unit;
+ else
+ units = rp->energy_unit;
break;
case TIME_UNIT:
return rapl_defaults->compute_time_window(rp, value, to_raw);
};
if (to_raw)
- return div64_u64(value, units);
+ return div64_u64(value, units) * scale;
value *= units;
- return value;
+ return div64_u64(value, scale);
}
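
A worked example of the picojoule-based bookkeeping (standalone C, not kernel code): a core package whose ENERGY_UNIT MSR field is 14 has a counter LSB of 1/2^14 J, which truncates to 61 in integer microjoules but is 61035 in picojoules; the same representation also fits the fixed 15300 pJ (15.3 uJ) DRAM unit that the surrounding hunks set for Haswell servers, and results are divided by ENERGY_UNIT_SCALE when reported:

    #include <stdio.h>

    int main(void)
    {
            unsigned int field = 14;        /* ENERGY_UNIT MSR field */
            unsigned long long old_uj = 1000000 / (1 << field);             /* 61    */
            unsigned long long new_pj = 1000ULL * 1000000 / (1 << field);   /* 61035 */
            unsigned long long raw = 16384; /* 16384 ticks is exactly 1 J */

            printf("%llu ticks = %llu uJ (old) vs %llu uJ (new)\n",
                   raw, raw * old_uj, raw * new_pj / 1000);
            return 0;
    }
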
/* in the order of enum rapl_primitives */
final = value & rp->mask;
final = final >> rp->shift;
if (xlate)
- *data = rapl_unit_xlate(rd->package_id, rp->unit, final, 0);
+ *data = rapl_unit_xlate(rd, rd->package_id, rp->unit, final, 0);
else
*data = final;
"failed to read msr 0x%x on cpu %d\n", msr, cpu);
return -EIO;
}
- value = rapl_unit_xlate(rd->package_id, rp->unit, value, 1);
+ value = rapl_unit_xlate(rd, rd->package_id, rp->unit, value, 1);
msr_val &= ~rp->mask;
msr_val |= value << rp->shift;
if (wrmsrl_safe_on_cpu(cpu, msr, msr_val)) {
* calculate units differ on different CPUs.
* We convert the units to the format below depending on the CPU.
* i.e.
- * energy unit: microJoules : Represented in microJoules by default
+ * energy unit: picoJoules : Represented in picoJoules by default
* power unit : microWatts : Represented in milliWatts by default
* time unit : microseconds: Represented in seconds by default
*/
}
value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET;
- rp->energy_unit = 1000000 / (1 << value);
+ rp->energy_unit = ENERGY_UNIT_SCALE * 1000000 / (1 << value);
value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET;
rp->power_unit = 1000000 / (1 << value);
value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET;
rp->time_unit = 1000000 / (1 << value);
- pr_debug("Core CPU package %d energy=%duJ, time=%dus, power=%duW\n",
+ pr_debug("Core CPU package %d energy=%dpJ, time=%dus, power=%duW\n",
rp->id, rp->energy_unit, rp->time_unit, rp->power_unit);
return 0;
return -ENODEV;
}
value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET;
- rp->energy_unit = 1 << value;
+ rp->energy_unit = ENERGY_UNIT_SCALE * 1 << value;
value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET;
rp->power_unit = (1 << value) * 1000;
value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET;
rp->time_unit = 1000000 / (1 << value);
- pr_debug("Atom package %d energy=%duJ, time=%dus, power=%duW\n",
+ pr_debug("Atom package %d energy=%dpJ, time=%dus, power=%duW\n",
rp->id, rp->energy_unit, rp->time_unit, rp->power_unit);
return 0;
.compute_time_window = rapl_compute_time_window_core,
};
+static const struct rapl_defaults rapl_defaults_hsw_server = {
+ .check_unit = rapl_check_unit_core,
+ .set_floor_freq = set_floor_freq_default,
+ .compute_time_window = rapl_compute_time_window_core,
+ .dram_domain_energy_unit = 15300,
+};
+
static const struct rapl_defaults rapl_defaults_atom = {
.check_unit = rapl_check_unit_atom,
.set_floor_freq = set_floor_freq_atom,
RAPL_CPU(0x3a, rapl_defaults_core),/* Ivy Bridge */
RAPL_CPU(0x3c, rapl_defaults_core),/* Haswell */
RAPL_CPU(0x3d, rapl_defaults_core),/* Broadwell */
- RAPL_CPU(0x3f, rapl_defaults_core),/* Haswell */
+ RAPL_CPU(0x3f, rapl_defaults_hsw_server),/* Haswell servers */
RAPL_CPU(0x45, rapl_defaults_core),/* Haswell ULT */
RAPL_CPU(0x4C, rapl_defaults_atom),/* Braswell */
RAPL_CPU(0x4A, rapl_defaults_atom),/* Tangier */
}
if (rdev->ena_pin) {
- ret = regulator_ena_gpio_ctrl(rdev, true);
- if (ret < 0)
- return ret;
- rdev->ena_gpio_state = 1;
+ if (!rdev->ena_gpio_state) {
+ ret = regulator_ena_gpio_ctrl(rdev, true);
+ if (ret < 0)
+ return ret;
+ rdev->ena_gpio_state = 1;
+ }
} else if (rdev->desc->ops->enable) {
ret = rdev->desc->ops->enable(rdev);
if (ret < 0)
trace_regulator_disable(rdev_get_name(rdev));
if (rdev->ena_pin) {
- ret = regulator_ena_gpio_ctrl(rdev, false);
- if (ret < 0)
- return ret;
- rdev->ena_gpio_state = 0;
+ if (rdev->ena_gpio_state) {
+ ret = regulator_ena_gpio_ctrl(rdev, false);
+ if (ret < 0)
+ return ret;
+ rdev->ena_gpio_state = 0;
+ }
} else if (rdev->desc->ops->disable) {
ret = rdev->desc->ops->disable(rdev);
if (attr == &dev_attr_requested_microamps.attr)
return rdev->desc->type == REGULATOR_CURRENT ? mode : 0;
- /* all the other attributes exist to support constraints;
- * don't show them if there are no constraints, or if the
- * relevant supporting methods are missing.
- */
- if (!rdev->constraints)
- return 0;
-
/* constraints need specific supporting methods */
if (attr == &dev_attr_min_microvolts.attr ||
attr == &dev_attr_max_microvolts.attr)
config->ena_gpio, ret);
goto wash;
}
-
- if (config->ena_gpio_flags & GPIOF_OUT_INIT_HIGH)
- rdev->ena_gpio_state = 1;
-
- if (config->ena_gpio_invert)
- rdev->ena_gpio_state = !rdev->ena_gpio_state;
}
/* set regulator constraints */
list_for_each_entry(rdev, ®ulator_list, list) {
mutex_lock(&rdev->mutex);
if (rdev->use_count > 0 || rdev->constraints->always_on) {
- error = _regulator_do_enable(rdev);
- if (error)
- ret = error;
+ if (!_regulator_is_enabled(rdev)) {
+ error = _regulator_do_enable(rdev);
+ if (error)
+ ret = error;
+ }
} else {
if (!have_full_constraints())
goto unlock;
config.regmap = chip->regmap;
config.of_node = dev->of_node;
+ /* Mask all interrupt sources to deassert interrupt line */
+ error = regmap_write(chip->regmap, DA9210_REG_MASK_A, ~0);
+ if (!error)
+ error = regmap_write(chip->regmap, DA9210_REG_MASK_B, ~0);
+ if (error) {
+ dev_err(&i2c->dev, "Failed to write to mask reg: %d\n", error);
+ return error;
+ }
+
rdev = devm_regulator_register(&i2c->dev, &da9210_reg, &config);
if (IS_ERR(rdev)) {
dev_err(&i2c->dev, "Failed to register DA9210 regulator\n");
if (!pmic)
return -ENOMEM;
+ if (of_device_is_compatible(node, "ti,tps659038-pmic"))
+ palmas_generic_regs_info[PALMAS_REG_REGEN2].ctrl_addr =
+ TPS659038_REGEN2_CTRL;
+
pmic->dev = &pdev->dev;
pmic->palmas = palmas;
palmas->pmic = pmic;
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(0),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG2",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(1),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG3",
.vsel_mask = RK808_BUCK4_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(2),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG4",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(3),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG5",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(4),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG6",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(5),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG7",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(6),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "LDO_REG8",
.vsel_mask = RK808_LDO_VSEL_MASK,
.enable_reg = RK808_LDO_EN_REG,
.enable_mask = BIT(7),
+ .enable_time = 400,
.owner = THIS_MODULE,
}, {
.name = "SWITCH_REG1",
#include <linux/module.h>
#include <linux/init.h>
#include <linux/err.h>
+#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regulator/driver.h>
#include <linux/regulator/machine.h>
void *bufs_va;
int err = 0, i;
size_t total_buf_space;
+ bool notify;
vrp = kzalloc(sizeof(*vrp), GFP_KERNEL);
if (!vrp)
}
}
+ /*
+ * Prepare to kick but don't notify yet - we can't do this before
+ * device is ready.
+ */
+ notify = virtqueue_kick_prepare(vrp->rvq);
+
+ /* From this point on, we can notify and get callbacks. */
+ virtio_device_ready(vdev);
+
/* tell the remote processor it can start sending messages */
- virtqueue_kick(vrp->rvq);
+ /*
+ * this might be concurrent with callbacks, but we are only
+ * doing notify, not a full kick here, so that's ok.
+ */
+ if (notify)
+ virtqueue_notify(vrp->rvq);
dev_info(&vdev->dev, "rpmsg host is online\n");
ret = IRQ_HANDLED;
}
- spin_lock(&suspended_lock);
+ spin_unlock(&suspended_lock);
return ret;
}
mrst->dev = NULL;
}
-#ifdef CONFIG_PM
-static int mrst_suspend(struct device *dev, pm_message_t mesg)
+#ifdef CONFIG_PM_SLEEP
+static int mrst_suspend(struct device *dev)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned char tmp;
*/
static inline int mrst_poweroff(struct device *dev)
{
- return mrst_suspend(dev, PMSG_HIBERNATE);
+ return mrst_suspend(dev);
}
static int mrst_resume(struct device *dev)
return 0;
}
+static SIMPLE_DEV_PM_OPS(mrst_pm_ops, mrst_suspend, mrst_resume);
+#define MRST_PM_OPS (&mrst_pm_ops)
+
#else
-#define mrst_suspend NULL
-#define mrst_resume NULL
+#define MRST_PM_OPS NULL
static inline int mrst_poweroff(struct device *dev)
{
.remove = vrtc_mrst_platform_remove,
.shutdown = vrtc_mrst_platform_shutdown,
.driver = {
- .name = (char *) driver_name,
- .suspend = mrst_suspend,
- .resume = mrst_resume,
+ .name = driver_name,
+ .pm = MRST_PM_OPS,
}
};
static struct s3c_rtc_data const s3c6410_rtc_data = {
.max_user_freq = 32768,
+ .needs_src_clk = true,
.irq_handler = s3c6410_rtc_irq,
.set_freq = s3c6410_rtc_setfreq,
.enable_tick = s3c6410_rtc_enable_tick,
* parse input
*/
num_of_segments = 0;
- for (i = 0; ((buf[i] != '\0') && (buf[i] != '\n') && i < count); i++) {
+ for (i = 0; (i < count && (buf[i] != '\0') && (buf[i] != '\n')); i++) {
for (j = i; (buf[j] != ':') &&
(buf[j] != '\0') &&
(buf[j] != '\n') &&
add = 0;
continue;
}
- for (pos = 0; pos <= iter->aob->request.msb_count; pos++) {
+ for (pos = 0; pos < iter->aob->request.msb_count; pos++) {
if (clusters_intersect(req, iter->request[pos]) &&
(rq_data_dir(req) == WRITE ||
rq_data_dir(iter->request[pos]) == WRITE)) {
};
static struct ata_port_info sata_port_info = {
- .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA,
+ .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA |
+ ATA_FLAG_SAS_HOST,
.pio_mask = ATA_PIO4_ONLY,
.mwdma_mask = ATA_MWDMA2,
.udma_mask = ATA_UDMA6,
};
static struct ata_port_info sata_port_info = {
- .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ,
+ .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ |
+ ATA_FLAG_SAS_HOST,
.pio_mask = ATA_PIO4,
.mwdma_mask = ATA_MWDMA2,
.udma_mask = ATA_UDMA6,
struct sas_discovery_event *ev = to_sas_discovery_event(work);
struct asd_sas_port *port = ev->port;
struct sas_ha_struct *ha = port->ha;
+ struct domain_device *ddev = port->port_dev;
/* prevent revalidation from finding sata links in recovery */
mutex_lock(&ha->disco_mutex);
SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id,
task_pid_nr(current));
- if (port->port_dev)
- res = sas_ex_revalidate_domain(port->port_dev);
+ if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE ||
+ ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE))
+ res = sas_ex_revalidate_domain(ddev);
SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n",
port->id, task_pid_nr(current), res);
/*
* Finally register the new FC Nexus with TCM
*/
- __transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess);
+ transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess);
return 0;
}
(unsigned long long)xfer->rx_dma);
}
- /* REVISIT: We're waiting for ENDRX before we start the next
+ /* REVISIT: We're waiting for RXBUFF before we start the next
* transfer because we need to handle some difficult timing
- * issues otherwise. If we wait for ENDTX in one transfer and
- * then starts waiting for ENDRX in the next, it's difficult
- * to tell the difference between the ENDRX interrupt we're
- * actually waiting for and the ENDRX interrupt of the
+ * issues otherwise. If we wait for TXBUFE in one transfer and
+ * then start waiting for RXBUFF in the next, it's difficult
+ * to tell the difference between the RXBUFF interrupt we're
+ * actually waiting for and the RXBUFF interrupt of the
* previous transfer.
*
* It should be doable, though. Just not now...
*/
- spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES));
+ spi_writel(as, IER, SPI_BIT(RXBUFF) | SPI_BIT(OVRES));
spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN));
}
{
struct dw_spi *dws = arg;
- if (test_and_clear_bit(TX_BUSY, &dws->dma_chan_busy) & BIT(RX_BUSY))
+ clear_bit(TX_BUSY, &dws->dma_chan_busy);
+ if (test_bit(RX_BUSY, &dws->dma_chan_busy))
return;
dw_spi_xfer_done(dws);
}
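
The old completion test was subtly broken: test_and_clear_bit() returns the previous value of the cleared bit as 0 or 1, not a mask, so ANDing it with BIT(other_bit) either always fails or tests the wrong flag, and the transfer could be declared done while the other DMA channel was still busy. A standalone demonstration, assuming the spi-dw.h bit numbers RX_BUSY == 0 and TX_BUSY == 1:

    #include <stdio.h>

    #define RX_BUSY 0   /* assumed bit numbers, as in spi-dw.h */
    #define TX_BUSY 1

    int main(void)
    {
            unsigned long busy = (1UL << RX_BUSY) | (1UL << TX_BUSY);

            /* old rx callback: test_and_clear_bit(RX_BUSY) & BIT(TX_BUSY);
             * the 0/1 return masked with 0x2 is always 0, so the transfer
             * finished even though TX was still running
             */
            int old_rx = (int)((busy >> RX_BUSY) & 1);
            printf("old check: %d (never waits)\n", old_rx & (1 << TX_BUSY));

            /* new logic: clear one bit, then test the other explicitly */
            busy &= ~(1UL << RX_BUSY);
            printf("new check: %lu (TX still busy, so wait)\n",
                   busy & (1UL << TX_BUSY));
            return 0;
    }
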
1,
DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!txdesc)
+ return NULL;
+
txdesc->callback = dw_spi_dma_tx_done;
txdesc->callback_param = dws;
{
struct dw_spi *dws = arg;
- if (test_and_clear_bit(RX_BUSY, &dws->dma_chan_busy) & BIT(TX_BUSY))
+ clear_bit(RX_BUSY, &dws->dma_chan_busy);
+ if (test_bit(TX_BUSY, &dws->dma_chan_busy))
return;
dw_spi_xfer_done(dws);
}
1,
DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!rxdesc)
+ return NULL;
+
rxdesc->callback = dw_spi_dma_rx_done;
rxdesc->callback_param = dws;
static struct spi_pci_desc spi_pci_mid_desc_1 = {
.setup = dw_spi_mid_init,
- .num_cs = 32,
+ .num_cs = 5,
.bus_num = 0,
};
static struct spi_pci_desc spi_pci_mid_desc_2 = {
.setup = dw_spi_mid_init,
- .num_cs = 4,
+ .num_cs = 2,
.bus_num = 1,
};
if (!dws->fifo_len) {
u32 fifo;
- for (fifo = 2; fifo <= 256; fifo++) {
+ for (fifo = 1; fifo < 256; fifo++) {
dw_writew(dws, DW_SPI_TXFLTR, fifo);
if (fifo != dw_readw(dws, DW_SPI_TXFLTR))
break;
}
dw_writew(dws, DW_SPI_TXFLTR, 0);
- dws->fifo_len = (fifo == 2) ? 0 : fifo - 1;
+ dws->fifo_len = (fifo == 1) ? 0 : fifo;
dev_dbg(dev, "Detected FIFO size: %u bytes\n", dws->fifo_len);
}
}
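
A worked example of the corrected probe loop, simulated in standalone C for a part with a 32-entry TX FIFO whose threshold register accepts 0..31: writes of 1..31 read back unchanged, the write of 32 is rejected, so the loop breaks with fifo == 32 and fifo_len becomes 32 (the old loop started at 2 and stored fifo - 1, reporting 31 for the same part). A part with no writable TXFLTR fails on the first write, leaving fifo == 1 and fifo_len = 0 as before:

    #include <stdio.h>

    #define DEPTH 32        /* simulated FIFO depth */

    static unsigned int txfltr;                     /* fake TXFLTR register */
    static void wr(unsigned int v) { if (v < DEPTH) txfltr = v; }
    static unsigned int rd(void) { return txfltr; }

    int main(void)
    {
            unsigned int fifo;

            for (fifo = 1; fifo < 256; fifo++) {
                    wr(fifo);
                    if (fifo != rd())
                            break;
            }
            printf("fifo_len = %u\n", (fifo == 1) ? 0 : fifo);  /* 32 */
            return 0;
    }
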
unsigned long flags;
int ret;
+ if (xfer->len > SPFI_TRANSACTION_TSIZE_MASK) {
+ dev_err(spfi->dev,
+ "Transfer length (%d) is greater than the max supported (%d)",
+ xfer->len, SPFI_TRANSACTION_TSIZE_MASK);
+ return -EINVAL;
+ }
+
/*
* Stop all DMA and reset the controller if the previous transaction
* timed out and never completed its DMA.
pl022->cur_msg = NULL;
pl022->cur_transfer = NULL;
pl022->cur_chip = NULL;
- spi_finalize_current_message(pl022->master);
/* disable the SPI/SSP operation */
writew((readw(SSP_CR1(pl022->virtbase)) &
(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));
+ spi_finalize_current_message(pl022->master);
}
/**
struct resource *res;
struct device *dev;
void __iomem *base;
- u32 max_freq, iomode;
+ u32 max_freq, iomode, num_cs;
int ret, irq, size;
dev = &pdev->dev;
}
/* use num-cs unless not present or out of range */
- if (of_property_read_u16(dev->of_node, "num-cs",
- &master->num_chipselect) ||
- (master->num_chipselect > SPI_NUM_CHIPSELECTS))
+ if (of_property_read_u32(dev->of_node, "num-cs", &num_cs) ||
+ num_cs > SPI_NUM_CHIPSELECTS)
master->num_chipselect = SPI_NUM_CHIPSELECTS;
+ else
+ master->num_chipselect = num_cs;
master->bus_num = pdev->id;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
#define QSPI_FLEN(n) ((n - 1) << 0)
/* STATUS REGISTER */
+#define BUSY 0x01
#define WC 0x02
/* INTERRUPT REGISTER */
ti_qspi_write(qspi, ctx_reg->clkctrl, QSPI_SPI_CLOCK_CNTRL_REG);
}
+static inline u32 qspi_is_busy(struct ti_qspi *qspi)
+{
+ u32 stat;
+ unsigned long timeout = jiffies + QSPI_COMPLETION_TIMEOUT;
+
+ stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG);
+ while ((stat & BUSY) && time_after(timeout, jiffies)) {
+ cpu_relax();
+ stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG);
+ }
+
+ WARN(stat & BUSY, "qspi busy\n");
+ return stat & BUSY;
+}
+
static int qspi_write_msg(struct ti_qspi *qspi, struct spi_transfer *t)
{
int wlen, count;
wlen = t->bits_per_word >> 3; /* in bytes */
while (count) {
+ if (qspi_is_busy(qspi))
+ return -EBUSY;
+
switch (wlen) {
case 1:
dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %02x\n",
while (count) {
dev_dbg(qspi->dev, "rx cmd %08x dc %08x\n", cmd, qspi->dc);
+ if (qspi_is_busy(qspi))
+ return -EBUSY;
+
ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG);
if (!wait_for_completion_timeout(&qspi->transfer_complete,
QSPI_COMPLETION_TIMEOUT)) {
"failed to unprepare message: %d\n", ret);
}
}
+
+ trace_spi_message_done(mesg);
+
master->cur_msg_prepared = false;
mesg->state = NULL;
if (mesg->complete)
mesg->complete(mesg->context);
-
- trace_spi_message_done(mesg);
}
EXPORT_SYMBOL_GPL(spi_finalize_current_message);
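Moving trace_spi_message_done() ahead of the completion callback matters because mesg->complete() may free or recycle the message, after which any tracepoint that dereferences mesg is a use-after-free. A small sketch of the hazard (the struct and the callback are invented for illustration, not the SPI core's types):

#include <stdio.h>
#include <stdlib.h>

struct msg {
	int id;
	void (*complete)(struct msg *m);
};

/* a completion callback that owns and frees the message */
static void complete_and_free(struct msg *m)
{
	free(m);
}

int main(void)
{
	struct msg *m = malloc(sizeof(*m));

	if (!m)
		return 1;
	m->id = 42;
	m->complete = complete_and_free;

	/* safe: trace while the message is still valid */
	printf("trace: message %d done\n", m->id);

	m->complete(m);

	/* tracing here would dereference freed memory:
	 * printf("trace: message %d done\n", m->id);
	 */
	return 0;
}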
/* zonetype initial */
pDevice->byOriginalZonetype = pDevice->abyEEPROM[EEP_OFS_ZONETYPE];
- /* Get RFType */
- pDevice->byRFType = SROMbyReadEmbedded(pDevice->PortOffset, EEP_OFS_RFTYPE);
-
- /* force change RevID for VT3253 emu */
- if ((pDevice->byRFType & RF_EMU) != 0)
- pDevice->byRevId = 0x80;
-
- pDevice->byRFType &= RF_MASK;
- pr_debug("pDevice->byRFType = %x\n", pDevice->byRFType);
-
if (!pDevice->bZoneRegExist)
pDevice->byZoneType = pDevice->abyEEPROM[EEP_OFS_ZONETYPE];
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
PSTxDesc head_td;
- u32 dma_idx = TYPE_AC0DMA;
+ u32 dma_idx;
unsigned long flags;
spin_lock_irqsave(&priv->lock, flags);
- if (!ieee80211_is_data(hdr->frame_control))
+ if (ieee80211_is_data(hdr->frame_control))
+ dma_idx = TYPE_AC0DMA;
+ else
dma_idx = TYPE_TXDMA0;
if (AVAIL_TD(priv, dma_idx) < 1) {
head_td->pTDInfo->skb = skb;
+ if (dma_idx == TYPE_AC0DMA)
+ head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
+
priv->iTDUsed[dma_idx]++;
/* Take ownership */
head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma);
- if (dma_idx == TYPE_AC0DMA) {
- head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
-
+ if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
MACvTransmitAC0(priv->PortOffset);
- } else {
+ else
MACvTransmit0(priv->PortOffset);
- }
spin_unlock_irqrestore(&priv->lock, flags);
MACvInitialize(priv->PortOffset);
MACvReadEtherAddress(priv->PortOffset, priv->abyCurrentNetAddr);
+ /* Get RFType */
+ priv->byRFType = SROMbyReadEmbedded(priv->PortOffset, EEP_OFS_RFTYPE);
+ priv->byRFType &= RF_MASK;
+
+ dev_dbg(&pcid->dev, "RF Type = %x\n", priv->byRFType);
+
device_get_options(priv);
device_set_options(priv);
/* Mask out the options cannot be set to the chip */
break;
case RATE_6M:
case RATE_9M:
+ case RATE_12M:
case RATE_18M:
byPwr = priv->abyOFDMPwrTbl[uCH];
if (priv->byRFType == RF_UW2452)
break;
case RATE_6M:
case RATE_9M:
+ case RATE_12M:
case RATE_18M:
case RATE_24M:
case RATE_36M:
pr_debug("Closing iSCSI connection CID %hu on SID:"
" %u\n", conn->cid, sess->sid);
/*
- * Always up conn_logout_comp just in case the RX Thread is sleeping
- * and the logout response never got sent because the connection
- * failed.
+ * Always up conn_logout_comp for the traditional TCP case just in case
+ * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout
+ * response never got sent because the connection failed.
+ *
+ * However for iser-target, isert_wait4logout() is using conn_logout_comp
+ * to signal logout response TX interrupt completion. Go ahead and skip
+	 * this for iser since isert_rx_opcode() does not wait on logout failure;
+	 * skipping also avoids an iscsi_conn pointer dereference in iser-target code.
*/
- complete(&conn->conn_logout_comp);
+ if (conn->conn_transport->transport_type == ISCSI_TCP)
+ complete(&conn->conn_logout_comp);
iscsi_release_thread_set(conn);
#include <target/target_core_fabric.h>
#include <target/iscsi/iscsi_target_core.h>
-#include <target/iscsi/iscsi_transport.h>
#include "iscsi_target_seq_pdu_list.h"
#include "iscsi_target_tq.h"
#include "iscsi_target_erl0.h"
if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) {
spin_unlock_bh(&conn->state_lock);
- if (conn->conn_transport->transport_type == ISCSI_TCP)
- iscsit_close_connection(conn);
+ iscsit_close_connection(conn);
return;
}
transport_free_session(tl_nexus->se_sess);
goto out;
}
- /*
- * Now, register the SAS I_T Nexus as active with the call to
- * transport_register_session()
- */
- __transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl,
+ /* Now, register the SAS I_T Nexus as active. */
+ transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl,
tl_nexus->se_sess, tl_nexus);
tl_tpg->tl_nexus = tl_nexus;
pr_debug("TCM_Loop_ConfigFS: Established I_T Nexus to emulated"
return aligned_max_sectors;
}
+bool se_dev_check_wce(struct se_device *dev)
+{
+ bool wce = false;
+
+ if (dev->transport->get_write_cache)
+ wce = dev->transport->get_write_cache(dev);
+ else if (dev->dev_attrib.emulate_write_cache > 0)
+ wce = true;
+
+ return wce;
+}
+
int se_dev_set_max_unmap_lba_count(
struct se_device *dev,
u32 max_unmap_lba_count)
pr_err("Illegal value %d\n", flag);
return -EINVAL;
}
+ if (flag &&
+ dev->transport->get_write_cache) {
+ pr_err("emulate_fua_write not supported for this device\n");
+ return -EINVAL;
+ }
+ if (dev->export_count) {
+ pr_err("emulate_fua_write cannot be changed with active"
+ " exports: %d\n", dev->export_count);
+ return -EINVAL;
+ }
dev->dev_attrib.emulate_fua_write = flag;
pr_debug("dev[%p]: SE Device Forced Unit Access WRITEs: %d\n",
dev, dev->dev_attrib.emulate_fua_write);
pr_err("emulate_write_cache not supported for this device\n");
return -EINVAL;
}
-
+ if (dev->export_count) {
+ pr_err("emulate_write_cache cannot be changed with active"
+ " exports: %d\n", dev->export_count);
+ return -EINVAL;
+ }
dev->dev_attrib.emulate_write_cache = flag;
pr_debug("dev[%p]: SE Device WRITE_CACHE_EMULATION flag: %d\n",
dev, dev->dev_attrib.emulate_write_cache);
ret = dev->transport->configure_device(dev);
if (ret)
goto out;
- dev->dev_flags |= DF_CONFIGURED;
-
/*
* XXX: there is not much point to have two different values here..
*/
list_add_tail(&dev->g_dev_node, &g_device_list);
mutex_unlock(&g_device_mutex);
+ dev->dev_flags |= DF_CONFIGURED;
+
return 0;
out_free_alua:
struct pscsi_dev_virt *pdv = PSCSI_DEV(dev);
struct scsi_device *sd = pdv->pdv_sd;
- return sd->type;
+ return (sd) ? sd->type : TYPE_NO_LUN;
}
static sector_t pscsi_get_blocks(struct se_device *dev)
}
}
if (cdb[1] & 0x8) {
- if (!dev->dev_attrib.emulate_fua_write ||
- !dev->dev_attrib.emulate_write_cache) {
+ if (!dev->dev_attrib.emulate_fua_write || !se_dev_check_wce(dev)) {
pr_err("Got CDB: 0x%02x with FUA bit set, but device"
" does not advertise support for FUA write\n",
cdb[0]);
}
EXPORT_SYMBOL(spc_emulate_evpd_83);
-static bool
-spc_check_dev_wce(struct se_device *dev)
-{
- bool wce = false;
-
- if (dev->transport->get_write_cache)
- wce = dev->transport->get_write_cache(dev);
- else if (dev->dev_attrib.emulate_write_cache > 0)
- wce = true;
-
- return wce;
-}
-
/* Extended INQUIRY Data VPD Page */
static sense_reason_t
spc_emulate_evpd_86(struct se_cmd *cmd, unsigned char *buf)
buf[5] = 0x07;
/* If WriteCache emulation is enabled, set V_SUP */
- if (spc_check_dev_wce(dev))
+ if (se_dev_check_wce(dev))
buf[6] = 0x01;
/* If an LBA map is present set R_SUP */
spin_lock(&cmd->se_dev->t10_alua.lba_map_lock);
if (pc == 1)
goto out;
- if (spc_check_dev_wce(dev))
+ if (se_dev_check_wce(dev))
p[2] = 0x04; /* Write Cache Enable */
p[12] = 0x20; /* Disabled Read Ahead */
(cmd->se_deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)))
spc_modesense_write_protect(&buf[length], type);
- if ((spc_check_dev_wce(dev)) &&
+ if ((se_dev_check_wce(dev)) &&
(dev->dev_attrib.emulate_fua_write > 0))
spc_modesense_dpofua(&buf[length], type);
list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list);
out:
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+
+ if (ret && ack_kref)
+ target_put_sess_cmd(se_sess, se_cmd);
+
return ret;
}
EXPORT_SYMBOL(target_get_sess_cmd);
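The added put balances the reference taken earlier under ack_kref: if the command cannot be added to the session list, the kref would otherwise leak. A minimal refcount sketch of that pairing (the types and helpers are invented for the demo):

#include <stdio.h>
#include <stdlib.h>

struct cmd {
	int refs;
};

static void get_cmd(struct cmd *c)
{
	c->refs++;
}

static void put_cmd(struct cmd *c)
{
	if (--c->refs == 0) {
		printf("released\n");
		free(c);
	}
}

/* take a reference for the caller, then hand the command to the session;
 * on failure, drop the reference we just took or it leaks */
static int add_to_session(struct cmd *c, int session_dead, int ack_kref)
{
	int ret = 0;

	if (ack_kref)
		get_cmd(c);
	if (session_dead)
		ret = -1;
	if (ret && ack_kref)
		put_cmd(c);
	return ret;
}

int main(void)
{
	struct cmd *c = malloc(sizeof(*c));

	if (!c)
		return 1;
	c->refs = 1;
	if (add_to_session(c, 1, 1))
		printf("refs after failed add: %d\n", c->refs);	/* 1 */
	put_cmd(c);
	return 0;
}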
ep = fc_seq_exch(seq);
if (ep) {
lport = ep->lp;
- if (lport && (ep->xid <= lport->lro_xid))
+ if (lport && (ep->xid <= lport->lro_xid)) {
/*
* "ddp_done" trigger invalidation of HW
* specific DDP context
* identified using ep->xid)
*/
cmd->was_ddp_setup = 0;
+ }
}
}
}
dw8250_force_idle(p);
writeb(value, p->membase + (UART_LCR << p->regshift));
}
- dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ /*
+ * FIXME: this deadlocks if port->lock is already held
+ * dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ */
}
}
__raw_writeq(value & 0xff,
p->membase + (UART_LCR << p->regshift));
}
- dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ /*
+ * FIXME: this deadlocks if port->lock is already held
+ * dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ */
}
}
#endif /* CONFIG_64BIT */
dw8250_force_idle(p);
writel(value, p->membase + (UART_LCR << p->regshift));
}
- dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ /*
+ * FIXME: this deadlocks if port->lock is already held
+ * dev_err(p->dev, "Couldn't set LCR to %d\n", value);
+ */
}
}
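The FIXME describes a classic self-deadlock: dev_err() may print through a serial console that takes port->lock, which the caller already holds, and a kernel spinlock hangs forever on re-acquisition. A user-space illustration of re-locking a non-recursive lock (an error-checking mutex is used so the relock is reported as EDEADLK instead of hanging):

#include <stdio.h>
#include <string.h>
#include <pthread.h>

int main(void)
{
	pthread_mutex_t lock;
	pthread_mutexattr_t attr;
	int err;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&lock, &attr);

	pthread_mutex_lock(&lock);		/* like holding port->lock */
	err = pthread_mutex_lock(&lock);	/* like dev_err() taking it again */
	printf("relock: %s\n", strerror(err));	/* EDEADLK */

	pthread_mutex_unlock(&lock);
	pthread_mutex_destroy(&lock);
	pthread_mutexattr_destroy(&attr);
	return 0;
}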
return retval;
}
+static int otg_a_alt_hnp_support(struct ci_hdrc *ci)
+{
+ dev_warn(&ci->gadget.dev,
+ "connect the device to an alternate port if you want HNP\n");
+ return isr_setup_status_phase(ci);
+}
+
/**
* isr_setup_packet_handler: setup packet handler
* @ci: UDC descriptor
ci);
}
break;
+ case USB_DEVICE_A_ALT_HNP_SUPPORT:
+ if (ci_otg_is_fsm_mode(ci))
+ err = otg_a_alt_hnp_support(ci);
+ break;
default:
goto delegate;
}
break;
case OTG_STATE_B_PERIPHERAL:
otg_chrg_vbus(fsm, 0);
- otg_loc_conn(fsm, 1);
otg_loc_sof(fsm, 0);
otg_set_protocol(fsm, PROTO_GADGET);
+ otg_loc_conn(fsm, 1);
break;
case OTG_STATE_B_WAIT_ACON:
otg_chrg_vbus(fsm, 0);
break;
case OTG_STATE_A_PERIPHERAL:
- otg_loc_conn(fsm, 1);
otg_loc_sof(fsm, 0);
otg_set_protocol(fsm, PROTO_GADGET);
otg_drv_vbus(fsm, 1);
+ otg_loc_conn(fsm, 1);
otg_add_timer(fsm, A_BIDL_ADIS);
break;
case OTG_STATE_A_WAIT_VFALL:
dwc2_is_host_mode(hsotg) ? "Host" : "Device",
dwc2_op_state_str(hsotg));
+ if (hsotg->op_state == OTG_STATE_A_HOST)
+ dwc2_hcd_disconnect(hsotg);
+
/* Change to L3 (OFF) state */
hsotg->lx_state = DWC2_L3;
bool read;
struct kiocb *kiocb;
- const struct iovec *iovec;
- unsigned long nr_segs;
- char __user *buf;
- size_t len;
+ struct iov_iter data;
+ const void *to_free;
+ char *buf;
struct mm_struct *mm;
struct work_struct work;
io_data->req->actual;
if (io_data->read && ret > 0) {
- int i;
- size_t pos = 0;
-
- /*
- * Since req->length may be bigger than io_data->len (after
- * being rounded up to maxpacketsize), we may end up with more
- * data then user space has space for.
- */
- ret = min_t(int, ret, io_data->len);
-
use_mm(io_data->mm);
- for (i = 0; i < io_data->nr_segs; i++) {
- size_t len = min_t(size_t, ret - pos,
- io_data->iovec[i].iov_len);
- if (!len)
- break;
- if (unlikely(copy_to_user(io_data->iovec[i].iov_base,
- &io_data->buf[pos], len))) {
- ret = -EFAULT;
- break;
- }
- pos += len;
- }
+ ret = copy_to_iter(io_data->buf, ret, &io_data->data);
+ if (iov_iter_count(&io_data->data))
+ ret = -EFAULT;
unuse_mm(io_data->mm);
}
io_data->kiocb->private = NULL;
if (io_data->read)
- kfree(io_data->iovec);
+ kfree(io_data->to_free);
kfree(io_data->buf);
kfree(io_data);
}
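copy_to_iter() replaces the hand-rolled per-segment copy_to_user() loop: it walks the iovec segments itself, advances the iterator, and returns the number of bytes actually copied. A toy model of those semantics (struct iter and copy_to() are simplifications invented for the demo, not the kernel's iov_iter):

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/* toy iterator over an iovec array */
struct iter {
	const struct iovec *iov;
	size_t skip;	/* offset into the current segment */
	size_t count;	/* total bytes of space left */
};

/* copy up to len bytes into the segments, return bytes actually copied */
static size_t copy_to(struct iter *it, const char *src, size_t len)
{
	size_t done = 0;

	while (done < len && it->count) {
		size_t space = it->iov->iov_len - it->skip;
		size_t n = (len - done < space) ? len - done : space;

		memcpy((char *)it->iov->iov_base + it->skip, src + done, n);
		done += n;
		it->skip += n;
		it->count -= n;
		if (it->skip == it->iov->iov_len) {
			it->iov++;
			it->skip = 0;
		}
	}
	return done;
}

int main(void)
{
	char a[4], b[4];
	struct iovec v[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
	struct iter it = { v, 0, sizeof(a) + sizeof(b) };
	size_t n = copy_to(&it, "0123456789", 10);	/* ask for more than fits */

	printf("copied %zu, space remaining %zu\n", n, it.count);	/* 8, 0 */
	return 0;
}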
* before the waiting completes, so do not assign to 'gadget' earlier
*/
struct usb_gadget *gadget = epfile->ffs->gadget;
+ size_t copied;
spin_lock_irq(&epfile->ffs->eps_lock);
/* In the meantime, endpoint got disabled or changed. */
spin_unlock_irq(&epfile->ffs->eps_lock);
return -ESHUTDOWN;
}
+ data_len = iov_iter_count(&io_data->data);
/*
* Controller may require buffer size to be aligned to
* maxpacketsize of an out endpoint.
*/
- data_len = io_data->read ?
- usb_ep_align_maybe(gadget, ep->ep, io_data->len) :
- io_data->len;
+ if (io_data->read)
+ data_len = usb_ep_align_maybe(gadget, ep->ep, data_len);
spin_unlock_irq(&epfile->ffs->eps_lock);
data = kmalloc(data_len, GFP_KERNEL);
if (unlikely(!data))
return -ENOMEM;
- if (io_data->aio && !io_data->read) {
- int i;
- size_t pos = 0;
- for (i = 0; i < io_data->nr_segs; i++) {
- if (unlikely(copy_from_user(&data[pos],
- io_data->iovec[i].iov_base,
- io_data->iovec[i].iov_len))) {
- ret = -EFAULT;
- goto error;
- }
- pos += io_data->iovec[i].iov_len;
- }
- } else {
- if (!io_data->read &&
- unlikely(__copy_from_user(data, io_data->buf,
- io_data->len))) {
+ if (!io_data->read) {
+ copied = copy_from_iter(data, data_len, &io_data->data);
+ if (copied != data_len) {
ret = -EFAULT;
goto error;
}
*/
ret = ep->status;
if (io_data->read && ret > 0) {
- ret = min_t(size_t, ret, io_data->len);
-
- if (unlikely(copy_to_user(io_data->buf,
- data, ret)))
+ ret = copy_to_iter(data, ret, &io_data->data);
+ if (unlikely(iov_iter_count(&io_data->data)))
ret = -EFAULT;
}
}
return ret;
}
-static ssize_t
-ffs_epfile_write(struct file *file, const char __user *buf, size_t len,
- loff_t *ptr)
-{
- struct ffs_io_data io_data;
-
- ENTER();
-
- io_data.aio = false;
- io_data.read = false;
- io_data.buf = (char * __user)buf;
- io_data.len = len;
-
- return ffs_epfile_io(file, &io_data);
-}
-
-static ssize_t
-ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr)
-{
- struct ffs_io_data io_data;
-
- ENTER();
-
- io_data.aio = false;
- io_data.read = true;
- io_data.buf = buf;
- io_data.len = len;
-
- return ffs_epfile_io(file, &io_data);
-}
-
static int
ffs_epfile_open(struct inode *inode, struct file *file)
{
return value;
}
-static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb,
- const struct iovec *iovec,
- unsigned long nr_segs, loff_t loff)
+static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from)
{
- struct ffs_io_data *io_data;
+ struct ffs_io_data io_data, *p = &io_data;
+ ssize_t res;
ENTER();
- io_data = kmalloc(sizeof(*io_data), GFP_KERNEL);
- if (unlikely(!io_data))
- return -ENOMEM;
+ if (!is_sync_kiocb(kiocb)) {
+ p = kmalloc(sizeof(io_data), GFP_KERNEL);
+ if (unlikely(!p))
+ return -ENOMEM;
+ p->aio = true;
+ } else {
+ p->aio = false;
+ }
- io_data->aio = true;
- io_data->read = false;
- io_data->kiocb = kiocb;
- io_data->iovec = iovec;
- io_data->nr_segs = nr_segs;
- io_data->len = kiocb->ki_nbytes;
- io_data->mm = current->mm;
+ p->read = false;
+ p->kiocb = kiocb;
+ p->data = *from;
+ p->mm = current->mm;
- kiocb->private = io_data;
+ kiocb->private = p;
kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
- return ffs_epfile_io(kiocb->ki_filp, io_data);
+ res = ffs_epfile_io(kiocb->ki_filp, p);
+ if (res == -EIOCBQUEUED)
+ return res;
+ if (p->aio)
+ kfree(p);
+ else
+ *from = p->data;
+ return res;
}
-static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb,
- const struct iovec *iovec,
- unsigned long nr_segs, loff_t loff)
+static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to)
{
- struct ffs_io_data *io_data;
- struct iovec *iovec_copy;
+ struct ffs_io_data io_data, *p = &io_data;
+ ssize_t res;
ENTER();
- iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL);
- if (unlikely(!iovec_copy))
- return -ENOMEM;
-
- memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs);
-
- io_data = kmalloc(sizeof(*io_data), GFP_KERNEL);
- if (unlikely(!io_data)) {
- kfree(iovec_copy);
- return -ENOMEM;
+ if (!is_sync_kiocb(kiocb)) {
+ p = kmalloc(sizeof(io_data), GFP_KERNEL);
+ if (unlikely(!p))
+ return -ENOMEM;
+ p->aio = true;
+ } else {
+ p->aio = false;
}
- io_data->aio = true;
- io_data->read = true;
- io_data->kiocb = kiocb;
- io_data->iovec = iovec_copy;
- io_data->nr_segs = nr_segs;
- io_data->len = kiocb->ki_nbytes;
- io_data->mm = current->mm;
+ p->read = true;
+ p->kiocb = kiocb;
+ if (p->aio) {
+ p->to_free = dup_iter(&p->data, to, GFP_KERNEL);
+ if (!p->to_free) {
+ kfree(p);
+ return -ENOMEM;
+ }
+ } else {
+ p->data = *to;
+ p->to_free = NULL;
+ }
+ p->mm = current->mm;
- kiocb->private = io_data;
+ kiocb->private = p;
kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
- return ffs_epfile_io(kiocb->ki_filp, io_data);
+ res = ffs_epfile_io(kiocb->ki_filp, p);
+ if (res == -EIOCBQUEUED)
+ return res;
+
+ if (p->aio) {
+ kfree(p->to_free);
+ kfree(p);
+ } else {
+ *to = p->data;
+ }
+ return res;
}
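dup_iter() exists for the async path: the submitter's iov_iter, and the iovec array behind it, may be gone by the time the request completes, so a deep copy must be kept until completion (and freed there, hence the to_free field). A sketch of the idea (dup_iovec() is a hypothetical helper, not a kernel API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* deep-copy an iovec array so a deferred completion can still use it */
static struct iovec *dup_iovec(const struct iovec *iov, int nr)
{
	struct iovec *copy = malloc(nr * sizeof(*copy));

	if (copy)
		memcpy(copy, iov, nr * sizeof(*copy));
	return copy;
}

/* pretend this runs later, after the submitter's stack frame is gone */
static void async_complete(struct iovec *iov, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		printf("segment %d: %zu bytes\n", i, iov[i].iov_len);
	free(iov);
}

int main(void)
{
	char buf[8];
	struct iovec on_stack[2] = { { buf, 4 }, { buf + 4, 4 } };
	struct iovec *mine = dup_iovec(on_stack, 2);

	if (!mine)
		return 1;
	async_complete(mine, 2);
	return 0;
}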
static int
.llseek = no_llseek,
.open = ffs_epfile_open,
- .write = ffs_epfile_write,
- .read = ffs_epfile_read,
- .aio_write = ffs_epfile_aio_write,
- .aio_read = ffs_epfile_aio_read,
+ .write = new_sync_write,
+ .read = new_sync_read,
+ .write_iter = ffs_epfile_write_iter,
+ .read_iter = ffs_epfile_read_iter,
.release = ffs_epfile_release,
.unlocked_ioctl = ffs_epfile_ioctl,
};
struct usb_composite_dev *cdev;
cdev = loop->function.config->cdev;
- disable_endpoints(cdev, loop->in_ep, loop->out_ep, NULL, NULL, NULL,
- NULL);
+ disable_endpoints(cdev, loop->in_ep, loop->out_ep, NULL, NULL);
VDBG(cdev, "%s disabled\n", loop->function.name);
}
#include "gadget_chips.h"
#include "u_f.h"
-#define USB_MS_TO_SS_INTERVAL(x) USB_MS_TO_HS_INTERVAL(x)
-
-enum eptype {
- EP_CONTROL = 0,
- EP_BULK,
- EP_ISOC,
- EP_INTERRUPT,
-};
-
/*
* SOURCE/SINK FUNCTION ... a primary testing vehicle for USB peripheral
* controller drivers.
struct usb_ep *out_ep;
struct usb_ep *iso_in_ep;
struct usb_ep *iso_out_ep;
- struct usb_ep *int_in_ep;
- struct usb_ep *int_out_ep;
int cur_alt;
};
static unsigned isoc_maxpacket;
static unsigned isoc_mult;
static unsigned isoc_maxburst;
-static unsigned int_interval; /* In ms */
-static unsigned int_maxpacket;
-static unsigned int_mult;
-static unsigned int_maxburst;
static unsigned buflen;
/*-------------------------------------------------------------------------*/
/* .iInterface = DYNAMIC */
};
-static struct usb_interface_descriptor source_sink_intf_alt2 = {
- .bLength = USB_DT_INTERFACE_SIZE,
- .bDescriptorType = USB_DT_INTERFACE,
-
- .bAlternateSetting = 2,
- .bNumEndpoints = 2,
- .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
- /* .iInterface = DYNAMIC */
-};
-
/* full speed support: */
static struct usb_endpoint_descriptor fs_source_desc = {
.bInterval = 4,
};
-static struct usb_endpoint_descriptor fs_int_source_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bEndpointAddress = USB_DIR_IN,
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(64),
- .bInterval = GZERO_INT_INTERVAL,
-};
-
-static struct usb_endpoint_descriptor fs_int_sink_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bEndpointAddress = USB_DIR_OUT,
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(64),
- .bInterval = GZERO_INT_INTERVAL,
-};
-
static struct usb_descriptor_header *fs_source_sink_descs[] = {
(struct usb_descriptor_header *) &source_sink_intf_alt0,
(struct usb_descriptor_header *) &fs_sink_desc,
(struct usb_descriptor_header *) &fs_source_desc,
(struct usb_descriptor_header *) &fs_iso_sink_desc,
(struct usb_descriptor_header *) &fs_iso_source_desc,
- (struct usb_descriptor_header *) &source_sink_intf_alt2,
-#define FS_ALT_IFC_2_OFFSET 8
- (struct usb_descriptor_header *) &fs_int_sink_desc,
- (struct usb_descriptor_header *) &fs_int_source_desc,
NULL,
};
.bInterval = 4,
};
-static struct usb_endpoint_descriptor hs_int_source_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(1024),
- .bInterval = USB_MS_TO_HS_INTERVAL(GZERO_INT_INTERVAL),
-};
-
-static struct usb_endpoint_descriptor hs_int_sink_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(1024),
- .bInterval = USB_MS_TO_HS_INTERVAL(GZERO_INT_INTERVAL),
-};
-
static struct usb_descriptor_header *hs_source_sink_descs[] = {
(struct usb_descriptor_header *) &source_sink_intf_alt0,
(struct usb_descriptor_header *) &hs_source_desc,
(struct usb_descriptor_header *) &hs_sink_desc,
(struct usb_descriptor_header *) &hs_iso_source_desc,
(struct usb_descriptor_header *) &hs_iso_sink_desc,
- (struct usb_descriptor_header *) &source_sink_intf_alt2,
-#define HS_ALT_IFC_2_OFFSET 8
- (struct usb_descriptor_header *) &hs_int_source_desc,
- (struct usb_descriptor_header *) &hs_int_sink_desc,
NULL,
};
.wBytesPerInterval = cpu_to_le16(1024),
};
-static struct usb_endpoint_descriptor ss_int_source_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(1024),
- .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL),
-};
-
-static struct usb_ss_ep_comp_descriptor ss_int_source_comp_desc = {
- .bLength = USB_DT_SS_EP_COMP_SIZE,
- .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
-
- .bMaxBurst = 0,
- .bmAttributes = 0,
- .wBytesPerInterval = cpu_to_le16(1024),
-};
-
-static struct usb_endpoint_descriptor ss_int_sink_desc = {
- .bLength = USB_DT_ENDPOINT_SIZE,
- .bDescriptorType = USB_DT_ENDPOINT,
-
- .bmAttributes = USB_ENDPOINT_XFER_INT,
- .wMaxPacketSize = cpu_to_le16(1024),
- .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL),
-};
-
-static struct usb_ss_ep_comp_descriptor ss_int_sink_comp_desc = {
- .bLength = USB_DT_SS_EP_COMP_SIZE,
- .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
-
- .bMaxBurst = 0,
- .bmAttributes = 0,
- .wBytesPerInterval = cpu_to_le16(1024),
-};
-
static struct usb_descriptor_header *ss_source_sink_descs[] = {
(struct usb_descriptor_header *) &source_sink_intf_alt0,
(struct usb_descriptor_header *) &ss_source_desc,
(struct usb_descriptor_header *) &ss_iso_source_comp_desc,
(struct usb_descriptor_header *) &ss_iso_sink_desc,
(struct usb_descriptor_header *) &ss_iso_sink_comp_desc,
- (struct usb_descriptor_header *) &source_sink_intf_alt2,
-#define SS_ALT_IFC_2_OFFSET 14
- (struct usb_descriptor_header *) &ss_int_source_desc,
- (struct usb_descriptor_header *) &ss_int_source_comp_desc,
- (struct usb_descriptor_header *) &ss_int_sink_desc,
- (struct usb_descriptor_header *) &ss_int_sink_comp_desc,
NULL,
};
};
/*-------------------------------------------------------------------------*/
-static const char *get_ep_string(enum eptype ep_type)
-{
- switch (ep_type) {
- case EP_ISOC:
- return "ISOC-";
- case EP_INTERRUPT:
- return "INTERRUPT-";
- case EP_CONTROL:
- return "CTRL-";
- case EP_BULK:
- return "BULK-";
- default:
- return "UNKNOWN-";
- }
-}
static inline struct usb_request *ss_alloc_ep_req(struct usb_ep *ep, int len)
{
void disable_endpoints(struct usb_composite_dev *cdev,
struct usb_ep *in, struct usb_ep *out,
- struct usb_ep *iso_in, struct usb_ep *iso_out,
- struct usb_ep *int_in, struct usb_ep *int_out)
+ struct usb_ep *iso_in, struct usb_ep *iso_out)
{
disable_ep(cdev, in);
disable_ep(cdev, out);
disable_ep(cdev, iso_in);
if (iso_out)
disable_ep(cdev, iso_out);
- if (int_in)
- disable_ep(cdev, int_in);
- if (int_out)
- disable_ep(cdev, int_out);
}
static int
return id;
source_sink_intf_alt0.bInterfaceNumber = id;
source_sink_intf_alt1.bInterfaceNumber = id;
- source_sink_intf_alt2.bInterfaceNumber = id;
/* allocate bulk endpoints */
ss->in_ep = usb_ep_autoconfig(cdev->gadget, &fs_source_desc);
if (isoc_maxpacket > 1024)
isoc_maxpacket = 1024;
- /* sanity check the interrupt module parameters */
- if (int_interval < 1)
- int_interval = 1;
- if (int_interval > 4096)
- int_interval = 4096;
- if (int_mult > 2)
- int_mult = 2;
- if (int_maxburst > 15)
- int_maxburst = 15;
-
- /* fill in the FS interrupt descriptors from the module parameters */
- fs_int_source_desc.wMaxPacketSize = int_maxpacket > 64 ?
- 64 : int_maxpacket;
- fs_int_source_desc.bInterval = int_interval > 255 ?
- 255 : int_interval;
- fs_int_sink_desc.wMaxPacketSize = int_maxpacket > 64 ?
- 64 : int_maxpacket;
- fs_int_sink_desc.bInterval = int_interval > 255 ?
- 255 : int_interval;
-
- /* allocate int endpoints */
- ss->int_in_ep = usb_ep_autoconfig(cdev->gadget, &fs_int_source_desc);
- if (!ss->int_in_ep)
- goto no_int;
- ss->int_in_ep->driver_data = cdev; /* claim */
-
- ss->int_out_ep = usb_ep_autoconfig(cdev->gadget, &fs_int_sink_desc);
- if (ss->int_out_ep) {
- ss->int_out_ep->driver_data = cdev; /* claim */
- } else {
- ss->int_in_ep->driver_data = NULL;
- ss->int_in_ep = NULL;
-no_int:
- fs_source_sink_descs[FS_ALT_IFC_2_OFFSET] = NULL;
- hs_source_sink_descs[HS_ALT_IFC_2_OFFSET] = NULL;
- ss_source_sink_descs[SS_ALT_IFC_2_OFFSET] = NULL;
- }
-
- if (int_maxpacket > 1024)
- int_maxpacket = 1024;
-
/* support high speed hardware */
hs_source_desc.bEndpointAddress = fs_source_desc.bEndpointAddress;
hs_sink_desc.bEndpointAddress = fs_sink_desc.bEndpointAddress;
/*
- * Fill in the HS isoc and interrupt descriptors from the module
- * parameters. We assume that the user knows what they are doing and
- * won't give parameters that their UDC doesn't support.
+ * Fill in the HS isoc descriptors from the module parameters.
+ * We assume that the user knows what they are doing and won't
+ * give parameters that their UDC doesn't support.
*/
hs_iso_source_desc.wMaxPacketSize = isoc_maxpacket;
hs_iso_source_desc.wMaxPacketSize |= isoc_mult << 11;
hs_iso_sink_desc.bInterval = isoc_interval;
hs_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress;
- hs_int_source_desc.wMaxPacketSize = int_maxpacket;
- hs_int_source_desc.wMaxPacketSize |= int_mult << 11;
- hs_int_source_desc.bInterval = USB_MS_TO_HS_INTERVAL(int_interval);
- hs_int_source_desc.bEndpointAddress =
- fs_int_source_desc.bEndpointAddress;
-
- hs_int_sink_desc.wMaxPacketSize = int_maxpacket;
- hs_int_sink_desc.wMaxPacketSize |= int_mult << 11;
- hs_int_sink_desc.bInterval = USB_MS_TO_HS_INTERVAL(int_interval);
- hs_int_sink_desc.bEndpointAddress = fs_int_sink_desc.bEndpointAddress;
-
/* support super speed hardware */
ss_source_desc.bEndpointAddress =
fs_source_desc.bEndpointAddress;
fs_sink_desc.bEndpointAddress;
/*
- * Fill in the SS isoc and interrupt descriptors from the module
- * parameters. We assume that the user knows what they are doing and
- * won't give parameters that their UDC doesn't support.
+ * Fill in the SS isoc descriptors from the module parameters.
+ * We assume that the user knows what they are doing and won't
+ * give parameters that their UDC doesn't support.
*/
ss_iso_source_desc.wMaxPacketSize = isoc_maxpacket;
ss_iso_source_desc.bInterval = isoc_interval;
isoc_maxpacket * (isoc_mult + 1) * (isoc_maxburst + 1);
ss_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress;
- ss_int_source_desc.wMaxPacketSize = int_maxpacket;
- ss_int_source_desc.bInterval = USB_MS_TO_SS_INTERVAL(int_interval);
- ss_int_source_comp_desc.bmAttributes = int_mult;
- ss_int_source_comp_desc.bMaxBurst = int_maxburst;
- ss_int_source_comp_desc.wBytesPerInterval =
- int_maxpacket * (int_mult + 1) * (int_maxburst + 1);
- ss_int_source_desc.bEndpointAddress =
- fs_int_source_desc.bEndpointAddress;
-
- ss_int_sink_desc.wMaxPacketSize = int_maxpacket;
- ss_int_sink_desc.bInterval = USB_MS_TO_SS_INTERVAL(int_interval);
- ss_int_sink_comp_desc.bmAttributes = int_mult;
- ss_int_sink_comp_desc.bMaxBurst = int_maxburst;
- ss_int_sink_comp_desc.wBytesPerInterval =
- int_maxpacket * (int_mult + 1) * (int_maxburst + 1);
- ss_int_sink_desc.bEndpointAddress = fs_int_sink_desc.bEndpointAddress;
-
ret = usb_assign_descriptors(f, fs_source_sink_descs,
hs_source_sink_descs, ss_source_sink_descs);
if (ret)
return ret;
- DBG(cdev, "%s speed %s: IN/%s, OUT/%s, ISO-IN/%s, ISO-OUT/%s, "
- "INT-IN/%s, INT-OUT/%s\n",
+ DBG(cdev, "%s speed %s: IN/%s, OUT/%s, ISO-IN/%s, ISO-OUT/%s\n",
(gadget_is_superspeed(c->cdev->gadget) ? "super" :
(gadget_is_dualspeed(c->cdev->gadget) ? "dual" : "full")),
f->name, ss->in_ep->name, ss->out_ep->name,
ss->iso_in_ep ? ss->iso_in_ep->name : "<none>",
- ss->iso_out_ep ? ss->iso_out_ep->name : "<none>",
- ss->int_in_ep ? ss->int_in_ep->name : "<none>",
- ss->int_out_ep ? ss->int_out_ep->name : "<none>");
+ ss->iso_out_ep ? ss->iso_out_ep->name : "<none>");
return 0;
}
}
static int source_sink_start_ep(struct f_sourcesink *ss, bool is_in,
- enum eptype ep_type, int speed)
+ bool is_iso, int speed)
{
struct usb_ep *ep;
struct usb_request *req;
int i, size, status;
for (i = 0; i < 8; i++) {
- switch (ep_type) {
- case EP_ISOC:
+ if (is_iso) {
switch (speed) {
case USB_SPEED_SUPER:
size = isoc_maxpacket * (isoc_mult + 1) *
}
ep = is_in ? ss->iso_in_ep : ss->iso_out_ep;
req = ss_alloc_ep_req(ep, size);
- break;
- case EP_INTERRUPT:
- switch (speed) {
- case USB_SPEED_SUPER:
- size = int_maxpacket * (int_mult + 1) *
- (int_maxburst + 1);
- break;
- case USB_SPEED_HIGH:
- size = int_maxpacket * (int_mult + 1);
- break;
- default:
- size = int_maxpacket > 1023 ?
- 1023 : int_maxpacket;
- break;
- }
- ep = is_in ? ss->int_in_ep : ss->int_out_ep;
- req = ss_alloc_ep_req(ep, size);
- break;
- default:
+ } else {
ep = is_in ? ss->in_ep : ss->out_ep;
req = ss_alloc_ep_req(ep, 0);
- break;
}
if (!req)
cdev = ss->function.config->cdev;
ERROR(cdev, "start %s%s %s --> %d\n",
- get_ep_string(ep_type), is_in ? "IN" : "OUT",
- ep->name, status);
+ is_iso ? "ISO-" : "", is_in ? "IN" : "OUT",
+ ep->name, status);
free_ep_req(ep, req);
}
- if (!(ep_type == EP_ISOC))
+ if (!is_iso)
break;
}
cdev = ss->function.config->cdev;
disable_endpoints(cdev, ss->in_ep, ss->out_ep, ss->iso_in_ep,
- ss->iso_out_ep, ss->int_in_ep, ss->int_out_ep);
+ ss->iso_out_ep);
VDBG(cdev, "%s disabled\n", ss->function.name);
}
int speed = cdev->gadget->speed;
struct usb_ep *ep;
- if (alt == 2) {
- /* Configure for periodic interrupt endpoint */
- ep = ss->int_in_ep;
- if (ep) {
- result = config_ep_by_speed(cdev->gadget,
- &(ss->function), ep);
- if (result)
- return result;
-
- result = usb_ep_enable(ep);
- if (result < 0)
- return result;
-
- ep->driver_data = ss;
- result = source_sink_start_ep(ss, true, EP_INTERRUPT,
- speed);
- if (result < 0) {
-fail1:
- ep = ss->int_in_ep;
- if (ep) {
- usb_ep_disable(ep);
- ep->driver_data = NULL;
- }
- return result;
- }
- }
-
- /*
- * one interrupt endpoint reads (sinks) anything OUT (from the
- * host)
- */
- ep = ss->int_out_ep;
- if (ep) {
- result = config_ep_by_speed(cdev->gadget,
- &(ss->function), ep);
- if (result)
- goto fail1;
-
- result = usb_ep_enable(ep);
- if (result < 0)
- goto fail1;
-
- ep->driver_data = ss;
- result = source_sink_start_ep(ss, false, EP_INTERRUPT,
- speed);
- if (result < 0) {
- ep = ss->int_out_ep;
- usb_ep_disable(ep);
- ep->driver_data = NULL;
- goto fail1;
- }
- }
-
- goto out;
- }
-
/* one bulk endpoint writes (sources) zeroes IN (to the host) */
ep = ss->in_ep;
result = config_ep_by_speed(cdev->gadget, &(ss->function), ep);
return result;
ep->driver_data = ss;
- result = source_sink_start_ep(ss, true, EP_BULK, speed);
+ result = source_sink_start_ep(ss, true, false, speed);
if (result < 0) {
fail:
ep = ss->in_ep;
goto fail;
ep->driver_data = ss;
- result = source_sink_start_ep(ss, false, EP_BULK, speed);
+ result = source_sink_start_ep(ss, false, false, speed);
if (result < 0) {
fail2:
ep = ss->out_ep;
goto fail2;
ep->driver_data = ss;
- result = source_sink_start_ep(ss, true, EP_ISOC, speed);
+ result = source_sink_start_ep(ss, true, true, speed);
if (result < 0) {
fail3:
ep = ss->iso_in_ep;
goto fail3;
ep->driver_data = ss;
- result = source_sink_start_ep(ss, false, EP_ISOC, speed);
+ result = source_sink_start_ep(ss, false, true, speed);
if (result < 0) {
usb_ep_disable(ep);
ep->driver_data = NULL;
goto fail3;
}
}
-
out:
ss->cur_alt = alt;
if (ss->in_ep->driver_data)
disable_source_sink(ss);
- else if (alt == 2 && ss->int_in_ep->driver_data)
- disable_source_sink(ss);
return enable_source_sink(cdev, ss, alt);
}
isoc_maxpacket = ss_opts->isoc_maxpacket;
isoc_mult = ss_opts->isoc_mult;
isoc_maxburst = ss_opts->isoc_maxburst;
- int_interval = ss_opts->int_interval;
- int_maxpacket = ss_opts->int_maxpacket;
- int_mult = ss_opts->int_mult;
- int_maxburst = ss_opts->int_maxburst;
buflen = ss_opts->bulk_buflen;
ss->function.name = "source/sink";
f_ss_opts_bulk_buflen_show,
f_ss_opts_bulk_buflen_store);
-static ssize_t f_ss_opts_int_interval_show(struct f_ss_opts *opts, char *page)
-{
- int result;
-
- mutex_lock(&opts->lock);
- result = sprintf(page, "%u", opts->int_interval);
- mutex_unlock(&opts->lock);
-
- return result;
-}
-
-static ssize_t f_ss_opts_int_interval_store(struct f_ss_opts *opts,
- const char *page, size_t len)
-{
- int ret;
- u32 num;
-
- mutex_lock(&opts->lock);
- if (opts->refcnt) {
- ret = -EBUSY;
- goto end;
- }
-
- ret = kstrtou32(page, 0, &num);
- if (ret)
- goto end;
-
- if (num > 4096) {
- ret = -EINVAL;
- goto end;
- }
-
- opts->int_interval = num;
- ret = len;
-end:
- mutex_unlock(&opts->lock);
- return ret;
-}
-
-static struct f_ss_opts_attribute f_ss_opts_int_interval =
- __CONFIGFS_ATTR(int_interval, S_IRUGO | S_IWUSR,
- f_ss_opts_int_interval_show,
- f_ss_opts_int_interval_store);
-
-static ssize_t f_ss_opts_int_maxpacket_show(struct f_ss_opts *opts, char *page)
-{
- int result;
-
- mutex_lock(&opts->lock);
- result = sprintf(page, "%u", opts->int_maxpacket);
- mutex_unlock(&opts->lock);
-
- return result;
-}
-
-static ssize_t f_ss_opts_int_maxpacket_store(struct f_ss_opts *opts,
- const char *page, size_t len)
-{
- int ret;
- u16 num;
-
- mutex_lock(&opts->lock);
- if (opts->refcnt) {
- ret = -EBUSY;
- goto end;
- }
-
- ret = kstrtou16(page, 0, &num);
- if (ret)
- goto end;
-
- if (num > 1024) {
- ret = -EINVAL;
- goto end;
- }
-
- opts->int_maxpacket = num;
- ret = len;
-end:
- mutex_unlock(&opts->lock);
- return ret;
-}
-
-static struct f_ss_opts_attribute f_ss_opts_int_maxpacket =
- __CONFIGFS_ATTR(int_maxpacket, S_IRUGO | S_IWUSR,
- f_ss_opts_int_maxpacket_show,
- f_ss_opts_int_maxpacket_store);
-
-static ssize_t f_ss_opts_int_mult_show(struct f_ss_opts *opts, char *page)
-{
- int result;
-
- mutex_lock(&opts->lock);
- result = sprintf(page, "%u", opts->int_mult);
- mutex_unlock(&opts->lock);
-
- return result;
-}
-
-static ssize_t f_ss_opts_int_mult_store(struct f_ss_opts *opts,
- const char *page, size_t len)
-{
- int ret;
- u8 num;
-
- mutex_lock(&opts->lock);
- if (opts->refcnt) {
- ret = -EBUSY;
- goto end;
- }
-
- ret = kstrtou8(page, 0, &num);
- if (ret)
- goto end;
-
- if (num > 2) {
- ret = -EINVAL;
- goto end;
- }
-
- opts->int_mult = num;
- ret = len;
-end:
- mutex_unlock(&opts->lock);
- return ret;
-}
-
-static struct f_ss_opts_attribute f_ss_opts_int_mult =
- __CONFIGFS_ATTR(int_mult, S_IRUGO | S_IWUSR,
- f_ss_opts_int_mult_show,
- f_ss_opts_int_mult_store);
-
-static ssize_t f_ss_opts_int_maxburst_show(struct f_ss_opts *opts, char *page)
-{
- int result;
-
- mutex_lock(&opts->lock);
- result = sprintf(page, "%u", opts->int_maxburst);
- mutex_unlock(&opts->lock);
-
- return result;
-}
-
-static ssize_t f_ss_opts_int_maxburst_store(struct f_ss_opts *opts,
- const char *page, size_t len)
-{
- int ret;
- u8 num;
-
- mutex_lock(&opts->lock);
- if (opts->refcnt) {
- ret = -EBUSY;
- goto end;
- }
-
- ret = kstrtou8(page, 0, &num);
- if (ret)
- goto end;
-
- if (num > 15) {
- ret = -EINVAL;
- goto end;
- }
-
- opts->int_maxburst = num;
- ret = len;
-end:
- mutex_unlock(&opts->lock);
- return ret;
-}
-
-static struct f_ss_opts_attribute f_ss_opts_int_maxburst =
- __CONFIGFS_ATTR(int_maxburst, S_IRUGO | S_IWUSR,
- f_ss_opts_int_maxburst_show,
- f_ss_opts_int_maxburst_store);
-
static struct configfs_attribute *ss_attrs[] = {
&f_ss_opts_pattern.attr,
&f_ss_opts_isoc_interval.attr,
&f_ss_opts_isoc_mult.attr,
&f_ss_opts_isoc_maxburst.attr,
&f_ss_opts_bulk_buflen.attr,
- &f_ss_opts_int_interval.attr,
- &f_ss_opts_int_maxpacket.attr,
- &f_ss_opts_int_mult.attr,
- &f_ss_opts_int_maxburst.attr,
NULL,
};
ss_opts->isoc_interval = GZERO_ISOC_INTERVAL;
ss_opts->isoc_maxpacket = GZERO_ISOC_MAXPACKET;
ss_opts->bulk_buflen = GZERO_BULK_BUFLEN;
- ss_opts->int_interval = GZERO_INT_INTERVAL;
- ss_opts->int_maxpacket = GZERO_INT_MAXPACKET;
config_group_init_type_name(&ss_opts->func_inst.group, "",
&ss_func_type);
#define GZERO_QLEN 32
#define GZERO_ISOC_INTERVAL 4
#define GZERO_ISOC_MAXPACKET 1024
-#define GZERO_INT_INTERVAL 1 /* Default interrupt interval = 1 ms */
-#define GZERO_INT_MAXPACKET 1024
struct usb_zero_options {
unsigned pattern;
unsigned isoc_maxpacket;
unsigned isoc_mult;
unsigned isoc_maxburst;
- unsigned int_interval; /* In ms */
- unsigned int_maxpacket;
- unsigned int_mult;
- unsigned int_maxburst;
unsigned bulk_buflen;
unsigned qlen;
};
unsigned isoc_maxpacket;
unsigned isoc_mult;
unsigned isoc_maxburst;
- unsigned int_interval; /* In ms */
- unsigned int_maxpacket;
- unsigned int_mult;
- unsigned int_maxburst;
unsigned bulk_buflen;
/*
void free_ep_req(struct usb_ep *ep, struct usb_request *req);
void disable_endpoints(struct usb_composite_dev *cdev,
struct usb_ep *in, struct usb_ep *out,
- struct usb_ep *iso_in, struct usb_ep *iso_out,
- struct usb_ep *int_in, struct usb_ep *int_out);
+ struct usb_ep *iso_in, struct usb_ep *iso_out);
#endif /* __G_ZERO_H */
MODULE_AUTHOR ("David Brownell");
MODULE_LICENSE ("GPL");
+static int ep_open(struct inode *, struct file *);
+
/*----------------------------------------------------------------------*/
* still need dev->lock to use epdata->ep.
*/
static int
-get_ready_ep (unsigned f_flags, struct ep_data *epdata)
+get_ready_ep (unsigned f_flags, struct ep_data *epdata, bool is_write)
{
int val;
if (f_flags & O_NONBLOCK) {
if (!mutex_trylock(&epdata->lock))
goto nonblock;
- if (epdata->state != STATE_EP_ENABLED) {
+ if (epdata->state != STATE_EP_ENABLED &&
+ (!is_write || epdata->state != STATE_EP_READY)) {
mutex_unlock(&epdata->lock);
nonblock:
val = -EAGAIN;
switch (epdata->state) {
case STATE_EP_ENABLED:
+ return 0;
+ case STATE_EP_READY: /* not configured yet */
+ if (is_write)
+ return 0;
+ // FALLTHRU
+ case STATE_EP_UNBOUND: /* clean disconnect */
break;
// case STATE_EP_DISABLED: /* "can't happen" */
- // case STATE_EP_READY: /* "can't happen" */
default: /* error! */
pr_debug ("%s: ep %p not available, state %d\n",
shortname, epdata, epdata->state);
- // FALLTHROUGH
- case STATE_EP_UNBOUND: /* clean disconnect */
- val = -ENODEV;
- mutex_unlock(&epdata->lock);
}
- return val;
+ mutex_unlock(&epdata->lock);
+ return -ENODEV;
}
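With O_NONBLOCK, get_ready_ep() must not sleep on the state mutex, so it trylocks and maps contention to -EAGAIN. The same shape in user space (get_ready() is a stand-in for the kernel function, not its real signature):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* nonblocking callers get -EAGAIN on contention instead of sleeping */
static int get_ready(int nonblock)
{
	if (nonblock)
		return pthread_mutex_trylock(&state_lock) ? -EAGAIN : 0;
	return -pthread_mutex_lock(&state_lock);
}

int main(void)
{
	int ret;

	pthread_mutex_lock(&state_lock);	/* someone else holds the lock */
	ret = get_ready(1);
	printf("nonblocking acquire: %s\n", ret ? strerror(-ret) : "ok");
	pthread_mutex_unlock(&state_lock);
	return 0;
}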
static ssize_t
return value;
}
-
-/* handle a synchronous OUT bulk/intr/iso transfer */
-static ssize_t
-ep_read (struct file *fd, char __user *buf, size_t len, loff_t *ptr)
-{
- struct ep_data *data = fd->private_data;
- void *kbuf;
- ssize_t value;
-
- if ((value = get_ready_ep (fd->f_flags, data)) < 0)
- return value;
-
- /* halt any endpoint by doing a "wrong direction" i/o call */
- if (usb_endpoint_dir_in(&data->desc)) {
- if (usb_endpoint_xfer_isoc(&data->desc)) {
- mutex_unlock(&data->lock);
- return -EINVAL;
- }
- DBG (data->dev, "%s halt\n", data->name);
- spin_lock_irq (&data->dev->lock);
- if (likely (data->ep != NULL))
- usb_ep_set_halt (data->ep);
- spin_unlock_irq (&data->dev->lock);
- mutex_unlock(&data->lock);
- return -EBADMSG;
- }
-
- /* FIXME readahead for O_NONBLOCK and poll(); careful with ZLPs */
-
- value = -ENOMEM;
- kbuf = kmalloc (len, GFP_KERNEL);
- if (unlikely (!kbuf))
- goto free1;
-
- value = ep_io (data, kbuf, len);
- VDEBUG (data->dev, "%s read %zu OUT, status %d\n",
- data->name, len, (int) value);
- if (value >= 0 && copy_to_user (buf, kbuf, value))
- value = -EFAULT;
-
-free1:
- mutex_unlock(&data->lock);
- kfree (kbuf);
- return value;
-}
-
-/* handle a synchronous IN bulk/intr/iso transfer */
-static ssize_t
-ep_write (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
-{
- struct ep_data *data = fd->private_data;
- void *kbuf;
- ssize_t value;
-
- if ((value = get_ready_ep (fd->f_flags, data)) < 0)
- return value;
-
- /* halt any endpoint by doing a "wrong direction" i/o call */
- if (!usb_endpoint_dir_in(&data->desc)) {
- if (usb_endpoint_xfer_isoc(&data->desc)) {
- mutex_unlock(&data->lock);
- return -EINVAL;
- }
- DBG (data->dev, "%s halt\n", data->name);
- spin_lock_irq (&data->dev->lock);
- if (likely (data->ep != NULL))
- usb_ep_set_halt (data->ep);
- spin_unlock_irq (&data->dev->lock);
- mutex_unlock(&data->lock);
- return -EBADMSG;
- }
-
- /* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */
-
- value = -ENOMEM;
- kbuf = memdup_user(buf, len);
- if (IS_ERR(kbuf)) {
- value = PTR_ERR(kbuf);
- kbuf = NULL;
- goto free1;
- }
-
- value = ep_io (data, kbuf, len);
- VDEBUG (data->dev, "%s write %zu IN, status %d\n",
- data->name, len, (int) value);
-free1:
- mutex_unlock(&data->lock);
- kfree (kbuf);
- return value;
-}
-
static int
ep_release (struct inode *inode, struct file *fd)
{
struct ep_data *data = fd->private_data;
int status;
- if ((status = get_ready_ep (fd->f_flags, data)) < 0)
+ if ((status = get_ready_ep (fd->f_flags, data, false)) < 0)
return status;
spin_lock_irq (&data->dev->lock);
struct mm_struct *mm;
struct work_struct work;
void *buf;
- const struct iovec *iv;
- unsigned long nr_segs;
+ struct iov_iter to;
+ const void *to_free;
unsigned actual;
};
return value;
}
-static ssize_t ep_copy_to_user(struct kiocb_priv *priv)
-{
- ssize_t len, total;
- void *to_copy;
- int i;
-
- /* copy stuff into user buffers */
- total = priv->actual;
- len = 0;
- to_copy = priv->buf;
- for (i=0; i < priv->nr_segs; i++) {
- ssize_t this = min((ssize_t)(priv->iv[i].iov_len), total);
-
- if (copy_to_user(priv->iv[i].iov_base, to_copy, this)) {
- if (len == 0)
- len = -EFAULT;
- break;
- }
-
- total -= this;
- len += this;
- to_copy += this;
- if (total == 0)
- break;
- }
-
- return len;
-}
-
static void ep_user_copy_worker(struct work_struct *work)
{
struct kiocb_priv *priv = container_of(work, struct kiocb_priv, work);
size_t ret;
use_mm(mm);
- ret = ep_copy_to_user(priv);
+ ret = copy_to_iter(priv->buf, priv->actual, &priv->to);
unuse_mm(mm);
+ if (!ret)
+ ret = -EFAULT;
/* completing the iocb can drop the ctx and mm, don't touch mm after */
aio_complete(iocb, ret, ret);
kfree(priv->buf);
+ kfree(priv->to_free);
kfree(priv);
}
* don't need to copy anything to userspace, so we can
* complete the aio request immediately.
*/
- if (priv->iv == NULL || unlikely(req->actual == 0)) {
+ if (priv->to_free == NULL || unlikely(req->actual == 0)) {
kfree(req->buf);
+ kfree(priv->to_free);
kfree(priv);
iocb->private = NULL;
/* aio_complete() reports bytes-transferred _and_ faults */
priv->buf = req->buf;
priv->actual = req->actual;
+ INIT_WORK(&priv->work, ep_user_copy_worker);
schedule_work(&priv->work);
}
spin_unlock(&epdata->dev->lock);
put_ep(epdata);
}
-static ssize_t
-ep_aio_rwtail(
- struct kiocb *iocb,
- char *buf,
- size_t len,
- struct ep_data *epdata,
- const struct iovec *iv,
- unsigned long nr_segs
-)
+static ssize_t ep_aio(struct kiocb *iocb,
+ struct kiocb_priv *priv,
+ struct ep_data *epdata,
+ char *buf,
+ size_t len)
{
- struct kiocb_priv *priv;
- struct usb_request *req;
- ssize_t value;
+ struct usb_request *req;
+ ssize_t value;
- priv = kmalloc(sizeof *priv, GFP_KERNEL);
- if (!priv) {
- value = -ENOMEM;
-fail:
- kfree(buf);
- return value;
- }
iocb->private = priv;
priv->iocb = iocb;
- priv->iv = iv;
- priv->nr_segs = nr_segs;
- INIT_WORK(&priv->work, ep_user_copy_worker);
-
- value = get_ready_ep(iocb->ki_filp->f_flags, epdata);
- if (unlikely(value < 0)) {
- kfree(priv);
- goto fail;
- }
kiocb_set_cancel_fn(iocb, ep_aio_cancel);
get_ep(epdata);
* allocate or submit those if the host disconnected.
*/
spin_lock_irq(&epdata->dev->lock);
- if (likely(epdata->ep)) {
- req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
- if (likely(req)) {
- priv->req = req;
- req->buf = buf;
- req->length = len;
- req->complete = ep_aio_complete;
- req->context = iocb;
- value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
- if (unlikely(0 != value))
- usb_ep_free_request(epdata->ep, req);
- } else
- value = -EAGAIN;
- } else
- value = -ENODEV;
- spin_unlock_irq(&epdata->dev->lock);
+ value = -ENODEV;
+ if (unlikely(epdata->ep))
+ goto fail;
- mutex_unlock(&epdata->lock);
+ req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
+ value = -ENOMEM;
+ if (unlikely(!req))
+ goto fail;
- if (unlikely(value)) {
- kfree(priv);
- put_ep(epdata);
- } else
- value = -EIOCBQUEUED;
+ priv->req = req;
+ req->buf = buf;
+ req->length = len;
+ req->complete = ep_aio_complete;
+ req->context = iocb;
+ value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
+ if (unlikely(0 != value)) {
+ usb_ep_free_request(epdata->ep, req);
+ goto fail;
+ }
+ spin_unlock_irq(&epdata->dev->lock);
+ return -EIOCBQUEUED;
+
+fail:
+ spin_unlock_irq(&epdata->dev->lock);
+ kfree(priv->to_free);
+ kfree(priv);
+ put_ep(epdata);
return value;
}
static ssize_t
-ep_aio_read(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t o)
+ep_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
- struct ep_data *epdata = iocb->ki_filp->private_data;
- char *buf;
+ struct file *file = iocb->ki_filp;
+ struct ep_data *epdata = file->private_data;
+ size_t len = iov_iter_count(to);
+ ssize_t value;
+ char *buf;
- if (unlikely(usb_endpoint_dir_in(&epdata->desc)))
- return -EINVAL;
+ if ((value = get_ready_ep(file->f_flags, epdata, false)) < 0)
+ return value;
- buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
- if (unlikely(!buf))
- return -ENOMEM;
+ /* halt any endpoint by doing a "wrong direction" i/o call */
+ if (usb_endpoint_dir_in(&epdata->desc)) {
+ if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+ !is_sync_kiocb(iocb)) {
+ mutex_unlock(&epdata->lock);
+ return -EINVAL;
+ }
+ DBG (epdata->dev, "%s halt\n", epdata->name);
+ spin_lock_irq(&epdata->dev->lock);
+ if (likely(epdata->ep != NULL))
+ usb_ep_set_halt(epdata->ep);
+ spin_unlock_irq(&epdata->dev->lock);
+ mutex_unlock(&epdata->lock);
+ return -EBADMSG;
+ }
- return ep_aio_rwtail(iocb, buf, iocb->ki_nbytes, epdata, iov, nr_segs);
+ buf = kmalloc(len, GFP_KERNEL);
+ if (unlikely(!buf)) {
+ mutex_unlock(&epdata->lock);
+ return -ENOMEM;
+ }
+ if (is_sync_kiocb(iocb)) {
+ value = ep_io(epdata, buf, len);
+ if (value >= 0 && copy_to_iter(buf, value, to))
+ value = -EFAULT;
+ } else {
+ struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+ value = -ENOMEM;
+ if (!priv)
+ goto fail;
+ priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL);
+ if (!priv->to_free) {
+ kfree(priv);
+ goto fail;
+ }
+ value = ep_aio(iocb, priv, epdata, buf, len);
+ if (value == -EIOCBQUEUED)
+ buf = NULL;
+ }
+fail:
+ kfree(buf);
+ mutex_unlock(&epdata->lock);
+ return value;
}
+static ssize_t ep_config(struct ep_data *, const char *, size_t);
+
static ssize_t
-ep_aio_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t o)
+ep_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
- struct ep_data *epdata = iocb->ki_filp->private_data;
- char *buf;
- size_t len = 0;
- int i = 0;
+ struct file *file = iocb->ki_filp;
+ struct ep_data *epdata = file->private_data;
+ size_t len = iov_iter_count(from);
+ bool configured;
+ ssize_t value;
+ char *buf;
+
+ if ((value = get_ready_ep(file->f_flags, epdata, true)) < 0)
+ return value;
- if (unlikely(!usb_endpoint_dir_in(&epdata->desc)))
- return -EINVAL;
+ configured = epdata->state == STATE_EP_ENABLED;
- buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
- if (unlikely(!buf))
+ /* halt any endpoint by doing a "wrong direction" i/o call */
+ if (configured && !usb_endpoint_dir_in(&epdata->desc)) {
+ if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+ !is_sync_kiocb(iocb)) {
+ mutex_unlock(&epdata->lock);
+ return -EINVAL;
+ }
+ DBG (epdata->dev, "%s halt\n", epdata->name);
+ spin_lock_irq(&epdata->dev->lock);
+ if (likely(epdata->ep != NULL))
+ usb_ep_set_halt(epdata->ep);
+ spin_unlock_irq(&epdata->dev->lock);
+ mutex_unlock(&epdata->lock);
+ return -EBADMSG;
+ }
+
+ buf = kmalloc(len, GFP_KERNEL);
+ if (unlikely(!buf)) {
+ mutex_unlock(&epdata->lock);
return -ENOMEM;
+ }
- for (i=0; i < nr_segs; i++) {
- if (unlikely(copy_from_user(&buf[len], iov[i].iov_base,
- iov[i].iov_len) != 0)) {
- kfree(buf);
- return -EFAULT;
+ if (unlikely(copy_from_iter(buf, len, from) != len)) {
+ value = -EFAULT;
+ goto out;
+ }
+
+ if (unlikely(!configured)) {
+ value = ep_config(epdata, buf, len);
+ } else if (is_sync_kiocb(iocb)) {
+ value = ep_io(epdata, buf, len);
+ } else {
+ struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+ value = -ENOMEM;
+ if (priv) {
+ value = ep_aio(iocb, priv, epdata, buf, len);
+ if (value == -EIOCBQUEUED)
+ buf = NULL;
}
- len += iov[i].iov_len;
}
- return ep_aio_rwtail(iocb, buf, len, epdata, NULL, 0);
+out:
+ kfree(buf);
+ mutex_unlock(&epdata->lock);
+ return value;
}
/*----------------------------------------------------------------------*/
/* used after endpoint configuration */
static const struct file_operations ep_io_operations = {
.owner = THIS_MODULE,
- .llseek = no_llseek,
- .read = ep_read,
- .write = ep_write,
- .unlocked_ioctl = ep_ioctl,
+ .open = ep_open,
.release = ep_release,
-
- .aio_read = ep_aio_read,
- .aio_write = ep_aio_write,
+ .llseek = no_llseek,
+ .read = new_sync_read,
+ .write = new_sync_write,
+ .unlocked_ioctl = ep_ioctl,
+ .read_iter = ep_read_iter,
+ .write_iter = ep_write_iter,
};
/* ENDPOINT INITIALIZATION
* speed descriptor, then optional high speed descriptor.
*/
static ssize_t
-ep_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ep_config (struct ep_data *data, const char *buf, size_t len)
{
- struct ep_data *data = fd->private_data;
struct usb_ep *ep;
u32 tag;
int value, length = len;
- value = mutex_lock_interruptible(&data->lock);
- if (value < 0)
- return value;
-
if (data->state != STATE_EP_READY) {
value = -EL2HLT;
goto fail;
goto fail0;
/* we might need to change message format someday */
- if (copy_from_user (&tag, buf, 4)) {
- goto fail1;
- }
+ memcpy(&tag, buf, 4);
if (tag != 1) {
DBG(data->dev, "config %s, bad tag %d\n", data->name, tag);
goto fail0;
*/
/* full/low speed descriptor, then high speed */
- if (copy_from_user (&data->desc, buf, USB_DT_ENDPOINT_SIZE)) {
- goto fail1;
- }
+ memcpy(&data->desc, buf, USB_DT_ENDPOINT_SIZE);
if (data->desc.bLength != USB_DT_ENDPOINT_SIZE
|| data->desc.bDescriptorType != USB_DT_ENDPOINT)
goto fail0;
if (len != USB_DT_ENDPOINT_SIZE) {
if (len != 2 * USB_DT_ENDPOINT_SIZE)
goto fail0;
- if (copy_from_user (&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
- USB_DT_ENDPOINT_SIZE)) {
- goto fail1;
- }
+ memcpy(&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
+ USB_DT_ENDPOINT_SIZE);
if (data->hs_desc.bLength != USB_DT_ENDPOINT_SIZE
|| data->hs_desc.bDescriptorType
!= USB_DT_ENDPOINT) {
case USB_SPEED_LOW:
case USB_SPEED_FULL:
ep->desc = &data->desc;
- value = usb_ep_enable(ep);
- if (value == 0)
- data->state = STATE_EP_ENABLED;
break;
case USB_SPEED_HIGH:
/* fails if caller didn't provide that descriptor... */
ep->desc = &data->hs_desc;
- value = usb_ep_enable(ep);
- if (value == 0)
- data->state = STATE_EP_ENABLED;
break;
default:
DBG(data->dev, "unconnected, %s init abandoned\n",
data->name);
value = -EINVAL;
+ goto gone;
}
+ value = usb_ep_enable(ep);
if (value == 0) {
- fd->f_op = &ep_io_operations;
+ data->state = STATE_EP_ENABLED;
value = length;
}
gone:
data->desc.bDescriptorType = 0;
data->hs_desc.bDescriptorType = 0;
}
- mutex_unlock(&data->lock);
return value;
fail0:
value = -EINVAL;
goto fail;
-fail1:
- value = -EFAULT;
- goto fail;
}
static int
return value;
}
-/* used before endpoint configuration */
-static const struct file_operations ep_config_operations = {
- .llseek = no_llseek,
-
- .open = ep_open,
- .write = ep_config,
- .release = ep_release,
-};
-
/*----------------------------------------------------------------------*/
/* EP0 IMPLEMENTATION can be partly in userspace.
enum ep0_state state;
spin_lock_irq (&dev->lock);
+ if (dev->state <= STATE_DEV_OPENED) {
+ retval = -EINVAL;
+ goto done;
+ }
/* report fd mode change before acting on it */
if (dev->setup_abort) {
struct dev_data *dev = fd->private_data;
ssize_t retval = -ESRCH;
- spin_lock_irq (&dev->lock);
-
/* report fd mode change before acting on it */
if (dev->setup_abort) {
dev->setup_abort = 0;
} else
DBG (dev, "fail %s, state %d\n", __func__, dev->state);
- spin_unlock_irq (&dev->lock);
return retval;
}
struct dev_data *dev = fd->private_data;
int mask = 0;
+ if (dev->state <= STATE_DEV_OPENED)
+ return DEFAULT_POLLMASK;
+
poll_wait(fd, &dev->wait, wait);
spin_lock_irq (&dev->lock);
return ret;
}
-/* used after device configuration */
-static const struct file_operations ep0_io_operations = {
- .owner = THIS_MODULE,
- .llseek = no_llseek,
-
- .read = ep0_read,
- .write = ep0_write,
- .fasync = ep0_fasync,
- .poll = ep0_poll,
- .unlocked_ioctl = dev_ioctl,
- .release = dev_release,
-};
-
/*----------------------------------------------------------------------*/
/* The in-kernel gadget driver handles most ep0 issues, in particular
goto enomem1;
data->dentry = gadgetfs_create_file (dev->sb, data->name,
- data, &ep_config_operations);
+ data, &ep_io_operations);
if (!data->dentry)
goto enomem2;
list_add_tail (&data->epfiles, &dev->epfiles);
u32 tag;
char *kbuf;
+ spin_lock_irq(&dev->lock);
+ if (dev->state > STATE_DEV_OPENED) {
+ value = ep0_write(fd, buf, len, ptr);
+ spin_unlock_irq(&dev->lock);
+ return value;
+ }
+ spin_unlock_irq(&dev->lock);
+
if (len < (USB_DT_CONFIG_SIZE + USB_DT_DEVICE_SIZE + 4))
return -EINVAL;
* on, they can work ... except in cleanup paths that
* kick in after the ep0 descriptor is closed.
*/
- fd->f_op = &ep0_io_operations;
value = len;
}
return value;
return value;
}
-static const struct file_operations dev_init_operations = {
+static const struct file_operations ep0_operations = {
.llseek = no_llseek,
.open = dev_open,
+ .read = ep0_read,
.write = dev_config,
.fasync = ep0_fasync,
+ .poll = ep0_poll,
.unlocked_ioctl = dev_ioctl,
.release = dev_release,
};
goto Enomem;
dev->sb = sb;
- dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &dev_init_operations);
+ dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &ep0_operations);
if (!dev->dentry) {
put_dev(dev);
goto Enomem;
goto err_session;
}
/*
- * Now register the TCM vHost virtual I_T Nexus as active with the
- * call to __transport_register_session()
+ * Now register the TCM vHost virtual I_T Nexus as active.
*/
- __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+ transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
tv_nexus->tvn_se_sess, tv_nexus);
tpg->tpg_nexus = tv_nexus;
mutex_unlock(&tpg->tpg_mutex);
.isoc_maxpacket = GZERO_ISOC_MAXPACKET,
.bulk_buflen = GZERO_BULK_BUFLEN,
.qlen = GZERO_QLEN,
- .int_interval = GZERO_INT_INTERVAL,
- .int_maxpacket = GZERO_INT_MAXPACKET,
};
/*-------------------------------------------------------------------------*/
S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(isoc_maxburst, "0 - 15 (ss only)");
-module_param_named(int_interval, gzero_options.int_interval, uint,
- S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(int_interval, "1 - 16");
-
-module_param_named(int_maxpacket, gzero_options.int_maxpacket, uint,
- S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(int_maxpacket, "0 - 1023 (fs), 0 - 1024 (hs/ss)");
-
-module_param_named(int_mult, gzero_options.int_mult, uint, S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(int_mult, "0 - 2 (hs/ss only)");
-
-module_param_named(int_maxburst, gzero_options.int_maxburst, uint,
- S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(int_maxburst, "0 - 15 (ss only)");
-
static struct usb_function *func_lb;
static struct usb_function_instance *func_inst_lb;
ss_opts->isoc_maxpacket = gzero_options.isoc_maxpacket;
ss_opts->isoc_mult = gzero_options.isoc_mult;
ss_opts->isoc_maxburst = gzero_options.isoc_maxburst;
- ss_opts->int_interval = gzero_options.int_interval;
- ss_opts->int_maxpacket = gzero_options.int_maxpacket;
- ss_opts->int_mult = gzero_options.int_mult;
- ss_opts->int_maxburst = gzero_options.int_maxburst;
ss_opts->bulk_buflen = gzero_options.bulk_buflen;
func_ss = usb_get_function(func_inst_ss);
struct atmel_ehci_priv {
struct clk *iclk;
- struct clk *fclk;
struct clk *uclk;
bool clocked;
};
{
if (atmel_ehci->clocked)
return;
- if (IS_ENABLED(CONFIG_COMMON_CLK)) {
- clk_set_rate(atmel_ehci->uclk, 48000000);
- clk_prepare_enable(atmel_ehci->uclk);
- }
+
+ clk_prepare_enable(atmel_ehci->uclk);
clk_prepare_enable(atmel_ehci->iclk);
- clk_prepare_enable(atmel_ehci->fclk);
atmel_ehci->clocked = true;
}
{
if (!atmel_ehci->clocked)
return;
- clk_disable_unprepare(atmel_ehci->fclk);
+
clk_disable_unprepare(atmel_ehci->iclk);
- if (IS_ENABLED(CONFIG_COMMON_CLK))
- clk_disable_unprepare(atmel_ehci->uclk);
+ clk_disable_unprepare(atmel_ehci->uclk);
atmel_ehci->clocked = false;
}
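The simplified clock handling keeps two invariants worth noting: the clocked flag makes start/stop idempotent, and the clocks are released in the reverse order of acquisition. A sketch of the pattern (struct ehci and the clk_on()/clk_off() stubs are invented for the demo):

#include <stdio.h>
#include <stdbool.h>

struct ehci {
	bool clocked;
};

static void clk_on(const char *name)
{
	printf("enable %s\n", name);
}

static void clk_off(const char *name)
{
	printf("disable %s\n", name);
}

static void start_clock(struct ehci *e)
{
	if (e->clocked)
		return;
	clk_on("uclk");
	clk_on("iclk");
	e->clocked = true;
}

/* tear down in reverse order of setup */
static void stop_clock(struct ehci *e)
{
	if (!e->clocked)
		return;
	clk_off("iclk");
	clk_off("uclk");
	e->clocked = false;
}

int main(void)
{
	struct ehci e = { false };

	start_clock(&e);
	start_clock(&e);	/* second call is a no-op */
	stop_clock(&e);
	return 0;
}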
retval = -ENOENT;
goto fail_request_resource;
}
- atmel_ehci->fclk = devm_clk_get(&pdev->dev, "uhpck");
- if (IS_ERR(atmel_ehci->fclk)) {
- dev_err(&pdev->dev, "Error getting function clock\n");
- retval = -ENOENT;
+
+ atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk");
+ if (IS_ERR(atmel_ehci->uclk)) {
+ dev_err(&pdev->dev, "failed to get uclk\n");
+ retval = PTR_ERR(atmel_ehci->uclk);
goto fail_request_resource;
}
- if (IS_ENABLED(CONFIG_COMMON_CLK)) {
- atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk");
- if (IS_ERR(atmel_ehci->uclk)) {
- dev_err(&pdev->dev, "failed to get uclk\n");
- retval = PTR_ERR(atmel_ehci->uclk);
- goto fail_request_resource;
- }
- }
ehci = hcd_to_ehci(hcd);
/* registers start at offset 0x0 */
if (!command)
return;
- ep->ep_state |= EP_HALTED | EP_RECENTLY_HALTED;
+ ep->ep_state |= EP_HALTED;
ep->stopped_stream = stream_id;
xhci_queue_reset_ep(xhci, command, slot_id, ep_index);
goto exit;
}
- /* Reject urb if endpoint is in soft reset, queue must stay empty */
- if (xhci->devs[slot_id]->eps[ep_index].ep_state & EP_CONFIG_PENDING) {
- xhci_warn(xhci, "Can't enqueue URB while ep is in soft reset\n");
- ret = -EINVAL;
- }
-
if (usb_endpoint_xfer_isoc(&urb->ep->desc))
size = urb->number_of_packets;
else
}
}
-/* Called after clearing a halted device. USB core should have sent the control
+/* Called when clearing a halted device. The core should have sent the control
* message to clear the device halt condition. The host side of the halt should
- * already be cleared with a reset endpoint command issued immediately when the
- * STALL tx event was received.
+ * already be cleared with a reset endpoint command issued when the STALL tx
+ * event was received.
+ *
+ * Context: in_interrupt
*/
void xhci_endpoint_reset(struct usb_hcd *hcd,
struct usb_host_endpoint *ep)
{
struct xhci_hcd *xhci;
- struct usb_device *udev;
- struct xhci_virt_device *virt_dev;
- struct xhci_virt_ep *virt_ep;
- struct xhci_input_control_ctx *ctrl_ctx;
- struct xhci_command *command;
- unsigned int ep_index, ep_state;
- unsigned long flags;
- u32 ep_flag;
xhci = hcd_to_xhci(hcd);
- udev = (struct usb_device *) ep->hcpriv;
- if (!ep->hcpriv)
- return;
- virt_dev = xhci->devs[udev->slot_id];
- ep_index = xhci_get_endpoint_index(&ep->desc);
- virt_ep = &virt_dev->eps[ep_index];
- ep_state = virt_ep->ep_state;
/*
- * Implement the config ep command in xhci 4.6.8 additional note:
+ * We might need to implement the config ep cmd in xhci 4.8.1 note:
* The Reset Endpoint Command may only be issued to endpoints in the
 * Halted state. If software wishes to reset the Data Toggle or Sequence
 * Number of an endpoint that isn't in the Halted state, then software
 * may issue a Configure Endpoint Command with the Drop and Add bits set
 * for the target endpoint that is in the Stopped state.
*/
- if (ep_state & SET_DEQ_PENDING || ep_state & EP_RECENTLY_HALTED) {
- virt_ep->ep_state &= ~EP_RECENTLY_HALTED;
- xhci_dbg(xhci, "ep recently halted, no toggle reset needed\n");
- return;
- }
-
- /* Only interrupt and bulk ep's use Data toggle, USB2 spec 5.5.4-> */
- if (usb_endpoint_xfer_control(&ep->desc) ||
- usb_endpoint_xfer_isoc(&ep->desc))
- return;
-
- ep_flag = xhci_get_endpoint_flag(&ep->desc);
-
- if (ep_flag == SLOT_FLAG || ep_flag == EP0_FLAG)
- return;
-
- command = xhci_alloc_command(xhci, true, true, GFP_NOWAIT);
- if (!command) {
- xhci_err(xhci, "Could not allocate xHCI command structure.\n");
- return;
- }
-
- spin_lock_irqsave(&xhci->lock, flags);
-
- /* block ringing ep doorbell */
- virt_ep->ep_state |= EP_CONFIG_PENDING;
-
- /*
- * Make sure endpoint ring is empty before resetting the toggle/seq.
- * Driver is required to synchronously cancel all transfer request.
- *
- * xhci 4.6.6 says we can issue a configure endpoint command on a
- * running endpoint ring as long as it's idle (queue empty)
- */
-
- if (!list_empty(&virt_ep->ring->td_list)) {
- dev_err(&udev->dev, "EP not empty, refuse reset\n");
- spin_unlock_irqrestore(&xhci->lock, flags);
- goto cleanup;
- }
-
- xhci_dbg(xhci, "Reset toggle/seq for slot %d, ep_index: %d\n",
- udev->slot_id, ep_index);
-
- ctrl_ctx = xhci_get_input_control_ctx(command->in_ctx);
- if (!ctrl_ctx) {
- xhci_err(xhci, "Could not get input context, bad type. virt_dev: %p, in_ctx %p\n",
- virt_dev, virt_dev->in_ctx);
- spin_unlock_irqrestore(&xhci->lock, flags);
- goto cleanup;
- }
- xhci_setup_input_ctx_for_config_ep(xhci, command->in_ctx,
- virt_dev->out_ctx, ctrl_ctx,
- ep_flag, ep_flag);
- xhci_endpoint_copy(xhci, command->in_ctx, virt_dev->out_ctx, ep_index);
-
- xhci_queue_configure_endpoint(xhci, command, command->in_ctx->dma,
- udev->slot_id, false);
- xhci_ring_cmd_db(xhci);
- spin_unlock_irqrestore(&xhci->lock, flags);
-
- wait_for_completion(command->completion);
-
-cleanup:
- virt_ep->ep_state &= ~EP_CONFIG_PENDING;
- xhci_free_command(xhci, command);
+ /* For now just print debug to follow the situation */
+ xhci_dbg(xhci, "Endpoint 0x%x ep reset callback called\n",
+ ep->desc.bEndpointAddress);
}
static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
#define EP_HAS_STREAMS (1 << 4)
/* Transitioning the endpoint to not using streams, don't enqueue URBs */
#define EP_GETTING_NO_STREAMS (1 << 5)
-#define EP_RECENTLY_HALTED (1 << 6)
-#define EP_CONFIG_PENDING (1 << 7)
/* ---- Related to URB cancellation ---- */
struct list_head cancelled_td_list;
struct xhci_td *stopped_td;
}
if (IS_ENABLED(CONFIG_USB_ISP1761_UDC) && !udc_disabled) {
- ret = isp1760_udc_register(isp, irq, irqflags | IRQF_SHARED |
- IRQF_DISABLED);
+ ret = isp1760_udc_register(isp, irq, irqflags);
if (ret < 0) {
isp1760_hcd_unregister(&isp->hcd);
return ret;
struct usb_gadget_driver *driver)
{
struct isp1760_udc *udc = gadget_to_udc(gadget);
+ unsigned long flags;
/* The hardware doesn't support low speed. */
if (driver->max_speed < USB_SPEED_FULL) {
return -EINVAL;
}
- spin_lock(&udc->lock);
+ spin_lock_irqsave(&udc->lock, flags);
if (udc->driver) {
dev_err(udc->isp->dev, "UDC already has a gadget driver\n");
udc->driver = driver;
- spin_unlock(&udc->lock);
+ spin_unlock_irqrestore(&udc->lock, flags);
dev_dbg(udc->isp->dev, "starting UDC with driver %s\n",
driver->function);
static int isp1760_udc_stop(struct usb_gadget *gadget)
{
struct isp1760_udc *udc = gadget_to_udc(gadget);
+ unsigned long flags;
dev_dbg(udc->isp->dev, "%s\n", __func__);
isp1760_udc_write(udc, DC_MODE, 0);
- spin_lock(&udc->lock);
+ spin_lock_irqsave(&udc->lock, flags);
udc->driver = NULL;
- spin_unlock(&udc->lock);
+ spin_unlock_irqrestore(&udc->lock, flags);
return 0;
}
return -ENODEV;
}
- if (chipid != 0x00011582) {
+ if (chipid != 0x00011582 && chipid != 0x00158210) {
dev_err(udc->isp->dev, "udc: invalid chip ID 0x%08x\n", chipid);
return -ENODEV;
}
sprintf(udc->irqname, "%s (udc)", devname);
- ret = request_irq(irq, isp1760_udc_irq, IRQF_SHARED | IRQF_DISABLED |
- irqflags, udc->irqname, udc);
+ ret = request_irq(irq, isp1760_udc_irq, IRQF_SHARED | irqflags,
+ udc->irqname, udc);
if (ret < 0)
goto error;
config USB_MUSB_OMAP2PLUS
tristate "OMAP2430 and onwards"
- depends on ARCH_OMAP2PLUS && USB && OMAP_CONTROL_PHY
+ depends on ARCH_OMAP2PLUS && USB
+ depends on OMAP_CONTROL_PHY || !OMAP_CONTROL_PHY
select GENERIC_PHY
config USB_MUSB_AM35X
return NULL;
dev = bus_find_device(&platform_bus_type, NULL, node, match);
+ if (!dev)
+ return NULL;
+
ctrl_usb = dev_get_drvdata(dev);
if (!ctrl_usb)
return NULL;
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NO_ATA_1X),
+UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ "Initio Corporation",
+ "",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
UNUSUAL_DEV(0x152d, 0x0539, 0x0000, 0x9999,
"JMicron",
func = vfio_pci_set_err_trigger;
break;
}
+ break;
case VFIO_PCI_REQ_IRQ_INDEX:
switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
case VFIO_IRQ_SET_ACTION_TRIGGER:
func = vfio_pci_set_req_trigger;
break;
}
+ break;
}
if (!func)
goto out;
}
/*
- * Now register the TCM vhost virtual I_T Nexus as active with the
- * call to __transport_register_session()
+ * Now register the TCM vhost virtual I_T Nexus as active.
*/
- __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+ transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
tv_nexus->tvn_se_sess, tv_nexus);
tpg->tpg_nexus = tv_nexus;
len = clcdfb_snprintf_mode(NULL, 0, mode);
name = devm_kzalloc(dev, len + 1, GFP_KERNEL);
+ if (!name)
+ return -ENOMEM;
+
clcdfb_snprintf_mode(name, len + 1, mode);
mode->name = name;
int num = 0, i, first = 1;
int ver, rev;
- ver = edid[EDID_STRUCT_VERSION];
- rev = edid[EDID_STRUCT_REVISION];
-
mode = kzalloc(50 * sizeof(struct fb_videomode), GFP_KERNEL);
if (mode == NULL)
return NULL;
return NULL;
}
+ ver = edid[EDID_STRUCT_VERSION];
+ rev = edid[EDID_STRUCT_REVISION];
+
*dbsize = 0;
DPRINTK(" Detailed Timings\n");
#include <video/omapdss.h>
#include "dss.h"
-static struct omap_dss_device *to_dss_device_sysfs(struct device *dev)
+static ssize_t display_name_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = NULL;
-
- for_each_dss_dev(dssdev) {
- if (dssdev->dev == dev) {
- omap_dss_put_device(dssdev);
- return dssdev;
- }
- }
-
- return NULL;
-}
-
-static ssize_t display_name_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
-
return snprintf(buf, PAGE_SIZE, "%s\n",
dssdev->name ?
dssdev->name : "");
}
-static ssize_t display_enabled_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_enabled_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
-
return snprintf(buf, PAGE_SIZE, "%d\n",
omapdss_device_is_enabled(dssdev));
}
-static ssize_t display_enabled_store(struct device *dev,
- struct device_attribute *attr,
+static ssize_t display_enabled_store(struct omap_dss_device *dssdev,
const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int r;
bool enable;
return size;
}
-static ssize_t display_tear_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_tear_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
return snprintf(buf, PAGE_SIZE, "%d\n",
dssdev->driver->get_te ?
dssdev->driver->get_te(dssdev) : 0);
}
-static ssize_t display_tear_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_tear_store(struct omap_dss_device *dssdev,
+ const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int r;
bool te;
return size;
}
-static ssize_t display_timings_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_timings_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
struct omap_video_timings t;
if (!dssdev->driver->get_timings)
t.y_res, t.vfp, t.vbp, t.vsw);
}
-static ssize_t display_timings_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_timings_store(struct omap_dss_device *dssdev,
+ const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
struct omap_video_timings t = dssdev->panel.timings;
int r, found;
return size;
}
-static ssize_t display_rotate_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_rotate_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int rotate;
if (!dssdev->driver->get_rotate)
return -ENOENT;
return snprintf(buf, PAGE_SIZE, "%u\n", rotate);
}
-static ssize_t display_rotate_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_rotate_store(struct omap_dss_device *dssdev,
+ const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int rot, r;
if (!dssdev->driver->set_rotate || !dssdev->driver->get_rotate)
return size;
}
-static ssize_t display_mirror_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_mirror_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int mirror;
if (!dssdev->driver->get_mirror)
return -ENOENT;
return snprintf(buf, PAGE_SIZE, "%u\n", mirror);
}
-static ssize_t display_mirror_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_mirror_store(struct omap_dss_device *dssdev,
+ const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
int r;
bool mirror;
return size;
}
-static ssize_t display_wss_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t display_wss_show(struct omap_dss_device *dssdev, char *buf)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
unsigned int wss;
if (!dssdev->driver->get_wss)
return snprintf(buf, PAGE_SIZE, "0x%05x\n", wss);
}
-static ssize_t display_wss_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_wss_store(struct omap_dss_device *dssdev,
+ const char *buf, size_t size)
{
- struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
u32 wss;
int r;
return size;
}
-static DEVICE_ATTR(display_name, S_IRUGO, display_name_show, NULL);
-static DEVICE_ATTR(enabled, S_IRUGO|S_IWUSR,
+struct display_attribute {
+ struct attribute attr;
+ ssize_t (*show)(struct omap_dss_device *, char *);
+ ssize_t (*store)(struct omap_dss_device *, const char *, size_t);
+};
+
+#define DISPLAY_ATTR(_name, _mode, _show, _store) \
+ struct display_attribute display_attr_##_name = \
+ __ATTR(_name, _mode, _show, _store)
+
+static DISPLAY_ATTR(name, S_IRUGO, display_name_show, NULL);
+static DISPLAY_ATTR(display_name, S_IRUGO, display_name_show, NULL);
+static DISPLAY_ATTR(enabled, S_IRUGO|S_IWUSR,
display_enabled_show, display_enabled_store);
-static DEVICE_ATTR(tear_elim, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(tear_elim, S_IRUGO|S_IWUSR,
display_tear_show, display_tear_store);
-static DEVICE_ATTR(timings, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(timings, S_IRUGO|S_IWUSR,
display_timings_show, display_timings_store);
-static DEVICE_ATTR(rotate, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(rotate, S_IRUGO|S_IWUSR,
display_rotate_show, display_rotate_store);
-static DEVICE_ATTR(mirror, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(mirror, S_IRUGO|S_IWUSR,
display_mirror_show, display_mirror_store);
-static DEVICE_ATTR(wss, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(wss, S_IRUGO|S_IWUSR,
display_wss_show, display_wss_store);
-static const struct attribute *display_sysfs_attrs[] = {
- &dev_attr_display_name.attr,
- &dev_attr_enabled.attr,
- &dev_attr_tear_elim.attr,
- &dev_attr_timings.attr,
- &dev_attr_rotate.attr,
- &dev_attr_mirror.attr,
- &dev_attr_wss.attr,
+static struct attribute *display_sysfs_attrs[] = {
+ &display_attr_name.attr,
+ &display_attr_display_name.attr,
+ &display_attr_enabled.attr,
+ &display_attr_tear_elim.attr,
+ &display_attr_timings.attr,
+ &display_attr_rotate.attr,
+ &display_attr_mirror.attr,
+ &display_attr_wss.attr,
NULL
};
+static ssize_t display_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ struct omap_dss_device *dssdev;
+ struct display_attribute *display_attr;
+
+ dssdev = container_of(kobj, struct omap_dss_device, kobj);
+ display_attr = container_of(attr, struct display_attribute, attr);
+
+ if (!display_attr->show)
+ return -ENOENT;
+
+ return display_attr->show(dssdev, buf);
+}
+
+static ssize_t display_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t size)
+{
+ struct omap_dss_device *dssdev;
+ struct display_attribute *display_attr;
+
+ dssdev = container_of(kobj, struct omap_dss_device, kobj);
+ display_attr = container_of(attr, struct display_attribute, attr);
+
+ if (!display_attr->store)
+ return -ENOENT;
+
+ return display_attr->store(dssdev, buf, size);
+}
+
+static const struct sysfs_ops display_sysfs_ops = {
+ .show = display_attr_show,
+ .store = display_attr_store,
+};
+
+static struct kobj_type display_ktype = {
+ .sysfs_ops = &display_sysfs_ops,
+ .default_attrs = display_sysfs_attrs,
+};
+
int display_init_sysfs(struct platform_device *pdev)
{
struct omap_dss_device *dssdev = NULL;
int r;
for_each_dss_dev(dssdev) {
- struct kobject *kobj = &dssdev->dev->kobj;
-
- r = sysfs_create_files(kobj, display_sysfs_attrs);
+ r = kobject_init_and_add(&dssdev->kobj, &display_ktype,
+ &pdev->dev.kobj, dssdev->alias);
if (r) {
DSSERR("failed to create sysfs files\n");
- goto err;
- }
-
- r = sysfs_create_link(&pdev->dev.kobj, kobj, dssdev->alias);
- if (r) {
- sysfs_remove_files(kobj, display_sysfs_attrs);
-
- DSSERR("failed to create sysfs display link\n");
+ omap_dss_put_device(dssdev);
goto err;
}
}
struct omap_dss_device *dssdev = NULL;
for_each_dss_dev(dssdev) {
- sysfs_remove_link(&pdev->dev.kobj, dssdev->alias);
- sysfs_remove_files(&dssdev->dev->kobj,
- display_sysfs_attrs);
+ if (kobject_name(&dssdev->kobj) == NULL)
+ continue;
+
+ kobject_del(&dssdev->kobj);
+ kobject_put(&dssdev->kobj);
+
+ memset(&dssdev->kobj, 0, sizeof(dssdev->kobj));
}
}
#include <linux/module.h>
#include <linux/balloon_compaction.h>
#include <linux/oom.h>
+#include <linux/wait.h>
/*
* Balloon device works in 4K page units. So each page is pointed to by
static int balloon(void *_vballoon)
{
struct virtio_balloon *vb = _vballoon;
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
set_freezable();
while (!kthread_should_stop()) {
s64 diff;
try_to_freeze();
- wait_event_interruptible(vb->config_change,
- (diff = towards_target(vb)) != 0
- || vb->need_stats_update
- || kthread_should_stop()
- || freezing(current));
+
+ add_wait_queue(&vb->config_change, &wait);
+ for (;;) {
+ if ((diff = towards_target(vb)) != 0 ||
+ vb->need_stats_update ||
+ kthread_should_stop() ||
+ freezing(current))
+ break;
+ wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
+ }
+ remove_wait_queue(&vb->config_change, &wait);
+
if (vb->need_stats_update)
stats_handle_request(vb);
if (diff > 0)
if (err < 0)
goto out_oom_notify;
+ virtio_device_ready(vdev);
+
vb->thread = kthread_run(balloon, vb, "vballoon");
if (IS_ERR(vb->thread)) {
err = PTR_ERR(vb->thread);
void *buf, unsigned len)
{
struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
- u8 *ptr = buf;
- int i;
+ void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG;
+ u8 b;
+ __le16 w;
+ __le32 l;
- for (i = 0; i < len; i++)
- ptr[i] = readb(vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i);
+ if (vm_dev->version == 1) {
+ u8 *ptr = buf;
+ int i;
+
+ for (i = 0; i < len; i++)
+ ptr[i] = readb(base + offset + i);
+ return;
+ }
+
+ switch (len) {
+ case 1:
+ b = readb(base + offset);
+ memcpy(buf, &b, sizeof b);
+ break;
+ case 2:
+ w = cpu_to_le16(readw(base + offset));
+ memcpy(buf, &w, sizeof w);
+ break;
+ case 4:
+ l = cpu_to_le32(readl(base + offset));
+ memcpy(buf, &l, sizeof l);
+ break;
+ case 8:
+ l = cpu_to_le32(readl(base + offset));
+ memcpy(buf, &l, sizeof l);
+ l = cpu_to_le32(ioread32(base + offset + sizeof l));
+ memcpy(buf + sizeof l, &l, sizeof l);
+ break;
+ default:
+ BUG();
+ }
}
static void vm_set(struct virtio_device *vdev, unsigned offset,
const void *buf, unsigned len)
{
struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
- const u8 *ptr = buf;
- int i;
+ void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG;
+ u8 b;
+ __le16 w;
+ __le32 l;
- for (i = 0; i < len; i++)
- writeb(ptr[i], vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i);
+ if (vm_dev->version == 1) {
+ const u8 *ptr = buf;
+ int i;
+
+ for (i = 0; i < len; i++)
+ writeb(ptr[i], base + offset + i);
+
+ return;
+ }
+
+ switch (len) {
+ case 1:
+ memcpy(&b, buf, sizeof b);
+ writeb(b, base + offset);
+ break;
+ case 2:
+ memcpy(&w, buf, sizeof w);
+ writew(le16_to_cpu(w), base + offset);
+ break;
+ case 4:
+ memcpy(&l, buf, sizeof l);
+ writel(le32_to_cpu(l), base + offset);
+ break;
+ case 8:
+ memcpy(&l, buf, sizeof l);
+ writel(le32_to_cpu(l), base + offset);
+ memcpy(&l, buf + sizeof l, sizeof l);
+ writel(le32_to_cpu(l), base + offset + sizeof l);
+ break;
+ default:
+ BUG();
+ }
+}
+
+static u32 vm_generation(struct virtio_device *vdev)
+{
+ struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
+
+ if (vm_dev->version == 1)
+ return 0;
+ else
+ return readl(vm_dev->base + VIRTIO_MMIO_CONFIG_GENERATION);
}
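
The new generation hook exists so that multi-field config reads can be retried
when the device updates the config space mid-read. A minimal consumer-side
sketch of that loop (the field offset and destination variable are
illustrative, not taken from this patch):

	u32 gen;
	u64 capacity;

	do {
		gen = vdev->config->generation(vdev);
		vdev->config->get(vdev, 0 /* e.g. offsetof(cfg, capacity) */,
				  &capacity, sizeof(capacity));
	} while (gen != vdev->config->generation(vdev));

For legacy (version 1) devices vm_generation() always returns 0, so the loop
degenerates to a single read.
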
static u8 vm_get_status(struct virtio_device *vdev)
static const struct virtio_config_ops virtio_mmio_config_ops = {
.get = vm_get,
.set = vm_set,
+ .generation = vm_generation,
.get_status = vm_get_status,
.set_status = vm_set_status,
.reset = vm_reset,
#define PDC_WDT_MIN_TIMEOUT 1
#define PDC_WDT_DEF_TIMEOUT 64
-static int heartbeat;
+static int heartbeat = PDC_WDT_DEF_TIMEOUT;
module_param(heartbeat, int, 0);
-MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds. "
- "(default = " __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")");
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds "
+ "(default=" __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")");
static bool nowayout = WATCHDOG_NOWAYOUT;
module_param(nowayout, bool, 0);
pdc_wdt->wdt_dev.ops = &pdc_wdt_ops;
pdc_wdt->wdt_dev.max_timeout = 1 << PDC_WDT_CONFIG_DELAY_MASK;
pdc_wdt->wdt_dev.parent = &pdev->dev;
+ watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt);
ret = watchdog_init_timeout(&pdc_wdt->wdt_dev, heartbeat, &pdev->dev);
if (ret < 0) {
watchdog_set_nowayout(&pdc_wdt->wdt_dev, nowayout);
platform_set_drvdata(pdev, pdc_wdt);
- watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt);
ret = watchdog_register_device(&pdc_wdt->wdt_dev);
if (ret)
u32 reg;
struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
void __iomem *wdt_base = mtk_wdt->wdt_base;
- u32 ret;
+ int ret;
ret = mtk_wdt_set_timeout(wdt_dev, wdt_dev->timeout);
if (ret < 0)
pirq_query_unmask(irq);
rc = set_evtchn_to_irq(evtchn, irq);
- if (rc != 0) {
- pr_err("irq%d: Failed to set port to irq mapping (%d)\n",
- irq, rc);
- xen_evtchn_close(evtchn);
- return 0;
- }
+ if (rc)
+ goto err;
+
bind_evtchn_to_cpu(evtchn, 0);
info->evtchn = evtchn;
+ rc = xen_evtchn_port_setup(info);
+ if (rc)
+ goto err;
+
out:
unmask_evtchn(evtchn);
eoi_pirq(irq_get_irq_data(irq));
return 0;
+
+err:
+ pr_err("irq%d: Failed to set port to irq mapping (%d)\n", irq, rc);
+ xen_evtchn_close(evtchn);
+ return 0;
}
static unsigned int startup_pirq(struct irq_data *data)
#include "conf_space.h"
#include "conf_space_quirks.h"
-static bool permissive;
+bool permissive;
module_param(permissive, bool, 0644);
/* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
void *data;
};
+extern bool permissive;
+
#define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset)
/* Add fields to a device - the add_fields macro expects to get a pointer to
#include "pciback.h"
#include "conf_space.h"
+struct pci_cmd_info {
+ u16 val;
+};
+
struct pci_bar_info {
u32 val;
u32 len_val;
#define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO))
#define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER)
-static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data)
+/* Bits guests are allowed to control in permissive mode. */
+#define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \
+ PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \
+ PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK)
+
+static void *command_init(struct pci_dev *dev, int offset)
{
- int i;
- int ret;
-
- ret = xen_pcibk_read_config_word(dev, offset, value, data);
- if (!pci_is_enabled(dev))
- return ret;
-
- for (i = 0; i < PCI_ROM_RESOURCE; i++) {
- if (dev->resource[i].flags & IORESOURCE_IO)
- *value |= PCI_COMMAND_IO;
- if (dev->resource[i].flags & IORESOURCE_MEM)
- *value |= PCI_COMMAND_MEMORY;
+ struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
+ int err;
+
+ if (!cmd)
+ return ERR_PTR(-ENOMEM);
+
+ err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val);
+ if (err) {
+ kfree(cmd);
+ return ERR_PTR(err);
}
+ return cmd;
+}
+
+static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data)
+{
+ int ret = pci_read_config_word(dev, offset, value);
+ const struct pci_cmd_info *cmd = data;
+
+ *value &= PCI_COMMAND_GUEST;
+ *value |= cmd->val & ~PCI_COMMAND_GUEST;
+
return ret;
}
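
To make the merge in command_read() concrete (values hypothetical; with the
standard PCI command bit definitions the mask above works out to 0x02bc):

	u16 hw   = 0x0006;	/* hardware: PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER */
	u16 save = 0x0001;	/* cmd->val: guest last wrote PCI_COMMAND_IO only */
	u16 seen = (hw & 0x02bc) | (save & ~0x02bc);
	/* seen == 0x0005: MASTER mirrors the hardware state, while IO and
	 * MEMORY reflect what the guest itself last wrote */
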
{
struct xen_pcibk_dev_data *dev_data;
int err;
+ u16 val;
+ struct pci_cmd_info *cmd = data;
dev_data = pci_get_drvdata(dev);
if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
}
}
+ cmd->val = value;
+
+ if (!permissive && (!dev_data || !dev_data->permissive))
+ return 0;
+
+ /* Only allow the guest to control certain bits. */
+ err = pci_read_config_word(dev, offset, &val);
+ if (err || val == value)
+ return err;
+
+ value &= PCI_COMMAND_GUEST;
+ value |= val & ~PCI_COMMAND_GUEST;
+
return pci_write_config_word(dev, offset, value);
}
{
.offset = PCI_COMMAND,
.size = 2,
+ .init = command_init,
+ .release = bar_release,
.u.w.read = command_read,
.u.w.write = command_write,
},
name);
goto out;
}
- /*
- * Now register the TCM pvscsi virtual I_T Nexus as active with the
- * call to __transport_register_session()
- */
- __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+ /* Now register the TCM pvscsi virtual I_T Nexus as active. */
+ transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
tv_nexus->tvn_se_sess, tv_nexus);
tpg->tpg_nexus = tv_nexus;
boff = tmp % bsize;
if (boff) {
bh = affs_bread_ino(inode, bidx, 0);
- if (IS_ERR(bh))
- return PTR_ERR(bh);
+ if (IS_ERR(bh)) {
+ written = PTR_ERR(bh);
+ goto err_first_bh;
+ }
tmp = min(bsize - boff, to - from);
BUG_ON(boff + tmp > bsize || tmp > bsize);
memcpy(AFFS_DATA(bh) + boff, data + from, tmp);
bidx++;
} else if (bidx) {
bh = affs_bread_ino(inode, bidx - 1, 0);
- if (IS_ERR(bh))
- return PTR_ERR(bh);
+ if (IS_ERR(bh)) {
+ written = PTR_ERR(bh);
+ goto err_first_bh;
+ }
}
while (from + bsize <= to) {
prev_bh = bh;
bh = affs_getemptyblk_ino(inode, bidx);
if (IS_ERR(bh))
- goto out;
+ goto err_bh;
memcpy(AFFS_DATA(bh), data + from, bsize);
if (buffer_new(bh)) {
AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA);
prev_bh = bh;
bh = affs_bread_ino(inode, bidx, 1);
if (IS_ERR(bh))
- goto out;
+ goto err_bh;
tmp = min(bsize, to - from);
BUG_ON(tmp > bsize);
memcpy(AFFS_DATA(bh), data + from, tmp);
if (tmp > inode->i_size)
inode->i_size = AFFS_I(inode)->mmu_private = tmp;
+err_first_bh:
unlock_page(page);
page_cache_release(page);
return written;
-out:
+err_bh:
bh = prev_bh;
if (!written)
written = PTR_ERR(bh);
int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans,
struct btrfs_root *root);
+int btrfs_setup_space_cache(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root);
int btrfs_extent_readonly(struct btrfs_root *root, u64 bytenr);
int btrfs_free_block_groups(struct btrfs_fs_info *info);
int btrfs_read_block_groups(struct btrfs_root *root);
loff_t actual_len, u64 *alloc_hint);
int btrfs_inode_check_errors(struct inode *inode);
extern const struct dentry_operations btrfs_dentry_operations;
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+void btrfs_test_inode_set_ops(struct inode *inode);
+#endif
/* ioctl.c */
long btrfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
}
if (btrfs_super_sys_array_size(sb) < sizeof(struct btrfs_disk_key)
+ sizeof(struct btrfs_chunk)) {
- printk(KERN_ERR "BTRFS: system chunk array too small %u < %lu\n",
+ printk(KERN_ERR "BTRFS: system chunk array too small %u < %zu\n",
btrfs_super_sys_array_size(sb),
sizeof(struct btrfs_disk_key)
+ sizeof(struct btrfs_chunk));
return ret;
}
+int btrfs_setup_space_cache(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root)
+{
+ struct btrfs_block_group_cache *cache, *tmp;
+ struct btrfs_transaction *cur_trans = trans->transaction;
+ struct btrfs_path *path;
+
+ if (list_empty(&cur_trans->dirty_bgs) ||
+ !btrfs_test_opt(root, SPACE_CACHE))
+ return 0;
+
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
+ /* Could add new block groups, use _safe just in case */
+ list_for_each_entry_safe(cache, tmp, &cur_trans->dirty_bgs,
+ dirty_list) {
+ if (cache->disk_cache_state == BTRFS_DC_CLEAR)
+ cache_save_setup(cache, trans, path);
+ }
+
+ btrfs_free_path(path);
+ return 0;
+}
+
int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans,
struct btrfs_root *root)
{
num_bytes = ALIGN(num_bytes, root->sectorsize);
spin_lock(&BTRFS_I(inode)->lock);
- BTRFS_I(inode)->outstanding_extents++;
+ nr_extents = (unsigned)div64_u64(num_bytes +
+ BTRFS_MAX_EXTENT_SIZE - 1,
+ BTRFS_MAX_EXTENT_SIZE);
+ BTRFS_I(inode)->outstanding_extents += nr_extents;
+ nr_extents = 0;
if (BTRFS_I(inode)->outstanding_extents >
BTRFS_I(inode)->reserved_extents)
if (dropped > 0)
to_free += btrfs_calc_trans_metadata_size(root, dropped);
+ if (btrfs_test_is_dummy_root(root))
+ return;
+
trace_btrfs_space_reservation(root->fs_info, "delalloc",
btrfs_ino(inode), to_free, 0);
if (root->fs_info->quota_enabled) {
/* Should be safe to release our pages at this point */
btrfs_release_extent_buffer_page(eb);
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ if (unlikely(test_bit(EXTENT_BUFFER_DUMMY, &eb->bflags))) {
+ __free_extent_buffer(eb);
+ return 1;
+ }
+#endif
call_rcu(&eb->rcu_head, btrfs_release_extent_buffer_rcu);
return 1;
}
static int btrfs_dirty_inode(struct inode *inode);
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+void btrfs_test_inode_set_ops(struct inode *inode)
+{
+ BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+}
+#endif
+
static int btrfs_init_inode_security(struct btrfs_trans_handle *trans,
struct inode *inode, struct inode *dir,
const struct qstr *qstr)
u64 new_size;
/*
- * We need the largest size of the remaining extent to see if we
- * need to add a new outstanding extent. Think of the following
- * case
- *
- * [MEAX_EXTENT_SIZEx2 - 4k][4k]
- *
- * The new_size would just be 4k and we'd think we had enough
- * outstanding extents for this if we only took one side of the
- * split, same goes for the other direction. We need to see if
- * the larger size still is the same amount of extents as the
- * original size, because if it is we need to add a new
- * outstanding extent. But if we split up and the larger size
- * is less than the original then we are good to go since we've
- * already accounted for the extra extent in our original
- * accounting.
+	 * See the explanation in btrfs_merge_extent_hook; the same
+	 * applies here, just in reverse.
*/
new_size = orig->end - split + 1;
- if ((split - orig->start) > new_size)
- new_size = split - orig->start;
-
- num_extents = div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1,
+ num_extents = div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
BTRFS_MAX_EXTENT_SIZE);
- if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
- BTRFS_MAX_EXTENT_SIZE) < num_extents)
+ new_size = split - orig->start;
+ num_extents += div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
+ BTRFS_MAX_EXTENT_SIZE);
+ if (div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1,
+ BTRFS_MAX_EXTENT_SIZE) >= num_extents)
return;
}
if (!(other->state & EXTENT_DELALLOC))
return;
- old_size = other->end - other->start + 1;
- new_size = old_size + (new->end - new->start + 1);
+ if (new->start > other->start)
+ new_size = new->end - other->start + 1;
+ else
+ new_size = other->end - new->start + 1;
/* we're not bigger than the max, unreserve the space and go */
if (new_size <= BTRFS_MAX_EXTENT_SIZE) {
}
/*
- * If we grew by another max_extent, just return, we want to keep that
- * reserved amount.
+	 * We have to add up both sides to figure out how many extents were
+	 * accounted for before we merged into one big extent. If the number of
+	 * extents we accounted for is <= the amount we need for the new range
+	 * then we can return, otherwise we drop. Think of it like this
+ *
+ * [ 4k][MAX_SIZE]
+ *
+	 * So we've grown the extent by a MAX_SIZE extent; this means we need
+	 * 2 outstanding extents. On one side we have 1 and on the other side
+	 * we have 1, so they are == and we can return. But in this case
+ *
+ * [MAX_SIZE+4k][MAX_SIZE+4k]
+ *
+ * Each range on their own accounts for 2 extents, but merged together
+ * they are only 3 extents worth of accounting, so we need to drop in
+ * this case.
*/
+ old_size = other->end - other->start + 1;
num_extents = div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1,
BTRFS_MAX_EXTENT_SIZE);
+ old_size = new->end - new->start + 1;
+ num_extents += div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1,
+ BTRFS_MAX_EXTENT_SIZE);
+
if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
- BTRFS_MAX_EXTENT_SIZE) > num_extents)
+ BTRFS_MAX_EXTENT_SIZE) >= num_extents)
return;
spin_lock(&BTRFS_I(inode)->lock);
spin_unlock(&BTRFS_I(inode)->lock);
}
+ /* For sanity tests */
+ if (btrfs_test_is_dummy_root(root))
+ return;
+
__percpu_counter_add(&root->fs_info->delalloc_bytes, len,
root->fs_info->delalloc_batch);
spin_lock(&BTRFS_I(inode)->lock);
root != root->fs_info->tree_root)
btrfs_delalloc_release_metadata(inode, len);
+ /* For sanity tests. */
+ if (btrfs_test_is_dummy_root(root))
+ return;
+
if (root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID
&& do_list && !(state->state & EXTENT_NORESERVE))
btrfs_free_reserved_data_space(inode, len);
u64 start = iblock << inode->i_blkbits;
u64 lockstart, lockend;
u64 len = bh_result->b_size;
- u64 orig_len = len;
+ u64 *outstanding_extents = NULL;
int unlock_bits = EXTENT_LOCKED;
int ret = 0;
lockstart = start;
lockend = start + len - 1;
+ if (current->journal_info) {
+ /*
+ * Need to pull our outstanding extents and set journal_info to NULL so
+	 * that anything that needs to check if there's a transaction doesn't get
+ * confused.
+ */
+ outstanding_extents = current->journal_info;
+ current->journal_info = NULL;
+ }
+
/*
* If this errors out it's because we couldn't invalidate pagecache for
* this range and we need to fallback to buffered.
if (start + len > i_size_read(inode))
i_size_write(inode, start + len);
- if (len < orig_len) {
+ /*
+ * If we have an outstanding_extents count still set then we're
+ * within our reservation, otherwise we need to adjust our inode
+ * counter appropriately.
+ */
+ if (*outstanding_extents) {
+ (*outstanding_extents)--;
+ } else {
spin_lock(&BTRFS_I(inode)->lock);
BTRFS_I(inode)->outstanding_extents++;
spin_unlock(&BTRFS_I(inode)->lock);
}
+
+ current->journal_info = outstanding_extents;
btrfs_free_reserved_data_space(inode, len);
}
unlock_err:
clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend,
unlock_bits, 1, 0, &cached_state, GFP_NOFS);
+ if (outstanding_extents)
+ current->journal_info = outstanding_extents;
return ret;
}
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
+ u64 outstanding_extents = 0;
size_t count = 0;
int flags = 0;
bool wakeup = true;
ret = btrfs_delalloc_reserve_space(inode, count);
if (ret)
goto out;
+ outstanding_extents = div64_u64(count +
+ BTRFS_MAX_EXTENT_SIZE - 1,
+ BTRFS_MAX_EXTENT_SIZE);
+
+ /*
+ * We need to know how many extents we reserved so that we can
+ * do the accounting properly if we go over the number we
+ * originally calculated. Abuse current->journal_info for this.
+ */
+ current->journal_info = &outstanding_extents;
} else if (test_bit(BTRFS_INODE_READDIO_NEED_LOCK,
&BTRFS_I(inode)->runtime_flags)) {
inode_dio_done(inode);
iter, offset, btrfs_get_blocks_direct, NULL,
btrfs_submit_direct, flags);
if (rw & WRITE) {
+ current->journal_info = NULL;
if (ret < 0 && ret != -EIOCBQUEUED)
btrfs_delalloc_release_space(inode, count);
else if (ret >= 0 && (size_t)ret < count)
if (oper1->seq < oper2->seq)
return -1;
if (oper1->seq > oper2->seq)
- return -1;
+ return 1;
if (oper1->ref_root < oper2->ref_root)
return -1;
if (oper1->ref_root > oper2->ref_root)
return ret;
}
+static int test_extent_accounting(void)
+{
+ struct inode *inode = NULL;
+ struct btrfs_root *root = NULL;
+ int ret = -ENOMEM;
+
+ inode = btrfs_new_test_inode();
+ if (!inode) {
+ test_msg("Couldn't allocate inode\n");
+ return ret;
+ }
+
+ root = btrfs_alloc_dummy_root();
+ if (IS_ERR(root)) {
+ test_msg("Couldn't allocate root\n");
+ goto out;
+ }
+
+ root->fs_info = btrfs_alloc_dummy_fs_info();
+ if (!root->fs_info) {
+ test_msg("Couldn't allocate dummy fs info\n");
+ goto out;
+ }
+
+ BTRFS_I(inode)->root = root;
+ btrfs_test_inode_set_ops(inode);
+
+ /* [BTRFS_MAX_EXTENT_SIZE] */
+ BTRFS_I(inode)->outstanding_extents++;
+ ret = btrfs_set_extent_delalloc(inode, 0, BTRFS_MAX_EXTENT_SIZE - 1,
+ NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 1) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 1, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* [BTRFS_MAX_EXTENT_SIZE][4k] */
+ BTRFS_I(inode)->outstanding_extents++;
+ ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE,
+ BTRFS_MAX_EXTENT_SIZE + 4095, NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 2) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 2, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* [BTRFS_MAX_EXTENT_SIZE/2][4K HOLE][the rest] */
+ ret = clear_extent_bit(&BTRFS_I(inode)->io_tree,
+ BTRFS_MAX_EXTENT_SIZE >> 1,
+ (BTRFS_MAX_EXTENT_SIZE >> 1) + 4095,
+ EXTENT_DELALLOC | EXTENT_DIRTY |
+ EXTENT_UPTODATE | EXTENT_DO_ACCOUNTING, 0, 0,
+ NULL, GFP_NOFS);
+ if (ret) {
+ test_msg("clear_extent_bit returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 2) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 2, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* [BTRFS_MAX_EXTENT_SIZE][4K] */
+ BTRFS_I(inode)->outstanding_extents++;
+ ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE >> 1,
+ (BTRFS_MAX_EXTENT_SIZE >> 1) + 4095,
+ NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 2) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 2, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /*
+ * [BTRFS_MAX_EXTENT_SIZE+4K][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4K]
+ *
+ * I'm artificially adding 2 to outstanding_extents because in the
+ * buffered IO case we'd add things up as we go, but I don't feel like
+	 * doing that here; this isn't the interesting case we want to test.
+ */
+ BTRFS_I(inode)->outstanding_extents += 2;
+ ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE + 8192,
+ (BTRFS_MAX_EXTENT_SIZE << 1) + 12287,
+ NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 4) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 4, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* [BTRFS_MAX_EXTENT_SIZE+4k][4k][BTRFS_MAX_EXTENT_SIZE+4k] */
+ BTRFS_I(inode)->outstanding_extents++;
+ ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096,
+ BTRFS_MAX_EXTENT_SIZE+8191, NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 3) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 3, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* [BTRFS_MAX_EXTENT_SIZE+4k][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4k] */
+ ret = clear_extent_bit(&BTRFS_I(inode)->io_tree,
+ BTRFS_MAX_EXTENT_SIZE+4096,
+ BTRFS_MAX_EXTENT_SIZE+8191,
+ EXTENT_DIRTY | EXTENT_DELALLOC |
+ EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+ NULL, GFP_NOFS);
+ if (ret) {
+ test_msg("clear_extent_bit returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 4) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 4, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /*
+ * Refill the hole again just for good measure, because I thought it
+ * might fail and I'd rather satisfy my paranoia at this point.
+ */
+ BTRFS_I(inode)->outstanding_extents++;
+ ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096,
+ BTRFS_MAX_EXTENT_SIZE+8191, NULL);
+ if (ret) {
+ test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents != 3) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 3, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+
+ /* Empty */
+ ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1,
+ EXTENT_DIRTY | EXTENT_DELALLOC |
+ EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+ NULL, GFP_NOFS);
+ if (ret) {
+ test_msg("clear_extent_bit returned %d\n", ret);
+ goto out;
+ }
+ if (BTRFS_I(inode)->outstanding_extents) {
+ ret = -EINVAL;
+ test_msg("Miscount, wanted 0, got %u\n",
+ BTRFS_I(inode)->outstanding_extents);
+ goto out;
+ }
+ ret = 0;
+out:
+ if (ret)
+ clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1,
+ EXTENT_DIRTY | EXTENT_DELALLOC |
+ EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+ NULL, GFP_NOFS);
+ iput(inode);
+ btrfs_free_dummy_root(root);
+ return ret;
+}
+
int btrfs_test_inodes(void)
{
int ret;
if (ret)
return ret;
test_msg("Running hole first btrfs_get_extent test\n");
- return test_hole_first();
+ ret = test_hole_first();
+ if (ret)
+ return ret;
+ test_msg("Running outstanding_extents tests\n");
+ return test_extent_accounting();
}
u64 old_root_bytenr;
u64 old_root_used;
struct btrfs_root *tree_root = root->fs_info->tree_root;
- bool extent_root = (root->objectid == BTRFS_EXTENT_TREE_OBJECTID);
old_root_used = btrfs_root_used(&root->root_item);
- btrfs_write_dirty_block_groups(trans, root);
while (1) {
old_root_bytenr = btrfs_root_bytenr(&root->root_item);
if (old_root_bytenr == root->node->start &&
- old_root_used == btrfs_root_used(&root->root_item) &&
- (!extent_root ||
- list_empty(&trans->transaction->dirty_bgs)))
+ old_root_used == btrfs_root_used(&root->root_item))
break;
btrfs_set_root_node(&root->root_item, root->node);
return ret;
old_root_used = btrfs_root_used(&root->root_item);
- if (extent_root) {
- ret = btrfs_write_dirty_block_groups(trans, root);
- if (ret)
- return ret;
- }
- ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
- if (ret)
- return ret;
}
return 0;
struct btrfs_root *root)
{
struct btrfs_fs_info *fs_info = root->fs_info;
+ struct list_head *dirty_bgs = &trans->transaction->dirty_bgs;
struct list_head *next;
struct extent_buffer *eb;
int ret;
if (ret)
return ret;
+ ret = btrfs_setup_space_cache(trans, root);
+ if (ret)
+ return ret;
+
/* run_qgroups might have added some more refs */
ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
if (ret)
return ret;
-
+again:
while (!list_empty(&fs_info->dirty_cowonly_roots)) {
next = fs_info->dirty_cowonly_roots.next;
list_del_init(next);
ret = update_cowonly_root(trans, root);
if (ret)
return ret;
+ ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
+ if (ret)
+ return ret;
}
+ while (!list_empty(dirty_bgs)) {
+ ret = btrfs_write_dirty_block_groups(trans, root);
+ if (ret)
+ return ret;
+ ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
+ if (ret)
+ return ret;
+ }
+
+ if (!list_empty(&fs_info->dirty_cowonly_roots))
+ goto again;
+
list_add_tail(&fs_info->extent_root->dirty_list,
&trans->transaction->switch_commits);
btrfs_after_dev_replace_commit(fs_info);
wait_for_commit(root, cur_trans);
+ if (unlikely(cur_trans->aborted))
+ ret = cur_trans->aborted;
+
btrfs_put_transaction(cur_trans);
return ret;
newpage = buf->page;
- if (WARN_ON(!PageUptodate(newpage)))
- return -EIO;
+ if (!PageUptodate(newpage))
+ SetPageUptodate(newpage);
ClearPageMappedToDisk(newpage);
return err;
}
+static int fuse_dev_open(struct inode *inode, struct file *file)
+{
+ /*
+ * The fuse device's file's private_data is used to hold
+	 * The fuse device's file private_data is used to hold the
+	 * fuse_conn(ection) once a filesystem has been mounted on it, and
+	 * also to track whether this file has already been mounted.
+ file->private_data = NULL;
+ return 0;
+}
+
static ssize_t fuse_dev_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
static int fuse_notify(struct fuse_conn *fc, enum fuse_notify_code code,
unsigned int size, struct fuse_copy_state *cs)
{
+ /* Don't try to move pages (yet) */
+ cs->move_pages = 0;
+
switch (code) {
case FUSE_NOTIFY_POLL:
return fuse_notify_poll(fc, size, cs);
const struct file_operations fuse_dev_operations = {
.owner = THIS_MODULE,
+ .open = fuse_dev_open,
.llseek = no_llseek,
.read = do_sync_read,
.aio_read = fuse_dev_read,
hfs_bnode_write(node, entry, data_off + key_len, entry_len);
hfs_bnode_dump(node);
- if (new_node) {
- /* update parent key if we inserted a key
- * at the start of the first node
- */
- if (!rec && new_node != node)
- hfs_brec_update_parent(fd);
+ /*
+ * update parent key if we inserted a key
+ * at the start of the node and it is not the new node
+ */
+ if (!rec && new_node != node) {
+ hfs_bnode_read_key(node, fd->search_key, data_off + size);
+ hfs_brec_update_parent(fd);
+ }
+ if (new_node) {
hfs_bnode_put(fd->bnode);
if (!new_node->parent) {
hfs_btree_inc_height(tree);
goto again;
}
- if (!rec)
- hfs_brec_update_parent(fd);
-
return 0;
}
if (IS_ERR(parent))
return PTR_ERR(parent);
__hfs_brec_find(parent, fd, hfs_find_rec_by_key);
+ if (fd->record < 0)
+ return -ENOENT;
hfs_bnode_dump(parent);
rec = fd->record;
goto out_free;
}
+ of->event = atomic_read(&of->kn->attr.open->event);
ops = kernfs_ops(of->kn);
if (ops->read)
len = ops->read(of, buf, len, *ppos);
break;
}
}
- trace_generic_delete_lease(inode, fl);
+ trace_generic_delete_lease(inode, victim);
if (victim)
error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock);
rpc_ntop((struct sockaddr *)&clp->cl_addr, addr_str, sizeof(addr_str));
- nfsd4_cb_layout_fail(ls);
-
printk(KERN_WARNING
"nfsd: client %s failed to respond to layout recall. "
" Fencing..\n", addr_str);
struct the_nilfs *nilfs)
{
struct nilfs_inode_info *ii, *n;
+ int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE);
int defer_iput = false;
spin_lock(&nilfs->ns_inode_lock);
brelse(ii->i_bh);
ii->i_bh = NULL;
list_del_init(&ii->i_dirty);
- if (!ii->vfs_inode.i_nlink) {
+ if (!ii->vfs_inode.i_nlink || during_mount) {
/*
- * Defer calling iput() to avoid a deadlock
- * over I_SYNC flag for inodes with i_nlink == 0
+ * Defer calling iput() to avoid deadlocks if
+ * i_nlink == 0 or mount is not yet finished.
*/
list_add_tail(&ii->i_dirty, &sci->sc_iput_queue);
defer_iput = true;
!(marks_mask & FS_ISDIR & ~marks_ignored_mask))
return false;
- if (event_mask & marks_mask & ~marks_ignored_mask)
+ if (event_mask & FAN_ALL_OUTGOING_EVENTS & marks_mask &
+ ~marks_ignored_mask)
return true;
return false;
static inline int ocfs2_supports_append_dio(struct ocfs2_super *osb)
{
- if (osb->s_feature_ro_compat & OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+ if (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
return 1;
return 0;
}
| OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS \
| OCFS2_FEATURE_INCOMPAT_REFCOUNT_TREE \
| OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG \
- | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO)
+ | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO \
+ | OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
#define OCFS2_FEATURE_RO_COMPAT_SUPP (OCFS2_FEATURE_RO_COMPAT_UNWRITTEN \
| OCFS2_FEATURE_RO_COMPAT_USRQUOTA \
- | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA \
- | OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+ | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA)
/*
* Heartbeat-only devices are missing journals and other files. The
*/
#define OCFS2_FEATURE_INCOMPAT_CLUSTERINFO 0x4000
+/*
+ * Append Direct IO support
+ */
+#define OCFS2_FEATURE_INCOMPAT_APPEND_DIO 0x8000
+
/*
* backup superblock flag is used to indicate that this volume
* has backup superblocks.
#define OCFS2_FEATURE_RO_COMPAT_USRQUOTA 0x0002
#define OCFS2_FEATURE_RO_COMPAT_GRPQUOTA 0x0004
-/*
- * Append Direct IO support
- */
-#define OCFS2_FEATURE_RO_COMPAT_APPEND_DIO 0x0008
/* The byte offset of the first backup block will be 1G.
* The following will be 4G, 16G, 64G, 256G and 1T.
{
struct ovl_fs *ufs = sb->s_fs_info;
- if (!(*flags & MS_RDONLY) &&
- (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY)))
+ if (!(*flags & MS_RDONLY) && !ufs->upper_mnt)
return -EROFS;
return 0;
break;
default:
+ pr_err("overlayfs: unrecognized mount option \"%s\" or missing value\n", p);
return -EINVAL;
}
}
+
+ /* Workdir is useless in non-upper mount */
+ if (!config->upperdir && config->workdir) {
+ pr_info("overlayfs: option \"workdir=%s\" is useless in a non-upper mount, ignore\n",
+ config->workdir);
+ kfree(config->workdir);
+ config->workdir = NULL;
+ }
+
return 0;
}
sb->s_stack_depth = 0;
if (ufs->config.upperdir) {
- /* FIXME: workdir is not needed for a R/O mount */
if (!ufs->config.workdir) {
pr_err("overlayfs: missing 'workdir'\n");
goto out_free_config;
if (err)
goto out_free_config;
+ /* Upper fs should not be r/o */
+ if (upperpath.mnt->mnt_sb->s_flags & MS_RDONLY) {
+ pr_err("overlayfs: upper fs is r/o, try multi-lower layers mount\n");
+ err = -EINVAL;
+ goto out_put_upperpath;
+ }
+
err = ovl_mount_dir(ufs->config.workdir, &workpath);
if (err)
goto out_put_upperpath;
err = -EINVAL;
stacklen = ovl_split_lowerdirs(lowertmp);
- if (stacklen > OVL_MAX_STACK)
+ if (stacklen > OVL_MAX_STACK) {
+		pr_err("overlayfs: too many lower directories, limit is %d\n",
+ OVL_MAX_STACK);
goto out_free_lowertmp;
+ } else if (!ufs->config.upperdir && stacklen == 1) {
+ pr_err("overlayfs: at least 2 lowerdir are needed while upperdir nonexistent\n");
+ goto out_free_lowertmp;
+ }
stack = kcalloc(stacklen, sizeof(struct path), GFP_KERNEL);
if (!stack)
ufs->numlower++;
}
- /* If the upper fs is r/o or nonexistent, we mark overlayfs r/o too */
- if (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY))
+ /* If the upper fs is nonexistent, we mark overlayfs r/o too */
+ if (!ufs->upper_mnt)
sb->s_flags |= MS_RDONLY;
sb->s_d_op = &ovl_dentry_operations;
static int pagemap_open(struct inode *inode, struct file *file)
{
+ /* do not disclose physical addresses: attack vector */
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
pr_warn_once("Bits 55-60 of /proc/PID/pagemap entries are about "
"to stop being page-shift some time soon. See the "
"linux/Documentation/vm/pagemap.txt for details.\n");
unsigned int cmd;
int flags;
drm_ioctl_t *func;
- unsigned int cmd_drv;
const char *name;
};
* ioctl, for use by drm_ioctl().
*/
-#define DRM_IOCTL_DEF_DRV(ioctl, _func, _flags) \
- [DRM_IOCTL_NR(DRM_##ioctl)] = {.cmd = DRM_##ioctl, .func = _func, .flags = _flags, .cmd_drv = DRM_IOCTL_##ioctl, .name = #ioctl}
+#define DRM_IOCTL_DEF_DRV(ioctl, _func, _flags) \
+ [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = { \
+ .cmd = DRM_IOCTL_##ioctl, \
+ .func = _func, \
+ .flags = _flags, \
+ .name = #ioctl \
+ }
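
For reference, a driver-side ioctl table built with the reworked macro could
look like this; FOO_SUBMIT, DRM_IOCTL_FOO_SUBMIT and foo_submit_ioctl are
hypothetical names, not part of this patch:

	static const struct drm_ioctl_desc foo_ioctls[] = {
		DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
				  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
	};

Since the array index is now derived from the full DRM_IOCTL_FOO_SUBMIT number
minus DRM_COMMAND_BASE, the separate .cmd_drv field becomes redundant, which
is why it is dropped above.
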
/* Event queued up for userspace to read */
struct drm_pending_event {
bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, int pbn, int *slots);
+int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+
void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
#define INTEL_VLV_D_IDS(info) \
INTEL_VGA_DEVICE(0x0155, info)
-#define _INTEL_BDW_M(gt, id, info) \
- INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-#define _INTEL_BDW_D(gt, id, info) \
- INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-
-#define _INTEL_BDW_M_IDS(gt, info) \
- _INTEL_BDW_M(gt, 0x1602, info), /* Halo */ \
- _INTEL_BDW_M(gt, 0x1606, info), /* ULT */ \
- _INTEL_BDW_M(gt, 0x160B, info), /* ULT */ \
- _INTEL_BDW_M(gt, 0x160E, info) /* ULX */
-
-#define _INTEL_BDW_D_IDS(gt, info) \
- _INTEL_BDW_D(gt, 0x160A, info), /* Server */ \
- _INTEL_BDW_D(gt, 0x160D, info) /* Workstation */
-
-#define INTEL_BDW_GT12M_IDS(info) \
- _INTEL_BDW_M_IDS(1, info), \
- _INTEL_BDW_M_IDS(2, info)
+#define INTEL_BDW_GT12M_IDS(info) \
+ INTEL_VGA_DEVICE(0x1602, info), /* GT1 ULT */ \
+ INTEL_VGA_DEVICE(0x1606, info), /* GT1 ULT */ \
+ INTEL_VGA_DEVICE(0x160B, info), /* GT1 Iris */ \
+ INTEL_VGA_DEVICE(0x160E, info), /* GT1 ULX */ \
+ INTEL_VGA_DEVICE(0x1612, info), /* GT2 Halo */ \
+ INTEL_VGA_DEVICE(0x1616, info), /* GT2 ULT */ \
+ INTEL_VGA_DEVICE(0x161B, info), /* GT2 ULT */ \
+ INTEL_VGA_DEVICE(0x161E, info) /* GT2 ULX */
#define INTEL_BDW_GT12D_IDS(info) \
- _INTEL_BDW_D_IDS(1, info), \
- _INTEL_BDW_D_IDS(2, info)
+ INTEL_VGA_DEVICE(0x160A, info), /* GT1 Server */ \
+ INTEL_VGA_DEVICE(0x160D, info), /* GT1 Workstation */ \
+ INTEL_VGA_DEVICE(0x161A, info), /* GT2 Server */ \
+ INTEL_VGA_DEVICE(0x161D, info) /* GT2 Workstation */
#define INTEL_BDW_GT3M_IDS(info) \
- _INTEL_BDW_M_IDS(3, info)
+ INTEL_VGA_DEVICE(0x1622, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x1626, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x162B, info), /* Iris */ \
+ INTEL_VGA_DEVICE(0x162E, info) /* ULX */
#define INTEL_BDW_GT3D_IDS(info) \
- _INTEL_BDW_D_IDS(3, info)
+ INTEL_VGA_DEVICE(0x162A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x162D, info) /* Workstation */
#define INTEL_BDW_RSVDM_IDS(info) \
- _INTEL_BDW_M_IDS(4, info)
+ INTEL_VGA_DEVICE(0x1632, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x1636, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x163B, info), /* Iris */ \
+ INTEL_VGA_DEVICE(0x163E, info) /* ULX */
#define INTEL_BDW_RSVDD_IDS(info) \
- _INTEL_BDW_D_IDS(4, info)
+ INTEL_VGA_DEVICE(0x163A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x163D, info) /* Workstation */
#define INTEL_BDW_M_IDS(info) \
INTEL_BDW_GT12M_IDS(info), \
#define PULL_DISABLE (1 << 3)
#define INPUT_EN (1 << 5)
-#define SLEWCTRL_FAST (1 << 6)
+#define SLEWCTRL_SLOW (1 << 6)
+#define SLEWCTRL_FAST 0
/* update macro depending on INPUT_EN and PULL_ENA */
#undef PIN_OUTPUT
#define PULL_DISABLE (1 << 16)
#define PULL_UP (1 << 17)
#define INPUT_EN (1 << 18)
-#define SLEWCTRL_FAST (1 << 19)
+#define SLEWCTRL_SLOW (1 << 19)
+#define SLEWCTRL_FAST 0
#define DS0_PULL_UP_DOWN_EN (1 << 27)
#define PIN_OUTPUT (PULL_DISABLE)
void (*sync_lr_elrsr)(struct kvm_vcpu *, int, struct vgic_lr);
u64 (*get_elrsr)(const struct kvm_vcpu *vcpu);
u64 (*get_eisr)(const struct kvm_vcpu *vcpu);
+ void (*clear_eisr)(struct kvm_vcpu *vcpu);
u32 (*get_interrupt_status)(const struct kvm_vcpu *vcpu);
void (*enable_underflow)(struct kvm_vcpu *vcpu);
void (*disable_underflow)(struct kvm_vcpu *vcpu);
*/
int clk_get_phase(struct clk *clk);
+/**
+ * clk_is_match - check if two clk's point to the same hardware clock
+ * @p: clk compared against q
+ * @q: clk compared against p
+ *
+ * Returns true if the two struct clk pointers both point to the same hardware
+ * clock node. Put differently, returns true if struct clk *p and struct clk *q
+ * share the same struct clk_core object.
+ *
+ * Returns false otherwise. Note that two NULL clks are treated as matching.
+ */
+bool clk_is_match(const struct clk *p, const struct clk *q);
+
#else
static inline long clk_get_accuracy(struct clk *clk)
return -ENOTSUPP;
}
+static inline bool clk_is_match(const struct clk *p, const struct clk *q)
+{
+ return p == q;
+}
+
#endif
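
A usage sketch for the new helper; foo_reparent() is hypothetical, while
clk_get_parent() and clk_set_parent() are existing consumer API:

	static int foo_reparent(struct clk *mux, struct clk *want)
	{
		struct clk *cur = clk_get_parent(mux);

		/* two handles can differ yet wrap the same hardware clock */
		if (clk_is_match(cur, want))
			return 0;

		return clk_set_parent(mux, want);
	}

This replaces the plain pointer comparison that stops being reliable once
consumers get per-user struct clk instances.
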
/**
*/
struct mapped_device *dm_get_md(dev_t dev);
void dm_get(struct mapped_device *md);
+int dm_hold(struct mapped_device *md);
void dm_put(struct mapped_device *md);
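
A hedged sketch of how the new dm_hold() is meant to pair with dm_put(); the
exact failure value while the device is being freed is an assumption from
context:

	/* take a reference only if @md is not already being torn down */
	if (dm_hold(md))		/* assumed to fail (e.g. -EBUSY) while dying */
		return -ENXIO;

	/* ... use md safely ... */

	dm_put(md);
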
/*
#define GITS_TRANSLATER 0x10040
+#define GITS_CTLR_ENABLE (1U << 0)
+#define GITS_CTLR_QUIESCENT (1U << 31)
+
+#define GITS_TYPER_DEVBITS_SHIFT 13
+#define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1)
#define GITS_TYPER_PTA (1UL << 19)
#define GITS_CBASER_VALID (1UL << 63)
struct kmem_cache;
struct page;
+struct vm_struct;
#ifdef CONFIG_KASAN
void kasan_slab_alloc(struct kmem_cache *s, void *object);
void kasan_slab_free(struct kmem_cache *s, void *object);
-#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
-
int kasan_module_alloc(void *addr, size_t size);
-void kasan_module_free(void *addr);
+void kasan_free_shadow(const struct vm_struct *vm);
#else /* CONFIG_KASAN */
-#define MODULE_ALIGN 1
-
static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
static inline void kasan_enable_current(void) {}
static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_module_free(void *addr) {}
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
#endif /* CONFIG_KASAN */
* led */
ATA_FLAG_NO_DIPM = (1 << 23), /* host not happy with DIPM */
ATA_FLAG_LOWTAG = (1 << 24), /* host wants lowest available tag */
+ ATA_FLAG_SAS_HOST = (1 << 25), /* SAS host */
/* bits 24:31 of ap->flags are reserved for LLD specific flags */
#define PALMAS_GPADC_TRIM15 0x0E
#define PALMAS_GPADC_TRIM16 0x0F
+/* TPS659038 regen2_ctrl offset is different from palmas */
+#define TPS659038_REGEN2_CTRL 0x12
+
/* TPS65917 Interrupt registers */
/* Registers for function INTERRUPT */
unsigned long *ftrace_callsites;
#endif
+#ifdef CONFIG_LIVEPATCH
+ bool klp_alive;
+#endif
+
#ifdef CONFIG_MODULE_UNLOAD
/* What modules depend on me? */
struct list_head source_list;
/* Any cleanup before freeing mod->module_init */
void module_arch_freeing_init(struct module *mod);
+
+#ifdef CONFIG_KASAN
+#include <linux/kasan.h>
+#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+#else
+#define MODULE_ALIGN PAGE_SIZE
+#endif
+
#endif
* Used to add FDB entries to dump requests. Implementers should add
* entries to skb and update idx with the number of entries.
*
- * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh)
+ * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh,
+ * u16 flags)
* int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq,
* struct net_device *dev, u32 filter_mask)
+ * int (*ndo_bridge_dellink)(struct net_device *dev, struct nlmsghdr *nlh,
+ * u16 flags);
*
* int (*ndo_change_carrier)(struct net_device *dev, bool new_carrier);
* Called to change device carrier. Soft-devices (like dummy, team, etc)
static inline void of_platform_depopulate(struct device *parent) { }
#endif
-#ifdef CONFIG_OF_DYNAMIC
+#if defined(CONFIG_OF_DYNAMIC) && defined(CONFIG_OF_ADDRESS)
extern void of_platform_register_reconfig_notifier(void);
#else
static inline void of_platform_register_reconfig_notifier(void) { }
static inline struct pinctrl * __must_check pinctrl_get(struct device *dev)
{
- return ERR_PTR(-ENOSYS);
+ return NULL;
}
static inline void pinctrl_put(struct pinctrl *p)
struct pinctrl *p,
const char *name)
{
- return ERR_PTR(-ENOSYS);
+ return NULL;
}
static inline int pinctrl_select_state(struct pinctrl *p,
static inline struct pinctrl * __must_check devm_pinctrl_get(struct device *dev)
{
- return ERR_PTR(-ENOSYS);
+ return NULL;
}
static inline void devm_pinctrl_put(struct pinctrl *p)
* @driver_data: private regulator data
* @of_node: OpenFirmware node to parse for device tree bindings (may be
* NULL).
- * @regmap: regmap to use for core regmap helpers if dev_get_regulator() is
+ * @regmap: regmap to use for core regmap helpers if dev_get_regmap() is
* insufficient.
* @ena_gpio_initialized: GPIO controlling regulator enable was properly
* initialized, meaning that >= 0 is a valid gpio
/*
* numa_faults_locality tracks if faults recorded during the last
- * scan window were remote/local. The task scan period is adapted
- * based on the locality of the faults with different weights
- * depending on whether they were shared or private faults
+ * scan window were remote/local or failed to migrate. The task scan
+ * period is adapted based on the locality of the faults with different
+ * weights depending on whether they were shared or private faults
*/
- unsigned long numa_faults_locality[2];
+ unsigned long numa_faults_locality[3];
unsigned long numa_pages_migrated;
#endif /* CONFIG_NUMA_BALANCING */
#define TNF_NO_GROUP 0x02
#define TNF_SHARED 0x04
#define TNF_FAULT_LOCAL 0x08
+#define TNF_MIGRATE_FAIL 0x10
#ifdef CONFIG_NUMA_BALANCING
extern void task_numa_fault(int last_node, int node, int pages, int flags);
to->l4_hash = from->l4_hash;
};
+static inline void skb_sender_cpu_clear(struct sk_buff *skb)
+{
+#ifdef CONFIG_XPS
+ skb->sender_cpu = 0;
+#endif
+}
+
#ifdef NET_SKBUFF_DATA_USES_OFFSET
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
* sequence completes. On some systems, many such sequences can execute as
* a single programmed DMA transfer. On all systems, these messages are
* queued, and might complete after transactions to other devices. Messages
- * sent to a given spi_device are alway executed in FIFO order.
+ * sent to a given spi_device are always executed in FIFO order.
*
* The code that submits an spi_message (and its spi_transfers)
* to the lower layers is responsible for managing its memory.
size_t maxsize, size_t *start);
int iov_iter_npages(const struct iov_iter *i, int maxpages);
+const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
+
static inline size_t iov_iter_count(struct iov_iter *i)
{
return i->count;
#define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */
#define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */
#define VM_NO_GUARD 0x00000040 /* don't add guard page */
+#define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
/* bits [20..32] reserved for arch specific ioremap internals */
/*
/* data contains off-queue information when !WORK_STRUCT_PWQ */
WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT,
- WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE),
+ __WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE,
+ WORK_OFFQ_CANCELING = (1 << __WORK_OFFQ_CANCELING),
/*
* When a work item is off queue, its high bits point to the last
enum {
XFRM_LOOKUP_ICMP = 1 << 0,
XFRM_LOOKUP_QUEUE = 1 << 1,
+ XFRM_LOOKUP_KEEP_DST_REF = 1 << 2,
};
struct flowi;
const struct nf_loginfo *li,
const char *fmt, ...);
+__printf(8, 9)
+void nf_log_trace(struct net *net,
+ u_int8_t pf,
+ unsigned int hooknum,
+ const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const struct nf_loginfo *li,
+ const char *fmt, ...);
+
struct nf_log_buf;
struct nf_log_buf *nf_log_buf_open(void);
const struct nft_data *data,
enum nft_data_types type);
+
+/**
+ * struct nft_userdata - user defined data associated with an object
+ *
+ * @len: length of the data
+ * @data: content
+ *
+ * The presence of user data is indicated in an object specific fashion,
+ * so a length of zero can't occur and the value "len" indicates data
+ * of length len + 1.
+ */
+struct nft_userdata {
+ u8 len;
+ unsigned char data[0];
+};
+
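
Because the presence of user data is signalled elsewhere, a stored length of zero would be a wasted encoding; keeping "real length minus one" lets the 8-bit field cover 1..256 bytes. A userspace sketch of the convention (the comment string is hypothetical):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct nft_userdata {
		unsigned char len;	/* stores real length - 1 */
		unsigned char data[];
	};

	int main(void)
	{
		const char comment[] = "accept icmp";	/* hypothetical */
		size_t n = sizeof(comment);
		struct nft_userdata *u = malloc(sizeof(*u) + n);

		if (!u)
			return 1;
		u->len = n - 1;				/* encode */
		memcpy(u->data, comment, n);

		printf("stored %d bytes: %s\n", u->len + 1, (char *)u->data);
		free(u);
		return 0;
	}
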
/**
* struct nft_set_elem - generic representation of set elements
*
* @handle: rule handle
* @genmask: generation mask
* @dlen: length of expression data
- * @ulen: length of user data (used for comments)
+ * @udata: user data is appended to the rule
* @data: expression data
*/
struct nft_rule {
u64 handle:42,
genmask:2,
dlen:12,
- ulen:8;
+ udata:1;
unsigned char data[]
__attribute__((aligned(__alignof__(struct nft_expr))));
};
return (struct nft_expr *)&rule->data[rule->dlen];
}
-static inline void *nft_userdata(const struct nft_rule *rule)
+static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule)
{
return (void *)&rule->data[rule->dlen];
}
#define VXLAN_N_VID (1u << 24)
#define VXLAN_VID_MASK (VXLAN_N_VID - 1)
+#define VXLAN_VNI_MASK (VXLAN_VID_MASK << 8)
#define VXLAN_HLEN (sizeof(struct udphdr) + sizeof(struct vxlanhdr))
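
VXLAN carries the 24-bit VNI in the upper three bytes of the 32-bit vni word, so the new mask is simply VXLAN_VID_MASK shifted past the reserved low byte. A standalone demonstration with a made-up VNI (byte order left aside):

	#include <stdio.h>
	#include <stdint.h>

	#define VXLAN_N_VID	(1u << 24)
	#define VXLAN_VID_MASK	(VXLAN_N_VID - 1)
	#define VXLAN_VNI_MASK	(VXLAN_VID_MASK << 8)	/* VNI in bits 8..31 */

	int main(void)
	{
		uint32_t vni = 4096;		/* hypothetical VNI */
		uint32_t field = vni << 8;	/* as placed in the header word */

		printf("vni = %u\n", (field & VXLAN_VNI_MASK) >> 8);
		return 0;
	}
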
struct vxlan_metadata {
#define AT91_DDRSDRC_UPD_MR (3 << 20) /* Update load mode register and extended mode register */
#define AT91_DDRSDRC_MDR 0x20 /* Memory Device Register */
-#define AT91_DDRSDRC_MD (3 << 0) /* Memory Device Type */
+#define AT91_DDRSDRC_MD (7 << 0) /* Memory Device Type */
#define AT91_DDRSDRC_MD_SDR 0
#define AT91_DDRSDRC_MD_LOW_POWER_SDR 1
#define AT91_DDRSDRC_MD_LOW_POWER_DDR 3
void target_core_setup_sub_cits(struct se_subsystem_api *);
/* attribute helpers from target_core_device.c for backend drivers */
+bool se_dev_check_wce(struct se_device *);
int se_dev_set_max_unmap_lba_count(struct se_device *, u32);
int se_dev_set_max_unmap_block_desc_count(struct se_device *, u32);
int se_dev_set_unmap_granularity(struct se_device *, u32);
#include <linux/ktime.h>
#include <linux/tracepoint.h>
-struct device;
-struct regmap;
+#include "../../../drivers/base/regmap/internal.h"
/*
* Log register events
*/
DECLARE_EVENT_CLASS(regmap_reg,
- TP_PROTO(struct device *dev, unsigned int reg,
+ TP_PROTO(struct regmap *map, unsigned int reg,
unsigned int val),
- TP_ARGS(dev, reg, val),
+ TP_ARGS(map, reg, val),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
- __field( unsigned int, reg )
- __field( unsigned int, val )
+ __string( name, regmap_name(map) )
+ __field( unsigned int, reg )
+ __field( unsigned int, val )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
__entry->reg = reg;
__entry->val = val;
),
DEFINE_EVENT(regmap_reg, regmap_reg_write,
- TP_PROTO(struct device *dev, unsigned int reg,
+ TP_PROTO(struct regmap *map, unsigned int reg,
unsigned int val),
- TP_ARGS(dev, reg, val)
+ TP_ARGS(map, reg, val)
);
DEFINE_EVENT(regmap_reg, regmap_reg_read,
- TP_PROTO(struct device *dev, unsigned int reg,
+ TP_PROTO(struct regmap *map, unsigned int reg,
unsigned int val),
- TP_ARGS(dev, reg, val)
+ TP_ARGS(map, reg, val)
);
DEFINE_EVENT(regmap_reg, regmap_reg_read_cache,
- TP_PROTO(struct device *dev, unsigned int reg,
+ TP_PROTO(struct regmap *map, unsigned int reg,
unsigned int val),
- TP_ARGS(dev, reg, val)
+ TP_ARGS(map, reg, val)
);
DECLARE_EVENT_CLASS(regmap_block,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count),
+ TP_ARGS(map, reg, count),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
- __field( unsigned int, reg )
- __field( int, count )
+ __string( name, regmap_name(map) )
+ __field( unsigned int, reg )
+ __field( int, count )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
__entry->reg = reg;
__entry->count = count;
),
DEFINE_EVENT(regmap_block, regmap_hw_read_start,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count)
+ TP_ARGS(map, reg, count)
);
DEFINE_EVENT(regmap_block, regmap_hw_read_done,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count)
+ TP_ARGS(map, reg, count)
);
DEFINE_EVENT(regmap_block, regmap_hw_write_start,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count)
+ TP_ARGS(map, reg, count)
);
DEFINE_EVENT(regmap_block, regmap_hw_write_done,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count)
+ TP_ARGS(map, reg, count)
);
TRACE_EVENT(regcache_sync,
- TP_PROTO(struct device *dev, const char *type,
+ TP_PROTO(struct regmap *map, const char *type,
const char *status),
- TP_ARGS(dev, type, status),
+ TP_ARGS(map, type, status),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
- __string( status, status )
- __string( type, type )
- __field( int, type )
+ __string( name, regmap_name(map) )
+ __string( status, status )
+ __string( type, type )
+ __field( int, type )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
__assign_str(status, status);
__assign_str(type, type);
),
DECLARE_EVENT_CLASS(regmap_bool,
- TP_PROTO(struct device *dev, bool flag),
+ TP_PROTO(struct regmap *map, bool flag),
- TP_ARGS(dev, flag),
+ TP_ARGS(map, flag),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
- __field( int, flag )
+ __string( name, regmap_name(map) )
+ __field( int, flag )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
__entry->flag = flag;
),
DEFINE_EVENT(regmap_bool, regmap_cache_only,
- TP_PROTO(struct device *dev, bool flag),
+ TP_PROTO(struct regmap *map, bool flag),
- TP_ARGS(dev, flag)
+ TP_ARGS(map, flag)
);
DEFINE_EVENT(regmap_bool, regmap_cache_bypass,
- TP_PROTO(struct device *dev, bool flag),
+ TP_PROTO(struct regmap *map, bool flag),
- TP_ARGS(dev, flag)
+ TP_ARGS(map, flag)
);
DECLARE_EVENT_CLASS(regmap_async,
- TP_PROTO(struct device *dev),
+ TP_PROTO(struct regmap *map),
- TP_ARGS(dev),
+ TP_ARGS(map),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
+ __string( name, regmap_name(map) )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
),
TP_printk("%s", __get_str(name))
DEFINE_EVENT(regmap_block, regmap_async_write_start,
- TP_PROTO(struct device *dev, unsigned int reg, int count),
+ TP_PROTO(struct regmap *map, unsigned int reg, int count),
- TP_ARGS(dev, reg, count)
+ TP_ARGS(map, reg, count)
);
DEFINE_EVENT(regmap_async, regmap_async_io_complete,
- TP_PROTO(struct device *dev),
+ TP_PROTO(struct regmap *map),
- TP_ARGS(dev)
+ TP_ARGS(map)
);
DEFINE_EVENT(regmap_async, regmap_async_complete_start,
- TP_PROTO(struct device *dev),
+ TP_PROTO(struct regmap *map),
- TP_ARGS(dev)
+ TP_ARGS(map)
);
DEFINE_EVENT(regmap_async, regmap_async_complete_done,
- TP_PROTO(struct device *dev),
+ TP_PROTO(struct regmap *map),
- TP_ARGS(dev)
+ TP_ARGS(map)
);
TRACE_EVENT(regcache_drop_region,
- TP_PROTO(struct device *dev, unsigned int from,
+ TP_PROTO(struct regmap *map, unsigned int from,
unsigned int to),
- TP_ARGS(dev, from, to),
+ TP_ARGS(map, from, to),
TP_STRUCT__entry(
- __string( name, dev_name(dev) )
- __field( unsigned int, from )
- __field( unsigned int, to )
+ __string( name, regmap_name(map) )
+ __field( unsigned int, from )
+ __field( unsigned int, to )
),
TP_fast_assign(
- __assign_str(name, dev_name(dev));
+ __assign_str(name, regmap_name(map));
__entry->from = from;
__entry->to = to;
),
/* add more to the end as needed */
#define fourcc_mod_code(vendor, val) \
- ((((u64)DRM_FORMAT_MOD_VENDOR_## vendor) << 56) | (val & 0x00ffffffffffffffL))
+ ((((u64)DRM_FORMAT_MOD_VENDOR_## vendor) << 56) | (val & 0x00ffffffffffffffULL))
/*
* Format Modifier tokens:
#define DRM_IOCTL_I915_OVERLAY_PUT_IMAGE DRM_IOW(DRM_COMMAND_BASE + DRM_I915_OVERLAY_PUT_IMAGE, struct drm_intel_overlay_put_image)
#define DRM_IOCTL_I915_OVERLAY_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_OVERLAY_ATTRS, struct drm_intel_overlay_attrs)
#define DRM_IOCTL_I915_SET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_SET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
-#define DRM_IOCTL_I915_GET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_SET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
+#define DRM_IOCTL_I915_GET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
#define DRM_IOCTL_I915_GEM_WAIT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT, struct drm_i915_gem_wait)
#define DRM_IOCTL_I915_GEM_CONTEXT_CREATE DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_CREATE, struct drm_i915_gem_context_create)
#define DRM_IOCTL_I915_GEM_CONTEXT_DESTROY DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_DESTROY, struct drm_i915_gem_context_destroy)
#define I915_PARAM_HAS_COHERENT_PHYS_GTT 29
#define I915_PARAM_MMAP_VERSION 30
#define I915_PARAM_HAS_BSD2 31
+#define I915_PARAM_REVISION 32
+#define I915_PARAM_SUBSLICE_TOTAL 33
+#define I915_PARAM_EU_TOTAL 34
typedef struct drm_i915_getparam {
int param;
#define RADEON_INFO_VRAM_USAGE 0x1e
#define RADEON_INFO_GTT_USAGE 0x1f
#define RADEON_INFO_ACTIVE_CU_COUNT 0x20
+#define RADEON_INFO_CURRENT_GPU_TEMP 0x21
+#define RADEON_INFO_CURRENT_GPU_SCLK 0x22
+#define RADEON_INFO_CURRENT_GPU_MCLK 0x23
+#define RADEON_INFO_READ_REG 0x24
struct drm_radeon_info {
uint32_t request;
__u32 size_max;
/* The maximum number of segments (if VIRTIO_BLK_F_SEG_MAX) */
__u32 seg_max;
- /* geometry the device (if VIRTIO_BLK_F_GEOMETRY) */
+ /* geometry of the device (if VIRTIO_BLK_F_GEOMETRY) */
struct virtio_blk_geometry {
__u16 cylinders;
__u8 heads;
#define VIRTIO_BLK_T_BARRIER 0x80000000
#endif /* !VIRTIO_BLK_NO_LEGACY */
-/* This is the first element of the read scatter-gather list. */
+/*
+ * This comes first in the read scatter-gather list.
+ * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated,
+ * this is the first element of the read scatter-gather list.
+ */
struct virtio_blk_outhdr {
/* VIRTIO_BLK_T* */
__virtio32 type;
#include <linux/virtio_types.h>
-#define VIRTIO_SCSI_CDB_SIZE 32
-#define VIRTIO_SCSI_SENSE_SIZE 96
+/* Default values of the CDB and sense data size configuration fields */
+#define VIRTIO_SCSI_CDB_DEFAULT_SIZE 32
+#define VIRTIO_SCSI_SENSE_DEFAULT_SIZE 96
+
+#ifndef VIRTIO_SCSI_CDB_SIZE
+#define VIRTIO_SCSI_CDB_SIZE VIRTIO_SCSI_CDB_DEFAULT_SIZE
+#endif
+#ifndef VIRTIO_SCSI_SENSE_SIZE
+#define VIRTIO_SCSI_SENSE_SIZE VIRTIO_SCSI_SENSE_DEFAULT_SIZE
+#endif
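
The #ifndef wrapper keeps the ABI defaults while allowing a build to override the sizes at compile time. A self-contained sketch of the pattern (compile with -DVIRTIO_SCSI_CDB_SIZE=64 to see the override take effect):

	#include <stdio.h>

	#define VIRTIO_SCSI_CDB_DEFAULT_SIZE 32

	#ifndef VIRTIO_SCSI_CDB_SIZE
	#define VIRTIO_SCSI_CDB_SIZE VIRTIO_SCSI_CDB_DEFAULT_SIZE
	#endif

	int main(void)
	{
		printf("CDB size: %d\n", VIRTIO_SCSI_CDB_SIZE);
		return 0;
	}
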
/* SCSI command request, followed by data-out */
struct virtio_scsi_cmd_req {
};
struct omap_dss_device {
+ struct kobject kobj;
struct device *dev;
struct module *owner;
const char *mod_name);
#define xenbus_register_frontend(drv) \
- __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME);
+ __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME)
#define xenbus_register_backend(drv) \
- __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME);
+ __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME)
void xenbus_unregister_driver(struct xenbus_driver *drv);
rcu_read_lock();
cpuset_for_each_descendant_pre(cp, pos_css, root_cs) {
- if (cp == root_cs)
- continue;
-
/* skip the whole subtree if @cp doesn't have any CPU */
if (cpumask_empty(cp->cpus_allowed)) {
pos_css = css_rightmost_descendant(pos_css);
* If it becomes empty, inherit the effective mask of the
* parent, which is guaranteed to have some CPUs.
*/
- if (cpumask_empty(new_cpus))
+ if (cgroup_on_dfl(cp->css.cgroup) && cpumask_empty(new_cpus))
cpumask_copy(new_cpus, parent->effective_cpus);
/* Skip the whole subtree if the cpumask remains the same. */
* If it becomes empty, inherit the effective mask of the
* parent, which is guaranteed to have some MEMs.
*/
- if (nodes_empty(*new_mems))
+ if (cgroup_on_dfl(cp->css.cgroup) && nodes_empty(*new_mems))
*new_mems = parent->effective_mems;
/* Skip the whole subtree if the nodemask remains the same. */
spin_lock_irq(&callback_lock);
cs->mems_allowed = parent->mems_allowed;
+ cs->effective_mems = parent->mems_allowed;
cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
+ cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
spin_unlock_irq(&callback_lock);
out_unlock:
mutex_unlock(&cpuset_mutex);
ctx = perf_event_ctx_lock_nested(event, SINGLE_DEPTH_NESTING);
WARN_ON_ONCE(ctx->parent_ctx);
perf_remove_from_context(event, true);
- mutex_unlock(&ctx->mutex);
+ perf_event_ctx_unlock(event, ctx);
_free_event(event);
}
{
struct perf_event *event = container_of(entry,
struct perf_event, pending);
+ int rctx;
+
+ rctx = perf_swevent_get_recursion_context();
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+ */
if (event->pending_disable) {
event->pending_disable = 0;
event->pending_wakeup = 0;
perf_event_wakeup(event);
}
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
}
/*
/* sets obj->mod if object is not vmlinux and module is found */
static void klp_find_object_module(struct klp_object *obj)
{
+ struct module *mod;
+
if (!klp_is_module(obj))
return;
mutex_lock(&module_mutex);
/*
- * We don't need to take a reference on the module here because we have
- * the klp_mutex, which is also taken by the module notifier. This
- * prevents any module from unloading until we release the klp_mutex.
+ * We do not want to block removal of patched modules and therefore
+ * we do not take a reference here. The patches are removed by
+ * a going module handler instead.
+ */
+ mod = find_module(obj->name);
+ /*
+ * Do not interfere with the work of the module coming and going notifiers.
+ * Note that the patch might still be needed before the going handler
+ * is called. Module functions can be called even in the GOING state
+ * until mod->exit() finishes. This is especially important for
+ * patches that modify the semantics of the functions.
*/
- obj->mod = find_module(obj->name);
+ if (mod && mod->klp_alive)
+ obj->mod = mod;
+
mutex_unlock(&module_mutex);
}
return -EINVAL;
obj->state = KLP_DISABLED;
+ obj->mod = NULL;
klp_find_object_module(obj);
mutex_lock(&klp_mutex);
+ /*
+ * Each module has to know that the notifier has been called.
+ * We never know what module will get patched by a new patch.
+ */
+ if (action == MODULE_STATE_COMING)
+ mod->klp_alive = true;
+ else /* MODULE_STATE_GOING */
+ mod->klp_alive = false;
+
list_for_each_entry(patch, &klp_patches, list) {
for (obj = patch->objs; obj->funcs; obj++) {
if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
if (!new_class->name)
return 0;
- list_for_each_entry(class, &all_lock_classes, lock_entry) {
+ list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
if (new_class->key - new_class->subclass == class->key)
return class->name_version;
if (class->name && !strcmp(class->name, new_class->name))
hash_head = classhashentry(key);
/*
- * We can walk the hash lockfree, because the hash only
- * grows, and we are careful when adding entries to the end:
+ * We do an RCU walk of the hash, see lockdep_free_key_range().
*/
- list_for_each_entry(class, hash_head, hash_entry) {
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return NULL;
+
+ list_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key) {
/*
* Huh! same key, different name? Did someone trample
struct lockdep_subclass_key *key;
struct list_head *hash_head;
struct lock_class *class;
- unsigned long flags;
+
+ DEBUG_LOCKS_WARN_ON(!irqs_disabled());
class = look_up_lock_class(lock, subclass);
if (likely(class))
key = lock->key->subkeys + subclass;
hash_head = classhashentry(key);
- raw_local_irq_save(flags);
if (!graph_lock()) {
- raw_local_irq_restore(flags);
return NULL;
}
/*
* We have to do the hash-walk again, to avoid races
* with another CPU:
*/
- list_for_each_entry(class, hash_head, hash_entry)
+ list_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key)
goto out_unlock_set;
+ }
+
/*
* Allocate a new key from the static array, and add it to
* the hash:
*/
if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
if (!debug_locks_off_graph_unlock()) {
- raw_local_irq_restore(flags);
return NULL;
}
- raw_local_irq_restore(flags);
print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!");
dump_stack();
if (verbose(class)) {
graph_unlock();
- raw_local_irq_restore(flags);
printk("\nnew class %p: %s", class->key, class->name);
if (class->name_version > 1)
printk("\n");
dump_stack();
- raw_local_irq_save(flags);
if (!graph_lock()) {
- raw_local_irq_restore(flags);
return NULL;
}
}
out_unlock_set:
graph_unlock();
- raw_local_irq_restore(flags);
out_set_class_cache:
if (!subclass || force)
entry->distance = distance;
entry->trace = *trace;
/*
- * Since we never remove from the dependency list, the list can
- * be walked lockless by other CPUs, it's only allocation
- * that must be protected by the spinlock. But this also means
- * we must make new entries visible only once writes to the
- * entry become visible - hence the RCU op:
+ * Both allocation and removal are done under the graph lock; but
+ * iteration is under RCU-sched; see look_up_lock_class() and
+ * lockdep_free_key_range().
*/
list_add_tail_rcu(&entry->entry, head);
else
head = &lock->class->locks_before;
- list_for_each_entry(entry, head, entry) {
+ DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+
+ list_for_each_entry_rcu(entry, head, entry) {
if (!lock_accessed(entry)) {
unsigned int cq_depth;
mark_lock_accessed(entry, lock);
* We can walk it lock-free, because entries only get added
* to the hash:
*/
- list_for_each_entry(chain, hash_head, entry) {
+ list_for_each_entry_rcu(chain, hash_head, entry) {
if (chain->chain_key == chain_key) {
cache_hit:
debug_atomic_inc(chain_lookup_hits);
if (unlikely(!debug_locks))
return;
- if (subclass)
+ if (subclass) {
+ unsigned long flags;
+
+ if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion))
+ return;
+
+ raw_local_irq_save(flags);
+ current->lockdep_recursion = 1;
register_lock_class(lock, subclass, 1);
+ current->lockdep_recursion = 0;
+ raw_local_irq_restore(flags);
+ }
}
EXPORT_SYMBOL_GPL(lockdep_init_map);
return addr >= start && addr < start + size;
}
+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
void lockdep_free_key_range(void *start, unsigned long size)
{
- struct lock_class *class, *next;
+ struct lock_class *class;
struct list_head *head;
unsigned long flags;
int i;
head = classhash_table + i;
if (list_empty(head))
continue;
- list_for_each_entry_safe(class, next, head, hash_entry) {
+ list_for_each_entry_rcu(class, head, hash_entry) {
if (within(class->key, start, size))
zap_class(class);
else if (within(class->name, start, size))
if (locked)
graph_unlock();
raw_local_irq_restore(flags);
+
+ /*
+ * Wait for any possible iterators from look_up_lock_class() to pass
+ * before continuing to free the memory they refer to.
+ *
+ * sync_sched() is sufficient because the read-side is IRQ-disabled.
+ */
+ synchronize_sched();
+
+ /*
+ * XXX at this point we could return the resources to the pool;
+ * instead we leak them. We would need to change to bitmap allocators
+ * instead of the linear allocators we have now.
+ */
}
void lockdep_reset_lock(struct lockdep_map *lock)
{
- struct lock_class *class, *next;
+ struct lock_class *class;
struct list_head *head;
unsigned long flags;
int i, j;
head = classhash_table + i;
if (list_empty(head))
continue;
- list_for_each_entry_safe(class, next, head, hash_entry) {
+ list_for_each_entry_rcu(class, head, hash_entry) {
int match = 0;
for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
#include <linux/async.h>
#include <linux/percpu.h>
#include <linux/kmemleak.h>
-#include <linux/kasan.h>
#include <linux/jump_label.h>
#include <linux/pfn.h>
#include <linux/bsearch.h>
void __weak module_memfree(void *module_region)
{
vfree(module_region);
- kasan_module_free(module_region);
}
void __weak module_arch_cleanup(struct module *mod)
kfree(mod->args);
percpu_modfree(mod);
- /* Free lock-classes: */
+ /* Free lock-classes; relies on the preceding sync_rcu(). */
lockdep_free_key_range(mod->module_core, mod->core_size);
/* Finally, free the core (containing the module structure) */
module_bug_cleanup(mod);
mutex_unlock(&module_mutex);
- /* Free lock-classes: */
- lockdep_free_key_range(mod->module_core, mod->core_size);
-
/* we can't deallocate the module until we clear memory protection */
unset_module_init_ro_nx(mod);
unset_module_core_ro_nx(mod);
synchronize_rcu();
mutex_unlock(&module_mutex);
free_module:
+ /* Free lock-classes; relies on the preceding sync_rcu() */
+ lockdep_free_key_range(mod->module_core, mod->core_size);
+
module_deallocate(mod, info);
free_copy:
free_copy(info);
} else {
if (dl_prio(oldprio))
p->dl.dl_boosted = 0;
+ if (rt_prio(oldprio))
+ p->rt.timeout = 0;
p->sched_class = &fair_sched_class;
}
/*
* If there were no record hinting faults then either the task is
* completely idle or all activity is in areas that are not of interest
- * to automatic numa balancing. Scan slower
+ * to automatic numa balancing. Related to that, if there were failed
+ * migrations then it implies we are migrating too quickly or the local
+ * node is overloaded. In either case, scan slower
*/
- if (local + shared == 0) {
+ if (local + shared == 0 || p->numa_faults_locality[2]) {
p->numa_scan_period = min(p->numa_scan_period_max,
p->numa_scan_period << 1);
if (migrated)
p->numa_pages_migrated += pages;
+ if (flags & TNF_MIGRATE_FAIL)
+ p->numa_faults_locality[2] += pages;
p->numa_faults[task_faults_idx(NUMA_MEMBUF, mem_node, priv)] += pages;
p->numa_faults[task_faults_idx(NUMA_CPUBUF, cpu_node, priv)] += pages;
*/
static int bc_set_next(ktime_t expires, struct clock_event_device *bc)
{
+ int bc_moved;
/*
* We try to cancel the timer first. If the callback is on
* flight on some other cpu then we let it handle it. If we
* restart the timer because we are in the callback, but we
* can set the expiry time and let the callback return
* HRTIMER_RESTART.
+ *
+ * Since we are in the idle loop at this point and because
+ * hrtimer_{start/cancel} functions call into tracing,
+ * calls to these functions must be bound within RCU_NONIDLE.
*/
- if (hrtimer_try_to_cancel(&bctimer) >= 0) {
- hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED);
+ RCU_NONIDLE(bc_moved = (hrtimer_try_to_cancel(&bctimer) >= 0) ?
+ !hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED) :
+ 0);
+ if (bc_moved) {
/* Bind the "device" to the cpu */
bc->bound_on = smp_processor_id();
} else if (bc->bound_on == smp_processor_id()) {
static struct pid * const ftrace_swapper_pid = &init_struct_pid;
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+static int ftrace_graph_active;
+#else
+# define ftrace_graph_active 0
+#endif
+
#ifdef CONFIG_DYNAMIC_FTRACE
static struct ftrace_ops *removed_ops;
if (!ftrace_rec_count(rec))
rec->flags = 0;
else
- /* Just disable the record (keep REGS state) */
- rec->flags &= ~FTRACE_FL_ENABLED;
+ /*
+ * Just disable the record, but keep the ops TRAMP
+ * and REGS states. The _EN flags must be disabled though.
+ */
+ rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN |
+ FTRACE_FL_REGS_EN);
}
return FTRACE_UPDATE_MAKE_NOP;
static void ftrace_startup_sysctl(void)
{
+ int command;
+
if (unlikely(ftrace_disabled))
return;
/* Force update next time */
saved_ftrace_func = NULL;
/* ftrace_start_up is true if we want ftrace running */
- if (ftrace_start_up)
- ftrace_run_update_code(FTRACE_UPDATE_CALLS);
+ if (ftrace_start_up) {
+ command = FTRACE_UPDATE_CALLS;
+ if (ftrace_graph_active)
+ command |= FTRACE_START_FUNC_RET;
+ ftrace_startup_enable(command);
+ }
}
static void ftrace_shutdown_sysctl(void)
{
+ int command;
+
if (unlikely(ftrace_disabled))
return;
/* ftrace_start_up is true if ftrace is running */
- if (ftrace_start_up)
- ftrace_run_update_code(FTRACE_DISABLE_CALLS);
+ if (ftrace_start_up) {
+ command = FTRACE_DISABLE_CALLS;
+ if (ftrace_graph_active)
+ command |= FTRACE_STOP_FUNC_RET;
+ ftrace_run_update_code(command);
+ }
}
static cycle_t ftrace_update_time;
if (ftrace_enabled) {
- ftrace_startup_sysctl();
-
/* we are starting ftrace again */
if (ftrace_ops_list != &ftrace_list_end)
update_ftrace_function();
+ ftrace_startup_sysctl();
+
} else {
/* stopping ftrace calls (just send to ftrace_stub) */
ftrace_trace_function = ftrace_stub;
ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash)
};
-static int ftrace_graph_active;
-
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
{
return 0;
}
EXPORT_SYMBOL_GPL(flush_work);
+struct cwt_wait {
+ wait_queue_t wait;
+ struct work_struct *work;
+};
+
+static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key)
+{
+ struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait);
+
+ if (cwait->work != key)
+ return 0;
+ return autoremove_wake_function(wait, mode, sync, key);
+}
+
static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
{
+ static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq);
unsigned long flags;
int ret;
do {
ret = try_to_grab_pending(work, is_dwork, &flags);
/*
- * If someone else is canceling, wait for the same event it
- * would be waiting for before retrying.
+ * If someone else is already canceling, wait for it to
+ * finish. flush_work() doesn't work for PREEMPT_NONE
+ * because we may get scheduled between @work's completion
+ * and the other canceling task resuming and clearing
+ * CANCELING - flush_work() will return false immediately
+ * as @work is no longer busy, try_to_grab_pending() will
+ * return -ENOENT as @work is still being canceled and the
+ * other canceling task won't be able to clear CANCELING as
+ * we're hogging the CPU.
+ *
+ * Let's wait for completion using a waitqueue. As this
+ * may lead to the thundering herd problem, use a custom
+ * wake function which matches @work along with exclusive
+ * wait and wakeup.
*/
- if (unlikely(ret == -ENOENT))
- flush_work(work);
+ if (unlikely(ret == -ENOENT)) {
+ struct cwt_wait cwait;
+
+ init_wait(&cwait.wait);
+ cwait.wait.func = cwt_wakefn;
+ cwait.work = work;
+
+ prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait,
+ TASK_UNINTERRUPTIBLE);
+ if (work_is_canceling(work))
+ schedule();
+ finish_wait(&cancel_waitq, &cwait.wait);
+ }
} while (unlikely(ret < 0));
/* tell other tasks trying to grab @work to back off */
flush_work(work);
clear_work_data(work);
+
+ /*
+ * Paired with prepare_to_wait() above so that either
+ * waitqueue_active() is visible here or !work_is_canceling() is
+ * visible there.
+ */
+ smp_mb();
+ if (waitqueue_active(&cancel_waitq))
+ __wake_up(&cancel_waitq, TASK_NORMAL, 1, work);
+
return ret;
}
obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
- gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \
+ gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \
bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \
percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o
obj-y += string_helpers.o
--- /dev/null
+#include <linux/export.h>
+#include <linux/uio.h>
+#include <linux/pagemap.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <net/checksum.h>
+
+#define iterate_iovec(i, n, __v, __p, skip, STEP) { \
+ size_t left; \
+ size_t wanted = n; \
+ __p = i->iov; \
+ __v.iov_len = min(n, __p->iov_len - skip); \
+ if (likely(__v.iov_len)) { \
+ __v.iov_base = __p->iov_base + skip; \
+ left = (STEP); \
+ __v.iov_len -= left; \
+ skip += __v.iov_len; \
+ n -= __v.iov_len; \
+ } else { \
+ left = 0; \
+ } \
+ while (unlikely(!left && n)) { \
+ __p++; \
+ __v.iov_len = min(n, __p->iov_len); \
+ if (unlikely(!__v.iov_len)) \
+ continue; \
+ __v.iov_base = __p->iov_base; \
+ left = (STEP); \
+ __v.iov_len -= left; \
+ skip = __v.iov_len; \
+ n -= __v.iov_len; \
+ } \
+ n = wanted - n; \
+}
+
+#define iterate_kvec(i, n, __v, __p, skip, STEP) { \
+ size_t wanted = n; \
+ __p = i->kvec; \
+ __v.iov_len = min(n, __p->iov_len - skip); \
+ if (likely(__v.iov_len)) { \
+ __v.iov_base = __p->iov_base + skip; \
+ (void)(STEP); \
+ skip += __v.iov_len; \
+ n -= __v.iov_len; \
+ } \
+ while (unlikely(n)) { \
+ __p++; \
+ __v.iov_len = min(n, __p->iov_len); \
+ if (unlikely(!__v.iov_len)) \
+ continue; \
+ __v.iov_base = __p->iov_base; \
+ (void)(STEP); \
+ skip = __v.iov_len; \
+ n -= __v.iov_len; \
+ } \
+ n = wanted; \
+}
+
+#define iterate_bvec(i, n, __v, __p, skip, STEP) { \
+ size_t wanted = n; \
+ __p = i->bvec; \
+ __v.bv_len = min_t(size_t, n, __p->bv_len - skip); \
+ if (likely(__v.bv_len)) { \
+ __v.bv_page = __p->bv_page; \
+ __v.bv_offset = __p->bv_offset + skip; \
+ (void)(STEP); \
+ skip += __v.bv_len; \
+ n -= __v.bv_len; \
+ } \
+ while (unlikely(n)) { \
+ __p++; \
+ __v.bv_len = min_t(size_t, n, __p->bv_len); \
+ if (unlikely(!__v.bv_len)) \
+ continue; \
+ __v.bv_page = __p->bv_page; \
+ __v.bv_offset = __p->bv_offset; \
+ (void)(STEP); \
+ skip = __v.bv_len; \
+ n -= __v.bv_len; \
+ } \
+ n = wanted; \
+}
+
+#define iterate_all_kinds(i, n, v, I, B, K) { \
+ size_t skip = i->iov_offset; \
+ if (unlikely(i->type & ITER_BVEC)) { \
+ const struct bio_vec *bvec; \
+ struct bio_vec v; \
+ iterate_bvec(i, n, v, bvec, skip, (B)) \
+ } else if (unlikely(i->type & ITER_KVEC)) { \
+ const struct kvec *kvec; \
+ struct kvec v; \
+ iterate_kvec(i, n, v, kvec, skip, (K)) \
+ } else { \
+ const struct iovec *iov; \
+ struct iovec v; \
+ iterate_iovec(i, n, v, iov, skip, (I)) \
+ } \
+}
+
+#define iterate_and_advance(i, n, v, I, B, K) { \
+ size_t skip = i->iov_offset; \
+ if (unlikely(i->type & ITER_BVEC)) { \
+ const struct bio_vec *bvec; \
+ struct bio_vec v; \
+ iterate_bvec(i, n, v, bvec, skip, (B)) \
+ if (skip == bvec->bv_len) { \
+ bvec++; \
+ skip = 0; \
+ } \
+ i->nr_segs -= bvec - i->bvec; \
+ i->bvec = bvec; \
+ } else if (unlikely(i->type & ITER_KVEC)) { \
+ const struct kvec *kvec; \
+ struct kvec v; \
+ iterate_kvec(i, n, v, kvec, skip, (K)) \
+ if (skip == kvec->iov_len) { \
+ kvec++; \
+ skip = 0; \
+ } \
+ i->nr_segs -= kvec - i->kvec; \
+ i->kvec = kvec; \
+ } else { \
+ const struct iovec *iov; \
+ struct iovec v; \
+ iterate_iovec(i, n, v, iov, skip, (I)) \
+ if (skip == iov->iov_len) { \
+ iov++; \
+ skip = 0; \
+ } \
+ i->nr_segs -= iov - i->iov; \
+ i->iov = iov; \
+ } \
+ i->count -= n; \
+ i->iov_offset = skip; \
+}
+
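
The iterate_* macros above all follow the same shape: consume the remainder of the current segment (honouring the initial skip offset), then walk whole segments until n bytes are done. A simplified userspace rendering of that walk over a plain iovec array (no fault handling; the caller must guarantee the segments cover n bytes):

	#include <stdio.h>
	#include <sys/uio.h>

	static size_t walk(const struct iovec *iov, size_t skip, size_t n)
	{
		size_t done = 0;

		while (n) {
			size_t len = iov->iov_len - skip;

			if (len > n)
				len = n;
			if (len) {
				fwrite((char *)iov->iov_base + skip, 1, len, stdout);
				done += len;
				n -= len;
			}
			iov++;		/* advance to the next segment */
			skip = 0;	/* only the first segment is offset */
		}
		return done;
	}

	int main(void)
	{
		struct iovec v[2] = {
			{ "hello ", 6 },
			{ "world\n", 6 },
		};

		walk(v, 0, 12);
		return 0;
	}
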
+static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
+ struct iov_iter *i)
+{
+ size_t skip, copy, left, wanted;
+ const struct iovec *iov;
+ char __user *buf;
+ void *kaddr, *from;
+
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ wanted = bytes;
+ iov = i->iov;
+ skip = i->iov_offset;
+ buf = iov->iov_base + skip;
+ copy = min(bytes, iov->iov_len - skip);
+
+ if (!fault_in_pages_writeable(buf, copy)) {
+ kaddr = kmap_atomic(page);
+ from = kaddr + offset;
+
+ /* first chunk, usually the only one */
+ left = __copy_to_user_inatomic(buf, from, copy);
+ copy -= left;
+ skip += copy;
+ from += copy;
+ bytes -= copy;
+
+ while (unlikely(!left && bytes)) {
+ iov++;
+ buf = iov->iov_base;
+ copy = min(bytes, iov->iov_len);
+ left = __copy_to_user_inatomic(buf, from, copy);
+ copy -= left;
+ skip = copy;
+ from += copy;
+ bytes -= copy;
+ }
+ if (likely(!bytes)) {
+ kunmap_atomic(kaddr);
+ goto done;
+ }
+ offset = from - kaddr;
+ buf += copy;
+ kunmap_atomic(kaddr);
+ copy = min(bytes, iov->iov_len - skip);
+ }
+ /* Too bad - revert to non-atomic kmap */
+ kaddr = kmap(page);
+ from = kaddr + offset;
+ left = __copy_to_user(buf, from, copy);
+ copy -= left;
+ skip += copy;
+ from += copy;
+ bytes -= copy;
+ while (unlikely(!left && bytes)) {
+ iov++;
+ buf = iov->iov_base;
+ copy = min(bytes, iov->iov_len);
+ left = __copy_to_user(buf, from, copy);
+ copy -= left;
+ skip = copy;
+ from += copy;
+ bytes -= copy;
+ }
+ kunmap(page);
+done:
+ if (skip == iov->iov_len) {
+ iov++;
+ skip = 0;
+ }
+ i->count -= wanted - bytes;
+ i->nr_segs -= iov - i->iov;
+ i->iov = iov;
+ i->iov_offset = skip;
+ return wanted - bytes;
+}
+
+static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t bytes,
+ struct iov_iter *i)
+{
+ size_t skip, copy, left, wanted;
+ const struct iovec *iov;
+ char __user *buf;
+ void *kaddr, *to;
+
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ wanted = bytes;
+ iov = i->iov;
+ skip = i->iov_offset;
+ buf = iov->iov_base + skip;
+ copy = min(bytes, iov->iov_len - skip);
+
+ if (!fault_in_pages_readable(buf, copy)) {
+ kaddr = kmap_atomic(page);
+ to = kaddr + offset;
+
+ /* first chunk, usually the only one */
+ left = __copy_from_user_inatomic(to, buf, copy);
+ copy -= left;
+ skip += copy;
+ to += copy;
+ bytes -= copy;
+
+ while (unlikely(!left && bytes)) {
+ iov++;
+ buf = iov->iov_base;
+ copy = min(bytes, iov->iov_len);
+ left = __copy_from_user_inatomic(to, buf, copy);
+ copy -= left;
+ skip = copy;
+ to += copy;
+ bytes -= copy;
+ }
+ if (likely(!bytes)) {
+ kunmap_atomic(kaddr);
+ goto done;
+ }
+ offset = to - kaddr;
+ buf += copy;
+ kunmap_atomic(kaddr);
+ copy = min(bytes, iov->iov_len - skip);
+ }
+ /* Too bad - revert to non-atomic kmap */
+ kaddr = kmap(page);
+ to = kaddr + offset;
+ left = __copy_from_user(to, buf, copy);
+ copy -= left;
+ skip += copy;
+ to += copy;
+ bytes -= copy;
+ while (unlikely(!left && bytes)) {
+ iov++;
+ buf = iov->iov_base;
+ copy = min(bytes, iov->iov_len);
+ left = __copy_from_user(to, buf, copy);
+ copy -= left;
+ skip = copy;
+ to += copy;
+ bytes -= copy;
+ }
+ kunmap(page);
+done:
+ if (skip == iov->iov_len) {
+ iov++;
+ skip = 0;
+ }
+ i->count -= wanted - bytes;
+ i->nr_segs -= iov - i->iov;
+ i->iov = iov;
+ i->iov_offset = skip;
+ return wanted - bytes;
+}
+
+/*
+ * Fault in the first iovec of the given iov_iter, to a maximum length
+ * of bytes. Returns 0 on success, or non-zero if the memory could not be
+ * accessed (ie. because it is an invalid address).
+ *
+ * writev-intensive code may want this to prefault several iovecs -- that
+ * would be possible (callers must not rely on the fact that _only_ the
+ * first iovec will be faulted with the current implementation).
+ */
+int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+{
+ if (!(i->type & (ITER_BVEC|ITER_KVEC))) {
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ bytes = min(bytes, i->iov->iov_len - i->iov_offset);
+ return fault_in_pages_readable(buf, bytes);
+ }
+ return 0;
+}
+EXPORT_SYMBOL(iov_iter_fault_in_readable);
+
+void iov_iter_init(struct iov_iter *i, int direction,
+ const struct iovec *iov, unsigned long nr_segs,
+ size_t count)
+{
+ /* It will get better. Eventually... */
+ if (segment_eq(get_fs(), KERNEL_DS)) {
+ direction |= ITER_KVEC;
+ i->type = direction;
+ i->kvec = (struct kvec *)iov;
+ } else {
+ i->type = direction;
+ i->iov = iov;
+ }
+ i->nr_segs = nr_segs;
+ i->iov_offset = 0;
+ i->count = count;
+}
+EXPORT_SYMBOL(iov_iter_init);
+
+static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
+{
+ char *from = kmap_atomic(page);
+ memcpy(to, from + offset, len);
+ kunmap_atomic(from);
+}
+
+static void memcpy_to_page(struct page *page, size_t offset, char *from, size_t len)
+{
+ char *to = kmap_atomic(page);
+ memcpy(to + offset, from, len);
+ kunmap_atomic(to);
+}
+
+static void memzero_page(struct page *page, size_t offset, size_t len)
+{
+ char *addr = kmap_atomic(page);
+ memset(addr + offset, 0, len);
+ kunmap_atomic(addr);
+}
+
+size_t copy_to_iter(void *addr, size_t bytes, struct iov_iter *i)
+{
+ char *from = addr;
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ iterate_and_advance(i, bytes, v,
+ __copy_to_user(v.iov_base, (from += v.iov_len) - v.iov_len,
+ v.iov_len),
+ memcpy_to_page(v.bv_page, v.bv_offset,
+ (from += v.bv_len) - v.bv_len, v.bv_len),
+ memcpy(v.iov_base, (from += v.iov_len) - v.iov_len, v.iov_len)
+ )
+
+ return bytes;
+}
+EXPORT_SYMBOL(copy_to_iter);
+
+size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
+{
+ char *to = addr;
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ iterate_and_advance(i, bytes, v,
+ __copy_from_user((to += v.iov_len) - v.iov_len, v.iov_base,
+ v.iov_len),
+ memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page,
+ v.bv_offset, v.bv_len),
+ memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
+ )
+
+ return bytes;
+}
+EXPORT_SYMBOL(copy_from_iter);
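
For context, a hedged sketch of the consumer pattern these exports enable (kernel context; example_fill is illustrative, not an interface added by this patch):

	#include <linux/uio.h>
	#include <linux/errno.h>

	/* Copy one kernel buffer out through whatever segments the iterator
	 * describes; copy_to_iter() crosses segment boundaries itself and
	 * advances the iterator as it goes. */
	static ssize_t example_fill(struct iov_iter *to, const void *kbuf,
				    size_t len)
	{
		size_t copied = copy_to_iter((void *)kbuf, len, to);

		/* A short copy means some user segment faulted part-way. */
		return copied ? copied : -EFAULT;
	}
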
+
+size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
+{
+ char *to = addr;
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ iterate_and_advance(i, bytes, v,
+ __copy_from_user_nocache((to += v.iov_len) - v.iov_len,
+ v.iov_base, v.iov_len),
+ memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page,
+ v.bv_offset, v.bv_len),
+ memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
+ )
+
+ return bytes;
+}
+EXPORT_SYMBOL(copy_from_iter_nocache);
+
+size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
+ struct iov_iter *i)
+{
+ if (i->type & (ITER_BVEC|ITER_KVEC)) {
+ void *kaddr = kmap_atomic(page);
+ size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
+ kunmap_atomic(kaddr);
+ return wanted;
+ } else
+ return copy_page_to_iter_iovec(page, offset, bytes, i);
+}
+EXPORT_SYMBOL(copy_page_to_iter);
+
+size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
+ struct iov_iter *i)
+{
+ if (i->type & (ITER_BVEC|ITER_KVEC)) {
+ void *kaddr = kmap_atomic(page);
+ size_t wanted = copy_from_iter(kaddr + offset, bytes, i);
+ kunmap_atomic(kaddr);
+ return wanted;
+ } else
+ return copy_page_from_iter_iovec(page, offset, bytes, i);
+}
+EXPORT_SYMBOL(copy_page_from_iter);
+
+size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
+{
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ iterate_and_advance(i, bytes, v,
+ __clear_user(v.iov_base, v.iov_len),
+ memzero_page(v.bv_page, v.bv_offset, v.bv_len),
+ memset(v.iov_base, 0, v.iov_len)
+ )
+
+ return bytes;
+}
+EXPORT_SYMBOL(iov_iter_zero);
+
+size_t iov_iter_copy_from_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ char *kaddr = kmap_atomic(page), *p = kaddr + offset;
+ iterate_all_kinds(i, bytes, v,
+ __copy_from_user_inatomic((p += v.iov_len) - v.iov_len,
+ v.iov_base, v.iov_len),
+ memcpy_from_page((p += v.bv_len) - v.bv_len, v.bv_page,
+ v.bv_offset, v.bv_len),
+ memcpy((p += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
+ )
+ kunmap_atomic(kaddr);
+ return bytes;
+}
+EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
+
+void iov_iter_advance(struct iov_iter *i, size_t size)
+{
+ iterate_and_advance(i, size, v, 0, 0, 0)
+}
+EXPORT_SYMBOL(iov_iter_advance);
+
+/*
+ * Return the count of just the current iov_iter segment.
+ */
+size_t iov_iter_single_seg_count(const struct iov_iter *i)
+{
+ if (i->nr_segs == 1)
+ return i->count;
+ else if (i->type & ITER_BVEC)
+ return min(i->count, i->bvec->bv_len - i->iov_offset);
+ else
+ return min(i->count, i->iov->iov_len - i->iov_offset);
+}
+EXPORT_SYMBOL(iov_iter_single_seg_count);
+
+void iov_iter_kvec(struct iov_iter *i, int direction,
+ const struct kvec *kvec, unsigned long nr_segs,
+ size_t count)
+{
+ BUG_ON(!(direction & ITER_KVEC));
+ i->type = direction;
+ i->kvec = kvec;
+ i->nr_segs = nr_segs;
+ i->iov_offset = 0;
+ i->count = count;
+}
+EXPORT_SYMBOL(iov_iter_kvec);
+
+void iov_iter_bvec(struct iov_iter *i, int direction,
+ const struct bio_vec *bvec, unsigned long nr_segs,
+ size_t count)
+{
+ BUG_ON(!(direction & ITER_BVEC));
+ i->type = direction;
+ i->bvec = bvec;
+ i->nr_segs = nr_segs;
+ i->iov_offset = 0;
+ i->count = count;
+}
+EXPORT_SYMBOL(iov_iter_bvec);
+
+unsigned long iov_iter_alignment(const struct iov_iter *i)
+{
+ unsigned long res = 0;
+ size_t size = i->count;
+
+ if (!size)
+ return 0;
+
+ iterate_all_kinds(i, size, v,
+ (res |= (unsigned long)v.iov_base | v.iov_len, 0),
+ res |= v.bv_offset | v.bv_len,
+ res |= (unsigned long)v.iov_base | v.iov_len
+ )
+ return res;
+}
+EXPORT_SYMBOL(iov_iter_alignment);
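
The alignment helper ORs every segment's base address and length into one accumulator, so the least significant set bit of the result is the worst-case alignment across the whole iterator. The arithmetic in isolation, with made-up segments:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uintptr_t base[] = { 0x1000, 0x2200 };	/* hypothetical bases */
		size_t len[]     = { 0x200, 0x80 };	/* hypothetical lengths */
		unsigned long res = 0;
		int i;

		for (i = 0; i < 2; i++)
			res |= (unsigned long)base[i] | len[i];

		/* res & -res isolates the lowest set bit: here 0x80. */
		printf("aligned to %lu bytes\n", res & -res);
		return 0;
	}
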
+
+ssize_t iov_iter_get_pages(struct iov_iter *i,
+ struct page **pages, size_t maxsize, unsigned maxpages,
+ size_t *start)
+{
+ if (maxsize > i->count)
+ maxsize = i->count;
+
+ if (!maxsize)
+ return 0;
+
+ iterate_all_kinds(i, maxsize, v, ({
+ unsigned long addr = (unsigned long)v.iov_base;
+ size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1));
+ int n;
+ int res;
+
+ if (len > maxpages * PAGE_SIZE)
+ len = maxpages * PAGE_SIZE;
+ addr &= ~(PAGE_SIZE - 1);
+ n = DIV_ROUND_UP(len, PAGE_SIZE);
+ res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, pages);
+ if (unlikely(res < 0))
+ return res;
+ return (res == n ? len : res * PAGE_SIZE) - *start;
+ 0;}),({
+ /* can't be more than PAGE_SIZE */
+ *start = v.bv_offset;
+ get_page(*pages = v.bv_page);
+ return v.bv_len;
+ }),({
+ return -EFAULT;
+ })
+ )
+ return 0;
+}
+EXPORT_SYMBOL(iov_iter_get_pages);
+
+static struct page **get_pages_array(size_t n)
+{
+ struct page **p = kmalloc(n * sizeof(struct page *), GFP_KERNEL);
+ if (!p)
+ p = vmalloc(n * sizeof(struct page *));
+ return p;
+}
+
+ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
+ struct page ***pages, size_t maxsize,
+ size_t *start)
+{
+ struct page **p;
+
+ if (maxsize > i->count)
+ maxsize = i->count;
+
+ if (!maxsize)
+ return 0;
+
+ iterate_all_kinds(i, maxsize, v, ({
+ unsigned long addr = (unsigned long)v.iov_base;
+ size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1));
+ int n;
+ int res;
+
+ addr &= ~(PAGE_SIZE - 1);
+ n = DIV_ROUND_UP(len, PAGE_SIZE);
+ p = get_pages_array(n);
+ if (!p)
+ return -ENOMEM;
+ res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, p);
+ if (unlikely(res < 0)) {
+ kvfree(p);
+ return res;
+ }
+ *pages = p;
+ return (res == n ? len : res * PAGE_SIZE) - *start;
+ 0;}),({
+ /* can't be more than PAGE_SIZE */
+ *start = v.bv_offset;
+ *pages = p = get_pages_array(1);
+ if (!p)
+ return -ENOMEM;
+ get_page(*p = v.bv_page);
+ return v.bv_len;
+ }),({
+ return -EFAULT;
+ })
+ )
+ return 0;
+}
+EXPORT_SYMBOL(iov_iter_get_pages_alloc);
+
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
+ struct iov_iter *i)
+{
+ char *to = addr;
+ __wsum sum, next;
+ size_t off = 0;
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ sum = *csum;
+ iterate_and_advance(i, bytes, v, ({
+ int err = 0;
+ next = csum_and_copy_from_user(v.iov_base,
+ (to += v.iov_len) - v.iov_len,
+ v.iov_len, 0, &err);
+ if (!err) {
+ sum = csum_block_add(sum, next, off);
+ off += v.iov_len;
+ }
+ err ? v.iov_len : 0;
+ }), ({
+ char *p = kmap_atomic(v.bv_page);
+ next = csum_partial_copy_nocheck(p + v.bv_offset,
+ (to += v.bv_len) - v.bv_len,
+ v.bv_len, 0);
+ kunmap_atomic(p);
+ sum = csum_block_add(sum, next, off);
+ off += v.bv_len;
+ }),({
+ next = csum_partial_copy_nocheck(v.iov_base,
+ (to += v.iov_len) - v.iov_len,
+ v.iov_len, 0);
+ sum = csum_block_add(sum, next, off);
+ off += v.iov_len;
+ })
+ )
+ *csum = sum;
+ return bytes;
+}
+EXPORT_SYMBOL(csum_and_copy_from_iter);
+
+size_t csum_and_copy_to_iter(void *addr, size_t bytes, __wsum *csum,
+ struct iov_iter *i)
+{
+ char *from = addr;
+ __wsum sum, next;
+ size_t off = 0;
+ if (unlikely(bytes > i->count))
+ bytes = i->count;
+
+ if (unlikely(!bytes))
+ return 0;
+
+ sum = *csum;
+ iterate_and_advance(i, bytes, v, ({
+ int err = 0;
+ next = csum_and_copy_to_user((from += v.iov_len) - v.iov_len,
+ v.iov_base,
+ v.iov_len, 0, &err);
+ if (!err) {
+ sum = csum_block_add(sum, next, off);
+ off += v.iov_len;
+ }
+ err ? v.iov_len : 0;
+ }), ({
+ char *p = kmap_atomic(v.bv_page);
+ next = csum_partial_copy_nocheck((from += v.bv_len) - v.bv_len,
+ p + v.bv_offset,
+ v.bv_len, 0);
+ kunmap_atomic(p);
+ sum = csum_block_add(sum, next, off);
+ off += v.bv_len;
+ }),({
+ next = csum_partial_copy_nocheck((from += v.iov_len) - v.iov_len,
+ v.iov_base,
+ v.iov_len, 0);
+ sum = csum_block_add(sum, next, off);
+ off += v.iov_len;
+ })
+ )
+ *csum = sum;
+ return bytes;
+}
+EXPORT_SYMBOL(csum_and_copy_to_iter);
+
+int iov_iter_npages(const struct iov_iter *i, int maxpages)
+{
+ size_t size = i->count;
+ int npages = 0;
+
+ if (!size)
+ return 0;
+
+ iterate_all_kinds(i, size, v, ({
+ unsigned long p = (unsigned long)v.iov_base;
+ npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
+ - p / PAGE_SIZE;
+ if (npages >= maxpages)
+ return maxpages;
+ 0;}),({
+ npages++;
+ if (npages >= maxpages)
+ return maxpages;
+ }),({
+ unsigned long p = (unsigned long)v.iov_base;
+ npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
+ - p / PAGE_SIZE;
+ if (npages >= maxpages)
+ return maxpages;
+ })
+ )
+ return npages;
+}
+EXPORT_SYMBOL(iov_iter_npages);
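
The iovec/kvec branches count pages as "index one past the last byte, minus index of the first byte", which handles buffers that start and end mid-page. The arithmetic on its own:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		/* Buffer starting 100 bytes into a page, spanning two pages. */
		unsigned long p = 0x1000 + 100;
		unsigned long len = 2 * PAGE_SIZE;
		unsigned long npages = DIV_ROUND_UP(p + len, PAGE_SIZE)
					- p / PAGE_SIZE;

		printf("pages touched: %lu\n", npages);	/* 3 */
		return 0;
	}
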
+
+const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags)
+{
+ *new = *old;
+ if (new->type & ITER_BVEC)
+ return new->bvec = kmemdup(new->bvec,
+ new->nr_segs * sizeof(struct bio_vec),
+ flags);
+ else
+ /* iovec and kvec have identical layout */
+ return new->iov = kmemdup(new->iov,
+ new->nr_segs * sizeof(struct iovec),
+ flags);
+}
+EXPORT_SYMBOL(dup_iter);
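
dup_iter() copies the iterator state and kmemdup()s the segment array, returning the duplicated array so the caller can free it later. A hedged usage sketch (kernel context; first_attempt/second_attempt are hypothetical callees that consume the iterator they are handed):

	#include <linux/uio.h>
	#include <linux/slab.h>
	#include <linux/errno.h>

	static ssize_t first_attempt(struct iov_iter *i);	/* hypothetical */
	static ssize_t second_attempt(struct iov_iter *i);	/* hypothetical */

	static ssize_t example_retry(struct iov_iter *iter)
	{
		struct iov_iter copy;
		const void *segs = dup_iter(&copy, iter, GFP_KERNEL);
		ssize_t ret;

		if (!segs)
			return -ENOMEM;

		ret = first_attempt(&copy);		/* may consume 'copy' */
		if (ret == -EAGAIN)
			ret = second_attempt(iter);	/* original untouched */

		kfree(segs);	/* free the duplicated segment array */
		return ret;
	}
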
/* Error: request to write beyond destination buffer */
if (cpy > oend)
goto _output_error;
+ if ((ref + COPYLENGTH) > oend ||
+ (op + COPYLENGTH) > oend)
+ goto _output_error;
LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH));
while (op < cpy)
*op++ = *ref++;
if (s->len < s->size) {
len = vsnprintf(s->buffer + s->len, s->size - s->len, fmt, args);
- if (seq_buf_can_fit(s, len)) {
+ if (s->len + len < s->size) {
s->len += len;
return 0;
}
if (s->len < s->size) {
ret = bstr_printf(s->buffer + s->len, len, fmt, binary);
- if (seq_buf_can_fit(s, ret)) {
+ if (s->len + ret < s->size) {
s->len += ret;
return 0;
}
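
Both hunks lean on the vsnprintf()/bstr_printf() contract of returning the length the output would have needed; "s->len + len < s->size" (strictly less) therefore accepts a write only when the text plus its terminating NUL fits. A userspace sketch of the same append-and-check pattern (the overflow marking here is this sketch's choice, not the kernel's exact behaviour):

	#include <stdarg.h>
	#include <stdio.h>

	struct buf { char data[16]; size_t len, size; };

	static int buf_printf(struct buf *s, const char *fmt, ...)
	{
		va_list args;
		int len;

		va_start(args, fmt);
		/* Returns the length the output would have needed. */
		len = vsnprintf(s->data + s->len, s->size - s->len, fmt, args);
		va_end(args);

		if (s->len + len < s->size) {	/* room for text + NUL */
			s->len += len;
			return 0;
		}
		s->len = s->size;		/* mark as overflowed */
		return -1;
	}

	int main(void)
	{
		struct buf b = { .len = 0, .size = sizeof(b.data) };

		printf("%d\n", buf_printf(&b, "0123456789"));	/* 0: fits */
		printf("%d\n", buf_printf(&b, "0123456789"));	/* -1: overflow */
		return 0;
	}
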
mm_init.o mmu_context.o percpu.o slab_common.o \
compaction.o vmacache.o \
interval_tree.o list_lru.o workingset.o \
- iov_iter.o debug.o $(mmu-y)
+ debug.o $(mmu-y)
obj-y += init-mm.o
return (1UL << (align_order - cma->order_per_bit)) - 1;
}
+/*
+ * Find a PFN aligned to the specified order and return an offset represented in
+ * order_per_bits.
+ */
static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order)
{
- unsigned int alignment;
-
if (align_order <= cma->order_per_bit)
return 0;
- alignment = 1UL << (align_order - cma->order_per_bit);
- return ALIGN(cma->base_pfn, alignment) -
- (cma->base_pfn >> cma->order_per_bit);
+
+ return (ALIGN(cma->base_pfn, (1UL << align_order))
+ - cma->base_pfn) >> cma->order_per_bit;
}
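
The rewritten helper computes, in bitmap granules, how far base_pfn is from the next boundary of the requested order; the old version aligned in one unit and subtracted in another. The arithmetic in isolation, with made-up parameters:

	#include <stdio.h>

	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	int main(void)
	{
		unsigned long base_pfn = 0x2468;	/* hypothetical CMA base */
		int align_order = 8, order_per_bit = 1;

		/* Distance to the next order-aligned PFN, in bitmap bits. */
		unsigned long off = (ALIGN(base_pfn, 1UL << align_order)
					- base_pfn) >> order_per_bit;

		printf("bitmap offset: %lu\n", off);	/* 76 */
		return 0;
	}
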
static unsigned long cma_bitmap_maxno(struct cma *cma)
int target_nid, last_cpupid = -1;
bool page_locked;
bool migrated = false;
+ bool was_writable;
int flags = 0;
/* A PROT_NONE fault should not end up here */
flags |= TNF_FAULT_LOCAL;
}
- /*
- * Avoid grouping on DSO/COW pages in specific and RO pages
- * in general, RO pages shouldn't hurt as much anyway since
- * they can be in shared cache state.
- */
- if (!pmd_write(pmd))
+ /* See similar comment in do_numa_page for explanation */
+ if (!(vma->vm_flags & VM_WRITE))
flags |= TNF_NO_GROUP;
/*
if (migrated) {
flags |= TNF_MIGRATED;
page_nid = target_nid;
- }
+ } else
+ flags |= TNF_MIGRATE_FAIL;
goto out;
clear_pmdnuma:
BUG_ON(!PageLocked(page));
+ was_writable = pmd_write(pmd);
pmd = pmd_modify(pmd, vma->vm_page_prot);
+ pmd = pmd_mkyoung(pmd);
+ if (was_writable)
+ pmd = pmd_mkwrite(pmd);
set_pmd_at(mm, haddr, pmdp, pmd);
update_mmu_cache_pmd(vma, addr, pmdp);
unlock_page(page);
if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
pmd_t entry;
+ bool preserve_write = prot_numa && pmd_write(*pmd);
+ ret = 1;
/*
* Avoid trapping faults against the zero page. The read-only
*/
if (prot_numa && is_huge_zero_pmd(*pmd)) {
spin_unlock(ptl);
- return 0;
+ return ret;
}
if (!prot_numa || !pmd_protnone(*pmd)) {
- ret = 1;
entry = pmdp_get_and_clear_notify(mm, addr, pmd);
entry = pmd_modify(entry, newprot);
+ if (preserve_write)
+ entry = pmd_mkwrite(entry);
ret = HPAGE_PMD_NR;
set_pmd_at(mm, addr, pmd, entry);
- BUG_ON(pmd_write(entry));
+ BUG_ON(!preserve_write && pmd_write(entry));
}
spin_unlock(ptl);
}
__SetPageHead(page);
__ClearPageReserved(page);
for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
- __SetPageTail(p);
/*
* For gigantic hugepages allocated through bootmem at
* boot, it's safer to be consistent with the not-gigantic
__ClearPageReserved(p);
set_page_count(p, 0);
p->first_page = page;
+ /* Make sure p->first_page is always valid for PageTail() */
+ smp_wmb();
+ __SetPageTail(p);
}
}
+++ /dev/null
-#include <linux/export.h>
-#include <linux/uio.h>
-#include <linux/pagemap.h>
-#include <linux/slab.h>
-#include <linux/vmalloc.h>
-#include <net/checksum.h>
-
-#define iterate_iovec(i, n, __v, __p, skip, STEP) { \
- size_t left; \
- size_t wanted = n; \
- __p = i->iov; \
- __v.iov_len = min(n, __p->iov_len - skip); \
- if (likely(__v.iov_len)) { \
- __v.iov_base = __p->iov_base + skip; \
- left = (STEP); \
- __v.iov_len -= left; \
- skip += __v.iov_len; \
- n -= __v.iov_len; \
- } else { \
- left = 0; \
- } \
- while (unlikely(!left && n)) { \
- __p++; \
- __v.iov_len = min(n, __p->iov_len); \
- if (unlikely(!__v.iov_len)) \
- continue; \
- __v.iov_base = __p->iov_base; \
- left = (STEP); \
- __v.iov_len -= left; \
- skip = __v.iov_len; \
- n -= __v.iov_len; \
- } \
- n = wanted - n; \
-}
-
-#define iterate_kvec(i, n, __v, __p, skip, STEP) { \
- size_t wanted = n; \
- __p = i->kvec; \
- __v.iov_len = min(n, __p->iov_len - skip); \
- if (likely(__v.iov_len)) { \
- __v.iov_base = __p->iov_base + skip; \
- (void)(STEP); \
- skip += __v.iov_len; \
- n -= __v.iov_len; \
- } \
- while (unlikely(n)) { \
- __p++; \
- __v.iov_len = min(n, __p->iov_len); \
- if (unlikely(!__v.iov_len)) \
- continue; \
- __v.iov_base = __p->iov_base; \
- (void)(STEP); \
- skip = __v.iov_len; \
- n -= __v.iov_len; \
- } \
- n = wanted; \
-}
-
-#define iterate_bvec(i, n, __v, __p, skip, STEP) { \
- size_t wanted = n; \
- __p = i->bvec; \
- __v.bv_len = min_t(size_t, n, __p->bv_len - skip); \
- if (likely(__v.bv_len)) { \
- __v.bv_page = __p->bv_page; \
- __v.bv_offset = __p->bv_offset + skip; \
- (void)(STEP); \
- skip += __v.bv_len; \
- n -= __v.bv_len; \
- } \
- while (unlikely(n)) { \
- __p++; \
- __v.bv_len = min_t(size_t, n, __p->bv_len); \
- if (unlikely(!__v.bv_len)) \
- continue; \
- __v.bv_page = __p->bv_page; \
- __v.bv_offset = __p->bv_offset; \
- (void)(STEP); \
- skip = __v.bv_len; \
- n -= __v.bv_len; \
- } \
- n = wanted; \
-}
-
-#define iterate_all_kinds(i, n, v, I, B, K) { \
- size_t skip = i->iov_offset; \
- if (unlikely(i->type & ITER_BVEC)) { \
- const struct bio_vec *bvec; \
- struct bio_vec v; \
- iterate_bvec(i, n, v, bvec, skip, (B)) \
- } else if (unlikely(i->type & ITER_KVEC)) { \
- const struct kvec *kvec; \
- struct kvec v; \
- iterate_kvec(i, n, v, kvec, skip, (K)) \
- } else { \
- const struct iovec *iov; \
- struct iovec v; \
- iterate_iovec(i, n, v, iov, skip, (I)) \
- } \
-}
-
-#define iterate_and_advance(i, n, v, I, B, K) { \
- size_t skip = i->iov_offset; \
- if (unlikely(i->type & ITER_BVEC)) { \
- const struct bio_vec *bvec; \
- struct bio_vec v; \
- iterate_bvec(i, n, v, bvec, skip, (B)) \
- if (skip == bvec->bv_len) { \
- bvec++; \
- skip = 0; \
- } \
- i->nr_segs -= bvec - i->bvec; \
- i->bvec = bvec; \
- } else if (unlikely(i->type & ITER_KVEC)) { \
- const struct kvec *kvec; \
- struct kvec v; \
- iterate_kvec(i, n, v, kvec, skip, (K)) \
- if (skip == kvec->iov_len) { \
- kvec++; \
- skip = 0; \
- } \
- i->nr_segs -= kvec - i->kvec; \
- i->kvec = kvec; \
- } else { \
- const struct iovec *iov; \
- struct iovec v; \
- iterate_iovec(i, n, v, iov, skip, (I)) \
- if (skip == iov->iov_len) { \
- iov++; \
- skip = 0; \
- } \
- i->nr_segs -= iov - i->iov; \
- i->iov = iov; \
- } \
- i->count -= n; \
- i->iov_offset = skip; \
-}
-
-static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
- struct iov_iter *i)
-{
- size_t skip, copy, left, wanted;
- const struct iovec *iov;
- char __user *buf;
- void *kaddr, *from;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- wanted = bytes;
- iov = i->iov;
- skip = i->iov_offset;
- buf = iov->iov_base + skip;
- copy = min(bytes, iov->iov_len - skip);
-
- if (!fault_in_pages_writeable(buf, copy)) {
- kaddr = kmap_atomic(page);
- from = kaddr + offset;
-
- /* first chunk, usually the only one */
- left = __copy_to_user_inatomic(buf, from, copy);
- copy -= left;
- skip += copy;
- from += copy;
- bytes -= copy;
-
- while (unlikely(!left && bytes)) {
- iov++;
- buf = iov->iov_base;
- copy = min(bytes, iov->iov_len);
- left = __copy_to_user_inatomic(buf, from, copy);
- copy -= left;
- skip = copy;
- from += copy;
- bytes -= copy;
- }
- if (likely(!bytes)) {
- kunmap_atomic(kaddr);
- goto done;
- }
- offset = from - kaddr;
- buf += copy;
- kunmap_atomic(kaddr);
- copy = min(bytes, iov->iov_len - skip);
- }
- /* Too bad - revert to non-atomic kmap */
- kaddr = kmap(page);
- from = kaddr + offset;
- left = __copy_to_user(buf, from, copy);
- copy -= left;
- skip += copy;
- from += copy;
- bytes -= copy;
- while (unlikely(!left && bytes)) {
- iov++;
- buf = iov->iov_base;
- copy = min(bytes, iov->iov_len);
- left = __copy_to_user(buf, from, copy);
- copy -= left;
- skip = copy;
- from += copy;
- bytes -= copy;
- }
- kunmap(page);
-done:
- if (skip == iov->iov_len) {
- iov++;
- skip = 0;
- }
- i->count -= wanted - bytes;
- i->nr_segs -= iov - i->iov;
- i->iov = iov;
- i->iov_offset = skip;
- return wanted - bytes;
-}
-
-static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t bytes,
- struct iov_iter *i)
-{
- size_t skip, copy, left, wanted;
- const struct iovec *iov;
- char __user *buf;
- void *kaddr, *to;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- wanted = bytes;
- iov = i->iov;
- skip = i->iov_offset;
- buf = iov->iov_base + skip;
- copy = min(bytes, iov->iov_len - skip);
-
- if (!fault_in_pages_readable(buf, copy)) {
- kaddr = kmap_atomic(page);
- to = kaddr + offset;
-
- /* first chunk, usually the only one */
- left = __copy_from_user_inatomic(to, buf, copy);
- copy -= left;
- skip += copy;
- to += copy;
- bytes -= copy;
-
- while (unlikely(!left && bytes)) {
- iov++;
- buf = iov->iov_base;
- copy = min(bytes, iov->iov_len);
- left = __copy_from_user_inatomic(to, buf, copy);
- copy -= left;
- skip = copy;
- to += copy;
- bytes -= copy;
- }
- if (likely(!bytes)) {
- kunmap_atomic(kaddr);
- goto done;
- }
- offset = to - kaddr;
- buf += copy;
- kunmap_atomic(kaddr);
- copy = min(bytes, iov->iov_len - skip);
- }
- /* Too bad - revert to non-atomic kmap */
- kaddr = kmap(page);
- to = kaddr + offset;
- left = __copy_from_user(to, buf, copy);
- copy -= left;
- skip += copy;
- to += copy;
- bytes -= copy;
- while (unlikely(!left && bytes)) {
- iov++;
- buf = iov->iov_base;
- copy = min(bytes, iov->iov_len);
- left = __copy_from_user(to, buf, copy);
- copy -= left;
- skip = copy;
- to += copy;
- bytes -= copy;
- }
- kunmap(page);
-done:
- if (skip == iov->iov_len) {
- iov++;
- skip = 0;
- }
- i->count -= wanted - bytes;
- i->nr_segs -= iov - i->iov;
- i->iov = iov;
- i->iov_offset = skip;
- return wanted - bytes;
-}
-
-/*
- * Fault in the first iovec of the given iov_iter, to a maximum length
- * of bytes. Returns 0 on success, or non-zero if the memory could not be
- * accessed (ie. because it is an invalid address).
- *
- * writev-intensive code may want this to prefault several iovecs -- that
- * would be possible (callers must not rely on the fact that _only_ the
- * first iovec will be faulted with the current implementation).
- */
-int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
-{
- if (!(i->type & (ITER_BVEC|ITER_KVEC))) {
- char __user *buf = i->iov->iov_base + i->iov_offset;
- bytes = min(bytes, i->iov->iov_len - i->iov_offset);
- return fault_in_pages_readable(buf, bytes);
- }
- return 0;
-}
-EXPORT_SYMBOL(iov_iter_fault_in_readable);
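
iov_iter_fault_in_readable() exists for the classic write-side pattern: pre-fault the user pages while no page lock is held, then do the non-faulting atomic copy, and simply retry the tail if the copy comes up short. A sketch of that caller loop, assuming kernel context; grab_page() and commit_page() are hypothetical stand-ins for the real ->write_begin/->write_end address-space operations:

	static ssize_t sketch_perform_write(struct address_space *mapping,
					    struct iov_iter *i, loff_t pos)
	{
		ssize_t written = 0;

		while (iov_iter_count(i)) {
			unsigned long offset = pos & (PAGE_SIZE - 1);
			size_t bytes = min_t(size_t, iov_iter_count(i),
					     PAGE_SIZE - offset);
			size_t copied;
			struct page *page;

			/* touch the user pages while no page lock is held ... */
			if (iov_iter_fault_in_readable(i, bytes))
				return written ? written : -EFAULT;

			page = grab_page(mapping, pos);
			/* ... so this non-faulting copy normally succeeds */
			copied = iov_iter_copy_from_user_atomic(page, i,
								offset, bytes);
			commit_page(page, copied);

			iov_iter_advance(i, copied);
			pos += copied;
			written += copied;	/* a short copy retries the tail */
		}
		return written;
	}
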
-
-void iov_iter_init(struct iov_iter *i, int direction,
- const struct iovec *iov, unsigned long nr_segs,
- size_t count)
-{
- /* It will get better. Eventually... */
- if (segment_eq(get_fs(), KERNEL_DS)) {
- direction |= ITER_KVEC;
- i->type = direction;
- i->kvec = (struct kvec *)iov;
- } else {
- i->type = direction;
- i->iov = iov;
- }
- i->nr_segs = nr_segs;
- i->iov_offset = 0;
- i->count = count;
-}
-EXPORT_SYMBOL(iov_iter_init);
-
-static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
-{
- char *from = kmap_atomic(page);
- memcpy(to, from + offset, len);
- kunmap_atomic(from);
-}
-
-static void memcpy_to_page(struct page *page, size_t offset, char *from, size_t len)
-{
- char *to = kmap_atomic(page);
- memcpy(to + offset, from, len);
- kunmap_atomic(to);
-}
-
-static void memzero_page(struct page *page, size_t offset, size_t len)
-{
- char *addr = kmap_atomic(page);
- memset(addr + offset, 0, len);
- kunmap_atomic(addr);
-}
-
-size_t copy_to_iter(void *addr, size_t bytes, struct iov_iter *i)
-{
- char *from = addr;
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- iterate_and_advance(i, bytes, v,
- __copy_to_user(v.iov_base, (from += v.iov_len) - v.iov_len,
- v.iov_len),
- memcpy_to_page(v.bv_page, v.bv_offset,
- (from += v.bv_len) - v.bv_len, v.bv_len),
- memcpy(v.iov_base, (from += v.iov_len) - v.iov_len, v.iov_len)
- )
-
- return bytes;
-}
-EXPORT_SYMBOL(copy_to_iter);
-
-size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
-{
- char *to = addr;
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- iterate_and_advance(i, bytes, v,
- __copy_from_user((to += v.iov_len) - v.iov_len, v.iov_base,
- v.iov_len),
- memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page,
- v.bv_offset, v.bv_len),
- memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
- )
-
- return bytes;
-}
-EXPORT_SYMBOL(copy_from_iter);
-
-size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
-{
- char *to = addr;
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- iterate_and_advance(i, bytes, v,
- __copy_from_user_nocache((to += v.iov_len) - v.iov_len,
- v.iov_base, v.iov_len),
- memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page,
- v.bv_offset, v.bv_len),
- memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
- )
-
- return bytes;
-}
-EXPORT_SYMBOL(copy_from_iter_nocache);
-
-size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
- struct iov_iter *i)
-{
- if (i->type & (ITER_BVEC|ITER_KVEC)) {
- void *kaddr = kmap_atomic(page);
- size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
- kunmap_atomic(kaddr);
- return wanted;
- } else
- return copy_page_to_iter_iovec(page, offset, bytes, i);
-}
-EXPORT_SYMBOL(copy_page_to_iter);
-
-size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
- struct iov_iter *i)
-{
- if (i->type & (ITER_BVEC|ITER_KVEC)) {
- void *kaddr = kmap_atomic(page);
- size_t wanted = copy_from_iter(kaddr + offset, bytes, i);
- kunmap_atomic(kaddr);
- return wanted;
- } else
- return copy_page_from_iter_iovec(page, offset, bytes, i);
-}
-EXPORT_SYMBOL(copy_page_from_iter);
-
-size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
-{
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- iterate_and_advance(i, bytes, v,
- __clear_user(v.iov_base, v.iov_len),
- memzero_page(v.bv_page, v.bv_offset, v.bv_len),
- memset(v.iov_base, 0, v.iov_len)
- )
-
- return bytes;
-}
-EXPORT_SYMBOL(iov_iter_zero);
-
-size_t iov_iter_copy_from_user_atomic(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes)
-{
- char *kaddr = kmap_atomic(page), *p = kaddr + offset;
- iterate_all_kinds(i, bytes, v,
- __copy_from_user_inatomic((p += v.iov_len) - v.iov_len,
- v.iov_base, v.iov_len),
- memcpy_from_page((p += v.bv_len) - v.bv_len, v.bv_page,
- v.bv_offset, v.bv_len),
- memcpy((p += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
- )
- kunmap_atomic(kaddr);
- return bytes;
-}
-EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
-
-void iov_iter_advance(struct iov_iter *i, size_t size)
-{
- iterate_and_advance(i, size, v, 0, 0, 0)
-}
-EXPORT_SYMBOL(iov_iter_advance);
-
-/*
- * Return the count of just the current iov_iter segment.
- */
-size_t iov_iter_single_seg_count(const struct iov_iter *i)
-{
- if (i->nr_segs == 1)
- return i->count;
- else if (i->type & ITER_BVEC)
- return min(i->count, i->bvec->bv_len - i->iov_offset);
- else
- return min(i->count, i->iov->iov_len - i->iov_offset);
-}
-EXPORT_SYMBOL(iov_iter_single_seg_count);
-
-void iov_iter_kvec(struct iov_iter *i, int direction,
- const struct kvec *kvec, unsigned long nr_segs,
- size_t count)
-{
- BUG_ON(!(direction & ITER_KVEC));
- i->type = direction;
- i->kvec = kvec;
- i->nr_segs = nr_segs;
- i->iov_offset = 0;
- i->count = count;
-}
-EXPORT_SYMBOL(iov_iter_kvec);
-
-void iov_iter_bvec(struct iov_iter *i, int direction,
- const struct bio_vec *bvec, unsigned long nr_segs,
- size_t count)
-{
- BUG_ON(!(direction & ITER_BVEC));
- i->type = direction;
- i->bvec = bvec;
- i->nr_segs = nr_segs;
- i->iov_offset = 0;
- i->count = count;
-}
-EXPORT_SYMBOL(iov_iter_bvec);
-
-unsigned long iov_iter_alignment(const struct iov_iter *i)
-{
- unsigned long res = 0;
- size_t size = i->count;
-
- if (!size)
- return 0;
-
- iterate_all_kinds(i, size, v,
- (res |= (unsigned long)v.iov_base | v.iov_len, 0),
- res |= v.bv_offset | v.bv_len,
- res |= (unsigned long)v.iov_base | v.iov_len
- )
- return res;
-}
-EXPORT_SYMBOL(iov_iter_alignment);
-
-ssize_t iov_iter_get_pages(struct iov_iter *i,
- struct page **pages, size_t maxsize, unsigned maxpages,
- size_t *start)
-{
- if (maxsize > i->count)
- maxsize = i->count;
-
- if (!maxsize)
- return 0;
-
- iterate_all_kinds(i, maxsize, v, ({
- unsigned long addr = (unsigned long)v.iov_base;
- size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1));
- int n;
- int res;
-
- if (len > maxpages * PAGE_SIZE)
- len = maxpages * PAGE_SIZE;
- addr &= ~(PAGE_SIZE - 1);
- n = DIV_ROUND_UP(len, PAGE_SIZE);
- res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, pages);
- if (unlikely(res < 0))
- return res;
- return (res == n ? len : res * PAGE_SIZE) - *start;
- 0;}),({
- /* can't be more than PAGE_SIZE */
- *start = v.bv_offset;
- get_page(*pages = v.bv_page);
- return v.bv_len;
- }),({
- return -EFAULT;
- })
- )
- return 0;
-}
-EXPORT_SYMBOL(iov_iter_get_pages);
-
-static struct page **get_pages_array(size_t n)
-{
- struct page **p = kmalloc(n * sizeof(struct page *), GFP_KERNEL);
- if (!p)
- p = vmalloc(n * sizeof(struct page *));
- return p;
-}
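
get_pages_array() is the usual try-kmalloc-then-fall-back-to-vmalloc pattern for allocations sized by caller input; the matching release is kvfree(), which dispatches to the right deallocator either way, as the error path in iov_iter_get_pages_alloc() below does. A minimal sketch of the pairing, assuming kernel context:

	struct page **pages = get_pages_array(n);

	if (!pages)
		return -ENOMEM;
	/* ... fill and consume pages[0..n-1] ... */
	kvfree(pages);	/* correct for both kmalloc() and vmalloc() memory */
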
-
-ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
- struct page ***pages, size_t maxsize,
- size_t *start)
-{
- struct page **p;
-
- if (maxsize > i->count)
- maxsize = i->count;
-
- if (!maxsize)
- return 0;
-
- iterate_all_kinds(i, maxsize, v, ({
- unsigned long addr = (unsigned long)v.iov_base;
- size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1));
- int n;
- int res;
-
- addr &= ~(PAGE_SIZE - 1);
- n = DIV_ROUND_UP(len, PAGE_SIZE);
- p = get_pages_array(n);
- if (!p)
- return -ENOMEM;
- res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, p);
- if (unlikely(res < 0)) {
- kvfree(p);
- return res;
- }
- *pages = p;
- return (res == n ? len : res * PAGE_SIZE) - *start;
- 0;}),({
- /* can't be more than PAGE_SIZE */
- *start = v.bv_offset;
- *pages = p = get_pages_array(1);
- if (!p)
- return -ENOMEM;
- get_page(*p = v.bv_page);
- return v.bv_len;
- }),({
- return -EFAULT;
- })
- )
- return 0;
-}
-EXPORT_SYMBOL(iov_iter_get_pages_alloc);
-
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
- struct iov_iter *i)
-{
- char *to = addr;
- __wsum sum, next;
- size_t off = 0;
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- sum = *csum;
- iterate_and_advance(i, bytes, v, ({
- int err = 0;
- next = csum_and_copy_from_user(v.iov_base,
- (to += v.iov_len) - v.iov_len,
- v.iov_len, 0, &err);
- if (!err) {
- sum = csum_block_add(sum, next, off);
- off += v.iov_len;
- }
- err ? v.iov_len : 0;
- }), ({
- char *p = kmap_atomic(v.bv_page);
- next = csum_partial_copy_nocheck(p + v.bv_offset,
- (to += v.bv_len) - v.bv_len,
- v.bv_len, 0);
- kunmap_atomic(p);
- sum = csum_block_add(sum, next, off);
- off += v.bv_len;
- }),({
- next = csum_partial_copy_nocheck(v.iov_base,
- (to += v.iov_len) - v.iov_len,
- v.iov_len, 0);
- sum = csum_block_add(sum, next, off);
- off += v.iov_len;
- })
- )
- *csum = sum;
- return bytes;
-}
-EXPORT_SYMBOL(csum_and_copy_from_iter);
-
-size_t csum_and_copy_to_iter(void *addr, size_t bytes, __wsum *csum,
- struct iov_iter *i)
-{
- char *from = addr;
- __wsum sum, next;
- size_t off = 0;
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- sum = *csum;
- iterate_and_advance(i, bytes, v, ({
- int err = 0;
- next = csum_and_copy_to_user((from += v.iov_len) - v.iov_len,
- v.iov_base,
- v.iov_len, 0, &err);
- if (!err) {
- sum = csum_block_add(sum, next, off);
- off += v.iov_len;
- }
- err ? v.iov_len : 0;
- }), ({
- char *p = kmap_atomic(v.bv_page);
- next = csum_partial_copy_nocheck((from += v.bv_len) - v.bv_len,
- p + v.bv_offset,
- v.bv_len, 0);
- kunmap_atomic(p);
- sum = csum_block_add(sum, next, off);
- off += v.bv_len;
- }),({
- next = csum_partial_copy_nocheck((from += v.iov_len) - v.iov_len,
- v.iov_base,
- v.iov_len, 0);
- sum = csum_block_add(sum, next, off);
- off += v.iov_len;
- })
- )
- *csum = sum;
- return bytes;
-}
-EXPORT_SYMBOL(csum_and_copy_to_iter);
-
-int iov_iter_npages(const struct iov_iter *i, int maxpages)
-{
- size_t size = i->count;
- int npages = 0;
-
- if (!size)
- return 0;
-
- iterate_all_kinds(i, size, v, ({
- unsigned long p = (unsigned long)v.iov_base;
- npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
- - p / PAGE_SIZE;
- if (npages >= maxpages)
- return maxpages;
- 0;}),({
- npages++;
- if (npages >= maxpages)
- return maxpages;
- }),({
- unsigned long p = (unsigned long)v.iov_base;
- npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
- - p / PAGE_SIZE;
- if (npages >= maxpages)
- return maxpages;
- })
- )
- return npages;
-}
-EXPORT_SYMBOL(iov_iter_npages);
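
iov_iter_npages() counts the pages a segment touches as DIV_ROUND_UP(addr + len, PAGE_SIZE) - addr / PAGE_SIZE, i.e. one past the last page index minus the first page index. The same arithmetic checked standalone (plain C, 4 KiB pages assumed):

	#include <assert.h>
	#include <stddef.h>

	#define PAGE_SIZE 4096UL
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	static unsigned long span_pages(unsigned long addr, size_t len)
	{
		return DIV_ROUND_UP(addr + len, PAGE_SIZE) - addr / PAGE_SIZE;
	}

	int main(void)
	{
		assert(span_pages(0, 4096) == 1);	/* exactly one page */
		assert(span_pages(4095, 2) == 2);	/* straddles a boundary */
		assert(span_pages(8192, 0) == 0);	/* empty segment */
		return 0;
	}
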
#include <linux/stacktrace.h>
#include <linux/string.h>
#include <linux/types.h>
+#include <linux/vmalloc.h>
#include <linux/kasan.h>
#include "kasan.h"
GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
__builtin_return_address(0));
- return ret ? 0 : -ENOMEM;
+
+ if (ret) {
+ find_vm_area(addr)->flags |= VM_KASAN;
+ return 0;
+ }
+
+ return -ENOMEM;
}
-void kasan_module_free(void *addr)
+void kasan_free_shadow(const struct vm_struct *vm)
{
- vfree(kasan_mem_to_shadow(addr));
+ if (vm->flags & VM_KASAN)
+ vfree(kasan_mem_to_shadow(vm->addr));
}
static void register_global(struct kasan_global *global)
* on for the root memcg is enough.
*/
if (cgroup_on_dfl(root_css->cgroup))
- mem_cgroup_from_css(root_css)->use_hierarchy = true;
+ root_mem_cgroup->use_hierarchy = true;
+ else
+ root_mem_cgroup->use_hierarchy = false;
}
static u64 memory_current_read(struct cgroup_subsys_state *css,
int last_cpupid;
int target_nid;
bool migrated = false;
+ bool was_writable = pte_write(pte);
int flags = 0;
/* A PROT_NONE fault should not end up here */
/* Make it present again */
pte = pte_modify(pte, vma->vm_page_prot);
pte = pte_mkyoung(pte);
+ if (was_writable)
+ pte = pte_mkwrite(pte);
set_pte_at(mm, addr, ptep, pte);
update_mmu_cache(vma, addr, ptep);
}
/*
- * Avoid grouping on DSO/COW pages in specific and RO pages
- * in general, RO pages shouldn't hurt as much anyway since
- * they can be in shared cache state.
+ * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
+ * much anyway since they can be in shared cache state. This misses
+ * the case where a mapping is writable but the process never writes
+ * to it: pte_write gets cleared during protection updates, and
+ * pte_dirty has unpredictable behaviour between PTE scan updates,
+ * background writeback, dirty balancing and application behaviour.
*/
- if (!pte_write(pte))
+ if (!(vma->vm_flags & VM_WRITE))
flags |= TNF_NO_GROUP;
/*
if (migrated) {
page_nid = target_nid;
flags |= TNF_MIGRATED;
- }
+ } else
+ flags |= TNF_MIGRATE_FAIL;
out:
if (page_nid != -1)
return NULL;
arch_refresh_nodedata(nid, pgdat);
+ } else {
+ /* Reset the nr_zones and classzone_idx to 0 before reuse */
+ pgdat->nr_zones = 0;
+ pgdat->classzone_idx = 0;
}
/* we can use NODE_DATA(nid) from here */
if (is_vmalloc_addr(zone->wait_table))
vfree(zone->wait_table);
}
-
- /*
- * Since there is no way to guarantee the address of pgdat/zone is not
- * on the stack of any kernel thread or used by other kernel objects
- * without reference counting or other synchronizing method, do not
- * reset node_data and free pgdat here. Just reset it to 0 and reuse
- * the memory when the node is online again.
- */
- memset(pgdat, 0, sizeof(*pgdat));
}
EXPORT_SYMBOL(try_offline_node);
int can_do_mlock(void)
{
- if (capable(CAP_IPC_LOCK))
- return 1;
if (rlimit(RLIMIT_MEMLOCK) != 0)
return 1;
+ if (capable(CAP_IPC_LOCK))
+ return 1;
return 0;
}
EXPORT_SYMBOL(can_do_mlock);
importer->anon_vma = exporter->anon_vma;
error = anon_vma_clone(importer, exporter);
- if (error) {
- importer->anon_vma = NULL;
+ if (error)
return error;
- }
}
}
oldpte = *pte;
if (pte_present(oldpte)) {
pte_t ptent;
+ bool preserve_write = prot_numa && pte_write(oldpte);
/*
* Avoid trapping faults against the zero or KSM
ptent = ptep_modify_prot_start(mm, addr, pte);
ptent = pte_modify(ptent, newprot);
+ if (preserve_write)
+ ptent = pte_mkwrite(ptent);
/* Avoid taking write faults for known dirty pages */
if (dirty_accountable && pte_dirty(ptent) &&
EXPORT_SYMBOL(high_memory);
struct page *mem_map;
unsigned long max_mapnr;
+EXPORT_SYMBOL(max_mapnr);
unsigned long highest_memmap_pfn;
struct percpu_counter vm_committed_as;
int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
* bw * elapsed + write_bandwidth * (period - elapsed)
* write_bandwidth = ---------------------------------------------------
* period
+ *
+ * @written may have decreased due to account_page_redirty().
+ * Avoid underflowing @bw calculation.
*/
- bw = written - bdi->written_stamp;
+ bw = written - min(written, bdi->written_stamp);
bw *= HZ;
if (unlikely(elapsed > period)) {
do_div(bw, elapsed);
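
The clamp above matters because written is sampled from a counter that account_page_redirty() can pull back below written_stamp; with unsigned arithmetic the plain subtraction would wrap to a huge bogus sample instead of zero. A simplified standalone model of the period-weighted bandwidth update with the same guard (plain C; the HZ value and the division by period are illustrative, the kernel shifts by ilog2(period)):

	#include <stdio.h>

	#define HZ 100UL	/* illustrative */

	static unsigned long min_ul(unsigned long a, unsigned long b)
	{
		return a < b ? a : b;
	}

	/* One step of the estimate:
	 *              bw * elapsed + old_bw * (period - elapsed)
	 *   new_bw  =  ------------------------------------------
	 *                              period
	 * where bw is the completion rate observed since written_stamp. */
	static unsigned long update_bw(unsigned long old_bw,
				       unsigned long written,
				       unsigned long written_stamp,
				       unsigned long elapsed,
				       unsigned long period)
	{
		/* written may have dropped below the stamp: clamp, don't wrap */
		unsigned long bw = (written - min_ul(written, written_stamp)) * HZ;

		if (elapsed > period)
			return bw / elapsed;
		return (bw + old_bw * (period - elapsed)) / period;
	}

	int main(void)
	{
		/* redirty pulled 'written' (90) below the stamp (100): the
		 * sample contributes 0 instead of a wrapped-around value */
		printf("%lu\n", update_bw(500, 90, 100, 50, 300));	/* 416 */
		return 0;
	}
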
unsigned long now)
{
static DEFINE_SPINLOCK(dirty_lock);
- static unsigned long update_time;
+ static unsigned long update_time = INITIAL_JIFFIES;
/*
* check locklessly first to optimize away locking for the most time
goto out;
}
/* Exhausted what can be done so it's blamo time */
- if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false))
+ if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false)
+ || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
*did_some_progress = 1;
out:
oom_zonelist_unlock(ac->zonelist, gfp_mask);
if (!is_migrate_isolate_page(buddy)) {
__isolate_free_page(page, order);
+ kernel_map_pages(page, (1 << order), 1);
set_page_refcounted(page);
isolated_page = page;
}
vma = vma->vm_next;
err = walk_page_test(start, next, walk);
- if (err > 0)
+ if (err > 0) {
+ /*
+ * positive return values are purely for
+ * controlling the pagewalk, so should never
+ * be passed to the callers.
+ */
+ err = 0;
continue;
+ }
if (err < 0)
break;
}
return 0;
enomem_failure:
+ /*
+ * dst->anon_vma is dropped here; otherwise its degree can be incorrectly
+ * decremented in unlink_anon_vmas().
+ * We can safely do this because callers of anon_vma_clone() don't care
+ * about dst->anon_vma if anon_vma_clone() failed.
+ */
+ dst->anon_vma = NULL;
unlink_anon_vmas(dst);
return -ENOMEM;
}
do {
tid = this_cpu_read(s->cpu_slab->tid);
c = raw_cpu_ptr(s->cpu_slab);
- } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
+ } while (IS_ENABLED(CONFIG_PREEMPT) &&
+ unlikely(tid != READ_ONCE(c->tid)));
/*
* Irqless object alloc/free algorithm used here depends on sequence
do {
tid = this_cpu_read(s->cpu_slab->tid);
c = raw_cpu_ptr(s->cpu_slab);
- } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
+ } while (IS_ENABLED(CONFIG_PREEMPT) &&
+ unlikely(tid != READ_ONCE(c->tid)));
/* Same with comment on barrier() in slab_alloc_node() */
barrier();
spin_unlock(&vmap_area_lock);
vmap_debug_free_range(va->va_start, va->va_end);
+ kasan_free_shadow(vm);
free_unmap_vmap_area(va);
vm->size -= PAGE_SIZE;
static void p9_virtio_remove(struct virtio_device *vdev)
{
struct virtio_chan *chan = vdev->priv;
-
- if (chan->inuse)
- p9_virtio_close(chan->client);
- vdev->config->del_vqs(vdev);
+ unsigned long warning_time;
mutex_lock(&virtio_9p_lock);
+
+ /* Remove self from list so we don't get new users. */
list_del(&chan->chan_list);
+ warning_time = jiffies;
+
+ /* Wait for existing users to close. */
+ while (chan->inuse) {
+ mutex_unlock(&virtio_9p_lock);
+ msleep(250);
+ if (time_after(jiffies, warning_time + 10 * HZ)) {
+ dev_emerg(&vdev->dev,
+ "p9_virtio_remove: waiting for device in use.\n");
+ warning_time = jiffies;
+ }
+ mutex_lock(&virtio_9p_lock);
+ }
+
mutex_unlock(&virtio_9p_lock);
+
+ vdev->config->del_vqs(vdev);
+
sysfs_remove_file(&(vdev->dev.kobj), &dev_attr_mount_tag.attr);
kobject_uevent(&(vdev->dev.kobj), KOBJ_CHANGE);
kfree(chan->tag);
*/
del_nbp(p);
+ dev_set_mtu(br->dev, br_min_mtu(br));
+
spin_lock_bh(&br->lock);
changed_addr = br_stp_recalculate_bridge_id(br);
spin_unlock_bh(&br->lock);
int copylen;
ret = -EOPNOTSUPP;
- if (m->msg_flags&MSG_OOB)
+ if (flags & MSG_OOB)
goto read_error;
skb = skb_recv_datagram(sk, flags, 0 , &ret);
goto inval_skb;
}
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ skb_reset_mac_header(skb);
skb_reset_network_header(skb);
skb_reset_transport_header(skb);
__get_user(kmsg->msg_controllen, &umsg->msg_controllen) ||
__get_user(kmsg->msg_flags, &umsg->msg_flags))
return -EFAULT;
+
+ if (!uaddr)
+ kmsg->msg_namelen = 0;
+
+ if (kmsg->msg_namelen < 0)
+ return -EINVAL;
+
if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
kmsg->msg_namelen = sizeof(struct sockaddr_storage);
kmsg->msg_control = compat_ptr(tmp3);
}
}
err = rtnl_configure_link(dev, ifm);
- if (err < 0) {
- if (ops->newlink) {
- LIST_HEAD(list_kill);
-
- ops->dellink(dev, &list_kill);
- unregister_netdevice_many(&list_kill);
- } else {
- unregister_netdevice(dev);
- }
- goto out;
- }
-
+ if (err < 0)
+ goto out_unregister;
if (link_net) {
err = dev_change_net_namespace(dev, dest_net, ifname);
if (err < 0)
- unregister_netdevice(dev);
+ goto out_unregister;
}
out:
if (link_net)
put_net(link_net);
put_net(dest_net);
return err;
+out_unregister:
+ if (ops->newlink) {
+ LIST_HEAD(list_kill);
+
+ ops->dellink(dev, &list_kill);
+ unregister_netdevice_many(&list_kill);
+ } else {
+ unregister_netdevice(dev);
+ }
+ goto out;
}
}
struct sock *sk, int tstype)
{
struct sk_buff *skb;
- bool tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY;
+ bool tsonly;
- if (!sk || !skb_may_tx_timestamp(sk, tsonly))
+ if (!sk)
+ return;
+
+ tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY;
+ if (!skb_may_tx_timestamp(sk, tsonly))
return;
if (tsonly)
skb->ignore_df = 0;
skb_dst_drop(skb);
skb->mark = 0;
- skb->sender_cpu = 0;
+ skb_sender_cpu_clear(skb);
skb_init_secmark(skb);
secpath_reset(skb);
nf_reset(skb);
}
EXPORT_SYMBOL(sock_rfree);
+/*
+ * Buffer destructor for skbs that are not used directly in read or write
+ * path, e.g. for error handler skbs. Automatically called from kfree_skb.
+ */
void sock_efree(struct sk_buff *skb)
{
sock_put(skb->sk);
static int zero = 0;
static int one = 1;
static int ushort_max = USHRT_MAX;
+static int min_sndbuf = SOCK_MIN_SNDBUF;
+static int min_rcvbuf = SOCK_MIN_RCVBUF;
static int net_msg_warn; /* Unused, but still a sysctl */
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .extra1 = &min_sndbuf,
},
{
.procname = "rmem_max",
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .extra1 = &min_rcvbuf,
},
{
.procname = "wmem_default",
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .extra1 = &min_sndbuf,
},
{
.procname = "rmem_default",
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .extra1 = &min_rcvbuf,
},
{
.procname = "dev_weight",
release_sock(sk);
if (reqsk_queue_empty(&icsk->icsk_accept_queue))
timeo = schedule_timeout(timeo);
+ sched_annotate_sleep();
lock_sock(sk);
err = 0;
if (!reqsk_queue_empty(&icsk->icsk_accept_queue))
mutex_unlock(&inet_diag_table_mutex);
}
+static size_t inet_sk_attr_size(void)
+{
+ return nla_total_size(sizeof(struct tcp_info))
+ + nla_total_size(1) /* INET_DIAG_SHUTDOWN */
+ + nla_total_size(1) /* INET_DIAG_TOS */
+ + nla_total_size(1) /* INET_DIAG_TCLASS */
+ + nla_total_size(sizeof(struct inet_diag_meminfo))
+ + nla_total_size(sizeof(struct inet_diag_msg))
+ + nla_total_size(SK_MEMINFO_VARS * sizeof(u32))
+ + nla_total_size(TCP_CA_NAME_MAX)
+ + nla_total_size(sizeof(struct tcpvegas_info))
+ + 64;
+}
+
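
The switch to inet_sk_attr_size() accounts for each attribute separately because every netlink attribute carries its own 4-byte, 4-byte-aligned header; summing raw sizeof()s, as the old code did, undercounts. The sizing rule in isolation (standalone C; the macros mirror include/net/netlink.h):

	#include <stdio.h>

	struct nlattr { unsigned short nla_len, nla_type; };

	/* Every attribute is a 4-byte header plus payload, padded to a
	 * 4-byte boundary. */
	#define NLA_ALIGNTO	4
	#define NLA_ALIGN(len)	(((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
	#define NLA_HDRLEN	((int)NLA_ALIGN(sizeof(struct nlattr)))

	static int nla_total_size(int payload)
	{
		return NLA_ALIGN(NLA_HDRLEN + payload);
	}

	int main(void)
	{
		/* a 1-byte attribute such as INET_DIAG_TOS still costs 8 bytes */
		printf("%d\n", nla_total_size(1));	/* 8 */
		printf("%d\n", nla_total_size(16));	/* 20 */
		return 0;
	}
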
int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
struct sk_buff *skb, struct inet_diag_req_v2 *req,
struct user_namespace *user_ns,
if (err)
goto out;
- rep = nlmsg_new(sizeof(struct inet_diag_msg) +
- sizeof(struct inet_diag_meminfo) +
- sizeof(struct tcp_info) + 64, GFP_KERNEL);
+ rep = nlmsg_new(inet_sk_attr_size(), GFP_KERNEL);
if (!rep) {
err = -ENOMEM;
goto out;
if (unlikely(opt->optlen))
ip_forward_options(skb);
+ skb_sender_cpu_clear(skb);
return dst_output(skb);
}
struct sk_buff *ip_check_defrag(struct sk_buff *skb, u32 user)
{
struct iphdr iph;
+ int netoff;
u32 len;
if (skb->protocol != htons(ETH_P_IP))
return skb;
- if (skb_copy_bits(skb, 0, &iph, sizeof(iph)) < 0)
+ netoff = skb_network_offset(skb);
+
+ if (skb_copy_bits(skb, netoff, &iph, sizeof(iph)) < 0)
return skb;
if (iph.ihl < 5 || iph.version != 4)
return skb;
len = ntohs(iph.tot_len);
- if (skb->len < len || len < (iph.ihl * 4))
+ if (skb->len < netoff + len || len < (iph.ihl * 4))
return skb;
if (ip_is_fragment(&iph)) {
skb = skb_share_check(skb, GFP_ATOMIC);
if (skb) {
- if (!pskb_may_pull(skb, iph.ihl*4))
+ if (!pskb_may_pull(skb, netoff + iph.ihl * 4))
return skb;
- if (pskb_trim_rcsum(skb, len))
+ if (pskb_trim_rcsum(skb, netoff + len))
return skb;
memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
if (ip_defrag(skb, user))
kfree_skb(skb);
}
-static bool ipv4_pktinfo_prepare_errqueue(const struct sock *sk,
- const struct sk_buff *skb,
- int ee_origin)
+/* IPv4 supports cmsg on all icmp errors and some timestamps
+ *
+ * Timestamp code paths do not initialize the fields expected by cmsg:
+ * the PKTINFO fields in skb->cb[]. Fill those in here.
+ */
+static bool ipv4_datagram_support_cmsg(const struct sock *sk,
+ struct sk_buff *skb,
+ int ee_origin)
{
- struct in_pktinfo *info = PKTINFO_SKB_CB(skb);
+ struct in_pktinfo *info;
+
+ if (ee_origin == SO_EE_ORIGIN_ICMP)
+ return true;
- if ((ee_origin != SO_EE_ORIGIN_TIMESTAMPING) ||
- (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) ||
+ if (ee_origin == SO_EE_ORIGIN_LOCAL)
+ return false;
+
+ /* Support IP_PKTINFO on tstamp packets if requested, to correlate
+ * timestamp with egress dev. Not possible for packets without dev
+ * or without payload (SOF_TIMESTAMPING_OPT_TSONLY).
+ */
+ if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) ||
(!skb->dev))
return false;
+ info = PKTINFO_SKB_CB(skb);
info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr;
info->ipi_ifindex = skb->dev->ifindex;
return true;
serr = SKB_EXT_ERR(skb);
- if (sin && skb->len) {
+ if (sin && serr->port) {
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) +
serr->addr_offset);
sin = &errhdr.offender;
memset(sin, 0, sizeof(*sin));
- if (skb->len &&
- (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||
- ipv4_pktinfo_prepare_errqueue(sk, skb, serr->ee.ee_origin))) {
+ if (ipv4_datagram_support_cmsg(sk, skb, serr->ee.ee_origin)) {
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
if (inet_sk(sk)->cmsg_flags)
&chainname, &comment, &rulenum) != 0)
break;
- nf_log_packet(net, AF_INET, hook, skb, in, out, &trace_loginfo,
- "TRACE: %s:%s:%s:%u ",
- tablename, chainname, comment, rulenum);
+ nf_log_trace(net, AF_INET, hook, skb, in, out, &trace_loginfo,
+ "TRACE: %s:%s:%s:%u ",
+ tablename, chainname, comment, rulenum);
}
#endif
kgid_t low, high;
int ret = 0;
+ if (sk->sk_family == AF_INET6)
+ sk->sk_ipv6only = 1;
+
inet_get_ping_group_range_net(net, &low, &high);
if (gid_lte(low, group) && gid_lte(group, high))
return 0;
if (addr_len < sizeof(*addr))
return -EINVAL;
+ if (addr->sin_family != AF_INET &&
+ !(addr->sin_family == AF_UNSPEC &&
+ addr->sin_addr.s_addr == htonl(INADDR_ANY)))
+ return -EAFNOSUPPORT;
+
pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));
return -EINVAL;
if (addr->sin6_family != AF_INET6)
- return -EINVAL;
+ return -EAFNOSUPPORT;
pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n",
sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port));
if (msg->msg_namelen < sizeof(*usin))
return -EINVAL;
if (usin->sin_family != AF_INET)
- return -EINVAL;
+ return -EAFNOSUPPORT;
daddr = usin->sin_addr.s_addr;
/* no remote port */
} else {
int large_allowed)
{
struct tcp_sock *tp = tcp_sk(sk);
- u32 new_size_goal, size_goal, hlen;
+ u32 new_size_goal, size_goal;
if (!large_allowed || !sk_can_gso(sk))
return mss_now;
- /* Maybe we should/could use sk->sk_prot->max_header here ? */
- hlen = inet_csk(sk)->icsk_af_ops->net_header_len +
- inet_csk(sk)->icsk_ext_hdr_len +
- tp->tcp_header_len;
-
- new_size_goal = sk->sk_gso_max_size - 1 - hlen;
+ /* Note : tcp_tso_autosize() will eventually split this later */
+ new_size_goal = sk->sk_gso_max_size - 1 - MAX_TCP_HEADER;
new_size_goal = tcp_bound_to_half_wnd(tp, new_size_goal);
/* We try hard to avoid divides here */
*/
void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)
{
+ /* If credits accumulated at a higher w, apply them gently now. */
+ if (tp->snd_cwnd_cnt >= w) {
+ tp->snd_cwnd_cnt = 0;
+ tp->snd_cwnd++;
+ }
+
tp->snd_cwnd_cnt += acked;
if (tp->snd_cwnd_cnt >= w) {
u32 delta = tp->snd_cwnd_cnt / w;
}
}
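
The new check flushes credits that accumulated while w was larger: without it, the stale snd_cwnd_cnt would be divided by the new, smaller w and burst several window increments at once. A standalone model of the fixed step (plain C; the delta bookkeeping after the diff context is reconstructed from the surrounding function):

	#include <stdio.h>

	struct cc { unsigned cwnd, cnt; };

	/* Additive increase: grow cwnd by one segment per w ACKed segments.
	 * Credits banked while w was larger are flushed as a single
	 * increment first, not amplified by the division below. */
	static void cong_avoid_ai(struct cc *tp, unsigned w, unsigned acked)
	{
		if (tp->cnt >= w) {	/* leftovers from a higher w */
			tp->cnt = 0;
			tp->cwnd++;
		}

		tp->cnt += acked;
		if (tp->cnt >= w) {
			unsigned delta = tp->cnt / w;

			tp->cnt -= delta * w;
			tp->cwnd += delta;
		}
	}

	int main(void)
	{
		/* cnt built up to 90 while w was 100; w then dropped to 10 */
		struct cc tp = { 10, 90 };

		cong_avoid_ai(&tp, 10, 1);
		/* one gentle increment, not a burst of nine */
		printf("cwnd=%u cnt=%u\n", tp.cwnd, tp.cnt);	/* cwnd=11 cnt=1 */
		return 0;
	}
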
- if (ca->cnt == 0) /* cannot be zero */
- ca->cnt = 1;
+ /* The maximum rate of cwnd increase CUBIC allows is 1 packet per
+ * 2 packets ACKed, meaning cwnd grows at 1.5x per RTT.
+ */
+ ca->cnt = max(ca->cnt, 2U);
}
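
Worked out: with ca->cnt clamped to at least 2, CUBIC adds at most one segment per two ACKs. A window of cwnd segments generates roughly cwnd ACKs per RTT, so per RTT:

	increase <= cwnd / 2,  so  cwnd_next <= cwnd + cwnd/2 = 1.5 * cwnd

whereas the old cnt = 1 clamp permitted one segment per ACK, i.e. doubling per RTT, which is slow-start growth inside congestion avoidance.
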
static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
} else {
/* Socket is locked, keep trying until memory is available. */
for (;;) {
- skb = alloc_skb_fclone(MAX_TCP_HEADER,
- sk->sk_allocation);
+ skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
if (skb)
break;
yield();
}
-
- /* Reserve space for headers and prepare control bits. */
- skb_reserve(skb, MAX_TCP_HEADER);
/* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */
tcp_init_nondata_skb(skb, tp->write_seq,
TCPHDR_ACK | TCPHDR_FIN);
return err;
IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE;
+ skb->protocol = htons(ETH_P_IP);
return x->outer_mode->output2(x, skb);
}
int xfrm4_output_finish(struct sk_buff *skb)
{
memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
- skb->protocol = htons(ETH_P_IP);
#ifdef CONFIG_NETFILTER
IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
kfree_skb(skb);
}
-static void ip6_datagram_prepare_pktinfo_errqueue(struct sk_buff *skb)
+/* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL.
+ *
+ * At one point, excluding local errors was a quick test to identify icmp/icmp6
+ * errors. This is no longer true, but the test remained, so the v6 stack,
+ * unlike v4, also honors cmsg requests on all wifi and timestamp errors.
+ *
+ * Timestamp code paths do not initialize the fields expected by cmsg:
+ * the PKTINFO fields in skb->cb[]. Fill those in here.
+ */
+static bool ip6_datagram_support_cmsg(struct sk_buff *skb,
+ struct sock_exterr_skb *serr)
{
- int ifindex = skb->dev ? skb->dev->ifindex : -1;
+ if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||
+ serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6)
+ return true;
+
+ if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL)
+ return false;
+
+ if (!skb->dev)
+ return false;
if (skb->protocol == htons(ETH_P_IPV6))
- IP6CB(skb)->iif = ifindex;
+ IP6CB(skb)->iif = skb->dev->ifindex;
else
- PKTINFO_SKB_CB(skb)->ipi_ifindex = ifindex;
+ PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex;
+
+ return true;
}
/*
serr = SKB_EXT_ERR(skb);
- if (sin && skb->len) {
+ if (sin && serr->port) {
const unsigned char *nh = skb_network_header(skb);
sin->sin6_family = AF_INET6;
sin->sin6_flowinfo = 0;
memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
sin = &errhdr.offender;
memset(sin, 0, sizeof(*sin));
- if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL && skb->len) {
+
+ if (ip6_datagram_support_cmsg(skb, serr)) {
sin->sin6_family = AF_INET6;
- if (np->rxopt.all) {
- if (serr->ee.ee_origin != SO_EE_ORIGIN_ICMP &&
- serr->ee.ee_origin != SO_EE_ORIGIN_ICMP6)
- ip6_datagram_prepare_pktinfo_errqueue(skb);
+ if (np->rxopt.all)
ip6_datagram_recv_common_ctl(sk, msg, skb);
- }
if (skb->protocol == htons(ETH_P_IPV6)) {
sin->sin6_addr = ipv6_hdr(skb)->saddr;
if (np->rxopt.all)
goto again;
flp6->saddr = saddr;
}
+ err = rt->dst.error;
goto out;
}
again:
static inline int ip6_forward_finish(struct sk_buff *skb)
{
+ skb_sender_cpu_clear(skb);
return dst_output(skb);
}
* Create tunnel matching given parameters.
*
* Return:
- * created tunnel or NULL
+ * created tunnel or error pointer
**/
static struct ip6_tnl *ip6_tnl_create(struct net *net, struct __ip6_tnl_parm *p)
struct net_device *dev;
struct ip6_tnl *t;
char name[IFNAMSIZ];
- int err;
+ int err = -ENOMEM;
if (p->name[0])
strlcpy(name, p->name, IFNAMSIZ);
failed_free:
ip6_dev_free(dev);
failed:
- return NULL;
+ return ERR_PTR(err);
}
/**
* tunnel device is created and registered for use.
*
* Return:
- * matching tunnel or NULL
+ * matching tunnel or error pointer
**/
static struct ip6_tnl *ip6_tnl_locate(struct net *net,
if (ipv6_addr_equal(local, &t->parms.laddr) &&
ipv6_addr_equal(remote, &t->parms.raddr)) {
if (create)
- return NULL;
+ return ERR_PTR(-EEXIST);
return t;
}
}
if (!create)
- return NULL;
+ return ERR_PTR(-ENODEV);
return ip6_tnl_create(net, p);
}
}
ip6_tnl_parm_from_user(&p1, &p);
t = ip6_tnl_locate(net, &p1, 0);
- if (t == NULL)
+ if (IS_ERR(t))
t = netdev_priv(dev);
} else {
memset(&p, 0, sizeof(p));
ip6_tnl_parm_from_user(&p1, &p);
t = ip6_tnl_locate(net, &p1, cmd == SIOCADDTUNNEL);
if (cmd == SIOCCHGTUNNEL) {
- if (t != NULL) {
+ if (!IS_ERR(t)) {
if (t->dev != dev) {
err = -EEXIST;
break;
else
err = ip6_tnl_update(t, &p1);
}
- if (t) {
+ if (!IS_ERR(t)) {
err = 0;
ip6_tnl_parm_to_user(&p, &t->parms);
if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p)))
err = -EFAULT;
- } else
- err = (cmd == SIOCADDTUNNEL ? -ENOBUFS : -ENOENT);
+ } else {
+ err = PTR_ERR(t);
+ }
break;
case SIOCDELTUNNEL:
err = -EPERM;
err = -ENOENT;
ip6_tnl_parm_from_user(&p1, &p);
t = ip6_tnl_locate(net, &p1, 0);
- if (t == NULL)
+ if (IS_ERR(t))
break;
err = -EPERM;
if (t->dev == ip6n->fb_tnl_dev)
struct nlattr *tb[], struct nlattr *data[])
{
struct net *net = dev_net(dev);
- struct ip6_tnl *nt;
+ struct ip6_tnl *nt, *t;
nt = netdev_priv(dev);
ip6_tnl_netlink_parms(data, &nt->parms);
- if (ip6_tnl_locate(net, &nt->parms, 0))
+ t = ip6_tnl_locate(net, &nt->parms, 0);
+ if (!IS_ERR(t))
return -EEXIST;
return ip6_tnl_create2(dev);
ip6_tnl_netlink_parms(data, &p);
t = ip6_tnl_locate(net, &p, 0);
-
- if (t) {
+ if (!IS_ERR(t)) {
if (t->dev != dev)
return -EEXIST;
} else
&chainname, &comment, &rulenum) != 0)
break;
- nf_log_packet(net, AF_INET6, hook, skb, in, out, &trace_loginfo,
- "TRACE: %s:%s:%s:%u ",
- tablename, chainname, comment, rulenum);
+ nf_log_trace(net, AF_INET6, hook, skb, in, out, &trace_loginfo,
+ "TRACE: %s:%s:%s:%u ",
+ tablename, chainname, comment, rulenum);
}
#endif
if (msg->msg_name) {
DECLARE_SOCKADDR(struct sockaddr_in6 *, u, msg->msg_name);
- if (msg->msg_namelen < sizeof(struct sockaddr_in6) ||
- u->sin6_family != AF_INET6) {
+ if (msg->msg_namelen < sizeof(*u))
return -EINVAL;
+ if (u->sin6_family != AF_INET6) {
+ return -EAFNOSUPPORT;
}
if (sk->sk_bound_dev_if &&
sk->sk_bound_dev_if != u->sin6_scope_id) {
fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen);
fptr->nexthdr = nexthdr;
fptr->reserved = 0;
- if (skb_shinfo(skb)->ip6_frag_id)
- fptr->identification = skb_shinfo(skb)->ip6_frag_id;
- else
- ipv6_select_ident(fptr,
- (struct rt6_info *)skb_dst(skb));
+ if (!skb_shinfo(skb)->ip6_frag_id)
+ ipv6_proxy_select_ident(skb);
+ fptr->identification = skb_shinfo(skb)->ip6_frag_id;
/* Fragment the skb. ipv6 header and the remaining fields of the
* fragment header are updated in ipv6_gso_segment()
return err;
skb->ignore_df = 1;
+ skb->protocol = htons(ETH_P_IPV6);
return x->outer_mode->output2(x, skb);
}
int xfrm6_output_finish(struct sk_buff *skb)
{
memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
- skb->protocol = htons(ETH_P_IPV6);
#ifdef CONFIG_NETFILTER
IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED;
#if IS_ENABLED(CONFIG_IPV6_MIP6)
case IPPROTO_MH:
+ offset += ipv6_optlen(exthdr);
if (!onlyproto && pskb_may_pull(skb, nh + offset + 3 - skb->data)) {
struct ip6_mh *mh;
#define IEEE80211_UNSET_POWER_LEVEL INT_MIN
/*
- * Some APs experience problems when working with U-APSD. Decrease the
- * probability of that happening by using legacy mode for all ACs but VO.
- * The AP that caused us trouble was a Cisco 4410N. It ignores our
- * setting, and always treats non-VO ACs as legacy.
+ * Some APs experience problems when working with U-APSD. Decreasing the
+ * probability of that happening by using legacy mode for all ACs but VO isn't
+ * enough.
+ *
+ * Cisco 4410N originally forced us to enable VO by default only because it
+ * treated non-VO ACs as legacy.
+ *
+ * However, some APs (notably Netgear R7000) silently reclassify packets to
+ * different ACs. Since u-APSD ACs require trigger frames for frame retrieval,
+ * clients would never see some frames (e.g. ARP responses) or would fetch them
+ * accidentally after a long time.
+ *
+ * It makes little sense to enable u-APSD queues by default because it needs
+ * userspace applications to be aware of it to actually take advantage of the
+ * possible additional power savings. Implicitly depending on driver autotrigger
+ * frame support doesn't make much sense.
*/
-#define IEEE80211_DEFAULT_UAPSD_QUEUES \
- IEEE80211_WMM_IE_STA_QOSINFO_AC_VO
+#define IEEE80211_DEFAULT_UAPSD_QUEUES 0
#define IEEE80211_DEFAULT_MAX_SP_LEN \
IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL
unsigned int flags;
bool csa_waiting_bcn;
+ bool csa_ignored_same_chan;
bool beacon_crc_valid;
u32 beacon_crc;
return;
}
+ if (cfg80211_chandef_identical(&csa_ie.chandef,
+ &sdata->vif.bss_conf.chandef)) {
+ if (ifmgd->csa_ignored_same_chan)
+ return;
+ sdata_info(sdata,
+ "AP %pM tries to chanswitch to same channel, ignore\n",
+ ifmgd->associated->bssid);
+ ifmgd->csa_ignored_same_chan = true;
+ return;
+ }
+
mutex_lock(&local->mtx);
mutex_lock(&local->chanctx_mtx);
conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
sdata->vif.csa_active = true;
sdata->csa_chandef = csa_ie.chandef;
sdata->csa_block_tx = csa_ie.mode;
+ ifmgd->csa_ignored_same_chan = false;
if (sdata->csa_block_tx)
ieee80211_stop_vif_queues(local, sdata,
sdata->vif.csa_active = false;
ifmgd->csa_waiting_bcn = false;
+ ifmgd->csa_ignored_same_chan = false;
if (sdata->csa_block_tx) {
ieee80211_wake_vif_queues(local, sdata,
IEEE80211_QUEUE_STOP_REASON_CSA);
(1ULL << WLAN_EID_CHANNEL_SWITCH) |
(1ULL << WLAN_EID_PWR_CONSTRAINT) |
(1ULL << WLAN_EID_HT_CAPABILITY) |
- (1ULL << WLAN_EID_HT_OPERATION);
+ (1ULL << WLAN_EID_HT_OPERATION) |
+ (1ULL << WLAN_EID_EXT_CHANSWITCH_ANN);
static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
struct ieee80211_mgmt *mgmt, size_t len,
hdr = (struct ieee80211_hdr *) skb->data;
mesh_hdr = (struct ieee80211s_hdr *) (skb->data + hdrlen);
+ if (ieee80211_drop_unencrypted(rx, hdr->frame_control))
+ return RX_DROP_MONITOR;
+
/* frame is in RMC, don't forward */
if (ieee80211_is_data(hdr->frame_control) &&
is_multicast_ether_addr(hdr->addr1) &&
wdev_iter = &sdata_iter->wdev;
if (sdata_iter == sdata ||
- rcu_access_pointer(sdata_iter->vif.chanctx_conf) == NULL ||
+ !ieee80211_sdata_running(sdata_iter) ||
local->hw.wiphy->software_iftypes & BIT(wdev_iter->iftype))
continue;
IP_VS_DBG(2, "BACKUP, add new conn. failed\n");
return;
}
+ if (!(flags & IP_VS_CONN_F_TEMPLATE))
+ kfree(param->pe_data);
}
if (opt)
(opt_flags & IPVS_OPT_F_SEQ_DATA ? &opt : NULL)
);
#endif
+ ip_vs_pe_put(param.pe);
return 0;
/* Error exit */
out:
}
EXPORT_SYMBOL(nf_log_packet);
+void nf_log_trace(struct net *net,
+ u_int8_t pf,
+ unsigned int hooknum,
+ const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const struct nf_loginfo *loginfo, const char *fmt, ...)
+{
+ va_list args;
+ char prefix[NF_LOG_PREFIXLEN];
+ const struct nf_logger *logger;
+
+ rcu_read_lock();
+ logger = rcu_dereference(net->nf.nf_loggers[pf]);
+ if (logger) {
+ va_start(args, fmt);
+ vsnprintf(prefix, sizeof(prefix), fmt, args);
+ va_end(args);
+ logger->logfn(net, pf, hooknum, skb, in, out, loginfo, prefix);
+ }
+ rcu_read_unlock();
+}
+EXPORT_SYMBOL(nf_log_trace);
+
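
nf_log_trace() formats its varargs into a bounded prefix before handing the packet to the logger, the standard va_list plus vsnprintf pattern. The pattern in isolation (standalone C; emit() is a hypothetical stand-in for logger->logfn()):

	#include <stdarg.h>
	#include <stdio.h>

	#define PREFIXLEN 128

	static void emit(const char *prefix)	/* stand-in for logfn */
	{
		printf("%s<packet dump>\n", prefix);
	}

	/* Format the caller's varargs once, into a fixed buffer, then
	 * pass the finished prefix to the backend. */
	static void log_trace(const char *fmt, ...)
	{
		char prefix[PREFIXLEN];
		va_list args;

		va_start(args, fmt);
		vsnprintf(prefix, sizeof(prefix), fmt, args);
		va_end(args);

		emit(prefix);
	}

	int main(void)
	{
		log_trace("TRACE: %s:%s:%s:%u ", "filter", "INPUT", "rule", 3U);
		return 0;
	}
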
#define S_SIZE (1024 - (sizeof(unsigned int) + 1))
struct nf_log_buf {
static inline void nft_rule_clear(struct net *net, struct nft_rule *rule)
{
- rule->genmask = 0;
+ rule->genmask &= ~(1 << gencursor_next(net));
}
static int
if (nla[NFTA_CHAIN_POLICY]) {
if ((chain != NULL &&
- !(chain->flags & NFT_BASE_CHAIN)) ||
+ !(chain->flags & NFT_BASE_CHAIN)))
+ return -EOPNOTSUPP;
+
+ if (chain == NULL &&
nla[NFTA_CHAIN_HOOK] == NULL)
return -EOPNOTSUPP;
}
nla_nest_end(skb, list);
- if (rule->ulen &&
- nla_put(skb, NFTA_RULE_USERDATA, rule->ulen, nft_userdata(rule)))
- goto nla_put_failure;
+ if (rule->udata) {
+ struct nft_userdata *udata = nft_userdata(rule);
+ if (nla_put(skb, NFTA_RULE_USERDATA, udata->len + 1,
+ udata->data) < 0)
+ goto nla_put_failure;
+ }
nlmsg_end(skb, nlh);
return 0;
struct nft_table *table;
struct nft_chain *chain;
struct nft_rule *rule, *old_rule = NULL;
+ struct nft_userdata *udata;
struct nft_trans *trans = NULL;
struct nft_expr *expr;
struct nft_ctx ctx;
struct nlattr *tmp;
- unsigned int size, i, n, ulen = 0;
+ unsigned int size, i, n, ulen = 0, usize = 0;
int err, rem;
bool create;
u64 handle, pos_handle;
n++;
}
}
+ /* Check for overflow of dlen field */
+ err = -EFBIG;
+ if (size >= 1 << 12)
+ goto err1;
- if (nla[NFTA_RULE_USERDATA])
+ if (nla[NFTA_RULE_USERDATA]) {
ulen = nla_len(nla[NFTA_RULE_USERDATA]);
+ if (ulen > 0)
+ usize = sizeof(struct nft_userdata) + ulen;
+ }
err = -ENOMEM;
- rule = kzalloc(sizeof(*rule) + size + ulen, GFP_KERNEL);
+ rule = kzalloc(sizeof(*rule) + size + usize, GFP_KERNEL);
if (rule == NULL)
goto err1;
rule->handle = handle;
rule->dlen = size;
- rule->ulen = ulen;
+ rule->udata = ulen ? 1 : 0;
- if (ulen)
- nla_memcpy(nft_userdata(rule), nla[NFTA_RULE_USERDATA], ulen);
+ if (ulen) {
+ udata = nft_userdata(rule);
+ udata->len = ulen - 1;
+ nla_memcpy(udata->data, nla[NFTA_RULE_USERDATA], ulen);
+ }
expr = nft_expr_first(rule);
for (i = 0; i < n; i++) {
err3:
list_del_rcu(&rule->list);
- if (trans) {
- list_del_rcu(&nft_trans_rule(trans)->list);
- nft_rule_clear(net, nft_trans_rule(trans));
- nft_trans_destroy(trans);
- chain->use++;
- }
err2:
nf_tables_rule_destroy(&ctx, rule);
err1:
&te->elem,
NFT_MSG_DELSETELEM, 0);
te->set->ops->get(te->set, &te->elem);
- te->set->ops->remove(te->set, &te->elem);
nft_data_uninit(&te->elem.key, NFT_DATA_VALUE);
- if (te->elem.flags & NFT_SET_MAP) {
- nft_data_uninit(&te->elem.data,
- te->set->dtype);
- }
+ if (te->set->flags & NFT_SET_MAP &&
+ !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END))
+ nft_data_uninit(&te->elem.data, te->set->dtype);
+ te->set->ops->remove(te->set, &te->elem);
nft_trans_destroy(trans);
break;
}
{
struct net *net = sock_net(skb->sk);
struct nft_trans *trans, *next;
- struct nft_set *set;
+ struct nft_trans_elem *te;
list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
switch (trans->msg_type) {
break;
case NFT_MSG_NEWSETELEM:
nft_trans_elem_set(trans)->nelems--;
- set = nft_trans_elem_set(trans);
- set->ops->get(set, &nft_trans_elem(trans));
- set->ops->remove(set, &nft_trans_elem(trans));
+ te = (struct nft_trans_elem *)trans->data;
+ te->set->ops->get(te->set, &te->elem);
+ nft_data_uninit(&te->elem.key, NFT_DATA_VALUE);
+ if (te->set->flags & NFT_SET_MAP &&
+ !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END))
+ nft_data_uninit(&te->elem.data, te->set->dtype);
+ te->set->ops->remove(te->set, &te->elem);
nft_trans_destroy(trans);
break;
case NFT_MSG_DELSETELEM:
{
struct net *net = dev_net(pkt->in ? pkt->in : pkt->out);
- nf_log_packet(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in,
- pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ",
- chain->table->name, chain->name, comments[type],
- rulenum);
+ nf_log_trace(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in,
+ pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ",
+ chain->table->name, chain->name, comments[type],
+ rulenum);
}
unsigned int
if (!tb[NFCTH_TUPLE_L3PROTONUM] || !tb[NFCTH_TUPLE_L4PROTONUM])
return -EINVAL;
+ /* Not all fields are initialized, so first zero the tuple */
+ memset(tuple, 0, sizeof(struct nf_conntrack_tuple));
+
tuple->src.l3num = ntohs(nla_get_be16(tb[NFCTH_TUPLE_L3PROTONUM]));
tuple->dst.protonum = nla_get_u8(tb[NFCTH_TUPLE_L4PROTONUM]);
nft_target_set_tgchk_param(struct xt_tgchk_param *par,
const struct nft_ctx *ctx,
struct xt_target *target, void *info,
- union nft_entry *entry, u8 proto, bool inv)
+ union nft_entry *entry, u16 proto, bool inv)
{
par->net = ctx->net;
par->table = ctx->table->name;
entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0;
break;
case AF_INET6:
+ if (proto)
+ entry->e6.ipv6.flags |= IP6T_F_PROTO;
+
entry->e6.ipv6.proto = proto;
entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0;
break;
case NFPROTO_BRIDGE:
- entry->ebt.ethproto = proto;
+ entry->ebt.ethproto = (__force __be16)proto;
entry->ebt.invflags = inv ? EBT_IPROTO : 0;
break;
}
[NFTA_RULE_COMPAT_FLAGS] = { .type = NLA_U32 },
};
-static int nft_parse_compat(const struct nlattr *attr, u8 *proto, bool *inv)
+static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
{
struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1];
u32 flags;
struct xt_target *target = expr->ops->data;
struct xt_tgchk_param par;
size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO]));
- u8 proto = 0;
+ u16 proto = 0;
bool inv = false;
union nft_entry e = {};
int ret;
static void
nft_match_set_mtchk_param(struct xt_mtchk_param *par, const struct nft_ctx *ctx,
struct xt_match *match, void *info,
- union nft_entry *entry, u8 proto, bool inv)
+ union nft_entry *entry, u16 proto, bool inv)
{
par->net = ctx->net;
par->table = ctx->table->name;
entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0;
break;
case AF_INET6:
+ if (proto)
+ entry->e6.ipv6.flags |= IP6T_F_PROTO;
+
entry->e6.ipv6.proto = proto;
entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0;
break;
case NFPROTO_BRIDGE:
- entry->ebt.ethproto = proto;
+ entry->ebt.ethproto = (__force __be16)proto;
entry->ebt.invflags = inv ? EBT_IPROTO : 0;
break;
}
struct xt_match *match = expr->ops->data;
struct xt_mtchk_param par;
size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO]));
- u8 proto = 0;
+ u16 proto = 0;
bool inv = false;
union nft_entry e = {};
int ret;
iter->err = err;
goto out;
}
+
+ continue;
}
if (iter->count < iter->skip)
{
const struct ip6t_ip6 *i = par->entryinfo;
- if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP)
- && !(i->flags & IP6T_INV_PROTO))
+ if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) &&
+ !(i->invflags & IP6T_INV_PROTO))
return 0;
pr_info("Can be used only in combination with "
return 0;
}
-static void packet_dev_mclist(struct net_device *dev, struct packet_mclist *i, int what)
+static void packet_dev_mclist_delete(struct net_device *dev,
+ struct packet_mclist **mlp)
{
- for ( ; i; i = i->next) {
- if (i->ifindex == dev->ifindex)
- packet_dev_mc(dev, i, what);
+ struct packet_mclist *ml;
+
+ while ((ml = *mlp) != NULL) {
+ if (ml->ifindex == dev->ifindex) {
+ packet_dev_mc(dev, ml, -1);
+ *mlp = ml->next;
+ kfree(ml);
+ } else
+ mlp = &ml->next;
}
}
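
packet_dev_mclist_delete() walks the list through a pointer-to-pointer so matching entries can be unlinked without tracking a prev node, and, unlike the old helper, it actually unlinks and frees them. The idiom in isolation (standalone C; struct node is an illustrative stand-in for packet_mclist):

	#include <stdio.h>
	#include <stdlib.h>

	struct node { int ifindex; struct node *next; };

	/* Unlink and free every node matching ifindex. Walking via a
	 * pointer-to-pointer needs no head special case and no prev node. */
	static void delete_matching(struct node **mlp, int ifindex)
	{
		struct node *ml;

		while ((ml = *mlp) != NULL) {
			if (ml->ifindex == ifindex) {
				*mlp = ml->next;	/* unlink ... */
				free(ml);		/* ... then release */
			} else {
				mlp = &ml->next;
			}
		}
	}

	static struct node *push(struct node *head, int ifindex)
	{
		struct node *n = malloc(sizeof(*n));

		if (!n)
			exit(1);
		n->ifindex = ifindex;
		n->next = head;
		return n;
	}

	int main(void)
	{
		struct node *list = push(push(push(NULL, 1), 2), 1);
		struct node *n;

		delete_matching(&list, 1);
		for (n = list; n; n = n->next)
			printf("%d\n", n->ifindex);	/* prints only: 2 */
		return 0;
	}
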
packet_dev_mc(dev, ml, -1);
kfree(ml);
}
- rtnl_unlock();
- return 0;
+ break;
}
}
rtnl_unlock();
- return -EADDRNOTAVAIL;
+ return 0;
}
static void packet_flush_mclist(struct sock *sk)
switch (msg) {
case NETDEV_UNREGISTER:
if (po->mclist)
- packet_dev_mclist(dev, po->mclist, -1);
+ packet_dev_mclist_delete(dev, &po->mclist);
/* fallthrough */
case NETDEV_DOWN:
int *unpinned);
static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr);
-static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id)
+static int rds_iw_get_device(struct sockaddr_in *src, struct sockaddr_in *dst,
+ struct rds_iw_device **rds_iwdev,
+ struct rdma_cm_id **cm_id)
{
struct rds_iw_device *iwdev;
struct rds_iw_cm_id *i_cm_id;
src_addr->sin_port,
dst_addr->sin_addr.s_addr,
dst_addr->sin_port,
- rs->rs_bound_addr,
- rs->rs_bound_port,
- rs->rs_conn_addr,
- rs->rs_conn_port);
+ src->sin_addr.s_addr,
+ src->sin_port,
+ dst->sin_addr.s_addr,
+ dst->sin_port);
#ifdef WORKING_TUPLE_DETECTION
- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr &&
- src_addr->sin_port == rs->rs_bound_port &&
- dst_addr->sin_addr.s_addr == rs->rs_conn_addr &&
- dst_addr->sin_port == rs->rs_conn_port) {
+ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr &&
+ src_addr->sin_port == src->sin_port &&
+ dst_addr->sin_addr.s_addr == dst->sin_addr.s_addr &&
+ dst_addr->sin_port == dst->sin_port) {
#else
/* FIXME - needs to compare the local and remote
* ipaddr/port tuple, but the ipaddr is the only
* available information in the rds_sock (as the rest are
* zero'ed. It doesn't appear to be properly populated
* during connection setup...
*/
- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr) {
+ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr) {
#endif
spin_unlock_irq(&iwdev->spinlock);
*rds_iwdev = iwdev;
{
struct sockaddr_in *src_addr, *dst_addr;
struct rds_iw_device *rds_iwdev_old;
- struct rds_sock rs;
struct rdma_cm_id *pcm_id;
int rc;
src_addr = (struct sockaddr_in *)&cm_id->route.addr.src_addr;
dst_addr = (struct sockaddr_in *)&cm_id->route.addr.dst_addr;
- rs.rs_bound_addr = src_addr->sin_addr.s_addr;
- rs.rs_bound_port = src_addr->sin_port;
- rs.rs_conn_addr = dst_addr->sin_addr.s_addr;
- rs.rs_conn_port = dst_addr->sin_port;
-
- rc = rds_iw_get_device(&rs, &rds_iwdev_old, &pcm_id);
+ rc = rds_iw_get_device(src_addr, dst_addr, &rds_iwdev_old, &pcm_id);
if (rc)
rds_iw_remove_cm_id(rds_iwdev, cm_id);
struct rds_iw_device *rds_iwdev;
struct rds_iw_mr *ibmr = NULL;
struct rdma_cm_id *cm_id;
+ struct sockaddr_in src = {
+ .sin_addr.s_addr = rs->rs_bound_addr,
+ .sin_port = rs->rs_bound_port,
+ };
+ struct sockaddr_in dst = {
+ .sin_addr.s_addr = rs->rs_conn_addr,
+ .sin_port = rs->rs_conn_port,
+ };
int ret;
- ret = rds_iw_get_device(rs, &rds_iwdev, &cm_id);
+ ret = rds_iw_get_device(&src, &dst, &rds_iwdev, &cm_id);
if (ret || !cm_id) {
ret = -ENODEV;
goto out;
_leave("UDP socket errqueue empty");
return;
}
- if (!skb->len) {
+ serr = SKB_EXT_ERR(skb);
+ if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
_leave("UDP empty message");
kfree_skb(skb);
return;
rxrpc_new_skb(skb);
- serr = SKB_EXT_ERR(skb);
addr = *(__be32 *)(skb_network_header(skb) + serr->addr_offset);
port = serr->port;
if (!skb) {
/* nothing remains on the queue */
if (copied &&
- (msg->msg_flags & MSG_PEEK || timeo == 0))
+ (flags & MSG_PEEK || timeo == 0))
goto out;
/* wait for a message to turn up */
struct tcf_result *res)
{
struct tcf_bpf *b = a->priv;
- int action;
- int filter_res;
+ int action, filter_res;
spin_lock(&b->tcf_lock);
+
b->tcf_tm.lastuse = jiffies;
bstats_update(&b->tcf_bstats, skb);
- action = b->tcf_action;
filter_res = BPF_PROG_RUN(b->filter, skb);
- if (filter_res == 0) {
- /* Return code 0 from the BPF program
- * is being interpreted as a drop here.
- */
- action = TC_ACT_SHOT;
+
+ /* A BPF program may overwrite the default action opcode.
+ * As in cls_bpf, if filter_res == -1 we use the
+ * default action specified from tc.
+ *
+ * In case a different well-known TC_ACT opcode has been
+ * returned, it will overwrite the default one.
+ *
+ * For everything else that is unknown, TC_ACT_UNSPEC is
+ * returned.
+ */
+ switch (filter_res) {
+ case TC_ACT_PIPE:
+ case TC_ACT_RECLASSIFY:
+ case TC_ACT_OK:
+ action = filter_res;
+ break;
+ case TC_ACT_SHOT:
+ action = filter_res;
b->tcf_qstats.drops++;
+ break;
+ case TC_ACT_UNSPEC:
+ action = b->tcf_action;
+ break;
+ default:
+ action = TC_ACT_UNSPEC;
+ break;
}
spin_unlock(&b->tcf_lock);
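
The rewritten return handling treats the BPF program's return value as a TC action opcode: known TC_ACT_* codes pass through, TC_ACT_UNSPEC defers to the action configured from tc, and anything unrecognized degrades to TC_ACT_UNSPEC. The mapping as a standalone table (plain C; enum values mirror include/uapi/linux/pkt_cls.h):

	#include <assert.h>

	enum {
		TC_ACT_UNSPEC		= -1,	/* defer to configured action */
		TC_ACT_OK		= 0,
		TC_ACT_RECLASSIFY	= 1,
		TC_ACT_SHOT		= 2,	/* kernel also bumps drops here */
		TC_ACT_PIPE		= 3,
	};

	static int map_bpf_result(int filter_res, int configured)
	{
		switch (filter_res) {
		case TC_ACT_PIPE:
		case TC_ACT_RECLASSIFY:
		case TC_ACT_OK:
		case TC_ACT_SHOT:
			return filter_res;	/* program overrides default */
		case TC_ACT_UNSPEC:
			return configured;	/* action given via tc */
		default:
			return TC_ACT_UNSPEC;	/* unknown opcode */
		}
	}

	int main(void)
	{
		assert(map_bpf_result(TC_ACT_SHOT, TC_ACT_OK) == TC_ACT_SHOT);
		assert(map_bpf_result(TC_ACT_UNSPEC, TC_ACT_OK) == TC_ACT_OK);
		assert(map_bpf_result(42, TC_ACT_OK) == TC_ACT_UNSPEC);
		return 0;
	}
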
struct tc_u_common *tp_c;
int refcnt;
unsigned int divisor;
- struct tc_u_knode __rcu *ht[1];
struct rcu_head rcu;
+ /* The 'ht' field MUST be the last field in the structure, to allow
+ * for more entries to be allocated at the end of the structure.
+ */
+ struct tc_u_knode __rcu *ht[1];
};
struct tc_u_common {
if (len > INT_MAX)
len = INT_MAX;
+ if (unlikely(!access_ok(VERIFY_READ, buff, len)))
+ return -EFAULT;
sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;
if (size > INT_MAX)
size = INT_MAX;
+ if (unlikely(!access_ok(VERIFY_WRITE, ubuf, size)))
+ return -EFAULT;
sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;
/* Clean up all queues, except inputq: */
__skb_queue_purge(&l_ptr->outqueue);
__skb_queue_purge(&l_ptr->deferred_queue);
- skb_queue_splice_init(&l_ptr->wakeupq, &l_ptr->inputq);
- if (!skb_queue_empty(&l_ptr->inputq))
+ if (!owner->inputq)
+ owner->inputq = &l_ptr->inputq;
+ skb_queue_splice_init(&l_ptr->wakeupq, owner->inputq);
+ if (!skb_queue_empty(owner->inputq))
owner->action_flags |= TIPC_MSG_EVT;
- owner->inputq = &l_ptr->inputq;
l_ptr->next_out = NULL;
l_ptr->unacked_window = 0;
l_ptr->checkpoint = 1;
if (parse_station_flags(info, dev->ieee80211_ptr->iftype, ¶ms))
return -EINVAL;
+ /* HT/VHT requires QoS, but if we don't have that, just ignore HT/VHT
+ * as userspace might just pass through the capabilities from the IEs
+ * directly, rather than enforcing this restriction and returning an
+ * error in this case.
+ */
+ if (!(params.sta_flags_set & BIT(NL80211_STA_FLAG_WME))) {
+ params.ht_capa = NULL;
+ params.vht_capa = NULL;
+ }
+
/* When you run into this, adjust the code below for the new flag */
BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7);
* have the xfrm_state's. We need to wait for KM to
* negotiate new SA's or bail out with error.*/
if (net->xfrm.sysctl_larval_drop) {
- dst_release(dst);
- xfrm_pols_put(pols, drop_pols);
XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES);
-
- return ERR_PTR(-EREMOTE);
+ err = -EREMOTE;
+ goto error;
}
err = -EAGAIN;
error:
dst_release(dst);
dropdst:
- dst_release(dst_orig);
+ if (!(flags & XFRM_LOOKUP_KEEP_DST_REF))
+ dst_release(dst_orig);
xfrm_pols_put(pols, drop_pols);
return ERR_PTR(err);
}
struct sock *sk, int flags)
{
struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk,
- flags | XFRM_LOOKUP_QUEUE);
+ flags | XFRM_LOOKUP_QUEUE |
+ XFRM_LOOKUP_KEEP_DST_REF);
if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE)
return make_blackhole(net, dst_orig->ops->family, dst_orig);
goto out;
/* No partial writes. */
- length = EINVAL;
+ length = -EINVAL;
if (*ppos != 0)
goto out;
if (info->count < 1)
return -EINVAL;
+ if (!*info->id.name)
+ return -EINVAL;
+ if (strnlen(info->id.name, sizeof(info->id.name)) >= sizeof(info->id.name))
+ return -EINVAL;
access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE :
(info->access & (SNDRV_CTL_ELEM_ACCESS_READWRITE|
SNDRV_CTL_ELEM_ACCESS_INACTIVE|
*/
#define RX_ISOCHRONOUS 0x008
+/*
+ * Index of first quadlet to be interpreted; read/write. If > 0, that many
+ * quadlets at the beginning of each data block will be ignored, and all the
+ * audio and MIDI quadlets will follow.
+ */
+#define RX_SEQ_START 0x00c
+
/*
* The number of audio channels; read-only. There will be one quadlet per
* channel.
*/
-#define RX_NUMBER_AUDIO 0x00c
+#define RX_NUMBER_AUDIO 0x010
/*
* The number of MIDI ports, 0-8; read-only. If > 0, there will be one
* additional quadlet in each data block, following the audio quadlets.
*/
-#define RX_NUMBER_MIDI 0x010
-
-/*
- * Index of first quadlet to be interpreted; read/write. If > 0, that many
- * quadlets at the beginning of each data block will be ignored, and all the
- * audio and MIDI quadlets will follow.
- */
-#define RX_SEQ_START 0x014
+#define RX_NUMBER_MIDI 0x014
/*
* Names of all audio channels; read-only. Quadlets are byte-swapped. Names
} tx;
struct {
u32 iso;
+ u32 seq_start;
u32 number_audio;
u32 number_midi;
- u32 seq_start;
char names[RX_NAMES_SIZE];
u32 ac3_caps;
u32 ac3_enable;
break;
snd_iprintf(buffer, "rx %u:\n", stream);
snd_iprintf(buffer, " iso channel: %d\n", (int)buf.rx.iso);
+ snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start);
snd_iprintf(buffer, " audio channels: %u\n",
buf.rx.number_audio);
snd_iprintf(buffer, " midi ports: %u\n", buf.rx.number_midi);
- snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start);
if (quadlets >= 68) {
dice_proc_fixup_string(buf.rx.names, RX_NAMES_SIZE);
snd_iprintf(buffer, " names: %s\n", buf.rx.names);
int fw_iso_resources_init(struct fw_iso_resources *r, struct fw_unit *unit)
{
r->channels_mask = ~0uLL;
- r->unit = fw_unit_get(unit);
+ r->unit = unit;
mutex_init(&r->mutex);
r->allocated = false;
{
WARN_ON(r->allocated);
mutex_destroy(&r->mutex);
- fw_unit_put(r->unit);
}
EXPORT_SYMBOL(fw_iso_resources_destroy);
}
}
- if (!bus->no_response_fallback)
+ if (bus->no_response_fallback)
return -1;
if (!chip->polling_mode && chip->poll_count < 2) {
return val;
}
+/* is this a stereo widget or a stereo-to-mono mix? */
+static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, int dir)
+{
+ unsigned int wcaps = get_wcaps(codec, nid);
+ hda_nid_t conn;
+
+ if (wcaps & AC_WCAP_STEREO)
+ return true;
+ if (dir != HDA_INPUT || get_wcaps_type(wcaps) != AC_WID_AUD_MIX)
+ return false;
+ if (snd_hda_get_num_conns(codec, nid) != 1)
+ return false;
+ if (snd_hda_get_connections(codec, nid, &conn, 1) < 0)
+ return false;
+ return !!(get_wcaps(codec, conn) & AC_WCAP_STEREO);
+}
+
/* initialize the amp value (only at the first time) */
static void init_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx)
{
unsigned int caps = query_amp_caps(codec, nid, dir);
int val = get_amp_val_to_activate(codec, nid, dir, caps, false);
- snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val);
+
+ if (is_stereo_amps(codec, nid, dir))
+ snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val);
+ else
+ snd_hda_codec_amp_init(codec, nid, 0, dir, idx, 0xff, val);
+}
+
+/* update the amp, doing in stereo or mono depending on NID */
+static int update_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx,
+ unsigned int mask, unsigned int val)
+{
+ if (is_stereo_amps(codec, nid, dir))
+ return snd_hda_codec_amp_stereo(codec, nid, dir, idx,
+ mask, val);
+ else
+ return snd_hda_codec_amp_update(codec, nid, 0, dir, idx,
+ mask, val);
}
/* calculate amp value mask we can modify;
return;
val &= mask;
- snd_hda_codec_amp_stereo(codec, nid, dir, idx, mask, val);
+ update_amp(codec, nid, dir, idx, mask, val);
}
static void activate_amp_out(struct hda_codec *codec, struct nid_path *path,
has_amp = nid_has_mute(codec, mix, HDA_INPUT);
for (i = 0; i < nums; i++) {
if (has_amp)
- snd_hda_codec_amp_stereo(codec, mix,
- HDA_INPUT, i,
- 0xff, HDA_AMP_MUTE);
+ update_amp(codec, mix, HDA_INPUT, i,
+ 0xff, HDA_AMP_MUTE);
else if (nid_has_volume(codec, conn[i], HDA_OUTPUT))
- snd_hda_codec_amp_stereo(codec, conn[i],
- HDA_OUTPUT, 0,
- 0xff, HDA_AMP_MUTE);
+ update_amp(codec, conn[i], HDA_OUTPUT, 0,
+ 0xff, HDA_AMP_MUTE);
}
}
.driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
/* Sunrise Point */
{ PCI_DEVICE(0x8086, 0xa170),
- .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
/* Sunrise Point-LP */
{ PCI_DEVICE(0x8086, 0x9d70),
.driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
(caps & AC_AMPCAP_MUTE) >> AC_AMPCAP_MUTE_SHIFT);
}
+/* is this a stereo widget or a stereo-to-mono mix? */
+static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid,
+ int dir, unsigned int wcaps, int indices)
+{
+ hda_nid_t conn;
+
+ if (wcaps & AC_WCAP_STEREO)
+ return true;
+ /* check for a stereo-to-mono mix; it must be:
+ * only a single connection, only for input, and only a mixer widget
+ */
+ if (indices != 1 || dir != HDA_INPUT ||
+ get_wcaps_type(wcaps) != AC_WID_AUD_MIX)
+ return false;
+
+ if (snd_hda_get_raw_connections(codec, nid, &conn, 1) < 0)
+ return false;
+	/* is the connection source stereo? */
+ wcaps = snd_hda_param_read(codec, conn, AC_PAR_AUDIO_WIDGET_CAP);
+ return !!(wcaps & AC_WCAP_STEREO);
+}
+
static void print_amp_vals(struct snd_info_buffer *buffer,
struct hda_codec *codec, hda_nid_t nid,
- int dir, int stereo, int indices)
+ int dir, unsigned int wcaps, int indices)
{
unsigned int val;
+ bool stereo;
int i;
+ stereo = is_stereo_amps(codec, nid, dir, wcaps, indices);
+
dir = dir == HDA_OUTPUT ? AC_AMP_GET_OUTPUT : AC_AMP_GET_INPUT;
for (i = 0; i < indices; i++) {
snd_iprintf(buffer, " [");
(codec->single_adc_amp &&
wid_type == AC_WID_AUD_IN))
print_amp_vals(buffer, codec, nid, HDA_INPUT,
- wid_caps & AC_WCAP_STEREO,
- 1);
+ wid_caps, 1);
else
print_amp_vals(buffer, codec, nid, HDA_INPUT,
- wid_caps & AC_WCAP_STEREO,
- conn_len);
+ wid_caps, conn_len);
}
if (wid_caps & AC_WCAP_OUT_AMP) {
snd_iprintf(buffer, " Amp-Out caps: ");
if (wid_type == AC_WID_PIN &&
codec->pin_amp_workaround)
print_amp_vals(buffer, codec, nid, HDA_OUTPUT,
- wid_caps & AC_WCAP_STEREO,
- conn_len);
+ wid_caps, conn_len);
else
print_amp_vals(buffer, codec, nid, HDA_OUTPUT,
- wid_caps & AC_WCAP_STEREO, 1);
+ wid_caps, 1);
}
switch (wid_type) {
SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81),
SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122),
SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101),
+ SND_PCI_QUIRK(0x106b, 0x5600, "MacBookAir 5,2", CS420X_MBP81),
SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42),
SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE),
{} /* terminator */
return -ENOMEM;
spec->gen.automute_hook = cs_automute;
+ codec->single_adc_amp = 1;
snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl,
cs420x_fixups);
CXT_PINCFG_LENOVO_TP410,
CXT_PINCFG_LEMOTE_A1004,
CXT_PINCFG_LEMOTE_A1205,
+ CXT_PINCFG_COMPAQ_CQ60,
CXT_FIXUP_STEREO_DMIC,
CXT_FIXUP_INC_MIC_BOOST,
CXT_FIXUP_HEADPHONE_MIC_PIN,
.type = HDA_FIXUP_PINS,
.v.pins = cxt_pincfg_lemote,
},
+ [CXT_PINCFG_COMPAQ_CQ60] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+			/* 0x17 was falsely set up as a mic; it should be 0x1d */
+ { 0x17, 0x400001f0 },
+ { 0x1d, 0x97a70120 },
+ { }
+ }
+ },
[CXT_FIXUP_STEREO_DMIC] = {
.type = HDA_FIXUP_FUNC,
.v.func = cxt_fixup_stereo_dmic,
};
static const struct snd_pci_quirk cxt5051_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200),
{}
};
{
/* We currently only handle front, HP */
static hda_nid_t pins[] = {
- 0x0f, 0x10, 0x14, 0x15, 0
+ 0x0f, 0x10, 0x14, 0x15, 0x17, 0
};
hda_nid_t *p;
for (p = pins; *p; p++)
SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
- unsigned int deemph = ucontrol->value.enumerated.item[0];
+ unsigned int deemph = ucontrol->value.integer.value[0];
if (deemph > 1)
return -EINVAL;
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = adav80x->deemph;
+ ucontrol->value.integer.value[0] = adav80x->deemph;
return 0;
};
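This hunk, and the many like it below, moves boolean mixer controls from `value.enumerated.item[]` to `value.integer.value[]`; the enumerated member of the `snd_ctl_elem_value` union is only valid for enum controls, so reading a boolean through it returns the wrong field. A minimal kernel-context sketch of a correct boolean get/put pair, assuming a hypothetical driver-private `deemph` flag:

    #include <sound/control.h>

    /* Hypothetical private state; real drivers fetch theirs via
     * snd_soc_codec_get_drvdata() or similar. */
    struct my_priv {
            unsigned int deemph;
    };

    static int deemph_get(struct snd_kcontrol *kcontrol,
                          struct snd_ctl_elem_value *ucontrol)
    {
            struct my_priv *priv = snd_kcontrol_chip(kcontrol);

            ucontrol->value.integer.value[0] = priv->deemph;
            return 0;
    }

    static int deemph_put(struct snd_kcontrol *kcontrol,
                          struct snd_ctl_elem_value *ucontrol)
    {
            struct my_priv *priv = snd_kcontrol_chip(kcontrol);
            long val = ucontrol->value.integer.value[0];

            if (val < 0 || val > 1)
                    return -EINVAL;
            priv->deemph = val;
            return 0;
    }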
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
if (deemph > 1)
return -EINVAL;
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = ak4641->deemph;
+ ucontrol->value.integer.value[0] = ak4641->deemph;
return 0;
};
};
static const struct snd_soc_dapm_route ak4671_intercon[] = {
- {"DAC Left", "NULL", "PMPLL"},
- {"DAC Right", "NULL", "PMPLL"},
- {"ADC Left", "NULL", "PMPLL"},
- {"ADC Right", "NULL", "PMPLL"},
+ {"DAC Left", NULL, "PMPLL"},
+ {"DAC Right", NULL, "PMPLL"},
+ {"ADC Left", NULL, "PMPLL"},
+ {"ADC Right", NULL, "PMPLL"},
/* Outputs */
- {"LOUT1", "NULL", "LOUT1 Mixer"},
- {"ROUT1", "NULL", "ROUT1 Mixer"},
- {"LOUT2", "NULL", "LOUT2 Mix Amp"},
- {"ROUT2", "NULL", "ROUT2 Mix Amp"},
- {"LOUT3", "NULL", "LOUT3 Mixer"},
- {"ROUT3", "NULL", "ROUT3 Mixer"},
+ {"LOUT1", NULL, "LOUT1 Mixer"},
+ {"ROUT1", NULL, "ROUT1 Mixer"},
+ {"LOUT2", NULL, "LOUT2 Mix Amp"},
+ {"ROUT2", NULL, "ROUT2 Mix Amp"},
+ {"LOUT3", NULL, "LOUT3 Mixer"},
+ {"ROUT3", NULL, "ROUT3 Mixer"},
{"LOUT1 Mixer", "DACL", "DAC Left"},
{"ROUT1 Mixer", "DACR", "DAC Right"},
{"LOUT2 Mixer", "DACHL", "DAC Left"},
{"ROUT2 Mixer", "DACHR", "DAC Right"},
- {"LOUT2 Mix Amp", "NULL", "LOUT2 Mixer"},
- {"ROUT2 Mix Amp", "NULL", "ROUT2 Mixer"},
+ {"LOUT2 Mix Amp", NULL, "LOUT2 Mixer"},
+ {"ROUT2 Mix Amp", NULL, "ROUT2 Mixer"},
{"LOUT3 Mixer", "DACSL", "DAC Left"},
{"ROUT3 Mixer", "DACSR", "DAC Right"},
{"LIN2", NULL, "Mic Bias"},
{"RIN2", NULL, "Mic Bias"},
- {"ADC Left", "NULL", "LIN MUX"},
- {"ADC Right", "NULL", "RIN MUX"},
+ {"ADC Left", NULL, "LIN MUX"},
+ {"ADC Right", NULL, "RIN MUX"},
/* Analog Loops */
- {"LIN1 Mixing Circuit", "NULL", "LIN1"},
- {"RIN1 Mixing Circuit", "NULL", "RIN1"},
- {"LIN2 Mixing Circuit", "NULL", "LIN2"},
- {"RIN2 Mixing Circuit", "NULL", "RIN2"},
- {"LIN3 Mixing Circuit", "NULL", "LIN3"},
- {"RIN3 Mixing Circuit", "NULL", "RIN3"},
- {"LIN4 Mixing Circuit", "NULL", "LIN4"},
- {"RIN4 Mixing Circuit", "NULL", "RIN4"},
+ {"LIN1 Mixing Circuit", NULL, "LIN1"},
+ {"RIN1 Mixing Circuit", NULL, "RIN1"},
+ {"LIN2 Mixing Circuit", NULL, "LIN2"},
+ {"RIN2 Mixing Circuit", NULL, "RIN2"},
+ {"LIN3 Mixing Circuit", NULL, "LIN3"},
+ {"RIN3 Mixing Circuit", NULL, "RIN3"},
+ {"LIN4 Mixing Circuit", NULL, "LIN4"},
+ {"RIN4 Mixing Circuit", NULL, "RIN4"},
{"LOUT1 Mixer", "LINL1", "LIN1 Mixing Circuit"},
{"ROUT1 Mixer", "RINR1", "RIN1 Mixing Circuit"},
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = cs4271->deemph;
+ ucontrol->value.integer.value[0] = cs4271->deemph;
return 0;
}
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec);
- cs4271->deemph = ucontrol->value.enumerated.item[0];
+ cs4271->deemph = ucontrol->value.integer.value[0];
return cs4271_set_deemph(codec);
}
static const struct snd_soc_dapm_route da732x_dapm_routes[] = {
/* Inputs */
- {"AUX1L PGA", "NULL", "AUX1L"},
- {"AUX1R PGA", "NULL", "AUX1R"},
+ {"AUX1L PGA", NULL, "AUX1L"},
+ {"AUX1R PGA", NULL, "AUX1R"},
{"MIC1 PGA", NULL, "MIC1"},
- {"MIC2 PGA", "NULL", "MIC2"},
- {"MIC3 PGA", "NULL", "MIC3"},
+ {"MIC2 PGA", NULL, "MIC2"},
+ {"MIC3 PGA", NULL, "MIC3"},
/* Capture Path */
{"ADC1 Left MUX", "MIC1", "MIC1 PGA"},
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = es8328->deemph;
+ ucontrol->value.integer.value[0] = es8328->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
int ret;
if (deemph > 1)
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = priv->deemph;
+ ucontrol->value.integer.value[0] = priv->deemph;
return 0;
}
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec);
- priv->deemph = ucontrol->value.enumerated.item[0];
+ priv->deemph = ucontrol->value.integer.value[0];
return pcm1681_set_deemph(codec);
}
.ident = "Dell Dino",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_BOARD_NAME, "0144P8")
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343")
}
},
{ }
/* Enable VDDC charge pump */
ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP;
} else if (vddio >= 3100 && vdda >= 3100) {
- /*
- * if vddio and vddd > 3.1v,
- * charge pump should be clean before set ana_pwr
- */
- snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER,
- SGTL5000_VDDC_CHRGPMP_POWERUP, 0);
-
+ ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP;
/* VDDC use VDDIO rail */
lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD;
lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO <<
/* speaker map */
{ "IHFOUTL", NULL, "Speaker Rail"},
{ "IHFOUTR", NULL, "Speaker Rail"},
- { "IHFOUTL", "NULL", "Speaker Left Playback"},
- { "IHFOUTR", "NULL", "Speaker Right Playback"},
+ { "IHFOUTL", NULL, "Speaker Left Playback"},
+ { "IHFOUTR", NULL, "Speaker Right Playback"},
{ "Speaker Left Playback", NULL, "Speaker Left Filter"},
{ "Speaker Right Playback", NULL, "Speaker Right Filter"},
{ "Speaker Left Filter", NULL, "IHFDAC Left"},
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = priv->deemph;
+ ucontrol->value.integer.value[0] = priv->deemph;
return 0;
}
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec);
- priv->deemph = ucontrol->value.enumerated.item[0];
+ priv->deemph = ucontrol->value.integer.value[0];
return tas5086_set_deemph(codec);
}
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev);
- ucontrol->value.enumerated.item[0] = wm2000->anc_active;
+ ucontrol->value.integer.value[0] = wm2000->anc_active;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev);
- int anc_active = ucontrol->value.enumerated.item[0];
+ int anc_active = ucontrol->value.integer.value[0];
int ret;
if (anc_active > 1)
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev);
- ucontrol->value.enumerated.item[0] = wm2000->spk_ena;
+ ucontrol->value.integer.value[0] = wm2000->spk_ena;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev);
- int val = ucontrol->value.enumerated.item[0];
+ int val = ucontrol->value.integer.value[0];
int ret;
if (val > 1)
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = wm8731->deemph;
+ ucontrol->value.integer.value[0] = wm8731->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
int ret = 0;
if (deemph > 1)
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = wm8903->deemph;
+ ucontrol->value.integer.value[0] = wm8903->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
int ret = 0;
if (deemph > 1)
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = wm8904->deemph;
+ ucontrol->value.integer.value[0] = wm8904->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
if (deemph > 1)
return -EINVAL;
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = wm8955->deemph;
+ ucontrol->value.integer.value[0] = wm8955->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
if (deemph > 1)
return -EINVAL;
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec);
- ucontrol->value.enumerated.item[0] = wm8960->deemph;
+ ucontrol->value.integer.value[0] = wm8960->deemph;
return 0;
}
{
struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec);
- int deemph = ucontrol->value.enumerated.item[0];
+ int deemph = ucontrol->value.integer.value[0];
if (deemph > 1)
return -EINVAL;
struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm);
struct wm9712_priv *wm9712 = snd_soc_codec_get_drvdata(codec);
- unsigned int val = ucontrol->value.enumerated.item[0];
+ unsigned int val = ucontrol->value.integer.value[0];
struct soc_mixer_control *mc =
(struct soc_mixer_control *)kcontrol->private_value;
unsigned int mixer, mask, shift, old;
mutex_lock(&wm9712->lock);
old = wm9712->hp_mixer[mixer];
- if (ucontrol->value.enumerated.item[0])
+ if (ucontrol->value.integer.value[0])
wm9712->hp_mixer[mixer] |= mask;
else
wm9712->hp_mixer[mixer] &= ~mask;
mixer = mc->shift >> 8;
shift = mc->shift & 0xff;
- ucontrol->value.enumerated.item[0] =
+ ucontrol->value.integer.value[0] =
(wm9712->hp_mixer[mixer] >> shift) & 1;
return 0;
struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm);
struct wm9713_priv *wm9713 = snd_soc_codec_get_drvdata(codec);
- unsigned int val = ucontrol->value.enumerated.item[0];
+ unsigned int val = ucontrol->value.integer.value[0];
struct soc_mixer_control *mc =
(struct soc_mixer_control *)kcontrol->private_value;
unsigned int mixer, mask, shift, old;
mutex_lock(&wm9713->lock);
old = wm9713->hp_mixer[mixer];
- if (ucontrol->value.enumerated.item[0])
+ if (ucontrol->value.integer.value[0])
wm9713->hp_mixer[mixer] |= mask;
else
wm9713->hp_mixer[mixer] &= ~mask;
mixer = mc->shift >> 8;
shift = mc->shift & 0xff;
- ucontrol->value.enumerated.item[0] =
+ ucontrol->value.integer.value[0] =
(wm9713->hp_mixer[mixer] >> shift) & 1;
return 0;
enum spdif_txrate index, bool round)
{
const u32 rate[] = { 32000, 44100, 48000, 96000, 192000 };
- bool is_sysclk = clk == spdif_priv->sysclk;
+ bool is_sysclk = clk_is_match(clk, spdif_priv->sysclk);
u64 rate_ideal, rate_actual, sub;
u32 sysclk_dfmin, sysclk_dfmax;
u32 txclk_df, sysclk_df, arate;
spdif_priv->txclk_src[index], rate[index]);
dev_dbg(&pdev->dev, "use txclk df %d for %dHz sample rate\n",
spdif_priv->txclk_df[index], rate[index]);
- if (spdif_priv->txclk[index] == spdif_priv->sysclk)
+ if (clk_is_match(spdif_priv->txclk[index], spdif_priv->sysclk))
dev_dbg(&pdev->dev, "use sysclk df %d for %dHz sample rate\n",
spdif_priv->sysclk_df[index], rate[index]);
dev_dbg(&pdev->dev, "the best rate for %dHz sample rate is %dHz\n",
factor = (div2 + 1) * (7 * psr + 1) * 2;
for (i = 0; i < 255; i++) {
- tmprate = freq * factor * (i + 2);
+ tmprate = freq * factor * (i + 1);
if (baudclk_is_used)
clkrate = clk_get_rate(ssi_private->baudclk);
ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;
- ret = !of_property_read_u32_array(np, "dmas", dmas, 4);
+ ret = of_property_read_u32_array(np, "dmas", dmas, 4);
if (ssi_private->use_dma && !ret && dmas[2] == IMX_DMATYPE_SSI_DUAL) {
ssi_private->use_dual_fifo = true;
/* When using dual fifo mode, we need to keep watermark
module = (void *)module + sizeof(*module) + module->mod_size;
}
- /* allocate scratch mem regions */
- sst_block_alloc_scratch(dsp);
-
return 0;
}
int sst_hsw_dsp_load(struct sst_hsw *hsw)
{
struct sst_dsp *dsp = hsw->dsp;
+ struct sst_fw *sst_fw, *t;
int ret;
dev_dbg(hsw->dev, "loading audio DSP....");
return ret;
}
- ret = sst_fw_reload(hsw->sst_fw);
- if (ret < 0) {
- dev_err(hsw->dev, "error: SST FW reload failed\n");
- sst_dsp_dma_put_channel(dsp);
- return -ENOMEM;
+ list_for_each_entry_safe_reverse(sst_fw, t, &dsp->fw_list, list) {
+ ret = sst_fw_reload(sst_fw);
+ if (ret < 0) {
+ dev_err(hsw->dev, "error: SST FW reload failed\n");
+ sst_dsp_dma_put_channel(dsp);
+ return -ENOMEM;
+ }
}
+ ret = sst_block_alloc_scratch(hsw->dsp);
+ if (ret < 0)
+ return -EINVAL;
sst_dsp_dma_put_channel(dsp);
return 0;
int sst_hsw_dsp_runtime_sleep(struct sst_hsw *hsw)
{
- sst_fw_unload(hsw->sst_fw);
- sst_block_free_scratch(hsw->dsp);
+ struct sst_fw *sst_fw, *t;
+ struct sst_dsp *dsp = hsw->dsp;
+
+ list_for_each_entry_safe(sst_fw, t, &dsp->fw_list, list) {
+ sst_fw_unload(sst_fw);
+ }
+ sst_block_free_scratch(dsp);
hsw->boot_complete = false;
- sst_dsp_sleep(hsw->dsp);
+ sst_dsp_sleep(dsp);
return 0;
}
goto fw_err;
}
+ /* allocate scratch mem regions */
+ ret = sst_block_alloc_scratch(hsw->dsp);
+ if (ret < 0)
+ goto boot_err;
+
/* wait for DSP boot completion */
sst_dsp_boot(hsw->dsp);
ret = wait_event_timeout(hsw->boot_wait, hsw->boot_complete,
if (PTR_ERR(priv->extclk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
} else {
- if (priv->extclk == priv->clk) {
+ if (clk_is_match(priv->extclk, priv->clk)) {
devm_clk_put(&pdev->dev, priv->extclk);
priv->extclk = ERR_PTR(-EINVAL);
} else {
if (!buf)
return -ENOMEM;
+ mutex_lock(&client_mutex);
+
list_for_each_entry(codec, &codec_list, list) {
len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n",
codec->component.name);
}
}
+ mutex_unlock(&client_mutex);
+
if (ret >= 0)
ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret);
if (!buf)
return -ENOMEM;
+ mutex_lock(&client_mutex);
+
list_for_each_entry(component, &component_list, list) {
list_for_each_entry(dai, &component->dai_list, list) {
len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n",
}
}
+ mutex_unlock(&client_mutex);
+
ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret);
kfree(buf);
if (!buf)
return -ENOMEM;
+ mutex_lock(&client_mutex);
+
list_for_each_entry(platform, &platform_list, list) {
len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n",
platform->component.name);
}
}
+ mutex_unlock(&client_mutex);
+
ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret);
kfree(buf);
{
struct snd_soc_component *component;
+ lockdep_assert_held(&client_mutex);
+
list_for_each_entry(component, &component_list, list) {
if (of_node) {
if (component->dev->of_node == of_node)
struct snd_soc_component *component;
struct snd_soc_dai *dai;
+ lockdep_assert_held(&client_mutex);
+
	/* Find CPU DAI from registered DAIs */
list_for_each_entry(component, &component_list, list) {
if (dlc->of_node && component->dev->of_node != dlc->of_node)
struct snd_soc_codec *codec;
int ret, i, order;
+ mutex_lock(&client_mutex);
mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT);
/* bind DAIs */
card->instantiated = 1;
snd_soc_dapm_sync(&card->dapm);
mutex_unlock(&card->mutex);
+ mutex_unlock(&client_mutex);
return 0;
base_error:
mutex_unlock(&card->mutex);
+ mutex_unlock(&client_mutex);
return ret;
}
list_del(&component->list);
}
-static void snd_soc_component_del(struct snd_soc_component *component)
-{
- mutex_lock(&client_mutex);
- snd_soc_component_del_unlocked(component);
- mutex_unlock(&client_mutex);
-}
-
int snd_soc_register_component(struct device *dev,
const struct snd_soc_component_driver *cmpnt_drv,
struct snd_soc_dai_driver *dai_drv,
{
struct snd_soc_component *cmpnt;
+ mutex_lock(&client_mutex);
list_for_each_entry(cmpnt, &component_list, list) {
if (dev == cmpnt->dev && cmpnt->registered_as_component)
goto found;
}
+ mutex_unlock(&client_mutex);
return;
found:
- snd_soc_component_del(cmpnt);
+ snd_soc_component_del_unlocked(cmpnt);
+ mutex_unlock(&client_mutex);
snd_soc_component_cleanup(cmpnt);
kfree(cmpnt);
}
{
struct snd_soc_platform *platform;
+ mutex_lock(&client_mutex);
list_for_each_entry(platform, &platform_list, list) {
- if (dev == platform->dev)
+ if (dev == platform->dev) {
+ mutex_unlock(&client_mutex);
return platform;
+ }
}
+ mutex_unlock(&client_mutex);
return NULL;
}
{
struct snd_soc_codec *codec;
+ mutex_lock(&client_mutex);
list_for_each_entry(codec, &codec_list, list) {
if (dev == codec->dev)
goto found;
}
+ mutex_unlock(&client_mutex);
return;
found:
-
- mutex_lock(&client_mutex);
list_del(&codec->list);
snd_soc_component_del_unlocked(&codec->component);
mutex_unlock(&client_mutex);
}
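All of these soc-core hunks impose one rule: the global codec, component, and platform lists are only walked with `client_mutex` held, and helpers that rely on the caller's lock document that with `lockdep_assert_held()`. A sketch of the pattern, with hypothetical list and item names:

    #include <linux/list.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(client_mutex);
    static LIST_HEAD(item_list);

    struct item {                           /* hypothetical list element */
            struct device *dev;
            struct list_head list;
    };

    static struct item *find_item(struct device *dev)
    {
            struct item *i;

            /* callers take client_mutex around the whole traversal */
            lockdep_assert_held(&client_mutex);

            list_for_each_entry(i, &item_list, list)
                    if (i->dev == dev)
                            return i;
            return NULL;
    }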
}
},
+{
+ USB_DEVICE(0x0582, 0x0159),
+ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+ /* .vendor_name = "Roland", */
+ /* .product_name = "UA-22", */
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = (const struct snd_usb_audio_quirk[]) {
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_STANDARD_INTERFACE
+ },
+ {
+ .ifnum = 1,
+ .type = QUIRK_AUDIO_STANDARD_INTERFACE
+ },
+ {
+ .ifnum = 2,
+ .type = QUIRK_MIDI_FIXED_ENDPOINT,
+ .data = & (const struct snd_usb_midi_endpoint_info) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+ {
+ .ifnum = -1
+ }
+ }
+ }
+},
/* this catches most recent vendor-specific Roland devices */
{
.match_flags = USB_DEVICE_ID_MATCH_VENDOR |
static void ins__delete(struct ins_operands *ops)
{
+ if (ops == NULL)
+ return;
zfree(&ops->source.raw);
zfree(&ops->source.name);
zfree(&ops->target.raw);
$(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ)
$(ECHO) " CC " $@
- $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@
+ $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@
$(QUIET) $(STRIPCMD) $@
$(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
TARGETS_HOTPLUG = cpu-hotplug
TARGETS_HOTPLUG += memory-hotplug
+# Clear LDFLAGS and MAKEFLAGS if called from main
+# Makefile to avoid test build failures when test
+# Makefile doesn't have explicit build rules.
+ifeq (1,$(MAKELEVEL))
+undefine LDFLAGS
+override MAKEFLAGS =
+endif
+
all:
for TARGET in $(TARGETS); do \
make -C $$TARGET; \
#ifdef __NR_execveat
return syscall(__NR_execveat, fd, path, argv, envp, flags);
#else
- errno = -ENOSYS;
+ errno = ENOSYS;
return -1;
#endif
}
int fd_cloexec = open_or_die("execveat", O_RDONLY|O_CLOEXEC);
int fd_script_cloexec = open_or_die("script", O_RDONLY|O_CLOEXEC);
+ /* Check if we have execveat at all, and bail early if not */
+ errno = 0;
+ execveat_(-1, NULL, NULL, NULL, 0);
+ if (errno == ENOSYS) {
+ printf("[FAIL] ENOSYS calling execveat - no kernel support?\n");
+ return 1;
+ }
+
/* Change file position to confirm it doesn't affect anything */
lseek(fd, 10, SEEK_SET);
{
if (!(lr_desc.state & LR_STATE_MASK))
vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr |= (1ULL << lr);
+ else
+ vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr &= ~(1ULL << lr);
}
static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu)
return vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr;
}
+static void vgic_v2_clear_eisr(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr = 0;
+}
+
static u32 vgic_v2_get_interrupt_status(const struct kvm_vcpu *vcpu)
{
u32 misr = vcpu->arch.vgic_cpu.vgic_v2.vgic_misr;
.sync_lr_elrsr = vgic_v2_sync_lr_elrsr,
.get_elrsr = vgic_v2_get_elrsr,
.get_eisr = vgic_v2_get_eisr,
+ .clear_eisr = vgic_v2_clear_eisr,
.get_interrupt_status = vgic_v2_get_interrupt_status,
.enable_underflow = vgic_v2_enable_underflow,
.disable_underflow = vgic_v2_disable_underflow,
{
if (!(lr_desc.state & LR_STATE_MASK))
vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr |= (1U << lr);
+ else
+ vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr &= ~(1U << lr);
}
static u64 vgic_v3_get_elrsr(const struct kvm_vcpu *vcpu)
return vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr;
}
+static void vgic_v3_clear_eisr(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr = 0;
+}
+
static u32 vgic_v3_get_interrupt_status(const struct kvm_vcpu *vcpu)
{
u32 misr = vcpu->arch.vgic_cpu.vgic_v3.vgic_misr;
.sync_lr_elrsr = vgic_v3_sync_lr_elrsr,
.get_elrsr = vgic_v3_get_elrsr,
.get_eisr = vgic_v3_get_eisr,
+ .clear_eisr = vgic_v3_clear_eisr,
.get_interrupt_status = vgic_v3_get_interrupt_status,
.enable_underflow = vgic_v3_enable_underflow,
.disable_underflow = vgic_v3_disable_underflow,
return vgic_ops->get_eisr(vcpu);
}
+static inline void vgic_clear_eisr(struct kvm_vcpu *vcpu)
+{
+ vgic_ops->clear_eisr(vcpu);
+}
+
static inline u32 vgic_get_interrupt_status(struct kvm_vcpu *vcpu)
{
return vgic_ops->get_interrupt_status(vcpu);
vgic_set_lr(vcpu, lr_nr, vlr);
clear_bit(lr_nr, vgic_cpu->lr_used);
vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+ vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);
}
/*
BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
vlr.state |= LR_STATE_PENDING;
vgic_set_lr(vcpu, lr, vlr);
+ vgic_sync_lr_elrsr(vcpu, lr, vlr);
return true;
}
}
vlr.state |= LR_EOI_INT;
vgic_set_lr(vcpu, lr, vlr);
+ vgic_sync_lr_elrsr(vcpu, lr, vlr);
return true;
}
if (status & INT_STATUS_UNDERFLOW)
vgic_disable_underflow(vcpu);
+ /*
+ * In the next iterations of the vcpu loop, if we sync the vgic state
+ * after flushing it, but before entering the guest (this happens for
+ * pending signals and vmid rollovers), then make sure we don't pick
+ * up any old maintenance interrupts here.
+ */
+ vgic_clear_eisr(vcpu);
+
return level_pending;
}
* emulation. So check this here again. KVM_CREATE_DEVICE does
* the proper checks already.
*/
- if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2)
- return -ENODEV;
+ if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) {
+ ret = -ENODEV;
+ goto out;
+ }
/*
* Any time a vcpu is run, vcpu_load is called which tries to grab the
BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
r = -ENOMEM;
- kvm->memslots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
+ kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots));
if (!kvm->memslots)
goto out_err_no_srcu;
out_err_no_disable:
for (i = 0; i < KVM_NR_BUSES; i++)
kfree(kvm->buses[i]);
- kfree(kvm->memslots);
+ kvfree(kvm->memslots);
kvm_arch_free_vm(kvm);
return ERR_PTR(r);
}
kvm_for_each_memslot(memslot, slots)
kvm_free_physmem_slot(kvm, memslot, NULL);
- kfree(kvm->memslots);
+ kvfree(kvm->memslots);
}
static void kvm_destroy_devices(struct kvm *kvm)
goto out_free;
}
- slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
- GFP_KERNEL);
+ slots = kvm_kvzalloc(sizeof(struct kvm_memslots));
if (!slots)
goto out_free;
+ memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
slot = id_to_memslot(slots, mem->slot);
kvm_arch_commit_memory_region(kvm, mem, &old, change);
kvm_free_physmem_slot(kvm, &old, &new);
- kfree(old_memslots);
+ kvfree(old_memslots);
/*
* IOMMU mapping: New slots need to be mapped. Old slots need to be
return 0;
out_slots:
- kfree(slots);
+ kvfree(slots);
out_free:
kvm_free_physmem_slot(kvm, &new, &old);
out:
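`struct kvm_memslots` can be too large for the slab allocator to serve reliably, so allocation moves to `kvm_kvzalloc()`, which falls back to `vzalloc()` for big sizes; every free site must then use `kvfree()`, which releases memory from either allocator. A sketch of such a fallback helper, under the assumption that the real `kvm_kvzalloc()` switches on `size > PAGE_SIZE`:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Sketch of a kzalloc-with-vmalloc-fallback helper; pair any
     * allocation made this way with kvfree(). */
    static void *zalloc_fallback(size_t size)
    {
            if (size > PAGE_SIZE)
                    return vzalloc(size);
            return kzalloc(size, GFP_KERNEL);
    }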
case KVM_CAP_SIGNAL_MSI:
#endif
#ifdef CONFIG_HAVE_KVM_IRQFD
+ case KVM_CAP_IRQFD:
case KVM_CAP_IRQFD_RESAMPLE:
#endif
case KVM_CAP_CHECK_EXTENSION_VM: