S: Maintained
- F: drivers/net/ethernet/realtek/r8169.c
+ F: drivers/net/ethernet/realtek/r8169*
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
S: Supported
F: drivers/video/backlight/adp8860_bl.c
- ADS1015 HARDWARE MONITOR DRIVER
- S: Maintained
- F: Documentation/hwmon/ads1015.rst
- F: drivers/hwmon/ads1015.c
- F: include/linux/platform_data/ads1015.h
-
ADT746X FAN DRIVER
S: Maintained
S: Maintained
F: drivers/net/ethernet/alacritech/*
+ FORCEDETH GIGABIT ETHERNET DRIVER
+ S: Maintained
+ F: drivers/net/ethernet/nvidia/*
+
ALCATEL SPEEDTOUCH USB DRIVER
S: Maintained
- F: Documentation/i2c/busses/i2c-ali1563
+ F: Documentation/i2c/busses/i2c-ali1563.rst
F: drivers/i2c/busses/i2c-ali1563.c
ALLEGRO DVT VIDEO IP CORE DRIVER
S: Maintained
F: drivers/staging/media/allegro-dvt/
+ ALLWINNER CPUFREQ DRIVER
+ S: Maintained
+ F: Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
+ F: drivers/cpufreq/sun50i-cpufreq-nvmem.c
+
ALLWINNER SECURITY SYSTEM
F: drivers/crypto/sunxi-ss/
ALLWINNER VPU DRIVER
S: Maintained
S: Maintained
F: drivers/mfd/altera-sysmgr.c
- F: include/linux/mfd/altera-sysgmr.h
+ F: include/linux/mfd/altera-sysmgr.h
ALTERA SYSTEM RESOURCE DRIVER FOR ARRIA10 DEVKIT
F: include/linux/amd-iommu.h
AMD KFD
- T: git git://people.freedesktop.org/~gabbayo/linux.git
- S: Supported
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
- F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+ T: git git://people.freedesktop.org/~agd5f/linux
+ S: Supported
+ F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd*.[ch]
F: drivers/gpu/drm/amd/amdkfd/
F: drivers/gpu/drm/amd/include/cik_structs.h
F: drivers/gpu/drm/amd/include/kgd_kfd_interface.h
W: http://ez.analog.com/community/linux-device-drivers
S: Supported
F: drivers/iio/adc/ad7124.c
- F: Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt
+ F: Documentation/devicetree/bindings/iio/adc/adi,ad7124.yaml
ANALOG DEVICES INC AD7606 DRIVER
W: http://ez.analog.com/community/linux-device-drivers
S: Supported
F: drivers/iio/adc/ad7606.c
- F: Documentation/devicetree/bindings/iio/adc/adi,ad7606.txt
+ F: Documentation/devicetree/bindings/iio/adc/adi,ad7606.yaml
ANALOG DEVICES INC AD7768-1 DRIVER
F: drivers/mux/adgs1408.c
F: Documentation/devicetree/bindings/mux/adi,adgs1408.txt
+ ANALOG DEVICES INC ADIN DRIVER
+ W: http://ez.analog.com/community/linux-device-drivers
+ S: Supported
+ F: drivers/net/phy/adin.c
+ F: Documentation/devicetree/bindings/net/adi,adin.yaml
+
ANALOG DEVICES INC ADIS DRIVER LIBRARY
S: Supported
F: include/linux/iio/imu/adis.h
F: drivers/iio/imu/adis.c
+ ANALOG DEVICES INC ADIS16460 DRIVER
+ S: Supported
+ W: http://ez.analog.com/community/linux-device-drivers
+ F: drivers/iio/imu/adis16460.c
+ F: Documentation/devicetree/bindings/iio/imu/adi,adis16460.yaml
+
ANALOG DEVICES INC ADP5061 DRIVER
ARM ARCHITECTED TIMER DRIVER
S: Maintained
F: arch/arm/include/asm/arch_timer.h
ARM MALI PANFROST DRM DRIVER
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
- F: drivers/iommu/arm-smmu.c
- F: drivers/iommu/arm-smmu-v3.c
+ F: drivers/iommu/arm-smmu*
F: drivers/iommu/io-pgtable-arm.c
F: drivers/iommu/io-pgtable-arm-v7s.c
F: drivers/soc/actions/
F: include/dt-bindings/power/owl-*
F: include/linux/soc/actions/
- F: Documentation/devicetree/bindings/arm/actions.txt
+ F: Documentation/devicetree/bindings/arm/actions.yaml
F: Documentation/devicetree/bindings/clock/actions,owl-cmu.txt
F: Documentation/devicetree/bindings/dma/owl-dma.txt
F: Documentation/devicetree/bindings/i2c/i2c-owl.txt
F: drivers/clk/sunxi/
ARM/Allwinner sunXi SoC support
S: Maintained
F: drivers/soc/sunxi/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux.git
+ Allwinner A10 CSI driver
+ T: git git://linuxtv.org/media_tree.git
+ F: drivers/media/platform/sunxi/sun4i-csi/
+ F: Documentation/devicetree/bindings/media/allwinner,sun4i-a10-csi.yaml
+ S: Maintained
+
ARM/Amlogic Meson SoC CLOCK FRAMEWORK
F: drivers/pinctrl/meson/
F: drivers/mmc/host/meson*
F: drivers/soc/amlogic/
+ F: drivers/rtc/rtc-meson*
N: meson
ARM/Amlogic Meson SoC Sound Drivers
F: arch/arm/boot/dts/artpec6*
F: drivers/clk/axis
F: drivers/crypto/axis
+ F: drivers/mmc/host/usdhi6rol0.c
F: drivers/pinctrl/pinctrl-artpec*
F: Documentation/devicetree/bindings/pinctrl/axis,artpec6-pinctrl.txt
S: Maintained
F: drivers/hwtracing/coresight/*
- F: Documentation/trace/coresight.txt
- F: Documentation/trace/coresight-cpu-debug.txt
+ F: Documentation/trace/coresight.rst
+ F: Documentation/trace/coresight-cpu-debug.rst
F: Documentation/devicetree/bindings/arm/coresight.txt
F: Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
N: [^a-z]sirf
X: drivers/gnss
+ ARM/CZ.NIC TURRIS MOX SUPPORT
+ W: http://mox.turris.cz
+ S: Maintained
+ F: Documentation/ABI/testing/debugfs-moxtet
+ F: Documentation/ABI/testing/sysfs-bus-moxtet-devices
+ F: Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm
+ F: Documentation/devicetree/bindings/bus/moxtet.txt
+ F: Documentation/devicetree/bindings/firmware/cznic,turris-mox-rwtm.txt
+ F: Documentation/devicetree/bindings/gpio/gpio-moxtet.txt
+ F: include/linux/moxtet.h
+ F: drivers/bus/moxtet.c
+ F: drivers/firmware/turris-mox-rwtm.c
+ F: drivers/gpio/gpio-moxtet.c
+
ARM/EBSA110 MACHINE SUPPORT
S: Maintained
F: arch/arm/mach-pxa/colibri-pxa270-income.c
- ARM/INTEL IOP13XX ARM ARCHITECTURE
- S: Maintained
-
ARM/INTEL IOP32X ARM ARCHITECTURE
S: Maintained
- ARM/INTEL IOP33X ARM ARCHITECTURE
- S: Orphan
-
ARM/INTEL IQ81342EX MACHINE SUPPORT
F: drivers/phy/mediatek/
F: Documentation/devicetree/bindings/phy/phy-mtk-*
- ARM/MICREL KS8695 ARCHITECTURE
- F: arch/arm/mach-ks8695/
- S: Odd Fixes
-
ARM/Microchip (AT91) SoC support
F: arch/arm/mach-nomadik/
F: arch/arm/mach-u300/
F: arch/arm/mach-ux500/
+ F: drivers/soc/ux500/
F: arch/arm/boot/dts/ste-*
F: drivers/clk/clk-nomadik.c
F: drivers/clk/clk-u300.c
F: Documentation/devicetree/bindings/*/*npcm*
F: Documentation/devicetree/bindings/*/*/*npcm*
- ARM/NUVOTON W90X900 ARM ARCHITECTURE
- W: http://www.mcuos.com
- S: Maintained
- F: arch/arm/mach-w90x900/
- F: drivers/input/keyboard/w90p910_keypad.c
- F: drivers/input/touchscreen/w90p910_ts.c
- F: drivers/watchdog/nuc900_wdt.c
- F: drivers/net/ethernet/nuvoton/w90p910_ether.c
- F: drivers/mtd/nand/raw/nuc900_nand.c
- F: drivers/rtc/rtc-nuc900.c
- F: drivers/spi/spi-nuc900.c
- F: drivers/usb/host/ehci-w90x900.c
- F: drivers/video/fbdev/nuc900fb.c
-
ARM/OPENMOKO NEO FREERUNNER (GTA02) MACHINE SUPPORT
W: http://wiki.openmoko.org/wiki/Neo_FreeRunner
S: Maintained
F: arch/arm64/boot/dts/realtek/
- F: Documentation/devicetree/bindings/arm/realtek.txt
+ F: Documentation/devicetree/bindings/arm/realtek.yaml
ARM/RENESAS ARM64 ARCHITECTURE
Q: http://patchwork.kernel.org/project/linux-renesas-soc/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-devel.git next
S: Supported
F: arch/arm64/boot/dts/renesas/
F: Documentation/devicetree/bindings/arm/renesas.yaml
F: drivers/*/*/*s3c24*
F: drivers/*/*s3c64xx*
F: drivers/*/*s5pv210*
- F: drivers/memory/samsung/*
- F: drivers/soc/samsung/*
+ F: drivers/memory/samsung/
+ F: drivers/soc/samsung/
+ F: include/linux/soc/samsung/
F: Documentation/arm/samsung/
F: Documentation/devicetree/bindings/arm/samsung/
F: Documentation/devicetree/bindings/sram/samsung-sram.txt
ARM/SHMOBILE ARM ARCHITECTURE
Q: http://patchwork.kernel.org/project/linux-renesas-soc/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-devel.git next
S: Supported
F: arch/arm/boot/dts/emev2*
F: arch/arm/boot/dts/gr-peach*
F: include/linux/backlight.h
F: include/linux/pwm_backlight.h
F: Documentation/devicetree/bindings/leds/backlight
+ F: Documentation/ABI/stable/sysfs-class-backlight
+ F: Documentation/ABI/testing/sysfs-class-backlight
BATMAN ADVANCED
W: https://www.open-mesh.org/
B: https://www.open-mesh.org/projects/batman-adv/issues
F: fs/cachefiles/
CADENCE MIPI-CSI2 BRIDGES
S: Maintained
F: Documentation/devicetree/bindings/media/cdns,*.txt
F: Documentation/devicetree/bindings/net/can/
F: drivers/net/can/
F: include/linux/can/dev.h
+ F: include/linux/can/led.h
+ F: include/linux/can/rx-offload.h
F: include/linux/can/platform/
F: include/uapi/linux/can/error.h
F: include/uapi/linux/can/netlink.h
+ F: include/uapi/linux/can/vxcan.h
CAN NETWORK LAYER
F: Documentation/networking/can.rst
F: net/can/
F: include/linux/can/core.h
+ F: include/linux/can/skb.h
+ F: include/net/netns/can.h
F: include/uapi/linux/can.h
F: include/uapi/linux/can/bcm.h
F: include/uapi/linux/can/raw.h
F: include/uapi/linux/can/gw.h
+ CAN-J1939 NETWORK LAYER
+ S: Maintained
+ F: Documentation/networking/j1939.txt
+ F: net/can/j1939/
+ F: include/uapi/linux/can/j1939.h
+
CAPABILITIES
F: scripts/extract-cert.c
CERTIFIED WIRELESS USB (WUSB) SUBSYSTEM:
- S: Orphan
- F: Documentation/usb/wusb-design-overview.rst
- F: Documentation/usb/wusb-cbaf
- F: drivers/usb/host/hwa-hc.c
- F: drivers/usb/host/whci/
- F: drivers/usb/wusbcore/
- F: include/linux/usb/wusb*
+ S: Obsolete
+ F: drivers/staging/wusbcore/
CFAG12864B LCD DRIVER
W: http://linux-cifs.samba.org/
T: git git://git.samba.org/sfrench/cifs-2.6.git
S: Supported
- F: Documentation/filesystems/cifs/
+ F: Documentation/admin-guide/cifs/
F: fs/cifs/
COMPACTPCI HOTPLUG CORE
T: git git://git.kernel.dk/linux-block
- F: Documentation/cgroup-v1/blkio-controller.rst
+ F: Documentation/admin-guide/cgroup-v1/blkio-controller.rst
F: block/blk-cgroup.c
F: include/linux/blk-cgroup.h
F: block/blk-throttle.c
F: drivers/cpuidle/cpuidle-exynos.c
F: arch/arm/mach-exynos/pm.c
+ CPUIDLE DRIVER - ARM PSCI
+ S: Supported
+ F: drivers/cpuidle/cpuidle-psci.c
+
CPU IDLE TIME MANAGEMENT FRAMEWORK
F: Documentation/filesystems/cramfs.txt
F: fs/cramfs/
+ CREATIVE SB0540
+ S: Maintained
+ F: drivers/hid/hid-creative-sb0540.c
+
CRYPTO API
F: drivers/misc/cxl/
F: include/misc/cxl*
F: include/uapi/misc/cxl.h
- F: Documentation/powerpc/cxl.txt
+ F: Documentation/powerpc/cxl.rst
F: Documentation/ABI/testing/sysfs-class-cxl
CXLFLASH (IBM Coherent Accelerator Processor Interface CAPI Flash) SCSI DRIVER
S: Supported
F: drivers/scsi/cxlflash/
F: include/uapi/scsi/cxlflash_ioctl.h
- F: Documentation/powerpc/cxlflash.txt
+ F: Documentation/powerpc/cxlflash.rst
CYBERPRO FB DRIVER
S: Maintained
F: Documentation/
+ F: scripts/documentation-file-ref-check
F: scripts/kernel-doc
+ F: scripts/sphinx-pre-install
X: Documentation/ABI/
X: Documentation/firmware-guide/acpi/
X: Documentation/devicetree/
S: Maintained
F: Documentation/translations/it_IT
+ DOCUMENTATION SCRIPTS
+ S: Maintained
+ F: scripts/documentation-file-ref-check
+ F: scripts/sphinx-pre-install
+ F: Documentation/sphinx/parse-headers.pl
+
DONGWOON DW9714 LENS VOICE COIL DRIVER
DRM DRIVERS AND MISC GPU PATCHES
W: https://01.org/linuxgraphics/gfx-docs/maintainer-tools/drm-misc.html
S: Maintained
F: include/linux/vga*
DRM DRIVERS FOR ALLWINNER A10
S: Supported
F: drivers/gpu/drm/sun4i/
F: Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt
T: git git://anongit.freedesktop.org/drm/drm-misc
+ DRM DRIVER FOR ALLWINNER DE2 AND DE3 ENGINE
+ S: Supported
+ F: drivers/gpu/drm/sun4i/sun8i*
+ T: git git://anongit.freedesktop.org/drm/drm-misc
+
DRM DRIVERS FOR AMLOGIC SOCS
S: Maintained
F: drivers/media/radio/dsbr100.c
- DSCC4 DRIVER
- S: Maintained
- F: drivers/net/wan/dscc4.c
-
DT3155 MEDIA DRIVER
S: Maintained
F: drivers/edac/amd64_edac*
+ EDAC-ARMADA
+ S: Maintained
+ F: drivers/edac/armada_xp_*
+
EDAC-AST2500
S: Supported
F: drivers/edac/aspeed_edac.c
F: Documentation/devicetree/bindings/edac/aspeed-sdram-edac.txt
+ EDAC-BLUEFIELD
+ S: Supported
+ F: drivers/edac/bluefield_edac.c
+
EDAC-CALXEDA
EDAC-CORE
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git for-next
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-edac.git linux_next
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras.git edac-for-next
S: Supported
F: Documentation/admin-guide/ras.rst
F: Documentation/driver-api/edac.rst
F: drivers/video/fbdev/s1d13xxxfb.c
F: include/video/s1d13xxxfb.h
+ EROFS FILE SYSTEM
+ S: Maintained
+ F: fs/erofs/
+
ERRSEQ ERROR TRACKING INFRASTRUCTURE
S: Maintained
S: Maintained
- F: Documentation/ABI/testing/sysfs-bus-mdio
+ F: Documentation/ABI/testing/sysfs-class-net-phydev
F: Documentation/devicetree/bindings/net/ethernet-phy.yaml
F: Documentation/devicetree/bindings/net/mdio*
F: Documentation/networking/phy.rst
F: include/uapi/linux/mdio.h
F: include/uapi/linux/mii.h
+ EXFAT FILE SYSTEM
+ S: Maintained
+ F: drivers/staging/exfat/
+
EXT2 FILE SYSTEM
F: drivers/hwmon/f75375s.c
F: include/linux/f75375s.h
- FIREWIRE AUDIO DRIVERS
+ FIREWIRE AUDIO DRIVERS and IEC 61883-1/6 PACKET STREAMING ENGINE
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git
S: Maintained
F: sound/firewire/
+ F: include/uapi/sound/firewire.h
FIREWIRE MEDIA DRIVERS (firedtv)
S: Maintained
- F: Documentation/ABI/testing/sysfs-bus-counter-ftm-quadddec
+ F: Documentation/ABI/testing/sysfs-bus-counter-ftm-quaddec
F: Documentation/devicetree/bindings/counter/ftm-quaddec.txt
F: drivers/counter/ftm-quaddec.c
FLOPPY DRIVER
- S: Orphan
+ S: Odd Fixes
F: drivers/block/floppy.c
- FMC SUBSYSTEM
- W: http://www.ohwr.org/projects/fmc-bus
- S: Supported
- F: drivers/fmc/
- F: include/linux/fmc*.h
- F: include/linux/ipmi-fru.h
- K: fmc_d.*register
-
FPGA MANAGER FRAMEWORK
S: Maintained
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
Q: http://patchwork.kernel.org/project/linux-fpga/list/
F: Documentation/fpga/
F: Documentation/driver-api/fpga/
- T: git git://github.com/bzolnier/linux.git
+ T: git git://anongit.freedesktop.org/drm/drm-misc
Q: http://patchwork.kernel.org/project/linux-fbdev/list/
S: Maintained
F: Documentation/fb/
S: Maintained
F: drivers/perf/fsl_imx8_ddr_perf.c
+ F: Documentation/admin-guide/perf/imx-ddr.rst
F: Documentation/devicetree/bindings/perf/fsl-imx-ddr.txt
+ FREESCALE IMX I2C DRIVER
+ S: Maintained
+ F: drivers/i2c/busses/i2c-imx.c
+ F: Documentation/devicetree/bindings/i2c/i2c-imx.txt
+
FREESCALE IMX LPI2C DRIVER
S: Supported
F: fs/crypto/
F: include/linux/fscrypt*.h
+ F: include/uapi/linux/fscrypt.h
F: Documentation/filesystems/fscrypt.rst
FSI SUBSYSTEM
F: fs/notify/
F: include/linux/fsnotify*.h
+ FSVERITY: READ-ONLY FILE-BASED AUTHENTICITY PROTECTION
+ Q: https://patchwork.kernel.org/project/linux-fscrypt/list/
+ T: git git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git fsverity
+ S: Supported
+ F: fs/verity/
+ F: include/linux/fsverity.h
+ F: include/uapi/linux/fsverity.h
+ F: Documentation/filesystems/fsverity.rst
+
FUJITSU LAPTOP EXTRAS
S: Maintained
F: drivers/media/radio/radio-gemtek*
+ GENERIC ARCHITECTURE TOPOLOGY
+ S: Maintained
+ F: drivers/base/arch_topology.c
+ F: include/linux/arch_topology.h
+
GENERIC GPIO I2C DRIVER
S: Supported
S: Supported
F: drivers/i2c/muxes/i2c-mux-gpio.c
F: include/linux/platform_data/i2c-mux-gpio.h
- F: Documentation/i2c/muxes/i2c-mux-gpio
+ F: Documentation/i2c/muxes/i2c-mux-gpio.rst
GENERIC HDLC (WAN) DRIVERS
F: fs/gfs2/
F: include/uapi/linux/gfs2_ondisk.h
- GIGASET ISDN DRIVERS
- W: http://gigaset307x.sourceforge.net/
- S: Odd Fixes
- F: drivers/staging/isdn/gigaset/
-
GNSS SUBSYSTEM
T: git git://git.kernel.org/pub/scm/linux/kernel/git/johan/gnss.git
S: Supported
- F: Documentation/networking/device_drivers/google/gve.txt
+ F: Documentation/networking/device_drivers/google/gve.rst
F: drivers/net/ethernet/google
GPD POCKET FAN DRIVER
S: Maintained
F: drivers/staging/greybus/
+ F: drivers/greybus/
+ F: include/linux/greybus.h
+ F: include/linux/greybus/
GREYBUS UART PROTOCOLS DRIVERS
F: drivers/scsi/hisi_sas/
F: Documentation/devicetree/bindings/scsi/hisilicon-sas.txt
+ HISILICON QM AND ZIP Controller DRIVER
+ S: Maintained
+ F: drivers/crypto/hisilicon/qm.c
+ F: drivers/crypto/hisilicon/qm.h
+ F: drivers/crypto/hisilicon/sgl.c
+ F: drivers/crypto/hisilicon/sgl.h
+ F: drivers/crypto/hisilicon/zip/
+ F: Documentation/ABI/testing/debugfs-hisi-zip
+
HMM - Heterogeneous Memory Management
F: drivers/hv/
F: drivers/input/serio/hyperv-keyboard.c
F: drivers/pci/controller/pci-hyperv.c
+ F: drivers/pci/controller/pci-hyperv-intf.c
F: drivers/net/hyperv/
F: drivers/scsi/storvsc_drv.c
F: drivers/uio/uio_hv_generic.c
F: drivers/video/fbdev/hyperv_fb.c
- F: drivers/iommu/hyperv_iommu.c
+ F: drivers/iommu/hyperv-iommu.c
F: net/vmw_vsock/hyperv_transport.c
F: include/clocksource/hyperv_timer.h
F: include/linux/hyperv.h
S: Maintained
- F: Documentation/i2c/busses/i2c-nvidia-gpu
+ F: Documentation/i2c/busses/i2c-nvidia-gpu.rst
F: drivers/i2c/busses/i2c-nvidia-gpu.c
I2C MUXES
S: Maintained
- F: Documentation/i2c/i2c-topology
+ F: Documentation/i2c/i2c-topology.rst
F: Documentation/i2c/muxes/
F: Documentation/devicetree/bindings/i2c/i2c-mux*
F: Documentation/devicetree/bindings/i2c/i2c-arb*
S: Maintained
- F: Documentation/devicetree/bindings/i2c/i2c-mv64xxx.txt
+ F: Documentation/devicetree/bindings/i2c/marvell,mv64xxx-i2c.yaml
F: drivers/i2c/busses/i2c-mv64xxx.c
I2C OVER PARALLEL PORT
S: Maintained
- F: Documentation/i2c/busses/i2c-parport
- F: Documentation/i2c/busses/i2c-parport-light
+ F: Documentation/i2c/busses/i2c-parport.rst
+ F: Documentation/i2c/busses/i2c-parport-light.rst
F: drivers/i2c/busses/i2c-parport.c
F: drivers/i2c/busses/i2c-parport-light.c
S: Maintained
- F: Documentation/i2c/busses/i2c-taos-evm
+ F: Documentation/i2c/busses/i2c-taos-evm.rst
F: drivers/i2c/busses/i2c-taos-evm.c
I2C-TINY-USB DRIVER
S: Maintained
- F: Documentation/i2c/busses/i2c-ali1535
- F: Documentation/i2c/busses/i2c-ali1563
- F: Documentation/i2c/busses/i2c-ali15x3
- F: Documentation/i2c/busses/i2c-amd756
- F: Documentation/i2c/busses/i2c-amd8111
- F: Documentation/i2c/busses/i2c-i801
- F: Documentation/i2c/busses/i2c-nforce2
- F: Documentation/i2c/busses/i2c-piix4
- F: Documentation/i2c/busses/i2c-sis5595
- F: Documentation/i2c/busses/i2c-sis630
- F: Documentation/i2c/busses/i2c-sis96x
- F: Documentation/i2c/busses/i2c-via
- F: Documentation/i2c/busses/i2c-viapro
+ F: Documentation/i2c/busses/i2c-ali1535.rst
+ F: Documentation/i2c/busses/i2c-ali1563.rst
+ F: Documentation/i2c/busses/i2c-ali15x3.rst
+ F: Documentation/i2c/busses/i2c-amd756.rst
+ F: Documentation/i2c/busses/i2c-amd8111.rst
+ F: Documentation/i2c/busses/i2c-i801.rst
+ F: Documentation/i2c/busses/i2c-nforce2.rst
+ F: Documentation/i2c/busses/i2c-piix4.rst
+ F: Documentation/i2c/busses/i2c-sis5595.rst
+ F: Documentation/i2c/busses/i2c-sis630.rst
+ F: Documentation/i2c/busses/i2c-sis96x.rst
+ F: Documentation/i2c/busses/i2c-via.rst
+ F: Documentation/i2c/busses/i2c-viapro.rst
F: drivers/i2c/busses/i2c-ali1535.c
F: drivers/i2c/busses/i2c-ali1563.c
F: drivers/i2c/busses/i2c-ali15x3.c
F: drivers/i2c/busses/i2c-ismt.c
- F: Documentation/i2c/busses/i2c-ismt
+ F: Documentation/i2c/busses/i2c-ismt.rst
I2C/SMBUS STUB DRIVER
F: drivers/crypto/nx/nx-sha*
F: drivers/crypto/nx/nx.*
F: drivers/crypto/nx/nx_csbcpb.h
- F: drivers/crypto/nx/nx_debugfs.h
+ F: drivers/crypto/nx/nx_debugfs.c
IBM Power Linux RAID adapter
F: drivers/mfd/lpc_ich.c
F: drivers/gpio/gpio-ich.c
+ ICY I2C DRIVER
+ S: Maintained
+ F: drivers/i2c/busses/i2c-icy.c
+
IDE SUBSYSTEM
F: drivers/video/fbdev/i810/
INTEL ASoC DRIVERS
S: Supported
F: drivers/scsi/isci/
+ INTEL CPU family model numbers
+ S: Supported
+ F: arch/x86/include/asm/intel-family.h
+
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
F: tools/power/x86/intel-speed-select/
F: include/uapi/linux/isst_if.h
+ INTEL STRATIX10 FIRMWARE DRIVERS
+ S: Maintained
+ F: drivers/firmware/stratix10-rsu.c
+ F: drivers/firmware/stratix10-svc.c
+ F: include/linux/firmware/intel/stratix10-smc.h
+ F: include/linux/firmware/intel/stratix10-svc-client.h
+ F: Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
+ F: Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt
+
INTEL TELEMETRY DRIVER
S: Supported
W: http://linuxwimax.org
- F: Documentation/wimax/README.i2400m
+ F: Documentation/admin-guide/wimax/i2400m.rst
F: drivers/net/wimax/i2400m/
F: include/uapi/linux/wimax/i2400m.h
S: Supported
F: Documentation/trace/intel_th.rst
F: drivers/hwtracing/intel_th/
+ F: include/linux/intel_th.h
INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT)
F: include/linux/tboot.h
F: arch/x86/kernel/tboot.c
- INTEL-MID GPIO DRIVER
- S: Maintained
- F: drivers/gpio/gpio-intel-mid.c
-
INTERCONNECT API
S: Maintained
F: drivers/net/ethernet/sgi/ioc3-eth.c
- IOC3 SERIAL DRIVER
- S: Maintained
- F: drivers/tty/serial/ioc3_serial.c
-
IOMAP FILESYSTEM LIBRARY
T: git git://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git
S: Supported
- F: fs/iomap.c
F: fs/iomap/
F: include/linux/iomap.h
F: fs/io_uring.c
F: include/uapi/linux/io_uring.h
- IP MASQUERADING
- S: Maintained
- F: net/ipv4/netfilter/ipt_MASQUERADE.c
-
IPMI SUBSYSTEM
F: include/uapi/linux/ipx.h
IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
F: Documentation/IRQ-domain.txt
IRQCHIP DRIVERS
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
W: http://jfs.sourceforge.net/
T: git git://github.com/kleikamp/linux-shaggy.git
S: Maintained
- F: Documentation/filesystems/jfs.txt
+ F: Documentation/admin-guide/jfs.rst
F: fs/jfs/
JME NETWORK DRIVER
W: http://www.linux-kvm.org
T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
S: Supported
- F: Documentation/virtual/kvm/
+ F: Documentation/virt/kvm/
F: include/trace/events/kvm.h
F: include/uapi/asm-generic/kvm*
F: include/uapi/linux/kvm*
F: tools/kvm/
F: tools/testing/selftests/kvm/
- KERNEL VIRTUAL MACHINE FOR AMD-V (KVM/amd)
- W: http://www.linux-kvm.org/
- S: Maintained
- F: arch/x86/include/asm/svm.h
- F: arch/x86/kvm/svm.c
-
KERNEL VIRTUAL MACHINE FOR ARM/ARM64 (KVM/arm, KVM/arm64)
- R: Julien Thierry <julien.thierry@arm.com>
+ R: Julien Thierry <julien.thierry.kdev@gmail.com>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git
- L: linux-s390@vger.kernel.org
+ L: kvm@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
S: Supported
KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
W: http://www.linux-kvm.org
T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
F: arch/x86/kvm/
F: arch/x86/kvm/*/
F: arch/x86/include/uapi/asm/kvm*
+ F: arch/x86/include/uapi/asm/vmx.h
+ F: arch/x86/include/uapi/asm/svm.h
F: arch/x86/include/asm/kvm*
F: arch/x86/include/asm/pvclock-abi.h
+ F: arch/x86/include/asm/svm.h
+ F: arch/x86/include/asm/vmx.h
F: arch/x86/kernel/kvm.c
F: arch/x86/kernel/kvmclock.c
KEYS-TRUSTED
F: Documentation/security/keys/trusted-encrypted.rst
F: include/keys/trusted-type.h
F: security/keys/trusted.c
- F: security/keys/trusted.h
+ F: include/keys/trusted.h
KEYS/KEYRINGS:
S: Maintained
F: Documentation/security/keys/core.rst
KS0108 LCD CONTROLLER DRIVER
S: Maintained
- F: Documentation/auxdisplay/ks0108
+ F: Documentation/admin-guide/auxdisplay/ks0108.rst
F: drivers/auxdisplay/ks0108.c
F: include/linux/ks0108.h
F: include/linux/libnvdimm.h
F: include/uapi/linux/ndctl.h
+ LICENSES and SPDX stuff
+ S: Maintained
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx.git
+ F: COPYING
+ F: Documentation/process/license-rules.rst
+ F: LICENSES/
+ F: scripts/spdxcheck-test.sh
+ F: scripts/spdxcheck.py
+
LIGHTNVM PLATFORM SUPPORT
W: http://github/OpenChannelSSD
LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM)
- M: Andrea Parri <andrea.parri@amarulasolutions.com>
+ M: Andrea Parri <parri.andrea@gmail.com>
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
F: include/net/mac80211.h
F: net/mac80211/
F: drivers/net/wireless/mac80211_hwsim.[ch]
- F: Documentation/networking/mac80211_hwsim/README
+ F: Documentation/networking/mac80211_hwsim/mac80211_hwsim.rst
MAILBOX API
T: git git://linuxtv.org/media_tree.git
S: Supported
- F: Documentation/devicetree/bindings/media/renesas,rcar-csi2.txt
- F: Documentation/devicetree/bindings/media/rcar_vin.txt
+ F: Documentation/devicetree/bindings/media/renesas,csi2.txt
+ F: Documentation/devicetree/bindings/media/renesas,vin.txt
F: drivers/media/platform/rcar-vin/
MEDIA DRIVERS FOR RENESAS - VSP1
S: Supported
F: drivers/i2c/busses/i2c-mlxcpld.c
F: drivers/i2c/muxes/i2c-mux-mlxcpld.c
- F: Documentation/i2c/busses/i2c-mlxcpld
+ F: Documentation/i2c/busses/i2c-mlxcpld.rst
MELLANOX MLXCPLD LED DRIVER
MEMBARRIER SUPPORT
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
S: Supported
F: kernel/sched/membarrier.c
S: Supported
F: drivers/power/reset/at91-sama5d2_shdwc.c
- MICROCHIP SAMA5D2-COMPATIBLE PIOBU GPIO
- F: drivers/gpio/gpio-sama5d2-piobu.c
-
MICROCHIP SPI DRIVER
S: Supported
F: drivers/misc/atmel-ssc.c
F: include/linux/atmel-ssc.h
- MICROCHIP TIMER COUNTER (TC) AND CLOCKSOURCE DRIVERS
- S: Supported
- F: drivers/misc/atmel_tclib.c
- F: drivers/clocksource/tcb_clksrc.c
-
MICROCHIP USBA UDC DRIVER
S: Supported
- F: driver/net/net_failover.c
+ F: drivers/net/net_failover.c
F: include/net/net_failover.h
F: Documentation/networking/net_failover.rst
S: Maintained
W: https://fedorahosted.org/dropwatch/
F: net/core/drop_monitor.c
+ F: include/uapi/linux/net_dropmon.h
+ F: include/net/drop_monitor.h
NETWORKING DRIVERS
S: Maintained
F: net/tls/*
F: include/uapi/linux/nfc.h
F: drivers/nfc/
F: include/linux/platform_data/nfcmrvl.h
- F: include/linux/platform_data/nxp-nci.h
F: Documentation/devicetree/bindings/net/nfc/
NFS, SUNRPC, AND LOCKD CLIENTS
F: include/linux/power/bq2415x_charger.h
F: include/linux/power/bq27xxx_battery.h
- F: include/linux/power/isp1704_charger.h
F: drivers/power/supply/bq2415x_charger.c
F: drivers/power/supply/bq27xxx_battery.c
F: drivers/power/supply/bq27xxx_battery_i2c.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wtarreau/nolibc.git
F: tools/include/nolibc/
+ NSDEPS
+ S: Maintained
+ F: scripts/nsdeps
+
NTB AMD DRIVER
F: arch/arm/mach-omap2/
F: arch/arm/plat-omap/
F: arch/arm/configs/omap2plus_defconfig
+ F: drivers/bus/ti-sysc.c
F: drivers/i2c/busses/i2c-omap.c
F: drivers/irqchip/irq-omap-intc.c
F: drivers/mfd/*omap*.c
F: drivers/regulator/twl-regulator.c
F: drivers/regulator/twl6030-regulator.c
F: include/linux/platform_data/i2c-omap.h
+ F: include/linux/platform_data/ti-sysc.h
ONION OMEGA2+ BOARD
S: Maintained
F: drivers/media/i2c/ov5647.c
+ OMNIVISION OV5670 SENSOR DRIVER
+ T: git git://linuxtv.org/media_tree.git
+ S: Maintained
+ F: drivers/media/i2c/ov5670.c
+
+ OMNIVISION OV5675 SENSOR DRIVER
+ T: git git://linuxtv.org/media_tree.git
+ S: Maintained
+ F: drivers/media/i2c/ov5675.c
+
OMNIVISION OV5695 SENSOR DRIVER
S: Maintained
F: Documentation/devicetree/bindings/i2c/i2c-ocores.txt
- F: Documentation/i2c/busses/i2c-ocores
+ F: Documentation/i2c/busses/i2c-ocores.rst
F: drivers/i2c/busses/i2c-ocores.c
F: include/linux/platform_data/i2c-ocores.h
S: Supported
F: lib/packing.c
F: include/linux/packing.h
- F: Documentation/packing.txt
+ F: Documentation/core-api/packing.rst
PADATA PARALLEL EXECUTION MECHANISM
S: Supported
- F: Documentation/virtual/paravirt_ops.txt
+ F: Documentation/virt/paravirt_ops.rst
F: arch/*/kernel/paravirt*
F: arch/*/include/asm/paravirt*.h
F: include/linux/hypervisor.h
F: drivers/pci/pcie/aer.c
F: drivers/pci/pcie/dpc.c
F: drivers/pci/pcie/err.c
- F: Documentation/powerpc/eeh-pci-error-recovery.txt
+ F: Documentation/powerpc/eeh-pci-error-recovery.rst
F: arch/powerpc/kernel/eeh*.c
F: arch/powerpc/platforms/*/eeh*.c
F: arch/powerpc/include/*/eeh*.h
PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
Q: http://patchwork.ozlabs.org/project/linux-pci/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
S: Supported
F: drivers/pci/controller/
- PCIE DRIVER FOR ANNAPURNA LABS
+ PCIE DRIVER FOR AMAZON ANNAPURNA LABS
S: Maintained
+ F: Documentation/devicetree/bindings/pci/pcie-al.txt
F: drivers/pci/controller/dwc/pcie-al.c
PCIE DRIVER FOR AMLOGIC MESON
S: Maintained
F: drivers/platform/x86/peaq-wmi.c
+ PENSANDO ETHERNET DRIVERS
+ S: Supported
+ F: Documentation/networking/device_drivers/pensando/ionic.rst
+ F: drivers/net/ethernet/pensando/
+
PER-CPU MEMORY ALLOCATOR
F: Documentation/input/devices/pxrc.rst
F: drivers/input/joystick/pxrc.c
+ FLYSKY FSIA6B RC RECEIVER
+ S: Maintained
+ F: drivers/input/joystick/fsia6b.c
+
PHONET PROTOCOL
S: Supported
S: Supported
F: drivers/pinctrl/pinctrl-at91*
+ F: drivers/gpio/gpio-sama5d2-piobu.c
PIN CONTROLLER - FREESCALE
F: drivers/video/fbdev/fb-puv3.c
F: drivers/rtc/rtc-puv3.c
+ PLANTOWER PMS7003 AIR POLLUTION SENSOR DRIVER
+ S: Maintained
+ F: drivers/iio/chemical/pms7003.c
+ F: Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
+
PMBUS HARDWARE MONITORING DRIVERS
PWM SUBSYSTEM
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm.git
+ Q: https://patchwork.ozlabs.org/project/linux-pwm/list/
F: Documentation/driver-api/pwm.rst
F: Documentation/devicetree/bindings/pwm/
F: include/linux/pwm.h
F: include/linux/pwm_backlight.h
F: drivers/gpio/gpio-mvebu.c
F: Documentation/devicetree/bindings/gpio/gpio-mvebu.txt
+ K: pwm_(config|apply_state|ops)
PXA GPIO DRIVER
S: Supported
- F: drivers/net/ethernet/qlogic/qlge/
+ F: drivers/staging/qlge/
QM1D1B0004 MEDIA DRIVER
S: Maintained
- F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt
- F: drivers/cpufreq/qcom-cpufreq-kryo.c
+ F: Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
+ F: drivers/cpufreq/qcom-cpufreq-nvmem.c
QUALCOMM EMAC GIGABIT ETHERNET DRIVER
F: drivers/i2c/busses/i2c-qcom-geni.c
QUALCOMM HEXAGON ARCHITECTURE
- M: Richard Kuo <rkuo@codeaurora.org>
+ M: Brian Cain <bcain@codeaurora.org>
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel.git
S: Supported
F: arch/hexagon/
F: drivers/net/wireless/ray*
RCUTORTURE TEST FRAMEWORK
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
F: Documentation/x86/resctrl*
READ-COPY UPDATE (RCU)
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rproc-next
S: Maintained
F: Documentation/devicetree/bindings/remoteproc/
F: Documentation/ABI/testing/sysfs-class-remoteproc
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/rpmsg.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rpmsg-next
S: Maintained
F: drivers/rpmsg/
F: Documentation/rpmsg.txt
RENESAS EMEV2 I2C DRIVER
S: Supported
- F: Documentation/devicetree/bindings/i2c/i2c-emev2.txt
+ F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.txt
F: drivers/i2c/busses/i2c-emev2.c
RENESAS ETHERNET DRIVERS
RENESAS R-CAR I2C DRIVERS
S: Supported
- F: Documentation/devicetree/bindings/i2c/i2c-rcar.txt
- F: Documentation/devicetree/bindings/i2c/i2c-sh_mobile.txt
+ F: Documentation/devicetree/bindings/i2c/renesas,i2c.txt
+ F: Documentation/devicetree/bindings/i2c/renesas,iic.txt
F: drivers/i2c/busses/i2c-rcar.c
F: drivers/i2c/busses/i2c-sh_mobile.c
RENESAS RIIC DRIVER
S: Supported
- F: Documentation/devicetree/bindings/i2c/i2c-riic.txt
+ F: Documentation/devicetree/bindings/i2c/renesas,riic.txt
F: drivers/i2c/busses/i2c-riic.c
RENESAS USB PHY DRIVER
RESTARTABLE SEQUENCES SUPPORT
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
S: Supported
F: drivers/mtd/nand/raw/r852.h
RISC-V ARCHITECTURE
F: Documentation/ABI/*/sysfs-driver-hid-roccat*
ROCKCHIP RASTER 2D GRAPHIC ACCELERATION UNIT DRIVER
S: Maintained
F: drivers/media/platform/rockchip/rga/
S: Maintained
- F: drivers/staging/media/platform/hantro/
+ F: drivers/staging/media/hantro/
F: Documentation/devicetree/bindings/media/rockchip-vpu.txt
ROCKER DRIVER
S390 VFIO-CCW DRIVER
F: drivers/media/pci/saa7146/
F: include/media/drv-intf/saa7146*
+ SAFESETID SECURITY MODULE
+ S: Supported
+ F: security/safesetid/
+ F: Documentation/admin-guide/LSM/SafeSetID.rst
+
SAMSUNG AUDIO (ASoC) DRIVERS
S: Maintained
+ F: Documentation/devicetree/bindings/crypto/samsung-slimsss.txt
+ F: Documentation/devicetree/bindings/crypto/samsung-sss.txt
F: drivers/crypto/s5p-sss.c
SAMSUNG S5P/EXYNOS4 SOC SERIES CAMERA SUBSYSTEM DRIVERS
F: drivers/clk/samsung/
F: include/dt-bindings/clock/exynos*.h
F: Documentation/devicetree/bindings/clock/exynos*.txt
+ F: Documentation/devicetree/bindings/clock/samsung,s3c*
+ F: Documentation/devicetree/bindings/clock/samsung,s5p*
SAMSUNG SPI DRIVERS
SCHEDULER
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
S: Maintained
SCx200 CPU SUPPORT
S: Odd Fixes
- F: Documentation/i2c/busses/scx200_acb
+ F: Documentation/i2c/busses/scx200_acb.rst
F: arch/x86/platform/scx200/
F: drivers/watchdog/scx200_wdt.c
F: drivers/i2c/busses/scx200*
F: drivers/net/phy/sfp*
F: include/linux/phylink.h
F: include/linux/sfp.h
+ K: phylink
SGI GRU DRIVER
SLEEPABLE READ-COPY UPDATE (SRCU)
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
F: include/uapi/linux/arm_sdei.h
SOFTWARE RAID (Multiple Disks) SUPPORT
- M: Shaohua Li <shli@kernel.org>
+ M: Song Liu <song@kernel.org>
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/shli/md.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/song/md.git
S: Supported
F: drivers/md/Makefile
F: drivers/md/Kconfig
S: Odd Fixes
F: drivers/staging/comedi/
- STAGING - EROFS FILE SYSTEM
- S: Maintained
- F: drivers/staging/erofs/
-
STAGING - FIELDBUS SUBSYSTEM
S: Maintained
SYNOPSYS DESIGNWARE AXI DMAC DRIVER
S: Maintained
- F: drivers/dma/dwi-axi-dmac/
+ F: drivers/dma/dw-axi-dmac/
F: Documentation/devicetree/bindings/dma/snps,dw-axi-dmac.txt
SYNOPSYS DESIGNWARE DMAC DRIVER
F: drivers/cpufreq/sc[mp]i-cpufreq.c
F: drivers/firmware/arm_scpi.c
F: drivers/firmware/arm_scmi/
+ F: drivers/reset/reset-scmi.c
F: include/linux/sc[mp]i_protocol.h
SYSTEM RESET/SHUTDOWN DRIVERS
F: include/linux/soc/ti/ti_sci_protocol.h
F: Documentation/devicetree/bindings/soc/ti/sci-pm-domain.txt
F: drivers/soc/ti/ti_sci_pm_domains.c
+ F: include/dt-bindings/soc/ti,sci_pm_domain.h
F: Documentation/devicetree/bindings/reset/ti,sci-reset.txt
F: Documentation/devicetree/bindings/clock/ti,sci-clk.txt
F: drivers/clk/keystone/sci-clk.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git
S: Supported
- F: Documentation/thermal/cpu-cooling-api.rst
+ F: Documentation/driver-api/thermal/cpu-cooling-api.rst
F: drivers/thermal/cpu_cooling.c
F: include/linux/cpu_cooling.h
F: drivers/net/ethernet/ti/netcp*
TI PCM3060 ASoC CODEC DRIVER
- M: Kirill Marinushkin <kmarinushkin@birdec.tech>
+ M: Kirill Marinushkin <kmarinushkin@birdec.com>
S: Maintained
F: Documentation/devicetree/bindings/sound/pcm3060.txt
TORTURE-TEST MODULES
- M: "Paul E. McKenney" <paulmck@linux.ibm.com>
+ M: "Paul E. McKenney" <paulmck@kernel.org>
S: Supported
UFS FILESYSTEM
S: Maintained
- F: Documentation/filesystems/ufs.txt
+ F: Documentation/admin-guide/ufs.rst
F: fs/ufs/
UHID USERSPACE HID IO DRIVER:
F: include/linux/ulpi/
ULTRA-WIDEBAND (UWB) SUBSYSTEM:
- S: Orphan
- F: drivers/uwb/
- F: include/linux/uwb.h
- F: include/linux/uwb/
+ S: Obsolete
+ F: drivers/staging/uwb/
UNICODE SUBSYSTEM:
Q: https://patchwork.ozlabs.org/project/linux-um/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml.git
S: Maintained
- F: Documentation/virtual/uml/
+ F: Documentation/virt/uml/
F: arch/um/
F: arch/x86/um/
F: fs/hostfs/
F: drivers/s390/virtio/
F: arch/s390/include/uapi/asm/virtio-ccw.h
+ VIRTIO FILE SYSTEM
+ W: https://virtio-fs.gitlab.io/
+ S: Supported
+ F: fs/fuse/virtio_fs.c
+ F: include/uapi/linux/virtio_fs.h
+ F: Documentation/filesystems/virtiofs.rst
+
VIRTIO GPU DRIVER
F: include/uapi/linux/virtio_input.h
VIRTIO IOMMU DRIVER
+ M: Jean-Philippe Brucker <jean-philippe@linaro.org>
S: Maintained
F: drivers/iommu/virtio-iommu.c
F: include/linux/vme*
VMWARE BALLOON DRIVER
S: Supported
F: arch/x86/kernel/cpu/vmware.c
+ F: arch/x86/include/asm/vmware.h
VMWARE PVRDMA DRIVER
F: drivers/regulator/
F: include/dt-bindings/regulator/
F: include/linux/regulator/
+ K: regulator_get_optional
VRF
S: Supported
W: http://linuxwimax.org
- F: Documentation/wimax/README.wimax
+ F: Documentation/admin-guide/wimax/wimax.rst
F: include/linux/wimax/debug.h
F: include/net/wimax.h
F: include/uapi/linux/wimax.h
T: git git://git.infradead.org/linux-platform-drivers-x86.git
- S: Maintained
+ S: Odd Fixes
F: drivers/platform/x86/
F: drivers/platform/olpc/
S: Supported
F: net/core/xdp.c
XEN NETWORK BACKEND DRIVER
+ M: Paul Durrant <paul@xen.org>
S: Supported
F: include/uapi/linux/fsmap.h
XILINX AXI ETHERNET DRIVER
S: Maintained
F: drivers/net/ethernet/xilinx/xilinx_axienet*
F: drivers/media/platform/xilinx/
F: include/uapi/linux/xilinx-v4l2-controls.h
+ XILINX SD-FEC IP CORES
+ S: Maintained
+ F: Documentation/devicetree/bindings/misc/xlnx,sd-fec.txt
+ F: Documentation/misc-devices/xilinx_sdfec.rst
+ F: drivers/misc/xilinx_sdfec.c
+ F: drivers/misc/Kconfig
+ F: drivers/misc/Makefile
+ F: include/uapi/misc/xilinx_sdfec.h
+
XILLYBUS DRIVER
F: mm/zpool.c
F: include/linux/zpool.h
- ZR36067 VIDEO FOR LINUX DRIVER
- W: http://mjpeg.sourceforge.net/driver-zoran/
- T: hg https://linuxtv.org/hg/v4l-dvb
- S: Odd Fixes
- F: drivers/staging/media/zoran/
-
ZRAM COMPRESSED RAM BLOCK DEVICE DRIVER
config DRM_VRAM_HELPER
tristate
depends on DRM
- select DRM_TTM
help
Helpers for VRAM memory management
+config DRM_TTM_HELPER
+ tristate
+ depends on DRM
+ select DRM_TTM
+ help
+ Helpers for ttm-based gem objects
+
config DRM_GEM_CMA_HELPER
bool
depends on DRM
config DRM_I810
tristate "Intel I810"
# !PREEMPT because of missing ioctl locking
- depends on DRM && AGP && AGP_INTEL && (!PREEMPT || BROKEN)
+ depends on DRM && AGP && AGP_INTEL && (!PREEMPTION || BROKEN)
help
Choose this option if you have an Intel I810 graphics card. If M is
selected, the module will be called i810. AGP support is required
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
drm_vram_helper-y := drm_gem_vram_helper.o \
- drm_vram_helper_common.o \
- drm_vram_mm_helper.o
+ drm_vram_helper_common.o
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
+drm_ttm_helper-y := drm_gem_ttm_helper.o
+obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
+
drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \
drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
obj-$(CONFIG_DRM_SCHED) += scheduler/
obj-$(CONFIG_DRM_TDFX) += tdfx/
obj-$(CONFIG_DRM_R128) += r128/
- obj-$(CONFIG_HSA_AMD) += amd/amdkfd/
obj-$(CONFIG_DRM_RADEON)+= radeon/
obj-$(CONFIG_DRM_AMDGPU)+= amd/amdgpu/
obj-$(CONFIG_DRM_MGA) += mga/
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
#include <drm/drm_probe_helper.h>
+ #include <linux/mmu_notifier.h>
#include "amdgpu.h"
#include "amdgpu_irq.h"
* - 3.31.0 - Add support for per-flip tiling attribute changes with DC
* - 3.32.0 - Add syncobj timeline support to AMDGPU_CS.
* - 3.33.0 - Fixes for GDS ENOMEM failures in AMDGPU_CS.
+ * - 3.34.0 - Non-DC can flip correctly between buffers with different pitches
*/
#define KMS_DRIVER_MAJOR 3
- #define KMS_DRIVER_MINOR 33
+ #define KMS_DRIVER_MINOR 34
#define KMS_DRIVER_PATCHLEVEL 0
#define AMDGPU_MAX_TIMEOUT_PARAM_LENTH 256
int amdgpu_mcbp = 0;
int amdgpu_discovery = -1;
int amdgpu_mes = 0;
- int amdgpu_noretry;
+ int amdgpu_noretry = 1;
struct amdgpu_mgpu_info mgpu_info = {
.mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
};
int amdgpu_ras_enable = -1;
- uint amdgpu_ras_mask = 0xffffffff;
+ uint amdgpu_ras_mask = 0xfffffffb;
/**
* DOC: vramlimit (int)
module_param_named(mes, amdgpu_mes, int, 0444);
MODULE_PARM_DESC(noretry,
- "Disable retry faults (0 = retry enabled (default), 1 = retry disabled)");
+ "Disable retry faults (0 = retry enabled, 1 = retry disabled (default))");
module_param_named(noretry, amdgpu_noretry, int, 0644);
#ifdef CONFIG_HSA_AMD
/* Raven */
{0x1002, 0x15dd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU},
{0x1002, 0x15d8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU},
+ /* Arcturus */
+ {0x1002, 0x738C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x7388, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x738E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x7390, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT},
/* Navi10 */
{0x1002, 0x7310, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
{0x1002, 0x7312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
{0x1002, 0x731A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
{0x1002, 0x731B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
{0x1002, 0x731F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+ /* Navi14 */
+ {0x1002, 0x7340, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI14|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x7341, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI14|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x7347, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI14|AMD_EXP_HW_SUPPORT},
+
+ /* Renoir */
+ {0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU|AMD_EXP_HW_SUPPORT},
+
+ /* Navi12 */
+ {0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12|AMD_EXP_HW_SUPPORT},
{0, 0, 0}
};
}
/* Get rid of things like offb */
- ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb");
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
if (ret)
return ret;
* unfortunately we can't detect certain
* hypervisors so just do this all the time.
*/
+ adev->mp1_state = PP_MP1_STATE_UNLOAD;
amdgpu_device_ip_suspend(adev);
+ adev->mp1_state = PP_MP1_STATE_NONE;
}
static int amdgpu_pmops_suspend(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
return amdgpu_device_suspend(drm_dev, true, true);
}
static int amdgpu_pmops_resume(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
/* GPU comes up enabled by the bios on resume */
if (amdgpu_device_is_px(drm_dev)) {
static int amdgpu_pmops_freeze(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
return amdgpu_device_suspend(drm_dev, false, true);
}
static int amdgpu_pmops_thaw(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
return amdgpu_device_resume(drm_dev, false, true);
}
static int amdgpu_pmops_poweroff(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
return amdgpu_device_suspend(drm_dev, true, true);
}
static int amdgpu_pmops_restore(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
return amdgpu_device_resume(drm_dev, false, true);
}
static int amdgpu_pmops_runtime_idle(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
struct drm_crtc *crtc;
if (!amdgpu_device_is_px(drm_dev)) {
amdgpu_unregister_atpx_handler();
amdgpu_sync_fini();
amdgpu_fence_slab_fini();
+ mmu_notifier_synchronize();
}
module_init(amdgpu_init);
if (r)
goto error;
+ /* clear the space being freed */
+ if (old_mem->mem_type == TTM_PL_VRAM &&
+ (ttm_to_amdgpu_bo(bo)->flags &
+ AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
+ struct dma_fence *wipe_fence = NULL;
+
+ r = amdgpu_fill_buffer(ttm_to_amdgpu_bo(bo), AMDGPU_POISON,
+ NULL, &wipe_fence);
+ if (r) {
+ goto error;
+ } else if (wipe_fence) {
+ dma_fence_put(fence);
+ fence = wipe_fence;
+ }
+ }
+
/* Always block for VM page tables before committing the new location */
if (bo->type == ttm_bo_type_kernel)
r = ttm_bo_move_accel_cleanup(bo, fence, true, new_mem);
struct hmm_range *range;
unsigned long i;
uint64_t *pfns;
- int retry = 0;
int r = 0;
if (!mm) /* Happens during process shutdown */
0 : range->flags[HMM_PFN_WRITE];
range->pfn_flags_mask = 0;
range->pfns = pfns;
- hmm_range_register(range, mirror, start,
- start + ttm->num_pages * PAGE_SIZE, PAGE_SHIFT);
+ range->start = start;
+ range->end = start + ttm->num_pages * PAGE_SIZE;
+
+ hmm_range_register(range, mirror);
- retry:
/*
* Just wait for range to be valid, safe to ignore return value as we
* will use the return value of hmm_range_fault() below under the
hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT);
down_read(&mm->mmap_sem);
-
- r = hmm_range_fault(range, true);
- if (unlikely(r < 0)) {
- if (likely(r == -EAGAIN)) {
- /*
- * return -EAGAIN, mmap_sem is dropped
- */
- if (retry++ < MAX_RETRY_HMM_RANGE_FAULT)
- goto retry;
- else
- pr_err("Retry hmm fault too many times\n");
- }
-
- goto out_up_read;
- }
-
+ r = hmm_range_fault(range, 0);
up_read(&mm->mmap_sem);
+ if (unlikely(r < 0))
+ goto out_free_pfns;
+
for (i = 0; i < ttm->num_pages; i++) {
pages[i] = hmm_device_entry_to_page(range, pfns[i]);
if (unlikely(!pages[i])) {
return 0;
- out_up_read:
- if (likely(r != -EAGAIN))
- up_read(&mm->mmap_sem);
out_free_pfns:
hmm_range_unregister(range);
kvfree(pfns);
.move = &amdgpu_bo_move,
.verify_access = &amdgpu_verify_access,
.move_notify = &amdgpu_bo_move_notify,
+ .release_notify = &amdgpu_bo_release_notify,
.fault_reserve_notify = &amdgpu_bo_fault_reserve_notify,
.io_mem_reserve = &amdgpu_ttm_io_mem_reserve,
.io_mem_free = &amdgpu_ttm_io_mem_free,
uint64_t gtt_size;
int r;
u64 vis_vram_limit;
+ void *stolen_vga_buf;
mutex_init(&adev->mman.gtt_window_lock);
r = ttm_bo_device_init(&adev->mman.bdev,
&amdgpu_bo_driver,
adev->ddev->anon_inode->i_mapping,
- adev->need_dma32);
+ adev->ddev->vma_offset_manager,
+ dma_addressing_limited(adev->dev));
if (r) {
DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
return r;
r = amdgpu_bo_create_kernel(adev, adev->gmc.stolen_size, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM,
&adev->stolen_vga_memory,
- NULL, NULL);
+ NULL, &stolen_vga_buf);
if (r)
return r;
DRM_INFO("amdgpu: %uM of VRAM memory ready\n",
*/
void amdgpu_ttm_late_init(struct amdgpu_device *adev)
{
+ void *stolen_vga_buf;
/* return the VGA stolen memory (if any) back to VRAM */
- amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);
+ amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, &stolen_vga_buf);
}
/**
dce_virtual_encoder(struct drm_connector *connector)
{
struct drm_encoder *encoder;
- int i;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL)
return encoder;
}
/* pick the first one */
- drm_connector_for_each_possible_encoder(connector, encoder, i)
+ drm_connector_for_each_possible_encoder(connector, encoder)
return encoder;
return NULL;
#endif
/* no DCE */
break;
- case CHIP_VEGA10:
- case CHIP_VEGA12:
- case CHIP_VEGA20:
- case CHIP_NAVI10:
- break;
default:
- DRM_ERROR("Virtual display unsupported ASIC type: 0x%X\n", adev->asic_type);
+ break;
}
return 0;
}
*/
if (adev->flags & AMD_IS_APU &&
adev->asic_type >= CHIP_CARRIZO &&
- adev->asic_type < CHIP_RAVEN)
+ adev->asic_type <= CHIP_RAVEN)
init_data.flags.gpu_vm_support = true;
if (amdgpu_dc_feature_mask & DC_FBC_MASK)
init_data.flags.fbc_support = true;
+ if (amdgpu_dc_feature_mask & DC_MULTI_MON_PP_MCLK_SWITCH_MASK)
+ init_data.flags.multi_mon_pp_mclk_switch = true;
+
init_data.flags.power_down_display_on_boot = true;
#ifdef CONFIG_DRM_AMD_DC_DCN2_0
case CHIP_VEGA12:
case CHIP_VEGA20:
case CHIP_NAVI10:
+ case CHIP_NAVI14:
+ case CHIP_NAVI12:
+ case CHIP_RENOIR:
return 0;
case CHIP_RAVEN:
if (ASICREV_IS_PICASSO(adev->external_rev_id))
}
static const struct backlight_ops amdgpu_dm_backlight_ops = {
+ .options = BL_CORE_SUSPENDRESUME,
.get_brightness = amdgpu_dm_backlight_get_brightness,
.update_status = amdgpu_dm_backlight_update_status,
};
#if defined(CONFIG_DRM_AMD_DC_DCN1_0)
case CHIP_RAVEN:
#if defined(CONFIG_DRM_AMD_DC_DCN2_0)
+ case CHIP_NAVI12:
case CHIP_NAVI10:
+ case CHIP_NAVI14:
+ #endif
+ #if defined(CONFIG_DRM_AMD_DC_DCN2_1)
+ case CHIP_RENOIR:
#endif
if (dcn10_register_irq_handlers(dm->adev)) {
DRM_ERROR("DM: Failed to initialize IRQ\n");
if (adev->asic_type != CHIP_CARRIZO && adev->asic_type != CHIP_STONEY)
dm->dc->debug.disable_stutter = amdgpu_pp_feature_mask & PP_STUTTER_MODE ? false : true;
+ if (adev->asic_type == CHIP_RENOIR)
+ dm->dc->debug.disable_stutter = true;
return 0;
fail:
{
int ret;
int s3_state;
- struct pci_dev *pdev = to_pci_dev(device);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(device);
struct amdgpu_device *adev = drm_dev->dev_private;
ret = kstrtoint(buf, 0, &s3_state);
#endif
#if defined(CONFIG_DRM_AMD_DC_DCN2_0)
case CHIP_NAVI10:
+ case CHIP_NAVI12:
adev->mode_info.num_crtc = 6;
adev->mode_info.num_hpd = 6;
adev->mode_info.num_dig = 6;
break;
+ case CHIP_NAVI14:
+ adev->mode_info.num_crtc = 5;
+ adev->mode_info.num_hpd = 5;
+ adev->mode_info.num_dig = 5;
+ break;
+ #endif
+ #if defined(CONFIG_DRM_AMD_DC_DCN2_1)
+ case CHIP_RENOIR:
+ adev->mode_info.num_crtc = 4;
+ adev->mode_info.num_hpd = 4;
+ adev->mode_info.num_dig = 4;
+ break;
#endif
default:
DRM_ERROR("Unsupported ASIC type: 0x%X\n", adev->asic_type);
const struct amdgpu_framebuffer *afb,
const enum surface_pixel_format format,
const enum dc_rotation_angle rotation,
- const union plane_size *plane_size,
+ const struct plane_size *plane_size,
const union dc_tiling_info *tiling_info,
const uint64_t info,
struct dc_plane_dcc_param *dcc,
return -EINVAL;
input.format = format;
- input.surface_size.width = plane_size->grph.surface_size.width;
- input.surface_size.height = plane_size->grph.surface_size.height;
+ input.surface_size.width = plane_size->surface_size.width;
+ input.surface_size.height = plane_size->surface_size.height;
input.swizzle_mode = tiling_info->gfx9.swizzle;
if (rotation == ROTATION_ANGLE_0 || rotation == ROTATION_ANGLE_180)
return -EINVAL;
dcc->enable = 1;
- dcc->grph.meta_pitch =
+ dcc->meta_pitch =
AMDGPU_TILING_GET(info, DCC_PITCH_MAX) + 1;
- dcc->grph.independent_64b_blks = i64b;
+ dcc->independent_64b_blks = i64b;
dcc_address = get_dcc_address(afb->address, info);
address->grph.meta_addr.low_part = lower_32_bits(dcc_address);
const enum dc_rotation_angle rotation,
const uint64_t tiling_flags,
union dc_tiling_info *tiling_info,
- union plane_size *plane_size,
+ struct plane_size *plane_size,
struct dc_plane_dcc_param *dcc,
struct dc_plane_address *address)
{
memset(address, 0, sizeof(*address));
if (format < SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) {
- plane_size->grph.surface_size.x = 0;
- plane_size->grph.surface_size.y = 0;
- plane_size->grph.surface_size.width = fb->width;
- plane_size->grph.surface_size.height = fb->height;
- plane_size->grph.surface_pitch =
+ plane_size->surface_size.x = 0;
+ plane_size->surface_size.y = 0;
+ plane_size->surface_size.width = fb->width;
+ plane_size->surface_size.height = fb->height;
+ plane_size->surface_pitch =
fb->pitches[0] / fb->format->cpp[0];
address->type = PLN_ADDR_TYPE_GRAPHICS;
} else if (format < SURFACE_PIXEL_FORMAT_INVALID) {
uint64_t chroma_addr = afb->address + fb->offsets[1];
- plane_size->video.luma_size.x = 0;
- plane_size->video.luma_size.y = 0;
- plane_size->video.luma_size.width = fb->width;
- plane_size->video.luma_size.height = fb->height;
- plane_size->video.luma_pitch =
+ plane_size->surface_size.x = 0;
+ plane_size->surface_size.y = 0;
+ plane_size->surface_size.width = fb->width;
+ plane_size->surface_size.height = fb->height;
+ plane_size->surface_pitch =
fb->pitches[0] / fb->format->cpp[0];
- plane_size->video.chroma_size.x = 0;
- plane_size->video.chroma_size.y = 0;
+ plane_size->chroma_size.x = 0;
+ plane_size->chroma_size.y = 0;
/* TODO: set these based on surface format */
- plane_size->video.chroma_size.width = fb->width / 2;
- plane_size->video.chroma_size.height = fb->height / 2;
+ plane_size->chroma_size.width = fb->width / 2;
+ plane_size->chroma_size.height = fb->height / 2;
- plane_size->video.chroma_pitch =
+ plane_size->chroma_pitch =
fb->pitches[1] / fb->format->cpp[1];
address->type = PLN_ADDR_TYPE_VIDEO_PROGRESSIVE;
adev->asic_type == CHIP_VEGA20 ||
#if defined(CONFIG_DRM_AMD_DC_DCN2_0)
adev->asic_type == CHIP_NAVI10 ||
+ adev->asic_type == CHIP_NAVI14 ||
+ adev->asic_type == CHIP_NAVI12 ||
+ #endif
+ #if defined(CONFIG_DRM_AMD_DC_DCN2_1)
+ adev->asic_type == CHIP_RENOIR ||
#endif
adev->asic_type == CHIP_RAVEN) {
/* Fill GFX9 params */
plane_info->visible = true;
plane_info->stereo_format = PLANE_STEREO_FORMAT_NONE;
+ plane_info->layer_index = 0;
+
ret = fill_plane_color_attributes(plane_state, plane_info->format,
&plane_info->color_space);
if (ret)
dc_plane_state->global_alpha = plane_info.global_alpha;
dc_plane_state->global_alpha_value = plane_info.global_alpha_value;
dc_plane_state->dcc = plane_info.dcc;
+ dc_plane_state->layer_index = plane_info.layer_index; // Always returns 0
/*
* Always set input transfer function, since plane state is refreshed
convert_color_depth_from_display_info(const struct drm_connector *connector,
const struct drm_connector_state *state)
{
- uint32_t bpc = connector->display_info.bpc;
+ uint8_t bpc = (uint8_t)connector->display_info.bpc;
+
+ /* Assume 8 bpc by default if no bpc is specified. */
+ bpc = bpc ? bpc : 8;
if (!state)
state = connector->state;
if (state) {
- bpc = state->max_bpc;
+ /*
+ * Cap display bpc based on the user requested value.
+ *
+ * The value for state->max_bpc may not be correctly updated
+ * depending on when the connector gets added to the state
+ * or if this was called outside of atomic check, so it
+ * can't be used directly.
+ */
+ bpc = min(bpc, state->max_requested_bpc);
+
/* Round down to the nearest even number. */
bpc = bpc - (bpc & 1);
}
bool scale = dm_state ? (dm_state->scaling != RMX_OFF) : false;
int mode_refresh;
int preferred_refresh = 0;
+ #ifdef CONFIG_DRM_AMD_DC_DSC_SUPPORT
+ struct dsc_dec_dpcd_caps dsc_caps;
+ uint32_t link_bandwidth_kbps;
+ #endif
struct dc_sink *sink = NULL;
if (aconnector == NULL) {
&mode, &aconnector->base, con_state, old_stream);
#ifdef CONFIG_DRM_AMD_DC_DSC_SUPPORT
- /* stream->timing.flags.DSC = 0; */
- /* */
- /* if (aconnector->dc_link && */
- /* aconnector->dc_link->connector_signal == SIGNAL_TYPE_DISPLAY_PORT #<{(|&& */
- /* aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.is_dsc_supported|)}>#) */
- /* if (dc_dsc_compute_config(aconnector->dc_link->ctx->dc, */
- /* &aconnector->dc_link->dpcd_caps.dsc_caps, */
- /* dc_link_bandwidth_kbps(aconnector->dc_link, dc_link_get_link_cap(aconnector->dc_link)), */
- /* &stream->timing, */
- /* &stream->timing.dsc_cfg)) */
- /* stream->timing.flags.DSC = 1; */
+ stream->timing.flags.DSC = 0;
+
+ if (aconnector->dc_link && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT) {
+ dc_dsc_parse_dsc_dpcd(aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.raw,
+ aconnector->dc_link->dpcd_caps.dsc_caps.dsc_ext_caps.raw,
+ &dsc_caps);
+ link_bandwidth_kbps = dc_link_bandwidth_kbps(aconnector->dc_link,
+ dc_link_get_link_cap(aconnector->dc_link));
+
+ if (dsc_caps.is_dsc_supported)
+ if (dc_dsc_compute_config(aconnector->dc_link->ctx->dc,
+ &dsc_caps,
+ link_bandwidth_kbps,
+ &stream->timing,
+ &stream->timing.dsc_cfg))
+ stream->timing.flags.DSC = 1;
+ }
#endif
update_stream_scaling_settings(&mode, dm_state, stream);
state->abm_level = cur->abm_level;
state->vrr_supported = cur->vrr_supported;
state->freesync_config = cur->freesync_config;
- state->crc_enabled = cur->crc_enabled;
+ state->crc_src = cur->crc_src;
state->cm_has_degamma = cur->cm_has_degamma;
state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb;
.atomic_destroy_state = dm_crtc_destroy_state,
.set_crc_source = amdgpu_dm_crtc_set_crc_source,
.verify_crc_source = amdgpu_dm_crtc_verify_crc_source,
+ .get_crc_sources = amdgpu_dm_crtc_get_crc_sources,
.enable_vblank = dm_enable_vblank,
.disable_vblank = dm_disable_vblank,
};
}
if (plane->type != DRM_PLANE_TYPE_CURSOR)
- domain = amdgpu_display_supported_domains(adev);
+ domain = amdgpu_display_supported_domains(adev, rbo->flags);
else
domain = AMDGPU_GEM_DOMAIN_VRAM;
static int dm_plane_atomic_async_check(struct drm_plane *plane,
struct drm_plane_state *new_plane_state)
{
- struct drm_plane_state *old_plane_state =
- drm_atomic_get_old_plane_state(new_plane_state->state, plane);
-
/* Only support async updates on cursor planes. */
if (plane->type != DRM_PLANE_TYPE_CURSOR)
return -EINVAL;
- /*
- * DRM calls prepare_fb and cleanup_fb on new_plane_state for
- * async commits so don't allow fb changes.
- */
- if (old_plane_state->fb != new_plane_state->fb)
- return -EINVAL;
-
return 0;
}
static struct drm_encoder *amdgpu_dm_connector_to_encoder(struct drm_connector *connector)
{
- return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
+ struct drm_encoder *encoder;
+
+ /* There is only one encoder per connector */
+ drm_connector_for_each_possible_encoder(connector, encoder)
+ return encoder;
+
+ return NULL;
}
static void amdgpu_dm_get_native_mode(struct drm_connector *connector)
false,
msecs_to_jiffies(5000));
if (unlikely(r <= 0))
- DRM_ERROR("Waiting for fences timed out or interrupted!");
+ DRM_ERROR("Waiting for fences timed out!");
/*
* TODO This might fail and hence better not used, wait
bundle->surface_updates[planes_count].plane_info =
&bundle->plane_infos[planes_count];
+ /*
+ * Only allow immediate flips for fast updates that don't
+ * change FB pitch, DCC state, rotation or mirroring.
+ */
bundle->flip_addrs[planes_count].flip_immediate =
- (crtc->state->pageflip_flags & DRM_MODE_PAGE_FLIP_ASYNC) != 0;
+ crtc->state->async_flip &&
+ acrtc_state->update_type == UPDATE_TYPE_FAST;
timestamp_ns = ktime_get_ns();
bundle->flip_addrs[planes_count].flip_timestamp_in_us = div_u64(timestamp_ns, 1000);
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state, *new_crtc_state;
int i;
+ enum amdgpu_dm_pipe_crc_source source;
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
new_crtc_state, i) {
#ifdef CONFIG_DEBUG_FS
/* The stream has changed so CRC capture needs to be re-enabled. */
- if (dm_new_crtc_state->crc_enabled) {
- dm_new_crtc_state->crc_enabled = false;
- amdgpu_dm_crtc_set_crc_source(crtc, "auto");
+ source = dm_new_crtc_state->crc_src;
+ if (amdgpu_dm_is_valid_crc_source(source)) {
+ amdgpu_dm_crtc_configure_crc_source(
+ crtc, dm_new_crtc_state,
+ dm_new_crtc_state->crc_src);
}
#endif
}
if (dm_old_crtc_state->interrupts_enabled &&
(!dm_new_crtc_state->interrupts_enabled ||
- drm_atomic_crtc_needs_modeset(new_crtc_state))) {
- /*
- * Drop the extra vblank reference added by CRC
- * capture if applicable.
- */
- if (dm_new_crtc_state->crc_enabled)
- drm_crtc_vblank_put(crtc);
-
- /*
- * Only keep CRC capture enabled if there's
- * still a stream for the CRTC.
- */
- if (!dm_new_crtc_state->stream)
- dm_new_crtc_state->crc_enabled = false;
-
+ drm_atomic_crtc_needs_modeset(new_crtc_state)))
manage_dm_interrupts(adev, acrtc, false);
- }
}
/*
* Add check here for SoC's that support hardware cursor plane, to
amdgpu_dm_enable_crtc_interrupts(dev, state, true);
for_each_new_crtc_in_state(state, crtc, new_crtc_state, j)
- if (new_crtc_state->pageflip_flags & DRM_MODE_PAGE_FLIP_ASYNC)
+ if (new_crtc_state->async_flip)
wait_for_vblank = false;
/* update planes when needed per crtc*/
continue;
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) {
+ const struct amdgpu_framebuffer *amdgpu_fb =
+ to_amdgpu_framebuffer(new_plane_state->fb);
+ struct dc_plane_info plane_info;
+ struct dc_flip_addrs flip_addr;
+ uint64_t tiling_flags;
+
new_plane_crtc = new_plane_state->crtc;
old_plane_crtc = old_plane_state->crtc;
new_dm_plane_state = to_dm_plane_state(new_plane_state);
updates[num_plane].scaling_info = &scaling_info;
+ if (amdgpu_fb) {
+ ret = get_fb_info(amdgpu_fb, &tiling_flags);
+ if (ret)
+ goto cleanup;
+
+ memset(&flip_addr, 0, sizeof(flip_addr));
+
+ ret = fill_dc_plane_info_and_addr(
+ dm->adev, new_plane_state, tiling_flags,
+ &plane_info,
+ &flip_addr.address);
+ if (ret)
+ goto cleanup;
+
+ updates[num_plane].plane_info = &plane_info;
+ updates[num_plane].flip_addr = &flip_addr;
+ }
+
num_plane++;
}
if (ret)
goto fail;
+ if (state->legacy_cursor_update) {
+ /*
+ * This is a fast cursor update coming from the plane update
+ * helper; check whether it can be done asynchronously for better
+ * performance.
+ */
+ state->async_update =
+ !drm_atomic_helper_async_check(dev, state);
+
+ /*
+ * Skip the remaining global validation if this is an async
+ * update. Cursor updates can be done without affecting
+ * state or bandwidth calcs and this avoids the performance
+ * penalty of locking the private state object and
+ * allocating a new dc_state.
+ */
+ if (state->async_update)
+ return 0;
+ }
+
/* Check scaling and underscan changes*/
/* TODO Removed scaling changes validation due to inability to commit
* new stream into context w\o causing full reset. Need to
ret = -EINVAL;
goto fail;
}
- } else if (state->legacy_cursor_update) {
+ } else {
/*
- * This is a fast cursor update coming from the plane update
- * helper, check if it can be done asynchronously for better
- * performance.
+ * The commit is a fast update. Fast updates shouldn't change
+ * the DC context or affect global validation, and their commit
+ * work can be done in parallel with other commits not touching
+ * the same resource. If we have a new DC context as part of
+ * the DM atomic state from validation, we need to free it and
+ * retain the existing one instead.
*/
- state->async_update = !drm_atomic_helper_async_check(dev, state);
+ struct dm_atomic_state *new_dm_state, *old_dm_state;
+
+ new_dm_state = dm_atomic_get_new_state(state);
+ old_dm_state = dm_atomic_get_old_state(state);
+
+ if (new_dm_state && old_dm_state) {
+ if (new_dm_state->context)
+ dc_release_state(new_dm_state->context);
+
+ new_dm_state->context = old_dm_state->context;
+
+ if (old_dm_state->context)
+ dc_retain_state(old_dm_state->context);
+ }
+ }
+
+ /* Store the overall update type for use later in atomic check. */
+ for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
+ struct dm_crtc_state *dm_new_crtc_state =
+ to_dm_crtc_state(new_crtc_state);
+
+ dm_new_crtc_state->update_type = (int)overall_update_type;
}
/* Must be success */
struct komeda_dev *mdev = sf->private;
int i;
+ seq_puts(sf, "\n====== Komeda register dump =========\n");
+
if (mdev->funcs->dump_register)
mdev->funcs->dump_register(mdev, sf);
}
static DEVICE_ATTR_RO(config_id);
+static ssize_t
+aclk_hz_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct komeda_dev *mdev = dev_to_mdev(dev);
+
+ return snprintf(buf, PAGE_SIZE, "%lu\n", clk_get_rate(mdev->aclk));
+}
+static DEVICE_ATTR_RO(aclk_hz);
+
static struct attribute *komeda_sysfs_entries[] = {
&dev_attr_core_id.attr,
&dev_attr_config_id.attr,
+ &dev_attr_aclk_hz.attr,
NULL,
};
of_graph_get_port_by_id(np, KOMEDA_OF_PORT_OUTPUT);
pipe->dual_link = pipe->of_output_links[0] && pipe->of_output_links[1];
- pipe->of_node = np;
+ pipe->of_node = of_node_get(np);
return 0;
}
product->product_id,
MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id));
err = -ENODEV;
- goto err_cleanup;
+ goto disable_clk;
}
DRM_INFO("Found ARM Mali-D%x version r%dp%d\n",
err = mdev->funcs->enum_resources(mdev);
if (err) {
DRM_ERROR("enumerate display resource failed.\n");
- goto err_cleanup;
+ goto disable_clk;
}
err = komeda_parse_dt(dev, mdev);
if (err) {
DRM_ERROR("parse device tree failed.\n");
- goto err_cleanup;
+ goto disable_clk;
}
err = komeda_assemble_pipelines(mdev);
if (err) {
DRM_ERROR("assemble display pipelines failed.\n");
- goto err_cleanup;
+ goto disable_clk;
}
dev->dma_parms = &mdev->dma_parms;
if (mdev->iommu && mdev->funcs->connect_iommu) {
err = mdev->funcs->connect_iommu(mdev);
if (err) {
+ DRM_ERROR("connect iommu failed.\n");
mdev->iommu = NULL;
- goto err_cleanup;
+ goto disable_clk;
}
}
+ clk_disable_unprepare(mdev->aclk);
+
err = sysfs_create_group(&dev->kobj, &komeda_sysfs_attr_group);
if (err) {
DRM_ERROR("create sysfs group failed.\n");
return mdev;
+disable_clk:
+ clk_disable_unprepare(mdev->aclk);
err_cleanup:
komeda_dev_destroy(mdev);
return ERR_PTR(err);
debugfs_remove_recursive(mdev->debugfs_root);
#endif
+ if (mdev->aclk)
+ clk_prepare_enable(mdev->aclk);
+
if (mdev->iommu && mdev->funcs->disconnect_iommu)
- mdev->funcs->disconnect_iommu(mdev);
+ if (mdev->funcs->disconnect_iommu(mdev))
+ DRM_ERROR("disconnect iommu failed.\n");
mdev->iommu = NULL;
for (i = 0; i < mdev->n_pipelines; i++) {
devm_kfree(dev, mdev);
}
+
+int komeda_dev_resume(struct komeda_dev *mdev)
+{
+ int ret = 0;
+
+ clk_prepare_enable(mdev->aclk);
+
+ if (mdev->iommu && mdev->funcs->connect_iommu) {
+ ret = mdev->funcs->connect_iommu(mdev);
+ if (ret < 0) {
+ DRM_ERROR("connect iommu failed.\n");
+ goto disable_clk;
+ }
+ }
+
+ ret = mdev->funcs->enable_irq(mdev);
+
+disable_clk:
+ clk_disable_unprepare(mdev->aclk);
+
+ return ret;
+}
+
+int komeda_dev_suspend(struct komeda_dev *mdev)
+{
+ int ret = 0;
+
+ clk_prepare_enable(mdev->aclk);
+
+ if (mdev->iommu && mdev->funcs->disconnect_iommu) {
+ ret = mdev->funcs->disconnect_iommu(mdev);
+ if (ret < 0) {
+ DRM_ERROR("disconnect iommu failed.\n");
+ goto disable_clk;
+ }
+ }
+
+ ret = mdev->funcs->disable_irq(mdev);
+
+disable_clk:
+ clk_disable_unprepare(mdev->aclk);
+
+ return ret;
+}
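A sketch of how these suspend/resume helpers might be wired into dev_pm_ops. The wrapper names are hypothetical and the real driver would also suspend/resume the DRM/KMS state around these calls; dev_to_mdev() is the komeda helper used by the sysfs code above, and SET_SYSTEM_SLEEP_PM_OPS comes from <linux/pm.h>.

static int __maybe_unused example_komeda_pm_suspend(struct device *dev)
{
	struct komeda_dev *mdev = dev_to_mdev(dev);

	return komeda_dev_suspend(mdev);
}

static int __maybe_unused example_komeda_pm_resume(struct device *dev)
{
	struct komeda_dev *mdev = dev_to_mdev(dev);

	return komeda_dev_resume(mdev);
}

static const struct dev_pm_ops example_komeda_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(example_komeda_pm_suspend,
				example_komeda_pm_resume)
};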
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_irq.h>
+ #include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
#include "komeda_dev.h"
memset(&evts, 0, sizeof(evts));
status = mdev->funcs->irq_handler(mdev, &evts);
+ komeda_print_events(&evts);
+
/* Notify the crtc to handle the events */
for (i = 0; i < kms->n_crtcs; i++)
komeda_crtc_handle_event(&kms->crtcs[i], &evts);
struct komeda_crtc_state *kcrtc_st = to_kcrtc_st(crtc_st);
struct komeda_plane_state *kplane_st;
struct drm_plane_state *plane_st;
- struct drm_framebuffer *fb;
struct drm_plane *plane;
struct list_head zorder_list;
int order = 0, err;
list_for_each_entry(kplane_st, &zorder_list, zlist_node) {
plane_st = &kplane_st->base;
- fb = plane_st->fb;
plane = plane_st->plane;
plane_st->normalized_zpos = order++;
struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
- struct drm_crtc_state *old_crtc_st, *new_crtc_st;
+ struct drm_crtc_state *new_crtc_st;
int i, err;
err = drm_atomic_helper_check_modeset(dev, state);
* so need to add all affected_planes (even unchanged) to
* drm_atomic_state.
*/
- for_each_oldnew_crtc_in_state(state, crtc, old_crtc_st, new_crtc_st, i) {
+ for_each_new_crtc_in_state(state, crtc, new_crtc_st, i) {
err = drm_atomic_add_affected_planes(state, crtc);
if (err)
return err;
komeda_kms_irq_handler, IRQF_SHARED,
drm->driver->name, drm);
if (err)
- goto cleanup_mode_config;
+ goto free_component_binding;
err = mdev->funcs->enable_irq(mdev);
if (err)
- goto cleanup_mode_config;
+ goto free_component_binding;
drm->irq_enabled = true;
+ drm_kms_helper_poll_init(drm);
+
err = drm_dev_register(drm, 0);
if (err)
- goto cleanup_mode_config;
+ goto free_interrupts;
return kms;
- cleanup_mode_config:
+ free_interrupts:
+ drm_kms_helper_poll_fini(drm);
drm->irq_enabled = false;
+ mdev->funcs->disable_irq(mdev);
+ free_component_binding:
+ component_unbind_all(mdev->dev, drm);
+ cleanup_mode_config:
drm_mode_config_cleanup(drm);
komeda_kms_cleanup_private_objs(kms);
+ drm->dev_private = NULL;
+ drm_dev_put(drm);
free_kms:
kfree(kms);
return ERR_PTR(err);
struct drm_device *drm = &kms->base;
struct komeda_dev *mdev = drm->dev_private;
+ drm_dev_unregister(drm);
+ drm_kms_helper_poll_fini(drm);
+ drm_atomic_helper_shutdown(drm);
drm->irq_enabled = false;
mdev->funcs->disable_irq(mdev);
- drm_dev_unregister(drm);
component_unbind_all(mdev->dev, drm);
- komeda_kms_cleanup_private_objs(kms);
drm_mode_config_cleanup(drm);
+ komeda_kms_cleanup_private_objs(kms);
drm->dev_private = NULL;
drm_dev_put(drm);
}
int id;
/** @avail_comps: available components mask of pipeline */
u32 avail_comps;
+ /**
+ * @standalone_disabled_comps:
+ *
+ * When disabling the pipeline, some components cannot be disabled
+ * together with the others; they need a separate, standalone disable.
+ * The standalone_disabled_comps are the components which need to be
+ * disabled on their own, which introduces a two-phase disable:
+ * phase 1: disable the common components.
+ * phase 2: disable the standalone_disabled_comps.
+ */
+ u32 standalone_disabled_comps;
/** @n_layers: the number of layer on @layers */
int n_layers;
/** @layers: the pipeline layers */
struct seq_file *sf);
/* component APIs */
+ extern __printf(10, 11)
struct komeda_component *
komeda_component_add(struct komeda_pipeline *pipe,
size_t comp_sz, u32 id, u32 hw_id,
struct komeda_pipeline_state *
komeda_pipeline_get_old_state(struct komeda_pipeline *pipe,
struct drm_atomic_state *state);
-void komeda_pipeline_disable(struct komeda_pipeline *pipe,
+bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
struct drm_atomic_state *old_state);
void komeda_pipeline_update(struct komeda_pipeline *pipe,
struct drm_atomic_state *old_state);
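Since komeda_pipeline_disable() now returns a bool, the implied caller pattern (an assumption here, not shown in this hunk) is that a true return means standalone components are still enabled and a second disable pass is needed once phase 1 has been flushed to hardware.

/* Sketch only: two-phase disable as described for
 * standalone_disabled_comps; the flush/wait between phases is
 * driver specific and only hinted at here.
 */
static void example_disable_pipeline(struct komeda_pipeline *pipe,
				     struct drm_atomic_state *old_state)
{
	/* Phase 1: disable the common components. */
	bool needs_phase2 = komeda_pipeline_disable(pipe, old_state);

	/* ... flush the phase-1 state to hardware and wait for it ... */

	/* Phase 2: disable the remaining standalone components. */
	if (needs_phase2)
		komeda_pipeline_disable(pipe, old_state);
}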
#include <drm/drm_gem.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_vram_helper.h>
-#include <drm/drm_vram_mm_helper.h>
#include "ast_drv.h"
/* Enable extended register access */
- ast_enable_mmio(dev);
ast_open_key(ast);
+ ast_enable_mmio(dev);
/* Find out whether P2A works or whether to use device-tree */
ast_detect_config_mode(dev, &scu_rev);
{
struct ast_private *ast = dev->dev_private;
+ /* enable standard VGA decode */
+ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x04);
+
ast_release_firmware(dev);
kfree(ast->dp501_fw_addr);
ast_mode_fini(dev);
return -EINVAL;
ast_open_key(ast);
- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa1, 0xff, 0x04);
+ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06);
ast_set_std_reg(crtc, adjusted_mode, &vbios_mode);
ast_set_crtc_reg(crtc, adjusted_mode, &vbios_mode);
kfree(encoder);
}
-
-static struct drm_encoder *ast_best_single_encoder(struct drm_connector *connector)
-{
- int enc_id = connector->encoder_ids[0];
- /* pick the encoder ids */
- if (enc_id)
- return drm_encoder_find(connector->dev, NULL, enc_id);
- return NULL;
-}
-
-
static const struct drm_encoder_funcs ast_enc_funcs = {
.destroy = ast_encoder_destroy,
};
static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
.mode_valid = ast_mode_valid,
.get_modes = ast_get_modes,
- .best_encoder = ast_best_single_encoder,
};
static const struct drm_connector_funcs ast_connector_funcs = {
return -ENOENT;
}
gbo = drm_gem_vram_of_gem(obj);
-
- ret = drm_gem_vram_pin(gbo, 0);
- if (ret)
- goto err_drm_gem_object_put_unlocked;
- src = drm_gem_vram_kmap(gbo, true, NULL);
+ src = drm_gem_vram_vmap(gbo);
if (IS_ERR(src)) {
ret = PTR_ERR(src);
- goto err_drm_gem_vram_unpin;
+ goto err_drm_gem_object_put_unlocked;
}
dst = drm_gem_vram_kmap(drm_gem_vram_of_gem(ast->cursor_cache),
false, NULL);
if (IS_ERR(dst)) {
ret = PTR_ERR(dst);
- goto err_drm_gem_vram_kunmap;
+ goto err_drm_gem_vram_vunmap;
}
dst_gpu = drm_gem_vram_offset(drm_gem_vram_of_gem(ast->cursor_cache));
if (dst_gpu < 0) {
ret = (int)dst_gpu;
- goto err_drm_gem_vram_kunmap;
+ goto err_drm_gem_vram_vunmap;
}
dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor;
ast_show_cursor(crtc);
- drm_gem_vram_kunmap(gbo);
- drm_gem_vram_unpin(gbo);
+ drm_gem_vram_vunmap(gbo, src);
drm_gem_object_put_unlocked(obj);
return 0;
-err_drm_gem_vram_kunmap:
- drm_gem_vram_kunmap(gbo);
-err_drm_gem_vram_unpin:
- drm_gem_vram_unpin(gbo);
+err_drm_gem_vram_vunmap:
+ drm_gem_vram_vunmap(gbo, src);
err_drm_gem_object_put_unlocked:
drm_gem_object_put_unlocked(obj);
return ret;
#include <drm/bridge/dw_hdmi.h>
#include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_edid.h>
- #include <drm/drm_encoder_slave.h>
#include <drm/drm_of.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
return n;
}
+/*
+ * When transmitting IEC60958 linear PCM audio, these registers allow
+ * configuring the channel status information of all the channel status
+ * bits in the IEC60958 frame. For the moment this configuration is only
+ * used when the I2S audio interface, General Purpose Audio (GPA),
+ * or AHB audio DMA (AHBAUDDMA) interface is active
+ * (for the S/PDIF interface this information comes from the stream).
+ */
+void dw_hdmi_set_channel_status(struct dw_hdmi *hdmi,
+ u8 *channel_status)
+{
+ /*
+ * Set channel status register for frequency and word length.
+ * Use default values for other registers.
+ */
+ hdmi_writeb(hdmi, channel_status[3], HDMI_FC_AUDSCHNLS7);
+ hdmi_writeb(hdmi, channel_status[4], HDMI_FC_AUDSCHNLS8);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_status);
+
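A sketch of how a platform glue driver's audio path could hand the IEC60958 channel status bytes to the new helper; the function and the source of the status bytes are hypothetical, and struct snd_aes_iec958 (24 status bytes) comes from <sound/asound.h>.

/* Sketch only: pass the per-stream channel status to the DW-HDMI core.
 * Only bytes 3 and 4 (sampling frequency and word length) are consumed
 * by dw_hdmi_set_channel_status() in this patch.
 */
static void example_set_channel_status(struct dw_hdmi *hdmi,
				       const struct snd_aes_iec958 *iec)
{
	u8 cs[24];

	memcpy(cs, iec->status, sizeof(cs));
	dw_hdmi_set_channel_status(hdmi, cs);
}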
static void hdmi_set_clk_regenerator(struct dw_hdmi *hdmi,
unsigned long pixel_clk, unsigned int sample_rate)
{
*/
#include <linux/dma-fence.h>
+ #include <linux/ktime.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_device.h>
#include <drm/drm_plane_helper.h>
}
}
-/*
- * For connectors that support multiple encoders, either the
- * .atomic_best_encoder() or .best_encoder() operation must be implemented.
- */
-static struct drm_encoder *
-pick_single_encoder_for_connector(struct drm_connector *connector)
-{
- WARN_ON(connector->encoder_ids[1]);
- return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
-}
-
static int handle_conflicting_encoders(struct drm_atomic_state *state,
bool disable_conflicting_encoders)
{
else if (funcs->best_encoder)
new_encoder = funcs->best_encoder(connector);
else
- new_encoder = pick_single_encoder_for_connector(connector);
+ new_encoder = drm_connector_get_single_encoder(connector);
if (new_encoder) {
if (encoder_mask & drm_encoder_mask(new_encoder)) {
else if (funcs->best_encoder)
new_encoder = funcs->best_encoder(connector);
else
- new_encoder = pick_single_encoder_for_connector(connector);
+ new_encoder = drm_connector_get_single_encoder(connector);
if (!new_encoder) {
DRM_DEBUG_ATOMIC("No suitable encoder found for [CONNECTOR:%d:%s]\n",
continue;
funcs = crtc->helper_private;
- if (!funcs->mode_fixup)
+ if (!funcs || !funcs->mode_fixup)
continue;
ret = funcs->mode_fixup(crtc, &new_crtc_state->mode,
{
struct drm_device *dev = old_state->dev;
const struct drm_mode_config_helper_funcs *funcs;
+ ktime_t start;
+ s64 commit_time_ms;
funcs = dev->mode_config.helper_private;
+ /*
+ * We're measuring the _entire_ commit, so the time will vary depending
+ * on how many fences and objects are involved. For the purposes of self
+ * refresh, this is desirable since it'll give us an idea of how
+ * congested things are. This will inform our decision on how often we
+ * should enter self refresh after idle.
+ *
+ * These times will be averaged out in the self refresh helpers to avoid
+ * overreacting to one outlier frame.
+ */
+ start = ktime_get();
+
drm_atomic_helper_wait_for_fences(dev, old_state, false);
drm_atomic_helper_wait_for_dependencies(old_state);
else
drm_atomic_helper_commit_tail(old_state);
+ commit_time_ms = ktime_ms_delta(ktime_get(), start);
+ if (commit_time_ms > 0)
+ drm_self_refresh_helper_update_avg_times(old_state,
+ (unsigned long)commit_time_ms);
+
drm_atomic_helper_commit_cleanup_done(old_state);
drm_atomic_state_put(old_state);
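The averaging mentioned in the comment above could be done with the kernel's EWMA helpers; below is a standalone sketch under that assumption, not the actual drm_self_refresh_helper_update_avg_times() implementation.

#include <linux/average.h>

/* 1/16 weight per new sample, values kept with 4 fractional bits. */
DECLARE_EWMA(commit_ms, 4, 16)

static struct ewma_commit_ms example_commit_avg;

static void example_commit_avg_init(void)
{
	ewma_commit_ms_init(&example_commit_avg);
}

static void example_commit_avg_record(unsigned long commit_time_ms)
{
	ewma_commit_ms_add(&example_commit_avg, commit_time_ms);
}

static unsigned long example_commit_avg_read(void)
{
	return ewma_commit_ms_read(&example_commit_avg);
}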
return PTR_ERR(crtc_state);
crtc_state->event = event;
- crtc_state->pageflip_flags = flags;
+ crtc_state->async_flip = flags & DRM_MODE_PAGE_FLIP_ASYNC;
plane_state = drm_atomic_get_plane_state(state, plane);
if (IS_ERR(plane_state))
if (arg->reserved)
return -EINVAL;
- if ((arg->flags & DRM_MODE_PAGE_FLIP_ASYNC) &&
- !dev->mode_config.async_page_flip)
+ if (arg->flags & DRM_MODE_PAGE_FLIP_ASYNC)
return -EINVAL;
/* can't test and expect an event at the same time. */
} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
ret = drm_atomic_nonblocking_commit(state);
} else {
- if (unlikely(drm_debug & DRM_UT_STATE))
+ if (drm_debug_enabled(DRM_UT_STATE))
drm_atomic_print_state(state);
ret = drm_atomic_commit(state);
struct drm_crtc *crtc)
{
struct drm_encoder *encoder;
- int i;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if (encoder->possible_crtcs & drm_crtc_mask(crtc))
return true;
}
* simple XOR between the two handle the addition nicely.
*/
cmdline = &connector->cmdline_mode;
- if (cmdline->specified) {
+ if (cmdline->specified && cmdline->rotation_reflection) {
unsigned int cmdline_rest, panel_rest;
unsigned int cmdline_rot, panel_rot;
unsigned int sum_rot, sum_rest;
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
if (!encoder_funcs)
return;
- drm_bridge_disable(encoder->bridge);
-
if (encoder_funcs->disable)
(*encoder_funcs->disable)(encoder);
else if (encoder_funcs->dpms)
(*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF);
-
- drm_bridge_post_disable(encoder->bridge);
}
static void __drm_helper_disable_unused_functions(struct drm_device *dev)
if (!encoder_funcs)
continue;
- ret = drm_bridge_mode_fixup(encoder->bridge,
- mode, adjusted_mode);
- if (!ret) {
- DRM_DEBUG_KMS("Bridge fixup failed\n");
- goto done;
- }
-
encoder_funcs = encoder->helper_private;
if (encoder_funcs->mode_fixup) {
if (!(ret = encoder_funcs->mode_fixup(encoder, mode,
if (!encoder_funcs)
continue;
- drm_bridge_disable(encoder->bridge);
-
/* Disable the encoders as the first thing we do. */
if (encoder_funcs->prepare)
encoder_funcs->prepare(encoder);
-
- drm_bridge_post_disable(encoder->bridge);
}
drm_crtc_prepare_encoders(dev);
encoder->base.id, encoder->name, mode->name);
if (encoder_funcs->mode_set)
encoder_funcs->mode_set(encoder, mode, adjusted_mode);
-
- drm_bridge_mode_set(encoder->bridge, mode, adjusted_mode);
}
/* Now enable the clocks, plane, pipe, and connectors that we set up. */
if (!encoder_funcs)
continue;
- drm_bridge_pre_enable(encoder->bridge);
-
if (encoder_funcs->commit)
encoder_funcs->commit(encoder);
-
- drm_bridge_enable(encoder->bridge);
}
/* Calculate and store various constants which
__drm_helper_disable_unused_functions(dev);
}
+/*
+ * For connectors that support multiple encoders, either the
+ * .atomic_best_encoder() or .best_encoder() operation must be implemented.
+ */
+struct drm_encoder *
+drm_connector_get_single_encoder(struct drm_connector *connector)
+{
+ struct drm_encoder *encoder;
+
+ WARN_ON(hweight32(connector->possible_encoders) > 1);
+ drm_connector_for_each_possible_encoder(connector, encoder)
+ return encoder;
+
+ return NULL;
+}
+
/**
* drm_crtc_helper_set_config - set a new config from userspace
* @set: mode set configuration
new_encoder = connector->encoder;
for (ro = 0; ro < set->num_connectors; ro++) {
if (set->connectors[ro] == connector) {
- new_encoder = connector_funcs->best_encoder(connector);
+ if (connector_funcs->best_encoder)
+ new_encoder = connector_funcs->best_encoder(connector);
+ else
+ new_encoder = drm_connector_get_single_encoder(connector);
+
/* if we can't get an encoder for a connector
we are setting now - then fail */
if (new_encoder == NULL)
/* Helper which handles bridge ordering around encoder dpms */
static void drm_helper_encoder_dpms(struct drm_encoder *encoder, int mode)
{
- struct drm_bridge *bridge = encoder->bridge;
const struct drm_encoder_helper_funcs *encoder_funcs;
encoder_funcs = encoder->helper_private;
if (!encoder_funcs)
return;
- if (mode == DRM_MODE_DPMS_ON)
- drm_bridge_pre_enable(bridge);
- else
- drm_bridge_disable(bridge);
-
if (encoder_funcs->dpms)
encoder_funcs->dpms(encoder, mode);
-
- if (mode == DRM_MODE_DPMS_ON)
- drm_bridge_enable(bridge);
- else
- drm_bridge_post_disable(bridge);
}
static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
#include "drm_internal.h"
#include "drm_legacy.h"
-/*
- * drm_debug: Enable debug output.
- * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
- */
-unsigned int drm_debug = 0;
-EXPORT_SYMBOL(drm_debug);
-
MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
MODULE_DESCRIPTION("DRM shared core routines");
MODULE_LICENSE("GPL and additional rights");
-MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n"
-"\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n"
-"\t\tBit 1 (0x02) will enable DRIVER messages (drm controller code)\n"
-"\t\tBit 2 (0x04) will enable KMS messages (modesetting code)\n"
-"\t\tBit 3 (0x08) will enable PRIME messages (prime code)\n"
-"\t\tBit 4 (0x10) will enable ATOMIC messages (atomic code)\n"
-"\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n"
-"\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n"
-"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
-module_param_named(debug, drm_debug, int, 0600);
static DEFINE_SPINLOCK(drm_minor_lock);
static struct idr drm_minors_idr;
if (ret)
goto err_minors;
+ dev->registered = true;
+
if (dev->driver->load) {
ret = dev->driver->load(dev, flags);
if (ret)
goto err_minors;
}
- dev->registered = true;
-
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_modeset_register_all(dev);
* Copyright (C) 2014-2018 Etnaviv Project
*/
+ #include <drm/drm_drv.h>
+
#include "etnaviv_cmdbuf.h"
#include "etnaviv_gpu.h"
#include "etnaviv_gem.h"
u32 *ptr = buf->vaddr + off;
dev_info(gpu->dev, "virt %p phys 0x%08x free 0x%08x\n",
- ptr, etnaviv_cmdbuf_get_va(buf) + off, size - len * 4 - off);
+ ptr, etnaviv_cmdbuf_get_va(buf,
+ &gpu->mmu_context->cmdbuf_mapping) +
+ off, size - len * 4 - off);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4,
ptr, len * 4, 0);
if (buffer->user_size + cmd_dwords * sizeof(u64) > buffer->size)
buffer->user_size = 0;
- return etnaviv_cmdbuf_get_va(buffer) + buffer->user_size;
+ return etnaviv_cmdbuf_get_va(buffer,
+ &gpu->mmu_context->cmdbuf_mapping) +
+ buffer->user_size;
}
u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu)
buffer->user_size = 0;
CMD_WAIT(buffer);
- CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) +
- buffer->user_size - 4);
+ CMD_LINK(buffer, 2,
+ etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
+ + buffer->user_size - 4);
return buffer->user_size / 8;
}
return buffer->user_size / 8;
}
- u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu)
+ u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu, unsigned short id)
{
struct etnaviv_cmdbuf *buffer = &gpu->buffer;
buffer->user_size = 0;
CMD_LOAD_STATE(buffer, VIVS_MMUv2_PTA_CONFIG,
- VIVS_MMUv2_PTA_CONFIG_INDEX(0));
+ VIVS_MMUv2_PTA_CONFIG_INDEX(id));
CMD_END(buffer);
/* Append waitlink */
CMD_WAIT(buffer);
- CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) +
- buffer->user_size - 4);
+ CMD_LINK(buffer, 2,
+ etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
+ + buffer->user_size - 4);
/*
* Kick off the 'sync point' command by replacing the previous
/* Append a command buffer to the ring buffer. */
void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
- unsigned int event, struct etnaviv_cmdbuf *cmdbuf)
+ struct etnaviv_iommu_context *mmu_context, unsigned int event,
+ struct etnaviv_cmdbuf *cmdbuf)
{
struct etnaviv_cmdbuf *buffer = &gpu->buffer;
unsigned int waitlink_offset = buffer->user_size - 16;
u32 return_target, return_dwords;
u32 link_target, link_dwords;
bool switch_context = gpu->exec_state != exec_state;
+ bool switch_mmu_context = gpu->mmu_context != mmu_context;
+ unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq);
+ bool need_flush = switch_mmu_context || gpu->flush_seq != new_flush_seq;
lockdep_assert_held(&gpu->lock);
- if (drm_debug & DRM_UT_DRIVER)
+ if (drm_debug_enabled(DRM_UT_DRIVER))
etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
- link_target = etnaviv_cmdbuf_get_va(cmdbuf);
+ link_target = etnaviv_cmdbuf_get_va(cmdbuf,
+ &gpu->mmu_context->cmdbuf_mapping);
link_dwords = cmdbuf->size / 8;
/*
- * If we need maintanence prior to submitting this buffer, we will
+ * If we need maintenance prior to submitting this buffer, we will
* need to append a mmu flush load state, followed by a new
* link to this buffer - a total of four additional words.
*/
- if (gpu->mmu->need_flush || switch_context) {
+ if (need_flush || switch_context) {
u32 target, extra_dwords;
/* link command */
extra_dwords = 1;
/* flush command */
- if (gpu->mmu->need_flush) {
- if (gpu->mmu->version == ETNAVIV_IOMMU_V1)
+ if (need_flush) {
+ if (gpu->mmu_context->global->version == ETNAVIV_IOMMU_V1)
extra_dwords += 1;
else
extra_dwords += 3;
if (switch_context)
extra_dwords += 4;
+ /* PTA load command */
+ if (switch_mmu_context && gpu->sec_mode == ETNA_SEC_KERNEL)
+ extra_dwords += 1;
+
target = etnaviv_buffer_reserve(gpu, buffer, extra_dwords);
+ /*
+ * Switch MMU context if necessary. Must be done after the
+ * link target has been calculated, as the jump forward in the
+ * kernel ring still uses the last active MMU context before
+ * the switch.
+ */
+ if (switch_mmu_context) {
+ struct etnaviv_iommu_context *old_context = gpu->mmu_context;
+
+ etnaviv_iommu_context_get(mmu_context);
+ gpu->mmu_context = mmu_context;
+ etnaviv_iommu_context_put(old_context);
+ }
- if (gpu->mmu->need_flush) {
+ if (need_flush) {
/* Add the MMU flush */
- if (gpu->mmu->version == ETNAVIV_IOMMU_V1) {
+ if (gpu->mmu_context->global->version == ETNAVIV_IOMMU_V1) {
CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU,
VIVS_GL_FLUSH_MMU_FLUSH_FEMMU |
VIVS_GL_FLUSH_MMU_FLUSH_UNK1 |
VIVS_GL_FLUSH_MMU_FLUSH_PEMMU |
VIVS_GL_FLUSH_MMU_FLUSH_UNK4);
} else {
+ u32 flush = VIVS_MMUv2_CONFIGURATION_MODE_MASK |
+ VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH;
+
+ if (switch_mmu_context &&
+ gpu->sec_mode == ETNA_SEC_KERNEL) {
+ unsigned short id =
+ etnaviv_iommuv2_get_pta_id(gpu->mmu_context);
+ CMD_LOAD_STATE(buffer,
+ VIVS_MMUv2_PTA_CONFIG,
+ VIVS_MMUv2_PTA_CONFIG_INDEX(id));
+ }
+
+ if (gpu->sec_mode == ETNA_SEC_NONE)
+ flush |= etnaviv_iommuv2_get_mtlb_addr(gpu->mmu_context);
+
CMD_LOAD_STATE(buffer, VIVS_MMUv2_CONFIGURATION,
- VIVS_MMUv2_CONFIGURATION_MODE_MASK |
- VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK |
- VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH);
+ flush);
CMD_SEM(buffer, SYNC_RECIPIENT_FE,
SYNC_RECIPIENT_PE);
CMD_STALL(buffer, SYNC_RECIPIENT_FE,
SYNC_RECIPIENT_PE);
}
- gpu->mmu->need_flush = false;
+ gpu->flush_seq = new_flush_seq;
}
if (switch_context) {
}
/* And the link to the submitted buffer */
+ link_target = etnaviv_cmdbuf_get_va(cmdbuf,
+ &gpu->mmu_context->cmdbuf_mapping);
CMD_LINK(buffer, link_dwords, link_target);
/* Update the link target to point to above instructions */
CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) |
VIVS_GL_EVENT_FROM_PE);
CMD_WAIT(buffer);
- CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) +
- buffer->user_size - 4);
+ CMD_LINK(buffer, 2,
+ etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
+ + buffer->user_size - 4);
- if (drm_debug & DRM_UT_DRIVER)
+ if (drm_debug_enabled(DRM_UT_DRIVER))
pr_info("stream link to 0x%08x @ 0x%08x %p\n",
- return_target, etnaviv_cmdbuf_get_va(cmdbuf),
+ return_target,
+ etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu_context->cmdbuf_mapping),
cmdbuf->vaddr);
- if (drm_debug & DRM_UT_DRIVER) {
+ if (drm_debug_enabled(DRM_UT_DRIVER)) {
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4,
cmdbuf->vaddr, cmdbuf->size, 0);
VIV_FE_LINK_HEADER_PREFETCH(link_dwords),
link_target);
- if (drm_debug & DRM_UT_DRIVER)
+ if (drm_debug_enabled(DRM_UT_DRIVER))
etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
}
# SPDX-License-Identifier: GPL-2.0-only
config DRM_HISI_HIBMC
tristate "DRM Support for Hisilicon Hibmc"
- depends on DRM && PCI && MMU
+ depends on DRM && PCI && MMU && ARM64
select DRM_KMS_HELPER
select DRM_VRAM_HELPER
-
+ select DRM_TTM
+ select DRM_TTM_HELPER
help
Choose this option if you have a Hisilicon Hibmc soc chipset.
If M is selected the module will be called hibmc-drm.
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
-#include <drm/drm_vram_mm_helper.h>
#include "hibmc_drm_drv.h"
#include "hibmc_drm_regs.h"
static int __maybe_unused hibmc_pm_suspend(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
return drm_mode_config_helper_suspend(drm_dev);
}
static int __maybe_unused hibmc_pm_resume(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
return drm_mode_config_helper_resume(drm_dev);
}
.driver.pm = &hibmc_pm_ops,
};
- static int __init hibmc_init(void)
- {
- return pci_register_driver(&hibmc_pci_driver);
- }
-
- static void __exit hibmc_exit(void)
- {
- return pci_unregister_driver(&hibmc_pci_driver);
- }
-
- module_init(hibmc_init);
- module_exit(hibmc_exit);
+ module_pci_driver(hibmc_pci_driver);
MODULE_DEVICE_TABLE(pci, hibmc_pci_table);
#include "i915_drv.h"
#include "intel_connector.h"
- #include "intel_drv.h"
+ #include "intel_display_types.h"
#include "intel_hdcp.h"
int intel_connector_init(struct intel_connector *connector)
if (ret)
goto err;
- if (i915_inject_load_failure()) {
+ if (i915_inject_probe_failure(to_i915(connector->dev))) {
ret = -EFAULT;
goto err_backlight;
}
void
intel_attach_colorspace_property(struct drm_connector *connector)
{
- if (!drm_mode_create_colorspace_property(connector))
+ if (!drm_mode_create_hdmi_colorspace_property(connector))
drm_object_attach_property(&connector->base,
connector->colorspace_property, 0);
}
#include "i915_debugfs.h"
#include "i915_drv.h"
+ #include "i915_trace.h"
#include "intel_atomic.h"
#include "intel_audio.h"
#include "intel_connector.h"
#include "intel_ddi.h"
+ #include "intel_display_types.h"
#include "intel_dp.h"
#include "intel_dp_link_training.h"
#include "intel_dp_mst.h"
#include "intel_dpio_phy.h"
- #include "intel_drv.h"
#include "intel_fifo_underrun.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
#include "intel_panel.h"
#include "intel_psr.h"
#include "intel_sideband.h"
+ #include "intel_tc.h"
#include "intel_vdsc.h"
#define DP_DPRX_ESI_LEN 14
return intel_dp->common_rates[intel_dp->num_common_rates - 1];
}
- static int intel_dp_get_fia_supported_lane_count(struct intel_dp *intel_dp)
- {
- struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
- struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
- enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
- intel_wakeref_t wakeref;
- u32 lane_info;
-
- if (tc_port == PORT_TC_NONE || dig_port->tc_type != TC_PORT_TYPEC)
- return 4;
-
- lane_info = 0;
- with_intel_display_power(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref)
- lane_info = (I915_READ(PORT_TX_DFLEXDPSP) &
- DP_LANE_ASSIGNMENT_MASK(tc_port)) >>
- DP_LANE_ASSIGNMENT_SHIFT(tc_port);
-
- switch (lane_info) {
- default:
- MISSING_CASE(lane_info);
- case 1:
- case 2:
- case 4:
- case 8:
- return 1;
- case 3:
- case 12:
- return 2;
- case 15:
- return 4;
- }
- }
-
/* Theoretical max between source and sink */
static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
int source_max = intel_dig_port->max_lanes;
int sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
- int fia_max = intel_dp_get_fia_supported_lane_count(intel_dp);
+ int fia_max = intel_tc_port_fia_max_lane_count(intel_dig_port);
return min3(source_max, sink_max, fia_max);
}
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
- enum port port = dig_port->base.port;
+ enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
- if (intel_port_is_combophy(dev_priv, port) &&
+ if (intel_phy_is_combo(dev_priv, phy) &&
!IS_ELKHARTLAKE(dev_priv) &&
!intel_dp_is_edp(intel_dp))
return 540000;
DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) |
DP_AUX_CH_CTL_SYNC_PULSE_SKL(32);
- if (intel_dig_port->tc_type == TC_PORT_TBT)
+ if (intel_dig_port->tc_mode == TC_PORT_TBT_ALT)
ret |= DP_AUX_CH_CTL_TBT_IO;
return ret;
struct drm_i915_private *i915 =
to_i915(intel_dig_port->base.base.dev);
struct intel_uncore *uncore = &i915->uncore;
+ enum phy phy = intel_port_to_phy(i915, intel_dig_port->base.port);
+ bool is_tc_port = intel_phy_is_tc(i915, phy);
i915_reg_t ch_ctl, ch_data[5];
u32 aux_clock_divider;
enum intel_display_power_domain aux_domain =
for (i = 0; i < ARRAY_SIZE(ch_data); i++)
ch_data[i] = intel_dp->aux_ch_data_reg(intel_dp, i);
+ if (is_tc_port)
+ intel_tc_port_lock(intel_dig_port);
+
aux_wakeref = intel_display_power_get(i915, aux_domain);
pps_wakeref = pps_lock(intel_dp);
pps_unlock(intel_dp, pps_wakeref);
intel_display_power_put_async(i915, aux_domain, aux_wakeref);
+ if (is_tc_port)
+ intel_tc_port_unlock(intel_dig_port);
+
return ret;
}
int mode_rate, link_clock, link_avail;
for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+ int output_bpp = intel_dp_output_bpp(pipe_config, bpp);
+
mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
- bpp);
+ output_bpp);
for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
for (lane_count = limits->min_lane_count;
I915_READ(pp_stat_reg),
I915_READ(pp_ctrl_reg));
- if (intel_wait_for_register(&dev_priv->uncore,
- pp_stat_reg, mask, value,
- 5000))
+ if (intel_de_wait_for_register(dev_priv, pp_stat_reg,
+ mask, value, 5000))
DRM_ERROR("Panel status timeout: status %08x control %08x\n",
I915_READ(pp_stat_reg),
I915_READ(pp_ctrl_reg));
if (port == PORT_A)
return;
- if (intel_wait_for_register(&dev_priv->uncore, DP_TP_STATUS(port),
- DP_TP_STATUS_IDLE_DONE,
- DP_TP_STATUS_IDLE_DONE,
- 1))
+ if (intel_de_wait_for_set(dev_priv, DP_TP_STATUS(port),
+ DP_TP_STATUS_IDLE_DONE, 1))
DRM_ERROR("Timed out waiting for DP idle patterns\n");
}
drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
drm_dp_is_branch(intel_dp->dpcd));
- if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11)
- dev_priv->no_aux_handshake = intel_dp->dpcd[DP_MAX_DOWNSPREAD] &
- DP_NO_AUX_HANDSHAKE_LINK_TRAINING;
-
/*
* Read the eDP display control registers.
*
if (!intel_dp_read_dpcd(intel_dp))
return false;
- /* Don't clobber cached eDP rates. */
+ /*
+ * Don't clobber cached eDP rates. Also skip re-reading
+ * the OUI/ID since we know it won't change.
+ */
if (!intel_dp_is_edp(intel_dp)) {
+ drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
+ drm_dp_is_branch(intel_dp->dpcd));
+
intel_dp_set_sink_rates(intel_dp);
intel_dp_set_common_rates(intel_dp);
}
* Some eDP panels do not set a valid value for sink count, which is why
* we don't bother reading it here or in intel_edp_init_dpcd().
*/
- if (!intel_dp_is_edp(intel_dp)) {
+ if (!intel_dp_is_edp(intel_dp) &&
+ !drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_SINK_COUNT)) {
u8 count;
ssize_t r;
* retrain the link to get a picture. That's in case no
* userspace component reacted to intermittent HPD dip.
*/
- static bool intel_dp_hotplug(struct intel_encoder *encoder,
- struct intel_connector *connector)
+ static enum intel_hotplug_state
+ intel_dp_hotplug(struct intel_encoder *encoder,
+ struct intel_connector *connector,
+ bool irq_received)
{
struct drm_modeset_acquire_ctx ctx;
- bool changed;
+ enum intel_hotplug_state state;
int ret;
- changed = intel_encoder_hotplug(encoder, connector);
+ state = intel_encoder_hotplug(encoder, connector, irq_received);
drm_modeset_acquire_init(&ctx, 0);
drm_modeset_acquire_fini(&ctx);
WARN(ret, "Acquiring modeset locks failed with %i\n", ret);
- return changed;
+ /*
+ * Keeping it consistent with intel_ddi_hotplug() and
+ * intel_hdmi_hotplug().
+ */
+ if (state == INTEL_HOTPLUG_UNCHANGED && irq_received)
+ state = INTEL_HOTPLUG_RETRY;
+
+ return state;
}
static void intel_dp_check_service_irq(struct intel_dp *intel_dp)
return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(port);
}
- static const char *tc_type_name(enum tc_port_type type)
- {
- static const char * const names[] = {
- [TC_PORT_UNKNOWN] = "unknown",
- [TC_PORT_LEGACY] = "legacy",
- [TC_PORT_TYPEC] = "typec",
- [TC_PORT_TBT] = "tbt",
- };
-
- if (WARN_ON(type >= ARRAY_SIZE(names)))
- type = TC_PORT_UNKNOWN;
-
- return names[type];
- }
-
- static void icl_update_tc_port_type(struct drm_i915_private *dev_priv,
- struct intel_digital_port *intel_dig_port,
- bool is_legacy, bool is_typec, bool is_tbt)
- {
- enum port port = intel_dig_port->base.port;
- enum tc_port_type old_type = intel_dig_port->tc_type;
-
- WARN_ON(is_legacy + is_typec + is_tbt != 1);
-
- if (is_legacy)
- intel_dig_port->tc_type = TC_PORT_LEGACY;
- else if (is_typec)
- intel_dig_port->tc_type = TC_PORT_TYPEC;
- else if (is_tbt)
- intel_dig_port->tc_type = TC_PORT_TBT;
- else
- return;
-
- /* Types are not supposed to be changed at runtime. */
- WARN_ON(old_type != TC_PORT_UNKNOWN &&
- old_type != intel_dig_port->tc_type);
-
- if (old_type != intel_dig_port->tc_type)
- DRM_DEBUG_KMS("Port %c has TC type %s\n", port_name(port),
- tc_type_name(intel_dig_port->tc_type));
- }
-
- /*
- * This function implements the first part of the Connect Flow described by our
- * specification, Gen11 TypeC Programming chapter. The rest of the flow (reading
- * lanes, EDID, etc) is done as needed in the typical places.
- *
- * Unlike the other ports, type-C ports are not available to use as soon as we
- * get a hotplug. The type-C PHYs can be shared between multiple controllers:
- * display, USB, etc. As a result, handshaking through FIA is required around
- * connect and disconnect to cleanly transfer ownership with the controller and
- * set the type-C power state.
- *
- * We could opt to only do the connect flow when we actually try to use the AUX
- * channels or do a modeset, then immediately run the disconnect flow after
- * usage, but there are some implications on this for a dynamic environment:
- * things may go away or change behind our backs. So for now our driver is
- * always trying to acquire ownership of the controller as soon as it gets an
- * interrupt (or polls state and sees a port is connected) and only gives it
- * back when it sees a disconnect. Implementation of a more fine-grained model
- * will require a lot of coordination with user space and thorough testing for
- * the extra possible cases.
- */
- static bool icl_tc_phy_connect(struct drm_i915_private *dev_priv,
- struct intel_digital_port *dig_port)
- {
- enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
- u32 val;
-
- if (dig_port->tc_type != TC_PORT_LEGACY &&
- dig_port->tc_type != TC_PORT_TYPEC)
- return true;
-
- val = I915_READ(PORT_TX_DFLEXDPPMS);
- if (!(val & DP_PHY_MODE_STATUS_COMPLETED(tc_port))) {
- DRM_DEBUG_KMS("DP PHY for TC port %d not ready\n", tc_port);
- WARN_ON(dig_port->tc_legacy_port);
- return false;
- }
-
- /*
- * This function may be called many times in a row without an HPD event
- * in between, so try to avoid the write when we can.
- */
- val = I915_READ(PORT_TX_DFLEXDPCSSS);
- if (!(val & DP_PHY_MODE_STATUS_NOT_SAFE(tc_port))) {
- val |= DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
- I915_WRITE(PORT_TX_DFLEXDPCSSS, val);
- }
-
- /*
- * Now we have to re-check the live state, in case the port recently
- * became disconnected. Not necessary for legacy mode.
- */
- if (dig_port->tc_type == TC_PORT_TYPEC &&
- !(I915_READ(PORT_TX_DFLEXDPSP) & TC_LIVE_STATE_TC(tc_port))) {
- DRM_DEBUG_KMS("TC PHY %d sudden disconnect.\n", tc_port);
- icl_tc_phy_disconnect(dev_priv, dig_port);
- return false;
- }
-
- return true;
- }
-
- /*
- * See the comment at the connect function. This implements the Disconnect
- * Flow.
- */
- void icl_tc_phy_disconnect(struct drm_i915_private *dev_priv,
- struct intel_digital_port *dig_port)
- {
- enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
-
- if (dig_port->tc_type == TC_PORT_UNKNOWN)
- return;
-
- /*
- * TBT disconnection flow is read the live status, what was done in
- * caller.
- */
- if (dig_port->tc_type == TC_PORT_TYPEC ||
- dig_port->tc_type == TC_PORT_LEGACY) {
- u32 val;
-
- val = I915_READ(PORT_TX_DFLEXDPCSSS);
- val &= ~DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
- I915_WRITE(PORT_TX_DFLEXDPCSSS, val);
- }
-
- DRM_DEBUG_KMS("Port %c TC type %s disconnected\n",
- port_name(dig_port->base.port),
- tc_type_name(dig_port->tc_type));
-
- dig_port->tc_type = TC_PORT_UNKNOWN;
- }
-
- /*
- * The type-C ports are different because even when they are connected, they may
- * not be available/usable by the graphics driver: see the comment on
- * icl_tc_phy_connect(). So in our driver instead of adding the additional
- * concept of "usable" and make everything check for "connected and usable" we
- * define a port as "connected" when it is not only connected, but also when it
- * is usable by the rest of the driver. That maintains the old assumption that
- * connected ports are usable, and avoids exposing to the users objects they
- * can't really use.
- */
- static bool icl_tc_port_connected(struct drm_i915_private *dev_priv,
- struct intel_digital_port *intel_dig_port)
- {
- enum port port = intel_dig_port->base.port;
- enum tc_port tc_port = intel_port_to_tc(dev_priv, port);
- bool is_legacy, is_typec, is_tbt;
- u32 dpsp;
-
- /*
- * Complain if we got a legacy port HPD, but VBT didn't mark the port as
- * legacy. Treat the port as legacy from now on.
- */
- if (!intel_dig_port->tc_legacy_port &&
- I915_READ(SDEISR) & SDE_TC_HOTPLUG_ICP(tc_port)) {
- DRM_ERROR("VBT incorrectly claims port %c is not TypeC legacy\n",
- port_name(port));
- intel_dig_port->tc_legacy_port = true;
- }
- is_legacy = intel_dig_port->tc_legacy_port;
-
- /*
- * The spec says we shouldn't be using the ISR bits for detecting
- * between TC and TBT. We should use DFLEXDPSP.
- */
- dpsp = I915_READ(PORT_TX_DFLEXDPSP);
- is_typec = dpsp & TC_LIVE_STATE_TC(tc_port);
- is_tbt = dpsp & TC_LIVE_STATE_TBT(tc_port);
-
- if (!is_legacy && !is_typec && !is_tbt) {
- icl_tc_phy_disconnect(dev_priv, intel_dig_port);
-
- return false;
- }
-
- icl_update_tc_port_type(dev_priv, intel_dig_port, is_legacy, is_typec,
- is_tbt);
-
- if (!icl_tc_phy_connect(dev_priv, intel_dig_port))
- return false;
-
- return true;
- }
-
static bool icl_digital_port_connected(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
+ enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
- if (intel_port_is_combophy(dev_priv, encoder->port))
+ if (intel_phy_is_combo(dev_priv, phy))
return icl_combo_port_connected(dev_priv, dig_port);
- else if (intel_port_is_tc(dev_priv, encoder->port))
- return icl_tc_port_connected(dev_priv, dig_port);
+ else if (intel_phy_is_tc(dev_priv, phy))
+ return intel_tc_port_connected(dig_port);
else
MISSING_CASE(encoder->hpd_pin);
if (INTEL_GEN(dev_priv) >= 11)
intel_dp_get_dsc_sink_cap(intel_dp);
- drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
- drm_dp_is_branch(intel_dp->dpcd));
-
intel_dp_configure_mst(intel_dp);
if (intel_dp->is_mst) {
intel_dp_connector_register(struct drm_connector *connector)
{
struct intel_dp *intel_dp = intel_attached_dp(connector);
- struct drm_device *dev = connector->dev;
int ret;
ret = intel_connector_register(connector);
intel_dp->aux.dev = connector->kdev;
ret = drm_dp_aux_register(&intel_dp->aux);
if (!ret)
- drm_dp_cec_register_connector(&intel_dp->aux,
- connector->name, dev->dev);
+ drm_dp_cec_register_connector(&intel_dp->aux, connector);
return ret;
}
u8 stream_type;
} __packed;
- static struct hdcp2_dp_msg_data {
+ struct hdcp2_dp_msg_data {
u8 msg_id;
u32 offset;
bool msg_detectable;
u32 timeout;
u32 timeout2; /* Added for non_paired situation */
- } hdcp2_msg_data[] = {
- {HDCP_2_2_AKE_INIT, DP_HDCP_2_2_AKE_INIT_OFFSET, false, 0, 0},
- {HDCP_2_2_AKE_SEND_CERT, DP_HDCP_2_2_AKE_SEND_CERT_OFFSET,
- false, HDCP_2_2_CERT_TIMEOUT_MS, 0},
- {HDCP_2_2_AKE_NO_STORED_KM, DP_HDCP_2_2_AKE_NO_STORED_KM_OFFSET,
- false, 0, 0},
- {HDCP_2_2_AKE_STORED_KM, DP_HDCP_2_2_AKE_STORED_KM_OFFSET,
- false, 0, 0},
- {HDCP_2_2_AKE_SEND_HPRIME, DP_HDCP_2_2_AKE_SEND_HPRIME_OFFSET,
- true, HDCP_2_2_HPRIME_PAIRED_TIMEOUT_MS,
- HDCP_2_2_HPRIME_NO_PAIRED_TIMEOUT_MS},
- {HDCP_2_2_AKE_SEND_PAIRING_INFO,
- DP_HDCP_2_2_AKE_SEND_PAIRING_INFO_OFFSET, true,
- HDCP_2_2_PAIRING_TIMEOUT_MS, 0},
- {HDCP_2_2_LC_INIT, DP_HDCP_2_2_LC_INIT_OFFSET, false, 0, 0},
- {HDCP_2_2_LC_SEND_LPRIME, DP_HDCP_2_2_LC_SEND_LPRIME_OFFSET,
- false, HDCP_2_2_DP_LPRIME_TIMEOUT_MS, 0},
- {HDCP_2_2_SKE_SEND_EKS, DP_HDCP_2_2_SKE_SEND_EKS_OFFSET, false,
- 0, 0},
- {HDCP_2_2_REP_SEND_RECVID_LIST,
- DP_HDCP_2_2_REP_SEND_RECVID_LIST_OFFSET, true,
- HDCP_2_2_RECVID_LIST_TIMEOUT_MS, 0},
- {HDCP_2_2_REP_SEND_ACK, DP_HDCP_2_2_REP_SEND_ACK_OFFSET, false,
- 0, 0},
- {HDCP_2_2_REP_STREAM_MANAGE,
- DP_HDCP_2_2_REP_STREAM_MANAGE_OFFSET, false,
- 0, 0},
- {HDCP_2_2_REP_STREAM_READY, DP_HDCP_2_2_REP_STREAM_READY_OFFSET,
- false, HDCP_2_2_STREAM_READY_TIMEOUT_MS, 0},
+ };
+
+ static const struct hdcp2_dp_msg_data hdcp2_dp_msg_data[] = {
+ { HDCP_2_2_AKE_INIT, DP_HDCP_2_2_AKE_INIT_OFFSET, false, 0, 0 },
+ { HDCP_2_2_AKE_SEND_CERT, DP_HDCP_2_2_AKE_SEND_CERT_OFFSET,
+ false, HDCP_2_2_CERT_TIMEOUT_MS, 0 },
+ { HDCP_2_2_AKE_NO_STORED_KM, DP_HDCP_2_2_AKE_NO_STORED_KM_OFFSET,
+ false, 0, 0 },
+ { HDCP_2_2_AKE_STORED_KM, DP_HDCP_2_2_AKE_STORED_KM_OFFSET,
+ false, 0, 0 },
+ { HDCP_2_2_AKE_SEND_HPRIME, DP_HDCP_2_2_AKE_SEND_HPRIME_OFFSET,
+ true, HDCP_2_2_HPRIME_PAIRED_TIMEOUT_MS,
+ HDCP_2_2_HPRIME_NO_PAIRED_TIMEOUT_MS },
+ { HDCP_2_2_AKE_SEND_PAIRING_INFO,
+ DP_HDCP_2_2_AKE_SEND_PAIRING_INFO_OFFSET, true,
+ HDCP_2_2_PAIRING_TIMEOUT_MS, 0 },
+ { HDCP_2_2_LC_INIT, DP_HDCP_2_2_LC_INIT_OFFSET, false, 0, 0 },
+ { HDCP_2_2_LC_SEND_LPRIME, DP_HDCP_2_2_LC_SEND_LPRIME_OFFSET,
+ false, HDCP_2_2_DP_LPRIME_TIMEOUT_MS, 0 },
+ { HDCP_2_2_SKE_SEND_EKS, DP_HDCP_2_2_SKE_SEND_EKS_OFFSET, false,
+ 0, 0 },
+ { HDCP_2_2_REP_SEND_RECVID_LIST,
+ DP_HDCP_2_2_REP_SEND_RECVID_LIST_OFFSET, true,
+ HDCP_2_2_RECVID_LIST_TIMEOUT_MS, 0 },
+ { HDCP_2_2_REP_SEND_ACK, DP_HDCP_2_2_REP_SEND_ACK_OFFSET, false,
+ 0, 0 },
+ { HDCP_2_2_REP_STREAM_MANAGE,
+ DP_HDCP_2_2_REP_STREAM_MANAGE_OFFSET, false,
+ 0, 0 },
+ { HDCP_2_2_REP_STREAM_READY, DP_HDCP_2_2_REP_STREAM_READY_OFFSET,
+ false, HDCP_2_2_STREAM_READY_TIMEOUT_MS, 0 },
/* local define to shovel this through the write_2_2 interface */
#define HDCP_2_2_ERRATA_DP_STREAM_TYPE 50
- {HDCP_2_2_ERRATA_DP_STREAM_TYPE,
- DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET, false,
- 0, 0},
- };
+ { HDCP_2_2_ERRATA_DP_STREAM_TYPE,
+ DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET, false,
+ 0, 0 },
+ };
static inline
int intel_dp_hdcp2_read_rx_status(struct intel_digital_port *intel_dig_port,
static ssize_t
intel_dp_hdcp2_wait_for_msg(struct intel_digital_port *intel_dig_port,
- struct hdcp2_dp_msg_data *hdcp2_msg_data)
+ const struct hdcp2_dp_msg_data *hdcp2_msg_data)
{
struct intel_dp *dp = &intel_dig_port->dp;
struct intel_hdcp *hdcp = &dp->attached_connector->hdcp;
return ret;
}
- static struct hdcp2_dp_msg_data *get_hdcp2_dp_msg_data(u8 msg_id)
+ static const struct hdcp2_dp_msg_data *get_hdcp2_dp_msg_data(u8 msg_id)
{
int i;
- for (i = 0; i < ARRAY_SIZE(hdcp2_msg_data); i++)
- if (hdcp2_msg_data[i].msg_id == msg_id)
- return &hdcp2_msg_data[i];
+ for (i = 0; i < ARRAY_SIZE(hdcp2_dp_msg_data); i++)
+ if (hdcp2_dp_msg_data[i].msg_id == msg_id)
+ return &hdcp2_dp_msg_data[i];
return NULL;
}
unsigned int offset;
u8 *byte = buf;
ssize_t ret, bytes_to_write, len;
- struct hdcp2_dp_msg_data *hdcp2_msg_data;
+ const struct hdcp2_dp_msg_data *hdcp2_msg_data;
hdcp2_msg_data = get_hdcp2_dp_msg_data(*byte);
if (!hdcp2_msg_data)
unsigned int offset;
u8 *byte = buf;
ssize_t ret, bytes_to_recv, len;
- struct hdcp2_dp_msg_data *hdcp2_msg_data;
+ const struct hdcp2_dp_msg_data *hdcp2_msg_data;
hdcp2_msg_data = get_hdcp2_dp_msg_data(msg_id);
if (!hdcp2_msg_data)
const struct intel_crtc_state *crtc_state,
int refresh_rate)
{
- struct intel_encoder *encoder;
- struct intel_digital_port *dig_port = NULL;
struct intel_dp *intel_dp = dev_priv->drrs.dp;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
enum drrs_refresh_rate_type index = DRRS_HIGH_RR;
return;
}
- dig_port = dp_to_dig_port(intel_dp);
- encoder = &dig_port->base;
-
if (!intel_crtc) {
DRM_DEBUG_KMS("DRRS: intel_crtc not initialized\n");
return;
struct drm_device *dev = intel_encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
enum port port = intel_encoder->port;
+ enum phy phy = intel_port_to_phy(dev_priv, port);
int type;
/* Initialize the work for modeset in case of link train failure */
* Currently we don't support eDP on TypeC ports, although in
* theory it could work on TypeC legacy ports.
*/
- WARN_ON(intel_port_is_tc(dev_priv, port));
+ WARN_ON(intel_phy_is_tc(dev_priv, phy));
type = DRM_MODE_CONNECTOR_eDP;
} else {
type = DRM_MODE_CONNECTOR_DisplayPort;
#include "intel_audio.h"
#include "intel_connector.h"
#include "intel_ddi.h"
+ #include "intel_display_types.h"
#include "intel_dp.h"
#include "intel_dpio_phy.h"
- #include "intel_drv.h"
#include "intel_fifo_underrun.h"
#include "intel_gmbus.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
#include "intel_hotplug.h"
#include "intel_lspcon.h"
- #include "intel_sdvo.h"
#include "intel_panel.h"
+ #include "intel_sdvo.h"
#include "intel_sideband.h"
static struct drm_device *intel_hdmi_to_dev(struct intel_hdmi *intel_hdmi)
return true;
}
- static struct hdcp2_hdmi_msg_data {
+ struct hdcp2_hdmi_msg_data {
u8 msg_id;
u32 timeout;
u32 timeout2;
- } hdcp2_msg_data[] = {
- {HDCP_2_2_AKE_INIT, 0, 0},
- {HDCP_2_2_AKE_SEND_CERT, HDCP_2_2_CERT_TIMEOUT_MS, 0},
- {HDCP_2_2_AKE_NO_STORED_KM, 0, 0},
- {HDCP_2_2_AKE_STORED_KM, 0, 0},
- {HDCP_2_2_AKE_SEND_HPRIME, HDCP_2_2_HPRIME_PAIRED_TIMEOUT_MS,
- HDCP_2_2_HPRIME_NO_PAIRED_TIMEOUT_MS},
- {HDCP_2_2_AKE_SEND_PAIRING_INFO, HDCP_2_2_PAIRING_TIMEOUT_MS,
- 0},
- {HDCP_2_2_LC_INIT, 0, 0},
- {HDCP_2_2_LC_SEND_LPRIME, HDCP_2_2_HDMI_LPRIME_TIMEOUT_MS, 0},
- {HDCP_2_2_SKE_SEND_EKS, 0, 0},
- {HDCP_2_2_REP_SEND_RECVID_LIST,
- HDCP_2_2_RECVID_LIST_TIMEOUT_MS, 0},
- {HDCP_2_2_REP_SEND_ACK, 0, 0},
- {HDCP_2_2_REP_STREAM_MANAGE, 0, 0},
- {HDCP_2_2_REP_STREAM_READY, HDCP_2_2_STREAM_READY_TIMEOUT_MS,
- 0},
- };
+ };
+
+ static const struct hdcp2_hdmi_msg_data hdcp2_msg_data[] = {
+ { HDCP_2_2_AKE_INIT, 0, 0 },
+ { HDCP_2_2_AKE_SEND_CERT, HDCP_2_2_CERT_TIMEOUT_MS, 0 },
+ { HDCP_2_2_AKE_NO_STORED_KM, 0, 0 },
+ { HDCP_2_2_AKE_STORED_KM, 0, 0 },
+ { HDCP_2_2_AKE_SEND_HPRIME, HDCP_2_2_HPRIME_PAIRED_TIMEOUT_MS,
+ HDCP_2_2_HPRIME_NO_PAIRED_TIMEOUT_MS },
+ { HDCP_2_2_AKE_SEND_PAIRING_INFO, HDCP_2_2_PAIRING_TIMEOUT_MS, 0 },
+ { HDCP_2_2_LC_INIT, 0, 0 },
+ { HDCP_2_2_LC_SEND_LPRIME, HDCP_2_2_HDMI_LPRIME_TIMEOUT_MS, 0 },
+ { HDCP_2_2_SKE_SEND_EKS, 0, 0 },
+ { HDCP_2_2_REP_SEND_RECVID_LIST, HDCP_2_2_RECVID_LIST_TIMEOUT_MS, 0 },
+ { HDCP_2_2_REP_SEND_ACK, 0, 0 },
+ { HDCP_2_2_REP_STREAM_MANAGE, 0, 0 },
+ { HDCP_2_2_REP_STREAM_READY, HDCP_2_2_STREAM_READY_TIMEOUT_MS, 0 },
+ };
static
int intel_hdmi_hdcp2_read_rx_status(struct intel_digital_port *intel_dig_port,
static void intel_hdmi_destroy(struct drm_connector *connector)
{
- if (intel_attached_hdmi(connector)->cec_notifier)
- cec_notifier_put(intel_attached_hdmi(connector)->cec_notifier);
+ struct cec_notifier *n = intel_attached_hdmi(connector)->cec_notifier;
+
+ cec_notifier_conn_unregister(n);
intel_connector_destroy(connector);
}
static u8 icl_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port)
{
- u8 ddc_pin;
+ enum phy phy = intel_port_to_phy(dev_priv, port);
- switch (port) {
- case PORT_A:
- ddc_pin = GMBUS_PIN_1_BXT;
- break;
- case PORT_B:
- ddc_pin = GMBUS_PIN_2_BXT;
- break;
- case PORT_C:
- ddc_pin = GMBUS_PIN_9_TC1_ICP;
- break;
- case PORT_D:
- ddc_pin = GMBUS_PIN_10_TC2_ICP;
- break;
- case PORT_E:
- ddc_pin = GMBUS_PIN_11_TC3_ICP;
- break;
- case PORT_F:
- ddc_pin = GMBUS_PIN_12_TC4_ICP;
- break;
- default:
- MISSING_CASE(port);
- ddc_pin = GMBUS_PIN_2_BXT;
- break;
- }
- return ddc_pin;
+ if (intel_phy_is_combo(dev_priv, phy))
+ return GMBUS_PIN_1_BXT + port;
+ else if (intel_phy_is_tc(dev_priv, phy))
+ return GMBUS_PIN_9_TC1_ICP + intel_port_to_tc(dev_priv, port);
+
+ WARN(1, "Unknown port:%c\n", port_name(port));
+ return GMBUS_PIN_2_BXT;
}
static u8 mcc_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port)
{
+ enum phy phy = intel_port_to_phy(dev_priv, port);
u8 ddc_pin;
- switch (port) {
- case PORT_A:
+ switch (phy) {
+ case PHY_A:
ddc_pin = GMBUS_PIN_1_BXT;
break;
- case PORT_B:
+ case PHY_B:
ddc_pin = GMBUS_PIN_2_BXT;
break;
- case PORT_C:
+ case PHY_C:
ddc_pin = GMBUS_PIN_9_TC1_ICP;
break;
default:
- MISSING_CASE(port);
+ MISSING_CASE(phy);
ddc_pin = GMBUS_PIN_1_BXT;
break;
}
if (HAS_PCH_MCC(dev_priv))
ddc_pin = mcc_port_to_ddc_pin(dev_priv, port);
- else if (HAS_PCH_ICP(dev_priv))
+ else if (HAS_PCH_TGP(dev_priv) || HAS_PCH_ICP(dev_priv))
ddc_pin = icl_port_to_ddc_pin(dev_priv, port);
else if (HAS_PCH_CNP(dev_priv))
ddc_pin = cnp_port_to_ddc_pin(dev_priv, port);
struct drm_device *dev = intel_encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
enum port port = intel_encoder->port;
+ struct cec_connector_info conn_info;
DRM_DEBUG_KMS("Adding HDMI connector on port %c\n",
port_name(port));
I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
}
- intel_hdmi->cec_notifier = cec_notifier_get_conn(dev->dev,
- port_identifier(port));
+ cec_fill_conn_info_from_drm(&conn_info, connector);
+
+ intel_hdmi->cec_notifier =
+ cec_notifier_conn_register(dev->dev, port_identifier(port),
+ &conn_info);
if (!intel_hdmi->cec_notifier)
DRM_DEBUG_KMS("CEC notifier get failed\n");
}
+ static enum intel_hotplug_state
+ intel_hdmi_hotplug(struct intel_encoder *encoder,
+ struct intel_connector *connector, bool irq_received)
+ {
+ enum intel_hotplug_state state;
+
+ state = intel_encoder_hotplug(encoder, connector, irq_received);
+
+ /*
+ * On many platforms the HDMI live state signal is known to be
+ * unreliable, so we can't use it to detect if a sink is connected or
+ * not. Instead we detect if it's connected based on whether we can
+ * read the EDID or not. That in turn has a problem during disconnect,
+ * since the HPD interrupt may be raised before the DDC lines get
+ * disconnected (due to how the required lengths of the DDC vs. HPD
+ * connector pins are specified) and so we'll still be able to get a
+ * valid EDID. To solve this schedule another detection cycle if this
+ * time around we didn't detect any change in the sink's connection
+ * status.
+ */
+ if (state == INTEL_HOTPLUG_UNCHANGED && irq_received)
+ state = INTEL_HOTPLUG_RETRY;
+
+ return state;
+ }
+
void intel_hdmi_init(struct drm_i915_private *dev_priv,
i915_reg_t hdmi_reg, enum port port)
{
&intel_hdmi_enc_funcs, DRM_MODE_ENCODER_TMDS,
"HDMI %c", port_name(port));
- intel_encoder->hotplug = intel_encoder_hotplug;
+ intel_encoder->hotplug = intel_hdmi_hotplug;
intel_encoder->compute_config = intel_hdmi_compute_config;
if (HAS_PCH_SPLIT(dev_priv)) {
intel_encoder->disable = pch_disable_hdmi;
#include "display/intel_audio.h"
#include "display/intel_bw.h"
#include "display/intel_cdclk.h"
+ #include "display/intel_display_types.h"
#include "display/intel_dp.h"
#include "display/intel_fbdev.h"
#include "display/intel_gmbus.h"
#include "gem/i915_gem_context.h"
#include "gem/i915_gem_ioctls.h"
+ #include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
- #include "gt/intel_reset.h"
- #include "gt/intel_workarounds.h"
#include "i915_debugfs.h"
#include "i915_drv.h"
#include "i915_irq.h"
- #include "i915_pmu.h"
+ #include "i915_memcpy.h"
+ #include "i915_perf.h"
#include "i915_query.h"
+ #include "i915_suspend.h"
+ #include "i915_sysfs.h"
#include "i915_trace.h"
#include "i915_vgpu.h"
#include "intel_csr.h"
- #include "intel_drv.h"
#include "intel_pm.h"
- #include "intel_uc.h"
static struct drm_driver driver;
- #if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
- static unsigned int i915_load_fail_count;
-
- bool __i915_inject_load_failure(const char *func, int line)
- {
- if (i915_load_fail_count >= i915_modparams.inject_load_failure)
- return false;
-
- if (++i915_load_fail_count == i915_modparams.inject_load_failure) {
- DRM_INFO("Injecting failure at checkpoint %u [%s:%d]\n",
- i915_modparams.inject_load_failure, func, line);
- i915_modparams.inject_load_failure = 0;
- return true;
- }
-
- return false;
- }
-
- bool i915_error_injected(void)
- {
- return i915_load_fail_count && !i915_modparams.inject_load_failure;
- }
-
- #endif
-
- #define FDO_BUG_URL "https://bugs.freedesktop.org/enter_bug.cgi?product=DRI"
- #define FDO_BUG_MSG "Please file a bug at " FDO_BUG_URL " against DRM/Intel " \
- "providing the dmesg log by booting with drm.debug=0xf"
-
- void
- __i915_printk(struct drm_i915_private *dev_priv, const char *level,
- const char *fmt, ...)
- {
- static bool shown_bug_once;
- struct device *kdev = dev_priv->drm.dev;
- bool is_error = level[1] <= KERN_ERR[1];
- bool is_debug = level[1] == KERN_DEBUG[1];
- struct va_format vaf;
- va_list args;
-
- if (is_debug && !(drm_debug & DRM_UT_DRIVER))
- return;
-
- va_start(args, fmt);
-
- vaf.fmt = fmt;
- vaf.va = &args;
-
- if (is_error)
- dev_printk(level, kdev, "%pV", &vaf);
- else
- dev_printk(level, kdev, "[" DRM_NAME ":%ps] %pV",
- __builtin_return_address(0), &vaf);
-
- va_end(args);
-
- if (is_error && !shown_bug_once) {
- /*
- * Ask the user to file a bug report for the error, except
- * if they may have caused the bug by fiddling with unsafe
- * module parameters.
- */
- if (!test_taint(TAINT_USER))
- dev_notice(kdev, "%s", FDO_BUG_MSG);
- shown_bug_once = true;
- }
- }
-
- /* Map PCH device id to PCH type, or PCH_NONE if unknown. */
- static enum intel_pch
- intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
- {
- switch (id) {
- case INTEL_PCH_IBX_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
- WARN_ON(!IS_GEN(dev_priv, 5));
- return PCH_IBX;
- case INTEL_PCH_CPT_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found CougarPoint PCH\n");
- WARN_ON(!IS_GEN(dev_priv, 6) && !IS_IVYBRIDGE(dev_priv));
- return PCH_CPT;
- case INTEL_PCH_PPT_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found PantherPoint PCH\n");
- WARN_ON(!IS_GEN(dev_priv, 6) && !IS_IVYBRIDGE(dev_priv));
- /* PantherPoint is CPT compatible */
- return PCH_CPT;
- case INTEL_PCH_LPT_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found LynxPoint PCH\n");
- WARN_ON(!IS_HASWELL(dev_priv) && !IS_BROADWELL(dev_priv));
- WARN_ON(IS_HSW_ULT(dev_priv) || IS_BDW_ULT(dev_priv));
- return PCH_LPT;
- case INTEL_PCH_LPT_LP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found LynxPoint LP PCH\n");
- WARN_ON(!IS_HASWELL(dev_priv) && !IS_BROADWELL(dev_priv));
- WARN_ON(!IS_HSW_ULT(dev_priv) && !IS_BDW_ULT(dev_priv));
- return PCH_LPT;
- case INTEL_PCH_WPT_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found WildcatPoint PCH\n");
- WARN_ON(!IS_HASWELL(dev_priv) && !IS_BROADWELL(dev_priv));
- WARN_ON(IS_HSW_ULT(dev_priv) || IS_BDW_ULT(dev_priv));
- /* WildcatPoint is LPT compatible */
- return PCH_LPT;
- case INTEL_PCH_WPT_LP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found WildcatPoint LP PCH\n");
- WARN_ON(!IS_HASWELL(dev_priv) && !IS_BROADWELL(dev_priv));
- WARN_ON(!IS_HSW_ULT(dev_priv) && !IS_BDW_ULT(dev_priv));
- /* WildcatPoint is LPT compatible */
- return PCH_LPT;
- case INTEL_PCH_SPT_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found SunrisePoint PCH\n");
- WARN_ON(!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv));
- return PCH_SPT;
- case INTEL_PCH_SPT_LP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found SunrisePoint LP PCH\n");
- WARN_ON(!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv));
- return PCH_SPT;
- case INTEL_PCH_KBP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Kaby Lake PCH (KBP)\n");
- WARN_ON(!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv) &&
- !IS_COFFEELAKE(dev_priv));
- /* KBP is SPT compatible */
- return PCH_SPT;
- case INTEL_PCH_CNP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Cannon Lake PCH (CNP)\n");
- WARN_ON(!IS_CANNONLAKE(dev_priv) && !IS_COFFEELAKE(dev_priv));
- return PCH_CNP;
- case INTEL_PCH_CNP_LP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Cannon Lake LP PCH (CNP-LP)\n");
- WARN_ON(!IS_CANNONLAKE(dev_priv) && !IS_COFFEELAKE(dev_priv));
- return PCH_CNP;
- case INTEL_PCH_CMP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Comet Lake PCH (CMP)\n");
- WARN_ON(!IS_COFFEELAKE(dev_priv));
- /* CometPoint is CNP Compatible */
- return PCH_CNP;
- case INTEL_PCH_ICP_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Ice Lake PCH\n");
- WARN_ON(!IS_ICELAKE(dev_priv));
- return PCH_ICP;
- case INTEL_PCH_MCC_DEVICE_ID_TYPE:
- DRM_DEBUG_KMS("Found Mule Creek Canyon PCH\n");
- WARN_ON(!IS_ELKHARTLAKE(dev_priv));
- return PCH_MCC;
- default:
- return PCH_NONE;
- }
- }
-
- static bool intel_is_virt_pch(unsigned short id,
- unsigned short svendor, unsigned short sdevice)
- {
- return (id == INTEL_PCH_P2X_DEVICE_ID_TYPE ||
- id == INTEL_PCH_P3X_DEVICE_ID_TYPE ||
- (id == INTEL_PCH_QEMU_DEVICE_ID_TYPE &&
- svendor == PCI_SUBVENDOR_ID_REDHAT_QUMRANET &&
- sdevice == PCI_SUBDEVICE_ID_QEMU));
- }
-
- static unsigned short
- intel_virt_detect_pch(const struct drm_i915_private *dev_priv)
- {
- unsigned short id = 0;
-
- /*
- * In a virtualized passthrough environment we can be in a
- * setup where the ISA bridge is not able to be passed through.
- * In this case, a south bridge can be emulated and we have to
- * make an educated guess as to which PCH is really there.
- */
-
- if (IS_ELKHARTLAKE(dev_priv))
- id = INTEL_PCH_MCC_DEVICE_ID_TYPE;
- else if (IS_ICELAKE(dev_priv))
- id = INTEL_PCH_ICP_DEVICE_ID_TYPE;
- else if (IS_CANNONLAKE(dev_priv) || IS_COFFEELAKE(dev_priv))
- id = INTEL_PCH_CNP_DEVICE_ID_TYPE;
- else if (IS_KABYLAKE(dev_priv) || IS_SKYLAKE(dev_priv))
- id = INTEL_PCH_SPT_DEVICE_ID_TYPE;
- else if (IS_HSW_ULT(dev_priv) || IS_BDW_ULT(dev_priv))
- id = INTEL_PCH_LPT_LP_DEVICE_ID_TYPE;
- else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
- id = INTEL_PCH_LPT_DEVICE_ID_TYPE;
- else if (IS_GEN(dev_priv, 6) || IS_IVYBRIDGE(dev_priv))
- id = INTEL_PCH_CPT_DEVICE_ID_TYPE;
- else if (IS_GEN(dev_priv, 5))
- id = INTEL_PCH_IBX_DEVICE_ID_TYPE;
-
- if (id)
- DRM_DEBUG_KMS("Assuming PCH ID %04x\n", id);
- else
- DRM_DEBUG_KMS("Assuming no PCH\n");
-
- return id;
- }
-
- static void intel_detect_pch(struct drm_i915_private *dev_priv)
- {
- struct pci_dev *pch = NULL;
-
- /*
- * The reason to probe ISA bridge instead of Dev31:Fun0 is to
- * make graphics device passthrough work easy for VMM, that only
- * need to expose ISA bridge to let driver know the real hardware
- * underneath. This is a requirement from virtualization team.
- *
- * In some virtualized environments (e.g. XEN), there is irrelevant
- * ISA bridge in the system. To work reliably, we should scan trhough
- * all the ISA bridge devices and check for the first match, instead
- * of only checking the first one.
- */
- while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) {
- unsigned short id;
- enum intel_pch pch_type;
-
- if (pch->vendor != PCI_VENDOR_ID_INTEL)
- continue;
-
- id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
-
- pch_type = intel_pch_type(dev_priv, id);
- if (pch_type != PCH_NONE) {
- dev_priv->pch_type = pch_type;
- dev_priv->pch_id = id;
- break;
- } else if (intel_is_virt_pch(id, pch->subsystem_vendor,
- pch->subsystem_device)) {
- id = intel_virt_detect_pch(dev_priv);
- pch_type = intel_pch_type(dev_priv, id);
-
- /* Sanity check virtual PCH id */
- if (WARN_ON(id && pch_type == PCH_NONE))
- id = 0;
-
- dev_priv->pch_type = pch_type;
- dev_priv->pch_id = id;
- break;
- }
- }
-
- /*
- * Use PCH_NOP (PCH but no South Display) for PCH platforms without
- * display.
- */
- if (pch && !HAS_DISPLAY(dev_priv)) {
- DRM_DEBUG_KMS("Display disabled, reverting to NOP PCH\n");
- dev_priv->pch_type = PCH_NOP;
- dev_priv->pch_id = 0;
- }
-
- if (!pch)
- DRM_DEBUG_KMS("No PCH found.\n");
-
- pci_dev_put(pch);
- }
-
- static int i915_getparam_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
- {
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct pci_dev *pdev = dev_priv->drm.pdev;
- const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
- drm_i915_getparam_t *param = data;
- int value;
-
- switch (param->param) {
- case I915_PARAM_IRQ_ACTIVE:
- case I915_PARAM_ALLOW_BATCHBUFFER:
- case I915_PARAM_LAST_DISPATCH:
- case I915_PARAM_HAS_EXEC_CONSTANTS:
- /* Reject all old ums/dri params. */
- return -ENODEV;
- case I915_PARAM_CHIPSET_ID:
- value = pdev->device;
- break;
- case I915_PARAM_REVISION:
- value = pdev->revision;
- break;
- case I915_PARAM_NUM_FENCES_AVAIL:
- value = dev_priv->ggtt.num_fences;
- break;
- case I915_PARAM_HAS_OVERLAY:
- value = dev_priv->overlay ? 1 : 0;
- break;
- case I915_PARAM_HAS_BSD:
- value = !!dev_priv->engine[VCS0];
- break;
- case I915_PARAM_HAS_BLT:
- value = !!dev_priv->engine[BCS0];
- break;
- case I915_PARAM_HAS_VEBOX:
- value = !!dev_priv->engine[VECS0];
- break;
- case I915_PARAM_HAS_BSD2:
- value = !!dev_priv->engine[VCS1];
- break;
- case I915_PARAM_HAS_LLC:
- value = HAS_LLC(dev_priv);
- break;
- case I915_PARAM_HAS_WT:
- value = HAS_WT(dev_priv);
- break;
- case I915_PARAM_HAS_ALIASING_PPGTT:
- value = INTEL_PPGTT(dev_priv);
- break;
- case I915_PARAM_HAS_SEMAPHORES:
- value = !!(dev_priv->caps.scheduler & I915_SCHEDULER_CAP_SEMAPHORES);
- break;
- case I915_PARAM_HAS_SECURE_BATCHES:
- value = capable(CAP_SYS_ADMIN);
- break;
- case I915_PARAM_CMD_PARSER_VERSION:
- value = i915_cmd_parser_get_version(dev_priv);
- break;
- case I915_PARAM_SUBSLICE_TOTAL:
- value = intel_sseu_subslice_total(sseu);
- if (!value)
- return -ENODEV;
- break;
- case I915_PARAM_EU_TOTAL:
- value = sseu->eu_total;
- if (!value)
- return -ENODEV;
- break;
- case I915_PARAM_HAS_GPU_RESET:
- value = i915_modparams.enable_hangcheck &&
- intel_has_gpu_reset(dev_priv);
- if (value && intel_has_reset_engine(dev_priv))
- value = 2;
- break;
- case I915_PARAM_HAS_RESOURCE_STREAMER:
- value = 0;
- break;
- case I915_PARAM_HAS_POOLED_EU:
- value = HAS_POOLED_EU(dev_priv);
- break;
- case I915_PARAM_MIN_EU_IN_POOL:
- value = sseu->min_eu_in_pool;
- break;
- case I915_PARAM_HUC_STATUS:
- value = intel_huc_check_status(&dev_priv->huc);
- if (value < 0)
- return value;
- break;
- case I915_PARAM_MMAP_GTT_VERSION:
- /* Though we've started our numbering from 1, and so class all
- * earlier versions as 0, in effect their value is undefined as
- * the ioctl will report EINVAL for the unknown param!
- */
- value = i915_gem_mmap_gtt_version();
- break;
- case I915_PARAM_HAS_SCHEDULER:
- value = dev_priv->caps.scheduler;
- break;
-
- case I915_PARAM_MMAP_VERSION:
- /* Remember to bump this if the version changes! */
- case I915_PARAM_HAS_GEM:
- case I915_PARAM_HAS_PAGEFLIPPING:
- case I915_PARAM_HAS_EXECBUF2: /* depends on GEM */
- case I915_PARAM_HAS_RELAXED_FENCING:
- case I915_PARAM_HAS_COHERENT_RINGS:
- case I915_PARAM_HAS_RELAXED_DELTA:
- case I915_PARAM_HAS_GEN7_SOL_RESET:
- case I915_PARAM_HAS_WAIT_TIMEOUT:
- case I915_PARAM_HAS_PRIME_VMAP_FLUSH:
- case I915_PARAM_HAS_PINNED_BATCHES:
- case I915_PARAM_HAS_EXEC_NO_RELOC:
- case I915_PARAM_HAS_EXEC_HANDLE_LUT:
- case I915_PARAM_HAS_COHERENT_PHYS_GTT:
- case I915_PARAM_HAS_EXEC_SOFTPIN:
- case I915_PARAM_HAS_EXEC_ASYNC:
- case I915_PARAM_HAS_EXEC_FENCE:
- case I915_PARAM_HAS_EXEC_CAPTURE:
- case I915_PARAM_HAS_EXEC_BATCH_FIRST:
- case I915_PARAM_HAS_EXEC_FENCE_ARRAY:
- case I915_PARAM_HAS_EXEC_SUBMIT_FENCE:
- /* For the time being all of these are always true;
- * if some supported hardware does not have one of these
- * features this value needs to be provided from
- * INTEL_INFO(), a feature macro, or similar.
- */
- value = 1;
- break;
- case I915_PARAM_HAS_CONTEXT_ISOLATION:
- value = intel_engines_has_context_isolation(dev_priv);
- break;
- case I915_PARAM_SLICE_MASK:
- value = sseu->slice_mask;
- if (!value)
- return -ENODEV;
- break;
- case I915_PARAM_SUBSLICE_MASK:
- value = sseu->subslice_mask[0];
- if (!value)
- return -ENODEV;
- break;
- case I915_PARAM_CS_TIMESTAMP_FREQUENCY:
- value = 1000 * RUNTIME_INFO(dev_priv)->cs_timestamp_frequency_khz;
- break;
- case I915_PARAM_MMAP_GTT_COHERENT:
- value = INTEL_INFO(dev_priv)->has_coherent_ggtt;
- break;
- default:
- DRM_DEBUG("Unknown parameter %d\n", param->param);
- return -EINVAL;
- }
-
- if (put_user(value, param->value))
- return -EFAULT;
-
- return 0;
- }
+ struct vlv_s0ix_state {
+ /* GAM */
+ u32 wr_watermark;
+ u32 gfx_prio_ctrl;
+ u32 arb_mode;
+ u32 gfx_pend_tlb0;
+ u32 gfx_pend_tlb1;
+ u32 lra_limits[GEN7_LRA_LIMITS_REG_NUM];
+ u32 media_max_req_count;
+ u32 gfx_max_req_count;
+ u32 render_hwsp;
+ u32 ecochk;
+ u32 bsd_hwsp;
+ u32 blt_hwsp;
+ u32 tlb_rd_addr;
+
+ /* MBC */
+ u32 g3dctl;
+ u32 gsckgctl;
+ u32 mbctl;
+
+ /* GCP */
+ u32 ucgctl1;
+ u32 ucgctl3;
+ u32 rcgctl1;
+ u32 rcgctl2;
+ u32 rstctl;
+ u32 misccpctl;
+
+ /* GPM */
+ u32 gfxpause;
+ u32 rpdeuhwtc;
+ u32 rpdeuc;
+ u32 ecobus;
+ u32 pwrdwnupctl;
+ u32 rp_down_timeout;
+ u32 rp_deucsw;
+ u32 rcubmabdtmr;
+ u32 rcedata;
+ u32 spare2gh;
+
+ /* Display 1 CZ domain */
+ u32 gt_imr;
+ u32 gt_ier;
+ u32 pm_imr;
+ u32 pm_ier;
+ u32 gt_scratch[GEN7_GT_SCRATCH_REG_NUM];
+
+ /* GT SA CZ domain */
+ u32 tilectl;
+ u32 gt_fifoctl;
+ u32 gtlc_wake_ctrl;
+ u32 gtlc_survive;
+ u32 pmwgicz;
+
+ /* Display 2 CZ domain */
+ u32 gu_ctl0;
+ u32 gu_ctl1;
+ u32 pcbr;
+ u32 clock_gate_dis2;
+ };
static int i915_get_bridge_dev(struct drm_i915_private *dev_priv)
{
return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}
- static int i915_resume_switcheroo(struct drm_device *dev);
- static int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state);
+ static int i915_resume_switcheroo(struct drm_i915_private *i915);
+ static int i915_suspend_switcheroo(struct drm_i915_private *i915,
+ pm_message_t state);
static void i915_switcheroo_set_state(struct pci_dev *pdev, enum vga_switcheroo_state state)
{
- struct drm_device *dev = pci_get_drvdata(pdev);
+ struct drm_i915_private *i915 = pdev_to_i915(pdev);
pm_message_t pmm = { .event = PM_EVENT_SUSPEND };
+ if (!i915) {
+ dev_err(&pdev->dev, "DRM not initialized, aborting switch.\n");
+ return;
+ }
+
if (state == VGA_SWITCHEROO_ON) {
pr_info("switched on\n");
- dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
+ i915->drm.switch_power_state = DRM_SWITCH_POWER_CHANGING;
/* i915 resume handler doesn't set to D0 */
pci_set_power_state(pdev, PCI_D0);
- i915_resume_switcheroo(dev);
- dev->switch_power_state = DRM_SWITCH_POWER_ON;
+ i915_resume_switcheroo(i915);
+ i915->drm.switch_power_state = DRM_SWITCH_POWER_ON;
} else {
pr_info("switched off\n");
- dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
- i915_suspend_switcheroo(dev, pmm);
- dev->switch_power_state = DRM_SWITCH_POWER_OFF;
+ i915->drm.switch_power_state = DRM_SWITCH_POWER_CHANGING;
+ i915_suspend_switcheroo(i915, pmm);
+ i915->drm.switch_power_state = DRM_SWITCH_POWER_OFF;
}
}
static bool i915_switcheroo_can_switch(struct pci_dev *pdev)
{
- struct drm_device *dev = pci_get_drvdata(pdev);
+ struct drm_i915_private *i915 = pdev_to_i915(pdev);
/*
* FIXME: open_count is protected by drm_global_mutex but that would lead to
* locking inversion with the driver load path. And the access here is
* completely racy anyway. So don't bother with locking for now.
*/
- return dev->open_count == 0;
+ return i915 && i915->drm.open_count == 0;
}
static const struct vga_switcheroo_client_ops i915_switcheroo_ops = {
.can_switch = i915_switcheroo_can_switch,
};
- static int i915_load_modeset_init(struct drm_device *dev)
+ static int i915_driver_modeset_probe(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = dev_priv->drm.pdev;
int ret;
- if (i915_inject_load_failure())
+ if (i915_inject_probe_failure(dev_priv))
return -ENODEV;
if (HAS_DISPLAY(dev_priv)) {
cleanup_gem:
i915_gem_suspend(dev_priv);
- i915_gem_fini_hw(dev_priv);
- i915_gem_fini(dev_priv);
+ i915_gem_driver_remove(dev_priv);
+ i915_gem_driver_release(dev_priv);
cleanup_modeset:
- intel_modeset_cleanup(dev);
+ intel_modeset_driver_remove(dev);
cleanup_irq:
- drm_irq_uninstall(dev);
+ intel_irq_uninstall(dev_priv);
intel_gmbus_teardown(dev_priv);
cleanup_csr:
intel_csr_ucode_fini(dev_priv);
- intel_power_domains_fini_hw(dev_priv);
+ intel_power_domains_driver_remove(dev_priv);
vga_switcheroo_unregister_client(pdev);
cleanup_vga_client:
vga_client_register(pdev, NULL, NULL, NULL);
return ret;
}
-static int i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
-{
- struct apertures_struct *ap;
- struct pci_dev *pdev = dev_priv->drm.pdev;
- struct i915_ggtt *ggtt = &dev_priv->ggtt;
- bool primary;
- int ret;
-
- ap = alloc_apertures(1);
- if (!ap)
- return -ENOMEM;
-
- ap->ranges[0].base = ggtt->gmadr.start;
- ap->ranges[0].size = ggtt->mappable_end;
-
- primary =
- pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-
- ret = drm_fb_helper_remove_conflicting_framebuffers(ap, "inteldrmfb", primary);
-
- kfree(ap);
-
- return ret;
-}
-
static void intel_init_dpio(struct drm_i915_private *dev_priv)
{
/*
return -ENOMEM;
}
- static void i915_engines_cleanup(struct drm_i915_private *i915)
- {
- struct intel_engine_cs *engine;
- enum intel_engine_id id;
-
- for_each_engine(engine, i915, id)
- kfree(engine);
- }
-
static void i915_workqueues_cleanup(struct drm_i915_private *dev_priv)
{
destroy_workqueue(dev_priv->hotplug.dp_wq);
}
}
+ static int vlv_alloc_s0ix_state(struct drm_i915_private *i915)
+ {
+ if (!IS_VALLEYVIEW(i915))
+ return 0;
+
+ /* we write all the values in the struct, so no need to zero it out */
+ i915->vlv_s0ix_state = kmalloc(sizeof(*i915->vlv_s0ix_state),
+ GFP_KERNEL);
+ if (!i915->vlv_s0ix_state)
+ return -ENOMEM;
+
+ return 0;
+ }
+
+ static void vlv_free_s0ix_state(struct drm_i915_private *i915)
+ {
+ if (!i915->vlv_s0ix_state)
+ return;
+
+ kfree(i915->vlv_s0ix_state);
+ i915->vlv_s0ix_state = NULL;
+ }
+
/**
- * i915_driver_init_early - setup state not requiring device access
+ * i915_driver_early_probe - setup state not requiring device access
* @dev_priv: device private
*
* Initialize everything that is a "SW-only" state, that is state not
* system memory allocation, setting up device specific attributes and
* function hooks not requiring accessing the device.
*/
- static int i915_driver_init_early(struct drm_i915_private *dev_priv)
+ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
{
int ret = 0;
- if (i915_inject_load_failure())
+ if (i915_inject_probe_failure(dev_priv))
return -ENODEV;
intel_device_info_subplatform_init(dev_priv);
- intel_uncore_init_early(&dev_priv->uncore);
+ intel_uncore_mmio_debug_init_early(&dev_priv->mmio_debug);
+ intel_uncore_init_early(&dev_priv->uncore, dev_priv);
spin_lock_init(&dev_priv->irq_lock);
spin_lock_init(&dev_priv->gpu_error.lock);
ret = i915_workqueues_init(dev_priv);
if (ret < 0)
- goto err_engines;
+ return ret;
- ret = i915_gem_init_early(dev_priv);
+ ret = vlv_alloc_s0ix_state(dev_priv);
if (ret < 0)
goto err_workqueues;
+ intel_wopcm_init_early(&dev_priv->wopcm);
+
+ intel_gt_init_early(&dev_priv->gt, dev_priv);
+
+ ret = i915_gem_init_early(dev_priv);
+ if (ret < 0)
+ goto err_gt;
+
/* This must be called before any calls to HAS_PCH_* */
intel_detect_pch(dev_priv);
- intel_wopcm_init_early(&dev_priv->wopcm);
- intel_uc_init_early(dev_priv);
intel_pm_setup(dev_priv);
intel_init_dpio(dev_priv);
ret = intel_power_domains_init(dev_priv);
if (ret < 0)
- goto err_uc;
+ goto err_gem;
intel_irq_init(dev_priv);
- intel_hangcheck_init(dev_priv);
intel_init_display_hooks(dev_priv);
intel_init_clock_gating_hooks(dev_priv);
intel_init_audio_hooks(dev_priv);
return 0;
- err_uc:
- intel_uc_cleanup_early(dev_priv);
+ err_gem:
i915_gem_cleanup_early(dev_priv);
+ err_gt:
+ intel_gt_driver_late_release(&dev_priv->gt);
+ vlv_free_s0ix_state(dev_priv);
err_workqueues:
i915_workqueues_cleanup(dev_priv);
- err_engines:
- i915_engines_cleanup(dev_priv);
return ret;
}
/**
- * i915_driver_cleanup_early - cleanup the setup done in i915_driver_init_early()
+ * i915_driver_late_release - cleanup the setup done in
+ * i915_driver_early_probe()
* @dev_priv: device private
*/
- static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
+ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
{
intel_irq_fini(dev_priv);
intel_power_domains_cleanup(dev_priv);
- intel_uc_cleanup_early(dev_priv);
i915_gem_cleanup_early(dev_priv);
+ intel_gt_driver_late_release(&dev_priv->gt);
+ vlv_free_s0ix_state(dev_priv);
i915_workqueues_cleanup(dev_priv);
- i915_engines_cleanup(dev_priv);
pm_qos_remove_request(&dev_priv->sb_qos);
mutex_destroy(&dev_priv->sb_lock);
}
/**
- * i915_driver_init_mmio - setup device MMIO
+ * i915_driver_mmio_probe - setup device MMIO
* @dev_priv: device private
*
* Setup minimal device state necessary for MMIO accesses later in the
* side effects or exposing the driver via kernel internal or user space
* interfaces.
*/
- static int i915_driver_init_mmio(struct drm_i915_private *dev_priv)
+ static int i915_driver_mmio_probe(struct drm_i915_private *dev_priv)
{
int ret;
- if (i915_inject_load_failure())
+ if (i915_inject_probe_failure(dev_priv))
return -ENODEV;
if (i915_get_bridge_dev(dev_priv))
intel_uncore_prune_mmio_domains(&dev_priv->uncore);
- intel_uc_init_mmio(dev_priv);
+ intel_uc_init_mmio(&dev_priv->gt.uc);
ret = intel_engines_init_mmio(dev_priv);
if (ret)
}
/**
- * i915_driver_cleanup_mmio - cleanup the setup done in i915_driver_init_mmio()
+ * i915_driver_mmio_release - cleanup the setup done in i915_driver_mmio_probe()
* @dev_priv: device private
*/
- static void i915_driver_cleanup_mmio(struct drm_i915_private *dev_priv)
+ static void i915_driver_mmio_release(struct drm_i915_private *dev_priv)
{
+ intel_engines_cleanup(dev_priv);
intel_teardown_mchbar(dev_priv);
intel_uncore_fini_mmio(&dev_priv->uncore);
pci_dev_put(dev_priv->bridge_dev);
dev_priv->edram_size_mb =
gen9_edram_size_mb(dev_priv, edram_cap);
- DRM_INFO("Found %uMB of eDRAM\n", dev_priv->edram_size_mb);
+ dev_info(dev_priv->drm.dev,
+ "Found %uMB of eDRAM\n", dev_priv->edram_size_mb);
}
/**
- * i915_driver_init_hw - setup state requiring device access
+ * i915_driver_hw_probe - setup state requiring device access
* @dev_priv: device private
*
* Setup state that requires accessing the device, but doesn't require
* exposing the driver via kernel internal or userspace interfaces.
*/
- static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
+ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
{
struct pci_dev *pdev = dev_priv->drm.pdev;
int ret;
- if (i915_inject_load_failure())
+ if (i915_inject_probe_failure(dev_priv))
return -ENODEV;
intel_device_info_runtime_init(dev_priv);
if (ret)
goto err_perf;
- /*
- * WARNING: Apparently we must kick fbdev drivers before vgacon,
- * otherwise the vga fbdev driver falls over.
- */
- ret = i915_kick_out_firmware_fb(dev_priv);
- if (ret) {
- DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
- goto err_ggtt;
- }
-
- ret = vga_remove_vgacon(pdev);
- if (ret) {
- DRM_ERROR("failed to remove conflicting VGA console\n");
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "inteldrmfb");
+ if (ret)
goto err_ggtt;
- }
ret = i915_ggtt_init_hw(dev_priv);
if (ret)
goto err_ggtt;
+ intel_gt_init_hw(dev_priv);
+
ret = i915_ggtt_enable_hw(dev_priv);
if (ret) {
DRM_ERROR("failed to enable GGTT\n");
pci_set_master(pdev);
+ /*
+ * We don't have a max segment size, so set it to the max so sg's
+ * debugging layer doesn't complain
+ */
+ dma_set_max_seg_size(&pdev->dev, UINT_MAX);
+
/* overlay on gen2 is broken and can't address above 1G */
if (IS_GEN(dev_priv, 2)) {
ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(30));
pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
PM_QOS_DEFAULT_VALUE);
- intel_uncore_sanitize(dev_priv);
+ /* BIOS often leaves RC6 enabled, but disable it for hw init */
+ intel_sanitize_gt_powersave(dev_priv);
intel_gt_init_workarounds(dev_priv);
pci_disable_msi(pdev);
pm_qos_remove_request(&dev_priv->pm_qos);
err_ggtt:
- i915_ggtt_cleanup_hw(dev_priv);
+ i915_ggtt_driver_release(dev_priv);
err_perf:
i915_perf_fini(dev_priv);
return ret;
}
/**
- * i915_driver_cleanup_hw - cleanup the setup done in i915_driver_init_hw()
+ * i915_driver_hw_remove - cleanup the setup done in i915_driver_hw_probe()
* @dev_priv: device private
*/
- static void i915_driver_cleanup_hw(struct drm_i915_private *dev_priv)
+ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
{
struct pci_dev *pdev = dev_priv->drm.pdev;
{
struct drm_device *dev = &dev_priv->drm;
- i915_gem_shrinker_register(dev_priv);
+ i915_gem_driver_register(dev_priv);
i915_pmu_register(dev_priv);
/*
i915_teardown_sysfs(dev_priv);
drm_dev_unplug(&dev_priv->drm);
- i915_gem_shrinker_unregister(dev_priv);
+ i915_gem_driver_unregister(dev_priv);
}
static void i915_welcome_messages(struct drm_i915_private *dev_priv)
return ERR_PTR(err);
}
- i915->drm.pdev = pdev;
i915->drm.dev_private = i915;
- pci_set_drvdata(pdev, &i915->drm);
+
+ i915->drm.pdev = pdev;
+ pci_set_drvdata(pdev, i915);
/* Setup the write-once "constant" device info */
device_info = mkwrite_device_info(i915);
}
/**
- * i915_driver_load - setup chip and create an initial config
+ * i915_driver_probe - setup chip and create an initial config
* @pdev: PCI device
* @ent: matching PCI ID entry
*
- * The driver load routine has to do several things:
+ * The driver probe routine has to do several things:
* - drive output discovery via intel_modeset_init()
* - initialize the memory manager
* - allocate initial config memory
* - setup the DRM framebuffer with the allocated memory
*/
- int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
+ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
const struct intel_device_info *match_info =
(struct intel_device_info *)ent->driver_data;
if (ret)
goto out_fini;
- ret = i915_driver_init_early(dev_priv);
+ ret = i915_driver_early_probe(dev_priv);
if (ret < 0)
goto out_pci_disable;
disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
- ret = i915_driver_init_mmio(dev_priv);
+ i915_detect_vgpu(dev_priv);
+
+ ret = i915_driver_mmio_probe(dev_priv);
if (ret < 0)
goto out_runtime_pm_put;
- ret = i915_driver_init_hw(dev_priv);
+ ret = i915_driver_hw_probe(dev_priv);
if (ret < 0)
goto out_cleanup_mmio;
- ret = i915_load_modeset_init(&dev_priv->drm);
+ ret = i915_driver_modeset_probe(&dev_priv->drm);
if (ret < 0)
goto out_cleanup_hw;
return 0;
out_cleanup_hw:
- i915_driver_cleanup_hw(dev_priv);
- i915_ggtt_cleanup_hw(dev_priv);
+ i915_driver_hw_remove(dev_priv);
+ i915_ggtt_driver_release(dev_priv);
+
+ /* Paranoia: make sure we have disabled everything before we exit. */
+ intel_sanitize_gt_powersave(dev_priv);
out_cleanup_mmio:
- i915_driver_cleanup_mmio(dev_priv);
+ i915_driver_mmio_release(dev_priv);
out_runtime_pm_put:
enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
- i915_driver_cleanup_early(dev_priv);
+ i915_driver_late_release(dev_priv);
out_pci_disable:
pci_disable_device(pdev);
out_fini:
- i915_load_error(dev_priv, "Device initialization failed (%d)\n", ret);
+ i915_probe_error(dev_priv, "Device initialization failed (%d)\n", ret);
i915_driver_destroy(dev_priv);
return ret;
}
- void i915_driver_unload(struct drm_device *dev)
+ void i915_driver_remove(struct drm_i915_private *i915)
{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct pci_dev *pdev = dev_priv->drm.pdev;
+ struct pci_dev *pdev = i915->drm.pdev;
- disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
+ disable_rpm_wakeref_asserts(&i915->runtime_pm);
- i915_driver_unregister(dev_priv);
+ i915_driver_unregister(i915);
/*
* After unregistering the device to prevent any new users, cancel
* all in-flight requests so that we can quickly unbind the active
* resources.
*/
- i915_gem_set_wedged(dev_priv);
+ intel_gt_set_wedged(&i915->gt);
/* Flush any external code that still may be under the RCU lock */
synchronize_rcu();
- i915_gem_suspend(dev_priv);
+ i915_gem_suspend(i915);
- drm_atomic_helper_shutdown(dev);
+ drm_atomic_helper_shutdown(&i915->drm);
- intel_gvt_cleanup(dev_priv);
+ intel_gvt_driver_remove(i915);
- intel_modeset_cleanup(dev);
+ intel_modeset_driver_remove(&i915->drm);
- intel_bios_cleanup(dev_priv);
+ intel_bios_driver_remove(i915);
vga_switcheroo_unregister_client(pdev);
vga_client_register(pdev, NULL, NULL, NULL);
- intel_csr_ucode_fini(dev_priv);
+ intel_csr_ucode_fini(i915);
/* Free error state after interrupts are fully disabled. */
- cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
- i915_reset_error_state(dev_priv);
+ cancel_delayed_work_sync(&i915->gt.hangcheck.work);
+ i915_reset_error_state(i915);
- i915_gem_fini_hw(dev_priv);
+ i915_gem_driver_remove(i915);
- intel_power_domains_fini_hw(dev_priv);
+ intel_power_domains_driver_remove(i915);
- i915_driver_cleanup_hw(dev_priv);
+ i915_driver_hw_remove(i915);
- enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
+ enable_rpm_wakeref_asserts(&i915->runtime_pm);
}
static void i915_driver_release(struct drm_device *dev)
disable_rpm_wakeref_asserts(rpm);
- i915_gem_fini(dev_priv);
+ i915_gem_driver_release(dev_priv);
+
+ i915_ggtt_driver_release(dev_priv);
+
+ /* Paranoia: make sure we have disabled everything before we exit. */
+ intel_sanitize_gt_powersave(dev_priv);
- i915_ggtt_cleanup_hw(dev_priv);
- i915_driver_cleanup_mmio(dev_priv);
+ i915_driver_mmio_release(dev_priv);
enable_rpm_wakeref_asserts(rpm);
- intel_runtime_pm_cleanup(rpm);
+ intel_runtime_pm_driver_release(rpm);
- i915_driver_cleanup_early(dev_priv);
+ i915_driver_late_release(dev_priv);
i915_driver_destroy(dev_priv);
}
mutex_unlock(&dev->struct_mutex);
kfree(file_priv);
+
+ /* Catch up with all the deferred frees from "this" client */
+ i915_gem_flush_free_objects(to_i915(dev));
}
static void intel_suspend_encoders(struct drm_i915_private *dev_priv)
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = dev_priv->drm.pdev;
struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
- int ret;
+ int ret = 0;
disable_rpm_wakeref_asserts(rpm);
intel_power_domains_suspend(dev_priv,
get_suspend_mode(dev_priv, hibernation));
- ret = 0;
- if (INTEL_GEN(dev_priv) >= 11 || IS_GEN9_LP(dev_priv))
- bxt_enable_dc9(dev_priv);
- else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
- hsw_enable_pc8(dev_priv);
- else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
+ intel_display_power_suspend_late(dev_priv);
+
+ if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
ret = vlv_suspend_complete(dev_priv);
if (ret) {
out:
enable_rpm_wakeref_asserts(rpm);
- if (!dev_priv->uncore.user_forcewake.count)
- intel_runtime_pm_cleanup(rpm);
+ if (!dev_priv->uncore.user_forcewake_count)
+ intel_runtime_pm_driver_release(rpm);
return ret;
}
- static int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state)
+ static int
+ i915_suspend_switcheroo(struct drm_i915_private *i915, pm_message_t state)
{
int error;
- if (!dev) {
- DRM_ERROR("dev: %p\n", dev);
- DRM_ERROR("DRM not initialized, aborting suspend.\n");
- return -ENODEV;
- }
-
if (WARN_ON_ONCE(state.event != PM_EVENT_SUSPEND &&
state.event != PM_EVENT_FREEZE))
return -EINVAL;
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- error = i915_drm_suspend(dev);
+ error = i915_drm_suspend(&i915->drm);
if (error)
return error;
- return i915_drm_suspend_late(dev, false);
+ return i915_drm_suspend_late(&i915->drm, false);
}
static int i915_drm_resume(struct drm_device *dev)
intel_uncore_resume_early(&dev_priv->uncore);
- i915_check_and_clear_faults(dev_priv);
+ intel_gt_check_and_clear_faults(&dev_priv->gt);
- if (INTEL_GEN(dev_priv) >= 11 || IS_GEN9_LP(dev_priv)) {
- gen9_sanitize_dc_state(dev_priv);
- bxt_disable_dc9(dev_priv);
- } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
- hsw_disable_pc8(dev_priv);
- }
+ intel_display_power_resume_early(dev_priv);
- intel_uncore_sanitize(dev_priv);
+ intel_sanitize_gt_powersave(dev_priv);
intel_power_domains_resume(dev_priv);
- intel_gt_sanitize(dev_priv, true);
+ intel_gt_sanitize(&dev_priv->gt, true);
enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
return ret;
}
- static int i915_resume_switcheroo(struct drm_device *dev)
+ static int i915_resume_switcheroo(struct drm_i915_private *i915)
{
int ret;
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- ret = i915_drm_resume_early(dev);
+ ret = i915_drm_resume_early(&i915->drm);
if (ret)
return ret;
- return i915_drm_resume(dev);
+ return i915_drm_resume(&i915->drm);
}
static int i915_pm_prepare(struct device *kdev)
{
- struct pci_dev *pdev = to_pci_dev(kdev);
- struct drm_device *dev = pci_get_drvdata(pdev);
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
- if (!dev) {
+ if (!i915) {
dev_err(kdev, "DRM not initialized, aborting suspend.\n");
return -ENODEV;
}
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_prepare(dev);
+ return i915_drm_prepare(&i915->drm);
}
static int i915_pm_suspend(struct device *kdev)
{
- struct pci_dev *pdev = to_pci_dev(kdev);
- struct drm_device *dev = pci_get_drvdata(pdev);
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
- if (!dev) {
+ if (!i915) {
dev_err(kdev, "DRM not initialized, aborting suspend.\n");
return -ENODEV;
}
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_suspend(dev);
+ return i915_drm_suspend(&i915->drm);
}
static int i915_pm_suspend_late(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
/*
* We have a suspend ordering issue with the snd-hda driver also
* FIXME: This should be solved with a special hdmi sink device or
* similar so that power domains can be employed.
*/
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_suspend_late(dev, false);
+ return i915_drm_suspend_late(&i915->drm, false);
}
static int i915_pm_poweroff_late(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_suspend_late(dev, true);
+ return i915_drm_suspend_late(&i915->drm, true);
}
static int i915_pm_resume_early(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_resume_early(dev);
+ return i915_drm_resume_early(&i915->drm);
}
static int i915_pm_resume(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
- if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- return i915_drm_resume(dev);
+ return i915_drm_resume(&i915->drm);
}
/* freeze: before creating the hibernation_image */
static int i915_pm_freeze(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
int ret;
- if (dev->switch_power_state != DRM_SWITCH_POWER_OFF) {
- ret = i915_drm_suspend(dev);
+ if (i915->drm.switch_power_state != DRM_SWITCH_POWER_OFF) {
+ ret = i915_drm_suspend(&i915->drm);
if (ret)
return ret;
}
- ret = i915_gem_freeze(kdev_to_i915(kdev));
+ ret = i915_gem_freeze(i915);
if (ret)
return ret;
static int i915_pm_freeze_late(struct device *kdev)
{
- struct drm_device *dev = &kdev_to_i915(kdev)->drm;
+ struct drm_i915_private *i915 = kdev_to_i915(kdev);
int ret;
- if (dev->switch_power_state != DRM_SWITCH_POWER_OFF) {
- ret = i915_drm_suspend_late(dev, true);
+ if (i915->drm.switch_power_state != DRM_SWITCH_POWER_OFF) {
+ ret = i915_drm_suspend_late(&i915->drm, true);
if (ret)
return ret;
}
- ret = i915_gem_freeze_late(kdev_to_i915(kdev));
+ ret = i915_gem_freeze_late(i915);
if (ret)
return ret;
*/
static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
{
- struct vlv_s0ix_state *s = &dev_priv->vlv_s0ix_state;
+ struct vlv_s0ix_state *s = dev_priv->vlv_s0ix_state;
int i;
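+ /* Only VLV allocates s0ix state (see vlv_alloc_s0ix_state()), so this is NULL on CHV */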
+ if (!s)
+ return;
+
/* GAM 0x4000-0x4770 */
s->wr_watermark = I915_READ(GEN7_WR_WATERMARK);
s->gfx_prio_ctrl = I915_READ(GEN7_GFX_PRIO_CTRL);
static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
{
- struct vlv_s0ix_state *s = &dev_priv->vlv_s0ix_state;
+ struct vlv_s0ix_state *s = dev_priv->vlv_s0ix_state;
u32 val;
int i;
+ if (!s)
+ return;
+
/* GAM 0x4000-0x4770 */
I915_WRITE(GEN7_WR_WATERMARK, s->wr_watermark);
I915_WRITE(GEN7_GFX_PRIO_CTRL, s->gfx_prio_ctrl);
if (err)
goto err2;
- if (!IS_CHERRYVIEW(dev_priv))
- vlv_save_gunit_s0ix_state(dev_priv);
+ vlv_save_gunit_s0ix_state(dev_priv);
err = vlv_force_gfx_clock(dev_priv, false);
if (err)
*/
ret = vlv_force_gfx_clock(dev_priv, true);
- if (!IS_CHERRYVIEW(dev_priv))
- vlv_restore_gunit_s0ix_state(dev_priv);
+ vlv_restore_gunit_s0ix_state(dev_priv);
err = vlv_allow_gt_wake(dev_priv, true);
if (!ret)
static int intel_runtime_suspend(struct device *kdev)
{
- struct pci_dev *pdev = to_pci_dev(kdev);
- struct drm_device *dev = pci_get_drvdata(pdev);
- struct drm_i915_private *dev_priv = to_i915(dev);
+ struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
- int ret;
+ int ret = 0;
if (WARN_ON_ONCE(!(dev_priv->gt_pm.rc6.enabled && HAS_RC6(dev_priv))))
return -ENODEV;
*/
i915_gem_runtime_suspend(dev_priv);
- intel_uc_runtime_suspend(dev_priv);
+ intel_gt_runtime_suspend(&dev_priv->gt);
intel_runtime_pm_disable_interrupts(dev_priv);
intel_uncore_suspend(&dev_priv->uncore);
- ret = 0;
- if (INTEL_GEN(dev_priv) >= 11) {
- icl_display_core_uninit(dev_priv);
- bxt_enable_dc9(dev_priv);
- } else if (IS_GEN9_LP(dev_priv)) {
- bxt_display_core_uninit(dev_priv);
- bxt_enable_dc9(dev_priv);
- } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
- hsw_enable_pc8(dev_priv);
- } else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
+ intel_display_power_suspend(dev_priv);
+
+ if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
ret = vlv_suspend_complete(dev_priv);
- }
if (ret) {
DRM_ERROR("Runtime suspend failed, disabling it (%d)\n", ret);
intel_runtime_pm_enable_interrupts(dev_priv);
- intel_uc_resume(dev_priv);
+ intel_gt_runtime_resume(&dev_priv->gt);
- i915_gem_init_swizzling(dev_priv);
i915_gem_restore_fences(dev_priv);
enable_rpm_wakeref_asserts(rpm);
}
enable_rpm_wakeref_asserts(rpm);
- intel_runtime_pm_cleanup(rpm);
+ intel_runtime_pm_driver_release(rpm);
if (intel_uncore_arm_unclaimed_mmio_detection(&dev_priv->uncore))
DRM_ERROR("Unclaimed access detected prior to suspending\n");
static int intel_runtime_resume(struct device *kdev)
{
- struct pci_dev *pdev = to_pci_dev(kdev);
- struct drm_device *dev = pci_get_drvdata(pdev);
- struct drm_i915_private *dev_priv = to_i915(dev);
+ struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
int ret = 0;
if (intel_uncore_unclaimed_mmio(&dev_priv->uncore))
DRM_DEBUG_DRIVER("Unclaimed access during suspend, bios?\n");
- if (INTEL_GEN(dev_priv) >= 11) {
- bxt_disable_dc9(dev_priv);
- icl_display_core_init(dev_priv, true);
- if (dev_priv->csr.dmc_payload) {
- if (dev_priv->csr.allowed_dc_mask &
- DC_STATE_EN_UPTO_DC6)
- skl_enable_dc6(dev_priv);
- else if (dev_priv->csr.allowed_dc_mask &
- DC_STATE_EN_UPTO_DC5)
- gen9_enable_dc5(dev_priv);
- }
- } else if (IS_GEN9_LP(dev_priv)) {
- bxt_disable_dc9(dev_priv);
- bxt_display_core_init(dev_priv, true);
- if (dev_priv->csr.dmc_payload &&
- (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5))
- gen9_enable_dc5(dev_priv);
- } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
- hsw_disable_pc8(dev_priv);
- } else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
+ intel_display_power_resume(dev_priv);
+
+ if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
ret = vlv_resume_prepare(dev_priv, true);
- }
intel_uncore_runtime_resume(&dev_priv->uncore);
intel_runtime_pm_enable_interrupts(dev_priv);
- intel_uc_resume(dev_priv);
-
/*
* No point of rolling back things in case of an error, as the best
* we can do is to hope that things will still work (and disable RPM).
*/
- i915_gem_init_swizzling(dev_priv);
+ intel_gt_runtime_resume(&dev_priv->gt);
i915_gem_restore_fences(dev_priv);
/*
.gem_prime_export = i915_gem_prime_export,
.gem_prime_import = i915_gem_prime_import,
+ .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
+ .get_scanout_position = i915_get_crtc_scanoutpos,
+
.dumb_create = i915_gem_dumb_create,
.dumb_map_offset = i915_gem_mmap_gtt,
.ioctls = i915_ioctls,
struct device_driver *drv = &mcde_component_drivers[i]->driver;
struct device *p = NULL, *d;
- while ((d = bus_find_device(&platform_bus_type, p, drv,
- (void *)platform_bus_type.match))) {
+ while ((d = platform_find_device_by_driver(p, drv))) {
put_device(p);
component_match_add(dev, &match, mcde_compare_dev, d);
p = d;
}
if (!match) {
dev_err(dev, "no matching components\n");
- return -ENODEV;
+ ret = -ENODEV;
+ goto clk_disable;
}
if (IS_ERR(match)) {
dev_err(dev, "could not create component match\n");
const u32 loop_delay_us = 10; /* us */
const u8 *tx = msg->tx_buf;
u32 loop_counter;
- size_t txlen;
+ size_t txlen = msg->tx_len;
+ size_t rxlen = msg->rx_len;
u32 val;
int ret;
int i;
- txlen = msg->tx_len;
- if (txlen > 12) {
+ if (txlen > 16) {
+ dev_err(d->dev,
+ "dunno how to write more than 16 bytes yet\n");
+ return -EIO;
+ }
+ if (rxlen > 4) {
dev_err(d->dev,
- "dunno how to write more than 12 bytes yet\n");
+ "dunno how to read more than 4 bytes yet\n");
return -EIO;
}
dev_dbg(d->dev,
- "message to channel %d, %zd bytes",
- msg->channel,
- txlen);
+ "message to channel %d, write %zd bytes read %zd bytes\n",
+ msg->channel, txlen, rxlen);
/* Command "nature" */
if (MCDE_DSI_HOST_IS_READ(msg->type))
if (mipi_dsi_packet_format_is_long(msg->type))
val |= DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_LONGNOTSHORT;
val |= 0 << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_ID_SHIFT;
- /* Add one to the length for the MIPI DCS command */
- val |= txlen
- << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_SIZE_SHIFT;
+ val |= txlen << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_SIZE_SHIFT;
val |= DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_LP_EN;
val |= msg->type << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_HEAD_SHIFT;
writel(val, d->regs + DSI_DIRECT_CMD_MAIN_SETTINGS);
writel(1, d->regs + DSI_DIRECT_CMD_SEND);
loop_counter = 1000 * 1000 / loop_delay_us;
- while (!(readl(d->regs + DSI_DIRECT_CMD_STS) &
- DSI_DIRECT_CMD_STS_WRITE_COMPLETED)
- && --loop_counter)
- usleep_range(loop_delay_us, (loop_delay_us * 3) / 2);
-
- if (!loop_counter) {
- dev_err(d->dev, "DSI write timeout!\n");
- return -ETIME;
+ if (MCDE_DSI_HOST_IS_READ(msg->type)) {
+ /* Read command */
+ while (!(readl(d->regs + DSI_DIRECT_CMD_STS) &
+ (DSI_DIRECT_CMD_STS_READ_COMPLETED |
+ DSI_DIRECT_CMD_STS_READ_COMPLETED_WITH_ERR))
+ && --loop_counter)
+ usleep_range(loop_delay_us, (loop_delay_us * 3) / 2);
+ if (!loop_counter) {
+ dev_err(d->dev, "DSI read timeout!\n");
+ return -ETIME;
+ }
+ } else {
+ /* Writing only */
+ while (!(readl(d->regs + DSI_DIRECT_CMD_STS) &
+ DSI_DIRECT_CMD_STS_WRITE_COMPLETED)
+ && --loop_counter)
+ usleep_range(loop_delay_us, (loop_delay_us * 3) / 2);
+
+ if (!loop_counter) {
+ dev_err(d->dev, "DSI write timeout!\n");
+ return -ETIME;
+ }
}
val = readl(d->regs + DSI_DIRECT_CMD_STS);
+ if (val & DSI_DIRECT_CMD_STS_READ_COMPLETED_WITH_ERR) {
+ dev_err(d->dev, "read completed with error\n");
+ writel(1, d->regs + DSI_DIRECT_CMD_RD_INIT);
+ return -EIO;
+ }
if (val & DSI_DIRECT_CMD_STS_ACKNOWLEDGE_WITH_ERR_RECEIVED) {
val >>= DSI_DIRECT_CMD_STS_ACK_VAL_SHIFT;
dev_err(d->dev, "error during transmission: %04x\n",
if (!MCDE_DSI_HOST_IS_READ(msg->type)) {
/* Return number of bytes written */
- if (mipi_dsi_packet_format_is_long(msg->type))
- ret = 4 + txlen;
- else
- ret = 4;
+ ret = txlen;
} else {
/* OK this is a read command, get the response */
u32 rdsz;
rdsz = readl(d->regs + DSI_DIRECT_CMD_RD_PROPERTY);
rdsz &= DSI_DIRECT_CMD_RD_PROPERTY_RD_SIZE_MASK;
rddat = readl(d->regs + DSI_DIRECT_CMD_RDDAT);
- for (i = 0; i < 4 && i < rdsz; i++)
+ if (rdsz < rxlen) {
+ dev_err(d->dev, "read error, requested %zd got %d\n",
+ rxlen, rdsz);
+ return -EIO;
+ }
+ /* FIXME: read more than 4 bytes */
+ for (i = 0; i < 4 && i < rxlen; i++)
rx[i] = (rddat >> (i * 8)) & 0xff;
ret = rdsz;
}
}
}
if (panel) {
- bridge = drm_panel_bridge_add(panel,
- DRM_MODE_CONNECTOR_DSI);
+ bridge = drm_panel_bridge_add_typed(panel,
+ DRM_MODE_CONNECTOR_DSI);
if (IS_ERR(bridge)) {
dev_err(dev, "error adding panel bridge\n");
return PTR_ERR(bridge);
#ifndef __DPU_KMS_H__
#define __DPU_KMS_H__
+ #include <drm/drm_drv.h>
+
#include "msm_drv.h"
#include "msm_kms.h"
#include "msm_mmu.h"
*/
#define DPU_DEBUG(fmt, ...) \
do { \
- if (unlikely(drm_debug & DRM_UT_KMS)) \
+ if (drm_debug_enabled(DRM_UT_KMS)) \
DRM_DEBUG(fmt, ##__VA_ARGS__); \
else \
pr_debug(fmt, ##__VA_ARGS__); \
*/
#define DPU_DEBUG_DRIVER(fmt, ...) \
do { \
- if (unlikely(drm_debug & DRM_UT_DRIVER)) \
+ if (drm_debug_enabled(DRM_UT_DRIVER)) \
DRM_ERROR(fmt, ##__VA_ARGS__); \
else \
pr_debug(fmt, ##__VA_ARGS__); \
struct platform_device *pdev;
bool rpm_enabled;
struct dss_module_power mp;
+
+ /* reference count bandwidth requests, so we know when we can
+ * release bandwidth. Each atomic update increments, and frame-
+ * done event decrements. Additionally, for video mode, the
+ * reference is incremented when crtc is enabled, and decremented
+ * when disabled.
+ */
+ atomic_t bandwidth_ref;
};
struct vsync_info {
goto fail;
}
- encoder->bridge = hdmi->bridge;
+ ret = drm_bridge_attach(encoder, hdmi->bridge, NULL);
+ if (ret)
+ goto fail;
priv->bridges[priv->num_bridges++] = hdmi->bridge;
priv->connectors[priv->num_connectors++] = hdmi->connector;
{ "qcom,hdmi-tx-mux-lpm", true, 1, "HDMI_MUX_LPM" },
};
- static int msm_hdmi_get_gpio(struct device_node *of_node, const char *name)
- {
- int gpio;
-
- /* try with the gpio names as in the table (downstream bindings) */
- gpio = of_get_named_gpio(of_node, name, 0);
- if (gpio < 0) {
- char name2[32];
-
- /* try with the gpio names as in the upstream bindings */
- snprintf(name2, sizeof(name2), "%s-gpios", name);
- gpio = of_get_named_gpio(of_node, name2, 0);
- if (gpio < 0) {
- char name3[32];
-
- /*
- * try again after stripping out the "qcom,hdmi-tx"
- * prefix. This is mainly to match "hpd-gpios" used
- * in the upstream bindings
- */
- if (sscanf(name2, "qcom,hdmi-tx-%s", name3))
- gpio = of_get_named_gpio(of_node, name3, 0);
- }
-
- if (gpio < 0) {
- DBG("failed to get gpio: %s (%d)", name, gpio);
- gpio = -1;
- }
- }
- return gpio;
- }
-
/*
* HDMI audio codec callbacks
*/
hdmi_cfg->qfprom_mmio_name = "qfprom_physical";
for (i = 0; i < HDMI_MAX_NUM_GPIO; i++) {
- hdmi_cfg->gpios[i].num = msm_hdmi_get_gpio(of_node,
- msm_hdmi_gpio_pdata[i].name);
+ const char *name = msm_hdmi_gpio_pdata[i].name;
+ struct gpio_desc *gpiod;
+
+ /*
+ * We are fetching the GPIO lines "as is" since the connector
+ * code is enabling and disabling the lines. Until that point
+ * the power-on default value will be kept.
+ */
+ gpiod = devm_gpiod_get_optional(dev, name, GPIOD_ASIS);
+ /* This will catch e.g. -EPROBE_DEFER */
+ if (IS_ERR(gpiod))
+ return PTR_ERR(gpiod);
+ if (!gpiod) {
+ /* Try a second time, stripping down the name */
+ char name3[32];
+
+ /*
+ * Try again after stripping out the "qcom,hdmi-tx"
+ * prefix. This is mainly to match "hpd-gpios" used
+ * in the upstream bindings.
+ */
+ if (sscanf(name, "qcom,hdmi-tx-%s", name3))
+ gpiod = devm_gpiod_get_optional(dev, name3, GPIOD_ASIS);
+ if (IS_ERR(gpiod))
+ return PTR_ERR(gpiod);
+ if (!gpiod)
+ DBG("failed to get gpio: %s", name);
+ }
+ hdmi_cfg->gpios[i].gpiod = gpiod;
+ if (gpiod)
+ gpiod_set_consumer_name(gpiod, msm_hdmi_gpio_pdata[i].label);
hdmi_cfg->gpios[i].output = msm_hdmi_gpio_pdata[i].output;
hdmi_cfg->gpios[i].value = msm_hdmi_gpio_pdata[i].value;
- hdmi_cfg->gpios[i].label = msm_hdmi_gpio_pdata[i].label;
}
dev->platform_data = hdmi_cfg;
#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
+ #include <linux/gpio/consumer.h>
#include <linux/hdmi.h>
+ #include <drm/drm_bridge.h>
+
#include "msm_drv.h"
#include "hdmi.xml.h"
struct hdmi_platform_config;
struct hdmi_gpio_data {
- int num;
+ struct gpio_desc *gpiod;
bool output;
int value;
- const char *label;
};
struct hdmi_audio {
* Author: Ben Skeggs
*/
- #include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include "nouveau_drv.h"
nouveau_display(dev)->fini = nv04_display_fini;
/* Pre-nv50 doesn't support atomic, so don't expose the ioctls */
- dev->driver->driver_features &= ~DRIVER_ATOMIC;
+ dev->driver_features &= ~DRIVER_ATOMIC;
/* Request page flip completion event. */
if (drm->nvsw.client) {
list_for_each_entry_safe(connector, ct,
&dev->mode_config.connector_list, head) {
- if (!connector->encoder_ids[0]) {
+ if (!connector->possible_encoders) {
NV_WARN(drm, "%s has no encoders, removing\n",
connector->name);
connector->funcs->destroy(connector);
#include <linux/dma-mapping.h>
#include <linux/hdmi.h>
- #include <drm/drmP.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_dp_helper.h>
+ #include <drm/drm_edid.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_scdc_helper.h>
- #include <drm/drm_edid.h>
+ #include <drm/drm_vblank.h>
#include <nvif/class.h>
#include <nvif/cl0002.h>
struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
int slots;
- /* When restoring duplicated states, we need to make sure that the
- * bw remains the same and avoid recalculating it, as the connector's
- * bpc may have changed after the state was duplicated
- */
- if (!state->duplicated)
- asyh->dp.pbn =
- drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock,
- connector->display_info.bpc * 3);
+ if (crtc_state->mode_changed || crtc_state->connectors_changed) {
+ /*
+ * When restoring duplicated states, we need to make sure that
+ * the bw remains the same and avoid recalculating it, as the
+ * connector's bpc may have changed after the state was
+ * duplicated
+ */
+ if (!state->duplicated) {
+ const int bpp = connector->display_info.bpc * 3;
+ const int clock = crtc_state->adjusted_mode.clock;
+
+ asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, bpp);
+ }
- if (drm_atomic_crtc_needs_modeset(crtc_state)) {
slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr,
mstc->port,
asyh->dp.pbn);
nv_encoder->aux = aux;
}
- if ((data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
+ if (nv_connector->type != DCB_CONNECTOR_eDP &&
+ (data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
ver >= 0x40 && (nvbios_rd08(bios, data + 0x08) & 0x04)) {
ret = nv50_mstm_new(nv_encoder, &nv_connector->aux, 16,
nv_connector->base.base.id,
NV_ATOMIC(drm, "%s: clr %04x (set %04x)\n", crtc->name,
asyh->clr.mask, asyh->set.mask);
- if (old_crtc_state->active && !new_crtc_state->active)
+
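+ /*
+ * An active CRTC holds a runtime PM reference on the device, so drop
+ * it when the CRTC goes inactive; the matching pm_runtime_get_noresume()
+ * is in the enable path below.
+ */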
+ if (old_crtc_state->active && !new_crtc_state->active) {
+ pm_runtime_put_noidle(dev->dev);
drm_crtc_vblank_off(crtc);
+ }
if (asyh->clr.mask) {
nv50_head_flush_clr(head, asyh, atom->flush_disable);
}
if (new_crtc_state->active) {
- if (!old_crtc_state->active)
+ if (!old_crtc_state->active) {
drm_crtc_vblank_on(crtc);
+ pm_runtime_get_noresume(dev->dev);
+ }
if (new_crtc_state->event)
drm_crtc_vblank_get(crtc);
}
drm_atomic_helper_cleanup_planes(dev, state);
drm_atomic_helper_commit_cleanup_done(state);
drm_atomic_state_put(state);
+
+ /* Drop the RPM ref we got from nv50_disp_atomic_commit() */
+ pm_runtime_mark_last_busy(dev->dev);
+ pm_runtime_put_autosuspend(dev->dev);
}
static void
nv50_disp_atomic_commit(struct drm_device *dev,
struct drm_atomic_state *state, bool nonblock)
{
- struct nouveau_drm *drm = nouveau_drm(dev);
struct drm_plane_state *new_plane_state;
struct drm_plane *plane;
- struct drm_crtc *crtc;
- bool active = false;
int ret, i;
ret = pm_runtime_get_sync(dev->dev);
drm_atomic_state_get(state);
+ /*
+ * Grab another RPM ref for the commit tail, which will release the
+ * ref when it's finished
+ */
+ pm_runtime_get_noresume(dev->dev);
+
if (nonblock)
queue_work(system_unbound_wq, &state->commit_work);
else
nv50_disp_atomic_commit_tail(state);
- drm_for_each_crtc(crtc, dev) {
- if (crtc->state->active) {
- if (!drm->have_disp_power_ref) {
- drm->have_disp_power_ref = true;
- return 0;
- }
- active = true;
- break;
- }
- }
-
- if (!active && drm->have_disp_power_ref) {
- pm_runtime_put_autosuspend(dev->dev);
- drm->have_disp_power_ref = false;
- }
-
err_cleanup:
if (ret)
drm_atomic_helper_cleanup_planes(dev, state);
disp->disp = &nouveau_display(dev)->disp;
dev->mode_config.funcs = &nv50_disp_func;
dev->mode_config.quirk_addfb_prefer_xbgr_30bpp = true;
+ dev->mode_config.normalize_zpos = true;
/* small shared memory area we use for notifiers and semaphores */
ret = nouveau_bo_new(&drm->client, 4096, 0x1000, TTM_PL_FLAG_VRAM,
/* cull any connectors we created that don't have an encoder */
list_for_each_entry_safe(connector, tmp, &dev->mode_config.connector_list, head) {
- if (connector->encoder_ids[0])
+ if (connector->possible_encoders)
continue;
NV_WARN(drm, "%s has no encoders, removing\n",
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
- #include <drm/drmP.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_crtc_helper.h>
{
struct nouveau_encoder *nv_encoder;
struct drm_encoder *enc;
- int i;
- drm_connector_for_each_possible_encoder(connector, enc, i) {
+ drm_connector_for_each_possible_encoder(connector, enc) {
nv_encoder = nouveau_encoder(enc);
if (type == DCB_OUTPUT_ANY ||
struct drm_device *dev = connector->dev;
struct nouveau_encoder *nv_encoder = NULL, *found = NULL;
struct drm_encoder *encoder;
- int i, ret;
+ int ret;
bool switcheroo_ddc = false;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
nv_encoder = nouveau_encoder(encoder);
switch (nv_encoder->dcb->type) {
switch (type) {
case DRM_MODE_CONNECTOR_DisplayPort:
case DRM_MODE_CONNECTOR_eDP:
- drm_dp_cec_register_connector(&nv_connector->aux,
- connector->name, dev->dev);
+ drm_dp_cec_register_connector(&nv_connector->aux, connector);
break;
}
*/
+ #include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_graph.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_panel.h>
#include "dss.h"
{
struct device_node *remote_node;
- remote_node = of_graph_get_remote_node(out->dev->of_node, 0, 0);
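+ /* ffs() is 1-based, so this picks the lowest port set in of_ports */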
+ remote_node = of_graph_get_remote_node(out->dev->of_node,
+ ffs(out->of_ports) - 1, 0);
if (!remote_node) {
dev_dbg(out->dev, "failed to find video sink\n");
return 0;
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_file.h>
if (omapdss_is_initialized() == false)
return -EPROBE_DEFER;
- ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+ ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (ret) {
dev_err(&pdev->dev, "Failed to set the DMA mask\n");
return ret;
* If frequency scaling from low to high, adjust voltage first.
* If frequency scaling from high to low, adjust frequency first.
*/
- if (old_clk_rate < target_rate && pfdev->regulator) {
+ if (old_clk_rate < target_rate) {
err = regulator_set_voltage(pfdev->regulator, target_volt,
target_volt);
if (err) {
if (err) {
dev_err(dev, "Cannot set frequency %lu (%d)\n", target_rate,
err);
- regulator_set_voltage(pfdev->regulator, pfdev->devfreq.cur_volt,
- pfdev->devfreq.cur_volt);
+ if (pfdev->regulator)
+ regulator_set_voltage(pfdev->regulator,
+ pfdev->devfreq.cur_volt,
+ pfdev->devfreq.cur_volt);
return err;
}
- if (old_clk_rate > target_rate && pfdev->regulator) {
+ if (old_clk_rate > target_rate) {
err = regulator_set_voltage(pfdev->regulator, target_volt,
target_volt);
if (err)
static struct drm_driver qxl_driver;
static struct pci_driver qxl_pci_driver;
+ static bool is_vga(struct pci_dev *pdev)
+ {
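+ /* pdev->class packs base class, subclass and prog-if; match VGA-compatible display controllers */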
+ return pdev->class == PCI_CLASS_DISPLAY_VGA << 8;
+ }
+
static int
qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
if (ret)
goto free_dev;
- ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "qxl");
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "qxl");
if (ret)
goto disable_pci;
+ if (is_vga(pdev)) {
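+ /* Hold the legacy VGA I/O range for as long as the driver is bound */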
+ ret = vga_get_interruptible(pdev, VGA_RSRC_LEGACY_IO);
+ if (ret) {
+ DRM_ERROR("can't get legacy vga ioports\n");
+ goto disable_pci;
+ }
+ }
+
ret = qxl_device_init(qdev, &qxl_driver, pdev);
if (ret)
- goto disable_pci;
+ goto put_vga;
ret = qxl_modeset_init(qdev);
if (ret)
qxl_modeset_fini(qdev);
unload:
qxl_device_fini(qdev);
+ put_vga:
+ if (is_vga(pdev))
+ vga_put(pdev, VGA_RSRC_LEGACY_IO);
disable_pci:
pci_disable_device(pdev);
free_dev:
qxl_modeset_fini(qdev);
qxl_device_fini(qdev);
+ if (is_vga(pdev))
+ vga_put(pdev, VGA_RSRC_LEGACY_IO);
dev->dev_private = NULL;
kfree(qdev);
#endif
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
- .gem_prime_pin = qxl_gem_prime_pin,
- .gem_prime_unpin = qxl_gem_prime_unpin,
- .gem_prime_get_sg_table = qxl_gem_prime_get_sg_table,
.gem_prime_import_sg_table = qxl_gem_prime_import_sg_table,
- .gem_prime_vmap = qxl_gem_prime_vmap,
- .gem_prime_vunmap = qxl_gem_prime_vunmap,
.gem_prime_mmap = qxl_gem_prime_mmap,
- .gem_free_object_unlocked = qxl_gem_object_free,
- .gem_open_object = qxl_gem_object_open,
- .gem_close_object = qxl_gem_object_close,
.fops = &qxl_fops,
.ioctls = qxl_ioctls,
.irq_handler = qxl_irq_handler,
struct drm_encoder *encoder;
const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
bool connected;
- int i;
best_encoder = connector_funcs->best_encoder(connector);
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if ((encoder == best_encoder) && (status == connector_status_connected))
connected = true;
else
static struct drm_encoder *radeon_find_encoder(struct drm_connector *connector, int encoder_type)
{
struct drm_encoder *encoder;
- int i;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if (encoder->encoder_type == encoder_type)
return encoder;
}
static struct drm_encoder *radeon_best_single_encoder(struct drm_connector *connector)
{
struct drm_encoder *encoder;
- int i;
/* pick the first one */
- drm_connector_for_each_possible_encoder(connector, encoder, i)
+ drm_connector_for_each_possible_encoder(connector, encoder)
return encoder;
return NULL;
list_for_each_entry(conflict, &dev->mode_config.connector_list, head) {
struct drm_encoder *enc;
- int i;
if (conflict == connector)
continue;
radeon_conflict = to_radeon_connector(conflict);
- drm_connector_for_each_possible_encoder(conflict, enc, i) {
+ drm_connector_for_each_possible_encoder(conflict, enc) {
/* if the IDs match */
if (enc == encoder) {
if (conflict->status != connector_status_connected)
radeon_encoder->output_csc = val;
- if (connector->encoder->crtc) {
+ if (connector->encoder && connector->encoder->crtc) {
struct drm_crtc *crtc = connector->encoder->crtc;
struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
/* find analog encoder */
if (radeon_connector->dac_load_detect) {
- int i;
-
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if (encoder->encoder_type != DRM_MODE_ENCODER_DAC &&
encoder->encoder_type != DRM_MODE_ENCODER_TVDAC)
continue;
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct drm_encoder *encoder;
- int i;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
if (radeon_connector->use_digital == true) {
if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS)
return encoder;
/* then check use digital */
/* pick the first one */
- drm_connector_for_each_possible_encoder(connector, encoder, i)
+ drm_connector_for_each_possible_encoder(connector, encoder)
return encoder;
return NULL;
{
struct drm_encoder *encoder;
struct radeon_encoder *radeon_encoder;
- int i;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
radeon_encoder = to_radeon_encoder(encoder);
switch (radeon_encoder->encoder_id) {
{
struct drm_encoder *encoder;
struct radeon_encoder *radeon_encoder;
- int i;
bool found = false;
- drm_connector_for_each_possible_encoder(connector, encoder, i) {
+ drm_connector_for_each_possible_encoder(connector, encoder) {
radeon_encoder = to_radeon_encoder(encoder);
if (radeon_encoder->caps & ATOM_ENCODER_CAP_RECORD_HBR2)
found = true;
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
+ #include <linux/mmu_notifier.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
static int radeon_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
+ unsigned long flags = 0;
int ret;
+ if (!ent)
+ return -ENODEV; /* Avoid NULL-ptr deref in drm_get_pci_dev */
+
+ flags = ent->driver_data;
+
+ if (!radeon_si_support) {
+ switch (flags & RADEON_FAMILY_MASK) {
+ case CHIP_TAHITI:
+ case CHIP_PITCAIRN:
+ case CHIP_VERDE:
+ case CHIP_OLAND:
+ case CHIP_HAINAN:
+ dev_info(&pdev->dev,
+ "SI support disabled by module param\n");
+ return -ENODEV;
+ }
+ }
+ if (!radeon_cik_support) {
+ switch (flags & RADEON_FAMILY_MASK) {
+ case CHIP_KAVERI:
+ case CHIP_BONAIRE:
+ case CHIP_HAWAII:
+ case CHIP_KABINI:
+ case CHIP_MULLINS:
+ dev_info(&pdev->dev,
+ "CIK support disabled by module param\n");
+ return -ENODEV;
+ }
+ }
+
if (vga_switcheroo_client_probe_defer(pdev))
return -EPROBE_DEFER;
/* Get rid of things like offb */
- ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "radeondrmfb");
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "radeondrmfb");
if (ret)
return ret;
static void
radeon_pci_shutdown(struct pci_dev *pdev)
{
+ struct drm_device *ddev = pci_get_drvdata(pdev);
+
/* if we are running in a VM, make sure the device
* is torn down properly on reboot/shutdown
*/
if (radeon_device_is_virtual())
radeon_pci_remove(pdev);
+
+ /* Some adapters need to be suspended before a
+ * shutdown occurs in order to prevent an error
+ * during kexec.
+ */
+ radeon_suspend_kms(ddev, true, true, false);
}
static int radeon_pmops_suspend(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
return radeon_suspend_kms(drm_dev, true, true, false);
}
static int radeon_pmops_resume(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
/* GPU comes up enabled by the bios on resume */
if (radeon_is_px(drm_dev)) {
static int radeon_pmops_freeze(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
return radeon_suspend_kms(drm_dev, false, true, true);
}
static int radeon_pmops_thaw(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
return radeon_resume_kms(drm_dev, false, true);
}
static int radeon_pmops_runtime_idle(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
struct drm_crtc *crtc;
if (!radeon_is_px(drm_dev)) {
{
pci_unregister_driver(pdriver);
radeon_unregister_atpx_handler();
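+ /* Wait for outstanding mmu notifier callbacks before the module is unloaded */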
+ mmu_notifier_synchronize();
}
module_init(radeon_init);
r = ttm_bo_device_init(&rdev->mman.bdev,
&radeon_bo_driver,
rdev->ddev->anon_inode->i_mapping,
- rdev->need_dma32);
+ rdev->ddev->vma_offset_manager,
+ dma_addressing_limited(&rdev->pdev->dev));
if (r) {
DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
return r;
#include <linux/reset.h>
#include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
/* R and B components are only 5 bits deep */
val |= SUN4I_TCON0_FRM_CTL_MODE_R;
val |= SUN4I_TCON0_FRM_CTL_MODE_B;
+ /* Fall through */
case MEDIA_BUS_FMT_RGB666_1X18:
case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG:
/* Fall through: enable dithering */
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
+#include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <linux/slab.h>
static u16 sun6i_dsi_get_video_start_delay(struct sun6i_dsi *dsi,
struct drm_display_mode *mode)
{
- u16 start = clamp(mode->vtotal - mode->vdisplay - 10, 8, 100);
- u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + start;
+ u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + 1;
if (delay > mode->vtotal)
delay = delay % mode->vtotal;
SUN6I_DSI_BURST_LINE_SYNC_POINT(SUN6I_DSI_SYNC_POINT));
val = SUN6I_DSI_TCON_DRQ_ENABLE_MODE;
- } else if ((mode->hsync_end - mode->hdisplay) > 20) {
+ } else if ((mode->hsync_start - mode->hdisplay) > 20) {
/* Maaaaaagic */
- u16 drq = (mode->hsync_end - mode->hdisplay) - 20;
+ u16 drq = (mode->hsync_start - mode->hdisplay) - 20;
drq *= mipi_dsi_pixel_format_to_bpp(device->format);
drq /= 32;
ret = sun6i_dsi_dcs_read(dsi, msg);
break;
}
+ /* Else, fall through */
default:
ret = -EINVAL;
return PTR_ERR(base);
}
+ dsi->regulator = devm_regulator_get(dev, "vcc-dsi");
+ if (IS_ERR(dsi->regulator)) {
+ dev_err(dev, "Couldn't get VCC-DSI supply\n");
+ return PTR_ERR(dsi->regulator);
+ }
+
dsi->regs = devm_regmap_init_mmio_clk(dev, "bus", base,
&sun6i_dsi_regmap_config);
if (IS_ERR(dsi->regs)) {
static int __maybe_unused sun6i_dsi_runtime_resume(struct device *dev)
{
struct sun6i_dsi *dsi = dev_get_drvdata(dev);
+ int err;
+
+ err = regulator_enable(dsi->regulator);
+ if (err) {
+ dev_err(dsi->dev, "failed to enable VCC-DSI supply: %d\n", err);
+ return err;
+ }
reset_control_deassert(dsi->reset);
clk_prepare_enable(dsi->mod_clk);
clk_disable_unprepare(dsi->mod_clk);
reset_control_assert(dsi->reset);
+ regulator_disable(dsi->regulator);
return 0;
}
struct ttm_bo_device *bdev = bo->bdev;
struct ttm_mem_type_manager *man = &bdev->man[bo->mem.mem_type];
- drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node);
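+ /* Give the driver a chance to do cleanup (e.g. add fences) before the BO is released */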
+ if (bo->bdev->driver->release_notify)
+ bo->bdev->driver->release_notify(bo);
+
+ drm_vma_offset_remove(bdev->vma_manager, &bo->base.vma_node);
ttm_mem_io_lock(man, false);
ttm_mem_io_free_vm(bo);
ttm_mem_io_unlock(man);
*/
if (bo->type == ttm_bo_type_device ||
bo->type == ttm_bo_type_sg)
- ret = drm_vma_offset_add(&bdev->vma_manager, &bo->base.vma_node,
+ ret = drm_vma_offset_add(bdev->vma_manager, &bo->base.vma_node,
bo->mem.num_pages);
/* passed reservation objects should already be locked,
pr_debug("Swap list %d was clean\n", i);
spin_unlock(&glob->lru_lock);
- drm_vma_offset_manager_destroy(&bdev->vma_manager);
-
if (!ret)
ttm_bo_global_release();
int ttm_bo_device_init(struct ttm_bo_device *bdev,
struct ttm_bo_driver *driver,
struct address_space *mapping,
+ struct drm_vma_offset_manager *vma_manager,
bool need_dma32)
{
struct ttm_bo_global *glob = &ttm_bo_glob;
int ret;
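+ /* A vma offset manager must now be supplied by the caller */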
+ if (WARN_ON(vma_manager == NULL))
+ return -EINVAL;
+
ret = ttm_bo_global_init();
if (ret)
return ret;
if (unlikely(ret != 0))
goto out_no_sys;
- drm_vma_offset_manager_init(&bdev->vma_manager,
- DRM_FILE_PAGE_OFFSET_START,
- DRM_FILE_PAGE_OFFSET_SIZE);
+ bdev->vma_manager = vma_manager;
INIT_DELAYED_WORK(&bdev->wq, ttm_bo_delayed_workqueue);
INIT_LIST_HEAD(&bdev->ddestroy);
bdev->dev_mapping = mapping;
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
**************************************************************************/
- #include <linux/module.h>
+
#include <linux/console.h>
#include <linux/dma-mapping.h>
+ #include <linux/module.h>
- #include <drm/drmP.h>
- #include "vmwgfx_drv.h"
- #include "vmwgfx_binding.h"
- #include "ttm_object.h"
- #include <drm/ttm/ttm_placement.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_ioctl.h>
+ #include <drm/drm_pci.h>
+ #include <drm/drm_sysfs.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_module.h>
+ #include <drm/ttm/ttm_placement.h>
+
+ #include "ttm_object.h"
+ #include "vmwgfx_binding.h"
+ #include "vmwgfx_drv.h"
#define VMWGFX_DRIVER_DESC "Linux drm driver for VMware graphics devices"
#define VMWGFX_CHIP_SVGAII 0
static int vmw_assume_16bpp;
static int vmw_probe(struct pci_dev *, const struct pci_device_id *);
- static void vmw_master_init(struct vmw_master *);
static int vmwgfx_pm_notifier(struct notifier_block *nb, unsigned long val,
void *ptr);
DRM_INFO("MMIO at 0x%08x size is %u kiB\n",
dev_priv->mmio_start, dev_priv->mmio_size / 1024);
- vmw_master_init(&dev_priv->fbdev_master);
- ttm_lock_set_kill(&dev_priv->fbdev_master.lock, false, SIGTERM);
- dev_priv->active_master = &dev_priv->fbdev_master;
-
dev_priv->mmio_virt = memremap(dev_priv->mmio_start,
dev_priv->mmio_size, MEMREMAP_WB);
goto out_no_fman;
}
+ drm_vma_offset_manager_init(&dev_priv->vma_manager,
+ DRM_FILE_PAGE_OFFSET_START,
+ DRM_FILE_PAGE_OFFSET_SIZE);
ret = ttm_bo_device_init(&dev_priv->bdev,
&vmw_bo_driver,
dev->anon_inode->i_mapping,
+ &dev_priv->vma_manager,
false);
if (unlikely(ret != 0)) {
DRM_ERROR("Failed initializing TTM buffer object driver.\n");
if (dev_priv->has_mob)
(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
(void) ttm_bo_device_release(&dev_priv->bdev);
+ drm_vma_offset_manager_destroy(&dev_priv->vma_manager);
vmw_release_device_late(dev_priv);
vmw_fence_manager_takedown(dev_priv->fman);
if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
static void vmw_postclose(struct drm_device *dev,
struct drm_file *file_priv)
{
- struct vmw_fpriv *vmw_fp;
-
- vmw_fp = vmw_fpriv(file_priv);
-
- if (vmw_fp->locked_master) {
- struct vmw_master *vmaster =
- vmw_master(vmw_fp->locked_master);
-
- ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);
- ttm_vt_unlock(&vmaster->lock);
- drm_master_put(&vmw_fp->locked_master);
- }
+ struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
ttm_object_file_release(&vmw_fp->tfile);
kfree(vmw_fp);
return ret;
}
- static struct vmw_master *vmw_master_check(struct drm_device *dev,
- struct drm_file *file_priv,
- unsigned int flags)
- {
- int ret;
- struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
- struct vmw_master *vmaster;
-
- if (!drm_is_primary_client(file_priv) || !(flags & DRM_AUTH))
- return NULL;
-
- ret = mutex_lock_interruptible(&dev->master_mutex);
- if (unlikely(ret != 0))
- return ERR_PTR(-ERESTARTSYS);
-
- if (drm_is_current_master(file_priv)) {
- mutex_unlock(&dev->master_mutex);
- return NULL;
- }
-
- /*
- * Check if we were previously master, but now dropped. In that
- * case, allow at least render node functionality.
- */
- if (vmw_fp->locked_master) {
- mutex_unlock(&dev->master_mutex);
-
- if (flags & DRM_RENDER_ALLOW)
- return NULL;
-
- DRM_ERROR("Dropped master trying to access ioctl that "
- "requires authentication.\n");
- return ERR_PTR(-EACCES);
- }
- mutex_unlock(&dev->master_mutex);
-
- /*
- * Take the TTM lock. Possibly sleep waiting for the authenticating
- * master to become master again, or for a SIGTERM if the
- * authenticating master exits.
- */
- vmaster = vmw_master(file_priv->master);
- ret = ttm_read_lock(&vmaster->lock, true);
- if (unlikely(ret != 0))
- vmaster = ERR_PTR(ret);
-
- return vmaster;
- }
-
static long vmw_generic_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg,
long (*ioctl_func)(struct file *, unsigned int,
struct drm_file *file_priv = filp->private_data;
struct drm_device *dev = file_priv->minor->dev;
unsigned int nr = DRM_IOCTL_NR(cmd);
- struct vmw_master *vmaster;
unsigned int flags;
- long ret;
/*
* Do extra checking on driver private ioctls.
} else if (!drm_ioctl_flags(nr, &flags))
return -EINVAL;
- vmaster = vmw_master_check(dev, file_priv, flags);
- if (IS_ERR(vmaster)) {
- ret = PTR_ERR(vmaster);
-
- if (ret != -ERESTARTSYS)
- DRM_INFO("IOCTL ERROR Command %d, Error %ld.\n",
- nr, ret);
- return ret;
- }
-
- ret = ioctl_func(filp, cmd, arg);
- if (vmaster)
- ttm_read_unlock(&vmaster->lock);
-
- return ret;
+ return ioctl_func(filp, cmd, arg);
out_io_encoding:
DRM_ERROR("Invalid command format, ioctl %d\n",
}
#endif
- static void vmw_master_init(struct vmw_master *vmaster)
- {
- ttm_lock_init(&vmaster->lock);
- }
-
- static int vmw_master_create(struct drm_device *dev,
- struct drm_master *master)
- {
- struct vmw_master *vmaster;
-
- vmaster = kzalloc(sizeof(*vmaster), GFP_KERNEL);
- if (unlikely(!vmaster))
- return -ENOMEM;
-
- vmw_master_init(vmaster);
- ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);
- master->driver_priv = vmaster;
-
- return 0;
- }
-
- static void vmw_master_destroy(struct drm_device *dev,
- struct drm_master *master)
- {
- struct vmw_master *vmaster = vmw_master(master);
-
- master->driver_priv = NULL;
- kfree(vmaster);
- }
-
static int vmw_master_set(struct drm_device *dev,
struct drm_file *file_priv,
bool from_open)
{
- struct vmw_private *dev_priv = vmw_priv(dev);
- struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
- struct vmw_master *active = dev_priv->active_master;
- struct vmw_master *vmaster = vmw_master(file_priv->master);
- int ret = 0;
-
- if (active) {
- BUG_ON(active != &dev_priv->fbdev_master);
- ret = ttm_vt_lock(&active->lock, false, vmw_fp->tfile);
- if (unlikely(ret != 0))
- return ret;
-
- ttm_lock_set_kill(&active->lock, true, SIGTERM);
- dev_priv->active_master = NULL;
- }
-
- ttm_lock_set_kill(&vmaster->lock, false, SIGTERM);
- if (!from_open) {
- ttm_vt_unlock(&vmaster->lock);
- BUG_ON(vmw_fp->locked_master != file_priv->master);
- drm_master_put(&vmw_fp->locked_master);
- }
-
- dev_priv->active_master = vmaster;
-
/*
* Inform a new master that the layout may have changed while
* it was gone.
struct drm_file *file_priv)
{
struct vmw_private *dev_priv = vmw_priv(dev);
- struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
- struct vmw_master *vmaster = vmw_master(file_priv->master);
- int ret;
-
- /**
- * Make sure the master doesn't disappear while we have
- * it locked.
- */
- vmw_fp->locked_master = drm_master_get(file_priv->master);
- ret = ttm_vt_lock(&vmaster->lock, false, vmw_fp->tfile);
vmw_kms_legacy_hotspot_clear(dev_priv);
- if (unlikely((ret != 0))) {
- DRM_ERROR("Unable to lock TTM at VT switch.\n");
- drm_master_put(&vmw_fp->locked_master);
- }
-
- ttm_lock_set_kill(&vmaster->lock, false, SIGTERM);
-
if (!dev_priv->enable_fb)
vmw_svga_disable(dev_priv);
-
- dev_priv->active_master = &dev_priv->fbdev_master;
- ttm_lock_set_kill(&dev_priv->fbdev_master.lock, false, SIGTERM);
- ttm_vt_unlock(&dev_priv->fbdev_master.lock);
}
/**
.disable_vblank = vmw_disable_vblank,
.ioctls = vmw_ioctls,
.num_ioctls = ARRAY_SIZE(vmw_ioctls),
- .master_create = vmw_master_create,
- .master_destroy = vmw_master_destroy,
.master_set = vmw_master_set,
.master_drop = vmw_master_drop,
.open = vmw_driver_open,
#ifndef _VMWGFX_DRV_H_
#define _VMWGFX_DRV_H_
- #include "vmwgfx_validation.h"
- #include "vmwgfx_reg.h"
- #include <drm/drmP.h>
- #include <drm/vmwgfx_drm.h>
- #include <drm/drm_hashtab.h>
- #include <drm/drm_auth.h>
#include <linux/suspend.h>
+ #include <linux/sync_file.h>
+
+ #include <drm/drm_auth.h>
+ #include <drm/drm_device.h>
+ #include <drm/drm_file.h>
+ #include <drm/drm_hashtab.h>
+ #include <drm/drm_rect.h>
+
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_execbuf_util.h>
#include <drm/ttm/ttm_module.h>
- #include "vmwgfx_fence.h"
- #include "ttm_object.h"
+
#include "ttm_lock.h"
- #include <linux/sync_file.h>
+ #include "ttm_object.h"
+
+ #include "vmwgfx_fence.h"
+ #include "vmwgfx_reg.h"
+ #include "vmwgfx_validation.h"
+
+ /*
+ * FIXME: vmwgfx_drm.h needs to be last due to dependencies.
+ * uapi headers should not depend on header files outside uapi/.
+ */
+ #include <drm/vmwgfx_drm.h>
+
#define VMWGFX_DRIVER_NAME "vmwgfx"
#define VMWGFX_DRIVER_DATE "20180704"
#define VMW_RES_SHADER ttm_driver_type4
struct vmw_fpriv {
- struct drm_master *locked_master;
struct ttm_object_file *tfile;
bool gb_aware; /* user-space is guest-backed aware */
};
+ /**
+ * struct vmw_buffer_object - TTM buffer object with vmwgfx additions
+ * @base: The TTM buffer object
+ * @res_list: List of resources using this buffer object as a backing MOB
+ * @pin_count: pin depth
+ * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB
+ * @map: Kmap object for semi-persistent mappings
+ * @res_prios: Eviction priority counts for attached resources
+ */
struct vmw_buffer_object {
struct ttm_buffer_object base;
struct list_head res_list;
struct vmw_resource *dx_query_ctx;
/* Protected by reservation */
struct ttm_bo_kmap_obj map;
+ u32 res_prios[TTM_MAX_BO_PRIORITY];
};
/**
struct kref kref;
struct vmw_private *dev_priv;
int id;
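+ /* Eviction priority currently registered with the backing buffer object */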
+ u32 used_prio;
unsigned long backup_size;
bool res_dirty;
bool backup_dirty;
struct vmw_legacy_display;
struct vmw_overlay;
- struct vmw_master {
- struct ttm_lock lock;
- };
-
struct vmw_vga_topology_state {
uint32_t width;
uint32_t height;
struct vmw_fifo_state fifo;
struct drm_device *dev;
+ struct drm_vma_offset_manager vma_manager;
unsigned long vmw_chipset;
unsigned int io_start;
uint32_t vram_start;
spinlock_t svga_lock;
/**
- * Master management.
+ * PM management.
*/
-
- struct vmw_master *active_master;
- struct vmw_master fbdev_master;
struct notifier_block pm_nb;
bool refuse_hibernation;
bool suspend_locked;
return (struct vmw_fpriv *)file_priv->driver_priv;
}
- static inline struct vmw_master *vmw_master(struct drm_master *master)
- {
- return (struct vmw_master *) master->driver_priv;
- }
-
/*
* The locking here is fine-grained, so that it is performed once
* for every read- and write operation. This is of course costly, but we
extern int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob);
extern void vmw_resource_evict_all(struct vmw_private *dev_priv);
extern void vmw_resource_unbind_list(struct vmw_buffer_object *vbo);
+ void vmw_resource_mob_attach(struct vmw_resource *res);
+ void vmw_resource_mob_detach(struct vmw_resource *res);
+
+ /**
+ * vmw_resource_mob_attached - Whether a resource currently has a mob attached
+ * @res: The resource
+ *
+ * Return: true if the resource has a mob attached, false otherwise.
+ */
+ static inline bool vmw_resource_mob_attached(const struct vmw_resource *res)
+ {
+ return !list_empty(&res->mob_head);
+ }
/**
* vmw_user_resource_noref_release - release a user resource pointer looked up
ttm_base_object_noref_release();
}
+ /**
+ * vmw_bo_prio_adjust - Adjust the buffer object eviction priority
+ * according to attached resources
+ * @vbo: The struct vmw_buffer_object
+ */
+ static inline void vmw_bo_prio_adjust(struct vmw_buffer_object *vbo)
+ {
+ int i = ARRAY_SIZE(vbo->res_prios);
+
+ while (i--) {
+ if (vbo->res_prios[i]) {
+ vbo->base.priority = i;
+ return;
+ }
+ }
+
+ vbo->base.priority = 3;
+ }
+
+ /**
+ * vmw_bo_prio_add - Notify a buffer object of a newly attached resource
+ * eviction priority
+ * @vbo: The struct vmw_buffer_object
+ * @prio: The resource priority
+ *
+ * After being notified, the code assigns the highest resource eviction priority
+ * to the backing buffer object (mob).
+ */
+ static inline void vmw_bo_prio_add(struct vmw_buffer_object *vbo, int prio)
+ {
+ if (vbo->res_prios[prio]++ == 0)
+ vmw_bo_prio_adjust(vbo);
+ }
+
+ /**
+ * vmw_bo_prio_del - Notify a buffer object of a resource with a certain
+ * priority being removed
+ * @vbo: The struct vmw_buffer_object
+ * @prio: The resource priority
+ *
+ * After being notified, the code assigns the highest resource eviction priority
+ * to the backing buffer object (mob).
+ */
+ static inline void vmw_bo_prio_del(struct vmw_buffer_object *vbo, int prio)
+ {
+ if (--vbo->res_prios[prio] == 0)
+ vmw_bo_prio_adjust(vbo);
+ }
/**
* Misc Ioctl functionality - vmwgfx_ioctl.c
int vmw_kms_write_svga(struct vmw_private *vmw_priv,
unsigned width, unsigned height, unsigned pitch,
unsigned bpp, unsigned depth);
- void vmw_kms_idle_workqueues(struct vmw_master *vmaster);
bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv,
uint32_t pitch,
uint32_t height);
#define VMW_DEBUG_USER(fmt, ...) \
DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+ /**
+ * VMW_DEBUG_KMS - Debug output for kernel mode-setting
+ *
+ * This macro is for debugging vmwgfx mode-setting code.
+ */
+ #define VMW_DEBUG_KMS(fmt, ...) \
+ DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+
/**
* Inline helper functions
*/
}
#ifdef CONFIG_CPU_FREQ
-/*
- * Calculate the minimum DMA period over all displays that we own.
- * This, together with the SDRAM bandwidth defines the slowest CPU
- * frequency that can be selected.
- */
-static unsigned int sa1100fb_min_dma_period(struct sa1100fb_info *fbi)
-{
- /*
- * FIXME: we need to verify _all_ consoles.
- */
- return sa1100fb_display_dma_period(&fbi->fb.var);
-}
-
/*
* CPU clock speed change handler. We need to adjust the LCD timing
* parameters when the CPU clock is adjusted by the power management
}
return 0;
}
-
- static int
- sa1100fb_freq_policy(struct notifier_block *nb, unsigned long val,
- void *data)
- {
- struct sa1100fb_info *fbi = TO_INF(nb, freq_policy);
- struct cpufreq_policy *policy = data;
-
- switch (val) {
- case CPUFREQ_ADJUST:
- dev_dbg(fbi->dev, "min dma period: %d ps, "
- "new clock %d kHz\n", sa1100fb_min_dma_period(fbi),
- policy->max);
- /* todo: fill in min/max values */
- break;
- case CPUFREQ_NOTIFY:
- do {} while(0);
- /* todo: panic if min/max values aren't fulfilled
- * [can't really happen unless there's a bug in the
- * CPU policy verififcation process *
- */
- break;
- }
- return 0;
- }
#endif
#ifdef CONFIG_PM
#ifdef CONFIG_CPU_FREQ
fbi->freq_transition.notifier_call = sa1100fb_freq_transition;
- fbi->freq_policy.notifier_call = sa1100fb_freq_policy;
cpufreq_register_notifier(&fbi->freq_transition, CPUFREQ_TRANSITION_NOTIFIER);
- cpufreq_register_notifier(&fbi->freq_policy, CPUFREQ_POLICY_NOTIFIER);
#endif
/* This driver cannot be unloaded at the moment */
#include <drm/drm_connector.h>
#include <drm/drm_device.h>
#include <drm/drm_property.h>
-#include <drm/drm_bridge.h>
#include <drm/drm_edid.h>
#include <drm/drm_plane.h>
#include <drm/drm_blend.h>
u32 target_vblank;
/**
- * @pageflip_flags:
+ * @async_flip:
*
- * DRM_MODE_PAGE_FLIP_* flags, as passed to the page flip ioctl.
- * Zero in any other case.
+ * This is set when DRM_MODE_PAGE_FLIP_ASYNC is set in the legacy
+ * PAGE_FLIP IOCTL. It's not wired up for the atomic IOCTL itself yet.
*/
- u32 pageflip_flags;
+ bool async_flip;
/**
* @vrr_enabled:
/**
* @self_refresh_data: Holds the state for the self refresh helpers
*
- * Initialized via drm_self_refresh_helper_register().
+ * Initialized via drm_self_refresh_helper_init().
*/
struct drm_self_refresh_data *self_refresh_data;
};
* notify driver that a BO was deleted from LRU.
*/
void (*del_from_lru_notify)(struct ttm_buffer_object *bo);
+
+ /**
+ * Notify the driver that we're about to release a BO
+ *
+ * @bo: BO that is about to be released
+ *
+ * Gives the driver a chance to do any cleanup, including
+ * adding fences that may force a delayed delete
+ */
+ void (*release_notify)(struct ttm_buffer_object *bo);
};
/**
*
* @driver: Pointer to a struct ttm_bo_driver struct setup by the driver.
* @man: An array of mem_type_managers.
- * @vma_manager: Address space manager
+ * @vma_manager: Address space manager (pointer)
* @lru_lock: Spinlock that protects the buffer+device lru lists and
* ddestroy lists.
* @dev_mapping: A pointer to the struct address_space representing the
/*
* Protected by internal locks.
*/
- struct drm_vma_offset_manager vma_manager;
+ struct drm_vma_offset_manager *vma_manager;
/*
* Protected by the global:lru lock.
* @glob: A pointer to an initialized struct ttm_bo_global.
* @driver: A pointer to a struct ttm_bo_driver set up by the caller.
* @mapping: The address space to use for this bo.
+ * @vma_manager: A pointer to a vma manager.
* @file_page_offset: Offset into the device address space that is available
* for buffer data. This ensures compatibility with other users of the
* address space.
int ttm_bo_device_init(struct ttm_bo_device *bdev,
struct ttm_bo_driver *driver,
struct address_space *mapping,
+ struct drm_vma_offset_manager *vma_manager,
bool need_dma32);
/**