F: drivers/scsi/53c700*
6LOWPAN GENERIC (BTLE/IEEE 802.15.4)
S: Maintained
F: Documentation/hwmon/ads1015
F: drivers/hwmon/ads1015.c
-F: include/linux/i2c/ads1015.h
+F: include/linux/platform_data/ads1015.h
ADT746X FAN DRIVER
F: drivers/amba/
F: include/linux/amba/bus.h
+ARM/ACTIONS SEMI ARCHITECTURE
+S: Maintained
+N: owl
+F: arch/arm/mach-actions/
+F: arch/arm/boot/dts/owl-*
+F: arch/arm64/boot/dts/actions/
+F: drivers/clocksource/owl-*
+F: drivers/soc/actions/
+F: include/dt-bindings/power/owl-*
+F: include/linux/soc/actions/
+F: Documentation/devicetree/bindings/arm/actions.txt
+F: Documentation/devicetree/bindings/power/actions,owl-sps.txt
+F: Documentation/devicetree/bindings/timer/actions,owl-timer.txt
+
ARM/ADS SPHERE MACHINE SUPPORT
ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
S: Maintained
F: arch/arm/mach-ep93xx/
S: Maintained
F: drivers/hwtracing/coresight/*
F: Documentation/trace/coresight.txt
+F: Documentation/trace/coresight-cpu-debug.txt
F: Documentation/devicetree/bindings/arm/coresight.txt
+F: Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
F: tools/perf/arch/arm/util/pmu.c
F: tools/perf/arch/arm/util/auxtrace.c
S: Maintained
-F: arch/arm/mach-mvebu/
-F: drivers/rtc/rtc-armada38x.c
F: arch/arm/boot/dts/armada*
F: arch/arm/boot/dts/kirkwood*
+F: arch/arm/configs/mvebu_*_defconfig
+F: arch/arm/mach-mvebu/
F: arch/arm64/boot/dts/marvell/armada*
F: drivers/cpufreq/mvebu-cpufreq.c
-F: arch/arm/configs/mvebu_*_defconfig
+F: drivers/irqchip/irq-armada-370-xp.c
+F: drivers/irqchip/irq-mvebu-*
+F: drivers/pinctrl/mvebu/
+F: drivers/rtc/rtc-armada38x.c
ARM/Marvell Berlin SoC support
F: arch/arm64/boot/dts/qcom/*
F: drivers/i2c/busses/i2c-qup.c
F: drivers/clk/qcom/
-F: drivers/pinctrl/qcom/
F: drivers/dma/qcom/
F: drivers/soc/qcom/
F: drivers/spi/spi-qup.c
S: Maintained
+ARM/REALTEK ARCHITECTURE
+S: Maintained
+F: arch/arm64/boot/dts/realtek/
+F: Documentation/devicetree/bindings/arm/realtek.txt
+
ARM/RENESAS ARM64 ARCHITECTURE
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip.git
S: Maintained
F: arch/arm/boot/dts/rk3*
+F: arch/arm/boot/dts/rv1108*
F: arch/arm/mach-rockchip/
F: drivers/clk/rockchip/
F: drivers/i2c/busses/i2c-rk3x.c
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
Q: https://patchwork.kernel.org/project/linux-samsung-soc/list/
F: drivers/media/platform/s5p-mfc/
ARM/SAMSUNG S5P SERIES HDMI CEC SUBSYSTEM SUPPORT
-M: Kyungmin Park <kyungmin.park@samsung.com>
+M: Marek Szyprowski <m.szyprowski@samsung.com>
S: Maintained
-F: drivers/staging/media/platform/s5p-cec/
+F: drivers/media/platform/s5p-cec/
+F: Documentation/devicetree/bindings/media/s5p-cec.txt
ARM/SAMSUNG S5P SERIES JPEG CODEC SUPPORT
ARM/STI ARCHITECTURE
W: http://www.stlinux.com
S: Maintained
F: arch/arm/mach-sti/
F: drivers/media/rc/st_rc.c
F: drivers/media/platform/sti/c8sectpfe/
F: drivers/mmc/host/sdhci-st.c
-F: drivers/phy/phy-miphy28lp.c
-F: drivers/phy/phy-stih407-usb.c
+F: drivers/phy/st/phy-miphy28lp.c
+F: drivers/phy/st/phy-stih407-usb.c
F: drivers/pinctrl/pinctrl-st.c
F: drivers/remoteproc/st_remoteproc.c
F: drivers/remoteproc/st_slim_rproc.c
F: drivers/input/touchscreen/atmel_mxt_ts.c
F: include/linux/platform_data/atmel_mxt_ts.h
+ATOMIC INFRASTRUCTURE
+S: Maintained
+F: arch/*/include/asm/atomic*.h
+F: include/*/atomic*.h
+
ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
F: net/bluetooth/
F: include/net/bluetooth/
+DMA MAPPING HELPERS
+T: git git://git.infradead.org/users/hch/dma-mapping.git
+W: http://git.infradead.org/users/hch/dma-mapping.git
+S: Supported
+F: lib/dma-debug.c
+F: lib/dma-noop.c
+F: lib/dma-virt.c
+F: drivers/base/dma-mapping.c
+F: drivers/base/dma-coherent.c
+F: include/linux/dma-mapping.h
+
BONDING DRIVER
F: arch/arm/mach-bcm/
BROADCOM BCM2835 ARM ARCHITECTURE
S: Supported
F: drivers/net/wireless/broadcom/brcm80211/
C6X ARCHITECTURE
-M: Aurelien Jacquiot <a-jacquiot@ti.com>
+M: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
W: http://www.linux-c6x.org/wiki/index.php/Main_Page
S: Maintained
F: include/media/cec-notifier.h
F: include/uapi/linux/cec.h
F: include/uapi/linux/cec-funcs.h
+F: Documentation/devicetree/bindings/media/cec.txt
CELL BROADBAND ENGINE ARCHITECTURE
S: Maintained
F: Documentation/crypto/
F: Documentation/devicetree/bindings/crypto/
-F: Documentation/DocBook/crypto-API.tmpl
F: arch/*/crypto/
F: crypto/
F: drivers/crypto/
F: drivers/infiniband/hw/cxgb4/
F: include/uapi/rdma/cxgb4-abi.h
+CXGB4 CRYPTO DRIVER (chcr)
+W: http://www.chelsio.com
+S: Supported
+F: drivers/crypto/chelsio
+
CXGB4VF ETHERNET DRIVER (CXGB4VF)
S: Maintained
F: Documentation/
-F: scripts/docproc.c
-F: scripts/kernel-doc*
+F: scripts/kernel-doc
X: Documentation/ABI/
X: Documentation/devicetree/
X: Documentation/acpi
F: drivers/media/usb/dvb-usb-v2/dvb_usb*
F: drivers/media/usb/dvb-usb-v2/usb_urb.c
+DONGWOON DW9714 LENS VOICE COIL DRIVER
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: drivers/media/i2c/dw9714.c
+
DYNAMIC DEBUG
S: Maintained
GENWQE (IBM Generic Workqueue Card)
-M: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
+M: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
S: Supported
F: drivers/misc/genwqe/
GPIO SUBSYSTEM
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
S: Maintained
F: drivers/ide/ide-cd*
IEEE 802.15.4 SUBSYSTEM
W: http://wpan.cakelab.org/
F: Documentation/devicetree/bindings/iio/adc/envelope-detector.txt
F: drivers/iio/adc/envelope-detector.c
+IIO MULTIPLEXER
+S: Maintained
+F: Documentation/devicetree/bindings/iio/multiplexer/iio-mux.txt
+F: drivers/iio/multiplexer/iio-mux.c
+
IIO SUBSYSTEM AND DRIVERS
F: drivers/input/input-mt.c
K: \b(ABS|SYN)_MT_
+INSIDE SECURE CRYPTO DRIVER
+F: drivers/crypto/inside-secure/
+S: Maintained
+
INTEL ASoC BDW/HSW DRIVERS
F: Documentation/networking/i40evf.txt
F: drivers/net/ethernet/intel/
F: drivers/net/ethernet/intel/*/
+F: include/linux/avf/virtchnl.h
INTEL RDMA RNIC DRIVER
KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
+M: Cornelia Huck <cohuck@redhat.com>
W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
S: Maintained
-F: Documentation/security/keys.txt
+F: Documentation/security/keys/core.rst
F: include/linux/key.h
F: include/linux/key-type.h
F: include/linux/keyctl.h
S: Supported
-F: Documentation/security/keys-trusted-encrypted.txt
+F: Documentation/security/keys/trusted-encrypted.rst
F: include/keys/trusted-type.h
F: security/keys/trusted.c
F: security/keys/trusted.h
S: Supported
-F: Documentation/security/keys-trusted-encrypted.txt
+F: Documentation/security/keys/trusted-encrypted.rst
F: include/keys/encrypted-type.h
F: security/keys/encrypted-keys/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb.git
S: Maintained
-F: Documentation/DocBook/kgdb.tmpl
+F: Documentation/dev-tools/kgdb.rst
F: drivers/misc/kgdbts.c
F: drivers/tty/serial/kgdboc.c
F: include/linux/kdb.h
F: drivers/ata/pata_*.c
F: drivers/ata/ata_generic.c
+LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
+S: Maintained
+F: drivers/ata/pata_ftide010.c
+F: drivers/ata/sata_gemini.c
+F: drivers/ata/sata_gemini.h
+
LIBATA SATA AHCI PLATFORM devices support
F: drivers/ata/sata_promise.*
LIBLOCKDEP
-M: Sasha Levin <sasha.levin@oracle.com>
+M: Sasha Levin <alexander.levin@verizon.com>
S: Maintained
F: tools/lib/lockdep/
Q: https://patchwork.kernel.org/project/linux-nvdimm/list/
S: Supported
- F: drivers/nvdimm/pmem.c
- F: include/linux/pmem.h
- F: arch/*/include/asm/pmem.h
+ F: drivers/nvdimm/pmem*
LIGHTNVM PLATFORM SUPPORT
LIVE PATCHING
-M: Jessica Yu <jeyu@redhat.com>
+M: Jessica Yu <jeyu@kernel.org>
F: drivers/net/ethernet/marvell/mv643xx_eth.*
F: include/linux/mv643xx.h
+MARVELL MV88X3310 PHY DRIVER
+S: Maintained
+F: drivers/net/phy/marvell10g.c
+
MARVELL MVNETA ETHERNET DRIVER
F: Documentation/hwmon/max20751
F: drivers/hwmon/max20751.c
+MAX2175 SDR TUNER DRIVER
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: Documentation/devicetree/bindings/media/i2c/max2175.txt
+F: Documentation/media/v4l-drivers/max2175.rst
+F: drivers/media/i2c/max2175*
+F: include/uapi/linux/max2175.h
+
MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
S: Orphan
F: drivers/power/supply/max14577_charger.c
F: drivers/power/supply/max77693_charger.c
-MAXIM MAX77802 MULTIFUNCTION PMIC DEVICE DRIVERS
-M: Javier Martinez Canillas <javier@osg.samsung.com>
+MAXIM MAX77802 PMIC REGULATOR DEVICE DRIVER
+M: Javier Martinez Canillas <javier@dowhile0.org>
S: Supported
-F: drivers/*/*max77802*.c
+F: drivers/regulator/max77802-regulator.c
F: Documentation/devicetree/bindings/*/*max77802.txt
F: include/dt-bindings/*/*max77802.h
S: Maintained
F: drivers/iio/dac/cio-dac.c
+MEDIA DRIVERS FOR RENESAS - DRIF
+T: git git://linuxtv.org/media_tree.git
+S: Supported
+F: Documentation/devicetree/bindings/media/renesas,drif.txt
+F: drivers/media/platform/rcar_drif.c
+
+MEDIA DRIVERS FOR FREESCALE IMX
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: Documentation/devicetree/bindings/media/imx.txt
+F: Documentation/media/v4l-drivers/imx.rst
+F: drivers/staging/media/imx/
+F: include/linux/imx-media.h
+F: include/media/imx.h
+
MEDIA DRIVERS FOR RENESAS - FCP
S: Maintained
F: drivers/net/wireless/mediatek/mt7601u/
+MEDIATEK RANDOM NUMBER GENERATOR SUPPORT
+S: Maintained
+F: drivers/char/hw_random/mtk-rng.c
+
MEGACHIPS STDPXXXX-GE-B850V3-FW LVDS/DP++ BRIDGES
Q: http://patchwork.ozlabs.org/project/netdev/list/
F: drivers/net/ethernet/mellanox/mlx5/core/en_*
+MELLANOX ETHERNET INNOVA DRIVER
+S: Supported
+W: http://www.mellanox.com
+Q: http://patchwork.ozlabs.org/project/netdev/list/
+F: drivers/net/ethernet/mellanox/mlx5/core/fpga/*
+F: include/linux/mlx5/mlx5_ifc_fpga.h
+
+MELLANOX ETHERNET INNOVA IPSEC DRIVER
+S: Supported
+W: http://www.mellanox.com
+Q: http://patchwork.ozlabs.org/project/netdev/list/
+F: drivers/net/ethernet/mellanox/mlx5/core/en_ipsec/*
+F: drivers/net/ethernet/mellanox/mlx5/core/ipsec*
+
MELLANOX ETHERNET SWITCH DRIVERS
Q: http://patchwork.ozlabs.org/project/netdev/list/
F: drivers/net/ethernet/mellanox/mlxsw/
+MELLANOX FIRMWARE FLASH LIBRARY (mlxfw)
+S: Supported
+W: http://www.mellanox.com
+Q: http://patchwork.ozlabs.org/project/netdev/list/
+F: drivers/net/ethernet/mellanox/mlxfw/
+
MELLANOX MLXCPLD I2C AND MUX DRIVER
S: Supported
F: arch/microblaze/
-MICROCHIP / ATMEL AT91 / AT32 SERIAL DRIVER
+MICROCHIP / ATMEL AT91 SERIAL DRIVER
S: Maintained
F: drivers/tty/serial/atmel_serial.c
F: drivers/media/platform/atmel/atmel-isc-regs.h
F: Documentation/devicetree/bindings/media/atmel-isc.txt
+MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER
+S: Maintained
+F: net/dsa/tag_ksz.c
+F: drivers/net/dsa/microchip/*
+F: include/linux/platform_data/microchip-ksz.h
+F: Documentation/devicetree/bindings/net/dsa/ksz.txt
+
MICROCHIP USB251XB DRIVER
F: drivers/media/radio/radio-miropcm20*
MELLANOX MLX4 core VPI driver
-M: Yishai Hadas <yishaih@mellanox.com>
+M: Tariq Toukan <tariqt@mellanox.com>
W: http://www.mellanox.com
S: Supported
F: drivers/net/ethernet/mellanox/mlx4/
F: include/linux/mlx4/
-F: include/uapi/rdma/mlx4-abi.h
MELLANOX MLX4 IB driver
S: Supported
F: drivers/infiniband/hw/mlx4/
F: include/linux/mlx4/
+F: include/uapi/rdma/mlx4-abi.h
MELLANOX MLX5 core VPI driver
S: Supported
F: drivers/net/ethernet/mellanox/mlx5/core/
F: include/linux/mlx5/
-F: include/uapi/rdma/mlx5-abi.h
MELLANOX MLX5 IB driver
S: Supported
F: drivers/infiniband/hw/mlx5/
F: include/linux/mlx5/
+F: include/uapi/rdma/mlx5-abi.h
MELEXIS MLX90614 DRIVER
F: drivers/media/dvb-frontends/mn88473*
MODULE SUPPORT
-M: Jessica Yu <jeyu@redhat.com>
+M: Jessica Yu <jeyu@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
S: Maintained
F: drivers/mmc/host/mmc_spi.c
F: include/linux/spi/mmc_spi.h
+MULTIPLEXER SUBSYSTEM
+S: Maintained
+F: Documentation/ABI/testing/sysfs-class-mux*
+F: Documentation/devicetree/bindings/mux/
+F: include/linux/dt-bindings/mux/
+F: include/linux/mux/
+F: drivers/mux/
+
MULTISOUND SOUND DRIVER
S: Maintained
F: include/net/ip*
F: arch/x86/net/*
+NETWORKING [TLS]
+S: Maintained
+F: net/tls/*
+F: include/uapi/linux/tls.h
+F: include/net/tls.h
+
NETWORKING [IPSEC]
F: drivers/nfc/
F: include/linux/platform_data/nfcmrvl.h
F: include/linux/platform_data/nxp-nci.h
-F: include/linux/platform_data/pn544.h
-F: include/linux/platform_data/st21nfca.h
-F: include/linux/platform_data/st-nci.h
F: Documentation/devicetree/bindings/net/nfc/
NFS, SUNRPC, AND LOCKD CLIENTS
S: Maintained
F: drivers/char/pcmcia/cm4040_cs.*
+OMNIVISION OV5640 SENSOR DRIVER
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: drivers/media/i2c/ov5640.c
+
OMNIVISION OV5647 SENSOR DRIVER
F: drivers/media/i2c/ov7670.c
F: Documentation/devicetree/bindings/media/i2c/ov7670.txt
+OMNIVISION OV13858 SENSOR DRIVER
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: drivers/media/i2c/ov13858.c
+
ONENAND FLASH DRIVER
S: Maintained
F: drivers/pinctrl/intel/
+PIN CONTROLLER - QUALCOMM
+S: Maintained
+F: Documentation/devicetree/bindings/pinctrl/qcom,*.txt
+F: drivers/pinctrl/qcom/
+
PIN CONTROLLER - RENESAS
S: Maintained
F: Documentation/hwmon/pmbus
F: drivers/hwmon/pmbus/
-F: include/linux/i2c/pmbus.h
+F: include/linux/pmbus.h
PMC SIERRA MaxRAID DRIVER
S: Maintained
F: drivers/staging/fsl-mc/
+F: Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
QT1010 MEDIA DRIVER
S: Supported
F: arch/hexagon/
+QUALCOMM VENUS VIDEO ACCELERATOR DRIVER
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: drivers/media/platform/qcom/venus/
+
QUALCOMM WCN36XX WIRELESS DRIVER
S: Maintained
F: drivers/firmware/qemu_fw_cfg.c
+QUANTENNA QTNFMAC WIRELESS DRIVER
+S: Maintained
+F: drivers/net/wireless/quantenna
+
RADOS BLOCK DEVICE (RBD)
S: Supported
F: drivers/iio/adc/rcar_gyro_adc.c
-RENESAS USB2 PHY DRIVER
+RENESAS USB PHY DRIVER
S: Maintained
-F: drivers/phy/phy-rcar-gen3-usb2.c
+F: drivers/phy/renesas/phy-rcar-gen3-usb*.c
RESET CONTROLLER FRAMEWORK
F: arch/s390/
F: drivers/s390/
F: Documentation/s390/
-F: Documentation/DocBook/s390*
+F: Documentation/driver-api/s390-drivers.rst
S390 COMMON I/O LAYER
F: drivers/iommu/s390-iommu.c
S390 VFIO-CCW DRIVER
+M: Cornelia Huck <cohuck@redhat.com>
S: Supported
F: Documentation/devicetree/bindings/phy/samsung-phy.txt
F: Documentation/phy/samsung-usb2.txt
-F: drivers/phy/phy-exynos4210-usb2.c
-F: drivers/phy/phy-exynos4x12-usb2.c
-F: drivers/phy/phy-exynos5250-usb2.c
-F: drivers/phy/phy-s5pv210-usb2.c
-F: drivers/phy/phy-samsung-usb2.c
-F: drivers/phy/phy-samsung-usb2.h
+F: drivers/phy/samsung/phy-exynos4210-usb2.c
+F: drivers/phy/samsung/phy-exynos4x12-usb2.c
+F: drivers/phy/samsung/phy-exynos5250-usb2.c
+F: drivers/phy/samsung/phy-s5pv210-usb2.c
+F: drivers/phy/samsung/phy-samsung-usb2.c
+F: drivers/phy/samsung/phy-samsung-usb2.h
SERIAL DRIVERS
STI CEC DRIVER
S: Maintained
F: drivers/staging/media/st-cec/
F: Documentation/devicetree/bindings/media/stih-cec.txt
THUNDERBOLT DRIVER
S: Maintained
F: drivers/thunderbolt/
F: kernel/time/ntp.c
F: tools/testing/selftests/timers/
+TI TRF7970A NFC DRIVER
+S: Supported
+F: drivers/nfc/trf7970a.c
+F: Documentation/devicetree/bindings/net/nfc/trf7970a.txt
+
SC1200 WDT DRIVER
S: Maintained
F: include/uapi/linux/seccomp.h
F: include/linux/seccomp.h
F: tools/testing/selftests/seccomp/*
+F: Documentation/userspace-api/seccomp_filter.rst
K: \bsecure_computing
K: \bTIF_SECCOMP\b
F: include/linux/selinux*
F: security/selinux/
F: scripts/selinux/
+F: Documentation/admin-guide/LSM/SELinux.rst
APPARMOR SECURITY MODULE
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jj/apparmor-dev.git
S: Supported
F: security/apparmor/
+F: Documentation/admin-guide/LSM/apparmor.rst
LOADPIN SECURITY MODULE
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git lsm/loadpin
S: Supported
F: security/loadpin/
+F: Documentation/admin-guide/LSM/LoadPin.rst
YAMA SECURITY MODULE
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git yama/tip
S: Supported
F: security/yama/
+F: Documentation/admin-guide/LSM/Yama.rst
SENSABLE PHANTOM
S: Supported
F: arch/arm/mach-davinci/
F: drivers/i2c/busses/i2c-davinci.c
+F: arch/arm/boot/dts/da850*
TI DAVINCI SERIES MEDIA DRIVER
W: http://schaufler-ca.com
T: git git://github.com/cschaufler/smack-next
S: Maintained
-F: Documentation/security/Smack.txt
+F: Documentation/admin-guide/LSM/Smack.rst
F: security/smack/
DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS)
SOFTLOGIC 6x10 MPEG CODEC
-M: Andrey Utkin <andrey.krieger.utkin@gmail.com>
+M: Andrey Utkin <andrey_utkin@fastmail.com>
S: Supported
F: Documentation/devicetree/bindings/soc/ti/sci-pm-domain.txt
F: include/dt-bindings/genpd/k2g.h
F: drivers/soc/ti/ti_sci_pm_domains.c
+F: Documentation/devicetree/bindings/reset/ti,sci-reset.txt
+F: drivers/reset/reset-ti-sci.c
THANKO'S RAREMONO AM/FM/SW RADIO RECEIVER USB DRIVER
S: Supported
F: drivers/mmc/host/tmio_mmc*
-F: drivers/mmc/host/sh_mobile_sdhi.c
+F: drivers/mmc/host/renesas_sdhi*
F: include/linux/mfd/tmio.h
TMP401 HARDWARE MONITOR DRIVER
TW5864 VIDEO4LINUX DRIVER
F: drivers/media/v4l2-core/videobuf2-*
F: include/media/videobuf2-*
+VIDEO MULTIPLEXER DRIVER
+S: Maintained
+F: drivers/media/platform/video-mux.c
+
VIRTIO AND VHOST VSOCK DRIVER
F: drivers/crypto/virtio/
VIRTIO DRIVERS FOR S390
+M: Cornelia Huck <cohuck@redhat.com>
S: Maintained
F: Documentation/w1/
F: drivers/w1/
+F: include/linux/w1.h
W83791D HARDWARE MONITORING DRIVER
F: drivers/net/wireless/wl3501*
WOLFSON MICROELECTRONICS DRIVERS
-L: patches@opensource.wolfsonmicro.com
+L: patches@opensource.cirrus.com
T: git https://github.com/CirrusLogic/linux-drivers.git
W: https://github.com/CirrusLogic/linux-drivers/wiki
S: Supported
F: arch/x86/include/asm/xen/
F: include/xen/
F: include/uapi/xen/
+F: Documentation/ABI/stable/sysfs-hypervisor-xen
+F: Documentation/ABI/testing/sysfs-hypervisor-xen
XEN HYPERVISOR ARM
def_bool y
depends on 64BIT
# Options that are inherently 64-bit kernel only:
- select ARCH_HAS_GIGANTIC_PAGE
+ select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
select ARCH_SUPPORTS_INT128
select ARCH_USE_CMPXCHG_LOCKREF
select HAVE_ARCH_SOFT_DIRTY
select ARCH_HAS_KCOV if X86_64
select ARCH_HAS_MMIO_FLUSH
select ARCH_HAS_PMEM_API if X86_64
+ select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
- select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if SMP
+ select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANTS_DYNAMIC_TASK_STRUCT
+ select ARCH_WANTS_THP_SWAP if X86_64
select BUILDTIME_EXTABLE_SORT
select CLKEVT_I8253
select CLOCKSOURCE_VALIDATE_LAST_CYCLE
select GENERIC_EARLY_IOREMAP
select GENERIC_FIND_FIRST_BIT
select GENERIC_IOMAP
+ select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP
+ select GENERIC_IRQ_MIGRATION if SMP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_PENDING_IRQ if SMP
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_USER_RETURN_NOTIFIER
select IRQ_FORCED_THREADING
+ select PCI_LOCKLESS_CONFIG
select PERF_EVENTS
select RTC_LIB
select RTC_MC146818_LIB
def_bool y
config X86_MCE_INJECT
- depends on X86_MCE && X86_LOCAL_APIC && X86_MCELOG_LEGACY
+ depends on X86_MCE && X86_LOCAL_APIC && DEBUG_FS
tristate "Machine check injector support"
---help---
Provide support for injecting machine checks for testing purposes.
config SYSVIPC_COMPAT
def_bool y
depends on SYSVIPC
-
-config KEYS_COMPAT
- def_bool y
- depends on KEYS
endif
endmenu
bool
depends on STA2X11
+config HAVE_GENERIC_GUP
+ def_bool y
+
source "net/Kconfig"
source "drivers/Kconfig"
#ifdef CONFIG_BLK_DEV_RAM_DAX
#include <linux/pfn_t.h>
#include <linux/dax.h>
+ #include <linux/uio.h>
#endif
#include <linux/uaccess.h>
return __brd_direct_access(brd, pgoff, nr_pages, kaddr, pfn);
}
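+ /*
+ * brd is backed by ordinary RAM, so the dax ->copy_from_iter hook can be
+ * a plain copy_from_iter(); no cache flushing or MCE-safe copy is needed.
+ */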
+ static size_t brd_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+ return copy_from_iter(addr, bytes, i);
+ }
+
static const struct dax_operations brd_dax_ops = {
.direct_access = brd_dax_direct_access,
+ .copy_from_iter = brd_dax_copy_from_iter,
};
#endif
blk_queue_make_request(brd->brd_queue, brd_make_request);
blk_queue_max_hw_sectors(brd->brd_queue, 1024);
- blk_queue_bounce_limit(brd->brd_queue, BLK_BOUNCE_ANY);
/* This is so fdisk will align partitions on 4k, because of
* direct_access API needing 4k alignment, returning a PFN
#include <linux/cdev.h>
#include <linux/hash.h>
#include <linux/slab.h>
+ #include <linux/uio.h>
#include <linux/dax.h>
#include <linux/fs.h>
EXPORT_SYMBOL_GPL(__bdev_dax_supported);
#endif
+ enum dax_device_flags {
+ /* !alive + rcu grace period == no new operations / mappings */
+ DAXDEV_ALIVE,
+ /* gate whether dax_flush() calls the low level flush routine */
+ DAXDEV_WRITE_CACHE,
+ };
+
/**
* struct dax_device - anchor object for dax services
* @inode: core vfs
* @cdev: optional character interface for "device dax"
* @host: optional name for lookups where the device path is not available
* @private: dax driver private data
- * @alive: !alive + rcu grace period == no new operations / mappings
+ * @flags: state and boolean properties
*/
struct dax_device {
struct hlist_node list;
struct cdev cdev;
const char *host;
void *private;
- bool alive;
+ unsigned long flags;
const struct dax_operations *ops;
};
+ static ssize_t write_cache_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct dax_device *dax_dev = dax_get_by_host(dev_name(dev));
+ ssize_t rc;
+
+ WARN_ON_ONCE(!dax_dev);
+ if (!dax_dev)
+ return -ENXIO;
+
+ rc = sprintf(buf, "%d\n", !!test_bit(DAXDEV_WRITE_CACHE,
+ &dax_dev->flags));
+ put_dax(dax_dev);
+ return rc;
+ }
+
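+ /*
+ * Parse a boolean from sysfs and flip DAXDEV_WRITE_CACHE accordingly. On
+ * a parse failure strtobool() returns -EINVAL, which is stashed in 'len'
+ * so the error is returned only after the dax_dev reference is put.
+ */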
+ static ssize_t write_cache_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+ {
+ bool write_cache;
+ int rc = strtobool(buf, &write_cache);
+ struct dax_device *dax_dev = dax_get_by_host(dev_name(dev));
+
+ WARN_ON_ONCE(!dax_dev);
+ if (!dax_dev)
+ return -ENXIO;
+
+ if (rc)
+ len = rc;
+ else if (write_cache)
+ set_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags);
+ else
+ clear_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags);
+
+ put_dax(dax_dev);
+ return len;
+ }
+ static DEVICE_ATTR_RW(write_cache);
+
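+ /*
+ * Hide the write_cache attribute when the backing driver implements no
+ * flush operation; toggling the flag would have no effect in that case.
+ */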
+ static umode_t dax_visible(struct kobject *kobj, struct attribute *a, int n)
+ {
+ struct device *dev = container_of(kobj, typeof(*dev), kobj);
+ struct dax_device *dax_dev = dax_get_by_host(dev_name(dev));
+
+ WARN_ON_ONCE(!dax_dev);
+ if (!dax_dev)
+ return 0;
+
+ if (a == &dev_attr_write_cache.attr && !dax_dev->ops->flush)
+ return 0;
+ return a->mode;
+ }
+
+ static struct attribute *dax_attributes[] = {
+ &dev_attr_write_cache.attr,
+ NULL,
+ };
+
+ struct attribute_group dax_attribute_group = {
+ .name = "dax",
+ .attrs = dax_attributes,
+ .is_visible = dax_visible,
+ };
+ EXPORT_SYMBOL_GPL(dax_attribute_group);
+
/**
* dax_direct_access() - translate a device pgoff to an absolute pfn
* @dax_dev: a dax_device instance representing the logical memory range
}
EXPORT_SYMBOL_GPL(dax_direct_access);
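+ /*
+ * The wrappers below check that the device is still alive (callers hold
+ * the dax srcu read lock) before dispatching to the driver's
+ * dax_operations.
+ */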
+ size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ size_t bytes, struct iov_iter *i)
+ {
+ if (!dax_alive(dax_dev))
+ return 0;
+
+ return dax_dev->ops->copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+ }
+ EXPORT_SYMBOL_GPL(dax_copy_from_iter);
+
+ void dax_flush(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ size_t size)
+ {
+ if (!dax_alive(dax_dev))
+ return;
+
+ if (!test_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags))
+ return;
+
+ if (dax_dev->ops->flush)
+ dax_dev->ops->flush(dax_dev, pgoff, addr, size);
+ }
+ EXPORT_SYMBOL_GPL(dax_flush);
+
+ void dax_write_cache(struct dax_device *dax_dev, bool wc)
+ {
+ if (wc)
+ set_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags);
+ else
+ clear_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags);
+ }
+ EXPORT_SYMBOL_GPL(dax_write_cache);
+
bool dax_alive(struct dax_device *dax_dev)
{
lockdep_assert_held(&dax_srcu);
- return dax_dev->alive;
+ return test_bit(DAXDEV_ALIVE, &dax_dev->flags);
}
EXPORT_SYMBOL_GPL(dax_alive);
if (!dax_dev)
return;
- dax_dev->alive = false;
+ clear_bit(DAXDEV_ALIVE, &dax_dev->flags);
synchronize_srcu(&dax_srcu);
static struct inode *dax_alloc_inode(struct super_block *sb)
{
struct dax_device *dax_dev;
+ struct inode *inode;
dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL);
- return &dax_dev->inode;
+ inode = &dax_dev->inode;
+ inode->i_rdev = 0;
+ return inode;
}
static struct dax_device *to_dax_dev(struct inode *inode)
kfree(dax_dev->host);
dax_dev->host = NULL;
- ida_simple_remove(&dax_minor_ida, MINOR(inode->i_rdev));
+ if (inode->i_rdev)
+ ida_simple_remove(&dax_minor_ida, MINOR(inode->i_rdev));
kmem_cache_free(dax_cache, dax_dev);
}
{
struct dax_device *dax_dev = to_dax_dev(inode);
- WARN_ONCE(dax_dev->alive,
+ WARN_ONCE(test_bit(DAXDEV_ALIVE, &dax_dev->flags),
"kill_dax() must be called before final iput()\n");
call_rcu(&inode->i_rcu, dax_i_callback);
}
dax_dev = to_dax_dev(inode);
if (inode->i_state & I_NEW) {
- dax_dev->alive = true;
+ set_bit(DAXDEV_ALIVE, &dax_dev->flags);
inode->i_cdev = &dax_dev->cdev;
inode->i_mode = S_IFCHR;
inode->i_flags = S_DAX;
struct dax_device *dax_dev = _dax_dev;
struct inode *inode = &dax_dev->inode;
+ memset(dax_dev, 0, sizeof(*dax_dev));
inode_init_once(inode);
}
struct linear_c *lc = ti->private;
bio->bi_bdev = lc->dev->bdev;
- if (bio_sectors(bio))
+ if (bio_sectors(bio) || bio_op(bio) == REQ_OP_ZONE_RESET)
bio->bi_iter.bi_sector =
linear_map_sector(ti, bio->bi_iter.bi_sector);
}
return DM_MAPIO_REMAPPED;
}
+static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ blk_status_t *error)
+{
+ struct linear_c *lc = ti->private;
+
+ if (!*error && bio_op(bio) == REQ_OP_ZONE_REPORT)
+ dm_remap_zone_report(ti, bio, lc->start);
+
+ return DM_ENDIO_DONE;
+}
+
static void linear_status(struct dm_target *ti, status_type_t type,
unsigned status_flags, char *result, unsigned maxlen)
{
return dax_direct_access(dax_dev, pgoff, nr_pages, kaddr, pfn);
}
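+ /*
+ * dax operations arrive with a pgoff relative to the dm device; translate
+ * it to a sector on the underlying device, then back to a pgoff there,
+ * before calling into the dax core.
+ */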
+ static size_t linear_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+ struct linear_c *lc = ti->private;
+ struct block_device *bdev = lc->dev->bdev;
+ struct dax_device *dax_dev = lc->dev->dax_dev;
+ sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
+
+ dev_sector = linear_map_sector(ti, sector);
+ if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
+ return 0;
+ return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+ }
+
+ static void linear_dax_flush(struct dm_target *ti, pgoff_t pgoff, void *addr,
+ size_t size)
+ {
+ struct linear_c *lc = ti->private;
+ struct block_device *bdev = lc->dev->bdev;
+ struct dax_device *dax_dev = lc->dev->dax_dev;
+ sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
+
+ dev_sector = linear_map_sector(ti, sector);
+ if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(size, PAGE_SIZE), &pgoff))
+ return;
+ dax_flush(dax_dev, pgoff, addr, size);
+ }
+
static struct target_type linear_target = {
.name = "linear",
- .version = {1, 3, 0},
- .features = DM_TARGET_PASSES_INTEGRITY,
+ .version = {1, 4, 0},
+ .features = DM_TARGET_PASSES_INTEGRITY | DM_TARGET_ZONED_HM,
.module = THIS_MODULE,
.ctr = linear_ctr,
.dtr = linear_dtr,
.map = linear_map,
+ .end_io = linear_end_io,
.status = linear_status,
.prepare_ioctl = linear_prepare_ioctl,
.iterate_devices = linear_iterate_devices,
.direct_access = linear_dax_direct_access,
+ .dax_copy_from_iter = linear_dax_copy_from_iter,
+ .dax_flush = linear_dax_flush,
};
int __init dm_linear_init(void)
return dax_direct_access(dax_dev, pgoff, nr_pages, kaddr, pfn);
}
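+ /*
+ * As with normal I/O, dax accesses must first be striped: map the sector
+ * to a stripe and a device sector, then resolve the pgoff on that
+ * stripe's backing device.
+ */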
+ static size_t stripe_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+ sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
+ struct stripe_c *sc = ti->private;
+ struct dax_device *dax_dev;
+ struct block_device *bdev;
+ uint32_t stripe;
+
+ stripe_map_sector(sc, sector, &stripe, &dev_sector);
+ dev_sector += sc->stripe[stripe].physical_start;
+ dax_dev = sc->stripe[stripe].dev->dax_dev;
+ bdev = sc->stripe[stripe].dev->bdev;
+
+ if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
+ return 0;
+ return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+ }
+
+ static void stripe_dax_flush(struct dm_target *ti, pgoff_t pgoff, void *addr,
+ size_t size)
+ {
+ sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
+ struct stripe_c *sc = ti->private;
+ struct dax_device *dax_dev;
+ struct block_device *bdev;
+ uint32_t stripe;
+
+ stripe_map_sector(sc, sector, &stripe, &dev_sector);
+ dev_sector += sc->stripe[stripe].physical_start;
+ dax_dev = sc->stripe[stripe].dev->dax_dev;
+ bdev = sc->stripe[stripe].dev->bdev;
+
+ if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(size, PAGE_SIZE), &pgoff))
+ return;
+ dax_flush(dax_dev, pgoff, addr, size);
+ }
+
/*
* Stripe status:
*
}
}
-static int stripe_end_io(struct dm_target *ti, struct bio *bio, int error)
+static int stripe_end_io(struct dm_target *ti, struct bio *bio,
+ blk_status_t *error)
{
unsigned i;
char major_minor[16];
struct stripe_c *sc = ti->private;
- if (!error)
- return 0; /* I/O complete */
+ if (!*error)
+ return DM_ENDIO_DONE; /* I/O complete */
- if ((error == -EWOULDBLOCK) && (bio->bi_opf & REQ_RAHEAD))
- return error;
+ if (bio->bi_opf & REQ_RAHEAD)
+ return DM_ENDIO_DONE;
- if (error == -EOPNOTSUPP)
- return error;
+ if (*error == BLK_STS_NOTSUPP)
+ return DM_ENDIO_DONE;
memset(major_minor, 0, sizeof(major_minor));
sprintf(major_minor, "%d:%d",
schedule_work(&sc->trigger_event);
}
- return error;
+ return DM_ENDIO_DONE;
}
static int stripe_iterate_devices(struct dm_target *ti,
.iterate_devices = stripe_iterate_devices,
.io_hints = stripe_io_hints,
.direct_access = stripe_dax_direct_access,
+ .dax_copy_from_iter = stripe_dax_copy_from_iter,
+ .dax_flush = stripe_dax_flush,
};
int __init dm_stripe_init(void)
#include <linux/dax.h>
#include <linux/slab.h>
#include <linux/idr.h>
+ #include <linux/uio.h>
#include <linux/hdreg.h>
#include <linux/delay.h>
#include <linux/wait.h>
static struct workqueue_struct *deferred_remove_workqueue;
+atomic_t dm_global_event_nr = ATOMIC_INIT(0);
+DECLARE_WAIT_QUEUE_HEAD(dm_global_eventq);
+
/*
* One of these is allocated per bio.
*/
struct dm_io {
struct mapped_device *md;
- int error;
+ blk_status_t status;
atomic_t io_count;
struct bio *bio;
unsigned long start_time;
* Decrements the number of outstanding ios that a bio has been
* cloned into, completing the original io if necessary.
*/
-static void dec_pending(struct dm_io *io, int error)
+static void dec_pending(struct dm_io *io, blk_status_t error)
{
unsigned long flags;
- int io_error;
+ blk_status_t io_error;
struct bio *bio;
struct mapped_device *md = io->md;
/* Push-back supersedes any I/O errors */
if (unlikely(error)) {
spin_lock_irqsave(&io->endio_lock, flags);
- if (!(io->error > 0 && __noflush_suspending(md)))
- io->error = error;
+ if (!(io->status == BLK_STS_DM_REQUEUE &&
+ __noflush_suspending(md)))
+ io->status = error;
spin_unlock_irqrestore(&io->endio_lock, flags);
}
if (atomic_dec_and_test(&io->io_count)) {
- if (io->error == DM_ENDIO_REQUEUE) {
+ if (io->status == BLK_STS_DM_REQUEUE) {
/*
* Target requested pushing back the I/O.
*/
bio_list_add_head(&md->deferred, io->bio);
else
/* noflush suspend was interrupted. */
- io->error = -EIO;
+ io->status = BLK_STS_IOERR;
spin_unlock_irqrestore(&md->deferred_lock, flags);
}
- io_error = io->error;
+ io_error = io->status;
bio = io->bio;
end_io_acct(io);
free_io(md, io);
- if (io_error == DM_ENDIO_REQUEUE)
+ if (io_error == BLK_STS_DM_REQUEUE)
return;
if ((bio->bi_opf & REQ_PREFLUSH) && bio->bi_iter.bi_size) {
queue_io(md, bio);
} else {
/* done with normal IO or empty flush */
- bio->bi_error = io_error;
+ bio->bi_status = io_error;
bio_endio(bio);
}
}
static void clone_endio(struct bio *bio)
{
- int error = bio->bi_error;
- int r = error;
+ blk_status_t error = bio->bi_status;
struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
struct dm_io *io = tio->io;
struct mapped_device *md = tio->io->md;
dm_endio_fn endio = tio->ti->type->end_io;
- if (endio) {
- r = endio(tio->ti, bio, error);
- if (r < 0 || r == DM_ENDIO_REQUEUE)
- /*
- * error and requeue request are handled
- * in dec_pending().
- */
- error = r;
- else if (r == DM_ENDIO_INCOMPLETE)
- /* The target will handle the io */
- return;
- else if (r) {
- DMWARN("unimplemented target endio return value: %d", r);
- BUG();
- }
- }
-
- if (unlikely(r == -EREMOTEIO)) {
+ if (unlikely(error == BLK_STS_TARGET)) {
if (bio_op(bio) == REQ_OP_WRITE_SAME &&
!bdev_get_queue(bio->bi_bdev)->limits.max_write_same_sectors)
disable_write_same(md);
disable_write_zeroes(md);
}
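+ /* let the target's optional end_io hook inspect or override the status */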
+ if (endio) {
+ int r = endio(tio->ti, bio, &error);
+ switch (r) {
+ case DM_ENDIO_REQUEUE:
+ error = BLK_STS_DM_REQUEUE;
+ /*FALLTHRU*/
+ case DM_ENDIO_DONE:
+ break;
+ case DM_ENDIO_INCOMPLETE:
+ /* The target will handle the io */
+ return;
+ default:
+ DMWARN("unimplemented target endio return value: %d", r);
+ BUG();
+ }
+ }
+
free_tio(tio);
dec_pending(io, error);
}
return ret;
}
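+ /*
+ * Route the copy to whichever target owns this sector; targets that do
+ * not provide a dax_copy_from_iter hook fall back to plain
+ * copy_from_iter().
+ */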
+ static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+ struct mapped_device *md = dax_get_private(dax_dev);
+ sector_t sector = pgoff * PAGE_SECTORS;
+ struct dm_target *ti;
+ long ret = 0;
+ int srcu_idx;
+
+ ti = dm_dax_get_live_target(md, sector, &srcu_idx);
+
+ if (!ti)
+ goto out;
+ if (!ti->type->dax_copy_from_iter) {
+ ret = copy_from_iter(addr, bytes, i);
+ goto out;
+ }
+ ret = ti->type->dax_copy_from_iter(ti, pgoff, addr, bytes, i);
+ out:
+ dm_put_live_table(md, srcu_idx);
+
+ return ret;
+ }
+
+ static void dm_dax_flush(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ size_t size)
+ {
+ struct mapped_device *md = dax_get_private(dax_dev);
+ sector_t sector = pgoff * PAGE_SECTORS;
+ struct dm_target *ti;
+ int srcu_idx;
+
+ ti = dm_dax_get_live_target(md, sector, &srcu_idx);
+
+ if (!ti)
+ goto out;
+ if (ti->type->dax_flush)
+ ti->type->dax_flush(ti, pgoff, addr, size);
+ out:
+ dm_put_live_table(md, srcu_idx);
+ }
+
/*
* A target may call dm_accept_partial_bio only from the map routine. It is
* allowed for all bio types except REQ_PREFLUSH.
}
EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+/*
+ * The zone descriptors obtained with a zone report indicate
+ * zone positions within the target device. The zone descriptors
+ * must be remapped to match their position within the dm device.
+ * A target may call dm_remap_zone_report after completion of a
+ * REQ_OP_ZONE_REPORT bio to remap the zone descriptors from target
+ * device coordinates to dm device coordinates.
+ */
+void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+{
+#ifdef CONFIG_BLK_DEV_ZONED
+ struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+ struct bio *report_bio = tio->io->bio;
+ struct blk_zone_report_hdr *hdr = NULL;
+ struct blk_zone *zone;
+ unsigned int nr_rep = 0;
+ unsigned int ofst;
+ struct bio_vec bvec;
+ struct bvec_iter iter;
+ void *addr;
+
+ if (bio->bi_status)
+ return;
+
+ /*
+ * Remap the start sector of the reported zones. For sequential zones,
+ * also remap the write pointer position.
+ */
+ bio_for_each_segment(bvec, report_bio, iter) {
+ addr = kmap_atomic(bvec.bv_page);
+
+ /* Remember the report header in the first page */
+ if (!hdr) {
+ hdr = addr;
+ ofst = sizeof(struct blk_zone_report_hdr);
+ } else
+ ofst = 0;
+
+ /* Set each zone's start sector */
+ while (hdr->nr_zones && ofst < bvec.bv_len) {
+ zone = addr + ofst;
+ if (zone->start >= start + ti->len) {
+ hdr->nr_zones = 0;
+ break;
+ }
+ zone->start = zone->start + ti->begin - start;
+ if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) {
+ if (zone->cond == BLK_ZONE_COND_FULL)
+ zone->wp = zone->start + zone->len;
+ else if (zone->cond == BLK_ZONE_COND_EMPTY)
+ zone->wp = zone->start;
+ else
+ zone->wp = zone->wp + ti->begin - start;
+ }
+ ofst += sizeof(struct blk_zone);
+ hdr->nr_zones--;
+ nr_rep++;
+ }
+
+ if (addr != hdr)
+ kunmap_atomic(addr);
+
+ if (!hdr->nr_zones)
+ break;
+ }
+
+ if (hdr) {
+ hdr->nr_zones = nr_rep;
+ kunmap_atomic(hdr);
+ }
+
+ bio_advance(report_bio, report_bio->bi_iter.bi_size);
+
+#else /* !CONFIG_BLK_DEV_ZONED */
+ bio->bi_status = BLK_STS_NOTSUPP;
+#endif
+}
+EXPORT_SYMBOL_GPL(dm_remap_zone_report);
+
/*
* Flush current->bio_list when the target map method blocks.
* This fixes deadlocks in snapshot and possibly in other targets.
while ((bio = bio_list_pop(&list))) {
struct bio_set *bs = bio->bi_pool;
- if (unlikely(!bs) || bs == fs_bio_set) {
+ if (unlikely(!bs) || bs == fs_bio_set ||
+ !bs->rescue_workqueue) {
bio_list_add(¤t->bio_list[i], bio);
continue;
}
r = ti->type->map(ti, clone);
dm_offload_end(&o);
- if (r == DM_MAPIO_REMAPPED) {
+ switch (r) {
+ case DM_MAPIO_SUBMITTED:
+ break;
+ case DM_MAPIO_REMAPPED:
/* the bio has been remapped so dispatch it */
-
trace_block_bio_remap(bdev_get_queue(clone->bi_bdev), clone,
tio->io->bio->bi_bdev->bd_dev, sector);
-
generic_make_request(clone);
- } else if (r < 0 || r == DM_MAPIO_REQUEUE) {
- /* error the io and bail out, or requeue it if needed */
- dec_pending(tio->io, r);
+ break;
+ case DM_MAPIO_KILL:
+ dec_pending(tio->io, BLK_STS_IOERR);
free_tio(tio);
- } else if (r != DM_MAPIO_SUBMITTED) {
+ break;
+ case DM_MAPIO_REQUEUE:
+ dec_pending(tio->io, BLK_STS_DM_REQUEUE);
+ free_tio(tio);
+ break;
+ default:
DMWARN("unimplemented target map return value: %d", r);
BUG();
}
return r;
}
- bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
+ if (bio_op(bio) != REQ_OP_ZONE_REPORT)
+ bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
clone->bi_iter.bi_size = to_bytes(len);
if (unlikely(bio_integrity(bio) != NULL))
if (!dm_target_is_valid(ti))
return -EIO;
- len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
+ if (bio_op(bio) == REQ_OP_ZONE_REPORT)
+ len = ci->sector_count;
+ else
+ len = min_t(sector_t, max_io_len(ci->sector, ti),
+ ci->sector_count);
r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
if (r < 0)
ci.map = map;
ci.md = md;
ci.io = alloc_io(md);
- ci.io->error = 0;
+ ci.io->status = BLK_STS_OK;
atomic_set(&ci.io->io_count, 1);
ci.io->bio = bio;
ci.io->md = md;
ci.sector_count = 0;
error = __send_empty_flush(&ci);
/* dec_pending submits any data associated with flush */
+ } else if (bio_op(bio) == REQ_OP_ZONE_RESET) {
+ ci.bio = bio;
+ ci.sector_count = 0;
+ error = __split_and_process_non_flush(&ci);
} else {
ci.bio = bio;
ci.sector_count = bio_sectors(bio);
* Initialize aspects of queue that aren't relevant for blk-mq
*/
md->queue->backing_dev_info->congested_fn = dm_any_congested;
- blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
}
static void cleanup_mapped_device(struct mapped_device *md)
dm_send_uevents(&uevents, &disk_to_dev(md->disk)->kobj);
atomic_inc(&md->event_nr);
+ atomic_inc(&dm_global_event_nr);
wake_up(&md->eventq);
+ wake_up(&dm_global_eventq);
}
/*
BUG();
}
- pools->bs = bioset_create_nobvec(pool_size, front_pad);
+ pools->bs = bioset_create(pool_size, front_pad, BIOSET_NEED_RESCUER);
if (!pools->bs)
goto out;
static const struct dax_operations dm_dax_ops = {
.direct_access = dm_dax_direct_access,
+ .copy_from_iter = dm_dax_copy_from_iter,
+ .flush = dm_dax_flush,
};
/*
struct nd_btt *nd_btt = arena->nd_btt;
struct nd_namespace_common *ndns = nd_btt->ndns;
- /* arena offsets are 4K from the base of the device */
- offset += SZ_4K;
+ /* arena offsets may be shifted from the base of the device */
+ offset += arena->nd_btt->initial_offset;
return nvdimm_read_bytes(ndns, offset, buf, n, flags);
}
struct nd_btt *nd_btt = arena->nd_btt;
struct nd_namespace_common *ndns = nd_btt->ndns;
- /* arena offsets are 4K from the base of the device */
- offset += SZ_4K;
+ /* arena offsets may be shifted from the base of the device */
+ offset += arena->nd_btt->initial_offset;
return nvdimm_write_bytes(ndns, offset, buf, n, flags);
}
old_ent = btt_log_get_old(log);
if (old_ent < 0 || old_ent > 1) {
- dev_info(to_dev(arena),
+ dev_err(to_dev(arena),
"log corruption (%d): lane %d seq [%d, %d]\n",
old_ent, lane, log[0].seq, log[1].seq);
/* TODO set error state? */
arena->internal_lbasize = roundup(arena->external_lbasize,
INT_LBASIZE_ALIGNMENT);
arena->nfree = BTT_DEFAULT_NFREE;
- arena->version_major = 1;
- arena->version_minor = 1;
+ arena->version_major = btt->nd_btt->version_major;
+ arena->version_minor = btt->nd_btt->version_minor;
if (available % BTT_PG_SIZE)
available -= (available % BTT_PG_SIZE);
dev_info(to_dev(arena), "No existing arenas\n");
goto out;
} else {
- dev_info(to_dev(arena),
+ dev_err(to_dev(arena),
"Found corrupted metadata!\n");
ret = -ENODEV;
goto out;
* another kernel subsystem, and we just pass it through.
*/
if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
- bio->bi_error = -EIO;
+ bio->bi_status = BLK_STS_IOERR;
goto out;
}
err = btt_do_bvec(btt, bip, bvec.bv_page, len, bvec.bv_offset,
op_is_write(bio_op(bio)), iter.bi_sector);
if (err) {
- dev_info(&btt->nd_btt->dev,
+ dev_err(&btt->nd_btt->dev,
"io error in %s sector %lld, len %d,\n",
(op_is_write(bio_op(bio))) ? "WRITE" :
"READ",
(unsigned long long) iter.bi_sector, len);
- bio->bi_error = err;
+ bio->bi_status = errno_to_blk_status(err);
break;
}
}
struct page *page, bool is_write)
{
struct btt *btt = bdev->bd_disk->private_data;
+ int rc;
- btt_do_bvec(btt, NULL, page, PAGE_SIZE, 0, is_write, sector);
- page_endio(page, is_write, 0);
- return 0;
+ rc = btt_do_bvec(btt, NULL, page, PAGE_SIZE, 0, is_write, sector);
+ if (rc == 0)
+ page_endio(page, is_write, 0);
+
+ return rc;
}
blk_queue_make_request(btt->btt_queue, btt_make_request);
blk_queue_logical_block_size(btt->btt_queue, btt->sector_size);
blk_queue_max_hw_sectors(btt->btt_queue, UINT_MAX);
- blk_queue_bounce_limit(btt->btt_queue, BLK_BOUNCE_ANY);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, btt->btt_queue);
btt->btt_queue->queuedata = btt;
}
if (btt->init_state != INIT_READY && nd_region->ro) {
- dev_info(dev, "%s is read-only, unable to init btt metadata\n",
+ dev_warn(dev, "%s is read-only, unable to init btt metadata\n",
dev_name(&nd_region->dev));
return NULL;
} else if (btt->init_state != INIT_READY) {
{
struct nd_btt *nd_btt = to_nd_btt(ndns->claim);
struct nd_region *nd_region;
+ struct btt_sb *btt_sb;
struct btt *btt;
size_t rawsize;
return -ENODEV;
}
- rawsize = nvdimm_namespace_capacity(ndns) - SZ_4K;
+ btt_sb = devm_kzalloc(&nd_btt->dev, sizeof(*btt_sb), GFP_KERNEL);
+
+ /*
+ * If this returns < 0, that is OK: it just means there wasn't an
+ * existing BTT and we're creating a new one. We still need to call
+ * this, as the version-dependent fields in nd_btt must be set
+ * correctly based on the holder class.
+ */
+ nd_btt_version(nd_btt, ndns, btt_sb);
+
+ rawsize = nvdimm_namespace_capacity(ndns) - nd_btt->initial_offset;
if (rawsize < ARENA_MIN_SIZE) {
dev_dbg(&nd_btt->dev, "%s must be at least %ld bytes\n",
- dev_name(&ndns->dev), ARENA_MIN_SIZE + SZ_4K);
+ dev_name(&ndns->dev),
+ ARENA_MIN_SIZE + nd_btt->initial_offset);
return -ENXIO;
}
nd_region = to_nd_region(nd_btt->dev.parent);
#include <linux/blk-mq.h>
#include <linux/pfn_t.h>
#include <linux/slab.h>
- #include <linux/pmem.h>
+ #include <linux/uio.h>
#include <linux/dax.h>
#include <linux/nd.h>
#include "pmem.h"
return to_nd_region(to_dev(pmem)->parent);
}
-static int pmem_clear_poison(struct pmem_device *pmem, phys_addr_t offset,
- unsigned int len)
+static blk_status_t pmem_clear_poison(struct pmem_device *pmem,
+ phys_addr_t offset, unsigned int len)
{
struct device *dev = to_dev(pmem);
sector_t sector;
long cleared;
- int rc = 0;
+ blk_status_t rc = BLK_STS_OK;
sector = (offset - pmem->data_offset) / 512;
cleared = nvdimm_clear_poison(dev, pmem->phys_addr + offset, len);
if (cleared < len)
- rc = -EIO;
+ rc = BLK_STS_IOERR;
if (cleared > 0 && cleared / 512) {
cleared /= 512;
dev_dbg(dev, "%s: %#llx clear %ld sector%s\n", __func__,
(unsigned long long) sector, cleared,
cleared > 1 ? "s" : "");
badblocks_clear(&pmem->bb, sector, cleared);
+ if (pmem->bb_state)
+ sysfs_notify_dirent(pmem->bb_state);
}
- invalidate_pmem(pmem->virt_addr + offset, len);
+ arch_invalidate_pmem(pmem->virt_addr + offset, len);
return rc;
}
{
void *mem = kmap_atomic(page);
- memcpy_to_pmem(pmem_addr, mem + off, len);
+ memcpy_flushcache(pmem_addr, mem + off, len);
kunmap_atomic(mem);
}
-static int read_pmem(struct page *page, unsigned int off,
+static blk_status_t read_pmem(struct page *page, unsigned int off,
void *pmem_addr, unsigned int len)
{
int rc;
rc = memcpy_mcsafe(mem + off, pmem_addr, len);
kunmap_atomic(mem);
if (rc)
- return -EIO;
- return 0;
+ return BLK_STS_IOERR;
+ return BLK_STS_OK;
}
-static int pmem_do_bvec(struct pmem_device *pmem, struct page *page,
+static blk_status_t pmem_do_bvec(struct pmem_device *pmem, struct page *page,
unsigned int len, unsigned int off, bool is_write,
sector_t sector)
{
- int rc = 0;
+ blk_status_t rc = BLK_STS_OK;
bool bad_pmem = false;
phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
void *pmem_addr = pmem->virt_addr + pmem_off;
if (!is_write) {
if (unlikely(bad_pmem))
- rc = -EIO;
+ rc = BLK_STS_IOERR;
else {
rc = read_pmem(page, off, pmem_addr, len);
flush_dcache_page(page);
static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
{
- int rc = 0;
+ blk_status_t rc = BLK_STS_OK;
bool do_acct;
unsigned long start;
struct bio_vec bvec;
bvec.bv_offset, op_is_write(bio_op(bio)),
iter.bi_sector);
if (rc) {
- bio->bi_error = rc;
+ bio->bi_status = rc;
break;
}
}
struct page *page, bool is_write)
{
struct pmem_device *pmem = bdev->bd_queue->queuedata;
- int rc;
+ blk_status_t rc;
rc = pmem_do_bvec(pmem, page, PAGE_SIZE, 0, is_write, sector);
if (rc == 0)
page_endio(page, is_write, 0);
- return rc;
+ return blk_status_to_errno(rc);
}
/* see "strong" declaration in tools/testing/nvdimm/pmem-dax.c */
return __pmem_direct_access(pmem, pgoff, nr_pages, kaddr, pfn);
}
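+ /*
+ * Writes to pmem must be durable: copy_from_iter_flushcache() performs a
+ * cache-bypassing/flushing copy, and pmem_dax_flush() writes back any
+ * cachelines dirtied through the dax mapping.
+ */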
+ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+ return copy_from_iter_flushcache(addr, bytes, i);
+ }
+
+ static void pmem_dax_flush(struct dax_device *dax_dev, pgoff_t pgoff,
+ void *addr, size_t size)
+ {
+ arch_wb_cache_pmem(addr, size);
+ }
+
static const struct dax_operations pmem_dax_ops = {
.direct_access = pmem_dax_direct_access,
+ .copy_from_iter = pmem_copy_from_iter,
+ .flush = pmem_dax_flush,
+ };
+
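+ /*
+ * Attach the dax core's "dax" attribute group (the write_cache knob) to
+ * the pmem gendisk so the flag is controllable from sysfs.
+ */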
+ static const struct attribute_group *pmem_attribute_groups[] = {
+ &dax_attribute_group,
+ NULL,
};
static void pmem_release_queue(void *q)
struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
struct nd_region *nd_region = to_nd_region(dev->parent);
struct vmem_altmap __altmap, *altmap = NULL;
+ int nid = dev_to_node(dev), fua, wbc;
struct resource *res = &nsio->res;
struct nd_pfn *nd_pfn = NULL;
struct dax_device *dax_dev;
- int nid = dev_to_node(dev);
struct nd_pfn_sb *pfn_sb;
struct pmem_device *pmem;
struct resource pfn_res;
struct request_queue *q;
+ struct device *gendev;
struct gendisk *disk;
void *addr;
dev_set_drvdata(dev, pmem);
pmem->phys_addr = res->start;
pmem->size = resource_size(res);
- if (nvdimm_has_flush(nd_region) < 0)
+ fua = nvdimm_has_flush(nd_region);
+ if (!IS_ENABLED(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) || fua < 0) {
dev_warn(dev, "unable to guarantee persistence of writes\n");
+ fua = 0;
+ }
+ wbc = nvdimm_has_cache(nd_region);
if (!devm_request_mem_region(dev, res->start, resource_size(res),
dev_name(&ndns->dev))) {
return PTR_ERR(addr);
pmem->virt_addr = addr;
- blk_queue_write_cache(q, true, true);
+ blk_queue_write_cache(q, wbc, fua);
blk_queue_make_request(q, pmem_make_request);
blk_queue_physical_block_size(q, PAGE_SIZE);
+ blk_queue_logical_block_size(q, pmem_sector_size(ndns));
blk_queue_max_hw_sectors(q, UINT_MAX);
- blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
queue_flag_set_unlocked(QUEUE_FLAG_DAX, q);
q->queuedata = pmem;
put_disk(disk);
return -ENOMEM;
}
+ dax_write_cache(dax_dev, wbc);
pmem->dax_dev = dax_dev;
+ gendev = disk_to_dev(disk);
+ gendev->groups = pmem_attribute_groups;
+
device_add_disk(dev, disk);
if (devm_add_action_or_reset(dev, pmem_release_disk, pmem))
return -ENOMEM;
revalidate_disk(disk);
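+ /* cache the 'badblocks' sysfs node so later poison updates can notify pollers */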
+ pmem->bb_state = sysfs_get_dirent(disk_to_dev(disk)->kobj.sd,
+ "badblocks");
+ if (!pmem->bb_state)
+ dev_warn(dev, "'badblocks' notification disabled\n");
+
return 0;
}
static int nd_pmem_remove(struct device *dev)
{
+ struct pmem_device *pmem = dev_get_drvdata(dev);
+
if (is_nd_btt(dev))
nvdimm_namespace_detach_btt(to_nd_btt(dev));
+ else {
+ /*
+ * Note: this relies on device_lock() context, so it cannot race
+ * nd_pmem_notify().
+ */
+ sysfs_put(pmem->bb_state);
+ pmem->bb_state = NULL;
+ }
nvdimm_flush(to_nd_region(dev->parent));
return 0;
struct nd_namespace_io *nsio;
struct resource res;
struct badblocks *bb;
+ struct kernfs_node *bb_state;
if (event != NVDIMM_REVALIDATE_POISON)
return;
nd_region = to_nd_region(ndns->dev.parent);
nsio = to_nd_namespace_io(&ndns->dev);
bb = &nsio->bb;
+ bb_state = NULL;
} else {
struct pmem_device *pmem = dev_get_drvdata(dev);
nd_region = to_region(pmem);
bb = &pmem->bb;
+ bb_state = pmem->bb_state;
if (is_nd_pfn(dev)) {
struct nd_pfn *nd_pfn = to_nd_pfn(dev);
res.start = nsio->res.start + offset;
res.end = nsio->res.end - end_trunc;
nvdimm_badblocks_populate(nd_region, bb, &res);
+ if (bb_state)
+ sysfs_notify_dirent(bb_state);
}
MODULE_ALIAS("pmem");
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/pfn_t.h>
+ #include <linux/uio.h>
#include <linux/dax.h>
#include <asm/extmem.h>
#include <asm/io.h>
.release = dcssblk_release,
};
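+ /*
+ * DCSS segments need no special flushing semantics, so a plain
+ * copy_from_iter() suffices for the dax copy hook.
+ */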
+ static size_t dcssblk_dax_copy_from_iter(struct dax_device *dax_dev,
+ pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i)
+ {
+ return copy_from_iter(addr, bytes, i);
+ }
+
static const struct dax_operations dcssblk_dax_ops = {
.direct_access = dcssblk_dax_direct_access,
+ .copy_from_iter = dcssblk_dax_copy_from_iter,
};
struct dcssblk_dev_info {
unsigned long source_addr;
unsigned long bytes_done;
- blk_queue_split(q, &bio, q->bio_split);
+ blk_queue_split(q, &bio);
bytes_done = 0;
dev_info = bio->bi_bdev->bd_disk->private_data;
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/pagevec.h>
- #include <linux/pmem.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/uio.h>
};
struct wait_exceptional_entry_queue {
- wait_queue_t wait;
+ wait_queue_entry_t wait;
struct exceptional_entry_key key;
};
return wait_table + hash;
}
-static int wake_exceptional_entry_func(wait_queue_t *wait, unsigned int mode,
+static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mode,
int sync, void *keyp)
{
struct exceptional_entry_key *key = keyp;
}
dax_mapping_entry_mkclean(mapping, index, pfn_t_to_pfn(pfn));
- wb_cache_pmem(kaddr, size);
+ dax_flush(dax_dev, pgoff, kaddr, size);
/*
* After we have flushed the cache, we can clear the dirty tag. There
* cannot be new dirty data in the pfn after the flush has completed as
if (ret < 0)
goto out;
}
+ start_index = indices[pvec.nr - 1] + 1;
}
out:
put_dax(dax_dev);
dax_read_unlock(id);
return rc;
}
- clear_pmem(kaddr + offset, size);
+ memset(kaddr + offset, 0, size);
+ dax_flush(dax_dev, pgoff, kaddr + offset, size);
dax_read_unlock(id);
}
return 0;
map_len = end - pos;
if (iov_iter_rw(iter) == WRITE)
- map_len = copy_from_iter_pmem(kaddr, map_len, iter);
+ map_len = dax_copy_from_iter(dax_dev, pgoff, kaddr,
+ map_len, iter);
else
map_len = copy_to_iter(kaddr, map_len, iter);
if (map_len <= 0) {
case IOMAP_MAPPED:
if (iomap.flags & IOMAP_F_NEW) {
count_vm_event(PGMAJFAULT);
- mem_cgroup_count_vm_event(vmf->vma->vm_mm, PGMAJFAULT);
+ count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
major = VM_FAULT_MAJOR;
}
error = dax_insert_mapping(mapping, iomap.bdev, iomap.dax_dev,
* 2 : The target wants to push back the io
*/
typedef int (*dm_endio_fn) (struct dm_target *ti,
- struct bio *bio, int error);
+ struct bio *bio, blk_status_t *error);
typedef int (*dm_request_endio_fn) (struct dm_target *ti,
- struct request *clone, int error,
+ struct request *clone, blk_status_t error,
union map_info *map_context);
typedef void (*dm_presuspend_fn) (struct dm_target *ti);
*/
typedef long (*dm_dax_direct_access_fn) (struct dm_target *ti, pgoff_t pgoff,
long nr_pages, void **kaddr, pfn_t *pfn);
+ typedef size_t (*dm_dax_copy_from_iter_fn)(struct dm_target *ti, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i);
+ typedef void (*dm_dax_flush_fn)(struct dm_target *ti, pgoff_t pgoff, void *addr,
+ size_t size);
#define PAGE_SECTORS (PAGE_SIZE / 512)
void dm_error(const char *message);
dm_iterate_devices_fn iterate_devices;
dm_io_hints_fn io_hints;
dm_dax_direct_access_fn direct_access;
+ dm_dax_copy_from_iter_fn dax_copy_from_iter;
+ dm_dax_flush_fn dax_flush;
/* For internal device-mapper use. */
struct list_head list;
#define DM_TARGET_PASSES_INTEGRITY 0x00000020
#define dm_target_passes_integrity(type) ((type)->features & DM_TARGET_PASSES_INTEGRITY)
+/*
+ * Indicates that a target supports host-managed zoned block devices.
+ */
+#define DM_TARGET_ZONED_HM 0x00000040
+#define dm_target_supports_zoned_hm(type) ((type)->features & DM_TARGET_ZONED_HM)
+
struct dm_target {
struct dm_table *table;
struct target_type *type;
int dm_suspended(struct dm_target *ti);
int dm_noflush_suspending(struct dm_target *ti);
void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors);
+void dm_remap_zone_report(struct dm_target *ti, struct bio *bio,
+ sector_t start);
union map_info *dm_get_rq_mapinfo(struct request *rq);
struct queue_limits *dm_get_queue_limits(struct mapped_device *md);
#define dm_ratelimit() 0
#endif
-#define DMCRIT(f, arg...) \
- printk(KERN_CRIT DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
-
-#define DMERR(f, arg...) \
- printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
-#define DMERR_LIMIT(f, arg...) \
- do { \
- if (dm_ratelimit()) \
- printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " \
- f "\n", ## arg); \
- } while (0)
-
-#define DMWARN(f, arg...) \
- printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
-#define DMWARN_LIMIT(f, arg...) \
- do { \
- if (dm_ratelimit()) \
- printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " \
- f "\n", ## arg); \
- } while (0)
-
-#define DMINFO(f, arg...) \
- printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
-#define DMINFO_LIMIT(f, arg...) \
- do { \
- if (dm_ratelimit()) \
- printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f \
- "\n", ## arg); \
- } while (0)
+#define DM_FMT(fmt) DM_NAME ": " DM_MSG_PREFIX ": " fmt "\n"
+
+#define DMCRIT(fmt, ...) pr_crit(DM_FMT(fmt), ##__VA_ARGS__)
+
+#define DMERR(fmt, ...) pr_err(DM_FMT(fmt), ##__VA_ARGS__)
+#define DMERR_LIMIT(fmt, ...) \
+do { \
+ if (dm_ratelimit()) \
+ DMERR(fmt, ##__VA_ARGS__); \
+} while (0)
+
+#define DMWARN(fmt, ...) pr_warn(DM_FMT(fmt), ##__VA_ARGS__)
+#define DMWARN_LIMIT(fmt, ...) \
+do { \
+ if (dm_ratelimit()) \
+ DMWARN(fmt, ##__VA_ARGS__); \
+} while (0)
+
+#define DMINFO(fmt, ...) pr_info(DM_FMT(fmt), ##__VA_ARGS__)
+#define DMINFO_LIMIT(fmt, ...) \
+do { \
+ if (dm_ratelimit()) \
+ DMINFO(fmt, ##__VA_ARGS__); \
+} while (0)
#ifdef CONFIG_DM_DEBUG
-# define DMDEBUG(f, arg...) \
- printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX " DEBUG: " f "\n", ## arg)
-# define DMDEBUG_LIMIT(f, arg...) \
- do { \
- if (dm_ratelimit()) \
- printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX ": " f \
- "\n", ## arg); \
- } while (0)
+#define DMDEBUG(fmt, ...) printk(KERN_DEBUG DM_FMT(fmt), ##__VA_ARGS__)
+#define DMDEBUG_LIMIT(fmt, ...) \
+do { \
+ if (dm_ratelimit()) \
+ DMDEBUG(fmt, ##__VA_ARGS__); \
+} while (0)
#else
-# define DMDEBUG(f, arg...) do {} while (0)
-# define DMDEBUG_LIMIT(f, arg...) do {} while (0)
+#define DMDEBUG(fmt, ...) no_printk(fmt, ##__VA_ARGS__)
+#define DMDEBUG_LIMIT(fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#define DMEMIT(x...) sz += ((sz >= maxlen) ? \
endchoice
+config CRC4
+ tristate "CRC4 functions"
+ help
+ This option is provided for the case where no in-kernel-tree
+ modules require CRC4 functions, but a module built outside
+ the kernel tree does. Such modules that use library CRC4
+ functions require M here.
+
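+# Library CRC options such as CRC4 are normally pulled in via "select"
+# from the Kconfig entry of the driver that needs them.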
config CRC7
tristate "CRC7 functions"
help
config ARCH_HAS_PMEM_API
bool
+ config ARCH_HAS_UACCESS_FLUSHCACHE
+ bool
+
config ARCH_HAS_MMIO_FLUSH
bool