| Cavium | ThunderX2 Core | #219 | CAVIUM_TX2_ERRATUM_219 |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
++++++ ++++| Marvell | ARM-MMU-500 | #582743 | N/A |
++++++ +++++----------------+-----------------+-----------------+-----------------------------+
++++++ +++++----------------+-----------------+-----------------+-----------------------------+
| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Qualcomm Tech. | Falkor v{1,2} | E1041 | QCOM_FALKOR_ERRATUM_1041 |
+----------------+-----------------+-----------------+-----------------------------+
+++ ++++++| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1463225 |
+++ +++++++----------------+-----------------+-----------------+-----------------------------+
+++ ++++++| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1418040 |
+++ +++++++----------------+-----------------+-----------------+-----------------------------+
+++ ++++++| Qualcomm Tech. | Kryo4xx Silver | N/A | ARM64_ERRATUM_1530923 |
+++ +++++++----------------+-----------------+-----------------+-----------------------------+
+++ ++++++| Qualcomm Tech. | Kryo4xx Silver | N/A | ARM64_ERRATUM_1024718 |
+++ +++++++----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Fujitsu | A64FX | E#010001 | FUJITSU_ERRATUM_010001 |
+----------------+-----------------+-----------------+-----------------------------+
ATMEL MACB ETHERNET DRIVER
S: Supported
F: drivers/net/ethernet/cadence/
BPF JIT for S390
----------M: Heiko Carstens <heiko.carstens@de.ibm.com>
++++++++++M: Heiko Carstens <hca@linux.ibm.com>
S: Supported
F: drivers/char/hw_random/cctrng.c
F: drivers/char/hw_random/cctrng.h
--- ------F: Documentation/devicetree/bindings/rng/arm-cctrng.txt
+++ ++++++F: Documentation/devicetree/bindings/rng/arm-cctrng.yaml
W: https://developer.arm.com/products/system-ip/trustzone-cryptocell/cryptocell-700-family
CEC FRAMEWORK
F: drivers/pinctrl/pinctrl-da90??.c
F: drivers/power/supply/da9052-battery.c
F: drivers/power/supply/da91??-*.c
----------F: drivers/regulator/da903x.c
F: drivers/regulator/da9???-regulator.[ch]
F: drivers/regulator/slg51000-regulator.[ch]
F: drivers/rtc/rtc-da90??.c
S: Maintained
Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
----------T: git git://git.infradead.org/users/vkoul/slave-dma.git
++++++++++T: git git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine.git
F: Documentation/devicetree/bindings/dma/
F: Documentation/driver-api/dmaengine/
F: drivers/dma/
DRM DRIVER FOR RAYDIUM RM67191 PANELS
S: Maintained
--- ------F: Documentation/devicetree/bindings/display/panel/raydium,rm67191.txt
+++ ++++++F: Documentation/devicetree/bindings/display/panel/raydium,rm67191.yaml
F: drivers/gpu/drm/panel/panel-raydium-rm67191.c
DRM DRIVER FOR ROCKTECH JH057N00900 PANELS
S: Maintained
F: include/linux/iommu.h
F: include/linux/iova.h
F: include/linux/of_iommu.h
++++++++++ F: include/uapi/linux/iommu.h
IO_URING
F: scripts/Kconfig.include
F: scripts/kconfig/
++++++++++KCOV
++++++++++S: Maintained
++++++++++F: Documentation/dev-tools/kcov.rst
++++++++++F: include/linux/kcov.h
++++++++++F: include/uapi/linux/kcov.h
++++++++++F: kernel/kcov.c
++++++++++F: scripts/Makefile.kcov
++++++++++
KCSAN
F: drivers/crypto/atmel-ecc.*
MICROCHIP I2C DRIVER
----------M: Ludovic Desroches <ludovic.desroches@microchip.com>
++++++++++M: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
S: Supported
F: drivers/i2c/busses/i2c-at91-*.c
F: include/dt-bindings/iio/adc/at91-sama5d2_adc.h
MICROCHIP SAMA5D2-COMPATIBLE SHUTDOWN CONTROLLER
----------M: Nicolas Ferre <nicolas.ferre@microchip.com>
++++++++++M: Claudiu Beznea <claudiu.beznea@microchip.com>
S: Supported
F: drivers/power/reset/at91-sama5d2_shdwc.c
MICROCHIP SPI DRIVER
----------M: Nicolas Ferre <nicolas.ferre@microchip.com>
++++++++++M: Tudor Ambarus <tudor.ambarus@microchip.com>
S: Supported
F: drivers/spi/spi-atmel.*
MICROCHIP SSC DRIVER
----------M: Nicolas Ferre <nicolas.ferre@microchip.com>
++++++++++M: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
S: Supported
F: drivers/misc/atmel-ssc.c
S: Supported
--- ------F: Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.txt
--- ------F: Documentation/devicetree/bindings/thermal/rcar-thermal.txt
+++ ++++++F: Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml
+++ ++++++F: Documentation/devicetree/bindings/thermal/rcar-thermal.yaml
F: drivers/thermal/rcar_gen3_thermal.c
F: drivers/thermal/rcar_thermal.c
F: drivers/video/fbdev/savage/
S390
----------M: Heiko Carstens <heiko.carstens@de.ibm.com>
++++++++++M: Heiko Carstens <hca@linux.ibm.com>
F: include/linux/dasd_mod.h
S390 IOMMU (PCI)
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
S390 PCI SUBSYSTEM
----------M: Gerald Schaefer <gerald.schaefer@de.ibm.com>
++++++++++M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
TEGRA IOMMU DRIVERS
S: Supported
++++++ ++++F: drivers/iommu/arm-smmu-nvidia.c
F: drivers/iommu/tegra*
TEGRA KBC DRIVER
F: fs/ufs/
UHID USERSPACE HID IO DRIVER
----------M: David Herrmann <dh.herrmann@googlemail.com>
++++++++++M: David Rheinsberg <david.rheinsberg@gmail.com>
S: Maintained
F: drivers/hid/uhid.c
F: drivers/rtc/rtc-sd3078.c
WIIMOTE HID DRIVER
----------M: David Herrmann <dh.herrmann@googlemail.com>
++++++++++M: David Rheinsberg <david.rheinsberg@gmail.com>
S: Maintained
F: drivers/hid/hid-wiimote*
If unsure, say N here.
---------- config IOMMU_PGTABLES_L2
---------- def_bool y
---------- depends on MSM_IOMMU && MMU && SMP && CPU_DCACHE_DISABLE=n
----------
---------- # AMD IOMMU support
---------- config AMD_IOMMU
---------- bool "AMD IOMMU support"
---------- select SWIOTLB
---------- select PCI_MSI
---------- select PCI_ATS
---------- select PCI_PRI
---------- select PCI_PASID
---------- select IOMMU_API
---------- select IOMMU_IOVA
---------- select IOMMU_DMA
---------- depends on X86_64 && PCI && ACPI
---------- help
---------- With this option you can enable support for AMD IOMMU hardware in
---------- your system. An IOMMU is a hardware component which provides
---------- remapping of DMA memory accesses from devices. With an AMD IOMMU you
---------- can isolate the DMA memory of different devices and protect the
---------- system from misbehaving device drivers or hardware.
----------
---------- You can find out if your system has an AMD IOMMU if you look into
---------- your BIOS for an option to enable it or if you have an IVRS ACPI
---------- table.
----------
---------- config AMD_IOMMU_V2
---------- tristate "AMD IOMMU Version 2 driver"
---------- depends on AMD_IOMMU
---------- select MMU_NOTIFIER
---------- help
---------- This option enables support for the AMD IOMMUv2 features of the IOMMU
---------- hardware. Select this option if you want to use devices that support
---------- the PCI PRI and PASID interface.
----------
---------- config AMD_IOMMU_DEBUGFS
---------- bool "Enable AMD IOMMU internals in DebugFS"
---------- depends on AMD_IOMMU && IOMMU_DEBUGFS
---------- help
---------- !!!WARNING!!! !!!WARNING!!! !!!WARNING!!! !!!WARNING!!!
----------
---------- DO NOT ENABLE THIS OPTION UNLESS YOU REALLY, -REALLY- KNOW WHAT YOU ARE DOING!!!
---------- Exposes AMD IOMMU device internals in DebugFS.
----------
---------- This option is -NOT- intended for production environments, and should
---------- not generally be enabled.
----------
---------- # Intel IOMMU support
---------- config DMAR_TABLE
---------- bool
----------
---------- config INTEL_IOMMU
---------- bool "Support for Intel IOMMU using DMA Remapping Devices"
---------- depends on PCI_MSI && ACPI && (X86 || IA64)
---------- select IOMMU_API
---------- select IOMMU_IOVA
---------- select NEED_DMA_MAP_STATE
---------- select DMAR_TABLE
---------- select SWIOTLB
---------- select IOASID
---------- help
---------- DMA remapping (DMAR) devices support enables independent address
---------- translations for Direct Memory Access (DMA) from devices.
---------- These DMA remapping devices are reported via ACPI tables
---------- and include PCI device scope covered by these DMA
---------- remapping devices.
----------
---------- config INTEL_IOMMU_DEBUGFS
---------- bool "Export Intel IOMMU internals in Debugfs"
---------- depends on INTEL_IOMMU && IOMMU_DEBUGFS
---------- help
---------- !!!WARNING!!!
----------
---------- DO NOT ENABLE THIS OPTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!!
----------
---------- Expose Intel IOMMU internals in Debugfs.
----------
---------- This option is -NOT- intended for production environments, and should
---------- only be enabled for debugging Intel IOMMU.
----------
---------- config INTEL_IOMMU_SVM
---------- bool "Support for Shared Virtual Memory with Intel IOMMU"
---------- depends on INTEL_IOMMU && X86_64
---------- select PCI_PASID
---------- select PCI_PRI
---------- select MMU_NOTIFIER
---------- select IOASID
---------- help
---------- Shared Virtual Memory (SVM) provides a facility for devices
---------- to access DMA resources through process address space by
---------- means of a Process Address Space ID (PASID).
----------
---------- config INTEL_IOMMU_DEFAULT_ON
---------- def_bool y
---------- prompt "Enable Intel DMA Remapping Devices by default"
---------- depends on INTEL_IOMMU
---------- help
---------- Selecting this option will enable a DMAR device at boot time if
---------- one is found. If this option is not selected, DMAR support can
---------- be enabled by passing intel_iommu=on to the kernel.
----------
---------- config INTEL_IOMMU_BROKEN_GFX_WA
---------- bool "Workaround broken graphics drivers (going away soon)"
---------- depends on INTEL_IOMMU && BROKEN && X86
---------- help
---------- Current Graphics drivers tend to use physical address
---------- for DMA and avoid using DMA APIs. Setting this config
---------- option permits the IOMMU driver to set a unity map for
---------- all the OS-visible memory. Hence the driver can continue
---------- to use physical addresses for DMA, at least until this
---------- option is removed in the 2.6.32 kernel.
----------
---------- config INTEL_IOMMU_FLOPPY_WA
---------- def_bool y
---------- depends on INTEL_IOMMU && X86
---------- help
---------- Floppy disk drivers are known to bypass DMA API calls
---------- thereby failing to work when IOMMU is enabled. This
---------- workaround will setup a 1:1 mapping for the first
---------- 16MiB to make floppy (an ISA device) work.
----------
---------- config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
---------- bool "Enable Intel IOMMU scalable mode by default"
---------- depends on INTEL_IOMMU
---------- help
---------- Selecting this option will enable by default the scalable mode if
---------- hardware presents the capability. The scalable mode is defined in
---------- VT-d 3.0. The scalable mode capability could be checked by reading
---------- /sys/devices/virtual/iommu/dmar*/intel-iommu/ecap. If this option
---------- is not selected, scalable mode support could also be enabled by
---------- passing intel_iommu=sm_on to the kernel. If not sure, please use
---------- the default value.
++++++++++ source "drivers/iommu/amd/Kconfig"
++++++++++ source "drivers/iommu/intel/Kconfig"
config IRQ_REMAP
bool "Support for Interrupt Remapping"
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
---------- depends on ARM && MMU || (COMPILE_TEST && (ARM || ARM64 || IA64 || SPARC))
depends on ARCH_OMAP2PLUS || COMPILE_TEST
select IOMMU_API
help
config ROCKCHIP_IOMMU
bool "Rockchip IOMMU Support"
---------- depends on ARM || ARM64 || (COMPILE_TEST && (ARM64 || IA64 || SPARC))
depends on ARCH_ROCKCHIP || COMPILE_TEST
select IOMMU_API
select ARM_DMA_USE_IOMMU
config SUN50I_IOMMU
bool "Allwinner H6 IOMMU Support"
++++++++++ depends on HAS_DMA
depends on ARCH_SUNXI || COMPILE_TEST
select ARM_DMA_USE_IOMMU
select IOMMU_API
---------- select IOMMU_DMA
help
Support for the IOMMU introduced in the Allwinner H6 SoCs.
config EXYNOS_IOMMU
bool "Exynos IOMMU Support"
---------- depends on ARCH_EXYNOS && MMU || (COMPILE_TEST && (ARM || ARM64 || IA64 || SPARC))
++++++++++ depends on ARCH_EXYNOS || COMPILE_TEST
depends on !CPU_BIG_ENDIAN # revisit driver if we can enable big-endian ptes
select IOMMU_API
select ARM_DMA_USE_IOMMU
config IPMMU_VMSA
bool "Renesas VMSA-compatible IPMMU"
---------- depends on ARM || IOMMU_DMA
depends on ARCH_RENESAS || (COMPILE_TEST && !GENERIC_ATOMIC64)
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
# ARM IOMMU support
config ARM_SMMU
tristate "ARM Ltd. System MMU (SMMU) Support"
---------- depends on (ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)) && MMU
++++++++++ depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU if ARM
config MTK_IOMMU
bool "MTK IOMMU Support"
---------- depends on HAS_DMA
depends on ARCH_MEDIATEK || COMPILE_TEST
select ARM_DMA_USE_IOMMU
select IOMMU_API
---------- select IOMMU_DMA
select IOMMU_IO_PGTABLE_ARMV7S
select MEMORY
select MTK_SMI
# SPDX-License-Identifier: GPL-2.0
++++++++++ obj-y += amd/ intel/
obj-$(CONFIG_IOMMU_API) += iommu.o
obj-$(CONFIG_IOMMU_API) += iommu-traces.o
obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
obj-$(CONFIG_IOMMU_IOVA) += iova.o
obj-$(CONFIG_OF_IOMMU) += of_iommu.o
obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
---------- obj-$(CONFIG_AMD_IOMMU) += amd/iommu.o amd/init.o amd/quirks.o
---------- obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += amd/debugfs.o
---------- obj-$(CONFIG_AMD_IOMMU_V2) += amd/iommu_v2.o
obj-$(CONFIG_ARM_SMMU) += arm_smmu.o
------ ----arm_smmu-objs += arm-smmu.o arm-smmu-impl.o arm-smmu-qcom.o
++++++ ++++arm_smmu-objs += arm-smmu.o arm-smmu-impl.o arm-smmu-nvidia.o arm-smmu-qcom.o
obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o
---------- obj-$(CONFIG_DMAR_TABLE) += intel/dmar.o
---------- obj-$(CONFIG_INTEL_IOMMU) += intel/iommu.o intel/pasid.o
---------- obj-$(CONFIG_INTEL_IOMMU) += intel/trace.o
---------- obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += intel/debugfs.o
---------- obj-$(CONFIG_INTEL_IOMMU_SVM) += intel/svm.o
obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
---------- obj-$(CONFIG_IRQ_REMAP) += intel/irq_remapping.o irq_remapping.o
++++++++++ obj-$(CONFIG_IRQ_REMAP) += irq_remapping.o
obj-$(CONFIG_MTK_IOMMU) += mtk_iommu.o
obj-$(CONFIG_MTK_IOMMU_V1) += mtk_iommu_v1.o
obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
pgtable->mode = pt_root & 7; /* lowest 3 bits encode pgtable mode */
}
--------- -static u64 amd_iommu_domain_encode_pgtable(u64 *root, int mode)
+++++++++ +static void amd_iommu_domain_set_pt_root(struct protection_domain *domain, u64 root)
+++++++++ +{
+++++++++ + atomic64_set(&domain->pt_root, root);
+++++++++ +}
+++++++++ +
+++++++++ +static void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
+++++++++ +{
+++++++++ + amd_iommu_domain_set_pt_root(domain, 0);
+++++++++ +}
+++++++++ +
+++++++++ +static void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
+++++++++ + u64 *root, int mode)
{
u64 pt_root;
pt_root = mode & 7;
pt_root |= (u64)root;
--------- - return pt_root;
+++++++++ + amd_iommu_domain_set_pt_root(domain, pt_root);
}
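The helpers above pack the page-table root pointer and the paging mode into a single 64-bit value before publishing it with one atomic64_set(), so readers always see a consistent pointer/mode pair. Below is a minimal standalone sketch of that packing, illustrative only and not the driver code: the root is page aligned, so its low three bits are free to carry the mode, exactly as the "pt_root & 7" decode at the top of this hunk assumes.

#include <stdint.h>
#include <stdio.h>

/* illustrative stand-in for the encode done by amd_iommu_domain_set_pgtable() */
static uint64_t encode_pt_root(uint64_t root, int mode)
{
	return (root & ~7ULL) | ((uint64_t)mode & 7);	/* low 3 bits = mode */
}

int main(void)
{
	uint64_t packed = encode_pt_root(0x1234000ULL, 3);

	printf("root=%#llx mode=%d\n",
	       (unsigned long long)(packed & ~7ULL), (int)(packed & 7));
	return 0;
}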
static struct iommu_dev_data *alloc_dev_data(u16 devid)
struct domain_pgtable pgtable;
unsigned long flags;
bool ret = true;
--------- - u64 *pte, root;
+++++++++ + u64 *pte;
spin_lock_irqsave(&domain->lock, flags);
* Device Table needs to be updated and flushed before the new root can
* be published.
*/
--------- - root = amd_iommu_domain_encode_pgtable(pte, pgtable.mode);
--------- - atomic64_set(&domain->pt_root, root);
+++++++++ + amd_iommu_domain_set_pgtable(domain, pte, pgtable.mode);
ret = true;
domain_id_free(domain->id);
amd_iommu_domain_get_pgtable(domain, &pgtable);
--------- - atomic64_set(&domain->pt_root, 0);
+++++++++ + amd_iommu_domain_clr_pt_root(domain);
free_pagetable(&pgtable);
kfree(domain);
static int protection_domain_init(struct protection_domain *domain, int mode)
{
--------- - u64 *pt_root = NULL, root;
+++++++++ + u64 *pt_root = NULL;
BUG_ON(mode < PAGE_MODE_NONE || mode > PAGE_MODE_6_LEVEL);
return -ENOMEM;
}
--------- - root = amd_iommu_domain_encode_pgtable(pt_root, mode);
--------- - atomic64_set(&domain->pt_root, root);
+++++++++ + amd_iommu_domain_set_pgtable(domain, pt_root, mode);
return 0;
}
/* First save pgtable configuration*/
amd_iommu_domain_get_pgtable(domain, &pgtable);
--------- - /* Update data structure */
--------- - atomic64_set(&domain->pt_root, 0);
+++++++++ + /* Remove page-table from domain */
+++++++++ + amd_iommu_domain_clr_pt_root(domain);
/* Make changes visible to IOMMUs */
update_domain(domain);
if (!fn)
return -ENOMEM;
iommu->ir_domain = irq_domain_create_tree(fn, &amd_ir_domain_ops, iommu);
---------- irq_domain_free_fwnode(fn);
---------- if (!iommu->ir_domain)
++++++++++ if (!iommu->ir_domain) {
++++++++++ irq_domain_free_fwnode(fn);
return -ENOMEM;
++++++++++ }
iommu->ir_domain->parent = arch_get_ir_parent_domain();
iommu->msi_domain = arch_create_remap_msi_irq_domain(iommu->ir_domain,
}
/*
------ ---- * Try to unlock the cmq lock. This will fail if we're the last
++++++ ++++ * Try to unlock the cmdq lock. This will fail if we're the last
* reader, in which case we can safely update cmdq->q.llq.cons
*/
if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
if (!ops)
return -ENODEV;
---------- return ops->map(ops, iova, paddr, size, prot);
++++++++++ return ops->map(ops, iova, paddr, size, prot, gfp);
}
static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
*/
#define QCOM_DUMMY_VAL -1
------ ----#define TLB_LOOP_TIMEOUT 1000000 /* 1s! */
------ ----#define TLB_SPIN_COUNT 10
------ ----
#define MSI_IOVA_BASE 0x8000000
#define MSI_IOVA_LENGTH 0x100000
enum io_pgtable_fmt fmt;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
++++++ ++++ irqreturn_t (*context_fault)(int irq, void *dev);
mutex_lock(&smmu_domain->init_mutex);
if (smmu_domain->smmu)
* handler seeing a half-initialised domain state.
*/
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
------ ---- ret = devm_request_irq(smmu->dev, irq, arm_smmu_context_fault,
++++++ ++++
++++++ ++++ if (smmu->impl && smmu->impl->context_fault)
++++++ ++++ context_fault = smmu->impl->context_fault;
++++++ ++++ else
++++++ ++++ context_fault = arm_smmu_context_fault;
++++++ ++++
++++++ ++++ ret = devm_request_irq(smmu->dev, irq, context_fault,
IRQF_SHARED, "arm-smmu-context-fault", domain);
if (ret < 0) {
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
return -ENODEV;
arm_smmu_rpm_get(smmu);
---------- ret = ops->map(ops, iova, paddr, size, prot);
++++++++++ ret = ops->map(ops, iova, paddr, size, prot, gfp);
arm_smmu_rpm_put(smmu);
return ret;
unsigned int size;
u32 id;
bool cttw_reg, cttw_fw = smmu->features & ARM_SMMU_FEAT_COHERENT_WALK;
------ ---- int i;
++++++ ++++ int i, ret;
dev_notice(smmu->dev, "probing hardware configuration...\n");
dev_notice(smmu->dev, "SMMUv%d with:\n",
smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_64K;
}
++++++ ++++ if (smmu->impl && smmu->impl->cfg_probe) {
++++++ ++++ ret = smmu->impl->cfg_probe(smmu);
++++++ ++++ if (ret)
++++++ ++++ return ret;
++++++ ++++ }
++++++ ++++
/* Now we've corralled the various formats, what'll it do? */
if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_S)
smmu->pgsize_bitmap |= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
dev_notice(smmu->dev, "\tStage-2: %lu-bit IPA -> %lu-bit PA\n",
smmu->ipa_size, smmu->pa_size);
------ ---- if (smmu->impl && smmu->impl->cfg_probe)
------ ---- return smmu->impl->cfg_probe(smmu);
------ ----
return 0;
}
{ .compatible = "arm,mmu-401", .data = &arm_mmu401 },
{ .compatible = "arm,mmu-500", .data = &arm_mmu500 },
{ .compatible = "cavium,smmu-v2", .data = &cavium_smmuv2 },
++++++ ++++ { .compatible = "nvidia,smmu-500", .data = &arm_mmu500 },
{ .compatible = "qcom,smmu-v2", .data = &qcom_smmuv2 },
{ },
};
struct arm_smmu_device *smmu;
struct device *dev = &pdev->dev;
int num_irqs, i, err;
++++++ ++++ irqreturn_t (*global_fault)(int irq, void *dev);
smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
if (!smmu) {
if (err)
return err;
------ ---- smmu = arm_smmu_impl_init(smmu);
------ ---- if (IS_ERR(smmu))
------ ---- return PTR_ERR(smmu);
------ ----
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ioaddr = res->start;
smmu->base = devm_ioremap_resource(dev, res);
*/
smmu->numpage = resource_size(res);
++++++ ++++ smmu = arm_smmu_impl_init(smmu);
++++++ ++++ if (IS_ERR(smmu))
++++++ ++++ return PTR_ERR(smmu);
++++++ ++++
num_irqs = 0;
while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, num_irqs))) {
num_irqs++;
smmu->num_context_irqs = smmu->num_context_banks;
}
++++++ ++++ if (smmu->impl && smmu->impl->global_fault)
++++++ ++++ global_fault = smmu->impl->global_fault;
++++++ ++++ else
++++++ ++++ global_fault = arm_smmu_global_fault;
++++++ ++++
for (i = 0; i < smmu->num_global_irqs; ++i) {
err = devm_request_irq(smmu->dev, smmu->irqs[i],
------ ---- arm_smmu_global_fault,
++++++ ++++ global_fault,
IRQF_SHARED,
"arm-smmu global fault",
smmu);
#define REG_V5_FAULT_AR_VA 0x070
#define REG_V5_FAULT_AW_VA 0x080
---------- #define has_sysmmu(dev) (dev->archdata.iommu != NULL)
++++++++++ #define has_sysmmu(dev) (dev_iommu_priv_get(dev) != NULL)
static struct device *dma_dev;
static struct kmem_cache *lv2table_kmem_cache;
};
/*
---------- * This structure is attached to dev.archdata.iommu of the master device
++++++++++ * This structure is attached to dev->iommu->priv of the master device
* on device add, contains a list of SYSMMU controllers defined by device tree,
* which are bound to given master device. It is usually referenced by 'owner'
* pointer.
struct device *master = data->master;
if (master) {
---------- struct exynos_iommu_owner *owner = master->archdata.iommu;
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
mutex_lock(&owner->rpm_lock);
if (data->domain) {
struct device *master = data->master;
if (master) {
---------- struct exynos_iommu_owner *owner = master->archdata.iommu;
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
mutex_lock(&owner->rpm_lock);
if (data->domain) {
}
};
----- -----static inline void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
+++++ +++++static inline void exynos_iommu_set_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
{
dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), sizeof(*ent),
DMA_TO_DEVICE);
static void exynos_iommu_detach_device(struct iommu_domain *iommu_domain,
struct device *dev)
{
---------- struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
phys_addr_t pagetable = virt_to_phys(domain->pgtable);
struct sysmmu_drvdata *data, *next;
unsigned long flags;
static int exynos_iommu_attach_device(struct iommu_domain *iommu_domain,
struct device *dev)
{
---------- struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
phys_addr_t pagetable = virt_to_phys(domain->pgtable);
unsigned long flags;
if (!pent)
return ERR_PTR(-ENOMEM);
----- ----- update_pte(sent, mk_lv1ent_page(virt_to_phys(pent)));
+++++ +++++ exynos_iommu_set_pte(sent, mk_lv1ent_page(virt_to_phys(pent)));
kmemleak_ignore(pent);
*pgcounter = NUM_LV2ENTRIES;
handle = dma_map_single(dma_dev, pent, LV2TABLE_SIZE,
*pgcnt = 0;
}
----- ----- update_pte(sent, mk_lv1ent_sect(paddr, prot));
+++++ +++++ exynos_iommu_set_pte(sent, mk_lv1ent_sect(paddr, prot));
spin_lock(&domain->lock);
if (lv1ent_page_zero(sent)) {
if (WARN_ON(!lv2ent_fault(pent)))
return -EADDRINUSE;
----- ----- update_pte(pent, mk_lv2ent_spage(paddr, prot));
+++++ +++++ exynos_iommu_set_pte(pent, mk_lv2ent_spage(paddr, prot));
*pgcnt -= 1;
} else { /* size == LPAGE_SIZE */
int i;
}
/* workaround for h/w bug in System MMU v3.3 */
----- ----- update_pte(ent, ZERO_LV2LINK);
+++++ +++++ exynos_iommu_set_pte(ent, ZERO_LV2LINK);
size = SECT_SIZE;
goto done;
}
}
if (lv2ent_small(ent)) {
----- ----- update_pte(ent, 0);
+++++ +++++ exynos_iommu_set_pte(ent, 0);
size = SPAGE_SIZE;
domain->lv2entcnt[lv1ent_offset(iova)] += 1;
goto done;
static struct iommu_device *exynos_iommu_probe_device(struct device *dev)
{
---------- struct exynos_iommu_owner *owner = dev->archdata.iommu;
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
if (!has_sysmmu(dev))
static void exynos_iommu_release_device(struct device *dev)
{
---------- struct exynos_iommu_owner *owner = dev->archdata.iommu;
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
if (!has_sysmmu(dev))
static int exynos_iommu_of_xlate(struct device *dev,
struct of_phandle_args *spec)
{
---------- struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct platform_device *sysmmu = of_find_device_by_node(spec->np);
++++++++++ struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data, *entry;
if (!sysmmu)
INIT_LIST_HEAD(&owner->controllers);
mutex_init(&owner->rpm_lock);
---------- dev->archdata.iommu = owner;
++++++++++ dev_iommu_priv_set(dev, owner);
}
list_for_each_entry(entry, &owner->controllers, owner_node)
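The conversions in this file, and in the Intel and OMAP drivers further down, all follow the same pattern: per-device IOMMU driver data moves from dev->archdata.iommu to the core-managed dev->iommu->priv slot, accessed through dev_iommu_priv_get()/dev_iommu_priv_set(). A minimal sketch of that accessor pattern, with a hypothetical driver-private type (my_owner is not from the patch):

#include <linux/device.h>
#include <linux/iommu.h>

struct my_owner {			/* hypothetical driver-private data */
	int dummy;
};

static void my_probe_setup(struct device *dev, struct my_owner *owner)
{
	dev_iommu_priv_set(dev, owner);	/* publish driver data on the device */
}

static struct my_owner *my_lookup(struct device *dev)
{
	return dev_iommu_priv_get(dev);	/* NULL when nothing was set */
}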
#include <trace/events/intel_iommu.h>
#include "../irq_remapping.h"
-------- --#include "intel-pasid.h"
++++++++ ++#include "pasid.h"
#define ROOT_SIZE VTD_PAGE_SIZE
#define CONTEXT_SIZE VTD_PAGE_SIZE
static int intel_iommu_superpage = 1;
static int iommu_identity_mapping;
static int intel_no_bounce;
++++++++ ++static int iommu_skip_te_disable;
#define IDENTMAP_GFX 2
#define IDENTMAP_AZALIA 4
if (!dev)
return NULL;
---------- info = dev->archdata.iommu;
++++++++++ info = dev_iommu_priv_get(dev);
if (unlikely(info == DUMMY_DEVICE_DOMAIN_INFO ||
info == DEFER_DEVICE_DOMAIN_INFO))
return NULL;
static int iommu_dummy(struct device *dev)
{
---------- return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
++++++++++ return dev_iommu_priv_get(dev) == DUMMY_DEVICE_DOMAIN_INFO;
}
static bool attach_deferred(struct device *dev)
{
---------- return dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO;
++++++++++ return dev_iommu_priv_get(dev) == DEFER_DEVICE_DOMAIN_INFO;
}
/**
return false;
}
-------- --static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
++++++++ ++struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
{
struct dmar_drhd_unit *drhd = NULL;
++++++++ ++ struct pci_dev *pdev = NULL;
struct intel_iommu *iommu;
struct device *tmp;
-------- -- struct pci_dev *pdev = NULL;
u16 segment = 0;
int i;
-------- -- if (iommu_dummy(dev))
++++++++ ++ if (!dev || iommu_dummy(dev))
return NULL;
if (dev_is_pci(dev)) {
if (pdev && pdev->is_virtfn)
goto got_pdev;
-------- -- *bus = drhd->devices[i].bus;
-------- -- *devfn = drhd->devices[i].devfn;
++++++++ ++ if (bus && devfn) {
++++++++ ++ *bus = drhd->devices[i].bus;
++++++++ ++ *devfn = drhd->devices[i].devfn;
++++++++ ++ }
goto out;
}
if (pdev && drhd->include_all) {
got_pdev:
-------- -- *bus = pdev->bus->number;
-------- -- *devfn = pdev->devfn;
++++++++ ++ if (bus && devfn) {
++++++++ ++ *bus = pdev->bus->number;
++++++++ ++ *devfn = pdev->devfn;
++++++++ ++ }
goto out;
}
}
u32 sts;
unsigned long flag;
++++++++ ++ if (iommu_skip_te_disable && iommu->drhd->gfx_dedicated &&
++++++++ ++ (cap_read_drain(iommu->cap) || cap_write_drain(iommu->cap)))
++++++++ ++ return;
++++++++ ++
raw_spin_lock_irqsave(&iommu->register_lock, flag);
iommu->gcmd &= ~DMA_GCMD_TE;
writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
list_del(&info->link);
list_del(&info->global);
if (info->dev)
---------- info->dev->archdata.iommu = NULL;
++++++++++ dev_iommu_priv_set(info->dev, NULL);
}
static void domain_remove_dev_info(struct dmar_domain *domain)
{
struct iommu_domain *domain;
---------- dev->archdata.iommu = NULL;
++++++++++ dev_iommu_priv_set(dev, NULL);
domain = iommu_get_domain_for_dev(dev);
if (domain)
intel_iommu_attach_device(domain, dev);
list_add(&info->link, &domain->devices);
list_add(&info->global, &device_domain_list);
if (dev)
---------- dev->archdata.iommu = info;
++++++++++ dev_iommu_priv_set(dev, info);
spin_unlock_irqrestore(&device_domain_lock, flags);
/* PASID table is mandatory for a PCI device in scalable mode. */
if (!drhd || drhd->reg_base_addr - vtbar != 0xa000) {
pr_warn_once(FW_BUG "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n");
add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
---------- pdev->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO;
++++++++++ dev_iommu_priv_set(&pdev->dev, DUMMY_DEVICE_DOMAIN_INFO);
}
}
DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB, quirk_ioat_snb_local_iommu);
/* This IOMMU has *only* gfx devices. Either bypass it or
set the gfx_mapped flag, as appropriate */
++++++++ ++ drhd->gfx_dedicated = 1;
if (!dmar_map_gfx) {
drhd->ignored = 1;
for_each_active_dev_scope(drhd->devices,
drhd->devices_cnt, i, dev)
---------- dev->archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO;
++++++++++ dev_iommu_priv_set(dev, DUMMY_DEVICE_DOMAIN_INFO);
}
}
}
struct device *dev)
{
int ret;
-------- -- u8 bus, devfn;
unsigned long flags;
struct intel_iommu *iommu;
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
++++++++ ++ iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return -ENODEV;
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
struct intel_iommu *iommu;
int addr_width;
-------- -- u8 bus, devfn;
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
++++++++ ++ iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return -ENODEV;
sid = PCI_DEVID(bus, devfn);
/* Size is only valid in address selective invalidation */
-------- -- if (inv_info->granularity != IOMMU_INV_GRANU_PASID)
++++++++ ++ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
size = to_vtd_size(inv_info->addr_info.granule_size,
inv_info->addr_info.nb_granules);
IOMMU_CACHE_INV_TYPE_NR) {
int granu = 0;
u64 pasid = 0;
++++++++ ++ u64 addr = 0;
granu = to_vtd_granularity(cache_type, inv_info->granularity);
if (granu == -EINVAL) {
switch (BIT(cache_type)) {
case IOMMU_CACHE_INV_TYPE_IOTLB:
++++++++ ++ /* HW will ignore LSB bits based on address mask */
if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
size &&
(inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
-------- -- pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
++++++++ ++ pr_err_ratelimited("User address not aligned, 0x%llx, size order %llu\n",
inv_info->addr_info.addr, size);
-------- -- ret = -ERANGE;
-------- -- goto out_unlock;
}
/*
(granu == QI_GRAN_NONG_PASID) ? -1 : 1 << size,
inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_LEAF);
++++++++ ++ if (!info->ats_enabled)
++++++++ ++ break;
/*
* Always flush device IOTLB if ATS is enabled. vIOMMU
* in the guest may assume IOTLB flush is inclusive,
* which is more efficient.
*/
-------- -- if (info->ats_enabled)
-------- -- qi_flush_dev_iotlb_pasid(iommu, sid,
-------- -- info->pfsid, pasid,
-------- -- info->ats_qdep,
-------- -- inv_info->addr_info.addr,
-------- -- size, granu);
-------- -- break;
++++++++ ++ fallthrough;
case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
++++++++ ++ /*
++++++++ ++ * PASID based device TLB invalidation does not support
++++++++ ++ * IOMMU_INV_GRANU_PASID granularity but only supports
++++++++ ++ * IOMMU_INV_GRANU_ADDR.
++++++++ ++ * The equivalent of that is we set the size to be the
++++++++ ++ * entire range of 64 bit. User only provides PASID info
++++++++ ++ * without address info. So we set addr to 0.
++++++++ ++ */
++++++++ ++ if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
++++++++ ++ size = 64 - VTD_PAGE_SHIFT;
++++++++ ++ addr = 0;
++++++++ ++ } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) {
++++++++ ++ addr = inv_info->addr_info.addr;
++++++++ ++ }
++++++++ ++
if (info->ats_enabled)
qi_flush_dev_iotlb_pasid(iommu, sid,
info->pfsid, pasid,
-------- -- info->ats_qdep,
-------- -- inv_info->addr_info.addr,
-------- -- size, granu);
++++++++ ++ info->ats_qdep, addr,
++++++++ ++ size);
else
pr_warn_ratelimited("Passdown device IOTLB flush w/o ATS!\n");
break;
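The size value used above is an order: an invalidation of order "size" covers 2^size pages, and VTD_PAGE_SHIFT is 12 in this driver, so size = 64 - VTD_PAGE_SHIFT (52) spans the whole 64-bit address space, which is why the PASID-granularity case can set addr to 0. The alignment check earlier in the hunk, addr & (BIT(VTD_PAGE_SHIFT + size) - 1), follows from the same encoding. A small standalone illustration of the arithmetic:

#include <stdio.h>

#define VTD_PAGE_SHIFT 12

int main(void)
{
	unsigned int size = 64 - VTD_PAGE_SHIFT;	/* 52 */

	/* an order of "size" covers 2^size pages, i.e. the low
	 * (VTD_PAGE_SHIFT + size) address bits: 64 bits here, the
	 * entire address space */
	printf("address bits covered: %u\n", VTD_PAGE_SHIFT + size);
	return 0;
}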
static struct iommu_device *intel_iommu_probe_device(struct device *dev)
{
struct intel_iommu *iommu;
-------- -- u8 bus, devfn;
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
++++++++ ++ iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return ERR_PTR(-ENODEV);
if (translation_pre_enabled(iommu))
---------- dev->archdata.iommu = DEFER_DEVICE_DOMAIN_INFO;
++++++++++ dev_iommu_priv_set(dev, DEFER_DEVICE_DOMAIN_INFO);
return &iommu->iommu;
}
static void intel_iommu_release_device(struct device *dev)
{
struct intel_iommu *iommu;
-------- -- u8 bus, devfn;
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
++++++++ ++ iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return;
return generic_device_group(dev);
}
-------- --#ifdef CONFIG_INTEL_IOMMU_SVM
-------- --struct intel_iommu *intel_svm_device_to_iommu(struct device *dev)
-------- --{
-------- -- struct intel_iommu *iommu;
-------- -- u8 bus, devfn;
-------- --
-------- -- if (iommu_dummy(dev)) {
-------- -- dev_warn(dev,
-------- -- "No IOMMU translation for device; cannot enable SVM\n");
-------- -- return NULL;
-------- -- }
-------- --
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
-------- -- if ((!iommu)) {
-------- -- dev_err(dev, "No IOMMU for device; cannot enable SVM\n");
-------- -- return NULL;
-------- -- }
-------- --
-------- -- return iommu;
-------- --}
-------- --#endif /* CONFIG_INTEL_IOMMU_SVM */
-------- --
static int intel_iommu_enable_auxd(struct device *dev)
{
struct device_domain_info *info;
struct intel_iommu *iommu;
unsigned long flags;
-------- -- u8 bus, devfn;
int ret;
-------- -- iommu = device_to_iommu(dev, &bus, &devfn);
++++++++ ++ iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu || dmar_disabled)
return -EINVAL;
.sva_bind = intel_svm_bind,
.sva_unbind = intel_svm_unbind,
.sva_get_pasid = intel_svm_get_pasid,
++++++++ ++ .page_response = intel_svm_page_response,
#endif
};
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0062, quirk_calpella_no_shadow_gtt);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x006a, quirk_calpella_no_shadow_gtt);
++++++++ ++static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
++++++++ ++{
++++++++ ++ unsigned short ver;
++++++++ ++
++++++++ ++ if (!IS_GFX_DEVICE(dev))
++++++++ ++ return;
++++++++ ++
++++++++ ++ ver = (dev->device >> 8) & 0xff;
++++++++ ++ if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
++++++++ ++ ver != 0x4e && ver != 0x8a && ver != 0x98 &&
++++++++ ++ ver != 0x9a)
++++++++ ++ return;
++++++++ ++
++++++++ ++ if (risky_device(dev))
++++++++ ++ return;
++++++++ ++
++++++++ ++ pci_info(dev, "Skip IOMMU disabling for graphics\n");
++++++++ ++ iommu_skip_te_disable = 1;
++++++++ ++}
++++++++ ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, quirk_igfx_skip_te_disable);
++++++++ ++
/* On Tylersburg chipsets, some BIOSes have been known to enable the
ISOCH DMAR unit for the Azalia sound device, but not give it any
TLB entries, which causes it to deadlock. Check for that. We do
static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
phys_addr_t paddr, size_t size, arm_lpae_iopte prot,
---------- int lvl, arm_lpae_iopte *ptep)
++++++++++ int lvl, arm_lpae_iopte *ptep, gfp_t gfp)
{
arm_lpae_iopte *cptep, pte;
size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
/* Grab a pointer to the next level */
pte = READ_ONCE(*ptep);
if (!pte) {
---------- cptep = __arm_lpae_alloc_pages(tblsz, GFP_ATOMIC, cfg);
++++++++++ cptep = __arm_lpae_alloc_pages(tblsz, gfp, cfg);
if (!cptep)
return -ENOMEM;
}
/* Rinse, repeat */
---------- return __arm_lpae_map(data, iova, paddr, size, prot, lvl + 1, cptep);
++++++++++ return __arm_lpae_map(data, iova, paddr, size, prot, lvl + 1, cptep, gfp);
}
static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
else if (prot & IOMMU_CACHE)
pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
------ ---- else if (prot & IOMMU_SYS_CACHE_ONLY)
------ ---- pte |= (ARM_LPAE_MAIR_ATTR_IDX_INC_OCACHE
------ ---- << ARM_LPAE_PTE_ATTRINDX_SHIFT);
}
if (prot & IOMMU_CACHE)
}
static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
---------- phys_addr_t paddr, size_t size, int iommu_prot)
++++++++++ phys_addr_t paddr, size_t size, int iommu_prot, gfp_t gfp)
{
struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
struct io_pgtable_cfg *cfg = &data->iop.cfg;
return -ERANGE;
prot = arm_lpae_prot_to_pte(data, iommu_prot);
---------- ret = __arm_lpae_map(data, iova, paddr, size, prot, lvl, ptep);
++++++++++ ret = __arm_lpae_map(data, iova, paddr, size, prot, lvl, ptep, gfp);
/*
* Synchronise all PTE updates for the new mapping before there's
* a chance for anything to kick off a table walk for the new iova.
if (ops->map(ops, iova, iova, size, IOMMU_READ |
IOMMU_WRITE |
IOMMU_NOEXEC |
---------- IOMMU_CACHE))
++++++++++ IOMMU_CACHE, GFP_KERNEL))
return __FAIL(ops, i);
/* Overlapping mappings */
if (!ops->map(ops, iova, iova + size, size,
---------- IOMMU_READ | IOMMU_NOEXEC))
++++++++++ IOMMU_READ | IOMMU_NOEXEC, GFP_KERNEL))
return __FAIL(ops, i);
if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
return __FAIL(ops, i);
/* Remap of partial unmap */
---------- if (ops->map(ops, SZ_1G + size, size, size, IOMMU_READ))
++++++++++ if (ops->map(ops, SZ_1G + size, size, size, IOMMU_READ, GFP_KERNEL))
return __FAIL(ops, i);
if (ops->iova_to_phys(ops, SZ_1G + size + 42) != (size + 42))
return __FAIL(ops, i);
/* Remap full block */
---------- if (ops->map(ops, iova, iova, size, IOMMU_WRITE))
++++++++++ if (ops->map(ops, iova, iova, size, IOMMU_WRITE, GFP_KERNEL))
return __FAIL(ops, i);
if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
return;
iommu_device_unlink(dev->iommu->iommu_dev, dev);
---------- iommu_group_remove_device(dev);
ops->release_device(dev);
++++++++++ iommu_group_remove_device(dev);
module_put(ops->owner);
dev_iommu_free(dev);
}
* Elements are sorted by start address and overlapping segments
* of the same type are merged.
*/
---------- int iommu_insert_resv_region(struct iommu_resv_region *new,
---------- struct list_head *regions)
++++++++++ static int iommu_insert_resv_region(struct iommu_resv_region *new,
++++++++++ struct list_head *regions)
{
struct iommu_resv_region *iter, *tmp, *nr, *top;
LIST_HEAD(stack);
int iommu_page_response(struct device *dev,
struct iommu_page_response *msg)
{
---------- bool pasid_valid;
++++++++++ bool needs_pasid;
int ret = -EINVAL;
struct iommu_fault_event *evt;
struct iommu_fault_page_request *prm;
struct dev_iommu *param = dev->iommu;
++++++++++ bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID;
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
if (!domain || !domain->ops->page_response)
*/
list_for_each_entry(evt, &param->fault_param->faults, list) {
prm = &evt->fault.prm;
---------- pasid_valid = prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
++++++++++ if (prm->grpid != msg->grpid)
++++++++++ continue;
---------- if ((pasid_valid && prm->pasid != msg->pasid) ||
---------- prm->grpid != msg->grpid)
++++++++++ /*
++++++++++ * If the PASID is required, the corresponding request is
++++++++++ * matched using the group ID, the PASID valid bit and the PASID
++++++++++ * value. Otherwise only the group ID matches request and
++++++++++ * response.
++++++++++ */
++++++++++ needs_pasid = prm->flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
++++++++++ if (needs_pasid && (!has_pasid || msg->pasid != prm->pasid))
continue;
---------- /* Sanitize the reply */
---------- msg->flags = pasid_valid ? IOMMU_PAGE_RESP_PASID_VALID : 0;
++++++++++ if (!needs_pasid && has_pasid) {
++++++++++ /* No big deal, just clear it. */
++++++++++ msg->flags &= ~IOMMU_PAGE_RESP_PASID_VALID;
++++++++++ msg->pasid = 0;
++++++++++ }
ret = domain->ops->page_response(dev, evt, msg);
list_del(&evt->list);
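The matching rule described in the comment above reads as a small predicate: the group ID must always match, and the PASID only participates when the request was flagged as needing one; an unsolicited PASID in the response is simply cleared. A standalone sketch of that predicate (names are illustrative, not the kernel's):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool response_matches(uint32_t req_grpid, bool req_needs_pasid,
			     uint32_t req_pasid, uint32_t rsp_grpid,
			     bool rsp_has_pasid, uint32_t rsp_pasid)
{
	if (req_grpid != rsp_grpid)
		return false;		/* group ID always has to match */
	if (!req_needs_pasid)
		return true;		/* PASID is ignored (and cleared) */
	return rsp_has_pasid && req_pasid == rsp_pasid;
}

int main(void)
{
	printf("%d\n", response_matches(1, true, 7, 1, true, 7));	/* 1 */
	printf("%d\n", response_matches(1, true, 7, 1, false, 0));	/* 0 */
	return 0;
}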
return pgsize;
}
---------- int __iommu_map(struct iommu_domain *domain, unsigned long iova,
---------- phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
++++++++++ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
++++++++++ phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
const struct iommu_ops *ops = domain->ops;
unsigned long orig_iova = iova;
}
EXPORT_SYMBOL_GPL(iommu_unmap_fast);
---------- size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
---------- struct scatterlist *sg, unsigned int nents, int prot,
---------- gfp_t gfp)
++++++++++ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
++++++++++ struct scatterlist *sg, unsigned int nents, int prot,
++++++++++ gfp_t gfp)
{
size_t len = 0, mapped = 0;
phys_addr_t start;
* IOMMU API for Renesas VMSA-compatible IPMMU
*
- --------- * Copyright (C) 2014 Renesas Electronics Corporation
+ +++++++++ * Copyright (C) 2014-2020 Renesas Electronics Corporation
*/
#include <linux/bitmap.h>
if (!domain)
return -ENODEV;
---------- return domain->iop->map(domain->iop, iova, paddr, size, prot);
++++++++++ return domain->iop->map(domain->iop, iova, paddr, size, prot, gfp);
}
static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
{ .soc_id = "r8a774a1", },
{ .soc_id = "r8a774b1", },
{ .soc_id = "r8a774c0", },
+ +++++++++ { .soc_id = "r8a774e1", },
{ .soc_id = "r8a7795", },
+ +++++++++ { .soc_id = "r8a77961", },
{ .soc_id = "r8a7796", },
{ .soc_id = "r8a77965", },
{ .soc_id = "r8a77970", },
static const struct soc_device_attribute soc_rcar_gen3_whitelist[] = {
{ .soc_id = "r8a774b1", },
{ .soc_id = "r8a774c0", },
+ +++++++++ { .soc_id = "r8a774e1", },
{ .soc_id = "r8a7795", .revision = "ES3.*" },
+ +++++++++ { .soc_id = "r8a77961", },
{ .soc_id = "r8a77965", },
{ .soc_id = "r8a77990", },
{ .soc_id = "r8a77995", },
}, {
.compatible = "renesas,ipmmu-r8a774c0",
.data = &ipmmu_features_rcar_gen3,
+ +++++++++ }, {
+ +++++++++ .compatible = "renesas,ipmmu-r8a774e1",
+ +++++++++ .data = &ipmmu_features_rcar_gen3,
}, {
.compatible = "renesas,ipmmu-r8a7795",
.data = &ipmmu_features_rcar_gen3,
}, {
.compatible = "renesas,ipmmu-r8a7796",
.data = &ipmmu_features_rcar_gen3,
+ +++++++++ }, {
+ +++++++++ .compatible = "renesas,ipmmu-r8a77961",
+ +++++++++ .data = &ipmmu_features_rcar_gen3,
}, {
.compatible = "renesas,ipmmu-r8a77965",
.data = &ipmmu_features_rcar_gen3,
#define REG_MMU_INVLD_START_A 0x024
#define REG_MMU_INVLD_END_A 0x028
--- -------#define REG_MMU_INV_SEL 0x038
+++ +++++++#define REG_MMU_INV_SEL_GEN2 0x02c
+++ +++++++#define REG_MMU_INV_SEL_GEN1 0x038
#define F_INVLD_EN0 BIT(0)
#define F_INVLD_EN1 BIT(1)
--- -------#define REG_MMU_STANDARD_AXI_MODE 0x048
+++ +++++++#define REG_MMU_MISC_CTRL 0x048
+++ +++++++#define F_MMU_IN_ORDER_WR_EN_MASK (BIT(1) | BIT(17))
+++ +++++++#define F_MMU_STANDARD_AXI_MODE_MASK (BIT(3) | BIT(19))
+++ +++++++
#define REG_MMU_DCM_DIS 0x050
+++ +++++++#define REG_MMU_WR_LEN_CTRL 0x054
+++ +++++++#define F_MMU_WR_THROT_DIS_MASK (BIT(5) | BIT(21))
#define REG_MMU_CTRL_REG 0x110
#define F_MMU_TF_PROT_TO_PROGRAM_ADDR (2 << 4)
#define REG_MMU1_INVLD_PA 0x148
#define REG_MMU0_INT_ID 0x150
#define REG_MMU1_INT_ID 0x154
+++ +++++++#define F_MMU_INT_ID_COMM_ID(a) (((a) >> 9) & 0x7)
+++ +++++++#define F_MMU_INT_ID_SUB_COMM_ID(a) (((a) >> 7) & 0x3)
#define F_MMU_INT_ID_LARB_ID(a) (((a) >> 7) & 0x7)
#define F_MMU_INT_ID_PORT_ID(a) (((a) >> 2) & 0x1f)
--- -------#define MTK_PROTECT_PA_ALIGN 128
+++ +++++++#define MTK_PROTECT_PA_ALIGN 256
/*
* Get the local arbiter ID and the portid within the larb arbiter
#define MTK_M4U_TO_LARB(id) (((id) >> 5) & 0xf)
#define MTK_M4U_TO_PORT(id) ((id) & 0x1f)
+++ +++++++#define HAS_4GB_MODE BIT(0)
+++ +++++++/* HW will use the EMI clock if there isn't the "bclk". */
+++ +++++++#define HAS_BCLK BIT(1)
+++ +++++++#define HAS_VLD_PA_RNG BIT(2)
+++ +++++++#define RESET_AXI BIT(3)
+++ +++++++#define OUT_ORDER_WR_EN BIT(4)
+++ +++++++#define HAS_SUB_COMM BIT(5)
+++ +++++++#define WR_THROT_EN BIT(6)
+++ +++++++
+++ +++++++#define MTK_IOMMU_HAS_FLAG(pdata, _x) \
+++ +++++++ ((((pdata)->flags) & (_x)) == (_x))
+++ +++++++
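MTK_IOMMU_HAS_FLAG() tests that every bit of the queried mask is present, so multi-bit queries only succeed when all of the named capabilities are set. A standalone sketch of the same test (the flag values here are illustrative):

#include <stdint.h>
#include <stdio.h>

#define HAS_4GB_MODE	(1u << 0)
#define HAS_BCLK	(1u << 1)

#define HAS_FLAG(flags, x)	(((flags) & (x)) == (x))

int main(void)
{
	uint32_t flags = HAS_4GB_MODE;		/* only one of the two bits set */

	printf("%d\n", HAS_FLAG(flags, HAS_4GB_MODE));			/* 1 */
	printf("%d\n", HAS_FLAG(flags, HAS_4GB_MODE | HAS_BCLK));	/* 0 */
	return 0;
}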
struct mtk_iommu_domain {
struct io_pgtable_cfg cfg;
struct io_pgtable_ops *iop;
for_each_m4u(data) {
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
--- ------- data->base + REG_MMU_INV_SEL);
+++ +++++++ data->base + data->plat_data->inv_sel_reg);
writel_relaxed(F_ALL_INVLD, data->base + REG_MMU_INVALIDATE);
wmb(); /* Make sure the tlb flush all done */
}
for_each_m4u(data) {
spin_lock_irqsave(&data->tlb_lock, flags);
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
--- ------- data->base + REG_MMU_INV_SEL);
+++ +++++++ data->base + data->plat_data->inv_sel_reg);
writel_relaxed(iova, data->base + REG_MMU_INVLD_START_A);
writel_relaxed(iova + size - 1,
struct mtk_iommu_data *data = dev_id;
struct mtk_iommu_domain *dom = data->m4u_dom;
u32 int_state, regval, fault_iova, fault_pa;
--- ------- unsigned int fault_larb, fault_port;
+++ +++++++ unsigned int fault_larb, fault_port, sub_comm = 0;
bool layer, write;
/* Read error info from registers */
}
layer = fault_iova & F_MMU_FAULT_VA_LAYER_BIT;
write = fault_iova & F_MMU_FAULT_VA_WRITE_BIT;
--- ------- fault_larb = F_MMU_INT_ID_LARB_ID(regval);
fault_port = F_MMU_INT_ID_PORT_ID(regval);
--- -------
--- ------- fault_larb = data->plat_data->larbid_remap[fault_larb];
+++ +++++++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_SUB_COMM)) {
+++ +++++++ fault_larb = F_MMU_INT_ID_COMM_ID(regval);
+++ +++++++ sub_comm = F_MMU_INT_ID_SUB_COMM_ID(regval);
+++ +++++++ } else {
+++ +++++++ fault_larb = F_MMU_INT_ID_LARB_ID(regval);
+++ +++++++ }
+++ +++++++ fault_larb = data->plat_data->larbid_remap[fault_larb][sub_comm];
if (report_iommu_fault(&dom->domain, data->dev, fault_iova,
write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ)) {
paddr |= BIT_ULL(32);
/* Synchronize with the tlb_lock */
---------- return dom->iop->map(dom->iop, iova, paddr, size, prot);
++++++++++ return dom->iop->map(dom->iop, iova, paddr, size, prot, gfp);
}
static size_t mtk_iommu_unmap(struct iommu_domain *domain,
return ret;
}
--- ------- if (data->plat_data->m4u_plat == M4U_MT8173)
+++ +++++++ if (data->plat_data->m4u_plat == M4U_MT8173) {
regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
F_MMU_TF_PROT_TO_PROGRAM_ADDR_MT8173;
--- ------- else
--- ------- regval = F_MMU_TF_PROT_TO_PROGRAM_ADDR;
+++ +++++++ } else {
+++ +++++++ regval = readl_relaxed(data->base + REG_MMU_CTRL_REG);
+++ +++++++ regval |= F_MMU_TF_PROT_TO_PROGRAM_ADDR;
+++ +++++++ }
writel_relaxed(regval, data->base + REG_MMU_CTRL_REG);
regval = F_L2_MULIT_HIT_EN |
upper_32_bits(data->protect_base);
writel_relaxed(regval, data->base + REG_MMU_IVRP_PADDR);
--- ------- if (data->enable_4GB && data->plat_data->has_vld_pa_rng) {
+++ +++++++ if (data->enable_4GB &&
+++ +++++++ MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_VLD_PA_RNG)) {
/*
* If 4GB mode is enabled, the validate PA range is from
* 0x1_0000_0000 to 0x1_ffff_ffff. here record bit[32:30].
writel_relaxed(regval, data->base + REG_MMU_VLD_PA_RNG);
}
writel_relaxed(0, data->base + REG_MMU_DCM_DIS);
+++ +++++++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, WR_THROT_EN)) {
+++ +++++++ /* write command throttling mode */
+++ +++++++ regval = readl_relaxed(data->base + REG_MMU_WR_LEN_CTRL);
+++ +++++++ regval &= ~F_MMU_WR_THROT_DIS_MASK;
+++ +++++++ writel_relaxed(regval, data->base + REG_MMU_WR_LEN_CTRL);
+++ +++++++ }
--- ------- if (data->plat_data->reset_axi)
--- ------- writel_relaxed(0, data->base + REG_MMU_STANDARD_AXI_MODE);
+++ +++++++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, RESET_AXI)) {
+++ +++++++ /* The register is called STANDARD_AXI_MODE in this case */
+++ +++++++ regval = 0;
+++ +++++++ } else {
+++ +++++++ regval = readl_relaxed(data->base + REG_MMU_MISC_CTRL);
+++ +++++++ regval &= ~F_MMU_STANDARD_AXI_MODE_MASK;
+++ +++++++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, OUT_ORDER_WR_EN))
+++ +++++++ regval &= ~F_MMU_IN_ORDER_WR_EN_MASK;
+++ +++++++ }
+++ +++++++ writel_relaxed(regval, data->base + REG_MMU_MISC_CTRL);
if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0,
dev_name(data->dev), (void *)data)) {
/* Whether the current dram is over 4GB */
data->enable_4GB = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
--- ------- if (!data->plat_data->has_4gb_mode)
+++ +++++++ if (!MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_4GB_MODE))
data->enable_4GB = false;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (data->irq < 0)
return data->irq;
--- ------- if (data->plat_data->has_bclk) {
+++ +++++++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_BCLK)) {
data->bclk = devm_clk_get(dev, "bclk");
if (IS_ERR(data->bclk))
return PTR_ERR(data->bclk);
struct mtk_iommu_suspend_reg *reg = &data->reg;
void __iomem *base = data->base;
--- ------- reg->standard_axi_mode = readl_relaxed(base +
--- ------- REG_MMU_STANDARD_AXI_MODE);
+++ +++++++ reg->wr_len_ctrl = readl_relaxed(base + REG_MMU_WR_LEN_CTRL);
+++ +++++++ reg->misc_ctrl = readl_relaxed(base + REG_MMU_MISC_CTRL);
reg->dcm_dis = readl_relaxed(base + REG_MMU_DCM_DIS);
reg->ctrl_reg = readl_relaxed(base + REG_MMU_CTRL_REG);
reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL0);
dev_err(data->dev, "Failed to enable clk(%d) in resume\n", ret);
return ret;
}
--- ------- writel_relaxed(reg->standard_axi_mode,
--- ------- base + REG_MMU_STANDARD_AXI_MODE);
+++ +++++++ writel_relaxed(reg->wr_len_ctrl, base + REG_MMU_WR_LEN_CTRL);
+++ +++++++ writel_relaxed(reg->misc_ctrl, base + REG_MMU_MISC_CTRL);
writel_relaxed(reg->dcm_dis, base + REG_MMU_DCM_DIS);
writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG);
writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0);
static const struct mtk_iommu_plat_data mt2712_data = {
.m4u_plat = M4U_MT2712,
--- ------- .has_4gb_mode = true,
--- ------- .has_bclk = true,
--- ------- .has_vld_pa_rng = true,
--- ------- .larbid_remap = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
+++ +++++++ .flags = HAS_4GB_MODE | HAS_BCLK | HAS_VLD_PA_RNG,
+++ +++++++ .inv_sel_reg = REG_MMU_INV_SEL_GEN1,
+++ +++++++ .larbid_remap = {{0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}},
+++ +++++++};
+++ +++++++
+++ +++++++static const struct mtk_iommu_plat_data mt6779_data = {
+++ +++++++ .m4u_plat = M4U_MT6779,
+++ +++++++ .flags = HAS_SUB_COMM | OUT_ORDER_WR_EN | WR_THROT_EN,
+++ +++++++ .inv_sel_reg = REG_MMU_INV_SEL_GEN2,
+++ +++++++ .larbid_remap = {{0}, {1}, {2}, {3}, {5}, {7, 8}, {10}, {9}},
};
static const struct mtk_iommu_plat_data mt8173_data = {
.m4u_plat = M4U_MT8173,
--- ------- .has_4gb_mode = true,
--- ------- .has_bclk = true,
--- ------- .reset_axi = true,
--- ------- .larbid_remap = {0, 1, 2, 3, 4, 5}, /* Linear mapping. */
+++ +++++++ .flags = HAS_4GB_MODE | HAS_BCLK | RESET_AXI,
+++ +++++++ .inv_sel_reg = REG_MMU_INV_SEL_GEN1,
+++ +++++++ .larbid_remap = {{0}, {1}, {2}, {3}, {4}, {5}}, /* Linear mapping. */
};
static const struct mtk_iommu_plat_data mt8183_data = {
.m4u_plat = M4U_MT8183,
--- ------- .reset_axi = true,
--- ------- .larbid_remap = {0, 4, 5, 6, 7, 2, 3, 1},
+++ +++++++ .flags = RESET_AXI,
+++ +++++++ .inv_sel_reg = REG_MMU_INV_SEL_GEN1,
+++ +++++++ .larbid_remap = {{0}, {4}, {5}, {6}, {7}, {2}, {3}, {1}},
};
static const struct of_device_id mtk_iommu_of_ids[] = {
{ .compatible = "mediatek,mt2712-m4u", .data = &mt2712_data},
+++ +++++++ { .compatible = "mediatek,mt6779-m4u", .data = &mt6779_data},
{ .compatible = "mediatek,mt8173-m4u", .data = &mt8173_data},
{ .compatible = "mediatek,mt8183-m4u", .data = &mt8183_data},
{}
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/spinlock.h>
+++ +++++++#include <linux/dma-mapping.h>
#include <soc/mediatek/smi.h>
+++ +++++++#define MTK_LARB_COM_MAX 8
+++ +++++++#define MTK_LARB_SUBCOM_MAX 4
+++ +++++++
struct mtk_iommu_suspend_reg {
--- ------- u32 standard_axi_mode;
+++ +++++++ union {
+++ +++++++ u32 standard_axi_mode;/* v1 */
+++ +++++++ u32 misc_ctrl;/* v2 */
+++ +++++++ };
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
u32 int_main_control;
u32 ivrp_paddr;
u32 vld_pa_rng;
+++ +++++++ u32 wr_len_ctrl;
};
enum mtk_iommu_plat {
M4U_MT2701,
M4U_MT2712,
+++ +++++++ M4U_MT6779,
M4U_MT8173,
M4U_MT8183,
};
struct mtk_iommu_plat_data {
enum mtk_iommu_plat m4u_plat;
--- ------- bool has_4gb_mode;
--- -------
--- ------- /* HW will use the EMI clock if there isn't the "bclk". */
--- ------- bool has_bclk;
--- ------- bool has_vld_pa_rng;
--- ------- bool reset_axi;
--- ------- unsigned char larbid_remap[MTK_LARB_NR_MAX];
+++ +++++++ u32 flags;
+++ +++++++ u32 inv_sel_reg;
+++ +++++++ unsigned char larbid_remap[MTK_LARB_COM_MAX][MTK_LARB_SUBCOM_MAX];
};
struct mtk_iommu_domain;
struct iommu_device iommu;
const struct mtk_iommu_plat_data *plat_data;
++++++++++ struct dma_iommu_mapping *mapping; /* For mtk_iommu_v1.c */
++++++++++
struct list_head list;
struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX];
};
* omap iommu: tlb and pagetable primitives
*
* Copyright (C) 2008-2010 Nokia Corporation
---- ------ * Copyright (C) 2013-2017 Texas Instruments Incorporated - http://www.ti.com/
++++ ++++++ * Copyright (C) 2013-2017 Texas Instruments Incorporated - https://www.ti.com/
*
* Paul Mundt and Toshihiro Kobayashi
**/
void omap_iommu_save_ctx(struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
struct omap_iommu *obj;
u32 *p;
int i;
**/
void omap_iommu_restore_ctx(struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
struct omap_iommu *obj;
u32 *p;
int i;
static int omap_iommu_count(struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
int count = 0;
while (arch_data->iommu_dev) {
static int
omap_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
struct omap_iommu_device *iommu;
struct omap_iommu *oiommu;
int ret = 0;
static void _omap_iommu_detach_dev(struct omap_iommu_domain *omap_domain,
struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
struct omap_iommu_device *iommu = omap_domain->iommus;
struct omap_iommu *oiommu;
int i;
int num_iommus, i;
/*
---------- * Allocate the archdata iommu structure for DT-based devices.
++++++++++ * Allocate the per-device iommu structure for DT-based devices.
*
* TODO: Simplify this when removing non-DT support completely from the
* IOMMU users.
of_node_put(np);
}
---------- dev->archdata.iommu = arch_data;
++++++++++ dev_iommu_priv_set(dev, arch_data);
/*
* use the first IOMMU alone for the sysfs device linking.
static void omap_iommu_release_device(struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
if (!dev->of_node || !arch_data)
return;
---------- dev->archdata.iommu = NULL;
++++++++++ dev_iommu_priv_set(dev, NULL);
kfree(arch_data);
}
static struct iommu_group *omap_iommu_device_group(struct device *dev)
{
---------- struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
++++++++++ struct omap_iommu_arch_data *arch_data = dev_iommu_priv_get(dev);
struct iommu_group *group = ERR_PTR(-EINVAL);
if (!arch_data)
#define SMMU_INTR_SEL_NS 0x2000
++ ++++++++enum qcom_iommu_clk {
++ ++++++++ CLK_IFACE,
++ ++++++++ CLK_BUS,
++ ++++++++ CLK_TBU,
++ ++++++++ CLK_NUM,
++ ++++++++};
++ ++++++++
struct qcom_iommu_ctx;
struct qcom_iommu_dev {
/* IOMMU core code handle */
struct iommu_device iommu;
struct device *dev;
-- -------- struct clk *iface_clk;
-- -------- struct clk *bus_clk;
++ ++++++++ struct clk_bulk_data clks[CLK_NUM];
void __iomem *local_base;
u32 sec_id;
u8 num_ctxs;
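Folding the individual iface/bus clocks into a clk_bulk_data array lets the driver fetch and toggle all of them through the clk_bulk helpers. A hedged sketch of how such an array is typically driven; the clock names ("iface", "bus", "tbu") and the helper functions shown are assumptions for illustration, not taken from this hunk:

#include <linux/clk.h>
#include <linux/device.h>

#define MY_NUM_CLKS 3

static int my_get_clocks(struct device *dev, struct clk_bulk_data *clks)
{
	clks[0].id = "iface";	/* assumed names, matching the enum order */
	clks[1].id = "bus";
	clks[2].id = "tbu";

	/* one call acquires (or defers on) all of the clocks */
	return devm_clk_bulk_get(dev, MY_NUM_CLKS, clks);
}

static int my_enable_clocks(struct clk_bulk_data *clks)
{
	/* prepares and enables all clocks, or none on failure */
	return clk_bulk_prepare_enable(MY_NUM_CLKS, clks);
}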
struct mutex init_mutex; /* Protects iommu pointer */
struct iommu_domain domain;
struct qcom_iommu_dev *iommu;
++++++++++ struct iommu_fwspec *fwspec;
};
static struct qcom_iommu_domain *to_qcom_iommu_domain(struct iommu_domain *dom)
return dev_iommu_priv_get(dev);
}
----------static struct qcom_iommu_ctx * to_ctx(struct device *dev, unsigned asid)
++++++++++static struct qcom_iommu_ctx * to_ctx(struct qcom_iommu_domain *d, unsigned asid)
{
---------- struct qcom_iommu_dev *qcom_iommu = to_iommu(dev);
++++++++++ struct qcom_iommu_dev *qcom_iommu = d->iommu;
if (!qcom_iommu)
return NULL;
return qcom_iommu->ctxs[asid - 1];
static void qcom_iommu_tlb_sync(void *cookie)
{
---------- struct iommu_fwspec *fwspec;
---------- struct device *dev = cookie;
++++++++++ struct qcom_iommu_domain *qcom_domain = cookie;
++++++++++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
unsigned i;
---------- fwspec = dev_iommu_fwspec_get(dev);
----------
for (i = 0; i < fwspec->num_ids; i++) {
---------- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++++++++++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
unsigned int val, ret;
iommu_writel(ctx, ARM_SMMU_CB_TLBSYNC, 0);
static void qcom_iommu_tlb_inv_context(void *cookie)
{
---------- struct device *dev = cookie;
---------- struct iommu_fwspec *fwspec;
++++++++++ struct qcom_iommu_domain *qcom_domain = cookie;
++++++++++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
unsigned i;
---------- fwspec = dev_iommu_fwspec_get(dev);
----------
for (i = 0; i < fwspec->num_ids; i++) {
---------- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++++++++++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
iommu_writel(ctx, ARM_SMMU_CB_S1_TLBIASID, ctx->asid);
}
static void qcom_iommu_tlb_inv_range_nosync(unsigned long iova, size_t size,
size_t granule, bool leaf, void *cookie)
{
---------- struct device *dev = cookie;
---------- struct iommu_fwspec *fwspec;
++++++++++ struct qcom_iommu_domain *qcom_domain = cookie;
++++++++++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
unsigned i, reg;
reg = leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
---------- fwspec = dev_iommu_fwspec_get(dev);
----------
for (i = 0; i < fwspec->num_ids; i++) {
---------- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++++++++++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
size_t s = size;
iova = (iova >> 12) << 12;
};
qcom_domain->iommu = qcom_iommu;
---------- pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, dev);
++++++++++ qcom_domain->fwspec = fwspec;
++++++++++
++++++++++ pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, qcom_domain);
if (!pgtbl_ops) {
dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
ret = -ENOMEM;
domain->geometry.force_aperture = true;
for (i = 0; i < fwspec->num_ids; i++) {
---------- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++++++++++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
if (!ctx->secure_init) {
ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid);
ARM_SMMU_SCTLR_M | ARM_SMMU_SCTLR_S1_ASIDPNE |
ARM_SMMU_SCTLR_CFCFG;
-- -------- if (IS_ENABLED(CONFIG_BIG_ENDIAN))
++ ++++++++ if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
reg |= ARM_SMMU_SCTLR_E;
iommu_writel(ctx, ARM_SMMU_CB_SCTLR, reg);
pm_runtime_get_sync(qcom_iommu->dev);
for (i = 0; i < fwspec->num_ids; i++) {
---------- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++++++++++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
/* Disable the context bank: */
iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
return -ENODEV;
spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
---------- ret = ops->map(ops, iova, paddr, size, prot);
++++++++++ ret = ops->map(ops, iova, paddr, size, prot, GFP_ATOMIC);
spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
return ret;
}
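
The new gfp argument on ops->map() exists so callers like this one, which hold pgtbl_lock with interrupts disabled, can ask for GFP_ATOMIC page-table allocations; a caller running in sleepable context would pass GFP_KERNEL instead. A hypothetical sleepable wrapper (not part of this patch) illustrating the same call:

/* Hypothetical example only: the same map call from a context that may sleep. */
static int example_map_sleepable(struct io_pgtable_ops *ops, unsigned long iova,
				 phys_addr_t paddr, size_t size, int prot)
{
	return ops->map(ops, iova, paddr, size, prot, GFP_KERNEL);
}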
.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
};
-- --------static int qcom_iommu_enable_clocks(struct qcom_iommu_dev *qcom_iommu)
-- --------{
-- -------- int ret;
-- --------
-- -------- ret = clk_prepare_enable(qcom_iommu->iface_clk);
-- -------- if (ret) {
-- -------- dev_err(qcom_iommu->dev, "Couldn't enable iface_clk\n");
-- -------- return ret;
-- -------- }
-- --------
-- -------- ret = clk_prepare_enable(qcom_iommu->bus_clk);
-- -------- if (ret) {
-- -------- dev_err(qcom_iommu->dev, "Couldn't enable bus_clk\n");
-- -------- clk_disable_unprepare(qcom_iommu->iface_clk);
-- -------- return ret;
-- -------- }
-- --------
-- -------- return 0;
-- --------}
-- --------
-- --------static void qcom_iommu_disable_clocks(struct qcom_iommu_dev *qcom_iommu)
-- --------{
-- -------- clk_disable_unprepare(qcom_iommu->bus_clk);
-- -------- clk_disable_unprepare(qcom_iommu->iface_clk);
-- --------}
-- --------
static int qcom_iommu_sec_ptbl_init(struct device *dev)
{
size_t psize = 0;
struct qcom_iommu_dev *qcom_iommu;
struct device *dev = &pdev->dev;
struct resource *res;
++ ++++++++ struct clk *clk;
int ret, max_asid = 0;
/* find the max asid (which is 1:1 to ctx bank idx), so we know how
return PTR_ERR(qcom_iommu->local_base);
}
-- -------- qcom_iommu->iface_clk = devm_clk_get(dev, "iface");
-- -------- if (IS_ERR(qcom_iommu->iface_clk)) {
++ ++++++++ clk = devm_clk_get(dev, "iface");
++ ++++++++ if (IS_ERR(clk)) {
dev_err(dev, "failed to get iface clock\n");
-- -------- return PTR_ERR(qcom_iommu->iface_clk);
++ ++++++++ return PTR_ERR(clk);
}
++ ++++++++ qcom_iommu->clks[CLK_IFACE].clk = clk;
-- -------- qcom_iommu->bus_clk = devm_clk_get(dev, "bus");
-- -------- if (IS_ERR(qcom_iommu->bus_clk)) {
++ ++++++++ clk = devm_clk_get(dev, "bus");
++ ++++++++ if (IS_ERR(clk)) {
dev_err(dev, "failed to get bus clock\n");
-- -------- return PTR_ERR(qcom_iommu->bus_clk);
++ ++++++++ return PTR_ERR(clk);
++ ++++++++ }
++ ++++++++ qcom_iommu->clks[CLK_BUS].clk = clk;
++ ++++++++
++ ++++++++ clk = devm_clk_get_optional(dev, "tbu");
++ ++++++++ if (IS_ERR(clk)) {
++ ++++++++ dev_err(dev, "failed to get tbu clock\n");
++ ++++++++ return PTR_ERR(clk);
}
++ ++++++++ qcom_iommu->clks[CLK_TBU].clk = clk;
if (of_property_read_u32(dev->of_node, "qcom,iommu-secure-id",
&qcom_iommu->sec_id)) {
{
struct qcom_iommu_dev *qcom_iommu = dev_get_drvdata(dev);
-- -------- return qcom_iommu_enable_clocks(qcom_iommu);
++ ++++++++ return clk_bulk_prepare_enable(CLK_NUM, qcom_iommu->clks);
}
static int __maybe_unused qcom_iommu_suspend(struct device *dev)
{
struct qcom_iommu_dev *qcom_iommu = dev_get_drvdata(dev);
-- -------- qcom_iommu_disable_clocks(qcom_iommu);
++ ++++++++ clk_bulk_disable_unprepare(CLK_NUM, qcom_iommu->clks);
return 0;
}
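
The clock handling is converted from individual iface_clk/bus_clk pointers to a clk_bulk_data array, so resume/suspend collapse to single bulk calls that enable in order and unwind automatically on failure. A minimal, illustrative sketch of that pattern follows; note the patch itself fills the array with individual devm_clk_get()/devm_clk_get_optional() calls because only the "tbu" clock is optional:

#include <linux/clk.h>

static struct clk_bulk_data example_clks[] = {
	{ .id = "iface" },
	{ .id = "bus" },
	{ .id = "tbu" },
};

static int example_clocks_on(struct device *dev)
{
	int ret;

	/* Look up all clocks at once; missing optional clocks stay NULL. */
	ret = devm_clk_bulk_get_optional(dev, ARRAY_SIZE(example_clks),
					 example_clks);
	if (ret)
		return ret;

	/* Prepares and enables every clock, rolling back on any error. */
	return clk_bulk_prepare_enable(ARRAY_SIZE(example_clks), example_clks);
}

static void example_clocks_off(void)
{
	clk_bulk_disable_unprepare(ARRAY_SIZE(example_clks), example_clks);
}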
* if the IOMMU page table format is equivalent.
*/
#define IOMMU_PRIV (1 << 5)
------ ----/*
------ ---- * Non-coherent masters can use this page protection flag to set cacheable
------ ---- * memory attributes for only a transparent outer level of cache, also known as
------ ---- * the last-level or system cache.
------ ---- */
------ ----#define IOMMU_SYS_CACHE_ONLY (1 << 6)
struct iommu_ops;
struct iommu_group;
extern void iommu_set_fault_handler(struct iommu_domain *domain,
iommu_fault_handler_t handler, void *token);
---------- /**
---------- * iommu_map_sgtable - Map the given buffer to the IOMMU domain
---------- * @domain: The IOMMU domain to perform the mapping
---------- * @iova: The start address to map the buffer
---------- * @sgt: The sg_table object describing the buffer
---------- * @prot: IOMMU protection bits
---------- *
---------- * Creates a mapping at @iova for the buffer described by a scatterlist
---------- * stored in the given sg_table object in the provided IOMMU domain.
---------- */
---------- static inline size_t iommu_map_sgtable(struct iommu_domain *domain,
---------- unsigned long iova, struct sg_table *sgt, int prot)
---------- {
---------- return iommu_map_sg(domain, iova, sgt->sgl, sgt->orig_nents, prot);
---------- }
----------
extern void iommu_get_resv_regions(struct device *dev, struct list_head *list);
extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
extern void generic_iommu_put_resv_regions(struct device *dev,
}
#endif /* CONFIG_IOMMU_API */
++++++++++ /**
++++++++++ * iommu_map_sgtable - Map the given buffer to the IOMMU domain
++++++++++ * @domain: The IOMMU domain to perform the mapping
++++++++++ * @iova: The start address to map the buffer
++++++++++ * @sgt: The sg_table object describing the buffer
++++++++++ * @prot: IOMMU protection bits
++++++++++ *
++++++++++ * Creates a mapping at @iova for the buffer described by a scatterlist
++++++++++ * stored in the given sg_table object in the provided IOMMU domain.
++++++++++ */
++++++++++ static inline size_t iommu_map_sgtable(struct iommu_domain *domain,
++++++++++ unsigned long iova, struct sg_table *sgt, int prot)
++++++++++ {
++++++++++ return iommu_map_sg(domain, iova, sgt->sgl, sgt->orig_nents, prot);
++++++++++ }
++++++++++
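
With the helper now defined outside the #ifdef CONFIG_IOMMU_API block, it remains usable (falling back to the iommu_map_sg() stub) even when the IOMMU API is compiled out. A hypothetical caller, only to illustrate that the return value is the number of bytes actually mapped:

/* Hypothetical driver helper (not part of this patch). */
static int example_map_buffer(struct iommu_domain *domain, unsigned long iova,
			      struct sg_table *sgt, size_t buf_size)
{
	size_t mapped = iommu_map_sgtable(domain, iova, sgt,
					  IOMMU_READ | IOMMU_WRITE);

	return mapped < buf_size ? -ENOMEM : 0;
}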
#ifdef CONFIG_IOMMU_DEBUGFS
extern struct dentry *iommu_debugfs_dir;
void iommu_debugfs_setup(void);