S: Supported
F: drivers/input/misc/adxl34x.c
-AEDSP16 DRIVER
-S: Maintained
-F: sound/oss/aedsp16.c
-
AF9013 MEDIA DRIVER
F: include/linux/altera_jtaguart.h
AMAZON ETHERNET DRIVERS
-M: Netanel Belgazal <netanel@annapurnalabs.com>
-R: Saeed Bishara <saeed@annapurnalabs.com>
-R: Zorik Machulsky <zorik@annapurnalabs.com>
+M: Netanel Belgazal <netanel@amazon.com>
+R: Saeed Bishara <saeedb@amazon.com>
+R: Zorik Machulsky <zorik@amazon.com>
S: Supported
F: Documentation/networking/ena.txt
F: drivers/staging/android/
ANDROID GOLDFISH RTC DRIVER
-M: Miodrag Dinic <miodrag.dinic@imgtec.com>
+M: Miodrag Dinic <miodrag.dinic@mips.com>
S: Supported
F: Documentation/devicetree/bindings/rtc/google,goldfish-rtc.txt
F: drivers/rtc/rtc-goldfish.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-uniphier.git
S: Maintained
+F: Documentation/devicetree/bindings/gpio/gpio-uniphier.txt
F: arch/arm/boot/dts/uniphier*
F: arch/arm/include/asm/hardware/cache-uniphier.h
F: arch/arm/mach-uniphier/
F: arch/arm64/boot/dts/socionext/
F: drivers/bus/uniphier-system-bus.c
F: drivers/clk/uniphier/
+F: drivers/gpio/gpio-uniphier.c
F: drivers/i2c/busses/i2c-uniphier*
F: drivers/irqchip/irq-uniphier-aidet.c
F: drivers/pinctrl/uniphier/
F: include/linux/async_tx.h
AT24 EEPROM DRIVER
S: Maintained
F: drivers/misc/eeprom/at24.c
F: drivers/net/hamradio/baycom*
BCACHE (BLOCK LAYER CACHE)
W: http://bcache.evilpiepirate.org
-S: Orphan
+C: irc://irc.oftc.net/bcache
+S: Maintained
F: drivers/md/bcache/
BDISP ST MEDIA DRIVER
S: Supported
F: arch/x86/net/bpf_jit*
F: Documentation/networking/filter.txt
+F: Documentation/bpf/
F: include/linux/bpf*
F: include/linux/filter.h
F: include/uapi/linux/bpf*
F: net/sched/act_bpf.c
F: net/sched/cls_bpf.c
F: samples/bpf/
-F: tools/net/bpf*
+F: tools/bpf/
F: tools/testing/selftests/bpf/
BROADCOM B44 10/100 ETHERNET DRIVER
F: drivers/gpio/gpio-brcmstb.c
F: Documentation/devicetree/bindings/gpio/brcm,brcmstb-gpio.txt
+BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
+S: Maintained
+F: drivers/phy/broadcom/phy-brcm-usb*
+
BROADCOM GENET ETHERNET DRIVER
S: Supported
CA8210 IEEE-802.15.4 RADIO DRIVER
W: https://github.com/Cascoda/ca8210-linux.git
S: Maintained
F: drivers/auxdisplay/cfag12864bfb.c
F: include/linux/cfag12864b.h
-CFG80211 and NL80211
+802.11 (including CFG80211/NL80211)
W: http://wireless.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
+F: net/wireless/
F: include/uapi/linux/nl80211.h
+F: include/linux/ieee80211.h
+F: include/net/wext.h
F: include/net/cfg80211.h
-F: net/wireless/*
-X: net/wireless/wext*
+F: include/net/iw_handler.h
+F: include/net/ieee80211_radiotap.h
+F: Documentation/driver-api/80211/cfg80211.rst
+F: Documentation/networking/regulatory.txt
CHAR and MISC DRIVERS
CISCO VIC ETHERNET NIC DRIVER
-M: Neel Patel <neepatel@cisco.com>
+M: Parvi Kaustubhi <pkaustub@cisco.com>
S: Supported
F: drivers/net/ethernet/cisco/enic/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
S: Supported
-F: drivers/clocksource
+F: drivers/clocksource/
+F: Documentation/devicetree/bindings/timer/
CMPC ACPI DRIVER
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git misc
W: http://coccinelle.lip6.fr/
S: Maintained
F: Documentation/cgroup-v1/cpusets.txt
F: include/linux/cpuset.h
-F: kernel/cpuset.c
+F: kernel/cgroup/cpuset.c
CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
CPU POWER MONITORING SUBSYSTEM
S: Maintained
F: tools/power/cpupower/
T: quilt http://people.redhat.com/agk/patches/linux/editing/
S: Maintained
F: Documentation/device-mapper/
+F: drivers/md/Makefile
+F: drivers/md/Kconfig
F: drivers/md/dm*
F: drivers/md/persistent-data/
F: include/linux/device-mapper.h
F: drivers/dma/
F: include/linux/dmaengine.h
F: Documentation/devicetree/bindings/dma/
-F: Documentation/dmaengine/
+F: Documentation/driver-api/dmaengine/
T: git git://git.infradead.org/users/vkoul/slave-dma.git
DMA MAPPING HELPERS
S: Maintained
F: drivers/edac/highbank*
-EDAC-CAVIUM
+EDAC-CAVIUM OCTEON
S: Supported
F: drivers/edac/octeon_edac*
+
+EDAC-CAVIUM THUNDERX
+S: Supported
F: drivers/edac/thunderx_edac*
EDAC-CORE
Extended Verification Module (EVM)
S: Supported
F: security/integrity/evm/
S: Maintained
F: include/linux/fcntl.h
-F: include/linux/fs.h
F: include/uapi/linux/fcntl.h
-F: include/uapi/linux/fs.h
F: fs/fcntl.c
F: fs/locks.c
S: Maintained
F: fs/*
+F: include/linux/fs.h
+F: include/uapi/linux/fs.h
FINTEK F75375S HARDWARE MONITOR AND FAN CONTROLLER DRIVER
FREESCALE CAAM (Cryptographic Acceleration and Assurance Module) DRIVER
-M: Dan Douglass <dan.douglass@nxp.com>
+M: Aymen Sghaier <aymen.sghaier@nxp.com>
S: Maintained
F: drivers/crypto/caam/
S: Supported
F: fs/crypto/
F: include/linux/fscrypt*.h
+F: Documentation/filesystems/fscrypt.rst
FUJITSU FR-V (FRV) PORT
S: Orphan
F: drivers/net/ethernet/hisilicon/
F: Documentation/devicetree/bindings/net/hisilicon*.txt
+HISILICON PMU DRIVER
+W: http://www.hisilicon.com
+S: Supported
+F: drivers/perf/hisilicon
+F: Documentation/perf/hisi-pmu.txt
+
HISILICON ROCE DRIVER
F: Documentation/networking/ieee802154.txt
IFE PROTOCOL
-M: Yotam Gigi <yotamg@mellanox.com>
+M: Yotam Gigi <yotam.gi@gmail.com>
F: net/ife
F: include/net/ife.h
F: drivers/usb/atm/ueagle-atm.c
IMGTEC ASCII LCD DRIVER
-M: Paul Burton <paul.burton@imgtec.com>
+M: Paul Burton <paul.burton@mips.com>
S: Maintained
F: Documentation/devicetree/bindings/auxdisplay/img-ascii-lcd.txt
F: drivers/auxdisplay/img-ascii-lcd.c
INFINIBAND SUBSYSTEM
W: http://www.openfabrics.org/
Q: http://patchwork.kernel.org/project/linux-rdma/list/
INTEGRITY MEASUREMENT ARCHITECTURE (IMA)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity.git
S: Supported
F: security/integrity/ima/
F: scripts/Makefile.kasan
KCONFIG
-T: git git://gitorious.org/linux-kconfig/linux-kconfig
-S: Maintained
+S: Orphan
F: Documentation/kbuild/kconfig-language.txt
F: scripts/kconfig/
KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
S: Maintained
F: arch/mips/kvm/
KERNEL VIRTUAL MACHINE FOR POWERPC (KVM/powerpc)
W: http://www.linux-kvm.org/
T: git git://github.com/agraf/linux-2.6.git
KEYS-ENCRYPTED
S: Supported
F: Documentation/security/keys/trusted-encrypted.rst
F: security/keys/encrypted-keys/
KEYS-TRUSTED
-L: linux-security-module@vger.kernel.org
+L: linux-integrity@vger.kernel.org
S: Supported
F: Documentation/security/keys/trusted-encrypted.rst
F: Documentation/scsi/53c700.txt
F: drivers/scsi/53c700*
+LEAKING_ADDRESSES
+S: Maintained
+F: scripts/leaking_addresses.pl
+
LED SUBSYSTEM
F: include/net/mac80211.h
F: net/mac80211/
F: drivers/net/wireless/mac80211_hwsim.[ch]
+F: Documentation/networking/mac80211_hwsim/README
MAILBOX API
F: drivers/net/ethernet/mellanox/mlxsw/
MELLANOX FIRMWARE FLASH LIBRARY (mlxfw)
S: Supported
W: http://www.mellanox.com
F: arch/mips/
MIPS BOSTON DEVELOPMENT BOARD
-M: Paul Burton <paul.burton@imgtec.com>
+M: Paul Burton <paul.burton@mips.com>
S: Maintained
F: Documentation/devicetree/bindings/clock/img,boston-clock.txt
F: include/dt-bindings/clock/boston-clock.h
MIPS GENERIC PLATFORM
-M: Paul Burton <paul.burton@imgtec.com>
+M: Paul Burton <paul.burton@mips.com>
S: Supported
F: arch/mips/generic/
F: drivers/*/*/*loongson1*
MIPS RINT INSTRUCTION EMULATION
-M: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
+M: Aleksandar Markovic <aleksandar.markovic@mips.com>
S: Supported
F: arch/mips/math-emu/sp_rint.c
F: include/linux/mux/
F: drivers/mux/
-MULTISOUND SOUND DRIVER
-S: Maintained
-F: Documentation/sound/oss/MultiSound
-F: sound/oss/msnd*
-
MULTITECH MULTIPORT CARD (ISICOM)
S: Orphan
F: drivers/tty/isicom.c
MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
S: Maintained
F: drivers/usb/musb/
S: Maintained
F: net/dsa/
F: include/net/dsa.h
+F: include/linux/dsa/
F: drivers/net/dsa/
NETWORKING [GENERAL]
F: include/uapi/linux/net.h
F: include/uapi/linux/netdevice.h
F: include/uapi/linux/net_namespace.h
-F: tools/net/
F: tools/testing/selftests/net/
+F: lib/net_utils.c
F: lib/random32.c
NETWORKING [IPSEC]
W: http://openrisc.io
S: Maintained
+F: Documentation/devicetree/bindings/openrisc/
+F: Documentation/openrisc/
F: arch/openrisc/
+F: drivers/irqchip/irq-ompic.c
+F: drivers/irqchip/irq-or1k-*
OPENVSWITCH
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm.git
-F: drivers/base/power/opp/
+F: drivers/opp/
F: include/linux/pm_opp.h
F: Documentation/power/opp.txt
F: Documentation/devicetree/bindings/opp/
PARAVIRT_OPS INTERFACE
PCI DRIVER FOR MICROSEMI SWITCHTEC
S: Maintained
PCI ENDPOINT SUBSYSTEM
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/pci-endpoint.git
S: Supported
F: arch/x86/pci/
F: arch/x86/kernel/quirks.c
+PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
+Q: http://patchwork.ozlabs.org/project/linux-pci/list/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
+S: Supported
+F: drivers/pci/host/
+F: drivers/pci/dwc/
+
PCIE DRIVER FOR AXIS ARTPEC
PCIE DRIVER FOR HISILICON
S: Maintained
F: Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
S: Supported
F: drivers/pinctrl/spear/
PISTACHIO SOC SUPPORT
-S: Maintained
+S: Odd Fixes
F: arch/mips/pistachio/
F: arch/mips/include/asm/mach-pistachio/
F: arch/mips/boot/dts/img/pistachio*
F: drivers/block/ps3vram.c
PSAMPLE PACKET SAMPLING SUPPORT
-M: Yotam Gigi <yotamg@mellanox.com>
+M: Yotam Gigi <yotam.gi@gmail.com>
S: Maintained
F: net/psample
F: include/net/psample.h
QAT DRIVER
S: Supported
F: drivers/crypto/qat/
QLOGIC QL4xxx RDMA DRIVER
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: Documentation/rfkill.txt
+F: Documentation/ABI/stable/sysfs-class-rfkill
F: net/rfkill/
RHASHTABLE
F: drivers/mtd/nand/r852.c
F: drivers/mtd/nand/r852.h
+RISC-V ARCHITECTURE
+T: git https://github.com/riscv/riscv-linux
+S: Supported
+F: arch/riscv/
+K: riscv
+N: riscv
+
ROCCAT DRIVERS
W: http://sourceforge.net/projects/roccat/
S: Maintained
F: drivers/crypto/exynos-rng.c
-F: Documentation/devicetree/bindings/rng/samsung,exynos-rng4.txt
+F: Documentation/devicetree/bindings/crypto/samsung,exynos-rng4.txt
SAMSUNG FRAMEBUFFER DRIVER
S: Maintained
F: drivers/mmc/host/sdhci-spear.c
+SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) TI OMAP DRIVER
+S: Maintained
+F: drivers/mmc/host/sdhci-omap.c
+
SECURE ENCRYPTING DEVICE (SED) OPAL DRIVER
S: Supported
F: block/sed*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shli/md.git
S: Supported
-F: drivers/md/
+F: drivers/md/Makefile
+F: drivers/md/Kconfig
+F: drivers/md/md*
+F: drivers/md/raid*
F: include/linux/raid/
F: include/uapi/linux/raid/
F: arch/arc/boot/dts/ax*
F: Documentation/devicetree/bindings/arc/axs10*
+SYNOPSYS DESIGNWARE APB GPIO DRIVER
+S: Maintained
+F: drivers/gpio/gpio-dwapb.c
+F: Documentation/devicetree/bindings/gpio/snps-dwapb-gpio.txt
+
SYNOPSYS DESIGNWARE DMAC DRIVER
S: Maintained
F: include/linux/dma/dw.h
F: include/linux/platform_data/dma-dw.h
S: Maintained
F: drivers/thunderbolt/
+F: include/linux/thunderbolt.h
+
+THUNDERBOLT NETWORK DRIVER
+S: Maintained
+F: drivers/net/thunderbolt.c
THUNDERX GPIO DRIVER
TPM DEVICE DRIVER
-W: http://tpmdd.sourceforge.net
-Q: https://patchwork.kernel.org/project/tpmdd-devel/list/
+Q: https://patchwork.kernel.org/project/linux-integrity/list/
T: git git://git.infradead.org/users/jjs/linux-tpmdd.git
S: Maintained
F: drivers/char/tpm/
-TPM IBM_VTPM DEVICE DRIVER
-W: http://tpmdd.sourceforge.net
-S: Maintained
-F: drivers/char/tpm/tpm_ibmvtpm*
-
TRACING
S: Maintained
-F: drivers/hid/hid-udraw.c
+F: drivers/hid/hid-udraw-ps3.c
UFS FILESYSTEM
F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h
F: include/uapi/linux/vsockmon.h
+F: include/uapi/linux/vm_sockets_diag.h
+F: net/vmw_vsock/diag.c
F: net/vmw_vsock/af_vsock_tap.c
F: net/vmw_vsock/virtio_transport_common.c
F: net/vmw_vsock/virtio_transport.c
F: drivers/net/vsockmon.c
F: drivers/vhost/vsock.c
F: drivers/vhost/vsock.h
+F: tools/testing/vsock/
VIRTIO CONSOLE DRIVER
S: Supported
F: drivers/s390/virtio/
+F: arch/s390/include/uapi/asm/virtio-ccw.h
VIRTIO GPU DRIVER
S: Supported
W: http://wireless.kernel.org/en/users/Drivers/wil6210
F: drivers/net/wireless/ath/wil6210/
-F: include/uapi/linux/wil6210_uapi.h
WIMAX STACK
F: Documentation/devicetree/bindings/regulator/arizona-regulator.txt
F: Documentation/devicetree/bindings/mfd/arizona.txt
F: Documentation/devicetree/bindings/mfd/wm831x.txt
+F: Documentation/devicetree/bindings/sound/wlf,arizona.txt
F: arch/arm/mach-s3c64xx/mach-crag6410*
F: drivers/clk/clk-wm83*.c
F: drivers/extcon/extcon-arizona.c
+# SPDX-License-Identifier: GPL-2.0
infiniband-$(CONFIG_INFINIBAND_ADDR_TRANS) := rdma_cm.o
user_access-$(CONFIG_INFINIBAND_ADDR_TRANS) := rdma_ucm.o
security.o nldev.o
ib_core-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
- ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o umem_rbtree.o
+ ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o
ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o
ib_cm-y := cm.o
{
struct send_context_info *sci;
struct send_context *sc = NULL;
- int req_type = type;
dma_addr_t dma;
unsigned long flags;
u64 reg;
return NULL;
}
- /*
- * VNIC contexts are dynamically allocated.
- * Hence, pick a user context for VNIC.
- */
- if (type == SC_VNIC)
- type = SC_USER;
-
spin_lock_irqsave(&dd->sc_lock, flags);
ret = sc_hw_alloc(dd, type, &sw_index, &hw_context);
if (ret) {
return NULL;
}
- /*
- * VNIC contexts are used by kernel driver.
- * Hence, mark them as kernel contexts.
- */
- if (req_type == SC_VNIC) {
- dd->send_contexts[sw_index].type = SC_KERNEL;
- type = SC_KERNEL;
- }
-
sci = &dd->send_contexts[sw_index];
sci->sc = sc;
goto done;
}
/* copy from receiver cache line and recalculate */
- sc->alloc_free = ACCESS_ONCE(sc->free);
+ sc->alloc_free = READ_ONCE(sc->free);
avail =
(unsigned long)sc->credits -
(sc->fill - sc->alloc_free);
if (blocks > avail) {
/* still no room, actively update */
sc_release_update(sc);
- sc->alloc_free = ACCESS_ONCE(sc->free);
+ sc->alloc_free = READ_ONCE(sc->free);
trycount++;
goto retry;
}
/* call sent buffer callbacks */
code = -1; /* code not yet set */
- head = ACCESS_ONCE(sc->sr_head); /* snapshot the head */
+ head = READ_ONCE(sc->sr_head); /* snapshot the head */
tail = sc->sr_tail;
while (head != tail) {
pbuf = &sc->sr[tail].pbuf;
again:
smp_read_barrier_depends(); /* see post_one_send() */
- if (sqp->s_last == ACCESS_ONCE(sqp->s_head))
+ if (sqp->s_last == READ_ONCE(sqp->s_head))
goto clr_busy;
wqe = rvt_get_swqe_ptr(sqp, sqp->s_last);
wc.byte_len = wqe->length;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
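+ /* ib_wc consumers expect a 16-bit LID; mask off any OPA extended-LID bits */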
- wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr);
+ wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr) & U16_MAX;
wc.sl = rdma_ah_get_sl(&qp->remote_ah_attr);
wc.port_num = 1;
/* Signal completion event if the solicited bit is set. */
{
struct hfi1_qp_priv *priv = qp->priv;
struct hfi1_ibport *ibp = ps->ibp;
- struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
u32 bth1 = 0;
u16 pkey = hfi1_get_pkey(ibp, qp->s_pkey_index);
u16 lrh0 = HFI1_LRH_BTH;
- u16 slid;
u8 extra_bytes = -ps->s_txreq->s_cur_size & 3;
u32 nwords = SIZE_OF_CRC + ((ps->s_txreq->s_cur_size +
extra_bytes) >> 2);
bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
}
hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
-
- if (!ppd->lid)
- slid = be16_to_cpu(IB_LID_PERMISSIVE);
- else
- slid = ppd->lid |
- (rdma_ah_get_path_bits(&qp->remote_ah_attr) &
- ((1 << ppd->lmc) - 1));
hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh,
lrh0,
qp->s_hdrwords + nwords,
}
}
- static void sdma_err_progress_check(unsigned long data)
+ static void sdma_err_progress_check(struct timer_list *t)
{
unsigned index;
- struct sdma_engine *sde = (struct sdma_engine *)data;
+ struct sdma_engine *sde = from_timer(sde, t, err_progress_check_timer);
dd_dev_err(sde->dd, "SDE progress check event\n");
for (index = 0; index < sde->dd->num_sdma; index++) {
return ret;
idle_cnt = ns_to_cclock(dd, idle_cnt);
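+ /* choose the engine's default descriptor flag: head-to-host writeback when an idle interval is set, else per-descriptor interrupt requests */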
+ if (idle_cnt)
+ dd->default_desc1 =
+ SDMA_DESC1_HEAD_TO_HOST_FLAG;
+ else
+ dd->default_desc1 =
+ SDMA_DESC1_INT_REQ_FLAG;
+
if (!sdma_desct_intr)
sdma_desct_intr = SDMA_DESC_INTR;
sde->tail_csr =
get_kctxt_csr_addr(dd, this_idx, SD(TAIL));
- if (idle_cnt)
- dd->default_desc1 =
- SDMA_DESC1_HEAD_TO_HOST_FLAG;
- else
- dd->default_desc1 =
- SDMA_DESC1_INT_REQ_FLAG;
-
tasklet_init(&sde->sdma_hw_clean_up_task, sdma_hw_clean_up_task,
(unsigned long)sde);
sde->progress_check_head = 0;
- setup_timer(&sde->err_progress_check_timer,
- sdma_err_progress_check, (unsigned long)sde);
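+ /* timer_setup() passes the timer itself to the callback, which recovers sde via from_timer() */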
+ timer_setup(&sde->err_progress_check_timer,
+ sdma_err_progress_check, 0);
sde->descq = dma_zalloc_coherent(
&dd->pcidev->dev,
if (!sde->descq)
goto bail;
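+ /* kvzalloc_node() tries kmalloc first and falls back to vmalloc, keeping the ring on the device's NUMA node */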
sde->tx_ring =
- kcalloc(descq_cnt, sizeof(struct sdma_txreq *),
- GFP_KERNEL);
- if (!sde->tx_ring)
- sde->tx_ring =
- vzalloc(
- sizeof(struct sdma_txreq *) *
- descq_cnt);
+ kvzalloc_node(sizeof(struct sdma_txreq *) * descq_cnt,
+ GFP_KERNEL, dd->node);
if (!sde->tx_ring)
goto bail;
}
swhead = sde->descq_head & sde->sdma_mask;
/* this code is really bad for cache line trading */
- swtail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask;
+ swtail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
cnt = sde->descq_cnt;
if (swhead < swtail)
if ((status & sde->idle_mask) && !idle_check_done) {
u16 swtail;
- swtail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask;
+ swtail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
if (swtail != hwhead) {
hwhead = (u16)read_sde_csr(sde, SD(HEAD));
idle_check_done = 1;
static void dump_sdma_state(struct sdma_engine *sde)
{
- struct hw_sdma_desc *descq;
struct hw_sdma_desc *descqp;
u64 desc[2];
u64 addr;
head = sde->descq_head & sde->sdma_mask;
tail = sde->descq_tail & sde->sdma_mask;
cnt = sdma_descq_freecnt(sde);
- descq = sde->descq;
dd_dev_err(sde->dd,
"SDMA (%u) descq_head: %u descq_tail: %u freecnt: %u FLE %d\n",
u16 len;
head = sde->descq_head & sde->sdma_mask;
- tail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask;
+ tail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
seq_printf(s, SDE_FMT, sde->this_idx,
sde->cpu,
sdma_state_name(sde->state.current_state),
* 7220, e.g.
*/
ss->go_s99_running = 1;
- /* fall through and start dma engine */
+ /* fall through -- and start dma engine */
case sdma_event_e10_go_hw_start:
/* This reference means the state machine is started */
sdma_get(&sde->state);
case sdma_event_e60_hw_halted:
need_progress = 1;
sdma_err_progress_check_schedule(sde);
+ /* fall through */
case sdma_event_e90_sw_halted:
/*
* SW initiated halt does not perform engines
return -EINVAL;
}
while (1) {
- nr = ffz(ACCESS_ONCE(sde->ahg_bits));
+ nr = ffz(READ_ONCE(sde->ahg_bits));
if (nr > 31) {
trace_hfi1_ahg_allocate(sde, -ENOSPC);
return -ENOSPC;
goto bail;
/* We are in the error state, flush the work request. */
smp_read_barrier_depends(); /* see post_one_send() */
- if (qp->s_last == ACCESS_ONCE(qp->s_head))
+ if (qp->s_last == READ_ONCE(qp->s_head))
goto bail;
/* If DMAs are in progress, we can't flush immediately. */
if (iowait_sdma_pending(&priv->s_iowait)) {
goto done_free_tx;
}
- ps->s_txreq->phdr.hdr.hdr_type = priv->hdr_type;
if (priv->hdr_type == HFI1_PKT_TYPE_9B) {
/* header size in 32-bit words LRH+BTH = (8+12)/4. */
hwords = 5;
goto bail;
/* Check if send work queue is empty. */
smp_read_barrier_depends(); /* see post_one_send() */
- if (qp->s_cur == ACCESS_ONCE(qp->s_head)) {
+ if (qp->s_cur == READ_ONCE(qp->s_head)) {
clear_ahg(qp);
goto bail;
}
wc.status = IB_WC_SUCCESS;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
- wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr);
+ wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr) & U16_MAX;
/*
* It seems that IB mandates the presence of an SL in a
* work completion only for the UD transport (see section
} else {
wc.pkey_index = 0;
}
- wc.slid = ppd->lid | (rdma_ah_get_path_bits(ah_attr) &
- ((1 << ppd->lmc) - 1));
+ wc.slid = (ppd->lid | (rdma_ah_get_path_bits(ah_attr) &
+ ((1 << ppd->lmc) - 1))) & U16_MAX;
/* Check for loopback when the port lid is not set */
if (wc.slid == 0 && sqp->ibqp.qp_type == IB_QPT_GSI)
wc.slid = be16_to_cpu(IB_LID_PERMISSIVE);
goto bail;
/* We are in the error state, flush the work request. */
smp_read_barrier_depends(); /* see post_one_send */
- if (qp->s_last == ACCESS_ONCE(qp->s_head))
+ if (qp->s_last == READ_ONCE(qp->s_head))
goto bail;
/* If DMAs are in progress, we can't flush immediately. */
if (iowait_sdma_pending(&priv->s_iowait)) {
/* see post_one_send() */
smp_read_barrier_depends();
- if (qp->s_cur == ACCESS_ONCE(qp->s_head))
+ if (qp->s_cur == READ_ONCE(qp->s_head))
goto bail;
wqe = rvt_get_swqe_ptr(qp, qp->s_cur);
int mgmt_pkey_idx = -1;
struct hfi1_ibport *ibp = rcd_to_iport(packet->rcd);
struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
- struct ib_header *hdr = packet->hdr;
void *data = packet->payload;
u32 tlen = packet->tlen;
struct rvt_qp *qp = packet->qp;
dlid_is_permissive = (dlid == permissive_lid);
slid_is_permissive = (slid == permissive_lid);
} else {
- hdr = packet->hdr;
pkey = ib_bth_get_pkey(ohdr);
dlid_is_permissive = (dlid == be16_to_cpu(IB_LID_PERMISSIVE));
slid_is_permissive = (slid == be16_to_cpu(IB_LID_PERMISSIVE));
}
if (slid_is_permissive)
slid = be32_to_cpu(OPA_LID_PERMISSIVE);
- wc.slid = slid;
+ wc.slid = slid & U16_MAX;
wc.sl = sl_from_sc;
/*
/* Wait until all requests have been freed. */
wait_event_interruptible(
pq->wait,
- (ACCESS_ONCE(pq->state) == SDMA_PKT_Q_INACTIVE));
+ (READ_ONCE(pq->state) == SDMA_PKT_Q_INACTIVE));
kfree(pq->reqs);
kfree(pq->req_in_use);
kmem_cache_destroy(pq->txreq_cache);
if (ret != -EBUSY) {
req->status = ret;
WRITE_ONCE(req->has_error, 1);
- if (ACCESS_ONCE(req->seqcomp) ==
+ if (READ_ONCE(req->seqcomp) ==
req->seqsubmitted - 1)
goto free_req;
return ret;
*/
if (req->data_len) {
iovec = &req->iovs[req->iov_idx];
- if (ACCESS_ONCE(iovec->offset) == iovec->iov.iov_len) {
+ if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
if (++req->iov_idx == req->data_iovs) {
ret = -EFAULT;
goto free_txreq;
struct hfi1_user_sdma_pkt_q *pq = req->pq;
pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
- if (!pages) {
- SDMA_DBG(req, "Failed page array alloc");
+ if (!pages)
return -ENOMEM;
- }
memcpy(pages, node->pages, node->npages * sizeof(*pages));
npages -= node->npages;
struct user_sdma_txreq *tx, u32 datalen)
{
u32 ahg[AHG_KDETH_ARRAY_SIZE];
- int diff = 0;
+ int idx = 0;
u8 omfactor; /* KDETH.OM */
struct hfi1_user_sdma_pkt_q *pq = req->pq;
struct hfi1_pkt_header *hdr = &req->hdr;
u16 pbclen = le16_to_cpu(hdr->pbc[0]);
u32 val32, tidval = 0, lrhlen = get_lrh_len(*hdr, pad_len(datalen));
+ size_t array_size = ARRAY_SIZE(ahg);
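+ /* each ahg_header_set() call appends one AHG entry and returns the updated entry count, or a negative errno if ahg[] would overflow */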
if (PBC2LRH(pbclen) != lrhlen) {
/* PBC.PbcLengthDWs */
- AHG_HEADER_SET(ahg, diff, 0, 0, 12,
- cpu_to_le16(LRH2PBC(lrhlen)));
+ idx = ahg_header_set(ahg, idx, array_size, 0, 0, 12,
+ (__force u16)cpu_to_le16(LRH2PBC(lrhlen)));
+ if (idx < 0)
+ return idx;
/* LRH.PktLen (we need the full 16 bits due to byte swap) */
- AHG_HEADER_SET(ahg, diff, 3, 0, 16,
- cpu_to_be16(lrhlen >> 2));
+ idx = ahg_header_set(ahg, idx, array_size, 3, 0, 16,
+ (__force u16)cpu_to_be16(lrhlen >> 2));
+ if (idx < 0)
+ return idx;
}
/*
(HFI1_CAP_IS_KSET(EXTENDED_PSN) ? 0x7fffffff : 0xffffff);
if (unlikely(tx->flags & TXREQ_FLAGS_REQ_ACK))
val32 |= 1UL << 31;
- AHG_HEADER_SET(ahg, diff, 6, 0, 16, cpu_to_be16(val32 >> 16));
- AHG_HEADER_SET(ahg, diff, 6, 16, 16, cpu_to_be16(val32 & 0xffff));
+ idx = ahg_header_set(ahg, idx, array_size, 6, 0, 16,
+ (__force u16)cpu_to_be16(val32 >> 16));
+ if (idx < 0)
+ return idx;
+ idx = ahg_header_set(ahg, idx, array_size, 6, 16, 16,
+ (__force u16)cpu_to_be16(val32 & 0xffff));
+ if (idx < 0)
+ return idx;
/* KDETH.Offset */
- AHG_HEADER_SET(ahg, diff, 15, 0, 16,
- cpu_to_le16(req->koffset & 0xffff));
- AHG_HEADER_SET(ahg, diff, 15, 16, 16, cpu_to_le16(req->koffset >> 16));
+ idx = ahg_header_set(ahg, idx, array_size, 15, 0, 16,
+ (__force u16)cpu_to_le16(req->koffset & 0xffff));
+ if (idx < 0)
+ return idx;
+ idx = ahg_header_set(ahg, idx, array_size, 15, 16, 16,
+ (__force u16)cpu_to_le16(req->koffset >> 16));
+ if (idx < 0)
+ return idx;
if (req_opcode(req->info.ctrl) == EXPECTED) {
__le16 val;
KDETH_OM_MAX_SIZE) ? KDETH_OM_LARGE_SHIFT :
KDETH_OM_SMALL_SHIFT;
/* KDETH.OM and KDETH.OFFSET (TID) */
- AHG_HEADER_SET(ahg, diff, 7, 0, 16,
- ((!!(omfactor - KDETH_OM_SMALL_SHIFT)) << 15 |
+ idx = ahg_header_set(
+ ahg, idx, array_size, 7, 0, 16,
+ ((!!(omfactor - KDETH_OM_SMALL_SHIFT)) << 15 |
((req->tidoffset >> omfactor)
- & 0x7fff)));
+ & 0x7fff)));
+ if (idx < 0)
+ return idx;
/* KDETH.TIDCtrl, KDETH.TID, KDETH.Intr, KDETH.SH */
val = cpu_to_le16(((EXP_TID_GET(tidval, CTRL) & 0x3) << 10) |
(EXP_TID_GET(tidval, IDX) & 0x3ff));
AHG_KDETH_INTR_SHIFT));
}
- AHG_HEADER_SET(ahg, diff, 7, 16, 14, val);
+ idx = ahg_header_set(ahg, idx, array_size,
+ 7, 16, 14, (__force u16)val);
+ if (idx < 0)
+ return idx;
}
- if (diff < 0)
- return diff;
trace_hfi1_sdma_user_header_ahg(pq->dd, pq->ctxt, pq->subctxt,
req->info.comp_idx, req->sde->this_idx,
- req->ahg_idx, ahg, diff, tidval);
+ req->ahg_idx, ahg, idx, tidval);
sdma_txinit_ahg(&tx->txreq,
SDMA_TXREQ_F_USE_AHG,
- datalen, req->ahg_idx, diff,
+ datalen, req->ahg_idx, idx,
ahg, sizeof(req->hdr),
user_sdma_txreq_cb);
- return diff;
+ return idx;
}
/*
} else {
if (status != SDMA_TXREQ_S_OK)
req->status = status;
- if (req->seqcomp == (ACCESS_ONCE(req->seqsubmitted) - 1) &&
+ if (req->seqcomp == (READ_ONCE(req->seqsubmitted) - 1) &&
(READ_ONCE(req->done) ||
READ_ONCE(req->has_error))) {
user_sdma_free_request(req, false);
static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
{
+ int i;
+
if (!list_empty(&req->txps)) {
struct sdma_txreq *t, *p;
kmem_cache_free(req->pq->txreq_cache, tx);
}
}
- if (req->data_iovs) {
- struct sdma_mmu_node *node;
- int i;
-
- for (i = 0; i < req->data_iovs; i++) {
- node = req->iovs[i].node;
- if (!node)
- continue;
-
- if (unpin)
- hfi1_mmu_rb_remove(req->pq->handler,
- &node->rb);
- else
- atomic_dec(&node->refcount);
- }
+
+ for (i = 0; i < req->data_iovs; i++) {
+ struct sdma_mmu_node *node = req->iovs[i].node;
+
+ if (!node)
+ continue;
+
+ if (unpin)
+ hfi1_mmu_rb_remove(req->pq->handler,
+ &node->rb);
+ else
+ atomic_dec(&node->refcount);
}
+
kfree(req->tids);
clear_bit(req->info.comp_idx, req->pq->req_in_use);
}
config INFINIBAND_QEDR
tristate "QLogic RoCE driver"
depends on 64BIT && QEDE
+ depends on PCI
select QED_LL2
+ select QED_OOO
select QED_RDMA
---help---
This driver provides low-level InfiniBand over Ethernet
.string = txselect_list,
.maxlen = MAX_ATTEN_LEN
};
-static int setup_txselect(const char *, struct kernel_param *);
+static int setup_txselect(const char *, const struct kernel_param *);
module_param_call(txselect, setup_txselect, param_get_string,
&kp_txselect, S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(txselect,
u32 updthresh; /* current AvailUpdThld */
u32 updthresh_dflt; /* default AvailUpdThld */
u32 r1;
- int irq;
u32 num_msix_entries;
u32 sdmabufcnt;
u32 lastbuf_for_pio;
static u32 __iomem *qib_7322_getsendbuf(struct qib_pportdata *, u64, u32 *);
#ifdef CONFIG_INFINIBAND_QIB_DCA
static void qib_setup_dca(struct qib_devdata *dd);
- static void setup_dca_notifier(struct qib_devdata *dd,
- struct qib_msix_entry *m);
- static void reset_dca_notifier(struct qib_devdata *dd,
- struct qib_msix_entry *m);
+ static void setup_dca_notifier(struct qib_devdata *dd, int msixnum);
+ static void reset_dca_notifier(struct qib_devdata *dd, int msixnum);
#endif
/**
u64 iserr = 0;
u64 errs;
u64 mask;
- int log_idx;
qib_stats.sps_errints++;
errs = qib_read_kreg64(dd, kr_errstatus);
if (errs & QIB_E_HARDWARE) {
*msg = '\0';
qib_7322_handle_hwerrors(dd, msg, sizeof(dd->cspec->emsgbuf));
- } else
- for (log_idx = 0; log_idx < QIB_EEP_LOG_CNT; ++log_idx)
- if (errs & dd->eep_st_masks[log_idx].errs_to_log)
- qib_inc_eeprom_err(dd, log_idx, 1);
+ }
if (errs & QIB_E_SPKTERRS) {
qib_disarm_7322_senderrbufs(dd->pport);
qib_write_kreg(dd, kr_errmask, dd->cspec->errormask);
}
- static void reenable_chase(unsigned long opaque)
+ static void reenable_chase(struct timer_list *t)
{
- struct qib_pportdata *ppd = (struct qib_pportdata *)opaque;
+ struct qib_chippport_specific *cp = from_timer(cp, t, chase_timer);
+ struct qib_pportdata *ppd = cp->ppd;
ppd->cpspec->chase_timer.expires = 0;
qib_set_ib_7322_lstate(ppd, QLOGIC_IB_IBCC_LINKCMD_DOWN,
cancel_delayed_work_sync(&ppd->cpspec->ipg_work);
ppd->cpspec->chase_end = 0;
- if (ppd->cpspec->chase_timer.data) /* if initted */
+ if (ppd->cpspec->chase_timer.function) /* if initted */
del_timer_sync(&ppd->cpspec->chase_timer);
/*
qib_write_kreg(dd, KREG_IDX(DCACtrlB) + i,
cspec->dca_rcvhdr_ctrl[i]);
for (i = 0; i < cspec->num_msix_entries; i++)
- setup_dca_notifier(dd, &cspec->msix_entries[i]);
+ setup_dca_notifier(dd, i);
}
static void qib_irq_notifier_notify(struct irq_affinity_notify *notify,
}
#endif
- /*
- * Disable MSIx interrupt if enabled, call generic MSIx code
- * to cleanup, and clear pending MSIx interrupts.
- * Used for fallback to INTx, after reset, and when MSIx setup fails.
- */
- static void qib_7322_nomsix(struct qib_devdata *dd)
+ static void qib_7322_free_irq(struct qib_devdata *dd)
{
u64 intgranted;
- int n;
+ int i;
dd->cspec->main_int_mask = ~0ULL;
- n = dd->cspec->num_msix_entries;
- if (n) {
- int i;
- dd->cspec->num_msix_entries = 0;
- for (i = 0; i < n; i++) {
+ for (i = 0; i < dd->cspec->num_msix_entries; i++) {
+ /* only free IRQs that were allocated */
+ if (dd->cspec->msix_entries[i].arg) {
#ifdef CONFIG_INFINIBAND_QIB_DCA
- reset_dca_notifier(dd, &dd->cspec->msix_entries[i]);
+ reset_dca_notifier(dd, i);
#endif
- irq_set_affinity_hint(
- dd->cspec->msix_entries[i].irq, NULL);
+ irq_set_affinity_hint(pci_irq_vector(dd->pcidev, i),
+ NULL);
free_cpumask_var(dd->cspec->msix_entries[i].mask);
- free_irq(dd->cspec->msix_entries[i].irq,
- dd->cspec->msix_entries[i].arg);
+ pci_free_irq(dd->pcidev, i,
+ dd->cspec->msix_entries[i].arg);
}
- qib_nomsix(dd);
}
+
+ /* If num_msix_entries was 0, disable the INTx IRQ */
+ if (!dd->cspec->num_msix_entries)
+ pci_free_irq(dd->pcidev, 0, dd);
+ else
+ dd->cspec->num_msix_entries = 0;
+
+ pci_free_irq_vectors(dd->pcidev);
+
/* make sure no MSIx interrupts are left pending */
intgranted = qib_read_kreg64(dd, kr_intgranted);
if (intgranted)
qib_write_kreg(dd, kr_intgranted, intgranted);
}
- static void qib_7322_free_irq(struct qib_devdata *dd)
- {
- if (dd->cspec->irq) {
- free_irq(dd->cspec->irq, dd);
- dd->cspec->irq = 0;
- }
- qib_7322_nomsix(dd);
- }
-
static void qib_setup_7322_cleanup(struct qib_devdata *dd)
{
int i;
#ifdef CONFIG_INFINIBAND_QIB_DCA
- static void reset_dca_notifier(struct qib_devdata *dd, struct qib_msix_entry *m)
+ static void reset_dca_notifier(struct qib_devdata *dd, int msixnum)
{
- if (!m->dca)
+ if (!dd->cspec->msix_entries[msixnum].dca)
return;
- qib_devinfo(dd->pcidev,
- "Disabling notifier on HCA %d irq %d\n",
- dd->unit,
- m->irq);
- irq_set_affinity_notifier(
- m->irq,
- NULL);
- m->notifier = NULL;
+
+ qib_devinfo(dd->pcidev, "Disabling notifier on HCA %d irq %d\n",
+ dd->unit, pci_irq_vector(dd->pcidev, msixnum));
+ irq_set_affinity_notifier(pci_irq_vector(dd->pcidev, msixnum), NULL);
+ dd->cspec->msix_entries[msixnum].notifier = NULL;
}
- static void setup_dca_notifier(struct qib_devdata *dd, struct qib_msix_entry *m)
+ static void setup_dca_notifier(struct qib_devdata *dd, int msixnum)
{
+ struct qib_msix_entry *m = &dd->cspec->msix_entries[msixnum];
struct qib_irq_notify *n;
if (!m->dca)
int ret;
m->notifier = n;
- n->notify.irq = m->irq;
+ n->notify.irq = pci_irq_vector(dd->pcidev, msixnum);
n->notify.notify = qib_irq_notifier_notify;
n->notify.release = qib_irq_notifier_release;
n->arg = m->arg;
if (!dd->cspec->num_msix_entries) {
/* Try to get INTx interrupt */
try_intx:
- if (!dd->pcidev->irq) {
- qib_dev_err(dd,
- "irq is 0, BIOS error? Interrupts won't work\n");
- goto bail;
- }
- ret = request_irq(dd->pcidev->irq, qib_7322intr,
- IRQF_SHARED, QIB_DRV_NAME, dd);
+ ret = pci_request_irq(dd->pcidev, 0, qib_7322intr, NULL, dd,
+ QIB_DRV_NAME);
if (ret) {
- qib_dev_err(dd,
+ qib_dev_err(
+ dd,
"Couldn't setup INTx interrupt (irq=%d): %d\n",
- dd->pcidev->irq, ret);
- goto bail;
+ pci_irq_vector(dd->pcidev, 0), ret);
+ return;
}
- dd->cspec->irq = dd->pcidev->irq;
dd->cspec->main_int_mask = ~0ULL;
- goto bail;
+ return;
}
/* Try to get MSIx interrupts */
for (i = 0; msixnum < dd->cspec->num_msix_entries; i++) {
irq_handler_t handler;
void *arg;
- u64 val;
int lsb, reg, sh;
#ifdef CONFIG_INFINIBAND_QIB_DCA
int dca = 0;
#endif
-
- dd->cspec->msix_entries[msixnum].
- name[sizeof(dd->cspec->msix_entries[msixnum].name) - 1]
- = '\0';
if (i < ARRAY_SIZE(irq_table)) {
if (irq_table[i].port) {
/* skip if for a non-configured port */
#endif
lsb = irq_table[i].lsb;
handler = irq_table[i].handler;
- snprintf(dd->cspec->msix_entries[msixnum].name,
- sizeof(dd->cspec->msix_entries[msixnum].name)
- - 1,
- QIB_DRV_NAME "%d%s", dd->unit,
- irq_table[i].name);
+ ret = pci_request_irq(dd->pcidev, msixnum, handler,
+ NULL, arg, QIB_DRV_NAME "%d%s",
+ dd->unit,
+ irq_table[i].name);
} else {
unsigned ctxt;
#endif
lsb = QIB_I_RCVAVAIL_LSB + ctxt;
handler = qib_7322pintr;
- snprintf(dd->cspec->msix_entries[msixnum].name,
- sizeof(dd->cspec->msix_entries[msixnum].name)
- - 1,
- QIB_DRV_NAME "%d (kctx)", dd->unit);
+ ret = pci_request_irq(dd->pcidev, msixnum, handler,
+ NULL, arg,
+ QIB_DRV_NAME "%d (kctx)",
+ dd->unit);
}
- dd->cspec->msix_entries[msixnum].irq = pci_irq_vector(
- dd->pcidev, msixnum);
- if (dd->cspec->msix_entries[msixnum].irq < 0) {
- qib_dev_err(dd,
- "Couldn't get MSIx irq (vec=%d): %d\n",
- msixnum,
- dd->cspec->msix_entries[msixnum].irq);
- qib_7322_nomsix(dd);
- goto try_intx;
- }
- ret = request_irq(dd->cspec->msix_entries[msixnum].irq,
- handler, 0,
- dd->cspec->msix_entries[msixnum].name,
- arg);
if (ret) {
/*
* Shouldn't happen since the enable said we could
* have as many as we are trying to setup here.
*/
qib_dev_err(dd,
- "Couldn't setup MSIx interrupt (vec=%d, irq=%d): %d\n",
- msixnum,
- dd->cspec->msix_entries[msixnum].irq,
- ret);
- qib_7322_nomsix(dd);
+ "Couldn't setup MSIx interrupt (vec=%d, irq=%d): %d\n",
+ msixnum,
+ pci_irq_vector(dd->pcidev, msixnum),
+ ret);
+ qib_7322_free_irq(dd);
+ pci_alloc_irq_vectors(dd->pcidev, 1, 1,
+ PCI_IRQ_LEGACY);
goto try_intx;
}
dd->cspec->msix_entries[msixnum].arg = arg;
mask &= ~(1ULL << lsb);
redirect[reg] |= ((u64) msixnum) << sh;
}
- val = qib_read_kreg64(dd, 2 * msixnum + 1 +
- (QIB_7322_MsixTable_OFFS / sizeof(u64)));
+ qib_read_kreg64(dd, 2 * msixnum + 1 +
+ (QIB_7322_MsixTable_OFFS / sizeof(u64)));
if (firstcpu < nr_cpu_ids &&
zalloc_cpumask_var(
&dd->cspec->msix_entries[msixnum].mask,
dd->cspec->msix_entries[msixnum].mask);
}
irq_set_affinity_hint(
- dd->cspec->msix_entries[msixnum].irq,
+ pci_irq_vector(dd->pcidev, msixnum),
dd->cspec->msix_entries[msixnum].mask);
}
msixnum++;
dd->cspec->main_int_mask = mask;
tasklet_init(&dd->error_tasklet, qib_error_tasklet,
(unsigned long)dd);
- bail:;
}
/**
/* no interrupts till re-initted */
qib_7322_set_intr_state(dd, 0);
+ qib_7322_free_irq(dd);
+
if (msix_entries) {
- qib_7322_nomsix(dd);
/* can be up to 512 bytes, too big for stack */
msix_vecsave = kmalloc(2 * dd->cspec->num_msix_entries *
sizeof(u64), GFP_KERNEL);
write_7322_init_portregs(&dd->pport[i]);
write_7322_initregs(dd);
- if (qib_pcie_params(dd, dd->lbus_width,
- &dd->cspec->num_msix_entries))
+ if (qib_pcie_params(dd, dd->lbus_width, &msix_entries))
qib_dev_err(dd,
"Reset failed to setup PCIe or interrupts; continuing anyway\n");
+ dd->cspec->num_msix_entries = msix_entries;
qib_setup_7322_interrupt(dd, 1);
for (i = 0; i < dd->num_pports; ++i) {
*
* called from add_timer
*/
- static void qib_get_7322_faststats(unsigned long opaque)
+ static void qib_get_7322_faststats(struct timer_list *t)
{
- struct qib_devdata *dd = (struct qib_devdata *) opaque;
+ struct qib_devdata *dd = from_timer(dd, t, stats_timer);
struct qib_pportdata *ppd;
unsigned long flags;
u64 traffic_wds;
qib_devinfo(dd->pcidev,
"MSIx interrupt not detected, trying INTx interrupts\n");
- qib_7322_nomsix(dd);
- qib_enable_intx(dd);
+ qib_7322_free_irq(dd);
+ if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0)
+ qib_dev_err(dd, "Failed to enable INTx\n");
qib_setup_7322_interrupt(dd, 0);
return 1;
}
static void autoneg_7322_work(struct work_struct *work)
{
struct qib_pportdata *ppd;
- struct qib_devdata *dd;
- u64 startms;
u32 i;
unsigned long flags;
ppd = container_of(work, struct qib_chippport_specific,
autoneg_work.work)->ppd;
- dd = ppd->dd;
-
- startms = jiffies_to_msecs(jiffies);
/*
* Busy wait for this first part; it should be at most a
}
/* handle the txselect parameter changing */
-static int setup_txselect(const char *str, struct kernel_param *kp)
+static int setup_txselect(const char *str, const struct kernel_param *kp)
{
struct qib_devdata *dd;
unsigned long val;
if (!qib_mini_init)
write_7322_init_portregs(ppd);
- setup_timer(&cp->chase_timer, reenable_chase,
- (unsigned long)ppd);
+ timer_setup(&cp->chase_timer, reenable_chase, 0);
ppd++;
}
(u64) rcv_int_count << IBA7322_HDRHEAD_PKTINT_SHIFT;
/* setup the stats timer; the add_timer is done at end of init */
- setup_timer(&dd->stats_timer, qib_get_7322_faststats,
- (unsigned long)dd);
+ timer_setup(&dd->stats_timer, qib_get_7322_faststats, 0);
dd->ureg_align = 0x10000; /* 64KB alignment */
{
struct qib_devdata *dd = ppd->dd;
int chan;
- u32 rbc;
for (chan = 0; chan < SERDES_CHANS; ++chan) {
ahb_mod(dd, IBSD(ppd->hw_pidx), (chan + (chan >> 1)), addr,
data, mask);
- rbc = ahb_mod(dd, IBSD(ppd->hw_pidx), (chan + (chan >> 1)),
- addr, 0, 0);
+ ahb_mod(dd, IBSD(ppd->hw_pidx), (chan + (chan >> 1)), addr,
+ 0, 0);
}
}
#include "vt.h"
#include "trace.h"
- static void rvt_rc_timeout(unsigned long arg);
+ static void rvt_rc_timeout(struct timer_list *t);
/*
* Convert the AETH RNR timeout code into the number of microseconds.
/* take qp out the hash and wait for it to be unused */
rvt_remove_qp(rdi, qp);
- wait_event(qp->wait, !atomic_read(&qp->refcount));
/* grab the lock b/c it was locked at call time */
spin_lock_irq(&qp->r_lock);
if (init_attr->port_num == 0 ||
init_attr->port_num > ibpd->device->phys_port_cnt)
return ERR_PTR(-EINVAL);
+ /* fall through */
case IB_QPT_UC:
case IB_QPT_RC:
case IB_QPT_UD:
goto bail_qp;
}
/* initialize timers needed for rc qp */
- setup_timer(&qp->s_timer, rvt_rc_timeout, (unsigned long)qp);
+ timer_setup(&qp->s_timer, rvt_rc_timeout, 0);
hrtimer_init(&qp->s_rnr_timer, CLOCK_MONOTONIC,
HRTIMER_MODE_REL);
qp->s_rnr_timer.function = rvt_rc_rnr_retry;
atomic_set(&qp->refcount, 0);
atomic_set(&qp->local_ops_pending, 0);
init_waitqueue_head(&qp->wait);
- init_timer(&qp->s_timer);
- qp->s_timer.data = (unsigned long)qp;
INIT_LIST_HEAD(&qp->rspwait);
qp->state = IB_QPS_RESET;
qp->s_wq = swq;
rdi->driver_f.notify_error_qp(qp);
/* Schedule the sending tasklet to drain the send work queue. */
- if (ACCESS_ONCE(qp->s_last) != qp->s_head)
+ if (READ_ONCE(qp->s_last) != qp->s_head)
rdi->driver_f.schedule_send(qp);
rvt_clear_mr_refs(qp, 0);
spin_unlock(&qp->s_hlock);
spin_unlock_irq(&qp->r_lock);
+ wait_event(qp->wait, !atomic_read(&qp->refcount));
/* qpn is now available for use again */
rvt_free_qpn(&rdi->qp_dev->qpn_table, qp->ibqp.qp_num);
if (likely(qp->s_avail))
return 0;
smp_read_barrier_depends(); /* see rc.c */
- slast = ACCESS_ONCE(qp->s_last);
+ slast = READ_ONCE(qp->s_last);
if (qp->s_head >= slast)
avail = qp->s_size - (qp->s_head - slast);
else
* ahead and kick the send engine into gear. Otherwise we will always
* just schedule the send to happen later.
*/
- call_send = qp->s_head == ACCESS_ONCE(qp->s_last) && !wr->next;
+ call_send = qp->s_head == READ_ONCE(qp->s_last) && !wr->next;
for (; wr; wr = wr->next) {
err = rvt_post_one_wr(qp, wr, &call_send);
/**
* This is called from s_timer for missing responses.
*/
- static void rvt_rc_timeout(unsigned long arg)
+ static void rvt_rc_timeout(struct timer_list *t)
{
- struct rvt_qp *qp = (struct rvt_qp *)arg;
+ struct rvt_qp *qp = from_timer(qp, t, s_timer);
struct rvt_dev_info *rdi = ib_to_rvt(qp->ibqp.device);
unsigned long flags;
MODULE_PARM_DESC(srpt_srq_size,
"Shared receive queue (SRQ) size.");
-static int srpt_get_u64_x(char *buffer, struct kernel_param *kp)
+static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp)
{
return sprintf(buffer, "0x%016llx", *(u64 *)kp->arg);
}
{
struct srpt_device *sdev = sport->sdev;
struct ib_dm_ioc_profile *iocp;
+ int send_queue_depth;
iocp = (struct ib_dm_ioc_profile *)mad->data;
return;
}
+ if (sdev->use_srq)
+ send_queue_depth = sdev->srq_size;
+ else
+ send_queue_depth = min(SRPT_RQ_SIZE,
+ sdev->device->attrs.max_qp_wr);
+
memset(iocp, 0, sizeof(*iocp));
strcpy(iocp->id_string, SRPT_ID_STRING);
iocp->guid = cpu_to_be64(srpt_service_guid);
iocp->io_subclass = cpu_to_be16(SRP_IO_SUBCLASS);
iocp->protocol = cpu_to_be16(SRP_PROTOCOL);
iocp->protocol_version = cpu_to_be16(SRP_PROTOCOL_VERSION);
- iocp->send_queue_depth = cpu_to_be16(sdev->srq_size);
+ iocp->send_queue_depth = cpu_to_be16(send_queue_depth);
iocp->rdma_read_depth = 4;
iocp->send_size = cpu_to_be32(srp_max_req_size);
iocp->rdma_size = cpu_to_be32(min(sport->port_attrib.srp_max_rdma_size,
{
int i;
+ if (!ioctx_ring)
+ return;
+
for (i = 0; i < ring_size; ++i)
srpt_free_ioctx(sdev, ioctx_ring[i], dma_size, dir);
kfree(ioctx_ring);
/**
* srpt_post_recv() - Post an IB receive request.
*/
- static int srpt_post_recv(struct srpt_device *sdev,
+ static int srpt_post_recv(struct srpt_device *sdev, struct srpt_rdma_ch *ch,
struct srpt_recv_ioctx *ioctx)
{
struct ib_sge list;
BUG_ON(!sdev);
list.addr = ioctx->ioctx.dma;
list.length = srp_max_req_size;
- list.lkey = sdev->pd->local_dma_lkey;
+ list.lkey = sdev->lkey;
ioctx->ioctx.cqe.done = srpt_recv_done;
wr.wr_cqe = &ioctx->ioctx.cqe;
wr.sg_list = &list;
wr.num_sge = 1;
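+ /* with an SRQ, receives are shared across all channels; otherwise post to this channel's own receive queue */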
- return ib_post_srq_recv(sdev->srq, &wr, &bad_wr);
+ if (sdev->use_srq)
+ return ib_post_srq_recv(sdev->srq, &wr, &bad_wr);
+ else
+ return ib_post_recv(ch->qp, &wr, &bad_wr);
}
/**
break;
}
- srpt_post_recv(ch->sport->sdev, recv_ioctx);
+ srpt_post_recv(ch->sport->sdev, ch, recv_ioctx);
return;
out_wait:
struct srpt_device *sdev = sport->sdev;
const struct ib_device_attr *attrs = &sdev->device->attrs;
u32 srp_sq_size = sport->port_attrib.srp_sq_size;
- int ret;
+ int i, ret;
WARN_ON(ch->rq_size < 1);
= (void(*)(struct ib_event *, void*))srpt_qp_event;
qp_init->send_cq = ch->cq;
qp_init->recv_cq = ch->cq;
- qp_init->srq = sdev->srq;
qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;
qp_init->qp_type = IB_QPT_RC;
/*
* both, as RDMA contexts will also post completions for the
* RDMA READ case.
*/
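+ /* "+ 0U" promotes the int attrs->max_qp_wr to unsigned so min() compares matching types */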
- qp_init->cap.max_send_wr = srp_sq_size / 2;
+ qp_init->cap.max_send_wr = min(srp_sq_size / 2, attrs->max_qp_wr + 0U);
qp_init->cap.max_rdma_ctxs = srp_sq_size / 2;
qp_init->cap.max_send_sge = min(attrs->max_sge, SRPT_MAX_SG_PER_WQE);
qp_init->port_num = ch->sport->port;
+ if (sdev->use_srq) {
+ qp_init->srq = sdev->srq;
+ } else {
+ qp_init->cap.max_recv_wr = ch->rq_size;
+ qp_init->cap.max_recv_sge = qp_init->cap.max_send_sge;
+ }
ch->qp = ib_create_qp(sdev->pd, qp_init);
if (IS_ERR(ch->qp)) {
if (ret)
goto err_destroy_qp;
+ if (!sdev->use_srq)
+ for (i = 0; i < ch->rq_size; i++)
+ srpt_post_recv(sdev, ch, ch->ioctx_recv_ring[i]);
+
out:
kfree(qp_init);
return ret;
return ret;
}
- static void __srpt_close_all_ch(struct srpt_device *sdev)
+ /*
+ * Send DREQ and wait for DREP. Return true if and only if this function
+ * changed the state of @ch.
+ */
+ static bool srpt_disconnect_ch_sync(struct srpt_rdma_ch *ch)
+ __must_hold(&sdev->mutex)
{
+ DECLARE_COMPLETION_ONSTACK(release_done);
+ struct srpt_device *sdev = ch->sport->sdev;
+ bool wait;
+
+ lockdep_assert_held(&sdev->mutex);
+
+ pr_debug("ch %s-%d state %d\n", ch->sess_name, ch->qp->qp_num,
+ ch->state);
+
+ WARN_ON(ch->release_done);
+ ch->release_done = &release_done;
+ wait = !list_empty(&ch->list);
+ srpt_disconnect_ch(ch);
+ mutex_unlock(&sdev->mutex);
+
+ if (!wait)
+ goto out;
+
+ while (wait_for_completion_timeout(&release_done, 180 * HZ) == 0)
+ pr_info("%s(%s-%d state %d): still waiting ...\n", __func__,
+ ch->sess_name, ch->qp->qp_num, ch->state);
+
+ out:
+ mutex_lock(&sdev->mutex);
+ return wait;
+ }
+
+ static void srpt_set_enabled(struct srpt_port *sport, bool enabled)
+ __must_hold(&sdev->mutex)
+ {
+ struct srpt_device *sdev = sport->sdev;
struct srpt_rdma_ch *ch;
lockdep_assert_held(&sdev->mutex);
+ if (sport->enabled == enabled)
+ return;
+ sport->enabled = enabled;
+ if (sport->enabled)
+ return;
+
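+ /* srpt_disconnect_ch_sync() drops sdev->mutex, so the list may have changed; rescan from the start */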
+ again:
list_for_each_entry(ch, &sdev->rch_list, list) {
- if (srpt_disconnect_ch(ch) >= 0)
- pr_info("Closing channel %s-%d because target %s has been disabled\n",
- ch->sess_name, ch->qp->qp_num,
- sdev->device->name);
- srpt_close_ch(ch);
+ if (ch->sport == sport) {
+ pr_info("%s: closing channel %s-%d\n",
+ sdev->device->name, ch->sess_name,
+ ch->qp->qp_num);
+ if (srpt_disconnect_ch_sync(ch))
+ goto again;
+ }
}
+
}
static void srpt_free_ch(struct kref *kref)
ch->sport->sdev, ch->rq_size,
ch->rsp_size, DMA_TO_DEVICE);
+ srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring,
+ sdev, ch->rq_size,
+ srp_max_req_size, DMA_FROM_DEVICE);
+
mutex_lock(&sdev->mutex);
list_del_init(&ch->list);
if (ch->release_done)
ch->cm_id = cm_id;
cm_id->context = ch;
/*
- * Avoid QUEUE_FULL conditions by limiting the number of buffers used
- * for the SRP protocol to the command queue size.
+ * ch->rq_size should be at least as large as the initiator queue
+ * depth so that the initiator driver does not have to report
+ * QUEUE_FULL to the SCSI mid-layer.
*/
- ch->rq_size = SRPT_RQ_SIZE;
+ ch->rq_size = min(SRPT_RQ_SIZE, sdev->device->attrs.max_qp_wr);
spin_lock_init(&ch->spinlock);
ch->state = CH_CONNECTING;
INIT_LIST_HEAD(&ch->cmd_wait_list);
ch->ioctx_ring[i]->ch = ch;
list_add_tail(&ch->ioctx_ring[i]->free_list, &ch->free_list);
}
+ if (!sdev->use_srq) {
+ ch->ioctx_recv_ring = (struct srpt_recv_ioctx **)
+ srpt_alloc_ioctx_ring(ch->sport->sdev, ch->rq_size,
+ sizeof(*ch->ioctx_recv_ring[0]),
+ srp_max_req_size,
+ DMA_FROM_DEVICE);
+ if (!ch->ioctx_recv_ring) {
+ pr_err("rejected SRP_LOGIN_REQ because creating a new QP RQ ring failed.\n");
+ rej->reason =
+ cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
+ goto free_ring;
+ }
+ }
ret = srpt_create_ch_ib(ch);
if (ret) {
SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
pr_err("rejected SRP_LOGIN_REQ because creating"
" a new RDMA channel failed.\n");
- goto free_ring;
+ goto free_recv_ring;
}
ret = srpt_ch_qp_rtr(ch, ch->qp);
destroy_ib:
srpt_destroy_ch_ib(ch);
+ free_recv_ring:
+ srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring,
+ ch->sport->sdev, ch->rq_size,
+ srp_max_req_size, DMA_FROM_DEVICE);
+
free_ring:
srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
ch->sport->sdev, ch->rq_size,
sge.addr = ioctx->ioctx.dma;
sge.length = resp_len;
- sge.lkey = sdev->pd->local_dma_lkey;
+ sge.lkey = sdev->lkey;
ioctx->ioctx.cqe.done = srpt_send_done;
send_wr.next = NULL;
mutex_lock(&sdev->mutex);
for (i = 0; i < ARRAY_SIZE(sdev->port); i++)
- sdev->port[i].enabled = false;
- __srpt_close_all_ch(sdev);
+ srpt_set_enabled(&sdev->port[i], false);
mutex_unlock(&sdev->mutex);
res = wait_event_interruptible(sdev->ch_releaseQ,
return wwn;
}
+ static void srpt_free_srq(struct srpt_device *sdev)
+ {
+ if (!sdev->srq)
+ return;
+
+ ib_destroy_srq(sdev->srq);
+ srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev,
+ sdev->srq_size, srp_max_req_size, DMA_FROM_DEVICE);
+ sdev->srq = NULL;
+ }
+
+ static int srpt_alloc_srq(struct srpt_device *sdev)
+ {
+ struct ib_srq_init_attr srq_attr = {
+ .event_handler = srpt_srq_event,
+ .srq_context = (void *)sdev,
+ .attr.max_wr = sdev->srq_size,
+ .attr.max_sge = 1,
+ .srq_type = IB_SRQT_BASIC,
+ };
+ struct ib_device *device = sdev->device;
+ struct ib_srq *srq;
+ int i;
+
+ WARN_ON_ONCE(sdev->srq);
+ srq = ib_create_srq(sdev->pd, &srq_attr);
+ if (IS_ERR(srq)) {
+ pr_debug("ib_create_srq() failed: %ld\n", PTR_ERR(srq));
+ return PTR_ERR(srq);
+ }
+
+ pr_debug("create SRQ #wr= %d max_allow=%d dev= %s\n", sdev->srq_size,
+ sdev->device->attrs.max_srq_wr, device->name);
+
+ sdev->ioctx_ring = (struct srpt_recv_ioctx **)
+ srpt_alloc_ioctx_ring(sdev, sdev->srq_size,
+ sizeof(*sdev->ioctx_ring[0]),
+ srp_max_req_size, DMA_FROM_DEVICE);
+ if (!sdev->ioctx_ring) {
+ ib_destroy_srq(srq);
+ return -ENOMEM;
+ }
+
+ sdev->use_srq = true;
+ sdev->srq = srq;
+
+ for (i = 0; i < sdev->srq_size; ++i)
+ srpt_post_recv(sdev, NULL, sdev->ioctx_ring[i]);
+
+ return 0;
+ }
+
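+ /**
+ * srpt_use_srq() - Switch between SRQ and per-channel receive queue mode.
+ */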
+ static int srpt_use_srq(struct srpt_device *sdev, bool use_srq)
+ {
+ struct ib_device *device = sdev->device;
+ int ret = 0;
+
+ if (!use_srq) {
+ srpt_free_srq(sdev);
+ sdev->use_srq = false;
+ } else if (use_srq && !sdev->srq) {
+ ret = srpt_alloc_srq(sdev);
+ }
+ pr_debug("%s(%s): use_srq = %d; ret = %d\n", __func__, device->name,
+ sdev->use_srq, ret);
+ return ret;
+ }
+
/**
* srpt_add_one() - Infiniband device addition callback function.
*/
{
struct srpt_device *sdev;
struct srpt_port *sport;
- struct ib_srq_init_attr srq_attr;
int i;
pr_debug("device = %p\n", device);
if (IS_ERR(sdev->pd))
goto free_dev;
- sdev->srq_size = min(srpt_srq_size, sdev->device->attrs.max_srq_wr);
-
- srq_attr.event_handler = srpt_srq_event;
- srq_attr.srq_context = (void *)sdev;
- srq_attr.attr.max_wr = sdev->srq_size;
- srq_attr.attr.max_sge = 1;
- srq_attr.attr.srq_limit = 0;
- srq_attr.srq_type = IB_SRQT_BASIC;
+ sdev->lkey = sdev->pd->local_dma_lkey;
- sdev->srq = ib_create_srq(sdev->pd, &srq_attr);
- if (IS_ERR(sdev->srq))
- goto err_pd;
+ sdev->srq_size = min(srpt_srq_size, sdev->device->attrs.max_srq_wr);
- pr_debug("%s: create SRQ #wr= %d max_allow=%d dev= %s\n",
- __func__, sdev->srq_size, sdev->device->attrs.max_srq_wr,
- device->name);
+ srpt_use_srq(sdev, sdev->port[0].port_attrib.use_srq);
if (!srpt_service_guid)
srpt_service_guid = be64_to_cpu(device->node_guid);
sdev->cm_id = ib_create_cm_id(device, srpt_cm_handler, sdev);
if (IS_ERR(sdev->cm_id))
- goto err_srq;
+ goto err_ring;
/* print out target login information */
pr_debug("Target login info: id_ext=%016llx,ioc_guid=%016llx,"
srpt_event_handler);
ib_register_event_handler(&sdev->event_handler);
- sdev->ioctx_ring = (struct srpt_recv_ioctx **)
- srpt_alloc_ioctx_ring(sdev, sdev->srq_size,
- sizeof(*sdev->ioctx_ring[0]),
- srp_max_req_size, DMA_FROM_DEVICE);
- if (!sdev->ioctx_ring)
- goto err_event;
-
- for (i = 0; i < sdev->srq_size; ++i)
- srpt_post_recv(sdev, sdev->ioctx_ring[i]);
-
WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port));
for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
sport->port_attrib.srp_max_rdma_size = DEFAULT_MAX_RDMA_SIZE;
sport->port_attrib.srp_max_rsp_size = DEFAULT_MAX_RSP_SIZE;
sport->port_attrib.srp_sq_size = DEF_SRPT_SQ_SIZE;
+ sport->port_attrib.use_srq = false;
INIT_WORK(&sport->work, srpt_refresh_port_work);
if (srpt_refresh_port(sport)) {
pr_err("MAD registration failed for %s-%d.\n",
sdev->device->name, i);
- goto err_ring;
+ goto err_event;
}
}
pr_debug("added %s.\n", device->name);
return;
- err_ring:
- srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev,
- sdev->srq_size, srp_max_req_size,
- DMA_FROM_DEVICE);
err_event:
ib_unregister_event_handler(&sdev->event_handler);
err_cm:
ib_destroy_cm_id(sdev->cm_id);
- err_srq:
- ib_destroy_srq(sdev->srq);
- err_pd:
+ err_ring:
+ srpt_free_srq(sdev);
ib_dealloc_pd(sdev->pd);
free_dev:
kfree(sdev);
spin_unlock(&srpt_dev_lock);
srpt_release_sdev(sdev);
- ib_destroy_srq(sdev->srq);
+ srpt_free_srq(sdev);
+
ib_dealloc_pd(sdev->pd);
- srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev,
- sdev->srq_size, srp_max_req_size, DMA_FROM_DEVICE);
- sdev->ioctx_ring = NULL;
kfree(sdev);
}
*/
static void srpt_close_session(struct se_session *se_sess)
{
- DECLARE_COMPLETION_ONSTACK(release_done);
struct srpt_rdma_ch *ch = se_sess->fabric_sess_ptr;
struct srpt_device *sdev = ch->sport->sdev;
- bool wait;
-
- pr_debug("ch %s-%d state %d\n", ch->sess_name, ch->qp->qp_num,
- ch->state);
mutex_lock(&sdev->mutex);
- BUG_ON(ch->release_done);
- ch->release_done = &release_done;
- wait = !list_empty(&ch->list);
- srpt_disconnect_ch(ch);
+ srpt_disconnect_ch_sync(ch);
mutex_unlock(&sdev->mutex);
-
- if (!wait)
- return;
-
- while (wait_for_completion_timeout(&release_done, 180 * HZ) == 0)
- pr_info("%s(%s-%d state %d): still waiting ...\n", __func__,
- ch->sess_name, ch->qp->qp_num, ch->state);
}
/**
{
const char *p;
unsigned len, count, leading_zero_bytes;
- int ret, rc;
+ int ret;
p = name;
if (strncasecmp(p, "0x", 2) == 0)
count = min(len / 2, 16U);
leading_zero_bytes = 16 - count;
memset(i_port_id, 0, leading_zero_bytes);
- rc = hex2bin(i_port_id + leading_zero_bytes, p, count);
- if (rc < 0)
- pr_debug("hex2bin failed for srpt_parse_i_port_id: %d\n", rc);
- ret = 0;
+ ret = hex2bin(i_port_id + leading_zero_bytes, p, count);
+ if (ret < 0)
+ pr_debug("hex2bin failed for srpt_parse_i_port_id: %d\n", ret);
out:
return ret;
}
return count;
}
+ static ssize_t srpt_tpg_attrib_use_srq_show(struct config_item *item,
+ char *page)
+ {
+ struct se_portal_group *se_tpg = attrib_to_tpg(item);
+ struct srpt_port *sport = srpt_tpg_to_sport(se_tpg);
+
+ return sprintf(page, "%d\n", sport->port_attrib.use_srq);
+ }
+
+ static ssize_t srpt_tpg_attrib_use_srq_store(struct config_item *item,
+ const char *page, size_t count)
+ {
+ struct se_portal_group *se_tpg = attrib_to_tpg(item);
+ struct srpt_port *sport = srpt_tpg_to_sport(se_tpg);
+ struct srpt_device *sdev = sport->sdev;
+ unsigned long val;
+ bool enabled;
+ int ret;
+
+ ret = kstrtoul(page, 0, &val);
+ if (ret < 0)
+ return ret;
+ if (val != !!val)
+ return -EINVAL;
+
+ ret = mutex_lock_interruptible(&sdev->mutex);
+ if (ret < 0)
+ return ret;
+ enabled = sport->enabled;
+ /* Log out all initiator systems before changing 'use_srq'. */
+ srpt_set_enabled(sport, false);
+ sport->port_attrib.use_srq = val;
+ srpt_use_srq(sdev, sport->port_attrib.use_srq);
+ srpt_set_enabled(sport, enabled);
+ mutex_unlock(&sdev->mutex);
+
+ return count;
+ }
+
CONFIGFS_ATTR(srpt_tpg_attrib_, srp_max_rdma_size);
CONFIGFS_ATTR(srpt_tpg_attrib_, srp_max_rsp_size);
CONFIGFS_ATTR(srpt_tpg_attrib_, srp_sq_size);
+ CONFIGFS_ATTR(srpt_tpg_attrib_, use_srq);
static struct configfs_attribute *srpt_tpg_attrib_attrs[] = {
&srpt_tpg_attrib_attr_srp_max_rdma_size,
&srpt_tpg_attrib_attr_srp_max_rsp_size,
&srpt_tpg_attrib_attr_srp_sq_size,
+ &srpt_tpg_attrib_attr_use_srq,
NULL,
};
struct se_portal_group *se_tpg = to_tpg(item);
struct srpt_port *sport = srpt_tpg_to_sport(se_tpg);
struct srpt_device *sdev = sport->sdev;
- struct srpt_rdma_ch *ch;
unsigned long tmp;
int ret;
pr_err("Illegal value for srpt_tpg_store_enable: %lu\n", tmp);
return -EINVAL;
}
- if (sport->enabled == tmp)
- goto out;
- sport->enabled = tmp;
- if (sport->enabled)
- goto out;
mutex_lock(&sdev->mutex);
- list_for_each_entry(ch, &sdev->rch_list, list) {
- if (ch->sport == sport) {
- pr_debug("%s: ch %p %s-%d\n", __func__, ch,
- ch->sess_name, ch->qp->qp_num);
- srpt_disconnect_ch(ch);
- srpt_close_ch(ch);
- }
- }
+ srpt_set_enabled(sport, tmp);
mutex_unlock(&sdev->mutex);
- out:
return count;
}
#include <rdma/ib_verbs.h>
#include <linux/mlx5/driver.h>
-
+#include <linux/refcount.h>
struct mlx5_core_cq {
u32 cqn;
__be32 *set_ci_db;
__be32 *arm_db;
struct mlx5_uars_page *uar;
- atomic_t refcount;
+ refcount_t refcount;
struct completion free;
unsigned vector;
unsigned int irqn;
enum {
CQE_SIZE_64 = 0,
CQE_SIZE_128 = 1,
+ CQE_SIZE_128_PAD = 2,
};
- static inline int cqe_sz_to_mlx_sz(u8 size)
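+ /* largest values the cqc.cq_period and cqc.cq_max_count fields can hold */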
+ #define MLX5_MAX_CQ_PERIOD (BIT(__mlx5_bit_sz(cqc, cq_period)) - 1)
+ #define MLX5_MAX_CQ_COUNT (BIT(__mlx5_bit_sz(cqc, cq_max_count)) - 1)
+
+ static inline int cqe_sz_to_mlx_sz(u8 size, int padding_128_en)
{
- return size == 64 ? CQE_SIZE_64 : CQE_SIZE_128;
+ return padding_128_en ? CQE_SIZE_128_PAD :
+ size == 64 ? CQE_SIZE_64 : CQE_SIZE_128;
}
static inline void mlx5_cq_set_ci(struct mlx5_core_cq *cq)
u8 swp[0x1];
u8 swp_csum[0x1];
u8 swp_lso[0x1];
- u8 reserved_at_23[0x1d];
+ u8 reserved_at_23[0x1b];
+ u8 max_geneve_opt_len[0x1];
+ u8 tunnel_stateless_geneve_rx[0x1];
u8 reserved_at_40[0x10];
u8 lro_min_mss_size[0x10];
MLX5_WQ_TYPE_LINKED_LIST = 0x0,
MLX5_WQ_TYPE_CYCLIC = 0x1,
MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ = 0x2,
+ MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ = 0x3,
};
enum {
u8 cc_modify_allowed[0x1];
u8 start_pad[0x1];
u8 cache_line_128byte[0x1];
- u8 reserved_at_165[0xb];
+ u8 reserved_at_165[0xa];
+ u8 qcam_reg[0x1];
u8 gid_table_size[0x10];
u8 out_of_seq_cnt[0x1];
u8 num_of_uars_per_page[0x20];
u8 reserved_at_540[0x40];
- u8 reserved_at_580[0x3f];
+ u8 reserved_at_580[0x3d];
+ u8 cqe_128_always[0x1];
+ u8 cqe_compression_128[0x1];
u8 cqe_compression[0x1];
u8 cqe_compression_timeout[0x10];
u8 reserved_at_1c0[0x80];
};
+struct mlx5_ifc_qcam_access_reg_cap_mask {
+ u8 qcam_access_reg_cap_mask_127_to_20[0x6C];
+ u8 qpdpm[0x1];
+ u8 qcam_access_reg_cap_mask_18_to_4[0x0F];
+ u8 qdpm[0x1];
+ u8 qpts[0x1];
+ u8 qcap[0x1];
+ u8 qcam_access_reg_cap_mask_0[0x1];
+};
+
+struct mlx5_ifc_qcam_qos_feature_cap_mask {
+ u8 qcam_qos_feature_cap_mask_127_to_1[0x7F];
+ u8 qpts_trust_both[0x1];
+};
+
+struct mlx5_ifc_qcam_reg_bits {
+ u8 reserved_at_0[0x8];
+ u8 feature_group[0x8];
+ u8 reserved_at_10[0x8];
+ u8 access_reg_group[0x8];
+ u8 reserved_at_20[0x20];
+
+ union {
+ struct mlx5_ifc_qcam_access_reg_cap_mask reg_cap;
+ u8 reserved_at_0[0x80];
+ } qos_access_reg_cap_mask;
+
+ u8 reserved_at_c0[0x80];
+
+ union {
+ struct mlx5_ifc_qcam_qos_feature_cap_mask feature_cap;
+ u8 reserved_at_0[0x80];
+ } qos_feature_cap_mask;
+
+ u8 reserved_at_1c0[0x80];
+};
+
struct mlx5_ifc_pcap_reg_bits {
u8 reserved_at_0[0x8];
u8 local_port[0x8];
struct mlx5_ifc_ets_global_config_reg_bits global_configuration;
};
+struct mlx5_ifc_qpdpm_dscp_reg_bits {
+ u8 e[0x1];
+ u8 reserved_at_01[0x0b];
+ u8 prio[0x04];
+};
+
+struct mlx5_ifc_qpdpm_reg_bits {
+ u8 reserved_at_0[0x8];
+ u8 local_port[0x8];
+ u8 reserved_at_10[0x10];
+ struct mlx5_ifc_qpdpm_dscp_reg_bits dscp[64];
+};
+
+struct mlx5_ifc_qpts_reg_bits {
+ u8 reserved_at_0[0x8];
+ u8 local_port[0x8];
+ u8 reserved_at_10[0x2d];
+ u8 trust_state[0x3];
+};
+
struct mlx5_ifc_qtct_reg_bits {
u8 reserved_at_0[0x8];
u8 port_number[0x8];
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
IB_USER_VERBS_EX_CMD_MODIFY_WQ,
IB_USER_VERBS_EX_CMD_DESTROY_WQ,
IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL,
- IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL
+ IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL,
+ IB_USER_VERBS_EX_CMD_MODIFY_CQ
};
/*
__u64 cq_handle;
};
+ struct ib_uverbs_cq_moderation_caps {
+ __u16 max_cq_moderation_count;
+ __u16 max_cq_moderation_period;
+ __u32 reserved;
+ };
+
/*
* All commands from userspace should start with a __u32 command field
* followed by __u16 in_words and out_words fields (which give the
__u32 max_wq_type_rq;
__u32 raw_packet_caps;
struct ib_uverbs_tm_caps tm_caps;
+ struct ib_uverbs_cq_moderation_caps cq_moderation_caps;
};
struct ib_uverbs_query_port {
__u32 ind_tbl_handle;
};
+ struct ib_uverbs_cq_moderation {
+ __u16 cq_count;
+ __u16 cq_period;
+ };
+
+ struct ib_uverbs_ex_modify_cq {
+ __u32 cq_handle;
+ __u32 attr_mask;
+ struct ib_uverbs_cq_moderation attr;
+ __u32 reserved;
+ };
+
#define IB_DEVICE_NAME_MAX 64
#endif /* IB_USER_VERBS_H */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
/*
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
enum {
MLX5_QP_FLAG_SIGNATURE = 1 << 0,
MLX5_QP_FLAG_SCATTER_CQE = 1 << 1,
+ MLX5_QP_FLAG_TUNNEL_OFFLOADS = 1 << 2,
};
enum {
__u32 supported_qpts;
};
+ struct mlx5_ib_striding_rq_caps {
+ __u32 min_single_stride_log_num_of_bytes;
+ __u32 max_single_stride_log_num_of_bytes;
+ __u32 min_single_wqe_log_num_of_strides;
+ __u32 max_single_wqe_log_num_of_strides;
+
+ /* Corresponding bit will be set if qp type from
+ * 'enum ib_qp_type' is supported, e.g.
+ * supported_qpts |= 1 << IB_QPT_RAW_PACKET
+ */
+ __u32 supported_qpts;
+ __u32 reserved;
+ };
+
+ enum mlx5_ib_query_dev_resp_flags {
+ /* Support 128B CQE compression */
+ MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_COMP = 1 << 0,
+ MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_PAD = 1 << 1,
+ };
+
+ enum mlx5_ib_tunnel_offloads {
+ MLX5_IB_TUNNELED_OFFLOADS_VXLAN = 1 << 0,
+ MLX5_IB_TUNNELED_OFFLOADS_GRE = 1 << 1,
+ MLX5_IB_TUNNELED_OFFLOADS_GENEVE = 1 << 2
+ };
+
struct mlx5_ib_query_device_resp {
__u32 comp_mask;
__u32 response_length;
struct mlx5_ib_cqe_comp_caps cqe_comp_caps;
struct mlx5_packet_pacing_caps packet_pacing_caps;
__u32 mlx5_ib_support_multi_pkt_send_wqes;
- __u32 reserved;
+ __u32 flags; /* Use enum mlx5_ib_query_dev_resp_flags */
struct mlx5_ib_sw_parsing_caps sw_parsing_caps;
+ struct mlx5_ib_striding_rq_caps striding_rq_caps;
+ __u32 tunnel_offloads_caps; /* enum mlx5_ib_tunnel_offloads */
+ __u32 reserved;
+ };
+
+ enum mlx5_ib_create_cq_flags {
+ MLX5_IB_CREATE_CQ_FLAGS_CQE_128B_PAD = 1 << 0,
};
struct mlx5_ib_create_cq {
__u32 cqe_size;
__u8 cqe_comp_en;
__u8 cqe_comp_res_format;
- __u16 reserved; /* explicit padding (optional on i386) */
+ __u16 flags; /* Use enum mlx5_ib_create_cq_flags */
};
struct mlx5_ib_create_cq_resp {
MLX5_RX_HASH_SRC_PORT_TCP = 1 << 4,
MLX5_RX_HASH_DST_PORT_TCP = 1 << 5,
MLX5_RX_HASH_SRC_PORT_UDP = 1 << 6,
- MLX5_RX_HASH_DST_PORT_UDP = 1 << 7
+ MLX5_RX_HASH_DST_PORT_UDP = 1 << 7,
+ /* Save bits for future fields */
+ MLX5_RX_HASH_INNER = 1 << 31
};
struct mlx5_ib_create_qp_rss {
__u8 reserved[6];
__u8 rx_hash_key[128]; /* valid only for Toeplitz */
__u32 comp_mask;
- __u32 reserved1;
+ __u32 flags;
};
struct mlx5_ib_create_qp_resp {
__u16 reserved2;
};
+ enum mlx5_ib_create_wq_mask {
+ MLX5_IB_CREATE_WQ_STRIDING_RQ = (1 << 0),
+ };
+
struct mlx5_ib_create_wq {
__u64 buf_addr;
__u64 db_addr;
__u32 user_index;
__u32 flags;
__u32 comp_mask;
- __u32 reserved;
+ __u32 single_stride_log_num_of_bytes;
+ __u32 single_wqe_log_num_of_strides;
+ __u32 two_byte_shift_en;
};
struct mlx5_ib_create_ah_resp {
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
/*
* Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
*
struct pvrdma_create_srq {
__u64 buf_addr;
+ __u32 buf_size;
+ __u32 reserved;
};
struct pvrdma_create_srq_resp {