Merge tag 'soc-fixes-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARM SoC fixes from Arnd Bergmann:
"This should be the last set of bugfixes in the SoC tree:
- Two fixes for Arm integrator, dealing with a regression caused by
invalid DT properties combined with a change in dma address
translation, and missing device_type annotations on the PCI bus
- Fixes for drivers/reset/, addressing bugs in i.MX8MP, Sparx5 and
NPCM8XX platforms
- Bjorn Andersson's email address changes in the MAINTAINERS file
- Multiple minor fixes to Qualcomm dts files, and a change to the
remoteproc firmware filename that did not match the actual path in
the linux-firmware package
- Minor code fixes for the Allwinner/sunxi SRAM driver, and the
broadcom STB Bus Interface Unit driver
- A build fix for the sunplus sp7021 platform
- Two dts fixes for TI OMAP family SoCs, addressing an extraneous
usb4 device node and an incorrect DMA handle"
* tag 'soc-fixes-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
ARM: dts: integrator: Fix DMA ranges
ARM: dts: integrator: Tag PCI host with device_type
ARM: sunplus: fix serial console kconfig and build problems
reset: npcm: fix iprst2 and iprst4 setting
arm64: dts: qcom: sm8350: fix UFS PHY serdes size
soc: bcm: brcmstb: biuctrl: Avoid double of_node_put()
arm64: dts: qcom: sc8280xp-x13s: Update firmware location
soc: sunxi: sram: Fix debugfs info for A64 SRAM C
soc: sunxi: sram: Fix probe function ordering issues
soc: sunxi: sram: Prevent the driver from being unbound
soc: sunxi: sram: Actually claim SRAM regions
ARM: dts: am5748: keep usb4_tm disabled
reset: microchip-sparx5: issue a reset on startup
reset: imx7: Fix the iMX8MP PCIe PHY PERST support
MAINTAINERS: Update Bjorn's email address
arm64: dts: qcom: sc7280: move USB wakeup-source property
arm64: dts: qcom: thinkpad-x13s: Fix firmware location
arm64: dts: qcom: sm8150: Fix fastrpc iommu values
ARM: dts: am33xx: Fix MMCHS0 dma properties
Eli Cohen [Mon, 12 Sep 2022 12:50:19 +0000 (15:50 +0300)]
vdpa/mlx5: Fix MQ to support non power of two num queues
RQT objects require that a power of two value be configured for both
rqt_max_size and rqt_actual size.
For create_rqt, make sure to round up to a power of two the value given by
the user who created the vdpa device, as stored in ndev->rqt_size. The
actual size is also rounded up to a power of two using the current number
of VQs given by ndev->cur_num_vqs.
The same goes for modify_rqt, where the actual size must be rounded up to a
power of two based on the new number of QPs.
Without this patch, attempting to create a device with a non-power-of-two
number of QPs would result in an error from the firmware.
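A minimal sketch of the rounding, assuming the mlx5 MLX5_SET accessors and the
rqtc field names from mlx5_ifc (illustrative, not the exact driver diff):

    #include <linux/log2.h>

    /* RQT sizes must be powers of two: round up what the user configured. */
    static void sketch_fill_rqt_sizes(struct mlx5_vdpa_net *ndev, void *rqtc)
    {
        /* max size derives from the queue count chosen at device creation */
        MLX5_SET(rqtc, rqtc, rqt_max_size, roundup_pow_of_two(ndev->rqt_size));
        /* actual size tracks the number of VQs currently in use */
        MLX5_SET(rqtc, rqtc, rqt_actual_size, roundup_pow_of_two(ndev->cur_num_vqs));
    }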
Suwan Kim [Tue, 30 Aug 2022 15:01:53 +0000 (00:01 +0900)]
virtio-blk: Fix WARN_ON_ONCE in virtio_queue_rq()
If a request fails in virtio_queue_rqs(), it is inserted into the requeue_list
and later passed to virtio_queue_rq(). blk_mq_start_request() can then be
called again in virtio_queue_rq() and trigger a WARN_ON_ONCE, because the
request state was already set to MQ_RQ_IN_FLIGHT in virtio_queue_rqs()
despite the failure.
To avoid calling blk_mq_start_request() twice, this patch moves the
blk_mq_start_request() call to the end of virtblk_prep_rq().
And instead of requeuing a failed request to the plug list in the error path
of virtblk_add_req_batch(), it uses blk_mq_requeue_request() to put the
failed request back into the MQ_RQ_IDLE state. virtblk can then safely
handle the request on the next attempt.
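A hedged sketch of the reordering (the prep helpers here are placeholders, not
the exact virtio_blk internals):

    /* Mark the request in-flight only after every prep step has succeeded. */
    static blk_status_t sketch_virtblk_prep_rq(struct blk_mq_hw_ctx *hctx,
                                               struct request *req)
    {
        blk_status_t status;

        status = sketch_setup_cmd(req);         /* hypothetical helper */
        if (unlikely(status))
            return status;

        status = sketch_map_data(hctx, req);    /* hypothetical helper */
        if (unlikely(status))
            return status;

        blk_mq_start_request(req);              /* moved to the very end */
        return BLK_STS_OK;
    }

    /* Error path of the batched submit: return the request to MQ_RQ_IDLE
     * and let the block layer retry it later. */
    static void sketch_requeue_failed_req(struct request *req)
    {
        blk_mq_requeue_request(req, true);
    }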
Xuan Zhuo [Tue, 30 Aug 2022 11:05:49 +0000 (19:05 +0800)]
virtio_test: fixup for vq reset
Fix virtio test compilation failure caused by vq reset.
../../drivers/virtio/virtio_ring.c: In function ‘vring_create_virtqueue_packed’:
../../drivers/virtio/virtio_ring.c:1999:8: error: ‘struct virtqueue’ has no member named ‘reset’
1999 | vq->vq.reset = false;
| ^
../../drivers/virtio/virtio_ring.c: In function ‘__vring_new_virtqueue’:
../../drivers/virtio/virtio_ring.c:2493:8: error: ‘struct virtqueue’ has no member named ‘reset’
2493 | vq->vq.reset = false;
| ^
../../drivers/virtio/virtio_ring.c: In function ‘virtqueue_resize’:
../../drivers/virtio/virtio_ring.c:2587:18: error: ‘struct virtqueue’ has no member named ‘num_max’
2587 | if (num > vq->vq.num_max)
| ^
../../drivers/virtio/virtio_ring.c:2596:11: error: ‘struct virtio_device’ has no member named ‘config’
2596 | if (!vdev->config->disable_vq_and_reset)
| ^~
../../drivers/virtio/virtio_ring.c:2599:11: error: ‘struct virtio_device’ has no member named ‘config’
2599 | if (!vdev->config->enable_vq_after_reset)
| ^~
../../drivers/virtio/virtio_ring.c:2602:12: error: ‘struct virtio_device’ has no member named ‘config’
2602 | err = vdev->config->disable_vq_and_reset(_vq);
| ^~
../../drivers/virtio/virtio_ring.c:2614:10: error: ‘struct virtio_device’ has no member named ‘config’
2614 | if (vdev->config->enable_vq_after_reset(_vq))
| ^~
make: *** [<builtin>: virtio_ring.o] Error 1
lei he [Mon, 19 Sep 2022 07:51:58 +0000 (15:51 +0800)]
virtio-crypto: fix memory-leak
Fix a memory leak for virtio-crypto akcipher requests; the problem was
introduced by commit 59ca6c93387d3 ("virtio-crypto: implement RSA algorithm").
The leak can be reproduced and tested with the following script inside a
virtual machine:
# here we only run pkey_encrypt because it is the fastest interface
function bench_pub() {
    keyctl pkey_encrypt $PUB_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.pub
}
# run bench_pub in a loop to trigger the memory leak
for (( i = 0; i < ${LOOP_TIMES}; ++i )); do
    bench_pub
done
drm/amdgpu: Add amdgpu suspend-resume code path under SRIOV
- Under SRIOV, we need to send REQ_GPU_FINI to the hypervisor
during suspend. Furthermore, we cannot request a mode 1 reset
under SRIOV as a VF, so we skip it, as it is called in the
suspend_noirq() function.
- In the resume code path, we need to send REQ_GPU_INIT to the
hypervisor and also resume PSP IP block under SRIOV.
Samson Tam [Fri, 9 Sep 2022 21:16:32 +0000 (17:16 -0400)]
drm/amd/display: fill in clock values when DPM is not enabled
[Why]
For individual feature testing, PMFW may not report all clock
values back. Driver will default them to 0 but this will
cause the BB table to be skipped and default to one state
with max clocks.
[How]
Add helper function to scan through initial clock values and
populate them with default clock limits so that BB table
can be built.
Add dpm_enabled flag to check when DPM is not enabled and
to trigger helper function.
[why]
We have a minimal pipe split transition method to avoid pipe
allocation outages. However, this method invokes audio setup,
which causes audio output to get stuck once pipes are reallocated.
[how]
Skip audio setup for pipelines whose audio stream has already been enabled.
Graham Sider [Fri, 23 Sep 2022 14:07:15 +0000 (10:07 -0400)]
drm/amdkfd: fix dropped interrupt in kfd_int_process_v11
Shader wave interrupts were getting dropped in event_interrupt_wq_v11
if the PRIV bit was set to 1. This would often lead to a hang. Until
debugger logic is upstreamed, expand the comment and remove the early return.
Graham Sider [Mon, 19 Sep 2022 17:57:14 +0000 (13:57 -0400)]
drm/amdgpu: pass queue size and is_aql_queue to MES
Update mes_v11_api_def.h add_queue API with is_aql_queue parameter. Also
re-use gds_size for the queue size (unused for KFD). MES requires the
queue size in order to compute the actual wptr offset within the queue
RB since it increases monotonically for AQL queues.
Evan Quan [Thu, 1 Sep 2022 03:45:02 +0000 (11:45 +0800)]
drm/amd/pm: use adverse selection for dpm features unsupported by driver
It is the vbios and pmfw, not the driver, that decide whether a dpm feature
is supported. The driver just de-selects those features which are not
permitted by the user's request. Thus, we use an adverse selection model.
Nadav Amit [Wed, 21 Sep 2022 18:09:32 +0000 (18:09 +0000)]
x86/alternative: Fix race in try_get_desc()
I encountered some occasional crashes of poke_int3_handler() when
kprobes are set, while accessing desc->vec.
The text poke mechanism claims to have an RCU-like behavior, but it
does not appear that there is any quiescent state to ensure that
nobody holds a reference to desc. As a result, the following race
appears to be possible and can lead to memory corruption.
  if (!desc) [false, success]
                                        WRITE_ONCE(bp_desc, NULL);
                                        atomic_dec_and_test(&desc.refs)

                                        [ success, desc space on the stack
                                          is being reused and might have
                                          non-zero value. ]
  arch_atomic_inc_not_zero(&desc->refs)

  [ might succeed since desc points to
    stack memory that was freed and might
    be reused. ]
Fix this issue with a small backportable patch. Instead of trying to
make RCU-like behavior for bp_desc, just eliminate the unnecessary
level of indirection of bp_desc and hold the whole descriptor as a
global. Anyhow, there is only a single descriptor at any given
moment.
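A sketch of the shape of the fix, closely following the description above (the
struct layout shown is an assumption):

    struct bp_patching_desc {
        struct text_poke_loc *vec;
        int nr_entries;
        atomic_t refs;
    };

    /* One global descriptor instead of a pointer: nothing stale to chase. */
    static struct bp_patching_desc bp_desc;

    static __always_inline struct bp_patching_desc *try_get_desc(void)
    {
        /* a zero refcount means no batch is being patched right now */
        if (!arch_atomic_inc_not_zero(&bp_desc.refs))
            return NULL;
        return &bp_desc;
    }

    static __always_inline void put_desc(void)
    {
        smp_mb__before_atomic();
        arch_atomic_dec(&bp_desc.refs);
    }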
ice: xsk: drop power of 2 ring size restriction for AF_XDP
Over the past months, multiple customers reported that commit 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
makes them unable to use a ring size of 8160 in conjunction with AF_XDP.
Remove this restriction.
AF_XDP Tx descriptor cleaning in the ice driver currently works in a "lazy"
way - descriptors are not cleaned immediately after send. We rather hold
off on cleaning until the free space in the ring drops below a particular
threshold. This was supposed to reduce the amount of unnecessary work
related to cleaning; instead of keeping the ring empty, the ring was kept
mostly saturated.
In the AF_XDP realm, cleaning Tx descriptors implies producing them to the CQ.
This is the way of letting user space know that a particular descriptor has
been sent, as John points out in [0].
We tried to implement serial descriptor cleaning to be used in
conjunction with batched cleaning, but it made the code base more convoluted
and probably harder to maintain in the future. Therefore we step away from
batched cleaning in its current form in favor of an approach where we set the
RS bit on the last descriptor of every batch and always clean at the
beginning of ice_xmit_zc().
This means that we give up a bit of Tx performance, but this doesn't
hurt the l2fwd scenario, which is far more meaningful than txonly since the
latter can be treated as an AF_XDP-based packet generator. l2fwd is not hurt
because the Tx side is much faster than Rx, and Rx is the one that has
to catch up with Tx.
FWIW, Tx descriptors are still produced in a batched way.
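A rough sketch of the resulting ordering (helper names are hypothetical; the
real code lives in the ice AF_XDP Tx path):

    static bool sketch_ice_xmit_zc(struct ice_tx_ring *xdp_ring)
    {
        /* reclaim whatever HW already completed before producing more */
        sketch_clean_completed_tx_descs(xdp_ring);      /* hypothetical */

        /* ... produce a batch of Tx descriptors from the XSK ring ... */

        /* request a completion (RS bit) only on the batch's last descriptor */
        sketch_set_rs_on_last_desc(xdp_ring);           /* hypothetical */

        return true;
    }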
Both i.MX6 and i.MX8 reference manuals list 0xBF8 as SNVS_HPVIDR1
(chapters 57.9 and 6.4.5 respectively).
Without this, reading the revision number returns 0 on all revisions,
causing the i.MX6 quirk to apply on all platforms. This in turn causes
the driver to synthesise power button release events instead of passing
through the real ones as they happen, even on platforms like i.MX8 where
that is not wanted.
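A small sketch of reading the revision from the documented offset (the regmap
handle and helper name are illustrative):

    #include <linux/regmap.h>

    #define SNVS_HPVIDR1    0xBF8   /* per the i.MX6/i.MX8 reference manuals */

    static int sketch_snvs_read_rev(struct regmap *snvs, unsigned int *rev)
    {
        return regmap_read(snvs, SNVS_HPVIDR1, rev);
    }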
Merge tag 'sound-6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"A few device-specific fixes, mostly for ASoC. All look small / trivial
enough"
* tag 'sound-6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ALSA: hda: intel-dsp-config: add missing RaptorLake PCI IDs
ASoC: tas2770: Reinit regcache on reset
ASoC: nau8824: Fix semaphore is released unexpectedly
ASoC: Intel: sof_sdw: add support for Dell SKU 0AFF
ASoC: imx-card: Fix refcount issue with of_node_put
ASoC: rt5640: Fix the issue of the abnormal JD2 status
David Matlack [Mon, 26 Sep 2022 17:14:57 +0000 (10:14 -0700)]
KVM: selftests: Skip tests that require EPT when it is not available
Skip selftests that require EPT support in the VM when it is not
available: for example, when running on a machine where kvm_intel.ept=N,
since KVM does not offer EPT support to guests if EPT is not supported
on the host.
This commit causes vmx_dirty_log_test to be skipped instead of failing
on hosts where kvm_intel.ept=N.
The block device uses multiple queues to access the eMMC. There can be up to 3
requests in the host's hsq. The current code checks whether a request is doing
recovery before entering the queue, but it does not check, under the lock at
issue time, whether a request is in recovery. If a request is in recovery and
a read or write request is initiated at that time, the conflict between that
request and the recovery request causes the data to be trampled.
The UAS mode of Thinkplus (0x17ef, 0x3899) is reported to hurt
performance and trigger a kernel panic on several platforms with the
following error message:
The UAS mode of the Hiksemi USB_HDD is reported to fail to work on several
platforms with the following error message, and after re-connecting the
device is offlined and does not work at all.
The UAS mode of the Hiksemi is reported to fail to work on several platforms
with the following error message, and after re-connecting the device is
offlined and does not work at all.
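These reports typically end up as unusual_uas.h entries that force plain
usb-storage instead of UAS; a hedged sketch (device strings and bcdDevice
range here are illustrative):

    UNUSUAL_DEV(0x17ef, 0x3899, 0x0000, 0x9999,
            "Thinkplus",
            "External HDD",
            USB_SC_DEVICE, USB_PR_DEVICE, NULL,
            US_FL_IGNORE_UAS),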
Heikki Krogerus [Thu, 22 Sep 2022 14:59:24 +0000 (17:59 +0300)]
usb: typec: ucsi: Remove incorrect warning
Sink only devices do not have any source capabilities, so
the driver should not warn about that. Also DRP (Dual Role
Power) capable devices, such as USB Type-C docking stations,
do not return any source capabilities unless they are
plugged to a power supply themselves.
net: phy: Don't WARN for PHY_UP state in mdio_bus_phy_resume()
Commit 744d23c71af3 ("net: phy: Warn about incorrect mdio_bus_phy_resume()
state") introduced a WARN() on resume from system sleep if a PHY is not
in PHY_HALTED state.
Commit 6dbe852c379f ("net: phy: Don't WARN for PHY_READY state in
mdio_bus_phy_resume()") added an exemption for PHY_READY state from
the WARN().
It turns out PHY_UP state needs to be exempted as well because the
following may happen on suspend:
mdio_bus_phy_suspend()
phy_stop_machine()
phydev->state = PHY_UP # if (phydev->state >= PHY_UP)
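A sketch of the widened check in mdio_bus_phy_resume() (surrounding code
abridged):

    static int sketch_mdio_bus_phy_resume(struct device *dev)
    {
        struct phy_device *phydev = to_phy_device(dev);

        /* PHY_UP joins PHY_HALTED and PHY_READY as legitimate resume states */
        WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY &&
                phydev->state != PHY_UP);

        /* ... rest of the resume path unchanged ... */
        return 0;
    }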
Merge tag 'thunderbolt-for-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into usb-linus
Mika writes:
"thunderbolt: Fix for v6.0 final
This includes a single fix from Mario that resets the plug event delay
back to the USB4 spec value.
This has been in linux-next with no reported issues."
* tag 'thunderbolt-for-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt:
thunderbolt: Explicitly reset plug events delay back to USB4 spec value
The issue is related to the serdes, which impacts the clock. There is a
serdes in the ADLink I-Pi SMARC board's ethernet controller; please refer to
commit b9663b7ca6ff78 ("net: stmmac: Enable SERDES power up/down sequence")
for details. When the issue is reproduced, the DMA engine clock is not ready
because the serdes is not powered up.
To reproduce the DMA engine reset timeout issue on hardware that has a
serdes in the GbE controller, install Ubuntu. In the Ubuntu GUI, click the
"Power Off/Log Out" -> "Suspend" menu; this disables the network interface,
then the system goes to sleep mode. When it wakes up, it enables the network
interface again. The stmmac driver is called in this way:
1. stmmac_release: Stop the network interface. In this function, the
DMA engine and network interface are disabled;
2. stmmac_suspend: Called in the kernel suspend flow. But because the
network interface has been disabled (netif_running(ndev) is
false), it does nothing and returns directly;
3. The system goes into S3 or S0ix state. Some time later, the system is
woken up by keyboard or mouse;
4. stmmac_resume: Does nothing because the network interface has
been disabled;
5. stmmac_open: Called to enable the network interface again. The DMA
engine is initialized in this API, but the serdes is not powered on, so
there will be a DMA engine reset timeout.
Similarly, serdes powerdown should be added to stmmac_release.
The network interface might be disabled by the command "ifconfig eth0 down";
the DMA engine, phy and mac have already been disabled in the ndo_stop
callback, so the serdes should be powered down as well. It doesn't make
sense for the serdes to stay on while the other components have been
turned off.
If the ethernet interface is enabled (netif_running(ndev) is true)
before suspend/resume, the issue cannot be reproduced because the serdes
is powered up in stmmac_resume.
Because serdes_powerup is now called in stmmac_open, it no longer needs to
be called in the probe function.
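A hedged sketch of pairing serdes power with the interface state, assuming the
serdes_powerup/serdes_powerdown platform callbacks referenced above:

    static int sketch_stmmac_open(struct net_device *ndev)
    {
        struct stmmac_priv *priv = netdev_priv(ndev);
        int ret;

        /* power the serdes before touching the DMA engine */
        if (priv->plat->serdes_powerup) {
            ret = priv->plat->serdes_powerup(ndev, priv->plat->bsp_priv);
            if (ret < 0)
                return ret;
        }
        /* ... DMA engine init and the rest of open ... */
        return 0;
    }

    static int sketch_stmmac_release(struct net_device *ndev)
    {
        struct stmmac_priv *priv = netdev_priv(ndev);

        /* ... stop DMA, phy and mac ... */
        if (priv->plat->serdes_powerdown)
            priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
        return 0;
    }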
Rafael Mendonca [Sun, 25 Sep 2022 14:34:19 +0000 (11:34 -0300)]
wifi: mac80211: mlme: Fix double unlock on assoc success handling
Commit 6911458dc428 ("wifi: mac80211: mlme: refactor assoc success
handling") moved the per-link setup out of ieee80211_assoc_success() into a
new function, ieee80211_assoc_config_link(), but missed removing the unlock
of 'sta_mtx' for the case where HE capability/operation is missing on an HE
AP, which leads to a double unlock:
Rafael Mendonca [Sat, 24 Sep 2022 18:40:41 +0000 (15:40 -0300)]
wifi: mac80211: mlme: Fix missing unlock on beacon RX
Commit 98b0b467466c ("wifi: mac80211: mlme: use correct link_sta")
switched to link station instead of deflink and added some checks to do
that, which are done with the 'sta_mtx' mutex held. However, the error
path of these checks does not unlock 'sta_mtx' before returning.
Paweł Lenkow [Mon, 19 Sep 2022 15:01:35 +0000 (17:01 +0200)]
wifi: mac80211: fix memory corruption in minstrel_ht_update_rates()
During our testing of WFM200 module over SDIO on i.MX6Q-based platform,
we discovered a memory corruption on the system, tracing back to the wfx
driver. Using kfence, it was possible to trace it back to the root
cause, which is hw->max_rates set to 8 in wfx_init_common,
while the maximum defined by IEEE80211_TX_TABLE_SIZE is 4.
This causes array out-of-bounds writes during updates of the rate table,
as seen below:
BUG: KFENCE: memory corruption in kfree_rcu_work+0x320/0x36c
After discussion on the wireless mailing list it was clarified
that the issue was introduced by
commit ee0e16ab756a ("mac80211: minstrel_ht: fill all requested rates")
and that the fix belongs in minstrel_ht_update_rates() in rc80211_minstrel_ht.c.
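A minimal sketch of the clamp (using the constant name quoted in the report;
the surrounding rate-table fill loop is abridged):

    static int sketch_clamped_max_rates(struct ieee80211_hw *hw)
    {
        /* never write past the 4-entry rate table, even if a driver
         * such as wfx advertises hw->max_rates = 8 */
        return min_t(int, hw->max_rates, IEEE80211_TX_TABLE_SIZE);
    }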
Hans de Goede [Sun, 18 Sep 2022 19:20:52 +0000 (21:20 +0200)]
wifi: mac80211: fix regression with non-QoS drivers
Commit 10cb8e617560 ("mac80211: enable QoS support for nl80211 ctrl port")
changed ieee80211_tx_control_port() to always call
__ieee80211_select_queue() without checking local->hw.queues.
__ieee80211_select_queue() returns a queue-id between 0 and 3, which means
that ieee80211_tx_control_port() may now end up setting the queue-mapping
for an skb to a value higher than local->hw.queues if local->hw.queues
is less than 4.
Specifically this is a problem for ralink rt2500-pci cards where
local->hw.queues is 2. There this causes rt2x00queue_get_tx_queue() to
return NULL and the following error to be logged: "ieee80211 phy0:
rt2x00mac_tx: Error - Attempt to send packet over invalid queue 2",
after which association with the AP fails.
Other callers of __ieee80211_select_queue() skip calling it when
local->hw.queues < IEEE80211_NUM_ACS; add the same check to
ieee80211_tx_control_port(). This fixes ralink rt2500-pci and
similar cards with fewer than 4 tx-queues, which were no longer working.
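A hedged sketch of the added check in ieee80211_tx_control_port()
(mac80211-internal types; surrounding function abridged):

    if (local->hw.queues < IEEE80211_NUM_ACS)
        skb_set_queue_mapping(skb, 0);  /* single usable queue */
    else
        skb_set_queue_mapping(skb,
                              __ieee80211_select_queue(sdata, sta, skb));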
Alexander Wetzel [Thu, 15 Sep 2022 13:09:46 +0000 (15:09 +0200)]
wifi: mac80211: ensure vif queues are operational after start
Make sure local->queue_stop_reasons and vif.txqs_stopped stay in sync.
When a new vif is created the queues may end up in an inconsistent state
and be inoperable:
Communication not using iTXQ will work, allowing e.g. the association to
complete. But the 4-way handshake will time out. The sta will not
send out any skbs queued in iTXQs.
All normal attempts to start the queues will fail when reaching this
state.
local->queue_stop_reasons will have marked all queues as operational but
vif.txqs_stopped will still be set, creating an inconsistent internal
state.
In reality this seems to be a race between the mac80211 function
ieee80211_do_open() setting SDATA_STATE_RUNNING and the wake_txqs_tasklet:
depending on the driver and the timing, the queues may or may not end up
operational.
Alexander Wetzel [Thu, 15 Sep 2022 12:41:20 +0000 (14:41 +0200)]
wifi: mac80211: don't start TX with fq->lock to fix deadlock
ieee80211_txq_purge() calls fq_tin_reset() and
ieee80211_purge_tx_queue(); both then call
ieee80211_free_txskb(), which can decide to TX the skb again.
There are at least two ways to get a deadlock:
1) When we have a TDLS teardown packet queued in either tin or frags
ieee80211_tdls_td_tx_handle() will call ieee80211_subif_start_xmit()
while we still hold fq->lock. ieee80211_txq_enqueue() will thus
deadlock.
2) A variant of the above happens if aggregation is up and running:
In that case ieee80211_iface_work() will deadlock with the original
task: The original task already holds fq->lock and tries to get
sta->lock after kicking off ieee80211_iface_work(). But the worker
can get sta->lock prior to the original task and will then spin for
fq->lock.
Avoid these deadlocks by not sending out any skbs when called via
ieee80211_free_txskb().
Nicolas Dufresne [Fri, 10 Jun 2022 12:52:11 +0000 (13:52 +0100)]
media: rkvdec: Disable H.264 error detection
Quite often, the HW gets stuck in an error condition if a stream error
was detected. As documented, the HW should stop immediately and self-reset.
There is likely a problem with, or a misunderstanding of, the self-reset
mechanism: unless we make a long pause, the next command
will report an error even if there is nothing wrong with it.
Disabling error detection fixes the issue and lets the decoder continue
after an error. This patch is safe for backporting to older kernels.
media: mediatek: vcodec: Drop platform_get_resource(IORESOURCE_IRQ)
Commit a1a2b7125e10 ("of/platform: Drop static setup of IRQ resource
from DT core") removed support for calling platform_get_resource(...,
IORESOURCE_IRQ, ...) on DT-based drivers, but the probe() function of
mtk-vcodec's encoder was still making use of it. This caused the encoder
driver to fail probe.
Since the platform_get_resource() call was only being used to check for
the presence of the interrupt (its returned resource wasn't even used)
and platform_get_irq() was already being used to get the IRQ, simply
drop the use of platform_get_resource(IORESOURCE_IRQ) and handle the
failure of platform_get_irq(), to get the driver probing again.
[hverkuil: drop unused struct resource *res]
Fixes: a1a2b7125e10 ("of/platform: Drop static setup of IRQ resource from DT core")
Signed-off-by: Nícolas F. R. A. Prado <[email protected]>
Reviewed-by: AngeloGioacchino Del Regno <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
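A minimal sketch of the simplified probe-time IRQ lookup, with no separate
IORESOURCE_IRQ query (helper name is illustrative):

    static int sketch_get_enc_irq(struct platform_device *pdev)
    {
        int irq = platform_get_irq(pdev, 0);

        /* platform_get_irq() reports a missing interrupt itself, so just
         * propagate its negative errno instead of also checking the
         * (absent) IORESOURCE_IRQ resource */
        return irq;
    }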
Hans Verkuil [Wed, 18 May 2022 13:06:31 +0000 (14:06 +0100)]
media: v4l2-ioctl.c: fix incorrect error path
If allocating array_buf fails, or copying data from userspace into that
buffer fails, then just free memory and return the error. Don't attempt
to call video_put_user() since there is no point, and it would copy back
data on error even if INFO_FL_ALWAYS_COPY wasn't set.
So if writing the array back to userspace fails, then don't go to
out_array_args, instead just continue with the regular code that just
returns the error unless 'always_copy' is set.
Update the VIDIOC_G/S/TRY_EXT_CTRLS ioctls to set the ALWAYS_COPY flag
since they now need it. Before this worked due to this buggy code, but
now that that is fixed these ioctls need to set this flag explicitly.
Hans Verkuil [Mon, 21 Mar 2022 08:33:56 +0000 (08:33 +0000)]
media: v4l2-compat-ioctl32.c: zero buffer passed to v4l2_compat_get_array_args()
The v4l2_compat_get_array_args() function can leave uninitialized memory in the
buffer it is passed. So zero it before copying array elements from userspace
into the buffer.
Tina Hsu [Thu, 22 Sep 2022 06:16:30 +0000 (14:16 +0800)]
nvme-pci: disable Write Zeroes on Phison E3C/E4C
E3C/E4C SSDs do support the Write Zeroes command in theory, but have very
bad performance when using it. As the firmware has been frozen for these
products, we cannot expect firmware improvements, so disable
Write Zeroes.
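Such quirks are expressed as entries in the NVMe PCI id table; a hedged sketch
(the device id below is a placeholder, not the real E3C/E4C id):

    { PCI_DEVICE(0x1987, 0x5000),   /* placeholder Phison device id */
      .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },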
Michael Kelley [Fri, 23 Sep 2022 04:49:09 +0000 (21:49 -0700)]
nvme: Fix IOC_PR_CLEAR and IOC_PR_RELEASE ioctls for nvme devices
The IOC_PR_CLEAR and IOC_PR_RELEASE ioctls are
non-functional on NVMe devices because the nvme_pr_clear()
and nvme_pr_release() functions set the IEKEY field incorrectly.
The IEKEY field should be set only when the key is zero (i.e.,
not specified). The current code does it backwards.
Furthermore, the NVMe spec describes the persistent
reservation "clear" function as an option on the reservation
release command. The current implementation of nvme_pr_clear()
erroneously uses the reservation register command.
Fix these errors. Note that NVMe version 1.3 and later specify
that setting the IEKEY field will return an error of Invalid
Field in Command. The fix will set IEKEY when the key is zero,
which is appropriate as these ioctls consider a zero key to
be "unspecified", and the intention of the spec change is
to require a valid key.
Tested on a version 1.4 PCI NVMe device in an Azure VM.
Fixes: 1673f1f08c88 ("nvme: move block_device_operations and ns/ctrl freeing to common code")
Fixes: 1d277a637a71 ("NVMe: Add persistent reservation ops")
Signed-off-by: Michael Kelley <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
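A hedged sketch of the corrected handlers (cdw10 layout per the NVMe
Reservation Release command; treat the helper names as illustrative):

    /* IEKEY (bit 3) only when no key was supplied; "clear" is an action of
     * the reservation release command, not reservation register. */
    static int sketch_nvme_pr_clear(struct block_device *bdev, u64 key)
    {
        u32 cdw10 = 1 | (key ? 0 : 1 << 3);

        return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
    }

    static int sketch_nvme_pr_release(struct block_device *bdev, u64 key,
                                      enum pr_type type)
    {
        u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 0 : 1 << 3);

        return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
    }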
libata: add ATA_HORKAGE_NOLPM for Pioneer BDR-207M and BDR-205
Commit 1527f69204fe ("ata: ahci: Add Green Sardine vendor ID as
board_ahci_mobile") added an explicit entry for AMD Green Sardine
AHCI controller using the board_ahci_mobile configuration (this
configuration has later been renamed to board_ahci_low_power).
The board_ahci_low_power configuration enables support for low power
modes.
This explicit entry takes precedence over the generic AHCI controller
entry, which does not enable support for low power modes.
Therefore, when commit 1527f69204fe ("ata: ahci: Add Green Sardine
vendor ID as board_ahci_mobile") was backported to stable kernels,
it made some Pioneer optical drives, which were working perfectly fine
before the commit was backported, stop working.
The real problem is that the Pioneer optical drives do not handle low
power modes correctly. If these optical drives would have been tested
on another AHCI controller using the board_ahci_low_power configuration,
this issue would have been detected earlier.
Unfortunately, the board_ahci_low_power configuration is only used in
less than 15% of the total AHCI controller entries, so many devices
have never been tested with an AHCI controller with low power modes.
Merge tag 'x86_urgent_for_v6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Dave Hansen:
- A performance fix for recent large AMD systems that avoids an ancient
cpu idle hardware workaround
- A new Intel model number. Folks like these upstream as soon as
possible so that each developer doing feature development doesn't
need to carry their own #define
- SGX fixes for a userspace crash and a rare kernel warning
* tag 'x86_urgent_for_v6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
ACPI: processor idle: Practically limit "Dummy wait" workaround to old Intel systems
x86/sgx: Handle VA page allocation failure for EAUG on PF.
x86/sgx: Do not fail on incomplete sanitization on premature stop of ksgxd
x86/cpu: Add CPU model numbers for Meteor Lake
A recent change to the behaviour of phys_to_dma(), which now actually
requires the device tree ranges in order to work, unmasked a
bug in the Integrator DMA ranges.
The PL110 uses the CMA allocator to obtain coherent allocations
from a dedicated 1MB video memory, leading to the following
call chain:
phys_to_dma() by way of translate_phys_to_dma() will nowadays not
provide 1:1 mappings unless the ranges are properly defined in
the device tree and reflected into the dev->dma_range_map.
There is a bug in the device trees because the DMA ranges are
incorrectly specified, and the patch uncovers this bug.
Solution:
- Fix the LB (logic bus) ranges to be 1-to-1 like they should
have always been.
- Provide a 1:1 dma-ranges attribute to the PL110.
- Mark the PL110 display controller as DMA coherent.
This makes the DMA ranges work right and makes the PL110
framebuffer work again.
Merge tag 'mm-hotfixes-stable-2022-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull last (?) hotfixes from Andrew Morton:
"26 hotfixes.
8 are for issues which were introduced during this -rc cycle, 18 are
for earlier issues, and are cc:stable"
* tag 'mm-hotfixes-stable-2022-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (26 commits)
x86/uaccess: avoid check_object_size() in copy_from_user_nmi()
mm/page_isolation: fix isolate_single_pageblock() isolation behavior
mm,hwpoison: check mm when killing accessing process
mm/hugetlb: correct demote page offset logic
mm: prevent page_frag_alloc() from corrupting the memory
mm: bring back update_mmu_cache() to finish_fault()
frontswap: don't call ->init if no ops are registered
mm/huge_memory: use pfn_to_online_page() in split_huge_pages_all()
mm: fix madivse_pageout mishandling on non-LRU page
powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
mm: gup: fix the fast GUP race against THP collapse
mm: fix dereferencing possible ERR_PTR
vmscan: check folio_test_private(), not folio_get_private()
mm: fix VM_BUG_ON in __delete_from_swap_cache()
tools: fix compilation after gfp_types.h split
mm/damon/dbgfs: fix memory leak when using debugfs_lookup()
mm/migrate_device.c: copy pte dirty bit to page
mm/migrate_device.c: add missing flush_cache_page()
mm/migrate_device.c: flush TLB while holding PTL
x86/mm: disable instrumentations of mm/pgprot.c
...
Rafael Mendonca [Thu, 22 Sep 2022 17:51:08 +0000 (14:51 -0300)]
cxgb4: fix missing unlock on ETHOFLD desc collect fail path
The label passed to the QDESC_GET for the ETHOFLD TXQ, RXQ, and FLQ, is the
'out' one, which skips the 'out_unlock' label, and thus doesn't unlock the
'uld_mutex' before returning. Additionally, since commit 5148e5950c67
("cxgb4: add EOTID tracking and software context dump"), the access to
these ETHOFLD hardware queues should be protected by the 'mqprio_mutex'
instead.
Merge tag 'ext4_for_linus_fixes2' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull missed ext4 fix from Ted Ts'o:
"Fix a potential uninitialized variable bug; this was a fixup that I had
forgotten to apply before the last pull request for ext4. My bad"
* tag 'ext4_for_linus_fixes2' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: fixup possible uninitialized variable access in ext4_mb_choose_next_group_cr1()
x86/uaccess: avoid check_object_size() in copy_from_user_nmi()
The check_object_size() helper under CONFIG_HARDENED_USERCOPY is designed
to skip any checks where the length is known at compile time as a
reasonable heuristic to avoid "likely known-good" cases. However, it can
only do this when the copy_*_user() helpers are, themselves, inline too.
Using find_vmap_area() requires taking a spinlock. The
check_object_size() helper can call find_vmap_area() when the destination
is in vmap memory. If show_regs() is called in interrupt context, it will
attempt a call to copy_from_user_nmi(), which may call check_object_size()
and then find_vmap_area(). If something in normal context happens to be
in the middle of calling find_vmap_area() (with the spinlock held), the
interrupt handler will hang forever.
The copy_from_user_nmi() call is actually being called with a fixed-size
length, so check_object_size() should never have been called in the first
place. Given the narrow constraints, just replace the
__copy_from_user_inatomic() call with an open-coded version that calls
only into the sanitizers and not check_object_size(), followed by a call
to raw_copy_from_user().
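A minimal sketch of the open-coded copy (access and NMI-safety checks omitted;
assumes the instrument_copy_from_user() hook from include/linux/instrumented.h):

    static unsigned long sketch_copy_from_user_nmi(void *to,
                                                   const void __user *from,
                                                   unsigned long n)
    {
        unsigned long ret;

        instrument_copy_from_user(to, from, n); /* sanitizer hooks only */
        ret = raw_copy_from_user(to, from, n);  /* no check_object_size() */
        return ret;
    }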
set_migratetype_isolate() does not allow isolating MIGRATE_CMA pageblocks
unless it is used for CMA allocation. isolate_single_pageblock() did not
have the same behavior when it is used together with
set_migratetype_isolate() in start_isolate_page_range(). This allows
alloc_contig_range() with migratetype other than MIGRATE_CMA, like
MIGRATE_MOVABLE (used by alloc_contig_pages()), to isolate first and last
pageblock but fail the rest. The failure leads to changing migratetype of
the first and last pageblock to MIGRATE_MOVABLE from MIGRATE_CMA,
corrupting the CMA region. This can happen during gigantic page
allocations.
Like Doug said here:
https://lore.kernel.org/linux-mm/a3363a52-883b-dcd1-b77f-f2bb378d6f2d@gmail.com/T/#u,
for gigantic page allocations, the user would notice no difference,
since the allocation on CMA region will fail as well as it did before.
But it might hurt the performance of device drivers that use CMA, since
CMA region size decreases.
Fix it by passing migratetype into isolate_single_pageblock(), so that
set_migratetype_isolate() used by isolate_single_pageblock() will prevent
the isolation happening.
mm,hwpoison: check mm when killing accessing process
The GHES code calls memory_failure_queue() from IRQ context to queue work
into workqueue and schedule it on the current CPU. Then the work is
processed in memory_failure_work_func() by kworker and calls
memory_failure().
When a page is already poisoned, commit a3f5d80ea401 ("mm,hwpoison: send
SIGBUS with error virutal address") make memory_failure() call
kill_accessing_process() that:
- holds mmap locking of current->mm
- does pagetable walk to find the error virtual address
- and sends SIGBUS to the current process with error info.
However, the mm of kworker is not valid, resulting in a null-pointer
dereference. So check mm when killing the accessing process.
With gigantic pages it may not be true that struct page structures are
contiguous across the entire gigantic page. The nth_page macro is used
here in place of direct pointer arithmetic to correct for this.
Mike said:
: This error could cause addressing exceptions. However, this is only
: possible in configurations where CONFIG_SPARSEMEM &&
: !CONFIG_SPARSEMEM_VMEMMAP. Such a configuration option is rare and
: unknown to be the default anywhere.
mm: prevent page_frag_alloc() from corrupting the memory
A number of drivers call page_frag_alloc() with a fragment's size >
PAGE_SIZE.
In low memory conditions, __page_frag_cache_refill() may fail the order
3 cache allocation and fall back to order 0; In this case, the cache
will be smaller than the fragment, causing memory corruptions.
Prevent this from happening by checking if the newly allocated cache is
large enough for the fragment; if not, the allocation will fail and
page_frag_alloc() will return NULL.
In the "ptr" buffer there appear runs of zero bytes which are aligned
to 16 and whose lengths are multiples of 16.
Linux v5.11 does not have the bug; "git bisect" finds the first bad commit: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Before the commit update_mmu_cache() was called during a call to
filemap_map_pages() as well as finish_fault(). After the commit
finish_fault() lacks it.
Bring back update_mmu_cache() to finish_fault() to fix the bug.
Also call update_mmu_tlb() only when returning VM_FAULT_NOPAGE to more
closely reproduce the code of alloc_set_pte() function that existed before
the commit.
On many platforms update_mmu_cache() is a nop:
x86, see arch/x86/include/asm/pgtable
ARMv6+, see arch/arm/include/asm/tlbflush.h
So, it seems, few users ran into this bug.
mm/huge_memory: use pfn_to_online_page() in split_huge_pages_all()
NULL pointer dereference is triggered when calling thp split via debugfs
on the system with offlined memory blocks. With debug option enabled, the
following kernel messages are printed out:
Yang Shi [Wed, 7 Sep 2022 18:01:44 +0000 (11:01 -0700)]
powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
The IPI broadcast is used to serialize against fast-GUP, but fast-GUP will
move to use RCU instead of disabling local interrupts in fast-GUP. Using
an IPI is the old-styled way of serializing against fast-GUP although it
still works as expected now.
And fast-GUP now fixed the potential race with THP collapse by checking
whether PMD is changed or not. So IPI broadcast in radix pmd collapse
flush is not necessary anymore. But it is still needed for hash TLB.
Yang Shi [Wed, 7 Sep 2022 18:01:43 +0000 (11:01 -0700)]
mm: gup: fix the fast GUP race against THP collapse
Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
sufficient to handle concurrent GUP-fast in all cases, it only handles
traditional IPI-based GUP-fast correctly. On architectures that send an
IPI broadcast on TLB flush, it works as expected. But on the
architectures that do not use IPI to broadcast TLB flush, it may have the
below race:
       CPU A                                    CPU B
  THP collapse                                  fast GUP
                                                gup_pmd_range() <-- see valid pmd
                                                gup_pte_range() <-- work on pte
  pmdp_collapse_flush() <-- clear pmd and flush
  __collapse_huge_page_isolate()
      check page pinned <-- before GUP bump refcount
                                                pin the page
                                                check PTE <-- no change
  __collapse_huge_page_copy()
      copy data to huge page
      ptep_clear()
  install huge pmd for the huge page
                                                return the stale page
  discard the stale page
The race can be fixed by checking whether PMD is changed or not after
taking the page pin in fast GUP, just like what it does for PTE. If the
PMD is changed it means there may be parallel THP collapse, so GUP should
back off.
Also update the stale comment about serializing against fast GUP in
khugepaged.
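A sketch of the added re-check inside gup_pte_range(), mirroring the existing
PTE re-check (context abridged):

    /* after pinning, re-read the PMD; if a concurrent THP collapse
     * changed it, drop the pin and back off */
    if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
        gup_put_folio(folio, 1, flags);
        goto pte_unmap;
    }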
Peilin Ye [Fri, 23 Sep 2022 04:25:51 +0000 (21:25 -0700)]
usbnet: Fix memory leak in usbnet_disconnect()
Currently usbnet_disconnect() unanchors and frees all deferred URBs
using usb_scuttle_anchored_urbs(), which does not free urb->context,
causing a memory leak as reported by syzbot.
Use a usb_get_from_anchor() while loop instead, similar to what we did
in commit 19cfe912c37b ("Bluetooth: btusb: Fix memory leak in
play_deferred"). Also free urb->sg.
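A sketch of the replacement loop, using usb_get_from_anchor() so each deferred
URB's skb (stored in urb->context) and scatterlist can be freed (field names
follow struct usbnet):

    static void sketch_drain_deferred(struct usbnet *dev)
    {
        struct urb *urb;

        while ((urb = usb_get_from_anchor(&dev->deferred))) {
            dev_kfree_skb(urb->context);    /* the deferred skb */
            kfree(urb->sg);
            usb_free_urb(urb);
        }
    }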
Instead of picking the task from the first submitter task, use the
creator task, or, in the case of a disabled ring (IORING_SETUP_R_DISABLED),
the enabling task.
This approach allows a lot of simplification of the logic here. This
removes init logic from the submission path, which can always be a bit
confusing, but also removes the need for locking to write (or read) the
submitter_task.
Users that want to move a ring before submitting can create the ring
disabled and then enable it on the submitting task.
Jan Kara [Thu, 22 Sep 2022 09:09:29 +0000 (11:09 +0200)]
ext4: fixup possible uninitialized variable access in ext4_mb_choose_next_group_cr1()
Variable 'grp' may be left uninitialized if there's no group with
suitable average fragment size (or larger). Fix the problem by
initializing it earlier.
On Tue, Sep 13, 2022 at 05:48:58PM +0100, Russell King (Oracle) wrote:
>What happens if this is built as a module, and the module is loaded,
>binds (and creates the directory), then is removed, and then re-
>inserted? Nothing removes the old directory, so doesn't
>debugfs_create_dir() fail, resulting in subsequent failure to add
>any subsequent debugfs entries?
>
>I don't think this patch should be backported to stable trees until
>this point is addressed.
Revert until a proper fix is available as the original behavior was
better.
Chris Wilson [Wed, 21 Sep 2022 13:52:58 +0000 (15:52 +0200)]
drm/i915/gt: Restrict forced preemption to the active context
When we submit a new pair of contexts to ELSP for execution, we start a
timer by which point we expect the HW to have switched execution to the
pending contexts. If the promotion to the new pair of contexts has not
occurred, we declare the executing context to have hung and force the
preemption to take place by resetting the engine and resubmitting the
new contexts.
This can lead to an unfair situation where almost all of the preemption
timeout is consumed by the first context which just switches into the
second context immediately prior to the timer firing and triggering the
preemption reset (assuming that the timer interrupts before we process
the CS events for the context switch). The second context hasn't yet had
a chance to yield to the incoming ELSP (and send the ACK for the
promotion) and so ends up being blamed for the reset.
If we see that a context switch has occurred since setting the
preemption timeout, but have not yet received the ACK for the ELSP
promotion, rearm the preemption timer and check again. This is
especially significant if the first context was not schedulable and so
we used the shortest timer possible, greatly increasing the chance of
accidentally blaming the second innocent context.
perf tests powerpc: Fix branch stack sampling test to include sanity check for branch filter
Commit b55878c90ab92a24 ("perf test: Add test for branch stack
sampling") added test for branch stack sampling. There is a sanity check
in the beginning to skip the test if the hardware doesn't support branch
stack sampling.
Snippet
<<>>
skip the test if the hardware doesn't support branch stack sampling
perf record -b -o- -B true > /dev/null 2>&1 || exit 2
<<>>
But the testcase also uses branch sample types: save_type, any. If a
platform doesn't support the branch filters used in the test, the testcase
will fail. On powerpc, multiple branch filters are currently not supported,
and hence this test fails there. Fix the sanity check to look at
the support for the branch filters used in this test before proceeding with
the test.
By default, we create two hybrid cache events, one for cpu_core and
another for cpu_atom. But some hybrid hardware cache events are only
available on one CPU PMU. For example, 'L1-dcache-load-misses' is only
available on cpu_core, while 'L1-icache-loads' is only available on
cpu_atom. We need to remove the "not supported" hybrid cache events. By
extending is_event_supported() to a global API and using it to check whether
the hybrid cache events are supported before they are created, we can remove
the "not supported" hybrid cache events.
Before:
# ./perf stat -e L1-dcache-load-misses,L1-icache-loads -a sleep 1
perf print-events: Fix "perf list" can not display the PMU prefix for some hybrid cache events
Some hybrid hardware cache events are only available on one CPU PMU. For
example, 'L1-dcache-load-misses' is only available on cpu_core.
We already support clearly reporting this info in perf list, and the
function worked fine before, but recently the "config" argument of the
is_event_supported() API was changed from "u64" to "unsigned int", which
caused a regression: "perf list" can no longer display the PMU prefix
for some hybrid cache events.
On hybrid systems, the PMU type ID is stored in config[63:32];
defining config as "unsigned int" loses the PMU type ID information,
which caused the regression. The config should be defined as "u64".
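A small illustration of why the argument must stay 64 bits wide (the packing
follows the description above; the helper itself is hypothetical):

    #include <stdint.h>

    static uint64_t sketch_hybrid_config(uint32_t pmu_type, uint32_t cache_event)
    {
        /* truncating this to "unsigned int" silently drops pmu_type */
        return ((uint64_t)pmu_type << 32) | cache_event;
    }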
gpio: mvebu: Fix check for pwm support on non-A8K platforms
The pwm support incompatible with the Armada 80x0/70x0 API is present not
only in Armada 370, but also in Armada XP, 38x and 39x - so basically every
non-A8K platform. Fix the check for pwm support appropriately.
Fixes: 85b7d8abfec7 ("gpio: mvebu: add pwm support for Armada 8K/7K")
Signed-off-by: Pali Rohár <[email protected]>
Signed-off-by: Bartosz Golaszewski <[email protected]>
Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 fixes from Ted Ts'o:
"Regression and bug fixes:
- Performance regression fix from 5.18 on a Raspberry Pi
- Fix extent parsing bug which triggers a BUG_ON when a (corrupted)
extent tree has a non-root node with zero entries.
- Fix a livelock where in the right (wrong) circumstances a large
number of nfsd threads can try to write to a nearly full file
system, and retry for hours(!)"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: limit the number of retries after discarding preallocations blocks
ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0
ext4: use buckets for cr 1 block scan instead of rbtree
ext4: use locality group preallocation for small closed files
ext4: make directory inode spreading reflect flexbg size
ext4: avoid unnecessary spreading of allocations among groups
ext4: make mballoc try target group first even with mb_optimize_scan
Merge tag 'dax-and-nvdimm-fixes-v6.0-final' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull NVDIMM and DAX fixes from Dan Williams:
"A recently discovered one-line fix for devdax that further addresses a
v5.5 regression, and (a bit embarrassing) a small batch of fixes that
have been sitting in my fixes tree for weeks.
The older fixes have soaked in linux-next during that time and address
an fsdax infinite loop and some other minor fixups.
- Fix a infinite loop bug in fsdax
- Fix memory-type detection for devdax (EINJ regression)
- Small cleanups"
* tag 'dax-and-nvdimm-fixes-v6.0-final' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
devdax: Fix soft-reservation memory description
fsdax: Fix infinite loop in dax_iomap_rw()
nvdimm/namespace: drop nested variable in create_namespace_pmem()
ndtest: Cleanup all of blk namespace specific code
pmem: fix a name collision
Merge tag 'i2c-for-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c fixes from Wolfram Sang:
"I2C driver bugfixes for mlxbf and imx, a few documentation fixes after
the rework this cycle, and one hardening for the i2c-mux core"
* tag 'i2c-for-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: mux: harden i2c_mux_alloc() against integer overflows
i2c: mlxbf: Fix frequency calculation
i2c: mlxbf: prevent stack overflow in mlxbf_i2c_smbus_start_transaction()
i2c: mlxbf: incorrect base address passed during io write
Documentation: i2c: fix references to other documents
MAINTAINERS: remove Nehal Shah from AMD MP2 I2C DRIVER
i2c: imx: If pm_runtime_get_sync() returned 1 device access is possible
Jeff LaBundy [Sat, 17 Sep 2022 19:09:22 +0000 (14:09 -0500)]
Input: iqs62x-keys - drop unused device node references
Each call to device/fwnode_get_named_child_node() must be matched
with a call to fwnode_handle_put() once the corresponding node is
no longer in use. This ensures a reference count remains balanced
in the case of dynamic device tree support.
Currently, the driver never calls fwnode_handle_put(). This patch
adds the missing calls.
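A minimal sketch of the balanced reference pattern (the node name used here is
illustrative):

    static int sketch_parse_child(struct device *dev)
    {
        struct fwnode_handle *child;

        child = device_get_named_child_node(dev, "keys");
        if (!child)
            return -ENODEV;

        /* ... read properties from the child node ... */

        fwnode_handle_put(child);       /* drop the reference taken above */
        return 0;
    }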
Dan Williams [Fri, 23 Sep 2022 22:05:56 +0000 (15:05 -0700)]
devdax: Fix soft-reservation memory description
The "hmem" platform-devices that are created to represent the
platform-advertised "Soft Reserved" memory ranges end up inserting a
resource that causes the iomem_resource tree to look like this:
This is because insert_resource() reparents ranges when they completely
intersect an existing range.
This matters because code that uses region_intersects() to scan for a
given IORES_DESC will only check that top-level 'hmem.0' resource and
not the 'Soft Reserved' descendant.
So, to support EINJ (via einj_error_inject()) to inject errors into
memory hosted by a dax-device, be sure to describe the memory as
IORES_DESC_SOFT_RESERVED. This is a follow-on to:
commit b13a3e5fd40b ("ACPI: APEI: Fix _EINJ vs EFI_MEMORY_SP")
...that fixed EINJ support for "Soft Reserved" ranges in the first
instance.
Merge tag 'kbuild-fixes-v6.0-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- Fix build error for the combination of SYSTEM_TRUSTED_KEYRING=y and
X509_CERTIFICATE_PARSER=m
- Fix DEBUG_INFO_SPLIT to generate debug info for GCC 11+ and Clang 12+
- Revive debug info for assembly files
- Remove unused code
* tag 'kbuild-fixes-v6.0-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
Makefile.debug: re-enable debug info for .S files
Makefile.debug: set -g unconditional on CONFIG_DEBUG_INFO_SPLIT
certs: make system keyring depend on built-in x509 parser
Kconfig: remove unused function 'menu_get_root_menu'
scripts/clang-tools: remove unused module
Merge tag 'pm-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management fixes from Rafael Wysocki:
"These fix an uninitialized variable usage in the operating performance
points code and add missing DT bindings for it.
Specifics:
- Fix uninitialized variable usage in dev_pm_opp_config_clks_simple()
(Christophe JAILLET)
- Add missing OPP DT properties (Rob Herring)"
* tag 'pm-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
dt-bindings: opp: Add missing (unevaluated|additional)Properties on child nodes
OPP: Fix an un-initialized variable usage