Linus Torvalds [Thu, 29 Nov 2018 17:56:00 +0000 (09:56 -0800)]
Merge tag 'fixes_for_v4.20-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
Pull ext2 and udf fixes from Jan Kara:
"Three small ext2 and udf fixes"
* tag 'fixes_for_v4.20-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
ext2: fix potential use after free
ext2: initialize opts.s_mount_opt as zero before using it
udf: Allow mounting volumes with incorrect identification strings
Pan Bian [Thu, 22 Nov 2018 02:07:12 +0000 (10:07 +0800)]
pvcalls-front: fixes incorrect error handling
kfree() is incorrectly used to release the pages allocated by
__get_free_page() and __get_free_pages(). Use the matching deallocators,
i.e. free_page() and free_pages(), respectively.
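A minimal sketch of the pairing rule being enforced here (hypothetical function and sizes, not the pvcalls-front code): memory obtained from the page allocator must go back through free_page()/free_pages(), never kfree().

  #include <linux/gfp.h>

  static int example_alloc_rings(void)
  {
          unsigned long ring = __get_free_pages(GFP_KERNEL, 2); /* order-2 area */
          unsigned long status = __get_free_page(GFP_KERNEL);

          if (!ring || !status)
                  goto err;

          /* ... use the pages ... */
          return 0;

  err:
          /*
           * Wrong: kfree((void *)ring); kfree((void *)status);
           * Right: pair each allocation with its matching deallocator.
           * Both calls below are safe when the address is 0.
           */
          free_pages(ring, 2);
          free_page(status);
          return -ENOMEM;
  }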
That commit unintentionally broke Xen balloon memory hotplug with
"hotplug_unpopulated" set to 1. Since the "System RAM" resource
got assigned under a new "Unusable memory" resource in the IO/Mem tree,
any attempt to online this memory would fail due to the general kernel
restriction that "System RAM" resources may only be at the first level.
The original issue that the commit tried to work around, fa564ad96366
("x86/PCI: Enable a 64bit BAR on AMD Family 15h (Models 00-1f, 30-3f,
60-7f)"), was also amended by the later 03a551734 ("x86/PCI: Move
and shrink AMD 64-bit window to avoid conflict"), which made the
original fix to Xen ballooning unnecessary.
xen: xlate_mmu: add missing header to fix 'W=1' warning
Add a missing header, otherwise the compiler warns about a missing prototype:
drivers/xen/xlate_mmu.c:183:5: warning: no previous prototype for 'xen_xlate_unmap_gfn_range' [-Wmissing-prototypes]
int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
^~~~~~~~~~~~~~~~~~~~~~~~~
Juergen Gross [Fri, 23 Nov 2018 16:24:51 +0000 (17:24 +0100)]
xen/x86: add diagnostic printout to xen_mc_flush() in case of error
Failure of an element of a Xen multicall is signalled via a WARN()
only if the kernel is compiled with MC_DEBUG. Otherwise it is impossible
to know which element failed and why it did so.
Change that by printing the related information even without MC_DEBUG,
even if in a somewhat limited form (e.g. without information about which
caller produced the failing element).
Move the printing out of the switch statement in order to have the
same information for a single call.
Masami Hiramatsu [Thu, 29 Nov 2018 05:39:33 +0000 (14:39 +0900)]
arm64: ftrace: Fix to enable syscall events on arm64
Since commit 4378a7d4be30 ("arm64: implement syscall wrappers")
introduced the "__arm64_" prefix to all syscall wrapper symbols in
sys_call_table, the syscall tracer can not find the corresponding
metadata from the syscall name. As a result, we have no syscall
ftrace events on arm64 kernels, and some bpf testcases fail
on arm64.
To fix this issue, introduce a custom
arch_syscall_match_sym_name() which skips the first 8 bytes when
comparing the syscall and symbol names.
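A minimal sketch of such an override, assuming the 8-byte "__arm64_" prefix (the in-tree definition lives in the arm64 ftrace header and may differ in detail):

  #include <linux/string.h>
  #include <linux/types.h>

  #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME

  static inline bool arch_syscall_match_sym_name(const char *sym,
                                                 const char *name)
  {
          /*
           * All arm64 syscall wrappers are prefixed with "__arm64_", so skip
           * the first 8 bytes of the symbol before comparing it with the
           * syscall name used by the tracing metadata.
           */
          return !strcmp(sym + 8, name);
  }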
Catalin Marinas [Mon, 19 Nov 2018 11:27:28 +0000 (11:27 +0000)]
arm64: Add workaround for Cortex-A76 erratum 1286807
On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual address
for a cacheable mapping of a location is being accessed by a core while
another core is remapping the virtual address to a new physical page
using the recommended break-before-make sequence, then under very rare
circumstances TLBI+DSB completes before a read using the translation
being invalidated has been observed by other observers. The workaround
repeats the TLBI+DSB operation and is shared with the workaround for
Qualcomm Falkor erratum 1009.
Paul Moore [Wed, 28 Nov 2018 17:57:33 +0000 (12:57 -0500)]
selinux: add support for RTM_NEWCHAIN, RTM_DELCHAIN, and RTM_GETCHAIN
Commit 32a4f5ecd738 ("net: sched: introduce chain object to uapi")
added new RTM_* definitions without properly updating SELinux; this
patch adds the necessary SELinux support.
While there was a BUILD_BUG_ON() in the SELinux code to protect from
exactly this case, it was bypassed in the broken commit. In order to
hopefully prevent this from happening in the future, add additional
comments which provide some instructions on how to resolve the
BUILD_BUG_ON() failures.
Richard Genoud [Tue, 27 Nov 2018 16:06:35 +0000 (17:06 +0100)]
dmaengine: at_hdmac: fix module unloading
of_dma_controller_free() was not called on module unloading.
This led to a soft lockup:
watchdog: BUG: soft lockup - CPU#0 stuck for 23s!
Modules linked in: at_hdmac [last unloaded: at_hdmac]
when of_dma_request_slave_channel() tried to call ofdma->of_dma_xlate().
Richard Genoud [Tue, 27 Nov 2018 16:06:34 +0000 (17:06 +0100)]
dmaengine: at_hdmac: fix memory leak in at_dma_xlate()
The leak was found when opening/closing a serial port a great number of
times, increasing kmalloc-32 in slabinfo.
Each time the port was opened, dma_request_slave_channel() was called.
Then, in at_dma_xlate(), atslave was allocated with devm_kzalloc() and
never freed. (Well, it was freed at module unload, but that's not what we
want.)
So kzalloc() is better suited for the job here, since the allocation has to
be freed in atc_free_chan_resources().
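A hedged sketch of the ownership change (hypothetical, reduced structures; the real driver deals with struct at_dma_slave and the dmaengine callbacks): allocate per request with kzalloc() and free the allocation when the channel resources are released, instead of letting devm hold it until driver detach.

  #include <linux/device.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct example_slave {
          struct device *dma_dev;
          u32 cfg;
  };

  /* Called for each translation request; one allocation per request. */
  static struct example_slave *example_xlate_alloc(struct device *dma_dev, u32 cfg)
  {
          struct example_slave *slave = kzalloc(sizeof(*slave), GFP_KERNEL);

          if (!slave)
                  return NULL;
          slave->dma_dev = dma_dev;
          slave->cfg = cfg;
          return slave;
  }

  /*
   * Called when the channel resources are released, so repeated open/close
   * cycles no longer accumulate kmalloc-32 objects.
   */
  static void example_free_chan_resources(struct example_slave *slave)
  {
          kfree(slave);
  }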
Merge tag 'fixes-for-v4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-linus
Felipe writes:
USB: fixes for v4.20-rc4
In this second set of fixes for the current -rc cycle, we have some
regression fixes for the old omap_udc driver done by Aaro Koskinen.
We're also reverting an old patch on dwc3 which is, now, known to
break USB certification in some cases.
We have a fix on u_ether for an unsafe list iteration.
* tag 'fixes-for-v4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb:
usb: gadget: u_ether: fix unsafe list iteration
USB: omap_udc: fix rejection of out transfers when DMA is used
USB: omap_udc: fix USB gadget functionality on Palm Tungsten E
USB: omap_udc: fix omap_udc_start() on 15xx machines
USB: omap_udc: fix crashes on probe error and module removal
USB: omap_udc: use devm_request_irq()
Revert "usb: dwc3: gadget: skip Set/Clear Halt when invalid"
Dave Airlie [Thu, 29 Nov 2018 00:11:02 +0000 (10:11 +1000)]
Merge tag 'drm-misc-fixes-2018-11-28-1' of git://anongit.freedesktop.org/drm/drm-misc into drm-fixes
- mst: Don't try to validate ports while destroying them (Lyude)
- Revert: Don't try to validate ports while destroying them (Lyude)
- core: Don't set device to master unless set_master succeeds (Sergio)
- meson: Do vblank_on/off on enable/disable (Neil)
- meson: Use fast_io regmap option to avoid sleeping in irq ctx (Lyude)
- meson: Don't walk off the end of the OSD EOTF LUTs (Lyude)
Y.C. Chen [Thu, 22 Nov 2018 03:56:28 +0000 (11:56 +0800)]
drm/ast: fixed reading monitor EDID not stable issue
v1: over-sample data to increase the stability with some specific monitors
v2: refine to avoid infinite loop
v3: remove unnecessary "volatile" declaration
Lyude Paul [Wed, 28 Nov 2018 21:00:05 +0000 (16:00 -0500)]
Revert "drm/dp_mst: Skip validating ports during destruction, just ref"
This reverts commit:
c54c7374ff44 ("drm/dp_mst: Skip validating ports during destruction, just ref")
ugh.
In drm_dp_destroy_connector_work(), we have a pretty good chance of
freeing the actual struct drm_dp_mst_port. However, after destroying
things we send a hotplug through (*mgr->cbs->hotplug)(mgr) which is
where the problems start.
For i915, this calls all the way down to the fbcon probing helpers,
which start trying to access the port in a modeset.
[ 45.062001] ==================================================================
[ 45.062112] BUG: KASAN: use-after-free in ex_handler_refcount+0x146/0x180
[ 45.062196] Write of size 4 at addr ffff8882b4b70968 by task kworker/3:1/53
[ 45.326312] Memory state around the buggy address:
[ 45.329085] ffff8882b4b70800: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 45.331845] ffff8882b4b70880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 45.334584] >ffff8882b4b70900: fc fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb
[ 45.337302] ^
[ 45.340061] ffff8882b4b70980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 45.342910] ffff8882b4b70a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 45.345748] ==================================================================
So, this definitely isn't a fix that we want. That being said, there's
no real easy fix for this problem because of some of the catch-22s of
the MST helpers' current design. For starters, we always need to validate
a port with drm_dp_get_validated_port_ref(), but validation relies on
the lifetime of the port in the actual topology. So once the port is
gone, it can't be validated again.
If we were to try to make the payload helpers not use port validation,
then we'd cause another problem: if the port isn't validated, it could
be freed and we'd just start causing more KASAN issues. There are
already hacks that attempt to workaround this in
drm_dp_mst_destroy_connector_work() by re-initializing the kref so that
it can be used again and its memory can be freed once the VCPI helpers
finish removing the port's respective payloads. But none of these really
do anything helpful, since the port still can't be validated once it's
gone from the topology. Also, that workaround is immensely confusing to
read through.
What really needs to be done in order to fix this is to teach DRM how to
track the lifetime of the structs for MST ports and branch devices
separately from their lifetime in the actual topology. Simply put, this
means having two different krefs: one that removes the port/branch device
from the topology, and one that finally calls kfree(). This would let us
simplify things, since we'd now be able to keep ports around without
having to keep them in the topology at the same time, which is exactly
what we need in order to teach our VCPI helpers to only validate ports
when it's actually necessary without running the risk of trying to use
unallocated memory.
Such a fix is on its way, but for now let's play it safe and just
revert this. If this bug has been around for well over a year, we can
wait a little while to get an actual proper fix here.
Linus Torvalds [Wed, 28 Nov 2018 20:51:10 +0000 (12:51 -0800)]
Merge tag 'xtensa-20181128' of git://github.com/jcmvbkbc/linux-xtensa
Pull Xtensa fixes from Max Filippov:
- fix kernel exception on userspace access to a currently disabled
coprocessor
- fix coprocessor data saving/restoring in configurations with multiple
coprocessors
- fix ptrace access to coprocessor data on configurations with multiple
coprocessors with high alignment requirements
* tag 'xtensa-20181128' of git://github.com/jcmvbkbc/linux-xtensa:
xtensa: fix coprocessor part of ptrace_{get,set}xregs
xtensa: fix coprocessor context offset definitions
xtensa: enable coprocessors that are being flushed
shaoyunl [Thu, 22 Nov 2018 16:45:24 +0000 (11:45 -0500)]
drm/amdgpu: Add delay after enable RLC ucode
The driver shouldn't try to access any GFX registers until the RLC is idle.
During testing, it took 12 seconds for the RLC to clear the BUSY bit in the
RLC_GPM_STAT register, which is unacceptable for the driver.
According to the RLC engineers, the RLC ucode takes fewer than 10,000 GFXCLK
cycles to finish its critical section. At the lowest engine clock setting of
300 MHz (the default from the VBIOS), a 50 us delay is enough.
This commit fixes the hang seen after the RLC introduced the workaround for
XGMI, which requires more cycles to set up more registers than normal.
Felix Kuehling [Sun, 25 Nov 2018 04:25:04 +0000 (23:25 -0500)]
drm/amdgpu: Avoid endless loop in GPUVM fragment processing
Don't bounce back to the root level for fragment processing, because
huge pages are not supported at that level. This is unlikely to happen
with the default VM size on Vega, but can be exposed by limiting the
VM size with the amdgpu.vm_size module parameter.
David S. Miller [Wed, 28 Nov 2018 19:33:35 +0000 (11:33 -0800)]
Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue
Jeff Kirsher says:
====================
Intel Wired LAN Driver Fixes 2018-11-28
This series contains fixes to igb, ixgbe and i40e.
Yunjian Wang from Huawei fixes a case where a variable could
potentially be NULL before it is used.
Lihong fixes an i40e issue which goes back to 4.17 kernels, where
deleting any of the MAC filters was causing incorrect syncing for
the PF.
Josh Elsasser caught that there were missing enum values in the link
capabilities for x550 devices, which was preventing link for 1000BaseLX
SFP modules.
Jan fixes the function header comments for XSK methods.
====================
Julian Wiedmann [Wed, 28 Nov 2018 15:20:50 +0000 (16:20 +0100)]
s390/qeth: fix length check in SNMP processing
The response for a SNMP request can consist of multiple parts, which
the cmd callback stages into a kernel buffer until all parts have been
received. If the callback detects that the staging buffer provides
insufficient space, it bails out with error.
This processing is buggy for the first part of the response - while it
initially checks for a length of 'data_len', it later copies an
additional amount of 'offsetof(struct qeth_snmp_cmd, data)' bytes.
Fix the calculation of 'data_len' for the first part of the response.
This also nicely cleans up the memcpy code.
Pan Bian [Wed, 28 Nov 2018 07:30:24 +0000 (15:30 +0800)]
net: hisilicon: remove unexpected free_netdev
The net device ndev is freed via free_netdev() when registering the device
fails. The control flow then jumps to the error handling code block, where
ndev is used and freed again, resulting in a use-after-free bug.
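A hedged sketch of the error-path pattern being removed (hypothetical probe function, not the actual driver code): ndev must be freed exactly once, on the common error path.

  #include <linux/etherdevice.h>
  #include <linux/netdevice.h>

  static int example_probe(void)
  {
          struct net_device *ndev = alloc_etherdev(0);
          int ret;

          if (!ndev)
                  return -ENOMEM;

          ret = register_netdev(ndev);
          if (ret) {
                  /*
                   * Wrong: calling free_netdev(ndev) here *and* again at
                   * err_free below is the double free / use-after-free.
                   */
                  goto err_free;
          }
          return 0;

  err_free:
          free_netdev(ndev);      /* freed exactly once */
          return ret;
  }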
Pan Bian [Wed, 28 Nov 2018 06:53:19 +0000 (14:53 +0800)]
rapidio/rionet: do not free skb before reading its length
The skb is freed via dev_kfree_skb_any(); however, skb->len is read
afterwards. This may result in a use-after-free bug.
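A hedged illustration of the pattern (hypothetical helper, not the rionet code): capture skb->len before the skb is handed to dev_kfree_skb_any().

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static void example_tx_done(struct net_device *ndev, struct sk_buff *skb)
  {
          unsigned int len = skb->len;    /* read while the skb is still valid */

          dev_kfree_skb_any(skb);

          ndev->stats.tx_packets++;
          ndev->stats.tx_bytes += len;    /* no use-after-free */
  }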
Fixes: e6161d64263 ("rapidio/rionet: rework driver initialization and removal")
Signed-off-by: Pan Bian <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Linus Torvalds [Wed, 28 Nov 2018 16:38:20 +0000 (08:38 -0800)]
Merge tag 'for-4.20-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
Pull btrfs fixes from David Sterba:
"Some of these bugs are being hit during testing so we'd like to get
them merged, otherwise there are usual stability fixes for stable
trees"
* tag 'for-4.20-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
btrfs: relocation: set trans to be NULL after ending transaction
Btrfs: fix race between enabling quotas and subvolume creation
Btrfs: send, fix infinite loop due to directory rename dependencies
Btrfs: ensure path name is null terminated at btrfs_control_ioctl
Btrfs: fix rare chances for data loss when doing a fast fsync
btrfs: Always try all copies when reading extent buffers
Linus Torvalds [Wed, 28 Nov 2018 16:33:55 +0000 (08:33 -0800)]
Merge tag 'spi-fix-v4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"A few driver specific fixes here, nothing big or that stands out for
anyone other than the driver users.
The omap2-mcspi fix is for issues that started showing up with a
defconfig change in this release that turns cpuidle on by
default"
* tag 'spi-fix-v4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: omap2-mcspi: Add missing suspend and resume calls
spi: mediatek: use correct mata->xfer_len when in fifo transfer
spi: uniphier: fix incorrect property items
Josh Elsasser [Sat, 24 Nov 2018 20:57:33 +0000 (12:57 -0800)]
ixgbe: recognize 1000BaseLX SFP modules as 1Gbps
Add the two 1000BaseLX enum values to the X550's check for 1Gbps modules,
allowing the core driver code to establish a link over this SFP type.
This is done by the out-of-tree driver but the fix wasn't in mainline.
Fixes: e23f33367882 ("ixgbe: Fix 1G and 10G link stability for X550EM_x SFP+")
Fixes: 6a14ee0cfb19 ("ixgbe: Add X550 support function pointers")
Signed-off-by: Josh Elsasser <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Linus Torvalds [Wed, 28 Nov 2018 16:29:18 +0000 (08:29 -0800)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"Bugfixes, many of them reported by syzkaller and mostly predating the
merge window"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
kvm: svm: Ensure an IBPB on all affected CPUs when freeing a vmcb
kvm: mmu: Fix race in emulated page table writes
KVM: nVMX: vmcs12 revision_id is always VMCS12_REVISION even when copied from eVMCS
KVM: nVMX: Verify eVMCS revision id match supported eVMCS version on eVMCS VMPTRLD
KVM: nVMX/nSVM: Fix bug which sets vcpu->arch.tsc_offset to L1 tsc_offset
x86/kvm/vmx: fix old-style function declaration
KVM: x86: fix empty-body warnings
KVM: VMX: Update shared MSRs to be saved/restored on MSR_EFER.LMA changes
KVM: x86: Fix kernel info-leak in KVM_HC_CLOCK_PAIRING hypercall
KVM: nVMX: Fix kernel info-leak when enabling KVM_CAP_HYPERV_ENLIGHTENED_VMCS more than once
svm: Add mutex_lock to protect apic_access_page_done on AMD systems
KVM: X86: Fix scan ioapic use-before-initialization
KVM: LAPIC: Fix pv ipis use-before-initialization
KVM: VMX: re-add ple_gap module parameter
KVM: PPC: Book3S HV: Fix handling for interrupted H_ENTER_NESTED
Lihong Yang [Wed, 21 Nov 2018 17:15:37 +0000 (09:15 -0800)]
i40e: Fix deletion of MAC filters
In the __i40e_del_filter function, the PF state flag
__I40E_MACVLAN_SYNC_PENDING is wrongly set on the VSI. Deleting any of the
MAC filters has thus caused incorrect syncing for the PF. Fix it by setting
this state flag on the intended PF.
cachefiles: Fix page leak in cachefiles_read_backing_file while vmscan is active
[Description]
In a heavily loaded system where the system pagecache is nearing memory
limits and fscache is enabled, pages can be leaked by fscache while trying
to read pages from the cachefiles backend. This can happen because two
applications can be reading the same page from a single mount, and two
threads can be trying to read the backing page at the same time. This
results in one of the threads finding that a page for the backing file or
netfs file is already in the radix tree. During the error handling,
cachefiles does not clean up the reference on the backing page, leading to
a page leak.
[Fix]
The fix is straightforward: decrement the reference when the error is
encountered.
[dhowells: Note that I've removed the clearance and put of newpage as
they aren't attested in the commit message and don't appear to actually
achieve anything since a new page is only allocated if newpage!=NULL and
any residual new page is cleared before returning.]
[Testing]
I have tested the fix using following method for 12+ hrs.
1) mkdir -p /mnt/nfs ; mount -o vers=3,fsc <server_ip>:/export /mnt/nfs
2) create 10000 files of 2.8MB each in the NFS mount.
3) start a thread to simulate heavy VM pressure
(while true ; do echo 3 > /proc/sys/vm/drop_caches ; sleep 1 ; done)&
4) start multiple parallel reader for data set at same time
find /mnt/nfs -type f | xargs -P 80 cat > /dev/null &
find /mnt/nfs -type f | xargs -P 80 cat > /dev/null &
find /mnt/nfs -type f | xargs -P 80 cat > /dev/null &
..
..
find /mnt/nfs -type f | xargs -P 80 cat > /dev/null &
find /mnt/nfs -type f | xargs -P 80 cat > /dev/null &
5) finally check using 'cat /proc/fs/fscache/stats | grep -i pages',
'free -h', 'cat /proc/meminfo' and 'page-types -r -b lru'
to ensure all pages are freed.
Frieder Schrempf [Tue, 27 Nov 2018 07:44:52 +0000 (07:44 +0000)]
mtd: nand: Fix memory allocation in nanddev_bbt_init()
Fix the size of the buffer allocated to store the in-memory BBT.
This bug was previously hidden by a different bug, that was fixed in
commit d098093ba06e ("mtd: nand: Fix nanddev_neraseblocks()").
kiran.modukuri [Mon, 26 Nov 2018 15:41:48 +0000 (15:41 +0000)]
fscache: Fix race in fscache_op_complete() due to split atomic_sub & read
The code in fscache_retrieval_complete is using atomic_sub followed by an
atomic_read:
atomic_sub(n_pages, &op->n_pages);
if (atomic_read(&op->n_pages) <= 0)
fscache_op_complete(&op->op, true);
This causes two threads doing a decrement of n_pages to race with each
other, both seeing op->n_pages reach 0 at the same time, and they end up
calling fscache_op_complete() in both threads, leading to an assertion
failure.
Fix this by using atomic_sub_return_relaxed() instead of two calls. Note
that I'm using 'relaxed' rather than, say, 'release' as there aren't
multiple variables that appear to need ordering across the release.
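A sketch of the corrected helper under that assumption (the in-tree version lives in the fscache headers and may differ slightly): a single atomic returns the new value, so only one caller can observe the transition to zero.

  #include <linux/atomic.h>
  #include <linux/fscache-cache.h>

  static inline void example_retrieval_complete(struct fscache_retrieval *op,
                                                int n_pages)
  {
          /* One atomic op: no window between the subtraction and the read. */
          if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0)
                  fscache_op_complete(&op->op, true);
  }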
David Howells [Tue, 27 Nov 2018 16:34:55 +0000 (16:34 +0000)]
cachefiles: Fix an assertion failure when trying to update a failed object
If cachefiles gets an error other than ENOENT when trying to look up an
object in the cache (in this case, EACCES), the object state machine will
eventually transition to the DROP_OBJECT state.
This state invokes fscache_drop_object() which tries to sync the auxiliary
data with the cache (this is done lazily since commit 402cb8dda949d) on an
incomplete cache object struct.
The problem comes when cachefiles_update_object_xattr() is called to
rewrite the xattr holding the data. There's an assertion there that the
cache object points to a dentry as we're going to update its xattr. The
assertion trips, however, as the dentry didn't get set.
Fix the problem by skipping the update in cachefiles if the object doesn't
refer to a dentry. A better way to do it could be to skip the update from
the DROP_OBJECT state handler in fscache, but that might deny the cache the
opportunity to update intermediate state.
If this error occurs, the kernel log includes lines that look like the
following:
Note that there are actually two issues here: (1) EACCES happened on a
cache object and (2) an oops occurred. I think that the second is a
consequence of the first (it certainly looks like it ought to be). This
patch only deals with the second.
Fixes: 402cb8dda949 ("fscache: Attach the index key and aux data to the cookie")
Reported-by: Zhibin Li <[email protected]>
Signed-off-by: David Howells <[email protected]>
Thomas Gleixner [Sun, 25 Nov 2018 18:33:55 +0000 (19:33 +0100)]
x86/speculation: Add seccomp Spectre v2 user space protection mode
If 'prctl' mode of user space protection from spectre v2 is selected
on the kernel command-line, STIBP and IBPB are applied on tasks which
restrict their indirect branch speculation via prctl.
SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
makes sense to prevent spectre v2 user space to user space attacks as
well.
The Intel mitigation guide documents how STIBP works:
Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
prevents the predicted targets of indirect branches on any logical
processor of that core from being controlled by software that executes
(or executed previously) on another logical processor of the same core.
Ergo setting STIBP protects the task itself from being attacked from a task
running on a different hyper-thread and protects the tasks running on
different hyper-threads from being attacked.
While the document suggests that the branch predictors are shielded between
the logical processors, the observed performance regressions suggest that
STIBP simply disables the branch predictor more or less completely. Of
course the document wording is vague, but the fact that there is also no
requirement for issuing IBPB when STIBP is used points clearly in that
direction. The kernel still issues IBPB even when STIBP is used until Intel
clarifies the whole mechanism.
IBPB is issued when the task switches out, so malicious sandbox code cannot
mistrain the branch predictor for the next user space task on the same
logical processor.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:53 +0000 (19:33 +0100)]
x86/speculation: Add prctl() control for indirect branch speculation
Add the PR_SPEC_INDIRECT_BRANCH option for the PR_GET_SPECULATION_CTRL and
PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
indirect branch speculation via STIBP and IBPB.
Invocations:
Check indirect branch speculation status with
- prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
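A hedged userspace sketch of these invocations (the constants come from the uapi prctl header; the fallback defines are only for illustration in case the installed headers predate this interface):

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SPEC_INDIRECT_BRANCH
  #define PR_GET_SPECULATION_CTRL 52
  #define PR_SET_SPECULATION_CTRL 53
  #define PR_SPEC_INDIRECT_BRANCH 1
  #define PR_SPEC_DISABLE         (1UL << 2)
  #endif

  int main(void)
  {
          /* Query the current indirect branch speculation state of this task. */
          int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);

          if (state < 0)
                  perror("PR_GET_SPECULATION_CTRL");
          else
                  printf("indirect branch speculation state: 0x%x\n", state);

          /* Disable indirect branch speculation for this task (requests STIBP/IBPB). */
          if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                    PR_SPEC_DISABLE, 0, 0) < 0)
                  perror("PR_SET_SPECULATION_CTRL");

          return 0;
  }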
Thomas Gleixner [Sun, 25 Nov 2018 18:33:52 +0000 (19:33 +0100)]
x86/speculation: Prepare arch_smt_update() for PRCTL mode
The upcoming fine grained per task STIBP control needs to be updated on CPU
hotplug as well.
Split out the code which controls the strict mode so the prctl control code
can be added later. Mark the SMP function call argument __unused while at it.
Thomas Gleixner [Wed, 28 Nov 2018 09:56:57 +0000 (10:56 +0100)]
x86/speculation: Prevent stale SPEC_CTRL msr content
The seccomp speculation control operates on all tasks of a process, but
only the current task of a process can update the MSR immediately. For the
other threads the update is deferred to the next context switch.
This creates the following situation with Process A and B:
Process A task 2 and Process B task 1 are pinned on CPU1. Process A task 2
does not have the speculation control TIF bit set. Process B task 1 has the
speculation control TIF bit set.
    CPU0                            CPU1
                                    MSR bit is set
                                    ProcB.T1 schedules out
                                    ProcA.T2 schedules in
                                    MSR bit is cleared
    ProcA.T1
      seccomp_update()
        set TIF bit on ProcA.T2
                                    ProcB.T1 schedules in
                                    MSR is not updated        <-- FAIL
This happens because the context switch code tries to avoid the MSR update
if the speculation control TIF bits of the incoming and the outgoing task
are the same. In the worst case ProcB.T1 and ProcA.T2 are the only tasks
scheduling back and forth on CPU1, which keeps the MSR stale forever.
In theory this could be remedied by IPIs, but chasing the remote task which
could be migrated is complex and full of races.
The straightforward solution is to avoid the asynchronous update of the TIF
bit and defer it to the next context switch. The speculation control state
is stored in task_struct::atomic_flags by the prctl and seccomp updates
already.
Add a new TIF_SPEC_FORCE_UPDATE bit and set this after updating the
atomic_flags. Check the bit on context switch and force a synchronous
update of the speculation control if set. Use the same mechanism for
updating the current task.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:51 +0000 (19:33 +0100)]
x86/speculation: Split out TIF update
The update of the TIF_SSBD flag and the conditional speculation control MSR
update is done in the ssb_prctl_set() function directly. The upcoming prctl
support for controlling indirect branch speculation via STIBP needs the
same mechanism.
Split the code out and make it reusable. Reword the comment about updates
for other tasks.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:49 +0000 (19:33 +0100)]
x86/speculation: Prepare for conditional IBPB in switch_mm()
The IBPB speculation barrier is issued from switch_mm() when the kernel
switches to a user space task with a different mm than the user space task
which ran last on the same CPU.
An additional optimization is to avoid IBPB when the incoming task can be
ptraced by the outgoing task. This optimization only works when switching
directly between two user space tasks. When switching from a kernel task to
a user space task the optimization fails because the previous task cannot
be accessed anymore. So for quite some scenarios the optimization is just
adding overhead.
The upcoming conditional IBPB support will issue IBPB only for user space
tasks which have the TIF_SPEC_IB bit set. This requires to handle the
following cases:
1) Switch from a user space task (potential attacker) which has
TIF_SPEC_IB set to a user space task (potential victim) which has
TIF_SPEC_IB not set.
2) Switch from a user space task (potential attacker) which has
TIF_SPEC_IB not set to a user space task (potential victim) which has
TIF_SPEC_IB set.
This needs to be optimized for the case where the IBPB can be avoided when
only kernel threads ran in between user space tasks which belong to the
same process.
The current check whether two tasks belong to the same context is using the
tasks' context ids. While correct, it's simpler to use the mm pointer because
it allows to mangle the TIF_SPEC_IB bit into it. The context id based
mechanism requires extra storage, which creates worse code.
When a task is scheduled out its TIF_SPEC_IB bit is mangled as bit 0 into
the per CPU storage which is used to track the last user space mm which was
running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of
the incoming task to make the decision whether IBPB needs to be issued or
not to cover the two cases above.
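A hedged sketch of that decision (hypothetical names; the real code keeps the mangled value in per-CPU TLB state): fold the TIF_SPEC_IB state into an unused low bit of the mm pointer and issue the barrier only when the mangled values differ and at least one side asked for protection.

  #include <linux/types.h>

  struct mm_struct;

  #define EXAMPLE_LAST_MM_IBPB 0x1UL   /* low bit is unused in the mm pointer */

  static inline unsigned long example_mm_mangle(struct mm_struct *mm, bool spec_ib)
  {
          return (unsigned long)mm | (spec_ib ? EXAMPLE_LAST_MM_IBPB : 0);
  }

  static inline bool example_needs_ibpb(unsigned long prev_mm_mangled,
                                        unsigned long next_mm_mangled)
  {
          /*
           * Flush when the (mangled) mm changes and at least one of the two
           * tasks has TIF_SPEC_IB set; kernel threads in between leave the
           * stored value untouched, so they don't force extra barriers.
           */
          return prev_mm_mangled != next_mm_mangled &&
                 ((prev_mm_mangled | next_mm_mangled) & EXAMPLE_LAST_MM_IBPB);
  }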
As conditional IBPB is going to be the default, remove the dubious ptrace
check for the IBPB always case and simply issue IBPB always when the
process changes.
Move the storage to a different place in the struct as the original one
created a hole.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:48 +0000 (19:33 +0100)]
x86/speculation: Avoid __switch_to_xtra() calls
The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke
__switch_to_xtra() when:
- CONFIG_SMP is disabled
- The conditional STIBP mode is disabled
The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask
checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the
only set bit in the work masks.
Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at
run time when the static key controlling the conditional STIBP mode is
disabled.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:47 +0000 (19:33 +0100)]
x86/process: Consolidate and simplify switch_to_xtra() code
Move the conditional invocation of __switch_to_xtra() into an inline
function so the logic can be shared between 32 and 64 bit.
Remove the handthrough of the TSS pointer and retrieve the pointer directly
in the bitmap handling function. Use this_cpu_ptr() instead of the
per_cpu() indirection.
This is a preparatory change so integration of conditional indirect branch
speculation optimization happens only in one place.
Tim Chen [Sun, 25 Nov 2018 18:33:46 +0000 (19:33 +0100)]
x86/speculation: Prepare for per task indirect branch speculation control
To avoid the overhead of STIBP always on, it's necessary to allow per task
control of STIBP.
Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
SMT is active and flag evaluation is enabled by the speculation control
code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
guest/host switch works properly.
This has no effect because TIF_SPEC_IB cannot be set yet and the static key
which controls evaluation is off. Preparatory patch for adding the control
code.
[ tglx: Simplify the context switch logic and make the TIF evaluation
depend on SMP=y and on the static key controlling the conditional
update. Rename it to TIF_SPEC_IB because it controls both STIBP and
IBPB ]
Thomas Gleixner [Sun, 25 Nov 2018 18:33:45 +0000 (19:33 +0100)]
x86/speculation: Add command line control for indirect branch speculation
Add command line control for user space indirect branch speculation
mitigations. The new option is: spectre_v2_user=
The initial options are:
- on: Unconditionally enabled
- off: Unconditionally disabled
- auto: Kernel selects mitigation (default off for now)
When the spectre_v2= command line argument is either 'on' or 'off' this
implies that the application to application control follows that state even
if a contradicting spectre_v2_user= argument is supplied.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:40 +0000 (19:33 +0100)]
x86/l1tf: Show actual SMT state
Use the now exposed real SMT state, not the SMT sysfs control knob
state. This reflects the state of the system when the mitigation status is
queried.
This does not change the warning in the VMX launch code. There the
dependency on the control knob makes sense because siblings could be
brought online anytime after launching the VM.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:39 +0000 (19:33 +0100)]
x86/speculation: Rework SMT state change
arch_smt_update() is only called when the sysfs SMT control knob is
changed. This means that when SMT is enabled in the sysfs control knob the
system is considered to have SMT active even if all siblings are offline.
To allow finegrained control of the speculation mitigations, the actual SMT
state is more interesting than the fact that siblings could be enabled.
Rework the code, so arch_smt_update() is invoked from each individual CPU
hotplug function, and simplify the update function while at it.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:37 +0000 (19:33 +0100)]
x86/Kconfig: Select SCHED_SMT if SMP enabled
CONFIG_SCHED_SMT is enabled by all distros, so there is not a real point to
have it configurable. The runtime overhead in the core scheduler code is
minimal because the actual SMT scheduling parts are conditional on a static
key.
This allows to expose the scheduler's SMT state static key to the
speculation control code. Alternatively the scheduler's static key could be
made always available when CONFIG_SMP is enabled, but that's just adding an
unused static key to every other architecture for nothing.
Currently the 'sched_smt_present' static key is enabled when at CPU bringup
SMT topology is observed, but it is never disabled. However there is demand
to also disable the key when the topology changes such that there is no SMT
present anymore.
Implement this by making the key count the number of cores that have SMT
enabled.
In particular, the SMT topology bits are set before interrupts are enabled
and similarly, are cleared after interrupts are disabled for the last time
and the CPU dies.
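A hedged sketch of that counting, assuming the scheduler's sched_smt_present static key and the standard cpumask/topology helpers (the exact in-tree hotplug hooks may differ):

  #include <linux/cpumask.h>
  #include <linux/jump_label.h>
  #include <linux/topology.h>

  extern struct static_key_false sched_smt_present; /* scheduler-provided key */

  #ifdef CONFIG_SCHED_SMT
  /* Called from the CPU bringup path, before interrupts are enabled. */
  static void example_smt_cpu_up(unsigned int cpu)
  {
          /* The second sibling coming online turns this core into an SMT core. */
          if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                  static_branch_inc_cpuslocked(&sched_smt_present);
  }

  /* Called from the CPU teardown path, after interrupts are disabled. */
  static void example_smt_cpu_down(unsigned int cpu)
  {
          /* The last sibling pair breaking up removes this core from the count. */
          if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                  static_branch_dec_cpuslocked(&sched_smt_present);
  }
  #endif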
Tim Chen [Sun, 25 Nov 2018 18:33:35 +0000 (19:33 +0100)]
x86/speculation: Reorganize speculation control MSRs update
The logic to detect whether there's a change in the previous and next
task's flag relevant to update speculation control MSRs is spread out
across multiple functions.
Consolidate all checks needed for updating speculation control MSRs into
the new __speculation_ctrl_update() helper function.
This makes it easy to pick the right speculation control MSR and the bits
in MSR_IA32_SPEC_CTRL that need updating based on TIF flags changes.
Thomas Gleixner [Sun, 25 Nov 2018 18:33:34 +0000 (19:33 +0100)]
x86/speculation: Rename SSBD update functions
During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
to changes of the TIF_SSBD flag in the current and next running task.
Currently, only the bit controlling speculative store bypass disable in
SPEC_CTRL MSR is updated and the related update functions all have
"speculative_store" or "ssb" in their names.
For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
updated as well, which makes the SSB names inadequate.
Rename the "speculative_store*" functions to a more generic name. No
functional change.
Tim Chen [Sun, 25 Nov 2018 18:33:32 +0000 (19:33 +0100)]
x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common()
The Spectre V2 printout in cpu_show_common() handles conditionals for the
various mitigation methods directly in the sprintf() argument list. That's
hard to read and will become unreadable if more complex decisions need to
be made for a particular method.
Move the conditionals for STIBP and IBPB string selection into helper
functions, so they can be extended later on.
Zhenzhong Duan [Fri, 2 Nov 2018 08:45:41 +0000 (01:45 -0700)]
x86/retpoline: Remove minimal retpoline support
Now that CONFIG_RETPOLINE hard depends on compiler support, there is no
reason to keep the minimal retpoline support around which only provided
basic protection in the assembly files.
Hui Wang [Wed, 28 Nov 2018 09:11:26 +0000 (17:11 +0800)]
ALSA: usb-audio: Add vendor and product name for Dell WD19 Dock
Like the Dell WD15 Dock, the WD19 Dock (0bda:402e) doesn't provide
useful strings for the vendor and product names either. In order to share
the UCM with the WD15, we keep the profile_name the same as the WD15's.
Taehee Yoo [Wed, 28 Nov 2018 02:27:28 +0000 (11:27 +0900)]
netfilter: nf_tables: deactivate expressions in rule replacement routine
There is no expression deactivation call from the rule replacement path,
hence, chain counter is not decremented. A few steps to reproduce the
problem:
%nft add table ip filter
%nft add chain ip filter c1
%nft add chain ip filter c2
%nft add rule ip filter c1 jump c2
%nft replace rule ip filter c1 handle 3 accept
%nft flush ruleset
<jump c2> expression means immediate NFT_JUMP to chain c2.
Reference count of chain c2 is increased when the rule is added.
When rule is deleted or replaced, the reference counter of c2 should be
decreased via nft_rule_expr_deactivate() which calls
nft_immediate_deactivate().
Borislav Petkov [Tue, 27 Nov 2018 13:41:37 +0000 (14:41 +0100)]
x86/MCE/AMD: Fix the thresholding machinery initialization order
Currently, the code sets up the thresholding interrupt vector and only
then goes about initializing the thresholding banks. This is wrong,
because an early thresholding interrupt would cause a NULL pointer
dereference when accessing those banks and prevent the machine from
booting.
Therefore, set the thresholding interrupt vector only *after* having
initialized the banks successfully.
Tudor Ambarus [Mon, 26 Nov 2018 12:45:44 +0000 (12:45 +0000)]
mtd: spi-nor: fix erase_type array to indicate current map conf
BFPT advertises all the erase types supported by all the possible
map configurations. Mask out the erase types that are not supported
by the current map configuration.
Backward compatibility test done on sst26vf064b.
Fixes: b038e8e3be72 ("mtd: spi-nor: parse SFDP Sector Map Parameter Table")
Reported-by: Alexander Sverdlin <[email protected]>
Signed-off-by: Tudor Ambarus <[email protected]>
Tested-by: Alexander Sverdlin <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Marek Szyprowski [Mon, 19 Nov 2018 15:49:05 +0000 (16:49 +0100)]
usb: gadget: u_ether: fix unsafe list iteration
list_for_each_entry_safe() is not safe for deleting entries from the
list if the spin lock, which protects it, is released and reacquired during
the list iteration. Fix this issue by replacing this construction with
a simple check whether the list is empty and removing the first entry in
each iteration, as sketched below. This is almost equivalent to a revert
of the commit mentioned in the Fixes: tag.
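A hedged sketch of the safe pattern described above (hypothetical request structure and handler, not the u_ether code):

  #include <linux/list.h>
  #include <linux/spinlock.h>

  struct example_req {
          struct list_head list;
          /* ... request payload ... */
  };

  static void example_submit(struct example_req *req)
  {
          /* hypothetical processing done outside the lock */
  }

  static void example_drain(struct list_head *queue, spinlock_t *lock)
  {
          unsigned long flags;

          spin_lock_irqsave(lock, flags);
          while (!list_empty(queue)) {
                  struct example_req *req =
                          list_first_entry(queue, struct example_req, list);

                  list_del_init(&req->list);

                  /* The lock is dropped while handling the entry ... */
                  spin_unlock_irqrestore(lock, flags);
                  example_submit(req);
                  spin_lock_irqsave(lock, flags);

                  /*
                   * ... which is why a list_for_each_entry_safe() cursor could
                   * now be stale; re-reading the first entry each pass is safe.
                   */
          }
          spin_unlock_irqrestore(lock, flags);
  }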
This patch fixes the following issue:
--->8---
Unable to handle kernel NULL pointer dereference at virtual address 00000104
pgd = (ptrval)
[00000104] *pgd=00000000
Internal error: Oops: 817 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 1 PID: 84 Comm: kworker/1:1 Not tainted 4.20.0-rc2-next-20181114-00009-g8266b35ec404 #1061
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
Workqueue: events eth_work
PC is at rx_fill+0x60/0xac
LR is at _raw_spin_lock_irqsave+0x50/0x5c
pc : [<c065fee0>] lr : [<c0a056b8>] psr: 80000093
sp : ee7fbee8 ip : 00000100 fp : 00000000
r10: 006000c0 r9 : c10b0ab0 r8 : ee7eb5c0
r7 : ee7eb614 r6 : ee7eb5ec r5 : 000000dc r4 : ee12ac00
r3 : ee12ac24 r2 : 00000200 r1 : 60000013 r0 : ee7eb5ec
Flags: Nzcv IRQs off FIQs on Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 6d5dc04a DAC: 00000051
Process kworker/1:1 (pid: 84, stack limit = 0x(ptrval))
Stack: (0xee7fbee8 to 0xee7fc000)
...
[<c065fee0>] (rx_fill) from [<c0143b7c>] (process_one_work+0x200/0x738)
[<c0143b7c>] (process_one_work) from [<c0144118>] (worker_thread+0x2c/0x4c8)
[<c0144118>] (worker_thread) from [<c014a8a4>] (kthread+0x128/0x164)
[<c014a8a4>] (kthread) from [<c01010b4>] (ret_from_fork+0x14/0x20)
Exception stack(0xee7fbfb0 to 0xee7fbff8)
...
---[ end trace 64480bc835eba7d6 ]---
Fixes: fea14e68ff5e ("usb: gadget: u_ether: use better list accessors")
Signed-off-by: Marek Szyprowski <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
sched, trace: Fix prev_state output in sched_switch tracepoint
commit 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout")
tried to fix the problem introduced by a previous commit efb40f588b43
("sched/tracing: Fix trace_sched_switch task-state printing"). However
the prev_state output in sched_switch is still broken.
task_state_index() uses fls() which considers the LSB as 1. Left
shifting 1 by this value gives an incorrect mapping to the task state.
Fix this by decrementing the value returned by __get_task_state()
before shifting.
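A minimal sketch of the corrected mapping (hypothetical helper name): the fls()-based index counts bits from 1, so it must be decremented before being shifted back into a TASK_*-style bit.

  /* index 0 means TASK_RUNNING; any other index came from fls(), counting from 1. */
  static inline long example_trace_prev_state(unsigned int state_index)
  {
          return state_index ? (1 << (state_index - 1)) : 0;
  }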
function_graph: Have profiler use curr_ret_stack and not depth
The profiler uses trace->depth to find its entry on the ret_stack, but the
depth may not match the actual location of where its entry is (if an
interrupt were to preempt the processing of the profiler for another
function, the depth and the curr_ret_stack will be different).
Have it use the curr_ret_stack as the index to find its ret_stack entry
instead of using the depth variable, as that is no longer guaranteed to be
the same.
function_graph: Reverse the order of pushing the ret_stack and the callback
The function graph profiler uses the ret_stack to store the "subtime",
which is reused by nested functions and also on the return. But the current
logic has the profiler callback called before the ret_stack is updated, and
it is just modifying the ret_stack that will later be allocated (it's just
lucky that the "subtime" is not touched when it is allocated).
This could also cause a crash if we are at the end of the ret_stack when
this happens.
By reversing the order of the allocating the ret_stack and then calling the
callbacks attached to a function being traced, the ret_stack entry is no
longer used before it is allocated.
function_graph: Move return callback before update of curr_ret_stack
In the past, curr_ret_stack had two functions. One was to denote the depth
of the call graph, the other was to keep track of where on the ret_stack the
data is used. Although they may be slightly related, there are two cases
where they need to be used differently.
The one case is that it keeps the ret_stack data from being corrupted by an
interrupt coming in and overwriting the data still in use. The other is just
to know where the depth of the stack currently is.
The function profiler uses the ret_stack to save a "subtime" variable that
is part of the data on the ret_stack. If curr_ret_stack is modified too
early, then this variable can be corrupted.
The "max_depth" option, when set to 1, will record the first functions going
into the kernel. To see all top functions (when dealing with timings), the
depth variable needs to be lowered before calling the return hook. But by
lowering the curr_ret_stack, it makes the data on the ret_stack still being
used by the return hook susceptible to being overwritten.
Now that there are two variables to handle both cases (curr_ret_depth), we
can move them to the locations where they can handle both cases.
function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack
Currently, the depth of the ret_stack is determined by curr_ret_stack index.
The issue is that there's a race between setting of the curr_ret_stack and
calling of the callback attached to the return of the function.
Commit 03274a3ffb44 ("tracing/fgraph: Adjust fgraph depth before calling
trace return callback") moved the calling of the callback to after the
setting of the curr_ret_stack, even stating that it was safe to do so, when
in fact, it was the reason there was a barrier() there (yes, I should have
commented that barrier()).
Not only does the curr_ret_stack keep track of the current call graph depth,
it also keeps the ret_stack content from being overwritten by new data.
The function profiler uses the "subtime" variable of the ret_stack structure,
and by moving the curr_ret_stack, it allows for interrupts to use the same
structure it was using, corrupting the data, and breaking the profiler.
To fix this, there needs to be two variables to handle the call stack depth
and the pointer to where the ret_stack is being used, as they need to change
at two different locations.
function_graph: Make ftrace_push_return_trace() static
As all architectures now call function_graph_enter() to do the entry work,
no architecture should ever call ftrace_push_return_trace(). Make it static.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
sparc/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have sparc use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
sh/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have superh use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
s390/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have s390 use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
riscv/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have riscv use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
powerpc/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have powerpc use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
parisc: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have parisc use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
nds32: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have nds32 use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
MIPS: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have MIPS use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
microblaze: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have microblaze use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
arm64: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have arm64 use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
ARM: function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have ARM use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
x86/function_graph: Simplify with function_graph_enter()
The function_graph_enter() function does the work of calling the function
graph hook function and the management of the shadow stack, simplifying the
work done in the architecture dependent prepare_ftrace_return().
Have x86 use the new code, and remove the shadow stack management as well as
having to set up the trace structure.
This is needed to prepare for a fix of a design bug on how the curr_ret_stack
is used.
The reason is that the node timer handler sometimes needs to delete a
node which has been disconnected for too long. To do this, it grabs
the lock 'node_list_lock', which may at the same time be held by the
generic node cleanup function, tipc_node_stop(), during module removal.
Since the latter is calling del_timer_sync() inside the same lock, we
have a potential deadlock.
We fix this by letting the timer cleanup function use spin_trylock()
instead of just spin_lock(), and when it fails to grab the lock it
just returns so that the timer handler can terminate its execution.
This is safe to do, since tipc_node_stop() anyway is about to
delete both the timer and the node instance.
Fixes: 6a939f365bdb ("tipc: Auto removal of peer down node instance")
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Bryan Whitehead [Mon, 26 Nov 2018 17:04:57 +0000 (12:04 -0500)]
lan743x: fix return value for lan743x_tx_napi_poll
The lan743x driver, when under heavy traffic load, has been noticed
to sometimes hang, or cause a kernel panic.
Debugging reveals that the TX napi poll routine was returning
the wrong value, 'weight'. Most other drivers return 0 and call
napi_complete() instead of napi_complete_done().
Additionally, when creating the TX napi poll routine,
netif_napi_add() was changed to netif_tx_napi_add().
Updates for v3:
changed 'fixes' tag to match defined format
Updates for v2:
use napi_complete, instead of napi_complete_done in
lan743x_tx_napi_poll
use netif_tx_napi_add, instead of netif_napi_add for
registration of tx napi poll routine
Lorenzo Bianconi [Mon, 26 Nov 2018 14:07:16 +0000 (15:07 +0100)]
net: thunderx: fix NULL pointer dereference in nic_remove
Fix a possible NULL pointer dereference in the nic_remove routine
when removing the nicpf module after nic_probe has failed.
The issue can be triggered with the following reproducer:
Fixes: 4863dea3fab0 ("net: Adding support for Cavium ThunderX network controller")
Signed-off-by: Lorenzo Bianconi <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Xin Long [Mon, 26 Nov 2018 06:52:44 +0000 (14:52 +0800)]
sctp: increase sk_wmem_alloc when head->truesize is increased
I changed sk_wmem_alloc to be counted by skb truesize instead of 1 to
fix the sk_wmem_alloc leak caused by a later truesize change in
xfrm, in commit 02968ccf0125 ("sctp: count sk_wmem_alloc by skb
truesize in sctp_packet_transmit").
But I should have also increased sk_wmem_alloc when head->truesize
is increased in sctp_packet_gso_append() as xfrm does. Otherwise,
sctp gso packet will cause sk_wmem_alloc underflow.
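A hedged sketch of the accounting rule (hypothetical helper; the actual change is in sctp_packet_gso_append()): whenever the head skb's truesize grows, the socket's write-memory accounting must grow by the same amount.

  #include <linux/skbuff.h>
  #include <net/sock.h>

  static void example_gso_append(struct sock *sk, struct sk_buff *head,
                                 struct sk_buff *frag)
  {
          /* Grow the head's length and truesize by the appended fragment ... */
          head->len += frag->len;
          head->data_len += frag->len;
          head->truesize += frag->truesize;

          /*
           * ... and keep sk_wmem_alloc in sync, so the later sock_wfree()
           * on the head does not underflow the socket's write accounting.
           */
          refcount_add(frag->truesize, &sk->sk_wmem_alloc);
  }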
Fixes: 02968ccf0125 ("sctp: count sk_wmem_alloc by skb truesize in sctp_packet_transmit")
Signed-off-by: Xin Long <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>