Dan Carpenter [Thu, 7 Mar 2019 05:33:44 +0000 (08:33 +0300)]
scsi: lpfc: Fix error codes in lpfc_sli4_pci_mem_setup()
It used to be that "error" was set to -ENODEV at the start of the function,
but we shifted some code around and now "error" is set to zero for most
error paths. There was a mix of direct returns and "goto out", but I changed
everything to direct returns for consistency.
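A sketch of the resulting pattern, with hypothetical variable names (regs,
base, len); each direct return now carries its own explicit error code
instead of relying on a pre-initialized "error":

  error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  if (error)
          error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
  if (error)
          return error;                   /* real errno, never 0 */

  regs = ioremap(base, len);
  if (!regs)
          return -ENODEV;                 /* explicit code on the direct return */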
Fixes: 56de8357049c ("scsi: lpfc: fix calls to dma_set_mask_and_coherent()") Signed-off-by: Dan Carpenter <[email protected]> Acked-by: James Smart <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
* pm-domains:
PM / domains: Remove one unnecessary blank line
PM / Domains: Return early for all errors in _genpd_power_off()
PM / Domains: Improve warn for multiple states but no governor
Jaroslav Kysela [Thu, 14 Mar 2019 08:21:08 +0000 (09:21 +0100)]
ALSA: hda/realtek: merge alc_fixup_headset_jack to alc295_fixup_chromebook
The ALC225_FIXUP_HEADSET_JACK fixup can be merged into alc295_fixup_chromebook.
There are no users of ALC225_FIXUP_HEADSET_JACK other than
the chromebook hardware.
Fixes: 10f5b1b85ed1 ("ALSA: hda/realtek - Fixed Headset Mic JD not stable") Cc: Kailang Yang <[email protected]> Signed-off-by: Jaroslav Kysela <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
Dave Airlie [Thu, 14 Mar 2019 01:56:59 +0000 (11:56 +1000)]
Merge tag 'drm-intel-next-fixes-2019-03-12' of git://anongit.freedesktop.org/drm/drm-intel into drm-next
- HDCP state handling in ddi_update_pipe
- Protect i915_active iterators from the shrinker
- Reacquire priolist cache after dropping the engine lock
- (Selftest) Always free spinner on __sseu_prepare error
- Acquire breadcrumb ref before canceling
- Fix atomic state leak on HDMI link reset
- Relax mmap VMA check
Xin Long [Wed, 13 Mar 2019 09:00:48 +0000 (17:00 +0800)]
pptp: dst_release sk_dst_cache in pptp_sock_destruct
sk_setup_caps() is called to set sk->sk_dst_cache in pptp_connect,
so we have to dst_release(sk->sk_dst_cache) in pptp_sock_destruct,
otherwise, the dst refcnt will leak.
unregister_netdevice: waiting for lo to become free. Usage count = 1
v1->v2:
- use rcu_dereference_protected() instead of rcu_dereference_check(),
as suggested by Eric.
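A rough sketch of the resulting destructor, using rcu_dereference_protected()
as noted above (illustrative of the shape of the fix, not the exact patch):

  static void pptp_sock_destruct(struct sock *sk)
  {
          if (!(sk->sk_state & PPPOX_DEAD)) {
                  del_chan(pppox_sk(sk));
                  pppox_unbind_sock(sk);
          }
          skb_queue_purge(&sk->sk_receive_queue);
          /* drop the dst reference taken by sk_setup_caps() in pptp_connect */
          dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
  }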
Fixes: 00959ade36ac ("PPTP: PPP over IPv4 (Point-to-Point Tunneling Protocol)") Reported-by: Xiumei Mu <[email protected]> Signed-off-by: Xin Long <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Florian Fainelli [Tue, 12 Mar 2019 17:50:59 +0000 (10:50 -0700)]
MAINTAINERS: GENET & SYSTEMPORT: Add internal Broadcom list
There is a patchwork instance behind bcm-kernel-feedback-list that is
helpful to track submissions, add this list for the Broadcom GENET and
SYSTEMPORT drivers.
Local variable description: ----addr@___sys_recvmsg
Variable was created at:
___sys_recvmsg+0xf6/0x1310 net/socket.c:2244
do_recvmmsg+0x646/0x10c0 net/socket.c:2390
Bytes 0-31 of 32 are uninitialized
Memory access of size 32 starts at ffff8880ae62fbb0
Data copied to user address 0000000020000000
Fixes: a32e0eec7042 ("l2tp: introduce L2TPv3 IP encapsulation support for IPv6") Signed-off-by: Eric Dumazet <[email protected]> Reported-by: syzbot <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Vakul Garg [Tue, 12 Mar 2019 08:22:57 +0000 (08:22 +0000)]
net/tls: Inform user space about send buffer availability
A previous fix ("tls: Fix write space handling") assumed that the
user space application gets informed about socket send buffer
availability when tls_push_sg() gets called. However, inside tls_push_sg(),
if do_tcp_sendpages() returns 0, the function returns without calling
ctx->sk_write_space. Further, the new function tls_sw_write_space()
did not invoke ctx->sk_write_space either. This leads to a situation where
the user space application locks up, forever waiting for the socket send
buffer to become available.
Rather than calling ctx->sk_write_space from tls_push_sg(), it should be
called from tls_write_space. That way, whenever the tcp stack invokes
sk->sk_write_space after freeing socket send buffer space, we always relay
the notification to user space by invoking ctx->sk_write_space.
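A minimal sketch of the intended shape of tls_write_space() (illustrative;
the pending-record transmit logic is omitted):

  void tls_write_space(struct sock *sk)
  {
          struct tls_context *ctx = tls_get_ctx(sk);

          /* ... push any pending records ... */

          /* always propagate write space availability to user space */
          ctx->sk_write_space(sk);
  }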
Zhike Wang [Mon, 11 Mar 2019 10:15:54 +0000 (03:15 -0700)]
net_sched: return correct value for *notify* functions
It is confusing to directly use the return value of netlink_send()/
netlink_unicast() as the return value of *notify*, as it may not be an
error at all.
Example: in tc_del_tfilter(), after calling tfilter_del_notify(), it will
goto errout if (err). However, netlink_send()/netlink_unicast() return a
positive value even in the successful case, so it may skip calling
tcf_chain_tp_remove() and the rest of the cleanup; as a result, the
resource is leaked.
It may be easier to only check the return value of tfilter_del_notify(),
but it is cleaner to correct all related functions.
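A sketch of the pattern applied in each *notify* helper, using rtnetlink_send()
as the send path (variable names illustrative):

  err = rtnetlink_send(skb, net, portid, RTNLGRP_TC,
                       n->nlmsg_flags & NLM_F_ECHO);
  if (err > 0)
          err = 0;        /* a positive value just means "sent", not an error */
  return err;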
Bryan Whitehead [Wed, 13 Mar 2019 19:55:48 +0000 (15:55 -0400)]
lan743x: Fix TX Stall Issue
It has been observed that the tx queue may stall while downloading
from certain web sites (for example, www.speedtest.net).
The cause has been tracked down to a corner case where
the tx interrupt vector was disabled automatically, but
was not re-enabled later.
The lan743x has two mechanisms to enable/disable individual
interrupts. Interrupts can be enabled/disabled by individual
source, and they can also be enabled/disabled by individual
vector which has been mapped to the source. Both must be
enabled for interrupts to work properly.
The TX code path primarily uses the interrupt enable/disable of
the TX source bit, while leaving the vector enabled all the time.
However, while investigating this issue it was noticed that
the driver requested the use of the vector auto clear feature.
The test above revealed a case where the vector enable was
cleared unintentionally.
This patch fixes the issue by deleting the lines that request
the vector auto clear feature to be used.
Fixes: 23f0703c125b ("lan743x: Add main source files for new lan743x driver") Signed-off-by: Bryan Whitehead <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Sagi Grimberg [Wed, 13 Mar 2019 17:55:10 +0000 (18:55 +0100)]
nvme-tcp: support C2HData with SUCCESS flag
A C2HData PDU with the SUCCESS flag set indicates that the I/O was
completed by the controller successfully and means that a subsequent
completion response capsule PDU will be omitted.
If we see this flag, first we check that the LAST_PDU flag is set as well,
and then we complete the request when the data transfer (and data digest
verification, if it's on) is done.
While we're at it, reuse a bit of code with nvme_fail_request.
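A sketch of the flag handling described above (illustrative; error logging
and the actual completion call are omitted):

  if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
          /* SUCCESS implies this must also be the last C2HData PDU */
          if (!(pdu->hdr.flags & NVME_TCP_F_DATA_LAST))
                  return -EPROTO;
          /* complete the request only after the data transfer (and data
           * digest verification, if enabled) has finished */
  }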
NVMe DSM is a pure hint, so if the underlying device / file system
does not support discard-like operations we should not fail the
operation but rather return success.
nvme: add proper write zeroes setup for the multipath device
Add a gendisk argument to nvme_config_write_zeroes so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
nvme: add proper discard setup for the multipath device
Add a gendisk argument to nvme_config_discard so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
Qemu started out with a broken implementation of Write Zeroes written
by yours truly. Disable Write Zeroes on qemu for now, eventually
we need to go back and make all the qemu quirks version specific,
but that is left for another time.
James Smart [Wed, 13 Mar 2019 17:55:03 +0000 (18:55 +0100)]
nvmet-fc: fix issues with targetport assoc_list list walking
There are two changes:
1) The logic in the __nvmet_fc_free_assoc() routine is bad. It uses
"safe" routines assuming pointers will come back valid. However, the
intervening next structure being linked can be removed from the list and
the resulting safe pointers are bad, resulting in NULL ptrs being hit.
Correct by scheduling a work element to perform the association delete,
which can be done while under lock.
2) Prior patch that added the work element scheduling left a possible
reference on the object if the work element couldn't be scheduled.
Correct by doing the put on a failing schedule_work() call.
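A sketch of change 2), assuming the del_work item added by the prior patch
(illustrative):

  if (!schedule_work(&assoc->del_work))
          /* work was already queued: drop the reference we were carrying */
          nvmet_fc_tgt_a_put(assoc);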
Linus Torvalds [Wed, 13 Mar 2019 18:10:42 +0000 (11:10 -0700)]
Merge tag 'selinux-pr-20190312' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux
Pull selinux fixes from Paul Moore:
"Two small fixes for SELinux in v5.1: one adds a buffer length check to
the SELinux SCTP code, the other ensures that the SELinux labeling for
a NFS mount is not disabled if the filesystem is mounted twice"
* tag 'selinux-pr-20190312' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
security/selinux: fix SECURITY_LSM_NATIVE_LABELS on reused superblock
selinux: add the missing walk_size + len check in selinux_sctp_bind_connect
Linus Torvalds [Wed, 13 Mar 2019 18:07:36 +0000 (11:07 -0700)]
Merge tag 'apparmor-pr-2019-03-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor
Pull apparmor fixes from John Johansen:
- fix double free when failing to unpack secmark rules in policy
- fix leak of dentry when profile is removed
* tag 'apparmor-pr-2019-03-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor:
apparmor: fix double free when unpack of secmark rules fails
apparmor: delete the dentry in aafs_remove() to avoid a leak
apparmor: Fix warning about unused function apparmor_ipv6_postroute
James Smart [Wed, 13 Mar 2019 17:55:02 +0000 (18:55 +0100)]
nvme-fc: reject reconnect if io queue count is reduced to zero
If:
- A successful connect has occurred with an io queue count greater than
zero and namespaces detected and running.
- An error or something occurs which causes a termination of the prior
association and then starts a reconnect,
- The reconnect then creates a new controller, but for whatever reason,
nvme_set_queue_count() results in io queue count set to zero. This
will skip io queue and tag set changes.
- But... the controller will transition to live, calling
nvme_start_ctrl, which calls nvme_start_queues(), which then releases
I/Os into the transport which then sends them to the driver.
As there are no queues, things eventually hit the driver looking for a
handle, which was cleared when the original controller was reset, and it
can't proceed. Worst case, things progress, but everything fails.
In the failing scenario, the nvme_set_features(NVME_FEAT_NUM_QUEUES)
command actually failed with a NVME_SC_INTERNAL error. For some reason,
although nvme_set_queue_count() saw the error and set io queue count to
zero, it doesn't return a failure status to the transport, which allows
the transport to continue using the controller.
Fix the problem by simply rejecting the new association if at least 1
I/O queue can't be created. The association reject will fail the
reconnect attempt and fall into the reconnect retry policy.
James Smart [Wed, 13 Mar 2019 17:55:01 +0000 (18:55 +0100)]
nvme-fc: fix numa_node when dev is null
A recent change added a numa_node field to the nvme controller
and has the transport assign the node using dev_to_node().
However, fcloop registers with a NULL device struct, so the
dev_to_node() call oopses.
Revise the assignment to assign no node when the device struct is null.
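One way to express the fix, with hypothetical variable/field names:

  ctrl->numa_node = dev ? dev_to_node(dev) : NUMA_NO_NODE;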
James Smart [Wed, 13 Mar 2019 17:55:00 +0000 (18:55 +0100)]
nvme-fc: use nr_phys_segments to determine existence of sgl
For some nvme commands, when issued by the nvme core layer, there
is an internal buffer which can cause blk_rq_payload_bytes() to
return a non-zero value even though there is no actual/real command payload
and sg list. An example is the WRITE ZEROES command.
To address this, when deciding whether to dma map an sgl,
use blk_rq_nr_phys_segments() instead of blk_rq_payload_bytes().
When there is an sgl, blk_rq_payload_bytes() will still return the amount
of data to be transferred by the sgl.
Yufen Yu [Wed, 13 Mar 2019 17:54:58 +0000 (18:54 +0100)]
nvme: update comment to make the code easier to read
After commit a686ed75c0fb ("nvme: introduce a helper function for
controller deletion"), nvme_delete_ctrl_sync no longer uses flush_work.
Update the comment accordingly.
Sagi Grimberg [Wed, 13 Mar 2019 17:54:57 +0000 (18:54 +0100)]
nvme: put ns_head ref if namespace fails allocation
In case nvme_alloc_ns fails after we initialize ns_head but before we
add the ns to the controller namespaces list we need to explicitly put
the ns_head reference because when we teardown the controller we
won't find it, causing us to leak a dangling subsystem eventually.
Keith Busch [Wed, 13 Mar 2019 17:54:55 +0000 (18:54 +0100)]
nvme: don't warn on block content change effects
A write or flush IO passthrough command is expected to change the
logical block content, so don't warn on these as no additional handling
is necessary.
Masahiro Yamada [Fri, 8 Mar 2019 09:56:25 +0000 (18:56 +0900)]
kbuild: pkg: grep include/config/auto.conf instead of $KCONFIG_CONFIG
This will be a little more efficient since unset CONFIG options are
stripped away from auto.conf, and we can hard-code the path to auto.conf
since it is never overridden.
include/config/kernel.release is generated before %pkg is run,
so it is guaranteed that auto.conf is up to date.
# Include the kernel makefile
override dot-config := 1
include Makefile
dot-config := 1
I did not know the kernel Makefile was used in that way, and it is
hard to guarantee the behavior when the kernel Makefile is included
by another Makefile from a different project.
It looks like Debian Stretch stopped providing make-kpkg. Maybe it is
obsolete, being replaced with 'make deb-pkg' etc., but it is still widely
used.
This commit adds a workaround; if the top Makefile is included by
another Makefile, skip sub-make in order to make the main part visible.
'MAKEFLAGS += -rR' does not become effective for GNU Make < 4.0, but
Debian/Ubuntu is already using newer versions.
The effect of this commit:
Debian 8 (Jessie) : Fixed
Debian 9 (Stretch) : make-kpkg (kernel-package) is not provided
Ubuntu 14.04 LTS : NOT Fixed
Ubuntu 16.04 LTS : Fixed
Ubuntu 18.04 LTS : Fixed
This commit cannot fix Ubuntu 14.04 because it installs GNU Make 3.81,
but its support will end in Apr 2019, which is before the Linux v5.1
release.
I added a warning so that nobody tries to include the top Makefile.
Masahiro Yamada [Fri, 8 Mar 2019 05:49:10 +0000 (14:49 +0900)]
kbuild: source include/config/auto.conf instead of ${KCONFIG_CONFIG}
As commit 423a8155facf ("kbuild: Fix reading of .config in
link-vmlinux.sh") addressed, some shells fail to perform '.' if
${KCONFIG_CONFIG} does not contain a slash at all.
Instead, we can source include/config/auto.conf, which obviously
contains slashes, and we do not expect its file path to be overridden by
a user. Performance might also be slightly better since
unset CONFIG options are stripped from include/config/auto.conf.
Masahiro Yamada [Fri, 8 Mar 2019 05:30:35 +0000 (14:30 +0900)]
unicore32: simplify linker script generation for decompressor
When I was searching for unneeded $(KCONFIG_CONFIG) usages, I noticed
this strange build dependency.
It can use $(call if_changed,...) in case ZTEXTADDR and ZBSSADDR are
changed, but even a simpler way is to use the pattern rule in
scripts/Makefile.build. This is what arch/arm/boot/compressed/Makefile
does.
I only did a build test; I confirmed an equivalent vmlinux.lds was generated.
Masahiro Yamada [Fri, 15 Feb 2019 04:04:26 +0000 (13:04 +0900)]
h8300: use cc-cross-prefix instead of hardcoding h8300-unknown-linux-
I believe it is a bad idea to hardcode a specific compiler prefix
that may or may not be installed on a user's system. It is annoying
when testing features that should not require compilers at all.
For example, mrproper, headers_install, etc. should work without
any compiler.
On my machine, the failures look like this:
$ make ARCH=h8300 mrproper
./scripts/gcc-version.sh: line 26: h8300-unknown-linux-gcc: command not found
./scripts/gcc-version.sh: line 27: h8300-unknown-linux-gcc: command not found
make: h8300-unknown-linux-gcc: Command not found
make: h8300-unknown-linux-gcc: Command not found
[ a bunch of the same error messages continue ]
$ make ARCH=h8300 headers_install
./scripts/gcc-version.sh: line 26: h8300-unknown-linux-gcc: command not found
./scripts/gcc-version.sh: line 27: h8300-unknown-linux-gcc: command not found
make: h8300-unknown-linux-gcc: Command not found
HOSTCC scripts/basic/fixdep
make: h8300-unknown-linux-gcc: Command not found
WRAP arch/h8300/include/generated/uapi/asm/kvm_para.h
[ snip ]
The solution is to delete this line, or to use cc-cross-prefix like
some architectures do. I chose the latter as a moderate fixup.
I added an alternative 'h8300-linux-' because it is available at:
Masahiro Yamada [Fri, 25 Jan 2019 07:18:23 +0000 (16:18 +0900)]
ia64: prefix header search path with $(srctree)/
Currently, the Kbuild core manipulates header search paths in a crazy
way [1].
To fix this mess, I want all Makefiles to add explicit $(srctree)/ to
the search paths in the srctree. Some Makefiles are already written in
that way, but not all. The goal of this work is to make the notation
consistent, and finally get rid of the gross hacks.
Having whitespaces after -I does not matter since commit 48f6e3cf5bc6
("kbuild: do not drop -I without parameter").
I removed some header search paths because I was able to build ia64
without them.
Masahiro Yamada [Fri, 25 Jan 2019 03:41:38 +0000 (12:41 +0900)]
libfdt: prefix header search paths with $(srctree)/
Currently, the Kbuild core manipulates header search paths in a crazy
way [1].
To fix this mess, I want all Makefiles to add explicit $(srctree)/ to
the search paths in the srctree. Some Makefiles are already written in
that way, but not all. The goal of this work is to make the notation
consistent, and finally get rid of the gross hacks.
Having whitespaces after -I does not matter since commit 48f6e3cf5bc6
("kbuild: do not drop -I without parameter").
Riku Voipio [Wed, 2 Jan 2019 09:23:04 +0000 (11:23 +0200)]
deb-pkg: generate correct build dependencies
bison/flex is now always needed for building kconfig. Some build
dependencies depend on the kernel configuration, so enable them as needed:
- libelf-dev when UNWINDER_ORC is set
- libssl-dev for SYSTEM_TRUSTED_KEYRING
Since libssl-dev is needed for the extract_cert binary, denote it with
:native to install libssl-dev for the build machine's architecture,
rather than for the architecture of the kernel being built.
Hans de Goede [Tue, 12 Mar 2019 14:55:54 +0000 (15:55 +0100)]
i2c: i2c-designware-platdrv: Always use a dynamic adapter number
Before this commit the i2c-designware-platdrv assumes that if the pdev
has an ACPI companion it should use a dynamic adapter-nr and it sets
adapter->nr to -1; otherwise it will use pdev->id as adapter->nr.
There are 3 ways how platform_device-s to which i2c-designware-platdrv
will bind can be instantiated:
1) Through of / devicetree
2) Through ACPI enumeration
3) Explicitly instantiated through platform_device_create + add
1) In case of devicetree instantiation the drivers/of code always sets
pdev->id to PLATFORM_DEVID_NONE, which is -1, so in this case both paths
to set adapter->nr end up doing the same thing.
2) In case of ACPI instantiation the device will always have an
ACPI-companion, so we are already using dynamic adapter-nrs.
3) There are 2 places manually instantiating a designware_i2c platform_dev:
drivers/mfd/intel_quark_i2c_gpio.c
drivers/mfd/intel-lpss.c
In the intel_quark_i2c_gpio.c case pdev->id is always 0, so switching to
dynamic adapter-nrs here could lead to the bus-number no longer being
stable, but the quark X1000 only has 1 i2c-controller, which will also
be assigned bus-number 0 when using dynamic adapter-nrs.
In the intel-lpss.c case intel_lpss_probe() is called from either
intel-lpss-acpi.c in which case there always is an ACPI-companion, or
from intel-lpss-pci.c. In most cases devices handled by intel-lpss-pci.c
also have an ACPI-companion, so we use a dynamic adapter-nr. But in some
cases the ACPI-companion is missing and we would use pdev->id (allocated
from intel_lpss_devid_ida). Devices which use the intel-lpss-pci.c code
typically have many i2c busses, so using pdev->id in this case may lead
to a bus-number conflict, triggering a WARN(id < 0, "couldn't get idr")
in i2c-core-base.c, causing an oops and the adapter registration to fail.
So in this case using non-dynamic adapter-nrs is actually undesirable.
One machine on which this oops was triggering is the Apollo Lake based
Acer TravelMate Spin B118.
TL;DR: Switching to always using dynamic adapter-numbers does not make
any difference in most cases and in the one case where it does make a
difference the behavior change is desirable because the old behavior
caused an oops.
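A sketch of the resulting probe-time setup (illustrative; a bus number of -1
asks the I2C core for a dynamic number):

  /* in dw_i2c_plat_probe(), roughly: */
  adap->nr = -1;                  /* always request a dynamic bus number */
  ret = i2c_add_numbered_adapter(adap);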
Hans de Goede [Tue, 12 Mar 2019 14:55:53 +0000 (15:55 +0100)]
i2c: i2c-designware-platdrv: Cleanup setting of the adapter number
i2c-designware-platdrv assumes that if the pdev has an ACPI companion
it should use a dynamic adapter-nr and otherwise it will use pdev->id
as adapter-nr.
Before this commit the setting of the adapter.nr was somewhat convoluted:
in the acpi_companion case it was set from dw_i2c_acpi_configure; in the
non acpi_companion case it was set from dw_i2c_set_fifo_size, based on
tx_fifo_depth not being set yet, indicating that dw_i2c_acpi_configure was
not executed.
This commit cleans that up by directly setting the adapter-nr from
dw_i2c_plat_probe in both cases.
Linus Torvalds [Wed, 13 Mar 2019 17:06:28 +0000 (10:06 -0700)]
Merge tag 'kconfig-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kconfig updates from Masahiro Yamada:
- rename lexer and parse files
- fix 'Save as' menu of xconfig
* tag 'kconfig-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
kconfig: fix 'Save As' menu of xconfig
kconfig: rename zconf.y to parser.y
kconfig: rename zconf.l to lexer.l
Wolfram Sang [Tue, 12 Mar 2019 12:44:42 +0000 (13:44 +0100)]
i2c: add extra check to safe DMA buffer helper
Make sure we report 'no buffer' for 0-length messages. This can only
happen if threshold is set to 0 which is kind of bogus but we should
still handle this situation. Update the docs and add a debug message
to educate callers of this function.
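A sketch of the extra check near the top of the helper (presumably
i2c_get_dma_safe_msg_buf(); the exact debug message is illustrative):

  if (!msg->len) {
          pr_debug("0-length message: reporting 'no buffer'\n");
          return NULL;
  }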
Linus Torvalds [Wed, 13 Mar 2019 17:01:10 +0000 (10:01 -0700)]
Merge tag 'pwm/for-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm
Pull pwm updates from Thierry Reding:
"The changes for this cycle are across the board.
The bulk of it is cleanups, but there's also new device support in
some drivers as well as more conversions to the atomic API"
* tag 'pwm/for-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm: (24 commits)
pwm: atmel: Remove useless symbolic definitions
pwm: bcm-kona: Update macros to remove braces around numbers
pwm: imx27: Only enable the clocks once in .get_state()
pwm: rcar: Improve calculation of divider
pwm: rcar: Remove legacy APIs
pwm: rcar: Use "atomic" API on rcar_pwm_resume()
pwm: rcar: Add support "atomic" API
pwm: atmel: Add support for SAM9X60's PWM controller
pwm: atmel: Add PWM binding for SAM9X60
pwm: atmel: Rename objects of type atmel_pwm_data
pwm: atmel: Add support for controllers with 32 bit counters
pwm: atmel: Add struct atmel_pwm_data
pwm: Add MediaTek MT8183 display PWM driver support
pwm: hibvt: Add hi3559v100 support
dt-bindings: pwm: hibvt: Add hi3559v100 support
pwm: hibvt: Use individual struct per of-data
pwm: imx: Signedness bug in imx_pwm_get_state()
pwm: imx: Split into two drivers
pwm: imx: Don't print an error on -EPROBE_DEFER
pwm: imx: Set driver data earlier simplifying the end of ->probe()
...
Linus Torvalds [Wed, 13 Mar 2019 16:59:08 +0000 (09:59 -0700)]
Merge tag 'mailbox-v5.1' of git://git.linaro.org/landing-teams/working/fujitsu/integration
Pull mailbox updates from Jassi Brar:
- mailbox-test: support multiple controller instances
- misc cleanup: IMX, STM32 and Tegra
- new driver: ZynqMP IPI
* tag 'mailbox-v5.1' of git://git.linaro.org/landing-teams/working/fujitsu/integration:
mailbox: imx: keep MU irq working during suspend/resume
dt-bindings: mailbox: Add Xilinx IPI Mailbox
mailbox: ZynqMP IPI mailbox controller
mailbox: stm32-ipcc: remove useless device_init_wakeup call
mailbox: stm32-ipcc: do not enable wakeup source by default
mailbox: mailbox-test: fix null pointer if no mmio
mailbox: mailbox-test: fix debugfs in multi-instances
mailbox: tegra-hsp: mark suspend function as __maybe_unused
Jens Axboe [Wed, 13 Mar 2019 16:47:25 +0000 (10:47 -0600)]
Merge branch 'for-5.1/md-post' of https://github.com/liu-song-6/linux into for-5.1/block-post
Pull MD fixes from Song.
* 'for-5.1/md-post' of https://github.com/liu-song-6/linux:
md: Fix failed allocation of md_register_thread
It's wrong to add len to sector_nr in raid10 reshape twice
raid5: set write hint for PPL
Linus Torvalds [Wed, 13 Mar 2019 16:41:18 +0000 (09:41 -0700)]
Merge tag 'libnvdimm-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
"The bulk of this has been in -next since before the merge window
opened, with no known collisions / issues reported.
The only detail worth noting, outside the summary below, is that the
"libnvdimm-start-pad" topic has been truncated to just cleanups and
small fixes. The full topic branch would have doubled down on hacks
around the "section alignment" limitation of the core-mm, instead
effort is now being spent to address that root issue in the memory
hotplug implementation for v5.2.
- Fix nfit-bus command submission regression
- Support retrieval of short-ARS results if the ARS state is
"requires continuation", and even if the "no_init_ars" module
parameter is specified
- Allow busy-polling of the kernel ARS state by allowing root to
reset the exponential back-off timer
- Filter potentially stale ARS results by tracking query-ARS relative
to the previous start-ARS
- Enhance dax_device alignment checks
- Add support for the Hyper-V family of device-specific-methods
(DSMs)
- Add several fixes and workarounds for Hyper-V compatibility
- Fix support to cache the dirty-shutdown-count at init"
* tag 'libnvdimm-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (25 commits)
libnvdimm/namespace: Clean up holder_class_store()
libnvdimm/of_pmem: Fix platform_no_drv_owner.cocci warnings
acpi/nfit: Update NFIT flags error message
libnvdimm/btt: Fix LBA masking during 'free list' population
libnvdimm/btt: Remove unnecessary code in btt_freelist_init
libnvdimm/pfn: Remove dax_label_reserve
dax: Check the end of the block-device capacity with dax_direct_access()
nfit/ars: Avoid stale ARS results
nfit/ars: Allow root to busy-poll the ARS state machine
nfit/ars: Introduce scrub_flags
nfit/ars: Remove ars_start_flags
nfit/ars: Attempt short-ARS even in the no_init_ars case
nfit/ars: Attempt a short-ARS whenever the ARS state is idle at boot
acpi/nfit: Require opt-in for read-only label configurations
libnvdimm/pmem: Honor force_raw for legacy pmem regions
libnvdimm/pfn: Account for PAGE_SIZE > info-block-size in nd_pfn_init()
libnvdimm: Fix altmap reservation size calculation
libnvdimm, pfn: Fix over-trim in trim_pfn_device()
acpi/nfit: Fix bus command validation
libnvdimm/dimm: Add a no-BLK quirk based on NVDIMM family
...
Linus Torvalds [Wed, 13 Mar 2019 16:37:09 +0000 (09:37 -0700)]
Merge tag 'fsdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull filesystem-dax updates from Dan Williams:
- Fix handling of PMD-sized entries in the Xarray that lead to a crash
scenario
- Miscellaneous cleanups and small fixes
* tag 'fsdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
dax: Flush partial PMDs correctly
fs/dax: NIT fix comment regarding start/end vs range
fs/dax: Convert to use vmf_error()
As reported back in 2016-11 [1], the "ftdump" kdb command triggers a
BUG for "sleeping function called from invalid context".
kdb's "ftdump" command wants to call ring_buffer_read_prepare() in
atomic context. A very simple solution for this is to add allocation
flags to ring_buffer_read_prepare() so kdb can call it without
triggering the allocation error. This patch does that.
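Assuming the patch simply appends a gfp_t parameter to
ring_buffer_read_prepare(), the two call sites would look roughly like this
(buffer/cpu are illustrative locals):

  /* tracing's normal ftrace_dump() path: may sleep */
  iter = ring_buffer_read_prepare(buffer, cpu, GFP_KERNEL);

  /* kdb's "ftdump" path: atomic context, must not sleep */
  iter = ring_buffer_read_prepare(buffer, cpu, GFP_ATOMIC);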
Note that in the original email thread about this, it was suggested
that perhaps the solution for kdb was to either preallocate the buffer
ahead of time or create our own iterator. I'm hoping that this
alternative of adding allocation flags to ring_buffer_read_prepare()
can be considered since it means I don't need to duplicate more of the
core trace code into "trace_kdb.c" (for either creating my own
iterator or re-preparing a ring allocator whose memory was already
allocated).
NOTE: another option for kdb is to actually figure out how to make it
reuse the existing ftrace_dump() function and totally eliminate the
duplication. This sounds very appealing and actually works (the "sr
z" command can be seen to properly dump the ftrace buffer). The
downside here is that ftrace_dump() fully consumes the trace buffer.
Unless that is changed I'd rather not use it because it means "ftdump
| grep xyz" won't be very useful to search the ftrace buffer since it
will throw away the whole trace on the first grep. A future patch to
dump only the last few lines of the buffer will also be hard to
implement.
Jian-Hong Pan [Wed, 13 Mar 2019 09:33:24 +0000 (17:33 +0800)]
ALSA: hda/realtek: Enable headset MIC of Acer TravelMate X514-51T with ALC255
The Acer TravelMate X514-51T with ALC255 cannot detect the headset MIC
until the ALC255_FIXUP_ACER_HEADSET_MIC quirk is applied. Although the
internal DMIC uses another module (snd_soc_skl) as the driver, we still
need the NID 0x1a in the quirk to enable the headset MIC.
Arnd Bergmann [Mon, 4 Mar 2019 20:33:25 +0000 (21:33 +0100)]
ALSA: hda/tegra: avoid build error without CONFIG_PM
The #ifdef protection around the PM functions is wrong, leading to
a failed reference in some configurations:
sound/pci/hda/hda_tegra.c: In function 'hda_tegra_runtime_suspend':
sound/pci/hda/hda_tegra.c:273:2: error: implicit declaration of function 'hda_tegra_disable_clocks'; did you mean 'hda_tegra_enable_clocks'? [-Werror=implicit-function-declaration]
Better remove the #ifdefs entirely and rely on the compiler silently
dropping unused functions marked __maybe_unused.
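A rough sketch of the pattern (the function body here is illustrative, not
the driver's actual code):

  /* no #ifdef CONFIG_PM around the PM callbacks anymore */
  static int __maybe_unused hda_tegra_runtime_suspend(struct device *dev)
  {
          struct hda_tegra *hda = dev_get_drvdata(dev);

          hda_tegra_disable_clocks(hda);  /* now always visible to the compiler */
          return 0;
  }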
usb_alloc_urb() can fail due to kmalloc failure, and the error should be
pushed upstream. Otherwise this can cause a NULL pointer dereference in
init_pipe_urbs(). This patch avoids such a scenario.
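A sketch of the check (illustrative):

  urb = usb_alloc_urb(0, GFP_KERNEL);
  if (!urb)
          return -ENOMEM;         /* push the allocation failure upstream */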
Mariusz Ceier [Mon, 11 Mar 2019 20:53:57 +0000 (21:53 +0100)]
ALSA: hda: Avoid NULL pointer dereference at snd_hdac_stream_start()
For ca0132 codec, azx_dev->stream is NULL during firmware loading.
Calling snd_hdac_get_stream_stripe_ctl unconditionally causes NULL
pointer dereference in that function.
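A rough sketch of the guard (field and register names are assumed from the
stripe-bits code referenced below; illustrative only):

  /* only look up stripe control when a substream is actually attached */
  if (azx_dev->substream)
          stripe_ctl = snd_hdac_get_stream_stripe_ctl(bus, azx_dev->substream);
  else
          stripe_ctl = 0;
  snd_hdac_stream_updateb(azx_dev, SD_CTL_3B, SD_CTL_STRIPE_MASK, stripe_ctl);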
Fixes: 9b6f7e7a296e ("ALSA: hda: program stripe bits for controller") Signed-off-by: Mariusz Ceier <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
Sometimes we hit a deadlock when the SYS_getdents64 system call is made
while fsync() is called by another process.
This was hit by monkey running on Android 9.0:
1. task 9785 held sbi->cp_rwsem and was waiting for lock_page()
2. task 10349 held mm_sem and was waiting for sbi->cp_rwsem
3. task 9709 held lock_page() and was waiting for mm_sem
Since f2fs_readdir is protected by inode.i_rwsem, there should not be
any updates to the inode page, so we're safe to look up dents in the inode
page without its lock held. Take off the lock to improve concurrency
of readdir and avoid the potential deadlock.
The reason is that, for an inode with a very small inline xattr size,
__find_inline_xattr() fails to traverse any entry because the first
entry may not have been loaded from the xattr node yet; later, we may
skip checking the entire xattr data in __find_xattr(), resulting in this
wrong condition.
This patch adds a check for such a case to avoid this issue.
When I run the poc on the mounted f2fs img I get a buffer overflow in
read_inline_xattr due to there being no sanity check on the value of
i_inline_xattr_size.
I created the img by just modifying the value of i_inline_xattr_size
in the inode:
Chao Yu [Thu, 7 Mar 2019 09:31:30 +0000 (17:31 +0800)]
f2fs: don't trigger read IO for beyond EOF page
In f2fs_mpage_readpages(), if a page is beyond EOF, we should just
zero it out; but previously, before checking the previous mapping
info, we missed checking the file size boundary. Fix it.
f2fs may skip pageout() due to an incorrect page reference count.
The problem here is that MM defines the rule [1] very clearly: once a
page is set with the PG_private flag, we should increment the refcount
of that page, and main flows like pageout() and migrate_page() will
assume there is one additional page reference count if
page_has_private() returns true.
But currently, f2fs doesn't add/drop the refcount when changing the
PG_private flag. f2fs should follow MM's rule to make MM's related
flows run as expected.
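A sketch of MM's rule expressed as hypothetical f2fs helpers (illustrative,
not the exact patch):

  static inline void f2fs_set_page_private(struct page *page, unsigned long data)
  {
          if (PagePrivate(page))
                  return;
          get_page(page);                 /* extra ref while PG_private is set */
          set_page_private(page, data);
          SetPagePrivate(page);
  }

  static inline void f2fs_clear_page_private(struct page *page)
  {
          if (!PagePrivate(page))
                  return;
          set_page_private(page, 0);
          ClearPagePrivate(page);
          put_page(page);                 /* drop the ref taken above */
  }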
Chao Yu [Wed, 6 Mar 2019 08:18:33 +0000 (16:18 +0800)]
f2fs: remove wrong comment in f2fs_invalidate_page()
Since 8c242db9b8c0 ("f2fs: fix stale ATOMIC_WRITTEN_PAGE private pointer"),
we no longer skip clearing the private flag for atomic_write page
truncation, so remove the old, incorrect comment in f2fs_invalidate_page().
The system can panic due to using the wrong allocate/free function pair
in the xattr interface:
- kvmalloc is used to allocate memory
- kzfree is used to free memory
Let's fix this by using kvfree instead of kzfree. BTW, we are safe to
get rid of kzfree, since there is no confidential data stored in this
xattr; we don't need to zero it before freeing the memory.
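A sketch of the corrected pairing, with hypothetical names:

  base = kvmalloc(size, GFP_KERNEL);      /* may be vmalloc-backed */
  if (!base)
          return -ENOMEM;
  /* ... use base ... */
  kvfree(base);                           /* kzfree() assumes kmalloc memory */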
We may encounter the above ABBA deadlock, as reported by Kyungtae Kim:
I'm reporting a bug in linux-4.17.19: "INFO: task hung in
drop_inmem_page" (no reproducer)
I think this might be somehow related to the following:
https://groups.google.com/forum/#!searchin/syzkaller-bugs/INFO$3A$20task$20hung$20in$20%7Csort:date/syzkaller-bugs/c6soBTrdaIo/AjAzPeIzCgAJ
We didn't recover the permission field correctly after a sudden
power cut. The reason is that in setattr we didn't add the inode into the
global dirty list once i_mode was changed, so a later checkpoint triggered
by fsync would not flush the last i_mode to disk, resulting in this
problem. Fix it.
Chao Yu [Thu, 21 Feb 2019 12:37:14 +0000 (20:37 +0800)]
f2fs: fix encrypted page memory leak
For the IPU path of f2fs_do_write_data_page(), in its error path we
need to release the encrypted page and fscrypt context, otherwise it
will cause a memory leak.
Chao Yu [Tue, 19 Feb 2019 09:08:18 +0000 (17:08 +0800)]
f2fs: make fault injection covering __submit_flush_wait()
This patch allows f2fs_bio_alloc() to fail in __submit_flush_wait(),
which can simulate a flush error in checkpoint() to cover more error
paths.
Chao Yu [Tue, 19 Feb 2019 08:23:53 +0000 (16:23 +0800)]
f2fs: fix to retry fill_super only if recovery failed
With the current retry mechanism in f2fs_fill_super, if the first fill_super
fails due to lack of memory, the second fill_super runs w/o recovery;
if that succeeds, we may lose fsynced data, which doesn't make sense.
Let's retry fill_super only if a non-ENOMEM error occurs during
recovery.
Stephen Rothwell [Fri, 22 Feb 2019 05:14:45 +0000 (16:14 +1100)]
remoteproc: fix for "dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag"
The commit 82c5de0ab8db ("dma-mapping: remove the DMA_MEMORY_EXCLUSIVE
flag") removed the "flags" parameter for dma_declare_coherent_memory().
Remove the parameter from the call in rproc_add_virtio_dev().
cpuidle: governor: Add new governors to cpuidle_governors again
After commit 61cb5758d3c4 ("cpuidle: Add cpuidle.governor= command
line parameter") new cpuidle governors are not added to the list
of available governors, so governor selection via sysfs doesn't
work as expected (even though it is rarely used anyway).
Fix that by making cpuidle_register_governor() add new governors to
cpuidle_governors again.
Fixes: 61cb5758d3c4 ("cpuidle: Add cpuidle.governor= command line parameter") Reported-by: Kees Cook <[email protected]> Cc: 5.0+ <[email protected]> # 5.0+ Signed-off-by: Rafael J. Wysocki <[email protected]>
Linus Torvalds [Tue, 12 Mar 2019 22:06:54 +0000 (15:06 -0700)]
Merge tag 'nfsd-5.1' of git://linux-nfs.org/~bfields/linux
Pull NFS server updates from Bruce Fields:
"Miscellaneous NFS server fixes.
Probably the most visible bug is one that could artificially limit
NFSv4.1 performance by limiting the number of outstanding rpcs from a
single client.
Neil Brown also gets a special mention for fixing a 14.5-year-old
memory-corruption bug in the encoding of NFSv3 readdir responses"
* tag 'nfsd-5.1' of git://linux-nfs.org/~bfields/linux:
nfsd: allow nfsv3 readdir request to be larger.
nfsd: fix wrong check in write_v4_end_grace()
nfsd: fix memory corruption caused by readdir
nfsd: fix performance-limiting session calculation
svcrpc: fix UDP on servers with lots of threads
svcrdma: Remove syslog warnings in work completion handlers
svcrdma: Squelch compiler warning when SUNRPC_DEBUG is disabled
svcrdma: Use struct_size() in kmalloc()
svcrpc: fix unlikely races preventing queueing of sockets
svcrpc: svc_xprt_has_something_to_do seems a little long
SUNRPC: Don't allow compiler optimisation of svc_xprt_release_slot()
nfsd: fix an IS_ERR() vs NULL check
Linus Torvalds [Tue, 12 Mar 2019 22:03:21 +0000 (15:03 -0700)]
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 updates from Ted Ts'o:
"A large number of bug fixes and cleanups.
One new feature to allow users to more easily find the jbd2 journal
thread for a particular ext4 file system"
* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (25 commits)
jbd2: jbd2_get_transaction does not need to return a value
jbd2: fix invalid descriptor block checksum
ext4: fix bigalloc cluster freeing when hole punching under load
ext4: add sysfs attr /sys/fs/ext4/<disk>/journal_task
ext4: Change debugging support help prefix from EXT4 to Ext4
ext4: fix compile error when using BUFFER_TRACE
jbd2: fix compile warning when using JBUFFER_TRACE
ext4: fix some error pointer dereferences
ext4: annotate more implicit fall throughs
ext4: annotate implicit fall throughs
ext4: don't update s_rev_level if not required
jbd2: fold jbd2_superblock_csum_{verify,set} into their callers
jbd2: fix race when writing superblock
ext4: fix crash during online resizing
ext4: disallow files with EXT4_JOURNAL_DATA_FL from EXT4_IOC_SWAP_BOOT
ext4: add mask of ext4 flags to swap
ext4: update quota information while swapping boot loader inode
ext4: cleanup pagecache before swap i_data
ext4: fix check of inode in swap_inode_boot_loader
ext4: unlock unused_pages timely when doing writeback
...
David S. Miller [Tue, 12 Mar 2019 22:00:15 +0000 (15:00 -0700)]
Merge branch 'mlx4-fixes'
Tariq Toukan says:
====================
mlx4_core misc fixes
This patchset by Jack contains misc fixes to the mlx4 Core driver.
Patch 1 fixes a use-after-free situation by marking (nullifying) the pointer,
please queue for -stable >= v4.0.
Patch 2 adds a missing lock acquire and release in SRIOV command interface,
please queue for -stable >= v4.9.
Patch 3 avoids calling roundup_pow_of_two when the argument is zero,
please queue for -stable >= v3.3.
Series generated against net commit: a3b1933d34d5 Merge tag 'mlx5-fixes-2019-03-11' of
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
====================