linux.git/log
5 years agotracing: Fix user stack trace "??" output
Eiichi Tsukata [Sun, 30 Jun 2019 08:54:38 +0000 (17:54 +0900)]
tracing: Fix user stack trace "??" output

Commit c5c27a0a5838 ("x86/stacktrace: Remove the pointless ULONG_MAX
marker") removed the ULONG_MAX marker from user stack trace entries, but
trace_user_stack_print() still relies on the marker and therefore prints
unnecessary "??" entries.

For example:

            less-1911  [001] d..2    34.758944: <user stack trace>
   =>  <00007f16f2295910>
   => ??
   => ??
   => ??
   => ??
   => ??
   => ??
   => ??

The user stack trace code zeroes the storage before saving the stack, so if
the trace is shorter than the maximum number of entries it can terminate
the print loop if a zero entry is detected.
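
For illustration only (not the actual diff; the struct and field names are
assumptions about the ftrace user-stack event), the print loop can bail out
on the first zero entry roughly like this:

    for (i = 0; i < FTRACE_STACK_ENTRIES; i++) {
        unsigned long ip = field->caller[i];

        /* storage was zeroed before saving, so a zero entry
         * marks the end of a short trace */
        if (!ip)
            break;

        trace_seq_printf(s, " => <%016lx>\n", ip);
    }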

Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Fixes: 4285f2fcef80 ("tracing: Remove the ULONG_MAX stack trace hackery")
Signed-off-by: Eiichi Tsukata <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
5 years agoRevert "drm/i915: Update description of i915.enable_guc modparam"
Tvrtko Ursulin [Fri, 19 Jul 2019 09:48:45 +0000 (10:48 +0100)]
Revert "drm/i915: Update description of i915.enable_guc modparam"

This reverts commit 0629d4da1f159778063767fb0ac1c951034c5477.

If the GuC firmware is not present on the filesystem, the driver crashes
the machine on boot.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Acked-by: Joonas Lahtinen <[email protected]>
Fixes: 0629d4da1f15 ("drm/i915: Update description of i915.enable_guc modparam")
Cc: Tvrtko Ursulin <[email protected]>
Cc: Michal Wajdeczko <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: Joonas Lahtinen <[email protected]>
Cc: Rodrigo Vivi <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Daniele Ceraolo Spurio <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: [email protected]
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agoRevert "drm/i915/guc: Turn on GuC/HuC auto mode"
Tvrtko Ursulin [Fri, 19 Jul 2019 09:48:44 +0000 (10:48 +0100)]
Revert "drm/i915/guc: Turn on GuC/HuC auto mode"

This reverts commit f774f09649192f326fa030564afd3f8f5d82c1e4.

If the GuC firmware is not present on the filesystem, the driver crashes
the machine on boot.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Acked-by: Joonas Lahtinen <[email protected]>
Fixes: f774f0964919 ("drm/i915/guc: Turn on GuC/HuC auto mode")
Cc: Michal Wajdeczko <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: Joonas Lahtinen <[email protected]>
Cc: Rodrigo Vivi <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Daniele Ceraolo Spurio <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: [email protected]
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915/icl: Add Wa_1409178092
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:24 +0000 (19:06 +0100)]
drm/i915/icl: Add Wa_1409178092

We were missing this workaround, which can cause hangs if fine-grained
coherency is used.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915/icl: Verify engine workarounds in GEN8_L3SQCREG4
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:23 +0000 (19:06 +0100)]
drm/i915/icl: Verify engine workarounds in GEN8_L3SQCREG4

Having fixed the incorrect MCR programming in an earlier patch, we can now
stop ignoring the read back of GEN8_L3SQCREG4 during engine workaround
verification.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915: Skip CS verification of L3 bank registers
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:22 +0000 (19:06 +0100)]
drm/i915: Skip CS verification of L3 bank registers

Access to the 0xb100 - 0xb3ff mmio range is controlled by the MCR selector,
which only affects CPU MMIO. Therefore these registers cannot be reliably
read with MI_SRM from the command streamer, so skip their verification.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915: Fix and improve MCR selection logic
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:21 +0000 (19:06 +0100)]
drm/i915: Fix and improve MCR selection logic

A couple issues were present in this code:

1.
fls() usage was incorrect, causing an off-by-one in the subslice mask
lookup. In other words, a subslice mask of all zeroes was always used
(the mask of a slice which is not present, or even an out-of-bounds array
access), rendering the checks in wa_init_mcr either futile or random.

2.
The condition in the WARN_ON was not correct: it performed a bitwise AND
between a positive mask (present subslices) and a negative mask (disabled
L3 banks).

This means that with corrected fls() usage the assert would always fail
incorrectly.

We could fix this by inverting the fuse bits in the check, but instead do
one better and improve the code so it not only asserts, but finds the
first common index between the two masks and only warns if no such index
can be found.
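
A sketch of the v3 approach (variable names are illustrative, not the
actual wa_init_mcr() code): both masks are treated as "present" masks, the
first common bit is picked, and the warning only fires when the
intersection is empty:

    /* subslice_mask: subslices present per fuses;
     * l3_en: L3 banks enabled, i.e. the inverted "disabled" fuse field */
    unsigned int common = subslice_mask & l3_en;
    unsigned int subslice;

    if (WARN_ON(!common)) {
        /* no usable common index, fall back to subslice 0 */
        subslice = 0;
    } else {
        subslice = __ffs(common);   /* first common index */
    }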

v2:
 * Simplify the check for logic and readability.
 * Improve the commentary explaining what is really happening, i.e. what
   the assert is really trying to check and why.

v3:
 * Find first common index instead of just asserting.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Fixes: fe864b76c2ab ("drm/i915: Implement WaProgramMgsrForL3BankSpecificMmioReads")
Reviewed-by: Chris Wilson <[email protected]> # v1
Cc: Michał Winiarski <[email protected]>
Cc: Stuart Summers <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915: Trust programmed MCR in read_subslice_reg
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:20 +0000 (19:06 +0100)]
drm/i915: Trust programmed MCR in read_subslice_reg

Instead of re-calculating the MCR selector in read_subslice_reg, do a
read-modify-write on its existing value and restore it when done.

This consolidates MCR programming to one place for cnl+, and avoids
re-calculating its default value on older platforms during hangcheck.
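
A rough sketch of the read-modify-write (register and helper names follow
i915 conventions, but the snippet is illustrative rather than the actual
patch):

    u32 old_mcr, mcr, val;

    old_mcr = mcr = intel_uncore_read_fw(uncore, GEN8_MCR_SELECTOR);
    mcr &= ~(GEN8_MCR_SLICE_MASK | GEN8_MCR_SUBSLICE_MASK);
    mcr |= GEN8_MCR_SLICE(slice) | GEN8_MCR_SUBSLICE(subslice);
    intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, mcr);

    val = intel_uncore_read_fw(uncore, reg);

    /* restore whatever was programmed before, instead of recomputing it */
    intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, old_mcr);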

Signed-off-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915: Fix GEN8_MCR_SELECTOR programming
Tvrtko Ursulin [Wed, 17 Jul 2019 18:06:19 +0000 (19:06 +0100)]
drm/i915: Fix GEN8_MCR_SELECTOR programming

fls() returns bit positions starting from one for the lsb, while the MCR
register expects zero-based (sub)slice addressing.

Incorrect MCR programming can have the effect of directing MMIO reads of
registers in the 0xb100-0xb3ff range to an invalid subslice, returning
zeroes instead of the actual content.
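
An illustrative sketch of the off-by-one (field names are assumptions, not
the exact patch):

    /* fls() is 1-based (fls(0x1) == 1) while the MCR selector is 0-based,
     * so subtract one when converting a mask into a (sub)slice index */
    slice = fls(sseu->slice_mask) - 1;
    subslice = fls(sseu->subslice_mask[slice]) - 1;
    mcr = GEN8_MCR_SLICE(slice) | GEN8_MCR_SUBSLICE(subslice);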

Signed-off-by: Tvrtko Ursulin <[email protected]>
Fixes: 1e40d4aea57b ("drm/i915/cnl: Implement WaProgramMgsrForCorrectSliceSpecificMmioReads")
Reviewed-by: Chris Wilson <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/i915: Remove set but not used variable 'src_y'
YueHaibing [Fri, 19 Jul 2019 02:41:00 +0000 (02:41 +0000)]
drm/i915: Remove set but not used variable 'src_y'

Fixes gcc '-Wunused-but-set-variable' warning:

drivers/gpu/drm/i915/display/intel_sprite.c: In function 'g4x_sprite_check_scaling':
drivers/gpu/drm/i915/display/intel_sprite.c:1494:13: warning:
 variable 'src_y' set but not used [-Wunused-but-set-variable]

Reported-by: Hulk Robot <[email protected]>
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Ville Syrjälä <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device
Fugang Duan [Fri, 19 Jul 2019 09:26:48 +0000 (17:26 +0800)]
dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

dma_map_sg() may use a swiotlb bounce buffer when the kernel command line
includes "swiotlb=force" or when the dma_addr is outside the dev->dma_mask
range.  After the DMA transfer from the device to memory completes, the
user calls dma_sync_sg_for_cpu() to sync with the DMA buffer and copies
the data back to the original virtual buffer.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical address,
not the original physical address from sg_phys(sg).

dma_direct_sync_sg_for_device() also has the same issue, correct it as
well.
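
The idea, sketched for the for_cpu direction (helper names are from the
5.3-era dma-direct/swiotlb code; treat this as an outline rather than the
exact hunk):

    for_each_sg(sgl, sg, nents, i) {
        /* the address the device actually used, which may point at a
         * swiotlb bounce buffer, rather than sg_phys(sg) */
        phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

        if (unlikely(is_swiotlb_buffer(paddr)))
            swiotlb_tbl_sync_single(dev, paddr, sg->length, dir,
                                    SYNC_FOR_CPU);

        if (!dev_is_dma_coherent(dev))
            arch_sync_dma_for_cpu(dev, paddr, sg->length, dir);
    }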

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
Signed-off-by: Fugang Duan <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
5 years agodrm/i915/execlists: Cancel breadcrumb on preempting the virtual engine
Chris Wilson [Tue, 16 Jul 2019 12:49:30 +0000 (13:49 +0100)]
drm/i915/execlists: Cancel breadcrumb on preempting the virtual engine

As we unwind the requests for a preemption event, we return a virtual
request back to its original virtual engine (so that it is available for
execution on any of its siblings). In the process, this means that its
breadcrumb should no longer be associated with the original physical
engine, and so we are forced to decouple it. Previously, as the request
could not complete without our awareness, we would move it to the next
real engine without any danger. However, preempt-to-busy allowed for
requests to continue on the HW and complete in the background as we
unwound, which meant that we could end up retiring the request before
fixing up the breadcrumb link.

[51679.517943] INFO: trying to register non-static key.
[51679.517956] the code is fine but needs lockdep annotation.
[51679.517960] turning off the locking correctness validator.
[51679.517966] CPU: 0 PID: 3270 Comm: kworker/u8:0 Tainted: G     U            5.2.0+ #717
[51679.517971] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
[51679.518012] Workqueue: i915 retire_work_handler [i915]
[51679.518017] Call Trace:
[51679.518026]  dump_stack+0x67/0x90
[51679.518031]  register_lock_class+0x52c/0x540
[51679.518038]  ? find_held_lock+0x2d/0x90
[51679.518042]  __lock_acquire+0x68/0x1800
[51679.518047]  ? find_held_lock+0x2d/0x90
[51679.518073]  ? __i915_sw_fence_complete+0xff/0x1c0 [i915]
[51679.518079]  lock_acquire+0x90/0x170
[51679.518105]  ? i915_request_cancel_breadcrumb+0x29/0x160 [i915]
[51679.518112]  _raw_spin_lock+0x27/0x40
[51679.518138]  ? i915_request_cancel_breadcrumb+0x29/0x160 [i915]
[51679.518165]  i915_request_cancel_breadcrumb+0x29/0x160 [i915]
[51679.518199]  i915_request_retire+0x43f/0x530 [i915]
[51679.518232]  retire_requests+0x4d/0x60 [i915]
[51679.518263]  i915_retire_requests+0xdf/0x1f0 [i915]
[51679.518294]  retire_work_handler+0x4c/0x60 [i915]
[51679.518301]  process_one_work+0x22c/0x5c0
[51679.518307]  worker_thread+0x37/0x390
[51679.518311]  ? process_one_work+0x5c0/0x5c0
[51679.518316]  kthread+0x116/0x130
[51679.518320]  ? kthread_create_on_node+0x40/0x40
[51679.518325]  ret_from_fork+0x24/0x30
[51679.520177] ------------[ cut here ]------------
[51679.520189] list_del corruption, ffff88883675e2f0->next is LIST_POISON1 (dead000000000100)

Fixes: 22b7a426bbe1 ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Reviewed-by: Tvrtko Ursulin <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agoInput: alps - fix a mismatch between a condition check and its comment
Hui Wang [Fri, 19 Jul 2019 09:38:58 +0000 (12:38 +0300)]
Input: alps - fix a mismatch between a condition check and its comment

In the function alps_is_cs19_trackpoint(), we check whether param[1] is
in the 0x20~0x2f range, but the code written for this check is not
correct:
(param[1] & 0x20) does not mean param[1] is in the range 0x20~0x2f;
it is also true when param[1] is in the range 0x30~0x3f, 0x60~0x6f...

Now fix it with a new condition check, ((param[1] & 0xf0) == 0x20).
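
For illustration, the difference between the two checks:

    /* wrong: also true for 0x30~0x3f, 0x60~0x6f, ... */
    if (param[1] & 0x20)
        return true;

    /* right: true only for 0x20~0x2f */
    if ((param[1] & 0xf0) == 0x20)
        return true;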

Fixes: 7e4935ccc323 ("Input: alps - don't handle ALPS cs19 trackpoint-only device")
Cc: [email protected]
Signed-off-by: Hui Wang <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
5 years agoInput: psmouse - fix build error of multiple definition
YueHaibing [Tue, 16 Jul 2019 18:17:20 +0000 (20:17 +0200)]
Input: psmouse - fix build error of multiple definition

trackpoint_detect() should be static inline while
CONFIG_MOUSE_PS2_TRACKPOINT is not set; otherwise the build fails:

drivers/input/mouse/alps.o: In function `trackpoint_detect':
alps.c:(.text+0x8e00): multiple definition of `trackpoint_detect'
drivers/input/mouse/psmouse-base.o:psmouse-base.c:(.text+0x1b50): first defined here
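
The usual pattern for such a header stub looks roughly like this (a
sketch; the return value of the disabled stub is an assumption):

    #ifdef CONFIG_MOUSE_PS2_TRACKPOINT
    int trackpoint_detect(struct psmouse *psmouse, bool set_properties);
    #else
    static inline int trackpoint_detect(struct psmouse *psmouse,
                                        bool set_properties)
    {
        return -ENOSYS;
    }
    #endif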

Reported-by: Hulk Robot <[email protected]>
Fixes: 55e3d9224b60 ("Input: psmouse - allow disabing certain protocol extensions")
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
5 years agoInput: applespi - remove set but not used variables 'sts'
Mao Wenan [Tue, 16 Jul 2019 18:16:28 +0000 (20:16 +0200)]
Input: applespi - remove set but not used variables 'sts'

Fixes gcc '-Wunused-but-set-variable' warning:

drivers/input/keyboard/applespi.c: In function applespi_set_bl_level:
drivers/input/keyboard/applespi.c:902:6: warning: variable sts set but not used [-Wunused-but-set-variable]

Fixes: b426ac0452093d ("Input: add Apple SPI keyboard and trackpad driver")
Signed-off-by: Mao Wenan <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
5 years agoInput: add Apple SPI keyboard and trackpad driver
Ronald Tschalär [Mon, 15 Jul 2019 17:30:11 +0000 (10:30 -0700)]
Input: add Apple SPI keyboard and trackpad driver

The keyboard and trackpad on recent MacBooks (since 8,1) and
MacBook Pros (13,* and 14,*) are attached to an SPI controller instead
of USB, as previously.
documented and hence has been reverse engineered. As a consequence there
are still a number of unknown fields and commands. However, the known
parts have been working well and received extensive testing and use.

In order for this driver to work, the proper SPI drivers need to be
loaded too; for MB8,1 these are spi_pxa2xx_platform and spi_pxa2xx_pci;
for all others they are spi_pxa2xx_platform and intel_lpss_pci.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=99891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=108331
Signed-off-by: Ronald Tschalär <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
5 years agox86/hyper-v: Zero out the VP ASSIST PAGE on allocation
Dexuan Cui [Fri, 19 Jul 2019 03:22:35 +0000 (03:22 +0000)]
x86/hyper-v: Zero out the VP ASSIST PAGE on allocation

The VP ASSIST PAGE is an "overlay" page (see Hyper-V TLFS's Section
5.2.1 "GPA Overlay Pages" for the details) and here is an excerpt:

"The hypervisor defines several special pages that "overlay" the guest's
 Guest Physical Addresses (GPA) space. Overlays are addressed GPA but are
 not included in the normal GPA map maintained internally by the hypervisor.
 Conceptually, they exist in a separate map that overlays the GPA map.

 If a page within the GPA space is overlaid, any SPA page mapped to the
 GPA page is effectively "obscured" and generally unreachable by the
 virtual processor through processor memory accesses.

 If an overlay page is disabled, the underlying GPA page is "uncovered",
 and an existing mapping becomes accessible to the guest."

SPA = System Physical Address = the final real physical address.

When a CPU (e.g. CPU1) is onlined, hv_cpu_init() allocates the VP ASSIST
PAGE and enables the EOI optimization for this CPU by writing the MSR
HV_X64_MSR_VP_ASSIST_PAGE. From now on, hvp->apic_assist belongs to the
special SPA page, and this CPU *always* uses hvp->apic_assist (which is
shared with the hypervisor) to decide if it needs to write the EOI MSR.

When a CPU is offlined, on the outgoing CPU:
1. hv_cpu_die() disables the EOI optimization for this CPU, and from
   now on hvp->apic_assist belongs to the original "normal" SPA page;
2. the remaining work of stopping this CPU is done;
3. this CPU is completely stopped.

Between 1 and 3, this CPU can still receive interrupts (e.g. reschedule
IPIs from CPU0, and Local APIC timer interrupts), and this CPU *must* write
the EOI MSR for every interrupt received, otherwise the hypervisor may not
deliver further interrupts, which may be needed to completely stop the CPU.

So, after the EOI optimization is disabled in hv_cpu_die(), it's required
that bit 0 of hvp->apic_assist is zero, which is not guaranteed by the
current allocation mode because it lacks __GFP_ZERO. As a consequence the
bit might be set and interrupt handling would not write the EOI MSR, causing
interrupt delivery to become stuck.

Add the missing __GFP_ZERO to the allocation.
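
For illustration only (this is not the actual allocation call in
hv_cpu_init(); it merely shows the effect of requesting zeroed memory):

    /* allocate the VP ASSIST PAGE pre-zeroed, so that bit 0 of
     * hvp->apic_assist ("No EOI required") starts out clear */
    struct page *pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
    struct hv_vp_assist_page *hvp = pg ? page_address(pg) : NULL;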

Note 1: after the "normal" SPA page is allocated and zeroed out, neither the
hypervisor nor the guest writes into the page, so the page remains filled
with zeros.

Note 2: see Section 10.3.5 "EOI Assist" for the details of the EOI
optimization. When the optimization is enabled, the guest can still write
the EOI MSR register irrespective of the "No EOI required" value, but
that's slower than the optimized assist based variant.

Fixes: ba696429d290 ("x86/hyper-v: Implement EOI assist")
Signed-off-by: Dexuan Cui <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/
5 years agoMerge branch 'linux-5.3' of git://github.com/skeggsb/linux into drm-next
Dave Airlie [Fri, 19 Jul 2019 07:28:10 +0000 (17:28 +1000)]
Merge branch 'linux-5.3' of git://github.com/skeggsb/linux into drm-next

nouveau fixes and TU116 enablement.

Signed-off-by: Dave Airlie <[email protected]>
From: Ben Skeggs <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/CACAvsv5hZ3B4S9cVTPd2-Ug7dMSasLPJrWMyoDo4MOg8cbXWkA@mail.gmail.com
5 years agoMerge tag 'drm-next-5.3-2019-07-18' of git://people.freedesktop.org/~agd5f/linux...
Dave Airlie [Fri, 19 Jul 2019 07:21:48 +0000 (17:21 +1000)]
Merge tag 'drm-next-5.3-2019-07-18' of git://people.freedesktop.org/~agd5f/linux into drm-next

drm-next-5.3-2019-07-18:

amdgpu:
- Navi DC fix for secondary adapters
- Fix Navi flickering with high res panels
- Navi SMU fixes
- Vega20 SMU fixes
- Fixes for audio hotplug on HG systems
- Fix for potential integer overflows on large buffer
  migrations
- debugfs fixes for umr
- Various other small fixes

amdkfd:
- Apply noretry setting consistently
- Fix hang in eviction
- Properly clean up GWS on uninit

UAPI:
- clarify a comment on ctx priority

Signed-off-by: Dave Airlie <[email protected]>
From: Alex Deucher <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
5 years agodrm/nouveau/secboot/gp102-: remove WAR for SEC2 RTOS start bug
Ben Skeggs [Tue, 2 Jul 2019 06:29:40 +0000 (16:29 +1000)]
drm/nouveau/secboot/gp102-: remove WAR for SEC2 RTOS start bug

Appears to be fixed by "flcn/gp102-: improve implementation of
bind_context() on SEC2/GSP".

Tested on GP10[24678] and GV100.

Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/flcn/gp102-: improve implementation of bind_context() on SEC2/GSP
Ben Skeggs [Mon, 1 Jul 2019 21:52:15 +0000 (07:52 +1000)]
drm/nouveau/flcn/gp102-: improve implementation of bind_context() on SEC2/GSP

Fixes various issues encountered while attempting to initialise ACR.

Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau: fix memory leak in nouveau_conn_reset()
Yongxin Liu [Mon, 1 Jul 2019 01:46:22 +0000 (09:46 +0800)]
drm/nouveau: fix memory leak in nouveau_conn_reset()

In nouveau_conn_reset(), if connector->state is true,
__drm_atomic_helper_connector_destroy_state() will be called,
but the memory pointed to by asyc isn't freed. The memory leak happens
in the following __drm_atomic_helper_connector_reset() call,
where the newly allocated asyc->state is assigned to connector->state.

So use nouveau_conn_atomic_destroy_state() instead of
__drm_atomic_helper_connector_destroy_state() to free the "old" asyc.
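
Sketched substitution (the helper signatures follow the names used above
and are assumptions, not a verbatim diff):

    if (connector->state)
        /* frees the asyc embedding the old state, unlike the bare
         * __drm_atomic_helper_connector_destroy_state() call */
        nouveau_conn_atomic_destroy_state(connector, connector->state);

    __drm_atomic_helper_connector_reset(connector, &asyc->state);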

Here is the log showing the memory leak.

unreferenced object 0xffff8c5480483c80 (size 192):
  comm "kworker/0:2", pid 188, jiffies 4294695279 (age 53.179s)
  hex dump (first 32 bytes):
    00 f0 ba 7b 54 8c ff ff 00 00 00 00 00 00 00 00  ...{T...........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000005005c0d0>] kmem_cache_alloc_trace+0x195/0x2c0
    [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau]
    [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau]
    [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau]
    [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau]
    [<00000000c924699b>] nouveau_drm_device_init+0x182/0x7f0 [nouveau]
    [<00000000cc029436>] nouveau_drm_probe+0x20c/0x2c0 [nouveau]
    [<000000007e961c3e>] local_pci_probe+0x47/0xa0
    [<00000000da14d569>] work_for_cpu_fn+0x1a/0x30
    [<0000000028da4805>] process_one_work+0x27c/0x660
    [<000000001d415b04>] worker_thread+0x22b/0x3f0
    [<0000000003b69f1f>] kthread+0x12f/0x150
    [<00000000c94c29b7>] ret_from_fork+0x3a/0x50

Signed-off-by: Yongxin Liu <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/dmem: missing mutex_lock in error path
Ralph Campbell [Fri, 14 Jun 2019 20:20:03 +0000 (13:20 -0700)]
drm/nouveau/dmem: missing mutex_lock in error path

In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before
calling nouveau_dmem_chunk_alloc() as shown when CONFIG_PROVE_LOCKING
is enabled:

[ 1294.871933] =====================================
[ 1294.876656] WARNING: bad unlock balance detected!
[ 1294.881375] 5.2.0-rc3+ #5 Not tainted
[ 1294.885048] -------------------------------------
[ 1294.889773] test-malloc-vra/6299 is trying to release lock (&drm->dmem->mutex) at:
[ 1294.897482] [<ffffffffa01a220f>] nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.905782] but there are no more locks to release!
[ 1294.910690]
[ 1294.910690] other info that might help us debug this:
[ 1294.917249] 1 lock held by test-malloc-vra/6299:
[ 1294.921881]  #0: 0000000016e10454 (&mm->mmap_sem#2){++++}, at: nouveau_svmm_bind+0x142/0x210 [nouveau]
[ 1294.931313]
[ 1294.931313] stack backtrace:
[ 1294.935702] CPU: 4 PID: 6299 Comm: test-malloc-vra Not tainted 5.2.0-rc3+ #5
[ 1294.942786] Hardware name: ASUS X299-A/PRIME X299-A, BIOS 1401 05/21/2018
[ 1294.949590] Call Trace:
[ 1294.952059]  dump_stack+0x7c/0xc0
[ 1294.955469]  ? nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.962213]  print_unlock_imbalance_bug.cold.52+0xca/0xcf
[ 1294.967641]  lock_release+0x306/0x380
[ 1294.971383]  ? nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.978089]  ? lock_downgrade+0x2d0/0x2d0
[ 1294.982121]  ? find_held_lock+0xac/0xd0
[ 1294.985979]  __mutex_unlock_slowpath+0x8f/0x3f0
[ 1294.990540]  ? wait_for_completion+0x230/0x230
[ 1294.995002]  ? rwlock_bug.part.2+0x60/0x60
[ 1294.999197]  nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1295.005751]  ? page_mapping+0x98/0x110
[ 1295.009511]  migrate_vma+0xa74/0x1090
[ 1295.013186]  ? move_to_new_page+0x480/0x480
[ 1295.017400]  ? __kmalloc+0x153/0x300
[ 1295.021052]  ? nouveau_dmem_migrate_vma+0xd8/0x1e0 [nouveau]
[ 1295.026796]  nouveau_dmem_migrate_vma+0x157/0x1e0 [nouveau]
[ 1295.032466]  ? nouveau_dmem_init+0x490/0x490 [nouveau]
[ 1295.037612]  ? vmacache_find+0xc2/0x110
[ 1295.041537]  nouveau_svmm_bind+0x1b4/0x210 [nouveau]
[ 1295.046583]  ? nouveau_svm_fault+0x13e0/0x13e0 [nouveau]
[ 1295.051912]  drm_ioctl_kernel+0x14d/0x1a0
[ 1295.055930]  ? drm_setversion+0x330/0x330
[ 1295.059971]  drm_ioctl+0x308/0x530
[ 1295.063384]  ? drm_version+0x150/0x150
[ 1295.067153]  ? find_held_lock+0xac/0xd0
[ 1295.070996]  ? __pm_runtime_resume+0x3f/0xa0
[ 1295.075285]  ? mark_held_locks+0x29/0xa0
[ 1295.079230]  ? _raw_spin_unlock_irqrestore+0x3c/0x50
[ 1295.084232]  ? lockdep_hardirqs_on+0x17d/0x250
[ 1295.088768]  nouveau_drm_ioctl+0x9a/0x100 [nouveau]
[ 1295.093661]  do_vfs_ioctl+0x137/0x9a0
[ 1295.097341]  ? ioctl_preallocate+0x140/0x140
[ 1295.101623]  ? match_held_lock+0x1b/0x230
[ 1295.105646]  ? match_held_lock+0x1b/0x230
[ 1295.109660]  ? find_held_lock+0xac/0xd0
[ 1295.113512]  ? __do_page_fault+0x324/0x630
[ 1295.117617]  ? lock_downgrade+0x2d0/0x2d0
[ 1295.121648]  ? mark_held_locks+0x79/0xa0
[ 1295.125583]  ? handle_mm_fault+0x352/0x430
[ 1295.129687]  ksys_ioctl+0x60/0x90
[ 1295.133020]  ? mark_held_locks+0x29/0xa0
[ 1295.136964]  __x64_sys_ioctl+0x3d/0x50
[ 1295.140726]  do_syscall_64+0x68/0x250
[ 1295.144400]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 1295.149465] RIP: 0033:0x7f1a3495809b
[ 1295.153053] Code: 0f 1e fa 48 8b 05 ed bd 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd bd 0c 00 f7 d8 64 89 01 48
[ 1295.171850] RSP: 002b:00007ffef7ed1358 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1295.179451] RAX: ffffffffffffffda RBX: 00007ffef7ed1628 RCX: 00007f1a3495809b
[ 1295.186601] RDX: 00007ffef7ed13b0 RSI: 0000000040406449 RDI: 0000000000000004
[ 1295.193759] RBP: 00007ffef7ed13b0 R08: 0000000000000000 R09: 000000000157e770
[ 1295.200917] R10: 000000000151c010 R11: 0000000000000246 R12: 0000000040406449
[ 1295.208083] R13: 0000000000000004 R14: 0000000000000000 R15: 0000000000000000

Reacquire the lock before continuing to the next page.
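
A minimal sketch of the intended lock balance (the loop body and list name
are illustrative, not the real allocator):

    mutex_lock(&drm->dmem->mutex);
    while (npages) {
        if (list_empty(&drm->dmem->chunk_free)) {
            /* chunk allocation can sleep, so drop the mutex... */
            mutex_unlock(&drm->dmem->mutex);
            ret = nouveau_dmem_chunk_alloc(drm);
            /* ...and reacquire it before touching dmem again,
             * even when the allocation failed */
            mutex_lock(&drm->dmem->mutex);
            if (ret)
                break;
            continue;
        }
        /* hand out pages from the current chunk, decrement npages */
    }
    mutex_unlock(&drm->dmem->mutex);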

Signed-off-by: Ralph Campbell <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/hwmon: return EINVAL if the GPU is powered down for sensors reads
Karol Herbst [Tue, 18 Jun 2019 11:01:33 +0000 (13:01 +0200)]
drm/nouveau/hwmon: return EINVAL if the GPU is powered down for sensors reads

fixes bogus values userspace gets from hwmon while the GPU is powered down

Signed-off-by: Karol Herbst <[email protected]>
Reviewed-by: Rhys Kidd <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau: fix bogus GPL-2 license header
Ben Skeggs [Fri, 5 Jul 2019 05:11:42 +0000 (15:11 +1000)]
drm/nouveau: fix bogus GPL-2 license header

The bulk SPDX addition made all these files into GPL-2.0 licensed files.
However, the remainder of the project is MIT-licensed; these files
were simply missing the boilerplate and got caught up in the global update.

Fixes: 96ac6d4351004 (treewide: Add SPDX license identifier - Kbuild)
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau: fix bogus GPL-2 license header
Ilia Mirkin [Thu, 20 Jun 2019 00:13:43 +0000 (20:13 -0400)]
drm/nouveau: fix bogus GPL-2 license header

The bulk SPDX addition made all these files into GPL-2.0 licensed files.
However, the remainder of the project is MIT-licensed; these files
(primarily header files) were simply missing the boilerplate and got
caught up in the global update.

Fixes: b24413180f5 (License cleanup: add SPDX GPL-2.0 license identifier to files with no license)
Signed-off-by: Ilia Mirkin <[email protected]>
Acked-by: Emil Velikov <[email protected]>
Acked-by: Karol Herbst <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/i2c: Enable i2c pads & busses during preinit
Lyude Paul [Wed, 26 Jun 2019 18:10:27 +0000 (14:10 -0400)]
drm/nouveau/i2c: Enable i2c pads & busses during preinit

It turns out that while disabling i2c bus access from software when the
GPU is suspended was a step in the right direction with:

commit 342406e4fbba ("drm/nouveau/i2c: Disable i2c bus access after
->fini()")

We also ended up accidentally breaking the vbios init scripts on some
older Tesla GPUs, as apparently said scripts can actually use the i2c
bus. Since these scripts are executed before initializing any
subdevices, we end up failing to acquire access to the i2c bus which has
left a number of cards with their fan controllers uninitialized. Luckily
this doesn't break hardware - it just means the fan gets stuck at 100%.

This also means that we've always been using our i2c busses before
initializing them during the init scripts for older GPUs, we just didn't
notice it until we started preventing them from being used until init.
It's pretty impressive this never caused us any issues before!

So, fix this by initializing our i2c pad and busses during subdev
pre-init. We skip initializing aux busses during pre-init, as those are
guaranteed to only ever be used by nouveau for DP aux transactions.

Signed-off-by: Lyude Paul <[email protected]>
Tested-by: Marc Meledandri <[email protected]>
Fixes: 342406e4fbba ("drm/nouveau/i2c: Disable i2c bus access after ->fini()")
Cc: [email protected]
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/disp/tu102-: wire up scdc parameter setter
Ben Skeggs [Tue, 18 Jun 2019 08:40:16 +0000 (18:40 +1000)]
drm/nouveau/disp/tu102-: wire up scdc parameter setter

Regs seem valid here still, and tested on TU116.

Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/core: recognise TU116 chipset
Ben Skeggs [Mon, 17 Jun 2019 02:52:48 +0000 (12:52 +1000)]
drm/nouveau/core: recognise TU116 chipset

Modesetting only, still waiting on ACR/GR firmware from NVIDIA for Turing
graphics/compute bring-up.

Each subsystem was compared with traces, along with various tests to check
that things generally work as they should, and appears compatible enough
with the current TU117 code to enable support.

Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/kms: disallow dual-link harder if hdmi connection detected
Ben Skeggs [Tue, 28 May 2019 23:58:18 +0000 (09:58 +1000)]
drm/nouveau/kms: disallow dual-link harder if hdmi connection detected

The fallthrough cases (pre-Fermi) would accidentally allow dual-link pixel
clocks even where they shouldn't be.  This leads to high resolution HDMI
displays, connected via a DVI->HDMI adapter, failing on the original NV50.

Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/disp/nv50-: fix center/aspect-corrected scaling
Ilia Mirkin [Sat, 25 May 2019 22:41:49 +0000 (18:41 -0400)]
drm/nouveau/disp/nv50-: fix center/aspect-corrected scaling

Previously center scaling would get scaling applied to it (when it was
only supposed to center the image), and aspect-corrected scaling did not
always correctly pick whether to reduce width or height for a particular
combination of inputs/outputs.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110660
Signed-off-by: Ilia Mirkin <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/disp/nv50-: force scaler for any non-default LVDS/eDP modes
Ilia Mirkin [Sat, 25 May 2019 22:41:48 +0000 (18:41 -0400)]
drm/nouveau/disp/nv50-: force scaler for any non-default LVDS/eDP modes

Higher layers tend to add a lot of modes not actually in the EDID, such
as the standard DMT modes. Changing this would be extremely intrusive to
everyone, so just force the scaler more often. There are no practical
cases we're aware of where a LVDS/eDP panel has multiple resolutions
exposed, and i915 already does it this way.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110660
Signed-off-by: Ilia Mirkin <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agodrm/nouveau/mcp89/mmu: Use mcp77_mmu_new instead of g84_mmu_new on MCP89.
Timo Wiren [Wed, 22 May 2019 17:01:06 +0000 (20:01 +0300)]
drm/nouveau/mcp89/mmu: Use mcp77_mmu_new instead of g84_mmu_new on MCP89.

Fix a crash or broken depth testing in all OpenGL applications that use the
depth buffer on MCP89 (GeForce 320M), as seen on a MacBook Pro Late 2010.

The bug is tracked in https://bugs.freedesktop.org/show_bug.cgi?id=108500

Signed-off-by: Timo Wiren <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
5 years agocsky: Fixup abiv1 memset error
Guo Ren [Fri, 28 Jun 2019 12:39:46 +0000 (20:39 +0800)]
csky: Fixup abiv1 memset error

The current memset implementation in abiv1 is wrong and will cause
unaligned accesses. Just remove it and use the generic one. This patch will
cause a performance degradation; we will improve it with a new design in
the next patchset.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agocsky: Improve tlb operation with help of asid
Guo Ren [Tue, 18 Jun 2019 12:34:35 +0000 (20:34 +0800)]
csky: Improve tlb operation with help of asid

There are two generations of TLB operation instructions for C-SKY.
The first generation uses the mcr register and needs software to do more
work; the second generation uses specific instructions, e.g.:
 tlbi.va, tlbi.vas, tlbi.alls

We implemented the following functions:

 - flush_tlb_range (a range of entries)
 - flush_tlb_page (one entry)

 The above functions use the asid from vma->mm to invalidate TLB entries,
 and we can use the tlbi.vas instruction on the newest generation csky CPUs.

 - flush_tlb_kernel_range
 - flush_tlb_one

 The above functions don't care about the asid and invalidate the TLB
 entries only by vpn, and we can use the tlbi.vaas instruction on the
 newest generation csky CPUs.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agocsky: Use generic asid algorithm to implement switch_mm
Guo Ren [Tue, 18 Jun 2019 12:33:32 +0000 (20:33 +0800)]
csky: Use generic asid algorithm to implement switch_mm

Use the Linux generic asid/vmid algorithm to implement the csky
switch_mm function. The algorithm is from arm and works with SMP
systems. It helps reduce TLB flushes for switch_mm on task/vm
switches.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agocsky: Add new asid lib code from arm
Guo Ren [Tue, 18 Jun 2019 12:06:52 +0000 (20:06 +0800)]
csky: Add new asid lib code from arm

This patch only contains asid helper code from arm, for the next patch
to use.

The asid allocator uses a five-level check to reduce the cost of
switch_mm:

 1. Check if the asid version is the same (the general case).
 2. Check reserved_asid, which is set in the rollover flush_context();
    the key point is to keep the same bit position as the current
    asid version instead of the input version.
 3. Check if the position in the bitmap is free; then it can be set and
    used directly.
 4. find_next_zero_bit() (a small performance cost).
 5. flush_context() (the worst cost, as it increments the current asid
    version).

The check proceeds level by level, and the cost increases with each level.
The reserved_asid and bitmap mechanisms prevent unnecessary
find_next_zero_bit() calls.

The atomic 64-bit asid is also suitable for 32-bit systems and doesn't
cost much in the 1st/2nd/3rd level checks.

The set/clear of mm_cpumask was removed in arm64 compared to arm32.
It seems to have no side effect on current arm64 systems, but from a
software point of view it's wrong. Although csky also doesn't need it,
we add it back for csky.

The asid_per_ctxt is of no use for csky; it reserves the lowest bits for
other uses, maybe trust zone? OK, just keep it in the csky copy.

It seems it could also be used by other archs, and it's worth moving the
asid code to generic code in the future.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Julien Grall <[email protected]>
5 years agocsky: Revert mmu ASID mechanism
Guo Ren [Tue, 18 Jun 2019 09:20:10 +0000 (17:20 +0800)]
csky: Revert mmu ASID mechanism

The current C-SKY ASID mechanism is from mips and doesn't work well
with multiple cores. A per-core ASID mechanism is not suitable for C-SKY
SMP TLB maintenance operations, e.g.: tlbi.vas needs to share the same
asid across all processors and invalidates the TLB entries with that
asid on all cores.

This patch prepares for the new ASID mechanism.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agodt-bindings: csky: Add csky PMU bindings
Mao Han [Tue, 4 Jun 2019 10:54:47 +0000 (18:54 +0800)]
dt-bindings: csky: Add csky PMU bindings

This patch adds documentation describing how to add the pmu node in the
dts.

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Cc: Rob Herring <[email protected]>
5 years agodt-bindings: interrupt-controller: Update csky mpintc
Guo Ren [Thu, 6 Jun 2019 07:37:32 +0000 (15:37 +0800)]
dt-bindings: interrupt-controller: Update csky mpintc

Add trigger type setting for csky,mpintc. The driver can also
support #interrupt-cells <1>, so this doesn't invalidate existing
DTs. Here we only show the complete format.

Signed-off-by: Guo Ren <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Cc: Marc Zyngier <[email protected]>
5 years agocsky: Fixup some error count in 810 & 860.
Guo Ren [Tue, 4 Jun 2019 10:54:48 +0000 (18:54 +0800)]
csky: Fixup some error count in 810 & 860.

The CK810 pmu only supports events with index 0-8 and 0xd; CK860 only
supports events 1~4 and 0xa~0x1b. So do not register unsupported events
as hardware cache events, which may lead to unknown behavior.

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
5 years agocsky: Fix perf record in kernel/user space
Mao Han [Tue, 4 Jun 2019 10:54:49 +0000 (18:54 +0800)]
csky: Fix perf record in kernel/user space

csky_pmu_event_init() is called several times during perf record
initialization. After configuring the event counter for either kernel
space or user space, csky_pmu_event_init() is called twice with no
attr specified, and the configuration is overwritten to sample in
both kernel space and user space. --all-kernel/--all-user is
useless without this patch applied.

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
5 years agocsky: Add pmu interrupt support
Mao Han [Tue, 4 Jun 2019 10:54:46 +0000 (18:54 +0800)]
csky: Add pmu interrupt support

This patch adds an interrupt request and handler for the csky pmu.
perf can record on hardware events with this patch applied.

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
5 years agocsky: Add count-width property for csky pmu
Mao Han [Tue, 4 Jun 2019 10:54:45 +0000 (18:54 +0800)]
csky: Add count-width property for csky pmu

The csky pmu counters may have different I/O widths. When a counter is
smaller than 64 bits and its value is smaller than the old value, the
result is an extremely large delta value. So the sampled value should
be extended to 64 bits to avoid this; the extension width is based on the
count-width property from the dts.
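
One generic way to handle a narrow counter (not necessarily the exact csky
implementation): compute the delta modulo the counter width so a wrapped
value stays small:

    /* count_width comes from the "count-width" dts property */
    u64 mask = (count_width < 64) ? ((1ULL << count_width) - 1) : ~0ULL;
    u64 delta = (new_raw - prev_raw) & mask;

    local64_add(delta, &event->count);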

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
5 years agocsky: Init pmu as a device
Mao Han [Tue, 4 Jun 2019 10:54:44 +0000 (18:54 +0800)]
csky: Init pmu as a device

This patch changes the csky pmu initialization from arch init to
device init. The pmu can be configured with information from the
device tree (pmu device name, irq number, etc.).

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
5 years agocsky: Fixup no panic in kernel for some traps
Guo Ren [Fri, 10 May 2019 09:07:01 +0000 (17:07 +0800)]
csky: Fixup no panic in kernel for some traps

These traps shouldn't happen in the kernel, and we must panic there
rather than send a signal to userspace.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agocsky: Select intc & timer drivers
Guo Ren [Fri, 10 May 2019 04:57:27 +0000 (12:57 +0800)]
csky: Select intc & timer drivers

Let the arch select the interrupt controller and timer drivers
instead of requiring people to select them via menuconfig. This helps a
minimal system boot up.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
5 years agotcp: fix tcp_set_congestion_control() use from bpf hook
Eric Dumazet [Fri, 19 Jul 2019 02:28:14 +0000 (19:28 -0700)]
tcp: fix tcp_set_congestion_control() use from bpf hook

Neal reported incorrect use of ns_capable() from bpf hook.

bpf_setsockopt(...TCP_CONGESTION...)
  -> tcp_set_congestion_control()
   -> ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)
    -> ns_capable_common()
     -> current_cred()
      -> rcu_dereference_protected(current->cred, 1)

Accessing 'current' in bpf context makes no sense, since packets
are processed from softirq context.

As Neal stated: the capability check in tcp_set_congestion_control()
was written assuming a system call context, and then was reused from
a BPF call site.

The fix is to add a new parameter to tcp_set_congestion_control(),
so that the ns_capable() call is only performed under the right
context.
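
Sketch of the signature change (the parameter name is an assumption):

    /* callers running in syscall context pass the result of
     * ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN); the BPF hook
     * passes what is appropriate for its own context instead of
     * touching 'current' from softirq */
    int tcp_set_congestion_control(struct sock *sk, const char *name,
                                   bool load, bool reinit,
                                   bool cap_net_admin);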

Fixes: 91b5b21c7c16 ("bpf: Add support for changing congestion control")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Lawrence Brakmo <[email protected]>
Reported-by: Neal Cardwell <[email protected]>
Acked-by: Neal Cardwell <[email protected]>
Acked-by: Lawrence Brakmo <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agoag71xx: fix return value check in ag71xx_probe()
Wei Yongjun [Fri, 19 Jul 2019 01:22:06 +0000 (01:22 +0000)]
ag71xx: fix return value check in ag71xx_probe()

In case of error, the function of_get_mac_address() returns ERR_PTR()
and never returns NULL. The NULL test in the return value check should
be replaced with IS_ERR().
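
Illustrative fix pattern (not the verbatim hunk):

    mac_addr = of_get_mac_address(np);
    /* returns ERR_PTR() on failure, never NULL, so test with IS_ERR() */
    if (!IS_ERR(mac_addr))
        ether_addr_copy(ndev->dev_addr, mac_addr);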

Fixes: d51b6ce441d3 ("net: ethernet: add ag71xx driver")
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Oleksij Rempel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agoag71xx: fix error return code in ag71xx_probe()
Wei Yongjun [Fri, 19 Jul 2019 01:21:57 +0000 (01:21 +0000)]
ag71xx: fix error return code in ag71xx_probe()

Fix to return error code -ENOMEM from the dmam_alloc_coherent() error
handling case instead of 0, as done elsewhere in this function.
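
Illustrative pattern (variable names are assumptions):

    ag->stop_desc = dmam_alloc_coherent(&pdev->dev,
                                        sizeof(struct ag71xx_desc),
                                        &ag->stop_desc_dma, GFP_KERNEL);
    if (!ag->stop_desc) {
        err = -ENOMEM;          /* previously err was left as 0 here */
        goto err_free;
    }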

Fixes: d51b6ce441d3 ("net: ethernet: add ag71xx driver")
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Oleksij Rempel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agoproc/sysctl: add shared variables for range check
Matteo Croce [Thu, 18 Jul 2019 22:58:50 +0000 (15:58 -0700)]
proc/sysctl: add shared variables for range check

In the sysctl code the proc_dointvec_minmax() function is often used to
validate that a user-supplied value lies within an allowed range.  This
function uses the extra1 and extra2 members from struct ctl_table as the
minimum and maximum allowed values.

Wherever a sysctl handler is declared, every source file contains some
read-only variables holding just an integer whose address is assigned
to the extra1 and extra2 members, so that the sysctl range is enforced.

The special values 0, 1 and INT_MAX are very often used as range
boundaries, leading to duplication of variables like zero=0, one=1 and
int_max=INT_MAX in different source files:

    $ git grep -E '\.extra[12].*&(zero|one|int_max)' |wc -l
    248

Add a const int array containing the most commonly used values, some
macros to refer more easily to the correct array member, and use them
instead of creating a local one for every object file.
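
Sketch of the shared array and macros (close to the change as described
above, but treat the exact names as illustrative):

    /* kernel/sysctl.c */
    const int sysctl_vals[] = { 0, 1, INT_MAX };
    EXPORT_SYMBOL(sysctl_vals);

    /* include/linux/sysctl.h */
    #define SYSCTL_ZERO     ((void *)&sysctl_vals[0])
    #define SYSCTL_ONE      ((void *)&sysctl_vals[1])
    #define SYSCTL_INT_MAX  ((void *)&sysctl_vals[2])

    /* handlers then use e.g.
     *   .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE,
     * instead of a per-file static "zero"/"one" variable */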

This is the bloat-o-meter output comparing the old and new binary
compiled with the default Fedora config:

    # scripts/bloat-o-meter -d vmlinux.o.old vmlinux.o
    add/remove: 2/2 grow/shrink: 0/2 up/down: 24/-188 (-164)
    Data                                         old     new   delta
    sysctl_vals                                    -      12     +12
    __kstrtab_sysctl_vals                          -      12     +12
    max                                           14      10      -4
    int_max                                       16       -     -16
    one                                           68       -     -68
    zero                                         128      28    -100
    Total: Before=20583249, After=20583085, chg -0.00%

[[email protected]: tipc: remove two unused variables]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: fix net/ipv6/sysctl_net_ipv6.c]
[[email protected]: proc/sysctl: make firmware loader table conditional]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: fix fs/eventpoll.c]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matteo Croce <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Acked-by: Kees Cook <[email protected]>
Reviewed-by: Aaron Tomlin <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: migrate: remove unused mode argument
Keith Busch [Thu, 18 Jul 2019 22:58:46 +0000 (15:58 -0700)]
mm: migrate: remove unused mode argument

migrate_page_move_mapping() doesn't use the mode argument.  Remove it
and update callers accordingly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: cleanup 'section number' data types
Dan Williams [Thu, 18 Jul 2019 22:58:43 +0000 (15:58 -0700)]
mm/sparsemem: cleanup 'section number' data types

David points out that there is a mixture of 'int' and 'unsigned long'
usage for section number data types.  Update the memory hotplug path to
use 'unsigned long' consistently for section numbers.

[[email protected]: fix printk format]
Link: http://lkml.kernel.org/r/156107543656.1329419.11505835211949439815.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reported-by: David Hildenbrand <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agolibnvdimm/pfn: stop padding pmem namespaces to section alignment
Dan Williams [Thu, 18 Jul 2019 22:58:40 +0000 (15:58 -0700)]
libnvdimm/pfn: stop padding pmem namespaces to section alignment

Now that the mm core supports section-unaligned hotplug of ZONE_DEVICE
memory, we no longer need to add padding at pfn/dax device creation
time.  The kernel will still honor padding established by older kernels.

Link: http://lkml.kernel.org/r/156092356588.979959.6793371748950931916.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reported-by: Jeff Moyer <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agolibnvdimm/pfn: fix fsdax-mode namespace info-block zero-fields
Dan Williams [Thu, 18 Jul 2019 22:58:36 +0000 (15:58 -0700)]
libnvdimm/pfn: fix fsdax-mode namespace info-block zero-fields

At namespace creation time there is the potential for the "expected to
be zero" fields of a 'pfn' info-block to be filled with indeterminate
data.  While the kernel buffer is zeroed on allocation it is immediately
overwritten by nd_pfn_validate() filling it with the current contents of
the on-media info-block location.  For fields like 'flags' and the
'padding' it potentially means that future implementations cannot rely on
those fields being zero.

In preparation to stop using the 'start_pad' and 'end_trunc' fields for
section alignment, arrange for fields that are not explicitly
initialized to be guaranteed zero.  Bump the minor version to indicate
it is safe to assume the 'padding' and 'flags' are zero.  Otherwise,
this corruption is expected to be benign since all other critical fields
are explicitly initialized.

Note: the cc: stable is about spreading this new policy to as many
kernels as possible, not fixing an issue in those kernels.  It is not
until the change titled "libnvdimm/pfn: Stop padding pmem namespaces to
section alignment" where this improper initialization becomes a problem.
So if someone decides to backport "libnvdimm/pfn: Stop padding pmem
namespaces to section alignment" (which is not tagged for stable), make
sure this pre-requisite is flagged.

Link: http://lkml.kernel.org/r/156092356065.979959.6681003754765958296.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 32ab0a3f5170 ("libnvdimm, pmem: 'struct page' for pmem")
Signed-off-by: Dan Williams <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/devm_memremap_pages: enable sub-section remap
Dan Williams [Thu, 18 Jul 2019 22:58:33 +0000 (15:58 -0700)]
mm/devm_memremap_pages: enable sub-section remap

Teach devm_memremap_pages() about the new sub-section capabilities of
arch_{add,remove}_memory().  Effectively, just replace all usage of
align_start, align_end, and align_size with res->start, res->end, and
resource_size(res).  The existing sanity check will still make sure that
the two separate remap attempts do not collide within a sub-section (2MB
on x86).

Link: http://lkml.kernel.org/r/156092355542.979959.10060071713397030576.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Michal Hocko <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: document ZONE_DEVICE memory-model implications
Dan Williams [Thu, 18 Jul 2019 22:58:29 +0000 (15:58 -0700)]
mm: document ZONE_DEVICE memory-model implications

Explain the general mechanisms of 'ZONE_DEVICE' pages and list the users
of 'devm_memremap_pages()'.

[[email protected]: update ZONE_DEVICE memory model documentation]
Link: http://lkml.kernel.org/r/156109575458.1409767.1885676287099277666.stgit@dwillia2-desk3.amr.corp.intel.com
Link: http://lkml.kernel.org/r/156092354985.979959.15763234410543451710.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reported-by: Mike Rapoport <[email protected]>
Reviewed-by: Mike Rapoport <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Jonathan Corbet <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: support sub-section hotplug
Dan Williams [Thu, 18 Jul 2019 22:58:26 +0000 (15:58 -0700)]
mm/sparsemem: support sub-section hotplug

The libnvdimm sub-system has suffered a series of hacks and broken
workarounds for the memory-hotplug implementation's awkward
section-aligned (128MB) granularity.

For example the following backtrace is emitted when attempting
arch_add_memory() with physical address ranges that intersect 'System
RAM' (RAM) with 'Persistent Memory' (PMEM) within a given section:

    # cat /proc/iomem | grep -A1 -B1 Persistent\ Memory
    100000000-1ffffffff : System RAM
    200000000-303ffffff : Persistent Memory (legacy)
    304000000-43fffffff : System RAM
    440000000-23ffffffff : Persistent Memory
    2400000000-43bfffffff : Persistent Memory
      2400000000-43bfffffff : namespace2.0

    WARNING: CPU: 38 PID: 928 at arch/x86/mm/init_64.c:850 add_pages+0x5c/0x60
    [..]
    RIP: 0010:add_pages+0x5c/0x60
    [..]
    Call Trace:
     devm_memremap_pages+0x460/0x6e0
     pmem_attach_disk+0x29e/0x680 [nd_pmem]
     ? nd_dax_probe+0xfc/0x120 [libnvdimm]
     nvdimm_bus_probe+0x66/0x160 [libnvdimm]

It was discovered that the problem goes beyond RAM vs PMEM collisions as
some platforms produce PMEM vs PMEM collisions within a given section.
The libnvdimm workaround for that case revealed that the libnvdimm
section-alignment-padding implementation has been broken for a long
while.

A fix for that long-standing breakage introduces as many problems as it
solves as it would require a backward-incompatible change to the
namespace metadata interpretation.  Instead of that dubious route [1],
address the root problem in the memory-hotplug implementation.

Note that EEXIST is no longer treated as success, as that is how
sparse_add_section() reports subsection collisions; it was also obviated
by recent changes to perform the request_region() for 'System RAM'
before arch_add_memory() in the add_memory() sequence.

[1] https://lore.kernel.org/r/155000671719.348031.2347363160141119237[email protected]

[[email protected]: fix deactivate_section for early sections]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/156092354368.979959.6232443923440952359.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Oscar Salvador <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: prepare for sub-section ranges
Dan Williams [Thu, 18 Jul 2019 22:58:22 +0000 (15:58 -0700)]
mm/sparsemem: prepare for sub-section ranges

Prepare the memory hot-{add,remove} paths for handling sub-section
ranges by plumbing the starting page frame and number of pages being
handled through arch_{add,remove}_memory() to
sparse_{add,remove}_one_section().

This is simply plumbing, small cleanups, and some identifier renames.
No intended functional changes.

Link: http://lkml.kernel.org/r/156092353780.979959.9713046515562743194.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: kill is_dev_zone() helper
Dan Williams [Thu, 18 Jul 2019 22:58:18 +0000 (15:58 -0700)]
mm: kill is_dev_zone() helper

Given there are no more usages of is_dev_zone() outside of 'ifdef
CONFIG_ZONE_DEVICE' protection, kill off the compilation helper.

Link: http://lkml.kernel.org/r/156092353211.979959.1489004866360828964.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Michal Hocko <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/hotplug: kill is_dev_zone() usage in __remove_pages()
Dan Williams [Thu, 18 Jul 2019 22:58:15 +0000 (15:58 -0700)]
mm/hotplug: kill is_dev_zone() usage in __remove_pages()

The zone type check was a leftover from the cleanup that plumbed altmap
through the memory hotplug path, i.e.  commit da024512a1fa "mm: pass the
vmem_altmap to arch_remove_memory and __remove_pages".

Link: http://lkml.kernel.org/r/156092352642.979959.6664333788149363039.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Michal Hocko <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: convert kmalloc_section_memmap() to populate_section_memmap()
Dan Williams [Thu, 18 Jul 2019 22:58:11 +0000 (15:58 -0700)]
mm/sparsemem: convert kmalloc_section_memmap() to populate_section_memmap()

Allow sub-section sized ranges to be added to the memmap.

populate_section_memmap() takes an explicit pfn range rather than
assuming a full section, and those parameters are plumbed all the way
through to vmemmap_populate().  There should be no sub-section usage in
current deployments.  New warnings are added to clarify which memmap
allocation paths are sub-section capable.

Link: http://lkml.kernel.org/r/156092352058.979959.6551283472062305149.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/hotplug: prepare shrink_{zone, pgdat}_span for sub-section removal
Dan Williams [Thu, 18 Jul 2019 22:58:07 +0000 (15:58 -0700)]
mm/hotplug: prepare shrink_{zone, pgdat}_span for sub-section removal

Sub-section hotplug support reduces the unit of operation of hotplug
from section-sized-units (PAGES_PER_SECTION) to sub-section-sized units
(PAGES_PER_SUBSECTION).  Teach shrink_{zone,pgdat}_span() to consider
PAGES_PER_SUBSECTION boundaries as the points where pfn_valid(), not
valid_section(), can toggle.
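
For illustration only (not part of the patch), here is a standalone
sketch of the scan pattern this implies: probing a pfn range in
PAGES_PER_SUBSECTION steps rather than whole sections.  The constant
value and the stub pfn_valid() are assumptions made up for the example.

  #include <stdbool.h>
  #include <stdio.h>

  #define PAGES_PER_SUBSECTION (1UL << 9)  /* 2MB / 4KB pages, x86 assumption */

  static bool pfn_valid_stub(unsigned long pfn)
  {
          return pfn >= 0x1000 && pfn < 0x2000;  /* fake populated range */
  }

  /* step sub-section by sub-section, as the shrink helpers now do */
  static unsigned long first_valid_pfn(unsigned long start, unsigned long end)
  {
          unsigned long pfn;

          for (pfn = start; pfn < end; pfn += PAGES_PER_SUBSECTION)
                  if (pfn_valid_stub(pfn))
                          return pfn;
          return end;  /* nothing valid left: the span can shrink past here */
  }

  int main(void)
  {
          printf("first valid pfn: %#lx\n", first_valid_pfn(0, 0x4000));
          return 0;
  }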

[[email protected]: fix shrink_{zone,node}_span]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/156092351496.979959.12703722803097017492.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Oscar Salvador <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: add helpers track active portions of a section at boot
Dan Williams [Thu, 18 Jul 2019 22:58:04 +0000 (15:58 -0700)]
mm/sparsemem: add helpers track active portions of a section at boot

Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
sub-section active bitmask, each bit representing a PMD_SIZE span of the
architecture's memory hotplug section size.

The implication of a partially populated section is that pfn_valid()
needs to go beyond a valid_section() check and either determine that the
section is an "early section", or read the sub-section active ranges
from the bitmask.  The expectation is that the bitmask (subsection_map)
fits in the same cacheline as the valid_section() / early_section()
data, so the incremental performance overhead to pfn_valid() should be
negligible.

The rationale for using early_section() to short-circuit the
subsection_map check is that there are legacy code paths that use
pfn_valid() at section granularity before validating the pfn against
pgdat data.  So, the early_section() check allows those traditional
assumptions to persist while also permitting subsection_map to tell the
truth for purposes of populating the unused portions of early sections
with PMEM and other ZONE_DEVICE mappings.
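
As an illustration (approximated from the description above, not quoted
from the patch), the resulting pfn_valid() flow in the
CONFIG_SPARSEMEM_VMEMMAP=y case looks roughly like this:

  static inline int pfn_valid(unsigned long pfn)
  {
          struct mem_section *ms;

          if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                  return 0;
          ms = __nr_to_section(pfn_to_section_nr(pfn));
          if (!valid_section(ms))
                  return 0;
          /*
           * Early sections are always fully valid; hotplugged sections
           * consult the per-section subsection_map bitmask.
           */
          return early_section(ms) || pfn_section_valid(ms, pfn);
  }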

Link: http://lkml.kernel.org/r/156092350874.979959.18185938451405518285.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reported-by: Qian Cai <[email protected]>
Tested-by: Jane Chu <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: introduce a SECTION_IS_EARLY flag
Dan Williams [Thu, 18 Jul 2019 22:58:00 +0000 (15:58 -0700)]
mm/sparsemem: introduce a SECTION_IS_EARLY flag

In preparation for sub-section hotplug, track whether a given section
was created during early memory initialization, or later via memory
hotplug.  This distinction is needed to maintain the coarse expectation
that pfn_valid() returns true for any pfn within a given section even if
that section has pages that are reserved from the page allocator.

For example, one of the goals of subsection hotplug is to support
cases where the system physical memory layout collides System RAM and
PMEM within a section.  Several pfn_valid() users expect to just check
if a section is valid, but they are not careful to check if the given
pfn is within a "System RAM" boundary and instead expect pgdat
information to further validate the pfn.

Rather than unwind those paths to make their pfn_valid() queries more
precise, a follow-on patch uses the SECTION_IS_EARLY flag to maintain the
traditional expectation that pfn_valid() returns true for all early
sections.
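
A rough sketch of the mechanics (bit position and helper body
approximated, not quoted from the patch): the flag is one more bit
folded into the section_mem_map encoding, with a trivial accessor.

  #define SECTION_IS_EARLY        (1UL << 3)

  static inline int early_section(struct mem_section *section)
  {
          return (section && (section->section_mem_map & SECTION_IS_EARLY));
  }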

Link: https://lore.kernel.org/lkml/[email protected]/
Link: http://lkml.kernel.org/r/156092350358.979959.5817209875548072819.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reported-by: Qian Cai <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparsemem: introduce struct mem_section_usage
Dan Williams [Thu, 18 Jul 2019 22:57:57 +0000 (15:57 -0700)]
mm/sparsemem: introduce struct mem_section_usage

Patch series "mm: Sub-section memory hotplug support", v10.

The memory hotplug section is an arbitrary / convenient unit for memory
hotplug.  'Section-size' units have bled into the user interface
('memblock' sysfs) and can not be changed without breaking existing
userspace.  The section-size constraint, while mostly benign for typical
memory hotplug, has and continues to wreak havoc with 'device-memory'
use cases, persistent memory (pmem) in particular.  Recall that pmem
uses devm_memremap_pages(), and subsequently arch_add_memory(), to
allocate a 'struct page' memmap for pmem.  However, it does not use the
'bottom half' of memory hotplug, i.e.  never marks pmem pages online and
never exposes the userspace memblock interface for pmem.  This leaves an
opening to redress the section-size constraint.

To date, the libnvdimm subsystem has attempted to inject padding to
satisfy the internal constraints of arch_add_memory().  Beyond
complicating the code, leading to bugs [2], wasting memory, and limiting
configuration flexibility, the padding hack is broken when the platform
changes the physical memory alignment of pmem from one boot to the
next.  Device failure (intermittent or permanent) and physical
reconfiguration are events that can cause the platform firmware to
change the physical placement of pmem on a subsequent boot, and device
failure is an everyday event in a data-center.

It turns out that sections are only a hard requirement of the
user-facing interface for memory hotplug and, with a bit more
infrastructure, sub-section arch_add_memory() support can be added for
kernel-internal usages like devm_memremap_pages().  Here is an analysis
of the design assumptions in the current code and how they are
addressed in the new implementation:

Current design assumptions:

 - Sections that describe boot memory (early sections) are never
   unplugged / removed.

 - pfn_valid(), in the CONFIG_SPARSEMEM_VMEMMAP=y case, devolves to a
   valid_section() check

 - __add_pages() and helper routines assume all operations occur in
   PAGES_PER_SECTION units.

 - The memblock sysfs interface only comprehends full sections

New design assumptions:

 - Sections are instrumented with a sub-section bitmask to track (on
   x86) individual 2MB sub-divisions of a 128MB section.

 - Partially populated early sections can be extended with additional
   sub-sections, and those sub-sections can be removed with
   arch_remove_memory(). With this in place we no longer lose usable
   memory capacity to padding.

 - pfn_valid() is updated to look deeper than valid_section() to also
   check the active-sub-section mask. This indication is in the same
   cacheline as the valid_section() so the performance impact is
   expected to be negligible. So far the lkp robot has not reported any
   regressions.

 - Outside of the core vmemmap population routines which are replaced,
   other helper routines like shrink_{zone,pgdat}_span() are updated to
   handle the smaller granularity. Core memory hotplug routines that
   deal with online memory are not touched.

 - The existing memblock sysfs user api guarantees / assumptions are not
   touched since this capability is limited to !online
   !memblock-sysfs-accessible sections.

Meanwhile the issue reports continue to roll in from users that do not
understand when and how the 128MB constraint will bite them.  The current
implementation relied on being able to support at least one misaligned
namespace, but that immediately falls over on any moderately complex
namespace creation attempt.  Beyond the initial problem of 'System RAM'
colliding with pmem, and the unsolvable problem of physical alignment
changes, Linux is now being exposed to platforms that collide pmem ranges
with other pmem ranges by default [3].  In short, devm_memremap_pages()
has pushed the venerable section-size constraint past the breaking point,
and the simplicity of section-aligned arch_add_memory() is no longer
tenable.

These patches are exposed to the kbuild robot on a subsection-v10 branch
[4], and a preview of the unit test for this functionality is available
on the 'subsection-pending' branch of ndctl [5].

[2]: https://lore.kernel.org/r/155000671719.348031.2347363160141119237[email protected]
[3]: https://github.com/pmem/ndctl/issues/76
[4]: https://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git/log/?h=subsection-v10
[5]: https://github.com/pmem/ndctl/commit/7c59b4867e1c

This patch (of 13):

Towards enabling memory hotplug to track partial population of a section,
introduce 'struct mem_section_usage'.

A pointer to a 'struct mem_section_usage' instance replaces the existing
pointer to a 'pageblock_flags' bitmap.  Effectively it adds one more
'unsigned long' beyond the 'pageblock_flags' (usemap) allocation to house
a new 'subsection_map' bitmap.  The new bitmap enables the memory
hot{plug,remove} implementation to act on incremental sub-divisions of a
section.

SUBSECTION_SHIFT is defined as a global constant instead of a
per-architecture value like SECTION_SIZE_BITS in order to allow
cross-arch compatibility of subsection users.  Specifically, a common
subsection size allows for the possibility that persistent memory
namespace configurations can be made compatible across architectures.

The primary motivation for this functionality is to support platforms that
mix "System RAM" and "Persistent Memory" within a single section, or
multiple PMEM ranges with different mapping lifetimes within a single
section.  The section restriction for hotplug has caused an ongoing saga
of hacks and bugs for devm_memremap_pages() users.

Beyond the fixups to teach existing paths how to retrieve the 'usemap'
from a section, and updates to the usemap allocation path, there are no
expected behavior changes.
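
As a sketch of the resulting layout (assuming x86's SECTION_SIZE_BITS of
27, PAGE_SHIFT of 12, and the 2MB sub-section size discussed above;
details approximated, not quoted from the patch):

  #define SUBSECTION_SHIFT        21
  #define PFN_SUBSECTION_SHIFT    (SUBSECTION_SHIFT - PAGE_SHIFT)
  #define PAGES_PER_SUBSECTION    (1UL << PFN_SUBSECTION_SHIFT)
  /* 64 sub-sections per 128MB x86 section => one unsigned long of bitmap */
  #define SUBSECTIONS_PER_SECTION (1UL << (SECTION_SIZE_BITS - SUBSECTION_SHIFT))

  struct mem_section_usage {
          DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
          /* see the declaration of the similar field in struct zone */
          unsigned long pageblock_flags[0];
  };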

Link: http://lkml.kernel.org/r/156092349845.979959.73333291612799019.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Tested-by: Aneesh Kumar K.V <[email protected]> [ppc64]
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agodrivers/base/memory.c: get rid of find_memory_block_hinted()
David Hildenbrand [Thu, 18 Jul 2019 22:57:53 +0000 (15:57 -0700)]
drivers/base/memory.c: get rid of find_memory_block_hinted()

No longer needed, let's remove it.  Also, drop the "hint" parameter
completely from "find_memory_block_by_id", as nobody needs it anymore.

[[email protected]: v3]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: handle zero-length walks]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Tested-by: Qian Cai <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Mike Travis <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: move and simplify walk_memory_blocks()
David Hildenbrand [Thu, 18 Jul 2019 22:57:50 +0000 (15:57 -0700)]
mm/memory_hotplug: move and simplify walk_memory_blocks()

Let's move walk_memory_blocks() to the place where memory block logic
resides and simplify it.  While at it, add a type for the callback
function.
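
For reference, the simplified interface has roughly the following shape;
the prototype reflects the description above and the example caller is
purely hypothetical:

  typedef int (*walk_memory_blocks_func_t)(struct memory_block *, void *);

  int walk_memory_blocks(unsigned long start, unsigned long size,
                         void *arg, walk_memory_blocks_func_t func);

  /* hypothetical caller: count the memory blocks in a range */
  static int count_block(struct memory_block *mem, void *arg)
  {
          (*(unsigned long *)arg)++;
          return 0;       /* a non-zero return would abort the walk */
  }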

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Mike Travis <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Qian Cai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: rename walk_memory_range() and pass start+size instead of pfns
David Hildenbrand [Thu, 18 Jul 2019 22:57:46 +0000 (15:57 -0700)]
mm/memory_hotplug: rename walk_memory_range() and pass start+size instead of pfns

walk_memory_range() was once used to iterate over sections.  Now, it
iterates over memory blocks.  Rename the function, fixup the
documentation.

Also, pass start+size instead of PFNs, which is what most callers
already have at hand.  (we'll rework link_mem_sections() most probably
soon)

Follow-up patches will rework, simplify, and move walk_memory_blocks()
to drivers/base/memory.c.

Note: walk_memory_blocks() only works correctly right now if the
start_pfn is aligned to a section start.  This is the case right now,
but we'll generalize the function in a follow up patch so the semantics
match the documentation.

[[email protected]: remove unused variable]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Rashmica Gupta <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Michael Neuling <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: make register_mem_sect_under_node() static
David Hildenbrand [Thu, 18 Jul 2019 22:57:43 +0000 (15:57 -0700)]
mm: make register_mem_sect_under_node() static

It is only used internally.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Oscar Salvador <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agodrivers/base/memory: use "unsigned long" for block ids
David Hildenbrand [Thu, 18 Jul 2019 22:57:40 +0000 (15:57 -0700)]
drivers/base/memory: use "unsigned long" for block ids

Block ids are just shifted section numbers, so let's also use "unsigned
long" for them, too.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: section numbers use the type "unsigned long"
David Hildenbrand [Thu, 18 Jul 2019 22:57:37 +0000 (15:57 -0700)]
mm: section numbers use the type "unsigned long"

Patch series "mm: Further memory block device cleanups", v1.

Some further cleanups around memory block devices.  Especially, clean up
and simplify walk_memory_range().  Including some other minor cleanups.

This patch (of 6):

We are using a mixture of "int" and "unsigned long".  Let's make this
consistent by using "unsigned long" everywhere.  We'll do the same with
memory block ids next.

While at it, turn the "unsigned long i" in removable_show() into an int
- sections_per_block is an int.

[[email protected]: s/unsigned long i/unsigned long nr/]
[[email protected]: v3]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Baoquan He <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agoresource: avoid unnecessary lookups in find_next_iomem_res()
Nadav Amit [Thu, 18 Jul 2019 22:57:34 +0000 (15:57 -0700)]
resource: avoid unnecessary lookups in find_next_iomem_res()

find_next_iomem_res() shows up to be a source for overhead in dax
benchmarks.

Improve performance by not considering children of the tree if the top
level does not match.  Since the range of the parents should include the
range of the children, such a check is redundant.

Running sysbench on dax (pmem emulation, with write_cache disabled):

  sysbench fileio --file-total-size=3G --file-test-mode=rndwr \
   --file-io-mode=mmap --threads=4 --file-fsync-mode=fdatasync run

Provides the following results:

events (avg/stddev)
-------------------
  5.2-rc3: 1247669.0000/16075.39
  w/patch: 1286320.5000/16402.72 (+3%)
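
For illustration only (not the kernel code), a standalone sketch of the
pruning idea: a subtree is skipped entirely when its top-level range
does not overlap the query, relying on children being contained in their
parent.

  #include <stddef.h>

  struct res {
          unsigned long start, end;
          struct res *child, *sibling;
  };

  static struct res *first_overlap(struct res *list, unsigned long s,
                                   unsigned long e)
  {
          struct res *p, *c;

          for (p = list; p; p = p->sibling) {
                  if (p->end < s || p->start > e)
                          continue;       /* prune the whole subtree */
                  c = p->child ? first_overlap(p->child, s, e) : NULL;
                  return c ? c : p;       /* prefer the deepest match */
          }
          return NULL;
  }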

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Nadav Amit <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agoresource: fix locking in find_next_iomem_res()
Nadav Amit [Thu, 18 Jul 2019 22:57:31 +0000 (15:57 -0700)]
resource: fix locking in find_next_iomem_res()

Since resources can be removed, locking should ensure that the resource
is not removed while accessing it.  However, find_next_iomem_res() does
not hold the lock while copying the data of the resource.

Keep holding the lock while the data is copied.  While at it, change the
return value to a more informative value.  It is disregarded by the
callers.
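
Schematically (not the verbatim kernel code; find_matching_resource() is
a hypothetical placeholder for the existing walk), the fix amounts to
copying the data out before dropping resource_lock:

          read_lock(&resource_lock);
          p = find_matching_resource(start, end, flags);  /* hypothetical */
          if (p)
                  *res = *p;      /* copy while still holding the lock */
          read_unlock(&resource_lock);

          return p ? 0 : -ENODEV;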

[[email protected]: fix find_next_iomem_res() documentation]
Link: http://lkml.kernel.org/r/[email protected]
Fixes: ff3cc952d3f00 ("resource: Add remove_resource interface")
Signed-off-by: Nadav Amit <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Reviewed-by: Dan Williams <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: thp: fix false negative of shmem vma's THP eligibility
Yang Shi [Thu, 18 Jul 2019 22:57:27 +0000 (15:57 -0700)]
mm: thp: fix false negative of shmem vma's THP eligibility

Commit 7635d9cbe832 ("mm, thp, proc: report THP eligibility for each
vma") introduced THPeligible bit for processes' smaps.  But, when
checking the eligibility for shmem vma, __transparent_hugepage_enabled()
is called to override the result from shmem_huge_enabled().  It may
result in the anonymous vma's THP flag overriding shmem's.  For example,
running a simple test which creates THP for shmem, but with anonymous THP
disabled, when reading the process's smaps, it may show:

  7fc92ec00000-7fc92f000000 rw-s 00000000 00:14 27764 /dev/shm/test
  Size:               4096 kB
  ...
  [snip]
  ...
  ShmemPmdMapped:     4096 kB
  ...
  [snip]
  ...
  THPeligible:    0

And, /proc/meminfo does show THP allocated and PMD mapped too:

  ShmemHugePages:     4096 kB
  ShmemPmdMapped:     4096 kB

This doesn't make too much sense.  The shmem objects should be treated
separately from anonymous THP.  Calling shmem_huge_enabled(), combined
with checking MMF_DISABLE_THP, sounds good enough.  And we can skip the
stack and dax vma checks since we have already confirmed the vma is shmem.

Also check if vma is suitable for THP by calling
transhuge_vma_suitable().

And minor fix to smaps output format and documentation.

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 7635d9cbe832 ("mm, thp, proc: report THP eligibility for each vma")
Signed-off-by: Yang Shi <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm: thp: make transhuge_vma_suitable available for anonymous THP
Yang Shi [Thu, 18 Jul 2019 22:57:24 +0000 (15:57 -0700)]
mm: thp: make transhuge_vma_suitable available for anonymous THP

transhuge_vma_suitable() was only available for shmem THP, but anonymous
THP has the same check except the pgoff check.  And, it will be used for
the THP eligibility check in a later patch, so make it available for all
kinds of THP.  This also helps reduce code duplication slightly.

Since anonymous THP doesn't have to check pgoff, make the pgoff check
apply to shmem vmas only.

And regroup some functions in include/linux/mm.h to solve a compile issue
since transhuge_vma_suitable() needs to call vma_is_anonymous(), which was
defined after huge_mm.h is included.
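
The resulting helper looks roughly like this (reproduced from memory as
an approximation, so treat the mask and constant names as assumptions
rather than a quote of the patch):

  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
                                            unsigned long haddr)
  {
          /* anonymous vmas do not have to check pgoff */
          if (!vma_is_anonymous(vma)) {
                  if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
                      (vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
                          return false;
          }

          if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
                  return false;
          return true;
  }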

[[email protected]: fix typo]
[[email protected]: v4]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yang Shi <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/sparse.c: set section nid for hot-add memory
Wei Yang [Thu, 18 Jul 2019 22:57:21 +0000 (15:57 -0700)]
mm/sparse.c: set section nid for hot-add memory

In case NODE_NOT_IN_PAGE_FLAGS is set, we store a section's node id in
section_to_node_table[].  For hot-added memory, however, this step is
missed.  Without this information, page_to_nid() may not give the right
node id.

BTW, the current online_pages() works because it leverages the nid in
memory_block.  But the granularity of the node id should be mem_section
wide.
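
For context, a sketch of the table and helper involved (shape
approximated, not quoted from the patch); the fix is essentially to call
set_section_nid() from the hot-add path as well:

  #ifdef NODE_NOT_IN_PAGE_FLAGS
  /* per-section node id when it cannot be encoded in page->flags */
  static u8 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;

  static void set_section_nid(unsigned long section_nr, int nid)
  {
          section_to_node_table[section_nr] = nid;
  }
  #else
  static inline void set_section_nid(unsigned long section_nr, int nid) { }
  #endif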

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Wei Yang <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: remove "zone" parameter from sparse_remove_one_section
David Hildenbrand [Thu, 18 Jul 2019 22:57:17 +0000 (15:57 -0700)]
mm/memory_hotplug: remove "zone" parameter from sparse_remove_one_section

The parameter is unused, so let's drop it.  Memory removal paths should
never care about zones.  This is the job of memory offlining and will
require more refactorings.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Dan Williams <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: make unregister_memory_block_under_nodes() never fail
David Hildenbrand [Thu, 18 Jul 2019 22:57:12 +0000 (15:57 -0700)]
mm/memory_hotplug: make unregister_memory_block_under_nodes() never fail

We really don't want anything during memory hotunplug to fail.  We
always pass a valid memory block device, so that check can go.  Avoid
allocating memory and eventually failing.  As we are always called under
lock, we can use a static piece of memory.  This avoids having to put
the structure onto the stack and having to guess about the stack size of
callers.

Patch inspired by a patch from Oscar Salvador.

In the future, there might be no need to iterate over nodes at all.
mem->nid should tell us exactly what to remove.  Memory block devices
with mixed nodes (added during boot) should be properly fenced off and
never removed.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: remove memory block devices before arch_remove_memory()
David Hildenbrand [Thu, 18 Jul 2019 22:57:06 +0000 (15:57 -0700)]
mm/memory_hotplug: remove memory block devices before arch_remove_memory()

Let's factor out removing of memory block devices, which is only
necessary for memory added via add_memory() and friends that created
memory block devices.  Remove the devices before calling
arch_remove_memory().

This finishes factoring out memory block device handling from
arch_add_memory() and arch_remove_memory().

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Dan Williams <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: drop MHP_MEMBLOCK_API
David Hildenbrand [Thu, 18 Jul 2019 22:57:01 +0000 (15:57 -0700)]
mm/memory_hotplug: drop MHP_MEMBLOCK_API

No longer needed, the callers of arch_add_memory() can handle this
manually.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: create memory block devices after arch_add_memory()
David Hildenbrand [Thu, 18 Jul 2019 22:56:56 +0000 (15:56 -0700)]
mm/memory_hotplug: create memory block devices after arch_add_memory()

Only memory to be added to the buddy and to be onlined/offlined by user
space using /sys/devices/system/memory/...  needs (and should have!)
memory block devices.

Factor out creation of memory block devices.  Create all devices after
arch_add_memory() succeeded.  We can later drop the want_memblock
parameter, because it is now effectively stale.

Only after memory block devices have been added, memory can be onlined
by user space.  This implies, that memory is not visible to user space
at all before arch_add_memory() succeeded.

While at it
 - use WARN_ON_ONCE instead of BUG_ON in moved unregister_memory()
 - introduce find_memory_block_by_id() to search via block id
 - Use find_memory_block_by_id() in init_memory_block() to catch
   duplicates
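
A hypothetical sketch of the reordered add_memory_resource() flow
described above (error labels and exact signatures approximated, not
quoted from the patch):

          /* page tables / memmap first ... */
          ret = arch_add_memory(nid, start, size, &restrictions);
          if (ret < 0)
                  goto error;

          /* ... memory block devices (sysfs visibility) only on success */
          ret = create_memory_block_devices(start, size);
          if (ret) {
                  arch_remove_memory(nid, start, size, NULL);
                  goto error;
          }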

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: allow arch_remove_memory() without CONFIG_MEMORY_HOTREMOVE
David Hildenbrand [Thu, 18 Jul 2019 22:56:51 +0000 (15:56 -0700)]
mm/memory_hotplug: allow arch_remove_memory() without CONFIG_MEMORY_HOTREMOVE

We want to improve error handling while adding memory by allowing the
use of arch_remove_memory() and __remove_pages() even if
CONFIG_MEMORY_HOTREMOVE is not set, to e.g. implement something like:

  arch_add_memory()
  rc = do_something();
  if (rc) {
          arch_remove_memory();
  }

We won't get rid of CONFIG_MEMORY_HOTREMOVE for now, as it will require
quite some dependencies for memory offlining.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agodrivers/base/memory: pass a block_id to init_memory_block()
David Hildenbrand [Thu, 18 Jul 2019 22:56:46 +0000 (15:56 -0700)]
drivers/base/memory: pass a block_id to init_memory_block()

We'll rework hotplug_memory_register() shortly, so it no longer consumes
a section.

[[email protected]: fix a compilation warning]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Signed-off-by: Qian Cai <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agoarm64/mm: add temporary arch_remove_memory() implementation
David Hildenbrand [Thu, 18 Jul 2019 22:56:41 +0000 (15:56 -0700)]
arm64/mm: add temporary arch_remove_memory() implementation

A proper arch_remove_memory() implementation is on its way, which also
cleanly removes page tables in arch_add_memory() in case something goes
wrong.

As we want to use arch_remove_memory() in case something goes wrong
during memory hotplug after arch_add_memory() finished, let's add a
temporary hack that is sufficient until we get a proper
implementation that cleans up page table entries.

We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow up
patches.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: Yu Zhao <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agos390x/mm: implement arch_remove_memory()
David Hildenbrand [Thu, 18 Jul 2019 22:56:35 +0000 (15:56 -0700)]
s390x/mm: implement arch_remove_memory()

Will come in handy when wanting to handle errors after
arch_add_memory().

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agos390x/mm: fail when an altmap is used for arch_add_memory()
David Hildenbrand [Thu, 18 Jul 2019 22:56:30 +0000 (15:56 -0700)]
s390x/mm: fail when an altmap is used for arch_add_memory()

ZONE_DEVICE is not yet supported; fail if an altmap is passed, so we
don't forget arch_add_memory()/arch_remove_memory() when unlocking
support.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Suggested-by: Dan Williams <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agomm/memory_hotplug: simplify and fix check_hotplug_memory_range()
David Hildenbrand [Thu, 18 Jul 2019 22:56:25 +0000 (15:56 -0700)]
mm/memory_hotplug: simplify and fix check_hotplug_memory_range()

Patch series "mm/memory_hotplug: Factor out memory block devicehandling", v3.

We only want memory block devices for memory to be onlined/offlined
(add/remove from the buddy).  This is required so user space can
online/offline memory and kdump gets notified about newly onlined
memory.

Let's factor out creation/removal of memory block devices.  This helps
to further clean up arch_add_memory()/arch_remove_memory() and makes it
easier to implement new features - especially Dan's sub-section memory
hot add.

Anshuman Khandual is currently working on arch_remove_memory().  I added
a temporary solution via "arm64/mm: Add temporary arch_remove_memory()
implementation", which is sufficient as a first step in the context of
this series.  (We already don't clean up page tables when something goes
wrong.)

Did a quick sanity test with DIMM plug/unplug, making sure all devices
and sysfs links properly get added/removed.  Compile tested on s390x and
x86-64.

This patch (of 11):

By converting start and size to page granularity, we actually ignore
unaligned parts within a page instead of properly bailing out with an
error.
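
A sketch of the stricter check, assuming the usual IS_ALIGNED() and
memory_block_size_bytes() helpers; the exact error message in the patch
may differ:

    static int check_hotplug_memory_range(u64 start, u64 size)
    {
            /* reject ranges that are not block aligned instead of
             * silently rounding them to page granularity
             */
            if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
                !IS_ALIGNED(size, memory_block_size_bytes())) {
                    pr_err("Memory hotplug range not block aligned: start %#llx size %#llx\n",
                           start, size);
                    return -EINVAL;
            }
            return 0;
    }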

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Dan Williams <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Arun KS <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Andrew Banman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chintan Pandya <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Jun Yao <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
5 years agousb: qmi_wwan: add D-Link DWM-222 A2 device ID
Rogan Dawes [Wed, 17 Jul 2019 09:14:33 +0000 (11:14 +0200)]
usb: qmi_wwan: add D-Link DWM-222 A2 device ID

Signed-off-by: Rogan Dawes <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agobnxt_en: Fix VNIC accounting when enabling aRFS on 57500 chips.
Michael Chan [Wed, 17 Jul 2019 07:07:23 +0000 (03:07 -0400)]
bnxt_en: Fix VNIC accounting when enabling aRFS on 57500 chips.

Unlike legacy chips, 57500 chips don't need additional VNIC resources
for aRFS/ntuple.  Fix the code accordingly so that we don't reserve
and allocate additional VNICs on 57500 chips.  Without this patch,
the driver is failing to initialize when it tries to allocate extra
VNICs.
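
Purely as an illustration of the chip gating described above (the helper
name is hypothetical and the flag name is an assumption about the
driver, not taken from the patch):

    /* hypothetical sketch: only legacy chips need extra VNICs for aRFS */
    static unsigned int example_rfs_vnics(struct bnxt *bp,
                                          unsigned int rx_rings)
    {
            if (bp->flags & BNXT_FLAG_CHIP_P5)      /* 57500 family */
                    return 1;                       /* default VNIC only */
            return rx_rings + 1;                    /* one extra per ring */
    }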

Fixes: ac33906c67e2 ("bnxt_en: Add support for aRFS on 57500 chips.")
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agonet: dsa: sja1105: Fix missing unlock on error in sk_buff()
Wei Yongjun [Wed, 17 Jul 2019 06:29:56 +0000 (06:29 +0000)]
net: dsa: sja1105: Fix missing unlock on error in sk_buff()

Add the missing unlock before return from function sk_buff()
in the error handling case.
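
The bug class is the classic forgotten unlock on an early error return;
a generic sketch with placeholder names (not the sja1105 code itself):

    struct example_state {
            struct mutex lock;
            struct sk_buff *pending;
    };

    static struct sk_buff *example_dequeue(struct example_state *st)
    {
            struct sk_buff *skb;

            mutex_lock(&st->lock);
            skb = st->pending;
            if (!skb) {
                    mutex_unlock(&st->lock); /* the unlock that was missing */
                    return NULL;
            }
            st->pending = NULL;
            mutex_unlock(&st->lock);
            return skb;
    }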

Fixes: f3097be21bf1 ("net: dsa: sja1105: Add a state machine for RX timestamping")
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Reviewed-by: Vivien Didelot <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agogve: replace kfree with kvfree
Chuhong Yuan [Wed, 17 Jul 2019 02:05:11 +0000 (10:05 +0800)]
gve: replace kfree with kvfree

Memory allocated with kvzalloc() must not be freed with kfree(), because
the allocation may have come from vmalloc().  Replace kfree() with
kvfree() here, which handles both cases.
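
The pairing rule being enforced, sketched with hypothetical names:

    #include <linux/mm.h>   /* kvzalloc(), kvfree() */

    static int example_alloc(size_t len)
    {
            void *buf = kvzalloc(len, GFP_KERNEL); /* may fall back to vmalloc */

            if (!buf)
                    return -ENOMEM;

            /* ... use buf ... */

            /* kvfree() handles both kmalloc- and vmalloc-backed memory;
             * plain kfree() here would be wrong for the vmalloc case
             */
            kvfree(buf);
            return 0;
    }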

Signed-off-by: Chuhong Yuan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
5 years agocifs: update internal module number
Steve French [Mon, 1 Jul 2019 21:25:46 +0000 (16:25 -0500)]
cifs: update internal module number

To 2.21

Signed-off-by: Steve French <[email protected]>
5 years agocifs: flush before set-info if we have writeable handles
Ronnie Sahlberg [Thu, 18 Jul 2019 22:12:11 +0000 (08:12 +1000)]
cifs: flush before set-info if we have writeable handles

Servers can defer destaging any data and updating the mtime until close().
This means that if we do a setinfo to modify the mtime while other handles
are open for write, the server may overwrite our setinfo timestamps when
it flushes the file on close() of the writeable handle.

To solve this we add an explicit flush when the mtime is about to
be updated.

This fixes "cp -p" to preserve mtime when copying a file onto an SMB2 share.

CC: Stable <[email protected]>
Signed-off-by: Ronnie Sahlberg <[email protected]>
Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Steve French <[email protected]>
5 years agosmb3: optimize open to not send query file internal info
Steve French [Thu, 18 Jul 2019 22:22:18 +0000 (17:22 -0500)]
smb3: optimize open to not send query file internal info

We can cut one third of the traffic on open by not querying the
inode number explicitly via SMB3 query_info since it is now
returned on open in the qfid context.

This is better in multiple ways, and speeds up file open by about 10%
(more if the network is slow).

Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Steve French <[email protected]>
5 years agoMerge tag 'for-5.3/dm-changes-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Thu, 18 Jul 2019 21:49:33 +0000 (14:49 -0700)]
Merge tag 'for-5.3/dm-changes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull more device mapper updates from Mike Snitzer:

 - Fix zone state management race in DM zoned target by eliminating the
   unnecessary DMZ_ACTIVE state.

 - A couple of fixes for issues introduced by the DM snapshot target's
   optional discard support added during the first week of the 5.3 merge
   window.

 - Increase the default size of outstanding IO that is allowed for each
   dm-kcopyd client and introduce a tunable to let users adjust it.

 - Update DM core to use the printk ratelimiting functions rather than
   duplicating them, and in doing so fix an issue where DMDEBUG_LIMIT()
   rate-limited KERN_DEBUG messages produced excessive "callbacks
   suppressed" messages.

* tag 'for-5.3/dm-changes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm: use printk ratelimiting functions
  dm kcopyd: Increase default sub-job size to 512KB
  dm snapshot: fix oversights in optional discard support
  dm zoned: fix zone state management race

5 years agoMerge tag 'nfs-for-5.3-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Linus Torvalds [Thu, 18 Jul 2019 21:32:33 +0000 (14:32 -0700)]
Merge tag 'nfs-for-5.3-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs

Pull NFS client updates from Trond Myklebust:
 "Highlights include:

  Stable fixes:

   - SUNRPC: Ensure bvecs are re-synced when we re-encode the RPC
     request

   - Fix an Oops in ff_layout_track_ds_error due to a PTR_ERR()
     dereference

   - Revert buggy NFS readdirplus optimisation

   - NFSv4: Handle the special Linux file open access mode

   - pnfs: Fix a problem where we gratuitously start doing I/O through
     the MDS

  Features:

   - Allow NFS client to set up multiple TCP connections to the server
     using a new 'nconnect=X' mount option. Queue length is used to
     balance load.

   - Enhance statistics reporting to report on all transports when using
     multiple connections.

   - Speed up SUNRPC by removing bh-safe spinlocks

   - Add a mechanism to allow NFSv4 to request that containers set a
     unique per-host identifier for when the hostname is not set.

   - Ensure NFSv4 updates the lease_time after a clientid update

  Bugfixes and cleanup:

   - Fix use-after-free in rpcrdma_post_recvs

   - Fix a memory leak when nfs_match_client() is interrupted

   - Fix buggy file access checking in NFSv4 open for execute

   - disable unsupported client side deduplication

   - Fix spurious client disconnections

   - Fix occasional RDMA transport deadlock

   - Various RDMA cleanups

   - Various tracepoint fixes

   - Fix the TCP callback channel to guarantee the server can actually
     send the number of callback requests that was negotiated at mount
     time"

* tag 'nfs-for-5.3-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (68 commits)
  pnfs/flexfiles: Add tracepoints for detecting pnfs fallback to MDS
  pnfs: Fix a problem where we gratuitously start doing I/O through the MDS
  SUNRPC: Optimise transport balancing code
  SUNRPC: Ensure the bvecs are reset when we re-encode the RPC request
  pnfs/flexfiles: Fix PTR_ERR() dereferences in ff_layout_track_ds_error
  NFSv4: Don't use the zero stateid with layoutget
  SUNRPC: Fix up backchannel slot table accounting
  SUNRPC: Fix initialisation of struct rpc_xprt_switch
  SUNRPC: Skip zero-refcount transports
  SUNRPC: Replace division by multiplication in calculation of queue length
  NFSv4: Validate the stateid before applying it to state recovery
  nfs4.0: Refetch lease_time after clientid update
  nfs4: Rename nfs41_setup_state_renewal
  nfs4: Make nfs4_proc_get_lease_time available for nfs4.0
  nfs: Fix copy-and-paste error in debug message
  NFS: Replace 16 seq_printf() calls by seq_puts()
  NFS: Use seq_putc() in nfs_show_stats()
  Revert "NFS: readdirplus optimization by cache mechanism" (memleak)
  SUNRPC: Fix transport accounting when caller specifies an rpc_xprt
  NFS: Record task, client ID, and XID in xdr_status trace points
  ...

5 years agosched/rt, Kconfig: Introduce CONFIG_PREEMPT_RT
Thomas Gleixner [Wed, 17 Jul 2019 20:01:49 +0000 (22:01 +0200)]
sched/rt, Kconfig: Introduce CONFIG_PREEMPT_RT

Add a new entry to the preemption menu which enables the real-time support
for the kernel. The choice is only enabled when an architecture supports
it.

It selects PREEMPT as the RT features depend on it. To achieve that, the
existing PREEMPT choice is renamed to PREEMPT_LL, which selects PREEMPT
as well.

No functional change.

Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Acked-by: Steven Rostedt (VMware) <[email protected]>
Acked-by: Clark Williams <[email protected]>
Acked-by: Daniel Bristot de Oliveira <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Acked-by: Daniel Wagner <[email protected]>
Acked-by: Luis Claudio R. Goncalves <[email protected]>
Acked-by: Julia Cartwright <[email protected]>
Acked-by: Tom Zanussi <[email protected]>
Acked-by: Gratian Crisan <[email protected]>
Acked-by: Sebastian Siewior <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Lukas Bulwahn <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Tejun Heo <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
5 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
David S. Miller [Thu, 18 Jul 2019 21:04:45 +0000 (14:04 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Alexei Starovoitov says:

====================
pull-request: bpf 2019-07-18

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) verifier precision propagation fix, from Andrii.

2) BTF size fix for typedefs, from Andrii.

3) a bunch of big endian fixes, from Ilya.

4) wide load from bpf_sock_addr fixes, from Stanislav.

5) a bunch of misc fixes from a number of developers.
====================

Signed-off-by: David S. Miller <[email protected]>
5 years agoselftests/bpf: fix test_xdp_noinline on s390
Ilya Leoshkevich [Wed, 17 Jul 2019 12:26:20 +0000 (14:26 +0200)]
selftests/bpf: fix test_xdp_noinline on s390

test_xdp_noinline fails on s390 due to a handful of endianness issues.
Use ntohs for parsing eth_proto.
Replace bswaps with ntohs/htons.
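
A minimal illustration of the portable parse, written as plain userspace
C; the BPF selftest itself would use the equivalent bpf_ntohs()/bpf_htons()
helpers from bpf_endian.h:

    #include <arpa/inet.h>          /* ntohs() */
    #include <linux/if_ether.h>     /* ETH_P_IP */
    #include <stdint.h>

    struct eth_hdr_example {
            uint8_t  dst[6];
            uint8_t  src[6];
            uint16_t eth_proto;     /* network byte order on the wire */
    };

    static int is_ipv4(const struct eth_hdr_example *eth)
    {
            /* ntohs() works on both little- and big-endian hosts,
             * unlike a hard-coded byte swap which breaks on s390
             */
            return ntohs(eth->eth_proto) == ETH_P_IP;
    }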

Signed-off-by: Ilya Leoshkevich <[email protected]>
Acked-by: Vasily Gorbik <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>