pmem, dax: clean up clear_pmem()
author	Dan Williams <[email protected]>
	Sat, 16 Jan 2016 00:55:49 +0000 (16:55 -0800)
committer	Linus Torvalds <[email protected]>
	Sat, 16 Jan 2016 01:56:32 +0000 (17:56 -0800)
To date, we have implemented two I/O usage models for persistent memory,
PMEM (a persistent "ram disk") and DAX (mmap persistent memory into
userspace).  This series adds a third, DAX-GUP, which allows DAX
mappings to be the target of direct-i/o, enabling userspace to
coordinate DMA/RDMA from/to persistent memory.

The implementation leverages the ZONE_DEVICE mm-zone that went into
4.3-rc1 (also discussed at kernel summit) to flag pages that are owned
and dynamically mapped by a device driver.  The pmem driver, after
mapping a persistent memory range into the system memmap via
devm_memremap_pages(), arranges for DAX to distinguish pfn-only versus
page-backed pmem-pfns via flags in the new pfn_t type.

The DAX code, upon seeing a PFN_DEV+PFN_MAP flagged pfn, flags the
resulting pte(s) inserted into the process page tables with a new
_PAGE_DEVMAP flag.  Later, when get_user_pages() is walking ptes it keys
off _PAGE_DEVMAP to pin the device hosting the page range active.
Finally, get_page() and put_page() are modified to take references
against the device driver established page mapping.

Lastly, this need for "struct page" for persistent memory requires
memory capacity to store the memmap array.  Since the memmap array for a
large pool of persistent memory may exhaust available DRAM, introduce a
mechanism to allocate the memmap from persistent memory itself.  The new
"struct vmem_altmap *" parameter to devm_memremap_pages() enables
arch_add_memory() to use reserved pmem capacity rather than the page
allocator.

This patch (of 25):

Both __dax_pmd_fault() and clear_pmem() were taking special steps to
clear memory a page at a time to take advantage of non-temporal
clear_page() implementations.  However, x86_64 does not use non-temporal
instructions for clear_page(), and arch_clear_pmem() was always
incurring the cost of __arch_wb_cache_pmem().

Clean up the assumption that doing clear_pmem() a page at a time is more
performant.

Signed-off-by: Dan Williams <[email protected]>
Reported-by: Dave Hansen <[email protected]>
Reviewed-by: Ross Zwisler <[email protected]>
Reviewed-by: Jeff Moyer <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jeff Dike <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Logan Gunthorpe <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Toshi Kani <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
arch/x86/include/asm/pmem.h
fs/dax.c

index d8ce3ec816ab1a86d2d06aca917dae5aae762e40..1544fabcd7f9b7428a5d3ec7429f00f03b545945 100644 (file)
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -132,12 +132,7 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size)
 {
        void *vaddr = (void __force *)addr;
 
-       /* TODO: implement the zeroing via non-temporal writes */
-       if (size == PAGE_SIZE && ((unsigned long)vaddr & ~PAGE_MASK) == 0)
-               clear_page(vaddr);
-       else
-               memset(vaddr, 0, size);
-
+       memset(vaddr, 0, size);
        __arch_wb_cache_pmem(vaddr, size);
 }
 
index 43671b68220ed968386f5c1ad9067f236fbab67e..19492cc65a302ce2285f179d056bc9f860cbea9c 100644 (file)
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -641,9 +641,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
                        goto fallback;
 
                if (buffer_unwritten(&bh) || buffer_new(&bh)) {
-                       int i;
-                       for (i = 0; i < PTRS_PER_PMD; i++)
-                               clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE);
+                       clear_pmem(kaddr, PMD_SIZE);
                        wmb_pmem();
                        count_vm_event(PGMAJFAULT);
                        mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);