Commit b379d790 authored by Jared Hulbert, committed by Linus Torvalds

mm: introduce VM_MIXEDMAP

This series introduces some important infrastructure work.  The overall result
is that:

1. We now support XIP backed filesystems using memory that has no
   struct page allocated to it. Patches 6 and 7 actually implement
   this for s390.

   This is pretty important in a number of cases. As far as I understand,
   in the case of virtualisation (eg. s390), each guest may mount a
   readonly copy of the same filesystem (eg. the distro). Currently,
   guests need to allocate struct pages for this image. So if you have
   100 guests, you already need to allocate more memory for the struct
   pages than the size of the image. I think. (Carsten?)

   For other (eg. embedded) systems, you may have a very large non-
   volatile filesystem. If you have to have struct pages for this, then
   your RAM consumption will go up proportionally to fs size. Even
   though it is just a small proportion, the RAM can be much more costly
   eg in terms of power, so every KB less that Linux uses makes it more
   attractive to a lot of these guys.

2. VM_MIXEDMAP allows us to support mappings where you actually do want
   to refcount _some_ pages in the mapping, but not others, and support
   COW on arbitrary (non-linear) mappings. Jared needs this for his NVRAM
   filesystem in progress. Future iterations of this filesystem will
   most likely want to migrate pages between pagecache and XIP backing,
   which is where the requirement for mixed (some refcounted, some not)
   comes from.

3. pte_special also has a peripheral usage that I need for my lockless
   get_user_pages patch. That was shown to speed up "oltp" on db2 by
   10% on a 2 socket system, which is kind of significant because they
   scrounge for months to try to find 0.1% improvement on these
   workloads. I'm hoping we might finally be faster than AIX on
   pSeries with this :). My reference to lockless get_user_pages is not
   meant to justify this patchset (which doesn't include lockless gup),
   but just to show that pte_special is not some s390 specific thing that
   should be hidden in arch code or xip code: I definitely want to use it
   on at least x86 and powerpc as well.

This patch:

Introduce a new type of mapping, VM_MIXEDMAP.  This is unlike VM_PFNMAP in
that it can support COW mappings of arbitrary ranges including ranges without
struct page *and* ranges with a struct page that we actually want to refcount
(PFNMAP can only support COW in those cases where the un-COW-ed translations
are mapped linearly in the virtual address, and can only support non
refcounted ranges).

VM_MIXEDMAP achieves this by refcounting all pfn_valid pages, and not
refcounting !pfn_valid pages (which is not an option for VM_PFNMAP, because it
needs to avoid refcounting pfn_valid pages eg.  for /dev/mem mappings).
Signed-off-by: Jared Hulbert <jaredeh@gmail.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 214e471f
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -107,6 +107,7 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_ALWAYSDUMP	0x04000000	/* Always include in core dumps */
 #define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
+#define VM_MIXEDMAP	0x10000000	/* Can contain "struct page" and pure PFN pages */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
 
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -371,35 +371,65 @@ static inline int is_cow_mapping(unsigned int flags)
 }
 
 /*
- * This function gets the "struct page" associated with a pte.
+ * This function gets the "struct page" associated with a pte or returns
+ * NULL if no "struct page" is associated with the pte.
  *
- * NOTE! Some mappings do not have "struct pages". A raw PFN mapping
- * will have each page table entry just pointing to a raw page frame
- * number, and as far as the VM layer is concerned, those do not have
- * pages associated with them - even if the PFN might point to memory
+ * A raw VM_PFNMAP mapping (ie. one that is not COWed) may not have any "struct
+ * page" backing, and even if they do, they are not refcounted. COWed pages of
+ * a VM_PFNMAP do always have a struct page, and they are normally refcounted
+ * (they are _normal_ pages).
+ *
+ * So a raw PFNMAP mapping will have each page table entry just pointing
+ * to a page frame number, and as far as the VM layer is concerned, those do
+ * not have pages associated with them - even if the PFN might point to memory
  * that otherwise is perfectly fine and has a "struct page".
  *
- * The way we recognize those mappings is through the rules set up
- * by "remap_pfn_range()": the vma will have the VM_PFNMAP bit set,
- * and the vm_pgoff will point to the first PFN mapped: thus every
+ * The way we recognize COWed pages within VM_PFNMAP mappings is through the
+ * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
+ * set, and the vm_pgoff will point to the first PFN mapped: thus every
  * page that is a raw mapping will always honor the rule
  *
  *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
  *
- * and if that isn't true, the page has been COW'ed (in which case it
- * _does_ have a "struct page" associated with it even if it is in a
- * VM_PFNMAP range).
+ * A call to vm_normal_page() will return NULL for such a page.
+ *
+ * If the page doesn't follow the "remap_pfn_range()" rule in a VM_PFNMAP
+ * then the page has been COW'ed. A COW'ed page _does_ have a "struct page"
+ * associated with it even if it is in a VM_PFNMAP range. Calling
+ * vm_normal_page() on such a page will therefore return the "struct page".
+ *
+ *
+ * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
+ * page" backing, however the difference is that _all_ pages with a struct
+ * page (that is, those where pfn_valid is true) are refcounted and considered
+ * normal pages by the VM. The disadvantage is that pages are refcounted
+ * (which can be slower and simply not an option for some PFNMAP users). The
+ * advantage is that we don't have to follow the strict linearity rule of
+ * PFNMAP mappings in order to support COWable mappings.
+ *
+ * A call to vm_normal_page() with a VM_MIXEDMAP mapping will return the
+ * associated "struct page" or NULL for memory not backed by a "struct page".
+ *
+ *
+ * All other mappings should have a valid struct page, which will be
+ * returned by a call to vm_normal_page().
  */
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
 
-	if (unlikely(vma->vm_flags & VM_PFNMAP)) {
-		unsigned long off = (addr - vma->vm_start) >> PAGE_SHIFT;
-		if (pfn == vma->vm_pgoff + off)
-			return NULL;
-		if (!is_cow_mapping(vma->vm_flags))
-			return NULL;
+	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+		if (vma->vm_flags & VM_MIXEDMAP) {
+			if (!pfn_valid(pfn))
+				return NULL;
+			goto out;
+		} else {
+			unsigned long off = (addr-vma->vm_start) >> PAGE_SHIFT;
+			if (pfn == vma->vm_pgoff + off)
+				return NULL;
+			if (!is_cow_mapping(vma->vm_flags))
+				return NULL;
+		}
 	}
 
 #ifdef CONFIG_DEBUG_VM
@@ -422,6 +452,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_
 	 * The PAGE_ZERO() pages and various VDSO mappings can
 	 * cause them to exist.
 	 */
+out:
 	return pfn_to_page(pfn);
 }
 
@@ -1232,8 +1263,11 @@ int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
+	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
+	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
+						(VM_PFNMAP|VM_MIXEDMAP));
+	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
 
 	retval = -ENOMEM;
 	pte = get_locked_pte(mm, addr, &ptl);
@@ -2365,10 +2399,13 @@ static noinline int do_no_pfn(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long pfn;
 
 	pte_unmap(page_table);
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
+	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
+	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
 
 	pfn = vma->vm_ops->nopfn(vma, address & PAGE_MASK);
+	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
+
 	if (unlikely(pfn == NOPFN_OOM))
 		return VM_FAULT_OOM;
 	else if (unlikely(pfn == NOPFN_SIGBUS))