Commit 015970e8 authored by Lee Schermerhorn, committed by james toy

I noticed that alloc_bootmem_huge_page() will only advance to the next
node on failure to allocate a huge page.  I asked about this on linux-mm
and linux-numa, cc'ing the usual huge page suspects.  Mel Gorman
responded:

	I strongly suspect that the same node being used until allocation
	failure instead of round-robin is an oversight and not deliberate
	at all. It appears to be a side-effect of a fix made way back in
	commit 63b4613c ["hugetlb: fix
	hugepage allocation with memoryless nodes"]. Prior to that patch
	it looked like allocations would always round-robin even when
	allocation was successful.

Andy Whitcroft countered that the existing behavior looked like Andi
Kleen's original implementation and suggested that we ask him.  We did,
and Andi replied that his intention was to interleave the allocations.  So,
...

This patch moves the advance of the hstate's next node from which to
allocate to before the test for success of the attempted allocation.  This
unconditionally advances the next node from which to allocate,
interleaving successful allocations over the nodes with sufficient
contiguous memory and skipping over nodes that fail the huge page
allocation attempt.

Note that alloc_bootmem_huge_page() will only be called for huge pages of
order > MAX_ORDER.
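
For illustration only, below is a minimal userspace sketch (not the kernel
code; the node count and the node_has_contiguous_memory() helper are
invented for the example) of the node-selection behaviour the patch
produces: the next-node index is advanced on every attempt, so successful
allocations interleave across nodes and nodes without enough contiguous
memory are simply skipped.

/*
 * Minimal userspace sketch (NOT kernel code): simulates the node-selection
 * behaviour after this patch.  The next-node index is advanced on every
 * attempt, whether or not the allocation succeeds, so successful
 * allocations interleave across nodes and failing nodes are skipped.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4	/* pretend we have four online nodes */

/* Hypothetical stand-in for "can this node satisfy a huge page?". */
static bool node_has_contiguous_memory(int nid)
{
	return nid != 2;	/* pretend node 2 has no contiguous memory left */
}

int main(void)
{
	int next_nid = 0;	/* analogous to h->hugetlb_next_nid */

	for (int i = 0; i < 6; i++) {	/* place six bootmem huge pages */
		int nr_nodes = NR_NODES;

		while (nr_nodes--) {
			int nid = next_nid;

			/* Advance unconditionally, as the patch does. */
			next_nid = (next_nid + 1) % NR_NODES;

			if (node_has_contiguous_memory(nid)) {
				printf("huge page %d placed on node %d\n", i, nid);
				break;
			}
			/* Allocation failed on this node; try the next one. */
		}
	}
	return 0;
}

With the pre-patch placement of the advance (only after a failed attempt),
the same example would place every page on node 0 until that node ran out
of contiguous memory.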
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: David Rientjes <rientjes@google.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Cc: <stable@kernel.org>		[2.6.27.x, 2.6.28.x, 2.6.29.x, 2.6.30.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 36b6077d
@@ -1011,6 +1011,7 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 				NODE_DATA(h->hugetlb_next_nid),
 				huge_page_size(h), huge_page_size(h), 0);
 
+		hstate_next_node(h);
 		if (addr) {
 			/*
 			 * Use the beginning of the huge page to store the
@@ -1020,7 +1021,6 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 			m = addr;
 			goto found;
 		}
-		hstate_next_node(h);
 		nr_nodes--;
 	}
 	return 0;