  1. 17 Jul, 2009 1 commit
    • Lee Schermerhorn authored (commit 7eb81022):

      I noticed that alloc_bootmem_huge_page() will only advance to the next
      node on failure to allocate a huge page, potentially filling a single
      node with huge pages before ever moving on.  I asked about this on
      linux-mm and linux-numa, cc'ing the usual huge page suspects.
      
      Mel Gorman responded:
      
      	I strongly suspect that the same node being used until allocation
      	failure instead of round-robin is an oversight and not deliberate
      	at all. It appears to be a side-effect of a fix made way back in
      	commit 63b4613c ["hugetlb: fix
      	hugepage allocation with memoryless nodes"]. Prior to that patch
      	it looked like allocations would always round-robin even when
      	allocation was successful.
      
      This patch, factored out of my "hugetlb mempolicy" series, moves the
      advance of the hstate next node from which to allocate so that it
      happens before the test for success of the attempted allocation (see
      the sketch below).
      
      Note that alloc_bootmem_huge_page() is only used for order > MAX_ORDER
      huge pages.
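      
      For illustration, a minimal user-space sketch of the two orderings;
      next_nid, next_node(), and alloc_on_node() are stand-ins invented for
      this sketch, not the actual mm/hugetlb.c identifiers:
      
              #include <stdio.h>
              #include <stdbool.h>
      
              #define NR_NODES 4
      
              static int next_nid;  /* stand-in for the hstate's next-node cursor */
      
              static int next_node(int nid) { return (nid + 1) % NR_NODES; }
      
              /* Simulated allocation: every node succeeds, which is exactly
               * the case where the pre-fix logic never leaves node 0. */
              static bool alloc_on_node(int nid) { (void)nid; return true; }
      
              /* Pre-fix ordering: advance only on failure, so a node that
               * keeps succeeding absorbs every huge page. */
              static int alloc_buggy(void)
              {
                      while (!alloc_on_node(next_nid))
                              next_nid = next_node(next_nid);
                      return next_nid;
              }
      
              /* Post-fix ordering: advance the cursor before testing the
               * result, so allocations round-robin regardless of success. */
              static int alloc_fixed(void)
              {
                      int nid;
      
                      do {
                              nid = next_nid;
                              next_nid = next_node(nid);
                      } while (!alloc_on_node(nid));
                      return nid;
              }
      
              int main(void)
              {
                      printf("buggy:");
                      next_nid = 0;
                      for (int i = 0; i < 8; i++)
                              printf(" %d", alloc_buggy());  /* 0 0 0 0 0 0 0 0 */
      
                      printf("\nfixed:");
                      next_nid = 0;
                      for (int i = 0; i < 8; i++)
                              printf(" %d", alloc_fixed());  /* 0 1 2 3 0 1 2 3 */
                      printf("\n");
                      return 0;
              }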
      
      I'll post a separate patch for mainline/stable, as the above-mentioned
      "balance freeing" series renamed the next-node-to-alloc function.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Andy Whitcroft <apw@canonical.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 30 Jun, 2009 1 commit
    • Lee Schermerhorn authored (commit 34d925f9):

      Fixes a bug detected by the libhugetlbfs test suite in:
      hugetlb-use-free_pool_huge_page-to-return-unused-surplus-pages.patch
      
      We can't just "continue" for a node with no surplus pages when returning
      unused surplus pages; we need to advance to the 'next node to free'
      first (see the sketch below).
      
      With this fix, the "hugetlb balance free across nodes" series passes
      the test suite.
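      
      For illustration, a minimal user-space sketch of the fix; the surplus[]
      array, next_node_to_free, and return_one_surplus() are stand-ins
      invented for this sketch, not the actual free_pool_huge_page() code.
      Without advancing the cursor before the continue, the loop below would
      spin forever once it reached a node with no surplus pages:
      
              #include <stdio.h>
      
              #define NR_NODES 3
      
              /* Hypothetical per-node surplus counts; node 1 has none,
               * which is exactly the case that trips the bug. */
              static int surplus[NR_NODES] = { 2, 0, 1 };
              static int next_node_to_free;
      
              static int next_node(int nid) { return (nid + 1) % NR_NODES; }
      
              static void return_one_surplus(void)
              {
                      for (;;) {
                              int nid = next_node_to_free;
      
                              /* The fix: advance before, not instead of,
                               * skipping a node with no surplus pages. */
                              next_node_to_free = next_node(nid);
                              if (surplus[nid] == 0)
                                      continue;
      
                              surplus[nid]--;
                              printf("freed surplus page on node %d\n", nid);
                              return;
                      }
              }
      
              int main(void)
              {
                      for (int i = 0; i < 3; i++)
                              return_one_surplus();
                      return 0;
              }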
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Eric Whitney <eric.whitney@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  3. 02 Jun, 2009 1 commit
    • Hisashi Hifumi authored (commit c512db2e):

      I added a blk_run_backing_dev() call to page_cache_async_readahead() so
      that readahead I/O is unplugged, improving throughput especially in
      RAID environments.
      
      In the normal case, if page N becomes uptodate at time T(N), then
      T(N) <= T(N+1) holds.  With RAID (and NFS to some degree) there is no
      such strict ordering: data arrival time depends on the runtime status
      of the individual disks, which breaks that formula.  So in
      do_generic_file_read(), just after submitting the async readahead I/O
      request, the current page may well already be uptodate, in which case
      the page won't be locked and the block device won't be implicitly
      unplugged:
      
               if (PageReadahead(page))
                       page_cache_async_readahead()
               if (!PageUptodate(page))
                       goto page_not_up_to_date;
               //...
       page_not_up_to_date:
               lock_page_killable(page);
      
      Therefore explicit unplugging can help.
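      
      For illustration, a sketch of where the call lands, assuming the
      2.6.30-era shape of page_cache_async_readahead() in mm/readahead.c
      (early-exit checks elided; not the verbatim patch):
      
              void page_cache_async_readahead(struct address_space *mapping,
                                              struct file_ra_state *ra,
                                              struct file *filp,
                                              struct page *page, pgoff_t offset,
                                              unsigned long req_size)
              {
                      /* ... existing early-exit checks elided ... */
      
                      /* do read-ahead */
                      ondemand_readahead(mapping, ra, filp, true, offset, req_size);
      
              #ifdef CONFIG_BLOCK
                      /*
                       * Explicitly unplug the backing device: the implicit
                       * unplug in lock_page() is skipped when the current
                       * page is already uptodate.
                       */
                      blk_run_backing_dev(mapping->backing_dev_info, NULL);
              #endif
              }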
      
      Following are the test results with dd.
      
      #dd if=testdir/testfile of=/dev/null bs=16384
      
      -2.6.30-rc6
      1048576+0 records in
      1048576+0 records out
      17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
      
      -2.6.30-rc6-patched
      1048576+0 records in
      1048576+0 records out
      17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
      
      (7Disks RAID-0 Array)
      
      -2.6.30-rc6
      1054976+0 records in
      1054976+0 records out
      17284726784 bytes (17 GB) copied, 212.233 seconds, 81.4 MB/s
      
      -2.6.30-rc6-patched
      1054976+0 records in
      1054976+0 records out
      17284726784 bytes (17 GB) copied, 198.878 seconds, 86.9 MB/s
      
      (7Disks RAID-5 Array)
      Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
      Acked-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>