  17 Jul, 2009
    • Lee Schermerhorn · 7eb81022
      I noticed that alloc_bootmem_huge_page() will only advance to the next
      node on failure to allocate a huge page, potentially filling nodes with
      huge-pages.  I asked about this on linux-mm and linux-numa, cc'ing the
      usual huge page suspects.
      
      Mel Gorman responded:
      
      	I strongly suspect that the same node being used until allocation
      	failure instead of round-robin is an oversight and not deliberate
      	at all. It appears to be a side-effect of a fix made way back in
      	commit 63b4613c ["hugetlb: fix
      	hugepage allocation with memoryless nodes"]. Prior to that patch
      	it looked like allocations would always round-robin even when
      	allocation was successful.
      
      This patch, factored out of my "hugetlb mempolicy" series, moves the
      advance of the hstate's next-node-to-allocate pointer to before the
      test for success of the attempted allocation.
      
      Note that alloc_bootmem_huge_page() is only used for order > MAX_ORDER
      huge pages.
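
      As a rough illustration of the change, here is a minimal user-space
      sketch (not the kernel's hstate code; try_alloc_on_node() and the
      bookkeeping are invented for the example).  Advancing the next-node
      index before checking the allocation result keeps successive calls
      round-robin across nodes:

      #include <stdbool.h>
      #include <stdio.h>

      #define NR_NODES 4

      /* Stand-in for a per-node allocation attempt; pretend node 2 is full. */
      static bool try_alloc_on_node(int node)
      {
              return node != 2;
      }

      static int next_node;    /* analogous to the hstate's next-node pointer */

      static bool alloc_one_huge_page(void)
      {
              int start = next_node;

              do {
                      int node = next_node;

                      /* the fix: advance before testing the result, so the
                       * next call starts from the following node even when
                       * this allocation succeeds */
                      next_node = (next_node + 1) % NR_NODES;

                      if (try_alloc_on_node(node)) {
                              printf("allocated on node %d\n", node);
                              return true;
                      }
              } while (next_node != start);

              return false;    /* every node failed */
      }

      int main(void)
      {
              for (int i = 0; i < 6; i++)
                      alloc_one_huge_page();
              return 0;
      }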
      
      I'll post a separate patch for mainline/stable, as the above-mentioned
      "balance freeing" series renamed the next-node-to-alloc function.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Andy Whitcroft <apw@canonical.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  30 Jun, 2009
    • Lee Schermerhorn · 34d925f9
      Fixes a bug detected by the libhugetlbfs test suite in:
      hugetlb-use-free_pool_huge_page-to-return-unused-surplus-pages.patch
      
      We can't just "continue" for a node with no surplus pages when returning
      unused surplus pages; we need to advance to the 'next node to free'.
      
      With this fix, the "hugetlb balance free across nodes" series passes
      the test suite.
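
      As a minimal sketch of the loop shape (plain user-space C; names such as
      return_unused_surplus() and next_to_free are made up, this is not the
      actual hugetlb code): the 'next node to free' pointer has to advance
      before any continue, so that a node with no surplus pages does not stall
      the rotation.

      #include <stdio.h>

      #define NR_NODES 4

      static int surplus[NR_NODES] = { 0, 3, 0, 2 };  /* surplus pages per node */
      static int next_to_free;                        /* 'next node to free' */

      static void return_unused_surplus(int count)
      {
              int empty_nodes = 0;

              while (count > 0 && empty_nodes < NR_NODES) {
                      int node = next_to_free;

                      /* advance first: even a node with nothing to free must
                       * move the pointer along; a bare continue here would
                       * leave it stuck on this node */
                      next_to_free = (next_to_free + 1) % NR_NODES;

                      if (surplus[node] == 0) {
                              empty_nodes++;
                              continue;
                      }

                      empty_nodes = 0;
                      surplus[node]--;
                      count--;
                      printf("freed one surplus page from node %d\n", node);
              }
      }

      int main(void)
      {
              return_unused_surplus(4);
              return 0;
      }
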
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Eric Whitney <eric.whitney@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  02 Jun, 2009
    • Hisashi Hifumi · c512db2e
      I added blk_run_backing_dev() to page_cache_async_readahead() so that
      readahead I/O is unplugged, which improves throughput, especially in
      RAID environments.
      
      In the normal case, if page N becomes uptodate at time T(N), then
      T(N) <= T(N+1) holds.  With RAID (and NFS to some degree), there is no
      strict ordering: data arrival time depends on the runtime status of the
      individual disks, which breaks that formula.  So in
      do_generic_file_read(), just after submitting the async readahead IO
      request, the current page may well already be uptodate; in that case the
      page won't be locked, and the block device won't be implicitly
      unplugged:
      
              if (PageReadahead(page))
                      page_cache_async_readahead();
              if (!PageUptodate(page))
                      goto page_not_up_to_date;
              //...
      page_not_up_to_date:
              lock_page_killable(page);
      
      Therefore explicit unplugging can help.
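
      The shape of the change is roughly the following addition at the end of
      page_cache_async_readahead() (a simplified sketch, not the literal patch
      hunk):

              /* submit the asynchronous readahead */
              ondemand_readahead(mapping, ra, filp, true, offset, req_size);

              /*
               * Normally the triggering page is !uptodate, so the caller's
               * lock_page() unplugs the device implicitly.  With RAID (or NFS)
               * the page may already be uptodate by now, so kick the queue
               * explicitly.
               */
              if (PageUptodate(page))
                      blk_run_backing_dev(mapping->backing_dev_info, NULL);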
      
      The following are the dd test results:
      
      #dd if=testdir/testfile of=/dev/null bs=16384
      
      -2.6.30-rc6
      1048576+0 records in
      1048576+0 records out
      17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
      
      -2.6.30-rc6-patched
      1048576+0 records in
      1048576+0 records out
      17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
      
      (7-disk RAID-0 array)
      
      -2.6.30-rc6
      1054976+0 records in
      1054976+0 records out
      17284726784 bytes (17 GB) copied, 212.233 seconds, 81.4 MB/s
      
      -2.6.30-rc6-patched
      1054976+0 records in
      1054976+0 records out
      17284726784 bytes (17 GB) copied, 198.878 seconds, 86.9 MB/s
      
      (7-disk RAID-5 array)
      Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
      Acked-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  19 Aug, 2009
    • James Toy · 8e580e58
      The following commit makes console open fail while booting:
      
      	commit d966976924119acd35a431adbb95292082f73f8c
      	Author: Alan Cox <alan@linux.intel.com>
      	Date:   Tue Aug 11 10:23:05 2009 +1000
      
      	tty: make the kref destructor occur asynchronously
      
      Because the tty release routines now run in a workqueue, an error like
      the following is reported while booting:
      
      INIT open /dev/console Input/output error
      
      The reason is that closing now involves some latency, and when we open a
      tty whose close has not yet finished, -EIO is returned.
      
      Fix it following Alan's suggestion:
      
      Fun, but it's actually not a bug, and the fix is wrong in itself, as the
      port may be closing but not yet being destructed, in which case it seems
      to do the wrong thing.  Opening a tty that is closing (and it could be
      closing for long periods) is supposed to return -EIO.
      
      I suspect a better way to deal with this and keep the old console timing
      is to split tty->shutdown into two functions.
      
      tty->shutdown() - called synchronously just before we dump the tty onto
      the waitqueue for destruction
      
      tty->cleanup() - called when the destructor runs.
      
      We would then do the shutdown part, which can happen fine in IRQ context,
      before queueing the rest of the release (from tty->magic = 0 ...  the end)
      to occur asynchronously.
      
      The USB update in -next would then need a call like
      
             if (tty->cleanup)
                     tty->cleanup(tty);
      
      at the top of the async function, and the USB shutdown to be split
      between shutdown and cleanup, as the USB resource cleanup and final tidy
      cannot occur synchronously since it needs to sleep.
      
      In other words, the logic becomes:
      
             final kref put
                     make object unfindable
      
             async
                     clean it up
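
      In miniature, the two-phase pattern looks like the user-space sketch
      below (obj_ops, obj_put() and deferred_release() are invented for the
      example; the real tty code queues the deferred part onto a workqueue
      rather than calling it inline):

      #include <stdio.h>
      #include <stdlib.h>

      struct obj_ops {
              void (*shutdown)(void *obj);   /* synchronous, no sleeping */
              void (*cleanup)(void *obj);    /* deferred, may sleep */
      };

      struct obj {
              int refcount;
              const struct obj_ops *ops;
      };

      /* stand-in for queueing onto a workqueue; here we just call it inline */
      static void deferred_release(struct obj *o)
      {
              if (o->ops->cleanup)
                      o->ops->cleanup(o);
              free(o);
      }

      static void obj_put(struct obj *o)
      {
              if (--o->refcount)
                      return;

              /* synchronous part: make the object unfindable right away */
              if (o->ops->shutdown)
                      o->ops->shutdown(o);

              /* everything slow happens afterwards, off the fast path */
              deferred_release(o);
      }

      static void demo_shutdown(void *obj) { (void)obj; printf("shutdown: unfindable\n"); }
      static void demo_cleanup(void *obj)  { (void)obj; printf("cleanup: slow teardown\n"); }

      static const struct obj_ops demo_ops = { demo_shutdown, demo_cleanup };

      int main(void)
      {
              struct obj *o = malloc(sizeof(*o));

              if (!o)
                      return 1;
              o->refcount = 1;
              o->ops = &demo_ops;
              obj_put(o);
              return 0;
      }
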
      Signed-off-by: Dave Young <hidave.darkstar@gmail.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Emmanuel Benisty <benisty.e@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>