01 Sep, 2006 (5 commits)
    • [PATCH] md: Fix issues with referencing rdev in md/raid1 · ddac7c7e
      NeilBrown authored
      We need to be careful when referencing mirrors[i].rdev.  It can disappear
      under us at various times.
      
      So:
        fix a couple of problem places.
        comment a couple of non-problem places
        move an 'atomic_add' which dereferences rdev down a little
          way to somewhere it is sure not to be NULL (see the sketch below).
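
      As an illustration of the pattern (a minimal sketch, not the literal
      patch; the wrapper function is hypothetical and corrected_errors is
      just an example of a counter reached through rdev):

        static void example_rdev_update(conf_t *conf, int i, int s)
        {
                mdk_rdev_t *rdev;

                rcu_read_lock();
                rdev = rcu_dereference(conf->mirrors[i].rdev);
                if (rdev && !test_bit(Faulty, &rdev->flags)) {
                        /* pin the rdev so it cannot disappear under us */
                        atomic_inc(&rdev->nr_pending);
                        rcu_read_unlock();

                        /* safe: rdev is known non-NULL and pinned */
                        atomic_add(s, &rdev->corrected_errors);

                        rdev_dec_pending(rdev, conf->mddev);
                } else {
                        /* the mirror vanished under us: just skip it */
                        rcu_read_unlock();
                }
        }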

      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ZVC: Scale thresholds depending on the size of the system · df9ecaba
      Christoph Lameter authored
      The ZVC counter update threshold is currently set to a fixed value of 32.
      This patch sets up the threshold depending on the number of processors and
      the sizes of the zones in the system.
      
      With the current threshold of 32, I was able to observe slight contention
      when more than 130-140 processors concurrently updated the counters.  The
      contention vanished when I either increased the threshold to 64 or used
      Andrew's idea of overstepping the interval (see ZVC overstep patch).
      
      However, we saw contention again at 220-230 processors.  So we need higher
      values for larger systems.
      
      But the current default is already a bit of overkill for smaller
      systems.  Some systems have tiny zones where precision matters.  For
      example i386 and x86_64 have 16M DMA zones and either 900M ZONE_NORMAL or
      ZONE_DMA32.  These are even present on SMP and NUMA systems.
      
      The patch here sets up a threshold based on the number of processors in the
      system and the size of the zone that these counters are used for.  The
      threshold should grow logarithmically, so we use fls() as an easy
      approximation.
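
      As a sketch of the scaling (a hedged reconstruction, not necessarily
      the exact code; the constants are illustrative except for the 125
      cap, which matches the figure quoted below):

        /*
         * Scale the per-cpu counter threshold with the logarithm of the
         * number of processors and of the zone size.  fls() serves as a
         * cheap integer log2.  The cap bounds the worst-case deviation
         * (threshold times number of cpus) on very large machines.
         */
        static int calculate_threshold(struct zone *zone)
        {
                int mem;        /* zone size in units of 128MB */
                int threshold;

                mem = zone->present_pages >> (27 - PAGE_SHIFT);

                threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));

                return min(125, threshold);
        }

      A 16M DMA zone has mem == 0, so small machines keep a very small
      threshold and full precision there, while the large zones on the
      1024-processor test machine hit the 125 cap quoted below.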
      
      Results of tests on a system with 1024 processors (4TB RAM)
      
      The following output is from a test allocating 1GB of memory concurrently
      on each processor (each allocation runs in a forked process, so contention
      on mmap_sem and the pte locks is not a factor); all times are in seconds:
      
                                      AVG        MIN
      TYPE:               CPUS       WALL       WALL        SYS     USER     TOTCPU
      fork                   1      0.552      0.552      0.540    0.012      0.552
      fork                   4      0.552      0.548      2.164    0.036      2.200
      fork                  16      0.564      0.548      8.812    0.164      8.976
      fork                 128      0.580      0.572     72.204    1.208     73.412
      fork                 256      1.300      0.660    310.400    2.160    312.560
      fork                 512      3.512      0.696   1526.836    4.816   1531.652
      fork                1020     20.024      0.700  17243.176    6.688  17249.863
      
      So a threshold of 32 is fine up to 128 processors. At 256 processors contention
      becomes a factor.
      
      Overstepping the counter (earlier patch) improves the numbers a bit:
      
      fork                   4      0.552      0.548      2.164    0.040      2.204
      fork                  16      0.552      0.548      8.640    0.148      8.788
      fork                 128      0.556      0.548     69.676    0.956     70.632
      fork                 256      0.876      0.636    212.468    2.108    214.576
      fork                 512      2.276      0.672    997.324    4.260   1001.584
      fork                1020     13.564      0.680  11586.436    6.088  11592.523
      
      There is still contention at 512 and 1020 processors, though at 1020
      it is down by a third.  256 still shows a slight bit of contention.
      
      After this patch the counter threshold will be set to 125, which reduces
      contention significantly:
      
      fork                 128      0.560      0.548     69.776    0.932     70.708
      fork                 256      0.636      0.556    143.460    2.036    145.496
      fork                 512      0.640      0.548    284.244    4.236    288.480
      fork                1020      1.500      0.588   1326.152    8.892   1335.044
      
      [akpm@osdl.org: !SMP build fix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ZVC: Overstep counters · a302eb4e
      Christoph Lameter authored
      Increments and decrements are usually grouped rather than mixed.  We can
      optimize the inc and dec functions for that case.
      
      Increment and decrement the counters by 50% more than the threshold in
      those cases and set the differential accordingly.  This decreases the need
      to update the atomic counters.
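
      A minimal sketch of the mechanism, modelled on mm/vmstat.c of this
      era (STAT_THRESHOLD being the fixed threshold that the scaling patch
      above later replaces with a computed one):

        void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
        {
                struct per_cpu_pageset *pcp = zone_pcp(zone, smp_processor_id());
                s8 *p = pcp->vm_stat_diff + item;

                (*p)++;

                if (unlikely(*p > STAT_THRESHOLD)) {
                        int overstep = STAT_THRESHOLD / 2;

                        /*
                         * Fold the diff plus an extra 50% into the global
                         * counter and bias the per-cpu diff the other way,
                         * so a run of further increments has roughly 1.5x
                         * the threshold of headroom before the next atomic
                         * update.
                         */
                        zone_page_state_add(*p + overstep, zone, item);
                        *p = -overstep;
                }
        }

      The cost is that the global counter can temporarily run ahead of the
      true value by up to overstep until the per-cpu diff catches up.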
      
      The idea came originally from Andrew Morton.  The overstepping alone was
      sufficient to address the contention issue found when updating the global
      and the per zone counters from 160 processors.
      
      Also remove some code in dec_zone_page_state.

      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband · b63fe1ba
      Linus Torvalds authored
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
        IB/mthca: Use IRQ safe locks to protect allocation bitmaps
    • IB/mthca: Use IRQ safe locks to protect allocation bitmaps · 5a4e6dcc
      Roland Dreier authored
      It is supposed to be OK to call mthca_create_ah() and mthca_destroy_ah()
      from any context.  However, for mem-full HCAs, these functions use the
      mthca_alloc() and mthca_free() bitmap helpers, and those helpers use
      non-IRQ-safe spin_lock() internally.  Lockdep correctly warns that
      this could lead to a deadlock.  Fix this by changing mthca_alloc() and
      mthca_free() to use spin_lock_irqsave().
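
      A sketch of the shape of the fix (the bitmap scan is simplified
      here; the point is the irqsave conversion):

        u32 mthca_alloc(struct mthca_alloc *alloc)
        {
                unsigned long flags;
                u32 obj;

                /* was: spin_lock(&alloc->lock) -- not safe if another
                 * caller takes the lock from IRQ context */
                spin_lock_irqsave(&alloc->lock, flags);

                obj = find_next_zero_bit(alloc->table, alloc->max, alloc->last);
                if (obj >= alloc->max)
                        obj = find_first_zero_bit(alloc->table, alloc->max);

                if (obj < alloc->max)
                        set_bit(obj, alloc->table);
                else
                        obj = -1;

                /* was: spin_unlock(&alloc->lock) */
                spin_unlock_irqrestore(&alloc->lock, flags);

                return obj;
        }
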
      Signed-off-by: Roland Dreier <rolandd@cisco.com>