1. 30 Apr, 2007 4 commits
    • cfq-iosched: rework the whole round-robin list concept · d9e7620e
      Jens Axboe authored
      Drawing on some inspiration from the CFS CPU scheduler design, overhaul
      the management of the pending cfq_queue lists. Currently CFQ uses a
      doubly linked list per priority level for sorting and service use.
      Kill those lists and instead maintain an rbtree of cfq_queues, sorted
      by when to service them.
      
      This unfortunately means that the ionice levels aren't as strong
      anymore; improving that is left for later work. We now only scale the
      slice time, not the number of times a queue is serviced. This means
      that latency is better (for all priority levels), but that the
      distinction between the highest and lower levels isn't as big.
      
      The diffstat speaks for itself.
      
       cfq-iosched.c |  363 +++++++++++++++++---------------------------------
       1 file changed, 125 insertions(+), 238 deletions(-)
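
      As an illustration of the rbtree service list described above, here is
      a minimal sketch using the kernel rbtree API; the struct and function
      names are hypothetical and heavily simplified, not the actual
      cfq-iosched code:

        #include <linux/rbtree.h>

        /* Hypothetical, simplified stand-in for a cfq_queue on the tree. */
        struct service_entry {
                struct rb_node rb_node;
                unsigned long rb_key;   /* when to service this queue next */
        };

        /*
         * Insert an entry into the service tree, ordered by rb_key, so that
         * the leftmost node is always the next queue to service.
         */
        static void service_tree_add(struct rb_root *root,
                                     struct service_entry *se)
        {
                struct rb_node **p = &root->rb_node;
                struct rb_node *parent = NULL;

                while (*p) {
                        struct service_entry *entry;

                        parent = *p;
                        entry = rb_entry(parent, struct service_entry,
                                         rb_node);
                        if (se->rb_key < entry->rb_key)
                                p = &(*p)->rb_left;
                        else
                                p = &(*p)->rb_right;
                }

                rb_link_node(&se->rb_node, parent, p);
                rb_insert_color(&se->rb_node, root);
        }

      In this sketch, selecting the next queue then amounts to taking
      rb_first() of the tree instead of walking per-priority lists.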
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • cfq-iosched: minor updates · 1afba045
      Jens Axboe authored
      - Clear the queue_new flag when the queue is selected.
      - Only select a queue other than the first in cfq_get_best_queue() if
        there is a substantial difference between the best and the first
        (see the sketch below).
      - Get rid of ->busy_rr.
      - Only select a close cooperator if the current queue is known to take
        a while to "think".
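
      A rough sketch of the second point (hypothetical names, cost metric and
      threshold, not the actual cfq_get_best_queue() code):

        struct cfq_queue;       /* opaque here; only pointers are compared */

        /*
         * Stay with the first queue unless the "best" candidate wins by more
         * than a threshold on whatever cost metric is in use, so we don't
         * jump around for marginal gains.
         */
        static struct cfq_queue *pick_queue(struct cfq_queue *first,
                                            unsigned long first_cost,
                                            struct cfq_queue *best,
                                            unsigned long best_cost,
                                            unsigned long threshold)
        {
                if (!best || best == first)
                        return first;
                if (best_cost + threshold > first_cost)
                        return first;
                return best;
        }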
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • cfq-iosched: development update · 6d048f53
      Jens Axboe authored
      - Implement logic for detecting cooperating processes, so we
        choose the best available queue whenever possible.
      
      - Improve residual slice time accounting.
      
      - Remove dead code: we no longer see async requests coming in on
        sync queues. That part was removed a long time ago. That means
        that we can also remove the difference between cfq_cfqq_sync()
        and cfq_cfqq_class_sync(); they are now identical. We can also
        kill the on_dispatch array and just make it a counter.
      
      - Allow a process to go into the current list, if it hasn't been
        serviced in this scheduler tick yet.
      
      Possible future improvements include caching the cfqq lookup
      in cfq_close_cooperator(), so we don't have to look it up twice;
      cfq_get_best_queue() should just reuse that decision instead
      of making it again.
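
      Below is a minimal sketch of the "close cooperator" check mentioned
      above (hypothetical helper and threshold, not the actual
      cfq_close_cooperator() logic):

        #include <linux/types.h>

        /*
         * Treat another queue as a close cooperator if its next request lies
         * within a small sector distance of the position we just serviced,
         * so switching to it keeps the disk head nearby.
         */
        static bool requests_are_close(sector_t last_pos, sector_t next_pos,
                                       sector_t max_dist)
        {
                sector_t dist = next_pos > last_pos ? next_pos - last_pos
                                                    : last_pos - next_pos;

                return dist <= max_dist;
        }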
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • cfq-iosched: improve preemption for cooperating tasks · 1e3335de
      Jens Axboe authored
      When testing the syslet async io approach, I discovered that CFQ
      sometimes didn't perform as well as expected. cfq_should_preempt()
      needs a better check for cooperating tasks, so fix that by allowing
      preemption of an equal-priority queue if the recently queued request
      is as good a candidate for IO as the one we are currently waiting for.
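
      A rough sketch of the relaxed rule (hypothetical fields and helper, not
      the actual cfq_should_preempt() code): in addition to the usual
      priority-based cases, an equal-priority queue may preempt when its new
      request is close to where the disk head already is.

        #include <linux/types.h>

        /* Hypothetical inputs for the decision; not the real cfqq fields. */
        struct preempt_info {
                int prio_new, prio_cur;         /* lower value = higher prio */
                bool new_is_sync, cur_is_sync;
                sector_t head_pos, new_pos;     /* head position, new request */
                sector_t close_dist;            /* what counts as "close" */
        };

        static bool should_preempt(const struct preempt_info *p)
        {
                sector_t dist;

                if (p->prio_new < p->prio_cur)
                        return true;
                if (p->new_is_sync && !p->cur_is_sync)
                        return true;
                if (p->prio_new != p->prio_cur)
                        return false;

                /* equal priority: preempt only for a close, good candidate */
                dist = p->new_pos > p->head_pos ? p->new_pos - p->head_pos
                                                : p->head_pos - p->new_pos;
                return dist <= p->close_dist;
        }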
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  2. 28 Apr, 2007 3 commits
    • Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6 · b9099ff6
      Linus Torvalds authored
      * 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
        sis900: Allocate rx replacement buffer before rx operation
        usb-net/pegasus: simplify carrier detection
    • sis900: Allocate rx replacement buffer before rx operation · dc5a1449
      Neil Horman authored
      Just found a hole in my last patch; it was reported to me shortly after
      we integrated that patch. The report was of an oops that took place
      inside netif_rx when using the sis900 driver. Looking at my original
      patch, I noted that there was a spot between the new skb_alloc and the
      refill_rx_ring label where skb got reassigned to the pointer currently
      held in the rx_ring, for the purpose of receiving the frame. The result,
      however, is that the buffer that gets passed to netif_rx (if it is
      called) then gets placed right back into the rx_ring. So if you receive
      frames fast enough, the skb being processed by the network stack can get
      corrupted. The reporter is testing the fix below (I'm not near my
      hardware at the moment to test it myself), but I wanted to post it for
      review ASAP. I'll post test results when I hear them, but I think this
      is a pretty straightforward fix: it just uses a separate pointer for the
      rx operation, so that we don't improperly reassign the pointer that we
      use to refill the rx ring.
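
      The shape of the fix, roughly (hypothetical and heavily simplified, not
      the actual sis900_rx() code): allocate the replacement buffer first and
      keep the received buffer in its own pointer, so the skb handed to
      netif_rx() is never written back into the rx_ring.

        #include <linux/netdevice.h>
        #include <linux/skbuff.h>
        #include <linux/etherdevice.h>

        #define RX_BUF_SIZE 1536        /* hypothetical size for this sketch */

        static void rx_one_frame_sketch(struct net_device *dev,
                                        struct sk_buff **ring_slot,
                                        int pkt_len)
        {
                /* replacement buffer that will refill the ring slot */
                struct sk_buff *new_skb = netdev_alloc_skb(dev, RX_BUF_SIZE);
                /* separate pointer for the buffer that holds the frame */
                struct sk_buff *rx_skb;

                if (!new_skb)
                        return;         /* drop the frame, reuse old buffer */

                rx_skb = *ring_slot;    /* buffer containing the received data */
                *ring_slot = new_skb;   /* ring now owns the fresh buffer */

                skb_put(rx_skb, pkt_len);
                rx_skb->protocol = eth_type_trans(rx_skb, dev);
                netif_rx(rx_skb);       /* rx_skb never goes back in the ring */
        }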
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
    • usb-net/pegasus: simplify carrier detection · 1764f150
      Dan Williams authored
      Simplify pegasus carrier detection; rely only on the periodic MII
      polling.  Reverts pieces of c43c49bd.
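
      For illustration, the general shape of carrier detection driven purely
      by the MII poll (hypothetical helper, not the actual pegasus code):

        #include <linux/mii.h>
        #include <linux/netdevice.h>

        /*
         * Derive carrier state solely from the BMSR link bit read during the
         * periodic MII poll, instead of tracking it from other events.
         */
        static void update_carrier_from_mii(struct net_device *dev, u16 bmsr)
        {
                if (bmsr & BMSR_LSTATUS)
                        netif_carrier_on(dev);
                else
                        netif_carrier_off(dev);
        }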
      Signed-off-by: Dan Williams <dcbw@redhat.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
  3. 27 Apr, 2007 33 commits