1. 09 Aug, 2007 11 commits
    • sched: remove binary sysctls from kernel.sched_domain · e0361851
      Alexey Dobriyan authored
      The kernel.sched_domain hierarchy is under CTL_UNNUMBERED and thus
      unreachable via sysctl(2); generating binary sysctl numbers in such a
      situation is not useful.
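
      For illustration, a sysctl entry of this kind looks roughly like the
      sketch below (the knob name and its variable are made up, not part of
      the patch): under CTL_UNNUMBERED it is reachable only via /proc/sys,
      so a binary number would never be looked up.

         /* Illustrative sketch only -- not taken from the patch. */
         #include <linux/sysctl.h>

         static int sched_example_value;                 /* hypothetical knob */

         static struct ctl_table sd_example_table[] = {
                 {
                         .ctl_name     = CTL_UNNUMBERED, /* no binary sysctl(2) number */
                         .procname     = "example_knob", /* still visible in /proc/sys */
                         .data         = &sched_example_value,
                         .maxlen       = sizeof(int),
                         .mode         = 0644,
                         .proc_handler = &proc_dointvec,
                 },
                 { .ctl_name = 0 }                       /* table terminator */
         };
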
      Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: delta_exec accounting fix · fd8bb43e
      Ingo Molnar authored
      small delta_exec accounting fix: increase delta_exec and
      sum_exec_runtime even if the task is no longer on the runqueue.
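
      As a rough illustration of the idea (a self-contained sketch with a
      made-up struct, not the CFS code itself), the accounting update
      conceptually becomes unconditional:

         #include <linux/types.h>

         /* Sketch only: a stand-in for the scheduler's per-task runtime fields. */
         struct runtime_acct {
                 u64 exec_start;          /* when the task last started running */
                 u64 sum_exec_runtime;    /* total CPU time consumed so far */
                 int on_rq;               /* still queued on the runqueue? */
         };

         static void account_delta_exec(struct runtime_acct *ra, u64 now)
         {
                 u64 delta_exec = now - ra->exec_start;

                 /* charge the elapsed slice even if ra->on_rq is already 0 */
                 ra->sum_exec_runtime += delta_exec;
                 ra->exec_start = now;
         }
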
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up delta_mine · c5dcfe72
      Ingo Molnar authored
      cleanup: delta_mine is an unsigned value.
      
      no code impact:
      
         text    data     bss     dec     hex filename
         27823    2726      16   30565    7765 sched.o.before
         27823    2726      16   30565    7765 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: schedule() speedup · 8e717b19
      Ingo Molnar authored
      speed up schedule(): share the 'now' parameter that deactivate_task()
      was calculating internally.
      
      ( this also fixes the small accounting window between the deactivate
        call and the pick_next_task() call. )
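
      A small sketch of the pattern (hypothetical helper names, not the real
      diff): the clock is read once and the value handed down, instead of
      deactivate_task() re-reading it internally.

         /* Sketch: 'now' is supplied by the caller rather than re-read here. */
         static void deactivate_task(struct rq *rq, struct task_struct *p, u64 now)
         {
                 update_accounting(p, now);       /* hypothetical helper */
                 dequeue_task(rq, p);             /* hypothetical helper */
         }

         static void schedule_slowpath(struct rq *rq, struct task_struct *prev)
         {
                 u64 now = rq_clock(rq);          /* single clock read */

                 deactivate_task(rq, prev, now);  /* shared with the callee */
                 pick_next_task(rq, now);         /* and with task selection */
         }
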
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: uninline rq_clock() · 7bfd0485
      Ingo Molnar authored
      uninline rq_clock() to save 263 bytes of code:
      
         text    data     bss     dec     hex filename
         39561    3642      24   43227    a8db sched.o.before
         39298    3642      24   42964    a7d4 sched.o.after
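
      The underlying pattern, sketched with a placeholder body: dropping the
      inline keyword leaves a single out-of-line copy of the function in
      sched.o instead of one copy per caller.

         /* before: each caller gets an inlined copy of the body */
         static inline u64 rq_clock(struct rq *rq)
         {
                 return rq->clock;                /* placeholder body */
         }

         /* after: one out-of-line definition; callers emit a call instead */
         static u64 rq_clock(struct rq *rq)
         {
                 return rq->clock;
         }
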
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: mark print_cfs_stats static · 291ae5a1
      Josh Triplett authored
      sched_fair.c defines print_cfs_stats, and sched_debug.c uses it, but sched.c
      includes both sched_fair.c and sched_debug.c, so all the references to
      print_cfs_stats occur in the same compilation unit.  Thus, mark
      print_cfs_stats static.
      
      Eliminates a sparse warning:
      warning: symbol 'print_cfs_stats' was not declared. Should it be static?
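
      The shape of the fix, with bodies elided and the caller's name chosen
      purely for illustration (the prototype is approximate):

         /* sched_fair.c: the definition now has internal linkage */
         static void print_cfs_stats(struct seq_file *m, int cpu)
         {
                 /* ... */
         }

         /* sched_debug.c: the caller (illustrative name) */
         static void print_cpu_debug(struct seq_file *m, int cpu)
         {
                 print_cfs_stats(m, cpu);         /* resolves within the same unit */
         }

         /* sched.c: both files are textually included, forming one unit */
         #include "sched_fair.c"
         #include "sched_debug.c"
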
      Signed-off-by: Josh Triplett <josh@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up sched_getaffinity() · 9531b62f
      Ulrich Drepper authored
      here's another tiny cleanup.  The generated code is not affected (gcc
      is smart enough), but for people looking over the code the extra
      conditional is just irritating.
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: mention CONFIG_SCHED_DEBUG in documentation · 5f5d3aa1
      Thomas Voegtle authored
      a little hint to switch on CONFIG_SCHED_DEBUG should be given.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: simplify move_tasks() · 43010659
      Peter Williams authored
      The move_tasks() function is currently multiplexed with two distinct
      capabilities:
      
      1. attempt to move a specified amount of weighted load from one run
      queue to another; and
      2. attempt to move a specified number of tasks from one run queue to
      another.
      
      The first of these capabilities is used in two places, load_balance()
      and load_balance_idle(), and in both of these cases the return value of
      move_tasks() is used purely to decide if tasks/load were moved and no
      notice of the actual number of tasks moved is taken.
      
      The second capability is used in exactly one place,
      active_load_balance(), to attempt to move exactly one task and, as
      before, the return value is only used as an indicator of success or failure.
      
      This multiplexing of move_tasks() was introduced, by me, as part of the
      smpnice patches and was motivated by the fact that the alternative, one
      function to move a specified load and one to move a single task, would
      have led to two functions of roughly the same complexity as the old
      move_tasks() (or the new balance_tasks()).  However, the modular design
      of the new CFS scheduler allows a simpler solution to be adopted, and
      this patch implements that solution by:
      
      1. adding a new function, move_one_task(), to be used by
      active_load_balance(); and
      2. making move_tasks() a single purpose function that tries to move a
      specified weighted load and returns 1 for success and 0 for failure.
      
      One of the consequences of these changes is that neither move_one_task()
      nor the new move_tasks() cares how many tasks sched_class.load_balance()
      moves and this enables its interface to be simplified by returning the
      amount of load moved as its result and removing the load_moved pointer
      from the argument list.  This helps simplify the new move_tasks() and
      slightly reduces the amount of work done in each of
      sched_class.load_balance()'s implementations.
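
      A rough sketch of the resulting shape (prototypes approximate, bodies
      omitted; not the actual patch):

         /* per-class hook: now returns the amount of weighted load it moved */
         unsigned long (*load_balance)(struct rq *this_rq, int this_cpu,
                                       struct rq *busiest,
                                       unsigned long max_load_move,
                                       struct sched_domain *sd,
                                       enum cpu_idle_type idle,
                                       int *all_pinned);

         /* single purpose: move up to max_load_move of weighted load;
          * returns 1 if anything was moved, 0 otherwise */
         static int move_tasks(struct rq *this_rq, int this_cpu,
                               struct rq *busiest, unsigned long max_load_move,
                               struct sched_domain *sd, enum cpu_idle_type idle,
                               int *all_pinned);

         /* used only by active_load_balance(): try to move exactly one task */
         static int move_one_task(struct rq *this_rq, int this_cpu,
                                  struct rq *busiest, struct sched_domain *sd,
                                  enum cpu_idle_type idle);
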
      
      Further simplifications, e.g. changes to balance_tasks(), are possible
      but (slightly) complicated by the special needs of load_balance_fair(),
      so I've left them to a later patch (if this one gets accepted).
      
      NB: Since move_tasks() gets called with two run queue locks held, even
      small reductions in overhead are worthwhile.
      
      [ mingo@elte.hu ]
      
      this change also reduces code size nicely:
      
         text    data     bss     dec     hex filename
         39216    3618      24   42858    a76a sched.o.before
         39173    3618      24   42815    a73f sched.o.after
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: reorder update_cpu_load(rq) with the ->task_tick() call · f1a438d8
      Ingo Molnar authored
      Peter Williams suggested flipping the order of the update_cpu_load(rq)
      and ->task_tick() calls.  This is a NOP for the current scheduler (the
      two functions are independent of each other), but ->task_tick() might
      create some state for update_cpu_load() in the future (or in PlugSched).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: batch sleeper bonus · 0915c4e8
      Ingo Molnar authored
      batch up the sleeper bonus sum a bit more. Anything below
      sched-granularity is too small to make a practical difference
      anyway.
      
      this optimization reduces the math in high-frequency scheduling
      scenarios.
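
      The batching idea as a generic sketch (the struct and helper below are
      made up for illustration): small bonus amounts are accumulated and only
      folded into the accounting once they exceed the granularity.

         /* Sketch only: hypothetical fields and helper, not the CFS code. */
         static void add_sleeper_bonus(struct cfs_stats *cfs, u64 bonus,
                                       u64 granularity)
         {
                 cfs->pending_bonus += bonus;

                 /* below the granularity: too small to matter, defer the math */
                 if (cfs->pending_bonus < granularity)
                         return;

                 apply_fair_bonus(cfs, cfs->pending_bonus);
                 cfs->pending_bonus = 0;
         }
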
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 07 Aug, 2007 9 commits
  3. 06 Aug, 2007 3 commits
  4. 05 Aug, 2007 2 commits
  5. 04 Aug, 2007 14 commits
  6. 03 Aug, 2007 1 commit