- 26 Jul, 2007 2 commits
-
-
Satoru Takeuchi authored
Remove unused rq->load_balance_class. Signed-off-by:
Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Avi Kivity authored
This adds a general mechanism whereby a task can request the scheduler to notify it whenever it is preempted or scheduled back in. This allows the task to swap any special-purpose registers, like the FPU or Intel's VT registers. Signed-off-by:
Avi Kivity <avi@qumranet.com> [ mingo@elte.hu: fixes, cleanups ] Signed-off-by:
Ingo Molnar <mingo@elte.hu>
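A toy model of the notifier idea described above may help: a task registers callbacks that the scheduler fires when the task is preempted (sched out) and when it runs again (sched in). This is a hypothetical userspace sketch, not the kernel API; all names are illustrative.

```python
# Mock of the preempt-notifier concept: callbacks fired around preemption.
class PreemptNotifier:
    def __init__(self, sched_out, sched_in):
        self.sched_out = sched_out  # called when the task is preempted
        self.sched_in = sched_in    # called when the task is scheduled back in

class Task:
    def __init__(self, name):
        self.name = name
        self.notifiers = []

    def register(self, notifier):
        self.notifiers.append(notifier)

class MockScheduler:
    def preempt(self, task):
        for n in task.notifiers:
            n.sched_out(task)

    def resume(self, task):
        for n in task.notifiers:
            n.sched_in(task)

# A VT-like user: save special-purpose state on preemption, restore on resume.
events = []
task = Task("vcpu0")
task.register(PreemptNotifier(
    sched_out=lambda t: events.append(("save", t.name)),
    sched_in=lambda t: events.append(("restore", t.name)),
))

sched = MockScheduler()
sched.preempt(task)
sched.resume(task)
```

The point of the mechanism is exactly this pairing: state that the context switch does not save for you gets saved and restored by the task's own callbacks.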
-
- 19 Jul, 2007 4 commits
-
-
Ingo Molnar authored
Implement the cpu_clock(cpu) interface for kernel-internal use: high-speed (but slightly incorrect) per-cpu clock constructed from sched_clock(). This API, unused at the moment, will be used in the future by blktrace, by the softlockup-watchdog, by printk and by lockstat. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Suresh Siddha authored
nr_moved is not the correct check for triggering the all-pinned logic. Fix the all-pinned logic in the case of load_balance_newidle(). Signed-off-by:
Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Suresh Siddha authored
In the presence of SMT, newly idle balance was never happening for multi-core and SMP domains (even when both of the logical siblings are idle). If thread 0 is already idle and thread 1 is about to go idle, newly idle load balance always thinks that one of the threads is not idle and skips doing the newly idle load balance for multi-core and SMP domains. This is because of the idle_cpu() macro, which checks whether the current process on a cpu is an idle process. But this is not the case for the thread doing load_balance_newidle(). Fix this by using the runqueue's nr_running field instead of idle_cpu(). Also skip the logic of 'only one idle cpu in the group will be doing load balancing' during the newly idle case. Signed-off-by:
Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Fenghua Yu authored
Currently most of the per cpu data, which is accessed by different cpus, has a ____cacheline_aligned_in_smp attribute. Move all this data to the new per cpu shared data section: .data.percpu.shared_aligned. This will separate the percpu data which is referenced frequently by other cpus from the local-only percpu data. Signed-off-by:
Fenghua Yu <fenghua.yu@intel.com> Acked-by:
Suresh Siddha <suresh.b.siddha@intel.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Christoph Lameter <clameter@sgi.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
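The layout idea behind the section split can be modeled in a few lines: variables that other CPUs read go into a separate, cacheline-aligned section so they never share a cache line with purely local per-cpu data. A toy sketch, assuming 64-byte lines (the section names match the commit; the layout algorithm itself is illustrative):

```python
# Toy linker-layout model: shared per-cpu vars get their own aligned section.
CACHELINE = 64

def layout(variables):
    """variables: list of (name, size, shared); returns name -> (section, offset)."""
    offsets = {}
    cursors = {".data.percpu": 0, ".data.percpu.shared_aligned": 0}
    for name, size, shared in variables:
        section = ".data.percpu.shared_aligned" if shared else ".data.percpu"
        off = cursors[section]
        if shared and off % CACHELINE:
            off += CACHELINE - off % CACHELINE  # align shared vars to a line
        offsets[name] = (section, off)
        cursors[section] = off + size
    return offsets

out = layout([("local_counter", 8, False),
              ("runqueue_stats", 24, True),
              ("shared_flag", 4, True)])
```

With this split, cross-CPU traffic on the shared variables cannot evict or bounce the cache lines holding the local-only data.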
-
- 17 Jul, 2007 1 commit
-
-
Rafael J. Wysocki authored
Currently, the freezer treats all tasks as freezable, except for the kernel threads that explicitly set the PF_NOFREEZE flag for themselves. This approach is problematic, since it requires every kernel thread to either set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't care about the freezing of tasks at all. It seems better to only require the kernel threads that want to or need to be frozen to use some freezer-related code and to remove any freezer-related code from the other (nonfreezable) kernel threads, which is done in this patch. The patch causes all kernel threads to be nonfreezable by default (i.e. to have PF_NOFREEZE set by default) and introduces the set_freezable() function that should be called by the freezable kernel threads in order to unset PF_NOFREEZE. It also makes all of the currently freezable kernel threads call set_freezable(), so it shouldn't cause any (intentional) change of behaviour. Additionally, it updates the documentation to describe the freezing of tasks more accurately. [akpm@linux-foundation.org: build fixes] Signed-off-by:
Rafael J. Wysocki <rjw@sisk.pl> Acked-by:
Nigel Cunningham <nigel@nigel.suspend2.net> Cc: Pavel Machek <pavel@ucw.cz> Cc: Oleg Nesterov <oleg@tv-sign.ru> Cc: Gautham R Shenoy <ego@in.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
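The inverted default can be sketched as a small mock: kernel threads start with PF_NOFREEZE set, and only threads that call set_freezable() become visible to the freezer. The flag value and thread names below are illustrative, not the kernel's:

```python
# Mock of the opt-in freezer: nonfreezable by default, set_freezable() opts in.
PF_NOFREEZE = 0x8000  # illustrative flag value

class KThread:
    def __init__(self, name):
        self.name = name
        self.flags = PF_NOFREEZE   # nonfreezable by default after this patch
        self.frozen = False

    def set_freezable(self):
        self.flags &= ~PF_NOFREEZE # opt in to the freezer

def freeze_tasks(threads):
    for t in threads:
        if not t.flags & PF_NOFREEZE:
            t.frozen = True        # the freezer only touches opted-in threads

worker = KThread("writeback-like")  # wants to be frozen across suspend
worker.set_freezable()
plain = KThread("watchdog-like")    # never calls set_freezable()
freeze_tasks([worker, plain])
```

The win is that threads like `plain` need no freezer-related code at all, instead of every thread having to either set PF_NOFREEZE or call try_to_freeze().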
-
- 16 Jul, 2007 3 commits
-
-
Ingo Molnar authored
prettify the prio_to_wmult[] array. (this could have saved us from the typos) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
document prio_to_wmult[]. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
improve the comments around the wmult array (which controls the weight of niced tasks). Clarify that to achieve a 10% difference in CPU utilization, a weight multiplier of 1.25 has to be used. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
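The 1.25 figure in the comment can be checked numerically: if adjacent nice levels differ by a weight ratio of 1.25, two tasks one nice level apart on the same CPU split it roughly 55%/45%, i.e. about a 10% shift in utilization. A quick sketch (1024 is the nice-0 base weight; the continuous formula is an idealization of the discrete table):

```python
# Verify that a 1.25 weight ratio between adjacent nice levels yields
# roughly a 10% CPU-share difference between two competing tasks.
BASE = 1024
STEP = 1.25

def weight(nice):
    return BASE / STEP ** nice

def cpu_shares(nice_a, nice_b):
    wa, wb = weight(nice_a), weight(nice_b)
    total = wa + wb
    return wa / total, wb / total

share0, share1 = cpu_shares(0, 1)  # nice-0 vs nice-1 task on one CPU
```

With a ratio of exactly 1.25 the shares come out to 5/9 and 4/9, a gap of about 11%, which is what the "10% effect per nice level" rule of thumb refers to.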
-
- 13 Jul, 2007 4 commits
-
-
Thomas Gleixner authored
Roman Zippel noticed another inconsistency in the wmult table: wmult[16] has a missing digit. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Ingo Molnar authored
fix show_task()/show_tasks() output: there's no sibling info anymore; the fields were not aligned properly with the description; get rid of the lazy-TLB output: it's been quite some time since we last had a bug there, and when we had a bug it wasn't helped a bit by this debug output. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Ingo Molnar authored
Allow granularity up to 100 msecs, instead of 10 msecs. (needed on larger boxes) Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Galbraith authored
There's a typo in the values in prio_to_wmult[] for nice level 1. It did not cause bad CPU distribution, but it caused more rescheduling between nice-0 and nice-1 tasks than necessary. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 Jul, 2007 26 commits
-
-
Ingo Molnar authored
add credits for recent major scheduler contributions: Con Kolivas, for pioneering the fair-scheduling approach; Peter Williams, for smpnice; Mike Galbraith, for interactivity tuning of CFS; Srivatsa Vaddagiri, for group scheduling enhancements. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
clean up the sleep_on() APIs: do not use fastcall; replace fragile macro magic with proper inline functions. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
4 small style cleanups to sched.c: checkpatch.pl is now happy about the totality of sched.c [ignoring false positives] - yay! ;-) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove unused rq types from sched.c, now that we switched over to CFS. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the now unused interactivity-heuristics related defines and types of the old scheduler. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
clean up include files in sched.c, they were still old-style <asm/>. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
make use of sched-clock-unstable events. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
track TSC-unstable events and propagate them to the scheduler code. Also allow sched_clock() to be used when the TSC is unstable; the rq_clock() wrapper creates a reliable clock out of it. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
apply the CFS core code. this change switches over the scheduler core to CFS's modular design and makes use of kernel/sched_fair/rt/idletask.c to implement Linux's scheduling policies. thanks to Andrew Morton and Thomas Gleixner for lots of detailed review feedback and for fixlets. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Dmitry Adamushko <dmitry.adamushko@gmail.com> Signed-off-by:
Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
-
Ingo Molnar authored
remove the sleep-bonus interactivity code from the core scheduler. scheduling policy is implemented in the policy modules, and CFS does not need this type of heuristics. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the expired_starving() heuristics from the core scheduler. CFS does not need it, and this did not really work well in practice anyway, due to the rq->nr_running multiplier to STARVATION_LIMIT. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the sleep_type heuristics from the core scheduler - scheduling policy is implemented in the scheduling-policy modules. (and CFS does not use this type of sleep-type heuristics) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
add the new load-calculation methods of CFS. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
clean up: move __normal_prio() ahead of normal_prio(). no code changed. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
cleanup: move dequeue/enqueue_task() to a more logical place, to not split up __normal_prio()/normal_prio(). Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
move resched_task()/resched_cpu() into the 'public interfaces' section of sched.c, for use by kernel/sched_fair/rt/idletask.c Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
clean up the rt priority macros, pointed out by Andrew Morton. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
add the set_task_cfs_rq() abstraction needed by CONFIG_FAIR_GROUP_SCHED. (not activated yet) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
update the posix-cpu-timers code to use CFS's CPU accounting information. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
add rq_clock()/__rq_clock(), a robust wrapper around sched_clock(), used by CFS. It protects against common types of sched_clock() problems (caused by hardware): time warps forwards and backwards. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
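The warp protection can be illustrated with a small mock: wrap a raw, possibly unstable clock so time never runs backwards, and cap implausibly large forward jumps. A hypothetical userspace sketch, not the kernel implementation; the 2 ms bound is an assumed illustrative constant:

```python
# Mock of an rq_clock-style wrapper: monotonic output from a warping input.
MAX_TICK_NS = 2_000_000  # assume no legitimate jump larger than 2 ms

class RqClock:
    def __init__(self):
        self.prev_raw = 0
        self.now = 0

    def update(self, raw):
        delta = raw - self.prev_raw
        if delta < 0:
            delta = 0            # backward warp: hold still, never rewind
        elif delta > MAX_TICK_NS:
            delta = MAX_TICK_NS  # forward warp: clamp the jump
        self.prev_raw = raw
        self.now += delta
        return self.now

clk = RqClock()
# Raw samples that jump too far forward, then backwards:
samples = [1_000_000, 3_500_000, 2_000_000, 4_000_000]
readings = [clk.update(s) for s in samples]
```

The derived clock stays monotonic and roughly tracks real time even though the raw input warped in both directions.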
-
Ingo Molnar authored
add the CFS rq data types to sched.c. (the old scheduler fields are still intact, they are removed by a later patch) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
create sched_stats.h and move sched.c schedstats code into it. This cleans up sched.c a bit. no code changes are caused by this patch. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
add the init_idle_bootup_task() callback to the bootup thread, unused at the moment. (CFS will use it to switch the scheduling class of the boot thread to the idle class) Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove sched_exit(): the elaborate dance of us trying to recover timeslices given to child tasks never really worked. CFS does not need it either. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
uninline set_task_cpu(): CFS will add more code to it. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
the SMP load-balancer uses the boot-time migration-cost estimation code to attempt to improve the quality of balancing. The reason for this code is that the discrete priority queues do not preserve the order of scheduling accurately, so the load-balancer skips tasks that were running on a CPU 'recently'. this code is fundamentally fragile: the boot-time migration cost detector doesn't really work on systems with large L3 caches, it caused boot delays on large systems, and the whole cache-hot concept made the balancing code pretty nondeterministic as well. (and hey, i wrote most of it, so i can say it out loud that it sucks ;-) under CFS the same cache affinity can be achieved without any cache-hot special-case: tasks are sorted in the 'timeline' tree and the SMP balancer picks tasks from the left side of the tree, thus the most cache-cold task is balanced automatically. Signed-off-by:
Ingo Molnar <mingo@elte.hu>
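The "pick from the left side of the tree" idea can be sketched with a sorted container standing in for the rbtree: tasks that have run least recently accumulate on the left, so the leftmost entry is the most cache-cold candidate. A hypothetical sketch, not the CFS data structure:

```python
# Toy timeline: sorted (key, task) pairs stand in for the CFS rbtree;
# the smallest key sits leftmost, and the balancer pulls from there.
import bisect

class Timeline:
    def __init__(self):
        self.entries = []  # kept sorted by key

    def enqueue(self, key, task):
        bisect.insort(self.entries, (key, task))

    def pick_leftmost(self):
        # the leftmost task has waited longest, i.e. is the most cache-cold
        return self.entries.pop(0)[1] if self.entries else None

tl = Timeline()
tl.enqueue(300, "recently_ran")
tl.enqueue(100, "cache_cold")
tl.enqueue(200, "lukewarm")
pulled = tl.pick_leftmost()
```

Because the ordering itself encodes how long a task has waited, no separate boot-time cost estimation is needed: pulling from the left is automatically the cache-friendly choice.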
-