- 29 Jul, 2009 40 commits
-
-
Gregory Haskins authored
We will use this later in the series to eliminate the need for a function call. [ Steven Rostedt: added task_is_current function ]
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Here we are in the CPU_DEAD notifier, and we must not sleep nor enable interrupts.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Idle task boosting is a no-no in general. There is one exception, when NOHZ is active: the idle task calls get_next_timer_interrupt() and holds the timer wheel base->lock on its CPU while another CPU wants to access the timer (probably to cancel it). We can safely ignore the boosting request, as the idle CPU runs this code with interrupts disabled and will complete the lock-protected section without being interrupted. So there is no real need to boost.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
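A minimal, self-contained C sketch of the shape of that exception; the type and function names below are hypothetical stand-ins, not the kernel's, and only illustrate that a priority-inheritance boost request against the idle task is ignored:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the relevant bits of task state. */
struct task {
	int  prio;     /* lower value = higher priority, kernel convention */
	bool is_idle;  /* true for the per-CPU idle task */
};

/* Boost the owner to the waiter's priority, except when the owner is the
 * idle task: it holds base->lock with interrupts off and will finish the
 * critical section without being preempted, so boosting buys nothing. */
static bool pi_boost(struct task *owner, int waiter_prio)
{
	if (owner->is_idle)
		return false;                  /* ignore the boost request */
	if (waiter_prio < owner->prio)
		owner->prio = waiter_prio;     /* inherit the waiter's priority */
	return true;
}

int main(void)
{
	struct task idle   = { .prio = 140, .is_idle = true  };
	struct task worker = { .prio = 120, .is_idle = false };

	bool boosted = pi_boost(&worker, 10);
	printf("worker: boosted=%d prio=%d\n", boosted, worker.prio);

	boosted = pi_boost(&idle, 10);
	printf("idle:   boosted=%d prio=%d\n", boosted, idle.prio);
	return 0;
}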
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
Argh, cut and paste wasn't enough... Use this patch instead. It needs an irq disable. But, believe it or not, on SMP this is actually better. If the irq is shared (as it is in Mark's case), we don't stop the irq of other devices from being handled on another CPU (unfortunately for Mark, he pinned all interrupts to one CPU).
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
drivers/net/3c59x.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Waking the thread even when no timers are scheduled is useless.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Arnaldo Carvalho de Melo authored
Shorten the softirq kernel thread names because they always overflow the limited comm length, appearing as "posix_cpu_timer" CPU# times. Done on 2.6.24.7, but probably applicable to later kernels.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
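The comm field is TASK_COMM_LEN (16) bytes, so only 15 characters plus the terminating NUL survive. A small self-contained C demonstration of the truncation described above; the shortened replacement spelling shown is only illustrative:

#include <stdio.h>
#include <string.h>

#define TASK_COMM_LEN 16	/* kernel limit: 15 visible chars + NUL */

/* Roughly what a bounded comm copy does: truncate and NUL-terminate. */
static void set_comm(char comm[TASK_COMM_LEN], const char *name)
{
	strncpy(comm, name, TASK_COMM_LEN - 1);
	comm[TASK_COMM_LEN - 1] = '\0';
}

int main(void)
{
	const char *names[] = {
		"posix_cpu_timers/0",	/* long per-CPU name: truncates...   */
		"posix_cpu_timers/1",	/* ...to the same 15 characters      */
		"posixcputmr/0",	/* a shortened spelling fits in full */
	};
	char comm[TASK_COMM_LEN];

	for (unsigned int i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
		set_comm(comm, names[i]);
		printf("%-20s -> \"%s\"\n", names[i], comm);
	}
	return 0;
}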
-
John Stultz authored
posix-cpu-timer code takes non-rt-safe locks in hard irq context. Move it to a thread.
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Add RT stats to /proc/stat.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
fs/proc/stat.c | 23 +++++++++++++++++------
include/linux/kernel_stat.h | 2 ++
kernel/sched.c | 6 +++++-
3 files changed, 24 insertions(+), 7 deletions(-)
-
Ingo Molnar authored
Creates long latencies for no value.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/interrupt.h | 33 ++++----
kernel/softirq.c | 184 ++++++++++++++++++++++++++++++++--------------
2 files changed, 149 insertions(+), 68 deletions(-)
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Allows that code to be preemptible.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Add the missing function.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
The current code of rt_downgrade_write simply does a BUG(). There are places in the kernel that use this code, and they will crash a running preempt-rt kernel. The rt_downgrade_write converts a rwsem held for write into a rwsem held for read without ever releasing the semaphore. In -rt, the rwsems are simply a mutex; there is nothing different between a rwsem held for write and one held for read. The difference is that one held for read can nest. This patch changes the code to BUG() only if the caller is not the owner of the semaphore. This patch comes from my rt-git repo, and has been tested there.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <clark.williams@gmail.com>
Cc: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
LKML-Reference: <alpine.DEB.2.00.0904151142420.31828@gandalf.stny.rr.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
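A hedged userspace C analogue of that behaviour (hypothetical names throughout, not the kernel code): under -rt a rwsem is just a mutex, so the write-to-read downgrade reduces to an ownership sanity check.

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* Userspace analogue of an -rt rwsem: a single mutex plus an owner field. */
struct rt_rwsem {
	pthread_mutex_t lock;
	pthread_t       owner;
};

static void rt_down_write(struct rt_rwsem *sem)
{
	pthread_mutex_lock(&sem->lock);
	sem->owner = pthread_self();
}

/* Downgrade write -> read.  Reader and writer are the same mutex here, so
 * there is nothing to convert; only check that the caller really owns the
 * lock instead of failing unconditionally. */
static void rt_downgrade_write(struct rt_rwsem *sem)
{
	assert(pthread_equal(sem->owner, pthread_self()));
}

static void rt_up_read(struct rt_rwsem *sem)
{
	pthread_mutex_unlock(&sem->lock);
}

int main(void)
{
	struct rt_rwsem sem = { .lock = PTHREAD_MUTEX_INITIALIZER };

	rt_down_write(&sem);
	rt_downgrade_write(&sem);	/* now nominally held for read */
	rt_up_read(&sem);
	puts("downgrade ok");
	return 0;
}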
-
Thomas Gleixner authored
Recursive rwlocks are only allowed for recursive reads; recursive rwsems are not allowed at all. Follow-up to Jan Blunck's fix.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Jan Blunck authored
This patch removes the stupid "Read locks within the self-held write lock succeed" behaviour. That behaviour breaks mm_take_all_locks(), since it is quite common to check that a lock is already held with BUG_ON(down_read_trylock(&mm->mmap_sem)).
Signed-off-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Map spinlocks, rwlocks, rw_semaphores and semaphores to the rt_mutex based locking functions for preempt-rt.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
john stultz authored
So if I enable CONFIG_DEBUG_RT_MUTEXES with 2.6.24.7-rt14, I tend to quickly see a number of BUG warnings when running Java tests:
BUG: jxeinajar/3383: lock count underflow!
Pid: 3383, comm: jxeinajar Not tainted 2.6.24-ibmrt2.5john #3
Call Trace:
[<ffffffff8107208d>] rt_mutex_deadlock_account_unlock+0x5d/0x70
[<ffffffff817d6aa5>] rt_read_slowunlock+0x35/0x550
[<ffffffff8107173d>] rt_mutex_up_read+0x3d/0xc0
[<ffffffff81072a99>] rt_up_read+0x29/0x30
[<ffffffff8106e34e>] do_futex+0x32e/0xd40
[<ffffffff8107173d>] ? rt_mutex_up_read+0x3d/0xc0
[<ffffffff81072a99>] ? rt_up_read+0x29/0x30
[<ffffffff8106f370>] compat_sys_futex+0xa0/0x110
[<ffffffff81010a36>] ? syscall_trace_enter+0x86/0xb0
[<ffffffff8102ff04>] cstar_do_call+0x1b/0x65
INFO: lockdep is turned off.
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
... [<ffffffff817d8e42>] .... __spin_lock_irqsave+0x22/0x60
......[<ffffffff817d6a93>] .. ( <= rt_read_slowunlock+0x23/0x550)
After some debugging and with Steven's help, we realized that with rwlocks, rt_mutex_deadlock_account_lock can be called multiple times in parallel (whereas in most cases the mutex must be held by the caller to call the function). This can cause the integer lock_count value to be incremented non-atomically. The following patch converts lock_count to an atomic_t and resolves the warnings.
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Clark Williams <williams@redhat.com>
Cc: dvhltc <dvhltc@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
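A self-contained C demonstration of the underlying bug class (illustrative only, not the kernel code): two threads doing plain ++/-- on an int lose updates, while the C11 atomic equivalent stays balanced, which is what converting lock_count to atomic_t addresses.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

static int        plain_count;		/* like the old plain-int lock_count */
static atomic_int atomic_count;		/* like the atomic_t replacement     */

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		plain_count++;				/* racy read-modify-write */
		plain_count--;
		atomic_fetch_add(&atomic_count, 1);	/* atomic RMW */
		atomic_fetch_sub(&atomic_count, 1);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* Both counters should be 0; the plain one usually is not. */
	printf("plain:  %d\n", plain_count);
	printf("atomic: %d\n", atomic_load(&atomic_count));
	return 0;
}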
-
Thomas Gleixner authored
The sleeping locks implementation based on rtmutexes can miss wakeups for two reasons:

1) The unconditional use of TASK_UNINTERRUPTIBLE for the blocking state results in missed wakeups from wake_up_interruptible*():
     state = TASK_INTERRUPTIBLE;
     blocks_on_lock();
     state = TASK_UNINTERRUPTIBLE;
     schedule();
     ....
     acquires_lock();
     restore_state();
   Until the waiter has restored its state, wake_up_interruptible*() will fail.

2) The rtmutex wakeup intermediate state TASK_RUNNING_MUTEX results in missed wakeups from wake_up*():
     waiter is woken by mutex wakeup
     waiter->state = TASK_RUNNING_MUTEX;
     ....
     acquires_lock();
     restore_state();
   Until the waiter has restored its state, wake_up*() will fail.

Solution: Instead of setting the state to TASK_RUNNING_MUTEX in the mutex wakeup case, we logically OR TASK_RUNNING_MUTEX into the current waiter state. This keeps the original bits (TASK_INTERRUPTIBLE / TASK_UNINTERRUPTIBLE) intact and lets wakeups succeed. When a task blocks on a lock in state TASK_INTERRUPTIBLE and is woken up by a real wakeup, then we store state = TASK_RUNNING for the restore and can safely use TASK_UNINTERRUPTIBLE from that point on to avoid further wakeups which would just let us loop in the lock code. This also removes the extra TASK_RUNNING_MUTEX flags from the wake_up_process*() functions, as they are no longer necessary.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
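A tiny C illustration of why the OR approach preserves wakeups; the flag values below are made up and do not match the kernel's:

#include <stdio.h>

/* Illustrative bit values only. */
#define TASK_INTERRUPTIBLE	0x01
#define TASK_UNINTERRUPTIBLE	0x02
#define TASK_RUNNING_MUTEX	0x04

/* wake_up_interruptible()-style check: only hit tasks in that state. */
static int wake_interruptible_would_hit(unsigned int state)
{
	return (state & TASK_INTERRUPTIBLE) != 0;
}

int main(void)
{
	unsigned int state = TASK_INTERRUPTIBLE;	/* waiter blocks on the lock */

	/* Old scheme: the mutex wakeup overwrites the state entirely. */
	unsigned int overwritten = TASK_RUNNING_MUTEX;

	/* New scheme: the intermediate bit is OR-ed in, keeping the old bits. */
	unsigned int ored = state | TASK_RUNNING_MUTEX;

	printf("overwrite: wakeup seen = %d\n", wake_interruptible_would_hit(overwritten));
	printf("or-ed in:  wakeup seen = %d\n", wake_interruptible_would_hit(ored));
	return 0;
}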
-
Thomas Gleixner authored
The adaptive spin patches introduced an overdesigned optimization for the adaptive path. This avoidance of those code paths is not worth the extra conditionals, and furthermore it is inconsistent in itself. Remove it and use the same mechanism as the other lock paths. That way we have a consistent state manipulation scheme and fewer extra cases.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
The manipulation of the waiter task state is copied all over the place with slightly different details. Use one set of functions to reduce duplicated code and make the handling consistent for all instances.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
Lock stealing and the non-cmpxchg case will always go into the slow path. This patch detects the fact that we didn't go through the work of blocking and will exit early.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Gregory Haskins authored
From: Peter W. Morreale <pmorreale@novell.com>
Remove the redundant attempt to get the lock. While it is true that the exit path with this patch adds an unnecessary xchg (in the event the lock is granted without further traversal in the loop), experimentation shows that we almost never encounter this situation.
Signed-off-by: Peter W. Morreale <pmorreale@novell.com>
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Gregory Haskins authored
From: Peter W. Morreale <pmorreale@novell.com>
In wakeup_next_waiter(), we take the pi_lock and then find out whether we have another waiter to add to the pending owner. We can reduce contention on the pi_lock for the pending owner if we first obtain the pointer to the next waiter outside of the pi_lock.
Signed-off-by: Peter W. Morreale <pmorreale@novell.com>
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Gregory Haskins authored
It is redundant to wake the grantee task if it is already running, and the call to wake_up_process is relatively expensive. If we can safely skip it we can measurably improve the performance of the adaptive locks. Credit goes to Peter Morreale for the general idea.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Peter Morreale <pmorreale@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
After talking with Gregory Haskins about how they implemented his version of adaptive spinlocks, and before I actually looked at their code, I was thinking about it while lying in bed. I always thought that adaptive spinlocks were to spin for a short period of time based off of some heuristic and then sleep. This idea is totally bogus: no heuristic can account for a bunch of different activities. But Gregory mentioned something to me that made a hell of a lot of sense, and that is to only spin while the owner is running. If the owner is running, then it would seem that it would be quicker to spin than to take the scheduling hit.

While lying awake in bed, it dawned on me that we could simply spin in the fast lock and never touch the "has waiters" flag, which would keep the owner from going into the slow path. Also, the task itself is preemptible while spinning, so this would not affect latencies. The only trick was to not have the owner get freed between the time you saw the owner and the time you check its run queue. This was easily solved by simply grabbing the RCU read lock, because freeing of a task must happen after a grace period.

I first tried to stay only in the fast path. This works fine until you want to guarantee that the highest-prio task gets the lock next. I tried all sorts of hackeries and found that there were too many cases where we could miss. I finally concurred with Gregory and decided that going into the slow path was the way to go.

I then started looking into what the guys over at Novell did. They had the basic idea correct, but went way overboard in the implementation, making it far more complex than it needed to be. I rewrote their work using the ideas from my original patch, and simplified it quite a bit. This is the patch that they wanted to do ;-)

Special thanks goes out to Gregory Haskins, Sven Dietrich and Peter Morreale, for proving that adaptive spin locks certainly *can* make a difference.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
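A heavily simplified userspace pthread sketch of that idea. The owner_running flag is only a stand-in for the kernel's "is the rt_mutex owner currently on a CPU" check done under rcu_read_lock(); everything here is a hypothetical illustration, not the -rt code.

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy adaptive lock: spin while the owner appears to be running,
 * otherwise give up the CPU instead of burning cycles. */
struct adaptive_lock {
	atomic_bool held;
	atomic_bool owner_running;	/* stand-in for the runqueue check */
};

static void adaptive_acquire(struct adaptive_lock *l)
{
	bool expected = false;

	while (!atomic_compare_exchange_weak(&l->held, &expected, true)) {
		expected = false;
		if (!atomic_load(&l->owner_running))
			sched_yield();	/* owner off CPU: stop spinning */
		/* else: owner is running, keep spinning -- it should release soon */
	}
	atomic_store(&l->owner_running, true);
}

static void adaptive_release(struct adaptive_lock *l)
{
	atomic_store(&l->owner_running, false);
	atomic_store(&l->held, false);
}

static struct adaptive_lock lock;
static int shared;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		adaptive_acquire(&lock);
		shared++;		/* protected by the toy lock */
		adaptive_release(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("shared = %d (expected 200000)\n", shared);
	return 0;
}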
-