- 29 Jul, 2009 40 commits
-
-
Steven Rostedt authored
__nf_conntrack_destroy is called with preemption disabled and calls functions that will schedule in PREEMPT_RT. When PREEMPT_RT is defined we call an RCU callback to do the destruction at a later time. Signed-off-by:
Steven Rostedt <srostedt@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
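A minimal sketch of the deferral pattern described above, assuming struct nf_conn carries an rcu_head (the 'rcu' field name is an assumption) and that __nf_conntrack_destroy() does the actual teardown:

    #include <linux/kernel.h>
    #include <linux/rcupdate.h>
    #include <net/netfilter/nf_conntrack.h>

    /* Illustrative names only -- this is not the actual diff. */
    extern void __nf_conntrack_destroy(struct nf_conn *ct);

    #ifdef CONFIG_PREEMPT_RT
    static void nf_conntrack_destroy_rcu(struct rcu_head *head)
    {
        struct nf_conn *ct = container_of(head, struct nf_conn, rcu);

        /* Runs later, outside the preempt-disabled caller context. */
        __nf_conntrack_destroy(ct);
    }

    void nf_conntrack_destroy(struct nf_conn *ct)
    {
        /* Caller has preemption disabled; defer the work that may schedule. */
        call_rcu(&ct->rcu, nf_conntrack_destroy_rcu);
    }
    #else
    void nf_conntrack_destroy(struct nf_conn *ct)
    {
        __nf_conntrack_destroy(ct);
    }
    #endif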
-
Steven Rostedt authored
John Kacur pointed out that the get_cpu_var used in net/sched/sch_generic.c would trigger warnings. This was happening on a statistic variable updated by a softirq which is bound to a single thread. John sent a patch that used local_irq_save which is a little bit of overkill. This version uses preempt disable, but we still need to create a preempt_disable_rt API that is only activated when PREEMPT_RT is configured. Signed-off-by:
Steven Rostedt <srostedt@redhat.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
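A rough sketch of the pattern the commit settles on: plain preemption disabling around the per-CPU statistics update rather than local_irq_save(). The counter name is made up, and since the preempt_disable_rt() API mentioned above does not exist yet, plain preempt_disable() stands in for it:

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    /* Stand-in for the qdisc statistic the real code touches. */
    static DEFINE_PER_CPU(unsigned long, xmit_stat);

    static void account_xmit(void)
    {
        /*
         * The softirq is bound to a single thread, so keeping this task on
         * its CPU while it touches the per-CPU counter is enough; disabling
         * interrupts would be overkill.
         */
        preempt_disable();
        __get_cpu_var(xmit_stat)++;
        preempt_enable();
    }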
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Flatten out the dev_queue_xmit() code flow. This keeps the fall-through fast-path free for the compiler, and also helps code readability. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
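The change is purely structural; a generic illustration of the shape (an invented helper, not the actual dev_queue_xmit() diff) is that the exceptional case gets its own exit label so the common case falls straight through without nesting:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical wrapper showing the flattened control flow. */
    static int xmit_one(struct sk_buff *skb, struct net_device *dev)
    {
        if (unlikely(!netif_running(dev)))
            goto drop;

        /* fast path: no nesting, falls straight through */
        return dev_queue_xmit(skb);

    drop:
        kfree_skb(skb);
        return NET_XMIT_DROP;
    }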
-
Ingo Molnar authored
- __netif_tx_lock() always passes in 'current' as the lock owner, so eliminate this parameter. - likewise for HARD_TX_LOCK() Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
mbeauch authored
Changed the real-time patch code to detect recursive calls to dev_queue_xmit and drop the packet when detected. Signed-off-by:
Mark Beauchemin <mark.beauchemin@sycamorenet.com> [ ported to latest upstream ] Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
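One way the described detection could look. The per-CPU flag and wrapper are invented for illustration (the actual patch may track ownership differently), and a production version would also have to deal with the transmit path being preemptible on RT:

    #include <linux/netdevice.h>
    #include <linux/percpu.h>
    #include <linux/skbuff.h>

    static DEFINE_PER_CPU(int, xmit_recursion);    /* illustrative marker */

    static int xmit_no_recursion(struct sk_buff *skb)
    {
        int ret;

        if (__get_cpu_var(xmit_recursion)) {
            /* Already inside dev_queue_xmit() on this CPU: drop the packet. */
            kfree_skb(skb);
            return NET_XMIT_DROP;
        }

        __get_cpu_var(xmit_recursion) = 1;
        ret = dev_queue_xmit(skb);
        __get_cpu_var(xmit_recursion) = 0;

        return ret;
    }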
-
Mikulas Patocka authored
On one of my machines with a tickless kernel and plip I always get the message "NOHZ: local_softirq_pending 08" when using plip (on another machine with a tickless kernel and plip I get no errors). The bug happens on both 2.6.21 and 2.6.22-rc1. This patch fixes that. Note that plip calls netif_rx neither from hardware interrupt nor from ksoftirqd, so there is no one who would wake ksoftirqd then. netif_rx only calls __raise_softirq_irqoff(NET_RX_SOFTIRQ), which sets the softirq bit but doesn't wake ksoftirqd. [ tglx: Removed the remaining users of __raise_softirq_irqoff() as well. ] Signed-off-by:
Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
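The distinction the fix relies on, sketched with an invented wrapper: __raise_softirq_irqoff() only marks the softirq pending, while raise_softirq_irqoff() also wakes ksoftirqd when called outside interrupt context, which is exactly the situation plip's netif_rx() path is in:

    #include <linux/interrupt.h>

    static void kick_net_rx(void)
    {
        unsigned long flags;

        local_irq_save(flags);
        /*
         * Unlike __raise_softirq_irqoff(), this wakes ksoftirqd when we are
         * not in interrupt context, so the pending NET_RX_SOFTIRQ actually
         * gets processed.
         */
        raise_softirq_irqoff(NET_RX_SOFTIRQ);
        local_irq_restore(flags);
    }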
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
MUST-FIX: check the skbuff.c bit! MUST-FIX: check the sched.c bit! This doesn't look good. You declare it as a PER_CPU_LOCKED, but then never use the extra lock to synchronize data. Given that sock_proc_inuse_get() is a racy read anyway, the 'right' fix would be to do something like: Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Mike Galbraith authored
On Sat, 2007-10-27 at 11:44 +0200, Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > [10138.175796] [<c0105de3>] show_trace+0x12/0x14
> > > [10138.180291] [<c0105dfb>] dump_stack+0x16/0x18
> > > [10138.184769] [<c011609f>] native_smp_call_function_mask+0x138/0x13d
> > > [10138.191117] [<c0117606>] smp_call_function+0x1e/0x24
> > > [10138.196210] [<c012f85c>] on_each_cpu+0x25/0x50
> > > [10138.200807] [<c0115c74>] flush_tlb_all+0x1e/0x20
> > > [10138.205553] [<c016caaf>] kmap_high+0x1b6/0x417
> > > [10138.210118] [<c011ec88>] kmap+0x4d/0x4f
> > > [10138.214102] [<c026a9d8>] ntfs_end_buffer_async_read+0x228/0x2f9
> > > [10138.220163] [<c01a0e9e>] end_bio_bh_io_sync+0x26/0x3f
> > > [10138.225352] [<c01a2b09>] bio_endio+0x42/0x6d
> > > [10138.229769] [<c02c2a08>] __end_that_request_first+0x115/0x4ac
> > > [10138.235682] [<c02c2da7>] end_that_request_chunk+0x8/0xa
> > > [10138.241052] [<c0365943>] ide_end_request+0x55/0x10a
> > > [10138.246058] [<c036dae3>] ide_dma_intr+0x6f/0xac
> > > [10138.250727] [<c0366d83>] ide_intr+0x93/0x1e0
> > > [10138.255125] [<c015afb4>] handle_IRQ_event+0x5c/0xc9
> >
> > Looks like ntfs is kmap()ing from interrupt context. Should be using
> > kmap_atomic instead, I think.
>
> it's not atomic interrupt context but irq thread context - and -rt
> remaps kmap_atomic() to kmap() internally.

Hm. Looking at the change to mm/bounce.c, perhaps I should do this instead? Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
- btrfs_path_lock_waiting() looks rather dubious and there's no spin_is_contended() method on PREEMPT_RT - so exclude this for now => needs a proper fix later. Either this code gets zapped from btrfs upstream, or we add spin_is_contended() to PREEMPT_RT too. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
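A sketch of the temporary exclusion, using an invented helper name: on PREEMPT_RT the contention check simply compiles away because spin_is_contended() is not available there:

    #include <linux/spinlock.h>

    static int path_lock_contended(spinlock_t *lock)
    {
    #ifndef CONFIG_PREEMPT_RT
        return spin_is_contended(lock);
    #else
        return 0;    /* no spin_is_contended() on RT: report uncontended */
    #endif
    }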
-
Thomas Gleixner authored
On RT we cannot loop with preemption disabled here as mnt_make_readonly() might have been preempted. Instead we block on vfsmount_lock which is held by mnt_make_readonly(). Works for !RT as well. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
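A reconstruction of the idea from the commit text, not the actual fs/namespace.c diff: instead of spinning with preemption disabled, the waiter takes and drops vfsmount_lock, so on RT it sleeps until mnt_make_readonly() finishes its lock-protected update:

    #include <linux/spinlock.h>

    extern spinlock_t vfsmount_lock;    /* fs/namespace.c global of that era */

    static void wait_for_mnt_update(void)
    {
        /*
         * mnt_make_readonly() holds vfsmount_lock across its update, so
         * acquiring it here blocks (and on RT sleeps) until the writer is
         * done -- even if that writer was preempted.  Works for !RT too.
         */
        spin_lock(&vfsmount_lock);
        spin_unlock(&vfsmount_lock);
    }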
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
I was compiling a kernel in a shell that I set to a priority of 20, and it locked up on the bit_spin_lock crap of jbd. This patch adds another spinlock to the buffer head and uses that instead of the bit_spins. From: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
--
 fs/buffer.c                 |  3 ++-
 include/linux/buffer_head.h |  1 +
 include/linux/jbd.h         | 12 ++++++------
 3 files changed, 9 insertions(+), 7 deletions(-)
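The direction of the fix, reconstructed from memory rather than from the diff: struct buffer_head gains a real spinlock (the field name below is an assumption) and jbd's bit-spinlock wrappers are routed through it, so on RT the lock can sleep and priority-inherit instead of live-locking against a low-priority holder:

    #include <linux/buffer_head.h>
    #include <linux/spinlock.h>

    /* Assumes a new 'spinlock_t b_state_lock' field in struct buffer_head. */
    static inline void jbd_lock_bh_state(struct buffer_head *bh)
    {
        spin_lock(&bh->b_state_lock);
    }

    static inline void jbd_unlock_bh_state(struct buffer_head *bh)
    {
        spin_unlock(&bh->b_state_lock);
    }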
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Luis Claudio R. Goncalves authored
Fixes spurious system load spikes observed in /proc/loadavgrt, as described in: Bug 253103: /proc/loadavgrt issues weird results https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=253103 Signed-off-by:
Luis Claudio R. Goncalves <lgoncalv@redhat.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ankita Garg authored
So, I have merged my previous patch (to display rt_nr_running info in sched_debug.c) with this one. Signed-off-by:
Ankita Garg <ankita@in.ibm.com> [mingo@elte.hu: fix it to work on !SCHEDSTATS too] Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
--
 kernel/sched_debug.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
-
Luis Claudio R. Goncalves authored
Hello, The values in /proc/loadavgrt are sometimes the real load and sometimes garbage. As you can see in the tests below, it occurs in kernels from 2.6.21.5-rt20 to 2.6.23-rc2-rt2. The code for calc_load(), in kernel/timer.c, has not changed much in the -rt patches.

[lclaudio@lab sandbox]$ ls /proc/loadavg*
/proc/loadavg  /proc/loadavgrt
[lclaudio@lab sandbox]$ uname -a
Linux lab.casa 2.6.21-34.el5rt #1 SMP PREEMPT RT Thu Jul 12 15:26:48 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
[lclaudio@lab sandbox]$ cat /proc/loadavg*
4.57 4.90 4.16 3/146 23499
0.44 0.98 1.78 0/146 23499
...
[lclaudio@lab sandbox]$ cat /proc/loadavg*
4.65 4.80 4.75 5/144 20720
23896.04 -898421.23 383170.94 2/144 20720

[root@neverland ~]# uname -a
Linux neverland.casa 2.6.21.5-rt20 #2 SMP PREEMPT RT Fri Jul 13 18:31:38 BRT 2007 i686 athlon i386 GNU/Linux
[root@neverland ~]# cat /proc/loadavg*
0.16 0.16 0.15 1/184 11240
344.65 0.38 311.71 0/184 11240

[williams@torg ~]$ uname -a
Linux torg 2.6.23-rc2-rt2 #14 SMP PREEMPT RT Tue Aug 7 20:07:31 CDT 2007 x86_64 x86_64 x86_64 GNU/Linux
[williams@torg ~]$ cat /proc/loadavg*
0.88 0.76 0.57 1/257 7267
122947.70 103790.53 -564712.87 0/257 7267

---------->

Fixes spurious system load spikes observed in /proc/loadavgrt, as described in: Bug 253103: /proc/loadavgrt issues weird results https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=253103 Signed-off-by:
Luis Claudio R. Goncalves <lclaudio@uudg.org> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
KVM expects the notifier call with irqs enabled. It's necessary due to a possible IPI call. Make the preempt-rt version behave the same way as mainline. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Gregory Haskins authored
We will use this later in the series to eliminate the need for a function call. [ Steven Rostedt: added task_is_current function ] Signed-off-by:
Gregory Haskins <ghaskins@novell.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
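A plausible shape for the task_is_current() helper mentioned above, written as scheduler-internal code (task_rq() only exists inside kernel/sched.c); a guess at the implementation, not a quote of it:

    /* kernel/sched.c context assumed: task_rq() is scheduler-private. */
    int task_is_current(struct task_struct *task)
    {
        return task_rq(task)->curr == task;
    }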
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Here we are in the CPU_DEAD notifier, and we must not sleep nor enable interrupts. Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Idle task boosting is a no-no in general. There is one exception, when NOHZ is active: The idle task calls get_next_timer_interrupt() and holds the timer wheel base->lock on its CPU while another CPU wants to access the timer (probably to cancel it). We can safely ignore the boosting request, as the idle CPU runs this code with interrupts disabled and will complete the lock protected section without being interrupted. So there is no real need to boost. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
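Sketched below, reconstructed from the commit text with invented wrapping, is how the exception might look where rt_mutex_setprio() decides whether to boost (scheduler-internal context assumed for struct rq):

    /* Returns nonzero when a PI boost request can safely be ignored. */
    static int ignore_idle_boost(struct rq *rq, struct task_struct *p)
    {
        if (unlikely(p == rq->idle)) {
            WARN_ON(p != rq->curr);    /* idle must be the one running */
            /*
             * Idle only holds the timer wheel base->lock with interrupts
             * disabled and will leave that section without being preempted,
             * so no boost is needed.
             */
            return 1;
        }
        return 0;
    }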
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
Argh, cut and paste wasn't enough... Use this patch instead. It needs an irq disable. But, believe it or not, on SMP this is actually better. If the irq is shared (as it is in Mark's case), we don't stop the irq of other devices from being handled on another CPU (unfortunately for Mark, he pinned all interrupts to one CPU). Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>

 drivers/net/3c59x.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-