Commit f9384d41 authored by David S. Miller

sparc64: Fix MM refcount check in smp_flush_tlb_pending().

As explained by Benjamin Herrenschmidt:

> CPU 0 is running the context, task->mm == task->active_mm == your
> context. The CPU is in userspace happily churning things.
>
> CPU 1 used to run it, not anymore, it's now running fancyfsd which
> is a kernel thread, but current->active_mm still points to that
> same context.
>
> Because there's only one "real" user, mm_users is 1 (mm_count is
> elevated, but the presence on CPU 1 as active_mm affects only
> mm_count, not mm_users).
>
> At this point, fancyfsd decides to invalidate a mapping currently mapped
> by that context, for example because a networked file has changed
> remotely or something like that, using unmap_mapping_range().
>
> So CPU 1 goes into the zapping code, which eventually ends up calling
> flush_tlb_pending(). Your test will succeed, as current->active_mm is
> indeed the target mm for the flush, and mm_users is indeed 1. So you
> will -not- send an IPI to the other CPU, and CPU 0 will continue happily
> accessing the pages that should have been unmapped.
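
In code terms, the test Ben describes is the pre-fix check in
smp_flush_tlb_pending(), shown simplified below (the trailing arguments
of the cross call are elided; the full change is in the diff that
follows):

	/* Pre-fix logic, simplified.  In the scenario above, CPU 1 is
	 * running fancyfsd, a kernel thread whose active_mm still points
	 * at the target context, so both conditions hold and the cross
	 * call is skipped -- even though CPU 0 is still using the mm.
	 */
	if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1)
		mm->cpu_vm_mask = cpumask_of_cpu(cpu);	/* local flush only */
	else
		smp_cross_call_masked(&xcall_flush_tlb_pending, ...);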

To fix this problem, check ->mm instead of ->active_mm. As Ben
explains, this means:

> So if you test current->mm, you effectively account for mm_users == 1,
> so the only way the mm can be active on another processor is as a lazy
> mm for a kernel thread. So your test should work properly as long
> as you don't have a HW that will do speculative TLB reloads into the
> TLB on that other CPU (and even if you do, you flush-on-switch-in should
> get rid of any crap here).

And therefore we should be OK.
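
Concretely, the property the fix relies on is that a kernel thread
always has current->mm == NULL and merely borrows an active_mm for lazy
TLB. An illustrative helper (not part of the patch; the actual
condition is written inline in the diff below) would be:

	/* Illustrative only, not in the patch.  Kernel threads have
	 * current->mm == NULL, so this test is true only on a CPU that
	 * is really running the user context; with mm_users == 1 that
	 * CPU is unique, and a local-only TLB flush is safe.
	 */
	static inline bool tlb_flush_can_stay_local(struct mm_struct *mm)
	{
		return mm == current->mm && atomic_read(&mm->mm_users) == 1;
	}
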
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 86ee79c3
@@ -1031,7 +1031,7 @@ void smp_fetch_global_regs(void)
  * If the address space is non-shared (ie. mm->count == 1) we avoid
  * cross calls when we want to flush the currently running process's
  * tlb state. This is done by clearing all cpu bits except the current
- * processor's in current->active_mm->cpu_vm_mask and performing the
+ * processor's in current->mm->cpu_vm_mask and performing the
  * flush locally only. This will force any subsequent cpus which run
  * this task to flush the context from the local tlb if the process
  * migrates to another cpu (again).
@@ -1074,7 +1074,7 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 	u32 ctx = CTX_HWBITS(mm->context);
 	int cpu = get_cpu();
 
-	if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1)
+	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
 		mm->cpu_vm_mask = cpumask_of_cpu(cpu);
 	else
 		smp_cross_call_masked(&xcall_flush_tlb_pending,
...