Commit a9ba9a3b authored by Arjan van de Ven, committed by Linus Torvalds

[PATCH] x86_64: prefetch the mmap_sem in the fault path

In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high in the profile. It turns out this lock
bounces between CPUs quite a bit and is therefore often cache-cold. This patch
prefetches the lock (for write, since even a read acquisition modifies the
lock word) as early as possible, before some other somewhat expensive
operations. With this patch, the down_read_trylock basically fell out of the
top of the profile.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 4bc32c4d
@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long flags;
 	siginfo_t info;
 
+	tsk = current;
+	mm = tsk->mm;
+	prefetchw(&mm->mmap_sem);
+
 	/* get the address */
 	__asm__("movq %%cr2,%0":"=r" (address));
 
-	tsk = current;
-	mm = tsk->mm;
 	info.si_code = SEGV_MAPERR;
...
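For context, prefetchw() (from <linux/prefetch.h>) typically maps to the CPU's write-intent prefetch instruction where one exists (e.g. PREFETCHW on x86 CPUs that support it), pulling the cache line toward this CPU in an exclusive state before the lock is actually taken. A minimal userspace sketch of the same pattern, using GCC's __builtin_prefetch with the write hint; the struct and function names here are illustrative stand-ins, not from the patch:

#include <pthread.h>

/* Illustrative stand-in for mm_struct with its mmap_sem. */
struct mm {
	pthread_rwlock_t mmap_lock;	/* plays the role of mmap_sem */
	/* ... other fields ... */
};

static void handle_fault(struct mm *mm, unsigned long address)
{
	/*
	 * Issue a write-intent prefetch for the lock as early as
	 * possible; the second argument (1) means "prefetch for write",
	 * so the line can arrive in an exclusive state while the
	 * intervening work below is still running.
	 */
	__builtin_prefetch(&mm->mmap_lock, 1);

	/* ... somewhat expensive work the prefetch overlaps with ... */

	if (pthread_rwlock_tryrdlock(&mm->mmap_lock) == 0) {
		/* ... look up the mapping covering 'address' ... */
		pthread_rwlock_unlock(&mm->mmap_lock);
	}
	(void)address;
}

The gain is purely one of timing: the cache-line transfer is started before the other work in the fault path, so by the time the trylock runs it no longer stalls waiting for a line that was last written on another CPU.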