Commit 6f5e6b9e authored by Hugh Dickins, committed by Linus Torvalds

[PATCH] fix free swap cache latency

Lee Revell reported 28ms latency when a process with lots of swapped memory
exits.

2.6.15 introduced a latency regression when unmapping: in the accounting for
the zap_work latency breaker, a pte_none entry counted 1, a pte_present entry
counted PAGE_SIZE, but a swap entry counted nothing at all.  We think of pages
present as the slow case, but Lee's trace shows that free_swap_and_cache's
radix tree lookup can amount to a lot of work - and we could have been doing
it many thousands of times without a latency break.
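
As an illustration only (this is not kernel source: the names entry_kind and
zap_until_break, and the budget constant, are invented for this sketch, with
the budget chosen to mirror the CONFIG_PREEMPT ZAP_BLOCK_SIZE of that era),
a small self-contained C program shows how an uncharged entry class defeats
a budget-based latency breaker like zap_work:

    #include <stdio.h>

    #define PAGE_SIZE       4096
    #define ZAP_BLOCK_SIZE  (8 * PAGE_SIZE)   /* per-pass work budget */

    enum entry_kind { ENTRY_NONE, ENTRY_PRESENT, ENTRY_SWAP };

    /* Walk entries until the work budget runs out; return how many were handled. */
    static int zap_until_break(const enum entry_kind *entries, int n, int charge_swap)
    {
        long zap_work = ZAP_BLOCK_SIZE;
        int i;

        for (i = 0; i < n && zap_work > 0; i++) {
            switch (entries[i]) {
            case ENTRY_NONE:
                zap_work--;                 /* empty pte: trivially cheap */
                break;
            case ENTRY_PRESENT:
                zap_work -= PAGE_SIZE;      /* unmapping a present page: the "slow" case */
                break;
            case ENTRY_SWAP:
                /* 2.6.15 charged nothing here, so the radix tree lookups in
                 * free_swap_and_cache() could run unbounded between breaks. */
                if (charge_swap)
                    zap_work -= PAGE_SIZE;
                break;
            }
        }
        return i;
    }

    int main(void)
    {
        static enum entry_kind swapped[100000];
        int i;

        for (i = 0; i < 100000; i++)
            swapped[i] = ENTRY_SWAP;

        printf("2.6.15 accounting:  %d swap entries handled before a break\n",
               zap_until_break(swapped, 100000, 0));
        printf("patched accounting: %d swap entries handled before a break\n",
               zap_until_break(swapped, 100000, 1));
        return 0;
    }

With the 2.6.15-style accounting the loop walks all 100000 swapped entries
before it ever breaks; with swap entries charged like present pages it breaks
after a handful, which is the effect of the change below.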

Move the zap_work update up so that swap entries are accounted like pages
present.  This does charge non-linear pte_file entries, and swap entries that
unmap_mapping_range skips over, by the same amount even though they're quick:
but neither of those cases deserves complicating the code (and they're
treated no worse than they were in 2.6.14).
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 7670f023
mm/memory.c
@@ -623,11 +623,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			(*zap_work)--;
 			continue;
 		}
+
+		(*zap_work) -= PAGE_SIZE;
+
 		if (pte_present(ptent)) {
 			struct page *page;
 
-			(*zap_work) -= PAGE_SIZE;
-
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(details) && page) {
 				/*