Commit 82a90819 authored by Bo Liu, committed by Greg Kroah-Hartman

mm: remove incorrect swap_count() from try_to_unuse()

commit 32c5fc10 upstream.

In try_to_unuse(), swcount is a local copy of *swap_map, including the
SWAP_HAS_CACHE bit; but it was wrongly compared against swap_count(*swap_map),
which masks off the SWAP_HAS_CACHE bit, so the comparison succeeded where it
should have failed.

That had the effect of resetting the mm from which to start searching
for the next swap page, to an irrelevant mm instead of to an mm in which
this swap page had been found: which may increase search time by ~20%.
But we're used to swapoff being slow, so never noticed the slowdown.

Remove that one spurious use of swap_count(): Bo Liu thought it merely
redundant, Hugh rewrote the description since it was measurably wrong.
Signed-off-by: Bo Liu <bo-liu@hotmail.com>
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
parent cd4ad4b9
@@ -1149,8 +1149,7 @@ static int try_to_unuse(unsigned int type)
 			} else
 				retval = unuse_mm(mm, entry, page);
-			if (set_start_mm &&
-			    swap_count(*swap_map) < swcount) {
+			if (set_start_mm && *swap_map < swcount) {
 				mmput(new_start_mm);
 				atomic_inc(&mm->mm_users);
 				new_start_mm = mm;