Commit a9e01acd authored by Vincent Li, committed by james toy

If we cannot isolate any pages from the LRU list, we do not have to account
for page movement either.  KOSAKI already did the same for
shrink_inactive_list in commit 5343daceec.

This patch removes the unnecessary page accounting and locking overhead from
shrink_active_list as follow-up work to commit 5343daceec.
Signed-off-by: Vincent Li <macli@brc.ubc.ca>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 3a220441
@@ -1333,9 +1333,12 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 	if (scanning_global_lru(sc)) {
 		zone->pages_scanned += pgscanned;
 	}
-	reclaim_stat->recent_scanned[file] += nr_taken;
 	__count_zone_vm_events(PGREFILL, zone, pgscanned);
+	if (nr_taken == 0)
+		goto done;
+	reclaim_stat->recent_scanned[file] += nr_taken;
 	if (file)
 		__mod_zone_page_state(zone, NR_ACTIVE_FILE, -nr_taken);
 	else
@@ -1393,6 +1396,7 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 	move_active_pages_to_lru(zone, &l_inactive,
 						LRU_BASE + file * LRU_FILE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
+done:
 	spin_unlock_irq(&zone->lru_lock);
 }
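
For context, here is a minimal, self-contained sketch of the control flow this patch introduces: isolate first, and only do the statistics accounting when something was actually taken off the LRU. The struct and helper names below are illustrative stand-ins, not the kernel's real data structures or API.

#include <stdio.h>

/* Illustrative stand-ins for the kernel's zone and reclaim statistics;
 * simplified placeholders, not the real mm data structures. */
struct zone_stub {
	int lru_lock;			/* placeholder for spinlock_t lru_lock   */
	unsigned long recent_scanned;	/* placeholder for reclaim_stat counters */
	long nr_active;			/* placeholder for NR_ACTIVE_* counters   */
};

static void lru_lock(struct zone_stub *z)   { z->lru_lock = 1; }
static void lru_unlock(struct zone_stub *z) { z->lru_lock = 0; }

/* Pretend to pull pages off the active LRU; returns how many were taken. */
static unsigned long isolate_pages_stub(struct zone_stub *z)
{
	(void)z;
	return 0;	/* e.g. every page was busy, nothing could be isolated */
}

static void shrink_active_list_sketch(struct zone_stub *zone)
{
	unsigned long nr_taken;

	lru_lock(zone);
	nr_taken = isolate_pages_stub(zone);

	/* The point of the patch: if nothing was isolated there is no page
	 * movement to account for, so skip the counter updates and fall
	 * through to the unlock (the "goto done" in the real function). */
	if (nr_taken == 0)
		goto done;

	zone->recent_scanned += nr_taken;	/* accounting only when pages were taken */
	zone->nr_active -= nr_taken;
	/* ... the real function scans the isolated pages, moves them to the
	 *     inactive list and adjusts the isolation counters ... */
done:
	lru_unlock(zone);
}

int main(void)
{
	struct zone_stub zone = { 0, 0, 0 };

	shrink_active_list_sketch(&zone);
	printf("recent_scanned = %lu\n", zone.recent_scanned);	/* 0: early exit taken */
	return 0;
}

In the sketch, as in the patched function, the zero-taken case touches no counters at all and only pays for the lock it already holds, which is exactly the overhead the commit message describes removing.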