Commit 4ff1ffb4 authored by Nick Piggin, committed by Linus Torvalds

[PATCH] oom: reclaim_mapped on oom

It can potentially take several scans of the LRU lists before we can even start
reclaiming pages.

Mapped pages with young ptes can take 2 passes on the active list plus one on
the inactive list.  But reclaim_mapped may not always kick in instantly, so it
could take even more passes than that.

Raise the threshold for marking a zone as all_unreclaimable from a factor of 4
times the pages in the zone to a factor of 6.  Introduce a mechanism to force
reclaim_mapped if we've reached a factor of 3 and still haven't made progress.

Previously, a customer doing stress testing was able to easily OOM the box
after using only a small fraction of its swap (~100MB).  After the patches, it
would only OOM after having used up all swap (~800MB).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 408d8544
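
For illustration, here is a condensed userspace sketch of the two thresholds the patch establishes. This is not the kernel code itself: the struct below is a simplified stand-in for the 2.6-era struct zone fields the diff touches, and zone_should_be_marked_unreclaimable is a hypothetical helper invented for the example. Forced reclaim of mapped pages kicks in once a zone's scan count reaches 3x its LRU size, while the zone is only written off as all_unreclaimable at 6x.

/* Condensed sketch, not kernel code: simplified stand-in types and an
 * invented helper, mirroring only the thresholds added by this patch. */
#include <stdio.h>

struct zone {
	unsigned long pages_scanned;
	unsigned long nr_active;
	unsigned long nr_inactive;
};

/* Mirrors the patch: force reclaim_mapped once we've scanned 3x the LRU pages. */
static int zone_is_near_oom(const struct zone *zone)
{
	return zone->pages_scanned >= (zone->nr_active + zone->nr_inactive) * 3;
}

/* Hypothetical helper: give up on the zone only after scanning 6x its LRU
 * pages, and only when slab reclaim freed nothing, as in the last hunk of
 * the diff below. */
static int zone_should_be_marked_unreclaimable(const struct zone *zone, int nr_slab)
{
	return nr_slab == 0 &&
	       zone->pages_scanned >= (zone->nr_active + zone->nr_inactive) * 6;
}

int main(void)
{
	struct zone z = { .nr_active = 1000, .nr_inactive = 1000 };

	for (z.pages_scanned = 0; z.pages_scanned <= 12000; z.pages_scanned += 2000)
		printf("scanned=%5lu near_oom=%d unreclaimable=%d\n",
		       z.pages_scanned,
		       zone_is_near_oom(&z),
		       zone_should_be_marked_unreclaimable(&z, 0));
	return 0;
}

With 2000 pages on the LRU lists in total, the sketch reports near-OOM (forced reclaim_mapped) at 6000 pages scanned and all_unreclaimable only at 12000, matching the factor-3 and factor-6 thresholds in the diff below.
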
@@ -697,6 +697,11 @@ done:
 	return nr_reclaimed;
 }
 
+static inline int zone_is_near_oom(struct zone *zone)
+{
+	return zone->pages_scanned >= (zone->nr_active + zone->nr_inactive)*3;
+}
+
 /*
  * This moves pages from the active list to the inactive list.
  *
@@ -732,6 +737,9 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 		long distress;
 		long swap_tendency;
 
+		if (zone_is_near_oom(zone))
+			goto force_reclaim_mapped;
+
 		/*
 		 * `distress' is a measure of how much trouble we're having
 		 * reclaiming pages.  0 -> no problems.  100 -> great trouble.
@@ -767,6 +775,7 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 		 * memory onto the inactive list.
 		 */
 		if (swap_tendency >= 100)
+force_reclaim_mapped:
 			reclaim_mapped = 1;
 	}
 
@@ -1161,7 +1170,7 @@ scan:
 			if (zone->all_unreclaimable)
 				continue;
 			if (nr_slab == 0 && zone->pages_scanned >=
-				    (zone->nr_active + zone->nr_inactive) * 4)
+				    (zone->nr_active + zone->nr_inactive) * 6)
 					zone->all_unreclaimable = 1;
 			/*
 			 * If we've done a decent amount of scanning and
...