Commit 21eac81f authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] Swap Migration V5: LRU operations

This is the start of the `swap migration' patch series.

Swap migration allows moving the physical location of pages between nodes
in a NUMA system while the process is running.  This means that the
virtual addresses that the process sees do not change.  However, the system
rearranges the physical location of those pages.

The main intent of the page migration patches here is to reduce the latency
of memory access by moving pages close to the processor where the process
accessing that memory is running.

The patchset allows a process to manually relocate the node on which its
pages are located through the MF_MOVE and MF_MOVE_ALL options while
setting a new memory policy.
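
For illustration only (not part of this patch): a minimal userspace sketch,
assuming the mainline spelling of these options as the MPOL_MF_MOVE /
MPOL_MF_MOVE_ALL flags to mbind(2) from <numaif.h>:

	#include <numaif.h>	/* mbind(), MPOL_BIND, MPOL_MF_MOVE */
	#include <stdio.h>

	/*
	 * Sketch: bind an existing, page-aligned buffer to node 1 and
	 * ask the kernel to move pages already allocated elsewhere.
	 * MPOL_MF_MOVE moves only pages mapped solely by this process;
	 * MPOL_MF_MOVE_ALL also moves shared pages and needs extra
	 * privilege.
	 */
	static int move_buffer_to_node1(void *buf, unsigned long len)
	{
		unsigned long nodemask = 1UL << 1;	/* node 1 only */

		if (mbind(buf, len, MPOL_BIND, &nodemask,
			  sizeof(nodemask) * 8, MPOL_MF_MOVE) < 0) {
			perror("mbind");
			return -1;
		}
		return 0;
	}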

The pages of a process can also be relocated from another process using the
sys_migrate_pages() function call, which requires CAP_SYS_ADMIN.  The
migrate_pages function call takes two sets of nodes and moves pages of a
process that are located on the 'from' nodes to the destination nodes.
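
A hedged caller-side sketch (illustration only; the node numbers are made
up), using the migrate_pages() wrapper as it exists in <numaif.h>:

	#include <numaif.h>	/* migrate_pages() */
	#include <stdio.h>

	/*
	 * Sketch: move all of task <pid>'s pages that currently sit on
	 * node 0 over to node 2.  Needs CAP_SYS_ADMIN at this point in
	 * the series.
	 */
	static int migrate_task_pages(int pid)
	{
		unsigned long old_nodes = 1UL << 0;	/* from: node 0 */
		unsigned long new_nodes = 1UL << 2;	/* to:   node 2 */

		if (migrate_pages(pid, sizeof(unsigned long) * 8,
				  &old_nodes, &new_nodes) < 0) {
			perror("migrate_pages");
			return -1;
		}
		return 0;
	}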

Manual migration is very useful if, for example, the scheduler has relocated
a process to a processor on a distant node.  A batch scheduler or an
administrator can detect the situation and move the pages of the process
nearer to the new processor.

sys_migrate_pages() could be used on non-NUMA machines as well, to force all
of a particular process's pages out to swap, if someone thinks that's useful.

Larger installations usually partition the system using cpusets into sections
of nodes.  Paul has equipped cpusets with the ability to move pages when a
task is moved to another cpuset.  This allows automatic control over the
locality of a process.  If a task is moved to a new cpuset, then all of its
pages are moved with it so that the performance of the process does not sink
dramatically (as is the case today).

Swap migration works by simply evicting the page.  The pages must then be
faulted back in, at which point they are typically reallocated by the system
near the node where the process is executing.

For swap migration the destination of the move is controlled by the allocation
policy.  Cpusets set the allocation policy before calling sys_migrate_pages()
in order to move the pages as intended.

No allocation policy changes are performed for sys_migrate_pages().  This
means that the pages may not be faulted in to the specified nodes if no
allocation policy was set by other means.  The pages will just end up near
the node where the fault occurred.
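
In other words, the policy in force at fault time decides placement.  A
minimal sketch of steering the fault-back with set_mempolicy(2)
(illustration only; the node number is made up):

	#include <numaif.h>	/* set_mempolicy(), MPOL_BIND */

	/*
	 * Sketch: constrain future allocations, including pages faulted
	 * back in after swap migration, to node 3.  Without a step like
	 * this, evicted pages just return near whichever CPU faults them.
	 */
	static int bind_to_node3(void)
	{
		unsigned long nodemask = 1UL << 3;

		return set_mempolicy(MPOL_BIND, &nodemask,
				     sizeof(nodemask) * 8);
	}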

There's another patch series in the pipeline which implements "direct
migration".

The direct migration patchset extends the migration functionality to avoid
going through swap.  The destination node of the relocation is controllable
during the actual moving of pages.  The crutch of using the allocation policy
to relocate is then not necessary and the pages are moved directly to the
target.  It's also faster since swap is not used.

And sys_migrate_pages() can then move pages directly to the specified node.
Implement functions to isolate pages from the LRU and put them back later.

This patch:

An earlier implementation was provided by Hirokazu Takahashi
<taka@valinux.co.jp> and IWAMOTO Toshihiro <iwamoto@valinux.co.jp> for the
memory hotplug project.

From: Magnus

This breaks out isolate_lru_page() and putback_lru_pages().  Needed for swap
migration.
Signed-off-by: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 15316ba8
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -38,3 +38,25 @@ del_page_from_lru(struct zone *zone, struct page *page)
 		zone->nr_inactive--;
 	}
 }
+
+/*
+ * Isolate one page from the LRU lists.
+ *
+ * - zone->lru_lock must be held
+ */
+static inline int __isolate_lru_page(struct page *page)
+{
+	if (unlikely(!TestClearPageLRU(page)))
+		return 0;
+
+	if (get_page_testone(page)) {
+		/*
+		 * It is being freed elsewhere
+		 */
+		__put_page(page);
+		SetPageLRU(page);
+		return -ENOENT;
+	}
+
+	return 1;
+}
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -175,6 +175,9 @@ extern int try_to_free_pages(struct zone **, gfp_t);
 extern int shrink_all_memory(int);
 extern int vm_swappiness;
 
+extern int isolate_lru_page(struct page *p);
+extern int putback_lru_pages(struct list_head *l);
+
 #ifdef CONFIG_MMU
 /* linux/mm/shmem.c */
 extern int shmem_unuse(swp_entry_t entry, struct page *page);
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -593,20 +593,18 @@ static int isolate_lru_pages(int nr_to_scan, struct list_head *src,
 		page = lru_to_page(src);
 		prefetchw_prev_lru_page(page, src, flags);
 
-		if (!TestClearPageLRU(page))
-			BUG();
-		list_del(&page->lru);
-		if (get_page_testone(page)) {
-			/*
-			 * It is being freed elsewhere
-			 */
-			__put_page(page);
-			SetPageLRU(page);
-			list_add(&page->lru, src);
-			continue;
-		} else {
-			list_add(&page->lru, dst);
+		switch (__isolate_lru_page(page)) {
+		case 1:
+			/* Succeeded to isolate page */
+			list_move(&page->lru, dst);
 			nr_taken++;
+			break;
+		case -ENOENT:
+			/* Not possible to isolate */
+			list_move(&page->lru, src);
+			break;
+		default:
+			BUG();
 		}
 	}
 
@@ -614,6 +612,48 @@ static int isolate_lru_pages(int nr_to_scan, struct list_head *src,
 	return nr_taken;
 }
 
+static void lru_add_drain_per_cpu(void *dummy)
+{
+	lru_add_drain();
+}
+
+/*
+ * Isolate one page from the LRU lists and put it on the
+ * indicated list. Do necessary cache draining if the
+ * page is not on the LRU lists yet.
+ *
+ * Result:
+ *  0 = page not on LRU list
+ *  1 = page removed from LRU list and added to the specified list.
+ * -ENOENT = page is being freed elsewhere.
+ */
+int isolate_lru_page(struct page *page)
+{
+	int rc = 0;
+	struct zone *zone = page_zone(page);
+
+redo:
+	spin_lock_irq(&zone->lru_lock);
+	rc = __isolate_lru_page(page);
+	if (rc == 1) {
+		if (PageActive(page))
+			del_page_from_active_list(zone, page);
+		else
+			del_page_from_inactive_list(zone, page);
+	}
+	spin_unlock_irq(&zone->lru_lock);
+	if (rc == 0) {
+		/*
+		 * Maybe this page is still waiting for a cpu to drain it
+		 * from one of the lru lists?
+		 */
+		rc = schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
+		if (rc == 0 && PageLRU(page))
+			goto redo;
+	}
+	return rc;
+}
+
 /*
  * shrink_cache() adds the number of pages reclaimed to sc->nr_reclaimed
  */
@@ -679,6 +719,40 @@ done:
 	pagevec_release(&pvec);
 }
 
+static inline void move_to_lru(struct page *page)
+{
+	list_del(&page->lru);
+	if (PageActive(page)) {
+		/*
+		 * lru_cache_add_active checks that
+		 * the PG_active bit is off.
+		 */
+		ClearPageActive(page);
+		lru_cache_add_active(page);
+	} else {
+		lru_cache_add(page);
+	}
+	put_page(page);
+}
+
+/*
+ * Add isolated pages on the list back to the LRU
+ *
+ * returns the number of pages put back.
+ */
+int putback_lru_pages(struct list_head *l)
+{
+	struct page *page;
+	struct page *page2;
+	int count = 0;
+
+	list_for_each_entry_safe(page, page2, l, lru) {
+		move_to_lru(page);
+		count++;
+	}
+	return count;
+}
+
 /*
  * This moves pages from the active list to the inactive list.
  *
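
To make the intended calling convention concrete, a hypothetical in-kernel
caller of the two new functions (the helper and the migration step are made
up; the return codes follow the comments above: 1 = isolated, 0 = not on the
LRU, -ENOENT = being freed):

	/*
	 * Sketch: collect one candidate page on a private list, operate
	 * on it, then hand everything back.  isolate_lru_page() takes
	 * the page reference that putback_lru_pages() (via move_to_lru())
	 * later drops.
	 */
	static int collect_and_putback(struct page *page)
	{
		LIST_HEAD(pagelist);

		if (isolate_lru_page(page) == 1)
			list_add_tail(&page->lru, &pagelist);

		/* ... migration work on the isolated pages goes here ... */

		return putback_lru_pages(&pagelist);	/* pages put back */
	}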