Commit 4da5eda0 authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] Page Migration: Make do_swap_page redo the fault

It is better to redo the complete fault if do_swap_page() finds that the
page is no longer PageSwapCache(), because the page migration code may
already have replaced the swap pte with a pte pointing to valid memory.

Without this patch, do_swap_page() may interpret an invalid swap entry,
since the pte is not reloaded when looping back: the page migration code
may already have reused the swap entry referenced by our local swp_entry.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent cb2b95e1
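
The race described above is subtle, so here is a small, self-contained user-space sketch of why backing out and redoing the fault is safer than looping on a stale local snapshot. This is only an illustrative model: names such as fake_pte, migrate_page and do_fake_swap_fault are invented for the example and do not exist in the kernel; the real logic lives in do_swap_page() in mm/memory.c, shown in the diff below.

/*
 * User-space model of the race (illustrative only; all identifiers here
 * are invented for the example).  The fault handler snapshots the pte
 * (orig_pte), decodes a swap entry from it, and later rechecks the pte.
 * If a "migration" step replaced the pte in the meantime, the handler
 * must not loop on its stale local swap entry; it backs out and lets
 * the caller redo the whole fault, which reloads the pte from scratch.
 */
#include <stdio.h>
#include <stdbool.h>

enum fault_result { FAULT_DONE, FAULT_RETRY };

struct fake_pte {
	bool present;		/* pte points to valid memory */
	int swap_entry;		/* meaningful only when !present */
};

static struct fake_pte pte;	/* the "real" pte, shared state */

/* Model of page migration: replace the swap pte with a present pte
 * and invalidate (reuse) the old swap slot. */
static void migrate_page(void)
{
	pte.present = true;
	pte.swap_entry = -1;	/* old swap entry is now invalid */
}

/* Post-patch behaviour: if the pte changed under us, back out and
 * report FAULT_RETRY so the caller redoes the complete fault. */
static enum fault_result do_fake_swap_fault(struct fake_pte orig_pte)
{
	int entry = orig_pte.swap_entry;	/* local snapshot */

	/* ... swap-in work using 'entry' would happen here ... */

	/* Recheck: did somebody (e.g. migration) change the pte? */
	if (pte.present || pte.swap_entry != entry)
		return FAULT_RETRY;	/* redo fault with a fresh pte */

	pte.present = true;		/* normal swap-in completion */
	return FAULT_DONE;
}

int main(void)
{
	pte.present = false;
	pte.swap_entry = 42;

	struct fake_pte orig_pte = pte;	/* fault handler's snapshot */

	migrate_page();			/* race: migration wins */

	/* The old code would "goto again" and reuse orig_pte's stale
	 * swap entry; the patched code backs out instead. */
	if (do_fake_swap_fault(orig_pte) == FAULT_RETRY)
		printf("pte changed under us: redo the fault\n");
	else
		printf("swap-in completed\n");

	return 0;
}
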
@@ -1879,7 +1879,6 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out;
 
 	entry = pte_to_swp_entry(orig_pte);
-again:
 	page = lookup_swap_cache(entry);
 	if (!page) {
 		swapin_readahead(entry, address, vma);
@@ -1903,12 +1902,6 @@ again:
 
 	mark_page_accessed(page);
 	lock_page(page);
-	if (!PageSwapCache(page)) {
-		/* Page migration has occured */
-		unlock_page(page);
-		page_cache_release(page);
-		goto again;
-	}
 
 	/*
 	 * Back out if somebody else already faulted in this pte.
...