Commit a8a93f3f authored by Jeremy Fitzhardinge

mm: disable preemption in apply_to_pte_range

Impact: bugfix

Lazy mmu mode needs preemption disabled, so if we're applying to
init_mm (which doesn't require any pte locks), then explicitly
disable preemption.  (Do it unconditionally after checking we've
successfully done the allocation, to simplify the error handling.)
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
parent 0d34fb8e
@@ -1718,6 +1718,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	BUG_ON(pmd_huge(*pmd));
+	preempt_disable();
 	arch_enter_lazy_mmu_mode();
 	token = pmd_pgtable(*pmd);
@@ -1729,6 +1730,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
+	preempt_enable();
 	if (mm != &init_mm)
 		pte_unmap_unlock(pte-1, ptl);