Message-Id: <20100221141756.149128731@redhat.com>
Date: Sun, 21 Feb 2010 15:10:30 +0100
From: aarcange@redhat.com
Subject: [patch 21/36] split_huge_page_mm/vma
References: <20100221141009.581909647@redhat.com>
Content-Disposition: inline; filename=split_huge_page_mm_vma
Sender: owner-linux-mm@kvack.org
To: linux-mm@kvack.org
Cc: Marcelo Tosatti, Adam Litke, Avi Kivity, Izik Eidus, Hugh Dickins,
    Nick Piggin, Rik van Riel, Mel Gorman, Dave Hansen,
    Benjamin Herrenschmidt, Ingo Molnar, Mike Travis, KAMEZAWA Hiroyuki,
    Christoph Lameter, Chris Wright, Andrew Morton, bpicco@redhat.com,
    KOSAKI Motohiro, Balbir Singh, Arnd Bergmann, Andrea Arcangeli

From: Andrea Arcangeli

split_huge_page_mm/vma compat code. Without a fully reliable
split_huge_page_mm/vma, each of the call sites below would need to be
expanded into hundreds of lines of complex code.
Signed-off-by: Andrea Arcangeli
Acked-by: Rik van Riel
Acked-by: Mel Gorman
---

diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -179,6 +179,7 @@ static void mark_screen_rdonly(struct mm
 	if (pud_none_or_clear_bad(pud))
 		goto out;
 	pmd = pmd_offset(pud, 0xA0000);
+	split_huge_page_mm(mm, 0xA0000, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		goto out;
 	pte = pte_offset_map_lock(mm, pmd, 0xA0000, &ptl);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -446,6 +446,7 @@ static inline int check_pmd_range(struct
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		split_huge_page_vma(vma, pmd);
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
 		if (check_pte_range(vma, pmd, addr, next, nodes,
diff --git a/mm/mincore.c b/mm/mincore.c
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -132,6 +132,7 @@ static long do_mincore(unsigned long add
 	if (pud_none_or_clear_bad(pud))
 		goto none_mapped;
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_vma(vma, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		goto none_mapped;
diff --git a/mm/mprotect.c b/mm/mprotect.c
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -89,6 +89,7 @@ static inline void change_pmd_range(stru
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		split_huge_page_mm(mm, addr, pmd);
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
 		change_pte_range(mm, pmd, addr, next, newprot, dirty_accountable);
diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -42,6 +42,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;

 	pmd = pmd_offset(pud, addr);
+	split_huge_page_mm(mm, addr, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -34,6 +34,7 @@ static int walk_pmd_range(pud_t *pud, un
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		split_huge_page_mm(walk->mm, addr, pmd);
 		if (pmd_none_or_clear_bad(pmd)) {
 			if (walk->pte_hole)
 				err = walk->pte_hole(addr, next, walk);