From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	linux-mm@kvack.org
Cc: Andi Kleen <ak@linux.intel.com>,
	"H. Peter Anvin" <hpa@linux.intel.com>,
	linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH v4 07/10] thp: implement splitting pmd for huge zero page
Date: Mon, 15 Oct 2012 09:00:56 +0300
Message-ID: <1350280859-18801-8-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1350280859-18801-1-git-send-email-kirill.shutemov@linux.intel.com>

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

We can't split the huge zero page itself (and it's a bug if we try), but
we can split a pmd which points to it.

On splitting such a pmd, we create a page table and populate it with ptes
that all point to the normal (4k) zero page.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
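Not part of the patch: a rough userspace sketch of one way to exercise the
new path, assuming x86_64 (2M pmds), THP enabled, and patch 08/10 of this
series applied so that a read fault installs the huge zero pmd. The 4k
mprotect() then forces the pmd to be split:

#include <string.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumed 2M pmd size on x86_64 */

int main(void)
{
	char buf[64];
	char *map, *p;

	map = mmap(NULL, 3 * HPAGE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;

	/* align up so [p, p + HPAGE_SIZE) covers one whole pmd */
	p = (char *)(((unsigned long)map + HPAGE_SIZE - 1) &
		     ~(HPAGE_SIZE - 1));
	madvise(p, HPAGE_SIZE, MADV_HUGEPAGE);

	/* read fault: with patch 08/10 this installs the huge zero pmd */
	memcpy(buf, p, sizeof(buf));

	/*
	 * mprotect() a 4k sub-range: the pmd cannot stay huge, so it gets
	 * split and, with this patch, goes through
	 * __split_huge_zero_page_pmd().
	 */
	mprotect(p, 4096, PROT_READ);

	return 0;
}
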
 mm/huge_memory.c |   47 ++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 87359f1..b267b12 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1610,6 +1610,7 @@ int split_huge_page(struct page *page)
 	struct anon_vma *anon_vma;
 	int ret = 1;
 
+	BUG_ON(is_huge_zero_pfn(page_to_pfn(page)));
 	BUG_ON(!PageAnon(page));
 	anon_vma = page_lock_anon_vma(page);
 	if (!anon_vma)
@@ -2508,23 +2509,63 @@ static int khugepaged(void *none)
 	return 0;
 }
 
+static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+		unsigned long haddr, pmd_t *pmd)
+{
+	pgtable_t pgtable;
+	pmd_t _pmd;
+	int i;
+
+	pmdp_clear_flush(vma, haddr, pmd);
+	/* leave pmd empty until pte is filled */
+
+	pgtable = get_pmd_huge_pte(vma->vm_mm);
+	pmd_populate(vma->vm_mm, &_pmd, pgtable);
+
+	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+		pte_t *pte, entry;
+		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
+		entry = pte_mkspecial(entry);
+		pte = pte_offset_map(&_pmd, haddr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(vma->vm_mm, haddr, pte, entry);
+		pte_unmap(pte);
+	}
+	smp_wmb(); /* make pte visible before pmd */
+	pmd_populate(vma->vm_mm, pmd, pgtable);
+}
+
 void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd)
 {
 	struct page *page;
+	struct mm_struct *mm = vma->vm_mm;
 	unsigned long haddr = address & HPAGE_PMD_MASK;
+	unsigned long mmun_start;	/* For mmu_notifiers */
+	unsigned long mmun_end;		/* For mmu_notifiers */
 
 	BUG_ON(vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE);
 
-	spin_lock(&vma->vm_mm->page_table_lock);
+	mmun_start = haddr;
+	mmun_end   = haddr + HPAGE_PMD_SIZE;
+	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	spin_lock(&mm->page_table_lock);
 	if (unlikely(!pmd_trans_huge(*pmd))) {
-		spin_unlock(&vma->vm_mm->page_table_lock);
+		spin_unlock(&mm->page_table_lock);
+		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+		return;
+	}
+	if (is_huge_zero_pmd(*pmd)) {
+		__split_huge_zero_page_pmd(vma, haddr, pmd);
+		spin_unlock(&mm->page_table_lock);
+		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 		return;
 	}
 	page = pmd_page(*pmd);
 	VM_BUG_ON(!page_count(page));
 	get_page(page);
-	spin_unlock(&vma->vm_mm->page_table_lock);
+	spin_unlock(&mm->page_table_lock);
+	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 
 	split_huge_page(page);
 
-- 
1.7.7.6

