From: "Kirill A. Shutemov"
Subject: [PATCH 08/36] khugepaged: ignore pmd tables with THP mapped with ptes
Date: Fri, 10 Jul 2015 20:41:42 +0300
Message-Id: <1436550130-112636-9-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1436550130-112636-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1436550130-112636-1-git-send-email-kirill.shutemov@linux.intel.com>
To: Andrew Morton, Andrea Arcangeli, Hugh Dickins
Cc: Dave Hansen, Mel Gorman, Rik van Riel, Vlastimil Babka,
    Christoph Lameter, Naoya Horiguchi, Steve Capper,
    "Aneesh Kumar K.V", Johannes Weiner, Michal Hocko,
    Jerome Marchand, Sasha Levin, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, "Kirill A. Shutemov"

Prepare khugepaged to see compound pages mapped with pte. For now we
won't collapse a pmd table containing such ptes. khugepaged is subject
to future rework wrt the new refcounting.

Signed-off-by: Kirill A. Shutemov
Tested-by: Sasha Levin
Acked-by: Jerome Marchand
Acked-by: Vlastimil Babka
---
 mm/huge_memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4ad975506c1b..a5423bee0109 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2680,6 +2680,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		page = vm_normal_page(vma, _address, pteval);
 		if (unlikely(!page))
 			goto out_unmap;
+
+		/* TODO: teach khugepaged to collapse THP mapped with pte */
+		if (PageCompound(page))
+			goto out_unmap;
+
 		/*
 		 * Record which node the original page is from and save this
 		 * information to khugepaged_node_load[].
@@ -2690,7 +2695,6 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		if (khugepaged_scan_abort(node))
 			goto out_unmap;
 		khugepaged_node_load[node]++;
-		VM_BUG_ON_PAGE(PageCompound(page), page);
 		if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
 			goto out_unmap;
 		/*
-- 
2.1.4
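
[Editor's note] For readers who don't have the khugepaged scan path in
front of them, the stand-alone C sketch below mirrors the control-flow
change the patch makes: where the old code asserted with VM_BUG_ON_PAGE
that a scanned page is never compound, the new code simply bails out of
the current pmd range when it meets a compound page mapped with ptes.
This is a minimal user-space illustration only, not kernel code; every
type and name in it (fake_page, scan_pmd_range, the boolean fields) is
hypothetical and merely stands in for the kernel's PageCompound(),
PageLRU() and PageAnon() checks.

/*
 * User-space sketch of the pattern: skip (do not collapse) a range that
 * contains a compound page instead of treating it as a bug.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool compound;	/* stands in for PageCompound() */
	bool lru;	/* stands in for PageLRU() */
	bool anon;	/* stands in for PageAnon() */
};

/* Return true if every page in the "pmd range" is collapsible. */
static bool scan_pmd_range(const struct fake_page *pages, int nr)
{
	for (int i = 0; i < nr; i++) {
		const struct fake_page *page = &pages[i];

		/* The patch's change: bail out instead of asserting. */
		if (page->compound)
			goto out_unmap;

		if (!page->lru || !page->anon)
			goto out_unmap;
	}
	return true;

out_unmap:
	return false;
}

int main(void)
{
	struct fake_page plain[2] = { { false, true, true }, { false, true, true } };
	struct fake_page mixed[2] = { { false, true, true }, { true,  true, true } };

	printf("plain ptes collapsible:  %d\n", scan_pmd_range(plain, 2));
	printf("compound page -> skip:   %d\n", scan_pmd_range(mixed, 2));
	return 0;
}

The same early-exit shape is what the real khugepaged_scan_pmd() uses:
collapsing THP-backed pte mappings is deliberately left as a TODO until
khugepaged is reworked for the new refcounting.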