From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail-pa0-f72.google.com (mail-pa0-f72.google.com [209.85.220.72])
	by kanga.kvack.org (Postfix) with ESMTP id 873326B0283
	for ; Fri, 28 Oct 2016 01:56:42 -0400 (EDT)
Received: by mail-pa0-f72.google.com with SMTP id r13so37504195pag.1
	for ; Thu, 27 Oct 2016 22:56:42 -0700 (PDT)
Received: from mga11.intel.com (mga11.intel.com. [192.55.52.93])
	by mx.google.com with ESMTPS id ub3si9907965pab.52.2016.10.27.22.56.41
	for (version=TLS1 cipher=AES128-SHA bits=128/128);
	Thu, 27 Oct 2016 22:56:41 -0700 (PDT)
From: "Huang, Ying"
Subject: [PATCH -v4 RESEND 7/9] mm, THP: Add can_split_huge_page()
Date: Fri, 28 Oct 2016 13:56:06 +0800
Message-Id: <20161028055608.1736-8-ying.huang@intel.com>
In-Reply-To: <20161028055608.1736-1-ying.huang@intel.com>
References: <20161028055608.1736-1-ying.huang@intel.com>
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Andrew Morton
Cc: tim.c.chen@intel.com, dave.hansen@intel.com, andi.kleen@intel.com,
	aaron.lu@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Huang Ying, Andrea Arcangeli, "Kirill A . Shutemov", Ebru Akagunduz

From: Huang Ying

Separate the check for whether a huge page can be split out of
split_huge_page_to_list() into its own function.  This makes it
possible to check splittability before actually splitting the THP
(Transparent Huge Page).

The new function will be used to delay splitting THP during swap out:
for a THP, we will allocate a swap cluster and add the THP into the
swap cache before splitting it.  Checking splittability first avoids
doing that work for a THP that cannot be split anyway.

There is no functional change in this patch.

Cc: Andrea Arcangeli
Cc: Kirill A. Shutemov
Cc: Ebru Akagunduz
Signed-off-by: "Huang, Ying"
---
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 13 ++++++++++++-
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9b9f65d..14ffa3f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -94,6 +94,7 @@ extern unsigned long thp_get_unmapped_area(struct file *filp,
 extern void prep_transhuge_page(struct page *page);
 extern void free_transhuge_page(struct page *page);
 
+bool can_split_huge_page(struct page *page);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
 {
@@ -176,6 +177,12 @@ static inline void prep_transhuge_page(struct page *page) {}
 
 #define thp_get_unmapped_area	NULL
 
+static inline bool
+can_split_huge_page(struct page *page)
+{
+	BUILD_BUG();
+	return false;
+}
 static inline int
 split_huge_page_to_list(struct page *page, struct list_head *list)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cdcd25c..326b145 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2013,6 +2013,17 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
 	return ret;
 }
 
+/* Racy check whether the huge page can be split */
+bool can_split_huge_page(struct page *page)
+{
+	int extra_pins = 0;
+
+	/* Additional pins from radix tree */
+	if (!PageAnon(page))
+		extra_pins = HPAGE_PMD_NR;
+	return total_mapcount(page) == page_count(page) - extra_pins - 1;
+}
+
 /*
  * This function splits huge page into normal pages.  @page can point to any
  * subpage of huge page to split. Split doesn't change the position of @page.
@@ -2083,7 +2094,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	 * Racy check if we can split the page, before freeze_page() will
 	 * split PMDs
 	 */
-	if (total_mapcount(head) != page_count(head) - extra_pins - 1) {
+	if (!can_split_huge_page(head)) {
 		ret = -EBUSY;
 		goto out_unlock;
 	}
-- 
2.9.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
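
[Editor's illustration, not part of the patch or the original mail]

For readers following the changelog, below is a minimal sketch of the call
pattern this helper is meant to enable during swap out: check splittability
first, then allocate a swap cluster, add the THP to the swap cache, and only
then split it.  Only can_split_huge_page() and split_huge_page() come from
this series; thp_swap_out_sketch(), alloc_swap_cluster_for_thp(),
add_thp_to_swap_cache() and free_swap_cluster_for_thp() are hypothetical
placeholder names used purely to illustrate the ordering.

/*
 * Sketch of the intended swap-out flow (NOT part of this patch).
 * The *_for_thp()/add_thp_* helpers are hypothetical placeholders.
 */
static int thp_swap_out_sketch(struct page *page)
{
	/* Cheap, racy check: skip all further work for an un-splittable THP */
	if (!can_split_huge_page(page))
		return -EBUSY;

	if (alloc_swap_cluster_for_thp(page))		/* hypothetical */
		return -ENOMEM;

	if (add_thp_to_swap_cache(page)) {		/* hypothetical */
		free_swap_cluster_for_thp(page);	/* hypothetical */
		return -ENOMEM;
	}

	/*
	 * The early check is racy, so split_huge_page() can still fail;
	 * a real caller must be prepared to back out here as well.
	 */
	return split_huge_page(page);
}

The point of the early check is simply that the relatively expensive swap
cluster allocation and swap cache insertion are skipped entirely when the
page has extra references and cannot be split.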