From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:47618 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752520AbcJ2Mv0 (ORCPT );
	Sat, 29 Oct 2016 08:51:26 -0400
Subject: Patch "mm/hugetlb: improve locking in dissolve_free_huge_pages()"
	has been added to the 4.8-stable tree
To: gerald.schaefer@de.ibm.com, akpm@linux-foundation.org,
	aneesh.kumar@linux.vnet.ibm.com, dave.hansen@linux.intel.com,
	gregkh@linuxfoundation.org, heiko.carstens@de.ibm.com,
	kirill.shutemov@linux.intel.com, mhocko@suse.com,
	mike.kravetz@oracle.com, n-horiguchi@ah.jp.nec.com,
	rui.teng@linux.vnet.ibm.com, schwidefsky@de.ibm.com,
	torvalds@linux-foundation.org, vbabka@suse.cz
Cc: 
From: 
Date: Sat, 29 Oct 2016 08:51:33 -0400
Message-ID: <1477745493206112@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID: 

This is a note to let you know that I've just added the patch titled

    mm/hugetlb: improve locking in dissolve_free_huge_pages()

to the 4.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch
and it can be found in the queue-4.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>>From eb03aa008561004257900983193d024e57abdd96 Mon Sep 17 00:00:00 2001
From: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Date: Fri, 7 Oct 2016 17:01:13 -0700
Subject: mm/hugetlb: improve locking in dissolve_free_huge_pages()

From: Gerald Schaefer <gerald.schaefer@de.ibm.com>

commit eb03aa008561004257900983193d024e57abdd96 upstream.

For every pfn aligned to minimum_order, dissolve_free_huge_pages() will
call dissolve_free_huge_page() which takes the hugetlb spinlock, even if
the page is not huge at all or a hugepage that is in-use.

Improve this by doing the PageHuge() and page_count() checks already in
dissolve_free_huge_pages() before calling dissolve_free_huge_page().  In
dissolve_free_huge_page(), when holding the spinlock, those checks need
to be revalidated.

Link: http://lkml.kernel.org/r/20160926172811.94033-4-gerald.schaefer@de.ibm.com
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rui Teng <rui.teng@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/hugetlb.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1476,14 +1476,20 @@ out:
 int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
+	struct page *page;
 	int rc = 0;
 
 	if (!hugepages_supported())
 		return rc;
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
-		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
-			break;
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+		page = pfn_to_page(pfn);
+		if (PageHuge(page) && !page_count(page)) {
+			rc = dissolve_free_huge_page(page);
+			if (rc)
+				break;
+		}
+	}
 
 	return rc;
 }


Patches currently in stable-queue which might be from gerald.schaefer@de.ibm.com are

queue-4.8/mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch
queue-4.8/mm-hugetlb-check-for-reserved-hugepages-during-memory-offline.patch
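
For anyone reviewing the backport, the following is a minimal, illustrative sketch
(not the actual mm/hugetlb.c function body) of the check-then-revalidate pattern the
commit message describes: the caller filters on PageHuge()/page_count() without the
lock, and the callee repeats the same checks once hugetlb_lock is held, because the
page may have changed state in between. The function name below is hypothetical.

/*
 * Illustrative sketch only, under the assumption of the 4.8-era locking
 * scheme (a single global hugetlb_lock spinlock protecting hstate data).
 */
static int dissolve_free_huge_page_sketch(struct page *page)
{
	int rc = 0;

	spin_lock(&hugetlb_lock);
	/*
	 * Revalidate: the unlocked PageHuge()/page_count() checks done by
	 * the caller are only an optimization to skip the lock; the page
	 * may have been allocated or freed since then.
	 */
	if (PageHuge(page) && !page_count(page)) {
		/* ... remove the free huge page from its hstate's free list ... */
	}
	spin_unlock(&hugetlb_lock);
	return rc;
}

The caller-side change in the patch above is what avoids taking hugetlb_lock for
every minimum_order-aligned pfn that is not a free huge page.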