From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz,
	Naoya Horiguchi, Davidlohr Bueso, Joonsoo Kim, Michal Hocko,
	"Kirill A . Shutemov", Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 4.4 04/84] hugetlbfs: on restore reserve error path retain subpool reservation
Date: Thu, 20 Jun 2019 19:56:01 +0200
Message-Id: <20190620174338.192463610@linuxfoundation.org>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190620174337.538228162@linuxfoundation.org>
References: <20190620174337.538228162@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

[ Upstream commit 0919e1b69ab459e06df45d3ba6658d281962db80 ]

When a huge page is allocated, PagePrivate() is set if the allocation
consumed a reservation.  When freeing a huge page, PagePrivate is
checked.  If set, it indicates the reservation should be restored.
PagePrivate being set at free huge page time mostly happens on error
paths.

When huge page reservations are created, a check is made to determine
if the mapping is associated with an explicitly mounted filesystem.
If so, pages are also reserved within the filesystem.
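To make the two levels of accounting concrete, below is a small,
self-contained userspace sketch, not kernel code: struct subpool,
create_reservation() and free_hpage() are illustrative stand-ins,
page_private plays the role of PagePrivate(), and the decrement models
hugepage_subpool_put_pages().  It also previews the problem described
in the next paragraph by printing the subpool usage with and without
the behaviour this patch introduces.

/*
 * Toy userspace model of the accounting described above -- not kernel
 * code.  used_hpages stands in for the subpool usage reported by df
 * for an explicitly mounted hugetlbfs; page_private plays the role of
 * PagePrivate().  All names here are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct subpool {
	long used_hpages;		/* the "Used" column in df */
};

/* Reservation creation (e.g. at mmap time): the subpool is charged up front. */
static void create_reservation(struct subpool *spool)
{
	spool->used_hpages++;
}

/*
 * Free path.  'patched' selects the behaviour this commit introduces:
 * skip the subpool decrement when the reservation is being restored,
 * because the subpool charge still belongs to the restored reservation.
 */
static void free_hpage(struct subpool *spool, bool page_private, bool patched)
{
	bool restore_reserve = page_private;

	if (!(patched && restore_reserve))
		spool->used_hpages--;	/* models hugepage_subpool_put_pages() */
}

int main(void)
{
	for (int patched = 0; patched <= 1; patched++) {
		struct subpool sp = { .used_hpages = 0 };

		create_reservation(&sp);

		/*
		 * Error path: the page consumed the reservation (PagePrivate
		 * still set at free time), so the reservation is restored.
		 */
		free_hpage(&sp, true, patched);

		/*
		 * Later allocation and normal free for the same mapping
		 * location; this time the reservation is consumed for good.
		 */
		free_hpage(&sp, false, patched);

		printf("%s: subpool used_hpages = %ld\n",
		       patched ? "patched" : "unpatched", sp.used_hpages);
	}
	return 0;
}

Compiled with any C compiler, the "unpatched" line ends at -1 pages,
which corresponds to the negative 'Used' value in the df output quoted
below.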
The default action when freeing a huge page is to decrement the usage
count in any associated explicitly mounted filesystem.  However, if the
reservation is to be restored, the reservation/use count within the
filesystem should not be decremented.  Otherwise, a subsequent page
allocation and free for the same mapping location will cause the
filesystem usage to go 'negative'.

Filesystem                         Size  Used Avail Use% Mounted on
nodev                              4.0G -4.0M  4.1G    - /opt/hugepool

To fix, when freeing a huge page do not adjust filesystem usage if
PagePrivate() is set to indicate the reservation should be restored.

I did not cc stable as the problem has been around since reserves were
added to hugetlbfs and nobody has noticed.

Link: http://lkml.kernel.org/r/20190328234704.27083-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Reviewed-by: Naoya Horiguchi
Cc: Davidlohr Bueso
Cc: Joonsoo Kim
Cc: Michal Hocko
Cc: "Kirill A . Shutemov"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/hugetlb.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d7f65a8c629b..fd932e7a25dd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1221,12 +1221,23 @@ void free_huge_page(struct page *page)
 	ClearPagePrivate(page);
 
 	/*
-	 * A return code of zero implies that the subpool will be under its
-	 * minimum size if the reservation is not restored after page is free.
-	 * Therefore, force restore_reserve operation.
+	 * If PagePrivate() was set on page, page allocation consumed a
+	 * reservation.  If the page was associated with a subpool, there
+	 * would have been a page reserved in the subpool before allocation
+	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
+	 * reservtion, do not call hugepage_subpool_put_pages() as this will
+	 * remove the reserved page from the subpool.
 	 */
-	if (hugepage_subpool_put_pages(spool, 1) == 0)
-		restore_reserve = true;
+	if (!restore_reserve) {
+		/*
+		 * A return code of zero implies that the subpool will be
+		 * under its minimum size if the reservation is not restored
+		 * after page is free.  Therefore, force restore_reserve
+		 * operation.
+		 */
+		if (hugepage_subpool_put_pages(spool, 1) == 0)
+			restore_reserve = true;
+	}
 
 	spin_lock(&hugetlb_lock);
 	clear_page_huge_active(page);
-- 
2.20.1