From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz,
 Naoya Horiguchi, Davidlohr Bueso, Joonsoo Kim, Michal Hocko,
 "Kirill A . Shutemov", Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 4.19 008/118] hugetlbfs: on restore reserve error path retain subpool reservation
Date: Thu, 13 Jun 2019 10:32:26 +0200
Message-Id: <20190613075644.167993541@linuxfoundation.org>
In-Reply-To: <20190613075643.642092651@linuxfoundation.org>
References: <20190613075643.642092651@linuxfoundation.org>
X-Mailer: git-send-email 2.22.0
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

[ Upstream commit 0919e1b69ab459e06df45d3ba6658d281962db80 ]

When a huge page is allocated, PagePrivate() is set if the allocation
consumed a reservation.  When freeing a huge page, PagePrivate is
checked.  If set, it indicates the reservation should be restored.
PagePrivate being set at free huge page time mostly happens on error
paths.

When huge page reservations are created, a check is made to determine
if the mapping is associated with an explicitly mounted filesystem.
If so, pages are also reserved within the filesystem.
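To make this bookkeeping concrete, here is a minimal standalone C
sketch (a toy model under simplifying assumptions -- the struct and
helper names are illustrative, not the kernel's) of how a reservation
charges the subpool and how freeing on a restore path can unbalance
the count, as the next paragraph describes:

	/* toy_subpool.c -- illustrative model only, not kernel code */
	#include <stdio.h>

	struct toy_subpool {
		long used_hpages;	/* pages charged to the mounted filesystem */
	};

	/* Stand-in for the charge taken when the reservation is created. */
	static void toy_reserve(struct toy_subpool *spool)
	{
		spool->used_hpages++;
	}

	/* Stand-in for hugepage_subpool_put_pages() on the free path. */
	static void toy_put(struct toy_subpool *spool)
	{
		spool->used_hpages--;
	}

	int main(void)
	{
		struct toy_subpool spool = { .used_hpages = 0 };

		toy_reserve(&spool);	/* creating the reservation: count = 1 */

		/*
		 * First allocation consumes the reservation, then hits an
		 * error path and is freed with PagePrivate set (restore the
		 * reservation).  Pre-patch, the free path decremented the
		 * subpool anyway:
		 */
		toy_put(&spool);	/* 0, but the reservation is still outstanding */

		/*
		 * A later allocation re-consumes the restored reservation
		 * (no new charge) and is eventually freed normally:
		 */
		toy_put(&spool);	/* -1: the negative usage shown in df below */

		printf("used_hpages = %ld\n", spool.used_hpages);
		return 0;
	}

With the fix, the first toy_put() would be skipped whenever the
reservation is being restored, so the count stays balanced.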
The default action when freeing a huge page is to decrement the usage
count in any associated explicitly mounted filesystem.  However, if
the reservation is to be restored, the reservation/use count within
the filesystem should not be decremented.  Otherwise, a subsequent
page allocation and free for the same mapping location will cause the
filesystem usage to go 'negative'.

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G -4.0M  4.1G    - /opt/hugepool

To fix, when freeing a huge page, do not adjust filesystem usage if
PagePrivate() is set to indicate the reservation should be restored.

I did not cc stable as the problem has been around since reserves were
added to hugetlbfs and nobody has noticed.

Link: http://lkml.kernel.org/r/20190328234704.27083-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Reviewed-by: Naoya Horiguchi
Cc: Davidlohr Bueso
Cc: Joonsoo Kim
Cc: Michal Hocko
Cc: "Kirill A . Shutemov"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/hugetlb.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0bbb033d7d8c..65179513c2b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1256,12 +1256,23 @@ void free_huge_page(struct page *page)
 	ClearPagePrivate(page);
 
 	/*
-	 * A return code of zero implies that the subpool will be under its
-	 * minimum size if the reservation is not restored after page is free.
-	 * Therefore, force restore_reserve operation.
+	 * If PagePrivate() was set on page, page allocation consumed a
+	 * reservation.  If the page was associated with a subpool, there
+	 * would have been a page reserved in the subpool before allocation
+	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
+	 * reservation, do not call hugepage_subpool_put_pages() as this will
+	 * remove the reserved page from the subpool.
 	 */
-	if (hugepage_subpool_put_pages(spool, 1) == 0)
-		restore_reserve = true;
+	if (!restore_reserve) {
+		/*
+		 * A return code of zero implies that the subpool will be
+		 * under its minimum size if the reservation is not restored
+		 * after page is free.  Therefore, force restore_reserve
+		 * operation.
+		 */
+		if (hugepage_subpool_put_pages(spool, 1) == 0)
+			restore_reserve = true;
+	}
 
 	spin_lock(&hugetlb_lock);
 	clear_page_huge_active(page);
-- 
2.20.1
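For reference, once the hunk above is applied, the affected stretch of
free_huge_page() reads roughly as follows (a condensed sketch
assembled from the diff; the PagePrivate() read just above the hunk is
assumed from context, and declarations and the rest of the function
are elided):

	restore_reserve = PagePrivate(page);
	ClearPagePrivate(page);

	/*
	 * If PagePrivate() was set, the allocation consumed a reservation,
	 * and if the page came from a subpool, a page is already reserved
	 * there; calling hugepage_subpool_put_pages() would remove it.
	 */
	if (!restore_reserve) {
		/*
		 * A zero return means the subpool would fall below its
		 * minimum size unless the reservation is restored, so
		 * force the restore.
		 */
		if (hugepage_subpool_put_pages(spool, 1) == 0)
			restore_reserve = true;
	}

	spin_lock(&hugetlb_lock);
	clear_page_huge_active(page);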