Subject: [PATCH v6 2/5] mm: changes to split_huge_page() to free zero filled tail pages
Date: Wed, 2 Nov 2022 23:01:44 -0700
X-Mailer: git-send-email 2.30.2
From: Yu Zhao

Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set,
there are a large number of transparent hugepages that are almost entirely
zero filled. This has been noted in several previous patchsets, including:

https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
https://lore.kernel.org/all/1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/

split_huge_page() currently has no way to identify zero filled pages within
a THP, so these zero pages get remapped and continue to waste memory. This
patch identifies and frees tail pages that are zero filled in
split_huge_page(). That way we avoid mapping these pages back into page
table entries and can free up the unused memory within THPs. This is based
on the previously mentioned patchset by Yu Zhao.
However, we chose to free anonymous zero tail pages whenever they are
encountered, instead of only on reclaim or migration.

Signed-off-by: Yu Zhao
Signed-off-by: Alexander Zhu
---
 include/linux/vm_event_item.h |  1 +
 mm/huge_memory.c              | 37 +++++++++++++++++++++++++++++++++++
 mm/vmstat.c                   |  1 +
 3 files changed, 39 insertions(+)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 3518dba1e02f..f733ffc5f6f3 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -111,6 +111,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 	THP_SPLIT_PUD,
 #endif
+	THP_SPLIT_FREE,
 	THP_ZERO_PAGE_ALLOC,
 	THP_ZERO_PAGE_ALLOC_FAILED,
 	THP_SWPOUT,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 561a42567477..6a5c70080c07 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2505,6 +2505,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
+	LIST_HEAD(pages_to_free);
+	int nr_pages_to_free = 0;
 	int i;
 
 	/* complete memcg works before add pages to LRU */
@@ -2581,6 +2583,34 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 			continue;
 		unlock_page(subpage);
 
+		/*
+		 * If a tail page has only two references left, one inherited
+		 * from the isolation of its head and the other from
+		 * lru_add_page_tail() which we are about to drop, it means this
+		 * tail page was concurrently zapped. Then we can safely free it
+		 * and save page reclaim or migration the trouble of trying it.
+		 */
+		if (list && page_ref_freeze(subpage, 2)) {
+			VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
+			VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
+			VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
+
+			ClearPageActive(subpage);
+			ClearPageUnevictable(subpage);
+			list_move(&subpage->lru, &pages_to_free);
+			nr_pages_to_free++;
+			continue;
+		}
+
+		/*
+		 * If a tail page has only one reference left, it will be freed
+		 * by the call to free_page_and_swap_cache below. Since zero
+		 * subpages are no longer remapped, there will only be one
+		 * reference left in cases outside of reclaim or migration.
+		 */
+		if (page_ref_count(subpage) == 1)
+			nr_pages_to_free++;
+
 		/*
 		 * Subpages may be freed if there wasn't any mapping
 		 * like if add_to_swap() is running on a lru page that
@@ -2590,6 +2620,13 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		 */
 		free_page_and_swap_cache(subpage);
 	}
+
+	if (!nr_pages_to_free)
+		return;
+
+	mem_cgroup_uncharge_list(&pages_to_free);
+	free_unref_page_list(&pages_to_free);
+	count_vm_events(THP_SPLIT_FREE, nr_pages_to_free);
 }
 
 /* Racy check whether the huge page can be split */
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b2371d745e00..a2ba5d7922f4 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1359,6 +1359,7 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 	"thp_split_pud",
 #endif
+	"thp_split_free",
 	"thp_zero_page_alloc",
 	"thp_zero_page_alloc_failed",
 	"thp_swpout",
-- 
2.30.2