Date: Wed, 11 Oct 2023 13:46:10 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Huang Ying <ying.huang@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Arjan Van De Ven, Andrew Morton, Vlastimil Babka,
	David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko,
	Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: Re: [PATCH 01/10] mm, pcp: avoid to drain PCP when process exit
Message-ID: <20231011124610.4punxroovolyvmgr@techsingularity.net>
References: <20230920061856.257597-1-ying.huang@intel.com>
 <20230920061856.257597-2-ying.huang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <20230920061856.257597-2-ying.huang@intel.com>

On Wed, Sep 20, 2023 at 02:18:47PM +0800, Huang Ying wrote:
> In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
> pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
> when the PCP is mostly being used to free high-order pages, in order
> to improve the reuse of cache-hot pages between the page-allocating
> and page-freeing CPUs.
>
> However, the PCP draining mechanism may be triggered unexpectedly when
> a process exits. With a customized tracepoint, it was found that PCP
> draining (free_high == true) was triggered by an order-1 page free
> with the following call stack:
>
>  => free_unref_page_commit
>  => free_unref_page
>  => __mmdrop
>  => exit_mm
>  => do_exit
>  => do_group_exit
>  => __x64_sys_exit_group
>  => do_syscall_64
>
> Checking the source code, this is the page table PGD free
> (mm_free_pgd()), which is an order-1 page free if
> CONFIG_PAGE_TABLE_ISOLATION=y, a common configuration for security.
>
> Just before that, page freeing with the following call stack was
> observed:
>
>  => free_unref_page_commit
>  => free_unref_page_list
>  => release_pages
>  => tlb_batch_pages_flush
>  => tlb_finish_mmu
>  => exit_mmap
>  => __mmput
>  => exit_mm
>  => do_exit
>  => do_group_exit
>  => __x64_sys_exit_group
>  => do_syscall_64
>
> So, when a process exits:
>
> - a large number of the process's user pages are freed without any
>   page allocation; it is highly likely that pcp->free_factor becomes
>   > 0.
>
> - after all user pages are freed, the PGD is freed, which is an
>   order-1 page free, so the PCP is drained.
>
> All in all, when a process exits, it is highly likely that the PCP
> will be drained. This is unexpected behavior.
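
To make the pre-patch trigger concrete, here is a rough, self-contained
userspace sketch of the behaviour described above. The struct, function
and constant names (sim_pcp, sim_free_page, SIM_COSTLY_ORDER) are made
up for illustration and this is not the real mm/page_alloc.c logic: a
burst of order-0 frees at exit_mmap() time raises free_factor, and the
single order-1 PGD free then flags the whole PCP for draining.

/*
 * Rough simulation of the pre-patch free_high trigger.
 * Simplified and illustrative only; not the real mm/page_alloc.c.
 */
#include <stdbool.h>
#include <stdio.h>

#define SIM_COSTLY_ORDER 3	/* stands in for PAGE_ALLOC_COSTLY_ORDER */

struct sim_pcp {
	int free_factor;	/* grows while frees outpace allocations */
	int count;		/* pages currently held on the PCP */
};

/* Pre-patch rule: any high-order free while free_factor > 0 drains. */
static bool sim_free_page(struct sim_pcp *pcp, unsigned int order)
{
	pcp->count += 1 << order;
	pcp->free_factor++;	/* crude stand-in for the real scaling */
	return pcp->free_factor && order && order <= SIM_COSTLY_ORDER;
}

int main(void)
{
	struct sim_pcp pcp = { 0, 0 };

	/* exit_mmap(): a burst of order-0 user-page frees, no drain */
	for (int i = 0; i < 1000; i++)
		sim_free_page(&pcp, 0);

	/* __mmdrop() -> mm_free_pgd(): one order-1 free with PTI enabled */
	if (sim_free_page(&pcp, 1))
		printf("free_high: drain all %d PCP pages at exit\n", pcp.count);

	return 0;
}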
>
> To avoid this, with this patch, PCP draining is only triggered after
> 2 consecutive high-order page frees.
>
> On a 2-socket Intel server with 224 logical CPUs, we tested kbuild on
> one socket with `make -j 112`. With the patch, the build time
> decreases by 3.4% (from 206s to 199s). The cycles% of the spinlock
> contention (mostly for zone lock) decreases from 43.6% to 40.3% (with
> PCP size == 361). The number of PCP drains for high-order page
> freeing (free_high) decreases by 50.8%.
>
> This helps network workloads too, through reduced zone lock
> contention. On a 2-socket Intel server with 128 logical CPUs, with
> the patch, the network bandwidth of the UNIX (AF_UNIX) test case of
> the lmbench test suite with 16 pairs of processes increases by 17.1%.
> The cycles% of the spinlock contention (mostly for zone lock)
> decreases from 50.0% to 45.8%. The number of PCP drains for
> high-order page freeing (free_high) decreases by 27.4%. The cache
> miss rate stays at 0.3%.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

However, I want to note that batching on exit is not necessarily
unexpected. For processes that are multi-TB in size, the time to exit
can actually be quite large and batching is of benefit, but optimising
for exit is rarely a winning strategy. The pattern of "all allocs on
CPU A and all frees on CPU B" or "short-lived tasks triggering a
premature drain" is a bit more compelling, but not worth a changelog
rewrite.

> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 4106fbc5b4b3..64d5ed2bb724 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -676,12 +676,15 @@ enum zone_watermarks {
>  #define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
>  #define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
>
> +#define PCPF_PREV_FREE_HIGH_ORDER	0x01
> +

The meaning of the flag and its intent should have been documented.

-- 
Mel Gorman
SUSE Labs