Date: Fri, 28 May 2021 13:53:35 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: Andrew Morton, Hillf Danton, Dave Hansen, Michal Hocko, LKML, Linux-MM
Subject: Re: [PATCH 6/6] mm/page_alloc: Introduce vm.percpu_pagelist_high_fraction
Message-ID: <20210528125334.GP30378@techsingularity.net>
References: <20210525080119.5455-1-mgorman@techsingularity.net>
 <20210525080119.5455-7-mgorman@techsingularity.net>
 <018c4b99-81a5-bc12-03cd-662a938ef05a@suse.cz>
In-Reply-To: <018c4b99-81a5-bc12-03cd-662a938ef05a@suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri, May 28, 2021 at 01:59:37PM +0200, Vlastimil Babka wrote:
> On 5/25/21 10:01 AM, Mel Gorman wrote:
> > This introduces a new sysctl vm.percpu_pagelist_high_fraction. It is
> > similar to the old vm.percpu_pagelist_fraction. The old sysctl increased
> > both pcp->batch and pcp->high with the higher pcp->high potentially
> > reducing zone->lock contention. However, the higher pcp->batch value also
> > potentially increased allocation latency while the PCP was refilled.
> > This sysctl only adjusts pcp->high so that zone->lock contention is
> > potentially reduced but allocation latency during a PCP refill remains
> > the same.
> >
> > # grep -E "high:|batch" /proc/zoneinfo | tail -2
> >     high:  649
> >     batch: 63
> >
> > # sysctl vm.percpu_pagelist_high_fraction=8
> > # grep -E "high:|batch" /proc/zoneinfo | tail -2
> >     high:  35071
> >     batch: 63
> >
> > # sysctl vm.percpu_pagelist_high_fraction=64
> >     high:  4383
> >     batch: 63
> >
> > # sysctl vm.percpu_pagelist_high_fraction=0
> >     high:  649
> >     batch: 63
> >
> > Signed-off-by: Mel Gorman
> > Acked-by: Dave Hansen
>
> Acked-by: Vlastimil Babka
> Thanks.
> Documentation nit below:
>
> > @@ -789,6 +790,25 @@ panic_on_oom=2+kdump gives you very strong tool to investigate
> >  why oom happens. You can get snapshot.
> >
> >
> > +percpu_pagelist_high_fraction
> > +=============================
> > +
> > +This is the fraction of pages in each zone that are allocated for each
> > +per cpu page list. The min value for this is 8. It means that we do
> > +not allow more than 1/8th of pages in each zone to be allocated in any
> > +single per_cpu_pagelist.
>
> This, while technically correct (as an upper limit) is somewhat misleading as
> the limit for a single per_cpu_pagelist also considers the number of local cpus.
>
> > This entry only changes the value of hot per
> > +cpu pagelists. User can specify a number like 100 to allocate 1/100th
> > +of each zone to each per cpu page list.
>
> This is worse. Anyone trying to reproduce this example on a system with multiple
> cpus per node and checking the result will be puzzled.
> So I think the part about number of local cpus should be mentioned to avoid
> confusion.
>
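The point about local CPUs is fair. To illustrate the intent, here is a
simplified sketch of how the fraction relates to the per-CPU high mark --
the helper name and its parameters below are purely illustrative, not the
kernel's actual implementation, which lives in mm/page_alloc.c and differs
in detail. The fraction caps the number of pages the zone as a whole may
keep on PCP lists, and that cap is then split between the CPUs local to
the zone's node:

/*
 * Hypothetical helper, for illustration only: shows roughly how a
 * vm.percpu_pagelist_high_fraction value translates into a per-CPU
 * "high" limit for a zone.
 */
static unsigned long pcp_high_sketch(unsigned long managed_pages,
				     unsigned int high_fraction,
				     unsigned int nr_local_cpus)
{
	unsigned long zone_cap;

	/* A fraction of zero means the default watermark-based sizing applies */
	if (!high_fraction)
		return 0;

	/* Upper bound for the zone as a whole, e.g. 1/8th for a value of 8 */
	zone_cap = managed_pages / high_fraction;

	/* That zone-wide cap is divided between the CPUs local to the zone */
	return zone_cap / (nr_local_cpus ? nr_local_cpus : 1);
}

So with a fraction of 8 on a node with several CPUs, each individual PCP
list ends up well under 1/8th of the zone, which is why wording that only
talks about a single per_cpu_pagelist is misleading.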
Is this any better?

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index e85c2f21d209..2da25735a629 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -793,15 +793,16 @@ why oom happens. You can get snapshot.
 percpu_pagelist_high_fraction
 =============================
 
-This is the fraction of pages in each zone that are allocated for each
-per cpu page list. The min value for this is 8. It means that we do
-not allow more than 1/8th of pages in each zone to be allocated in any
-single per_cpu_pagelist. This entry only changes the value of hot per
-cpu pagelists. User can specify a number like 100 to allocate 1/100th
-of each zone to each per cpu page list.
-
-The batch value of each per cpu pagelist remains the same regardless of the
-value of the high fraction so allocation latencies are unaffected.
+This is the fraction of pages in each zone that can be stored to
+per-cpu page lists. It is an upper boundary that is divided depending
+on the number of online CPUs. The min value for this is 8 which means
+that we do not allow more than 1/8th of pages in each zone to be stored
+on per-cpu page lists. This entry only changes the value of hot per-cpu
+page lists. A user can specify a number like 100 to allocate 1/100th of
+each zone between per-cpu lists.
+
+The batch value of each per-cpu page list remains the same regardless of
+the value of the high fraction so allocation latencies are unaffected.
 
 The initial value is zero. Kernel uses this value to set the high pcp->high
 mark based on the low watermark for the zone and the number of local

-- 
Mel Gorman
SUSE Labs