From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 12 May 2022 12:37:43 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Mel Gorman
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM
Subject: Re: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists
Message-Id: <20220512123743.5be26b3ad4413f20d5f46564@linux-foundation.org>
In-Reply-To: <20220512085043.5234-7-mgorman@techsingularity.net>
References: <20220512085043.5234-1-mgorman@techsingularity.net> <20220512085043.5234-7-mgorman@techsingularity.net>

On Thu, 12 May 2022 09:50:43 +0100 Mel Gorman wrote:

> From: Nicolas Saenz Julienne
>
> Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> drain work queued by __drain_all_pages(). So introduce a new mechanism to
> remotely drain the per-cpu lists. It is made possible by remotely taking
> the new per-cpu spinlocks in 'struct per_cpu_pages'. A benefit of this
> new scheme is that drain operations are now migration safe.
>
> There was no observed performance degradation vs. the previous scheme.
> Both netperf and hackbench were run in parallel while triggering the
> __drain_all_pages(NULL, true) code path around ~100 times per second.
> The new scheme performs a bit better (~5%), although the important point
> here is that there are no performance regressions vs. the previous
> mechanism. Per-cpu list draining happens only in slow paths.
>
> Minchan Kim tested this independently and reported:
>
>	My workload does not run on NOHZ CPUs, but it runs apps under heavy
>	memory pressure, so they go into direct reclaim and get stuck in
>	drain_all_pages() until the work on the workqueue runs.
>
>	unit: nanosecond
>	max(dur)	avg(dur)		count(dur)
>	166713013	487511.77786438033	1283
>
>	From the traces, the system hit drain_all_pages() 1283 times; the
>	worst case was 166ms and the average was 487us.
>
>	The other problem was alloc_contig_range() in CMA. The PCP draining
>	sometimes takes several hundred milliseconds even though there is no
>	memory pressure and only a few pages need to be migrated out, because
>	the CPUs are fully booked.
>
>	Your patch completely removed that wasted time.

I'm not getting a sense here of the overall effect upon userspace
performance.  As Thomas said last year in
https://lkml.kernel.org/r/87v92sgt3n.ffs@tglx :

: The changelogs and the cover letter have a distinct void vs. that which
: means this is just another example of 'scratch my itch' changes w/o
: proper justification.

Is there more to all of this than itchiness and if so, well, you know
the rest ;)