Date: Mon, 16 May 2022 11:53:11 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
    LKML, Linux-MM
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Message-ID: <20220516105311.GL3441@techsingularity.net>
References: <20220512085043.5234-1-mgorman@techsingularity.net>
 <20220512124325.751781bb88ceef5c37ca653e@linux-foundation.org>
 <20220513142330.GI3441@techsingularity.net>
 <20220513123805.41e560392d028c271b36847d@linux-foundation.org>
In-Reply-To: <20220513123805.41e560392d028c271b36847d@linux-foundation.org>

On Fri, May 13, 2022 at 12:38:05PM -0700, Andrew Morton wrote:
> > The sentence can be dropped because it adds little and is potentially
> > confusing. The PCP being safe to access remotely is specific to the
> > context of the CPU being hot-removed and there are other special corner
> > cases like zone_pcp_disable that modifies a per-cpu structure remotely
> > but not in a way that causes corruption.
>
> OK. I pasted in your para from the other email.
> Current 0/n blurb:
>
> Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> latency-sensitive applications that cannot tolerate interference due to
> per-cpu drain work queued by __drain_all_pages(). Introduce a new
> mechanism to remotely drain the per-cpu lists. It is made possible by
> remotely locking 'struct per_cpu_pages' new per-cpu spinlocks. This has
> two advantages: the time to drain is more predictable, and other
> unrelated tasks are not interrupted.
>
> This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> per-cpu lists drain support" -- avoid interference with a high-priority
> task due to a workqueue item draining per-cpu page lists. While many
> workloads can tolerate a brief interruption, it may cause a real-time
> task running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the
> draining is non-deterministic.
>
> Currently an IRQ-safe local_lock protects the page allocator per-cpu
> lists. The local_lock on its own prevents migration, and the IRQ
> disabling protects against corruption due to an interrupt arriving while
> a page allocation is in progress.
>
> This series adjusts the locking. A spinlock is added to struct
> per_cpu_pages to protect the list contents, while local_lock_irq
> continues to prevent migration and IRQ reentry. This allows a remote CPU
> to safely drain a remote per-cpu list.
>
> This is a partial series. Follow-on work should allow the local_irq_save
> to be converted to a local_irq to avoid IRQs being disabled/enabled in
> most cases. Consequently, there are some TODO comments highlighting the
> places that would change if local_irq were used. However, there are
> enough corner cases that it deserves a series of its own, separated by
> one kernel release, and the priority right now is to avoid interference
> with high-priority tasks.

Looks good, thanks!

-- 
Mel Gorman
SUSE Labs
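
[Editorial sketch of the locking split described in the blurb above. This
is not the actual page allocator code from the series; the structure,
field, and function names (pcp_demo, pcp_demo_free_local(),
pcp_demo_drain_remote(), and so on) are made up for illustration. The
idea shown: an IRQ-disabling local_lock guards the local fast path, while
a spinlock inside the per-cpu structure protects the list contents so a
remote CPU can drain without queueing work on the target CPU.]

/*
 * Simplified sketch: the local CPU pins itself and disables IRQs via the
 * local_lock before touching its lists; any CPU, local or remote, must
 * also hold the per-cpu spinlock to modify the list contents.
 */
#include <linux/cpumask.h>
#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct pcp_demo {
	spinlock_t lock;		/* protects 'lists' from any CPU */
	struct list_head lists;
	int count;
};

struct pcp_demo_locks {
	local_lock_t llock;		/* local CPU: migration + IRQ reentry */
};

static DEFINE_PER_CPU(struct pcp_demo, pcp_demo);
static DEFINE_PER_CPU(struct pcp_demo_locks, pcp_demo_locks) = {
	.llock = INIT_LOCAL_LOCK(llock),
};

/* Would be called once at boot, before the lists are used. */
static void pcp_demo_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct pcp_demo *pcp = per_cpu_ptr(&pcp_demo, cpu);

		spin_lock_init(&pcp->lock);
		INIT_LIST_HEAD(&pcp->lists);
		pcp->count = 0;
	}
}

/* Local fast path: pin to this CPU, disable IRQs, then take the list lock. */
static void pcp_demo_free_local(struct list_head *page)
{
	struct pcp_demo *pcp;
	unsigned long flags;

	local_lock_irqsave(&pcp_demo_locks.llock, flags);
	pcp = this_cpu_ptr(&pcp_demo);
	spin_lock(&pcp->lock);
	list_add(page, &pcp->lists);
	pcp->count++;
	spin_unlock(&pcp->lock);
	local_unlock_irqrestore(&pcp_demo_locks.llock, flags);
}

/* Remote drain: only the spinlock is needed, no IPI or workqueue on 'cpu'. */
static void pcp_demo_drain_remote(int cpu, struct list_head *to_free)
{
	struct pcp_demo *pcp = per_cpu_ptr(&pcp_demo, cpu);

	spin_lock(&pcp->lock);
	list_splice_init(&pcp->lists, to_free);
	pcp->count = 0;
	spin_unlock(&pcp->lock);
}

[The remote path never needs to run on the target CPU, which is why the
drain becomes predictable and does not interrupt whatever the NOHZ_FULL
CPU is doing; the local path pays only the cost of an uncontended
spinlock on top of the existing local_lock.]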