From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 9 Apr 2026 19:03:41 +0100
From: Lorenzo Stoakes
To: Hugh Dickins
Cc: John Hubbard, Joseph Salisbury, Andrew Morton, David Hildenbrand,
	Chris Li, Kairui Song, Jason Gunthorpe, Peter Xu, Kemeng Shi,
	Nhat Pham, Baoquan He, Barry Song, linux-mm@kvack.org, LKML
Subject: Re: [RFC] mm: stress-ng --mremap triggers severe lruvec lock
 contention in populate/unmap paths
References: <4a4f5b48-8a1d-48f8-8760-0f5d43b5d483@nvidia.com>
 <982e5964-5ea6-eaf7-a11a-0692f14a6943@google.com>
In-Reply-To: <982e5964-5ea6-eaf7-a11a-0692f14a6943@google.com>

On Tue, Apr 07, 2026 at 05:35:18PM -0700, Hugh Dickins wrote:
> On Tue, 7 Apr 2026, John Hubbard wrote:
> > On 4/7/26 1:09 PM, Joseph Salisbury wrote:
> > > Hello,
> > >
> > > I would like to ask for feedback on an MM performance issue triggered by
> > > stress-ng's mremap stressor:
> > >
> > > stress-ng --mremap 8192 --mremap-bytes 4K --timeout 30 --metrics-brief
> > >
> > > This was first investigated as a possible regression from 0ca0c24e3211
> > > ("mm: store zero pages to be swapped out in a bitmap"), but the current
> > > evidence suggests that commit is mostly exposing an older problem for
> > > this workload rather than directly causing it.
> > >
> >
> > Can you try this out? (Adding Hugh to Cc.)
> >
> > From: John Hubbard
> > Date: Tue, 7 Apr 2026 15:33:47 -0700
> > Subject: [PATCH] mm/gup: skip lru_add_drain() for non-locked populate
> > X-NVConfidentiality: public
> > Cc: John Hubbard
> >
> > populate_vma_page_range() calls lru_add_drain() unconditionally after
> > __get_user_pages().
> > With high-frequency single-page MAP_POPULATE/munmap
> > cycles at high thread counts, this forces a lruvec->lru_lock acquire
> > per page, defeating per-CPU folio_batch batching.
> >
> > The drain was added by commit ece369c7e104 ("mm/munlock: add
> > lru_add_drain() to fix memcg_stat_test") for VM_LOCKED populate, where
> > unevictable page stats must be accurate after faulting. Non-locked VMAs
> > have no such requirement. Skip the drain for them.
> >
> > Cc: Hugh Dickins
> > Signed-off-by: John Hubbard
>
> Thanks for the Cc. I'm not convinced that we should be making such a
> change, just to avoid the stress that an avowed stresstest is showing;
> but can let others debate that - and, need it be said, I have no
> problem with Joseph trying your patch.

Yeah, the test case (as said by others also) is rather synthetic, and it's
a test designed to saturate: if we're not I/O-throttled by swap then we
hammer the populate path.

It feels like a micro-optimisation for something that is not (at least not
yet demonstrated to be) an actual problem. stress-ng is not a benchmarking
tool per se, it's designed to eke out bugs.

So really we need to see a real-world case I think.

> I tend to stand by my comment in that commit, that it's not just for
> VM_LOCKED: I believe it's in everyone's interest that a bulk faulting
> interface like populate_vma_page_range() or faultin_vma_page_range()
> should drain its local pagevecs at the end, to save others sometimes
> needing the much more expensive lru_add_drain_all().

I mean yeah, but I guess anywhere that _really_ needs to be sure of the
drain has to do an lru_add_drain_all(), because it'd be fragile to rely on
lru_add_drain()'s being done at the right time?

> But lru_add_drain() and lru_add_drain_all(): there's so much to be
> said and agonized over there. They've distressed me for years, and
> are a hot topic for us at present. But I won't be able to contribute
> more on that subject, not this week.
Yeah, they do feel rather delicate... :) Sometimes you _really do_ need to
know everything's drained, but other times it feels a bit whack-a-mole.

I also do agree it makes sense to drain locally after a batch operation.

It all comes down to whether this manifests in a real-world case, at which
point maybe this is a more useful change?

> Hugh
>
> > ---
> >  mm/gup.c | 13 ++++++++++++-
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 8e7dc2c6ee73..2dd5de1cb5b9 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1816,6 +1816,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
> >  	struct mm_struct *mm = vma->vm_mm;
> >  	unsigned long nr_pages = (end - start) / PAGE_SIZE;
> >  	int local_locked = 1;
> > +	bool need_drain;
> >  	int gup_flags;
> >  	long ret;
> >
> > @@ -1857,9 +1858,19 @@ long populate_vma_page_range(struct vm_area_struct *vma,
> >  	 * We made sure addr is within a VMA, so the following will
> >  	 * not result in a stack expansion that recurses back here.
> >  	 */
> > +	/*
> > +	 * Read VM_LOCKED before __get_user_pages(), which may drop
> > +	 * mmap_lock when FOLL_UNLOCKABLE is set, after which the vma
> > +	 * must not be accessed. The read is stable: mmap_lock is held
> > +	 * for read here, so mlock() (which needs the write lock)
> > +	 * cannot change VM_LOCKED concurrently.
> > +	 */

BTW, not to nitpick (OK, maybe to nitpick :) this comment feels a bit
redundant.

Maybe useful to note that the lock might be dropped (but you don't indicate
why it's OK to still assume state about the VMA), and it's a known thing
that you need a VMA write lock to alter flags - if we had to comment this
each time, mm would be mostly comments :)

So if you want a comment here I'd say something like 'the lock might be
dropped due to FOLL_UNLOCKABLE, but that's ok, we would simply end up doing
a redundant drain in this case'.

But I'm not sure it's needed?
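That is, if a comment survives at all, I'd keep it to something short like
(wording mine, obviously adjust as you see fit):

```c
	/*
	 * mmap_lock might be dropped due to FOLL_UNLOCKABLE, but that's
	 * OK - worst case we just do a redundant drain.
	 */
```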
> > +	need_drain = vma->vm_flags & VM_LOCKED;

Please use the new VMA flag interface :)

	need_drain = vma_test(VMA_LOCKED_BIT);

> > +
> >  	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
> >  			       NULL, locked ? locked : &local_locked);
> > -	lru_add_drain();
> > +	if (need_drain)
> > +		lru_add_drain();
> >  	return ret;
> >  }
> >
> >
> > base-commit: 3036cd0d3328220a1858b1ab390be8b562774e8a
> > --
> > 2.53.0
> >
> >
> > thanks,
> > --
> > John Hubbard
>

Cheers, Lorenzo