From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 9 Apr 2026 19:24:35 +0100
From: Lorenzo Stoakes
To: "David Hildenbrand (Arm)"
Cc: Pedro Falcato, Joseph Salisbury, Andrew Morton, Chris Li, Kairui Song,
	Jason Gunthorpe, John Hubbard, Peter Xu, Kemeng Shi, Nhat Pham,
	Baoquan He, Barry Song, linux-mm@kvack.org, LKML
Subject: Re: [RFC] mm: stress-ng --mremap triggers severe lruvec lock
 contention in populate/unmap paths
Message-ID:
References: <639f20f3-9e65-4117-af9b-e37af0829847@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <639f20f3-9e65-4117-af9b-e37af0829847@kernel.org>

On Wed, Apr 08, 2026 at 10:09:23AM +0200, David Hildenbrand (Arm) wrote:
> >>
> >> It was also found that adding '--mremap-numa' changes the behavior
> >> substantially:
> >
> > "assign memory mapped pages to randomly selected NUMA nodes. This is
> > disabled for systems that do not support NUMA."
> >
> > so this is just sharding your lock contention across your NUMA nodes (you
> > have an lruvec per node).
> >
> >>
> >> stress-ng --mremap 8192 --mremap-bytes 4K --timeout 30 --mremap-numa
> >> --metrics-brief
> >>
> >> mremap 2570798 29.39 8.06 106.23 87466.50 22494.74
> >>
> >> So it's possible that either actual swapping, or the mbind(...,
> >> MPOL_MF_MOVE) path used by '--mremap-numa', removes most of the excessive
> >> system time.
> >>
> >> Does this look like a known MM scalability issue around short-lived
> >> MAP_POPULATE / munmap churn?
> >
> > Yes. Is this an actual issue on some workload?
>
> Same thought, it's unclear to me why we should care here. In particular,
> when talking about excessive use of zero-filled pages.
Yup, I fear that this might also be misleading - stress-ng is designed to
saturate.

When swapping is enabled, it ends up rate-limited by I/O (there is
simultaneous MADV_PAGEOUT occurring). Then you see lower system time
because... the system is sleeping more :)

The zero pages patch stops all that, so you throttle on the next thing - the
lruvec lock.

If you group by NUMA node rather than not at all (the default), you
naturally distribute evenly across the lruvec locks, because they're per
node (+ per memcg).

So all this is arbitrary; it is essentially asking 'what do I rate-limit
on?', and 'optimising' things to give different outcomes, especially on
things like system time, doesn't really make sense.

If you absolutely hammer the hell out of the populate/unmap paths, unevenly
over NUMA nodes, you'll see system time explode, because now you're hitting
the lruvec lock, which is a spinlock (it has to be, due to possible
invocation from IRQ context).

You're not actually asking 'how fast is this in a real workload?' or even
'how fast is this microbenchmark?'; you're asking 'what does saturating this
look like?'. So it's rather asking the wrong question, I fear, and a reason
why stress-ng-as-benchmark has to be treated with caution.

I would definitely recommend examining any underlying real-world workload
that is triggering the issue rather than stress-ng, and then examining
closely what's going on there.

This whole thing might be unfortunately misleading, as you observe
saturation of the lruvec lock, but in reality it might simply be a
manifestation of:

- syscalls on the hot path
- not distributing work sensibly over NUMA nodes

Perhaps it is indeed an issue with the lruvec lock that needs attention, but
with a real-world use case we can perhaps be a little more sure it's that
rather than stress-ng doing its thing :)

>
> --
> Cheers,
>
> David

Thanks, Lorenzo