Date: Thu, 12 Mar 2026 09:23:35 +1100
From: Dave Chinner
To: Johannes Weiner
Cc: Andrew Morton, David Hildenbrand, Zi Yan, "Liam R. Howlett",
	Usama Arif, Kiryl Shutsemau, Dave Chinner, Roman Gushchin,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: switch deferred split shrinker to list_lru
In-Reply-To: <20260311154358.150977-1-hannes@cmpxchg.org>

On Wed, Mar 11, 2026 at 11:43:58AM -0400, Johannes Weiner wrote:
> The deferred split queue handles cgroups in a suboptimal fashion. The
> queue is per-NUMA node or per-cgroup, not the intersection. That means
> on a cgrouped system, a node-restricted allocation entering reclaim
> can end up splitting large pages on other nodes:
>
>   alloc/unmap
>     deferred_split_folio()
>       list_add_tail(memcg->split_queue)
>       set_shrinker_bit(memcg, node, deferred_shrinker_id)
>
>   for_each_zone_zonelist_nodemask(restricted_nodes)
>     mem_cgroup_iter()
>       shrink_slab(node, memcg)
>         shrink_slab_memcg(node, memcg)
>           if test_shrinker_bit(memcg, node, deferred_shrinker_id)
>             deferred_split_scan()
>               walks memcg->split_queue
>
> The shrinker bit adds an imperfect guard rail. As soon as the cgroup
> has a single large page on the node of interest, all large pages
> owned by that memcg, including those on other nodes, will be split.
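
For context, the either/or queue selection described above looks
roughly like this before the patch (a simplified sketch of
get_deferred_split_queue() in mm/huge_memory.c; exact details vary by
kernel version and config):

	static struct deferred_split *get_deferred_split_queue(struct folio *folio)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);
		struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));

		/* a cgrouped folio shares one queue across all NUMA nodes */
		if (memcg)
			return &memcg->deferred_split_queue;
		/* only non-cgrouped folios get a per-node queue */
		return &pgdat->deferred_split_queue;
	}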
>
> list_lru properly sets up per-node, per-cgroup lists. As a bonus, it
> streamlines a lot of the list operations and reclaim walks. It's
> already widely used by other major shrinkers. Convert the deferred
> split queue as well.
>
> The list_lru per-memcg heads are instantiated on demand, when the
> first object of interest is allocated for a cgroup, by calling
> memcg_list_lru_alloc(). Add calls where splittable pages are created:
> anon faults, swapin faults, khugepaged collapse.
>
> These calls create all possible node heads for the cgroup at once, so
> the migration code (between nodes) doesn't need any special care.
>
> The folio_test_partially_mapped() state is currently protected and
> serialized wrt LRU state by the deferred split queue lock. To
> facilitate the transition, add helpers to the list_lru API that allow
> caller-side locking.
>
> Signed-off-by: Johannes Weiner
> ---
>  include/linux/huge_mm.h    |   6 +-
>  include/linux/list_lru.h   |  48 ++++++
>  include/linux/memcontrol.h |   4 -
>  include/linux/mmzone.h     |  12 --
>  mm/huge_memory.c           | 326 +++++++++++-------------------------
>  mm/internal.h              |   2 +-
>  mm/khugepaged.c            |   7 +
>  mm/list_lru.c              | 197 ++++++++++++++--------
>  mm/memcontrol.c            |  12 +-
>  mm/memory.c                |  52 +++---
>  mm/mm_init.c               |  14 --
>  11 files changed, 310 insertions(+), 370 deletions(-)

Can you please split this up into multiple patches (i.e. one logical
change per patch) to make it easier to review?

Just from the list_lru perspective, there are multiple complex changes
here: locking API changes, new locking primitives, internally locked
functions exposed to callers to allow external locking, etc. These
need to be looked at individually and in isolation so we can actually
discuss the finer details, and that's almost impossible to do when
they are all smashed into one massive patch.

-Dave.

-- 
Dave Chinner
dgc@kernel.org
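
For readers following along, the conversion target looks roughly like
the sketch below. It folds the patch's two steps into one function for
brevity (the patch places the memcg_list_lru_alloc() call at fault and
collapse time, where sleeping allocations are safe); the names
"deferred_split_lru" and "deferred_split_queue_folio" are made up,
while memcg_list_lru_alloc() and list_lru_add() are the existing
list_lru API:

	static struct list_lru deferred_split_lru;

	static int deferred_split_queue_folio(struct folio *folio)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);

		/* instantiate this cgroup's per-node list heads on demand */
		if (memcg && memcg_list_lru_alloc(memcg, &deferred_split_lru,
						  GFP_KERNEL))
			return -ENOMEM;

		/* files the folio on the list for its node *and* its memcg */
		list_lru_add(&deferred_split_lru, &folio->_deferred_list,
			     folio_nid(folio), memcg);
		return 0;
	}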