Date: Thu, 8 Jan 2026 06:03:30 +0000
From: Bing Jiao
To: Joshua Hahn
Cc: linux-mm@kvack.org, Andrew Morton, Johannes Weiner,
 David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt,
 Lorenzo Stoakes, Axel Rasmussen, Yuanchu Xie, Wei Xu,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 0/2] mm/vmscan: optimize preferred target
 demotion node selection
Message-ID:
References: <20260107072814.2324646-1-bingjiao@google.com>
 <20260107174652.3973445-1-joshua.hahnjy@gmail.com>
In-Reply-To: <20260107174652.3973445-1-joshua.hahnjy@gmail.com>

On Wed, Jan 07, 2026 at 09:46:52AM -0800, Joshua Hahn wrote:

Hi Joshua,

Thanks for your insights and valuable suggestions!

> On Wed, 7 Jan 2026 07:28:12 +0000 Bing Jiao wrote:
>
> Hello Bing, thank you for your patch!
>
> I have a few questions about the motivation behind this patch.
>
> > In tiered memory systems, demotion aims to move cold folios to
> > far-tier nodes. To maintain system performance, the demotion target
> > should ideally be the node with the shortest NUMA distance from the
> > source node.
> >
> > However, the current implementation has two suboptimal behaviors:
> >
> > 1.
> >    Unbalanced Fallback: When the primary preferred demotion node is
> >    full, the allocator falls back to other nodes in a way that often
> >    skews toward zones that are closer to the primary preferred node,
> >    rather than distributing the load evenly across fallback nodes.
>
> I definitely think this is a problem that can exist for some
> workloads / machines, and I agree that there should be some mechanism
> to manage this in the demotion code as well. In the context of tiered
> memory, it might be the case that some far nodes have more restricted
> memory bandwidth, so better distribution of memory across those nodes
> definitely sounds like something that should at least be considered
> (even if it might not be the sole factor).
>
> With that said, I think adding some numbers here to motivate this
> change could definitely make the argument more convincing. In
> particular, I don't think I am fully convinced that doing a full
> random selection from the demotion targets makes the most sense.
> Maybe there are a few more things to consider, like the node's
> capacity, how full it is, bandwidth, etc. For instance, weighted
> interleave auto-tuning makes a weighted selection based on each
> node's bandwidth.

I agree that a detailed evaluation is necessary. When I initially wrote
this patch, I hadn't fully considered weighted selection. Using
bandwidth as a weight for demotion target selection makes sense, and
node capacity could serve as another useful heuristic. However,
designing and evaluating a proposal that properly integrates all of
these metrics will require more time and study.

> At least right now, it seems like we're consistent with how the
> demotion node gets selected when the preferred node is full.
>
> Do your changes lead to a "better" distribution of memory? And does
> this distribution lead to increased performance? I think some numbers
> here could help my understanding and convince others as well :-)

I haven't performed a formal A/B performance test yet.
My primary observation was a significant imbalance in memory pressure:
some far nodes were completely exhausted while others in the same tier
remained half-empty. With this patch, that skewed distribution is
mitigated when the nodes reside in the same tier. I agree that
providing numbers would strengthen the proposal, and I will work on
gathering them.

> > 2. Suboptimal Target Selection: demote_folio_list() randomly
> >    selects a preferred node from the allowed mask, potentially
> >    selecting a very distant node.
>
> Following up, I think it could be helpful to have a unified story
> about how demotion nodes should be selected. In particular, I'm not
> entirely confident it makes sense to have a "try the preferred
> demotion target, then select randomly among all other nodes" story,
> since these have conflicting stories of "prefer close nodes" vs
> "distribute demotions". To put it explicitly, what makes the first
> demotion target special? Should we just select randomly for *all*
> demotion targets, not just when the preferred node is full?

The "first" target is not particularly special: it is randomly selected
from the tier closest to the source node by next_demotion_node().

Regarding the strategy, my thinking is this: if the far nodes are
mostly empty, preferring the nearest one is optimal. However, as those
nodes approach capacity, consistently targeting the nearest one can
create contention hotspots. Choosing between "proximity" and
"distribution" therefore likely depends on the current state of the
targets. I agree that we need a more comprehensive study to establish
a unified selection policy.

> Sorry if it seems like I am asking too many questions, I just wanted
> to get a better understanding of the motivation behind the patch.
>
> Thank you, and I hope you have a great day!
> Joshua

Thanks for the feedback and suggestions.
I realized that my previous patch ("mm/vmscan: fix demotion targets
checks in reclaim/demotion") is what introduced the "non-preferred
node" issue in demote_folio_list(). I am not sure whether this fix
belongs in that previous series, but I have just posted a refreshed
version of patch 2/2 there.

Thanks,
Bing