Date: Mon, 9 Mar 2026 16:35:22 -0700
From: Shakeel Butt
To: "JP Kobryn (Meta)"
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@suse.com,
	vbabka@suse.cz, apopple@nvidia.com, axelrasmussen@google.com,
	byungchul@sk.com, cgroups@vger.kernel.org, david@kernel.org,
	eperezma@redhat.com, gourry@gourry.net, jasowang@redhat.com,
	hannes@cmpxchg.org, joshua.hahnjy@gmail.com, Liam.Howlett@oracle.com,
	linux-kernel@vger.kernel.org, lorenzo.stoakes@oracle.com,
	matthew.brost@intel.com, mst@redhat.com, rppt@kernel.org,
	muchun.song@linux.dev, zhengqi.arch@bytedance.com, rakie.kim@sk.com,
	roman.gushchin@linux.dev, surenb@google.com,
	virtualization@lists.linux.dev, weixugc@google.com,
	xuanzhuo@linux.alibaba.com, ying.huang@linux.alibaba.com,
	yuanchu@google.com, ziy@nvidia.com, kernel-team@meta.com
Subject: Re: [PATCH v2] mm/mempolicy: track page allocations per mempolicy
Message-ID:
References: <20260307045520.247998-1-jp.kobryn@linux.dev>
In-Reply-To: <20260307045520.247998-1-jp.kobryn@linux.dev>
X-Mailing-List: virtualization@lists.linux.dev

On Fri, Mar 06, 2026 at 08:55:20PM -0800, JP Kobryn (Meta) wrote:
> When investigating pressure on a NUMA node, there is no straightforward way
> to determine which policies are driving allocations to it.
>
> Add per-policy page allocation counters as new node stat items. These
> counters track allocations to nodes and also whether the allocations were
> intentional or fallbacks.
>
> The new stats follow the existing numa hit/miss/foreign style and have the
> following meanings:
>
> hit
> - for BIND and PREFERRED_MANY, allocation succeeded on a node in the nodemask
> - for other policies, allocation succeeded on the intended node
> - counted on the node of the allocation
> miss
> - allocation intended for another node, but happened on this one
> - counted on this node
> foreign
> - allocation intended for this node, but happened on another node
> - counted on this node
>
> Counters are exposed per-memcg, per-node in memory.numa_stat and globally
> in /proc/vmstat.
>
> Signed-off-by: JP Kobryn (Meta)
> ---
> v2:
> - Replaced single per-policy total counter (PGALLOC_MPOL_*) with
>   hit/miss/foreign triplet per policy
> - Changed from global node stats to per-memcg per-node tracking
>
> v1:
> https://lore.kernel.org/linux-mm/20260212045109.255391-2-inwardvessel@gmail.com/
>
>  include/linux/mmzone.h | 20 ++++++++++
>  mm/memcontrol.c        | 60 ++++++++++++++++++++++++++++
>  mm/mempolicy.c         | 90 ++++++++++++++++++++++++++++++++++++++++--
>  mm/vmstat.c            | 20 ++++++++++
>  4 files changed, 187 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7bd0134c241c..c0517cbcb0e2 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -323,6 +323,26 @@ enum node_stat_item {
>  	PGSCAN_ANON,
>  	PGSCAN_FILE,
>  	PGREFILL,
> +#ifdef CONFIG_NUMA
> +	NUMA_MPOL_LOCAL_HIT,
> +	NUMA_MPOL_LOCAL_MISS,
> +	NUMA_MPOL_LOCAL_FOREIGN,
> +	NUMA_MPOL_PREFERRED_HIT,
> +	NUMA_MPOL_PREFERRED_MISS,
> +	NUMA_MPOL_PREFERRED_FOREIGN,
> +	NUMA_MPOL_PREFERRED_MANY_HIT,
> +	NUMA_MPOL_PREFERRED_MANY_MISS,
> +	NUMA_MPOL_PREFERRED_MANY_FOREIGN,
> +	NUMA_MPOL_BIND_HIT,
> +	NUMA_MPOL_BIND_MISS,
> +	NUMA_MPOL_BIND_FOREIGN,
> +	NUMA_MPOL_INTERLEAVE_HIT,
> +	NUMA_MPOL_INTERLEAVE_MISS,
> +	NUMA_MPOL_INTERLEAVE_FOREIGN,
> +	NUMA_MPOL_WEIGHTED_INTERLEAVE_HIT,
> +	NUMA_MPOL_WEIGHTED_INTERLEAVE_MISS,
> +	NUMA_MPOL_WEIGHTED_INTERLEAVE_FOREIGN,
> +#endif

I have not looked into what these metrics mean, but this is too many
counters, at least for the memcg: there is a significant memory cost for
each metric added to the memcg.