Date: Thu, 12 Feb 2026 07:07:34 -0800
From: Shakeel Butt
To: JP Kobryn
Cc: linux-mm@kvack.org, apopple@nvidia.com, akpm@linux-foundation.org,
	axelrasmussen@google.com, byungchul@sk.com, cgroups@vger.kernel.org,
	david@kernel.org, eperezma@redhat.com, gourry@gourry.net,
	jasowang@redhat.com, hannes@cmpxchg.org, joshua.hahnjy@gmail.com,
	Liam.Howlett@oracle.com, linux-kernel@vger.kernel.org,
	lorenzo.stoakes@oracle.com, matthew.brost@intel.com, mst@redhat.com,
	mhocko@suse.com, rppt@kernel.org, muchun.song@linux.dev,
	zhengqi.arch@bytedance.com, rakie.kim@sk.com,
	roman.gushchin@linux.dev, surenb@google.com,
	virtualization@lists.linux.dev, vbabka@suse.cz, weixugc@google.com,
	xuanzhuo@linux.alibaba.com, ying.huang@linux.alibaba.com,
	yuanchu@google.com, ziy@nvidia.com, kernel-team@meta.com
Subject: Re: [PATCH 1/2] mm/mempolicy: track page allocations per mempolicy
Message-ID:
References: <20260212045109.255391-1-inwardvessel@gmail.com>
	<20260212045109.255391-2-inwardvessel@gmail.com>
In-Reply-To: <20260212045109.255391-2-inwardvessel@gmail.com>
X-Mailing-List: virtualization@lists.linux.dev

On Wed, Feb 11, 2026 at 08:51:08PM -0800, JP Kobryn wrote:
> It would be useful to see a breakdown of allocations to understand which
> NUMA policies are driving them. For example, when investigating memory
> pressure, having policy-specific counts could show that allocations were
> bound to the affected node (via MPOL_BIND).
>
> Add per-policy page allocation counters as new node stat items. These
> counters can provide correlation between a mempolicy and pressure on a
> given node.
>
> Signed-off-by: JP Kobryn
> Suggested-by: Johannes Weiner
[...]
> int mempolicy_set_node_perf(unsigned int node, struct access_coordinate *coords)
> {
> 	struct weighted_interleave_state *new_wi_state, *old_wi_state = NULL;
> @@ -2446,8 +2461,14 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
>
> 	nodemask = policy_nodemask(gfp, pol, ilx, &nid);
>
> -	if (pol->mode == MPOL_PREFERRED_MANY)
> -		return alloc_pages_preferred_many(gfp, order, nid, nodemask);
> +	if (pol->mode == MPOL_PREFERRED_MANY) {
> +		page = alloc_pages_preferred_many(gfp, order, nid, nodemask);
> +		if (page)
> +			__mod_node_page_state(page_pgdat(page),
> +				mpol_node_stat(MPOL_PREFERRED_MANY), 1 << order);

Here and in two places below, please use mod_node_page_state() instead of
__mod_node_page_state(). The __foo() variant requires preemption to be
disabled, or, if the given stat can also be updated from IRQ context, IRQs
to be disabled. This code path does neither.

> +
> +		return page;
> +	}
>
> 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
> 	    /* filter "hugepage" allocation, unless from alloc_pages() */
> @@ -2472,6 +2493,9 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> 		page = __alloc_frozen_pages_noprof(
> 			gfp | __GFP_THISNODE | __GFP_NORETRY, order,
> 			nid, NULL);
> +		if (page)
> +			__mod_node_page_state(page_pgdat(page),
> +				mpol_node_stat(pol->mode), 1 << order);
> 		if (page || !(gfp & __GFP_DIRECT_RECLAIM))
> 			return page;
> 		/*
> @@ -2484,6 +2508,8 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> 	}
>
> 	page = __alloc_frozen_pages_noprof(gfp, order, nid, nodemask);
> +	if (page)
> +		__mod_node_page_state(page_pgdat(page), mpol_node_stat(pol->mode), 1 << order);
>
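
To illustrate, the first hunk would become something like this (a sketch
only, untested; same arguments, just the preempt-safe variant):

	if (pol->mode == MPOL_PREFERRED_MANY) {
		page = alloc_pages_preferred_many(gfp, order, nid, nodemask);
		if (page)
			/* safe without preempt/IRQ disable */
			mod_node_page_state(page_pgdat(page),
				mpol_node_stat(MPOL_PREFERRED_MANY), 1 << order);

		return page;
	}

and likewise for the other two call sites.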