From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 4 Feb 2026 12:38:47 -0800
From: Shakeel Butt
To: Dev Jain
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
	Muchun Song, Harry Yoo, Qi Zheng, Vlastimil Babka,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, Meta kernel team
Subject: Re: [PATCH 1/4] memcg: use mod_node_page_state to update stats
Message-ID:
References: <20251110232008.1352063-1-shakeel.butt@linux.dev>
	<20251110232008.1352063-2-shakeel.butt@linux.dev>
	<1052a452-9ba3-4da7-be47-7d27d27b3d1d@arm.com>
	<2638bd96-d8cc-4733-a4ce-efdf8f223183@arm.com>
	<51819ca5a15d8928caac720426cd1ce82e89b429@linux.dev>
	<05aec69b-8e73-49ac-aa89-47b371fb6269@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <05aec69b-8e73-49ac-aa89-47b371fb6269@arm.com>

On Mon, Feb 02, 2026 at 02:23:54PM +0530, Dev Jain wrote:
> 
> On 02/02/26 10:24 am, Shakeel Butt wrote:
> >>>> Hello Shakeel,
> >>>>
> >>>> We are seeing a regression in the micromm/munmap benchmark with this
> >>>> patch on arm64 - the benchmark mmaps a lot of memory, memsets it, and
> >>>> measures the time taken to munmap. Please see below if my understanding
> >>>> of this patch is correct.
> >>>>
> >>> Thanks for the report. Are you seeing the regression in just the
> >>> benchmark or in some real workload as well? Also, how much regression
> >>> are you seeing? I have a kernel test robot regression report [1] for
> >>> this patch as well, which says 2.6% regression, and thus it was on the
> >>> back-burner for now.
> >>> I will take a look at this again soon.
> >>>
> >> The munmap regression is ~24%. Haven't observed a regression in any
> >> other benchmark yet.
> > 
> > Please share the code/benchmark which shows such a regression; also, if
> > you can share the perf profile, that would be awesome.
> 
> https://gitlab.arm.com/tooling/fastpath/-/blob/main/containers/microbench/micromm.c
> You can run this with
> ./micromm 0 munmap 10
> 
> I don't have a perf profile; I measured the time taken by the above
> command with and without the patch.

Hi Dev, can you please try the following patch?

>From 40155feca7e7bc846800ab8449735bdb03164d6d Mon Sep 17 00:00:00 2001
From: Shakeel Butt
Date: Wed, 4 Feb 2026 08:46:08 -0800
Subject: [PATCH] vmstat: use preempt disable instead of try_cmpxchg

Signed-off-by: Shakeel Butt
---
 include/linux/mmzone.h |  2 +-
 mm/vmstat.c            | 58 ++++++++++++++++++------------------------
 2 files changed, 26 insertions(+), 34 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e51190a55e4..499cd53efdd6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -776,7 +776,7 @@ struct per_cpu_zonestat {
 
 struct per_cpu_nodestat {
 	s8 stat_threshold;
-	s8 vm_node_stat_diff[NR_VM_NODE_STAT_ITEMS];
+	long vm_node_stat_diff[NR_VM_NODE_STAT_ITEMS];
 };
 
 #endif /* !__GENERATING_BOUNDS.H */
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 86b14b0f77b5..0930695597bb 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -377,7 +377,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 				long delta)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
+	long __percpu *p = pcp->vm_node_stat_diff + item;
 	long x;
 	long t;
 
@@ -456,8 +456,8 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	s8 v, t;
+	long __percpu *p = pcp->vm_node_stat_diff + item;
+	long v, t;
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
@@ -467,7 +467,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
-		s8 overstep = t >> 1;
+		long overstep = t >> 1;
 
 		node_page_state_add(v + overstep, pgdat, item);
 		__this_cpu_write(*p, -overstep);
@@ -512,8 +512,8 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	s8 v, t;
+	long __percpu *p = pcp->vm_node_stat_diff + item;
+	long v, t;
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
@@ -523,7 +523,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
-		s8 overstep = t >> 1;
+		long overstep = t >> 1;
 
 		node_page_state_add(v - overstep, pgdat, item);
 		__this_cpu_write(*p, overstep);
@@ -619,9 +619,8 @@ static inline void mod_node_state(struct pglist_data *pgdat,
 	   enum node_stat_item item, int delta, int overstep_mode)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	long n, t, z;
-	s8 o;
+	long __percpu *p = pcp->vm_node_stat_diff + item;
+	long o, n, t, z;
 
 	if (vmstat_item_in_bytes(item)) {
 		/*
@@ -634,32 +633,25 @@ static inline void mod_node_state(struct pglist_data *pgdat,
 		delta >>= PAGE_SHIFT;
 	}
 
+	preempt_disable();
+
 	o = this_cpu_read(*p);
-	do {
-		z = 0;  /* overflow to node counters */
+	n = o + delta;
 
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a node.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
+	t = this_cpu_read(pcp->stat_threshold);
+	z = 0;
 
-		n = delta + (long)o;
+	if (abs(n) > t) {
+		int os = overstep_mode * (t >> 1);
 
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
+		/* Overflow must be added to node counters */
+		z = n + os;
+		n = -os;
+	}
 
-			/* Overflow must be added to node counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (!this_cpu_try_cmpxchg(*p, &o, n));
+	this_cpu_add(*p, n - o);
+
+	preempt_enable();
 
 	if (z)
 		node_page_state_add(z, pgdat, item);
@@ -866,7 +858,7 @@ static bool refresh_cpu_vm_stats(bool do_pagesets)
 		struct per_cpu_nodestat __percpu *p = pgdat->per_cpu_nodestats;
 
 		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			int v;
+			long v;
 
 			v = this_cpu_xchg(p->vm_node_stat_diff[i], 0);
 			if (v) {
@@ -929,7 +921,7 @@ void cpu_vm_stats_fold(int cpu)
 
 	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
 		if (p->vm_node_stat_diff[i]) {
-			int v;
+			long v;
 
 			v = p->vm_node_stat_diff[i];
 			p->vm_node_stat_diff[i] = 0;
-- 
2.47.3