From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior,
	Harry Yoo, Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [PATCH 5/7] memcg: make __mod_memcg_lruvec_state re-entrant safe against irqs
Date: Tue, 13 May 2025 22:08:11 -0700
Message-ID: <20250514050813.2526843-6-shakeel.butt@linux.dev>
In-Reply-To: <20250514050813.2526843-1-shakeel.butt@linux.dev>
References:
 <20250514050813.2526843-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let's make __mod_memcg_lruvec_state re-entrant safe and name it
mod_memcg_lruvec_state(). The only thing needed is to convert the usage
of __this_cpu_add() to this_cpu_add(). There are two callers of
mod_memcg_lruvec_state() and one of them i.e.
__mod_objcg_mlstate() will be re-entrant safe as well, so rename it to
mod_objcg_mlstate(). The last caller, __mod_lruvec_state(), still calls
__mod_node_page_state(), which is not re-entrant safe yet, so keep it
as is.

Signed-off-by: Shakeel Butt
Acked-by: Vlastimil Babka
---
 mm/memcontrol.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b666cdb1af68..4f19fe9de5bf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -728,7 +728,7 @@ unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 }
 #endif
 
-static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
+static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 				    enum node_stat_item idx,
 				    int val)
 {
@@ -746,10 +746,10 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 	cpu = get_cpu();
 
 	/* Update memcg */
-	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
+	this_cpu_add(memcg->vmstats_percpu->state[i], val);
 
 	/* Update lruvec */
-	__this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
+	this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
 
 	val = memcg_state_val_in_pages(idx, val);
 	memcg_rstat_updated(memcg, val, cpu);
@@ -776,7 +776,7 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 
 	/* Update memcg and lruvec */
 	if (!mem_cgroup_disabled())
-		__mod_memcg_lruvec_state(lruvec, idx, val);
+		mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
 void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
@@ -2552,7 +2552,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 	folio->memcg_data = (unsigned long)memcg;
 }
 
-static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
+static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 				       struct pglist_data *pgdat,
 				       enum node_stat_item idx, int nr)
 {
@@ -2562,7 +2562,7 @@ static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
 	rcu_read_lock();
 	memcg = obj_cgroup_memcg(objcg);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	__mod_memcg_lruvec_state(lruvec, idx, nr);
+	mod_memcg_lruvec_state(lruvec, idx, nr);
 	rcu_read_unlock();
 }
 
@@ -2872,12 +2872,12 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
 		struct pglist_data *oldpg = stock->cached_pgdat;
 
 		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
 					stock->nr_slab_reclaimable_b);
 			stock->nr_slab_reclaimable_b = 0;
 		}
 		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
 					stock->nr_slab_unreclaimable_b);
 			stock->nr_slab_unreclaimable_b = 0;
 		}
@@ -2903,7 +2903,7 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
 		}
 	}
 	if (nr)
-		__mod_objcg_mlstate(objcg, pgdat, idx, nr);
+		mod_objcg_mlstate(objcg, pgdat, idx, nr);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
@@ -2972,13 +2972,13 @@ static void drain_obj_stock(struct obj_stock_pcp *stock)
 	 */
 	if (stock->nr_slab_reclaimable_b || stock->nr_slab_unreclaimable_b) {
 		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
 					NR_SLAB_RECLAIMABLE_B,
 					stock->nr_slab_reclaimable_b);
 			stock->nr_slab_reclaimable_b = 0;
 		}
 		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
 					NR_SLAB_UNRECLAIMABLE_B,
 					stock->nr_slab_unreclaimable_b);
 			stock->nr_slab_unreclaimable_b = 0;
-- 
2.47.1