From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior,
	Harry Yoo, Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [PATCH 2/7] memcg: move preempt disable to callers of memcg_rstat_updated
Date: Tue, 13 May 2025 22:08:08 -0700
Message-ID: <20250514050813.2526843-3-shakeel.butt@linux.dev>
In-Reply-To: <20250514050813.2526843-1-shakeel.butt@linux.dev>
References:
 <20250514050813.2526843-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let's move the explicit preempt disable code to the callers of
memcg_rstat_updated() and also remove memcg_stats_lock() and related
functions, which ensured that callers of the stats update functions
had preemption disabled. They are no longer needed because the stats
update functions now disable preemption explicitly themselves.
Signed-off-by: Shakeel Butt
Acked-by: Vlastimil Babka
---
 mm/memcontrol.c | 74 +++++++++++++------------------------------------
 1 file changed, 19 insertions(+), 55 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cb10bcd1028d..8c8e0e1acd71 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -558,47 +558,21 @@ static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
 
-/*
- * Accessors to ensure that preemption is disabled on PREEMPT_RT because it can
- * not rely on this as part of an acquired spinlock_t lock. These functions are
- * never used in hardirq context on PREEMPT_RT and therefore disabling preemtion
- * is sufficient.
- */
-static void memcg_stats_lock(void)
-{
-	preempt_disable_nested();
-	VM_WARN_ON_IRQS_ENABLED();
-}
-
-static void __memcg_stats_lock(void)
-{
-	preempt_disable_nested();
-}
-
-static void memcg_stats_unlock(void)
-{
-	preempt_enable_nested();
-}
-
-
 static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
 {
 	return atomic64_read(&vmstats->stats_updates) >
 		MEMCG_CHARGE_BATCH * num_online_cpus();
 }
 
-static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
+static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val,
+				       int cpu)
 {
 	struct memcg_vmstats_percpu __percpu *statc_pcpu;
-	int cpu;
 	unsigned int stats_updates;
 
 	if (!val)
 		return;
 
-	/* Don't assume callers have preemption disabled. */
-	cpu = get_cpu();
-
 	css_rstat_updated(&memcg->css, cpu);
 	statc_pcpu = memcg->vmstats_percpu;
 	for (; statc_pcpu; statc_pcpu = this_cpu_ptr(statc_pcpu)->parent_pcpu) {
@@ -620,7 +594,6 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 		atomic64_add(stats_updates,
 			     &this_cpu_ptr(statc_pcpu)->vmstats->stats_updates);
 	}
-	put_cpu();
 }
 
 static void __mem_cgroup_flush_stats(struct mem_cgroup *memcg, bool force)
@@ -718,6 +691,7 @@ void __mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
 		       int val)
 {
 	int i = memcg_stats_index(idx);
+	int cpu;
 
 	if (mem_cgroup_disabled())
 		return;
@@ -725,12 +699,14 @@ void __mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
 	if (WARN_ONCE(BAD_STAT_IDX(i), "%s: missing stat item %d\n", __func__, idx))
 		return;
 
-	memcg_stats_lock();
+	cpu = get_cpu();
+
 	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
 	val = memcg_state_val_in_pages(idx, val);
-	memcg_rstat_updated(memcg, val);
+	memcg_rstat_updated(memcg, val, cpu);
 	trace_mod_memcg_state(memcg, idx, val);
-	memcg_stats_unlock();
+
+	put_cpu();
 }
 
 #ifdef CONFIG_MEMCG_V1
@@ -759,6 +735,7 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
 	int i = memcg_stats_index(idx);
+	int cpu;
 
 	if (WARN_ONCE(BAD_STAT_IDX(i), "%s: missing stat item %d\n", __func__, idx))
 		return;
@@ -766,24 +743,7 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
 	memcg = pn->memcg;
 
-	/*
-	 * The caller from rmap relies on disabled preemption because they never
-	 * update their counter from in-interrupt context. For these two
-	 * counters we check that the update is never performed from an
-	 * interrupt context while other caller need to have disabled interrupt.
-	 */
-	__memcg_stats_lock();
-	if (IS_ENABLED(CONFIG_DEBUG_VM)) {
-		switch (idx) {
-		case NR_ANON_MAPPED:
-		case NR_FILE_MAPPED:
-		case NR_ANON_THPS:
-			WARN_ON_ONCE(!in_task());
-			break;
-		default:
-			VM_WARN_ON_IRQS_ENABLED();
-		}
-	}
+	cpu = get_cpu();
 
 	/* Update memcg */
 	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
@@ -792,9 +752,10 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 	__this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
 
 	val = memcg_state_val_in_pages(idx, val);
-	memcg_rstat_updated(memcg, val);
+	memcg_rstat_updated(memcg, val, cpu);
 	trace_mod_memcg_lruvec_state(memcg, idx, val);
-	memcg_stats_unlock();
+
+	put_cpu();
 }
 
 /**
@@ -874,6 +835,7 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
 		unsigned long count)
 {
 	int i = memcg_events_index(idx);
+	int cpu;
 
 	if (mem_cgroup_disabled())
 		return;
@@ -881,11 +843,13 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
 	if (WARN_ONCE(BAD_STAT_IDX(i), "%s: missing stat item %d\n", __func__, idx))
 		return;
 
-	memcg_stats_lock();
+	cpu = get_cpu();
+
 	__this_cpu_add(memcg->vmstats_percpu->events[i], count);
-	memcg_rstat_updated(memcg, count);
+	memcg_rstat_updated(memcg, count, cpu);
 	trace_count_memcg_events(memcg, idx, count);
-	memcg_stats_unlock();
+
+	put_cpu();
 }
 
 unsigned long memcg_events(struct mem_cgroup *memcg, int event)
-- 
2.47.1