From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Tejun Heo, Andrew Morton
Cc: JP Kobryn, Johannes Weiner, Michal Hocko, Roman Gushchin,
	Muchun Song, Vlastimil Babka, Alexei Starovoitov,
	Sebastian Andrzej Siewior, Michal Koutný, Harry Yoo,
	Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [PATCH v2 3/4] cgroup: remove per-cpu per-subsystem locks
Date: Wed, 11 Jun 2025 15:15:31 -0700
Message-ID: <20250611221532.2513772-4-shakeel.butt@linux.dev>
In-Reply-To:
 <20250611221532.2513772-1-shakeel.butt@linux.dev>
References: <20250611221532.2513772-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The rstat update side used to insert the cgroup whose stats are updated
into the update tree, and the read side would flush the update tree to
get the latest up-to-date stats.
The per-cpu per-subsystem locks were used to synchronize the update and
flush sides. However, the update side no longer accesses the update
tree; it uses per-cpu lockless lists instead. So there is no need for
locks to synchronize the update and flush sides. Let's remove them.

Suggested-by: JP Kobryn
Signed-off-by: Shakeel Butt
---
 include/linux/cgroup-defs.h   |   7 ---
 include/trace/events/cgroup.h |  47 ---------------
 kernel/cgroup/rstat.c         | 107 ++--------------------------------
 3 files changed, 4 insertions(+), 157 deletions(-)

diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 45860fe5dd0c..bca3562e3df4 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -375,12 +375,6 @@ struct css_rstat_cpu {
	 * Child cgroups with stat updates on this cpu since the last read
	 * are linked on the parent's ->updated_children through
	 * ->updated_next. updated_children is terminated by its container css.
-	 *
-	 * In addition to being more compact, singly-linked list pointing to
-	 * the css makes it unnecessary for each per-cpu struct to point back
-	 * to the associated css.
-	 *
-	 * Protected by per-cpu css->ss->rstat_ss_cpu_lock.
	 */
	struct cgroup_subsys_state *updated_children;
	struct cgroup_subsys_state *updated_next;	/* NULL if not on the list */
@@ -824,7 +818,6 @@ struct cgroup_subsys {
	unsigned int depends_on;

	spinlock_t rstat_ss_lock;
-	raw_spinlock_t __percpu *rstat_ss_cpu_lock;
	struct llist_head __percpu *lhead;	/* lockless update list head */
 };
diff --git a/include/trace/events/cgroup.h b/include/trace/events/cgroup.h
index 7d332387be6c..ba9229af9a34 100644
--- a/include/trace/events/cgroup.h
+++ b/include/trace/events/cgroup.h
@@ -257,53 +257,6 @@ DEFINE_EVENT(cgroup_rstat, cgroup_rstat_unlock,

	TP_ARGS(cgrp, cpu, contended)
 );
-/*
- * Related to per CPU locks:
- * global rstat_base_cpu_lock for base stats
- * cgroup_subsys::rstat_ss_cpu_lock for subsystem stats
- */
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_lock_contended,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_lock_contended_fastpath,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_locked,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_locked_fastpath,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_unlock,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
-DEFINE_EVENT(cgroup_rstat, cgroup_rstat_cpu_unlock_fastpath,
-
-	TP_PROTO(struct cgroup *cgrp, int cpu, bool contended),
-
-	TP_ARGS(cgrp, cpu, contended)
-);
-
 #endif /* _TRACE_CGROUP_H */

 /* This part must be outside protection */
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index a7550961dd12..c8a48cf83878 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -10,7 +10,6 @@
 #include

 static DEFINE_SPINLOCK(rstat_base_lock);
-static DEFINE_PER_CPU(raw_spinlock_t, rstat_base_cpu_lock);
 static DEFINE_PER_CPU(struct llist_head, rstat_backlog_list);

 static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
@@ -53,86 +52,6 @@ static inline struct llist_head *ss_lhead_cpu(struct cgroup_subsys *ss, int cpu)
	return per_cpu_ptr(&rstat_backlog_list, cpu);
 }

-static raw_spinlock_t *ss_rstat_cpu_lock(struct cgroup_subsys *ss, int cpu)
-{
-	if (ss) {
-		/*
-		 * Depending on config, the subsystem per-cpu lock type may be an
-		 * empty struct. In enviromnents where this is the case, allocation
-		 * of this field is not performed in ss_rstat_init(). Avoid a
-		 * cpu-based offset relative to NULL by returning early. When the
-		 * lock type is zero in size, the corresponding lock functions are
-		 * no-ops so passing them NULL is acceptable.
-		 */
-		if (sizeof(*ss->rstat_ss_cpu_lock) == 0)
-			return NULL;
-
-		return per_cpu_ptr(ss->rstat_ss_cpu_lock, cpu);
-	}
-
-	return per_cpu_ptr(&rstat_base_cpu_lock, cpu);
-}
-
-/*
- * Helper functions for rstat per CPU locks.
- *
- * This makes it easier to diagnose locking issues and contention in
- * production environments. The parameter @fast_path determine the
- * tracepoints being added, allowing us to diagnose "flush" related
- * operations without handling high-frequency fast-path "update" events.
- */
-static __always_inline
-unsigned long _css_rstat_cpu_lock(struct cgroup_subsys_state *css, int cpu,
-				  const bool fast_path)
-{
-	struct cgroup *cgrp = css->cgroup;
-	raw_spinlock_t *cpu_lock;
-	unsigned long flags;
-	bool contended;
-
-	/*
-	 * The _irqsave() is needed because the locks used for flushing are
-	 * spinlock_t which is a sleeping lock on PREEMPT_RT. Acquiring this lock
-	 * with the _irq() suffix only disables interrupts on a non-PREEMPT_RT
-	 * kernel. The raw_spinlock_t below disables interrupts on both
-	 * configurations. The _irqsave() ensures that interrupts are always
-	 * disabled and later restored.
-	 */
-	cpu_lock = ss_rstat_cpu_lock(css->ss, cpu);
-	contended = !raw_spin_trylock_irqsave(cpu_lock, flags);
-	if (contended) {
-		if (fast_path)
-			trace_cgroup_rstat_cpu_lock_contended_fastpath(cgrp, cpu, contended);
-		else
-			trace_cgroup_rstat_cpu_lock_contended(cgrp, cpu, contended);
-
-		raw_spin_lock_irqsave(cpu_lock, flags);
-	}
-
-	if (fast_path)
-		trace_cgroup_rstat_cpu_locked_fastpath(cgrp, cpu, contended);
-	else
-		trace_cgroup_rstat_cpu_locked(cgrp, cpu, contended);
-
-	return flags;
-}
-
-static __always_inline
-void _css_rstat_cpu_unlock(struct cgroup_subsys_state *css, int cpu,
-			   unsigned long flags, const bool fast_path)
-{
-	struct cgroup *cgrp = css->cgroup;
-	raw_spinlock_t *cpu_lock;
-
-	if (fast_path)
-		trace_cgroup_rstat_cpu_unlock_fastpath(cgrp, cpu, false);
-	else
-		trace_cgroup_rstat_cpu_unlock(cgrp, cpu, false);
-
-	cpu_lock = ss_rstat_cpu_lock(css->ss, cpu);
-	raw_spin_unlock_irqrestore(cpu_lock, flags);
-}
-
 /**
  * css_rstat_updated - keep track of updated rstat_cpu
  * @css: target cgroup subsystem state
@@ -335,15 +254,12 @@ static struct cgroup_subsys_state *css_rstat_updated_list(
 {
	struct css_rstat_cpu *rstatc = css_rstat_cpu(root, cpu);
	struct cgroup_subsys_state *head = NULL, *parent, *child;
-	unsigned long flags;
-
-	flags = _css_rstat_cpu_lock(root, cpu, false);

	css_process_update_tree(root->ss, cpu);

	/* Return NULL if this subtree is not on-list */
	if (!rstatc->updated_next)
-		goto unlock_ret;
+		return NULL;

	/*
	 * Unlink @root from its parent. As the updated_children list is
@@ -375,8 +291,7 @@ static struct cgroup_subsys_state *css_rstat_updated_list(
	rstatc->updated_children = root;
	if (child != root)
		head = css_rstat_push_children(head, child, cpu);
-unlock_ret:
-	_css_rstat_cpu_unlock(root, cpu, flags, false);
+
	return head;
 }
@@ -572,29 +487,15 @@ int __init ss_rstat_init(struct cgroup_subsys *ss)
 {
	int cpu;

-	/*
-	 * Depending on config, the subsystem per-cpu lock type may be an empty
-	 * struct. Avoid allocating a size of zero in this case.
-	 */
-	if (ss && sizeof(*ss->rstat_ss_cpu_lock)) {
-		ss->rstat_ss_cpu_lock = alloc_percpu(raw_spinlock_t);
-		if (!ss->rstat_ss_cpu_lock)
-			return -ENOMEM;
-	}
-
	if (ss) {
		ss->lhead = alloc_percpu(struct llist_head);
-		if (!ss->lhead) {
-			free_percpu(ss->rstat_ss_cpu_lock);
+		if (!ss->lhead)
			return -ENOMEM;
-		}
	}

	spin_lock_init(ss_rstat_lock(ss));
-	for_each_possible_cpu(cpu) {
-		raw_spin_lock_init(ss_rstat_cpu_lock(ss, cpu));
+	for_each_possible_cpu(cpu)
		init_llist_head(ss_lhead_cpu(ss, cpu));
-	}

	return 0;
 }
-- 
2.47.1