From: ovidiu.panait@windriver.com
To: stable@vger.kernel.org
Cc: qyousef@layalina.io, Chengming Zhou <zhouchengming@bytedance.com>,
Peter Zijlstra <peterz@infradead.org>,
Ovidiu Panait <ovidiu.panait@windriver.com>
Subject: [PATCH 5.10 3/4] sched/cpuacct: Optimize away RCU read lock
Date: Fri, 29 Sep 2023 16:14:17 +0300
Message-ID: <20230929131418.821640-4-ovidiu.panait@windriver.com>
In-Reply-To: <20230929131418.821640-1-ovidiu.panait@windriver.com>
From: Chengming Zhou <zhouchengming@bytedance.com>
commit dc6e0818bc9a0336d9accf3ea35d146d72aa7a18 upstream.
Since cpuacct_charge() is called from the scheduler's update_curr(), the
rq lock must already be held, so the RCU read lock can be optimized away.
Do the same in its wrapper, cgroup_account_cputime(); however,
lockdep_assert_rq_held() cannot be used there, because it is defined in
kernel/sched/sched.h.
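As a rough illustration of the locking relationship, here is a simplified
user-space sketch. The function names mirror the kernel ones, but struct rq,
the pthread mutex and the lock_held flag are made-up stand-ins, not the
actual kernel code:

/*
 * Simplified user-space model of the relationship described above: the
 * caller takes the lock, the callee only asserts that it is held.
 */
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct rq {
	pthread_mutex_t lock;	/* stand-in for the kernel's rq->lock */
	int lock_held;		/* crude stand-in for lockdep's tracking */
	uint64_t cpuusage;	/* stand-in for the per-CPU cpuacct counter */
};

static struct rq this_rq = { .lock = PTHREAD_MUTEX_INITIALIZER };

/*
 * Models cpuacct_charge(): it takes no lock of its own and only asserts
 * that the caller already holds the rq lock (the role played by
 * lockdep_assert_held() in the patch below).
 */
static void cpuacct_charge(struct rq *rq, uint64_t cputime)
{
	assert(rq->lock_held);
	rq->cpuusage += cputime;
}

/*
 * Models update_curr(): the scheduler already runs this with the rq lock
 * held, which is why a nested RCU read lock in the charge path is
 * redundant.
 */
static void update_curr(struct rq *rq, uint64_t delta_exec)
{
	pthread_mutex_lock(&rq->lock);
	rq->lock_held = 1;

	cpuacct_charge(rq, delta_exec);	/* called with the lock held */

	rq->lock_held = 0;
	pthread_mutex_unlock(&rq->lock);
}

int main(void)
{
	update_curr(&this_rq, 1000);
	printf("cpuusage = %llu\n", (unsigned long long)this_rq.cpuusage);
	return 0;
}

With the caller already holding the lock, the callee only needs to assert the
precondition, which is what lockdep_assert_held() does in the hunk below.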
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220220051426.5274-2-zhouchengming@bytedance.com
[OP: adjusted lockdep_assert_rq_held() -> lockdep_assert_held()]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
---
include/linux/cgroup.h | 2 --
kernel/sched/cpuacct.c | 4 +---
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 959b370733f0..7653f5418950 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -779,11 +779,9 @@ static inline void cgroup_account_cputime(struct task_struct *task,
 
 	cpuacct_charge(task, delta_exec);
 
-	rcu_read_lock();
 	cgrp = task_dfl_cgroup(task);
 	if (cgroup_parent(cgrp))
 		__cgroup_account_cputime(cgrp, delta_exec);
-	rcu_read_unlock();
 }
 
 static inline void cgroup_account_cputime_field(struct task_struct *task,
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 3c59c541dd31..8ee298321d78 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -331,12 +331,10 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
 	unsigned int cpu = task_cpu(tsk);
 	struct cpuacct *ca;
 
-	rcu_read_lock();
+	lockdep_assert_held(&cpu_rq(cpu)->lock);
 
 	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
 		*per_cpu_ptr(ca->cpuusage, cpu) += cputime;
-
-	rcu_read_unlock();
 }
 
 /*
--
2.31.1
Thread overview: 6+ messages
2023-09-29 13:14 [PATCH 5.10 0/4] cgroup: Fix suspicious rcu_dereference_check() warning ovidiu.panait
2023-09-29 13:14 ` [PATCH 5.10 1/4] sched/cpuacct: Fix user/system in shown cpuacct.usage* ovidiu.panait
2023-09-29 13:14 ` [PATCH 5.10 2/4] sched/cpuacct: Fix charge percpu cpuusage ovidiu.panait
2023-09-29 13:14 ` ovidiu.panait [this message]
2023-09-29 13:14 ` [PATCH 5.10 4/4] cgroup: Fix suspicious rcu_dereference_check() usage warning ovidiu.panait
2023-10-03 11:32 ` [PATCH 5.10 0/4] cgroup: Fix suspicious rcu_dereference_check() warning Sasha Levin