From: Oleg Nesterov <oleg@redhat.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dylan Hatch <dylanbhatch@google.com>,
Kees Cook <keescook@chromium.org>,
Frederic Weisbecker <frederic@kernel.org>,
"Joel Fernandes (Google)" <joel@joelfernandes.org>,
Ard Biesheuvel <ardb@kernel.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
"Eric W. Biederman" <ebiederm@xmission.com>,
Vincent Whitchurch <vincent.whitchurch@axis.com>,
Dmitry Vyukov <dvyukov@google.com>,
Luis Chamberlain <mcgrof@kernel.org>,
Mike Christie <michael.christie@oracle.com>,
David Hildenbrand <david@redhat.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Stefan Roesch <shr@devkernel.io>, Joey Gouly <joey.gouly@arm.com>,
Josh Triplett <josh@joshtriplett.org>,
Helge Deller <deller@gmx.de>,
Ondrej Mosnacek <omosnace@redhat.com>,
Florent Revest <revest@chromium.org>,
Miguel Ojeda <ojeda@kernel.org>,
linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()
Date: Fri, 19 Jan 2024 15:15:01 +0100
Message-ID: <20240119141501.GA23739@redhat.com>
In-Reply-To: <20240117192534.1327608-1-dylanbhatch@google.com>
thread_group_cputime() does its own locking, so we can safely shift
thread_group_cputime_adjusted(), which does another for_each_thread loop,
outside of the ->siglock protected section.

This is also preparation for the next patch, which changes getrusage() to
use stats_lock instead of siglock. Currently the deadlock is not possible:
if getrusage() enters the slow path and takes stats_lock, read_seqretry()
in thread_group_cputime() must always return 0, so thread_group_cputime()
will never try to take the same lock. Still, this looks safer and is
better performance-wise.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
kernel/sys.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/kernel/sys.c b/kernel/sys.c
index e219fcfa112d..70ad06ad852e 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1785,17 +1785,19 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
struct task_struct *t;
unsigned long flags;
u64 tgutime, tgstime, utime, stime;
- unsigned long maxrss = 0;
+ unsigned long maxrss;
+ struct mm_struct *mm;
struct signal_struct *sig = p->signal;
- memset((char *)r, 0, sizeof (*r));
+ memset(r, 0, sizeof(*r));
utime = stime = 0;
+ maxrss = 0;
if (who == RUSAGE_THREAD) {
task_cputime_adjusted(current, &utime, &stime);
accumulate_thread_rusage(p, r);
maxrss = sig->maxrss;
- goto out;
+ goto out_thread;
}
if (!lock_task_sighand(p, &flags))
@@ -1819,9 +1821,6 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
fallthrough;
case RUSAGE_SELF:
- thread_group_cputime_adjusted(p, &tgutime, &tgstime);
- utime += tgutime;
- stime += tgstime;
r->ru_nvcsw += sig->nvcsw;
r->ru_nivcsw += sig->nivcsw;
r->ru_minflt += sig->min_flt;
@@ -1839,19 +1838,24 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
}
unlock_task_sighand(p, &flags);
-out:
- r->ru_utime = ns_to_kernel_old_timeval(utime);
- r->ru_stime = ns_to_kernel_old_timeval(stime);
+ if (who == RUSAGE_CHILDREN)
+ goto out_children;
- if (who != RUSAGE_CHILDREN) {
- struct mm_struct *mm = get_task_mm(p);
+ thread_group_cputime_adjusted(p, &tgutime, &tgstime);
+ utime += tgutime;
+ stime += tgstime;
- if (mm) {
- setmax_mm_hiwater_rss(&maxrss, mm);
- mmput(mm);
- }
+out_thread:
+ mm = get_task_mm(p);
+ if (mm) {
+ setmax_mm_hiwater_rss(&maxrss, mm);
+ mmput(mm);
}
+
+out_children:
r->ru_maxrss = maxrss * (PAGE_SIZE / 1024); /* convert pages to KBs */
+ r->ru_utime = ns_to_kernel_old_timeval(utime);
+ r->ru_stime = ns_to_kernel_old_timeval(stime);
}
SYSCALL_DEFINE2(getrusage, int, who, struct rusage __user *, ru)
--
2.25.1.362.g51ebf55
Thread overview: 11+ messages
2024-01-17 19:25 [RFC PATCH] getrusage: Use trylock when getting sighand lock Dylan Hatch
2024-01-17 20:44 ` Oleg Nesterov
2024-01-18 15:56 ` Oleg Nesterov
2024-01-19 14:15 ` Oleg Nesterov [this message]
2024-01-19 14:15 ` [PATCH 2/2] getrusage: use sig->stats_lock Oleg Nesterov
2024-01-20 3:27 ` Dylan Hatch
2024-01-21 4:45 ` Andrew Morton
2024-01-21 12:07 ` Oleg Nesterov
2024-01-23 2:53 ` Dylan Hatch
2024-01-21 22:32 ` Andrew Morton
2024-01-20 3:29 ` [PATCH 1/2] getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand() Dylan Hatch