From: Oleg Nesterov <oleg@redhat.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>,
	Americo Wang <xiyou.wangcong@gmail.com>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>,
	Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <peterz@infradead.org>,
	Roland McGrath <roland@redhat.com>,
	Spencer Candland <spencer@bluehost.com>,
	Stanislaw Gruszka <sgruszka@redhat.com>,
	linux-kernel@vger.kernel.org
Subject: [PATCH -mm 2/4] cputimers: make sure thread_group_cputime() can't count the same thread twice lockless
Date: Mon, 29 Mar 2010 20:13:29 +0200
Message-ID: <20100329181329.GC16356@redhat.com>
In-Reply-To: <20100329181204.GA16356@redhat.com>

- change __exit_signal() to do __unhash_process() before we accumulate
  the counters in ->signal

- add a pair of memory barriers to thread_group_cputime() and
  __exit_signal() to make sure thread_group_cputime() can never account
  the same thread twice when it races with exit.

  If any thread T was already accounted in ->signal, next_thread() or
  pid_alive() must see the result of __unhash_process(T).
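
For illustration only (not part of the patch), the same barrier pairing
can be sketched as a standalone C11 program, with release/acquire fences
standing in for smp_wmb()/smp_rmb(). All names below (writer(), reader(),
sig_utime, hashed) are invented for the sketch; this is not kernel code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong sig_utime;            /* plays the role of sig->utime */
static atomic_bool hashed = true;         /* "T is still on the thread list" */
static const unsigned long tsk_utime = 5; /* T's own counter */

/* __exit_signal(T): unhash first, then fold T into the group totals. */
static void writer(void)
{
	atomic_store_explicit(&hashed, false, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);         /* smp_wmb() */
	unsigned long cur = atomic_load_explicit(&sig_utime,
						 memory_order_relaxed);
	atomic_store_explicit(&sig_utime, cur + tsk_utime,
			      memory_order_relaxed);
}

/* thread_group_cputime(): read the totals, then walk the "list". */
static unsigned long reader(void)
{
	unsigned long sum = atomic_load_explicit(&sig_utime,
						 memory_order_relaxed);
	atomic_thread_fence(memory_order_acquire);         /* smp_rmb() */
	if (atomic_load_explicit(&hashed, memory_order_relaxed))
		sum += tsk_utime;  /* T still visible: account it per-thread */
	return sum;
}

int main(void)
{
	writer();
	printf("%lu\n", reader());  /* prints 5, never 10 */
	return 0;
}

The guarantee is one-directional: if reader() observes the accumulated
total, the fence pairing forces it to also observe hashed == false, so
T can never be counted twice.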

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---

 kernel/exit.c             |   14 +++++++++-----
 kernel/posix-cpu-timers.c |    6 ++++++
 2 files changed, 15 insertions(+), 5 deletions(-)

--- 34-rc1/kernel/exit.c~cpuacct_2_thread_group_cputime_rcu_safe	2010-03-29 18:03:17.000000000 +0200
+++ 34-rc1/kernel/exit.c	2010-03-29 18:29:35.000000000 +0200
@@ -88,6 +88,8 @@ static void __exit_signal(struct task_st
 					rcu_read_lock_held() ||
 					lockdep_is_held(&tasklist_lock));
 	spin_lock(&sighand->siglock);
+	__unhash_process(tsk, group_dead);
+	sig->nr_threads--;
 
 	posix_cpu_timers_exit(tsk);
 	if (group_dead) {
@@ -111,9 +113,14 @@ static void __exit_signal(struct task_st
 		 * The group leader stays around as a zombie as long
 		 * as there are other threads.  When it gets reaped,
 		 * the exit.c code will add its counts into these totals.
-		 * We won't ever get here for the group leader, since it
-		 * will have been the last reference on the signal_struct.
+		 *
+		 * Make sure that this thread can't be accounted twice
+	 * by thread_group_cputime() under rcu. If it sees
+	 * the result of the accounting below, it must see the result
+		 * of __unhash_process()->__list_del(thread_group) above.
 		 */
+		smp_wmb();
+
 		sig->utime = cputime_add(sig->utime, tsk->utime);
 		sig->stime = cputime_add(sig->stime, tsk->stime);
 		sig->gtime = cputime_add(sig->gtime, tsk->gtime);
@@ -127,9 +134,6 @@ static void __exit_signal(struct task_st
 		sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
 	}
 
-	sig->nr_threads--;
-	__unhash_process(tsk, group_dead);
-
 	/*
 	 * Do this under ->siglock, we can race with another thread
 	 * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
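
To spell the ordering out (an illustrative summary, not part of the diff):

	CPU 0 (__exit_signal)             CPU 1 (thread_group_cputime)
	---------------------             ----------------------------
	__unhash_process(tsk);            times->utime = sig->utime; ...
	smp_wmb();                        smp_rmb();
	sig->utime += tsk->utime; ...     walk the thread list

If CPU 1's reads of sig->* see CPU 0's updates, the wmb/rmb pairing
guarantees that its subsequent list walk sees tsk already unhashed, so
tsk can never be accounted on both sides.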
--- 34-rc1/kernel/posix-cpu-timers.c~cpuacct_2_thread_group_cputime_rcu_safe	2010-03-29 18:09:15.000000000 +0200
+++ 34-rc1/kernel/posix-cpu-timers.c	2010-03-29 18:29:35.000000000 +0200
@@ -239,6 +239,12 @@ void thread_group_cputime(struct task_st
 	times->utime = sig->utime;
 	times->stime = sig->stime;
 	times->sum_exec_runtime = sig->sum_sched_runtime;
+	/*
+	 * This pairs with the smp_wmb() in __exit_signal(). If any
+	 * thread was already accounted in tsk->signal, the
+	 * while_each_thread() walk below must not see it again.
+	 */
+	smp_rmb();
 
 	rcu_read_lock();
 	/* make sure we can trust tsk->thread_group list */
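
For context, with this patch applied the surrounding function reads
roughly as follows (reconstructed here against 2.6.34-rc from the hunk
context above; not part of the diff):

void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
{
	struct signal_struct *sig = tsk->signal;
	struct task_struct *t;

	times->utime = sig->utime;
	times->stime = sig->stime;
	times->sum_exec_runtime = sig->sum_sched_runtime;

	/* pairs with the smp_wmb() in __exit_signal() */
	smp_rmb();

	rcu_read_lock();
	/* make sure we can trust tsk->thread_group list */
	if (!likely(pid_alive(tsk)))
		goto out;

	t = tsk;
	do {
		times->utime = cputime_add(times->utime, t->utime);
		times->stime = cputime_add(times->stime, t->stime);
		times->sum_exec_runtime += t->se.sum_exec_runtime;
	} while_each_thread(tsk, t);
out:
	rcu_read_unlock();
}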

