Date: Tue, 17 Mar 2015 20:25:40 +0100
From: Oleg Nesterov
To: Aaron Tomlin
Cc: akpm@linux-foundation.org, rientjes@google.com, dwysocha@redhat.com,
	linux-kernel@vger.kernel.org, Ingo Molnar
Subject: [PATCH 2/2] hung_task: improve the rcu_lock_break() logic
Message-ID: <20150317192540.GC32579@redhat.com>
References: <1426601624-6703-1-git-send-email-atomlin@redhat.com>
	<1426601624-6703-2-git-send-email-atomlin@redhat.com>
	<20150317170920.GA21493@redhat.com>
	<20150317192450.GA32579@redhat.com>
In-Reply-To: <20150317192450.GA32579@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.18 (2008-05-17)

check_hung_uninterruptible_tasks() stops after rcu_lock_break() if either
"t" or "g" exits; this is suboptimal.

If "t" is alive, we can always continue: t->group_leader can be used as
the new "g", so we do not even bother to check g != NULL in this case.

If "g" is alive, we can at least continue the outer for_each_process()
loop.

Signed-off-by: Oleg Nesterov
---
 kernel/hung_task.c | 29 ++++++++++++++++++++---------
 1 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 4735b99..f488059 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -134,20 +134,26 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
  *
  * For preemptible RCU it is sufficient to call rcu_read_unlock in order
  * to exit the grace period. For classic RCU, a reschedule is required.
  */
-static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
+static void rcu_lock_break(struct task_struct **g, struct task_struct **t)
 {
-	bool can_cont;
+	bool alive;
+
+	get_task_struct(*g);
+	get_task_struct(*t);
 
-	get_task_struct(g);
-	get_task_struct(t);
 	rcu_read_unlock();
 	cond_resched();
 	rcu_read_lock();
-	can_cont = pid_alive(g) && pid_alive(t);
-	put_task_struct(t);
-	put_task_struct(g);
-	return can_cont;
+	alive = pid_alive(*g);
+	put_task_struct(*g);
+	if (!alive)
+		*g = NULL;
+
+	alive = pid_alive(*t);
+	put_task_struct(*t);
+	if (!alive)
+		*t = NULL;
 }
 
 /*
@@ -178,7 +184,12 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 
 		if (!--batch_count) {
 			batch_count = HUNG_TASK_BATCHING;
-			if (!rcu_lock_break(g, t))
+			rcu_lock_break(&g, &t);
+			if (t)		/* in case g == NULL */
+				g = t->group_leader;
+			else if (g)	/* continue the outer loop */
+				break;
+			else		/* both dead */
 				goto unlock;
 		}
 		/* use "==" to skip the TASK_KILLABLE tasks */
-- 
1.5.5.1