From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753476AbcGYQX4 (ORCPT ); Mon, 25 Jul 2016 12:23:56 -0400
Received: from mx1.redhat.com ([209.132.183.28]:50572 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752930AbcGYQXy (ORCPT );
	Mon, 25 Jul 2016 12:23:54 -0400
Date: Mon, 25 Jul 2016 18:23:51 +0200
From: Oleg Nesterov
To: Andrew Morton
Cc: Dave Anderson, Ingo Molnar, Peter Zijlstra, "Paul E. McKenney",
	Wang Shu, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] hung_task.c: change rcu_lock_break() code to use for_each_process_thread_break/continue
Message-ID: <20160725162351.GA23950@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160725162332.GA23935@redhat.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.29]); Mon, 25 Jul 2016 16:23:53 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Change check_hung_uninterruptible_tasks() to use the new helpers and
remove the "can_cont" logic from rcu_lock_break(), which we will
probably export later for other users (show_state_filter).

We could add for_each_process_thread_break/continue right into
rcu_lock_break() but see the next patch.

Signed-off-by: Oleg Nesterov
---
 kernel/hung_task.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index d234022..517f52e 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -134,20 +134,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
  * For preemptible RCU it is sufficient to call rcu_read_unlock in order
  * to exit the grace period. For classic RCU, a reschedule is required.
  */
-static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
+static void rcu_lock_break(void)
 {
-	bool can_cont;
-
-	get_task_struct(g);
-	get_task_struct(t);
 	rcu_read_unlock();
 	cond_resched();
 	rcu_read_lock();
-	can_cont = pid_alive(g) && pid_alive(t);
-	put_task_struct(t);
-	put_task_struct(g);
-
-	return can_cont;
 }
 
 /*
@@ -170,16 +161,18 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 
 	rcu_read_lock();
 	for_each_process_thread(g, t) {
-		if (!max_count--)
+		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
+		if (t->state == TASK_UNINTERRUPTIBLE)
+			check_hung_task(t, timeout);
+
+		if (!--max_count)
 			goto unlock;
 		if (!--batch_count) {
 			batch_count = HUNG_TASK_BATCHING;
-			if (!rcu_lock_break(g, t))
-				goto unlock;
+			for_each_process_thread_break(g, t);
+			rcu_lock_break();
+			for_each_process_thread_continue(&g, &t);
 		}
-		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
-		if (t->state == TASK_UNINTERRUPTIBLE)
-			check_hung_task(t, timeout);
 	}
  unlock:
 	rcu_read_unlock();
-- 
2.5.0