public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/4] sched: don't use while_each_thread()
@ 2014-08-13 19:19 Oleg Nesterov
  2014-08-13 19:19 ` [PATCH 1/4] sched: s/do_each_thread/for_each_process_thread/ in core.c Oleg Nesterov
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-13 19:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

Peter, could you take these simple patches?

Better late than never... The patches are per-file, but please feel free
to join them into a single patch.

The read_lock_irq*(tasklist_lock) usage in the kernel/sched/ files looks
strange. Why disable irqs? I'll recheck, but this looks unneeded.

Oleg.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/4] sched: s/do_each_thread/for_each_process_thread/ in core.c
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
@ 2014-08-13 19:19 ` Oleg Nesterov
  2014-08-20  8:18   ` [tip:sched/core] " tip-bot for Oleg Nesterov
  2014-08-13 19:19 ` [PATCH 2/4] sched: s/do_each_thread/for_each_process_thread/ in debug.c Oleg Nesterov
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-13 19:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

Change kernel/sched/core.c to use for_each_process_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/core.c |   13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1211575..8a6506f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4505,7 +4505,7 @@ void show_state_filter(unsigned long state_filter)
 		"  task                        PC stack   pid father\n");
 #endif
 	rcu_read_lock();
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		/*
 		 * reset the NMI-timeout, listing all files on a slow
 		 * console might take a lot of time:
@@ -4513,7 +4513,7 @@ void show_state_filter(unsigned long state_filter)
 		touch_nmi_watchdog();
 		if (!state_filter || (p->state & state_filter))
 			sched_show_task(p);
-	} while_each_thread(g, p);
+	}
 
 	touch_all_softlockup_watchdogs();
 
@@ -7138,7 +7138,7 @@ void normalize_rt_tasks(void)
 	struct rq *rq;
 
 	read_lock_irqsave(&tasklist_lock, flags);
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		/*
 		 * Only normalize user tasks:
 		 */
@@ -7169,8 +7169,7 @@ void normalize_rt_tasks(void)
 
 		__task_rq_unlock(rq);
 		raw_spin_unlock(&p->pi_lock);
-	} while_each_thread(g, p);
-
+	}
 	read_unlock_irqrestore(&tasklist_lock, flags);
 }
 
@@ -7358,10 +7357,10 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
 {
 	struct task_struct *g, *p;
 
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		if (rt_task(p) && task_rq(p)->rt.tg == tg)
 			return 1;
-	} while_each_thread(g, p);
+	}
 
 	return 0;
 }
-- 
1.5.5.1



* [PATCH 2/4] sched: s/do_each_thread/for_each_process_thread/ in debug.c
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
  2014-08-13 19:19 ` [PATCH 1/4] sched: s/do_each_thread/for_each_process_thread/ in core.c Oleg Nesterov
@ 2014-08-13 19:19 ` Oleg Nesterov
  2014-08-20  8:19   ` [tip:sched/core] " tip-bot for Oleg Nesterov
  2014-08-13 19:20 ` [PATCH 3/4] sched: change thread_group_cputime() to use for_each_thread() Oleg Nesterov
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-13 19:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

Change kernel/sched/debug.c to use for_each_process_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/debug.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 627b3c3..c7fe1ea 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -160,14 +160,12 @@ static void print_rq(struct seq_file *m, struct rq *rq, int rq_cpu)
 	"----------------------------------------------------\n");
 
 	read_lock_irqsave(&tasklist_lock, flags);
-
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		if (task_cpu(p) != rq_cpu)
 			continue;
 
 		print_task(m, rq, p);
-	} while_each_thread(g, p);
-
+	}
 	read_unlock_irqrestore(&tasklist_lock, flags);
 }
 
-- 
1.5.5.1



* [PATCH 3/4] sched: change thread_group_cputime() to use for_each_thread()
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
  2014-08-13 19:19 ` [PATCH 1/4] sched: s/do_each_thread/for_each_process_thread/ in core.c Oleg Nesterov
  2014-08-13 19:19 ` [PATCH 2/4] sched: s/do_each_thread/for_each_process_thread/ in debug.c Oleg Nesterov
@ 2014-08-13 19:20 ` Oleg Nesterov
  2014-08-20  8:19   ` [tip:sched/core] sched: Change " tip-bot for Oleg Nesterov
  2014-08-13 19:20 ` [PATCH 4/4] sched: change autogroup_move_group() " Oleg Nesterov
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-13 19:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

Change thread_group_cputime() to use for_each_thread() instead of
buggy while_each_thread(). This also makes the pid_alive() check
unnecessary.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/cputime.c |   10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 72fdf06..3e52836 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -294,18 +294,12 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 	times->sum_exec_runtime = sig->sum_sched_runtime;
 
 	rcu_read_lock();
-	/* make sure we can trust tsk->thread_group list */
-	if (!likely(pid_alive(tsk)))
-		goto out;
-
-	t = tsk;
-	do {
+	for_each_thread(tsk, t) {
 		task_cputime(t, &utime, &stime);
 		times->utime += utime;
 		times->stime += stime;
 		times->sum_exec_runtime += task_sched_runtime(t);
-	} while_each_thread(tsk, t);
-out:
+	}
 	rcu_read_unlock();
 }
 
-- 
1.5.5.1



* [PATCH 4/4] sched: change autogroup_move_group() to use for_each_thread()
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
                   ` (2 preceding siblings ...)
  2014-08-13 19:20 ` [PATCH 3/4] sched: change thread_group_cputime() to use for_each_thread() Oleg Nesterov
@ 2014-08-13 19:20 ` Oleg Nesterov
  2014-08-20  8:19   ` [tip:sched/core] sched: Change " tip-bot for Oleg Nesterov
  2014-08-13 19:23 ` [PATCH 0/4] sched: don't use while_each_thread() Peter Zijlstra
  2014-08-17 15:25 ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Oleg Nesterov
  5 siblings, 1 reply; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-13 19:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

Change autogroup_move_group() to use for_each_thread() instead of
buggy while_each_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/auto_group.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
index e73efba..8a2e230 100644
--- a/kernel/sched/auto_group.c
+++ b/kernel/sched/auto_group.c
@@ -148,11 +148,8 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
 	if (!ACCESS_ONCE(sysctl_sched_autogroup_enabled))
 		goto out;
 
-	t = p;
-	do {
+	for_each_thread(p, t)
 		sched_move_task(t);
-	} while_each_thread(p, t);
-
 out:
 	unlock_task_sighand(p, &flags);
 	autogroup_kref_put(prev);
-- 
1.5.5.1



* Re: [PATCH 0/4] sched: don't use while_each_thread()
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
                   ` (3 preceding siblings ...)
  2014-08-13 19:20 ` [PATCH 4/4] sched: change autogroup_move_group() " Oleg Nesterov
@ 2014-08-13 19:23 ` Peter Zijlstra
  2014-08-17 15:25 ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Oleg Nesterov
  5 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2014-08-13 19:23 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel


On Wed, Aug 13, 2014 at 09:19:38PM +0200, Oleg Nesterov wrote:
> Peter, could you take these simple patches?

Done, thanks!



* [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread())
  2014-08-13 19:19 [PATCH 0/4] sched: don't use while_each_thread() Oleg Nesterov
                   ` (4 preceding siblings ...)
  2014-08-13 19:23 ` [PATCH 0/4] sched: don't use while_each_thread() Peter Zijlstra
@ 2014-08-17 15:25 ` Oleg Nesterov
  2014-08-17 15:26   ` [PATCH 1/2] sched: normalize_rt_tasks: don't use _irqsave for tasklist_lock, use task_rq_lock() Oleg Nesterov
                     ` (2 more replies)
  5 siblings, 3 replies; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-17 15:25 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel, Kirill Tkhai

On 08/13, Oleg Nesterov wrote:
>
> Peter, could you take these simple patches?
>
> Better late than never... The patches are per-file, but please feel free
> to join them into a single patch.
>
> The read_lock_irq*(tasklist_lock) usage in the kernel/sched/ files looks
> strange. Why disable irqs? I'll recheck, but this looks unneeded.

Yes, please consider these minor cleanups on top of for_each_thread
conversions.

read_lock_irq(tasklist) in normalize_rt_tasks() doesn't really hurt,
but it looks confusing. If we really have a reason to disable irqs,
this (subtle) reason should be documented.

And I can't understand tg_has_rt_tasks(). Don't we need something
like the patch below? If not, please do not ask me why I think so,
I don't understand this black magic ;) But the usage of the global
"runqueues" array looks suspicious.

Oleg.

--- x/kernel/sched/core.c
+++ x/kernel/sched/core.c
@@ -7354,7 +7354,7 @@ static inline int tg_has_rt_tasks(struct
 	struct task_struct *g, *p;
 
 	for_each_process_thread(g, p) {
-		if (rt_task(p) && task_rq(p)->rt.tg == tg)
+		if (rt_task(p) && task_group(p) == tg)
 			return 1;
 	}
 



* [PATCH 1/2] sched: normalize_rt_tasks: don't use _irqsave for tasklist_lock, use task_rq_lock()
  2014-08-17 15:25 ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Oleg Nesterov
@ 2014-08-17 15:26   ` Oleg Nesterov
  2014-08-17 15:26   ` [PATCH 2/2] sched: print_rq: don't use tasklist_lock Oleg Nesterov
  2014-08-17 21:14   ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Kirill Tkhai
  2 siblings, 0 replies; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-17 15:26 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel, Kirill Tkhai

1. read_lock(tasklist_lock) does not need to disable irqs.

2. Checking ->mm != NULL is a common mistake; use PF_KTHREAD instead.

3. The second ->mm check can be simply removed.

4. task_rq_lock() looks better than raw_spin_lock(&p->pi_lock) +
   __task_rq_lock().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/core.c |   16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8a6506f..eee12b3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7137,12 +7137,12 @@ void normalize_rt_tasks(void)
 	unsigned long flags;
 	struct rq *rq;
 
-	read_lock_irqsave(&tasklist_lock, flags);
+	read_lock(&tasklist_lock);
 	for_each_process_thread(g, p) {
 		/*
 		 * Only normalize user tasks:
 		 */
-		if (!p->mm)
+		if (p->flags & PF_KTHREAD)
 			continue;
 
 		p->se.exec_start		= 0;
@@ -7157,20 +7157,16 @@ void normalize_rt_tasks(void)
 			 * Renice negative nice level userspace
 			 * tasks back to 0:
 			 */
-			if (task_nice(p) < 0 && p->mm)
+			if (task_nice(p) < 0)
 				set_user_nice(p, 0);
 			continue;
 		}
 
-		raw_spin_lock(&p->pi_lock);
-		rq = __task_rq_lock(p);
-
+		rq = task_rq_lock(p, &flags);
 		normalize_task(rq, p);
-
-		__task_rq_unlock(rq);
-		raw_spin_unlock(&p->pi_lock);
+		task_rq_unlock(rq, p, &flags);
 	}
-	read_unlock_irqrestore(&tasklist_lock, flags);
+	read_unlock(&tasklist_lock);
 }
 
 #endif /* CONFIG_MAGIC_SYSRQ */
-- 
1.5.5.1




* [PATCH 2/2] sched: print_rq: don't use tasklist_lock
  2014-08-17 15:25 ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Oleg Nesterov
  2014-08-17 15:26   ` [PATCH 1/2] sched: normalize_rt_tasks: don't use _irqsave for tasklist_lock, use task_rq_lock() Oleg Nesterov
@ 2014-08-17 15:26   ` Oleg Nesterov
  2014-08-17 21:14   ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Kirill Tkhai
  2 siblings, 0 replies; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-17 15:26 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel, Kirill Tkhai

read_lock_irqsave(tasklist_lock) in print_rq() looks strange. We do
not need to disable irqs, and they are already disabled by the caller.

And AFAICS this lock buys nothing; we can rely on rcu_read_lock().
In this case it makes sense to also move rcu_read_lock/unlock from
the caller to print_rq().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched/debug.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index c7fe1ea..ce33780 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -150,7 +150,6 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
 static void print_rq(struct seq_file *m, struct rq *rq, int rq_cpu)
 {
 	struct task_struct *g, *p;
-	unsigned long flags;
 
 	SEQ_printf(m,
 	"\nrunnable tasks:\n"
@@ -159,14 +158,14 @@ static void print_rq(struct seq_file *m, struct rq *rq, int rq_cpu)
 	"------------------------------------------------------"
 	"----------------------------------------------------\n");
 
-	read_lock_irqsave(&tasklist_lock, flags);
+	rcu_read_lock();
 	for_each_process_thread(g, p) {
 		if (task_cpu(p) != rq_cpu)
 			continue;
 
 		print_task(m, rq, p);
 	}
-	read_unlock_irqrestore(&tasklist_lock, flags);
+	rcu_read_unlock();
 }
 
 void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
@@ -331,9 +330,7 @@ do {									\
 	print_cfs_stats(m, cpu);
 	print_rt_stats(m, cpu);
 
-	rcu_read_lock();
 	print_rq(m, rq, cpu);
-	rcu_read_unlock();
 	spin_unlock_irqrestore(&sched_debug_lock, flags);
 	SEQ_printf(m, "\n");
 }
-- 
1.5.5.1




* Re: [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread())
  2014-08-17 15:25 ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Oleg Nesterov
  2014-08-17 15:26   ` [PATCH 1/2] sched: normalize_rt_tasks: don't use _irqsave for tasklist_lock, use task_rq_lock() Oleg Nesterov
  2014-08-17 15:26   ` [PATCH 2/2] sched: print_rq: don't use tasklist_lock Oleg Nesterov
@ 2014-08-17 21:14   ` Kirill Tkhai
  2014-08-18 15:09     ` Oleg Nesterov
  2 siblings, 1 reply; 15+ messages in thread
From: Kirill Tkhai @ 2014-08-17 21:14 UTC (permalink / raw)
  To: Oleg Nesterov, Peter Zijlstra
  Cc: Rik van Riel, Mike Galbraith, Hidetoshi Seto, Frank Mayhar,
	Frederic Weisbecker, Andrew Morton, Sanjay Rao, Larry Woodman,
	linux-kernel

On 17.08.2014 19:25, Oleg Nesterov wrote:
> On 08/13, Oleg Nesterov wrote:
>>
>> Peter, could you take these simple patches?
>>
>> Better late than never... The patches are per-file, but please feel free
>> to join them into a single patch.
>>
>> The read_lock_irq*(tasklist_lock) usage in the kernel/sched/ files looks
>> strange. Why disable irqs? I'll recheck, but this looks unneeded.
> 
> Yes, please consider these minor cleanups on top of for_each_thread
> conversions.
> 
> read_lock_irq(tasklist) in normalize_rt_tasks() doesn't really hurt,
> but it looks confusing. If we really have a reason to disable irqs
> this (subtle) reason should be documented.
> 
> And I can't understand tg_has_rt_tasks(). Don't we need something
> like the patch below? If not, please do not ask me why I think so,
> I don't understand this black magic ;) But the usage of the global
> "runqueues" array looks suspicious.

This function searches for an RT task which is related to this tg. It is
opaque, because it looks like there is an error.

task_rq(p)->rt.tg is the task group of the top-level rt_rq, while the task
may be queued on a child rt_rq instead. So, your patch is a BUGFIX,
not a cleanup.

> --- x/kernel/sched/core.c
> +++ x/kernel/sched/core.c
> @@ -7354,7 +7354,7 @@ static inline int tg_has_rt_tasks(struct
>  	struct task_struct *g, *p;
>  
>  	for_each_process_thread(g, p) {
> -		if (rt_task(p) && task_rq(p)->rt.tg == tg)
> +		if (rt_task(p) && task_group(p) == tg)
>  			return 1;
>  	}


* Re: [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread())
  2014-08-17 21:14   ` [PATCH 0/2] sched: tasklist_lock cleanups (Was: don't use while_each_thread()) Kirill Tkhai
@ 2014-08-18 15:09     ` Oleg Nesterov
  0 siblings, 0 replies; 15+ messages in thread
From: Oleg Nesterov @ 2014-08-18 15:09 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: Peter Zijlstra, Rik van Riel, Mike Galbraith, Hidetoshi Seto,
	Frank Mayhar, Frederic Weisbecker, Andrew Morton, Sanjay Rao,
	Larry Woodman, linux-kernel

On 08/18, Kirill Tkhai wrote:
>
> On 17.08.2014 19:25, Oleg Nesterov wrote:
> >
> > And I can't understand tg_has_rt_tasks(). Don't we need something
> > like the patch below? If not, please do not ask me why I think so,
> > I don't understand this black magic ;) But the usage of the global
> > "runqueues" array looks suspicious.
>
> This function searches for an RT task which is related to this tg. It is
> opaque, because it looks like there is an error.
>
> task_rq(p)->rt.tg is the task group of the top-level rt_rq, while the task
> may be queued on a child rt_rq instead. So, your patch is a BUGFIX,
> not a cleanup.

Yes, thanks, this was my (vague) understanding. But since I don't even
know the terminology, I wasn't able to explain my concerns.

OK, I am going to shamelessly steal your words and turn them into the
changelog.

Thanks.

Oleg.



* [tip:sched/core] sched: s/do_each_thread/for_each_process_thread/ in core.c
  2014-08-13 19:19 ` [PATCH 1/4] sched: s/do_each_thread/for_each_process_thread/ in core.c Oleg Nesterov
@ 2014-08-20  8:18   ` tip-bot for Oleg Nesterov
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Oleg Nesterov @ 2014-08-20  8:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, seto.hidetoshi, peterz,
	umgwanakikbuti, riel, fmayhar, akpm, srao, fweisbec, tglx, oleg,
	lwoodman

Commit-ID:  5d07f4202c5d63b73ba1734ed38e08461a689313
Gitweb:     http://git.kernel.org/tip/5d07f4202c5d63b73ba1734ed38e08461a689313
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Wed, 13 Aug 2014 21:19:53 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 20 Aug 2014 09:47:17 +0200

sched: s/do_each_thread/for_each_process_thread/ in core.c

Change kernel/sched/core.c to use for_each_process_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sanjay Rao <srao@redhat.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140813191953.GA19315@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7d1ec6e..4f2826f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4505,7 +4505,7 @@ void show_state_filter(unsigned long state_filter)
 		"  task                        PC stack   pid father\n");
 #endif
 	rcu_read_lock();
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		/*
 		 * reset the NMI-timeout, listing all files on a slow
 		 * console might take a lot of time:
@@ -4513,7 +4513,7 @@ void show_state_filter(unsigned long state_filter)
 		touch_nmi_watchdog();
 		if (!state_filter || (p->state & state_filter))
 			sched_show_task(p);
-	} while_each_thread(g, p);
+	}
 
 	touch_all_softlockup_watchdogs();
 
@@ -7137,7 +7137,7 @@ void normalize_rt_tasks(void)
 	struct rq *rq;
 
 	read_lock_irqsave(&tasklist_lock, flags);
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		/*
 		 * Only normalize user tasks:
 		 */
@@ -7168,8 +7168,7 @@ void normalize_rt_tasks(void)
 
 		__task_rq_unlock(rq);
 		raw_spin_unlock(&p->pi_lock);
-	} while_each_thread(g, p);
-
+	}
 	read_unlock_irqrestore(&tasklist_lock, flags);
 }
 
@@ -7357,10 +7356,10 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
 {
 	struct task_struct *g, *p;
 
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		if (rt_task(p) && task_rq(p)->rt.tg == tg)
 			return 1;
-	} while_each_thread(g, p);
+	}
 
 	return 0;
 }


* [tip:sched/core] sched: s/do_each_thread/for_each_process_thread/ in debug.c
  2014-08-13 19:19 ` [PATCH 2/4] sched: s/do_each_thread/for_each_process_thread/ in debug.c Oleg Nesterov
@ 2014-08-20  8:19   ` tip-bot for Oleg Nesterov
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Oleg Nesterov @ 2014-08-20  8:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, seto.hidetoshi, peterz,
	umgwanakikbuti, riel, fmayhar, akpm, srao, fweisbec, tglx, oleg,
	lwoodman

Commit-ID:  d38e83c715270cc2e137bbf6f25206c8c023896b
Gitweb:     http://git.kernel.org/tip/d38e83c715270cc2e137bbf6f25206c8c023896b
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Wed, 13 Aug 2014 21:19:56 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 20 Aug 2014 09:47:17 +0200

sched: s/do_each_thread/for_each_process_thread/ in debug.c

Change kernel/sched/debug.c to use for_each_process_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sanjay Rao <srao@redhat.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140813191956.GA19324@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/debug.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 627b3c3..c7fe1ea0 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -160,14 +160,12 @@ static void print_rq(struct seq_file *m, struct rq *rq, int rq_cpu)
 	"----------------------------------------------------\n");
 
 	read_lock_irqsave(&tasklist_lock, flags);
-
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		if (task_cpu(p) != rq_cpu)
 			continue;
 
 		print_task(m, rq, p);
-	} while_each_thread(g, p);
-
+	}
 	read_unlock_irqrestore(&tasklist_lock, flags);
 }
 


* [tip:sched/core] sched: Change thread_group_cputime() to use for_each_thread()
  2014-08-13 19:20 ` [PATCH 3/4] sched: change thread_group_cputime() to use for_each_thread() Oleg Nesterov
@ 2014-08-20  8:19   ` tip-bot for Oleg Nesterov
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Oleg Nesterov @ 2014-08-20  8:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, seto.hidetoshi, peterz,
	umgwanakikbuti, riel, fmayhar, akpm, srao, fweisbec, tglx, oleg,
	lwoodman

Commit-ID:  1e4dda08b4c39b3d8f4a3ee7269d49e0200c8af8
Gitweb:     http://git.kernel.org/tip/1e4dda08b4c39b3d8f4a3ee7269d49e0200c8af8
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Wed, 13 Aug 2014 21:20:00 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 20 Aug 2014 09:47:18 +0200

sched: Change thread_group_cputime() to use for_each_thread()

Change thread_group_cputime() to use for_each_thread() instead of
buggy while_each_thread(). This also makes the pid_alive() check
unnecessary.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sanjay Rao <srao@redhat.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140813192000.GA19327@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/cputime.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 72fdf06..3e52836 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -294,18 +294,12 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 	times->sum_exec_runtime = sig->sum_sched_runtime;
 
 	rcu_read_lock();
-	/* make sure we can trust tsk->thread_group list */
-	if (!likely(pid_alive(tsk)))
-		goto out;
-
-	t = tsk;
-	do {
+	for_each_thread(tsk, t) {
 		task_cputime(t, &utime, &stime);
 		times->utime += utime;
 		times->stime += stime;
 		times->sum_exec_runtime += task_sched_runtime(t);
-	} while_each_thread(tsk, t);
-out:
+	}
 	rcu_read_unlock();
 }
 


* [tip:sched/core] sched: Change autogroup_move_group() to use for_each_thread()
  2014-08-13 19:20 ` [PATCH 4/4] sched: change autogroup_move_group() " Oleg Nesterov
@ 2014-08-20  8:19   ` tip-bot for Oleg Nesterov
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Oleg Nesterov @ 2014-08-20  8:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, seto.hidetoshi, peterz,
	umgwanakikbuti, riel, fmayhar, akpm, srao, fweisbec, tglx, oleg,
	lwoodman

Commit-ID:  5aface53d1a0ef7823215c4078fca8445995d006
Gitweb:     http://git.kernel.org/tip/5aface53d1a0ef7823215c4078fca8445995d006
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Wed, 13 Aug 2014 21:20:03 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 20 Aug 2014 09:47:18 +0200

sched: Change autogroup_move_group() to use for_each_thread()

Change autogroup_move_group() to use for_each_thread() instead of
buggy while_each_thread().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Frederic Weisbecker <fweisbec@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sanjay Rao <srao@redhat.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140813192003.GA19334@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/auto_group.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
index e73efba..8a2e230 100644
--- a/kernel/sched/auto_group.c
+++ b/kernel/sched/auto_group.c
@@ -148,11 +148,8 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
 	if (!ACCESS_ONCE(sysctl_sched_autogroup_enabled))
 		goto out;
 
-	t = p;
-	do {
+	for_each_thread(p, t)
 		sched_move_task(t);
-	} while_each_thread(p, t);
-
 out:
 	unlock_task_sighand(p, &flags);
 	autogroup_kref_put(prev);


