* [patch] sched: fix unregister_fair_sched_group
From: Paul Turner @ 2010-11-30 0:55 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Peter Zijlstra, Mike Galbraith
In the flipping and flopping between calling unregister_fair_sched_group on a
per-cpu versus per-group basis we ended up in a bad state.
Remove from the list for the passed cpu as opposed to some arbitrary index.
(This fixes explosions w/ autogroup as well as a group
creation/destruction stress test.)
Signed-off-by: Paul Turner <pjt@google.com>
---
kernel/sched.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Index: tip/kernel/sched.c
===================================================================
--- tip.orig/kernel/sched.c
+++ tip/kernel/sched.c
@@ -8097,7 +8097,6 @@ static inline void unregister_fair_sched
{
struct rq *rq = cpu_rq(cpu);
unsigned long flags;
- int i;
/*
* Only empty task groups can be destroyed; so we can speculatively
@@ -8107,7 +8106,7 @@ static inline void unregister_fair_sched
return;
raw_spin_lock_irqsave(&rq->lock, flags);
- list_del_leaf_cfs_rq(tg->cfs_rq[i]);
+ list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
raw_spin_unlock_irqrestore(&rq->lock, flags);
}
#else /* !CONFG_FAIR_GROUP_SCHED */
* [tip:sched/core] sched: Fix unregister_fair_sched_group()
From: tip-bot for Paul Turner @ 2010-11-30 9:09 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, a.p.zijlstra, efault, pjt, tglx, sfr,
mingo
Commit-ID: 822bc180a7f7a7bc5fcaaea195f41b487cc8cae8
Gitweb: http://git.kernel.org/tip/822bc180a7f7a7bc5fcaaea195f41b487cc8cae8
Author: Paul Turner <pjt@google.com>
AuthorDate: Mon, 29 Nov 2010 16:55:40 -0800
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Tue, 30 Nov 2010 10:07:10 +0100
sched: Fix unregister_fair_sched_group()
In the flipping and flopping between calling
unregister_fair_sched_group() on a per-cpu versus per-group basis
we ended up in a bad state.
Remove from the list for the passed cpu as opposed to some
arbitrary index.
( This fixes explosions w/ autogroup as well as a group
creation/destruction stress test. )
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20101130005740.080828123@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
kernel/sched.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 35a6373..66ef579 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8085,7 +8085,6 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
{
struct rq *rq = cpu_rq(cpu);
unsigned long flags;
- int i;
/*
* Only empty task groups can be destroyed; so we can speculatively
@@ -8095,7 +8094,7 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
return;
raw_spin_lock_irqsave(&rq->lock, flags);
- list_del_leaf_cfs_rq(tg->cfs_rq[i]);
+ list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
raw_spin_unlock_irqrestore(&rq->lock, flags);
}
#else /* !CONFG_FAIR_GROUP_SCHED */