* [patch] fix smt nice lock contention and optimization
From: Chen, Kenneth W @ 2006-06-03 7:43 UTC (permalink / raw)
To: 'Andrew Morton', 'Nick Piggin',
'Chris Mason', 'Con Kolivas'
Cc: Ingo Molnar, linux-kernel
OK, final rolled-up patch with everyone's changes. I fixed one bug
introduced by Con's earlier patch: an unpaired
spin_trylock/spin_unlock in the for loop of dependent_sleeper().
Chris, Con, Nick - please review and provide your signed-off-by line.
Andrew - please consider for -mm inclusion. Thanks.
[patch] fix smt nice lock contention and optimization
Initial report and lock contention fix from Chris Mason:
Recent benchmarks showed some performance regressions between 2.6.16 and
2.6.5. We tracked down one of the regressions to lock contention in
schedule-heavy workloads (~70,000 context switches per second).
kernel/sched.c:dependent_sleeper() was responsible for most of the lock
contention, hammering on the run queue locks. The patch below is more of
a discussion point than a suggested fix (although it does reduce lock
contention significantly). The dependent_sleeper code looks very expensive
to me, especially for using a spinlock to bounce control between two different
siblings on the same cpu.
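The core of the fix is to stop taking the sibling runqueue locks unconditionally and use trylock instead, skipping a sibling whose lock is busy. A rough user-space sketch of that pattern (pthread mutexes standing in for runqueue spinlocks; the struct and function names here are illustrative, not the kernel's):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical miniature of a per-CPU runqueue: just a lock plus a
 * counter standing in for the real wakeup work. */
struct rq {
	pthread_mutex_t lock;
	int nr_wakeups;
};

/*
 * The trylock pattern used by the patch: if the sibling's runqueue
 * lock is contended, bypass that sibling instead of spinning on it.
 * The SMT nice wakeup is an optimization, not a correctness
 * requirement, so occasionally skipping a pass is harmless -- and
 * nobody ever blocks waiting for a sibling's lock.
 */
static bool try_wake_sibling(struct rq *smt_rq)
{
	if (pthread_mutex_trylock(&smt_rq->lock) != 0)
		return false;		/* contended: skip this sibling */
	smt_rq->nr_wakeups++;		/* stands in for wakeup_busy_runqueue() */
	pthread_mutex_unlock(&smt_rq->lock);
	return true;
}
```

Because the caller never blocks, it also never has to drop its own runqueue lock first, which is what eliminates both the contention and the relock/recheck dance in the old code.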
It is further optimized:
* perform dependent_sleeper check after next task is determined
* convert wake_sleeping_dependent to use trylock
* skip smt runqueue check if trylock fails
* optimize double_rq_lock now that smt nice is converted to trylock
* early exit in searching first SD_SHARE_CPUPOWER domain
* speedup fast path of dependent_sleeper
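With the per-runqueue ->cpu field removed, double_rq_lock() orders the two locks by runqueue address rather than by cpu number; any globally consistent order is enough to avoid the AB/BA deadlock. A minimal user-space sketch of that ordering rule (pthread mutexes standing in for spinlocks, names illustrative):

```c
#include <pthread.h>

struct rq {
	pthread_mutex_t lock;
};

/*
 * Mirror of the patch's double_rq_lock() idea: always take the
 * lower-addressed runqueue's lock first.  Two CPUs locking the same
 * pair in opposite argument order still acquire in the same address
 * order, so they cannot deadlock against each other.
 */
static void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	if (rq1 == rq2) {
		pthread_mutex_lock(&rq1->lock);
	} else if (rq1 < rq2) {
		pthread_mutex_lock(&rq1->lock);
		pthread_mutex_lock(&rq2->lock);
	} else {
		pthread_mutex_lock(&rq2->lock);
		pthread_mutex_lock(&rq1->lock);
	}
}

static void double_rq_unlock(struct rq *rq1, struct rq *rq2)
{
	pthread_mutex_unlock(&rq1->lock);
	if (rq1 != rq2)
		pthread_mutex_unlock(&rq2->lock);
}
```

(Comparing pointers to unrelated objects with `<` is formally undefined in ISO C, but the kernel's runqueues live in one per-cpu array, and the comparison is well defined there.)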
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
---
sched.c | 168 ++++++++++++++++++----------------------------------------------
1 files changed, 48 insertions(+), 120 deletions(-)
diff -Nurp 2.6.17-rc5-mm2/kernel/sched.c ken/kernel/sched.c
--- 2.6.17-rc5-mm2/kernel/sched.c 2006-06-02 22:34:04.000000000 -0700
+++ ken/kernel/sched.c 2006-06-02 22:52:28.000000000 -0700
@@ -248,7 +248,6 @@ struct runqueue {
task_t *migration_thread;
struct list_head migration_queue;
- int cpu;
#endif
#ifdef CONFIG_SCHEDSTATS
@@ -1887,9 +1886,6 @@ unsigned long nr_active(void)
/*
* double_rq_lock - safely lock two runqueues
*
- * We must take them in cpu order to match code in
- * dependent_sleeper and wake_dependent_sleeper.
- *
* Note this does not disable interrupts like task_rq_lock,
* you need to do so manually before calling.
*/
@@ -1901,7 +1897,7 @@ static void double_rq_lock(runqueue_t *r
spin_lock(&rq1->lock);
__acquire(rq2->lock); /* Fake it out ;) */
} else {
- if (rq1->cpu < rq2->cpu) {
+ if (rq1 < rq2) {
spin_lock(&rq1->lock);
spin_lock(&rq2->lock);
} else {
@@ -1937,7 +1933,7 @@ static void double_lock_balance(runqueue
__acquires(this_rq->lock)
{
if (unlikely(!spin_trylock(&busiest->lock))) {
- if (busiest->cpu < this_rq->cpu) {
+ if (busiest < this_rq) {
spin_unlock_non_nested(&this_rq->lock);
spin_lock(&busiest->lock);
spin_lock(&this_rq->lock);
@@ -2969,48 +2965,33 @@ static inline void wakeup_busy_runqueue(
resched_task(rq->idle);
}
-static void wake_sleeping_dependent(int this_cpu, runqueue_t *this_rq)
+/*
+ * Called with interrupt disabled and this_rq's runqueue locked.
+ */
+static void wake_sleeping_dependent(int this_cpu)
{
struct sched_domain *tmp, *sd = NULL;
- cpumask_t sibling_map;
int i;
for_each_domain(this_cpu, tmp)
- if (tmp->flags & SD_SHARE_CPUPOWER)
+ if (tmp->flags & SD_SHARE_CPUPOWER) {
sd = tmp;
-
+ break;
+ }
if (!sd)
return;
- /*
- * Unlock the current runqueue because we have to lock in
- * CPU order to avoid deadlocks. Caller knows that we might
- * unlock. We keep IRQs disabled.
- */
- spin_unlock(&this_rq->lock);
-
- sibling_map = sd->span;
-
- for_each_cpu_mask(i, sibling_map)
- spin_lock(&cpu_rq(i)->lock);
- /*
- * We clear this CPU from the mask. This both simplifies the
- * inner loop and keps this_rq locked when we exit:
- */
- cpu_clear(this_cpu, sibling_map);
-
- for_each_cpu_mask(i, sibling_map) {
+ for_each_cpu_mask(i, sd->span) {
runqueue_t *smt_rq = cpu_rq(i);
+ if (i == this_cpu)
+ continue;
+ if (unlikely(!spin_trylock(&smt_rq->lock)))
+ continue;
+
wakeup_busy_runqueue(smt_rq);
+ spin_unlock(&smt_rq->lock);
}
-
- for_each_cpu_mask(i, sibling_map)
- spin_unlock_non_nested(&cpu_rq(i)->lock);
- /*
- * We exit with this_cpu's rq still held and IRQs
- * still disabled:
- */
}
/*
@@ -3023,52 +3004,44 @@ static inline unsigned long smt_slice(ta
return p->time_slice * (100 - sd->per_cpu_gain) / 100;
}
-static int dependent_sleeper(int this_cpu, runqueue_t *this_rq)
+/*
+ * To minimise lock contention and not have to drop this_rq's runlock we only
+ * trylock the sibling runqueues and bypass those runqueues if we fail to
+ * acquire their lock. As we only trylock the normal locking order does not
+ * need to be obeyed.
+ */
+static int dependent_sleeper(int this_cpu, runqueue_t *this_rq, task_t *p)
{
struct sched_domain *tmp, *sd = NULL;
- cpumask_t sibling_map;
- prio_array_t *array;
int ret = 0, i;
- task_t *p;
+
+ /* kernel/rt threads do not participate in dependent sleeping */
+ if (!p->mm || rt_task(p))
+ return 0;
for_each_domain(this_cpu, tmp)
- if (tmp->flags & SD_SHARE_CPUPOWER)
+ if (tmp->flags & SD_SHARE_CPUPOWER) {
sd = tmp;
-
+ break;
+ }
if (!sd)
return 0;
- /*
- * The same locking rules and details apply as for
- * wake_sleeping_dependent():
- */
- spin_unlock_non_nested(&this_rq->lock);
- sibling_map = sd->span;
- for_each_cpu_mask(i, sibling_map)
- spin_lock(&cpu_rq(i)->lock);
- cpu_clear(this_cpu, sibling_map);
+ for_each_cpu_mask(i, sd->span) {
+ runqueue_t *smt_rq;
+ task_t *smt_curr;
- /*
- * Establish next task to be run - it might have gone away because
- * we released the runqueue lock above:
- */
- if (!this_rq->nr_running)
- goto out_unlock;
- array = this_rq->active;
- if (!array->nr_active)
- array = this_rq->expired;
- BUG_ON(!array->nr_active);
+ if (i == this_cpu)
+ continue;
- p = list_entry(array->queue[sched_find_first_bit(array->bitmap)].next,
- task_t, run_list);
+ smt_rq = cpu_rq(i);
+ if (unlikely(!spin_trylock(&smt_rq->lock)))
+ continue;
- for_each_cpu_mask(i, sibling_map) {
- runqueue_t *smt_rq = cpu_rq(i);
- task_t *smt_curr = smt_rq->curr;
+ smt_curr = smt_rq->curr;
- /* Kernel threads do not participate in dependent sleeping */
- if (!p->mm || !smt_curr->mm || rt_task(p))
- goto check_smt_task;
+ if (!smt_curr->mm)
+ goto unlock;
/*
* If a user task with lower static priority than the
@@ -3091,44 +3064,17 @@ static int dependent_sleeper(int this_cp
!TASK_PREEMPTS_CURR(p, smt_rq) &&
smt_slice(smt_curr, sd) > task_timeslice(p))
ret = 1;
-
-check_smt_task:
- if ((!smt_curr->mm && smt_curr != smt_rq->idle) ||
- rt_task(smt_curr))
- continue;
- if (!p->mm) {
- wakeup_busy_runqueue(smt_rq);
- continue;
- }
-
- /*
- * Reschedule a lower priority task on the SMT sibling for
- * it to be put to sleep, or wake it up if it has been put to
- * sleep for priority reasons to see if it should run now.
- */
- if (rt_task(p)) {
- if ((jiffies % DEF_TIMESLICE) >
- (sd->per_cpu_gain * DEF_TIMESLICE / 100))
- resched_task(smt_curr);
- } else {
- if (TASK_PREEMPTS_CURR(p, smt_rq) &&
- smt_slice(p, sd) > task_timeslice(smt_curr))
- resched_task(smt_curr);
- else
- wakeup_busy_runqueue(smt_rq);
- }
+unlock:
+ spin_unlock(&smt_rq->lock);
}
-out_unlock:
- for_each_cpu_mask(i, sibling_map)
- spin_unlock_non_nested(&cpu_rq(i)->lock);
return ret;
}
#else
-static inline void wake_sleeping_dependent(int this_cpu, runqueue_t *this_rq)
+static inline void wake_sleeping_dependent(int this_cpu)
{
}
-static inline int dependent_sleeper(int this_cpu, runqueue_t *this_rq)
+static inline int dependent_sleeper(int this_cpu, runqueue_t *this_rq, task_t *p)
{
return 0;
}
@@ -3255,32 +3201,13 @@ need_resched_nonpreemptible:
cpu = smp_processor_id();
if (unlikely(!rq->nr_running)) {
-go_idle:
idle_balance(cpu, rq);
if (!rq->nr_running) {
next = rq->idle;
rq->expired_timestamp = 0;
- wake_sleeping_dependent(cpu, rq);
- /*
- * wake_sleeping_dependent() might have released
- * the runqueue, so break out if we got new
- * tasks meanwhile:
- */
- if (!rq->nr_running)
- goto switch_tasks;
- }
- } else {
- if (dependent_sleeper(cpu, rq)) {
- next = rq->idle;
+ wake_sleeping_dependent(cpu);
goto switch_tasks;
}
- /*
- * dependent_sleeper() releases and reacquires the runqueue
- * lock, hence go into the idle loop if the rq went
- * empty meanwhile:
- */
- if (unlikely(!rq->nr_running))
- goto go_idle;
}
array = rq->active;
@@ -3318,6 +3245,8 @@ go_idle:
}
}
next->sleep_type = SLEEP_NORMAL;
+ if (dependent_sleeper(cpu, rq, next))
+ next = rq->idle;
switch_tasks:
if (next == rq->idle)
schedstat_inc(rq, sched_goidle);
@@ -6666,7 +6595,6 @@ void __init sched_init(void)
rq->push_cpu = 0;
rq->migration_thread = NULL;
INIT_LIST_HEAD(&rq->migration_queue);
- rq->cpu = i;
#endif
atomic_set(&rq->nr_iowait, 0);
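For reference, the smt_slice() helper the patch keeps is plain percentage arithmetic: it discounts a task's timeslice by the SMT domain's per_cpu_gain. A standalone version of the calculation (the 25 used in the example below is the usual SD_SIBLING_INIT per_cpu_gain default of that era; treat it as an assumed value):

```c
/*
 * Same arithmetic as smt_slice() in the patch: scale a task's
 * timeslice down by the SMT domain's per_cpu_gain percentage.
 */
static unsigned long smt_slice(unsigned long time_slice,
			       unsigned int per_cpu_gain)
{
	return time_slice * (100 - per_cpu_gain) / 100;
}
```

With per_cpu_gain = 25, a 100-tick timeslice scales to 75; dependent_sleeper() compares these scaled slices against full timeslices when deciding whether a lower-priority task should yield to its SMT sibling.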
* Re: [patch] fix smt nice lock contention and optimization
From: Ingo Molnar @ 2006-06-03 7:49 UTC (permalink / raw)
To: Chen, Kenneth W
Cc: 'Andrew Morton', 'Nick Piggin',
'Chris Mason', 'Con Kolivas', linux-kernel
* Chen, Kenneth W <kenneth.w.chen@intel.com> wrote:
> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
> ---
>
> sched.c | 168 ++++++++++++++++++----------------------------------------------
> 1 files changed, 48 insertions(+), 120 deletions(-)
looks really good now to me.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
let's try it in -mm?
Ingo
* Re: [patch] fix smt nice lock contention and optimization
From: Con Kolivas @ 2006-06-03 7:52 UTC (permalink / raw)
To: Chen, Kenneth W
Cc: 'Andrew Morton', 'Nick Piggin',
'Chris Mason', Ingo Molnar, linux-kernel
On Saturday 03 June 2006 17:43, Chen, Kenneth W wrote:
> OK, final rolled up patch with everyone's changes. I fixed one bug
> introduced by Con's earlier patch that there is an unpaired
> spin_trylock/spin_unlock in the for loop of dependent_sleeper().
> Chris, Con, Nick - please review and provide your signed-off-by line.
> Andrew - please consider for -mm inclusion. Thanks.
Looks good. Just one style nitpick.
> for_each_domain(this_cpu, tmp)
> - if (tmp->flags & SD_SHARE_CPUPOWER)
> + if (tmp->flags & SD_SHARE_CPUPOWER) {
> sd = tmp;
> -
> + break;
> + }
Could we make this neater with extra braces such as:
for_each_domain(this_cpu, tmp) {
if (tmp->flags & SD_SHARE_CPUPOWER) {
sd = tmp;
break;
}
}
and same for the other uses of for_each? I know it's redundant but it's
neater IMO when there are multiple lines of code below it.
--
-ck
* Re: [patch] fix smt nice lock contention and optimization
From: Ingo Molnar @ 2006-06-03 7:57 UTC (permalink / raw)
To: Con Kolivas
Cc: Chen, Kenneth W, 'Andrew Morton', 'Nick Piggin',
'Chris Mason', linux-kernel
* Con Kolivas <kernel@kolivas.org> wrote:
> Could we make this neater with extra braces such as:
>
> for_each_domain(this_cpu, tmp) {
> if (tmp->flags & SD_SHARE_CPUPOWER) {
> sd = tmp;
> break;
> }
> }
>
> and same for the other uses of for_each ? I know it's redundant but
> it's neater IMO when there are multiple lines of code below it.
yep, that's the preferred style when there are multiple lines below a
loop.
Ingo
* Re: [patch] fix smt nice lock contention and optimization
From: Andrew Morton @ 2006-06-03 8:11 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kenneth.w.chen, nickpiggin, mason, kernel, linux-kernel
On Sat, 3 Jun 2006 09:49:20 +0200
Ingo Molnar <mingo@elte.hu> wrote:
>
> * Chen, Kenneth W <kenneth.w.chen@intel.com> wrote:
>
> > Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
> > ---
> >
> > sched.c | 168 ++++++++++++++++++----------------------------------------------
> > 1 files changed, 48 insertions(+), 120 deletions(-)
>
> looks really good now to me.
>
> Signed-off-by: Ingo Molnar <mingo@elte.hu>
>
> lets try it in -mm?
>
Yup. I redid Ken's patch against mainline and then mangled
lock-validator-special-locking-schedc.patch to suit.
* RE: [patch] fix smt nice lock contention and optimization
From: Chen, Kenneth W @ 2006-06-03 8:12 UTC (permalink / raw)
To: 'Ingo Molnar', Con Kolivas
Cc: 'Andrew Morton', 'Nick Piggin',
'Chris Mason', linux-kernel
Ingo Molnar wrote on Saturday, June 03, 2006 12:58 AM
> * Con Kolivas <kernel@kolivas.org> wrote:
>
> > Could we make this neater with extra braces such as:
> >
> > for_each_domain(this_cpu, tmp) {
> > if (tmp->flags & SD_SHARE_CPUPOWER) {
> > sd = tmp;
> > break;
> > }
> > }
> >
> > and same for the other uses of for_each ? I know it's redundant but
> > it's neater IMO when there are multiple lines of code below it.
>
> yep, that's the preferred style when there are multiple lines below a
> loop.
OK, thanks for the tips. Here is an incremental coding-style fix:
--- ./kernel/sched.c.orig 2006-06-02 23:54:11.000000000 -0700
+++ ./kernel/sched.c 2006-06-02 23:55:45.000000000 -0700
@@ -2973,11 +2973,12 @@
struct sched_domain *tmp, *sd = NULL;
int i;
- for_each_domain(this_cpu, tmp)
+ for_each_domain(this_cpu, tmp) {
if (tmp->flags & SD_SHARE_CPUPOWER) {
sd = tmp;
break;
}
+ }
if (!sd)
return;
@@ -3019,11 +3020,12 @@
if (!p->mm || rt_task(p))
return 0;
- for_each_domain(this_cpu, tmp)
+ for_each_domain(this_cpu, tmp) {
if (tmp->flags & SD_SHARE_CPUPOWER) {
sd = tmp;
break;
}
+ }
if (!sd)
return 0;
* RE: [patch] fix smt nice lock contention and optimization
From: Chen, Kenneth W @ 2006-06-03 8:17 UTC (permalink / raw)
To: 'Andrew Morton', Ingo Molnar
Cc: nickpiggin, mason, kernel, linux-kernel
Andrew Morton wrote on Saturday, June 03, 2006 1:11 AM
> Ingo Molnar <mingo@elte.hu> wrote:
> > looks really good now to me.
> >
> > Signed-off-by: Ingo Molnar <mingo@elte.hu>
> >
> > lets try it in -mm?
> >
>
> Yup. I redid Ken's patch against mainline and then mangled
> lock-validator-special-locking-schedc.patch to suit.
Hmm, I wish I had known this beforehand, so that I wouldn't have spent
an extra half hour porting the patch to -mm only to have you convert it
back to mainline. I could have just posted the version I originally had
against mainline.
- Ken
* Re: [patch] fix smt nice lock contention and optimization
From: Con Kolivas @ 2006-06-03 8:17 UTC (permalink / raw)
To: Chen, Kenneth W
Cc: 'Ingo Molnar', 'Andrew Morton',
'Nick Piggin', 'Chris Mason', linux-kernel
On Saturday 03 June 2006 18:12, Chen, Kenneth W wrote:
> Ingo Molnar wrote on Saturday, June 03, 2006 12:58 AM
>
> > * Con Kolivas <kernel@kolivas.org> wrote:
> > > Could we make this neater with extra braces such as:
> > >
> > > for_each_domain(this_cpu, tmp) {
> > > if (tmp->flags & SD_SHARE_CPUPOWER) {
> > > sd = tmp;
> > > break;
> > > }
> > > }
> > >
> > > and same for the other uses of for_each ? I know it's redundant but
> > > it's neater IMO when there are multiple lines of code below it.
> >
> > yep, that's the preferred style when there are multiple lines below a
> > loop.
>
> OK, thanks for the tips. Here is an incremental coding-style fix:
Great!
Thanks Chris, Nick and Ken for your input.
Signed-off-by: Con Kolivas <kernel@kolivas.org>
for both
--
-ck
* Re: [patch] fix smt nice lock contention and optimization
From: Andrew Morton @ 2006-06-03 8:22 UTC (permalink / raw)
To: Chen, Kenneth W; +Cc: mingo, nickpiggin, mason, kernel, linux-kernel
On Sat, 3 Jun 2006 01:17:19 -0700
"Chen, Kenneth W" <kenneth.w.chen@intel.com> wrote:
> Andrew Morton wrote on Saturday, June 03, 2006 1:11 AM
> > Ingo Molnar <mingo@elte.hu> wrote:
> > > looks really good now to me.
> > >
> > > Signed-off-by: Ingo Molnar <mingo@elte.hu>
> > >
> > > lets try it in -mm?
> > >
> >
> > Yup. I redid Ken's patch against mainline and then mangled
> > lock-validator-special-locking-schedc.patch to suit.
>
> Hmm, wish I knew this beforehand, so that I won't spend extra 1/2
> hour to port the patch to -mm and only to have you convert it back
> to mainline. I could just post the version I originally had against
> the mainline.
>
Unless you're specifically working against code which is only in -mm (or a
subsystem tree), please work against mainline.
* Re: [patch] fix smt nice lock contention and optimization
From: Nick Piggin @ 2006-06-03 8:52 UTC (permalink / raw)
To: Chen, Kenneth W
Cc: 'Andrew Morton', 'Chris Mason',
'Con Kolivas', Ingo Molnar, linux-kernel
Chen, Kenneth W wrote:
> OK, final rolled up patch with everyone's changes. I fixed one bug
> introduced by Con's earlier patch that there is an unpaired
> spin_trylock/spin_unlock in the for loop of dependent_sleeper().
> Chris, Con, Nick - please review and provide your signed-off-by line.
> Andrew - please consider for -mm inclusion. Thanks.
Thanks Ken, you can add a Signed-off-by: Nick Piggin <npiggin@suse.de>
for my part.
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
* Re: [patch] fix smt nice lock contention and optimization
From: Chris Mason @ 2006-06-03 18:45 UTC (permalink / raw)
To: Chen, Kenneth W
Cc: 'Andrew Morton', 'Nick Piggin',
'Con Kolivas', Ingo Molnar, linux-kernel
On Saturday 03 June 2006 03:43, Chen, Kenneth W wrote:
> OK, final rolled up patch with everyone's changes. I fixed one bug
> introduced by Con's earlier patch that there is an unpaired
> spin_trylock/spin_unlock in the for loop of dependent_sleeper().
> Chris, Con, Nick - please review and provide your signed-off-by line.
> Andrew - please consider for -mm inclusion. Thanks.
Thanks for turning my half-baked code into something nice. Acked-by from me
as well.
-chris