* [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
@ 2011-07-01 18:41 Rakib Mullick
2011-07-02 2:18 ` Paul Turner
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Rakib Mullick @ 2011-07-01 18:41 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Peter Zijlstra, linux-kernel
Currently in schedule(), when we call pick_next_task() we don't check whether the current rq is empty. Since idle_balance() can fail,
it is worth checking whether we really have any task on the rq. If not, we can call idle_sched_class.pick_next_task() directly.
Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
---
diff --git a/kernel/sched.c b/kernel/sched.c
index 5925275..a4f4f58 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4273,7 +4273,14 @@ need_resched:
idle_balance(cpu, rq);
put_prev_task(rq, prev);
- next = pick_next_task(rq);
+ /* idle_balance() may fail to pull any task, so check
+ * rq->nr_running: if the rq is still empty, we can call
+ * idle_sched_class.pick_next_task() directly.
+ */
+ if (likely(rq->nr_running))
+ next = pick_next_task(rq);
+ else
+ next = idle_sched_class.pick_next_task(rq);
clear_tsk_need_resched(prev);
rq->skip_clock_update = 0;
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-01 18:41 [PATCH] sched: Check nr_running before calling pick_next_task in schedule() Rakib Mullick
@ 2011-07-02 2:18 ` Paul Turner
2011-07-02 2:23 ` Paul Turner
2011-07-02 9:51 ` Peter Zijlstra
2 siblings, 0 replies; 10+ messages in thread
From: Paul Turner @ 2011-07-02 2:18 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel
Hi Rakib,
This doesn't strike me as a very good trade.
It adds a branch to the case where we actually have work, in order to
save branches in the case where we're idle anyway?
- Paul
On 07/01/11 11:41, Rakib Mullick wrote:
> Currently in schedule(), when we call pick_next_task() we don't check whether the current rq is empty. Since idle_balance() can fail,
> it is worth checking whether we really have any task on the rq. If not, we can call idle_sched_class.pick_next_task() directly.
>
> Signed-off-by: Rakib Mullick<rakib.mullick@gmail.com>
> ---
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 5925275..a4f4f58 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4273,7 +4273,14 @@ need_resched:
> idle_balance(cpu, rq);
>
> put_prev_task(rq, prev);
> - next = pick_next_task(rq);
> + /* idle_balance() may fail to pull any task, so check
> + * rq->nr_running: if the rq is still empty, we can call
> + * idle_sched_class.pick_next_task() directly.
> + */
> + if (likely(rq->nr_running))
> + next = pick_next_task(rq);
> + else
> + next = idle_sched_class.pick_next_task(rq);
> clear_tsk_need_resched(prev);
> rq->skip_clock_update = 0;
>
>
>
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-01 18:41 [PATCH] sched: Check nr_running before calling pick_next_task in schedule() Rakib Mullick
2011-07-02 2:18 ` Paul Turner
@ 2011-07-02 2:23 ` Paul Turner
2011-07-02 4:37 ` Rakib Mullick
2011-07-02 9:51 ` Peter Zijlstra
2 siblings, 1 reply; 10+ messages in thread
From: Paul Turner @ 2011-07-02 2:23 UTC (permalink / raw)
To: Rakib Mullick; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel
Hi Rakib,
This doesn't strike me as a very good trade.
It adds a branch to the case where we actually have work, in order to
save branches in the case where we're idle anyway?
- Paul
On 07/01/11 11:41, Rakib Mullick wrote:
> Currently in schedule(), when we call pick_next_task() we don't check whether the current rq is empty. Since idle_balance() can fail,
> it is worth checking whether we really have any task on the rq. If not, we can call idle_sched_class.pick_next_task() directly.
>
> Signed-off-by: Rakib Mullick<rakib.mullick@gmail.com>
> ---
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 5925275..a4f4f58 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4273,7 +4273,14 @@ need_resched:
> idle_balance(cpu, rq);
>
> put_prev_task(rq, prev);
> - next = pick_next_task(rq);
> + /* idle_balance() may fail to pull any task, so check
> + * rq->nr_running: if the rq is still empty, we can call
> + * idle_sched_class.pick_next_task() directly.
> + */
> + if (likely(rq->nr_running))
> + next = pick_next_task(rq);
> + else
> + next = idle_sched_class.pick_next_task(rq);
> clear_tsk_need_resched(prev);
> rq->skip_clock_update = 0;
>
>
>
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-02 2:23 ` Paul Turner
@ 2011-07-02 4:37 ` Rakib Mullick
0 siblings, 0 replies; 10+ messages in thread
From: Rakib Mullick @ 2011-07-02 4:37 UTC (permalink / raw)
To: Paul Turner; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel
Hi Paul,
On Sat, Jul 2, 2011 at 8:23 AM, Paul Turner <pjt@google.com> wrote:
> Hi Rakib,
>
> This doesn't strike me as a very good trade.
>
I'm not sure why. What am I missing here?
> It adds a branch to the case where we actually have work to save branches in
> the case when we're idle anyway?
Yes, right, we're adding a branch here. Is that a misuse of a branch?
As I mentioned, idle_balance can fail (I think that's the unlikely
case, which is why I've annotated the branch with likely()). In that
case, we don't need to call pick_next_task at all. Note that
pick_next_task contains a check for
'likely(rq->nr_running == rq->cfs.nr_running)' - with this patch that
case becomes more likely (and it calls fair_sched_class.pick_next_task
directly, which is an optimization). If calling
fair_sched_class.pick_next_task when all tasks are cfs tasks is
treated as an optimization, then why wouldn't calling
idle_sched_class.pick_next_task when we're idle be an optimization too?
Thanks,
Rakib
>
> - Paul
>
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-01 18:41 [PATCH] sched: Check nr_running before calling pick_next_task in schedule() Rakib Mullick
2011-07-02 2:18 ` Paul Turner
2011-07-02 2:23 ` Paul Turner
@ 2011-07-02 9:51 ` Peter Zijlstra
2011-07-02 14:26 ` Rakib Mullick
2 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2011-07-02 9:51 UTC (permalink / raw)
To: Rakib Mullick; +Cc: Ingo Molnar, linux-kernel, Paul Turner
On Sat, 2011-07-02 at 00:41 +0600, Rakib Mullick wrote:
> Currently in schedule(), when we call pick_next_task() we don't check
> whether the current rq is empty. Since idle_balance() can fail,
> it is worth checking whether we really have any task on the rq. If
> not, we can call idle_sched_class.pick_next_task() directly.
>
> Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
> ---
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 5925275..a4f4f58 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4273,7 +4273,14 @@ need_resched:
> idle_balance(cpu, rq);
>
> put_prev_task(rq, prev);
> - next = pick_next_task(rq);
> + /* idle_balance() may fail to pull any task, so check
> + * rq->nr_running: if the rq is still empty, we can call
> + * idle_sched_class.pick_next_task() directly.
> + */
> + if (likely(rq->nr_running))
> + next = pick_next_task(rq);
> + else
> + next = idle_sched_class.pick_next_task(rq);
> clear_tsk_need_resched(prev);
> rq->skip_clock_update = 0;
Why!?
You're making the fast path -- picking a task -- slower by adding a
branch, and making the slow path -- going into idle -- faster. That
seems backwards at best.
You've completely failed to provide any sort of rationale for the patch
nor did you provide a use-case with performance numbers. This just isn't
making much sense at all.
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-02 9:51 ` Peter Zijlstra
@ 2011-07-02 14:26 ` Rakib Mullick
2011-07-02 15:35 ` Peter Zijlstra
2011-07-09 4:43 ` Rakib Mullick
0 siblings, 2 replies; 10+ messages in thread
From: Rakib Mullick @ 2011-07-02 14:26 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, linux-kernel, Paul Turner
On Sat, Jul 2, 2011 at 3:51 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Sat, 2011-07-02 at 00:41 +0600, Rakib Mullick wrote:
>> Currently in schedule(), when we call pick_next_task() we don't check
>> whether the current rq is empty. Since idle_balance() can fail,
>> it is worth checking whether we really have any task on the rq. If
>> not, we can call idle_sched_class.pick_next_task() directly.
>>
>> Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
>> ---
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 5925275..a4f4f58 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4273,7 +4273,14 @@ need_resched:
>> idle_balance(cpu, rq);
>>
>> put_prev_task(rq, prev);
>> - next = pick_next_task(rq);
>> + /* idle_balance() may fail to pull any task, so check
>> + * rq->nr_running: if the rq is still empty, we can call
>> + * idle_sched_class.pick_next_task() directly.
>> + */
>> + if (likely(rq->nr_running))
>> + next = pick_next_task(rq);
>> + else
>> + next = idle_sched_class.pick_next_task(rq);
>> clear_tsk_need_resched(prev);
>> rq->skip_clock_update = 0;
>
> Why!?
>
> You're making the fast path -- picking a task -- slower by adding a
> branch, and making the slow path -- going into idle -- faster. That
> seems backwards at best.
>
Well, yes - branching definitely seems to have some side effects.
Thinking from UP's perspective, it will only hit the slow path --
going into idle. In that case the likely branch will simply fail. And
on a UP system that slow path -- going into idle -- is the only way;
taking the fast path (trying to pick a task) isn't the right thing, is
it?
> You've completely failed to provide any sort of rationale for the patch
> nor did you provide a use-case with performance numbers. This just isn't
> making much sense at all.
>
Regarding the branch, I agree it needs performance numbers. But I
don't quite agree that it makes no sense at all. I will try to do some
performance testing on it.
Thanks,
Rakib
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-02 14:26 ` Rakib Mullick
@ 2011-07-02 15:35 ` Peter Zijlstra
2011-07-03 8:07 ` Rakib Mullick
2011-07-09 4:43 ` Rakib Mullick
1 sibling, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2011-07-02 15:35 UTC (permalink / raw)
To: Rakib Mullick; +Cc: Ingo Molnar, linux-kernel, Paul Turner
On Sat, 2011-07-02 at 20:26 +0600, Rakib Mullick wrote:
> Well, yes - branching definitely seems to have some side effects.
It adds the cost of the test as well as a possible branch mis-predict.
> Thinking from UP's perspective, it will only hit slow path -- going
> into idle.
Uhm, no, every time the machine is busy and does a schedule between
tasks you still get to do that extra nr_running test and branch.
> In that case, that likely branch will just fail. And on an
> UP system that slow path -- going into idle -- is the only way, taking
> the fast path (trying picking a task) isn't the right thing, isn't
> it?
I'm not at all sure I even understand what you're trying to say. I
really don't understand what's the problem with going the long way with
picking the idle task, the machine is idle, it doesn't have anything
useful to do, who cares.
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-02 15:35 ` Peter Zijlstra
@ 2011-07-03 8:07 ` Rakib Mullick
2011-07-04 9:00 ` Rakib Mullick
0 siblings, 1 reply; 10+ messages in thread
From: Rakib Mullick @ 2011-07-03 8:07 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, linux-kernel, Paul Turner
On Sat, Jul 2, 2011 at 9:35 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Sat, 2011-07-02 at 20:26 +0600, Rakib Mullick wrote:
>> Well, yes - branching definitely seems to have some side effects.
>
> It adds the cost of the test as well as a possible branch mis-predict.
>
>> Thinking from UP's perspective, it will only hit slow path -- going
>> into idle.
>
> Uhm, no, every time the machine is busy and does a schedule between
> tasks you still get to do that extra nr_running test and branch.
>
Ok, for now I'm putting the branch aside. I don't think checking
nr_running is extra; it should be the norm. If this rq has no running
task, schedule() calls idle_balance, and if idle_balance succeeds we
may have pulled a task over from another rq. But idle_balance might
not succeed, which is why I think checking nr_running is necessary: in
that case we don't need to call pick_next_task at all, because we
don't have any task.
>> In that case, that likely branch will just fail. And on an
>> UP system that slow path -- going into idle -- is the only way, taking
>> the fast path (trying picking a task) isn't the right thing, isn't
>> it?
>
> I'm not at all sure I even understand what you're trying to say. I
> really don't understand what's the problem with going the long way with
> picking the idle task, the machine is idle, it doesn't have anything
> useful to do, who cares.
>
Well, yes, the machine is idle. I take your point that since the CPU
is idle, it doesn't matter if we take the long path. But when we have
two ways - going through pick_next_task, or calling the idle class
directly - I think calling the idle class is better. That's how I see
it (and it certainly differs from your view). Note that
pick_next_task has a branch that checks
likely(rq->nr_running == rq->cfs.nr_running); the chance of hitting
that branch will increase, because with !nr_running pick_next_task
won't be called at all. It will also reduce pick_next_task's calling
overhead.
Thanks,
Rakib
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-03 8:07 ` Rakib Mullick
@ 2011-07-04 9:00 ` Rakib Mullick
0 siblings, 0 replies; 10+ messages in thread
From: Rakib Mullick @ 2011-07-04 9:00 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, linux-kernel, Paul Turner
On Sun, Jul 3, 2011 at 2:07 PM, Rakib Mullick <rakib.mullick@gmail.com> wrote:
> On Sat, Jul 2, 2011 at 9:35 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Sat, 2011-07-02 at 20:26 +0600, Rakib Mullick wrote:
>>> Well, yes - branching definitely seems to have some side effects.
>>
>> It adds the cost of the test as well as a possible branch mis-predict.
>>
>>> Thinking from UP's perspective, it will only hit slow path -- going
>>> into idle.
>>
>> Uhm, no, every time the machine is busy and does a schedule between
>> tasks you still get to do that extra nr_running test and branch.
>>
> Ok, for now I'm putting the branch aside. I don't think checking
> nr_running is extra; it should be the norm. If this rq has no running
> task, schedule() calls idle_balance, and if idle_balance succeeds we
> may have pulled a task over from another rq. But idle_balance might
> not succeed, which is why I think checking nr_running is necessary: in
> that case we don't need to call pick_next_task at all, because we
> don't have any task.
>
>>> In that case, that likely branch will just fail. And on an
>>> UP system that slow path -- going into idle -- is the only way, taking
>>> the fast path (trying picking a task) isn't the right thing, isn't
>>> it?
>>
>> I'm not at all sure I even understand what you're trying to say. I
>> really don't understand what's the problem with going the long way with
>> picking the idle task, the machine is idle, it doesn't have anything
>> useful to do, who cares.
>>
> Well, yes, the machine is idle. I take your point that since the CPU
> is idle, it doesn't matter if we take the long path. But when we have
> two ways - going through pick_next_task, or calling the idle class
> directly - I think calling the idle class is better. That's how I see
> it (and it certainly differs from your view). Note that
> pick_next_task has a branch that checks
> likely(rq->nr_running == rq->cfs.nr_running); the chance of hitting
> that branch will increase, because with !nr_running pick_next_task
> won't be called at all. It will also reduce pick_next_task's calling
> overhead.
>
I made a mistake here: pick_next_task is inlined, so function call
overhead isn't actually a problem.
> Thanks,
> Rakib
>
* Re: [PATCH] sched: Check nr_running before calling pick_next_task in schedule().
2011-07-02 14:26 ` Rakib Mullick
2011-07-02 15:35 ` Peter Zijlstra
@ 2011-07-09 4:43 ` Rakib Mullick
1 sibling, 0 replies; 10+ messages in thread
From: Rakib Mullick @ 2011-07-09 4:43 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, linux-kernel, Paul Turner
[-- Attachment #1: Type: text/plain, Size: 1274 bytes --]
On Sat, Jul 2, 2011 at 8:26 PM, Rakib Mullick <rakib.mullick@gmail.com> wrote:
> On Sat, Jul 2, 2011 at 3:51 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Sat, 2011-07-02 at 00:41 +0600, Rakib Mullick wrote:
>
>> You've completely failed to provide any sort of rationale for the patch
>> nor did you provide a use-case with performance numbers. This just isn't
>> making much sense at all.
>>
> Regarding the branch, I agree it needs performance numbers. But I
> don't quite agree that it makes no sense at all. I will try to do
> some performance testing on it.
I did some testing with the patch applied (with the likely() branch
removed), using kernbench. I ran the whole set twice, with loads -j
{2,4,6,8}. Kernels tagged -schedpatch have this patch applied;
-noschedpatch is the unpatched default. In the first run, elapsed time
was lower with the patch applied. In the second run the -schedpatch
kernel was slower at -j 2 and -j 4, but took less elapsed time at -j 6
and -j 8. I also tested with 'nosmp'; with the patch, elapsed time was
lower there as well. For better interpretation I'm attaching the full
kernbench.log - please see whether it looks convincing to you. Tests
were done on a Core i3 machine.
Thanks,
Rakib
[-- Attachment #2: kernbench.log --]
[-- Type: application/octet-stream, Size: 5358 bytes --]
Sun Jul 3 10:40:56 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 13.61 (0.0608276)
User Time 27.5967 (0.12897)
System Time 14.2133 (0.119303)
Percent CPU 306.667 (2.51661)
Context Switches 7989.33 (58.398)
Sleeps 10866.3 (43.6157)
Sun Jul 3 10:50:56 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 13.65 (0.127671)
User Time 27.54 (0.112694)
System Time 14.1467 (0.155027)
Percent CPU 305 (3)
Context Switches 7983.33 (165.171)
Sleeps 10902.3 (135.795)
Sun Jul 3 11:01:14 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 2 Run (std deviation):
Elapsed Time 17.63 (0.31241)
User Time 19.4333 (0.115036)
System Time 10.21 (0.0264575)
Percent CPU 167.667 (3.21455)
Context Switches 8419 (75.6241)
Sleeps 10580.7 (75.7254)
Sun Jul 3 11:12:36 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 2 Run (std deviation):
Elapsed Time 17.6067 (0.140475)
User Time 19.4033 (0.153079)
System Time 10.17 (0.105357)
Percent CPU 167.333 (1.1547)
Context Switches 8371.33 (16.1658)
Sleeps 10548.3 (17.0392)
Wed Jul 6 22:24:19 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 6 Run (std deviation):
Elapsed Time 13.5833 (0.0450925)
User Time 28.3067 (0.176163)
System Time 14.6033 (0.143643)
Percent CPU 315.333 (1.1547)
Context Switches 9239 (90.3715)
Sleeps 11119.3 (38.0175)
Wed Jul 6 22:34:05 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 6 Run (std deviation):
Elapsed Time 13.8433 (0.433628)
User Time 27.9667 (0.390042)
System Time 14.6133 (0.0757188)
Percent CPU 307.333 (12.4231)
Context Switches 9409.33 (84.2397)
Sleeps 11415 (328.638)
Wed Jul 6 22:41:46 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 8 Run (std deviation):
Elapsed Time 13.6133 (0.065064)
User Time 28.5367 (0.0208167)
System Time 14.6733 (0.133166)
Percent CPU 316.667 (0.57735)
Context Switches 9486 (135.104)
Sleeps 11842.7 (263.788)
Wed Jul 6 22:51:42 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 8 Run (std deviation):
Elapsed Time 13.5733 (0.061101)
User Time 28.4833 (0.080208)
System Time 14.5933 (0.100664)
Percent CPU 317 (1)
Context Switches 9547 (70.7036)
Sleeps 11855.7 (35.0761)
<============================================>
Fri Jul 8 08:22:29 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 2 Run (std deviation):
Elapsed Time 17.746 (0.487217)
User Time 19.424 (0.145017)
System Time 10.218 (0.0816701)
Percent CPU 166.4 (3.78153)
Context Switches 8385.4 (49.687)
Sleeps 10594 (76.7887)
Fri Jul 8 08:30:37 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 13.718 (0.339809)
User Time 27.95 (0.246069)
System Time 14.546 (0.255402)
Percent CPU 309.4 (4.27785)
Context Switches 7838.4 (92.6569)
Sleeps 11082.2 (648.594)
Fri Jul 8 08:38:23 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 6 Run (std deviation):
Elapsed Time 13.506 (0.0844393)
User Time 28.33 (0.0977241)
System Time 14.584 (0.118025)
Percent CPU 317.2 (1.64317)
Context Switches 9345.6 (90.7045)
Sleeps 11046 (106.59)
Fri Jul 8 08:46:11 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 8 Run (std deviation):
Elapsed Time 13.566 (0.0466905)
User Time 28.63 (0.05)
System Time 14.566 (0.0770065)
Percent CPU 318 (1)
Context Switches 9352.8 (103.323)
Sleeps 11805.8 (122.534)
Fri Jul 8 09:00:10 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 2 Run (std deviation):
Elapsed Time 17.628 (0.463379)
User Time 19.426 (0.100399)
System Time 10.162 (0.10663)
Percent CPU 167.6 (4.39318)
Context Switches 8370.2 (24.8636)
Sleeps 10568.4 (99.2184)
Fri Jul 8 09:08:20 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 13.598 (0.172974)
User Time 27.846 (0.105262)
System Time 14.464 (0.0750333)
Percent CPU 310.6 (3.97492)
Context Switches 7799.8 (81.561)
Sleeps 10784.6 (64.5508)
Fri Jul 8 09:15:46 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 6 Run (std deviation):
Elapsed Time 14.296 (0.972538)
User Time 27.796 (0.320047)
System Time 14.488 (0.191755)
Percent CPU 296.4 (21.1967)
Context Switches 10623.6 (499.773)
Sleeps 11212.8 (202.849)
Fri Jul 8 09:23:51 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 8 Run (std deviation):
Elapsed Time 13.614 (0.0219089)
User Time 28.674 (0.0270185)
System Time 14.658 (0.100349)
Percent CPU 317.8 (0.83666)
Context Switches 9374.2 (93.1166)
Sleeps 11795.4 (187.807)
<================= nosmp tests==============>
Fri Jul 8 09:44:14 BDT 2011
3.0.0-rc5-tip-schedpatch-gbc7db00-dirty
Average Optimal load -j 1 Run (std deviation):
Elapsed Time 31.364 (0.817484)
User Time 18.248 (0.0526308)
System Time 8.69 (0.0961769)
Percent CPU 85.4 (1.94936)
Context Switches 5713.2 (24.9339)
Sleeps 10624 (64.5717)
Fri Jul 8 10:02:55 BDT 2011
3.0.0-rc5-tip-noschedpatch-gbc7db00
Average Optimal load -j 1 Run (std deviation):
Elapsed Time 32.868 (1.02021)
User Time 18.268 (0.0923038)
System Time 8.79 (0.109316)
Percent CPU 81.8 (2.16795)
Context Switches 5826.2 (74.5064)
Sleeps 10787 (117.712)
<================= nosmp end ==============>
Thread overview: 10+ messages (newest: 2011-07-09 4:43 UTC)
2011-07-01 18:41 [PATCH] sched: Check nr_running before calling pick_next_task in schedule() Rakib Mullick
2011-07-02 2:18 ` Paul Turner
2011-07-02 2:23 ` Paul Turner
2011-07-02 4:37 ` Rakib Mullick
2011-07-02 9:51 ` Peter Zijlstra
2011-07-02 14:26 ` Rakib Mullick
2011-07-02 15:35 ` Peter Zijlstra
2011-07-03 8:07 ` Rakib Mullick
2011-07-04 9:00 ` Rakib Mullick
2011-07-09 4:43 ` Rakib Mullick