* cgroups and SCHED_IDLE
From: Holger Brunck @ 2013-06-27 17:17 UTC
To: cgroups-u79uwXL29TY76Z2rM5mHXA; +Cc: Tejun Heo, Germs, Frits (extern)
Hi all,
I ran into a problem when using cgroups on a PowerPC board, but I think it's a
general problem or question.
What is the status of tasks running with SCHED_IDLE under cgroups? The kernel
configuration for CGROUPS distinguishes between SCHED_OTHER and SCHED_RT/FIFO;
SCHED_IDLE isn't mentioned at all. If I create two threads that generate load on
the CPU with SCHED_IDLE, I see that they share the CPU load. If I move one of
these tasks into a cgroup, that task afterwards eats up (more or less) all of the
CPU load and the other one starves, even though both are still SCHED_IDLE.
It's easy to reproduce with this script (at least on my single 32-bit PPC CPU),
which sets up a cgroup, sets the current shell to SCHED_IDLE, creates a task,
moves it into the cgroup, and starts a second one:
mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/cpu
mount -t cgroup -ocpu none /sys/fs/cgroup/cpu
cd /sys/fs/cgroup/cpu
mkdir browser
echo $$ | xargs chrt -i -p 0
dd if=/dev/zero of=/dev/null &
pgrep ^dd$ > browser/tasks
dd if=/dev/zero of=/dev/null &
If you start top you will see that the first dd process eats up the CPU time.
If you skip moving the task, you would see that both tasks consume the same load.
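For completeness, the scheduling policy and cgroup membership of both dd processes
can be double-checked roughly like this (just a sketch, paths as in the script above):
for p in $(pgrep dd); do
  chrt -p $p            # should report SCHED_IDLE for both processes
  cat /proc/$p/cgroup   # shows which cpu cgroup each process ended up in
done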
So my question is: is this a bug, or is it forbidden to move a SCHED_IDLE task into
a specific cgroup? If the latter is true, then it may be good to deny such a
request, e.g.:
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index a7c9e6d..b475315 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2149,6 +2149,11 @@ retry_find_task:
rcu_read_unlock();
goto out_unlock_cgroup;
}
+ if (tsk->policy == SCHED_IDLE) {
+ ret = -EPERM;
+ rcu_read_unlock();
+ goto out_unlock_cgroup;
+ }
get_task_struct(tsk);
rcu_read_unlock();
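With such a check in place, attaching a SCHED_IDLE task from userspace would then
presumably fail roughly like this (untested sketch, reusing the cgroup layout from
the script above):
chrt -i 0 dd if=/dev/zero of=/dev/null &      # start a dd task with SCHED_IDLE
echo $! > /sys/fs/cgroup/cpu/browser/tasks    # expected to fail with EPERM ("Operation not permitted")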
Any opinion on this?
Best regards
Holger
* Re: cgroups and SCHED_IDLE
From: Holger Brunck @ 2013-07-01 8:11 UTC
To: cgroups-u79uwXL29TY76Z2rM5mHXA; +Cc: Tejun Heo, Germs, Frits (extern)
Hi,
small update on this.
On 06/27/2013 07:17 PM, Holger Brunck wrote:
> I ran into a problem when using cgroups on a PowerPC board, but I think it's a
> general problem or question.
>
> What is the status of tasks running with SCHED_IDLE under cgroups? The kernel
> configuration for CGROUPS distinguishes between SCHED_OTHER and SCHED_RT/FIFO;
> SCHED_IDLE isn't mentioned at all. If I create two threads that generate load on
> the CPU with SCHED_IDLE, I see that they share the CPU load. If I move one of
> these tasks into a cgroup, that task afterwards eats up (more or less) all of the
> CPU load and the other one starves, even though both are still SCHED_IDLE.
>
> It's easy to reproduce with this script (at least on my single 32-bit PPC CPU),
> which sets up a cgroup, sets the current shell to SCHED_IDLE, creates a task,
> moves it into the cgroup, and starts a second one:
>
> mount -t tmpfs cgroup_root /sys/fs/cgroup
> mkdir /sys/fs/cgroup/cpu
> mount -t cgroup -ocpu none /sys/fs/cgroup/cpu
> cd /sys/fs/cgroup/cpu
> mkdir browser
> echo $$ | xargs chrt -i -p 0
> dd if=/dev/zero of=/dev/null &
> pgrep ^dd$ > browser/tasks
> dd if=/dev/zero of=/dev/null &
>
> If you start top you will see that the first dd process eats up the CPU time.
>
> If you skip moving the task, you would see that both tasks consume the same load.
>
On a single ARM CPU (Kirkwood) I see the same confusing results as in the
PowerPC example above:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
232 root 20 0 1924 492 420 R 99.9 0.4 0:29.15 dd
234 root 20 0 1924 492 420 R 0.3 0.4 0:00.13 dd
I double-checked this on my local x86_64 multicore host, and there it works fine
even if I force both dd processes to run on the same CPU:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32046 root 20 0 102m 516 432 R 49.4 0.0 0:32.49 dd
32049 root 20 0 102m 516 432 R 49.4 0.0 0:13.39 dd
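(Just for illustration, pinning both dd processes onto one CPU can be done e.g.
like this; taskset and CPU 0 are only an example:)
taskset -c 0 dd if=/dev/zero of=/dev/null &   # start the first dd pinned to CPU 0
taskset -c 0 dd if=/dev/zero of=/dev/null &   # and the second one on the same CPU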
So either it's a problem for single CPUs or it's not allowed at all and works
only by chance.
Regards
Holger
* Re: cgroups and SCHED_IDLE
From: Tejun Heo @ 2013-07-23 15:56 UTC
To: 51CC7392.8080701-SkAbAL50j+5BDgjK7y7TUQ
Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, Germs, Frits (extern),
Peter Zijlstra, Ingo Molnar
(cc'ing Ingo and Peter)
Hello, sorry about the delay. Was traveling.
Ingo, Peter, it looks like when there are two SCHED_IDLE tasks in a
!root cpu cgroup, one of them is starved in certain configurations.
The original message is at
http://thread.gmane.org/gmane.linux.kernel.cgroups/8203
Any ideas?
On Mon, Jul 01, 2013 at 10:11:32AM +0200, Holger Brunck wrote:
> Hi,
> small update on this.
>
> On 06/27/2013 07:17 PM, Holger Brunck wrote:
> > I ran into a problem when using cgroups on a PowerPC board, but I think it's a
> > general problem or question.
> >
> > What is the status of tasks running with SCHED_IDLE under cgroups? The kernel
> > configuration for CGROUPS distinguishes between SCHED_OTHER and SCHED_RT/FIFO;
> > SCHED_IDLE isn't mentioned at all. If I create two threads that generate load on
> > the CPU with SCHED_IDLE, I see that they share the CPU load. If I move one of
> > these tasks into a cgroup, that task afterwards eats up (more or less) all of the
> > CPU load and the other one starves, even though both are still SCHED_IDLE.
> >
> > It's easy to reproduce with this script (at least on my single 32-bit PPC CPU),
> > which sets up a cgroup, sets the current shell to SCHED_IDLE, creates a task,
> > moves it into the cgroup, and starts a second one:
> >
> > mount -t tmpfs cgroup_root /sys/fs/cgroup
> > mkdir /sys/fs/cgroup/cpu
> > mount -t cgroup -ocpu none /sys/fs/cgroup/cpu
> > cd /sys/fs/cgroup/cpu
> > mkdir browser
> > echo $$ | xargs chrt -i -p 0
> > dd if=/dev/zero of=/dev/null &
> > pgrep ^dd$ > browser/tasks
> > dd if=/dev/zero of=/dev/null &
> >
> > If you start top you will see that the first dd process eats up the CPU time.
> >
> > If you skip moving the task, you would see that both tasks consume the same load.
> >
>
> On a single ARM CPU (Kirkwood) I see the same confusing results as in the
> PowerPC example above:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 232 root 20 0 1924 492 420 R 99.9 0.4 0:29.15 dd
> 234 root 20 0 1924 492 420 R 0.3 0.4 0:00.13 dd
>
> I double-checked this on my local x86_64 multicore host, and there it works fine
> even if I force both dd processes to run on the same CPU:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 32046 root 20 0 102m 516 432 R 49.4 0.0 0:32.49 dd
> 32049 root 20 0 102m 516 432 R 49.4 0.0 0:13.39 dd
>
> So either it's a problem for single CPUs or it's not allowed at all and works
> only by chance.
Can you please boot with maxcpus=1 and see whether that makes the
issue reproducible on x86?
Thanks.
--
tejun
* Re: cgroups and SCHED_IDLE
From: Holger Brunck @ 2013-07-29 12:25 UTC
To: Tejun Heo
Cc: mingo-DgEjT+Ai2ygdnm+yROfE0A, Peter Zijlstra,
Germs, Frits (extern), cgroups-u79uwXL29TY76Z2rM5mHXA
On 07/23/2013 05:56 PM, Tejun Heo wrote:
>> On 06/27/2013 07:17 PM, Holger Brunck wrote:
>>
>> On a single ARM CPU (Kirkwood) I see the same confusing results as in the
>> PowerPC example above:
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 232 root 20 0 1924 492 420 R 99.9 0.4 0:29.15 dd
>> 234 root 20 0 1924 492 420 R 0.3 0.4 0:00.13 dd
>>
>> I double-checked this on my local x86_64 multicore host, and there it works fine
>> even if I force both dd processes to run on the same CPU:
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 32046 root 20 0 102m 516 432 R 49.4 0.0 0:32.49 dd
>> 32049 root 20 0 102m 516 432 R 49.4 0.0 0:13.39 dd
>>
>> So either it's a problem for single CPUs or it's not allowed at all and works
>> only by chance.
>
> Can you please boot with maxcpus=1 and see whether that makes the
> issue reproducible on x86?
>
I retested this with maxcpus=0 to disable SMP completely and it works: both
processes share 50% of the CPU. But I have to admit that I currently only have a
3.4 kernel setup for my x86_64 PC.
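(For reference, how many CPUs actually stayed online after booting with a maxcpus=
parameter can be checked e.g. like this; just for completeness:)
cat /sys/devices/system/cpu/online    # e.g. "0" when only the boot CPU is up
grep -c '^processor' /proc/cpuinfo    # number of CPUs the kernel reports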
My setups for an ARM Kirkwood board and for a board with a PowerPC 8247 run the
latest 3.10 kernel, and there I see the problem that one process is starving. But
the problem was already present in a 3.0.x kernel. So it seems to be an
architecture-dependent problem.
Regards
Holger
* Re: cgroups and SCHED_IDLE
From: Peter Zijlstra @ 2013-07-29 14:07 UTC
To: Holger Brunck
Cc: Tejun Heo, mingo-DgEjT+Ai2ygdnm+yROfE0A, Germs, Frits (extern),
cgroups-u79uwXL29TY76Z2rM5mHXA
On Mon, Jul 29, 2013 at 02:25:33PM +0200, Holger Brunck wrote:
> On 07/23/2013 05:56 PM, Tejun Heo wrote:
> >> On 06/27/2013 07:17 PM, Holger Brunck wrote:
> >>
> >> On a single ARM CPU (Kirkwood) I see the same confusing results as in the
> >> PowerPC example above:
> >>
> >> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >> 232 root 20 0 1924 492 420 R 99.9 0.4 0:29.15 dd
> >> 234 root 20 0 1924 492 420 R 0.3 0.4 0:00.13 dd
> >>
> >> I double-checked this on my local x86_64 multicore host, and there it works fine
> >> even if I force both dd processes to run on the same CPU:
> >>
> >> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >> 32046 root 20 0 102m 516 432 R 49.4 0.0 0:32.49 dd
> >> 32049 root 20 0 102m 516 432 R 49.4 0.0 0:13.39 dd
> >>
> >> So either it's a problem for single CPUs or it's not allowed at all and works
> >> only by chance.
> >
> > Can you please boot with maxcpus=1 and see whether that makes the
> > issue reproducible on x86?
> >
>
> I retested this with maxcpus=0 to disable SMP completely and it works: both
> processes share 50% of the CPU. But I have to admit that I currently only have a
> 3.4 kernel setup for my x86_64 PC.
>
> My setups for an ARM Kirkwood board and for a board with a PowerPC 8247 run the
> latest 3.10 kernel, and there I see the problem that one process is starving. But
> the problem was already present in a 3.0.x kernel. So it seems to be an
> architecture-dependent problem.
Does the below fix it?
---
Subject: sched: Ensure update_cfs_shares() is called for parents of continuously-running tasks
From: Peter Zijlstra <peterz-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
Date: Fri Jul 26 23:48:42 CEST 2013
We typically update a task_group's shares within the dequeue/enqueue
path. However, continuously running tasks sharing a CPU are not
subject to these updates as they are only put/picked. Unfortunately,
when we reverted f269ae046 (in 17bc14b7), we lost the augmenting
periodic update that was supposed to account for this; resulting in a
potential loss of fairness.
To fix this, re-introduce the explicit update in
update_cfs_rq_blocked_load() [called via entity_tick()].
Cc: stable-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org
Reported-by: Max Hailperin <max-cgD0pOLU16z2fBVCVOL8/A@public.gmane.org>
Reviewed-by: Paul Turner <pjt-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Peter Zijlstra <peterz-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
---
kernel/sched/fair.c | 1 +
1 file changed, 1 insertion(+)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2032,6 +2032,7 @@ entity_tick(struct cfs_rq *cfs_rq, struc
*/
update_entity_load_avg(curr, 1);
update_cfs_rq_blocked_load(cfs_rq, 1);
+ update_cfs_shares(cfs_rq);
#ifdef CONFIG_SCHED_HRTICK
/*
* Re: cgroups and SCHED_IDLE
From: Holger Brunck @ 2013-07-29 15:14 UTC
To: Peter Zijlstra
Cc: Tejun Heo, mingo-DgEjT+Ai2ygdnm+yROfE0A, Germs, Frits (extern),
cgroups-u79uwXL29TY76Z2rM5mHXA
On 07/29/2013 04:07 PM, Peter Zijlstra wrote:
> On Mon, Jul 29, 2013 at 02:25:33PM +0200, Holger Brunck wrote:
>> On 07/23/2013 05:56 PM, Tejun Heo wrote:
>>>> On 06/27/2013 07:17 PM, Holger Brunck wrote:
>>>>
>>>> On a single ARM CPU (Kirkwood) I see the same confusing results as in the
>>>> PowerPC example above:
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>> 232 root 20 0 1924 492 420 R 99.9 0.4 0:29.15 dd
>>>> 234 root 20 0 1924 492 420 R 0.3 0.4 0:00.13 dd
>>>>
>>>> I double-checked this on my local x86_64 multicore host, and there it works fine
>>>> even if I force both dd processes to run on the same CPU:
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>> 32046 root 20 0 102m 516 432 R 49.4 0.0 0:32.49 dd
>>>> 32049 root 20 0 102m 516 432 R 49.4 0.0 0:13.39 dd
>>>>
>>>> So either it's a problem for single CPUs or it's not allowed at all and works
>>>> only by chance.
>>>
>>> Can you please boot with maxcpus=1 and see whether that makes the
>>> issue reproducible on x86?
>>>
>>
>> I retested this with maxcpus=0 to disable SMP completely and it works: both
>> processes share 50% of the CPU. But I have to admit that I currently only have a
>> 3.4 kernel setup for my x86_64 PC.
>>
>> My setups for an ARM Kirkwood board and for a board with a PowerPC 8247 run the
>> latest 3.10 kernel, and there I see the problem that one process is starving. But
>> the problem was already present in a 3.0.x kernel. So it seems to be an
>> architecture-dependent problem.
>
> Does the below fix it?
>
no, unfortunately I got the same results as before.
Two SCHED_IDLE tasks in the root group:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
622 root 20 0 1924 492 420 R 49.8 0.4 0:20.26 dd
623 root 20 0 1924 492 420 R 49.8 0.4 0:19.96 dd
After moving one of them into a subgroup:
[root@km_kirkwood /sys/fs/cgroup/cpu]# echo 623 > browser/tasks
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
623 root 20 0 1924 492 420 R 99.4 0.4 0:38.07 dd
622 root 20 0 1924 492 420 R 0.3 0.4 0:30.15 dd
Regards
Holger
* Re: cgroups and SCHED_IDLE
From: Peter Zijlstra @ 2013-07-29 15:25 UTC
To: Holger Brunck
Cc: Tejun Heo, mingo-DgEjT+Ai2ygdnm+yROfE0A, Germs, Frits (extern),
cgroups-u79uwXL29TY76Z2rM5mHXA
On Mon, Jul 29, 2013 at 05:14:47PM +0200, Holger Brunck wrote:
> no, unfortunately I got the same results as before.
OK, so I'm a little confused; this works on SMP but not on UP? There's
lots of mention of various platforms, but since I simply don't know any
of them it's not clear if they're UP or not.
Typically the cgroup nightmare is easier on UP; it would be curious if
that's the 'broken' case now :-)
* Re: cgroups and SCHED_IDLE
From: Holger Brunck @ 2013-07-29 15:54 UTC
To: Peter Zijlstra
Cc: Tejun Heo, mingo-DgEjT+Ai2ygdnm+yROfE0A, Germs, Frits (extern),
cgroups-u79uwXL29TY76Z2rM5mHXA
On 07/29/2013 05:25 PM, Peter Zijlstra wrote:
> On Mon, Jul 29, 2013 at 05:14:47PM +0200, Holger Brunck wrote:
>> no, unfortunately I got the same results as before.
>
> OK, so I'm a little confused; this works on SMP but not on UP?
Yes, this is what I see in my setup.
> There's lots of mention of various platforms, but since I simply don't know any
> of them it's not clear if they're UP or not.
>
As I said, I have two different boards, both uniprocessor (UP):
arch      CPU       32/64 bit   kernel
ARM       Kirkwood  32          3.10 (but 3.0 shows the same problem)
powerpc   MPC8247   32          3.10 (but 3.0 shows the same problem)
Both show this strange behaviour.
And I have an SMP host PC (Intel(R) Core(TM)2 Quad CPU) running a 3.4 kernel,
where I am unable to reproduce the problem even if I start the kernel with
maxcpus=0.
Maybe I can have a look tomorrow at another of our boards, which is a PowerPC
quad-core. I guess this could help to increase the confusion? ;-)
Regards
Holger
* Re: cgroups and SCHED_IDLE
From: Peter Zijlstra @ 2013-07-29 15:56 UTC
To: Holger Brunck
Cc: Tejun Heo, mingo-DgEjT+Ai2ygdnm+yROfE0A, Germs, Frits (extern),
cgroups-u79uwXL29TY76Z2rM5mHXA
On Mon, Jul 29, 2013 at 05:54:03PM +0200, Holger Brunck wrote:
> On 07/29/2013 05:25 PM, Peter Zijlstra wrote:
> > On Mon, Jul 29, 2013 at 05:14:47PM +0200, Holger Brunck wrote:
> >> no, unfortunately I got the same results as before.
> >
> > OK, so I'm a little confused; this works on SMP but not on UP?
>
> Yes, this is what I see in my setup.
OK, I suppose I'll have to go stare at the UP code paths. No wonder the
patch didn't help, that was an SMP issue ;-)