* Sched_autogroup and niced processes
@ 2011-05-13  7:39 Carl-Johan Kjellander
  2011-05-13  7:53 ` Yong Zhang
  0 siblings, 1 reply; 15+ messages in thread

From: Carl-Johan Kjellander @ 2011-05-13  7:39 UTC
To: linux-kernel

I've been running seti@home niced to 19 in the background since 1999
without any problems. No noticeable effect even when playing a movie
or a game. But since 2.6.38 the new fix-all-problems automatic
grouping has been messing a bit with me. These are some timed compiles
on my 8 cores.

time make -j12   # with seti@home running
real    4m16.753s
user    10m33.770s
sys     1m39.710s

time make -j12   # without seti@home running
real    2m12.480s
user    10m11.580s
sys     1m39.980s

echo 0 > /proc/sys/kernel/sched_autogroup_enabled
time make -j12   # no autogroup, seti@home running again
real    2m33.276s
user    10m37.540s
sys     1m43.190s

All compiles already had all files cached in RAM.

Now I can take the 10% performance hit, but not the 100% hit of
running stuff super niced in the background. Processes niced to 19
should only use spare cycles and not take up half of the cores even
with autogroup. I would really like to run autogroup since it is a
neat idea, but it can't mess up running niced processes in the
background which have been working fine for 12 years.

/Carl-Johan Kjellander
* Re: Sched_autogroup and niced processes
From: Yong Zhang @ 2011-05-13  7:53 UTC
To: Carl-Johan Kjellander
Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Ingo Molnar

Cc'ing more people.

On Fri, May 13, 2011 at 3:39 PM, Carl-Johan Kjellander
<carl-johan@klarna.com> wrote:
> I've been running seti@home niced to 19 in the background since 1999
> without any problems. No noticeable effect even when playing a movie
> or a game. But since 2.6.38 the new fix-all-problems automatic
> grouping has been messing a bit with me. These are some timed compiles
> on my 8 cores.
>
> time make -j12   # with seti@home running
> real    4m16.753s
> user    10m33.770s
> sys     1m39.710s
>
> time make -j12   # without seti@home running
> real    2m12.480s
> user    10m11.580s
> sys     1m39.980s
>
> echo 0 > /proc/sys/kernel/sched_autogroup_enabled
> time make -j12   # no autogroup, seti@home running again
> real    2m33.276s
> user    10m37.540s
> sys     1m43.190s
>
> All compiles already had all files cached in RAM.
>
> Now I can take the 10% performance hit, but not the 100% hit of
> running stuff super niced in the background. Processes niced to 19
> should only use spare cycles and not take up half of the cores even
> with autogroup. I would really like to run autogroup since it is a
> neat idea, but it can't mess up running niced processes in the
> background which have been working fine for 12 years.

Then how about changing the nice value of seti@home's autogroup?

echo 19 > /proc/'pid of seti@home'/autogroup

Thanks,
Yong

> /Carl-Johan Kjellander

--
Only stand for myself
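[Editor's note: Yong's one-liner can be wrapped up as a small helper. This is only a sketch, not anything posted in the thread: it assumes a kernel built with CONFIG_SCHED_AUTOGROUP, and the `renice_autogroup` function name is illustrative.]

```shell
#!/bin/sh
# Sketch: set the autogroup nice level for a given PID via the /proc
# interface Yong mentions. Writing a value into /proc/<pid>/autogroup
# renices the whole autogroup the task belongs to, not just that task.
renice_autogroup() {
    pid=$1
    level=$2
    if [ -w "/proc/$pid/autogroup" ]; then
        echo "$level" > "/proc/$pid/autogroup" || return 1
        # Reading it back shows something like "/autogroup-123 nice 19".
        cat "/proc/$pid/autogroup"
    else
        echo "pid $pid: no writable autogroup (no kernel support or no permission)" >&2
        return 1
    fi
}
# Usage for the seti@home PID quoted later in the thread would be:
#   renice_autogroup 23760 19
```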
* Re: Sched_autogroup and niced processes
From: Mike Galbraith @ 2011-05-13  8:05 UTC
To: Yong Zhang
Cc: Carl-Johan Kjellander, linux-kernel, Peter Zijlstra, Ingo Molnar

On Fri, 2011-05-13 at 15:53 +0800, Yong Zhang wrote:
> Cc'ing more people.
>
> On Fri, May 13, 2011 at 3:39 PM, Carl-Johan Kjellander
> <carl-johan@klarna.com> wrote:
> > I've been running seti@home niced to 19 in the background since 1999
> > without any problems. No noticeable effect even when playing a movie
> > or a game. But since 2.6.38 the new fix-all-problems automatic
> > grouping has been messing a bit with me. These are some timed compiles
> > on my 8 cores.

Heh, it's not a fix-all-problems thingy, and was never intended to be.
It's also not enabled by default.

> > time make -j12   # with seti@home running
> > real    4m16.753s
> > user    10m33.770s
> > sys     1m39.710s
> >
> > time make -j12   # without seti@home running
> > real    2m12.480s
> > user    10m11.580s
> > sys     1m39.980s
> >
> > echo 0 > /proc/sys/kernel/sched_autogroup_enabled
> > time make -j12   # no autogroup, seti@home running again
> > real    2m33.276s
> > user    10m37.540s
> > sys     1m43.190s
> >
> > All compiles already had all files cached in RAM.
> >
> > Now I can take the 10% performance hit, but not the 100% hit of
> > running stuff super niced in the background. Processes niced to 19
> > should only use spare cycles and not take up half of the cores even
> > with autogroup. I would really like to run autogroup since it is a
> > neat idea, but it can't mess up running niced processes in the
> > background which have been working fine for 12 years.
>
> Then how about changing the nice value of seti@home's autogroup?
> echo 19 > /proc/'pid of seti@home'/autogroup

Yup. Overhead and whatnot is the dark side of group scheduling. The
thing to do is to turn group scheduling off if you don't like what it
does for/to you.

	-Mike
* Re: Sched_autogroup and niced processes
From: Ingo Molnar @ 2011-05-13  8:22 UTC
To: Mike Galbraith
Cc: Yong Zhang, Carl-Johan Kjellander, linux-kernel, Peter Zijlstra

* Mike Galbraith <efault@gmx.de> wrote:

> > > time make -j12   # with seti@home running
> > > real    4m16.753s
> > > user    10m33.770s
> > > sys     1m39.710s
> > >
> > > time make -j12   # without seti@home running
> > > real    2m12.480s
> > > user    10m11.580s
> > > sys     1m39.980s

I think the practical question here is how to make seti@home run more
idle.

Are there some magic cgroup commands you could recommend for that?

Thanks,

	Ingo
* Re: Sched_autogroup and niced processes
From: Peter Zijlstra @ 2011-05-13  8:41 UTC
To: Ingo Molnar
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

On Fri, 2011-05-13 at 10:22 +0200, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
>
> > > > time make -j12   # with seti@home running
> > > > real    4m16.753s
> > > > user    10m33.770s
> > > > sys     1m39.710s
> > > >
> > > > time make -j12   # without seti@home running
> > > > real    2m12.480s
> > > > user    10m11.580s
> > > > sys     1m39.980s
>
> I think the practical question here is how to make seti@home run more
> idle.
>
> Are there some magic cgroup commands you could recommend for that?

Yong already did.
* Re: Sched_autogroup and niced processes
From: Ingo Molnar @ 2011-05-13  9:05 UTC
To: Peter Zijlstra
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> On Fri, 2011-05-13 at 10:22 +0200, Ingo Molnar wrote:
> > I think the practical question here is how to make seti@home run
> > more idle.
> >
> > Are there some magic cgroup commands you could recommend for that?
>
> Yong already did.

Oh, indeed, stupid me. This teaches me to not stop at the first
paragraph of interesting-looking emails ;-)

Could we somehow automate this:

> echo 19 > /proc/'pid of seti@home'/autogroup

and split off nice 19 tasks into separate groups and lower the group's
priority? That would fit into the general principle of auto-sched as
well.

Another thing we could do is to lower the priority of a cgroup if it
*only* runs reniced tasks. I.e. track the 'maximum priority' of cgroups
and propagate that to their weight.

This way renicing within cgroups will be more powerful and people do
not have to muck with cgroup details.

Thanks,

	Ingo
* Re: Sched_autogroup and niced processes
From: Peter Zijlstra @ 2011-05-13  9:07 UTC
To: Ingo Molnar
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

On Fri, 2011-05-13 at 11:05 +0200, Ingo Molnar wrote:
> Could we somehow automate this:
>
> > echo 19 > /proc/'pid of seti@home'/autogroup
>
> and split off nice 19 tasks into separate groups and lower the group's
> priority?

Well, I guess you can stack on all kinds of heuristics; do we want to?
I'd argue for not, keep it simple.
* Re: Sched_autogroup and niced processes
From: Carl-Johan Kjellander @ 2011-05-13  9:14 UTC
To: Peter Zijlstra
Cc: Ingo Molnar, Mike Galbraith, Yong Zhang, linux-kernel

On Fri, May 13, 2011 at 11:07 AM, Peter Zijlstra
<a.p.zijlstra@chello.nl> wrote:
> On Fri, 2011-05-13 at 11:05 +0200, Ingo Molnar wrote:
> > Could we somehow automate this:
> >
> > > echo 19 > /proc/'pid of seti@home'/autogroup

Tried this.

echo 19 > /proc/23760/autogroup
echo 1 > /proc/sys/kernel/sched_autogroup_enabled
time make -j12   # seti@home, autogroup, group reniced
real    3m9.274s
user    11m3.020s
sys     1m45.550s

So a 50% increase in compilation time. Will these autogroups remember
the 19? The boinc manager starts 8 processes, and it will spawn more as
time goes by and it finishes tasks.

> > and split off nice 19 tasks into separate groups and lower the group's
> > priority?
>
> Well, I guess you can stack on all kinds of heuristics; do we want to?
> I'd argue for not, keep it simple.

I'd argue again: I use nice 19 for stuff that I want to run on the
spare cycles. I don't want it stealing time from my important work, or
surfing, or movie viewing.

/cjk
* Re: Sched_autogroup and niced processes
From: Ingo Molnar @ 2011-05-13  9:29 UTC
To: Peter Zijlstra
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> On Fri, 2011-05-13 at 11:05 +0200, Ingo Molnar wrote:
> > Could we somehow automate this:
> >
> > > echo 19 > /proc/'pid of seti@home'/autogroup
> >
> > and split off nice 19 tasks into separate groups and lower the group's
> > priority?
>
> Well, I guess you can stack on all kinds of heuristics; do we want to?

Well, have you seen my non-heuristic suggestion:

| Another thing we could do is to lower the priority of a cgroup if it *only*
| runs reniced tasks. I.e. track the 'maximum priority' of cgroups and
| propagate that to their weight.
|
| This way renicing within cgroups will be more powerful and people do not have
| to muck with cgroup details.

A cgroup assuming the highest priority of all tasks it contains is a
pretty natural definition and extension of priorities, and it also
solves this use case.

Thanks,

	Ingo
* Re: Sched_autogroup and niced processes
From: Peter Zijlstra @ 2011-05-13  9:46 UTC
To: Ingo Molnar
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

On Fri, 2011-05-13 at 11:29 +0200, Ingo Molnar wrote:
> Well, have you seen my non-heuristic suggestion:
>
> | Another thing we could do is to lower the priority of a cgroup if it *only*
> | runs reniced tasks. I.e. track the 'maximum priority' of cgroups and
> | propagate that to their weight.
> |
> | This way renicing within cgroups will be more powerful and people do not have
> | to muck with cgroup details.
>
> A cgroup assuming the highest priority of all tasks it contains is a
> pretty natural definition and extension of priorities, and it also
> solves this use case.

Well, that's a heuristic in my book, and it totally destroys the
independence of groups from tasks (resulting in O(n) task nice
behaviour).

I really don't see why we should do this; if people don't want what it
does, don't use it. If you want something else, you can do all these
things from userspace to suit your exact needs.

We have enough knobs to set things up as you want them; no need to make
things more complicated.
* Re: Sched_autogroup and niced processes
From: Ingo Molnar @ 2011-05-13 10:04 UTC
To: Peter Zijlstra
Cc: Mike Galbraith, Yong Zhang, Carl-Johan Kjellander, linux-kernel

* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> Well, that's a heuristic in my book, and it totally destroys the
> independence of groups from tasks (resulting in O(n) task nice
> behaviour).
>
> I really don't see why we should do this; if people don't want what it
> does, don't use it. If you want something else, you can do all these
> things from userspace to suit your exact needs.
>
> We have enough knobs to set things up as you want them; no need to
> make things more complicated.

Ok, I guess you are right; propagating priorities does break the clean
hierarchy we have currently.

Still, the other important problem is that we still seem to have a bug:
even with the cgroup set to low prio, seti@home is sucking up CPU
resources ...

Thanks,

	Ingo
* Re: Sched_autogroup and niced processes
From: Mike Galbraith @ 2011-05-13 13:13 UTC
To: Ingo Molnar
Cc: Peter Zijlstra, Yong Zhang, Carl-Johan Kjellander, linux-kernel

On Fri, 2011-05-13 at 12:04 +0200, Ingo Molnar wrote:

> Still, the other important problem is that we still seem to have a bug:
> even with the cgroup set to low prio, seti@home is sucking up CPU
> resources ...

I don't see how. Other than the expected nice 19 overrun when the
nice 0 group blocks, it works fine on my little Q6600 box.

time make -j4 vmlinux   (cache hot)
real    2m22.996s
user    7m6.887s
sys     0m48.999s

echo 0 > sched_autogroup_enabled
time make -j4 vmlinux
real    2m17.052s   (darn, no free lunch)
user    7m5.483s
sys     0m49.415s

echo 1 > sched_autogroup_enabled
simultaneous massive_intr 8 9999 in a nice 19 autogroup
and time make -j4 vmlinux in a nice 0 autogroup
real    2m30.863s
user    7m5.363s
sys     0m47.359s

142.996/150.863 = .947 (a tad low)

repeat with 2 kbuild tasks/core to cut the nice 0 group's idle time
time make -j8 vmlinux
real    2m24.925s
user    7m16.327s
sys     0m50.807s

142.996/144.925 = .986 (all better)

	-Mike
* Re: Sched_autogroup and niced processes
From: Peter Zijlstra @ 2011-05-13 13:24 UTC
To: Mike Galbraith
Cc: Ingo Molnar, Yong Zhang, Carl-Johan Kjellander, linux-kernel

On Fri, 2011-05-13 at 15:13 +0200, Mike Galbraith wrote:

> > Still, the other important problem is that we still seem to have a bug:
> > even with the cgroup set to low prio, seti@home is sucking up CPU
> > resources ...
>
> I don't see how.

Agreed. With two groups, a spinner each, and then setting the group
weight low, I get things like:

 1927 root      20   0  105m  672  192 R 98.1  0.0   0:20.50 bash
 1933 root      20   0  105m  656  180 R  2.0  0.0   0:07.29 bash

So it all just seems to work as advertised.
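[Editor's note: Peter's two-spinner check can be reproduced by hand with the cgroup v1 "cpu" controller that was current at the time. The sketch below is not from the thread: the mount point, the `fg`/`bg` group names, and the `spinner_demo` wrapper are all assumptions, it needs root, and it skips itself when no writable v1 hierarchy is present. The value 15 is the CFS task weight corresponding to nice 19, against the default group weight of 1024.]

```shell
#!/bin/sh
# Sketch of the two-group spinner test with the cgroup v1 "cpu"
# controller. Requires root and a mounted v1 cpu hierarchy; otherwise
# it prints "skipped" and does nothing.
spinner_demo() {
    cg=/sys/fs/cgroup/cpu
    if [ ! -w "$cg" ] || [ ! -f "$cg/cpu.shares" ]; then
        echo "skipped: no writable v1 cpu hierarchy at $cg"
        return 0
    fi
    mkdir -p "$cg/fg" "$cg/bg"
    ( while :; do :; done ) & fg=$!
    ( while :; do :; done ) & bg=$!
    echo "$fg" > "$cg/fg/tasks"
    echo "$bg" > "$cg/bg/tasks"
    # 15 is the CFS weight for nice 19, so when both groups compete
    # for one CPU, bg should get roughly 15/(15+1024) of it.
    echo 15 > "$cg/bg/cpu.shares"
    sleep 3    # watch top(1) in another terminal meanwhile
    kill "$fg" "$bg"
    echo "done"
}
spinner_demo
```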
* Re: Sched_autogroup and niced processes
From: Carl-Johan Kjellander @ 2011-05-13 13:36 UTC
To: Mike Galbraith
Cc: Ingo Molnar, Peter Zijlstra, Yong Zhang, linux-kernel

On Fri, May 13, 2011 at 3:13 PM, Mike Galbraith <efault@gmx.de> wrote:
> On Fri, 2011-05-13 at 12:04 +0200, Ingo Molnar wrote:
>
> > Still, the other important problem is that we still seem to have a bug:
> > even with the cgroup set to low prio, seti@home is sucking up CPU
> > resources ...
>
> I don't see how. Other than the expected nice 19 overrun when the
> nice 0 group blocks, it works fine on my little Q6600 box.

Dunno if I've done it correctly, but I've set 19 on the boinc manager's
autogroup and on some of the seti@home clients, but the clients of
course keep changing.

boinc     1172  0.1  0.0 81896 13408 ?  SN   May09   7:00 /usr/bin/boinc --check_all_logins --redirectio --dir /var/lib/boi
boinc    18983 82.6  0.3 98172 65224 ?  RNl  08:10 364:28  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc    19162 83.0  0.4 98836 65948 ?  RNl  08:16 360:32  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc    20295 84.6  0.3 98356 65468 ?  RNl  08:57 332:42  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc    22980 82.5  0.3 97880 64992 ?  RNl  09:32 295:55  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc    23760 81.6  0.3 98064 65168 ?  RNl  09:59 270:25  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc      634 83.1  0.3 98224 65276 ?  RNl  11:02 223:24  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc    31758 83.5  0.4 99116 65736 ?  RNl  11:33 198:48  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhanced
boinc     5931 81.4  0.3 98456 65464 ?  RNl  14:06  68:55  \_ ../../projects/setiathome.berkeley.edu/setiathome_enhan

But when I build on an Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz, it's
still a lot slower.

time make -j12
real    2m58.437s
user    10m58.010s
sys     1m45.610s

I can try the same thing at home on my Q6600 machine if I upgrade it,
because of course the Core i7 doesn't actually have 8 cores, they are
just hyperthreaded. It might be a factor.

Or am I doing something horribly wrong when I try to set the autogroup
to 19?

/cjk
* Re: Sched_autogroup and niced processes
From: Mike Galbraith @ 2011-05-13 14:06 UTC
To: Carl-Johan Kjellander
Cc: Ingo Molnar, Peter Zijlstra, Yong Zhang, linux-kernel

On Fri, 2011-05-13 at 15:36 +0200, Carl-Johan Kjellander wrote:

> I can try the same thing at home on my Q6600 machine if I upgrade it,
> because of course the Core i7 doesn't actually have 8 cores, they are
> just hyperthreaded. It might be a factor.
>
> Or am I doing something horribly wrong when I try to set the autogroup
> to 19?

You seemingly haven't niced the parent's group, else new clients would
behave themselves. cat /proc/NN/autogroup will show the nice level of
the group of pid NN.

(Watch out you don't nice down something like kdeinit ;)

	-Mike
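[Editor's note: Mike's point about the parent's group suggests sweeping the whole account rather than chasing individual clients. A sketch along those lines, not from the thread: it assumes the BOINC processes run as a dedicated user, and the `boinc` account name and `autogroup_renice_user` helper name are illustrative.]

```shell
#!/bin/sh
# Sketch: renice the autogroup of every process owned by a user.
# Children inherit their parent's autogroup, so nicing the parent's
# group is what matters, but sweeping every PID the user owns is
# simple and harmless; it prints how many writes succeeded.
autogroup_renice_user() {
    user=$1
    level=$2
    count=0
    for pid in $(pgrep -u "$user" 2>/dev/null); do
        if [ -w "/proc/$pid/autogroup" ]; then
            echo "$level" > "/proc/$pid/autogroup" && count=$((count + 1))
        fi
    done
    echo "$count"
}
# Usage would be: autogroup_renice_user boinc 19
# Afterwards, "cat /proc/<pid>/autogroup" for any client should end
# in "nice 19".
```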