trinity.vger.kernel.org archive mirror
* Fwd: Trinity 1.4 tarball release.
@ 2014-05-12 17:43 Dave Jones
  2014-05-13  6:43 ` Michael Ellerman
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Jones @ 2014-05-12 17:43 UTC (permalink / raw)
  To: trinity

[-- Attachment #1: Type: text/plain, Size: 116 bytes --]

heh, I knew I'd forget something. Hopefully "cc'ing the trinity list"
was the only thing this time around..

	Dave


[-- Attachment #2: Type: message/rfc822, Size: 1439 bytes --]

From: Dave Jones <davej@redhat.com>
To: Linux Kernel <linux-kernel@vger.kernel.org>
Cc: linux-mm@kvack.org
Subject: Trinity 1.4 tarball release.
Date: Mon, 12 May 2014 13:14:09 -0400
Message-ID: <20140512171409.GA32653@redhat.com>

I finally got around to cutting a new release of trinity, hopefully
putting off the "are you running git, or tarball?" questions for a while.

Big changes since 1.3 include more targeted fuzzing of VM-related
syscalls, which, judging from the fallout over the last six months, seems
to be working quite well.

Trinity should now also scale up a lot better on bigger machines with lots of cores.
It should pick a reasonable default number of child processes, which you
can still override with -C as before, but now without any restrictions other
than available memory.  (I'd love to hear stories of people running it
on some of the more extreme systems, especially if something interesting broke.)
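[Editor's note: a rough illustration of the kind of CPU-scaled default the
paragraph above describes. The "4 per CPU" multiplier is a made-up
placeholder, not trinity's actual formula.]

```c
#include <assert.h>
#include <unistd.h>

/* Sketch of a CPU-scaled default child count. The multiplier is a
 * placeholder for illustration, not trinity's actual formula. */
static long default_children(void)
{
	long cpus = sysconf(_SC_NPROCESSORS_ONLN);

	if (cpus < 1)	/* sysconf can fail; fall back to one CPU */
		cpus = 1;
	return cpus * 4;
}
```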

Info, tarballs, and pointers to git are, as always, at
http://codemonkey.org.uk/projects/trinity/

Thanks to everyone who sent patches, chased down interesting kernel bugs
trinity found, or gave me ideas/feedback. Your input has been much
appreciated.

	Dave


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-12 17:43 Fwd: Trinity 1.4 tarball release Dave Jones
@ 2014-05-13  6:43 ` Michael Ellerman
  2014-05-13 14:00   ` Dave Jones
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2014-05-13  6:43 UTC (permalink / raw)
  To: Dave Jones; +Cc: trinity

On Mon, 2014-05-12 at 13:43 -0400, Dave Jones wrote:
> heh, I knew I'd forget something. Hopefully "cc'ing the trinity list"
> was the only thing this time around..


Hi Dave,

I gave this a spin on a system of mine here.

I'm consistently ending up with a watchdog that is spinning using 100% cpu.

strace shows it spinning calling kill:

kill(17833, SIG_0)                      = -1 ESRCH (No such process)
kill(17833, SIG_0)                      = -1 ESRCH (No such process)
kill(17833, SIG_0)                      = -1 ESRCH (No such process)
kill(17833, SIG_0)                      = -1 ESRCH (No such process)
...

Which gdb agrees with:

(gdb) bt
#0  0x1001c790 in kill@plt ()
#1  0x10001984 in __check_main () at watchdog.c:158
#2  0x10010510 in check_main_alive () at watchdog.c:185
#3  watchdog () at watchdog.c:407
#4  init_watchdog () at watchdog.c:484
#5  0x10001d04 in main (argc=1, argv=<optimized out>) at trinity.c:128


It's looping around:

183			while (shm->mainpid != 0) {
(gdb) n
185				ret = __check_main();
(gdb)
186				if (ret == TRUE) {
(gdb)
183			while (shm->mainpid != 0) {
(gdb)
185				ret = __check_main();
(gdb)
186				if (ret == TRUE) {
(gdb)
183			while (shm->mainpid != 0) {
(gdb)
185				ret = __check_main();
(gdb)
186				if (ret == TRUE) {


shm->mainpid is 17833, which agrees with strace, and that process is indeed
no longer running.
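
[Editor's note: the probe the watchdog is spinning on is the standard
kill-with-signal-0 liveness check; a minimal standalone sketch of that
check (an illustration, not trinity's actual code):]

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* kill(pid, 0) performs the existence and permission checks without
 * delivering any signal; ESRCH means the process is gone. */
static bool pid_alive(pid_t pid)
{
	if (kill(pid, 0) == 0)
		return true;
	return errno == EPERM;	/* exists, but we may not signal it */
}
```

The probe itself is fine; the bug described below is that nothing ever
clears shm->mainpid, so the loop keeps probing a pid that will always
return ESRCH.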

We are bailing out of __check_main() before clearing shm->mainpid because we
see that we are already exiting.

        if (ret == -1) {
                /* Are we already exiting ? */
                if (shm->exit_reason != STILL_RUNNING)
                        return FALSE;

                /* No. Check what happened. */
                if (errno == ESRCH) {


161			if (shm->exit_reason != STILL_RUNNING)
(gdb) print shm->exit_reason
$6 = EXIT_FORK_FAILURE

It looks like the only other place shm->mainpid is written is in
trinity.c:main(), which is dead. So we are stuck forever as far as I can tell.


The last thing in trinity.log is:

[main] couldn't create child! (Cannot allocate memory)

From main.c:69:

	output(0, "couldn't create child! (%s)\n", strerror(errno));
	shm->exit_reason = EXIT_FORK_FAILURE;
	exit(EXIT_FAILURE);


So we exited directly and didn't let the code in main() clear shm->mainpid.

Not sure what the correct fix is. We could drop the check of shm->exit_reason
in __check_main(), but presumably that is there for a good reason.

cheers



* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-13  6:43 ` Michael Ellerman
@ 2014-05-13 14:00   ` Dave Jones
  2014-05-14  7:26     ` Michael Ellerman
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Jones @ 2014-05-13 14:00 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: trinity

On Tue, May 13, 2014 at 04:43:48PM +1000, Michael Ellerman wrote:

 > I'm consistently ending up with a watchdog that is spinning using 100% cpu.
 > 
 > We are bailing out of __check_main() before clearing shm->mainpid because we
 > see that we are already exiting.
 > 
 >         if (ret == -1) {
 >                 /* Are we already exiting ? */
 >                 if (shm->exit_reason != STILL_RUNNING)
 >                         return FALSE;
 > 
 >                 /* No. Check what happened. */
 >                 if (errno == ESRCH) {
 > 
 > 
 > 161			if (shm->exit_reason != STILL_RUNNING)
 > (gdb) print shm->exit_reason
 > $6 = EXIT_FORK_FAILURE
 > 
 > It looks like the only other place shm->mainpid is written is in
 > trinity.c:main(), which is dead. So we are stuck forever as far as I can tell.
 
Argh. I hit this exactly once a few weeks back, and thought I had fixed it.

 > The last thing in trinity.log is:
 > 
 > [main] couldn't create child! (Cannot allocate memory)
 > 
 > >From main.c:69:
 > 
 > 	output(0, "couldn't create child! (%s)\n", strerror(errno));
 > 	shm->exit_reason = EXIT_FORK_FAILURE;
 > 	exit(EXIT_FAILURE);
 > 
 > 
 > So we exited directly and didn't let the code in main() clear shm->mainpid.
 > 
 > Not sure what the correct fix is.

I think just clearing mainpid before we call exit is the right thing to
do here.  I'll audit all the other exit() calls too, as this might be a
problem in other paths.
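
[Editor's note: a minimal sketch of that shape of fix, with a stand-in
shm struct. The names follow the thread (mainpid, exit_reason); this is
an illustration, not the actual trinity change.]

```c
#include <assert.h>
#include <sys/types.h>

/* stand-in for trinity's shared-memory state */
struct shm_s {
	pid_t mainpid;
	int exit_reason;
};

/* Clear mainpid in the same place exit_reason is set, so the
 * watchdog's "while (shm->mainpid != 0)" loop can terminate even
 * when main exits without reaching its normal cleanup path. */
static void mark_main_exited(struct shm_s *shm, int reason)
{
	shm->exit_reason = reason;
	shm->mainpid = 0;
}
```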

 > We could drop the check of shm->exit_reason
 > in __check_main(), but presumably that is there for a good reason.

It's mostly cosmetic. It would previously end up in that path on a
successful exit, and then complain that main had "disappeared".

	Dave


* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-13 14:00   ` Dave Jones
@ 2014-05-14  7:26     ` Michael Ellerman
  2014-05-14 13:35       ` Dave Jones
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2014-05-14  7:26 UTC (permalink / raw)
  To: Dave Jones; +Cc: trinity

On Tue, 2014-05-13 at 10:00 -0400, Dave Jones wrote:
> On Tue, May 13, 2014 at 04:43:48PM +1000, Michael Ellerman wrote:
> 
>  > I'm consistently ending up with a watchdog that is spinning using 100% cpu.
>  > 
>  > We are bailing out of __check_main() before clearing shm->mainpid because we
>  > see that we are already exiting.
>  > 
>  >         if (ret == -1) {
>  >                 /* Are we already exiting ? */
>  >                 if (shm->exit_reason != STILL_RUNNING)
>  >                         return FALSE;
>  > 
>  >                 /* No. Check what happened. */
>  >                 if (errno == ESRCH) {
>  > 
>  > 
>  > 161			if (shm->exit_reason != STILL_RUNNING)
>  > (gdb) print shm->exit_reason
>  > $6 = EXIT_FORK_FAILURE
>  > 
>  > It looks like the only other place shm->mainpid is written is in
>  > trinity.c:main(), which is dead. So we are stuck forever as far as I can tell.
>  
> Argh. I hit this exactly once a few weeks back, and thought I had fixed it.
> 
>  > The last thing in trinity.log is:
>  > 
>  > [main] couldn't create child! (Cannot allocate memory)
>  > 
>  > >From main.c:69:
>  > 
 >  > 	output(0, "couldn't create child! (%s)\n", strerror(errno));
>  > 	shm->exit_reason = EXIT_FORK_FAILURE;
>  > 	exit(EXIT_FAILURE);
>  > 
>  > 
>  > So we exited directly and didn't let the code in main() clear shm->mainpid.
>  > 
>  > Not sure what the correct fix is.
> 
> I think just clearing mainpid before we call exit is the right thing to
> do here.  I'll audit all the other exit() calls too, as this might be a
> problem in other paths.

Thanks. That fix is working for me.

It still exits after a minute or so, because it fails to fork a child in
fork_children().

I have 64 cpus and 16GB of RAM, so that's only 250MB per child.

If I reduce to 32 children then it runs much longer.

I wonder though, should failing to fork a child be a fatal error? Or could it
just skip that child and continue?

cheers



* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-14  7:26     ` Michael Ellerman
@ 2014-05-14 13:35       ` Dave Jones
  2014-05-22  2:40         ` Michael Ellerman
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Jones @ 2014-05-14 13:35 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: trinity

On Wed, May 14, 2014 at 05:26:29PM +1000, Michael Ellerman wrote:

 > >  > Not sure what the correct fix is.
 > > 
 > > I think just clearing mainpid before we call exit is the right thing to
 > > do here.  I'll audit all the other exit() calls too, as this might be a
 > > problem in other paths.
 > 
 > Thanks. That fix is working for me.
 > 
 > It still exits after a minute or so, because it fails to fork a child in
 > fork_children().
 > 
 > I have 64 cpus and 16GB of RAM, so that's only 250MB per child.
 > 
 > If I reduce to 32 children then it runs much longer.
 > 
 > I wonder though, should failing to fork a child be a fatal error? Or could it
 > just skip that child and continue?

Maybe.  It could wait until another child exits before retrying.
Something like the patch below maybe.  I think I tried something like
this before though, and it resulted in a flood of failed forks.

Let me know how this works out.

	Dave

diff --git a/main.c b/main.c
index f393f81ae0ba..be7108287dc9 100644
--- a/main.c
+++ b/main.c
@@ -79,6 +79,10 @@ static void fork_children(void)
 			_exit(EXIT_SUCCESS);
 		} else {
 			if (pid == -1) {
+				/* We failed, wait for a child to exit before retrying. */
+				if (shm->running_childs > 0)
+					return;
+
 				output(0, "couldn't create child! (%s)\n", strerror(errno));
 				shm->exit_reason = EXIT_FORK_FAILURE;
 				exit_main_fail();

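[Editor's note: the policy the patch encodes can be stated as a small
predicate, sketched below. Narrowing the retry to the memory-related
errnos (EAGAIN/ENOMEM) is an editorial addition; the patch above simply
retries on any fork failure while children remain.]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* true:  back off and let the main loop retry once a child has exited
 *        and possibly freed resources;
 * false: nothing will improve, so treat the failure as fatal. */
static bool should_retry_fork(int fork_errno, int running_children)
{
	if (running_children <= 0)
		return false;
	return fork_errno == EAGAIN || fork_errno == ENOMEM;
}
```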

* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-14 13:35       ` Dave Jones
@ 2014-05-22  2:40         ` Michael Ellerman
  2014-05-22  3:40           ` Dave Jones
  2014-05-22  3:41           ` Michael Ellerman
  0 siblings, 2 replies; 10+ messages in thread
From: Michael Ellerman @ 2014-05-22  2:40 UTC (permalink / raw)
  To: Dave Jones; +Cc: trinity

On Wed, 2014-05-14 at 09:35 -0400, Dave Jones wrote:
> On Wed, May 14, 2014 at 05:26:29PM +1000, Michael Ellerman wrote:
> 
>  > >  > Not sure what the correct fix is.
>  > > 
>  > > I think just clearing mainpid before we call exit is the right thing to
>  > > do here.  I'll audit all the other exit() calls too, as this might be a
>  > > problem in other paths.
>  > 
>  > Thanks. That fix is working for me.
>  > 
>  > It still exits after a minute or so, because it fails to fork a child in
>  > fork_children().
>  > 
>  > I have 64 cpus and 16GB of RAM, so that's only 250MB per child.
>  > 
>  > If I reduce to 32 children then it runs much longer.
>  > 
>  > I wonder though, should failing to fork a child be a fatal error? Or could it
 >  > just skip that child and continue?
> 
> Maybe.  It could wait until another child exits before retrying.
> Something like the patch below maybe.  I think I tried something like
> this before though, and it resulted in a flood of failed forks.
> 
> Let me know how this works out.

Sorry I didn't get back to you on this. I've been chasing a bug that trinity
found for us.

Running aae6d6a I've seen this once, but only once:

[watchdog] Sanity check failed! Found pid 1885550132!
[watchdog] problem checking on pid 112 (1:Operation not permitted)
[watchdog] pid 1885550132 has disappeared (oom-killed maybe?). Reaping.
[watchdog] pid 678326126 has disappeared (oom-killed maybe?). Reaping.
[watchdog] pid 1697185792 has disappeared (oom-killed maybe?). Reaping.
[watchdog] Reaped 3 dead children
Killed

cheers



* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-22  2:40         ` Michael Ellerman
@ 2014-05-22  3:40           ` Dave Jones
  2014-05-22  3:43             ` Michael Ellerman
  2014-05-22  3:41           ` Michael Ellerman
  1 sibling, 1 reply; 10+ messages in thread
From: Dave Jones @ 2014-05-22  3:40 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: trinity

On Thu, May 22, 2014 at 12:40:36PM +1000, Michael Ellerman wrote:
 
 > Sorry I didn't get back to you on this. I've been chasing a bug that trinity
 > found for us.
 > 
 > Running aae6d6a I've seen this once, but only once:
 > 
 > [watchdog] Sanity check failed! Found pid 1885550132!
 > [watchdog] problem checking on pid 112 (1:Operation not permitted)
 > [watchdog] pid 1885550132 has disappeared (oom-killed maybe?). Reaping.
 > [watchdog] pid 678326126 has disappeared (oom-killed maybe?). Reaping.
 > [watchdog] pid 1697185792 has disappeared (oom-killed maybe?). Reaping.
 > [watchdog] Reaped 3 dead children
 > Killed

If it happens again, check /proc/sys/kernel/pid_max.
I wonder if something scribbled in there.
(We only read it on startup, so if it changes under us and we start
 getting pids out of our expected range, that could go awry.)

I'll add some more robustness to that check tomorrow.
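
[Editor's note: the extra robustness could look something like this
(hypothetical; the real check lives in trinity's watchdog): validate any
pid read from shared memory against pid_max before trusting it.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

/* A pid pulled out of shared memory is only plausible if it is
 * positive and below pid_max; a value like 1885550132 against a
 * pid_max of 65536 points at corruption, not a real child. */
static bool pid_is_sane(pid_t pid, long pid_max)
{
	return pid > 1 && (long)pid < pid_max;
}

/* trinity reads this once at startup; rereading it would also catch a
 * fuzzed write to the sysctl */
static long read_pid_max(void)
{
	long max = 32768;	/* historical default if the read fails */
	FILE *f = fopen("/proc/sys/kernel/pid_max", "r");

	if (f) {
		if (fscanf(f, "%ld", &max) != 1)
			max = 32768;
		fclose(f);
	}
	return max;
}
```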

Though looking at the pids in the dump above, I wonder if there's
something more screwed up, like we corrupted the ptrs to the pid map
in the shm.

	Dave


* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-22  2:40         ` Michael Ellerman
  2014-05-22  3:40           ` Dave Jones
@ 2014-05-22  3:41           ` Michael Ellerman
  2014-05-22  3:50             ` Dave Jones
  1 sibling, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2014-05-22  3:41 UTC (permalink / raw)
  To: Dave Jones; +Cc: trinity

On Thu, 2014-05-22 at 12:40 +1000, Michael Ellerman wrote:
> On Wed, 2014-05-14 at 09:35 -0400, Dave Jones wrote:
> > On Wed, May 14, 2014 at 05:26:29PM +1000, Michael Ellerman wrote:
> > 
> >  > >  > Not sure what the correct fix is.
> >  > > 
> >  > > I think just clearing mainpid before we call exit is the right thing to
> >  > > do here.  I'll audit all the other exit() calls too, as this might be a
> >  > > problem in other paths.
> >  > 
> >  > Thanks. That fix is working for me.
> >  > 
> >  > It still exits after a minute or so, because it fails to fork a child in
> >  > fork_children().
> >  > 
> >  > I have 64 cpus and 16GB of RAM, so that's only 250MB per child.
> >  > 
> >  > If I reduce to 32 children then it runs much longer.
> >  > 
> >  > I wonder though, should failing to fork a child be a fatal error? Or could it
> >  > just skip that child and continue?
> > 
> > Maybe.  It could wait until another child exits before retrying.
> > Something like the patch below maybe.  I think I tried something like
> > this before though, and it resulted in a flood of failed forks.
> > 
> > Let me know how this works out.
> 
> Sorry I didn't get back to you on this. I've been chasing a bug that trinity
> found for us.
> 
> Running aae6d6a I've seen this once, but only once:

And this one, which looks more fun :)

$ trinity -q
Trinity v1.5pre  Dave Jones <davej@redhat.com>
Done parsing arguments.
Marking all syscalls as enabled.
[init] Enabled 323 syscalls. Disabled 0 syscalls.
[init] Using pid_max = 65536
[init] Started watchdog process, PID is 47158
[main] Main thread is alive.
[main] Registered 6 fd providers.
[main] Couldn't find socket cachefile. Regenerating.
[main] created 375 sockets
[main] Generating file descriptors
[main] Added 276 filenames from /dev
[main] Something went wrong during nftw(/proc). (-1:Value too large for defined data type)
[main] Added 10283 filenames from /sys
[child30:56679] nfsservctl (168) returned ENOSYS, marking as inactive.
[child30:56679] stat (18) returned ENOSYS, marking as inactive.
[child1:56650] acct (51) returned ENOSYS, marking as inactive.
[child10:56659] quotactl (131) returned ENOSYS, marking as inactive.
[child28:56677] lstat (84) returned ENOSYS, marking as inactive.
[child15:56664] sysctl (149) returned ENOSYS, marking as inactive.
[watchdog] Watchdog is alive. (pid:47158)
[child6:56655] ipc (117) returned ENOSYS, marking as inactive.
[child11:56660] BUG!: CHILD (pid:56660) GOT REPARENTED! parent pid:47159. Watchdog pid:47158
[child11:56660] BUG!: Last syscalls:
[child11:56660] [0]  pid:56649 call:io_getevents callno:23
[child11:56660] [1]  pid:56650 call:syslog callno:23
[child11:56660] [2]  pid:56651 call:getxattr callno:78
[child11:56660] [3]  pid:56652 call:set_mempolicy callno:3
[child11:56660] [4]  pid:56653 call:getdents64 callno:12
[child11:56660] [5]  pid:56654 call:setgroups callno:8
[child11:56660] [6]  pid:56655 call:rt_sigpending callno:31
[child11:56660] [7]  pid:56656 call:mmap callno:15
[child11:56660] [8]  pid:56657 call:setxattr callno:16
[child11:56660] [9]  pid:56658 call:delete_module callno:6
[child11:56660] [10]  pid:56659 call:timer_delete callno:122
[child11:56660] [11]  pid:56660 call:clock_getres callno:279
[child11:56660] [12]  pid:56661 call:open callno:20
[child11:56660] [13]  pid:56662 call:setregid callno:176
[child11:56660] [14]  pid:56663 call:mount callno:24
[child11:56660] [15]  pid:56664 call:mkdir callno:106
[child11:56660] [16]  pid:56665 call:unshare callno:72
[child11:56660] [17]  pid:56666 call:sched_get_priority_max callno:47
[child11:56660] [18]  pid:56667 call:sched_getparam callno:158
[child11:56660] [19]  pid:56668 call:linkat callno:38
[child11:56660] [20]  pid:56669 call:utime callno:13
[child11:56660] [21]  pid:56670 call:epoll_ctl callno:12
[child11:56660] [22]  pid:56671 call:fremovexattr callno:33
[child11:56660] [23]  pid:56672 call:mincore callno:117
[child11:56660] [24]  pid:56673 call:init_module callno:136
[child11:56660] [25]  pid:56674 call:inotify_init1 callno:20
[child11:56660] [26]  pid:56675 call:ssetmask callno:45
[child11:56660] [27]  pid:56676 call:mmap callno:46
[child11:56660] [28]  pid:56677 call:access callno:115
[child11:56660] [29]  pid:56678 call:ioprio_set callno:63
[child11:56660] [30]  pid:56679 call:old_readdir callno:132
[child11:56660] [31]  pid:56680 call:gettimeofday callno:89
I/O possible
$

cheers



* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-22  3:40           ` Dave Jones
@ 2014-05-22  3:43             ` Michael Ellerman
  0 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2014-05-22  3:43 UTC (permalink / raw)
  To: Dave Jones; +Cc: trinity

On Wed, 2014-05-21 at 23:40 -0400, Dave Jones wrote:
> On Thu, May 22, 2014 at 12:40:36PM +1000, Michael Ellerman wrote:
>  
>  > Sorry I didn't get back to you on this. I've been chasing a bug that trinity
>  > found for us.
>  > 
>  > Running aae6d6a I've seen this once, but only once:
>  > 
>  > [watchdog] Sanity check failed! Found pid 1885550132!
>  > [watchdog] problem checking on pid 112 (1:Operation not permitted)
>  > [watchdog] pid 1885550132 has disappeared (oom-killed maybe?). Reaping.
>  > [watchdog] pid 678326126 has disappeared (oom-killed maybe?). Reaping.
>  > [watchdog] pid 1697185792 has disappeared (oom-killed maybe?). Reaping.
>  > [watchdog] Reaped 3 dead children
>  > Killed
> 
> If it happens again, check /proc/sys/kernel/pid_max.
> I wonder if something scribbled in there.

It hasn't happened again, but I haven't rebooted since it did, and I still have:

$ cat /proc/sys/kernel/pid_max
65536

> Though looking at the pids in the dump above, I wonder if there's
> something more screwed up, like we corrupted the ptrs to the pid map
> in the shm.

Yeah it looks more like that to me.

cheers




* Re: Fwd: Trinity 1.4 tarball release.
  2014-05-22  3:41           ` Michael Ellerman
@ 2014-05-22  3:50             ` Dave Jones
  0 siblings, 0 replies; 10+ messages in thread
From: Dave Jones @ 2014-05-22  3:50 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: trinity

On Thu, May 22, 2014 at 01:41:13PM +1000, Michael Ellerman wrote:
 
 > [main] Registered 6 fd providers.
 > [main] Couldn't find socket cachefile. Regenerating.
 > [main] created 375 sockets
 > [main] Generating file descriptors
 > [main] Added 276 filenames from /dev
 > [main] Something went wrong during nftw(/proc). (-1:Value too large for defined data type)

That's curious, but probably not fatal.

 > [child6:56655] ipc (117) returned ENOSYS, marking as inactive.
 > [child11:56660] BUG!: CHILD (pid:56660) GOT REPARENTED! parent pid:47159. Watchdog pid:47158

This is usually indicative of the main pid segfaulting.
If you run with -D, you'll get coredumps (though also of the child
processes, so there's going to be a lot of them if it runs long enough).
You should be able to find the one that corresponds to the main pid though,
and get a backtrace.

 > I/O possible

Also weird.

I'll dig into it some more tomorrow if you don't beat me to it.

As a last resort, you might be able to bisect between today's changes
(I'd say start at c19c0ef3973bf816025a2aef5ae5dbd00ca5c9eb).
There's only one bad commit in that range, which is
4401f6d0f0bfdeb92595520dc3be23dee32efc77.  If the bisect lands on that,
do git show 4401f6d0f0bfdeb92595520dc3be23dee32efc77 | patch -p1 on top of it.

	Dave

