linux-raid.vger.kernel.org archive mirror
* [PATCH] MD: Allow restarting an interrupted incremental recovery.
@ 2011-10-17 23:22 Andrei Warkentin
  2011-10-18  1:22 ` NeilBrown
  0 siblings, 1 reply; 7+ messages in thread
From: Andrei Warkentin @ 2011-10-17 23:22 UTC (permalink / raw)
  To: linux-raid; +Cc: Andrei Warkentin, Neil Brown

If an incremental recovery was interrupted, a subsequent
re-add will result in a full recovery, even though an
incremental should be possible (seen with raid1).

Solve this problem by not updating the superblock on the
recovering device until the array is no longer degraded.

Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrei Warkentin <andreiw@vmware.com>
---
 drivers/md/md.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5404b22..8ebbae4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2444,9 +2444,12 @@ repeat:
 			continue; /* no noise on spare devices */
 		if (test_bit(Faulty, &rdev->flags))
 			dprintk("(skipping faulty ");
+		else if (rdev->saved_raid_disk != -1)
+			dprintk("(skipping incremental s/r ");
 
 		dprintk("%s ", bdevname(rdev->bdev,b));
-		if (!test_bit(Faulty, &rdev->flags)) {
+		if (!test_bit(Faulty, &rdev->flags) &&
+		    rdev->saved_raid_disk == -1) {
 			md_super_write(mddev,rdev,
 				       rdev->sb_start, rdev->sb_size,
 				       rdev->sb_page);
@@ -7353,15 +7356,16 @@ static void reap_sync_thread(mddev_t *mddev)
 	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 	    mddev->pers->finish_reshape)
 		mddev->pers->finish_reshape(mddev);
-	md_update_sb(mddev, 1);
 
 	/* if array is no-longer degraded, then any saved_raid_disk
-	 * information must be scrapped
+	 * information must be scrapped, and superblock for
+	 * incrementally recovered device written out.
 	 */
 	if (!mddev->degraded)
 		list_for_each_entry(rdev, &mddev->disks, same_set)
 			rdev->saved_raid_disk = -1;
 
+	md_update_sb(mddev, 1);
 	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
 	clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
 	clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-- 
1.7.4.1
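
To make the first hunk concrete, here is a minimal userspace sketch of the
superblock-write loop with this change applied (an illustration only: the
rdev struct, its fields and the write_super() helper are simplified stand-ins
for the kernel's md_rdev and md_super_write(), not real kernel API):

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's md_rdev. */
struct rdev {
	const char *name;
	bool faulty;		/* models the Faulty bit in rdev->flags */
	int saved_raid_disk;	/* != -1 while an incremental recovery is pending */
};

/* Stand-in for md_super_write(): persist one member's superblock. */
static void write_super(const struct rdev *rdev)
{
	printf("md: writing superblock to %s\n", rdev->name);
}

static void update_sb_all(const struct rdev *rdevs, int n)
{
	for (int i = 0; i < n; i++) {
		const struct rdev *rdev = &rdevs[i];

		if (rdev->faulty) {
			printf("md: %s (skipping faulty)\n", rdev->name);
			continue;
		}
		/* The fix: while the device could still restart an
		 * interrupted incremental recovery (saved_raid_disk
		 * != -1), leave its on-disk superblock alone. */
		if (rdev->saved_raid_disk != -1) {
			printf("md: %s (skipping incremental s/r)\n",
			       rdev->name);
			continue;
		}
		write_super(rdev);
	}
}

int main(void)
{
	const struct rdev disks[] = {
		{ "sda1", false, -1 },	/* healthy member: written */
		{ "sdb1", false,  1 },	/* mid-incremental-recovery: skipped */
		{ "sdc1", true,  -1 },	/* faulty member: skipped */
	};

	update_sb_all(disks, 3);
	return 0;
}

Run against the sample array, only the healthy member gets its superblock
written; the device whose incremental recovery was interrupted keeps its old
superblock, which is what lets a later re-add restart the incremental
recovery instead of falling back to a full one.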



* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-17 23:22 [PATCH] MD: Allow restarting an interrupted incremental recovery Andrei Warkentin
@ 2011-10-18  1:22 ` NeilBrown
  2011-10-18 17:07   ` Andrei Warkentin
  0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2011-10-18  1:22 UTC (permalink / raw)
  To: Andrei Warkentin; +Cc: linux-raid


On Mon, 17 Oct 2011 19:22:11 -0400 Andrei Warkentin <andreiw@vmware.com>
wrote:

> If an incremental recovery was interrupted, a subsequent
> re-add will result in a full recovery, even though an
> incremental should be possible (seen with raid1).
> 
> Solve this problem by not updating the superblock on the
> recovering device until the array is no longer degraded.
> 
> Cc: Neil Brown <neilb@suse.de>
> Signed-off-by: Andrei Warkentin <andreiw@vmware.com>
> ---
>  drivers/md/md.c |   10 +++++++---
>  1 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 5404b22..8ebbae4 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -2444,9 +2444,12 @@ repeat:
>  			continue; /* no noise on spare devices */
>  		if (test_bit(Faulty, &rdev->flags))
>  			dprintk("(skipping faulty ");
> +		else if (rdev->saved_raid_disk != -1)
> +			dprintk("(skipping incremental s/r ");
>  
>  		dprintk("%s ", bdevname(rdev->bdev,b));
> -		if (!test_bit(Faulty, &rdev->flags)) {
> +		if (!test_bit(Faulty, &rdev->flags) &&
> +		    rdev->saved_raid_disk == -1) {
>  			md_super_write(mddev,rdev,
>  				       rdev->sb_start, rdev->sb_size,
>  				       rdev->sb_page);
> @@ -7353,15 +7356,16 @@ static void reap_sync_thread(mddev_t *mddev)
>  	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
>  	    mddev->pers->finish_reshape)
>  		mddev->pers->finish_reshape(mddev);
> -	md_update_sb(mddev, 1);
>  
>  	/* if array is no-longer degraded, then any saved_raid_disk
> -	 * information must be scrapped
> +	 * information must be scrapped, and superblock for
> +	 * incrementally recovered device written out.
>  	 */
>  	if (!mddev->degraded)
>  		list_for_each_entry(rdev, &mddev->disks, same_set)
>  			rdev->saved_raid_disk = -1;
>  
> +	md_update_sb(mddev, 1);
>  	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
>  	clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
>  	clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);


Thanks.  I've applied this and pushed it to my for-next branch.

My current HEAD uses pr_debug instead of dprintk, so I fixed that.

Also I realised that clearing saved_raid_disk when an array is not degraded
is no longer enough.  We also need to clear it when the device becomes
In_sync.
Consider a 3-drive RAID1 with two drives missing.  You add back one of them
and when it is recovered it needs saved_raid_disk cleared so that the
superblock gets written out.

So below is what I applied.

Thanks,
NeilBrown

commit d70ed2e4fafdbef0800e73942482bb075c21578b
Author: Andrei Warkentin <andreiw@vmware.com>
Date:   Tue Oct 18 12:16:48 2011 +1100

    MD: Allow restarting an interrupted incremental recovery.
    
    If an incremental recovery was interrupted, a subsequent
    re-add will result in a full recovery, even though an
    incremental should be possible (seen with raid1).
    
    Solve this problem by not updating the superblock on the
    recovering device until the array is no longer degraded.
    
    Cc: Neil Brown <neilb@suse.de>
    Signed-off-by: Andrei Warkentin <andreiw@vmware.com>
    Signed-off-by: NeilBrown <neilb@suse.de>

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 0ea3485..e8d198d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2449,7 +2449,8 @@ repeat:
 		if (rdev->sb_loaded != 1)
 			continue; /* no noise on spare devices */
 
-		if (!test_bit(Faulty, &rdev->flags)) {
+		if (!test_bit(Faulty, &rdev->flags) &&
+		    rdev->saved_raid_disk == -1) {
 			md_super_write(mddev,rdev,
 				       rdev->sb_start, rdev->sb_size,
 				       rdev->sb_page);
@@ -2465,9 +2466,12 @@ repeat:
 				rdev->badblocks.size = 0;
 			}
 
-		} else
+		} else if (test_bit(Faulty, &rdev->flags))
 			pr_debug("md: %s (skipping faulty)\n",
 				 bdevname(rdev->bdev, b));
+		else
+			pr_debug("(skipping incremental s/r ");
+
 		if (mddev->level == LEVEL_MULTIPATH)
 			/* only need to write one superblock... */
 			break;
@@ -7366,15 +7370,19 @@ static void reap_sync_thread(struct mddev *mddev)
 	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 	    mddev->pers->finish_reshape)
 		mddev->pers->finish_reshape(mddev);
-	md_update_sb(mddev, 1);
 
-	/* if array is no-longer degraded, then any saved_raid_disk
-	 * information must be scrapped
+	/* If array is no-longer degraded, then any saved_raid_disk
+	 * information must be scrapped.  Also if any device is now
+	 * In_sync we must scrap the saved_raid_disk for that device
+	 * so the superblock for an incrementally recovered device
+	 * gets written out.
 	 */
-	if (!mddev->degraded)
-		list_for_each_entry(rdev, &mddev->disks, same_set)
+	list_for_each_entry(rdev, &mddev->disks, same_set)
+		if (!mddev->degraded ||
+		    test_bit(In_sync, &rdev->flags))
 			rdev->saved_raid_disk = -1;
 
+	md_update_sb(mddev, 1);
 	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
 	clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
 	clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
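
The 3-drive scenario above can be replayed against the old and the applied
clearing rule with a small mock (illustrative only: the degraded state and
In_sync flag are hard-coded booleans, not read from a live array):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* 3-way RAID1 that lost two members: one has just been re-added
	 * and fully recovered, the other is still missing. */
	bool degraded = true;		/* one slot is still empty */
	bool re_added_in_sync = true;	/* the recovered device */

	/* Old rule: clear saved_raid_disk only once the whole array is
	 * healthy again - never true in this scenario. */
	bool old_rule = !degraded;

	/* Applied rule: also clear once this device itself is In_sync. */
	bool new_rule = !degraded || re_added_in_sync;

	printf("old rule clears saved_raid_disk: %s\n", old_rule ? "yes" : "no");
	printf("new rule clears saved_raid_disk: %s\n", new_rule ? "yes" : "no");
	return 0;
}

With one slot permanently empty, !mddev->degraded never becomes true, so the
old rule would never clear saved_raid_disk for the re-added drive and its
superblock would never be written out; the per-device In_sync test does
clear it.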




* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-18  1:22 ` NeilBrown
@ 2011-10-18 17:07   ` Andrei Warkentin
  2011-10-18 20:06     ` Andrei Warkentin
  0 siblings, 1 reply; 7+ messages in thread
From: Andrei Warkentin @ 2011-10-18 17:07 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid, Andrei Warkentin

Hi Neil,

----- Original Message -----
> From: "NeilBrown" <neilb@suse.de>
> To: "Andrei Warkentin" <andreiw@vmware.com>
> Cc: linux-raid@vger.kernel.org
> Sent: Monday, October 17, 2011 9:22:39 PM
> Subject: Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
> 
> On Mon, 17 Oct 2011 19:22:11 -0400 Andrei Warkentin
> <andreiw@vmware.com>
> wrote:
> 
> > If an incremental recovery was interrupted, a subsequent
> > re-add will result in a full recovery, even though an
> > incremental should be possible (seen with raid1).
> > 
> > Solve this problem by not updating the superblock on the
> > recovering device until the array is no longer degraded.
> > 
> > Cc: Neil Brown <neilb@suse.de>
> > Signed-off-by: Andrei Warkentin <andreiw@vmware.com>
> > ---

> Thanks.  I've applied this and pushed it to my for-next branch.
> 
> My current HEAD uses pr_debug instead of dprintk, so I fixed that.
> 
> Also I realised that clearing saved_raid_disk when an array is not
> degraded
> is no longer enough.  We also need to clear it when the device
> becomes
> In_sync.
> Consider a 3-drive RAID1 with two drives missing.  You add back one
> of them
> and when it is recovered it needs saved_raid_disk cleared so that the
> superblock gets written out.
> 
> So below is what I applied.
> 

Wouldn't all drives being In_sync imply the array is not degraded - i.e. can the
check for a degraded array be omitted then, at all? I.e. if after the resync the
In_sync bit is set - drop saved_raid_disk.

A


* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-18 17:07   ` Andrei Warkentin
@ 2011-10-18 20:06     ` Andrei Warkentin
  2011-10-18 20:15       ` Andrei Warkentin
  0 siblings, 1 reply; 7+ messages in thread
From: Andrei Warkentin @ 2011-10-18 20:06 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid, Andrei Warkentin

Hi Neil,

----- Original Message -----
> From: "Andrei Warkentin" <awarkentin@vmware.com>
> To: "NeilBrown" <neilb@suse.de>
> Cc: linux-raid@vger.kernel.org, "Andrei Warkentin" <andreiw@vmware.com>
> Sent: Tuesday, October 18, 2011 1:07:24 PM
> Subject: Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
> 
> > Also I realised that clearing saved_raid_disk when an array is not
> > degraded
> > is no longer enough.  We also need to clear it when the device
> > becomes
> > In_sync.
> > Consider a 3-drive RAID1 with two drives missing.  You add back one
> > of them
> > and when it is recovered it needs saved_raid_disk cleared so that
> > the
> > superblock gets written out.
> > 
> > So below is what I applied.
> > 
> 
> Wouldn't all drives being In_sync imply the array is not degraded -
> i.e. can the
> check for a degraded array be omitted then, at all? I.e. if after the
> resync the
> In_sync bit is set - drop saved_raid_disk.
> 

Come to think of it - checking for !mddev->degraded might not be a good idea at all. After all, you
could imagine a situation where, in a RAID1 array with A and B, A is recovered from B and then B goes away before
the SBs are flushed at the end of the resync - you would still want A's SB to be flushed, even if the array is degraded.

Otherwise you'll end up with another incremental recovery rebuilding A, and lost/inconsistent data from after the array became degraded (since writes were going to A, but we never wrote out its SB, since the array was degraded).

What do you think?

A
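
Replaying the A/B sequence above the same way (mock state only: the three
variables stand in for mddev->degraded, the In_sync bit and
rdev->saved_raid_disk):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	bool a_in_sync = false;		/* A still recovering from B */
	int a_saved_raid_disk = 0;	/* incremental recovery restartable */
	bool degraded;

	a_in_sync = true;	/* 1. A finishes recovering from B */
	degraded = true;	/* 2. B drops out before the SB flush runs */

	/* Gating on !degraded alone would skip A's superblock forever: */
	if (!degraded)
		a_saved_raid_disk = -1;	/* not taken */
	printf("degraded-only rule: saved_raid_disk=%d (SB still skipped)\n",
	       a_saved_raid_disk);

	/* The In_sync test in the applied patch closes the hole: */
	if (!degraded || a_in_sync)
		a_saved_raid_disk = -1;	/* taken: A's SB gets written */
	printf("with In_sync test:  saved_raid_disk=%d (SB written)\n",
	       a_saved_raid_disk);
	return 0;
}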


* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-18 20:06     ` Andrei Warkentin
@ 2011-10-18 20:15       ` Andrei Warkentin
  2011-10-18 23:00         ` NeilBrown
  0 siblings, 1 reply; 7+ messages in thread
From: Andrei Warkentin @ 2011-10-18 20:15 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid, Andrei Warkentin

----- Original Message -----
> From: "Andrei Warkentin" <awarkentin@vmware.com>
> To: "NeilBrown" <neilb@suse.de>
> Cc: linux-raid@vger.kernel.org, "Andrei Warkentin" <andreiw@vmware.com>
> Sent: Tuesday, October 18, 2011 4:06:05 PM
> Subject: Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
> 
> Hi Neil,
> 
> ----- Original Message -----
> > From: "Andrei Warkentin" <awarkentin@vmware.com>
> > To: "NeilBrown" <neilb@suse.de>
> > Cc: linux-raid@vger.kernel.org, "Andrei Warkentin"
> > <andreiw@vmware.com>
> > Sent: Tuesday, October 18, 2011 1:07:24 PM
> > Subject: Re: [PATCH] MD: Allow restarting an interrupted
> > incremental recovery.
> > 
> > > Also I realised that clearing saved_raid_disk when an array is
> > > not
> > > degraded
> > > is no longer enough.  We also need to clear it when the device
> > > becomes
> > > In_sync.
> > > Consider a 3-drive RAID1 with two drives missing.  You add back
> > > one
> > > of them
> > > and when it is recovered it needs saved_raid_disk cleared so that
> > > the
> > > superblock gets written out.
> > > 
> > > So below is what I applied.
> > > 
> > 
> > Wouldn't all drives being In_sync imply the array is not degraded -
> > i.e. can the
> > check for a degraded array be omitted then, at all? I.e. if after
> > the
> > resync the
> > In_sync bit is set - drop saved_raid_disk.
> > 
> 
> Come to think of it - checking for !mddev->degraded might not be a
> good idea at all. After all, you
> could imagine a situation where in a RAID1 array with A and B, A is
> recovered from B and then B goes away before
> the SBs are flushed due to resync finishing - you would still want
> A's SB to be flushed, even if array is degraded.
> 
> Otherwise you'll end up with another incremental rebuilding A, and
> lost/inconsistent data after array became degraded (since it was
> going to A, but we never wrote out its SB, since array is degraded).
> 

Errr, I confused myself. This is exactly why you added the check for In_sync.
OTOH, isn't the mddev->degraded check now superfluous - i.e. if all disks are In_sync,
there is no need to check for degraded, right?

A


* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-18 20:15       ` Andrei Warkentin
@ 2011-10-18 23:00         ` NeilBrown
  2011-10-18 23:11           ` Andrei Warkentin
  0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2011-10-18 23:00 UTC (permalink / raw)
  To: Andrei Warkentin; +Cc: linux-raid, Andrei Warkentin


On Tue, 18 Oct 2011 13:15:27 -0700 (PDT) Andrei Warkentin
<awarkentin@vmware.com> wrote:

> ----- Original Message -----
> > From: "Andrei Warkentin" <awarkentin@vmware.com>
> > To: "NeilBrown" <neilb@suse.de>
> > Cc: linux-raid@vger.kernel.org, "Andrei Warkentin" <andreiw@vmware.com>
> > Sent: Tuesday, October 18, 2011 4:06:05 PM
> > Subject: Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
> > 
> > Hi Neil,
> > 
> > ----- Original Message -----
> > > From: "Andrei Warkentin" <awarkentin@vmware.com>
> > > To: "NeilBrown" <neilb@suse.de>
> > > Cc: linux-raid@vger.kernel.org, "Andrei Warkentin"
> > > <andreiw@vmware.com>
> > > Sent: Tuesday, October 18, 2011 1:07:24 PM
> > > Subject: Re: [PATCH] MD: Allow restarting an interrupted
> > > incremental recovery.
> > > 
> > > > Also I realised that clearing saved_raid_disk when an array is
> > > > not
> > > > degraded
> > > > is no longer enough.  We also need to clear it when the device
> > > > becomes
> > > > In_sync.
> > > > Consider a 3-drive RAID1 with two drives missing.  You add back
> > > > one
> > > > of them
> > > > and when it is recovered it needs saved_raid_disk cleared so that
> > > > the
> > > > superblock gets written out.
> > > > 
> > > > So below is what I applied.
> > > > 
> > > 
> > > Wouldn't all drives being In_sync imply the array is not degraded -
> > > i.e. can the
> > > check for a degraded array be omitted then, at all? I.e. if after
> > > the
> > > resync the
> > > In_sync bit is set - drop saved_raid_disk.
> > > 
> > 
> > Come to think of it - checking for !mddev->degraded might not be a
> > good idea at all. After all, you
> > could imagine a situation where in a RAID1 array with A and B, A is
> > recovered from B and then B goes away before
> > the SBs are flushed due to resync finishing - you would still want
> > A's SB to be flushed, even if array is degraded.
> > 
> > Otherwise you'll end up with another incremental rebuilding A, and
> > lost/inconsistent data after array became degraded (since it was
> > going to A, but we never wrote out its SB, since array is degraded).
> > 
> 
> Errr, I confused myself. This is exactly why you added the check for In_sync.
> OTOH, isn't the mddev->degraded check now superfluous - i.e. if all disks are In_sync,
> there is no need to check for degraded, right?
> 
> A

Maybe.
However once we clear mddev->degraded we can start clearing bits in the
bitmap, so any saved_raid_disk information is definitely invalid and should
be removed.
You would expect that there won't be any, and you would probably be right.
But it feels safer keeping the check there.  It is probably superfluous, but
sometimes a little paranoia can be a good thing.

Thanks,
NeilBrown
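
As a quick sketch (illustrative C, not kernel code), the decision table of
the rule as applied - clear saved_raid_disk when !mddev->degraded ||
In_sync - prints as:

#include <stdio.h>

int main(void)
{
	/* Enumerate all (degraded, In_sync) combinations. */
	for (int degraded = 0; degraded <= 1; degraded++)
		for (int in_sync = 0; in_sync <= 1; in_sync++)
			printf("degraded=%d In_sync=%d -> %s\n",
			       degraded, in_sync,
			       (!degraded || in_sync) ? "clear" : "keep");
	return 0;
}

The degraded=0, In_sync=0 row is the paranoia case above: once the array is
clean, bits may start being cleared in the bitmap, so any leftover
saved_raid_disk is stale and gets dropped even on a device that somehow is
not In_sync.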



* Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
  2011-10-18 23:00         ` NeilBrown
@ 2011-10-18 23:11           ` Andrei Warkentin
  0 siblings, 0 replies; 7+ messages in thread
From: Andrei Warkentin @ 2011-10-18 23:11 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid, Andrei Warkentin

----- Original Message -----
> From: "NeilBrown" <neilb@suse.de>
> To: "Andrei Warkentin" <awarkentin@vmware.com>
> Cc: linux-raid@vger.kernel.org, "Andrei Warkentin" <andreiw@vmware.com>
> Sent: Tuesday, October 18, 2011 7:00:08 PM
> Subject: Re: [PATCH] MD: Allow restarting an interrupted incremental recovery.
> 
> On Tue, 18 Oct 2011 13:15:27 -0700 (PDT) Andrei Warkentin
> <awarkentin@vmware.com> wrote:
> 
> > ----- Original Message -----
> > > From: "Andrei Warkentin" <awarkentin@vmware.com>
> > > To: "NeilBrown" <neilb@suse.de>
> > > Cc: linux-raid@vger.kernel.org, "Andrei Warkentin"
> > > <andreiw@vmware.com>
> > > Sent: Tuesday, October 18, 2011 4:06:05 PM
> > > Subject: Re: [PATCH] MD: Allow restarting an interrupted
> > > incremental recovery.
> > > 
> > > Hi Neil,
> > > 
> > > ----- Original Message -----
> > > > From: "Andrei Warkentin" <awarkentin@vmware.com>
> > > > To: "NeilBrown" <neilb@suse.de>
> > > > Cc: linux-raid@vger.kernel.org, "Andrei Warkentin"
> > > > <andreiw@vmware.com>
> > > > Sent: Tuesday, October 18, 2011 1:07:24 PM
> > > > Subject: Re: [PATCH] MD: Allow restarting an interrupted
> > > > incremental recovery.
> > > > 
> > > > > Also I realised that clearing saved_raid_disk when an array
> > > > > is
> > > > > not
> > > > > degraded
> > > > > is no longer enough.  We also need to clear it when the
> > > > > device
> > > > > becomes
> > > > > In_sync.
> > > > > Consider a 3-drive RAID1 with two drives missing.  You add
> > > > > back
> > > > > one
> > > > > of them
> > > > > and when it is recovered it needs saved_raid_disk cleared so
> > > > > that
> > > > > the
> > > > > superblock gets written out.
> > > > > 
> > > > > So below is what I applied.
> > > > > 
> > > > 
> > > > Wouldn't all drives being In_sync imply the array is not
> > > > degraded -
> > > > i.e. can the
> > > > check for a degraded array be omitted then, at all? I.e. if
> > > > after
> > > > the
> > > > resync the
> > > > In_sync bit is set - drop saved_raid_disk.
> > > > 
> > > 
> > > Come to think of it - checking for !mddev->degraded might not be
> > > a
> > > good idea at all. After all, you
> > > could imagine a situation where in a RAID1 array with A and B, A
> > > is
> > > recovered from B and then B goes away before
> > > the SBs are flushed due to resync finishing - you would still
> > > want
> > > A's SB to be flushed, even if array is degraded.
> > > 
> > > Otherwise you'll end up with another incremental rebuilding A,
> > > and
> > > lost/inconsistent data after array became degraded (since it was
> > > going to A, but we never wrote out its SB, since array is
> > > degraded).
> > > 
> > 
> > Errr, I confused myself. This is exactly why you added the check
> > for In_sync.
> > OTOH, isn't the mddev->degraded check now superfluous - i.e. if all disks
> > are In_sync,
> > there is no need to check for degraded, right?
> > 
> > A
> 
> Maybe.
> However once we clear mddev->degraded we can start clearing bits in
> the
> bitmap, so any saved_raid_disk information is definitely invalid and
> should
> be removed.
> You would expect that there won't be any, and you would probably be
> right.
> But it feels safer keeping the check there.  It is probably
> superfluous, but
> sometimes a little paranoia can be a good thing.
> 

Sounds fair, given that mddev->degraded is controlled by the personality...

Many thanks for your help,
A


end of thread

Thread overview: 7+ messages
2011-10-17 23:22 [PATCH] MD: Allow restarting an interrupted incremental recovery Andrei Warkentin
2011-10-18  1:22 ` NeilBrown
2011-10-18 17:07   ` Andrei Warkentin
2011-10-18 20:06     ` Andrei Warkentin
2011-10-18 20:15       ` Andrei Warkentin
2011-10-18 23:00         ` NeilBrown
2011-10-18 23:11           ` Andrei Warkentin
