* Another RAID-5 problem
@ 2012-05-09 9:10 piergiorgio.sartor
2012-05-09 11:03 ` NeilBrown
0 siblings, 1 reply; 7+ messages in thread
From: piergiorgio.sartor @ 2012-05-09 9:10 UTC (permalink / raw)
To: linux-raid
Hi all,
we have been hit by a RAID-5 issue; it seems Ubuntu 12.04 is shipping
a buggy kernel/mdadm combination.
Following the other thread about a similar issue, I understood
it is possible to fix the array without losing data.
Problems are:
1) We do not know the HDD order, and it is a 5-disk RAID-5.
2) 4 of the 5 disks have a data offset of 264 sectors, while the
remaining one, added later, has 1048 sectors.
3) There is an LVM setup on the array, not a plain filesystem.
Any idea on how we can get the array back without losing any
data?
At the moment, it seems quite difficult to provide a dump of
"mdadm -E" or similar, since the PC does not boot at all.
In any case, if necessary we could try to take a picture of
the screen and send it here or directly by email, if appropriate.
Thanks a lot in advance,
bye,
--
piergiorgio
* Re: Another RAID-5 problem
2012-05-09 9:10 piergiorgio.sartor
@ 2012-05-09 11:03 ` NeilBrown
0 siblings, 0 replies; 7+ messages in thread
From: NeilBrown @ 2012-05-09 11:03 UTC (permalink / raw)
To: piergiorgio.sartor; +Cc: linux-raid
On Wed, 9 May 2012 11:10:58 +0200 (CEST) piergiorgio.sartor@nexgo.de wrote:
> Hi all,
>
> we're hit by a RAID-5 issue, it seems Ubuntu 12.04 is shipping
> some bugged kernel/mdadm combination.
Buggy kernel. My fault. I think they know and an update should follow.
However I suspect that Ubuntu must be doing something else to cause the
problem to trigger so often. The circumstance that makes it happen should be
extremely rare. It is as though the md array is half-stopped just before
shutdown. If it were completely stopped or not stopped at all, this wouldn't
happen.
>
> Following the other thread about a similar issue, I understood
> it is possible to fix the array without losing data.
Correct.
>
> Problems are:
>
> 1) We do not know the HDD order and it is a 5 disks RAID-5
If you have kernel logs from the last successful boot they would contain
a "RAID conf printout" which would give you the order, but maybe that is on
the RAID-5 array?
If it is, you will have to try different permutations until you find one that
works.
> 2) 4 of 5 disks have a data offset of 264 sectors, while the
> fourth one, added later, has 1048 sectors.
Ouch.
It would be easiest to just make a degraded array with the 4 devices with the
same data offset, then add the 5th later.
To get the correct data offset you could either use the same mdadm that the
array was originally built with, or you could get the 'r10-reshape'
branch from git://neil.brown.name/mdadm/ and build that.
Then create the array with --data-offset=132K as well as all the other flags.
However, that hasn't been tested extensively, so it would be best to test it
elsewhere first. Check that it created the array with the correct data offset
and the correct size.
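Roughly, and only as a sketch (device names, the 512K chunk size, and the
assumption that the r10-reshape branch builds with a plain "make" are
placeholders to adjust, not confirmed details), the sequence could look like:

  # build an mdadm that understands --data-offset
  git clone git://neil.brown.name/mdadm/
  cd mdadm && git checkout r10-reshape && make

  # degraded create from the four members with the 264-sector offset;
  # 'missing' must sit in the slot that belongs to the later-added drive
  ./mdadm -C /dev/md0 -l5 -n5 --assume-clean --chunk=512 \
        --data-offset=132K /dev/sdX2 missing /dev/sdY2 /dev/sdZ2 /dev/sdW2

  # verify before trusting it
  ./mdadm -E /dev/sdX2 | grep -Ei 'data offset|array size'

  # only once the data is confirmed good: re-add the fifth member
  # (it will be resynced with whatever offset this mdadm chooses)
  ./mdadm /dev/md0 --add /dev/sdV2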
> 3) There is a LVM setup on the array, not a plain filesystem.
That does make it a little more complex but not much.
You would need to activate the LVM, then "fsck -n" the filesystems to check if
you have the devices in the right order.
However, this could help you identify the first device quickly.
If you run
  dd if=/dev/sdXX skip=264 count=1
then for the first device in the array it will show you the textual
description of the LVM setup. For the other devices it will probably be
binary or something unrelated.
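A small sketch of that check across all members, plus the LVM/fsck step
(device names, the 8-sector read length, and the LV path are assumptions;
the later-added member needs skip=1048 instead of 264):

  # peek just past each candidate data offset for LVM text metadata
  for d in /dev/sd[abcde]2; do
      echo "== $d =="
      dd if=$d bs=512 skip=264 count=8 2>/dev/null | strings | head -n 5
  done

  # once an array has been assembled, activate LVM and check read-only
  vgscan
  vgchange -ay                    # activates whatever volume groups are found
  lvs                             # list the logical volumes
  fsck -n /dev/VolGroup/some_lv   # -n: report problems, never write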
>
> Any idea on how can we get the array back without losing any
> data?
Do you know what the chunk size was? Probably 64K if it was an old array.
Maybe 512K though.
I would:
1/ look at old logs if possible to find out the device order
2/ try to remember what the chunk size could be. If you have the exact
used-device size (mdadm -E should give that) you can get an upper limit
for the chunk size by finding the largest power of 2 which divides it.
3/ Try to identify the first device by looking for LVM metadata.
4/ Make a list of the possible arrangements of devices and possible chunk
sizes based on the info you collected.
5/ Check that you can create an array with a data offset of 264 sectors
using one of the approaches listed above.
6/ Write a script which iterates through the possibilities, re-creates the
array, then tries to activate LVM and run fsck (a sketch follows below).
Or maybe iterate by hand.
The command to create an array would be something like
  mdadm -C /dev/md0 -l5 -n5 --assume-clean --chunk=64 \
        --data-offset=132K /dev/sdX missing /dev/sdY /dev/sdZ /dev/sdW
7/ Find out which arrangement produces the fewest fsck errors, and use that.
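A minimal sketch of such a script; device names, the VG/LV names and the
512K chunk are assumptions to fill in, and it relies on an mdadm build that
accepts --data-offset:

  #!/bin/bash
  # Try one device order: re-create the array, set it read-only, activate
  # LVM and run a non-destructive fsck so nothing gets written.
  try_order() {
      mdadm --stop /dev/md0 2>/dev/null
      echo "=== trying order: $* ==="
      # --run suppresses the "appears to be part of an array" confirmation
      mdadm -C /dev/md0 -l5 -n5 --assume-clean --run --chunk=512 \
            --data-offset=132K "$@" || return
      mdadm --readonly /dev/md0
      vgscan >/dev/null 2>&1
      vgchange -ay VolGroup >/dev/null 2>&1 || { echo "no usable LVM"; return; }
      fsck -n /dev/VolGroup/some_lv 2>&1 | tail -n 3
      vgchange -an VolGroup
  }

  # Call it once per candidate arrangement, with 'missing' standing in for
  # the odd-offset member in whatever slot it is believed to occupy, e.g.:
  try_order /dev/sdc2 /dev/sdb2 /dev/sda2 missing /dev/sde2
  try_order /dev/sdb2 /dev/sdc2 /dev/sda2 missing /dev/sde2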
>
> At the moment, it seems quite difficult to provide dump of
> "mdadm -E" or similar, since the PC does not boot at all.
> In any case, if necessary we could try to take a picture of
> the screen and send it here or directly per email, if appropriate.
You probably need to boot from a DVD-ROM or similar.
Certainly feel free to post the data you collect and the conclusions you draw
and even the script you write if you would like them reviewed and confirmed.
NeilBrown
* Re: Another RAID-5 problem
@ 2012-05-09 12:17 piergiorgio.sartor
2012-05-09 19:31 ` Piergiorgio Sartor
0 siblings, 1 reply; 7+ messages in thread
From: piergiorgio.sartor @ 2012-05-09 12:17 UTC (permalink / raw)
To: neilb, piergiorgio.sartor; +Cc: linux-raid
Hi Neil,
thanks a lot for the quick answer, please see the
text embedded below for further details.
----- Original Message ----
From: NeilBrown <neilb@suse.de>
To: piergiorgio.sartor@nexgo.de
Date: 09.05.2012 13:03
Subject: Re: Another RAID-5 problem
> On Wed, 9 May 2012 11:10:58 +0200 (CEST) piergiorgio.sartor@nexgo.de wrote:
>
> > Hi all,
> >
> > we're hit by a RAID-5 issue, it seems Ubuntu 12.04 is shipping
> > some bugged kernel/mdadm combination.
>
> Buggy kernel. My fault. I think they know and an update should follow.
>
> However I suspect that Ubuntu must be doing something else to cause the
> problem to trigger so often. The circumstance that makes it happen should
> be
> extremely rare. It is as though the md array is half-stopped just before
> shutdown. If it were completely stopped or not stopped at all, this
> wouldn't
> happen.
>
> >
> > Following the other thread about a similar issue, I understood
> > it is possible to fix the array without losing data.
>
> Correct.
>
> >
> > Problems are:
> >
> > 1) We do not know the HDD order and it is a 5 disks RAID-5
>
> If you have kernel logs from the last successful boot they would contain
> a "RAID conf printout" which would give you the order, but maybe that it on
> the RAID-5 array?
Unfortunately, the kernel logs are on the PC itself, so
we cannot get them.
> If it is you will have to try different permutations until you find one
> that
> works.
I have some questions about this topic.
We have other, identical PCs, which were built at more or less the
same time as this one.
One of these has a similar history, i.e. a 4-drive RAID-5, later
extended to 5 (BTW, Ubuntu 10.10 delivered mdadm 2.6.7.1, and we
extended the array later with some 3.1 or 3.2 version, which can
explain the data offset difference).
This identical PC shows the following (mdadm -D /dev/md1):
...
    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8       18        1      active sync   /dev/sdb2
       2       8        2        2      active sync   /dev/sda2
       5       8       50        3      active sync   /dev/sdd2
       4       8       66        4      active sync   /dev/sde2
In this case I assume the "RaidDevice" column indicates the order.
Is this correct? We could try with this one at first.
What about "Number"? Why is 3 missing?
BTW, the broken RAID has /dev/sdd2 still valid, and "mdadm -E"
shows:
...
Device Role : Active device 3
...
Which seems consistent with the working one.
Nevertheless, there is something fishy.
If I try the "dd" command, you suggested below, the drive
which seems to show some consistent LVM data is /dev/sde2,
not /dev/sdc2.
Specifically (dd with proper skip, i.e. 1048 for /dev/sde2):
VolGroup {
    id = "eK5Sde-ENzo-0iBO-dJIB-buBt-BnoX-NEmZ1v"
    seqno = 1759
    status = ["RESIZEABLE", "READ", "WRITE"]
...
The others (with skip 264) either have zeros or some
LVM text, but nothing that looks properly aligned.
The question is whether the grow operation somehow changed the
order; if so, how will "Create" behave, considering that one
drive will be missing?
> > 2) 4 of 5 disks have a data offset of 264 sectors, while the
> > fourth one, added later, has 1048 sectors.
>
> Ouch.
> It would be easiest to just make a degraded array with the 4 devices with
> the
> same data offset, then add the 5th later.
> To get the correct data offset you could either use the same mdadm that
> the
> array was originally built with, or you could get the 'r10-reshape'
> branch from git://neil.brown.name/mdadm/ and build that.
> Then create the array with --data-offset=132K as well as all the other
> flags.
> However that hasn't been tested extensively so it would be best to test it
> elsewhere first. Check that it created the array with correct data-offset
> and correct size.
>
> > 3) There is a LVM setup on the array, not a plain filesystem.
>
> That does make it a little more complex but not much.
> You would need to activate the LVM, then "fsck -n" the filesystems to check
> if
> you have the devices in the right order.
> However this could help you identify the first device quickly.
> If you
> dd if=/dev/sdXX skip=264 count=1
> then for the first device in the array it will show you the textual
> description of the LVM setup. For the other devices it will probably be
> binary or something unrelated.
>
> >
> > Any idea on how can we get the array back without losing any
> > data?
>
> Do you know what the chunk size was? Probably 64K if it was an old array.
> Maybe 512K though.
The chunk size we know. As mentioned above, we have other PCs,
all the same; the chunk is 512K.
The metadata is 1.1.
A bitmap was activated, but this, I understand, is not a problem.
Furthermore, "mdadm -X" on each HDD shows 0 dirty bits,
which looks good to me.
> I would:
> 1/ look at old logs if possible to find out the device order
> 2/ try to remember what the chunk size could be. If you have the exact
> used-device size (mdadm -E should give that) you can get an upper limit
> for the chunk size by finding the larger power-of-2 which divides it.
> 3/ Try to identify the first device by looking for LVM metadata.
> 4/ Make a list of the possible arrangements of devices and possible chunk
> sizes based on the info you collected.
> 5/ Check that you can create an array with a data-offset for 264 sectors
> using one of the approaches listed above.
> 6/ write a script which iterated though the possibilities and re-created
> the
> array then tries to turn on LVM and the fsck. Or maybe iterate by
> hand.
> The command to create an array would be something like
> mdadm -C /dev/md0 -l5 -n5 --assume-clean --chunk=64 \
> --data-offset=132K /dev/sdX missing /dev/sdY /dev/sdZ /dev/sdW
> 7/ Find out which arrangement produces least fsck errors, and use that.
I do have another question.
How about starting the RAID in read-only mode?
This would prevent LVM or mount from writing something and
risking damage to the various superblocks.
What would be the best way to do this?
After "Create", just "mdadm --readonly /dev/md1"?
One more: how about dumping, with "dd", the first
few MB of each drive as a backup? Does that make sense?
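For instance, something like the following (device names and the 8 MB size
are arbitrary placeholders):

  # save the start of every member before experimenting
  for d in sda2 sdb2 sdc2 sdd2 sde2; do
      dd if=/dev/$d of=/root/backup-$d.img bs=1M count=8
  done

  # after a --create, drop the array to read-only before touching LVM
  mdadm --readonly /dev/md1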
Thanks again for the support,
bye,
pg
> >
> > At the moment, it seems quite difficult to provide dump of
> > "mdadm -E" or similar, since the PC does not boot at all.
> > In any case, if necessary we could try to take a picture of
> > the screen and send it here or directly per email, if appropriate.
>
> You probably need to boot from a DVD-ROM or similar.
> Certainly feel free to post the data you collect and the conclusions you
> draw
> and even the script you write if you would like them reviewed and
> confirmed.
>
> NeilBrown
>
>
>
--
piergiorgio
* Re: Another RAID-5 problem
2012-05-09 12:17 Another RAID-5 problem piergiorgio.sartor
@ 2012-05-09 19:31 ` Piergiorgio Sartor
2012-05-09 22:58 ` NeilBrown
0 siblings, 1 reply; 7+ messages in thread
From: Piergiorgio Sartor @ 2012-05-09 19:31 UTC (permalink / raw)
To: piergiorgio.sartor; +Cc: neilb, linux-raid
On Wed, May 09, 2012 at 02:17:33PM +0200, piergiorgio.sartor@nexgo.de wrote:
> Hi Neil,
>
> thanks a lot for the quick answer, please see the
> text embedded below for further details.
>
> ----- Original Message ----
> From: NeilBrown <neilb@suse.de>
> To: piergiorgio.sartor@nexgo.de
> Date: 09.05.2012 13:03
> Subject: Re: Another RAID-5 problem
>
> > On Wed, 9 May 2012 11:10:58 +0200 (CEST) piergiorgio.sartor@nexgo.de wrote:
> >
> > > Hi all,
> > >
> > > we're hit by a RAID-5 issue, it seems Ubuntu 12.04 is shipping
> > > some bugged kernel/mdadm combination.
> >
> > Buggy kernel. My fault. I think they know and an update should follow.
> >
> > However I suspect that Ubuntu must be doing something else to cause the
> > problem to trigger so often. The circumstance that makes it happen should
> > be
> > extremely rare. It is as though the md array is half-stopped just before
> > shutdown. If it were completely stopped or not stopped at all, this
> > wouldn't
> > happen.
> >
> > >
> > > Following the other thread about a similar issue, I understood
> > > it is possible to fix the array without losing data.
> >
> > Correct.
> >
> > >
> > > Problems are:
> > >
> > > 1) We do not know the HDD order and it is a 5 disks RAID-5
> >
> > If you have kernel logs from the last successful boot they would contain
> > a "RAID conf printout" which would give you the order, but maybe that it on
> > the RAID-5 array?
>
> Unfortunately, the kernel logs are on the PC itself, so
> we cannot get them.
>
> > If it is you will have to try different permutations until you find one
> > that
> > works.
>
> I've some questions about this topic.
>
> We have other, identical, PCs, which were built more or less
> same time as this one.
> One of this have a similar history, this means 4 drives RAID-5,
> later extended to 5 (BTW, Ubuntu 10.10 delivered mdadm 2.6.7.1,
> we extended the array later, with some 3.1 or 3.2, that can explain
> the data offset difference).
>
> This identical PC shows the following (mdadm -D /dev/md1):
>
> ...
> Number Major Minor RaidDevice State
> 0 8 34 0 active sync /dev/sdc2
> 1 8 18 1 active sync /dev/sdb2
> 2 8 2 2 active sync /dev/sda2
> 5 8 50 3 active sync /dev/sdd2
> 4 8 66 4 active sync /dev/sde2
>
> In this case I assume the "RaidDevice" indicates the order.
> Is this correct? We could try with this one, at first.
> What about "Number"? Why 3 is missing?
> BTW, the broken RAID has /dev/sdd2 still valid, and "mdadm -E"
> shows:
>
> ...
> Device Role : Active device 3
> ...
>
> Which seem consistent with the working one.
>
> Nevertheless, there is something fishy.
> If I try the "dd" command, you suggested below, the drive
> which seems to show some consistent LVM data is /dev/sde2,
> not /dev/sdc2.
>
> Specifically (dd with proper skip, i.e. 1048 for /dev/sde2):
>
> VolGroup {
> id = "eK5Sde-ENzo-0iBO-dJIB-buBt-BnoX-NEmZ1v"
> seqno = 1759
> status = ["RESIZEABLE", "READ", "WRITE"]
> ...
>
> The others (with skip 264) either have zeros or some
> LVM text, but not something looking properly aligned.
>
> Question would be if the growth changed, somehow, the
> order, in which case how will "Create" behave? Considering
> that one drive will be missing.
>
> > > 2) 4 of 5 disks have a data offset of 264 sectors, while the
> > > fourth one, added later, has 1048 sectors.
> >
> > Ouch.
> > It would be easiest to just make a degraded array with the 4 devices with
> > the
> > same data offset, then add the 5th later.
> > To get the correct data offset you could either use the same mdadm that
> > the
> > array was originally built with, or you could get the 'r10-reshape'
> > branch from git://neil.brown.name/mdadm/ and build that.
> > Then create the array with --data-offset=132K as well as all the other
> > flags.
> > However that hasn't been tested extensively so it would be best to test it
> > elsewhere first. Check that it created the array with correct data-offset
> > and correct size.
> >
> > > 3) There is a LVM setup on the array, not a plain filesystem.
> >
> > That does make it a little more complex but not much.
> > You would need to activate the LVM, then "fsck -n" the filesystems to check
> > if
> > you have the devices in the right order.
> > However this could help you identify the first device quickly.
> > If you
> > dd if=/dev/sdXX skip=264 count=1
> > then for the first device in the array it will show you the textual
> > description of the LVM setup. For the other devices it will probably be
> > binary or something unrelated.
> >
> > >
> > > Any idea on how can we get the array back without losing any
> > > data?
> >
> > Do you know what the chunk size was? Probably 64K if it was an old array.
> > Maybe 512K though.
>
> Chunk size we know. As mentioned above, we have other PCs,
> all the same, chunk is 512K.
> Metadata is 1.1.
>
> Bitmap was activated, but this, I understand, is not problem.
> Furthermore "mdadm -X" on each HDD shows 0 dirty bits,
> which looks good to me.
>
> > I would:
> > 1/ look at old logs if possible to find out the device order
> > 2/ try to remember what the chunk size could be. If you have the exact
> > used-device size (mdadm -E should give that) you can get an upper limit
> > for the chunk size by finding the larger power-of-2 which divides it.
> > 3/ Try to identify the first device by looking for LVM metadata.
> > 4/ Make a list of the possible arrangements of devices and possible chunk
> > sizes based on the info you collected.
Actually, we solved this issue in a "creative" way.
Looking at:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
we identified the proper address and lookup method for the
component order and, using "od -Ax -tx4 /dev/sdXi | less",
we were able to work out the device order.
For the freshmen, please note that address 0xA0 holds
the device number, and this is an index into the array at 0x100,
where the device roles are stored.
While 0xA0 stores a 4-byte int, 0x100 stores
2-byte ints (short int), so the "-tx4" of "od"
swaps (due to CPU endianness) each pair of short ints,
and the order shown will be 1 0 3 2 5 4 ...
If "-tx2" is used, then the data at 0x100 will be correct,
but 0xA0 will have its bytes swapped pairwise.
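Put as a sketch (assuming, as here, 1.1 metadata so the superblock sits at
the very start of the member, and a little-endian x86 host; /dev/sdXi is a
placeholder):

  # dev_number: 4-byte little-endian int at superblock offset 0xA0
  num=$(od -An -td4 -j $((0xA0)) -N 4 /dev/sdXi | tr -d ' ')
  echo "dev_number = $num"

  # role of this device: entry number $num in the 2-byte dev_roles
  # array that starts at superblock offset 0x100
  od -An -td2 -j $((0x100 + 2 * num)) -N 2 /dev/sdXi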
Fortunately, the superblock data we needed seemed OK.
Other information, like the RAID level, was completely
wiped out...
The only hitch left is the data offset.
We plan to try mdadm 2.6.7.1 (which originally
created the array) using an Ubuntu 10.10 desktop (live) system.
Still, I would like to know about backing up the
first few MB of each component with "dd" and about
switching to read-only, in order to avoid damage
by LVM/mount.
Thanks again,
bye,
pg
> > 5/ Check that you can create an array with a data-offset for 264 sectors
> > using one of the approaches listed above.
> > 6/ write a script which iterated though the possibilities and re-created
> > the
> > array then tries to turn on LVM and the fsck. Or maybe iterate by
> > hand.
> > The command to create an array would be something like
> > mdadm -C /dev/md0 -l5 -n5 --assume-clean --chunk=64 \
> > --data-offset=132K /dev/sdX missing /dev/sdY /dev/sdZ /dev/sdW
> > 7/ Find out which arrangement produces least fsck errors, and use that.
>
> I do have another question.
>
> How about starting the RAID in read-only mode?
> This will avoid LVM or mount to write something, risking
> damages to the different superblocks.
> What would be the best way to do this?
> After "Create", just "mdadm --read-only /dev/md1"?
>
> One more, how about dumping, with "dd", the firsts
> few MB of each drive as backup? Make sense?
>
> Thanks again for the support,
>
> bye,
>
> pg
>
> > >
> > > At the moment, it seems quite difficult to provide dump of
> > > "mdadm -E" or similar, since the PC does not boot at all.
> > > In any case, if necessary we could try to take a picture of
> > > the screen and send it here or directly per email, if appropriate.
> >
> > You probably need to boot from a DVD-ROM or similar.
> > Certainly feel free to post the data you collect and the conclusions you
> > draw
> > and even the script you write if you would like them reviewed and
> > confirmed.
> >
> > NeilBrown
> >
> >
> >
>
> --
>
> piergiorgio
--
piergiorgio
* Re: Another RAID-5 problem
2012-05-09 19:31 ` Piergiorgio Sartor
@ 2012-05-09 22:58 ` NeilBrown
2012-05-10 17:29 ` Piergiorgio Sartor
0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2012-05-09 22:58 UTC (permalink / raw)
To: Piergiorgio Sartor; +Cc: linux-raid
On Wed, 9 May 2012 21:31:19 +0200 Piergiorgio Sartor
<piergiorgio.sartor@nexgo.de> wrote:
> Actually, we solved this issue in a "creative" way.
>
> Looking at:
>
> https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
>
> we identify the proper address and look-up way for the
> component order and, using "od -Ax -tx4 /dev/sdXi | less"
> we were able to understand the device order.
> For the fresh men, please note that address 0xA0 has
> the device number and this is a pointer added to 0x100,
> where the device role is stored.
> While at 0xA0 are stored 4 bytes int, at 0x100 are
> stored 2 bytes int (short int), so the "-tx4" of "od"
> swaps (due to CPU endianess) each pair of short int,
> so the order will be 1 0 3 2 5 4 ...
> If "-tx2" is used, than the 0x100 will be correct,
> but 0xA0 will have bytes swapped pair wise.
Well done!!
>
> Fortunately, the superblock seemed OK for the data.
> Other information, like raid level, was completely
> wiped out...
>
> The only itch left is the data offset.
> We plan to try to use mdadm 2.6.7.1 (which originally
> created the array) using Ubuntu 10.10 desktop (live).
>
> Still, I would like to know about backing up the
> first few MB of each component with "dd" and about
> switching to read only, in order to avoid damage
> by LVM/mount.
Taking a backup of the first few MB certainly wouldn't hurt, and might help if
something goes terribly wrong. (Just as long as you don't restore a backup to
the wrong device :-)
Yes, starting in --readonly mode, or switching to --readonly mode after you
have started the array, is probably a good idea.
Some filesystems sometimes try to write even if the device is marked as
read-only (e.g. they try to replay the journal). If that happens, md will BUG
and the machine will crash, which might be better than corrupting data.
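One way to sidestep that journal-replay write entirely while inspecting (a
sketch only, assuming ext3/ext4 filesystems and a placeholder LV name) is to
mount with the journal disabled, or to stick to a non-destructive fsck:

  # ext3/ext4: 'noload' skips journal replay, so a read-only mount
  # really issues no writes
  mount -o ro,noload /dev/VolGroup/some_lv /mnt

  # or check without mounting at all
  fsck -n /dev/VolGroup/some_lv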
NeilBrown
* Re: Another RAID-5 problem
2012-05-09 22:58 ` NeilBrown
@ 2012-05-10 17:29 ` Piergiorgio Sartor
2012-05-11 0:51 ` NeilBrown
0 siblings, 1 reply; 7+ messages in thread
From: Piergiorgio Sartor @ 2012-05-10 17:29 UTC (permalink / raw)
To: NeilBrown; +Cc: Piergiorgio Sartor, linux-raid
Hi,
we managed to recover the RAID-5 without
any data loss!
I would like to thank Neil again for the
quick response and effective support.
Thanks!
bye,
--
piergiorgio
* Re: Another RAID-5 problem
2012-05-10 17:29 ` Piergiorgio Sartor
@ 2012-05-11 0:51 ` NeilBrown
0 siblings, 0 replies; 7+ messages in thread
From: NeilBrown @ 2012-05-11 0:51 UTC (permalink / raw)
To: Piergiorgio Sartor; +Cc: linux-raid
On Thu, 10 May 2012 19:29:56 +0200 Piergiorgio Sartor
<piergiorgio.sartor@nexgo.de> wrote:
> Hi,
>
> we manage to recover the RAID-5 without
> any data loss!
Music to my ears!
Thanks for the update,
NeilBrown
>
> I would like to thank Neil again for the
> quick response and effective support.
> Thanks!
>
> bye,
>