* Raid 5 Array
@ 2011-04-02 18:51 Marcus
2011-04-02 19:01 ` Simon McNair
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-02 18:51 UTC (permalink / raw)
To: linux-raid
I have a raid array, and this is the second time an upgrade seems to
have corrupted it.
I get the following messages from dmesg when trying to mount the array:
[ 372.822199] RAID5 conf printout:
[ 372.822202] --- rd:3 wd:3
[ 372.822208] disk 0, o:1, dev:md0
[ 372.822212] disk 1, o:1, dev:sdb1
[ 372.822216] disk 2, o:1, dev:sdc1
[ 372.822305] md2: detected capacity change from 0 to 1000210300928
[ 372.823206] md2: p1
[ 410.783871] EXT4-fs (md2): Couldn't mount because of unsupported
optional features (3d1fc20)
[ 412.401534] EXT4-fs (md2): Couldn't mount because of unsupported
optional features (3d1fc20)
I originally had a raid0 (md0) made of two 160GB drives, a raid0 (md1)
made of a 250GB drive plus md0, and a raid5 made of a 1.0TB drive, a
500GB drive, and md1.
I swapped out md1 for a new 1TB drive, which worked. Then I dropped the
500GB drive and combined it with the 250GB drive to make a 750GB device.
The error seems to occur when drives that were previously in a raid
array are reintroduced into a new raid array. This is the second time I
have ended up with the same problem.
Any suggestions on how to recover from this, or is my only option to
reformat everything and start again?
* Re: Raid 5 Array
2011-04-02 18:51 Raid 5 Array Marcus
@ 2011-04-02 19:01 ` Simon McNair
[not found] ` <BANLkTimJfUhvkpkkAH=NLJOvLL-Jotrwqg@mail.gmail.com>
0 siblings, 1 reply; 16+ messages in thread
From: Simon McNair @ 2011-04-02 19:01 UTC (permalink / raw)
To: Marcus; +Cc: linux-raid
Hi,
I'm sure you've tried this, but did you use --zero-superblock before
moving the disks over?
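For reference, the sort of sequence I mean - a sketch only, with
placeholder device names, so substitute the actual ex-members:

  mdadm --stop /dev/md1                # stop the old array first
  mdadm --zero-superblock /dev/sdd1    # wipe the md metadata on each former member
  mdadm --zero-superblock /dev/sde1    # so nothing stale gets auto-assembled later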
Simon
On 02/04/2011 19:51, Marcus wrote:
> I have a raid array, and this is the second time an upgrade seems to
> have corrupted it.
> [...]
* Re: Raid 5 Array
[not found] ` <BANLkTimJfUhvkpkkAH=NLJOvLL-Jotrwqg@mail.gmail.com>
@ 2011-04-02 20:09 ` Simon McNair
[not found] ` <BANLkTim3uOiF7Qdir_Vou3rSp1zJmgf6iA@mail.gmail.com>
2011-04-02 21:45 ` Simon Mcnair
0 siblings, 2 replies; 16+ messages in thread
From: Simon McNair @ 2011-04-02 20:09 UTC (permalink / raw)
To: Marcus; +Cc: linux-raid
cc'd the list back in as I'm not an md guru.
I did a search for mdadm raid 50 and this looked the most appropriate.
http://books.google.co.uk/books?id=DkonSDG8jUMC&pg=PT116&lpg=PT116&dq=mdadm+raid+50&source=bl&ots=Ekw6NCiXqR&sig=edBYg9Gtd5RXyuUU0PeSpHvS7pM&hl=en&ei=9YGXTYyeBcGFhQe90ojpCA&sa=X&oi=book_result&ct=result&resnum=5&ved=0CEIQ6AEwBA#v=onepage&q=mdadm%20raid%2050&f=false
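If it helps, the usual shape for layering md arrays is roughly the
following - just a sketch with made-up device names, not your exact
layout:

  # build the raid0 leg from the two smaller drives
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf1 /dev/sdg1
  # then hand the whole md0 device (not a partition on it) to the raid5
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/md0 /dev/sdd1 /dev/sde1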
Simon
On 02/04/2011 20:38, Marcus wrote:
> Yes, I used --zero-superblock this time. I think that was my problem
> last time: it kept detecting the drives at random and creating odd
> arrays. This time I am not sure what my problem is. I got two drives
> back up, so I have my data back, but I have tried twice so far to make
> the two raid0 drives part of the raid5, and each time fdisk -l shows
> the wrong sizes for the combined raids. The first time it showed the
> small raid as 1TB, which is the size of the big raid; the second time
> it showed the big raid as 750GB, which is the size of the small array.
> Somehow joining the two raids is corrupting the headers and reporting
> wrong information.
>
> Is there a proper procedure for creating a raid0 to put into a raid5?
> Last time I created my raid0 and added a partition to it, and mdadm
> automatically dropped the partition and just showed md0 and md1 in the
> array instead of md0p1 and md1p1, which were the partitions I added to
> the array. I have tried adding the partition into the array, and I have
> also tried adding just the array into the array; neither method seems
> to be working this time.
>
> On Sat, Apr 2, 2011 at 12:01 PM, Simon McNair<simonmcnair@gmail.com> wrote:
>> Hi,
>> I'm sure you've tried this, but did you use --zero-superblock before
>> moving the disks over?
>> [...]
* Re: Raid 5 Array
[not found] ` <BANLkTim3uOiF7Qdir_Vou3rSp1zJmgf6iA@mail.gmail.com>
@ 2011-04-02 21:27 ` Simon Mcnair
0 siblings, 0 replies; 16+ messages in thread
From: Simon Mcnair @ 2011-04-02 21:27 UTC (permalink / raw)
To: Marcus; +Cc: linux-raid
Marcus,
Please reply-all and keep the list in cc.
Please also post the commands you used to create the arrays, and the
fdisk output.
The other thing of note: when you have multiple arrays, I believe the
recommendation is to use mdadm.conf as a 'hint' file so that this
doesn't happen. A sketch of what I mean follows.
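mdadm can generate the entries itself - the UUIDs below are only
placeholders for whatever it reports on your system:

  # append hint entries for the existing arrays (review before trusting)
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which produces lines of the form:

  ARRAY /dev/md0 metadata=0.90 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
  ARRAY /dev/md2 metadata=0.90 UUID=eeeeeeee:ffffffff:11111111:22222222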
On 2 Apr 2011, at 21:20, Marcus <nexuslite@gmail.com> wrote:
> My raid is the opposite of that: I am putting raid0s into a raid5
> rather than raid5s into a raid0. But from the looks of what you sent
> me, I am not supposed to add a partition to the raid that is going
> into the main raid?
>
> I guess I will play with it some more, and hopefully I won't lose
> everything. I just don't like waiting 4 hours for it to rebuild the
> drive only to find out it doesn't work.
>
> Thanks for the help.
>
> On Sat, Apr 2, 2011 at 1:09 PM, Simon McNair <simonmcnair@gmail.com> wrote:
>> [...]
* Re: Raid 5 Array
2011-04-02 20:09 ` Simon McNair
[not found] ` <BANLkTim3uOiF7Qdir_Vou3rSp1zJmgf6iA@mail.gmail.com>
@ 2011-04-02 21:45 ` Simon Mcnair
2011-04-02 22:01 ` Roman Mamedov
1 sibling, 1 reply; 16+ messages in thread
From: Simon Mcnair @ 2011-04-02 21:45 UTC (permalink / raw)
To: simonmcnair@gmail.com; +Cc: Marcus, linux-raid@vger.kernel.org
One last thing... I've never heard of anyone using a raid05. Why
wouldn't you use a raid50? Can you dish the dirt on what the benefit
is? (I would have thought a raid50 would be better, with no
disadvantages.) I thought raid10 and raid50 were the main ones in use
in 'the industry'.
Please forgive me if I'm showing my ignorance.
Simon
On 2 Apr 2011, at 21:09, Simon McNair <simonmcnair@gmail.com> wrote:
> [...]
* Re: Raid 5 Array
2011-04-02 21:45 ` Simon Mcnair
@ 2011-04-02 22:01 ` Roman Mamedov
2011-04-02 22:04 ` Roberto Spadim
0 siblings, 1 reply; 16+ messages in thread
From: Roman Mamedov @ 2011-04-02 22:01 UTC (permalink / raw)
To: Simon Mcnair; +Cc: Marcus, linux-raid@vger.kernel.org
On Sat, 2 Apr 2011 22:45:58 +0100
Simon Mcnair <simonmcnair@gmail.com> wrote:
> One last thing... I've never heard of anyone using a raid05. Why
> wouldn't you use a raid50? Can you dish the dirt on what the benefit
> is? (I would have thought a raid50 would be better, with no
> disadvantages.) I thought raid10 and raid50 were the main ones in use
> in 'the industry'.
RAID5/6 with some RAID0 (or JBOD) members is what you use when you want to
include differently-sized devices into the array:
http://louwrentius.com/blog/2008/08/building-a-raid-6-array-of-mixed-drives/
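A sketch of the idea, with invented sizes and device names - pair the
small disks so the parity-array members come out roughly equal:

  # 250GB + 500GB concatenated (linear/JBOD) into one ~750GB member
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
  # then build the raid5 across three roughly equal members
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/md0 /dev/sdd1 /dev/sde1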
--
With respect,
Roman
* Re: Raid 5 Array
2011-04-02 22:01 ` Roman Mamedov
@ 2011-04-02 22:04 ` Roberto Spadim
2011-04-02 23:06 ` Marcus
0 siblings, 1 reply; 16+ messages in thread
From: Roberto Spadim @ 2011-04-02 22:04 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Simon Mcnair, Marcus, linux-raid@vger.kernel.org
Why use raid5/6? Isn't raid1 more secure?
2011/4/2 Roman Mamedov <rm@romanrm.ru>:
> [...]
> RAID5/6 with some RAID0 (or JBOD) members is what you use when you want to
> include differently-sized devices into the array:
> http://louwrentius.com/blog/2008/08/building-a-raid-6-array-of-mixed-drives/
> --
> With respect,
> Roman
>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
* Re: Raid 5 Array
2011-04-02 22:04 ` Roberto Spadim
@ 2011-04-02 23:06 ` Marcus
2011-04-03 0:22 ` Marcus
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-02 23:06 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Roman Mamedov, Simon Mcnair, linux-raid@vger.kernel.org
I am running a raid 5 only. The raid 0s are there to make a number of
smaller drives look like one larger drive, because a raid 5 limits
every member to the size of the smallest one (for example, with members
of 570GB, 500GB, and 1.0TB, you get (3 - 1) x 500GB = 1TB usable).

The original raid started off as a 26GB raid 5 built from a 13GB, a
40GB, and a 160GB drive, and I have grown it from there to its current
size of 1TB.

I bought another 1TB drive yesterday and am trying to combine a 500GB
and a 250GB drive into a 750GB device so I can push the raid up again,
this time to 1.5TB.

The last configuration was: raid0 md0, 320GB (160GB + 160GB); raid0
md1, 570GB (md0 + 250GB); raid5, 1TB (md1, 500GB, 1.0TB). It has been
extremely stable for the last 3 months but ran out of space.

The configuration I am trying to achieve is: raid0 md0, 750GB (250GB +
500GB); raid5 md2, 1.5TB (md0, 1.0TB, 1.0TB).

This started out as an experiment to see if I could do a raid 5
system. It was originally built with drives I had lying around the
house; now it is big enough that I have started buying drives for it.
I have gone through many configurations of extra drives to get it
where it is now. I have had one catastrophic failure since I started,
and that was the last time I made it bigger: I was running on 2 drives,
one of them being the md0/md1 configuration, and mdadm got confused and
couldn't put md0 and md1 back together to give me 2 working drives. I
probably could have corrected the problem if I had known what I know
now, but as this is an experimental raid it is a learning process.

The current problem is that every time I try to add the 750GB raid
device to the raid 5, it corrupts the headers and one of the arrays
reports the wrong size, which stops it mounting. The only way to
correct the problem seems to be to unplug the two drives that make up
md0, reboot onto 2 drives, and then start the process again. I am
currently on my 3rd attempt to integrate the 750GB raid device. Each
attempt takes 4 hours to rebuild, so it has been a long process. I
haven't lost the data yet, though, so I guess I will keep trying.
Hopefully it won't be too corrupt when I am done.
On Sat, Apr 2, 2011 at 3:04 PM, Roberto Spadim <roberto@spadim.com.br> wrote:
> [...]
* Re: Raid 5 Array
2011-04-02 23:06 ` Marcus
@ 2011-04-03 0:22 ` Marcus
2011-04-03 6:41 ` Marcus
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-03 0:22 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Roman Mamedov, Simon Mcnair, linux-raid@vger.kernel.org
Okay, it seems to work now. I destroyed md0 and recreated it, and then
just added it to the md2 array without doing any partitioning like I
had all the times before. Even when I created my old array I
partitioned it, but mdadm dropped the partition automatically.
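For anyone hitting the same thing later, this is roughly the shape of
what worked - the device names are from my setup, so adjust them:

  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sdb1 /dev/sdc1    # wipe the stale metadata
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm /dev/md2 --add /dev/md0                  # add bare md0, no partition table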
On Sat, Apr 2, 2011 at 4:06 PM, Marcus <nexuslite@gmail.com> wrote:
> [...]
* Re: Raid 5 Array
2011-04-03 0:22 ` Marcus
@ 2011-04-03 6:41 ` Marcus
2011-04-03 7:49 ` NeilBrown
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-03 6:41 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Roman Mamedov, Simon Mcnair, linux-raid@vger.kernel.org
Okay, I have my raid extended to 1500.3GB; however, I can't seem to
grow the partition past 1TB. It will let me create a new partition, but
it won't let me make the current partition any bigger. Does anyone know
how to fix this?
On Sat, Apr 2, 2011 at 5:22 PM, Marcus <nexuslite@gmail.com> wrote:
> [...]
* Re: Raid 5 Array
2011-04-03 6:41 ` Marcus
@ 2011-04-03 7:49 ` NeilBrown
2011-04-03 8:02 ` Marcus
0 siblings, 1 reply; 16+ messages in thread
From: NeilBrown @ 2011-04-03 7:49 UTC (permalink / raw)
To: Marcus
Cc: Roberto Spadim, Roman Mamedov, Simon Mcnair,
linux-raid@vger.kernel.org
On Sat, 2 Apr 2011 23:41:50 -0700 Marcus <nexuslite@gmail.com> wrote:
> Okay, I have my raid extended to 1500.3GB; however, I can't seem to
> grow the partition past 1TB. It will let me create a new partition,
> but it won't let me make the current partition any bigger. Does anyone
> know how to fix this?
Best to show exactly the command you use, exactly the results, and details
about the component devices (particularly size).
When using any mdadm command, add "-vv" to make it as verbose as possible.
Include kernel log messages (e.g. dmesg | tail -100)
Prefer to send too much info rather than not enough.
And just place it in-line in the email: no attachments, no 'pastebin' links.
NeilBrown
* Re: Raid 5 Array
2011-04-03 7:49 ` NeilBrown
@ 2011-04-03 8:02 ` Marcus
2011-04-03 11:01 ` NeilBrown
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-03 8:02 UTC (permalink / raw)
To: NeilBrown
Cc: Roberto Spadim, Roman Mamedov, Simon Mcnair,
linux-raid@vger.kernel.org
The file system is ext4. The raid device is now 1.5TB; the old size
was 1TB. I can create a new partition on the device, it just won't let
me resize the existing one to a larger size. It seems to be maxed out
at 1TB for some reason.

mdstat shows 1465159552 blocks, which is the new size.

fdisk -l shows:

  Disk /dev/md2: 1500.3 GB, 1500323381248 bytes
  2 heads, 4 sectors/track, 366289888 cylinders

Current partition:

  /dev/md2p1    17    244191968    976767808    83    Linux

resize2fs -p /dev/md2 returns: nothing to do

Nothing is failing; everything just seems to be at a maximum size. I
also tried resizing with parted, and it thinks 244191968 is the
maximum, like resize2fs does.
On Sun, Apr 3, 2011 at 12:49 AM, NeilBrown <neilb@suse.de> wrote:
> [...]
* Re: Raid 5 Array
2011-04-03 8:02 ` Marcus
@ 2011-04-03 11:01 ` NeilBrown
2011-04-03 17:46 ` Marcus
0 siblings, 1 reply; 16+ messages in thread
From: NeilBrown @ 2011-04-03 11:01 UTC (permalink / raw)
To: Marcus
Cc: Roberto Spadim, Roman Mamedov, Simon Mcnair,
linux-raid@vger.kernel.org
On Sun, 3 Apr 2011 01:02:40 -0700 Marcus <nexuslite@gmail.com> wrote:
> The file system is ext4. The raid device is now 1.5TB; the old size
> was 1TB. I can create a new partition on the device, it just won't let
> me resize the existing one to a larger size. It seems to be maxed out
> at 1TB for some reason.
What is "it"? What command do you run? What output does it generate?
>
> mdstat shows 1465159552 blocks, which is the new size.
Why didn't you just include the complete "cat /proc/mdstat".
That would have been much more informative.
>
> fdisk -l shows:
>
>   Disk /dev/md2: 1500.3 GB, 1500323381248 bytes
>   2 heads, 4 sectors/track, 366289888 cylinders
>
> Current partition:
>
>   /dev/md2p1    17    244191968    976767808    83    Linux
>
> resize2fs -p /dev/md2 returns: nothing to do
>
Is this "it"?? Do you realise that you need to resize the device "/dev/md2"
before you can resize the filesystem that is stored in "/dev/md2".
> Nothing is failing; everything just seems to be at a maximum size. I
> also tried resizing with parted, and it thinks 244191968 is the
> maximum, like resize2fs does.
>
As you have provided so few concrete details - despite my asking for
lots - I'll have to guess.
I guess that if you

  mdadm -S /dev/md2                  # stop the array
  mdadm -A /dev/md2 --update=device-size /dev/...list.of.devices
                                     # reassemble, refreshing the recorded device sizes
  mdadm -G /dev/md2 --size=max       # grow the array to the full component size
  resize2fs /dev/md2                 # then grow the filesystem
then it might work. Or maybe it'll corrupt everything. I cannot really be
sure because I am being forced to guess.
Commands like:
mdadm --examine /dev/*
mdadm --detail /dev/md*
cat /proc/partitions
cat /proc/mdstat
dmesg | tail -100
are the sort of things that are useful - not "I tried something and it didn't
work"...
NeilBrown
(sorry, but I get grumpy when people provide so little information).
>
>
> On Sun, Apr 3, 2011 at 12:49 AM, NeilBrown <neilb@suse.de> wrote:
> > On Sat, 2 Apr 2011 23:41:50 -0700 Marcus <nexuslite@gmail.com> wrote:
> >
> >> Okay I have my raid extended to 1500.3GB however I can't seem to grow
> >> the partition past 1TB. It will let me create a new partition but it
> >> won't let me make the current partition any bigger. Does anyone know
> >> how to fix this?
> >
> > Best to show exactly the command you use, exactly the results, and details
> > about the component devices (particularly size).
> > When using any mdadm command, add "-vv" to make it as verbose as possible.
> > Include kernel log messages (e.g. dmesg | tail -100)
> >
> > Prefer to send too much info rather than not enough.
> > And just place it in-line in the email, no attachments, not 'pastebin' links.
> >
> > NeilBrown
> >
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Raid 5 Array
2011-04-03 11:01 ` NeilBrown
@ 2011-04-03 17:46 ` Marcus
2011-04-03 17:50 ` Roman Mamedov
0 siblings, 1 reply; 16+ messages in thread
From: Marcus @ 2011-04-03 17:46 UTC (permalink / raw)
To: NeilBrown
Cc: Roberto Spadim, Roman Mamedov, Simon Mcnair,
linux-raid@vger.kernel.org
I provided you all the relevant information. If you had paid attention
to the sizes, and to the fact that I stated that I can add a new
partition to the device, you would have realized that I have already
grown the raid.

1465159552 raid size
976767808 partition size

See how the partition is smaller than the raid by about 500GB?
nexuslite@ubuntu:~$ resize2fs -p /dev/md2p1
resize2fs 1.41.11 (14-Mar-2010)
The filesystem is already 244191952 blocks long. Nothing to do!
That is the exact message resize2fs returns. 244191968 is the current
end block of the partition, and parted also shows 244191968 as the
maximum end for a partition. There is nothing related in dmesg because
nothing is erroring; the result is just not what I want.
mdstat
md0 : active raid0 sdb1[1] sdc1[0]
732579840 blocks 64k chunks
md2 : active raid5 md0[0] sde1[2] sdd1[1]
1465159552 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
I have grown this raid array before, so it isn't like I am a newbie; I
just don't understand why the partition is stuck at 1TB. I keep reading
about 2TB limits, but I can't find anything relevant to the 1TB limit I
am experiencing.
On Sun, Apr 3, 2011 at 4:01 AM, NeilBrown <neilb@suse.de> wrote:
> [...]
* Re: Raid 5 Array
2011-04-03 17:46 ` Marcus
@ 2011-04-03 17:50 ` Roman Mamedov
[not found] ` <BANLkTinzOpR-pgR1GYxgxaJhNoOUxc0D_w@mail.gmail.com>
0 siblings, 1 reply; 16+ messages in thread
From: Roman Mamedov @ 2011-04-03 17:50 UTC (permalink / raw)
To: Marcus; +Cc: NeilBrown, Roberto Spadim, Simon Mcnair,
linux-raid@vger.kernel.org
On Sun, 3 Apr 2011 10:46:23 -0700
Marcus <nexuslite@gmail.com> wrote:
> I provided you all the relevant information. If you had paid
> attention to the sizes, and to the fact that I stated that I can add a
> new partition to the device, you would have realized that I have
> already grown the raid.
>
> 1465159552 raid size
> 976767808 partition size
>
> See how the partition is smaller than the raid by about 500GB?
Then why not run "cfdisk /dev/md2" (I recommend the version from "GNU fdisk"),
notice that you have a 900GB partition there and 500 GB of free space, then
resize the partition?
> nexuslite@ubuntu:~$ resize2fs -p /dev/md2p1
> resize2fs 1.41.11 (14-Mar-2010)
> The filesystem is already 244191952 blocks long. Nothing to do!
>
> That is the exact message resize2fs returns. 244191968 is the current
> end block of the partition, and parted also shows 244191968 as the
> maximum end for a partition. There is nothing related in dmesg because
> nothing is erroring; the result is just not what I want.
You don't seem to understand the difference between /dev/md2 and
/dev/md2p1, and also that resize2fs will not resize md2p1: it will only
amend the ext* filesystem so that it takes up all of md2p1.
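In other words, the order is: grow the array, then the partition, then
the filesystem. Roughly, and assuming the recreated partition keeps its
original start sector:

  mdadm --grow /dev/md2 --size=max   # 1) the md device (already done in your case)
  # 2) enlarge md2p1 with fdisk/parted: delete it, recreate with the SAME start
  resize2fs /dev/md2p1               # 3) the filesystem inside the partition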
--
With respect,
Roman
* Re: Raid 5 Array
[not found] ` <BANLkTinzOpR-pgR1GYxgxaJhNoOUxc0D_w@mail.gmail.com>
@ 2011-04-03 19:57 ` Roman Mamedov
0 siblings, 0 replies; 16+ messages in thread
From: Roman Mamedov @ 2011-04-03 19:57 UTC (permalink / raw)
To: Marcus; +Cc: NeilBrown, Roberto Spadim, Simon Mcnair,
linux-raid@vger.kernel.org
On Sun, 3 Apr 2011 11:56:29 -0700
Marcus <nexuslite@gmail.com> wrote:
> Thanks! Resizing the partition first worked. I had to use fdisk
> instead of cfdisk, though, because my partition didn't start at the
> beginning of the drive; it started 17 blocks in.
Use "Reply to all" properly in your client, don't just drop all "CC:" at
will, people who tried to help you might be interested to know that the
problem is solved and what was the solution.
>
> On Sun, Apr 3, 2011 at 10:50 AM, Roman Mamedov <rm@romanrm.ru> wrote:
> > [...]
--
With respect,
Roman
end of thread, other threads: [~2011-04-03 19:57 UTC | newest]
Thread overview: 16+ messages
2011-04-02 18:51 Raid 5 Array Marcus
2011-04-02 19:01 ` Simon McNair
[not found] ` <BANLkTimJfUhvkpkkAH=NLJOvLL-Jotrwqg@mail.gmail.com>
2011-04-02 20:09 ` Simon McNair
[not found] ` <BANLkTim3uOiF7Qdir_Vou3rSp1zJmgf6iA@mail.gmail.com>
2011-04-02 21:27 ` Simon Mcnair
2011-04-02 21:45 ` Simon Mcnair
2011-04-02 22:01 ` Roman Mamedov
2011-04-02 22:04 ` Roberto Spadim
2011-04-02 23:06 ` Marcus
2011-04-03 0:22 ` Marcus
2011-04-03 6:41 ` Marcus
2011-04-03 7:49 ` NeilBrown
2011-04-03 8:02 ` Marcus
2011-04-03 11:01 ` NeilBrown
2011-04-03 17:46 ` Marcus
2011-04-03 17:50 ` Roman Mamedov
[not found] ` <BANLkTinzOpR-pgR1GYxgxaJhNoOUxc0D_w@mail.gmail.com>
2011-04-03 19:57 ` Roman Mamedov