* Re: RAID 6 grow problem
[not found] <Pine.LNX.4.64.0706021232040.13184@p34.internal.lan>
@ 2007-06-02 21:26 ` Iain Rauch
2007-06-02 21:31 ` Justin Piszcz
2007-06-02 21:38 ` Neil Brown
0 siblings, 2 replies; 18+ messages in thread
From: Iain Rauch @ 2007-06-02 21:26 UTC (permalink / raw)
To: Justin Piszcz, linux-raid@vger.kernel.org
> For the critical section part, it may be your syntax..
>
> When I had the problem, Neil showed me the path! :)
I don't think it is incorrect. Before, I thought it was supposed to specify
an actual file, so I 'touch'ed one and it said the file exists.
> For your issue, do you have raid5/6 GROW support enabled in the kernel?
> Also, when I grew mine I never used the --backup-file option.
I don't know; how would I find this out? uname -r gives me 2.6.20-15-server.
Iain
* Re: RAID 6 grow problem
2007-06-02 21:26 ` RAID 6 grow problem Iain Rauch
@ 2007-06-02 21:31 ` Justin Piszcz
2007-06-02 21:33 ` Justin Piszcz
2007-06-02 21:38 ` Neil Brown
1 sibling, 1 reply; 18+ messages in thread
From: Justin Piszcz @ 2007-06-02 21:31 UTC (permalink / raw)
To: Iain Rauch; +Cc: linux-raid@vger.kernel.org
On Sat, 2 Jun 2007, Iain Rauch wrote:
>> For the critical section part, it may be your syntax..
>>
>> When I had the problem, Neil showed me the path! :)
> I don't think it is incorrect. Before, I thought it was supposed to specify
> an actual file, so I 'touch'ed one and it said the file exists.
>
>> For your issue, do you have raid5/6 GROW support enabled in the kernel?
>> Also, when I grew mine I never used the --backup-file option.
> I don't know; how would I find this out? uname -r gives me 2.6.20-15-server.
>
>
> Iain
>
>
Find the .config for your kernel and see if the raid5 grow support is
enabled.
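On Debian/Ubuntu-style kernels the config is usually shipped in /boot, so
something like this should tell you (path is distro-dependent, adjust as
needed):

  grep RAID5_RESHAPE /boot/config-$(uname -r)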
* Re: RAID 6 grow problem
2007-06-02 21:31 ` Justin Piszcz
@ 2007-06-02 21:33 ` Justin Piszcz
0 siblings, 0 replies; 18+ messages in thread
From: Justin Piszcz @ 2007-06-02 21:33 UTC (permalink / raw)
To: Iain Rauch; +Cc: linux-raid@vger.kernel.org
CONFIG_MD_RAID5_RESHAPE=y
Check for this option.
On Sat, 2 Jun 2007, Justin Piszcz wrote:
>
>
> On Sat, 2 Jun 2007, Iain Rauch wrote:
>
>>> For the critical section part, it may be your syntax..
>>>
>>> When I had the problem, Neil showed me the path! :)
>> I don't think it is incorrect. Before, I thought it was supposed to specify
>> an actual file, so I 'touch'ed one and it said the file exists.
>>
>>> For your issue, do you have raid5/6 GROW support enabled in the kernel?
>>> Also, when I grew mine I never used the --backup-file option.
>> I don't know; how would I find this out? uname -r gives me
>> 2.6.20-15-server.
>>
>>
>> Iain
>>
>>
>
> Find the .config for your kernel and see if the raid5 grow support is
> enabled.
* Re: RAID 6 grow problem
2007-06-02 21:26 ` RAID 6 grow problem Iain Rauch
2007-06-02 21:31 ` Justin Piszcz
@ 2007-06-02 21:38 ` Neil Brown
2007-06-02 21:55 ` Iain Rauch
1 sibling, 1 reply; 18+ messages in thread
From: Neil Brown @ 2007-06-02 21:38 UTC (permalink / raw)
To: Iain Rauch; +Cc: Justin Piszcz, linux-raid@vger.kernel.org
On Saturday June 2, groups@email.iain.rauch.co.uk wrote:
> > For the critical section part, it may be your syntax..
> >
> > When I had the problem, Neil showed me the path! :)
> I don't think it is incorrect. Before, I thought it was supposed to specify
> an actual file, so I 'touch'ed one and it said the file exists.
>
> > For your issue, do you have raid5/6 GROW support enabled in the kernel?
> > Also, when I grew mine I never used the --backup-file option.
> I don't know; how would I find this out? uname -r gives me 2.6.20-15-server.
raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
supported.
You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
NeilBrown
* Re: RAID 6 grow problem
2007-06-02 21:38 ` Neil Brown
@ 2007-06-02 21:55 ` Iain Rauch
2007-06-02 22:07 ` Neil Brown
0 siblings, 1 reply; 18+ messages in thread
From: Iain Rauch @ 2007-06-02 21:55 UTC (permalink / raw)
To: Neil Brown, linux-raid@vger.kernel.org, Justin Piszcz
> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
> supported.
> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
I don't see that in the config. Should I add it? Then reboot?
I used apt-get install mdadm to first install it, which gave me 2.5.x, then I
downloaded the new source and typed make, then make install. Now mdadm -V
shows "mdadm - v2.6.2 - 21st May 2007".
Is there any way to check it is installed correctly?
Iain
* Re: RAID 6 grow problem
2007-06-02 21:55 ` Iain Rauch
@ 2007-06-02 22:07 ` Neil Brown
2007-06-02 22:35 ` Iain Rauch
0 siblings, 1 reply; 18+ messages in thread
From: Neil Brown @ 2007-06-02 22:07 UTC (permalink / raw)
To: Iain Rauch; +Cc: linux-raid@vger.kernel.org, Justin Piszcz
On Saturday June 2, groups@email.iain.rauch.co.uk wrote:
> > raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
> > supported.
> > You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>
> I don't see that in the config. Should I add it? Then reboot?
You reported that you were running a 2.6.20 kernel, which doesn't
support raid6 reshape.
You need to compile a 2.6.21 kernel (or
apt-get install linux-image-2.6.21-1-amd64
or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
.config before compiling.
>
> I used apt-get install mdadm to first install it, which gave me 2.5.x, then I
> downloaded the new source and typed make, then make install. Now mdadm -V
> shows "mdadm - v2.6.2 - 21st May 2007".
> Is there any way to check it is installed correctly?
The "mdadm -V" check is sufficient.
NeilBrown
* Re: RAID 6 grow problem
2007-06-02 22:07 ` Neil Brown
@ 2007-06-02 22:35 ` Iain Rauch
2007-06-03 23:33 ` Daniel Korstad
2007-06-04 20:29 ` RAID 6 grow problem Bill Davidsen
0 siblings, 2 replies; 18+ messages in thread
From: Iain Rauch @ 2007-06-02 22:35 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid@vger.kernel.org, Justin Piszcz
>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>> supported.
>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>
>> I don't see that in the config. Should I add it? Then reboot?
>
> You reported that you were running a 2.6.20 kernel, which doesn't
> support raid6 reshape.
> You need to compile a 2.6.21 kernel (or
> apt-get install linux-image-2.6.21-1-amd64
> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
> .config before compiling.
There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
specify what is in the config when using apt-get install?
>> I used apt-get install mdadm to first install it, which gave me 2.5.x, then I
>> downloaded the new source and typed make, then make install. Now mdadm -V
>> shows "mdadm - v2.6.2 - 21st May 2007".
>> Is there any way to check it is installed correctly?
>
> The "mdadm -V" check is sufficient.
Are you sure? At first I just did the make/make install, and mdadm -V did
tell me v2.6.2, but I don't believe it was installed properly because it
didn't recognise my array, nor did it make a config file, and cat
/proc/mdstat said no such file or directory.
Iain
* RE: RAID 6 grow problem
2007-06-02 22:35 ` Iain Rauch
@ 2007-06-03 23:33 ` Daniel Korstad
2007-06-04 6:47 ` RAID6 clean? Guy Watkins
2007-06-04 20:29 ` RAID 6 grow problem Bill Davidsen
1 sibling, 1 reply; 18+ messages in thread
From: Daniel Korstad @ 2007-06-03 23:33 UTC (permalink / raw)
To: groups; +Cc: jpiszcz, linux-raid, neilb
As I understand it, you have to get 2.6.21; the RAID6 reshape code was not added until that kernel.
If your apt-get only has the 2.6.20 version, your distro (whatever flavor) might not have released a cooked version for 2.6.21 yet.
I run Fedora, an older version (FC4). In Fedora the update utility is yum, but since mine is older there are no longer updates for me.
You have a couple of options. If your distro/version is still supported for updates, wait for them to release a package for 2.6.21.
Or get the vanilla source. This is what I had to do, since my version is considered legacy and no new updates are coming my way.
The source can be found at http://www.kernel.org/
Compiling a kernel is not too hard, just a bit time-consuming depending on your hardware, but unfortunately it is a bit more involved than just adding a line to the config and rebooting, as you suggested previously.
It varies a bit among distros; you may want to search some forums for yours, as they might have a good howto.
For me (Fedora) the basics are;
1. Download the vanilla source
2. As root, unpack it into /usr/src/:
tar xjf linux-2.6.21.3.tar.bz2 -C /usr/src/
3. Change to the new kernel directory
cd /usr/src/linux-2.6.21.3
4. If you wish to apply a kernel patch, do so before the next step (optional). As an example:
patch -p0 < kernel.patch
5. Put the kernel source in a proper/clean state:
make mrproper
6. Setup /usr/src/linux Softlink:
cd /usr/src/; ln -s linux-2.6.21.3 linux
7. The .config file contains EVERYTHING specific to the kernel you compile. You must have a starting .config file or it will be too difficult to select the right set of options. I grabbed one from my old source that is specific for my processor type:
cd /usr/src/linux
cp /usr/src/redhat/BUILD/kernel-2.6.11/linux-2.6.11/configs/kernel-2.6.11-x86_64.config .config
Hopefully for you, your old source will not be quite as old as mine. You will see what I mean in the make oldconfig step...
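On Debian/Ubuntu-style systems the running kernel's config is usually
shipped in /boot, so as a starting point you can probably do something like
this (path assumed; check what your distro actually installs):

  cp /boot/config-$(uname -r) .config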
8. As an option you can change the EXTRA name attached to the end of the kernel version, for example: 1.1369_FC4. Open /usr/src/linux/Makefile in any editor (gedit, kwrite, nano, etc.) and go to line 4.
EXTRAVERSION = -raid6reshape
You can change this value to ANYTHING you like. The result will be that your kernel name will be something like: 2.6.21.3-raid6reshape
9. Bring the .config file up to date to match the new kernel.
make oldconfig
This will prompt for many new options. If you know what they are, answer them correctly; otherwise just hit [Enter].
Since my old config was from a very old source, I had a ton of prompts for all the new wonderful features and changes that the old config did not address. Be sure to hit Enter to take the defaults unless you know what you are doing.
The one exception: if you see CONFIG_MD_RAID5_RESHAPE, be sure to answer yes.
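Afterwards you can double-check the new .config with something like:

  grep MD_RAID5_RESHAPE .config

and make sure it says =y.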
10. Once done, configure all the necessary options in the kernel by using any of the following:
text based questions: make config
text based GUI: make menuconfig
GTK based GUI: make gconfig
QT based GUI: make xconfig (recommended but you need to be in xwindows)
Be VERY CAREFUL when changing things with which you are unfamiliar. If you do not know what it is, leave it at the default value. Some things I personally change are the processor type to match my AMD, and adding NTFS in the filesystem support to be able to read a friend's USB hard drive, since all he uses is Windows...
Oh, and check to see that the RAID5 RESHAPE feature is included.
11. This step may take between 15 minutes and 2 hours depending on the speed of your system:
make all
If that did not work, research the error and try to disable the troublesome module or feature in the configure step above. Sometimes I get too crazy and disable too many things, or enable too many things that were not the default, and it breaks.
If that worked correctly, install it:
12. make modules_install
13. make install
Depending on your distro type, these steps may or may not work exactly as shown.
After you are done and have rebooted into the new kernel, grow your RAID.
For me, I had a RAID6 with 8 drives and added a 9th:
mdadm --add /dev/md1 /dev/sdj1
mdadm --grow /dev/md1 --raid-devices=9
After mdadm finishes (watch cat /proc/mdstat; this will take hours), the filesystem then needs to be expanded to fill up the new space:
fsck.ext3 /dev/md1
resize2fs /dev/md1
or, since I am using XFS:
xfs_growfs /dev/md1
After a few hours all was done and it all worked great.
Good luck!!
And thanks to all for the work to bring in reshape for RAID6. I had been waiting a while for this.
Chalk one up for a successful RAID6 reshape here!
Cheers,
Dan.
-----Original Message-----
From: Iain Rauch [mailto:groups@email.iain.rauch.co.uk]
Sent: Saturday, June 02, 2007 5:36 PM
To: Neil Brown
Cc: linux-raid@vger.kernel.org; Justin Piszcz
Subject: Re: RAID 6 grow problem
>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>> supported.
>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>
>> I don't see that in the config. Should I add it? Then reboot?
>
> You reported that you were running a 2.6.20 kernel, which doesn't
> support raid6 reshape.
> You need to compile a 2.6.21 kernel (or
> apt-get install linux-image-2.6.21-1-amd64
> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
> .config before compiling.
There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
specify what is in the config when using apt-get install?
>> I used apt-get install mdadm to first install it, which gave me 2.5.x, then I
>> downloaded the new source and typed make, then make install. Now mdadm -V
>> shows "mdadm - v2.6.2 - 21st May 2007".
>> Is there any way to check it is installed correctly?
>
> The "mdadm -V" check is sufficient.
Are you sure? At first I just did the make/make install, and mdadm -V did
tell me v2.6.2, but I don't believe it was installed properly because it
didn't recognise my array, nor did it make a config file, and cat
/proc/mdstat said no such file or directory.
Iain
* RAID6 clean?
2007-06-03 23:33 ` Daniel Korstad
@ 2007-06-04 6:47 ` Guy Watkins
2007-06-04 6:59 ` Neil Brown
0 siblings, 1 reply; 18+ messages in thread
From: Guy Watkins @ 2007-06-04 6:47 UTC (permalink / raw)
To: 'linux-raid'
I have a RAID6 array. One drive is bad and is now unplugged because the
system hangs waiting on the disk.
The system won't boot because / is "not clean". I booted a rescue CD and
managed to start my arrays using --force. I tried to stop and start the
arrays but they still required --force. I then used "echo repair >
sync_action" to make the arrays "clean". I can now stop and start the RAID6
array without --force. I can now boot normally with 1 missing disk.
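For the record, the commands were along these lines; the array and device
names are just examples:

  mdadm --assemble --force /dev/md0 /dev/sd[abcd]1
  echo repair > /sys/block/md0/md/sync_action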
Is there an easier method? Some sort of boot option? This was a real pain
in the @$$.
It would be nice if there were an array option, set in the md superblock,
to allow an "un-clean" array to be started.
Thanks,
Guy
* Re: RAID6 clean?
2007-06-04 6:47 ` RAID6 clean? Guy Watkins
@ 2007-06-04 6:59 ` Neil Brown
2007-08-18 5:40 ` Guy Watkins
0 siblings, 1 reply; 18+ messages in thread
From: Neil Brown @ 2007-06-04 6:59 UTC (permalink / raw)
To: Guy Watkins; +Cc: 'linux-raid'
On Monday June 4, linux-raid@watkins-home.com wrote:
> I have a RAID6 array. One drive is bad and is now unplugged because the
> system hangs waiting on the disk.
>
> The system won't boot because / is "not clean". I booted a rescue CD and
> managed to start my arrays using --force. I tried to stop and start the
> arrays but they still required --force. I then used "echo repair >
> sync_action" to make the arrays "clean". I can now stop and start the RAID6
> array without --force. I can now boot normally with 1 missing disk.
>
> Is there an easier method? Some sort of boot option? This was a real pain
> in the @$$.
>
> It would be nice if there were an array option, set in the md superblock,
> to allow an "un-clean" array to be started.
Documentation/md.txt
search for 'clean' - no luck.
search for 'dirty'
|
|So, to boot with a root filesystem of a dirty degraded raid[56], use
|
| md-mod.start_dirty_degraded=1
|
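That is, append it to the kernel line in your boot loader; e.g. in grub's
menu.lst (the kernel image and root device here are just placeholders):

  kernel /boot/vmlinuz-2.6.21.3 root=/dev/md0 ro md-mod.start_dirty_degraded=1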
NeilBrown
* Re: RAID 6 grow problem
2007-06-02 22:35 ` Iain Rauch
2007-06-03 23:33 ` Daniel Korstad
@ 2007-06-04 20:29 ` Bill Davidsen
2007-06-05 17:08 ` Iain Rauch
1 sibling, 1 reply; 18+ messages in thread
From: Bill Davidsen @ 2007-06-04 20:29 UTC (permalink / raw)
To: Iain Rauch; +Cc: Neil Brown, linux-raid@vger.kernel.org, Justin Piszcz
Iain Rauch wrote:
>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>>> supported.
>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>>>
>>> I don't see that in the config. Should I add it? Then reboot?
>>>
>> You reported that you were running a 2.6.20 kernel, which doesn't
>> support raid6 reshape.
>> You need to compile a 2.6.21 kernel (or
>> apt-get install linux-image-2.6.21-1-amd64
>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
>> .config before compiling.
>>
>
> There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
> specify what is in the config when using apt-get install?
>
2.6.20 doesn't support the feature you want; only you can tell if that
matters a lot. You don't specify the config with apt-get; either get the raw
kernel source and configure it yourself, or run what the vendor provides.
Sorry, those are the options.
>
>>> I used apt-get install mdadm to first install it, which gave me 2.5.x, then I
>>> downloaded the new source and typed make, then make install. Now mdadm -V
>>> shows "mdadm - v2.6.2 - 21st May 2007".
>>> Is there any way to check it is installed correctly?
>>>
>> The "mdadm -V" check is sufficient.
>>
>
> Are you sure? At first I just did the make/make install, and mdadm -V did
> tell me v2.6.2, but I don't believe it was installed properly because it
> didn't recognise my array, nor did it make a config file, and cat
> /proc/mdstat said no such file or directory.
mdadm doesn't control the /proc/mdstat file; it's written by the kernel.
The kernel had no active array to mention in the mdstat file.
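If /proc/mdstat doesn't exist at all, the md driver itself probably isn't
loaded yet; on a modular kernel something like

  modprobe md-mod

should make the file appear even with no arrays running.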
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: RAID 6 grow problem
2007-06-04 20:29 ` RAID 6 grow problem Bill Davidsen
@ 2007-06-05 17:08 ` Iain Rauch
2007-06-05 19:21 ` Daniel Korstad
0 siblings, 1 reply; 18+ messages in thread
From: Iain Rauch @ 2007-06-05 17:08 UTC (permalink / raw)
To: Bill Davidsen, Daniel Korstad, Neil Brown,
linux-raid@vger.kernel.org, Justin Piszcz
>>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>>>> supported.
>>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>>>>
>>>> I don't see that in the config. Should I add it? Then reboot?
>>>>
Don't know how I missed it first time, but that is in my config.
>>> You reported that you were running a 2.6.20 kernel, which doesn't
>>> support raid6 reshape.
>>> You need to compile a 2.6.21 kernel (or
>>> apt-get install linux-image-2.6.21-1-amd64
>>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
>>> .config before compiling.
>>>
>>
>> There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
>> specify what is in the config when using apt-get install?
>>
>
> 2.6.20 doesn't support the feature you want; only you can tell if that
> matters a lot. You don't specify the config with apt-get; either get the raw
> kernel source and configure it yourself, or run what the vendor provides.
> Sorry, those are the options.
I have finally managed to compile a new kernel (2.6.21) and boot it.
>>>> I used apt-get install mdadm to first install it, which gave me 2.5.x,
>>>> then I downloaded the new source and typed make, then make install. Now
>>>> mdadm -V shows "mdadm - v2.6.2 - 21st May 2007".
>>>> Is there any way to check it is installed correctly?
>>>
>>> The "mdadm -V" check is sufficient.
>>
>> Are you sure? At first I just did the make/make install, and mdadm -V did
>> tell me v2.6.2, but I don't believe it was installed properly because it
>> didn't recognise my array, nor did it make a config file, and cat
>> /proc/mdstat said no such file or directory.
> mdadm doesn't control the /proc/mdstat file, it's written by the kernel.
> The kernel had no active array to mention in the mdstat file.
I see, thanks. I think it is working OK.
I am currently growing a 4 disk array to an 8 disk array as a test, and if
that works I'll use those 8 and add them to my original 8 to make a 16
disk array. This will be a while yet as this first grow is going to take
2000 minutes. It looks like it's going to work fine, but I'll report back in
a couple of days.
Thank you so much for your help; Dan, Bill, Neil, Justin and everyone else.
The last thing I would like to know is whether it is possible to 'clean' the
superblocks to make sure they are all OK. TIA.
Iain
* RE: RAID 6 grow problem
2007-06-05 17:08 ` Iain Rauch
@ 2007-06-05 19:21 ` Daniel Korstad
2007-06-05 21:26 ` Jon Nelson
2007-06-10 19:18 ` Iain Rauch
0 siblings, 2 replies; 18+ messages in thread
From: Daniel Korstad @ 2007-06-05 19:21 UTC (permalink / raw)
To: Iain Rauch, Bill Davidsen, Neil Brown, linux-raid, Justin Piszcz
Sounds like you are well on your way.
I am not too surprised at the time to completion. I probably underestimated/exaggerated a bit when I said after a few hours :)
It took me over a day to grow one disk as well. But my experience was on a system with an older AMD 754 x64 motherboard with a couple of SATA ports on board and the rest on two PCI cards, each with 4 SATA ports. So I have 8 SATA drives on my PCI (33MHz x 4 bytes (32 bits) = 133MB/s) bus, which is basically saturated after three drives.
But this box sits in the basement and acts as my NAS. So for file access across the 100Mb/s network or wireless network, it does just fine.
When I do hdparm -tT /dev/md1 I get read access speeds from 110MB/s - 130MB/s and for my individual drives at around 50 - 60 MB/s, so the RAID6 outperforms (reads) any one drive and I am happy. Bonnie/Bonnie++ is probably a better tool for testing, but I was just looking for quick and dirty numbers.
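If you do want a better number, a Bonnie++ run along these lines works; the
mount point is just an example, and the file size should be at least twice
your RAM so caching doesn't skew the result:

  bonnie++ -d /mnt/raid -s 8g -u nobody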
I have friends that have newer motherboards with half a dozen or almost a dozen SATA connectors, plus PCI-Express SATA controller cards. Getting rid of the slow PCI bus limitation increases the speed by magnitudes... But this is another topic/thread...
Congrats on your new kernel and progress!
Cheers,
Dan.
----- Original Message -----
From: Iain Rauch
Sent: Tue, 6/5/2007 12:09pm
To: Bill Davidsen ; Daniel Korstad ; Neil Brown ; linux-raid@vger.kernel.org; Justin Piszcz
Subject: Re: RAID 6 grow problem
>>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>>>> supported.
>>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>>>>
>>>> I don't see that in the config. Should I add it? Then reboot?
>>>>
Don't know how I missed it first time, but that is in my config.
>>> You reported that you were running a 2.6.20 kernel, which doesn't
>>> support raid6 reshape.
>>> You need to compile a 2.6.21 kernel (or
>>> apt-get install linux-image-2.6.21-1-amd64
>>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
>>> .config before compiling.
>>>
>>
>> There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
>> specify what is in the config when using apt-get install?
>>
>
> 2.6.20 doesn't support the feature you want; only you can tell if that
> matters a lot. You don't specify the config with apt-get; either get the raw
> kernel source and configure it yourself, or run what the vendor provides.
> Sorry, those are the options.
I have finally managed to compile a new kernel (2.6.21) and boot it.
>>>> I used apt-get install mdadm to first install it, which gave me 2.5.x,
>>>> then I downloaded the new source and typed make, then make install. Now
>>>> mdadm -V shows "mdadm - v2.6.2 - 21st May 2007".
>>>> Is there any way to check it is installed correctly?
>>>
>>> The "mdadm -V" check is sufficient.
>>
>> Are you sure? At first I just did the make/make install, and mdadm -V did
>> tell me v2.6.2, but I don't believe it was installed properly because it
>> didn't recognise my array, nor did it make a config file, and cat
>> /proc/mdstat said no such file or directory.
> mdadm doesn't control the /proc/mdstat file, it's written by the kernel.
> The kernel had no active array to mention in the mdstat file.
I see, thanks. I think it is working OK.
I am currently growing a 4 disk array to an 8 disk array as a test, and if
that works I'll use those 8 and add them to my original 8 to make a 16
disk array. This will be a while yet as this first grow is going to take
2000 minutes. It looks like it's going to work fine, but I'll report back in
a couple of days.
Thank you so much for your help; Dan, Bill, Neil, Justin and everyone else.
The last thing I would like to know is whether it is possible to 'clean' the
superblocks to make sure they are all OK. TIA.
Iain
* RE: RAID 6 grow problem
2007-06-05 19:21 ` Daniel Korstad
@ 2007-06-05 21:26 ` Jon Nelson
2007-06-06 0:46 ` Neil Brown
2007-06-10 19:18 ` Iain Rauch
1 sibling, 1 reply; 18+ messages in thread
From: Jon Nelson @ 2007-06-05 21:26 UTC (permalink / raw)
Cc: linux-raid
On Tue, 5 Jun 2007, Daniel Korstad wrote:
> Sounds like you are well on your way.
> I am not too surprised at the time to completion. I probably
> underestimated/exaggerated a bit when I said after a few hours :)
> It took me over a day to grow one disk as well. But my experience
> was on a system with an older AMD 754 x64 motherboard with a couple of
> SATA ports on board and the rest on two PCI cards, each with 4 SATA ports.
> So I have 8 SATA drives on my PCI (33MHz x 4 bytes (32 bits) = 133MB/s)
> bus, which is basically saturated after three drives.
Related to this question, I have several of my own.
I have an EPoX 570SLI motherboard with 3 SATAII drives, all 320GB: one
Hitachi, one Samsung, one Seagate. I built a RAID5 out of a partition
carved from each. I can issue a 'check' command and the rebuild speed
hovers around 70MB/s, sometimes up to 73MB/s, and dstat/iostat/whatever
confirms that each drive is sustaining approximately 70MB/s reads.
Therefore, 3x70MB/s = 210MB/s, which is a bunch more than 133MB/s. lspci
-v reveals, for one of the interfaces (the others are pretty much the
same):
00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a2)
(prog-if 85 [Master SecO PriO])
Subsystem: EPoX Computer Co., Ltd. Unknown device 1026
Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 11
I/O ports at 09f0 [size=8]
I/O ports at 0bf0 [size=4]
I/O ports at 0970 [size=8]
I/O ports at 0b70 [size=4]
I/O ports at e000 [size=16]
Memory at fe02d000 (32-bit, non-prefetchable) [size=4K]
Capabilities: [44] Power Management version 2
Capabilities: [b0] Message Signalled Interrupts: Mask- 64bit+
Queue=0/2 Enable-
Capabilities: [cc] HyperTransport: MSI Mapping
which seems to clearly indicate that it is running at 66MHz (meaning
266MB/s maximum). As I say below, the best I seem to be able to get out
of it is 133MB/s, give or take. Can somebody explain what some of those
other items mean, such as the 64bit entry and the different-sized I/O
ports?
Each drive identifies with different UDMA levels:
The hitachi:
ata1.00: ATA-7, max UDMA/133, 625142448 sectors: LBA48 NCQ (depth 0/32)
The samsung:
ata2.00: ATA-8, max UDMA7, 625142448 sectors: LBA48 NCQ (depth 0/32)
The seagate:
ata3.00: ATA-7, max UDMA/133, 625142448 sectors: LBA48 NCQ (depth 0/32)
I'm trying to determine what the limiting factor of my RAID is: is it
the drives, my CPU (AMD x86_64, dual core, 3600+), my motherboard,
software, or something else? The best I've been able to get in userland
is about 133MB/s (no filesystem, raw device reads using dd with
iflag=direct). What *should* I be able to get?
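For reference, the read test was something like this, with the device name
just an example:

  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct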
--
Jon Nelson
<jnelson-linux-raid@jamponi.net>
* RE: RAID 6 grow problem
2007-06-05 21:26 ` Jon Nelson
@ 2007-06-06 0:46 ` Neil Brown
0 siblings, 0 replies; 18+ messages in thread
From: Neil Brown @ 2007-06-06 0:46 UTC (permalink / raw)
To: Jon Nelson; +Cc: linux-raid
On Tuesday June 5, jnelson-linux-raid@jamponi.net wrote:
>
> I have an EPoX 570SLI motherboard with 3 SATAII drives, all 320GiB: one
> Hitachi, one Samsung, one Seagate. I built a RAID5 out of a partition
> carved from each. I can issue a 'check' command and the rebuild speed
> hovers around 70MB/s, sometimes up to 73MB/s, and dstat/iostat/whatever
> confirms that each drive is sustaining approximately 70MB/s reads.
> Therefore, 3x70MB/s = 210MB/s which is a bunch more than 133MB/s. lspci
> -v reveals, for one of the interfaces (the others are pretty much the
> same):
...
>
> I'm trying to determine what the limiting factor of my raid is: Is it
> the drives, ....
If you look at the data sheets for the drives (I just had a look at a
Seagate one, fairly easy to find on their web site) you should find
the Maximum Sustained Transfer Rate, which will be about 70MB/s for
current 7200rpm drives.
So I think the drive is the limiting factor.
NeilBrown
* Re: RAID 6 grow problem
2007-06-05 19:21 ` Daniel Korstad
2007-06-05 21:26 ` Jon Nelson
@ 2007-06-10 19:18 ` Iain Rauch
2007-06-10 20:17 ` Justin Piszcz
1 sibling, 1 reply; 18+ messages in thread
From: Iain Rauch @ 2007-06-10 19:18 UTC (permalink / raw)
To: Daniel Korstad, Bill Davidsen, Neil Brown, linux-raid,
Justin Piszcz
Well, it's all done now. Thank you all so much for your help. There was no
problem re-syncing from 8 to 16 drives, only that it took 4500 minutes.
Anyway, here's a pic of the finished product.
http://iain.rauch.co.uk/images/BigNAS.png
Speeds seem a little slower than before, no idea why. The only things I
changed were to put 4 drives instead of 2 on each SATA controller and to
change to XFS instead of ext3. Chunk size is still the same at 128K. I seem
to be getting around 22MB/s write, whereas before it was nearer 30MB/s. This
is just transferring from a 1TB LaCie disk (2x500GB RAID0), so I don't have
any scientific comparison.
I also tried hdparm -tT and it showed almost 80MB/s for an individual drive
and 113MB/s for md0.
The last things I want to know: am I right in thinking the maximum file
system size I can expand to is 16TB? And is it possible to shrink the size
of an array, if I wanted to build the disks into another array to change
file system or for another reason? Lastly, would I take a performance hit
if I added USB/FireWire drives into the array, or would I be better off
building another NAS and sticking with SATA? (I'm talking a good year off
here; hopefully the space will last that long.)
TIA
Iain
> Sounds like you are well on your way.
>
> I am not too surprised at the time to completion. I probably
> underestimated/exaggerated a bit when I said after a few hours :)
>
> It took me over a day to grow one disk as well. But my experience was on a
> system with an older AMD 754 x64 motherboard with a couple of SATA ports on
> board and the rest on two PCI cards, each with 4 SATA ports. So I have 8
> SATA drives on my PCI (33MHz x 4 bytes (32 bits) = 133MB/s) bus, which is
> basically saturated after three drives.
>
> But this box sits in the basement and acts as my NAS. So for file access
> across the 100Mb/s network or wireless network, it does just fine.
>
> When I do hdparm -tT /dev/md1 I get read access speeds from 110MB/s - 130MB/s
> and for my individual drives at around 50 - 60 MB/s so the RAID6 outperforms
> (reads) any one drive and I am happy. Bonnie/Bonnie++ is probably a better
> tool for testing, but I was just looking for quick and dirty numbers.
>
> I have friends that have newer motherboards with half a dozen or almost a dozen SATA
> connectors and PCI-express SATA controller cards. Getting rid of the slow PCI
> bus limitation increases the speed by magnitudes... But this is another
> topic/thread...
>
>
> Congrats on your new kernel and progress!
> Cheers,
> Dan.
>
> ----- Original Message -----
> From: Iain Rauch
> Sent: Tue, 6/5/2007 12:09pm
> To: Bill Davidsen ; Daniel Korstad ; Neil Brown ; linux-raid@vger.kernel.org;
> Justin Piszcz
> Subject: Re: RAID 6 grow problem
>
>
>>>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>>>>> supported.
>>>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>>>>>
>>>>> I don't see that in the config. Should I add it? Then reboot?
>>>>>
> Don't know how I missed it first time, but that is in my config.
>
>>>> You reported that you were running a 2.6.20 kernel, which doesn't
>>>> support raid6 reshape.
>>>> You need to compile a 2.6.21 kernel (or
>>>> apt-get install linux-image-2.6.21-1-amd64
>>>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
>>>> .config before compiling.
>>>>
>>>
>>> There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
>>> specify what is in the config when using apt-get install?
>>>
>>
>> 2.6.20 doesn't support the feature you want; only you can tell if that
>> matters a lot. You don't specify the config with apt-get; either get the raw
>> kernel source and configure it yourself, or run what the vendor provides.
>> Sorry, those are the options.
> I have finally managed to compile a new kernel (2.6.21) and boot it.
>
>>>>> I used apt-get install mdadm to first install it, which gave me 2.5.x,
>>>>> then I downloaded the new source and typed make, then make install. Now
>>>>> mdadm -V shows "mdadm - v2.6.2 - 21st May 2007".
>>>>> Is there any way to check it is installed correctly?
>>>>
>>>> The "mdadm -V" check is sufficient.
>>>
>>> Are you sure? At first I just did the make/make install, and mdadm -V did
>>> tell me v2.6.2, but I don't believe it was installed properly because it
>>> didn't recognise my array, nor did it make a config file, and cat
>>> /proc/mdstat said no such file or directory.
>> mdadm doesn't control the /proc/mdstat file, it's written by the kernel.
>> The kernel had no active array to mention in the mdstat file.
> I see, thanks. I think it is working OK.
>
> I am currently growing a 4 disk array to an 8 disk array as a test, and if
> that works I'll use those 8 and add them to my original 8 to make a 16
> disk array. This will be a while yet as this first grow is going to take
> 2000 minutes. It looks like it's going to work fine, but I'll report back in
> a couple of days.
>
> Thank you so much for your help; Dan, Bill, Neil, Justin and everyone else.
>
> The last thing I would like to know is whether it is possible to 'clean' the
> superblocks to make sure they are all OK. TIA.
>
>
> Iain
* Re: RAID 6 grow problem
2007-06-10 19:18 ` Iain Rauch
@ 2007-06-10 20:17 ` Justin Piszcz
0 siblings, 0 replies; 18+ messages in thread
From: Justin Piszcz @ 2007-06-10 20:17 UTC (permalink / raw)
To: Iain Rauch; +Cc: Daniel Korstad, Bill Davidsen, Neil Brown, linux-raid, xfs
After you grew the RAID I am unsure if the XFS filesystem will 'know' about
these changes and optimize appropriately; there are sunit= and swidth=
options you can pass at mount time. However, since you're on the PCI bus, it
has to calculate parity information for more drives and it will no doubt be
slower. Here is an example: I started with roughly 6 SATA/IDE drives on the
PCI bus and the rebuilds used to run at 30-40MB/s; when I got to 10-12
drives, the rebuilds slowed down to 8MB/s, since the PCI bus cannot handle
it. I would stick with SATA if you want speed.
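For a 128K chunk on a 16-drive RAID6 (14 data disks) the values, in 512-byte
sectors, would be something like sunit = 128K/512 = 256 and swidth = 256 x
14 = 3584; the mount point is just an example, so double-check the numbers
for your setup:

  mount -o sunit=256,swidth=3584 /dev/md0 /mnt/raid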
Justin.
On Sun, 10 Jun 2007, Iain Rauch wrote:
> Well, it's all done now. Thank you all so much for your help. There was no
> problem re-syncing from 8 to 16 drives, only that it took 4500 minutes.
>
> Anyway, here's a pic of the finished product.
> http://iain.rauch.co.uk/images/BigNAS.png
>
> Speeds seem a little slower than before, no idea why. The only things I
> changed were to put 4 drives instead of 2 on each SATA controller and to
> change to XFS instead of ext3. Chunk size is still the same at 128K. I seem
> to be getting around 22MB/s write, whereas before it was nearer 30MB/s. This
> is just transferring from a 1TB LaCie disk (2x500GB RAID0), so I don't have
> any scientific comparison.
>
> I also tried hdparm -tT and it showed almost 80MB/s for an individual drive
> and 113MB/s for md0.
>
> The last things I want to know: am I right in thinking the maximum file
> system size I can expand to is 16TB? And is it possible to shrink the size
> of an array, if I wanted to build the disks into another array to change
> file system or for another reason? Lastly, would I take a performance hit
> if I added USB/FireWire drives into the array, or would I be better off
> building another NAS and sticking with SATA? (I'm talking a good year off
> here; hopefully the space will last that long.)
>
> TIA
>
>
> Iain
>
>
>
>> Sounds like you are well on your way.
>>
>> I am not too surprised at the time to completion. I probably
>> underestimated/exaggerated a bit when I said after a few hours :)
>>
>> It took me over a day to grow one disk as well. But my experience was on a
>> system with an older AMD 754 x64 motherboard with a couple of SATA ports on
>> board and the rest on two PCI cards, each with 4 SATA ports. So I have 8
>> SATA drives on my PCI (33MHz x 4 bytes (32 bits) = 133MB/s) bus, which is
>> basically saturated after three drives.
>>
>> But this box sits in the basement and acts as my NAS. So for file access
>> across the 100Mb/s network or wireless network, it does just fine.
>>
>> When I do hdparm -tT /dev/md1 I get read access speeds from 110MB/s - 130MB/s
>> and for my individual drives at around 50 - 60 MB/s so the RAID6 outperforms
>> (reads) any one drive and I am happy. Bonnie/Bonnie++ is probably a better
>> tool for testing, but I was just looking for quick and dirty numbers.
>>
>> I have friends that have newer motherboards with half a dozen or almost a dozen SATA
>> connectors and PCI-express SATA controller cards. Getting rid of the slow PCI
>> bus limitation increases the speed by magnitudes... But this is another
>> topic/thread...
>>
>>
>> Congrats on your new kernel and progress!
>> Cheers,
>> Dan.
>>
>> ----- Original Message -----
>> From: Iain Rauch
>> Sent: Tue, 6/5/2007 12:09pm
>> To: Bill Davidsen ; Daniel Korstad ; Neil Brown ; linux-raid@vger.kernel.org;
>> Justin Piszcz
>> Subject: Re: RAID 6 grow problem
>>
>>
>>>>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was
>>>>>>> supported.
>>>>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y.
>>>>>>>
>>>>>> I don't see that in the config. Should I add it? Then reboot?
>>>>>>
>> Don't know how I missed it first time, but that is in my config.
>>
>>>>> You reported that you were running a 2.6.20 kernel, which doesn't
>>>>> support raid6 reshape.
>>>>> You need to compile a 2.6.21 kernel (or
>>>>> apt-get install linux-image-2.6.21-1-amd64
>>>>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the
>>>>> .config before compiling.
>>>>>
>>>>
>>>> There only seems to be version 2.6.20. Does this matter a lot? Also, how do I
>>>> specify what is in the config when using apt-get install?
>>>>
>>>
>>> 2.6.20 doesn't support the feature you want; only you can tell if that
>>> matters a lot. You don't specify the config with apt-get; either get the raw
>>> kernel source and configure it yourself, or run what the vendor provides.
>>> Sorry, those are the options.
>> I have finally managed to compile a new kernel (2.6.21) and boot it.
>>
>>>>>> I used apt-get install mdadm to first install it, which gave me 2.5.x,
>>>>>> then I downloaded the new source and typed make, then make install. Now
>>>>>> mdadm -V shows "mdadm - v2.6.2 - 21st May 2007".
>>>>>> Is there any way to check it is installed correctly?
>>>>>
>>>>> The "mdadm -V" check is sufficient.
>>>>
>>>> Are you sure? At first I just did the make/make install, and mdadm -V did
>>>> tell me v2.6.2, but I don't believe it was installed properly because it
>>>> didn't recognise my array, nor did it make a config file, and cat
>>>> /proc/mdstat said no such file or directory.
>>> mdadm doesn't control the /proc/mdstat file, it's written by the kernel.
>>> The kernel had no active array to mention in the mdstat file.
>> I see, thanks. I think it is working OK.
>>
>> I am currently growing a 4 disk array to an 8 disk array as a test, and if
>> that works I'll use those 8 and add them to my original 8 to make a 16
>> disk array. This will be a while yet as this first grow is going to take
>> 2000 minutes. It looks like it's going to work fine, but I'll report back in
>> a couple of days.
>>
>> Thank you so much for your help; Dan, Bill, Neil, Justin and everyone else.
>>
>> The last thing I would like to know is whether it is possible to 'clean' the
>> superblocks to make sure they are all OK. TIA.
>>
>>
>> Iain
>
>
* RE: RAID6 clean?
2007-06-04 6:59 ` Neil Brown
@ 2007-08-18 5:40 ` Guy Watkins
0 siblings, 0 replies; 18+ messages in thread
From: Guy Watkins @ 2007-08-18 5:40 UTC (permalink / raw)
To: 'Neil Brown'; +Cc: 'linux-raid'
} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Neil Brown
} Sent: Monday, June 04, 2007 2:59 AM
} To: Guy Watkins
} Cc: 'linux-raid'
} Subject: Re: RAID6 clean?
}
} On Monday June 4, linux-raid@watkins-home.com wrote:
} > I have a RAID6 array. One drive is bad and is now unplugged because the
} > system hangs waiting on the disk.
} >
} > The system won't boot because / is "not clean". I booted a rescue CD and
} > managed to start my arrays using --force. I tried to stop and start the
} > arrays but they still required --force. I then used "echo repair >
} > sync_action" to make the arrays "clean". I can now stop and start the
} > RAID6 array without --force. I can now boot normally with 1 missing disk.
} >
} > Is there an easier method? Some sort of boot option? This was a real pain
} > in the @$$.
} >
} > It would be nice if there were an array option, set in the md superblock,
} > to allow an "un-clean" array to be started.
}
} Documentation/md.txt
}
} search for 'clean' - no luck.
} search for 'dirty'
}
} |
} |So, to boot with a root filesystem of a dirty degraded raid[56], use
} |
} | md-mod.start_dirty_degraded=1
} |
}
} NeilBrown
}
Neil,
I had this happen again. The above worked, thanks.
Feature request... Allow us to set a start_dirty_degraded bit in
the superblock so we can set it and forget it. This way it can be automatic
and per array. What do you think?
Thanks,
Guy
end of thread, other threads: [~2007-08-18 5:40 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <Pine.LNX.4.64.0706021232040.13184@p34.internal.lan>
2007-06-02 21:26 ` RAID 6 grow problem Iain Rauch
2007-06-02 21:31 ` Justin Piszcz
2007-06-02 21:33 ` Justin Piszcz
2007-06-02 21:38 ` Neil Brown
2007-06-02 21:55 ` Iain Rauch
2007-06-02 22:07 ` Neil Brown
2007-06-02 22:35 ` Iain Rauch
2007-06-03 23:33 ` Daniel Korstad
2007-06-04 6:47 ` RAID6 clean? Guy Watkins
2007-06-04 6:59 ` Neil Brown
2007-08-18 5:40 ` Guy Watkins
2007-06-04 20:29 ` RAID 6 grow problem Bill Davidsen
2007-06-05 17:08 ` Iain Rauch
2007-06-05 19:21 ` Daniel Korstad
2007-06-05 21:26 ` Jon Nelson
2007-06-06 0:46 ` Neil Brown
2007-06-10 19:18 ` Iain Rauch
2007-06-10 20:17 ` Justin Piszcz