* Software based SATA RAID-5 expandable arrays?
@ 2007-06-17 22:16 greenjelly
2007-06-17 22:23 ` Justin Piszcz
` (2 more replies)
0 siblings, 3 replies; 40+ messages in thread
From: greenjelly @ 2007-06-17 22:16 UTC (permalink / raw)
To: linux-raid
I am researching my options for building a media NAS server. Sorry for the long
message, but I wanted to provide as much detail as possible about my problem,
to get the best solution. I have bolded sections to spare people who don't
have the time to read all of this.
Option 1: Expand my current dream machine!
I could buy a hardware RAID-5 card for my current system (Vista Ultimate 64
with a Core 2 Extreme 6800 and 2GB of 1066MHz RAM). The Adaptec RAID controller
(model 3805; you can search NewEgg for the information) will cost me nearly
$500, consumes 23W, and supports 8 drives (I have 6). The controller has an
800MHz processor with a large cache of memory, and it supports expandable
RAID-5 arrays! I would also buy a 750W+ PSU (for additional safety and
security). The drives in this machine would sit in shock-absorbing
(noise-reducing) 4-drive bay containers that each occupy 3 slots and include
fans (I have 2 of these), and I would remove an IDE-based Pioneer DVD burner
(1 of 3) because of its flaky performance, given the Intel P965 chipset's lack
of native IDE support and hence the motherboard's JMicron SATA-to-IDE bridge.
I've already installed 4 drives in this machine (on the native motherboard
SATA controller), only to have a fan fail on me within days of the
installation. One of the drives went bad (which may or may not have had to do
with the heat). There are 5mm between these drives, and I would now replace
both fans with higher-RPM ball-bearing fans for added reliability (and more
noise). I would also need to find freeware SMART monitoring software, which at
this time I cannot find for Vista, to warn me of increased temperatures due to
fan failure, higher environmental heat, etc. The only option is commercial
SMART monitoring software (which may not work with the Adaptec RAID adapter).
Option 2: Build a server.
I have a copy of Windows 2003 Server, and I have yet to find out whether it
supports native software expandable RAID-5 arrays. I could also use Linux,
which I have very little experience with but have always wanted to use and
learn.
For either option, I would still need to buy a new power supply for my current
Vista machine (for added reliability). The current PSU is 550W, and with a
power-hungry Radeon, 3 DVD drives, and an X-Fi sound card... my nerves are
getting frayed.
For the server I would buy a cheap motherboard, a cheap processor, and 1GB or
less of RAM. Lastly, I would want a VERY large case. I have an Nvidia 7300 PCI
card that was replaced by an X1950GT in my home theater PC so that I can play
back HD/Blu-ray discs.
The server option may cost a bit more than the $500 for the Adaptec RAID
controller, and it will only work if Linux or Windows 2003 supports my
requirements. The Linux OS would be installed on a 40GB IDE drive (not part of
the array).
What I need is to be able to start with a 6-drive RAID-5 array, then, as my
demand for space increases, be able to plug in more drives and incorporate
them into the array without needing to back up the data. Basically, I need the
software to add the drive(s) to the array, then rebuild the array to
incorporate the new drives while preserving the data on the original array.
QUESTIONS
Since this is a media server and would only be used to serve movies and video
to my two machines, it wouldn't have to be powered up full time (my music
takes less space and will be kept on two separate machines). Is there a way to
considerably lower the power consumption of this server for the 90% of the
time it's not in use?
Can Linux support drive arrays of significant size (4-8 terabytes)?
Can Linux software RAID support RAID-5 expandability, allowing me to increase
the number of disks in the array without needing to back up the media,
recreate the array from scratch, and then copy the backup back to the machine
(something I will be unable to do)?
I know this is a Linux forum, but I figure many of you work with Windows
Server too. If so, does Windows 2003 provide the same support for the
requirements above?
Thanks
GreenJelly
--
View this message in context: http://www.nabble.com/Software-based-SATA-RAID-5-expandable-arrays--tf3937421.html#a11167521
Sent from the linux-raid mailing list archive at Nabble.com.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-17 22:16 Software based SATA RAID-5 expandable arrays? greenjelly
@ 2007-06-17 22:23 ` Justin Piszcz
2007-06-18 21:14 ` Dexter Filmore
2007-06-20 20:52 ` Brad Campbell
2 siblings, 0 replies; 40+ messages in thread
From: Justin Piszcz @ 2007-06-17 22:23 UTC (permalink / raw)
To: greenjelly; +Cc: linux-raid
On Sun, 17 Jun 2007, greenjelly wrote:
>
> [snip: full original message quoted above]
>
> QUESTIONS
> Since this is a media server and would only be used to serve movies and
> video to my two machines, it wouldn't have to be powered up full time (my
> music takes less space and will be kept on two separate machines).
> Is there a way to considerably lower the power consumption of this server
> for the 90% of the time it's not in use?
>
> Can Linux support drive arrays of significant size (4-8 terabytes)?
Yes; there are reports of arrays over 40-60TB.
>
> Can Linux software RAID support RAID-5 expandability, allowing me to increase
> the number of disks in the array without needing to back up the media,
> recreate the array from scratch, and then copy the backup back to the machine
> (something I will be unable to do)?
With a recent kernel, yes.
>
> I know this is a Linux forum, but I figure many of you work with Windows
> Server too. If so, does Windows 2003 provide the same support for the
> requirements above?
I hear Windows 2003 software RAID is extremely slow.
$ /usr/bin/time dd if=/dev/zero of=file bs=1M
dd: writing `file': No space left on device
1070704+0 records in
1070703+0 records out
1122713473024 bytes (1.1 TB) copied, 2565.89 seconds, 438 MB/s
$ time dd if=100gb of=/dev/null bs=1M count=102400
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB) copied, 172.588 seconds, 622 MB/s
Sure, it's cached, and I could use the direct flag, but I do not use my
disks with the cache turned off.
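For what it's worth, a cache-bypassing read would look something like this; a
sketch only, assuming GNU dd (the file name and count are illustrative):

$ dd if=100gb of=/dev/null bs=1M count=102400 iflag=direct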
The most important thing is cooling, IMO; make sure you get a good case.
The dd numbers above are with 10 Raptors and software RAID-5. I'd use a
separate Linux box with md software RAID and keep your dream machine separate.
If you use XFS, make sure you buy yourself a UPS, and then you'll have
yourself a nice stable box.
Justin.
^ permalink raw reply [flat|nested] 40+ messages in thread
* RE: Software based SATA RAID-5 expandable arrays?
@ 2007-06-18 12:46 Daniel Korstad
0 siblings, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-06-18 12:46 UTC (permalink / raw)
To: big_green_jelly_Bean; +Cc: linux-raid
Last I checked, expanding drives (reshaping the RAID) in a RAID set within Windows is not supported.
Significant size is relative, I guess, but 4-8 terabytes will not be a problem in either OS.
I run a RAID-6 (Windows does not support this either, last I checked). I started out with 5 drives and have reshaped to ten drives now. I have a few 250GB drives (the old originals) and many 500GB drives (added and replacement drives) in the set. Once all the old 250GB drives die off and are replaced with 500GB drives, I will grow the RAID to the size of its new smallest disk, 500GB. Grow and reshape are slightly different; both are supported in Linux mdadm, and I have tested both with success.
I too use my set for media, and it is not in use 90% of the time.
I put this line in my /etc/rc.local to put the drives to sleep after a specified period of inactivity:
hdparm -S 241 /dev/sd*
The values for the -S switch are not intuitive; read the man page. The value I use (241) puts the drives into standby (spin-down) after 30 minutes. My OS is on EIDE and my RAID set is all SATA, hence the wildcard for all SATA drives.
I have been running this for a year now with my RAID set. It works great, and I have had no problems with mdadm waiting on drives to spin up when I access them.
The one caveat: be prepared to wait a few moments if the drives are all spun down before you can access your data. For me, with ten drives, it is always less than a minute, usually 30 seconds or so.
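If you want to verify what state a drive is in at any moment, hdparm can
report it (the device name here is illustrative):

hdparm -C /dev/sda   # reports "drive state is: active/idle" or "standby"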
For a filesystem, I use XFS for my large media files.
Dan.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-17 22:16 Software based SATA RAID-5 expandable arrays? greenjelly
2007-06-17 22:23 ` Justin Piszcz
@ 2007-06-18 21:14 ` Dexter Filmore
2007-06-19 8:35 ` David Greaves
2007-06-20 20:52 ` Brad Campbell
2 siblings, 1 reply; 40+ messages in thread
From: Dexter Filmore @ 2007-06-18 21:14 UTC (permalink / raw)
To: greenjelly; +Cc: linux-raid
Why dontcha just cut all the "look how big my ePenis is" chatter and tell us
what you wanna do?
Nobody gives a rat if your ultra1337 sound card needs a 10-megawatt power
supply.
--
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d--(+)@ s-:+ a- C++++ UL++ P+>++ L+++>++++ E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@
b++(+++) DI+++ D- G++ e* h>++ r* y?
------END GEEK CODE BLOCK------
http://www.stop1984.com
http://www.againsttcpa.com
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-18 21:14 ` Dexter Filmore
@ 2007-06-19 8:35 ` David Greaves
2007-06-19 9:14 ` Dexter Filmore
0 siblings, 1 reply; 40+ messages in thread
From: David Greaves @ 2007-06-19 8:35 UTC (permalink / raw)
To: Dexter Filmore; +Cc: greenjelly, linux-raid
Dexter Filmore wrote:
> Why dontcha just cut all the "look how big my ePenis is" chatter and tell us
> what you wanna do?
> Nobody gives a rat if your ultra1337 sound card needs a 10-megawatt power
> supply.
Chill Dexter.
How many faults have you seen on this list attributed to poor PSUs?
How many people whinge about the performance of their controllers/setups 'cos
they find out _after_ they bought them just how naff they are?
Sure he went a bit OTT in the description - but if you'd rather see "Hey dudez,
what do I need for a really quick server" then #linux is good :)
He's clearly new to Linux (and, granted, maybe a bit over-excited by
hardware!), but give the guy a break.
He told us very clearly what he wanted to do in the QUESTIONS bit.
greenjelly (BTW, most people use real names in the Linux kernel part of the
world), feel free to search the archives, read the FAQs, and keep asking
questions :)
And don't think too badly of Dexter, he's usually OK.
David
PS: Dex, I wonder who posted these noobie-sounding questions in April last year:
I'm currently planning my first RAID array.
I intend to go for softraid (budget's the limiting factor); not sure about 5
or 6 yet.
Plan so far: build a RAID-5 from 3 disks, later add a disk and reconf to RAID-6.
Question: is that possible at all? Can a RAID-5 be reconfed to a RAID-6 with
raidreconf?
Next question: how stable is it? Will I likely get away without making backups,
or is there like a 10% chance of failure?
Other precautions advised?
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-19 8:35 ` David Greaves
@ 2007-06-19 9:14 ` Dexter Filmore
0 siblings, 0 replies; 40+ messages in thread
From: Dexter Filmore @ 2007-06-19 9:14 UTC (permalink / raw)
To: David Greaves; +Cc: greenjelly, linux-raid
On Tuesday 19 June 2007 10:35:47 David Greaves wrote:
> Dexter Filmore wrote:
> > Why dontcha just cut all the "look how big my ePenis is" chatter and tell
> > us what you wanna do?
> > Nobody gives a rat if your ultra1337 sound card needs a 10-megawatt
> > power supply.
>
> Chill Dexter.
>
>
> How many faults have you seen on this list attributed to poor PSUs?
> How many people whinge about the performance of their controllers/setups
> 'cos they find out _after_ they bought them just how naff they are?
A 750W supply doesn't increase server stability; that's what redundant PSUs
are for.
Plus: there are sh!tty 750W PSUs out there as well. Numbers mean jack.
>
> Sure he went a bit OTT in the description - but if you'd rather see "Hey
> dudez, what do I need for a really quick server" then #linux is good :)
>
> He's clearly new to linux, (and granted, maybe a bit over-excited by
> hardware!) but give the guy a break.
>
> He very clearly told us what he wanted to do in the QUESTIONs bit.
He could have done that right from the start instead of diving into "Vista",
"X-Fi", "8800", and "Radeon".
> And don't think too badly of Dexter, he's usually OK.
Guess I've had a newbie overdose since migrating my desktop box to Kubuntu.
> David
>
> PS: Dex, I wonder who posted these noobie-sounding questions in April last
> year:
>
> I'm currently planning my first raid array.
> I intend to go for softraid (budget's the limiting factor), not sure about
> 5 or 6 yet.
>
> Plan so far: build a raid5 from 3 disks, later add a disk and reconf to
> raid6. Question: is that possible at all? Can a raid5 be reconfed to a
> raid6 with raidreconf?
> Next Question: how stable is it? Will I likely get away without making
> backups or is there like a 10% chance of failure?
> Other precautions advised?
Yes? What about it? Those were all tech questions about file servers and RAID
setups on Linux. You don't see me going on about how my Radeon breaks the 10k
barrier in 3DMark.
Nuff said.
--
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d--(+)@ s-:+ a- C++++ UL++ P+>++ L+++>++++ E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@
b++(+++) DI+++ D- G++ e* h>++ r* y?
------END GEEK CODE BLOCK------
http://www.stop1984.com
http://www.againsttcpa.com
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-19 13:08 Michael
0 siblings, 0 replies; 40+ messages in thread
From: Michael @ 2007-06-19 13:08 UTC (permalink / raw)
To: Dexter Filmore, David Greaves; +Cc: linux-raid
My 750W PSU is going into my dream machine (an overclocked Core 2 Extreme with 1066MHz memory, lots of optical drives, water cooling, and a Radeon 1900XTX: a high-power application). The 550W Ultra is coming out of that machine and going into the NAS. It's not the perfect solution, since I would prefer a high-efficiency PSU for the NAS, but it is the inexpensive one.
These details are why I tried to be clear and include as much info as possible. My NAS may grow to 20 or more drives, which makes me feel nice and warm about a higher-powered PSU. A higher-spec PSU may also save me from a PSU failure caused by a fan failure (the number one cause of PSU failure).
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-19 13:43 Michael
2007-06-19 14:23 ` Robin Hill
2007-06-19 18:49 ` Daniel Korstad
0 siblings, 2 replies; 40+ messages in thread
From: Michael @ 2007-06-19 13:43 UTC (permalink / raw)
To: Dexter Filmore, David Greaves, Daniel Korstad; +Cc: linux-raid
Look at your sig...
--
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d--(+)@ s-:+ a- C++++ UL++ P+>++ L+++>++++ E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@
b++(+++) DI+++ D- G++ e* h>++ r* y?
------END GEEK CODE BLOCK------
http://www.stop1984.com
http://www.againsttcpa.com
You should get out more, maybe learn to interact with people. It's sad that my post could affect you the way it did. Slinging insults has done nothing to provide a solution for my build and has detracted from the quality of this forum. Maybe someone can recommend a different Linux forum that is better moderated and/or not filled with people so near-sighted that I can't get a solution because things tend to deteriorate into flame wars. Dexter, why must you treat information sharing as something negative?
Am I proud of my gaming/work machine? Absolutely, just like I'm proud of my car and house. Did I share this info to brag to people on a Linux RAID forum? Absolutely not... that's what OCForums is for :) I provided this information to explain why I decided to build a separate machine and to show what resources I have. A great example is the idea of pulling the 550W PSU from my workhorse and placing it in my NAS machine. I was also hoping someone would come up with an unorthodox solution that might save me money.
This forum doesn't seem to be a place for answers, though I really appreciate the instructions from Daniel Korstad (dan@korstad.net) on how to put the drives to sleep. I have created a mail folder for such great responses. I have yet to get info on what build and/or distro I should use, what commands I need in Linux to build an array, what commands I need to expand the array, what type of RAID-5 setup I should use, or what SATA adapter cards I should use. Or any other ideas and suggestions.
Thanks, Dexter, for providing an environment where such info is unobtainable.
GreenJelly
PS: The reason I use my alias is that there are socially inept, super-critical, highly judgmental individuals who see asking for help as a weakness, or who simply like to tear you down because you don't do things the way they feel they should be done.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-19 13:43 Michael
@ 2007-06-19 14:23 ` Robin Hill
2007-06-19 18:49 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Robin Hill @ 2007-06-19 14:23 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: Type: text/plain, Size: 1708 bytes --]
On Tue Jun 19, 2007 at 06:43:27AM -0700, Michael wrote:
> I have yet to get info on what build and/or distro I should use, what
> commands I need in Linux to build an array, what commands I need to
> expand the array, what type of RAID-5 setup I should use, or what SATA
> adapter cards I should use. Or any other ideas and suggestions.
>
You didn't actually _ask_ any of these questions in your original mail,
which may be why you've not had any answers yet! Anyway, if you're
planning on a dedicated server build then you may be best looking at one
of the distros specifically designed for storage servers (e.g. NASLite -
http://www.serverelements.com/). Alternatively, as a Linux newbie, one of
the more user-friendly distributions like Ubuntu would probably be a good
option.
All the Linux RAID configuration is done through the mdadm command; the
manual page should give you a pretty good idea of what can be done and how.
You're best coming back for more details once you know what disks,
controller, distribution, etc. you're going with, as that will influence
the exact command lines to use.
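To give a flavour of it, building and inspecting an array comes down to a
couple of mdadm invocations. This is only a sketch: the device names, level,
and disk count are placeholders for whatever hardware you settle on:

mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]1   # 6-drive RAID-5
mdadm --detail /dev/md0                                            # check geometry and state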
I can't really offer much advice on SATA cards, as I've only used onboard
ports so far. Your best bet is somewhere like www.linux-drivers.org to
check for compatibility, or ask here about specific cards; someone
probably has experience, good or bad, that they can share.
HTH,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-19 14:42 Michael
2007-06-19 21:30 ` Nix
0 siblings, 1 reply; 40+ messages in thread
From: Michael @ 2007-06-19 14:42 UTC (permalink / raw)
To: Dexter Filmore, David Greaves; +Cc: linux-raid
Sadly, Dexter didn't feel like posting his response to the thread... so I will, for it is such an exciting and interesting conclusion to this individual's attitude.
Grow up, man, and thanks for the threat. I will take that into account if anything bad happens to my computer system. Statements like that can get you into trouble even if you have absolutely nothing to do with an incident.
GreenJelly
----- Original Message ----
From: Dexter Filmore <Dexter.Filmore@gmx.de>
To: David Greaves <david@dgreaves.com>
Cc: greenjelly <big_green_jelly_Bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Tuesday, June 19, 2007 4:14:09 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Dude I don't need to be saved and I don't need your f?!*ing advice on how to
lead my life.
And now welcome to my killfile.
^ permalink raw reply [flat|nested] 40+ messages in thread
* RE: Software based SATA RAID-5 expandable arrays?
2007-06-19 13:43 Michael
2007-06-19 14:23 ` Robin Hill
@ 2007-06-19 18:49 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-06-19 18:49 UTC (permalink / raw)
To: Michael, Dexter Filmore; +Cc: linux-raid
I see nothing wrong with background info if it is relevant. I am always interested in learning and am not afraid to ask those green newbie questions; I was there once too. I might not be as green as I once was, but I am not an expert on everything; I am always learning and occasionally still ask green questions. If I never explored outside my comfort zone I would never keep learning. When I have time, I enjoy sharing my experiences with others who are interested.
I think this is a great forum. I have received lots of answers and info, both from searching the archives and from asking directly.
If there are ruffled feathers, I apologize, but I don't care to hear the bashing. Please don't CC the group; I would rather such emails stay between the individuals concerned.
As for looking for more information or answers, I think there are many in the group willing to help out. But if we don't know what is wanted, or don't get a direct question, we may not know what response to give. It is not that the group is ignoring anyone.
Now (rolling up sleeves...):
As for the distro: you could ask 10 Unix/Linux guys and likely get 10 different answers; there are heated debates on this all the time. I have played with a few, not all, so I can only tell you my own experiences. Ubuntu is great for new Linux users; I have used it on one of my laptops and have set it up for my folks. I originally cut my teeth on Red Hat, long before Fedora was spun off from it. Currently I run my home server on Fedora Core 4; the most recent version is Fedora 7 (they dropped the "Core" part). DistroWatch (http://distrowatch.com/) is a good place to look for more info on the different versions. Most of them have several forums available to you if you need help unique to that distro.
To expand the array you will need a recent kernel. RAID-5 reshape has been out for some time, so any current distro will provide what you need. RAID-6 reshape was added recently, in 2.6.21, which was released a few weeks ago; the latest stable kernel is 2.6.21.5. This site has the vanilla kernel: http://www.kernel.org/.
When you pick a distro, it of course uses the Linux kernel, but often a "cooked" version, with certain tweaks or patches applied that the distro maintainers feel are appropriate for their users. Because of this, vanilla kernels are released before they become available through a distro's update manager (and update managers vary between distros as well). Taking a quick look at an update mirror for Fedora 7, I see kernel-2.6.21-1.3228.fc7.i586.rpm, so if you install Fedora 7 and then run "yum update" to freshen all the packages, you will get the kernel required to support RAID-6 reshape as well. Other distros might have 2.6.21 available too; if they don't, they will soon via their update managers. So you should not need to recompile the kernel on a current distro release. I had to, because my version of Fedora is very old and past end of life; no new updates are being released for it. That might be another item to consider: Fedora stops providing updates for versions two releases below the current one. Fedora 7 is the latest, and 6 is still supported; when version 8 is released, 7 will be supported and 6 will be dropped. Fedora has a rapid development timetable, with a couple of releases a year; other distros may (and probably do) have longer life cycles.
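If you want to confirm what you are actually running before attempting a
reshape, check the kernel version (RAID-5 grow arrived around 2.6.17; RAID-6
reshape in 2.6.21, as above):

uname -r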
For building your RAID, there are lots of options and variations; I can tell you the basic steps I used.
I used the available ports on my motherboard and added a couple of PCI cards to get to a 10-disk array. You will have a newer system and will be using PCI-Express cards; for a recommendation there I would have to defer to someone else, as I have not upgraded my own system to PCI-Express yet. Most of the cards with several SATA ports have more features than software RAID needs: they bring their own RAID, which jacks up the cost and is not needed. I too would be curious to hear what others are using, as I will need to update my system at some point.
For my RAID set, I made a partition on each of my SATA drives and marked them as type "Linux raid autodetect" in the fdisk utility... Hmm, wait a minute; here is a link I found via a Google search that closely resembles the steps I took years ago when I built the RAID set I am using:
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch26_:_Linux_Software_RAID
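For reference, that partitioning step boils down to the following interactive
fdisk session (a sketch; /dev/sdf is an illustrative device name):

fdisk /dev/sdf
#  n  - create a new primary partition spanning the disk
#  t  - set the partition type to "fd" (Linux raid autodetect)
#  w  - write the table and exit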
To reshape (add drives to your RAID set) in the future, you first add the new drive to the existing set:
mdadm --add /dev/md0 /dev/sdf1
Next, use the --grow switch to reshape to the new number of drives. The command below assumes you had 3 devices in the set before adding sdf1; now that sdf1 is available to the set, you want raid-devices to be 4. Next time you add a device it would be 5, and so on.
mdadm --grow /dev/md0 --raid-devices=4
The filesystem then needs to be expanded to fill the new space. For ext3 (unmounted; resize2fs expects a forced check first):
fsck.ext3 -f /dev/md0
resize2fs /dev/md0
For XFS, which grows while mounted (the argument is the mount point, not the device):
xfs_growfs /path/to/mountpoint
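While a reshape is running, you can watch its progress through the standard md
interfaces (nothing here is specific to my setup):

cat /proc/mdstat          # shows reshape progress and an ETA
mdadm --detail /dev/md0   # confirms the new raid-devices count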
Cheers,
Dan.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-19 14:42 Michael
@ 2007-06-19 21:30 ` Nix
0 siblings, 0 replies; 40+ messages in thread
From: Nix @ 2007-06-19 21:30 UTC (permalink / raw)
To: Michael; +Cc: Dexter Filmore, David Greaves, linux-raid
On 19 Jun 2007, Michael outgrape:
[regarding `welcome to my killfile']
> Grow up, man, and thanks for the threat. I will take that into
> account if anything bad happens to my computer system.
Read <http://en.wikipedia.org/wiki/Killfile> and learn. All he's saying
is `I am automatically ignoring you'.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-17 22:16 Software based SATA RAID-5 expandable arrays? greenjelly
2007-06-17 22:23 ` Justin Piszcz
2007-06-18 21:14 ` Dexter Filmore
@ 2007-06-20 20:52 ` Brad Campbell
2 siblings, 0 replies; 40+ messages in thread
From: Brad Campbell @ 2007-06-20 20:52 UTC (permalink / raw)
To: greenjelly; +Cc: linux-raid
greenjelly wrote:
> What I need is to be able to start with a 6-drive RAID-5 array, then, as my
> demand for space increases, be able to plug in more drives and incorporate
> them into the array without needing to back up the data. Basically, I need
> the software to add the drive(s) to the array, then rebuild the array to
> incorporate the new drives while preserving the data on the original array.
I've got 2 boxes. One has 14 drives and a 480W PSU; the other has 15 drives
and a 600W PSU. It's not rocket science: put a lot of drives in a box, make
sure you have enough SATA ports and power to go around (really watch your peak
12V consumption on spin-up), and use Linux md. Easy. Oh, but make sure the
drives stay cool!
For a cheap-o home server (which is what I have) I'd certainly not bother with
a dedicated RAID card. You are not even going to need gigabit Ethernet,
really. I've got 15 drives on a single PCI bus; it's as slow as a wet week in
May (in the southern hemisphere), but I'm streaming to 3 head units which
total a combined 5MB/s if I'm lucky. Rebuilds can take up to 10 hours, though.
> QUESTIONS
> Since this is a media server and would only be used to serve movies and
> video to my two machines, it wouldn't have to be powered up full time (my
> music takes less space and will be kept on two separate machines).
> Is there a way to considerably lower the power consumption of this server
> for the 90% of the time it's not in use?
Yes: don't poll for SMART, and spin down the drives when idle (man hdparm).
Use S3 sleep and WOL if you are really clever. (I'm not; my boxes live in a
dedicated server room with its own AC, but that's because I'm nuts.) I also
have over 25k hours on the drives because I don't spin them down; I figure the
extra power is a trade-off for drive life. They've got fewer than 50 spin
cycles on them in over 25k hours.
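A minimal sketch of that spin-down/WOL approach, assuming a WOL-capable NIC on
eth0 with ethtool installed; the device names and MAC address are illustrative:

hdparm -S 241 /dev/sd[b-p]    # standby after 30 minutes idle
ethtool -s eth0 wol g         # arm magic-packet wake-up on the server
echo mem > /sys/power/state   # suspend to RAM (S3) when done
# ...then, from a client on the same LAN, wake it with:
ether-wake 00:11:22:33:44:55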
> Can Linux support drive arrays of significant size (4-8 terabytes)?
Yes, easily (6TB here)
> Can Linux software RAID support RAID-5 expandability, allowing me to increase
> the number of disks in the array without needing to back up the media,
> recreate the array from scratch, and then copy the backup back to the machine
> (something I will be unable to do)?
Yes, but get a cheap UPS at least (it's cheap insurance).
> I know this is a Linux forum, but I figure many of you work with Windows
> Server too. If so, does Windows 2003 provide the same support for the
> requirements above?
Why would you even _ask_ ??
Read the man page for mdadm, then read it again (and a third time). Then
Google for "RAID-5 two drive failure linux" just to familiarise yourself with
the background.
What you are doing has been done before many, many times. There are some
well-written sites out there covering exactly what you want to build, in
great detail.
If you are serious about using Windows, I pity you. Linux (actually the
combination of the kernel md layer and mdadm) makes it so easy that you'd be
nuts to beat your head against the wall with the alternative.
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-21 9:22 Michael
2007-06-21 20:16 ` Richard Scobie
0 siblings, 1 reply; 40+ messages in thread
From: Michael @ 2007-06-21 9:22 UTC (permalink / raw)
To: Brad Campbell; +Cc: linux-raid
Thank you!
Not that I want to, but where did you find a SATA PCI card that fits 15 drives?
The most expensive part of the build has been finding drive controllers... Also, how did you come up with that power requirement? It seems like a lot of power for 29 drives... I will be able to fit nearly that many in the case I'm buying...
I have SATA drives, not PATA, which is a shame because the controllers cost so much... As for Windows, it's just familiar to me... I know it like the back of my hand... BUT it is about time I learned more Linux. I have been getting by on IT departments, simply putting in server change/software-installation requests for MySQL. It's kinda nice... I don't even need to figure out the connection string.
I knew DOS very well, but Linux is frustrating for me... I just don't have anyone to go to for quick answers... especially when I don't know the question or what's wrong... "The computer just hangs randomly" doesn't get very far in forum discussions...
Again, thank you for your insight... I am VERY serious about doing this, and I am running out of time (i.e., disk space) every day.
GreenJelly
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-21 10:40 Michael
0 siblings, 0 replies; 40+ messages in thread
From: Michael @ 2007-06-21 10:40 UTC (permalink / raw)
To: Brad Campbell; +Cc: linux-raid
The power consumption is a lot more than I anticipated. Now I know why those Adaptec cards have staggered spin-up features. I really should look more closely into the new Seagate drives' power requirements. Thank you for the heads-up. Finding a PSU with beefy 12V rails is going to be interesting, though probably well worth it. I never figured these drives would pull so much power. My father runs a server with an array of, I think, around 8 drives, and he does it with a 350W PSU. Some 1000W PSUs come with 4 rails, which may be the perfect solution, even though I will not be pushing 1000W (since the 5V and 3V rails will not be needed).
I was thinking of an Intel G965 motherboard with 6 SATA connections. The G has onboard graphics and would free up a few PCI-Express slots. The 8-drive motherboards often use a JMicron device for the PATA controller. The obvious issue with this is the numerous reports of SATA II cards conflicting with other SATA adapters. I hope I can find a solution.
The limit to the speed of the machine should be the south bridge's ability to handle the load. It is probably around 533MHz, since that seems to be the Core Duo processor's FSB. Hardware arrays are a great solution for people doing web services, data warehouses, and other process-intensive applications that also require many hundreds of simultaneous connections. I may have 4 connections to this box at one time, with only 1 to 3 people pulling data off it.
The output question is an issue when faced with HD programming. An HD disc is usually around 20GB and holds many hours of video and audio (and soundtracks). The problem is that some programs (like PowerDVD) don't support network paths. You can trick them with a mapped network drive; however, I am still experimenting with this and have had some issues. The issues may have been caused by a bad drive that was bringing my machine to its knees. There was no data on that drive, but the failure clearly destroyed Windows' performance to the point of losing control of the mouse.
Multiple smaller PSUs may also be the answer, though I will have to do a product analysis. As for the drive array, I should probably stagger its growth, so that each controller card is filled one at a time before moving on to the next.
Does this all sound OK?
Thanks,
Mike
----- Original Message ----
From: Brad Campbell <brad@wasp.net.au>
To: Michael <big_green_jelly_bean@yahoo.com>
Sent: Thursday, June 21, 2007 5:45:00 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Michael wrote:
> Thank you!
>
> Not that I want to, but where did you find a SATA PCI card that fits 15 drives?
I didn't. I have 2 boxes: one has 4 Promise TX4 cards; the other has 3 Promise
TX4 cards plus the onboard VIA SATA ports.
> The most expensive part of the build has been finding drive controllers... Also, how did you come up with that power requirement? It seems like a lot of power for 29 drives... I will be able to fit nearly that many in the case I'm buying...
I didn't "come up" with the power requirement as such. The first box had a 480W PSU in it and it's
been flawless (if it ain't broke!). The second I built with a 400W PSU that would shutdown 2 seconds
into the spinup cycle so I replaced it with a 600W with Dual 12V rails. That still had an issue as
it tried to power all the drives from a single 12V rail, so I had to open it up and spread the 12V
drive connectors across both rails. Running consumption of the machine after spinup at full load is
about 350W, but it hits about 500W for 10 seconds while it spins up.
> I have SATA drives, not PATA, which is a shame cause the controllers cost so much... As for windows, its just familiar to me... I know it like the back of my hand... BUT... it is about time I learn more of Linux. I have been getting away with IT departments, and simply putting in server change/software installation requests for MySQL Its kinda nice.. I dont even need to figure out the connection string.
Not really; if you are looking at the cheaper end of the market, the
controllers are pretty cost-effective, given that you are not after hardware
RAID. Trust me, I started with a box with 8 PATA drives, and the cabling was a
nightmare.
> I knew DOS very well, but Linux is frustrating for me... I just don't have anyone to go to for quick answers... especially when I don't know the question or what's wrong... "The computer just hangs randomly" doesn't get very far in forum discussions...
Google is the wonder cure for most Linux questions, as is freenode.net.
> Again, thank you for your insight... I am VERY serious about doing this, and I am running out of time (i.e., disk space) every day.
Yeah, me too. I've had to cobble another storage server together to get me
another 500GB until I can get some new drives. I never thought I'd fill 6TB
this quickly.
If you are doing cheap storage arrays, Linux just can't be beaten on the
performance/features/cost ratios.
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-21 9:22 Michael
@ 2007-06-21 20:16 ` Richard Scobie
0 siblings, 0 replies; 40+ messages in thread
From: Richard Scobie @ 2007-06-21 20:16 UTC (permalink / raw)
To: Linux RAID Mailing List
Michael wrote:
> Thank you!
>
> Not that I want to, but where did you find a SATA PCI card that fits 15 drives?
Areca has a few: a range of PCI-X cards that handle up to 24 SATA drives
(ARC-1170), and PCI-e cards that handle up to 24 drives (ARC-1280).
Regards,
Richard
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-06-22 15:12 jahammonds prost
2007-06-23 4:16 ` Brad Campbell
0 siblings, 1 reply; 40+ messages in thread
From: jahammonds prost @ 2007-06-22 15:12 UTC (permalink / raw)
To: Brad Campbell, greenjelly; +Cc: linux-raid
From: Brad Campbell brad@wasp.net.au
> I've got 2 boxes. One has 14 drives and a 480W PSU; the other has 15 drives
> and a 600W PSU. It's not rocket science.
Where did you find reasonably priced cases to hold so many drives? Each of my home servers tops out at 8 data drives, plus a 20GB drive to boot from.
Graham
----- Original Message ----
From: Brad Campbell <brad@wasp.net.au>
To: greenjelly <big_green_jelly_Bean@yahoo.com>
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 20 June, 2007 4:52:38 PM
Subject: Re: Software based SATA RAID-5 expandable arrays?
greenjelly wrote:
> The options I seek are to be able to start with a 6 drive RAID-5
> array, then as my demand for more space increases in the future I want to be
> able to plug in more drives and incorporate them into the array without the
> need to back up the data. Basically I need the software to add the
> drive/drives to the array, then rebuild the array incorporating the new
> drives while preserving the data on the original array.
I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's
not rocket science. Put a lot of drives in a box, make sure you have enough SATA ports and power to
go around (watch your peak 12V consumption on spin-up, really) and use Linux md. Easy. Oh, but make
sure the drives stay cool!
For a cheap-o home server (which is what I have) I'd certainly not bother with a dedicated RAID
card. You are not even going to need gigabit ethernet, really. I've got 15 drives on a single PCI
bus; it's as slow as a wet week in May (in the southern hemisphere), but I'm streaming to 3 head
units which total a combined 5MB/s if I'm lucky. Rebuilds can take up to 10 hours though.
> QUESTIONS
> Since this is a media server, and would only be used to serve movies and
> video to my two machines, it wouldn't have to be powered up full time (my
> music consumes less space and will be contained on two separate machines).
> Is there a way to considerably lower the power consumption of this server
> the 90% of the time it's not in use?
Yes: don't poll for SMART, and spin down the drives when idle (man hdparm). Use S3 sleep and WOL if
you are really clever. (I'm not; my boxes live in a dedicated server room with its own AC, but
that's because I'm nuts.) I also have over 25k hours on the drives because I don't spin them down;
I figure the extra power is a trade-off for drive life. They've got fewer than 50 spin cycles on
them in over 25k hours.
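For example (a minimal sketch; the device names, NIC name and MAC address are placeholders, and
the wakeonlan utility may need installing separately):
hdparm -S 241 /dev/sdb          # spin down after 30 minutes idle (241 = 1 unit of 30 min)
hdparm -y /dev/sdb              # or force an immediate standby
ethtool -s eth0 wol g           # enable magic-packet wake-on-LAN on the NIC
wakeonlan 00:11:22:33:44:55     # then wake the box up from another machine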
> Can Linux support Drive Arrays of Significant Sizes (4-8 terabytes)?
Yes, easily (6TB here)
> Can Linux Software support RAID-5 expandability, allowing me to increase the
> number of disks in the array, without the need to backup the media, recreate
> the array from scratch and then copy the backup to the machine (something I
> will be unable to do)?
Yes, but get a cheap UPS at least (it's cheap insurance).
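As a sketch of what the grow looks like with md (device and mount names are examples; read the
mdadm man page before trying this on real data):
mdadm --add /dev/md0 /dev/sdg1            # the new disk goes in as a spare
mdadm --grow /dev/md0 --raid-devices=7    # reshape 6 -> 7 drives, data preserved
cat /proc/mdstat                          # watch the reshape progress
xfs_growfs /mnt/array                     # then enlarge the filesystem (XFS shown here)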
> I know this is a Linux forum, but I figure many of you guys work with
> Windows Server. If so does Windows 2003 provide the same support for the
> requested requirements above?
Why would you even _ask_ ??
Read the man page for mdadm, then read it again (and a third time). Then google for "Raid-5 two
drive failure linux" just to familiarise yourself with the background.
What you are doing has been done before many, many times. There are some well written sites out
there relating to building exactly what you want to build with great detail.
If you are serious about using windows, I pity you.. Linux (actually a combination of the kernel md
layer and mdadm) makes it so easy you'd be nuts to beat your head against the wall with the alternative.
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
* Re: Software based SATA RAID-5 expandable arrays?
2007-06-22 15:12 jahammonds prost
@ 2007-06-23 4:16 ` Brad Campbell
0 siblings, 0 replies; 40+ messages in thread
From: Brad Campbell @ 2007-06-23 4:16 UTC (permalink / raw)
To: jahammonds prost; +Cc: linux-raid, big_green_jelly_Bean
jahammonds prost wrote:
> From: Brad Campbell brad@wasp.net.au
>
>> I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's
>> not rocket science.
>
> Where did you find reasonably priced cases to hold so many drives? Each of my home servers tops out at 8 data drives, plus a 20GB one to boot from.
For one of them I used a modified CD duplicator case (9 5.25" bays) and for the other a nice tall
tower. All except 4 drives are in Supermicro hotswap bays. Aside from the Supermicro bays (which do
look nice and keep the drives very cool), these machines are chewing gum and duct tape jobs.
http://i10.photobucket.com/albums/a109/ytixelprep/F.jpg
Having said that, they are chewing gum and duct tape jobs that have had downtime of less than 4
hrs/year over the last 3 years.
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
* RE: Software based SATA RAID-5 expandable arrays?
[not found] <944875.74303.qm@web54106.mail.re2.yahoo.com>
@ 2007-07-09 19:31 ` Daniel Korstad
2007-07-11 14:21 ` Bill Davidsen
0 siblings, 1 reply; 40+ messages in thread
From: Daniel Korstad @ 2007-07-09 19:31 UTC (permalink / raw)
To: Michael; +Cc: linux-raid
You have lots of options. This will be a lengthy response, giving just some ideas for some of them...
For my server, I started out with a single drive and later migrated to a RAID 1 mirror (after having to deal with reinstalls following drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get there:
http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
Since you are starting from scratch, it should be easier for you. Most distros have an installer that will guide you through the process. When you get to hard drive partitioning, look for an advanced option, or a "review and modify partition layout" option, or something similar; otherwise the installer might just guess at what you want, and that would not be RAID. In this advanced partition setup you will be able to create your RAID. First make equal-size partitions on both physical drives. For example, carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from that pair and make it your /boot. Do the same for any other partitions you want RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. The separation can be nice: if a user fills /home/foo with crap it will not affect other parts of the OS, and if the mail spool fills up it will not hang the OS. The only problem is determining how big to make them during the install. At a minimum I would do three partitions: /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) live inside the / partition, but this way you don't have to worry about sizing them all correctly.
For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition; tons of options...) Do you need to RAID your swap? Well, I would either RAID it or make a swap file within a RAIDed partition. If you don't, and your system is using swap when it loses the drive holding the swap partition, you might have problems, depending on how important the information on that drive was. Your system might hang.
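If you ever need to build those mirrors by hand rather than through the installer, the commands
look roughly like this (the partition names are assumptions):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1    # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2    # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3    # /
mkswap /dev/md1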
After you go through the install and have a bootable OS running on mdadm RAID, I would test that grub was installed correctly to both physical drives. If grub is not installed to both drives and you later lose the one carrying grub, you will have a system that will not boot, even though the second drive holds a copy of all the files. If that happens, you can recover by booting from a live Linux CD or rescue disk and installing grub manually. For example, say you only had grub installed to hda and it failed: boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive):
grub
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit
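On many distros you can get the same result by running grub-install against each drive, though
with /boot on RAID 1 the manual grub shell method above is often the more reliable route:
grub-install /dev/hda
grub-install /dev/hdd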
You say you are using two 500G drives for the OS. You don't necessarily have to use all the space for the OS. You can make your partitions and throw the leftover space into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives. For example, use 100M for /boot, 200G for / and 2G for swap, then take the remaining space on each drive as a standard partition, put the pair into a logical volume and put ext3 on top, giving you over 500G to play with for non-critical crap.
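A sketch of the logical volume part with LVM (the partition and volume names are made up):
pvcreate /dev/sda4 /dev/sdb4              # the leftover partition on each drive
vgcreate scratch /dev/sda4 /dev/sdb4
lvcreate -l 100%FREE -n stuff scratch
mkfs.ext3 /dev/scratch/stuff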
Why do I use RAID6? For the extra redundancy, and because I have 10 drives in my array.
I have been an advocate of RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array gets above, say, six:
http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
"...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they’re used for mission critical data."
Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
With four drives you would be just fine with a RAID5.
However, I would set up a cron job to run a check every once in a while. Add this to your crontab...
#check for bad blocks once a week (every Mon at 2:30am); if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check > /sys/block/md0/md/sync_action
With this, you will keep hidden bad blocks to a minimum, so when a drive fails you are less likely to be bitten by a hidden bad block during the rebuild.
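You can watch a running check and see whether it turned anything up:
cat /proc/mdstat                          # shows the check progress
cat /sys/block/md0/md/mismatch_cnt        # non-zero means inconsistencies were found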
For your data array, I would make a single partition of type Linux raid autodetect (FD) covering the whole of each physical drive. Then create your RAID:
mdadm --create /dev/md3 -l 5 -n 4 /dev/<data-drive1-partition> /dev/<data-drive2-partition> /dev/<data-drive3-partition> /dev/<data-drive4-partition>
(The /dev/md3 name can be whatever you like and will depend on how many RAID arrays you already have; just use a number not currently in use.)
My filesystem of choice is XFS, but you get to pick your own poison:
mkfs.xfs -f /dev/md3
Mount the device:
mount /dev/md3 /foo
I would edit /etc/fstab to have it mounted automatically at each startup.
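Something like the following makes the array and the mount persistent across reboots (the mount
point is from the example above; on some distros the config file lives at /etc/mdadm/mdadm.conf):
mdadm --detail --scan >> /etc/mdadm.conf            # so the array assembles at boot
echo '/dev/md3  /foo  xfs  defaults  0 0' >> /etc/fstab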
Dan.
----- Original Message -----
From: Michael
Sent: Sun, 7/8/2007 3:54pm
To: Daniel Korstad
Subject: Re: Software based SATA RAID-5 expandable arrays?
Hey Daniel,
Time for business... I've been struggling the last few days setting up the right drive/OS partitioning.
I got two 500GB drives for the OS... figured I would mirror them... Of course 500GB is an insane amount of space for Linux...
I then will RAID my 4 other drives with RAID 5 or 6... (I haven't seen any distros talk about RAID 6, and from Wikipedia it doesn't sound attractive, so why do you use it?)
So how the hell do I partition this so that I can use my space to maximum capacity?
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: big_green_jelly_Bean@yahoo.com
Cc: linux-raid@vger.kernel.org
Sent: Monday, June 18, 2007 8:46:08 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
Last I checked, expanding (reshaping) a RAID set within Windows is not supported.
Significant size is relative I guess, but 4-8 terabytes will not be a problem in either OS.
I run a RAID 6 (Windows does not support this either, last I checked). I started out with 5 drives and have reshaped it to ten drives now. I have a few 250G (old original) drives and many 500G (added and replacement) drives in the set. Once all the old 250G drives die off and have been replaced with 500G drives, I will grow the RAID to the size of its new smallest disk, 500G. Grow and reshape are slightly different operations, both supported in Linux mdadm. I have tested both with success.
I too use my set for media and it is not in use 90% of the time.
I put this line in my /etc/rc.local to put the drives to sleep after a specified number of minutes of inactivity:
hdparm -S 241 /dev/sd*
The values for the -S switch are not intuitive; read the man page. The value I use (241) puts them into standby (spindown) after 30 minutes (values 241-251 mean 1 to 11 units of 30 minutes). My OS is on EIDE and my RAID set is all SATA, hence the splat for all the SATA drives.
I have been running this for a year now with my RAID set. It works great and I have had no problems with mdadm waiting on drives to spinup when I access them.
The one caveat: be prepared to wait a few moments if the drives are all in the spindown state before you can access your data. For me, with ten drives, it is always less than a minute, usually 30 seconds or so.
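You can check whether a given drive has actually spun down with:
hdparm -C /dev/sda     # reports active/idle or standby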
For a filesystem, I use XFS for my large media files.
Dan.
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-10 21:58 jahammonds prost
0 siblings, 0 replies; 40+ messages in thread
From: jahammonds prost @ 2007-07-10 21:58 UTC (permalink / raw)
To: Daniel Korstad, Michael; +Cc: linux-raid
> Why do I use RAID6? For the extra redundancy
I've been thinking about RAID6 too, having been bitten a couple of times.... The only disadvantage I can see at the moment is that you can't convert and grow it... i.e. I can't convert from a 4 drive RAID5 array to a 5 drive RAID6 one when I add an additional drive... I also don't think you can grow a RAID6 array at the moment - I'd want to add additional drives over a few months as they come on sale.... Or am I wrong on both counts?
Graham
* Re: Software based SATA RAID-5 expandable arrays?
2007-07-09 19:31 ` Daniel Korstad
@ 2007-07-11 14:21 ` Bill Davidsen
0 siblings, 0 replies; 40+ messages in thread
From: Bill Davidsen @ 2007-07-11 14:21 UTC (permalink / raw)
To: Daniel Korstad; +Cc: Michael, linux-raid
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response, giving just some ideas for some of them...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I started out with a single drive and later migrated to a RAID 1 mirror (after having to deal with reinstalls following drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get there:
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros have an installer that will guide you through the process. When you get to hard drive partitioning, look for an advanced option, or a "review and modify partition layout" option, or something similar; otherwise the installer might just guess at what you want, and that would not be RAID. In this advanced partition setup you will be able to create your RAID. First make equal-size partitions on both physical drives. For example, carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from that pair and make it your /boot. Do the same for any other partitions you want RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. The separation can be nice: if a user fills /home/foo with crap it will not affect other parts of the OS, and if the mail spool fills up it will not hang the OS. The only problem is determining how big to make them during the install. At a minimum I would do three partitions: /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) live inside the / partition, but this way you don't have to worry about sizing them all correctly.
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition; tons of options...) Do you need to RAID your swap? Well, I would either RAID it or make a swap file within a RAIDed partition. If you don't, and your system is using swap when it loses the drive holding the swap partition, you might have problems, depending on how important the information on that drive was. Your system might hang.
>
>
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
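A sketch of what a RAID-10 swap looks like (device names are assumptions):
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
mkswap /dev/md1
swapon /dev/md1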
> After you go through the install and have a bootable OS running on mdadm RAID, I would test that grub was installed correctly to both physical drives. If grub is not installed to both drives and you later lose the one carrying grub, you will have a system that will not boot, even though the second drive holds a copy of all the files. If that happens, you can recover by booting from a live Linux CD or rescue disk and installing grub manually. For example, say you only had grub installed to hda and it failed: boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive):
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all the space for the OS. You can make your partitions and throw the leftover space into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives. For example, use 100M for /boot, 200G for / and 2G for swap, then take the remaining space on each drive as a standard partition, put the pair into a logical volume and put ext3 on top, giving you over 500G to play with for non-critical crap.
>
> Why do I use RAID6? For the extra redundancy, and because I have 10 drives in my array.
> I have been an advocate of RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array gets above, say, six:
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes; know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they’re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would set up a cron job to run a check every once in a while. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am); if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check > /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, so when a drive fails you are less likely to be bitten by a hidden bad block during the rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here. At the
least, "see the man page" is appropriate.
> For your data array, I would make a single partition of type Linux raid autodetect (FD) covering the whole of each physical drive. Then create your RAID:
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<data-drive1-partition> /dev/<data-drive2-partition> /dev/<data-drive3-partition> /dev/<data-drive4-partition>
> (The /dev/md3 name can be whatever you like and will depend on how many RAID arrays you already have; just use a number not currently in use.)
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device:
> mount /dev/md3 /foo
>
> I would edit /etc/fstab to have it mounted automatically at each startup.
>
> Dan.
>
Other misc comments: mirroring your boot partition onto drives the
BIOS won't boot from is a waste of bytes. If you have more than, say,
four drives fail to function, you probably have a system problem other
than disk. And some BIOS versions will boot a secondary drive if the
primary fails hard, but not if it has a parity or other error, which can
turn into a retry loop (I *must* keep trying to boot). This behavior can
be seen on at least one major server line from a big-name vendor; it's
not just cheap desktops. The solution, ugly as it is, is to use the
firmware "RAID" on the motherboard controller for boot, and for this
reason I have several systems with low-cost small PATA drives mirrored
just for boot (after which they are spun down with hdparm settings).
Really good notes, people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* RE: Software based SATA RAID-5 expandable arrays?
@ 2007-07-11 15:03 Daniel Korstad
2007-07-14 15:49 ` Bill Davidsen
0 siblings, 1 reply; 40+ messages in thread
From: Daniel Korstad @ 2007-07-11 15:03 UTC (permalink / raw)
To: gmitch64; +Cc: linux-raid
That was true until kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived.
I have reshaped (added drives to) my RAID 6 twice now in the past few months with no problems.
You mentioned that as the only disadvantage, but there are other things to consider. The overhead for parity, of course. You can't have a RAID 6 with only three drives, unless you build it with a missing drive and run degraded. Also (my opinion) it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road. When the array has several drives it becomes more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency [(N-2)/N] goes up. And with more drives, the odds of getting hit by a bit error after you lose a drive and are trying to rebuild go up too.
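For example, with N = 10 drives the parity overhead is (2/10)*100 = 20% and the storage
efficiency is 8/10 = 80%; with only N = 4 drives, half the space goes to parity.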
Also, there is a very slight write performance drop on RAID6, since you are calculating both p and q parity.
But for what I use my system for (family digital photos, file storage and media serving) I mostly read data, so the slight write hit doesn't bother me.
I have been using RAID6 with 10 disks for over a year and it has saved me at least once.
As far as converting a RAID6 to RAID5 or RAID4... I've never had a need to do it, but no, probably not.
Dan.
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-11 17:26 jahammonds prost
2007-07-11 19:13 ` Daniel Korstad
0 siblings, 1 reply; 40+ messages in thread
From: jahammonds prost @ 2007-07-11 17:26 UTC (permalink / raw)
To: Daniel Korstad; +Cc: linux-raid
Ahh... guess it's time to upgrade again.... My plan was to start off with 3 drives in a RAID5, slowly grow it to maybe 6 or 7 drives before converting it over to RAID6, and then top it out at 12 drives (all I can fit in the case).... The performance hit isn't going to bother me too much - it's mainly going to be video for my media server for the house...
So... I can expand a RAID6 now, which is good.... But can I change from RAID5 to RAID6 whilst online?
Graham
* RE: Software based SATA RAID-5 expandable arrays?
2007-07-11 17:26 jahammonds prost
@ 2007-07-11 19:13 ` Daniel Korstad
2007-07-11 19:26 ` Daniel Korstad
0 siblings, 1 reply; 40+ messages in thread
From: Daniel Korstad @ 2007-07-11 19:13 UTC (permalink / raw)
To: jahammonds prost; +Cc: linux-raid
Currently, no, you can't.
However, it is on the TODO list:
http://neil.brown.name/blog/20050727143147-003
Maybe by the end of the year; Neil hit his goal of RAID 6 grow for kernel 2.6.21... but Neil states that the RAID 5 to RAID 6 conversion is more complex to implement...
Dan.
* RE: Software based SATA RAID-5 expandable arrays?
2007-07-11 19:13 ` Daniel Korstad
@ 2007-07-11 19:26 ` Daniel Korstad
0 siblings, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-07-11 19:26 UTC (permalink / raw)
To: jahammonds prost; +Cc: linux-raid
And if I were a betting man, I would guess you will need to add a physical drive to execute a RAID5 to RAID6 conversion, to hold the additional parity, even if your current RAID5 is not full of data.
So if your case only holds 12 drives, I would not grow a RAID5 to 12 drives and expect to be able to convert it to RAID6 with those same 12 drives, even if they are not full of data.
But that is just my guess about a feature that does not even exist yet...
Dan.
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-11 20:08 Michael
2007-07-11 23:29 ` Nix
0 siblings, 1 reply; 40+ messages in thread
From: Michael @ 2007-07-11 20:08 UTC (permalink / raw)
To: Bill Davidsen, Daniel Korstad; +Cc: linux-raid
How would I be able to generate a report and email it to myself, based on this cron job's disk check results, any important findings in the SMART disk information, and/or a notification if a drive fails?
I am running SuSE, and the check program is not available. I like SuSE since it is easy to use, supports all of my hardware right on install, and has the auto-update features that I enjoy. I have instead seen mention of tune2fs (which is available), though I am not sure if that is of use on a RAID-5 array.
Thanks
Michael Parisi
----- Original Message ----
From: Bill Davidsen <davidsen@tmr.com>
To: Daniel Korstad <dan@korstad.net>
Cc: Michael <big_green_jelly_bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response, giving some ideas for just some of the options...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I started out with a single drive. I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get there:
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros have an installer that will guide you through the process. When you get to hard drive partitioning, look for an "advanced" option, a "review and modify partition layout" option, or something similar; otherwise the installer might just guess at what you want, and that would not be RAID. In this advanced partition setup you will be able to create your RAID. First, make equal-size partitions on both physical drives. For example, carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from that pair of partitions and make it your /boot. Do this again for the other partitions you want RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. It can be nice to have that separation, in case a user fills /home/foo with crap (it will not affect other parts of the OS) or the mail spool fills up (it will not hang the OS). The only problem is determining how big to make them during the install. At a minimum, I would do three partitions: /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) live in the / partition, but this way you don't have to worry about sizing them all correctly.
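> Just to give the flavor, building one such mirror by hand would look something like this (a sketch; the partition names are examples, and the installer normally does this part for you):
> mdadm --create /dev/md0 -l 1 -n 2 /dev/sda1 /dev/sdb1
> mkfs.ext3 /dev/md0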
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition; tons of options...) Do you need to RAID your swap? Well, I would RAID it, or make a swap file within a RAIDed partition. If you don't, your system is using swap, and you lose a drive that holds swap, you might have issues, depending on how important the information on the failed drive was. Your system might hang.
>
>
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
> After you go through the install and have a bootable OS running on mdadm RAID, I would test to make sure grub was installed correctly on both physical drives. If grub is not installed on both drives, and down the road you lose the one drive that had grub, you will have a system that will not boot even though it has a second drive with a copy of all the files. If this happens, you can recover by booting with a live Linux CD or rescue disk and manually installing grub, too. For example, say you only had grub installed to hda and it failed; boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive):
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all that space for the OS. You can make your partitions, take the leftover space, and throw it into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives. For example, say you use 100M for /boot, 200G for /, and 2G for swap: take the rest, make a standard partition from the remaining space on each drive, put the pair into a logical volume, and format it ext3, giving you over 500G to play with for non-critical crap.
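> With LVM, that could look something like this (a sketch; the partition names and volume names here are made up):
> pvcreate /dev/sda4 /dev/sdb4
> vgcreate scratch /dev/sda4 /dev/sdb4
> lvcreate -l 100%FREE -n stuff scratch
> mkfs.ext3 /dev/scratch/stuff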
>
> Why do I use RAID6? For the extra redundancy, and because I have 10 drives in my array.
> I have been an advocate of RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array is above, say, six:
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes; know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they’re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would make a cron job to run the check command every once in a while. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am); if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, and when a drive fails you are less likely to be bitten by a hidden bad block during the rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here. At the
least "see the man page" is appropriate.
> For your data array, I would make one partition of type Linux raid autodetect (FD), covering the whole drive, on each physical drive. Then create your RAID:
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition>
> (The name /dev/md3 can be whatever you want and will depend on how many RAID arrays you already have; just use a number not currently in use.)
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device:
> mount /dev/md3 /foo
>
> I would edit your /etc/fstab to have it automounted at each startup.
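> For example, with the mount point /foo from above, an fstab line along these lines:
> /dev/md3   /foo   xfs   defaults   0 0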
>
> Dan.
>
Other misc comments: mirroring your boot partition onto drives which the
BIOS won't boot from is a waste of bytes. If you have more than, say,
four drives fail to function, you probably have a system problem other
than disk. And some BIOS versions will boot a secondary drive if the
primary fails hard, but not if it has a parity or other error, which can
enter a retry loop (I *must* keep trying to boot). This behavior can be
seen on server hardware from at least one big-name vendor; it's not just
cheap desktops. The solution, ugly as it is, is to use the firmware
"RAID" on the motherboard controller for boot; I have several systems
with low-cost small PATA drives mirrored just for boot (after which they
are spun down with hdparm settings) for this reason.
Really good notes; people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-11 20:12 jahammonds prost
0 siblings, 0 replies; 40+ messages in thread
From: jahammonds prost @ 2007-07-11 20:12 UTC (permalink / raw)
To: Daniel Korstad; +Cc: linux-raid
Yeah... I kinda suspected that it would need to be a new drive being added - which is fine by me. I'm in the planning stages for building my next home server...
One way to do it (with what we have at the moment) would be to have enough drives set up to build an empty RAID6 array, move the data over, then destroy the old array, and grow the new one out with the recovered disks.... Ick... but that works.
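In mdadm terms I imagine that migration looking roughly like this (untested, device names made up, and the grow step assumes the RAID6 reshape support mentioned above):
mdadm --create /dev/md4 -l 6 -n 4 /dev/sd[efgh]1
(copy the data across and verify it, then...)
mdadm --stop /dev/md3
mdadm --add /dev/md4 /dev/sda1 /dev/sdb1
mdadm --grow /dev/md4 --raid-devices=6 --backup-file=/root/md4-grow.bak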
Graham
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: jahammonds prost <gmitch64@yahoo.com>
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 11 July, 2007 3:26:51 PM
Subject: RE: Software based SATA RAID-5 expandable arrays?
And if I were a betting man, I would guess you will need to add a physical drive to execute a RAID5-to-RAID6 conversion, to hold the additional parity, even if your current RAID5 is not full of data.
So if your case only holds 12 drives, I would not grow your RAID5 to 12 drives and expect to be able to convert to RAID6 with those same 12 drives, even if they are not full of data.
But that is just my guess about a feature that does not even exist yet...
Dan.
----- Original Message -----
From: linux-raid-owner@vger.kernel.org on behalf of Daniel Korstad
Sent: Wed, 7/11/2007 2:14pm
To: jahammonds prost
Cc: linux-raid@vger.kernel.org
Subject: RE: Software based SATA RAID-5 expandable arrays?
Currently, no, you can't.
However, it is on the TODO list:
http://neil.brown.name/blog/20050727143147-003
Maybe by the end of the year; Neil hit his goal on the RAID 6 grow for kernel 2.6.21... But Neil states that the RAID 5 to RAID 6 conversion is more complex to implement...
Dan.
----- Original Message -----
From: jahammonds prost
Sent: Wed, 7/11/2007 12:26pm
To: Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
Ahh... guess it's time to upgrade again.... My plan was to start off with 3 drives in a RAID5, slowly grow it up to maybe 6 or 7 drives before converting it over to a RAID6, and then top it out at 12 drives (all I can fit in the case).... The performance hit isn't going to bother me too much - it's mainly going to be for video for my media server for the house...
So... I can expand a RAID6 now, which is good.... But can I change from RAID5 to RAID6 whilst online?
Graham
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-07-11 20:08 Michael
@ 2007-07-11 23:29 ` Nix
0 siblings, 0 replies; 40+ messages in thread
From: Nix @ 2007-07-11 23:29 UTC (permalink / raw)
To: Michael; +Cc: Bill Davidsen, Daniel Korstad, linux-raid
On 11 Jul 2007, Michael stated:
> I am running Suse, and the check program is not available
`check' isn't a program. The line suggested has a typo: it should
be something like this:
30 2 * * Mon echo check > /sys/block/md0/md/sync_action
The only program that line needs is `echo' and I'm sure you've got
that. (You also need to have sysfs mounted at /sys, but virtually
everyone has their systems set up like that nowadays.)
(obviously you can check more than one array: just stick in other lines
that echo `check' into some other mdN at some other time of day.)
--
`... in the sense that dragons logically follow evolution so they would
be able to wield metal.' --- Kenneth Eng's colourless green ideas sleep
furiously
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-12 22:42 Michael
2007-07-13 3:54 ` Bill Davidsen
2007-07-13 15:22 ` Daniel Korstad
0 siblings, 2 replies; 40+ messages in thread
From: Michael @ 2007-07-12 22:42 UTC (permalink / raw)
To: Bill Davidsen, Daniel Korstad; +Cc: linux-raid
SuSE uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set cron jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out "permission denied" even though I have su'd or am logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine: how do you send an email when the check runs, when SMART finds something, or when a RAID drive fails? And how do you auto-repair if the check fails?
These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt rather than X Windows, and I will leave the machine in the corner of my room, hopefully not to be touched for as long as possible.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-07-12 22:42 Michael
@ 2007-07-13 3:54 ` Bill Davidsen
2007-07-13 15:22 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Bill Davidsen @ 2007-07-13 3:54 UTC (permalink / raw)
To: Michael; +Cc: Daniel Korstad, linux-raid
Michael wrote:
> SuSE uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set cron jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out "permission denied" even though I have su'd or am logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine: how do you send an email when the check runs, when SMART finds something, or when a RAID drive fails? And how do you auto-repair if the check fails?
>
>
The command is echo! As in
echo check >/sys/block/md0/md/sync_action
Read the man page on what happens if you echo "repair" instead of
"check" there, which might be more what you want to do. Only you can decide.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 40+ messages in thread
* RE: Software based SATA RAID-5 expandable arrays?
2007-07-12 22:42 Michael
2007-07-13 3:54 ` Bill Davidsen
@ 2007-07-13 15:22 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-07-13 15:22 UTC (permalink / raw)
To: big.green.jelly.bean; +Cc: davidsen, linux-raid
To run it manually:
echo check >> /sys/block/md0/md/sync_action
Then you can check the status with:
cat /proc/mdstat
Or, to watch it continually if you want (kind of boring though :) ):
watch cat /proc/mdstat
This will refresh every 2 seconds.
In my original email I suggested using a crontab so you don't need to remember to do this every once in a while.
Run (I did this as root):
crontab -e
This will let you edit your crontab. Now paste this command in there:
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
If you want, you can add comments. I like to comment my stuff since I have lots of things in mine; just make sure the lines have '#' at the front so your system knows they are comments and not commands it should run:
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
After you have put this in your crontab, write and quit with this command:
:wq
It should come back with this:
[root@gateway ~]# crontab -e
crontab: installing new crontab
Now you can look at your cron table (without editing) with this:
crontab -l
It should return something like this, depending on whether you added comments and how you scheduled your command:
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
For more info on crontab and the syntax for times (I just did a Google search and grabbed the first couple of links...):
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
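And on your question about email alerts: one option (just a sketch, with a placeholder address) is mdadm's own monitor mode. Put a line like MAILADDR you@example.com in /etc/mdadm.conf, or run something like:
mdadm --monitor --scan --daemonise --mail=you@example.com
and mdadm will mail you when it sees events such as a failed drive.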
Cheers,
Dan.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-13 16:48 Michael
2007-07-13 18:18 ` Bill Davidsen
2007-07-13 18:23 ` Daniel Korstad
0 siblings, 2 replies; 40+ messages in thread
From: Michael @ 2007-07-13 16:48 UTC (permalink / raw)
To: Daniel Korstad; +Cc: davidsen, linux-raid
RESPONSE
I had everything working, but it is evident that when I installed SuSE
the first time, check and repair were not included in the package :( I
did not use ">>", I used ">", as was incorrectly stated in many of the
documents I followed.
The thing that made me suspect check and repair weren't part of SuSE was
that typing "check" or "repair" at the command prompt produced nothing
but a message saying there was no such command. In addition, man check
and man repair were also missing.
BROKEN!
I did an auto-update of the SuSE machine, which ended up replacing the
kernel. The update added the new entries to the boot choices, but the
mount information was not transferred, and SuSE also deleted the
original kernel boot setup. When SuSE looked at the drives individually,
it found that none of them were recognizable. So when I woke up this
morning and rebooted the machine after the update, I received the errors
and was dumped to a basic prompt with limited ability to do anything. I
know I need to manually remount the drives, but it is going to be a
challenge since I have not done this before. The answer to this question
is that I either have to change distros (which I am tempted to do) or
fix the current one. Please do not bother providing any solutions, for I
simply have to RTFM (which I haven't had time to do).
I think I am going to set my machines up again: the first two drives
with identical boot partitions, yet not mirrored. I can then manually
run a "tree" copy to update the second drive as I grow the system, after
successful and needed updates. This would give me a fallback after any
update, with a simple swap of the SATA cables from the first boot drive
to the second. I am assuming this will work. I can then RAID-6 (or 5)
the setup and recopy my files (yes, I haven't deleted them, because I am
not confident in my ability with Linux yet). Hopefully I will be able to
simply remount the existing 4 drives, because they are a simple RAID 5
array.
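(For the tree copy I am picturing something like this, untested, with a
made-up mount point for the second drive:
rsync -aHx --delete / /mnt/boot2/
though I still need to read up on it.)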
SUSE's COMPLETE FAILURES
This frustration with SuSE, the lack of a simple reliable update
utility, and the failures I experienced have discouraged me from using
SuSE at all. It has some amazing tools that keep me from constantly
looking up documentation, posting to forums, or going to IRC, but the
unreliable upgrade process is a deal breaker for me. It is simply too
much work to manually update everything. This project had a simple
goal, which was to provide an easy and cheap solution for an unlimited
NAS service.
SUPPORT
In addition, SuSE's IRC help channel is among the worst I have
encountered. The level of support is often very good, but the level of
harassment, flames, and simply childish behavior overwhelms almost any
attempt at providing support. I have no problem giving back to the
community when I learn enough to do so, but I will not be mocked for my
inability to understand a new and very in-depth system. In fact, I tend
to go to the wonderful Gentoo IRC for my answers. That channel is
amazing, the people patient and encouraging, and the level of knowledge
the best I have experienced. It has, outside the original incident, been
an amazing resource. I feel highly confident asking questions about RAID
here, because I know you guys are actually RUNNING the kind of systems I
am attempting to build.
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: big.green.jelly.bean <big_green_jelly_bean@yahoo.com>
Cc: davidsen <davidsen@tmr.com>; linux-raid <linux-raid@vger.kernel.org>
Sent: Friday, July 13, 2007 11:22:45 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
To run it manually:
echo check >> /sys/block/md0/md/sync_action
Then you can check the status with:
cat /proc/mdstat
Or, to watch it continually if you want (kind of boring though :) ):
watch cat /proc/mdstat
This will refresh every 2 seconds.
In my original email I suggested using a crontab so you don't need to remember to do this every once in a while.
Run (I did this as root):
crontab -e
This will let you edit your crontab. Now paste this command in there:
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
If you want, you can add comments. I like to comment my stuff since I have lots of things in mine; just make sure the lines have '#' at the front so your system knows they are comments and not commands it should run:
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
After you have put this in your crontab, write and quit with this command:
:wq
It should come back with this:
[root@gateway ~]# crontab -e
crontab: installing new crontab
Now you can look at your cron table (without editing) with this:
crontab -l
It should return something like this, depending on whether you added comments and how you scheduled your command:
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
For more info on crontab and the syntax for times (I just did a Google search and grabbed the first couple of links...):
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
Cheers,
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Thursday, July 12, 2007 5:43 PM
To: Bill Davidsen; Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
SuSE uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set cron jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out "permission denied" even though I have su'd or am logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine: how do you send an email when the check runs, when SMART finds something, or when a RAID drive fails? And how do you auto-repair if the check fails?
These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt rather than X Windows, and I will leave the machine in the corner of my room, hopefully not to be touched for as long as possible.
----- Original Message ----
From: Bill Davidsen <davidsen@tmr.com>
To: Daniel Korstad <dan@korstad.net>
Cc: Michael <big_green_jelly_bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response, giving some ideas for just some of the options...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I started out with a single drive. I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get there:
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros have an installer that will guide you through the process. When you get to hard drive partitioning, look for an "advanced" option, a "review and modify partition layout" option, or something similar; otherwise the installer might just guess at what you want, and that would not be RAID. In this advanced partition setup you will be able to create your RAID. First, make equal-size partitions on both physical drives. For example, carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from that pair of partitions and make it your /boot. Do this again for the other partitions you want RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. It can be nice to have that separation, in case a user fills /home/foo with crap (it will not affect other parts of the OS) or the mail spool fills up (it will not hang the OS). The only problem is determining how big to make them during the install. At a minimum, I would do three partitions: /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) live in the / partition, but this way you don't have to worry about sizing them all correctly.
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition; tons of options...) Do you need to RAID your swap? Well, I would RAID it, or make a swap file within a RAIDed partition. If you don't, your system is using swap, and you lose a drive that holds swap, you might have issues, depending on how important the information on the failed drive was. Your system might hang.
>
>
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
> After you go through the install and have a bootable OS running on mdadm RAID, I would test to make sure grub was installed correctly on both physical drives. If grub is not installed on both drives, and down the road you lose the one drive that had grub, you will have a system that will not boot even though it has a second drive with a copy of all the files. If this happens, you can recover by booting with a live Linux CD or rescue disk and manually installing grub, too. For example, say you only had grub installed to hda and it failed; boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive):
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all that space for the OS. You can make your partitions, take the leftover space, and throw it into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives. For example, say you use 100M for /boot, 200G for /, and 2G for swap: take the rest, make a standard partition from the remaining space on each drive, put the pair into a logical volume, and format it ext3, giving you over 500G to play with for non-critical crap.
>
> Why do I use RAID6? For the extra redundancy, and I have 10 drives in my array.
> I have been an advocate for RAID 6, especially with ever-increasing drive capacities, when the number of drives in the array is above, say, six;
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes, know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they¢re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would set up a cron job for the check command to run every once in a while. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am);
> #if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check > /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, and when a drive fails, you won't likely be bitten by hidden bad blocks during a rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here. At the
least "see the man page" is appropriate.
> For your data array, I would make one partition of type Linux raid autodetect (FD) spanning the whole of each physical drive. Then create your raid.
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition> <---the /dev/md3 can be what you want and will depend on how many other previous raid arrays you have, so long as you use a number not currently used.
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device :
> mount /dev/md3 /foo
>
> I would edit your /etc/fstab to have it automounted for each startup.
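The fstab entry would look something like this (mount point and options are just an example; see fstab(5)):
/dev/md3 /foo xfs defaults 0 0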
>
> Dan.
>
Other misc comments: mirroring your boot partition on drives which the
BIOS won't use is a waste of bytes. If you have more than, say, four
drives fail to function, you probably have a system problem other than
disk. And some BIOS versions will boot a secondary drive if the primary
fails hard, but not if it has a parity or other error, which can enter a
retry loop (I *must* keep trying to boot). This behavior can be seen on
server hardware from at least one big-name vendor; it's not just
cheap desktops. The solution, ugly as it is, is to use the firmware
"RAID" on the motherboard controller for boot, and I have several
systems with low-cost small PATA drives in a mirror just for boot (after
which they are spun down with hdparm settings) for this reason.
Really good notes, people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: Software based SATA RAID-5 expandable arrays?
2007-07-13 16:48 Michael
@ 2007-07-13 18:18 ` Bill Davidsen
2007-07-13 18:23 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Bill Davidsen @ 2007-07-13 18:18 UTC (permalink / raw)
To: Michael; +Cc: Daniel Korstad, linux-raid
Michael wrote:
> RESPONSE
>
> I had everything working, but it is evident that when I installed SuSe
> the first time, check and repair were not included in the package :( I
> did not use ">>", I used ">", as was incorrectly stated in
> much of the documentation I followed.
>
>
>
Doesn't matter, either will work and most people just use ">"
> The thing that made me suspect check and repair weren't part of SuSe was
> that typing "check" or "repair" at the command prompt produced nothing
> other than a response stating there was no such
> command. In addition, man check and man repair were also missing.
>
>
>
One more time, "check" and "repair" are not commands, they are character
strings! You are using the echo command to write those strings into the
control interface in the sysfs area. If you type exactly what people
have sent you that will work.
> BROKEN!
>
> I did an auto update of the SuSe machine, which ended up replacing the
> kernel. It added the new entries to the boot choices, but the mount
> information was not transferred. SuSe also deleted the original kernel
> boot setup. When SuSe looked at the drives individually, it found
> that none of them was recognizable. Therefore, when I woke up this
> morning and rebooted the machine after the update, I received the
> errors and was then dumped to a basic prompt with limited ability to do
> anything. I know I need to manually remount the drives, but it's going
> to be a challenge since I did not do this in the past. The answer to
> this question is that I either have to change distros (which I am
> tempted to do) or fix the current distro. Please do not bother
> providing any solutions, for I simply have to RTFM (which I haven't had
> time to do).
>
>
>
> I think I am going to re-set up my machines: the first two drives with
> identical boot partitions, yet not mirrored. I can then manually
> run a "tree" copy that would update my second drive as I grow the
> system, after successful and needed updates. This would then
> allow me a fallback after any update, by simply swapping the SATA
> drive cables from the first boot drive to the second. I am assuming
> this will work. I can then RAID-6 (or 5) in the setup and recopy my files
> (yes, I haven't deleted them, because I am not confident in my ability
> with Linux yet). Hopefully I will just be able to remount these 4 drives,
> because they're a simple raid 5 array.
>
>
>
> SUSE's COMPLETE FAILURES
>
> This frustration with SuSe, the lack of a simple reliable update
> utility, and the failures I experienced have discouraged me from using
> SuSe at all. It's got some amazing tools that keep me from constantly
> looking up documentation, posting to forums, or going to IRC, but the
> unreliable upgrade process is a deal breaker for me. It's simply too
> much work to manually update everything. This project had a simple
> goal, which was to provide an easy and cheap solution for an unlimited
> NAS service.
>
>
>
> SUPPORT
>
> In addition, SuSe's IRC help channel is among the worst I have
> encountered. The level of support is often very good, but the level of
> harassment, flames, and simple childish behavior overcomes almost any
> attempt at providing any level of support. I have no problem giving
> back to the community when I learn enough to do so, but I will not be
> mocked for my inability to understand a new and very in-depth system.
> In fact, I tend to go to the wonderful Gentoo IRC for my answers. The
> IRC is amazing, the people patient and encouraging, and the level of
> knowledge is the best I have experienced. It has been, outside the
> original incident, an amazing resource. I feel highly
> confident asking questions about RAID here, because I know you guys are
> actually RUNNING the kind of systems I am attempting to build.
>
> ----- Original Message ----
> From: Daniel Korstad <dan@korstad.net>
> To: big.green.jelly.bean <big_green_jelly_bean@yahoo.com>
> Cc: davidsen <davidsen@tmr.com>; linux-raid <linux-raid@vger.kernel.org>
> Sent: Friday, July 13, 2007 11:22:45 AM
> Subject: RE: Software based SATA RAID-5 expandable arrays?
>
> To run it manually;
>
> echo check >> /sys/block/md0/md/sync_action
>
> then you can check the status with;
>
> cat /proc/mdstat
>
> Or to continually watch it, if you want (kind of boring though :) )
>
> watch cat /proc/mdstat
>
> This will refresh every 2 sec.
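(watch defaults to a two-second interval; if that is too chatty, something like watch -n 60 cat /proc/mdstat refreshes once a minute instead.)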
>
> In my original email I suggested using a crontab so you don't need to remember to do this every once in a while.
>
> Run (I did this in root);
>
> crontab -e
>
> This will allow you to edit your crontab. Now paste this command in there;
>
> 30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
>
> If you want, you can add comments. I like to comment my stuff since I have lots of entries in mine; just make sure you have '#' at the front of those lines so your system knows each is just a comment and not a command it should run;
>
> #check for bad blocks once a week (every Mon at 2:30am)
> #if bad blocks are found, they are corrected from parity information
>
> After you have put this in your crontab, write and quit with this command;
>
> :wq
>
> It should come back with this;
> [root@gateway ~]# crontab -e
> crontab: installing new crontab
>
> Now you can look at your cron table (without editing) with this;
>
> crontab -l
>
> It should return something like this, depending on whether you added comments and how you scheduled your command;
>
> #check for bad blocks once a week (every Mon at 2:30am)
> #if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
>
> For more info on crontab and syntax for times (I just did a google and grabbed the first couple links...);
> http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
> http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
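For reference, the five time fields in that crontab line read, left to right, per standard crontab(5) syntax: minute (30), hour (2), day of month (*), month (*), day of week (Mon).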
>
> Cheers,
> Dan.
>
> -----Original Message-----
> From: Michael [mailto:big_green_jelly_bean@yahoo.com]
> Sent: Thursday, July 12, 2007 5:43 PM
> To: Bill Davidsen; Daniel Korstad
> Cc: linux-raid@vger.kernel.org
> Subject: Re: Software based SATA RAID-5 expandable arrays?
>
> SuSe uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set them up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out a permission denied even though I am su'd or logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine. How do you send an email when a check fails, when SMART complains, or when a RAID drive fails? How do you auto repair if the check fails?
>
> These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt and not X Windows, and I will leave it in the corner of my room, hopefully not to be touched for as long as possible.
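On the alerting question, a minimal sketch (mail address and device name are hypothetical): mdadm has a monitor mode that can mail you on failure events, and smartd can do the same for SMART trouble;
mdadm --monitor --scan --mail root@localhost --daemonise
and in /etc/smartd.conf a line such as
/dev/sda -a -m root@localhost
See mdadm(8) and smartd.conf(5) for the details; a MAILADDR line in /etc/mdadm.conf achieves the same as --mail.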
>
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* RE: Software based SATA RAID-5 expandable arrays?
2007-07-13 16:48 Michael
2007-07-13 18:18 ` Bill Davidsen
@ 2007-07-13 18:23 ` Daniel Korstad
1 sibling, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-07-13 18:23 UTC (permalink / raw)
To: big.green.jelly.bean; +Cc: davidsen, linux-raid
I can't speak to the SuSe issues, but I believe there is some confusion about the packages and command syntax.
So hang on, we are going for a ride, step by step...
Check and repair are not packages per se.
You should have a command called echo.
If you run this;
echo 1
You should get a 1 echoed back at you.
For example;
[root@gateway]# echo 1
1
Or anything else you want;
[root@gateway]# echo check
check
Now all we are doing with this is redirecting with the ">>" to another location, /sys/block/md0/md/sync_action
The difference between a double >> and a single > is that >> will append to the end of the file and a single > will replace the contents of the file with the value.
For example;
I will create a file called foo;
[root@gateway tmp]# vi foo
In this file I add two lines of text, foo; then I write and quit with :wq
Now I will take a look at the file I just made with my vi editor...
[root@gateway tmp]# cat foo
foo
foo
Great, now I run my echo command to send another value to it.
First I use the double >> to just append;
[root@gateway tmp]# echo foo2 >> foo
Now I take another look at the file;
[root@gateway tmp]# cat foo
foo
foo
foo2
So, I have my first two text lines, with the third line "foo2" appended.
Now I do this again but use just the single > to replace the file with a value.
[root@gateway tmp]# echo foo3 > foo
Then I look at it again;
[root@gateway tmp]# cat foo
foo3
Ahh, all the other lines are gone and now I just have foo3.
So, > replaces and >> appends.
How does this affect your /sys/block/md0/md/sync_action file? As it turns out, it does not matter.
Think of proc and sys (/proc and /sys) as pseudo file systems: real-time, memory-resident file systems that track the processes running on your machine and the state of your system.
So first let's go to /sys/block/
Then I will list its contents;
[root@gateway ~]# cd /sys/block/
[root@gateway block]# ls
dm-0 dm-3 hda md1 ram0 ram11 ram14 ram3 ram6 ram9 sdc sdf sdi
dm-1 dm-4 hdc md2 ram1 ram12 ram15 ram4 ram7 sda sdd sdg
dm-2 dm-5 md0 md3 ram10 ram13 ram2 ram5 ram8 sdb sde sdh
This will be different for you, since your system will have different hardware and settings; again, it is a pseudo file system. The dm entries are my logical volumes, and you might have more or fewer SATA drives (the sda, sdb, ...); these were created when I booted the system. If I add another SATA drive, another sdj will be created automatically for me.
Depending on how many raid devices you have, they are listed here too (I have four: /boot, swap, /, and my RAID6 data, i.e. md0, md1, md2, md3).
So let's go into one. My swap RAID, md1, is small, so let's go to that one and test this out;
[root@gateway md1]# ls
dev holders md range removable size slaves stat uevent
Let's go deeper;
[root@gateway md1]# cd /sys/block/md1/md/
[root@gateway md]# ls
chunk_size dev-hdc1 mismatch_cnt rd0 suspend_lo sync_speed
component_size level new_dev rd1 sync_action sync_speed_max
dev-hda1 metadata_version raid_disks suspend_hi sync_completed sync_speed_min
Now let's look at sync_action;
[root@gateway md]# cat sync_action
idle
That is the pseudo file that represents the current state of my RAID md1.
So let's run that echo command and then check the state of the RAID;
[root@gateway md]# echo check > sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[============>........] resync = 62.7% (65664/104320) finish=0.0min speed=65664K/sec
So it is in the resync state, and if there are bad blocks they will be corrected from parity.
Now once it is done, let's check that sync_action file again.
[root@gateway md]# cat sync_action
idle
Now remember we used the single redirect, so our echo command replaced the value with the text "check". Once it was done with the resync, my system changed the value back to "idle".
What about the double ">>"? Well, it appends to the file, but it has the same overall effect...
[root@gateway md]# echo check >> sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[=========>...........] resync = 49.0% (52096/104320) finish=0.0min speed=52096K/sec
When it is done the value goes back to idle;
[root@gateway md]# cat sync_action
idle
So, > or >> does not matter here. And the command you need is echo.
Manipulating the pseudo files in /proc is similar.
Say, for example, that for security I don't want my box to respond to pings (1 is for true and 0 is for false);
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
In this case, you want the single > because you want to replace the current value with 1, not the >> which would append.
There is also another pseudo file for turning your Linux box into a router;
echo 1 > /proc/sys/net/ipv4/ip_forward
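One caveat worth adding: values echoed into /proc/sys do not survive a reboot. On distros that read /etc/sysctl.conf, the persistent equivalent is a line like
net.ipv4.ip_forward = 1
which can be loaded immediately with sysctl -p.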
As for SuSe updating your kernel, removing your original one, and breaking your box by dropping you to a limited shell on boot-up... I can't help you much there. I don't have SuSe, but as I understand it, it is a good distro. In my current distro, Fedora, you can tell the update manager not to update the kernel. Also, Fedora keeps your old kernel by default, so if there is an issue you can select it in the grub boot menu. I believe Ubuntu is similar. I bet you could configure SuSe to do the same.
I hope that clears up some confusion and good luck.
Dan.
* Re: Software based SATA RAID-5 expandable arrays?
2007-07-11 15:03 Daniel Korstad
@ 2007-07-14 15:49 ` Bill Davidsen
0 siblings, 0 replies; 40+ messages in thread
From: Bill Davidsen @ 2007-07-14 15:49 UTC (permalink / raw)
To: Daniel Korstad; +Cc: gmitch64, linux-raid
Daniel Korstad wrote:
>
> That was true until kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived.
>
> I have reshaped (added additional drives) to my RAID 6 twice now with no problems in the past few months.
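A minimal sketch of such a reshape, assuming a new-enough kernel and mdadm as above, with hypothetical names (/dev/md3 as the array, /dev/sde1 as the new partition, XFS mounted at /foo):
mdadm --add /dev/md3 /dev/sde1
mdadm --grow /dev/md3 --raid-devices=5
xfs_growfs /foo
The --add makes the disk a spare, --grow reshapes onto it (this can take many hours), and xfs_growfs afterwards expands the filesystem into the new space. A current backup beforehand is still prudent.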
>
> You mentioned that as the only disadvantage. There are other things to consider. The overhead for parity, of course. You can't have a RAID 6 with only three drives unless you build it with a missing drive and run degraded. Also (my opinion), it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road. When you have an array with several drives, it becomes more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency increases [(N-2)/N]. And with more drives, the odds of getting hit with a bit error while rebuilding after losing a drive increase.
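To make that arithmetic concrete: with N=4, RAID 6 gives up (2/4)*100 = 50% of raw capacity to parity (efficiency (4-2)/4 = 50%); with N=10 it gives up only 20% (efficiency 8/10 = 80%).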
>
> Also, there is a very slight performance drop for write speeds on RAID6 since you are calculating p and q parity.
>
>
I would expect (and see) a fairly substantial drop in write performance.
With RAID-5, only the old parity and the old data chunk need to be read
on a data change. Then several XORs are done and the new data and new
parity written. With RAID-6, I believe all the data in the stripe
must be read to calculate the q parity.
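Counting I/Os makes the gap plain (a rough model that ignores caching and full-stripe writes): a small RAID-5 write costs 2 reads + 2 writes (old data and old parity in, new data and new parity out), while a RAID-6 small write, if the implementation recomputes parity from the whole stripe as described, must read the other N-3 data chunks of an N-drive array and write 3 (data, P, and Q).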
> But for what I use my system for (family digital photos, file storage, and a media server), I mostly read data and am not bothered by the slight performance hit on writes.
>
> I have been using RAID6 with 10 disks for over a year, and it has saved me at least once.
>
> As far as converting the RAID6 to RAID5 or RAID4... I've never had a need to do this, but no, probably not.
Agree, for many things the write performance is not an issue, while the
reliability is. Backups are still desirable, of course.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* RE: Software based SATA RAID-5 expandable arrays?
[not found] <1914474980.1184590115562.JavaMail.root@gateway.korstad.net>
@ 2007-07-16 14:23 ` Daniel Korstad
0 siblings, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-07-16 14:23 UTC (permalink / raw)
To: Michael; +Cc: linux-raid
You will learn a lot by building your own system, and it will allow you to do more with the box, such as running other services, if you want.
However, again if you are still having problems with distro selection, configuration and commands, here is another NAS install solution I stumbled on.
http://www.openfiler.com
They appear to have taken a Fedora distro and remade it into their own. They also use the mdadm packages.
I have not played with this, but if I had to choose, I would use this one, since I have had more experience with mdadm as opposed to what FreeNAS is using.
Their version of mdadm is not the very latest, however. That won't affect you unless you want to be able to grow your RAID, in which case you will need to update it.
https://www.openfiler.com/community/forums/viewtopic.php?id=741
Oh, and they do support creating RAID6 arrays
http://www.openfiler.com/screenshots/shots/RAID_Mgmt3.png
Just giving you more options.
Dan.
----- Original Message -----
From: Daniel Korstad
Sent: Mon, 7/16/2007 7:48am
To: Michael
Subject: RE: Software based SATA RAID-5 expandable arrays?
Something I ran across a year ago.
http://www.freenas.org/index.php?option=com_versions&Itemid=51
I played with it for a day or so and it looked impressive. The project is still very much alive, and they just released a new version a couple of days ago.
The caveat, or the reason I did not use this, is that I use my Linux box for so many other things (web server, Asterisk (VoIP), Chillispot, VMware Server, firewall, ...).
If you go this route, you will pretty much dedicate your box to just a NAS function. The project is an ISO OS you download and install. This greatly simplifies things, but it ties you down a bit.
After it is built, clients connect to it via several different options you can configure: CIFS (this is Windows file sharing, or Samba), FTP, NFS, RSYNCD, SSHD, Unison, AFP.
It also supports hard disk standby time, and advanced power management for your drives.
However, if that is all you really want (a NAS) and you are having issues with other Linux distros... this is a pretty simple way to get up and running with a NAS. Nice web interface for all the configuration.
Other things to consider: I don't think it has RAID6, or at least it did not the last time I played with it a year ago. And I think the code is different from mdadm, so you would be looking to their forums for help if you had issues.
Also, here is the manual for you..
http://www.freenas.org/downloads/docs/user-docs/FreeNAS-SUG.pdf
Cheers,
Dan.
----- Original Message -----
From: linux-raid-owner@vger.kernel.org on behalf of Daniel Korstad
Sent: Fri, 7/13/2007 1:24pm
To: big.green.jelly.bean
Cc: davidsen ; linux-raid
Subject: RE: Software based SATA RAID-5 expandable arrays?
I can't speak for SuSe issues but I believe there is some confusion on the packages and command syntax.
So hang on, we are going for a ride, step by step...
Check and repair are not packages per say.
You should have a package called echo.
If you run this;
echo 1
Should get a 1 echoed back at you.
For example;
[root@gateway]# echo 1
1
Or anything else you want;
[root@gateway]# echo check
check
Now all we are doing with this is redirecting with the ">>" to another location, /sys/block/md0/md/sync_action
The difference between a double >> and a single > is the >> will append it to the end and the single > will replace the contents of the file with the value.
For example;
I will create a file called foo;
[root@gateway tmp]# vi foo
In this file I add two lines of text, foo, than I will write and quit :wq
Now I will take a look at the file I just made with my vi editor...
[root@gateway tmp]# cat foo
foo
foo
Great, now I run my echo command to send another value to it.
First I use the double >> to just append;
[root@gateway tmp]# echo foo2 >> foo
Now I take another look at the file;
[root@gateway tmp]# cat foo
foo
foo
foo2
So, I have my first two text lines the third line "foo2" appended.
Now I do this again but use just the single > to replace the file with a value.
[root@gateway tmp]# echo foo3 > foo
Than I look at it again;
[root@gateway tmp]# cat foo
foo3
Ahh, all the other lines are gone and now I just have foo3.
So, > replaces and >> appends.
How does this affect your /sys/block/md0/md/sync_action file? As it turns out, it does not matter.
Think of the proc and sys (/proc and /sys) as psuedo file system is a real time, memory resident file system that tracks the processes running on your machine and the state of your system.
So first lets go to /sys/block/
Than I will list its contents;
[root@gateway ~]# cd /sys/block/
[root@gateway block]# ls
dm-0 dm-3 hda md1 ram0 ram11 ram14 ram3 ram6 ram9 sdc sdf sdi
dm-1 dm-4 hdc md2 ram1 ram12 ram15 ram4 ram7 sda sdd sdg
dm-2 dm-5 md0 md3 ram10 ram13 ram2 ram5 ram8 sdb sde sdh
This will be different for you since your system will have different hardware and settings, again a pseudo file system. The dm stuff are my logical volumes and you might have more or less sata drives, the sda, sdb, ... these were created when I boot the system. If I add another sata drive, another sdj will be created automatically for me.
So depending on how many raid devices you have (I have four, /boot, swa, /, and my RAID6 data, (md0, md1, md2, md3)) they are listed here too.
So lets go into one, my swap RAID, md1, is small so let go to that one and test this out;
[root@gateway md1]# ls
dev holders md range removable size slaves stat uevent
Lets go deeper,
[root@gateway md1]# cd /sys/block/md1/md/
[root@gateway md]# ls
chunk_size dev-hdc1 mismatch_cnt rd0 suspend_lo sync_speed
component_size level new_dev rd1 sync_action sync_speed_max
dev-hda1 metadata_version raid_disks suspend_hi sync_completed sync_speed_min
Now lets look at sync_action;
[root@gateway md]# cat sync_action
idle
That is the pseudo file the represents the current state of my RAID md1.
So lets run that echo command and than lets check the state of the RAID;
[root@gateway md]# echo check > sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[============>........] resync = 62.7% (65664/104320) finish=0.0min speed=65664K/sec
So it is in resync state and if there are bad blocks they will be correct from parity.
Now once it is done, lets check that sync_action file again.
[root@gateway md]# cat sync_action
idle
Now remember we used the single redirect, so we replace the value with the text of "check" with our echo command. Once it was done with the resync, my system changed the value back to "idle".
What about the double ">>" well they append to the file but it will have the over all same effect...
[root@gateway md]# echo check >> sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[=========>...........] resync = 49.0% (52096/104320) finish=0.0min speed=52096K/sec
When it is done the value goes back to idle;
[root@gateway md]# cat sync_action
idle
So, > or >> does not matter here. And the command you need is echo.
Manipulating the pseudo files in /proc are similar.
Say for example, for security, I don't want my box to respond to pings (1 is for true and 0 is for false),
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
In this case, you want the single > because you want to replace the current value to 1 and not the >> for append.
Also another pseudo file for turning you linux box into a router;
echo 1 > /proc/sys/net/ipv4/ip_forward
As for SuSe updating your kernel, removing your original one and breaking your box by dropping you to a limited shell on boot up.. I can't help you much there. I don't have SuSe but as I understand, they are a good distro. In my current distro, Fedora, you can tell the update manager to not update the kernel. Also in Fedora, it will keep your old kernel by default so if there was an issue, you can select to go back to it in the grub boot up menu. I believe Ubuntu is similar. I bet you could configure SuSe to do the same.
I hope that clears up some confusion and good luck.
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Friday, July 13, 2007 11:48 AM
To: Daniel Korstad
Cc: davidsen; linux-raid
Subject: Re: Software based SATA RAID-5 expandable arrays?
RESPONSE
I had everything working, but it is evident that when I installed SuSe
the first time check and repair where not included in the package:( I
did not use the ">>" I used ">", as was incorrectly stated in
many documentations I set up.
The thing that made me suspect check and repair wasn't part of sues was
the failure of "check" or "repair" typed at the command prompt to
respond in any kind other then a response that stated their was no
command. In addition man check and man repair was also missing.
BROKEN!
I did an auto update of the SuSe machine, which ended up replacing the
kernel. They added the new entries to the boot choices but the mount
information was not transfered. SuSe also deleted the original kernel
boot setup. When suse looked at the drives individually they found
that none of them was recognizable. Therefor when I woke up this
morning and rebooted the machine after the update, I received the
errors and then dumps me to a basic prompt with limited ability to do
anything. I know I need to manually remount the drives, but its going
to be a challenge since I did not do this in the past. The answer to
this question is that I either have to change distro's (which I am
tempted to do) or fix the current distro. Please do not bother
providing any solutions for I simply have to RTFM (which I haven't had
time to do).
I think I am going to reset up my machines. The first two drives with
identical boot partitions, yet not mirror them. I can then manually
run a "tree" copy that would update my second drive as I grow the
system, and after successfull and needed updates. This would then
allow me a fall back after any updates, and with simply swapping SATA
drive cables from the first boot drive too the second. I am assuming
this will work. I then can RAID-6 (or 5) in the setup, recopy my files
(yes I haven't deleted them because I am not confident in my ability
with Linux yet.). Hopefully I will just simply remount these 4 drives
because there a simple raid 5 array.
SUSE's COMPLETE FAILURES
This frustration with SuSe, the lack of a simple reliable update
utility and the failures I experience has discouraged me from using
SuSe at all. Its got some amazing tools that help me from constantly
looking up documentation, posting to forums, or going to IRC, but the
unreliable upgrade process is a deal breaker for me. Its simply to
much work to manually update everything. This project had a simple
goal, which was to provide an easy and cheap solution to an unlimited
NAS service.
SUPPORT
In addition, SuSe's IRC help channel is among the worst I have
encountered. The level of support is often very good, but the level of
harassment, flames and simple childish behavior overcomes almost any
attempt at providing any level of support. I have no problem giving
back to the community when I learn enough to do so, but I will not be
mocked for my inability to understand a new and very in depth system.
In fact, I tend to goto the wonderful gentoo irc for my answers. The
IRC is amazing, the people patient and encouraging, the level of
knowledge is the best I have experienced. This resource, outside the
original incident, has been an amazing resource. I feel highly
confident asking questions about RAID here, because I know you guys are
actually RUNNING systems that I am attempting to do.
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: big.green.jelly.bean <big_green_jelly_bean@yahoo.com>
Cc: davidsen <davidsen@tmr.com>; linux-raid <linux-raid@vger.kernel.org>
Sent: Friday, July 13, 2007 11:22:45 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
To run it manually;
echo check >> /sys/block/md0/md/sync_action
than you can check the status with;
cat /proc/mdstat
Or to continually watch it, if you want (kind of boring though :) )
watch cat /proc/mdstat
This will refresh ever 2sec.
In my original email I suggested to use a crontab so you don't need to remember to do this every once in a while.
Run (I did this in root);
crontab -e
This will allow you to edit you crontab. Now past this command in there;
30 2 * * Mon echo >> check /sys/block/md0/md/sync_action
If you want you can add comments, I like to comment my stuff since I have lots of stuff in mine, just make sure you have '#' in the front of the lines so your system knows it is just a comment and not a command it should run;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
After you have put this in your crontab, write and quit with this command;
:wq
It should come back with this;
[root@gateway ~]# crontab -e
crontab: installing new crontab
Now you can look at your cron table (without editing) with this;
crontab -l
It should return something like this, depending if you added comments or how you scheduled your command;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
For more info on crontab and syntax for times (I just did a google and grabbed the first couple links...);
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
Cheers,
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Thursday, July 12, 2007 5:43 PM
To: Bill Davidsen; Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
SuSe uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set the jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out a permission denied even though I am su or logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine. How do you send an email when a check fails, when SMART reports trouble, or when a RAID drive fails? How do you auto repair if the check fails?
These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt instead of X Windows, and I will leave the machine in the corner of my room, hopefully not needing attention for as long as possible.
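(From what I have read so far, mdadm itself should be able to do the RAID-failure mail; something like the following, with my address substituted, though I have not yet verified it on my setup. Put a mail address in /etc/mdadm.conf;
MAILADDR michael@example.com
then run the monitor as a daemon;
mdadm --monitor --scan --daemonise --delay=1800
which is supposed to mail that address on events like Fail and DegradedArray. The SMART mail would be smartd's job, which is still on my todo list.)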
----- Original Message ----
From: Bill Davidsen <davidsen@tmr.com>
To: Daniel Korstad <dan@korstad.net>
Cc: Michael <big_green_jelly_bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response and will give just some ideas for just some of the options...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I had started out with a single drive. I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get me there;
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros will have an installer that will guide you through the process. When you get to hard drive partitioning, look for an advanced option or a review-and-modify-partition-layout option or something similar, otherwise it might just make a guess at what you want, and that would not be RAID. In this advanced partition setup, you will be able to create your RAID. First you make equal-size partitions on both physical drives. For example, first carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 with each of these partitions and then make this your /boot. Do this again for the other partitions you want to have RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. It can be nice to have separation, in case a user fills /home/foo with crap and this will not affect other parts of the OS, or if the mail spool fills up, it will not hang the OS. The only problem is determining how big to make them during the install. At a minimum, I would do three partitions; /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) are in the / partition, but this way you don't have to worry about sizing them all correctly.
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition, tons of options...) Do you need to RAID your swap? Well, I would RAID it or make a swap file within a RAID partition. If you don't, and your system is using swap, and you lose a drive that has swap information on it, you might have issues depending on how important the information on the failed drive was. Your system might hang.
>
>
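A minimal sketch of the swap-file alternative mentioned above, with the size and path assumed;
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
Add a line like "/swapfile swap swap defaults 0 0" to /etc/fstab to make it permanent.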
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
> After you go through the install and have a bootable OS that is running on mdadm RAID, I would test it to make sure grub was installed correctly to both physical drives. If grub is not installed to both drives, and down the road you lose the one drive that has grub, you will have a system that will not boot even though it has a second drive with a copy of all the files. If this were to happen, you can recover by booting with a bootable linux CD or recovery disk and manually installing grub to the surviving drive. For example, say you only had grub installed to hda and it failed; boot with a live linux CD and type (assuming /dev/hdd is the surviving second drive);
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all the space for the OS. You can make your partitions, take the leftover space, and throw it into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity from both drives. For example, you use 100M for /boot, 200G for /, and 2G for swap. Take the rest, make a partition from the remaining space on each drive, put them in a logical volume, and format it ext3, giving you over 500G to play with for non-critical crap.
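A rough sketch of that leftover-space volume with LVM, device names assumed (sda4 and sdb4 standing in for the leftover partitions);
pvcreate /dev/sda4 /dev/sdb4
vgcreate spare /dev/sda4 /dev/sdb4
lvcreate -l 100%FREE -n scratch spare
mkfs.ext3 /dev/spare/scratch
Remember this volume has no redundancy; one bad drive takes the whole thing with it.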
>
> Why do I use RAID6? For the extra redundancy, and because I have 10 drives in my array.
> I have been an advocate for RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array is above, say, six;
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes; know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they¢re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would make a cron for the command to run every once in awhile. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am)
> #if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, and when a drive fails, you won't likely be bitten by hidden bad blocks during a rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here. At the
least, "see the man page" is in order.
> For your data array, I would make one partition of type Linux raid autodetect (FD) covering the whole of each physical drive. Then create your raid.
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition> <---the /dev/md3 can be what you want and will depend on how many other previous raid arrays you have, so long as you use a number not currently used.
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device :
> mount /dev/md3 /foo
>
> I would edit your /etc/fstab to have it automounted for each startup.
>
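Such an fstab entry might look like this, with the mount point assumed from the example above;
/dev/md3   /foo   xfs   defaults   0 0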
> Dan.
>
Other misc comments: mirroring your boot partition on drives which the
BIOS won't use is a waste of bytes. If you have more than, say, four
drives fail to function, you probably have a system problem other than
disk. And some BIOS versions will boot a secondary drive if the primary
fails hard, but not if it has a parity or other error, which can put
the BIOS into a retry loop (I *must* keep trying to boot). This
behavior can be seen on at least one major server line from a big-name
vendor; it's not just cheap desktops. The solution, ugly as it is, is
to use the firmware "RAID" on the motherboard controller for boot, and
for this reason I have several systems with low-cost small PATA drives
in a mirror just for boot (after which they are spun down with hdparm
settings).
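The spin-down itself is a single hdparm setting; for example (device name assumed; 242 means a one-hour idle timeout in hdparm's -S encoding);
hdparm -S 242 /dev/hda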
Really good notes, people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-07-16 17:34 Michael
2007-07-16 19:29 ` Daniel Korstad
0 siblings, 1 reply; 40+ messages in thread
From: Michael @ 2007-07-16 17:34 UTC (permalink / raw)
To: Daniel Korstad; +Cc: linux-raid
Due to the nature of the data I am storing, RAID-6 is not really worth the extra safety and security, though it would be if I could get another 6 drives. Maybe then I can convert my RAID 5 into a RAID 6.
As for openfiler, it is a great, simple package that provides all the features I need, except that they don't include the latest kernel. That means my motherboard isn't supported. (frown) I have installed Fedora, after all the hassle of SuSe, and am currently setting it up so that it can be my main OS. It seems great, just some of the GUI-based admin tools are cryptic in their function.
I have mirrored my boot drive, which means I have to check that the second drive can be booted from. This is my todo list (though it does fail to mention SMART!); the times on the crontab have to be corrected.
------------------------------------------
SAMBA
http://www.redhatmagazine.com/2007/06/26/how-to-build-a-dirt-easy-home-nas-server-using-samba/
Repair
http://www.issociate.de/board/post/391115/Observations_of_a_failing_disk.html
http://www.issociate.de/board/post/443666/how_to_deal_with_continuously_getting_more_errors?.html
Crontab (Weekly Repair Schedule)
http://www.unixgeeks.org/security/newbie/unix/cron-1.html
http://www.ss64.com/bash/crontab.html
crontab -e
30 3 * * Mon echo check /sys/block/md3/md/sync_action
30 4 * * Mon echo check /sys/block/md0/md/sync_action
30 4 * * Mon echo check /sys/block/md1/md/sync_action
30 4 * * Mon echo check /sys/block/md2/md/sync_action
Check Boot Info on Mirrored Drive
After you go through the install and have a bootable OS that is running
on mdadm RAID, I would test it to make sure grub was installed
correctly to both physical drives. If grub is not installed to
both drives, and down the road you lose the one drive that has grub,
you will have a system that will not boot even though it has a second
drive with a copy of all the files. If this were to happen, you can
recover by booting with a bootable linux CD or recovery disk and
manually installing grub to the surviving drive. For example, say you
only had grub installed to hda and it failed; boot with a live linux CD
and type (assuming /dev/hdd is the surviving second drive);
grub
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit
System Report Email Mutt
http://www.mutt.org/
http://linux.die.net/man/8/auditd.conf
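(For the missing SMART item, the plan is a smartd entry along these lines, one line per drive in /etc/smartd.conf, with the address and device names to be adjusted; I still need to verify the syntax;
/dev/sda -a -o on -S on -s (S/../.././02) -m michael@example.com
i.e. monitor everything, enable offline testing and attribute autosave, run a short self-test daily at 02:00, and mail me on trouble. Then make sure the smartd service starts at boot.)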
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: Michael <big_green_jelly_bean@yahoo.com>
Cc: linux-raid@vger.kernel.org
Sent: Monday, July 16, 2007 10:23:23 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
You will learn a lot by building your own system, and it will let you do more with it, as far as other services go, if you want.
However, again if you are still having problems with distro selection, configuration and commands, here is another NAS install solution I stumbled on.
http://www.openfiler.com
They appear to have taken a Fedora distro and remade it into their own. They also use the mdadm packages.
I have not played with this, but if I had to choose, I would use this one, since I have had more experience with mdadm as opposed to what freenas is using.
Their version of mdadm is not the very latest, however. That won't affect you unless you want to be able to grow your RAID; then you will need to update it.
https://www.openfiler.com/community/forums/viewtopic.php?id=741
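For reference, growing an array with a recent mdadm looks roughly like this (device names assumed, and back up first; check the man page for your version);
mdadm --add /dev/md3 /dev/sde1
mdadm --grow /dev/md3 --raid-devices=5
Once the reshape finishes, grow the file system on top of it, e.g. with xfs_growfs if you went with XFS.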
Oh, and they do support creating RAID6 arrays
http://www.openfiler.com/screenshots/shots/RAID_Mgmt3.png
Just giving you more options.
Dan.
----- Original Message -----
From: Daniel Korstad
Sent: Mon, 7/16/2007 7:48am
To: Michael
Subject: RE: Software based SATA RAID-5 expandable arrays?
Something I ran across a year ago.
http://www.freenas.org/index.php?option=com_versions&Itemid=51
I played with it for a day or so and it looked impressive. The project is still very much alive, and they just released a new version a couple days ago.
The caveat, or the reason I did not use it, is that I use my Linux box for so many other things (web server, Asterisk (VoIP), Chillispot, VMware Server, firewall, ...).
If you go this route, you will pretty much dedicate your box to just a NAS function. The project is an ISO OS you download and install. This greatly simplifies things, but it ties you down a bit.
After it is built, clients connect to it via several different options you can configure: CIFS (this is windows file sharing, or samba), FTP, NFS, RSYNCD, SSHD, Unison, AFP.
It also supports hard disk standby time and advanced power management for your drives.
However, if that is all you really want (a NAS) and you are having issues with other Linux distros... this is a pretty simple way to get up and running with a NAS. Nice web interface for all the configuration.
Other things to consider: I don't think it has RAID6, or it did not last time I played with it a year ago. And I think the code is different from mdadm. So you would be looking toward their forums for help if you had issues.
Also, here is the manual for you..
http://www.freenas.org/downloads/docs/user-docs/FreeNAS-SUG.pdf
Cheers,
Dan.
----- Original Message -----
From: linux-raid-owner@vger.kernel.org on behalf of Daniel Korstad
Sent: Fri, 7/13/2007 1:24pm
To: big.green.jelly.bean
Cc: davidsen ; linux-raid
Subject: RE: Software based SATA RAID-5 expandable arrays?
I can't speak for SuSe issues but I believe there is some confusion on the packages and command syntax.
So hang on, we are going for a ride, step by step...
Check and repair are not packages, per se.
You should have a command called echo.
If you run this;
echo 1
you should get a 1 echoed back at you.
For example;
[root@gateway]# echo 1
1
Or anything else you want;
[root@gateway]# echo check
check
Now all we are doing with this is redirecting with the ">>" to another location, /sys/block/md0/md/sync_action
The difference between a double >> and a single > is the >> will append it to the end and the single > will replace the contents of the file with the value.
For example;
I will create a file called foo;
[root@gateway tmp]# vi foo
In this file I add two lines of text, foo, then I write and quit with :wq
Now I will take a look at the file I just made with my vi editor...
[root@gateway tmp]# cat foo
foo
foo
Great, now I run my echo command to send another value to it.
First I use the double >> to just append;
[root@gateway tmp]# echo foo2 >> foo
Now I take another look at the file;
[root@gateway tmp]# cat foo
foo
foo
foo2
So, I have my first two text lines, with the third line "foo2" appended.
Now I do this again but use just the single > to replace the file with a value.
[root@gateway tmp]# echo foo3 > foo
Then I look at it again;
[root@gateway tmp]# cat foo
foo3
Ahh, all the other lines are gone and now I just have foo3.
So, > replaces and >> appends.
How does this affect your /sys/block/md0/md/sync_action file? As it turns out, it does not matter.
Think of /proc and /sys as pseudo file systems: real-time, memory-resident file systems that track the processes running on your machine and the state of your system.
So first let's go to /sys/block/
Then I will list its contents;
[root@gateway ~]# cd /sys/block/
[root@gateway block]# ls
dm-0 dm-3 hda md1 ram0 ram11 ram14 ram3 ram6 ram9 sdc sdf sdi
dm-1 dm-4 hdc md2 ram1 ram12 ram15 ram4 ram7 sda sdd sdg
dm-2 dm-5 md0 md3 ram10 ram13 ram2 ram5 ram8 sdb sde sdh
This will be different for you, since your system will have different hardware and settings; again, it is a pseudo file system. The dm entries are my logical volumes, and you might have more or fewer sata drives (the sda, sdb, ...); these were created when I booted the system. If I add another sata drive, another sdj will be created automatically for me.
Depending on how many raid devices you have (I have four: /boot, swap, /, and my RAID6 data (md0, md1, md2, md3)), they are listed here too.
So let's go into one. My swap RAID, md1, is small, so let's go to that one and test this out;
[root@gateway md1]# ls
dev holders md range removable size slaves stat uevent
Let's go deeper;
[root@gateway md1]# cd /sys/block/md1/md/
[root@gateway md]# ls
chunk_size dev-hdc1 mismatch_cnt rd0 suspend_lo sync_speed
component_size level new_dev rd1 sync_action sync_speed_max
dev-hda1 metadata_version raid_disks suspend_hi sync_completed sync_speed_min
Now let's look at sync_action;
[root@gateway md]# cat sync_action
idle
That is the pseudo file that represents the current state of my RAID md1.
So let's run that echo command and then check the state of the RAID;
[root@gateway md]# echo check > sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[============>........] resync = 62.7% (65664/104320) finish=0.0min speed=65664K/sec
So it is in the resync state, and if there are bad blocks they will be corrected from parity.
Once it is done, let's check that sync_action file again.
[root@gateway md]# cat sync_action
idle
Now remember we used the single redirect, so our echo command replaced the value with the text "check". Once it was done with the resync, my system changed the value back to "idle".
What about the double ">>"? It appends to the file, but it has the same overall effect...
[root@gateway md]# echo check >> sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[=========>...........] resync = 49.0% (52096/104320) finish=0.0min speed=52096K/sec
When it is done the value goes back to idle;
[root@gateway md]# cat sync_action
idle
So, > or >> does not matter here. And the command you need is echo.
Manipulating the pseudo files in /proc is similar.
Say, for example, for security I don't want my box to respond to pings (1 is for true and 0 is for false);
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
In this case, you want the single > because you want to replace the current value with 1, and not the >> for append.
Also, another pseudo file, for turning your linux box into a router;
echo 1 > /proc/sys/net/ipv4/ip_forward
As for SuSe updating your kernel, removing your original one, and breaking your box by dropping you to a limited shell on boot up... I can't help you much there. I don't have SuSe, but as I understand it, they are a good distro. In my current distro, Fedora, you can tell the update manager not to update the kernel. Also, Fedora will keep your old kernel by default, so if there is an issue you can select the old one from the grub boot menu. I believe Ubuntu is similar. I bet you could configure SuSe to do the same.
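If I remember right, the Fedora knob is just an exclude line in yum's configuration, something like;
exclude=kernel*
in /etc/yum.conf, which keeps the updater from touching kernel packages until you remove the line.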
I hope that clears up some confusion and good luck.
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Friday, July 13, 2007 11:48 AM
To: Daniel Korstad
Cc: davidsen; linux-raid
Subject: Re: Software based SATA RAID-5 expandable arrays?
RESPONSE
I had everything working, but it is evident that when I installed SuSe
the first time, check and repair were not included in the package :( I
did not use ">>", I used ">", as was incorrectly stated in
many of the documents I followed.
The thing that made me suspect check and repair weren't part of SuSe
was that typing "check" or "repair" at the command prompt produced
nothing but a message stating there was no such command. In addition,
man check and man repair were also missing.
BROKEN!
I did an auto update of the SuSe machine, which ended up replacing the
kernel. It added the new entries to the boot choices, but the mount
information was not transferred. SuSe also deleted the original kernel
boot setup. When SuSe looked at the drives individually, it found
that none of them was recognizable. Therefore, when I woke up this
morning and rebooted the machine after the update, I received the
errors and was dumped to a basic prompt with limited ability to do
anything. I know I need to manually remount the drives, but it's going
to be a challenge since I did not do this in the past. The answer to
this question is that I either have to change distros (which I am
tempted to do) or fix the current distro. Please do not bother
providing any solutions, for I simply have to RTFM (which I haven't had
time to do).
I think I am going to set up my machines again. The first two drives with
identical boot partitions, but not mirrored. I can then manually
run a "tree" copy to update the second drive as I grow the
system, after each successful and needed update. That would give me a
fallback after any update, by simply swapping the SATA cable from the
first boot drive to the second. I am assuming
this will work. I can then add the RAID-6 (or 5) to the setup and recopy my files
(yes, I haven't deleted them, because I am not confident in my ability
with Linux yet). Hopefully I can simply remount these 4 drives,
because they're a simple RAID-5 array.
SUSE's COMPLETE FAILURES
This frustration with SuSe, the lack of a simple reliable update
utility, and the failures I experienced have discouraged me from using
SuSe at all. It's got some amazing tools that keep me from constantly
looking up documentation, posting to forums, or going to IRC, but the
unreliable upgrade process is a deal breaker for me. It's simply too
much work to manually update everything. This project had a simple
goal, which was to provide an easy and cheap solution for an unlimited
NAS service.
SUPPORT
In addition, SuSe's IRC help channel is among the worst I have
encountered. The level of support is often very good, but the
harassment, flames, and simply childish behavior overcome almost any
attempt at providing support. I have no problem giving
back to the community when I learn enough to do so, but I will not be
mocked for my inability to understand a new and very in-depth system.
In fact, I tend to go to the wonderful Gentoo IRC for my answers. The
people there are patient and encouraging, and the level of
knowledge is the best I have experienced. Outside the
original incident, it has been an amazing resource. I feel highly
confident asking questions about RAID here, because I know you guys are
actually RUNNING the kind of systems I am attempting to build.
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: big.green.jelly.bean <big_green_jelly_bean@yahoo.com>
Cc: davidsen <davidsen@tmr.com>; linux-raid <linux-raid@vger.kernel.org>
Sent: Friday, July 13, 2007 11:22:45 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
To run it manually;
echo check >> /sys/block/md0/md/sync_action
then you can check the status with;
cat /proc/mdstat
Or to continually watch it, if you want (kind of boring though :) )
watch cat /proc/mdstat
This will refresh every 2 sec.
In my original email I suggested using a crontab so you don't need to remember to do this every once in a while.
Run (I did this as root);
crontab -e
This will allow you to edit your crontab. Now paste this command in there;
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
If you want, you can add comments. I like to comment my stuff since I have lots of entries in mine; just make sure the lines start with '#' so your system knows it is just a comment and not a command it should run;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
After you have put this in your crontab, write and quit with this command;
:wq
It should come back with this;
[root@gateway ~]# crontab -e
crontab: installing new crontab
Now you can look at your cron table (without editing) with this;
crontab -l
It should return something like this, depending on whether you added comments and how you scheduled your command;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
For more info on crontab and syntax for times (I just did a google and grabbed the first couple links...);
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
Cheers,
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Thursday, July 12, 2007 5:43 PM
To: Bill Davidsen; Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
SuSe uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set the jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out a permission denied even though I am su or logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine. How do you send an email when a check fails, when SMART reports trouble, or when a RAID drive fails? How do you auto repair if the check fails?
These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt instead of X Windows, and I will leave the machine in the corner of my room, hopefully not needing attention for as long as possible.
----- Original Message ----
From: Bill Davidsen <davidsen@tmr.com>
To: Daniel Korstad <dan@korstad.net>
Cc: Michael <big_green_jelly_bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response and will give just some ideas for just some of the options...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I had started out with a single drive. I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get me there;
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros will have an installer that will guide you through the process. When you get to hard drive partitioning, look for an advanced option or a review-and-modify-partition-layout option or something similar, otherwise it might just make a guess at what you want, and that would not be RAID. In this advanced partition setup, you will be able to create your RAID. First you make equal-size partitions on both physical drives. For example, first carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 with each of these partitions and then make this your /boot. Do this again for the other partitions you want to have RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. It can be nice to have separation, in case a user fills /home/foo with crap and this will not affect other parts of the OS, or if the mail spool fills up, it will not hang the OS. The only problem is determining how big to make them during the install. At a minimum, I would do three partitions; /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) are in the / partition, but this way you don't have to worry about sizing them all correctly.
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition, tons of options...) Do you need to RAID your swap? Well, I would RAID it or make a swap file within a RAID partition. If you don't, and your system is using swap, and you lose a drive that has swap information on it, you might have issues depending on how important the information on the failed drive was. Your system might hang.
>
>
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
> After you go through the install and have a bootable OS that is running on mdadm RAID, I would test it to make sure grub was installed correctly to both physical drives. If grub is not installed to both drives, and down the road you lose the one drive that has grub, you will have a system that will not boot even though it has a second drive with a copy of all the files. If this were to happen, you can recover by booting with a bootable linux CD or recovery disk and manually installing grub to the surviving drive. For example, say you only had grub installed to hda and it failed; boot with a live linux CD and type (assuming /dev/hdd is the surviving second drive);
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all the space for the OS. You can make your partitions, take the leftover space, and throw it into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity from both drives. For example, you use 100M for /boot, 200G for /, and 2G for swap. Take the rest, make a partition from the remaining space on each drive, put them in a logical volume, and format it ext3, giving you over 500G to play with for non-critical crap.
>
> Why do I use RAID6? For the extra redundancy, and because I have 10 drives in my array.
> I have been an advocate for RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array is above, say, six;
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes; know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they¢re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would make a cron for the command to run every once in awhile. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am)
> #if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, and when a drive fails, you won't likely be bitten by hidden bad blocks during a rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here. At the
least, "see the man page" is in order.
> For your data array, I would make one partition of type Linux raid autodetect (FD) covering the whole of each physical drive. Then create your raid.
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition> <---the /dev/md3 can be what you want and will depend on how many other previous raid arrays you have, so long as you use a number not currently used.
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device :
> mount /dev/md3 /foo
>
> I would edit your /etc/fstab to have it automounted for each startup.
>
> Dan.
>
Other misc comments: mirroring your boot partition on drives which the
BIOS won't use is a waste of bytes. If you have more than, say, four
drives fail to function, you probably have a system problem other than
disk. And some BIOS versions will boot a secondary drive if the primary
fails hard, but not if it has a parity or other error, which can put
the BIOS into a retry loop (I *must* keep trying to boot). This
behavior can be seen on at least one major server line from a big-name
vendor; it's not just cheap desktops. The solution, ugly as it is, is
to use the firmware "RAID" on the motherboard controller for boot, and
for this reason I have several systems with low-cost small PATA drives
in a mirror just for boot (after which they are spun down with hdparm
settings).
Really good notes, people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 40+ messages in thread
* RE: Software based SATA RAID-5 expandable arrays?
2007-07-16 17:34 Michael
@ 2007-07-16 19:29 ` Daniel Korstad
0 siblings, 0 replies; 40+ messages in thread
From: Daniel Korstad @ 2007-07-16 19:29 UTC (permalink / raw)
To: Michael; +Cc: linux-raid
Don't forget the > or >>; either one will do...
crontab -e
30 3 * * Mon echo check > /sys/block/md3/md/sync_action
30 4 * * Mon echo check > /sys/block/md0/md/sync_action
30 4 * * Mon echo check > /sys/block/md1/md/sync_action
30 4 * * Mon echo check > /sys/block/md2/md/sync_action
----- Original Message -----
From: Michael
Sent: Mon, 7/16/2007 12:34pm
To: Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
Due to the nature of the data I am storing, RAID-6 is not really worth the extra safety and security, though it would be if I could get another 6 drives. Maybe then I can convert my RAID 5 into a RAID 6.
As for openfiler, it is a great, simple package that provides all the features I need, except that they don't include the latest kernel. That means my motherboard isn't supported. (frown) I have installed Fedora, after all the hassle of SuSe, and am currently setting it up so that it can be my main OS. It seems great, just some of the GUI-based admin tools are cryptic in their function.
I have mirrored my boot drive, which means I have to check that the second drive can be booted from. This is my todo list (though it does fail to mention SMART!); the times on the crontab have to be corrected.
------------------------------------------
SAMBA
http://www.redhatmagazine.com/2007/06/26/how-to-build-a-dirt-easy-home-nas-server-using-samba/
Repair
http://www.issociate.de/board/post/391115/Observations_of_a_failing_disk.html
http://www.issociate.de/board/post/443666/how_to_deal_with_continuously_getting_more_errors?.html
Crontab (Weekly Repair Schedule)
http://www.unixgeeks.org/security/newbie/unix/cron-1.html
http://www.ss64.com/bash/crontab.html
crontab -e
30 3 * * Mon echo check /sys/block/md3/md/sync_action
30 4 * * Mon echo check /sys/block/md0/md/sync_action
30 4 * * Mon echo check /sys/block/md1/md/sync_action
30 4 * * Mon echo check /sys/block/md2/md/sync_action
Check Boot Info on Mirrored Drive
After you go through the install and have a bootable OS that is running
on mdadm RAID, I would test it to make sure grub was installed
correctly to both physical drives. If grub is not installed to
both drives, and down the road you lose the one drive that has grub,
you will have a system that will not boot even though it has a second
drive with a copy of all the files. If this were to happen, you can
recover by booting with a bootable linux CD or recovery disk and
manually installing grub to the surviving drive. For example, say you
only had grub installed to hda and it failed; boot with a live linux CD
and type (assuming /dev/hdd is the surviving second drive);
grub
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit
System Report Email Mutt
http://www.mutt.org/
http://linux.die.net/man/8/auditd.conf
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: Michael <big_green_jelly_bean@yahoo.com>
Cc: linux-raid@vger.kernel.org
Sent: Monday, July 16, 2007 10:23:23 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
You will learn a lot by building your own system, and it will let you do more with it, as far as other services go, if you want.
However, again if you are still having problems with distro selection, configuration and commands, here is another NAS install solution I stumbled on.
http://www.openfiler.com
They appear to have taken a Fedora distro and remade it into their own. They also use the mdadm packages.
I have not played with this, but if I had to choose, I would use this one, since I have had more experience with mdadm as opposed to what freenas is using.
Their version of mdadm is not the very latest, however. That won't affect you unless you want to be able to grow your RAID; then you will need to update it.
https://www.openfiler.com/community/forums/viewtopic.php?id=741
Oh, and they do support creating RAID6 arrays
http://www.openfiler.com/screenshots/shots/RAID_Mgmt3.png
Just giving you more options.
Dan.
----- Original Message -----
From: Daniel Korstad
Sent: Mon, 7/16/2007 7:48am
To: Michael
Subject: RE: Software based SATA RAID-5 expandable arrays?
Something I ran across a year ago.
http://www.freenas.org/index.php?option=com_versions&Itemid=51
I played with it for a day or so and it looked impressive. The project is still very much alive, and they just released a new version a couple days ago.
The caveat, or the reason I did not use it, is that I use my Linux box for so many other things (web server, Asterisk (VoIP), Chillispot, VMware Server, firewall, ...).
If you go this route, you will pretty much dedicate your box to just a NAS function. The project is an ISO OS you download and install. This greatly simplifies things, but it ties you down a bit.
After it is built, clients connect to it via several different options you can configure: CIFS (this is windows file sharing, or samba), FTP, NFS, RSYNCD, SSHD, Unison, AFP.
It also supports hard disk standby time and advanced power management for your drives.
However, if that is all you really want (a NAS) and you are having issues with other Linux distros... this is a pretty simple way to get up and running with a NAS. Nice web interface for all the configuration.
Other things to consider: I don't think it has RAID6, or it did not last time I played with it a year ago. And I think the code is different from mdadm. So you would be looking toward their forums for help if you had issues.
Also, here is the manual for you..
http://www.freenas.org/downloads/docs/user-docs/FreeNAS-SUG.pdf
Cheers,
Dan.
----- Original Message -----
From: linux-raid-owner@vger.kernel.org on behalf of Daniel Korstad
Sent: Fri, 7/13/2007 1:24pm
To: big.green.jelly.bean
Cc: davidsen ; linux-raid
Subject: RE: Software based SATA RAID-5 expandable arrays?
I can't speak for SuSe issues but I believe there is some confusion on the packages and command syntax.
So hang on, we are going for a ride, step by step...
Check and repair are not packages, per se.
You should have a command called echo.
If you run this;
echo 1
you should get a 1 echoed back at you.
For example;
[root@gateway]# echo 1
1
Or anything else you want;
[root@gateway]# echo check
check
Now all we are doing with this is redirecting with the ">>" to another location, /sys/block/md0/md/sync_action
The difference between a double >> and a single > is the >> will append it to the end and the single > will replace the contents of the file with the value.
For example;
I will create a file called foo;
[root@gateway tmp]# vi foo
In this file I add two lines of text, foo, then I write and quit with :wq
Now I will take a look at the file I just made with my vi editor...
[root@gateway tmp]# cat foo
foo
foo
Great, now I run my echo command to send another value to it.
First I use the double >> to just append;
[root@gateway tmp]# echo foo2 >> foo
Now I take another look at the file;
[root@gateway tmp]# cat foo
foo
foo
foo2
So, I have my first two text lines, with the third line "foo2" appended.
Now I do this again but use just the single > to replace the file with a value.
[root@gateway tmp]# echo foo3 > foo
Then I look at it again;
[root@gateway tmp]# cat foo
foo3
Ahh, all the other lines are gone and now I just have foo3.
So, > replaces and >> appends.
How does this affect your /sys/block/md0/md/sync_action file? As it turns out, it does not matter.
Think of /proc and /sys as pseudo file systems: real-time, memory-resident file systems that track the processes running on your machine and the state of your system.
So first let's go to /sys/block/
Then I will list its contents;
[root@gateway ~]# cd /sys/block/
[root@gateway block]# ls
dm-0 dm-3 hda md1 ram0 ram11 ram14 ram3 ram6 ram9 sdc sdf sdi
dm-1 dm-4 hdc md2 ram1 ram12 ram15 ram4 ram7 sda sdd sdg
dm-2 dm-5 md0 md3 ram10 ram13 ram2 ram5 ram8 sdb sde sdh
This will be different for you, since your system will have different hardware and settings; again, it is a pseudo file system. The dm entries are my logical volumes, and you might have more or fewer sata drives (the sda, sdb, ...); these were created when I booted the system. If I add another sata drive, another sdj will be created automatically for me.
Depending on how many raid devices you have (I have four: /boot, swap, /, and my RAID6 data (md0, md1, md2, md3)), they are listed here too.
So let's go into one. My swap RAID, md1, is small, so let's go to that one and test this out;
[root@gateway md1]# ls
dev holders md range removable size slaves stat uevent
Let's go deeper;
[root@gateway md1]# cd /sys/block/md1/md/
[root@gateway md]# ls
chunk_size dev-hdc1 mismatch_cnt rd0 suspend_lo sync_speed
component_size level new_dev rd1 sync_action sync_speed_max
dev-hda1 metadata_version raid_disks suspend_hi sync_completed sync_speed_min
Now let's look at sync_action;
[root@gateway md]# cat sync_action
idle
That is the pseudo file that represents the current state of my RAID md1.
So let's run that echo command and then check the state of the RAID;
[root@gateway md]# echo check > sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[============>........] resync = 62.7% (65664/104320) finish=0.0min speed=65664K/sec
So it is in the resync state, and if there are bad blocks they will be corrected from parity.
Once it is done, let's check that sync_action file again.
[root@gateway md]# cat sync_action
idle
Now remember we used the single redirect, so our echo command replaced the value with the text "check". Once it was done with the resync, my system changed the value back to "idle".
What about the double ">>"? It appends to the file, but it has the same overall effect...
[root@gateway md]# echo check >> sync_action
[root@gateway md]# cat /proc/mdstat
Personalities : [raid1] [raid6]
md1 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
[=========>...........] resync = 49.0% (52096/104320) finish=0.0min speed=52096K/sec
When it is done the value goes back to idle;
[root@gateway md]# cat sync_action
idle
So, > or >> does not matter here. And the command you need is echo.
Manipulating the pseudo files in /proc is similar.
Say, for example, for security I don't want my box to respond to pings (1 is for true and 0 is for false);
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
In this case, you want the single > because you want to replace the current value with 1, and not the >> for append.
Also, another pseudo file, for turning your linux box into a router;
echo 1 > /proc/sys/net/ipv4/ip_forward
As for SuSe updating your kernel, removing your original one, and breaking your box by dropping you to a limited shell on boot up... I can't help you much there. I don't have SuSe, but as I understand it, they are a good distro. In my current distro, Fedora, you can tell the update manager not to update the kernel. Also, Fedora will keep your old kernel by default, so if there is an issue you can select the old one from the grub boot menu. I believe Ubuntu is similar. I bet you could configure SuSe to do the same.
I hope that clears up some confusion and good luck.
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Friday, July 13, 2007 11:48 AM
To: Daniel Korstad
Cc: davidsen; linux-raid
Subject: Re: Software based SATA RAID-5 expandable arrays?
RESPONSE
I had everything working, but it is evident that when I installed SuSe
the first time, check and repair were not included in the package :( I
did not use ">>", I used ">", as was incorrectly stated in
many of the documents I followed.
The thing that made me suspect check and repair weren't part of SuSe
was that typing "check" or "repair" at the command prompt produced
nothing but a message stating there was no such command. In addition,
man check and man repair were also missing.
BROKEN!
I did an auto update of the SuSe machine, which ended up replacing the
kernel. It added the new entries to the boot choices, but the mount
information was not transferred. SuSe also deleted the original kernel
boot setup. When SuSe looked at the drives individually, it found
that none of them was recognizable. Therefore, when I woke up this
morning and rebooted the machine after the update, I received the
errors and was dumped to a basic prompt with limited ability to do
anything. I know I need to manually remount the drives, but it's going
to be a challenge since I did not do this in the past. The answer to
this question is that I either have to change distros (which I am
tempted to do) or fix the current distro. Please do not bother
providing any solutions, for I simply have to RTFM (which I haven't had
time to do).
I think I am going to set up my machines again. The first two drives with
identical boot partitions, but not mirrored. I can then manually
run a "tree" copy to update the second drive as I grow the
system, after each successful and needed update. That would give me a
fallback after any update, by simply swapping the SATA cable from the
first boot drive to the second. I am assuming
this will work. I can then add the RAID-6 (or 5) to the setup and recopy my files
(yes, I haven't deleted them, because I am not confident in my ability
with Linux yet). Hopefully I can simply remount these 4 drives,
because they're a simple RAID-5 array.
SUSE's COMPLETE FAILURES
This frustration with SuSe, the lack of a simple reliable update
utility, and the failures I experienced have discouraged me from using
SuSe at all. It's got some amazing tools that keep me from constantly
looking up documentation, posting to forums, or going to IRC, but the
unreliable upgrade process is a deal breaker for me. It's simply too
much work to manually update everything. This project had a simple
goal, which was to provide an easy and cheap solution for an unlimited
NAS service.
SUPPORT
In addition, SuSe's IRC help channel is among the worst I have
encountered. The level of support is often very good, but the
harassment, flames, and simply childish behavior overcome almost any
attempt at providing support. I have no problem giving
back to the community when I learn enough to do so, but I will not be
mocked for my inability to understand a new and very in-depth system.
In fact, I tend to go to the wonderful Gentoo IRC for my answers. The
people there are patient and encouraging, and the level of
knowledge is the best I have experienced. Outside the
original incident, it has been an amazing resource. I feel highly
confident asking questions about RAID here, because I know you guys are
actually RUNNING the kind of systems I am attempting to build.
----- Original Message ----
From: Daniel Korstad <dan@korstad.net>
To: big.green.jelly.bean <big_green_jelly_bean@yahoo.com>
Cc: davidsen <davidsen@tmr.com>; linux-raid <linux-raid@vger.kernel.org>
Sent: Friday, July 13, 2007 11:22:45 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
To run it manually;
echo check >> /sys/block/md0/md/sync_action
then you can check the status with;
cat /proc/mdstat
Or to continually watch it, if you want (kind of boring though :) )
watch cat /proc/mdstat
This will refresh every 2 sec.
In my original email I suggested using a crontab so you don't need to remember to do this every once in a while.
Run (I did this as root);
crontab -e
This will allow you to edit your crontab. Now paste this command in there;
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
If you want, you can add comments. I like to comment my stuff since I have lots of entries in mine; just make sure the lines start with '#' so your system knows it is just a comment and not a command it should run;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
After you have put this in your crontab, write and quit with this command;
:wq
It should come back with this;
[root@gateway ~]# crontab -e
crontab: installing new crontab
Now you can look at your cron table (without editing) with this;
crontab -l
It should return something like this, depending on whether you added comments and how you scheduled your command;
#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check >> /sys/block/md0/md/sync_action
For more info on crontab and syntax for times (I just did a google and grabbed the first couple links...);
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron
Cheers,
Dan.
-----Original Message-----
From: Michael [mailto:big_green_jelly_bean@yahoo.com]
Sent: Thursday, July 12, 2007 5:43 PM
To: Bill Davidsen; Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?
SuSe uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible. However, they provide a wonderful X Windows utility that helps set the jobs up... the problem I'm having is figuring out what to run. When I try to run "/sys/block/md0/md/sync_action" at a prompt, it shoots out a permission denied even though I am su or logged in as root. Very annoying. You mention check vs. repair... which brings me to my last issue in setting up this machine. How do you send an email when a check fails, when SMART reports trouble, or when a RAID drive fails? How do you auto repair if the check fails?
These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt instead of X Windows, and I will leave the machine in the corner of my room, hopefully not needing attention for as long as possible.
----- Original Message ----
From: Bill Davidsen <davidsen@tmr.com>
To: Daniel Korstad <dan@korstad.net>
Cc: Michael <big_green_jelly_bean@yahoo.com>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
> You have lots of options. This will be a lengthy response and will give just some ideas for just some of the options...
>
>
Just a few thoughts below interspersed with your comments.
> For my server, I had started out with a single drive. I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up). Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved. I followed this migration guide to get me there;
> http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
>
> Since you are starting from scratch, it should be easier for you. Most distros will have an installer that will guide you through the process. When you get to hard drive partitioning, look for an advanced option or a review-and-modify-partition-layout option or something similar, otherwise it might just make a guess at what you want, and that would not be RAID. In this advanced partition setup, you will be able to create your RAID. First you make equal-size partitions on both physical drives. For example, first carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 with each of these partitions and then make this your /boot. Do this again for the other partitions you want to have RAIDed. You can do this for /boot, /var, /home, /tmp, /usr. It can be nice to have separation, in case a user fills /home/foo with crap and this will not affect other parts of the OS, or if the mail spool fills up, it will not hang the OS. The only problem is determining how big to make them during the install. At a minimum, I would do three partitions; /boot, swap, and /. This means all the others (/var, /home, /tmp, /usr) are in the / partition, but this way you don't have to worry about sizing them all correctly.
>
> For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2). (Alternatively, you could make a swap file in / and not have a swap partition; tons of options...) Do you need to RAID your swap? Well, I would RAID it or make a swap file within a RAID partition. If you don't, and your system is using swap when you lose a drive that has swap information on it, you might have issues depending on how important the information on the failed drive was. Your system might hang.
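> (The swap-file alternative is short, as a sketch assuming / is already on the RAID and 2G of swap is wanted:
> dd if=/dev/zero of=/swapfile bs=1M count=2048
> mkswap /swapfile && swapon /swapfile
> plus an /etc/fstab entry to make it stick across reboots.)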
>
>
Note that RAID-10 generally performs better than mirroring, particularly
when more than a few drives are involved. This can have performance
implications for swap, when large i/o pushes program pages out of
memory. The other side of that coin is that "recovery CDs" don't seem to
know how to use RAID-10 swap, which might be an issue on some systems.
> After you go through the install and have a bootable OS running on mdadm RAID, I would test it to make sure grub was installed correctly on both physical drives. If grub is not installed on both drives, and down the road you lose the one drive that has grub, you will have a system that will not boot even though it has a second drive with a copy of all the files. If this happens, you can recover by booting with a live Linux CD or rescue disk and installing grub manually. For example, say you only had grub installed to hda and it failed; boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive);
> grub
> device (hd0) /dev/hdd
> root (hd0,0)
> setup (hd0)
> quit
> You say you are using two 500G drives for the OS. You don't necessarily have to use all the space for the OS. You can make your partitions and throw the leftover space into a logical volume. This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives. For example, you use 100M for /boot, 200G for /, and 2G for swap. Take the rest, make a partition from the remaining space on each drive, put the pair into a logical volume, and format that ext3, giving you over 500G to play with for non-critical crap.
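> (A sketch of that with LVM, assuming the leftover partitions came out as /dev/sda4 and /dev/sdb4:
> pvcreate /dev/sda4 /dev/sdb4
> vgcreate scratch /dev/sda4 /dev/sdb4
> lvcreate -l 100%FREE -n stuff scratch
> mkfs.ext3 /dev/scratch/stuff
> Partition and volume names are made up; substitute your own.)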
>
> Why do I use RAID-6? For the extra redundancy, and because I have 10 drives in my array.
> I have been an advocate of RAID-6, especially with ever-increasing drive capacities and when the number of drives in the array climbs above, say, six;
> http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm
>
>
Other configurations will perform better for writes; know your i/o
performance requirements.
> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/
> "...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they¢re used for mission critical data."
>
> Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.
>
> With four drives you would be just fine with a RAID5.
> However, I would set up a cron job to run the check command every once in a while. Add this to your crontab...
>
> #check for bad blocks once a week (every Mon at 2:30am); if bad blocks are found, they are corrected from parity information
> 30 2 * * Mon echo check > /sys/block/md0/md/sync_action
>
> With this, you will keep hidden bad blocks to a minimum, and when a drive fails, you are unlikely to be bitten by a hidden bad block during a rebuild.
>
>
I think a comment on "check" vs. "repair" is appropriate here; at the
least, "see the man page" applies.
> For your data array, I would make one partition of type Linux raid autodetect (FD) covering the whole of each physical drive. Then create your RAID.
>
> mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition> <---the /dev/md3 can be what you want and will depend on how many other previous raid arrays you have, so long as you use a number not currently used.
>
> My filesystem of choice is XFS, but you get to pick your own poison:
> mkfs.xfs -f /dev/md3
>
> Mount the device :
> mount /dev/md3 /foo
>
> I would edit your /etc/fstab to have it automounted for each startup.
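> (And since growing the array later was your original question, a sketch of the expansion, assuming a fifth drive prepared the same way shows up as /dev/sde1:
> mdadm --add /dev/md3 /dev/sde1
> mdadm --grow /dev/md3 --raid-devices=5
> # wait for the reshape in /proc/mdstat to finish, then grow the filesystem:
> xfs_growfs /foo
> The new device name is an assumption; check dmesg for yours.)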
>
> Dan.
>
Other misc comments: mirroring your boot partition on drives which the
BIOS won't use is a waste of bytes. If you have more than, say, four
drives fail to function, you probably have a system problem other than
disk. And some BIOS versions will boot a secondary drive if the primary
fails hard, but not if it has a parity or other error, which can put the
machine in a retry loop (I *must* keep trying to boot). This behavior shows
up on at least one major server line from a big-name vendor; it's not just
cheap desktops. The solution, ugly as it is, is to use the firmware
"RAID" on the motherboard controller for boot; for this reason I have
several systems with low-cost small PATA drives mirrored just for boot
(after which they are spun down with hdparm settings).
Really good notes, people should hang onto them!
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
@ 2007-08-01 16:46 Michael
2007-08-01 17:13 ` David Greaves
0 siblings, 1 reply; 40+ messages in thread
From: Michael @ 2007-08-01 16:46 UTC (permalink / raw)
To: Daniel Korstad; +Cc: linux-raid
I have removed the drives from my machine; the problem I'm having is that I don't know the order (ports) they go back into the machine. Does anyone know how to determine the order, or how to fix the drive array if the order is not correct?
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: Software based SATA RAID-5 expandable arrays?
2007-08-01 16:46 Michael
@ 2007-08-01 17:13 ` David Greaves
0 siblings, 0 replies; 40+ messages in thread
From: David Greaves @ 2007-08-01 17:13 UTC (permalink / raw)
To: Michael; +Cc: Daniel Korstad, linux-raid
Michael wrote:
> I have removed the drives from my machine; the problem I'm having is that I don't know the order (ports) they go back into the machine.
> Does anyone know how to determine the order, or how to fix the drive array if the order is not correct?
If you are not attempting a (complex) repair, then it won't matter.
Assuming you can identify the boot device and get the OS running, just do an
'assemble' and md/mdadm will figure out the correct order.
You may however want to record the (new) order for future reference in case you
need to do a complex repair.
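(For that record-keeping, something like this sketch works, assuming the data
partitions are /dev/sd[b-e]1; --examine prints each member's slot from its
superblock, so cabling order doesn't matter to md:
mdadm --examine /dev/sd[b-e]1 | egrep 'this|/dev/'
mdadm --assemble --scan
The device names are assumptions; adjust to match your drives.)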
David
PS Next time you have a new problem you may want to start a new thread.
^ permalink raw reply [flat|nested] 40+ messages in thread
end of thread, other threads:[~2007-08-01 17:13 UTC | newest]
Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-06-17 22:16 Software based SATA RAID-5 expandable arrays? greenjelly
2007-06-17 22:23 ` Justin Piszcz
2007-06-18 21:14 ` Dexter Filmore
2007-06-19 8:35 ` David Greaves
2007-06-19 9:14 ` Dexter Filmore
2007-06-20 20:52 ` Brad Campbell
-- strict thread matches above, loose matches on Subject: below --
2007-06-18 12:46 Daniel Korstad
2007-06-19 13:08 Michael
2007-06-19 13:43 Michael
2007-06-19 14:23 ` Robin Hill
2007-06-19 18:49 ` Daniel Korstad
2007-06-19 14:42 Michael
2007-06-19 21:30 ` Nix
2007-06-21 9:22 Michael
2007-06-21 20:16 ` Richard Scobie
2007-06-21 10:40 Michael
2007-06-22 15:12 jahammonds prost
2007-06-23 4:16 ` Brad Campbell
[not found] <944875.74303.qm@web54106.mail.re2.yahoo.com>
2007-07-09 19:31 ` Daniel Korstad
2007-07-11 14:21 ` Bill Davidsen
2007-07-10 21:58 jahammonds prost
2007-07-11 15:03 Daniel Korstad
2007-07-14 15:49 ` Bill Davidsen
2007-07-11 17:26 jahammonds prost
2007-07-11 19:13 ` Daniel Korstad
2007-07-11 19:26 ` Daniel Korstad
2007-07-11 20:08 Michael
2007-07-11 23:29 ` Nix
2007-07-11 20:12 jahammonds prost
2007-07-12 22:42 Michael
2007-07-13 3:54 ` Bill Davidsen
2007-07-13 15:22 ` Daniel Korstad
2007-07-13 16:48 Michael
2007-07-13 18:18 ` Bill Davidsen
2007-07-13 18:23 ` Daniel Korstad
[not found] <1914474980.1184590115562.JavaMail.root@gateway.korstad.net>
2007-07-16 14:23 ` Daniel Korstad
2007-07-16 17:34 Michael
2007-07-16 19:29 ` Daniel Korstad
2007-08-01 16:46 Michael
2007-08-01 17:13 ` David Greaves