* Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Larry Schwerzler @ 2011-02-18 20:55 UTC
To: linux-raid

I have a few questions about my raid array that I haven't been able to
find definitive answers for, so I thought I would ask here.

My setup:

* 8x 1TB drives in an external enclosure connected to my server via 2
  eSATA cables.
* Currently all 8 drives are included in a raid 6 array.
* I use the array to serve files (mostly larger .mkv/.iso (several GB)
  and .flac/.mp3 (5-50MB) files) over my network via NFS, and to perform
  offsite backup via rsync over ssh of another server.
* This is a system in my home, so prolonged downtime, while annoying, is
  not the end of the world.
* If it matters, Ubuntu 10.04 64-bit server is my distro.

I'm considering, and will likely move forward with, moving my data off
and rebuilding the array as a raid10 array. Just a few questions before
I make the switch.

Questions:

1. In my research of raid10 I very seldom hear of drive configurations
with more than 4 drives. Are there special considerations with having
an 8-drive raid10 array? I understand that I'll be losing 2TB of space
from my current setup, but I'm not too worried about that.

2. One problem I'm having with my current setup is that the eSATA
cables have been knocked loose, which effectively drops 4 of my drives.
I'd really like to be able to survive this type of sudden drive loss.
If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
while efgh are on the other, what drive order should I create the array
with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
one of my eSATA channels went dark?

3. One of the concerns I have with raid10 is expandability, and I'm
glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
it will likely be a while before I see that ability in my distro. I did
find a guide on expanding array size when using LVM: increase the size
of each drive and create two partitions, one the size of the original
drive and one with the remainder of the new space. Once you have done
this for all drives, you create a new raid10 array with the second
partitions on all the drives and add it to the LVM volume group.
Effectively you have two raid10 arrays, one on the first half of the
drives and one on the second half, with the space pooled together (see
the sketch after this message). I'm sure many of you are familiar with
this scenario, but I'm wondering if it could be problematic: is having
two raid10 arrays on one drive an issue?

4. Part of the reason I'm wanting to switch is information I read on
the "BAARF" site pointing out some of the issues in the parity raids
that people sometimes don't think about. (site:
http://www.miracleas.com/BAARF/BAARF2.html) A lot of the information on
the site is a few years old now, and given how fast things can change,
and the fact that I have not found many people complaining about the
parity raids, I'm wondering if some or all of the gotchas they list are
less of an issue now? Maybe my reasons for moving to raid10 are no
longer relevant?

Thank you in advance for any/all information given. And a big thank you
to Neil and the other developers of linux-raid for their efforts on
this great tool.

Larry Schwerzler
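A minimal sketch of the LVM-pooling expansion described in question 3,
assuming the enlarged drives appear as /dev/sd[a-h] with the original
space in partition 1 (already holding the existing raid10, md0) and the
new space in partition 2. The array name md1, the volume group vg_media,
the logical volume lv_data, and the ext4 filesystem are illustrative
assumptions, not details from the thread.

# Build a second raid10 across the new partitions, channel-interleaved
# like the first array.
mdadm --create /dev/md1 --level=10 --layout=n2 --raid-devices=8 \
    /dev/sd{a,e,b,f,c,g,d,h}2

# Pool the new array with the existing one in LVM, then grow the LV
# and the filesystem on it.
pvcreate /dev/md1
vgextend vg_media /dev/md1
lvextend -l +100%FREE /dev/vg_media/lv_data
resize2fs /dev/vg_media/lv_data

Whether the extra seeking from two arrays sharing the same spindles is
acceptable is exactly the concern raised later in the thread.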
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Stan Hoeppner @ 2011-02-18 23:44 UTC
To: Larry Schwerzler; +Cc: linux-raid

Larry Schwerzler put forth on 2/18/2011 2:55 PM:

> 1. In my research of raid10 I very seldom hear of drive configurations
> with more than 4 drives. Are there special considerations with having
> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
> from my current setup, but I'm not too worried about that.

This is because Linux mdraid is most popular with the hobby crowd, not
business, and most folks in this segment aren't running more than 4
drives in a RAID 10. For business solutions using embedded Linux and
mdraid, mdraid is typically hidden from the user, who isn't going to be
writing posts on the net about mdraid; he calls his vendor for support.
In a nutshell, that's why you see few or no posts about mdraid 10
arrays larger than 4 drives.

> 2. One problem I'm having with my current setup is that the eSATA
> cables have been knocked loose, which effectively drops 4 of my drives.
> I'd really like to be able to survive this type of sudden drive loss.

Solve the problem then--quit kicking the cables, or secure them in a
manner that they can't be kicked loose. Or buy a new chassis that can
hold all drives internally. Software cannot solve or work around this
problem. This is actually quite silly to ask. Similarly, would you ask
your car manufacturer to build a car that floats and has a propeller,
because you keep driving off the road into ponds?

> If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
> while efgh are on the other, what drive order should I create the array
> with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
> one of my eSATA channels went dark?

On a cheap SATA PCIe card, if one channel goes, they both typically go,
as it's a single-chip solution and the PHYs are built into the chip.
However, given your penchant for kicking cables out of their ports, you
might physically damage the connector. So you might want to create the
layout so your mirror pairs are on opposite ports.

> 3. One of the concerns I have with raid10 is expandability, and I'm
> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
> it will likely be a while before I see that ability in my distro. I did
> find a guide on expanding array size when using LVM: increase the size
> of each drive and create two partitions, one the size of the original
> drive and one with the remainder of the new space. Once you have done
> this for all drives, you create a new raid10 array with the second
> partitions on all the drives and add it to the LVM volume group.
> Effectively you have two raid10 arrays, one on the first half of the
> drives and one on the second half, with the space pooled together.
> I'm sure many of you are familiar with this scenario, but I'm wondering
> if it could be problematic: is having two raid10 arrays on one drive
> an issue?

Reshaping requires that you have a good, full backup for when it all
goes wrong. Most home users don't keep backups. If you kick the cable
during a reshape you may hose everything and have to start over from
scratch. If you don't, won't, or can't keep a regular full backup, then
don't do a reshape. Simply add new drives, create a new mdraid array if
you like, make a filesystem, and mount it somewhere. Others will likely
give different advice. If you need to share it via Samba or NFS, create
another share. For those who like everything in one "tree", you can
simply create a new directory "inside" your current array's filesystem
and mount the new one there (a sketch follows this message). Unix is
great like this. Many Linux newbies forget this capability, or never
learned it.

> 4. Part of the reason I'm wanting to switch is information I read on
> the "BAARF" site pointing out some of the issues in the parity raids
> that people sometimes don't think about. (site:
> http://www.miracleas.com/BAARF/BAARF2.html) A lot of the information on
> the site is a few years old now, and given how fast things can change,
> and the fact that I have not found many people complaining about the
> parity raids, I'm wondering if some or all of the gotchas they list are
> less of an issue now? Maybe my reasons for moving to raid10 are no
> longer relevant?

You need to worry far more about your cabling situation. Kicking a
cable out is what can/will cause data loss. At this point that is far
more detrimental to you than the RAID 5/6 invisible data loss issue.

Always fix the big problems first. The RAID level you use is the least
of yours right now.

--
Stan
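A minimal sketch of the "grow by adding a separate array" approach Stan
describes, assuming two hypothetical new drives /dev/sdi and /dev/sdj,
an existing array already mounted at /srv/media, and ext4; none of
these names come from the thread.

# Mirror the two new drives as their own small array.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdi /dev/sdj

# Put a filesystem on it and mount it inside the existing array's tree,
# so it shows up as just another directory under the current share.
mkfs.ext4 /dev/md2
mkdir /srv/media/archive
mount /dev/md2 /srv/media/archive

# Illustrative fstab entry to make the mount persistent.
echo '/dev/md2  /srv/media/archive  ext4  defaults  0 2' >> /etc/fstab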
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Keld Jørn Simonsen @ 2011-02-19 0:54 UTC
To: Stan Hoeppner; +Cc: Larry Schwerzler, linux-raid

On Fri, Feb 18, 2011 at 05:44:24PM -0600, Stan Hoeppner wrote:
> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>
>> 1. In my research of raid10 I very seldom hear of drive configurations
>> with more than 4 drives. Are there special considerations with having
>> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
>> from my current setup, but I'm not too worried about that.
>
> This is because Linux mdraid is most popular with the hobby crowd, not
> business, and most folks in this segment aren't running more than 4
> drives in a RAID 10. For business solutions using embedded Linux and
> mdraid, mdraid is typically hidden from the user, who isn't going to be
> writing posts on the net about mdraid; he calls his vendor for support.
> In a nutshell, that's why you see few or no posts about mdraid 10
> arrays larger than 4 drives.

Well, on https://raid.wiki.kernel.org/index.php/Performance there are
several performance reports with 6 or 10 spindles, so there...

For an 8-drive Linux MD raid10, maybe you should consider a motherboard
with 8 SATA ports.

Best regards
keld
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Larry Schwerzler @ 2011-02-19 1:53 UTC
To: Keld Jørn Simonsen; +Cc: Stan Hoeppner, linux-raid

2011/2/18 Keld Jørn Simonsen <keld@keldix.com>:
> On Fri, Feb 18, 2011 at 05:44:24PM -0600, Stan Hoeppner wrote:
>> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>>
>>> 1. In my research of raid10 I very seldom hear of drive configurations
>>> with more than 4 drives. Are there special considerations with having
>>> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
>>> from my current setup, but I'm not too worried about that.
>>
>> This is because Linux mdraid is most popular with the hobby crowd, not
>> business, and most folks in this segment aren't running more than 4
>> drives in a RAID 10. For business solutions using embedded Linux and
>> mdraid, mdraid is typically hidden from the user, who isn't going to be
>> writing posts on the net about mdraid; he calls his vendor for support.
>> In a nutshell, that's why you see few or no posts about mdraid 10
>> arrays larger than 4 drives.
>
> Well, on https://raid.wiki.kernel.org/index.php/Performance there are
> several performance reports with 6 or 10 spindles, so there...
>
> For an 8-drive Linux MD raid10, maybe you should consider a motherboard
> with 8 SATA ports.

While I have considered getting a new case that can hold 8 drives plus
a system drive plus a CD-ROM drive, I have always had trouble finding
them. There are no doubt better setups than mine, but I'm trying not to
buy new hardware if I can get away with it.

> Best regards
> keld
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Stan Hoeppner @ 2011-02-19 4:33 UTC
To: Larry Schwerzler; +Cc: Keld Jørn Simonsen, linux-raid

Larry Schwerzler put forth on 2/18/2011 7:53 PM:

> While I have considered getting a new case that can hold 8 drives plus
> a system drive plus a CD-ROM drive, I have always had trouble finding
> them. There are no doubt better setups than mine, but I'm trying not to
> buy new hardware if I can get away with it.

Are you mechanically inclined in the slightest? You can fix the "cable
kick" problem for less than $5 with these:

http://www.lowes.com/ProductDisplay?partNumber=292685-1781-45-1MBUVL&langId=-1&storeId=10151&productId=3128405&catalogId=10051&cmRelshp=rel&rel=nofollow&cId=PDIO1

and these:

http://www.lowes.com/pd_220871-1781-45-311UVL_0__?productId=3128261&Ntt=cable+tie&pl=1&currentURL=%2Fpl__0__s%3FVa%3Dtrue%26Ntt%3Dcable%2Btie

and have most of them left over for other uses. You'll get strain
relief and kick protection, especially if you use two on each chassis.
Though, if you are actually kicking or tripping over the cable, you'll
simply end up jerking your equipment off the table and damaging it,
instead of just having the eSATA plug pop out.

I'm really curious to understand why/how your cables are exposed to
"kicking" or other detachment due to accidental contact.

--
Stan
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Simon Mcnair @ 2011-02-20 9:57 UTC
To: Stan Hoeppner; +Cc: Larry Schwerzler, Keld Jørn Simonsen, linux-raid

Sorry, I can't help responding to this. I love any post that goes back
to cable ties. Get as techy as you like, behind the scenes there WILL
be cable ties (or posh Velcro ties, my personal favourite) somewhere
holding the whole kit and caboodle together ;-)

Just an off-topic attempt at humour :-)

Simon

2011/2/19 Stan Hoeppner <stan@hardwarefreak.com>:
> Larry Schwerzler put forth on 2/18/2011 7:53 PM:
>
>> While I have considered getting a new case that can hold 8 drives plus
>> a system drive plus a CD-ROM drive, I have always had trouble finding
>> them. There are no doubt better setups than mine, but I'm trying not to
>> buy new hardware if I can get away with it.
>
> Are you mechanically inclined in the slightest? You can fix the "cable
> kick" problem for less than $5 with these:
>
> http://www.lowes.com/ProductDisplay?partNumber=292685-1781-45-1MBUVL&langId=-1&storeId=10151&productId=3128405&catalogId=10051&cmRelshp=rel&rel=nofollow&cId=PDIO1
>
> and these:
>
> http://www.lowes.com/pd_220871-1781-45-311UVL_0__?productId=3128261&Ntt=cable+tie&pl=1&currentURL=%2Fpl__0__s%3FVa%3Dtrue%26Ntt%3Dcable%2Btie
>
> and have most of them left over for other uses. You'll get strain
> relief and kick protection, especially if you use two on each chassis.
> Though, if you are actually kicking or tripping over the cable, you'll
> simply end up jerking your equipment off the table and damaging it,
> instead of just having the eSATA plug pop out.
>
> I'm really curious to understand why/how your cables are exposed to
> "kicking" or other detachment due to accidental contact.
>
> --
> Stan
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Larry Schwerzler @ 2011-02-19 1:50 UTC
To: Stan Hoeppner; +Cc: linux-raid

On Fri, Feb 18, 2011 at 3:44 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>
>> 1. In my research of raid10 I very seldom hear of drive configurations
>> with more than 4 drives. Are there special considerations with having
>> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
>> from my current setup, but I'm not too worried about that.
>
> This is because Linux mdraid is most popular with the hobby crowd, not
> business, and most folks in this segment aren't running more than 4
> drives in a RAID 10. For business solutions using embedded Linux and
> mdraid, mdraid is typically hidden from the user, who isn't going to be
> writing posts on the net about mdraid; he calls his vendor for support.
> In a nutshell, that's why you see few or no posts about mdraid 10
> arrays larger than 4 drives.

Gotcha, so no specific issues. Thanks.

>> 2. One problem I'm having with my current setup is that the eSATA
>> cables have been knocked loose, which effectively drops 4 of my drives.
>> I'd really like to be able to survive this type of sudden drive loss.
>
> Solve the problem then--quit kicking the cables, or secure them in a
> manner that they can't be kicked loose. Or buy a new chassis that can
> hold all drives internally. Software cannot solve or work around this
> problem. This is actually quite silly to ask. Similarly, would you ask
> your car manufacturer to build a car that floats and has a propeller,
> because you keep driving off the road into ponds?

I'm working on securing the cables, but sometimes there are things
beyond your control, and I'd like to protect against a possible issue
rather than just throw up my hands and say "well, this won't work, I
obviously need a whole new setup." If I can get some of the protection
from mdraid, awesome; if not, at least I'll know.

Your example is a bit off. It would be more like asking my car
manufacturer whether the big button that says "float" could be used for
when I occasionally drive into ponds. I'm not asking anyone to change
the code just to protect me from my poor buying choices, just wondering
if the tool has the ability to help me.

>> If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
>> while efgh are on the other, what drive order should I create the array
>> with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
>> one of my eSATA channels went dark?
>
> On a cheap SATA PCIe card, if one channel goes, they both typically go,
> as it's a single-chip solution and the PHYs are built into the chip.
> However, given your penchant for kicking cables out of their ports, you
> might physically damage the connector. So you might want to create the
> layout so your mirror pairs are on opposite ports.

Not sure if I have a cheap eSATA card (SANS DIGITAL HA-DAT-4ESPCIE
PCI-Express x8 SATA II), but when one of the cables has come out, the
drives on the other cable have worked fine, so I'd guess my chipset
doesn't fall into that scenario. I definitely want to create the pairs
on opposite ports, but I was unclear what drive order during the create
procedure would actually do that, given an f2 layout.

>> 3. One of the concerns I have with raid10 is expandability, and I'm
>> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
>> it will likely be a while before I see that ability in my distro. I did
>> find a guide on expanding array size when using LVM: increase the size
>> of each drive and create two partitions, one the size of the original
>> drive and one with the remainder of the new space. Once you have done
>> this for all drives, you create a new raid10 array with the second
>> partitions on all the drives and add it to the LVM volume group.
>> Effectively you have two raid10 arrays, one on the first half of the
>> drives and one on the second half, with the space pooled together.
>> I'm sure many of you are familiar with this scenario, but I'm wondering
>> if it could be problematic: is having two raid10 arrays on one drive
>> an issue?
>
> Reshaping requires that you have a good, full backup for when it all
> goes wrong. Most home users don't keep backups. If you kick the cable
> during a reshape you may hose everything and have to start over from
> scratch. If you don't, won't, or can't keep a regular full backup, then
> don't do a reshape. Simply add new drives, create a new mdraid array if
> you like, make a filesystem, and mount it somewhere. Others will likely
> give different advice. If you need to share it via Samba or NFS, create
> another share. For those who like everything in one "tree", you can
> simply create a new directory "inside" your current array's filesystem
> and mount the new one there. Unix is great like this. Many Linux
> newbies forget this capability, or never learned it.

I understand reshaping is tricky, and I do keep backups of the critical
data. But much of my data is movies that I own and play over the
network from my home media server. I don't back those up, because if I
lose them all I just get to spend a lot of evenings re-ripping the
movies, which sucks but isn't as bad as losing the photos etc.

Without the LVM expansion trick, expansion for me looks like this: buy
another 8-bay JBOD enclosure (or another computer case that holds 8
drives plus a system HD and a DVD drive, and another mobo that can
support 10 SATA devices), set up the 8 new drives, copy the data over
from the old drives, retire the old drives, and sell the extra JBOD
enclosure. I was hoping to get the same effect without buying the extra
enclosure, but raid10 can't reshape yet.

>> 4. Part of the reason I'm wanting to switch is information I read on
>> the "BAARF" site pointing out some of the issues in the parity raids
>> that people sometimes don't think about. (site:
>> http://www.miracleas.com/BAARF/BAARF2.html) A lot of the information on
>> the site is a few years old now, and given how fast things can change,
>> and the fact that I have not found many people complaining about the
>> parity raids, I'm wondering if some or all of the gotchas they list are
>> less of an issue now? Maybe my reasons for moving to raid10 are no
>> longer relevant?
>
> You need to worry far more about your cabling situation. Kicking a
> cable out is what can/will cause data loss. At this point that is far
> more detrimental to you than the RAID 5/6 invisible data loss issue.
>
> Always fix the big problems first. The RAID level you use is the least
> of yours right now.
>
> --
> Stan
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Joe Landman @ 2011-02-19 1:12 UTC
To: Larry Schwerzler; +Cc: linux-raid

On 02/18/2011 03:55 PM, Larry Schwerzler wrote:

[...]

> Questions:
>
> 1. In my research of raid10 I very seldom hear of drive configurations
> with more than 4 drives. Are there special considerations with having
> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
> from my current setup, but I'm not too worried about that.

If you are going to set this up, I'd suggest a few things.

1st: try to use a PCI HBA with enough ports, not the motherboard ports.

2nd: eSATA is probably not a good idea (see your issue below).

3rd: I'd suggest getting 10 drives and using 2 as hot spares. Again, not
using eSATA. Use an internal PCIe card that provides a reasonable chip.
If you can't house the drives internal to your machine, get a x4 or x8
JBOD/RAID canister with a single (or possibly 2) SAS cables. But
seriously, lose the eSATA setup.

> 2. One problem I'm having with my current setup is that the eSATA
> cables have been knocked loose, which effectively drops 4 of my drives.
> I'd really like to be able to survive this type of sudden drive loss.
> If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
> while efgh are on the other, what drive order should I create the array
> with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
> one of my eSATA channels went dark?

Usually the on-board eSATA chips are very low cost, low bandwidth units.
Spend another $150-200 on a dual external SAS HBA, and get the JBOD
container.

> 3. One of the concerns I have with raid10 is expandability, and I'm
> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
> it will likely be a while before I see that ability in my distro. I did
> find a guide on expanding array size when using LVM: increase the size
> of each drive and create two partitions, one the size of the original
> drive and one with the remainder of the new space. Once you have done
> this for all drives, you create a new raid10 array with the second
> partitions on all the drives and add it to the LVM volume group.
> Effectively you have two raid10 arrays, one on the first half of the
> drives and one on the second half, with the space pooled together.
> I'm sure many of you are familiar with this scenario, but I'm wondering
> if it could be problematic: is having two raid10 arrays on one drive
> an issue?

We'd recommend against this. Too much seeking.

> 4. Part of the reason I'm wanting to switch is information I read on
> the "BAARF" site pointing out some of the issues in the parity raids
> that people sometimes don't think about. (site:
> http://www.miracleas.com/BAARF/BAARF2.html) A lot of the information on
> the site is a few years old now, and given how fast things can change,
> and the fact that I have not found many people complaining about the
> parity raids, I'm wondering if some or all of the gotchas they list are
> less of an issue now? Maybe my reasons for moving to raid10 are no
> longer relevant?

Things have gotten worse. The BERs are improving a bit (most reasonable
SATA drives now report 1E-15 as their rate, compared with 1E-14
previously). Remember, 2TB = 1.6E13 bits, so 10x 2TB drives together is
1.6E14 bits. At a 1E-14 error rate, 8 scans or rebuilds will get you to
a statistical near certainty of hitting an unrecoverable error (a rough
calculation follows this message).

RAID6 buys you a little more time than RAID5, but you still have
worries due to the time-correlated second drive failure. Google found a
peak at 1000s after the first drive failure (which likely corresponds
to an error on rebuild). With RAID5, that second error is the end of
your data. With RAID6, you still have a fighting chance at recovery.

> Thank you in advance for any/all information given. And a big thank
> you to Neil and the other developers of linux-raid for their efforts
> on this great tool.

Despite the occasional protestations to the contrary, MD raid is a
robust and useful RAID layer, and not a "hobby" layer. We use it
extensively, as do many others.

--
Joe Landman
landman@scalableinformatics.com
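A rough check of the error-rate arithmetic above, using a simple
independent-error (Poisson) model. The model itself and the assumption
of 1TB drives (8E12 bits each, as in the original poster's array) are
my own; treat the results as order-of-magnitude figures only.

awk 'BEGIN {
    ber14 = 1e-14; ber15 = 1e-15
    scan    = 8 * 8e12        # bits read in one full scan of 8x 1TB drives
    rebuild = 7 * 8e12        # bits read to rebuild one failed 1TB drive
    printf "one scan,     BER 1e-14: %4.1f%%\n", 100 * (1 - exp(-scan * ber14))
    printf "one rebuild,  BER 1e-14: %4.1f%%\n", 100 * (1 - exp(-rebuild * ber14))
    printf "eight scans,  BER 1e-14: %4.1f%%\n", 100 * (1 - exp(-8 * scan * ber14))
    printf "one rebuild,  BER 1e-15: %4.1f%%\n", 100 * (1 - exp(-rebuild * ber15))
}'

With 1E-14-class drives even a single rebuild of an 8x1TB array has a
sizeable chance of tripping over an unrecoverable sector, which is the
heart of the BAARF argument; at 1E-15 the odds improve by roughly a
factor of ten.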
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: Larry Schwerzler @ 2011-02-19 1:33 UTC
To: Joe Landman; +Cc: linux-raid

Joe, thanks for the info; responses/questions inline.

On Fri, Feb 18, 2011 at 5:12 PM, Joe Landman <joe.landman@gmail.com> wrote:
> On 02/18/2011 03:55 PM, Larry Schwerzler wrote:
>
> [...]
>
>> Questions:
>>
>> 1. In my research of raid10 I very seldom hear of drive configurations
>> with more than 4 drives. Are there special considerations with having
>> an 8-drive raid10 array? I understand that I'll be losing 2TB of space
>> from my current setup, but I'm not too worried about that.
>
> If you are going to set this up, I'd suggest a few things.
>
> 1st: try to use a PCI HBA with enough ports, not the motherboard ports.

I use the SANS DIGITAL HA-DAT-4ESPCIE PCI-Express x8 SATA II card with
the SANS DIGITAL TR8M-B 8 Bay SATA to eSATA (Port Multiplier) JBOD
enclosure, so I'm most of the way there, just eSATA instead of SAS. I
didn't realize that the eSATA connections had issues like this, else I
would have avoided it, though at the time the extra cost of a SAS card
that could expand to a total of 16 external hard drives would have been
prohibitive.

> 2nd: eSATA is probably not a good idea (see your issue below).
>
> 3rd: I'd suggest getting 10 drives and using 2 as hot spares. Again, not
> using eSATA. Use an internal PCIe card that provides a reasonable chip.
> If you can't house the drives internal to your machine, get a x4 or x8
> JBOD/RAID canister with a single (or possibly 2) SAS cables. But
> seriously, lose the eSATA setup.

I may see about getting an extra drive or two to act as hot spares.

>> 2. One problem I'm having with my current setup is that the eSATA
>> cables have been knocked loose, which effectively drops 4 of my drives.
>> I'd really like to be able to survive this type of sudden drive loss.
>> If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
>> while efgh are on the other, what drive order should I create the array
>> with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
>> one of my eSATA channels went dark?
>
> Usually the on-board eSATA chips are very low cost, low bandwidth units.
> Spend another $150-200 on a dual external SAS HBA, and get the JBOD
> container.

I'd be interested in any specific recommendations anyone might have for
a $200 or so card and JBOD enclosure that could house at least 8
drives. Off-list is fine, so as to not spam the list. I have zero
experience with SAS; does it not run into the issues that my eSATA
setup does?

>> 3. One of the concerns I have with raid10 is expandability, and I'm
>> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
>> it will likely be a while before I see that ability in my distro. I did
>> find a guide on expanding array size when using LVM: increase the size
>> of each drive and create two partitions, one the size of the original
>> drive and one with the remainder of the new space. Once you have done
>> this for all drives, you create a new raid10 array with the second
>> partitions on all the drives and add it to the LVM volume group.
>> Effectively you have two raid10 arrays, one on the first half of the
>> drives and one on the second half, with the space pooled together.
>> I'm sure many of you are familiar with this scenario, but I'm wondering
>> if it could be problematic: is having two raid10 arrays on one drive
>> an issue?
>
> We'd recommend against this. Too much seeking.

So the raid10 expansion solution is again to wait for raid10 reshaping
in the mdraid tools, or start from scratch. I thought that maybe with
LVM, since it wouldn't be striping the data across the arrays, it would
mostly be accessing the info from one array at a time. I don't know
enough about the way that LVM stores the data to know otherwise,
though.

>> 4. Part of the reason I'm wanting to switch is information I read on
>> the "BAARF" site pointing out some of the issues in the parity raids
>> that people sometimes don't think about. (site:
>> http://www.miracleas.com/BAARF/BAARF2.html) A lot of the information on
>> the site is a few years old now, and given how fast things can change,
>> and the fact that I have not found many people complaining about the
>> parity raids, I'm wondering if some or all of the gotchas they list are
>> less of an issue now? Maybe my reasons for moving to raid10 are no
>> longer relevant?
>
> Things have gotten worse. The BERs are improving a bit (most reasonable
> SATA drives now report 1E-15 as their rate, compared with 1E-14
> previously). Remember, 2TB = 1.6E13 bits, so 10x 2TB drives together is
> 1.6E14 bits. At a 1E-14 error rate, 8 scans or rebuilds will get you to
> a statistical near certainty of hitting an unrecoverable error.
>
> RAID6 buys you a little more time than RAID5, but you still have
> worries due to the time-correlated second drive failure. Google found a
> peak at 1000s after the first drive failure (which likely corresponds
> to an error on rebuild). With RAID5, that second error is the end of
> your data. With RAID6, you still have a fighting chance at recovery.

This is what really scares me; it seems like a false sense of security
as your drive size increases. I'm hoping for a better chance with
raid10.

>> Thank you in advance for any/all information given. And a big thank
>> you to Neil and the other developers of linux-raid for their efforts
>> on this great tool.
>
> Despite the occasional protestations to the contrary, MD raid is a
> robust and useful RAID layer, and not a "hobby" layer. We use it
> extensively, as do many others.
>
> --
> Joe Landman
> landman@scalableinformatics.com
* Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
From: NeilBrown @ 2011-02-19 3:59 UTC
To: Larry Schwerzler; +Cc: linux-raid

On Fri, 18 Feb 2011 12:55:05 -0800 Larry Schwerzler
<larry@schwerzler.com> wrote:

> 2. One problem I'm having with my current setup is that the eSATA
> cables have been knocked loose, which effectively drops 4 of my drives.
> I'd really like to be able to survive this type of sudden drive loss.
> If my drives are /dev/sd[abcdefgh], and abcd are on one eSATA channel
> while efgh are on the other, what drive order should I create the array
> with? I'd guess /dev/sd[aebfcgdh]; would that give me survivability if
> one of my eSATA channels went dark?

Yes, in md/raid10 the multiple copies are on 'adjacent' devices (in the
sequence given to --create).

Of course, you wouldn't actually use the string /dev/sd[aebfcgdh], as
the shell expands that glob in alphabetical order:

$ echo /dev/sd[aebfcgdh]
/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

Instead use this:

$ echo /dev/sd{a,e,b,f,c,g,d,h}
/dev/sda /dev/sde /dev/sdb /dev/sdf /dev/sdc /dev/sdg /dev/sdd /dev/sdh

NeilBrown
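Folding Neil's ordering into a complete create command. This is a
minimal sketch only: the array name /dev/md0, the chunk size, and the
near-2 layout are illustrative assumptions, and if you prefer the far
(f2) layout you would substitute --layout=f2 and verify the channel
pairing yourself before trusting data to it.

# abcd sit on one eSATA channel, efgh on the other; interleaving them
# puts the two copies of each block on devices from different channels.
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 \
    --raid-devices=8 /dev/sd{a,e,b,f,c,g,d,h}

# Confirm which device landed in which slot before filling the array.
mdadm --detail /dev/md0
cat /proc/mdstat

The safest check is a deliberate test on a scratch array: pull one
eSATA cable and confirm the array keeps running in degraded mode.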