* Recommendations for RAID setup needed
@ 2015-09-15 17:39 Alex
2015-09-15 18:08 ` Wols Lists
0 siblings, 1 reply; 12+ messages in thread
From: Alex @ 2015-09-15 17:39 UTC (permalink / raw)
To: Linux RAID
Hi,
I have a fedora22 system and would like to build a backup server. I
have four 3TB SATA disks and would like to build a RAID5 array. I
understand rebuild times can be extensive, possibly creating a
scenario where another disk fails during the rebuild, but I'm
not sure I want to lose the extra space by creating a RAID6 array. I
believe RAID5 also has faster write speeds?
Is a 9TB RAID5 partition too risky in terms of rebuild time?
What's the preferred filesystem for a backup server these days? Should
I use XFS or ext4?
Thanks,
Alex
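The capacity trade-off being weighed here can be put in numbers. A quick sketch (the mdadm invocations are illustrative only, use placeholder device names, and are echoed rather than run):

```shell
# Usable capacity: RAID5 keeps n-1 disks' worth of space, RAID6 keeps n-2.
disks=4
size_tb=3
raid5_tb=$(( (disks - 1) * size_tb ))
raid6_tb=$(( (disks - 2) * size_tb ))
echo "RAID5: ${raid5_tb}TB usable, RAID6: ${raid6_tb}TB usable"

# Creating either array would look roughly like this (placeholder
# device names -- echoed here, not executed):
echo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
echo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

On write speed: RAID5 computes one parity block per stripe versus RAID6's two, so RAID5 writes are somewhat cheaper; whether that matters for a backup workload is another question.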
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Recommendations for RAID setup needed
2015-09-15 17:39 Recommendations for RAID setup needed Alex
@ 2015-09-15 18:08 ` Wols Lists
2015-09-15 18:44 ` Alex
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Wols Lists @ 2015-09-15 18:08 UTC (permalink / raw)
To: Alex, Linux RAID
On 15/09/15 18:39, Alex wrote:
> Hi,
> I have a fedora22 system and would like to build a backup server. I
> have four 3TB SATA disks and would like to build a RAID5 array. I
> understand rebuild times can be extensive, possibly creating a
> scenario where another disk fails during the rebuild, but I'm
> not sure I want to lose the extra space by creating a RAID6 array. I
> believe RAID5 also has faster write speeds?
What disks are you using? Are they proper raid disks? A 12TB array can
have a soft read error every complete pass, and still be within the
disk-manufacturer's specs. If your disks are not raid-compliant, this
will stop your array from rebuilding, ever!
(Chances are, your disks are above spec and won't give a problem. Do you
want to take the risk?)
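To put a rough number on that risk: many consumer drives are specified at one unrecoverable read error (URE) per 1e14 bits read. A back-of-the-envelope sketch, assuming that worst-case spec and independent errors (both simplifications), for reading the ~9TB on the three surviving disks during a rebuild:

```shell
# P(>=1 URE) = 1 - (1 - p)^bits, with p = 1e-14 per bit read and
# 9TB ~= 9e12 bytes * 8 bits to read during the rebuild.
p_ure=$(awk 'BEGIN { bits = 9e12 * 8; p = 1e-14;
                     printf "%.2f", 1 - exp(bits * log(1 - p)) }')
echo "Estimated probability of at least one URE during rebuild: $p_ure"
```

Roughly a coin flip at the rated spec; as noted above, real drives usually do much better than their rated URE figure.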
>
> Is a 9TB RAID5 partition too risky in terms of rebuild time?
>
> What's the preferred filesystem for a backup server these days? Should
> I use XFS or ext4?
Throwing something completely different into the mix, how about
considering btrfs? It's not 100% solid yet, so you need to be careful
with it, but if you back up with rsync and the "in place" option, it'll
give you full backups for the cost of incremental.
What you MUST do is KEEP AN EYE ON DISK SPACE! The main failure mode for
btrfs I'm aware of is that a full disk can cause a fatal error. As in
"I've just trashed the disk - it's 'format c:' time". So if you hit 80%
or so, alarm bells should be ringing. Very loud.
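The rsync-plus-btrfs scheme described above could be sketched like this (hypothetical paths; the commands are echoed rather than executed, since they need a real btrfs mount):

```shell
backup=/mnt/btrfs/backup                # hypothetical destination subvolume
snap=/mnt/btrfs/snapshots/$(date +%F)   # dated read-only snapshot
# Snapshot the current tree first (near-free via CoW), then let rsync
# overwrite only changed blocks in place, so unchanged data stays
# shared between the live copy and every snapshot.
snap_cmd="btrfs subvolume snapshot -r $backup $snap"
sync_cmd="rsync -a --inplace --delete /srv/data/ $backup/"
echo "$snap_cmd"
echo "$sync_cmd"
```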
>
> Thanks,
> Alex
Cheers,
Wol
* Re: Recommendations for RAID setup needed
2015-09-15 18:08 ` Wols Lists
@ 2015-09-15 18:44 ` Alex
2015-09-16 2:12 ` Adam Goryachev
2015-09-15 18:57 ` Roman Mamedov
2015-09-22 4:16 ` David C. Rankin
2 siblings, 1 reply; 12+ messages in thread
From: Alex @ 2015-09-15 18:44 UTC (permalink / raw)
To: Wols Lists; +Cc: Linux RAID
Hi,
>> I have a fedora22 system and would like to build a backup server. I
>> have four 3TB SATA disks and would like to build a RAID5 array. I
>> understand rebuild times can be extensive, possibly creating a
>> scenario where another disk fails during the rebuild, but I'm
>> not sure I want to lose the extra space by creating a RAID6 array. I
>> believe RAID5 also has faster write speeds?
>
> What disks are you using? Are they proper raid disks? A 12TB array can
> have a soft read error every complete pass, and still be within the
> disk-manufacturer's specs. If your disks are not raid-compliant, this
> will stop your array from rebuilding, ever!
All four are WD30EFRX-68EUZN0. They're not the cheapest WD disks, but
they're also not the ones with the 5yr warranty. The last array I
built using disks with 5yr warranty exceeded their capacity before the
warranty expired.
> (Chances are, your disks are above spec and won't give a problem. Do you
> want to take the risk?)
There's always going to be some kind of risk, but I'm hoping someone
with the technical understanding about disk failure rates can tell me
if it's a prudent decision or not.
>> Is a 9TB RAID5 partition too risky in terms of rebuild time?
>>
>> What's the preferred filesystem for a backup server these days? Should
>> I use XFS or ext4?
>
> Throwing something completely different into the mix, how about
> considering btrfs? It's not 100% solid yet, so you need to be careful
> with it, but if you back up with rsync and the "in place" option, it'll
> give you full backups for the cost of incremental.
I'm not sure I'm ready for something so experimental.
I am in fact using the hard-link function of rsync to perform backups,
though. We have a pretty robust perl script that's evolved over time.
I was also thinking of implementing bacula, but not sure I have the
time to figure it out right now.
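For reference, the hard-link approach mentioned above is typically built on rsync's --link-dest; a minimal sketch with hypothetical paths (commands are echoed, not run):

```shell
root=/backup/host1          # hypothetical backup root
today=$root/$(date +%F)     # new dated tree
prev=$root/latest           # symlink to the previous backup
# Unchanged files become hard links into $prev, so each dated tree is a
# full backup that costs only the space of the changed files.
rsync_cmd="rsync -a --delete --link-dest=$prev /srv/data/ $today/"
echo "$rsync_cmd"
echo ln -sfn "$today" "$prev"   # repoint 'latest' after a successful run
```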
Thanks,
Alex
* Re: Recommendations for RAID setup needed
2015-09-15 18:08 ` Wols Lists
2015-09-15 18:44 ` Alex
@ 2015-09-15 18:57 ` Roman Mamedov
2015-09-22 4:18 ` David C. Rankin
2015-09-22 4:16 ` David C. Rankin
2 siblings, 1 reply; 12+ messages in thread
From: Roman Mamedov @ 2015-09-15 18:57 UTC (permalink / raw)
To: Wols Lists; +Cc: Alex, Linux RAID
On Tue, 15 Sep 2015 19:08:31 +0100
Wols Lists <antlists@youngman.org.uk> wrote:
> Throwing something completely different into the mix, how about
> considering btrfs? It's not 100% solid yet, so you need to be careful
> with it, but if you back up with rsync and the "in place" option, it'll
> give you full backups for the cost of incremental.
However its RAID5/6 is not ready yet and even RAID1/10 lack some important
features compared to mdadm, not to mention performance optimizations.
On the other hand, I have had great success running Btrfs without utilizing
its own RAID features, but as a regular filesystem on top of MD RAID5/6.
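A sketch of that layering, with MD providing the redundancy and Btrfs used as a plain filesystem on top (placeholder device names; commands are echoed, not run):

```shell
# Let MD handle redundancy; "-d single" keeps Btrfs's own data-RAID out
# of the picture, while "-m dup" duplicates metadata within the one
# (logical) device.
md_cmd="mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde"
mkfs_cmd="mkfs.btrfs -d single -m dup /dev/md0"
echo "$md_cmd"
echo "$mkfs_cmd"
echo mount /dev/md0 /mnt/backup
```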
> What you MUST do is KEEP AN EYE ON DISK SPACE! The main failure mode for
> btrfs I'm aware of, is that a disk full can cause a fatal error. As in
> "I've just trashed the disk - it's 'format c:' time". So if you hit 80%
> or so, alarm bells should be ringing. Very loud.
And this is just absolutely deranged, baseless FUD today, or at best
something that might have been true about five years ago -- eons in
Btrfs development.
--
With respect,
Roman
* Re: Recommendations for RAID setup needed
2015-09-15 18:44 ` Alex
@ 2015-09-16 2:12 ` Adam Goryachev
2015-09-16 10:03 ` Jens-U. Mozdzen
2015-09-16 15:53 ` Alex
0 siblings, 2 replies; 12+ messages in thread
From: Adam Goryachev @ 2015-09-16 2:12 UTC (permalink / raw)
To: Alex, Wols Lists; +Cc: Linux RAID
On 16/09/15 04:44, Alex wrote:
> Hi,
>
>>> I have a fedora22 system and would like to build a backup server. I
>>> have four 3TB SATA disks and would like to build a RAID5 array. I
>>> understand rebuild times can be extensive, possibly creating a
>>> scenario where another disk fails during the rebuild, but I'm
>>> not sure I want to lose the extra space by creating a RAID6 array. I
>>> believe RAID5 also has faster write speeds?
>> What disks are you using? Are they proper raid disks? A 12TB array can
>> have a soft read error every complete pass, and still be within the
>> disk-manufacturer's specs. If your disks are not raid-compliant, this
>> will stop your array from rebuilding, ever!
> All four are WD30EFRX-68EUZN0. They're not the cheapest WD disks, but
> they're also not the ones with the 5yr warranty. The last array I
> built using disks with 5yr warranty exceeded their capacity before the
> warranty expired.
Umm, the first Google result showed this:
http://community.wd.com/t5/Desktop-Mobile-Drives/New-WD30EFRX-Red-Drive-Idle3-Timer-Set-to-8-Seconds-High-LCC-in/td-p/648821/page/5
You might want to verify that setting before using the drives in
production, and probably do a quick search/read on any other issues.
>> (Chances are, your disks are above spec and won't give a problem. Do you
>> want to take the risk?)
> There's always going to be some kind of risk, but I'm hoping someone
> with the technical understanding about disk failure rates can tell me
> if it's a prudent decision or not.
That depends on your requirements. What are the implications (for you)
if all the data is lost because two drives failed close to the same
time? Is that resulting cost more or less than getting an additional
drive and using RAID6?
>>> Is a 9TB RAID5 partition too risky in terms of rebuild time?
>>>
>>> What's the preferred filesystem for a backup server these days? Should
>>> I use XFS or ext4?
>> Throwing something completely different into the mix, how about
>> considering btrfs? It's not 100% solid yet, so you need to be careful
>> with it, but if you back up with rsync and the "in place" option, it'll
>> give you full backups for the cost of incremental.
> I'm not sure I'm ready for something so experimental.
>
> I am in fact using the hard-link function of rsync to perform backups,
> though. We have a pretty robust perl script that's evolved over time.
> I was also thinking of implementing bacula, but not sure I have the
> time to figure it out right now.
Take a look at BackupPC (on sourceforge). It is a "perl script" that
uses rsync plus hardlinks, but also does a whole lot more. It has worked
very well for me for a number of years.
Personally, I'm also still waiting for a more "stable" version of btrfs
or equivalent which can do block level de-dupe.
Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
* Re: Recommendations for RAID setup needed
2015-09-16 2:12 ` Adam Goryachev
@ 2015-09-16 10:03 ` Jens-U. Mozdzen
2015-09-16 15:53 ` Alex
1 sibling, 0 replies; 12+ messages in thread
From: Jens-U. Mozdzen @ 2015-09-16 10:03 UTC (permalink / raw)
Cc: Linux RAID
Hi Alex & *,
Zitat von Adam Goryachev <mailinglists@websitemanagers.com.au>:
> On 16/09/15 04:44, Alex wrote:
>> Hi,
>>
>>>> [...] but I'm
>>>> not sure I want to lose the extra space by creating a RAID6 array. I
>>>> believe RAID5 also has faster write speeds?
>>> What disks are you using? Are they proper raid disks? A 12TB array can
>>> have a soft read error every complete pass, and still be within the
>>> disk-manufacturer's specs. If your disks are not raid-compliant, this
>>> will stop your array from rebuilding, ever!
>> All four are WD30EFRX-68EUZN0. They're not the cheapest WD disks, but
>> they're also not the ones with the 5yr warranty. The last array I
>> built using disks with 5yr warranty exceeded their capacity before the
>> warranty expired.
> [...]
>>> (Chances are, your disks are above spec and won't give a problem. Do you
>>> want to take the risk?)
>> There's always going to be some kind of risk, but I'm hoping someone
>> with the technical understanding about disk failure rates can tell me
>> if it's a prudent decision or not.
> That depends on your requirements. What are the implications (for
> you) if all the data is lost because two drives failed close to the
> same time? Is that resulting cost more or less than getting an
> additional drive and using RAID6?
I just want to point out that double and triple disk faults are far
from merely theoretical - we had a RAID6 crash last year, where three
of 11 disks failed within 24 hours. Two were from one batch, the third
from a different batch.
Alex, you initially said this is to be a backup server - so if that
means the data is already redundant, then you may stick with RAID5.
But if this is your main data storage and you rely on that data being
available (i.e. sooner than restoring from backup would take), go for
RAID6 - and keep in mind that even that won't protect you from total
RAID loss.
Regards,
Jens
* Re: Recommendations for RAID setup needed
2015-09-16 2:12 ` Adam Goryachev
2015-09-16 10:03 ` Jens-U. Mozdzen
@ 2015-09-16 15:53 ` Alex
2015-09-18 11:25 ` Michael Tokarev
1 sibling, 1 reply; 12+ messages in thread
From: Alex @ 2015-09-16 15:53 UTC (permalink / raw)
To: Adam Goryachev; +Cc: Wols Lists, Linux RAID
Hi,
>>> What disks are you using? Are they proper raid disks? A 12TB array can
>>> have a soft read error every complete pass, and still be within the
>>> disk-manufacturer's specs. If your disks are not raid-compliant, this
>>> will stop your array from rebuilding, ever!
>>
>> All four are WD30EFRX-68EUZN0. They're not the cheapest WD disks, but
>> they're also not the ones with the 5yr warranty. The last array I
>> built using disks with 5yr warranty exceeded their capacity before the
>> warranty expired.
>
> Umm, 1st google result showed this:
> http://community.wd.com/t5/Desktop-Mobile-Drives/New-WD30EFRX-Red-Drive-Idle3-Timer-Set-to-8-Seconds-High-LCC-in/td-p/648821/page/5
I was really just looking for general input on RAID5 vs RAID6, but
that is good information. I knew the drives weren't basic desktop
drives and would be generally suitable for building a software RAID
array.
Are you familiar with the idle3 timer? It appears idle3-tools can
be used to disable the idle3 timer entirely, which would prevent the
heads from parking at all, correct?
> You might want to verify that setting before using in production, and
> probably a quick search/read on any other issues.
Perhaps I should have posted here prior to ordering the drives. Do you
have any recommendations for 3TB SATA disks I should have used?
> That depends on your requirements. What are the implications (for you) if
> all the data is lost because two drives failed close to the same time? Is
> that resulting cost more or less than getting an additional drive and
> using RAID6?
I was hoping for some kind of emphatic NO, that it's a really bad
idea. I'll consider it further, but probably choose RAID6 then anyway.
> Take a look at BackupPC (on sourceforge). It is a "perl script" that uses
> rsync plus hardlinks, but also does a whole lot more. It has worked very
> well for me for a number of years.
Okay, great.
> Personally, I'm also still waiting for a more "stable" version of btrfs or
> equivalent which can do block level de-dupe.
Awesome, thanks.
Alex
* Re: Recommendations for RAID setup needed
2015-09-16 15:53 ` Alex
@ 2015-09-18 11:25 ` Michael Tokarev
2015-09-18 12:55 ` Alex
0 siblings, 1 reply; 12+ messages in thread
From: Michael Tokarev @ 2015-09-18 11:25 UTC (permalink / raw)
To: Alex, Adam Goryachev; +Cc: Wols Lists, Linux RAID
16.09.2015 18:53, Alex wrote:
[]
>>> All four are WD30EFRX-68EUZN0. They're not the cheapest WD disks, but
>>> they're also not the ones with the 5yr warranty. The last array I
>>> built using disks with 5yr warranty exceeded their capacity before the
>>> warranty expired.
>>
>> Umm, 1st google result showed this:
>> http://community.wd.com/t5/Desktop-Mobile-Drives/New-WD30EFRX-Red-Drive-Idle3-Timer-Set-to-8-Seconds-High-LCC-in/td-p/648821/page/5
>
> I was really just looking for general input on RAID5 vs RAID6, but
> that is good information. I knew the drives weren't basic desktop
> drives and would be generally suitable for building a software RAID
> array.
>
> Are you familiar with the idle3 time? It appears the idle3-tools can
> be used to disable the idle3 timer entirely, which would disable
> parking the head at all, correct?
An addition. From hdparm(8) manpage:

  -J     Get/set the Western Digital (WD) Green Drive's "idle3" timeout
         value. This timeout controls how often the drive parks its
         heads and enters a low power consumption state. The factory
         default is eight (8) seconds, which is a very poor choice for
         use with Linux. Leaving it at the default will result in
         hundreds of thousands of head load/unload cycles in a very
         short period of time. The drive mechanism is only rated for
         300,000 to 1,000,000 cycles, so leaving it at the default
         could result in premature failure, not to mention the
         performance impact of the drive often having to wake-up
         before doing routine I/O.

         WD supply a WDIDLE3.EXE DOS utility for tweaking this setting,
         and you should use that program instead of hdparm if at all
         possible. The reverse-engineered implementation in hdparm is
         not as complete as the original official program, even though
         it does seem to work on at least a few drives. A full power
         cycle is required for any change in setting to take effect,
         regardless of which program is used to tweak things.

         A setting of 30 seconds is recommended for Linux use.
         Permitted values are from 8 to 12 seconds, and from 30 to 300
         seconds in 30-second increments. Specify a value of zero (0)
         to disable the WD idle3 timer completely (NOT RECOMMENDED!).
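Following the manpage, checking and adjusting the timer with hdparm would look like this (placeholder device name; commands are echoed only, since -J writes to drive firmware and a full power cycle is needed for a change to take effect):

```shell
dev=/dev/sdX                 # placeholder -- substitute the actual drive
get_cmd="hdparm -J $dev"     # query the current idle3 timeout
set_cmd="hdparm -J 30 $dev"  # 30s, the manpage's recommended Linux value
echo "$get_cmd"
echo "$set_cmd"
```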
Thanks,
/mjt
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Recommendations for RAID setup needed
2015-09-18 11:25 ` Michael Tokarev
@ 2015-09-18 12:55 ` Alex
0 siblings, 0 replies; 12+ messages in thread
From: Alex @ 2015-09-18 12:55 UTC (permalink / raw)
To: Michael Tokarev; +Cc: Adam Goryachev, Wols Lists, Linux RAID
Hi,
> An addition. From hdparm(8) manpage:
>
>   -J     Get/set the Western Digital (WD) Green Drive's "idle3" timeout
>          value. This timeout controls how often the drive parks its
>          heads and enters a low power consumption state. The factory
>          default is eight (8) seconds, which is a very poor choice for
>          use with Linux. Leaving it at the default will result in
>          hundreds of thousands of head load/unload cycles in a very
>          short period of time. The drive mechanism is only rated for
>          300,000 to 1,000,000 cycles, so leaving it at the default
>          could result in premature failure, not to mention the
>          performance impact of the drive often having to wake-up
>          before doing routine I/O.
>
>          WD supply a WDIDLE3.EXE DOS utility for tweaking this setting,
>          and you should use that program instead of hdparm if at all
>          possible. The reverse-engineered implementation in hdparm is
>          not as complete as the original official program, even though
>          it does seem to work on at least a few drives. A full power
>          cycle is required for any change in setting to take effect,
>          regardless of which program is used to tweak things.
>
>          A setting of 30 seconds is recommended for Linux use.
>          Permitted values are from 8 to 12 seconds, and from 30 to 300
>          seconds in 30-second increments. Specify a value of zero (0)
>          to disable the WD idle3 timer completely (NOT RECOMMENDED!).
Thanks very much for the info. I posted to the WD community forum and
didn't receive any responses, so this is much appreciated. I'd like to
understand why disabling it isn't recommended (that's what I was going
to do), but I'll have to just accept this for now and move on.
In addition to hdparm, there's also the idle3-tools package and the
idle3ctl binary.
Thanks,
Alex
* Re: Recommendations for RAID setup needed
2015-09-15 18:08 ` Wols Lists
2015-09-15 18:44 ` Alex
2015-09-15 18:57 ` Roman Mamedov
@ 2015-09-22 4:16 ` David C. Rankin
2015-09-22 5:03 ` Roman Mamedov
2 siblings, 1 reply; 12+ messages in thread
From: David C. Rankin @ 2015-09-22 4:16 UTC (permalink / raw)
To: linux-raid
On 9/15/2015 1:08 PM, Wols Lists wrote:
> Throwing something completely different into the mix, how about
> considering btrfs? It's not 100% solid yet, so you need to be careful
> with it, but if you back up with rsync and the "in place" option, it'll
> give you full backups for the cost of incremental.
>
> What you MUST do is KEEP AN EYE ON DISK SPACE! The main failure mode for
> btrfs I'm aware of, is that a disk full can cause a fatal error. As in
> "I've just trashed the disk - it's 'format c:' time". So if you hit 80%
> or so, alarm bells should be ringing. Very loud.
This is the exact reason NOT to use btrfs at the moment. The issue is
with the snapshotting feature, which can quietly fill your disk with
snapshots to the point of space exhaustion; this has resulted in the
complete loss of data in a number of instances.
Why would you recommend a filesystem when "It's not 100% solid yet" to
someone looking for enhanced data integrity?
Your warning is well taken, and while btrfs is getting better all the
time, for a raid install where the object is to eliminate (or greatly
lessen) the chance of data loss, stick with a tried and true filesystem
unless you want to become an unwitting beta-tester with your data on the
line...
--
David C. Rankin, J.D.,P.E.
* Re: Recommendations for RAID setup needed
2015-09-15 18:57 ` Roman Mamedov
@ 2015-09-22 4:18 ` David C. Rankin
0 siblings, 0 replies; 12+ messages in thread
From: David C. Rankin @ 2015-09-22 4:18 UTC (permalink / raw)
To: linux-raid
On 9/15/2015 1:57 PM, Roman Mamedov wrote:
> And this is just absolutely deranged, baseless FUD today, or at best
> something that might have been true about five years ago -- eons in
> Btrfs development.
Not sure that is 100% true. We still see data losses reported to the
various distro mailing lists. openSUSE most recently.
--
David C. Rankin, J.D.,P.E.
* Re: Recommendations for RAID setup needed
2015-09-22 4:16 ` David C. Rankin
@ 2015-09-22 5:03 ` Roman Mamedov
0 siblings, 0 replies; 12+ messages in thread
From: Roman Mamedov @ 2015-09-22 5:03 UTC (permalink / raw)
To: David C. Rankin; +Cc: linux-raid
On Mon, 21 Sep 2015 23:16:26 -0500
"David C. Rankin" <drankinatty@suddenlinkmail.com> wrote:
> This is the exact reason NOT to use btrfs at the moment. The issue is
> with the snapshotting feature, which can quietly fill your disk with
> snapshots to the point of space exhaustion; this has resulted in the
> complete loss of data in a number of instances.
Snapshots are not something Btrfs "quietly" makes on its own; snapshots are
something *you* create. Snapshots of static, unmodified data are essentially
free in terms of disk space (due to CoW, only the changes made after the
point the snapshot was taken require additional space); and if you heavily
modify your files and also want to keep a trail of snapshots, it is of
course up to you to ensure you have enough disk space for that.
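A back-of-the-envelope illustration of that CoW accounting (the numbers are invented for the example):

```shell
# With CoW, each snapshot costs only the data changed after it is taken.
# E.g. 1TB of data, 30 daily snapshots, ~5GB of churn per day:
data_gb=1000
days=30
churn_gb=5
total_gb=$(( data_gb + days * churn_gb ))
echo "Rough space for ${days} daily snapshots: ${total_gb}GB, not $(( days * data_gb ))GB"
```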
--
With respect,
Roman
Thread overview: 12+ messages
2015-09-15 17:39 Recommendations for RAID setup needed Alex
2015-09-15 18:08 ` Wols Lists
2015-09-15 18:44 ` Alex
2015-09-16 2:12 ` Adam Goryachev
2015-09-16 10:03 ` Jens-U. Mozdzen
2015-09-16 15:53 ` Alex
2015-09-18 11:25 ` Michael Tokarev
2015-09-18 12:55 ` Alex
2015-09-15 18:57 ` Roman Mamedov
2015-09-22 4:18 ` David C. Rankin
2015-09-22 4:16 ` David C. Rankin
2015-09-22 5:03 ` Roman Mamedov