linux-raid.vger.kernel.org archive mirror
* question about the best suited RAID level/layout
@ 2013-07-04 18:17 Christoph Anton Mitterer
  2013-07-04 21:43 ` Phil Turmel
  2013-07-05 11:10 ` David Brown
  0 siblings, 2 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-04 18:17 UTC (permalink / raw)
  To: linux-raid


Hi.

I'm setting up a 5-bay NAS (based on a QNAP device), with my personal
Debian on it, currently using only 4 devices, though.
The focus is absolutely on data security/resilience,... and not at all
on performance.

For that reason, I bought 4 different (i.e. different vendors) enterprise
SATA HDDs, well, actually only three different vendors with one type used
twice, since there aren't that many vendors left, with the intention that,
if there are flaws in a firmware or a production series, hopefully not all
devices are hit.


Now the question is which RAID level to use, and I guess with the main
focus on resilience there are basically only these options:

1) RAID1 with all disks mirrored (i.e. 4 copies of each chunk)
I don't want that,... while it's the most secure one... it costs too
many disks (I'd like to have two of them usable, i.e. 2x 4TB)


2) RAID1+0
AFAIU, it's in every way (subtly) worse than RAID10, so no choice?


3) RAID0+1
AFAIU, it has a higher probability of the RAID ending up broken...
See e.g. http://aput.net/~jheiss/raid10/
btw: Question is... is that really true? Sure, mdadm will think
that a 0+1 might be broken... but the data may still be _completely_ in
it... and one just has to manually get it out?!


4) RAID6 vs. RAID10
I would tend towards RAID6, since I think it's more secure, as _ANY_
two disks may fail,... and not just two disks from different RAID1
sets within it.
And things would probably be easier if I ever start to use the 5th
bay...

Any pro/contra arguments?

What about the layout options for RAID6 (if that would be THE choice)?



Some more general questions in an extra mail a few minutes from now :)


Thanks so far,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-04 18:17 question about the best suited RAID level/layout Christoph Anton Mitterer
@ 2013-07-04 21:43 ` Phil Turmel
  2013-07-04 22:58   ` Christoph Anton Mitterer
  2013-07-05 11:10 ` David Brown
  1 sibling, 1 reply; 27+ messages in thread
From: Phil Turmel @ 2013-07-04 21:43 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 07/04/2013 02:17 PM, Christoph Anton Mitterer wrote:
> Hi.
> 
> I'm setting up a 5-bay NAS (based on a QNAP device), with my personal
> Debian on it, currently using only 4 devices, though.
> The focus is absolutely on data security/resilience,... and not at all
> on performance.

This particular statement trumps all other considerations.

> Now the question is which RAID level to use, and I guess with the main
> focus on resilience there are basically only these options:

[snip /]

You covered all the basics.

From your own analysis, raid6 is the option that maximizes total storage
while achieving an "any two failures" resiliency.  Triple-copy raid10
across four drives can match that resiliency, with dramatically better
performance, but with a substantial cost in capacity.

Two-failure resilience is vital to completing recovery after replacing a
failed drive, particularly when the read error rates of consumer-grade
drives are involved.

In your specific case, raid6 has one additional advantage: making future
expansion to the fifth bay a reliable, simple, no downtime event.
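
The grow itself is little more than a one-liner; a rough sketch only,
assuming the array is /dev/md0 and the new member partition is /dev/sde1:

    mdadm /dev/md0 --add /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=5
    # the reshape runs in the background; watch it in /proc/mdstat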

In your situation, I would use raid6.  To mitigate the performance hit
on occasional random-access work, I would use a small chunk size (I use
16k).  That will somewhat hurt peak linear performance, but even
bluray-equivalent media streams only amount to 5 MB/s or so.  That would
be 80 IOPS per device in such a four-drive raid6.
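
As a sketch, creation would look something like this (partition names are
placeholders for your actual members; --chunk is given in KiB):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=16 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1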

Phil


* Re: question about the best suited RAID level/layout
  2013-07-04 21:43 ` Phil Turmel
@ 2013-07-04 22:58   ` Christoph Anton Mitterer
  2013-07-05  1:07     ` Brad Campbell
                       ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-04 22:58 UTC (permalink / raw)
  To: linux-raid


Hi Phil.


On Thu, 2013-07-04 at 17:43 -0400, Phil Turmel wrote:
> > The focus is absolutely on data security/resilience,... and not at all
> > on performance.
> This particular statement trumps all other considerations.
Sarcasm? (*Sheldon Cooper hat on*)


> Triple-copy raid10
> across four drives can match that resiliency, with dramatically better
> performance, but with a substantial cost in capacity.
hmm I've briefly thought about that as well (just forgot to mention
it)... for some reason (probably a non-reason) I've always had a bad
feeling with respect to that uneven mixing (i.e. three copies on four
disks), AFAIU that would look like (each same number being the same
chunk):
+---------+ +---------+ +---------+ +---------+
|   sda   | |   sdb   | |   sdc   | |   sdd   |
+---------+ +---------+ +---------+ +---------+
  0  1  2     0  1  3     0  2  3     1  2  3
  4  5  6     4  5  7     4  6  7     5  6  7
  8  9  10    8  9  11    8  10 11    9  10 11

And that gives me again, any 2 disks... but so much better performance?

With 4x 4TiB disks,.. RAID6 would give me 16/2 TiB... and the above
would give me 16/3 TiB?!
Quite a loss...

And AFAIU it doesn't give me any better resilience than RAID6 (by tricks
like probabilities or so)?

Can it be grown? Like when I want to use the 5th bay? What would it be
then, still any 2 out of 5?


> Two-failure resilience is vital to completing recovery after replacing a
> failed drive, particularly when the read error rates of consumer-grade
> drives are involved.
Well,... I have enterprise disks, and I have backups on different
media,... but nevertheless,... I wouldn't "risk" RAID5 for my precious
data

> In your specific case, raid6 has one additional advantage: making future
> expansion to the fifth bay a reliable, simple, no downtime event.
Ah... so I couldn't online/offline grow a RAID10 with n/f/o=3 ?


> In your situation, I would use raid6.  To mitigate the performance hit
> on occasional random-access work, I would use a small chunk size (I use
> 16k).  That will somewhat hurt peak linear performance, but even
> bluray-equivalent media streams only amount to 5 MB/s or so.  That would
> be 80 IOPS per device in such a four-drive raid6.
I think RAID6 will be what I go for, at least unless the RAID10 with
three copies gives me any resilience bonus, which I can't see right now.


Any ideas about the layout? i.e. left-symmetric-6, right-symmetric-6,
left-asymmetric-6, right-asymmetric-6, and parity-first-6 ?



I'd have even one more question here: Has anyone experience with my idea
of intentionally running devices of different vendors (Seagate, WD,
HGST)... for resilience reasons?... Does it work out as I plan, or are
there any hidden caveats I can't see which make the resilience (not the
performance) worse?



Thanks :),
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-04 22:58   ` Christoph Anton Mitterer
@ 2013-07-05  1:07     ` Brad Campbell
  2013-07-06  0:36       ` Christoph Anton Mitterer
  2013-07-05  1:12     ` Stan Hoeppner
  2013-07-05 13:36     ` Phil Turmel
  2 siblings, 1 reply; 27+ messages in thread
From: Brad Campbell @ 2013-07-05  1:07 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 05/07/13 06:58, Christoph Anton Mitterer wrote:

> I'd have even one more question here: Has anyone experience with my idea
> of intentionally running devices of different vendors (Seagate, WD,
> HGST)... for resilience reasons?... Does it work out as I plan, or are
> there any hidden caveats I can't see which make the resilience (not the
> performance) worse?

I've got both here. A large RAID-6 comprised entirely of single brand, 
single type consumer drives and a smaller RAID-10 built from a diverse 
selection. Both have had great reliability, so that's not really a good 
data point for you.

What I *have* found over the years is the importance of weeding out 
early failures. Before I commit a disk to service, I subject it to a 
couple of weeks of hard work. I usually knock up a quick and dirty bash 
script with multiple concurrent instances of dd reading and writing to 
differing parts of the disk simultaneously, with a bonnie++ run for good 
measure. With all this going on at the same time the drive mechanism 
gets a serious workout and the drive stays warmer than it will in actual 
service. If I have the chance, I do all the drives simultaneously and 
preferably in the machine they are going to spend the next couple of
years in. If I can't do that, then I have a burn-in chassis built from a
retired server that can do 15 at a time.
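
Nothing sophisticated; something along these lines (only a rough sketch, not
my actual script; it destroys everything on the target disk, and the device
name is a placeholder):

    #!/bin/bash
    # crude burn-in: concurrent dd writers and readers hammering
    # different regions of the same disk; DESTROYS all data on $DISK
    DISK=/dev/sdX
    SIZE_MB=$(( $(blockdev --getsize64 "$DISK") / 1024 / 1024 ))
    QUARTER=$(( SIZE_MB / 4 ))
    # write random data over the first and third quarters...
    dd if=/dev/urandom of="$DISK" bs=1M seek=0 count="$QUARTER" oflag=direct &
    dd if=/dev/urandom of="$DISK" bs=1M seek=$(( 2 * QUARTER )) \
       count="$QUARTER" oflag=direct &
    # ...while reading back the second and fourth quarters
    dd if="$DISK" of=/dev/null bs=1M skip="$QUARTER" count="$QUARTER" iflag=direct &
    dd if="$DISK" of=/dev/null bs=1M skip=$(( 3 * QUARTER )) \
       count="$QUARTER" iflag=direct &
    wait
    # a bonnie++ run on a scratch filesystem on the disk goes on top of this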

This has proven quite effective in spotting the early life failures. I 
generally find (for consumer drives) if they pass this they'll last the 
3 years of 24/7 I use them for before I replace them. My enterprise 
drives are a different story, and I have some here with just over 38k 
hours on them. I'll probably replace them with bigger drives before they
ever fail.

Regards,
Brad


* Re: question about the best suited RAID level/layout
  2013-07-04 22:58   ` Christoph Anton Mitterer
  2013-07-05  1:07     ` Brad Campbell
@ 2013-07-05  1:12     ` Stan Hoeppner
  2013-07-06  0:46       ` Christoph Anton Mitterer
  2013-07-05 13:36     ` Phil Turmel
  2 siblings, 1 reply; 27+ messages in thread
From: Stan Hoeppner @ 2013-07-05  1:12 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 7/4/2013 5:58 PM, Christoph Anton Mitterer wrote:

> I'd have even one more question here: Has anyone experience with my idea
> of intentionally running devices of different vendors (Seagate, WD,
> HGST)... for resilience reasons?... Does it work out as I plan, or are
> there any hidden caveats I can't see which make the resilience (not the
> performance) worse?

1.  You'll need to use partitions underneath md because the drives will
all be of slightly different capacities.  You'll need to identify the
capacity of the smallest drive and create partitions a few MiB smaller
than this on all drives.  This should assure that any replacement drive
which is slightly smaller will be usable in the array.
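
As a rough illustration only (the end point below assumes a nominal 4 TB
drive of about 3815447 MiB; read the real size of your own smallest drive
and do the maths yourself):

    # stop every partition ~100 MiB short of the smallest drive
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart raid 1MiB 3815347MiB
    parted -s /dev/sdX set 1 raid on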

Screwing around with all of this is a PITA.  When I build arrays I use
identical drives and put identical spares in storage.  I don't leave
hot/warm/cold spares in the chassis.  This simply degrades performance.
Hot swap was invented for a reason.  Some folks prefer online spares and
unattended auto rebuild.  Not every time a drive is kicked is a rebuild
required.  I wanna look things over before I start a rebuild.


2.  HGST is a brand created due to Western Digital's acquisition of
Hitachi's disk drive unit (Hitachi GST).  The drives are Hitachi's final
production units relabeled with a different name and serial number.
Three years from now when that HGST drive fails, Western Digital will
replace it with a Western Digital produced drive.

There are currently only 3 hard disk drive vendors on the planet
AFAIK:  Seagate, Toshiba, and Western Digital.  In the not too distant
future Toshiba's disk drive unit will probably be acquired by one of the
other two and we'll have two drive vendors.  At that point the lack of
entropy will make this type of "game plan" useless.  You'll have to pick
drives from different lines in each vendor's lineup, which means a mix
of consumer and enterprise models.  Then you'll lose ERC/TLER on some
drives.

IMO the practical disadvantages of using dissimilar drives outweigh the
theoretical benefits.

-- 
Stan



* Re: question about the best suited RAID level/layout
  2013-07-04 18:17 question about the best suited RAID level/layout Christoph Anton Mitterer
  2013-07-04 21:43 ` Phil Turmel
@ 2013-07-05 11:10 ` David Brown
  2013-07-06  0:55   ` Christoph Anton Mitterer
  1 sibling, 1 reply; 27+ messages in thread
From: David Brown @ 2013-07-05 11:10 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 04/07/13 20:17, Christoph Anton Mitterer wrote:
> Hi.
> 
> I'm setting up a 5-bay NAS (based on a QNAP device), with my personal
> Debian on it, currently using only 4 devices, though.
> The focus is absolutely on data security/resilience,... and not at all
> on performance.
> 

I apologise if this is preaching to the converted.  When you are
concerned about data resilience, RAID is only part of the answer - I
just want to make sure you have considered everything else.

The main benefit of RAID is availability - you don't have downtime when
a disk fails.  It also ensures that you don't lose data that is created
between backup runs.  But it is not the solution for data safety and
reliability - that is what backups are for.

You need to look at the complete picture here.  What are the risks to
your data?  Is a double disk failure /really/ the most likely failure
scenario?  Have you considered the likelihood and consequences of other
failure types?  In my experience, user error is a bigger risk to data
loss than a double disk failure - I have more often restored from backup
due to someone deleting the wrong files (or losing them due to slipping
when dragging-and-dropping) than from disk problems, even with non-raid
setups.

Raid will help if one of your disks dies, but it will not help against
fire or theft, or hardware failure on the NAS, or software failure, or
user error, or malware (if you have windows clients), or power failure,
or any one of a number of different scenarios.

So depending on your circumstances, you might get better "data
security/resilience" by putting the extra disks in a second machine at a
second location, or other mixed primary/secondary arrangements.






* Re: question about the best suited RAID level/layout
  2013-07-04 22:58   ` Christoph Anton Mitterer
  2013-07-05  1:07     ` Brad Campbell
  2013-07-05  1:12     ` Stan Hoeppner
@ 2013-07-05 13:36     ` Phil Turmel
  2013-07-06  1:11       ` Christoph Anton Mitterer
  2 siblings, 1 reply; 27+ messages in thread
From: Phil Turmel @ 2013-07-05 13:36 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 07/04/2013 06:58 PM, Christoph Anton Mitterer wrote:
> Hi Phil.
> 
> On Thu, 2013-07-04 at 17:43 -0400, Phil Turmel wrote:
>>> The focus is absolutely on data security/resilience,... and not at all
>>> on performance.
>> This particular statement trumps all other considerations.
> Sarcasm? (*Sheldon Cooper hat on*)

No sarcasm.  Speed, redundancy, capacity.  Pick two.  Any gain in one of
those criteria must reduce one or both of the others.  (Not really
binary on each, but you get the idea.)

You picked "redundancy".  Leaves only only one axis to consider: speed
vs. capacity.

>> Triple-copy raid10
>> across four drives can match that resiliency, with dramatically better
>> performance, but with a substantial cost in capacity.
> hmm I've briefly thought about that as well (just forgot to mention
> it)... for some reason (probably a non-reason) I've always had a bad
> feeling with respect to that uneven mixing (i.e. three copies on four
> disks), AFAIU that would look like (each same number being the same
> chunk):
> +---------+ +---------+ +---------+ +---------+
> |   sda   | |   sdb   | |   sdc   | |   sdd   |
> +---------+ +---------+ +---------+ +---------+
>   0  1  2     0  1  3     0  2  3     1  2  3
>   4  5  6     4  5  7     4  6  7     5  6  7
>   8  9  10    8  9  11    8  10 11    9  10 11

Precisely.  This is "raid10,near3".  You can look up the "offset" and
"far" variants.

> And that gives me again, any 2 disks... but so much better performance?

Dramatically.

> With 4x 4TiB disks,.. RAID6 would give me 16/2 TiB... and the above
> would give me 16/3 TiB?!
> Quite a loss...

Yup.
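
(4 x 4 TiB = 16 TiB raw; raid6 keeps n-2 of the four drives, so 8 TiB
usable; three copies leave 16/3, roughly 5.3 TiB.)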

> And AFAIU it doesn't give me any better resilience than RAID6 (by tricks
> like probabilities or so)?

At four drives, no.  Any two.  With five, there are some combinations of
three missing drives that'll still run.

> Can it be grown? Like when I want to use the 5th bay? What would it be
> then, still any 2 out of 5?

No.

>> Two-failure resilience is vital to completing recovery after replacing a
>> failed drive, particularly when the read error rates of consumer-grade
>> drives are involved.
> Well,... I have enterprise disks, and I have backups on different
> media,... but nevertheless,... I wouldn't "risk" RAID5 for my precious
> data

IMHO, enterprise drives and a good backup regime make raid5 a
reasonable choice.  Raid gives you uninterrupted *uptime* in the face of
hardware failure, and only hardware failure.  But David already covered
that.

>> In your specific case, raid6 has one additional advantage: making future
>> expansion to the fifth bay a reliable, simple, no downtime event.
> Ah... so I couldn't online/offline grow a RAID10 with n/f/o=3 ?

No.

>> In your situation, I would use raid6.  To mitigate the performance hit
>> on occasional random-access work, I would use a small chunk size (I use
>> 16k).  That will somewhat hurt peak linear performance, but even
>> bluray-equivalent media streams only amount to 5 MB/s or so.  That would
>> be 80 IOPS per device in such a four-drive raid6.
> I think RAID6 will be what I go for, at least unless the RAID10 with
> three copies gives me any resilience bonus, which I can't see right now.
> 
> Any ideas about the layout? i.e. left-symmetric-6, right-symmetric-6,
> left-asymmetric-6, right-asymmetric-6, and parity-first-6 ?

Certainly not any of the "-6" suffixes.  Those isolate Q on the last
disk, hurting streaming performance, and setting up the possibility of
uneven performance when degraded.  The default left-symmetric gives the
best chunk distribution for general use.
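
If you want to double-check after creation, the layout and chunk size show
up in the array details (array name assumed to be /dev/md0):

    mdadm --detail /dev/md0 | grep -i -e layout -e chunk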

Phil


* Re: question about the best suited RAID level/layout
  2013-07-05  1:07     ` Brad Campbell
@ 2013-07-06  0:36       ` Christoph Anton Mitterer
  2013-07-06  5:29         ` Brad Campbell
  2013-07-06  7:40         ` Piergiorgio Sartor
  0 siblings, 2 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06  0:36 UTC (permalink / raw)
  To: linux-raid


Hi Brad.

On Fri, 2013-07-05 at 09:07 +0800, Brad Campbell wrote:
> Both have had great reliability, so that's not really a good 
> data point for you.
Well but it's not a bad point either, is it?
And when we remember the issues (IIRC) Seagate had with the
firmware of some of its devices, which suddenly stopped working at some
special date... then it could easily happen that you're screwed by having
all disks from one vendor, i.e. of the same type.

I vaguely remember that back then there were even cases where firmware
updates didn't help anymore... but I may be wrong on that.



> What I *have* found over the years is the importance of weeding out 
> early failures.
Sure... I ran some 4 passes of badblock scans... well that may not be as
intensive as your several-weeks-test,... but 80 hours of constant
reading/writing is at least something.
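
(That was the destructive write-mode test, i.e. something like

    badblocks -wsv /dev/sdX

with its four write/read patterns; the device name is a placeholder, and -w
wipes the disk.)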


Thanks,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-05  1:12     ` Stan Hoeppner
@ 2013-07-06  0:46       ` Christoph Anton Mitterer
  2013-07-06  8:36         ` Stan Hoeppner
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06  0:46 UTC (permalink / raw)
  To: linux-raid


On Thu, 2013-07-04 at 20:12 -0500, Stan Hoeppner wrote:
> 1.  You'll need to use partitions underneath md because the drives will
> all be of slightly different capacities.
Well it doesn't have to be partitions... you can simply manually specify
a somewhat smaller size for the RAID... but of course one should leave
some gap.
I'll probably do it via partitions, though ;)
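
For the record, the "smaller size" variant would look roughly like this (the
number is per-device in KiB and purely illustrative, roughly 3.63 TiB of
each nominal 4 TB disk):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 --size=3900000000 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd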

btw: It was interesting to see,... that all 3 different drives (from WD,
HGST, Seagate)... exported _exactly_ the same amount of space.


> You'll need to identify the
> capacity of the smallest drive and create partitions a few MiB smaller
> than this on all drives.  This should assure that any replacement drive
> which is slightly smaller will be usable in the array.
Sure...



> Screwing around with all of this is a PITA.  When I build arrays I use
> identical drives and put identical spares in storage.  I don't leave
> hot/warm/cold spares in the chassis.
Well but that's another field of questions ;)

>   This simply degrades performance.
Why should it? If the spare is unused?


> Hot swap was invented for a reason.  Some folks prefer online spares and
> unattended auto rebuild.  Not every time a drive is kicked is a rebuild
> required.  I wanna look things over before I start a rebuild.
I also think that for my personal home use case hot/warm/cold spares
won't be of much use... but if you run a big data centre things look
different... and hot spares are not replaced by hot swap ;)


> 2.  HGST is a brand created due to Western Digital's acquisition of
> Hitachi's disk drive unit (Hitachi GST).
Sure... but I think they still use their own technology/firmware... at
least for now?

>   The drives are Hitachi's final
> production units relabeled with a different name and serial number.
> Three years from now when that HGST drive fails, Western Digital will
> replace it with a Western Digital produced drive.
Sure about that? Wasn't there some agreement that HGST belongs to WD but
produces independently...?
Anyway... that's off topic... for now at least the disks are not
identical (which was my goal)... and Toshiba got something from the
WD/HGST trade and already announced a 3.5" enterprise disk out of that.



> IMO the practical disadvantages of using dissimilar drives outweigh the
> theoretical benefits.
So... what are the practical disadvantages?


Cheers,
Chris.



* Re: question about the best suited RAID level/layout
  2013-07-05 11:10 ` David Brown
@ 2013-07-06  0:55   ` Christoph Anton Mitterer
  0 siblings, 0 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06  0:55 UTC (permalink / raw)
  To: linux-raid


On Fri, 2013-07-05 at 13:10 +0200, David Brown wrote:
> I apologise if this is preaching to the converted.  When you are
> concerned about data resilience, RAID is only part of the answer - I
> just want to make sure you have considered everything else.
Sure... I make regular complete backups, on different types of storage
media (magnetic vs. optical - yes I do fear very strong coronal mass
ejections ;-) )... being placed in different locations (even different
cities ;)

> The main benefit of RAID is availability - you don't have downtime when
> a disk fails.  It also ensures that you don't lose data that is created
> between backup runs.  But it is not the solution for data safety and
> reliability - that is what backups are for.
Sure...


> You need to look at the complete picture here.  What are the risks to
> your data?  Is a double disk failure /really/ the most likely failure
> scenario?  Have you considered the likelihood and consequences of other
> failure types?  In my experience, user error is a bigger risk to data
> loss than a double disk failure - I have more often restored from backup
> due to someone deleting the wrong files (or losing them due to slipping
> when dragging-and-dropping) than from disk problems, even with non-raid
> setups.
Hehe... I once accidentally mke2fs'ed (instead of fsck) over my main
personal data fs (and fucking e2fsprogs don't check for existing
fs/containers, AFAIK till today)... and back then my most recent backup
was really old... like 2 years or so...
I invested like 2 weeks of ext4 forensics and managed to basically
recover all of the data out of the overwritten fs... ;)



> Raid will help if one of your disk dies, but it will not help against
> fire or theft, or hardware failure on the NAS, or software failure, or
> user error, or malware (if you have windows clients), or power failure,
> or any one of a number of different scenarios.
> 
> So depending on your circumstances, you might get better "data
> security/resilience" by putting the extra disks in a second machine at a
> second location, or other mixed primary/secondary arrangements.
sure... see above... I do have a good backup strategy now :)

Anyway... for the daily growth of data... RAID does a good "backup-like"
job in helping me against disk failures... since I simply can't back up
everything every day...


Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-05 13:36     ` Phil Turmel
@ 2013-07-06  1:11       ` Christoph Anton Mitterer
  2013-07-06  2:19         ` Phil Turmel
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06  1:11 UTC (permalink / raw)
  To: linux-raid


On Fri, 2013-07-05 at 09:36 -0400, Phil Turmel wrote:
> You picked "redundancy".  Leaves only only one axis to consider: speed
> vs. capacity.
Well thinking about that "raid6check" tool you told me over in the other
thread,...
which AFAIU does what I was talking about, namely telling me which block
is the correct one if I have bad blocks (and the disk itself can't tell)
and not whole drive failures,.. where at least a two-block copy RAID10
would not be able to...
...then I think RAID6 is THE solution for me, given resilience has the
highest priority, as RAID10 with c/f/o=3 cannot do that.


> Precisely.  This is "raid10,near3".  You can look up the "offset" and
> "far" variants.
Actually I had written some ASCII art visualisations for the Debian
mdadm FAQ (and perhaps also the md(4) manpage) yesterday... I'm just waiting
for some minor answers from Neil before publishing them... so I already had a
look at these.
But AFAIU, they make absolutely no difference for resilience... and
while far/offset improve performance in some use cases,... they make it
worse in others.


> > And that gives me again, any 2 disks... but so much better performance?
> Dramatically.
I guess I'll do some benchmarking.. ;)


> At four drives, no.  Any two.  With five, there are some combinations of
> three missing drives that'll still run.
Was thinking about that as well... but as you said it's a "it might
survive"... not a "it will survive", right?!



> > Any ideas about the layout? i.e. left-symmetric-6, right-symmetric-6,
> > left-asymmetric-6, right-asymmetric-6, and parity-first-6 ?
> Certainly not any of the "-6" suffixes.
Ah.... I just see that the RAID5 layouts are also for RAID6 ^^

> Those isolate Q on the last
> disk, hurting streaming performance, and setting up the possibility of
> uneven performance when degraded.  The default left-symmetric gives the
> best chunk distribution for general use.
I see... I was just searching but haven't found anything really
good... actually nothing at all ^^
Does anyone know some place where the layouts for RAID5/6 are
(thoroughly) explained?


Cheers & thanks,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-06  1:11       ` Christoph Anton Mitterer
@ 2013-07-06  2:19         ` Phil Turmel
  2013-07-06 17:55           ` Christoph Anton Mitterer
  0 siblings, 1 reply; 27+ messages in thread
From: Phil Turmel @ 2013-07-06  2:19 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 07/05/2013 09:11 PM, Christoph Anton Mitterer wrote:
> On Fri, 2013-07-05 at 09:36 -0400, Phil Turmel wrote:
>> You picked "redundancy".  Leaves only only one axis to consider: speed
>> vs. capacity.
> Well thinking about that "raid6check" tool you told me over in the other
> thread,...
> which AFAIU does what I was talking about, namely telling me which block
> is the correct one if I have bad blocks (and the disk itself can't tell)
> and not whole drive failures,.. where at least a two-block copy RAID10
> would not be able to...
> ...then I think RAID6 is THE solution for me, given resilience has the
> highest priority, as RAID10 with c/f/o=3 cannot do that.

I think you should read Neil's blog entry before you get too excited
about raid6check.  You can only trust its decisions when you are
confident that the problems it finds are *only* due to silent read
errors.  MD raid does not carry the per-block metadata needed to
distinguish silent read errors from incomplete writes or out-of-band
writes to member disks.

Hopefully, btrfs will fill this void (eventually).

http://neil.brown.name/blog/20100211050355

Phil


* Re: question about the best suited RAID level/layout
  2013-07-06  0:36       ` Christoph Anton Mitterer
@ 2013-07-06  5:29         ` Brad Campbell
  2013-07-06 14:49           ` Christoph Anton Mitterer
  2013-07-06  7:40         ` Piergiorgio Sartor
  1 sibling, 1 reply; 27+ messages in thread
From: Brad Campbell @ 2013-07-06  5:29 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 06/07/13 08:36, Christoph Anton Mitterer wrote:
> Hi Brad.
>

> And when we remember the issues (IIRC) Seagate had with the
> firmware of some of its devices, which suddenly stopped working at some
> special date...

I had 10 of those in a RAID-6 at the time. Luckily the firmware bug only 
manifested itself on a re-start of the drive, and as I probably drop 
power to the machine once a year (if that), I could drop them out of the 
array one at a time, upgrade the firmware, and re-add them to the array.

A messy process to be sure, but no risk of data loss that way.




* Re: question about the best suited RAID level/layout
  2013-07-06  0:36       ` Christoph Anton Mitterer
  2013-07-06  5:29         ` Brad Campbell
@ 2013-07-06  7:40         ` Piergiorgio Sartor
  2013-07-06 14:52           ` Christoph Anton Mitterer
  1 sibling, 1 reply; 27+ messages in thread
From: Piergiorgio Sartor @ 2013-07-06  7:40 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On Sat, Jul 06, 2013 at 02:36:16AM +0200, Christoph Anton Mitterer wrote:
[...]
> Well but it's not a bad point either, is it?
> And when we remember the issues (IIRC) Seagate had with the
> firmware of some of its devices, which suddenly stopped working at some
> special date... then it could easily happen that you're screwed by having
> all disks from one vendor, i.e. of the same type.
> 
> I vaguely remember that back then there were even cases where firmware
> updates didn't help anymore... but I may be wrong on that.

Hi Chris,

we have some workstations with RAID5, 4 HDDs, from
4 different vendors (at that time there were 4).

We notice smart errors at a different rate, time and
type, but almost always in the same way for each brand.

In other words, yes, it helped to have different brands.

The only thing: you must confirm the size.
Ours are all identical, but I can imagine there could
be minor differences, especially between 512B and 4096B sector drives.
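
A quick way to compare them (device names are placeholders):

    # capacity in bytes plus logical/physical sector size, per drive
    for d in /dev/sd[abcd]; do
        echo "$d: $(blockdev --getsize64 $d) bytes," \
             "$(blockdev --getss $d)B/$(blockdev --getpbsz $d)B sectors"
    done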

bye,

-- 

piergiorgio


* Re: question about the best suited RAID level/layout
  2013-07-06  0:46       ` Christoph Anton Mitterer
@ 2013-07-06  8:36         ` Stan Hoeppner
  2013-07-06 15:04           ` Christoph Anton Mitterer
  0 siblings, 1 reply; 27+ messages in thread
From: Stan Hoeppner @ 2013-07-06  8:36 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 7/5/2013 7:46 PM, Christoph Anton Mitterer wrote:
> On Thu, 2013-07-04 at 20:12 -0500, Stan Hoeppner wrote:
...
> btw: It was interesting to see,... that all 3 different drives (from WD,
> HGST, Seagate)... exported _exactly_ the same amount of space.

You did state these are all enterprise class drives.

>> ... I don't leave
>> hot/warm/cold spares in the chassis.
...
>>   This simply degrades performance.

> Why should it? If the spare is unused?

The answer is rather obvious.  If spares are in the chassis one has
fewer active array spindles.

>> 2.  HGST is a brand created due to Western Digital's acquisition of
>> Hitachi's disk drive unit (Hitachi GST).

> Sure... but I think they still use their own technology/firmware... at
> least for now?

For now.

>>   The drives are Hitachi's final
>> production units relabeled with a different name and serial number.
>> Three years from now when that HGST drive fails, Western Digital will
>> replace it with a Western Digital produced drive.

> Sure about that? Wasn't there some agreement that HGST belongs to WD but
> produces independently...?

That wouldn't make sense for either party.  Where did you read this?
Got a link?

> and Toshiba got something from the
> WD/HGST trade and already announced a 3.5" enterprise disk out of that.

Toshiba has been producing 3.5" enterprise drives for years.  Got a link
showing that Toshiba received technology from the WD/Hitachi acquisition?

>> IMO the practical disadvantages of using dissimilar drives outweigh the
>> theoretical benefits.

> So... what are the practical disadvantages?

I already stated many of them.  You don't seem to be following along
very well.

-- 
Stan



* Re: question about the best suited RAID level/layout
  2013-07-06  5:29         ` Brad Campbell
@ 2013-07-06 14:49           ` Christoph Anton Mitterer
  2013-07-07  6:36             ` Brad Campbell
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06 14:49 UTC (permalink / raw)
  To: linux-raid


On Sat, 2013-07-06 at 13:29 +0800, Brad Campbell wrote:
> A messy process to be sure, but no risk of data loss that way.
Sure... but that might not always be possible at all... imagine you're
hit by a power outage (okay, one could argue that one must run a
UPS)... or kernel panic...

Or people being very paranoid (like me) probably won't let their
dm-crypt encrypted devices run when being away (freeze attacks against
the RAM)...


Anyway... I guess this goes off topic ;)


But I see that you have no real point *against* running devices from
different vendors either. Correct me if I'm wrong ;)


Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-06  7:40         ` Piergiorgio Sartor
@ 2013-07-06 14:52           ` Christoph Anton Mitterer
  0 siblings, 0 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06 14:52 UTC (permalink / raw)
  To: linux-raid


Hi Piergiorgio

On Sat, 2013-07-06 at 09:40 +0200, Piergiorgio Sartor wrote:
> We notice smart errors at a different rate, time and
> type, but almost always in the same way for each brand.
Guess you mean SMART here ;)

> The only thing: you must confirm the size.
> Ours are all identical, but I can imagine there could
> be minor differences, especially between 512B and 4096B sector drives.
Sure...

Thanks :)

Chris.


* Re: question about the best suited RAID level/layout
  2013-07-06  8:36         ` Stan Hoeppner
@ 2013-07-06 15:04           ` Christoph Anton Mitterer
  2013-07-06 15:41             ` Matt Garman
                               ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06 15:04 UTC (permalink / raw)
  To: linux-raid


On Sat, 2013-07-06 at 03:36 -0500, Stan Hoeppner wrote:
> >> hot/warm/cold spares in the chassis.
> >>   This simply degrades performance.
> > Why should it? If the spare is unused?
> The answer is rather obvious.  If spares are in the chassis one has
> fewer active array spindles.
Sorry... still don't get it...
When you have another drive in the chassis... which is not actively used
by the RAID, but just waiting as a hot spare for a failing device and
rebuild becoming necessary...
Apart from power consumption and more heat... how should that affect the
read/write performance of the RAID?


> > Sure about that? Wasn't there some agreement that HGST belongs to WD but
> > produces independently...?
> 
> That wouldn't make sense for either party.  Where did you read this?
I vaguely remember this being part of some anti-trust obligations...

> Got a link?
The only thing I've found right now was:
https://en.wikipedia.org/wiki/HGST#History
"It was agreed that WD would operate with WD Technologies and HGST as
wholly owned subsidiaries and they would compete in the marketplace with
separate brands and product lines."
But the next sentence about Toshiba may indicate that HGST stops 3.5"
business?!


> > and Toshiba got something from the
> > WD/HGST trade and already announced a 3.5" enterprise disk out of that.
> Toshiba has been producing 3.5" enterprise drives for years.  Got a link
> showing that Toshiba received technology from the WD/Hitachi acquisition?
Again, see Wikipedia.


> I already stated many of them.  You don't seem to be following along
> very well.
Well you only named one: the need to use partitions (and I do not see
why that should be a disadvantage)... and that in a few years there
might be only a couple of vendors left and (by then) the idea wouldn't
work anymore; but even if that happens, that's still no reason for not
doing it now?
Anything else I've missed or didn't understand as a disadvantage?


Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-06 15:04           ` Christoph Anton Mitterer
@ 2013-07-06 15:41             ` Matt Garman
  2013-07-07 14:08             ` David Brown
  2013-07-07 16:45             ` Stan Hoeppner
  2 siblings, 0 replies; 27+ messages in thread
From: Matt Garman @ 2013-07-06 15:41 UTC (permalink / raw)
  To: linux-raid

On Sat, Jul 06, 2013 at 05:04:11PM +0200, Christoph Anton Mitterer wrote:
> On Sat, 2013-07-06 at 03:36 -0500, Stan Hoeppner wrote:
> > > Sure about that? Wasn't there some agreement that HGST belongs to WD but
> > > produces independently...?
> > 
> > That wouldn't make sense for either party.  Where did you read this?
> I vaguely remember this being part of some anti-trust obligations...
> 
> > Got a link?
> The only thing I've found right now was:
> https://en.wikipedia.org/wiki/HGST#History
> "It was agreed that WD would operate with WD Technologies and HGST as
> wholly owned subsidiaries and they would compete in the marketplace with
> separate brands and product lines."
> But the next sentence about Toshiba may indicate that HGST stops 3.5"
> business?!


Here's another relevant link:
    http://www.toshiba.co.jp/about/press/2012_08/pr0801.htm

I got that from this thread:
    http://forums.servethehome.com/f17/toshiba-announces-12-new-3-5-desktop-drives-775.html

Quote from the thread, "WDC had to forfeit a Hitachi plant in China
to Toshiba to gain FTC approval for its HGST acquisition."

FWIW... I don't use their product, but enjoy the blog of the
Backblaze company (basically a cloud backup service).  There is some
overlap between the storage systems they are building out and what
home users often want (i.e. very cost-sensitive).  Clearly, they buy
a lot of hard drives, so their preference for reliable drives ought
to mean something:
    http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/


-Matt



* Re: question about the best suited RAID level/layout
  2013-07-06  2:19         ` Phil Turmel
@ 2013-07-06 17:55           ` Christoph Anton Mitterer
  2013-07-07 12:46             ` Bernd Schubert
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-06 17:55 UTC (permalink / raw)
  To: linux-raid


On Fri, 2013-07-05 at 22:19 -0400, Phil Turmel wrote:
> I think you should read Neil's blog entry
I did ;)

> before you get too excited
> about raid6check.
Sure it's not a magic wand for all situations... and raid6check itself
seems to be rather at an early starting point...

> You can only trust its decisions when you are
> confident that the problems it finds are *only* due to silent read
> errors.
Sure.... but at least it can be misused as kinda poor-man's integrity
check.

AFAIU it's not yet able to tell you, back through the fs, which file
is affected?


Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-06 14:49           ` Christoph Anton Mitterer
@ 2013-07-07  6:36             ` Brad Campbell
  0 siblings, 0 replies; 27+ messages in thread
From: Brad Campbell @ 2013-07-07  6:36 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 06/07/13 22:49, Christoph Anton Mitterer wrote:
> On Sat, 2013-07-06 at 13:29 +0800, Brad Campbell wrote:
>> A messy process to be sure, but no risk of data loss that way.
> Sure... but that might not always be possible at all... imagine you're
> hit by a power outage (okay, one could argue that one must run a
> UPS)... or kernel panic...

If you are not running a UPS, you are not actually serious about your 
data integrity. I don't care if it's an APC BackUPS with 5 minutes of
hold-up; you need to be able to shut down cleanly.

> Or people being very paranoid (like me) probably won't let their
> dm-crypt encrypted devices run when being away (freeze attacks against
> the RAM)...

I use dm-crypt too, but there are far easier ways to get access to my 
data than performing a cold attack against my server, so I spend my 
effort against those risks instead.

I guess if your risk assessment states that is a risk worthy of 
mitigating then you have to do what you have to do. Paranoia for 
paranoia's sake can be fun if you don't have anything else to do, but 
unless there is a real (not perceived) risk, then you just balance your 
treatments against that.

> Anyway... I guess this goes off topic ;)
>

Far, far, far...

> But I see that you have no real point *against* running devices from
> different vendors either. Correct me if I'm wrong ;)

Nope. Not wrong at all. That'd be my preference if practical. On the 
other hand, my 10 x WD 2TB green drive RAID-6 is about the worst 
potential time bomb, but then it's all backed up and restore-able if the 
worst should happen. Risk is "very likely", consequence is minimal.

My work data on the other hand is on a RAID-10 of SAS drives with 60 
days of rotating off site & off line backups. Risk is "unlikely", but 
consequence is massive.

We had a TV advertising campaign targeting speeding here for a few
years. To paraphrase: "Choose your speed (risk), choose your
consequences...".





* Re: question about the best suited RAID level/layout
  2013-07-06 17:55           ` Christoph Anton Mitterer
@ 2013-07-07 12:46             ` Bernd Schubert
  2013-07-07 17:39               ` Christoph Anton Mitterer
  0 siblings, 1 reply; 27+ messages in thread
From: Bernd Schubert @ 2013-07-07 12:46 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 07/06/2013 07:55 PM, Christoph Anton Mitterer wrote:
> On Fri, 2013-07-05 at 22:19 -0400, Phil Turmel wrote:
>> I think you should read Neil's blog entry
> I did ;)
> 
>> before you get too excited
>> about raid6check.
> Sure it's not a magic wand for all situations... and raid6check itself
> seems to be rather at a early starting point...
> 
>> You can only trust its decisions when you are
>> confident that the problems it finds are *only* due to silent read
>> errors.
> Sure.... but at least it can be misused as kinda poor-man's integrity
> check.
> 
> AFAIU it's not yet able to tell you, back through the fs, which file
> is affected?

The block layer has no knowledge of which file a block belongs to. Even for
file systems that is a hard task to figure out, as only inodes store
information about which blocks they use. So if you want to figure out
the corresponding file, you first need to scan through all inodes and
search for the specific block. Once you have the corresponding inode you
need to find the directory entries referencing it. So lots of expensive
reverse searching.
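
For what it's worth, on ext2/3/4 that reverse search can be done by hand
with debugfs, once you have translated the array offset into a filesystem
block number (the block/inode numbers and device name below are just
placeholders):

    # which inode owns filesystem block 123456?
    debugfs -R "icheck 123456" /dev/md0
    # which path(s) reference inode 98765?
    debugfs -R "ncheck 98765" /dev/md0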



Cheers,
Bernd



* Re: question about the best suited RAID level/layout
  2013-07-06 15:04           ` Christoph Anton Mitterer
  2013-07-06 15:41             ` Matt Garman
@ 2013-07-07 14:08             ` David Brown
  2013-07-07 16:45             ` Stan Hoeppner
  2 siblings, 0 replies; 27+ messages in thread
From: David Brown @ 2013-07-07 14:08 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 06/07/13 17:04, Christoph Anton Mitterer wrote:
> On Sat, 2013-07-06 at 03:36 -0500, Stan Hoeppner wrote:
>>>> hot/warm/cold spares in the chassis.
>>>>    This simply degrades performance.
>>> Why should it? If the spare is unused?
>> The answer is rather obvious.  If spares are in the chassis one has
>> fewer active array spindles.
> Sorry... still don't get it...
> When you have another drive in the chassis... which is not actively used
> by the RAID, but just waiting as a hot spare for a failing device and
> rebuild becoming necessary...
> Apart from power consumption and more heat... how should that affect the
> read/write performance of the RAID?
>

I think the point is that you have a slot in your chassis that is not 
being used actively.  If you have five disk bays, then you will get 
better performance with 5 disks in your array and a spare on the shelf 
beside it than with 4 disks in the array and a spare in the chassis.




* Re: question about the best suited RAID level/layout
  2013-07-06 15:04           ` Christoph Anton Mitterer
  2013-07-06 15:41             ` Matt Garman
  2013-07-07 14:08             ` David Brown
@ 2013-07-07 16:45             ` Stan Hoeppner
  2013-07-07 17:26               ` Christoph Anton Mitterer
  2 siblings, 1 reply; 27+ messages in thread
From: Stan Hoeppner @ 2013-07-07 16:45 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 7/6/2013 10:04 AM, Christoph Anton Mitterer wrote:
> On Sat, 2013-07-06 at 03:36 -0500, Stan Hoeppner wrote:
>>>> hot/warm/cold spares in the chassis.
>>>>   This simply degrades performance.

>>> Why should it? If the spare is unused?
>> The answer is rather obvious.  If spares are in the chassis one has
>> fewer active array spindles.

> Sorry... still don't get it...

Maybe an example will help.  Is a 12 drive array faster than a 10 drive
array?  Yes, of course.  If your chassis holds 12 drives and you assign
two as spares, then you have 10 drives in your array.  That's 20%
slower.  If you keep spares on a shelf and hot swap them in after
failure, you can have all 12 drives in your array and not lose that 20%
performance.

...
> "It was agreed that WD would operate with WD Technologies and HGST as
> wholly owned subsidiaries and they would compete in the marketplace with
> separate brands and product lines."

I don't have the time, and this is not the appropriate forum, for me to
educate you WRT the disk drive business...

> But the next sentence about Toshiba may indicate that HGST stops 3.5"
> business?!

You're confusing 3.5" with consumer.  HGST will still be producing 3.5"
enterprise SAS and SATA drives.  The transfer to Toshiba was limited to
consumer product.

>>> and Toshiba got something from the
>>> WD/HGST trade and already announced a 3.5" enterprise disk out of that.
>> Toshiba has been producing 3.5" enterprise drives for years.  Got a link
>> showing that Toshiba received technology from the WD/Hitachi acquisition?
> Again, see Wikipedia.

As I stated, Toshiba had already been producing 3.5" enterprise drives
for many years.  They had a tiny fraction of the consumer market as WD
and Seagate owned nearly all of it after Seagate's acquisition of
Maxtor.  Toshiba got a consumer line of drives out of this deal which is
what they needed.

The whole purpose of this divestiture was the FTC's goal of preventing
WD and Seagate from owning essentially the entire consumer disk drive
market.  They already had over ~80% of it worldwide between them.  WD
got what it wanted, which was Hitachi's enterprise disk drive line.  WD
has been trying for over a decade to crack the enterprise OEM nut and
was unable to do so, as Seagate, IBM/Hitachi, and Toshiba had it locked
up.  On day one after this acquisition, WD was selling enterprise drives
to the likes of EMC, Dell, HP, IBM, etc, customers they'd been trying to
grab for over a decade without success.

Hitachi's consumer drive biz was never serious competition to WD so they
lost little sleep when the FTC forced them to divest that product line.
 Again, they got what they wanted:  enterprise 15K, SAS, FC drives, and
an existing large OEM customer base for these products.

>> I already stated many of them.  You don't seem to be following along
>> very well.

> Well you only named one: the need to use partitions (and I do not see
> why that should be a disadvantage)... and that in a few years there
> might be only a couple of vendors left and (by then) the idea wouldn't
> work anymore; but even if that happens, that's still no reason for not
> doing it now?

> Anything else I've missed or didn't understand as a disadvantage?

Well of course.  By designing your storage with dissimilar drives to
avoid a rare, show stopping, firmware bug that may or may not be present
in a specific drive model, you're simultaneously exposing yourself to
other issues because the firmware doesn't match.  Performance will be
suboptimal as your drives will have different response times.

Additionally, timeout and error handling will likely be different,
causing the same.  Interpreting S.M.A.R.T. data will be difficult
because each of the drives will report various metrics differently, etc,
etc.  So instead of only being required to become intimately familiar
with one drive model, you must do so for 4 or 5 drive models.

In summary, in an attempt to avoid one specific potential rare problem
you create many other problems.  I'm not saying this strategy is
"wrong", I'm simply pointing out that it comes with its share of other
problems you'll have to deal with.

-- 
Stan



* Re: question about the best suited RAID level/layout
  2013-07-07 16:45             ` Stan Hoeppner
@ 2013-07-07 17:26               ` Christoph Anton Mitterer
  2013-07-09 15:50                 ` Stan Hoeppner
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-07 17:26 UTC (permalink / raw)
  To: linux-raid


On Sun, 2013-07-07 at 11:45 -0500, Stan Hoeppner wrote:
> Maybe an example will help.  Is a 12 drive array faster than a 10 drive
> array?  Yes, of course.  If your chassis holds 12 drives and you assign
> two as spares, then you have 10 drives in your array.  That's 20%
> slower.  If you keep spares on a shelf and hot swap them in after
> failure, you can have all 12 drives in your array and not lose that 20%
> performance.
Ah... okay I see what you were talking about... sure... but it's not
the hot swap that will (directly) degrade performance... it's because you
don't use all your slots and thereby don't get the maximum
performance gain possible from the striping...


> Well of course.  By designing your storage with dissimilar drives to
> avoid a rare, show stopping, firmware bug that may or may not be present
> in a specific drive model, you're simultaneously exposing yourself to
> other issues because the firmware doesn't match.  Performance will be
>> suboptimal as your drives will have different response times.
Sure, but as said performance isn't the main goal...


> Additionally, timeout and error handling will likely be different,
> causing the same.  Interpreting S.M.A.R.T. data will be difficult
> because each of the drives will report various metrics differently, etc,
> etc.  So instead of only being required to become intimately familiar
> with one drive model, you must do so for 4 or 5 drive models.
Sure..


Thanks for your comments.

Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-07 12:46             ` Bernd Schubert
@ 2013-07-07 17:39               ` Christoph Anton Mitterer
  0 siblings, 0 replies; 27+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-07 17:39 UTC (permalink / raw)
  To: linux-raid


On Sun, 2013-07-07 at 14:46 +0200, Bernd Schubert wrote:
> The block layer has no knowledge which file a block belongs to.
Sure...

>  Even for
> file systems that is hard task to figure out, as only inodes store
> information which blocks they use. So if you would want to figure out
> the corresponding file, you first need to scan through all inodes and
> search for the specific block. Once you have the corresponding inode you
> need to find directory-entries referencing it. So lots of expensive
> reverse searching.
Well of course, nothing that should run online in the RAID drivers... but
when you have a tool like raid6check or similar... it would be nice if it
could tell you which files were affected (if any).


Cheers,
Chris.


* Re: question about the best suited RAID level/layout
  2013-07-07 17:26               ` Christoph Anton Mitterer
@ 2013-07-09 15:50                 ` Stan Hoeppner
  0 siblings, 0 replies; 27+ messages in thread
From: Stan Hoeppner @ 2013-07-09 15:50 UTC (permalink / raw)
  To: Christoph Anton Mitterer; +Cc: linux-raid

On 7/7/2013 12:26 PM, Christoph Anton Mitterer wrote:
> On Sun, 2013-07-07 at 11:45 -0500, Stan Hoeppner wrote:
>> Maybe an example will help.  Is a 12 drive array faster than a 10 drive
>> array?  Yes, of course.  If your chassis holds 12 drives and you assign
>> two as spares, then you have 10 drives in your array.  That's 20%
>> slower.  If you keep spares on a shelf and hot swap them in after
>> failure, you can have all 12 drives in your array and not lose that 20%
>> performance.

> Ah... okay I see what you were talking about ... sure... 

> but it's not
> the hot swap that will (directly) degrade performance...  

You seem to be having significant trouble parsing/digesting the
information given to you.  I didn't state, nor suggest, nor imply that
hot swap degrades performance.  Not sure how you ended up with this
idea.  I'll chalk it up to lack of knowledge/experience.

> it's because you
> don't use all your slots and thereby don't get the maximum
> performance gain possible from the striping...

Yes.  I stated this twice now.  Glad you finally got it. ;)

>> Well of course.  By designing your storage with dissimilar drives to
>> avoid a rare, show stopping, firmware bug that may or may not be present
>> in a specific drive model, you're simultaneously exposing yourself to
>> other issues because the firmware doesn't match.  Performance will be
>> suboptimal as your drives will have different response times.

> Sure, but as said performance isn't the main goal...

It would benefit you greatly if you'd stop rebutting the information
presented to you, digest the information, commit it to memory, and use
it or not, now or in the future.  I am not a salesman trying to convince
you to use one method or another.  I am a teacher presenting you with
both pros and cons of multiple configuration options.  There is no need
for argument or rebuttal here.

>> Additionally, timeout and error handling will likely be different,
>> causing the same.  Interpreting S.M.A.R.T. data will be difficult
>> because each of the drives will report various metrics differently, etc,
>> etc.  So instead of only being required to become intimately familiar
>> with one drive model, you must do so for 4 or 5 drive models.

> Sure..

You're discounting expert insight out of hand because it doesn't agree
with your predisposition, and you are being adversarial with those
presenting conflicting information.  You're acting a bit like the
immature Kim Jong-un, but you are presumably unable to have those who
disagree with you executed or imprisoned.  Thankfully.

-- 
Stan


