Subject: some general questions on RAID
From: Christoph Anton Mitterer
Date: 2013-07-04 18:30 UTC
To: linux-raid


Hi.

Well, for me personally these are follow-up questions to the scenario I
presented here: http://thread.gmane.org/gmane.linux.raid/43405

But I think these questions would be generally interesting, and I'd like
to add them to the Debian FAQ for mdadm (I haven't found really good
answers in the archives or via Google).


1) I plan to use dmcrypt and LUKS and had the following stacking in
mind:
physical devices -> MD -> dmcrypt -> LVM (with multiple LVs) ->
filesystems

Basically I use LVM for partitioning here ;-)
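
To make that concrete, here's a minimal sketch of how I'd set the stack
up (device names, sizes and VG/LV names are of course just placeholders):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 \
      /dev/sda /dev/sdb /dev/sdc             # MD directly on the disks
  cryptsetup luksFormat /dev/md0             # dmcrypt/LUKS on top of MD
  cryptsetup luksOpen /dev/md0 md0_crypt
  pvcreate /dev/mapper/md0_crypt             # LVM on top of dmcrypt
  vgcreate vg0 /dev/mapper/md0_crypt
  lvcreate -L 100G -n data vg0               # one LV per "partition"
  mkfs.ext4 /dev/vg0/data                    # filesystem per LV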

Are there any issues with that order? E.g. I've heard rumours that
dmcrypt on top of MD performs much worse than the other way around...

But with potential disaster recovery in mind... not having MD directly
on top of the HDDs (especially having it above dmcrypt) seems stupid
to me.


2) Chunks / chunk size
a) How does MD behave in this regard... does it _always_ read and/or
write FULL chunks?
I guess it must at least do so on _write_ for the RAID levels with
parity (5/6)... but what about reads?
And what about reads/writes with the non-parity RAID levels (1, 0, 10,
linear)... does the chunk size have any real influence here (in terms
of reading/writing)?
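
To make sure the geometry is clear, here's an example of what I mean
(numbers purely illustrative):

  3-disk RAID5 with 256 KiB chunks:
    1 stripe = 2 data chunks + 1 parity chunk = 512 KiB of data

So would a small (say 4 KiB) write into one chunk force a
read-modify-write of the whole data chunk plus the parity chunk, or
does MD operate at a finer (page/sector) granularity?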

b) What's the currently suggested chunk size for an undetermined mix of
file sizes? It's obviously >= the filesystem block size... the dm-crypt
block size is always 512 B so far, so this shouldn't matter... but do
the LVM physical extents somehow play in? (I guess not... LVM PEs are
_NOT_ always FULLY read and/or written - why should they be? ... right?)
From our countless big (hardware) RAID systems at the faculty (we run a
Tier-2 centre for the LHC Computing Grid), experience suggests that 256K
is best for an undetermined mixture of small/medium/large files... and
the biggest possible chunk size for mostly large files.
But does the 256K apply to MD RAIDs as well?
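
For reference, I'd set/inspect the chunk size like this (IIRC mdadm's
own default has been 512 KiB since v3.1):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 ...
  mdadm --detail /dev/md0 | grep -i chunk   # chunk size of an existing array
  cat /proc/mdstat                          # also shows the chunk size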


3) Any extra benefit from the parity?
What I mean is... does the parity give me a kind of "integrity check"?
I.e. when a drive fails completely (burns down or whatever), it's clear:
the parity is used on rebuild to get the lost chunks back.

But when I only have block errors and do scrubbing... a) will it tell
me that (and which) blocks are damaged, and b) will it be possible to
recover the right value via the parity? Assuming of course that block
error/damage doesn't mean the drive actually returns an error code for
"BLOCK BROKEN"... but silently hands me bogus data.


Thanks again,
Chris.



Thread overview: 21+ messages
2013-07-04 18:30 some general questions on RAID Christoph Anton Mitterer
2013-07-04 22:07 ` Phil Turmel
2013-07-04 23:34   ` Christoph Anton Mitterer
2013-07-08  4:48     ` NeilBrown
2013-07-06  1:33   ` Christoph Anton Mitterer
2013-07-06  8:52     ` Stan Hoeppner
2013-07-06 15:15       ` Christoph Anton Mitterer
2013-07-07 16:51         ` Stan Hoeppner
2013-07-07 17:39   ` Milan Broz
2013-07-07 18:01     ` Christoph Anton Mitterer
2013-07-07 18:50       ` Milan Broz
2013-07-07 20:51         ` Christoph Anton Mitterer
2013-07-08  5:40           ` Milan Broz
2013-07-08  4:53     ` NeilBrown
2013-07-08  5:25       ` Milan Broz
2013-07-05  1:13 ` Brad Campbell
2013-07-05  1:39   ` Sam Bingner
2013-07-05  3:06     ` Brad Campbell
2013-07-06  1:23     ` some general questions on RAID (OT) Christoph Anton Mitterer
2013-07-06  6:23       ` Sam Bingner
2013-07-06 15:11         ` Christoph Anton Mitterer
