* RAID6 questions (mdadm 3.2.6/3.3.x)
@ 2014-07-11 22:41 Vlad Dobrotescu
2014-07-12 1:09 ` Chris Murphy
0 siblings, 1 reply; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-11 22:41 UTC (permalink / raw)
To: linux-raid
Hi,
First of all, big thanks to all the people involved in the mdadm project
- it's a jewel.
I'm getting close to put together my new home server, with 6 drives in a
RAID6 configuration. I have spent the last few days trying to update my
knowledge on Linux hdd redundancy (didn't need to touch the subject for
about 8 years - again, thanks for the quality of your work) and it seems
I'll have to use the 3.2.6 version of mdadm (coming with the new CentOS
7). I look forward for the goodies coming in 3.3.x, but I can't wait
until the RH guys are totally happy with it. I have a few questions that
I hope someone in this forum can easily shed some light on.
1. If I set up everything with 3.2.6, will 3.3.x be able to "take over"
seamlessly my array and offer the new features (bad blocks/hot replace)?
If not, is there anything I could do proactively to ease the move?
2. In the foreseeable future, I will add more drives to the existing
array (let's say 4 more, thus doubling its storage capacity). My
understanding is that growing the array to include the new disks will
keep the existing size - am I correct? In this situation will the chunk
size stay the same, or will it double (if I don't explicitly specify any
change)?
I plan to implement some proactive maintenance of the array (regular
scrubs, smartctl monitoring), and I may get to the point of wanting to
replace one or more tired (but not yet failed) drives.
3. My understanding is that the hot replace feature of 3.3.x could
handle this in a very efficient way, by cloning the data from the old
drive to the new one - am I correct? If yes, then I wonder if multiple
replacements can be done in parallel (this situation would also occur if
I want to replace existing disks with bigger ones)?
4. Again, my understanding is that 3.2.6 (mdadm-3.2.6-31.el7 to be
exact) can't help me in this situation and I have to go through a full
resync - please correct me if I'm wrong. A comment on Neil's blog
suggests configuring each drive as a degraded RAID1 and assembling the
RAID6 on top of those md devices (so that active drive cloning can be
handled by the RAID1 component). I find this idea quite interesting ...
but would this approach be subject to any significant performance penalty?
5. In the scenario above, I'm thinking that if the RAID1's are
configured to use 1.0 metadata and the RAID6 to use 1.2 metadata, once
mdadm 3.3.x becomes available I could just "re-assemble" the RAID6 using
the same drives (but without the RAID1 envelope) without any resyncing
(the data is "healthy" and the superblocks would already be in the
proper place) ... If I'm not totally off, could someone sketch the
proper procedure of doing it without loosing the data?
And, finaly, a couple of mdadm/LVM related questions:
6. mdadm on top of LVM2 LGs (not the other way around): would there be
any issues or performance penalties?
7. I am sure I read somewhere (can't find the source anymore) that the
"new" RAID features of LVM2 are based on a fork from the md code. If
this is true, are you guys are contributing to that project as well?
Thanks,
Vlad
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-11 22:41 RAID6 questions (mdadm 3.2.6/3.3.x) Vlad Dobrotescu
@ 2014-07-12 1:09 ` Chris Murphy
2014-07-12 1:20 ` NeilBrown
2014-07-12 3:20 ` Vlad Dobrotescu
0 siblings, 2 replies; 12+ messages in thread
From: Chris Murphy @ 2014-07-12 1:09 UTC (permalink / raw)
To: Vlad Dobrotescu; +Cc: linux-raid
On Jul 11, 2014, at 4:41 PM, Vlad Dobrotescu <vlad@dobrotescu.ca> wrote:
>
> 6. mdadm on top of LVM2 LGs (not the other way around): would there be any issues or performance penalties?
You're not assured what PV the LV's are located on. So those 6 LVs you're using as md members might not be on six physical devices. One drive dies, you can lose the whole array. You're better off using LVM raid, or doing things conventionally by first creating the md raid set and then making the md logical device a PV.
>
> 7. I am sure I read somewhere (can't find the source anymore) that the "new" RAID features of LVM2 are based on a fork from the md code. If this is true, are you guys are contributing to that project as well?
It's a good question, I'm not certain but my understanding is device mapper is leveraging existing md code in the kernel, rather than having forked and duplicated that code. They have their own user space tools and on-disk metadata so you can't use mdadm to manage it.
Chris Murphy
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 1:09 ` Chris Murphy
@ 2014-07-12 1:20 ` NeilBrown
2014-07-12 3:21 ` Vlad Dobrotescu
2014-07-12 12:29 ` Piergiorgio Sartor
2014-07-12 3:20 ` Vlad Dobrotescu
1 sibling, 2 replies; 12+ messages in thread
From: NeilBrown @ 2014-07-12 1:20 UTC (permalink / raw)
To: Chris Murphy; +Cc: Vlad Dobrotescu, linux-raid
[-- Attachment #1: Type: text/plain, Size: 976 bytes --]
On Fri, 11 Jul 2014 19:09:48 -0600 Chris Murphy <lists@colorremedies.com>
wrote:
> > 7. I am sure I read somewhere (can't find the source anymore) that the "new" RAID features of LVM2 are based on a fork from the md code. If this is true, are you guys are contributing to that project as well?
>
> It's a good question, I'm not certain but my understanding is device mapper is leveraging existing md code in the kernel, rather than having forked and duplicated that code. They have their own user space tools and on-disk metadata so you can't use mdadm to manage it.
Your understanding is correct. It is the same code for managing RAID
functionality, but different code for managing metadata and different
user-space tools.
I'm hoping that one day the RAID support in LVM2 will be better than mdadm,
and then I can just fade away and no-one will notice that I am gone.
http://downatthirdman.files.wordpress.com/2010/06/grinning-chesire-cat.jpg
NeilBrown
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 828 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 1:09 ` Chris Murphy
2014-07-12 1:20 ` NeilBrown
@ 2014-07-12 3:20 ` Vlad Dobrotescu
2014-07-12 3:46 ` Chris Murphy
1 sibling, 1 reply; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-12 3:20 UTC (permalink / raw)
To: Chris Murphy; +Cc: linux-raid
On 11/07/2014 21:09, Chris Murphy wrote:
> On Jul 11, 2014, at 4:41 PM, Vlad Dobrotescu<vlad@dobrotescu.ca> wrote:
>> 6. mdadm on top of LVM2 LGs (not the other way around): would there be any issues or performance penalties?
> You're not assured what PV the LV's are located on. So those 6 LVs you're using as md members might not be on six physical devices. One drive dies, you can lose the whole array. You're better off using LVM raid, or doing things conventionally by first creating the md raid set and then making the md logical device a PV.
Thanks for the advice, it makes a lot of sense. However, this question
wasn't focused on the RAID6 itself, but related to some fancy (crazy?)
mirroring scheme for the Linux partition I was considering: take a LV
chunk from the VG that sits on the RAID6 and mirror (md RAID1) it with a
partition from the SSD I'll be using for keeping the ext4 journal for
the big data partition. In this way I can have a functional OS even if I
take all the RAID6 disks offline. Of course, this can be achieved in
other ways as well.
Anyhow, do you have any estimation of the speed penalty when overlaying
such layers (md-md, md-lvm, ...)?
Vlad
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 1:20 ` NeilBrown
@ 2014-07-12 3:21 ` Vlad Dobrotescu
2014-07-13 3:15 ` NeilBrown
2014-07-12 12:29 ` Piergiorgio Sartor
1 sibling, 1 reply; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-12 3:21 UTC (permalink / raw)
To: NeilBrown; +Cc: Chris Murphy, linux-raid
On 11/07/2014 21:20, NeilBrown wrote:
> On Fri, 11 Jul 2014 19:09:48 -0600 Chris Murphy<lists@colorremedies.com>
> wrote:
>>> 7. I am sure I read somewhere (can't find the source anymore) that the "new" RAID features of LVM2 are based on a fork from the md code. If this is true, are you guys are contributing to that project as well?
>> It's a good question, I'm not certain but my understanding is device mapper is leveraging existing md code in the kernel, rather than having forked and duplicated that code. They have their own user space tools and on-disk metadata so you can't use mdadm to manage it.
> Your understanding is correct. It is the same code for managing RAID
> functionality, but different code for managing metadata and different
> user-space tools.
> I'm hoping that one day the RAID support in LVM2 will be better than mdadm,
> and then I can just fade away and no-one will notice that I am gone.
>
> http://downatthirdman.files.wordpress.com/2010/06/grinning-chesire-cat.jpg
>
> NeilBrown
Thanks for the insight. I don't think your "hope" will ever come to
reality, as they seem to be focused on a different "business case". I
find the LV abstraction pretty cool, but, for my needs, their RAID
support needs significant improvements. It seems a bit weird to have
access to all the amazing md code from the kernel and not being able to
use it to its value. That's why the idea of a fork made more sense to
me. But, then, I don't know if in fact the features that seem important
for me (i.e. adding disks/stripes to an array) are in the kernel code or
in the user-space tools. Maybe I should take a look at the md/mdadm code
... until then, this question only comes from some kind of "academic"
curiosity.
Would you have any opinion on the questions 1-5?
Vlad
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 3:20 ` Vlad Dobrotescu
@ 2014-07-12 3:46 ` Chris Murphy
2014-07-12 13:30 ` Vlad Dobrotescu
0 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2014-07-12 3:46 UTC (permalink / raw)
To: linux-raid@vger.kernel.org List
On Jul 11, 2014, at 9:20 PM, Vlad Dobrotescu <vlad@dobrotescu.ca> wrote:
> On 11/07/2014 21:09, Chris Murphy wrote:
>> On Jul 11, 2014, at 4:41 PM, Vlad Dobrotescu<vlad@dobrotescu.ca> wrote:
>>> 6. mdadm on top of LVM2 LGs (not the other way around): would there be any issues or performance penalties?
>> You're not assured what PV the LV's are located on. So those 6 LVs you're using as md members might not be on six physical devices. One drive dies, you can lose the whole array. You're better off using LVM raid, or doing things conventionally by first creating the md raid set and then making the md logical device a PV.
> Thanks for the advice, it makes a lot of sense. However, this question wasn't focused on the RAID6 itself, but related to some fancy (crazy?) mirroring scheme for the Linux partition I was considering: take a LV chunk from the VG that sits on the RAID6 and mirror (md RAID1) it with a partition from the SSD I'll be using for keeping the ext4 journal for the big data partition.
Sounds a bit nutty, no offense. It's complicated, non-standard, and therefore at high risk of user induced data loss.
It's basically raid61, which tells me you want the data always for sure always available. Because raid61 is about uptime. The problem is, you're not going to get that because you've overbuilt the storage stack and haven't considered (or mentioned) other fail points like the network, the power supply, power itself. So it just sounds wrongly overbuilt because the data can't possibly require this kind of uptime, chances are you're confusing raid with back ups. If the data is both important and it really needs to be available, build yourself a gluster cluster.
> In this way I can have a functional OS even if I take all the RAID6 disks offline. Of course, this can be achieved in other ways as well.
Well if everything you care about on this raid6 fits on an SSD partition, why don't you just set up an hourly rsync to the raid6 and use the SSD volume for live work? And then if you accidentally delete a file or crash when writing to the SSD chances are the states of the raid6 LV and the SSD volume are different and one is recoverable. If you raid1 them, any accidents affect both and you're hosed.
>
> Anyhow, do you have any estimation of the speed penalty when overlaying such layers (md-md, md-lvm, …)?
No. But what you're talking about is md raid6> lv > md raid1, so three layers.
Chris Murphy--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 1:20 ` NeilBrown
2014-07-12 3:21 ` Vlad Dobrotescu
@ 2014-07-12 12:29 ` Piergiorgio Sartor
1 sibling, 0 replies; 12+ messages in thread
From: Piergiorgio Sartor @ 2014-07-12 12:29 UTC (permalink / raw)
To: NeilBrown; +Cc: Chris Murphy, Vlad Dobrotescu, linux-raid
On Sat, Jul 12, 2014 at 11:20:34AM +1000, NeilBrown wrote:
[...]
> I'm hoping that one day the RAID support in LVM2 will be better than mdadm,
> and then I can just fade away and no-one will notice that I am gone.
I think a lot of people will notice... :-)
>
> http://downatthirdman.files.wordpress.com/2010/06/grinning-chesire-cat.jpg
>
> NeilBrown
bye,
--
piergiorgio
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 3:46 ` Chris Murphy
@ 2014-07-12 13:30 ` Vlad Dobrotescu
2014-07-12 14:46 ` Chris Murphy
0 siblings, 1 reply; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-12 13:30 UTC (permalink / raw)
To: linux-raid
Chris Murphy <lists <at> colorremedies.com> writes:
>>>> 6. mdadm on top of LVM2 LGs (not the other way around):
>>>> would there be any issues or performance penalties?
>>>
>>> You're not assured what PV the LV's are located on.
>>> ...
>>
>> Thanks for the advice, it makes a lot of sense. However,
>> this question wasn't focused on the RAID6 itself, but
>> related to some fancy (crazy?) mirroring scheme for the
>> Linux partition I was considering: take a LV chunk from
>> the VG that sits on the RAID6 and mirror (md RAID1) it
>> with a partition from the SSD I'll be using for keeping
>> the ext4 journal for the big data partition.
>
> Sounds a bit nutty, no offense. It's complicated, non-standard,
> and therefore at high risk of user induced data loss.
>
> It's basically raid61, which tells me you want the data
> always for sure always available. Because raid61 is about
> uptime. The problem is, you're not going to get that because
> you've overbuilt the storage stack and haven't considered
> (or mentioned) other fail points like the network, the power
> supply, power itself. So it just sounds wrongly overbuilt
> because the data can't possibly require this kind of uptime,
> chances are you're confusing raid with back ups. If the data
> is both important and it really needs to be available, build
> yourself a gluster cluster.
>
>> In this way I can have a functional OS even if I take all
>> the RAID6 disks offline. Of course, this can be achieved
>> in other ways as well.
>
> Well if everything you care about on this raid6 fits on an
> SSD partition, why don't you just set up an hourly rsync to
> the raid6 and use the SSD volume for live work? And then if
> you accidentally delete a file or crash when writing to the
> SSD chances are the states of the raid6 LV and the SSD volume
> are different and one is recoverable. If you raid1 them, any
> accidents affect both and you're hosed.
Thanks a lot for the advice, Chris. That's exactly what I hoped
for when posting to this list. As I mentioned, I am considering
a number what-if scenarios and possible solutions (I added the
rsync one to that list) and weighting pros and cons. For this
RAID61 approach, which seemed to make some logical sense, I had
the feeling it's a bit fishy, but didn't have any real arguments
against it. Now I have.
Since it seems you have a very healthy view of real-world RAID,
could you point out any significant issues when using a disk as
a degraded md RAID1 (not accidental, but on purpose)?
Vlad
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 13:30 ` Vlad Dobrotescu
@ 2014-07-12 14:46 ` Chris Murphy
2014-07-12 17:15 ` Vlad Dobrotescu
0 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2014-07-12 14:46 UTC (permalink / raw)
To: Vlad Dobrotescu; +Cc: linux-raid
On Jul 12, 2014, at 7:30 AM, Vlad Dobrotescu <vlad@dobrotescu.ca> wrote:
>
> Since it seems you have a very healthy view of real-world RAID,
> could you point out any significant issues when using a disk as
> a degraded md RAID1 (not accidental, but on purpose)?
Intentionally degraded raid1 seems oxymoronic to me. Like fat free ice cream. Uptime/data availability is the purpose of RAID, not backup. It sounds like a member drive is being used as a shelf or offsite backup, with periodic catch-up resyncing. If it's an n way mirror with 3 drives, two left connected, one off-site, then while technically degraded you could still lose one drive and have uptime and a backup. But I still think that's the wrong way to do it — this is probably more of a philosophical argument than a technical one.
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 14:46 ` Chris Murphy
@ 2014-07-12 17:15 ` Vlad Dobrotescu
0 siblings, 0 replies; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-12 17:15 UTC (permalink / raw)
To: Chris Murphy; +Cc: linux-raid
Chris Murphy <lists <at> colorremedies.com> writes:
>
> On Jul 12, 2014, at 7:30 AM, Vlad Dobrotescu <vlad <at> dobrotescu.ca>
> wrote:
> >
> > Since it seems you have a very healthy view of real-world RAID,
> > could you point out any significant issues when using a disk as
> > a degraded md RAID1 (not accidental, but on purpose)?
>
> Intentionally degraded raid1 seems oxymoronic to me. Like fat free
> ice cream. Uptime/data availability is the purpose of RAID, not
> backup. It sounds like a member drive is being used as a shelf or
> offsite backup, with periodic catch-up resyncing. If it's an n way
> mirror with 3 drives, two left connected, one off-site, then while
> technically degraded you could still lose one drive and have uptime
> and a backup. But I still think that's the wrong way to do it —
> this is probably more of a philosophical argument than a technical
> one.
>
> Chris Murphy
As mentioned in my original message, this would be a setup that can
accomodate "hot-replace" (without a full resync) before this feature
becomes available in mdadm 3.3.x ...
Vlad
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-12 3:21 ` Vlad Dobrotescu
@ 2014-07-13 3:15 ` NeilBrown
2014-07-13 6:07 ` Vlad Dobrotescu
0 siblings, 1 reply; 12+ messages in thread
From: NeilBrown @ 2014-07-13 3:15 UTC (permalink / raw)
To: Vlad Dobrotescu; +Cc: Chris Murphy, linux-raid
[-- Attachment #1: Type: text/plain, Size: 2587 bytes --]
On Fri, 11 Jul 2014 23:21:04 -0400 Vlad Dobrotescu <vlad@dobrotescu.ca> wrote:
> On 11/07/2014 21:20, NeilBrown wrote:
> > On Fri, 11 Jul 2014 19:09:48 -0600 Chris Murphy<lists@colorremedies.com>
> > wrote:
> >>> 7. I am sure I read somewhere (can't find the source anymore) that the "new" RAID features of LVM2 are based on a fork from the md code. If this is true, are you guys are contributing to that project as well?
> >> It's a good question, I'm not certain but my understanding is device mapper is leveraging existing md code in the kernel, rather than having forked and duplicated that code. They have their own user space tools and on-disk metadata so you can't use mdadm to manage it.
> > Your understanding is correct. It is the same code for managing RAID
> > functionality, but different code for managing metadata and different
> > user-space tools.
> > I'm hoping that one day the RAID support in LVM2 will be better than mdadm,
> > and then I can just fade away and no-one will notice that I am gone.
> >
> > http://downatthirdman.files.wordpress.com/2010/06/grinning-chesire-cat.jpg
> >
> > NeilBrown
> Thanks for the insight. I don't think your "hope" will ever come to
> reality, as they seem to be focused on a different "business case". I
> find the LV abstraction pretty cool, but, for my needs, their RAID
> support needs significant improvements. It seems a bit weird to have
> access to all the amazing md code from the kernel and not being able to
> use it to its value. That's why the idea of a fork made more sense to
> me. But, then, I don't know if in fact the features that seem important
> for me (i.e. adding disks/stripes to an array) are in the kernel code or
> in the user-space tools. Maybe I should take a look at the md/mdadm code
> ... until then, this question only comes from some kind of "academic"
> curiosity.
>
> Would you have any opinion on the questions 1-5?
>
> Vlad
1-yes
2-no, same -- Assuming I understand correctly. I suggest you test and see.
3-try it and see, but "yes"
4-Not wrong, no penalty
5-it would work but is dangerous - too easy to assemble wrongly.
Why are you using yesterday software to build tomorrows computer?
You've obviously done lots of research and figured out or guessed the correct
answer to your questions. I seriously recommend experimentation to clarify
remaining issues. If you've actually performed a few recoveries or
replacements without live data, you'll feel much more confident when a
disaster happens after you go live.
NeilBrown
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 828 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: RAID6 questions (mdadm 3.2.6/3.3.x)
2014-07-13 3:15 ` NeilBrown
@ 2014-07-13 6:07 ` Vlad Dobrotescu
0 siblings, 0 replies; 12+ messages in thread
From: Vlad Dobrotescu @ 2014-07-13 6:07 UTC (permalink / raw)
To: linux-raid
NeilBrown <neilb <at> suse.de> writes:
>
> On Fri, 11 Jul 2014 23:21:04 -0400 Vlad Dobrotescu <vlad <at>
> dobrotescu.ca> wrote:
> ...
>> Would you have any opinion on the questions 1-5?
>>
>> Vlad
>
> 1-yes
> 2-no, same -- Assuming I understand correctly. I suggest you
> test and see.
> 3-try it and see, but "yes"
> 4-Not wrong, no penalty
> 5-it would work but is dangerous - too easy to assemble wrongly.
> Why are you using yesterday software to build tomorrows computer?
It's a mater of confidence. When I was writing software, I was, just
as you seem to be, an adept of producing stuff people could really
rely on and depend upon, without sparing any effort in this direction.
Now Linux, as a whole, doesn't give me this "dependable" feeling, and
a significant amount of testing is able to alleviate at least some of
my concerns. When it comes to my home server (which will have many
roles), I'd really like to get to a solution that I can "set it and
forget about it", and which would require little human maintenance
(so that the required actions could be described on one page that
most non-IT people would understand).
That's why I'm stuck with something like CentOS, and I need to figure
out as many what-if scenarios as I can, so I can write the proper
scripts and set up the proper configurations that would take me as
close to worry-free as possible.
>
> You've obviously done lots of research and figured out or guessed
> the correct answer to your questions. I seriously recommend
> experimentation to clarify remaining issues. If you've actually
> performed a few recoveries or replacements without live data,
> you'll feel much more confident when a disaster happens after
> you go live.
>
> NeilBrown
>
My plan was to get to a short list of options that make sense (and,
with your help, I feel I got it) and start experimenting in a VM
before setting up the real hardware.
Thanks a lot.
Vlad
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2014-07-13 6:07 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2014-07-11 22:41 RAID6 questions (mdadm 3.2.6/3.3.x) Vlad Dobrotescu
2014-07-12 1:09 ` Chris Murphy
2014-07-12 1:20 ` NeilBrown
2014-07-12 3:21 ` Vlad Dobrotescu
2014-07-13 3:15 ` NeilBrown
2014-07-13 6:07 ` Vlad Dobrotescu
2014-07-12 12:29 ` Piergiorgio Sartor
2014-07-12 3:20 ` Vlad Dobrotescu
2014-07-12 3:46 ` Chris Murphy
2014-07-12 13:30 ` Vlad Dobrotescu
2014-07-12 14:46 ` Chris Murphy
2014-07-12 17:15 ` Vlad Dobrotescu
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).