From: Steven Haigh <netwiz@crc.id.au>
To: Rogier Wolff <R.E.Wolff@BitWizard.nl>
Cc: linux-raid@vger.kernel.org
Subject: Re: New raid level suggestion.
Date: Thu, 30 Dec 2010 19:47:10 +1100 [thread overview]
Message-ID: <4D1C470E.4080406@crc.id.au> (raw)
In-Reply-To: <20101230082356.GC2986@bitwizard.nl>
On 30/12/2010 7:23 PM, Rogier Wolff wrote:
>
> Hi,
>
> A friend has a webserver. He has 4 drive bays and due to previous
> problems he's not content to have 3 or 4 drives in a raid5
> configuration, but he wants a "hot spare" so that when it takes him a
> week to find a new drive and some time to drive to the hosting
> company, he isn't susceptible to a second drive crashing in the
> meantime.
>
> So in principle he'll build a 3-drive RAID5 with a hot spare....
>
> Now we've been told that raid5 performs badly for the workload that is
> expected. It would be much better to run the system in RAID10. However
> if he'd switch to RAID10, after a single drive failure he has a window
> of about a week where he has a 33% chance of a second drive failure
> being "fatal".
>
> So I was thinking.... He's resigned himself to a configuration where
> he pays for 4x the disk space and only gets 2x the available space.
>
> So he could run his array in RAID10 mode, however when a drive fails,
> a fallback to raid5 would be in order. In this case, after the resync
> a single-drive-failure tolerance is again obtained.
>
> In practice, scaling down to RAID5 is not easy (or possible). RAID4,
> however, should be doable.
>
> In fact this can almost be implemented entirely in userspace. Just
> remove the mirror drive from the underlying raid0, and reinitialize as
> raid4. If you do this correctly the data will still be there....
>
> Although doing this with an active filesystem running on these drives
> is probably impossible due to "device is in use" error messages....
>
> So: Has anybody tried this before?
> Can this be implemented without kernel support?
> Anybody feel like implementing this?
>
> Roger.
>
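[Editor's note: the "33% chance" above follows from the geometry of a
two-pair RAID10. A minimal sketch of the arithmetic (the layout is
assumed to be two mirror pairs, the classic 4-drive RAID10):]

```python
# After one drive in a two-pair RAID10 dies, a second failure is fatal
# only if it hits the dead drive's mirror partner: 1 of the 3 survivors.
from fractions import Fraction

def raid10_second_failure_fatal(pairs=2):
    """Probability that a second failure is fatal, one drive already lost."""
    survivors = 2 * pairs - 1
    fatal = 1  # only the failed drive's mirror partner is critical
    return Fraction(fatal, survivors)

def raid6_second_failure_fatal():
    """RAID6 tolerates any two drive failures."""
    return Fraction(0)

print(raid10_second_failure_fatal())  # 1/3 -- the "33%" in the post
print(raid6_second_failure_fatal())   # 0
```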
Maybe I'm not quite understanding right, however you can easily do RAID6
with 4 drives. That will give you two redundant drives, effectively give
you RAID5 if a drive fails, and save buttloads of messing around...
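[Editor's note: a minimal sketch of the RAID6 setup Steven suggests.
Device names are hypothetical; adjust for the actual system. This is a
command fragment requiring root, not a tested script.]

```shell
# Create a 4-drive RAID6: two drives' worth of usable capacity,
# and the array survives the loss of any two drives.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# After a single drive fails, the degraded RAID6 still tolerates one
# more failure (comparable to a healthy RAID5). Check array state with:
cat /proc/mdstat
```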
--
Steven Haigh
Email: netwiz@crc.id.au
Web: http://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Thread overview: 16+ messages
2010-12-30 8:23 New raid level suggestion Rogier Wolff
2010-12-30 8:47 ` Steven Haigh [this message]
2010-12-30 9:42 ` Rogier Wolff
2010-12-30 10:39 ` Stan Hoeppner
2010-12-30 11:58 ` John Robinson
2010-12-30 13:11 ` Stan Hoeppner
2010-12-30 18:10 ` John Robinson
2010-12-31 10:23 ` Stan Hoeppner
2010-12-30 23:20 ` Why won't mdadm start several RAIDs that appear to be fine? Jim Schatzman
2010-12-31 1:08 ` Neil Brown
2010-12-31 3:38 ` Why won't mdadm start several RAIDs that appear to be fine? Info from "mdadm -A --verbose" Jim Schatzman
2010-12-31 3:51 ` Why won't mdadm start several RAIDs that appear to be fine? SOLVED! Jim Schatzman
2011-01-03 4:33 ` New raid level suggestion Leslie Rhorer
2011-01-04 15:29 ` Rogier Wolff
2010-12-30 10:01 ` Neil Brown
2010-12-30 14:24 ` Ryan Wagoner