linux-raid.vger.kernel.org archive mirror
From: Christopher Smith <csmith@nighthawkrad.net>
To: Sebastian Kuzminsky <seb.kuzminsky@gmail.com>
Cc: Robin Bowes <robin-lists@robinbowes.com>, linux-raid@vger.kernel.org
Subject: Re: Best way to achieve large, expandable, cheap storage?
Date: Tue, 04 Oct 2005 14:09:53 +1000
Message-ID: <43420091.9060601@nighthawkrad.net>
In-Reply-To: <7f55de720510030933k26608cfsda63326a8438e35d@mail.gmail.com>

Sebastian Kuzminsky wrote:
> On 10/1/05, Christopher Smith <csmith@nighthawkrad.net> wrote:
> 
>>I RAID5 the drives together and glue multiple sets of 4 drives together
>>into a single usable chunk using LVM.
> 
> 
> Sounds pretty cool.  I've used software RAID but never LVM, let me see
> if I understand your setup:
> 
> At the lowest level, you have 4-disk controller cards, each connected
> to a set of 4 disks.  Each set of 4 has a software RAID-5.  All the
> RAID-5 arrays are used as LVM physical volumes.  These PVs are part of
> a single volume group, from which you make logical volumes as needed.
> 
> When you want more disk, you buy 4 big modern disks (and a 4x
> controller if needed), RAID-5 them, extend the VG onto them, and
> extend the LV(s) on the VG.  Then I guess you have to unmount the
> filesystem(s) on the LV(s), resize them, and remount them.
> 
> If you get low on room in the case or it gets too hot or noisy, you
> have to free up an old, small RAID array.  You unmount, resize, and
> remount the filesystem(s), reduce the LV(s) and the VG, and then
> you're free to pull the old RAID array from the case.

Yep, that's pretty much bang on.  The only thing you've missed is using 
pvmove to physically move the data off the soon-to-be-decommissioned 
PVs (i.e. the RAID arrays).

Be warned, for those who haven't used it before, pvmove is _very_ slow.
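
For anyone following along at home, the whole grow/retire cycle described 
above looks roughly like this.  All device, VG and LV names (/dev/md0, 
/dev/md1, /dev/sd[efgh]1, bigvg, biglv) and the sizes are made up for 
illustration; adjust to your own layout:

```shell
# --- Grow: build a new 4-disk RAID-5 and fold it into the volume group ---
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[efgh]1
pvcreate /dev/md1                 # make the new array an LVM physical volume
vgextend bigvg /dev/md1           # add it to the existing volume group
lvextend -L +500G /dev/bigvg/biglv

# ext2/ext3 of this era needed an offline resize:
umount /mnt/store
resize2fs /dev/bigvg/biglv        # grow the fs to fill the enlarged LV
mount /dev/bigvg/biglv /mnt/store

# --- Retire: migrate extents off an old array, then drop it from the VG ---
pvmove /dev/md0                   # slow: copies every allocated extent
                                  # to free space on the other PVs
vgreduce bigvg /dev/md0           # remove the now-empty PV from the VG
pvremove /dev/md0                 # wipe the LVM label
mdadm --stop /dev/md0             # array is now free to pull from the case
```

Note the ordering: when growing you extend the LV first and the filesystem 
second; pvmove needs enough free extents on the remaining PVs to hold 
everything it is evacuating.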

>>Apart from the actual hardware installations and removals, the various
>>reconfigurations have been quite smooth and painless, with LVM allowing
>>easy migration of data to/from RAID devices, division of space, etc.
>>I've had 3 disk failures, none of which have resulted in any data loss.
>>  The "data store" has been moved across 3 very different physical
>>machines and 3 different Linux installations (Redhat 9 -> RHEL3 -> FC4).
> 
> 
> Your data survives one disk per PV croaking, but two disks out on any
> one PV causes complete data loss, assuming you use the stripe mapping.

Yep, that's correct.  I've never lost more than one disk out of an array 
at once and I've always replaced any disk failures the same day.  I lost 
two of the 40GB drives (about 6 months apart - back before I had decent 
cooling on them) and one of the 120GB drives.
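
Worth spelling out for anyone copying this setup: catching the first 
failure quickly is exactly what stops a second failure from becoming data 
loss, and mdadm can do the watching for you.  The mail address and device 
name below are placeholders:

```shell
# Quick health overview of all md arrays
cat /proc/mdstat

# Per-array detail -- look for "State : clean" and no failed devices
mdadm --detail /dev/md0

# Have mdadm mail you the moment a disk drops out (most distros also
# ship an init script that runs this as a daemon)
mdadm --monitor --scan --mail=root@localhost --daemonise
```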

> You use SATA, which does not support SMART yet, right?  So you get no
> warning of pending drive failures yet.

Yep.  The only annoyance.  Strictly speaking the drives themselves do 
SMART just fine; it's the libata driver that can't pass the SMART 
commands through yet.  I eagerly await the ability to check my SATA 
disks with SMART.
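
(For anyone reading this later: once the kernel gained SMART passthrough 
for libata, smartmontools could talk to SATA disks -- early versions 
needed the explicit "-d ata" device type.  Device name and mail address 
below are examples only:)

```shell
# Full SMART report for a SATA disk behind libata
smartctl -d ata -a /dev/sda

# Just the overall health verdict
smartctl -d ata -H /dev/sda

# For continuous monitoring, a line like this in /etc/smartd.conf
# runs checks and mails on trouble:
#   /dev/sda -d ata -a -m root@localhost
```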

CS

