From: Maarten v d Berg <maarten@vbvb.nl>
To: linux-raid@vger.kernel.org
Subject: Re: Extend raid 5
Date: Mon, 12 Jan 2004 11:52:02 +0100
Message-ID: <200401121152.02791.maarten@vbvb.nl>
In-Reply-To: <400270C2.3030009@smartjog.com>

On Monday 12 January 2004 11:02, Marc Bevand wrote:
> Maarten v d Berg wrote:
> > [...]

> > Otherwise, adding a 40 GB physical volume to a 120 GB raid5 / LVM set
> > just gives me one 120 GB partition and [room for] another 40 GB
> > partition. There is NO gain whatsoever using LVM here compared to when I
> > would just have added a single 40GB disk all by itself without using LVM
> > in the first place, is there ?
> >
> > This has always left me wondering.  Did I miss something (except using
> > some alpha FS-resize code...) ?
>
> This is precisely the point, you have to resize your filesystem so that
> the extra space added to your LVM device is used. There are many
> options, you can either use a userland tool (resize2fs, resize_reiserfs,
> ...) for resizing an *unmounted* filesystem, or you can do it in the
> kernel (mount -o remount,resize=<size> <device>). As you can see, doing
> it in the kernel has the extra advantage of allowing you to resize a
> *mounted* filesystem.

My raid filesystem is not part of the normal Linux FS tree, so for me it is 
perfectly fine to unmount it before resizing. I tend to use only reiserfs anyway.
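
For what it's worth, the offline route boils down to something like this 
for me (assuming the new disk has already been added to the volume group; 
the VG, LV and mount point names below are made up, adjust to taste):

  umount /mnt/raid                  # take the filesystem offline
  lvextend -L +40G /dev/vg0/data    # grow the LV into the new space
  resize_reiserfs /dev/vg0/data     # grow reiserfs to fill the enlarged LV
  mount /mnt/raid                   # back in service

The online alternative would be the remount with resize=<size> that Marc 
mentions, which skips the umount/mount steps.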

> Filesystem resizing is more stable than you think, for example the
> commercial program Partition Magic is based on resize2fs (but I am not
> sure if I can convince you with this example since proprietary software
> is evil :P).

I know Partition Magic, but when I tried it at the time it did not understand 
reiserfs, so I dropped it. I don't know what the current version can do.

In any case, I didn't want to run the risk at the time. I had something that 
could be called a "backup" (with some imagination), but it consisted of 
several tapes, disks and whatnot that could help with restoring after a 
disaster; it was by no means complete or recent.
In other words, restoring would have cost me at least a full weekend and 
somewhere between 5 and 20% of my data. I can accept that kind of risk for 
statistically 'normal' disasters, but not for experiments with a higher-
than-normal risk of losing the entire filesystem.

The problem with adding a non-redundant drive to an existing raid-based LVM 
setup remains, however. By adding that one drive and extending the FS onto 
it, you introduce a single point of failure. Bye bye raid redundancy...
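
To make it concrete, the extension would look roughly like this (device 
names are made up; md0 would be the raid5 array, hde the lone 40 GB disk):

  pvcreate /dev/hde1        # the bare 40 GB disk, no redundancy at all
  vgextend vg0 /dev/hde1    # vg0 now spans /dev/md0 *and* /dev/hde1
  # ...followed by the lvextend / resize_reiserfs steps from above

From then on, if hde dies you lose every extent that was placed on it, and 
most likely the whole filesystem with it; the raid5 redundancy of /dev/md0 
doesn't help one bit.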

Greetings,
Maarten


Thread overview: 7+ messages
2004-01-12  0:11 Extend raid 5 buliwyf
2004-01-12  2:13 ` Mike Fedyk
2004-01-12  9:21   ` Maarten v d Berg
2004-01-12  9:57     ` Måns Rullgård
2004-01-12 10:02     ` Marc Bevand
2004-01-12 10:52       ` Maarten v d Berg [this message]
2004-01-12 16:15     ` Guy
