linux-lvm.redhat.com archive mirror
From: Old Fart <rascal.jumper-747@cox.net>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] If one disk fails i loose everything?
Date: Fri, 09 Dec 2005 13:34:01 -0500	[thread overview]
Message-ID: <4399CE19.80601@cox.net> (raw)
In-Reply-To: <8777.208.178.77.200.1134146221.squirrel@mnementh.dragonhold.org>

gwood@dragonhold.org wrote:
> In the answers below I've taken 'one linear volume group' to actually mean
> 'one linear volume using all the space in the volume group'.  If this is
> not what you meant, can you please describe your setup?
>
>   
>>   1. If one hard disk fails (hardware) do I lose all the data stored on
>>   the VG?
>>     
> I'm not sure where you searched, but the answer to this is a fundamental
> characteristic of RAID0: there is no redundancy.  If you have a concat
> (rather than a stripe), then you /might/ be able to salvage /some/ data,
> but the chances are pretty slim that you'll get anything sane back out of
> it, and even that would need some fairly low-level recovery work, I would
> have thought.
>
>   
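For what it's worth, you can see how an existing LV is laid out -- linear
(concatenated) versus striped, and which PVs it touches -- with something
like this (the VG name "vgdata" and the sizes below are made-up examples):

  # which physical volumes back each logical volume, and how
  lvs -o lv_name,segtype,devices vgdata
  pvs -o pv_name,vg_name,pv_size,pv_free

  # a linear LV (the default) concatenates PVs; a striped LV interleaves
  # them -- neither gives you any redundancy if a disk dies
  lvcreate -L 100G -n lv_linear vgdata
  lvcreate -L 100G -i 2 -n lv_striped vgdata
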
>>   2. Can I add a new hard disk in the VG without having to format it
>>   before? (I mean if it is full of data can I just add it?)
>>     
> Not easily using LVM on Linux, no.  If the disk already has partitions on
> it, then the on-disk layout is not what LVM expects.  There may be tools
> out there to overlay the required metadata, but the existing partitions
> would have to shrink to make room, so the filesystems on them would need
> to support being shrunk.
>
>   
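Just to illustrate the usual way a new disk goes in: it has to be
initialised as a PV first, which destroys whatever is on it, so copy the
data off beforehand (device, VG, and LV names below are made up):

  # WARNING: pvcreate wipes the existing partition/filesystem layout
  pvcreate /dev/sdc
  vgextend vgdata /dev/sdc

  # then grow an LV and its filesystem into the new space
  lvextend -L +200G /dev/vgdata/lvdata
  resize2fs /dev/vgdata/lvdata    # assuming ext2/ext3 here
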
>>   3. In case of failure can I recover the data from a single disk on
>>   another box?
>>     
> What sort of failure?  Any VG that still has enough drives to give you a
> working volume group/volume can be imported on another machine and read.
> So if you have 3 volumes in the VG and only one of them is missing some
> of its blocks, the other 2 can be salvaged.  With the arrangement you're
> talking about, no.
>
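For the "move the surviving disks to another box" case, something along
these lines usually applies (the VG name is an example; --partial only
helps when you can live with the missing extents):

  # on the new machine, after attaching the disks
  vgscan
  vgchange -ay vgdata            # works only if every PV in the VG is present
  vgchange -ay --partial vgdata  # best-effort activation with PVs missing
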
> If you need to be able to recover the data, then a simple linear volume
> (with no redundancy) is not a good idea.
>
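If you want redundancy inside LVM itself rather than underneath it, a
mirrored LV is one option (a sketch only -- LVM mirrors also want somewhere
to keep a small mirror log, typically a third device):

  # two copies of the data on different PVs
  lvcreate -L 100G -m 1 -n lv_mirrored vgdata

  # or add a mirror to an existing LV after the fact
  lvconvert -m 1 vgdata/lvdata
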
> (In answer to the subject line: with the arrangement you're talking
> about, YES.)
>
Good discussion.  The need to protect data under various contingencies is
why I use RAID5 sets as the PVs.  With a hot spare in each set you can
survive losing two drives (provided the rebuild finishes between the
failures), hot-add disks, keep spares, and so on.
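
Roughly what that setup looks like here (device names and sizes are just
examples -- adjust for your own disks):

  # a 3-disk RAID5 with one hot spare, then used as a single PV
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
  pvcreate /dev/md0
  vgcreate vgdata /dev/md0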

-- 
Regards,

Old Fart


Thread overview: 4+ messages
2005-12-06 23:37 [linux-lvm] If one disk fails i loose everything? kyr
2005-12-09 16:34 ` Erik Ohrnberger
2005-12-09 16:37 ` gwood
2005-12-09 18:34   ` Old Fart [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4399CE19.80601@cox.net \
    --to=rascal.jumper-747@cox.net \
    --cc=linux-lvm@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).