linux-raid.vger.kernel.org archive mirror
* Re: raid5/lvm setup questions
       [not found] <20060805165358.GA29177@cm.nu>
@ 2006-08-05 17:31 ` David Greaves
  2006-08-07 19:57   ` Nix
       [not found]   ` <20060805175908.GA31024@cm.nu>
  0 siblings, 2 replies; 7+ messages in thread
From: David Greaves @ 2006-08-05 17:31 UTC (permalink / raw)
  To: Shane; +Cc: linux-raid

Shane wrote:
> Hello all,
> 
> I'm building a new server which will use a number of disks
> and am not sure of the best way to go about the setup. 
> There will be 4 320gb SATA drives installed at first.  I'm
> just wondering how to set the system up for upgradability. 
> I'll be using raid5 but not sure whether to use lvm over
> the raid array.
> 
> By upgradability, I'd like to do several things.  Adding
> another drive of the same size to the array.  I understand
> reshape can be used here to expand the underlying block
> device.
Yes, it can.

> If the block device is the pv of an lvm volume group,
> would that also automatically expand, in which case I could
> create additional lvs in the new space?  If this isn't
> automatic, are there ways to do it manually?
Not automatic AFAIK - but doable.
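
If you're on a recent lvm2 (with pvresize), something like this - a sketch,
untested, the vg and lv names are illustrative:

  mdadm --grow /dev/md0 --raid-devices=5  # reshape onto the extra disk
  pvresize /dev/md0                       # grow the pv to match the device
  lvcreate -L 100G -n extra myvg          # put the new space to use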

> What about replacing all four drives with larger units. 
> Say going from 300gbx4 to 500gbx4.  Can one replace them
> one at a time, going through fail/rebuild as appropriate
> and then expand the array into the unused space
Yes.

> or would
> one have to reinstall at that point.
No.
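
The usual dance, per disk - a sketch only, device names illustrative:

  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # physically swap in the bigger disk, partition it, then:
  mdadm /dev/md0 --add /dev/sdb1
  # wait for the resync to finish; repeat for each disk; finally:
  mdadm --grow /dev/md0 --size=max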


None of the requirements above requires you to layer lvm over the top.

That's not to say don't do it - but you certainly don't *need* to do it.

Pros:
* allows snapshots (for consistent backups)
* allows various lvm block movements etc.
* can later grow the vg with discrete additional block devices, without
  raid5 grow limitations (eg needing same-ish sized disks) - sketch below
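
On that last point, a minimal sketch - untested, device and vg names
illustrative:

  pvcreate /dev/sde1             # prepare the new disk as a pv
  vgextend myvg /dev/sde1        # add it to the existing vg
  lvcreate -L 50G -n newlv myvg  # carve a new lv out of the added space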

Cons:
* extra complexity -> risk of bugs/admin errors...
* performance impact

As an example of the cons: I've just set up lvm2 over my raid5 and whilst
testing snapshots, the first thing that happened was a kernel BUG and an oops...

David


* Re: raid5/lvm setup questions
       [not found]   ` <20060805175908.GA31024@cm.nu>
@ 2006-08-05 21:02     ` Martin Schröder
  2006-08-07 23:29     ` Neil Brown
  1 sibling, 0 replies; 7+ messages in thread
From: Martin Schröder @ 2006-08-05 21:02 UTC (permalink / raw)
  To: Shane; +Cc: David Greaves, linux-raid

2006/8/5, Shane <shane@cm.nu>:
> Well, the reason I was looking at LVM is that since this
> is a fairly big array, I didn't want to lose a bunch of
> space with ext3 inodes.  For example, the PostgreSQL

Then forget about ext{2|3} and use xfs or reiserfs. ext3 is limited to
4TB anyway.
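
(You can also trade inodes for space on ext3 at mkfs time - illustrative
only, device name assumed:

  mke2fs -j -i 1048576 /dev/myvg/data  # one inode per MiB, not the usual 16KiB
  mkfs.xfs /dev/myvg/data              # xfs allocates inodes dynamically

but xfs sidesteps the problem entirely.)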

Best
   Martin


* Re: raid5/lvm setup questions
  2006-08-05 17:31 ` raid5/lvm setup questions David Greaves
@ 2006-08-07 19:57   ` Nix
       [not found]     ` <20060807200455.GA29837@cm.nu>
  2006-08-07 22:32     ` David Greaves
       [not found]   ` <20060805175908.GA31024@cm.nu>
  1 sibling, 2 replies; 7+ messages in thread
From: Nix @ 2006-08-07 19:57 UTC (permalink / raw)
  To: David Greaves; +Cc: Shane, linux-raid

On 5 Aug 2006, David Greaves prattled cheerily:
> As an example of the cons: I've just set up lvm2 over my raid5 and whilst
> testing snapshots, the first thing that happened was a kernel BUG and an oops...

I've been backing up using writable snapshots on LVM2 over RAID-5 for
some time. No BUGs.
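
For the record, the pattern is roughly this - names illustrative:

  lvcreate -s -L 1G -n nightly /dev/vg/data  # writable COW snapshot
  mount /dev/vg/nightly /mnt/snap
  # ... back up /mnt/snap at leisure ...
  umount /mnt/snap
  lvremove -f /dev/vg/nightly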

I think the blame here can probably be laid at the snapshots' door,
anyway: they're still a little wobbly and the implementation is pretty
complex; bugs surface on a regular basis.

-- 
`We're sysadmins. We deal with the inconceivable so often I can clearly 
 see the need to define levels of inconceivability.' --- Rik Steenwinkel


* Re: raid5/lvm setup questions
       [not found]     ` <20060807200455.GA29837@cm.nu>
@ 2006-08-07 20:20       ` Chet McNeill
  2006-08-07 22:28       ` David Greaves
  1 sibling, 0 replies; 7+ messages in thread
From: Chet McNeill @ 2006-08-07 20:20 UTC (permalink / raw)
  To: Shane; +Cc: linux-raid

> I seem to recall patches to md floating around a couple
> years back for partitioning of md devices.  Are those still
> available somewhere?

I believe the patches you are referring to are now included in the
standard 2.6 kernel.

-Chet


* Re: raid5/lvm setup questions
       [not found]     ` <20060807200455.GA29837@cm.nu>
  2006-08-07 20:20       ` Chet McNeill
@ 2006-08-07 22:28       ` David Greaves
  1 sibling, 0 replies; 7+ messages in thread
From: David Greaves @ 2006-08-07 22:28 UTC (permalink / raw)
  To: Shane; +Cc: linux-raid

Shane wrote:
> On Mon, Aug 07, 2006 at 08:57:13PM +0100, Nix wrote:
>> On 5 Aug 2006, David Greaves prattled cheerily:
>>> As an example of the cons: I've just set up lvm2 over my raid5 and whilst
>>> testing snapshots, the first thing that happened was a kernel BUG and an oops...
>> I've been backing up using writable snapshots on LVM2 over RAID-5 for
>> some time. No BUGs.
> 
> Just performed some basic throughput tests using 4 SATA
> disks in a raid5 array.  The read performance on the
> /dev/mdx device runs around 180mbps but if lvm is layered
> over that, reads on the lv are around 130mbps.  Not an
> insubstantial reduction.
Check the readahead at the various block layers:
 blockdev --setra <sectors> <device>

I think I found the best throughput (for me) was with 0 readahead for /dev/hdX,
0 for /dev/mdX and lots for /dev/vg/lv
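
e.g. - values illustrative; --setra counts 512-byte sectors:

  blockdev --setra 0 /dev/sda       # repeat for each component disk
  blockdev --setra 0 /dev/md0       # the raid device
  blockdev --setra 8192 /dev/vg/lv  # the lv you actually read from
  blockdev --getra /dev/vg/lv       # verify it took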

> 
> I seem to recall patches to md floating around a couple
> years back for partitioning of md devices.  Are those still
> available somewhere?
man mdadm and see --auto...
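
For instance - illustrative, check the man page for your mdadm version:

  mdadm --create /dev/md_d0 --auto=part -l 5 -n 4 /dev/sd[abcd]1

creates a partitionable array; run fdisk on /dev/md_d0 and the
/dev/md_d0pN partition devices appear.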

David



* Re: raid5/lvm setup questions
  2006-08-07 19:57   ` Nix
       [not found]     ` <20060807200455.GA29837@cm.nu>
@ 2006-08-07 22:32     ` David Greaves
  1 sibling, 0 replies; 7+ messages in thread
From: David Greaves @ 2006-08-07 22:32 UTC (permalink / raw)
  To: Nix; +Cc: Shane, linux-raid

Nix wrote:
> On 5 Aug 2006, David Greaves prattled cheerily:
that's me :)
>> As an example of the cons: I've just set up lvm2 over my raid5 and whilst
>> testing snapshots, the first thing that happened was a kernel BUG and an oops...
> 
> I've been backing up using writable snapshots on LVM2 over RAID-5 for
> some time. No BUGs.
I tried again, but it didn't recur.
I sent a report to lkml.

> I think the blame here is likely to be layable at the snapshots' door,
> anyway: they're still a little wobbly and the implementation is pretty
> complex: bugs surface on a regular basis.
Hmmm. Bugs in a backup strategy. Hmmm.

I think I can live with a nightly shutdown of the daemons whilst rsync does its
stuff across the LAN.
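
Something like - paths and service names illustrative:

  /etc/init.d/postgresql stop
  rsync -a --delete /var/lib/pgsql/ backuphost:/backups/pgsql/
  /etc/init.d/postgresql start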

David



* Re: raid5/lvm setup questions
       [not found]   ` <20060805175908.GA31024@cm.nu>
  2006-08-05 21:02     ` Martin Schröder
@ 2006-08-07 23:29     ` Neil Brown
  1 sibling, 0 replies; 7+ messages in thread
From: Neil Brown @ 2006-08-07 23:29 UTC (permalink / raw)
  To: Shane; +Cc: David Greaves, linux-raid

On Saturday August 5, shane@cm.nu wrote:
> On Sat, Aug 05, 2006 at 06:31:37PM +0100, David Greaves wrote:
> > > Say going from 300gbx4 to 500gbx4.  Can one replace them
> > > one at a time, going through fail/rebuild as appropriate
> > > and then expand the array into the unused space
> > Yes.
> 
> I didn't see anything in the mdadm manual on this.  Would
> one just do a --grow /dev/md0 once the disks were changed
> out?  It looks like --grow is used to change the number of
> devices in the array but not the device size itself.

It does both (and more).

 mdadm --grow /dev/md0 --raid-devices=5
changes the number of drives to 5.

 mdadm --grow /dev/md0 --size=max
changes the used-size of each drive to the maximum available.

 mdadm --grow /dev/md0 --bitmap=internal
adds an internal write-intent bitmap.

 mdadm --grow /dev/md0 --chunksize=128
might change the chunksize to 128k... but doesn't yet.
Maybe one day :-)
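
(And once the array has grown, whatever sits on top needs its own resize -
a sketch, names illustrative:

  pvresize /dev/md0               # if lvm is layered over md
  lvextend -L +200G /dev/myvg/lv  # grow the lv into the new space
  resize2fs /dev/myvg/lv          # grow the fs (ext3; xfs uses xfs_growfs)
)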

NeilBrown


end of thread

Thread overview: 7+ messages
     [not found] <20060805165358.GA29177@cm.nu>
2006-08-05 17:31 ` raid5/lvm setup questions David Greaves
2006-08-07 19:57   ` Nix
     [not found]     ` <20060807200455.GA29837@cm.nu>
2006-08-07 20:20       ` Chet McNeill
2006-08-07 22:28       ` David Greaves
2006-08-07 22:32     ` David Greaves
     [not found]   ` <20060805175908.GA31024@cm.nu>
2006-08-05 21:02     ` Martin Schröder
2006-08-07 23:29     ` Neil Brown
