Linux LVM users
From: Eric Monjoin <eric@monjoin.net>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Software raid on top of lvm logical volume
Date: Thu, 28 Oct 2004 08:01:27 +0200	[thread overview]
Message-ID: <41808B37.40406@monjoin.net> (raw)
In-Reply-To: <20041028011738.GD13737@kluge.net>

Theo Van Dinter wrote:

Well, it's because we are having problems doing it that way. We have a server 
connected to two EMC Symmetrix arrays, from which we are assigned some 70 GB 
and 40 GB LUNs. We use PowerPath to manage the dual paths to the LUNs, and so 
I first created the mirrors like this:

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/emcpowera1
        raid-disk               0
        device                  /dev/emcpowerf1
        raid-disk               1
#        failed-disk        1


raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/emcpowerb1
        raid-disk               0
        device                  /dev/emcpowerg1
        raid-disk                1
#        failed-disk        1

raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/emcpowerc1
        raid-disk               0
        device                  /dev/emcpowerh1
        raid-disk                1
#        failed-disk        1


raiddev /dev/md3
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/emcpowerd1
        raid-disk               0
        device                  /dev/emcpoweri1
        raid-disk               1
#        failed-disk        1
......
and so on, up to raiddev /dev/md9.
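
The arrays built from a raidtab like this are normally initialised with 
raidtools' mkraid; for anyone using mdadm instead, the rough equivalent for 
the first mirror would be:

    # raidtools: build md0 from the raidtab entry above
    mkraid /dev/md0

    # rough mdadm equivalent (no raidtab needed)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/emcpowera1 /dev/emcpowerf1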

So /proc/mdstat gives:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 15                 
md9 : active raid1 emcpowerd1[1] emcpowero1[0]
      42829184 blocks [2/2] [UU]
     
md8 : active raid1 emcpowerc1[1] emcpowern1[0]
      42829184 blocks [2/2] [UU]
     
md7 : active raid1 emcpowerb1[1] emcpowerm1[0]
      42829184 blocks [2/2] [UU]
     
md6 : active raid1 emcpowera1[1] emcpowerl1[0]
      42829184 blocks [2/2] [UU]
     
md5 : active raid1 emcpowerp1[1] emcpowerk1[0]
      42829184 blocks [2/2] [UU]
     
md4 : active raid1 emcpowerj1[1] emcpowere1[0]
      71384704 blocks [2/2] [UU]
     
md3 : active raid1 emcpoweri1[1] emcpowerd1[0]
      71384704 blocks [2/2] [UU]
     
md2 : active raid1 emcpowerc1[0] emcpowerh1[1]
      71384704 blocks [2/2] [UU]
     
md1 : active raid1 emcpowerg1[1] emcpowerb1[0]
      71384704 blocks [2/2] [UU]
     
md0 : active raid1 emcpowerf1[1] emcpowera1[0]
      71384704 blocks [2/2] [UU]
     
unused devices: <none>

But after a while I get this:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 10                 
md9 : active raid1 [dev e9:31][1] [dev e8:e1][0]
      42829184 blocks [2/2] [UU]
     
md8 : active raid1 [dev e9:21][1] [dev e8:d1][0]
      42829184 blocks [2/2] [UU]
     
md7 : active raid1 [dev e9:11][1] [dev e8:c1][0]
      42829184 blocks [2/2] [UU]
     
md6 : active raid1 [dev e9:01][1] [dev e8:b1][0]
      42829184 blocks [2/2] [UU]
     
md5 : active raid1 [dev e8:f1][1] [dev e8:a1][0]
      42829184 blocks [2/2] [UU]
     
md4 : active raid1 [dev e8:91][1] [dev e8:41][0]
      71384704 blocks [2/2] [UU]
     
md3 : active raid1 [dev e8:81][1] [dev e8:31][0]
      71384704 blocks [2/2] [UU]
     
md2 : active raid1 [dev e8:71][1] [dev e8:21][0]
      71384704 blocks [2/2] [UU]
     
md1 : active raid1 [dev e8:61][1] [dev e8:11][0]
      71384704 blocks [2/2] [UU]
     
md0 : active raid1 [dev e8:51][1] [dev e8:01][0]
      71384704 blocks [2/2] [UU]
     
unused devices: <none>
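
As far as I can tell, the [dev e8:51]-style entries are just the hex 
major:minor numbers md falls back to when it can no longer resolve the 
device names, so they can be mapped back to the emcpower nodes by hand, 
for example:

    # compare the hex pairs with the real device nodes
    # (0xe8 = 232, 0xe9 = 233, 0x51 = 81, and so on)
    ls -l /dev/emcpower*
    cat /proc/partitions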

And if we try to rebuild the mirrors after losing access to one of the EMC 
arrays, we get really bad results:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 26                 
md9 : active raid1 emcpowerd1[2] [dev e8:e1][0]
      42829184 blocks [2/1] [U_]
      [>....................]  recovery =  1.4% (630168/42829184) finish=68.1min speed=10315K/sec
md8 : active raid1 emcpowerc1[2] [dev e8:d1][0]
      42829184 blocks [2/1] [U_]
     
md7 : active raid1 emcpowerb1[2] [dev e8:c1][0]
      42829184 blocks [2/1] [U_]
     
md6 : active raid1 emcpowera1[2] [dev e8:b1][0]
      42829184 blocks [2/1] [U_]
     
md5 : active raid1 emcpowerp1[2] [dev e8:a1][0]
      42829184 blocks [2/1] [U_]
     
md4 : active raid1 emcpowerj1[2] [dev e8:41][0]
      71384704 blocks [2/1] [U_]
     
md3 : active raid1 emcpoweri1[2] [dev e8:31][0]
      71384704 blocks [2/1] [U_]
     
md2 : active raid1 emcpowerh1[2] [dev e8:21][0]
      71384704 blocks [2/1] [U_]
     
md1 : active raid1 emcpowerg1[2] [dev e8:11][0]
      71384704 blocks [2/1] [U_]
     
md0 : active raid1 emcpowerf1[2] [dev e8:01][0]
      71384704 blocks [2/1] [U_]
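
For reference, re-adding a lost half is normally just raidhotadd with 
raidtools, or mdadm --add, e.g. for md0:

    # raidtools
    raidhotadd /dev/md0 /dev/emcpowerf1

    # or the mdadm equivalent
    mdadm /dev/md0 --add /dev/emcpowerf1

But as the output above shows, the re-added disks come back under their 
emcpower names while the surviving halves keep the bare dev numbers.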
     

So maybe it would be better to create the RAID device on top of LVM logical 
volumes instead.
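
Roughly what I have in mind (the volume group and LV names below are just 
made up, one volume group per Symmetrix):

    pvcreate /dev/emcpowera1 /dev/emcpowerb1
    pvcreate /dev/emcpowerf1 /dev/emcpowerg1
    vgcreate vg_symm1 /dev/emcpowera1 /dev/emcpowerb1
    vgcreate vg_symm2 /dev/emcpowerf1 /dev/emcpowerg1
    lvcreate -L 100G -n lv_data1 vg_symm1
    lvcreate -L 100G -n lv_data2 vg_symm2

    # then mirror the two logical volumes with md
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/vg_symm1/lv_data1 /dev/vg_symm2/lv_data2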


>On Thu, Oct 28, 2004 at 12:02:06AM +0200, Eric Monjoin wrote:
>  
>
>>I would like to know if it's possible (works perfectly) to create a 
>>software mirror (md0) on top of  2 LVM logical volumes :
>>    
>>
>
>You'd usually want to make your raid devices first, then put LVM on
>top of it.  I can't really think of any benefits of doing it the other
>way around.
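
For comparison, the usual layering Theo is describing would be the md 
mirrors first and LVM on top of them, along these lines (vg_data/lv_data 
are made-up names, and md0/md1 are assumed to be mirrors built as above):

    pvcreate /dev/md0 /dev/md1
    vgcreate vg_data /dev/md0 /dev/md1
    lvcreate -L 100G -n lv_data vg_data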

Thread overview: 20+ messages
2004-10-27 22:02 [linux-lvm] Software raid on top of lvm logical volume Eric Monjoin
2004-10-28  1:17 ` Theo Van Dinter
2004-10-28  6:01   ` Eric Monjoin [this message]
2004-10-28  6:35     ` Luca Berra
2004-10-28 19:17       ` [linux-lvm] " Peter T. Breuer
2004-11-01 16:01         ` Michael T. Babcock
2004-11-01 16:51           ` Erik Ohrnberger
2004-11-01 22:03             ` Clint Byrum
2004-11-01 22:07               ` Theo Van Dinter
2004-11-02 15:46             ` Michael T. Babcock
2004-10-28 18:54   ` [linux-lvm] " Michael T. Babcock
2004-10-30 16:55     ` [linux-lvm] " Peter T. Breuer
2004-10-30 17:10     ` [linux-lvm] What is the best way to configure LVM + RAID? Erik Ohrnberger
2004-10-31 17:34       ` [linux-lvm] " Peter T. Breuer
2004-10-30 17:27     ` [linux-lvm] Software raid on top of lvm logical volume Theo Van Dinter
2004-10-30 19:22       ` [linux-lvm] LVM DISK DIE "KieZz"
2004-10-31 16:48     ` [linux-lvm] Software raid on top of lvm logical volume Markus Baertschi
2004-11-01  6:46       ` Scott Serr
2004-11-01 15:38       ` Michael T. Babcock
2004-11-01 17:02         ` [linux-lvm] " Peter T. Breuer
