* Create software RAID from active partition
From: Michael Guyver @ 2008-09-01 12:32 UTC (permalink / raw)
To: linux-raid
Hi there,
I've got a question about creating a RAID-1 array on a remote server -
ie: if the operation fails, it's going to be very expensive. The
server has two 200 GB drives and during a hurried re-install of CentOS
5.2 the creation of software RAID partitions was omitted. This means
that the array would include the currently active partition on which
the kernel is installed. So my first question is as to the feasibility
of this operation, and its safety: any comments?
The following may give an insight into the current setup should you
need it to answer my question more accurately.
-------------------------------------------------------------
# fdisk -l
Disk /dev/sda: 203.9 GB, 203928109056 bytes
255 heads, 63 sectors/track, 24792 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 24773 198884700 8e Linux LVM
Disk /dev/sdb: 203.9 GB, 203928109056 bytes
255 heads, 63 sectors/track, 24792 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 13 104391 83 Linux
/dev/sdb2 14 24773 198884700 8e Linux LVM
-------------------------------------------------------------
# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
-------------------------------------------------------------
# pvdisplay
Found duplicate PV g7ZWtzNQcHx2PMQghP0NBHDXuYcaYqAt: using /dev/sdb2
not /dev/sda2
--- Physical volume ---
PV Name /dev/sdb2
VG Name VolGroup00
PV Size 189.67 GB / not usable 15.34 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 6069
Free PE 0
Allocated PE 6069
PV UUID g7ZWtz-NQcH-x2PM-QghP-0NBH-DXuY-caYqAt
-------------------------------------------------------------
# lvdisplay
Found duplicate PV g7ZWtzNQcHx2PMQghP0NBHDXuYcaYqAt: using /dev/sdb2
not /dev/sda2
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID rvPZJS-6Z7a-kXzk-aLcM-vv13-eRCK-kjg6I1
LV Write Access read/write
LV Status available
# open 1
LV Size 187.72 GB
Current LE 6007
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID zvxDsa-MZXn-akSA-DlzC-49IX-65Fo-HPBuyJ
LV Write Access read/write
LV Status available
# open 1
LV Size 1.94 GB
Current LE 62
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
-------------------------------------------------------------
Judging from the "Found duplicate " messages produced by pvdisplay and
lvdisplay, as well as the mount output, it seems that the root
partition is being loaded from /dev/sdb2. What /dev/sda2 is doing
right now is, I guess, completely sweet FA.
Can anyone point me to the way of finding out a file's physical
location on disc so that I can verify this is the case? So, for
example, I would like to check that my latest edit to ~/somefile.txt
is in fact on /dev/sdb1 at location xyz and that can be verified by
using dd to copy those bytes to a file in /tmp.
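Something like the following is roughly what I have in mind - the offset and length below are completely made up, just to illustrate the idea:
dd if=/dev/sdb2 of=/tmp/check.bin bs=512 skip=123456 count=8
cmp /tmp/check.bin /tmp/expected.bin
..where the skip value would be the sector offset I'd somehow have worked out for the file's data.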
Having started reading the docs related to creating a RAID device, it
seems likely that the order of the listed devices is significant when
the array is initialised. However, I haven't yet been able to confirm
that were I to write
mdadm -C /dev/md0 --level raid1 --raid-disks 2 /dev/sdb1 /dev/sda1
that it would start to copy data from sdb1 to sda1 - or have I
misunderstood the initialisation process?
These questions may not seem very well framed, but some initial
guidance while I'm still reading into the problem would be
appreciated.
Best wishes
Michael
* Re: Create software RAID from active partition
From: Steve Fairbairn @ 2008-09-01 12:45 UTC (permalink / raw)
To: Michael Guyver; +Cc: linux-raid
Michael Guyver wrote:
> I've got a question about creating a RAID-1 array on a remote server -
> ie: if the operation fails, it's going to be very expensive. The
> server has two 200 GB drives and during a hurried re-install of CentOS
> 5.2 the creation of software RAID partitions was omitted. This means
> that the array would include the currently active partition on which
> the kernel is installed. So my first question is as to the feasibility
> of this operation, and its safety: any comments?
>
That would imply that one of the disks is currently doing nothing, which
would make it feasible as far as I can see.
> # pvdisplay
> Found duplicate PV g7ZWtzNQcHx2PMQghP0NBHDXuYcaYqAt: using /dev/sdb2
> not /dev/sda2
>
This would seem to say to me that it's using sdb for all data currently,
but...
>
> Can anyone point me to the way of finding out a file's physical
> location on disc so that I can verify this is the case? So, for
> example, I would like to check that my latest edit to ~/somefile.txt
> is in fact on /dev/sdb1 at location xyz and that can be verified by
> using dd to copy those bytes to a file in /tmp.
>
I can't help you here as I never bother with LVM, so I've no idea how
to work out which physical device the mounted LVM is on.
> Having started reading the docs related to creating a RAID device, it
> seems likely that the order of the listed devices is significant when
> the array is initialised. However, I haven't yet been able to confirm
> that were I to write
>
> mdadm -C /dev/md0 --level raid1 --raid-disks 2 /dev/sdb1 /dev/sda1
>
> that it would start to copy data from sdb1 to sda1 - or have I
> misunderstood the initialisation process?
>
Please accept my standard disclaimer of 'I'm no expert, and I may be
wrong.'...
I don't believe you'd want to do this. What I think you'd want to do
instead is create a degraded RAID 1 array using just the currently
unused disk, then install LVM and a filesystem on that array, then copy
all your data across.
Make sure you install a boot loader in the boot block of the disk you've
made part of the array, and do whatever else you can to ensure the
system next boots off the new md device.
Reboot, and then ensure you really are using the md device for your
mounted filesystems...
Once you are certain, add the now unused drive into your RAID 1 array,
and the replication should start.
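Very roughly - and with the same disclaimer, plus a warning that the device
and volume group names below are my guesses from your fdisk output, not
anything I've verified - I'd expect the sequence to look something like:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda2
pvcreate /dev/md0
vgcreate VolGroupNew /dev/md0
lvcreate -l 100%FREE -n LogVolRoot VolGroupNew
mkfs.ext3 /dev/VolGroupNew/LogVolRoot
(copy the data across, sort out the boot loader, reboot onto the new array)
mdadm /dev/md0 --add /dev/sdb2
'missing' is what makes the array degraded, and the final --add is what kicks
off the replication once you're happy the new system boots.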
> These questions may not seem very well framed, but some initial
> guidance while I'm still reading into the problem would be
> appreciated.
>
Others will be able to give you more specific answers. I unfortunately
don't have linux in front of me, so can't check out the required mdadm
incantations.
Hope this helps a little,
Steve.
* Re: Create software RAID from active partition
From: Alan Jenkins @ 2008-09-01 13:14 UTC (permalink / raw)
To: Michael Guyver; +Cc: linux-raid
Michael Guyver wrote:
> Hi there,
>
> I've got a question about creating a RAID-1 array on a remote server -
> ie: if the operation fails, it's going to be very expensive. The
> server has two 200 GB drives and during a hurried re-install of CentOS
> 5.2 the creation of software RAID partitions was omitted. This means
> that the array would include the currently active partition on which
> the kernel is installed. So my first question is as to the feasibility
> of this operation, and its safety: any comments?
>
> The following may give an insight into the current setup should you
> need it to answer my question more accurately.
>
> -------------------------------------------------------------
> # fdisk -l
> Disk /dev/sda: 203.9 GB, 203928109056 bytes
> 255 heads, 63 sectors/track, 24792 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sda1 * 1 13 104391 83 Linux
> /dev/sda2 14 24773 198884700 8e Linux LVM
>
> Disk /dev/sdb: 203.9 GB, 203928109056 bytes
> 255 heads, 63 sectors/track, 24792 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 * 1 13 104391 83 Linux
> /dev/sdb2 14 24773 198884700 8e Linux LVM
> -------------------------------------------------------------
> # mount
> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> proc on /proc type proc (rw)
> sysfs on /sys type sysfs (rw)
> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> /dev/sda1 on /boot type ext3 (rw)
> tmpfs on /dev/shm type tmpfs (rw)
> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
> -------------------------------------------------------------
> # pvdisplay
> Found duplicate PV g7ZWtzNQcHx2PMQghP0NBHDXuYcaYqAt: using /dev/sdb2
> not /dev/sda2
> --- Physical volume ---
> PV Name /dev/sdb2
> VG Name VolGroup00
> PV Size 189.67 GB / not usable 15.34 MB
> Allocatable yes (but full)
> PE Size (KByte) 32768
> Total PE 6069
> Free PE 0
> Allocated PE 6069
> PV UUID g7ZWtz-NQcH-x2PM-QghP-0NBH-DXuY-caYqAt
> -------------------------------------------------------------
> # lvdisplay
> Found duplicate PV g7ZWtzNQcHx2PMQghP0NBHDXuYcaYqAt: using /dev/sdb2
> not /dev/sda2
> --- Logical volume ---
> LV Name /dev/VolGroup00/LogVol00
> VG Name VolGroup00
> LV UUID rvPZJS-6Z7a-kXzk-aLcM-vv13-eRCK-kjg6I1
> LV Write Access read/write
> LV Status available
> # open 1
> LV Size 187.72 GB
> Current LE 6007
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:0
>
> --- Logical volume ---
> LV Name /dev/VolGroup00/LogVol01
> VG Name VolGroup00
> LV UUID zvxDsa-MZXn-akSA-DlzC-49IX-65Fo-HPBuyJ
> LV Write Access read/write
> LV Status available
> # open 1
> LV Size 1.94 GB
> Current LE 62
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:1
>
> -------------------------------------------------------------
>
> Judging from the "Found duplicate " messages produced by pvdisplay and
> lvdisplay, as well as the mount output, it seems that the root
> partition is being loaded from /dev/sdb2. What /dev/sda2 is doing
> right now is, I guess, completely sweet FA.
>
Hmm. It looks like it was set up as some sort of RAID1, hence the
duplicate PVs, but it is no longer using it.
> Can anyone point me to the way of finding out a file's physical
> location on disc so that I can verify this is the case? So, for
> example, I would like to check that my latest edit to ~/somefile.txt
> is in fact on /dev/sdb1 at location xyz and that can be verified by
> using dd to copy those bytes to a file in /tmp.
>
>
swap_offset program from the uswsusp package? I'd trust pvdisplay that
you're only using sdb2 though, and not sda2.
> Having started reading the docs related to creating a RAID device, it
> seems likely that the order of the listed devices is significant when
> the array is initialised. However, I haven't yet been able to confirm
> that were I to write
>
> mdadm -C /dev/md0 --level raid1 --raid-disks 2 /dev/sdb1 /dev/sda1
>
> that it would start to copy data from sdb1 to sda1 - or have I
> misunderstood the initialisation process?
>
> These questions may not seem very well framed, but some initial
> guidance while I'm still reading into the problem would be
> appreciated.
>
> Best wishes
>
> Michael
>
a) For safety, the trick would be to do it in stages. I've done this
locally to add RAID to my existing desktop.
Create a 1-device RAID1 (you can do this, though you have to force it)
on the *unused* drive. Copy your data into the RAID device (e.g. using
dd). Get it to the point where it's bootable on its own. Boot into it
and check it works (Use the BIOS boot drive selection. You do have
remote BIOS access, right?). Then grow the RAID device by adding the
other disk - overwriting the old contents.
That's one more copy than strictly needed, but it's worth it for peace
of mind.
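In mdadm terms that's roughly the following (device names are assumptions -
sda2 looks like the unused PV from your pvdisplay output, but verify that
before doing anything):
mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/sda2
(copy the data onto /dev/md0, make it bootable, reboot and check - and see
point (b) below about sizes before you dd anything)
mdadm --grow /dev/md0 --raid-devices=2
mdadm /dev/md0 --add /dev/sdb2
The --force is needed because mdadm won't normally create a 1-device RAID1,
and the last two commands are the "grow by adding the other disk" step that
overwrites the old contents.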
b) Also, it sounds like you're missing a piece of knowledge. (Sane)
software RAID requires a RAID superblock at the end of the device for
identification. That means you can't take a non-RAID disk and turn it
into a RAID disk while leaving the drive unchanged. You would have to
shrink the LVM physical volume. With the stepwise approach you may be
able to avoid shrinking in-place - which could be risky. E.g. shrink
the boot partition slightly instead before copying.
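To be concrete, an old 0.9 superblock only needs on the order of 64-128 KB
free at the very end of the partition. If you did end up shrinking in place,
the LVM side would look roughly like this (sizes invented, and this is
exactly the risky bit I'd try to avoid - note your PV is currently full, so
you'd have to free extents first):
resize2fs /dev/VolGroup00/LogVol00 187G
lvreduce -L 187G VolGroup00/LogVol00
pvresize --setphysicalvolumesize 189.5G /dev/sdb2
(and even then pvresize can refuse if the allocated extents don't leave the
tail of the PV free - another reason to prefer copying onto a fresh array)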
c) Thinking of the boot partition, you also need to make that RAID1 in
the same way. Unlike LVM, RAID1 is transparent to GRUB (because GRUB
doesn't write to the filesystem). But it's recommended you do the
individual partitions separately, and don't try to do whole-disk RAID.
Don't forget you've got /boot on sda1 - that's going to be a confusing
problem whatever you do.
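For what it's worth, CentOS 5 uses GRUB legacy, and the usual incantation for
putting it onto the second disk's MBR is something like the following (from
memory, so check the GRUB docs; this assumes the mirrored /boot ends up on
sdb1):
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
The 'device' line tells GRUB to treat sdb as the first disk, so the copy it
installs will still boot if sda disappears.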
Alan
* Re: Create software RAID from active partition
From: David Greaves @ 2008-09-01 13:30 UTC (permalink / raw)
To: Michael Guyver; +Cc: linux-raid
Michael Guyver wrote:
> Hi there,
>
> I've got a question about creating a RAID-1 array on a remote server -
> ie: if the operation fails, it's going to be very expensive. The
> server has two 200 GB drives and during a hurried re-install of CentOS
> 5.2 the creation of software RAID partitions was omitted. This means
> that the array would include the currently active partition on which
> the kernel is installed. So my first question is as to the feasibility
> of this operation, and its safety: any comments?
It looks to me like you have a system that is (almost) set up to use LVM
mirroring, not md (RAID) mirroring.
Are you sure you want to swap to using md mirroring? Or do you want to restore
the lvm mirror?
> # mount
> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> proc on /proc type proc (rw)
> sysfs on /sys type sysfs (rw)
> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> /dev/sda1 on /boot type ext3 (rw)
> tmpfs on /dev/shm type tmpfs (rw)
> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
So there are no issues mirroring the / fs.
Simply make /dev/mapper/VolGroup00-LogVol00 mirrored with
lvconvert -m or something.
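Off the top of my head (untested - and it assumes the stale LVM signature on
sda2 can be dealt with first, plus --corelog because you have no third disk
to hold a mirror log):
pvcreate /dev/sda2
vgextend VolGroup00 /dev/sda2
lvconvert -m1 --corelog VolGroup00/LogVol00
lvconvert -m1 --corelog VolGroup00/LogVol01
Check the lvconvert man page for your LVM version before trusting any of that.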
Bear in mind that /boot and possibly the MBR won't be replicated, so if a disk
fails you may not lose your rootfs, but you won't be booting either...
> Judging from the "Found duplicate " messages produced by pvdisplay and
> lvdisplay, as well as the mount output, it seems that the root
> partition is being loaded from /dev/sdb2. What /dev/sda2 is doing
> right now is, I guess, completely sweet FA.
>
> Can anyone point me to the way of finding out a file's physical
> location on disc so that I can verify this is the case? So, for
> example, I would like to check that my latest edit to ~/somefile.txt
> is in fact on /dev/sdb1 at location xyz and that can be verified by
> using dd to copy those bytes to a file in /tmp.
This is a red herring.
Unless you are into block-level data recovery, in which case go to the
smartmontools pages - they have a solution for this for ext3. Once you start to
introduce RAID/LVM/other filesystems you are in a world of confusion.
> Having started reading the docs related to creating a RAID device, it
> seems likely that the order of the listed devices is significant when
> the array is initialised. However, I haven't yet been able to confirm
> that were I to write
>
> mdadm -C /dev/md0 --level raid1 --raid-disks 2 /dev/sdb1 /dev/sda1
>
> that it would start to copy data from sdb1 to sda1 - or have I
> misunderstood the initialisation process?
Yes.
Initialising will likely destroy data unless you are careful about superblock
locations.
Even then you'd need to create the array degraded and grow it.
I'd create the array in degraded mode using the blank disk, then copy the
data and test boot (retaining the ability to boot the old system).
Then, once it worked, I'd wipe the old disk and add it in.
David
* Re: Create software RAID from active partition
From: Michael Guyver @ 2008-09-01 14:28 UTC (permalink / raw)
To: David Greaves; +Cc: linux-raid
2008/9/1 David Greaves <david@dgreaves.com>:
> Michael Guyver wrote:
>> Hi there,
>>
>> I've got a question about creating a RAID-1 array on a remote server -
>> ie: if the operation fails, it's going to be very expensive. The
>> server has two 200 GB drives and during a hurried re-install of CentOS
>> 5.2 the creation of software RAID partitions was omitted. This means
>> that the array would include the currently active partition on which
>> the kernel is installed. So my first question is as to the feasibility
>> of this operation, and its safety: any comments?
>
> It looks to me like you have a system that is (almost) setup to use lvm
> mirroring, not md (raid) mirroring.
>
> Are you sure you want to swap to using md mirroring? Or do you want to restore
> the lvm mirror?
>
Well, I'm happy to take the path of least resistance, as long as it still
leaves me with redundancy in case of a disk failure.
> So there are no issues mirroring the / fs.
> Simply make the /dev/mapper/VolGroup00-LogVol00 mirrored
> lvconvert -m or something.
I'll have a read of the lvconvert docs and
> Bear in mind that /boot and possibly the mbr won't be replicated so if a disk
> fails you may not lose your rootfs but you won't be booting either...
Thanks very much David, Steve and Alan for your helpful comments.
I'll do some reading about LVM mirroring and if it's as easy as it
appears may go down that route as it sounds slightly less hairy than
setting up software RAID on a server to which I have no physical
access. The costs of having someone at the data-centre restore the
server are prohibitive and I'm terrified of bricking the server (again
- long story ;).
Cheers
Michael
* Re: Create software RAID from active partition
From: Michal Soltys @ 2008-09-01 20:03 UTC (permalink / raw)
To: Michael Guyver; +Cc: linux-raid
What do:
mdadm -E /dev/sda1
mdadm -E /dev/sdb1
mdadm -E /dev/sda2
mdadm -E /dev/sdb2
show, if anything?
If they report that they once belonged to an array with the superblock
positioned at the end (so either 0.9 or 1.0), you could create a RAID1 array
with one component missing. In your case, something like:
umount /dev/mapper/VolGroup00-LogVol00
umount /dev/mapper/VolGroup00-LogVol01
vgchange -an VolGroup00
mdadm -C /dev/md/1 -l1 -n2 -e0 /dev/sdb2 missing
-e0 assuming it was v0.9 superblock
-e1.0 assuming it was v1.0
and then:
mdadm /dev/md/1 --add /dev/sda2
..wait for resync, then:
vgscan --mknodes
vgchange -ay
..should use /dev/md/1 - but doublecheck /etc/lvm/lvm.conf as well.
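For example, a filter along these lines in /etc/lvm/lvm.conf would stop LVM
from scanning the raw partitions underneath the array (syntax from memory, so
verify against the lvm.conf man page):
filter = [ "a|^/dev/md|", "r|^/dev/sd|" ]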
Of course it's potentially risky, so doublecheck everything and, if it's
somehow possible (considering the remote location), back up /dev/sdb2 first
(actually, you could use /dev/sda2 for that, as it's unused at the
moment anyway).
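A crude way to do that, since both partitions are exactly the same size
according to your fdisk output:
dd if=/dev/sdb2 of=/dev/sda2 bs=1M
..bearing in mind that backup only exists until the moment you --add
/dev/sda2 back into the array.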
The /dev/sd{a,b}1 case will be difficult without physical access if they were
part of another RAID1 array.
* Re: Create software RAID from active partition
From: Michael Guyver @ 2008-09-01 22:42 UTC (permalink / raw)
To: Michal Soltys; +Cc: linux-raid
2008/9/1 Michal Soltys <soltys@ziu.info>:
> What do:
>
> mdadm -E /dev/sda1
> mdadm -E /dev/sdb1
> mdadm -E /dev/sda2
> mdadm -E /dev/sdb2
>
> show, if anything ?
>
Hi Michal,
Each device reports the same as /dev/sdb1:
# mdadm -E /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
I think there might be a more subtle problem - the partitions /dev/sda2 and
/dev/sdb2 appear to have the same PV UUID. I'm not sure of the
significance at present and am going to have to do a bit more
investigation, but it is enough to prevent
pvcreate /dev/sda2
from working - unless I specify the -ff flag as an additional
parameter, which I am loath to try without understanding what might
happen. I was working towards creating an LVM mirror following some
earlier advice in this thread, and found this problem when using
system-config-lvm. The GUI shows /dev/sda2 as uninitialised and the
error occurs when trying to initialise it. Any further advice would be
gratefully received (although I appreciate that this is the
linux-raid list, not linux-lvm)!
Best wishes and thanks for your suggestions.
Michael