From: Ross Boylan <ross@biostat.ucsf.edu>
To: Adam Goryachev <mailinglists@websitemanagers.com.au>
Cc: ross@biostat.ucsf.edu, linux-raid@vger.kernel.org
Subject: Re: Problems after extending partition
Date: Fri, 31 Aug 2012 21:21:07 -0700
Message-ID: <1346473267.11608.14.camel@corn.betterworld.us>
In-Reply-To: <5040178C.1070900@websitemanagers.com.au>
On Fri, 2012-08-31 at 11:46 +1000, Adam Goryachev wrote:
> On 31/08/12 10:48, Ross Boylan wrote:
> > /dev/md1 was RAID 1 built from hda3 and hdb3. After increasing the
> > partition size of hd[ab]3, md1 could not be assembled. I think I
> > understand why and have a solution, but I would appreciate it if
> > someone could check it. This is with 0.90 format on Debian Lenny with
> > the partitions in raid auto-detect mode. hda and hdb are virtual
> > disks inside a kvm VM; it would be time-consuming to rebuild it from
> > scratch.
> >
> > The final wrinkle is that when I brought the VM up and md1 was not
> > constructed, one of the partitions was used anyway, so they are now out
> > of sync.
> >
> > Analysis: Growing the partitions meant that the mdadm superblocks were
> > not at the expected offset from the end of the partitions, and so they
> > weren't recognized as part of the array.
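(To make the offset arithmetic explicit: a 0.90 superblock lives 64 KiB from
the end of the device, rounded down to a 64 KiB boundary, so growing the
partition moves the end and mdadm no longer finds the superblock where it
expects it.  A rough, untested check along those lines, assuming the rescue
VM sees the partition as /dev/sdb3, would be:

  # size of the partition in KiB
  SZ_KB=$(( $(blockdev --getsize64 /dev/sdb3) / 1024 ))
  # round down to a 64 KiB boundary, then step back one 64 KiB block
  SB_KB=$(( SZ_KB / 64 * 64 - 64 ))
  # dump the first word there; on x86 an intact 0.90 superblock shows a92b4efc
  dd if=/dev/sdb3 bs=1024 skip=$SB_KB count=1 2>/dev/null | od -An -tx4 -N4

On the grown partition the magic presumably only shows up at the offset
computed from the old, smaller partition size.)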
> >
> > Solution: (step 3 is the crucial one)
> > 1. Shut down the VM; call it the target VM.
> > 2. Mount the disks onto a rescue VM (running squeeze) as sdb and sdc.
> > 3. mdadm --create /dev/md1 --UUID=xxxxx --level=mirror
> > --raid-devices=2 /dev/sdb3 missing --spare-devices=1 /dev/sdc3.
> > UUID taken from the target VM.
> I would think something like these two might be easier and would save your step 5:
> 3. mdadm --create /dev/md1 --UUID=xxxxx --level=mirror --raid-devices=2
> /dev/sdb3 missing
> 4. mdadm --manage /dev/md1 --add /dev/sdc3
> 5. wait for it to sync
>
Thank you for the review and the suggestion. I am not able to get the
UUID to stick, probably because it is being overridden by the host name:
# mdadm --create /dev/md1 --uuid=6f05ff4e:b4d49c1f:7fa21d88:ad0c50a9 --metadata=0.90 --level=mirror --raid-devices=2 /dev/sdb3 missing
mdadm: array /dev/md1 started.
# mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 6f05ff4e:b4d49c1f:f1c4dd81:d9ca9b38 (local to host squeeze00)
  Creation Time : Fri Aug 31 20:59:50 2012
     Raid Level : raid1
  Used Dev Size : 8187712 (7.81 GiB 8.38 GB)
     Array Size : 8187712 (7.81 GiB 8.38 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Fri Aug 31 20:59:50 2012
          State : active
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 39956378 - correct
         Events : 1

      Number   Major   Minor   RaidDevice State
this     0       8       19        0      active sync   /dev/sdb3

   0     0       8       19        0      active sync   /dev/sdb3
   1     0       0        0        0      spare
Also, for the record, the option is --uuid, not --UUID.

This uuid-overriding behavior appears to contradict the man page (although,
technically, I gave no --homehost option): "Also, using --uuid= when
creating a v0.90 array will silently override any --homehost= setting."

Note that the hostname of the system from which I am manipulating the RAID
is not the same as that of the target system.
The target system for the array is running Debian lenny.
I am manipulating it from a Debian squeeze system (which apparently does
not default to 0.90 format).
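One thing I may try next (untested, and assuming --homehost is what gets
hashed into the second half of a 0.90 UUID) is to name the target host
explicitly at create time; "lenny00" below is just a stand-in for the target
VM's real hostname:

  mdadm --create /dev/md1 --metadata=0.90 --homehost=lenny00 \
      --uuid=6f05ff4e:b4d49c1f:7fa21d88:ad0c50a9 \
      --level=mirror --raid-devices=2 /dev/sdb3 missing

I do not know whether that combination behaves any differently from what is
shown above, given what the man page says about --uuid overriding --homehost.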
There is an additional future complication that the UUID seems to get
rewritten by homehost during at least some lenny->squeeze upgrades
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=570516). The main
point of this exercise is to test such an upgrade in a VM.
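Another possibility I have not tried is to accept whatever UUID --create
produces and rewrite it afterwards, since assemble mode has an --update=uuid
option; a rough sketch (untested, and I do not know how it interacts with the
homehost hashing) would be:

  mdadm --stop /dev/md1
  # reassemble degraded (--run) and stamp the wanted UUID into the superblock
  mdadm --assemble --run --update=uuid \
      --uuid=6f05ff4e:b4d49c1f:7fa21d88:ad0c50a9 /dev/md1 /dev/sdb3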
Any ideas?
Ross
> Another option might have been to create the RAID1 onto the older drive
> (sdc3) and then copy the content of the new drive into the array, and
> then add sdb3 into this new array. This might also allow you to use a
> different md version.
> > 4. wait for it to sync.
> > 5. maybe do some kind of command to say the raid no longer has a
> > spare. It might be
> > mdadm --grow /dev/md1 --spare-devices=0
> > 6. Shut the rescue VM and start the target VM.
> >
> > Does it matter if I call the device /dev/md1 in step 3? It is known
> > as that in the target VM.
> >
> Hope that helps,
>
> Regards,
> Adam
>