From: Simon Matthews <simon.d.matthews@gmail.com>
To: NeilBrown <neilb@suse.de>
Cc: LinuxRaid <linux-raid@vger.kernel.org>
Subject: Re: Can't start array and Negative "Used Dev Size"
Date: Fri, 1 Jul 2011 23:19:31 -0700
Message-ID: <BANLkTi=Ek_Ro04U72o5aEo6NXmsKqmt+yw@mail.gmail.com>
In-Reply-To: <BANLkTinEcO0kh+WAU4xapWTqTqkB0ZR_KQ@mail.gmail.com>
Neil,
On Fri, Jul 1, 2011 at 9:41 PM, Simon Matthews
<simon.d.matthews@gmail.com> wrote:
> Neil,
>
> On Tue, Jun 28, 2011 at 10:18 PM, NeilBrown <neilb@suse.de> wrote:
>> On Tue, 28 Jun 2011 21:29:37 -0700 Simon Matthews
>> <simon.d.matthews@gmail.com> wrote:
>>
>>> Problem 1: "Used Dev Size"
>>> ====================
>>> Note: the system is a Gentoo box, so perhaps I have missed a kernel
>>> configuration option or USE flag needed to deal with large hard drives.
>>>
>>> A week or two ago, I resized a raid1 array using 2x3TB drives. I went
>>
>> Oops. That array is using 0.90 metadata, which can only handle devices up
>> to 2TB. The 'resize' code should catch that you are asking for the
>> impossible, but it seems it doesn't.
>>
>> You need to simply recreate the array as 1.0.
>> i.e.
>> mdadm -S /dev/md5
>> mdadm -C /dev/md5 --metadata 1.0 -l1 -n2 --assume-clean
>
> Before I do this (tomorrow), do I need to add the partitions to the command:
>
> mdadm -C /dev/md5 --metadata 1.0 -l1 -n2 --assume-clean /dev/sdd2 /dev/sdc2
I went ahead and did this. Everything looks good -- I think.
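For what it's worth, the checks I based that on were along these lines (a
rough sketch rather than an exact transcript of what I typed): mdstat shows
the array active and clean, --detail lists both members, and --examine now
reports superblock version 1.0 on each member.

# cat /proc/mdstat
# mdadm --detail /dev/md5
# mdadm --examine /dev/sdc2 /dev/sdd2 | egrep 'Version|Dev Size|Array Size'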
Why do the array sizes reported by --examine on my metadata 1.0 and metadata
1.2 arrays appear to be twice the actual size of the array? For example:
# mdadm --examine /dev/sde2
/dev/sde2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8f16e81f:3324004c:8d020c9b:a981e2ae
Name : server2:7 (local to host server2)
Creation Time : Wed Jun 29 10:39:32 2011
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2925064957 (1394.78 GiB 1497.63 GB) <<<
Array Size : 2925064684 (1394.78 GiB 1497.63 GB) <<<
How is 2925064684 equal to 1394.78 GiB? (See my arithmetic below the output.)
Used Dev Size : 2925064684 (1394.78 GiB 1497.63 GB) <<<
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : c78ff04c:98c9ea48:77db4b85:46ac6dc1
Update Time : Fri Jul 1 23:13:02 2011
Checksum : e446cf2c - correct
Events : 14
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
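Here is the arithmetic (and maybe I am simply misreading the units -- are
these figures in 512-byte sectors rather than in KiB as they were with 0.90
metadata?):

# echo $((2925064684 / 1024 / 1024))
2789
# echo $((2925064684 * 512 / 1024 / 1024 / 1024))
1394

Read as KiB, the number works out to roughly 2789 GiB, about twice the
1394.78 GiB that --examine prints; read as 512-byte sectors, it matches.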
Simon