From: "Kristleifur Daðason" <kristleifur@gmail.com>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: raid0 not growable?
Date: Wed, 30 Dec 2009 17:55:38 +0000
Message-ID: <73e903670912300955o63e5adb2va33a610f27ab1edb@mail.gmail.com>
In-Reply-To: <73e903670912300725g20dc1746mf51db90a7e90e929@mail.gmail.com>
On Wed, Dec 30, 2009 at 3:25 PM, Kristleifur Daðason
<kristleifur@gmail.com> wrote:
> On Wed, Dec 23, 2009 at 11:54 PM, Neil Brown <neilb@suse.de> wrote:
>> On Wed, 23 Dec 2009 23:28:37 +0000
>> Kristleifur Daðason <kristleifur@gmail.com> wrote:
>>
>>> On Wed, Dec 23, 2009 at 10:45 PM, Neil Brown <neilb@suse.de> wrote:
>>> > On Wed, 23 Dec 2009 13:52:54 +0000
>>> > Kristleifur Daðason <kristleifur@gmail.com> wrote:
>>> >
>>> >> Hi,
>>> >>
>>> >> I'm running a raid0 array over a couple of raid6 arrays. I had planned
>>> >> on growing the arrays in time, and now the time has come.
>>> >>
>>> > If the two raid6 arrays are exactly the same size, then you
>>> > could grow the two raid6 arrays, create a new raid0 over them
>>> > with the same chunk size and all your data will still be there.
>>> > But if they are not the same size, that won't work.
>>>
>>> Current chunk size is 256K and metadata is 1.1. So it's just a "mdadm
>>> --create /dev/md_bigraid0 --level=0 --raid-devices=2 --metadata=1.1
>>> --chunk=256 /dev/md_raid6a /dev/md_raid6b", right?
>>
>> Yes... there is a possible complication though.
>> With 1.1 metadata mdadm reserves some space between the end of the metadata
>> and the start of the data for a bitmap - even for raid0 which cannot have
>> a bitmap. The amount of space reserved is affected by the size of the
>> devices.
>> So it is possible that the "data offset" will be different.
>> You should check the data offset before and after. If it is different, we
>> will have to hack mdadm to allow you to set the data offset manually.
>
> ... I believe I am guaranteed an identical bitmap size and hence an identical data offset.
>
> And in theory, this case is closed. Thank you, all.
>
Yep, it worked great. We re-created the raid0 over the grown raid6
arrays, and a "fsck.jfs -n" dry run showed the filesystem still
there, clean as a whistle. A quick "mount -o remount,resize /tank"
then grew the JFS filesystem in a second or two.
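
Concretely, the filesystem side was just this (a sketch; the device
name and mount point are from my setup):

  # read-only dry run: -n reports problems without repairing anything
  fsck.jfs -n /dev/md_bigraid0

  # grow JFS online to fill the enlarged device
  mount -o remount,resize /tank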

Very quick and painless. For my purposes, raid0 arrays are growable,
even if they are not officially so. The mdadm side is recapped below
for anyone who finds this thread in the archives.
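
It went roughly like this (a sketch, not a transcript; device names
are from my setup, and the before-and-after data-offset check is the
one Neil suggested above):

  # note the data offset in the raid0's 1.1 superblock on each member
  mdadm --examine /dev/md_raid6a | grep -i 'data offset'
  mdadm --examine /dev/md_raid6b | grep -i 'data offset'

  # grow both raid6 arrays by the same amount (method depends on the
  # setup), then stop the old raid0 and re-create it with identical
  # geometry: same metadata version, chunk size and device order
  mdadm --stop /dev/md_bigraid0
  mdadm --create /dev/md_bigraid0 --level=0 --raid-devices=2 \
      --metadata=1.1 --chunk=256 /dev/md_raid6a /dev/md_raid6b

  # examine again: the data offset must be unchanged before trusting
  # the data; if it moved, the superblocks need manual attention
  mdadm --examine /dev/md_raid6a | grep -i 'data offset'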
-- Kristleifur