linux-raid.vger.kernel.org archive mirror
From: Thomas Fjellstrom <thomas@fjellstrom.ca>
To: NeilBrown <neilb@suse.de>, linux-raid@vger.kernel.org
Subject: Re: Backup file size when migrating from raid5 to raid6?
Date: Thu, 16 Aug 2012 17:28:13 -0600	[thread overview]
Message-ID: <201208161728.13814.thomas@fjellstrom.ca> (raw)
In-Reply-To: <20120507105458.6411b0cf@notabene.brown>

On Sun May 6, 2012, NeilBrown wrote:
> On Mon, 7 May 2012 00:32:35 +0000 Garðar Arnarsson <gardar@giraffi.net> wrote:
> 
> > That's an excellent idea, I was going to add another disk for extra space
> > right after migrating to raid6.
> > 
> > Just to be clear, I'll be running the normalize attribute just once to
> > straighten the array out right? Or will I have to do it for every extra
> > drive I add in the future?
> 
> Just once.
> 
> > 
> > And what are the N+1 you mention in --raid-devices=N+1
> 
> By "N+1" I just meant "1 more than the number of devices currently in the
> array".
> 
> If you have both new devices ready to go, you just do a single reshape
> operation that converts to RAID6 and adds more space.  This does not need a
> backup file and is probably the best approach.
> 
> If you currently have a 10-drive RAID5 and want a 12-drive RAID6, then
> 
>  mdadm --grow /dev/md0 --raid-devices=12 --level=6
> 
> is what you want.
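
For the archive, the single-step route Neil describes might be sketched end to end like this (the device names /dev/sdk and /dev/sdl are placeholders; substitute your actual disks):

```shell
# Add both new disks as spares first (hypothetical device names).
mdadm --add /dev/md0 /dev/sdk /dev/sdl

# Convert to RAID6 and grow from 10 to 12 devices in one reshape;
# no backup file is needed because new space is being added.
mdadm --grow /dev/md0 --raid-devices=12 --level=6

# Watch the reshape progress.
cat /proc/mdstat
```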

I apologize for resurrecting a long-dead thread, but I've been wondering: does 
mdadm perform the grow operation in this case in a single step, or does it 
internally run each change separately, doing a reshape for each one?

I currently have a 7x1TB-disk RAID5, have a couple more disks to add, and was 
planning on moving to RAID6. I'm hoping to minimize the time the array spends 
reshaping, because I'm a bit paranoid that my bad luck with hard drives will 
strike right then and there.

> NeilBrown
> 
> 
> > 
> > Thanks.
> > 
> > 
> > 2012/5/6 NeilBrown <neilb@suse.de>
> > 
> > > On Sun, 6 May 2012 10:17:52 +0000 Garðar Arnarsson <gardar@giraffi.net>
> > > wrote:
> > >
> > > > My raid5 array has gotten a bit big, it's containing total 10 drives
> > > > right now (I started out with 3 drives). So I am going to convert it
> > > > to raid6 before it gets any bigger.
> > > >
> > > > I am doing a test-run on a virtual machine with virtual drives to see
> > > > that everything works flawlessly.
> > > >
> > > > When I tried to convert the array to raid6 I got an error message about
> > > > a missing backup-file
> > > >
> > > > mdadm --grow /dev/md0 --raid-devices=5 --level=6
> > > >
> > > > mdadm level of /dev/md0 changed to raid6
> > > > mdadm: /dev/md0: Cannot grow - need backup-file
> > > > mdadm: aborting level change
> > > >
> > > > I added the backup file and was able to convert the array successfully
> > > > after that.
> > > >
> > > > My question is, how big is this backup file going to be? My real raid
> > > > array consists of 2tb drives, will the backup file be as big as one
> > > > drive in the array, or will it just be few megabytes or gigabytes?
> > > > I'm asking because I'm wondering if I need to buy an extra hdd for the
> > > > backup file or if the backup file can just be on my OS hdd that has
> > > > around 100gb free.
> > >
> > > The backup file is a few megabytes. Around 16MB I think.
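
If you do run a backup-file reshape, a sketch of the invocation (the path is just an example; the file must live on a device that is not part of the array being reshaped):

```shell
# The backup file is only ~16MB, so the OS disk is fine;
# it just must not be on the array itself.
mdadm --grow /dev/md0 --raid-devices=5 --level=6 \
      --backup-file=/root/md0-grow.bak
```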
> > >
> > > However if you are likely to add another device in the not too distant
> > > future
> > > you can save yourself a bit of time.
> > >
> > > If you
> > >
> > >  mdadm --grow /dev/md0 --level=6 --layout=preserve
> > >
> > > It will just make the new disk a 'Q-block' device, containing the extra
> > > RAID6 'parity' block for each stripe.  This doesn't require any reshape
> > > or any backup file and is a lot faster.  All it requires is a normal
> > > recovery operation.
> > >
> > > Then when you later add another device you can
> > >
> > >  mdadm --grow /dev/md0 --raid-devices=N+1 --layout=normalise
> > >
> > > This will convert from the Q-on-the-last-device layout to a more normal
> > > rotated-P-and-Q layout at the same time as adding extra space.
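
Taken together, the two-step route described above might be sketched as follows (/dev/sdX and /dev/sdY are placeholder device names, and N+1 stands for one more than the current device count, as Neil explains below):

```shell
# Step 1: RAID5 -> RAID6 with a dedicated Q disk.
# Only a normal recovery runs; no reshape, no backup file.
mdadm --add /dev/md0 /dev/sdX
mdadm --grow /dev/md0 --level=6 --layout=preserve

# Step 2, later: add another disk and convert the Q-on-last-device
# layout to the normal rotated-P-and-Q layout while growing.
mdadm --add /dev/md0 /dev/sdY
mdadm --grow /dev/md0 --raid-devices=N+1 --layout=normalise
```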
> > >
> > > NeilBrown
> > >
> > 
> > 
> > 
> 
> 


-- 
Thomas Fjellstrom
thomas@fjellstrom.ca

