linux-raid.vger.kernel.org archive mirror
From: "JaniD++" <djani22@dynamicweb.hu>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: Raid 4 resize, raid0 limit question
Date: Sat, 4 Feb 2006 13:33:09 +0100	[thread overview]
Message-ID: <00b101c62987$29376da0$9d00a8c0@dcccs> (raw)
In-Reply-To: <04dd01c62858$3c068090$9d00a8c0@dcccs>


--cut--

> > > I plan to resize (grow) one raid4 array.
> > >
> > > 1. stop the array.
> > > 2. resize the partition on all disks to fit the maximum size.
> >
> > The approach is currently not supported.  It would need a change to
> > mdadm to find the old superblock and relocate it to the new end of the
> > partition.
> >
> > The only currently 'supported' way is to remove devices one at a time,
> > resize them, and add them back in as new devices, waiting for the
> > resync.
>
> Good news! :-)
> This takes about 1 week for me... :-(
> Maybe I should just recreate the array instead....
>
>
> >
> > NeilBrown
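
The one-at-a-time procedure described above would look roughly like this. It is a dry sketch that only prints the commands (the device names /dev/md0 and /dev/sdb1 are placeholders), since the real thing acts on live disks:

```shell
# Dry sketch of the remove/resize/re-add cycle; run() only prints
# each command. Replace the echo with "$@" to actually execute.
run() { echo "+ $*"; }

run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...repartition /dev/sdb1 to the new, larger size here...
run mdadm /dev/md0 --add /dev/sdb1
run cat /proc/mdstat   # wait for the resync before the next disk
```

Each pass costs a full resync, which is why this takes about a week per array of this size.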

Neil!

What do you think about adding two files to /proc or /sys, as two boundaries
for the raid sync?
The default values would be 0 and the sector count (or size in KiB) of the
array.

The user could set these before the sync starts, or while it is running, and
when the sync is done, the defaults would be restored automatically.
The sync would then cover only the range between the two values.

I think this would be easy to write, not too dangerous, and sometimes (or
often) very practical.

This would often help me, including this time, when growing the raid4 array
from 2TB to 3.6TB.
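
A minimal sketch of how the proposed interface could look from user space. The file names sync_min/sync_max are my assumption, and a temp directory stands in for /sys/block/md0/md so nothing touches a real array:

```shell
# Simulated sketch of the proposed two-boundary sync interface;
# the file names and values are assumptions, and a temp directory
# replaces the real /sys/block/md0/md.
SYSFS=$(mktemp -d)
echo 0          > "$SYSFS/sync_min"   # default lower bound (sectors)
echo 7814037168 > "$SYSFS/sync_max"   # default: whole array, in sectors

# Before a grow, restrict the resync to the newly added region only:
echo 4294967296 > "$SYSFS/sync_min"   # first sector of the new area
cat "$SYSFS/sync_min" "$SYSFS/sync_max"
```

When the sync finishes, the kernel would reset both files to their defaults automatically, as proposed above.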


> >
> >
> > >
> > > After this, is restarting (assembling) the array possible?
> > > I mean, how can the kernel find the superblock, which would sit in the
> > > middle of the new partitions?
> > > Do I need to recreate the array instead of using the -G option?
> > > Can I force the raid to resync only the new area?
> > >
> > > Does raid0 in 2.6.16-rc1 support 4x 3.6TB source devices? :-)
> >
> > ... maybe?
> > I think it does, but I cannot promise anything.
>
> Anyway, I will test it on the weekend, and I don't need to grow the FS on
> it right away.
>
> How can I safely test that it (and NBD >2TB) works well, without data loss?

Does anybody know a good tool to test the 13.4TB raid0 array beyond the 8TB
of live, valuable fs, without data loss?

I need to test the raid0 and NBD before resizing the FS to fit the array.

The only thing I can think of is dd with the skip=NN option, but at this
time I don't trust dd enough. :-)
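
For the dd approach, here is a hedged round-trip check, simulated on a small temp file. On the real array, DEV would be /dev/md0 and OFFSET an MiB offset safely past the live 8TB filesystem (both values here are assumptions for the simulation):

```shell
# Non-destructive dd round-trip check, simulated on a temp file
# standing in for the real array device.
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=1M count=16 2>/dev/null   # stand-in device
OFFSET=8                                               # first "safe" MiB

# Write a known 4 MiB pattern past the offset, read it back, compare.
dd if=/dev/urandom of=pattern.bin bs=1M count=4 2>/dev/null
dd if=pattern.bin of="$DEV" bs=1M seek="$OFFSET" conv=notrunc 2>/dev/null
dd if="$DEV" of=readback.bin bs=1M skip="$OFFSET" count=4 2>/dev/null
cmp -s pattern.bin readback.bin && echo "region OK"
```

conv=notrunc matters: without it, dd would truncate the target at the write, which on a real block device is harmless but on this file simulation would destroy the test. seek= positions the write, skip= positions the read-back.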

Thanks,
Janos


>
> Thanks,
> Janos
>
> >
> > NeilBrown
> >
> >
> > >
> > > Thanks,
> > > Janos
> > >
> > > -
> > > To unsubscribe from this list: send the line "unsubscribe linux-raid"
in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


Thread overview: 4 messages
2006-02-03  0:00 Raid 4 resize, raid0 limit question JaniD++
2006-02-03  0:12 ` Neil Brown
2006-02-03  0:24   ` JaniD++
2006-02-04 12:33     ` JaniD++ [this message]
