linux-raid.vger.kernel.org archive mirror
From: "JaniD++" <djani22@dynamicweb.hu>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: Raid source 2TB limit question + system upgrade plan
Date: Mon, 2 Jan 2006 01:38:48 +0100	[thread overview]
Message-ID: <02de01c60f34$e61f12f0$a400a8c0@dcccs> (raw)
In-Reply-To: 17317.63490.418776.287118@cse.unsw.edu.au

> > A few months ago I got a patch from you to let the linear raid handle
> > devices larger than 2TB.
> > At that point I was not able to test it, because I did not have the
> > money to buy the upgrade.
> >
> > The question is this:
> >
> > If I switch from i386 to x86_64, will the patch be unnecessary, or
> > does the kernel still need it to handle the big drives?
>
> You need it on x86_64 as well, though if you are running 2.6.14 or
> later, the patch is already there.
>
> NeilBrown

Hello, Neil,

Now it is time to upgrade my system from 8TB to 13.2TB. :-)

(A quick reminder in case somebody forgot or missed something:
I use 4 disk nodes with NBD.
Each node is currently 2TB (11x200GB RAID5), and on the concentrator I use
one RAID0 to join the nodes into one big array.
Some months ago, when I tried to build this system, I hit one limit:
the linear array could not import the 8TB array because of its size.)
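The sizes above, as quick arithmetic (one disk per node is lost to RAID5
parity):

```shell
# current layout: 4 NBD nodes, each an 11x200GB RAID5, joined by RAID0
nodes=4; disks=11; gb=200
per_node=$(( (disks - 1) * gb ))   # RAID5 loses one disk to parity
total=$(( nodes * per_node ))
echo "per node: ${per_node} GB, total: ${total} GB"
# -> per node: 2000 GB (2TB), total: 8000 GB (8TB)
```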

I want to ask you, Neil, about the possible limitations...

My plan:

I will replace all of the 200GB HDDs with new 300GB ones, and add a new
(12th) drive to each node.
I also want to back up the existing data to the new space. :-) (or at least
back up some of the data, if I cannot back up all 8TB)

Step 1:
fdisk.
On the new disks (on 11 devices per node only), I will create one 200GB
partition (part 2) at the *END* of each device, matching the size of the
existing 200GB HDDs, and another partition (part 1, ~100GB) from the
beginning of the disk up to part 2.
(Here I found the first problem [let me call it #1]:
the partitions on the original 200GB devices start at the "beginning" of
the device, but that is really head 1, not head 0, because of the MBR.)
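To see why the shifted start matters, here is a small calculation sketch:
the 0.90 md superblock lives in the last 64KiB-aligned 64KiB block of the
device, so any change in the partition size moves the offset where the
kernel looks for it (the size below is illustrative, not my real geometry):

```shell
# 0.90 superblock offset = device size rounded down to 64 KiB, minus
# 64 KiB; the device size here is an example (exactly 200 GiB in KiB)
devsize_kb=$(( 200 * 1024 * 1024 ))
sb_offset_kb=$(( devsize_kb / 64 * 64 - 64 ))
echo "superblock at ${sb_offset_kb} KiB from the start of the device"
# a partition that starts one head later has a different size, so the
# superblock is searched for at a different offset
```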

Step 2:
I will back up the original nodes to this new RAID5 at the end of the
devices.
Old md0 = 2TB-64KB, new md0 = 2TB.
No problem here. :-)
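The backup itself can be a plain block copy. Here is the idea demonstrated
on throwaway files; on the real system, if= and of= would be the old and
the new md devices (the names depend on the setup):

```shell
# demo of the dd block copy on small temp files; on the real system the
# input/output would be the old and new md devices
dd if=/dev/urandom of=/tmp/old_node.img bs=1M count=4 2>/dev/null
dd if=/tmp/old_node.img of=/tmp/new_node.img bs=1M 2>/dev/null
cmp -s /tmp/old_node.img /tmp/new_node.img && echo "copy verified"
```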

Step 3:
Join the 4 new 2TB (4x11x200GB) devices on the concentrator as md1
(holding the existing 8TB of data), and the other 4 new devices
(4x12x100GB) as md0 (an empty array).
There is a problem here, caused by #1:
the new 4x2TB devices are bigger by +64KB, so the RAID0 superblock would be
"wrongly placed".
That is OK; I will use the mdadm --build /dev/md1 command to build the
array without a superblock. :-)
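The --build command I have in mind looks like this (shown as text only,
not executed, since it needs root and the real NBD devices; the device
names and the chunk size are placeholders and must match how the data was
originally laid out):

```shell
# build mode creates a superblock-less legacy array, so the +64KB size
# difference does not matter; device names and --chunk are placeholders
cmd="mdadm --build /dev/md1 --level=raid0 --chunk=64 --raid-devices=4 /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3"
echo "$cmd"
```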

Step 4:
Copy the most valuable data from md1 to md0.
md1 is bigger than md0, so not everything will fit, but that problem is
simply mine. :-D (I need to delete some things....)

Step 5a:
I will delete partition 2 from all HDDs and resize partition 1 to fill the
whole drive (300GB).
With this, each node becomes 3.3TB! (12x300GB, minus one disk for RAID5)

Step 5b:
Resize the md0 (RAID5) in the nodes.
If I resize the partitions, the superblock is "wrongly placed" again.
Is it safe to re-create the RAID5 array with the new size?
From my previous mail, titled "where is the spare :-)", I can see there is
some risk here too! :-/
If the newly created RAID5 array again starts using the 12th drive as a
spare, it will overwrite everything!!!
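A sketch of the re-create command I have in mind (shown as text only, not
executed; device names are placeholders, the device order and chunk size
must exactly match the original array, and --assume-clean, if my mdadm
version supports it, should prevent the initial resync/spare rebuild that
would overwrite the data):

```shell
# risky step: re-creating the array in place. --assume-clean (where
# supported) tells mdadm NOT to resync or rebuild onto the 12th disk.
# Device names are placeholders; order, chunk size and layout must
# match the original array exactly.
cmd="mdadm --create /dev/md0 --level=5 --raid-devices=12 --assume-clean /dev/hd[a-l]1"
echo "$cmd"
```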

Step 6:
I will re-create the md0 on the concentrator with the new node size.
(4x3.3TB)

Step 7:
Finally, resize (8TB -> 13.2TB) the XFS filesystem on md0 on the
concentrator.
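As a sanity check of the target size, plus the final grow command (the
mount point below is a placeholder; xfs_growfs runs on the mounted
filesystem):

```shell
# sanity-check the target size after the upgrade
nodes=4; disks=12; gb=300
total=$(( nodes * (disks - 1) * gb ))
echo "target: ${total} GB"          # 13200 GB = 13.2TB
# grow XFS online (mount point is a placeholder, shown not executed):
echo "xfs_growfs /mnt/storage"
```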

Done.


I hope you can say something more than just "good luck!" :-D

Thanks,

Janos
(Happy New Year! :-)



