* Growing array, duplicating data, shrinking array questions.
@ 2013-11-21 18:09 Wilson Jonathan
2013-11-30 16:07 ` Wilson Jonathan
0 siblings, 1 reply; 4+ messages in thread
From: Wilson Jonathan @ 2013-11-21 18:09 UTC (permalink / raw)
To: linux-raid
I think I know what I need to do to perform the operations I want, but
wanted to check here first.
I currently have 2 disks (500GB) used to hold OS/swap/(empty)home/some-data
partitions, and a six-drive RAID6 setup (1TB drives, each using a single
partition rather than the whole drive).
What I want to do is replace the 6 drives with 4 3TB WD REDs, and also
to move the OS and stuff onto the 4 3TB drives.
At the moment the 500GB drives are partitioned as follows.
GPT:
1MB (Bios boot)
210MB RAID1 (boot)
24GB RAID1 (root) (current live boot)
8GB *SWAP
54GB RAID1 (root2) (new install, will replace root)
11GB RAID1 (home)
XXGB RAID1 (data)
The swap is amalgamated into 16GB total swap.
(root) and (root2) are different versions of Debian; (root) is my
current "live" and (root2) is my "testing". By the end of this exercise
(root) will no longer exist, and (root2) will become my "live."
My idea is to fail and remove one of the RAID6 devices, then put the 3TB
in its place and partition it, using gptfdisk, as follows.
GPT:
1MB (Bios boot)
210MB RAID1 (boot)
84GB RAID1 (root2)
16GB RAID1 (*swap)
XXGB Space unused, to push the last partition to the end of the disk.
1TB RAID6 (raid6)
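Roughly, I'm expecting the fail/remove and partitioning to be something
like this (device names, md numbers, sizes and type codes are just
placeholders for whatever they really turn out to be):

  # drop one member of the existing RAID6 (sdX1 = old 1TB drive's partition)
  mdadm /dev/md5 --fail /dev/sdX1
  mdadm /dev/md5 --remove /dev/sdX1
  # partition the replacement 3TB drive (here sdc) with gptfdisk's sgdisk
  sgdisk -n 1:0:+1M   -t 1:ef02 /dev/sdc   # bios boot
  sgdisk -n 2:0:+210M -t 2:fd00 /dev/sdc   # raid1 (boot)
  sgdisk -n 3:0:+84G  -t 3:fd00 /dev/sdc   # raid1 (root2)
  sgdisk -n 4:0:+16G  -t 4:fd00 /dev/sdc   # raid1 (*swap)
  # last partition sits at the end of the disk and must be at least as
  # big as the old 1TB raid6 members; the gap before it is left unused
  sgdisk -n 5:-1T:0   -t 5:fd00 /dev/sdc   # raid6 (raid6)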
From what I have read I can grow (boot) and (root2) by increasing the
number of devices; even though (root2) is larger on the new disk, it will
be added and only the first 54GB will be synced. Basically, increase the
number of devices from 2 to 3.
I will also install/update the boot loader/MBR on the new drive.
By adding the raid6 partition back in, it will be re-synced as if it were
replacing the original failed drive.
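In mdadm terms, and sticking with placeholder names, I think those steps
boil down to something like:

  # grow the RAID1s from 2 to 3 devices
  mdadm /dev/md_boot --add /dev/sdc2
  mdadm --grow /dev/md_boot --raid-devices=3
  mdadm /dev/md_root2 --add /dev/sdc3
  mdadm --grow /dev/md_root2 --raid-devices=3
  # the raid6 partition just goes back in as a replacement and re-syncs
  mdadm /dev/md5 --add /dev/sdc5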
I then replicate this process for the other 3 disks.
eg (boot) (root2) no-devices=4
...=5
...=6
and obviously adding each new drive's RAID6 partition (raid6) in as a
replacement for the failed/removed 1TB RAID6 drive, and updating the
(bios boot)/MBR.
I will create a new RAID1 (*swap) once all 4 devices are in place.
I will then also create a RAID6 array (new6) on the 4 drives' unused-space
partitions.
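Something along these lines, I think (names are placeholders; the (new6)
partitions in the unused gap would be created with sgdisk first, e.g. as
partition 6 on each drive):

  mdadm --create /dev/md_swap --level=1 --raid-devices=4 /dev/sd[cdef]4
  mdadm --create /dev/md_new6 --level=6 --raid-devices=4 /dev/sd[cdef]6
  mkfs.ext4 /dev/md_new6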
Once this is done, and checked if all working ok...
I will then fail and remove both of the original 500GB drives from the
RAID1 (boot) (root) (root2) arrays, then grow the raids down to ..devices=4
I believe I will then need to grow the (root2) ..size=max to extend it
from 54GB to 84GB, check the file system, grow the file system, then
check it again.
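I'm assuming that looks roughly like this per array, using (root2) as the
example (device names are placeholders, and the filesystem steps are done
with the array unmounted):

  mdadm /dev/md_root2 --fail /dev/sda5 --remove /dev/sda5
  mdadm /dev/md_root2 --fail /dev/sdb5 --remove /dev/sdb5
  mdadm --grow /dev/md_root2 --raid-devices=4
  # then let the array use the full 84GB partitions and grow the fs
  mdadm --grow /dev/md_root2 --size=max
  e2fsck -f /dev/md_root2
  resize2fs /dev/md_root2
  e2fsck -f /dev/md_root2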
I will then copy all the data from (raid6) to (new6), run a check to see
that they are identical, and then bring down and delete (raid6), remove the
last 2 remaining 1TB drives, put them into USB caddies (from whence they
originally came; it was cheaper than buying the raw drives) and then back
up to them.
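For the copy/verify step I'm thinking of something like this (mount points
are examples, and rsync could equally be a tar pipe):

  rsync -aHAX /mnt/raid6/ /mnt/new6/
  diff -qr /mnt/raid6 /mnt/new6
  # when happy, stop the old array before pulling the drives
  umount /mnt/raid6
  mdadm --stop /dev/md5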
Next I will delete all 4 (raid6) partitions and change the size of the
(new6) partition on all 4 drives to include the now-deleted (raid6) space
(I'm assuming changing the partition end will not affect the data contained
within), then grow the array ..size=max, check the file system, grow the
file system, final check...
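Per drive, I expect that to look roughly like this (partition numbers and
the start sector are placeholders; the (new6) array would be stopped, or
the partition table re-read, before md can see the bigger partitions):

  sgdisk -d 5 /dev/sdc                                  # old (raid6)
  sgdisk -d 6 -n 6:<same start as before>:0 -t 6:fd00 /dev/sdc   # extend (new6)
  partprobe /dev/sdc
  # then grow the array into the bigger partitions and resize the fs
  mdadm --grow /dev/md_new6 --size=max
  e2fsck -f /dev/md_new6
  resize2fs /dev/md_new6
  e2fsck -f /dev/md_new6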
Finally I will update the mdadm.conf in both the (root) and (root2) trees
to take note of the changes, create the swap on (*swap), and also change
the (root) and (root2) fstab to take note of the file system changes and
the new swap partition...
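Concretely, in each root I'm expecting something like this (the fstab line
is only an example, with a placeholder UUID):

  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # then prune stale entries
  mkswap /dev/md_swap
  blkid /dev/md_swap                               # UUID for fstab
  # fstab:  UUID=<swap-uuid>  none  swap  sw  0  0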
...to keep things clean, I will change the files slightly differently for
each "/" so that (root) will no longer see (root2) and vice versa; once all
is complete (root) will no longer be used and will be removed...
..then check that they both boot ok, finally delete the now unused (bios
boot) (boot) (root2) from the 500GB drives, do some housekeeping, and if
all has gone to plan then I can finally breathe again.
Is there anything in the above that looks wrong, or is a disaster in the
making because I've misunderstood something?
Actually, thinking about it... will I need to update the initramfs at
any point, for example when I delete the (root) from within (root2) as
my final stage after all housekeeping, and also once I have updated
everything in (root) such as fstab and mdadm.conf?
* Re: Growing array, duplicating data, shrinking array questions.
2013-11-21 18:09 Growing array, duplicating data, shrinking array questions Wilson Jonathan
@ 2013-11-30 16:07 ` Wilson Jonathan
2013-12-03 18:15 ` Wilson Jonathan
0 siblings, 1 reply; 4+ messages in thread
From: Wilson Jonathan @ 2013-11-30 16:07 UTC (permalink / raw)
To: linux-raid
On Thu, 2013-11-21 at 18:09 +0000, Wilson Jonathan wrote:
> I think I know what I need to do to perform the operations I want, but
> wanted to check here first.
>
> I currently have 2 disks used to hold OS/swap/(empty)home/somedata
> partitions (500G) and a six drive raid 6 (1TB, single partition not
> whole drive) setup.
>
> What I want to do is replace the 6 drives with 4 3TB WD REDs, and also
> to move the OS and stuff onto the 4 3TB drives.
>
> At the moment the 500GB drives are partitioned as following.
>
> GPT:
> 1MB (Bios boot)
> 210MB RAID1 (boot)
> 24GB RAID1 (root) (current live boot)
> 8GB *SWAP
> 54GB RAID1 (root2) (new install, will replace root)
> 11GB RAID1 (home)
> XXGB RAID1 (data)
> The swap is amalgamated into 16GB total swap.
>
> (root) and (root2) are different versions of Debian; (root) is my
> current "live" and (root2) is my "testing". By the end of this exercise
> (root) will no longer exist, and (root2) will become my "live."
>
> My idea is to fail and remove one of the RAID6 devices, then put the 3TB
> in its place and format it, using gptfdisk, as the following.
>
> GPT:
> 1MB (Bios boot)
> 210MB RAID1 (boot)
> 84GB RAID1 (root2)
> 16GB RAID1 (*swap)
> XXGB Space unused, to push the last partition to the end of the disk.
> 1TB RAID6 (raid6)
>
> From what I have read I can grow the (boot) (root2) by increasing the
> number of devices, even though (root2) is larger on the new disk it will
> be added and only the first 54GB will be synced; basically increase the
> number of devices from 2 to 3.
>
> I will also add/update the boot loader/mbr to the new drive.
>
> By adding in the raid6 partition it will be re-synced as if it were the
> original failed drive being replaced.
>
> I then replicate this process for the other 3 disks.
> eg (boot) (root2) no-devices=4
> ...=5
> ...=6
>
> and obviously adding in a replacement for the failed/removed RAID6 1TB
> drive with the RAID6 partition (raid6) and updating the (bios boot)/mbr
>
> I will create a new RAID1 (*swap) when all the 4 devices are in place.
>
> Also I will then create a RAID6 partition on the 4 drives empty space
> unused partition (new6).
>
> Once this is done, and checked if all working ok...
>
> I will then fail and remove both the original, 500GB, RAID1 (boot)
> (root) (root2) drives, then grow the raids down to ..devices=4
>
> I believe I will then need to grow the (root2) ..size=max to extend it
> from 54GB to 84GB, check the file system, grow the file system, then
> check it again.
>
> I will then copy all the data from the (raid6) to (new6) run a check to
> see if they are identical, and then bring down and delete (raid6) remove
> the last 2 remaining 1TB drives, put them into USB caddies (from whence
> they originally came, was cheaper than buying the raw drives) and then
> backup to them.
>
> Finally I will delete all the 4 (raid6) partitions, I will also change
> the size of the (new6) partition to include the now deleted (raid6)
> space. (I'm assuming changing the partition end will not affect the data
> contained within) for all 4 drives, then grow the partition ..size=max,
> check the file system, grow the file system, final check...
>
> Finally I will update the mdadm.conf in both the (root) and (root2)
> directory to take note of changes.. create the swap on the (*swap) and
> also change the (root) and (root2) fstab to take note of the file system
> changes and the new swap partition...
>
> ...to keep things clean, I will change the files slightly differently
> for both "/" so that (root) will no longer see (root2) and also the
> reverse, once all is complete (root) will no longer be used and will be
> removed...
>
> ..then check that they both boot ok, finally delete the now unused (bios
> boot) (boot) (root2) from the 500GB drives, do some housekeeping and if
> all has gone to plan then I can finally breathe again.
>
> Is there anything I've got in the above that looks wrong, or is a
> disaster in the making because I've misunderstood something?
>
> Actually, thinking about it... will I need to update the initramfs at
> any point, for example when I delete the (root) from within (root2) as
> my final stage after all housekeeping, and also once I have updated
> everything in (root) such as fstab and mdadm.conf?
I have a couple of other questions/observations to add to the original
ones.
It seems as if I will need to update the initramfs due to changes to
fstab and mdadm.conf; this is not a problem :-)
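On Debian I'm assuming that just means, in each root:

  update-initramfs -u -k all
  update-grub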
Now for a huge problem...
My system is a "BIOS" not a "UEFI" system, and as such the BIOS does not
report the disk size correctly; it says it is about 700GB (can't be exact
without rebooting)... however, to my surprise Linux sees the whole disk as
3TB. I partitioned it using gptfdisk, then created a number of ext4 file
systems, filled them up with real files, ran fsck tests, zeroed out the
highest partition, re-fsck'd the lower partitions, and all seemed ok.
___question 1___
(It should be noted that, because the BIOS misreported the size while
Linux saw the whole disk, I was worried that perhaps some overlapping
mapping might have been going on, say the first partition, low on the
disk, being somehow mapped into the last (700GB) of the disk...
I'm guessing, and it's a big guess, that once the BIOS has turned over
control to Linux, Linux no longer uses the BIOS (or its hooks) to access
the HD and instead goes to it directly; would this be a correct "guess"?)
Now here is where I know/think I have a big problem with a 3TB drive in
a "BIOS" system and need confirmation before going any further...
___question 2___
I'm guessing that as the BIOS can only see the tail end of the hard
drive, it would not be able to load the data from the "MBR" (still there
despite being a GPT-formatted disk) and proceed to load the first-stage
boot loader, because it will think the "MBR" is at the start of the
"700GB", which in reality is 2.2TB into the disk. Would this be a
correct assumption?
Actually I could probably test this by installing grub
(grub-install /dev/sdX), then going into the BIOS and telling it to boot
from that disk while the original boot drives (containing
biosboot /boot /root) were still plugged in; if it failed then I know it's
not going to work.
___question 3___
If it does not work, but Linux can see and use the drive with no
problems, then is it possible to install the first (possibly second; I'm
guessing that's what gets put in the "biosboot" partition) stage grub2
boot loaders to a USB drive, then have this load the rest of the system's
"/boot" partition from the new 3TB drives? If that's not possible, then
include "/boot" on the USB drive; as it's only 200MB it's not a hardship,
and if I "raid" it to the existing partitions then any changes will be
replicated, and should the USB fail I can just boot a system rescue disk,
make a new USB boot, copy the "/boot" from the hard drive, and away we go.
___question 4___
I guess instead of using a USB, I could use the DVD drive and create a
first/second-stage loader on it that will then chain into the HD to load
the Linux kernel... is this possible?
I guess my last question is related to q3/q4, and that is... if it's
possible, then is there a simple set of commands required? My initial
thought is "grub-install" to the USB stick, as it will know to load from
"the first drive", and as I will replicate (mirror) "/boot" (as well as
"/") on the first 4 drives... if one fails then the second will become
the first, so it will be found by grub, and so on...
* Re: Growing array, duplicating data, shrinking array questions.
2013-11-30 16:07 ` Wilson Jonathan
@ 2013-12-03 18:15 ` Wilson Jonathan
2013-12-03 22:36 ` Adam Goryachev
0 siblings, 1 reply; 4+ messages in thread
From: Wilson Jonathan @ 2013-12-03 18:15 UTC (permalink / raw)
To: linux-raid
>
> Now for a huge problem...
>
> My system is a "bios" not a "UEFI" and as such it, within the bios, does
> not report the disk sizes correctly, says is about 700GB (can't be exact
> without rebooting)... however to my surprise linux sees the whole disk
> as 3TB, I partitioned it using gptfdisk, then created a number of ext4
> file systems, filled them up with real files, ran fsck tests, zeroed out
> the highest partition, re-fsck'd the lower partitions and all seemed ok.
Did some further checks by tar>tar'ing across from my raid partition to
a single-partition ext4 on the 3TB drive, then running diff, then updating
a few files (editing different files manually on both the 3TB drive and
the 4TB raid) and re-running diff, which reported the changes correctly.
I'm now happy that Linux sees the 3TB drive correctly and is not
mangling data silently.
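For reference, the check was along these lines (mount points are examples):

  tar -C /mnt/raid6 -cf - . | tar -C /mnt/3tb -xpf -
  diff -qr /mnt/raid6 /mnt/3tb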
>
> ___question 1___
> (it should be noted that I was concerned that if the bios miss-reported
> the size and linux saw the whole disk I was worried that perhaps some
> overlapping mapping might have been going on, say the first partition,
> low on the disk, was somehow mapped into the last (700GB) of the disk..
> I'm guessing, and its a big guess, that once the bios has turned over
> control to linux then linux no longer uses the bios (or its hooks) to
> access the HD and instead goes to it directly, would this be a correct
> "guess"?)
It turns out that Linux does indeed ignore the BIOS; it still uses a few
aspects of it, such as power/suspend/sleep, but on the whole it uses
(from what I've read, in my limited understanding) protected-mode drivers
that hit the hardware in a more direct manner... in my case the ICH10
(no R) and the drives themselves. Again from what I've read, ICH9 (not
sure if a revision is involved) and above, i.e. ICH10, are quite happy to
see and use disks greater than 2.2TB even if the BIOS misreports the
size.
>
> Now here is where I know/think I have a big problem with a 3tb drive in
> a "bios" system and need confirmation before going any further...
>
> ___question 2___
> I'm guessing that as the bios can only see the tail end of the hard
> drive it would not be able to load the data from the "mbr" (still there
> despite being a GPT formatted disk) and proceed to load the first stage
> boot loader because it will think the "mbr" should be at the start of
> the "700GB" which in reality is 2.2TB into the disk, would this be a
> correct assumption?
>
> Actually I could probably test this by installing
> (grub-install /dev/sdX) then going into the bios and telling it to boot
> from that disk while the original boot drives (containing
> biosboot /boot /root) were still plugged in, if it failed then I know it's
> not going to work.
I've concluded that I'm going to have to create some form of workaround,
as the BIOS cannot load the MBR from the 3TB drive, then the grub2 stuff
in the BIOS boot partition, which would in turn load the additional grub2
stuff from the /boot partition.
>
> ___question 3___
> If it does not work, but linux can see and use the drive with no
> problems, then is it possible to install the first (possibly second, I'm
> guessing that's what gets put in the "biosboot" partition) stage grub2
> boot loaders to a USB drive, then have this load the rest of the system
> "/boot" partition from the new 3tb drives or if not possible then
> include the "/boot" on the usb drive as its only 200M its not a
> hardship, if I "raid" it to the existing partitions then any changes
> will be replicated, and should the usb fail I can just boot a system
> rescue disk make a new usb boot, copy the "/boot" from the hard drive
> and away we go.
>
> ___question 4___
> I guess instead of using a USB, I could use the DVD drive, create a
> first/second stage loader onto it that will then chain into the hd to
> load the linux kernel.. is this possible?
>
> I guess my last question is related to q3/q4 and that is... if its
> possible then is there a simple set of commands required, my initial
> thoughts are "grub-install" to the usb stick, as it will know to load
> from "the first drive" and as I will replicate, mirror, "/boot" (as well
> as "/") on the first 4 drives... if it fails then the second one will
> become the first so will be found by grub and so on...
>
From what I can tell, the simplest way would be to make sure that grub2
is using strictly UUIDs and not "disk notation" (on my PC, booting from a
USB stick, or as I recall just leaving one in during boot, seems to make
the USB "sda", pushing the real disks higher up the chain, b-c-etc., but
I will need to double check that), then make the "/boot" partition on the
USB a member of the raid "/boot" partition held on the hard drives
(currently on a,b), then perform the usual grub-install to the USB stick,
which will update the MBR and put the required data into the bios-boot
partition of the USB stick (in the same way I had to grub-install
to /dev/sda and /dev/sdb)...
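In other words, something like this (with sdg standing in for whatever the
USB stick turns out to be, and assuming it carries its own bios-boot
partition plus a partition for the /boot mirror):

  mdadm /dev/md_boot --add /dev/sdg2
  mdadm --grow /dev/md_boot --raid-devices=3
  grub-install /dev/sdg
  update-grub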
This will mean any changes to "/boot" will be replicated across all
potential boot devices (even if the BIOS can't load from the 3TB
drive(s)), and as the BIOS can boot from the USB and grub is using UUID
notation, it should in theory load and run the kernels from the hard
drives no matter how they are "numbered."
I guess the only "gotcha" I can see with this is if I pull the USB stick,
re-insert it, and forget to re-add its "/boot" back into the RAID1 so it
becomes stale; I wonder what would happen then, would it use the /boot on
the USB, or the /boot on the disk drives... although I guess because grub
"insmod"s raid & mdraid1x it actually performs an mdraid assemble/start
and, though I don't use it, because it can start raid6rec and lvm it does
actually start it, which would mean it would always get the most
up-to-date raid status no matter how stale the "usb /boot" became. Which
then leads me to think I don't even need /boot on the USB, as the only
important stuff (modules/etc.) is put/compiled into the biosboot
partition... wow, this stuff can do your head in :-/
* Re: Growing array, duplicating data, shrinking array questions.
2013-12-03 18:15 ` Wilson Jonathan
@ 2013-12-03 22:36 ` Adam Goryachev
0 siblings, 0 replies; 4+ messages in thread
From: Adam Goryachev @ 2013-12-03 22:36 UTC (permalink / raw)
To: Wilson Jonathan, linux-raid
On 04/12/13 05:15, Wilson Jonathan wrote:
>> Now here is where I know/think I have a big problem with a 3tb drive in
>> a "bios" system and need confirmation before going any further...
>>
>> ___question 2___
>> I'm guessing that as the bios can only see the tail end of the hard
>> drive it would not be able to load the data from the "mbr" (still there
>> despite being a GPT formatted disk) and proceed to load the first stage
>> boot loader because it will think the "mbr" should be at the start of
>> the "700GB" which in reality is 2.2TB into the disk, would this be a
>> correct assumption?
>>
>> Actually I could probably test this by installing
>> (grub-install /dev/sdX) then going into the bios and telling it to boot
>> from that disk while the original boot drives (containing
>> biosboot /boot /root) were still plugged in, if it failed then I know it's
>> not going to work.
> I've concluded that I'm going to have to create some form of work around
> as the bios cannot load the mbr from the 3tb drive, and then the grub2
> stuff in the bios boot partition which can then load additional grub2
> stuff from the /boot partition.
I don't know why you seem to have an issue here... IME (limited), the
only thing I had to do was create a small 1MB partition at the beginning
of the disk; from memory the OS installer did this automatically for me.
Then grub can get installed to the drive and it all works magically. I'm
really not sure of all the technical details, and I can't actually find
the machine I did this on right now, but I do remember that is how I
made it work...
BTW, I'd suggest that if grub can "see" the partitions/etc properly,
then just install grub to the USB, and leave the kernel/initrd/etc on
the raid on the hard drives. Keep an image of the USB (or at least the
first few MB that grub is using), so if the USB dies you can boot from a
live CD, mount the hard drives, and use dd to write the grub image to a
new USB.
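e.g. something like (device names are just examples):

  dd if=/dev/sdg of=usb-grub.img bs=1M count=4   # save the grub area
  dd if=usb-grub.img of=/dev/sdX bs=1M           # restore to a new stick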
Personally, I'd try to get grub working from the HDD; I don't like to
find out that I have a problem *after* a reboot, either for an upgrade or
as part of recovering from another problem.
Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au